Purpura, Giulia; Cioni, Giovanni; Tinelli, Francesca
2018-07-01
Object recognition is a long and complex adaptive process, and its full maturation requires the combination of many different sensory experiences as well as the cognitive ability to manipulate previous experiences in order to develop new percepts and subsequently to learn from the environment. It is well recognized that the transfer of visual and haptic information facilitates object recognition in adults, but less is known about the development of this ability. In this study, we explored the developmental course of object recognition capacity using unimodal visual information, unimodal haptic information, and visuo-haptic information transfer in children from 4 years to 10 years and 11 months of age. Participants were tested through a clinical protocol involving visual exploration of black-and-white photographs of common objects, haptic exploration of real objects, and visuo-haptic transfer of these two types of information. Results show an age-dependent development of object recognition abilities for the visual, haptic, and visuo-haptic modalities. A significant effect of time on the development of unimodal and crossmodal recognition skills was found. Moreover, our data suggest that multisensory processes for common object recognition are active at 4 years of age. They facilitate recognition of common objects and, although not fully mature, contribute significantly to adaptive behavior from the first years of life. The study of the typical development of visuo-haptic processes in childhood is a starting point for future studies of object recognition in impaired populations.
Aging and solid shape recognition: Vision and haptics.
Norman, J Farley; Cheeseman, Jacob R; Adkins, Olivia C; Cox, Andrea G; Rogers, Connor E; Dowell, Catherine J; Baxter, Michael W; Norman, Hideko F; Reyes, Cecia M
2015-10-01
The ability of 114 younger and older adults to recognize naturally-shaped objects was evaluated in three experiments. The participants viewed or haptically explored six randomly-chosen bell peppers (Capsicum annuum) in a study session and were later required to judge whether each of twelve bell peppers was "old" (previously presented during the study session) or "new" (not presented during the study session). When recognition memory was tested immediately after study, the younger adults' (Experiment 1) performance for vision and haptics was identical when the individual study objects were presented once. Vision became superior to haptics, however, when the individual study objects were presented multiple times. When 10- and 20-min delays (Experiment 2) were inserted in between study and test sessions, no significant differences occurred between vision and haptics: recognition performance in both modalities was comparable. When the recognition performance of older adults was evaluated (Experiment 3), a negative effect of age was found for visual shape recognition (younger adults' overall recognition performance was 60% higher). There was no age effect, however, for haptic shape recognition. The results of the present experiments indicate that the visual recognition of natural object shape is different from haptic recognition in multiple ways: visual shape recognition can be superior to that of haptics and is affected by aging, while haptic shape recognition is less accurate and unaffected by aging. Copyright © 2015 Elsevier Ltd. All rights reserved.
Preserved Haptic Shape Processing after Bilateral LOC Lesions.
Snow, Jacqueline C; Goodale, Melvyn A; Culham, Jody C
2015-10-07
The visual and haptic perceptual systems are understood to share a common neural representation of object shape. A region thought to be critical for recognizing visual and haptic shape information is the lateral occipital complex (LOC). We investigated whether LOC is essential for haptic shape recognition in humans by studying behavioral responses and brain activation for haptically explored objects in a patient (M.C.) with bilateral lesions of the occipitotemporal cortex, including LOC. Despite severe deficits in recognizing objects using vision, M.C. was able to accurately recognize objects via touch. M.C.'s psychophysical response profile to haptically explored shapes was also indistinguishable from controls. Using fMRI, M.C. showed no object-selective visual or haptic responses in LOC, but her pattern of haptic activation in other brain regions was remarkably similar to healthy controls. Although LOC is routinely active during visual and haptic shape recognition tasks, it is not essential for haptic recognition of object shape. The lateral occipital complex (LOC) is a brain region regarded as critical for recognizing object shape, both in vision and in touch. However, causal evidence linking LOC with haptic shape processing is lacking. We studied recognition performance, psychophysical sensitivity, and brain response to touched objects in a patient (M.C.) with extensive lesions involving LOC bilaterally. Despite being severely impaired in visual shape recognition, M.C. was able to identify objects via touch, and she showed normal sensitivity to a haptic shape illusion. M.C.'s brain response to touched objects in areas of undamaged cortex was also very similar to that observed in neurologically healthy controls. These results demonstrate that LOC is not necessary for recognizing objects via touch. Copyright © 2015 the authors 0270-6474/15/3513745-16$15.00/0.
1988-04-30
Keywords: haptics, hand, touch, vision, robot, object recognition, categorization. …established that the haptic system has remarkable capabilities for object recognition. We define haptics as purposive touch. The basic tactual system…gathered ratings of the importance of dimensions for categorizing common objects by touch. Texture and hardness ratings strongly co-vary, which is…
Size-Sensitive Perceptual Representations Underlie Visual and Haptic Object Recognition
Craddock, Matt; Lawson, Rebecca
2009-01-01
A variety of similarities between visual and haptic object recognition suggests that the two modalities may share common representations. However, it is unclear whether such common representations preserve low-level perceptual features or whether transfer between vision and haptics is mediated by high-level, abstract representations. Two experiments used a sequential shape-matching task to examine the effects of size changes on unimodal and crossmodal visual and haptic object recognition. Participants felt or saw 3D plastic models of familiar objects. The two objects presented on a trial were either the same size or different sizes and were the same shape or different but similar shapes. Participants were told to ignore size changes and to match on shape alone. In Experiment 1, size changes on same-shape trials impaired performance similarly for both visual-to-visual and haptic-to-haptic shape matching. In Experiment 2, size changes impaired performance on both visual-to-haptic and haptic-to-visual shape matching and there was no interaction between the cost of size changes and direction of transfer. Together the unimodal and crossmodal matching results suggest that the same, size-specific perceptual representations underlie both visual and haptic object recognition, and indicate that crossmodal memory for objects must be at least partly based on common perceptual representations. PMID:19956685
Kitada, Ryo; Johnsrude, Ingrid S; Kochiyama, Takanori; Lederman, Susan J
2009-10-01
Humans can recognize common objects by touch extremely well whenever vision is unavailable. Despite its importance to a thorough understanding of human object recognition, the neuroscientific study of this topic has been relatively neglected. To date, the few published studies have addressed the haptic recognition of nonbiological objects. We now focus on haptic recognition of the human body, a particularly salient object category for touch. Neuroimaging studies demonstrate that regions of the occipito-temporal cortex are specialized for visual perception of faces (fusiform face area, FFA) and other body parts (extrastriate body area, EBA). Are the same category-sensitive regions activated when these components of the body are recognized haptically? Here, we use fMRI to compare brain organization for haptic and visual recognition of human body parts. Sixteen subjects identified exemplars of faces, hands, feet, and nonbiological control objects using vision and haptics separately. We identified two discrete regions within the fusiform gyrus (FFA and the haptic face region) that were each sensitive to both haptically and visually presented faces; however, these two regions differed significantly in their response patterns. Similarly, two regions within the lateral occipito-temporal area (EBA and the haptic body region) were each sensitive to body parts in both modalities, although the response patterns differed. Thus, although the fusiform gyrus and the lateral occipito-temporal cortex appear to exhibit modality-independent, category-sensitive activity, our results also indicate a degree of functional specialization related to sensory modality within these structures.
ERIC Educational Resources Information Center
Lawson, Rebecca
2009-01-01
A sequential matching task was used to compare how the difficulty of shape discrimination influences the achievement of object constancy for depth rotations across haptic and visual object recognition. Stimuli were nameable, 3-dimensional plastic models of familiar objects (e.g., bed, chair) and morphs midway between these endpoint shapes (e.g., a…
Lawson, Rebecca
2014-02-01
The limits of generalization of our 3-D shape recognition system to identifying objects by touch were investigated by testing exploration at unusual locations and with untrained effectors. In Experiments 1 and 2, people found identification by hand of real objects, plastic 3-D models of objects, and raised-line drawings placed in front of themselves no easier than when exploration was behind their back. Experiment 3 compared one-handed, two-handed, one-footed, and two-footed haptic object recognition of familiar objects. Recognition by foot was slower (7 s vs. 13 s) and much less accurate (9% vs. 47% errors) than recognition by either one or both hands. Nevertheless, item difficulty was similar across hand and foot exploration, and there was a strong correlation between an individual's hand and foot performance. Furthermore, foot recognition was better with the largest 20 of the 80 items (32% errors), suggesting that physical limitations hampered exploration by foot. Thus, object recognition by hand generalized efficiently across the spatial location of stimuli, while object recognition by foot seemed surprisingly good given that no prior training was provided. Active touch (haptics) thus efficiently extracts 3-D shape information and accesses stored representations of familiar objects from novel modes of input.
Fragility of haptic memory in human full-term newborns.
Lejeune, Fleur; Borradori Tolsa, Cristina; Gentaz, Edouard; Barisnikov, Koviljka
2018-05-31
Numerous studies have established that newborns can memorize tactile information about the specific features of an object held in their hands and detect differences from another object. However, while the robustness of haptic memory abilities has been examined in preterm newborns and in full-term infants, it has not yet been studied in full-term newborns. This research aimed to better understand the robustness of haptic memory abilities at birth by examining the effects of a change in the objects' temperature and of haptic interference. Sixty-eight full-term newborns (mean postnatal age: 2.5 days) were included. The two experiments were conducted in three phases: habituation (repeated presentation of the same object, a prism or a cylinder, in the newborn's hand), discrimination (presentation of a novel object), and recognition (presentation of the familiar object). In Experiment 1, the change in the objects' temperature was controlled during the three phases. Results reveal that newborns can memorize the specific features that differentiate prism and cylinder shapes by touch and discriminate between them, but surprisingly they did not show evidence of recognizing them after interference. As no significant effect of the temperature condition was observed on habituation, discrimination, or recognition abilities, these findings suggest that discrimination abilities in newborns may be determined by the detection of shape differences. Overall, it seems that the ontogenesis of haptic recognition memory is not linear, and that the period between 34 and 40 gestational weeks is likely crucial for haptic development. Copyright © 2018 Elsevier Inc. All rights reserved.
The effects of perceptual priming on 4-year-olds' haptic-to-visual cross-modal transfer.
Kalagher, Hilary
2013-01-01
Four-year-old children often have difficulty visually recognizing objects that were previously experienced only haptically. This experiment attempts to improve their performance in these haptic-to-visual transfer tasks. Sixty-two 4-year-old children participated in priming trials in which they explored eight unfamiliar objects visually, haptically, or visually and haptically together. Subsequently, all children participated in the same haptic-to-visual cross-modal transfer task. In this task, children haptically explored the objects that were presented in the priming phase and then visually identified a match from among three test objects, each matching the object on only one dimension (shape, texture, or color). Children in all priming conditions predominantly made shape-based matches; however, the most shape-based matches were made in the Visual and Haptic condition. All kinds of priming provided the necessary memory traces upon which subsequent haptic exploration could build a strong enough representation to enable subsequent visual recognition. Haptic exploration patterns during the cross-modal transfer task are discussed and the detailed analyses provide a unique contribution to our understanding of the development of haptic exploratory procedures.
The Role of Visual Experience on the Representation and Updating of Novel Haptic Scenes
ERIC Educational Resources Information Center
Pasqualotto, Achille; Newell, Fiona N.
2007-01-01
We investigated the role of visual experience on the spatial representation and updating of haptic scenes by comparing recognition performance across sighted, congenitally and late blind participants. We first established that spatial updating occurs in sighted individuals to haptic scenes of novel objects. All participants were required to…
Modal-Power-Based Haptic Motion Recognition
NASA Astrophysics Data System (ADS)
Kasahara, Yusuke; Shimono, Tomoyuki; Kuwahara, Hiroaki; Sato, Masataka; Ohnishi, Kouhei
Motion recognition based on sensory information is important for enabling robots to assist humans. Several studies have been carried out on motion recognition based on image information. However, human motions that involve contact with an object cannot be evaluated precisely by image-based recognition, because force information is essential for describing contact motion. In this paper, modal-power-based haptic motion recognition is proposed; modal power reveals information on both position and force and is considered to be one of the defining features of human motion. A motion recognition algorithm based on linear discriminant analysis is proposed to distinguish between similar motions. Haptic information is extracted using a bilateral master-slave system, and the observed motion is decomposed into primitive functions in a modal space. The experimental results show the effectiveness of the proposed method.
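The linear-discriminant step named in this abstract can be illustrated with a minimal two-class Fisher LDA. The "modal power" feature vectors below are synthetic stand-ins; the paper's actual feature extraction from a bilateral master-slave system is not reproduced here, and the class names are purely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical modal-power feature vectors for two contact motions
# (e.g. "push" vs. "stroke"); 50 trials each, 2 features per trial.
push = rng.normal([2.0, 0.5], 0.3, size=(50, 2))
stroke = rng.normal([0.5, 2.0], 0.3, size=(50, 2))

def fit_lda(a, b):
    """Two-class Fisher LDA: returns projection w and decision threshold c."""
    ma, mb = a.mean(axis=0), b.mean(axis=0)
    # Within-class scatter matrix (pooled, unnormalized covariance)
    sw = np.cov(a.T) * (len(a) - 1) + np.cov(b.T) * (len(b) - 1)
    w = np.linalg.solve(sw, mb - ma)   # direction maximizing class separation
    c = 0.5 * (w @ ma + w @ mb)        # threshold midway between projected means
    return w, c

w, c = fit_lda(push, stroke)
# Classify: a projection above the threshold is labeled "stroke"
pred_stroke = stroke @ w > c
pred_push = push @ w > c
print(pred_stroke.mean(), pred_push.mean())
```

With well-separated clusters like these, nearly all trials fall on the correct side of the threshold; distinguishing genuinely similar motions, as the paper does, would hinge on the quality of the modal-power features.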
Motor skills, haptic perception and social abilities in children with mild speech disorders.
Müürsepp, Iti; Aibast, Herje; Gapeyeva, Helena; Pääsuke, Mati
2012-02-01
The aim of the study was to evaluate motor skills, haptic object recognition, and social interaction in 5-year-old children with mild specific expressive language impairment (expressive-SLI) or articulation disorder (AD) in comparison with age- and gender-matched healthy children. Twenty-nine children (23 boys and 6 girls) with expressive-SLI, 27 children (20 boys and 7 girls) with AD, and 30 children (23 boys and 7 girls) with typically developing language as controls participated in our study. The children were examined for manual dexterity, ball skills, and static and dynamic balance using the M-ABC test, for haptic object recognition, and for social interaction using a questionnaire completed by teachers. Children with mild expressive-SLI demonstrated significantly poorer results in all subtests of motor skills (p<0.05), and in haptic object recognition and social interaction (p<0.01), compared to controls. There were no statistically significant differences (p>0.05) in the measured parameters between children with AD and controls. Children with expressive-SLI performed considerably poorer than the AD group in the balance subtest (p<0.05) and in the overall M-ABC test (p<0.01). In children with mild expressive-SLI, functional motor performance, haptic perception, and social interaction are considerably more affected than in children with AD. Although motor difficulties in speech production are prevalent in AD, the impairment is localised and does not involve children's general motor skills, haptic perception, or social interaction. Copyright © 2011 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.
Ballesteros, Soledad; Reales, José Manuel
2004-01-01
This study is the first to report complete priming in Alzheimer's disease (AD) patients and older control subjects for objects presented haptically. To investigate possible dissociations between implicit and explicit object representations, young adults, Alzheimer's patients, and older controls performed a speeded object naming task followed by a recognition task. Similar haptic priming was exhibited by the three groups, although young adults responded faster than the two older groups. Furthermore, there was no difference in performance between the two healthy groups. On the other hand, younger and older healthy adults did not differ on explicit recognition while, as expected, AD patients were highly impaired. The double dissociation suggests that different memory systems mediate the two types of memory task. The preservation of intact haptic priming in AD provides strong support to the idea that implicit object memory is mediated by a memory system that is different from the medial-temporal diencephalic system underlying explicit memory, which is impaired early in AD. Recent imaging and behavioral studies suggest that the implicit memory system may depend on extrastriate areas of the occipital cortex, although somatosensory cortical mechanisms may also be involved.
Object Recognition and Localization: The Role of Tactile Sensors
Aggarwal, Achint; Kirchner, Frank
2014-01-01
Tactile sensors, because of their intrinsic insensitivity to lighting conditions and water turbidity, provide promising opportunities for augmenting the capabilities of vision sensors in applications involving object recognition and localization. This paper presents two approaches for haptic object recognition and localization in ground and underwater environments. The first approach, called Batch RANSAC and Iterative Closest Point augmented Particle Filter (BRICPPF), is based on an innovative combination of particle filters, the Iterative Closest Point (ICP) algorithm, and a feature-based Random Sample Consensus (RANSAC) algorithm for database matching. It can handle a large database of 3D objects of complex shapes and performs complete six-degree-of-freedom localization of static objects. The algorithms are validated by experimentation in ground and underwater environments using real hardware. To our knowledge, this is the first instance of haptic object recognition and localization in underwater environments. The second approach is biologically inspired and provides a close integration between exploration and recognition. An edge-following exploration strategy is developed that receives feedback from the current state of recognition, and a recognition-by-parts approach uses the BRICPPF for object sub-part recognition. Object exploration is either directed to explore a part until it is successfully recognized, or directed towards new parts to endorse the current recognition belief. This approach is validated by simulation experiments. PMID:24553087
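The Iterative Closest Point component named in this abstract can be sketched in a few lines: alternately match each point to its nearest neighbour in the target cloud, then re-fit the rigid transform via the Kabsch SVD solution. The point clouds, sizes, and transform below are illustrative assumptions, not the paper's data or implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Alternate nearest-neighbour matching and rigid re-alignment."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbour in dst for every point of cur
        d = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur

# A model point cloud and a slightly rotated/translated copy of it
model = rng.normal(size=(60, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
scene = model @ Rz.T + np.array([0.2, -0.1, 0.05])

aligned = icp(model, scene)
err = np.abs(aligned - scene).max()
```

For small initial misalignments like this one, plain ICP converges; the paper's BRICPPF wraps ICP in a particle filter and RANSAC precisely because raw touch data is sparse and the initial pose is unknown.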
Monaco, Simona; Gallivan, Jason P; Figley, Teresa D; Singhal, Anthony; Culham, Jody C
2017-11-29
The role of the early visual cortex and higher-order occipitotemporal cortex has been studied extensively for visual recognition and to a lesser degree for haptic recognition and visually guided actions. Using a slow event-related fMRI experiment, we investigated whether tactile and visual exploration of objects recruit the same "visual" areas (and in the case of visual cortex, the same retinotopic zones) and if these areas show reactivation during delayed actions in the dark toward haptically explored objects (and if so, whether this reactivation might be due to imagery). We examined activation during visual or haptic exploration of objects and action execution (grasping or reaching) separated by an 18 s delay. Twenty-nine human volunteers (13 females) participated in this study. Participants had their eyes open and fixated on a point in the dark. The objects were placed below the fixation point and accordingly visual exploration activated the cuneus, which processes retinotopic locations in the lower visual field. Strikingly, the occipital pole (OP), representing foveal locations, showed higher activation for tactile than visual exploration, although the stimulus was unseen and location in the visual field was peripheral. Moreover, the lateral occipital tactile-visual area (LOtv) showed comparable activation for tactile and visual exploration. Psychophysiological interaction analysis indicated that the OP showed stronger functional connectivity with anterior intraparietal sulcus and LOtv during the haptic than visual exploration of shapes in the dark. After the delay, the cuneus, OP, and LOtv showed reactivation that was independent of the sensory modality used to explore the object. These results show that haptic actions not only activate "visual" areas during object touch, but also that this information appears to be used in guiding grasping actions toward targets after a delay. 
SIGNIFICANCE STATEMENT Visual presentation of an object activates shape-processing areas and retinotopic locations in early visual areas. Moreover, if the object is grasped in the dark after a delay, these areas show "reactivation." Here, we show that these areas are also activated and reactivated for haptic object exploration and haptically guided grasping. Touch-related activity occurs not only in the retinotopic location of the visual stimulus, but also at the occipital pole (OP), corresponding to the foveal representation, even though the stimulus was unseen and located peripherally. That is, the same "visual" regions are implicated in both visual and haptic exploration; however, touch also recruits high-acuity central representation within early visual areas during both haptic exploration of objects and subsequent actions toward them. Functional connectivity analysis shows that the OP is more strongly connected with ventral and dorsal stream areas when participants explore an object in the dark than when they view it. Copyright © 2017 the authors 0270-6474/17/3711572-20$15.00/0.
The contributions of vision and haptics to reaching and grasping
Stone, Kayla D.; Gonzalez, Claudia L. R.
2015-01-01
This review aims to provide a comprehensive outlook on the sensory (visual and haptic) contributions to reaching and grasping. The focus is on studies in developing children, normal, and neuropsychological populations, and in sensory-deprived individuals. Studies have suggested a right-hand/left-hemisphere specialization for visually guided grasping and a left-hand/right-hemisphere specialization for haptically guided object recognition. This poses the interesting possibility that when vision is not available and grasping relies heavily on the haptic system, there is an advantage to use the left hand. We review the evidence for this possibility and dissect the unique contributions of the visual and haptic systems to grasping. We ultimately discuss how the integration of these two sensory modalities shape hand preference. PMID:26441777
Object recognition for autonomous robot utilizing distributed knowledge database
NASA Astrophysics Data System (ADS)
Takatori, Jiro; Suzuki, Kenji; Hartono, Pitoyo; Hashimoto, Shuji
2003-10-01
In this paper we present a novel method of object recognition for an autonomous robot utilizing a remote knowledge database. The developed robot has three robot arms with different sensors: two CCD cameras and haptic sensors. It can see, touch, and move the target object from different directions. Referring to a remote knowledge database of geometry and material, the robot observes and handles objects in order to understand them, including their physical characteristics.
Command Recognition of Robot with Low Dimension Whole-Body Haptic Sensor
NASA Astrophysics Data System (ADS)
Ito, Tatsuya; Tsuji, Toshiaki
The authors have developed “haptic armor”, a whole-body haptic sensor that has the ability to estimate contact position. Although it was developed for the safety assurance of robots in human environments, it can also be used as an interface. This paper proposes a command recognition method based on finger-trace information and discusses some technical issues for improving the recognition accuracy of this system.
Touch influences perceived gloss
Adams, Wendy J.; Kerrigan, Iona S.; Graf, Erich W.
2016-01-01
Identifying an object’s material properties supports recognition and action planning: we grasp objects according to how heavy, hard or slippery we expect them to be. Visual cues to material qualities such as gloss have recently received attention, but how they interact with haptic (touch) information has been largely overlooked. Here, we show that touch modulates gloss perception: objects that feel slippery are perceived as glossier (more shiny). Participants explored virtual objects that varied in look and feel. A discrimination paradigm (Experiment 1) revealed that observers integrate visual gloss with haptic information. Observers could easily detect an increase in glossiness when it was paired with a decrease in friction. In contrast, increased glossiness coupled with decreased slipperiness produced only a small perceptual change: the visual and haptic changes counteracted each other. Subjective ratings (Experiment 2) reflected a similar interaction: slippery objects were rated as glossier, and vice versa. The sensory system treats visual gloss and haptic friction as correlated cues to surface material. Although friction is not a perfect predictor of gloss, the visual system appears to know and use a probabilistic relationship between these variables to bias perception, a sensible strategy given the ambiguity of visual cues to gloss. PMID:26915492
Enhanced visuo-haptic integration for the non-dominant hand.
Yalachkov, Yavor; Kaiser, Jochen; Doehrmann, Oliver; Naumer, Marcus J
2015-07-21
Visuo-haptic integration contributes essentially to object shape recognition. Although there has been a considerable advance in elucidating the neural underpinnings of multisensory perception, it is still unclear whether seeing an object and exploring it with the dominant hand elicits the same brain response as compared to the non-dominant hand. Using fMRI to measure brain activation in right-handed participants, we found that for both left- and right-hand stimulation the left lateral occipital complex (LOC) and anterior cerebellum (aCER) were involved in visuo-haptic integration of familiar objects. These two brain regions were then further investigated in another study, where unfamiliar, novel objects were presented to a different group of right-handers. Here the left LOC and aCER were more strongly activated by bimodal than unimodal stimuli only when the left but not the right hand was used. A direct comparison indicated that the multisensory gain of the fMRI activation was significantly higher for the left than the right hand. These findings are in line with the principle of "inverse effectiveness", implying that processing of bimodally presented stimuli is particularly enhanced when the unimodal stimuli are weak. This applies also when right-handed subjects see and simultaneously touch unfamiliar objects with their non-dominant left hand. Thus, the fMRI signal in the left LOC and aCER induced by visuo-haptic stimulation is dependent on which hand was employed for haptic exploration. Copyright © 2015 Elsevier B.V. All rights reserved.
Kitada, Ryo; Okamoto, Yuko; Sasaki, Akihiro T.; Kochiyama, Takanori; Miyahara, Motohide; Lederman, Susan J.; Sadato, Norihiro
2012-01-01
Face perception is critical for social communication. Given its fundamental importance in the course of evolution, the innate neural mechanisms can anticipate the computations necessary for representing faces. However, the effect of visual deprivation on the formation of neural mechanisms that underlie face perception is largely unknown. We previously showed that sighted individuals can recognize basic facial expressions by haptics surprisingly well. Moreover, the inferior frontal gyrus (IFG) and posterior superior temporal sulcus (pSTS) in the sighted subjects are involved in haptic and visual recognition of facial expressions. Here, we conducted both psychophysical and functional magnetic-resonance imaging (fMRI) experiments to determine the nature of the neural representation that subserves the recognition of basic facial expressions in early blind individuals. In a psychophysical experiment, both early blind and sighted subjects haptically identified basic facial expressions at levels well above chance. In the subsequent fMRI experiment, both groups haptically identified facial expressions and shoe types (control). The sighted subjects then completed the same task visually. Within brain regions activated by the visual and haptic identification of facial expressions (relative to that of shoes) in the sighted group, corresponding haptic identification in the early blind activated regions in the inferior frontal and middle temporal gyri. These results suggest that the neural system that underlies the recognition of basic facial expressions develops supramodally even in the absence of early visual experience. PMID:23372547
Pattern Perception and Pictures for the Blind
ERIC Educational Resources Information Center
Heller, Morton A.; McCarthy, Melissa; Clark, Ashley
2005-01-01
This article reviews recent research on perception of tangible pictures in sighted and blind people. Haptic picture naming accuracy is dependent upon familiarity and access to semantic memory, just as in visual recognition. Performance is high when haptic picture recognition tasks do not depend upon semantic memory. Viewpoint matters for the ease…
Aging and the Haptic Perception of Material Properties.
Norman, J Farley; Adkins, Olivia C; Hoyng, Stevie C; Dowell, Catherine J; Pedersen, Lauren E; Gilliam, Ashley N
2016-12-01
The ability of 26 younger (mean age 22.5 years) and older (mean age 72.6 years) adults to haptically perceive material properties was evaluated. The participants manually explored (for 5 seconds) 42 surfaces twice and placed each of these 84 experimental stimuli into one of seven categories: paper, plastic, metal, wood, stone, fabric, and fur/leather. In general, the participants were best able to identify fur/leather and wood materials; in contrast, recognition performance was worst for stone and paper. Despite similar overall patterns of performance for younger and older participants, the younger adults' recognition accuracies were 26.5% higher. The participants' tactile acuities (assessed by tactile grating orientation discrimination) affected their ability to identify surface material. In particular, the Pearson r correlation coefficient relating the participants' grating orientation thresholds and their material identification performance was -0.8: the higher the participants' thresholds, the lower their material recognition ability. While older adults are able to effectively perceive the solid shape of environmental objects using the sense of touch, their ability to perceive surface materials is significantly compromised.
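The threshold-accuracy relationship reported above can be illustrated with a short sketch. The data below are hypothetical (the study reports r ≈ -0.8 for its actual participants); the point is simply how a Pearson correlation captures "higher grating thresholds, lower material recognition":

```python
import numpy as np

# Hypothetical grating-orientation thresholds (mm) and material
# identification accuracy (proportion correct) for six participants.
thresholds = np.array([1.0, 1.4, 1.9, 2.6, 3.3, 4.1])
accuracy = np.array([0.85, 0.80, 0.72, 0.60, 0.55, 0.48])

# Pearson r: poorer tactile acuity (higher thresholds) should go
# with lower material-recognition accuracy, i.e. a negative r.
r = np.corrcoef(thresholds, accuracy)[0, 1]
print(round(r, 2))  # strongly negative for this made-up sample
```

With real data one would also inspect the scatterplot, since a single outlier can dominate r in samples this small.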
Human haptic perception is interrupted by explorative stops of milliseconds
Grunwald, Martin; Muniyandi, Manivannan; Kim, Hyun; Kim, Jung; Krause, Frank; Mueller, Stephanie; Srinivasan, Mandayam A.
2014-01-01
Introduction: The explorative scanning movements of the hands have been compared to those of the eyes. The visual process is known to be composed of alternating phases of saccadic eye movements and fixation pauses. Descriptive results suggest that short movement pauses also occur during the haptic exploration of objects. The goal of the present study was to detect these “explorative stops” (ES) during one-handed and two-handed haptic explorations of various objects and patterns, and to measure their duration. Additionally, the associations between the following variables were analyzed: (a) between mean exploration time and duration of ES, (b) between certain stimulus features and ES frequency, and (c) the duration of ES during the course of exploration. Methods: Five experiments were conducted. The first two experiments were classical recognition tasks of unknown haptic stimuli (A) and of common objects (B). In Experiment C, space-position information of angle legs had to be perceived and reproduced. For Experiments D and E, the PHANToM haptic device was used for the exploration of virtual (D) and real (E) sunken reliefs. Results: In each experiment we observed explorative stops of different average durations. For Experiment A: 329.50 ms, Experiment B: 67.47 ms, Experiment C: 189.92 ms, Experiment D: 186.17 ms and Experiment E: 140.02 ms. Significant correlations were observed between exploration time and the duration of the ES. Also, ES occurred more frequently, but not exclusively, at defined stimulus features like corners, curves and the endpoints of lines. However, explorative stops do not occur every time a stimulus feature is explored. Conclusions: We assume that ES are a general aspect of human haptic exploration processes. We have tried to interpret the occurrence and duration of ES with respect to the Hypotheses-Rebuild-Model and the Limited Capacity Control System theory. PMID:24782797
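Detecting explorative stops in a recorded hand trajectory amounts to finding intervals where movement speed stays below a threshold for long enough. The sketch below is illustrative only (the thresholds and sampling rate are assumptions, not the authors' criteria):

```python
import numpy as np

def find_stops(positions, dt_ms, speed_thresh=0.05, min_stop_ms=30):
    """Return (start_ms, duration_ms) for pauses where the hand's speed
    (units per ms) stays below speed_thresh for at least min_stop_ms."""
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt_ms
    slow = speed < speed_thresh
    stops, start = [], None
    for i, s in enumerate(slow):
        if s and start is None:
            start = i                      # a pause begins
        elif not s and start is not None:
            dur = (i - start) * dt_ms      # a pause ends; keep if long enough
            if dur >= min_stop_ms:
                stops.append((start * dt_ms, dur))
            start = None
    if start is not None:                  # pause running at trace end
        dur = (len(slow) - start) * dt_ms
        if dur >= min_stop_ms:
            stops.append((start * dt_ms, dur))
    return stops

# A 2D trace sampled every 10 ms: moves, pauses ~50 ms, moves again.
trace = np.array([[0, 0], [5, 0], [10, 0], [10.2, 0], [10.3, 0],
                  [10.4, 0], [10.5, 0], [10.6, 0], [15, 0], [20, 0]])
print(find_stops(trace, dt_ms=10))  # detects the single ~50 ms pause
```

A real pipeline would low-pass filter the position signal first, since sensor noise can break one stop into several.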
Haptic Classification of Common Objects: Knowledge-Driven Exploration.
ERIC Educational Resources Information Center
Lederman, Susan J.; Klatzky, Roberta L.
1990-01-01
Theoretical and empirical issues relating to haptic exploration and the representation of common objects during haptic classification were investigated in 3 experiments involving a total of 112 college students. Results are discussed in terms of a computational model of human haptic object classification with implications for dextrous robot…
Zahabi, Maryam; Zhang, Wenjuan; Pankok, Carl; Lau, Mei Ying; Shirley, James; Kaber, David
2017-11-01
Many occupations require both physical exertion and cognitive task performance. Knowledge of any interaction between physical demands and modalities of cognitive task information presentation can provide a basis for optimising performance. This study examined the effect of physical exertion and modality of information presentation on pattern recognition and navigation-related information processing. Results indicated that males of equivalent high fitness, between the ages of 18 and 34, rely more on visual cues than on auditory or haptic cues for pattern recognition when exertion level is high. We found that navigation response time was shorter under low and medium exertion levels than under high intensity. Navigation accuracy was lower under high-level exertion than under medium and low levels. In general, findings indicated that use of the haptic modality for cognitive task cueing decreased accuracy in pattern recognition responses. Practitioner Summary: An examination was conducted on the effect of physical exertion and information presentation modality in pattern recognition and navigation. In occupations requiring information presentation to workers who are simultaneously performing a physical task, the visual modality appears most effective under high-level exertion, while haptic cueing degrades performance.
Perception of synchronization errors in haptic and visual communications
NASA Astrophysics Data System (ADS)
Kameyama, Seiji; Ishibashi, Yutaka
2006-10-01
This paper deals with a system which conveys the haptic sensation experienced by a user to a remote user. In the system, the user controls a haptic interface device with another remote haptic interface device while watching video. Haptic media and video of a real object which the user is touching are transmitted to the other user. By subjective assessment, we investigate the allowable and imperceptible ranges of synchronization error between haptic media and video. We employ four real objects and ask each subject whether the synchronization error is perceived or not for each object. Assessment results show that the synchronization error is more easily perceived when the haptic media are ahead of the video than when they are behind it.
Perceptualization of geometry using intelligent haptic and visual sensing
NASA Astrophysics Data System (ADS)
Weng, Jianguang; Zhang, Hui
2013-01-01
We present a set of paradigms for investigating geometric structures using haptic and visual sensing. Our principal test cases include smoothly embedded geometric shapes such as knotted curves embedded in 3D and knotted surfaces in 4D, which contain massive self-intersections when projected to one lower dimension. One can exploit a touch-responsive 3D interactive probe to haptically override this conflicting evidence in the rendered images, forcing continuity in the haptic representation to emphasize the true topology. In our work, we exploited predictive haptic guidance, a "computer-simulated hand" with supplementary force suggestions, to support intelligent exploration of geometric shapes, smoothing exploration and maximizing the probability of recognition. The cognitive load can be reduced further by enabling attention-driven visual sensing during haptic exploration. Our methods combine to reveal the full richness of the haptic exploration of geometric structures, and to overcome the limitations of traditional 4D visualization.
Norman, J Farley; Phillips, Flip; Holmin, Jessica S; Norman, Hideko F; Beers, Amanda M; Boswell, Alexandria M; Cheeseman, Jacob R; Stethen, Angela G; Ronning, Cecilia
2012-10-01
A set of three experiments evaluated 96 participants' ability to visually and haptically discriminate solid object shape. In the past, some researchers have found haptic shape discrimination to be substantially inferior to visual shape discrimination, while other researchers have found haptics and vision to be essentially equivalent. A primary goal of the present study was to understand these discrepant past findings and to determine the true capabilities of the haptic system. All experiments used the same task (same vs. different shape discrimination) and stimulus objects (James Gibson's "feelies" and a set of naturally shaped objects--bell peppers). However, the methodology varied across experiments. Experiment 1 used random 3-dimensional (3-D) orientations of the stimulus objects, and the conditions were full-cue (active manipulation of objects and rotation of the visual objects in depth). Experiment 2 restricted the 3-D orientations of the stimulus objects and limited the haptic and visual information available to the participants. Experiment 3 compared restricted and full-cue conditions using random 3-D orientations. We replicated both previous findings in the current study. When we restricted visual and haptic information (and placed the stimulus objects in the same orientation on every trial), the participants' visual performance was superior to that obtained for haptics (replicating the earlier findings of Davidson et al. in Percept Psychophys 15(3):539-543, 1974). When the circumstances resembled those of ordinary life (e.g., participants able to actively manipulate objects and see them from a variety of perspectives), we found no significant difference between visual and haptic solid shape discrimination.
Algorithms for Haptic Rendering of 3D Objects
NASA Technical Reports Server (NTRS)
Basdogan, Cagatay; Ho, Chih-Hao; Srinivasan, Mandayam
2003-01-01
Algorithms have been developed to provide haptic rendering of three-dimensional (3D) objects in virtual (that is, computationally simulated) environments. The goal of haptic rendering is to generate tactual displays of the shapes, hardnesses, surface textures, and frictional properties of 3D objects in real time. Haptic rendering is a major element of the emerging field of computer haptics, which invites comparison with computer graphics. We have already seen various applications of computer haptics in the areas of medicine (surgical simulation, telemedicine, haptic user interfaces for blind people, and rehabilitation of patients with neurological disorders), entertainment (3D painting, character animation, morphing, and sculpting), mechanical design (path planning and assembly sequencing), and scientific visualization (geophysical data analysis and molecular manipulation).
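The core of most haptic rendering loops is a penalty-based force law: when the device's probe point penetrates a virtual surface, a spring-like restoring force proportional to penetration depth is commanded. The sketch below shows this idea for a single plane (a simplified illustration, not the NASA algorithms themselves; the stiffness value is an assumption):

```python
import numpy as np

def penalty_force(probe_pos, plane_point, plane_normal, stiffness=800.0):
    """Spring-like reaction force (N) for a probe penetrating a plane.
    stiffness is in N/m; zero force while the probe is outside the object."""
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)                 # unit surface normal
    # Penetration depth: positive when the probe is below the surface.
    depth = np.dot(np.asarray(plane_point) - np.asarray(probe_pos), n)
    if depth <= 0:                         # on or above the surface
        return np.zeros(3)
    return stiffness * depth * n           # push back along the normal

# Probe 2 mm below a horizontal surface at z = 0 (normal pointing up).
f = penalty_force([0.0, 0.0, -0.002], [0.0, 0.0, 0.0], [0.0, 0.0, 1.0])
print(f)
```

Production renderers refine this with a god-object/proxy point so the force direction stays stable on thin objects, and they run the loop at ~1 kHz for stable contact.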
Sebastián, Manuel; Reales, José M; Ballesteros, Soledad
2011-12-01
In this electrophysiological study, we investigated the effects of ageing on recognition memory for three-dimensional (3D) familiar objects presented to touch in a continuous paradigm. To examine changes in event-related potentials (ERPs) and brain oscillations, we recorded the EEGs of healthy groups of young (n=14; mean age=32.3 years) and older adults (n=14; mean age=65.1). Both age groups exhibited similar accuracy and exploration times when making old-new judgments. Young and older participants showed a marginally significant ERP old/new effect widely distributed over the scalp between 550-750 ms. In addition, the elders showed lower amplitude than younger participants within 1200-1500 ms. There were age-related differences in brain oscillations as measured by event-related spectral perturbation (ERSP). Older adults showed greater alpha and beta power reductions than young participants, suggesting the recruitment of additional neural resources. In contrast, the two age groups showed a reliable old/new effect in the theta band that temporarily overlapped the ERP old/new effect. The present results suggest that despite similar behavioral performance, the young and older adults recruited different neural resources to perform a haptic recognition task. Copyright © 2011 Elsevier Ltd. All rights reserved.
Zenner, Andre; Kruger, Antonio
2017-04-01
We define the concept of Dynamic Passive Haptic Feedback (DPHF) for virtual reality by introducing the weight-shifting physical DPHF proxy object Shifty. This concept combines actuators known from active haptics and physical proxies known from passive haptics to construct proxies that automatically adapt their passive haptic feedback. We describe the concept behind our ungrounded weight-shifting DPHF proxy Shifty and the implementation of our prototype. We then investigate how Shifty can, by automatically changing its internal weight distribution, enhance the user's perception of virtual objects interacted with in two experiments. In a first experiment, we show that Shifty can enhance the perception of virtual objects changing in shape, especially in length and thickness. Here, Shifty was shown to increase the user's fun and perceived realism significantly, compared to an equivalent passive haptic proxy. In a second experiment, Shifty is used to pick up virtual objects of different virtual weights. The results show that Shifty enhances the perception of weight and thus the perceived realism by adapting its kinesthetic feedback to the picked-up virtual object. In the same experiment, we additionally show that specific combinations of haptic, visual and auditory feedback during the pick-up interaction help to compensate for visual-haptic mismatch perceived during the shifting process.
Cognitive aspects of haptic form recognition by blind and sighted subjects.
Bailes, S M; Lambert, R M
1986-11-01
Studies using haptic form recognition tasks have generally concluded that the adventitiously blind perform better than the congenitally blind, implicating the importance of early visual experience in improved spatial functioning. The hypothesis was tested that the adventitiously blind have retained some ability to encode successive information obtained haptically in terms of a global visual representation, while the congenitally blind use a coding system based on successive inputs. Eighteen blind (adventitiously and congenitally) and 18 sighted (blindfolded and performing with vision) subjects were tested on their recognition of raised line patterns when the standard was presented in segments: in immediate succession, or with unfilled intersegmental delays of 5, 10, or 15 seconds. The results did not support the above hypothesis. Three main findings were obtained: normally sighted subjects were both faster and more accurate than the other groups; all groups improved in accuracy of recognition as a function of length of interstimulus interval; sighted subjects tended to report using strategies with a strong verbal component while the blind tended to rely on imagery coding. These results are explained in terms of information-processing theory consistent with dual encoding systems in working memory.
Effects of Motion and Figural Goodness on Haptic Object Perception in Infancy.
ERIC Educational Resources Information Center
Streri, Arlette; Spelke, Elizabeth S.
1989-01-01
After haptic habituation to a ring display, infants perceived the rings in two experiments as parts of one connected object. In both haptic and visual modes, infants appeared to perceive object unity by analyzing motion but not by analyzing figural goodness. (RH)
ERIC Educational Resources Information Center
Locher, Paul J.; Simmons, Roger W.
Two experiments were conducted to investigate the perceptual processes involved in haptic exploration of randomly generated shapes. Experiment one required subjects to detect symmetrical or asymmetrical characteristics of individually presented plastic shapes, also varying in complexity. Scanning time for both symmetrical and asymmetrical shapes…
Visuo-Haptic Mixed Reality with Unobstructed Tool-Hand Integration.
Cosco, Francesco; Garre, Carlos; Bruno, Fabio; Muzzupappa, Maurizio; Otaduy, Miguel A
2013-01-01
Visuo-haptic mixed reality consists of adding to a real scene the ability to see and touch virtual objects. It requires the use of see-through display technology for visually mixing real and virtual objects, and haptic devices for adding haptic interaction with the virtual objects. Unfortunately, the use of commodity haptic devices poses obstruction and misalignment issues that complicate the correct integration of a virtual tool and the user's real hand in the mixed reality scene. In this work, we propose a novel mixed reality paradigm where it is possible to touch and see virtual objects in combination with a real scene, using commodity haptic devices, and with a visually consistent integration of the user's hand and the virtual tool. We discuss the visual obstruction and misalignment issues introduced by commodity haptic devices, and then propose a solution that relies on four simple technical steps: color-based segmentation of the hand, tracking-based segmentation of the haptic device, background repainting using image-based models, and misalignment-free compositing of the user's hand. We have developed a successful proof-of-concept implementation, where a user can touch virtual objects and interact with them in the context of a real scene, and we have evaluated the impact on user performance of obstruction and misalignment correction.
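The compositing order described by the four steps can be shown in miniature: device pixels are repainted from a background model, then the segmented hand is kept on top so it occludes the virtual tool correctly. This is a toy sketch with 1-D arrays standing in for images; all names and values are illustrative, not the paper's implementation:

```python
import numpy as np

def composite(frame, hand_mask, device_mask, background):
    """Repaint device pixels from the background model; the hand mask
    wins where it overlaps the device, so the real hand stays visible."""
    out = frame.copy()
    out[device_mask] = background[device_mask]  # hide the haptic device
    out[hand_mask] = frame[hand_mask]           # hand composited on top
    return out

# A 1x4 'image': one hand pixel (index 1) and one device pixel (index 2).
frame = np.array([10, 20, 30, 40])
background = np.array([1, 2, 3, 4])
hand = np.array([False, True, False, False])
device = np.array([False, False, True, False])
print(composite(frame, hand, device, background))  # device pixel repainted
```

In the real system the hand mask comes from color-based segmentation and the device mask from tracking, but the per-pixel compositing logic is the same.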
Lee Masson, Haemy; Bulthé, Jessica; Op de Beeck, Hans P; Wallraven, Christian
2016-08-01
Humans are highly adept at multisensory processing of object shape in both vision and touch. Previous studies have mostly focused on where visually perceived object-shape information can be decoded, with haptic shape processing receiving less attention. Here, we investigate visuo-haptic shape processing in the human brain using multivoxel correlation analyses. Importantly, we use tangible, parametrically defined novel objects as stimuli. Two groups of participants first performed either a visual or haptic similarity-judgment task. The resulting perceptual object-shape spaces were highly similar and matched the physical parameter space. In a subsequent fMRI experiment, objects were first compared within the learned modality and then in the other modality in a one-back task. When correlating neural similarity spaces with perceptual spaces, visually perceived shape was decoded well in the occipital lobe along with the ventral pathway, whereas haptically perceived shape information was mainly found in the parietal lobe, including frontal cortex. Interestingly, ventrolateral occipito-temporal cortex decoded shape in both modalities, highlighting this as an area capable of detailed visuo-haptic shape processing. Finally, we found haptic shape representations in early visual cortex (in the absence of visual input), when participants switched from visual to haptic exploration, suggesting top-down involvement of visual imagery on haptic shape processing. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Visual and haptic integration in the estimation of softness of deformable objects
Cellini, Cristiano; Kaim, Lukas; Drewing, Knut
2013-01-01
Softness perception intrinsically relies on haptic information. However, through everyday experiences we learn correspondences between felt softness and the visual effects of exploratory movements that are executed to feel softness. Here, we studied how visual and haptic information is integrated to assess the softness of deformable objects. Participants discriminated between the softness of two softer or two harder objects using only-visual, only-haptic or both visual and haptic information. We assessed the reliabilities of the softness judgments using the method of constant stimuli. In visuo-haptic trials, discrepancies between the two senses' information allowed us to measure the contribution of the individual senses to the judgments. Visual information (finger movement and object deformation) was simulated using computer graphics; input in visual trials was taken from previous visuo-haptic trials. Participants were able to infer softness from vision alone, and vision considerably contributed to bisensory judgments (∼35%). The visual contribution was higher than predicted from models of optimal integration (senses are weighted according to their reliabilities). Bisensory judgments were less reliable than predicted from optimal integration. We conclude that the visuo-haptic integration of softness information is biased toward vision, rather than being optimal, and might even be guided by a fixed weighting scheme. PMID:25165510
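The "optimal integration" benchmark against which the visual contribution was compared is the standard maximum-likelihood cue-combination rule: each cue is weighted by its reliability (inverse variance), and the combined estimate is predicted to be more reliable than either cue alone. A sketch with illustrative noise values (not the study's data):

```python
def optimal_combination(sigma_v, sigma_h):
    """MLE cue combination: weights proportional to reliability (1/sigma^2);
    returns the predicted visual weight and combined (bisensory) SD."""
    rel_v, rel_h = 1 / sigma_v**2, 1 / sigma_h**2
    w_v = rel_v / (rel_v + rel_h)            # predicted visual weight
    sigma_vh = (1 / (rel_v + rel_h)) ** 0.5  # predicted bisensory SD
    return w_v, sigma_vh

# Illustrative discrimination noise (arbitrary softness units):
# haptics assumed more reliable than vision for softness.
w_v, sigma_vh = optimal_combination(sigma_v=2.0, sigma_h=1.0)
print(round(w_v, 2), round(sigma_vh, 2))
```

The study's finding, in these terms, is that the empirical visual weight (~35%) exceeded the w_v predicted from the measured unimodal reliabilities, and the bisensory judgments were noisier than the predicted sigma_vh.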
Takahashi, Chie; Watt, Simon J.
2014-01-01
When we hold an object while looking at it, estimates from visual and haptic cues to size are combined in a statistically optimal fashion, whereby the “weight” given to each signal reflects their relative reliabilities. This allows object properties to be estimated more precisely than would otherwise be possible. Tools such as pliers and tongs systematically perturb the mapping between object size and the hand opening. This could complicate visual-haptic integration because it may alter the reliability of the haptic signal, thereby disrupting the determination of appropriate signal weights. To investigate this we first measured the reliability of haptic size estimates made with virtual pliers-like tools (created using a stereoscopic display and force-feedback robots) with different “gains” between hand opening and object size. Haptic reliability in tool use was straightforwardly determined by a combination of sensitivity to changes in hand opening and the effects of tool geometry. The precise pattern of sensitivity to hand opening, which violated Weber's law, meant that haptic reliability changed with tool gain. We then examined whether the visuo-motor system accounts for these reliability changes. We measured the weight given to visual and haptic stimuli when both were available, again with different tool gains, by measuring the perceived size of stimuli in which visual and haptic sizes were varied independently. The weight given to each sensory cue changed with tool gain in a manner that closely resembled the predictions of optimal sensory integration. The results are consistent with the idea that different tool geometries are modeled by the brain, allowing it to calculate not only the distal properties of objects felt with tools, but also the certainty with which those properties are known. These findings highlight the flexibility of human sensory integration and tool-use, and potentially provide an approach for optimizing the design of visual-haptic devices. PMID:24592245
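Under the simplifying assumption that a tool's gain g rescales the mapping from hand opening to object size, so hand-opening noise expressed in object units grows with g, the optimal-integration account predicts that the visual weight should rise with tool gain. The sketch below encodes that assumption (it is an illustration of the prediction's direction, not the authors' model, which also accounts for Weber's-law violations):

```python
def visual_weight(sigma_v, sigma_hand, gain):
    """Predicted visual weight when haptic size noise (in object units)
    is the hand-opening noise scaled by the tool's size/opening gain.
    The gain-scaling of noise is an assumption for illustration."""
    sigma_h = sigma_hand * gain
    rel_v, rel_h = 1 / sigma_v**2, 1 / sigma_h**2
    return rel_v / (rel_v + rel_h)

# As tool gain grows, haptic size estimates get noisier in object units,
# so predicted reliance on vision increases.
for g in (0.5, 1.0, 2.0):
    print(g, round(visual_weight(sigma_v=1.0, sigma_hand=1.0, gain=g), 2))
```

The study's result, that measured weights tracked tool gain roughly as such a model predicts, is what supports the claim that the brain models tool geometry when setting cue weights.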
Grasping and fingering (active or haptic touch) in healthy newborns.
Adamson-Macedo, Elvidina Nabuco; Barnes, Christopher R
2004-12-01
The traditional view that the activity of the baby's hands is triggered by a stimulus in an automatic, compulsory, stereotyped way, and the persisting view that fingering does not occur prior to 4 months of age, have led perception researchers to the assumption that the processing, encoding, and retention of sensory information could not take place through the manual mode. This study aims to investigate whether fingering and different types of grasping occur before 3 months of age and can be modulated by the surface texture of three objects. Using naturalistic observations, this small-sample developmental study applied the AB experimental design to achieve the aims above. Babies were videotaped every week for 12 weeks. Three special manual stimuli were developed for this study. The focal sampling method, with either zero-sampling or instantaneous sampling recording rules, was used to analyse data with the Observer Video Pro. Each session, comprising a baseline and 3 experimental conditions, lasted four minutes. Fingering, or 'proto-fingering' as it is termed in this article, emerges as early as the first week of postnatal life; the texture of a handled object modulates both 'proto-palm' and hand-grasp behaviour of healthy newborns. Results suggest that texture also modulates 'proto-fingering', challenge the persisting assumption that fingering does not occur before four months of age, and further validate the phrase 'neo-haptic' touch to describe hands-on exploration by the newborn. The author suggests that some 'mental representation' of the stimulus is present during 'neo-haptic' recognition of the objects, which is in accordance with a constructivist approach to (touch) perception.
Haptograph Representation of Real-World Haptic Information by Wideband Force Control
NASA Astrophysics Data System (ADS)
Katsura, Seiichiro; Irie, Kouhei; Ohishi, Kiyoshi
Artificial acquisition and reproduction of human sensations are basic technologies of communication engineering. For example, auditory information is obtained by a microphone, and a speaker reproduces it by artificial means. Furthermore, a video camera and a television make it possible to transmit visual sensation by broadcasting. In contrast, since tactile or haptic information is subject to Newton's “law of action and reaction” in the real world, a device which acquires, transmits, and reproduces this information has not been established. From this point of view, real-world haptics is the key technology for future haptic communication engineering. This paper proposes a novel acquisition method of haptic information named the “haptograph”. The haptograph visualizes haptic information as a photograph does visual information. The proposed haptograph is applied to haptic recognition of the contact environment. A linear motor contacts the surface of the environment and its reaction force is used to make a haptograph. Robust contact motion and sensor-less sensing of the reaction force are attained by using a disturbance observer. As a result, an encyclopedia of contact environments is obtained. Since temporal and spatial analyses are conducted to represent haptic information as the haptograph, it can be recognized and evaluated intuitively.
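The sensor-less force sensing mentioned above can be sketched in simplified discrete-time form: the motor's thrust (torque constant times current) minus the inertial force (mass times acceleration) is low-pass filtered to estimate the external reaction force. All parameters below (kt, mass, cutoff) are illustrative assumptions, not the paper's implementation:

```python
def reaction_force_estimate(i_ref, velocity, dt, kt=1.2, mass=0.5, cutoff=50.0):
    """Sensor-less reaction-force observer sketch:
    F_ext ~ low-pass( kt*i - mass*dv/dt ), first-order filter (cutoff in rad/s)."""
    alpha = dt * cutoff / (1 + dt * cutoff)   # discrete low-pass coefficient
    f_hat, out, prev_v = 0.0, [], velocity[0]
    for i, v in zip(i_ref, velocity):
        accel = (v - prev_v) / dt             # numerical acceleration
        f_hat += alpha * (kt * i - mass * accel - f_hat)
        out.append(f_hat)
        prev_v = v
    return out

# Constant current pressing on a rigid surface (no motion): the estimate
# converges to the thrust-equivalent reaction force kt * i.
forces = reaction_force_estimate([1.0] * 200, [0.0] * 200, dt=0.001)
print(round(forces[-1], 2))
```

The low-pass filter is what makes the scheme practical: differentiating velocity amplifies sensor noise, and the filter trades estimation bandwidth for noise rejection.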
ERIC Educational Resources Information Center
Stock, Oliver; Roder, Brigitte; Burke, Michael; Bien, Siegfried; Rosler, Frank
2009-01-01
The present study used functional magnetic resonance imaging to delineate cortical networks that are activated when objects or spatial locations encoded either visually (visual encoding group, n = 10) or haptically (haptic encoding group, n = 10) had to be retrieved from long-term memory. Participants learned associations between auditorily…
Hilsenrat, Marcos; Reiner, Miriam
2011-06-30
It is well known that unaware exposure to a visual stimulus increases the preferability of the associated object. In this study we examine whether the same phenomenon occurs for haptic stimuli. Using a touch-enabled virtual environment, we tested whether people who touch two virtual surfaces that differ by imperceptible differences in roughness or compliance tend to choose rougher or smoother, softer or stiffer surfaces, in accordance with their natural tendency. In forced-choice preference tests, participants were first asked to choose between two surfaces that differed in roughness/stiffness. Stimulus strength was above the limit of aware perception. Then, the same test was performed with differences in stimulus strength below the limit of awareness. Finally, we carried out a recognition test: participants were asked to choose between the surfaces presented in the previous step, and to point at the smoother or softer surface, respectively. For each stimulus, two groups of 26 subjects participated. Results show that in the unaware preference tests, participants selected the surface in accordance with the aware preference tests, with a significant difference from chance (59.5% and 60.2% for roughness and compliance as a stimulus, respectively). The recognition tests in both experiments were at chance level, suggesting that participants were unaware of the difference in stimuli. These results show that subliminal perception of roughness and compliance affects texture preferences. Research data suggest that the amygdala is central in regulating emotional processing of visual stimuli, even when they are presented subliminally. Thus, the results of this study raise the question of whether the amygdala also modulates emotional haptic stimuli when they are subliminally perceived. Copyright © 2011 Elsevier Inc. All rights reserved.
Haptic perception and body representation in lateral and medial occipito-temporal cortices.
Costantini, Marcello; Urgesi, Cosimo; Galati, Gaspare; Romani, Gian Luca; Aglioti, Salvatore M
2011-04-01
Although vision is the primary sensory modality that humans and other primates use to identify objects in the environment, we can recognize crucial object features (e.g., shape, size) using the somatic modality. Previous studies have shown that the occipito-temporal areas dedicated to the visual processing of object forms, faces and bodies also show category-selective responses when the preferred stimuli are haptically explored out of view. Visual processing of human bodies engages specific areas in lateral (extrastriate body area, EBA) and medial (fusiform body area, FBA) occipito-temporal cortex. This study aimed at exploring the relative involvement of EBA and FBA in the haptic exploration of body parts. During fMRI scanning, participants were asked to haptically explore either real-size fake body parts or objects. We found a selective activation of right and left EBA, but not of right FBA, while participants haptically explored body parts as compared to real objects. This suggests that EBA may integrate visual body representations with somatosensory information regarding body parts and form a multimodal representation of the body. Furthermore, both left and right EBA showed a comparable level of body selectivity during haptic perception and visual imagery. However, right but not left EBA was more activated during haptic exploration than visual imagery of body parts, ruling out that the response to haptic body exploration was entirely due to the use of visual imagery. Overall, the results point to the existence of different multimodal body representations in the occipito-temporal cortex which are activated during perception and imagery of human body parts. Copyright © 2011 Elsevier Ltd. All rights reserved.
Design of a 4-DOF MR haptic master for application to robot surgery: virtual environment work
NASA Astrophysics Data System (ADS)
Oh, Jong-Seok; Choi, Seung-Hyun; Choi, Seung-Bok
2014-09-01
This paper presents the design and control performance of a novel type of 4-degrees-of-freedom (4-DOF) haptic master in cyberspace for a robot-assisted minimally invasive surgery (RMIS) application. By using a controllable magnetorheological (MR) fluid, the proposed haptic master can provide a feedback function for a surgical robot. Due to the difficulty of utilizing real human organs in the experiment, a cyberspace that features a virtual object is constructed to evaluate the performance of the haptic master. In order to realize the cyberspace, a volumetric deformable object is represented by a shape-retaining chain-linked (S-chain) model, which is a fast volumetric model suitable for real-time applications. In the haptic architecture for an RMIS application, the desired torque and position induced from the virtual object of the cyberspace and the haptic master of real space are transferred to each other. In order to validate the superiority of the proposed master and volumetric model, a tracking control experiment is implemented with a nonhomogeneous volumetric cubic object to demonstrate that the proposed model can be utilized in a real-time haptic rendering architecture. A proportional-integral-derivative (PID) controller is then designed and empirically implemented to accomplish the desired torque trajectories. It has been verified from the experiment that tracking control performance for torque trajectories from a virtual slave can be successfully achieved.
Haptic shape discrimination and interhemispheric communication.
Dowell, Catherine J; Norman, J Farley; Moment, Jackie R; Shain, Lindsey M; Norman, Hideko F; Phillips, Flip; Kappers, Astrid M L
2018-01-10
In three experiments, participants haptically discriminated object shape using unimanual exploration (a single hand explored two objects) and bimanual exploration (both hands were used, but each hand, left or right, explored a separate object). Such haptic exploration (one versus two hands) requires somatosensory processing in either only one or both cerebral hemispheres; previous studies related to the perception of shape/curvature found superior performance for unimanual exploration, indicating that shape comparison is more effective when only one hemisphere is utilized. The current results, obtained for naturally shaped solid objects (bell peppers, Capsicum annuum) and simple cylindrical surfaces, demonstrate otherwise: bimanual haptic exploration can be as effective as unimanual exploration, showing that there is no necessary reduction in ability when haptic shape comparison requires interhemispheric communication. We found that while successive bimanual exploration produced high shape discriminability, the participants' bimanual performance deteriorated for simultaneous shape comparisons. This outcome suggests that either interhemispheric interference or the need to attend to multiple objects simultaneously reduces shape discrimination ability. The current results also reveal a significant effect of age: older adults' shape discrimination abilities are moderately reduced relative to younger adults, regardless of how objects are manipulated (left hand only, right hand only, or bimanual exploration).
Differential effects of delay upon visually and haptically guided grasping and perceptual judgments.
Pettypiece, Charles E; Culham, Jody C; Goodale, Melvyn A
2009-05-01
Experiments with visual illusions have revealed a dissociation between the systems that mediate object perception and those responsible for object-directed action. More recently, an experiment on a haptic version of the visual size-contrast illusion has provided evidence for the notion that the haptic modality shows a similar dissociation when grasping and estimating the size of objects in real-time. Here we present evidence suggesting that the similarities between the two modalities begin to break down once a delay is introduced between when people feel the target object and when they perform the grasp or estimation. In particular, when grasping after a delay in a haptic paradigm, people scale their grasps differently when the target is presented with a flanking object of a different size (although the difference does not reflect a size-contrast effect). When estimating after a delay, however, it appears that people ignore the size of the flanking objects entirely. This does not fit well with the results commonly found in visual experiments. Thus, introducing a delay reveals important differences in the way in which haptic and visual memories are stored and accessed.
Norman, J Farley; Phillips, Flip; Cheeseman, Jacob R; Thomason, Kelsey E; Ronning, Cecilia; Behari, Kriti; Kleinman, Kayla; Calloway, Autum B; Lamirande, Davora
2016-01-01
It is well known that motion facilitates the visual perception of solid object shape, particularly when surface texture or other identifiable features (e.g., corners) are present. Conventional models of structure-from-motion require the presence of texture or identifiable object features in order to recover 3-D structure. Is the facilitation in 3-D shape perception similar in magnitude when surface texture is absent? On any given trial in the current experiments, participants were presented with a single randomly-selected solid object (bell pepper or randomly-shaped "glaven") for 12 seconds and were required to indicate which of 12 (for bell peppers) or 8 (for glavens) simultaneously visible objects possessed the same shape. The initial single object's shape was defined either by boundary contours alone (i.e., presented as a silhouette), specular highlights alone, specular highlights combined with boundary contours, or texture. In addition, there was a haptic condition: in this condition, the participants haptically explored with both hands (but could not see) the initial single object for 12 seconds; they then performed the same shape-matching task used in the visual conditions. For both the visual and haptic conditions, motion (rotation in depth or active object manipulation) was present in half of the trials and was not present for the remaining trials. The effect of motion was quantitatively similar for all of the visual and haptic conditions-e.g., the participants' performance in Experiment 1 was 93.5 percent higher in the motion or active haptic manipulation conditions (when compared to the static conditions). The current results demonstrate that deforming specular highlights or boundary contours facilitate 3-D shape perception as much as the motion of objects that possess texture. The current results also indicate that the improvement with motion that occurs for haptics is similar in magnitude to that which occurs for vision.
Haptic identification of objects and their depictions.
Klatzky, R L; Loomis, J M; Lederman, S J; Wake, H; Fujita, N
1993-08-01
Haptic identification of real objects is superior to that of raised two-dimensional (2-D) depictions. Three explanations of real-object superiority were investigated: contribution of material information, contribution of 3-D shape and size, and greater potential for integration across the fingers. In Experiment 1, subjects, while wearing gloves that gently attenuated material information, haptically identified real objects that provided reduced cues to compliance, mass, and part motion. The gloves permitted exploration with free hand movement, a single outstretched finger, or five outstretched fingers. Performance decreased over these three conditions but was superior to identification of pictures of the same objects in all cases, indicating the contribution of 3-D structure and integration across the fingers. Picture performance was also better with five fingers than with one. In Experiment 2, the subjects wore open-fingered gloves, which provided them with material information. Consequently, the effect of type of exploration was substantially reduced but not eliminated. Material compensates somewhat for limited access to object structure but is not the primary basis for haptic object identification.
Evaluation of Wearable Haptic Systems for the Fingers in Augmented Reality Applications.
Maisto, Maurizio; Pacchierotti, Claudio; Chinello, Francesco; Salvietti, Gionata; De Luca, Alessandro; Prattichizzo, Domenico
2017-01-01
Although Augmented Reality (AR) has been around for almost five decades, only recently have we witnessed AR systems and applications entering our everyday life. Representative examples of this technological revolution are the smartphone games "Pokémon GO" and "Ingress" or the Google Translate real-time sign interpretation app. Even though AR applications are already quite compelling and widespread, users are still not able to physically interact with the computer-generated reality. In this respect, wearable haptics can provide the compelling illusion of touching the superimposed virtual objects without constraining the motion or the workspace of the user. In this paper, we present the experimental evaluation of two wearable haptic interfaces for the fingers in three AR scenarios, enrolling 38 participants. In the first experiment, subjects were requested to write on a virtual board using a real chalk. The haptic devices provided the interaction forces between the chalk and the board. In the second experiment, subjects were asked to pick and place virtual and real objects. The haptic devices provided the interaction forces due to the weight of the virtual objects. In the third experiment, subjects were asked to balance a virtual sphere on a real cardboard. The haptic devices provided the interaction forces due to the weight of the virtual sphere rolling on the cardboard. Providing haptic feedback through the considered wearable device significantly improved the performance of all the considered tasks. Moreover, subjects significantly preferred conditions providing wearable haptic feedback.
Comparative study on collaborative interaction in non-immersive and immersive systems
NASA Astrophysics Data System (ADS)
Shahab, Qonita M.; Kwon, Yong-Moo; Ko, Heedong; Mayangsari, Maria N.; Yamasaki, Shoko; Nishino, Hiroaki
2007-09-01
This research studies Virtual Reality simulation for collaborative interaction so that different people from different places can interact with one object concurrently. Our focus is the real-time handling of inputs from multiple users, where the object's behavior is determined by the combination of the multiple inputs. Issues addressed in this research are: 1) the effects of using haptics on a collaborative interaction, and 2) the possibilities of collaboration between users from different environments. We conducted user tests on our system in several cases: 1) comparison between non-haptic and haptic collaborative interaction over LAN, 2) comparison between non-haptic and haptic collaborative interaction over the Internet, and 3) analysis of collaborative interaction between non-immersive and immersive display environments. The case studies cover two forms of user interaction: collaborative authoring of a 3D model by two users, and collaborative haptic interaction by multiple users. In Virtual Dollhouse, users can observe the laws of physics while constructing a dollhouse using existing building blocks, under gravity effects. In Virtual Stretcher, multiple users can collaborate on moving a stretcher together while feeling each other's haptic motions.
Mnemonic neuronal activity in somatosensory cortex.
Zhou, Y D; Fuster, J M
1996-01-01
Single-unit activity was recorded from the hand areas of the somatosensory cortex of monkeys trained to perform a haptic delayed matching to sample task with objects of identical dimensions but different surface features. During the memory retention period of the task (delay), many units showed sustained firing frequency change, either excitation or inhibition. In some cases, firing during that period was significantly higher after one sample object than after another. These observations indicate the participation of somatosensory neurons not only in the perception but in the short-term memory of tactile stimuli. Neurons most directly implicated in tactile memory are (i) those with object-selective delay activity, (ii) those with nondifferential delay activity but without activity related to preparation for movement, and (iii) those with delay activity in the haptic-haptic delayed matching task but no such activity in a control visuo-haptic delayed matching task. The results indicate that cells in early stages of cortical somatosensory processing participate in haptic short-term memory. PMID:8927629
Vision-Based Haptic Feedback for Remote Micromanipulation in-SEM Environment
NASA Astrophysics Data System (ADS)
Bolopion, Aude; Dahmen, Christian; Stolle, Christian; Haliyo, Sinan; Régnier, Stéphane; Fatikow, Sergej
2012-07-01
This article presents an intuitive environment for remote micromanipulation composed of both haptic feedback and virtual reconstruction of the scene. To enable nonexpert users to perform complex teleoperated micromanipulation tasks, it is of utmost importance to provide them with information about the 3-D relative positions of the objects and the tools. Haptic feedback is an intuitive way to transmit such information. Since position sensors are not available at this scale, visual feedback is used to derive information about the scene. In this work, three different techniques are implemented, evaluated, and compared to derive the object positions from scanning electron microscope images. The modified correlation matching with generated template algorithm is accurate and provides reliable detection of objects. To track the tool, a marker-based approach is chosen since fast detection is required for stable haptic feedback. Information derived from these algorithms is used to propose an intuitive remote manipulation system that enables users situated in geographically distant sites to benefit from specific equipment, such as SEMs. Stability of the haptic feedback is ensured by the minimization of the delays, the computational efficiency of vision algorithms, and the proper tuning of the haptic coupling. Virtual guides are proposed to avoid any involuntary collisions between the tool and the objects. This approach is validated by a teleoperation involving melamine microspheres with a diameter of less than 2 μm between Paris, France and Oldenburg, Germany.
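The correlation-based object detection mentioned above belongs to the template-matching family. The following is a generic normalized cross-correlation sketch in plain NumPy, not the authors' modified algorithm; the image and template are synthetic.

```python
# Generic normalized cross-correlation (NCC) template matching sketch.
import numpy as np

def ncc_match(image, template):
    """Return (row, col) of the window best matching the template."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            window = image[r:r + th, c:c + tw]
            w = window - window.mean()
            denom = np.sqrt((w ** 2).sum()) * t_norm
            if denom == 0:
                continue  # flat window: correlation undefined
            score = (w * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

# Synthetic test image: a gradient patch embedded in a blank background.
patch = np.arange(25.0).reshape(5, 5)
img = np.zeros((40, 40))
img[10:15, 20:25] = patch
print(ncc_match(img, patch))  # (10, 20)
```

Mean subtraction and normalization make the score invariant to brightness and contrast shifts, which matters for noisy SEM imagery; production systems typically use an FFT-based or library implementation rather than this explicit double loop.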
Dibble, Edward; Zivanovic, Aleksandar; Davies, Brian
2004-01-01
This paper presents the results of several early studies relating to human haptic perception sensitivity when probing a virtual object. A 1 degree of freedom (DoF) rotary haptic system, that was designed and built for this purpose, is also presented. The experiments were to assess the maximum forces applied in a minimally invasive surgery (MIS) procedure, quantify the compliance sensitivity threshold when probing virtual tissue and identify the haptic system loop rate necessary for haptic feedback to feel realistic.
Haptics – Touchfeedback Technology Widening the Horizon of Medicine
Kapoor, Shalini; Arora, Pallak; Kapoor, Vikas; Jayachandran, Mahesh; Tiwari, Manish
2014-01-01
Haptics, or touch-feedback technology, is a major breakthrough in medical and dental interventions. Haptic perception is the process of recognizing objects through touch. Haptic sensations are created by actuators or motors which generate vibrations for the user and are controlled by embedded software integrated into the device. It takes advantage of a combination of the somatosensory pattern of the skin and proprioception of hand position. Anatomical and diagnostic knowledge, when combined with this touch-sense technology, has revolutionized medical education. This amalgamation of the worlds of diagnosis and surgical intervention adds precise robotic touch to the skill of the surgeon. A systematic literature review was done using MEDLINE, Google Search, and PubMed. The aim of this article was to introduce the fundamentals of haptic technology, its current applications in medical training and robotic surgeries, the limitations of haptics, and future aspects of haptics in medicine. PMID:24783164
Investigating Students' Ideas About Buoyancy and the Influence of Haptic Feedback
NASA Astrophysics Data System (ADS)
Minogue, James; Borland, David
2016-04-01
While haptics (simulated touch) represents a potential breakthrough technology for science teaching and learning, there is relatively little research into its differential impact in the context of teaching and learning. This paper describes the testing of a haptically enhanced simulation (HES) for learning about buoyancy. Despite a lifetime of everyday experiences, a scientifically sound explanation of buoyancy remains difficult to construct for many. It requires the integration of domain-specific knowledge regarding density, fluid, force, gravity, mass, weight, and buoyancy. Prior studies suggest that novices often focus on only one dimension of the sinking and floating phenomenon. Our HES was designed to promote the integration of the subconcepts of density and buoyant forces and stresses the relationship between the object itself and the surrounding fluid. The study employed a randomized pretest-posttest control-group research design and a suite of measures, including an open-ended prompt and objective content questions, to provide insights into the influence of haptic feedback on undergraduate students' thinking about buoyancy. A convenience sample (n = 40) was drawn from a university's population of undergraduate elementary education majors. Two groups were formed: haptic feedback (n = 22) and no haptic feedback (n = 18). Through content analysis, discernible differences were seen across treatment groups in the posttest explanations of sinking and floating. Learners who experienced the haptic feedback made more frequent use of "haptically grounded" terms (e.g., mass, gravity, buoyant force, pushing), leading us to begin to build a local theory of language-mediated haptic cognition.
Study on Collaborative Object Manipulation in Virtual Environment
NASA Astrophysics Data System (ADS)
Mayangsari, Maria Niken; Yong-Moo, Kwon
This paper presents a comparative study of networked collaboration performance under different degrees of immersion. In particular, the relationship between user collaboration performance and the degree of immersion provided by the system is addressed and compared based on several experiments. The user tests on our system include several cases: 1) comparison between non-haptic and haptic collaborative interaction over LAN, 2) comparison between non-haptic and haptic collaborative interaction over the Internet, and 3) analysis of collaborative interaction between non-immersive and immersive display environments.
Ballesteros, Soledad; Reales, José M; Mayas, Julia; Heller, Morton A
2008-08-01
In two experiments, we examined the effect of selective attention at encoding on repetition priming in normal aging and Alzheimer's disease (AD) patients for objects presented visually (experiment 1) or haptically (experiment 2). We used a repetition priming paradigm combined with a selective attention procedure at encoding. Reliable priming was found for both young adults and healthy older participants for visually presented pictures (experiment 1) as well as for haptically presented objects (experiment 2). However, this was only found for attended and not for unattended stimuli. The results suggest that independently of the perceptual modality, repetition priming requires attention at encoding and that perceptual facilitation is maintained in normal aging. However, AD patients did not show priming for attended stimuli, or for unattended visual or haptic objects. These findings suggest an early deficit of selective attention in AD. Results are discussed from a cognitive neuroscience approach.
Evaluation of Pseudo-Haptic Interactions with Soft Objects in Virtual Environments.
Li, Min; Sareh, Sina; Xu, Guanghua; Ridzuan, Maisarah Binti; Luo, Shan; Xie, Jun; Wurdemann, Helge; Althoefer, Kaspar
2016-01-01
This paper proposes a pseudo-haptic feedback method conveying simulated soft-surface stiffness information through a visual interface. The method exploits a combination of two feedback techniques, namely visual feedback of soft-surface deformation and control of the indenter avatar speed, to convey stiffness information about a simulated surface of a soft object in virtual environments. The proposed method was effective in distinguishing different sizes of virtual hard nodules integrated into the simulated soft bodies. To further improve the interactive experience, the approach was extended to create a multi-point pseudo-haptic feedback system. A comparison with a tablet computer incorporating vibration feedback was conducted, using (a) nodule detection sensitivity and (b) elapsed time as performance indicators in hard-nodule detection experiments. The multi-point pseudo-haptic interaction is shown to be more time-efficient than the single-point pseudo-haptic interaction. Multi-point pseudo-haptic feedback also performs similarly well to the vibration-based feedback method on both performance measures, elapsed time and nodule detection sensitivity. This shows that the proposed method can convey detailed haptic information, even subtle cues, for virtual-environment tasks using either a computer mouse or a pressure-sensitive device as the input device. The pseudo-haptic feedback method provides an opportunity for low-cost simulation of objects with soft surfaces and hard inclusions, as occurring, for example, in ever more realistic video games with increasing emphasis on interaction with the physical environment, and in minimally invasive surgery on soft-tissue organs with embedded cancer nodules. Hence, the method can be used in many low-budget applications where haptic sensation is required, such as surgeon training or video games, either using desktop computers or portable devices, showing reasonably high fidelity in conveying stiffness perception to the user.
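The speed-control component of pseudo-haptic stiffness rendering can be illustrated with a simple control/display ratio: the avatar is slowed over stiffer simulated surfaces, which users read as resistance. The mapping and constants below are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical control/display-ratio mapping for pseudo-haptic stiffness.
def avatar_displacement(input_displacement, stiffness, k_ref=1.0):
    """Scale on-screen avatar motion down as simulated stiffness grows."""
    cd_ratio = k_ref / (k_ref + stiffness)
    return input_displacement * cd_ratio

# The same 10-unit mouse motion over a soft and a stiff virtual surface.
soft = avatar_displacement(10.0, stiffness=0.5)
hard = avatar_displacement(10.0, stiffness=4.0)
print(soft > hard)  # True: the avatar covers less ground on the stiffer surface
```

Because only the visual mapping changes, this works with any input device, which is why the method suits the low-budget, mouse- or tablet-driven scenarios the paper targets.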
Haptic, Virtual Interaction and Motor Imagery: Entertainment Tools and Psychophysiological Testing
Invitto, Sara; Faggiano, Chiara; Sammarco, Silvia; De Luca, Valerio; De Paolis, Lucio T.
2016-01-01
In this work, the perception of affordances was analysed in terms of cognitive neuroscience during an interactive experience in a virtual reality environment. In particular, we chose a virtual reality scenario based on the Leap Motion controller: this sensor device captures the movements of the user's hand and fingers, which are reproduced on a computer screen by the proper software applications. For our experiment, we employed a sample of 10 subjects matched by age and sex and chosen among university students. The subjects took part in motor imagery training and an immersive affordance condition (a virtual training with Leap Motion and a haptic training with real objects). After each training session the subjects performed a recognition task, in order to investigate event-related potential (ERP) components. The results revealed significant differences in the attentional components during the Leap Motion training. During the Leap Motion session, latencies increased in the occipital lobes, which subserve visual sensory processing; in contrast, latencies decreased in the frontal lobe, where the brain is mainly activated for attention and action planning. PMID:26999151
Haptic interfaces: Hardware, software and human performance
NASA Technical Reports Server (NTRS)
Srinivasan, Mandayam A.
1995-01-01
Virtual environments are computer-generated synthetic environments with which a human user can interact to perform a wide variety of perceptual and motor tasks. At present, most of the virtual environment systems engage only the visual and auditory senses, and not the haptic sensorimotor system that conveys the sense of touch and feel of objects in the environment. Computer keyboards, mice, and trackballs constitute relatively simple haptic interfaces. Gloves and exoskeletons that track hand postures have more interaction capabilities and are available in the market. Although desktop and wearable force-reflecting devices have been built and implemented in research laboratories, the current capabilities of such devices are quite limited. To realize the full promise of virtual environments and teleoperation of remote systems, further developments of haptic interfaces are critical. In this paper, the status and research needs in human haptics, technology development and interactions between the two are described. In particular, the excellent performance characteristics of Phantom, a haptic interface recently developed at MIT, are highlighted. Realistic sensations of single point of contact interactions with objects of variable geometry (e.g., smooth, textured, polyhedral) and material properties (e.g., friction, impedance) in the context of a variety of tasks (e.g., needle biopsy, switch panels) achieved through this device are described and the associated issues in haptic rendering are discussed.
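Single point-of-contact haptic rendering of the kind described, i.e. computing a contact force for one interaction point against a virtual object, is commonly sketched with a penalty method: a force proportional to penetration depth along the surface normal. The function below is a minimal illustration with assumed names and stiffness, not the Phantom's actual rendering code.

```python
# Penalty-based single-point contact force against a virtual sphere
# (illustrative names and stiffness value; not a real device API).
import numpy as np

def contact_force(probe_pos, sphere_center, radius, stiffness=500.0):
    """Zero force outside the sphere; otherwise a force along the outward
    normal, proportional to penetration depth (Hooke-like penalty)."""
    offset = np.asarray(probe_pos, float) - np.asarray(sphere_center, float)
    dist = np.linalg.norm(offset)
    penetration = radius - dist
    if penetration <= 0 or dist == 0:
        return np.zeros(3)
    normal = offset / dist
    return stiffness * penetration * normal

# Probe 1 cm inside a 5 cm-radius sphere: a 5 N push along +x.
f = contact_force([0.04, 0.0, 0.0], [0.0, 0.0, 0.0], 0.05)
```

In a real haptic loop this force is recomputed at around 1 kHz and sent to the device; friction, texture, and impedance effects of the kind the paper mentions are layered on top of this basic normal-force term.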
Aging and Haptic-Visual Solid Shape Matching.
Norman, J Farley; Adkins, Olivia C; Dowell, Catherine J; Hoyng, Stevie C; Gilliam, Ashley N; Pedersen, Lauren E
2017-08-01
A total of 36 younger (mean age = 21.3 years) and older adults (mean age = 73.8 years) haptically explored plastic copies of naturally shaped objects (bell peppers, Capsicum annuum) one at a time for 7 s each. The participants' task was to then choose which of 12 concurrently visible objects had the same solid shape as the one they felt. The younger and older participants explored the object shapes using either one, three, or five fingers (there were six participants for each combination of number of fingers and age group). The outcome was different from that of previous research conducted with manmade objects. Unlike Jansson and Monaci (2006), we found that for most objects, our participants' performance was unaffected by variations in the number of fingers used for haptic exploration. While there was no significant overall effect of the number of fingers, there was a significant main effect of age. The younger adults' shape matching performance was 48.6% higher than that of the older adults. When perceiving naturally shaped objects such as bell peppers, it appears that the usage of a single finger can be as effective as haptic exploration with a whole complement of five fingers.
Material recognition based on thermal cues: Mechanisms and applications.
Ho, Hsin-Ni
2018-01-01
Some materials feel colder to the touch than others, and we can use this difference in perceived coldness for material recognition. This review focuses on the mechanisms underlying material recognition based on thermal cues. It provides an overview of the physical, perceptual, and cognitive processes involved in material recognition. It also describes engineering domains in which material recognition based on thermal cues has been applied. These include haptic interfaces that seek to reproduce the sensations associated with contact in virtual environments, and tactile sensors that aim at automatic material recognition. The review concludes by considering the contributions of this line of research in both science and engineering.
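The physical basis of "feels colder" can be made concrete with the classic semi-infinite-body contact model: the skin-object interface temperature is a thermal-effusivity-weighted average of the two initial temperatures, so high-effusivity materials such as metals pull the interface further from skin temperature. This sketch uses typical textbook material constants (assumptions), not values from the review.

```python
# Semi-infinite-body contact model behind thermal material cues.
# Material constants below are typical textbook values (assumptions).
import math

def effusivity(k, rho, c):
    """Thermal effusivity e = sqrt(k * rho * c), in W*s^0.5/(m^2*K)."""
    return math.sqrt(k * rho * c)

def contact_temperature(t_skin, t_obj, e_skin, e_obj):
    """Interface temperature of two semi-infinite bodies on contact."""
    return (e_skin * t_skin + e_obj * t_obj) / (e_skin + e_obj)

e_skin = effusivity(0.37, 1000.0, 3680.0)    # human skin (approx.)
e_copper = effusivity(400.0, 8960.0, 385.0)  # copper
e_wood = effusivity(0.15, 500.0, 1600.0)     # soft wood

# Skin at 33 °C touching 20 °C objects: copper drags the interface
# temperature close to 20 °C, wood leaves it near skin temperature.
t_copper = contact_temperature(33.0, 20.0, e_skin, e_copper)
t_wood = contact_temperature(33.0, 20.0, e_skin, e_wood)
print(t_copper < t_wood)  # True: copper feels colder
```

The same relation runs in both directions mentioned in the review: haptic displays can heat or cool a contactor to mimic a target effusivity, and tactile sensors can invert measured heat flux to estimate what material was touched.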
Emotion Telepresence: Emotion Augmentation through Affective Haptics and Visual Stimuli
NASA Astrophysics Data System (ADS)
Tsetserukou, D.; Neviarouskaya, A.
2012-03-01
The paper focuses on a novel concept of emotional telepresence. The iFeel_IM! system, which is in the vanguard of this technology, integrates the 3D virtual world Second Life, an intelligent component for automatic emotion recognition from text messages, and innovative affective haptic interfaces providing additional nonverbal communication channels through simulation of emotional feedback and social touch (physical co-presence). Users can not only exchange messages but also emotionally and physically feel the presence of the communication partner (e.g., family member, friend, or beloved person). The next prototype of the system will include a tablet computer. The user will be able to interact haptically with an avatar, and thus influence its mood and the emotion of the partner. A finger gesture language will be designed for communication with the avatar. This will bring a new level of immersion to online communication.
NASA Astrophysics Data System (ADS)
Oh, Jong-Seok; Choi, Seung-Hyun; Choi, Seung-Bok
2014-01-01
This paper presents control performances of a new type of four-degrees-of-freedom (4-DOF) haptic master that can be used for robot-assisted minimally invasive surgery (RMIS). By adopting a controllable electrorheological (ER) fluid, the proposed master provides both haptic feedback and remote manipulation. In order to verify the efficacy of the proposed master and method, an experiment is conducted with deformable objects representing human organs. Since the use of real human organs is difficult due to high cost and ethical concerns, an excellent alternative, the virtual reality environment, is used in this work. In order to embody a human organ in the virtual space, the experiment adopts a volumetric deformable object represented by a shape-retaining chain-linked (S-chain) model, which has salient properties such as fast and realistic deformation of elastic objects. In the haptic architecture for RMIS, the desired torque/force and desired position originating from the object of the virtual slave and the operator of the haptic master are transferred to each other. In order to achieve the desired torque/force trajectories, a sliding mode controller (SMC), which is known to be robust to uncertainties, is designed and empirically implemented. Tracking control performances for various torque/force trajectories from the virtual slave are evaluated and presented in the time domain.
NASA Astrophysics Data System (ADS)
Valenza, G.; Greco, A.; Citi, L.; Bianchi, M.; Barbieri, R.; Scilingo, E. P.
2016-06-01
This study proposes the application of a comprehensive signal processing framework, based on inhomogeneous point-process models of heartbeat dynamics, to instantaneously assess affective haptic perception using electrocardiogram-derived information exclusively. The framework relies on inverse-Gaussian point-processes with Laguerre expansion of the nonlinear Wiener-Volterra kernels, accounting for the long-term information given by the past heartbeat events. Up to cubic-order nonlinearities allow for an instantaneous estimation of the dynamic spectrum and bispectrum of the considered cardiovascular dynamics, as well as for instantaneous measures of complexity, through Lyapunov exponents and entropy. Short-term caress-like stimuli were administered for 4.3-25 seconds on the forearms of 32 healthy volunteers (16 females) through a wearable haptic device, by selectively superimposing two levels of force, 2 N and 6 N, and two levels of velocity, 9.4 mm/s and 65 mm/s. Results demonstrated that our instantaneous linear and nonlinear features were able to finely characterize the affective haptic perception, with a recognition accuracy of 69.79% along the force dimension, and 81.25% along the velocity dimension.
Absence of modulatory action on haptic height perception with musical pitch
Geronazzo, Michele; Avanzini, Federico; Grassi, Massimo
2015-01-01
Although acoustic frequency is not a spatial property of physical objects, in common language, pitch, i.e., the psychological correlate of frequency, is often labeled spatially (i.e., “high in pitch” or “low in pitch”). Pitch-height is known to modulate (and interact with) the response of participants when they are asked to judge spatial properties of non-auditory stimuli (e.g., visual) in a variety of behavioral tasks. In the current study we investigated whether the modulatory action of pitch-height extended to the haptic estimation of height of a virtual step. We implemented a HW/SW setup which is able to render virtual 3D objects (stair-steps) haptically through a PHANTOM device, and to provide real-time continuous auditory feedback depending on the user interaction with the object. The haptic exploration was associated with a sinusoidal tone whose pitch varied as a function of the interaction point's height within (i) a narrower and (ii) a wider pitch range, or (iii) a random pitch variation acting as a control audio condition. Explorations were also performed with no sound (haptic only). Participants were instructed to explore the virtual step freely, and to communicate height estimation by opening their thumb and index finger to mimic the step riser height, or verbally by reporting the height in centimeters of the step riser. We analyzed the role of musical expertise by dividing participants into non-musicians and musicians. Results showed no effect of musical pitch on the highly realistic haptic feedback. Overall, there was no difference between the two groups in the proposed multimodal conditions. Additionally, we observed a different haptic response distribution between musicians and non-musicians when estimations of the auditory conditions are matched with estimations in the no sound condition. PMID:26441745
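The pitch-height mapping described above can be sketched as a simple function. The range endpoints below (a 10 cm step mapped roughly from C4 to C6) are illustrative choices, not the parameters used in the study.

```python
def height_to_pitch(h, h_min=0.0, h_max=0.1, f_low=262.0, f_high=1046.0):
    """Map the interaction point's height (m) to a sine-tone frequency (Hz)
    on a logarithmic scale, so equal height steps give equal musical
    intervals. All constants are illustrative, not the study's values."""
    frac = (h - h_min) / (h_max - h_min)
    frac = min(max(frac, 0.0), 1.0)          # clamp to the step's extent
    return f_low * (f_high / f_low) ** frac

# halfway up the step lands one octave above f_low (~523.5 Hz)
mid_pitch = height_to_pitch(0.05)
```

A logarithmic mapping is the natural choice here because pitch perception itself is roughly logarithmic in frequency; a linear mapping would compress the perceived pitch range near the top of the step.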
3D imaging, 3D printing and 3D virtual planning in endodontics.
Shah, Pratik; Chong, B S
2018-03-01
The adoption and adaptation of recent advances in digital technology, such as three-dimensional (3D) printed objects and haptic simulators, in dentistry have influenced teaching and/or management of cases involving implant, craniofacial, maxillofacial, orthognathic and periodontal treatments. 3D printed models and guides may help operators plan and tackle complicated non-surgical and surgical endodontic treatment and may aid skill acquisition. Haptic simulators may assist in the development of competency in endodontic procedures through the acquisition of psycho-motor skills. This review explores and discusses the potential applications of 3D printed models and guides, and haptic simulators in the teaching and management of endodontic procedures. An understanding of the pertinent technology related to the production of 3D printed objects and the operation of haptic simulators is also presented.
Optimal visual-haptic integration with articulated tools.
Takahashi, Chie; Watt, Simon J
2017-05-01
When we feel and see an object, the nervous system integrates visual and haptic information optimally, exploiting the redundancy in multiple signals to estimate properties more precisely than is possible from either signal alone. We examined whether optimal integration is similarly achieved when using articulated tools. Such tools (tongs, pliers, etc.) are a defining characteristic of human hand function, but complicate the classical sensory 'correspondence problem' underlying multisensory integration. Optimal integration requires establishing the relationship between signals acquired by different sensors (hand and eye) and therefore expressed in fundamentally unrelated units. The system must also determine when signals refer to the same property of the world (seeing and feeling the same thing) and only integrate those that do. This could be achieved by comparing the pattern of current visual and haptic input to known statistics of their normal relationship. Articulated tools disrupt this relationship, however, by altering the geometrical relationship between object properties and hand posture (the haptic signal). We examined whether different tool configurations are taken into account in visual-haptic integration. We indexed integration by measuring the precision of size estimates, and compared our results to optimal predictions from a maximum-likelihood integrator. Integration was near optimal, independent of tool configuration/hand posture, provided that visual and haptic signals referred to the same object in the world. Thus, sensory correspondence was determined correctly (trial-by-trial), taking tool configuration into account. This reveals highly flexible multisensory integration underlying tool use, consistent with the brain constructing internal models of tools' properties.
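The maximum-likelihood benchmark against which integration is judged has a closed form: each cue is weighted by its inverse variance (its reliability), and the combined variance falls below either single-cue variance. A minimal sketch, with made-up size estimates in millimetres:

```python
def mle_integrate(est_v, var_v, est_h, var_h):
    """Optimal (maximum-likelihood) combination of a visual and a haptic
    estimate of the same object property, assuming independent Gaussian
    noise on each cue."""
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_h)   # reliability weight
    est = w_v * est_v + (1.0 - w_v) * est_h             # fused estimate
    var = (var_v * var_h) / (var_v + var_h)             # fused variance
    return est, var

# vision is 3x more reliable here, so it gets weight 0.75
est, var = mle_integrate(est_v=50.0, var_v=4.0, est_h=54.0, var_h=12.0)
# est = 51.0, var = 3.0 (smaller than either 4.0 or 12.0)
```

The reduction in fused variance is exactly the precision benefit the study measured; if the nervous system failed to integrate (using only the better cue), variance would stay at 4.0 instead of dropping to 3.0.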
Menon, Samir; Brantner, Gerald; Aholt, Chris; Kay, Kendrick; Khatib, Oussama
2013-01-01
A challenging problem in motor control neuroimaging studies is the inability to perform complex human motor tasks given the Magnetic Resonance Imaging (MRI) scanner's disruptive magnetic fields and confined workspace. In this paper, we propose a novel experimental platform that combines Functional MRI (fMRI) neuroimaging, haptic virtual simulation environments, and an fMRI-compatible haptic device for real-time haptic interaction across the scanner workspace (above torso, ∼.65×.40×.20 m³). We implement this Haptic fMRI platform with a novel haptic device, the Haptic fMRI Interface (HFI), and demonstrate its suitability for motor neuroimaging studies. HFI has three degrees-of-freedom (DOF), uses electromagnetic motors to enable high-fidelity haptic rendering (>350 Hz), integrates radio frequency (RF) shields to prevent electromagnetic interference with fMRI (temporal SNR >100), and is kinematically designed to minimize currents induced by the MRI scanner's magnetic field during motor displacement (<2 cm). HFI possesses uniform inertial and force transmission properties across the workspace, and has low friction (.05-.30 N). HFI's RF noise levels, in addition, are within a 3 Tesla fMRI scanner's baseline noise variation (∼.85±.1%). Finally, HFI is haptically transparent and does not interfere with human motor tasks (tested for .4 m reaches). By allowing fMRI experiments involving complex three-dimensional manipulation with haptic interaction, Haptic fMRI enables, for the first time, non-invasive neuroscience experiments involving interactive motor tasks, object manipulation, tactile perception, and visuo-motor integration.
Perception of force and stiffness in the presence of low-frequency haptic noise
Gurari, Netta; Okamura, Allison M.; Kuchenbecker, Katherine J.
2017-01-01
Objective: This work lays the foundation for future research on quantitative modeling of human stiffness perception. Our goal was to develop a method by which a human's ability to perceive suprathreshold haptic force stimuli and haptic stiffness stimuli can be affected by adding haptic noise. Methods: Five human participants performed a same-different task with a one-degree-of-freedom force-feedback device. Participants used the right index finger to actively interact with variations of force (∼5 and ∼8 N) and stiffness (∼290 N/m) stimuli that included one of four scaled amounts of haptically rendered noise (None, Low, Medium, High). The haptic noise was zero-mean Gaussian white noise that was low-pass filtered with a 2 Hz cut-off frequency; the resulting low-frequency signal was added to the force rendered while the participant interacted with the force and stiffness stimuli. Results: We found that the precision with which participants could identify the magnitude of both the force and stiffness stimuli was affected by the magnitude of the low-frequency haptically rendered noise added to the haptic stimulus, as well as the magnitude of the haptic stimulus itself. The Weber fraction strongly correlated with the standard deviation of the low-frequency haptic noise, with a Pearson product-moment correlation coefficient of ρ > 0.83. The mean standard deviation of the low-frequency haptic noise in the haptic stimuli ranged from 0.184 N to 1.111 N across the four haptically rendered noise levels, and the corresponding mean Weber fractions spanned between 0.042 and 0.101. Conclusions: The human ability to perceive both suprathreshold haptic force and stiffness stimuli degrades in the presence of added low-frequency haptic noise.
Future work can use the reported methods to investigate how force perception and stiffness perception may relate, with possible applications in haptic watermarking and in the assessment of the functionality of peripheral pathways in individuals with haptic impairments. PMID:28575068
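The noise construction (zero-mean Gaussian white noise, low-pass filtered to keep only low-frequency content) and the Weber-fraction metric can be sketched as follows. The first-order IIR filter here is a simplification of the paper's 2 Hz low-pass stage, and all constants (update rate, noise scale, the example JND) are illustrative.

```python
import math, random

def low_freq_noise(n, fs=1000.0, fc=2.0, sigma=0.5, seed=0):
    """Zero-mean Gaussian white noise passed through a first-order low-pass
    filter with cutoff fc (Hz), sampled at the haptic update rate fs (Hz)."""
    rng = random.Random(seed)
    w = 2.0 * math.pi * fc / fs
    alpha = w / (w + 1.0)                 # first-order IIR smoothing factor
    y, out = 0.0, []
    for _ in range(n):
        y += alpha * (rng.gauss(0.0, sigma) - y)
        out.append(y)
    return out

def weber_fraction(jnd, reference):
    """Just-noticeable difference divided by the reference magnitude."""
    return jnd / reference

noise = low_freq_noise(5000)                    # 5 s of rendered noise
wf = weber_fraction(jnd=0.42, reference=5.0)    # hypothetical 0.42 N JND on ~5 N
```

The Weber fractions reported above (0.042 to 0.101) are this same quantity: the discrimination threshold ΔF divided by the reference force F.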
Pawluk, D; Kitada, R; Abramowicz, A; Hamilton, C; Lederman, S J
2011-01-01
The current study addresses the well-known "figure/ground" problem in human perception, a fundamental topic that has received surprisingly little attention from touch scientists to date. Our approach is grounded in, and directly guided by, current knowledge concerning the nature of haptic processing. Given inherent figure/ground ambiguity in natural scenes and limited sensory inputs from first contact (a "haptic glance"), we consider first whether people are even capable of differentiating figure from ground (Experiments 1 and 2). Participants were required to estimate the strength of their subjective impression that they were feeling an object (i.e., figure) as opposed to just the supporting structure (i.e., ground). Second, we propose a tripartite factor classification scheme to further assess the influence of kinetic, geometric (Experiments 1 and 2), and material (Experiment 2) factors on haptic figure/ground segmentation, complemented by more open-ended subjective responses obtained at the end of the experiment. Collectively, the results indicate that under certain conditions it is possible to segment figure from ground via a single haptic glance with a reasonable degree of certainty, and that all three factor classes influence the estimated likelihood that brief, spatially distributed fingertip contacts represent contact with an object and/or its background supporting structure.
Culbertson, Heather; Kuchenbecker, Katherine J
2017-01-01
Interacting with physical objects through a tool elicits tactile and kinesthetic sensations that comprise your haptic impression of the object. These cues, however, are largely missing from interactions with virtual objects, yielding an unrealistic user experience. This article evaluates the realism of virtual surfaces rendered using haptic models constructed from data recorded during interactions with real surfaces. The models include three components: surface friction, tapping transients, and texture vibrations. We render the virtual surfaces on a SensAble Phantom Omni haptic interface augmented with a Tactile Labs Haptuator for vibration output. We conducted a human-subject study to assess the realism of these virtual surfaces and the importance of the three model components. Following a perceptual discrepancy paradigm, subjects compared each of 15 real surfaces to a full rendering of the same surface plus versions missing each model component. The realism improvement achieved by including friction, tapping, or texture in the rendering was found to directly relate to the intensity of the surface's property in that domain (slipperiness, hardness, or roughness). A subsequent analysis of forces and vibrations measured during interactions with virtual surfaces indicated that the Omni's inherent mechanical properties corrupted the user's haptic experience, decreasing realism of the virtual surface.
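Two of the three model components lend themselves to a compact sketch: a penalty-based normal force and Coulomb friction opposing tangential motion. The texture-vibration component, by contrast, is played back from recorded data and is omitted here. Constants are illustrative, not values from the paper's surface models.

```python
def surface_force(penetration, v_tangential, k=800.0, mu=0.3):
    """Normal force from a virtual spring (penalty rendering) plus kinetic
    Coulomb friction proportional to the normal load."""
    f_n = max(k * penetration, 0.0)            # N; no pull when out of contact
    sgn = (v_tangential > 0) - (v_tangential < 0)
    f_t = -mu * f_n * sgn                      # N; opposes sliding direction
    return f_n, f_t

# 5 mm penetration while sliding at 0.1 m/s:
f_n, f_t = surface_force(0.005, 0.1)           # f_n = 4.0 N, f_t = -1.2 N
```

In the paper's framework, `mu` would be fit from recorded interactions with the real surface, which is why the friction component matters most for slippery versus grippy materials.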
Role of combined tactile and kinesthetic feedback in minimally invasive surgery.
Lim, Soo-Chul; Lee, Hyung-Kew; Park, Joonah
2014-10-18
Haptic feedback is of critical importance in surgical tasks. However, conventional surgical robots do not provide haptic feedback to surgeons during surgery. Thus, in this study, a combined tactile and kinesthetic feedback system was developed to provide haptic feedback to surgeons during robotic surgery. To assess haptic feasibility, the effects of two types of haptic feedback, kinesthetic and tactile, were examined empirically by measuring object-pulling force with a telesurgery robotics system at two desired pulling forces (1 N and 2 N). Participants answered a set of questionnaires after the experiments. The experimental results reveal reductions in force error (39.1% and 40.9%) when using haptic feedback during the 1 N and 2 N pulling tasks. Moreover, survey analyses show the effectiveness of the haptic feedback during teleoperation. The combined tactile and kinesthetic feedback of the master device in robotic surgery improves the surgeon's ability to control the interaction force applied to the tissue. Copyright © 2014 John Wiley & Sons, Ltd.
The mere exposure effect in the domain of haptics.
Jakesch, Martina; Carbon, Claus-Christian
2012-01-01
Zajonc showed that the attitude towards stimuli that one had been previously exposed to is more positive than towards novel stimuli. This mere exposure effect (MEE) has been tested extensively using various visual stimuli. Research on the MEE is sparse, however, for other sensory modalities. We used objects of two material categories (stone and wood) and two complexity levels (simple and complex) to test the influence of exposure frequency (F0 = novel stimuli, F2 = stimuli exposed twice, F10 = stimuli exposed ten times) under two sensory modalities (haptics only and haptics & vision). Effects of exposure frequency were found for high complex stimuli with significantly increasing liking from F0 to F2 and F10, but only for the stone category. Analysis of "Need for Touch" data showed the MEE in participants with high need for touch, which suggests different sensitivity or saturation levels of MEE. This different sensitivity or saturation levels might also reflect the effects of expertise on the haptic evaluation of objects. It seems that haptic and cross-modal MEEs are influenced by factors similar to those in the visual domain indicating a common cognitive basis.
Marangon, Mattia; Kubiak, Agnieszka; Króliczak, Gregory
2016-01-01
The neural bases of haptically-guided grasp planning and execution are largely unknown, especially for stimuli having no visual representations. Therefore, we used functional magnetic resonance imaging (fMRI) to monitor brain activity during haptic exploration of novel 3D complex objects, subsequent grasp planning, and the execution of the pre-planned grasps. Haptic object exploration, involving extraction of shape, orientation, and length of the to-be-grasped targets, was associated with the fronto-parietal, temporo-occipital, and insular cortex activity. Yet, only the anterior divisions of the posterior parietal cortex (PPC) of the right hemisphere were significantly more engaged in exploration of complex objects (vs. simple control disks). None of these regions were re-recruited during the planning phase. Even more surprisingly, the left-hemisphere intraparietal, temporal, and occipital areas that were significantly invoked for grasp planning did not show sensitivity to object features. Finally, grasp execution, involving the re-recruitment of the critical right-hemisphere PPC clusters, was also significantly associated with two kinds of bilateral parieto-frontal processes. The first represents transformations of grasp-relevant target features and is linked to the dorso-dorsal (lateral and medial) parieto-frontal networks. The second monitors grasp kinematics and belongs to the ventro-dorsal networks. Indeed, signal modulations associated with these distinct functions follow dorso-ventral gradients, with left aIPS showing significant sensitivity to both target features and the characteristics of the required grasp. Thus, our results from the haptic domain are consistent with the notion that the parietal processing for action guidance reflects primarily transformations from object-related to effector-related coding, and these mechanisms are rather independent of sensory input modality. PMID:26779002
Patient DF's visual brain in action: Visual feedforward control in visual form agnosia.
Whitwell, Robert L; Milner, A David; Cavina-Pratesi, Cristiana; Barat, Masihullah; Goodale, Melvyn A
2015-05-01
Patient DF, who developed visual form agnosia following ventral-stream damage, is unable to discriminate the width of objects, performing at chance, for example, when asked to open her thumb and forefinger a matching amount. Remarkably, however, DF adjusts her hand aperture to accommodate the width of objects when reaching out to pick them up (grip scaling). While this spared ability to grasp objects is presumed to be mediated by visuomotor modules in her relatively intact dorsal stream, it is possible that it may rely abnormally on online visual or haptic feedback. We report here that DF's grip scaling remained intact when her vision was completely suppressed during grasp movements, and it still dissociated sharply from her poor perceptual estimates of target size. We then tested whether providing trial-by-trial haptic feedback after making such perceptual estimates might improve DF's performance, but found that they remained significantly impaired. In a final experiment, we re-examined whether DF's grip scaling depends on receiving veridical haptic feedback during grasping. In one condition, the haptic feedback was identical to the visual targets. In a second condition, the haptic feedback was of a constant intermediate width while the visual target varied trial by trial. Despite this incongruent feedback, DF still scaled her grip aperture to the visual widths of the target blocks, showing only normal adaptation to the false haptically-experienced width. Taken together, these results strengthen the view that DF's spared grasping relies on a normal mode of dorsal-stream functioning, based chiefly on visual feedforward processing. Copyright © 2014 Elsevier B.V. All rights reserved.
Telerobotic Haptic Exploration in Art Galleries and Museums for Individuals with Visual Impairments.
Park, Chung Hyuk; Ryu, Eun-Seok; Howard, Ayanna M
2015-01-01
This paper presents a haptic telepresence system that enables visually impaired users to explore locations with rich visual observation, such as art galleries and museums, by using a telepresence robot, an RGB-D sensor (color and depth camera), and a haptic interface. The recent improvement of RGB-D sensors has enabled real-time access to 3D spatial information in the form of point clouds. However, the real-time representation of this data in the form of a tangible haptic experience has not been sufficiently explored, especially in the case of telepresence for individuals with visual impairments. Thus, the proposed system addresses the real-time haptic exploration of remote 3D information through video encoding and real-time 3D haptic rendering of the remote real-world environment. This paper investigates two scenarios in haptic telepresence, i.e., mobile navigation and object exploration in a remote environment. Participants with and without visual impairments participated in our experiments based on the two scenarios, and the system performance was validated. In conclusion, the proposed framework provides a new methodology of haptic telepresence for individuals with visual impairments by providing an enhanced interactive experience where they can remotely access public places (art galleries and museums) with the aid of haptic modality and robotic telepresence.
Giordano, Bruno L; Visell, Yon; Yao, Hsin-Yun; Hayward, Vincent; Cooperstock, Jeremy R; McAdams, Stephen
2012-05-01
Locomotion generates multisensory information about walked-upon objects. How perceptual systems use such information to get to know the environment remains unexplored. The ability to identify solid (e.g., marble) and aggregate (e.g., gravel) walked-upon materials was investigated in auditory, haptic or audio-haptic conditions, and in a kinesthetic condition where tactile information was perturbed with a vibromechanical noise. Overall, identification performance was better than chance in all experimental conditions and for both solids and the better identified aggregates. Despite large mechanical differences between the response of solids and aggregates to locomotion, for both material categories discrimination was at its worst in the auditory and kinesthetic conditions and at its best in the haptic and audio-haptic conditions. An analysis of the dominance of sensory information in the audio-haptic context supported a focus on the most accurate modality, haptics, but only for the identification of solid materials. When identifying aggregates, response biases appeared to produce a focus on the least accurate modality: kinesthesia. When walking on loose materials such as gravel, individuals do not perceive surfaces by focusing on the most accurate modality, but by focusing on the modality that would most promptly signal postural instabilities.
Palpation simulator with stable haptic feedback.
Kim, Sang-Youn; Ryu, Jee-Hwan; Lee, WooJeong
2015-01-01
The main difficulty in constructing palpation simulators is to compute and to generate stable and realistic haptic feedback without vibration. When a user haptically interacts with highly non-homogeneous soft tissues through a palpation simulator, a sudden change of stiffness in target tissues causes unstable interaction with the object. We propose a model consisting of a virtual adjustable damper and an energy measuring element. The energy measuring element gauges energy which is stored in a palpation simulator and the virtual adjustable damper dissipates the energy to achieve stable haptic interaction. To investigate the haptic behavior of the proposed method, impulse and continuous inputs are provided to target tissues. If a haptic interface point meets with the hardest portion in the target tissues modeled with a conventional method, we observe unstable motion and feedback force. However, when the target tissues are modeled with the proposed method, a palpation simulator provides stable interaction without vibration. The proposed method overcomes a problem in conventional haptic palpation simulators where unstable force or vibration can be generated if there is a big discrepancy in material property between an element and its neighboring elements in target tissues.
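The energy-measuring element plus adjustable damper can be sketched along the lines of a time-domain passivity controller: an observer accumulates the energy exchanged at the haptic interface point, and whenever it goes negative (the simulator is generating energy, the signature of unstable vibration), a damper dissipates exactly the surplus. This is a one-step sketch under assumed sign conventions, not the authors' exact formulation.

```python
def passivity_step(f, v, dt, E_prev):
    """One update of the energy observer and adjustable damper.

    f: rendered force (N), v: device velocity (m/s), dt: timestep (s),
    E_prev: energy observed so far (J). Returns the corrected force,
    updated energy, and the damping applied this step."""
    E = E_prev + f * v * dt              # energy observer
    b = 0.0
    if E < 0.0 and abs(v) > 1e-9:        # active (energy-generating) behaviour
        b = -E / (v * v * dt)            # damping that dissipates the surplus
        f = f + b * v                    # corrected force
        E = 0.0                          # observer reset: surplus removed
    return f, E, b

# an active step (-1 N against +1 m/s) gets damped back to passivity:
f_out, E, b = passivity_step(f=-1.0, v=1.0, dt=0.01, E_prev=0.0)
```

Because the damper only activates when stored energy is actually negative, the correction leaves stable interactions (such as pressing soft tissue) untouched while suppressing the force jumps at hard inclusions.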
Liu, Juan; Ando, Hiroshi
2016-01-01
Most real-world events stimulate multiple sensory modalities simultaneously. Usually, the stiffness of an object is perceived haptically. However, auditory signals also contain stiffness-related information, and people can form impressions of stiffness from the different impact sounds of metal, wood, or glass. To understand whether there is any interaction between auditory and haptic stiffness perception, and if so, whether the inferred material category is the most relevant auditory information, we conducted experiments using a force-feedback device and the modal synthesis method to present haptic stimuli and impact sound in accordance with participants’ actions, and to modulate low-level acoustic parameters, i.e., frequency and damping, without changing the inferred material categories of sound sources. We found that metal sounds consistently induced an impression of stiffer surfaces than did drum sounds in the audio-only condition, but participants haptically perceived surfaces with modulated metal sounds as significantly softer than the same surfaces with modulated drum sounds, which directly opposes the impression induced by these sounds alone. This result indicates that, although the inferred material category is strongly associated with audio-only stiffness perception, low-level acoustic parameters, especially damping, are more tightly integrated with haptic signals than the material category is. Frequency played an important role in both audio-only and audio-haptic conditions. Our study provides evidence that auditory information influences stiffness perception differently in unisensory and multisensory tasks. Furthermore, the data demonstrated that sounds with higher frequency and/or shorter decay time tended to be judged as stiffer, and contact sounds of stiff objects had no effect on the haptic perception of soft surfaces. 
We argue that the intrinsic physical relationship between object stiffness and acoustic parameters may be applied as prior knowledge to achieve robust estimation of stiffness in multisensory perception. PMID:27902718
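The modal synthesis method mentioned above builds an impact sound as a sum of exponentially damped sinusoids, which makes frequency and damping directly adjustable, exactly the manipulation the study needed. The mode tables below are illustrative, not measured from real materials.

```python
import math

def modal_impact(modes, fs=44100, dur=0.5):
    """Synthesize an impact sound by modal synthesis: a sum of damped
    sinusoids, where each mode is (frequency Hz, damping 1/s, amplitude)."""
    n = int(fs * dur)
    out = [0.0] * n
    for freq, damp, amp in modes:
        for i in range(n):
            t = i / fs
            out[i] += amp * math.exp(-damp * t) * math.sin(2 * math.pi * freq * t)
    return out

# 'metal-like' hit: higher partials, light damping (long ring)
metal = modal_impact([(800.0, 8.0, 1.0), (2400.0, 12.0, 0.5)])
# 'drum-like' hit: low fundamental, heavy damping (fast decay)
drum = modal_impact([(150.0, 60.0, 1.0)])
```

Raising the mode frequencies or the damping (shortening the decay) shifts the synthesized sound toward the "stiffer" end of judgments, consistent with the abstract's finding that higher-frequency, shorter-decay sounds tended to be judged as stiffer.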
Sensorimotor Interactions in the Haptic Perception of Virtual Objects
1997-01-01
[Only fragments of this report's text could be recovered. The legible portions note that, compared to our understanding of vision and audition, knowledge of human haptic perception is very limited, and mention preliminary work on the influence of other modalities such as vision and audition on the haptic perception of viscosity or mass.]
Multimodal Virtual Environments: MAGIC Toolkit and Visual-Haptic Interaction Paradigms
1998-01-01
[Only table-of-contents and text fragments could be recovered. The legible portions reference thin-object haptic rendering work at Northwestern University [Colgate, 1994], noting that a user can touch one side of a thin virtual object and be propelled out the opposite side, and describe an evaluation of visual-haptic interaction when there is a high correlation in motion and force between the visual and haptic realms.]
Covarrubias, Mario; Bordegoni, Monica; Cugini, Umberto
2013-01-01
In this article, we present an approach that uses both two force sensitive handles (FSH) and a flexible capacitive touch sensor (FCTS) to drive a haptic-based immersive system. The immersive system has been developed as part of a multimodal interface for product design. The haptic interface consists of a strip that can be used by product designers to evaluate the quality of a 3D virtual shape by using touch, vision and hearing and, also, to interactively change the shape of the virtual object. Specifically, the user interacts with the FSH to move the virtual object and to appropriately position the haptic interface for retrieving the six degrees of freedom required for both manipulation and modification modalities. The FCTS allows the system to track the movement and position of the user's fingers on the strip, which is used for rendering visual and sound feedback. Two evaluation experiments are described, which involve both the evaluation and the modification of a 3D shape. Results show that the use of the haptic strip for the evaluation of aesthetic shapes is effective and supports product designers in the appreciation of the aesthetic qualities of the shape. PMID:24113680
Design of a haptic device with grasp and push-pull force feedback for a master-slave surgical robot.
Hu, Zhenkai; Yoon, Chae-Hyun; Park, Samuel Byeongjun; Jo, Yung-Ho
2016-07-01
We propose a portable haptic device providing grasp (kinesthetic) and push-pull (cutaneous) sensations for optical-motion-capture master interfaces. Although optical-motion-capture master interfaces for surgical robot systems can overcome the stiffness, friction, and coupling problems of mechanical master interfaces, it is difficult to add haptic feedback to an optical-motion-capture master interface without constraining the free motion of the operator's hands. Therefore, we utilized a Bowden cable-driven mechanism to provide the grasp and push-pull sensation while retaining the free hand motion of the optical-motion capture master interface. To evaluate the haptic device, we construct a 2-DOF force sensing/force feedback system. We compare the sensed force and the reproduced force of the haptic device. Finally, a needle insertion test was done to evaluate the performance of the haptic interface in the master-slave system. The results demonstrate that both the grasp force feedback and the push-pull force feedback provided by the haptic interface closely matched with the sensed forces of the slave robot. We successfully apply our haptic interface in the optical-motion-capture master-slave system. The results of the needle insertion test showed that our haptic feedback can provide more safety than merely visual observation. We develop a suitable haptic device to produce both kinesthetic grasp force feedback and cutaneous push-pull force feedback. Our future research will include further objective performance evaluations of the optical-motion-capture master-slave robot system with our haptic interface in surgical scenarios.
Shadow-driven 4D haptic visualization.
Zhang, Hui; Hanson, Andrew
2007-01-01
Just as we can work with two-dimensional floor plans to communicate 3D architectural design, we can exploit reduced-dimension shadows to manipulate the higher-dimensional objects generating the shadows. In particular, by taking advantage of physically reactive 3D shadow-space controllers, we can transform the task of interacting with 4D objects to a new level of physical reality. We begin with a teaching tool that uses 2D knot diagrams to manipulate the geometry of 3D mathematical knots via their projections; our unique 2D haptic interface allows the user to become familiar with sketching, editing, exploration, and manipulation of 3D knots rendered as projected images on a 2D shadow space. By combining graphics and collision-sensing haptics, we can enhance the 2D shadow-driven editing protocol to successfully leverage 2D pen-and-paper or blackboard skills. Building on the reduced-dimension 2D editing tool for manipulating 3D shapes, we develop the natural analogy to produce a reduced-dimension 3D tool for manipulating 4D shapes. By physically modeling the correct properties of 4D surfaces, their bending forces, and their collisions in the 3D haptic controller interface, we can support full-featured physical exploration of 4D mathematical objects in a manner that is otherwise far beyond the experience accessible to human beings. As far as we are aware, this paper reports the first interactive system with force-feedback that provides "4D haptic visualization" permitting the user to model and interact with 4D cloth-like objects.
Petrini, Karin; Remark, Alicia; Smith, Louise; Nardini, Marko
2014-05-01
When visual information is available, human adults, but not children, have been shown to reduce sensory uncertainty by taking a weighted average of sensory cues. In the absence of reliable visual information (e.g. an extremely dark environment, visual disorders), the use of other information is vital. Here we ask how humans combine haptic and auditory information from childhood. In the first experiment, adults and children aged 5 to 11 years judged the relative sizes of two objects in auditory, haptic, and non-conflicting bimodal conditions. In Experiment 2, different groups of adults and children were tested in non-conflicting and conflicting bimodal conditions. In Experiment 1, adults reduced sensory uncertainty by integrating the cues optimally, while children did not. In Experiment 2, adults and children used similar weighting strategies to solve audio-haptic conflict. These results suggest that, in the absence of visual information, optimal integration of cues for discrimination of object size develops late in childhood. © 2014 The Authors. Developmental Science Published by John Wiley & Sons Ltd.
Collision detection and modeling of rigid and deformable objects in laparoscopic simulator
NASA Astrophysics Data System (ADS)
Dy, Mary-Clare; Tagawa, Kazuyoshi; Tanaka, Hiromi T.; Komori, Masaru
2015-03-01
Laparoscopic simulators are viable alternatives for surgical training and rehearsal. Haptic devices can also be incorporated with virtual reality simulators to provide additional cues to the users. However, to provide realistic feedback, the haptic device must be updated at 1 kHz. On the other hand, realistic visual cues, that is, the collision detection and deformation between interacting objects, must be rendered at a rate of at least 30 fps. Our current laparoscopic simulator detects the collision between a point on the tool tip and the organ surfaces, with haptic devices attached to actual tool tips for realistic tool manipulation. The triangular-mesh organ model is rendered using a mass-spring deformation model or finite element method-based models. In this paper, we investigated multi-point-based collision detection on the rigid tool rods. Based on the preliminary results, we propose a method to improve the collision detection scheme and speed up the organ deformation reaction. We discuss our proposal for an efficient method to compute simultaneous multiple collisions between rigid (laparoscopic tools) and deformable (organs) objects, and perform the subsequent collision response, with haptic feedback, in real time.
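The multi-point rod test described above can be illustrated with a minimal sketch: sample points along the rigid tool rod and flag those penetrating the organ surface. A sphere stands in for the simulator's triangular organ mesh, and the geometry and sample count are assumptions for illustration, not values from the paper:

```python
import numpy as np

def rod_contact_points(base, tip, organ_center, organ_radius, n_samples=10):
    """Sample points along a rigid rod (base -> tip) and report which
    penetrate a spherical proxy of the organ surface (illustrative only;
    the simulator tests rod points against a triangular organ mesh)."""
    ts = np.linspace(0.0, 1.0, n_samples)
    points = base[None, :] + ts[:, None] * (tip - base)[None, :]
    dists = np.linalg.norm(points - organ_center[None, :], axis=1)
    penetrating = dists < organ_radius
    # Penetration depth feeds the subsequent collision-response force.
    depths = np.where(penetrating, organ_radius - dists, 0.0)
    return points[penetrating], depths[penetrating]

# A rod passing through a unit sphere centred at the origin:
pts, depths = rod_contact_points(np.array([-2.0, 0.5, 0.0]),
                                 np.array([2.0, 0.5, 0.0]),
                                 np.array([0.0, 0.0, 0.0]), 1.0)
```

Testing all sampled points at once, as above, keeps the check cheap enough to sit inside the fast haptic loop while the slower visual deformation update runs separately.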
NASA Astrophysics Data System (ADS)
Han, Young-Min; Choi, Seung-Bok
2008-12-01
This paper presents the control performance of an electrorheological (ER) fluid-based haptic master device connected to a virtual slave environment that can be used for minimally invasive surgery (MIS). An already developed haptic joint featuring controllable ER fluid and a spherical joint mechanism is adopted for the master system. Medical forceps and an angular position measuring device are devised and integrated with the joint to establish the MIS master system. In order to embody a human organ in virtual space, a volumetric deformable object is used. The virtual object is then mathematically formulated by a shape-retaining chain-linked (S-chain) model. After evaluating the reflection force, computation time and compatibility with real-time control, the haptic architecture for MIS is established by incorporating the virtual slave with the master device so that the reflection force for the object of the virtual slave and the desired position for the master operator are transferred to each other. In order to achieve the desired force trajectories, a sliding mode controller is formulated and then experimentally realized. Tracking control performances for various force trajectories are evaluated and presented in the time domain.
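A generic sliding mode force controller of the kind the abstract describes can be sketched as follows. The sliding surface s = de/dt + λe and all gains are illustrative assumptions; the paper's controller is formulated specifically for its ER-fluid haptic joint:

```python
import numpy as np

def sliding_mode_force_controller(f_desired, f_measured, e_prev, dt,
                                  lam=50.0, K=2.0, phi=0.1):
    """One step of a generic sliding mode force controller (sketch).

    s = de/dt + lam * e defines the sliding surface; the control input
    drives the force-tracking error e toward s = 0. The saturation
    sat(s/phi) replaces sign(s) inside a boundary layer of width phi to
    reduce chattering. Gains lam, K, phi are assumed, not from the paper.
    """
    e = f_desired - f_measured
    e_dot = (e - e_prev) / dt
    s = e_dot + lam * e
    sat = np.clip(s / phi, -1.0, 1.0)  # boundary-layer saturation
    u = K * sat                        # control input (e.g., field command)
    return u, e
```

Called once per control cycle, the returned error is fed back as `e_prev` on the next step; far from the surface the output saturates at ±K, giving the robustness that sliding mode control is chosen for.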
Learning, retention, and generalization of haptic categories
NASA Astrophysics Data System (ADS)
Do, Phuong T.
This dissertation explored how haptic concepts are learned, retained, and generalized to the same or a different modality. Participants learned to classify objects into three categories either visually or haptically via different training procedures, followed by an immediate or delayed transfer test. Experiment I involved visual versus haptic learning and transfer. Intermodal matching between vision and haptics was investigated in Experiment II. Experiments III and IV examined intersensory conflict in within- and between-category bimodal situations to determine the degree of perceptual dominance between sight and touch. Experiment V explored the intramodal relationship between similarity and categorization in a psychological space, as revealed by MDS analysis of similarity judgments. Major findings were: (1) visual examination resulted in relatively higher performance accuracy than haptic learning; (2) systematic training produced better category learning of haptic concepts across all modality conditions; (3) the category prototypes were rated newer than any transfer stimulus following learning, both immediately and after a week's delay; and (4) although they converged at the apex of two transformational trajectories, the category prototypes became more central to their respective categories and increasingly structured as a function of learning. Implications for theories of multimodal similarity and categorization behavior are discussed in terms of discrimination learning, sensory integration, and dominance relation.
Subthalamic nucleus deep brain stimulation improves somatosensory function in Parkinson's disease.
Aman, Joshua E; Abosch, Aviva; Bebler, Maggie; Lu, Chia-Hao; Konczak, Jürgen
2014-02-01
An established treatment for the motor symptoms of Parkinson's disease (PD) is deep brain stimulation (DBS) of the subthalamic nucleus (STN). Mounting evidence suggests that PD is also associated with somatosensory deficits, yet the effect of STN-DBS on somatosensory processing is largely unknown. This study investigated whether STN-DBS affects somatosensory processing, specifically the processing of tactile and proprioceptive cues, by systematically examining the accuracy of haptic perception of object size. (Haptic perception refers to one's ability to extract object features such as shape and size by active touch.) Without vision, 13 PD patients with implanted STN-DBS and 13 healthy controls haptically explored the heights of 2 successively presented 3-dimensional (3D) blocks using a precision grip. Participants verbally indicated which block was taller and then used their nonprobing hand to motorically match the perceived size of the comparison block. Patients were tested during ON and OFF stimulation, following a 12-hour medication washout period. First, when compared to controls, the PD group's haptic discrimination threshold during OFF stimulation was elevated by 192% and mean hand aperture error was increased by 105%. Second, DBS lowered the haptic discrimination threshold by 26% and aperture error decreased by 20%. Third, during DBS ON, probing with the motorically more affected hand decreased haptic precision compared to probing with the less affected hand. This study offers the first evidence that STN-DBS improves haptic precision, further indicating that somatosensory function is improved by STN-DBS. We conclude that DBS-related improvements are not explained by improvements in motor function alone, but rather by enhanced somatosensory processing. © 2013 Movement Disorder Society.
Do Haptic Representations Help Complex Molecular Learning?
ERIC Educational Resources Information Center
Bivall, Petter; Ainsworth, Shaaron; Tibell, Lena A. E.
2011-01-01
This study explored whether adding a haptic interface (that provides users with somatosensory information about virtual objects by force and tactile feedback) to a three-dimensional (3D) chemical model enhanced students' understanding of complex molecular interactions. Two modes of the model were compared in a between-groups pre- and posttest…
Heterogeneous Deformable Modeling of Bio-Tissues and Haptic Force Rendering for Bio-Object Modeling
NASA Astrophysics Data System (ADS)
Lin, Shiyong; Lee, Yuan-Shin; Narayan, Roger J.
This paper presents a novel technique for modeling soft biological tissues as well as the development of an innovative interface for bio-manufacturing and medical applications. Heterogeneous deformable models may be used to represent the actual internal structures of deformable biological objects, which possess multiple components and nonuniform material properties. Both heterogeneous deformable object modeling and accurate haptic rendering can greatly enhance the realism and fidelity of virtual reality environments. In this paper, a tri-ray node snapping algorithm is proposed to generate a volumetric heterogeneous deformable model from a set of object interface surfaces between different materials. A constrained local static integration method is presented for simulating deformation and accurate force feedback based on the material properties of a heterogeneous structure. Biological soft tissue modeling is used as an example to demonstrate the proposed techniques. By integrating the heterogeneous deformable model into a virtual environment, users can both observe different materials inside a deformable object as well as interact with it by touching the deformable object using a haptic device. The presented techniques can be used for surgical simulation, bio-product design, bio-manufacturing, and medical applications.
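The nonuniform material properties of a heterogeneous deformable model can be expressed, in the simplest case, as per-spring stiffness in a mass-spring network. This is a sketch only; the paper's simulation uses a constrained local static integration scheme, which is not shown here:

```python
import numpy as np

def spring_forces(positions, springs):
    """Accumulate Hookean spring forces in a mass-spring model whose
    stiffness varies per spring, the simplest way to encode nonuniform
    material properties of a heterogeneous deformable object (sketch).

    positions: (n, 3) array of node positions.
    springs: list of (i, j, rest_length, stiffness) tuples, where the
             stiffness can differ between material regions.
    """
    forces = np.zeros_like(positions)
    for i, j, rest, k in springs:
        d = positions[j] - positions[i]
        length = np.linalg.norm(d)
        f = k * (length - rest) * d / length  # Hooke's law along the spring
        forces[i] += f
        forces[j] -= f
    return forces
```

Springs crossing an interface between two materials would simply carry a different stiffness constant, which is the effect the tri-ray node snapping model makes geometrically precise.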
Haptic guidance of overt visual attention.
List, Alexandra; Iordanescu, Lucica; Grabowecky, Marcia; Suzuki, Satoru
2014-11-01
Research has shown that information accessed from one sensory modality can influence perceptual and attentional processes in another modality. Here, we demonstrated a novel crossmodal influence of haptic-shape information on visual attention. Participants visually searched for a target object (e.g., an orange) presented among distractor objects, fixating the target as quickly as possible. While searching for the target, participants held (never viewed and out of sight) an item of a specific shape in their hands. In two experiments, we demonstrated that the time for the eyes to reach a target (a measure of overt visual attention) was reduced when the shape of the held item (e.g., a sphere) was consistent with the shape of the visual target (e.g., an orange), relative to when the held shape was unrelated to the target (e.g., a hockey puck) or when no shape was held. This haptic-to-visual facilitation occurred despite the fact that the held shapes were not predictive of the visual targets' shapes, suggesting that the crossmodal influence occurred automatically, reflecting shape-specific haptic guidance of overt visual attention.
Invited Article: A review of haptic optical tweezers for an interactive microworld exploration
NASA Astrophysics Data System (ADS)
Pacoret, Cécile; Régnier, Stéphane
2013-08-01
This paper is the first review of haptic optical tweezers, a new technique which associates force feedback teleoperation with optical tweezers. This technique allows users to explore the microworld by sensing and exerting picoNewton-scale forces with trapped microspheres. Haptic optical tweezers also allow improved dexterity of micromanipulation and micro-assembly. One of the challenges of this technique is to sense and magnify picoNewton-scale forces by a factor of 10¹² to enable human operators to perceive interactions that they have never experienced before, such as adhesion phenomena, extremely low inertia, and high frequency dynamics of extremely small objects. The design of optical tweezers for high quality haptic feedback is challenging, given the requirements for very high sensitivity and dynamic stability. The concept, design process, and specification of optical tweezers reviewed here are focused on those intended for haptic teleoperation. In this paper, two new specific designs as well as the current state-of-the-art are presented. Moreover, the remaining important issues are identified for further developments. The initial results obtained are promising and demonstrate that optical tweezers have a significant potential for haptic exploration of the microworld. Haptic optical tweezers will become an invaluable tool for force feedback micromanipulation of biological samples and nano- and micro-assembly parts.
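The core scaling idea (magnifying piconewton forces to the newton scale while demagnifying hand motion to micrometres) can be sketched as a direct bilateral coupling. The scale factors below are assumptions for illustration, and a real coupling must also address the stability and filtering issues the review discusses:

```python
def haptic_coupling(f_trap_newtons, master_pos_m,
                    force_scale=1e12, position_scale=1e-6):
    """Direct force/position scaling for haptic teleoperation of optical
    tweezers (illustrative sketch, not the review's full coupling).

    A trapping force of a few piconewtons (1e-12 N) is magnified by
    ~1e12 so the operator feels it at the newton scale, while the
    operator's metre-scale hand motion is demagnified to micrometre-scale
    motion of the optical trap.
    """
    f_master = force_scale * f_trap_newtons      # pN -> N felt at the handle
    trap_target = position_scale * master_pos_m  # m  -> trap displacement
    return f_master, trap_target

f, x = haptic_coupling(3e-12, 0.05)  # 3 pN trap force, 5 cm hand motion
```

With these factors, the 3 pN force is rendered as 3 N at the handle and the 5 cm hand motion commands a 50 nm trap displacement, which is the regime where adhesion and low-inertia effects become perceptible.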
Jin, Seung-A Annie
2010-06-01
This study gauged the effects of force feedback in the Novint Falcon haptics system on the sensory and cognitive dimensions of a virtual test-driving experience. First, in order to explore the effects of tactile stimuli with force feedback on users' sensory experience, feelings of physical presence (the extent to which virtual physical objects are experienced as actual physical objects) were measured after participants used the haptics interface. Second, to evaluate the effects of force feedback on the cognitive dimension of consumers' virtual experience, this study investigated brand personality perception. The experiment utilized the Novint Falcon haptics controller to induce immersive virtual test-driving through tactile stimuli. The author designed a two-group (haptics stimuli with force feedback versus no force feedback) comparison experiment (N = 238) by manipulating the level of force feedback. Users in the force feedback condition were exposed to tactile stimuli involving various force feedback effects (e.g., terrain effects, acceleration, and lateral forces) while test-driving a rally car. In contrast, users in the control condition test-drove the rally car using the Novint Falcon but were not given any force feedback. Results of ANOVAs indicated that (a) users exposed to force feedback felt stronger physical presence than those in the no force feedback condition, and (b) users exposed to haptics stimuli with force feedback perceived the brand personality of the car to be more rugged than those in the control condition. Managerial implications of the study for product trial in the business world are discussed.
Postma, Albert; Zuidhoek, Sander; Noordzij, Matthijs L; Kappers, Astrid M L
2007-01-01
The roles of visual and haptic experience in different aspects of haptic processing of objects in peripersonal space are examined. In three trials, early-blind, late-blind, and blindfolded-sighted individuals had to match ten shapes haptically to the cut-outs in a board as fast as possible. Both blind groups were much faster than the sighted in all three trials. All three groups improved considerably from trial to trial. In particular, the sighted group showed a strong improvement from the first to the second trial. While superiority of the blind remained for speeded matching after rotation of the stimulus frame, coordinate positional-memory scores in a non-speeded free-recall trial showed no significant differences between the groups. Moreover, when assessed with a verbal response, categorical spatial-memory appeared strongest in the late-blind group. The role of haptic and visual experience thus appears to depend on the task aspect tested.
Object discrimination using optimized multi-frequency auditory cross-modal haptic feedback.
Gibson, Alison; Artemiadis, Panagiotis
2014-01-01
As the field of brain-machine interfaces and neuro-prosthetics continues to grow, there is a high need for sensor and actuation mechanisms that can provide haptic feedback to the user. Current technologies employ expensive, invasive and often inefficient force feedback methods, resulting in an unrealistic solution for individuals who rely on these devices. This paper responds through the development, integration and analysis of a novel feedback architecture where haptic information during the neural control of a prosthetic hand is perceived through multi-frequency auditory signals. Through representing force magnitude with volume and force location with frequency, the feedback architecture can translate the haptic experiences of a robotic end effector into the alternative sensory modality of sound. Previous research with the proposed cross-modal feedback method confirmed its learnability, so the current work aimed to investigate which frequency map (i.e. frequency-specific locations on the hand) is optimal in helping users distinguish between hand-held objects and tasks associated with them. After short use with the cross-modal feedback during the electromyographic (EMG) control of a prosthetic hand, testing results show that users are able to use auditory feedback alone to discriminate between everyday objects. While users showed adaptation to three different frequency maps, the simplest map containing only two frequencies was found to be the most useful in discriminating between objects. This outcome provides support for the feasibility and practicality of the cross-modal feedback method during the neural control of prosthetics.
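The mapping described (force magnitude to volume, force location to frequency) can be sketched as follows. The specific frequencies, the normalisation, and the location names are illustrative assumptions, not the paper's actual maps:

```python
def force_to_audio(forces_by_location, freq_map, max_force=10.0):
    """Map haptic readings to a multi-frequency audio mixture (sketch).

    forces_by_location: dict such as {"thumb": 2.0, "palm": 7.5} (newtons)
    freq_map: dict assigning a tone frequency (Hz) to each hand location,
              e.g. {"thumb": 440.0, "palm": 880.0} -- a two-frequency map
              like the simplest one the study found most usable.
    Returns (frequency_hz, volume) pairs with volume clamped to [0, 1].
    """
    tones = []
    for location, force in forces_by_location.items():
        volume = min(max(force, 0.0) / max_force, 1.0)  # magnitude -> loudness
        tones.append((freq_map[location], volume))
    return tones

tones = force_to_audio({"thumb": 2.0, "palm": 7.5},
                       {"thumb": 440.0, "palm": 880.0})
```

Each returned pair would drive one oscillator, so a grasp produces a chord whose component loudnesses track the contact forces at the corresponding locations.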
OzBot and haptics: remote surveillance to physical presence
NASA Astrophysics Data System (ADS)
Mullins, James; Fielding, Mick; Nahavandi, Saeid
2009-05-01
This paper reports on robotic and haptic technologies and capabilities developed for the law enforcement and defence community within Australia by the Centre for Intelligent Systems Research (CISR). The OzBot series of small and medium surveillance robots have been designed in Australia and evaluated by law enforcement and defence personnel to determine suitability and ruggedness in a variety of environments. Using custom developed digital electronics and featuring expandable data buses including RS485, I2C, RS232, video and Ethernet, the robots can be directly connected to many off-the-shelf payloads such as gas sensors, x-ray sources and camera systems including thermal and night vision. Differentiating the OzBot platform from its peers is its ability to be integrated directly with haptic technology or the 'haptic bubble' developed by CISR. Haptic interfaces allow an operator to physically 'feel' remote environments through position-force control and experience realistic force feedback. By adding the capability to remotely grasp an object, feel its weight, texture and other physical properties in real-time from the remote ground control unit, an operator's situational awareness is greatly improved through haptic augmentation in an environment where remote-system feedback is often limited.
Haptics-based dynamic implicit solid modeling.
Hua, Jing; Qin, Hong
2004-01-01
This paper systematically presents a novel, interactive solid modeling framework, Haptics-based Dynamic Implicit Solid Modeling, which is founded upon volumetric implicit functions and powerful physics-based modeling. In particular, we augment our modeling framework with a haptic mechanism in order to take advantage of additional realism associated with a 3D haptic interface. Our dynamic implicit solids are semi-algebraic sets of volumetric implicit functions and are governed by the principles of dynamics, hence responding to sculpting forces in a natural and predictable manner. In order to directly manipulate existing volumetric data sets as well as point clouds, we develop a hierarchical fitting algorithm to reconstruct and represent discrete data sets using our continuous implicit functions, which permit users to further design and edit those existing 3D models in real-time using a large variety of haptic and geometric toolkits, and visualize their interactive deformation at arbitrary resolution. The additional geometric and physical constraints afford more sophisticated control of the dynamic implicit solids. The versatility of our dynamic implicit modeling enables the user to easily modify both the geometry and the topology of modeled objects, while the inherent physical properties can offer an intuitive haptic interface for direct manipulation with force feedback.
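A minimal sketch of haptic interaction with an implicit solid f(x) ≤ 0: a penalty force along the gradient (the outward normal), proportional to penetration depth. The sphere function, gains, and finite-difference gradient are assumptions for illustration; the dynamic, physics-based behaviour of the paper's solids is omitted:

```python
import numpy as np

def implicit_sphere(p, radius=1.0):
    """Signed implicit function for a unit sphere: negative inside."""
    return np.linalg.norm(p) - radius

def haptic_force(p, f=implicit_sphere, k=200.0, h=1e-5):
    """Penalty force on a haptic probe at point p against the implicit
    solid f(x) <= 0 (sketch). The force points along the gradient of f
    (the outward normal) and grows with penetration depth -f(p)."""
    depth = -f(p)
    if depth <= 0.0:
        return np.zeros(3)  # probe outside the solid: no force
    # Central-difference estimate of the gradient of the implicit function.
    grad = np.array([(f(p + h * e) - f(p - h * e)) / (2 * h)
                     for e in np.eye(3)])
    normal = grad / np.linalg.norm(grad)
    return k * depth * normal
```

Because the solid is defined by a continuous scalar field, the same evaluation serves both rendering (isosurface extraction at any resolution) and force feedback, which is part of the appeal of implicit modeling.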
A pseudo-haptic knot diagram interface
NASA Astrophysics Data System (ADS)
Zhang, Hui; Weng, Jianguang; Hanson, Andrew J.
2011-01-01
To make progress in understanding knot theory, we will need to interact with the projected representations of mathematical knots, which are of course continuous in 3D but significantly interrupted in the projective images. One way to achieve such a goal would be to design an interactive system that allows us to sketch 2D knot diagrams by taking advantage of a collision-sensing controller and explore their underlying smooth structures through a continuous motion. Recent advances in interaction techniques allow progress in this direction. Pseudo-haptics, which simulates haptic effects using pure visual feedback, can be used to develop such an interactive system. This paper outlines one such pseudo-haptic knot diagram interface. Our interface derives from the familiar pencil-and-paper process of drawing 2D knot diagrams and provides haptic-like sensations to facilitate the creation and exploration of knot diagrams. A centerpiece of the interaction model simulates a "physically" reactive mouse cursor, which is exploited to resolve the apparent conflict between the continuous structure of the actual smooth knot and the visual discontinuities in the knot diagram representation. Another value in exploiting pseudo-haptics is that an acceleration (or deceleration) of the mouse cursor (or surface locator) can be used to indicate the slope of the curve (or surface) whose projective image is being explored. By exploiting these additional visual cues, we proceed to a full-featured extension to a pseudo-haptic 4D visualization system that simulates the continuous navigation on 4D objects and allows us to sense the bumps and holes in the fourth dimension. Preliminary tests of the software show that main features of the interface overcome some expected perceptual limitations in our interaction with 2D knot diagrams of 3D knots and 3D projective images of 4D mathematical objects.
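The pseudo-haptic cue mentioned above (cursor acceleration or deceleration indicating slope) amounts to modulating the control/display gain of the cursor by the surface gradient along the motion direction. The gain law and constants below are one plausible formulation, assumed for illustration rather than taken from the paper:

```python
import math

def pseudo_haptic_gain(slope, base_gain=1.0, k=2.0):
    """Control/display gain for a pseudo-haptic cursor (sketch).

    slope is the surface gradient along the motion direction. Moving
    "uphill" (slope > 0) lowers the gain so the displayed cursor lags
    the hand, which users perceive as resistance; moving "downhill"
    raises it, perceived as acceleration.
    """
    return base_gain * math.exp(-k * slope)

def displayed_cursor_step(hand_step, slope):
    """Scale the physical mouse displacement before displaying it."""
    return hand_step * pseudo_haptic_gain(slope)
```

On flat regions the cursor tracks the hand one-to-one; bumps and holes in the explored image then show up purely as speed changes, with no force-feedback hardware involved.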
Virtual Reality simulator for dental anesthesia training in the inferior alveolar nerve block.
Corrêa, Cléber Gimenez; Machado, Maria Aparecida de Andrade Moreira; Ranzini, Edith; Tori, Romero; Nunes, Fátima de Lourdes Santos
2017-01-01
This study shows the development and validation of a dental anesthesia-training simulator, specifically for the inferior alveolar nerve block (IANB). The system developed provides the tactile sensation of inserting a real needle in a human patient, using Virtual Reality (VR) techniques and a haptic device that can provide a perceived force feedback in the needle insertion task during the anesthesia procedure. To simulate a realistic anesthesia procedure, a Carpule syringe was coupled to a haptic device. The Volere method was used to elicit requirements from users in the Dentistry area; repeated-measures two-way ANOVA (analysis of variance), Tukey post-hoc tests, and averages were used to analyze the results. A questionnaire-based subjective evaluation method was applied to collect information about the simulator, and 26 people participated in the experiments (12 beginners, 12 at intermediate level, and 2 experts). The questionnaire included profile, preferences (number of viewpoints, texture of the objects, and haptic device handler), as well as visual (appearance, scale, and position of objects) and haptic aspects (motion space, tactile sensation, and motion reproduction). The visual aspect was considered appropriate, while the haptic feedback must be improved, which the users can do by calibrating the virtual tissues' resistance. The evaluation of visual aspects was influenced by the participants' experience, according to the ANOVA test (F = 15.6, p = 0.0002, significant at p < 0.01). The user preferences were the simulator with two viewpoints, objects with texture based on images, and the device with a syringe coupled to it. The simulation was considered thoroughly satisfactory for anesthesia training, considering the needle insertion task, which includes the correct insertion point and depth, as well as the perception of tissue resistance during insertion.
A Study on Immersion and Presence of a Portable Hand Haptic System for Immersive Virtual Reality
Kim, Mingyu; Jeon, Changyu; Kim, Jinmo
2017-01-01
This paper proposes a portable hand haptic system using Leap Motion as a haptic interface that can be used in various virtual reality (VR) applications. The proposed hand haptic system was designed as an Arduino-based sensor architecture to enable a variety of tactile senses at low cost, and is also equipped with a portable wristband. As a haptic system designed for tactile feedback, the proposed system first identifies the left and right hands and then sends tactile senses (vibration and heat) to each fingertip (thumb and index finger). It is incorporated into a wearable band-type system, making its use easy and convenient. Next, hand motion is accurately captured using the sensor of the hand tracking system and is used for virtual object control, thus achieving interaction that enhances immersion. A VR application was designed with the purpose of testing the immersion and presence aspects of the proposed system. Lastly, technical and statistical tests were carried out to assess whether the proposed haptic system can provide a new immersive presence to users. According to the results of the presence questionnaire and the simulator sickness questionnaire, we confirmed that the proposed hand haptic system, in comparison to the existing interaction that uses only the hand tracking system, provided greater presence and a more immersive environment in the virtual reality. PMID:28513545
Hongyi Xu; Barbic, Jernej
2017-01-01
We present an algorithm for fast continuous collision detection between points and signed distance fields, and demonstrate how to robustly use it for 6-DoF haptic rendering of contact between objects with complex geometry. Continuous collision detection is often needed in computer animation, haptics, and virtual reality applications, but has so far only been investigated for polygon (triangular) geometry representations. We demonstrate how to robustly and continuously detect intersections between points and level sets of the signed distance field. We suggest using an octree subdivision of the distance field for fast traversal of distance field cells. We also give a method to resolve continuous collisions between point clouds organized into a tree hierarchy and a signed distance field, enabling rendering of contact between rigid objects with complex geometry. We investigate and compare two 6-DoF haptic rendering methods now applicable to point-versus-distance field contact for the first time: continuous integration of penalty forces, and a constraint-based method. An experimental comparison to discrete collision detection demonstrates that the continuous method is more robust and can correctly resolve collisions even under high velocities and during complex contact.
Aging and the haptic perception of 3D surface shape.
Norman, J Farley; Kappers, Astrid M L; Beers, Amanda M; Scott, A Kate; Norman, Hideko F; Koenderink, Jan J
2011-04-01
Two experiments evaluated the ability of older and younger adults to perceive the three-dimensional (3D) shape of object surfaces from active touch (haptics). The ages of the older adults ranged from 64 to 84 years, while those of the younger adults ranged from 18 to 27 years. In Experiment 1, the participants haptically judged the shape of large (20 cm diameter) surfaces with an entire hand. In contrast, in Experiment 2, the participants explored the shape of small (5 cm diameter) surfaces with a single finger. The haptic surfaces varied in shape index (Koenderink, Solid shape, 1990; Koenderink, Image and Vision Computing, 10, 557-564, 1992) from -1.0 to +1.0 in steps of 0.25. For both types of surfaces (large and small), the participants were able to judge surface shape reliably. The older participants' judgments of surface shape were just as accurate and precise as those of the younger participants. The results of the current study demonstrate that while older adults do possess reductions in tactile sensitivity and acuity, they nevertheless can effectively perceive 3D surface shape from haptic exploration.
Neurosurgical tactile discrimination training with haptic-based virtual reality simulation.
Patel, Achal; Koshy, Nick; Ortega-Barnett, Juan; Chan, Hoi C; Kuo, Yong-Fan; Luciano, Cristian; Rizzi, Silvio; Matulyauskas, Martin; Kania, Patrick; Banerjee, Pat; Gasco, Jaime
2014-12-01
To determine if a computer-based simulation with haptic technology can help surgical trainees improve tactile discrimination using surgical instruments. Twenty junior medical students participated in the study and were randomized into two groups. Subjects in Group A participated in virtual simulation training using the ImmersiveTouch simulator (ImmersiveTouch, Inc., Chicago, IL, USA) that required differentiating the firmness of virtual spheres using tactile and kinesthetic sensation via haptic technology. Subjects in Group B did not undergo any training. With their visual fields obscured, subjects in both groups were then evaluated on their ability to use the suction and bipolar instruments to find six elastothane objects with areas ranging from 1.5 to 3.5 cm² embedded in a urethane foam brain cavity model while relying on tactile and kinesthetic sensation only. A total of 73.3% of the subjects in Group A (simulation training) were able to find the brain cavity objects in comparison to 53.3% of the subjects in Group B (no training) (P = 0.0183). There was a statistically significant difference in the total number of Group A subjects able to find smaller brain cavity objects (size ≤ 2.5 cm²) compared to that in Group B (72.5 vs. 40%, P = 0.0032). On the other hand, no significant difference in the number of subjects able to detect larger objects (size ≥ 3 cm²) was found between Groups A and B (75 vs. 80%, P = 0.7747). Virtual computer-based simulators with integrated haptic technology may improve tactile discrimination required for microsurgical technique.
Haptic exploratory behavior during object discrimination: a novel automatic annotation method.
Jansen, Sander E M; Bergmann Tiest, Wouter M; Kappers, Astrid M L
2015-01-01
In order to acquire information concerning the geometry and material of handheld objects, people tend to execute stereotypical hand movement patterns called haptic Exploratory Procedures (EPs). Manual annotation of haptic exploration trials with these EPs is a laborious task that is affected by subjectivity, attentional lapses, and viewing angle limitations. In this paper we propose an automatic EP annotation method based on position and orientation data from motion tracking sensors placed on both hands and inside a stimulus. A set of kinematic variables is computed from these data and compared to sets of predefined criteria for each of four EPs. Whenever all criteria for a specific EP are met, it is assumed that that particular hand movement pattern was performed. This method is applied to data from an experiment where blindfolded participants haptically discriminated between objects differing in hardness, roughness, volume, and weight. In order to validate the method, its output is compared to manual annotation based on video recordings of the same trials. Although mean pairwise agreement is lower for human-automatic pairs than for human-human pairs (55.7% vs 74.5%), the proposed method performs much better than random annotation (2.4%). Furthermore, each EP is linked to a specific object property for which it is optimal (e.g., Lateral Motion for roughness). We found that the percentage of trials where the expected EP was found does not differ between manual and automatic annotation. For now, this method cannot yet completely replace a manual annotation procedure. However, it could be used as a starting point that can be supplemented by manual annotation.
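The rule structure described above (a frame is labeled with an EP whenever all of that EP's kinematic criteria are met) can be sketched as a table of predicates. The variable names and thresholds below are hypothetical illustrations; the paper derives its actual criteria from hand- and stimulus-mounted motion trackers.

```python
# Hypothetical kinematic variables and thresholds, for illustration only.
# Each EP is annotated when ALL of its criteria hold for a frame.
EP_CRITERIA = {
    "Lateral Motion":      lambda v: v["tangential_speed"] > 20.0 and v["normal_speed"] < 2.0,
    "Pressure":            lambda v: v["normal_force"] > 1.0 and v["tangential_speed"] < 5.0,
    "Enclosure":           lambda v: v["contact_count"] >= 4 and v["tangential_speed"] < 5.0,
    "Unsupported Holding": lambda v: v["contact_count"] >= 2 and v["lift_height"] > 50.0,
}

def annotate_frame(variables):
    """Return every EP whose full set of criteria is met for one frame."""
    return [ep for ep, met in EP_CRITERIA.items() if met(variables)]
```

Because the criteria sets are independent, several EPs can in principle fire on the same frame, which matches the paper's observation that automatic output still needs manual review.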
Sensory modality of smoking cues modulates neural cue reactivity.
Yalachkov, Yavor; Kaiser, Jochen; Görres, Andreas; Seehaus, Arne; Naumer, Marcus J
2013-01-01
Behavioral experiments have demonstrated that the sensory modality of presentation modulates drug cue reactivity. The present study on nicotine addiction tested whether neural responses to smoking cues are modulated by the sensory modality of stimulus presentation. We measured brain activation using functional magnetic resonance imaging (fMRI) in 15 smokers and 15 nonsmokers while they viewed images of smoking paraphernalia and control objects and while they touched the same objects without seeing them. Haptically presented, smoking-related stimuli induced more pronounced neural cue reactivity than visual cues in the left dorsal striatum in smokers compared to nonsmokers. The severity of nicotine dependence correlated positively with the preference for haptically explored smoking cues in the left inferior parietal lobule/somatosensory cortex, right fusiform gyrus/inferior temporal cortex/cerebellum, hippocampus/parahippocampal gyrus, posterior cingulate cortex, and supplementary motor area. These observations are in line with the hypothesized role of the dorsal striatum for the expression of drug habits and the well-established concept of drug-related automatized schemata, since haptic perception is more closely linked to the corresponding object-specific action pattern than visual perception. Moreover, our findings demonstrate that with the growing severity of nicotine dependence, brain regions involved in object perception, memory, self-processing, and motor control exhibit an increasing preference for haptic over visual smoking cues. This difference was not found for control stimuli. Considering the sensory modality of the presented cues could serve to develop more reliable fMRI-specific biomarkers, more ecologically valid experimental designs, and more effective cue-exposure therapies of addiction.
Pure associative tactile agnosia for the left hand: clinical and anatomo-functional correlations.
Veronelli, Laura; Ginex, Valeria; Dinacci, Daria; Cappa, Stefano F; Corbo, Massimo
2014-09-01
Associative tactile agnosia (TA) is defined as the inability to associate information about object sensory properties derived through tactile modality with previously acquired knowledge about object identity. The impairment is often described after a lesion involving the parietal cortex (Caselli, 1997; Platz, 1996). We report the case of SA, a right-handed 61-year-old man affected by first ever right hemispheric hemorrhagic stroke. The neurological examination was normal, excluding major somaesthetic and motor impairment; a brain magnetic resonance imaging (MRI) confirmed the presence of a right subacute hemorrhagic lesion limited to the post-central and supra-marginal gyri. A comprehensive neuropsychological evaluation detected a selective inability to name objects when handled with the left hand in the absence of other cognitive deficits. A series of experiments were conducted in order to assess each stage of tactile recognition processing using the same stimulus sets: materials, 3D geometrical shapes, real objects and letters. SA and seven matched controls underwent the same experimental tasks during four sessions in consecutive days. Tactile discrimination, recognition, pantomime, drawing after haptic exploration out of vision and tactile-visual matching abilities were assessed. In addition, we looked for the presence of a supra-modal impairment of spatial perception and of specific difficulties in programming exploratory movements during recognition. Tactile discrimination was intact for all the stimuli tested. In contrast, SA was able neither to recognize nor to pantomime real objects manipulated with the left hand out of vision, while he identified them with the right hand without hesitations. Tactile-visual matching was intact. Furthermore, SA was able to grossly reproduce the global shape in drawings but failed to extract details of objects after left-hand manipulation, and he could not identify objects after looking at his own drawings. 
This case confirms the existence of selective associative TA as a left hand-specific deficit in recognizing objects. This deficit is not related to spatial perception or to the programming of exploratory movements. The cross-modal transfer of information via visual perception permits the activation of a partially degraded image, which alone does not allow the proper recognition of the initial tactile stimulus. Copyright © 2014 Elsevier Ltd. All rights reserved.
Girod, Sabine; Schvartzman, Sara C; Gaudilliere, Dyani; Salisbury, Kenneth; Silva, Rebeka
2016-01-01
Computer-assisted surgical (CAS) planning tools are available for craniofacial surgery, but are usually based on computer-aided design (CAD) tools that lack the ability to detect the collision of virtual objects (i.e., fractured bone segments). We developed a CAS system featuring a sense of touch (haptic) that enables surgeons to physically interact with individual, patient-specific anatomy and immerse in a three-dimensional virtual environment. In this study, we evaluated initial user experience with our novel system compared to an existing CAD system. Ten surgery resident trainees received a brief verbal introduction to both the haptic and CAD systems. Users simulated mandibular fracture reduction in three clinical cases within a 15 min time limit for each system and completed a questionnaire to assess their subjective experience. We compared standard landmarks and linear and angular measurements between the simulated results and the actual surgical outcome and found that haptic simulation results were not significantly different from actual postoperative outcomes. In contrast, CAD results significantly differed from both the haptic simulation and actual postoperative results. In addition to enabling a more accurate fracture repair, the haptic system provided a better user experience than the CAD system in terms of intuitiveness and self-reported quality of repair.
A review of haptic simulator for oral and maxillofacial surgery based on virtual reality.
Chen, Xiaojun; Hu, Junlei
2018-06-01
Traditional medical training in oral and maxillofacial surgery (OMFS) may be limited by its low efficiency and high cost due to the shortage of cadaver resources. With the combination of visual rendering and force feedback, surgery simulators have become increasingly popular in hospitals and medical schools as an alternative to traditional training. Areas covered: The major goal of this review is to provide a comprehensive reference source on current and future developments of haptic OMFS simulators based on virtual reality (VR) for relevant researchers. Expert commentary: Visual rendering, haptic rendering, tissue deformation, and evaluation are key components of a VR-based haptic surgery simulator. Compared with traditional medical training, the fusion of visual and tactile feedback in the virtual environment yields a considerably more vivid sensation, and operators have more opportunities to practice surgical skills and to receive objective evaluation for reference.
Modeling of Explorative Procedures for Remote Object Identification
1991-09-01
…haptic sensory system and the simulated foveal component of the visual system. Eventually it will allow multiple applications in remote sensing and… superposition of sensory channels. The use of a force reflecting telemanipulator and computer simulated visual foveal component are the tools which… representation of human search models is achieved by using the proprioceptive component of the haptic sensory system and the simulated foveal component of the…
Haptic exploration of fingertip-sized geometric features using a multimodal tactile sensor
NASA Astrophysics Data System (ADS)
Ponce Wong, Ruben D.; Hellman, Randall B.; Santos, Veronica J.
2014-06-01
Haptic perception remains a grand challenge for artificial hands. Dexterous manipulators could be enhanced by "haptic intelligence" that enables identification of objects and their features via touch alone. Haptic perception of local shape would be useful when vision is obstructed or when proprioceptive feedback is inadequate, as observed in this study. In this work, a robot hand outfitted with a deformable, bladder-type, multimodal tactile sensor was used to replay four human-inspired haptic "exploratory procedures" on fingertip-sized geometric features. The geometric features varied by type (bump, pit), curvature (planar, conical, spherical), and footprint dimension (1.25 - 20 mm). Tactile signals generated by active fingertip motions were used to extract key parameters for use as inputs to supervised learning models. A support vector classifier estimated order of curvature while support vector regression models estimated footprint dimension once curvature had been estimated. A distal-proximal stroke (along the long axis of the finger) enabled estimation of order of curvature with an accuracy of 97%. Best-performing, curvature-specific, support vector regression models yielded R2 values of at least 0.95. While a radial-ulnar stroke (along the short axis of the finger) was most helpful for estimating feature type and size for planar features, a rolling motion was most helpful for conical and spherical features. The ability to haptically perceive local shape could be used to advance robot autonomy and provide haptic feedback to human teleoperators of devices ranging from bomb defusal robots to neuroprostheses.
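The pipeline described above (extract scalar parameters from a tactile time series, then feed them to a supervised model) can be sketched in a few lines. The paper trains support-vector models; as a dependency-free stand-in, this sketch uses a nearest-centroid classifier over the same kind of extracted features, and the feature choices are illustrative assumptions.

```python
def extract_features(signal):
    """Toy scalar features from one tactile time series: mean, peak, range."""
    mean = sum(signal) / len(signal)
    return (mean, max(signal), max(signal) - min(signal))

class NearestCentroidClassifier:
    """Minimal stand-in for the paper's support-vector classifier."""

    def fit(self, feature_vectors, labels):
        sums = {}
        for x, y in zip(feature_vectors, labels):
            total, n = sums.get(y, ((0.0,) * len(x), 0))
            sums[y] = (tuple(t + xi for t, xi in zip(total, x)), n + 1)
        # One centroid (mean feature vector) per class label.
        self.centroids_ = {y: tuple(t / n for t in total)
                           for y, (total, n) in sums.items()}
        return self

    def predict(self, x):
        dist = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
        return min(self.centroids_, key=lambda y: dist(self.centroids_[y]))
```

In the paper's setting the labels would be orders of curvature (planar, conical, spherical) and a second regression stage would estimate footprint dimension once curvature is known.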
The effect of haptic guidance and visual feedback on learning a complex tennis task.
Marchal-Crespo, Laura; van Raai, Mark; Rauter, Georg; Wolf, Peter; Riener, Robert
2013-11-01
While haptic guidance can improve ongoing performance of a motor task, several studies have found that it ultimately impairs motor learning. However, some recent studies suggest that the haptic demonstration of optimal timing, rather than movement magnitude, enhances learning in subjects trained with haptic guidance. Timing of an action plays a crucial role in the proper accomplishment of many motor skills, such as hitting a moving object (discrete timing task) or learning a velocity profile (time-critical tracking task). The aim of the present study is to evaluate which feedback conditions-visual or haptic guidance-optimize learning of the discrete and continuous elements of a timing task. The experiment consisted in performing a fast tennis forehand stroke in a virtual environment. A tendon-based parallel robot connected to the end of a racket was used to apply haptic guidance during training. In two different experiments, we evaluated which feedback condition was more adequate for learning: (1) a time-dependent discrete task-learning to start a tennis stroke and (2) a tracking task-learning to follow a velocity profile. The effect that the task difficulty and subject's initial skill level have on the selection of the optimal training condition was further evaluated. Results showed that the training condition that maximizes learning of the discrete time-dependent motor task depends on the subjects' initial skill level. Haptic guidance was especially suitable for less-skilled subjects and in especially difficult discrete tasks, while visual feedback seems to benefit more skilled subjects. Additionally, haptic guidance seemed to promote learning in a time-critical tracking task, while visual feedback tended to deteriorate the performance independently of the task difficulty and subjects' initial skill level. 
Haptic guidance outperformed visual feedback, although additional studies are needed to further analyze the effect of other types of feedback visualization on motor learning of time-critical tasks.
NASA Technical Reports Server (NTRS)
1997-01-01
Session MP4 includes short reports on: (1) Face Recognition in Microgravity: Is Gravity Direction Involved in the Inversion Effect?; (2) Motor Timing under Microgravity; (3) Perceived Self-Motion Assessed by Computer-Generated Animations: Complexity and Reliability; (4) Prolonged Weightlessness Reference Frames and Visual Symmetry Detection; (5) Mental Representation of Gravity During a Locomotor Task; and (6) Haptic Perception in Weightlessness: A Sense of Force or a Sense of Effort?
Sensing design and workmanship: the haptic skills of shoppers in eighteenth-century London.
Smith, Kate
2012-01-01
This article explores how eighteenth-century shoppers understood the material world around them. It argues that retail experiences exposed shoppers to different objects, which subsequently shaped their understanding of this world. This article builds on recent research that highlights the importance of shop environments and browsing in consumer choice. More particularly, it differentiates itself by examining the practice of handling goods in shops and arguing that sensory interaction with multiple goods was one of the key means by which shoppers comprehended concepts of design and workmanship. In doing so, it affirms the importance of sensory research to design history. The article focuses on consumer purchases of ceramic objects and examines a variety of sources to demonstrate the role of haptic skills in this act. It shows how different literary sources described browsing for goods in gendered and satirical terms and then contrasts these readings against visual evidence to illustrate how handling goods was also represented as a positive act. It reads browsing as a valued practice requiring competence, patience and haptic skills. Through an examination of diary sources, letters and objects this article asks what information shoppers gained from touching various objects. It concludes by demonstrating how repetitive handling in search of quality meant that shoppers acquired their own conception of what constituted workmanship and design.
AR Feels "Softer" than VR: Haptic Perception of Stiffness in Augmented versus Virtual Reality.
Gaffary, Yoren; Le Gouis, Benoit; Marchal, Maud; Argelaguet, Ferran; Arnaldi, Bruno; Lecuyer, Anatole
2017-11-01
Does it feel the same when you touch an object in Augmented Reality (AR) or in Virtual Reality (VR)? In this paper we study and compare the haptic perception of stiffness of a virtual object in two situations: (1) a purely virtual environment versus (2) a real and augmented environment. We have designed an experimental setup based on a Microsoft HoloLens and a haptic force-feedback device, enabling participants to press a virtual piston and compare its stiffness successively in either Augmented Reality (the virtual piston is surrounded by several real objects all located inside a cardboard box) or in Virtual Reality (the same virtual piston is displayed in a fully virtual scene composed of the same other objects). We have conducted a psychophysical experiment with 12 participants. Our results show a surprising bias in perception between the two conditions. The virtual piston is on average perceived stiffer in the VR condition compared to the AR condition. For instance, when the piston had the same stiffness in AR and VR, participants would select the VR piston as the stiffer one in 60% of cases. This suggests a psychological effect as if objects in AR would feel "softer" than in pure VR. Taken together, our results open new perspectives on perception in AR versus VR, and pave the way to future studies aiming at characterizing potential perceptual biases.
Analysis of haptic information in the cerebral cortex
2016-01-01
Haptic sensing of objects acquires information about a number of properties. This review summarizes current understanding about how these properties are processed in the cerebral cortex of macaques and humans. Nonnoxious somatosensory inputs, after initial processing in primary somatosensory cortex, are partially segregated into different pathways. A ventrally directed pathway carries information about surface texture into parietal opercular cortex and thence to medial occipital cortex. A dorsally directed pathway transmits information regarding the location of features on objects to the intraparietal sulcus and frontal eye fields. Shape processing occurs mainly in the intraparietal sulcus and lateral occipital complex, while orientation processing is distributed across primary somatosensory cortex, the parietal operculum, the anterior intraparietal sulcus, and a parieto-occipital region. For each of these properties, the respective areas outside primary somatosensory cortex also process corresponding visual information and are thus multisensory. Consistent with the distributed neural processing of haptic object properties, tactile spatial acuity depends on interaction between bottom-up tactile inputs and top-down attentional signals in a distributed neural network. Future work should clarify the roles of the various brain regions and how they interact at the network level. PMID:27440247
Effects of Grip-Force, Contact, and Acceleration Feedback on a Teleoperated Pick-and-Place Task.
Khurshid, Rebecca P; Fitter, Naomi T; Fedalei, Elizabeth A; Kuchenbecker, Katherine J
2017-01-01
The multifaceted human sense of touch is fundamental to direct manipulation, but technical challenges prevent most teleoperation systems from providing even a single modality of haptic feedback, such as force feedback. This paper postulates that ungrounded grip-force, fingertip-contact-and-pressure, and high-frequency acceleration haptic feedback will improve human performance of a teleoperated pick-and-place task. Thirty subjects used a teleoperation system consisting of a haptic device worn on the subject's right hand, a remote PR2 humanoid robot, and a Vicon motion capture system to move an object to a target location. Each subject completed the pick-and-place task 10 times under each of the eight haptic conditions obtained by turning on and off grip-force feedback, contact feedback, and acceleration feedback. To understand how object stiffness affects the utility of the feedback, half of the subjects completed the task with a flexible plastic cup, and the others used a rigid plastic block. The results indicate that the addition of grip-force feedback with gain switching enables subjects to hold both the flexible and rigid objects more stably, and it also allowed subjects who manipulated the rigid block to hold the object more delicately and to better control the motion of the remote robot's hand. Contact feedback improved the ability of subjects who manipulated the flexible cup to move the robot's arm in space, but it deteriorated this ability for subjects who manipulated the rigid block. Contact feedback also caused subjects to hold the flexible cup less stably, but the rigid block more securely. Finally, adding acceleration feedback slightly improved the subject's performance when setting the object down, as originally hypothesized; interestingly, it also allowed subjects to feel vibrations produced by the robot's motion, causing them to be more careful when completing the task. 
This study supports the utility of grip-force and high-frequency acceleration feedback in teleoperation systems and motivates further improvements to fingertip-contact-and-pressure feedback.
Mechanically Compliant Electronic Materials for Wearable Photovoltaics and Human-Machine Interfaces
NASA Astrophysics Data System (ADS)
O'Connor, Timothy Francis, III
Applications of stretchable electronic materials for human-machine interfaces are described herein. Intrinsically stretchable organic conjugated polymers and stretchable electronic composites were used to develop stretchable organic photovoltaics (OPVs), mechanically robust wearable OPVs, and human-machine interfaces for gesture recognition, American Sign Language translation, haptic control of robots, and touch emulation for virtual reality, augmented reality, and the transmission of touch. The stretchable and wearable OPVs comprise active layers of poly-3-alkylthiophene:phenyl-C61-butyric acid methyl ester (P3AT:PCBM) and transparent conductive electrodes of poly(3,4-ethylenedioxythiophene)-poly(styrenesulfonate) (PEDOT:PSS), and devices could only be fabricated through a deep understanding of the connection between molecular structure and the co-engineering of electronic performance with mechanical resilience. The talk concludes with the use of composite piezoresistive sensors in two smart glove prototypes. The first integrates stretchable strain sensors comprising a carbon-elastomer composite, a wearable microcontroller, low energy Bluetooth, and a 6-axis accelerometer/gyroscope to construct a fully functional gesture recognition glove capable of wirelessly translating American Sign Language to text on a cell phone screen. The second creates a system for the haptic control of a 3D printed robot arm, as well as the transmission of touch and temperature information.
A perspective on the role and utility of haptic feedback in laparoscopic skills training.
Singapogu, Ravikiran; Burg, Timothy; Burg, Karen J L; Smith, Dane E; Eckenrode, Amanda H
2014-01-01
Laparoscopic surgery is a minimally invasive surgical technique with significant potential benefits to the patient, including shorter recovery time, less scarring, and decreased costs. There is a growing need to teach surgical trainees this emerging surgical technique. Simulators, ranging from simple "box" trainers to complex virtual reality (VR) trainers, have emerged as the most promising method for teaching basic laparoscopic surgical skills. Current box trainers require oversight from an expert surgeon for both training and assessing skills. VR trainers decrease the dependence on expert teachers during training by providing objective, real-time feedback and automatic skills evaluation. However, current VR trainers generally have limited credibility as a means to prepare new surgeons and have often fallen short of educators' expectations. Several researchers have speculated that the missing component in modern VR trainers is haptic feedback, which refers to the range of touch sensations encountered during surgery. These force types and ranges need to be adequately rendered by simulators for a more complete training experience. This article presents a perspective of the role and utility of haptic feedback during laparoscopic surgery and laparoscopic skills training by detailing the ranges and types of haptic sensations felt by the operating surgeon, along with quantitative studies of how this feedback is used. Further, a number of research studies that have documented human performance effects as a result of the presence of haptic feedback are critically reviewed. Finally, key research directions in using haptic feedback for laparoscopy training simulators are identified.
Multimodal Hierarchical Dirichlet Process-Based Active Perception by a Robot
Taniguchi, Tadahiro; Yoshino, Ryo; Takano, Toshiaki
2018-01-01
In this paper, we propose an active perception method for recognizing object categories based on the multimodal hierarchical Dirichlet process (MHDP). The MHDP enables a robot to form object categories using multimodal information, e.g., visual, auditory, and haptic information, which can be observed by performing actions on an object. However, performing many actions on a target object requires a long time. In a real-time scenario, i.e., when the time is limited, the robot has to determine the set of actions that is most effective for recognizing a target object. We propose an active perception for MHDP method that uses the information gain (IG) maximization criterion and lazy greedy algorithm. We show that the IG maximization criterion is optimal in the sense that the criterion is equivalent to a minimization of the expected Kullback–Leibler divergence between a final recognition state and the recognition state after the next set of actions. However, a straightforward calculation of IG is practically impossible. Therefore, we derive a Monte Carlo approximation method for IG by making use of a property of the MHDP. We also show that the IG has submodular and non-decreasing properties as a set function because of the structure of the graphical model of the MHDP. Therefore, the IG maximization problem is reduced to a submodular maximization problem. This means that greedy and lazy greedy algorithms are effective and have a theoretical justification for their performance. We conducted an experiment using an upper-torso humanoid robot and a second one using synthetic data. The experimental results show that the method enables the robot to select a set of actions that allow it to recognize target objects quickly and accurately. The numerical experiment using the synthetic data shows that the proposed method can work appropriately even when the number of actions is large and a set of target objects involves objects categorized into multiple classes. 
The results support our theoretical outcomes. PMID:29872389
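Because the information gain is shown to be monotone and submodular, the action-selection step reduces to submodular maximization, for which the lazy greedy (accelerated greedy) algorithm applies. A minimal sketch of lazy greedy follows, using a weighted-coverage objective as a stand-in for the paper's Monte Carlo IG estimate; the action names are illustrative.

```python
import heapq

def lazy_greedy(elements, f, k):
    """Pick k elements greedily maximizing a monotone submodular set function f.

    Lazy evaluation: stale marginal gains sit in a max-heap and are only
    refreshed when an element reaches the top (Minoux's acceleration).
    Submodularity guarantees a refreshed gain can only shrink, so a top
    element whose fresh gain still beats the next stale gain is optimal.
    """
    base = f(frozenset())
    heap = [(-(f(frozenset([e])) - base), e) for e in elements]
    heapq.heapify(heap)
    selected = []
    while heap and len(selected) < k:
        _, e = heapq.heappop(heap)
        s = frozenset(selected)
        gain = f(s | {e}) - f(s)                 # refresh this element's gain
        if not heap or gain >= -heap[0][0] - 1e-12:
            selected.append(e)                   # still the best: take it
        else:
            heapq.heappush(heap, (-gain, e))     # stale: reinsert and retry
    return selected
```

For a monotone submodular objective this greedy choice carries the classic (1 - 1/e) approximation guarantee, which is the theoretical justification the abstract refers to.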
Multimodal Hierarchical Dirichlet Process-Based Active Perception by a Robot.
Taniguchi, Tadahiro; Yoshino, Ryo; Takano, Toshiaki
2018-01-01
In this paper, we propose an active perception method for recognizing object categories based on the multimodal hierarchical Dirichlet process (MHDP). The MHDP enables a robot to form object categories using multimodal information, e.g., visual, auditory, and haptic information, which can be observed by performing actions on an object. However, performing many actions on a target object requires a long time. In a real-time scenario, i.e., when the time is limited, the robot has to determine the set of actions that is most effective for recognizing a target object. We propose an active perception method for the MHDP that uses the information gain (IG) maximization criterion and a lazy greedy algorithm. We show that the IG maximization criterion is optimal in the sense that the criterion is equivalent to a minimization of the expected Kullback-Leibler divergence between a final recognition state and the recognition state after the next set of actions. However, a straightforward calculation of IG is practically impossible. Therefore, we derive a Monte Carlo approximation method for IG by making use of a property of the MHDP. We also show that the IG has submodular and non-decreasing properties as a set function because of the structure of the graphical model of the MHDP. Therefore, the IG maximization problem is reduced to a submodular maximization problem. This means that greedy and lazy greedy algorithms are effective and have a theoretical justification for their performance. We conducted one experiment using an upper-torso humanoid robot and a second one using synthetic data. The experimental results show that the method enables the robot to select a set of actions that allow it to recognize target objects quickly and accurately. The numerical experiment using the synthetic data shows that the proposed method can work appropriately even when the number of actions is large and a set of target objects involves objects categorized into multiple classes. The results support our theoretical outcomes.
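The submodular structure described above makes lazy greedy action selection straightforward to implement. A minimal Python sketch, assuming a caller-supplied `gain(selected, action)` callback that stands in for the paper's Monte Carlo IG estimate (all names here are illustrative, not the authors' API):

```python
import heapq

def lazy_greedy(ground_set, gain, k):
    """Select k elements maximizing a monotone submodular set function.

    `gain(S, a)` returns the marginal gain of adding action `a` to the
    selected set S.  Lazy evaluation (Minoux's trick): cached upper
    bounds are re-checked only when an element reaches the top of the
    priority queue, which submodularity guarantees is safe.
    """
    selected = []
    # Max-heap via negated gains; initial bounds computed on the empty set.
    heap = [(-gain([], a), a) for a in ground_set]
    heapq.heapify(heap)
    while heap and len(selected) < k:
        neg_bound, a = heapq.heappop(heap)
        fresh = gain(selected, a)            # re-evaluate the marginal gain
        if not heap or fresh >= -heap[0][0]:
            selected.append(a)               # still the best: take it
        else:
            heapq.heappush(heap, (-fresh, a))  # stale bound: push back
    return selected
```

With a modular (hence submodular) toy gain, `lazy_greedy` picks the two highest-weight actions first, mirroring how the robot would pick its most informative actions under a time budget.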
Griffiths, Paul G; Gillespie, R Brent
2005-01-01
This paper describes a paradigm for human/automation control sharing in which the automation acts through a motor coupled to a machine's manual control interface. The manual interface becomes a haptic display, continually informing the human about automation actions. While monitoring by feel, users may choose either to conform to the automation or override it and express their own control intentions. This paper's objective is to demonstrate that adding automation through haptic display can be used not only to improve performance on a primary task but also to reduce perceptual demands or free attention for a secondary task. Results are presented from three experiments in which 11 participants completed a lane-following task using a motorized steering wheel on a fixed-base driving simulator. The automation behaved like a copilot, assisting with lane following by applying torques to the steering wheel. Results indicate that haptic assist improves lane following by at least 30%, p < .0001, while reducing visual demand by 29%, p < .0001, or improving reaction time in a secondary tone localization task by 18 ms, p = .0009. Potential applications of this research include the design of automation interfaces based on haptics that support human/automation control sharing better than traditional push-button automation interfaces.
Lin, Yanping; Chen, Huajiang; Yu, Dedong; Zhang, Ying; Yuan, Wen
2017-01-01
Bone drilling simulators with virtual and haptic feedback provide a safe, cost-effective and repeatable alternative to traditional surgical training methods. To develop such a simulator, accurate haptic rendering based on a force model is required to feed back bone drilling forces based on user input. Current predictive bone drilling force models based on bovine bones with various drilling conditions and parameters are not representative of the bone drilling process in bone surgery. The objective of this study was to provide a bone drilling force model for haptic rendering based on calibration and validation experiments in fresh cadaveric bones with different bone densities. Using a commonly used drill bit geometry (2 mm diameter), feed rates (20-60 mm/min) and spindle speeds (4000-6000 rpm) in orthognathic surgeries, the bone drilling forces of specimens from two groups were measured and the calibration coefficients of the specific normal and frictional pressures were determined. The comparison of the predicted forces and the measured forces from validation experiments with a large range of feed rates and spindle speeds demonstrates that the proposed bone drilling force model can predict the trends and average forces well. The presented bone drilling force model can be used for haptic rendering in surgical simulators.
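The abstract does not give the fitted force law, but a mechanistic drilling-force sketch shows how feed rate and spindle speed typically enter such models through the feed per revolution. The coefficients below are made-up placeholders, not the calibrated pressures from the study:

```python
def drilling_thrust_force(feed_rate_mm_min, spindle_speed_rpm,
                          drill_diameter_mm=2.0,
                          k_normal=600.0, k_friction=120.0):
    """Illustrative mechanistic thrust-force estimate for bone drilling.

    Hypothetical form: thrust grows with the uncut chip thickness (the
    feed per revolution) scaled by calibrated specific normal and
    frictional pressures.  k_normal and k_friction (N/mm^2) are
    placeholder values for illustration only.
    """
    feed_per_rev = feed_rate_mm_min / spindle_speed_rpm    # mm per revolution
    chip_area = 0.5 * feed_per_rev * drill_diameter_mm     # mm^2, split over two lips
    return (k_normal + k_friction) * chip_area             # thrust force, N
```

Note how the model captures the trend the paper exploits: force rises with feed rate and falls with spindle speed, since both act through feed per revolution.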
Olsson, Pontus; Nysjo, Fredrik; Carlbom, Ingrid B; Johansson, Stefan
2016-01-01
Piezoelectric motors offer an attractive alternative to electromagnetic actuators in portable haptic interfaces: they are compact, have a high force-to-volume ratio, and can operate with limited or no gearing. However, the choice of a piezoelectric motor type is not obvious due to differences in performance characteristics. We present our evaluation of two commercial, operationally different, piezoelectric motors acting as actuators in two kinesthetic haptic grippers, a walking quasi-static motor and a traveling wave ultrasonic motor. We evaluate each gripper's ability to display common virtual objects including springs, dampers, and rigid walls, and conclude that the walking quasi-static motor is superior at low velocities. However, for applications where high velocity is required, traveling wave ultrasonic motors are a better option.
Human eye haptics-based multimedia.
Velandia, David; Uribe-Quevedo, Alvaro; Perez-Gutierrez, Byron
2014-01-01
Immersive and interactive multimedia applications offer complementary study tools in anatomy, as users can explore 3D models while obtaining information about the organ, tissue, or part being explored. Haptics increases the sense of interaction with virtual objects, improving user experience in a more realistic manner. Common tools for studying the eye are books, illustrations, and assembly models; more recently, these are being complemented by mobile apps whose 3D capabilities, computing power, and user bases are growing. The goal of this project is to develop a complementary eye anatomy and pathology study tool using deformable models within a multimedia application, offering students the opportunity to explore the eye from up close and within, with relevant information. Validation of the tool provided feedback on the potential of the development, along with suggestions on improving haptic feedback and navigation.
Multiple reference frames in haptic spatial processing
NASA Astrophysics Data System (ADS)
Volčič, R.
2008-08-01
The present thesis focused on haptic spatial processing. In particular, our interest was directed to the perception of spatial relations with the main focus on the perception of orientation. To this end, we studied haptic perception in different tasks, either in isolation or in combination with vision. The parallelity task, where participants have to match the orientations of two spatially separated bars, was used in its two-dimensional and three-dimensional versions in Chapter 2 and Chapter 3, respectively. The influence of non-informative vision and visual interference on performance in the parallelity task was studied in Chapter 4. A different task, the mental rotation task, was introduced in a purely haptic study in Chapter 5 and in a visuo-haptic cross-modal study in Chapter 6. The interaction of multiple reference frames and their influence on haptic spatial processing were the common denominators of these studies. In this thesis we approached the problems of which reference frames play the major role in haptic spatial processing and how the relative roles of distinct reference frames change depending on the available information and the constraints imposed by different tasks. We found that the influence of a reference frame centered on the hand was the major cause of the deviations from veridicality observed in both the two-dimensional and three-dimensional studies. The results were described by a weighted average model, in which the hand-centered egocentric reference frame is supposed to have a biasing influence on the allocentric reference frame. Performance in haptic spatial processing has been shown to depend also on sources of information or processing that are not strictly connected to the task at hand. When non-informative vision was provided, a beneficial effect was observed in the haptic performance. This improvement was interpreted as a shift from the egocentric to the allocentric reference frame. 
Moreover, interfering visual information presented in the vicinity of the haptic stimuli parametrically modulated the magnitude of the deviations. The influence of the hand-centered reference frame was shown also in the haptic mental rotation task where participants were quicker in judging the parity of objects when these were aligned with respect to the hands than when they were physically aligned. Similarly, in the visuo-haptic cross-modal mental rotation task the parity judgments were influenced by the orientation of the exploring hand with respect to the viewing direction. This effect was shown to be modulated also by an intervening temporal delay that supposedly counteracts the influence of the hand-centered reference frame. We suggest that the hand-centered reference frame is embedded in a hierarchical structure of reference frames where some of these emerge depending on the demands and the circumstances of the surrounding environment and the needs of an active perceiver.
The role of connectedness in haptic object perception.
Plaisier, Myrthe A; van Polanen, Vonne; Kappers, Astrid M L
2017-03-02
We can efficiently detect whether there is a rough object among a set of smooth objects using our sense of touch. We can also quickly determine the number of rough objects in our hand. In this study, we investigated whether the perceptual processing of rough and smooth objects is influenced if these objects are connected. In Experiment 1, participants were asked to identify whether there were exactly two rough target spheres among smooth distractor spheres, while we recorded their response times. The spheres were connected to form pairs: rough spheres were paired together and smooth spheres were paired together ('within pairs arrangement'), or a rough and a smooth sphere were connected ('between pairs arrangement'). Participants responded faster when the spheres in a pair were identical. In Experiment 2, we found that the advantage for within pairs arrangements was not driven by feature saliency. Overall, our results show that haptic information is processed faster when targets are connected together than when targets are connected to distractors.
Ehrampoosh, Shervin; Dave, Mohit; Kia, Michael A; Rablau, Corneliu; Zadeh, Mehrdad H
2013-01-01
This paper presents an enhanced haptic-enabled master-slave teleoperation system which can be used to provide force feedback to surgeons in minimally invasive surgery (MIS). One of the research goals was to develop a combined-control architecture framework that included both direct force reflection (DFR) and position-error-based (PEB) control strategies. To achieve this goal, it was essential to measure accurately the direct contact forces between deformable bodies and a robotic tool tip. To measure the forces at a surgical tool tip and enhance the performance of the teleoperation system, an optical force sensor was designed, prototyped, and added to a robot manipulator. The enhanced teleoperation architecture was formulated by developing mathematical models for the optical force sensor, the extended slave robot manipulator, and the combined-control strategy. Human factor studies were also conducted to (a) examine experimentally the performance of the enhanced teleoperation system with the optical force sensor, and (b) study human haptic perception during the identification of remote object deformability. The first experiment was carried out to discriminate deformability of objects when human subjects were in direct contact with deformable objects by means of a laparoscopic tool. The control parameters were then tuned based on the results of this experiment using a gain-scheduling method. The second experiment was conducted to study the effectiveness of the force feedback provided through the enhanced teleoperation system. The results show that the force feedback increased the ability of subjects to correctly identify materials of different deformable types. In addition, the virtual force feedback provided by the teleoperation system comes close to the real force feedback experienced in direct MIS. The experimental results provide design guidelines for choosing and validating the control architecture and the optical force sensor.
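A combined-control architecture of this kind is often written as a weighted blend of direct force reflection and position-error-based feedback. A hedged one-line sketch; `alpha` and `kp` are illustrative gains, not the values tuned via gain scheduling in the study:

```python
def feedback_force(f_measured, x_master, x_slave, alpha=0.7, kp=200.0):
    """Blend direct force reflection (DFR) with position-error-based (PEB)
    feedback for the master device:

        f_master = alpha * f_measured + (1 - alpha) * kp * (x_slave - x_master)

    f_measured is the tool-tip contact force from the optical sensor (N),
    x_master / x_slave are device positions (m).  alpha = 1 is pure DFR,
    alpha = 0 is pure PEB; intermediate values give the combined strategy.
    """
    return alpha * f_measured + (1.0 - alpha) * kp * (x_slave - x_master)
```

Gain scheduling would amount to selecting `alpha` and `kp` per identified material class rather than keeping them fixed.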
Learning to perceive differences in solid shape through vision and touch.
Norman, J Farley; Clayton, Anna Marie; Norman, Hideko F; Crabtree, Charles E
2008-01-01
A single experiment was designed to investigate perceptual learning and the discrimination of 3-D object shape. Ninety-six observers were presented with naturally shaped solid objects either visually, haptically, or across the modalities of vision and touch. The observers' task was to judge whether the two sequentially presented objects on any given trial possessed the same or different 3-D shapes. The results of the experiment revealed that significant perceptual learning occurred in all modality conditions, both unimodal and cross-modal. The amount of the observers' perceptual learning, as indexed by increases in hit rate and d', was similar for all of the modality conditions. The observers' hit rates were highest for the unimodal conditions and lowest in the cross-modal conditions. Lengthening the inter-stimulus interval from 3 to 15 s led to increases in hit rates and decreases in response bias. The results also revealed the existence of an asymmetry between two otherwise equivalent cross-modal conditions: in particular, the observers' perceptual sensitivity was higher for the vision-haptic condition and lower for the haptic-vision condition. In general, the results indicate that effective cross-modal shape comparisons can be made between the modalities of vision and active touch, but that complete information transfer does not occur.
Sorgini, Francesca; Massari, Luca; D’Abbraccio, Jessica; Petrovic, Petar B.; Carrozza, Maria Chiara; Newell, Fiona N.
2018-01-01
We present a tactile telepresence system for real-time transmission of information about object stiffness to the human fingertips. Experimental tests were performed across two laboratories (Italy and Ireland). In the Italian laboratory, a mechatronic sensing platform indented different rubber samples. Information about rubber stiffness was converted into on-off events using a neuronal spiking model and sent to a vibrotactile glove in the Irish laboratory. Participants discriminated the variation of the stiffness of stimuli according to a two-alternative forced choice protocol. Stiffness discrimination was based on the variation of the temporal pattern of spikes generated during the indentation of the rubber samples. The results suggest that vibrotactile stimulation can effectively simulate surface stiffness when using neuronal spiking models to trigger vibrations in the haptic interface. Specifically, fractional variations of stiffness down to 0.67 were significantly discriminated with the developed neuromorphic haptic interface. This performance is comparable to, though slightly worse than, the threshold obtained in a benchmark experiment evaluating the same set of stimuli naturally with the participants' own hand. Our paper presents a bioinspired method for delivering sensory feedback about object properties to human skin based on contingency-mimetic neuronal models, and can be useful for the design of high-performance haptic devices. PMID:29342076
How Haptic Size Sensations Improve Distance Perception
Battaglia, Peter W.; Kersten, Daniel; Schrater, Paul R.
2011-01-01
Determining distances to objects is one of the most ubiquitous perceptual tasks in everyday life. Nevertheless, it is challenging because the information from a single image confounds object size and distance. Though our brains frequently judge distances accurately, the underlying computations employed by the brain are not well understood. Our work illuminates these computations by formulating a family of probabilistic models that encompass a variety of distinct hypotheses about distance and size perception. We compare these models' predictions to a set of human distance judgments in an interception experiment and use Bayesian analysis tools to quantitatively select the best hypothesis on the basis of its explanatory power and robustness over experimental data. The central question is whether, and how, human distance perception incorporates size cues to improve accuracy. Our conclusions are: 1) humans incorporate haptic object size sensations for distance perception, 2) the incorporation of haptic sensations is suboptimal given their reliability, 3) humans use environmentally accurate size and distance priors, 4) distance judgments are produced by perceptual “posterior sampling”. In addition, we compared our model's estimated sensory and motor noise parameters with previously reported measurements in the perceptual literature and found good correspondence between them. Taken together, these results represent a major step forward in establishing the computational underpinnings of human distance perception and the role of size information. PMID:21738457
Character Recognition Method by Time-Frequency Analyses Using Writing Pressure
NASA Astrophysics Data System (ADS)
Watanabe, Tatsuhito; Katsura, Seiichiro
With the development of information and communication technology, personal verification becomes more and more important. In the future ubiquitous society, the development of terminals handling personal information requires personal verification technology. The signature is one personal verification method; however, the number of characters in a signature is limited, and therefore a false signature is easily produced. Thus, personal identification from handwriting alone is difficult. This paper proposes a “haptic pen” that extracts the writing pressure, and shows a character recognition method based on time-frequency analyses. Although the figures of characters written by different amanuenses are similar, the differences appear in the time-frequency domain. As a result, it is possible to use the proposed character recognition for more exact personal identification. The experimental results showed the viability of the proposed method.
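A writing-pressure signal can be moved into the time-frequency domain with a short-time Fourier transform. A minimal pure-Python sketch (rectangular window, naive DFT) of the kind of analysis the paper describes; a real system would use a tapered window and the FFT:

```python
import cmath

def stft_magnitudes(signal, win=8, hop=4):
    """Short-time Fourier magnitudes of a 1-D writing-pressure signal.

    Returns a list of frames; each frame holds |X[k]| for k = 0..win//2.
    Frames are taken every `hop` samples, so both when and at what
    frequency pressure fluctuations occur is preserved, which is what
    distinguishes writers whose character shapes look alike.
    """
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win]
        mags = []
        for k in range(win // 2 + 1):
            x = sum(seg[n] * cmath.exp(-2j * cmath.pi * k * n / win)
                    for n in range(win))
            mags.append(abs(x))
        frames.append(mags)
    return frames
```

A constant pressure trace yields energy only in the DC bin; oscillating pen pressure shows up in the higher bins, frame by frame.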
Erdogan, Goker; Yildirim, Ilker; Jacobs, Robert A.
2015-01-01
People learn modality-independent, conceptual representations from modality-specific sensory signals. Here, we hypothesize that any system that accomplishes this feat will include three components: a representational language for characterizing modality-independent representations, a set of sensory-specific forward models for mapping from modality-independent representations to sensory signals, and an inference algorithm for inverting forward models—that is, an algorithm for using sensory signals to infer modality-independent representations. To evaluate this hypothesis, we instantiate it in the form of a computational model that learns object shape representations from visual and/or haptic signals. The model uses a probabilistic grammar to characterize modality-independent representations of object shape, uses a computer graphics toolkit and a human hand simulator to map from object representations to visual and haptic features, respectively, and uses a Bayesian inference algorithm to infer modality-independent object representations from visual and/or haptic signals. Simulation results show that the model infers identical object representations when an object is viewed, grasped, or both. That is, the model’s percepts are modality invariant. We also report the results of an experiment in which different subjects rated the similarity of pairs of objects in different sensory conditions, and show that the model provides a very accurate account of subjects’ ratings. Conceptually, this research significantly contributes to our understanding of modality invariance, an important type of perceptual constancy, by demonstrating how modality-independent representations can be acquired and used. Methodologically, it provides an important contribution to cognitive modeling, particularly an emerging probabilistic language-of-thought approach, by showing how symbolic and statistical approaches can be combined in order to understand aspects of human perception. PMID:26554704
Neurosurgery simulation using non-linear finite element modeling and haptic interaction
NASA Astrophysics Data System (ADS)
Lee, Huai-Ping; Audette, Michel; Joldes, Grand R.; Enquobahrie, Andinet
2012-02-01
Real-time surgical simulation is becoming an important component of surgical training. To meet the real-time requirement, however, the accuracy of the biomechanical modeling of soft tissue is often compromised due to computing resource constraints. Furthermore, haptic integration presents an additional challenge with its requirement for a high update rate. As a result, most real-time surgical simulation systems employ a linear elasticity model, simplified numerical methods such as the boundary element method or spring-particle systems, and coarse volumetric meshes. However, these systems are not clinically realistic. We present here an ongoing work aimed at developing an efficient and physically realistic neurosurgery simulator using a non-linear finite element method (FEM) with haptic interaction. Real-time finite element analysis is achieved by utilizing the total Lagrangian explicit dynamic (TLED) formulation and GPU acceleration of per-node and per-element operations. We employ a virtual coupling method for separating deformable body simulation and collision detection from haptic rendering, which needs to be updated at a much higher rate than the visual simulation. The system provides accurate biomechanical modeling of soft tissue while retaining real-time performance with haptic interaction. However, our experiments showed that the stability of the simulator depends heavily on the material property of the tissue and the speed of colliding objects. Hence, additional efforts including dynamic relaxation are required to improve the stability of the system.
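The virtual coupling the authors mention is conventionally a spring-damper linking the haptic device pose to a simulation proxy, letting the 1 kHz haptic loop and the slower FEM update run at different rates. A sketch with illustrative gains (the paper's actual coupling parameters are not given):

```python
def virtual_coupling_force(device_pos, proxy_pos, device_vel, proxy_vel,
                           k=800.0, b=2.0):
    """Spring-damper virtual coupling between the haptic device and the
    simulation proxy.  The force rendered to the user's hand pulls the
    device toward the proxy; the equal-and-opposite force drives the
    deformable simulation.  k (N/m) and b (N·s/m) are placeholder gains.
    """
    return k * (proxy_pos - device_pos) + b * (proxy_vel - device_vel)
```

Because only this spring-damper is evaluated at the haptic rate, the expensive non-linear FEM step can lag behind without destabilizing the force loop, within the material- and velocity-dependent limits the abstract notes.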
Neodymium:YAG laser cutting of intraocular lens haptics in vitro and in vivo.
Feder, J M; Rosenberg, M A; Farber, M D
1989-09-01
Various complications following intraocular lens (IOL) surgery result in explantation of the lenses. Haptic fibrosis may necessitate cutting the IOL haptics prior to removal. In this study we used the neodymium:YAG (Nd:YAG) laser to cut polypropylene and poly(methyl methacrylate) (PMMA) haptics in vitro and in rabbit eyes. In vitro we were able to cut 100% of both haptic types successfully (28 PMMA and 30 polypropylene haptics). In rabbit eyes we were able to cut 50% of the PMMA haptics and 43% of the polypropylene haptics. Poly(methyl methacrylate) haptics were easier to cut in vitro and in vivo than polypropylene haptics, requiring fewer shots for transection. Complications of Nd:YAG laser use frequently interfered with haptic transections in rabbit eyes. Haptic transection may be more easily accomplished in human eyes.
Mixed reality temporal bone surgical dissector: mechanical design
2014-01-01
Objective: The development of a novel mixed reality (MR) simulation. An evolving training environment emphasizes the importance of simulation. Current haptic temporal bone simulators have difficulty representing realistic contact forces, and while 3D printed models convincingly represent vibrational properties of bone, they cannot reproduce soft tissue. This paper introduces a mixed reality model, where the effective elements of both simulations are combined; haptic rendering of soft tissue directly interacts with a printed bone model. This paper addresses one aspect in a series of challenges, specifically the mechanical merger of a haptic device with an otic drill. This further necessitates gravity cancellation of the work assembly gripper mechanism. In this system, the haptic end-effector is replaced by a high-speed drill and the virtual contact forces need to be repositioned to the drill tip from the mid wand. Previous publications detail generation of both the requisite printed and haptic simulations. Method: Custom software was developed to reposition the haptic interaction point to the drill tip. A custom fitting, to hold the otic drill, was developed and its weight was offset using the haptic device. The robustness of the system to disturbances and its stable performance during drilling were tested. The experiments were performed on a mixed reality model consisting of two drillable rapid-prototyped layers separated by a free-space. Within the free-space, a linear virtual force model is applied to simulate drill contact with soft tissue. Results: Testing illustrated the effectiveness of gravity cancellation. Additionally, the system exhibited excellent performance given random inputs and during the drill's passage between real and virtual components of the model. No issues with registration at model boundaries were encountered. Conclusion: These tests provide a proof of concept for the initial stages in the development of a novel mixed-reality temporal bone simulator. PMID:25927300
Grasping trajectories in a virtual environment adhere to Weber's law.
Ozana, Aviad; Berman, Sigal; Ganel, Tzvi
2018-06-01
Virtual-reality and telerobotic devices simulate local motor control of virtual objects within computerized environments. Here, we explored grasping kinematics within a virtual environment and tested whether, as in normal 3D grasping, trajectories in the virtual environment are performed analytically, violating Weber's law with respect to an object's size. Participants were asked to grasp a series of 2D objects using a haptic system, which projected their movements to a virtual space presented on a computer screen. The apparatus also provided object-specific haptic information upon "touching" the edges of the virtual targets. The results showed that grasping movements performed within the virtual environment did not produce the typical analytical trajectory pattern obtained during 3D grasping. Unlike in 3D grasping, grasping trajectories in the virtual environment adhered to Weber's law, which indicates relative resolution in size processing. In addition, the trajectory patterns differed from typical trajectories obtained during 3D grasping, with longer times to complete the movement, and with maximum grip apertures appearing relatively early in the movement. The results suggest that grasping movements within a virtual environment can differ from those performed in real space, and are subject to irrelevant effects of perceptual information. Such an atypical pattern of visuomotor control may be mediated by the lack of complete transparency between the interface and the virtual environment in terms of the provided visual and haptic feedback. Possible implications of the findings for movement control within robotic and virtual environments are further discussed.
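Adherence to Weber's law is typically checked by testing whether the just-noticeable difference (or, in grasping studies, aperture variability) grows in proportion to object size. A small helper illustrating the check; the numbers in the usage note are hypothetical:

```python
def weber_fractions(sizes, jnds):
    """Per-size Weber fractions JND / size.

    Weber's law predicts a roughly constant ratio across sizes
    (relative, perceptual-style resolution), whereas the analytical
    pattern typical of real 3D grasping shows a flat JND that does
    not scale with size.
    """
    return [j / s for s, j in zip(sizes, jnds)]
```

For example, variabilities of 1, 2, and 3 mm for 20, 40, and 60 mm objects give a constant fraction of 0.05, the Weber-like pattern reported for the virtual environment.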
Meyer, Georg F.; Shao, Fei; White, Mark D.; Hopkins, Carl; Robotham, Antony J.
2013-01-01
Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPR). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points that can be haptic, visual and auditory reference signals; 2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses for laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high fidelity virtual environments should mimic those seen in real situations we propose to use the observed effect as a robust objective test for presence and fidelity in VR. PMID:23840760
Encountered-Type Haptic Interface for Representation of Shape and Rigidity of 3D Virtual Objects.
Takizawa, Naoki; Yano, Hiroaki; Iwata, Hiroo; Oshiro, Yukio; Ohkohchi, Nobuhiro
2017-01-01
This paper describes the development of an encountered-type haptic interface that can generate the physical characteristics, such as shape and rigidity, of three-dimensional (3D) virtual objects using an array of newly developed non-expandable balloons. To alter the rigidity of each non-expandable balloon, the volume of air in it is controlled through a linear actuator and a pressure sensor based on Hooke's law. Furthermore, to change the volume of each balloon, its exposed surface area is controlled by using another linear actuator with a trumpet-shaped tube. A position control mechanism is constructed to display virtual objects using the balloons. The 3D position of each balloon is controlled using a flexible tube and a string. The performance of the system is tested and the results confirm the effectiveness of the proposed principle and interface.
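Rendering a target rigidity with such a balloon reduces, in the simplest view, to holding the internal pressure that makes the restoring force follow Hooke's law for a given indentation. A sketch that treats the contact area as constant, a simplification of the real device, whose exposed surface area is itself actuated:

```python
def target_pressure(stiffness, indentation, contact_area):
    """Pressure the balloon controller should hold so that a fingertip
    indenting by `indentation` feels the restoring force F = k * x
    (Hooke's law).

    Units: stiffness in N/m, indentation in m, contact_area in m^2;
    the returned pressure is in Pa (P = F / A).
    """
    force = stiffness * indentation          # Hooke's law target force
    return force / contact_area              # pressure the sensor should read
```

In the device, a pressure sensor closes the loop: the linear actuator adjusts the air volume until the measured pressure matches this target for the sensed indentation.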
Haptic feedback can provide an objective assessment of arthroscopic skills.
Chami, George; Ward, James W; Phillips, Roger; Sherman, Kevin P
2008-04-01
The outcome of arthroscopic procedures is related to the surgeon's skills in arthroscopy. Currently, evaluation of such skills relies on direct observation by a surgeon trainer. This type of assessment, by its nature, is subjective and time-consuming. The aim of our study was to identify whether haptic information generated from arthroscopic tools could distinguish between skilled and less skilled surgeons. A standard arthroscopic probe was fitted with a force/torque sensor. The probe was used by five surgeons with different levels of experience in knee arthroscopy performing 11 different tasks in 10 standard knee arthroscopies. The force/torque data from the hand and tool interface were recorded and synchronized with a video recording of the procedure. The torque magnitude and patterns generated were analyzed and compared. A computerized system was used to analyze the force/torque signature based on general principles for quality of performance using such measures as economy in movement, time efficiency, and consistency in performance. The results showed a considerable correlation between three haptic parameters and the surgeon's experience, which could be used in an automated objective assessment system for arthroscopic surgery. Level II, diagnostic study. See the Guidelines for Authors for a complete description of levels of evidence.
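Metrics such as economy of movement, time efficiency, and consistency can be computed directly from a force/torque trace. A toy sketch; the study's actual metric definitions are not given in the abstract:

```python
def skill_metrics(torques, dt):
    """Toy versions of three performance measures from a torque trace
    sampled every `dt` seconds: total task time (time efficiency),
    total signal path length (economy of movement: less wasted motion
    means a shorter path), and standard deviation (consistency)."""
    n = len(torques)
    task_time = n * dt
    path_length = sum(abs(torques[i] - torques[i - 1]) for i in range(1, n))
    mean = sum(torques) / n
    variability = (sum((t - mean) ** 2 for t in torques) / n) ** 0.5
    return {"time_s": task_time, "path": path_length, "std": variability}
```

An automated assessment system would compare these signatures against reference distributions collected from surgeons of known experience levels.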
Fingerprint-Inspired Flexible Tactile Sensor for Accurately Discerning Surface Texture.
Cao, Yudong; Li, Tie; Gu, Yang; Luo, Hui; Wang, Shuqi; Zhang, Ting
2018-04-01
Inspired by the epidermal-dermal and outer microstructures of the human fingerprint, a novel flexible sensor device is designed to improve haptic perception and surface texture recognition; it consists of single-walled carbon nanotubes, polyethylene, and polydimethylsiloxane with interlocked and outer micropyramid arrays. The sensor shows high pressure sensitivity (−3.26 kPa⁻¹ in the pressure range of 0-300 Pa), and it can detect the shear force changes induced by the dynamic interaction between the outer micropyramid structure on the sensor and the tested material surface; the minimum dimension of the microstripe that can be discerned is as low as 15 µm × 15 µm (interval × width). To demonstrate the texture discrimination capability, the sensors are tested for accurately discerning various surface textures, such as those of different fabrics, Braille characters, and inverted pyramid patterns, which gives them great potential in robot skins, haptic perception, and related applications. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Morphological Computation of Haptic Perception of a Controllable Stiffness Probe.
Sornkarn, Nantachai; Dasgupta, Prokar; Nanayakkara, Thrishantha
2016-01-01
When people are asked to palpate a novel soft object to discern its physical properties such as texture, elasticity, and even non-homogeneity, they not only regulate probing behaviors, but also the co-contraction level of antagonistic muscles to control the mechanical impedance of fingers. It is suspected that such behavior tries to enhance haptic perception by regulating the function of mechanoreceptors at different depths of the fingertips and proprioceptive sensors such as tendon and spindle sensors located in muscles. In this paper, we designed and fabricated a novel two-degree of freedom variable stiffness indentation probe to investigate whether the regulation of internal stiffness, indentation, and probe sweeping velocity (PSV) variables affect the accuracy of the depth estimation of stiff inclusions in an artificial silicon phantom using information gain metrics. Our experimental results provide new insights into not only the biological phenomena of haptic perception but also new opportunities to design and control soft robotic probes.
NASA Technical Reports Server (NTRS)
Frank, Andreas O.; Twombly, I. Alexander; Barth, Timothy J.; Smith, Jeffrey D.; Dalton, Bonnie P. (Technical Monitor)
2001-01-01
We have applied the linear elastic finite element method to compute haptic force feedback and domain deformations of soft tissue models for use in virtual reality simulators. Our results show that, for virtual object models built from high-resolution 3D data (>10,000 nodes), real-time haptic computations (>500 Hz) are not currently possible using traditional methods. Current research efforts are focused on the following areas: 1) efficient implementation of fully adaptive multi-resolution methods and 2) multi-resolution methods with specialized basis functions to capture the singularity at the haptic interface (point loading). To achieve real-time computation, we propose parallel processing of a Jacobi-preconditioned conjugate gradient method applied to a reduced system of equations resulting from surface domain decomposition. This can be achieved effectively using reconfigurable computing systems such as field-programmable gate arrays (FPGAs), thereby providing a flexible solution that allows for new FPGA implementations as improved algorithms become available. The resulting soft tissue simulation system would meet NASA Virtual Glovebox requirements and, at the same time, provide a generalized simulation engine for any immersive environment application, such as biomedical/surgical procedures or interactive scientific applications.
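The proposed solver is a standard Jacobi-preconditioned conjugate gradient. A minimal NumPy sketch on a synthetic symmetric positive-definite system (the matrix here is a stand-in, not a finite element stiffness matrix from the paper):

```python
import numpy as np

def jacobi_pcg(K, f, tol=1e-8, max_iter=500):
    """Jacobi-preconditioned conjugate gradient for an SPD system K u = f."""
    u = np.zeros(f.size)
    M_inv = 1.0 / np.diag(K)           # Jacobi preconditioner: inverse diagonal
    r = f - K @ u                      # initial residual
    z = M_inv * r                      # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Kp = K @ p
        alpha = rz / (p @ Kp)
        u += alpha * p
        r -= alpha * Kp
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return u

# Toy SPD system standing in for a reduced surface stiffness matrix.
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 50))
K = A @ A.T + 50 * np.eye(50)          # well-conditioned SPD matrix
f = rng.normal(size=50)
u = jacobi_pcg(K, f)
print("residual norm:", np.linalg.norm(K @ u - f))
```

On well-conditioned systems like this one the loop converges in a few dozen iterations; the appeal for FPGA parallelization is that each iteration is dominated by a single matrix-vector product.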
Mechanical model of orthopaedic drilling for augmented-haptics-based training.
Pourkand, Ashkan; Zamani, Naghmeh; Grow, David
2017-10-01
In this study, augmented-haptic feedback is used to combine a physical object with virtual elements in order to simulate anatomic variability in bone. This requires generating levels of force/torque consistent with clinical bone drilling, which exceed the capabilities of commercially available haptic devices. Accurate total force generation is facilitated by a predictive model of axial force during simulated orthopaedic drilling. This model is informed by kinematic data collected while drilling into synthetic bone samples using an instrumented linkage attached to the orthopaedic drill. Axial force is measured using a force sensor incorporated into the bone fixture. A nonlinear function, relating force to axial position and velocity, was used to fit the data. The normalized root-mean-square error (RMSE) of forces predicted by the model compared to those measured experimentally was 0.11 N across various bones with significant differences in geometry and density. This suggests that a predictive model can be used to capture relevant variations in the thickness and hardness of cortical and cancellous bone. The practical performance of this approach is measured using the Phantom Premium haptic device, with some required customizations. Copyright © 2017 Elsevier Ltd. All rights reserved.
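The model-fitting step described above can be sketched with SciPy. The exponential-plus-linear form below is an assumption for illustration, since the abstract specifies only that force is a nonlinear function of axial position and velocity:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical model form (assumed, not from the paper): axial force as a
# saturating function of position z plus a linear velocity term.
def force_model(X, a, b, c):
    z, v = X
    return a * (1.0 - np.exp(-b * z)) + c * v

# Synthetic drilling data (z in mm, v in mm/s, force in N).
rng = np.random.default_rng(2)
z = rng.uniform(0, 10, 200)
v = rng.uniform(0, 5, 200)
true_force = 30 * (1 - np.exp(-0.4 * z)) + 2.0 * v
force = true_force + rng.normal(0, 0.5, 200)

# Fit the nonlinear model and report RMSE, normalized by the force range.
params, _ = curve_fit(force_model, (z, v), force, p0=(20, 0.5, 1.0))
pred = force_model((z, v), *params)
rmse = np.sqrt(np.mean((pred - force) ** 2))
nrmse = rmse / (force.max() - force.min())
print(f"RMSE = {rmse:.2f} N, normalized RMSE = {nrmse:.3f}")
```

The fitted model can then be evaluated online to predict the axial force that the haptic device must render at a given drill position and velocity.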
Kokes, Rebecca; Lister, Kevin; Gullapalli, Rao; Zhang, Bao; MacMillan, Alan; Richard, Howard; Desai, Jaydev P.
2009-01-01
Objective: The purpose of this paper is to explore the feasibility of developing an MRI-compatible needle driver system for radiofrequency ablation (RFA) of breast tumors under continuous MR imaging, teleoperated by a haptic feedback device from outside the scanning room. The needle driver prototype was designed and tested for both tumor targeting capability and RFA. Methods: The single degree-of-freedom (DOF) prototype was interfaced with a PHANToM haptic device controlled from outside the scanning room. Experiments were performed to demonstrate MRI compatibility and position control accuracy with hydraulic actuation, along with an experiment to determine the PHANToM's ability to guide the RFA tool to a tumor nodule within a phantom breast tissue model while continuously imaging within the MRI scanner and receiving force feedback from the RFA tool. Results: Hydraulic actuation is shown to be a feasible actuation technique for operation in an MRI environment. The design is MRI-compatible in all aspects except for force sensing in the directions perpendicular to the direction of motion. Experiments confirm that the user is able to distinguish healthy from cancerous tissue in a phantom model when provided with both visual (imaging) feedback and haptic feedback. Conclusion: The teleoperated 1-DOF needle driver system presented in this paper demonstrates the feasibility of implementing an MRI-compatible robot for RFA of breast tumors with haptic feedback capability. PMID:19303805
Magdalon, Eliane C; Michaelsen, Stella M; Quevedo, Antonio A; Levin, Mindy F
2011-09-01
Virtual reality (VR) technology is being used with increasing frequency as a training medium for motor rehabilitation. However, before addressing training effectiveness in virtual environments (VEs), it is necessary to identify whether movements made in such environments are kinematically similar to those made in physical environments (PEs), and what effect the provision of haptic feedback has on these movement patterns. These questions are important since reach-to-grasp movements may be inaccurate when visual or haptic feedback is altered or absent. Our goal was to compare kinematics of reaching and grasping movements to three objects performed in an immersive three-dimensional (3D) VE with haptic feedback (cyberglove/grasp system) viewed through a head-mounted display to those made in an equivalent PE. We also compared movements in the PE made with and without wearing the cyberglove/grasp haptic feedback system. Ten healthy subjects (8 women, 62.1 ± 8.8 years) reached and grasped objects requiring 3 different grasp types (can, diameter 65.6 mm, cylindrical grasp; screwdriver, diameter 31.6 mm, power grasp; pen, diameter 7.5 mm, precision grasp) in the PE and visually similar virtual objects in the VE. Temporal and spatial arm and trunk kinematics were analyzed. Movements were slower and grip apertures were wider when wearing the glove in both the PE and the VE compared to movements made in the PE without the glove. When wearing the glove, subjects used similar reaching trajectories in both environments, preserved the coordination between reaching and grasping, and scaled grip aperture to object size for the larger object (cylindrical grasp). However, in the VE compared to the PE, movements were slower and had longer deceleration times, elbow extension was greater when reaching to the smallest object, and apertures were wider for the power and precision grip tasks.
Overall, the differences in spatial and temporal kinematics of movements between environments were greater than those due only to wearing the cyberglove/grasp system. Differences in movement kinematics due to the viewing environment were likely due to a lack of prior experience with the virtual environment, an uncertainty of object location and the restricted field-of-view when wearing the head-mounted display. The results can be used to inform the design and disposition of objects within 3D VEs for the study of the control of prehension and for upper limb rehabilitation. Copyright © 2011 Elsevier B.V. All rights reserved.
Ferre, Manuel; Galiana, Ignacio; Aracil, Rafael
2011-01-01
This paper describes the design and calibration of a thimble that measures the forces applied by a user during manipulation of virtual and real objects. Haptic devices benefit from force measurement capabilities at their end-point. However, the heavy weight and cost of force sensors prevent their widespread incorporation in these applications. The design of a lightweight, user-adaptable, and cost-effective thimble with four contact force sensors is described in this paper. The sensors are calibrated before being placed in the thimble to provide normal and tangential forces. Normal forces are exerted directly by the fingertip and thus can be properly measured. Tangential forces are estimated by sensors strategically placed in the thimble sides. Two applications are provided in order to facilitate an evaluation of sensorized thimble performance. These applications focus on: (i) force signal edge detection, which determines task segmentation of virtual object manipulation, and (ii) the development of complex object manipulation models, wherein the mechanical features of a real object are obtained and these features are then reproduced for training by means of virtual object manipulation. PMID:22247677
Perception and Haptic Rendering of Friction Moments.
Kawasaki, H; Ohtuka, Y; Koide, S; Mouri, T
2011-01-01
This paper considers moments due to friction forces on the human fingertip. A computational technique called the friction moment arc method is presented. The method computes the static and/or dynamic friction moment independent of a friction force calculation. In addition, a new finger holder to display friction moment is presented. This device incorporates a small brushless motor and disk, and connects the human's finger to an interface finger of the five-fingered haptic interface robot HIRO II. Subjects' perception of friction moment while wearing the finger holder, as well as perceptions during object manipulation in a virtual reality environment, were evaluated experimentally.
Characteristic analysis and simulation for polysilicon comb micro-accelerometer
NASA Astrophysics Data System (ADS)
Liu, Fengli; Hao, Yongping
2008-10-01
A high force update rate is a key factor in achieving high-performance haptic rendering, which imposes a stringent real-time requirement on the execution environment of the haptic system. This requirement confines the haptic system to simplified environments in order to reduce the computational cost of haptic rendering algorithms. In this paper, we present a novel "hyper-threading" architecture consisting of several threads for haptic rendering. A high force update rate is achieved with a relatively large computation time interval for each haptic loop. The proposed method was tested and proved effective in experiments on a virtual-wall prototype haptic system using the Delta haptic device.
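The decoupling idea, a slow environment computation feeding a fast force loop, can be sketched with two threads. The rates, stiffness value, and shared-state layout below are illustrative assumptions, not the paper's implementation (and a real haptic loop would run in a real-time environment, not Python):

```python
import threading
import time

# Shared state written by a slow "simulation" thread and read by a fast
# force loop; a lock keeps reads and writes consistent.
state = {"penetration": 0.0}
lock = threading.Lock()
stop = threading.Event()

def simulation_loop():
    # Slow environment update (e.g. a ~30 Hz collision/deformation pass).
    while not stop.is_set():
        with lock:
            state["penetration"] = min(state["penetration"] + 0.001, 0.01)
        time.sleep(1 / 30)

def haptic_loop(n_updates=2000):
    # Fast force loop: stiffness-based virtual wall, F = k * penetration.
    k = 1000.0  # N/m (illustrative)
    forces = []
    for _ in range(n_updates):
        with lock:
            depth = state["penetration"]
        forces.append(k * depth)
    return forces

sim = threading.Thread(target=simulation_loop)
sim.start()
forces = haptic_loop()
stop.set()
sim.join()
print("last force:", forces[-1], "N")
```

The force loop never blocks on the environment computation; it always renders from the latest completed simulation state, which is the essence of tolerating a large computation interval per haptic cycle.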
The impact of haptic feedback on students' conceptions of the cell
NASA Astrophysics Data System (ADS)
Minogue, James
2005-07-01
The purpose of this study was to investigate the efficacy of adding haptic (sense of touch) feedback to computer-generated visualizations for use in middle school science instruction. Current technology allows for the simulation of tactile and kinesthetic sensations via haptic devices and a computer interface. This study, conducted with middle school students (n = 80), explored the cognitive and affective impacts of this innovative technology on students' conceptions of the cell and the process of passive transport. A pretest-posttest control group design was used and participants were randomly assigned to one of two treatment groups (n = 40 for each). Both groups experienced the same core computer-mediated instructional program. This Cell Exploration program engaged students in a 3-D immersive environment that allowed them to actively investigate the form and function of a typical animal cell, including its major organelles. The program also engaged students in a study of the structure and function of the cell membrane as it pertains to the process of passive transport and the mechanisms behind the membrane's selective permeability. As they conducted their investigations, students in the experimental group received bi-modal visual and haptic (simulated tactile and kinesthetic) feedback, whereas the control group students experienced the program with only visual stimuli. A battery of assessments, including objective and open-ended written-response items as well as a haptic performance assessment, was used to gather quantitative and qualitative data regarding changes in students' understandings of the cell concepts prior to and following their completion of the instructional program. Additionally, the impact of haptics on the affective domain of students' learning was assessed using a post-experience semi-structured interview and an attitudinal survey.
Results showed that students from both conditions (Visual-Only and Visual + Haptic) found the instructional program interesting and engaging. Additionally, the vast majority of the students reported that they learned a lot about and were more interested in the topic due to their participation. Moreover, students who received the bi-modal (Visual + Haptic) feedback indicated that they experienced lower levels of frustration and spatial disorientation as they conducted their investigations when compared to individuals that relied solely on vision. There were no significant differences measured across the treatment groups on the cognitive assessment items. Despite this finding, the study provided valuable insight into the theoretical and practical considerations involved in the development of multimodal instructional programs.
Alió, Jorge L; Plaza-Puche, Ana B; Javaloy, Jaime; Ayala, María José; Vega-Estrada, Alfredo
2013-04-01
To compare the visual and intraocular optical quality outcomes of different designs of the refractive rotationally asymmetric multifocal intraocular lens (MFIOL) (Lentis Mplus; Oculentis GmbH, Berlin, Germany) with or without capsular tension ring (CTR) implantation. One hundred thirty-five consecutive eyes of 78 patients with cataract (ages 36 to 82 years) were divided into three groups: 43 eyes implanted with the C-Loop haptic design without CTR (C-Loop haptic only group); 47 eyes implanted with the C-Loop haptic design with CTR (C-Loop haptic with CTR group); and 45 eyes implanted with the plate-haptic design (plate-haptic group). Visual acuity, contrast sensitivity, defocus curve, and ocular and intraocular optical quality were evaluated at 3 months postoperatively. Significant differences in the postoperative sphere were found (P = .01), with a more myopic postoperative refraction in the C-Loop haptic only group. No significant differences were detected in photopic and scotopic contrast sensitivity among groups (P ≥ .05). Significantly better visual acuities were present in the C-Loop haptic with CTR group for defocus levels of -2.0, -1.5, -1.0, and -0.50 D (P ≤ .03). Statistically significant differences among groups were found in total intraocular root-mean-square (RMS), high-order intraocular RMS, and intraocular coma-like RMS aberrations (P ≤ .04), with lower values in the plate-haptic group. The plate-haptic design and the C-Loop haptic design with CTR implantation both allow good visual rehabilitation. However, better refractive predictability and intraocular optical quality were obtained with the plate-haptic design without CTR implantation. The plate-haptic design seems better suited to support rotationally asymmetric MFIOL optics. Copyright 2013, SLACK Incorporated.
Limited value of haptics in virtual reality laparoscopic cholecystectomy training.
Thompson, Jonathan R; Leonard, Anthony C; Doarn, Charles R; Roesch, Matt J; Broderick, Timothy J
2011-04-01
Haptics is an expensive addition to virtual reality (VR) simulators, and its added value to training has not been proven. This study evaluated the benefit of haptics in VR laparoscopic surgery training for novices. The Simbionix LapMentor II haptic VR simulator was used in the study. Thirty-three laparoscopic novice students were randomly placed in one of three groups: control, haptics-trained, or nonhaptics-trained. The control group performed nine basic laparoscopy tasks and four cholecystectomy procedural tasks one time with haptics engaged at the default setting. The haptics group was trained to proficiency in the basic tasks and then performed each of the procedural tasks one time with haptics engaged. The nonhaptics group used the same training protocol except that haptics was disengaged. The proficiency values used were previously published expert values. Each group was assessed in the performance of 10 laparoscopic cholecystectomies (alternating with and without haptics). Performance was measured via automatically collected simulator data. The three groups exhibited no differences in terms of sex, education level, hand dominance, video game experience, surgical experience, and nonsurgical simulator experience. The number of attempts required to reach proficiency did not differ between the haptics- and nonhaptics-trained groups. The haptics and nonhaptics groups exhibited no difference in performance. Both training groups outperformed the control group in number of movements as well as path length of the left instrument. In addition, the nonhaptics group outperformed the control group in total time. Haptics does not improve the efficiency or effectiveness of LapMentor II VR laparoscopic surgery training. The limited benefit and the significant cost of haptics suggest that haptics should not be included routinely in VR laparoscopic surgery training.
Nardelli, Mimma; Greco, Alberto; Bianchi, Matteo; Scilingo, Enzo Pasquale; Valenza, Gaetano
2016-08-01
The hedonic attributes of cutaneous elicitation play a crucial role in everyday life, influencing our behavior and psychophysical state. However, the correlation between this hedonic aspect of touch and the physiological response of the Autonomic Nervous System (ANS), which is intimately connected to emotions, still needs to be investigated in depth. This study reports on caress-like stimuli conveyed to the forearm of 32 healthy subjects through different fabrics actuated by a haptic device, which can control both the strength (i.e., the normal force exerted by the fabric) and the velocity of the elicitation. The mimicked caresses were elicited with a fixed force of 6 N, two velocities (9.4 mm/s and 65 mm/s), and four fabrics with different textures: burlap, hemp, velvet, and silk. Participants were asked to score the caress-like stimuli in terms of arousal and valence through a self-assessment questionnaire. Heartbeat data related to the fabrics perceived as most pleasant (silk) and most unpleasant (burlap) were used as input to an automatic pattern recognition procedure. Accordingly, considering gender differences, support vector machines using features extracted from linear and nonlinear heartbeat dynamics showed a recognition accuracy of 84.38% (men) and 78.13% (women) in discerning between burlap and silk elicitations at the higher velocity. Results suggest that the fabrics used for the caress-like stimulation significantly affect nonlinear cardiovascular dynamics, with differences also according to gender.
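The classification step can be sketched with scikit-learn. The three "heartbeat features" and their distributions below are synthetic stand-ins, not the study's actual linear and nonlinear indices:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic per-trial heartbeat features (e.g. mean RR interval, RMSSD,
# a nonlinear index); class means differ by roughly one standard deviation.
rng = np.random.default_rng(3)
n = 64  # trials per class
silk = rng.normal(loc=[0.85, 0.040, 1.1], scale=[0.05, 0.01, 0.2], size=(n, 3))
burlap = rng.normal(loc=[0.80, 0.030, 0.9], scale=[0.05, 0.01, 0.2], size=(n, 3))
X = np.vstack([silk, burlap])
y = np.array([0] * n + [1] * n)  # 0 = silk, 1 = burlap

# RBF-kernel SVM with feature standardization, scored by cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```

Cross-validated accuracy, rather than training accuracy, is the appropriate analogue of the recognition rates quoted in the abstract.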
Creating objects and object categories for studying perception and perceptual learning.
Hauffen, Karin; Bart, Eugene; Brady, Mark; Kersten, Daniel; Hegdé, Jay
2012-11-02
In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties. Many innovative and useful methods currently exist for creating novel objects and object categories (also see refs. 7,8). However, generally speaking, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter, and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection. 
Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper. We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations. The implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. It is important to note that, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have. Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis.
Self-Control of Haptic Assistance for Motor Learning: Influences of Frequency and Opinion of Utility
Williams, Camille K.; Tseung, Victrine; Carnahan, Heather
2017-01-01
Studies of self-controlled practice have shown benefits when learners controlled feedback schedule, use of assistive devices, and task difficulty, with benefits attributed to the information-processing and motivational advantages of self-control. Although haptic assistance serves as feedback, aids task performance, and modifies task difficulty, researchers have yet to explore whether self-control over haptic assistance could be beneficial for learning. We explored whether self-control of haptic assistance would be beneficial for learning a tracing task. Self-controlled participants selected the practice blocks on which they would receive haptic assistance, while participants in a yoked group received haptic assistance on blocks determined by a matched self-controlled participant. We inferred learning from performance on retention tests without haptic assistance. From qualitative analysis of open-ended questions about the rationale for, and experience of, the haptic assistance that was chosen or provided, themes emerged regarding participants' views of the utility of haptic assistance for performance and learning. Results showed that learning was directly impacted by the frequency of haptic assistance (for self-controlled participants only) and by participants' view of haptic assistance. Furthermore, self-controlled participants' views were significantly associated with their requested frequency of haptic assistance. We discuss these findings as further support for the beneficial role of self-controlled practice in motor learning. PMID:29255438
Schweykart, N; Reinhard, T; Engelhardt, S; Sundmacher, R
1999-06-01
Ultrasound biomicroscopy (UBM) makes it possible to determine the haptic position of posterior chamber lenses (PCLs) in relation to adjacent structures. In transsclerally sutured PCLs, comparison of haptic positions localized intraoperatively by endoscopy with those localized postoperatively by UBM showed agreement of only 81%. The discrepancy in the remaining 19% of examined haptic positions was attributed to postoperative dislocation, without direct proof of this assumption. The purpose of this study was therefore to correlate UBM findings with haptic positions determined simultaneously by gonioscopy in aniridia patients after black-diaphragm PCL implantation. The haptic positions of black-diaphragm PCL implants in 20 patients with congenital and 13 patients with traumatic aniridia were determined via UBM (50-MHz probe) and gonioscopy a mean of 44.4 (range: 6-75) months postoperatively. 39 of 66 haptic positions could be localized by both gonioscopy and UBM; 38 of these haptics (97.4%) showed the same position with both examination techniques. The haptic position could not be determined by one of the two techniques in 27 of 66 haptics (11 in gonioscopy, 16 in UBM), primarily because of haptic position behind iris remnants and corneal opacities in gonioscopy, and scarring of the ciliary body in UBM. The validity of UBM in localizing PCL haptics was confirmed gonioscopically, which also supports our prior assumption of postoperative displacement of IOL haptics after transscleral suturing in about 20% of cases. Scarring of the ciliary body was the most important obstacle to determining PCL haptic positions in relation to adjacent structures.
Cellini, Cristiano; Scocchia, Lisa; Drewing, Knut
2016-10-01
In the flash-lag illusion, a brief visual flash and a moving object presented at the same location appear to be offset with the flash trailing the moving object. A considerable amount of studies investigated the visual flash-lag effect, and flash-lag-like effects have also been observed in audition, and cross-modally between vision and audition. In the present study, we investigate whether a similar effect can also be observed when using only haptic stimuli. A fast vibration (or buzz, lasting less than 20 ms) was applied to the moving finger of the observers and employed as a "haptic flash." Participants performed a two-alternative forced-choice (2AFC) task where they had to judge whether the moving finger was located to the right or to the left of the stationary finger at the time of the buzz. We used two different movement velocities (Slow and Fast conditions). We found that the moving finger was systematically misperceived to be ahead of the stationary finger when the two were physically aligned. This result can be interpreted as a purely haptic analogue of the flash-lag effect, which we refer to as "buzz-lag effect." The buzz-lag effect can be well accounted for by the temporal-sampling explanation of flash-lag-like effects.
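A 2AFC bias like the buzz-lag effect is typically quantified as the point of subjective equality (PSE) of a fitted psychometric function. A minimal SciPy sketch with made-up response proportions (a negative PSE here means the physically aligned fingers are already judged misaligned, i.e. a lag-like bias):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import expit  # logistic function

# Hypothetical 2AFC data: physical offset of the moving finger at buzz time
# (mm) and proportion of "moving finger ahead" responses. Illustrative only.
offsets = np.array([-6, -4, -2, 0, 2, 4, 6], dtype=float)
p_ahead = np.array([0.10, 0.20, 0.45, 0.70, 0.90, 0.95, 0.99])

def psychometric(x, pse, slope):
    # Logistic psychometric function; the PSE is its 50% point.
    return expit(slope * (x - pse))

params, _ = curve_fit(psychometric, offsets, p_ahead, p0=(0.0, 1.0))
pse, slope = params
print(f"PSE = {pse:.2f} mm")
```

Because the proportion "ahead" already exceeds 0.5 at zero offset, the fitted PSE is negative, which is the direction of bias the buzz-lag effect predicts.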
ProMIS augmented reality training of laparoscopic procedures face validity.
Botden, Sanne M B I; Buzink, Sonja N; Schijven, Marlies P; Jakimowicz, Jack J
2008-01-01
Conventional video trainers lack the ability to assess the trainee objectively, but offer modalities that are often missing in virtual reality simulation, such as realistic haptic feedback. The ProMIS augmented reality laparoscopic simulator retains the benefits of a traditional box trainer, by using original laparoscopic instruments and tactile tasks, but additionally generates objective measures of performance. Fifty-five participants performed a "basic skills" and a "suturing and knot-tying" task on ProMIS, after which they filled out a questionnaire regarding realism, haptics, and didactic value of the simulator on a 5-point Likert scale. The participants were allocated to two experience groups: "experienced" (>50 procedures and >5 sutures; N = 27) and "moderately experienced" (<50 procedures and <5 sutures; N = 28). The general consensus among all participants, particularly the experienced, was that ProMIS is a useful tool for training (mean: 4.67, SD: 0.48). It was considered very realistic (mean: 4.44, SD: 0.66), with good haptics (mean: 4.10, SD: 0.97) and didactic value (mean: 4.10, SD: 0.65). This study established the face validity of the ProMIS augmented reality simulator for "basic skills" and "suturing and knot-tying" tasks. ProMIS was considered a good tool for training laparoscopic skills for surgical residents and surgeons.
Aging and the visual, haptic, and cross-modal perception of natural object shape.
Norman, J Farley; Crabtree, Charles E; Norman, Hideko F; Moncrief, Brandon K; Herrmann, Molly; Kapley, Noah
2006-01-01
One hundred observers participated in two experiments designed to investigate aging and the perception of natural object shape. In the experiments, younger and older observers performed either a same/different shape discrimination task (experiment 1) or a cross-modal matching task (experiment 2). Quantitative effects of age were found in both experiments. The effect of age in experiment 1 was limited to cross-modal shape discrimination: there was no effect of age upon unimodal (i.e., within a single perceptual modality) shape discrimination. The effect of age in experiment 2 was eliminated when the older observers were given an unlimited amount of time to perform the task or when the number of response alternatives was decreased. Overall, the results of the experiments reveal that older observers can effectively perceive 3-D shape from both vision and haptics.
NASA Astrophysics Data System (ADS)
Jones, M. G.; Andre, T.; Kubasko, D.; Bokinsky, A.; Tretter, T.; Negishi, A.; Taylor, R.; Superfine, R.
2004-01-01
This study examined hands-on experiences in the context of an investigation of viruses and explored how and why hands-on experiences may be effective. We sought to understand whether or not touching and manipulating materials and objects could lead to a deeper, more effective type of knowing than that we obtain from sight or sound alone. Four classes of high school biology students and four classes of seventh graders participated in the study that examined students' use of remote microscopy with a new scientific tool called the nanoManipulator, which enabled them to reach out and touch live viruses inside an atomic force microscope. Half of the students received full haptic (tactile and kinesthetic) feedback from a haptic joystick, whereas half of the students were able to use the haptic joystick to manipulate viruses but the tactile feedback was blocked. Results showed that there were significant gains from pre- to postinstruction across treatment groups for knowledge and attitudes. Students in both treatment groups developed conceptual models of viruses that were more consistent with current scientific research, including a move from a two-dimensional to a three-dimensional understanding of virus morphology. There were significant changes in students' understandings of scale; after instruction, students were more likely to identify examples of nanosized objects and be able to describe the degree to which a human would have to be shrunk to reach the size of a virus. Students who received full-haptic feedback had significantly better attitudes suggesting that the increased sensory feedback and stimulation may have made the experience more engaging and motivating to students.
A haptic sensing upgrade for the current EOD robotic fleet
NASA Astrophysics Data System (ADS)
Rowe, Patrick
2014-06-01
The past decade and a half has seen a tremendous rise in the use of mobile manipulator robotic platforms for bomb inspection and disposal, explosive ordnance disposal, and other extremely hazardous tasks in both military and civilian settings. Skilled operators are able to control these robotic vehicles in amazing ways given the very limited situational awareness obtained from a few on-board camera views. Future generations of robotic platforms will, no doubt, provide some sort of additional force or haptic sensor feedback to further enhance the operator's interaction with the robot, especially when dealing with fragile, unstable, and explosive objects. Unfortunately, the robot operators need this capability today. This paper discusses an approach to provide existing (and future) robotic mobile manipulator platforms, with which trained operators are already familiar and highly proficient, with this desired haptic and force feedback capability. The goals of this technology are to be rugged, reliable, and affordable. It should also be applicable to a wide range of existing robots with a wide variety of manipulator/gripper sizes and styles. Finally, the presentation of the haptic information to the operator is discussed, given that control devices that physically interact with operators are not widely available and are still in the research stage.
Short-term visual deprivation, tactile acuity, and haptic solid shape discrimination.
Crabtree, Charles E; Norman, J Farley
2014-01-01
Previous psychophysical studies have reported conflicting results concerning the effects of short-term visual deprivation upon tactile acuity. Some studies have found that 45 to 90 minutes of total light deprivation produce significant improvements in participants' tactile acuity as measured with a grating orientation discrimination task. In contrast, a single 2011 study found no such improvement while attempting to replicate these earlier findings. A primary goal of the current experiment was to resolve this discrepancy in the literature by evaluating the effects of a 90-minute period of total light deprivation upon tactile grating orientation discrimination. We also evaluated the potential effect of short-term deprivation upon haptic 3-D shape discrimination using a set of naturally-shaped solid objects. According to previous research, short-term deprivation enhances performance in a tactile 2-D shape discrimination task - perhaps a similar improvement also occurs for haptic 3-D shape discrimination. The results of the current investigation demonstrate that not only does short-term visual deprivation not enhance tactile acuity, it additionally has no effect upon haptic 3-D shape discrimination. While visual deprivation had no effect in our study, there was a significant effect of experience and learning for the grating orientation task - the participants' tactile acuity improved over time, independent of whether they had, or had not, experienced visual deprivation.
[Visual cuing effect for haptic angle judgment].
Era, Ataru; Yokosawa, Kazuhiko
2009-08-01
We investigated whether visual cues are useful for judging haptic angles. Participants explored three-dimensional angles with a virtual haptic feedback device. For visual cues, we used a location cue, which synchronized with haptic exploration, and a space cue, which specified the haptic space. In Experiment 1, angles were judged more correctly with both cues, but were overestimated with a location cue only. In Experiment 2, the visual cues emphasized depth; overestimation with location cues occurred, but space cues had no influence. The results showed that (a) when both cues are presented, haptic angles are judged more correctly; (b) location cues facilitate only motion information, not depth information; and (c) haptic angles are apt to be overestimated when both haptic and visual information are available.
Virtual Reality Cerebral Aneurysm Clipping Simulation With Real-time Haptic Feedback
Alaraj, Ali; Luciano, Cristian J.; Bailey, Daniel P.; Elsenousi, Abdussalam; Roitberg, Ben Z.; Bernardo, Antonio; Banerjee, P. Pat; Charbel, Fady T.
2014-01-01
Background With the decrease in the number of cerebral aneurysms treated surgically and the increase of complexity of those treated surgically, there is a need for simulation-based tools to teach future neurosurgeons the operative techniques of aneurysm clipping. Objective To develop and evaluate the usefulness of a new haptic-based virtual reality (VR) simulator in the training of neurosurgical residents. Methods A real-time sensory haptic feedback virtual reality aneurysm clipping simulator was developed using the Immersive Touch platform. A prototype middle cerebral artery aneurysm simulation was created from a computed tomography angiogram. Aneurysm and vessel volume deformation and haptic feedback are provided in a 3-D immersive VR environment. Intraoperative aneurysm rupture was also simulated. Seventeen neurosurgery residents from three residency programs tested the simulator and provided feedback on its usefulness and resemblance to real aneurysm clipping surgery. Results Residents felt that the simulation would be useful in preparing for real-life surgery. About two thirds of the residents felt that the 3-D immersive anatomical details provided a very close resemblance to real operative anatomy and accurate guidance for deciding surgical approaches. They believed the simulation is useful for preoperative surgical rehearsal and neurosurgical training. One third of the residents felt that the technology in its current form provided very realistic haptic feedback for aneurysm surgery. Conclusion Neurosurgical residents felt that the novel immersive VR simulator is helpful in their training especially since they do not get a chance to perform aneurysm clippings until very late in their residency programs. PMID:25599200
Haptic discrimination of bilateral symmetry in 2-dimensional and 3-dimensional unfamiliar displays.
Ballesteros, S; Manga, D; Reales, J M
1997-01-01
In five experiments, we tested the accuracy and sensitivity of the haptic system in detecting bilateral symmetry of raised-line shapes (Experiments 1 and 2) and unfamiliar 3-D objects (Experiments 3-5) under different time constraints and different modes of exploration. Touch was moderately accurate for detecting this property in raised displays. Experiment 1 showed that asymmetric judgments were systematically more accurate than were symmetric judgments with scanning by one finger. Experiment 2 confirmed the results of Experiment 1 but also showed that bimanual exploration facilitated processing of symmetric shapes without improving asymmetric detections. Bimanual exploration of 3-D objects was very accurate and significantly facilitated processing of symmetric objects under different time constraints (Experiment 3). Unimanual exploration did not differ from bimanual exploration (Experiment 4), but restricting hand movements to one enclosure reduced performance significantly (Experiment 5). Spatial reference information, signal detection measures, and hand movements in processing bilateral symmetry by touch are discussed.
Morphological Computation of Haptic Perception of a Controllable Stiffness Probe
Sornkarn, Nantachai; Dasgupta, Prokar; Nanayakkara, Thrishantha
2016-01-01
When people are asked to palpate a novel soft object to discern its physical properties such as texture, elasticity, and even non-homogeneity, they not only regulate probing behaviors, but also the co-contraction level of antagonistic muscles to control the mechanical impedance of fingers. It is suspected that such behavior tries to enhance haptic perception by regulating the function of mechanoreceptors at different depths of the fingertips and proprioceptive sensors such as tendon and spindle sensors located in muscles. In this paper, we designed and fabricated a novel two-degree-of-freedom variable stiffness indentation probe to investigate whether the regulation of internal stiffness, indentation, and probe sweeping velocity (PSV) variables affect the accuracy of the depth estimation of stiff inclusions in an artificial silicone phantom using information gain metrics. Our experimental results provide new insights into not only the biological phenomena of haptic perception but also new opportunities to design and control soft robotic probes. PMID:27257814
A New ER Fluid Based Haptic Actuator System for Virtual Reality
NASA Astrophysics Data System (ADS)
Böse, H.; Baumann, M.; Monkman, G. J.; Egersdörfer, S.; Tunayar, A.; Freimuth, H.; Ermert, H.; Khaled, W.
The concept and some steps in the development of a new actuator system which enables the haptic perception of mechanically inhomogeneous virtual objects are introduced. The system consists of a two-dimensional planar array of actuator elements containing an electrorheological (ER) fluid. When a user presses his fingers onto the surface of the actuator array, he perceives locally variable resistance forces generated by vertical pistons which slide in the ER fluid through the gaps between electrode pairs. The voltage in each actuator element can be individually controlled by a novel sophisticated switching technology based on optoelectric gallium arsenide elements. The haptic information which is represented at the actuator array can be transferred from a corresponding sensor system based on ultrasonic elastography. The combined sensor-actuator system may serve as a technology platform for various applications in virtual reality, like telemedicine where the information on the consistency of tissue of a real patient is detected by the sensor part and recorded by the actuator part at a remote location.
NASA Technical Reports Server (NTRS)
Mavroidis, Constantinos; Pfeiffer, Charles; Paljic, Alex; Celestino, James; Lennon, Jamie; Bar-Cohen, Yoseph
2000-01-01
For many years, the robotic community sought to develop robots that can eventually operate autonomously and eliminate the need for human operators. However, there is an increasing realization that there are some tasks that humans can perform significantly better but that, owing to associated hazards, distance, physical limitations, and other causes, only robots can be employed to perform. Remotely performing these types of tasks requires operating robots as human surrogates. While current "hand master" haptic systems are able to reproduce the feeling of rigid objects, they present great difficulties in emulating the feeling of remote/virtual stiffness. In addition, they tend to be heavy and cumbersome, and usually they only allow limited operator workspace. In this paper a novel haptic interface is presented to enable human operators to "feel" and intuitively mirror the stiffness/forces at remote/virtual sites, enabling control of robots as human surrogates. This haptic interface is intended to provide human operators intuitive feeling of the stiffness and forces at remote or virtual sites in support of space robots performing dexterous manipulation tasks (such as operating a wrench or a drill). Remote applications refer to the control of actual robots, whereas virtual applications refer to simulated operations. The developed haptic interface will be applicable to IVA operated robotic EVA tasks to enhance human performance, extend crew capability and assure crew safety. The electrically controlled stiffness is obtained using constrained ElectroRheological Fluids (ERF), which change their viscosity under electrical stimulation. Forces applied at the robot end-effector due to a compliant environment will be reflected to the user using this ERF device where a change in the system viscosity will occur proportionally to the force to be transmitted.
In this paper, we will present the results of our modeling, simulation, and initial testing of such an electrorheological fluid (ERF) based haptic device.
Active skin as new haptic interface
NASA Astrophysics Data System (ADS)
Vuong, Nguyen Huu Lam; Kwon, Hyeok Yong; Chuc, Nguyen Huu; Kim, Duksang; An, Kuangjun; Phuc, Vuong Hong; Moon, Hyungpil; Koo, Jachoon; Lee, Youngkwan; Nam, Jae-Do; Choi, Hyouk Ryeol
2010-04-01
In this paper, we present a new haptic interface, called "active skin", which is configured with a tactile sensor and a tactile stimulator in a single haptic cell, with multiple haptic cells embedded in a dielectric elastomer. The active skin generates a wide variety of haptic sensations in response to touch by synchronizing the sensor and the stimulator. In this paper, the design of the haptic cell is derived via iterative analysis and design procedures. A fabrication method dedicated to the proposed device is investigated and a controller to drive multiple haptic cells is developed. In addition, several experiments are performed to evaluate the performance of the active skin.
A comparison of haptic material perception in blind and sighted individuals.
Baumgartner, Elisabeth; Wiebel, Christiane B; Gegenfurtner, Karl R
2015-10-01
We investigated material perception in blind participants to explore the influence of visual experience on material representations and the relationship between visual and haptic material perception. In a previous study with sighted participants, we had found participants' visual and haptic judgments of material properties to be very similar (Baumgartner, Wiebel, & Gegenfurtner, 2013). In a categorization task, however, visual exploration had led to higher categorization accuracy than haptic exploration. Here, we asked congenitally blind participants to explore different materials haptically and rate several material properties in order to assess the role of the visual sense for the emergence of haptic material perception. Principal components analyses combined with a Procrustes superimposition showed that the material representations of blind and blindfolded sighted participants were highly similar. We also measured haptic categorization performance, which was equal for the two groups. We conclude that haptic material representations can emerge independently of visual experience, and that there are no advantages for either group of observers in haptic categorization. Copyright © 2015 Elsevier Ltd. All rights reserved.
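The PCA-plus-Procrustes comparison described above can be sketched in a few lines. This uses synthetic data, not the authors' ratings; it only illustrates how `scipy.spatial.procrustes` quantifies the similarity of two sets of component scores after removing translation, scale, and rotation.

```python
import numpy as np
from scipy.spatial import procrustes

# Hypothetical property-rating scores (materials x principal components),
# standing in for the two groups' PCA results.
rng = np.random.default_rng(0)
blind_scores = rng.normal(size=(8, 2))        # 8 materials, 2 components

# A rotated and scaled copy simulates a "highly similar" representation.
theta = np.deg2rad(30)
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
sighted_scores = 1.5 * blind_scores @ rotation.T

# Procrustes superimposition removes translation, scale, and rotation, then
# reports the residual disparity (sum of squared differences; 0 = identical).
m1, m2, disparity = procrustes(blind_scores, sighted_scores)
print(f"disparity after superimposition: {disparity:.6f}")
```

Because the second matrix is an exact similarity transform of the first, the residual disparity is essentially zero, which is the sense in which two representations can be called highly similar.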
Functionalization of Tactile Sensation for Robot Based on Haptograph and Modal Decomposition
NASA Astrophysics Data System (ADS)
Yokokura, Yuki; Katsura, Seiichiro; Ohishi, Kiyoshi
In the real world, robots should be able to recognize the environment in order to be of help to humans. A video camera and a laser range finder are devices that can help robots recognize the environment. However, these devices cannot obtain tactile information from environments. Future human-assisting robots should have the ability to recognize haptic signals, and a disturbance observer can possibly be used to provide the robot with this ability. In this study, a disturbance observer is employed in a mobile robot to functionalize the tactile sensation. This paper proposes a method that involves the use of haptograph and modal decomposition for the haptic recognition of road environments. The haptograph presents a graphic view of the tactile information, making it possible to classify road conditions intuitively. The robot controller is designed by considering the decoupled modal coordinate system, which consists of translational and rotational modes. Modal decomposition is performed by using a quarry matrix. Once the robot is provided with the ability to recognize tactile sensations, its usefulness to humans will increase.
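The modal decomposition described above, separating wheel-space signals into translational (common) and rotational (differential) modes, can be sketched with a second-order Hadamard-type transformation. The signal values below are invented for illustration; only the (sum, difference)/2 structure reflects the standard decoupling approach.

```python
import numpy as np

# Hypothetical wheel-space responses (e.g., reaction torques estimated by
# the disturbance observer for the right and left wheels).
tau_right = np.array([1.0, 0.8, 1.2, 0.9])
tau_left  = np.array([0.6, 0.8, 0.4, 0.7])

# Second-order decoupling matrix: maps wheel space onto translational
# (common) and rotational (differential) modes.
Q = 0.5 * np.array([[1.0,  1.0],
                    [1.0, -1.0]])

wheel_space = np.vstack([tau_right, tau_left])  # 2 x N samples
modes = Q @ wheel_space
tau_trans, tau_rot = modes[0], modes[1]

print("translational mode:", tau_trans)  # (right + left) / 2
print("rotational mode:  ", tau_rot)     # (right - left) / 2
```

The translational mode captures what both wheels feel in common (e.g., road roughness), while the rotational mode isolates left/right asymmetries.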
ERIC Educational Resources Information Center
White, Peter A.
2012-01-01
Forces are experienced in actions on objects. The mechanoreceptor system is stimulated by proximal forces in interactions with objects, and experiences of force occur in a context of information yielded by other sensory modalities, principally vision. These experiences are registered and stored as episodic traces in the brain. These stored…
Asymmetric Oscillation Distorts the Perceived Heaviness of Handheld Objects.
Amemiya, T; Maeda, T
2008-01-01
Weight perception has been of great interest for over three centuries. Most research has been concerned with the weight of static objects, and some illusions have been discovered. Here, we show a new illusion related to the perception of the heaviness of oscillating objects. We performed experiments that involved comparing the weight of two objects of identical physical appearance but with different gross weights and oscillation patterns (vibrating vertically at frequencies of 5 or 9 cycles per second with symmetric and asymmetric acceleration patterns). The results show that the perceived weight of an object vibrating with asymmetric acceleration increases compared to that with symmetric acceleration when the acceleration peaks in the gravity direction. In contrast, almost no heaviness perception change was observed in the anti-gravity direction. We speculate that this divergence is caused by the differential impact of two hypothesized perceptual mechanisms: the salience of pulse stimuli appears to have a strong influence in the gravity direction, whereas filling-in could explain our observations in the anti-gravity direction. The study of this haptic illusion can provide valuable insights not only into human perceptual mechanisms but also into the design of ungrounded haptic interfaces.
Saving and Reproduction of Human Motion Data by Using Haptic Devices with Different Configurations
NASA Astrophysics Data System (ADS)
Tsunashima, Noboru; Yokokura, Yuki; Katsura, Seiichiro
Recently, there has been increased focus on “haptic recording”; development of a motion-copying system is an efficient method for its realization. Haptic recording involves saving and reproduction of human motion data on the basis of haptic information. To increase the number of applications of the motion-copying system in various fields, it is necessary to reproduce human motion data by using haptic devices with different configurations. In this study, a method for the above-mentioned haptic recording is developed. In this method, human motion data are saved and reproduced on the basis of work space information, which is obtained by coordinate transformation of motor space information. The validity of the proposed method is demonstrated by experiments. With the proposed method, saving and reproduction of human motion data by using various devices is achieved. Furthermore, it is also possible to use haptic recording in various fields.
Teodorescu, Kinneret; Bouchigny, Sylvain; Korman, Maria
2013-08-01
In this study, we explored the time course of haptic stiffness discrimination learning and how it was affected by two experimental factors, the addition of visual information and/or knowledge of results (KR) during training. Stiffness perception may integrate both haptic and visual modalities. However, in many tasks, the visual field is typically occluded, forcing stiffness perception to be dependent exclusively on haptic information. No studies to date addressed the time course of haptic stiffness perceptual learning. Using a virtual environment (VE) haptic interface and a two-alternative forced-choice discrimination task, the haptic stiffness discrimination ability of 48 participants was tested across 2 days. Each day included two haptic test blocks separated by a training block. Additional visual information and/or KR were manipulated between participants during training blocks. Practice repetitions alone induced significant improvement in haptic stiffness discrimination. Between days, accuracy was slightly improved, but decision-time performance deteriorated. The addition of visual information and/or KR had only temporary effects on decision time, without affecting the time course of haptic discrimination learning. Learning in haptic stiffness discrimination appears to evolve through at least two distinctive phases: A single training session resulted in both immediate and latent learning. This learning was not affected by the training manipulations inspected. Training skills in VE in spaced sessions can be beneficial for tasks in which haptic perception is critical, such as surgery procedures, when the visual field is occluded. However, training protocols for such tasks should account for low impact of multisensory information and KR.
A kinesthetic washout filter for force-feedback rendering.
Danieau, Fabien; Lecuyer, Anatole; Guillotel, Philippe; Fleureau, Julien; Mollet, Nicolas; Christie, Marc
2015-01-01
Today, haptic feedback can be designed and associated with audiovisual content (haptic-audiovisuals or HAV). Although there are multiple means to create individual haptic effects, the issue of how to properly adapt such effects on force-feedback devices has not been addressed and is mostly a manual endeavor. We propose a new approach for the haptic rendering of HAV, based on a washout filter for force-feedback devices. A body model and an inverse kinematics algorithm simulate the user's kinesthetic perception. Then, the haptic rendering is adapted in order to handle transitions between haptic effects and to optimize the amplitude of effects regarding the device capabilities. Results of a user study show that this new haptic rendering can successfully improve the HAV experience.
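The paper's washout filter itself is not specified here, but the general idea behind washout filtering, a high-pass filter that passes transient force effects while sustained offsets decay so the device drifts back to neutral, can be sketched as follows. Parameter values are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a first-order washout (high-pass) filter for a
# force-feedback signal. Transients pass through; a sustained (step) input
# is gradually "washed out" so the device returns to its neutral position.

def washout(signal, dt=0.01, tau=0.5):
    """Discrete first-order high-pass: y[k] = a * (y[k-1] + x[k] - x[k-1])."""
    a = tau / (tau + dt)
    y = [0.0]
    for k in range(1, len(signal)):
        y.append(a * (y[-1] + signal[k] - signal[k - 1]))
    return y

# A step input (sustained push) passes almost fully at onset,
# then decays toward zero.
step = [0.0] * 10 + [1.0] * 200
out = washout(step)
print(out[10], out[-1])  # large at onset, close to zero afterward
```

The time constant `tau` trades off fidelity of sustained effects against how quickly the device recenters; the paper's rendering additionally accounts for device amplitude limits and transitions between effects.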
Structural impact detection with vibro-haptic interfaces
NASA Astrophysics Data System (ADS)
Jung, Hwee-Kwon; Park, Gyuhae; Todd, Michael D.
2016-07-01
This paper presents a new sensing paradigm for structural impact detection using vibro-haptic interfaces. The goal of this study is to allow humans to ‘feel’ structural responses (impact, shape changes, and damage) and eventually determine health conditions of a structure. The target applications for this study are aerospace structures, in particular, airplane wings. Both hardware and software components are developed to realize the vibro-haptic-based impact detection system. First, L-shape piezoelectric sensor arrays are deployed to measure the acoustic emission data generated by impacts on a wing. Unique haptic signals are then generated by processing the measured acoustic emission data. These haptic signals are wirelessly transmitted to human arms, and with the vibro-haptic interface, human pilots could identify impact location, intensity, and the possibility of subsequent damage initiation. With the haptic interface, the experimental results demonstrate that humans could correctly identify such events while reducing false indications of structural conditions by capitalizing on human classification capability. Several important aspects of this study, including development of haptic interfaces, design of optimal human training strategies, and extension of the haptic capability into structural impact detection are summarized in this paper.
Haptic wearables as sensory replacement, sensory augmentation and trainer - a review.
Shull, Peter B; Damian, Dana D
2015-07-20
Sensory impairments decrease quality of life and can slow or hinder rehabilitation. Small, computationally powerful electronics have enabled the recent development of wearable systems aimed to improve function for individuals with sensory impairments. The purpose of this review is to synthesize current haptic wearable research for clinical applications involving sensory impairments. We define haptic wearables as untethered, ungrounded body worn devices that interact with skin directly or through clothing and can be used in natural environments outside a laboratory. Results of this review are categorized by degree of sensory impairment. Total impairment, such as in an amputee, blind, or deaf individual, involves haptics acting as sensory replacement; partial impairment, as is common in rehabilitation, involves haptics as sensory augmentation; and no impairment involves haptics as trainer. This review found that wearable haptic devices improved function for a variety of clinical applications including: rehabilitation, prosthetics, vestibular loss, osteoarthritis, vision loss and hearing loss. Future haptic wearables development should focus on clinical needs, intuitive and multimodal haptic displays, low energy demands, and biomechanical compliance for long-term usage.
Pressman, Assaf; Karniel, Amir; Mussa-Ivaldi, Ferdinando A
2011-04-27
A new haptic illusion is described, in which the location of the mobile object affects the perception of its rigidity. There is theoretical and experimental support for the notion that limb position sense results from the brain combining ongoing sensory information with expectations arising from prior experience. How does this probabilistic state information affect one's tactile perception of the environment mechanics? In a simple estimation process, human subjects were asked to report the relative rigidity of two simulated virtual objects. One of the objects remained fixed in space and had various coefficients of stiffness. The other virtual object had constant stiffness but moved with respect to the subjects. Earlier work suggested that the perception of an object's rigidity is consistent with a process of regression between the contact force and the perceived amount of penetration inside the object's boundary. The amount of penetration perceived by the subject was affected by varying the position of the object. This, in turn, had a predictable effect on the perceived rigidity of the contact. Subjects' reports on the relative rigidity of the object are best accounted for by a probabilistic model in which the perceived boundary of the object is estimated based on its current location and on past observations. Therefore, the perception of contact rigidity is accounted for by a stochastic process of state estimation underlying proprioceptive localization of the hand.
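The regression account described above can be illustrated numerically: if the object's boundary is believed to lie deeper than it really is, early contact is not registered as penetration, the same forces are attributed to smaller penetrations, and the fitted stiffness inflates. All numbers below are invented for illustration, not taken from the study.

```python
import numpy as np

# Rigidity modeled as the regression slope relating contact force to
# perceived penetration inside the object boundary.
true_stiffness = 200.0                     # N/m of the simulated object
penetration = np.linspace(0.0, 0.02, 50)   # actual penetration (m)
force = true_stiffness * penetration       # simulated contact force (N)

# Hypothetical boundary misjudgment: the perceived boundary sits 5 mm
# deeper, so shallow contact registers as zero penetration.
boundary_bias = 0.005
perceived_penetration = np.maximum(penetration - boundary_bias, 0.0)

# The regression over the perceived data yields a slope above the true
# stiffness: the contact feels more rigid than it is.
slope, intercept = np.polyfit(perceived_penetration, force, 1)
print(f"fitted stiffness: {slope:.1f} N/m (true: {true_stiffness:.1f} N/m)")
```

Shifting the estimated boundary in the other direction would have the opposite effect, which is one way a change in proprioceptive localization can predictably alter perceived rigidity.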
Capitena, Cara E; Gamett, Kevin; Pantcheva, Mina B
2016-10-01
To report a case of delayed presentation of a severed acrylic single-piece intraocular lens (IOL) haptic fragment causing corneal edema after uneventful phacoemulsification surgery. An 85-year-old male presented with inferior corneal decompensation six months after a reportedly uneventful phacoemulsification in his left eye. A distal haptic fragment of an acrylic single-piece posterior chamber intraocular lens was found in the inferior anterior chamber angle. Intraoperative examination revealed that the dislocated fragment originated from the temporal haptic, the remainder of which was adherent to the anterior surface of the capsular bag. The clipped edge of the haptic fragment showed a clean, flat surface, suggesting it was severed by a sharp object. The findings were considered consistent with cutting of the fragment during implantation, presumably from improper lens loading, improper implantation technique, or defective implantation devices. This is the first case report of a foldable acrylic intraocular lens severed during routine uncomplicated cataract surgery that was not noted at the time of the surgery or in the immediate postoperative period. Delayed presentation of severed IOL fragments should be considered in cases of late-onset corneal edema post-operatively, when other causes have been ruled out. Careful implantation technique and thorough examination of the intraocular lens after implantation to assess for lens damage intraoperatively are essential to avoid such rare complications.
Williams, Camille K.; Tremblay, Luc; Carnahan, Heather
2016-01-01
Researchers in the domain of haptic training are now entering the long-standing debate regarding whether or not it is best to learn a skill by experiencing errors. Haptic training paradigms provide fertile ground for exploring how various theories about feedback, errors and physical guidance intersect during motor learning. Our objective was to determine how error minimizing, error augmenting and no haptic feedback while learning a self-paced curve-tracing task impact performance on delayed (1 day) retention and transfer tests, which indicate learning. We assessed performance using movement time and tracing error to calculate a measure of overall performance – the speed-accuracy cost function. Our results showed that despite exhibiting the worst performance during skill acquisition, the error augmentation group had significantly better accuracy (but not overall performance) than the error minimization group on delayed retention and transfer tests. The control group’s performance fell between that of the two experimental groups but was not significantly different from either on the delayed retention test. We propose that the nature of the task (requiring online feedback to guide performance) coupled with the error augmentation group’s frequent off-target experience and rich experience of error-correction promoted information processing related to error-detection and error-correction that are essential for motor learning. PMID:28082937
End-to-End Flow Control for Visual-Haptic Communication under Bandwidth Change
NASA Astrophysics Data System (ADS)
Yashiro, Daisuke; Tian, Dapeng; Yakoh, Takahiro
This paper proposes an end-to-end flow controller for visual-haptic communication. A visual-haptic communication system transmits non-real-time packets, which contain large-size visual data, and real-time packets, which contain small-size haptic data. When the transmission rate of visual data exceeds the communication bandwidth, the visual-haptic communication system becomes unstable owing to buffer overflow. To solve this problem, an end-to-end flow controller is proposed. This controller determines the optimal transmission rate of visual data on the basis of the traffic conditions, which are estimated by the packets for haptic communication. Experimental results confirm that in the proposed method, a short packet-sending interval and a short delay are achieved under bandwidth change, and thus, high-precision visual-haptic communication is realized.
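The abstract does not give the controller's equations, so the sketch below shows only the general idea under stated assumptions: an additive-increase/multiplicative-decrease rule that raises the visual-data rate while the delay measured from the real-time haptic packets stays low, and backs off when delay grows. All thresholds and step sizes are invented for illustration.

```python
# Hypothetical end-to-end rate adaptation for the visual stream, driven by
# delay estimates obtained from the haptic (real-time) packets.

def adapt_rate(rate, delay_ms, target_ms=10.0,
               increase_kbps=50.0, decrease_factor=0.5, floor_kbps=100.0):
    """Additive-increase / multiplicative-decrease on the visual data rate."""
    if delay_ms <= target_ms:
        return rate + increase_kbps                 # headroom: probe upward
    return max(rate * decrease_factor, floor_kbps)  # congestion: back off

rate = 1000.0                  # current visual rate (kbps)
trace = [5, 6, 7, 30, 8, 5]    # haptic-packet delay measurements (ms)
for d in trace:
    rate = adapt_rate(rate, d)
    print(f"delay={d:2d} ms -> visual rate={rate:.0f} kbps")
```

Keeping the visual rate below the available bandwidth in this way avoids buffer overflow, which is what destabilizes the combined visual-haptic stream when the visual transmission rate exceeds the channel capacity.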
Creating Objects and Object Categories for Studying Perception and Perceptual Learning
Hauffen, Karin; Bart, Eugene; Brady, Mark; Kersten, Daniel; Hegdé, Jay
2012-01-01
In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties [1]. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties [2]. Many innovative and useful methods currently exist for creating novel objects and object categories [3-6] (also see refs. [7,8]). However, generally speaking, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter [5,9,10], and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects [11-13]. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis [14]. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection [9,12,13]. 
Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics [15,16]. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects [9,13]. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper. We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations. The implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. It is important to note that, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have. Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis. PMID:23149420
Haptic augmentation of science instruction: Does touch matter?
NASA Astrophysics Data System (ADS)
Jones, M. Gail; Minogue, James; Tretter, Thomas R.; Negishi, Atsuko; Taylor, Russell
2006-01-01
This study investigated the impact of haptic augmentation of a science inquiry program on students' learning about viruses and nanoscale science. The study assessed how the addition of different types of haptic feedback (active touch and kinesthetic feedback) combined with computer visualizations influenced middle and high school students' experiences. The influences of a PHANToM (a sophisticated haptic desktop device), a Sidewinder (a haptic gaming joystick), and a mouse (no haptic feedback) interface were compared. The levels of engagement in the instruction and students' attitudes about the instructional program were assessed using a combination of constructed response and Likert scale items. Potential cognitive differences were examined through an analysis of spontaneously generated analogies that appeared during student discourse. Results showed that the addition of haptic feedback from the haptic-gaming joystick and the PHANToM provided a more immersive learning environment that not only made the instruction more engaging but may also influence the way in which the students construct their understandings about abstract science concepts.
Matsumiya, Kazumichi
2013-10-01
Current views on face perception assume that the visual system receives only visual facial signals. However, I show that the visual perception of faces is systematically biased by adaptation to a haptically explored face. Recently, face aftereffects (FAEs; the altered perception of faces after adaptation to a face) have been demonstrated not only in visual perception but also in haptic perception; therefore, I combined the two FAEs to examine whether the visual system receives face-related signals from the haptic modality. I found that adaptation to a haptically explored facial expression on a face mask produced a visual FAE for facial expression. This cross-modal FAE was not due to explicitly imaging a face, response bias, or adaptation to local features. Furthermore, FAEs transferred from vision to haptics. These results indicate that visual face processing depends on substrates adapted by haptic faces, which suggests that face processing relies on shared representation underlying cross-modal interactions.
A magnetorheological haptic cue accelerator for manual transmission vehicles
NASA Astrophysics Data System (ADS)
Han, Young-Min; Noh, Kyung-Wook; Lee, Yang-Sub; Choi, Seung-Bok
2010-07-01
This paper proposes a new haptic cue function for manual transmission vehicles to achieve optimal gear shifting. This function is implemented on the accelerator pedal by utilizing a magnetorheological (MR) brake mechanism. By combining the haptic cue function with the accelerator pedal, the proposed haptic cue device can transmit the optimal moment of gear shifting for manual transmission to a driver without requiring the driver's visual attention. As a first step to achieve this goal, a MR fluid-based haptic device is devised to enable rotary motion of the accelerator pedal. Taking into account spatial limitations, the design parameters are optimally determined using finite element analysis to maximize the relative control torque. The proposed haptic cue device is then manufactured and its field-dependent torque and time response are experimentally evaluated. Then the manufactured MR haptic cue device is integrated with the accelerator pedal. A simple virtual vehicle emulating the operation of the engine of a passenger vehicle is constructed and put into communication with the haptic cue device. A feed-forward torque control algorithm for the haptic cue is formulated and control performances are experimentally evaluated and presented in the time domain.
Dependence of behavioral performance on material category in an object grasping task with monkeys.
Yokoi, Isao; Tachibana, Atsumichi; Minamimoto, Takafumi; Goda, Naokazu; Komatsu, Hidehiko
2018-05-02
Material perception is an essential part of our cognitive function that enables us to properly interact with our complex daily environment. One important aspect of material perception is its multimodal nature. When we see an object, we generally recognize its haptic properties as well as its visual properties. Consequently, one must examine behavior using real objects that are perceived both visually and haptically to fully understand the characteristics of material perception. As a first step, we examined whether there is any difference in the behavioral responses to different materials in monkeys trained to perform an object grasping task in which they saw and grasped rod-shaped real objects made of various materials. We found that the monkeys' behavior in the grasping task, measured based on the success rate and the pulling force, differed depending on the material category. Monkeys easily and correctly grasped objects of some materials, such as metal and glass, but failed to grasp objects of other materials. In particular, monkeys avoided grasping fur-covered objects. The differences in the behavioral responses to the material categories cannot be explained solely based on the degree of familiarity with the different materials. These results shed light on the organization of multimodal representation of materials, where their biological significance is an important factor. In addition, a monkey that avoided touching real fur-covered objects readily touched images of the same objects presented on a CRT display. This suggests employing real objects is important when studying behaviors related to material perception.
Development master arm of 2-DOF planar parallel manipulator for In-Vitro Fertilization
NASA Astrophysics Data System (ADS)
Thamrongaphichartkul, Kitti; Vongbunyong, Supachai; Nuntakarn, Lalana
2018-01-01
A micromanipulator is a mechanical device used for manipulating miniature objects on the order of microns. It is widely used in In-Vitro Fertilization (IVF), in which sperm are held in a micro-needle that penetrates an oocyte for fertilization. IVF needs to be performed by highly skilled embryologists who can control the movement of the needle accurately, owing to the lack of tactile perception available to the user. A haptic device can transmit and simulate position, velocity, and force in order to enhance interaction between the user and the system. However, commercially available haptic devices have unnecessary degrees of freedom and limited workspaces, which are inappropriate for the IVF process. This paper focuses on the development of a haptic device for use in the IVF process. It will serve as the master arm of a master-slave system for IVF, enhancing the user's ability to control the micromanipulator. As a result, the embryologist is able to carry out the IVF process more effectively while having tactile perception.
Reframing the action and perception dissociation in DF: haptics matters, but how?
Whitwell, Robert L; Buckingham, Gavin
2013-02-01
Goodale and Milner's (1992) "vision-for-action" and "vision-for-perception" account of the division of labor between the dorsal and ventral "streams" has come to dominate contemporary views of the functional roles of these two pathways. Nevertheless, some lines of evidence for the model remain controversial. Recently, Thomas Schenk reexamined visual form agnosic patient DF's spared anticipatory grip scaling to object size, one of the principal empirical pillars of the model. Based on this new evidence, Schenk rejects the original interpretation of DF's spared ability that was based on segregated processing of object size and argues that DF's spared grip scaling relies on haptic feedback to calibrate visual egocentric cues that relate the posture of the hand to the visible edges of the goal-object. However, a careful consideration of the tasks that Schenk employed reveals some problems with his claim. We suspect that the core issues of this controversy will require a closer examination of the role that cognition plays in the operation of the dorsal and ventral streams in healthy controls and in patient DF.
The Efficacy of Surface Haptics and Force Feedback in Education
ERIC Educational Resources Information Center
Gorlewicz, Jenna Lynn
2013-01-01
This dissertation bridges the fields of haptics, engineering, and education to realize some of the potential benefits haptic devices may have in Science, Technology, Engineering, and Math (STEM) education. Specifically, this dissertation demonstrates the development, implementation, and assessment of two haptic devices in engineering and math…
Incorporating Haptic Feedback in Simulation for Learning Physics
ERIC Educational Resources Information Center
Han, Insook; Black, John B.
2011-01-01
The purpose of this study was to investigate the effectiveness of a haptic augmented simulation in learning physics. The results indicate that haptic augmented simulations, both the force and kinesthetic and the purely kinesthetic simulations, were more effective than the equivalent non-haptic simulation in providing perceptual experiences and…
Haptic Distal Spatial Perception Mediated by Strings: Haptic "Looming"
ERIC Educational Resources Information Center
Cabe, Patrick A.
2011-01-01
Five experiments tested a haptic analog of optical looming, demonstrating string-mediated haptic distal spatial perception. Horizontally collinear hooks supported a weighted string held taut by a blindfolded participant's finger midway between the hooks. At the finger, the angle between string segments increased as the finger approached…
ERIC Educational Resources Information Center
Schonborn, Konrad J.; Bivall, Petter; Tibell, Lena A. E.
2011-01-01
This study explores tertiary students' interaction with a haptic virtual model representing the specific binding of two biomolecules, a core concept in molecular life science education. Twenty students assigned to a "haptics" (experimental) or "no-haptics" (control) condition performed a "docking" task where users sought the most favourable…
Differential effects of non-informative vision and visual interference on haptic spatial processing
van Rheede, Joram J.; Postma, Albert; Kappers, Astrid M. L.
2008-01-01
The primary purpose of this study was to examine the effects of non-informative vision and visual interference upon haptic spatial processing, which supposedly derives from an interaction between an allocentric and egocentric reference frame. To this end, a haptic parallelity task served as baseline to determine the participant-dependent biasing influence of the egocentric reference frame. As expected, large systematic participant-dependent deviations from veridicality were observed. In the second experiment we probed the effect of non-informative vision on the egocentric bias. Moreover, orienting mechanisms (gazing directions) were studied with respect to the presentation of haptic information in a specific hemispace. Non-informative vision proved to have a beneficial effect on haptic spatial processing. No effect of gazing direction or hemispace was observed. In the third experiment we investigated the effect of simultaneously presented interfering visual information on the haptic bias. Interfering visual information parametrically influenced haptic performance. The interplay of reference frames that subserves haptic spatial processing was found to be related to both the effects of non-informative vision and visual interference. These results suggest that spatial representations are influenced by direct cross-modal interactions; inter-participant differences in the haptic modality resulted in differential effects of the visual modality. PMID:18553074
Precise Haptic Device Co-Location for Visuo-Haptic Augmented Reality.
Eck, Ulrich; Pankratz, Frieder; Sandor, Christian; Klinker, Gudrun; Laga, Hamid
2015-12-01
Visuo-haptic augmented reality systems enable users to see and touch digital information that is embedded in the real world. PHANToM haptic devices are often employed to provide haptic feedback. Precise co-location of computer-generated graphics and the haptic stylus is necessary to provide a realistic user experience. Previous work has focused on calibration procedures that compensate the non-linear position error caused by inaccuracies in the joint angle sensors. In this article we present a more complete procedure that additionally compensates for errors in the gimbal sensors and improves position calibration. The proposed procedure further includes software-based temporal alignment of sensor data and a method for the estimation of a reference for position calibration, resulting in increased robustness against haptic device initialization and external tracker noise. We designed our procedure to require minimal user input to maximize usability. We conducted an extensive evaluation with two different PHANToMs, two different optical trackers, and a mechanical tracker. Compared to state-of-the-art calibration procedures, our approach significantly improves the co-location of the haptic stylus. This results in higher fidelity visual and haptic augmentations, which are crucial for fine-motor tasks in areas such as medical training simulators, assembly planning tools, or rapid prototyping applications.
Rauter, Georg; Sigrist, Roland; Riener, Robert; Wolf, Peter
2015-01-01
In the literature, the effectiveness of haptics for motor learning is discussed controversially. Haptics is believed to be effective for motor learning in general; however, different types of haptic control enhance different movement aspects. Thus, depending on the movement aspects of interest, one type of haptic control may be effective whereas another is not. Therefore, in the current work, it was investigated whether and how different types of haptic controllers affect learning of spatial and temporal movement aspects. In particular, haptic controllers that enforce active participation of the participants were expected to improve spatial aspects. Only haptic controllers that provide feedback about the task's velocity profile were expected to improve temporal aspects. In a study on learning a complex trunk-arm rowing task, the effect of training with four different types of haptic control was investigated: position control, path control, adaptive path control, and reactive path control. A fifth (control) group trained with concurrent visual augmented feedback. As hypothesized, the position controller was most effective for learning of temporal movement aspects, while the path controller was most effective in teaching spatial movement aspects of the rowing task. Visual feedback was also effective for learning temporal and spatial movement aspects.
Prevailing Trends in Haptic Feedback Simulation for Minimally Invasive Surgery.
Pinzon, David; Byrns, Simon; Zheng, Bin
2016-08-01
Background: The amount of direct hand-tool-tissue interaction and feedback in minimally invasive surgery varies from being attenuated in laparoscopy to being completely absent in robotic minimally invasive surgery. The role of haptic feedback during surgical skill acquisition and its emphasis in training have been a constant source of controversy. This review discusses the major developments in haptic simulation as they relate to surgical performance and the current research questions that remain unanswered. Search Strategy: An in-depth review of the literature was performed using PubMed. Results: A total of 198 abstracts were returned based on our search criteria. Three major areas of research were identified: advancements in 1 of the 4 components of haptic systems, evaluation of the effectiveness of haptic integration in simulators, and improvements to haptic feedback in robotic surgery. Conclusions: Force feedback is the best method for tissue identification in minimally invasive surgery, and haptic feedback provides the greatest benefit to surgical novices in the early stages of their training. New technology has improved our ability to capture, play back, and enhance the utility of haptic cues in simulated surgery. Future research should focus on deciphering how haptic training in surgical education can increase performance, safety, and training efficiency. © The Author(s) 2016.
Haptic Foot Pedal: Influence of Shoe Type, Age, and Gender on Subjective Pulse Perception.
Geitner, Claudia; Birrell, Stewart; Krehl, Claudia; Jennings, Paul
2018-06-01
This study investigates the influence of shoe type (sneakers and safety boots), age, and gender on the perception of haptic pulse feedback provided by a prototype accelerator pedal in a running stationary vehicle. Haptic feedback can be a less distracting alternative to traditional visual and auditory in-vehicle feedback. However, to be effective, the device delivering the haptic feedback needs to be in contact with the person. Factors such as shoe type vary naturally over the seasons and could render feedback that is perceived well in one situation unnoticeable in another. In this study, we evaluate factors that can influence the subjective perception of haptic feedback in a stationary but running car: shoe type, age, and gender. Thirty-six drivers in three age groups (≤39, 40-59, and ≥60) took part. For each haptic pulse, participants rated intensity, urgency, and comfort via a questionnaire. The perception of the haptic feedback is significantly influenced by the interaction between the pulse's duration and force amplitude and the participant's age and gender, but not by shoe type. The results indicate that it is important to consider different age groups and genders in the evaluation of haptic feedback. Future research might also look into approaches for adapting haptic feedback to the individual driver's preferences. Findings from this study can be applied to the design of an accelerator pedal in a car, for example for a nonvisual in-vehicle warning, but also to the planning of user studies with a haptic pedal in general.
Rajanna, Vijay; Vo, Patrick; Barth, Jerry; Mjelde, Matthew; Grey, Trevor; Oduola, Cassandra; Hammond, Tracy
2016-03-01
A carefully planned, structured, and supervised physiotherapy program following surgery is crucial for successful recovery from physical injuries. Nearly 50% of surgeries fail due to unsupervised and erroneous physiotherapy. Access to a physiotherapist for an extended period is expensive, and sometimes unavailable. Researchers have tried to leverage advancements in wearable sensors and motion tracking by building affordable, automated physio-therapeutic systems that direct a physiotherapy session by providing audio-visual feedback on the patient's performance. Several aspects of automated physiotherapy programs are yet to be addressed by existing systems: a wide classification of patients' physiological conditions to be diagnosed, multiple demographics of patients (blind, deaf, etc.), and the need to persuade patients to adopt the system for an extended period for self-care. In our research, we have tried to address these aspects by building a health behavior change support system called KinoHaptics, for post-surgery rehabilitation. KinoHaptics is an automated, wearable, haptic-assisted physio-therapeutic system that can be used by a wide variety of demographics and for various physiological conditions of the patients. The system provides rich and accurate vibro-haptic feedback that can be felt by the user, irrespective of physiological limitations. KinoHaptics is built to ensure that no injuries are induced during the rehabilitation period. The persuasive nature of the system allows for personal goal-setting, progress tracking, and, most importantly, life-style compatibility. The system was evaluated under laboratory conditions with 14 users. Results show that KinoHaptics is highly convenient to use, and the vibro-haptic feedback is intuitive, accurate, and has been shown to prevent accidental injuries. Results also show that KinoHaptics is persuasive in nature, as it supports behavior change and habit building. 
The successful acceptance of KinoHaptics, an automated, wearable, haptic-assisted physio-therapeutic system, demonstrates the need for and future scope of automated physio-therapeutic systems for self-care and behavior change. It also shows that such systems, incorporating vibro-haptic feedback, encourage strong adherence to the physiotherapy program and can have a profound impact on the physiotherapy experience, resulting in a higher acceptance rate.
G2H--graphics-to-haptic virtual environment development tool for PC's.
Acosta, E; Temkin, B; Krummel, T M; Heinrichs, W L
2000-01-01
For surgical training and preparation, existing surgical virtual environments have shown great improvement; however, these improvements are mostly in the visual aspect. Incorporating haptics into virtual reality-based surgical simulations would greatly enhance the sense of realism. To aid in the development of haptic surgical virtual environments, we have created a graphics-to-haptic (G2H) virtual environment developer tool. G2H transforms graphical virtual environments (created or imported) into haptic virtual environments without programming. The G2H capability has been demonstrated using the complex 3-D pelvic model of Lucy 2.0, the Stanford Visible Female. The pelvis was made haptic using G2H without any further programming effort.
Vibrotactile display for mobile applications based on dielectric elastomer stack actuators
NASA Astrophysics Data System (ADS)
Matysek, Marc; Lotz, Peter; Flittner, Klaus; Schlaak, Helmut F.
2010-04-01
Dielectric elastomer stack actuators (DESA) offer the possibility of building actuator arrays at very high density. The driving voltage is determined by the film thickness, ranging from 80 μm down to 5 μm, at a driving field strength of 30 V/μm. In this paper we present the development of a vibrotactile display based on multilayer technology. The display is used to present several operating conditions of a machine in the form of haptic information to a human finger. As an example, the design of an mp3-player interface is introduced. To build an intuitive and user-friendly interface, several aspects of human haptic perception have to be considered. Using the results of preliminary user tests, the interface is designed and an appropriate actuator layout is derived. Controlling these actuators is important because there are many possibilities for presenting different information, e.g., by varying the driving parameters. A demonstrator was built to verify the concept: a high recognition rate of more than 90% validates the concept. A characterization of mechanical and electrical parameters proves the suitability of dielectric elastomer stack actuators for use in mobile applications.
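The thickness and field-strength figures quoted in the abstract above imply the required driving voltage directly (V = d × E). A minimal sketch of that relation, using only the numbers given in the abstract:

```python
def driving_voltage(film_thickness_um, field_strength_v_per_um=30.0):
    """Driving voltage of a single dielectric elastomer layer:
    V = film thickness (um) x driving field strength (V/um).
    The 30 V/um default is the field strength quoted in the abstract."""
    return film_thickness_um * field_strength_v_per_um

# Thinner films lower the voltage needed for the same actuation field:
# an 80 um layer needs 2400 V, while a 5 um layer needs only 150 V,
# which is why thin multilayer stacks suit mobile applications.
```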
Cross-Sensory Transfer of Reference Frames in Spatial Memory
ERIC Educational Resources Information Center
Kelly, Jonathan W.; Avraamides, Marios N.
2011-01-01
Two experiments investigated whether visual cues influence spatial reference frame selection for locations learned through touch. Participants experienced visual cues emphasizing specific environmental axes and later learned objects through touch. Visual cues were manipulated and haptic learning conditions were held constant. Imagined perspective…
Predicting successful tactile mapping of virtual objects.
Brayda, Luca; Campus, Claudio; Gori, Monica
2013-01-01
Improving spatial ability of blind and visually impaired people is the main target of orientation and mobility (O&M) programs. In this study, we use a minimalistic mouse-shaped haptic device to show a new approach aimed at evaluating devices providing tactile representations of virtual objects. We consider psychophysical, behavioral, and subjective parameters to clarify under which circumstances mental representations of spaces (cognitive maps) can be efficiently constructed with touch by blindfolded sighted subjects. We study two complementary processes that determine map construction: low-level perception (in a passive stimulation task) and high-level information integration (in an active exploration task). We show that jointly considering a behavioral measure of information acquisition and a subjective measure of cognitive load can give an accurate prediction and a practical interpretation of mapping performance. Our simple TActile MOuse (TAMO) uses haptics to assess spatial ability: this may help individuals who are blind or visually impaired to be better evaluated by O&M practitioners or to evaluate their own performance.
NASA Astrophysics Data System (ADS)
Choi, Seung-Hyun; Kim, Soomin; Kim, Pyunghwa; Park, Jinhyuk; Choi, Seung-Bok
2015-06-01
In this study, we developed a novel four-degrees-of-freedom haptic master using controllable magnetorheological (MR) fluid. We also integrated the haptic master with a vision device with image processing for robot-assisted minimally invasive surgery (RMIS). The proposed master can be used in RMIS as a haptic interface to provide the surgeon with a sense of touch by using both kinetic and kinesthetic information. The slave robot, which is manipulated with a proportional-integral-derivative controller, uses a force sensor to obtain the desired forces from tissue contact, and these desired repulsive forces are then embodied through the MR haptic master. To verify the effectiveness of the haptic master, the desired force and actual force are compared in the time domain. In addition, a visual feedback system is implemented in the RMIS experiment to distinguish between the tumor and organ more clearly and provide better visibility to the operator. The hue-saturation-value (HSV) color space is adopted for the image processing, since it is often more intuitive than other color spaces. The effects of image processing and haptic feedback on surgical performance are then evaluated. In this work, tumor-cutting experiments are conducted under four different operating conditions: haptic feedback on, haptic feedback off, image processing on, and image processing off. The experimental results show that the performance index, which is a function of pixels, differs across the four operating conditions.
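HSV-based segmentation of the kind described above can be sketched as a per-pixel hue test: converting from RGB to HSV decouples color (hue) from brightness, which is what makes the thresholds intuitive. The "tumor" hue range and saturation threshold below are invented for illustration and are not values from the study.

```python
import colorsys

def classify_pixel(r, g, b, tumor_hue=(0.9, 1.0), min_saturation=0.3):
    """Classify an RGB pixel (components as floats in 0-1) as 'tumor'
    or 'organ' by its hue, mimicking HSV-based segmentation.
    The hue window and saturation floor are hypothetical values."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    # Require some saturation so gray/washed-out pixels are not matched,
    # then test whether the hue falls in the designated "tumor" window.
    if s >= min_saturation and tumor_hue[0] <= h <= tumor_hue[1]:
        return "tumor"
    return "organ"
```

In a real pipeline the same thresholds would be applied to every pixel of the camera frame (e.g., via vectorized array operations) to produce a binary mask highlighting the tumor region for the operator.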
Haptic Paddle Enhancements and a Formal Assessment of Student Learning in System Dynamics
ERIC Educational Resources Information Center
Gorlewicz, Jenna L.; Kratchman, Louis B.; Webster, Robert J., III
2014-01-01
The haptic paddle is a force-feedback joystick used at several universities in teaching System Dynamics, a core mechanical engineering undergraduate course where students learn to model dynamic systems in several domains. A second goal of the haptic paddle is to increase the accessibility of robotics and haptics by providing a low-cost device for…
Review of Designs for Haptic Data Visualization.
Paneels, Sabrina; Roberts, Jonathan C
2010-01-01
There are many different uses for haptics, such as training medical practitioners, teleoperation, or navigation of virtual environments. This review focuses on haptic methods that display data. The hypothesis is that haptic devices can be used to present information, and consequently, the user gains quantitative, qualitative, or holistic knowledge about the presented data. Not only is this useful for users who are blind or partially sighted (who can feel line graphs, for instance), but also the haptic modality can be used alongside other modalities, to increase the amount of variables being presented, or to duplicate some variables to reinforce the presentation. Over the last 20 years, a significant amount of research has been done in haptic data presentation; e.g., researchers have developed force feedback line graphs, bar charts, and other forms of haptic representations. However, previous research is published in different conferences and journals, with different application emphases. This paper gathers and collates these various designs to provide a comprehensive review of designs for haptic data visualization. The designs are classified by their representation: Charts, Maps, Signs, Networks, Diagrams, Images, and Tables. This review provides a comprehensive reference for researchers and learners, and highlights areas for further research.
Study on development of active-passive rehabilitation system for upper limbs: Hybrid-PLEMO
NASA Astrophysics Data System (ADS)
Kikuchi, T.; Jin, Y.; Fukushima, K.; Akai, H.; Furusho, J.
2009-02-01
In recent years, many researchers have studied the potential of using robotics technology to assist and quantify motor functions for neuro-rehabilitation. Several kinds of haptic devices have been developed and their efficacy evaluated in clinical tests, for example, upper limb training for patients with spasticity after stroke. Active-type (motor-driven) haptic devices can realize a wide variety of haptics, but they basically require a high-cost safety system. On the other hand, passive-type (brake-based) haptic devices have inherent safety; however, the passive robot system has strong limitations on the variety of haptics. There is not sufficient evidence to clarify how passive/active haptics affect the rehabilitation of motor skills. In this paper, we developed an active-passive-switchable rehabilitation system with an ER clutch/brake device named "Hybrid-PLEMO" in order to address these problems. The basic structure and haptic control methods of the Hybrid-PLEMO are described.
Calibrating Reach Distance to Visual Targets
ERIC Educational Resources Information Center
Mon-Williams, Mark; Bingham, Geoffrey P.
2007-01-01
The authors investigated the calibration of reach distance by gradually distorting the haptic feedback obtained when participants grasped visible target objects. The authors found that the modified relationship between visually specified distance and reach distance could be captured by a straight-line mapping function. Thus, the relation could be…
NASA Astrophysics Data System (ADS)
Huang, Shih-Chieh Douglas
In this dissertation, I investigate the effects of a grounded learning experience on college students' mental models of physics systems. The grounded learning experience consisted of a priming stage and an instruction stage, and within each stage, one of two different types of visuo-haptic representation was applied: visuo-gestural simulation (visual modality and gestures) and visuo-haptic simulation (visual modality, gestures, and somatosensory information). A pilot study involving N = 23 college students examined how using different types of visuo-haptic representation in instruction affected people's mental model construction for physics systems. Participants' abilities to construct mental models were operationalized through their pretest-to-posttest gain scores for a basic physics system and their performance on a transfer task involving an advanced physics system. Findings from this pilot study revealed that, while both simulations significantly improved participants' mental model construction for physics systems, visuo-haptic simulation was significantly better than visuo-gestural simulation. In addition, clinical interviews suggested that participants' mental model construction for physics systems benefited from receiving visuo-haptic simulation in a tutorial prior to the instruction stage. A dissertation study involving N = 96 college students examined how types of visuo-haptic representation in different applications support participants' mental model construction for physics systems. Participants' abilities to construct mental models were again operationalized through their pretest-to-posttest gain scores for a basic physics system and their performance on a transfer task involving an advanced physics system. Participants' physics misconceptions were also measured before and after the grounded learning experience.
Findings from this dissertation study revealed not only that visuo-haptic simulation was significantly more effective than visuo-gestural simulation in promoting mental model construction and remedying participants' physics misconceptions, but also that visuo-haptic simulation was more effective during the priming stage than during the instruction stage. Interestingly, the effects of visuo-haptic simulation in priming and in instruction on participants' pretest-to-posttest gain scores for a basic physics system appeared additive. These results suggest that visuo-haptic simulation is effective in physics learning, especially when it is used during the priming stage.
van der Meijden, O A J; Schijven, M P
2009-06-01
Virtual reality (VR) as a surgical training tool has become a state-of-the-art technique for training and teaching skills for minimally invasive surgery (MIS). Although intuitively appealing, the true benefits of haptic (VR training) platforms are unknown. Many questions about haptic feedback in the different areas of surgical skills training need to be answered before costly haptic feedback is added to VR simulation for MIS training. This study was designed to review the current status and value of haptic feedback in conventional and robot-assisted MIS and in training using virtual reality simulation. A systematic review of the literature was undertaken using PubMed and MEDLINE. The following search terms were used: Haptic feedback OR Haptics OR Force feedback AND/OR Minimal Invasive Surgery AND/OR Minimal Access Surgery AND/OR Robotics AND/OR Robotic Surgery AND/OR Endoscopic Surgery AND/OR Virtual Reality AND/OR Simulation OR Surgical Training/Education. The results were assessed according to level of evidence as reflected by the Oxford Centre of Evidence-based Medicine Levels of Evidence. In the current literature, no firm consensus exists on the importance of haptic feedback in performing minimally invasive surgery. Although the majority of results show positive assessment of the benefits of force feedback, they are ambivalent and not unanimous on the subject. Benefits are least disputed for surgery using robotics, because currently used robotic systems provide no haptic feedback, and the addition of haptics is believed to reduce the surgical errors that result from its absence, especially in knot tying. Little research has been performed in the area of robot-assisted endoscopic surgical training, but results seem promising. Concerning VR training, results indicate that haptic feedback is important during the early phase of psychomotor skill acquisition.
Hedayat, Isabel; Moraes, Renato; Lanovaz, Joel L; Oates, Alison R
2017-06-01
There are different ways to add haptic input during walking, which may affect walking balance. This study compared the use of two different haptic tools (a rigid railing and haptic anchors) and investigated whether any effects on walking were the result of the added sensory input and/or the posture generated when using those tools. Data from 28 young healthy adults were collected using the Mobility Lab inertial sensor system (APDM, Oregon, USA). Participants walked with and without both haptic tools and while pretending to use both haptic tools (placebo trials), with eyes open and eyes closed. Using the tools or pretending to use them decreased normalized stride velocity (p < .001-0.008) and peak medial-lateral (ML) trunk velocity (p < .001-0.001). Normalized stride velocity was slower when actually using the railing compared to placebo railing trials (p = .006). Using the anchors resulted in lower peak ML trunk velocity than the railing (p = .002). The anchors had lower peak ML trunk velocity than placebo anchors (p < .001), but there was no difference between railing and placebo railing (p > .999). These findings highlight a difference between the types of tool used to add haptic input and suggest that the changes in balance control strategy with the railing are based on arm placement, whereas with the haptic anchors it is the posture combined with the added sensory input that affects balance control strategies. These findings provide a strong framework for additional research on the effects of haptic input on walking in populations known to have decreased walking balance.
Haptic device development based on electro static force of cellulose electro active paper
NASA Astrophysics Data System (ADS)
Yun, Gyu-young; Kim, Sang-Youn; Jang, Sang-Dong; Kim, Dong-Gu; Kim, Jaehwan
2011-04-01
Haptic devices are well suited to demanding virtual reality applications such as medical equipment, mobile devices, online marketing, and so on. Many haptic device concepts have been proposed to meet the demands of industry. Cellulose has received much attention as an emerging smart material, named electro-active paper (EAPap). EAPap is attractive for mobile haptic devices due to its unique characteristics: low actuation power, suitability for thin devices, and transparency. In this paper, we suggest a new concept of haptic actuator using cellulose EAPap and evaluate its performance under various actuation conditions. The cellulose electrostatic force actuator shows a large output displacement and fast response, making it suitable for mobile haptic devices.
Detection thresholds for small haptic effects
NASA Astrophysics Data System (ADS)
Dosher, Jesse A.; Hannaford, Blake
2002-02-01
We are interested in whether haptic interfaces will be useful in portable and handheld devices. Such systems will have severe constraints on force output. Our first step is to investigate the lower limits at which haptic effects can be perceived. In this paper we report on experiments studying the effects of varying the amplitude, size, shape, and pulse duration of a haptic feature. Using a specific haptic device, we measured the smallest detectable haptic effects, with active exploration of saw-tooth shaped icons sized 3, 4, and 5 mm, a sine-shaped icon 5 mm wide, and static pulses 50, 100, and 150 ms in width. Smooth-shaped icons resulted in a detection threshold of approximately 55 mN, almost twice that of saw-tooth shaped icons, which had a threshold of 31 mN.
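Detection thresholds of the kind reported above are commonly estimated with an adaptive staircase procedure. The sketch below is illustrative Python, not the authors' actual psychophysical method: a simple 1-up/1-down staircase that lowers the force amplitude after each detection, raises it after each miss, and averages the reversal amplitudes.

```python
def estimate_threshold(respond, start, step, n_reversals=6):
    """1-up/1-down staircase: decrease amplitude after a detection,
    increase after a miss, and average the amplitudes at which the
    response direction reversed."""
    amp = start
    last = None
    reversals = []
    while len(reversals) < n_reversals:
        detected = respond(amp)          # True if the effect was felt
        if last is not None and detected != last:
            reversals.append(amp)        # response direction changed here
        last = detected
        amp += -step if detected else step
    return sum(reversals) / len(reversals)
```

With a hypothetical observer who detects any effect of 40 mN or more, a run starting at 60 mN with 5 mN steps settles to reversals alternating between 35 and 40 mN, giving an estimate of 37.5 mN.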
Enhancing audiovisual experience with haptic feedback: a survey on HAV.
Danieau, F; Lecuyer, A; Guillotel, P; Fleureau, J; Mollet, N; Christie, M
2013-01-01
Haptic technology has been widely employed in applications ranging from teleoperation and medical simulation to art and design, including entertainment, flight simulation, and virtual reality. Today there is a growing interest among researchers in integrating haptic feedback into audiovisual systems. A new medium emerges from this effort: haptic-audiovisual (HAV) content. This paper presents the techniques, formalisms, and key results pertinent to this medium. We first review the three main stages of the HAV workflow: the production, distribution, and rendering of haptic effects. We then highlight the pressing necessity for evaluation techniques in this context and discuss the key challenges in the field. By building on existing technologies and tackling the specific challenges of the enhancement of audiovisual experience with haptics, we believe the field presents exciting research perspectives whose financial and societal stakes are significant.
Haptic interface of web-based training system for interventional radiology procedures
NASA Astrophysics Data System (ADS)
Ma, Xin; Lu, Yiping; Loe, KiaFock; Nowinski, Wieslaw L.
2004-05-01
Existing web-based medical training systems and surgical simulators can provide an affordable and accessible medical training curriculum, but they seldom offer the trainee realistic and affordable haptic feedback, and therefore cannot offer a suitable practice environment. In this paper, a haptic solution for interventional radiology (IR) procedures is proposed. The system architecture of a web-based training system for IR procedures is briefly presented first. Then the mechanical structure, working principle, and application of a haptic device are discussed in detail. The haptic device works as an interface between the training environment and the trainee and is placed at the end-user side. With the system, the user can be trained in interventional radiology procedures - navigating catheters, inflating balloons, deploying coils, and placing stents - on the web, with surgical haptic feedback in real time.
Adaptive space warping to enhance passive haptics in an arthroscopy surgical simulator.
Spillmann, Jonas; Tuchschmid, Stefan; Harders, Matthias
2013-04-01
Passive haptics, also known as tactile augmentation, denotes the use of a physical counterpart to a virtual environment to provide tactile feedback. Employing passive haptics can result in more realistic touch sensations than those from active force feedback, especially for rigid contacts. However, changes in the virtual environment would necessitate modifications of the physical counterparts. In recent work space warping has been proposed as one solution to overcome this limitation. In this technique virtual space is distorted such that a variety of virtual models can be mapped onto one single physical object. In this paper, we propose as an extension adaptive space warping; we show how this technique can be employed in a mixed-reality surgical training simulator in order to map different virtual patients onto one physical anatomical model. We developed methods to warp different organ geometries onto one physical mock-up, to handle different mechanical behaviors of the virtual patients, and to allow interactive modifications of the virtual structures, while the physical counterparts remain unchanged. Various practical examples underline the wide applicability of our approach. To the best of our knowledge this is the first practical usage of such a technique in the specific context of interactive medical training.
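The core idea of space warping, distorting virtual space so that different virtual geometries map onto one fixed physical mock-up, can be illustrated in one dimension. This is a hypothetical sketch, not the simulator's implementation: landmark positions on the virtual organ are paired with landmarks on the physical prop, and positions in between are warped by piecewise-linear interpolation.

```python
import bisect

def make_space_warp(virtual_landmarks, physical_landmarks):
    """Build a 1D warp mapping virtual coordinates onto the fixed
    physical mock-up. Landmarks must be sorted and paired; positions
    between landmarks are linearly interpolated."""
    vs, ps = list(virtual_landmarks), list(physical_landmarks)
    def warp(x):
        # find the landmark interval containing x (clamped at the ends)
        i = bisect.bisect_right(vs, x)
        i = min(max(i, 1), len(vs) - 1)
        t = (x - vs[i - 1]) / (vs[i] - vs[i - 1])
        return ps[i - 1] + t * (ps[i] - ps[i - 1])
    return warp
```

Changing only the virtual landmark list then maps a differently shaped virtual patient onto the same unchanged physical counterpart.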
Pressman, Assaf; Karniel, Amir; Mussa-Ivaldi, Ferdinando A.
2011-01-01
A new haptic illusion is described, in which the location of a mobile object affects the perception of its rigidity. There is theoretical and experimental support for the notion that limb position sense results from the brain combining ongoing sensory information with expectations arising from prior experience. How does this probabilistic state information affect one's tactile perception of environment mechanics? In a simple estimation task, human subjects were asked to report the relative rigidity of two simulated virtual objects. One object remained fixed in space and had various coefficients of stiffness. The other had constant stiffness but moved with respect to the subjects. Earlier work suggested that the perception of an object's rigidity is consistent with a process of regression between the contact force and the perceived amount of penetration inside the object's boundary. The amount of penetration perceived by the subject was affected by varying the position of the object, which in turn had a predictable effect on the perceived rigidity of the contact. Subjects' reports on the relative rigidity of the objects are best accounted for by a probabilistic model in which the perceived boundary of the object is estimated from its current location and its past observations. Therefore, the perception of contact rigidity is accounted for by a stochastic process of state estimation underlying proprioceptive localization of the hand. PMID:21525300
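The regression account of rigidity perception described above can be sketched as a least-squares slope of contact force against perceived penetration beyond an estimated boundary. This is illustrative Python with hypothetical data, not the authors' model: it only shows how shifting the estimated boundary changes the inferred stiffness.

```python
def perceived_stiffness(positions, forces, boundary_estimate):
    """Least-squares slope (through the origin) of contact force vs
    perceived penetration past the estimated boundary. Positions more
    negative than the boundary count as penetration."""
    pen = [max(0.0, boundary_estimate - x) for x in positions]
    num = sum(p * f for p, f in zip(pen, forces))
    den = sum(p * p for p in pen)
    return num / den if den else 0.0
```

With forces generated by a true 100 N/m spring whose surface is at 0, a correct boundary estimate recovers 100 N/m; an estimate placing the boundary closer to the hand shrinks the perceived penetrations and inflates the perceived stiffness, matching the illusion's direction.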
NASA Astrophysics Data System (ADS)
Martin, P.; Tseu, A.; Férey, N.; Touraine, D.; Bourdot, P.
2014-02-01
Most advanced immersive devices provide a collaborative environment in which several users each have their own head-tracked stereoscopic point of view. Combined with commonly used interactive features such as voice and gesture recognition, 3D mice, haptic feedback, and spatialized audio rendering, these environments should faithfully reproduce a real context. However, even though many studies have been carried out on multimodal systems, we are far from definitively solving the issue of multimodal fusion, which consists in merging multimodal events coming from users and devices into interpretable commands performed by the application. Multimodality and collaboration have often been studied separately, despite the fact that these two aspects share interesting similarities. We discuss how we address this problem through the design and implementation of a supervisor able to deal with both multimodal fusion and collaborative aspects. The aim of this supervisor is to merge users' input from virtual reality devices in order to control immersive multi-user applications. We approach this problem from a practical point of view, because the main requirements of the supervisor were defined by an industrial task proposed by our automotive partner that has to be performed with multimodal and collaborative interactions in a co-located multi-user environment. In this task, two co-located workers on a virtual assembly chain have to cooperate to insert a seat into the bodywork of a car, using haptic devices to feel collisions and manipulate objects, and combining speech recognition and two-handed gesture recognition as multimodal instructions. Besides the architectural aspects of this supervisor, we describe how we ensure the modularity of our solution so that it can be applied to different virtual reality platforms, interactive contexts, and virtual contents.
A virtual context observer included in this supervisor was specifically designed to be independent of the content of the targeted application's virtual scene, and is used to report high-level interactive and collaborative events. This context observer allows the supervisor to merge these interactive and collaborative events, but is also used to address new issues arising from our observation of two co-located users performing this assembly task in an immersive device. In particular, when speech recognition is provided to both users, the system must automatically detect from the interactive context whether a vocal instruction should be translated into a command performed by the machine, or whether it is part of the natural communication necessary for collaboration. Information from the context observer indicating that a user is looking at his or her collaborator is important for detecting whether the user is talking to that partner. Moreover, as the users are physically co-located and head tracking is used to provide high-fidelity stereoscopic rendering and natural walking navigation in the virtual scene, we also have to deal with collisions and screen occlusion between the co-located users in the physical workspace. The working area and focus of each user, computed and reported by the context observer, are needed to prevent or avoid these situations.
Seung, Sungmin; Choi, Hongseok; Jang, Jongseong; Kim, Young Soo; Park, Jong-Oh; Park, Sukho; Ko, Seong Young
2017-01-01
This article presents haptic-guided teleoperation for a tumor removal surgical robotic system, the so-called SIROMAN system. The system was developed in our previous work to make it possible to access tumor tissue, even tissue seated deep inside the brain, and to remove it with full maneuverability. For safe and accurate removal of only the tumor tissue while minimizing damage to normal tissue, virtual wall-based haptic guidance combined with medical image-guided control is proposed and developed. The virtual wall is extracted from preoperative medical images, and the robot is controlled to restrict its motion within the virtual wall using haptic feedback. Coordinate transformation between sub-systems, a collision detection algorithm, and haptic-guided teleoperation using a virtual wall are described in the context of SIROMAN. A series of experiments using a simplified virtual wall were performed to evaluate the performance of virtual wall-based haptic-guided teleoperation. With haptic guidance, the accuracy of the robotic manipulator's trajectory improved by 57% compared to teleoperation without it. Tissue removal performance also improved by 21% (p < 0.05). The experiments show that virtual wall-based haptic guidance provides safer and more accurate tissue removal for single-port brain surgery.
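A virtual wall of this general kind is often rendered as a penalty force along the wall normal. The sketch below is a minimal illustration with assumed spring-damper gains, not SIROMAN's actual controller: zero force while the tool stays on the allowed side of a planar wall, and a restoring force once it penetrates.

```python
def virtual_wall_force(pos, vel, wall_point, wall_normal, k=500.0, b=5.0):
    """Penalty-based planar virtual wall. wall_normal is assumed to be
    a unit vector pointing into the allowed half-space; k and b are
    illustrative spring and damper gains."""
    # signed distance of the tool tip from the wall plane
    d = sum((p - w) * n for p, w, n in zip(pos, wall_point, wall_normal))
    if d >= 0:
        return (0.0, 0.0, 0.0)           # no contact, no force
    v_n = sum(v * n for v, n in zip(vel, wall_normal))
    f = -k * d - b * v_n                 # spring pushes out, damper resists
    return tuple(f * n for n in wall_normal)
```

Fed to the haptic master at each servo tick, this force resists motions that would carry the tool outside the region extracted from the preoperative images.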
Haptic adaptation to slant: No transfer between exploration modes
van Dam, Loes C. J.; Plaisier, Myrthe A.; Glowania, Catharina; Ernst, Marc O.
2016-01-01
Human touch is an inherently active sense: to estimate an object’s shape humans often move their hand across its surface. This way the object is sampled both in a serial (sampling different parts of the object across time) and parallel fashion (sampling using different parts of the hand simultaneously). Both the serial (moving a single finger) and parallel (static contact with the entire hand) exploration modes provide reliable and similar global shape information, suggesting the possibility that this information is shared early in the sensory cortex. In contrast, we here show the opposite. Using an adaptation-and-transfer paradigm, a change in haptic perception was induced by slant-adaptation using either the serial or parallel exploration mode. A unified shape-based coding would predict that this would equally affect perception using other exploration modes. However, we found that adaptation-induced perceptual changes did not transfer between exploration modes. Instead, serial and parallel exploration components adapted simultaneously, but to different kinaesthetic aspects of exploration behaviour rather than object-shape per se. These results indicate that a potential combination of information from different exploration modes can only occur at down-stream cortical processing stages, at which adaptation is no longer effective. PMID:27698392
Development of a StandAlone Surgical Haptic Arm.
Jones, Daniel; Lewis, Andrew; Fischer, Gregory S
2011-01-01
When performing telesurgery with current commercially available Minimally Invasive Robotic Surgery (MIRS) systems, a surgeon cannot feel the tool interactions that are inherent in traditional laparoscopy. It is proposed that haptic feedback in the control of MIRS systems could improve the speed, safety and learning curve of robotic surgery. To test this hypothesis, a standalone surgical haptic arm (SASHA) capable of manipulating da Vinci tools has been designed and fabricated with the additional ability of providing information for haptic feedback. This arm was developed as a research platform for developing and evaluating approaches to telesurgery, including various haptic mappings between master and slave and evaluating the effects of latency.
The Cognitive and Neural Correlates of Tactile Memory
ERIC Educational Resources Information Center
Gallace, Alberto; Spence, Charles
2009-01-01
Tactile memory systems are involved in the storage and retrieval of information about stimuli that impinge on the body surface and objects that people explore haptically. Here, the authors review the behavioral, neuropsychological, neurophysiological, and neuroimaging research on tactile memory. This body of research reveals that tactile memory…
Three-dimensional printing: review of application in medicine and hepatic surgery.
Yao, Rui; Xu, Gang; Mao, Shuang-Shuang; Yang, Hua-Yu; Sang, Xin-Ting; Sun, Wei; Mao, Yi-Lei
2016-12-01
Three-dimensional (3D) printing (3DP) is a rapid prototyping technology that has gained increasing recognition in many different fields. Its inherent accuracy and low cost make 3DP applicable in many areas, such as manufacturing, aerospace, medicine, and industrial design. Recently, 3DP has gained considerable attention in the medical field, where image data can be quickly turned into physical objects that are being used across a variety of surgical specialties. The shortage of cadaver specimens is a major problem in medical education that can be addressed with the emergence of 3DP models. Because custom-made items can be produced with 3DP, the technology also lends itself to preoperative planning and surgical training. The complex anatomical structures of the liver are difficult for medical students to learn, so 3D visualization is a useful tool in anatomy teaching and hepatic surgical training; conventional models, however, do not capture haptic qualities, whereas 3DP can produce highly accurate and complex physical models. With the development of 3D bio-printing technology, many types of differentiated human or animal cells can now be printed successfully. This progress represents a valuable breakthrough with many potential uses, such as research on drug metabolism or liver disease mechanisms, and it may also help address the shortage of organs for transplant in the future.
A study on haptic collaborative game in shared virtual environment
NASA Astrophysics Data System (ADS)
Lu, Keke; Liu, Guanyang; Liu, Lingzhi
2013-03-01
A study of a collaborative game in a shared virtual environment with haptic feedback over computer networks is introduced in this paper. A collaborative task was used in which players located at remote sites played the game together. Unlike in traditional networked multiplayer games, the players receive both visual and haptic feedback in the virtual environment. The experiment was designed with two conditions: visual feedback only and visual-haptic feedback. The goal of the experiment was to assess the impact of force feedback on collaborative task performance. Results indicate that haptic feedback is beneficial for performance in a collaborative game in a shared virtual environment. The outcomes of this research could have a powerful impact on networked computer games.
A "virtually minimal" visuo-haptic training of attention in severe traumatic brain injury.
Dvorkin, Assaf Y; Ramaiya, Milan; Larson, Eric B; Zollman, Felise S; Hsu, Nancy; Pacini, Sonia; Shah, Amit; Patton, James L
2013-08-09
Although common during the early stages of recovery from severe traumatic brain injury (TBI), attention deficits have been scarcely investigated. Encouraging evidence suggests beneficial effects of attention training in more chronic and higher functioning patients. Interactive technology may provide new opportunities for rehabilitation in inpatients who are earlier in their recovery. We designed a "virtually minimal" approach using robot-rendered haptics in a virtual environment to train severely injured inpatients in the early stages of recovery to sustain attention to a visuo-motor task. 21 inpatients with severe TBI completed repetitive reaching toward targets that were both seen and felt. Patients were tested over two consecutive days, experiencing 3 conditions (no haptic feedback, a break-through force, and haptic nudge) in 12 successive, 4-minute blocks. The interactive visuo-haptic environments were well-tolerated and engaging. Patients typically remained attentive to the task. However, patients exhibited attention loss both before (prolonged initiation) and during (pauses during motion) a movement. Compared to no haptic feedback, patients benefited from haptic nudge cues but not break-through forces. As training progressed, patients increased the number of targets acquired and spontaneously improved from one day to the next. Interactive visuo-haptic environments could be beneficial for attention training for severe TBI patients in the early stages of recovery and warrants further and more prolonged clinical testing.
NASA Astrophysics Data System (ADS)
Yin, Feilong; Hayashi, Ryuzo; Raksincharoensak, Pongsathorn; Nagai, Masao
This research proposes a haptic velocity guidance assistance system for realizing eco-driving as well as enhancing traffic capacity in cooperation with ITS (Intelligent Transportation Systems). The proposed system generates the desired accelerator pedal (abbreviated as pedal) stroke for the desired velocity obtained from ITS, taking vehicle dynamics into account, and conveys it to the driver via a haptic pedal whose reaction force is controllable, guiding the driver to trace the desired velocity in real time. The main purpose of this paper is to discuss the feasibility of haptic velocity guidance. A research prototype was developed on the Driving Simulator of TUAT (DS) by attaching a low-inertia, low-friction motor to the pedal, which does not change the pedal's original characteristics when it is not operated, and by implementing the desired-stroke calculation and the reaction-force controller. The haptic guidance maneuver was designed based on human pedal-stepping experiments. A simple velocity profile with acceleration, deceleration, and cruising phases, synthesized from naturalistic driving, was used to test the proposed system. Experimental results from 9 drivers show that the haptic guidance provides high accuracy and quick response in velocity tracking. These results suggest that haptic guidance is a promising velocity guidance method from the viewpoint of HMI (Human Machine Interface).
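The two computations implied above, mapping a desired velocity and acceleration to a desired pedal stroke, and mapping the stroke error to an extra pedal reaction force, might be sketched as follows. The vehicle model and all gains here are hypothetical placeholders, not the paper's identified dynamics.

```python
def desired_stroke(v_des, a_des, drag=0.05, gain=3.0):
    """Simplified longitudinal model (hypothetical parameters): stroke
    at which engine force balances speed-proportional drag and still
    yields the desired acceleration. Clamped at zero for coasting."""
    return max(0.0, (a_des + drag * v_des) / gain)

def pedal_reaction_force(stroke, stroke_des, base=10.0, k_guide=25.0):
    """Base pedal spring force plus a guidance term: the pedal stiffens
    when pressed past the desired stroke and lightens before it, nudging
    the driver's foot toward stroke_des."""
    return base + k_guide * (stroke - stroke_des)
```

Run at the simulator's control rate, the first function tracks the ITS velocity profile and the second realizes the controllable reaction force on the motorized pedal.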
Haptic perception accuracy depending on self-produced movement.
Park, Chulwook; Kim, Seonjin
2014-01-01
This study measured whether self-produced movement influences haptic perception ability (experiment 1) as well as the factors associated with the level of influence (experiment 2) in racket sports. In experiment 1, the haptic perception accuracy of five male table tennis experts and five male novices was examined under two conditions (no movement vs. movement). In experiment 2, the haptic afferent subsystems of five male table tennis experts and five male novices were investigated in the self-produced-movement condition only. Inferential statistics (ANOVA, t-tests) and custom-made devices (a shock and vibration sensor, Qualisys Track Manager) were used to determine haptic perception accuracy (experiments 1 and 2) and its association with expertise. The results show that expert players achieve higher accuracy with less variability (in racket vibration and angle) than novices, especially in their self-produced-movement performance. The important finding is that the skill-associated differences in accuracy were enlarged during self-produced movement. To explain the origin of this difference between experts and novices, the functional variability of the haptic afferent subsystems can serve as a reference. The two factors investigated in this study (self-produced accuracy and the variability of haptic features) would be useful criteria for educators in racket sports and suggest a broader hypothesis for further research into the effects of haptic accuracy related to variability.
Mechatronic design of haptic forceps for robotic surgery.
Rizun, P; Gunn, D; Cox, B; Sutherland, G
2006-12-01
Haptic feedback increases operator performance and comfort during telerobotic manipulation. Feedback of grasping pressure is critical in many microsurgical tasks, yet no haptic interface for surgical tools is commercially available. Literature on the psychophysics of touch was reviewed to define the spectrum of human touch perception and the fidelity requirements of an ideal haptic interface. Mechanical design and control literature was reviewed to translate the psychophysical requirements into engineering specifications. High-fidelity haptic forceps were then developed through an iterative process between engineering and surgery. The forceps are a modular device that integrates with a haptic hand controller to add force feedback for tool actuation in telerobotic or virtual surgery. Their overall length is 153 mm and their mass is 125 g. A contact-free voice coil actuator generates force feedback at frequencies up to 800 Hz. Maximum force output is 6 N (2 N continuous) and the force resolution is 4 mN. The forceps employ a contact-free magnetic position sensor as well as micro-machined accelerometers to measure opening/closing acceleration. Position resolution is 0.6 μm with 1.3 μm RMS noise. The forceps can simulate stiffnesses greater than 20 N/mm or impedances smaller than 15 g with no noticeable haptic artifacts or friction. As telerobotic surgery evolves, haptics will play an increasingly important role. Copyright 2006 John Wiley & Sons, Ltd.
Afzal, Muhammad Raheel; Byun, Ha-Young; Oh, Min-Kyun; Yoon, Jungwon
2015-03-13
Haptic control is a useful therapeutic option in rehabilitation featuring virtual reality interaction. As with visual and vibrotactile biofeedback, kinesthetic haptic feedback may assist postural control and help achieve balance control. Kinesthetic haptic feedback reflecting body sway can be delivered via a commercially available haptic device and can enhance the balance stability of both young healthy subjects and stroke patients. Our system features a waist-attached smartphone, software running on a computer (PC), and a dedicated Phantom Omni® device. Young healthy participants performed balance tasks in each of four distinct postures for 30 s (one foot on the ground; the Tandem Romberg stance; one foot on foam; and the Tandem Romberg stance on foam) with eyes closed. Patients kept their eyes open and were tested only in the Romberg stance during a 25-s balance task. An Android application running continuously on the smartphone sent mediolateral (ML) and anteroposterior (AP) tilt angles to the PC, which generated kinesthetic haptic feedback via the Phantom Omni®. A total of 16 subjects participated in the study: 8 young healthy subjects and 8 stroke patients. Post-experiment data analysis was performed using MATLAB®. Mean Velocity Displacement (MVD), Planar Deviation (PD), Mediolateral Trajectory (MLT), and Anteroposterior Trajectory (APT) parameters were analyzed to measure the reduction in body sway. Our kinesthetic haptic feedback system was effective in reducing postural sway in young healthy subjects regardless of posture and ground condition, and in improving MVD and PD in stroke patients in the Romberg stance. Analysis of Variance (ANOVA) revealed that kinesthetic haptic feedback significantly reduced body sway in both groups of subjects. Kinesthetic haptic feedback can be implemented using a commercial haptic device and a smartphone.
Intuitive balance cues were created using the handle of a haptic device, rendering the approach very simple yet efficient in practice. This novel form of biofeedback will be a useful rehabilitation tool improving the balance of stroke patients.
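As a rough sketch of the feedback loop above (smartphone tilt angles in, corrective handle force out), the tilt-to-force mapping might look like the following; the proportional gain and force saturation limit are illustrative assumptions, not the study's parameters:

```python
# Illustrative sketch only: ML/AP body-sway angles from the smartphone are
# mapped to a corrective force on the haptic handle, opposing the sway.
# Gain and saturation values are assumed, not taken from the study.

def sway_to_force(ml_deg, ap_deg, gain=0.1, f_max=3.0):
    """Return (fx, fy) in newtons, opposing the measured sway angles,
    clamped to the device's safe force range."""
    clamp = lambda f: max(-f_max, min(f_max, f))
    return clamp(-gain * ml_deg), clamp(-gain * ap_deg)
```

In such a loop the smartphone streams tilt angles continuously, and the PC renders the returned force pair on the Phantom Omni® handle each update cycle.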
Poor shape perception is the reason reaches-to-grasp are visually guided online.
Lee, Young-Lim; Crabtree, Charles E; Norman, J Farley; Bingham, Geoffrey P
2008-08-01
Both judgment studies and studies of feedforward reaching have shown that the visual perception of object distance, size, and shape is inaccurate. However, feedback has been shown to calibrate feedforward reaches-to-grasp, making them accurate with respect to object distance and size. We now investigate whether shape perception (in particular, the aspect ratio of object depth to width) can be calibrated in the context of reaches-to-grasp. We used cylindrical objects with elliptical cross-sections of varying eccentricity. Our participants reached to grasp the width or the depth of these objects with the index finger and thumb. The maximum grasp aperture and the terminal grasp aperture were used to evaluate perception; both occur before the hand has contacted an object. In Experiments 1 and 2, we investigated whether perceived shape is recalibrated by distorted haptic feedback. Although somewhat equivocal, the results suggest that it is not. In Experiment 3, we tested the accuracy of feedforward grasping with respect to shape, with haptic feedback to allow calibration. Grasping was inaccurate in ways comparable to findings in shape perception judgment studies. In Experiment 4, we hypothesized that online guidance is needed for accurate grasping. Participants reached to grasp either with or without vision of the hand. The former was accurate, whereas the latter was not. We conclude that shape perception is not calibrated by feedback from reaches-to-grasp and that online visual guidance is required for accurate grasping because shape perception is poor.
Active Manual Movement Improves Directional Perception of Illusory Force.
Amemiya, Tomohiro; Gomi, Hiroaki
2016-01-01
Active touch sensing is known to facilitate the discrimination or recognition of the spatial properties of an object from the movement of tactile sensors on the skin, and by integrating proprioceptive feedback about hand positions or motor commands related to ongoing hand movements. On the other hand, several studies have reported that tactile processing is suppressed by hand movement. Thus, it is unclear whether active exploration of force direction using hand or arm movement improves the perception of force direction. Here, we show that active manual movement in both rotational and translational directions enhances precise perception of the force direction. To make it possible to move the hand in space without any physical constraints, we adopted a method of inducing an illusory force sensation by asymmetric vibration. We found that the precision of the perceived force direction was significantly better when the shoulder was rotated medially and laterally. We also found that directional errors derived from the motor response to the perceived force were smaller than those resulting from perceptual judgments between visual and haptic directional stimuli. These results demonstrate that active manual movement boosts the precision of the perceived direction of an illusory force.
Salisbury, C M; Gillespie, R B; Tan, H Z; Barbagli, F; Salisbury, J K
2011-01-01
In this paper, we extend the concept of the contrast sensitivity function - used to evaluate video projectors - to the evaluation of haptic devices. We propose using human observers to determine if vibrations rendered using a given haptic device are accompanied by artifacts detectable to humans. This determination produces a performance measure that carries particular relevance to applications involving texture rendering. For cases in which a device produces detectable artifacts, we have developed a protocol that localizes deficiencies in device design and/or hardware implementation. In this paper, we present results from human vibration detection experiments carried out using three commercial haptic devices and one high performance voice coil motor. We found that all three commercial devices produced perceptible artifacts when rendering vibrations near human detection thresholds. Our protocol allowed us to pinpoint the deficiencies, however, and we were able to show that minor modifications to the haptic hardware were sufficient to make these devices well suited for rendering vibrations, and by extension, the vibratory components of textures. We generalize our findings to provide quantitative design guidelines that ensure the ability of haptic devices to proficiently render the vibratory components of textures.
Haptic Feedback in Robot-Assisted Minimally Invasive Surgery
Okamura, Allison M.
2009-01-01
Purpose of Review Robot-assisted minimally invasive surgery (RMIS) holds great promise for improving the accuracy and dexterity of a surgeon while minimizing trauma to the patient. However, widespread clinical success with RMIS has been marginal. It is hypothesized that the lack of haptic (force and tactile) feedback presented to the surgeon is a limiting factor. This review explains the technical challenges of creating haptic feedback for robot-assisted surgery and provides recent results that evaluate the effectiveness of haptic feedback in mock surgical tasks. Recent Findings Haptic feedback systems for RMIS are still under development and evaluation. Most provide only force feedback, with limited fidelity. The major challenge at this time is sensing forces applied to the patient. A few tactile feedback systems for RMIS have been created, but their practicality for clinical implementation needs to be shown. It is particularly difficult to sense and display spatially distributed tactile information. The cost-benefit ratio for haptic feedback in RMIS has not been established. Summary The designs of existing commercial RMIS systems are not conducive for force feedback, and creative solutions are needed to create compelling tactile feedback systems. Surgeons, engineers, and neuroscientists should work together to develop effective solutions for haptic feedback in RMIS. PMID:19057225
A Multi-Finger Interface with MR Actuators for Haptic Applications.
Qin, Huanhuan; Song, Aiguo; Gao, Zhan; Liu, Yuqing; Jiang, Guohua
2018-01-01
Haptic devices with multi-finger input are highly desirable for providing realistic and natural feelings when interacting with a remote or virtual environment. Compared with conventional actuators, MR (magneto-rheological) actuators are preferable options in haptics because of their larger passive torque and torque-volume ratios. Most existing haptic MR actuators, however, are still bulky and heavy; smaller and lighter designs would make them more suitable for haptics. In this paper, a small-scale yet powerful MR actuator was designed to build a multi-finger interface for a 6-DOF haptic device. A compact structure was achieved by adopting a multi-disc configuration, with which the MR actuator can generate a maximum torque of 480 N·mm within dimensions of only 36 mm diameter and 18 mm height. Performance evaluation showed a relatively high dynamic range and good response characteristics compared with other haptic MR actuators. The multi-finger interface is equipped with three MR actuators and can provide up to 8 N of passive force to each of the thumb, index, and middle fingers. An application example demonstrates the effectiveness and potential of this new MR-actuator-based interface.
Haptic augmented skin surface generation toward telepalpation from a mobile skin image.
Kim, K
2018-05-01
Little is known about integrating palpation into existing mobile tele-skin-imaging, which delivers only low-quality tactile information (roughness) for telepalpation, and no study has yet reported telehaptic palpation based on mobile phone images for teledermatology or teleconsultations in skincare. This study therefore introduces a new algorithm that accurately reconstructs a haptically augmented skin surface for telehaptic palpation using a low-cost clip-on microscope attached to a mobile phone. Multiple algorithms, including gradient-based image enhancement, roughness-adaptive tactile mask generation, roughness-enhanced 3D tactile map building, and visual and haptic rendering with a three-degrees-of-freedom (DOF) haptic device, were developed and integrated into one system. Evaluation experiments tested the performance of 3D roughness reconstruction with and without the tactile mask. The results confirm that the haptic roughness reconstructed with the tactile mask is superior to that reconstructed without it. Additional experiments demonstrate that the proposed algorithm is robust against varying lighting conditions and blurring. Finally, a user study was designed to assess the effect of adding the haptic modality to the existing visual-only interface; the results attest that haptic skin palpation can significantly improve skin-examination performance. Mobile-image-based telehaptic palpation technology was proposed, and an initial version was developed. The developed technology was tested with several skin images, and the experimental results showed the superiority of the proposed scheme in terms of the haptic augmentation of real skin images. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
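The pipeline named above (image enhancement, roughness-adaptive tactile mask, 3D tactile map) can be sketched in miniature as follows; the specific operators and weights here are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

# Illustrative sketch only: a gradient-magnitude "roughness mask" modulates
# image intensity into a tactile height map, loosely following the pipeline
# described above. Operators and weights are assumed.

def tactile_height_map(gray, depth_scale=1.0):
    """gray: 2D float array in [0, 1]. Returns a height map equal to the
    image intensity modulated by local gradient magnitude (roughness)."""
    gy, gx = np.gradient(gray)                 # per-axis intensity gradients
    grad_mag = np.hypot(gx, gy)                # local edge/roughness strength
    mask = grad_mag / (grad_mag.max() + 1e-9)  # roughness-adaptive mask in [0, 1]
    return depth_scale * gray * (0.5 + 0.5 * mask)
```

A haptic renderer would then sample this height map under the proxy position to compute a contact force for the 3-DOF device.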
Force Exertion Capacity Measurements in Haptic Virtual Environments
ERIC Educational Resources Information Center
Munih, Marko; Bardorfer, Ales; Ceru, Bojan; Bajd, Tadej; Zupan, Anton
2010-01-01
An objective test for evaluating the functional status of the upper limbs (ULs) in patients with muscular dystrophy (MD) is presented. The method allows for quantitative assessment of the UL functional state with an emphasis on force exertion capacity. The experimental measurement setup and the methodology for the assessment of maximal exertable force…
Brokaw, Elizabeth B; Murray, Theresa M; Nef, Tobias; Lum, Peter S; Brokaw, Elizabeth B; Nichols, Diane; Holley, Rahsaan J
2011-01-01
After a stroke, abnormal joint coordination of the arm may limit functional movement and recovery. To aid in training inter-joint movement coordination, a haptic guidance method for functionally driven rehabilitation after stroke, called Time Independent Functional Training (TIFT), has been developed for the ARMin III robot. The mode helps retrain inter-joint coordination during functional movements, such as putting an object on a shelf, pouring from a pitcher, and sorting objects into bins. A single chronic stroke subject was tested to validate the modality. The subject received 1.5 h of robotic therapy twice a week for 4 weeks. The therapy and the results of training the single stroke subject are discussed. The subject showed a decrease in joint error for the sorting task across training sessions and an increase in self-selected movement time during training. In kinematic reaching analysis, the subject showed improvements in range of motion and joint coordination in a reaching task, as well as improvements in supination-pronation range of motion at the wrist. © 2011 IEEE
Feeling form: the neural basis of haptic shape perception.
Yau, Jeffrey M; Kim, Sung Soo; Thakur, Pramodsingh H; Bensmaia, Sliman J
2016-02-01
The tactile perception of the shape of objects critically guides our ability to interact with them. In this review, we describe how shape information is processed as it ascends the somatosensory neuraxis of primates. At the somatosensory periphery, spatial form is represented in the spatial patterns of activation evoked across populations of mechanoreceptive afferents. In the cerebral cortex, neurons respond selectively to particular spatial features, like orientation and curvature. While feature selectivity of neurons in the earlier processing stages can be understood in terms of linear receptive field models, higher order somatosensory neurons exhibit nonlinear response properties that result in tuning for more complex geometrical features. In fact, tactile shape processing bears remarkable analogies to its visual counterpart and the two may rely on shared neural circuitry. Furthermore, one of the unique aspects of primate somatosensation is that it contains a deformable sensory sheet. Because the relative positions of cutaneous mechanoreceptors depend on the conformation of the hand, the haptic perception of three-dimensional objects requires the integration of cutaneous and proprioceptive signals, an integration that is observed throughout somatosensory cortex. Copyright © 2016 the American Physiological Society.
Besse, Nadine; Rosset, Samuel; Zarate, Juan Jose; Ferrari, Elisabetta; Brayda, Luca; Shea, Herbert
2018-01-01
We present a fully latching and scalable 4 × 4 haptic display with 4 mm pitch, 5 s refresh time, 400 mN holding force, and 650 μm displacement per taxel. The display serves to convey dynamic graphical information to blind and visually impaired users. Combining significant holding force with high taxel density and large-amplitude motion in a very compact overall form factor was made possible by exploiting the reversible, fast, hundred-fold change in the stiffness of a thin shape memory polymer (SMP) membrane when heated above its glass transition temperature. Local heating is produced using an addressable array of 3-mm-diameter stretchable microheaters patterned on the SMP. Each taxel is selectively and independently actuated by synchronizing the local Joule heating with a single pressure supply. Switching off the heating locks each taxel into its position (up or down), enabling any array configuration to be held with zero power consumption. A 3D-printed pin array is mounted over the SMP membrane, providing the user with a smooth, room-temperature array of movable pins to explore by touch. Perception tests carried out with 24 blind users resulted in 70 percent correct pattern recognition over a 12-word tactile dictionary.
Robotic guidance benefits the learning of dynamic, but not of spatial movement characteristics.
Lüttgen, Jenna; Heuer, Herbert
2012-10-01
Robotic guidance is an engineered form of haptic-guidance training and intended to enhance motor learning in rehabilitation, surgery, and sports. However, its benefits (and pitfalls) are still debated. Here, we investigate the effects of different presentation modes on the reproduction of a spatiotemporal movement pattern. In three different groups of participants, the movement was demonstrated in three different modalities, namely visual, haptic, and visuo-haptic. After demonstration, participants had to reproduce the movement in two alternating recall conditions: haptic and visuo-haptic. Performance of the three groups during recall was compared with regard to spatial and dynamic movement characteristics. After haptic presentation, participants showed superior dynamic accuracy, whereas after visual presentation, participants performed better with regard to spatial accuracy. Added visual feedback during recall always led to enhanced performance, independent of the movement characteristic and the presentation modality. These findings substantiate the different benefits of different presentation modes for different movement characteristics. In particular, robotic guidance is beneficial for the learning of dynamic, but not of spatial movement characteristics.
Haptic feedback in OP:Sense - augmented reality in telemanipulated robotic surgery.
Beyl, T; Nicolai, P; Mönnich, H; Raczkowksy, J; Wörn, H
2012-01-01
In current research, haptic feedback in robot-assisted interventions plays an important role. However, most approaches to haptic feedback only map the current forces at the surgical instrument to the haptic input devices, whereas surgeons demand a combination of medical imaging and telemanipulated robotic setups. In this paper we describe how this feature is integrated into our robotic research platform OP:Sense. The proposed method allows the automatic transfer of segmented imaging data to the haptic renderer and therefore enriches the haptic feedback with virtual fixtures based on imaging data. Anatomical structures are extracted from pre-operatively generated medical images, or virtual walls are defined by the surgeon inside the imaging data. Combining real forces with virtual fixtures can guide the surgeon to the regions of interest and helps prevent damage to critical structures inside the patient. We believe that the combination of medical imaging and telemanipulation is a crucial step for the next generation of MIRS systems.
Augmented reality and haptic interfaces for robot-assisted surgery.
Yamamoto, Tomonori; Abolhassani, Niki; Jung, Sung; Okamura, Allison M; Judkins, Timothy N
2012-03-01
Current teleoperated robot-assisted minimally invasive surgical systems do not take full advantage of the potential performance enhancements offered by various forms of haptic feedback to the surgeon. Direct and graphical haptic feedback systems can be integrated with vision and robot control systems in order to provide haptic feedback to improve safety and tissue mechanical property identification. An interoperable interface for teleoperated robot-assisted minimally invasive surgery was developed to provide haptic feedback and augmented visual feedback using three-dimensional (3D) graphical overlays. The software framework consists of control and command software, robot plug-ins, image processing plug-ins and 3D surface reconstructions. The feasibility of the interface was demonstrated in two tasks performed with artificial tissue: palpation to detect hard lumps and surface tracing, using vision-based forbidden-region virtual fixtures to prevent the patient-side manipulator from entering unwanted regions of the workspace. The interoperable interface enables fast development and successful implementation of effective haptic feedback methods in teleoperation. Copyright © 2011 John Wiley & Sons, Ltd.
Design of a 7-DOF haptic master using magneto-rheological devices for robot surgery
NASA Astrophysics Data System (ADS)
Kang, Seok-Rae; Choi, Seung-Bok; Hwang, Yong-Hoon; Cha, Seung-Woo
2017-04-01
This paper presents a 7-degrees-of-freedom (7-DOF) haptic master that is applicable to robot-assisted minimally invasive surgery (RMIS). By utilizing a controllable magneto-rheological (MR) fluid, the haptic master can provide force information to the surgeon during surgery. The proposed haptic master provides three translational motions (X, Y, Z) and four further motions (pitch, yaw, roll, and grasping), all with force feedback capability. The haptic master can generate repulsive forces or torques by activating an MR clutch and an MR brake, both designed and manufactured with consideration of the size and output torque required for robotic surgery. A proportional-integral-derivative (PID) controller is then designed and implemented to achieve torque/force tracking of desired trajectories. It is verified that the proposed haptic master can closely track the desired torque and force arising at the surgical site by controlling the input current applied to the MR clutch and brake.
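The torque/force tracking loop above can be sketched as a textbook discrete PID that commands coil current to the MR clutch/brake; the gains, time step, and current limit here are illustrative assumptions, not the paper's tuned values:

```python
# Illustrative sketch only: a generic discrete PID commanding MR coil
# current from a torque error. Gains and limits are assumed.

class PID:
    def __init__(self, kp, ki, kd, dt, i_max=2.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.i_max = i_max          # coil current saturation (A), assumed
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, torque_desired, torque_measured):
        """One control cycle: torque error in, coil current command out."""
        err = torque_desired - torque_measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        current = self.kp * err + self.ki * self.integral + self.kd * deriv
        return max(0.0, min(self.i_max, current))  # MR coil current >= 0
```

Each DOF with force feedback would run one such loop, with the measured torque coming from the master's sensing and the output current driving the corresponding MR clutch or brake.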
Menon, Samir; Zhu, Jack; Goyal, Deeksha; Khatib, Oussama
2017-07-01
Haptic interfaces compatible with functional magnetic resonance imaging (Haptic fMRI) promise to enable rich motor neuroscience experiments that study how humans perform complex manipulation tasks. Here, we present a large-scale study (176 scan runs, 33 scan sessions) that characterizes the reliability and performance of one such electromagnetically actuated device, Haptic fMRI Interface 3 (HFI-3). We outline engineering advances that ensured HFI-3 did not interfere with fMRI measurements; observed fMRI temporal noise levels with HFI-3 operating were at the fMRI baseline (0.8% noise-to-signal). We also present results from HFI-3 experiments demonstrating that high-resolution fMRI can be used to study spatio-temporal patterns of blood-oxygenation-level-dependent (BOLD) activation. These experiments include motor planning, goal-directed reaching, and visually guided force control. The observed fMRI responses are consistent with the existing literature, which supports Haptic fMRI's effectiveness for studying the brain's motor regions.
Advanced haptic sensor for measuring human skin conditions
NASA Astrophysics Data System (ADS)
Tsuchimi, Daisuke; Okuyama, Takeshi; Tanaka, Mami
2009-12-01
This paper is concerned with the development of a tactile sensor using PVDF (polyvinylidene fluoride) film as the sensory receptor to evaluate the softness, smoothness, and stickiness of human skin. Touch is, along with eyesight, among the most important senses of the human body, and it allows skin condition to be examined quickly. However, its subjectivity and ambiguity make it difficult to quantify skin conditions. A measurement device that can evaluate skin conditions easily and objectively is therefore in demand among dermatologists, the cosmetics industry, and others. In this paper, an advanced haptic sensor system that can measure multiple aspects of skin condition on various parts of the human body is developed. The application of the sensor system to evaluating the softness, smoothness, and stickiness of skin is investigated through two experiments.
Teaching Classical Mechanics Concepts Using Visuo-Haptic Simulators
ERIC Educational Resources Information Center
Neri, Luis; Noguez, Julieta; Robledo-Rella, Victor; Escobar-Castillejos, David; Gonzalez-Nucamendi, Andres
2018-01-01
In this work, the design and implementation of several physics scenarios using haptic devices are presented and discussed. Four visuo-haptic applications were developed for an undergraduate engineering physics course. Experiments with experimental and control groups were designed and implemented. Activities and exercises related to classical…
Garfjeld Roberts, Patrick; Guyver, Paul; Baldwin, Mathew; Akhtar, Kash; Alvand, Abtin; Price, Andrew J; Rees, Jonathan L
2017-02-01
The aim was to assess the construct and face validity of ArthroS, a passive haptic VR simulator. A secondary aim was to evaluate the novel performance metrics produced by this simulator. Two groups of 30 participants, each divided into novice, intermediate, or expert based on arthroscopic experience, completed three separate tasks on either the knee or the shoulder module of the simulator. Performance was recorded using 12 automatically generated performance metrics and video footage of the arthroscopic procedures. The videos were assessed blind using a validated global rating scale (GRS). Participants completed a survey about the simulator's realism and training utility. The new simulator demonstrated construct validity of its tasks when evaluated against the GRS (p ≤ 0.003 in all cases). Among its automatically generated performance metrics, established outputs such as time taken (p ≤ 0.001) and instrument path length (p ≤ 0.007) also demonstrated good construct validity. However, two-thirds of the proposed 'novel metrics' the simulator reports could not distinguish participants by arthroscopic experience. Face validity assessment rated the simulator as a realistic and useful tool for trainees, but the passive haptic feedback (a key feature of this simulator) was rated as less realistic. The ArthroS simulator has good task construct validity based on established objective outputs, but some of the novel performance metrics could not distinguish between levels of surgical experience. The passive haptic feedback of the simulator also needs improvement. If simulators could offer automated and validated performance feedback, this would facilitate improvements in the delivery of training by allowing trainees to practise and self-assess.
The "Haptic Finger"- a new device for monitoring skin condition.
Tanaka, Mami; Lévêque, Jean Luc; Tagami, Hachiro; Kikuchi, Katsuko; Chonan, Seifi
2003-05-01
Touching the skin is of great importance for the clinician in assessing roughness, softness, firmness, etc. This type of clinical assessment is very subjective and therefore not reproducible from one clinician to another, or even from time to time for the same clinician. To objectively monitor skin texture, we developed a new sensor, placed directly on the clinician's finger, which generates an electric signal when slid over the skin surface. The base of this Haptic Finger sensor is a thin stainless steel plate on which sponge rubber, PVDF foil, acetate film, and gauze are layered. The signal generated by the sensor was filtered and digitally stored before processing. In a first in vitro experiment, the sensor was moved over different skin models (sponge rubber covered by silicone rubber) of varying hardness and roughness. These experiments allowed two texture-characterizing parameters to be defined. The first parameter is the variance of the wavelet-processed signal, representing an index of roughness. The second parameter is the dispersion of the power spectral density in the frequency domain, corresponding to hardness. To validate these parameters, the Haptic Finger was used to scan the skin surfaces of 30 people, 14 of whom displayed a skin disorder: xerosis (n = 5), atopic dermatitis (n = 7), and psoriasis (n = 2). The results obtained with the sensor were compared with subjective clinical evaluations by a clinician who scored both the roughness and the hardness of the skin. Good agreement was observed between the clinical assessment and the two parameters generated using the Haptic Finger. This sensor could prove extremely valuable in cosmetic research, where skin surface texture (in terms of tactile properties) is difficult to measure.
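The two texture parameters can be illustrated as follows; a single-level Haar detail (pairwise differences) stands in for the paper's wavelet analysis, and the standard deviation of an FFT-based power spectral density stands in for its dispersion measure, both simplifying assumptions:

```python
import numpy as np

# Illustrative sketch only: simplified stand-ins for the two texture
# parameters described above. Definitions here are assumed, not the paper's.

def roughness_index(signal):
    """Variance of level-1 Haar detail coefficients (roughness stand-in)."""
    s = np.asarray(signal, dtype=float)
    n = len(s) // 2
    detail = (s[0::2][:n] - s[1::2][:n]) / np.sqrt(2)  # pairwise differences
    return float(np.var(detail))

def hardness_index(signal, fs=1000.0):
    """Dispersion (std) of the power spectral density (hardness stand-in)."""
    s = np.asarray(signal, dtype=float)
    psd = np.abs(np.fft.rfft(s - s.mean())) ** 2 / (fs * len(s))
    return float(np.std(psd))
```

A rougher surface produces larger high-frequency detail in the slid-finger signal, so `roughness_index` grows, while a harder surface concentrates spectral energy and changes the spread measured by `hardness_index`.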
Functional Contour-following via Haptic Perception and Reinforcement Learning.
Hellman, Randall B; Tekin, Cem; van der Schaar, Mihaela; Santos, Veronica J
2018-01-01
Many tasks involve the fine manipulation of objects despite limited visual feedback. In such scenarios, tactile and proprioceptive feedback can be leveraged for task completion. We present an approach for real-time haptic perception and decision-making for a haptics-driven, functional contour-following task: the closure of a ziplock bag. This task is challenging for robots because the bag is deformable, transparent, and visually occluded by artificial fingertip sensors that are also compliant. A deep neural net classifier was trained to estimate the state of a zipper within a robot's pinch grasp. A Contextual Multi-Armed Bandit (C-MAB) reinforcement learning algorithm was implemented to maximize cumulative rewards by balancing exploration versus exploitation of the state-action space. The C-MAB learner outperformed a benchmark Q-learner by more efficiently exploring the state-action space while learning a hard-to-code task. The learned C-MAB policy was tested with novel ziplock bag scenarios and contours (wire, rope). Importantly, this work contributes to the development of reinforcement learning approaches that account for limited resources such as hardware life and researcher time. As robots are used to perform complex, physically interactive tasks in unstructured or unmodeled environments, it becomes important to develop methods that enable efficient and effective learning with physical testbeds.
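A generic epsilon-greedy contextual bandit of the kind described (context = estimated zipper state, arms = discrete actions, reward = task progress) can be sketched as below; this is a stand-in for illustration, not the paper's C-MAB algorithm or reward design:

```python
import random
from collections import defaultdict

# Illustrative sketch only: epsilon-greedy contextual bandit with running
# mean reward estimates per (context, action) pair. All names are assumed.

class ContextualBandit:
    def __init__(self, actions, epsilon=0.1):
        self.actions = actions
        self.epsilon = epsilon
        self.value = defaultdict(float)   # (context, action) -> mean reward
        self.count = defaultdict(int)

    def choose(self, context):
        if random.random() < self.epsilon:        # explore
            return random.choice(self.actions)
        return max(self.actions,                  # exploit best estimate
                   key=lambda a: self.value[(context, a)])

    def update(self, context, action, reward):
        """Incremental running-mean update of the reward estimate."""
        key = (context, action)
        self.count[key] += 1
        self.value[key] += (reward - self.value[key]) / self.count[key]
```

The epsilon parameter trades exploration of the state-action space against exploitation, which is the balance the abstract credits for the C-MAB learner's efficiency over the benchmark Q-learner.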
Adamovich, Sergei; Fluet, Gerard G.; Merians, Alma S.; Mathai, Abraham; Qiu, Qinyin
2010-01-01
Current neuroscience has identified several constructs to increase the effectiveness of upper extremity rehabilitation. One is the use of progressive, skill acquisition-oriented training. Another approach emphasizes the use of bilateral activities. Building on these principles, this paper describes the design and feasibility testing of a robotic/virtual environment system designed to train the arm of persons who have had strokes. The system provides a variety of assistance modes, scalable workspaces and hand-robot interfaces allowing persons with strokes to train multiple joints in three dimensions. The simulations utilize assistance algorithms that adjust task difficulty both online and offline in relation to subject performance. Several distinctive haptic effects have been incorporated into the simulations. An adaptive master-slave relationship between the unimpaired and impaired arm encourages active movement of the subject's hemiparetic arm during a bimanual task. Adaptive anti-gravity support and damping stabilize the arm during virtual reaching and placement tasks. An adaptive virtual spring provides assistance to complete the movement if the subject is unable to complete the task in time. Finally, haptically rendered virtual objects help to shape the movement trajectory during a virtual placement task. A proof of concept study demonstrated this system to be safe, feasible and worthy of further study. PMID:19666345
Ponce Wong, Ruben D; Hellman, Randall B; Santos, Veronica J
2014-01-01
Upper-limb amputees rely primarily on visual feedback when using their prostheses to interact with others or objects in their environment. A constant reliance upon visual feedback can be mentally exhausting and does not suffice for many activities when line-of-sight is unavailable. Upper-limb amputees could greatly benefit from the ability to perceive edges, one of the most salient features of 3D shape, through touch alone. We present an approach for estimating edge orientation with respect to an artificial fingertip through haptic exploration using a multimodal tactile sensor on a robot hand. Key parameters from the tactile signals for each of four exploratory procedures were used as inputs to a support vector regression model. Edge orientation angles ranging from -90 to 90 degrees were estimated with an 85-input model having an R² of 0.99 and RMS error of 5.08 degrees. Electrode impedance signals provided the most useful inputs by encoding spatially asymmetric skin deformation across the entire fingertip. Interestingly, sensor regions that were not in direct contact with the stimulus provided particularly useful information. Methods described here could pave the way for semi-autonomous capabilities in prosthetic or robotic hands during haptic exploration, especially when visual feedback is unavailable.
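The regression step can be illustrated with ordinary least squares as a simple stand-in for the paper's support vector regression; the five synthetic features below are placeholders for the 85 tactile inputs used by the authors.

```python
import numpy as np

def fit_orientation_model(features, angles_deg):
    """Fit a linear map from tactile features to edge orientation (degrees).
    Plain least squares here; the paper used support vector regression."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])  # append bias column
    w, *_ = np.linalg.lstsq(X, angles_deg, rcond=None)
    return w

def predict_orientation(w, features):
    """Apply the fitted weights to new feature vectors."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])
    return X @ w
```

On noiseless synthetic data the linear model recovers the generating weights exactly; real tactile features would of course carry noise and nonlinearity, which motivates the kernel-based SVR of the paper.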
Gal, Gilad Ben; Weiss, Ervin I; Gafni, Naomi; Ziv, Amitai
2011-04-01
Virtual reality force feedback simulators provide a haptic (sense of touch) feedback through the device being held by the user. The simulator's goal is to provide a learning experience resembling reality. A newly developed haptic simulator (IDEA Dental, Las Vegas, NV, USA) was assessed in this study. Our objectives were to assess the simulator's ability to serve as a tool for dental instruction, self-practice, and student evaluation, as well as to evaluate the sensation it provides. A total of thirty-three evaluators were divided into two groups. The first group consisted of twenty-one experienced dental educators; the second consisted of twelve fifth-year dental students. Each participant performed drilling tasks using the simulator and filled out a questionnaire regarding the simulator and potential ways of using it in dental education. The results show that experienced dental faculty members as well as advanced dental students found that the simulator could provide significant potential benefits in the teaching and self-learning of manual dental skills. Development of the simulator's tactile sensation is needed to attune it to genuine sensation. Further studies relating to aspects of the simulator's structure and its predictive validity, its scoring system, and the nature of the performed tasks should be conducted.
Operator dynamics for stability condition in haptic and teleoperation system: A survey.
Li, Hongbing; Zhang, Lei; Kawashima, Kenji
2018-04-01
Currently, haptic systems ignore the varying impedance of the human hand with its countless configurations and thus cannot recreate the complex haptic interactions. The literature does not reveal a comprehensive survey on the methods proposed and this study is an attempt to bridge this gap. The paper includes an extensive review of human arm impedance modeling and control deployed to address inherent stability and transparency issues in haptic interaction and teleoperation systems. Detailed classification and comparative study of various contributions in human arm modeling are presented and summarized in tables and diagrams. The main challenges in modeling human arm impedance for haptic robotic applications are identified. The possible future research directions are outlined based on the gaps identified in the survey. Copyright © 2018 John Wiley & Sons, Ltd.
Haptic interface of the KAIST-Ewha colonoscopy simulator II.
Woo, Hyun Soo; Kim, Woo Seok; Ahn, Woojin; Lee, Doo Yong; Yi, Sun Young
2008-11-01
This paper presents an improved haptic interface for the Korea Advanced Institute of Science and Technology Ewha Colonoscopy Simulator II. The haptic interface enables the distal portion of the colonoscope to be freely bent while guaranteeing sufficient workspace and reflective forces for colonoscopy simulation. Its force-torque sensor measures the force and torque profiles applied by the user. Manipulation of the colonoscope tip is monitored by four deflection sensors and triggers computations to render accurate graphic images corresponding to the rotation of the angle knob. Tack sensors are attached to the valve-actuation buttons of the colonoscope to simulate air injection or suction as well as the corresponding deformation of the colon. A survey study for face validation was conducted, and the result shows that the developed haptic interface provides realistic haptic feedback for colonoscopy simulations.
A Haptic-Enhanced System for Molecular Sensing
NASA Astrophysics Data System (ADS)
Comai, Sara; Mazza, Davide
The science of haptics has received enormous attention in the last decade. One of the major application trends of haptics technology is data visualization and training. In this paper, we present a haptically enhanced system for the manipulation and tactile exploration of molecules. The geometrical models of the molecules are extracted from either theoretical or empirical data, using file formats widely adopted in the chemical and biological fields. The addition of information computed with computational chemistry tools allows users to feel the interaction forces between an explored molecule and a charge associated with the haptic device, and to visualize a huge amount of numerical data in a more comprehensible way. The developed tool can be used for either teaching or research purposes due to its reliance on both theoretical and experimental data.
A Haptics Symposium Retrospective: 20 Years
NASA Technical Reports Server (NTRS)
Colgate, J. Edward; Adelstein, Bernard
2012-01-01
The very first "Haptics Symposium" actually went by the name "Issues in the Development of Kinesthetic Displays of Teleoperation and Virtual Environments." The word "Haptic" didn't make it into the name until the next year. Not only was the most important word absent but so were RFPs, journals and commercial markets. And yet, as we prepare for the 2012 symposium, haptics is a thriving and amazingly diverse field of endeavor. In this talk we'll reflect on the origins of this field and on its evolution over the past twenty years, as well as the evolution of the Haptics Symposium itself. We hope to share with you some of the excitement we've felt along the way, and that we continue to feel as we look toward the future of our field.
Direct Visuo-Haptic 4D Volume Rendering Using Respiratory Motion Models.
Fortmeier, Dirk; Wilms, Matthias; Mastmeyer, Andre; Handels, Heinz
2015-01-01
This article presents methods for direct visuo-haptic 4D volume rendering of virtual patient models under respiratory motion. Breathing models are computed based on patient-specific 4D CT image data sequences. Virtual patient models are visualized in real-time by ray casting based rendering of a reference CT image warped by a time-variant displacement field, which is computed using the motion models at run-time. Furthermore, haptic interaction with the animated virtual patient models is provided by using the displacements computed at high rendering rates to translate the position of the haptic device into the space of the reference CT image. This concept is applied to virtual palpation and the haptic simulation of insertion of a virtual bendable needle. To this end, different motion models that are applicable in real-time are presented and the methods are integrated into a needle puncture training simulation framework, which can be used for simulated biopsy or vessel puncture in the liver. To confirm real-time applicability, a performance analysis of the resulting framework is given. It is shown that the presented methods achieve mean update rates around 2,000 Hz for haptic simulation and interactive frame rates for volume rendering and thus are well suited for visuo-haptic rendering of virtual patients under respiratory motion.
Spatial patterns of cutaneous vibration during whole-hand haptic interactions
Hayward, Vincent; Visell, Yon
2016-01-01
We investigated the propagation patterns of cutaneous vibration in the hand during interactions with touched objects. Prior research has highlighted the importance of vibrotactile signals during haptic interactions, but little is known of how vibrations propagate throughout the hand. Furthermore, the extent to which the patterns of vibrations reflect the nature of the objects that are touched, and how they are touched, is unknown. Using an apparatus comprising an array of accelerometers, we mapped and analyzed spatial distributions of vibrations propagating in the skin of the dorsal region of the hand during active touch, grasping, and manipulation tasks. We found these spatial patterns of vibration to vary systematically with touch interactions and determined that it is possible to use these data to decode the modes of interaction with touched objects. The observed vibration patterns evolved rapidly in time, peaking in intensity within a few milliseconds, fading within 20–30 ms, and yielding interaction-dependent distributions of energy in frequency bands that span the range of vibrotactile sensitivity. These results are consistent with findings in perception research that indicate that vibrotactile information distributed throughout the hand can transmit information regarding explored and manipulated objects. The results may further clarify the role of distributed sensory resources in the perceptual recovery of object attributes during active touch, may guide the development of approaches to robotic sensing, and could have implications for the rehabilitation of the upper extremity. PMID:27035957
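Decoding interaction modes from such measurements typically starts from band-limited energy features. The sketch below computes the energy of an acceleration signal in a few vibrotactile frequency bands; the band edges are illustrative assumptions, not the bands analyzed in the study.

```python
import numpy as np

def band_energies(accel, fs, bands=((20.0, 100.0), (100.0, 300.0), (300.0, 800.0))):
    """Energy of a skin-acceleration signal in vibrotactile frequency bands.
    One such feature vector per accelerometer could feed a mode classifier."""
    x = np.asarray(accel, dtype=float)
    x = x - x.mean()                       # remove DC offset
    spec = np.abs(np.fft.rfft(x)) ** 2     # one-sided power spectrum
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    return [float(spec[(freqs >= lo) & (freqs < hi)].sum()) for lo, hi in bands]
```

A tapping contact, for instance, would concentrate its energy in lower bands than a texture-scanning contact, making the band-energy vector informative about the mode of interaction.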
Menon, Samir; Quigley, Paul; Yu, Michelle; Khatib, Oussama
2014-01-01
Neuroimaging artifacts in haptic functional magnetic resonance imaging (Haptic fMRI) experiments have the potential to induce spurious fMRI activation where there is none, or to make neural activation measurements appear correlated across brain regions when they are actually not. Here, we demonstrate that performing three-dimensional goal-directed reaching motions while operating the Haptic fMRI Interface (HFI) does not create confounding motion artifacts. To test for artifacts, we simultaneously scanned a subject's brain with a customized soft phantom placed a few centimeters away from the subject's left motor cortex. The phantom captured task-related motion and haptic noise, but did not contain associated neural activation measurements. We quantified the task-related information present in fMRI measurements taken from the brain and the phantom by using a linear max-margin classifier to predict whether raw time series data could differentiate between motion planning or reaching. fMRI measurements in the phantom were uninformative (2σ, 45-73%; chance=50%), while those in primary motor, visual, and somatosensory cortex accurately classified task conditions (2σ, 90-96%). We also localized artifacts due to the haptic interface alone by scanning a stand-alone fBIRN phantom, while an operator performed haptic tasks outside the scanner's bore with the interface at the same location. The stand-alone phantom had lower temporal noise and had similar mean classification but a tighter distribution (bootstrap Gaussian fit) than the brain phantom. Our results suggest that any fMRI measurement artifacts for Haptic fMRI reaching experiments are dominated by actual neural responses.
Evaluation of Motor Control Using Haptic Device
NASA Astrophysics Data System (ADS)
Nuruki, Atsuo; Kawabata, Takuro; Shimozono, Tomoyuki; Yamada, Masafumi; Yunokuchi, Kazutomo
When kinesthesia and touch act at the same time, the resulting perception is called haptic perception. This sense plays a key role in conveying motor information for force and position control, and haptic perception is therefore important wherever the evaluation of motor control is needed. The purpose of this paper is to evaluate motor control, namely the perception of heaviness and distance, in normal and fatigued conditions using psychophysical experiments. We used a haptic device in order to generate precise forces and distances, but there are few precedents for evaluation systems built on haptic devices; it is therefore a further purpose of this study to examine whether a haptic device is useful as an evaluation system for motor control. The psychophysical quantities of force and distance were measured in two kinds of experiments. Eight healthy subjects participated in this study. Stimuli were presented by a haptic device (PHANTOM Omni, SensAble). The subjects compared standard and test stimuli and reported which stimulus felt stronger. For the psychophysical quantity of force, the just noticeable difference (JND) showed a significant difference, while the point of subjective equality (PSE) did not differ between the normal and muscle-fatigue conditions. On the other hand, for the psychophysical quantity of distance, neither JND nor PSE differed between the normal and muscle-fatigue conditions. These results show that control of force was influenced by muscle fatigue, but control of distance was not. Moreover, these results suggest that the haptic device is useful as an evaluation system for motor control.
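The two psychophysical quantities, PSE and JND, can be estimated from the proportion of "test felt stronger" responses at each test level. The sketch below uses linear interpolation of a monotonically increasing psychometric function; a cumulative-Gaussian fit is the more common choice, so this is only an illustration of the definitions.

```python
import numpy as np

def pse_and_jnd(test_levels, p_stronger):
    """Estimate PSE (level judged equal to the standard, p = 0.5) and
    JND (half the 25%-75% spread of the psychometric function) by
    linear interpolation. Assumes p_stronger increases with level."""
    levels = np.asarray(test_levels, dtype=float)
    p = np.asarray(p_stronger, dtype=float)
    pse = np.interp(0.5, p, levels)
    q25 = np.interp(0.25, p, levels)
    q75 = np.interp(0.75, p, levels)
    return pse, (q75 - q25) / 2.0
```

In the study's terms, a larger JND under fatigue for the force task, with an unchanged PSE, would mean force discrimination became coarser without a systematic bias.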
Mental rotation of tactile stimuli: using directional haptic cues in mobile devices.
Gleeson, Brian T; Provancher, William R
2013-01-01
Haptic interfaces have the potential to enrich users' interactions with mobile devices and convey information without burdening the user's visual or auditory attention. Haptic stimuli with directional content, for example, navigational cues, may be difficult to use in handheld applications; the user's hand, where the cues are delivered, may not be aligned with the world, where the cues are to be interpreted. In such a case, the user would be required to mentally transform the stimuli between different reference frames. We examine the mental rotation of directional haptic stimuli in three experiments, investigating: 1) users' intuitive interpretation of rotated stimuli, 2) mental rotation of haptic stimuli about a single axis, and 3) rotation about multiple axes and the effects of specific hand poses and joint rotations. We conclude that directional haptic stimuli are suitable for use in mobile applications, although users do not naturally interpret rotated stimuli in any one universal way. We find evidence of cognitive processes involving the rotation of analog, spatial representations and discuss how our results fit into the larger body of mental rotation research. For small angles (e.g., less than 40 degree), these mental rotations come at little cost, but rotations with larger misalignment angles impact user performance. When considering the design of a handheld haptic device, our results indicate that hand pose must be carefully considered, as certain poses increase the difficulty of stimulus interpretation. Generally, all tested joint rotations impact task difficulty, but finger flexion and wrist rotation interact to greatly increase the cost of stimulus interpretation; such hand poses should be avoided when designing a haptic interface.
Mazella, Anaïs; Albaret, Jean-Michel; Picard, Delphine
2016-01-01
To fill an important gap in the psychometric assessment of children and adolescents with impaired vision, we designed a new battery of haptic tests, called Haptic-2D, for visually impaired and sighted individuals aged five to 18 years. Unlike existing batteries, ours uses only two-dimensional raised materials that participants explore using active touch. It is composed of 11 haptic tests, measuring scanning skills, tactile discrimination skills, spatial comprehension skills, short-term tactile memory, and comprehension of tactile pictures. We administered this battery to 138 participants, half of whom were sighted (n=69), and half visually impaired (blind, n=16; low vision, n=53). Results indicated a significant main effect of age on haptic scores, but no main effect of vision or Age × Vision interaction effect. Reliability of test items was satisfactory (Cronbach's alpha, α=0.51-0.84). Convergent validity was good, as shown by a significant correlation (age partialled out) between total haptic scores and scores on the B101 test (rp=0.51, n=47). Discriminant validity was also satisfactory, as attested by a lower but still significant partial correlation between total haptic scores and the raw score on the verbal WISC (rp=0.43, n=62). Finally, test-retest reliability was good (rs=0.93, n=12; interval of one to two months). This new psychometric tool should prove useful to practitioners working with young people with impaired vision. Copyright © 2015 Elsevier Ltd. All rights reserved.
Performance evaluation of a robot-assisted catheter operating system with haptic feedback.
Song, Yu; Guo, Shuxiang; Yin, Xuanchun; Zhang, Linshuai; Hirata, Hideyuki; Ishihara, Hidenori; Tamiya, Takashi
2018-06-20
In this paper, a novel robot-assisted catheter operating system (RCOS) is proposed as a method to reduce the physical stress and X-ray exposure time experienced by physicians during endovascular procedures. The unique design of this system allows the physician to apply conventional bedside catheterization skills (advance, retreat, and rotate) to an input catheter, which is placed at the master side to control another, patient catheter placed at the slave side. For this purpose, a magnetorheological (MR) fluid-based master haptic interface has been developed to measure the axial and radial motions of the input catheter, as well as to provide haptic feedback to the physician during the operation. In order to achieve a quick response of the haptic force in the master haptic interface, a Hall sensor-based closed-loop control strategy is employed. On the slave side, a catheter manipulator delivers the patient catheter according to position commands received from the master haptic interface. The contact forces between the patient catheter and the blood vessel system are measured by the force sensor unit of the catheter manipulator. Four levels of haptic force are provided to make the operator aware of the resistance encountered by the patient catheter during the insertion procedure. The catheter manipulator was evaluated for positioning precision, and the time lag from sensed motion to replicated motion was tested. To verify the efficacy of the proposed haptic feedback method, in vitro evaluation experiments were carried out. The results demonstrate that the proposed system enables a decrease in the contact forces between the catheter and the vasculature.
Somato-Motor Haptic Processing in Posterior Inner Perisylvian Region (SII/pIC) of the Macaque Monkey
Ishida, Hiroaki; Fornia, Luca; Grandi, Laura Clara; Umiltà, Maria Alessandra; Gallese, Vittorio
2013-01-01
The posterior inner perisylvian region including the secondary somatosensory cortex (area SII) and the adjacent region of posterior insular cortex (pIC) has been implicated in haptic processing by integrating somato-motor information during hand-manipulation, both in humans and in non-human primates. However, motor-related properties during hand-manipulation are still largely unknown. To investigate motor-related activity in the hand region of SII/pIC, two macaque monkeys were trained to perform a hand-manipulation task, requiring 3 different grip types (precision grip, finger exploration, side grip) both in light and in dark conditions. Our results showed that 70% (n = 33/48) of task related neurons within SII/pIC were only activated during monkeys’ active hand-manipulation. Of those 33 neurons, 15 (45%) began to discharge before hand-target contact, while the remaining neurons were tonically active after contact. Thirty percent (n = 15/48) of studied neurons responded to both passive somatosensory stimulation and to the motor task. A consistent percentage of task-related neurons in SII/pIC was selectively activated during finger exploration (FE) and precision grasping (PG) execution, suggesting they play a pivotal role in controlling skilled finger movements. Furthermore, hand-manipulation-related neurons also responded when visual feedback was absent in the dark. Altogether, our results suggest that somato-motor neurons in SII/pIC likely contribute to haptic processing from the initial to the final phase of grasping and object manipulation. Such motor-related activity could also provide the somato-motor binding principle enabling the translation of diachronic somatosensory inputs into a coherent image of the explored object. PMID:23936121
Augmented versus Virtual Reality Laparoscopic Simulation: What Is the Difference?
Botden, Sanne M.B.I.; Buzink, Sonja N.; Schijven, Marlies P.
2007-01-01
Background Virtual reality (VR) is an emerging new modality for laparoscopic skills training; however, most simulators lack realistic haptic feedback. Augmented reality (AR) is a new laparoscopic simulation system offering a combination of physical objects and VR simulation. Laparoscopic instruments are used within a hybrid mannequin on tissue or objects while using video tracking. This study was designed to assess the difference in realism, haptic feedback, and didactic value between AR and VR laparoscopic simulation. Methods The ProMIS AR and LapSim VR simulators were used in this study. The participants performed a basic skills task and a suturing task on both simulators, after which they filled out a questionnaire about their demographics and their opinion of both simulators scored on a 5-point Likert scale. The participants were allotted to 3 groups depending on their experience: experts, intermediates and novices. Significant differences were calculated with the paired t-test. Results There was general consensus in all groups that the ProMIS AR laparoscopic simulator is more realistic than the LapSim VR laparoscopic simulator in both the basic skills task (mean 4.22 resp. 2.18, P < 0.000) as well as the suturing task (mean 4.15 resp. 1.85, P < 0.000). The ProMIS is regarded as having better haptic feedback (mean 3.92 resp. 1.92, P < 0.000) and as being more useful for training surgical residents (mean 4.51 resp. 2.94, P < 0.000). Conclusions In comparison with the VR simulator, the AR laparoscopic simulator was regarded by all participants as a better simulator for laparoscopic skills training on all tested features. PMID:17361356
Investigations into haptic space and haptic perception of shape for active touch
NASA Astrophysics Data System (ADS)
Sanders, A. F. J.
2008-12-01
This thesis presents a number of psychophysical investigations into haptic space and haptic perception of shape. Haptic perception is understood to include the two subsystems of the cutaneous sense and kinesthesis. Chapter 2 provides an extensive quantitative study into haptic perception of curvature. I investigated bimanual curvature discrimination of cylindrically curved, hand-sized surfaces. I found that discrimination thresholds were in the same range as unimanual thresholds reported in previous studies. Moreover, the distance between the surfaces or the position of the setup with respect to the observer had no effect on thresholds. Finally, I found idiosyncratic biases: A number of observers judged two surfaces that had different radii as equally curved. Biases were of the same order of magnitude as thresholds. In Chapter 3, I investigated haptic space. Here, haptic space is understood to be (1) the set of observer’s judgments of spatial relations in physical space, and (2) a set of constraints by which these judgments are internally consistent. I asked blindfolded observers to construct straight lines in a number of different tasks. I show that the shape of the haptically straight line depends on the task used to produce it. I therefore conclude that there is no unique definition of the haptically straight line and that doubts are cast on the usefulness of the concept of haptic space. In Chapter 4, I present a new experiment into haptic length perception. I show that when observers trace curved pathways with their index finger and judge distance traversed, their distance estimates depend on the geometry of the paths: Lengths of convex, cylindrically curved pathways were overestimated and lengths of concave pathways were underestimated. 
In addition, I show that a kinematic mechanism must underlie this interaction: (1) the geometry of the path traced by the finger affects movement speed and consequently movement time, and (2) movement time is taken as a measure of traversed length. The study presented in Chapter 5 addresses the question of how kinematic properties of exploratory movements affect perceived shape. I identify a kinematic invariant for the case of a single finger moving across cylindrically curved strips under conditions of slip. I found that the rotation angle of the finger increased linearly with the curvature of the stimulus. In addition, I show that observers took rotation angle as their primary measure of perceived curvature: Observers rotated their finger less on a concave curvature by a constant amount, and consequently, they overestimated the radius of the concave strips compared to the convex ones. Finally, in Chapter 6, I investigated the haptic filled-space illusion for dynamic touch: Observers move their fingertip across an unfilled extent or an extent filled with intermediate stimulations. Previous researchers have reported lengths of filled extents to be overestimated, but the parameters affecting the strength of the illusion are still largely unknown. Factors investigated in this chapter include end point effects, filler density and overall average movement speed.
Learning Multisensory Representations
2016-05-23
public release. Erdogan, G., Yildirim, I., & Jacobs, R. A. (2014). Transfer of object shape knowledge across visual and haptic modalities. Proceedings... (2014). The adaptive nature of visual working memory. Current Directions in Psychological Science, 23, 164-170. Erdogan, G., Yildirim, I... sequence category knowledge: A probabilistic language of thought approach. Psychonomic Bulletin and Review, 22, 673-686. Erdogan, G., Chen, Q., Garcea, F...
Haptics in Education: Exploring an Untapped Sensory Modality
ERIC Educational Resources Information Center
Minogue, James; Jones, M. Gail
2006-01-01
As human beings, we can interact with our environment through the sense of touch, which helps us to build an understanding of objects and events. The implications of touch for cognition are recognized by many educators who advocate the use of "hands-on" instruction. But is it possible to know something more completely by touching it? Does touch…
Haptic Glove Technology: Skill Development through Video Game Play
ERIC Educational Resources Information Center
Bargerhuff, Mary Ellen; Cowan, Heidi; Oliveira, Francisco; Quek, Francis; Fang, Bing
2010-01-01
This article introduces a recently developed haptic glove system and describes how the participants used a video game that was purposely designed to train them in skills that are needed for the efficient use of the haptic glove. Assessed skills included speed, efficiency, embodied skill, and engagement. The findings and implications for future…
Effect of Auditory Interference on Memory of Haptic Perceptions.
ERIC Educational Resources Information Center
Anater, Paul F.
1980-01-01
The effect of auditory interference on the processing of haptic information by 61 visually impaired students (8 to 20 years old) was the focus of the research described in this article. It was assumed that as the auditory interference approximated the verbalized activity of the haptic task, accuracy of recall would decline. (Author)
Investigating Students' Ideas about Buoyancy and the Influence of Haptic Feedback
ERIC Educational Resources Information Center
Minogue, James; Borland, David
2016-01-01
While haptics (simulated touch) represents a potential breakthrough technology for science teaching and learning, there is relatively little research into its differential impact in the context of teaching and learning. This paper describes the testing of a haptically enhanced simulation (HES) for learning about buoyancy. Despite a lifetime of…
Superior haptic-to-visual shape matching in autism spectrum disorders.
Nakano, Tamami; Kato, Nobumasa; Kitazawa, Shigeru
2012-04-01
A weak central coherence theory in autism spectrum disorder (ASD) proposes that a cognitive bias toward local processing in ASD derives from a weakness in integrating local elements into a coherent whole. Using this theory, we hypothesized that shape perception through active touch, which requires sequential integration of sensorimotor traces of exploratory finger movements into a shape representation, would be impaired in ASD. Contrary to our expectation, adults with ASD showed superior performance in a haptic-to-visual delayed shape-matching task compared to adults without ASD. Accuracy in discriminating haptic lengths or haptic orientations, which lies within the somatosensory modality, did not differ between adults with ASD and adults without ASD. Moreover, this superior ability in inter-modal haptic-to-visual shape matching was not explained by the score in a unimodal visuospatial rotation task. These results suggest that individuals with ASD are not impaired in integrating sensorimotor traces into a global visual shape and that their multimodal shape representations and haptic-to-visual information transfer are more accurate than those of individuals without ASD. Copyright © 2012 Elsevier Ltd. All rights reserved.
Haptic Recreation of Elbow Spasticity
Kim, Jonghyun; Damiano, Diane L.
2013-01-01
The aim of this paper is to develop a haptic device capable of presenting standardized recreation of elbow spasticity. Using the haptic device, clinicians will be able to repeatedly practice the assessment of spasticity without requiring patient involvement, and these practice opportunities will help improve accuracy and reliability of the assessment itself. Haptic elbow spasticity simulator (HESS) was designed and prototyped according to mechanical requirements to recreate the feel of elbow spasticity. Based on the data collected from subjects with elbow spasticity, a mathematical model representing elbow spasticity is proposed. As an attempt to differentiate the feel of each score in Modified Ashworth Scale (MAS), parameters of the model were obtained respectively for three different MAS scores 1, 1+, and 2. The implemented haptic recreation was evaluated by experienced clinicians who were asked to give MAS scores by manipulating the haptic device. The clinicians who participated in the study were blinded to each other’s scores and to the given models. They distinguished the three models and the MAS scores given to the recreated models matched 100% with the original MAS scores from the patients. PMID:22275660
Control of a haptic gear shifting assistance device utilizing a magnetorheological clutch
NASA Astrophysics Data System (ADS)
Han, Young-Min; Choi, Seung-Bok
2014-10-01
This paper proposes a haptic clutch-driven gear shifting assistance device that can help the driver shift the gear of a transmission system. To achieve this goal, a magnetorheological (MR) fluid-based clutch is devised and integrated with the accelerator pedal so that it can accommodate the pedal's rotary motion. The proposed MR clutch is then manufactured, and its transmission torque is experimentally evaluated as a function of magnetic field intensity. The manufactured MR clutch is integrated with the accelerator pedal to transmit a haptic cue signal to the driver. The central control issue is to cue the driver to shift the gear via the haptic force. Therefore, a gear-shifting decision algorithm is constructed by considering the vehicle engine speed together with engine combustion dynamics, vehicle dynamics, and driving resistance. The algorithm is then integrated with a compensation strategy for attaining the desired haptic force. In this work, the compensator is also developed and implemented through a discrete version of the inverse hysteretic model. The control performances, such as the haptic force tracking responses and fuel consumption, are experimentally evaluated.
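The abstract describes the gear-shifting decision algorithm only at a high level. A minimal sketch of such an engine-speed-based rule is given below; the RPM thresholds and the function name are purely illustrative assumptions, not values from the paper, and the paper's dynamics and driving-resistance terms are omitted.

```python
# Hypothetical gear-shift cue rule based on engine speed alone.
# Thresholds are illustrative; the actual algorithm also considers
# combustion dynamics, vehicle dynamics, and driving resistance.

def shift_cue(engine_rpm, upshift_rpm=2500, downshift_rpm=1200):
    """Return +1 to cue an upshift, -1 for a downshift, 0 for no cue."""
    if engine_rpm > upshift_rpm:
        return +1
    if engine_rpm < downshift_rpm:
        return -1
    return 0
```

In the device, a nonzero cue would be rendered to the driver as a haptic force on the pedal rather than returned to software.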
Torque Measurement of 3-DOF Haptic Master Operated by Controllable Electrorheological Fluid
NASA Astrophysics Data System (ADS)
Oh, Jong-Seok; Choi, Seung-Bok; Lee, Yang-Sub
2015-02-01
This work presents a torque measurement method for a 3-degree-of-freedom (3-DOF) haptic master featuring controllable electrorheological (ER) fluid. To convey the feel of an organ to the surgeon, the ER haptic master, which can generate the repulsive torque of an organ, is used as a remote controller for a surgical robot. Since accurate representation of organ feel is essential for the success of robot-assisted surgery, it is indispensable to develop a proper torque measurement method for the 3-DOF ER haptic master. After describing the structural configuration of the haptic master, the torque models of the ER spherical joint are mathematically derived based on the Bingham model of ER fluid. A new type of haptic device with pitching, rolling, and yawing motions is then designed and manufactured using a spherical joint mechanism. Subsequently, the field-dependent parameters of the Bingham model are identified, and the repulsive torque generated under the applied electric field is measured. In addition, to verify the effectiveness of the proposed torque model, simulated and measured torques are compared.
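In a Bingham model, the fluid stress is a field-dependent yield stress plus a viscous term, which maps to a joint torque through the joint geometry. The sketch below illustrates that structure only; the power-law yield-stress fit and all coefficients (alpha, beta, eta, and the geometric gains) are assumptions, not the parameters identified in the paper.

```python
# Minimal Bingham-style torque estimate for an ER joint (illustrative).
# tau_y(E) = alpha * E**beta is a common field-dependent yield-stress fit;
# all numeric parameters here are placeholders, not identified values.

def bingham_torque(E, omega, alpha=0.5, beta=1.2, eta=0.1,
                   k_field=1e-3, k_visc=1e-4):
    """Torque [N*m] from field strength E [kV/mm] and joint speed omega [rad/s]."""
    tau_y = alpha * E ** beta            # field-dependent yield stress
    return k_field * tau_y + k_visc * eta * omega  # yield + viscous terms
```

With zero field and zero motion the torque is zero; increasing either the field or the joint speed increases the torque, which is the qualitative behavior the identified model must reproduce.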
Dynamics modeling for parallel haptic interfaces with force sensing and control.
Bernstein, Nicholas; Lawrence, Dale; Pao, Lucy
2013-01-01
Closed-loop force control can be used on haptic interfaces (HIs) to mitigate the effects of mechanism dynamics. A single multidimensional force-torque sensor is often employed to measure the interaction force between the haptic device and the user's hand. The parallel haptic interface at the University of Colorado (CU) instead employs smaller 1D force sensors oriented along each of the five actuating rods to build up a 5D force vector. This paper shows that a particular manipulandum/hand partition in the system dynamics is induced by the placement and type of force sensing, and discusses the implications on force and impedance control for parallel haptic interfaces. The details of a "squaring down" process are also discussed, showing how to obtain reduced degree-of-freedom models from the general six degree-of-freedom dynamics formulation.
Development of haptic system for surgical robot
NASA Astrophysics Data System (ADS)
Gang, Han Gyeol; Park, Jiong Min; Choi, Seung-Bok; Sohn, Jung Woo
2017-04-01
In this paper, a new type of haptic system for surgical robot application is proposed and its performance is evaluated experimentally. The proposed haptic system consists of an effective master device and a precision slave robot. The master device has 3-DOF rotational motion matching that of the human wrist. It has a lightweight structure with a gyro sensor for position measurement and three small MR brakes for repulsive torque generation. The slave robot achieves 3-DOF rotational motion using servomotors and a five-bar linkage, and a torque sensor measures resistive torque. It has been experimentally demonstrated that the proposed haptic system performs well in tracking control of desired position and repulsive torque. It can be concluded that the proposed haptic system can be effectively applied to surgical robot systems in the field.
fMRI-Compatible Electromagnetic Haptic Interface.
Riener, R; Villgrattner, T; Kleiser, R; Nef, T; Kollias, S
2005-01-01
A new haptic interface device is suggested, which can be used for functional magnetic resonance imaging (fMRI) studies. The basic components of this 1-DOF haptic device are two coils that produce a Lorentz force induced by the large static magnetic field of the MR scanner. An MR-compatible optical angular encoder and an optical force sensor enable the implementation of different control architectures for haptic interactions. The challenge was to provide a large torque without degrading image quality through the currents applied in the device. The haptic device was tested in a 3 T MR scanner. With a current of up to 1 A and a distance of 1 m from the focal point of the MR scanner, it was possible to generate torques of up to 4 Nm. Within these boundaries image quality was not affected.
Barmpoutis, Angelos; Alzate, Jose; Beekhuizen, Samantha; Delgado, Horacio; Donaldson, Preston; Hall, Andrew; Lago, Charlie; Vidal, Kevin; Fox, Emily J
2016-01-01
In this paper a prototype system is presented for home-based physical tele-therapy using a wearable device for haptic feedback. The haptic feedback is generated as a sequence of vibratory cues from 8 vibrator motors equally spaced along an elastic wearable band. The motors guide the patients' movement as they perform a prescribed exercise routine in a way that replaces the physical therapists' haptic guidance in an unsupervised or remotely supervised home-based therapy session. A pilot study of 25 human subjects was performed that focused on: a) testing the capability of the system to guide the users in arbitrary motion paths in the space and b) comparing the motion of the users during typical physical therapy exercises with and without haptic-based guidance. The results demonstrate the efficacy of the proposed system.
Haptic Cues for Balance: Use of a Cane Provides Immediate Body Stabilization
Sozzi, Stefania; Crisafulli, Oscar; Schieppati, Marco
2017-01-01
Haptic cues are important for balance. Knowledge of the temporal features of their effect may be crucial for the design of neural prostheses. Touching a stable surface with a fingertip reduces body sway in standing subjects with eyes closed (EC), and removal of the haptic cue reinstates a large sway pattern. Changes in sway occur rapidly on changing haptic conditions. Here, we describe the effects and time-course of stabilization produced by a haptic cue derived from a walking cane. We intended to confirm that cane use reduces body sway, to evaluate the effect of vision on stabilization by a cane, and to estimate the delay of the changes in body sway after addition and withdrawal of haptic input. Seventeen healthy young subjects stood in tandem position on a force platform, with eyes closed or open (EO). They gently lowered the cane onto and lifted it from a second force platform. Sixty trials per direction of haptic shift (Touch → NoTouch, T-NT; NoTouch → Touch, NT-T) and visual condition (EC-EO) were acquired. Traces of Center of foot Pressure (CoP) and the force exerted by the cane were filtered, rectified, and averaged. The position in space of a reflective marker on the cane tip was also acquired by an optoelectronic device. Cross-correlation (CC) analysis was performed between traces of cane tip and CoP displacement. Latencies of changes in CoP oscillation in the frontal plane under EC following the T-NT and NT-T haptic shifts were statistically estimated. The CoP oscillations were larger in EC than EO under both T and NT (p < 0.001) and larger during NT than T conditions (p < 0.001). The haptic-induced effect under EC (Romberg quotient NT/T ~ 1.2) was weaker than that of vision under the NT condition (EC/EO ~ 1.5) (p < 0.001). With EO, the cane had little effect. Cane displacement lagged CoP displacement under both EC and EO.
Latencies to changes in CoP oscillations were longer after addition (NT-T, about 1.6 s) than withdrawal (T-NT, about 0.9 s) of haptic input (p < 0.001). These latencies were similar to those occurring on fingertip touch, as previously shown. Overall, data speak in favor of substantial equivalence of the haptic information derived from both “direct” fingertip contact and “indirect” contact with the floor mediated by the cane. Cane, finger and visual inputs would be similarly integrated in the same neural centers for balance control. Haptic input from a walking aid and its processing time should be considered when designing prostheses for locomotion. PMID:29311785
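The cross-correlation analysis between the cane-tip and CoP traces can be sketched as below. This is a simplified stand-in (the study's filtering, rectification, and trial averaging are omitted, and the function name is illustrative): the lag is taken at the peak of the full normalized cross-correlation of two equal-length traces.

```python
import numpy as np

def lag_by_xcorr(x, y, dt):
    """Lag (in seconds) of trace y relative to trace x, from the peak of
    the full cross-correlation of the z-scored signals. A positive result
    means y lags behind x. Assumes equal-length 1D arrays sampled at dt."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    cc = np.correlate(y, x, mode="full")          # all lags, -(N-1)..(N-1)
    lags = np.arange(-len(x) + 1, len(x))
    return lags[np.argmax(cc)] * dt
```

For example, two pulse trains offset by 10 samples at dt = 0.01 s yield a lag of 0.1 s, with the sign indicating which trace leads.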
Perceptual Grouping in Haptic Search: The Influence of Proximity, Similarity, and Good Continuation
ERIC Educational Resources Information Center
Overvliet, Krista E.; Krampe, Ralf Th.; Wagemans, Johan
2012-01-01
We conducted a haptic search experiment to investigate the influence of the Gestalt principles of proximity, similarity, and good continuation. We expected faster search when the distractors could be grouped. We chose edges at different orientations as stimuli because they are processed similarly in the haptic and visual modality. We therefore…
Immediate Memory for Haptically-Examined Braille Symbols by Blind and Sighted Subjects.
ERIC Educational Resources Information Center
Newman, Slater E.; And Others
The paper reports on two experiments in Braille learning which compared blind and sighted subjects on the immediate recall of haptically-examined Braille symbols. In the first study, sighted subjects (N=64) haptically examined each of a set of Braille symbols with their preferred or nonpreferred hand and immediately recalled the symbol by drawing…
Haptic Cues Used for Outdoor Wayfinding by Individuals with Visual Impairments
ERIC Educational Resources Information Center
Koutsoklenis, Athanasios; Papadopoulos, Konstantinos
2014-01-01
Introduction: The study presented here examines which haptic cues individuals with visual impairments use more frequently and determines which of these cues are deemed by these individuals to be the most important for way-finding in urban environments. It also investigates the ways in which these haptic cues are used by individuals with visual…
Physical Student-Robot Interaction with the ETHZ Haptic Paddle
ERIC Educational Resources Information Center
Gassert, R.; Metzger, J.; Leuenberger, K.; Popp, W. L.; Tucker, M. R.; Vigaru, B.; Zimmermann, R.; Lambercy, O.
2013-01-01
Haptic paddles--low-cost one-degree-of-freedom force feedback devices--have been used with great success at several universities throughout the US to teach the basic concepts of dynamic systems and physical human-robot interaction (pHRI) to students. The ETHZ haptic paddle was developed for a new pHRI course offered in the undergraduate…
Eye-in-Hand Manipulation for Remote Handling: Experimental Setup
NASA Astrophysics Data System (ADS)
Niu, Longchuan; Suominen, Olli; Aref, Mohammad M.; Mattila, Jouni; Ruiz, Emilio; Esque, Salvador
2018-03-01
A prototype for eye-in-hand manipulation in the context of remote handling in the International Thermonuclear Experimental Reactor (ITER)1 is presented in this paper. The setup consists of an industrial robot manipulator with a modified open control architecture and equipped with a pair of stereoscopic cameras, a force/torque sensor, and pneumatic tools. It is controlled through a haptic device in a mock-up environment. The industrial robot controller has been replaced by a single industrial PC running Xenomai that has a real-time connection to both the robot controller and another Linux PC running as the controller for the haptic device. The new remote handling control environment enables further development of advanced control schemes for autonomous and semi-autonomous manipulation tasks. This setup benefits from a stereovision system for accurate tracking of the target objects with irregular shapes. The overall environmental setup successfully demonstrates the required robustness and precision that remote handling tasks need.
Smart glove: hand master using magnetorheological fluid actuators
NASA Astrophysics Data System (ADS)
Nam, Y. J.; Park, M. K.; Yamane, R.
2007-12-01
In this study, a hand master using five miniature magnetorheological (MR) actuators, called 'the smart glove', is introduced. This hand master is intended to display haptic feedback to the fingertips of a human user interacting with virtual objects in a virtual environment. For the smart glove, two effective approaches are proposed: (i) by using the MR actuator, which can be considered a passive actuator, the smart glove is made simple in structure, high in power, low in inertia, safe in interface, and stable in haptic feedback; and (ii) with a novel flexible link mechanism designed for position-force transmission between the fingertips and the actuators, the number of actuators and the weight of the smart glove can be reduced. These features improve the manipulability and portability of the smart glove. The feasibility of the constructed smart glove is verified through basic performance evaluation.
Force, Torque and Stiffness: Interactions in Perceptual Discrimination
Wu, Bing; Klatzky, Roberta L.; Hollis, Ralph L.
2011-01-01
Three experiments investigated whether force and torque cues interact in haptic discrimination of force, torque and stiffness, and if so, how. The statistical relation between force and torque was manipulated across four experimental conditions: Either one type of cue varied while the other was constant, or both varied so as to be positively correlated, negatively correlated, or uncorrelated. Experiment 1 showed that the subjects’ ability to discriminate force was improved by positively correlated torque but impaired with uncorrelated torque, as compared to the constant torque condition. Corresponding effects were found in Experiment 2 for the influence of force on torque discrimination. These findings indicate that force and torque are integrated in perception, rather than being processed as separate dimensions. A further experiment demonstrated facilitation of stiffness discrimination by correlated force and torque, whether the correlation was positive or negative. The findings suggest new means of augmenting haptic feedback to facilitate perception of the properties of soft objects. PMID:21359137
From conditioning shampoo to nanomechanics and haptics of human hair.
Wood, Claudia; Sugiharto, Albert Budiman; Max, Eva; Fery, Andreas
2011-01-01
Shampoo treatment and hair conditioning have a direct impact on our wellbeing via properties like combability and the haptic perception of hair. Therefore, systematic investigations leading to quality improvement of hair care products are of major interest. The aim of our work is a better understanding of complex testing and its correlation with quantitative parameters. The motivation for developing physical testing methods for hair feel is that an ingredient supplier like BASF can identify new, not yet toxicologically approved chemistries for hair cosmetics only if an in-vitro method exists. In this work, the effects of different shampoo treatments with conditioning polymers are investigated. The employed physical test methods, dry friction measurement and AFM, observe friction phenomena on a macroscopic as well as on a nanoscale directly on hair. They are an approach to complement sensory evaluation with an objective in-vitro method.
Samosky, Joseph T; Allen, Pete; Boronyak, Steve; Branstetter, Barton; Hein, Steven; Juhas, Mark; Nelson, Douglas A; Orebaugh, Steven; Pinto, Rohan; Smelko, Adam; Thompson, Mitch; Weaver, Robert A
2011-01-01
We are developing a simulator of peripheral nerve block utilizing a mixed-reality approach: the combination of a physical model, an MRI-derived virtual model, mechatronics and spatial tracking. Our design uses tangible (physical) interfaces to simulate surface anatomy, haptic feedback during needle insertion, mechatronic display of muscle twitch corresponding to the specific nerve stimulated, and visual and haptic feedback for the injection syringe. The twitch response is calculated incorporating the sensed output of a real neurostimulator. The virtual model is isomorphic with the physical model and is derived from segmented MRI data. This model provides the subsurface anatomy and, combined with electromagnetic tracking of a sham ultrasound probe and a standard nerve block needle, supports simulated ultrasound display and measurement of needle location and proximity to nerves and vessels. The needle tracking and virtual model also support objective performance metrics of needle targeting technique.
Perceptual grouping determines haptic contextual modulation.
Overvliet, K E; Sayim, B
2016-09-01
Since the early phenomenological demonstrations of Gestalt principles, one of the major challenges of Gestalt psychology has been to quantify these principles. Here, we show that contextual modulation, i.e. the influence of context on target perception, can be used as a tool to quantify perceptual grouping in the haptic domain, similar to the visual domain. We investigated the influence of target-flanker grouping on performance in haptic vernier offset discrimination. We hypothesized that when, despite the apparent differences between vision and haptics, similar grouping principles are operational, a similar pattern of flanker interference would be observed in the haptic as in the visual domain. Participants discriminated the offset of a haptic vernier. The vernier was flanked by different flanker configurations: no flankers, single flanking lines, 10 flanking lines, rectangles and single perpendicular lines, varying the degree to which the vernier grouped with the flankers. Additionally, we used two different flanker widths (same width as and narrower than the target), again to vary target-flanker grouping. Our results show a clear effect of flankers: performance was much better when the vernier was presented alone compared to when it was presented with flankers. In the majority of flanker configurations, grouping between the target and the flankers determined the strength of interference, similar to the visual domain. However, in the same width rectangular flanker condition we found aberrant results. We discuss the results of our study in light of similarities and differences between vision and haptics and the interaction between different grouping principles. We conclude that in haptics, similar organization principles apply as in visual perception and argue that grouping and Gestalt are key organization principles not only of vision, but of the perceptual system in general. Copyright © 2015 Elsevier Ltd. All rights reserved.
Haptic biofeedback for improving compliance with lower-extremity partial weight bearing.
Fu, Michael C; DeLuke, Levi; Buerba, Rafael A; Fan, Richard E; Zheng, Ying Jean; Leslie, Michael P; Baumgaertner, Michael R; Grauer, Jonathan N
2014-11-01
After lower-extremity orthopedic trauma and surgery, patients are often advised to restrict weight bearing on the affected limb. Conventional training methods are not effective at enabling patients to comply with recommendations for partial weight bearing. The current study assessed a novel method of using real-time haptic (vibratory/vibrotactile) biofeedback to improve compliance with instructions for partial weight bearing. Thirty healthy, asymptomatic participants were randomized into 1 of 3 groups: verbal instruction, bathroom scale training, and haptic biofeedback. Participants were instructed to restrict lower-extremity weight bearing in a walking boot with crutches to 25 lb, with an acceptable range of 15 to 35 lb. A custom weight bearing sensor and biofeedback system was attached to all participants, but only those in the haptic biofeedback group were given a vibrotactile signal if they exceeded the acceptable range. Weight bearing in all groups was measured with a separate validated commercial system. The verbal instruction group bore an average of 60.3±30.5 lb (mean±standard deviation). The bathroom scale group averaged 43.8±17.2 lb, whereas the haptic biofeedback group averaged 22.4±9.1 lb (P<.05). As a percentage of body weight, the verbal instruction group averaged 40.2±19.3%, the bathroom scale group averaged 32.5±16.9%, and the haptic biofeedback group averaged 14.5±6.3% (P<.05). In this initial evaluation of the use of haptic biofeedback to improve compliance with lower-extremity partial weight bearing, haptic biofeedback was superior to conventional physical therapy methods. Further studies in patients with clinical orthopedic trauma are warranted. Copyright 2014, SLACK Incorporated.
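The biofeedback rule itself is a simple window check: cue the patient whenever the sensed limb load leaves the prescribed range. A minimal sketch follows; the 15-35 lb window and 25 lb target come from the study, while the function and parameter names are illustrative.

```python
# Window check for partial weight-bearing biofeedback.
# The 15-35 lb acceptable range is from the study; names are illustrative.

def should_vibrate(load_lb, low=15.0, high=35.0):
    """Return True if the vibrotactile cue should fire for this load sample."""
    return load_lb < low or load_lb > high
```

In the actual system this check would run continuously on the sensor stream and drive the vibrotactile actuator only for the biofeedback group.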
Enhanced operator perception through 3D vision and haptic feedback
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Light, Kenneth; Bodenhamer, Andrew; Bosscher, Paul; Wilkinson, Loren
2012-06-01
Polaris Sensor Technologies (PST) has developed a stereo vision upgrade kit for TALON® robot systems comprising a replacement gripper camera and a replacement mast zoom camera on the robot, and a replacement display in the Operator Control Unit (OCU). Harris Corporation has developed a haptic manipulation upgrade for TALON® robot systems comprising a replacement arm and gripper and an OCU that provides haptic (force) feedback. PST and Harris have recently collaborated to integrate the 3D vision system with the haptic manipulation system. Multiple studies at Fort Leonard Wood, Missouri, have shown that 3D vision and haptics provide more intuitive perception of complicated scenery and improved robot arm control, allowing for improved mission performance and the potential for reduced time on target. This paper discusses the potential benefits of these enhancements to robotic systems used for the domestic homeland security mission.
[Haptic tracking control for minimally invasive robotic surgery].
Xu, Zhaohong; Song, Chengli; Wu, Wenwu
2012-06-01
Haptic feedback plays a significant role in minimally invasive robotic surgery (MIRS). A major deficiency of current MIRS, including the commercially available da Vinci surgical system, is the lack of haptic perception for the surgeon. In this paper, a dynamics model of a haptic robot is established based on the Newton-Euler method. Because solving the exact dynamics takes considerable time, we used a digital PID algorithm informed by the robot dynamics to ensure real-time bilateral control, improving tracking precision and real-time control efficiency. To validate the proposed method, an experimental system was developed in which two Novint Falcon haptic devices act as a master-slave system. Simulations and experiments showed that the proposed methods can provide instrument force feedback to the operator, and that the bilateral control strategy is an effective approach to master-slave MIRS. The proposed methods could also be applied to tele-robotic systems.
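A digital PID loop of the kind the abstract mentions, used per axis to make the slave track the master, can be sketched as follows. This is a generic discrete PID, not the authors' tuned controller; the gains and sample time are assumptions.

```python
# Generic discrete PID controller (per-axis), as used for bilateral
# master-slave position tracking. Gains and sample time are illustrative.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        """One control step: return the actuation command for this sample."""
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

Run once per sample period with the master position as setpoint and the slave position as measurement; the output drives the slave actuator, while the measured interaction force is reflected back to the master.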
Human-computer interface including haptically controlled interactions
Anderson, Thomas G.
2005-10-11
The present invention provides a method of human-computer interfacing in which haptic feedback controls interface interactions such as scrolling or zooming within an application. Haptic feedback in the present method allows the user more intuitive control of the interface interactions, and allows the user's visual focus to remain on the application. The method comprises providing a control domain within which the user can control interactions. For example, a haptic boundary can be provided corresponding to scrollable or scalable portions of the application domain. The user can position a cursor near such a boundary, feeling its presence haptically (reducing the requirement for visual attention to control scrolling of the display). The user can then apply force relative to the boundary, causing the interface to scroll the domain. The rate of scrolling can be related to the magnitude of the applied force, providing the user with additional intuitive, non-visual control of scrolling.
Moll, F H
2015-02-01
The use of artifacts and objects from scientific medical collections and museums for academic teaching purposes is one of the main qualifying tasks of these institutions. In recent years, this aspect of scientific collections has again come into focus within academia. The collections offer a unique opportunity for visual and haptic forms of teaching in many fields. Given the potential of scientific collections, educators in all branches of academic learning should be familiar with handling objects for such purposes.
Fisher, J Brian; Porter, Susan M
2002-01-01
This paper describes an application of a display approach which uses chromakey techniques to composite real and computer-generated images allowing a user to see his hands and medical instruments collocated with the display of virtual objects during a medical training simulation. Haptic feedback is provided through the use of a PHANTOM force feedback device in addition to tactile augmentation, which allows the user to touch virtual objects by introducing corresponding real objects in the workspace. A simplified catheter introducer insertion simulation was developed to demonstrate the capabilities of this approach.
A design of hardware haptic interface for gastrointestinal endoscopy simulation.
Gu, Yunjin; Lee, Doo Yong
2011-01-01
Gastrointestinal endoscopy simulations have been developed to train endoscopic procedures, which require hundreds of practice sessions to achieve competence. Even though realistic haptic feedback is important for providing a realistic sensation to the user, most previous simulations, including commercial ones, have mainly focused on providing realistic visual feedback. In this paper, we propose a novel design for a portable haptic interface, providing 2-DOF force feedback, for gastrointestinal endoscopy simulation. The haptic interface consists of translational and rotational force feedback mechanisms, which are completely decoupled, and a gripping mechanism for controlling the connection between the endoscope and the force feedback mechanism.
Haptic Guidance Improves the Visuo-Manual Tracking of Trajectories
Bluteau, Jérémy; Coquillart, Sabine; Payan, Yohan; Gentaz, Edouard
2008-01-01
Background: Learning to perform new movements is usually achieved by following visual demonstrations. Haptic guidance by a force feedback device is a recent and original technology which provides additional proprioceptive cues during visuo-motor learning tasks. The effects of two types of haptic guidance, control in position (HGP) or in force (HGF), on visuo-manual tracking ("following") of trajectories are still under debate. Methodology/Principal Findings: Three training techniques of haptic guidance (HGP, HGF, or a control condition, NHG, without haptic guidance) were evaluated in two experiments. Movements produced by adults were assessed in terms of shape (dynamic time warping) and kinematic criteria (number of velocity peaks and mean velocity) before and after the training sessions. Trajectories consisted of two Arabic and two Japanese-inspired letters in Experiment 1 and ellipses in Experiment 2. We observed that the use of HGF globally improves the fluency of the visuo-manual tracking of trajectories, while no significant improvement was found for HGP or NHG. Conclusion/Significance: These results show that the addition of haptic information, probably encoded in force coordinates, plays a crucial role in the visuo-manual tracking of new trajectories. PMID:18335049
The effect of perceptual grouping on haptic numerosity perception.
Verlaers, K; Wagemans, J; Overvliet, K E
2015-01-01
We used a haptic enumeration task to investigate whether enumeration can be facilitated by perceptual grouping in the haptic modality. Eight participants were asked to count tangible dots as quickly and accurately as possible, while moving their finger pad over a tactile display. In Experiment 1, we manipulated the number and organization of the dots, while keeping the total exploration area constant. The dots were either evenly distributed on a horizontal line (baseline condition) or organized into groups based on either proximity (dots placed in closer proximity to each other) or configural cues (dots placed in a geometric configuration). In Experiment 2, we varied the distance between the subsets of dots. We hypothesized that when subsets of dots can be grouped together, the enumeration time will be shorter and accuracy will be higher than in the baseline condition. The results of both experiments showed faster enumeration for the configural condition than for the baseline condition, indicating that configural grouping also facilitates haptic enumeration. In Experiment 2, faster enumeration was also observed for the proximity condition than for the baseline condition. Thus, perceptual grouping speeds up haptic enumeration by both configural and proximity cues, suggesting that similar mechanisms underlie perceptual grouping in both visual and haptic enumeration.
Acquisition and Visualization Techniques of Human Motion Using Master-Slave System and Haptograph
NASA Astrophysics Data System (ADS)
Katsura, Seiichiro; Ohishi, Kiyoshi
Artificial acquisition and reproduction of human sensations are basic technologies of communication engineering. For example, auditory information is obtained by a microphone, and a speaker reproduces it by artificial means. Furthermore, a video camera and a television make it possible to transmit visual sensation by broadcasting. By contrast, since tactile or haptic information is subject to Newton's "law of action and reaction" in the real world, a device that acquires, transmits, and reproduces this information has not been established. From this point of view, real-world haptics is the key technology for future haptic communication engineering. This paper proposes a novel acquisition method for haptic information named the "haptograph". The haptograph visualizes haptic information like a photograph. Since temporal and spatial analyses are conducted to represent haptic information as the haptograph, it can be recognized and evaluated intuitively. In this paper, the proposed haptograph is applied to visualization of human motion. It is possible to represent motion characteristics such as an expert's skill or a personal habit. In other words, a personal encyclopedia is attained. Once such a personal encyclopedia is stored in a ubiquitous environment, future human support technology can be developed.
Haptic Stylus and Empirical Studies on Braille, Button, and Texture Display
Kyung, Ki-Uk; Lee, Jun-Young; Park, Junseok
2008-01-01
This paper presents a haptic stylus interface with a built-in compact tactile display module and an impact module as well as empirical studies on Braille, button, and texture display. We describe preliminary evaluations verifying the tactile display's performance indicating that it can satisfactorily represent Braille numbers for both the normal and the blind. In order to prove haptic feedback capability of the stylus, an experiment providing impact feedback mimicking the click of a button has been conducted. Since the developed device is small enough to be attached to a force feedback device, its applicability to combined force and tactile feedback display in a pen-held haptic device is also investigated. The handle of pen-held haptic interface was replaced by the pen-like interface to add tactile feedback capability to the device. Since the system provides combination of force, tactile and impact feedback, three haptic representation methods for texture display have been compared on surface with 3 texture groups which differ in direction, groove width, and shape. In addition, we evaluate its capacity to support touch screen operations by providing tactile sensations when a user rubs against an image displayed on a monitor. PMID:18317520
ERIC Educational Resources Information Center
Lahav, Orly; Schloerb, David W.; Srinivasan, Mandayam A.
2015-01-01
Introduction: The BlindAid, a virtual system developed for orientation and mobility (O&M) training of people who are blind or have low vision, allows interaction with different virtual components (structures and objects) via auditory and haptic feedback. This research examined if and how the BlindAid that was integrated within an O&M…
Low cost heads-up virtual reality (HUVR) with optical tracking and haptic feedback
NASA Astrophysics Data System (ADS)
Margolis, Todd; DeFanti, Thomas A.; Dawe, Greg; Prudhomme, Andrew; Schulze, Jurgen P.; Cutchin, Steve
2011-03-01
Researchers at the University of California, San Diego, have created a new, relatively low-cost augmented reality system that enables users to touch the virtual environment they are immersed in. The Heads-Up Virtual Reality device (HUVR) couples a consumer 3D HD flat-screen TV with a half-silvered mirror to project any graphic image onto the user's hands and into the space surrounding them. With his or her head position optically tracked to generate the correct perspective view, the user maneuvers a force-feedback (haptic) device to interact with the 3D image, literally 'touching' the object's angles and contours as if it were a tangible physical object. HUVR can be used for training and education in structural and mechanical engineering, archaeology, and medicine, as well as other tasks that require hand-eye coordination. One of the most distinctive characteristics of HUVR is that users can place their hands inside the virtual environment without occluding the 3D image. Built using open-source software and consumer-level hardware, HUVR offers users a tactile experience in an immersive environment that is functional, affordable, and scalable.
The Posture of Putting One's Palms Together Modulates Visual Motion Event Perception.
Saito, Godai; Gyoba, Jiro
2018-02-01
We investigated the effect of an observer's hand postures on visual motion perception using the stream/bounce display. When two identical visual objects move across collinear horizontal trajectories toward each other in a two-dimensional display, observers perceive them as either streaming or bouncing. In our previous study, we found that when observers put their palms together just below the coincidence point of the two objects, the percentage of bouncing responses increased, mainly depending on the proprioceptive information from their own hands. However, it remains unclear if the tactile or haptic (force) information produced by the postures mostly influences the stream/bounce perception. We solved this problem by changing the tactile and haptic information on the palms of the hands. Experiment 1 showed that the promotion of bouncing perception was observed only when the posture of directly putting one's palms together was used, while there was no effect when a brick was sandwiched between the participant's palms. Experiment 2 demonstrated that the strength of force used when putting the palms together had no effect on increasing bounce perception. Our findings indicate that the hands-induced bounce effect derives from the tactile information produced by the direct contact between both palms.
Sharing control with haptics: seamless driver support from manual to automatic control.
Mulder, Mark; Abbink, David A; Boer, Erwin R
2012-10-01
Haptic shared control was investigated as a human-machine interface that can intuitively share control between drivers and an automatic controller for curve negotiation. As long as automation systems are not fully reliable, a role remains for the driver to stay vigilant to the system and the environment in order to catch automation errors. Conventional binary switching between supervisory and manual control has many known issues, and haptic shared control is a promising alternative. A total of 42 respondents of varying age and driving experience participated in a driving experiment in a fixed-base simulator, in which curve negotiation behavior during shared control was compared with manual control, as well as with three haptic tunings of an automatic controller without driver intervention. Under the experimental conditions studied, the main benefit of haptic shared control compared with manual control was that less control activity (16% lower steering wheel reversal rate, 15% lower standard deviation of steering wheel angle) was needed to realize improved safety performance (e.g., 11% lower peak lateral error). Full automation removed the need for any human control activity and improved safety performance further (e.g., 35% lower peak lateral error), but put the human in a supervisory position. Haptic shared control kept the driver in the loop, with enhanced performance at reduced control activity, mitigating the known issues that plague full automation. Haptic support for vehicular control ultimately seeks to intuitively combine human intelligence and creativity with the benefits of automation systems.
Design of a 7-DOF slave robot integrated with a magneto-rheological haptic master
NASA Astrophysics Data System (ADS)
Hwang, Yong-Hoon; Cha, Seung-Woo; Kang, Seok-Rae; Choi, Seung-Bok
2017-04-01
In this study, a 7-DOF slave robot integrated with a haptic master is designed and its dynamic motion is controlled. The haptic master is built around a controllable magneto-rheological (MR) clutch and brake and provides the surgeon with a sense of touch using both kinetic and kinesthetic information. Due to the size constraints of the slave robot, wire actuation is adopted, instead of conventional direct-drive motors, to produce the desired motion of the 3-DOF end-effector; the remaining 4-DOF link motions use direct-drive motors. For the total system to work as a haptic device, the haptic master needs to receive information about the repulsive forces applied to the slave robot. Therefore, repulsive forces on the end-effector are sensed by three uniaxial torque transducers inserted in the wire actuating system, while the repulsive forces applied to the link parts are sensed by a 6-axis transducer that measures both forces and torques. Another 6-axis transducer is used to verify the reliability of the force information at the distal end of the slave robot. Lastly, with the MR haptic master integrated, a psychophysical test is conducted in which different operators feel the repulsive force or torque generated by the haptic master, equivalent to the force or torque occurring on the end-effector, to demonstrate the effectiveness of the proposed system.
Campolo, Domenico; Tommasino, Paolo; Gamage, Kumudu; Klein, Julius; Hughes, Charmayne M L; Masia, Lorenzo
2014-09-30
In recent decades, robotic manipulanda have increasingly been employed to investigate the effect of haptic environments on motor learning and rehabilitation. However, implementing complex haptic renderings can be challenging from technological and control perspectives. We propose a novel robot (H-Man) characterized by a mechanical design based on a cabled differential transmission, providing advantages over current robotic technology. The H-Man transmission yields extremely simplified kinematics and homogeneous dynamic properties, offering the possibility of generating haptic channels by passively blocking the mechanics and eliminating stability concerns. We report results of experiments characterizing the performance of the device (haptic bandwidth, Z-width, and perceived impedance). We also present the results of a study investigating the influence of haptic channel compliance on motor learning in healthy individuals, which highlights the effects of channel compliance in enhancing proprioceptive information. Generating haptic channels to study motor redundancy is difficult for current robots because of the need for powerful actuation and complex real-time control implementation. The mechanical design of H-Man affords the possibility of promptly creating haptic channels by mechanical stoppers (on one of the motors) without compromising its superior backdrivability and high isotropic manipulability. This paper presents a novel robotic device for motor control studies and robotic rehabilitation. The hardware was designed with specific emphasis on mechanics, resulting in a system that is easy to control, homogeneous, and intrinsically safe to use. Copyright © 2014 Elsevier B.V. All rights reserved.
A simulator for surgery training: optimal sensory stimuli in a bone pinning simulation
NASA Astrophysics Data System (ADS)
Daenzer, Stefan; Fritzsche, Klaus
2008-03-01
Currently available low-cost haptic devices allow inexpensive surgical training with no risk to patients. Major drawbacks of lower-cost devices include limited maximum feedback force and the inability to render the torques that occur. The aim of this work was the design and implementation of a surgical simulator that allows the evaluation of multi-sensory stimuli in order to overcome these drawbacks. The simulator was built on a modular architecture to allow flexible combinations and thorough evaluation of different multi-sensory feedback modules. A Kirschner-wire (K-wire) tibial fracture fixation procedure was defined and implemented as a first test scenario. A set of computational metrics was derived from the clinical requirements of the task to objectively assess the trainee's performance during simulation. Sensory feedback modules for haptic and visual feedback were developed, each in a basic and an enhanced form. First tests have shown that specific visual concepts can overcome some of the drawbacks associated with low-cost haptic devices. The simulator, the metrics, and the surgery scenario together represent an important step towards a better understanding of the perception of multi-sensory feedback in complex surgical training tasks. Field studies on top of the architecture can open the way to risk-free and inexpensive surgical simulations that can keep up with traditional surgical training.
Bridges, Susan M; Zhu, Frank; Leung, W Keung; Burrow, Michael F; Poolton, Jamie; Masters, Rich SW
2017-01-01
Background: There is little evidence examining the relationship between movement-specific reinvestment (a dimension of personality which refers to the propensity for individuals to consciously monitor and control their movements) and working memory during motor skill performance. Functional near-infrared spectroscopy (fNIRS), measuring oxyhemoglobin demands in the frontal cortex during performance of virtual reality (VR) psychomotor tasks, can be used to examine this research gap. Objective: The aim of this study was to determine the potential relationship between the propensity to reinvest and blood flow to the dorsolateral prefrontal cortices of the brain. A secondary aim was to examine the relationship between the propensity to reinvest and performance during 2 dental tasks carried out using haptic VR simulators. Methods: We used fNIRS to assess oxygen demands in 24 undergraduate dental students during 2 dental tasks (clinical, nonclinical) on a VR haptic simulator. We used the Movement-Specific Reinvestment Scale questionnaire to assess the students' propensity to reinvest. Results: Students with a high propensity for movement-specific reinvestment displayed significantly greater oxyhemoglobin demands in an area associated with working memory during the nonclinical task (Spearman correlation, rs=.49, P=.03). Conclusions: This small-scale study suggests that neurophysiological differences are evident between high and low reinvesters during a dental VR task in terms of oxyhemoglobin demands in an area associated with working memory. PMID:29233801
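The reported association is a Spearman rank correlation (rs=.49). As a reminder of what that statistic computes, here is a minimal pure-Python sketch using the classic no-ties formula; the reinvestment scores and oxyhemoglobin values below are invented for illustration and are not the study's data.

```python
def spearman_rho(x, y):
    """Spearman rank correlation via the classic formula
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), assuming no ties."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Invented illustration (NOT the study's data): reinvestment scale
# scores vs. peak oxyhemoglobin change for eight participants.
reinvest = [12, 25, 17, 30, 22, 9, 27, 15]
oxy = [0.2, 0.8, 0.4, 0.7, 0.6, 0.1, 0.9, 0.3]
rho = spearman_rho(reinvest, oxy)
```

Because only the ranks matter, the statistic is robust to the monotone-but-nonlinear relationships typical of physiological data.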
Ascending and Descending in Virtual Reality: Simple and Safe System Using Passive Haptics.
Nagao, Ryohei; Matsumoto, Keigo; Narumi, Takuji; Tanikawa, Tomohiro; Hirose, Michitaka
2018-04-01
This paper presents a novel interactive system that provides users with virtual reality (VR) experiences, wherein users feel as if they are ascending/descending stairs through passive haptic feedback. The passive haptic stimuli are provided by small bumps under the feet of users; these stimuli represent the edges of the stairs in the virtual environment. The visual stimuli of the stairs and shoes, provided by head-mounted displays, evoke a visuo-haptic interaction that modifies a user's perception of the floor shape. Our system enables users to experience all types of stairs, such as half-turn and spiral stairs, in a VR setting. We conducted a preliminary user study and two experiments to evaluate the proposed technique. The preliminary user study investigated the effectiveness of the basic idea associated with the proposed technique for the case of a user ascending stairs. The results demonstrated that the passive haptic feedback produced by the small bumps enhanced the user's feeling of presence and sense of ascending. We subsequently performed an experiment to investigate an improved viewpoint manipulation method and the interaction of the manipulation and haptics for both the ascending and descending cases. The experimental results demonstrated that the participants had a feeling of presence and felt a steep stair gradient under the condition of haptic feedback and viewpoint manipulation based on the characteristics of actual stair walking data. However, these results also indicated that the proposed system may not be as effective in providing a sense of descending stairs without an optimization of the haptic stimuli. We then redesigned the shape of the small bumps and evaluated the design in a second experiment. The results indicated that the best shape for presenting the haptic stimuli is a right-triangle cross section in both the ascending and descending cases. Although it is necessary to install the small protrusions in the determined direction, this optimized shape enhanced the users' feeling of the presence of the stairs and the sensation of walking up and down.
Multidigit movement synergies of the human hand in an unconstrained haptic exploration task.
Thakur, Pramodsingh H; Bastian, Amy J; Hsiao, Steven S
2008-02-06
Although the human hand has a complex structure with many individual degrees of freedom, joint movements are correlated. Studies involving simple tasks (grasping) or skilled tasks (typing or finger spelling) have shown that a small number of combined joint motions (i.e., synergies) can account for most of the variance in observed hand postures. However, those paradigms evoked a limited set of hand postures and as such the reported correlation patterns of joint motions may be task-specific. Here, we used an unconstrained haptic exploration task to evoke a set of hand postures that is representative of most naturalistic postures during object manipulation. Principal component analysis on this set revealed that the first seven principal components capture >90% of the observed variance in hand postures. Further, we identified nine eigenvectors (or synergies) that are remarkably similar across multiple subjects and across manipulations of different sets of objects within a subject. We then determined that these synergies are used broadly by showing that they account for the changes in hand postures during other tasks. These include hand motions such as reach and grasp of objects that vary in width, curvature and angle, and skilled motions such as precision pinch. Our results demonstrate that the synergies reported here generalize across tasks, and suggest that they represent basic building blocks underlying natural human hand motions.
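The synergy analysis in this abstract is principal component analysis over joint-angle postures. A minimal sketch, assuming a hypothetical 20-joint hand whose postures are generated by three latent synergies, shows how the cumulative variance ratio identifies the small number of components that explain most of the posture variance.

```python
import numpy as np

def posture_synergies(postures):
    """PCA over hand postures (rows: samples, columns: joint angles).
    Returns the synergies (eigenvectors, one per column, sorted by
    decreasing variance) and the cumulative explained-variance ratio."""
    X = postures - postures.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
    order = np.argsort(evals)[::-1]
    evals, evecs = evals[order], evecs[:, order]
    return evecs, np.cumsum(evals) / evals.sum()

# Hypothetical data: 200 postures of a 20-joint hand generated by
# 3 latent synergies plus a little measurement noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 20))
postures = latent @ mixing + 0.01 * rng.normal(size=(200, 20))
synergies, cumvar = posture_synergies(postures)
```

With three true generators, the cumulative variance ratio saturates near 1 after the third component, which is the same diagnostic the paper uses to conclude that about seven components capture >90% of real hand-posture variance.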
A survey of telerobotic surface finishing
NASA Astrophysics Data System (ADS)
Höglund, Thomas; Alander, Jarmo; Mantere, Timo
2018-05-01
This is a survey of research published on the subjects of telerobotics, haptic feedback, and mixed reality applied to surface finishing. The survey especially focuses on how visuo-haptic feedback can be used to improve a grinding process using a remote manipulator or robot. The benefits of teleoperation and reasons for using haptic feedback are presented. The use of genetic algorithms for optimizing haptic sensing is briefly discussed. Ways of augmenting the operator's vision are described. Visual feedback can be used to find defects and analyze the quality of the surface resulting from the surface finishing process. Visual cues can also be used to aid a human operator in manipulating a robot precisely and avoiding collisions.
Targeted Muscle Reinnervation for Transradial Amputation: Description of Operative Technique.
Morgan, Emily N; Kyle Potter, Benjamin; Souza, Jason M; Tintle, Scott M; Nanos, George P
2016-12-01
Targeted muscle reinnervation (TMR) is a revolutionary surgical technique that, together with advances in upper extremity prostheses and advanced neuromuscular pattern recognition, allows intuitive and coordinated control in multiple planes of motion for shoulder disarticulation and transhumeral amputees. TMR also may provide improvement in neuroma-related pain and may represent an opportunity for sensory reinnervation as advances in prostheses and haptic feedback progress. Although most commonly utilized following shoulder disarticulation and transhumeral amputations, TMR techniques also represent an exciting opportunity for improvement in integrated prosthesis control and neuroma-related pain improvement in patients with transradial amputations. As there are no detailed descriptions of this technique in the literature to date, we provide our surgical technique for TMR in transradial amputations.
Force sharing and other collaborative strategies in a dyadic force perception task
Tatti, Fabio
2018-01-01
When several persons perform a physical task jointly, such as transporting an object together, the interaction force that each person experiences is the sum of the forces applied by all other persons on the same object. Therefore, there is a fundamental ambiguity about the origin of the force that each person experiences. This study investigated the ability of a dyad (two persons) to identify the direction of a small force produced by a haptic device and applied to a jointly held object. In this particular task, the dyad might split the force produced by the haptic device (the external force) in an infinite number of ways, depending on how the two partners interacted physically. A major objective of this study was to understand how the two partners coordinated their action to perceive the direction of the third force that was applied to the jointly held object. This study included a condition where each participant responded independently and another one where the two participants had to agree upon a single negotiated response. The results showed a broad range of behaviors. In general, the external force was not split in a way that would maximize the joint performance. In fact, the external force was often split very unequally, leaving one person without information about the external force. However, the performance was better than expected in this case, which led to the discovery of an unanticipated strategy whereby the person who took all the force transmitted this information to the partner by moving the jointly held object. When the dyad could negotiate the response, we found that the participant with less force information tended to switch his or her response more often. PMID:29474433
Forces on intraocular lens haptics induced by capsular fibrosis. An experimental study.
Guthoff, R; Abramo, F; Draeger, J; Chumbley, L C; Lang, G K; Neumann, W
1990-01-01
Electronic dynamometry measurements, performed upon intraocular lens (IOL) haptics of prototype one-piece three-loop silicone lenses, accurately defined the relationships between elastic force and haptic displacement. Lens implantations in the capsular bag of dogs (loop span equal to capsular bag diameter, loops undeformed immediately after the operation) were evaluated macrophotographically 5-8 months postoperatively. The highly constant elastic property of silicone rubber permitted quantitative correlation of subsequent in vivo haptic displacement with the resultant force vectors responsible for tissue contraction. The lens optics were well centered in 17 (85%) and slightly off-center in 3 (15%) of 20 implanted eyes. Of the 60 supporting loops, 28 could be visualized sufficiently well to permit reliable haptic measurement. Of these 28, 20 (71%) were clearly displaced, ranging from 0.45 mm away from to 1.4 mm towards the lens' optic center. These extremes represented resultant vector forces of 0.20 and 1.23 mN respectively. Quantitative vector analysis permits better understanding of IOL-capsular interactions.
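The quantitative vector analysis described here amounts to summing the elastic force vectors of the individual loops. A minimal sketch under a linear-elastic assumption (Hooke's law with a hypothetical stiffness k; the paper's actual force-displacement calibration is not reproduced here):

```python
import math

def net_loop_force(displacements_mm, angles_deg, k_mn_per_mm):
    """Vector sum of the elastic forces from several haptic loops.
    displacements_mm[i]: radial displacement of loop i (positive =
    compressed toward the optic centre); angles_deg[i]: the loop's
    angular position on the lens. k is a hypothetical stiffness.
    Returns (magnitude in mN, direction in degrees) of the net force."""
    fx = sum(k_mn_per_mm * d * math.cos(math.radians(a))
             for d, a in zip(displacements_mm, angles_deg))
    fy = sum(k_mn_per_mm * d * math.sin(math.radians(a))
             for d, a in zip(displacements_mm, angles_deg))
    return math.hypot(fx, fy), math.degrees(math.atan2(fy, fx))

# Three identical loops compressed equally: the net force cancels,
# which is consistent with a well-centred optic.
mag, _ = net_loop_force([0.5, 0.5, 0.5], [0.0, 120.0, 240.0], 0.9)
```

Asymmetric capsular fibrosis breaks this cancellation, yielding the nonzero resultant vectors that correlate with optic decentration.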
A haptic-inspired audio approach for structural health monitoring decision-making
NASA Astrophysics Data System (ADS)
Mao, Zhu; Todd, Michael; Mascareñas, David
2015-03-01
Haptics is the field at the interface of human touch (tactile sensation) and classification, whereby tactile feedback is used to train and inform a decision-making process. In structural health monitoring (SHM) applications, haptic devices have been introduced and applied in a simplified laboratory-scale scenario, in which nonlinearity, representing the presence of damage, was encoded into a vibratory manual interface. In this paper, the "spirit" of haptics is adopted, but here ultrasonic guided wave scattering information is transformed into audio (rather than tactile) range signals. After sufficient training, the structural damage condition, including occurrence and location, can be identified through the encoded audio waveforms. Different algorithms are employed in this paper to generate the transformed audio signals, and the performance of each encoding algorithm is compared, including against standard machine learning classifiers. In the long run, this haptics-inspired decision-making approach aims to detect and classify structural damage in more demanding environments, approaching baseline-free operation with embedded temperature compensation.
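One simple way to transform ultrasonic guided-wave records into audio-range signals, in the spirit described above, is time stretching: playing the record back far more slowly than it was acquired divides every frequency by the slowdown factor. The sketch below is an assumed illustration only (the paper compares several encoding algorithms, none of which is necessarily this one).

```python
import numpy as np

def sonify(record, fs_in, slowdown=1000, fs_out=44100):
    """Time-stretch an ultrasonic record into the audio band: playing
    back at fs_in / slowdown divides every frequency by `slowdown`
    (e.g. 200 kHz content becomes 200 Hz). Returns samples resampled
    onto the audio clock by linear interpolation, normalized to +/-1."""
    fs_play = fs_in / slowdown
    duration = len(record) / fs_play
    t_out = np.arange(int(round(duration * fs_out))) / fs_out
    t_in = np.arange(len(record)) / fs_play
    audio = np.interp(t_out, t_in, record)
    return audio / (np.max(np.abs(audio)) + 1e-12)

# Illustration: a 200 kHz tone sampled at 5 MHz for 1 ms becomes a
# 200 Hz tone lasting 1 s.
fs_in = 5_000_000
t = np.arange(5000) / fs_in
audio = sonify(np.sin(2 * np.pi * 200_000 * t), fs_in)
```

A listener trained on such renderings would then judge damage presence and location from timbre and timing changes in the stretched echoes.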
Real-time dual-band haptic music player for mobile devices.
Hwang, Inwook; Lee, Hyeseon; Choi, Seungmoon
2013-01-01
We introduce a novel dual-band haptic music player for real-time simultaneous vibrotactile playback with music in mobile devices. Our haptic music player features a new miniature dual-mode actuator that can produce vibrations consisting of two principal frequencies and a real-time vibration generation algorithm that can extract vibration commands from a music file for dual-band playback (bass and treble). The algorithm uses a "haptic equalizer" and provides plausible sound-to-touch modality conversion based on human perceptual data. In addition, we present a user study carried out to evaluate the subjective performance (precision, harmony, fun, and preference) of the haptic music player, in comparison with the current practice of bass-band-only vibrotactile playback via a single-frequency voice-coil actuator. The evaluation results indicated that the new dual-band playback outperforms the bass-only rendering, also providing several insights for further improvements. The developed system and experimental findings have implications for improving the multimedia experience with mobile devices.
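The core idea of dual-band playback, splitting the music into bass and treble and deriving a vibration amplitude for each band, can be sketched as follows. The one-pole filter, crossover frequency, and frame-RMS envelope here are stand-in assumptions; the paper's "haptic equalizer" and perceptual mapping are more elaborate.

```python
import numpy as np

def dual_band_commands(audio, fs, split_hz=200, frame=256):
    """Split a music signal at `split_hz` with a one-pole lowpass and
    return per-frame RMS envelopes of the low (bass) and high (treble)
    bands as vibration amplitude commands."""
    a = np.exp(-2 * np.pi * split_hz / fs)   # one-pole coefficient
    low = np.empty_like(audio)
    y = 0.0
    for i, x in enumerate(audio):
        y = a * y + (1 - a) * x
        low[i] = y
    high = audio - low                        # complementary highpass
    def env(sig):
        n = len(sig) // frame
        return np.sqrt((sig[:n * frame].reshape(n, frame) ** 2).mean(axis=1))
    return env(low), env(high)

# Illustration: a pure 50 Hz tone should drive mostly the bass channel.
fs = 8000
t = np.arange(fs) / fs
bass, treble = dual_band_commands(np.sin(2 * np.pi * 50 * t), fs)
```

Each envelope stream would then drive one resonance of the dual-mode actuator, so bass-heavy and treble-heavy passages feel distinct.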
Learning to perceive haptic distance-to-break in the presence of friction.
Altenhoff, Bliss M; Pagano, Christopher C; Kil, Irfan; Burg, Timothy C
2017-02-01
Two experiments employed attunement and calibration training to investigate whether observers are able to identify material break points in compliant materials through haptic force application. The task required participants to attune to a recently identified haptic invariant, distance-to-break (DTB), rather than haptic stimulation not related to the invariant, including friction. In the first experiment participants probed simulated force-displacement relationships (materials) under 3 levels of friction with the aim of pushing as far as possible into the materials without breaking them. In a second experiment a different set of participants pulled on the materials. Results revealed that participants are sensitive to DTB for both pushing and pulling, even in the presence of varying levels of friction, and this sensitivity can be improved through training. The results suggest that the simultaneous presence of friction may assist participants in perceiving DTB. Potential applications include the development of haptic training programs for minimally invasive (laparoscopic) surgery to reduce accidental tissue damage. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Ergonomic evaluation of 3D plane positioning using a mouse and a haptic device.
Paul, Laurent; Cartiaux, Olivier; Docquier, Pierre-Louis; Banse, Xavier
2009-12-01
Preoperative planning and intraoperative assistance are needed to improve accuracy in tumour surgery. To be accepted, these processes must be efficient. An experiment was conducted to compare a mouse and a haptic device, with and without force feedback, for performing plane positioning in a 3D space. Ergonomics and performance factors were investigated during the experiment. Positioning strategies were observed. The task completion time, number of 3D orientations, and failure rate were analysed. A questionnaire on ergonomics was filled out by each participant. The haptic device showed a significantly lower failure rate and was quicker and more ergonomic than the mouse. The force feedback was not beneficial to the accomplishment of the task. The haptic device is intuitive, ergonomic, and more efficient than the mouse for positioning a 3D plane in a 3D space. Useful observations regarding positioning strategies will improve the integration of haptic devices into medical applications. Copyright (c) 2009 John Wiley & Sons, Ltd.
Detection of Membrane Puncture with Haptic Feedback using a Tip-Force Sensing Needle.
Elayaperumal, Santhi; Bae, Jung Hwa; Daniel, Bruce L; Cutkosky, Mark R
2014-09-01
This paper presents calibration and user test results of a 3-D tip-force sensing needle with haptic feedback. The needle is a modified MRI-compatible biopsy needle with embedded fiber Bragg grating (FBG) sensors for strain detection. After calibration, the needle is interrogated at 2 kHz, and dynamic forces are displayed remotely with a voice coil actuator. The needle is tested in a single-axis master/slave system, with the voice coil haptic display at the master, and the needle at the slave end. Tissue phantoms with embedded membranes were used to determine the ability of the tip-force sensors to provide real-time haptic feedback as compared to external sensors at the needle base during needle insertion via the master/slave system. Subjects were able to determine the position of the embedded membranes with significantly better accuracy using FBG tip feedback than with base feedback using a commercial force/torque sensor (p = 0.045) or with no added haptic feedback (p = 0.0024).
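The calibration step described above can be modeled, under a linear assumption, as fitting a matrix C that maps FBG wavelength shifts to a 3-D tip force, F = C·Δλ. The sensor count, data, and least-squares fit below are hypothetical illustrations, not the paper's calibration procedure.

```python
import numpy as np

def fit_calibration(dlam_train, f_train):
    """Least-squares fit of the (3, k) calibration matrix C in
    F = C @ dlam, from known loads: dlam_train is (n, k) wavelength
    shifts, f_train is (n, 3) applied tip forces."""
    C_T, *_ = np.linalg.lstsq(dlam_train, f_train, rcond=None)
    return C_T.T

def tip_force(C, dlam):
    """Map one wavelength-shift reading to a 3-D tip force estimate."""
    return C @ dlam

# Hypothetical sensitivities and noiseless training data, for
# illustration only (6 FBG channels, 50 calibration loads).
rng = np.random.default_rng(1)
C_true = rng.normal(size=(3, 6))
dlam = rng.normal(size=(50, 6))
forces = dlam @ C_true.T
C_est = fit_calibration(dlam, forces)
```

At the reported 2 kHz interrogation rate, each force estimate can be mapped directly to a voice-coil drive signal for the haptic display at the master side.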
The role of haptic versus visual volume cues in the size-weight illusion.
Ellis, R R; Lederman, S J
1993-03-01
Three experiments establish the size-weight illusion as a primarily haptic phenomenon, despite its having been more traditionally considered an example of vision influencing haptic processing. Experiment 1 documents, across a broad range of stimulus weights and volumes, the existence of a purely haptic size-weight illusion, equal in strength to the traditional illusion. Experiment 2 demonstrates that haptic volume cues are both sufficient and necessary for a full-strength illusion. In contrast, visual volume cues are merely sufficient, and produce a relatively weaker effect. Experiment 3 establishes that congenitally blind subjects experience an effect as powerful as that of blindfolded sighted observers, thus demonstrating that visual imagery is also unnecessary for a robust size-weight illusion. The results are discussed in terms of their implications for both sensory and cognitive theories of the size-weight illusion. Applications of this work to human factors design and to sensor-based systems for robotic manipulation are also briefly considered.
Visual and visually mediated haptic illusions with Titchener's ⊥.
Landwehr, Klaus
2014-05-01
For a replication and expansion of a previous experiment of mine, 14 newly recruited participants provided haptic and verbal estimates of the lengths of the two lines that make up Titchener's ⊥. The stimulus was presented at two different orientations (frontoparallel vs. horizontal) and rotated in steps of 45 deg around 2π. Haptically, the divided line of the ⊥ was generally underestimated, especially at a horizontal orientation. Verbal judgments also differed according to presentation condition and to which line was the target, with the overestimation of the undivided line ranging between 6.2 % and 15.3 %. The results are discussed with reference to the two-visual-systems theory of perception and action, neuroscientific accounts, and also recent historical developments (the use of handheld touchscreens, in particular), because the previously reported "haptic induction effect" (the scaling of haptic responses to the divided line of the ⊥, depending on the length of the undivided one) did not replicate.
ERIC Educational Resources Information Center
Wang, Liping; Li, Xianchun; Hsiao, Steven S.; Bodner, Mark; Lenz, Fred; Zhou, Yong-Di
2012-01-01
Previous studies suggested that primary somatosensory (SI) neurons in well-trained monkeys participated in the haptic-haptic unimodal delayed matching-to-sample (DMS) task. In this study, 585 SI neurons were recorded in monkeys performing a task that was identical to that in the previous studies but without requiring discrimination and active…
Stoycheva, Polina; Tiippana, Kaisa
2018-03-14
The brain's left hemisphere often displays advantages in processing verbal information, while the right hemisphere favours processing non-verbal information. In the haptic domain, owing to contralateral innervation, this functional lateralization is reflected in a hand advantage for certain functions. Findings regarding the hand-hemisphere advantage for haptic information remain contradictory, however. This study addressed these laterality effects and their interaction with memory retention times in the haptic modality. Participants performed haptic discrimination of letters, geometric shapes and nonsense shapes at memory retention times of 5, 15 and 30 s with the left and right hand separately, and we measured the discriminability index d'. The d' values were significantly higher for letters and geometric shapes than for nonsense shapes. This might result from dual coding (naming + spatial) and/or from a low stimulus complexity. There was no stimulus-specific laterality effect. However, we found a time-dependent laterality effect, which revealed that the performance of the left hand-right hemisphere was sustained up to 15 s, while the performance of the right hand-left hemisphere decreased progressively throughout all retention times. This suggests that haptic memory traces are more robust to decay when they are processed by the left hand-right hemisphere.
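The discriminability index d' used above can be computed from hit and false-alarm rates with the inverse normal CDF; a minimal sketch (the example rates are hypothetical, not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Discriminability index: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates: 84% hits vs. 50% false alarms -> d' of about 0.99
print(round(d_prime(0.84, 0.50), 2))
```

A higher d' means the two stimulus classes are better separated regardless of response bias, which is why it is preferred over raw accuracy in discrimination tasks like this one.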
Simulation and training of lumbar punctures using haptic volume rendering and a 6DOF haptic device
NASA Astrophysics Data System (ADS)
Färber, Matthias; Heller, Julika; Handels, Heinz
2007-03-01
The lumbar puncture is performed by inserting a needle into the spinal canal of the patient to inject medication or to extract cerebrospinal fluid (liquor). Training of this procedure is usually done on the patient, guided by experienced supervisors. A virtual reality lumbar puncture simulator has been developed in order to minimize the training costs and the patient's risk. We use a haptic device with six degrees of freedom (6DOF) to feed back forces that resist needle insertion and rotation. An improved haptic volume rendering approach is used to calculate the forces. This approach makes use of label data of relevant structures like skin, bone, muscles or fat, and of original CT data that contributes information about image structures that cannot be segmented. A real-time 3D visualization with optional stereo view shows the punctured region. 2D visualizations of orthogonal slices enable a detailed impression of the anatomical context. The input data, consisting of CT and label data and surface models of relevant structures, are defined in an XML file together with haptic rendering and visualization parameters. In a first evaluation, the Visible Human male dataset was used to generate a virtual training body. Several users with different levels of medical experience tested the lumbar puncture trainer. The simulator gives a good haptic and visual impression of the needle insertion, and the haptic volume rendering technique enables the feeling of unsegmented structures. In particular, the restriction of transversal needle movement, together with the rotation constraints enabled by the 6DOF device, facilitates a realistic puncture simulation.
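The label-plus-CT force computation described above might be sketched as follows. The stiffness table, the CT weighting, and the voxel layout are all illustrative assumptions, not the authors' parameters:

```python
# Hypothetical per-tissue resistance factors (tissue label -> stiffness scale)
LABEL_STIFFNESS = {"air": 0.0, "fat": 0.2, "muscle": 0.5, "bone": 1.0}

def needle_resistance(labels, ct, path, ct_weight=0.001):
    """Accumulate a resistance force along the needle path (illustrative).

    `labels` and `ct` map voxel coordinates to a tissue label and a CT
    intensity; the CT term stands in for structures that were never
    segmented, as in the haptic volume rendering approach above.
    """
    force = 0.0
    for voxel in path:
        force += LABEL_STIFFNESS.get(labels.get(voxel, "air"), 0.0)
        force += ct_weight * max(0.0, ct.get(voxel, 0.0))
    return force

labels = {(2, 2, 2): "muscle"}                # one muscle voxel on the path
ct = {(2, 2, z): 100.0 for z in range(4)}     # uniform CT intensity
path = [(2, 2, z) for z in range(4)]
print(needle_resistance(labels, ct, path))
```

In a real simulator the per-voxel contributions would feed a 6DOF force model rather than a scalar, but the lookup-and-accumulate structure is the same.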
Three Dimensional Projection Environment for Molecular Design and Surgical Simulation
2011-08-01
bypasses the cumbersome meshing process. The deformation model is comprised only of mass nodes, which are generated by sampling the object volume before... force should minimize the penetration volume, the haptic feedback force is derived directly. Additionally, a post-processing technique is developed to... render distinct physical tissue properties across different interaction areas. The proposed approach does not require any pre-processing and is
ERIC Educational Resources Information Center
Ballesteros, Soledad; Bardisa, Dolores; Millar, Susanna; Reales, Jose M.
2005-01-01
A new psychological test battery was designed to provide a much-needed comprehensive tool for assessing the perceptual and cognitive abilities of visually handicapped children in using active touch. The test materials consist of raised-line, raised-dot, raised-surface shapes and displays, and familiar and novel 3-D objects. The research used 20…
Bornhoft, J M; Strabala, K W; Wortman, T D; Lehman, A C; Oleynikov, D; Farritor, S M
2011-01-01
The objective of this research is to study the effectiveness of using a stereoscopic visualization system for performing remote surgery. The use of stereoscopic vision has become common with the advent of the da Vinci® system (Intuitive, Sunnyvale, CA). This system creates a virtual environment that consists of a 3-D display for visual feedback and haptic tactile feedback, together providing an intuitive environment for remote surgical applications. This study will use simple in vivo robotic surgical devices and compare the performance of surgeons using the stereoscopic interfacing system to the performance of surgeons using conventional two-dimensional monitors. The stereoscopic viewing system consists of two cameras, two monitors, and four mirrors. The cameras are mounted to a multi-functional miniature in vivo robot and mimic the depth perception of the actual human eyes. This is done by placing the cameras at a calculated angle and distance apart. Live video streams from the left and right cameras are displayed on the left and right monitors, respectively. A system of angled mirrors allows the left and right eyes to see the video stream from the left and right monitor, respectively, creating the illusion of depth. The haptic interface consists of two PHANTOM Omni® (SensAble, Woburn, MA) controllers. These controllers measure the position and orientation of a pen-like end effector with three degrees of freedom. As the surgeon uses this interface, they see a 3-D image and feel force feedback for collision and workspace limits. The stereoscopic viewing system has been used in several surgical training tests and shows a potential improvement in depth perception and 3-D vision. The haptic system accurately gives force feedback that aids in surgery. Both have been used in non-survival animal surgeries, and have successfully been used in suturing and gallbladder removal. Bench-top experiments using the interfacing system have also been conducted.
A group of participants completed two different surgical training tasks using both a two-dimensional visual system and the stereoscopic visual system. Results suggest that the stereoscopic visual system decreased the amount of time taken to complete the tasks. All participants also reported that the stereoscopic system was easier to use than the two-dimensional system. Haptic controllers combined with stereoscopic vision provide a more intuitive virtual environment. This system provides the surgeon with 3-D vision, depth perception, and the ability to receive feedback through forces applied in the haptic controller while performing surgery. These capabilities potentially enable the performance of more complex surgeries with a higher level of precision.
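The "calculated angle and distance apart" of the camera pair can be illustrated with simple trigonometry for a toed-in stereo rig; the baseline and working distance below are hypothetical, not the system's actual values:

```python
import math

def vergence_angle_deg(baseline_mm: float, distance_mm: float) -> float:
    """Angle between the two camera optical axes when both are aimed at a
    point straight ahead at the given working distance (toed-in rig)."""
    return math.degrees(2.0 * math.atan((baseline_mm / 2.0) / distance_mm))

# Hypothetical human-like 65 mm baseline at a 100 mm working distance
print(round(vergence_angle_deg(65.0, 100.0), 1))  # ~36.0 degrees
```

Matching the vergence angle and baseline to human interocular geometry is what makes the disparity between the two video streams read as natural depth.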
NASA Astrophysics Data System (ADS)
Nomura, Shusaku; Sasaki, Shuntaro; Hirakawa, Masato; Hiwaki, Osamu
2010-11-01
We investigated brain potentials related to the recognition of the Müller-Lyer (ML) figure, a well-known optical illusion. Although the ML effect is frequently attributed to the figure's geometrical construction, the same length misestimation also occurs in the sense of touch (haptic illusion). Moreover, it occurs in people who are blindfolded or congenitally blind. Thus, higher-order processing in the brain, beyond purely optical processing, can be expected to be involved in the recognition of the ML figure, although few brain studies have demonstrated this. We therefore recorded brain waves while subjects perceived the ML figure. A marked difference in brain potential between the ML and control conditions was observed around the midline of the parietal cortex, where multimodal perceptual information is thought to be associated in the brain. This result implies that the anticipatory processing underlying the perception of ML figures may derive from the integration of multisensory information.
Modeling and modification of medical 3D objects. The benefit of using a haptic modeling tool.
Kling-Petersen, T; Rydmark, M
2000-01-01
The Computer Laboratory of the medical faculty in Goteborg (Mednet) has since the end of 1998 been one of a limited number of participants in the development of a new modeling tool together with SensAble Technologies Inc [http://www.sensable.com/]. The software, called SensAble FreeForm, was officially released at SIGGRAPH in September 1999. Briefly, the software mimics the modeling techniques traditionally used by clay artists. An imported model or a user-defined block of "clay" can be modified using different tools such as a ball, square block, scrape, etc. via the use of a SensAble Technologies PHANToM haptic arm. The model deforms in 3D as a result of touching the "clay" with any selected tool, and the amount of deformation is proportional to the force applied. By getting instantaneous haptic as well as visual feedback, precise and intuitive changes are easily made. While SensAble FreeForm lacks several of the features normally associated with a 3D modeling program (such as text handling, application of surface and bumpmaps, high-end rendering engines, etc.), its strength lies in the ability to rapidly create non-geometric 3D models. For medical use, very few anatomically correct models are created from scratch. However, FreeForm's tools enable advanced modification of reconstructed or 3D-scanned models. One of the main problems with 3D laser scanning of medical specimens is that the technique usually leaves holes or gaps in the dataset corresponding to areas in shadow, such as orifices, deep grooves, etc. By using FreeForm's different tools, these defects are easily corrected and gaps are filled in. Similarly, traditional 3D reconstruction (based on serial sections, etc.) often shows artifacts as a result of the triangulation and/or tessellation processes. These artifacts usually manifest as unnatural ridges or uneven areas ("the accordion effect").
FreeForm contains a smoothing algorithm that enables the user to select an area to be modified and subsequently apply any given amount of smoothing to the object. While the final objects need to be exported for further 3D graphic manipulation, FreeForm addresses one of the most time consuming problems of 3D modeling: modification and creation of non-geometric 3D objects.
Force modeling for incision surgery into tissue with haptic application
NASA Astrophysics Data System (ADS)
Kim, Pyunghwa; Kim, Soomin; Choi, Seung-Hyun; Oh, Jong-Seok; Choi, Seung-Bok
2015-04-01
This paper presents a novel force model for incision surgery into tissue and its haptic application for the surgeon. During robot-assisted incision surgery, a haptic system that provides a sense of touch in the surgical area is urgently needed, because surgeons cannot directly feel the tissue. To achieve this goal, a force model of the reaction force of biological tissue is proposed from an energy perspective. The model describes the reaction force arising from the elastic behavior of tissue during incision. Furthermore, the force computed from the model is realized by a haptic device using magnetorheological fluid (MRF). The performance of the realized force, controlled by a PID controller with open-loop control, is evaluated.
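The PID regulation of a rendered force might look like the following generic discrete loop; the gains, the first-order plant, and the time step are invented for illustration and are not the authors' controller or actuator model:

```python
class PID:
    """Minimal discrete PID loop (gains and plant are illustrative)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a crude first-order "actuator" toward a 1.0 N reaction-force target
pid = PID(kp=2.0, ki=1.0, kd=0.01, dt=0.01)
force = 0.0
for _ in range(2000):
    force += 0.05 * pid.update(1.0, force)  # toy plant: force follows the command
print(round(force, 3))  # settles near the 1.0 N setpoint
```

An MRF device would add actuator dynamics and hysteresis on top of this loop, but the error-integrate-differentiate structure is the standard starting point.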
A Model for Steering with Haptic-Force Guidance
NASA Astrophysics Data System (ADS)
Yang, Xing-Dong; Irani, Pourang; Boulanger, Pierre; Bischof, Walter F.
Trajectory-based tasks are common in many applications and have been widely studied. Recently, researchers have shown that even very simple tasks, such as selecting items from cascading menus, can benefit from haptic-force guidance. Haptic guidance is also of significant value in many applications such as medical training, handwriting learning, and in applications requiring precise manipulations. There are, however, only very few guiding principles for selecting parameters that are best suited for proper force guiding. In this paper, we present a model, derived from the steering law that relates movement time to the essential components of a tunneling task in the presence of haptic-force guidance. Results of an experiment show that our model is highly accurate for predicting performance times in force-enhanced tunneling tasks.
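The steering law that the model above extends can be sketched for the simplest case of a constant-width tunnel; the coefficients below are hypothetical fit values, not the paper's:

```python
def steering_time(a: float, b: float, path_length: float, width: float) -> float:
    """Steering law for a constant-width tunnel: MT = a + b * (A / W)."""
    return a + b * (path_length / width)

# Hypothetical regression coefficients from a tunneling experiment
a, b = 0.2, 0.1   # intercept (s) and slope (s per unit of difficulty)
print(steering_time(a, b, path_length=300.0, width=20.0))  # 0.2 + 0.1 * 15 = 1.7
```

The paper's contribution is to fold haptic-guidance parameters into this relation; in the general law the ratio A/W becomes an integral of ds/W(s) along the path.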
A mass-density model can account for the size-weight illusion.
Wolf, Christian; Bergmann Tiest, Wouter M; Drewing, Knut
2018-01-01
When judging the heaviness of two objects with equal mass, people perceive the smaller and denser of the two as being heavier. Despite the large number of theories, covering bottom-up and top-down approaches, none of them can fully account for all aspects of this size-weight illusion and thus for human heaviness perception. Here we propose a new maximum-likelihood estimation model which describes the illusion as the weighted average of two heaviness estimates with correlated noise: One estimate derived from the object's mass, and the other from the object's density, with estimates' weights based on their relative reliabilities. While information about mass can directly be perceived, information about density will in some cases first have to be derived from mass and volume. However, according to our model at the crucial perceptual level, heaviness judgments will be biased by the objects' density, not by its size. In two magnitude estimation experiments, we tested model predictions for the visual and the haptic size-weight illusion. Participants lifted objects which varied in mass and density. We additionally varied the reliability of the density estimate by varying the quality of either visual (Experiment 1) or haptic (Experiment 2) volume information. As predicted, with increasing quality of volume information, heaviness judgments were increasingly biased towards the object's density: Objects of the same density were perceived as more similar and big objects were perceived as increasingly lighter than small (denser) objects of the same mass. This perceived difference increased with an increasing difference in density. In an additional two-alternative forced choice heaviness experiment, we replicated that the illusion strength increased with the quality of volume information (Experiment 3). Overall, the results highly corroborate our model, which seems promising as a starting point for a unifying framework for the size-weight illusion and human heaviness perception.
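The reliability-weighted averaging at the core of the model can be sketched as follows, under the simplifying assumption of independent noise (the abstract's model additionally allows correlated noise between the two estimates):

```python
def combined_heaviness(mass_est, mass_sd, density_est, density_sd):
    """Reliability-weighted average of mass and density heaviness cues.

    Simplified maximum-likelihood sketch assuming independent noise.
    """
    r_m = 1.0 / mass_sd ** 2          # reliability = inverse variance
    r_d = 1.0 / density_sd ** 2
    w_m = r_m / (r_m + r_d)
    return w_m * mass_est + (1.0 - w_m) * density_est

# Better volume information makes the density cue more reliable, pulling
# the heaviness judgment toward the density estimate (arbitrary units)
print(round(combined_heaviness(1.0, 0.1, 2.0, 0.1), 3))   # equal weights -> 1.5
print(round(combined_heaviness(1.0, 0.1, 2.0, 0.05), 3))  # density favored -> 1.8
```

This reproduces the qualitative prediction tested in Experiments 1-3: as volume (and hence density) information becomes more reliable, judgments shift toward the density estimate.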
A new selective developmental deficit: Impaired object recognition with normal face recognition.
Germine, Laura; Cashdollar, Nathan; Düzel, Emrah; Duchaine, Bradley
2011-05-01
Studies of developmental deficits in face recognition, or developmental prosopagnosia, have shown that individuals who have not suffered brain damage can show face recognition impairments coupled with normal object recognition (Duchaine and Nakayama, 2005; Duchaine et al., 2006; Nunn et al., 2001). However, no developmental cases with the opposite dissociation - normal face recognition with impaired object recognition - have been reported. The existence of a case of non-face developmental visual agnosia would indicate that the development of normal face recognition mechanisms does not rely on the development of normal object recognition mechanisms. To see whether a developmental variant of non-face visual object agnosia exists, we conducted a series of web-based object and face recognition tests to screen for individuals showing object recognition memory impairments but not face recognition impairments. Through this screening process, we identified AW, an otherwise normal 19-year-old female, who was then tested in the lab on face and object recognition tests. AW's performance was impaired in within-class visual recognition memory across six different visual categories (guns, horses, scenes, tools, doors, and cars). In contrast, she scored normally on seven tests of face recognition, tests of memory for two other object categories (houses and glasses), and tests of recall memory for visual shapes. Testing confirmed that her impairment was not related to a general deficit in lower-level perception, object perception, basic-level recognition, or memory. AW's results provide the first neuropsychological evidence that recognition memory for non-face visual object categories can be selectively impaired in individuals without brain damage or other memory impairment. 
These results indicate that the development of recognition memory for faces does not depend on intact object recognition memory and provide further evidence for category-specific dissociations in visual recognition. Copyright © 2010 Elsevier Srl. All rights reserved.
Park, George D; Reed, Catherine L
2015-10-01
Despite attentional prioritization for grasping space near the hands, tool-use appears to transfer attentional bias to the tool's end/functional part. The contributions of haptic and visual inputs to attentional distribution along a tool were investigated as a function of tool-use in near (Experiment 1) and far (Experiment 2) space. Visual attention was assessed with a 50/50, go/no-go, target discrimination task, while a tool was held next to targets appearing near the tool-occupied hand or tool-end. Target response times (RTs) and sensitivity (d-prime) were measured at target locations, before and after functional tool practice for three conditions: (1) open-tool: tool-end visible (visual + haptic inputs), (2) hidden-tool: tool-end visually obscured (haptic input only), and (3) short-tool: stick missing tool's length/end (control condition: hand occupied but no visual/haptic input). In near space, both open- and hidden-tool groups showed a tool-end attentional bias (faster RTs toward tool-end) before practice; after practice, RTs near the hand improved. In far space, the open-tool group showed no bias before practice; after practice, target RTs near the tool-end improved. However, the hidden-tool group showed a consistent tool-end bias despite practice. The absence of comparable effects in the short-tool group suggested that the hidden-tool group's results were specific to haptic inputs. In conclusion, (1) allocation of visual attention along a tool due to tool practice differs in near and far space, and (2) visual attention is drawn toward the tool's end even when visually obscured, suggesting haptic input provides sufficient information for directing attention along the tool.
Haptic spatial matching in near peripersonal space.
Kaas, Amanda L; Mier, Hanneke I van
2006-04-01
Research has shown that haptic spatial matching at intermanual distances over 60 cm is prone to large systematic errors. The error pattern has been explained by the use of reference frames intermediate between egocentric and allocentric coding. This study investigated haptic performance in near peripersonal space, i.e. at intermanual distances of 60 cm and less. Twelve blindfolded participants (six males and six females) were presented with two turn bars at equal distances from the midsagittal plane, 30 or 60 cm apart. Different orientations (vertical/horizontal or oblique) of the left bar had to be matched by adjusting the right bar to either a mirror symmetric (/ \) or parallel (/ /) position. The mirror symmetry task can in principle be performed accurately in both an egocentric and an allocentric reference frame, whereas the parallel task requires an allocentric representation. Results showed that parallel matching induced large systematic errors which increased with distance. Overall error was significantly smaller in the mirror task. The task difference also held for the vertical orientation at 60 cm distance, even though this orientation required the same response in both tasks, showing a marked effect of task instruction. In addition, men outperformed women on the parallel task. Finally, contrary to our expectations, systematic errors were found in the mirror task, predominantly at 30 cm distance. Based on these findings, we suggest that haptic performance in near peripersonal space might be dominated by different mechanisms than those which come into play at distances over 60 cm. Moreover, our results indicate that both inter-individual differences and task demands affect task performance in haptic spatial matching. Therefore, we conclude that the study of haptic spatial matching in near peripersonal space might reveal important additional constraints for the specification of adequate models of haptic spatial performance.
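The difference between the two instructions can be made concrete by computing the right-bar orientation each one requires. This is an idealized sketch (angles are allocentric, 0 = horizontal, modulo 180 degrees) that ignores the systematic errors the study measures:

```python
def required_orientation(left_deg: float, task: str) -> float:
    """Orientation the right bar must take to match the left bar."""
    if task == "parallel":
        return left_deg % 180.0                 # same allocentric angle
    if task == "mirror":
        return (180.0 - left_deg) % 180.0       # reflect about the midsagittal plane
    raise ValueError(task)

print(required_orientation(45.0, "parallel"))  # 45.0
print(required_orientation(45.0, "mirror"))    # 135.0
print(required_orientation(90.0, "mirror"))    # 90.0: vertical, same as parallel
```

The last line shows why the vertical orientation is diagnostic: both tasks demand the identical response there, so any performance difference must come from the instruction itself.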
Mixed reality temporal bone surgical dissector: mechanical design.
Hochman, Jordan Brent; Sepehri, Nariman; Rampersad, Vivek; Kraut, Jay; Khazraee, Milad; Pisa, Justyn; Unger, Bertram
2014-08-08
This paper describes the development of a novel mixed reality (MR) simulation. An evolving training environment emphasizes the importance of simulation. Current haptic temporal bone simulators have difficulty representing realistic contact forces, and while 3D-printed models convincingly represent the vibrational properties of bone, they cannot reproduce soft tissue. This paper introduces a mixed reality model in which the effective elements of both simulations are combined: haptic rendering of soft tissue directly interacts with a printed bone model. This paper addresses one aspect in a series of challenges, specifically the mechanical merger of a haptic device with an otic drill. This further necessitates gravity cancellation of the work assembly gripper mechanism. In this system, the haptic end-effector is replaced by a high-speed drill, and the virtual contact forces need to be repositioned from the mid-wand to the drill tip. Previous publications detail generation of both the requisite printed and haptic simulations. Custom software was developed to reposition the haptic interaction point to the drill tip. A custom fitting to hold the otic drill was developed, and its weight was offset using the haptic device. The robustness of the system to disturbances and its stable performance during drilling were tested. The experiments were performed on a mixed reality model consisting of two drillable rapid-prototyped layers separated by a free space. Within the free space, a linear virtual force model is applied to simulate drill contact with soft tissue. Testing illustrated the effectiveness of gravity cancellation. Additionally, the system exhibited excellent performance given random inputs and during the drill's passage between real and virtual components of the model. No issues with registration at model boundaries were encountered. These tests provide a proof of concept for the initial stages in the development of a novel mixed-reality temporal bone simulator.
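Two of the mechanical adjustments described, repositioning the haptic interaction point to the drill tip and offsetting the fitting's weight, can be sketched as follows. The mass, tip-offset length, and pose convention are illustrative assumptions, not the paper's values:

```python
import math

def gravity_compensation_force(mass_kg: float, g: float = 9.81) -> float:
    """Constant upward force (N) the haptic device must add so the user
    does not feel the weight of the attached drill fitting."""
    return mass_kg * g

def drill_tip_position(wand_xyz, yaw, pitch, tip_offset):
    """Move the haptic interaction point from mid-wand to the drill tip
    by projecting a fixed offset along the wand's pointing direction."""
    x, y, z = wand_xyz
    dx = math.cos(pitch) * math.cos(yaw)
    dy = math.cos(pitch) * math.sin(yaw)
    dz = math.sin(pitch)
    return (x + tip_offset * dx, y + tip_offset * dy, z + tip_offset * dz)

# Hypothetical 0.35 kg fitting; wand pointing straight ahead, 10 cm tip offset
print(round(gravity_compensation_force(0.35), 2))          # ~3.43 N upward
print(drill_tip_position((0.0, 0.0, 0.0), 0.0, 0.0, 0.1))  # tip 0.1 m ahead
```

In the actual system both computations run inside the haptic servo loop, so the virtual contact forces are evaluated at the tip pose rather than the device handle.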
Chae, Sanghoon; Jung, Sung-Weon
2018-01-01
A survey of 67 experienced orthopedic surgeons indicated that precise portal placement was the most important skill in arthroscopic surgery. However, none of the currently available virtual reality simulators include simulation/training in portal placement, including haptic feedback of the necessary puncture force. This study aimed to: (1) measure the in vivo force and stiffness during a portal placement procedure in an actual operating room and (2) implement active haptic simulation of a portal placement procedure using the measured in vivo data. We measured the force required for portal placement and the stiffness of the joint capsule during portal placement procedures performed by an experienced arthroscopic surgeon. Based on the acquired mechanical property values, we developed a cable-driven active haptic simulator designed to train the portal placement skill and evaluated the validity of the simulated haptics. Ten patients diagnosed with rotator cuff tears were enrolled in this experiment. The maximum peak force and joint capsule stiffness during posterior portal placement procedures were 66.46 ± 10.76 N and 2560.82 ± 252.92 N/m, respectively. We then designed an active haptic simulator using the acquired data. Our cable-driven mechanism structure had a friction force of 3.763 ± 0.341 N, less than 6% of the mean puncture force. Simulator performance was evaluated by comparing the target stiffness and force with the stiffness and force reproduced by the device. R-squared values were 0.998 for puncture force replication and 0.902 for stiffness replication, indicating that the in vivo data can be used to implement a realistic haptic simulator. PMID:29494691
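The R-squared evaluation of force replication is the standard coefficient of determination between commanded and reproduced values; a minimal sketch (the force samples below are made up, not the study's measurements):

```python
def r_squared(target, reproduced):
    """Coefficient of determination between commanded and measured values."""
    mean_t = sum(target) / len(target)
    ss_tot = sum((t - mean_t) ** 2 for t in target)
    ss_res = sum((t - r) ** 2 for t, r in zip(target, reproduced))
    return 1.0 - ss_res / ss_tot

# Hypothetical commanded vs. device-reproduced forces (N)
target     = [10.0, 20.0, 30.0, 40.0, 50.0]
reproduced = [11.0, 19.0, 31.0, 39.0, 50.5]
print(round(r_squared(target, reproduced), 3))  # -> 0.996
```

Values close to 1, like the 0.998 and 0.902 reported above, mean the device tracks the in vivo targets with little unexplained error.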
Towards open-source, low-cost haptics for surgery simulation.
Suwelack, Stefan; Sander, Christian; Schill, Julian; Serf, Manuel; Danz, Marcel; Asfour, Tamim; Burger, Wolfgang; Dillmann, Rüdiger; Speidel, Stefanie
2014-01-01
In minimally invasive surgery (MIS), virtual reality (VR) training systems have become a promising education tool. However, the adoption of these systems in research and clinical settings is still limited by the high costs of dedicated haptics hardware for MIS. In this paper, we present ongoing research towards an open-source, low-cost haptic interface for MIS simulation. We demonstrate the basic mechanical design of the device, the sensor setup as well as its software integration.
NASA Astrophysics Data System (ADS)
Yang, Tae-Heon; Koo, Jeong-Hoi
2017-12-01
Humans can experience realistic and vivid haptic sensations through the sense of touch. In order to have a fully immersive haptic experience, both kinaesthetic and vibrotactile information must be presented to human users. Currently, little haptic research has been performed on small haptic actuators that can convey both vibrotactile feedback at vibration frequencies up to the human-perceivable limit and multiple levels of kinaesthetic feedback rapidly. Therefore, this study set out to design a miniature haptic device based on MR fluid and experimentally evaluate its ability to convey vibrotactile feedback up to 300 Hz along with kinaesthetic feedback. After constructing a prototype device, a series of tests was performed to evaluate the performance of the prototype using an experimental setup consisting of a precision dynamic mechanical analyzer and an accelerometer. The kinaesthetic testing results show that the prototype device can provide a force rate of up to 89% at 5 V (360 mA), which can be discretized into multiple levels of ‘just noticeable difference’ force rate, indicating that the device can convey a wide range of kinaesthetic sensations. To evaluate the high-frequency vibrotactile feedback performance of the device, its acceleration responses were measured and processed using FFT analysis. The results indicate that the device can convey high-frequency vibrotactile sensations up to 300 Hz with acceleration intensities large enough for humans to feel.
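Verifying 300 Hz content in an accelerometer trace, as in the FFT analysis mentioned above, can be sketched with a single-bin DFT on a synthetic signal; the sampling rate and the signal itself are illustrative, not the study's data:

```python
import cmath
import math

def dft_magnitude(signal, sample_rate, freq_hz):
    """Magnitude of a single DFT bin: how strongly freq_hz is present."""
    n = len(signal)
    s = sum(x * cmath.exp(-2j * math.pi * freq_hz * i / sample_rate)
            for i, x in enumerate(signal))
    return 2.0 * abs(s) / n

# Synthetic 300 Hz acceleration trace, 1 s sampled at 3 kHz
rate = 3000
sig = [math.sin(2 * math.pi * 300 * i / rate) for i in range(rate)]
print(round(dft_magnitude(sig, rate, 300), 2))  # ~1.0: the 300 Hz tone
print(round(dft_magnitude(sig, rate, 100), 2))  # ~0.0: absent frequency
```

A full FFT computes all bins at once, but the per-bin projection shown here is the quantity being read off when a spectral peak at 300 Hz is reported.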
Haptic Edge Detection Through Shear
NASA Astrophysics Data System (ADS)
Platkiewicz, Jonathan; Lipson, Hod; Hayward, Vincent
2016-03-01
Most tactile sensors are based on the assumption that touch depends on measuring pressure. However, the pressure distribution at the surface of a tactile sensor cannot be acquired directly and must be inferred from the deformation field induced by the touched object in the sensor medium. Currently, there is no consensus as to which components of strain are most informative for tactile sensing. Here, we propose that shape-related tactile information is more suitably recovered from shear strain than normal strain. Based on a contact mechanics analysis, we demonstrate that the elastic behavior of a haptic probe provides a robust edge detection mechanism when shear strain is sensed. We used a jamming-based robot gripper as a tactile sensor to empirically validate that shear strain processing gives accurate edge information that is invariant to changes in pressure, as predicted by the contact mechanics study. This result has implications for the design of effective tactile sensors as well as for the understanding of the early somatosensory processing in mammals.
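A toy version of edge detection from a shear-strain profile: flag local peaks of shear magnitude above a threshold. The profile and threshold are invented for illustration and do not model the jamming gripper or the contact mechanics analysis:

```python
def edge_locations(shear, threshold):
    """Indices where shear-strain magnitude peaks above threshold,
    flagging a likely edge in the touched surface."""
    edges = []
    for i in range(1, len(shear) - 1):
        m = abs(shear[i])
        if m > threshold and m >= abs(shear[i - 1]) and m >= abs(shear[i + 1]):
            edges.append(i)
    return edges

# Toy shear profile across a contact with a raised edge near index 4
profile = [0.0, 0.01, 0.02, 0.35, 0.80, 0.30, 0.02, 0.0]
print(edge_locations(profile, threshold=0.5))  # [4]
```

The paper's claim is that this kind of peak in shear strain stays put as contact pressure changes, which is what makes it a pressure-invariant edge cue.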
Yasuda, Kazuhiro; Saichi, Kenta; Iwata, Hiroyasu
2018-01-01
Falls and fall-induced injuries are major global public health problems, and sensory input impairment in older adults results in significant limitations in feedback-type postural control. A haptic-based biofeedback (BF) system can be used for augmenting somatosensory input in older adults, and the application of this BF system can increase the objectivity of the feedback and encourage comparison with that provided by a trainer. Nevertheless, an optimal BF system that focuses on interpersonal feedback for balance training in older adults has not been proposed. Thus, we proposed a haptic-based perception-empathy BF system that provides information regarding the older adult's center-of-foot pressure pattern to the trainee and trainer for refining the motor learning effect. The first objective of this study was to examine the effect of this balance training regimen in healthy older adults performing a postural learning task. Second, this study aimed to determine whether BF training required high cognitive load to clarify its practicability in real-life settings. Twenty older adults were assigned to two groups: BF and control groups. Participants in both groups tried balance training in the single-leg stance while performing a cognitive task (i.e., serial subtraction task). Retention was tested 24 h later. Testing comprised balance performance measures (i.e., 95% confidence ellipse area and mean velocity of sway) and dual-task performance (number of responses and correct answers). Measurements of postural control using a force plate revealed that the stability of the single-leg stance was significantly lower in the BF group than in the control group during the balance task. The BF group retained the improvement in the 95% confidence ellipse area 24 h after the retention test. Results of dual-task performance during the balance task were not different between the two groups. 
These results confirmed the potential benefit of the proposed balance training regimen in designing successful motor learning programs for preventing falls in older adults. PMID:29868597
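The 95% confidence ellipse area used as a balance measure above is a standard sway statistic. A minimal sketch of how it is computed from center-of-pressure samples; the chi-square constant is the standard value for a bivariate 95% prediction ellipse, and the toy sway data are invented for illustration:

```python
import numpy as np

def confidence_ellipse_area(cop_xy, chi2_95=5.991):
    """95% confidence ellipse area of centre-of-pressure (COP) samples.

    cop_xy: (N, 2) array of COP coordinates (e.g., ML and AP, in mm).
    Area = pi * chi2 * sqrt(det(cov)), the standard prediction-ellipse
    formula for bivariate normal data (chi2 with 2 dof at p = .95).
    """
    cov = np.cov(np.asarray(cop_xy, float).T)
    return np.pi * chi2_95 * np.sqrt(np.linalg.det(cov))

# toy sway record: independent ML/AP noise with known standard deviations
rng = np.random.default_rng(0)
cop = rng.normal(scale=[3.0, 1.5], size=(5000, 2))
area = confidence_ellipse_area(cop)   # expected near pi*5.991*3.0*1.5
```

A smaller area after training, retained at the 24 h test, is what the abstract reports for the BF group.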
Haptic-STM: a human-in-the-loop interface to a scanning tunneling microscope.
Perdigão, Luís M A; Saywell, Alex
2011-07-01
The operation of a haptic device interfaced with a scanning tunneling microscope (STM) is presented here. The user moves the STM tip in three dimensions by means of a stylus attached to the haptic instrument. The tunneling current measured by the STM is converted to a vertical force, applied to the stylus and felt by the user, with the user being incorporated into the feedback loop that controls the tip-surface distance. A haptic-STM interface of this nature allows the user to feel atomic features on the surface and facilitates the tactile manipulation of the adsorbate/substrate system. The operation of this device is demonstrated via the room temperature STM imaging of C60 molecules adsorbed on an Au(111) surface in ultra-high vacuum.
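The current-to-force conversion described above can be illustrated with a toy model. The exponential tunneling law is standard physics, but the gains, setpoint, and logarithmic mapping below are assumptions made for illustration, not the authors' published implementation:

```python
import math

KAPPA = 10.0     # inverse decay length, 1/nm (typical ~1 inverse angstrom)
I_SET = 1.0      # setpoint current, nA
F_GAIN = 0.05    # N per decade of current: a hypothetical haptic gain

def tunneling_current(z_nm, i0=1.0e4):
    """Toy tunneling law: I = I0 * exp(-2 * kappa * z)."""
    return i0 * math.exp(-2.0 * KAPPA * z_nm)

def haptic_force(current_na):
    """Map measured current to a vertical force on the stylus.

    Taking log(I / I_set) linearises the exponential distance dependence,
    so the user feels an approximately linear repulsive 'wall' when the
    tip approaches the surface and a pull when it retracts past setpoint.
    """
    return F_GAIN * math.log10(current_na / I_SET)
```

At the setpoint distance the force is zero; closer approach produces a repulsive (positive) force, which is the closed-loop sensation the abstract describes.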
NASA Astrophysics Data System (ADS)
Erickson, David; Lacheray, Hervé; Lambert, Jason Michel; Mantegh, Iraj; Crymble, Derry; Daly, John; Zhao, Yan
2012-06-01
State-of-the-art explosive ordnance disposal robots have not, in general, adopted recent advances in control technology and man-machine interfaces and lag many years behind academia. This paper describes the Haptics-based Immersive Telerobotic System project, which investigates an immersive telepresence environment incorporating advanced vehicle control systems, augmented immersive sensory feedback, dynamic 3D visual information, and haptic feedback for explosive ordnance disposal operators. The project aim is to provide operators with a more sophisticated interface and expanded sensory input so that they can successfully perform the complex tasks required to defeat improvised explosive devices. The introduction of haptics and immersive telepresence has the potential to shift the way telepresence systems work for explosive ordnance disposal tasks, and more widely for first-responder scenarios involving remote unmanned ground vehicles.
Haptic Technologies for MEMS Design
NASA Astrophysics Data System (ADS)
Calis, Mustafa; Desmulliez, Marc P. Y.
2006-04-01
This paper presents for the first time a design methodology for MEMS/NEMS based on haptic sensing technologies. The software tool created as a result of this methodology will enable designers to model and interact in real time with their virtual prototype. One of the main advantages of haptic sensing is the ability to bring unusual microscopic forces back to the designer's world. Other significant benefits of developing such a methodology include productivity gains and the capability to include manufacturing costs within the design cycle.
Whitwell, Robert L.; Ganel, Tzvi; Byrne, Caitlin M.; Goodale, Melvyn A.
2015-01-01
Investigators study the kinematics of grasping movements (prehension) under a variety of conditions to probe visuomotor function in normal and brain-damaged individuals. “Natural” prehensile acts are directed at the goal object and are executed using real-time vision. Typically, they also entail the use of tactile, proprioceptive, and kinesthetic sources of haptic feedback about the object (“haptics-based object information”) once contact with the object has been made. Natural and simulated (pantomimed) forms of prehension are thought to recruit different cortical structures: patient DF, who has visual form agnosia following bilateral damage to her temporal-occipital cortex, loses her ability to scale her grasp aperture to the size of targets (“grip scaling”) when her prehensile movements are based on a memory of a target previewed 2 s before the cue to respond or when her grasps are directed towards a visible virtual target but she is denied haptics-based information about the target. In the first of two experiments, we show that when DF performs real-time pantomimed grasps towards a 7.5 cm displaced imagined copy of a visible object such that her fingers make contact with the surface of the table, her grip scaling is in fact quite normal. This finding suggests that real-time vision and terminal tactile feedback are sufficient to preserve DF’s grip scaling slopes. In the second experiment, we examined an “unnatural” grasping task variant in which a tangible target (along with any proxy such as the surface of the table) is denied (i.e., no terminal tactile feedback). To do this, we used a mirror-apparatus to present virtual targets with and without a spatially coincident copy for the participants to grasp. 
We compared the grasp kinematics from trials with and without terminal tactile feedback to a real-time-pantomimed grasping task (one without tactile feedback) in which participants visualized a copy of the visible target as instructed in our laboratory in the past. Compared to natural grasps, removing tactile feedback increased RT, slowed the velocity of the reach, reduced in-flight grip aperture, increased the slopes relating grip aperture to target width, and reduced the final grip aperture (FGA). All of these effects were also observed in the real time-pantomime grasping task. These effects seem to be independent of those that arise from using the mirror in general as we also compared grasps directed towards virtual targets to those directed at real ones viewed directly through a pane of glass. These comparisons showed that the grasps directed at virtual targets increased grip aperture, slowed the velocity of the reach, and reduced the slopes relating grip aperture to the widths of the target. Thus, using the mirror has real consequences on grasp kinematics, reflecting the importance of task-relevant sources of online visual information for the programming and updating of natural prehensile movements. Taken together, these results provide compelling support for the view that removing terminal tactile feedback, even when the grasps are target-directed, induces a switch from real-time visual control towards one that depends more on visual perception and cognitive supervision. Providing terminal tactile feedback and real-time visual information can evidently keep the dorsal visuomotor system operating normally for prehensile acts. PMID:25999834
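Grip scaling in the work above is quantified as the slope relating grip aperture to target width. A minimal sketch of that computation by ordinary least squares; the trial numbers are hypothetical, not data from the study:

```python
import numpy as np

def grip_scaling_slope(widths_mm, apertures_mm):
    """Slope of the regression of grip aperture on target width.

    A slope near 1 indicates well-calibrated grip scaling; a flat slope
    (as reported for patient DF without real-time vision and tactile
    feedback) indicates loss of scaling.
    """
    slope, _intercept = np.polyfit(widths_mm, apertures_mm, 1)
    return slope

# hypothetical trials: three target widths, apertures opening wider than
# the target, as is typical of natural grasps
widths    = np.array([20.0, 40.0, 60.0, 20.0, 40.0, 60.0])
apertures = np.array([45.0, 63.0, 82.0, 47.0, 61.0, 84.0])
slope = grip_scaling_slope(widths, apertures)
```

The experiments above compare exactly such slopes across feedback conditions (steeper slopes without terminal tactile feedback).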
A haptic device for guide wire in interventional radiology procedures.
Moix, Thomas; Ilic, Dejan; Bleuler, Hannes; Zoethout, Jurjen
2006-01-01
Interventional Radiology (IR) is a minimally invasive procedure where thin tubular instruments, guide wires and catheters, are steered through the patient's vascular system under X-ray imaging. In order to perform these procedures, a radiologist has to be trained to master hand-eye coordination, instrument manipulation and procedure protocols. The existing simulation systems all have major drawbacks: the use of modified instruments, unrealistic insertion lengths, high inertia of the haptic device that creates a noticeably degraded dynamic behavior or excessive friction that is not properly compensated for. In this paper we propose a quality training environment dedicated to IR. The system is composed of a virtual reality (VR) simulation of the patient's anatomy linked to a robotic interface providing haptic force feedback. This paper focuses on the requirements, design and prototyping of a specific haptic interface for guide wires.
An Enhanced Soft Vibrotactile Actuator Based on ePVC Gel with Silicon Dioxide Nanoparticles.
Park, Won-Hyeong; Shin, Eun-Jae; Yun, Sungryul; Kim, Sang-Youn
2018-01-01
In this paper, we propose a soft vibrotactile actuator made by mixing silicon dioxide nanoparticles and plasticized PVC gel. The effect of the silicon dioxide nanoparticles in the plasticized PVC gel on haptic performance is investigated in terms of electric, dielectric, and mechanical properties. Furthermore, eight soft vibrotactile actuators are prepared as a function of the nanoparticle content. Experiments are conducted to examine the haptic performance of the eight prepared soft vibrotactile actuators and to find the best weight ratio of the plasticized PVC gel to the nanoparticles. The experiments show that the silicon dioxide nanoparticles improve the haptic performance of the plasticized PVC gel-based vibrotactile actuator, and that the proposed vibrotactile actuator can create a variety of haptic sensations in a wide frequency range.
Morphologic compatibility of intraocular lens haptics and the lens capsule.
Nagamoto, T; Eguchi, G
1997-10-01
Purpose: To evaluate the mechanical relationship between the intraocular lens (IOL) haptic and the capsular bag by quantitatively analyzing the fit of the haptic with the capsule equator and the capsular bag deformity induced by the implanted haptics. Setting: Division of Morphogenesis, Department of Developmental Biology, National Institute for Basic Biology, Okazaki, Japan. Methods: Following implantation of a poly(methyl methacrylate) (PMMA) ring in three excised human capsular bags with continuous curvilinear capsulorhexis (CCC), IOLs with different overall lengths or haptic designs were implanted in the bags and photographed. Two measures were taken from the photographs: the straight length of the area of contact between the haptic and the capsule equator, as a quantitative index of in-the-bag fixation, and the length from the external margin of the PMMA ring to the external margin of the loop along the maximal diameter of the capsular bag, as a quantitative index of the capsular deformity induced by an IOL. Results: An IOL with modified-C loops produced a better fit along the capsule equator and less deformity than an IOL with modified-J loops, and an IOL with an overall length of 12.0 or 12.5 mm produced a sufficiently good fit and less distortion of the capsular bag than an IOL with an overall length over 13.0 mm. Conclusion: An IOL with modified-C loops and an overall length of 12.0 or 12.5 mm is adequate for in-the-bag implantation following CCC.
Veras, Eduardo J; De Laurentis, Kathryn J; Dubey, Rajiv
2008-01-01
This paper describes the design and implementation of a control system that integrates visual and haptic information to give assistive force feedback through a haptic controller (Omni Phantom) to the user. A sensor-based assistive function and velocity-scaling program provides force feedback that helps the user complete trajectory-following exercises for rehabilitation purposes. The system also incorporates a PUMA robot for teleoperation; a camera and a laser range finder, controlled in real time by a PC, were integrated to help the user define the intended path to the selected target. The real-time force feedback from the remote robot to the haptic controller is made possible by effective multithreading strategies in the control system design and by novel sensor integration. The sensor-based assistive function concept applied to teleoperation, as well as shared control, enhances the motion range and manipulation capabilities of users executing rehabilitation exercises such as trajectory following along a sensor-defined path. The system is modularly designed to allow integration of different master devices and sensors. Furthermore, because this real-time system is versatile, the haptic component can be used separately from the telerobotic component; in other words, the haptic device can be used for rehabilitation in cases where assistance is needed to perform tasks (e.g., stroke rehabilitation) and also for teleoperation with force feedback and sensor assistance in either supervisory or automatic modes.
Development of a Robotic Colonoscopic Manipulation System, Using Haptic Feedback Algorithm.
Woo, Jaehong; Choi, Jae Hyuk; Seo, Jong Tae; Kim, Tae Il; Yi, Byung Ju
2017-01-01
Colonoscopy is one of the most effective diagnostic and therapeutic tools for colorectal diseases. We propose a master-slave robotic colonoscopy system that can be controlled from a remote site using a conventional colonoscope. The master and slave robots were developed around a conventional flexible colonoscope. The robotic colonoscopic procedure was performed on a colonoscope training model by one expert endoscopist and two inexperienced engineers. To provide haptic sensation, the insertion force and the rotating torque were measured and sent to the master robot. The slave robot was developed to hold the colonoscope and its knob and to perform the insertion, rotation, and two tilting motions of the colonoscope. The master robot was designed to teach motions to the slave robot. The measured force and torque were scaled down by one tenth to provide the operator with reflected force and torque at the haptic device. The haptic feedback system was successful and helped the operator feel the constrained force or torque in the colon. The insertion time using the robotic system decreased with repeated procedures. This work proposes a robotic approach to colonoscopy using a haptic feedback algorithm; this robotic device could effectively perform colonoscopy with reduced burden for the operator and comparable safety for patients at a remote site.
Experimental Study on the Perception Characteristics of Haptic Texture by Multidimensional Scaling.
Wu, Juan; Li, Na; Liu, Wei; Song, Guangming; Zhang, Jun
2015-01-01
Recent works regarding real texture perception demonstrate that physical factors such as stiffness and spatial period play a fundamental role in texture perception. This research used a multidimensional scaling (MDS) analysis to further characterize and quantify the effects of the simulation parameters on haptic texture rendering and perception. In a pilot experiment, 12 haptic texture samples were generated by using a 3-degrees-of-freedom (3-DOF) force-feedback device with varying spatial period, height, and stiffness coefficient parameter values. The subjects' perceptions of the virtual textures indicate that roughness, denseness, flatness and hardness are distinguishing characteristics of texture. In the main experiment, 19 participants rated the dissimilarities of the textures and estimated the magnitudes of their characteristics. The MDS method was used to recover the underlying perceptual space and reveal the significance of the space from the recorded data. The physical parameters and their combinations have significant effects on the perceptual characteristics. A regression model was used to quantitatively analyze the parameters and their effects on the perceptual characteristics. This paper illustrates that haptic texture perception based on force feedback can be modeled in two- or three-dimensional space and provides suggestions for improving perception-based haptic texture rendering.
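The recovery of a low-dimensional perceptual space from rated dissimilarities, as described above, can be sketched with classical (Torgerson) MDS. The study's actual analysis pipeline is not specified here, so this is only an illustrative numpy implementation, validated on a toy configuration whose distances are exactly recoverable:

```python
import numpy as np

def classical_mds(d, k=2):
    """Classical (Torgerson) MDS: embed items in k dimensions from a
    symmetric dissimilarity matrix d via double-centring followed by
    eigendecomposition of the implied inner-product matrix.
    """
    d = np.asarray(d, float)
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n        # centring matrix
    b = -0.5 * j @ (d ** 2) @ j                # inner-product matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:k]         # top-k eigenpairs
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# toy check: four corners of a unit square are recovered up to rotation,
# so the inter-point distances of the embedding match the input exactly
pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
x = classical_mds(d, k=2)
d_rec = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
```

With human dissimilarity ratings the fit is approximate rather than exact, and the embedding axes are then interpreted against attribute ratings such as roughness or hardness.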
A Review of Simulators with Haptic Devices for Medical Training.
Escobar-Castillejos, David; Noguez, Julieta; Neri, Luis; Magana, Alejandra; Benes, Bedrich
2016-04-01
Medical procedures often involve the use of the tactile sense to manipulate organs or tissues by using special tools. Doctors require extensive preparation in order to perform them successfully; for example, research shows that a minimum of 750 operations are needed to acquire sufficient experience to perform medical procedures correctly. Haptic devices have become an important training alternative and they have been considered to improve medical training because they let users interact with virtual environments by adding the sense of touch to the simulation. Previous articles in the field state that haptic devices enhance the learning of surgeons compared to current training environments used in medical schools (corpses, animals, or synthetic skin and organs). Consequently, virtual environments use haptic devices to improve realism. The goal of this paper is to provide a state of the art review of recent medical simulators that use haptic devices. In particular we focus on stitching, palpation, dental procedures, endoscopy, laparoscopy, and orthopaedics. These simulators are reviewed and compared from the viewpoint of used technology, the number of degrees of freedom, degrees of force feedback, perceived realism, immersion, and feedback provided to the user. In the conclusion, several observations per area and suggestions for future work are provided.
Jurado-Berbel, Patricia; Costa-Miserachs, David; Torras-Garcia, Meritxell; Coll-Andreu, Margalida; Portell-Cortés, Isabel
2010-02-11
The present work examined whether post-training systemic epinephrine (EPI) is able to modulate short-term (3 h) and long-term (24 h and 48 h) memory of standard object recognition, as well as long-term (24 h) memory of separate "what" (object identity) and "where" (object location) components of object recognition. Although object recognition training is associated with low arousal levels, all the animals received habituation to the training box in order to further reduce emotional arousal. Post-training EPI improved long-term (24 h and 48 h), but not short-term (3 h), memory in the standard object recognition task, as well as 24 h memory for both object identity and object location. These data indicate that post-training epinephrine: (1) facilitates long-term memory for standard object recognition; (2) exerts separate facilitatory effects on "what" (object identity) and "where" (object location) components of object recognition; and (3) is capable of improving memory for a low-arousal task even in highly habituated rats.
Experience moderates overlap between object and face recognition, suggesting a common ability
Gauthier, Isabel; McGugin, Rankin W.; Richler, Jennifer J.; Herzmann, Grit; Speegle, Magen; Van Gulick, Ana E.
2014-01-01
Some research finds that face recognition is largely independent from the recognition of other objects; a specialized and innate ability to recognize faces could therefore have little or nothing to do with our ability to recognize objects. We propose a new framework in which recognition performance for any category is the product of domain-general ability and category-specific experience. In Experiment 1, we show that the overlap between face and object recognition depends on experience with objects. In 256 subjects we measured face recognition, object recognition for eight categories, and self-reported experience with these categories. Experience predicted neither face recognition nor object recognition but moderated their relationship: Face recognition performance is increasingly similar to object recognition performance with increasing object experience. If a subject has a lot of experience with objects and is found to perform poorly, they also prove to have a low ability with faces. In a follow-up survey, we explored the dimensions of experience with objects that may have contributed to self-reported experience in Experiment 1. Different dimensions of experience appear to be more salient for different categories, with general self-reports of expertise reflecting judgments of verbal knowledge about a category more than judgments of visual performance. The complexity of experience and current limitations in its measurement support the importance of aggregating across multiple categories. Our findings imply that both face and object recognition are supported by a common, domain-general ability expressed through experience with a category and best measured when accounting for experience. PMID:24993021
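Statistically, "experience moderates the overlap" corresponds to an interaction term in a regression of face recognition on object recognition and experience. A hedged sketch on synthetic data; the coefficients, noise level, and sample size are invented for illustration and are not the study's estimates:

```python
import numpy as np

def moderation_fit(face, obj, exp_):
    """OLS fit of face = b0 + b1*obj + b2*exp + b3*(obj*exp).

    A reliable b3 is the moderation effect: the slope relating object
    recognition to face recognition grows with category experience.
    """
    X = np.column_stack([np.ones_like(obj), obj, exp_, obj * exp_])
    beta, *_ = np.linalg.lstsq(X, face, rcond=None)
    return beta

# synthetic data built to contain an interaction (purely illustrative):
# the object-face relationship strengthens as experience rises from 0 to 1
rng = np.random.default_rng(1)
obj = rng.normal(size=400)
exp_ = rng.uniform(0.0, 1.0, size=400)
face = 0.1 * obj + 0.8 * obj * exp_ + rng.normal(scale=0.1, size=400)
b = moderation_fit(face, obj, exp_)
```

A near-zero b1 with a substantial b3 reproduces the paper's qualitative pattern: little object-face overlap at low experience, strong overlap at high experience.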
Adaptive displays and controllers using alternative feedback.
Repperger, D W
2004-12-01
Investigations on the design of haptic (force reflecting joystick or force display) controllers were conducted by viewing the display of force information within the context of several different paradigms. First, using analogies from electrical and mechanical systems, certain schemes of the haptic interface were hypothesized which may improve the human-machine interaction with respect to various criteria. A discussion is given on how this interaction benefits the electrical and mechanical system. To generalize this concept to the design of human-machine interfaces, three studies with haptic mechanisms were then synthesized and analyzed.
Improved haptic interface for colonoscopy simulation.
Woo, Hyun Soo; Kim, Woo Seok; Ahn, Woojin; Lee, Doo Yong; Yi, Sun Young
2007-01-01
This paper presents an improved haptic interface for the KAIST-Ewha colonoscopy simulator II. The haptic interface enables the distal portion of the colonoscope to be freely bent while guaranteeing enough workspace and reflective forces for colonoscopy simulation. Its force-torque sensor measures the force and torque profiles applied by the user. Manipulation of the colonoscope tip is monitored by four deflection sensors and triggers computation to render accurate graphic images corresponding to the angle-knob rotation. Tact switches are attached to the valve-actuation buttons of the colonoscope to simulate air injection or suction, and the corresponding deformation of the colon.
Generalization between canonical and non-canonical views in object recognition
Ghose, Tandra; Liu, Zili
2013-01-01
Viewpoint generalization in object recognition is the process that allows recognition of a given 3D object from many different viewpoints despite variations in its 2D projections. We used the canonical view effects as a foundation to empirically test the validity of a major theory in object recognition, the view-approximation model (Poggio & Edelman, 1990). This model predicts that generalization should be better when an object is first seen from a non-canonical view and then a canonical view than when seen in the reversed order. We also manipulated object similarity to study the degree to which this view generalization was constrained by shape details and task instructions (object vs. image recognition). Old-new recognition performance for basic and subordinate level objects was measured in separate blocks. We found that for object recognition, view generalization between canonical and non-canonical views was comparable for basic level objects. For subordinate level objects, recognition performance was more accurate from non-canonical to canonical views than the other way around. When the task was changed from object recognition to image recognition, the pattern of the results reversed. Interestingly, participants responded “old” to “new” images of “old” objects with a substantially higher rate than to “new” objects, despite instructions to the contrary, thereby indicating involuntary view generalization. Our empirical findings are incompatible with the prediction of the view-approximation theory, and argue against the hypothesis that views are stored independently. PMID:23283692
Melman, T; de Winter, J C F; Abbink, D A
2017-01-01
An important issue in road traffic safety is that drivers show adverse behavioral adaptation (BA) to driver assistance systems. Haptic steering guidance is an upcoming assistance system which facilitates lane-keeping performance while keeping drivers in the loop, and which may be particularly prone to BA. Thus far, experiments on haptic steering guidance have measured driver performance while the vehicle speed was kept constant. The aim of the present driving simulator study was to examine whether haptic steering guidance causes BA in the form of speeding, and to evaluate two types of haptic steering guidance designed not to suffer from BA. Twenty-four participants drove a 1.8m wide car for 13.9km on a curved road, with cones demarcating a single 2.2m narrow lane. Participants completed four conditions in a counterbalanced design: no guidance (Manual), continuous haptic guidance (Cont), continuous guidance that linearly reduced feedback gains from full guidance at 125km/h towards manual control at 130km/h and above (ContRF), and haptic guidance provided only when the predicted lateral position was outside a lateral bandwidth (Band). Participants were familiarized with each condition prior to the experimental runs and were instructed to drive as they normally would while minimizing the number of cone hits. Compared to Manual, the Cont condition yielded a significantly higher driving speed (on average by 7km/h), whereas ContRF and Band did not. All three guidance conditions yielded better lane-keeping performance than Manual, whereas Cont and ContRF yielded lower self-reported workload than Manual. In conclusion, continuous steering guidance entices drivers to increase their speed, thereby diminishing its potential safety benefits. It is possible to prevent BA while retaining safety benefits by making a design adjustment either in lateral (Band) or in longitudinal (ContRF) direction.
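The ContRF speed-dependent gain fade described above can be captured in a few lines. The 125-130 km/h fade bounds come from the study's description; the function itself is a minimal sketch, not the authors' controller:

```python
def guidance_gain(speed_kmh, fade_start=125.0, fade_end=130.0):
    """Haptic feedback gain for the ContRF design.

    Full guidance (gain 1.0) up to fade_start, a linear ramp-down
    between fade_start and fade_end, and pure manual control (gain 0.0)
    at fade_end and above, so speeding removes the assistance rather
    than rewarding it.
    """
    if speed_kmh <= fade_start:
        return 1.0
    if speed_kmh >= fade_end:
        return 0.0
    return (fade_end - speed_kmh) / (fade_end - fade_start)
```

The multiplicative gain would scale the guidance torque on the steering wheel, which is how the design removes the incentive for behavioral adaptation.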
Feel, imagine and learn! - Haptic augmented simulation and embodied instruction in physics learning
NASA Astrophysics Data System (ADS)
Han, In Sook
The purpose of this study was to investigate the potential and effects of an embodied instructional model in abstract concept learning. This embodied instructional process included haptic augmented educational simulation as an instructional tool to provide perceptual experiences, as well as further instruction to activate those previous experiences with perceptual simulation. In order to verify the effectiveness of this instructional model, haptic augmented simulations with three different haptic levels (force and kinesthetic, kinesthetic, and non-haptic) and instructional materials (narrative and expository) were developed and their effectiveness tested. 220 fifth-grade students were recruited to participate in the study from three elementary schools located in lower-SES neighborhoods in the Bronx, New York. The study was conducted over three consecutive weeks in regular class periods. The data were analyzed using ANCOVA, ANOVA, and MANOVA. The results indicate that haptic augmented simulations, both the force-and-kinesthetic and the kinesthetic simulations, were more effective than the non-haptic simulation in providing perceptual experiences and helping elementary students to create multimodal representations of machines' movements. However, in most cases, force feedback was needed to construct a fully loaded multimodal representation that could be activated when instruction with fewer sensory modalities was given. In addition, the force-and-kinesthetic simulation was effective in providing cognitive grounding for comprehending new learning content based on the multimodal representation created with enhanced force feedback. Regarding instruction type, the narrative and expository instructions did not differ in activating previous perceptual experiences. These findings suggest that it is important to help students build a solid cognitive ground with a perceptual anchor.
Also, a sequential abstraction process would deepen students' understanding by providing an opportunity to practice mental simulation, removing the sensory modalities used one by one, and gradually reaching an abstract level of understanding at which students can imagine the machine's movements and working mechanisms through abstract language alone, without any perceptual supports.
Kim, K; Lee, S
2015-05-01
Diagnosis of skin conditions depends on the assessment of skin surface properties that are represented more by tactile properties, such as stiffness, roughness, and friction, than by visual information. For this reason, adding tactile feedback to existing vision-based diagnosis systems can help dermatologists diagnose skin diseases or disorders more accurately. The goal of our research was therefore to develop a tactile rendering system for skin examination by dynamic touch. Our development consists of two stages: converting a single image to a 3D haptic surface and rendering the generated haptic surface in real time. Conversion from single 2D images to 3D surfaces was implemented using human perception data collected in a psychophysical experiment that measured human visual and haptic sensitivity to 3D skin surface changes. For the second stage, we utilized real skin biomechanical properties found in prior studies. Our tactile rendering system is a standalone system that can be used with any single camera and haptic feedback device. We evaluated the performance of our system by conducting an identification experiment with three different skin images and five subjects. The participants had to identify one of the three skin surfaces using a haptic device (Falcon) only. No visual cue was provided during the experiment. The results indicate that our system performs well enough to render discernible tactile feedback for different skin surfaces. Our system uses only a single skin image and automatically generates a 3D haptic surface based on human haptic perception. Realistic skin interactions can be provided in real time for the purposes of skin diagnosis, simulation, or training. Our system can also be used for other applications such as virtual reality and cosmetics. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Buck, Ursula; Naether, Silvio; Braun, Marcel; Thali, Michael
2008-09-18
Non-invasive documentation methods such as surface scanning and radiological imaging are gaining in importance in the forensic field. These three-dimensional technologies provide digital 3D data, which are processed and handled on the computer. However, the sense of touch is lost with the virtual approach. A haptic device enables the use of the sense of touch to handle and feel digital 3D data. The multifunctional application of a haptic device for forensic approaches is evaluated and illustrated in three different cases: the non-invasive representation of bone fractures of the lower extremities caused by traffic accidents; the comparison of bone injuries with the presumed injury-inflicting instrument; and, in a gunshot case, the identification of the gun by the muzzle imprint and the reconstruction of the holding position of the gun. The 3D models of the bones are generated from Computed Tomography (CT) images. The 3D models of the exterior injuries, the injury-inflicting tools, and the bone injuries, where a higher resolution is necessary, are created by optical surface scanning. The haptic device is used in combination with the software FreeForm Modelling Plus to touch the surface of the 3D models, to feel minute injuries and the surfaces of tools, to reposition displaced bone parts, and to compare an injury-causing instrument with an injury. The repositioning of 3D models in a reconstruction is easier, faster, and more precise using the sense of touch and the user-friendly movement in 3D space. For representation purposes, the fracture lines of bones are coloured. This work demonstrates that the haptic device is a suitable and efficient tool in forensic science. The haptic device offers a new way of handling digital data in virtual 3D space.
Haptic interfaces using dielectric electroactive polymers
NASA Astrophysics Data System (ADS)
Ozsecen, Muzaffer Y.; Sivak, Mark; Mavroidis, Constantinos
2010-04-01
Quality, amplitude, and frequency of the interaction forces between a human and an actuator are essential traits for haptic applications. A variety of Electro-Active Polymer (EAP) based actuators can provide these characteristics simultaneously with quiet operation, low weight, high power density, and fast response. This paper demonstrates a rolled Dielectric Elastomer Actuator (DEA) being used as a telepresence device in a heart-beat measurement application. In this testing, heart signals were acquired from a remote location using a wireless heart rate sensor and sent through a network, and the DEA was used to haptically reproduce the heart beats at the medical expert's location. A series of preliminary human subject tests demonstrated that (a) DE-based haptic feedback can be used in heart-beat measurement tests and (b) through subjective testing, the stiffness and actuator properties of the EAP can be tuned for a variety of applications.
A three-axis force sensor for dual finger haptic interfaces.
Fontana, Marco; Marcheschi, Simone; Salsedo, Fabio; Bergamasco, Massimo
2012-10-10
In this work we present the design process, the characterization and testing of a novel three-axis mechanical force sensor. This sensor is optimized for use in closed-loop force control of haptic devices with three degrees of freedom. In particular the sensor has been conceived for integration with a dual finger haptic interface that aims at simulating forces that occur during grasping and surface exploration. The sensing spring structure has been purposely designed in order to match force and layout specifications for the application. In this paper the design of the sensor is presented, starting from an analytic model that describes the characteristic matrix of the sensor. A procedure for designing an optimal overload protection mechanism is proposed. In the last part of the paper the authors describe the experimental characterization and the integrated test on a haptic hand exoskeleton showing the improvements in the controller performances provided by the inclusion of the force sensor.
Blindness enhances tactile acuity and haptic 3-D shape discrimination.
Norman, J Farley; Bartholomew, Ashley N
2011-10-01
This study compared the sensory and perceptual abilities of the blind and sighted. The 32 participants were required to perform two tasks: tactile grating orientation discrimination (to determine tactile acuity) and haptic three-dimensional (3-D) shape discrimination. The results indicated that the blind outperformed their sighted counterparts (individually matched for both age and sex) on both tactile tasks. The improvements in tactile acuity that accompanied blindness occurred for all blind groups (congenital, early, and late). However, the improvements in haptic 3-D shape discrimination only occurred for the early-onset and late-onset blindness groups; the performance of the congenitally blind was no better than that of the sighted controls. The results of the present study demonstrate that blindness does lead to an enhancement of tactile abilities, but they also suggest that early visual experience may play a role in facilitating haptic 3-D shape discrimination.
Verticality perception during and after galvanic vestibular stimulation.
Volkening, Katharina; Bergmann, Jeannine; Keller, Ingo; Wuehr, Max; Müller, Friedemann; Jahn, Klaus
2014-10-03
The human brain constructs verticality perception by integrating vestibular, somatosensory, and visual information. Here we investigated whether galvanic vestibular stimulation (GVS) has an effect on verticality perception both during and after application, by assessing the subjective verticals (visual, haptic and postural) in healthy subjects at those times. During stimulation the subjective visual vertical and the subjective haptic vertical shifted towards the anode, whereas this shift was reversed towards the cathode in all modalities once stimulation was turned off. Overall, the effects were strongest for the haptic modality. Additional investigation of the time course of GVS-induced changes in the haptic vertical revealed that anodal shifts persisted for the entire 20-min stimulation interval in the majority of subjects. Aftereffects exhibited different types of decay, with a preponderance for an exponential decay. The existence of such reverse effects after stimulation could have implications for GVS-based therapy. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Pohl, Rüdiger F; Michalkiewicz, Martha; Erdfelder, Edgar; Hilbig, Benjamin E
2017-07-01
According to the recognition-heuristic theory, decision makers solve paired comparisons in which one object is recognized and the other not by recognition alone, inferring that recognized objects have higher criterion values than unrecognized ones. However, success, and thus usefulness, of this heuristic depends on the validity of recognition as a cue, and adaptive decision making, in turn, requires that decision makers are sensitive to it. To this end, decision makers could base their evaluation of the recognition validity either on the selected set of objects (the set's recognition validity) or on the underlying domain from which the objects were drawn (the domain's recognition validity). In two experiments, we manipulated the recognition validity both in the selected set of objects and between domains from which the sets were drawn. The results clearly show that use of the recognition heuristic depends on the domain's recognition validity, not on the set's recognition validity. In other words, participants treat all sets as roughly representative of the underlying domain and adjust their decision strategy adaptively (only) with respect to the more general environment rather than the specific items they are faced with.
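The decision rule described above can be sketched in a few lines. This is an illustrative sketch of the heuristic as stated in the abstract; the function name and the behavior when zero or both objects are recognized (returning None, i.e. the heuristic does not discriminate) are our assumptions.

```python
def recognition_heuristic(obj_a, obj_b, recognized):
    """Apply the recognition heuristic to a paired comparison.

    If exactly one of the two objects is recognized, infer that it has
    the higher criterion value and return it. If neither or both are
    recognized, the heuristic cannot discriminate, so return None and
    leave the decision to other cues or knowledge.
    """
    a_known = obj_a in recognized
    b_known = obj_b in recognized
    if a_known and not b_known:
        return obj_a
    if b_known and not a_known:
        return obj_b
    return None
```

The heuristic's accuracy then hinges entirely on the recognition validity of the environment the objects come from, which is exactly the quantity the two experiments manipulate.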
Evaluation of haptic interfaces for simulation of drill vibration in virtual temporal bone surgery.
Ghasemloonia, Ahmad; Baxandall, Shalese; Zareinia, Kourosh; Lui, Justin T; Dort, Joseph C; Sutherland, Garnette R; Chan, Sonny
2016-11-01
Surgical training is evolving from an observership model towards a new paradigm that includes virtual-reality (VR) simulation. In otolaryngology, temporal bone dissection has become intimately linked with VR simulation, as the complexity of the anatomy demands a high level of surgeon aptitude and confidence. While an adequate 3D visualization of the surgical site is available in current simulators, the force feedback rendered during haptic interaction does not convey vibrations. This lack of vibration rendering limits the simulation fidelity of a surgical drill such as that used in temporal bone dissection. In order to develop an immersive simulation platform capable of haptic force and vibration feedback, the efficacy of hand controllers for rendering vibration in different drilling circumstances needs to be investigated. In this study, the vibration rendering ability of four different haptic hand controllers was analyzed and compared to find the best commercial haptic hand controller. A test rig was developed to record vibrations encountered during temporal bone dissection, and software was written to render the recorded signals without adding hardware to the system. An accelerometer mounted on the end-effector of each device recorded the rendered vibration signals. The newly recorded vibration signal was compared with the input signal in both the time and frequency domains by coherence and cross-correlation analyses to quantitatively measure the fidelity of these devices in rendering vibrotactile drilling feedback under different drilling conditions. This method can be used to assess the vibration rendering ability of VR simulation systems and to select ideal haptic devices. Copyright © 2016 Elsevier Ltd. All rights reserved.
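The time-domain half of the fidelity comparison, correlating the rendered signal against the recorded input, can be sketched with a zero-lag normalized cross-correlation. This is a minimal stdlib sketch, not the paper's actual analysis pipeline (which also includes frequency-domain coherence and presumably lag-resolved correlation).

```python
import math

def normalized_cross_correlation(x, y):
    """Zero-lag normalized cross-correlation of two equal-length signals.

    Returns a value in [-1, 1]; values near 1 indicate that the rendered
    vibration closely tracks the recorded input signal.
    """
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    num = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mean_x) ** 2 for a in x)
                    * sum((b - mean_y) ** 2 for b in y))
    return num / den if den else 0.0
```

A device that faithfully reproduces the drilling vibration would score near 1; attenuation at frequencies the actuator cannot render pulls the score down.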
Advanced Maintenance Simulation by Means of Hand-Based Haptic Interfaces
NASA Astrophysics Data System (ADS)
Nappi, Michele; Paolino, Luca; Ricciardi, Stefano; Sebillo, Monica; Vitiello, Giuliana
Aerospace industry has been involved in virtual simulation for design and testing since the birth of virtual reality. Today this industry is showing a growing interest in the development of haptic-based maintenance training applications, which represent the most advanced way to simulate maintenance and repair tasks within a virtual environment by means of a visual-haptic approach. The goal is to allow the trainee to experience the service procedures not only as a workflow reproduced at a visual level but also in terms of the kinaesthetic feedback involved in the manipulation of tools and components. This study, conducted in collaboration with aerospace industry specialists, is aimed at the development of an immersive virtual environment in which mechanics and technicians can perform maintenance simulation or training tasks by directly manipulating 3D virtual models of aircraft parts while perceiving force feedback through the haptic interface. The proposed system is based on ViRstperson, a virtual reality engine under development at the Italian Center for Aerospace Research (CIRA) to support engineering and technical activities such as design-time maintenance procedure validation and maintenance training. This engine has been extended to support haptic-based interaction, enabling a more complete level of interaction, also in terms of impedance control, and thus fostering the development of haptic knowledge in the user. The user’s “sense of touch” within the immersive virtual environment is simulated through an Immersion CyberForce® hand-based force-feedback device. Preliminary testing of the proposed system seems encouraging.
High-fidelity bilateral teleoperation systems and the effect of multimodal haptics.
Tavakoli, Mahdi; Aziminejad, Arash; Patel, Rajni V; Moallem, Mehrdad
2007-12-01
In master-slave teleoperation applications that deal with a delicate and sensitive environment, it is important to provide haptic feedback of slave/environment interactions to the user's hand as it improves task performance and teleoperation transparency (fidelity), which is the extent of telepresence of the remote environment available to the user through the master-slave system. For haptic teleoperation, in addition to a haptics-capable master interface, often one or more force sensors are also used, which warrant new bilateral control architectures while increasing the cost and the complexity of the teleoperation system. In this paper, we investigate the added benefits of using force sensors that measure hand/master and slave/environment interactions and of utilizing local feedback loops on the teleoperation transparency. We compare the two-channel and the four-channel bilateral control systems in terms of stability and transparency, and study the stability and performance robustness of the four-channel method against nonidealities that arise during bilateral control implementation, which include master-slave communication latency and changes in the environment dynamics. The next issue addressed in the paper deals with the case where the master interface is not haptics capable, but the slave is equipped with a force sensor. In the context of robotics-assisted soft-tissue surgical applications, we explore through human factors experiments whether slave/environment force measurements can be of any help with regard to improving task performance. The last problem we study is whether slave/environment force information, with and without haptic capability in the master interface, can help improve outcomes under degraded visual conditions.
NASA Astrophysics Data System (ADS)
Pekedis, Mahmut; Mascerañas, David; Turan, Gursoy; Ercan, Emre; Farrar, Charles R.; Yildiz, Hasan
2015-08-01
For the last two decades, developments in damage detection algorithms have greatly increased the potential for autonomous decisions about structural health. However, we are still struggling to build autonomous tools that can match the ability of a human to detect and localize the quantity of damage in structures. Therefore, there is a growing interest in merging computational and cognitive concepts to improve solutions for structural health monitoring (SHM). The main objective of this research is to apply a human-machine cooperative approach to a tower structure to detect damage. The cooperative approach includes haptic tools to create an appropriate collaboration between SHM sensor networks, statistical compression techniques, and humans. Damage is simulated in the structure by releasing some of the bolt loads. Accelerometers are bonded to various locations on the tower members to acquire the dynamic response of the structure. The obtained accelerometer results are encoded in three different ways to represent them as haptic stimuli for the human subjects. The participants are then subjected to each of these stimuli to detect the bolt-loosening damage in the tower. Results obtained from the human-machine cooperation demonstrate that the human subjects were able to recognize the damage with an accuracy of 88 ± 20.21% and a response time of 5.87 ± 2.33 s. As a result, it is concluded that the currently developed human-machine cooperative SHM may provide a useful framework for interacting with abstract entities such as data from a sensor network.
Simple display system of mechanical properties of cells and their dispersion.
Shimizu, Yuji; Kihara, Takanori; Haghparast, Seyed Mohammad Ali; Yuba, Shunsuke; Miyake, Jun
2012-01-01
The mechanical properties of cells are unique indicators of their states and functions. However, it is difficult to recognize the degrees of mechanical properties, due to the small size of the cell and the broad distribution of the mechanical properties. Here, we developed a simple virtual reality system for presenting the mechanical properties of cells and their dispersion using a haptic device and a PC. This system simulates atomic force microscopy (AFM) nanoindentation experiments on floating cells in virtual environments. An operator can virtually position the AFM spherical probe over a round cell with the haptic handle on the PC monitor and feel the force interaction. The Young's modulus of mesenchymal stem cells and HEK293 cells in the floating state was measured by AFM. The distribution of the Young's modulus of these cells was broad, and it complied with a log-normal pattern. To represent the mechanical properties together with the cell variance, we used a log-normal distribution-dependent random number determined by the mode and variance values of the Young's modulus of these cells. The represented Young's modulus was determined for each touching event between the probe surface and the cell object, and the force generated by the haptic device was calculated using a Hertz model corresponding to the indentation depth and the fixed Young's modulus value. Using this system, we can feel the mechanical properties and their dispersion for each cell type in real time. This system will help us not only recognize the degrees of mechanical properties of diverse cells but also share them with others.
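The core of the rendering loop described above, a Hertz contact force at the current indentation depth using a Young's modulus sampled from a log-normal distribution, can be sketched as follows. This is a sketch under assumptions: the function names, default Poisson ratio of 0.5 (a common incompressibility assumption for cells), and the mode-based log-normal parameterization are ours, not the paper's.

```python
import math
import random

def hertz_force(depth_m, radius_m, youngs_pa, poisson=0.5):
    """Hertz contact force for a rigid spherical probe indenting an
    elastic half-space: F = (4/3) * E / (1 - v^2) * sqrt(R) * d^(3/2).
    Returns 0 when the probe is not in contact (depth <= 0).
    """
    if depth_m <= 0.0:
        return 0.0
    return (4.0 / 3.0) * youngs_pa / (1.0 - poisson ** 2) \
        * math.sqrt(radius_m) * depth_m ** 1.5

def sample_youngs_modulus(mode_pa, sigma):
    """Draw a per-touch cell stiffness from a log-normal distribution.

    For a log-normal, mode = exp(mu - sigma^2), so mu is recovered as
    log(mode) + sigma^2; each touch event then feels a different but
    realistically distributed stiffness.
    """
    mu = math.log(mode_pa) + sigma ** 2
    return random.lognormvariate(mu, sigma)
```

On each probe-cell contact event the system would draw one modulus and hold it fixed for that touch, so the operator feels the cell-to-cell dispersion across repeated touches.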
Eye movements during object recognition in visual agnosia.
Charles Leek, E; Patterson, Candy; Paul, Matthew A; Rafal, Robert; Cristino, Filipe
2012-07-01
This paper reports the first detailed study of eye movement patterns during single object recognition in visual agnosia. Eye movements were recorded in a patient with an integrative agnosic deficit during two recognition tasks: common object naming and novel object recognition memory. The patient showed normal directional biases in saccades and fixation dwell times in both tasks and was as likely as controls to fixate within the object's bounding contour regardless of recognition accuracy. In contrast, following initial saccades of similar amplitude to controls, the patient showed a bias for short saccades. In object naming, but not in recognition memory, the similarity of the spatial distributions of patient and control fixations was modulated by recognition accuracy. The study provides new evidence about how eye movements can be used to elucidate the functional impairments underlying object recognition deficits. We argue that the results reflect a breakdown in normal functional processes involved in the integration of shape information across object structure during the visual perception of shape. Copyright © 2012 Elsevier Ltd. All rights reserved.
The role of color information on object recognition: a review and meta-analysis.
Bramão, Inês; Reis, Alexandra; Petersson, Karl Magnus; Faísca, Luís
2011-09-01
In this study, we systematically review the scientific literature on the effect of color on object recognition. Thirty-five independent experiments, comprising 1535 participants, were included in a meta-analysis. We found a moderate effect of color on object recognition (d=0.28). Specific effects of moderator variables were analyzed and we found that color diagnosticity is the factor with the greatest moderator effect on the influence of color in object recognition; studies using color diagnostic objects showed a significant color effect (d=0.43), whereas a marginal color effect was found in studies that used non-color diagnostic objects (d=0.18). The present study did not permit the drawing of specific conclusions about the moderator effect of the object recognition task; while the meta-analytic review showed that color information improves object recognition mainly in studies using naming tasks (d=0.36), the literature review revealed a large body of evidence showing positive effects of color information on object recognition in studies using a large variety of visual recognition tasks. We also found that color is important for the ability to recognize artifacts and natural objects, to recognize objects presented as types (line-drawings) or as tokens (photographs), and to recognize objects that are presented without surface details, such as texture or shadow. Taken together, the results of the meta-analysis strongly support the contention that color plays a role in object recognition. This suggests that the role of color should be taken into account in models of visual object recognition. Copyright © 2011 Elsevier B.V. All rights reserved.
The role of perceptual load in object recognition.
Lavie, Nilli; Lin, Zhicheng; Zokaei, Nahid; Thoma, Volker
2009-10-01
Predictions from perceptual load theory (Lavie, 1995, 2005) regarding object recognition across the same or different viewpoints were tested. Results showed that high perceptual load reduces distracter recognition levels despite always presenting distracter objects from the same view. They also showed that the levels of distracter recognition were unaffected by a change in the distracter object view under conditions of low perceptual load. These results were found both with repetition priming measures of distracter recognition and with performance on a surprise recognition memory test. The results support load theory proposals that distracter recognition critically depends on the level of perceptual load. The implications for the role of attention in object recognition theories are discussed. PsycINFO Database Record (c) 2009 APA, all rights reserved.
Analysis and Recognition of Curve Type as The Basis of Object Recognition in Image
NASA Astrophysics Data System (ADS)
Nugraha, Nurma; Madenda, Sarifuddin; Indarti, Dina; Dewi Agushinta, R.; Ernastuti
2016-06-01
An object in an image, when analyzed further, shows characteristics that distinguish it from other objects in the image. Characteristics used for object recognition in an image can include color, shape, pattern, texture, and spatial information that represent objects in the digital image. A method has recently been developed for image feature extraction that characterizes objects through curve analysis (simple curves) and a chain-code feature search. This study develops an algorithm for the analysis and recognition of curve types as the basis for object recognition in images, proposing the addition of complex-curve characteristics with a maximum of four branches to be used in the object recognition process. A complex curve is defined as a curve that has a point of intersection. Using several edge-detected images, the algorithm was able to perform the analysis and recognition of complex curve shapes well.
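The chain-code representation the abstract builds on can be sketched as a standard 8-direction Freeman chain code over an ordered contour. This is an illustrative sketch, not the paper's algorithm; the coordinate convention (x right, y up, code 0 = east, counterclockwise) and the function name are our choices.

```python
# Freeman 8-direction codes: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE
_CODES = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
          (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(contour):
    """Freeman chain code of a closed contour.

    The contour is an ordered list of (x, y) pixel coordinates in which
    consecutive points (wrapping around at the end) are 8-neighbors;
    each step is encoded as one of the eight direction codes above.
    """
    codes = []
    for (x0, y0), (x1, y1) in zip(contour, contour[1:] + contour[:1]):
        codes.append(_CODES[(x1 - x0, y1 - y0)])
    return codes
```

Branch points of a complex curve would show up as contour pixels reached from more than two neighbors, which is what distinguishes the intersecting curves the study adds from simple curves.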
It Takes Two–Skilled Recognition of Objects Engages Lateral Areas in Both Hemispheres
Bilalić, Merim; Kiesel, Andrea; Pohl, Carsten; Erb, Michael; Grodd, Wolfgang
2011-01-01
Our object recognition abilities, a direct product of our experience with objects, are fine-tuned to perfection. Left temporal and lateral areas along the dorsal, action related stream, as well as left infero-temporal areas along the ventral, object related stream are engaged in object recognition. Here we show that expertise modulates the activity of dorsal areas in the recognition of man-made objects with clearly specified functions. Expert chess players were faster than chess novices in identifying chess objects and their functional relations. Experts' advantage was domain-specific as there were no differences between groups in a control task featuring geometrical shapes. The pattern of eye movements supported the notion that experts' extensive knowledge about domain objects and their functions enabled superior recognition even when experts were not directly fixating the objects of interest. Functional magnetic resonance imaging (fMRI) related exclusively the areas along the dorsal stream to chess specific object recognition. Besides the commonly involved left temporal and parietal lateral brain areas, we found that only in experts homologous areas on the right hemisphere were also engaged in chess specific object recognition. Based on these results, we discuss whether skilled object recognition does not only involve a more efficient version of the processes found in non-skilled recognition, but also qualitatively different cognitive processes which engage additional brain areas. PMID:21283683
Holdstock, J S; Mayes, A R; Roberts, N; Cezayirli, E; Isaac, C L; O'Reilly, R C; Norman, K A
2002-01-01
The claim that recognition memory is spared relative to recall after focal hippocampal damage has been disputed in the literature. We examined this claim by investigating object and object-location recall and recognition memory in a patient, YR, who has adult-onset selective hippocampal damage. Our aim was to identify the conditions under which recognition was spared relative to recall in this patient. She showed unimpaired forced-choice object recognition but clearly impaired recall, even when her control subjects found the object recognition task to be numerically harder than the object recall task. However, on two other recognition tests, YR's performance was not relatively spared. First, she was clearly impaired at an equivalently difficult yes/no object recognition task, but only when targets and foils were very similar. Second, YR was clearly impaired at forced-choice recognition of object-location associations. This impairment was also unrelated to difficulty because this task was no more difficult than the forced-choice object recognition task for control subjects. The clear impairment of yes/no, but not of forced-choice, object recognition after focal hippocampal damage, when targets and foils are very similar, is predicted by the neural network-based Complementary Learning Systems model of recognition. This model postulates that recognition is mediated by hippocampally dependent recollection and cortically dependent familiarity; thus hippocampal damage should not impair item familiarity. The model postulates that familiarity is ineffective when very similar targets and foils are shown one at a time and subjects have to identify which items are old (yes/no recognition). In contrast, familiarity is effective in discriminating which of similar targets and foils, seen together, is old (forced-choice recognition). Independent evidence from the remember/know procedure also indicates that YR's familiarity is normal. 
The Complementary Learning Systems model can also accommodate the clear impairment of forced-choice object-location recognition memory if it incorporates the view that the most complete convergence of spatial and object information, represented in different cortical regions, occurs in the hippocampus.
Higher-Order Neural Networks Applied to 2D and 3D Object Recognition
NASA Technical Reports Server (NTRS)
Spirkovska, Lilly; Reid, Max B.
1994-01-01
A Higher-Order Neural Network (HONN) can be designed to be invariant to geometric transformations such as scale, translation, and in-plane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Thus, for 2D object recognition, the network needs to be trained on just one view of each object class, not numerous scaled, translated, and rotated views. Because the 2D object recognition task is a component of the 3D object recognition task, built-in 2D invariance also decreases the size of the training set required for 3D object recognition. We present results for 2D object recognition both in simulation and within a robotic vision experiment and for 3D object recognition in simulation. We also compare our method to other approaches and show that HONNs have distinct advantages for position, scale, and rotation-invariant object recognition. The major drawback of HONNs is that the size of the input field is limited due to the memory required for the large number of interconnections in a fully connected network. We present partial connectivity strategies and a coarse-coding technique for overcoming this limitation and increasing the input field to that required by practical object recognition problems.
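The built-in invariance idea can be illustrated with a toy sketch (an assumption-laden illustration, not the authors' HONN implementation): in a second-order network, sharing one weight across all pixel pairs with the same displacement makes the response translation-invariant by construction, because translating a pattern leaves its pairwise displacements unchanged.

```python
import numpy as np

def second_order_features(img):
    """Toy second-order (pairwise) features that are translation-invariant:
    one shared "weight bin" per pixel-pair displacement, so the response
    depends only on relative geometry, not absolute position."""
    ys, xs = np.nonzero(img)
    pts = list(zip(ys, xs))
    feats = {}
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            d = (pts[j][0] - pts[i][0], pts[j][1] - pts[i][1])
            feats[d] = feats.get(d, 0) + 1
    return feats

# An L-shaped pattern and a translated copy produce identical features.
img = np.zeros((8, 8), int)
img[1, 1] = img[2, 1] = img[2, 2] = 1
shifted = np.roll(np.roll(img, 3, axis=0), 2, axis=1)
assert second_order_features(img) == second_order_features(shifted)
```

Sharing weights over pairwise distances instead of displacements (or over pixel triples and their angles, as in third-order HONNs) yields scale and in-plane rotation invariance in the same way.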
Development of a Robotic Colonoscopic Manipulation System, Using Haptic Feedback Algorithm
Woo, Jaehong; Choi, Jae Hyuk; Seo, Jong Tae
2017-01-01
Purpose Colonoscopy is one of the most effective diagnostic and therapeutic tools for colorectal diseases. We aim to propose a master-slave robotic colonoscopy system that is controllable from a remote site using a conventional colonoscope. Materials and Methods The master and slave robots were developed to work with a conventional flexible colonoscope. The robotic colonoscopic procedure was performed on a colonoscope training model by one expert endoscopist and two inexperienced engineers. To provide haptic sensation, the insertion force and the rotating torque were measured and sent to the master robot. Results A slave robot was developed to hold the colonoscope and its knob and to perform the insertion, rotation, and two tilting motions of the colonoscope. A master robot was designed to teach motions to the slave robot. The measured force and torque were scaled down by one tenth to provide the operator with reflected force and torque at the haptic device. The haptic sensation and feedback system was successful and helped the operator feel the constrained force or torque in the colon. The insertion time using the robotic system decreased with repeated procedures. Conclusion This work proposed a robotic approach to colonoscopy using a haptic feedback algorithm; this robotic device could effectively perform colonoscopy with reduced burden and comparable safety for patients from a remote site. PMID:27873506
Improved PMMA single-piece haptic materials
NASA Astrophysics Data System (ADS)
Healy, Donald D.; Wilcox, Christopher D.
1991-12-01
During the past fifteen years, intraocular lens (IOL) haptic preferences have shifted from a variety of multi-piece haptic materials to single-piece PMMA. This is due in part to the research of David Apple, M.D., and others who have suggested that all-PMMA implants result in reduced cell flare and better centration. Consequently, single-piece IOLs now represent 45% of all IOL implants. However, many surgeons regard single-piece IOL designs as nonflexible and more difficult to implant than multi-piece IOLs. These handling characteristics have slowed the shift from multi-piece to single-piece IOLs, and as a result, single-piece lenses experience relatively high breakage rates from handling before and during insertion. To improve these characteristics, manufacturers have refined single-piece IOL haptic designs by pushing the limits of PMMA's physical properties. Furthermore, IOL manufacturers have begun to alter the material itself to change its physical properties. In particular, two new PMMA materials have emerged in the marketplace: Flexeon™, a crosslinked polymer, and CM™, a material with molecularly realigned PMMA. This paper examines three specific measurements of a haptic's strength and flexibility: tensile strength, plastic memory, and material plasticity/elasticity. The paper compares Flexeon™ and CM™ lenses with noncrosslinked one-piece lenses and standard polypropylene multi-piece lenses.
Discriminating Tissue Stiffness with a Haptic Catheter: Feeling the Inside of the Beating Heart.
Kesner, Samuel B; Howe, Robert D
2011-01-01
Catheter devices allow physicians to access the inside of the human body easily and painlessly through natural orifices and vessels. Although catheters allow for the delivery of fluids and drugs, the deployment of devices, and the acquisition of measurements, they do not allow clinicians to assess the physical properties of tissue inside the body, due to tissue motion and the transmission limitations of the catheter devices, including compliance, friction, and backlash. The goal of this research is to increase the tactile information available to physicians during catheter procedures by providing haptic feedback during palpation procedures. To accomplish this goal, we have developed the first motion-compensated actuated catheter system that enables haptic perception of fast-moving tissue structures. The actuated catheter is instrumented with a distal tip force sensor and a force feedback interface that allows users to adjust the position of the catheter while experiencing the forces on the catheter tip. The efficacy of this device and interface is evaluated through a psychophysical study comparing how accurately users can differentiate various materials attached to a cardiac motion simulator using the haptic device and a conventional manual catheter. The results demonstrate that haptics improves a user's ability to differentiate material properties and decreases the total number of errors by 50% over the manual catheter system.
Palpation imaging using a haptic system for virtual reality applications in medicine.
Khaled, W; Reichling, S; Bruhns, O T; Boese, H; Baumann, M; Monkman, G; Egersdoerfer, S; Klein, D; Tunayar, A; Freimuth, H; Lorenz, A; Pessavento, A; Ermert, H
2004-01-01
In the field of medical diagnosis, there is a strong need to determine mechanical properties of biological tissue, which are of histological and pathological relevance. Malignant tumors are significantly stiffer than surrounding healthy tissue. One of the established diagnostic procedures is the palpation of body organs and tissue. Palpation is used to measure swelling, detect bone fracture, find and measure pulse, or to locate changes in the pathological state of tissue and organs. Current medical practice routinely uses sophisticated diagnostic tests through magnetic resonance imaging (MRI), computed tomography (CT) and ultrasound (US) imaging. However, these cannot provide a direct measure of tissue elasticity. Last year we presented the concept of the first haptic sensor actuator system to visualize and reconstruct mechanical properties of tissue using ultrasonic elastography and a haptic display with electrorheological fluids. We developed a real-time strain imaging system for tumor diagnosis. It allows biopsies to be performed simultaneously with conventional ultrasound B-mode and strain imaging investigations. We deduce the relative mechanical properties by using finite element simulations and numerical solution models solving the inverse problem. Various modifications of the haptic sensor actuator system have been investigated. This haptic system has the potential to induce substantial forces in real time, using a compact lightweight mechanism which can be applied to numerous areas including intraoperative navigation, telemedicine, teaching and telecommunication.
A pervasive visual-haptic framework for virtual delivery training.
Abate, Andrea F; Acampora, Giovanni; Loia, Vincenzo; Ricciardi, Stefano; Vasilakos, Athanasios V
2010-03-01
Thanks to the advances of virtual reality (VR) technologies and haptic systems, virtual simulators are increasingly becoming a viable alternative to physical simulators in medicine and surgery, though many challenges still remain. In this study, a pervasive visual-haptic framework aimed at training obstetricians and midwives in vaginal delivery is described. The haptic feedback is provided by means of two hand-based haptic devices able to reproduce force feedback on fingers and arms, thus enabling much more realistic manipulation with respect to stylus-based solutions. The interactive simulation is not solely driven by an approximated model of complex forces and physical constraints but, instead, is approached through formal modeling of the whole labor and of the assistance/intervention procedures, performed by means of a timed automata network and applied to a parametrical 3-D model of the anatomy able to mimic a wide range of configurations. This novel methodology is able to represent not only the sequence of the main events associated with either a spontaneous or an operative childbirth process, but also to help in validating the manual intervention, as the actions performed by the user during the simulation are evaluated according to established medical guidelines. A discussion of the first results, as well as of the challenges still unaddressed, is included.
Somatosensory attunement to the rigid body laws.
Shockley, K; Grocki, M; Carello, C; Turvey, M T
2001-01-01
In the most general case, haptic perception of an object's heaviness is most likely the perception of the object's resistance to movement, determined jointly by the object's mass and mass distribution. In two experiments with occluded objects wielded freely in three dimensions, we showed additive effects on perceived heaviness of mass and the inertia tensor. Our manipulations of the inertia tensor were directed specifically at the volume and symmetry of the inertia ellipsoid, quantities that can be understood as important to controlling the level and patterning of muscular forces, respectively. Ellipsoid volume and symmetry were found to have separate effects on perceptual reports of heaviness that were invariant over different tensors. Independent sensitivities to translational inertia and particular characterizations of rotational inertia suggest specialized somatosensory attunement to the rigid body laws.
Whisking mechanics and active sensing.
Bush, Nicholas E; Solla, Sara A; Hartmann, Mitra Jz
2016-10-01
We describe recent advances in quantifying the three-dimensional (3D) geometry and mechanics of whisking. Careful delineation of relevant 3D reference frames reveals important geometric and mechanical distinctions between the localization problem ('where' is an object) and the feature extraction problem ('what' is an object). Head-centered and resting-whisker reference frames lend themselves to quantifying temporal and kinematic cues used for object localization. The whisking-centered reference frame lends itself to quantifying the contact mechanics likely associated with feature extraction. We offer the 'windowed sampling' hypothesis for active sensing: that rats can estimate an object's spatial features by integrating mechanical information across whiskers during brief (25-60ms) windows of 'haptic enclosure' with the whiskers, a motion that resembles a hand grasp. Copyright © 2016. Published by Elsevier Ltd.
Method and System for Object Recognition Search
NASA Technical Reports Server (NTRS)
Duong, Tuan A. (Inventor); Duong, Vu A. (Inventor); Stubberud, Allen R. (Inventor)
2012-01-01
A method for object recognition using shape and color features of the object to be recognized. An adaptive architecture is used to recognize and adapt the shape and color features for moving objects to enable object recognition.
Peterson, M A; de Gelder, B; Rapcsak, S Z; Gerhardstein, P C; Bachoud-Lévi, A
2000-01-01
In three experiments we investigated whether conscious object recognition is necessary or sufficient for effects of object memories on figure assignment. In experiment 1, we examined a brain-damaged participant, AD, whose conscious object recognition is severely impaired. AD's responses about figure assignment do reveal effects from memories of object structure, indicating that conscious object recognition is not necessary for these effects, and identifying the figure-ground test employed here as a new implicit test of access to memories of object structure. In experiments 2 and 3, we tested a second brain-damaged participant, WG, for whom conscious object recognition was relatively spared. Nevertheless, effects from memories of object structure on figure assignment were not evident in WG's responses about figure assignment in experiment 2, indicating that conscious object recognition is not sufficient for effects of object memories on figure assignment. WG's performance sheds light on AD's performance, and has implications for the theoretical understanding of object memory effects on figure assignment.
Fast neuromimetic object recognition using FPGA outperforms GPU implementations.
Orchard, Garrick; Martin, Jacob G; Vogelstein, R Jacob; Etienne-Cummings, Ralph
2013-08-01
Recognition of objects in still images has traditionally been regarded as a difficult computational problem. Although modern automated methods for visual object recognition have achieved steadily increasing recognition accuracy, even the most advanced computational vision approaches are unable to obtain performance equal to that of humans. This has led to the creation of many biologically inspired models of visual object recognition, among them the hierarchical model and X (HMAX) model. HMAX is traditionally known to achieve high accuracy in visual object recognition tasks at the expense of significant computational complexity. Increasing complexity, in turn, increases computation time, reducing the number of images that can be processed per unit time. In this paper we describe how the computationally intensive and biologically inspired HMAX model for visual object recognition can be modified for implementation on a commercial field-programmable gate array (FPGA), specifically the Xilinx Virtex 6 ML605 evaluation board with an XC6VLX240T FPGA. We show that with minor modifications to the traditional HMAX model we can perform recognition on images of size 128 × 128 pixels at a rate of 190 images per second with a less than 1% loss in recognition accuracy in both binary and multiclass visual object recognition tasks.
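The S/C alternation that HMAX is built on (template matching followed by local max pooling) can be sketched in a few lines of NumPy. This is a toy illustration of the pooling principle only, not the FPGA pipeline described above; HMAX proper uses Gabor filters and multiple scales at the S1 stage.

```python
import numpy as np

def s1_layer(img, templates):
    """S-type layer: match each small template at every image position
    (plain correlation here; HMAX proper uses Gabor filters)."""
    h, w = img.shape
    th, tw = templates[0].shape
    out = np.zeros((len(templates), h - th + 1, w - tw + 1))
    for k, t in enumerate(templates):
        for i in range(h - th + 1):
            for j in range(w - tw + 1):
                out[k, i, j] = np.sum(img[i:i + th, j:j + tw] * t)
    return out

def c1_layer(s1, pool=4):
    """C-type layer: max over local neighborhoods. Taking the max makes
    the response tolerant to small shifts of the underlying feature."""
    k, h, w = s1.shape
    return np.array([[[s1[f, i:i + pool, j:j + pool].max()
                       for j in range(0, w - pool + 1, pool)]
                      for i in range(0, h - pool + 1, pool)]
                     for f in range(k)])

# A vertical bar and a copy shifted by one pixel give identical C1 responses.
img = np.zeros((12, 12))
img[2:6, 3] = 1
shifted = np.zeros((12, 12))
shifted[2:6, 4] = 1
bar = np.zeros((3, 3))
bar[:, 1] = 1
assert np.array_equal(c1_layer(s1_layer(img, [bar])),
                      c1_layer(s1_layer(shifted, [bar])))
```

Stacking further S/C pairs extends this tolerance from small shifts to larger position and scale changes, which is what makes the full model computationally heavy and motivates hardware acceleration.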
Role of multisensory stimuli in vigilance enhancement- a single trial event related potential study.
Abbasi, Nida Itrat; Bodala, Indu Prasad; Bezerianos, Anastasios; Yu Sun; Al-Nashash, Hasan; Thakor, Nitish V
2017-07-01
Development of interventions to prevent vigilance decrement has important applications in sensitive areas like transportation and defence. The objective of this work is to use multisensory (visual and haptic) stimuli for cognitive enhancement during mundane tasks. Two different epoch intervals, representing sensory perception and motor response, were analysed using minimum variance distortionless response (MVDR) based single-trial ERP estimation to understand the performance dependency on both factors. Bereitschaftspotential (BP) latency L3 (r=0.6 in phase 1 (visual) and r=0.71 in phase 2 (visual and haptic)) was significantly correlated with reaction time, as compared to sensory ERP latency L2 (r=0.1 in both phase 1 and phase 2). This implies that low performance in monotonous tasks is predominantly dependent on the prolonged neural interaction with the muscles to initiate movement. Further, a negative relationship was found between the occurrence of epochs in which multisensory cues were provided and the ERP latencies related to sensory perception and the Bereitschaftspotential (BP). This means that vigilance decrement is reduced with the help of multisensory stimulus presentation in prolonged monotonous tasks.
Hall, Kara L; Phillips, Chandler A; Reynolds, David B; Mohler, Stanley R; Rogers, Dana B; Neidhard-Doll, Amy T
2015-01-01
Pneumatic muscle actuators (PMAs) have a high power to weight ratio and possess unique characteristics which make them ideal actuators for applications involving human interaction. PMAs are difficult to control due to nonlinear dynamics, presenting challenges in system implementation. Despite these challenges, PMAs have great potential as a source of resistance for strength training and rehabilitation. The objective of this work was to control a PMA for use in isokinetic exercise, potentially benefiting anyone in need of optimal strength training through a joint's range of motion. The controller, based on an inverse three-element phenomenological model and adaptive nonlinear control, allows the system to operate as a type of haptic device. A human quadriceps dynamic simulator was developed (as described in Part I of this work) so that control effectiveness and accommodation could be tested prior to human implementation. Tracking error results indicate that the control system is effective at producing PMA displacement and resistance necessary for a scaled, simulated neuromuscular actuator to maintain low-velocity isokinetic movement during simulated concentric and eccentric knee extension.
Ribeiro de Oliveira, Marcelo Magaldi; Nicolato, Arthur; Santos, Marcilea; Godinho, Joao Victor; Brito, Rafael; Alvarenga, Alexandre; Martins, Ana Luiza Valle; Prosdocimi, André; Trivelato, Felipe Padovani; Sabbagh, Abdulrahman J; Reis, Augusto Barbosa; Maestro, Rolando Del
2016-05-01
OBJECT The development of neurointerventional treatments of central nervous system disorders has resulted in the need for adequate training environments for novice interventionalists. Virtual simulators offer anatomical definition but lack adequate tactile feedback. Animal models, which provide more lifelike training, require an appropriate infrastructure base. The authors describe a training model for neurointerventional procedures using the human placenta (HP), which affords haptic training with significantly fewer resource requirements, and discuss its validation. METHODS Twelve HPs were prepared for simulated endovascular procedures. Training exercises performed by interventional neuroradiologists and novice fellows were placental angiography, stent placement, aneurysm coiling, and intravascular liquid embolic agent injection. RESULTS The endovascular training exercises proposed can be easily reproduced in the HP. Face, content, and construct validity were assessed by 6 neurointerventional radiologists and 6 novice fellows in interventional radiology. CONCLUSIONS The use of HP provides an inexpensive training model for the training of neurointerventionalists. Preliminary validation results show that this simulation model has face and content validity and has demonstrated construct validity for the interventions assessed in this study.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-15
... INTERNATIONAL TRADE COMMISSION [DN 2875] Certain Mobile Electronic Devices Incorporating Haptics.... International Trade Commission. ACTION: Notice. SUMMARY: Notice is hereby given that the U.S. International Trade Commission has received an amended complaint entitled Certain Mobile Electronic Devices...
Training Surgical Residents With a Haptic Robotic Central Venous Catheterization Simulator.
Pepley, David F; Gordon, Adam B; Yovanoff, Mary A; Mirkin, Katelin A; Miller, Scarlett R; Han, David C; Moore, Jason Z
Ultrasound guided central venous catheterization (CVC) is a common surgical procedure with complication rates ranging from 5 to 21 percent. Training is typically performed using manikins that do not simulate anatomical variations such as obesity and abnormal vessel positioning. The goal of this study was to develop and validate the effectiveness of a new virtual reality and force haptic based simulation platform for CVC of the right internal jugular vein. A CVC simulation platform was developed using a haptic robotic arm, 3D position tracker, and computer visualization. The haptic robotic arm simulated needle insertion force that was based on cadaver experiments. The 3D position tracker was used as a mock ultrasound device with realistic visualization on a computer screen. Upon completion of a practice simulation, performance feedback is given to the user through a graphical user interface including scoring factors based on good CVC practice. The effectiveness of the system was evaluated by training 13 first year surgical residents using the virtual reality haptic based training system over a 3 month period. The participants' performance increased from 52% to 96% on the baseline training scenario, approaching the average score of an expert surgeon: 98%. This also resulted in improvement in positive CVC practices including a 61% decrease between final needle tip position and vein center, a decrease in mean insertion attempts from 1.92 to 1.23, and a 12% increase in time spent aspirating the syringe throughout the procedure. A virtual reality haptic robotic simulator for CVC was successfully developed. Surgical residents training on the simulation improved to near expert levels after three robotic training sessions. This suggests that this system could act as an effective training device for CVC. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Effects of visual information regarding allocentric processing in haptic parallelity matching.
Van Mier, Hanneke I
2013-10-01
Research has revealed that haptic perception of parallelity deviates from physical reality. Large and systematic deviations have been found in haptic parallelity matching most likely due to the influence of the hand-centered egocentric reference frame. Providing information that increases the influence of allocentric processing has been shown to improve performance on haptic matching. In this study allocentric processing was stimulated by providing informative vision in haptic matching tasks that were performed using hand- and arm-centered reference frames. Twenty blindfolded participants (ten men, ten women) explored the orientation of a reference bar with the non-dominant hand and subsequently matched (task HP) or mirrored (task HM) its orientation on a test bar with the dominant hand. Visual information was provided by means of informative vision with participants having full view of the test bar, while the reference bar was blocked from their view (task VHP). To decrease the egocentric bias of the hands, participants also performed a visual haptic parallelity drawing task (task VHPD) using an arm-centered reference frame, by drawing the orientation of the reference bar. In all tasks, the distance between and orientation of the bars were manipulated. A significant effect of task was found; performance improved from task HP, to VHP to VHPD, and HM. Significant effects of distance were found in the first three tasks, whereas orientation and gender effects were only significant in tasks HP and VHP. The results showed that stimulating allocentric processing by means of informative vision and reducing the egocentric bias by using an arm-centered reference frame led to most accurate performance on parallelity matching. © 2013 Elsevier B.V. All rights reserved.
Cuppone, Anna Vera; Squeri, Valentina; Semprini, Marianna; Masia, Lorenzo; Konczak, Jürgen
2016-01-01
This study examined the trainability of the proprioceptive sense and explored the relationship between proprioception and motor learning. With vision blocked, human learners had to perform goal-directed wrist movements relying solely on proprioceptive/haptic cues to reach several haptically specified targets. One group received additional somatosensory movement error feedback in the form of vibro-tactile cues applied to the skin of the forearm. We used a haptic robotic device for the wrist and implemented a 3-day training regimen that required learners to make spatially precise goal-directed wrist reaching movements without vision. We assessed whether training improved the acuity of the wrist joint position sense. In addition, we checked whether sensory learning generalized to the motor domain and improved the spatial precision of wrist tracking movements that were not trained. The main findings of the study are: First, proprioceptive acuity of the wrist joint position sense improved after training for the group that received the combined proprioceptive/haptic and vibro-tactile feedback (VTF). Second, training had no impact on the spatial accuracy of the untrained tracking task. However, learners who had received VTF significantly reduced their reliance on haptic guidance feedback when performing the untrained motor task. That is, concurrent VTF was highly salient movement feedback and obviated the need for haptic feedback. Third, VTF can also be provided by the limb not involved in the task: learners who received VTF to the contralateral limb benefitted equally. In conclusion, somatosensory training can significantly enhance proprioceptive acuity within days when learning is coupled with vibro-tactile sensory cues that provide feedback about movement errors. The observable sensory improvements in proprioception facilitate motor learning, and such learning may generalize to the sensorimotor control of untrained motor tasks.
The implications of these findings for neurorehabilitation are discussed.
The development of newborn object recognition in fast and slow visual worlds
Wood, Justin N.; Wood, Samantha M. W.
2016-01-01
Object recognition is central to perception and cognition. Yet relatively little is known about the environmental factors that cause invariant object recognition to emerge in the newborn brain. Is this ability a hardwired property of vision? Or does the development of invariant object recognition require experience with a particular kind of visual environment? Here, we used a high-throughput controlled-rearing method to examine whether newborn chicks (Gallus gallus) require visual experience with slowly changing objects to develop invariant object recognition abilities. When newborn chicks were raised with a slowly rotating virtual object, the chicks built invariant object representations that generalized across novel viewpoints and rotation speeds. In contrast, when newborn chicks were raised with a virtual object that rotated more quickly, the chicks built viewpoint-specific object representations that failed to generalize to novel viewpoints and rotation speeds. Moreover, there was a direct relationship between the speed of the object and the amount of invariance in the chick's object representation. Thus, visual experience with slowly changing objects plays a critical role in the development of invariant object recognition. These results indicate that invariant object recognition is not a hardwired property of vision, but is learned rapidly when newborns encounter a slowly changing visual world. PMID:27097925
Niimi, Ryosuke; Yokosawa, Kazuhiko
2009-01-01
Visual recognition of three-dimensional (3-D) objects is relatively impaired for some particular views, called accidental views. For most familiar objects, the front and top views are considered to be accidental views. Previous studies have shown that foreshortening of the axes of elongation of objects in these views impairs recognition, but the influence of other possible factors is largely unknown. Using familiar objects without a salient axis of elongation, we found that a foreshortened symmetry plane of the object and low familiarity of the viewpoint accounted for the relatively worse recognition for front views and top views, independently of the effect of a foreshortened axis of elongation. We found no evidence that foreshortened front-back axes impaired recognition in front views. These results suggest that the viewpoint dependence of familiar object recognition is not a unitary phenomenon. The possible role of symmetry (either 2-D or 3-D) in familiar object recognition is also discussed.
Using haptic feedback to increase seat belt use of service vehicle drivers.
DOT National Transportation Integrated Search
2011-01-01
This study pilot-tested a new application of a technology-based intervention to increase seat belt use. The technology was based on a : contingency in which unbelted drivers experienced sustained haptic feedback to the gas pedal when they exceeded 25...
Giudice, Nicholas A.; Betty, Maryann R.; Loomis, Jack M.
2012-01-01
This research examines whether visual and haptic map learning yield functionally equivalent spatial images in working memory, as evidenced by similar encoding bias and updating performance. In three experiments, participants learned four-point routes either by seeing or feeling the maps. At test, blindfolded participants made spatial judgments about the maps from imagined perspectives that were either aligned or misaligned with the maps as represented in working memory. Results from Experiments 1 and 2 revealed a highly similar pattern of latencies and errors between visual and haptic conditions. These findings extend the well known alignment biases for visual map learning to haptic map learning, provide further evidence of haptic updating, and most importantly, show that learning from the two modalities yields very similar performance across all conditions. Experiment 3 found the same encoding biases and updating performance with blind individuals, demonstrating that functional equivalence cannot be due to visual recoding and is consistent with an amodal hypothesis of spatial images. PMID:21299331
Design and Calibration of a New 6 DOF Haptic Device
Qin, Huanhuan; Song, Aiguo; Liu, Yuqing; Jiang, Guohua; Zhou, Bohe
2015-01-01
For many applications, such as tele-operated robots and interaction with virtual environments, performance is better with force feedback than without. Haptic devices are force-reflecting interfaces; they can also track human hand positions simultaneously. A new 6 DOF (degree-of-freedom) haptic device was designed and calibrated in this study. It mainly contains a double parallel linkage, a rhombus linkage, a rotating mechanical structure and a grasping interface. Benefiting from this unique design, it is a hybrid-structure device with a large workspace and high output capability, and it is therefore capable of multi-finger interactions. Moreover, with an adjustable base, operators can adopt different postures without interrupting haptic tasks. To investigate the performance regarding position tracking accuracy and static output forces, we conducted experiments on a three-dimensional electric sliding platform and with a digital force gauge, respectively. Displacement errors and force errors are calculated and analyzed. To identify the capability and potential of the device, four application examples were programmed. PMID:26690449
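The displacement-error analysis mentioned above amounts to comparing the device's reported positions against reference positions from the calibrated sliding platform. A minimal sketch of such a calibration summary follows; the function name and sample data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def tracking_errors(reference, measured):
    """Per-sample Euclidean displacement error between reference positions
    (e.g. from a calibrated 3D sliding platform) and the haptic device's
    own position readings, summarized as mean / RMS / max."""
    err = np.linalg.norm(np.asarray(measured, float)
                         - np.asarray(reference, float), axis=1)
    return {"mean": float(err.mean()),
            "rms": float(np.sqrt(np.mean(err ** 2))),
            "max": float(err.max())}

# Two hypothetical samples (mm): a perfect reading and one 0.3 mm off in z.
stats = tracking_errors([[0, 0, 0], [10, 0, 0]],
                        [[0, 0, 0], [10, 0, 0.3]])
```

The same summary applied to force-gauge readings versus commanded forces would characterize the static output-force errors.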
A haptic pedal for surgery assistance.
Díaz, Iñaki; Gil, Jorge Juan; Louredo, Marcos
2014-09-01
The research and development of mechatronic aids for surgery is a persistent challenge in the field of robotic surgery. This paper presents a new haptic pedal conceived to assist surgeons in the operating room by transmitting real-time surgical information through the foot. An effective human-robot interaction system for medical practice must exchange appropriate information with the operator as quickly and accurately as possible. Moreover, information must flow through the appropriate sensory modalities for a natural and simple interaction. However, users of current robotic systems might experience cognitive overload and be increasingly overwhelmed by data streams from multiple modalities. A new haptic channel is thus explored to complement and improve existing systems. A preliminary set of experiments has been carried out to evaluate the performance of the proposed system in a virtual surgical drilling task. The results of the experiments show the effectiveness of the haptic pedal in providing surgical information through the foot. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Meli, Leonardo; Pacchierotti, Claudio; Prattichizzo, Domenico
2014-04-01
This study presents a novel approach to force feedback in robot-assisted surgery. It consists of substituting haptic stimuli, composed of a kinesthetic component and a skin deformation, with cutaneous stimuli only. The force generated can then be thought of as the subtraction of the kinesthetic part from the complete haptic interaction (cutaneous plus kinesthetic); for this reason, we refer to this approach as sensory subtraction. Sensory subtraction aims at outperforming other non-kinesthetic feedback techniques in teleoperation (e.g., sensory substitution) while guaranteeing the stability and safety of the system. We tested the proposed approach in a challenging 7-DoF bimanual teleoperation task, similar to the Pegboard experiment of the da Vinci Skills Simulator. Sensory subtraction showed improved performance in terms of completion time, force exerted, and total displacement of the rings with respect to two popular sensory substitution techniques. Moreover, it guaranteed a stable interaction in the presence of a communication delay in the haptic loop.
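The decomposition at the heart of sensory subtraction lends itself to a short sketch. This is only an illustrative rendering of the idea, not the authors' implementation; the function name, variable names and example force values are assumptions:

```python
import numpy as np

def sensory_subtraction(total_force, kinesthetic_force):
    """Return the cutaneous-only stimulus obtained by subtracting the
    kinesthetic component from the complete haptic interaction force."""
    return np.asarray(total_force) - np.asarray(kinesthetic_force)

# hypothetical 3-axis tool-tissue interaction force (N)
total = np.array([1.2, 0.4, 0.1])
# kinesthetic (grounded) component that the approach deliberately omits
kinesthetic = np.array([0.9, 0.3, 0.0])
# remainder, rendered through skin deformation only
cutaneous = sensory_subtraction(total, kinesthetic)
```

Because no grounded kinesthetic force is ever fed back to the operator, the haptic loop cannot destabilize the teleoperator, which is the safety argument the abstract makes.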
Influence of surgical gloves on haptic perception thresholds.
Hatzfeld, Christian; Dorsch, Sarah; Neupert, Carsten; Kupnik, Mario
2018-02-01
Impairment of haptic perception by surgical gloves could reduce the requirements on haptic systems for surgery. While grip forces and manipulation capabilities were not impaired in previous studies, no data are available for perception thresholds. Absolute and differential thresholds (20 dB above threshold) of 24 subjects were measured at frequencies of 25 and 250 Hz with a Ψ-method. The effects of wearing a surgical glove, of moisture on the contact surface and of the subject's experience with gloves were incorporated in a full-factorial experimental design. Absolute thresholds of 12.8 dB and -29.6 dB (means for 25 and 250 Hz, respectively) and differential thresholds of -12.6 dB and -9.5 dB agree with previous studies. A relevant effect of frequency on absolute thresholds was found. Comparisons of glove and no-glove conditions did not reveal a significant mean difference. Wearing a single surgical glove does not affect absolute or differential haptic perception thresholds. Copyright © 2017 John Wiley & Sons, Ltd.
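For readers converting the reported dB figures, a minimal sketch of the standard 20·log10 amplitude convention follows. The reference amplitude (e.g. 1 µm peak displacement, common in vibrotactile work) is an assumption here, since the abstract does not state it:

```python
import math

def db_to_amplitude(level_db, ref=1.0):
    """Convert a level in dB re `ref` to a linear amplitude
    (20*log10 convention for amplitude quantities)."""
    return ref * 10 ** (level_db / 20)

def amplitude_to_db(amplitude, ref=1.0):
    """Inverse conversion: linear amplitude to dB re `ref`."""
    return 20 * math.log10(amplitude / ref)

# e.g. the reported 250 Hz absolute threshold of -29.6 dB corresponds
# to roughly 0.033 times the (assumed) reference amplitude
threshold_250hz = db_to_amplitude(-29.6)
```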
A Three-Axis Force Sensor for Dual Finger Haptic Interfaces
Fontana, Marco; Marcheschi, Simone; Salsedo, Fabio; Bergamasco, Massimo
2012-01-01
In this work we present the design process, characterization and testing of a novel three-axis mechanical force sensor. This sensor is optimized for use in closed-loop force control of haptic devices with three degrees of freedom. In particular, the sensor has been conceived for integration with a dual finger haptic interface that aims at simulating the forces that occur during grasping and surface exploration. The sensing spring structure has been purposely designed to match the force and layout specifications of the application. In this paper the design of the sensor is presented, starting from an analytic model that describes the characteristic matrix of the sensor. A procedure for designing an optimal overload protection mechanism is proposed. In the last part of the paper we describe the experimental characterization and an integrated test on a haptic hand exoskeleton, showing the improvements in controller performance provided by the inclusion of the force sensor. PMID:23202012
Development of a novel haptic glove for improving finger dexterity in poststroke rehabilitation.
Lin, Chi-Ying; Tsai, Chia-Min; Shih, Pei-Cheng; Wu, Hsiao-Ching
2015-01-01
Almost all stroke patients experience a certain degree of fine motor impairment, and impeded finger movement may limit activities of daily life. Thus, to improve the quality of life of stroke patients, designing an efficient training device for fine motor rehabilitation is crucial. This study aimed to develop a novel fine motor training glove that integrates a virtual-reality-based interactive environment with vibrotactile feedback for more effective poststroke hand rehabilitation. The proposed haptic rehabilitation device is equipped with small DC vibration motors for vibrotactile feedback stimulation and piezoresistive thin-film force sensors for motor function evaluation. Two virtual-reality-based games, ``gopher hitting'' and ``musical note hitting'', were developed as a haptic interface. Following the designed rehabilitation program, patients intuitively practice pushing with their fingers to improve finger isolation function. Preliminary tests were conducted to assess the feasibility of the developed haptic rehabilitation system and to identify design concerns for practical use in future clinical testing.
The Effect of Visual Experience on Perceived Haptic Verticality When Tilted in the Roll Plane
Cuturi, Luigi F.; Gori, Monica
2017-01-01
The orientation of the body in space can influence the perception of verticality, sometimes leading to biases consistent with priors peaked at the most common head and body orientation, that is, upright. In this study, we investigated haptic perception of verticality in sighted individuals and in early and late blind adults tilted counterclockwise in the roll plane. Participants performed a stimulus orientation discrimination task with their body tilted 90° relative to gravity, toward the left ear. Stimuli were presented using a motorized haptic bar. To test whether different reference frames relative to the head influenced the perception of verticality, we varied the position of the stimulus along the body's longitudinal axis. Depending on stimulus position, sighted participants tended to show biases away from or toward their body tilt. Visually impaired individuals instead showed a different pattern of verticality estimates: a bias toward the head and body tilt (i.e., the Aubert effect) was observed in late blind individuals, whereas, interestingly, no strong biases were observed in early blind individuals. Overall, these results suggest that visual sensory information is fundamental in shaping the haptic readout of proprioceptive and vestibular information about body orientation relative to gravity. The acquisition of an idiotropic vector signaling the upright might take place through vision during development. In early blind individuals, independent spatial navigation experience, likely enhanced by echolocation behavior, might play a role in such acquisition. In participants with late-onset blindness, early experience of vision might lead them to anchor their visually acquired priors to the haptic modality, with no disambiguation between head and body references as observed in sighted individuals (Fraser et al., 2015).
With our study, we aim to investigate haptic perception of the direction of gravity under unusual body tilts when vision is absent due to visual impairment. Our findings thus shed light on the influence of proprioceptive and vestibular sensory information on haptically perceived verticality in blind individuals, showing how this phenomenon is affected by visual experience. PMID:29270109
Gueguen, Marc; Vuillerme, Nicolas; Isableu, Brice
2012-01-01
Background: The selection of appropriate frames of reference (FOR) is a key factor in the elaboration of spatial perception and the production of robust interaction with our environment. The extent to which we perceive the head axis orientation (subjective head orientation, SHO) with both accuracy and precision likely contributes to the efficiency of these spatial interactions. A first goal of this study was to investigate the relative contribution of both the visual and egocentric FOR (centre of mass) to SHO processing. A second goal was to investigate humans' ability to process SHO in various sensory response modalities (visual, haptic and visuo-haptic), and the way these modalities modify reliance on either the visual or the egocentric FOR. A third goal was to question whether subjects combine visual and haptic cues optimally to increase SHO certainty and to decrease the FOR disruption effect. Methodology/Principal Findings: Thirteen subjects were asked to indicate their SHO while the visual and/or egocentric FORs were deviated. Four results emerged from our study. First, visual rod settings to SHO were altered by the tilted visual frame but not by the egocentric FOR alteration, whereas no alteration of haptic settings was observed under either the egocentric FOR alteration or the tilted visual frame. These results are modulated by individual analysis. Second, visual and egocentric FOR dependency appear to be negatively correlated. Third, enriching the response modality appears to improve SHO. Fourth, several combination rules for the visuo-haptic cues, such as Maximum Likelihood Estimation (MLE), Winner-Take-All (WTA) and the Unweighted Mean (UWM), seem to account for SHO improvements. However, the UWM rule seems to best account for the improvement of visuo-haptic estimates, especially in situations with high FOR incongruence. Finally, the data also indicated that FOR reliance resulted from application of the UWM rule.
This was observed particularly in visually dependent subjects. Conclusions: Taken together, these findings emphasize the importance of identifying individual spatial FOR preferences in assessing the efficiency of our interaction with the environment whilst performing spatial tasks. PMID:22509295
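The three combination rules compared in this abstract (MLE, WTA, UWM) can be sketched in a few lines. The function names, estimates and variances below are illustrative assumptions, not the study's code:

```python
def mle(est_a, est_b, var_a, var_b):
    """Maximum-likelihood combination: average weighted by reliability
    (inverse variance)."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
    return w_a * est_a + (1 - w_a) * est_b

def wta(est_a, est_b, var_a, var_b):
    """Winner-take-all: keep only the more reliable (lower-variance) cue."""
    return est_a if var_a <= var_b else est_b

def uwm(est_a, est_b):
    """Unweighted mean: equal weights regardless of reliability."""
    return 0.5 * (est_a + est_b)

# hypothetical SHO estimates (degrees) from visual and haptic responses
visual_est, haptic_est = 10.0, 20.0
visual_var, haptic_var = 1.0, 4.0
```

Note how the rules diverge on the same inputs: MLE pulls the combined estimate toward the more reliable visual cue, WTA discards the haptic cue entirely, and UWM, the rule the study found to best fit the data, splits the difference regardless of reliability.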
Haptic-assistive technologies for audition and vision sensory disabilities.
Sorgini, Francesca; Caliò, Renato; Carrozza, Maria Chiara; Oddo, Calogero Maria
2018-05-01
The aim of this review is to analyze haptic sensory substitution technologies for deaf, blind and deaf-blind individuals. The literature search was performed in the Scopus, PubMed and Google Scholar databases using selected keywords, analyzing studies from the 1960s to the present. The database search for scientific publications was accompanied by a web search for commercial devices. Results have been classified by sensory disability and functionality, and analyzed by assistive technology. Complementary analyses were also carried out on the websites of public international agencies, such as the World Health Organization (WHO), and of associations representing sensory disabled persons. The reviewed literature provides evidence that sensory substitution aids can mitigate in part the deficits in language learning, communication and navigation of deaf, blind and deaf-blind individuals, and that the tactile sense can be a means of communication to provide some kinds of information to sensory disabled individuals. A lack of acceptance emerged from the discussion of the capabilities and limitations of haptic assistive technologies. Future research should move toward miniaturized, custom-designed and low-cost haptic interfaces, integrated with personal devices such as smartphones, for wider diffusion of sensory aids among disabled people. Implications for rehabilitation: Systematic review of the state of the art of haptic assistive technologies for vision and audition sensory disabilities. Sensory substitution systems for visual and hearing disabilities have a central role in the transmission of information to patients with sensory impairments, enabling users to interact with the non-disabled community in daily activities. Visual and auditory inputs are converted into haptic feedback via different actuation technologies, and the information is presented as static or dynamic stimulation of the skin. Their effectiveness and ease of use make haptic sensory substitution systems suitable for patients with different levels of disability. They constitute a cheaper and less invasive alternative to implantable partial sensory restitution systems. Future research is oriented toward optimization of the stimulation parameters, together with the development of miniaturized, custom-designed and low-cost aids operating in synergy in networks, aiming to increase patients' acceptance of these technologies.
Automatic anatomy recognition via multiobject oriented active shape models.
Chen, Xinjian; Udupa, Jayaram K; Alavi, Abass; Torigian, Drew A
2010-12-01
This paper studies the feasibility of developing an automatic anatomy recognition (AAR) system in clinical radiology and demonstrates its operation on clinical 2D images. The anatomy recognition method described here consists of two main components: (a) a multiobject generalization of the oriented active shape model (OASM) and (b) object recognition strategies. The OASM algorithm is generalized to multiple objects by including a model for each object and assigning a cost structure specific to each object in the spirit of live wire. The delineation of multiobject boundaries is done in MOASM via a three-level dynamic programming algorithm: the first level operates at the pixel level to find optimal oriented boundary segments between successive landmarks, the second at the landmark level to find optimal landmark locations, and the third at the object level to find the optimal arrangement of object boundaries over all objects. The object recognition strategy attempts to find the pose vector (consisting of translation, rotation, and scale components) for the multiobject model that yields the smallest total boundary cost over all objects. The delineation and recognition accuracies were evaluated separately using routine clinical chest CT, abdominal CT, and foot MRI data sets. Delineation accuracy was evaluated in terms of true and false positive volume fractions (TPVF and FPVF). Recognition accuracy was assessed (1) in terms of the size of the space of pose vectors for the model assembly that yielded high delineation accuracy, (2) as a function of the number, distribution, and size of objects in the model, (3) in terms of the interdependence between delineation and recognition, and (4) in terms of the closeness of the optimum recognition result to the global optimum. When multiple objects are included in the model, delineation accuracy in terms of TPVF can be improved to 97%-98% with a low FPVF of 0.1%-0.2%.
Typically, a recognition accuracy of ≥90% yielded a TPVF ≥95% and an FPVF ≤0.5%. Over the three data sets and over all tested objects, in 97% of the cases the optimal solutions found by the proposed method constituted the true global optimum. The experimental results showed the feasibility and efficacy of the proposed automatic anatomy recognition system. Increasing the number of objects in the model can significantly improve both recognition and delineation accuracy. A more spread-out arrangement of objects in the model can lead to improved recognition and delineation accuracy. Including larger objects in the model also improved recognition and delineation. The proposed method almost always finds globally optimal solutions.
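The recognition step described above, searching over pose vectors for the one yielding the smallest total boundary cost, can be sketched as a brute-force grid search. The 2D setting, cost function and search grids below are illustrative assumptions, not the authors' actual algorithm:

```python
import itertools
import numpy as np

def recognize(model_points, cost_fn, translations, rotations, scales):
    """Search pose vectors (translation, rotation, scale) for the pose
    that yields the smallest total boundary cost."""
    best_pose, best_cost = None, np.inf
    for t, r, s in itertools.product(translations, rotations, scales):
        R = np.array([[np.cos(r), -np.sin(r)],
                      [np.sin(r),  np.cos(r)]])
        placed = s * model_points @ R.T + t  # apply pose to model points
        cost = cost_fn(placed)
        if cost < best_cost:
            best_pose, best_cost = (t, r, s), cost
    return best_pose, best_cost

# toy example: a one-point "model" matched to a target location
target = np.array([[2.0, 0.0]])
def boundary_cost(pts):
    return float(np.sum((pts - target) ** 2))

pose, cost = recognize(np.array([[1.0, 0.0]]), boundary_cost,
                       translations=[np.array([0.0, 0.0]), np.array([1.0, 0.0])],
                       rotations=[0.0], scales=[1.0, 2.0])
```

In the paper the cost is the boundary cost returned by the multilevel dynamic program rather than a point distance, and the pose space is searched far more efficiently than this exhaustive grid.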
Learning from vision-to-touch is different than learning from touch-to-vision.
Wismeijer, Dagmar A; Gegenfurtner, Karl R; Drewing, Knut
2012-01-01
We studied whether vision can teach touch to the same extent as touch seems to teach vision. In a 2 × 2 between-participants learning study, we artificially correlated visual gloss cues with haptic compliance cues. In two "natural" tasks, we tested whether visual gloss estimations have an influence on haptic estimations of softness and vice versa. In two "novel" tasks, in which participants were either asked to haptically judge glossiness or to visually judge softness, we investigated how perceptual estimates transfer from one sense to the other. Our results showed that vision does not teach touch as efficiently as touch seems to teach vision.
Ledermann, Christoph; Pauer, Hendrikje; Woern, Heinz
2014-05-01
In minimally invasive surgery, flexible mechatronic instruments promise to improve the overall performance of surgical interventions. However, those instruments require highly developed sensors in order to provide haptic feedback to the surgeon or to enable (semi-)autonomous tasks. Specifically, haptic sensors and a shape sensor are required. In this paper, we present our fiber-optical sensor system based on Fiber Bragg Gratings, which consists of a shape sensor, a kinesthetic sensor and a tactile sensor. The status quo of each of the three sensors is described, as well as the concept for integrating them into one fiber-optical sensor system.
Sensorimotor enhancement with a mixed reality system for balance and mobility rehabilitation.
Fung, Joyce; Perez, Claire F
2011-01-01
We have developed a mixed reality system incorporating virtual reality (VR), surface perturbations and light touch for gait rehabilitation. Haptic touch has emerged as a novel and efficient technique to improve postural control and dynamic stability. Our system combines visual display with the manipulation of physical environments and the addition of haptic feedback to enhance balance and mobility post stroke. A research study involving 9 participants with stroke and 9 age-matched healthy individuals shows that the haptic cue provided while walking is an effective means of improving gait stability in people post stroke, especially during challenging environmental conditions such as downslope walking.
DORSAL HIPPOCAMPAL PROGESTERONE INFUSIONS ENHANCE OBJECT RECOGNITION IN YOUNG FEMALE MICE
Orr, Patrick T.; Lewis, Michael C.; Frick, Karyn M.
2009-01-01
The effects of progesterone on memory are not nearly as well studied as the effects of estrogens. Although progesterone can reportedly enhance spatial and/or object recognition in female rodents when given immediately after training, previous studies have injected progesterone systemically, and therefore, the brain regions mediating this enhancement are not clear. As such, this study was designed to determine the role of the dorsal hippocampus in mediating the beneficial effect of progesterone on object recognition. Young ovariectomized C57BL/6 mice were trained in a hippocampal-dependent object recognition task utilizing two identical objects, and then immediately or 2 hrs afterwards, received bilateral dorsal hippocampal infusions of vehicle or 0.01, 0.1, or 1.0 μg/μl water-soluble progesterone. Forty-eight hours later, object recognition memory was tested using a previously explored object and a novel object. Relative to the vehicle group, memory for the familiar object was enhanced in all groups receiving immediate infusions of progesterone. Progesterone infusion delayed 2 hrs after training did not affect object recognition. These data suggest that the dorsal hippocampus may play a critical role in progesterone-induced enhancement of object recognition. PMID:19477194
Interactive object recognition assistance: an approach to recognition starting from target objects
Geisler, Juergen; Littfass, Michael
1999-07-01
Recognition of target objects in remotely sensed imagery requires detailed knowledge about the target object domain as well as about the mapping properties of the sensing system. The art of object recognition is to combine both worlds appropriately and to provide models of target appearance with respect to sensor characteristics. Common approaches to supporting interactive object recognition are either driven from the sensor point of view, addressing the problem of displaying images in a manner adequate to the sensing system, or they focus on target objects and provide exhaustive encyclopedic information about that domain. Our paper discusses an approach to assisting interactive object recognition based on knowledge about target objects, taking into account the significance of object features with respect to characteristics of the sensed imagery, e.g., spatial and spectral resolution. An `interactive recognition assistant' takes the image analyst through the interpretation process by indicating step-by-step the most significant features of the objects in the current candidate set. The significance of object features is expressed by pregenerated trees of significance and by the dynamic computation of decision relevance for every feature at each step of the recognition process. In the context of this approach, we discuss the question of modeling and storing the multisensorial/multispectral appearances of target objects and object classes, as well as the problem of an adequate dynamic human-machine interface that takes into account the various mental models of human image interpretation.
78 FR 23593 - Certain Mobile Electronic Devices Incorporating Haptics; Termination of Investigation
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-19
... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-834] Certain Mobile Electronic Devices... this investigation may be viewed on the Commission's electronic docket (EDIS) at http://edis.usitc.gov... mobile electronic devices incorporating haptics that infringe certain claims of six Immersion patents. 77...
Ottensmeyer, M P; Ben-Ur, E; Salisbury, J K
2000-01-01
Current efforts in surgical simulation very often focus on creating realistic graphical feedback, but neglect some or all of the tactile and force (haptic) feedback that a surgeon would normally receive. Simulations that do include haptic feedback do not typically use real tissue compliance properties, favoring estimates and user feedback to determine realism. When tissue compliance data are used, there are virtually no in vivo property measurements to draw upon. Together with the Center for Innovative Minimally Invasive Therapy at the Massachusetts General Hospital, the Haptics Group is developing tools to introduce more comprehensive haptic feedback in laparoscopy simulators and to provide biological tissue material property data for our software simulation. The platform for providing haptic feedback is a PHANToM Haptic Interface, produced by SensAble Technologies, Inc. Our devices supplement the PHANToM to provide for grasping and, optionally, for the roll axis of the tool. Together with feedback from the PHANToM, which provides the pitch, yaw and thrust axes of a typical laparoscopy tool, we can recreate all of the haptic sensations experienced during laparoscopy. The devices integrate real laparoscopy tool handles and a compliant torso model to complete the set of visual and tactile sensations. Biological tissues are known to exhibit non-linear mechanical properties, and change their properties dramatically when removed from a living organism. To measure the properties in vivo, two devices are being developed. The first is a small-displacement, 1-D indenter. It will measure the linear tissue compliance (stiffness and damping) over a wide range of frequencies. These data will be used as inputs to a finite element or other model. The second device will be able to deflect tissues in 3-D over a larger range, so that the non-linearities due to changes in the tissue geometry will be measured.
This will allow us to validate the performance of the model on large tissue deformations. Both devices are designed to pass through standard 12 mm laparoscopy trocars, and will be suitable for use during open or minimally invasive procedures. We plan to acquire data from pigs used by surgeons for training purposes, but conceivably, the tools could be refined for use on humans undergoing surgery. Our work will provide the necessary data input for surgical simulations to accurately model the force interactions that a surgeon would have with tissue, and will provide the force output to create a truly realistic simulation of minimally invasive surgery.
A mass-density model can account for the size-weight illusion
Bergmann Tiest, Wouter M.; Drewing, Knut
2018-01-01
When judging the heaviness of two objects with equal mass, people perceive the smaller and denser of the two as being heavier. Despite the large number of theories, covering bottom-up and top-down approaches, none of them can fully account for all aspects of this size-weight illusion and thus for human heaviness perception. Here we propose a new maximum-likelihood estimation model that describes the illusion as the weighted average of two heaviness estimates with correlated noise: one estimate derived from the object's mass, and the other from the object's density, with the estimates' weights based on their relative reliabilities. While information about mass can be perceived directly, information about density will in some cases first have to be derived from mass and volume. However, according to our model, at the crucial perceptual level heaviness judgments will be biased by the object's density, not by its size. In two magnitude estimation experiments, we tested model predictions for the visual and the haptic size-weight illusion. Participants lifted objects which varied in mass and density. We additionally varied the reliability of the density estimate by varying the quality of either visual (Experiment 1) or haptic (Experiment 2) volume information. As predicted, with increasing quality of volume information, heaviness judgments were increasingly biased towards the object's density: objects of the same density were perceived as more similar, and big objects were perceived as increasingly lighter than small (denser) objects of the same mass. This perceived difference increased with an increasing difference in density. In an additional two-alternative forced-choice heaviness experiment, we replicated the finding that illusion strength increased with the quality of volume information (Experiment 3). Overall, the results strongly corroborate our model, which seems promising as a starting point for a unifying framework for the size-weight illusion and human heaviness perception.
PMID:29447183
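The model's reliability-weighted combination of mass- and density-based heaviness estimates can be sketched as follows. This simplified version omits the correlated-noise term of the full model, and all names and example values are illustrative assumptions:

```python
def heaviness_judgment(mass_est, density_est, var_mass, var_density):
    """Reliability-weighted average of a mass-based and a density-based
    heaviness estimate (weights are normalized inverse variances)."""
    w_density = (1 / var_density) / (1 / var_mass + 1 / var_density)
    return (1 - w_density) * mass_est + w_density * density_est

# better volume information -> more reliable density estimate
# -> judgment pulled toward density (a stronger size-weight illusion)
poor_volume = heaviness_judgment(1.0, 2.0, var_mass=1.0, var_density=4.0)
good_volume = heaviness_judgment(1.0, 2.0, var_mass=1.0, var_density=0.25)
```

Lowering the density variance, as better visual or haptic volume information would, shifts the judgment toward the density estimate, which is exactly the pattern the three experiments report.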
Introduction to haptics for neurosurgeons.
L'Orsa, Rachael; Macnab, Chris J B; Tavakoli, Mahdi
2013-01-01
Robots are becoming increasingly relevant to neurosurgeons, extending a neurosurgeon's physical capabilities, improving navigation within the surgical landscape when combined with advanced imaging, and propelling the movement toward minimally invasive surgery. Most surgical robots, however, isolate surgeons from the full range of human senses during a procedure. This forces surgeons to rely on vision alone for guidance through the surgical corridor, which limits the capabilities of the system, requires significant operator training, and increases the surgeon's workload. Incorporating haptics into these systems, ie, enabling the surgeon to "feel" forces experienced by the tool tip of the robot, could render these limitations obsolete by making the robot feel more like an extension of the surgeon's own body. Although the use of haptics in neurosurgical robots is still mostly the domain of research, neurosurgeons who keep abreast of this emerging field will be more prepared to take advantage of it as it becomes more prevalent in operating theaters. Thus, this article serves as an introduction to the field of haptics for neurosurgeons. We not only outline the current and future benefits of haptics but also introduce concepts in the fields of robotic technology and computer control. This knowledge will allow readers to be better aware of limitations in the technology that can affect performance and surgical outcomes, and "knowing the right questions to ask" will be invaluable for surgeons who have purchasing power within their departments.
Hwang, Donghyun; Lee, Jaemin; Kim, Keehoon
2017-10-01
This paper proposes a miniature haptic ring that can display touch/pressure and shearing force to the user's fingerpad. For practical use and wider application, the device is designed for high wearability and mobility/portability as well as cutaneous force feedback functionality. The main body of the device is a ring-shaped lightweight structure with a simple driving mechanism, actuated by thin shape memory alloy (SMA) wires with high energy density. Together with a band-type wireless control unit including a wireless data communication module, the whole device is realized as a wearable mobile haptic system. These features give the device diverse advantages in functional performance and provide users with significant usability. In this work, the proposed miniature haptic ring is systematically designed, and its performance is experimentally evaluated with a fabricated functional prototype. The experimental results demonstrate that the proposed device exhibits a higher force-to-weight ratio than conventional finger-wearable haptic devices for cutaneous force feedback. The device's operational performance is also found to be strongly influenced by the electro-thermomechanical behavior of the SMA actuator. In addition to the performance evaluation, we conducted a preliminary user test to assess practical feasibility and usability based on users' qualitative feedback.
Vibrotactile perception assessment for a haptic interface on an antigravity suit.
Ko, Sang Min; Lee, Kwangil; Kim, Daeho; Ji, Yong Gu
2017-01-01
Haptic technology is used in various fields to transmit information to the user with or without visual and auditory cues. This study aimed to provide preliminary data for use in developing a haptic interface for an antigravity (anti-G) suit. With the structural characteristics of the anti-G suit in mind, we determined five areas on the body (lower back, outer thighs, inner thighs, outer calves, and inner calves) on which to install ten bar-type eccentric rotating mass (ERM) motors as vibration actuators. To determine the design factors of the haptic anti-G suit, we conducted three experiments to find the absolute threshold, moderate intensity, and subjective assessments of vibrotactile stimuli. Twenty-six fighter pilots participated in the experiments, which were conducted in a fixed-based flight simulator. From the results of our study, we recommend 1) absolute thresholds of ∼11.98-15.84 Hz and 102.01-104.06 dB, 2) moderate intensities of 74.36 Hz and 126.98 dB for the lower back and 58.65 Hz and 122.37 dB for either side of the thighs and calves, and 3) subjective assessments of vibrotactile stimuli (displeasure, easy to perceive, and level of comfort). The results of this study will be useful for the design of a haptic anti-G suit. Copyright © 2016 Elsevier Ltd. All rights reserved.
Framework for e-learning assessment in dental education: a global model for the future.
Arevalo, Carolina R; Bayne, Stephen C; Beeley, Josie A; Brayshaw, Christine J; Cox, Margaret J; Donaldson, Nora H; Elson, Bruce S; Grayden, Sharon K; Hatzipanagos, Stylianos; Johnson, Lynn A; Reynolds, Patricia A; Schönwetter, Dieter J
2013-05-01
The framework presented in this article demonstrates strategies for a global approach to e-curricula in dental education by considering a collection of outcome assessment tools. By combining the outcomes for overall assessment, a global model for a pilot project that applies e-assessment tools to virtual learning environments (VLE), including haptics, is presented. Assessment strategies from two projects, HapTEL (Haptics in Technology Enhanced Learning) and UDENTE (Universal Dental E-learning), act as case-user studies that have helped develop the proposed global framework. They incorporate additional assessment tools and include evaluations from questionnaires and stakeholders' focus groups. These measure each of the factors affecting the classical teaching/learning theory framework as defined by Entwistle in a standardized manner. A mathematical combinatorial approach is proposed to join these results together as a global assessment. With the use of haptic-based simulation learning, exercises for tooth preparation assessing enamel and dentine were compared to plastic teeth in manikins. Equivalence of student performance for haptic versus traditional preparation methods was demonstrated, thus establishing the validity of the haptic solution for performing these exercises. Further data collected from HapTEL are still being analyzed, and pilots are being conducted to validate the proposed test measures. Initial results have been encouraging, but clearly the need persists to develop additional e-assessment methods for new learning domains.
Teaching bovine abdominal anatomy: use of a haptic simulator.
Kinnison, Tierney; Forrest, Neil David; Frean, Stephen Philip; Baillie, Sarah
2009-01-01
Traditional methods of teaching anatomy to undergraduate medical and veterinary students are being challenged and need to adapt to modern concerns and requirements. There is a move away from the use of cadavers to new technologies as a way of complementing the traditional approaches and addressing resource and ethical problems. Haptic (touch) technology, which allows the student to feel a 3D computer-generated virtual environment, provides a novel way to address some of these challenges. To evaluate the practicalities and usefulness of a haptic simulator, first year veterinary students at the Royal Veterinary College, University of London, were taught basic bovine abdominal anatomy using a rectal palpation simulator: "The Haptic Cow." Over two days, 186 students were taught in small groups and 184 provided feedback via a questionnaire. The results were positive; the majority of students considered that the simulator had been useful for appreciating both the feel and location of key internal anatomical structures, had helped with their understanding of bovine abdominal anatomy and 3D visualization, and the tutorial had been enjoyable. The students were mostly in favor of the small group tutorial format, but some requested more time on the simulator. The findings indicate that the haptic simulator is an engaging way of teaching bovine abdominal anatomy to a large number of students in an efficient manner without using cadavers, thereby addressing some of the current challenges in anatomy teaching.
Discriminating Tissue Stiffness with a Haptic Catheter: Feeling the Inside of the Beating Heart
Kesner, Samuel B.; Howe, Robert D.
2011-01-01
Catheter devices allow physicians to access the inside of the human body easily and painlessly through natural orifices and vessels. Although catheters allow for the delivery of fluids and drugs, the deployment of devices, and the acquisition of measurements, they do not allow clinicians to assess the physical properties of tissue inside the body, due to tissue motion and the transmission limitations of catheter devices, including compliance, friction, and backlash. The goal of this research is to increase the tactile information available to physicians during catheter procedures by providing haptic feedback during palpation. To accomplish this goal, we have developed the first motion-compensated actuated catheter system that enables haptic perception of fast-moving tissue structures. The actuated catheter is instrumented with a distal tip force sensor and a force feedback interface that allows users to adjust the position of the catheter while experiencing the forces on the catheter tip. The efficacy of this device and interface is evaluated through a psychophysical study comparing how accurately users can differentiate various materials attached to a cardiac motion simulator using the haptic device and a conventional manual catheter. The results demonstrate that haptics improves a user's ability to differentiate material properties and decreases the total number of errors by 50% over the manual catheter system. PMID:25285321
Assisting Movement Training and Execution With Visual and Haptic Feedback.
Ewerton, Marco; Rother, David; Weimar, Jakob; Kollegger, Gerrit; Wiemeyer, Josef; Peters, Jan; Maeda, Guilherme
2018-01-01
In the practice of motor skills in general, errors in the execution of movements may go unnoticed when a human instructor is not available. In this case, a computer system or robotic device able to detect movement errors and propose corrections would be of great help. This paper addresses the problem of how to detect such execution errors and how to provide feedback to the human to correct his/her motor skill using a general, principled methodology based on imitation learning. The core idea is to compare the observed skill with a probabilistic model learned from expert demonstrations. The intensity of the feedback is regulated by the likelihood of the model given the observed skill. Based on demonstrations, our system can, for example, detect errors in the writing of characters with multiple strokes. Moreover, by using a haptic device, the Haption Virtuose 6D, we demonstrate a method to generate haptic feedback based on a distribution over trajectories, which could be used as an auxiliary means of communication between an instructor and an apprentice. Additionally, given a performance measurement, the haptic device can help the human discover and perform better movements to solve a given task. In this case, the human first tries a few times to solve the task without assistance. Our framework, in turn, uses a reinforcement learning algorithm to compute haptic feedback, which guides the human toward better solutions.
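The core mechanism described above, comparing an observed movement against a probabilistic model learned from expert demonstrations and regulating feedback intensity by how unlikely the observation is, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the Gaussian trajectory model, the gain, and the ramp trajectory are all assumptions.

```python
import numpy as np

def fit_expert_model(demos):
    """Fit a per-point Gaussian over expert trajectories (demos: n_demos x n_points)."""
    demos = np.asarray(demos, dtype=float)
    mean = demos.mean(axis=0)
    # A diagonal covariance keeps the sketch simple and well-conditioned.
    var = demos.var(axis=0) + 1e-6
    return mean, var

def corrective_feedback(observed, mean, var, gain=1.0):
    """Feedback pushing the observed trajectory toward the expert mean.

    Intensity grows as the likelihood of the observation drops: large
    deviations (in standard-deviation units) yield strong corrections,
    small ones yield weak corrections.
    """
    observed = np.asarray(observed, dtype=float)
    z = (mean - observed) / np.sqrt(var)  # signed deviation in sigma units
    return gain * z                       # correction per trajectory point

# Toy example: five noisy expert demonstrations of a ramp trajectory.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 20)
demos = [t + 0.01 * rng.standard_normal(t.size) for _ in range(5)]
mean, var = fit_expert_model(demos)

# A learner who overshoots the ramp receives feedback pointing back down.
learner = t + 0.2
fb = corrective_feedback(learner, mean, var)
print(np.all(fb < 0))  # overshoot everywhere -> downward correction everywhere
```

The same per-point correction vector could drive either a visual overlay or, scaled to device limits, a haptic force.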
The role of visuohaptic experience in visually perceived depth.
Ho, Yun-Xian; Serwe, Sascha; Trommershäuser, Julia; Maloney, Laurence T; Landy, Michael S
2009-06-01
Berkeley suggested that "touch educates vision," that is, haptic input may be used to calibrate visual cues to improve visual estimation of properties of the world. Here, we test whether haptic input may be used to "miseducate" vision, causing observers to rely more heavily on misleading visual cues. Human subjects compared the depth of two cylindrical bumps illuminated by light sources located at different positions relative to the surface. As in previous work using judgments of surface roughness, we find that observers judge bumps to have greater depth when the light source is located eccentric to the surface normal (i.e., when shadows are more salient). Following several sessions of visual judgments of depth, subjects then underwent visuohaptic training in which haptic feedback was artificially correlated with the "pseudocue" of shadow size and artificially decorrelated with disparity and texture. Although there were large individual differences, almost all observers demonstrated integration of haptic cues during visuohaptic training. For some observers, subsequent visual judgments of bump depth were unaffected by the training. However, for 5 of 12 observers, training significantly increased the weight given to pseudocues, causing subsequent visual estimates of shape to be less veridical. We conclude that haptic information can be used to reweight visual cues, putting more weight on misleading pseudocues, even when more trustworthy visual cues are available in the scene.
Shared control of a medical robot with haptic guidance.
Xiong, Linfei; Chng, Chin Boon; Chui, Chee Kong; Yu, Peiwu; Li, Yao
2017-01-01
Tele-operation of surgical robots reduces radiation exposure during interventional radiological operations. However, endoscope vision without force feedback on the surgical tool increases the difficulty of precise manipulation and the risk of tissue damage. Shared control of vision and force provides a novel approach to enhanced control with haptic guidance, which could lead to greater dexterity and better maneuverability during minimally invasive surgery (MIS). The paper provides an innovative shared control method for a robotic MIS system, in which vision and haptic feedback are incorporated to provide guidance cues to the clinician during surgery. The incremental potential field (IPF) method is utilized to generate a guidance path based on the anatomy of the tissue and surgical tool interaction. Haptic guidance is provided at the master end to assist the clinician during tele-operative surgical robotic tasks. The approach has been validated with path-following and virtual tumor-targeting experiments. The experimental results demonstrate that, compared with vision-only guidance, shared control with vision and haptics improved the accuracy and efficiency of surgical robotic manipulation, reducing both tool-position error distance and execution time. The validation experiments demonstrate that the shared control approach could help the surgical robot system provide stable assistance and precise performance in executing the designated surgical task. The methodology could also be implemented with other surgical robots with different surgical tools and applications.
Papale, Paolo; Chiesi, Leonardo; Rampinini, Alessandra C; Pietrini, Pietro; Ricciardi, Emiliano
2016-01-01
In the last decades, the rapid growth of functional brain imaging methodologies allowed cognitive neuroscience to address open questions in philosophy and social sciences. At the same time, novel insights from cognitive neuroscience research have begun to influence various disciplines, leading to a turn to cognition and emotion in the fields of planning and architectural design. Since 2003, the Academy of Neuroscience for Architecture has been supporting 'neuro-architecture' as a way to connect neuroscience and the study of behavioral responses to the built environment. Among the many topics related to multisensory perceptual integration and embodiment, the concept of hapticity was recently introduced, suggesting a pivotal role of tactile perception and haptic imagery in architectural appraisal. Arguments have thus arisen in favor of the existence of shared cognitive foundations between hapticity and the supramodal functional architecture of the human brain. More precisely, supramodality refers to the functional feature of defined brain regions to process and represent specific information content in a more abstract way, independently of the sensory modality conveying such information to the brain. Here, we highlight some commonalities and differences between the concepts of hapticity and supramodality according to the distinctive perspectives of architecture and cognitive neuroscience. This comparison and connection between these two different approaches may lead to novel observations regarding people-environment relationships, and even provide empirical foundations for a renewed evidence-based design theory.
Mastoidectomy simulation with combined visual and haptic feedback.
Agus, Marco; Giachetti, Andrea; Gobbetti, Enrico; Zanetti, Gianluigi; Zorcolo, Antonio; John, Nigel W; Stone, Robert J
2002-01-01
Mastoidectomy is one of the most common surgical procedures relating to the petrous bone. In this paper we describe our preliminary results in the realization of a virtual reality mastoidectomy simulator. Our system is designed to work on patient-specific volumetric object models directly derived from 3D CT and MRI images. The paper summarizes the detailed task analysis performed in order to define the system requirements, introduces the architecture of the prototype simulator, and discusses the initial feedback received from selected end users.
1998-01-01
consisted of a videomicroscopy system and a tactile stimulator system. By using this setup, real-time images from the contact region as well as the … contact process is an important step. In this study, therefore, a videomicroscopy system was built to visualize the contact region of the fingerpad
Cyber integrated MEMS microhand for biological applications
NASA Astrophysics Data System (ADS)
Weissman, Adam; Frazier, Athena; Pepen, Michael; Lu, Yen-Wen; Yang, Shanchieh Jay
2009-05-01
Anthropomorphous robotic hands at microscales have been developed to receive information and perform tasks for biological applications. To emulate a human hand's dexterity, the microhand requires a master-slave interface with a wearable controller, force sensors, and perception displays for tele-manipulation. Recognizing the constraints and complexity imposed on feedback interfaces during miniaturization, this project addresses that need by creating an integrated cyber environment, incorporating sensors with a microhand, haptic/visual display, and object model, to emulate human hands' psychophysical perception at the microscale.
Intubation simulation with a cross-sectional visual guidance.
Rhee, Chi-Hyoung; Kang, Chul Won; Lee, Chang Ha
2013-01-01
We present an intubation simulation with deformable objects and cross-sectional visual guidance using a general haptic device. Our method deforms the tube model when it collides with the human model; a mass-spring model with Euler integration is used for the tube deformation. To help trainees understand the intubation process more effectively, we provide a cross-sectional view of the oral cavity and the tube. Our system also applies stereoscopic rendering to improve depth perception and the realism of the simulation.
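The deformation scheme the abstract names, a mass-spring model integrated with the explicit Euler method, can be sketched as below. The chain topology, stiffness, damping, mass, and time step are illustrative guesses, not the simulator's actual parameters.

```python
import numpy as np

def euler_step(pos, vel, rest_len, k=50.0, c=0.5, mass=0.1, dt=0.001):
    """One explicit Euler step for a chain of point masses joined by springs."""
    force = np.zeros_like(pos)
    for i in range(len(pos) - 1):
        d = pos[i + 1] - pos[i]
        dist = np.linalg.norm(d)
        f = k * (dist - rest_len) * d / dist  # Hooke's law along the spring
        force[i] += f
        force[i + 1] -= f
    force -= c * vel                          # simple viscous damping
    acc = force / mass
    vel = vel + dt * acc                      # explicit Euler: v += dt*a
    pos = pos + dt * vel                      # then x += dt*v
    return pos, vel

# Three-node "tube" stretched beyond rest length relaxes back toward it.
pos = np.array([[0.0, 0.0], [1.5, 0.0], [3.0, 0.0]])
vel = np.zeros_like(pos)
for _ in range(2000):
    pos, vel = euler_step(pos, vel, rest_len=1.0)
print(np.linalg.norm(pos[1] - pos[0]))  # approaches the rest length 1.0
```

Explicit Euler is only conditionally stable, so a real simulator must keep the time step small relative to the spring stiffness (or add damping, as here) to avoid the integration blowing up.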
Infant Visual Attention and Object Recognition
Reynolds, Greg D.
2015-01-01
This paper explores the role visual attention plays in the recognition of objects in infancy. Research and theory on the development of infant attention and recognition memory are reviewed in three major sections. The first section reviews some of the major findings and theory emerging from a rich tradition of behavioral research utilizing preferential looking tasks to examine visual attention and recognition memory in infancy. The second section examines research utilizing neural measures of attention and object recognition in infancy as well as research on brain-behavior relations in the early development of attention and recognition memory. The third section addresses potential areas of the brain involved in infant object recognition and visual attention. An integrated synthesis of some of the existing models of the development of visual attention is presented which may account for the observed changes in behavioral and neural measures of visual attention and object recognition that occur across infancy. PMID:25596333
Recognition-induced forgetting is not due to category-based set size.
Maxcey, Ashleigh M
2016-01-01
What are the consequences of accessing a visual long-term memory representation? Previous work has shown that accessing a long-term memory representation via retrieval improves memory for the targeted item and hurts memory for related items, a phenomenon called retrieval-induced forgetting. Recently we found a similar forgetting phenomenon with recognition of visual objects. Recognition-induced forgetting occurs when practice recognizing an object during a two-alternative forced-choice task, from a group of objects learned at the same time, leads to worse memory for objects from that group that were not practiced. An alternative explanation of this effect is that category-based set size, rather than recognition practice, is inducing forgetting, as some researchers have claimed. This alternative explanation is possible because during recognition practice subjects make old-new judgments in a two-alternative forced-choice task, and are thus exposed to more objects from practiced categories, potentially inducing forgetting due to set size. Herein I pitted the category-based set size hypothesis against the recognition-induced forgetting hypothesis. To this end, I parametrically manipulated the amount of practice objects received in the recognition-induced forgetting paradigm. If forgetting is due to category-based set size, then the magnitude of forgetting of related objects will increase as the number of practice trials increases. If forgetting is recognition induced, the set size of exemplars from any given category should not be predictive of memory for practiced objects. Consistent with this latter hypothesis, additional practice systematically improved memory for practiced objects, but did not systematically affect forgetting of related objects. These results firmly establish that recognition practice induces forgetting of related memories. Future directions and important real-world applications of using recognition to access our visual memories of previously encountered objects are discussed.
Mechanisms of object recognition: what we have learned from pigeons
Soto, Fabian A.; Wasserman, Edward A.
2014-01-01
Behavioral studies of object recognition in pigeons have been conducted for 50 years, yielding a large body of data. Recent work has been directed toward synthesizing this evidence and understanding the visual, associative, and cognitive mechanisms that are involved. The outcome is that pigeons are likely to be the non-primate species for which the computational mechanisms of object recognition are best understood. Here, we review this research and suggest that a core set of mechanisms for object recognition might be present in all vertebrates, including pigeons and people, making pigeons an excellent candidate model to study the neural mechanisms of object recognition. Behavioral and computational evidence suggests that error-driven learning participates in object category learning by pigeons and people, and recent neuroscientific research suggests that the basal ganglia, which are homologous in these species, may implement error-driven learning of stimulus-response associations. Furthermore, learning of abstract category representations can be observed in pigeons and other vertebrates. Finally, there is evidence that feedforward visual processing, a central mechanism in models of object recognition in the primate ventral stream, plays a role in object recognition by pigeons. We also highlight differences between pigeons and people in object recognition abilities, and propose candidate adaptive specializations which may explain them, such as holistic face processing and rule-based category learning in primates. From a modern comparative perspective, such specializations are to be expected regardless of the model species under study. The fact that we have a good idea of which aspects of object recognition differ in people and pigeons should be seen as an advantage over other animal models. From this perspective, we suggest that there is much to learn about human object recognition from studying the “simple” brains of pigeons. PMID:25352784
O'Neil, Edward B; Watson, Hilary C; Dhillon, Sonya; Lobaugh, Nancy J; Lee, Andy C H
2015-09-01
Recent work has demonstrated that the perirhinal cortex (PRC) supports conjunctive object representations that aid object recognition memory following visual object interference. It is unclear, however, how these representations interact with other brain regions implicated in mnemonic retrieval and how congruent and incongruent interference influences the processing of targets and foils during object recognition. To address this, multivariate partial least squares was applied to fMRI data acquired during an interference match-to-sample task, in which participants made object or scene recognition judgments after object or scene interference. This revealed a pattern of activity sensitive to object recognition following congruent (i.e., object) interference that included PRC, prefrontal, and parietal regions. Moreover, functional connectivity analysis revealed a common pattern of PRC connectivity across interference and recognition conditions. Examination of eye movements during the same task in a separate study revealed that participants gazed more at targets than foils during correct object recognition decisions, regardless of interference congruency. By contrast, participants viewed foils more than targets for incorrect object memory judgments, but only after congruent interference. Our findings suggest that congruent interference makes object foils appear familiar and that a network of regions, including PRC, is recruited to overcome the effects of interference.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-16
... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-834] Certain Mobile Electronic Devices.... 1337 in the importation, sale for importation, and sale within the United States after importation of certain mobile electronic devices incorporating haptics, by reason of the infringement of claims of six...
The Use of Haptic Display Technology in Education
ERIC Educational Resources Information Center
Barfield, Woodrow
2009-01-01
The experience of "virtual reality" can consist of head-tracked and stereoscopic virtual worlds, spatialized sound, haptic feedback, and to a lesser extent olfactory cues. Although virtual reality systems have been proposed for numerous applications, the field of education is one particular application that seems well-suited for virtual…
ERIC Educational Resources Information Center
McLinden, M.
2012-01-01
This article provides a synthesis of literature pertaining to the development of haptic exploratory strategies in children who have visual impairment and intellectual disabilities. The information received through such strategies assumes particular significance for these children, given the restricted information available through their visual…
Azarnoush, Hamed; Siar, Samaneh; Sawaya, Robin; Zhrani, Gmaan Al; Winkler-Schwartz, Alexander; Alotaibi, Fahad Eid; Bugdadi, Abdulgadir; Bajunaid, Khalid; Marwa, Ibrahim; Sabbagh, Abdulrahman Jafar; Del Maestro, Rolando F
2017-07-01
OBJECTIVE Virtual reality simulators allow development of novel methods to analyze neurosurgical performance. The concept of a force pyramid is introduced as a Tier 3 metric with the ability to provide visual and spatial analysis of 3D force application by any instrument used during simulated tumor resection. This study was designed to answer 3 questions: 1) Do study groups have distinct force pyramids? 2) Do handedness and ergonomics influence force pyramid structure? 3) Are force pyramids dependent on the visual and haptic characteristics of simulated tumors? METHODS Using a virtual reality simulator, NeuroVR (formerly NeuroTouch), ultrasonic aspirator force application was continually assessed during resection of simulated brain tumors by neurosurgeons, residents, and medical students. The participants performed simulated resections of 18 simulated brain tumors with different visual and haptic characteristics. The raw data, namely, coordinates of the instrument tip as well as contact force values, were collected by the simulator. To provide a visual and qualitative spatial analysis of forces, the authors created a graph, called a force pyramid, representing the force sum along the z-coordinate for different xy coordinates of the tool tip. RESULTS Sixteen neurosurgeons, 15 residents, and 84 medical students participated in the study. Neurosurgeon, resident, and medical student groups displayed easily distinguishable 3D "force pyramid fingerprints." Neurosurgeons had the lowest force pyramids, indicating application of the lowest forces, followed by the resident and medical student groups. Handedness, ergonomics, and visual and haptic tumor characteristics resulted in distinct, well-defined 3D force pyramid patterns. CONCLUSIONS Force pyramid fingerprints provide 3D spatial assessment displays of instrument force application during simulated tumor resection. Neurosurgeon force utilization and ergonomic data form a basis for understanding and modulating resident force application and improving patient safety during tumor resection.
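The force pyramid metric described above, summing contact forces along z for each xy position of the tool tip, amounts to a weighted 2D histogram. The sketch below illustrates the idea with simulated data; the bin count, workspace extent, and force distribution are illustrative assumptions, not values from the study.

```python
import numpy as np

def force_pyramid(xy, forces, bins=10, extent=(0.0, 1.0)):
    """Sum force values into a bins x bins grid over the xy workspace."""
    pyramid, _, _ = np.histogram2d(
        xy[:, 0], xy[:, 1],
        bins=bins, range=[extent, extent],
        weights=forces,
    )
    return pyramid

# Simulated tool-tip samples: forces concentrated near the workspace centre.
rng = np.random.default_rng(1)
xy = rng.uniform(0.0, 1.0, size=(5000, 2))
dist_to_centre = np.linalg.norm(xy - 0.5, axis=1)
forces = np.exp(-8.0 * dist_to_centre**2)  # stronger presses near the centre

pyr = force_pyramid(xy, forces)
print(pyr[5, 5] > pyr[0, 0])  # the "pyramid" peaks where forces concentrate
```

Plotting such a grid as a 3D surface gives the "fingerprint" visualization: lower, flatter pyramids for experts who apply less force, taller ones for novices.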
Lee, K; Kim, M; Kim, K
2018-05-11
Skin surface evaluation has been studied using various imaging techniques. However, these studies had limited impact because they relied on visual examination only. To improve on this scenario with haptic feedback, we propose 3D reconstruction of the skin surface from a single image. Unlike extant 3D skin surface reconstruction algorithms, we utilize local texture and global curvature regions, combining the results for reconstruction. The first step entails the reconstruction of global curvature, achieved by bilateral filtering that removes noise on the surface while maintaining edges (i.e., furrows) to obtain the overall curvature. The second entails the reconstruction of local texture, representing the fine wrinkles of the skin, using an advanced form of bilateral filtering. The final image is then composed by merging the two reconstructed images. We tested the curvature reconstruction by comparing the resulting curvatures with values measured from real phantom objects, while the local texture reconstruction was verified by measuring skin surface roughness. We then demonstrated the proposed algorithm by reconstructing various real skin surfaces. The experimental results demonstrate that our approach is a promising technology for reconstructing an accurate skin surface from a single skin image. We proposed 3D skin surface reconstruction using only a single camera and highlighted the utility of global curvature, which has not been considered important in the past. Thus, we proposed a new method for 3D reconstruction that can be used for 3D haptic palpation, dividing the concepts of local and global regions. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
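The decomposition the abstract describes, bilateral filtering that smooths the height profile while preserving furrow edges (global curvature), with the residual carrying the fine texture, can be sketched in 1D as follows. The synthetic profile, window radius, and sigma values are illustrative assumptions, not the paper's method or parameters.

```python
import numpy as np

def bilateral_1d(signal, radius=5, sigma_space=3.0, sigma_range=0.2):
    """Edge-preserving smoothing of a 1D height profile."""
    out = np.empty_like(signal)
    offsets = np.arange(-radius, radius + 1)
    spatial_w = np.exp(-(offsets**2) / (2 * sigma_space**2))
    padded = np.pad(signal, radius, mode="edge")
    for i in range(len(signal)):
        window = padded[i:i + 2 * radius + 1]
        # Range weights suppress contributions from across sharp edges.
        range_w = np.exp(-((window - signal[i])**2) / (2 * sigma_range**2))
        w = spatial_w * range_w
        out[i] = np.sum(w * window) / np.sum(w)
    return out

# Synthetic skin profile: slow curvature + a sharp furrow + fine texture.
x = np.linspace(0.0, 1.0, 200)
rng = np.random.default_rng(2)
curvature = 0.5 * np.sin(2 * np.pi * x)        # global shape
furrow = np.where(x > 0.5, -1.0, 0.0)          # sharp edge to preserve
texture = 0.02 * rng.standard_normal(x.size)   # fine wrinkles
profile = curvature + furrow + texture

base = bilateral_1d(profile)   # ~ global curvature, with the furrow kept sharp
detail = profile - base        # ~ local texture layer
print(detail.std() < profile.std())  # the detail layer carries far less energy
```

Merging a separately processed `base` and `detail` back together is what yields the final height map in the two-region scheme.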
Visual-perceptual mismatch in robotic surgery.
Abiri, Ahmad; Tao, Anna; LaRocca, Meg; Guan, Xingmin; Askari, Syed J; Bisley, James W; Dutson, Erik P; Grundfest, Warren S
2017-08-01
The principal objective of the experiment was to analyze the effects of the clutch operation of robotic surgical systems on the performance of the operator. The relative coordinate system introduced by the clutch operation can create a visual-perceptual mismatch, which can potentially have a negative impact on a surgeon's performance. We also assess the impact of additional tactile sensory information on reducing the effect of visual-perceptual mismatch on operator performance. We asked 45 novice subjects to complete peg transfers using the da Vinci IS 1200 system with grasper-mounted normal-force sensors. The task involves picking up a peg with one of the robotic arms, passing it to the other arm, and then placing it on the opposite side of the view. Subjects were divided into three groups: the aligned group (no mismatch), the misaligned group (10 cm z-axis mismatch), and the haptics-misaligned group (haptic feedback and z-axis mismatch). Each subject performed the task five times, during which the grip force, time to completion, and number of faults were recorded. Compared to subjects who performed the task using a properly aligned controller/arm configuration, subjects with a single-axis misalignment showed significantly more peg drops (p = 0.011) and longer times to completion (p < 0.001). Additionally, the addition of tactile feedback helped reduce the negative effects of visual-perceptual mismatch in some cases. Grip force data recorded from the grasper-mounted sensors showed no difference between the groups. The visual-perceptual mismatch created by the misalignment of the robotic controls relative to the robotic arms has a negative impact on the operator of a robotic surgical system. Introducing other sensory information and haptic feedback systems can help potentially reduce this effect.
The Role of Perceptual Load in Object Recognition
ERIC Educational Resources Information Center
Lavie, Nilli; Lin, Zhicheng; Zokaei, Nahid; Thoma, Volker
2009-01-01
Predictions from perceptual load theory (Lavie, 1995, 2005) regarding object recognition across the same or different viewpoints were tested. Results showed that high perceptual load reduces distracter recognition levels despite always presenting distracter objects from the same view. They also showed that the levels of distracter recognition were…
Yamashita, Wakayo; Wang, Gang; Tanaka, Keiji
2010-01-01
One usually fails to recognize an unfamiliar object across changes in viewing angle when it has to be discriminated from similar distractor objects. Previous work has demonstrated that after long-term experience in discriminating among a set of objects seen from the same viewing angle, immediate recognition of the objects across 30-60 degrees changes in viewing angle becomes possible. The capability for view-invariant object recognition should develop during the within-viewing-angle discrimination, which includes two kinds of experience: seeing individual views and discriminating among the objects. The aim of the present study was to determine the relative contribution of each factor to the development of view-invariant object recognition capability. Monkeys were first extensively trained in a task that required view-invariant object recognition (Object task) with several sets of objects. The animals were then exposed to a new set of objects over 26 days in one of two preparatory tasks: one in which each object view was seen individually, and a second that required discrimination among the objects at each of four viewing angles. After the preparatory period, we measured the monkeys' ability to recognize the objects across changes in viewing angle, by introducing the object set to the Object task. Results indicated significant view-invariant recognition after the second but not first preparatory task. These results suggest that discrimination of objects from distractors at each of several viewing angles is required for the development of view-invariant recognition of the objects when the distractors are similar to the objects.
Audio Haptic Videogaming for Developing Wayfinding Skills in Learners Who are Blind
Sánchez, Jaime; de Borba Campos, Marcia; Espinoza, Matías; Merabet, Lotfi B.
2014-01-01
Interactive digital technologies are currently being developed as a novel tool for education and skill development. Audiopolis is an audio and haptic based videogame designed for developing orientation and mobility (O&M) skills in people who are blind. We have evaluated the cognitive impact of videogame play on O&M skills by assessing performance on a series of behavioral tasks carried out in both indoor and outdoor virtual spaces. Our results demonstrate that the use of Audiopolis had a positive impact on the development and use of O&M skills in school-aged learners who are blind. The impact of audio and haptic information on learning is also discussed. PMID:25485312
Social Touch Technology: A Survey of Haptic Technology for Social Touch.
Huisman, Gijs
2017-01-01
This survey provides an overview of work on haptic technology for social touch. Social touch has been studied extensively in psychology and neuroscience. With the development of new technologies, it is now possible to engage in social touch at a distance or engage in social touch with artificial social agents. Social touch research has inspired research into technology mediated social touch, and this line of research has found effects similar to actual social touch. The importance of haptic stimulus qualities, multimodal cues, and contextual factors in technology mediated social touch is discussed. This survey is concluded by reflecting on the current state of research into social touch technology, and providing suggestions for future research and applications.
Enhancing the Performance of Passive Teleoperation Systems via Cutaneous Feedback.
Pacchierotti, Claudio; Tirmizi, Asad; Bianchini, Gianni; Prattichizzo, Domenico
2015-01-01
We introduce a novel method to improve the performance of passive teleoperation systems with force reflection. It consists of integrating kinesthetic haptic feedback provided by common grounded haptic interfaces with cutaneous haptic feedback. The proposed approach can be used on top of any time-domain control technique that ensures a stable interaction by scaling down kinesthetic feedback when this is required to satisfy stability conditions (e.g., passivity) at the expense of transparency. Performance is recovered by providing a suitable amount of cutaneous force through custom wearable cutaneous devices. The viability of the proposed approach is demonstrated through an experiment of perceived stiffness and an experiment of teleoperated needle insertion in soft tissue.
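The compensation idea above, when a passivity-preserving controller scales the kinesthetic force down, the remainder is rendered through wearable cutaneous devices, reduces to a simple split of the desired force. This is a minimal sketch under the assumption of a linear split with a scaling factor alpha in (0, 1]; the numbers are illustrative, not from the paper.

```python
def split_feedback(desired_force, alpha):
    """Divide the desired force between kinesthetic and cutaneous channels.

    alpha is the fraction of force the stability (e.g., passivity)
    condition allows the grounded haptic interface to render; the
    remainder is delivered as cutaneous force on the fingertip skin.
    """
    kinesthetic = alpha * desired_force      # what the controller permits
    cutaneous = desired_force - kinesthetic  # remainder rendered on the skin
    return kinesthetic, cutaneous

# A stiff contact demands 4.0 N, but passivity only permits 60% of it.
kin, cut = split_feedback(4.0, alpha=0.6)
print(kin, cut)  # 2.4 1.6 -- the total still matches the desired 4.0 N
```

The point of the design is that cutaneous feedback does not inject energy into the control loop, so transparency is recovered without jeopardizing the stability condition that forced alpha below 1.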
Keep an eye on your hands: on the role of visual mechanisms in processing of haptic space
Zuidhoek, Sander; Noordzij, Matthijs L.; Kappers, Astrid M. L.
2008-01-01
The present paper reviews research on haptic orientation processing. Central is a task in which a test bar has to be set parallel to a reference bar at another location. Introducing a delay between inspecting the reference bar and setting the test bar leads to a surprising improvement. Moreover, offering visual background information also elevates performance. Interestingly, (congenitally) blind individuals do not show this improvement with time, or show it only weakly, and in parallel they appear to benefit less from spatial imagery processing. Together, this strongly points to an important role for visual processing mechanisms in the perception of haptic inputs. PMID:18196305
Learning from vision-to-touch is different than learning from touch-to-vision
Wismeijer, Dagmar A.; Gegenfurtner, Karl R.; Drewing, Knut
2012-01-01
We studied whether vision can teach touch to the same extent as touch seems to teach vision. In a 2 × 2 between-participants learning study, we artificially correlated visual gloss cues with haptic compliance cues. In two “natural” tasks, we tested whether visual gloss estimations have an influence on haptic estimations of softness and vice versa. In two “novel” tasks, in which participants were either asked to haptically judge glossiness or to visually judge softness, we investigated how perceptual estimates transfer from one sense to the other. Our results showed that vision does not teach touch as efficiently as touch seems to teach vision. PMID:23181012
User interaction in smart ambient environment targeted for senior citizen.
Pulli, Petri; Hyry, Jaakko; Pouke, Matti; Yamamoto, Goshiro
2012-11-01
Many countries face a problem as the age structure of their societies changes. The number of senior citizens is rising rapidly, and the number of caretaking personnel cannot keep pace with these citizens' problems and needs. Smart, ubiquitous technologies can offer ways of coping with the need for more nursing staff and the rising societal costs of caring for senior citizens. Helping senior citizens with a novel, easy-to-use interface that guides and assists them could improve their quality of living and make them participate more in daily activities. This paper presents a projection-based display system for elderly people with memory impairments and the proposed user interface for the system. The system's recognition of the user's activities, based on a sensor network, is also described. Elderly people wearing the system can interact with the projected user interface by tapping physical surfaces (such as walls, tables, or doors), using them as a natural, haptic-feedback input surface.
Whisking mechanics and active sensing
Bush, Nicholas E; Solla, Sara A
2017-01-01
We describe recent advances in quantifying the three-dimensional (3D) geometry and mechanics of whisking. Careful delineation of relevant 3D reference frames reveals important geometric and mechanical distinctions between the localization problem (‘where’ is an object) and the feature extraction problem (‘what’ is an object). Head-centered and resting-whisker reference frames lend themselves to quantifying temporal and kinematic cues used for object localization. The whisking-centered reference frame lends itself to quantifying the contact mechanics likely associated with feature extraction. We offer the ‘windowed sampling’ hypothesis for active sensing: that rats can estimate an object’s spatial features by integrating mechanical information across whiskers during brief (25–60 ms) windows of ‘haptic enclosure’ with the whiskers, a motion that resembles a hand grasp. PMID:27632212
Infant visual attention and object recognition.
Reynolds, Greg D
2015-05-15
This paper explores the role visual attention plays in the recognition of objects in infancy. Research and theory on the development of infant attention and recognition memory are reviewed in three major sections. The first section reviews some of the major findings and theory emerging from a rich tradition of behavioral research utilizing preferential looking tasks to examine visual attention and recognition memory in infancy. The second section examines research utilizing neural measures of attention and object recognition in infancy as well as research on brain-behavior relations in the early development of attention and recognition memory. The third section addresses potential areas of the brain involved in infant object recognition and visual attention. An integrated synthesis of some of the existing models of the development of visual attention is presented which may account for the observed changes in behavioral and neural measures of visual attention and object recognition that occur across infancy. Copyright © 2015 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Yeom, Soonja; Choi-Lundberg, Derek L.; Fluck, Andrew Edward; Sale, Arthur
2017-01-01
Purpose: This study aims to evaluate factors influencing undergraduate students' acceptance of a computer-aided learning resource using the Phantom Omni haptic stylus to enable rotation, touch and kinaesthetic feedback and display of names of three-dimensional (3D) human anatomical structures on a visual display. Design/methodology/approach: The…
Comparing Tactile Maps and Haptic Digital Representations of a Maritime Environment
ERIC Educational Resources Information Center
Simonnet, Mathieu; Vieilledent, Steephane; Jacobson, R. Daniel; Tisseau, Jacques
2011-01-01
A map exploration and representation exercise was conducted with participants who were totally blind. Representations of maritime environments were presented either with a tactile map or with a digital haptic virtual map. We assessed the knowledge of spatial configurations using a triangulation technique. The results revealed that both types of…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-06
... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-834] Certain Mobile Electronic Devices Incorporating Haptics; Institution of Investigation Pursuant to 19 U.S.C. 1337 AGENCY: U.S. International Trade.... International Trade Commission on February 7, 2012, and an amended complaint was filed with the U.S...
Feel, Imagine and Learn!--Haptic Augmented Simulation and Embodied Instruction in Physics Learning
ERIC Educational Resources Information Center
Han, In Sook
2010-01-01
The purpose of this study was to investigate the potentials and effects of an embodied instructional model in abstract concept learning. This embodied instructional process included haptic augmented educational simulation as an instructional tool to provide perceptual experiences as well as further instruction to activate those previous…
Let the Force Be with Us: Dyads Exploit Haptic Coupling for Coordination
ERIC Educational Resources Information Center
van der Wel, Robrecht P. R. D.; Knoblich, Guenther; Sebanz, Natalie
2011-01-01
People often perform actions that involve a direct physical coupling with another person, such as when moving furniture together. Here, we examined how people successfully coordinate such actions with others. We tested the hypothesis that dyads amplify their forces to create haptic information to coordinate. Participants moved a pole (resembling a…
Haptic Tracking Permits Bimanual Independence
ERIC Educational Resources Information Center
Rosenbaum, David A.; Dawson, Amanda A.; Challis, John H.
2006-01-01
This study shows that in a novel task--bimanual haptic tracking--neurologically normal human adults can move their 2 hands independently for extended periods of time with little or no training. Participants lightly touched buttons whose positions were moved either quasi-randomly in the horizontal plane by 1 or 2 human drivers (Experiment 1), in…
Unpacking Students' Conceptualizations through Haptic Feedback
ERIC Educational Resources Information Center
Magana, A. J.; Balachandran, S.
2017-01-01
While it is clear that the use of computer simulations has a beneficial effect on learning when compared to instruction without computer simulations, there is still room for improvement to fully realize their benefits for learning. Haptic technologies can fulfill the educational potential of computer simulations by adding the sense of touch.…
User Acceptance of a Haptic Interface for Learning Anatomy
ERIC Educational Resources Information Center
Yeom, Soonja; Choi-Lundberg, Derek; Fluck, Andrew; Sale, Arthur
2013-01-01
Visualizing the structure and relationships in three dimensions (3D) of organs is a challenge for students of anatomy. To provide an alternative way of learning anatomy engaging multiple senses, we are developing a force-feedback (haptic) interface for manipulation of 3D virtual organs, using design research methodology, with iterations of system…
Supramodality Effects in Visual and Haptic Spatial Processes
ERIC Educational Resources Information Center
Cattaneo, Zaira; Vecchi, Tomaso
2008-01-01
In this article, the authors investigated unimodal and cross-modal processes in spatial working memory. A number of locations had to be memorized within visual or haptic matrices according to different experimental conditions known to be critical in accounting for the effects of perception on imagery. Results reveal that some characteristics of…
Teaching Bovine Abdominal Anatomy: Use of a Haptic Simulator
ERIC Educational Resources Information Center
Kinnison, Tierney; Forrest, Neil David; Frean, Stephen Philip; Baillie, Sarah
2009-01-01
Traditional methods of teaching anatomy to undergraduate medical and veterinary students are being challenged and need to adapt to modern concerns and requirements. There is a move away from the use of cadavers to new technologies as a way of complementing the traditional approaches and addressing resource and ethical problems. Haptic (touch)…
What Aspects of Vision Facilitate Haptic Processing?
ERIC Educational Resources Information Center
Millar, Susanna; Al-Attar, Zainab
2005-01-01
We investigate how vision affects haptic performance when task-relevant visual cues are reduced or excluded. The task was to remember the spatial location of six landmarks that were explored by touch in a tactile map. Here, we use specially designed spectacles that simulate residual peripheral vision, tunnel vision, diffuse light perception, and…
Overview Electrotactile Feedback for Enhancing Human Computer Interface
NASA Astrophysics Data System (ADS)
Pamungkas, Daniel S.; Caesarendra, Wahyu
2018-04-01
To achieve effective interaction between a human and a computing device or machine, adequate feedback from the computing device or machine is required. Recently, haptic feedback has increasingly been utilised to improve the interactivity of the Human Computer Interface (HCI). Most existing haptic feedback enhancements aim at producing forces or vibrations to enrich the user's interactive experience. However, these force- and/or vibration-actuated haptic feedback systems can be bulky and uncomfortable to wear, and are capable of delivering only a limited amount of information to the user, which can limit both their effectiveness and the applications to which they can be applied. To address this deficiency, electrotactile feedback is used. This involves delivering haptic sensations to the user by electrically stimulating nerves in the skin via electrodes placed on the skin's surface. This paper presents a review and explores the capability of electrotactile feedback for HCI applications. In addition, the sensory receptors within the skin for sensing tactile stimuli and electric currents are described, and several factors that influence the transmission of electric signals to the brain via the skin are explained.
Design of high-fidelity haptic display for one-dimensional force reflection applications
NASA Astrophysics Data System (ADS)
Gillespie, Brent; Rosenberg, Louis B.
1995-12-01
This paper discusses the development of a virtual reality platform for the simulation of medical procedures which involve needle insertion into human tissue. The paper's focus is the hardware and software requirements for haptic display of a particular medical procedure known as epidural analgesia. To perform this delicate manual procedure, an anesthesiologist must carefully guide a needle through various layers of tissue using only haptic cues for guidance. As a simplifying aspect for the simulator design, all motions and forces involved in the task occur along a fixed line once insertion begins. To create a haptic representation of this procedure, we have explored both physical modeling and perceptual modeling techniques. A preliminary physical model was built based on CT-scan data of the operative site. A preliminary perceptual model was built based on current training techniques for the procedure provided by a skilled instructor. We compare and contrast these two modeling methods and discuss the implications of each. We select and defend the perceptual model as a superior approach for the epidural analgesia simulator.
Casellato, Claudia; Pedrocchi, Alessandra; Zorzi, Giovanna; Vernisse, Lea; Ferrigno, Giancarlo; Nardocci, Nardo
2013-05-01
New insights suggest that dystonic motor impairments could also involve a deficit of sensory processing. In this framework, biofeedback, which makes covert physiological processes more overt, could be useful. The present work proposes an innovative integrated setup which provides the user with electromyogram (EMG)-based visual-haptic biofeedback during upper limb movements (spiral tracking tasks), to test whether augmented sensory feedback can induce motor control improvement in patients with primary dystonia. The real-time control algorithm, developed ad hoc, synchronizes the haptic loop with the EMG reading; the brachioradialis EMG values were used to modify visual and haptic features of the interface: the higher the EMG level, the higher the virtual table friction, and the background color moved proportionally from green to red. From recordings of dystonic and healthy subjects, statistical results showed that biofeedback has a significant impact, correlated with the local impairment, on dystonic muscular control. These tests pointed out the effectiveness of biofeedback paradigms in gaining better specific-muscle voluntary motor control. The flexible tool developed here shows promising prospects for clinical applications and sensorimotor rehabilitation.
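The EMG-to-feedback mapping the abstract describes (higher EMG level, higher virtual table friction, background shifting proportionally from green to red) can be sketched as a simple linear mapping; the function name, parameter names and ranges below are illustrative assumptions, not values from the study:

```python
def biofeedback_mapping(emg, emg_min, emg_max, mu_min=0.1, mu_max=1.0):
    """Map a rectified/smoothed EMG level to haptic and visual feedback.

    Returns (friction, rgb): friction grows linearly with the normalized
    EMG level, and the background colour moves proportionally from pure
    green (low EMG) to pure red (high EMG).
    """
    level = (emg - emg_min) / (emg_max - emg_min)
    level = min(max(level, 0.0), 1.0)            # clamp to [0, 1]
    friction = mu_min + level * (mu_max - mu_min)
    rgb = (level, 1.0 - level, 0.0)              # green -> red
    return friction, rgb
```

In a real-time loop this mapping would be evaluated once per haptic update, with `emg_min`/`emg_max` calibrated per subject and muscle.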
Haptic display for the VR arthroscopy training simulator
NASA Astrophysics Data System (ADS)
Ziegler, Rolf; Brandt, Christoph; Kunstmann, Christian; Mueller, Wolfgang; Werkhaeuser, Holger
1997-05-01
A specific desire to find new training methods arose from the new field called 'minimally invasive surgery.' With technical advances, modern video arthroscopy became the standard procedure in the OR. Holding the optical system with the video camera in one hand and watching the operation field on the monitor, the surgeon's other hand is free to guide, e.g., a probe. As arthroscopy became a more common procedure, it became obvious that some sort of special training was necessary to guarantee a certain level of qualification of the surgeons. Therefore, a hospital in Frankfurt, Germany approached the Fraunhofer Institute for Computer Graphics to develop a training system for arthroscopy based on VR techniques. The main drawback of the developed simulator is the lack of haptic perception, especially of force feedback. In cooperation with the Department of Electro-Mechanical Construction at Darmstadt Technical University, we have designed and built a haptic display for the VR arthroscopy training simulator. In parallel, we developed a concept for integrating the haptic display in a configurable way.
Open Touch/Sound Maps: A system to convey street data through haptic and auditory feedback
NASA Astrophysics Data System (ADS)
Kaklanis, Nikolaos; Votis, Konstantinos; Tzovaras, Dimitrios
2013-08-01
The use of spatial (geographic) information is becoming ever more central and pervasive in today's internet society, but most of it is currently inaccessible to visually impaired users. Access to visual maps is severely restricted for visually impaired and blind people because of their inability to interpret graphical information. Thus, alternative ways of presenting a map have to be explored in order to improve the accessibility of maps. Multiple types of sensory perception, such as touch and hearing, may work as a substitute for vision in the exploration of maps. The use of multimodal virtual environments seems to be a promising alternative for people with visual impairments. The present paper introduces a tool for automatic multimodal map generation with haptic and audio feedback using OpenStreetMap data. For a desired map area, an elevation map is automatically generated and can be explored by touch using a haptic device. A sonification and a text-to-speech (TTS) mechanism also provide audio navigation information during the haptic exploration of the map.
Topographic modelling of haptic properties of tissue products
NASA Astrophysics Data System (ADS)
Rosen, B.-G.; Fall, A.; Rosen, S.; Farbrot, A.; Bergström, P.
2014-03-01
The way a product or material feels when touched, its haptics, has been shown to be a property that plays an important role when consumers determine the quality of products. For tissue products in constant touch with the skin, "softness" becomes a primary quality parameter. In the present work, the relationship between topography and the feel of the surface was investigated for commercial tissues with varying degrees of texture, from low-textured crepe tissue to highly textured embossed and air-dried tissue products. A trained sensory panel was used to grade perceived haptic "roughness". Topography was characterized with the digital light projection (DLP) technique, using areal ISO 25178-2 topography parameters in combination with non-contacting topography measurement. By the use of multivariate statistics, strong correlations between perceived roughness and topography were found, with predictability above 90 percent even though highly textured products were included. The best prediction ability was obtained when combining the haptic grades with the topography parameters auto-correlation length (Sal), peak material volume (Vmp), core roughness depth (Sk) and maximum height of the surface (Sz).
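The kind of multivariate fit described, predicting panel roughness grades from the ISO 25178-2 parameters Sal, Vmp, Sk and Sz, can be sketched with an ordinary least-squares regression; the data below are synthetic stand-ins generated from a known linear rule, not the paper's measurements:

```python
import numpy as np

# One row per tissue sample; columns are the areal parameters named in
# the abstract: Sal, Vmp, Sk, Sz. Values are illustrative, not measured.
X = np.array([[0.12, 0.8, 1.5, 6.0],
              [0.30, 1.2, 2.8, 9.5],
              [0.45, 1.9, 4.1, 14.0],
              [0.20, 1.0, 2.0, 7.5],
              [0.55, 2.3, 5.0, 16.5],
              [0.35, 1.5, 3.0, 11.0]])
# Synthetic "panel roughness grades" from an assumed linear relationship.
y = 0.5 + X @ np.array([4.0, 0.8, 0.3, 0.1])

A = np.column_stack([np.ones(len(X)), X])        # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)     # ordinary least squares
pred = A @ coef

ss_res = float(np.sum((y - pred) ** 2))
ss_tot = float(np.sum((y - y.mean()) ** 2))
r2 = 1.0 - ss_res / ss_tot                       # coefficient of determination
```

With real panel data the fit would be cross-validated; the >90 percent predictability reported in the abstract corresponds to a high r2 on held-out samples.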
Using postural synergies to animate a low-dimensional hand avatar in haptic simulation.
Mulatto, Sara; Formaglio, Alessandro; Malvezzi, Monica; Prattichizzo, Domenico
2013-01-01
A technique to animate a realistic hand avatar with 20 DoFs, based on the biomechanics of the human hand, is presented. The animation does not use any sensor glove or advanced tracker with markers. The proposed approach is based on the knowledge of a set of kinematic constraints on the model of the hand, referred to as postural synergies, which allows the hand posture to be represented using fewer variables than the number of joints of the hand model. This low-dimensional set of parameters is estimated from direct measurement of the motion of the thumb and index finger, tracked using two haptic devices. A kinematic inversion algorithm has been developed which takes synergies into account and estimates the kinematic configuration of the whole hand, i.e., also of the fingers whose tips are not directly tracked by the two haptic devices. The hand skin is deformable, and its deformation is computed using a linear vertex blending technique. The proposed synergy-based animation of the hand avatar involves only algebraic computations and is suitable for real-time implementation, as required in haptics.
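The synergy-based reconstruction the abstract outlines, estimating a low-dimensional synergy vector from a few tracked joints and recovering the full hand posture, can be sketched as follows; the dimensions, data, and variable names are synthetic assumptions, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "grasp postures": 20 joint angles driven by 2 latent synergies.
n_joints, n_syn, n_samples = 20, 2, 200
S_true = rng.normal(size=(n_joints, n_syn))
z = rng.normal(size=(n_syn, n_samples))
Q = S_true @ z                                   # postures, one per column

# Take the synergies as the leading principal directions of the postures.
U, _, _ = np.linalg.svd(Q, full_matrices=False)
S = U[:, :n_syn]                                 # 20 x 2 synergy matrix

# Only a few joints (standing in for those tracked by the two haptic
# devices) are measured; estimate the synergy coordinates from them ...
measured = [0, 1, 2, 3, 4]
q_full = S_true @ rng.normal(size=n_syn)         # a new posture to recover
z_hat, *_ = np.linalg.lstsq(S[measured], q_full[measured], rcond=None)

# ... and reconstruct the whole 20-DoF posture, untracked joints included.
q_hat = S @ z_hat
err = float(np.max(np.abs(q_hat - q_full)))
```

Because the new posture lies in the synergy subspace, the least-squares inversion from five measured joints recovers all twenty joint angles; with real hand data the reconstruction is approximate rather than exact.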
Human-arm-and-hand-dynamic model with variability analyses for a stylus-based haptic interface.
Fu, Michael J; Cavuşoğlu, M Cenk
2012-12-01
Haptic interface research benefits from accurate human arm models for control and system design. The literature contains many human arm dynamic models but lacks detailed variability analyses. Without accurate measurements, variability is modeled in a very conservative manner, leading to less than optimal controller and system designs. This paper not only presents models for human arm dynamics but also develops inter- and intrasubject variability models for a stylus-based haptic device. Data from 15 human subjects (nine male, six female, ages 20-32) were collected using a Phantom Premium 1.5a haptic device for system identification. In this paper, grip-force-dependent models were identified for 1-3 N grip forces in the three spatial axes. Also, variability due to human subjects and grip-force variation was modeled as both structured and unstructured uncertainties. For both forms of variability, the maximum variation and the 95% and 67% confidence interval limits were examined. All models were in the frequency domain with force as input and position as output. The identified models enable precise controllers targeted to a subset of possible human operator dynamics.
Surgical virtual reality - highlights in developing a high performance surgical haptic device.
Custură-Crăciun, D; Cochior, D; Constantinoiu, S; Neagu, C
2013-01-01
Just as simulators are a standard in aviation and the aerospace sciences, we expect surgical simulators to soon become a standard in medical applications. These will correctly instruct future doctors in surgical techniques without the need for hands-on instruction on patients. Using virtual reality to digitally transpose surgical procedures changes surgery in a revolutionary manner, offering possibilities for implementing new, much more efficient learning methods, allowing the practice of new surgical techniques, and improving surgeons' abilities and skills. Perfecting haptic devices has opened the door to a series of opportunities in the fields of research, industry, nuclear science and medicine. Concepts purely theoretical at first, such as telerobotics, telepresence or telerepresentation, have become a practical reality as computing techniques, telecommunications and haptic devices have evolved, and virtual reality has taken a new leap. In the field of surgery, barriers and controversies still remain regarding the implementation and generalization of surgical virtual simulators. These obstacles remain connected to the high costs of this not yet sufficiently developed technology, especially in the domain of haptic devices.
Obstacle Crossing Differences Between Blind and Blindfolded Subjects After Haptic Exploration.
Forner-Cordero, Arturo; Garcia, Valéria D; Rodrigues, Sérgio T; Duysens, Jacques
2016-01-01
Little is known about the ability of blind people to cross obstacles after haptically exploring their size and position. Long-term absence of vision may affect spatial cognition in the blind, while their extensive experience with the use of haptic information for guidance may lead to compensation strategies. Seven blind and 7 sighted participants (with vision available and blindfolded) walked along a flat pathway and crossed an obstacle after a haptic exploration. Blind and blindfolded subjects used different strategies to cross the obstacle. After the first 20 trials, the blindfolded subjects reduced the distance between the foot and the obstacle at the toe-off instant, while the blind behaved like the subjects with full vision. Blind and blindfolded participants showed larger foot clearance than participants with vision. At foot landing, the hip was further behind the foot in the blindfolded condition, while there were no differences between the blind and the vision conditions. For several parameters of the obstacle crossing task, blind people were more similar to subjects with full vision, indicating that the blind subjects were able to compensate for the lack of vision.
NASA's Hybrid Reality Lab: One Giant Leap for Full Dive
NASA Technical Reports Server (NTRS)
Delgado, Francisco J.; Noyes, Matthew
2017-01-01
This presentation demonstrates how NASA is using consumer VR headsets, game engine technology and NVIDIA GPUs to create highly immersive future training systems augmented with extremely realistic haptic feedback, sound and additional sensory information, and how these can be used to improve the engineering workflow. Included in this presentation are an environment simulation of the ISS, where users can interact with virtual objects, handrails, and tracked physical objects while inside VR; the integration of consumer VR headsets with the Active Response Gravity Offload System; and a space habitat architectural evaluation tool. Attendees will learn how the best elements of real and virtual worlds can be combined into a hybrid reality environment with tangible engineering and scientific applications.
Distinct roles of basal forebrain cholinergic neurons in spatial and object recognition memory.
Okada, Kana; Nishizawa, Kayo; Kobayashi, Tomoko; Sakata, Shogo; Kobayashi, Kazuto
2015-08-06
Recognition memory requires processing of various types of information such as objects and locations. Impairment in recognition memory is a prominent feature of amnesia and a symptom of Alzheimer's disease (AD). Basal forebrain cholinergic neurons contain two major groups, one localized in the medial septum (MS)/vertical diagonal band of Broca (vDB), and the other in the nucleus basalis magnocellularis (NBM). The roles of these cell groups in recognition memory have been debated, and it remains unclear how they contribute to it. We use a genetic cell targeting technique to selectively eliminate cholinergic cell groups and then test spatial and object recognition memory through different behavioural tasks. Eliminating MS/vDB neurons impairs spatial but not object recognition memory in the reference and working memory tasks, whereas NBM elimination undermines only object recognition memory in the working memory task. These impairments are restored by treatment with acetylcholinesterase inhibitors, anti-dementia drugs for AD. Our results highlight that MS/vDB and NBM cholinergic neurons are not only implicated in recognition memory but also have essential roles in different types of recognition memory.