Single-unit analysis of somatosensory processing in the core auditory cortex of hearing ferrets.
Meredith, M Alex; Allman, Brian L
2015-03-01
Recent studies in several species have found that the primary auditory cortex processes non-auditory information, but have largely overlooked the possibility of somatosensory effects. Therefore, the present investigation examined the core auditory cortices (anterior auditory field and primary auditory cortex) for tactile responsivity. Multiple single-unit recordings from anesthetized ferret cortex yielded histologically verified neurons (n = 311) tested with electronically controlled auditory, visual and tactile stimuli, and their combinations. Of the auditory neurons tested, a small proportion (17%) was influenced by visual cues, but a somewhat larger proportion (23%) was affected by tactile stimulation. Tactile effects rarely occurred alone and spiking responses were observed in bimodal auditory-tactile neurons. However, the broadest tactile effect that was observed, which occurred in all neuron types, was suppression of the response to a concurrent auditory cue. The presence of tactile effects in the core auditory cortices was supported by a substantial anatomical projection from the rostral suprasylvian sulcal somatosensory area. Collectively, these results demonstrate that crossmodal effects in the auditory cortex are not exclusively visual, that somatosensation plays a significant role in the modulation of acoustic processing, and that crossmodal plasticity following deafness may unmask these existing non-auditory functions. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
How Do Batters Use Visual, Auditory, and Tactile Information about the Success of a Baseball Swing?
ERIC Educational Resources Information Center
Gray, Rob
2009-01-01
Bat/ball contact produces visual (the ball leaving the bat), auditory (the "crack" of the bat), and tactile (bat vibration) feedback about the success of the swing. We used a batting simulation to investigate how college baseball players use visual, tactile, and auditory feedback. In Experiment 1, swing accuracy (i.e., the lateral separation…
Gainotti, Guido
2010-02-01
The aim of the present survey was to review scientific articles dealing with the non-visual (auditory and tactile) forms of neglect to determine: (a) whether behavioural patterns similar to those observed in the visual modality can also be observed in the non-visual modalities; (b) whether a different severity of neglect can be found in the visual and in the auditory and tactile modalities; (c) the reasons for the possible differences between the visual and non-visual modalities. Data pointing to a contralesional orienting of attention in the auditory and the tactile modalities in visual neglect patients were separately reviewed. Results showed: (a) that in patients with right brain damage, manifestations of neglect for the contralesional side of space can be found not only in the visual but also in the auditory and tactile modalities; (b) that the severity of neglect is greater in the visual than in the non-visual modalities. This asymmetry in the severity of neglect across modalities seems due to the greater role that the automatic capture of attention by irrelevant ipsilesional stimuli plays in the visual modality. Copyright 2009 Elsevier Srl. All rights reserved.
ERIC Educational Resources Information Center
Mullen, Stuart; Dixon, Mark R.; Belisle, Jordan; Stanley, Caleb
2017-01-01
The current study sought to evaluate the efficacy of a stimulus equivalence training procedure in establishing auditory-tactile-visual stimulus classes with 2 children with autism and developmental delays. Participants were exposed to vocal-tactile (A-B) and tactile-picture (B-C) conditional discrimination training and were tested for the…
Achilles' ear? Inferior human short-term and recognition memory in the auditory modality.
Bigelow, James; Poremba, Amy
2014-01-01
Studies of the memory capabilities of nonhuman primates have consistently revealed a relative weakness for auditory compared to visual or tactile stimuli: extensive training is required to learn auditory memory tasks, and subjects are only capable of retaining acoustic information for a brief period of time. Whether a parallel deficit exists in human auditory memory remains an outstanding question. In the current study, a short-term memory paradigm was used to test human subjects' retention of simple auditory, visual, and tactile stimuli that were carefully equated in terms of discriminability, stimulus exposure time, and temporal dynamics. Mean accuracy did not differ significantly among sensory modalities at very short retention intervals (1-4 s). However, at longer retention intervals (8-32 s), accuracy for auditory stimuli fell substantially below that observed for visual and tactile stimuli. In the interest of extending the ecological validity of these findings, a second experiment tested recognition memory for complex, naturalistic stimuli that would likely be encountered in everyday life. Subjects were able to identify all stimuli when retention was not required; however, recognition accuracy following a delay period was again inferior for auditory compared to visual and tactile stimuli. Thus, the outcomes of both experiments provide a human parallel to the pattern of results observed in nonhuman primates. The results are interpreted in light of neuropsychological data from nonhuman primates, which suggest a difference in the degree to which auditory, visual, and tactile memory are mediated by the perirhinal and entorhinal cortices.
Matching Teaching and Learning Styles.
ERIC Educational Resources Information Center
Caudill, Gil
1998-01-01
Outlines three basic learning modalities--auditory, visual, and tactile--and notes that technology can help incorporate multiple modalities within each lesson, to meet the needs of most students. Discusses the importance in multiple modality teaching of effectively assessing students. Presents visual, auditory and tactile activity suggestions.…
Thinking about touch facilitates tactile but not auditory processing.
Anema, Helen A; de Haan, Alyanne M; Gebuis, Titia; Dijkerman, H Chris
2012-05-01
Mental imagery is considered to be important for normal conscious experience. It is most frequently investigated in the visual, auditory and motor domains (imagination of movement), while studies on tactile imagery (imagination of touch) are scarce. The current study investigated the effect of tactile and auditory imagery on left/right discriminations of tactile and auditory stimuli. In line with our hypothesis, we observed that after tactile imagery, tactile stimuli were responded to faster than auditory stimuli, and vice versa. On average, tactile stimuli were responded to faster than auditory stimuli, and stimuli in the imagery condition were, on average, responded to more slowly than in the baseline condition (left/right discrimination without an imagery assignment). The former is probably due to the spatial and somatotopic proximity of the fingers receiving the taps and the thumbs performing the response (button press), the latter to a dual-task cost. Together, these results provide the first evidence of a behavioural effect of a tactile imagery assignment on the perception of real tactile stimuli.
ERIC Educational Resources Information Center
Gersten, Susan G. Liss
A study was conducted to determine whether visual linguistic numeric, auditory linguistic numeric, and tactile concrete learners differ significantly in study habits, study attitudes, and study orientation from their low visual linguistic numeric, low auditory linguistic numeric, and low tactile concrete counterparts. Data were…
Is More Better? - Night Vision Enhancement System's Pedestrian Warning Modes and Older Drivers.
Brown, Timothy; He, Yefei; Roe, Cheryl; Schnell, Thomas
2010-01-01
Pedestrian fatalities as a result of vehicle collisions are much more likely to happen at night than during the daytime. Poor visibility due to darkness is believed to be one of the causes of the higher vehicle collision rate at night. Existing studies have shown that night vision enhancement systems (NVES) may improve recognition distance, but may increase drivers' workload. The use of automatic warnings (AW) may help minimize workload, improve performance, and increase safety. In this study, we used a driving simulator to examine performance differences of an NVES with six different configurations of warning cues, including: visual, auditory, tactile, auditory and visual, tactile and visual, and no warning. Older drivers between the ages of 65 and 74 participated in the study. An analysis based on the distance to the pedestrian threat at the onset of the braking response revealed that tactile and auditory warnings performed the best, while visual warnings performed the worst. When tactile or auditory warnings were presented in combination with a visual warning, their effectiveness decreased. This result demonstrates that, contrary to conventional wisdom regarding warning systems, multi-modal warnings involving visual cues degraded the effectiveness of NVES for older drivers.
Effects of Visual, Auditory, and Tactile Alerts on Platoon Leader Performance and Decision Making
Krausman, Andrea S.; Elliott, Linda R.; Pettitt, Rodger A.
2005-12-01
Auditory, Tactile, and Audiotactile Information Processing Following Visual Deprivation
ERIC Educational Resources Information Center
Occelli, Valeria; Spence, Charles; Zampini, Massimiliano
2013-01-01
We highlight the results of those studies that have investigated the plastic reorganization processes that occur within the human brain as a consequence of visual deprivation, as well as how these processes give rise to behaviorally observable changes in the perceptual processing of auditory and tactile information. We review the evidence showing…
To what extent do Gestalt grouping principles influence tactile perception?
Gallace, Alberto; Spence, Charles
2011-07-01
Since their formulation by the Gestalt movement more than a century ago, the principles of perceptual grouping have primarily been investigated in the visual modality and, to a lesser extent, in the auditory modality. The present review addresses the question of whether the same grouping principles also affect the perception of tactile stimuli. Although, to date, only a few studies have explicitly investigated the existence of Gestalt grouping principles in the tactile modality, we argue that many more studies have indirectly provided evidence relevant to this topic. Reviewing this body of research, we argue that similar principles to those reported previously in visual and auditory studies also govern the perceptual grouping of tactile stimuli. In particular, we highlight evidence showing that the principles of proximity, similarity, common fate, good continuation, and closure affect tactile perception in both unimodal and crossmodal settings. We also highlight that the grouping of tactile stimuli is often affected by visual and auditory information that happens to be presented simultaneously. Finally, we discuss the theoretical and applied benefits that might pertain to the further study of Gestalt principles operating in both unisensory and multisensory tactile perception.
Audio-Visual, Visuo-Tactile and Audio-Tactile Correspondences in Preschoolers.
Nava, Elena; Grassi, Massimo; Turati, Chiara
2016-01-01
Interest in crossmodal correspondences has recently seen a renaissance thanks to numerous studies in human adults. Yet, still very little is known about crossmodal correspondences in children, particularly in sensory pairings other than audition and vision. In the current study, we investigated whether 4-5-year-old children match auditory pitch to the spatial motion of visual objects (audio-visual condition). In addition, we investigated whether this correspondence extends to touch, i.e., whether children also match auditory pitch to the spatial motion of touch (audio-tactile condition) and the spatial motion of visual objects to touch (visuo-tactile condition). In two experiments, two different groups of children were asked to indicate which of two stimuli fitted best with a centrally located third stimulus (Experiment 1), or to report whether two presented stimuli fitted together well (Experiment 2). We found sensitivity to the congruency of all of the sensory pairings only in Experiment 2, suggesting that only under specific circumstances can these correspondences be observed. Our results suggest that pitch-height correspondences for audio-visual and audio-tactile combinations may still be weak in preschool children; we speculate that this could be due to immature linguistic and auditory cues that are still developing at age five.
Baldwin, Carryl L; Eisert, Jesse L; Garcia, Andre; Lewis, Bridget; Pratt, Stephanie M; Gonzalez, Christian
2012-01-01
Through a series of investigations involving different levels of contextual fidelity, we developed scales of perceived urgency for several dimensions of the auditory, visual, and tactile modalities. Psychophysical ratings of perceived urgency, annoyance, and acceptability, as well as behavioral responses to signals in each modality, were obtained and analyzed using Stevens' Power Law to allow comparison across modalities. The obtained results and their implications for use as in-vehicle alerts and warnings are discussed.
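For context, the analysis mentioned above rests on Stevens' Power Law; a minimal statement of the law as it is typically applied to urgency scaling follows (the symbols are generic placeholders, not values taken from the study):

```latex
% Stevens' Power Law: perceived magnitude grows as a power of physical intensity.
%   \psi  : perceived magnitude (here, rated urgency)
%   I     : physical signal intensity
%   k     : modality-specific scaling constant
%   \beta : fitted exponent
\psi(I) = k\,I^{\beta}
% Taking logarithms linearizes the law,
%   \log\psi = \log k + \beta \log I ,
% so fitted exponents (log-log slopes) can be compared across modalities.
```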
DOT National Transportation Integrated Search
1962-05-01
Tactile communication presents a relatively unexploited channel of information transmission in the field of aviation. Visual and auditory input channels frequently reach an information saturation point during various flight operations. A cutaneous co...
Teaching for Different Learning Styles.
ERIC Educational Resources Information Center
Cropper, Carolyn
1994-01-01
This study examined learning styles in 137 high ability fourth-grade students. All students were administered two learning styles inventories. Characteristics of students with the following learning styles are summarized: auditory language, visual language, auditory numerical, visual numerical, tactile concrete, individual learning, group…
Shopper, M
1978-01-01
The role of audition as an important perceptual modality in early psychic development has been neglected. Some reasons for this neglect are suggested. In the development of psychoanalytic technique, the analyst has changed from a "tactile presence" to a "visual presence," then finally, with the analyst positioning himself behind the couch, to an "auditory presence." Several clinical examples from analytic patients as well as child development in normal and deaf children provide instances of each type of perceptual "presence." It is suggested that, in evaluating analyzability, analysis requires a specific ego ability, namely, tolerance for the analyst as an "auditory presence." It is emphasized that some patients, for reasons of development, constitution, and/or significant stress (separation), cannot work with the analyst as an "auditory presence," but regress to the analyst as a "visual" or "tactile" presence. The importance of audition in early mother/stranger differentiations, and in the peek-a-boo game, is a developmental precursor to the use of audition as a contact modality in the separation-individuation phase. Audition permits active locomotion and separation from tactile and visual contact modalities between toddler and mother, while at the same time maintaining contact via their respective "auditory presence" for each other. The utilization of the pull-toy in mastering the conflicts of the separation-individuation phase is demonstrated. The pull-toy is heir to the teddy bear and ancestor to the tricycle. Greater attentiveness to the auditory perceptual modality may help us understand developmental phenomena, better evaluate the potential analysand, and clarify clinical problems of audition occurring in dreams and those areas of psychopathology having to do with auditory phenomena. The more refined tripartite concept of "presence" as it relates to the predominant perceptual modality--tactile, visual, auditory--is felt to be a useful conceptualization for both developmental and clinical understanding.
Task-specific reorganization of the auditory cortex in deaf humans
Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin
2017-01-01
The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior–lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain.
Lu, Sara A; Wickens, Christopher D; Prinet, Julie C; Hutchins, Shaun D; Sarter, Nadine; Sebok, Angelia
2013-08-01
The aim of this study was to integrate empirical data showing the effects of interrupting task modality on the performance of an ongoing visual-manual task and the interrupting task itself. The goal is to support interruption management and the design of multimodal interfaces. Multimodal interfaces have been proposed as a promising means to support interruption management. To ensure the effectiveness of this approach, their design needs to be based on an analysis of empirical data concerning the effectiveness of individual and redundant channels of information presentation. Three meta-analyses were conducted to contrast performance on an ongoing visual task and interrupting tasks as a function of interrupting task modality (auditory vs. tactile, auditory vs. visual, and single modality vs. redundant auditory-visual). In total, 68 studies were included and six moderator variables were considered. The main findings from the meta-analyses are that response times are faster for tactile interrupting tasks for low-urgency messages. Accuracy is higher with tactile interrupting tasks for low-complexity signals but higher with auditory interrupting tasks for high-complexity signals. Redundant auditory-visual combinations are preferable for communication tasks during high workload and with a small visual angle of separation. The three meta-analyses contribute to the knowledge base in multimodal information processing and design. They highlight the importance of moderator variables in predicting the effects of interruption task modality on ongoing and interrupting task performance. The findings from this research will help inform the design of multimodal interfaces in data-rich, event-driven domains.
Audio-tactile integration and the influence of musical training.
Kuchenbuch, Anja; Paraskevopoulos, Evangelos; Herholz, Sibylle C; Pantev, Christo
2014-01-01
Perception of our environment is a multisensory experience; information from different sensory systems like the auditory, visual and tactile is constantly integrated. Complex tasks that require high temporal and spatial precision of multisensory integration put strong demands on the underlying networks but it is largely unknown how task experience shapes multisensory processing. Long-term musical training is an excellent model for brain plasticity because it shapes the human brain at functional and structural levels, affecting a network of brain areas. In the present study we used magnetoencephalography (MEG) to investigate how audio-tactile perception is integrated in the human brain and if musicians show enhancement of the corresponding activation compared to non-musicians. Using a paradigm that allowed the investigation of combined and separate auditory and tactile processing, we found a multisensory incongruency response, generated in frontal, cingulate and cerebellar regions, an auditory mismatch response generated mainly in the auditory cortex and a tactile mismatch response generated in frontal and cerebellar regions. The influence of musical training was seen in the audio-tactile as well as in the auditory condition, indicating enhanced higher-order processing in musicians, while the sources of the tactile MMN were not influenced by long-term musical training. Consistent with the predictive coding model, more basic, bottom-up sensory processing was relatively stable and less affected by expertise, whereas areas for top-down models of multisensory expectancies were modulated by training.
Electrotactile and vibrotactile displays for sensory substitution systems
NASA Technical Reports Server (NTRS)
Kaczmarek, Kurt A.; Webster, John G.; Bach-Y-rita, Paul; Tompkins, Willis J.
1991-01-01
Sensory substitution systems provide their users with environmental information through a human sensory channel (eye, ear, or skin) different from that normally used or with the information processed in some useful way. The authors review the methods used to present visual, auditory, and modified tactile information to the skin and discuss present and potential future applications of sensory substitution, including tactile vision substitution (TVS), tactile auditory substitution, and remote tactile sensing or feedback (teletouch). The relevant sensory physiology of the skin, including the mechanisms of normal touch and the mechanisms and sensations associated with electrical stimulation of the skin using surface electrodes (electrotactile, or electrocutaneous, stimulation), is reviewed. The information-processing ability of the tactile sense and its relevance to sensory substitution is briefly summarized. The limitations of current tactile display technologies are discussed.
Stojmenova, Kristina; Sodnik, Jaka
2018-07-04
There are 3 standardized versions of the Detection Response Task (DRT), 2 using visual stimuli (remote DRT and head-mounted DRT) and one using tactile stimuli. In this article, we present a study that proposes and validates a type of auditory signal to be used as a DRT stimulus and evaluates the proposed auditory version of this method by comparing it with the standardized visual and tactile versions. This was a within-subject design study performed in a driving simulator with 24 participants. Each participant performed 8 2-min-long driving sessions in which they had to perform 3 different tasks: driving, responding to DRT stimuli, and performing a cognitive task (n-back task). Presence of additional cognitive load and type of DRT stimuli were defined as independent variables. DRT response times and hit rates, n-back task performance, and pupil size were observed as dependent variables. Significant changes in pupil size for trials with a cognitive task compared to trials without showed that cognitive load was induced properly. Each DRT version showed a significant increase in response times and a decrease in hit rates for trials with a secondary cognitive task compared to trials without. The auditory and tactile versions yielded similar results to each other, and both showed significantly better response times and hit rates than the visual version. There were no significant differences in cognitive task performance between trials with and without DRT stimuli, or among trials with different DRT stimulus modalities. The results from this study show that the auditory DRT version, using the signal implementation suggested in this article, is sensitive to the effects of cognitive load on driver's attention and is significantly better than the remote visual and tactile versions for auditory-vocal cognitive (n-back) secondary tasks.
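As an aside on how DRT outcomes of the kind reported above are typically scored: hit rate and mean response time are computed over a fixed response window. The sketch below is illustrative only, with made-up data; the 100-2500 ms scoring window is an assumption based on the common ISO 17488 convention, not a detail taken from this study.

```python
# Illustrative DRT scoring sketch (hypothetical data, assumed 100-2500 ms window).
import numpy as np

# Response times in ms for one participant; NaN marks trials with no response.
rt_ms = np.array([450.0, 620.0, np.nan, 380.0, 2700.0, 510.0, 95.0, 730.0])

rt = np.nan_to_num(rt_ms, nan=-1.0)        # omissions become invalid values
valid = (rt >= 100.0) & (rt <= 2500.0)     # responses inside the scoring window
hit_rate = valid.mean()                    # hits divided by all presented stimuli
mean_rt = rt[valid].mean()                 # mean response time over hits only
print(f"hit rate = {hit_rate:.2f}, mean RT = {mean_rt:.0f} ms")
```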
Auditory display as feedback for a novel eye-tracking system for sterile operating room interaction.
Black, David; Unger, Michael; Fischer, Nele; Kikinis, Ron; Hahn, Horst; Neumuth, Thomas; Glaser, Bernhard
2018-01-01
The growing number of technical systems in the operating room has increased attention on developing touchless interaction methods for sterile conditions. However, touchless interaction paradigms lack the tactile feedback found in common input devices such as mice and keyboards. We propose a novel touchless eye-tracking interaction system with auditory display as a feedback method for completing typical operating room tasks. Auditory display provides feedback concerning the selected input into the eye-tracking system as well as a confirmation of the system response. An eye-tracking system with a novel auditory display using both earcons and parameter-mapping sonification was developed to allow touchless interaction for six typical scrub nurse tasks. An evaluation with novice participants compared auditory display with visual display with respect to reaction time and a series of subjective measures. When using auditory display to substitute for the lost tactile feedback during eye-tracking interaction, participants exhibit reduced reaction time compared to using visual-only display. In addition, the auditory feedback led to lower subjective workload and higher usefulness and system acceptance ratings. Due to the absence of tactile feedback for eye-tracking and other touchless interaction methods, auditory display is shown to be a useful and necessary addition to new interaction concepts for the sterile operating room, reducing reaction times while improving subjective measures, including usefulness, user satisfaction, and cognitive workload.
Cross-modal links among vision, audition, and touch in complex environments.
Ferris, Thomas K; Sarter, Nadine B
2008-02-01
This study sought to determine whether performance effects of cross-modal spatial links that were observed in earlier laboratory studies scale to more complex environments and need to be considered in multimodal interface design. It also revisits the unresolved issue of cross-modal cuing asymmetries. Previous laboratory studies employing simple cues, tasks, and/or targets have demonstrated that the efficiency of processing visual, auditory, and tactile stimuli is affected by the modality, lateralization, and timing of surrounding cues. Very few studies have investigated these cross-modal constraints in the context of more complex environments to determine whether they scale and how complexity affects the nature of cross-modal cuing asymmetries. A microworld simulation of battlefield operations with a complex task set and meaningful visual, auditory, and tactile stimuli was used to investigate cuing effects for all cross-modal pairings. Significant asymmetric performance effects of cross-modal spatial links were observed. Auditory cues shortened response latencies for collocated visual targets, but visual cues did not do the same for collocated auditory targets. Responses to contralateral (rather than ipsilateral) targets were faster for tactually cued auditory targets and each visual-tactile cue-target combination, suggesting an inhibition-of-return effect. The spatial relationships between multimodal cues and targets significantly affect target response times in complex environments. The performance effects of cross-modal links and the observed cross-modal cuing asymmetries need to be examined in more detail and considered in future interface design. The findings from this study have implications for the design of multimodal and adaptive interfaces and for supporting attention management in complex, data-rich domains.
Engineering Data Compendium. Human Perception and Performance. Volume 2
1988-01-01
Cross-reference index fragment (only topic headings recoverable): auditory detection in the presence of visual stimulation; tactual detection and discrimination in the presence of accessory stimulation; tactile versus auditory localization of sound; spatial localization in the presence of inter… (truncated).
How We Turned around a Problem School.
ERIC Educational Resources Information Center
Stone, Pete
1992-01-01
After discovering that at least 64 percent of their students were either tactile or kinesthetic learners, educators at a North Carolina elementary school began grouping kids according to their tactile/kinesthetic or auditory/visual strengths and altered reading instruction schedules every three weeks so that each group had opportunities to learn at best…
2006-08-01
Report front-matter fragment (contents and figure-list residue): references the National Aeronautics and Space Administration (NASA) Task Load Index (TLX), SITREP, demographic, and post-test questionnaires, and figures showing mean NASA-TLX ratings of physical and temporal demand by cue condition.
Intermodal Attention Shifts in Multimodal Working Memory.
Katus, Tobias; Grubert, Anna; Eimer, Martin
2017-04-01
Attention maintains task-relevant information in working memory (WM) in an active state. We investigated whether the attention-based maintenance of stimulus representations that were encoded through different modalities is flexibly controlled by top-down mechanisms that depend on behavioral goals. Distinct components of the ERP reflect the maintenance of tactile and visual information in WM. We concurrently measured tactile (tCDA) and visual contralateral delay activity (CDA) to track the attentional activation of tactile and visual information during multimodal WM. Participants simultaneously received tactile and visual sample stimuli on the left and right sides and memorized all stimuli on one task-relevant side. After 500 msec, an auditory retrocue indicated whether the sample set's tactile or visual content had to be compared with a subsequent test stimulus set. tCDA and CDA components that emerged simultaneously during the encoding phase were consistently reduced after retrocues that marked the corresponding (tactile or visual) modality as task-irrelevant. The absolute size of cue-dependent modulations was similar for the tCDA/CDA components and did not depend on the number of tactile/visual stimuli that were initially encoded into WM. Our results suggest that modality-specific maintenance processes in sensory brain regions are flexibly modulated by top-down influences that optimize multimodal WM representations for behavioral goals.
Using multisensory cues to facilitate air traffic management.
Ngo, Mary K; Pierce, Russell S; Spence, Charles
2012-12-01
In the present study, we sought to investigate whether auditory and tactile cuing could be used to facilitate a complex, real-world air traffic management scenario. Auditory and tactile cuing provides an effective means of improving both the speed and accuracy of participants' performance in a variety of laboratory-based visual target detection and identification tasks. A low-fidelity air traffic simulation task was used in which participants monitored and controlled aircraft. The participants had to ensure that the aircraft landed or exited at the correct altitude, speed, and direction and that they maintained a safe separation from all other aircraft and boundaries. The performance measures recorded included en route time, handoff delay, and conflict resolution delay (the performance measure of interest). In a baseline condition, the aircraft in conflict was highlighted in red (visual cue), and in the experimental conditions, this standard visual cue was accompanied by a simultaneously presented auditory, vibrotactile, or audiotactile cue. Participants responded significantly more rapidly, but no less accurately, to conflicts when presented with an additional auditory or audiotactile cue than with either a vibrotactile or visual cue alone. Auditory and audiotactile cues have the potential for improving operator performance by reducing the time it takes to detect and respond to potential visual target events. These results have important implications for the design and use of multisensory cues in air traffic management.
Aoyama, Atsushi; Haruyama, Tomohiro; Kuriki, Shinya
2013-09-01
Unconscious monitoring of multimodal stimulus changes enables humans to effectively sense the external environment. Such automatic change detection is thought to be reflected in auditory and visual mismatch negativity (MMN) and mismatch negativity fields (MMFs). These are event-related potentials and magnetic fields, respectively, evoked by deviant stimuli within a sequence of standard stimuli, and both are typically studied during irrelevant visual tasks that cause the stimuli to be ignored. Due to the sensitivity of MMN/MMF to potential effects of explicit attention to vision, however, it is unclear whether multisensory co-occurring changes can purely facilitate early sensory change detection reciprocally across modalities. We adopted a tactile task involving the reading of Braille patterns as a neutral ignore condition, while measuring magnetoencephalographic responses to concurrent audiovisual stimuli that were infrequently deviated in auditory, visual, or audiovisual dimensions; 1000-Hz standard tones were switched to 1050-Hz deviant tones and/or two-by-two standard check patterns displayed on both sides of the visual field were switched to deviant reversed patterns. The check patterns were set to be faint enough that the reversals could be easily ignored even during Braille reading. While visual MMFs were virtually undetectable even for visual and audiovisual deviants, significant auditory MMFs were observed for auditory and audiovisual deviants, originating from bilateral supratemporal auditory areas. Notably, auditory MMFs were significantly enhanced for audiovisual deviants from about 100 ms post-stimulus, as compared with the summed responses for auditory and visual deviants or for each of the unisensory deviants recorded in separate sessions. Evidenced by high tactile task performance with unawareness of visual changes, we conclude that Braille reading can successfully suppress explicit attention and that simultaneous multisensory changes can implicitly strengthen automatic change detection from an early stage in a cross-sensory manner, at least in the vision-to-audition direction.
Pitts, Brandon J; Sarter, Nadine
2018-06-01
Objective: This research sought to determine whether people can perceive and process three nonredundant (and unrelated) signals in vision, hearing, and touch at the same time and how aging and concurrent task demands affect this ability. Background: Multimodal displays have been shown to improve multitasking and attention management; however, their potential limitations are not well understood. The majority of studies on multimodal information presentation have focused on the processing of only two concurrent and, most often, redundant cues by younger participants. Method: Two experiments were conducted in which younger and older adults detected and responded to a series of singles, pairs, and triplets of visual, auditory, and tactile cues in the absence (Experiment 1) and presence (Experiment 2) of an ongoing simulated driving task. Detection rates, response times, and driving task performance were measured. Results: Compared to younger participants, older adults showed longer response times and higher error rates in response to cues/cue combinations. Older participants often missed the tactile cue when three cues were combined. They sometimes falsely reported the presence of a visual cue when presented with a pair of auditory and tactile signals. Driving performance suffered most in the presence of cue triplets. Conclusion: People are more likely to miss information if more than two concurrent nonredundant signals are presented to different sensory channels. Application: The findings from this work help inform the design of multimodal displays and ensure their usefulness across different age groups and in various application domains.
Conveying Looming with a Localized Tactile Cue
2015-04-01
Excerpt fragments: …leaning and reflexive head righting required at different speeds of linear or angular motion, the angle of contact of the foot to the substrate (e.g.… approach information (e.g., relative distance updates) prior to actual contact, as has been reported for visual and auditory displays. A few studies have… Jacobs, 2013). Cancar et al. asked 12 subjects to estimate time-to-contact of a radially-expanding tactile or visual flow field representing a…
ERIC Educational Resources Information Center
Poole, Daniel; Gowen, Emma; Warren, Paul A.; Poliakoff, Ellen
2017-01-01
Previous studies have indicated that visual-auditory temporal acuity is reduced in children with autism spectrum conditions (ASC) in comparison to neurotypicals. In the present study we investigated temporal acuity for all possible bimodal pairings of visual, tactile and auditory information in adults with ASC (n = 18) and a matched control group…
Fast transfer of crossmodal time interval training.
Chen, Lihan; Zhou, Xiaolin
2014-06-01
Sub-second time perception is essential for many important sensory and perceptual tasks including speech perception, motion perception, motor coordination, and crossmodal interaction. This study investigates to what extent the ability to discriminate sub-second time intervals acquired in one sensory modality can be transferred to another modality. To this end, we used perceptual classification of visual Ternus display (Ternus in Psychol Forsch 7:81-136, 1926) to implicitly measure participants' interval perception in pre- and posttests and implemented an intra- or crossmodal sub-second interval discrimination training protocol in between the tests. The Ternus display elicited either an "element motion" or a "group motion" percept, depending on the inter-stimulus interval between the two visual frames. The training protocol required participants to explicitly compare the interval length between a pair of visual, auditory, or tactile stimuli with a standard interval or to implicitly perceive the length of visual, auditory, or tactile intervals by completing a non-temporal task (discrimination of auditory pitch or tactile intensity). Results showed that after fast explicit training of interval discrimination (about 15 min), participants improved their ability to categorize the visual apparent motion in Ternus displays, although the training benefits were mild for visual timing training. However, the benefits were absent for implicit interval training protocols. This finding suggests that the timing ability in one modality can be rapidly acquired and used to improve timing-related performance in another modality and that there may exist a central clock for sub-second temporal processing, although modality-specific perceptual properties may constrain the functioning of this clock.
Accommodating Elementary Students' Learning Styles.
ERIC Educational Resources Information Center
Wallace, James
1995-01-01
Examines the perceptual learning style preferences of sixth- and seventh-grade students in the Philippines. Finds that the visual modality was the most preferred and the auditory modality was the least preferred. Offers suggestions for accommodating visual, tactile, and kinesthetic preferences. (RS)
Perceptual Learning Style and Learning Proficiency: A Test of the Hypothesis
ERIC Educational Resources Information Center
Kratzig, Gregory P.; Arbuthnott, Katherine D.
2006-01-01
Given the potential importance of using modality preference with instruction, the authors tested whether learning style preference correlated with memory performance in each of 3 sensory modalities: visual, auditory, and kinesthetic. In Study 1, participants completed objective measures of pictorial, auditory, and tactile learning and learning…
To What Extent Do Gestalt Grouping Principles Influence Tactile Perception?
ERIC Educational Resources Information Center
Gallace, Alberto; Spence, Charles
2011-01-01
Since their formulation by the Gestalt movement more than a century ago, the principles of perceptual grouping have primarily been investigated in the visual modality and, to a lesser extent, in the auditory modality. The present review addresses the question of whether the same grouping principles also affect the perception of tactile stimuli.…
Van Damme, Stefaan; Gallace, Alberto; Spence, Charles; Crombez, Geert; Moseley, G Lorimer
2009-02-09
Threatening stimuli are thought to bias spatial attention toward the location from which the threat is presented. Although this effect is well-established in the visual domain, little is known regarding whether tactile attention is similarly affected by threatening pictures. We hypothesised that tactile attention might be more affected by cues implying physical threat to a person's bodily tissues than by cues implying general threat. In the present study, participants made temporal order judgments (TOJs), reporting which of a pair of tactile (or auditory) stimuli, one presented to each hand at a range of inter-stimulus intervals, had been presented first. A picture (showing physical threat, general threat, or no threat) was presented in front of one or the other hand shortly before the tactile stimuli. The results revealed that tactile attention was biased toward the side on which the picture was presented, and that this effect was significantly larger for physical threat pictures than for general threat or neutral pictures. By contrast, the bias in auditory attention toward the side of the picture was significantly larger for general threat pictures than for physical threat pictures or neutral pictures. These findings therefore demonstrate a modality-specific effect of physically threatening cues on the processing of tactile stimuli, and of generally threatening cues on auditory information processing. These results demonstrate that the processing of tactile information from the body part closest to the threatening stimulus is prioritized over tactile information from elsewhere on the body.
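As a side note, TOJ data of this kind are conventionally summarized by fitting a psychometric function to the proportion of "right first" responses across the inter-stimulus intervals. The sketch below is a generic illustration with hypothetical data, not an analysis from the paper; it fits a cumulative Gaussian to estimate the point of subjective simultaneity (PSS) and the just-noticeable difference (JND).

```python
# Illustrative TOJ analysis sketch: cumulative-Gaussian fit (hypothetical data).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# SOAs in ms (negative = left-hand stimulus first) and the proportion of
# "right first" responses observed at each SOA.
soa = np.array([-120.0, -60.0, -30.0, 0.0, 30.0, 60.0, 120.0])
p_right_first = np.array([0.05, 0.20, 0.35, 0.55, 0.70, 0.85, 0.95])

def cum_gauss(x, pss, sigma):
    """Cumulative Gaussian psychometric function."""
    return norm.cdf(x, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(cum_gauss, soa, p_right_first, p0=(0.0, 50.0))
jnd = sigma * norm.ppf(0.75)  # SOA shift from 50% to 75% "right first"
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```

A shift of the fitted PSS toward one side would indicate prior entry, i.e., attention speeding the processing of stimuli on the attended side.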
Tactile modulation of hippocampal place fields.
Gener, Thomas; Perez-Mendez, Lorena; Sanchez-Vives, Maria V
2013-12-01
Neural correlates of spatial representation can be found in the activity of the hippocampal place cells. These neurons are characterized by firing whenever the animal is located in a particular area of the space, the place field. Place fields are modulated by sensory cues, such as visual, auditory, or olfactory cues, with the influence of visual inputs being the most thoroughly studied. Tactile information gathered by the whiskers has a prominent representation in the rat cerebral cortex. However, the influence of whisker-detected tactile cues on place fields remains an open question. Here we studied place fields in an enriched tactile environment where the remaining sensory cues were occluded. First, place cells were recorded before and after blockade of tactile transmission by means of lidocaine applied on the whisker pad. Following tactile deprivation, the majority of place cells decreased their firing rate and their place fields expanded. We next rotated the tactile cues and 90% of place fields rotated with them. Our results demonstrate that tactile information is integrated into place cells at least in a tactile-enriched arena and when other sensory cues are not available. Copyright © 2013 Wiley Periodicals, Inc.
Medial Auditory Thalamic Stimulation as a Conditioned Stimulus for Eyeblink Conditioning in Rats
ERIC Educational Resources Information Center
Campolattaro, Matthew M.; Halverson, Hunter E.; Freeman, John H.
2007-01-01
The neural pathways that convey conditioned stimulus (CS) information to the cerebellum during eyeblink conditioning have not been fully delineated. It is well established that pontine mossy fiber inputs to the cerebellum convey CS-related stimulation for different sensory modalities (e.g., auditory, visual, tactile). Less is known about the…
The Influence of Tactile Cognitive Maps on Auditory Space Perception in Sighted Persons.
Tonelli, Alessia; Gori, Monica; Brayda, Luca
2016-01-01
We have recently shown that vision is important to improve spatial auditory cognition. In this study, we investigate whether touch is as effective as vision in creating a cognitive map of a soundscape. In particular, we tested whether the creation of a mental representation of a room, obtained through tactile exploration of a 3D model, can influence the perception of a complex auditory task in sighted people. We tested two groups of blindfolded sighted people - one experimental and one control group - in an auditory space bisection task. In the first group, the bisection task was performed three times: specifically, the participants explored the 3D tactile model of the room with their hands and were led along the perimeter of the room between the first and the second execution of the space bisection. Then, they were allowed to remove the blindfold for a few minutes and look at the room between the second and third execution of the space bisection. The control group instead repeated the space bisection task twice in a row without performing any environmental exploration in between. Considering the first execution as a baseline, we found an improvement in precision after the tactile exploration of the 3D model. Interestingly, no additional gain was obtained when room observation followed the tactile exploration, suggesting that visual cues provided no further benefit once spatial tactile cues had been internalized. No improvement was found between the first and the second execution of the space bisection without environmental exploration in the control group, suggesting that the improvement was not due to task learning. Our results show that tactile information modulates the precision of an ongoing auditory spatial task just as visual information does. This suggests that cognitive maps elicited by touch may participate in cross-modal calibration and supra-modal representations of space that increase implicit knowledge about sound propagation.
Interaction of Perceptual Grouping and Crossmodal Temporal Capture in Tactile Apparent-Motion
Chen, Lihan; Shi, Zhuanghua; Müller, Hermann J.
2011-01-01
Previous studies have shown that in tasks requiring participants to report the direction of apparent motion, task-irrelevant mono-beeps can “capture” visual motion perception when the beeps occur temporally close to the visual stimuli. However, the contributions of the relative timing of multimodal events and the event structure, modulating uni- and/or crossmodal perceptual grouping, remain unclear. To examine this question and extend the investigation to the tactile modality, the current experiments presented tactile two-tap apparent-motion streams, with an SOA of 400 ms between successive, left-/right-hand middle-finger taps, accompanied by task-irrelevant, non-spatial auditory stimuli. The streams were presented for 90 seconds, and participants' task was to continuously report the perceived (left- or rightward) direction of tactile motion. In Experiment 1, each tactile stimulus was paired with an auditory beep, though odd-numbered taps were paired with an asynchronous beep, with audiotactile SOAs ranging from −75 ms to 75 ms. The perceived direction of tactile motion varied systematically with audiotactile SOA, indicative of a temporal-capture effect. In Experiment 2, two audiotactile SOAs—one short (75 ms), one long (325 ms)—were compared. The long-SOA condition preserved the crossmodal event structure (so the temporal-capture dynamics should have been similar to those in Experiment 1), but both beeps now occurred temporally close to the taps on one side (even-numbered taps). The two SOAs were found to produce opposite modulations of apparent motion, indicative of an influence of crossmodal grouping. In Experiment 3, only odd-numbered, but not even-numbered, taps were paired with auditory beeps. This abolished the temporal-capture effect and, instead, a dominant percept of apparent motion from the audiotactile side to the tactile-only side was observed independently of the SOA variation. These findings suggest that asymmetric crossmodal grouping leads to an attentional modulation of apparent motion, which inhibits crossmodal temporal-capture effects.
An experimental study on target recognition using white canes.
Nunokawa, Kiyohiko; Ino, Shuichi
2010-01-01
To understand basic tactile perception using white canes, we compared tapping (twice) and pushing (twice) with the index finger and with a white cane, with and without accompanying auditory information. Participants were six visually impaired individuals who used a white cane to walk independently in their daily lives. For each combination of the tapping/pushing and sound/no-sound conditions, participants gave magnitude estimates of the hardness of rubber panels. Results indicated that using a white cane produces sensitivity levels equal to using a finger when accompanied by auditory information, and suggested that when using a white cane to estimate the hardness of a target, it is most effective to have both tactile and auditory information derived from tapping.
Lighting Up Science for the Visually Impaired.
ERIC Educational Resources Information Center
Billings, Gilbert W.; And Others
1980-01-01
Described are activities designed specifically for visually impaired students, demonstrating (1) meiosis, (2) mass, (3) enzyme-substrate reactions, (4) function and relationships of flowering parts. Employed are tactile and auditory learning aids, such as the tape recorder, electric eye, Braille typewriter, textured fabrics, and three-dimensional…
Neural correlates of audiotactile phonetic processing in early-blind readers: an fMRI study.
Pishnamazi, Morteza; Nojaba, Yasaman; Ganjgahi, Habib; Amousoltani, Asie; Oghabian, Mohammad Ali
2016-05-01
Reading is a multisensory function that relies on arbitrary associations between auditory speech sounds and symbols from a second modality. Studies of bimodal phonetic perception have mostly investigated the integration of visual letters and speech sounds. Blind readers perform an analogous task by using tactile Braille letters instead of visual letters. The neural underpinnings of audiotactile phonetic processing have not been studied before. We used functional magnetic resonance imaging to reveal the neural correlates of audiotactile phonetic processing in 16 early-blind Braille readers. Braille letters and corresponding speech sounds were presented in unimodal and congruent/incongruent bimodal configurations. We also used a behavioral task to measure the speed of blind readers in identifying letters presented via tactile and/or auditory modalities. Reaction times for tactile stimuli were faster. The reaction times for bimodal stimuli were equal to those for the slower auditory-only stimuli. fMRI analyses revealed the convergence of unimodal auditory and unimodal tactile responses in areas of the right precentral gyrus and bilateral crus I of the cerebellum. The left and right planum temporale fulfilled the 'max criterion' for bimodal integration, but the activity of these areas was not sensitive to the phonetic congruency between sounds and Braille letters. Nevertheless, congruency effects were found in regions of the frontal lobe and cerebellum. Our findings suggest that, unlike sighted readers, who are assumed to have amodal phonetic representations, blind readers probably process letters and sounds separately. We discuss that this distinction might be due to mal-development of multisensory neural circuits in early-blind individuals, or to inherent differences between Braille and print reading mechanisms.
Learning Styles and Student Diversity.
ERIC Educational Resources Information Center
Loper, Sue
1989-01-01
A teacher reports on helpful advice she received from a colleague when she started teaching: to teach students in the cognitive mode in which they learn best (auditory, visual, kinesthetic, or tactile). (TE)
Rizza, Aurora; Terekhov, Alexander V; Montone, Guglielmo; Olivetti-Belardinelli, Marta; O'Regan, J Kevin
2018-01-01
Tactile speech aids, though extensively studied in the 1980's and 1990's, never became a commercial success. A hypothesis to explain this failure might be that it is difficult to obtain true perceptual integration of a tactile signal with information from auditory speech: exploitation of tactile cues from a tactile aid might require cognitive effort and so prevent speech understanding at the high rates typical of everyday speech. To test this hypothesis, we attempted to create true perceptual integration of tactile with auditory information in what might be considered the simplest situation encountered by a hearing-impaired listener. We created an auditory continuum between the syllables /BA/ and /VA/, and trained participants to associate /BA/ to one tactile stimulus and /VA/ to another tactile stimulus. After training, we tested if auditory discrimination along the continuum between the two syllables could be biased by incongruent tactile stimulation. We found that such a bias occurred only when the tactile stimulus was above, but not when it was below its previously measured tactile discrimination threshold. Such a pattern is compatible with the idea that the effect is due to a cognitive or decisional strategy, rather than to truly perceptual integration. We therefore ran a further study (Experiment 2), where we created a tactile version of the McGurk effect. We extensively trained two Subjects over 6 days to associate four recorded auditory syllables with four corresponding apparent motion tactile patterns. In a subsequent test, we presented stimulation that was either congruent or incongruent with the learnt association, and asked Subjects to report the syllable they perceived. We found no analog to the McGurk effect, suggesting that the tactile stimulation was not being perceptually integrated with the auditory syllable. These findings strengthen our hypothesis according to which tactile aids failed because integration of tactile cues with auditory speech occurred at a cognitive or decisional level, rather than truly at a perceptual level.
Category specific dysnomia after thalamic infarction: a case-control study.
Levin, Netta; Ben-Hur, Tamir; Biran, Iftah; Wertman, Eli
2005-01-01
Category-specific naming impairment has been described mainly after cortical lesions. It is thought to result from a lesion in a specific network, reflecting the organization of our semantic knowledge. The deficit usually involves multiple semantic categories whose profile of naming deficit generally obeys the animate/inanimate dichotomy. Thalamic lesions cause a general semantic naming deficit, and only rarely a category-specific semantic deficit for very limited and highly specific categories. We performed a case-control study of a 56-year-old right-handed man who presented with language impairment following a left anterior thalamic infarction. His naming ability and semantic knowledge were evaluated in the visual, tactile and auditory modalities for stimuli from 11 different categories, and compared to those of five controls. In naming visual stimuli, the patient performed poorly (error rate > 50%) in four categories: vegetables, toys, animals and body parts (mean 70.31 ± 15% errors). In each category there was a different dominating error type. He performed better in the other seven categories (tools, clothes, transportation, fruits, electric, furniture, kitchen utensils), averaging 14.28 ± 9% errors. Further analysis revealed a dichotomy between naming in animate and inanimate categories in the visual and tactile modalities but not in response to auditory stimuli. Thus, a unique category-specific profile of response and naming errors to visual and tactile, but not auditory, stimuli was found after a left anterior thalamic infarction. This might reflect the role of the thalamus not only as a relay station but also as a central integrator of different stages of perceptual and semantic processing.
Cost Analysis of Public Rights-of-Way Accessibility Guidelines
DOT National Transportation Integrated Search
2010-11-29
Accessible Pedestrian Signals (APS) provide auditory and tactile information about the pedestrian signal phases (walk and don't walk) at signalized pedestrian crossings. This information parallels the visual information provided by ...
Eye-gaze independent EEG-based brain-computer interfaces for communication.
Riccio, A; Mattia, D; Simione, L; Olivetti, M; Cincotti, F
2012-08-01
The present review systematically examines the literature reporting gaze independent interaction modalities in non-invasive brain-computer interfaces (BCIs) for communication. BCIs measure signals related to specific brain activity and translate them into device control signals. This technology can be used to provide users with severe motor disability (e.g. late stage amyotrophic lateral sclerosis (ALS); acquired brain injury) with an assistive device that does not rely on muscular contraction. Most of the studies on BCIs explored mental tasks and paradigms using visual modality. Considering that in ALS patients the oculomotor control can deteriorate and also other potential users could have impaired visual function, tactile and auditory modalities have been investigated over the past years to seek alternative BCI systems which are independent from vision. In addition, various attentional mechanisms, such as covert attention and feature-directed attention, have been investigated to develop gaze independent visual-based BCI paradigms. Three areas of research were considered in the present review: (i) auditory BCIs, (ii) tactile BCIs and (iii) independent visual BCIs. Out of a total of 130 search results, 34 articles were selected on the basis of pre-defined exclusion criteria. Thirteen articles dealt with independent visual BCIs, 15 reported on auditory BCIs and the last six on tactile BCIs, respectively. From the review of the available literature, it can be concluded that a crucial point is represented by the trade-off between BCI systems/paradigms with high accuracy and speed, but highly demanding in terms of attention and memory load, and systems requiring lower cognitive effort but with a limited amount of communicable information. These issues should be considered as priorities to be explored in future studies to meet users' requirements in a real-life scenario.
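The accuracy/speed trade-off discussed in this review is commonly quantified with Wolpaw's information-transfer-rate formula; a short sketch (the accuracies and selection rates below are made-up examples, not figures from the review):

import math

def wolpaw_bits_per_selection(n_choices, accuracy):
    """Information per selection (bits) for an n-choice BCI at a given accuracy."""
    if accuracy >= 1.0:
        return math.log2(n_choices)
    return (math.log2(n_choices)
            + accuracy * math.log2(accuracy)
            + (1 - accuracy) * math.log2((1 - accuracy) / (n_choices - 1)))

# a slow but accurate 2-class tactile BCI vs. a faster, noisier 4-class visual BCI
for n, p, sel_per_min in [(2, 0.95, 4), (4, 0.70, 10)]:
    print(n, "choices,", p, "accuracy:",
          round(wolpaw_bits_per_selection(n, p) * sel_per_min, 2), "bits/min")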
Sleep-dependent consolidation benefits fast transfer of time interval training.
Chen, Lihan; Guo, Lu; Bao, Ming
2017-03-01
A previous study showed that short training (15 min) in explicitly discriminating temporal intervals between two paired auditory beeps, or between two paired tactile taps, can significantly improve observers' ability to classify the perceptual states of visual Ternus apparent motion, whereas training on task-irrelevant sensory properties did not improve visual timing (Chen and Zhou in Exp Brain Res 232(6):1855-1864, 2014). The present study examined the role of 'consolidation' after training on temporally task-irrelevant properties, and whether a pure delay (i.e., blank consolidation) following the pretest of the target task would improve visual interval timing, typified in the visual Ternus display. A pretest-training-posttest procedure was adopted, with discrimination of Ternus apparent motion as the probe. Extended implicit training of timing, in which the time intervals between paired auditory beeps or paired tactile taps were manipulated but the task was discrimination of auditory pitches or tactile intensities, did not lead to training benefits (Exps 1 and 3); however, a delay of 24 h after implicit training of timing, including solving 'Sudoku puzzles,' made the otherwise absent training benefits observable (Exps 2, 4, 5 and 6). The above improvements in performance were not due to a practice effect of Ternus motion (Exp 7). A general 'blank' consolidation period of 24 h also made improvements in visual timing observable (Exp 8). Taken together, the current findings indicate that sleep-dependent consolidation imposed a general effect, potentially by triggering and maintaining neuroplastic changes in the intrinsic (timing) network to enhance the ability of time perception.
Kaufmann, Tobias; Holz, Elisa M; Kübler, Andrea
2013-01-01
This paper describes a case study with a patient in the classic locked-in state, who currently has no means of independent communication. Following a user-centered approach, we investigated event-related potentials (ERP) elicited in different modalities for use in brain-computer interface (BCI) systems. Such systems could provide her with an alternative communication channel. To investigate the most viable modality for achieving BCI based communication, classic oddball paradigms (1 rare and 1 frequent stimulus, ratio 1:5) in the visual, auditory and tactile modality were conducted (2 runs per modality). Classifiers were built on one run and tested offline on another run (and vice versa). In these paradigms, the tactile modality was clearly superior to other modalities, displaying high offline accuracy even when classification was performed on single trials only. Consequently, we tested the tactile paradigm online and the patient successfully selected targets without any error. Furthermore, we investigated use of the visual or tactile modality for different BCI systems with more than two selection options. In the visual modality, several BCI paradigms were tested offline. Neither matrix-based nor so-called gaze-independent paradigms constituted a means of control. These results may thus question the gaze-independence of current gaze-independent approaches to BCI. A tactile four-choice BCI resulted in high offline classification accuracies. Yet, online use raised various issues. Although performance was clearly above chance, practical daily life use appeared unlikely when compared to other communication approaches (e.g., partner scanning). Our results emphasize the need for user-centered design in BCI development including identification of the best stimulus modality for a particular user. Finally, the paper discusses feasibility of EEG-based BCI systems for patients in classic locked-in state and compares BCI to other AT solutions that we also tested during the study.
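A minimal sketch of the offline scheme described above (train a classifier on one run, test on the other), using LDA on synthetic single-trial features; the study's actual features and classifier are not specified here, so all names and numbers are illustrative:

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

def make_run(n_trials=120, n_features=40, effect=0.8):
    y = rng.integers(0, 2, n_trials)        # 0 = frequent, 1 = rare target (~1:1 here, unlike the study's 1:5)
    X = rng.normal(size=(n_trials, n_features))
    X[y == 1, :5] += effect                 # targets shift a few ERP features
    return X, y

X1, y1 = make_run()
X2, y2 = make_run()
print("run 2 accuracy:", LinearDiscriminantAnalysis().fit(X1, y1).score(X2, y2))
print("run 1 accuracy:", LinearDiscriminantAnalysis().fit(X2, y2).score(X1, y1))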
Childhood Onset Schizophrenia: High Rate of Visual Hallucinations
ERIC Educational Resources Information Center
David, Christopher N.; Greenstein, Deanna; Clasen, Liv; Gochman, Pete; Miller, Rachel; Tossell, Julia W.; Mattai, Anand A.; Gogtay, Nitin; Rapoport, Judith L.
2011-01-01
Objective: To document high rates and clinical correlates of nonauditory hallucinations in childhood onset schizophrenia (COS). Method: Within a sample of 117 pediatric patients (mean age 13.6 years), diagnosed with COS, the presence of auditory, visual, somatic/tactile, and olfactory hallucinations was examined using the Scale for the Assessment…
1981-06-01
targets that have been calibrated for different types of search tasks. Recognition tests might include visual recognition of site personnel, auditory ... [bulleted list of factors: strength, physiological processes, auditory processes, visual processes, tactile sense, psychomotor processes, tolerance to environment, learning] ... "sensitive" to an easily measurable degree, and another third at a more subliminal level. This sensitivity is even further heightened in individuals by the ...
A CAI System for Visually Impaired Children to Improve Abilities of Orientation and Mobility
NASA Astrophysics Data System (ADS)
Yoneda, Takahiro; Kudo, Hiroaki; Minagawa, Hiroki; Ohnishi, Noboru; Matsubara, Shizuya
Some visually impaired children have difficulty with simple locomotion and need orientation and mobility training. We developed a computer-assisted instruction system that assists this training. A user receives a task via a tactile map and synthesized speech and then walks around a room according to the task. After the task ends, the system conveys the deviation of the walked path from the target path via both auditory and tactile feedback, so the user can understand how well he or she walked. We describe the details of the proposed system and task, and the experimental results with three visually impaired children.
A systematic approach to the Kansei factors of tactile sense regarding the surface roughness.
Choi, Kyungmee; Jun, Changrim
2007-01-01
Designing products to satisfy customers' emotions requires information gathered through the human senses: visual, auditory, olfactory, gustatory, and tactile. By controlling certain design factors, customers' emotional responses can be evaluated, designed for, and satisfied. In this study, a systematic approach is proposed to study the tactile sense with regard to surface roughness. Numerous pairs of antonymous tactile adjectives are collected and clustered. The optimal number of adjective clusters is estimated based on several criterion functions. The representative average preferences of the final clusters are obtained as estimates of the engineering parameters that control the surface roughness of commercial polymer-based products.
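The clustering step lends itself to a compact sketch: cluster the adjective-pair rating vectors with k-means and compare candidate cluster counts with a criterion function (silhouette here; the paper compares several). The data are simulated:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(3)
# 30 hypothetical adjective pairs rated on 6 surface-roughness samples
X = np.vstack([rng.normal(c, 0.4, size=(10, 6)) for c in (-1.0, 0.0, 1.0)])

for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, "clusters -> silhouette", round(silhouette_score(X, labels), 3))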
Bruck, Dorothy; Thomas, Ian R
2009-02-01
People who are hard-of-hearing may rely on auditory, visual, or tactile alarms in a fire emergency, and US standards require strobe lights in hotel bedrooms to provide emergency notification for people with hearing loss. This is the first study to compare the waking effectiveness of a variety of auditory (beeps), tactile (bed and pillow shakers), and visual (strobe lights) signals at a range of intensities. Three auditory signals, a bed shaker, a pillow shaker, and strobe lights were presented to 38 adults (aged 18 to 80 yr) with mild to moderately severe hearing loss of 25 to 70 dB (in both ears) during slow-wave sleep (deep sleep). Two of the auditory signals were selected on the basis that they had the lowest auditory thresholds when awake (from a range of eight signals). The third auditory signal was the current 3100-Hz smoke alarm. All auditory signals were tested below, at, and above the decibel level prescribed by the applicable standard for bedrooms (75 dBA). In the case of the bed and pillow shakers, intensities below, at, and above the as-purchased level were tested. For the strobe lights, three levels were used, all of which were above the applicable standard. The intensity level at which participants awoke was identified by electroencephalograph monitoring. The most effective signal was a 520-Hz square-wave auditory signal, waking 92% at 75 dBA, compared with 56% waking to the 75-dBA high-pitched alarm. Bed and pillow shakers awoke 80 to 84% at the intensity level as purchased. The strobe lights awoke only 27% at an intensity above the US standard. Nonparametric analyses confirmed that the 520-Hz square-wave signal was significantly more effective than the current smoke alarm and the strobe lights in waking this population. A low-frequency square-wave signal has now been found to be significantly more effective than all tested alternatives in a number of populations (hard-of-hearing, children, older adults, young adults, alcohol-impaired) and should be adopted across the whole population as the normal smoke alarm signal. Strobe lights, even at high intensities, are ineffective in reliably waking people with mild to moderate hearing loss.
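Only the 520-Hz square wave itself comes from the study; assuming the standard temporal-three (T-3) on/off pattern used for evacuation alarms, one cycle of such a signal can be generated like this:

import numpy as np

fs = 44100  # samples per second

def square_wave(freq_hz, dur_s):
    t = np.arange(int(fs * dur_s)) / fs
    return np.where(np.sin(2 * np.pi * freq_hz * t) >= 0, 1.0, -1.0)

beep = square_wave(520, 0.5)
gap = np.zeros(int(fs * 0.5))
t3_cycle = np.concatenate([beep, gap, beep, gap, beep, np.zeros(int(fs * 1.5))])
print(t3_cycle.size / fs, "s per T-3 cycle")  # ready to scale and send to a DAC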
Visual and tactile information in double bass intonation control.
Lage, Guilherme Menezes; Borém, Fausto; Vieira, Maurílio Nunes; Barreiros, João Pardal
2007-04-01
Traditionally, the teaching of intonation on the non-tempered orchestral strings (violin, viola, cello, and double bass) has relied on the auditory and proprioceptive senses only. This study aims at understanding the role of visual and tactile information in the control of non-tempered intonation on the acoustic double bass. Eight musicians played 11 trials of an atonal sequence of musical notes on two double basses of different sizes under different sensory constraints. The accuracy of the played notes was analyzed by measuring their frequencies and comparing them with the respective target values. The main finding was that performance integrating visual and tactile information was superior to the other conditions in the control of double bass intonation. This contradicts the traditional belief that proprioception and hearing are the most effective sources of feedback in the performance of stringed instruments.
Structural reorganization of the early visual cortex following Braille training in sighted adults.
Bola, Łukasz; Siuda-Krzywicka, Katarzyna; Paplińska, Małgorzata; Sumera, Ewa; Zimmermann, Maria; Jednoróg, Katarzyna; Marchewka, Artur; Szwed, Marcin
2017-12-12
Training can induce cross-modal plasticity in the human cortex. A well-known example of this phenomenon is the recruitment of visual areas for tactile and auditory processing. It remains unclear to what extent such plasticity is associated with changes in anatomy. Here we enrolled 29 sighted adults into a nine-month tactile Braille-reading training, and used voxel-based morphometry and diffusion tensor imaging to describe the resulting anatomical changes. In addition, we collected resting-state fMRI data to relate these changes to functional connectivity between visual and somatosensory-motor cortices. Following Braille-training, we observed substantial grey and white matter reorganization in the anterior part of early visual cortex (peripheral visual field). Moreover, relative to its posterior, foveal part, the peripheral representation of early visual cortex had stronger functional connections to somatosensory and motor cortices even before the onset of training. Previous studies show that the early visual cortex can be functionally recruited for tactile discrimination, including recognition of Braille characters. Our results demonstrate that reorganization in this region induced by tactile training can also be anatomical. This change most likely reflects a strengthening of existing connectivity between the peripheral visual cortex and somatosensory cortices, which suggests a putative mechanism for cross-modal recruitment of visual areas.
Research Program Review. Aircrew Physiology.
1982-06-01
Visual and Auditory Localization: Normal and Abnormal ... Detection of Retinal Ischemia Prior to Blackout by Electrically Evoked ... parameters and provision of auditory or tactile feedback to the subject, all promise some improvement. Measurement of the separate responses at ... Work in progress: a centrifuge program designed to evaluate two different electrode placements and four different frequencies of stimulation is now in ...
Visual and tactile interfaces for bi-directional human robot communication
NASA Astrophysics Data System (ADS)
Barber, Daniel; Lackey, Stephanie; Reinerman-Jones, Lauren; Hudson, Irwin
2013-05-01
Seamless integration of unmanned systems and Soldiers in the operational environment requires robust communication capabilities. Multi-Modal Communication (MMC) facilitates achieving this goal, owing to redundancy and levels of communication superior to single-mode interaction, using auditory, visual, and tactile modalities. Visual signaling using arm and hand gestures is a natural method of communication between people. Visual signals standardized within the U.S. Army Field Manual and in use by Soldiers provide a foundation for developing gestures for human-to-robot communication. Emerging technologies using Inertial Measurement Units (IMU) enable classification of arm and hand gestures for communication with a robot without the line-of-sight required by computer vision techniques. These devices improve the robustness of interpreting gestures in noisy environments and are capable of classifying signals relevant to operational tasks. Closing the communication loop between Soldiers and robots requires that robots be able to return equivalent messages. Existing visual signals from robots to humans typically require highly anthropomorphic features not present on military vehicles. Tactile displays tap into an unused modality for robot-to-human communication. Typically used for hands-free navigation and cueing, existing tactile display technologies are used here to deliver equivalents of visual signals from the U.S. Army Field Manual. This paper describes ongoing research to collaboratively develop tactile communication methods with Soldiers, measure classification accuracy of visual signal interfaces, and provide an integration example including two robotic platforms.
ERIC Educational Resources Information Center
Missouri Univ., Columbia. Coll. of Education.
Information is provided regarding major learning styles and other factors important to student learning. Several typically asked questions are presented regarding different learning styles (visual, auditory, tactile and kinesthetic, and multisensory learning), associated considerations, determining individuals' learning styles, and appropriate…
Code of Federal Regulations, 2011 CFR
2011-07-01
... developing appropriate programming to meet the particular needs of individuals with disabilities, including... through tactile, vibratory, auditory, and visual media. (4) Technical assistance and support services to...
Learning Styles and Metacognition.
ERIC Educational Resources Information Center
Turner, Nancy D'Isa
1993-01-01
Examines the effects of modified instruction and high ability fifth-grade students' use of metacognition on spelling achievement. Notes that the instruction was modified to match the visual, auditory, tactile, and kinesthetic preferences of the group. Finds positive results. (RS)
Subcortical functional reorganization due to early blindness.
Coullon, Gaelle S L; Jiang, Fang; Fine, Ione; Watkins, Kate E; Bridge, Holly
2015-04-01
Lack of visual input early in life results in occipital cortical responses to auditory and tactile stimuli. However, it remains unclear whether cross-modal plasticity also occurs in subcortical pathways. With the use of functional magnetic resonance imaging, auditory responses were compared across individuals with congenital anophthalmia (absence of eyes), those with early onset (in the first few years of life) blindness, and normally sighted individuals. We find that the superior colliculus, a "visual" subcortical structure, is recruited by the auditory system in congenital and early onset blindness. Additionally, auditory subcortical responses to monaural stimuli were altered as a result of blindness. Specifically, responses in the auditory thalamus were equally strong to contralateral and ipsilateral stimulation in both groups of blind subjects, whereas sighted controls showed stronger responses to contralateral stimulation. These findings suggest that early blindness results in substantial reorganization of subcortical auditory responses. Copyright © 2015 the American Physiological Society.
ERIC Educational Resources Information Center
Pieretti, Robert A.; Kaul, Sandra D.; Zarchy, Razi M.; O'Hanlon, Laureen M.
2015-01-01
The primary focus of this research study was to examine the benefit of a using a multimodal approach to speech sound correction with preschool children. The approach uses the auditory, tactile, and kinesthetic modalities and includes a unique, interactive visual focus that attempts to provide a visual representation of a phonemic category. The…
Chansirinukor, Wunpen; Khemthong, Supalak
2014-07-01
To compare psychomotor function between a group of music students with formal music education and a group of non-music students who underwent music training. Consecutive sampling was used; participants completed questionnaires, had their reaction times tested (visual, auditory, and tactile systems), had electromyography of the upper trapezius muscles measured on both sides, and had photos taken of the craniovertebral (CV) angle in the sitting position. Data were collected twice for each student group: for the music students at one-hour intervals of resting and conducting non-music activities, and for the non-music students at two-day intervals, 20 minutes/session, during which they performed music training (using a manual of keyboard notation). The non-music students (n = 65) improved their reaction times but responded more slowly than the music students except in the tactile system. The music students (n = 28) showed faster reaction times and higher upper trapezius muscle activity than the non-music students at post-test. In addition, the CV angle of the non-music students was significantly improved. The level of musical ability may influence psychomotor function. Significant improvement was observed in visual, auditory and tactile reaction times, and in the CV angle, in the non-music students. However, upper trapezius muscle activity in both student groups was unchanged.
Fritz, Jonathan B.; Malloy, Megan; Mishkin, Mortimer; Saunders, Richard C.
2016-01-01
While monkeys easily acquire the rules for performing visual and tactile delayed matching-to-sample, a method for testing recognition memory, they have extraordinary difficulty acquiring a similar rule in audition. Another striking difference between the modalities is that whereas bilateral ablation of the rhinal cortex (RhC) leads to profound impairment in visual and tactile recognition, the same lesion has no detectable effect on auditory recognition memory (Fritz et al., 2005). In our previous study, a mild impairment in auditory memory was obtained following bilateral ablation of the entire medial temporal lobe (MTL), including the RhC, and an equally mild effect was observed after bilateral ablation of the auditory cortical areas in the rostral superior temporal gyrus (rSTG). In order to test the hypothesis that each of these mild impairments was due to partial disconnection of acoustic input to a common target (e.g., the ventromedial prefrontal cortex), in the current study we examined the effects of a more complete auditory disconnection of this common target by combining the removals of both the rSTG and the MTL. We found that the combined lesion led to forgetting thresholds (performance at 75% accuracy) that fell precipitously from the normal retention duration of ~30–40 seconds to a duration of ~1–2 seconds, thus nearly abolishing auditory recognition memory, and leaving behind only a residual echoic memory. PMID:26707975
Making Microcomputers Accessible to Blind Persons.
ERIC Educational Resources Information Center
Ruconich, Sandra K.; And Others
1986-01-01
The article considers advantages and limitations of tactile, auditory, and visual means of microcomputer access for blind students. Discussed are electronic braille, paper braille, the Optacon, synthesized speech, and enlarged print. Improved multimedia access technology is predicted for the future. (Author/DB)
Evaluation of Domain-Specific Collaboration Interfaces for Team Command and Control Tasks
2012-05-01
Technologies: 1.1.1 Virtual Whiteboard. Cognitive theories relating the utilization, storage, and retrieval of verbal and spatial information, such as ... [spilled table of cognitive subscale abbreviations: (AE), spatial emergent (SE), auditory linguistic (AL), spatial positional (SP), facial figural (FF), spatial quantitative (SQ), facial motive (FM), tactile figural ...] ... driven by the auditory linguistic (AL), short-term memory (STM), spatial attentive (SA), visual temporal (VT), and vocal process (V) subscales.
Hao, Qiao; Ora, Hiroki; Ogawa, Ken-Ichiro; Ogata, Taiki; Miyake, Yoshihiro
2016-09-13
The simultaneous perception of multimodal sensory information has a crucial role for effective reactions to the external environment. Voluntary movements are known to occasionally affect simultaneous perception of auditory and tactile stimuli presented to the moving body part. However, little is known about spatial limits on the effect of voluntary movements on simultaneous perception, especially when tactile stimuli are presented to a non-moving body part. We examined the effect of voluntary movement on the simultaneous perception of auditory and tactile stimuli presented to the non-moving body part. We considered the possible mechanism using a temporal order judgement task under three experimental conditions: voluntary movement, where participants voluntarily moved their right index finger and judged the temporal order of auditory and tactile stimuli presented to their non-moving left index finger; passive movement; and no movement. During voluntary movement, the auditory stimulus needed to be presented before the tactile stimulus so that they were perceived as occurring simultaneously. This subjective simultaneity differed significantly from the passive movement and no movement conditions. This finding indicates that the effect of voluntary movement on simultaneous perception of auditory and tactile stimuli extends to the non-moving body part.
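The shift in subjective simultaneity reported here is the kind of quantity estimated by fitting a psychometric function to temporal-order judgments; a sketch with fabricated response proportions (not the study's data):

import numpy as np
from scipy.optimize import curve_fit

soa = np.array([-150, -100, -50, 0, 50, 100, 150])  # ms; soa > 0: tap before beep
p_tactile_first = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.97])

def logistic(x, pss, slope):
    return 1.0 / (1.0 + np.exp(-(x - pss) / slope))

(pss, slope), _ = curve_fit(logistic, soa, p_tactile_first, p0=(0.0, 30.0))
# a negative PSS means the beep had to precede the tap to be judged simultaneous
print(f"PSS = {pss:.1f} ms, slope = {slope:.1f} ms")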
de Borst, Aline W; de Gelder, Beatrice
2017-08-01
Previous studies have shown that the early visual cortex contains content-specific representations of stimuli during visual imagery, and that these representational patterns of imagery content have a perceptual basis. To date, there is little evidence for the presence of a similar organization in the auditory and tactile domains. Using fMRI-based multivariate pattern analyses we showed that primary somatosensory, auditory, motor, and visual cortices are discriminative for imagery of touch versus sound. In the somatosensory, motor and visual cortices the imagery modality discriminative patterns were similar to perception modality discriminative patterns, suggesting that top-down modulations in these regions rely on similar neural representations as bottom-up perceptual processes. Moreover, we found evidence for content-specific representations of the stimuli during auditory imagery in the primary somatosensory and primary motor cortices. Both the imagined emotions and the imagined identities of the auditory stimuli could be successfully classified in these regions. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
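The cross-decoding logic implied here (train on perception patterns, test on imagery patterns) reduces to a few lines with synthetic voxel data; sizes and signal strengths are illustrative assumptions:

import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_voxels = 200
axis = rng.normal(size=n_voxels)      # a shared "touch vs. sound" pattern axis

def make_runs(n_trials, scale):
    y = rng.integers(0, 2, n_trials)  # 0 = sound, 1 = touch
    X = rng.normal(size=(n_trials, n_voxels)) + np.outer(2 * y - 1, axis) * scale
    return X, y

X_percept, y_percept = make_runs(100, 0.15)  # perception: stronger signal
X_imagery, y_imagery = make_runs(100, 0.08)  # imagery: weaker, same axis
clf = LinearSVC().fit(X_percept, y_percept)
print("imagery decoding accuracy:", clf.score(X_imagery, y_imagery))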
Advanced mathematics communication beyond modality of sight
NASA Astrophysics Data System (ADS)
Sedaghatjou, Mina
2018-01-01
This study illustrates how mathematical communication and learning are inherently multimodal and embodied; hence, sight-disabled students are also able to conceptualize visuospatial information and mathematical concepts through tactile and auditory activities. Adapting a perceptuomotor integration approach, the study shows that the lack of access to visual fields in an advanced mathematics course does not obstruct a blind student's ability to visualize, but transforms it. The goal of this study is not to compare the visually impaired student with non-visually impaired students to address the 'differences' in understanding; instead, I discuss the challenges that a blind student, named Anthony, has encountered and the ways that we tackled those problems. I also demonstrate how the proper and precisely crafted tactile materials empowered Anthony to learn mathematical functions.
Evaluation of Persons of Varying Ages.
ERIC Educational Resources Information Center
Stolte, John F.
1996-01-01
Reviews two experiments that strongly support dual coding theory. Dual coding theory holds that communicating concretely (via tactile, auditory, or visual stimuli) affects evaluative thinking more strongly than communicating abstractly through words and numbers. The experiments applied this theory to the realm of age and evaluation. (MJP)
Learning Style Preferences of Southeast Asian Students.
ERIC Educational Resources Information Center
Park, Clara C.
2000-01-01
Investigated the perceptual learning style preferences (auditory, visual, kinesthetic, and tactile) and preferences for group and individual learning of Southeast Asian students compared to white students. Surveys indicated significant differences in learning style preferences between Southeast Asian and white students and between the diverse…
Jia, Lina; Shi, Zhuanghua; Zang, Xuelian; Müller, Hermann J
2013-11-06
Although attention can be captured toward high-arousal stimuli, little is known about how perceiving emotion in one modality influences the temporal processing of non-emotional stimuli in other modalities. We addressed this issue by presenting observers with spatially uninformative emotional pictures while they performed an audio-tactile temporal-order judgment (TOJ) task. In Experiment 1, audio-tactile stimuli were presented at the same location straight ahead of the participants, who had to judge "which modality came first?". In Experiments 2 and 3, the audio-tactile stimuli were delivered one to the left and the other to the right side, and participants had to judge "which side came first?". We found both negative and positive high-arousal pictures to significantly bias TOJs towards the tactile and away from the auditory event when the audio-tactile stimuli were spatially separated; by contrast, there was no such bias when the audio-tactile stimuli originated from the same location. To further examine whether this bias is attributable to the emotional meanings conveyed by the pictures or to their high-arousal effect, we compared and contrasted the influences of near-body threat vs. remote threat (emotional) pictures on audio-tactile TOJs in Experiment 3. The bias manifested only in the near-body threat condition. Taken together, the findings indicate that visual stimuli conveying meanings of near-body interaction activate a sensorimotor functional link prioritizing the processing of tactile over auditory signals when these signals are spatially separated. In contrast, audio-tactile signals from the same location engender strong crossmodal integration, thus counteracting modality-based attentional shifts induced by the emotional pictures. © 2013 Published by Elsevier B.V.
Auditory-tactile echo-reverberating stuttering speech corrector
NASA Astrophysics Data System (ADS)
Kuniszyk-Jozkowiak, Wieslawa; Adamczyk, Bogdan
1997-02-01
The work presents the construction of a device that transforms speech sounds into acoustical and tactile signals of echo and reverberation. Research has been done on the influence of the echo and reverberation, transmitted as acoustic and tactile stimuli, on speech fluency. Introducing the echo or reverberation into the auditory feedback circuit results in a reduction of stuttering. Somewhat smaller, but still significant, corrective effects are observed when using the tactile channel for transmitting the signals. The use of joined auditory and tactile channels increases their corrective influence on the stutterers' speech. The results of the experiment justify the use of the tactile channel in stutterers' therapy.
The sense of agency is action-effect causality perception based on cross-modal grouping.
Kawabe, Takahiro; Roseboom, Warrick; Nishida, Shin'ya
2013-07-22
Sense of agency, the experience of controlling external events through one's actions, stems from contiguity between action- and effect-related signals. Here we show that human observers link their action- and effect-related signals using a computational principle common to cross-modal sensory grouping. We first report that the detection of a delay between tactile and visual stimuli is enhanced when both stimuli are synchronized with separate auditory stimuli (experiment 1). This occurs because the synchronized auditory stimuli hinder the potential grouping between tactile and visual stimuli. We subsequently demonstrate an analogous effect on observers' key press as an action and a sensory event. This change is associated with a modulation in sense of agency; namely, sense of agency, as evaluated by apparent compressions of action-effect intervals (intentional binding) or subjective causality ratings, is impaired when both participant's action and its putative visual effect events are synchronized with auditory tones (experiments 2 and 3). Moreover, a similar role of action-effect grouping in determining sense of agency is demonstrated when the additional signal is presented in the modality identical to an effect event (experiment 4). These results are consistent with the view that sense of agency is the result of general processes of causal perception and that cross-modal grouping plays a central role in these processes.
Multisensory Motion Perception in 3–4 Month-Old Infants
Nava, Elena; Grassi, Massimo; Brenna, Viola; Croci, Emanuela; Turati, Chiara
2017-01-01
Human infants begin very early in life to take advantage of multisensory information by extracting the invariant amodal information that is conveyed redundantly by multiple senses. Here we addressed the question of whether infants can bind multisensory moving stimuli, and whether this occurs even if the motion produced by the stimuli is only illusory. Three- to 4-month-old infants were presented with two bimodal pairings: visuo-tactile and audio-visual. Visuo-tactile pairings consisted of apparently vertically moving bars (the Barber Pole illusion) moving in either the same or opposite direction with a concurrent tactile stimulus consisting of strokes given on the infant's back. Audio-visual pairings consisted of the Barber Pole illusion in its visual and auditory versions, the latter giving the impression of a continuously rising or descending pitch. We found that infants were able to discriminate congruently (same direction) vs. incongruently (opposite direction) moving pairs irrespective of modality (Experiment 1). Importantly, we also found that congruently moving visuo-tactile and audio-visual stimuli were preferred over incongruently moving bimodal stimuli (Experiment 2). Our findings suggest that very young infants are able to extract motion as an amodal component and use it to match stimuli that only apparently move in the same direction. PMID:29187829
Perspectives of Elementary School Teachers on Outdoor Education
ERIC Educational Resources Information Center
Palavan, Ozcan; Cicek, Volkan; Atabay, Merve
2016-01-01
Outdoor education stands out as one of the methods to deliver the desired educational outcomes taking the needs of the students, teachers and the curricular objectives into consideration. Outdoor education focuses on experimental, hands-on learning in real-life environments through senses, e.g., through visual, auditory, and tactile means,…
The Tactile Continuity Illusion
ERIC Educational Resources Information Center
Kitagawa, Norimichi; Igarashi, Yuka; Kashino, Makio
2009-01-01
We can perceive the continuity of an object or event by integrating spatially/temporally discrete sensory inputs. The mechanism underlying this perception of continuity has intrigued many researchers and has been well documented in both the visual and auditory modalities. The present study shows for the first time to our knowledge that an illusion…
Using ICT-Supported Narratives in Teaching Science and Their Effects on Middle School Students
ERIC Educational Resources Information Center
Ekici, Fatma Taskin; Pekmezci, Sultan
2015-01-01
Effective and sustainable science education is enriched by the use of visual, auditory, and tactile experiences. In order to provide effective learning, instruction needs to include multimodal approaches. Integrating ICT-supported narrations into learning environments may provide effective and sustainable learning methods. Investigated in this…
Response Modality Variations Affect Determinations of Children's Learning Styles.
ERIC Educational Resources Information Center
Janowitz, Jeffrey M.
The Swassing-Barbe Modality Index (SBMI) uses visual, auditory, and tactile inputs, but only reconstructed output, to measure children's modality strengths. In this experiment, the SBMI's three input modalities were crossed with two output modalities (spoken and drawn) in addition to the reconstructed standard to result in nine treatment…
Perceptual Learning Style Matching and L2 Vocabulary Acquisition
ERIC Educational Resources Information Center
Tight, Daniel G.
2010-01-01
This study explored learning and retention of concrete nouns in second language Spanish by first language English undergraduates (N = 128). Each completed a learning style (visual, auditory, tactile/kinesthetic, mixed) assessment, took a vocabulary pretest, and then studied 12 words each through three conditions (matching, mismatching, mixed…
ERIC Educational Resources Information Center
Park, Clara C.
1997-01-01
Investigates the perceptual learning style preferences (auditory, visual, kinesthetic, and tactile) and preferences for group and individual learning of Chinese, Filipino, Korean, and Vietnamese secondary education students. Comparison analysis reveals diverse learning style preferences between Anglo and Asian American students and also between…
Price, Tom A. R.
2015-01-01
Environments vary stochastically, and animals need to behave in ways that best fit the conditions in which they find themselves. The social environment is particularly variable, and responding appropriately to it can be vital for an animal’s success. However, cues of social environment are not always reliable, and animals may need to balance accuracy against the risk of failing to respond if local conditions or interfering signals prevent them detecting a cue. Recent work has shown that many male Drosophila fruit flies respond to the presence of rival males, and that these responses increase their success in acquiring mates and fathering offspring. In Drosophila melanogaster males detect rivals using auditory, tactile and olfactory cues. However, males fail to respond to rivals if any two of these senses are not functioning: a single cue is not enough to produce a response. Here we examined cue use in the detection of rival males in a distantly related Drosophila species, D. pseudoobscura, where auditory, olfactory, tactile and visual cues were manipulated to assess the importance of each sensory cue singly and in combination. In contrast to D. melanogaster, male D. pseudoobscura require intact olfactory and tactile cues to respond to rivals. Visual cues were not important for detecting rival D. pseudoobscura, while results on auditory cues appeared puzzling. This difference in cue use in two species in the same genus suggests that cue use is evolutionarily labile, and may evolve in response to ecological or life history differences between species. PMID:25849643
Newborn infants perceive abstract numbers
Izard, Véronique; Sann, Coralie; Spelke, Elizabeth S.; Streri, Arlette
2009-01-01
Although infants and animals respond to the approximate number of elements in visual, auditory, and tactile arrays, only human children and adults have been shown to possess abstract numerical representations that apply to entities of all kinds (e.g., 7 samurai, seas, or sins). Do abstract numerical concepts depend on language or culture, or do they form a part of humans' innate, core knowledge? Here we show that newborn infants spontaneously associate stationary, visual-spatial arrays of 4–18 objects with auditory sequences of events on the basis of number. Their performance provides evidence for abstract numerical representations at the start of postnatal experience. PMID:19520833
Audio aided electro-tactile perception training for finger posture biofeedback.
Vargas, Jose Gonzalez; Yu, Wenwei
2008-01-01
Visual information is one of the prerequisites for most biofeedback studies. The aim of this study was to explore how audio-aided training helps in the learning of dynamic electro-tactile perception without any visual feedback. In this research, electrical stimulation patterns associated with the experimenter's finger postures and motions were presented to the subjects. Along with the electrical stimulation patterns, two different types of information on finger postures and motions, verbal and audio, were presented to the verbal-training subject group (group 1) and the audio-training subject group (group 2), respectively. The results showed an improvement in the ability to distinguish and memorize electrical stimulation patterns corresponding to finger postures and motions without visual feedback; with the aid of audio tones, learning was faster and perception became more precise after training. Thus, this study clarified that, as a substitute for visual presentation, auditory information can effectively help in the formation of electro-tactile perception. Further research is needed to clarify the difference between visually guided and audio-aided training in terms of information compilation, post-training effect, and robustness of the perception.
The role of working memory in auditory selective attention.
Dalton, Polly; Santangelo, Valerio; Spence, Charles
2009-11-01
A growing body of research now demonstrates that working memory plays an important role in controlling the extent to which irrelevant visual distractors are processed during visual selective attention tasks (e.g., Lavie, Hirst, De Fockert, & Viding, 2004). Recently, it has been shown that the successful selection of tactile information also depends on the availability of working memory (Dalton, Lavie, & Spence, 2009). Here, we investigate whether working memory plays a role in auditory selective attention. Participants focused their attention on short continuous bursts of white noise (targets) while attempting to ignore pulsed bursts of noise (distractors). Distractor interference in this auditory task, as measured in terms of the difference in performance between congruent and incongruent distractor trials, increased significantly under high (vs. low) load in a concurrent working-memory task. These results provide the first evidence demonstrating a causal role for working memory in reducing interference by irrelevant auditory distractors.
Proceedings of the Ship Production Symposium, held in New Orleans, Louisiana, on 2-4 September 1992
1992-09-01
that enables an observer to experience an environment or a task by means of visual, auditory, and sensory simulation (50). The equipment includes a ... auditory images. Less progress has been made on general-purpose tactile sensory response equipment. Quasi-realistic graphical output has already helped in ... The second is the United States of America; funding was earmarked for a U.S. yard to help stimulate the U.S. economy. In essence, the RSV ...
Gopalakrishnan, R; Burgess, R C; Plow, E B; Floden, D P; Machado, A G
2015-09-24
Pain anticipation plays a critical role in pain chronification and results in disability due to pain avoidance. It is important to understand how different sensory modalities (auditory, visual or tactile) may influence pain anticipation as different strategies could be applied to mitigate anticipatory phenomena and chronification. In this study, using a countdown paradigm, we evaluated with magnetoencephalography the neural networks associated with pain anticipation elicited by different sensory modalities in normal volunteers. When encountered with well-established cues that signaled pain, visual and somatosensory cortices engaged the pain neuromatrix areas early during the countdown process, whereas the auditory cortex displayed delayed processing. In addition, during pain anticipation, the visual cortex displayed independent processing capabilities after learning the contextual meaning of cues from associative and limbic areas. Interestingly, cross-modal activation was also evident and strong when visual and tactile cues signaled upcoming pain. Dorsolateral prefrontal cortex and mid-cingulate cortex showed significant activity during pain anticipation regardless of modality. Our results show pain anticipation is processed with great time efficiency by a highly specialized and hierarchical network. The highest degree of higher-order processing is modulated by context (pain) rather than content (modality) and rests within the associative limbic regions, corroborating their intrinsic role in chronification. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
Abboud, Sami; Hanassy, Shlomi; Levy-Tzedek, Shelly; Maidenbaum, Shachar; Amedi, Amir
2014-01-01
Sensory-substitution devices (SSDs) provide auditory or tactile representations of visual information. These devices often generate unpleasant sensations and mostly lack color information. We present here a novel SSD aimed at addressing these issues. We developed the EyeMusic, a novel visual-to-auditory SSD for the blind, providing both shape and color information. Our design uses musical notes on a pentatonic scale generated by natural instruments to convey the visual information in a pleasant manner. A short behavioral protocol was utilized to train the blind to extract shape and color information, and to test their acquired abilities. Finally, we conducted a survey and a comparison task to assess the pleasantness of the generated auditory stimuli. We show that basic shape and color information can be decoded from the generated auditory stimuli. High performance levels were achieved by all participants following as little as 2-3 hours of training. Furthermore, we show that users indeed found the stimuli pleasant and potentially tolerable for prolonged use. The novel EyeMusic algorithm provides an intuitive and relatively pleasant way for the blind to extract shape and color information. We suggest that this might help facilitate visual rehabilitation because of the added functionality and enhanced pleasantness.
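A toy mapping in the spirit of such a visual-to-auditory SSD (my reconstruction, not the published EyeMusic algorithm): sweep the image column by column, let row index choose a pentatonic pitch, and let pixel color (omitted here) choose the instrument:

import numpy as np

PENTATONIC_MIDI = [57, 60, 62, 64, 67, 69, 72, 74]  # A-minor pentatonic rows

def image_to_notes(img, col_dur=0.1):
    """img: (rows, cols) array of 0/1 'bright' pixels -> (onset_s, midi_pitch)."""
    events = []
    for col in range(img.shape[1]):              # time axis: left to right
        for row in np.flatnonzero(img[:, col]):  # top rows -> higher pitches
            midi = PENTATONIC_MIDI[len(PENTATONIC_MIDI) - 1 - row]
            events.append((col * col_dur, midi))
    return events

img = np.zeros((8, 4), dtype=int)
img[2, :] = 1                                    # a horizontal bright bar
print(image_to_notes(img))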
Integrated trimodal SSEP experimental setup for visual, auditory and tactile stimulation
NASA Astrophysics Data System (ADS)
Kuś, Rafał; Spustek, Tomasz; Zieleniewska, Magdalena; Duszyk, Anna; Rogowski, Piotr; Suffczyński, Piotr
2017-12-01
Objective. Steady-state evoked potentials (SSEPs), the brain responses to repetitive stimulation, are commonly used in both clinical practice and scientific research. The particular brain mechanisms underlying SSEPs in different modalities (i.e. visual, auditory and tactile) are very complex and still not completely understood. Each response has distinct resonant frequencies and exhibits a particular brain topography. Moreover, the topography can be frequency-dependent, as in the case of auditory potentials. However, to study each modality separately and also to investigate multisensory interactions through multimodal experiments, a proper experimental setup is of critical importance. The aim of this study was to design and evaluate a novel SSEP experimental setup providing repetitive stimulation in three different modalities (visual, tactile and auditory) with precise control of stimulus parameters. Results from a pilot study with stimulation in a single modality and in two modalities simultaneously demonstrate the feasibility of the device for studying the SSEP phenomenon. Approach. We developed a setup of three separate stimulators that allows for precise generation of repetitive stimuli. Besides sequential stimulation in a single modality, parallel stimulation in up to three different modalities can be delivered. The stimulus in each modality is characterized by a stimulation frequency and a waveform (sine or square wave). We also present a novel methodology for the analysis of SSEPs. Main results. Apart from constructing the experimental setup, we conducted a pilot study with both sequential and simultaneous stimulation paradigms. EEG signals recorded during this study were analyzed with advanced methodology based on spatial filtering and adaptive approximation, followed by statistical evaluation. Significance. We developed a novel experimental setup for performing SSEP experiments. In this sense our study continues the ongoing research in this field. At the same time, the described setup, along with the presented methodology, is a considerable improvement on and extension of the state-of-the-art methods in the field. The flexibility of the device, together with the developed analysis methodology, can lead to further development of diagnostic methods and provide deeper insight into information processing in the human brain.
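The abstract specifies only the stimulus parameters (a stimulation frequency and a sine or square waveform). As a minimal illustration of generating such a drive signal — our sketch with an assumed sample rate, not the authors' implementation:

import numpy as np

def ssep_waveform(freq_hz, duration_s, fs=44100, shape="sine"):
    # Repetitive drive signal at the SSEP stimulation frequency;
    # fs (sample rate) is an assumption, not taken from the paper.
    t = np.arange(int(duration_s * fs)) / fs
    carrier = np.sin(2 * np.pi * freq_hz * t)
    return carrier if shape == "sine" else np.sign(carrier)

# e.g. a 7 Hz square wave to drive a tactile stimulator for 10 s
drive = ssep_waveform(7.0, 10.0, shape="square")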
Multisensory Integration in the Virtual Hand Illusion with Active Movement
Satoh, Satoru; Hachimura, Kozaburo
2016-01-01
Improving the sense of immersion is one of the core issues in virtual reality. Perceptual illusions of ownership can be perceived over a virtual body in a multisensory virtual reality environment. Rubber Hand and Virtual Hand Illusions showed that body ownership can be manipulated by applying suitable visual and tactile stimulation. In this study, we investigate the effects of multisensory integration in the Virtual Hand Illusion with active movement. A virtual xylophone playing system which can interactively provide synchronous visual, tactile, and auditory stimulation was constructed. We conducted two experiments regarding different movement conditions and different sensory stimulations. Our results demonstrate that multisensory integration with free active movement can improve the sense of immersion in virtual reality. PMID:27847822
Auditory peripersonal space in humans.
Farnè, Alessandro; Làdavas, Elisabetta
2002-10-01
In the present study we report neuropsychological evidence of the existence of an auditory peripersonal space representation around the head in humans and its characteristics. In a group of right brain-damaged patients with tactile extinction, we found that a sound delivered near the ipsilesional side of the head (20 cm) strongly extinguished a tactile stimulus delivered to the contralesional side of the head (cross-modal auditory-tactile extinction). By contrast, when an auditory stimulus was presented far from the head (70 cm), cross-modal extinction was dramatically reduced. This spatially specific cross-modal extinction was most consistently found (i.e., both in the front and back spaces) when a complex sound was presented, like a white noise burst. Pure tones produced spatially specific cross-modal extinction when presented in the back space, but not in the front space. In addition, the most severe cross-modal extinction emerged when sounds came from behind the head, thus showing that the back space is more sensitive than the front space to the sensory interaction of auditory-tactile inputs. Finally, when cross-modal effects were investigated by reversing the spatial arrangement of cross-modal stimuli (i.e., touch on the right and sound on the left), we found that an ipsilesional tactile stimulus, although inducing a small amount of cross-modal tactile-auditory extinction, did not produce any spatial-specific effect. Therefore, the selective aspects of cross-modal interaction found near the head cannot be explained by a competition between a damaged left spatial representation and an intact right spatial representation. Thus, consistent with neurophysiological evidence from monkeys, our findings strongly support the existence, in humans, of an integrated cross-modal system coding auditory and tactile stimuli near the body, that is, in the peripersonal space.
Matsushima, J; Kumagai, M; Harada, C; Takahashi, K; Inuyama, Y; Ifukube, T
1992-09-01
Our previous reports showed that second formant information, using a speech coding method, could be transmitted through an electrode on the promontory. However, second formant information can also be transmitted by tactile stimulation. Therefore, to find out whether electrical stimulation of the auditory nerve would be superior to tactile stimulation for our speech coding method, the time resolutions of the two modes of stimulation were compared. The results showed that the time resolution of electrical promontory stimulation was three times better than the time resolution of tactile stimulation of the finger. This indicates that electrical stimulation of the auditory nerve is much better for our speech coding method than tactile stimulation of the finger.
Preserving Tradition through Technology.
ERIC Educational Resources Information Center
Wakshul, Barbra
2001-01-01
Language is easiest to learn before age 5. The Cherokee Nation supported production of a toy that teaches young children basic Cherokee words. When figures that come with the toy are placed into it, a computer chip activates a voice speaking the name of the figure in Cherokee. Learning takes place on visual, auditory, and tactile levels. (TD)
Strategy Access Rods: A Hands-On Approach.
ERIC Educational Resources Information Center
Worthing, Bernadette; Laster, Barbara
2002-01-01
Describes Strategy Access Rods (SARs), balsa-wood, prism-like or rectangular rods on which a one-sentence reading strategy phrase in the first person is printed. Notes SARs serve as a visual, auditory, kinesthetic, and tactile reminder of the strategies available to developing readers. Discusses use of SARs for word recognition and comprehension.…
Reading Comprehension, Learning Styles, and Seventh Grade Students
ERIC Educational Resources Information Center
Williams, Judy
2010-01-01
Reading is a basic life skill. Unfortunately, in 2007, only 29% of all eighth graders were able to comprehend at or above a proficient reading comprehension level. Sensory learning styles (kinesthetic, tactile, auditory, and visual) affect the way that students prefer to learn and the areas in which they will have difficulty learning. This study…
Learning Styles in the Art Room
ERIC Educational Resources Information Center
Rohrbach, Marla
2011-01-01
Art students have different learning styles. Some are visual learners who need to see the information. Some are auditory learners who need to hear the information. Others are tactile/kinesthetic learners who need to move, do, or touch in order to learn. Looking over her curriculum and lesson plans, the author realized almost every art lesson…
The nature of working memory for Braille.
Cohen, Henri; Voss, Patrice; Lepore, Franco; Scherzer, Peter
2010-05-26
Blind individuals have been shown on multiple occasions to compensate for their loss of sight by developing exceptional abilities in their remaining senses. While most research has been focused on perceptual abilities per se in the auditory and tactile modalities, recent work has also investigated higher-order processes involving memory and language functions. Here we examined tactile working memory for Braille in two groups of visually challenged individuals (completely blind subjects, CBS; blind with residual vision, BRV). In a first experimental procedure both groups were given a Braille tactile memory span task with and without articulatory suppression, while the BRV and a sighted group performed a visual version of the task. It was shown that the Braille tactile working memory (BrWM) of CBS individuals under articulatory suppression is as efficient as that of sighted individuals' visual working memory in the same condition. Moreover, the results suggest that BrWM may be more robust in the CBS than in the BRV subjects, thus pointing to the potential role of visual experience in shaping tactile working memory. A second experiment designed to assess the nature (spatial vs. verbal) of this working memory was then carried out with two new CBS and BRV groups having to perform the Braille task concurrently with a mental arithmetic task or a mental displacement of blocks task. We show that the disruption of memory was greatest when concurrently carrying out the mental displacement of blocks, indicating that the Braille tactile subsystem of working memory is likely spatial in nature in CBS. The results also point to the multimodal nature of working memory and show how experience can shape the development of its subcomponents.
Auditory spatial processing in the human cortex.
Salminen, Nelli H; Tiitinen, Hannu; May, Patrick J C
2012-12-01
The auditory system codes spatial locations in a way that deviates from the spatial representations found in other modalities. This difference is especially striking in the cortex, where neurons form topographical maps of visual and tactile space but where auditory space is represented through a population rate code. In this hemifield code, sound source location is represented in the activity of two widely tuned opponent populations, one tuned to the right and the other to the left side of auditory space. Scientists are only beginning to uncover how this coding strategy adapts to various spatial processing demands. This review presents the current understanding of auditory spatial processing in the cortex. To this end, the authors consider how various implementations of the hemifield code may exist within the auditory cortex and how these may be modulated by the stimulation and task context. As a result, a coherent set of neural strategies for auditory spatial processing emerges.
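The hemifield code described here lends itself to a one-line illustration (our gloss of the opponent-channel idea, not a formula from the review): with R_L and R_R denoting the aggregate firing rates of the left- and right-tuned populations, a simple readout of source azimuth is

\hat{\theta} \propto \frac{R_R - R_L}{R_R + R_L}

so location is carried by the relative activity of two broadly tuned channels rather than by a topographic map.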
ERIC Educational Resources Information Center
Coleman, Joyce H.
This Child Development Associate (CDA) training module, the seventh in a series of 16, provides an introduction to cognitive development in young children for bilingual/bicultural preschool teacher trainees. Perceptual skills (visual, figure-ground, part-whole, spatial, auditory and tactile discrimination) and cognitive processes and concepts…
ERIC Educational Resources Information Center
Park, Clara C.
1997-01-01
Investigated the four basic learning style preferences (visual, auditory, kinesthetic, and tactile) of Korean-, Armenian-, and Mexican-American students attending 10 Los Angeles schools and compared them with those of Anglo students. All four ethnic groups, regardless of sex and academic achievement level, indicate a major preference for…
Gudi-Mindermann, Helene; Rimmele, Johanna M; Nolte, Guido; Bruns, Patrick; Engel, Andreas K; Röder, Brigitte
2018-04-12
The functional relevance of crossmodal activation (e.g. auditory activation of occipital brain regions) in congenitally blind individuals is still not fully understood. The present study tested whether the occipital cortex of blind individuals is integrated into a challenged functional network. A working memory (WM) training over four sessions was implemented. Congenitally blind and matched sighted participants were adaptively trained with an n-back task employing either voices (auditory training) or tactile stimuli (tactile training). In addition, a minimally demanding 1-back task served as an active control condition. Power and functional connectivity of EEG activity evolving during the maintenance period of an auditory 2-back task were analyzed, run prior to and after the WM training. Modality-specific (following auditory training) and modality-independent WM training effects (following both auditory and tactile training) were assessed. Improvements in auditory WM were observed in all groups, and blind and sighted individuals did not differ in training gains. Auditory and tactile training of sighted participants led, relative to the active control group, to an increase in fronto-parietal theta-band power, suggesting a training-induced strengthening of the existing modality-independent WM network. No power effects were observed in the blind. Rather, after auditory training the blind showed a decrease in theta-band connectivity between central, parietal, and occipital electrodes compared to the blind tactile training and active control groups. Furthermore, in the blind auditory training increased beta-band connectivity between fronto-parietal, central and occipital electrodes. In the congenitally blind, these findings suggest a stronger integration of occipital areas into the auditory WM network. Copyright © 2018 Elsevier B.V. All rights reserved.
Kolarik, Andrew J; Scarfe, Amy C; Moore, Brian C J; Pardhan, Shahina
2017-01-01
Performance for an obstacle circumvention task was assessed under conditions of visual, auditory only (using echolocation) and tactile (using a sensory substitution device, SSD) guidance. A Vicon motion capture system was used to measure human movement kinematics objectively. Ten normally sighted participants, 8 blind non-echolocators, and 1 blind expert echolocator navigated around a 0.6 x 2 m obstacle that was varied in position across trials, at the midline of the participant or 25 cm to the right or left. Although visual guidance was the most effective, participants successfully circumvented the obstacle in the majority of trials under auditory or SSD guidance. Using audition, blind non-echolocators navigated more effectively than blindfolded sighted individuals with fewer collisions, lower movement times, fewer velocity corrections and greater obstacle detection ranges. The blind expert echolocator displayed performance similar to or better than that for the other groups using audition, but was comparable to that for the other groups using the SSD. The generally better performance of blind than of sighted participants is consistent with the perceptual enhancement hypothesis that individuals with severe visual deficits develop improved auditory abilities to compensate for visual loss, here shown by faster, more fluid, and more accurate navigation around obstacles using sound.
Integration of auditory and vibrotactile stimuli: Effects of frequency
Wilson, E. Courtenay; Reed, Charlotte M.; Braida, Louis D.
2010-01-01
Perceptual integration of vibrotactile and auditory sinusoidal tone pulses was studied in detection experiments as a function of stimulation frequency. Vibrotactile stimuli were delivered through a single channel vibrator to the left middle fingertip. Auditory stimuli were presented diotically through headphones in a background of 50 dB sound pressure level broadband noise. Detection performance for combined auditory-tactile presentations was measured using stimulus levels that yielded 63% to 77% correct unimodal performance. In Experiment 1, the vibrotactile stimulus was 250 Hz and the auditory stimulus varied between 125 and 2000 Hz. In Experiment 2, the auditory stimulus was 250 Hz and the tactile stimulus varied between 50 and 400 Hz. In Experiment 3, the auditory and tactile stimuli were always equal in frequency and ranged from 50 to 400 Hz. The highest rates of detection for the combined-modality stimulus were obtained when stimulating frequencies in the two modalities were equal or closely spaced (and within the Pacinian range). Combined-modality detection for closely spaced frequencies was generally consistent with an algebraic sum model of perceptual integration; wider-frequency spacings were generally better fit by a Pythagorean sum model. Thus, perceptual integration of auditory and tactile stimuli at near-threshold levels appears to depend both on absolute frequency and relative frequency of stimulation within each modality. PMID:21117754
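The two integration models named above have standard closed forms when performance is expressed as detectability (d'); the following is the usual formulation in this literature, offered as a reading aid rather than text from the paper:

d'_{AT} = d'_A + d'_T \quad \text{(algebraic sum)}
d'_{AT} = \sqrt{(d'_A)^2 + (d'_T)^2} \quad \text{(Pythagorean sum)}

The algebraic sum predicts the larger combined-modality benefit, consistent with the closely spaced frequency conditions reported above; the Pythagorean sum predicts the smaller benefit found for wide frequency spacings.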
Advanced Multimodal Solutions for Information Presentation
NASA Technical Reports Server (NTRS)
Wenzel, Elizabeth M.; Godfroy-Cooper, Martine
2018-01-01
High-workload, fast-paced, and degraded sensory environments are the likeliest candidates to benefit from multimodal information presentation. For example, during EVA (Extra-Vehicular Activity) and telerobotic operations, the sensory restrictions associated with a space environment provide a major challenge to maintaining the situation awareness (SA) required for safe operations. Multimodal displays hold promise to enhance situation awareness and task performance by utilizing different sensory modalities and maximizing their effectiveness based on appropriate interaction between modalities. During EVA, the visual and auditory channels are likely to be the most utilized with tasks such as monitoring the visual environment, attending visual and auditory displays, and maintaining multichannel auditory communications. Previous studies have shown that compared to unimodal displays (spatial auditory or 2D visual), bimodal presentation of information can improve operator performance during simulated extravehicular activity on planetary surfaces for tasks as diverse as orientation, localization or docking, particularly when the visual environment is degraded or workload is increased. Tactile displays offer a third sensory channel that may both offload information processing effort and provide a means to capture attention when urgently required. For example, recent studies suggest that including tactile cues may result in increased orientation and alerting accuracy, improved task response time and decreased workload, as well as provide self-orientation cues in microgravity on the ISS (International Space Station). An important overall issue is that context-dependent factors like task complexity, sensory degradation, peripersonal vs. extrapersonal space operations, workload, experience level, and operator fatigue tend to vary greatly in complex real-world environments and it will be difficult to design a multimodal interface that performs well under all conditions. As a possible solution, adaptive systems have been proposed in which the information presented to the user changes as a function of task/context-dependent factors. However, this presupposes that adequate methods for detecting and/or predicting such factors are developed. Further, research in adaptive systems for aviation suggests that they can sometimes serve to increase workload and reduce situational awareness. It will be critical to develop multimodal display guidelines that include consideration of smart systems that can select the best display method for a particular context/situation. The scope of the current work is an analysis of potential multimodal display technologies for long duration missions and, in particular, will focus on their potential role in EVA activities. The review will address multimodal (combined visual, auditory and/or tactile) displays investigated by NASA, industry, and DoD (Dept. of Defense). It also considers the need for adaptive information systems to accommodate a variety of operational contexts such as crew status (e.g., fatigue, workload level) and task environment (e.g., EVA, habitat, rover, spacecraft). Current approaches to guidelines and best practices for combining modalities for the most effective information displays are also reviewed. Potential issues in developing interface guidelines for the Exploration Information System (EIS) are briefly considered.
Bertelson, Paul; Aschersleben, Gisa
2003-10-01
In the well-known visual bias of auditory location (alias the ventriloquist effect), auditory and visual events presented in separate locations appear closer together, provided the presentations are synchronized. Here, we consider the possibility of the converse phenomenon: crossmodal attraction on the time dimension conditional on spatial proximity. Participants judged the order of occurrence of sound bursts and light flashes, respectively, separated in time by varying stimulus onset asynchronies (SOAs) and delivered either in the same or in different locations. Presentation was organized using randomly mixed psychophysical staircases, by which the SOA was reduced progressively until a point of uncertainty was reached. This point was reached at longer SOAs with the sounds in the same frontal location as the flashes than in different places, showing that apparent temporal separation is effectively longer in the first condition. Together with a similar one obtained recently in a case of tactile-visual discrepancy, this result supports a view in which timing and spatial layout of the inputs play to some extent inter-changeable roles in the pairing operation at the base of crossmodal interaction.
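The staircase logic described above (SOA reduced progressively until the point of uncertainty is reached) can be sketched as follows. This is our minimal illustration: the step sizes, stopping rule, and the simulated-observer stub judge_order are hypothetical, not taken from the paper.

import math
import random

def judge_order(soa_ms, threshold_ms=80.0):
    # Hypothetical simulated observer: order judgments become
    # reliable as the SOA grows past the threshold.
    p_correct = 1.0 / (1.0 + math.exp(-(soa_ms - threshold_ms) / 20.0))
    return random.random() < p_correct

def run_staircase(start_soa=300.0, step=20.0, floor=10.0, n_reversals=6):
    soa, direction, reversals, record = start_soa, -1, 0, []
    while reversals < n_reversals:
        new_dir = -1 if judge_order(soa) else +1  # shrink SOA when correct
        if new_dir != direction:
            reversals += 1
        direction = new_dir
        soa = max(floor, soa + direction * step)
        record.append(soa)
    # average SOA over the final reversals: the point of uncertainty
    return sum(record[-n_reversals:]) / n_reversals

print(run_staircase())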
Subcortical functional reorganization due to early blindness
Jiang, Fang; Fine, Ione; Watkins, Kate E.; Bridge, Holly
2015-01-01
Lack of visual input early in life results in occipital cortical responses to auditory and tactile stimuli. However, it remains unclear whether cross-modal plasticity also occurs in subcortical pathways. With the use of functional magnetic resonance imaging, auditory responses were compared across individuals with congenital anophthalmia (absence of eyes), those with early onset (in the first few years of life) blindness, and normally sighted individuals. We find that the superior colliculus, a “visual” subcortical structure, is recruited by the auditory system in congenital and early onset blindness. Additionally, auditory subcortical responses to monaural stimuli were altered as a result of blindness. Specifically, responses in the auditory thalamus were equally strong to contralateral and ipsilateral stimulation in both groups of blind subjects, whereas sighted controls showed stronger responses to contralateral stimulation. These findings suggest that early blindness results in substantial reorganization of subcortical auditory responses. PMID:25673746
2003-02-01
service warfighters (Training devices and protocols, Onboard equipment, Cognitive and sensorimotor aids, Visual and auditory symbology, Peripheral visual...vestibular stimulation causing a decrease in cerebral blood pressure with the consequent reduction in G-tolerance and increased likelihood of ALOC or GLOC...tactile stimulators (e.g. one providing a sensation of movement) or of displays with a more complex coding (e.g. by increase in the number of tactors, or
Kinesthetic information facilitates saccades towards proprioceptive-tactile targets.
Voudouris, Dimitris; Goettker, Alexander; Mueller, Stefanie; Fiehler, Katja
2016-05-01
Saccades to somatosensory targets have longer latencies and are less accurate and precise than saccades to visual targets. Here we examined how different somatosensory information influences the planning and control of saccadic eye movements. Participants fixated a central cross and initiated a saccade as fast as possible in response to a tactile stimulus that was presented to either the index or the middle fingertip of their unseen left hand. In a static condition, the hand remained at a target location for the entire block of trials and the stimulus was presented at a fixed time after an auditory tone. Therefore, the target location was derived only from proprioceptive and tactile information. In a moving condition, the hand was first actively moved to the same target location and the stimulus was then presented immediately. Thus, in the moving condition additional kinesthetic information about the target location was available. We found shorter saccade latencies in the moving compared to the static condition, but no differences in accuracy or precision of saccadic endpoints. In a second experiment, we introduced variable delays after the auditory tone (static condition) or after the end of the hand movement (moving condition) in order to reduce the predictability of the moment of the stimulation and to allow more time to process the kinesthetic information. Again, we found shorter latencies in the moving compared to the static condition but no improvement in saccade accuracy or precision. In a third experiment, we showed that the shorter saccade latencies in the moving condition cannot be explained by the temporal proximity between the relevant event (auditory tone or end of hand movement) and the moment of the stimulation. Our findings suggest that kinesthetic information facilitates planning, but not control, of saccadic eye movements to proprioceptive-tactile targets. Copyright © 2016 Elsevier Ltd. All rights reserved.
Gravitational Effects on Brain and Behavior
NASA Technical Reports Server (NTRS)
Young, Laurence R.
1991-01-01
Visual, vestibular, tactile, proprioceptive, and perhaps auditory clues are combined with knowledge of commanded voluntary movement to produce a single, usually consistent, perception of spatial orientation. The recent Spacelab flights have provided especially valuable observations on the effects of weightlessness and space flight. The response of the otolith organs to weightlessness and readapting to Earth's gravitation is described. Reference frames for orientation are briefly discussed.
The role of whiskers in compensation of visual deficit in a mouse model of retinal degeneration.
Voller, Jaroslav; Potužáková, Barbora; Šimeček, Vojtěch; Vožeh, František
2014-01-13
Sensory deprivation in one modality can enhance the development of the remaining modalities via mechanisms of synaptic plasticity. Mice of the C3H strain suffer from RD1 retinal degeneration, which leads to visual impairment by weaning age. We examined the role of whiskers in compensating for the visual deficit. In order to differentiate the contribution of the whiskers from other mechanisms that can take part in the compensation, we investigated the effects of both chronic and acute tactile deprivation. Three-month-old mice were used. We examined motor skills (rotarod, beam walking test), gait control (CatWalk system), spontaneous motor activity (open field) and CNS excitability to an acoustic stimulus for assessment of compensatory changes in the auditory system (audiogenic epilepsy). In the sighted mice, the only effect was a decline in rotarod performance after acute whisker removal. In the blind animals, chronic tactile deprivation caused changes in gait and impaired performance in the motor tests. Some other compensatory mechanisms were involved, but the whiskers are essential for the compensation, as shown by the more marked change of gait and the worsening of motor performance after acute whisker removal. Both chronic and acute tactile deprivation induced anxiety-like behaviour. Only the combination of blindness and chronic tactile deprivation led to an increased sense of hearing. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Hearing shapes our perception of time: temporal discrimination of tactile stimuli in deaf people.
Bolognini, Nadia; Cecchetto, Carlo; Geraci, Carlo; Maravita, Angelo; Pascual-Leone, Alvaro; Papagno, Costanza
2012-02-01
Confronted with the loss of one type of sensory input, we compensate using information conveyed by other senses. However, losing one type of sensory information at specific developmental times may lead to deficits across all sensory modalities. We addressed the effect of auditory deprivation on the development of tactile abilities, taking into account changes occurring at the behavioral and cortical level. Congenitally deaf and hearing individuals performed two tactile tasks, the first requiring the discrimination of the temporal duration of touches and the second requiring the discrimination of their spatial length. Compared with hearing individuals, deaf individuals were impaired only in tactile temporal processing. To explore the neural substrate of this difference, we ran a TMS experiment. In deaf individuals, the auditory association cortex was involved in temporal and spatial tactile processing, with the same chronometry as the primary somatosensory cortex. In hearing participants, the involvement of auditory association cortex occurred at a later stage and selectively for temporal discrimination. The different chronometry in the recruitment of the auditory cortex in deaf individuals correlated with the tactile temporal impairment. Thus, early hearing experience seems to be crucial to develop an efficient temporal processing across modalities, suggesting that plasticity does not necessarily result in behavioral compensation.
Auditory motion processing after early blindness
Jiang, Fang; Stecker, G. Christopher; Fine, Ione
2014-01-01
Studies showing that occipital cortex responds to auditory and tactile stimuli after early blindness are often interpreted as demonstrating that early blind subjects “see” auditory and tactile stimuli. However, it is not clear whether these occipital responses directly mediate the perception of auditory/tactile stimuli, or simply modulate or augment responses within other sensory areas. We used fMRI pattern classification to categorize the perceived direction of motion for both coherent and ambiguous auditory motion stimuli. In sighted individuals, perceived motion direction was accurately categorized based on neural responses within the planum temporale (PT) and right lateral occipital cortex (LOC). Within early blind individuals, auditory motion decisions for both stimuli were successfully categorized from responses within the human middle temporal complex (hMT+), but not the PT or right LOC. These findings suggest that early blind responses within hMT+ are associated with the perception of auditory motion, and that these responses in hMT+ may usurp some of the functions of nondeprived PT. Thus, our results provide further evidence that blind individuals do indeed “see” auditory motion. PMID:25378368
Devecioğlu, İsmail; Güçlü, Burak
2015-03-15
Rat skin is innervated by mechanoreceptive fibers similar to those in other mammals. Tactile experiments with behaving rats mostly focus on the vibrissal system which does not exist in humans. The aim of this study was to design and implement a novel vibrotactile system to stimulate the glabrous skin of behaving rats during operant conditioning. A computer-controlled vibrotactile system was developed for various tasks in which the volar surface of unrestrained rats' fore- and hindpaws was stimulated in an operant chamber. The operant chamber was built from off-the-shelf components. A highly accurate electrodynamic shaker with a novel multi-probe design was used for generating mechanical displacements. Twenty-five rats were trained for four sequential tasks: (A) middle-lever (trial start signal) press, (B) side-lever press with an associated visual cue, (C) similar to (B) with the addition of an auditory/tactile stimulus, (D) auditory/tactile detection (yes/no) task. Out of 9 rats which could complete the tactile version of this training schedule, 5 had over 70% accuracy in the tactile version of the detection task. Unlike actuators for stimulating whiskers, this system does not require a particular head/body alignment and can be used with freely behaving animals. The vibrotactile system was found to be effective for conditioning freely behaving rats based on stimuli applied on the glabrous skin. However, detection accuracies were lower compared to those in tasks involving whisker stimulation reported previously, probably due to differences in cortical processing. Copyright © 2015 Elsevier B.V. All rights reserved.
Fengler, Ineke; Nava, Elena; Röder, Brigitte
2015-01-01
Several studies have suggested that neuroplasticity can be triggered by short-term visual deprivation in healthy adults. Specifically, these studies have provided evidence that visual deprivation reversibly affects basic perceptual abilities. The present study investigated the long-lasting effects of short-term visual deprivation on emotion perception. To this aim, we visually deprived a group of young healthy adults, age-matched with a group of non-deprived controls, for 3 h and tested them before and after visual deprivation (i.e., after 8 h on average and at 4 week follow-up) on an audio–visual (i.e., faces and voices) emotion discrimination task. To observe changes at the level of basic perceptual skills, we additionally employed a simple audio–visual (i.e., tone bursts and light flashes) discrimination task and two unimodal (one auditory and one visual) perceptual threshold measures. During the 3 h period, both groups performed a series of auditory tasks. To exclude the possibility that changes in emotion discrimination may emerge as a consequence of the exposure to auditory stimulation during the 3 h stay in the dark, we visually deprived an additional group of age-matched participants who concurrently performed unrelated (i.e., tactile) tasks to the later tested abilities. The two visually deprived groups showed enhanced affective prosodic discrimination abilities in the context of incongruent facial expressions following the period of visual deprivation; this effect was partially maintained until follow-up. By contrast, no changes were observed in affective facial expression discrimination and in the basic perception tasks in any group. These findings suggest that short-term visual deprivation per se triggers a reweighting of visual and auditory emotional cues, which seems to possibly prevail for longer durations. PMID:25954166
2007-02-01
[Front matter only: appendix and figure listings for NASA-TLX subscale score mean differences (ANOVA post hoc analyses, Experiments 1 and 2) and overall NASA-TLX score versus waypoint display modality.]
ERIC Educational Resources Information Center
Wolf, Beverly; Abbott, Robert D.; Berninger, Virginia W.
2017-01-01
In Study 1, the treatment group (N = 33 first graders, M = 6 years 10 months, 16 girls) received Slingerland multi-modal (auditory, visual, tactile, motor through hand, and motor through mouth) manuscript (unjoined) handwriting instruction embedded in systematic spelling, reading, and composing lessons; and the control group (N = 16 first graders,…
A Design Architecture for an Integrated Training System Decision Support System
1990-07-01
Sensory modes include visual, auditory, tactile, or kinesthetic; performance categories include time to complete, speed of response, or correct action... procedures, and finally application and examples from the aviation proponency with emphasis on the LHX program. Appendix B is a complete bibliography... integrated analysis of ITS development. The approach was designed to provide an accurate and complete representation of the ITS development process and
Tanaka, T; Kojima, S; Takeda, H; Ino, S; Ifukube, T
2001-12-15
The maintenance of postural balance depends on effective and efficient feedback from various sensory inputs. The importance of auditory inputs in this respect is not, as yet, fully understood. The purpose of this study was to analyse how moving auditory stimuli affect standing balance in healthy adults of different ages. The participants were 12 healthy volunteers, divided into two age categories: a young group (mean = 21.9 years) and an elderly group (mean = 68.9 years). Standing balance was evaluated with a force plate measuring body sway parameters, and toe pressure was measured using the F-scan Tactile Sensor System. The moving auditory stimulus was a white-noise sound with binaural cues generated by the Beachtron Affordable 3D Audio system, moving from right to left or vice versa at the height of the participant's ears. Participants were asked to stand on the force plate in the Romberg position for 20 s with either eyes open or eyes closed, to analyse the effect of visual input. Simultaneously, all participants tried to remain in the standing position with and without the auditory stimulation heard through headphones. In addition, body sway variables were measured under four conditions to analyse the effect of decreased tactile sensation of the toes and soles: standing on a normal surface (NS) or a soft surface (SS), with and without auditory stimulation. In total, participants stood under eight conditions. The results showed that the lateral body sway of the elderly group was more influenced than that of the young group by the laterally moving auditory stimulation. The analysis of toe pressure indicated that all participants used their left feet more than their right feet to maintain balance. Moreover, the elderly tended to stabilize themselves mainly with their heels, whereas the young group stabilized mainly with their toes. The results suggest that the elderly may need more appropriate tactile and auditory feedback than the young for maintaining and controlling their standing posture.
Modeling Auditory-Haptic Interface Cues from an Analog Multi-line Telephone
NASA Technical Reports Server (NTRS)
Begault, Durand R.; Anderson, Mark R.; Bittner, Rachael M.
2012-01-01
The Western Electric Company produced a multi-line telephone during the 1940s-1970s using a six-button interface design that provided robust tactile, haptic and auditory cues regarding the "state" of the communication system. This multi-line telephone was used as a model for a trade study comparison of two interfaces: a touchscreen interface (iPad) versus a pressure-sensitive strain gauge button interface (Phidget USB interface controllers). The experiment and its results are detailed in the authors' AES 133rd convention paper "Multimodal Information Management: Evaluation of Auditory and Haptic Cues for NextGen Communication Displays". This Engineering Brief describes how the interface logic, visual indications, and auditory cues of the original telephone were synthesized using MAX/MSP, including the logic for line selection, line hold, and priority line activation.
Attention to sound improves auditory reliability in audio-tactile spatial optimal integration.
Vercillo, Tiziana; Gori, Monica
2015-01-01
The role of attention in multisensory processing is still poorly understood. In particular, it is unclear whether directing attention toward a sensory cue dynamically reweights cue reliability during integration of multiple sensory signals. In this study, we investigated the impact of attention on combining audio-tactile signals in an optimal fashion. We used the Maximum Likelihood Estimation (MLE) model to predict audio-tactile spatial localization on the body surface. We developed a new audio-tactile device composed of several small units, each consisting of a speaker and a tactile vibrator independently controllable by external software. We tested participants in an attentional and a non-attentional condition. In the attentional experiment, participants performed a dual-task paradigm: they were required to evaluate the duration of a sound while performing an audio-tactile spatial task. Three unisensory or multisensory stimuli (conflicting or non-conflicting sounds and vibrations arranged along the horizontal axis) were presented sequentially. In the primary task, participants had to evaluate in a space bisection task the position of the second stimulus (the probe) with respect to the others (the standards). In the secondary task they had to report occasional changes in the duration of the second auditory stimulus. In the non-attentional task participants had only to perform the primary task (space bisection). Our results showed enhanced auditory precision (and higher auditory weights) in the auditory attentional condition with respect to the control non-attentional condition. The results of this study support the idea that modality-specific attention modulates multisensory integration.
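For reference, the MLE model invoked above makes a concrete quantitative prediction; these are the standard cue-combination equations from this literature, not text from the paper. The optimal bimodal estimate is the reliability-weighted average of the unimodal estimates,

\hat{S}_{AT} = w_A \hat{S}_A + w_T \hat{S}_T, \qquad w_A = \frac{1/\sigma_A^2}{1/\sigma_A^2 + 1/\sigma_T^2}, \quad w_T = 1 - w_A,

with predicted bimodal variance \sigma_{AT}^2 = \sigma_A^2 \sigma_T^2 / (\sigma_A^2 + \sigma_T^2). An attention-driven improvement in auditory reliability (smaller \sigma_A^2) would thus shift the weights toward audition, as reported above.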
Fujisaki, Waka; Nishida, Shin'ya
2010-08-07
The human brain processes different aspects of the surrounding environment through multiple sensory modalities, and each modality can be subdivided into multiple attribute-specific channels. When the brain rebinds sensory content information ('what') across different channels, temporal coincidence ('when') along with spatial coincidence ('where') provides a critical clue. It however remains unknown whether neural mechanisms for binding synchronous attributes are specific to each attribute combination, or universal and central. In human psychophysical experiments, we examined how combinations of visual, auditory and tactile attributes affect the temporal frequency limit of synchrony-based binding. The results indicated that the upper limits of cross-attribute binding were lower than those of within-attribute binding, and surprisingly similar for any combination of visual, auditory and tactile attributes (2-3 Hz). They are unlikely to be the limits for judging synchrony, since the temporal limit of a cross-attribute synchrony judgement was higher and varied with the modality combination (4-9 Hz). These findings suggest that cross-attribute temporal binding is mediated by a slow central process that combines separately processed 'what' and 'when' properties of a single event. While the synchrony performance reflects temporal bottlenecks existing in 'when' processing, the binding performance reflects the central temporal limit of integrating 'when' and 'what' properties.
Full body action remapping of peripersonal space: the case of walking.
Noel, Jean-Paul; Grivaz, Petr; Marmaroli, Patrick; Lissek, Herve; Blanke, Olaf; Serino, Andrea
2015-04-01
The space immediately surrounding the body, i.e. peripersonal space (PPS), is represented by populations of multisensory neurons, from a network of premotor and parietal areas, which integrate tactile stimuli from the body's surface with visual or auditory stimuli presented within a limited distance from the body. Here we show that PPS boundaries extend while walking. We used an audio-tactile interaction task to identify the location in space where looming sounds affect reaction time to tactile stimuli on the chest, taken as a proxy of the PPS boundary. The task was administered while participants either stood still or walked on a treadmill. In addition, in two separate experiments, subjects either received or did not receive additional visual input, i.e. optic flow, implying a translation congruent with the direction of their walking. Results revealed that when participants were standing still, sounds boosted tactile processing when located within 65-100 cm from the participants' body, but not at farther distances. Instead, when participants were walking, PPS expanded, as reflected in boosted tactile processing at ~1.66 m. This was found despite the fact that the spatial relationship between the participant's body and the sound's source did not vary between the Standing and the Walking conditions. This expansion effect on PPS boundaries due to walking was the same with or without optic flow, suggesting that kinematic and proprioceptive cues, rather than visual cues, are critical in triggering the effect. These results are the first to demonstrate an adaptation of the chest's PPS representation due to whole-body motion and are compatible with the view that PPS constitutes a dynamic sensory-motor interface between the individual and the environment. Copyright © 2014 Elsevier Ltd. All rights reserved.
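The abstract does not state how the boundary location is extracted from the reaction-time data; a common approach in this literature is to fit a sigmoid to tactile RT as a function of sound distance and take its central point as the PPS boundary. A minimal sketch under that assumption, with made-up illustrative data:

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical tactile RTs (ms) at several sound distances (cm).
dist = np.array([25.0, 50.0, 75.0, 100.0, 150.0, 200.0])
rt = np.array([312.0, 316.0, 329.0, 351.0, 366.0, 369.0])

def sigmoid(d, lo, hi, center, slope):
    # RT rises from a near-space boost (lo) to a far-space baseline (hi).
    return lo + (hi - lo) / (1.0 + np.exp(-(d - center) / slope))

popt, _ = curve_fit(sigmoid, dist, rt, p0=[310.0, 370.0, 100.0, 20.0])
print(f"estimated PPS boundary: {popt[2]:.0f} cm")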
ERIC Educational Resources Information Center
Goldsmith, H. H.; Van Hulle, C. A.; Arneson, C. L.; Schreiber, J. E.; Gernsbacher, M. A.
2006-01-01
Some adults and children exhibit defensive behaviors to tactile or auditory stimulation. These symptoms occur not only in subsets of children with ADHD, autism, and Fragile X syndrome, but also in the apparent absence of accompanying disorders. Relatively little research explores the correlates and antecedents of sensory defensiveness. Using a…
Gestural Communication and Mating Tactics in Wild Chimpanzees
Roberts, Anna Ilona; Roberts, Sam George Bradley
2015-01-01
The extent to which primates can flexibly adjust the production of gestural communication according to the presence and visual attention of the audience provides key insights into the social cognition underpinning gestural communication, such as an understanding of third party relationships. Gestures given in a mating context provide an ideal area for examining this flexibility, as frequently the interests of a male signaller, a female recipient and a rival male bystander conflict. Dominant chimpanzee males seek to monopolize matings, but subordinate males may use gestural communication flexibly to achieve matings despite their low rank. Here we show that the production of mating gestures in wild male East African chimpanzees (Pan troglodytes schweinfurthii) was influenced by a conflict of interest with females, which in turn was influenced by the presence and visual attention of rival males. When the conflict of interest was low (the rival male was present and looking away), chimpanzees used visual/tactile gestures over auditory gestures. However, when the conflict of interest was high (the rival male was absent, or was present and looking at the signaller) chimpanzees used auditory gestures over visual/tactile gestures. Further, the production of mating gestures was more common when the number of oestrous and non-oestrous females in the party increased, when the female was visually perceptive and when there was no wind. Females played an active role in mating behaviour, approaching for copulations more often when the number of oestrous females in the party increased and when the rival male was absent, or was present and looking away. Examining how social and ecological factors affect mating tactics in primates may thus contribute to understanding the previously unexplained reproductive success of subordinate male chimpanzees. PMID:26536467
Effectiveness of glucose monitoring systems modified for the visually impaired.
Bernbaum, M; Albert, S G; Brusca, S; McGinnis, J; Miller, D; Hoffmann, J W; Mooradian, A D
1993-10-01
To compare three glucose meters modified for use by individuals with diabetes and visual impairment regarding accuracy, precision, and clinical reliability. Ten subjects with diabetes and visual impairment performed self-monitoring of blood glucose using each of the three commercially available blood glucose meters modified for visually impaired users (the AccuChek Freedom [Boehringer Mannheim, Indianapolis, IN], the Diascan SVM [Home Diagnostics, Eatontown, NJ], and the One Touch [Lifescan, Milpitas, CA]). The meters were independently evaluated by a laboratory technologist for precision and accuracy determinations. Only two meters were acceptable with regard to laboratory precision (coefficient of variation < 10%)--the AccuChek and the One Touch. The AccuChek and the One Touch did not differ significantly with regard to laboratory estimates of accuracy. A great discrepancy in the clinical reliability results was observed between these two meters. The AccuChek maintained a high degree of reliability (y = 0.99X + 0.44, r = 0.97, P = 0.001). The visually impaired subjects were unable to perform reliable testing using the One Touch system because of a lack of appropriate tactile landmarks and auditory signals. In addition to laboratory assessments of glucose meters, monitoring systems designed for the visually impaired must include adequate tactile and audible feedback features to allow for the acquisition and placement of appropriate blood samples.
Hoefer, M; Tyll, S; Kanowski, M; Brosch, M; Schoenfeld, M A; Heinze, H-J; Noesselt, T
2013-10-01
Although multisensory integration has been an important area of recent research, most studies have focused on audiovisual integration. Importantly, however, the combination of audition and touch can guide our behavior just as effectively, which we studied here using psychophysics and functional magnetic resonance imaging (fMRI). We tested whether task-irrelevant tactile stimuli would enhance auditory detection, and whether hemispheric asymmetries would modulate these audiotactile benefits using lateralized sounds. Spatially aligned task-irrelevant tactile stimuli could occur either synchronously or asynchronously with the sounds. Auditory detection was enhanced by non-informative synchronous and asynchronous tactile stimuli, if presented on the left side. Elevated fMRI-signals to left-sided synchronous bimodal stimulation were found in primary auditory cortex (A1). Adjacent regions (planum temporale, PT) expressed enhanced BOLD-responses for synchronous and asynchronous left-sided bimodal conditions. Additional connectivity analyses seeded in right-hemispheric A1 and PT for both bimodal conditions showed enhanced connectivity with right-hemispheric thalamic, somatosensory and multisensory areas that scaled with subjects' performance. Our results indicate that functional asymmetries interact with audiotactile interplay, which can be observed for left-lateralized stimulation in the right hemisphere. There, audiotactile interplay recruits a functional network of unisensory cortices, and the strength of these functional network connections is directly related to subjects' perceptual sensitivity. Copyright © 2013 Elsevier Inc. All rights reserved.
A crossmodal role for audition in taste perception.
Yan, Kimberly S; Dando, Robin
2015-06-01
Our sense of taste can be influenced by our other senses, with several groups having explored the effects of olfactory, visual, or tactile stimulation on what we perceive as taste. Research into multisensory, or crossmodal, perception has rarely linked our sense of taste with that of audition. In our study, 48 participants in a crossover experiment sampled multiple concentrations of solutions of 5 prototypic tastants, during conditions with or without broad-spectrum auditory stimulation simulating that of airline cabin noise. Airline cabins are an unusual environment, in which food is consumed routinely under extreme noise conditions, often over 85 dB, and in which the perceived quality of food is often criticized. Participants rated the intensity of solutions representing varying concentrations of the 5 basic tastes on the general Labeled Magnitude Scale. No difference in intensity ratings was evident between the control and sound conditions for salty, sour, or bitter tastes. Likewise, panelists did not perform differently during sound conditions when rating tactile, visual, or auditory stimulation, or in reaction time tests. Interestingly, sweet taste intensity was rated progressively lower, whereas the perception of umami taste was augmented during the experimental sound condition, to a progressively greater degree with increasing concentration. We postulate that this effect arises from mechanostimulation of the chorda tympani nerve, which transits directly across the tympanic membrane of the middle ear. (c) 2015 APA, all rights reserved.
2014-01-01
Background: People with severe disabilities, e.g. due to neurodegenerative disease, depend on technology that allows for accurate wheelchair control. For those who cannot operate a wheelchair with a joystick, brain-computer interfaces (BCI) may offer a valuable option. Technology depending on visual or auditory input may not be feasible as these modalities are dedicated to processing of environmental stimuli (e.g. recognition of obstacles, ambient noise). Herein we thus validated the feasibility of a BCI based on tactually-evoked event-related potentials (ERP) for wheelchair control. Furthermore, we investigated use of a dynamic stopping method to improve the speed of the tactile BCI system. Methods: Positions of four tactile stimulators represented navigation directions (left thigh: move left; right thigh: move right; abdomen: move forward; lower neck: move backward) and N = 15 participants delivered navigation commands by focusing their attention on the desired tactile stimulus in an oddball paradigm. Results: Participants navigated a virtual wheelchair through a building and eleven participants successfully completed the task of reaching 4 checkpoints in the building. The virtual wheelchair was equipped with simulated shared-control sensors (collision avoidance), yet these sensors were rarely needed. Conclusion: We conclude that most participants achieved tactile ERP-BCI control sufficient to reliably operate a wheelchair and that dynamic stopping was of high value for tactile ERP classification. Finally, this paper discusses the feasibility of tactile ERPs for BCI-based wheelchair control. PMID:24428900
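The dynamic stopping idea referenced above can be made concrete with a minimal sketch: accumulate per-round classifier evidence for the four navigation commands and stop stimulating as soon as one command is sufficiently dominant. This is an illustrative reconstruction, not the authors' implementation; the softmax scoring, the threshold value, and the `score_rounds` interface are all assumptions.

```python
import numpy as np

def dynamic_stopping(score_rounds, threshold=0.9, max_rounds=10):
    """Stop stimulation early once accumulated evidence for one of the four
    tactile commands (left, right, forward, backward) is dominant.

    score_rounds: iterable of length-4 arrays, one classifier score per
    command per stimulation round (hypothetical interface).
    Returns (command_index, rounds_used)."""
    total = np.zeros(4)
    rounds = 0
    for scores in score_rounds:
        rounds += 1
        total += scores
        p = np.exp(total - total.max())
        p /= p.sum()                       # softmax over accumulated scores
        if p.max() >= threshold or rounds >= max_rounds:
            break
    return int(total.argmax()), rounds

# Example: the "move forward" command (index 2) wins after a few rounds.
rng = np.random.default_rng(1)
sim_rounds = (rng.normal(0, 0.5, 4) + np.array([0, 0, 1.0, 0]) for _ in range(10))
print(dynamic_stopping(sim_rounds))
```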
1997-01-01
restrict limb movement and impede tactile, visual, auditory, and olfactory functioning. When worn at the MOPP IV level, CPC becomes an encapsulated...gas mask. This index was less than the level of acceptability of voice communication of 75% (Bensel, 1997/this issue). This finding is of concern...because even the two voice resonators in the M40 facepiece, compared to one in the M17, do not produce quality speech. Restricted and optically
A view of Kanerva's sparse distributed memory
NASA Technical Reports Server (NTRS)
Denning, P. J.
1986-01-01
Pentti Kanerva is working on a new class of computers, which are called pattern computers. Pattern computers may close the gap between capabilities of biological organisms to recognize and act on patterns (visual, auditory, tactile, or olfactory) and capabilities of modern computers. Combinations of numeric, symbolic, and pattern computers may one day be capable of sustaining robots. An overview of the requirements for a pattern computer, a summary of Kanerva's Sparse Distributed Memory (SDM), and examples of tasks this computer can be expected to perform well are given.
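To make the pattern-computer idea concrete, here is a toy version of Kanerva's SDM: random binary "hard locations", counter-based storage, and read/write activation of all locations within a Hamming radius of the address. The structure follows the standard textbook description, but the dimensions and radius below are illustrative assumptions.

```python
import numpy as np

class SparseDistributedMemory:
    """Toy Kanerva SDM: n-bit addresses and data, m random hard locations."""
    def __init__(self, n=256, m=2000, radius=112, seed=0):
        rng = np.random.default_rng(seed)
        self.hard = rng.integers(0, 2, size=(m, n), dtype=np.int8)
        self.counters = np.zeros((m, n), dtype=np.int32)
        self.radius = radius               # activation radius in Hamming distance

    def _active(self, addr):
        return (self.hard != addr).sum(axis=1) <= self.radius

    def write(self, addr, data):
        # every activated hard location increments/decrements its bit counters
        self.counters[self._active(addr)] += 2 * np.asarray(data) - 1

    def read(self, addr):
        # pooled counters are thresholded back to a binary word
        return (self.counters[self._active(addr)].sum(axis=0) >= 0).astype(np.int8)

# Autoassociative use: store a pattern at its own address, then recall it
# from a cue corrupted by 40 flipped bits.
rng = np.random.default_rng(1)
sdm = SparseDistributedMemory()
pattern = rng.integers(0, 2, 256, dtype=np.int8)
sdm.write(pattern, pattern)
noisy = pattern.copy()
noisy[rng.choice(256, 40, replace=False)] ^= 1
print((sdm.read(noisy) == pattern).mean())   # should be close to 1.0
```

The distributed, radius-based activation is what gives SDM its pattern-recognition flavor: a noisy cue still activates most of the locations the original write touched, so the pooled counters vote the stored pattern back out.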
2007-11-14
Artificial intelligence and education, Volume 1: Learning environments and tutoring systems. Hillsdale, NJ: Erlbaum. Wickens, C.D. (1984). Processing...and how to use it to best optimize the learning process. Some researchers (see Loftin & Savely, 1991) have proposed adding intelligent systems to the...is experienced as the cognitive centers in an individual’s brain process visual, tactile, kinesthetic, olfactory, proprioceptive, and auditory
Learning Style as a Predictor of First-Time NCLEX-RN Success: Implications for Nurse Educators.
Lown, Susan G; Hawkins, Lee Ann
Improving NCLEX-RN® pass rates remains a priority for nursing programs. Many programs collect learning style inventory data, yet few studies have looked at relationships between these data and NCLEX-RN pass/fail rates. Learning style preferences (visual, auditory, tactile, individual, group) and NCLEX pass/fail results were examined for 532 undergraduates in a Midwestern university. A significant correlation between preference for group learning and failure of the NCLEX was found (χ² = 5.99, P = .05).
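The reported statistic is a standard chi-square test of independence on a contingency table of learning-style preference by pass/fail outcome. A minimal sketch with made-up counts (the paper's actual table is not reproduced in the abstract):

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = prefers group learning (yes/no),
# columns = NCLEX-RN result (pass/fail). Counts are illustrative only.
table = [[180, 40],
         [280, 32]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")
```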
Keil, Julian; Pomper, Ulrich; Feuerbach, Nele; Senkowski, Daniel
2017-03-01
Intersensory attention (IA) describes the process of directing attention to a specific modality. Temporal orienting (TO) characterizes directing attention to a specific moment in time. Previously, studies indicated that these two processes could have opposite effects on early evoked brain activity. The exact time-course and processing stages of both processes are still unknown. In this human electroencephalography study, we investigated the effects of IA and TO on visuo-tactile stimulus processing within one paradigm. IA was manipulated by presenting auditory cues to indicate whether participants should detect visual or tactile targets in visuo-tactile stimuli. TO was manipulated by presenting stimuli block-wise at fixed or variable inter-stimulus intervals. We observed that TO affects evoked activity to visuo-tactile stimuli prior to IA. Moreover, we found that TO reduces the amplitude of early evoked brain activity, whereas IA enhances it. Using beamformer source-localization, we observed that IA increases neural responses in sensory areas of the attended modality whereas TO reduces brain activity in widespread cortical areas. Based on these findings we derive an updated working model for the effects of temporal and intersensory attention on early evoked brain activity. Copyright © 2017 Elsevier Inc. All rights reserved.
Social Contact Enhances Bodily Self-Awareness.
Hazem, Nesrine; Beaurenaut, Morgan; George, Nathalie; Conty, Laurence
2018-03-08
Human self-awareness is arguably one of the most important and revealing questions of modern science. Converging theoretical perspectives link self-awareness and social abilities in human beings. In particular, mutual engagement during social interactions, or social contact, is thought to boost self-awareness. Yet, empirical evidence for this effect is scarce. We recently showed that the perception of eye contact induces enhanced bodily self-awareness. Here, we aimed at extending these findings by testing the influence of social contact in the auditory and tactile modalities, in order to demonstrate that social contact enhances bodily self-awareness irrespective of sensory modality. In the first experiment, participants were exposed to hearing their own first name (as compared to another, unfamiliar name and to noise). In the second experiment, human touch (as compared to brush touch and no touch) was used as the social contact cue. In both experiments, participants demonstrated more accurate ratings of their bodily reactions in response to emotional pictures following the social contact condition, a proxy of bodily self-awareness. Further analyses indicated that the effect of social contact was comparable across the tactile, auditory and visual modalities. These results provide the first direct empirical evidence in support of the essentially social nature of human self-awareness.
Field evaluation of a wearable multimodal soldier navigation system.
Aaltonen, Iina; Laarni, Jari
2017-09-01
Challenging environments pose difficulties for terrain navigation, and wearable, multimodal navigation systems have therefore been proposed to overcome these difficulties. Few such navigation systems, however, have been evaluated in field conditions. We evaluated how a multimodal system can aid navigation in a forest in the context of a military exercise. The system included a head-mounted display, headphones, and a tactile vibrating vest. Visual, auditory, and tactile modalities were tested and evaluated using unimodal, bimodal, and trimodal conditions. Questionnaires, interviews and observations were used to evaluate the advantages and disadvantages of each modality and of their multimodal use. The guidance was considered easy to interpret and helpful in navigation. Simplicity of the displayed information was required, which partially conflicted with the request to have both distance and directional information available. Copyright © 2017 Elsevier Ltd. All rights reserved.
Yang, Chao-Yang; Wu, Cheng-Tse
2017-03-01
This research investigated the risks involved in bicycle riding while using various sensory modalities to deliver training information. To understand the risks associated with using bike computers, this study evaluated hazard perception performance through lab-based simulations of authentic riding conditions. Analysis of hazard sensitivity (d') from signal detection theory, rider response times, and eye glances provided insights into the risks of using bike computers. In this study, 30 participants were tested with eight hazard perception tasks while they maintained a cadence of 60 ± 5 RPM and used bike computers with different sensory displays, namely visual, auditory, and tactile feedback signals. The results indicated that synchronously using different sense organs to receive cadence feedback significantly affects hazard perception performance; direct visual information leads to the worst rider distraction, with a mean sensitivity to hazards (d') of -1.03. For systems with multiple interacting sensory aids, auditory aids were found to result in the greatest reduction in sensitivity to hazards (mean d' = -0.57), whereas tactile sensory aids reduced the degree of rider distraction (mean d' = -0.23). Our work complements existing work in this domain by advancing the understanding of how to design devices that deliver information subtly, thereby preventing disruption of a rider's perception of road hazards. Copyright © 2016 Elsevier Ltd. All rights reserved.
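The sensitivity index d' used here comes from signal detection theory: d' = z(hit rate) - z(false-alarm rate). A small sketch of how such a score is computed from raw detection counts; the log-linear correction for extreme rates is a common convention, not necessarily the authors' choice:

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: d' = z(H) - z(FA).
    A log-linear correction keeps rates away from 0 and 1."""
    h = (hits + 0.5) / (hits + misses + 1.0)
    fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(h) - norm.ppf(fa)

# e.g. a rider who detected 6 of 8 hazards with 1 false alarm in 8 catch trials
print(round(d_prime(6, 2, 1, 7), 2))   # ~1.56
```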
Re-examining overlap between tactile and visual motion responses within hMT+ and STS
Jiang, Fang; Beauchamp, Michael S.; Fine, Ione
2015-01-01
Here we examine overlap between tactile and visual motion BOLD responses within the human MT+ complex. Although several studies have reported tactile responses overlapping with hMT+, many used group average analyses, leaving it unclear whether these responses were restricted to sub-regions of hMT+. Moreover, previous studies either employed a tactile task or passive stimulation, leaving it unclear whether or not tactile responses in hMT+ are simply the consequence of visual imagery. Here we carried out a replication of one of the classic papers finding tactile responses in hMT+ (Hagen et al. 2002). We mapped MT and MST in individual subjects using visual field localizers. We then examined responses to tactile motion on the arm, either presented passively or in the presence of a visual task performed at fixation designed to minimize visualization of the concurrent tactile stimulation. To our surprise, without a visual task, we found only weak tactile motion responses in MT (6% of voxels showing tactile responses) and MST (2% of voxels). With an unrelated visual task designed to withdraw attention from the tactile modality, responses in MST reduced to almost nothing (<1% of voxels). Consistent with previous results, we did observe tactile responses in STS regions superior and anterior to hMT+. Despite the lack of individual overlap, group averaged responses produced strong spurious overlap between tactile and visual motion responses within hMT+ that resembled those observed in previous studies. The weak nature of tactile responses in hMT+ (and their abolition by withdrawal of attention) suggests that hMT+ may not serve as a supramodal motion processing module. PMID:26123373
Processing of speech signals for physical and sensory disabilities.
Levitt, H
1995-01-01
Assistive technology involving voice communication is used primarily by people who are deaf, hard of hearing, or who have speech and/or language disabilities. It is also used to a lesser extent by people with visual or motor disabilities. A very wide range of devices has been developed for people with hearing loss. These devices can be categorized not only by the modality of stimulation [i.e., auditory, visual, tactile, or direct electrical stimulation of the auditory nerve (auditory-neural)] but also in terms of the degree of speech processing that is used. At least four such categories can be distinguished: assistive devices (a) that are not designed specifically for speech, (b) that take the average characteristics of speech into account, (c) that process articulatory or phonetic characteristics of speech, and (d) that embody some degree of automatic speech recognition. Assistive devices for people with speech and/or language disabilities typically involve some form of speech synthesis or symbol generation for severe forms of language disability. Speech synthesis is also used in text-to-speech systems for sightless persons. Other applications of assistive technology involving voice communication include voice control of wheelchairs and other devices for people with mobility disabilities. PMID:7479816
Eberhardt, Silvio P; Auer, Edward T; Bernstein, Lynne E
2014-01-01
In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee's primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee's lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT).
Maras Atabay, Meltem; Safi Oz, Zehra; Kurtman, Elvan
2014-08-01
The dopamine D4 receptor gene (DRD4) encodes a receptor for dopamine, a chemical messenger used in the brain. One variant of the DRD4 gene, the 7R allele, is believed to be associated with attention deficit hyperactivity disorder (ADHD). The aim of this study was to investigate the relationships between repeat polymorphisms in DRD4 and second language learning styles such as visual (seeing), tactile (touching), auditory (hearing), kinesthetic (moving) and group/individual learning styles, as well as the relationships among DRD4 gene polymorphisms and ADHD in undergraduate students. A total of 227 students between 17 and 21 years of age were evaluated using the Wender Utah rating scale and DSM-IV diagnostic criteria for ADHD. Additionally, Reid's perceptual learning style questionnaire for second language learning styles was applied. These students were also evaluated for social distress factors using the list of Threatening Events (TLE): having had no TLE, just one TLE, or two or more TLEs within the 6 months before the interview. For DRD4 gene polymorphisms, DNA was extracted from whole blood using the standard phenol/chloroform method and genotyped using the polymerase chain reaction. Second language learners carrying the DRD4.7+ repeats showed kinaesthetic and auditory learning styles, while students with DRD4.7- repeats showed visual, tactile and group learning styles, and also preferred the more visual learning styles [Formula: see text]. We also demonstrated that the DRD4 polymorphism significantly affected the risk effect conferred by an increasing level of exposure to TLE.
Serino, Andrea; Canzoneri, Elisa; Marzolla, Marilena; di Pellegrino, Giuseppe; Magosso, Elisa
2015-01-01
Stimuli from different sensory modalities occurring on or close to the body are integrated in a multisensory representation of the space surrounding the body, i.e., peripersonal space (PPS). PPS dynamically modifies depending on experience, e.g., it extends after using a tool to reach far objects. However, the neural mechanism underlying PPS plasticity after tool use is largely unknown. Here we use a combined computational-behavioral approach to propose and test a possible mechanism accounting for PPS extension. We first present a neural network model simulating audio-tactile representation in the PPS around one hand. Simulation experiments showed that our model reproduced the main property of PPS neurons, i.e., selective multisensory response for stimuli occurring close to the hand. We used the neural network model to simulate the effects of a tool-use training. In terms of sensory inputs, tool use was conceptualized as a concurrent tactile stimulation from the hand, due to holding the tool, and an auditory stimulation from the far space, due to tool-mediated action. Results showed that after exposure to those inputs, PPS neurons responded also to multisensory stimuli far from the hand. The model thus suggests that synchronous pairing of tactile hand stimulation and auditory stimulation from the far space is sufficient to extend PPS, such as after tool-use. Such prediction was confirmed by a behavioral experiment, where we used an audio-tactile interaction paradigm to measure the boundaries of PPS representation. We found that PPS extended after synchronous tactile-hand stimulation and auditory-far stimulation in a group of healthy volunteers. Control experiments both in simulation and behavioral settings showed that the same amount of tactile and auditory inputs administered out of synchrony did not change PPS representation. We conclude by proposing a simple, biological-plausible model to explain plasticity in PPS representation after tool-use, which is supported by computational and behavioral data. PMID:25698947
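The mechanism the model proposes can be sketched in a few lines: a hand-centered multisensory unit initially weights auditory input by proximity to the hand, and synchronous tactile-plus-far-auditory stimulation (the tool-use proxy) Hebbianly strengthens the far-space auditory synapses. This is a toy illustration of that logic, not the authors' published network; all parameters are arbitrary assumptions.

```python
import numpy as np

dist = np.linspace(0, 100, 21)            # auditory source distance from hand (cm)
w0 = np.exp(-dist / 15.0)                 # initial weights: near-space sounds only

def multisensory_response(w_aud, tactile=1.0, sound_at=None):
    """Response of a hand-centered PPS unit to touch plus a sound."""
    if sound_at is None:
        return tactile
    aud = np.exp(-(dist - sound_at) ** 2 / (2 * 10.0 ** 2))
    return tactile + np.sum(w_aud * aud)

# "Tool use": synchronous touch on the hand and sound at ~100 cm.
w = w0.copy()
for _ in range(100):
    pre = np.exp(-(dist - 100.0) ** 2 / (2 * 10.0 ** 2))  # presynaptic auditory drive
    post = 1.0                                            # unit driven by the touch
    w = np.clip(w + 0.01 * pre * post, 0.0, 1.0)          # Hebbian potentiation

print(multisensory_response(w0, sound_at=100))  # weak far-space response before
print(multisensory_response(w, sound_at=100))   # enhanced far-space response after
```

The key point the sketch captures is that synchrony is doing the work: without coincident tactile drive (post = 0), the Hebbian term vanishes and the weights, and hence the PPS boundary, stay put, matching the control conditions described above.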
Kostopoulos, Penelope; Petrides, Michael
2016-02-16
There is evidence from the visual, verbal, and tactile memory domains that the midventrolateral prefrontal cortex plays a critical role in the top-down modulation of activity within posterior cortical areas for the selective retrieval of specific aspects of a memorized experience, a functional process often referred to as active controlled retrieval. In the present functional neuroimaging study, we explore the neural bases of active retrieval for auditory nonverbal information, about which almost nothing is known. Human participants were scanned with functional magnetic resonance imaging (fMRI) in a task in which they were presented with short melodies from different locations in a simulated virtual acoustic environment within the scanner and were then instructed to retrieve selectively either the particular melody presented or its location. There were significant activity increases specifically within the midventrolateral prefrontal region during the selective retrieval of nonverbal auditory information. During the selective retrieval of information from auditory memory, the right midventrolateral prefrontal region increased its interaction with the auditory temporal region and the inferior parietal lobule in the right hemisphere. These findings provide evidence that the midventrolateral prefrontal cortical region interacts with specific posterior cortical areas in the human cerebral cortex for the selective retrieval of object and location features of an auditory memory experience.
Auditory and tactile gap discrimination by observers with normal and impaired hearing.
Desloge, Joseph G; Reed, Charlotte M; Braida, Louis D; Perez, Zachary D; Delhorne, Lorraine A; Villabona, Timothy J
2014-02-01
Temporal processing ability for the senses of hearing and touch was examined through the measurement of gap-duration discrimination thresholds (GDDTs) employing the same low-frequency sinusoidal stimuli in both modalities. GDDTs were measured in three groups of observers (normal-hearing, hearing-impaired, and normal-hearing with simulated hearing loss) covering an age range of 21-69 yr. GDDTs for a baseline gap of 6 ms were measured for four different combinations of 100-ms leading and trailing markers (250-250, 250-400, 400-250, and 400-400 Hz). Auditory measurements were obtained for monaural presentation over headphones and tactile measurements were obtained using sinusoidal vibrations presented to the left middle finger. The auditory GDDTs of the hearing-impaired listeners, which were larger than those of the normal-hearing observers, were well-reproduced in the listeners with simulated loss. The magnitude of the GDDT was generally independent of modality and showed effects of age in both modalities. The use of different-frequency compared to same-frequency markers led to a greater deterioration in auditory GDDTs compared to tactile GDDTs and may reflect differences in bandwidth properties between the two sensory systems.
Hearing visuo-tactile synchrony - Sound-induced proprioceptive drift in the invisible hand illusion.
Darnai, Gergely; Szolcsányi, Tibor; Hegedüs, Gábor; Kincses, Péter; Kállai, János; Kovács, Márton; Simon, Eszter; Nagy, Zsófia; Janszky, József
2017-02-01
The rubber hand illusion (RHI) and its variant the invisible hand illusion (IHI) are useful for investigating multisensory aspects of bodily self-consciousness. Here, we explored whether auditory conditioning during an RHI could enhance the trisensory visuo-tactile-proprioceptive interaction underlying the IHI. Our paradigm consisted of an IHI session that was followed by an RHI session and another IHI session. The IHI sessions had two parts presented in counterbalanced order. One part was conducted in silence, whereas the other part was conducted on the backdrop of metronome beats that occurred in synchrony with the brush movements used for the induction of the illusion. In a first experiment, the RHI session also involved metronome beats and was aimed at creating an associative memory between the brush stroking of a rubber hand and the sounds. An analysis of IHI sessions showed that the participants' perceived hand position drifted more towards the body-midline in the metronome relative to the silent condition without any sound-related session differences. Thus, the sounds, but not the auditory RHI conditioning, influenced the IHI. In a second experiment, the RHI session was conducted without metronome beats. This confirmed the conditioning-independent presence of sound-induced proprioceptive drift in the IHI. Together, these findings show that the influence of visuo-tactile integration on proprioceptive updating is modifiable by irrelevant auditory cues merely through the temporal correspondence between the visuo-tactile and auditory events. © 2016 The British Psychological Society.
Acoustic Tactile Representation of Visual Information
NASA Astrophysics Data System (ADS)
Silva, Pubudu Madhawa
Our goal is to explore the use of hearing and touch to convey graphical and pictorial information to visually impaired people. Our focus is on dynamic, interactive display of visual information using existing, widely available devices, such as smart phones and tablets with touch sensitive screens. We propose a new approach for acoustic-tactile representation of visual signals that can be implemented on a touch screen and allows the user to actively explore a two-dimensional layout consisting of one or more objects with a finger or a stylus while listening to auditory feedback via stereo headphones. The proposed approach is acoustic-tactile because sound is used as the primary source of information for object localization and identification, while touch is used for pointing and kinesthetic feedback. A static overlay of raised-dot tactile patterns can also be added. A key distinguishing feature of the proposed approach is the use of spatial sound (directional and distance cues) to facilitate the active exploration of the layout. We consider a variety of configurations for acoustic-tactile rendering of object size, shape, identity, and location, as well as for the overall perception of simple layouts and scenes. While our primary goal is to explore the fundamental capabilities and limitations of representing visual information in acoustic-tactile form, we also consider a number of relatively simple configurations that can be tied to specific applications. In particular, we consider a simple scene layout consisting of objects in a linear arrangement, each with a distinct tapping sound, which we compare to a "virtual cane". We also present a configuration that can convey a "Venn diagram". We present systematic subjective experiments to evaluate the effectiveness of the proposed display for shape perception, object identification and localization, and 2-D layout perception, as well as the applications. Our experiments were conducted with visually blocked subjects. The results are evaluated in terms of accuracy and speed, and they demonstrate the advantages of spatial sound for guiding the scanning finger or pointer in shape perception, object localization, and layout exploration. We show that these advantages increase with the amount of detail (smaller object size) in the display. Our experimental results show that the proposed system outperforms the state of the art in shape perception, including variable friction displays. We also demonstrate that, even though they are currently available only as static overlays, raised dot patterns provide the best shape rendition in terms of both accuracy and speed. Our experiments with layout rendering and perception demonstrate that simultaneous representation of objects, using the most effective approaches for directionality and distance rendering, approaches the optimal performance level provided by visual layout perception. Finally, experiments with the virtual cane and Venn diagram configurations demonstrate that the proposed techniques can be used effectively in simple but nontrivial real-world applications. One of the most important conclusions of our experiments is that there is a clear performance gap between experienced and inexperienced subjects, which indicates that there is a lot of room for improvement with appropriate and extensive training.
By exploring a wide variety of design alternatives and focusing on different aspects of the acoustic-tactile interfaces, our results offer many valuable insights and great promise for the design of future systems and for systematic tests with visually impaired and visually blocked subjects, utilizing the most effective configurations.
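The central rendering idea, spatial sound guiding the scanning finger, reduces to mapping the finger-to-object offset onto directional and distance cues. A minimal sketch of one such mapping (constant-power stereo panning plus distance attenuation; the specific formulas are illustrative assumptions, not the dissertation's scheme):

```python
import math

def spatial_cue(finger_xy, target_xy, max_gain=1.0):
    """Map the finger-to-target offset to stereo gains.
    Azimuth drives left/right balance; distance drives attenuation."""
    dx = target_xy[0] - finger_xy[0]
    dy = target_xy[1] - finger_xy[1]
    distance = math.hypot(dx, dy)
    azimuth = math.atan2(dx, dy)              # 0 = straight ahead of the finger
    pan = math.sin(azimuth)                   # -1 (hard left) .. +1 (hard right)
    gain = max_gain / (1.0 + 0.02 * distance) # farther targets sound quieter
    left = gain * math.sqrt((1 - pan) / 2)    # constant-power panning
    right = gain * math.sqrt((1 + pan) / 2)
    return left, right

# e.g. a target 30 px to the right of the finger: the right channel dominates
print(spatial_cue((100, 100), (130, 100)))
```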
Teramoto, Wataru; Watanabe, Hiroshi; Umemura, Hiroyuki
2008-01-01
The perceived temporal order of external successive events does not always follow their physical temporal order. We examined the contribution of self-motion mechanisms in the perception of temporal order in the auditory modality. We measured perceptual biases in the judgment of the temporal order of two short sounds presented successively, while participants experienced visually induced self-motion (yaw-axis circular vection) elicited by viewing long-lasting large-field visual motion. In experiment 1, a pair of white-noise patterns was presented to participants at various stimulus-onset asynchronies through headphones, while they experienced visually induced self-motion. Perceived temporal order of auditory events was modulated by the direction of the visual motion (or self-motion). Specifically, the sound presented to the ear in the direction opposite to the visual motion (ie heading direction) was perceived prior to the sound presented to the ear in the same direction. Experiments 2A and 2B were designed to reduce the contributions of decisional and/or response processes. In experiment 2A, the directional cueing of the background (left or right) and the response dimension (high pitch or low pitch) were not spatially associated. In experiment 2B, participants were additionally asked to report which of the two sounds was perceived 'second'. Almost the same results as in experiment 1 were observed, suggesting that the change in temporal order of auditory events during large-field visual motion reflects a change in perceptual processing. Experiment 3 showed that the biases in the temporal-order judgments of auditory events were caused by concurrent actual self-motion with a rotatory chair. In experiment 4, using a small display, we showed that 'pure' long exposure to visual motion without the sensation of self-motion was not responsible for this phenomenon. These results are consistent with previous studies reporting a change in the perceived temporal order of visual or tactile events depending on the direction of self-motion. Hence, large-field induced (ie optic flow) self-motion can affect the temporal order of successive external events across various modalities.
Distracted Biking: An Observational Study.
Wolfe, Elizabeth Suzanne; Arabian, Sandra Strack; Breeze, Janis L; Salzler, Matthew J
2016-01-01
Commuting via bicycle is a very popular mode of transportation in the Northeastern United States. Boston, MA, has seen a rapid increase in bicycle ridership over the past decade, which has raised concerns and awareness about bicycle safety. An emerging topic in this field is distracted bicycle riding. This study was conducted to provide descriptive data on the prevalence and type of distracted bicycling in Boston at different times of day. This was a cross-sectional study in which observers tallied bicyclists at 4 high traffic intersections in Boston during various peak commuting hours for 2 types of distractions: auditory (earbuds/phones in or on ears), and visual/tactile (electronic device or other object in hand). Nineteen hundred seventy-four bicyclists were observed and 615 (31.2%), 95% CI [29, 33%], were distracted. Of those observed, auditory distractions were the most common (N = 349; 17.7%), 95% CI [16, 19], p = .0003, followed by visual/tactile distractions (N = 266; 13.5%), 95% CI [12, 15]. The highest proportion (40.7%), 95% CI [35, 46], of distracted bicyclists was observed during the midday commute (between 13:30 and 15:00). Distracted bicycling is a prevalent safety concern in the city of Boston, as almost a third of all bicyclists exhibited distracted behavior. Education and public awareness campaigns should be designed to decrease distracted bicycling behaviors and promote bicycle safety in Boston. An awareness of the prevalence of distracted biking can be utilized to promote bicycle safety campaigns dedicated to decreasing distracted bicycling and to provide a baseline against which improvements can be measured.
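The 95% confidence intervals quoted for these proportions follow from the standard normal approximation; a quick sketch reproducing the headline figure of 615/1974 distracted riders:

```python
import math

def wald_ci(k, n, z=1.96):
    """Wald 95% confidence interval for a binomial proportion k/n."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, p - half, p + half

p, lo, hi = wald_ci(615, 1974)
print(f"{p:.1%} (95% CI {lo:.1%} to {hi:.1%})")   # ~31.2% (29.1% to 33.2%)
```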
Touch activates human auditory cortex.
Schürmann, Martin; Caetano, Gina; Hlushchuk, Yevhen; Jousmäki, Veikko; Hari, Riitta
2006-05-01
Vibrotactile stimuli can facilitate hearing, both in hearing-impaired and in normally hearing people. Accordingly, the sounds of hands exploring a surface contribute to the explorer's haptic percepts. As a possible brain basis of such phenomena, functional brain imaging has identified activations specific to audiotactile interaction in secondary somatosensory cortex, auditory belt area, and posterior parietal cortex, depending on the quality and relative salience of the stimuli. We studied 13 subjects with non-invasive functional magnetic resonance imaging (fMRI) to search for auditory brain areas that would be activated by touch. Vibration bursts of 200 Hz were delivered to the subjects' fingers and palm and tactile pressure pulses to their fingertips. Noise bursts served to identify auditory cortex. Vibrotactile-auditory co-activation, addressed with minimal smoothing to obtain a conservative estimate, was found in an 85-mm³ region in the posterior auditory belt area. This co-activation could be related to facilitated hearing at the behavioral level, reflecting the analysis of sound-like temporal patterns in vibration. However, even tactile pulses (without any vibration) activated parts of the posterior auditory belt area, which therefore might subserve processing of audiotactile events that arise during dynamic contact between hands and environment.
[Short-term memory characteristics of vibration intensity tactile perception on human wrist].
Hao, Fei; Chen, Li-Juan; Lu, Wei; Song, Ai-Guo
2014-12-25
In this study, a recall experiment and a recognition experiment were designed to assess the human wrist's short-term memory characteristics of tactile perception of vibration intensity, using a novel homemade vibrotactile display device based on the spatiotemporal combination of vibrations from multiple micro vibration motors as the test device. Based on the experimental data obtained, the short-term memory span, recognition accuracy and reaction time for vibration intensity were analyzed. From the experimental results, some important conclusions can be drawn: (1) the average short-term memory span of tactile perception of vibration intensity is 3 ± 1 items; (2) the greater the difference between two adjacent discrete intensities of vibrotactile stimulation, the better the average short-term memory span of the human wrist; (3) there is an obvious difference in the average short-term memory span for vibration intensity between males and females; (4) information extraction from short-term memory of the vibrotactile display proceeds by serially scanning and comparing items; (5) the recognition accuracy and reaction time of the vibrotactile display compare unfavourably with those of visual and auditory displays. The results of this study are important for designing vibrotactile display coding schemes.
Juvenile psittacine environmental enrichment.
Simone-Freilicher, Elisabeth; Rupley, Agnes E
2015-05-01
Environmental enrichment is of great import to the emotional, intellectual, and physical development of the juvenile psittacine and their success in the human home environment. Five major types of enrichment include social, occupational, physical, sensory, and nutritional. Occupational enrichment includes exercise and psychological enrichment. Physical enrichment includes the cage and accessories and the external home environment. Sensory enrichment may be visual, auditory, tactile, olfactory, or taste oriented. Nutritional enrichment includes variations in appearance, type, and frequency of diet, and treats, novelty, and foraging. Two phases of the preadult period deserve special enrichment considerations: the development of autonomy and puberty. Copyright © 2015 Elsevier Inc. All rights reserved.
Franosch, Jan-Moritz P; Urban, Sebastian; van Hemmen, J Leo
2013-12-01
How can an animal learn from experience? How can it train sensors, such as the auditory or tactile system, based on other sensory input such as the visual system? Supervised spike-timing-dependent plasticity (supervised STDP) is a possible answer. Supervised STDP trains one modality using input from another one as "supervisor." Quite complex time-dependent relationships between the senses can be learned. Here we prove that under very general conditions, supervised STDP converges to a stable configuration of synaptic weights leading to a reconstruction of primary sensory input.
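The rule itself can be sketched directly: it is the usual exponential STDP window, except that the postsynaptic reference spikes come from the supervising modality. A minimal pairwise-STDP illustration (the parameters and the all-pairs loop are assumptions; the paper's convergence proof covers a more general setting):

```python
import numpy as np

def supervised_stdp(pre_spikes, sup_spikes, w, a_plus=0.01, a_minus=0.012,
                    tau=20.0, w_max=1.0):
    """Update weight w of a trained-modality synapse, using the supervising
    modality's spike train as the postsynaptic reference.
    pre_spikes, sup_spikes: spike times in ms."""
    for t_post in sup_spikes:
        for t_pre in pre_spikes:
            dt = t_post - t_pre
            if dt > 0:    # pre fires before supervisor spike -> potentiation
                w += a_plus * np.exp(-dt / tau)
            elif dt < 0:  # pre fires after supervisor spike -> depression
                w -= a_minus * np.exp(dt / tau)
    return float(np.clip(w, 0.0, w_max))

# A tactile input that reliably precedes the visual "supervisor" is strengthened.
print(supervised_stdp([10.0, 60.0], [15.0, 65.0], w=0.5) > 0.5)  # True
```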
Mehler, Bruce; Kidd, David; Reimer, Bryan; Reagan, Ian; Dobres, Jonathan; McCartt, Anne
2016-03-01
One purpose of integrating voice interfaces into embedded vehicle systems is to reduce drivers' visual and manual distractions with 'infotainment' technologies. However, there is scant research on actual benefits in production vehicles or how different interface designs affect attentional demands. Driving performance, visual engagement, and indices of workload (heart rate, skin conductance, subjective ratings) were assessed in 80 drivers randomly assigned to drive a 2013 Chevrolet Equinox or Volvo XC60. The Chevrolet MyLink system allowed completing tasks with one voice command, while the Volvo Sensus required multiple commands to navigate the menu structure. When calling a phone contact, both voice systems reduced visual demand relative to the visual-manual interfaces, with reductions for drivers in the Equinox being greater. The Equinox 'one-shot' voice command showed advantages during contact calling but had significantly higher error rates than Sensus during destination address entry. For both secondary tasks, neither voice interface entirely eliminated visual demand. Practitioner Summary: The findings reinforce the observation that most, if not all, automotive auditory-vocal interfaces are multi-modal interfaces in which the full range of potential demands (auditory, vocal, visual, manipulative, cognitive, tactile, etc.) need to be considered in developing optimal implementations and evaluating drivers' interaction with the systems. Social Media: In-vehicle voice-interfaces can reduce visual demand but do not eliminate it and all types of demand need to be taken into account in a comprehensive evaluation.
Does bimodal stimulus presentation increase ERP components usable in BCIs?
NASA Astrophysics Data System (ADS)
Thurlings, Marieke E.; Brouwer, Anne-Marie; Van Erp, Jan B. F.; Blankertz, Benjamin; Werkhoven, Peter J.
2012-08-01
Event-related potential (ERP)-based brain-computer interfaces (BCIs) employ differences in brain responses to attended and ignored stimuli. Typically, visual stimuli are used. Tactile stimuli have recently been suggested as a gaze-independent alternative. Bimodal stimuli could evoke additional brain activity due to multisensory integration which may be of use in BCIs. We investigated the effect of visual-tactile stimulus presentation on the chain of ERP components, BCI performance (classification accuracies and bitrates) and participants’ task performance (counting of targets). Ten participants were instructed to navigate a visual display by attending (spatially) to targets in sequences of either visual, tactile or visual-tactile stimuli. We observe that attending to visual-tactile (compared to either visual or tactile) stimuli results in an enhanced early ERP component (N1). This bimodal N1 may enhance BCI performance, as suggested by a nonsignificant positive trend in offline classification accuracies. A late ERP component (P300) is reduced when attending to visual-tactile compared to visual stimuli, which is consistent with the nonsignificant negative trend of participants’ task performance. We discuss these findings in the light of affected spatial attention at high-level compared to low-level stimulus processing. Furthermore, we evaluate bimodal BCIs from a practical perspective and for future applications.
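The bitrates mentioned here are conventionally computed with Wolpaw's information transfer rate, which depends only on the number of classes, the classification accuracy, and the selection rate. A minimal sketch, assuming N equally probable commands:

```python
import math

def wolpaw_itr(n_classes, accuracy, selections_per_min):
    """Bits per minute for an N-class BCI at a given accuracy (Wolpaw ITR)."""
    p = accuracy
    if p <= 1.0 / n_classes:
        return 0.0
    bits = math.log2(n_classes)
    if p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n_classes - 1))
    return bits * selections_per_min

# e.g. a 4-class visual-tactile BCI at 85% accuracy and 6 selections/min
print(round(wolpaw_itr(4, 0.85, 6), 1))   # ~6.9 bits/min
```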
NASA Astrophysics Data System (ADS)
Rule, Audrey C.; Stefanich, Greg P.; Boody, Robert M.; Peiffer, Belinda
2011-04-01
Science, technology, engineering, and mathematics (STEM) fields, important in today's world, are underrepresented by students with disabilities. Students with visual impairments, although cognitively similar to sighted peers, face challenges as STEM subjects are often taught using visuals. They need alternative forms of access such as enlarged or audio-converted text, tactile graphics, and involvement in hands-on science. This project focused on increasing teacher awareness of and providing funds for the purchase of supplemental adaptive resources, supplies, and equipment. We examined attitude and instructional changes across the year of the programme in 15 science and mathematics teachers educating students with visual impairments. Positive changes were noted from pretest to posttest in student and teacher perspectives, and in teacher attitudes towards students with disabilities in STEM classes. Teachers also provided insights into their challenges and successes through a reflective narrative. Several adolescent students resisted accommodations to avoid appearing conspicuous to peers. Teachers implemented three strategies to address this: providing the adaptations to all students in the class; convincing the student of the need for adaptation; and involving the class in understanding and accepting the student's impairment. A variety of teacher-created adaptations for various science and mathematics labs are reported. Another finding was many adaptations provided for the student with visual impairment benefitted the entire class. This study supports the claim that given knowledgeable, supportive teachers, and with appropriate accommodations such as tactile or auditory materials, students with visual impairments can be as successful and engaged as other students in science and mathematics.
Tactile Approaches for Teaching Blind and Visually-Impaired Students in the Geosciences
NASA Astrophysics Data System (ADS)
Permenter, J. L.; Runyon, C.
2003-12-01
Hearing and touch are perhaps the two most important senses for teaching visually-impaired students in any context. Classroom lectures obviously emphasize the auditory aspects of learning, while touch is often relegated to either Braille texts or raised-line drawings for illustrative figures. From the student's perspective, some lecture topics, especially in the sciences, can be a challenge to grasp without additional stimuli. Geosciences have a distinct visual component that can be lost when teaching blind or visually-impaired students, particularly in the study of geomorphology and landform change. As an example, the matters raised concerning volcanic hazards can be difficult to envision without due attention to the limitations of visually-impaired students. Here, we suggest an example of a tactile approach for introducing the study of volcanoes and the hazards associated with them. Large, visually-stimulating images of a volcanic, populated region in southern Peru are supplied for those students who have poor but extant visual acuity, while precise, clay-based models of the region complement the images for those students, as well as for students who have no visual ability whatsoever. We use a model of the terrestrial volcano El Misti and the nearby city of Arequipa, Peru, to directly reflect the volcanic morphology and hazardous aspects of the terrain. The use of computer-generated digital elevation models from remote sensing imaging systems allows accurate replication of the regional topography. Instructors are able to modify these clay models to illustrate spatial and temporal changes in the region, allowing students to better grasp potential geological and geographical transformations over time. The models spawn engaging class discussions and help with designing hazard mitigation protocols.
Vinter, A; Fernandes, V; Orlandi, O; Morgan, P
2013-11-01
The aim of the present study was to examine to what extent the verbal definitions of familiar objects produced by blind children reflect their peculiar perceptual experience and, in consequence, differ from those produced by sighted children. Ninety-six visually impaired children, aged between 6 and 14 years, and 32 age-matched sighted children had to define 10 words denoting concrete animate or inanimate familiar objects. The blind children evoked the tactile and auditory characteristics of objects and expressed personal perceptual experiences in their definitions. The sighted children relied on visual perception, and produced more visually oriented verbalism. In contrast, no differences were observed between children in their propensity to include functional attributes in their verbal definitions. The results are discussed in line with embodied views of cognition that postulate mandatory perceptuomotor processing of words during access to their meaning. © 2012 John Wiley & Sons Ltd.
Zelic, Gregory; Mottet, Denis; Lagarde, Julien
2012-01-01
Recent behavioral neuroscience research revealed that elementary reactive behavior can be improved in the case of cross-modal sensory interactions thanks to underlying multisensory integration mechanisms. Can this benefit be generalized to an ongoing coordination of movements under severe physical constraints? We chose a juggling task to examine this question. A central issue well-known in juggling lies in establishing and maintaining a specific temporal coordination among balls, hands, eyes and posture. Here, we tested whether providing additional timing information about the balls and hands motions by using external sound and tactile periodic stimulations, the latter presented at the wrists, improved the behavior of jugglers. One specific combination of auditory and tactile metronome led to a decrease of the spatiotemporal variability of the juggler's performance: a simple sound associated with left and right tactile cues presented antiphase to each other, which corresponded to the temporal pattern of hand movement in the juggling task. A contrario, no improvements were obtained in the case of other auditory and tactile combinations. We even found a degraded performance when tactile events were presented alone. The nervous system thus appears able to efficiently integrate environmental information brought by different sensory modalities, but only if the information specified matches specific features of the coordination pattern. We discuss the possible implications of these results for the understanding of the neuronal integration process involved in audio-tactile interaction in the context of complex voluntary movement, and considering the well-known gating effect of movement on vibrotactile perception. PMID:22384211
Dynamic and predictive links between touch and vision.
Gray, Rob; Tan, Hong Z
2002-07-01
We investigated crossmodal links between vision and touch for moving objects. In experiment 1, observers discriminated visual targets presented randomly at one of five locations on their forearm. Tactile pulses simulating motion along the forearm preceded visual targets. At short tactile-visual ISIs, discriminations were more rapid when the final tactile pulse and visual target were at the same location. At longer ISIs, discriminations were more rapid when the visual target was offset in the motion direction and were slower for offsets opposite to the motion direction. In experiment 2, speeded tactile discriminations at one of three random locations on the forearm were preceded by a visually simulated approaching object. Discriminations were more rapid when the object approached the location of the tactile stimulation and discrimination performance was dependent on the approaching object's time to contact. These results demonstrate dynamic links in the spatial mapping between vision and touch.
Exploration of the Effectiveness of Tactile Methods
ERIC Educational Resources Information Center
Aldajani, Neda F.
2016-01-01
This paper introduces the tactile method and aims to explore the effectiveness of using tactile methods with students who are blind and visually impaired. Although there was limited research about using this strategy, all of the research agrees that using tactile is one of the best ways for students who are blind and visually impaired to be…
He, Qionger; Arroyo, Erica D; Smukowski, Samuel N; Xu, Jian; Piochon, Claire; Savas, Jeffrey N; Portera-Cailliau, Carlos; Contractor, Anis
2018-04-27
Sensory perturbations in visual, auditory and tactile perception are core problems in fragile X syndrome (FXS). In the Fmr1 knockout mouse model of FXS, the maturation of synapses and circuits during critical period (CP) development in the somatosensory cortex is delayed, but it is unclear how this contributes to altered tactile sensory processing in the mature CNS. Here we demonstrate that inhibiting the juvenile chloride co-transporter NKCC1, which contributes to altered chloride homeostasis in developing cortical neurons of FXS mice, rectifies the chloride imbalance in layer IV somatosensory cortex neurons and corrects the development of thalamocortical excitatory synapses during the CP. Comparison of protein abundances demonstrated that NKCC1 inhibition during early development caused a broad remodeling of the proteome in the barrel cortex. In addition, the abnormally large size of whisker-evoked cortical maps in adult Fmr1 knockout mice was corrected by rectifying the chloride imbalance during the early CP. These data demonstrate that correcting the disrupted driving force through GABA(A) receptors during the CP in cortical neurons restores their synaptic development, has an unexpectedly large effect on differentially expressed proteins, and produces a long-lasting correction of somatosensory circuit function in FXS mice.
Touch to see: neuropsychological evidence of a sensory mirror system for touch.
Bolognini, Nadia; Olgiati, Elena; Xaiz, Annalisa; Posteraro, Lucio; Ferraro, Francesco; Maravita, Angelo
2012-09-01
The observation of touch can be grounded in the activation of brain areas underpinning direct tactile experience, namely the somatosensory cortices. What is the behavioral impact of such a mirror sensory activity on visual perception? To address this issue, we investigated the causal interplay between observed and felt touch in right brain-damaged patients, as a function of their underlying damaged visual and/or tactile modalities. Patients and healthy controls underwent a detection task comprising visual stimuli that either depicted touches or lacked a tactile component. Touch and No-touch stimuli were presented in egocentric or allocentric perspectives. Seeing touches, regardless of the viewing perspective, differently affects visual perception depending on which sensory modality is damaged: In patients with a selective visual deficit, but without any tactile defect, the sight of touch improves the visual impairment; this effect is associated with a lesion to the supramarginal gyrus. In patients with a tactile deficit, but intact visual perception, the sight of touch disrupts visual processing, inducing a visual extinction-like phenomenon. This disruptive effect is associated with damage to the postcentral gyrus. Hence, damage to the somatosensory system can lead to dysfunctional visual processing, and intact somatosensory processing can aid visual perception.
D'Imperio, Daniela; Scandola, Michele; Gobbetto, Valeria; Bulgarelli, Cristina; Salgarello, Matteo; Avesani, Renato; Moro, Valentina
2017-10-01
Cross-modal interactions improve the processing of external stimuli, particularly when an isolated sensory modality is impaired. When information from different modalities is integrated, object recognition is facilitated, probably as a result of bottom-up and top-down processes. The aim of this study was to investigate the potential effects of cross-modal stimulation in a case of simultanagnosia. We report a detailed analysis of clinical symptoms and an 18F-fluorodeoxyglucose (FDG) brain positron emission tomography/computed tomography (PET/CT) study of a patient affected by Balint's syndrome, a rare and disabling visual-spatial disorder following bilateral parieto-occipital lesions. An experiment was conducted to investigate the effects of visual and nonvisual cues on performance in tasks involving the recognition of overlapping pictures. Four modalities of sensory cues were used: visual, tactile, olfactory, and auditory. Data from neuropsychological tests showed the presence of ocular apraxia, optic ataxia, and simultanagnosia. The results of the experiment indicate a positive effect of the cues on the recognition of overlapping pictures, not only in the identification of the congruent valid-cued stimulus (target) but also in the identification of the other, noncued stimuli. All the sensory modalities analyzed (except the auditory one) were efficacious in increasing visual recognition. Cross-modal integration improved the patient's ability to recognize overlapping figures. However, while in the visual unimodal condition both bottom-up (priming, familiarity effect, disengagement of attention) and top-down processes (mental representation and short-term memory, the endogenous orientation of attention) are involved, in cross-modal integration it is mainly semantic representations that activate visual recognition processes. These results are potentially useful for the design of rehabilitation training for attentional and visual-perceptual deficits.
2001-05-01
displays were discussed with the test pilots that we interviewed. The pilots had mixed opinions on tactile and auditory displays. Positive comments...were noted concerning three-dimensional auditory displays, although some stated that the pilot could easily ignore the aural tone. Others complained...recognition technology was not reliable enough and worried about problems with surrounding auditory signals from anti-G straining maneuvers, oxygen
Tactile Radar: experimenting a computer game with visually disabled.
Kastrup, Virgínia; Cassinelli, Alvaro; Quérette, Paulo; Bergstrom, Niklas; Sampaio, Eliana
2017-09-18
Visually disabled people increasingly use computers in everyday life, thanks to novel assistive technologies better tailored to their cognitive functioning. Like sighted people, many are interested in computer games - videogames and audio-games. Tactile-games are beginning to emerge. The Tactile Radar is a device through which a visually disabled person is able to detect distal obstacles. In this study, it was connected to a computer running a tactile-game. The game consists in finding and collecting randomly arranged coins in a virtual room. The study was conducted with nine congenitally blind people of both sexes, aged 20-64 years. Complementary first- and third-person methods were used: the debriefing interview and a quasi-experimental design. The results indicate that the Tactile Radar is suitable for the creation of computer games specifically tailored for visually disabled people. Furthermore, the device seems capable of eliciting a powerful immersive experience. Methodologically speaking, this research contributes to the consolidation and development of complementary first- and third-person methods, which are particularly useful in research with disabled people, including users' evaluation of the Tactile Radar's effectiveness in a virtual reality context. Implications for rehabilitation: Despite the growing interest in virtual games for visually disabled people, they still face barriers to accessing such games. Through the development of assistive technologies such as the Tactile Radar, applied in virtual games, we can create new opportunities for leisure, socialization and education for visually disabled people. The results of our study indicate that the Tactile Radar is well suited to the creation of video games for visually disabled people, providing playful interaction for the players.
Etzi, Roberta; Spence, Charles; Zampini, Massimiliano; Gallace, Alberto
2016-01-01
Over the last decade, scientists working on the topic of multisensory integration, as well as designers and marketers involved in trying to understand consumer behavior, have become increasingly interested in the non-arbitrary associations (e.g., sound symbolism) between different sensorial attributes of the stimuli they work with. Nevertheless, to date, little research in this area has investigated the presence of these crossmodal correspondences in the tactile evaluation of everyday materials. Here, we explore the presence and nature of the associations between tactile sensations, the sound of non-words, and people's emotional states. Samples of cotton, satin, tinfoil, sandpaper, and abrasive sponge, were stroked along the participants' forearm at the speed of 5 cm/s. Participants evaluated the materials along several dimensions, comprising scales anchored by pairs of non-words (e.g., Kiki/Bouba) and adjectives (e.g., ugly/beautiful). The results revealed that smoother textures were associated with non-words made up of round-shaped sounds (e.g., Maluma), whereas rougher textures were more strongly associated with sharp-transient sounds (e.g., Takete). The results also revealed the presence of a number of correspondences between tactile surfaces and adjectives related to visual and auditory attributes. For example, smooth textures were associated with features evoked by words such as 'bright' and 'quiet'; by contrast, the rougher textures were associated with adjectives such as 'dim' and 'loud'. The textures were also found to be associated with a number of emotional labels. Taken together, these results further our understanding of crossmodal correspondences involving the tactile modality and provide interesting insights in the applied field of design and marketing.
Yang, Xuejuan; Xu, Ziliang; Liu, Lin; Liu, Peng; Sun, Jinbo; Jin, Lingmin; Zhu, Yuanqiang; Fei, Ningbo; Qin, Wei
2017-07-28
Cognitive processes involve input from multiple sensory modalities, and clear differences in levels of cognitive function can be observed between individuals. Evidence concerning the biological basis of tactile cognitive variability, however, is limited compared with that for other forms of sensory cognition. Data from auditory and visual cognition research suggest that variations in both genetics and intrinsic brain function might contribute to individual differences in tactile cognitive performance. In the present study, using the tactual performance test (TPT), a widely used neuropsychological assessment tool, we investigated the effects of the brain-derived neurotrophic factor (BDNF) Val66Met polymorphism and resting-state brain functional connectivity (FC) on interindividual variability in TPT performance in healthy, young Chinese adults. Our results showed that BDNF genotype and resting-state FC had significant effects on the variability in TPT performance, together accounting for 32.5% and 19.1% of the variance in TPT total score and Memory subitem score, respectively. Having fewer Met alleles, stronger anticorrelations between the left posterior superior temporal gyrus and somatosensory areas (right postcentral gyrus and right parietal operculum cortex), and greater positive correlation between the left parietal operculum cortex and the left central opercular cortex all corresponded with better TPT performance. Moreover, FC between the left parietal operculum cortex and the left central opercular cortex might mediate the relationship between BDNF genotype and Memory subitem score. These data demonstrate a novel contribution of intrinsic brain function to tactile cognitive capacity and further confirm the genetic basis of tactile cognition. Our findings might also explain, from a new perspective, the interindividual differences in cognitive ability observed in those who are blind and/or deaf. Copyright © 2017. Published by Elsevier Ltd.
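Variance-explained figures of the kind reported above are typically obtained from an ordinary least-squares fit. The following is a minimal sketch in Python using scikit-learn, with synthetic stand-ins for the genotype and FC predictors; it is not the study's pipeline, and all names and values are illustrative assumptions.

```python
# Illustrative sketch only: estimating the share of variance in a
# behavioral score explained by genotype and functional-connectivity
# predictors. Data are synthetic placeholders, not the study's data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 80                                   # hypothetical sample size
met_alleles = rng.integers(0, 3, n)      # BDNF Met allele count (0, 1, 2)
fc_strength = rng.normal(size=(n, 2))    # two hypothetical FC measures
tpt_score = (-0.5 * met_alleles + fc_strength.sum(axis=1)
             + rng.normal(size=n))       # synthetic TPT total score

X = np.column_stack([met_alleles, fc_strength])
model = LinearRegression().fit(X, tpt_score)
print(f"Variance explained (R^2): {model.score(X, tpt_score):.3f}")
```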
Auditory Sensory Substitution is Intuitive and Automatic with Texture Stimuli
Stiles, Noelle R. B.; Shimojo, Shinsuke
2015-01-01
Millions of people are blind worldwide. Sensory substitution (SS) devices (e.g., vOICe) can assist the blind by encoding a video stream into a sound pattern, recruiting visual brain areas for auditory analysis via crossmodal interactions and plasticity. SS devices often require extensive training to attain limited functionality. In contrast to conventional attention-intensive SS training that starts with visual primitives (e.g., geometrical shapes), we argue that sensory substitution can be engaged efficiently by using stimuli (such as textures) associated with intrinsic crossmodal mappings. Crossmodal mappings link images with sounds and tactile patterns. We show that intuitive SS sounds can be matched to the correct images by naive sighted participants just as well as by intensively-trained participants. This result indicates that existing crossmodal interactions and amodal sensory cortical processing may be as important in the interpretation of patterns by SS as crossmodal plasticity (e.g., the strengthening of existing connections or the formation of new ones), especially at the earlier stages of SS usage. An SS training procedure based on crossmodal mappings could both considerably improve participant performance and shorten training times, thereby enabling SS devices to significantly expand blind capabilities. PMID:26490260
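The vOICe-style encoding mentioned above is commonly described as a left-to-right scan in which vertical position maps to pitch and pixel brightness to loudness. The sketch below implements that general mapping as a simplification; it is not the actual vOICe implementation, and the parameter values are assumptions.

```python
# Minimal sketch of a vOICe-style image-to-sound mapping (simplified;
# not the actual vOICe implementation): columns are scanned left to
# right over time, row position sets pitch, brightness sets loudness.
import numpy as np

def image_to_sound(img, duration=1.0, fs=22050, f_lo=200.0, f_hi=5000.0):
    rows, cols = img.shape
    col_len = int(duration * fs / cols)          # samples per image column
    t = np.arange(col_len) / fs
    # Top rows -> higher frequencies, spaced logarithmically.
    freqs = np.logspace(np.log10(f_hi), np.log10(f_lo), rows)
    out = []
    for c in range(cols):
        tones = np.sin(2 * np.pi * freqs[:, None] * t)   # rows x col_len
        out.append(img[:, c] @ tones)                    # brightness-weighted sum
    sound = np.concatenate(out)
    return sound / (np.max(np.abs(sound)) + 1e-12)       # normalize to [-1, 1]

# Example: a bright diagonal line produces a rising sweep.
img = np.eye(32)[::-1]
waveform = image_to_sound(img)
```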
Crossmodal Congruency Benefits of Tactile and Visual Signalling
2013-11-12
...modal information format seemed to produce faster and more accurate performance. The question of learning complex tactile communication signals... We conducted an experiment in which tactile messages were created based on five common military arm and hand signals. We... compared response times and accuracy rates of novice individuals responding to visual and tactile representations of these messages, which were...
Sadato, Norihiro; Okada, Tomohisa; Kubota, Kiyokazu; Yonekura, Yoshiharu
2004-04-08
The occipital cortex of blind subjects is known to be activated during tactile discrimination tasks such as Braille reading. To investigate whether this is due to long-term learning of Braille or to sensory deafferentation, we used fMRI to study tactile discrimination tasks in subjects who had recently lost their sight and never learned Braille. The occipital cortex of the blind subjects without Braille training was activated during the tactile discrimination task, whereas that of control sighted subjects was not. This finding suggests that the activation of the visual cortex of the blind during performance of a tactile discrimination task may be due to sensory deafferentation, wherein a competitive imbalance favors the tactile over the visual modality.
Short-term memory for spatial configurations in the tactile modality: a comparison with vision.
Picard, Delphine; Monnier, Catherine
2009-11-01
This study investigates the role of acquisition constraints on the short-term retention of spatial configurations in the tactile modality in comparison with vision. It tests whether the sequential processing of information inherent to the tactile modality could account for the limited short-term memory span for tactual-spatial information. In addition, this study investigates developmental aspects of short-term memory for tactual- and visual-spatial configurations. A total of 144 child and adult participants were assessed for their memory span in three different conditions: tactual, visual, and visual with a limited field of view. The results showed a lower memory span for tactual-spatial than for visual-spatial configurations, regardless of age. However, the differences in memory span observed between the tactile and visual modalities vanished when the visual processing of information occurred within a limited field. These results provide evidence for an impact of acquisition constraints on the retention of spatial information in the tactile modality in both childhood and adulthood.
Does touch inhibit visual imagery? A case study on acquired blindness.
von Trott Zu Solz, Jana; Paolini, Marco; Silveira, Sarita
2017-06-01
In a single-case study of acquired blindness, differential brain activation patterns for visual imagery of familiar objects with and without tactile exploration as well as of tactilely explored unfamiliar objects were observed. Results provide new insight into retrieval of visual images from episodic memory and point toward a potential tactile inhibition of visual imagery. © 2017 The Institute of Psychology, Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.
Tactile mental body parts representation in obesity.
Scarpina, Federica; Castelnuovo, Gianluca; Molinari, Enrico
2014-12-30
Obese people's distortions in visually-based mental body-part representations have been reported in previous studies, but other sensory modalities have largely been neglected. In the present study, we investigated possible differences in tactilely-based body-part representation between an obese and a healthy-weight group; additionally, we explored the possible relationship between the tactilely- and the visually-based body representations. Participants were asked to estimate the distance between two tactile stimuli administered simultaneously on the arm or on the abdomen, in the absence of visual input. The visually-based body-part representation was investigated with a visual imagery method in which subjects were instructed to compare the horizontal extension of pairs of body parts. According to the results, the obese participants overestimated the tactilely-perceived distances more than the healthy-weight group did when the arm, but not the abdomen, was stimulated. Moreover, they were less accurate than the healthy-weight group when estimating horizontal distances relative to their bodies, confirming an inaccurate visually-based mental body representation. Our results imply that the body representation disturbance in obese people is not limited to the visual mental domain but extends to tactilely perceived distances. The inaccuracy was not a generalized tendency but was body-part related. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
An invisible touch: Body-related multisensory conflicts modulate visual consciousness.
Salomon, Roy; Galli, Giulia; Łukowska, Marta; Faivre, Nathan; Ruiz, Javier Bello; Blanke, Olaf
2016-07-29
The majority of scientific studies on consciousness have focused on vision, exploring the cognitive and neural mechanisms of conscious access to visual stimuli. In parallel, studies on bodily consciousness have revealed that bodily (i.e. tactile, proprioceptive, visceral, vestibular) signals are the basis for the sense of self. However, the role of bodily signals in the formation of visual consciousness is not well understood. Here we investigated how body-related visuo-tactile stimulation modulates conscious access to visual stimuli. We used a robotic platform to apply controlled tactile stimulation to the participants' back while they viewed a dot moving either in synchrony or asynchrony with the touch on their back. Critically, the dot was rendered invisible through continuous flash suppression. Manipulating the visual context by presenting the dot moving on either a body form, or a non-bodily object we show that: (i) conflict induced by synchronous visuo-tactile stimulation in a body context is associated with a delayed conscious access compared to asynchronous visuo-tactile stimulation, (ii) this effect occurs only in the context of a visual body form, and (iii) is not due to detection or response biases. The results indicate that body-related visuo-tactile conflicts impact visual consciousness by facilitating access of non-conflicting visual information to awareness, and that these are sensitive to the visual context in which they are presented, highlighting the interplay between bodily signals and visual experience. Copyright © 2015 Elsevier Ltd. All rights reserved.
Short-term visual deprivation, tactile acuity, and haptic solid shape discrimination.
Crabtree, Charles E; Norman, J Farley
2014-01-01
Previous psychophysical studies have reported conflicting results concerning the effects of short-term visual deprivation upon tactile acuity. Some studies have found that 45 to 90 minutes of total light deprivation produce significant improvements in participants' tactile acuity as measured with a grating orientation discrimination task. In contrast, a single 2011 study found no such improvement while attempting to replicate these earlier findings. A primary goal of the current experiment was to resolve this discrepancy in the literature by evaluating the effects of a 90-minute period of total light deprivation upon tactile grating orientation discrimination. We also evaluated the potential effect of short-term deprivation upon haptic 3-D shape discrimination using a set of naturally-shaped solid objects. According to previous research, short-term deprivation enhances performance in a tactile 2-D shape discrimination task - perhaps a similar improvement also occurs for haptic 3-D shape discrimination. The results of the current investigation demonstrate that not only does short-term visual deprivation not enhance tactile acuity, it additionally has no effect upon haptic 3-D shape discrimination. While visual deprivation had no effect in our study, there was a significant effect of experience and learning for the grating orientation task - the participants' tactile acuity improved over time, independent of whether they had, or had not, experienced visual deprivation.
Meaidi, Amani; Jennum, Poul; Ptito, Maurice; Kupers, Ron
2014-05-01
We aimed to assess dream content in groups of congenitally blind (CB), late blind (LB), and age- and sex-matched sighted control (SC) participants. We conducted an observational study of 11 CB, 14 LB, and 25 SC participants and collected dream reports over a 4-week period. Every morning participants filled in a questionnaire related to the sensory construction of the dream, its emotional and thematic content, and the possible occurrence of nightmares. We also assessed participants' visual imagery abilities during waking cognition, sleep quality, and depression and anxiety levels. All blind participants had fewer visual dream impressions compared to SC participants. In LB participants, duration of blindness was negatively correlated with duration, clarity, and color content of visual dream impressions. CB participants reported more auditory, tactile, gustatory, and olfactory dream components compared to SC participants. In contrast, LB participants only reported more tactile dream impressions. Blind and SC participants did not differ with respect to emotional and thematic dream content. However, CB participants reported more aggressive interactions and more nightmares compared to the other two groups. Our data show that blindness considerably alters the sensory composition of dreams and that onset and duration of blindness play an important role. The increased occurrence of nightmares in CB participants may be related to a higher number of threatening experiences in daily life in this group. Copyright © 2014 Elsevier B.V. All rights reserved.
Retinotopically specific reorganization of visual cortex for tactile pattern recognition
Cheung, Sing-Hang; Fang, Fang; He, Sheng; Legge, Gordon E.
2009-01-01
Although previous studies have shown that Braille reading and other tactile-discrimination tasks activate the visual cortex of blind and sighted people [1–5], it is not known whether this kind of cross-modal reorganization is influenced by retinotopic organization. We have addressed this question by studying S, a visually impaired adult with the rare ability to read print visually and Braille by touch. S had normal visual development until age six years, and thereafter severe acuity reduction due to corneal opacification, but no evidence of visual-field loss. Functional magnetic resonance imaging (fMRI) revealed that, in S’s early visual areas, tactile information processing activated what would be the foveal representation for normally-sighted individuals, and visual information processing activated what would be the peripheral representation. Control experiments showed that this activation pattern was not due to visual imagery. S’s high-level visual areas which correspond to shape- and object-selective areas in normally-sighted individuals were activated by both visual and tactile stimuli. The retinotopically specific reorganization in early visual areas suggests an efficient redistribution of neural resources in the visual cortex. PMID:19361999
Cross-modal extinction in a boy with severely autistic behaviour and high verbal intelligence.
Bonneh, Yoram S; Belmonte, Matthew K; Pei, Francesca; Iversen, Portia E; Kenet, Tal; Akshoomoff, Natacha; Adini, Yael; Simon, Helen J; Moore, Christopher I; Houde, John F; Merzenich, Michael M
2008-07-01
Anecdotal reports from individuals with autism suggest a loss of awareness to stimuli from one modality in the presence of stimuli from another. Here we document such a case in a detailed study of A.M., a 13-year-old boy with autism in whom significant autistic behaviours are combined with an uneven IQ profile of superior verbal and low performance abilities. Although A.M.'s speech is often unintelligible, and his behaviour is dominated by motor stereotypies and impulsivity, he can communicate by typing or pointing independently within a letter board. A series of experiments using simple and highly salient visual, auditory, and tactile stimuli demonstrated a hierarchy of cross-modal extinction, in which auditory information extinguished other modalities at various levels of processing. A.M. also showed deficits in shifting and sustaining attention. These results provide evidence for monochannel perception in autism and suggest a general pattern of winner-takes-all processing in which a stronger stimulus-driven representation dominates behaviour, extinguishing weaker representations.
Ortiz, Tomás; Poch, Joaquín; Santos, Juan M.; Requena, Carmen; Martínez, Ana M.; Ortiz-Terán, Laura; Turrero, Agustín; Barcia, Juan; Nogales, Ramón; Calvo, Agustín; Martínez, José M.; Córdoba, José L.; Pascual-Leone, Alvaro
2011-01-01
Over three months of intensive training with a tactile stimulation device, 18 blind and 10 blindfolded seeing subjects improved in their ability to identify geometric figures by touch. Seven blind subjects spontaneously reported ‘visual qualia’, the subjective sensation of seeing flashes of light congruent with tactile stimuli. In the latter subjects tactile stimulation evoked activation of occipital cortex on electroencephalography (EEG). None of the blind subjects who failed to experience visual qualia, despite identical tactile stimulation training, showed EEG recruitment of occipital cortex. None of the blindfolded seeing humans reported visual-like sensations during tactile stimulation. These findings support the notion that the conscious experience of seeing is linked to the activation of occipital brain regions in people with blindness. Moreover, the findings indicate that provision of visual information can be achieved through non-visual sensory modalities which may help to minimize the disability of blind individuals, affording them some degree of object recognition and navigation aid. PMID:21853098
Braille and Tactile Graphics: Youths with Visual Impairments Share Their Experiences
ERIC Educational Resources Information Center
Rosenblum, L. Penny; Herzberg, Tina S.
2015-01-01
Introduction: Data were collected from youths with visual impairment about their experiences with tactile graphics and braille materials used in mathematics and science classes. Methods: Youths answered questions and explored four tactile graphics made using different production methods. They located specific information on each graphic and shared…
The frequency and severity of extinction after stroke affecting different vascular territories.
Chechlacz, Magdalena; Rotshtein, Pia; Demeyere, Nele; Bickerton, Wai-Ling; Humphreys, Glyn W
2014-02-01
We examined the frequency and severity of visual versus tactile extinction based on data from a large group of sub-acute patients (n=454) with strokes affecting different vascular territories. After right hemisphere damage visual and tactile extinction were equally common. However, after left hemisphere damage tactile extinction was more common than visual. The frequency of extinction was significantly higher in patients with right compared to left hemisphere damage in both visual and tactile modalities but this held only for strokes affecting the MCA and PCA territories and not for strokes affecting other vascular territories. Furthermore, the severity of extinction did not differ as a function of either the stimulus modality (visual versus tactile), the affected hemisphere (left versus right) or the stroke territory (MCA, PCA or other vascular territories). We conclude that the frequency but not severity of extinction in both modalities relates to the side of damage (i.e. left versus right hemisphere) and the vascular territories affected by the stroke, and that left hemisphere dominance for motor control may link to the greater incidence of tactile than visual extinction after left hemisphere stroke. We discuss the implications of our findings for understanding hemispheric lateralization within visuospatial attention networks. Copyright © 2014 Elsevier Ltd. All rights reserved.
Tactile and bone-conduction auditory brain computer interface for vision and hearing impaired users.
Rutkowski, Tomasz M; Mori, Hiromu
2015-04-15
The paper presents a report on a recently developed BCI alternative for users suffering from impaired vision (lack of focus or eye-movements) or from the so-called "ear-blocking-syndrome" (limited hearing). We report on our recent studies of the extent to which vibrotactile stimuli delivered to the head of a user can serve as a platform for a brain computer interface (BCI) paradigm. In the proposed tactile and bone-conduction auditory BCI, multiple novel head positions are used to evoke combined somatosensory and auditory (via the bone-conduction effect) P300 brain responses, defining a multimodal tactile and bone-conduction auditory brain computer interface (tbcaBCI). To further remove EEG interference and to improve P300 response classification, the synchrosqueezing transform (SST) is applied. SST outperforms classical time-frequency analysis methods for non-linear and non-stationary signals such as EEG. The proposed method is also computationally more efficient than empirical mode decomposition. The SST filtering allows for online EEG preprocessing, which is essential in the case of BCI. Experimental results with healthy BCI-naive users performing online tbcaBCI validate the paradigm, while the feasibility of the concept is illuminated through information-transfer-rate case studies. We present a comparison of the proposed SST-based preprocessing method, combined with a logistic regression (LR) classifier, against classical preprocessing and LDA-based classification BCI techniques. The proposed tbcaBCI paradigm, together with data-driven preprocessing methods, is a step forward in robust BCI applications research. Copyright © 2014 Elsevier B.V. All rights reserved.
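A simplified sketch of such a preprocessing-plus-classification pipeline is given below. A plain band-pass filter stands in for the SST step (which would require a dedicated implementation), the epochs are synthetic, and nothing here reproduces the authors' actual code.

```python
# Simplified sketch of a P300 classification pipeline in the spirit of
# the paradigm above: band-pass filtering stands in for SST
# preprocessing, and logistic regression classifies single-trial
# epochs. All data are synthetic placeholders.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

fs = 256                                     # sampling rate (Hz), assumed
rng = np.random.default_rng(1)
epochs = rng.normal(size=(200, 8, fs))       # trials x channels x samples
labels = rng.integers(0, 2, 200)             # 1 = target (P300 expected)

b, a = butter(4, [1.0, 12.0], btype="band", fs=fs)
filtered = filtfilt(b, a, epochs, axis=-1)   # zero-phase band-pass, 1-12 Hz

# Decimate and flatten each epoch into a feature vector.
features = filtered[:, :, ::8].reshape(len(epochs), -1)
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, features, labels, cv=5)
print(f"Mean CV accuracy: {scores.mean():.2f}")   # ~0.5 on random data
```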
A new method for text detection and recognition in indoor scene for assisting blind people
NASA Astrophysics Data System (ADS)
Jabnoun, Hanen; Benzarti, Faouzi; Amiri, Hamid
2017-03-01
Developing assistive systems for handicapped persons has become a challenging task in research projects. Recently, a variety of tools have been designed to help visually impaired or blind people, serving as visual substitution systems. The majority of these tools are based on the conversion of input information into auditory or tactile sensory information. Furthermore, object recognition and text retrieval are exploited in visual substitution systems. Text detection and recognition provide a description of the surrounding environment, so that the blind person can readily recognize the scene. In this work, we introduce a method for detecting and recognizing text in indoor scenes. The process consists of detecting the regions of interest that should contain text using connected component analysis. Then, the text is recognized by means of image correlation. This component of an assistive system for blind persons should be simple, so that users are able to obtain the most informative feedback within the shortest time.
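The two-stage scheme described above can be sketched with OpenCV: connected-component analysis proposes text-like regions, and normalized cross-correlation (template matching) compares each region against known glyph templates. File names, size filters, and thresholds below are illustrative assumptions, not the authors' settings.

```python
# Sketch of the two-stage scheme: connected components propose text
# regions; template matching (normalized cross-correlation) scores
# each region against a glyph template. Names/thresholds are assumed.
import cv2
import numpy as np

img = cv2.imread("indoor_scene.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

n, _, stats, _ = cv2.connectedComponentsWithStats(binary)
candidates = []
for i in range(1, n):                            # label 0 is the background
    x, y, w, h, area = stats[i]
    if 50 < area < 5000 and 0.2 < w / h < 5.0:   # crude text-like filters
        candidates.append(img[y:y + h, x:x + w])

template = cv2.imread("glyph_A.png", cv2.IMREAD_GRAYSCALE)  # hypothetical glyph
for region in candidates:
    patch = cv2.resize(region, (template.shape[1], template.shape[0]))
    score = cv2.matchTemplate(patch, template, cv2.TM_CCOEFF_NORMED)[0, 0]
    if score > 0.7:                              # correlation threshold (assumed)
        print("glyph match, score", score)
```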
ERIC Educational Resources Information Center
Nober, E. Harris
The study investigated whether low frequency air and bone thresholds elicited at high intensity levels from deaf children with a sensory-neural diagnosis reflect valid auditory sensitivity or are mediated through cutaneous-tactile receptors. Subjects were five totally deaf (mean age 17.0) yielding vibrotactile thresholds but with no air and bone…
NASA Astrophysics Data System (ADS)
Ren, Chunye; Parel, Jean-Marie A.
1993-06-01
Scientists have searched every discipline to find effective methods of treating blindness, such as aids based on the conversion of the optical image to auditory or tactile stimuli. However, the limited performance of such equipment and difficulties in training patients have seriously hampered practical applications. Great insight came from the discovery of Foerster (1929) and Krause & Schum (1931), who found that electrical stimulation of the visual cortex evokes the perception of a small spot of light, called a 'phosphene', in both blind and sighted subjects. According to this principle, it is possible to elicit artificial vision by stimulating the visual nervous system with electrodes, thereby developing a prosthesis for the blind that might be of value in reading and mobility. In fact, a number of investigators have already exploited this phenomenon to produce a functional visual prosthesis, bringing about great advances in this area.
Sensory Temporal Processing in Adults with Early Hearing Loss
ERIC Educational Resources Information Center
Heming, Joanne E.; Brown, Lenora N.
2005-01-01
This study examined tactile and visual temporal processing in adults with early loss of hearing. The tactile task consisted of punctate stimulations that were delivered to one or both hands by a mechanical tactile stimulator. Pairs of light emitting diodes were presented on a display for visual stimulation. Responses consisted of YES or NO…
Sklar, A E; Sarter, N B
1999-12-01
Observed breakdowns in human-machine communication can be explained, in part, by the nature of current automation feedback, which relies heavily on focal visual attention. Such feedback is not well suited for capturing attention in case of unexpected changes and events or for supporting the parallel processing of large amounts of data in complex domains. As suggested by multiple-resource theory, one possible solution to this problem is to distribute information across various sensory modalities. A simulator study was conducted to compare the effectiveness of visual, tactile, and redundant visual and tactile cues for indicating unexpected changes in the status of an automated cockpit system. Both tactile conditions resulted in higher detection rates for, and faster response times to, uncommanded mode transitions. Tactile feedback did not interfere with, nor was its effectiveness affected by, the performance of concurrent visual tasks. The observed improvement in task-sharing performance indicates that the introduction of tactile feedback is a promising avenue toward better supporting human-machine communication in event-driven, information-rich domains.
Visual enhancing of tactile perception in the posterior parietal cortex.
Ro, Tony; Wallace, Ruth; Hagedorn, Judith; Farnè, Alessandro; Pienkos, Elizabeth
2004-01-01
The visual modality typically dominates over our other senses. Here we show that after inducing an extreme conflict in the left hand between vision of touch (present) and the feeling of touch (absent), sensitivity to touch increases for several minutes after the conflict. Transcranial magnetic stimulation of the posterior parietal cortex after this conflict not only eliminated the enduring visual enhancement of touch, but also impaired normal tactile perception. This latter finding demonstrates a direct role of the parietal lobe in modulating tactile perception as a result of the conflict between these senses. These results provide evidence for visual-to-tactile perceptual modulation and demonstrate effects of illusory vision of touch on touch perception through a long-lasting modulatory process in the posterior parietal cortex.
Tactile Cueing as a Gravitational Substitute for Spatial Navigation During Parabolic Flight
NASA Technical Reports Server (NTRS)
Montgomery, K. L.; Beaton, K. H.; Barba, J. M.; Cackler, J. M.; Son, J. H.; Horsfield, S. P.; Wood, S. J.
2010-01-01
INTRODUCTION: Spatial navigation requires an accurate awareness of orientation in your environment. The purpose of this experiment was to examine how spatial awareness was impaired with changing gravitational cues during parabolic flight, and the extent to which vibrotactile feedback of orientation could be used to help improve performance. METHODS: Six subjects were restrained in a chair tilted relative to the plane floor, and placed at random positions during the start of the microgravity phase. Subjects reported their orientation using verbal reports, and used a hand-held controller to point to a desired target location presented using a virtual reality video mask. This task was repeated with and without constant tactile cueing of "down" direction using a belt of 8 tactors placed around the mid-torso. Control measures were obtained during ground testing using both upright and tilted conditions. RESULTS: Perceptual estimates of orientation and pointing accuracy were impaired during microgravity or during rotation about an upright axis in 1g. The amount of error was proportional to the amount of chair displacement. Perceptual errors were reduced during movement about a tilted axis on earth. CONCLUSIONS: Reduced perceptual errors during tilts in 1g indicate the importance of otolith and somatosensory cues for maintaining spatial awareness. Tactile cueing may improve navigation in operational environments or clinical populations, providing a non-visual non-auditory feedback of orientation or desired direction heading.
Tajadura-Jiménez, Ana; Tsakiris, Manos; Marquardt, Torsten; Bianchi-Berthouze, Nadia
2015-01-01
Auditory feedback accompanies almost all our actions, but its contribution to body-representation is understudied. Recently it has been shown that the auditory distance of action sounds recalibrates perceived tactile distances on one’s arm, suggesting that action sounds can change the mental representation of arm length. However, the question remains open of what factors play a role in this recalibration. In this study we investigate two of these factors, kinaesthesia, and sense of agency. Across two experiments, we asked participants to tap with their arm on a surface while extending their arm. We manipulated the tapping sounds to originate at double the distance to the tapping locations, as well as their synchrony to the action, which is known to affect feelings of agency over the sounds. Kinaesthetic cues were manipulated by having additional conditions in which participants did not displace their arm but kept tapping either close (Experiment 1) or far (Experiment 2) from their body torso. Results show that both the feelings of agency over the action sounds and kinaesthetic cues signaling arm displacement when displacement of the sound source occurs are necessary to observe changes in perceived tactile distance on the arm. In particular, these cues resulted in the perceived tactile distances on the arm being felt smaller, as compared to distances on a reference location. Moreover, our results provide the first evidence of consciously perceived changes in arm-representation evoked by action sounds and suggest that the observed changes in perceived tactile distance relate to experienced arm elongation. We discuss the observed effects in the context of forward internal models of sensorimotor integration. Our results add to these models by showing that predictions related to action sounds must fit with kinaesthetic cues in order for auditory inputs to change body-representation. PMID:26074843
Attention affects visual perceptual processing near the hand.
Cosman, Joshua D; Vecera, Shaun P
2010-09-01
Specialized, bimodal neural systems integrate visual and tactile information in the space near the hand. Here, we show that visuo-tactile representations allow attention to influence early perceptual processing, namely, figure-ground assignment. Regions that were reached toward were more likely than other regions to be assigned as foreground figures, and hand position competed with image-based information to bias figure-ground assignment. Our findings suggest that hand position allows attention to influence visual perceptual processing and that visual processes typically viewed as unimodal can be influenced by bimodal visuo-tactile representations.
ERIC Educational Resources Information Center
Hauptman, Anna R.
Two experiments involving 42 students from the Model Secondary School for the Deaf investigated both the visual and tactile components in the processing of spatial information. Test measures used were the Figures Rotations Test, Group Embedded Figures Test, and Tactile Rotations Test. The study suggested that spatial reasoning is a determining…
ERIC Educational Resources Information Center
Ryles, Ruby; Bell, Edward
2009-01-01
Seventy-three children with visual impairments aged 2-10 and their parents participated in a project that examined the children's interest in and exploration of tactile graphics. The parents reported that the children's interest in and conceptual understanding of the project's tactile workbook were high and that the children explored the…
ERIC Educational Resources Information Center
Teske, Jolene K.; Gray, Phyllis; Kuhn, Mason A.; Clausen, Courtney K.; Smith, Latisha L.; Alsubia, Sukainah A.; Ghayoorad, Maryam; Rule, Audrey C.; Schneider, Jean Suchsland
2014-01-01
Gifted students with visual impairments are twice exceptional learners and may not evidence their advanced science aptitudes without appropriate accommodations for learning science. However, effective tactile science teaching materials may be easily made. Recent research has shown that when tactile materials are used with "all" students…
Gautam, Anjali; Bhambal, Ajay; Moghe, Swapnil
2018-01-01
Children with special needs face unique challenges in day-to-day practice. They are dependent on those close to them for everything. To improve oral hygiene in visually impaired children, dedicated training and education are required. Braille is an important language for reading and writing for the visually impaired. It helps them understand and visualize the world via touch. Audio aids are used to impart health education to the visually impaired. Tactile models help them perceive things they cannot visualize and hence are an important learning tool. This study aimed to assess the improvement in oral hygiene produced by audio aids, Braille, and tactile models in visually impaired children aged 6-16 years in Bhopal city. This was a prospective study. Sixty visually impaired children aged 6-16 years were selected and randomly divided into three groups (20 children each). Group A: audio aids + Braille, Group B: audio aids + tactile models, and Group C: audio aids + Braille + tactile models. Instructions were given for maintaining good oral hygiene, and brushing techniques were explained to all children. After 3 months, oral hygiene status was recorded and compared using plaque and gingival indices. The ANOVA test was used. The study showed a statistically significant decrease in the mean plaque and gingival scores at all time intervals in each group compared to baseline. The study indicates that the combination of audio aids, Braille, and tactile models is an effective way to provide oral health education and improve the oral health status of visually impaired children.
Infrared radiation from hot cones on cool conifers attracts seed-feeding insects
Takács, Stephen; Bottomley, Hannah; Andreller, Iisak; Zaradnik, Tracy; Schwarz, Joseph; Bennett, Robb; Strong, Ward; Gries, Gerhard
2008-01-01
Foraging animals use diverse cues to locate resources. Common foraging cues have visual, auditory, olfactory, tactile or gustatory characteristics. Here, we show a foraging herbivore using infrared (IR) radiation from living plants as a host-finding cue. We present data revealing that (i) conifer cones are warmer and emit more near-, mid- and long-range IR radiation than needles, (ii) cone-feeding western conifer seed bugs, Leptoglossus occidentalis (Hemiptera: Coreidae), possess IR receptive organs and orient towards experimental IR cues, and (iii) occlusion of the insects' IR receptors impairs IR perception. The conifers' cost of attracting cone-feeding insects may be offset by occasional mast seeding resulting in cone crops too large to be effectively exploited by herbivores. PMID:18945664
Behavioral, Modeling, and Electrophysiological Evidence for Supramodality in Human Metacognition.
Faivre, Nathan; Filevich, Elisa; Solovey, Guillermo; Kühn, Simone; Blanke, Olaf
2018-01-10
Human metacognition, or the capacity to introspect on one's own mental states, has been mostly characterized through confidence reports in visual tasks. A pressing question is to what extent results from visual studies generalize to other domains. Answering this question would determine whether metacognition operates through shared, supramodal mechanisms or through idiosyncratic, modality-specific mechanisms. Here, we report three new lines of evidence for decisional and postdecisional mechanisms arguing for the supramodality of metacognition. First, metacognitive efficiency correlated among auditory, tactile, visual, and audiovisual tasks. Second, confidence in an audiovisual task was best modeled using supramodal formats based on integrated representations of auditory and visual signals. Third, confidence in correct responses involved similar electrophysiological markers for visual and audiovisual tasks that are associated with motor preparation preceding the perceptual judgment. We conclude that the supramodality of metacognition relies on supramodal confidence estimates and decisional signals that are shared across sensory modalities. SIGNIFICANCE STATEMENT Metacognitive monitoring is the capacity to access, report, and regulate one's own mental states. In perception, this allows rating our confidence in what we have seen, heard, or touched. Although metacognitive monitoring can operate on different cognitive domains, it remains unknown whether it involves a single supramodal mechanism common to multiple cognitive domains or modality-specific mechanisms idiosyncratic to each domain. Here, we bring evidence in favor of the supramodality hypothesis by showing that participants with high metacognitive performance in one modality are likely to perform well in other modalities. Based on computational modeling and electrophysiology, we propose that supramodality can be explained by the existence of supramodal confidence estimates and by the influence of decisional cues on confidence estimates. Copyright © 2018 the authors 0270-6474/18/380263-15$15.00/0.
Supramodal parametric working memory processing in humans.
Spitzer, Bernhard; Blankenburg, Felix
2012-03-07
Previous studies of delayed-match-to-sample (DMTS) frequency discrimination in animals and humans have succeeded in delineating the neural signature of frequency processing in somatosensory working memory (WM). During retention of vibrotactile frequencies, stimulus-dependent single-cell and population activity in prefrontal cortex was found to reflect the task-relevant memory content, whereas increases in occipital alpha activity signaled the disengagement of areas not relevant for the tactile task. Here, we recorded EEG from human participants to determine the extent to which these mechanisms can be generalized to frequency retention in the visual and auditory domains. Subjects performed analogous variants of a DMTS frequency discrimination task, with the frequency information presented either visually, auditorily, or by vibrotactile stimulation. Examining oscillatory EEG activity during frequency retention, we found characteristic topographical distributions of alpha power over visual, auditory, and somatosensory cortices, indicating systematic patterns of inhibition and engagement of early sensory areas, depending on stimulus modality. The task-relevant frequency information, in contrast, was found to be represented in right prefrontal cortex, independent of presentation mode. In each of the three modality conditions, parametric modulations of prefrontal upper beta activity (20-30 Hz) emerged, in a very similar manner as recently found in vibrotactile tasks. Together, the findings corroborate a view of parametric WM as supramodal internal scaling of abstract quantity information and suggest strong relevance of previous evidence from vibrotactile work for a more general framework of quantity processing in human working memory.
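Band-power analyses of the kind described above are commonly computed with a Welch spectral estimate. The sketch below computes mean alpha (8-12 Hz) and upper-beta (20-30 Hz) power per channel on synthetic EEG; it is a generic illustration, not the authors' pipeline, and the sampling rate and channel count are assumptions.

```python
# Illustrative band-power computation: Welch spectra per channel, then
# mean power in the alpha (8-12 Hz) and upper-beta (20-30 Hz) bands.
# EEG data here are synthetic.
import numpy as np
from scipy.signal import welch

fs = 500                                    # sampling rate (Hz), assumed
eeg = np.random.default_rng(2).normal(size=(64, 10 * fs))  # channels x samples

freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs, axis=-1)

def band_power(psd, freqs, lo, hi):
    """Mean spectral power per channel within [lo, hi] Hz."""
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[:, mask].mean(axis=-1)

alpha = band_power(psd, freqs, 8, 12)       # sensory disengagement marker
upper_beta = band_power(psd, freqs, 20, 30) # prefrontal parametric WM marker
print(alpha.shape, upper_beta.shape)        # one value per channel
```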
Multiple foci of spatial attention in multimodal working memory.
Katus, Tobias; Eimer, Martin
2016-11-15
The maintenance of sensory information in working memory (WM) is mediated by the attentional activation of stimulus representations that are stored in perceptual brain regions. Using event-related potentials (ERPs), we measured tactile and visual contralateral delay activity (tCDA/CDA components) in a bimodal WM task to concurrently track the attention-based maintenance of information stored in anatomically segregated (somatosensory and visual) brain areas. Participants received tactile and visual sample stimuli on both sides, and in different blocks, memorized these samples on the same side or on opposite sides. After a retention delay, memory was unpredictably tested for touch or vision. In the same side blocks, tCDA and CDA components simultaneously emerged over the same hemisphere, contralateral to the memorized tactile/visual sample set. In opposite side blocks, these two components emerged over different hemispheres, but had the same sizes and onset latencies as in the same side condition. Our results reveal distinct foci of tactile and visual spatial attention that were concurrently maintained on task-relevant stimulus representations in WM. The independence of spatially-specific biasing mechanisms for tactile and visual WM content suggests that multimodal information is stored in distributed perceptual brain areas that are activated through modality-specific processes that can operate simultaneously and largely independently of each other. Copyright © 2016 Elsevier Inc. All rights reserved.
Exploring the Invisible Universe: A Tactile and Braille Exhibit of Astronomical Images
NASA Astrophysics Data System (ADS)
Arcand, K. K.; Watzke, M.; de Pree, C.
2010-06-01
A tactile/Braille exhibit for the visually impaired community in the USA was launched in July 2009. The exhibit is part of the global From Earth to the Universe (FETTU) project, a Cornerstone of the International Year of Astronomy 2009. The science content of the travelling tactile/Braille exhibit includes explanations of our Sun, Eta Carinae, the Crab Nebula, the Whirlpool Galaxy and the electromagnetic spectrum, and was adapted from the tactile/Braille book Touch the Invisible Sky. We present some of the early observations and findings on the tactile/Braille FETTU exhibit. The new exhibit opens a wider door to experiencing and understanding astronomy for the underserved visually impaired population.
Ku, Yixuan; Zhao, Di; Bodner, Mark; Zhou, Yong-Di
2015-08-01
In the present study, causal roles of both the primary somatosensory cortex (SI) and the posterior parietal cortex (PPC) were investigated in a tactile unimodal working memory (WM) task. Individual magnetic resonance imaging-based single-pulse transcranial magnetic stimulation (spTMS) was applied, respectively, to the left SI (ipsilateral to tactile stimuli), right SI (contralateral to tactile stimuli) and right PPC (contralateral to tactile stimuli), while human participants were performing a tactile-tactile unimodal delayed matching-to-sample task. The time points of spTMS were 300, 600 and 900 ms after the onset of the tactile sample stimulus (duration: 200 ms). Compared with ipsilateral SI, application of spTMS over either contralateral SI or contralateral PPC at those time points significantly impaired the accuracy of task performance. Meanwhile, the deterioration in accuracy did not vary with the stimulating time points. Together, these results indicate that the tactile information is processed cooperatively by SI and PPC in the same hemisphere, starting from the early delay of the tactile unimodal WM task. This pattern of processing of tactile information is different from the pattern in tactile-visual cross-modal WM. In a tactile-visual cross-modal WM task, SI and PPC contribute to the processing sequentially, suggesting a process of sensory information transfer during the early delay between modalities. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Acoustic duetting in Drosophila virilis relies on the integration of auditory and tactile signals
LaRue, Kelly M; Clemens, Jan; Berman, Gordon J; Murthy, Mala
2015-01-01
Many animal species, including insects, are capable of acoustic duetting, a complex social behavior in which males and females tightly control the rate and timing of their courtship song syllables relative to each other. The mechanisms underlying duetting remain largely unknown across model systems. Most studies of duetting focus exclusively on acoustic interactions, but the use of multisensory cues should aid in coordinating behavior between individuals. To test this hypothesis, we develop Drosophila virilis as a new model for studies of duetting. By combining sensory manipulations, quantitative behavioral assays, and statistical modeling, we show that virilis females combine precisely timed auditory and tactile cues to drive song production and duetting. Tactile cues delivered to the abdomen and genitalia play the larger role in females, as even headless females continue to coordinate song production with courting males. These data, therefore, reveal a novel, non-acoustic, mechanism for acoustic duetting. Finally, our results indicate that female-duetting circuits are not sexually differentiated, as males can also produce ‘female-like’ duets in a context-dependent manner. DOI: http://dx.doi.org/10.7554/eLife.07277.001 PMID:26046297
Acoustic-tactile rendering of visual information
NASA Astrophysics Data System (ADS)
Silva, Pubudu Madhawa; Pappas, Thrasyvoulos N.; Atkins, Joshua; West, James E.; Hartmann, William M.
2012-03-01
In previous work, we have proposed a dynamic, interactive system for conveying visual information via hearing and touch. The system is implemented with a touch screen that allows the user to interrogate a two-dimensional (2-D) object layout by active finger scanning while listening to spatialized auditory feedback. Sound is used as the primary source of information for object localization and identification, while touch is used both for pointing and for kinesthetic feedback. Our previous work considered shape and size perception of simple objects via hearing and touch. The focus of this paper is on the perception of a 2-D layout of simple objects with identical size and shape. We consider the selection and rendition of sounds for object identification and localization. We rely on the head-related transfer function for rendering sound directionality, and consider variations of sound intensity and tempo as two alternative approaches for rendering proximity. Subjective experiments with visually-blocked subjects are used to evaluate the effectiveness of the proposed approaches. Our results indicate that intensity outperforms tempo as a proximity cue, and that the overall system for conveying a 2-D layout is quite promising.
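The two candidate proximity mappings compared above, intensity versus tempo, can be sketched as simple functions of finger-to-object distance. All constants below are illustrative assumptions, not the system's calibration.

```python
# Sketch of two proximity renderings: sound intensity vs. beep tempo as
# a function of finger-to-object distance. Constants are assumptions.
import numpy as np

def intensity_gain(distance, d_max=1.0):
    """Louder as the finger nears the object (linear ramp, clipped)."""
    return np.clip(1.0 - distance / d_max, 0.0, 1.0)

def beep_interval(distance, fastest=0.1, slowest=1.0, d_max=1.0):
    """Shorter inter-beep interval (faster tempo) as the finger nears."""
    frac = np.clip(distance / d_max, 0.0, 1.0)
    return fastest + frac * (slowest - fastest)

for d in (0.0, 0.25, 0.5, 1.0):
    print(f"d={d:.2f}: gain={intensity_gain(d):.2f}, "
          f"interval={beep_interval(d):.2f}s")
```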
Neural correlates of tactile perception during pre-, peri-, and post-movement.
Juravle, Georgiana; Heed, Tobias; Spence, Charles; Röder, Brigitte
2016-05-01
Tactile information is differentially processed over the various phases of goal-directed movements. Here, event-related potentials (ERPs) were used to investigate the neural correlates of tactile and visual information processing during movement. Participants performed goal-directed reaches for an object placed centrally on the table in front of them. Tactile and visual stimulation (100 ms) was presented in separate trials during the different phases of the movement (i.e. preparation, execution, and post-movement). These stimuli were independently delivered to either the moving or resting hand. In a control condition, the participants only performed the movement, while omission (i.e. movement-only) ERPs were recorded. Participants were instructed to ignore the presence or absence of any sensory events and to concentrate solely on the execution of the movement. Enhanced ERPs were observed 80-200 ms after tactile stimulation, as well as 100-250 ms after visual stimulation: These modulations were greatest during the execution of the goal-directed movement, and they were effector based (i.e. significantly more negative for stimuli presented to the moving hand). Furthermore, ERPs revealed enhanced sensory processing during goal-directed movements for visual stimuli as well. Such enhanced processing of both tactile and visual information during the execution phase suggests that incoming sensory information is continuously monitored for a potential adjustment of the current motor plan. Furthermore, the results reported here also highlight a tight coupling between spatial attention and the execution of motor actions.
Porcu, Emanuele; Keitel, Christian; Müller, Matthias M
2013-11-27
We investigated effects of inter-modal attention on concurrent visual and tactile stimulus processing by means of stimulus-driven oscillatory brain responses, so-called steady-state evoked potentials (SSEPs). To this end, we frequency-tagged a visual (7.5Hz) and a tactile stimulus (20Hz) and participants were cued, on a trial-by-trial basis, to attend to either vision or touch to perform a detection task in the cued modality. SSEPs driven by the stimulation comprised stimulus frequency-following (i.e. fundamental frequency) as well as frequency-doubling (i.e. second harmonic) responses. We observed that inter-modal attention to vision increased amplitude and phase synchrony of the fundamental frequency component of the visual SSEP while the second harmonic component showed an increase in phase synchrony, only. In contrast, inter-modal attention to touch increased SSEP amplitude of the second harmonic but not of the fundamental frequency, while leaving phase synchrony unaffected in both responses. Our results show that inter-modal attention generally influences concurrent stimulus processing in vision and touch, thus, extending earlier audio-visual findings to a visuo-tactile stimulus situation. The pattern of results, however, suggests differences in the neural implementation of inter-modal attentional influences on visual vs. tactile stimulus processing. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
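Frequency tagging of this kind is typically analyzed by reading out amplitude and inter-trial phase coherence at the tagged frequency and its second harmonic from the trial spectra. A generic numpy sketch (not the authors' pipeline; `eeg_trials` and the sampling rate are assumptions) follows:

```python
import numpy as np

def ssep_measures(trials, fs, freq):
    """Amplitude and inter-trial phase coherence (ITC) at a tagged frequency.
    `trials` is an array of shape (n_trials, n_samples)."""
    n = trials.shape[1]
    spectra = np.fft.rfft(trials, axis=1)
    k = int(round(freq * n / fs))            # FFT bin of the tagged frequency
    amp = np.abs(spectra[:, k]).mean() * 2 / n
    itc = np.abs(np.mean(spectra[:, k] / np.abs(spectra[:, k])))
    return amp, itc

# e.g. visual fundamental (7.5 Hz) and second harmonic (15 Hz):
# amp_f,  itc_f  = ssep_measures(eeg_trials, fs=500, freq=7.5)
# amp_2f, itc_2f = ssep_measures(eeg_trials, fs=500, freq=15.0)
```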
Tactile-Foot Stimulation Can Assist the Navigation of People with Visual Impairment
Velázquez, Ramiro; Pissaloux, Edwige; Lay-Ekuakille, Aimé
2015-01-01
Background. Tactile interfaces that stimulate the plantar surface with vibrations could represent a step forward toward the development of wearable, inconspicuous, unobtrusive, and inexpensive assistive devices for people with visual impairments. Objective. To study how people understand information through their feet and to maximize the capabilities of tactile-foot perception for assisting human navigation. Methods. Based on the physiology of the plantar surface, three prototypes of electronic tactile interfaces for the foot have been developed. With important technological improvements between them, all three prototypes essentially consist of a set of vibrating actuators embedded in a foam shoe-insole. Perceptual experiments involving direction recognition and real-time navigation in space were conducted with a total of 60 voluntary subjects. Results. The developed prototypes demonstrated that they are capable of transmitting tactile information that is easy and fast to understand. Average direction recognition rates were 76%, 88.3%, and 94.2% for subjects wearing the first, second, and third prototype, respectively. Exhibiting significant advances in tactile-foot stimulation, the third prototype was evaluated in navigation tasks. Results show that subjects were capable of following directional instructions useful for navigating spaces. Conclusion. Footwear providing tactile stimulation can be considered for assisting the navigation of people with visual impairments. PMID:27019593
Sensory Intolerance: Latent Structure and Psychopathologic Correlates
Taylor, Steven; Conelea, Christine A.; McKay, Dean; Crowe, Katherine B.; Abramowitz, Jonathan S.
2014-01-01
Background Sensory intolerance refers to high levels of distress evoked by everyday sounds (e.g., sounds of people chewing) or commonplace tactile sensations (e.g., sticky or greasy substances). Sensory intolerance may be associated with obsessive-compulsive (OC) symptoms, OC-related phenomena, and other forms of psychopathology. Sensory intolerance is not included as a syndrome in current diagnostic systems, although preliminary research suggests that it might be a distinct syndrome. Objectives First, to investigate the latent structure of sensory intolerance in adults; that is, to investigate whether it is syndrome-like in nature, in which auditory and tactile sensory intolerance co-occur and are associated with impaired functioning. Second, to investigate the psychopathologic correlates of sensory intolerance. In particular, to investigate whether sensory intolerance is associated with OC-related phenomena, as suggested by previous research. Method A sample of 534 community-based participants were recruited via Amazon.com’s Mechanical Turk program. Participants completed measures of sensory intolerance, OC-related phenomena, and general psychopathology. Results Latent class analysis revealed two classes of individuals: Those who were intolerant of both auditory and tactile stimuli (n = 150), and those who were relatively undisturbed by auditory or tactile stimuli (n = 384). Sensory intolerant individuals, compared to those who were comparatively sensory tolerant, had greater scores on indices of general psychopathology, more severe OC symptoms, a higher likelihood of meeting caseness criteria for OC disorder, elevated scores on measures of OC-related dysfunctional beliefs, a greater tendency to report OC-related phenomena (e.g., a greater frequency of tics), and more impairment on indices of social and occupational functioning. Sensory intolerant individuals had significantly higher scores on OC symptoms even after controlling for general psychopathology. Conclusions Consistent with recent research, these findings provide further evidence for a sensory intolerance syndrome. The findings provide a rationale for conducting future research for determining whether a sensory intolerance syndrome should be included in the diagnostic nomenclature. PMID:24703593
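Latent class analysis proper operates on categorical indicators with a dedicated estimator; as a loose stand-in, a two-component mixture model over continuous intolerance scores illustrates the idea of recovering the two classes. The item scores below are simulated, with class sizes chosen to echo the reported n = 150 and n = 384:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical item scores: rows = respondents,
# columns = auditory- and tactile-intolerance indicators.
rng = np.random.default_rng(0)
scores = np.vstack([rng.normal(1.0, 0.5, (384, 6)),   # relatively tolerant class
                    rng.normal(3.0, 0.5, (150, 6))])  # intolerant class

gm = GaussianMixture(n_components=2, random_state=0).fit(scores)
labels = gm.predict(scores)
print(np.bincount(labels))  # recovered class sizes, cf. n = 150 vs n = 384
```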
Experience and information loss in auditory and visual memory.
Gloede, Michele E; Paulauskas, Emily E; Gregg, Melissa K
2017-07-01
Recent studies show that recognition memory for sounds is inferior to memory for pictures. Four experiments were conducted to examine the nature of auditory and visual memory. Experiments 1-3 were conducted to evaluate the role of experience in auditory and visual memory. Participants received a study phase with pictures/sounds, followed by a recognition memory test. Participants then completed auditory training with each of the sounds, followed by a second memory test. Despite auditory training in Experiments 1 and 2, visual memory was superior to auditory memory. In Experiment 3, we found that it is possible to improve auditory memory, but only after 3 days of specific auditory training and 3 days of visual memory decay. We examined the time course of information loss in auditory and visual memory in Experiment 4 and found a trade-off between visual and auditory recognition memory: Visual memory appears to have a larger capacity, while auditory memory is more enduring. Our results indicate that visual and auditory memory are inherently different memory systems and that differences in visual and auditory recognition memory performance may be due to the different amounts of experience with visual and auditory information, as well as structurally different neural circuitry specialized for information retention.
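Recognition memory of the kind compared here is conventionally summarized by the signal-detection measure d' computed from hits and false alarms. A sketch, with hypothetical counts rather than the study's data:

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity for old/new recognition,
    with a standard correction for hit/false-alarm rates of 0 or 1."""
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(h) - norm.ppf(f)

# e.g. visual vs. auditory recognition (hypothetical counts):
print(d_prime(45, 15, 10, 50))   # pictures
print(d_prime(35, 25, 20, 40))   # sounds
```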
Semi-Immersive Virtual Turbine Engine Simulation System
NASA Astrophysics Data System (ADS)
Abidi, Mustufa H.; Al-Ahmari, Abdulrahman M.; Ahmad, Ali; Darmoul, Saber; Ameen, Wadea
2018-05-01
The design and verification of assembly operations is essential for planning product production operations. Recently, virtual prototyping has witnessed tremendous progress, and has reached a stage where current environments enable rich and multi-modal interaction between designers and models through stereoscopic visuals, surround sound, and haptic feedback. This paper discusses the benefits of building and using Virtual Reality (VR) models in assembly process verification, and presents the virtual assembly (VA) of an aircraft turbine engine. The assembly parts and sequences are explained using a virtual reality design system. The system enables stereoscopic visuals, surround sound, and ample and intuitive interaction with the developed models. A special software architecture is suggested to describe the assembly parts and assembly sequence in VR. A collision detection mechanism is employed that provides visual feedback to check the interference between components. The system is tested for virtual prototyping and assembly sequencing of a turbine engine. We show that the developed system is comprehensive in terms of VR feedback mechanisms, which include visual, auditory, and tactile, as well as force feedback. The system is shown to be effective and efficient for validating the design of assembly, part design, and operations planning.
Acquisition and Visualization Techniques of Human Motion Using Master-Slave System and Haptograph
NASA Astrophysics Data System (ADS)
Katsura, Seiichiro; Ohishi, Kiyoshi
Artificial acquisition and reproduction of human sensations are basic technologies of communication engineering. For example, auditory information is obtained by a microphone, and a speaker reproduces it by artificial means. Furthermore, a video camera and a television make it possible to transmit visual sensation by broadcasting. By contrast, since tactile or haptic information is subject to Newton's “law of action and reaction” in the real world, no device has yet been established that acquires, transmits, and reproduces this information. From this point of view, real-world haptics is the key technology for future haptic communication engineering. This paper proposes a novel acquisition method for haptic information named the “haptograph”. The haptograph visualizes haptic information in the way a photograph visualizes light. Since temporal and spatial analyses are conducted to represent the haptic information as a haptograph, it can be recognized and evaluated intuitively. In this paper, the proposed haptograph is applied to the visualization of human motion. It makes it possible to represent motion characteristics such as an expert's skill or a personal habit; in other words, a personal encyclopedia of motion is attained. Once such a personal encyclopedia is stored in a ubiquitous environment, future human-support technology can build on it.
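A crude stand-in for the haptograph idea, rendering a recorded haptic signal as a time-frequency image, is an ordinary spectrogram of a force trace; the actual haptograph construction involves more elaborate temporal and spatial analyses. The signal and sampling rate below are simulated:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 1000                            # force-sensor sampling rate (Hz), assumed
t = np.arange(0, 5, 1 / fs)
# Stand-in for a recorded force signal during a repetitive motion:
force = np.sin(2 * np.pi * 8 * t) + 0.3 * np.random.randn(t.size)

f, seg_t, power = spectrogram(force, fs=fs, nperseg=256)
# `power` is a (frequency x time) image: one crude "haptograph" of the
# motion, which could be rendered with e.g. matplotlib's imshow.
```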
Pillai, Roshni; Yathiraj, Asha
2017-09-01
The study evaluated whether there exists a difference/relation in the way four different memory skills (memory score, sequencing score, memory span, & sequencing span) are processed through the auditory modality, visual modality and combined modalities. Four memory skills were evaluated on 30 typically developing children aged 7 years and 8 years across three modality conditions (auditory, visual, & auditory-visual). Analogous auditory and visual stimuli were presented to evaluate the three modality conditions across the two age groups. The children obtained significantly higher memory scores through the auditory modality compared to the visual modality. Likewise, their memory scores were significantly higher through the auditory-visual modality condition than through the visual modality. However, no effect of modality was observed on the sequencing scores as well as for the memory and the sequencing span. A good agreement was seen between the different modality conditions that were studied (auditory, visual, & auditory-visual) for the different memory skills measures (memory scores, sequencing scores, memory span, & sequencing span). A relatively lower agreement was noted only between the auditory and visual modalities as well as between the visual and auditory-visual modality conditions for the memory scores, measured using Bland-Altman plots. The study highlights the efficacy of using analogous stimuli to assess the auditory, visual as well as combined modalities. The study supports the view that the performance of children on different memory skills was better through the auditory modality compared to the visual modality. Copyright © 2017 Elsevier B.V. All rights reserved.
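The Bland-Altman agreement analysis mentioned above reduces to computing the bias and 95% limits of agreement of the pairwise differences between two modality scores. A sketch with hypothetical score pairs:

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two modality scores."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa

# e.g. memory scores, auditory vs. visual (hypothetical data):
auditory = [12, 14, 11, 15, 13]
visual = [10, 13, 10, 14, 11]
print(bland_altman(auditory, visual))
```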
Gainotti, Guido; Ciaraffa, Francesca; Silveri, Maria Caterina; Marra, Camillo
2009-11-01
According to the "sensory-motor model of semantic knowledge," different categories of knowledge differ for the weight that different "sources of knowledge" have in their representation. Our study aimed to evaluate this model, checking if subjective evaluations given by normal subjects confirm the different weight that various sources of knowledge have in the representation of different biological and artifact categories and of unique entities, such as famous people or monuments. Results showed that the visual properties are considered as the main source of knowledge for all the living and nonliving categories (as well as for unique entities), but that the clustering of these "sources of knowledge" is different for biological and artifacts categories. Visual data are, indeed, mainly associated with other perceptual (auditory, olfactory, gustatory, and tactual) attributes in the mental representation of living beings and unique entities, whereas they are associated with action-related properties and tactile information in the case of artifacts.
The Cacophony of Space and the Clink Clunk Clang in Architecture The mall corridor redux
NASA Astrophysics Data System (ADS)
Cipriano, Nolan
The element of sound is nearly inescapable. The various ways in which sound is generated, perceived, represented, and hindered resonate not only within the realm of the auditory sense, but within the visual and tactile senses as well. Investigating the representation of sound, in both the aural and visual worlds, allows a deeper understanding of its profound effects. In architectural space, sound is the element most often forgotten: the sonic nature of a space is rarely designed. This thesis examines how a comprehensive understanding of the various facets of sound, its representations, effects, and history, can inform the design of sonorously beneficial spaces that directly reflect and support their purpose. This notion is explored through the redesign of the shopping-mall corridor within the heritage structure of the Ogilvy Building in Ottawa, Ontario. Through adaptive architecture, the possibility exists to create a subjective aural space.
Arias, M; Gonzalo, I
2004-10-01
The Spanish neuroscientist Justo Gonzalo Rodriguez-Leal (Barcelona 1910, Madrid 1986) carried out numerous studies of cerebral function, most notably in patients with brain injuries suffered during the Spanish Civil War. His book "Investigaciones sobre la nueva dinámica cerebral. La actividad cerebral en función de las condiciones de excitabilidad nerviosa" (Investigations on the new brain dynamics: brain activity as a function of the conditions of nervous excitability), published in two volumes (the first in 1945 and the second five years later), gathers some of his fundamental contributions, among which the so-called central syndrome stands out. A lesion of the dominant parietal lobe that is central, i.e., equidistant from the visual, somatosensory and auditory projection areas, can lead to diverse perceptive dysfunctions, among them inversions of visual, tactile and acoustic perception. The more peripheral the lesion, the more unisensorial and crossed the resulting deficit; the closer it lies to the central region, the more bilateral and polysensorial the disorders. Justo Gonzalo later explained all these phenomena by a gradient system.
Probing consciousness in a sensory-disconnected paralyzed patient.
Rohaut, Benjamin; Raimondo, Federico; Galanaud, Damien; Valente, Mélanie; Sitt, Jacobo Diego; Naccache, Lionel
2017-01-01
Diagnosis of consciousness can be very challenging in some clinical situations such as severe sensory-motor impairments. We report the case study of a patient who presented a total "locked-in syndrome" associated with multi-sensory deafferentation (visual, auditory and tactile modalities) following a protuberantial infarction. In spite of this severe and extreme disconnection from the external world, we could detect reliable evidence of consciousness using a multivariate analysis of his high-density resting-state electroencephalogram. This EEG-based diagnosis was eventually confirmed by the clinical evolution of the patient. This approach illustrates the potential importance of functional brain-imaging data for improving the diagnosis of consciousness and of cognitive abilities in critical situations in which the behavioral channel is compromised, such as deafferented locked-in syndrome.
Seeing the Song: Left Auditory Structures May Track Auditory-Visual Dynamic Alignment
Mossbridge, Julia A.; Grabowecky, Marcia; Suzuki, Satoru
2013-01-01
Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment. PMID:24194873
Nursing students at a university - a study about learning style preferences.
Hallin, Karin
2014-12-01
In most adult education, teachers use methods that assume all students learn in the same way. But knowledge of students' learning style preferences highlights the importance of adequate teaching and learning adaptation. The aim of the study was to describe and compare final year nursing students' learning style preferences in two campuses during three semesters. A further aim was to identify differences between learning style preferences and personal characteristics. A descriptive cross-sectional study using the Productivity Environmental Preference Survey (PEPS) questionnaire was conducted at a Swedish rural university. Three semester groups with 263 nursing students participated in 2012-2013. The majority of the students were 'flexible' in their learning style preferences and had none or few strong preferences. Students with strong preferences preferred high structure (75%) and an authority figure present (40%). About a third were highly auditory, tactile and/or kinesthetic while 8% were highly visual. Few significant differences were revealed between the groups of campuses and the groups of semesters or between learning style preferences and upper secondary school and care experience. There were no significant differences between learning style preferences and age and assistant nurse graduation. More women than men were highly motivated, auditory, tactile and kinesthetic and preferred structure and mobility. The PEPS questionnaire provides nursing students with self-awareness regarding their strengths and shortcomings in learning and teachers with a valuable and practical basis for their selection of adapted individual and group teaching methods. The findings suggest the need for wide variation and interactive teaching approaches, conscious didactic actions between cooperating teachers and conscious learning strategies for nursing students. Copyright © 2014 Elsevier Ltd. All rights reserved.
Tactile Sun: Bringing an Invisible Universe to the Visually Impaired
NASA Astrophysics Data System (ADS)
Isidro, G. M.; Pantoja, C. A.
2014-07-01
A tactile model of the Sun has been created as a strategy for communicating astronomy to the blind or visually impaired, and as a useful outreach tool for general audiences. The model design was a collaboration between an education specialist, an astronomy specialist and a sculptor. The tactile Sun has been used at astronomy outreach events in Puerto Rico to make activities more inclusive and to increase public awareness of the needs of those with disabilities.
Visual detail about the body modulates tactile localisation biases.
Margolis, Aaron N; Longo, Matthew R
2015-02-01
The localisation of tactile stimuli requires the integration of visual and somatosensory inputs within an internal representation of the body surface and is prone to consistent bias. Joints may play a role in segmenting such internal body representations, and may therefore influence tactile localisation biases, although the nature of this influence remains unclear. Here, we investigate the relationship between conceptual knowledge of joint locations and tactile localisation biases on the hand. In one task, participants localised tactile stimuli applied to the dorsum of their hand. A distal localisation bias was observed in all participants, consistent with previous results. We also manipulated the availability of visual information during this task, to determine whether the absence of this information could account for the distal bias observed here and by Mancini et al. (Neuropsychologia 49:1194-1201, 2011). The observed distal bias increased in magnitude when visual information was restricted, without a corresponding decrease in precision. In a separate task, the same participants indicated, from memory, knuckle locations on a silhouette image of their hand. Analogous distal biases were also seen in the knuckle localisation task. The accuracy of conceptual joint knowledge was not correlated with tactile localisation bias magnitude, although a similarity in observed bias direction suggests that both tasks may rely on a common, higher-order body representation. These results also suggest that distortions of conceptual body representation may be more common in healthy individuals than previously thought.
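The distal bias reported above can be quantified as the mean localisation error projected onto the hand's proximo-distal axis. A sketch with hypothetical stimulus and judgment coordinates:

```python
import numpy as np

def distal_bias(actual_xy, judged_xy, distal_axis=(0.0, 1.0)):
    """Mean localisation error projected onto the proximo-distal axis
    (positive = judged closer to the fingers than it really was)."""
    err = np.asarray(judged_xy, float) - np.asarray(actual_xy, float)
    axis = np.asarray(distal_axis) / np.linalg.norm(distal_axis)
    return err.mean(axis=0) @ axis

# Hypothetical stimulus grid on the hand dorsum (units: mm):
actual = [(0, 0), (0, 10), (10, 0), (10, 10)]
judged = [(1, 4), (0, 13), (11, 5), (9, 15)]
print(distal_bias(actual, judged))   # > 0: distal bias
```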
Pozeg, Polona; Galli, Giulia; Blanke, Olaf
2015-01-01
Experiencing a body part as one’s own, i.e., body ownership, depends on the integration of multisensory bodily signals (including visual, tactile, and proprioceptive information) with the visual top-down signals from peripersonal space. Although it has been shown that the visuo-spatial viewpoint from where the body is seen is an important visual top-down factor for body ownership, different studies have reported diverging results. Furthermore, the role of the visuo-spatial viewpoint (sometimes also called first-person perspective) has only been studied for hands or the whole body, but not for the lower limbs. We thus investigated whether and how leg visuo-tactile integration and leg ownership depend on the visuo-spatial viewpoint from which the legs are seen and the anatomical similarity of the visual leg stimuli. Using a virtual leg illusion, we tested the strength of visuo-tactile integration of leg stimuli using the crossmodal congruency effect (CCE) as well as the subjective sense of leg ownership (assessed by a questionnaire). Fifteen participants viewed virtual legs or non-corporeal control objects, presented either from their habitual first-person viewpoint or from a viewpoint that was rotated by 90° (third-person viewpoint), while visuo-tactile stroking was applied between the participants’ legs and the virtual legs shown on a head-mounted display. The data show that the first-person visuo-spatial viewpoint significantly boosts visuo-tactile integration as well as the sense of leg ownership. Moreover, the viewpoint-dependent increment of visuo-tactile integration was found only in the conditions in which participants viewed the virtual legs (it was absent for control objects). These results confirm the importance of the first-person visuo-spatial viewpoint for the integration of visuo-tactile stimuli and extend findings from the upper extremity and the trunk to visuo-tactile integration and ownership for the legs. PMID:26635663
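The crossmodal congruency effect used here is simply the reaction-time cost of incongruent relative to congruent visuo-tactile pairings. A sketch with hypothetical reaction times illustrating the viewpoint-dependent pattern the authors describe:

```python
import numpy as np

def cce(rt_congruent, rt_incongruent):
    """Crossmodal congruency effect in ms: larger values indicate
    stronger visuo-tactile integration of the seen limb."""
    return np.mean(rt_incongruent) - np.mean(rt_congruent)

# Hypothetical reaction times (ms), first- vs. third-person viewpoint:
print(cce([520, 505, 498], [590, 610, 575]))  # first-person: large CCE
print(cce([530, 515, 510], [545, 550, 540]))  # third-person: smaller CCE
```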
Evaluation of the attention network test using vibrotactile stimulations.
Salzer, Yael; Oron-Gilad, Tal; Henik, Avishai
2015-06-01
We report a vibrotactile version of the attention network test (ANT)-the tactile ANT (T-ANT). It has been questioned whether attentional components are modality specific or not. The T-ANT explores alertness, orienting, cognitive control, and their relationships, similar to its visual counterpart, in the tactile modality. The unique features of the T-ANT are in utilizing stimuli on a single plane-the torso-and replacing the original imperative flanker task with a tactile Simon task. Subjects wore a waist belt mounted with two vibrotactile stimulators situated on the back and positioned to the right and left of the spinal column. They responded by pressing keys with their right or left hand in reaction to the type of vibrotactile stimulation (pulsed/continuous signal). On a single trial, an alerting tone was followed by a short tactile (informative/noninformative) peripheral cue and an imperative tactile Simon task target. The T-ANT was compared with a variant of the ANT in which the flanker task was replaced with a visual Simon task. Experimental data showed effects of orienting over control only when the peripheral cues were informative. In contrast to the visual task, interactions between alertness and control or alertness and orienting were not found in the tactile task. A possible rationale for these results is discussed. The T-ANT allows examination of attentional processes among patients with tactile attentional deficits and patients with eyesight deficits who cannot take part in visual tasks. Technological advancement would enable implementation of the T-ANT in brain-imaging studies.
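Network scores in ANT-style tasks are plain reaction-time contrasts between cue and target conditions; in the T-ANT the conflict score comes from the tactile Simon task. The condition names and means below are illustrative, not the study's design or data:

```python
def ant_effects(rt):
    """Attention-network effects (ms) from condition mean RTs.
    For the T-ANT the conflict score is a tactile Simon effect."""
    return {
        "alerting": rt["no_tone"] - rt["alert_tone"],
        "orienting": rt["noninformative_cue"] - rt["informative_cue"],
        "conflict": rt["simon_incongruent"] - rt["simon_congruent"],
    }

# Hypothetical condition means (ms):
rt = {"no_tone": 640, "alert_tone": 610,
      "noninformative_cue": 625, "informative_cue": 595,
      "simon_incongruent": 660, "simon_congruent": 615}
print(ant_effects(rt))
```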
Survey of Visual and Force/Tactile Control of Robots for Physical Interaction in Spain
Garcia, Gabriel J.; Corrales, Juan A.; Pomares, Jorge; Torres, Fernando
2009-01-01
Sensors provide robotic systems with the information required to perceive the changes that happen in unstructured environments and modify their actions accordingly. The robotic controllers which process and analyze this sensory information are usually based on three types of sensors (visual, force/torque and tactile) which identify the most widespread robotic control strategies: visual servoing control, force control and tactile control. This paper presents a detailed review on the sensor architectures, algorithmic techniques and applications which have been developed by Spanish researchers in order to implement these mono-sensor and multi-sensor controllers which combine several sensors. PMID:22303146
Meet our Neighbours - a tactile experience
NASA Astrophysics Data System (ADS)
Canas, L.; Lobo Correia, A.
2013-09-01
Planetary science is a key field in astronomy that draws a great deal of attention and engages many enthusiasts. In essence, it is a visual science, and the current resources and activities for the inclusion of visually impaired children, although increasing, are still costly and somewhat scarce. There is therefore a paramount need to develop more low-cost resources in order to provide experiences that can reach everyone, even the most socially deprived communities. "Meet our neighbours! - a tactile experience" promotes and provides inclusion activities for visually impaired children and their non-visually impaired peers through low-cost, hands-on astronomy activities. It is aimed at children from 6 to 12 years of age and has produced a set of 13 tactile images of the main objects of the Solar System that can be used in schools, science centres and outreach associations. Addressing several common problems through tactile resources, this project presents ways to provide low-cost solutions (avoiding expensive tactile printing), to promote inclusion and interactive hands-on activities for visually impaired children and their non-visually impaired peers, and to create dynamic interactions between them based on oral knowledge transmission. Here we describe the process of implementing this initiative in target communities: establishing a bridge between scientists, children and teachers. We also describe the struggles and challenges encountered during the project and the enriching experience of engaging these specific groups with astronomy, broadening horizons in an experience accessible to all.
Chaves-Coira, Irene; Barros-Zulaica, Natali; Rodrigo-Angulo, Margarita; Núñez, Ángel
2016-01-01
Neocortical cholinergic activity plays a fundamental role in sensory processing and cognitive functions. Previous results have suggested a refined anatomical and functional topographical organization of basal forebrain (BF) projections that may control cortical sensory processing in a specific manner. We have used retrograde anatomical procedures to demonstrate the existence of specific neuronal groups in the BF involved in the control of specific sensory cortices. Fluoro-Gold (FlGo) and Fast Blue (FB) fluorescent retrograde tracers were deposited into the primary somatosensory (S1) and primary auditory (A1) cortices in mice. Our results revealed that the BF is a heterogeneous area in which neurons projecting to different cortical areas are segregated into different neuronal groups. Most of the neurons located in the horizontal limb of the diagonal band of Broca (HDB) projected to the S1 cortex, indicating that this area is specialized in the sensory processing of tactile stimuli. However, the nucleus basalis magnocellularis (B) nucleus shows a similar number of cells projecting to the S1 as to the A1 cortices. In addition, we analyzed the cholinergic effects on the S1 and A1 cortical sensory responses by optogenetic stimulation of the BF neurons in urethane-anesthetized transgenic mice. We used transgenic mice expressing the light-activated cation channel, channelrhodopsin-2, tagged with a fluorescent protein (ChR2-YFP) under the control of the choline-acetyl transferase promoter (ChAT). Cortical evoked potentials were induced by whisker deflections or by auditory clicks. According to the anatomical results, optogenetic HDB stimulation induced more extensive facilitation of tactile evoked potentials in S1 than auditory evoked potentials in A1, while optogenetic stimulation of the B nucleus facilitated either tactile or auditory evoked potentials equally. Consequently, our results suggest that cholinergic projections to the cortex are organized into segregated pools of neurons that may modulate specific cortical areas. PMID:27147975
Bonino, D; Ricciardi, E; Sani, L; Gentili, C; Vanello, N; Guazzelli, M; Vecchi, T; Pietrini, P
2008-09-01
In sighted individuals, both the visual and tactile version of the same spatial working memory task elicited neural responses in the dorsal "where" cortical pathway (Ricciardi et al., 2006). Whether the neural response during the tactile working memory task is due to visually-based spatial imagery or rather reflects a more abstract, supramodal organization of the dorsal cortical pathway remains to be determined. To understand the role of visual experience on the functional organization of the dorsal cortical stream, using functional magnetic resonance imaging (fMRI) here we examined brain response in four individuals with congenital or early blindness and no visual recollection, while they performed the same tactile spatial working memory task, a one-back recognition of 2D and 3D matrices. The blind subjects showed a significant activation in bilateral posterior parietal cortex, dorsolateral and inferior prefrontal areas, precuneus, lateral occipital cortex, and cerebellum. Thus, dorsal occipito-parietal areas are involved in mental imagery dealing with spatial components in subjects without prior visual experience and in response to a non-visual task. These data indicate that recruitment of the dorsal cortical pathway in response to the tactile spatial working memory task is not mediated by visually-based imagery and that visual experience is not a prerequisite for the development of a more abstract functional organization of the dorsal stream. These findings, along with previous data indicating a similar supramodal functional organization within the ventral cortical pathway and the motion processing brain regions, may contribute to explain how individuals who are born deprived of sight are able to interact effectively with the surrounding world.
Dynamics of Propofol-Induced Loss of Consciousness Across Primate Neocortex.
Ishizawa, Yumiko; Ahmed, Omar J; Patel, Shaun R; Gale, John T; Sierra-Mercado, Demetrio; Brown, Emery N; Eskandar, Emad N
2016-07-20
The precise neural mechanisms underlying transitions between consciousness and anesthetic-induced unconsciousness remain unclear. Here, we studied intracortical neuronal dynamics leading to propofol-induced unconsciousness by recording single-neuron activity and local field potentials directly in the functionally interconnecting somatosensory (S1) and frontal ventral premotor (PMv) network during a gradual behavioral transition from full alertness to loss of consciousness (LOC) and on through a deeper anesthetic level. Macaque monkeys were trained for a behavioral task designed to determine the trial-by-trial alertness and neuronal response to tactile and auditory stimulation. We show that disruption of coherent beta oscillations between S1 and PMv preceded, but did not coincide with, the LOC. LOC appeared to correspond to pronounced but brief gamma-/high-beta-band oscillations (lasting ∼3 min) in PMv, followed by a gamma peak in S1. We also demonstrate that the slow oscillations appeared after LOC in S1 and then in PMv after a delay, together suggesting that neuronal dynamics are very different across S1 versus PMv during LOC. Finally, neurons in both S1 and PMv transition from responding to bimodal (tactile and auditory) stimulation before LOC to the tactile modality only during unconsciousness, consistent with an inhibition of multisensory integration in this network. Our results show that propofol-induced LOC is accompanied by spatiotemporally distinct oscillatory neuronal dynamics across the somatosensory and premotor network and suggest that a transitional state from wakefulness to unconsciousness is not a continuous process, but rather a series of discrete neural changes. How information is processed by the brain during awake and anesthetized states and, crucially, during the transition is not clearly understood. We demonstrate that neuronal dynamics are very different within an interconnecting cortical network (primary somatosensory and frontal premotor area) during the loss of consciousness (LOC) induced by propofol in nonhuman primates. Coherent beta oscillations between these regions are disrupted before LOC. Pronounced but brief gamma-band oscillations appear to correspond to LOC. In addition, neurons in both of these cortices transition from responding to both tactile and auditory stimulation before LOC to the tactile modality only during unconsciousness. We demonstrate that propofol-induced LOC is accompanied by spatiotemporally distinctive neuronal dynamics in this network with concurrent changes in multisensory processing. Copyright © 2016 the authors.
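The S1-PMv beta coherence whose disruption is described above can be quantified with magnitude-squared coherence between the two field potentials. A generic sketch on simulated stand-in signals, not the authors' pipeline:

```python
import numpy as np
from scipy.signal import coherence

fs = 1000                                      # LFP sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
shared = np.sin(2 * np.pi * 20 * t)            # common 20 Hz (beta) drive
s1 = shared + 0.5 * np.random.randn(t.size)    # stand-in S1 LFP
pmv = shared + 0.5 * np.random.randn(t.size)   # stand-in PMv LFP

f, coh = coherence(s1, pmv, fs=fs, nperseg=1024)
beta = (f >= 13) & (f <= 30)
print(coh[beta].mean())   # mean beta-band S1-PMv coherence
```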
Temporal Influence on Awareness
1995-12-01
(Front-matter residue: only the report's list of figures survives here.) The recoverable content describes test-setup timing (measured vs. expected modal delays, in ms) and two experiments: in Experiment I, visual and auditory stimuli were presented simultaneously (visual-auditory delay = 0 ms, visual-visual delay = 0 ms); in Experiment II, visual and auditory stimuli were presented in order (visual-auditory delay = 0 ms, visual-visual delay variable).
Ortega, Laura; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru
2014-01-01
Whereas the visual modality tends to dominate over the auditory modality in bimodal spatial perception, the auditory modality tends to dominate over the visual modality in bimodal temporal perception. Recent results suggest that the visual modality dominates bimodal spatial perception because spatial discriminability is typically greater for the visual than auditory modality; accordingly, visual dominance is eliminated or reversed when visual-spatial discriminability is reduced by degrading visual stimuli to be equivalent or inferior to auditory spatial discriminability. Thus, for spatial perception, the modality that provides greater discriminability dominates. Here we ask whether auditory dominance in duration perception is similarly explained by factors that influence the relative quality of auditory and visual signals. In contrast to the spatial results, the auditory modality dominated over the visual modality in bimodal duration perception even when the auditory signal was clearly weaker, when the auditory signal was ignored (i.e., the visual signal was selectively attended), and when the temporal discriminability was equivalent for the auditory and visual signals. Thus, unlike spatial perception where the modality carrying more discriminable signals dominates, duration perception seems to be mandatorily linked to auditory processing under most circumstances. PMID:24806403
Bravi, Riccardo; Quarta, Eros; Del Tongo, Claudia; Carbonaro, Nicola; Tognetti, Alessandro; Minciacchi, Diego
2015-06-01
The involvement or noninvolvement of a clock-like neural process, an effector-independent representation of the time intervals to produce, is described as the essential difference between event-based and emergent timing. In a previous work (Bravi et al. in Exp Brain Res 232:1663-1675, 2014a. doi: 10.1007/s00221-014-3845-9), we studied repetitive isochronous wrist flexion-extensions (IWFEs), performed while minimizing visual and tactile information, to clarify whether non-temporal and temporal characteristics of paced auditory stimuli affect the precision and accuracy of the rhythmic motor performance. Here, with the inclusion of new recordings, we expand the examination of the dataset described in our previous study to investigate whether simple and complex paced auditory stimuli (clicks and music) and their imaginations influence in different ways the timing mechanisms for repetitive IWFEs. Sets of IWFEs were analyzed by the windowed (lag one) autocorrelation wγ(1), a statistical method recently introduced for the distinction between event-based and emergent timing. Our findings provide evidence that paced auditory information and its imagination favor the engagement of a clock-like neural process, and specifically that music, unlike clicks, lacks the power to elicit event-based timing, not counteracting the natural shift of wγ(1) toward positive values as the frequency of movements increases.
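The wγ(1) statistic builds on the lag-one autocorrelation of inter-response intervals: clearly negative values are the signature of event-based (clock-like) timing, while values near zero or positive suggest emergent timing. A rough sketch, using a plain sliding-window average as a stand-in for the published windowed estimator, on simulated inter-tap intervals:

```python
import numpy as np

def lag1_autocorr(intervals):
    """Lag-one autocorrelation of inter-response intervals.
    Negative values point to event-based (clock-like) timing;
    values near zero or positive suggest emergent timing."""
    x = np.asarray(intervals, float) - np.mean(intervals)
    return (x[:-1] @ x[1:]) / (x @ x)

def windowed_lag1(intervals, win=5):
    """Crude stand-in for wgamma(1): average lag-1 autocorrelation over
    short sliding windows, which reduces bias from slow tempo drift."""
    vals = [lag1_autocorr(intervals[i:i + win])
            for i in range(len(intervals) - win + 1)]
    return float(np.mean(vals))

# Hypothetical inter-movement intervals (ms) from paced flexion-extensions:
rng = np.random.default_rng(1)
print(windowed_lag1(500 + rng.normal(0, 15, 60)))
```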
DOT National Transportation Integrated Search
2011-06-01
People with vision impairment have different perception and spatial cognition as compared to the sighted people. Blind pedestrians primarily rely on auditory, olfactory, or tactile feedback to determine spatial location and find their way. They gener...
Auditory, Visual, and Auditory-Visual Perception of Vowels by Hearing-Impaired Children.
ERIC Educational Resources Information Center
Hack, Zarita Caplan; Erber, Norman P.
1982-01-01
Vowels were presented through auditory, visual, and auditory-visual modalities to 18 hearing impaired children (12 to 15 years old) having good, intermediate, and poor auditory word recognition skills. All the groups had difficulty with acoustic information and visual information alone. The first two groups had only moderate difficulty identifying…
NASA Astrophysics Data System (ADS)
Paganotti, A.; Reis, C.; Voelzke, M. R.
2017-12-01
This work deals with the use of tactile materials as a pedagogical tool for the teaching of Astronomy; the material was used in a didactic activity with 44 public elementary school students in Minas Gerais. One visually impaired student and one hearing-impaired student participated, and these two were the focus of the research. With the tactile-visual material produced, the objective was to develop themes such as the phases of the Moon, eclipses and the Solar System. Two questionnaires were applied and revealed, after the didactic activity, an improvement in the concepts related to Astronomy and in the socialization of the students with disabilities within the group.
Misunderstanding and Repair in Tactile Auslan
ERIC Educational Resources Information Center
Willoughby, Louisa; Manns, Howard; Iwasaki, Shimako; Bartlett, Meredith
2014-01-01
This article discusses ways in which misunderstandings arise in Tactile Australian Sign Language (Tactile Auslan) and how they are resolved. Of particular interest are the similarities to and differences from the same processes in visually signed and spoken conversation. This article draws on detailed conversation analysis (CA) and demonstrates…
Hollins, Mark
2009-01-01
During haptic exploration of surfaces, complex mechanical oscillations—of surface displacement and air pressure—are generated, which are then transduced by receptors in the skin and in the inner ear. Tactile and auditory signals thus convey redundant information about texture, partially carried in the spectral content of these signals. It is no surprise, then, that the representation of temporal frequency is linked in the auditory and somatosensory systems. An emergent hypothesis is that there exists a supramodal representation of temporal frequency, and by extension texture. PMID:19721886
Cacciamani, Laura; Likova, Lora T
2017-05-01
The perirhinal cortex (PRC) is a medial temporal lobe structure that has been implicated in not only visual memory in the sighted, but also tactile memory in the blind (Cacciamani & Likova, 2016). It has been proposed that, in the blind, the PRC may contribute to modulation of tactile memory responses that emerge in low-level "visual" area V1 as a result of training-induced cortical reorganization (Likova, 2012, 2015). While some studies in the sighted have indicated that the PRC is indeed structurally and functionally connected to the visual cortex (Clavagnier, Falchier, & Kennedy, 2004; Peterson, Cacciamani, Barense, & Scalf, 2012), the PRC's direct modulation of V1 is unknown-particularly in those who lack the visual input that typically stimulates this region. In the present study, we tested Likova's PRC modulation hypothesis; specifically, we used fMRI to assess the PRC's Granger causal influence on V1 activation in the blind during a tactile memory task. To do so, we trained congenital and acquired blind participants on a unique memory-guided drawing technique previously shown to result in V1 reorganization towards tactile memory representations (Likova, 2012). The tasks (20s each) included: tactile exploration of raised line drawings of faces and objects, tactile memory retrieval via drawing, and a scribble motor/memory control. FMRI before and after a week of the Cognitive-Kinesthetic training on these tasks revealed a significant increase in PRC-to-V1 Granger causality from pre- to post-training during the memory drawing task, but not during the motor/memory control. This increase in causal connectivity indicates that the training strengthened the top-down modulation of visual cortex from the PRC. This is the first study to demonstrate enhanced directed functional connectivity from the PRC to the visual cortex in the blind, implicating the PRC as a potential source of the reorganization towards tactile representations that occurs in V1 in the blind brain (Likova, 2012). Copyright © 2017 Elsevier Inc. All rights reserved.
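Directed influence of the kind reported (PRC-to-V1 Granger causality) can be illustrated with a bivariate Granger test; the sketch below uses simulated time series and statsmodels' generic test, not the authors' fMRI pipeline:

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 200                                   # stand-in for fMRI time points
prc = rng.standard_normal(n)              # stand-in PRC time series
v1 = np.zeros(n)
for i in range(1, n):                     # V1 driven by lagged PRC activity
    v1[i] = 0.6 * prc[i - 1] + 0.5 * rng.standard_normal()

# Tests whether the 2nd column (PRC) Granger-causes the 1st (V1);
# results (F tests per lag) are printed and returned as a dict.
res = grangercausalitytests(np.column_stack([v1, prc]), maxlag=2)
```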
Massive cortical reorganization in sighted Braille readers.
Siuda-Krzywicka, Katarzyna; Bola, Łukasz; Paplińska, Małgorzata; Sumera, Ewa; Jednoróg, Katarzyna; Marchewka, Artur; Śliwińska, Magdalena W; Amedi, Amir; Szwed, Marcin
2016-03-15
The brain is capable of large-scale reorganization in blindness or after massive injury. Such reorganization crosses the division into separate sensory cortices (visual, somatosensory...). As a result, the visual cortex of the blind becomes active during tactile Braille reading. Although the possibility of such reorganization in the normal, adult brain has been raised, definitive evidence has been lacking. Here, we demonstrate such extensive reorganization in normal, sighted adults who learned Braille while their brain activity was investigated with fMRI and transcranial magnetic stimulation (TMS). Subjects showed enhanced activity for tactile reading in the visual cortex, including the visual word form area (VWFA), that was modulated by their Braille reading speed, and strengthened resting-state connectivity between visual and somatosensory cortices. Moreover, TMS disruption of VWFA activity decreased their tactile reading accuracy. Our results indicate that large-scale reorganization is a viable mechanism recruited when learning complex skills.
Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale
2017-04-01
There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times to auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.
Brayda, Luca; Campus, Claudio; Memeo, Mariacarla; Lucagrossi, Laura
2015-01-01
Tactile maps are efficient tools for improving the spatial understanding and mobility skills of visually impaired people. Their limited adaptability can be compensated for by haptic devices that display graphical information, but the assessment of such devices is frequently limited to performance-based metrics, which can hide potential spatial abilities relevant to orientation and mobility (O&M) protocols. We assess a low-tech tactile mouse able to deliver three-dimensional content, considering how performance, mental workload, behavior, and anxiety status vary with task difficulty and gender in congenitally blind, late blind, and sighted subjects. Results show that task difficulty coherently modulates the efficiency and difficulty of building mental maps, regardless of visual experience. Although all groups exhibited similar, gender-independent attitudes, females had lower performance and higher cognitive load, especially when congenitally blind. All groups showed a significant decrease in anxiety after using the device. Tactile graphics delivered with our device therefore seem applicable across different visual experiences, with no negative emotional consequences of mentally demanding spatial tasks. Going beyond performance-based assessment, our methodology can help to better target technological solutions in O&M protocols.
Multisensory effects on somatosensation: a trimodal visuo-vestibular-tactile interaction
Kaliuzhna, Mariia; Ferrè, Elisa Raffaella; Herbelin, Bruno; Blanke, Olaf; Haggard, Patrick
2016-01-01
Vestibular information about self-motion is combined with other sensory signals. Previous research described both visuo-vestibular and vestibular-tactile bilateral interactions, but the simultaneous interaction between all three sensory modalities has not been explored. Here we exploit a previously reported visuo-vestibular integration to investigate multisensory effects on tactile sensitivity in humans. Tactile sensitivity was measured during passive whole body rotations alone or in conjunction with optic flow, creating either purely vestibular or visuo-vestibular sensations of self-motion. Our results demonstrate that tactile sensitivity is modulated by perceived self-motion, as provided by a combined visuo-vestibular percept, and not by the visual and vestibular cues independently. We propose a hierarchical multisensory interaction that underpins somatosensory modulation: visual and vestibular cues are first combined to produce a multisensory self-motion percept. Somatosensory processing is then enhanced according to the degree of perceived self-motion. PMID:27198907
McKetin, Rebecca; Baker, Amanda L; Dawe, Sharon; Voce, Alexandra; Lubman, Dan I
2017-05-01
We examined the lifetime experience of hallucinations and delusions associated with transient methamphetamine-related psychosis (MAP), persistent MAP and primary psychosis among a cohort of dependent methamphetamine users. Participants were classified as having (a) no current psychotic symptoms, (n=110); (b) psychotic symptoms only when using methamphetamine (transient MAP, n=85); (c) psychotic symptoms both when using methamphetamine and when abstaining from methamphetamine (persistent MAP, n=37), or (d) meeting DSM-IV criteria for lifetime schizophrenia or mania (primary psychosis, n=52). Current psychotic symptoms were classified as a score of 4 or more on any of the Brief Psychiatric Rating Scale items of suspiciousness, hallucinations or unusual thought content in the past month. Lifetime psychotic diagnoses and symptoms were assessed using the Composite International Diagnostic Interview. Transient MAP was associated with persecutory delusions and tactile hallucinations (compared to the no symptom group). Persistent MAP was additionally associated with delusions of reference, thought interference and complex auditory, visual, olfactory and tactile hallucinations, while primary psychosis was also associated with delusions of thought projection, erotomania and passivity. The presence of non-persecutory delusions and hallucinations across various modalities is a marker for persistent MAP or primary psychosis in people who use methamphetamine. Copyright © 2017. Published by Elsevier B.V.
Laasonen, M; Service, E; Virsu, V
2001-12-01
We studied the temporal acuity of 16 developmentally dyslexic young adults in three perceptual modalities. The control group consisted of 16 age- and IQ-matched normal readers. Two methods were used. In the temporal order judgment (TOJ) method, the stimuli were spatially separate fingertip indentations in the tactile system, tone bursts of different pitches in audition, and light flashes in vision. Participants indicated which one of two stimuli appeared first. To test temporal processing acuity (TPA), the same 8-msec nonspeech stimuli were presented as two parallel sequences of three stimulus pulses. Participants indicated, without order judgments, whether the pulses of the two sequences were simultaneous or nonsimultaneous. The dyslexic readers were somewhat inferior to the normal readers in all six temporal acuity tasks on average. Thus, our results agreed with the existence of a pansensory temporal processing deficit associated with dyslexia in a language with shallow orthography (Finnish) and in well-educated adults. The dyslexic and normal readers' temporal acuities overlapped so much, however, that acuity deficits alone would not allow dyslexia diagnoses. It was irrelevant whether or not the acuity task required order judgments. The groups did not differ in the nontemporal aspects of our experiments. Correlations between temporal acuity and reading-related tasks suggested that temporal acuity is associated with phonological awareness.
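TOJ data of the kind described above are conventionally summarized by fitting a cumulative Gaussian to the proportion of "A first" responses as a function of stimulus onset asynchrony; the fitted spread serves as the temporal-acuity estimate (JND). A sketch with hypothetical data:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(soa, pss, jnd):
    """P('right first') as a cumulative Gaussian of onset asynchrony.
    pss = point of subjective simultaneity; jnd ~ temporal acuity."""
    return norm.cdf(soa, loc=pss, scale=jnd)

# Hypothetical TOJ data: SOAs (ms) and proportion of 'right first' responses:
soa = np.array([-90, -60, -30, 0, 30, 60, 90])
p_right = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.97])

(pss, jnd), _ = curve_fit(psychometric, soa, p_right, p0=(0.0, 30.0))
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```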
Proulx, Michael J.; Gwinnutt, James; Dell’Erba, Sara; Levy-Tzedek, Shelly; de Sousa, Alexandra A.; Brown, David J.
2015-01-01
Vision is the dominant sense for perception-for-action in humans and other higher primates. Advances in sight restoration now utilize the other intact senses, through sensory substitution, to replace missing visual information. Sensory substitution devices translate visual information from a sensor, such as a camera or ultrasound device, into a format that the auditory or tactile systems can detect and process, so the visually impaired can see through hearing or touch. Online control of action is essential for many daily tasks such as pointing, grasping and navigating, and adapting to a sensory substitution device successfully requires extensive learning. Here we review the research on sensory substitution for vision restoration in the context of providing the means of online control of action in the blind or blindfolded. Sensory substitution devices appear to engage the neural visual system; this suggests the hypothesis that sensory substitution draws on the same underlying mechanisms as unimpaired visual control of action. We then review the current state of the art of sensory substitution approaches to object recognition, localization, and navigation, and the potential these approaches have for revealing a metamodal behavioral and neural basis for the online control of action. PMID:26599473
A tactile display for international space station (ISS) extravehicular activity (EVA).
Rochlis, J L; Newman, D J
2000-06-01
A tactile display to increase an astronaut's situational awareness during an extravehicular activity (EVA) has been developed and ground tested. The Tactor Locator System (TLS) is a non-intrusive, intuitive display capable of conveying position and velocity information via a vibrotactile stimulus applied to the subject's neck and torso. In the Earth's 1 G environment, perception of position and velocity is determined by the body's individual sensory systems. Under normal sensory conditions, redundant information from these sensory systems provides humans with an accurate sense of their position and motion. However, altered environments, including exposure to weightlessness, can lead to conflicting visual and vestibular cues, resulting in decreased situational awareness. The TLS was designed to provide somatosensory cues to complement the visual system during EVA operations. An EVA task was simulated on a computer graphics workstation with a display of the International Space Station (ISS) and a target astronaut at an unknown location. Subjects were required to move about the ISS and acquire the target astronaut using either an auditory cue at the outset, or the TLS. Subjects used a 6 degree of freedom input device to command translational and rotational motion. The TLS was configured to act as a position aid, providing target direction information to the subject through a localized stimulus. Results show that the TLS decreases reaction time (p = 0.001) and movement time (p = 0.001) for simulated subject (astronaut) motion around the ISS. The TLS is a useful aid in increasing an astronaut's situational awareness, and warrants further testing to explore other uses, tasks and configurations.
Beyond sensory images: Object-based representation in the human ventral pathway
Pietrini, Pietro; Furey, Maura L.; Ricciardi, Emiliano; Gobbini, M. Ida; Wu, W.-H. Carolyn; Cohen, Leonardo; Guazzelli, Mario; Haxby, James V.
2004-01-01
We investigated whether the topographically organized, category-related patterns of neural response in the ventral visual pathway are a representation of sensory images or a more abstract representation of object form that is not dependent on sensory modality. We used functional MRI to measure patterns of response evoked during visual and tactile recognition of faces and manmade objects in sighted subjects and during tactile recognition in blind subjects. Results showed that visual and tactile recognition evoked category-related patterns of response in a ventral extrastriate visual area in the inferior temporal gyrus that were correlated across modality for manmade objects. Blind subjects also demonstrated category-related patterns of response in this “visual” area, and in more ventral cortical regions in the fusiform gyrus, indicating that these patterns are not due to visual imagery and, furthermore, that visual experience is not necessary for category-related representations to develop in these cortices. These results demonstrate that the representation of objects in the ventral visual pathway is not simply a representation of visual images but, rather, is a representation of more abstract features of object form. PMID:15064396
Jordan, Timothy R; Abedipour, Lily
2010-01-01
Hearing the sound of laughter is important for social communication, but processes contributing to the audibility of laughter remain to be determined. Production of laughter resembles production of speech in that both involve visible facial movements accompanying socially significant auditory signals. However, while it is known that speech is more audible when the facial movements producing the speech sound can be seen, similar visual enhancement of the audibility of laughter remains unknown. To address this issue, spontaneously occurring laughter was edited to produce stimuli comprising visual laughter, auditory laughter, visual and auditory laughter combined, and no laughter at all (either visual or auditory), all presented in four levels of background noise. Visual laughter and no-laughter stimuli produced very few reports of auditory laughter. However, visual laughter consistently made auditory laughter more audible, compared to the same auditory signal presented without visual laughter, resembling findings reported previously for speech.
Advanced Mathematics Communication beyond Modality of Sight
ERIC Educational Resources Information Center
Sedaghatjou, Mina
2018-01-01
This study illustrates how mathematical communication and learning are inherently multimodal and embodied; hence, sight-disabled students are also able to conceptualize visuospatial information and mathematical concepts through tactile and auditory activities. Adapting a perceptuomotor integration approach, the study shows that the lack of access…
The role of visual deprivation and experience on the performance of sensory substitution devices.
Stronks, H Christiaan; Nau, Amy C; Ibbotson, Michael R; Barnes, Nick
2015-10-22
It is commonly accepted that the blind can partially compensate for their loss of vision by developing enhanced abilities with their remaining senses. This visual compensation may be related to the fact that blind people rely on their other senses in everyday life. Many studies have indeed shown that experience plays an important role in visual compensation. Numerous neuroimaging studies have shown that the visual cortices of the blind are recruited by other functional brain areas and can become responsive to tactile or auditory input instead. These cross-modal plastic changes are more pronounced in the early blind compared to late blind individuals. The functional consequences of cross-modal plasticity on visual compensation in the blind are debated, as are the influences of various etiologies of vision loss (i.e., blindness acquired early or late in life). Distinguishing between the influences of experience and visual deprivation on compensation is especially relevant for rehabilitation of the blind with sensory substitution devices. The BrainPort artificial vision device and The vOICe are assistive devices for the blind that redirect visual information to another intact sensory system. Establishing how experience and different etiologies of vision loss affect the performance of these devices may help to improve existing rehabilitation strategies, formulate effective selection criteria and develop prognostic measures. In this review we will discuss studies that investigated the influence of training and visual deprivation on the performance of various sensory substitution approaches. Copyright © 2015 Elsevier B.V. All rights reserved.
Loots, Gerrit; Devisé, Isabel; Jacquet, Wolfgang
2005-01-01
This article presents a study that examined the impact of visual communication on the quality of the early interaction between deaf and hearing mothers and fathers and their deaf children aged between 18 and 24 months. Three communication mode groups of parent-deaf child dyads that differed by the use of signing and visual-tactile communication strategies were involved: (a) hearing parents communicating with their deaf child in an auditory/oral way, (b) hearing parents using total communication, and (c) deaf parents using sign language. Based on Loots and colleagues' intersubjective developmental theory, parent-deaf child interaction was analyzed according to the occurrence of intersubjectivity during free play with a standard set of toys. The data analyses indicated that the use of sign language in a sequential visual way of communication enabled the deaf parents to involve their 18- to 24-month-old deaf infants in symbolic intersubjectivity, whereas hearing parents who held to oral-only communication were excluded from involvement in symbolic intersubjectivity with their deaf infants. Hearing parents using total communication were more similar to deaf parents, but they still differed from deaf parents in exchanging and sharing symbolic and linguistic meaning with their deaf child.
Visual form predictions facilitate auditory processing at the N1.
Paris, Tim; Kim, Jeesun; Davis, Chris
2017-02-20
Auditory-visual (AV) events often involve a leading visual cue (e.g. auditory-visual speech) that allows the perceiver to generate predictions about the upcoming auditory event. Electrophysiological evidence suggests that when an auditory event is predicted, processing is sped up, i.e., the N1 component of the ERP occurs earlier (N1 facilitation). However, it is not clear (1) whether N1 facilitation is based specifically on predictive rather than multisensory integration and (2) which particular properties of the visual cue it is based on. The current experiment used artificial AV stimuli in which visual cues predicted but did not co-occur with auditory cues. Visual form cues (high and low salience) and the auditory-visual pairing were manipulated so that auditory predictions could be based on form and timing or on timing only. The results showed that N1 facilitation occurred only for combined form and temporal predictions. These results suggest that faster auditory processing (as indicated by N1 facilitation) is based on predictive processing generated by a visual cue that clearly predicts both what and when the auditory stimulus will occur. Copyright © 2016. Published by Elsevier Ltd.
Investigating the Role of Auditory and Tactile Modalities in Violin Quality Evaluation
Wollman, Indiana; Fritz, Claudia; Poitevineau, Jacques; McAdams, Stephen
2014-01-01
The role of auditory and tactile modalities involved in violin playing and evaluation was investigated in an experiment employing a blind violin evaluation task under different conditions: i) normal playing conditions, ii) playing with auditory masking, and iii) playing with vibrotactile masking. Under each condition, 20 violinists evaluated five violins according to criteria related to violin playing and sound characteristics and rated their overall quality and relative preference. Results show that both auditory and vibrotactile feedback are important in the violinists’ evaluations but that their relative importance depends on the violinist, the violin and the type of evaluation (different criteria ratings or preference). In this way, the overall quality ratings were found to be accurately predicted by the rating criteria, which also proved to be perceptually relevant to violinists, but were poorly correlated with the preference ratings; this suggests that the two types of ratings (overall quality vs preference) may stem from different decision-making strategies. Furthermore, the experimental design confirmed that violinists agree more on the importance of criteria in their overall evaluation than on their actual ratings for different violins. In particular, greater agreement was found on the importance of criteria related to the sound of the violin. Nevertheless, this study reveals that there are fundamental differences in the way players interpret and evaluate each criterion, which may explain why correlating physical properties with perceptual properties has been challenging so far in the field of musical acoustics. PMID:25474036
Tactile and Visual Identification of the XM106 Bursting Smoke Grenade: Limited User Evaluation
2010-12-01
Limited user evaluation of the XM106 bursting smoke grenade, assessing tactile and visual identification in situations representing the typical handwear and eyewear configurations of dismounted Warfighters. Thirty-six test Soldiers participated in the evaluation across all handwear and eyewear conditions.
Developing Learning Readiness; A Visual-Motor-Tactile Skills Program. Teacher's Manual.
ERIC Educational Resources Information Center
Getman, G.N.; And Others
A flexible program for preschool, primary grades, or remedial classes provides opportunities for the child to achieve readiness for learning through the development of visual, motor, and tactile skills. A cardboard doll is discussed which may be utilized by the teacher and children in a variety of gymnasium routines to increase knowledge of body…
Li, Wenjing; Li, Jianhong; Wang, Zhenchang; Li, Yong; Liu, Zhaohui; Yan, Fei; Xian, Junfang; He, Huiguang
2015-01-01
Previous studies have shown brain reorganization after early deprivation of auditory input. However, changes of grey matter connectivity have not yet been investigated in prelingually deaf adolescents. In the present study, we aimed to investigate changes of grey matter connectivity within and between auditory, language and visual systems in prelingually deaf adolescents. We recruited 16 prelingually deaf adolescents and 16 age- and gender-matched normal controls, and extracted grey matter volume as the structural characteristic from 14 regions of interest involved in auditory, language or visual processing. Sparse inverse covariance estimation (SICE) was utilized to construct grey matter connectivity between these brain regions. The results show that prelingually deaf adolescents present weaker grey matter connectivity within the auditory and visual systems, and that connectivity between the language and visual systems is reduced. Notably, significantly increased connectivity was found between the auditory and visual systems in prelingually deaf adolescents. Our results indicate "cross-modal" plasticity after deprivation of auditory input in prelingually deaf adolescents, especially between the auditory and visual systems. In addition, auditory deprivation and visual deficits might affect the connectivity pattern within the language and visual systems in prelingually deaf adolescents.
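Sparse inverse covariance estimation of the sort described here is available off the shelf, for example via the graphical lasso. A toy sketch with stand-in data (14 ROIs to match the abstract; the subjects, values, and regularization scheme are illustrative, not the study's pipeline):

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)
# Stand-in data: standardized grey matter volume for 14 ROIs in 32 subjects.
X = rng.standard_normal((32, 14))
X = (X - X.mean(axis=0)) / X.std(axis=0)

model = GraphicalLassoCV().fit(X)    # cross-validated sparsity penalty
precision = model.precision_         # sparse inverse covariance (connectivity)
# Nonzero off-diagonal entries mark estimated direct ROI-to-ROI connections.
edges = np.argwhere(np.triu(np.abs(precision) > 1e-6, k=1))
print(f"{len(edges)} direct connections retained out of {14 * 13 // 2}")
```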
Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu
2015-01-01
Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828
Auditory and visual spatial impression: Recent studies of three auditoria
NASA Astrophysics Data System (ADS)
Nguyen, Andy; Cabrera, Densil
2004-10-01
Auditory spatial impression is widely studied for its contribution to auditorium acoustical quality. By contrast, visual spatial impression in auditoria has received relatively little attention in formal studies. This paper reports results from a series of experiments investigating the auditory and visual spatial impression of concert auditoria. For auditory stimuli, a fragment of an anechoic recording of orchestral music was convolved with calibrated binaural impulse responses, which had been made with the dummy head microphone at a wide range of positions in three auditoria and the sound source on the stage. For visual stimuli, greyscale photographs were used, taken at the same positions in the three auditoria, with a visual target on the stage. Subjective experiments were conducted with auditory stimuli alone, visual stimuli alone, and visual and auditory stimuli combined. In these experiments, subjects rated apparent source width, listener envelopment, intimacy and source distance (auditory stimuli), and spaciousness, envelopment, stage dominance, intimacy and target distance (visual stimuli). Results show target distance to be of primary importance in auditory and visual spatial impression, thereby providing a basis for covariance between some attributes of auditory and visual spatial impression. Nevertheless, some attributes of spatial impression diverge between the senses.
Urazán-Torres, Gina Rocío; Puche-Cabrera, Mario José; Caballero-Forero, Mangelli; Rey-Anacona, César Armando
2013-12-01
Most of the studies that have examined cognitive and executive functions in conduct disorder (CD) have been conducted on institutionalized male adolescents. In this research the cognitive and executive functions of non-institutionalized Colombian school children with CD were compared with those of normal school children, all between 6 and 12 years old. We used a case-control design. The cases were participants who met the diagnostic criteria for CD (n=39) and controls who did not meet these criteria (n=39), according to reports of a professional of the participants' institution and a structured interview for childhood psychiatric syndromes. The two groups were selected from educational institutions, and there were no differences in age, school grade, or socioeconomic level. IQ was assessed, as was the presence of other mental disorders, serious physical illnesses, and serious neurological signs. The cognitive and executive functions were evaluated using a child neuropsychological test battery. We found that participants with CD had significantly lower scores in construction abilities, perceptual abilities (tactile, visual and auditory), verbal memory, visual memory, language (repetition, expression and understanding), meta-linguistic abilities, spatial abilities, visual and auditory attention, conceptual abilities, verbal and graphic fluency, and cognitive flexibility. The same differences were found among males, except in repetition, whereas girls showed fewer differences; thus cognitive and executive performance was poorer in males with CD than in females, especially in verbal and linguistic-related functions. Children with CD may show generalized cognitive and executive deficits, and these deficits seem to be more frequent in boys than in girls with CD. Copyright © 2013 Asociación Colombiana de Psiquiatría. Publicado por Elsevier España. All rights reserved.
NASA Astrophysics Data System (ADS)
Clark, Douglas; Jorde, Doris
2004-01-01
This study analyzes the impact of an integrated sensory model within a thermal equilibrium visualization. We hypothesized that this intervention would not only help students revise their disruptive experientially supported ideas about why objects feel hot or cold, but also increase their understanding of thermal equilibrium. The analysis synthesizes test data and interviews to measure the impact of this strategy. Results show that students in the experimental tactile group significantly outperform their control group counterparts on posttests and delayed posttests, not only on tactile explanations, but also on thermal equilibrium explanations. Interview transcripts of experimental and control group students corroborate these findings. Discussion addresses improving the tactile model as well as application of the strategy to other science topics. The discussion also considers possible incorporation of actual kinetic or thermal haptic feedback to reinforce the current audio and visual feedback of the visualization. This research builds on the conceptual change literature about the nature and role of students' experientially supported ideas as well as our understanding of curriculum and visualization design to support students in learning about thermodynamics, a science topic on which students perform poorly as shown by the National Assessment of Educational Progress (NAEP) and Third International Mathematics and Science Study (TIMSS) studies.
Tactile cueing effects on performance in simulated aerial combat with high acceleration.
van Erp, Jan B F; Eriksson, Lars; Levin, Britta; Carlander, Otto; Veltman, J A; Vos, Wouter K
2007-12-01
Recent evidence indicates that vibrotactile displays can potentially reduce the risk of sensory and cognitive overload. Before these displays can be introduced in super agile aircraft, it must be ascertained that vibratory stimuli can be sensed and interpreted by pilots subjected to high G loads. Each of 9 pilots intercepted 32 targets in the Swedish Dynamic Flight Simulator. Targets were indicated on simulated standard Gripen visual displays. In addition, in half of the trials target direction was also displayed on a 60-element tactile torso display. Performance measures and subjective ratings were recorded. Each pilot pulled G peaks above +8 Gz. With tactile cueing present, mean reaction time was reduced from 1458 ms (SE = 54) to 1245 ms (SE = 88). Mean total chase time for targets that popped up behind the pilot's aircraft was reduced from 13 s (SE = 0.45) to 12 s (SE = 0.41). Pilots rated the tactile display favorably over the visual displays at target pop-up on the easiness of detecting a threat presence and on the clarity of initial position of the threats. This study is the first to show that tactile display information is perceivable and useful in hypergravity (up to +9 Gz). The results show that the tactile display can capture attention at threat pop-up and improve threat awareness for threats in the back, even in the presence of high-end visual displays. It is expected that the added value of tactile displays may further increase after formal training and in situations of unexpected target pop-up.
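The reaction-time benefit reported here is a within-subject (paired) comparison across the nine pilots. A sketch of the corresponding test; the per-pilot values below are invented, and only their group means echo the 1458 vs. 1245 ms in the abstract:

```python
import numpy as np
from scipy.stats import ttest_rel

# Invented per-pilot mean reaction times (ms), visual-only vs. visual + tactile.
rt_visual = np.array([1500, 1420, 1610, 1380, 1450, 1490, 1350, 1530, 1392])
rt_tactile = np.array([1290, 1180, 1400, 1210, 1230, 1300, 1150, 1310, 1135])

t, p = ttest_rel(rt_visual, rt_tactile)   # paired t-test across pilots
print(f"mean benefit = {np.mean(rt_visual - rt_tactile):.0f} ms, "
      f"t({len(rt_visual) - 1}) = {t:.2f}, p = {p:.4f}")
```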
Chowdary, P Brahmanna; Uloopi, K S; Vinay, C; Rao, V Veerabhadra; Rayala, Chandrasekhar
2016-01-01
Visually impaired children face limitations in interacting with the environment, as they cannot see the facial expressions of parents and teachers and cannot perceive social behavior. These children are challenged every day in learning basic life skills, maintenance of oral hygiene being one among them. To evaluate the impact of verbal, braille text, and tactile oral hygiene awareness instructions on the oral health status of visually impaired children, one hundred and twenty institutionalized visually impaired children aged 6-16 years were selected and divided into three groups (40 children each). Group I: verbal and tactile; Group II: verbal and braille; Group III: verbal, braille, and tactile. Instructions regarding maintenance of good oral hygiene and brushing technique were explained to all the children, and their oral health status was evaluated at 1, 3, and 6 months using the plaque index (Silness and Loe) and the gingival index (Loe and Silness). ANOVA was used to analyze the intra- and inter-group comparisons, and the Tukey post-hoc test for multiple group comparisons. Children in all the groups showed reductions in plaque and gingival scores; the percentage reduction in plaque scores was highest in Group III (70.6%), and the decrease in gingival scores was highest in Group II (84%). The severity of dental plaque and gingivitis in visually impaired individuals can be reduced by a controlled and supervised educational program. The combination of all three modes of oral health education, i.e., verbal, braille, and tactile, proved to be effective.
Response format, magnitude of laterality effects, and sex differences in laterality.
Voyer, Daniel; Doyle, Randi A
2012-01-01
The present study examined the evidence for the claim that response format might affect the magnitude of laterality effects by means of a meta-analysis. The analysis included the 396 effect sizes drawn from 266 studies retrieved by Voyer (1996) and relevant to the main effect of laterality and sex differences in laterality for verbal and non-verbal tasks in the auditory, tactile, and visual sensory modality. The response format used in specific studies was the only moderator variable of interest in the present analysis, resulting in four broad response categories (oral, written, computer, and pointing). A meta-analysis analogue to ANOVA showed no significant influence of response format on either the main effect of laterality or sex differences in laterality when all sensory modalities were combined. However, when modalities were considered separately, response format affected the main effect of laterality in the visual modality, with a clear advantage for written responses. Further pointed analyses revealed some specific differences among response formats. Results are discussed in terms of their implications for the measurement of laterality.
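A meta-analytic analogue to ANOVA tests whether weighted mean effect sizes differ across moderator levels via a between-groups Q statistic, referred to a chi-square distribution. A compact fixed-effect sketch with invented effect sizes grouped by response format:

```python
import numpy as np
from scipy.stats import chi2

def wmean(d, w):
    return np.sum(w * d) / np.sum(w)

# Invented effect sizes d with inverse-variance weights w, by response format.
groups = {
    "oral":     (np.array([0.40, 0.55, 0.30]), np.array([20.0, 15.0, 25.0])),
    "written":  (np.array([0.70, 0.62, 0.80]), np.array([18.0, 22.0, 16.0])),
    "pointing": (np.array([0.35, 0.50]),       np.array([30.0, 12.0])),
}

d_all = np.concatenate([d for d, _ in groups.values()])
w_all = np.concatenate([w for _, w in groups.values()])
grand = wmean(d_all, w_all)

# Q_between: heterogeneity in effect size accounted for by the moderator.
q_b = sum(w.sum() * (wmean(d, w) - grand) ** 2 for d, w in groups.values())
df = len(groups) - 1
print(f"Q_between({df}) = {q_b:.2f}, p = {chi2.sf(q_b, df):.3f}")
```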
Decoding magnetoencephalographic rhythmic activity using spectrospatial information.
Kauppi, Jukka-Pekka; Parkkonen, Lauri; Hari, Riitta; Hyvärinen, Aapo
2013-12-01
We propose a new data-driven decoding method called Spectral Linear Discriminant Analysis (Spectral LDA) for the analysis of magnetoencephalography (MEG). The method allows investigation of changes in rhythmic neural activity as a result of different stimuli and tasks. The introduced classification model only assumes that each "brain state" can be characterized as a combination of neural sources, each of which shows rhythmic activity at one or several frequency bands. Furthermore, the model allows the oscillation frequencies to be different for each such state. We present decoding results from 9 subjects in a four-category classification problem defined by an experiment involving randomly alternating epochs of auditory, visual and tactile stimuli interspersed with rest periods. The performance of Spectral LDA was very competitive compared with four alternative classifiers based on different assumptions concerning the organization of rhythmic brain activity. In addition, the spectral and spatial patterns extracted automatically on the basis of trained classifiers showed that Spectral LDA offers a novel and interesting way of analyzing spectrospatial oscillatory neural activity across the brain. All the presented classification methods and visualization tools are freely available as a Matlab toolbox. © 2013.
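Spectral LDA itself is distributed as a Matlab toolbox, but the core idea of classifying brain states from band-limited oscillatory power can be sketched with generic tools: band-power features fed to a linear discriminant. Everything below (sampling rate, bands, random stand-in epochs) is illustrative, not the authors' method:

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
fs = 250                                       # sampling rate (Hz), illustrative
n_epochs, n_channels, n_times = 120, 8, 500
epochs = rng.standard_normal((n_epochs, n_channels, n_times))
y = rng.integers(0, 4, n_epochs)               # auditory/visual/tactile/rest

def band_power(x, lo, hi):
    # Log mean spectral power in [lo, hi) Hz per epoch and channel.
    f, pxx = welch(x, fs=fs, nperseg=256, axis=-1)
    return np.log(pxx[..., (f >= lo) & (f < hi)].mean(axis=-1))

bands = [(4, 8), (8, 13), (13, 30), (30, 45)]  # theta, alpha, beta, gamma
X = np.hstack([band_power(epochs, lo, hi) for lo, hi in bands])

acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
print(f"4-class decoding accuracy: {acc:.2f} (chance = 0.25)")
```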
Cacciamani, Laura; Likova, Lora T.
2017-01-01
The perirhinal cortex (PRC) is a medial temporal lobe structure that has been implicated in not only visual memory in the sighted, but also tactile memory in the blind (Cacciamani & Likova, 2016). It has been proposed that, in the blind, the PRC may contribute to modulation of tactile memory responses that emerge in low-level “visual” area V1 as a result of training-induced cortical reorganization (Likova, 2012; 2015). While some studies in the sighted have indicated that the PRC is indeed structurally and functionally connected to the visual cortex (Clavagnier et al., 2004; Peterson et al., 2012), the PRC’s direct modulation of V1 is unknown—particularly in those who lack the visual input that typically stimulates this region. In the present study, we tested Likova’s PRC modulation hypothesis; specifically, we used fMRI to assess the PRC’s Granger causal influence on V1 activation in the blind during a tactile memory task. To do so, we trained congenital and acquired blind participants on a unique memory-guided drawing technique previously shown to result in V1 reorganization towards tactile memory representations (Likova, 2012). The tasks (20s each) included: tactile exploration of raised line drawings of faces and objects, tactile memory retrieval via drawing, and a scribble motor/memory control. FMRI before and after a week of the Cognitive-Kinesthetic training on these tasks revealed a significant increase in PRC-to-V1 Granger causality from pre- to post-training during the memory drawing task, but not during the motor/memory control. This increase in causal connectivity indicates that the training strengthened the top-down modulation of visual cortex from the PRC. This is the first study to demonstrate enhanced directed functional connectivity from the PRC to the visual cortex in the blind, implicating the PRC as a potential source of the reorganization towards tactile representations that occurs in V1 in the blind brain (Likova, 2012). PMID:28347878
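Granger causality of the kind tested here asks whether past activity in one region improves prediction of another region beyond that region's own past. A toy sketch with simulated ROI time series, using statsmodels (the lag structure and coefficients are invented):

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(2)
n = 300
prc = rng.standard_normal(n)                   # stand-in PRC time series
v1 = np.zeros(n)
for t in range(2, n):                          # V1 driven by PRC two lags back
    v1[t] = 0.5 * v1[t - 1] + 0.4 * prc[t - 2] + 0.3 * rng.standard_normal()

# Column order matters: tests whether column 2 Granger-causes column 1.
res = grangercausalitytests(np.column_stack([v1, prc]), maxlag=3, verbose=False)
p = res[2][0]["ssr_ftest"][1]                  # F-test p-value at lag 2
print(f"PRC -> V1 Granger causality at lag 2: p = {p:.4g}")
```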
Should visual speech cues (speechreading) be considered when fitting hearing aids?
NASA Astrophysics Data System (ADS)
Grant, Ken
2002-05-01
When talker and listener are face-to-face, visual speech cues become an important part of the communication environment, and yet, these cues are seldom considered when designing hearing aids. Models of auditory-visual speech recognition highlight the importance of complementary versus redundant speech information for predicting auditory-visual recognition performance. Thus, for hearing aids to work optimally when visual speech cues are present, it is important to know whether the cues provided by amplification and the cues provided by speechreading complement each other. In this talk, data will be reviewed that show nonmonotonicity between auditory-alone speech recognition and auditory-visual speech recognition, suggesting that efforts designed solely to improve auditory-alone recognition may not always result in improved auditory-visual recognition. Data will also be presented showing that one of the most important speech cues for enhancing auditory-visual speech recognition performance, voicing, is often the cue that benefits least from amplification.
ERIC Educational Resources Information Center
Cascio, Carissa J.; Foss-Feig, Jennifer H.; Burnette, Courtney P.; Heacock, Jessica L.; Cosby, Akua A.
2012-01-01
In the rubber hand illusion, perceived hand ownership can be transferred to a rubber hand after synchronous visual and tactile stimulation. Perceived body ownership and self-other relation are foundational for development of self-awareness, imitation, and empathy, which are all affected in autism spectrum disorders (ASD). We examined the rubber…
ERIC Educational Resources Information Center
Williams, Michael D.; Ray, Christopher T.; Griffith, Jennifer; De l'Aune, William
2011-01-01
The promise of novel technological strategies and solutions to assist persons with visual impairments (that is, those who are blind or have low vision) is frequently discussed and held to be widely beneficial in countless applications and daily activities. One such approach involving a tactile-vision sensory substitution modality as a mechanism to…
ERIC Educational Resources Information Center
Grewe, Oliver; Katzur, Bjorn; Kopiez, Reinhard; Altenmuller, Eckart
2011-01-01
"Chills" (frisson manifested as goose bumps or shivers) have been used in an increasing number of studies as indicators of emotions in response to music (e.g., Craig, 2005; Guhn, Hamm, & Zentner, 2007; McCrae, 2007; Panksepp, 1995; Sloboda, 1991). In this study we present evidence that chills can be induced through aural, visual, tactile, and…
Massive cortical reorganization in sighted Braille readers
Siuda-Krzywicka, Katarzyna; Bola, Łukasz; Paplińska, Małgorzata; Sumera, Ewa; Jednoróg, Katarzyna; Marchewka, Artur; Śliwińska, Magdalena W; Amedi, Amir; Szwed, Marcin
2016-01-01
The brain is capable of large-scale reorganization in blindness or after massive injury. Such reorganization crosses the division into separate sensory cortices (visual, somatosensory...). As its result, the visual cortex of the blind becomes active during tactile Braille reading. Although the possibility of such reorganization in the normal, adult brain has been raised, definitive evidence has been lacking. Here, we demonstrate such extensive reorganization in normal, sighted adults who learned Braille while their brain activity was investigated with fMRI and transcranial magnetic stimulation (TMS). Subjects showed enhanced activity for tactile reading in the visual cortex, including the visual word form area (VWFA) that was modulated by their Braille reading speed and strengthened resting-state connectivity between visual and somatosensory cortices. Moreover, TMS disruption of VWFA activity decreased their tactile reading accuracy. Our results indicate that large-scale reorganization is a viable mechanism recruited when learning complex skills. DOI: http://dx.doi.org/10.7554/eLife.10762.001 PMID:26976813
Ciaramitaro, Vivian M; Chow, Hiu Mei; Eglington, Luke G
2017-03-01
We used a cross-modal dual task to examine how changing visual-task demands influenced auditory processing, namely auditory thresholds for amplitude- and frequency-modulated sounds. Observers had to attend to two consecutive intervals of sounds and report which interval contained the auditory stimulus that was modulated in amplitude (Experiment 1) or frequency (Experiment 2). During auditory-stimulus presentation, observers simultaneously attended to a rapid sequential visual presentation (two consecutive intervals of streams of visual letters) and had to report which interval contained a particular color (low load, demanding less attentional resources) or, in separate blocks of trials, which interval contained more of a target letter (high load, demanding more attentional resources). We hypothesized that if attention is a shared resource across vision and audition, an easier visual task should free up more attentional resources for auditory processing on an unrelated task, hence improving auditory thresholds. Auditory detection thresholds were lower, that is, auditory sensitivity was improved, for both amplitude- and frequency-modulated sounds when observers engaged in a less demanding (compared to a more demanding) visual task. In accord with previous work, our findings suggest that visual-task demands can influence the processing of auditory information on an unrelated concurrent task, providing support for shared attentional resources. More importantly, our results suggest that attending to information in a different modality, cross-modal attention, can influence basic auditory contrast sensitivity functions, highlighting potential similarities between basic mechanisms for visual and auditory attention.
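Detection thresholds of this kind are often tracked with an adaptive staircase. A sketch of a 2-down/1-up rule (converging near 70.7% correct) run on a simulated observer whose threshold rises under high visual load; all numbers and the psychometric function are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

def staircase(true_thr, n_trials=80, level=20.0, step=1.0):
    """2-down/1-up staircase; returns a threshold estimate from reversals."""
    run, last_dir, reversals = 0, 0, []
    for _ in range(n_trials):
        p = 1 / (1 + np.exp(-(level - true_thr)))   # toy psychometric function
        if rng.random() < p:                        # correct response
            run += 1
            if run < 2:
                continue                            # two correct needed to step down
            run, direction = 0, -1
        else:                                       # error: step up immediately
            run, direction = 0, +1
        if last_dir and direction != last_dir:
            reversals.append(level)                 # direction change = reversal
        last_dir = direction
        level += direction * step
    return float(np.mean(reversals[-6:]))           # average the last reversals

print(f"low visual load:  threshold ~ {staircase(true_thr=10.0):.1f}")
print(f"high visual load: threshold ~ {staircase(true_thr=12.0):.1f}")
```

A lower tracked level under low load corresponds to the improved auditory sensitivity the study reports.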
Graham, N.; Zeman, A.; Young, A.; Patterson, K.; Hodges, J.
1999-01-01
OBJECTIVES: To investigate the roles of visual and tactile information in a dyspraxic patient with corticobasal degeneration (CBD) who showed dramatic facilitation in miming the use of a tool or object when he was given a tool to manipulate; and to study the nature of the praxic and neuropsychological deficits in CBD. METHODS: The subject had clinically diagnosed CBD, and exhibited alien limb behaviour and striking ideomotor dyspraxia. General neuropsychological evaluation focused on constructional and visuospatial abilities, calculation, verbal fluency, episodic and semantic memory, plus spelling and writing because impairments in this domain were presenting complaints. Four experiments assessed the roles of visual and tactile information in the facilitation of motor performance by tools. Experiment 1 evaluated the patient's performance of six limb transitive actions under six conditions: (1) after he described the relevant tool from memory, (2) after he was shown a line drawing of the tool, (3) after he was shown a real exemplar of the tool, (4) after he watched the experimenter perform the action, (5) while he was holding the tool, and (6) immediately after he had performed the action with the tool but with the tool removed from his grasp. Experiment 2 evaluated the use of the same six tools when the patient had tactile but no visual information (while he was blindfolded). Experiments 3 and 4 assessed performance of actions appropriate to the same six tools when the patient had either neutral or inappropriate tactile feedback, that is, while he was holding a non-tool object or a different tool. RESULTS: Miming of tool use was not facilitated by visual input; moreover, lack of visual information in the blindfolded condition did not reduce performance. The principal positive finding was a dramatic facilitation of the patient's ability to demonstrate object use when he was holding either the appropriate tool or a neutral object. Tools inappropriate to the requested action produced involuntary performance of the stimulus relevant action. CONCLUSIONS: Tactile stimulation was paramount in the facilitation of motor performance in tool use by this patient with CBD. This outcome suggests that tactile information should be included in models which hypothesise modality specific inputs to the action production system. Significant impairments in spelling and letter production that have not previously been reported in CBD have also been documented. PMID:10449556
NASA Astrophysics Data System (ADS)
Gonzales, Ashleigh
Blind and visually impaired individuals have historically demonstrated a low participation in the fields of science, technology, engineering, and mathematics (STEM). This low participation is reflected in both their education and career choices. Despite the establishment of the Americans with Disabilities Act (ADA) and the Individuals with Disabilities Education Act (IDEA), blind and visually impaired (BVI) students continue to perform below the level of their sighted peers in the areas of science and math. Although this deficit is created by many factors, this study focuses on the lack of adequate accessible image-based materials. Traditional methods for creating accessible image materials for the vision impaired have included detailed verbal descriptions accompanying an image or conversion into a simplified tactile graphic. Often no substitute materials are provided to students within STEM courses because these are image-rich disciplines that include a large number of images, diagrams and charts. Additionally, images that are translated into text or simplified into basic line drawings are frequently inadequate because they rely on the interpretations of resource personnel who do not have expertise in STEM. Within this study, a method to create a new type of tactile 3D image was developed using High Density Polyethylene (HDPE) and Computer Numeric Control (CNC) milling. These tactile image boards preserve high levels of detail when compared to the original print image. To determine the discernibility and effectiveness of tactile images, these customizable boards were tested in various university classrooms as well as in participation studies which included BVI and sighted students. Results from these studies indicate that tactile images are discernible and were found to improve performance in lab exercises by as much as 60% for those with visual impairment. Incorporating tactile HDPE 3D images into a classroom setting was shown to increase the interest, participation and performance of BVI students, suggesting that this type of 3D tactile image should be incorporated into STEM classes to increase the participation of these students and improve the level of training they receive in science and math.
An Evaluation of Substrates for Tactile Maps and Diagrams: Scanning Speed and Users' Preferences
ERIC Educational Resources Information Center
Jehoel, Sandra; Ungar, Simon; McCallum, Don; Rowell, Jonathan
2005-01-01
This study evaluated the relative suitability of a range of base materials for producing tactile maps and diagrams via a new ink-jet process. The visually impaired and sighted participants tactilely scanned arrays of symbols that were printed on seven substrate materials, including paper, plastic, and aluminum. In general, the rougher substrates…
ERIC Educational Resources Information Center
Cardini, Flavia; Haggard, Patrick; Ladavas, Elisabetta
2013-01-01
We have investigated the relation between visuo-tactile interactions and the self-other distinction. In the Visual Enhancement of Touch (VET) effect, non-informative vision of one's own hand improves tactile spatial perception. Previous studies suggested that looking at "another" person's hand could also enhance tactile perception, but did not…
Barlow-Brown, Fiona; Barker, Christopher; Harris, Margaret
2018-06-17
Beginning readers are typically introduced to enlarged print, and the size of this print decreases as readers become more fluent. In comparison, beginning blind readers are expected to learn standard-sized Braille from the outset because past research suggests letter knowledge cannot be transferred across different sizes of Braille. The study aims to investigate whether learning Braille using an oversized pegboard leads to faster, transferable letter learning and whether performance is mediated by either tactile or visual learning. Sixty-eight children participated in the study. All were sighted pre-readers with no previous knowledge of Braille, drawn from two nursery schools, with an average age of 47.8 months. Children were taught specific Braille letters using either an enlarged pegboard or standard Braille. Two other groups of children were taught using visually presented Braille characters in either an enlarged or standard size, and a further control group mirrored the experience of blind children in receiving non-specific tactile training prior to being introduced to Braille. In all tactile conditions it was ensured that the children did not visually experience any Braille for the duration of the study. Results demonstrated that initially training children with large Braille tactually led to the best subsequent learning of standard Braille. Although both initial visual learning and large tactual learning were significantly faster than learning standard Braille, previous tactile experience with the large pegboard offered the most efficient route for transferring letter knowledge to standard tactile Braille. Braille letter knowledge can thus be transferred across size and modality, particularly effectively with large tactile Braille. This has significant implications for the education of blind children. © 2018 The British Psychological Society.
Multisensory and Modality-Specific Influences on Adaptation to Optical Prisms
Calzolari, Elena; Albini, Federica; Bolognini, Nadia; Vallar, Giuseppe
2017-01-01
Visuo-motor adaptation to optical prisms displacing the visual scene (prism adaptation, PA) is a method used for investigating visuo-motor plasticity in healthy individuals and, in clinical settings, for the rehabilitation of unilateral spatial neglect. In the standard paradigm, the adaptation phase involves repeated pointings to visual targets, while wearing optical prisms displacing the visual scene laterally. Here we explored differences in PA, and its aftereffects (AEs), as related to the sensory modality of the target. Visual, auditory, and multisensory – audio-visual – targets in the adaptation phase were used, while participants wore prisms displacing the visual field rightward by 10°. Proprioceptive, visual, visual-proprioceptive, auditory-proprioceptive straight-ahead shifts were measured. Pointing to auditory and to audio-visual targets in the adaptation phase produces proprioceptive, visual-proprioceptive, and auditory-proprioceptive AEs, as the typical visual targets did. This finding reveals that cross-modal plasticity effects involve both the auditory and the visual modality, and their interactions (Experiment 1). Even a shortened PA phase, requiring only 24 pointings to visual and audio-visual targets (Experiment 2), is sufficient to bring about AEs, as compared to the standard 92-pointings procedure. Finally, pointings to auditory targets cause AEs, although PA with a reduced number of pointings (24) to auditory targets brings about smaller AEs, as compared to the 92-pointings procedure (Experiment 3). Together, results from the three experiments extend to the auditory modality the sensorimotor plasticity underlying the typical AEs produced by PA to visual targets. Importantly, PA to auditory targets appears characterized by less accurate pointings and error correction, suggesting that the auditory component of the PA process may be less central to the building up of the AEs, than the sensorimotor pointing activity per se. These findings highlight both the effectiveness of a reduced number of pointings for bringing about AEs, and the possibility of inducing PA with auditory targets, which may be used as a compensatory route in patients with visual deficits. PMID:29213233
Differential Cognitive and Perceptual Correlates of Print Reading versus Braille Reading
ERIC Educational Resources Information Center
Veispak, Anneli; Boets, Bart; Ghesquiere, Pol
2013-01-01
The relations between reading, auditory, speech, phonological and tactile spatial processing are investigated in a Dutch speaking sample of blind braille readers as compared to sighted print readers. Performance is assessed in blind and sighted children and adults. Regarding phonological ability, braille readers perform equally well compared to…
Tactile Aid Usage with Young Hearing-Impaired Children.
ERIC Educational Resources Information Center
Proctor, Adele
Five hearing impaired children (2 to 4 years old) were followed longitudinally while using a single channel, vibrotactile aid as a supplement to hearing aids. Standardized language tests (including the Scales of Early Communication Skills for Hearing Impaired Children, the Test for Auditory Comprehension of Language, and the Test for Auditory…
Bosen, Adam K.; Fleming, Justin T.; Brown, Sarah E.; Allen, Paul D.; O'Neill, William E.; Paige, Gary D.
2016-01-01
Vision typically has better spatial accuracy and precision than audition, and as a result often captures auditory spatial perception when visual and auditory cues are presented together. One determinant of visual capture is the amount of spatial disparity between auditory and visual cues: when disparity is small visual capture is likely to occur, and when disparity is large visual capture is unlikely. Previous experiments have used two methods to probe how visual capture varies with spatial disparity. First, congruence judgment assesses perceived unity between cues by having subjects report whether or not auditory and visual targets came from the same location. Second, auditory localization assesses the graded influence of vision on auditory spatial perception by having subjects point to the remembered location of an auditory target presented with a visual target. Previous research has shown that when both tasks are performed concurrently they produce similar measures of visual capture, but this may not hold when tasks are performed independently. Here, subjects alternated between tasks independently across three sessions. A Bayesian inference model of visual capture was used to estimate perceptual parameters for each session, which were compared across tasks. Results demonstrated that the range of audio-visual disparities over which visual capture was likely to occur were narrower in auditory localization than in congruence judgment, which the model indicates was caused by subjects adjusting their prior expectation that targets originated from the same location in a task-dependent manner. PMID:27815630
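The Bayesian inference model referred to here is typically a causal-inference model: the observer weighs a common-cause hypothesis against independent causes, and visual capture scales with the posterior probability of a common cause. A toy sketch with illustrative (not fitted) noise and prior parameters:

```python
import numpy as np

def p_common(x_a, x_v, s_a=6.0, s_v=2.0, s_p=15.0, prior=0.5):
    """Posterior P(common cause) given sensed auditory/visual locations (deg)."""
    va, vv, vp = s_a**2, s_v**2, s_p**2
    # Likelihood of both measurements under one shared source...
    z1 = va * vv + va * vp + vv * vp
    l1 = np.exp(-0.5 * ((x_v - x_a)**2 * vp + x_v**2 * va + x_a**2 * vv) / z1) \
        / (2 * np.pi * np.sqrt(z1))
    # ...and under two independent sources.
    l2 = np.exp(-0.5 * (x_v**2 / (vv + vp) + x_a**2 / (va + vp))) \
        / (2 * np.pi * np.sqrt((vv + vp) * (va + vp)))
    return prior * l1 / (prior * l1 + (1 - prior) * l2)

for disp in (0, 5, 10, 20, 40):   # visual target at 0 deg, audio offset by disp
    print(f"disparity {disp:>2} deg -> P(common) = {p_common(disp, 0.0):.2f}")
```

Small disparities yield a high P(common) and hence strong capture; large disparities push the posterior toward segregation, reproducing the qualitative pattern measured in both tasks.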
Auditory presentation and synchronization in Adobe Flash and HTML5/JavaScript Web experiments.
Reimers, Stian; Stewart, Neil
2016-09-01
Substantial recent research has examined the accuracy of presentation durations and response time measurements for visually presented stimuli in Web-based experiments, with a general conclusion that accuracy is acceptable for most kinds of experiments. However, many areas of behavioral research use auditory stimuli instead of, or in addition to, visual stimuli. Much less is known about auditory accuracy using standard Web-based testing procedures. We used a millisecond-accurate Black Box Toolkit to measure the actual durations of auditory stimuli and the synchronization of auditory and visual presentation onsets. We examined the distribution of timings for 100 presentations of auditory and visual stimuli across two computers with different specs, three commonly used browsers, and code written in either Adobe Flash or JavaScript. We also examined different coding options for attempting to synchronize the auditory and visual onsets. Overall, we found that auditory durations were very consistent, but that the lags between visual and auditory onsets varied substantially across browsers and computer systems.
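Once per-presentation onsets are logged by external hardware, the quantity of interest is simply the distribution of audio-visual lags per configuration. A small sketch of that summary with invented measurements:

```python
import numpy as np

rng = np.random.default_rng(4)
# Invented hardware-measured lags (audio onset minus visual onset, ms)
# for 100 presentations in two hypothetical browser/computer setups.
lags = {"setup A": rng.normal(12, 4, 100), "setup B": rng.normal(35, 18, 100)}

for name, x in lags.items():
    lo, med, hi = np.percentile(x, [5, 50, 95])
    print(f"{name}: median lag {med:5.1f} ms, 5th-95th pct [{lo:5.1f}, {hi:5.1f}]")
```

A consistent stimulus duration paired with a wide lag distribution (as in the hypothetical setup B) is exactly the cross-system variability the study reports.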
Validity of false belief tasks in blind children.
Brambring, Michael; Asbrock, Doreen
2010-12-01
Previous studies have reported that congenitally blind children without any additional impairment reveal a developmental delay of at least 4 years in perspective taking based on testing first-order false-belief tasks. These authors interpret this delay as a sign of autism-like behavior. However, the delay may be caused by testing blind children with false-belief tasks that require visual experience. Therefore, the present study gave alternative false-belief tasks based on tactile or auditory experience to 45 congenitally blind 4-10-year-olds and 37 sighted 3-6-year-olds. Results showed criterion performance at 80 months (6;8 years) in blind children compared with 61 months (5;1 years) in sighted controls. It is concluded that this 19-month (1;7 years) difference, which is comparable with delays in other developmental areas, is a developmental delay caused by the fact of congenital blindness rather than a sign of a psychopathological disorder of autism-like behavior.
Multisensory Technology for Flavor Augmentation: A Mini Review
Velasco, Carlos; Obrist, Marianna; Petit, Olivia; Spence, Charles
2018-01-01
There is growing interest in the development of new technologies that capitalize on our emerging understanding of the multisensory influences on flavor perception in order to enhance human–food interaction design. This review focuses on the role of (extrinsic) visual, auditory, and haptic/tactile elements in modulating flavor perception and more generally, our food and drink experiences. We review some of the most exciting examples of recent multisensory technologies for augmenting such experiences. Here, we discuss applications for these technologies, for example, in the field of food experience design, in the support of healthy eating, and in the rapidly growing world of sensory marketing. However, as the review makes clear, while there are many opportunities for novel human–food interaction design, there are also a number of challenges that will need to be tackled before new technologies can be meaningfully integrated into our everyday food and drink experiences. PMID:29441030
Validity of Sensory Systems as Distinct Constructs
Su, Chia-Ting
2014-01-01
This study investigated the validity of sensory systems as distinct measurable constructs as part of a larger project examining Ayres’s theory of sensory integration. Confirmatory factor analysis (CFA) was conducted to test whether sensory questionnaire items represent distinct sensory system constructs. Data were obtained from clinical records of two age groups, 2- to 5-yr-olds (n = 231) and 6- to 10-yr-olds (n = 223). With each group, we tested several CFA models for goodness of fit with the data. The accepted model was identical for each group and indicated that tactile, vestibular–proprioceptive, visual, and auditory systems form distinct, valid factors that are not age dependent. In contrast, alternative models that grouped items according to sensory processing problems (e.g., over- or underresponsiveness within or across sensory systems) did not yield valid factors. Results indicate that distinct sensory system constructs can be measured validly using questionnaire data. PMID:25184467
Rapid temporal recalibration is unique to audiovisual stimuli.
Van der Burg, Erik; Orchard-Mills, Emily; Alais, David
2015-01-01
Following prolonged exposure to asynchronous multisensory signals, the brain adapts to reduce the perceived asynchrony. Here, in three separate experiments, participants performed a synchrony judgment task on audiovisual, audiotactile or visuotactile stimuli and we used inter-trial analyses to examine whether temporal recalibration occurs rapidly on the basis of a single asynchronous trial. Even though all combinations used the same subjects, task and design, temporal recalibration occurred for audiovisual stimuli (i.e., the point of subjective simultaneity depended on the preceding trial's modality order), but none occurred when the same auditory or visual event was combined with a tactile event. Contrary to findings from prolonged adaptation studies showing recalibration for all three combinations, we show that rapid, inter-trial recalibration is unique to audiovisual stimuli. We conclude that recalibration occurs at two different timescales for audiovisual stimuli (fast and slow), but only on a slow timescale for audiotactile and visuotactile stimuli.
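Inter-trial recalibration of this kind is usually quantified by fitting the synchrony-judgment data separately by the preceding trial's modality order and comparing the fitted points of subjective simultaneity (PSS). A sketch with invented response proportions:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(soa, pss, width, peak):
    # Proportion of "simultaneous" responses peaks at the PSS.
    return peak * np.exp(-0.5 * ((soa - pss) / width) ** 2)

soas = np.array([-300, -200, -100, 0, 100, 200, 300])  # ms, negative = audio first
p_after_alead = np.array([0.10, 0.30, 0.70, 0.90, 0.65, 0.30, 0.10])  # invented
p_after_vlead = np.array([0.05, 0.20, 0.55, 0.85, 0.80, 0.45, 0.15])  # invented

for label, p in (("after audio-lead trial", p_after_alead),
                 ("after vision-lead trial", p_after_vlead)):
    (pss, *_), _ = curve_fit(gauss, soas, p, p0=[0.0, 150.0, 0.9])
    print(f"{label}: PSS = {pss:+.0f} ms")
```

A PSS that shifts toward the previous trial's modality order is the signature of rapid recalibration; the study finds this shift for audiovisual pairs only.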
Maturation of Visual and Auditory Temporal Processing in School-Aged Children
ERIC Educational Resources Information Center
Dawes, Piers; Bishop, Dorothy V. M.
2008-01-01
Purpose: To examine development of sensitivity to auditory and visual temporal processes in children and the association with standardized measures of auditory processing and communication. Methods: Normative data on tests of visual and auditory processing were collected on 18 adults and 98 children aged 6-10 years of age. Auditory processes…
Auditory-Visual Speech Integration by Adults with and without Language-Learning Disabilities
ERIC Educational Resources Information Center
Norrix, Linda W.; Plante, Elena; Vance, Rebecca
2006-01-01
Auditory and auditory-visual (AV) speech perception skills were examined in adults with and without language-learning disabilities (LLD). The AV stimuli consisted of congruent consonant-vowel syllables (auditory and visual syllables matched in terms of syllable being produced) and incongruent McGurk syllables (auditory syllable differed from…
Visual face-movement sensitive cortex is relevant for auditory-only speech recognition.
Riedel, Philipp; Ragert, Patrick; Schelinski, Stefanie; Kiebel, Stefan J; von Kriegstein, Katharina
2015-07-01
It is commonly assumed that the recruitment of visual areas during audition is not relevant for performing auditory tasks ('auditory-only view'). According to an alternative view, however, the recruitment of visual cortices is thought to optimize auditory-only task performance ('auditory-visual view'). This alternative view is based on functional magnetic resonance imaging (fMRI) studies. These studies have shown, for example, that even if there is only auditory input available, face-movement sensitive areas within the posterior superior temporal sulcus (pSTS) are involved in understanding what is said (auditory-only speech recognition). This is particularly the case when speakers are known audio-visually, that is, after brief voice-face learning. Here we tested whether the left pSTS involvement is causally related to performance in auditory-only speech recognition when speakers are known by face. To test this hypothesis, we applied cathodal transcranial direct current stimulation (tDCS) to the pSTS during (i) visual-only speech recognition of a speaker known only visually to participants and (ii) auditory-only speech recognition of speakers they learned by voice and face. We defined the cathode as the active electrode to down-regulate cortical excitability by hyperpolarization of neurons. tDCS to the pSTS interfered with visual-only speech recognition performance compared to a control group without pSTS stimulation (tDCS to BA6/44 or sham). Critically, compared to controls, pSTS stimulation additionally decreased auditory-only speech recognition performance selectively for voice-face learned speakers. These results are important in two ways. First, they provide direct evidence that the pSTS is causally involved in visual-only speech recognition; this confirms a long-standing prediction of current face-processing models. Second, they show that the visual face-sensitive pSTS is causally involved in optimizing auditory-only speech recognition. These results are in line with the 'auditory-visual view' of auditory speech perception, which assumes that auditory speech recognition is optimized by using predictions from previously encoded speaker-specific audio-visual internal models. Copyright © 2015 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Cangemi, Sam
This guide describes and illustrates 50 perceptual games for preschool children which may be constructed by teachers. Inexpensive, easily obtained game materials are suggested. The use of tactile and visual perceptual games gives children opportunities to make choices and discriminations, and provides reading readiness experiences. Games depicted…
Potts, Geoffrey F; Wood, Susan M; Kothmann, Delia; Martin, Laura E
2008-10-21
Attention directs limited-capacity information processing resources to a subset of available perceptual representations. The mechanisms by which attention selects task-relevant representations for preferential processing are not fully known. Treisman and Gelade's [Treisman, A., Gelade, G., 1980. A feature-integration theory of attention. Cognit. Psychol. 12, 97-136.] influential attention model posits that simple features are processed preattentively, in parallel, but that attention is required to serially conjoin multiple features into an object representation. Event-related potentials have provided evidence for this model, showing parallel processing of perceptual features in the posterior Selection Negativity (SN) and serial, hierarchic processing of feature conjunctions in the Frontal Selection Positivity (FSP). Most prior studies have been done on conjunctions within one sensory modality, while many real-world objects have multimodal features. It is not known if the same neural systems of posterior parallel processing of simple features and frontal serial processing of feature conjunctions seen within a sensory modality also operate on conjunctions between modalities. The current study used ERPs and simultaneously presented auditory and visual stimuli in three task conditions: Attend Auditory (auditory feature determines the target, visual features are irrelevant), Attend Visual (visual features relevant, auditory irrelevant), and Attend Conjunction (target defined by the co-occurrence of an auditory and a visual feature). In the Attend Conjunction condition, when the auditory but not the visual feature was a target there was an SN over auditory cortex; when the visual but not auditory stimulus was a target there was an SN over visual cortex; and when both auditory and visual stimuli were targets (i.e. conjunction target) there were SNs over both auditory and visual cortex, indicating parallel processing of the simple features within each modality. In contrast, an FSP was present when either the visual only or both auditory and visual features were targets, but not when only the auditory stimulus was a target, indicating that the conjunction target determination was evaluated serially and hierarchically, with visual information taking precedence. This indicates that the detection of a target defined by audio-visual conjunction is achieved via the same mechanism as within a single perceptual modality: through separate, parallel processing of the auditory and visual features and serial processing of the feature conjunction elements, rather than by evaluation of a fused multimodal percept.
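A minimal sketch of the difference-wave logic behind measures such as the SN and FSP: average the epochs for attended and unattended stimuli and subtract, then quantify the mean amplitude in a post-stimulus window. The epochs here are simulated single-electrode data, with the window and sampling rate chosen arbitrarily.

```python
# Sketch: attended-minus-unattended ERP difference wave at one electrode.
import numpy as np

rng = np.random.default_rng(0)
attended   = rng.normal(0, 1, size=(120, 500))   # trials x timepoints
attended[:, 75:150] -= 0.8                       # simulate a selection negativity
unattended = rng.normal(0, 1, size=(120, 500))

sn_wave = attended.mean(axis=0) - unattended.mean(axis=0)
# Quantify as mean amplitude in a post-stimulus window, e.g. 150-300 ms
# at an assumed 500 Hz sampling rate (samples 75-150):
sn_amplitude = sn_wave[75:150].mean()
print(f"Mean SN amplitude: {sn_amplitude:.3f} (arbitrary units, simulated)")
```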
Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru
2016-01-01
The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension. PMID:28129060
1981-07-10
Pohlmann, L. D. Some models of observer behavior in two-channel auditory signal detection. Perception and Psychophysics, 1973, 14, 101-109. Spelke...spatial), and processing modalities (auditory versus visual input, vocal versus manual response). If validated, this configuration has both theoretical...conclusion that auditory and visual processes will compete, as will spatial and verbal (albeit to a lesser extent than auditory-auditory, visual-visual)
Age-equivalent top-down modulation during cross-modal selective attention.
Guerreiro, Maria J S; Anguera, Joaquin A; Mishra, Jyoti; Van Gerven, Pascal W M; Gazzaley, Adam
2014-12-01
Selective attention involves top-down modulation of sensory cortical areas, such that responses to relevant information are enhanced whereas responses to irrelevant information are suppressed. Suppression of irrelevant information, unlike enhancement of relevant information, has been shown to be deficient in aging. Although these attentional mechanisms have been well characterized within the visual modality, little is known about these mechanisms when attention is selectively allocated across sensory modalities. The present EEG study addressed this issue by testing younger and older participants in three different tasks: Participants attended to the visual modality and ignored the auditory modality, attended to the auditory modality and ignored the visual modality, or passively perceived information presented through either modality. We found overall modulation of visual and auditory processing during cross-modal selective attention in both age groups. Top-down modulation of visual processing was observed as a trend toward enhancement of visual information in the setting of auditory distraction, but no significant suppression of visual distraction when auditory information was relevant. Top-down modulation of auditory processing, on the other hand, was observed as suppression of auditory distraction when visual stimuli were relevant, but no significant enhancement of auditory information in the setting of visual distraction. In addition, greater visual enhancement was associated with better recognition of relevant visual information, and greater auditory distractor suppression was associated with a better ability to ignore auditory distraction. There were no age differences in these effects, suggesting that when relevant and irrelevant information are presented through different sensory modalities, selective attention remains intact in older age.
Mehler, Bruce; Kidd, David; Reimer, Bryan; Reagan, Ian; Dobres, Jonathan; McCartt, Anne
2016-01-01
One purpose of integrating voice interfaces into embedded vehicle systems is to reduce drivers' visual and manual distractions with 'infotainment' technologies. However, there is scant research on actual benefits in production vehicles or how different interface designs affect attentional demands. Driving performance, visual engagement, and indices of workload (heart rate, skin conductance, subjective ratings) were assessed in 80 drivers randomly assigned to drive a 2013 Chevrolet Equinox or Volvo XC60. The Chevrolet MyLink system allowed completing tasks with one voice command, while the Volvo Sensus required multiple commands to navigate the menu structure. When calling a phone contact, both voice systems reduced visual demand relative to the visual–manual interfaces, with reductions for drivers in the Equinox being greater. The Equinox 'one-shot' voice command showed advantages during contact calling but had significantly higher error rates than Sensus during destination address entry. For both secondary tasks, neither voice interface entirely eliminated visual demand. Practitioner Summary: The findings reinforce the observation that most, if not all, automotive auditory–vocal interfaces are multi-modal interfaces in which the full range of potential demands (auditory, vocal, visual, manipulative, cognitive, tactile, etc.) need to be considered in developing optimal implementations and evaluating drivers' interaction with the systems. Social Media: In-vehicle voice interfaces can reduce visual demand but do not eliminate it, and all types of demand need to be taken into account in a comprehensive evaluation. PMID:26269281
Magosso, Elisa; Bertini, Caterina; Cuppini, Cristiano; Ursino, Mauro
2016-10-01
Hemianopic patients retain some abilities to integrate audiovisual stimuli in the blind hemifield, showing both modulation of visual perception by auditory stimuli and modulation of auditory perception by visual stimuli. Indeed, conscious detection of a visual target in the blind hemifield can be improved by a spatially coincident auditory stimulus (auditory enhancement of visual detection), while a visual stimulus in the blind hemifield can improve localization of a spatially coincident auditory stimulus (visual enhancement of auditory localization). To gain more insight into the neural mechanisms underlying these two perceptual phenomena, we propose a neural network model including areas of neurons representing the retina, primary visual cortex (V1), extrastriate visual cortex, auditory cortex and the Superior Colliculus (SC). The visual and auditory modalities in the network interact via both direct cortical-cortical connections and subcortical-cortical connections involving the SC; the latter, in particular, integrates visual and auditory information and projects back to the cortices. Hemianopic patients were simulated by unilaterally lesioning V1, and preserving spared islands of V1 tissue within the lesion, to analyze the role of residual V1 neurons in mediating audiovisual integration. The network is able to reproduce the audiovisual phenomena in hemianopic patients, linking perceptions to neural activations, and disentangles the individual contribution of specific neural circuits and areas via sensitivity analyses. The study suggests i) a common key role of SC-cortical connections in mediating the two audiovisual phenomena; ii) a different role of the visual cortices in the two phenomena: auditory enhancement of conscious visual detection is conditional on surviving V1 islands, whereas visual enhancement of auditory localization persists even after complete V1 damage. The present study may contribute to advance understanding of the audiovisual dialogue between cortical and subcortical structures in healthy and unisensory deficit conditions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Why Do Pictures, but Not Visual Words, Reduce Older Adults’ False Memories?
Smith, Rebekah E.; Hunt, R. Reed; Dunlap, Kathryn R.
2015-01-01
Prior work shows that false memories resulting from the study of associatively related lists are reduced for both young and older adults when the auditory presentation of study list words is accompanied by related pictures relative to when auditory word presentation is combined with visual presentation of the word. In contrast, young adults, but not older adults, show a reduction in false memories when presented with the visual word along with the auditory word relative to hearing the word only. In both cases (pictures relative to visual words, and visual words relative to auditory words alone), the benefit of pictures and visual words in reducing false memories has been explained in terms of monitoring for perceptual information. In our first experiment we provide the first simultaneous comparison of all three study presentation modalities (auditory only, auditory plus visual word, and auditory plus picture). Young and older adults show a reduction in false memories in the auditory plus picture condition, but only young adults show a reduction in the visual word condition relative to the auditory only condition. A second experiment investigates whether older adults fail to show a reduction in false memory in the visual word condition because they do not encode perceptual information in the visual word condition. In addition, the second experiment provides evidence that the failure of older adults to show the benefits of visual word presentation is related to reduced cognitive resources. PMID:26213799
Visual and auditory perception in preschool children at risk for dyslexia.
Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina
2014-11-01
Recently, there has been renewed interest in perceptive problems of dyslexics. A polemic research issue in this area has been the nature of the perception deficit. Another issue is the causal role of this deficit in dyslexia. Most studies have been carried out in adult and child literates; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared children from preschool with and without risk for dyslexia in auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that the visual as well as the auditory perception of children at risk for dyslexia is impaired. The comparison between groups in auditory and visual perception shows that the achievement of children at risk was lower than that of children without risk for dyslexia in the temporal tasks. There were no differences between groups in auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptive processing affected both linguistic and nonlinguistic stimuli. Our conclusions are that children at risk for dyslexia show auditory and visual perceptive deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems, and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptive deficits are not the consequence of failing to learn to read; thus, these findings support the theory of a temporal processing deficit. Copyright © 2014 Elsevier Ltd. All rights reserved.
When Content Matters: The Role of Processing Code in Tactile Display Design.
Ferris, Thomas K; Sarter, Nadine
2010-01-01
The distribution of tasks and stimuli across multiple modalities has been proposed as a means to support multitasking in data-rich environments. Recently, the tactile channel and, more specifically, communication via the use of tactile/haptic icons have received considerable interest. Past research has examined primarily the impact of concurrent task modality on the effectiveness of tactile information presentation. However, it is not well known to what extent the interpretation of iconic tactile patterns is affected by another attribute of information: the information processing codes of concurrent tasks. In two driving simulation studies (n = 25 for each), participants decoded icons composed of either spatial or nonspatial patterns of vibrations (engaging spatial and nonspatial processing code resources, respectively) while concurrently interpreting spatial or nonspatial visual task stimuli. As predicted by Multiple Resource Theory, performance was significantly worse (approximately 5-10 percent worse) when the tactile icons and visual tasks engaged the same processing code, with the overall worst performance in the spatial-spatial task pairing. The findings from these studies contribute to an improved understanding of information processing and can serve as input to multidimensional quantitative models of timesharing performance. From an applied perspective, the results suggest that competition for processing code resources warrants consideration, alongside other factors such as the naturalness of signal-message mapping, when designing iconic tactile displays. Nonspatially encoded tactile icons may be preferable in environments which already rely heavily on spatial processing, such as car cockpits.
Shen, Guannan; Saby, Joni N; Drew, Ashley R; Marshall, Peter J
2017-03-15
This study explored interpersonal influences on electrophysiological responses during the anticipation of tactile stimulation. It is well known that broad, negative-going potentials are present in the event-related potential (ERP) between a forewarning cue and a tactile stimulus. It has also been shown that the alpha-range mu rhythm shows a lateralized desynchronization over central electrode sites during anticipation of tactile stimulation of the hand. The current study used a tactile discrimination task in which a visual cue signaled that an upcoming stimulus would either be delivered 1500 ms later to the participant's hand, to a task partner's hand, or to neither person. For the condition in which participants anticipated the tactile stimulation to their own hand, a negative potential (contingent negative variation, CNV) was observed in the ERP at central sites in the 1000 ms prior to the tactile stimulus. Significant mu rhythm desynchronization was also present in the same time window. The magnitudes of the ERPs and of the mu desynchronization were greater in the contralateral than in the ipsilateral hemisphere prior to right hand stimulation. Similar ERP and EEG changes were not present when the visual cue indicated that stimulation would be delivered to the task partner or to neither person. The absence of social influences during anticipation of tactile stimulation, and the relationship between the two brain signatures of anticipatory attention (CNV and mu rhythm), are discussed. Copyright © 2017 Elsevier B.V. All rights reserved.
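As a rough sketch of how anticipatory mu desynchronization is typically quantified, the code below band-passes a signal at 8-13 Hz, extracts instantaneous power via the Hilbert transform, and expresses power in the pre-stimulus window as a percent change from baseline; the simulated signal, window boundaries, and sampling rate are illustrative, not the study's pipeline.

```python
# Sketch: event-related desynchronization (ERD) of the mu rhythm.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250                                    # assumed sampling rate (Hz)
t = np.arange(0, 3.0, 1 / fs)               # one 3-s epoch
# Simulated central-electrode signal: a 10 Hz mu rhythm that fades
# as the tactile stimulus approaches.
eeg = np.sin(2 * np.pi * 10 * t) * np.linspace(1.0, 0.4, t.size)

b, a = butter(4, [8, 13], btype="band", fs=fs)
alpha_power = np.abs(hilbert(filtfilt(b, a, eeg))) ** 2

baseline = alpha_power[(t >= 0.0) & (t < 0.5)].mean()   # pre-cue window
anticip  = alpha_power[(t >= 2.0) & (t < 3.0)].mean()   # pre-stimulus window
erd = 100 * (anticip - baseline) / baseline             # negative = desync
print(f"Mu ERD: {erd:.1f} %")
```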
Grahn, Jessica A.; Henry, Molly J.; McAuley, J. Devin
2011-01-01
How we measure time and integrate temporal cues from different sensory modalities are fundamental questions in neuroscience. Sensitivity to a “beat” (such as that routinely perceived in music) differs substantially between auditory and visual modalities. Here we examined beat sensitivity in each modality, and examined cross-modal influences, using functional magnetic resonance imaging (fMRI) to characterize brain activity during perception of auditory and visual rhythms. In separate fMRI sessions, participants listened to auditory sequences or watched visual sequences. The order of auditory and visual sequence presentation was counterbalanced so that cross-modal order effects could be investigated. Participants judged whether sequences were speeding up or slowing down, and the pattern of tempo judgments was used to derive a measure of sensitivity to an implied beat. As expected, participants were less sensitive to an implied beat in visual sequences than in auditory sequences. However, visual sequences produced a stronger sense of beat when preceded by auditory sequences with identical temporal structure. Moreover, increases in brain activity were observed in the bilateral putamen for visual sequences preceded by auditory sequences when compared to visual sequences without prior auditory exposure. No such order-dependent differences (behavioral or neural) were found for the auditory sequences. The results provide further evidence for the role of the basal ganglia in internal generation of the beat and suggest that an internal auditory rhythm representation may be activated during visual rhythm perception. PMID:20858544
Cornell Kärnekull, Stina; Arshamian, Artin; Nilsson, Mats E.; Larsson, Maria
2016-01-01
Although evidence is mixed, studies have shown that blind individuals perform better than sighted at specific auditory, tactile, and chemosensory tasks. However, few studies have assessed blind and sighted individuals across different sensory modalities in the same study. We tested early blind (n = 15), late blind (n = 15), and sighted (n = 30) participants with analogous olfactory and auditory tests in absolute threshold, discrimination, identification, episodic recognition, and metacognitive ability. Although the multivariate analysis of variance (MANOVA) showed no overall effect of blindness and no interaction with modality, follow-up between-group contrasts indicated a blind-over-sighted advantage in auditory episodic recognition, which was most pronounced in early blind individuals. In contrast to the auditory modality, there was no empirical support for compensatory effects in any of the olfactory tasks. There was no conclusive evidence for group differences in metacognitive ability to predict episodic recognition performance. Taken together, the results showed no evidence of an overall superior performance in blind relative to sighted individuals across olfactory and auditory functions, although early blind individuals excelled in episodic auditory recognition memory. This observation may be related to an experience-induced increase in auditory attentional capacity. PMID:27729884
Is sensorimotor BCI performance influenced differently by mono, stereo, or 3-D auditory feedback?
McCreadie, Karl A; Coyle, Damien H; Prasad, Girijesh
2014-05-01
Imagination of movement can be used as a control method for a brain-computer interface (BCI), allowing communication for the physically impaired. Visual feedback within such a closed-loop system excludes those with visual problems, and hence there is a need for alternative sensory feedback pathways. In the context of substituting the visual channel with the auditory channel, this study aims to add to the limited evidence that it is possible to replace visual feedback with its auditory equivalent, and to assess the impact this has on BCI performance. Secondly, the study aims to determine for the first time whether the type of auditory feedback method influences motor imagery performance significantly. Auditory feedback is presented using a stepped approach of single (mono), double (stereo), and multiple (vector base amplitude panning as an audio game) loudspeaker arrangements. Visual feedback involves a ball-basket paradigm and a spaceship game. Each session consists of either auditory or visual feedback only, with runs of each type of feedback presentation method applied in each session. Results from seven subjects across five sessions of each feedback type (visual, auditory; 10 sessions in total) show that auditory feedback is a suitable substitute for the visual equivalent and that there are no statistical differences in the type of auditory feedback presented across five sessions.
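To make the loudspeaker conditions concrete, here is a minimal sketch of constant-power stereo panning of a feedback tone driven by a classifier output in [-1, 1]; a mono presentation would fix the pan at center, and vector base amplitude panning generalizes the same gain law to more than two loudspeakers. The tone frequency and mapping are arbitrary choices, not taken from the study.

```python
# Sketch: constant-power stereo panning of an auditory BCI feedback tone.
import numpy as np

def stereo_gains(x):
    """Constant-power pan: x = -1 (full left) .. +1 (full right)."""
    theta = (x + 1) * np.pi / 4           # maps to 0 .. pi/2
    return np.cos(theta), np.sin(theta)   # (left gain, right gain)

fs = 44100
t = np.arange(0, 0.5, 1 / fs)
tone = 0.2 * np.sin(2 * np.pi * 440 * t)  # 0.5-s feedback tone

gl, gr = stereo_gains(0.6)                # classifier output leans 'right'
feedback = np.column_stack([gl * tone, gr * tone])  # 2-channel buffer
print(feedback.shape, f"L/R gains: {gl:.2f}/{gr:.2f}")
```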
Most, Tova; Michaelis, Hilit
2012-08-01
This study aimed to investigate the effect of hearing loss (HL) on emotion-perception ability among young children with and without HL. A total of 26 children aged 4.0-6.6 years with prelingual sensorineural HL ranging from moderate to profound and 14 children with normal hearing (NH) participated. They were asked to identify happiness, anger, sadness, and fear expressed by an actress when uttering the same neutral nonsense sentence. Their auditory, visual, and auditory-visual perceptions of the emotional content were assessed. The accuracy of emotion perception among children with HL was lower than that of the NH children in all 3 conditions: auditory, visual, and auditory-visual. Perception through the combined auditory-visual mode significantly surpassed the auditory or visual modes alone in both groups, indicating that children with HL utilized the auditory information for emotion perception. No significant differences in perception emerged according to degree of HL. In addition, children with profound HL and cochlear implants did not perform differently from children with less severe HL who used hearing aids. The relatively high accuracy of emotion perception by children with HL may be explained by their intensive rehabilitation, which emphasizes suprasegmental and paralinguistic aspects of verbal communication.
Tsunoda, Naoko; Hashimoto, Mamoru; Ishikawa, Tomohisa; Fukuhara, Ryuji; Yuki, Seiji; Tanaka, Hibiki; Hatada, Yutaka; Miyagawa, Yusuke; Ikeda, Manabu
2018-05-08
Auditory hallucinations are an important symptom for diagnosing dementia with Lewy bodies (DLB), yet they have received less attention than visual hallucinations. We investigated the clinical features of auditory hallucinations and the possible mechanisms by which they arise in patients with DLB. We recruited 124 consecutive patients with probable DLB (diagnosis based on the DLB International Workshop 2005 criteria; study period: June 2007-January 2015) from the dementia referral center of Kumamoto University Hospital. We used the Neuropsychiatric Inventory to assess the presence of auditory hallucinations, visual hallucinations, and other neuropsychiatric symptoms. We reviewed all available clinical records of patients with auditory hallucinations to assess their clinical features. We performed multiple logistic regression analysis to identify significant independent predictors of auditory hallucinations. Of the 124 patients, 44 (35.5%) had auditory hallucinations and 75 (60.5%) had visual hallucinations. The majority of patients (90.9%) with auditory hallucinations also had visual hallucinations. Auditory hallucinations consisted mostly of human voices, and 90% of patients described them as like hearing a soundtrack of the scene. Multiple logistic regression showed that the presence of auditory hallucinations was significantly associated with female sex (P = .04) and hearing impairment (P = .004). The analysis also revealed independent correlations between the presence of auditory hallucinations and visual hallucinations (P < .001), phantom boarder delusions (P = .001), and depression (P = .038). Auditory hallucinations are common neuropsychiatric symptoms in DLB and usually appear as a background soundtrack accompanying visual hallucinations. Auditory hallucinations in patients with DLB are more likely to occur in women and those with impaired hearing, depression, delusions, or visual hallucinations. © Copyright 2018 Physicians Postgraduate Press, Inc.
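For illustration, here is a minimal sketch of the kind of multiple logistic regression reported above, fit with statsmodels on synthetic records; the variable names, coefficients, and data are hypothetical stand-ins, and exponentiated coefficients are read as odds ratios.

```python
# Sketch: logistic regression of auditory-hallucination presence on
# candidate predictors, on synthetic data (not the study's records).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 124                                   # same n as the study; data synthetic
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "hearing_impairment": rng.integers(0, 2, n),
    "visual_hallucinations": rng.integers(0, 2, n),
    "phantom_boarder": rng.integers(0, 2, n),
    "depression": rng.integers(0, 2, n),
})
# Generate the outcome from an assumed logistic model.
logit_p = -2 + 1.2 * df.visual_hallucinations + 0.9 * df.hearing_impairment
df["auditory_hallucinations"] = (
    rng.random(n) < 1 / (1 + np.exp(-logit_p))
).astype(int)

fit = smf.logit(
    "auditory_hallucinations ~ female + hearing_impairment"
    " + visual_hallucinations + phantom_boarder + depression",
    data=df,
).fit(disp=0)
print(np.exp(fit.params))                 # odds ratios per predictor
```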
Synchronization with competing visual and auditory rhythms: bouncing ball meets metronome.
Hove, Michael J; Iversen, John R; Zhang, Allen; Repp, Bruno H
2013-07-01
Synchronization of finger taps with periodically flashing visual stimuli is known to be much more variable than synchronization with an auditory metronome. When one of these rhythms is the synchronization target and the other serves as a distracter at various temporal offsets, strong auditory dominance is observed. However, it has recently been shown that visuomotor synchronization improves substantially with moving stimuli such as a continuously bouncing ball. The present study pitted a bouncing ball against an auditory metronome in a target-distracter synchronization paradigm, with the participants being auditory experts (musicians) and visual experts (video gamers and ball players). Synchronization was still less variable with auditory than with visual target stimuli in both groups. For musicians, auditory stimuli tended to be more distracting than visual stimuli, whereas the opposite was the case for the visual experts. Overall, there was no main effect of distracter modality. Thus, a distracting spatiotemporal visual rhythm can be as effective as a distracting auditory rhythm in its capacity to perturb synchronous movement, but its effectiveness also depends on modality-specific expertise.
ERIC Educational Resources Information Center
Aleman, Cheryl; And Others
1990-01-01
Compares auditory/visual practice to visual/motor practice in spelling with seven elementary school learning-disabled students enrolled in a resource room setting. Finds that the auditory/visual practice was superior to the visual/motor practice on the weekly spelling performance for all seven students. (MG)
A Short Term Therapy Approach to Processing Trauma: Art Therapy and Bilateral Stimulation
ERIC Educational Resources Information Center
Tripp, Tally
2007-01-01
This article describes a dynamic, short-term art therapy approach that has been developed for the treatment of trauma related disorders. Using a modified Eye Movement Desensitization and Reprocessing (EMDR) protocol with alternating tactile and auditory bilateral stimulation, associations are rapidly brought to conscious awareness and expressed in…
Joint attention and oromotor abilities in young children with and without autism spectrum disorder.
Dalton, Jennifer C; Crais, Elizabeth R; Velleman, Shelley L
2017-09-01
This study examined the relationship between joint attention ability and oromotor imitation skill in three groups of young children with and without Autism Spectrum Disorder, using both nonverbal oral and verbal motor imitation tasks. Research questions addressed a) differences among joint attention and oromotor imitation abilities; b) the relationship between independently measured joint attention and oromotor imitation, both nonverbal oral and verbal motor; c) the relationships between joint attention and verbal motor imitation during interpersonal interaction; and d) the relationship between the sensory input demands (auditory, visual, and tactile) and oromotor imitation, both nonverbal oral and verbal motor. A descriptive, nonexperimental design was used to compare joint attention and oromotor skills of 10 preschool-aged children with ASD with those of two control groups: 6 typically developing children (TD) and 6 children with suspected Childhood Apraxia of Speech (sCAS) or apraxic-like symptoms. All children had at least a 3.0 mean length of utterance. Children with ASD had poorer joint attention skills overall than children with sCAS or typically developing children. Typically developing children demonstrated higher verbal motor imitation skills overall compared to children with sCAS. Correlational analyses revealed that nonverbal oral imitation and verbal motor imitation were positively related to joint attention abilities only in the children with ASD. Strong positive relationships between joint attention in a naturalistic context (e.g., shared story experience) and oromotor imitation skills, both nonverbal oral and verbal motor, were found only for children with ASD. These data suggest there is a strong positive relationship between joint attention skills and the ability to sequence nonverbal oral and verbal motor movements in children with ASD. The combined sensory input approach involving auditory, visual, and tactile modalities contributed to significantly higher nonverbal oral and verbal motor imitation performance for all groups of children. Verbal children with ASD in this study had difficulties with both the social and cognitive demands of oromotor imitation within a natural environment that demanded cross-modal processing of incoming stimuli within an interpersonal interaction. Further, joint attention and oral praxis may serve as components of an important coupling mechanism in the development of spoken communication and later-developing social-cognitive skills. Copyright © 2017 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Ikeda, Kohei; Higashi, Toshio; Sugawara, Kenichi; Tomori, Kounosuke; Kinoshita, Hiroshi; Kasai, Tatsuya
2012-01-01
The effect of visual and auditory enhancements of finger movement on corticospinal excitability during motor imagery (MI) was investigated using the transcranial magnetic stimulation technique. Motor-evoked potentials were elicited from the abductor digiti minimi muscle during MI with auditory, visual, and combined auditory and visual information, and no…
Opposite brain laterality in analogous auditory and visual tests.
Oltedal, Leif; Hugdahl, Kenneth
2017-11-01
Laterality for language processing can be assessed by auditory and visual tasks. Typically, a right ear/right visual half-field (VHF) advantage is observed, reflecting left-hemispheric lateralization for language. Historically, auditory tasks have shown more consistent and reliable results when compared to VHF tasks. While few studies have compared analogous tasks applied to both sensory modalities in the same participants, one such study by Voyer and Boudreau [(2003). Cross-modal correlation of auditory and visual language laterality tasks: a serendipitous finding. Brain Cogn, 53(2), 393-397] found opposite laterality for visual and auditory language tasks. We adapted an experimental paradigm based on a dichotic listening and VHF approach, and applied the combined language paradigm in two separate experiments, including fMRI in the second experiment to measure brain activation in addition to behavioural data. The first experiment showed a right-ear advantage for the auditory task, but a left half-field advantage for the visual task. The second experiment confirmed the findings, with opposite laterality effects for the visual and auditory tasks. In conclusion, we replicate the finding by Voyer and Boudreau (2003) and support their interpretation that these visual and auditory language tasks measure different cognitive processes.
The spatiotopic 'visual' cortex of the blind
NASA Astrophysics Data System (ADS)
Likova, Lora
2012-03-01
Visual cortex activity in the blind has been shown in sensory tasks. Can it be activated in memory tasks? If so, are inherent features of its organization meaningfully employed? Our recent results in short-term blindfolded subjects imply that human primary visual cortex (V1) may operate as a modality-independent 'sketchpad' for working memory (Likova, 2010a). Interestingly, the spread of the V1 activation approximately corresponded to the spatial extent of the images in terms of their angle of projection to the subject. We now raise the questions of whether under long-term visual deprivation V1 is also employed in non-visual memory tasks, in particular in congenitally blind individuals, who have never had visual stimulation to guide the development of the visual area organization, and whether such spatial organization is still valid for the same paradigm that was used in blindfolded individuals. The outcome has implications for an emerging reconceptualization of the principles of brain architecture and its reorganization under sensory deprivation. Methods: We used a novel fMRI drawing paradigm in congenitally and late-onset blind subjects, compared with sighted and blindfolded subjects, in three conditions of 20 s duration, separated by 20 s rest intervals: (i) Tactile Exploration: raised-line images explored and memorized; (ii) Tactile Memory Drawing: drawing the explored image from memory; (iii) Scribble: mindless drawing movements with no memory component. Results and Conclusions: V1 was strongly activated for Tactile Memory Drawing and Tactile Exploration in these totally blind subjects. Remarkably, after training, even in the memory task, the mapping of V1 activation largely corresponded to the angular projection of the tactile stimuli relative to the ego-center (i.e., the effective visual angle at the head); beyond this projective boundary, peripheral V1 signals were dramatically reduced or even suppressed. The matching extent of the activation in the congenitally blind rules out vision-based explanatory mechanisms, and supports the more radical idea of V1 as a modality-independent 'projection screen' or a 'sketchpad', whose mapping scales to the projective dimensions of objects explored in the peri-personal space.
NASA Astrophysics Data System (ADS)
Ducasse, J.; Macé, M.; Jouffrais, C.
2015-08-01
Visual maps must be transcribed into (interactive) raised-line maps to be accessible for visually impaired people. However, these tactile maps suffer from several shortcomings: they are long and expensive to produce, they cannot display a large amount of information, and they are not dynamically modifiable. A number of methods have been developed to automate the production of raised-line maps, but there is not yet any tactile map editor on the market. Tangible interactions proved to be an efficient way to help a visually impaired user manipulate spatial representations. Contrary to raised-line maps, tangible maps can be autonomously constructed and edited. In this paper, we present the scenarios and the main expected contributions of the AccessiMap project, which is based on the availability of many sources of open spatial data: 1/ facilitating the production of interactive tactile maps with the development of an open-source web-based editor; 2/ investigating the use of tangible interfaces for the autonomous construction and exploration of a map by a visually impaired user.
How the blind "see" Braille: lessons from functional magnetic resonance imaging.
Sadato, Norihiro
2005-12-01
What does the visual cortex of the blind do during Braille reading? This process involves converting simple tactile information into meaningful patterns that have lexical and semantic properties. The perceptual processing of Braille might be mediated by the somatosensory system, whereas visual letter identity is accomplished within the visual system in sighted people. Recent advances in functional neuroimaging techniques, such as functional magnetic resonance imaging, have enabled exploration of the neural substrates of Braille reading. The primary visual cortex of early-onset blind subjects is functionally relevant to Braille reading, suggesting that the brain shows remarkable plasticity that potentially permits the additional processing of tactile information in the visual cortical areas.
Graphical tactile displays for visually-impaired people.
Vidal-Verdú, Fernando; Hafez, Moustapha
2007-03-01
This paper presents an up-to-date survey of graphical tactile displays. These devices provide information through the sense of touch. At best, they should display both text and graphics (text may be considered a type of graphic). Graphs made with shapeable sheets result in bulky items awkward to store and transport; their production is expensive and time-consuming and they deteriorate quickly. Research is ongoing for a refreshable tactile display that acts as an output device for a computer or other information source and can present the information in text and graphics. The work in this field has branched into diverse areas, from physiological studies to technological aspects and challenges. Moreover, interest in these devices is now being shown by other fields such as virtual reality, minimally invasive surgery and teleoperation. It is attracting more and more people, research and money. Many proposals have been put forward, several of them succeeding in the task of presenting tactile information. However, most are research prototypes and very expensive to produce commercially. Thus the goal of an efficient low-cost tactile display for visually-impaired people has not yet been reached.
Troise, Denise; Yoneyama, Simone; Resende, Maria Bernadette; Reed, Umbertina; Xavier, Gilberto Fernando; Hasue, Renata
2014-09-01
To investigate tactile perception and manual dexterity, with or without visual feedback, in males with Duchenne muscular dystrophy (DMD). Forty males with DMD (mean age 9 y 8 mo, SD 2 y 3 mo; range 5-14 y), recruited from the teaching hospital of the School of Medicine of the University of São Paulo, with disease severity graded as '1' to '6' on the Vignos Scale and '1' on Brooke's Scale, and 49 healthy males (mean age 8 y 2 mo; range 5-11 y; SD 1 y 11 mo), recruited from a local education center, participated in the study. We assessed tactile perception using two-point discrimination and stereognosis tests, and manual dexterity using the Pick-Up test with the eyes either open or closed. Analysis of variance was used to compare groups; a p value of less than 0.05 was considered statistically significant. Males with DMD exhibited no impairment in tactile perception, as measured by the two-point discrimination test and the number of objects correctly named in the stereognosis test. Manipulation during stereognosis was statistically slower with both hands (p<0.001), and manual dexterity was much worse in males with DMD when there was no visual feedback (p<0.001). Males with DMD exhibited disturbances in manipulation during stereognosis and dexterity tests. Hand control was highly dependent on visual information rather than on tactile perception. Motor dysfunction in males with DMD, therefore, might be related to altered neural control. © 2014 Mac Keith Press.
Anthony Eikema, Diderik Jan A.; Chien, Jung Hung; Stergiou, Nicholas; Myers, Sara A.; Scott-Pandorf, Melissa M.; Bloomberg, Jacob J.; Mukherjee, Mukul
2015-01-01
Human locomotor adaptation requires feedback and feed-forward control processes to maintain an appropriate walking pattern. Adaptation may require the use of visual and proprioceptive input to decode altered movement dynamics and generate an appropriate response. After a person transfers from an extreme sensory environment and back, as astronauts do when they return from spaceflight, the prolonged period required for re-adaptation can pose a significant burden. In our previous paper, we showed that plantar tactile vibration during a split-belt adaptation task did not interfere with the treadmill adaptation however, larger overground transfer effects with a slower decay resulted. Such effects, in the absence of visual feedback (of motion) and perturbation of tactile feedback, is believed to be due to a higher proprioceptive gain because, in the absence of relevant external dynamic cues such as optic flow, reliance on body-based cues is enhanced during gait tasks through multisensory integration. In this study we therefore investigated the effect of optic flow on tactile stimulated split-belt adaptation as a paradigm to facilitate the sensorimotor adaptation process. Twenty healthy young adults, separated into two matched groups, participated in the study. All participants performed an overground walking trial followed by a split-belt treadmill adaptation protocol. The tactile group (TC) received vibratory plantar tactile stimulation only, whereas the virtual reality and tactile group (VRT) received an additional concurrent visual stimulation: a moving virtual corridor, inducing perceived self-motion. A post-treadmill overground trial was performed to determine adaptation transfer. Interlimb coordination of spatiotemporal and kinetic variables was quantified using symmetry indices, and analyzed using repeated-measures ANOVA. Marked changes of step length characteristics were observed in both groups during split-belt adaptation. Stance and swing time symmetry were similar in the two groups, suggesting that temporal parameters are not modified by optic flow. However, whereas the TC group displayed significant stance time asymmetries during the post-treadmill session, such aftereffects were absent in the VRT group. The results indicated that the enhanced transfer resulting from exposure to plantar cutaneous vibration during adaptation was alleviated by optic flow information. The presence of visual self-motion information may have reduced proprioceptive gain during learning. Thus, during overground walking, the learned proprioceptive split-belt pattern is more rapidly overridden by visual input due to its increased relative gain. The results suggest that when visual stimulation is provided during adaptive training, the system acquires the novel movement dynamics while maintaining the ability to flexibly adapt to different environments. PMID:26525712
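A small sketch of one standard symmetry-index formula of the kind used to quantify interlimb coordination above; the exact normalization in the study may differ, and the step-length values are invented.

```python
# Sketch: symmetry index (SI) normalizing the fast/slow (or left/right)
# difference by the limb mean; 0 means perfect interlimb symmetry.
import numpy as np

def symmetry_index(fast, slow):
    """SI in percent, per stride."""
    fast, slow = np.asarray(fast, float), np.asarray(slow, float)
    return 100 * (fast - slow) / (0.5 * (fast + slow))

step_fast = [0.62, 0.64, 0.61]   # m, hypothetical fast-belt step lengths
step_slow = [0.48, 0.50, 0.47]   # m, hypothetical slow-belt step lengths
print(f"Mean SI: {symmetry_index(step_fast, step_slow).mean():.1f} %")
```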
Crossmodal attention switching: auditory dominance in temporal discrimination tasks.
Lukas, Sarah; Philipp, Andrea M; Koch, Iring
2014-11-01
Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual-auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are differentially effectively processed depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual-auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set. Copyright © 2014 Elsevier B.V. All rights reserved.
Cecere, Roberto; Gross, Joachim; Thut, Gregor
2016-06-01
The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if trainable by within-condition practice. Together, these results provide crucial evidence that audiovisual temporal binding for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration. © 2016 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Méndez-Balbuena, Ignacio; Huidobro, Nayeli; Silva, Mayte; Flores, Amira; Trenado, Carlos; Quintanar, Luis; Arias-Carrión, Oscar; Kristeva, Rumyana; Manjarrez, Elias
2015-10-01
The present investigation documents the electrophysiological occurrence of multisensory stochastic resonance in the human visual pathway elicited by tactile noise. We define multisensory stochastic resonance of brain evoked potentials as the phenomenon in which an intermediate level of input noise of one sensory modality enhances the brain evoked response of another sensory modality. Here we examined this phenomenon in visual evoked potentials (VEPs) modulated by the addition of tactile noise. Specifically, we examined whether a particular level of mechanical Gaussian noise applied to the index finger can improve the amplitude of the VEP. We compared the amplitude of the positive P100 VEP component between zero noise (ZN), optimal noise (ON), and high mechanical noise (HN). The data disclosed an inverted U-like graph for all the subjects, thus demonstrating the occurrence of a multisensory stochastic resonance in the P100 VEP. Copyright © 2015 the American Physiological Society.
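The inverted-U result can be illustrated with a toy threshold model of stochastic resonance: a subthreshold periodic signal is undetectable with zero noise, best conveyed at an intermediate noise level, and swamped at high noise. This is a generic demonstration of the principle, not the study's VEP analysis.

```python
# Sketch: stochastic resonance in a threshold detector. The correlation
# between threshold crossings and the hidden signal peaks at an
# intermediate noise level (inverted-U).
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 2000)
signal = 0.8 * np.sin(2 * np.pi * 5 * t)     # subthreshold (threshold = 1.0)

for sigma in (0.0, 0.4, 5.0):                # zero, ~optimal, high noise
    noisy = signal + rng.normal(0, sigma, t.size)
    crossings = noisy > 1.0                  # suprathreshold samples
    if sigma > 0:
        corr = np.corrcoef(crossings, signal > 0)[0, 1]
    else:
        corr = 0.0                           # no crossings at all without noise
    print(f"sigma={sigma:.1f}: crossing rate={crossings.mean():.3f}, "
          f"signal correlation={corr:.2f}")
```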
Minagawa, N; Kashu, K
1989-06-01
16 adult subjects performed a tactile recognition task. According to our 1984 study, half of the subjects were classified as having a left hemispheric preference for the processing of visual stimuli, while the other half were classified as having a right hemispheric preference for the processing of visual stimuli. The present task was conducted according to the S1-S2 matching paradigm. The standard stimulus was a readily recognizable object and was presented tactually to either the left or right hand of each subject. The comparison stimulus was an object-picture and was presented visually by slide in a tachistoscope. The interstimulus interval was .05 sec. or 2.5 sec. Analysis indicated that the left-preference group showed right-hand superiority, and the right-preference group showed left-hand superiority. The notion of individual hemisphericity was supported in tactile processing.
Higher dietary diversity is related to better visual and auditory sustained attention.
Shiraseb, Farideh; Siassi, Fereydoun; Qorbani, Mostafa; Sotoudeh, Gity; Rostami, Reza; Narmaki, Elham; Yavari, Parvaneh; Aghasi, Mohadeseh; Shaibu, Osman Mohammed
2016-04-01
Attention is a complex cognitive function that is necessary for learning, for following social norms of behaviour and for effective performance of responsibilities and duties. It is especially important in sensitive occupations requiring sustained attention. Improvement of dietary diversity (DD) is recognised as an important factor in health promotion, but its association with sustained attention is unknown. The aim of this study was to determine the association between auditory and visual sustained attention and DD. A cross-sectional study was carried out on 400 women aged 20-50 years who attended sports clubs at Tehran Municipality. Sustained attention was evaluated on the basis of the Integrated Visual and Auditory Continuous Performance Test using Integrated Visual and Auditory software. A single 24-h dietary recall questionnaire was used for DD assessment. Dietary diversity scores (DDS) were determined using the FAO guidelines. The mean visual and auditory sustained attention scores were 40·2 (sd 35·2) and 42·5 (sd 38), respectively. The mean DDS was 4·7 (sd 1·5). After adjusting for age, education years, physical activity, energy intake and BMI, mean visual and auditory sustained attention showed a significant increase as the quartiles of DDS increased (P=0·001). In addition, the mean subscales of attention, including auditory consistency and vigilance, visual persistence, visual and auditory focus, speed, comprehension and full attention, increased significantly with increasing DDS (P<0·05). In conclusion, higher DDS is associated with better visual and auditory sustained attention.
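As a sketch of how an FAO-style dietary diversity score can be computed from a 24-h recall, the code below counts the food groups represented among recalled items; the group lists and records are illustrative, and in the study mean attention scores were then compared across DDS quartiles with covariate adjustment.

```python
# Sketch: dietary diversity score (DDS) as the count of food groups
# represented in a 24-h recall. Group definitions are illustrative only.
import pandas as pd

FOOD_GROUPS = {
    "cereals": {"bread", "rice", "pasta"},
    "vegetables": {"spinach", "carrot", "tomato"},
    "fruits": {"apple", "orange"},
    "meat_fish": {"chicken", "fish", "beef"},
    "dairy": {"milk", "yogurt", "cheese"},
    "legumes_nuts": {"lentils", "beans", "walnut"},
}

def dds(recalled_items):
    items = set(recalled_items)
    return sum(bool(items & foods) for foods in FOOD_GROUPS.values())

df = pd.DataFrame({
    "recall": [
        ["bread", "milk"],
        ["rice", "spinach", "fish", "apple"],
        ["pasta", "yogurt", "lentils", "carrot", "orange"],
        ["bread", "beef", "cheese"],
    ],
    "attention": [35.0, 52.0, 58.0, 41.0],   # hypothetical attention scores
})
df["DDS"] = df["recall"].apply(dds)
print(df[["DDS", "attention"]])
# In the study, attention was compared across DDS quartiles
# (e.g., pd.qcut(df["DDS"], 4)), adjusting for age, BMI, energy intake, etc.
```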
ERIC Educational Resources Information Center
Frings, Christian; Amendt, Anna; Spence, Charles
2011-01-01
Negative priming (NP) refers to the finding that people's responses to probe targets previously presented as prime distractors are usually slower than to unrepeated stimuli. Intriguingly, the effect sizes of tactile NP were much larger than the effect sizes for visual NP. We analyzed whether the large tactile NP effect is just a side effect of the…
Aytemür, Ali; Almeida, Nathalia; Lee, Kwang-Hyuk
2017-02-01
Adaptation to delayed sensory feedback following an action produces a subjective time compression between the action and the feedback (temporal recalibration effect, TRE). TRE is important for sensory delay compensation to maintain a relationship between causally related events. It is unclear whether TRE is a sensory modality-specific phenomenon. In three experiments employing a sensorimotor synchronization task, we investigated this question using cathodal transcranial direct-current stimulation (tDCS). We found that cathodal tDCS over the visual cortex, and to a lesser extent over the auditory cortex, decreased visual TRE. However, neither auditory nor visual cortex tDCS produced any measurable effect on auditory TRE. Our study revealed the distinct nature of TRE in the auditory and visual domains. Visual-motor TRE, which is more variable than auditory TRE, is a sensory modality-specific phenomenon modulated by the auditory cortex. The robustness of auditory-motor TRE, unaffected by tDCS, suggests the dominance of the auditory system in temporal processing, by providing a frame of reference in the realignment of sensorimotor timing signals. Copyright © 2017 Elsevier Ltd. All rights reserved.
Identification of Vibrotactile Patterns Encoding Obstacle Distance Information.
Kim, Yeongmi; Harders, Matthias; Gassert, Roger
2015-01-01
Delivering distance information about nearby obstacles from sensors embedded in a white cane, in addition to the cane's intrinsic mechanical feedback, can aid the visually impaired in ambulating independently. Haptics is a common modality for conveying such information to cane users, typically in the form of vibrotactile signals. In this context, we investigated the effect of tactile rendering methods, tactile feedback configurations and directions of tactile flow on the identification of obstacle distance. Three tactile rendering methods with temporal variation only, spatio-temporal variation and spatial/temporal/intensity variation were investigated for two vibration feedback configurations. Results showed a significant interaction between tactile rendering method and feedback configuration. Spatio-temporal variation generally resulted in high correct identification rates for both feedback configurations. In the case of the four-finger vibration, tactile rendering with spatial/temporal/intensity variation also resulted in a high distance identification rate. Further, participants expressed a preference for the four-finger vibration over the single-finger vibration in a survey. Both preferred rendering methods, with spatio-temporal variation and with spatial/temporal/intensity variation for the four-finger vibration, could convey obstacle distance information with low workload. Overall, the presented findings provide valuable insights and guidance for the design of haptic displays for electronic travel aids for the visually impaired.
Burnham, Denis; Dodd, Barbara
2004-12-01
The McGurk effect, in which auditory [ba] dubbed onto [ga] lip movements is perceived as "da" or "tha," was employed in a real-time task to investigate auditory-visual speech perception in prelingual infants. Experiments 1A and 1B established the validity of real-time dubbing for producing the effect. In Experiment 2, 4 1/2-month-olds were tested in a habituation-test paradigm, in which an auditory-visual stimulus was presented contingent upon visual fixation of a live face. The experimental group was habituated to a McGurk stimulus (auditory [ba] visual [ga]), and the control group to matching auditory-visual [ba]. Each group was then presented with three auditory-only test trials, [ba], [da], and [ða] (as in then). Visual-fixation durations in test trials showed that the experimental group treated the emergent percept in the McGurk effect, [da] or [ða], as familiar (even though they had not heard these sounds previously) and [ba] as novel. For control group infants [da] and [ða] were no more familiar than [ba]. These results are consistent with infants' perception of the McGurk effect, and support the conclusion that prelinguistic infants integrate auditory and visual speech information. Copyright 2004 Wiley Periodicals, Inc.
Contextual modulation of primary visual cortex by auditory signals.
Petro, L S; Paton, A T; Muckli, L
2017-02-19
Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195-201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256-1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.
The Tactile Vision Substitution System: Applications in Education and Employment
ERIC Educational Resources Information Center
Scadden, Lawrence A.
1974-01-01
The Tactile Vision Substitution System converts the visual image from a narrow-angle television camera to a tactual image on a 5-inch square, 100-point display of vibrators placed against the abdomen of the blind person. (Author)
Enhanced tactile encoding and memory recognition in congenital blindness.
D'Angiulli, Amedeo; Waraich, Paul
2002-06-01
Several behavioural studies have shown that early-blind persons possess superior tactile skills. Since neurophysiological data show that early-blind persons recruit visual as well as somatosensory cortex to carry out tactile processing (cross-modal plasticity), blind persons' sharper tactile skills may be related to cortical re-organisation resulting from loss of vision early in their life. To examine the nature of blind individuals' tactile superiority and its implications for cross-modal plasticity, we compared the tactile performance of congenitally totally blind, low-vision and sighted children on a raised-line picture identification test and re-test, assessing effects of task familiarity, exploratory strategy and memory recognition. What distinguished the blind from the other children was higher memory recognition and higher tactile encoding associated with efficient exploration. These results suggest that enhanced perceptual encoding and recognition memory may be two cognitive correlates of cross-modal plasticity in congenital blindness.
Whitwell, Robert L.; Ganel, Tzvi; Byrne, Caitlin M.; Goodale, Melvyn A.
2015-01-01
Investigators study the kinematics of grasping movements (prehension) under a variety of conditions to probe visuomotor function in normal and brain-damaged individuals. “Natural” prehensile acts are directed at the goal object and are executed using real-time vision. Typically, they also entail the use of tactile, proprioceptive, and kinesthetic sources of haptic feedback about the object (“haptics-based object information”) once contact with the object has been made. Natural and simulated (pantomimed) forms of prehension are thought to recruit different cortical structures: patient DF, who has visual form agnosia following bilateral damage to her temporal-occipital cortex, loses her ability to scale her grasp aperture to the size of targets (“grip scaling”) when her prehensile movements are based on a memory of a target previewed 2 s before the cue to respond or when her grasps are directed towards a visible virtual target but she is denied haptics-based information about the target. In the first of two experiments, we show that when DF performs real-time pantomimed grasps towards a 7.5 cm displaced imagined copy of a visible object such that her fingers make contact with the surface of the table, her grip scaling is in fact quite normal. This finding suggests that real-time vision and terminal tactile feedback are sufficient to preserve DF’s grip scaling slopes. In the second experiment, we examined an “unnatural” grasping task variant in which a tangible target (along with any proxy such as the surface of the table) is denied (i.e., no terminal tactile feedback). To do this, we used a mirror-apparatus to present virtual targets with and without a spatially coincident copy for the participants to grasp. We compared the grasp kinematics from trials with and without terminal tactile feedback to a real-time-pantomimed grasping task (one without tactile feedback) in which participants visualized a copy of the visible target as instructed in our laboratory in the past. Compared to natural grasps, removing tactile feedback increased RT, slowed the velocity of the reach, reduced in-flight grip aperture, increased the slopes relating grip aperture to target width, and reduced the final grip aperture (FGA). All of these effects were also observed in the real time-pantomime grasping task. These effects seem to be independent of those that arise from using the mirror in general as we also compared grasps directed towards virtual targets to those directed at real ones viewed directly through a pane of glass. These comparisons showed that the grasps directed at virtual targets increased grip aperture, slowed the velocity of the reach, and reduced the slopes relating grip aperture to the widths of the target. Thus, using the mirror has real consequences on grasp kinematics, reflecting the importance of task-relevant sources of online visual information for the programming and updating of natural prehensile movements. Taken together, these results provide compelling support for the view that removing terminal tactile feedback, even when the grasps are target-directed, induces a switch from real-time visual control towards one that depends more on visual perception and cognitive supervision. Providing terminal tactile feedback and real-time visual information can evidently keep the dorsal visuomotor system operating normally for prehensile acts. PMID:25999834
Visuotactile motion congruence enhances gamma-band activity in visual and somatosensory cortices.
Krebber, Martin; Harwood, James; Spitzer, Bernhard; Keil, Julian; Senkowski, Daniel
2015-08-15
When touching and viewing a moving surface our visual and somatosensory systems receive congruent spatiotemporal input. Behavioral studies have shown that motion congruence facilitates interplay between visual and tactile stimuli, but the neural mechanisms underlying this interplay are not well understood. Neural oscillations play a role in motion processing and multisensory integration. They may also be crucial for visuotactile motion processing. In this electroencephalography study, we applied linear beamforming to examine the impact of visuotactile motion congruence on beta and gamma band activity (GBA) in visual and somatosensory cortices. Visual and tactile inputs consisted of gratings that moved either in the same or different directions. Participants performed a target detection task that was unrelated to motion congruence. While there were no effects in the beta band (13-21 Hz), the power of GBA (50-80 Hz) in visual and somatosensory cortices was larger for congruent compared with incongruent motion stimuli. This suggests enhanced bottom-up multisensory processing when visual and tactile gratings moved in the same direction. Supporting its behavioral relevance, GBA was correlated with shorter reaction times in the target detection task. We conclude that motion congruence plays an important role for the integrative processing of visuotactile stimuli in sensory cortices, as reflected by oscillatory responses in the gamma band. Copyright © 2015 Elsevier Inc. All rights reserved.
A Perceptuo-Cognitive-Motor Approach to the Special Child.
ERIC Educational Resources Information Center
Kornblum, Rena Beth
A movement therapist reviews ways in which a perceptuo-cognitive approach can help handicapped children in learning and in social adjustment. She identifies specific auditory problems (hearing loss, sound-ground confusion, auditory discrimination, auditory localization, auditory memory, auditory sequencing), visual problems (visual acuity,…
Visual activity predicts auditory recovery from deafness after adult cochlear implantation.
Strelnikov, Kuzma; Rouger, Julien; Demonet, Jean-François; Lagleyre, Sebastien; Fraysse, Bernard; Deguine, Olivier; Barone, Pascal
2013-12-01
Modern cochlear implantation technologies allow deaf patients to understand auditory speech; however, the implants deliver only a coarse auditory input and patients must use long-term adaptive processes to achieve coherent percepts. In adults with post-lingual deafness, speech recovery progresses most rapidly during the first year after cochlear implantation, but there is a large range of variability in the level of cochlear implant outcomes and the temporal evolution of recovery. It has been proposed that when profoundly deaf subjects receive a cochlear implant, the visual cross-modal reorganization of the brain is deleterious for auditory speech recovery. We tested this hypothesis in post-lingually deaf adults by analysing whether brain activity shortly after implantation correlated with the level of auditory recovery 6 months later. Based on brain activity induced by a speech-processing task, we found strong positive correlations in areas outside the auditory cortex. The highest positive correlations were found in the occipital cortex involved in visual processing, as well as in the posterior-temporal cortex known for audio-visual integration. The other area that positively correlated with auditory speech recovery was localized in the left inferior frontal area known for speech processing. Our results demonstrate that the functional level of the visual modality is related to the proficiency of auditory recovery. Based on the positive correlation of visual activity with auditory speech recovery, we suggest that the visual modality may facilitate the perception of the word's auditory counterpart in communicative situations. The link demonstrated between visual activity and auditory speech perception indicates that visuoauditory synergy is crucial for cross-modal plasticity and for fostering speech-comprehension recovery in adult cochlear-implanted deaf patients.
ERIC Educational Resources Information Center
Nieuwenhuis, Sander; Elzinga, Bernet M.; Ras, Priscilla H.; Berends, Floris; Duijs, Peter; Samara, Zoe; Slagter, Heleen A.
2013-01-01
Recent research has shown superior memory retrieval when participants make a series of horizontal saccadic eye movements between the memory encoding phase and the retrieval phase compared to participants who do not move their eyes or move their eyes vertically. It has been hypothesized that the rapidly alternating activation of the two hemispheres…
The Play Behavior and Play Materials of Blind and Sighted Infants and Preschoolers.
ERIC Educational Resources Information Center
Troster, H.; Brambring, M.
1994-01-01
Analysis of questionnaires completed by parents of 91 young blind children and 74 matched sighted children indicated that sighted children engaged in more complex levels of play at an earlier age; blind children interacted less frequently with other children than did sighted children; blind children preferred tactile-auditory games and toys; and…
ERIC Educational Resources Information Center
Grimes, Sue K.
1995-01-01
A diagnostic, prescriptive model was utilized (n=394) in identification of learning styles and learning-study strategies of diverse student groups and in the analysis of prescriptive methods to address their specific needs. High-risk groups demonstrated auditory, tactile concrete, and group learning style preferences and were weaker on cognitive,…
Test of a motor theory of long-term auditory memory
Schulze, Katrin; Vargha-Khadem, Faraneh; Mishkin, Mortimer
2012-01-01
Monkeys can easily form lasting central representations of visual and tactile stimuli, yet they seem unable to do the same with sounds. Humans, by contrast, are highly proficient in auditory long-term memory (LTM). These mnemonic differences within and between species raise the question of whether the human ability is supported in some way by speech and language, e.g., through subvocal reproduction of speech sounds and by covert verbal labeling of environmental stimuli. If so, the explanation could be that storing rapidly fluctuating acoustic signals requires assistance from the motor system, which is uniquely organized to chain-link rapid sequences. To test this hypothesis, we compared the ability of normal participants to recognize lists of stimuli that can be easily reproduced, labeled, or both (pseudowords, nonverbal sounds, and words, respectively) versus their ability to recognize a list of stimuli that can be reproduced or labeled only with great difficulty (reversed words, i.e., words played backward). Recognition scores after 5-min delays filled with articulatory-suppression tasks were relatively high (75–80% correct) for all sound types except reversed words; the latter yielded scores that were not far above chance (58% correct), even though these stimuli were discriminated nearly perfectly when presented as reversed-word pairs at short intrapair intervals. The combined results provide preliminary support for the hypothesis that participation of the oromotor system may be essential for laying down the memory of speech sounds and, indeed, that speech and auditory memory may be so critically dependent on each other that they had to coevolve. PMID:22511719
Judging hardness of an object from the sounds of tapping created by a white cane.
Nunokawa, K; Seki, Y; Ino, S; Doi, K
2014-01-01
The white cane plays a vital role in supporting the independent mobility of the visually impaired. The recognition of target attributes through contact with a white cane is an important function. We have conducted research to obtain fundamental knowledge concerning the exploration methods used to perceive the hardness of an object through contact with a white cane. This research has allowed us to examine methods that enhance accuracy in the perception of objects as well as the materials and structures of a white cane. Previous research suggests that the roles of both auditory and tactile information from the white cane must be considered in determining an object's hardness. This experimental study examined the ability of people to perceive the hardness of an object solely through the tapping sounds of a white cane (i.e., auditory information) using a method of magnitude estimation. Two types of sounds were used to estimate hardness: 1) the playback of recorded tapping sounds and 2) the sounds produced on-site by tapping. Three types of handgrips were used to create different sounds of tapping on an object with a cane. The participants of this experiment were five sighted university students wearing eye masks and two totally blind students who walk independently with a white cane. The results showed that both the sighted university students and the totally blind participants were able to accurately judge the hardness of an object solely from the auditory information provided by a white cane. For the blind participants, different handgrips significantly influenced the accuracy of their estimation of an object's hardness.
Lin, Hung-Yu; Hsieh, Hsieh-Chun; Lee, Posen; Hong, Fu-Yuan; Chang, Wen-Dien; Liu, Kuo-Cheng
2017-08-01
This study explored auditory and visual attention in children with ADHD. In a randomized, two-period crossover design, 50 children with ADHD and 50 age- and sex-matched typically developing peers were measured with the Test of Variables of Attention (TOVA). The deficit in visual attention was more serious than that in auditory attention in children with ADHD. On the auditory modality, the deficit of attentional inconsistency alone is sufficient to explain most cases of ADHD; however, most of the children with ADHD suffered from deficits of sustained attention, response inhibition, and attentional inconsistency on the visual modality. Our results also showed that the deficit of attentional inconsistency is the most important indicator in diagnosing and intervening in ADHD when both auditory and visual modalities are considered. The findings provide strong evidence that the deficits of auditory attention are different from those of visual attention in children with ADHD.
Demonstrating the Potential for Dynamic Auditory Stimulation to Contribute to Motion Sickness
Keshavarz, Behrang; Hettinger, Lawrence J.; Kennedy, Robert S.; Campos, Jennifer L.
2014-01-01
Auditory cues can create the illusion of self-motion (vection) in the absence of visual or physical stimulation. The present study aimed to determine whether auditory cues alone can also elicit motion sickness and how auditory cues contribute to motion sickness when added to visual motion stimuli. Twenty participants were seated in front of a curved projection display and were exposed to a virtual scene that constantly rotated around the participant's vertical axis. The virtual scene contained either visual-only, auditory-only, or a combination of corresponding visual and auditory cues. All participants performed all three conditions in a counterbalanced order. Participants tilted their heads alternately towards the right or left shoulder in all conditions during stimulus exposure in order to create pseudo-Coriolis effects and to maximize the likelihood for motion sickness. Measurements of motion sickness (onset, severity), vection (latency, strength, duration), and postural steadiness (center of pressure) were recorded. Results showed that adding auditory cues to the visual stimuli did not, on average, affect motion sickness and postural steadiness, but it did reduce vection onset times and increased vection strength compared to pure visual or pure auditory stimulation. Eighteen of the 20 participants reported at least slight motion sickness in the two conditions including visual stimuli. More interestingly, six participants also reported slight motion sickness during pure auditory stimulation and two of the six participants stopped the pure auditory test session due to motion sickness. The present study is the first to demonstrate that motion sickness may be caused by pure auditory stimulation, which we refer to as “auditorily induced motion sickness”. PMID:24983752
On the dependence of response inhibition processes on sensory modality.
Bodmer, Benjamin; Beste, Christian
2017-04-01
The ability to inhibit responses is a central sensorimotor function, but only recently has the importance of sensory processes for motor inhibition mechanisms come into research focus. In this regard, it remains unclear whether sensory modalities differ in their power to trigger response inhibition processes. On functional neuroanatomical grounds, strong differences may exist, for example, between the visual and the tactile modality. In the current study we examine which neurophysiological mechanisms and functional neuroanatomical networks are modulated during response inhibition. To this end, a Go/NoGo paradigm employing a novel combination of visual, tactile, and visuotactile stimuli was used. The data show that the tactile modality is more powerful than the visual modality in triggering response inhibition processes. However, the tactile modality loses its efficacy to trigger response inhibition processes when combined with the visual modality. This may be due to competitive mechanisms leading to a suppression of certain sensory stimuli at the response selection level. Variations in sensory modality specifically affected conflict monitoring processes during response inhibition by modulating activity in a frontoparietal network including the right inferior frontal gyrus, anterior cingulate cortex and the temporoparietal junction. Attentional selection processes were not modulated. The results suggest that the functional neuroanatomical networks involved in response inhibition critically depend on the nature of the sensory input. Hum Brain Mapp 38:1941-1951, 2017. © 2017 Wiley Periodicals, Inc.
Job, Xavier E; de Fockert, Jan W; van Velzen, José
2016-08-01
Behavioural and electrophysiological evidence has demonstrated that preparation of goal-directed actions modulates sensory perception at the goal location before the action is executed. However, previous studies have focused on sensory perception in areas of peripersonal space. The present study investigated visual and tactile sensory processing at the goal location of upcoming movements towards the body, much of which is not visible, as well as visible peripersonal space. A motor task cued participants to prepare a reaching movement towards goals either in peripersonal space in front of them or personal space on the upper chest. In order to assess modulations of sensory perception during movement preparation, event-related potentials (ERPs) were recorded in response to task-irrelevant visual and tactile probe stimuli delivered randomly at one of the goal locations of the movements. In line with previous neurophysiological findings, movement preparation modulated visual processing at the goal of a movement in peripersonal space. Movement preparation also modulated somatosensory processing at the movement goal in personal space. The findings demonstrate that tactile perception in personal space is subject to similar top-down sensory modulation by motor preparation as observed for visual stimuli presented in peripersonal space. These findings show for the first time that the principles and mechanisms underlying adaptive modulation of sensory processing in the context of action extend to tactile perception in unseen personal space. Copyright © 2016 Elsevier Ltd. All rights reserved.
Caruso, Valeria C; Pages, Daniel S; Sommer, Marc A; Groh, Jennifer M
2016-06-01
Saccadic eye movements can be elicited by more than one type of sensory stimulus. This implies substantial transformations of signals originating in different sense organs as they reach a common motor output pathway. In this study, we compared the prevalence and magnitude of auditory- and visually evoked activity in a structure implicated in oculomotor processing, the primate frontal eye fields (FEF). We recorded from 324 single neurons while 2 monkeys performed delayed saccades to visual or auditory targets. We found that 64% of FEF neurons were active on presentation of auditory targets and 87% were active during auditory-guided saccades, compared with 75 and 84% for visual targets and saccades. As saccade onset approached, the average level of population activity in the FEF became indistinguishable on visual and auditory trials. FEF activity was better correlated with the movement vector than with the target location for both modalities. In summary, the large proportion of auditory-responsive neurons in the FEF, the similarity between visual and auditory activity levels at the time of the saccade, and the strong correlation between the activity and the saccade vector suggest that auditory signals undergo tailoring to match roughly the strength of visual signals present in the FEF, facilitating accessing of a common motor output pathway. Copyright © 2016 the American Physiological Society.
Odour discrimination and identification are improved in early blindness.
Cuevas, Isabel; Plaza, Paula; Rombaux, Philippe; De Volder, Anne G; Renier, Laurent
2009-12-01
Previous studies showed that early-blind humans develop superior abilities in the use of their remaining senses, hypothetically due to a functional reorganization of the deprived visual brain areas. While auditory and tactile functions have long been investigated, little is known about the effects of early visual deprivation on olfactory processing, even though blind humans make extensive use of olfactory information in their daily life. Here we investigated olfactory discrimination and identification abilities in early blind subjects and age-matched sighted controls. Three levels of cuing were used in the identification task, i.e., free identification (no cue), categorization (semantic cues) and multiple choice (semantic and phonological cues). Early blind subjects significantly outperformed the controls in odour discrimination, free identification and categorization. In addition, the largest group difference was observed in free identification as compared with the categorization and multiple choice conditions. This indicated that better access to semantic information from odour perception accounted for part of the blind subjects' improved odour identification. We concluded that early blind subjects have both improved perceptual abilities and better access to information stored in semantic memory than sighted subjects.
Audio-Tactile Integration in Congenitally and Late Deaf Cochlear Implant Users
Nava, Elena; Bottari, Davide; Villwock, Agnes; Fengler, Ineke; Büchner, Andreas; Lenarz, Thomas; Röder, Brigitte
2014-01-01
Several studies conducted in mammals and humans have shown that multisensory processing may be impaired following congenital sensory loss and in particular if no experience is achieved within specific early developmental time windows known as sensitive periods. In this study we investigated whether basic multisensory abilities are impaired in hearing-restored individuals with deafness acquired at different stages of development. To this aim, we tested congenitally and late deaf cochlear implant (CI) recipients, age-matched with two groups of hearing controls, on an audio-tactile redundancy paradigm, in which reaction times to unimodal and crossmodal redundant signals were measured. Our results showed that both congenitally and late deaf CI recipients were able to integrate audio-tactile stimuli, suggesting that congenital and acquired deafness does not prevent the development and recovery of basic multisensory processing. However, we found that congenitally deaf CI recipients had a lower multisensory gain compared to their matched controls, which may be explained by their faster responses to tactile stimuli. We discuss this finding in the context of reorganisation of the sensory systems following sensory loss and the possibility that these changes cannot be “rewired” through auditory reafferentation. PMID:24918766
Infants' Visual Localization of Visual and Auditory Targets.
ERIC Educational Resources Information Center
Bechtold, A. Gordon; And Others
This study is an investigation of 2-month-old infants' abilities to visually localize visual and auditory peripheral stimuli. Each subject (N=40) was presented with 50 trials; 25 of these visual and 25 auditory. The infant was placed in a semi-upright infant seat positioned 122 cm from the center speaker of an arc formed by five loudspeakers. At…
Meyerhoff, Hauke S; Huff, Markus
2016-04-01
Human long-term memory for visual objects and scenes is tremendous. Here, we test how auditory information contributes to long-term memory performance for realistic scenes. In a total of six experiments, we manipulated the presentation modality (auditory, visual, audio-visual) as well as semantic congruency and temporal synchrony between auditory and visual information of brief filmic clips. Our results show that audio-visual clips generally elicit more accurate memory performance than unimodal clips. This advantage even increases with congruent visual and auditory information. However, violations of audio-visual synchrony hardly have any influence on memory performance. Memory performance remained intact even with a sequential presentation of auditory and visual information, but finally declined when the matching tracks of one scene were presented separately with intervening tracks during learning. With respect to memory performance, our results therefore show that audio-visual integration is sensitive to semantic congruency but remarkably robust against asymmetries between different modalities.
Modality-dependent effect of motion information in sensory-motor synchronised tapping.
Ono, Kentaro
2018-05-14
Synchronised action is important in everyday life. Generally, the auditory domain is more sensitive in coding temporal information, and previous studies have shown that auditory-motor synchronisation is much more precise than visuo-motor synchronisation. Interestingly, adding motion information improves synchronisation with visual stimuli, and the advantage of the auditory modality seems to diminish. However, whether adding motion information also improves auditory-motor synchronisation remained unknown. This study compared tapping accuracy with a stationary or moving stimulus in both auditory and visual modalities. Participants were instructed to tap in synchrony with the onset of a sound or flash in the stationary condition, while these stimuli were perceived as moving from side to side in the motion condition. The results demonstrated that synchronised tapping with a moving visual stimulus was significantly more accurate than tapping with a stationary visual stimulus, as previous studies have shown. However, tapping with a moving auditory stimulus was significantly poorer than tapping with a stationary auditory stimulus. Although motion information impaired audio-motor synchronisation, an advantage of the auditory modality over the visual modality remained. These findings likely reflect the higher temporal resolution of the auditory domain, which is likely due to physiological and structural differences between the auditory and visual pathways in the brain. Copyright © 2018 Elsevier B.V. All rights reserved.
Zupan, Barbra; Sussman, Joan E
2009-01-01
Experiment 1 examined modality preferences in children and adults with normal hearing for combined auditory-visual stimuli. Experiment 2 compared the children with normal hearing from Experiment 1 to children using cochlear implants who were participating in an auditory-emphasis therapy approach. A second objective in both experiments was to evaluate the role of familiarity in these preferences. Participants were exposed to randomized blocks of photographs and sounds of ten familiar and ten unfamiliar animals in auditory-only, visual-only and auditory-visual trials. Results indicated an overall auditory preference in children, regardless of hearing status, and a visual preference in adults. Familiarity only affected modality preferences in adults, who showed a strong visual preference to unfamiliar stimuli only. The similarity between the auditory responses of children with hearing loss and those of children with normal hearing is an original finding and lends support to an auditory emphasis in habilitation. Readers will be able to (1) describe the pattern of modality preferences reported in young children without hearing loss; (2) recognize that differences in communication mode may affect modality preferences in young children with hearing loss; and (3) understand the role of familiarity in modality preferences in children with and without hearing loss.
Keshavarz, Behrang; Campos, Jennifer L; DeLucia, Patricia R; Oberfeld, Daniel
2017-04-01
Estimating time to contact (TTC) involves multiple sensory systems, including vision and audition. Previous findings suggested that the ratio of an object's instantaneous optical size/sound intensity to its instantaneous rate of change in optical size/sound intensity (τ) drives TTC judgments. Other evidence has shown that heuristic-based cues are used, including final optical size or final sound pressure level. Most previous studies have used decontextualized and unfamiliar stimuli (e.g., geometric shapes on a blank background). Here we measured TTC estimates using a traffic scene with an approaching vehicle to evaluate the weights of visual and auditory TTC cues under more realistic conditions. Younger (18-39 years) and older (65+ years) participants made TTC estimates in three sensory conditions: visual-only, auditory-only, and audio-visual. Stimuli were presented within an immersive virtual-reality environment, and cue weights were calculated for both visual cues (e.g., visual τ, final optical size) and auditory cues (e.g., auditory τ, final sound pressure level). The results demonstrated the use of visual τ as well as heuristic cues in the visual-only condition. TTC estimates in the auditory-only condition, however, were primarily based on an auditory heuristic cue (final sound pressure level), rather than on auditory τ. In the audio-visual condition, the visual cues dominated overall, with the highest weight being assigned to visual τ by younger adults, and a more equal weighting of visual τ and heuristic cues in older adults. Overall, better characterizing the effects of combined sensory inputs, stimulus characteristics, and age on the cues used to estimate TTC will provide important insights into how these factors may affect everyday behavior.
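For reference, the τ variables described above are conventionally written as ratios of an instantaneous quantity to its rate of change (the optical form follows Lee's classic definition; the acoustic form mirrors the intensity ratio in this abstract):

    \tau_{\text{visual}}(t) = \frac{\theta(t)}{\mathrm{d}\theta(t)/\mathrm{d}t},
    \qquad
    \tau_{\text{auditory}}(t) = \frac{I(t)}{\mathrm{d}I(t)/\mathrm{d}t}

where θ(t) is the optical angle subtended by the approaching object and I(t) is the sound intensity at the observer; for a constant closing speed, each ratio approximates the remaining time to contact.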
Xie, Zilong; Reetzke, Rachel; Chandrasekaran, Bharath
2018-05-24
Increasing visual perceptual load can reduce pre-attentive auditory cortical activity to sounds, a reflection of the limited and shared attentional resources for sensory processing across modalities. Here, we demonstrate that modulating visual perceptual load can impact the early sensory encoding of speech sounds, and that the impact of visual load is highly dependent on the predictability of the incoming speech stream. Participants (n = 20, 9 females) performed a visual search task of high (target similar to distractors) and low (target dissimilar to distractors) perceptual load, while early auditory electrophysiological responses were recorded to native speech sounds. Speech sounds were presented either in a 'repetitive context', or a less predictable 'variable context'. Independent of auditory stimulus context, pre-attentive auditory cortical activity was reduced during high visual load, relative to low visual load. We applied a data-driven machine learning approach to decode speech sounds from the early auditory electrophysiological responses. Decoding performance was found to be poorer under conditions of high (relative to low) visual load, when the incoming acoustic stream was predictable. When the auditory stimulus context was less predictable, decoding performance was substantially greater for the high (relative to low) visual load conditions. Our results provide support for shared attentional resources between visual and auditory modalities that substantially influence the early sensory encoding of speech signals in a context-dependent manner. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
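A minimal decoding sketch under stated assumptions (the study's actual features and classifier are not given here; synthetic epochs, a logistic-regression classifier and stratified cross-validation are illustrative stand-ins):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    rng = np.random.default_rng(0)
    # Placeholder epochs: 200 trials x (32 channels x 50 time samples), flattened.
    X = rng.normal(size=(200, 32 * 50))
    y = rng.integers(0, 2, size=200)            # two hypothetical speech-sound classes

    clf = LogisticRegression(max_iter=1000)
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    scores = cross_val_score(clf, X, y, cv=cv)  # ~0.5 here, since these labels are random
    print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

With real epochs, above-chance accuracy in one visual-load condition but not another is the kind of contrast the study reports.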
de Heering, Adélaïde; Dormal, Giulia; Pelland, Maxime; Lewis, Terri; Maurer, Daphne; Collignon, Olivier
2016-11-21
Is a short and transient period of visual deprivation early in life sufficient to induce lifelong changes in how we attend to, and integrate, simple visual and auditory information [1, 2]? This question is of crucial importance given the recent demonstration in both animals and humans that a period of blindness early in life permanently affects the brain networks dedicated to visual, auditory, and multisensory processing [1-16]. To address this issue, we compared a group of adults who had been treated for congenital bilateral cataracts during early infancy with a group of normally sighted controls on a task requiring simple detection of lateralized visual and auditory targets, presented alone or in combination. Redundancy gains obtained from the audiovisual conditions were similar between groups and surpassed the reaction time distribution predicted by Miller's race model. However, in comparison to controls, cataract-reversal patients were faster at processing simple auditory targets and showed differences in how they shifted attention across modalities. Specifically, they were faster at switching attention from visual to auditory inputs than in the reverse situation, while an opposite pattern was observed for controls. Overall, these results reveal that the absence of visual input during the first months of life does not prevent the development of audiovisual integration but enhances the salience of simple auditory inputs, leading to a different crossmodal distribution of attentional resources between auditory and visual stimuli. Copyright © 2016 Elsevier Ltd. All rights reserved.
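The race-model bound referred to here is Miller's (1982) inequality on response-time distribution functions: for any time t,

    F_{AV}(t) \le F_{A}(t) + F_{V}(t), \qquad F_{m}(t) = P(\mathrm{RT}_{m} \le t)

Redundant-target responses fast enough to violate this bound, as observed in both groups here, cannot be explained by a race between independent unisensory channels and are taken as evidence of genuine audiovisual integration.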
The neural basis of visual dominance in the context of audio-visual object processing.
Schmid, Carmen; Büchel, Christian; Rose, Michael
2011-03-01
Visual dominance refers to the observation that, in bimodal environments, vision often has an advantage over the other senses in humans. A better memory performance for visual than for, e.g., auditory material is therefore assumed. However, the reason for this preferential processing and its relation to memory formation is largely unknown. In this fMRI experiment, we manipulated cross-modal competition and attention, two factors that both modulate bimodal stimulus processing and can affect memory formation. Pictures and sounds of objects were presented simultaneously at two levels of recognisability, thus manipulating the amount of cross-modal competition. Attention was manipulated via task instruction and directed either to the visual or the auditory modality. The factorial design allowed a direct comparison of the effects between both modalities. The resulting memory performance showed that visual dominance was limited to a distinct task setting. Visual object memory was superior to auditory object memory only when attention was allocated towards the competing modality. During encoding, cross-modal competition and attention towards the opposing domain reduced fMRI signals in both neural systems, but cross-modal competition was more pronounced in the auditory system, and only in auditory cortex was this competition further modulated by attention. Furthermore, the reduction of neural activity in auditory cortex during encoding was closely related to the behavioural auditory memory impairment. These results indicate that visual dominance emerges from a less pronounced vulnerability of the visual system to competition from the auditory domain. Copyright © 2010 Elsevier Inc. All rights reserved.
Three-Dimensional Models for Teaching Neuroanatomy to Blind Students.
ERIC Educational Resources Information Center
Pietsch, Paul
1980-01-01
An audio/tactile course enables blind college students to understand the anatomy of the human brain. Models were designed which allow tactile exploration of the visual fields, retina, optic nerves, and the subdivisions of the tracts and radiations in the brain. (Author/PHR)
NASA Astrophysics Data System (ADS)
Grice, Noreen A.; Mutchler, M.
2010-01-01
Astronomy was once considered a science restricted to fully sighted participants. But in the past two decades, accessible books with large print/Braille and touchable pictures have brought astronomy and space science to the hands and mind's eye of students, regardless of their visual ability. A new universally-designed tactile image featuring the Hubble mosaic of the Carina Nebula is being presented at this conference. The original dataset was obtained with Hubble's Advanced Camera for Surveys (ACS) hydrogen-alpha filter in 2005. It became an instant icon after being infused with additional color information from ground-based CTIO data, and released as Hubble's 17th anniversary image. Our tactile Carina Nebula promotes multi-mode learning about the entire life-cycle of stars, which is dramatically illustrated in this Hubble mosaic. When combined with descriptive text in print and Braille, the visual and tactile components seamlessly reach both sighted and blind populations. Specific touchable features of the tactile image identify the shapes and orientations of objects in the Carina Nebula that include star-forming regions, jets, pillars, dark and light globules, star clusters, shocks/bubbles, the Keyhole Nebula, and stellar death (Eta Carinae). Visit our poster paper to touch the Carina Nebula!
Bellis, Teri James; Ross, Jody
2011-09-01
It has been suggested that, in order to validate a diagnosis of (C)APD (central auditory processing disorder), testing using direct cross-modal analogs should be performed to demonstrate that deficits exist solely or primarily in the auditory modality (McFarland and Cacace, 1995; Cacace and McFarland, 2005). This modality-specific viewpoint is controversial and not universally accepted (American Speech-Language-Hearing Association [ASHA], 2005; Musiek et al, 2005). Further, no such analogs have been developed to date, and neither the feasibility of such testing in normally functioning individuals nor the concurrent validity of cross-modal analogs has been established. The purpose of this study was to investigate the feasibility of cross-modal testing by examining the performance of normal adults and children on four tests of central auditory function and their corresponding visual analogs. In addition, this study investigated the degree to which concurrent validity of auditory and visual versions of these tests could be demonstrated. An experimental repeated measures design was employed. Participants consisted of two groups (adults, n=10; children, n=10) with normal and symmetrical hearing sensitivity, normal or corrected-to-normal visual acuity, and no family or personal history of auditory/otologic, language, learning, neurologic, or related disorders. Visual analogs of four tests in common clinical use for the diagnosis of (C)APD were developed (Dichotic Digits [Musiek, 1983]; Frequency Patterns [Pinheiro and Ptacek, 1971]; Duration Patterns [Pinheiro and Musiek, 1985]; and the Random Gap Detection Test [RGDT; Keith, 2000]). Participants underwent two 1 hr test sessions separated by at least 1 wk. Order of sessions (auditory, visual) and tests within each session were counterbalanced across participants. ANOVAs (analyses of variance) were used to examine effects of group, modality, and laterality (for the Dichotic/Dichoptic Digits tests) or response condition (for the auditory and visual Frequency Patterns and Duration Patterns tests). Pearson product-moment correlations were used to investigate relationships between auditory and visual performance. Adults performed significantly better than children on the Dichotic/Dichoptic Digits tests. Results also revealed a significant effect of modality, with auditory better than visual, and a significant modality×laterality interaction, with a right-ear advantage seen for the auditory task and a left-visual-field advantage seen for the visual task. For the Frequency Patterns test and its visual analog, results revealed a significant modality×response condition interaction, with humming better than labeling for the auditory version but the reversed effect for the visual version. For Duration Patterns testing, visual performance was significantly poorer than auditory performance. Due to poor test-retest reliability and ceiling effects for the auditory and visual gap-detection tasks, analyses could not be performed. No cross-modal correlations were observed for any test. Results demonstrated that cross-modal testing is at least feasible using easily accessible computer hardware and software. The lack of any cross-modal correlations suggests independent processing mechanisms for auditory and visual versions of each task. Examination of performance in individuals with central auditory and pan-sensory disorders is needed to determine the utility of cross-modal analogs in the differential diagnosis of (C)APD. American Academy of Audiology.
TMS of the occipital cortex induces tactile sensations in the fingers of blind Braille readers.
Ptito, M; Fumal, A; de Noordhout, A Martens; Schoenen, J; Gjedde, A; Kupers, R
2008-01-01
Various non-visual inputs produce cross-modal responses in the visual cortex of early blind subjects. In order to determine the qualitative experience associated with these occipital activations, we systematically stimulated the entire occipital cortex using single pulse transcranial magnetic stimulation (TMS) in early blind subjects and in blindfolded seeing controls. Whereas blindfolded seeing controls reported only phosphenes following occipital cortex stimulation, some of the blind subjects reported tactile sensations in the fingers that were somatotopically organized onto the visual cortex. The number of cortical sites inducing tactile sensations appeared to be related to the number of hours of Braille reading per day, Braille reading speed and dexterity. These data, taken in conjunction with previous anatomical, behavioural and functional imaging results, suggest the presence of a polysynaptic cortical pathway between the somatosensory cortex and the visual cortex in early blind subjects. These results also add new evidence that the activity of the occipital lobe in the blind takes its qualitative expression from the character of its new input source, therefore supporting the cortical deference hypothesis.
Chen, Yi-Chuan; Lewis, Terri L; Shore, David I; Maurer, Daphne
2017-02-20
Temporal simultaneity provides an essential cue for integrating multisensory signals into a unified perception. Early visual deprivation, in both animals and humans, leads to abnormal neural responses to audiovisual signals in subcortical and cortical areas [1-5]. Behavioral deficits in integrating complex audiovisual stimuli in humans are also observed [6, 7]. It remains unclear whether early visual deprivation affects visuotactile perception similarly to audiovisual perception and whether the consequences for either pairing differ after monocular versus binocular deprivation [8-11]. Here, we evaluated the impact of early visual deprivation on the perception of simultaneity for audiovisual and visuotactile stimuli in humans. We tested patients born with dense cataracts in one or both eyes that blocked all patterned visual input until the cataractous lenses were removed and the affected eyes fitted with compensatory contact lenses (mean duration of deprivation = 4.4 months; range = 0.3-28.8 months). Both monocularly and binocularly deprived patients demonstrated lower precision in judging audiovisual simultaneity. However, qualitatively different outcomes were observed for the two patient groups: the performance of monocularly deprived patients matched that of young children at immature stages, whereas that of binocularly deprived patients did not match any stage in typical development. Surprisingly, patients performed normally in judging visuotactile simultaneity after either monocular or binocular deprivation. Therefore, early binocular input is necessary to develop normal neural substrates for simultaneity perception of visual and auditory events but not visual and tactile events. Copyright © 2017 Elsevier Ltd. All rights reserved.
Learning and recognition of tactile temporal sequences by mice and humans
Bale, Michael R; Bitzidou, Malamati; Pitas, Anna; Brebner, Leonie S; Khazim, Lina; Anagnou, Stavros T; Stevenson, Caitlin D; Maravall, Miguel
2017-01-01
The world around us is replete with stimuli that unfold over time. When we hear an auditory stream like music or speech or scan a texture with our fingertip, physical features in the stimulus are concatenated in a particular order. This temporal patterning is critical to interpreting the stimulus. To explore the capacity of mice and humans to learn tactile sequences, we developed a task in which subjects had to recognise a continuous modulated noise sequence delivered to whiskers or fingertips, defined by its temporal patterning over hundreds of milliseconds. GO and NO-GO sequences differed only in that the order of their constituent noise modulation segments was temporally scrambled. Both mice and humans efficiently learned tactile sequences. Mouse sequence recognition depended on detecting transitions in noise amplitude; animals could base their decision on the earliest information available. Humans appeared to use additional cues, including the duration of noise modulation segments. DOI: http://dx.doi.org/10.7554/eLife.27333.001 PMID:28812976
Pure associative tactile agnosia for the left hand: clinical and anatomo-functional correlations.
Veronelli, Laura; Ginex, Valeria; Dinacci, Daria; Cappa, Stefano F; Corbo, Massimo
2014-09-01
Associative tactile agnosia (TA) is defined as the inability to associate information about object sensory properties derived through the tactile modality with previously acquired knowledge about object identity. The impairment is often described after a lesion involving the parietal cortex (Caselli, 1997; Platz, 1996). We report the case of SA, a right-handed 61-year-old man affected by a first-ever right hemispheric hemorrhagic stroke. The neurological examination was normal, ruling out major somaesthetic and motor impairment; brain magnetic resonance imaging (MRI) confirmed the presence of a right subacute hemorrhagic lesion limited to the post-central and supra-marginal gyri. A comprehensive neuropsychological evaluation detected a selective inability to name objects when handled with the left hand, in the absence of other cognitive deficits. A series of experiments was conducted in order to assess each stage of tactile recognition processing using the same stimulus sets: materials, 3D geometrical shapes, real objects and letters. SA and seven matched controls underwent the same experimental tasks during four sessions on consecutive days. Tactile discrimination, recognition, pantomime, drawing after haptic exploration out of vision and tactile-visual matching abilities were assessed. In addition, we looked for the presence of a supra-modal impairment of spatial perception and of specific difficulties in programming exploratory movements during recognition. Tactile discrimination was intact for all the stimuli tested. In contrast, SA was able neither to recognize nor to pantomime real objects manipulated with the left hand out of vision, while he identified them with the right hand without hesitation. Tactile-visual matching was intact. Furthermore, SA was able to grossly reproduce the global shape in drawings but failed to extract details of objects after left-hand manipulation, and he could not identify objects after looking at his own drawings. This case confirms the existence of selective associative TA as a left hand-specific deficit in recognizing objects. This deficit is not related to spatial perception or to the programming of exploratory movements. The cross-modal transfer of information via visual perception permits the activation of a partially degraded image, which alone does not allow the proper recognition of the initial tactile stimulus. Copyright © 2014 Elsevier Ltd. All rights reserved.
Focused and shifting attention in children with heavy prenatal alcohol exposure.
Mattson, Sarah N; Calarco, Katherine E; Lang, Aimée R
2006-05-01
Attention deficits are a hallmark of the teratogenic effects of alcohol. However, characterization of these deficits remains inconclusive. Children with heavy prenatal alcohol exposure and nonexposed controls were evaluated using a paradigm consisting of three conditions: visual focus, auditory focus, and auditory-visual shift of attention. For the focus conditions, participants responded manually to visual or auditory targets. For the shift condition, participants alternated responses between visual targets and auditory targets. For the visual focus condition, alcohol-exposed children had lower accuracy and slower reaction time for all intertarget intervals (ITIs), while on the auditory focus condition, alcohol-exposed children were less accurate but displayed slower reaction time only on the longest ITI. Finally, for the shift condition, the alcohol-exposed group was accurate but had slowed reaction times. These results indicate that children with heavy prenatal alcohol exposure have pervasive deficits in visual focused attention and deficits in maintaining auditory attention over time. However, no deficits were noted in the ability to disengage and reengage attention when required to shift attention between visual and auditory stimuli, although reaction times to shift were slower. Copyright (c) 2006 APA, all rights reserved.
Barone, Pascal; Chambaudie, Laure; Strelnikov, Kuzma; Fraysse, Bernard; Marx, Mathieu; Belin, Pascal; Deguine, Olivier
2016-10-01
Due to signal distortion, speech comprehension in cochlear-implanted (CI) patients relies strongly on visual information, a compensatory strategy supported by important cortical crossmodal reorganisations. Though crossmodal interactions are evident for speech processing, it is unclear whether a visual influence is observed in CI patients during non-linguistic visual-auditory processing, such as face-voice interactions, which are important in social communication. We analyse and compare visual-auditory interactions in CI patients and normal-hearing subjects (NHS) at equivalent auditory performance levels. Proficient CI patients and NHS performed a voice-gender categorisation in the visual-auditory modality from a morphing-generated voice continuum between male and female speakers, while ignoring the presentation of a male or female visual face. Our data show that during the face-voice interaction, deaf CI patients are strongly influenced by visual information when performing an auditory gender categorisation task, in spite of maximal recovery of auditory speech comprehension. No such effect is observed in NHS, even under conditions of CI simulation. Our hypothesis is that the functional crossmodal reorganisation that occurs in deafness could influence nonverbal processing, such as face-voice interaction, which is important for patients' internal supramodal representations. Copyright © 2016 Elsevier Ltd. All rights reserved.
Braille in the Sighted: Teaching Tactile Reading to Sighted Adults.
Bola, Łukasz; Siuda-Krzywicka, Katarzyna; Paplińska, Małgorzata; Sumera, Ewa; Hańczur, Paweł; Szwed, Marcin
2016-01-01
Blind people are known to have superior perceptual abilities in their remaining senses. Several studies suggest that these enhancements are dependent on the specific experience of blind individuals, who use those remaining senses more than sighted subjects. In line with this view, sighted subjects, when trained, are able to significantly progress in relatively simple tactile tasks. However, the case of complex tactile tasks is less obvious, as some studies suggest that visual deprivation itself could confer large advantages in learning them. It remains unclear to what extent those complex skills, such as braille reading, can be learnt by sighted subjects. Here we enrolled twenty-nine sighted adults, mostly braille teachers and educators, in a 9-month braille reading course. At the beginning of the course, all subjects were naive in tactile braille reading. After the course, almost all were able to read whole braille words at a mean speed of 6 words-per-minute. Subjects with low tactile acuity did not differ significantly in braille reading speed from the rest of the group, indicating that low tactile acuity is not a limiting factor for learning braille, at least at this early stage of learning. Our study shows that most sighted adults can learn whole-word braille reading, given the right method and a considerable amount of motivation. The adult sensorimotor system can thus adapt, to some level, to very complex tactile tasks without visual deprivation. The pace of learning in our group was comparable to congenitally and early blind children learning braille in primary school, which suggests that the blind's mastery of complex tactile tasks can, to a large extent, be explained by experience-dependent mechanisms.
Premotor cortex is sensitive to auditory-visual congruence for biological motion.
Wuerger, Sophie M; Parkes, Laura; Lewis, Penelope A; Crocker-Buque, Alex; Rutschmann, Roland; Meyer, Georg F
2012-03-01
The auditory and visual perception systems have developed special processing strategies for ecologically valid motion stimuli, utilizing some of the statistical properties of the real world. A well-known example is the perception of biological motion, for example, the perception of a human walker. The aim of the current study was to identify the cortical network involved in the integration of auditory and visual biological motion signals. We first determined the cortical regions of auditory and visual coactivation (Experiment 1); a conjunction analysis based on unimodal brain activations identified four regions: middle temporal area, inferior parietal lobule, ventral premotor cortex, and cerebellum. The brain activations arising from bimodal motion stimuli (Experiment 2) were then analyzed within these regions of coactivation. Auditory footsteps were presented concurrently with either an intact visual point-light walker (biological motion) or a scrambled point-light walker; auditory and visual motion in depth (walking direction) could either be congruent or incongruent. Our main finding is that motion incongruency (across modalities) increases the activity in the ventral premotor cortex, but only if the visual point-light walker is intact. Our results extend our current knowledge by providing new evidence consistent with the idea that the premotor area assimilates information across the auditory and visual modalities by comparing the incoming sensory input with an internal representation.
Rendering visual events as sounds: Spatial attention capture by auditory augmented reality.
Stone, Scott A; Tata, Matthew S
2017-01-01
Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help with spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting salient visual events and rendering them as localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as to determine the direction of motion of a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurate encoding of the direction of visual motion. Future successes are probable, as neuromorphic devices are likely to become faster and smaller, making this system much more feasible. PMID:28792518
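The core of the described system is a mapping from the image coordinates of brightness-change events to a localizable sound. A minimal sketch of one plausible mapping, using an equal-power stereo panning law; the function name, the gain law, and the event burst are assumptions, not the authors' implementation:

```python
# Sketch (assumed, not the published implementation): map the horizontal
# position of detected brightness-change events to stereo pan, so that
# salient visual events become localizable auditory events.
import numpy as np

SENSOR_WIDTH = 240  # DAVIS 240B horizontal resolution

def events_to_pan(event_xs):
    """Return (left_gain, right_gain) for a burst of events at pixel columns event_xs."""
    x = np.mean(event_xs) / (SENSOR_WIDTH - 1)  # 0 = far left, 1 = far right
    theta = x * (np.pi / 2)                     # equal-power panning law
    return np.cos(theta), np.sin(theta)

# A hypothetical burst of events near the right edge of the field of view
left, right = events_to_pan([200, 205, 210])
print(f"left gain = {left:.2f}, right gain = {right:.2f}")
```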
Task Analysis Schema Based on Cognitive Style and Supplantation.
Ausburn, F. B.
1980-02-01
[OCR-damaged technical report, University of Oklahoma College of Education. The legible fragments describe a task-analysis schema matching task requirements to cognitive-style dimensions, including tasks requiring the use of kinesthetic or tactile stimuli, the visual/haptic dimension (preference for kinesthetic stimuli; the ability to transform kinesthetic stimuli into visual images; the ability to learn directly from tactile or kinesthetic impressions), and field independence/dependence.]
1998-01-01
[OCR-damaged report fragment. The legible text describes an experimental setup consisting of a videomicroscopy system and a tactile stimulator, built to capture real-time images of the contact region of the fingerpad during active and passive touch experiments (section headings: 4.3.1 Videomicroscopy system; 4.3.2 Tactile stimulator system; 4.3.3 Real-time imaging setup; 4.3.4 Active and passive touch experiments.)]
Tang, Xiaoyu; Li, Chunlin; Li, Qi; Gao, Yulin; Yang, Weiping; Yang, Jingjing; Ishikawa, Soushirou; Wu, Jinglong
2013-10-11
Utilizing the high temporal resolution of event-related potentials (ERPs), we examined how visual spatial or temporal cues modulated auditory stimulus processing. The visual spatial cue (VSC) induces orienting of attention to spatial locations; the visual temporal cue (VTC) induces orienting of attention to temporal intervals. Participants were instructed to respond to auditory targets. Behavioral responses to auditory stimuli following the VSC were faster and more accurate than those following the VTC. The VSC and VTC had the same effect on the auditory N1 (150-170 ms after stimulus onset). The mean amplitude of the auditory P1 (90-110 ms) in the VSC condition was larger than that in the VTC condition, and the mean amplitude of the late positivity (300-420 ms) in the VTC condition was larger than that in the VSC condition. These findings suggest that the modulations of auditory stimulus processing by visually induced spatial and temporal orienting of attention are different but partially overlapping. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
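The ERP measures reported here are mean amplitudes within fixed post-stimulus windows (P1: 90-110 ms; N1: 150-170 ms; late positivity: 300-420 ms). A small illustrative sketch of that computation on simulated epoched data; the sampling rate, trial count, and noise level are assumptions, not values from the study:

```python
# Illustrative sketch (not the authors' pipeline): mean ERP amplitude in
# fixed time windows, from epochs time-locked to stimulus onset.
import numpy as np

FS = 1000  # assumed sampling rate in Hz; sample 0 = stimulus onset

def mean_amplitude(epochs, t_start_ms, t_end_ms):
    """epochs: (n_trials, n_samples) array; returns grand-mean amplitude."""
    i0, i1 = int(t_start_ms * FS / 1000), int(t_end_ms * FS / 1000)
    return epochs[:, i0:i1].mean()

rng = np.random.default_rng(1)
epochs = rng.normal(0.0, 2.0, size=(60, 500))  # 60 hypothetical trials, 500 ms each

for name, (t0, t1) in {"P1": (90, 110), "N1": (150, 170),
                       "late positivity": (300, 420)}.items():
    print(f"{name}: {mean_amplitude(epochs, t0, t1):+.2f} µV")
```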
Ostrand, Rachel; Blumstein, Sheila E.; Ferreira, Victor S.; Morgan, James L.
2016-01-01
Human speech perception often includes both an auditory and visual component. A conflict in these signals can result in the McGurk illusion, in which the listener perceives a fusion of the two streams, implying that information from both has been integrated. We report two experiments investigating whether auditory-visual integration of speech occurs before or after lexical access, and whether the visual signal influences lexical access at all. Subjects were presented with McGurk or Congruent primes and performed a lexical decision task on related or unrelated targets. Although subjects perceived the McGurk illusion, McGurk and Congruent primes with matching real-word auditory signals equivalently primed targets that were semantically related to the auditory signal, but not targets related to the McGurk percept. We conclude that the time course of auditory-visual integration is dependent on the lexicality of the auditory and visual input signals, and that listeners can lexically access one word and yet consciously perceive another. PMID:27011021
2011-01-01
Background The measurement of electrical signals in the form of event-related potentials (ERPs), recorded during auditory and visual oddball paradigms, is a recommended method for examining the relationship between neuronal activity in schizophrenic patients and that in normal subjects. The aim of this study was to discriminate the activation changes evoked by auditory and visual ERP stimulation between schizophrenic patients and normal subjects. Methods Forty-three schizophrenic patients were selected as the experimental group, and 40 healthy subjects with no medical history of any kind of psychiatric disease, neurological disease, or drug abuse were recruited as a control group. Auditory and visual ERPs were studied with an oddball paradigm. All data were analyzed with SPSS statistical software, version 10.0. Results In the comparison of auditory and visual ERPs between the schizophrenic patients and healthy subjects, the P300 amplitude at Fz, Cz, and Pz and the N100, N200, and P200 latencies at Fz, Cz, and Pz were significantly different. The cognitive processing reflected by the auditory and visual P300 latency to rare target stimuli is probably an indicator of cognitive function in schizophrenic patients. Conclusions This study shows that the application of auditory and visual oddball paradigms identifies task-relevant sources of activity and allows separation of regions that have different response properties. Our study indicates that there may be slowness of automatic and controlled cognitive processing of visual ERPs compared with auditory ERPs in schizophrenic patients. The activation changes of visual evoked potentials are more regionally specific than those of auditory evoked potentials. PMID:21542917
Distortions of Subjective Time Perception Within and Across Senses
van Wassenhove, Virginie; Buonomano, Dean V.; Shimojo, Shinsuke; Shams, Ladan
2008-01-01
Background The ability to estimate the passage of time is of fundamental importance for perceptual and cognitive processes. One experience of time is the perception of duration, which is not isomorphic to physical duration and can be distorted by a number of factors. Yet, the critical features generating these perceptual shifts in subjective duration are not understood. Methodology/Findings We used prospective duration judgments within and across sensory modalities to examine the effect of stimulus predictability and feature change on the perception of duration. First, we found robust distortions of perceived duration in auditory, visual and auditory-visual presentations despite the predictability of the feature changes in the stimuli. For example, a looming disc embedded in a series of steady discs led to time dilation, whereas a steady disc embedded in a series of looming discs led to time compression. Second, we addressed whether visual (auditory) inputs could alter the perception of duration of auditory (visual) inputs. When participants were presented with incongruent audio-visual stimuli, the perceived duration of auditory events could be shortened or lengthened by the presence of conflicting visual information; however, the perceived duration of visual events was seldom distorted by the presence of auditory information and was never perceived shorter than their actual durations. Conclusions/Significance These results support the existence of multisensory interactions in the perception of duration and, importantly, suggest that vision can modify auditory temporal perception in a pure timing task. Insofar as distortions in subjective duration can neither be accounted for by the unpredictability of an auditory, visual or auditory-visual event, we propose that it is the intrinsic features of the stimulus that critically affect subjective time distortions. PMID:18197248
Visual Information Present in Infragranular Layers of Mouse Auditory Cortex.
Morrill, Ryan J; Hasenstaub, Andrea R
2018-03-14
The cerebral cortex is a major hub for the convergence and integration of signals from across the sensory modalities; sensory cortices, including primary regions, are no exception. Here we show that visual stimuli influence neural firing in the auditory cortex of awake male and female mice, using multisite probes to sample single units across multiple cortical layers. We demonstrate that visual stimuli influence firing in both primary and secondary auditory cortex. We then determine the laminar location of recording sites through electrode track tracing with fluorescent dye and optogenetic identification using layer-specific markers. Spiking responses to visual stimulation occur deep in auditory cortex and are particularly prominent in layer 6. Visual modulation of firing rate occurs more frequently at areas with secondary-like auditory responses than at those with primary-like responses. Auditory cortical responses to drifting visual gratings are not orientation-tuned, unlike visual cortex responses. The deepest cortical layers thus appear to be an important locus for cross-modal integration in auditory cortex. SIGNIFICANCE STATEMENT The deepest layers of the auditory cortex are often considered its most enigmatic, possessing a wide range of cell morphologies and atypical sensory responses. Here we show that, in mouse auditory cortex, these layers represent a locus of cross-modal convergence, containing many units responsive to visual stimuli. Our results suggest that this visual signal conveys the presence and timing of a stimulus rather than specifics about that stimulus, such as its orientation. These results shed light on both how and what types of cross-modal information are integrated at the earliest stages of sensory cortical processing. Copyright © 2018 the authors 0270-6474/18/382854-09$15.00/0.
Tactile perceptual learning: learning curves and transfer to the contralateral finger.
Kaas, Amanda L; van de Ven, Vincent; Reithler, Joel; Goebel, Rainer
2013-02-01
Tactile perceptual learning has been shown to improve performance on tactile tasks, but there is no agreement about the extent of transfer to untrained skin locations. The lack of such transfer is often seen as a behavioral index of the contribution of early somatosensory brain regions. Moreover, the time course of improvements has never been described explicitly. Sixteen subjects were trained on the Ludvigh task (a tactile vernier task) on four subsequent days. On the fifth day, transfer of learning to the non-trained contralateral hand was tested. In five subjects, we explored to what extent training effects were retained approximately 1.5 years after the final training session, expecting to find long-term retention of learning effects after training. Results showed that tactile perceptual learning mainly occurred offline, between sessions. Training effects did not transfer initially, but became fully available to the untrained contralateral hand after a few additional training runs. After 1.5 years, training effects were not fully washed out and could be recuperated within a single training session. Interpreted in the light of theories of visual perceptual learning, these results suggest that tactile perceptual learning is not fundamentally different from visual perceptual learning, but might proceed at a slower pace due to procedural and task differences, thus explaining the apparent divergence in the amount of transfer and long-term retention.
The singular nature of auditory and visual scene analysis in autism
Lin, I.-Fan; Shirama, Aya; Kato, Nobumasa
2017-01-01
Individuals with autism spectrum disorder often have difficulty acquiring relevant auditory and visual information in daily environments, despite not being diagnosed as hearing impaired or having low vision. Recent psychophysical and neurophysiological studies have shown that autistic individuals have highly specific individual differences at various levels of information processing, including feature extraction, automatic grouping and top-down modulation in auditory and visual scene analysis. Comparison of the characteristics of scene analysis between auditory and visual modalities reveals some essential commonalities, which could provide clues about the underlying neural mechanisms. Further progress in this line of research may suggest effective methods for diagnosing and supporting autistic individuals. This article is part of the themed issue 'Auditory and visual scene analysis'. PMID:28044025
Filling-in visual motion with sounds.
Väljamäe, A; Soto-Faraco, S
2008-10-01
Information about the motion of objects can be extracted by multiple sensory modalities, and, as a consequence, object motion perception typically involves the integration of multi-sensory information. Often, in naturalistic settings, the flow of such information can be rather discontinuous (e.g. a cat racing through the furniture in a cluttered room is partly seen and partly heard). This study addressed audio-visual interactions in the perception of time-sampled object motion by measuring adaptation after-effects. We found significant auditory after-effects following adaptation to unisensory auditory and visual motion in depth, sampled at 12.5 Hz. The visually induced (cross-modal) auditory motion after-effect was eliminated if visual adaptors flashed at half of the rate (6.25 Hz). Remarkably, the addition of the high-rate acoustic flutter (12.5 Hz) to this ineffective, sparsely time-sampled, visual adaptor restored the auditory after-effect to a level comparable to what was seen with high-rate bimodal adaptors (flashes and beeps). Our results suggest that this auditory-induced reinstatement of the motion after-effect from the poor visual signals resulted from the occurrence of sound-induced illusory flashes. This effect was found to be dependent both on the directional congruency between modalities and on the rate of auditory flutter. The auditory filling-in of time-sampled visual motion supports the feasibility of using reduced frame rate visual content in multisensory broadcasting and virtual reality applications.
Most, Tova; Aviner, Chen
2009-01-01
This study evaluated the benefits of cochlear implant (CI) with regard to emotion perception of participants differing in their age of implantation, in comparison to hearing aid users and adolescents with normal hearing (NH). Emotion perception was examined by having the participants identify happiness, anger, surprise, sadness, fear, and disgust. The emotional content was placed upon the same neutral sentence. The stimuli were presented in auditory, visual, and combined auditory-visual modes. The results revealed better auditory identification by the participants with NH in comparison to all groups of participants with hearing loss (HL). No differences were found among the groups with HL in each of the 3 modes. Although auditory-visual perception was better than visual-only perception for the participants with NH, no such differentiation was found among the participants with HL. The results question the efficiency of some currently used CIs in providing the acoustic cues required to identify the speaker's emotional state.
Ayoub, Hadeel M; Newcomb, Tara L; McCombs, Gayle B; Bonnie, Marshall
2015-02-01
This study compared the effectiveness of the VELscope® Vx versus visual and tactile intraoral examination in detecting oral lesions in an adult, high-risk population. The pilot study compared the intraoral findings between the 2 examination types. The sample comprised 30 participants who were addicted either to cigarettes alone or to both cigarettes and hookah (dual addiction). The high-risk population was defined as males who were current cigarette smokers or had a dual addiction. Two trained and experienced licensed dental hygienists conducted all examinations. Throughout the study, all visual and tactile intraoral examinations were conducted first by one dental hygienist, followed by the VELscope® Vx fluorescence examinations by the second dental hygienist. All subjects received an inspection of the lips, labial and buccal mucosa, floor of the mouth, dorsal, ventral and lateral sides of the tongue, hard and soft palate, and visual inspection of the oropharynx and uvula. Both evaluations took place in 1 visit in the Dental Hygiene Research Center at Old Dominion University and external sites. All participants received oral cancer screening information, recommendations, referrals for tobacco cessation programs and brochures on the 2 types of examinations conducted. Participants were considered high risk based on demographics (current smokers and mostly males). Neither the visual and tactile intraoral examination nor the VELscope® Vx examination revealed positive lesions. No lesions were detected; therefore, no referrals were made. Data indicated the duration of tobacco use was significantly higher in cigarette smokers (14.1 years) than in dual addiction smokers (5 years) (p<0.005). The average number of cigarettes smoked per day was 13.5 for cigarette smokers compared to 14.2 for dual addiction smokers. Results from this study suggest the visual and tactile intraoral examination produced results comparable to the VELscope® Vx examination. Findings from this study support that the VELscope® Vx is still considered an adjunct technology and cannot be used exclusively for oral cancer screening. Copyright © 2015 The American Dental Hygienists' Association.
Auditory, visual, and bimodal data link displays and how they support pilot performance.
Steelman, Kelly S; Talleur, Donald; Carbonari, Ronald; Yamani, Yusuke; Nunes, Ashley; McCarley, Jason S
2013-06-01
The design of data link messaging systems to ensure optimal pilot performance requires empirical guidance. The current study examined the effects of display format (auditory, visual, or bimodal) and visual display position (adjacent to instrument panel or mounted on console) on pilot performance. Subjects performed five 20-min simulated single-pilot flights. During each flight, subjects received messages from a simulated air traffic controller. Messages were delivered visually, auditorily, or bimodally. Subjects were asked to read back each message aloud and then perform the instructed maneuver. Visual and bimodal displays engendered lower subjective workload and better altitude tracking than auditory displays. Readback times were shorter with the two unimodal visual formats than with any of the other three formats. Advantages for the unimodal visual format ranged in size from 2.8 s to 3.8 s relative to the bimodal upper left and auditory formats, respectively. Auditory displays allowed slightly more head-up time (3 to 3.5 seconds per minute) than either visual or bimodal displays. Position of the visual display had only modest effects on any measure. Combined with the results from previous studies by Helleberg and Wickens and Lancaster and Casali the current data favor visual and bimodal displays over auditory displays; unimodal auditory displays were favored by only one measure, head-up time, and only very modestly. Data evinced no statistically significant effects of visual display position on performance, suggesting that, contrary to expectations, the placement of a visual data link display may be of relatively little consequence to performance.
Harris, Jill; Kamke, Marc R
2014-11-01
Selective attention fundamentally alters sensory perception, but little is known about the functioning of attention in individuals who use a cochlear implant. This study aimed to investigate visual and auditory attention in adolescent cochlear implant users. Event related potentials were used to investigate the influence of attention on visual and auditory evoked potentials in six cochlear implant users and age-matched normally-hearing children. Participants were presented with streams of alternating visual and auditory stimuli in an oddball paradigm: each modality contained frequently presented 'standard' and infrequent 'deviant' stimuli. Across different blocks attention was directed to either the visual or auditory modality. For the visual stimuli attention boosted the early N1 potential, but this effect was larger for cochlear implant users. Attention was also associated with a later P3 component for the visual deviant stimulus, but there was no difference between groups in the later attention effects. For the auditory stimuli, attention was associated with a decrease in N1 latency as well as a robust P3 for the deviant tone. Importantly, there was no difference between groups in these auditory attention effects. The results suggest that basic mechanisms of auditory attention are largely normal in children who are proficient cochlear implant users, but that visual attention may be altered. Ultimately, a better understanding of how selective attention influences sensory perception in cochlear implant users will be important for optimising habilitation strategies. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
The Effect of Early Visual Deprivation on the Neural Bases of Auditory Processing.
Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte
2016-02-03
Transient congenital visual deprivation affects visual and multisensory processing. In contrast, the extent to which it affects auditory processing has not been investigated systematically. Research in permanently blind individuals has revealed brain reorganization during auditory processing, involving both intramodal and crossmodal plasticity. The present study investigated the effect of transient congenital visual deprivation on the neural bases of auditory processing in humans. Cataract-reversal individuals and normally sighted controls performed a speech-in-noise task while undergoing functional magnetic resonance imaging. Although there were no behavioral group differences, groups differed in auditory cortical responses: in the normally sighted group, auditory cortex activation increased with increasing noise level, whereas in the cataract-reversal group, no activation difference was observed across noise levels. An auditory activation of visual cortex was not observed at the group level in cataract-reversal individuals. The present data suggest prevailing auditory processing advantages after transient congenital visual deprivation, even many years after sight restoration. The present study demonstrates that people whose sight was restored after a transient period of congenital blindness show more efficient cortical processing of auditory stimuli (here speech), similarly to what has been observed in congenitally permanently blind individuals. These results underscore the importance of early sensory experience in permanently shaping brain function. Copyright © 2016 the authors 0270-6474/16/361620-11$15.00/0.
Hasegawa, Naoya; Takeda, Kenta; Sakuma, Moe; Mani, Hiroki; Maejima, Hiroshi; Asaka, Tadayoshi
2017-10-01
Augmented sensory biofeedback (BF) for postural control is widely used to improve postural stability. However, which sensory information is most effective in BF systems for motor learning of postural control is still unknown. The purpose of this study was to investigate the learning effects of visual versus auditory BF training in dynamic postural control. Eighteen healthy young adults were randomly divided into two groups (visual BF and auditory BF). In test sessions, participants were asked to bring the real-time center of pressure (COP) in line with a hidden target by body sway in the sagittal plane. The target moved in seven cycles of sine curves at 0.23 Hz in the vertical direction on a monitor. In training sessions, the visual and auditory BF groups were required to change the magnitude of a visual circle and a sound, respectively, according to the distance between the COP and the target in order to reach the target. The perceptual magnitudes of visual and auditory BF were equalized according to Stevens' power law. At the retention test, the auditory but not the visual BF group demonstrated decreased postural performance errors in both the spatial and temporal parameters under the no-feedback condition. These findings suggest that visual BF increases the dependence on visual information to control postural performance, while auditory BF may enhance the integration of the proprioceptive sensory system, which contributes to motor learning without BF. These results suggest that auditory BF training improves motor learning of dynamic postural control. Copyright © 2017 Elsevier B.V. All rights reserved.
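Equalizing perceptual magnitudes across modalities follows Stevens' power law, psi = k * phi^a: to make two modalities feel equally strong for the same COP-target error, invert the law per modality. A hedged sketch of that idea; the exponents and scaling constants are textbook approximations, not values from the paper:

```python
# Sketch of the perceptual-equalization idea (assumed, not the authors' code).
# By Stevens' power law, perceived magnitude psi = k * phi**a; inverting gives
# the physical intensity phi needed to produce a desired perceived magnitude.
def physical_intensity(psi, k, a):
    return (psi / k) ** (1.0 / a)

error = 0.5                # hypothetical normalized COP-target distance
psi_target = 10.0 * error  # desired perceived magnitude, common to both modalities

circle_size = physical_intensity(psi_target, k=1.0, a=0.7)   # apparent area, a ~ 0.7
sound_level = physical_intensity(psi_target, k=1.0, a=0.67)  # loudness, a ~ 0.67
print(f"visual: {circle_size:.2f}, auditory: {sound_level:.2f}")
```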
Blindness enhances tactile acuity and haptic 3-D shape discrimination.
Norman, J Farley; Bartholomew, Ashley N
2011-10-01
This study compared the sensory and perceptual abilities of the blind and sighted. The 32 participants were required to perform two tasks: tactile grating orientation discrimination (to determine tactile acuity) and haptic three-dimensional (3-D) shape discrimination. The results indicated that the blind outperformed their sighted counterparts (individually matched for both age and sex) on both tactile tasks. The improvements in tactile acuity that accompanied blindness occurred for all blind groups (congenital, early, and late). However, the improvements in haptic 3-D shape discrimination only occurred for the early-onset and late-onset blindness groups; the performance of the congenitally blind was no better than that of the sighted controls. The results of the present study demonstrate that blindness does lead to an enhancement of tactile abilities, but they also suggest that early visual experience may play a role in facilitating haptic 3-D shape discrimination.
Decoding Visual Location From Neural Patterns in the Auditory Cortex of the Congenitally Deaf
Almeida, Jorge; He, Dongjun; Chen, Quanjing; Mahon, Bradford Z.; Zhang, Fan; Gonçalves, Óscar F.; Fang, Fang; Bi, Yanchao
2016-01-01
Sensory cortices of individuals who are congenitally deprived of a sense can exhibit considerable plasticity and be recruited to process information from the senses that remain intact. Here, we explored whether the auditory cortex of congenitally deaf individuals represents visual field location of a stimulus—a dimension that is represented in early visual areas. We used functional MRI to measure neural activity in auditory and visual cortices of congenitally deaf and hearing humans while they observed stimuli typically used for mapping visual field preferences in visual cortex. We found that the location of a visual stimulus can be successfully decoded from the patterns of neural activity in auditory cortex of congenitally deaf but not hearing individuals. This is particularly true for locations within the horizontal plane and within peripheral vision. These data show that the representations stored within neuroplastically changed auditory cortex can align with dimensions that are typically represented in visual cortex. PMID:26423461
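The decoding analysis described above is multivariate pattern classification with cross-validation. A minimal sketch of the approach on simulated voxel patterns; the classifier choice, data, and dimensions are assumptions, not the authors' pipeline:

```python
# Illustrative decoding sketch (simulated data, not the study's): classify
# stimulus location (left vs. right visual field) from activity patterns
# with a linear classifier and 5-fold cross-validation.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_trials, n_voxels = 80, 200
X = rng.normal(size=(n_trials, n_voxels))  # simulated voxel patterns
y = np.repeat([0, 1], n_trials // 2)       # 0 = left, 1 = right location
X[y == 1, :20] += 0.5                      # inject weak location information

acc = cross_val_score(LinearSVC(dual=False), X, y, cv=5).mean()
print(f"decoding accuracy: {acc:.2f} (chance = 0.50)")
```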
Sensory Substitution and Multimodal Mental Imagery.
Nanay, Bence
2017-09-01
Many philosophers use findings about sensory substitution devices in the grand debate about how we should individuate the senses. The big question is this: Is "vision" assisted by (tactile) sensory substitution really vision? Or is it tactile perception? Or some sui generis novel form of perception? My claim is that sensory substitution assisted "vision" is neither vision nor tactile perception, because it is not perception at all. It is mental imagery: visual mental imagery triggered by tactile sensory stimulation. But it is a special form of mental imagery that is triggered by corresponding sensory stimulation in a different sense modality, which I call "multimodal mental imagery."
Prototype tactile feedback system for examination by skin touch.
Lee, O; Lee, K; Oh, C; Kim, K; Kim, M
2014-08-01
In the case of induration, diagnosis of conditions such as psoriasis and atopic dermatitis involves palpating the affected area by hand and then assigning a rating score. However, the score is determined based on the tester's experience and standards, making it subjective. To provide tactile feedback on the skin, we developed a prototype tactile feedback system that simulates skin wrinkles with the PHANToM OMNI. To provide the user with tactile feedback on skin wrinkles, a visual and haptic augmented reality system was developed. First, a pair of stereo skin images obtained by a stereo camera generates a disparity map of skin wrinkles. Second, the generated disparity map is sent to a tactile rendering algorithm that computes a reaction force according to the user's interaction with the skin image. We first obtained a stereo image of skin wrinkles from the in vivo stereo imaging system, which has a baseline of 50.8 μm, and computed the disparity map with a graph cuts algorithm. The left image is displayed on the monitor to enable the user to recognize the location visually. The disparity map of the skin wrinkle image conveys skin wrinkle information as a tactile response to the user through a haptic device. We successfully developed a tactile feedback system for virtual skin wrinkle simulation by means of a commercial haptic device that provides the user with a single point of contact to feel the surface roughness of a virtual skin sample. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
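The tactile rendering step computes a reaction force from the disparity map at the haptic probe's contact point. A speculative sketch of one simple penalty-based variant; the disparity-to-height scaling, stiffness constant, and function names are illustrative assumptions, not the published algorithm:

```python
# Sketch (assumed, not the published algorithm): a penalty-based rendering
# step where the disparity map stands in for surface height and the reaction
# force grows with the probe's penetration depth.
import numpy as np

def reaction_force(disparity_map, x, y, probe_z, k=0.8, depth_scale=0.01):
    """Return an upward force (N) for a probe at pixel (x, y) and height probe_z (m)."""
    surface_z = disparity_map[y, x] * depth_scale  # disparity -> height (assumed scaling)
    penetration = surface_z - probe_z
    return k * max(penetration, 0.0)               # Hooke-like penalty force

# Hypothetical 640x480 disparity map and a probe slightly below the surface
disparity = np.clip(np.random.default_rng(3).normal(5, 1, (480, 640)), 0, None)
print(f"force = {reaction_force(disparity, 320, 240, probe_z=0.03):.3f} N")
```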
Headphone and Head-Mounted Visual Displays for Virtual Environments
NASA Technical Reports Server (NTRS)
Begault, Duran R.; Ellis, Stephen R.; Wenzel, Elizabeth M.; Trejo, Leonard J. (Technical Monitor)
1998-01-01
A realistic auditory environment can contribute to both the overall subjective sense of presence in a virtual display, and to a quantitative metric predicting human performance. Here, the role of audio in a virtual display and the importance of auditory-visual interaction are examined. Conjectures are proposed regarding the effectiveness of audio compared to visual information for creating a sensation of immersion, the frame of reference within a virtual display, and the compensation of visual fidelity by supplying auditory information. Future areas of research are outlined for improving simulations of virtual visual and acoustic spaces. This paper will describe some of the intersensory phenomena that arise during operator interaction within combined visual and auditory virtual environments. Conjectures regarding audio-visual interaction will be proposed.
Enhanced audio-visual interactions in the auditory cortex of elderly cochlear-implant users.
Schierholz, Irina; Finke, Mareike; Schulte, Svenja; Hauthal, Nadine; Kantzke, Christoph; Rach, Stefan; Büchner, Andreas; Dengler, Reinhard; Sandmann, Pascale
2015-10-01
Auditory deprivation and the restoration of hearing via a cochlear implant (CI) can induce functional plasticity in auditory cortical areas. How these plastic changes affect the ability to integrate combined auditory (A) and visual (V) information is not yet well understood. In the present study, we used electroencephalography (EEG) to examine whether age, temporary deafness and altered sensory experience with a CI can affect audio-visual (AV) interactions in post-lingually deafened CI users. Young and elderly CI users and age-matched NH listeners performed a speeded response task on basic auditory, visual and audio-visual stimuli. Regarding the behavioral results, a redundant signals effect, that is, faster response times to cross-modal (AV) than to both of the two modality-specific stimuli (A, V), was revealed for all groups of participants. Moreover, in all four groups, we found evidence for audio-visual integration. Regarding event-related responses (ERPs), we observed a more pronounced visual modulation of the cortical auditory response at N1 latency (approximately 100 ms after stimulus onset) in the elderly CI users when compared with young CI users and elderly NH listeners. Thus, elderly CI users showed enhanced audio-visual binding which may be a consequence of compensatory strategies developed due to temporary deafness and/or degraded sensory input after implantation. These results indicate that the combination of aging, sensory deprivation and CI facilitates the coupling between the auditory and the visual modality. We suggest that this enhancement in multisensory interactions could be used to optimize auditory rehabilitation, especially in elderly CI users, by the application of strong audio-visually based rehabilitation strategies after implant switch-on. Copyright © 2015 Elsevier B.V. All rights reserved.
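The redundant signals effect reported above is a simple comparison: bimodal response times should beat the faster of the two unimodal conditions. A minimal sketch of that check on simulated response times (all values hypothetical):

```python
# Illustrative redundant-signals-effect check (simulated data, not the study's).
import numpy as np

rng = np.random.default_rng(7)
rt_a = rng.normal(420, 40, 100)   # hypothetical auditory RTs in ms
rt_v = rng.normal(450, 40, 100)   # hypothetical visual RTs
rt_av = rng.normal(390, 40, 100)  # hypothetical audio-visual RTs

# Positive gain = AV responses faster than the faster unimodal condition
rse = min(np.median(rt_a), np.median(rt_v)) - np.median(rt_av)
print(f"redundancy gain: {rse:.0f} ms")
```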
Paladini, Rebecca E.; Diana, Lorenzo; Zito, Giuseppe A.; Nyffeler, Thomas; Wyss, Patric; Mosimann, Urs P.; Müri, René M.; Nef, Tobias
2018-01-01
Cross-modal spatial cueing can affect performance in a visual search task. For example, search performance improves if a visual target and an auditory cue originate from the same spatial location, and it deteriorates if they originate from different locations. Moreover, it has recently been postulated that multisensory settings, i.e., experimental settings in which critical stimuli are concurrently presented in different sensory modalities (e.g., visual and auditory), may trigger asymmetries in visuospatial attention. Specifically, facilitation has been observed for visual stimuli presented in the right compared to the left visual space. However, it remains unclear whether auditory cueing of attention differentially affects search performance in the left and the right hemifields in audio-visual search tasks. The present study investigated whether spatial asymmetries would occur in a search task with cross-modal spatial cueing. Participants completed a visual search task that contained no auditory cues (i.e., unimodal visual condition), spatially congruent, spatially incongruent, and spatially non-informative auditory cues. To further assess participants' accuracy in localising the auditory cues, a unimodal auditory spatial localisation task was also administered. The results demonstrated no left/right asymmetries in the unimodal visual search condition. Both an additional incongruent and a spatially non-informative auditory cue resulted in lateral asymmetries: search times increased for targets presented in the left compared to the right hemifield. No such spatial asymmetry was observed in the congruent condition. However, participants' performance in the congruent condition was modulated by their tone localisation accuracy. The findings of the present study demonstrate that spatial asymmetries in multisensory processing depend on the validity of the cross-modal cues, and occur under specific attentional conditions, i.e., when visual attention has to be reoriented towards the left hemifield. PMID:29293637
Investigating the role of visual and auditory search in reading and developmental dyslexia
Lallier, Marie; Donnadieu, Sophie; Valdois, Sylviane
2013-01-01
It has been suggested that auditory and visual sequential processing deficits contribute to phonological disorders in developmental dyslexia. As an alternative explanation to a phonological deficit as the proximal cause for reading disorders, the visual attention span hypothesis (VA Span) suggests that difficulties in processing visual elements simultaneously lead to dyslexia, regardless of the presence of a phonological disorder. In this study, we assessed whether deficits in processing simultaneously displayed visual or auditory elements is linked to dyslexia associated with a VA Span impairment. Sixteen children with developmental dyslexia and 16 age-matched skilled readers were assessed on visual and auditory search tasks. Participants were asked to detect a target presented simultaneously with 3, 9, or 15 distracters. In the visual modality, target detection was slower in the dyslexic children than in the control group on a “serial” search condition only: the intercepts (but not the slopes) of the search functions were higher in the dyslexic group than in the control group. In the auditory modality, although no group difference was observed, search performance was influenced by the number of distracters in the control group only. Within the dyslexic group, not only poor visual search (high reaction times and intercepts) but also low auditory search performance (d′) strongly correlated with poor irregular word reading accuracy. Moreover, both visual and auditory search performance was associated with the VA Span abilities of dyslexic participants but not with their phonological skills. The present data suggests that some visual mechanisms engaged in “serial” search contribute to reading and orthographic knowledge via VA Span skills regardless of phonological skills. The present results further open the question of the role of auditory simultaneous processing in reading as well as its link with VA Span skills. PMID:24093014
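The intercept/slope dissociation reported above comes from fitting a linear search function, RT = intercept + slope × set size, per group. A short sketch with hypothetical mean RTs chosen to mimic the reported pattern (higher intercepts, similar slopes, in the dyslexic group):

```python
# Illustrative search-function fit (hypothetical data, not the study's):
# the slope indexes item-by-item processing, the intercept pre-search stages.
import numpy as np

set_sizes = np.array([4, 10, 16])           # target plus 3, 9, or 15 distracters
rt_control = np.array([620, 700, 780.0])    # hypothetical mean RTs in ms
rt_dyslexic = np.array([760, 845, 920.0])

for label, rts in [("control", rt_control), ("dyslexic", rt_dyslexic)]:
    slope, intercept = np.polyfit(set_sizes, rts, 1)
    print(f"{label}: intercept = {intercept:.0f} ms, slope = {slope:.1f} ms/item")
```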
A Test of Tactile Concentration and Short-Term Memory.
ERIC Educational Resources Information Center
Kainthola, S. D.; Singh, T. B.
1992-01-01
Twenty students and 45 adults with visual impairments or blindness were administered a test of tactile concentration and short-term memory involving the reproduction of the order of finger stimulation using the Finger Knocking Box. Reliability and validity scores indicated encouraging results with use of the instrument. (JDD)
Talk From the VI Teachers' Lounge.
ERIC Educational Resources Information Center
Hurst, Judith
Gathered from teachers around the country, this collection of teaching ideas and lesson plans is designed to provide teachers with activities and strategies for educating students with visual impairments. Tips and information are provided on: making tactile teddy bears; memory strategies; making tactile books; creating art kits; using magnifiers;…
Multisensory Interference in Early Deaf Adults
ERIC Educational Resources Information Center
Heimler, Benedetta; Baruffaldi, Francesca; Bonmassar, Claudia; Venturini, Marta; Pavani, Francesco
2017-01-01
Multisensory interactions in deaf cognition are largely unexplored. Unisensory studies suggest that behavioral/neural changes may be more prominent for visual compared to tactile processing in early deaf adults. Here we test whether such an asymmetry results in increased saliency of vision over touch during visuo-tactile interactions. About 23…
White-Traut, R C; Rankin, K M; Yoder, J C; Liu, L; Vasa, R; Geraldo, V; Norr, K F
2015-08-01
To examine whether premature infants receiving the maternally administered H-HOPE (Hospital to Home Transition-Optimizing Premature Infant's Environment) intervention had more rapid weight gain and growth, improved feeding progression and reduced length of hospital stay, compared with controls. Premature infants born at 29-34 weeks gestational age and their mothers with at least two social-environmental risk factors were randomly assigned to the H-HOPE intervention group (n=88) or an attention control group (n=94). H-HOPE consists of a 15-min multisensory intervention (Auditory, Tactile, Visual and Vestibular stimuli) performed twice daily prior to feeding plus maternal participatory guidance on preterm infant behavioral cues. H-HOPE group infants gained weight more rapidly over time than infants in the control group and grew in length more rapidly than control infants, especially during the latter part of the hospital stay. For healthy preterm infants, the H-HOPE intervention appears to improve weight gain and length over time from birth to hospital discharge.
Sun, Peijian Paul; Teng, Lin Sophie
2017-12-01
This study revisited Reid's (1987) perceptual learning style preference questionnaire (PLSPQ) to determine whether the PLSPQ fits the Chinese-as-a-second-language (CSL) context and, if not, what CSL learners' learning styles are when measured with the PLSPQ. The PLSPQ was first re-examined through reliability analysis and confirmatory factor analysis (CFA) with 224 CSL learners. The results showed that Reid's six-factor PLSPQ could not satisfactorily explain the CSL learners' learning styles. Exploratory factor analyses were, therefore, performed to explore the dimensionality of the PLSPQ in the CSL context. A four-factor PLSPQ was successfully constructed, comprising auditory/visual, kinaesthetic/tactile, group, and individual styles. This measurement model was cross-validated through CFAs with 118 CSL learners. The study not only lends evidence to the literature that Reid's PLSPQ lacks construct validity, but also provides CSL teachers and learners with insightful and practical guidance concerning learning styles. Implications and limitations of the present study are discussed.
Kompanje, E J O
2008-12-01
Hypnagogic and hypnopompic hallucinations are visual, tactile, auditory or other sensory events, usually brief but sometimes prolonged, that occur at the transition from wakefulness to sleep (hypnagogic) or from sleep to wakefulness (hypnopompic). Hypnagogic and hypnopompic hallucinations are often associated with sleep paralysis. Sleep paralysis occurs immediately prior to falling asleep (hypnagogic paralysis) or upon waking (hypnopompic paralysis). In 1664, the Dutch physician Isbrand Van Diemerbroeck (1609-1674) published a collection of case histories. One history, titled 'Of the Night-Mare', describes the nightly experiences of a 50-year-old woman and is the subject of this article. The experiences in this case could without doubt be diagnosed as sleep paralysis accompanied by hypnagogic hallucinations. This case from 1664 should be cited as the earliest detailed account of sleep paralysis associated with hypnagogic hallucinations and as the first observation that sleep paralysis and hypnagogic experiences occur more often in the supine position of the body.
Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong
2013-01-01
This article aims to investigate whether auditory stimuli in the horizontal plane, particularly originating from behind the participant, affect audiovisual integration by using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants, auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants, and audiovisual stimuli that include both visual stimuli and auditory stimuli originating from one of the four locations were simultaneously presented. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in the front or back of the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160–200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360–400 milliseconds. Our results confirmed that audiovisual integration was elicited even when auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be more sensitive to information received from behind than from either side. PMID:23799097
Impact of Language on Development of Auditory-Visual Speech Perception
ERIC Educational Resources Information Center
Sekiyama, Kaoru; Burnham, Denis
2008-01-01
The McGurk effect paradigm was used to examine the developmental onset of inter-language differences between Japanese and English in auditory-visual speech perception. Participants were asked to identify syllables in audiovisual (with congruent or discrepant auditory and visual components), audio-only, and video-only presentations at various…
Auditory white noise reduces age-related fluctuations in balance.
Ross, J M; Will, O J; McGann, Z; Balasubramaniam, R
2016-09-06
Fall prevention technologies have the potential to improve the lives of older adults. Because of the multisensory nature of human balance control, sensory therapies, including some involving tactile and auditory noise, are being explored that might reduce increased balance variability due to typical age-related sensory declines. Auditory white noise has previously been shown to reduce postural sway variability in healthy young adults. In the present experiment, we examined this treatment in young adults and typically aging older adults. We measured postural sway of healthy young adults and adults over the age of 65 years during silence and auditory white noise, with and without vision. Our results show reduced postural sway variability in young and older adults with auditory noise, even in the absence of vision. We show that vision and noise can reduce sway variability for both feedback-based and exploratory balance processes. In addition, we show changes with auditory noise in nonlinear patterns of sway in older adults that reflect what is more typical of young adults, and these changes did not interfere with the typical random walk behavior of sway. Our results suggest that auditory noise might be valuable for therapeutic and rehabilitative purposes in older adults with typical age-related balance variability. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Valente, Daniel L.; Braasch, Jonas; Myrbeck, Shane A.
2012-01-01
Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audiovisual environment in which participants were instructed to make auditory width judgments in dynamic bi-modal settings. The results of these psychophysical tests suggest the importance of congruent audio-visual presentation to the ecological interpretation of an auditory scene. Supporting data were accumulated in five rooms of ascending volumes and varying reverberation times. Participants were given an audiovisual matching test in which they were instructed to pan the auditory width of a performing ensemble to a varying set of audio and visual cues in rooms. Results show that both auditory and visual factors affect the collected responses and that the two sensory modalities coincide in distinct interactions. The greatest differences between the panned audio stimuli given a fixed visual width were found in the physical space with the largest volume and the greatest source distance. These results suggest, in this specific instance, a predominance of auditory cues in the spatial analysis of the bi-modal scene. PMID:22280585
An Experimental Analysis of Memory Processing
Wright, Anthony A
2007-01-01
Rhesus monkeys were trained and tested in visual and auditory list-memory tasks with sequences of four travel pictures or four natural/environmental sounds followed by single test items. Acquisitions of the visual list-memory task are presented. Visual recency (last item) memory diminished with retention delay, and primacy (first item) memory strengthened. Capuchin monkeys, pigeons, and humans showed similar visual-memory changes. Rhesus learned an auditory memory task and showed octave generalization for some lists of notes—tonal, but not atonal, musical passages. In contrast with visual list memory, auditory primacy memory diminished with delay and auditory recency memory strengthened. Manipulations of interitem intervals, list length, and item presentation frequency revealed proactive and retroactive inhibition among items of individual auditory lists. Repeating visual items from prior lists produced interference (on nonmatching tests) revealing how far back memory extended. The possibility of using the interference function to separate familiarity vs. recollective memory processing is discussed. PMID:18047230
Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception
Su, Yi-Huang; Salazar-López, Elvira
2016-01-01
Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance. PMID:27313900
Gestural Communication With Accelerometer-Based Input Devices and Tactile Displays
2008-12-01
and natural terrain obstructions, or concealment often impede visual communication attempts. To overcome some of these issues, “daisy-chaining” or...the intended recipients. Moreover, visual communication demands a focus on the visual modality possibly distracting a receiving soldier’s visual
Semantic-based crossmodal processing during visual suppression.
Cox, Dustin; Hong, Sang Wook
2015-01-01
To reveal the mechanisms underpinning the influence of auditory input on visual awareness, we examine (1) whether purely semantic-based multisensory integration facilitates the access to visual awareness for familiar visual events, and (2) whether crossmodal semantic priming is the mechanism responsible for the semantic auditory influence on visual awareness. Using continuous flash suppression, we rendered dynamic and familiar visual events (e.g., a video clip of an approaching train) inaccessible to visual awareness. We manipulated the semantic auditory context of the videos by concurrently pairing them with a semantically matching soundtrack (congruent audiovisual condition), a semantically non-matching soundtrack (incongruent audiovisual condition), or with no soundtrack (neutral video-only condition). We found that participants identified the suppressed visual events significantly faster (an earlier breakup of suppression) in the congruent audiovisual condition compared to the incongruent audiovisual condition and video-only condition. However, this facilitatory influence of semantic auditory input was only observed when audiovisual stimulation co-occurred. Our results suggest that the enhanced visual processing with a semantically congruent auditory input occurs due to audiovisual crossmodal processing rather than semantic priming, which may occur even when visual information is not available to visual awareness.
Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses
Molloy, Katharine; Griffiths, Timothy D.; Lavie, Nilli
2015-01-01
Due to capacity limits on perception, conditions of high perceptual load lead to reduced processing of unattended stimuli (Lavie et al., 2014). Accumulating work demonstrates the effects of visual perceptual load on visual cortex responses, but the effects on auditory processing remain poorly understood. Here we establish the neural mechanisms underlying “inattentional deafness”—the failure to perceive auditory stimuli under high visual perceptual load. Participants performed a visual search task of low (target dissimilar to nontarget items) or high (target similar to nontarget items) load. On a random subset (50%) of trials, irrelevant tones were presented concurrently with the visual stimuli. Brain activity was recorded with magnetoencephalography, and time-locked responses to the visual search array and to the incidental presence of unattended tones were assessed. High, compared to low, perceptual load led to increased early visual evoked responses (within 100 ms from onset). This was accompanied by reduced early (∼100 ms from tone onset) auditory evoked activity in superior temporal sulcus and posterior middle temporal gyrus. A later suppression of the P3 “awareness” response to the tones was also observed under high load. A behavioral experiment revealed reduced tone detection sensitivity under high visual load, indicating that the reduction in neural responses was indeed associated with reduced awareness of the sounds. These findings support a neural account of shared audiovisual resources, which, when depleted under load, leads to failures of sensory perception and awareness. SIGNIFICANCE STATEMENT The present work clarifies the neural underpinning of inattentional deafness under high visual load. The findings of near-simultaneous load effects on both visual and auditory evoked responses suggest shared audiovisual processing capacity. Temporary depletion of shared capacity in perceptually demanding visual tasks leads to a momentary reduction in sensory processing of auditory stimuli, resulting in inattentional deafness. The dynamic “push–pull” pattern of load effects on visual and auditory processing furthers our understanding of both the neural mechanisms of attention and of cross-modal effects across visual and auditory processing. These results also offer an explanation for many previous failures to find cross-modal effects in experiments where the visual load effects may not have coincided directly with auditory sensory processing. PMID:26658858
Liu, Wen-Long; Zhao, Xu; Tan, Jian-Hui; Wang, Juan
2014-09-01
To explore the attention characteristics of children with different clinical subtypes of attention deficit hyperactivity disorder (ADHD) and to provide a basis for clinical intervention. A total of 345 children diagnosed with ADHD were selected and the subtypes were identified. Attention assessment was performed by the intermediate visual and auditory continuous performance test at diagnosis, and the visual and auditory attention characteristics were compared between children with different subtypes. A total of 122 normal children were recruited to the control group and their attention characteristics were compared with those of children with ADHD. The scores of full scale attention quotient (AQ) and full scale response control quotient (RCQ) of children with all three subtypes of ADHD were significantly lower than those of normal children (P<0.01). The score of auditory RCQ was significantly lower than that of visual RCQ in children with ADHD-hyperactive/impulsive subtype (P<0.05). The scores of auditory AQ and speed quotient (SQ) were significantly higher than those of visual AQ and SQ in the three subtypes of ADHD children (P<0.01), while the score of visual precaution quotient (PQ) was significantly higher than that of auditory PQ (P<0.01). No significant differences in auditory or visual AQ were observed between the three subtypes of ADHD. The attention function of children with ADHD is worse than that of normal children, and the impairment of visual attention function is more severe than that of auditory attention function. The degree of functional impairment of visual or auditory attention shows no significant differences between the three subtypes of ADHD.
Oba, Sandra I.; Galvin, John J.; Fu, Qian-Jie
2014-01-01
Auditory training has been shown to significantly improve cochlear implant (CI) users’ speech and music perception. However, it is unclear whether post-training gains in performance were due to improved auditory perception or to generally improved attention, memory and/or cognitive processing. In this study, speech and music perception, as well as auditory and visual memory were assessed in ten CI users before, during, and after training with a non-auditory task. A visual digit span (VDS) task was used for training, in which subjects recalled sequences of digits presented visually. After the VDS training, VDS performance significantly improved. However, there were no significant improvements for most auditory outcome measures (auditory digit span, phoneme recognition, sentence recognition in noise, digit recognition in noise), except for small (but significant) improvements in vocal emotion recognition and melodic contour identification. Post-training gains were much smaller with the non-auditory VDS training than observed in previous auditory training studies with CI users. The results suggest that post-training gains observed in previous studies were not solely attributable to improved attention or memory, and were more likely due to improved auditory perception. The results also suggest that CI users may require targeted auditory training to improve speech and music perception. PMID:23516087
Gallagher, Rosemary; Damodaran, Harish; Werner, William G; Powell, Wendy; Deutsch, Judith E
2016-08-19
Evidence-based virtual environments (VEs) that incorporate compensatory strategies such as cueing may change motor behavior and increase exercise intensity while also being engaging and motivating. The purpose of this study was to determine if persons with Parkinson's disease and age-matched healthy adults responded to auditory and visual cueing embedded in a bicycling VE as a method to increase exercise intensity. We tested two groups of participants, persons with Parkinson's disease (PD) (n = 15) and age-matched healthy adults (n = 13), as they cycled on a stationary bicycle while interacting with a VE. Participants cycled under two conditions: auditory cueing (provided by a metronome) and visual cueing (represented as central road markers in the VE). The auditory condition had four trials in which auditory cues or the VE were presented alone or in combination. The visual condition had five trials in which the VE and visual cue rate presentation was manipulated. Data were analyzed by condition using factorial RM-ANOVAs with planned t-tests corrected for multiple comparisons. There were no differences in pedaling rates between groups for both the auditory and visual cueing conditions. Persons with PD increased their pedaling rate in the auditory (F = 4.78, p = 0.029) and visual cueing (F = 26.48, p < 0.001) conditions. Age-matched healthy adults also increased their pedaling rate in the auditory (F = 24.72, p < 0.001) and visual cueing (F = 40.69, p < 0.001) conditions. Trial-to-trial comparisons in the visual condition in age-matched healthy adults showed a step-wise increase in pedaling rate (p = 0.003 to p < 0.001). In contrast, persons with PD increased their pedaling rate only when explicitly instructed to attend to the visual cues (p < 0.001). An evidence-based cycling VE can modify pedaling rate in persons with PD and age-matched healthy adults. Persons with PD required attention directed to the visual cues in order to obtain an increase in cycling intensity. The combination of the VE and auditory cues was neither additive nor interfering. These data serve as preliminary evidence that embedding auditory and visual cues to alter cycling speed in a VE is a method to increase exercise intensity that may promote fitness.
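For readers who want to see the shape of this analysis, the following is a minimal sketch of a one-way repeated-measures ANOVA with a planned, Bonferroni-corrected paired comparison, in Python with statsmodels and scipy. The condition labels, DataFrame layout, and random placeholder data are hypothetical illustrations only, not the study's variables or results.

```python
# Sketch of a repeated-measures analysis of pedaling rate.
# All labels and data below are hypothetical placeholders.
import numpy as np
import pandas as pd
from scipy.stats import ttest_rel
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
conditions = ["baseline", "metronome", "VE_only", "VE_plus_cue"]  # assumed labels
df = pd.DataFrame({
    "subject": np.repeat(np.arange(1, 14), len(conditions)),    # 13 subjects
    "trial": conditions * 13,                                   # one row per cell
    "pedal_rate": rng.normal(60.0, 5.0, 13 * len(conditions)),  # placeholder rpm
})

# Omnibus within-subject test across trial conditions.
print(AnovaRM(df, depvar="pedal_rate", subject="subject", within=["trial"]).fit())

# One planned pairwise test; with 4 conditions there are 6 such pairs,
# so compare p against a Bonferroni-corrected alpha of 0.05 / 6.
a = df[df.trial == "baseline"].sort_values("subject")["pedal_rate"].to_numpy()
b = df[df.trial == "VE_plus_cue"].sort_values("subject")["pedal_rate"].to_numpy()
t, p = ttest_rel(a, b)
print(f"baseline vs VE_plus_cue: t = {t:.2f}, p = {p:.4f}")
```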
2012-01-01
Background: A flexed neck posture leads to non-specific activation of the brain. Sensory evoked cerebral potentials and focal brain blood flow have been used to evaluate the activation of the sensory cortex. We investigated the effects of a flexed neck posture on the cerebral potentials evoked by visual, auditory and somatosensory stimuli and focal brain blood flow in the related sensory cortices. Methods: Twelve healthy young adults received right visual hemi-field, binaural auditory and left median nerve stimuli while sitting with the neck in a resting and flexed (20° flexion) position. Sensory evoked potentials were recorded from the right occipital region, Cz in accordance with the international 10–20 system, and 2 cm posterior from C4, during visual, auditory and somatosensory stimulations. The oxidative-hemoglobin concentration was measured in the respective sensory cortex using near-infrared spectroscopy. Results: Latencies of the late component of all sensory evoked potentials significantly shortened, and the amplitude of auditory evoked potentials increased when the neck was in a flexed position. Oxidative-hemoglobin concentrations in the left and right visual cortices were higher during visual stimulation in the flexed neck position. The left visual cortex is responsible for receiving the visual information. In addition, oxidative-hemoglobin concentrations in the bilateral auditory cortex during auditory stimulation, and in the right somatosensory cortex during somatosensory stimulation, were higher in the flexed neck position. Conclusions: Visual, auditory and somatosensory pathways were activated by neck flexion. The sensory cortices were selectively activated, reflecting the modalities in sensory projection to the cerebral cortex and inter-hemispheric connections. PMID:23199306
A neural network model of ventriloquism effect and aftereffect.
Magosso, Elisa; Cuppini, Cristiano; Ursino, Mauro
2012-01-01
Presenting simultaneous but spatially discrepant visual and auditory stimuli induces a perceptual translocation of the sound towards the visual input, the ventriloquism effect. The general explanation is that vision tends to dominate over audition because of its higher spatial reliability. The underlying neural mechanisms remain unclear. We address this question via a biologically inspired neural network. The model contains two layers of unimodal visual and auditory neurons, with visual neurons having higher spatial resolution than auditory ones. Neurons within each layer communicate via lateral intra-layer synapses; neurons across layers are connected via inter-layer connections. The network accounts for the ventriloquism effect, ascribing it to a positive feedback between the visual and auditory neurons, triggered by residual auditory activity at the position of the visual stimulus. The main results are: (i) the less localized stimulus is strongly biased toward the more localized stimulus and not vice versa; (ii) the amount of the ventriloquism effect changes with visual-auditory spatial disparity; (iii) ventriloquism is a robust behavior of the network with respect to parameter value changes. Moreover, the model implements Hebbian rules for potentiation and depression of lateral synapses, to explain the ventriloquism aftereffect (that is, the enduring sound shift after exposure to spatially disparate audio-visual stimuli). By adaptively changing the weights of lateral synapses during cross-modal stimulation, the model produces post-adaptive shifts of auditory localization that agree with in-vivo observations. The model demonstrates that two unimodal layers reciprocally interconnected may explain the ventriloquism effect and aftereffect, even without the presence of any convergent multimodal area. The proposed study may provide advancement in understanding the neural architecture and mechanisms at the basis of visual-auditory integration in the spatial realm.
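The two-layer architecture lends itself to a compact simulation. The sketch below is a deliberately simplified rate model, not the published one: the Gaussian tuning widths, the 0.9 coupling gain, and the omission of the lateral intra-layer synapses and Hebbian learning are all assumptions made for brevity. It nevertheless reproduces the core prediction: the decoded auditory centroid is pulled toward the sharper visual stimulus.

```python
# Toy two-layer ventriloquism model: broad auditory input, sharp visual
# input, reciprocal topographically aligned excitation between layers.
import numpy as np

N = 180                                    # one neuron per degree of azimuth
pos = np.arange(N)

def bump(center, sigma):
    """Gaussian profile with circular wrap-around."""
    d = np.minimum(np.abs(pos - center), N - np.abs(pos - center))
    return np.exp(-d**2 / (2.0 * sigma**2))

vis_in = bump(90, 2.0)                     # sharp visual stimulus at 90 deg
aud_in = bump(80, 15.0)                    # broad auditory stimulus at 80 deg

W = np.stack([bump(c, 3.0) for c in pos])  # aligned inter-layer weights
W = 0.9 * W / W.sum(axis=1, keepdims=True) # loop gain < 1 keeps the loop stable

v = np.zeros(N)                            # visual layer activity
a = np.zeros(N)                            # auditory layer activity
dt = 0.1
for _ in range(600):                       # Euler relaxation to steady state
    v += dt * (-v + np.maximum(0.0, vis_in + W @ a))
    a += dt * (-a + np.maximum(0.0, aud_in + W @ v))

centroid = (a * pos).sum() / a.sum()       # decoded auditory location
print(f"auditory stimulus at 80 deg decoded near {centroid:.1f} deg")
```

In this near-linear toy version the pull toward the visual location is modest; the saturating activations and lateral synapses of the full model amplify it.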
Heimbauer, Lisa A; Antworth, Rebecca L; Owren, Michael J
2012-01-01
Nonhuman primates appear to capitalize more effectively on visual cues than corresponding auditory versions. For example, studies of inferential reasoning have shown that monkeys and apes readily respond to seeing that food is present ("positive" cuing) or absent ("negative" cuing). Performance is markedly less effective with auditory cues, with many subjects failing to use this input. Extending recent work, we tested eight captive tufted capuchins (Cebus apella) in locating food using positive and negative cues in visual and auditory domains. The monkeys chose between two opaque cups to receive food contained in one of them. Cup contents were either shown or shaken, providing location cues from both cups, positive cues only from the baited cup, or negative cues from the empty cup. As in previous work, subjects readily used both positive and negative visual cues to secure reward. However, auditory outcomes were both similar to and different from those of earlier studies. Specifically, all subjects came to exploit positive auditory cues, but none responded to negative versions. The animals were also clearly different in visual versus auditory performance. Results indicate that a significant proportion of capuchins may be able to use positive auditory cues, with experience and learning likely playing a critical role. These findings raise the possibility that experience may be significant in visually based performance in this task as well, and highlight that coming to grips with evident differences between visual versus auditory processing may be important for understanding primate cognition more generally.
Seeing sounds and hearing colors: an event-related potential study of auditory-visual synesthesia.
Goller, Aviva I; Otten, Leun J; Ward, Jamie
2009-10-01
In auditory-visual synesthesia, sounds automatically elicit conscious and reliable visual experiences. It is presently unknown whether this reflects early or late processes in the brain. It is also unknown whether adult audiovisual synesthesia resembles auditory-induced visual illusions that can sometimes occur in the general population or whether it resembles the electrophysiological deflection over occipital sites that has been noted in infancy and has been likened to synesthesia. Electrical brain activity was recorded from adult synesthetes and control participants who were played brief tones and required to monitor for an infrequent auditory target. The synesthetes were instructed to attend either to the auditory or to the visual (i.e., synesthetic) dimension of the tone, whereas the controls attended to the auditory dimension alone. There were clear differences between synesthetes and controls that emerged early (100 msec after tone onset). These differences tended to lie in deflections of the auditory-evoked potential (e.g., the auditory N1, P2, and N2) rather than the presence of an additional posterior deflection. The differences occurred irrespective of what the synesthetes attended to (although attention had a late effect). The results suggest that differences between synesthetes and others occur early in time, and that synesthesia is qualitatively different from similar effects found in infants and certain auditory-induced visual illusions in adults. In addition, we report two novel cases of synesthesia in which colors elicit sounds, and vice versa.
ERIC Educational Resources Information Center
Schaadt, Gesa; Männel, Claudia; van der Meer, Elke; Pannekamp, Ann; Friederici, Angela D.
2016-01-01
Successful communication in everyday life crucially involves the processing of auditory and visual components of speech. Viewing our interlocutor and processing visual components of speech facilitates speech processing by triggering auditory processing. Auditory phoneme processing, analyzed by event-related brain potentials (ERP), has been shown…
Pons, Ferran; Andreu, Llorenç; Sanz-Torrent, Monica; Buil-Legaz, Lucía; Lewkowicz, David J.
2014-01-01
Speech perception involves the integration of auditory and visual articulatory information and, thus, requires the perception of temporal synchrony between this information. There is evidence that children with Specific Language Impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the integration of auditory and visual speech. Twenty Spanish-speaking children with SLI, twenty typically developing age-matched Spanish-speaking children, and twenty Spanish-speaking children matched for MLU-w participated in an eye-tracking study to investigate the perception of audiovisual speech synchrony. Results revealed that children with typical language development perceived an audiovisual asynchrony of 666 ms regardless of whether the auditory or visual speech attribute led the other one. Children with SLI only detected the 666 ms asynchrony when the auditory component preceded the visual component. None of the groups perceived an audiovisual asynchrony of 366 ms. These results suggest that the difficulty of speech processing by children with SLI would also involve difficulties in integrating auditory and visual aspects of speech perception. PMID:22874648
Frequency encoded auditory display of the critical tracking task
NASA Technical Reports Server (NTRS)
Stevenson, J.
1984-01-01
The use of auditory displays for selected cockpit instruments was examined. Auditory, visual, and combined auditory-visual compensatory displays of a vertical-axis critical tracking task were studied. The visual display encoded vertical error as the position of a dot on a 17.78 cm, center-marked CRT. The auditory display encoded vertical error as log frequency over a six-octave range; the center point at 1 kHz was marked by a 20-dB amplitude notch, one-third octave wide. At asymptote, performance on the critical tracking task was slightly but significantly better with the combined display than with the visual-only mode. The maximum controllable bandwidth using the auditory mode was only 60% of the maximum controllable bandwidth using the visual mode. Redundant cueing increased the rate of improvement of tracking performance and the asymptotic performance level, and this enhancement increases with the amount of redundant cueing used. This effect appears most prominent when the bandwidth of the forcing function is substantially less than the upper limit of controllability frequency.
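The display scheme described above is easy to state as a mapping. The sketch below assumes a normalized error in [-1, 1] mapped linearly onto +/-3 octaves around 1 kHz; the six-octave range, 1 kHz center, and 20-dB one-third-octave notch come from the abstract, while the linear scaling and clipping are assumptions.

```python
def error_to_tone(error):
    """Map a normalized vertical tracking error in [-1, 1] to (freq_hz, gain_db).

    Six-octave log-frequency range centered on 1 kHz; a 20-dB amplitude
    notch one-third of an octave wide marks the center point.
    """
    error = max(-1.0, min(1.0, error))        # clip (assumed behavior)
    octaves = 3.0 * error                     # +/-3 octaves = six-octave range
    freq_hz = 1000.0 * 2.0 ** octaves
    gain_db = -20.0 if abs(octaves) < 1.0 / 6.0 else 0.0  # half-width 1/6 octave
    return freq_hz, gain_db

for e in (-1.0, -0.1, 0.0, 0.1, 1.0):
    f, g = error_to_tone(e)
    print(f"error {e:+.1f} -> {f:7.1f} Hz, {g:+.0f} dB")
```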
Pons, Ferran; Andreu, Llorenç; Sanz-Torrent, Monica; Buil-Legaz, Lucía; Lewkowicz, David J
2013-06-01
Speech perception involves the integration of auditory and visual articulatory information, and thus requires the perception of temporal synchrony between this information. There is evidence that children with specific language impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the integration of auditory and visual speech. Twenty Spanish-speaking children with SLI, twenty typically developing age-matched Spanish-speaking children, and twenty Spanish-speaking children matched for MLU-w participated in an eye-tracking study to investigate the perception of audiovisual speech synchrony. Results revealed that children with typical language development perceived an audiovisual asynchrony of 666 ms regardless of whether the auditory or visual speech attribute led the other one. Children with SLI only detected the 666 ms asynchrony when the auditory component preceded [corrected] the visual component. None of the groups perceived an audiovisual asynchrony of 366 ms. These results suggest that the difficulty of speech processing by children with SLI would also involve difficulties in integrating auditory and visual aspects of speech perception.
Hasni, Anita A; Adamson, Lauren B; Williamson, Rebecca A; Robins, Diana L
2017-12-01
Theory of mind (ToM) gradually develops during the preschool years. Measures of ToM usually target visual experience, but auditory experiences also provide valuable social information. Given differences between the visual and auditory modalities (e.g., sights persist, sounds fade) and the important role environmental input plays in social-cognitive development, we asked whether modality might influence the progression of ToM development. The current study expands Wellman and Liu's ToM scale (2004) by testing 66 preschoolers using five standard visual ToM tasks and five newly crafted auditory ToM tasks. Age and gender effects were found, with 4- and 5-year-olds demonstrating greater ToM abilities than 3-year-olds and girls passing more tasks than boys; there was no significant effect of modality. Both visual and auditory tasks formed a scalable set. These results indicate that there is considerable consistency in when children are able to use visual and auditory inputs to reason about various aspects of others' mental states. Copyright © 2017 Elsevier Inc. All rights reserved.
Evans, Julia L.; Pollak, Seth D.
2011-01-01
This study examined the electrophysiological correlates of auditory and visual working memory in children with Specific Language Impairments (SLI). Children with SLI and age-matched controls (11;9 – 14;10) completed visual and auditory working memory tasks while event-related potentials (ERPs) were recorded. In the auditory condition, children with SLI performed similarly to controls when the memory load was kept low (1-back memory load). As expected, when demands for auditory working memory were higher, children with SLI showed decreases in accuracy and attenuated P3b responses. However, children with SLI also evinced difficulties in the visual working memory tasks. In both the low (1-back) and high (2-back) memory load conditions, P3b amplitude was significantly lower for the SLI as compared to CA groups. These data suggest a domain-general working memory deficit in SLI that is manifested across auditory and visual modalities. PMID:21316354
Neurofeedback in Learning Disabled Children: Visual versus Auditory Reinforcement.
Fernández, Thalía; Bosch-Bayard, Jorge; Harmony, Thalía; Caballero, María I; Díaz-Comas, Lourdes; Galán, Lídice; Ricardo-Garcell, Josefina; Aubert, Eduardo; Otero-Ojeda, Gloria
2016-03-01
Children with learning disabilities (LD) frequently have an EEG characterized by an excess of theta and a deficit of alpha activities. Neurofeedback (NFB) using an auditory stimulus as reinforcer has proven to be a useful tool to treat LD children by positively reinforcing decreases of the theta/alpha ratio. The aim of the present study was to optimize the NFB procedure by comparing the efficacy of visual (with eyes open) versus auditory (with eyes closed) reinforcers. Twenty LD children with an abnormally high theta/alpha ratio were randomly assigned to the Auditory or the Visual group, where a 500 Hz tone or a visual stimulus (a white square), respectively, was used as a positive reinforcer when the value of the theta/alpha ratio was reduced. Both groups had signs consistent with EEG maturation, but only the Auditory Group showed behavioral/cognitive improvements. In conclusion, the auditory reinforcer was more efficacious in reducing the theta/alpha ratio, and it improved the cognitive abilities more than the visual reinforcer.
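The reinforcement contingency at the heart of this kind of NFB protocol is simple to express in code. The sketch below estimates theta (4-8 Hz) and alpha (8-12 Hz) band power from an EEG epoch and delivers the reinforcer when the theta/alpha ratio falls below a threshold; the band edges, sampling rate, and thresholding rule are generic assumptions, not the exact protocol of this study.

```python
# Generic theta/alpha neurofeedback step (illustrative assumptions throughout).
import numpy as np
from scipy.signal import welch

FS = 250  # sampling rate in Hz (assumed)

def band_power(freqs, psd, lo, hi):
    """Crude band power: sum of PSD bins inside [lo, hi)."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum()

def theta_alpha_ratio(eeg_epoch):
    freqs, psd = welch(eeg_epoch, fs=FS, nperseg=len(eeg_epoch))
    return band_power(freqs, psd, 4.0, 8.0) / band_power(freqs, psd, 8.0, 12.0)

def neurofeedback_step(eeg_epoch, threshold, reinforce):
    """Deliver the reinforcer (e.g., a 500 Hz tone or a white square)
    whenever the ratio drops below the participant's current threshold."""
    ratio = theta_alpha_ratio(eeg_epoch)
    if ratio < threshold:
        reinforce()
    return ratio

# Usage with synthetic data standing in for a 2-s EEG epoch.
epoch = np.random.default_rng(0).standard_normal(2 * FS)
print(neurofeedback_step(epoch, threshold=1.0, reinforce=lambda: print("beep")))
```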
Wilquin, Hélène; Delevoye-Turrell, Yvonne; Dione, Mariama; Giersch, Anne
2018-01-01
Objective: Basic temporal dysfunctions have been described in patients with schizophrenia, which may impact their ability to connect and synchronize with the outer world. The present study was conducted with the aim to distinguish between interval timing and synchronization difficulties and more generally the spatial-temporal organization disturbances for voluntary actions. A new sensorimotor synchronization task was developed to test these abilities. Method: Twenty-four chronic schizophrenia patients matched with 27 controls performed a spatial-tapping task in which finger taps were to be produced in synchrony with a regular metronome to six visual targets presented around a virtual circle on a tactile screen. Isochronous (time intervals of 500 ms) and non-isochronous auditory sequences (alternated time intervals of 300/600 ms) were presented. The capacity to produce time intervals accurately versus the ability to synchronize own actions (tap) with external events (tone) were measured. Results: Patients with schizophrenia were able to produce the tapping patterns of both isochronous and non-isochronous auditory sequences as accurately as controls producing inter-response intervals close to the expected interval of 500 and 900 ms, respectively. However, the synchronization performances revealed significantly more positive asynchrony means (but similar variances) in the patient group than in the control group for both types of auditory sequences. Conclusion: The patterns of results suggest that patients with schizophrenia are able to perceive and produce both simple and complex sequences of time intervals but are impaired in the ability to synchronize their actions with external events. These findings suggest a specific deficit in predictive timing, which may be at the core of early symptoms previously described in schizophrenia.
Affect in Human-Robot Interaction
2014-01-01
is capable of learning and producing a large number of facial expressions based on Ekman's Facial Action Coding System, FACS (Ekman and Friesen 1978)... tactile (pushed, stroked, etc.), auditory (loud sound), temperature and olfactory (alcohol, smoke, etc.). The personality of the robot consists of...robot's behavior through decision-making, learning, or action selection, a number of researchers used the fuzzy logic approach to emotion generation
Batterman, Jared M; Martin, Vincent F; Yeung, Derek; Walker, Bruce N
2018-01-01
Accessibility of assistive consumer devices is an emerging research area with potential to benefit users both with and without visual impairments. In this article, we discuss the research and evaluation of using a tactile button interface to control an iOS device's native VoiceOver gesture navigation (Apple Accessibility, 2014). This research effort identified potential safety and accessibility issues for users trying to interact with and control their touchscreen mobile iOS devices while traveling independently. Furthermore, this article discusses the participatory design process in creating a solution that aims to solve issues in utilizing a tactile button interface in a novel device. The overall goal of this study is to enable visually impaired white cane users to access their mobile iOS device's navigation aids and other capabilities more safely and efficiently on the go.
Braille character discrimination in blindfolded human subjects.
Kauffman, Thomas; Théoret, Hugo; Pascual-Leone, Alvaro
2002-04-16
Visual deprivation may lead to enhanced performance in other sensory modalities. Whether this is the case in the tactile modality is controversial and may depend upon specific training and experience. We compared the performance of sighted subjects on a Braille character discrimination task to that of normal individuals blindfolded for a period of five days. Some participants in each group (blindfolded and sighted) received intensive Braille training to offset the effects of experience. Blindfolded subjects performed better than sighted subjects in the Braille discrimination task, irrespective of tactile training. For the left index finger, which had not been used in the formal Braille classes, blindfolding had no effect on performance while subjects who underwent tactile training outperformed non-stimulated participants. These results suggest that visual deprivation speeds up Braille learning and may be associated with behaviorally relevant neuroplastic changes.
A Haptic Glove as a Tactile-Vision Sensory Substitution for Wayfinding.
ERIC Educational Resources Information Center
Zelek, John S.; Bromley, Sam; Asmar, Daniel; Thompson, David
2003-01-01
A device that relays navigational information using a portable tactile glove and a wearable computer and camera system was tested with nine adults with visual impairments. Paths traversed by subjects negotiating an obstacle course were not qualitatively different from paths produced with existing wayfinding devices and hitting probabilities were…
Audio–visual interactions for motion perception in depth modulate activity in visual area V3A
Ogawa, Akitoshi; Macaluso, Emiliano
2013-01-01
Multisensory signals can enhance the spatial perception of objects and events in the environment. Changes of visual size and auditory intensity provide us with the main cues about motion direction in depth. However, frequency changes in audition and binocular disparity in vision also contribute to the perception of motion in depth. Here, we presented subjects with several combinations of auditory and visual depth-cues to investigate multisensory interactions during processing of motion in depth. The task was to discriminate the direction of auditory motion in depth according to increasing or decreasing intensity. Rising or falling auditory frequency provided an additional within-audition cue that matched or did not match the intensity change (i.e. intensity-frequency (IF) “matched vs. unmatched” conditions). In two-thirds of the trials, a task-irrelevant visual stimulus moved either in the same or opposite direction of the auditory target, leading to audio–visual “congruent vs. incongruent” between-modalities depth-cues. Furthermore, these conditions were presented either with or without binocular disparity. Behavioral data showed that the best performance was observed in the audio–visual congruent condition with IF matched. Brain imaging results revealed maximal response in visual area V3A when all cues provided congruent and reliable depth information (i.e. audio–visual congruent, IF-matched condition including disparity cues). Analyses of effective connectivity revealed increased coupling from auditory cortex to V3A specifically in audio–visual congruent trials. We conclude that within- and between-modalities cues jointly contribute to the processing of motion direction in depth, and that they do so via dynamic changes of connectivity between visual and auditory cortices. PMID:23333414
Psycho acoustical Measures in Individuals with Congenital Visual Impairment.
Kumar, Kaushlendra; Thomas, Teenu; Bhat, Jayashree S; Ranjan, Rajesh
2017-12-01
In individuals with congenital visual impairment, one modality (vision) is impaired, and this impairment is compensated for by the other sensory modalities. There is evidence that visually impaired individuals perform better than normally sighted individuals on various auditory tasks such as localization, auditory memory, verbal memory, auditory attention, and other behavioural tasks. The current study aimed to compare temporal resolution, frequency resolution and speech perception in noise in individuals with congenital visual impairment and normally sighted individuals. Temporal resolution, frequency resolution, and speech perception in noise were measured using MDT, GDT, DDT, SRDT, and SNR50, respectively. Twelve participants with congenital visual impairment, aged 18 to 40 years, were recruited, along with an equal number of normally sighted participants. All the participants had normal hearing sensitivity with normal middle ear functioning. Individuals with visual impairment showed superior thresholds on MDT, SRDT and SNR50 compared to normally sighted individuals. This may be due to the complexity of the tasks; MDT, SRDT and SNR50 are more complex tasks than GDT and DDT. Individuals with visual impairment thus showed superior performance in auditory processing and speech perception on complex auditory perceptual tasks.
Sensing through friction: the biomechanics of texture perception in rodents and primates
NASA Astrophysics Data System (ADS)
Debrégeas, Georges; Boubenec, Yves
2015-10-01
Rodents and primates possess an exquisite tactile sensitivity, which allows them to extract a wealth of information about their immediate environment. They can distinguish subtle differences in surface roughness through tactile exploration in a much more precise way than they can do visually. In both species, tactile information is contained in the sequence of deformation of the tactile organ (the facial hair, or whiskers, for rodents; the digital skin for primates) elicited by active rubbing on the probed surface (Figure 8.1). These deformations, registered by mechanosensitive neurons located in inner tissues, are processed by the central nervous system to produce a sensory representation of the surface...
Crossing the Hands Increases Illusory Self-Touch
Pozeg, Polona; Rognini, Giulio; Salomon, Roy; Blanke, Olaf
2014-01-01
Manipulation of hand posture, such as crossing the hands, has been frequently used to study how the body and its immediately surrounding space are represented in the brain. Abundant data show that a crossed-arms posture impairs remapping of tactile stimuli from the somatotopic to the external spatial reference frame and deteriorates performance on several tactile processing tasks. Here we investigated how impaired tactile remapping affects the illusory self-touch induced by the non-visual variant of the rubber hand illusion (RHI) paradigm. In this paradigm, blindfolded participants (Experiment 1) had their hands either uncrossed or crossed over the body midline. The strength of illusory self-touch was measured with questionnaire ratings and proprioceptive drift. Our results showed that, during synchronous tactile stimulation, the strength of illusory self-touch increased when hands were crossed compared to the uncrossed posture. Follow-up experiments showed that the increase in illusion strength was not related to unfamiliar hand position (Experiment 2) and that it was equally strengthened regardless of where in the peripersonal space the hands were crossed (Experiment 3). However, while the boosting effect of crossing the hands was evident from subjective ratings, the proprioceptive drift was not modulated by crossed posture. Finally, in contrast to the illusion increase in the non-visual RHI, the crossed hand postures did not alter illusory ownership or proprioceptive drift in the classical, visuo-tactile version of the RHI (Experiment 4). We argue that the increase in illusory self-touch is related to misalignment of somatotopic and external reference frames and consequently inadequate tactile-proprioceptive integration, leading to re-weighting of the tactile and proprioceptive signals. The present study not only shows that illusory self-touch can be induced by crossing the hands, but importantly, that this posture is associated with a stronger illusion. PMID:24699795
Kim, K; Lee, S
2015-05-01
Diagnosis of skin conditions depends on the assessment of skin surface properties that are better conveyed by tactile properties such as stiffness, roughness, and friction than by visual information. For this reason, adding tactile feedback to existing vision-based diagnosis systems can help dermatologists diagnose skin diseases or disorders more accurately. The goal of our research was therefore to develop a tactile rendering system for skin examinations by dynamic touch. Our development consists of two stages: converting a single image to a 3D haptic surface and rendering the generated haptic surface in real-time. Conversion from single 2D images to 3D surfaces was implemented using human perception data collected in a psychophysical experiment that measured human visual and haptic sensibility to 3D skin surface changes. For the second stage, we utilized real skin biomechanical properties found by prior studies. Our tactile rendering system is a standalone system that can be used with any single camera and haptic feedback device. We evaluated the performance of our system by conducting an identification experiment with three different skin images with five subjects. The participants had to identify one of the three skin surfaces using a haptic device (Falcon) only. No visual cue was provided for the experiment. The results indicate that our system provides sufficient performance to render discernible tactile renderings of different skin surfaces. Our system uses only a single skin image and automatically generates a 3D haptic surface based on human haptic perception. Realistic skin interactions can be provided in real-time for the purpose of skin diagnosis, simulations, or training. Our system can also be used for other applications like virtual reality and cosmetic applications. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
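The two-stage pipeline described in this abstract can be sketched compactly: stage one turns a single grayscale image into a heightmap, and stage two renders a penalty-based contact force at the haptic probe. The linear intensity-to-height mapping and the stiffness value below are placeholder assumptions; the actual system calibrated both against psychophysical measurements and published skin biomechanics.

```python
# Two-stage tactile-rendering sketch (stage 1: image -> heightmap,
# stage 2: heightmap -> contact force). Parameter values are assumptions.
import numpy as np

def image_to_heightmap(gray_image, height_scale=0.002):
    """Stage 1: treat normalized pixel intensity as surface height in meters."""
    g = gray_image.astype(float)
    g = (g - g.min()) / (np.ptp(g) + 1e-9)   # normalize to [0, 1]
    return g * height_scale

def contact_force(heightmap, x, y, probe_z, stiffness=800.0):
    """Stage 2: penalty-based rendering. If the probe tip is below the local
    surface height, push back proportionally (Hooke's law); stiffness in N/m
    is an assumed, not measured, skin property."""
    surface_z = heightmap[int(y), int(x)]
    penetration = surface_z - probe_z
    return stiffness * penetration if penetration > 0.0 else 0.0

# Usage with a synthetic "skin" image in place of a camera frame.
img = np.random.default_rng(1).integers(0, 256, size=(64, 64))
hm = image_to_heightmap(img)
print(f"force at one probe pose: {contact_force(hm, 10, 20, 0.0005):.3f} N")
```

In a real loop, the force would be recomputed at the haptic device's update rate (commonly around 1 kHz) as the probe moves across the heightmap.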
The onset of visual experience gates auditory cortex critical periods
Mowery, Todd M.; Kotak, Vibhakar C.; Sanes, Dan H.
2016-01-01
Sensory systems influence one another during development and deprivation can lead to cross-modal plasticity. As auditory function begins before vision, we investigate the effect of manipulating visual experience during auditory cortex critical periods (CPs) by assessing the influence of early, normal and delayed eyelid opening on hearing loss-induced changes to membrane and inhibitory synaptic properties. Early eyelid opening closes the auditory cortex CPs precociously and dark rearing prevents this effect. In contrast, delayed eyelid opening extends the auditory cortex CPs by several additional days. The CP for recovery from hearing loss is also closed prematurely by early eyelid opening and extended by delayed eyelid opening. Furthermore, when coupled with transient hearing loss that animals normally fully recover from, very early visual experience leads to inhibitory deficits that persist into adulthood. Finally, we demonstrate a functional projection from the visual to auditory cortex that could mediate these effects. PMID:26786281
Musicians' edge: A comparison of auditory processing, cognitive abilities and statistical learning.
Mandikal Vasuki, Pragati Rao; Sharma, Mridula; Demuth, Katherine; Arciuli, Joanne
2016-12-01
It has been hypothesized that musical expertise is associated with enhanced auditory processing and cognitive abilities. Recent research has examined the relationship between musicians' advantage and implicit statistical learning skills. In the present study, we assessed a variety of auditory processing skills, cognitive processing skills, and statistical learning (auditory and visual forms) in age-matched musicians (N = 17) and non-musicians (N = 18). Musicians had significantly better performance than non-musicians on frequency discrimination and backward digit span. A key finding was that musicians had better auditory, but not visual, statistical learning than non-musicians. Performance on the statistical learning tasks was not correlated with performance on auditory and cognitive measures. Musicians' superior performance on auditory (but not visual) statistical learning suggests that musical expertise is associated with an enhanced ability to detect statistical regularities in auditory stimuli. Copyright © 2016 Elsevier B.V. All rights reserved.
Marijuana and Human Performance: An Annotated Bibliography (1970-1975)
1976-03-01
IV. MEDICAL COMMENTS AND RESEARCH CRITIQUES... Auditory and visual threshold effects of marihuana in man. Perceptual & Motor Skills, 1969, 29, 755-759. Auditory and visual thresholds were measured... a "high." Results indicated no effect on visual acuity, whereas one of three auditory measurements differentiated between marihuana and control
Do you see what I hear? Vantage point preference and visual dominance in a time-space synaesthete.
Jarick, Michelle; Stewart, Mark T; Smilek, Daniel; Dixon, Michael J
2013-01-01
Time-space synaesthetes "see" time units organized in a spatial form. While the structure might be invariant for most synaesthetes, the perspective from which some view their calendar is somewhat flexible. One well-studied synaesthete, L, adopts different viewpoints for months seen vs. heard. Interestingly, L claims to prefer her auditory perspective, even though the month names are represented visually upside down. To verify this, we used a spatial-cueing task that included audiovisual month cues. These cues were either congruent with L's preferred "auditory" viewpoint (auditory-only and auditory + month inverted) or incongruent (upright visual-only and auditory + month upright). Our prediction was that L would show enhanced cueing effects (a larger response time difference between valid and invalid targets) following the audiovisual congruent cues, since both elicit the "preferred" auditory perspective. Also, when faced with conflicting cues, we predicted L would choose the preferred auditory perspective over the visual perspective. As we expected, L did show enhanced cueing effects following the audiovisual congruent cues that corresponded with her preferred auditory perspective, but the visual perspective dominated when L was faced with both viewpoints simultaneously. The results are discussed in relation to the reification hypothesis of sequence space synaesthesia (Eagleman, 2009).
Viewing the body modulates tactile receptive fields.
Haggard, Patrick; Christakou, Anastasia; Serino, Andrea
2007-06-01
Tactile discrimination performance depends on the receptive field (RF) size of somatosensory cortical (SI) neurons. Psychophysical masking effects can reveal the RF of an idealized "virtual" somatosensory neuron. Previous studies show that top-down factors strongly affect tactile discrimination performance. Here, we show that non-informative vision of the touched body part influences tactile discrimination by modulating tactile RFs. Ten subjects performed spatial discrimination between touch locations on the forearm. Performance was improved when subjects saw their forearm compared to viewing a neutral object in the same location. The extent of visual information was relevant, since restricted view of the forearm did not have this enhancing effect. Vibrotactile maskers were placed symmetrically on either side of the tactile target locations, at two different distances. Overall, masking significantly impaired discrimination performance, but the spatial gradient of masking depended on what subjects viewed. Viewing the body reduced the effect of distant maskers, but enhanced the effect of close maskers, as compared to viewing a neutral object. We propose that viewing the body improves functional touch by sharpening tactile RFs in an early somatosensory map. Top-down modulation of lateral inhibition could underlie these effects.
Oryadi Zanjani, Mohammad Majid; Hasanzadeh, Saeid; Rahgozar, Mehdi; Shemshadi, Hashem; Purdy, Suzanne C; Mahmudi Bakhtiari, Behrooz; Vahab, Maryam
2013-09-01
Since the introduction of cochlear implantation, researchers have considered children's communication and educational success before and after implantation. Therefore, the present study aimed to compare auditory, speech, and language development scores following one-sided cochlear implantation between two groups of prelingual deaf children educated through either auditory-only (unisensory) or auditory-visual (bisensory) modes. A randomized controlled trial with a single-factor experimental design was used. The study was conducted in the Instruction and Rehabilitation Private Centre of Hearing Impaired Children and their Family, called Soroosh in Shiraz, Iran. We assessed 30 Persian deaf children for eligibility and 22 children qualified to enter the study. They were aged between 27 and 66 months old and had been implanted between the ages of 15 and 63 months. The sample of 22 children was randomly assigned to two groups: auditory-only mode and auditory-visual mode; 11 participants in each group were analyzed. In both groups, the development of auditory perception, receptive language, expressive language, speech, and speech intelligibility was assessed pre- and post-intervention by means of instruments which were validated and standardized in the Persian population. No significant differences were found between the two groups. The children with cochlear implants who had been instructed using either the auditory-only or auditory-visual modes acquired auditory, receptive language, expressive language, and speech skills at the same rate. Overall, spoken language significantly developed in both the unisensory group and the bisensory group. Thus, both the auditory-only mode and the auditory-visual mode were effective. Therefore, it is not essential to limit access to the visual modality and to rely solely on the auditory modality when instructing hearing, language, and speech in children with cochlear implants who are exposed to spoken language both at home and at school when communicating with their parents and educators prior to and after implantation. The trial has been registered at IRCT.ir, number IRCT201109267637N1. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Association of blood antioxidants status with visual and auditory sustained attention.
Shiraseb, Farideh; Siassi, Fereydoun; Sotoudeh, Gity; Qorbani, Mostafa; Rostami, Reza; Sadeghi-Firoozabadi, Vahid; Narmaki, Elham
2015-01-01
A low antioxidant status has been shown to result in oxidative stress and cognitive impairment. Because antioxidants can protect the nervous system, a better blood antioxidant status might be expected to relate to sustained attention. However, the relationship between blood antioxidant status and visual and auditory sustained attention has not been investigated. The aim of this study was to evaluate the association of fruit and vegetable intake and blood antioxidant status with visual and auditory sustained attention in women. This cross-sectional study was performed on 400 healthy women (20-50 years) who attended the sports clubs of Tehran Municipality. Sustained attention was evaluated based on the Integrated Visual and Auditory Continuous Performance Test using the Integrated Visual and Auditory (IVA) software. The 24-hour food recall questionnaire was used for estimating fruit and vegetable intake. Serum total antioxidant capacity (TAC), and erythrocyte superoxide dismutase (SOD) and glutathione peroxidase (GPx) activities were measured in 90 participants. After adjusting for energy intake, age, body mass index (BMI), years of education and physical activity, higher reported fruit and vegetable intake was associated with better visual and auditory sustained attention (P < 0.001). A high intake of some subgroups of fruits and vegetables (i.e. berries, cruciferous vegetables, green leafy vegetables, and other vegetables) was also associated with better sustained attention (P < 0.02). Serum TAC, and erythrocyte SOD and GPx activities increased with the increase in the tertiles of visual and auditory sustained attention after adjusting for age, years of education, physical activity, energy, BMI, and caffeine intake (P < 0.05). Improved visual and auditory sustained attention is associated with a better blood antioxidant status. Therefore, improvement of antioxidant status through an appropriate dietary intake may enhance sustained attention.
Hertrich, Ingo; Dietrich, Susanne; Ackermann, Hermann
2011-01-01
During speech communication, visual information may interact with the auditory system at various processing stages. Most noteworthy, recent magnetoencephalography (MEG) data provided the first evidence for early and preattentive phonetic/phonological encoding of the visual data stream, prior to its fusion with auditory phonological features [Hertrich, I., Mathiak, K., Lutzenberger, W., & Ackermann, H. Time course of early audiovisual interactions during speech and non-speech central-auditory processing: An MEG study. Journal of Cognitive Neuroscience, 21, 259-274, 2009]. Using functional magnetic resonance imaging, the present follow-up study aims to further elucidate the topographic distribution of visual-phonological operations and audiovisual (AV) interactions during speech perception. Ambiguous acoustic syllables, disambiguated to /pa/ or /ta/ by the visual channel (speaking face), served as test materials, concomitant with various control conditions (nonspeech AV signals, visual-only and acoustic-only speech, and nonspeech stimuli). (i) Visual speech yielded an AV-subadditive activation of primary auditory cortex and the anterior superior temporal gyrus (STG), whereas the posterior STG responded both to speech and nonspeech motion. (ii) The inferior frontal and the fusiform gyrus of the right hemisphere showed a strong phonetic/phonological impact (differential effects of visual /pa/ vs. /ta/) upon hemodynamic activation during presentation of speaking faces. Taken together with the previous MEG data, these results point to a dual-pathway model of visual speech information processing: On the one hand, access to the auditory system via the anterior supratemporal "what" path may give rise to direct activation of "auditory objects." On the other hand, visual speech information seems to be represented in a right-hemisphere visual working memory, providing a potential basis for later interactions with auditory information such as the McGurk effect.
Eye closure helps memory by reducing cognitive load and enhancing visualisation.
Vredeveldt, Annelies; Hitch, Graham J; Baddeley, Alan D
2011-10-01
Closing the eyes helps memory. We investigated the mechanisms underlying the eyeclosure effect by exposing 80 eyewitnesses to different types of distraction during the witness interview: blank screen (control), eyes closed, visual distraction, and auditory distraction. We examined the cognitive load hypothesis by comparing any type of distraction (visual or auditory) with minimal distraction (blank screen or eyes closed). We found recall to be significantly better when distraction was minimal, providing evidence that eyeclosure reduces cognitive load. We examined the modality-specific interference hypothesis by comparing the effects of visual and auditory distraction on recall of visual and auditory information. Visual and auditory distraction selectively impaired memory for information presented in the same modality, supporting the role of visualisation in the eyeclosure effect. Analysis of recall in terms of grain size revealed that recall of basic information about the event was robust, whereas recall of specific details was prone to both general and modality-specific disruptions.
Auditory-visual fusion in speech perception in children with cochlear implants
Schorr, Efrat A.; Fox, Nathan A.; van Wassenhove, Virginie; Knudsen, Eric I.
2005-01-01
Speech, for most of us, is a bimodal percept whenever we both hear the voice and see the lip movements of a speaker. Children who are born deaf never have this bimodal experience. We tested children who had been deaf from birth and who subsequently received cochlear implants for their ability to fuse the auditory information provided by their implants with visual information about lip movements for speech perception. For most of the children with implants (92%), perception was dominated by vision when visual and auditory speech information conflicted. For some, bimodal fusion was strong and consistent, demonstrating a remarkable plasticity in their ability to form auditory-visual associations despite the atypical stimulation provided by implants. The likelihood of consistent auditory-visual fusion declined with age at implant beyond 2.5 years, suggesting a sensitive period for bimodal integration in speech perception. PMID:16339316
Lau, Bonnie K; Ruggles, Dorea R; Katyal, Sucharit; Engel, Stephen A; Oxenham, Andrew J
2017-01-01
Short-term training can lead to improvements in behavioral discrimination of auditory and visual stimuli, as well as enhanced EEG responses to those stimuli. In the auditory domain, fluency with tonal languages and musical training has been associated with long-term cortical and subcortical plasticity, but less is known about the effects of shorter-term training. This study combined electroencephalography (EEG) and behavioral measures to investigate short-term learning and neural plasticity in both auditory and visual domains. Forty adult participants were divided into four groups. Three groups trained on one of three tasks, involving discrimination of auditory fundamental frequency (F0), auditory amplitude modulation rate (AM), or visual orientation (VIS). The fourth (control) group received no training. Pre- and post-training tests, as well as retention tests 30 days after training, involved behavioral discrimination thresholds, steady-state visually evoked potentials (SSVEP) to the flicker frequencies of visual stimuli, and auditory envelope-following responses simultaneously evoked and measured in response to rapid stimulus F0 (EFR), thought to reflect subcortical generators, and slow amplitude modulation (ASSR), thought to reflect cortical generators. Enhancement of the ASSR was observed in both auditory-trained groups, not specific to the AM-trained group, whereas enhancement of the SSVEP was found only in the visually-trained group. No evidence was found for changes in the EFR. The results suggest that some aspects of neural plasticity can develop rapidly and may generalize across tasks but not across modalities. Behaviorally, the pattern of learning was complex, with significant cross-task and cross-modal learning effects.
Auditory short-term memory activation during score reading.
Simoens, Veerle L; Tervaniemi, Mari
2013-01-01
Performing music on the basis of reading a score requires reading ahead of what is being played in order to anticipate the actions needed to produce the notes. Score reading thus involves not only the decoding of a visual score and its comparison to the auditory feedback, but also short-term storage of the musical information, owing to the delay of the auditory feedback during reading ahead. This study investigates the mechanisms of encoding of musical information in short-term memory during such a complicated procedure. There were three parts in this study. First, professional musicians participated in an electroencephalographic (EEG) experiment to study the slow wave potentials during a time interval of short-term memory storage in a situation that requires cross-modal translation and short-term storage of visual material to be compared with delayed auditory material, as is the case in music score reading. This delayed visual-to-auditory matching task was compared with delayed visual-visual and auditory-auditory matching tasks in terms of EEG topography and voltage amplitudes. Second, an additional behavioural experiment was performed to determine which type of distractor would be the most interfering with the score reading-like task. Third, the self-reported strategies of the participants were also analyzed. All three parts of this study point towards the same conclusion: during music score reading, the musician most likely first translates the visual score into an auditory cue, probably starting around 700 or 1300 ms after stimulus onset, ready for storage and delayed comparison with the auditory feedback.
Pfeiffer, Christian; Lopez, Christophe; Schmutz, Valentin; Duenas, Julio Angel; Martuzzi, Roberto; Blanke, Olaf
2013-01-01
In three experiments we investigated the effects of visuo-tactile and visuo-vestibular conflict about the direction of gravity on three aspects of bodily self-consciousness: self-identification, self-location, and the experienced direction of the first-person perspective. Robotic visuo-tactile stimulation was administered to 78 participants in three experiments. Additionally, we presented participants with a virtual body as seen from an elevated and downward-directed perspective while they were lying supine and were therefore receiving vestibular and postural cues about an upward-directed perspective. Under these conditions, we studied the effects of different degrees of visuo-vestibular conflict, repeated measurements during illusion induction, and the relationship to a classical measure of visuo-vestibular integration. Extending earlier findings on experimentally induced changes in bodily self-consciousness, we show that self-identification does not depend on the experienced direction of the first-person perspective, whereas self-location does. Changes in bodily self-consciousness depend on visual gravitational signals. Individual differences in the experienced direction of first-person perspective correlated with individual differences in visuo-vestibular integration. Our data reveal important contributions of visuo-vestibular gravitational cues to bodily self-consciousness. In particular we show that the experienced direction of the first-person perspective depends on the integration of visual, vestibular, and tactile signals, as well as on individual differences in idiosyncratic visuo-vestibular strategies. PMID:23630611
Ouimet, Tia; Foster, Nicholas E V; Tryfon, Ana; Hyde, Krista L
2012-04-01
Autism spectrum disorder (ASD) is a complex neurodevelopmental condition characterized by atypical social and communication skills, repetitive behaviors, and atypical visual and auditory perception. Studies in vision have reported enhanced detailed ("local") processing but diminished holistic ("global") processing of visual features in ASD. Individuals with ASD also show enhanced processing of simple visual stimuli but diminished processing of complex visual stimuli. Relative to the visual domain, auditory global-local distinctions, and the effects of stimulus complexity on auditory processing in ASD, are less clear. However, one remarkable finding is that many individuals with ASD have enhanced musical abilities, such as superior pitch processing. This review provides a critical evaluation of behavioral and brain imaging studies of auditory processing with respect to current theories in ASD. We have focused on auditory-musical processing in terms of global versus local processing and simple versus complex sound processing. This review contributes to a better understanding of auditory processing differences in ASD. A deeper comprehension of sensory perception in ASD is key to better defining ASD phenotypes and, in turn, may lead to better interventions. © 2012 New York Academy of Sciences.
Stekelenburg, Jeroen J; Vroomen, Jean
2012-01-01
In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical sub-additive amplitude reductions (AV - V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that this N1 suppression was greater for the spatially congruent stimuli. A very early audiovisual interaction was also found at 40-60 ms (P50) in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.
Neural Mechanisms Underlying Cross-Modal Phonetic Encoding.
Shahin, Antoine J; Backer, Kristina C; Rosenblum, Lawrence D; Kerlin, Jess R
2018-02-14
Audiovisual (AV) integration is essential for speech comprehension, especially in adverse listening situations. Divergent, but not mutually exclusive, theories have been proposed to explain the neural mechanisms underlying AV integration. One theory advocates that this process occurs via interactions between the auditory and visual cortices, as opposed to fusion of AV percepts in a multisensory integrator. Building upon this idea, we proposed that AV integration in spoken language reflects visually induced weighting of phonetic representations at the auditory cortex. EEG was recorded while male and female human subjects watched and listened to videos of a speaker uttering consonant vowel (CV) syllables /ba/ and /fa/, presented in Auditory-only, AV congruent or incongruent contexts. Subjects reported whether they heard /ba/ or /fa/. We hypothesized that vision alters phonetic encoding by dynamically weighting which phonetic representation in the auditory cortex is strengthened or weakened. That is, when subjects are presented with visual /fa/ and acoustic /ba/ and hear /fa/ (illusion-fa), the visual input strengthens the weighting of the phone /f/ representation. When subjects are presented with visual /ba/ and acoustic /fa/ and hear /ba/ (illusion-ba), the visual input weakens the weighting of the phone /f/ representation. Indeed, we found an enlarged N1 auditory evoked potential when subjects perceived illusion-ba, and a reduced N1 when they perceived illusion-fa, mirroring the N1 behavior for /ba/ and /fa/ in Auditory-only settings. These effects were especially pronounced in individuals with more robust illusory perception. These findings provide evidence that visual speech modifies phonetic encoding at the auditory cortex. SIGNIFICANCE STATEMENT The current study presents evidence that audiovisual integration in spoken language occurs when one modality (vision) acts on representations of a second modality (audition). Using the McGurk illusion, we show that visual context primes phonetic representations at the auditory cortex, altering the auditory percept, evidenced by changes in the N1 auditory evoked potential. This finding reinforces the theory that audiovisual integration occurs via visual networks influencing phonetic representations in the auditory cortex. We believe that this will lead to the generation of new hypotheses regarding cross-modal mapping, particularly whether it occurs via direct or indirect routes (e.g., via a multisensory mediator). Copyright © 2018 the authors 0270-6474/18/381835-15$15.00/0.
Neuropsychology: the touchy, feely side of vision.
Walsh, V
2000-01-13
Some visual attributes, such as colour, are purely visual, but others, such as orientation and movement, can be perceived by touch or audition. A magnetic stimulation study has now shown that the perception of tactile orientation may be influenced by visual information.
ERIC Educational Resources Information Center
Zupan, Barbra; Sussman, Joan E.
2009-01-01
Experiment 1 examined modality preferences in children and adults with normal hearing to combined auditory-visual stimuli. Experiment 2 compared modality preferences in children using cochlear implants participating in an auditory emphasized therapy approach to the children with normal hearing from Experiment 1. A second objective in both…
ERIC Educational Resources Information Center
Vercillo, Tiziana; Burr, David; Gori, Monica
2016-01-01
A recent study has shown that congenitally blind adults, who have never had visual experience, are impaired on an auditory spatial bisection task (Gori, Sandini, Martinoli, & Burr, 2014). In this study we investigated how thresholds for auditory spatial bisection and auditory discrimination develop with age in sighted and congenitally blind…
Bharadwaj, Sneha V; Maricle, Denise; Green, Laura; Allman, Tamby
2015-10-01
The objective of the study was to examine short-term memory and working memory through both visual and auditory tasks in school-age children with cochlear implants. The relationships between performance on these cognitive skills and reading as well as language outcomes were examined in these children. Ten children between the ages of 7 and 11 years with early-onset bilateral severe-profound hearing loss participated in the study. Auditory and visual short-term memory, auditory and visual working memory subtests and verbal knowledge measures were assessed using the Woodcock Johnson III Tests of Cognitive Abilities, the Wechsler Intelligence Scale for Children-IV Integrated and the Kaufman Assessment Battery for Children II. Reading outcomes were assessed using the Woodcock Reading Mastery Test III. Performance on visual short-term memory and visual working memory measures in children with cochlear implants was within the average range when compared to the normative mean. However, auditory short-term memory and auditory working memory measures were below average when compared to the normative mean. Performance was also below average on all verbal knowledge measures. Regarding reading outcomes, children with cochlear implants scored below average for listening and passage comprehension tasks, and these measures were positively correlated with visual short-term memory, visual working memory and auditory short-term memory. Performance on auditory working memory subtests was not related to reading or language outcomes. The children with cochlear implants in this study demonstrated better performance in visual (spatial) working memory and short-term memory skills than in auditory working memory and auditory short-term memory skills. Significant positive relationships were found between visual working memory and reading outcomes. The results of the study support the idea that working memory capacity is modality specific in children with hearing loss. Based on these findings, reading instruction that capitalizes on the strengths in visual short-term memory and working memory is suggested for young children with early-onset hearing loss. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Artificial tactile sensing in minimally invasive surgery - a new technical approach.
Schostek, Sebastian; Ho, Chi-Nghia; Kalanovic, Daniel; Schurr, Marc O
2006-01-01
The loss of tactile sensation is a commonly known drawback of minimally invasive surgery (MIS). Since the advent of MIS, research into providing tactile information to the surgeon has been ongoing, in order to improve patient safety and to extend the indications for MIS. We have designed a tactile sensor system comprising a tactile laparoscopic grasper for surgical palpation. For this purpose, we developed a novel tactile sensor technology which allows the manufacturing of an integrated sensor array within an acceptable price range. The array was integrated into the jaws of a 10 mm laparoscopic grasper. The tactile data are transferred wirelessly via Bluetooth and are presented visually to the surgeon. The goal was to obtain information about the shape and consistency of tissue structures by gently compressing the tissue between the jaws of the tactile instrument, and thus to recognize and assess anatomical or pathological structures, even if they are hidden in the tissue. With a prototype of the tactile sensor system we conducted bench tests as well as in-vitro and in-vivo experiments. The system proved feasible in an experimental environment; it was easy to use, and the novel tactile sensor array was applicable to both palpation and grasping manoeuvres with forces of up to 60 N. Under certain conditions, the tactile data turned out to be a useful supplement to the minimal haptic feedback provided by current endoscopic instruments and the endoscopic image.
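The abstract describes the system architecture (sensor array in the grasper jaws → wireless Bluetooth link → visual display) without implementation detail. As a rough illustration only, assuming a small pressure array and a linear calibration up to the stated 60 N (the array geometry, ADC range, and calibration are inventions for the sketch, not the device's specifications):

```python
# Illustrative only: display one frame from a hypothetical 8x2 tactile
# array as a grayscale map for the surgeon. Array size, ADC range, and
# the linear force calibration are assumptions.
import numpy as np
import matplotlib.pyplot as plt

def show_tactile_frame(raw_counts, full_scale_n=60.0, max_count=1023):
    """raw_counts: 8x2 array of raw sensor readings for one frame."""
    forces = np.asarray(raw_counts, dtype=float) / max_count * full_scale_n
    plt.imshow(forces, cmap="gray", vmin=0.0, vmax=full_scale_n)
    plt.colorbar(label="estimated contact force (N)")
    plt.title("Tactile image of grasped tissue")
    plt.show()
```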
Functional mapping of the primate auditory system.
Poremba, Amy; Saunders, Richard C; Crane, Alison M; Cook, Michelle; Sokoloff, Louis; Mishkin, Mortimer
2003-01-24
Cerebral auditory areas were delineated in the awake, passively listening, rhesus monkey by comparing the rates of glucose utilization in an intact hemisphere and in an acoustically isolated contralateral hemisphere of the same animal. The auditory system defined in this way occupied large portions of cerebral tissue, an extent probably second only to that of the visual system. Cortically, the activated areas included the entire superior temporal gyrus and large portions of the parietal, prefrontal, and limbic lobes. Several auditory areas overlapped with previously identified visual areas, suggesting that the auditory system, like the visual system, contains separate pathways for processing stimulus quality, location, and motion.
Test of the neurolinguistic programming hypothesis that eye-movements relate to processing imagery.
Wertheim, E H; Habib, C; Cumming, G
1986-04-01
Bandler and Grinder's hypothesis that eye-movements reflect sensory processing was examined. 28 volunteers first memorized and then recalled visual, auditory, and kinesthetic stimuli. Changes in eye-positions during recall were videotaped and categorized by two raters into positions hypothesized by Bandler and Grinder's model to represent visual, auditory, and kinesthetic recall. Planned contrast analyses suggested that visual stimulus items, when recalled, elicited significantly more upward eye-positions and stares than auditory and kinesthetic items. Auditory and kinesthetic items, however, did not elicit more changes in eye-position hypothesized by the model to represent auditory and kinesthetic recall, respectively.
Nikbakht, Nader; Tafreshiha, Azadeh; Zoccolan, Davide; Diamond, Mathew E
2018-02-07
To better understand how object recognition can be triggered independently of the sensory channel through which information is acquired, we devised a task in which rats judged the orientation of a raised, black and white grating. They learned to recognize two categories of orientation: 0° ± 45° ("horizontal") and 90° ± 45° ("vertical"). Each trial required a visual (V), a tactile (T), or a visual-tactile (VT) discrimination; VT performance was better than that predicted by optimal linear combination of V and T signals, indicating synergy between sensory channels. We examined posterior parietal cortex (PPC) and uncovered key neuronal correlates of the behavioral findings: PPC carried both graded information about object orientation and categorical information about the rat's upcoming choice; single neurons exhibited identical responses under the three modality conditions. Finally, a linear classifier of neuronal population firing replicated the behavioral findings. Taken together, these findings suggest that PPC is involved in the supramodal processing of shape. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
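The "optimal linear combination" benchmark in such cue-combination tasks is usually the maximum-likelihood rule, in which each unisensory estimate is weighted by its reliability; the abstract does not spell the model out, so the standard formulation below is an assumption about the benchmark used:

```latex
% Reliability-weighted (maximum-likelihood) cue combination -- assumed
% to be the "optimal linear combination" benchmark in the abstract.
\hat{\theta}_{VT} = w_V\,\hat{\theta}_V + w_T\,\hat{\theta}_T,
\qquad
w_V = \frac{1/\sigma_V^{2}}{1/\sigma_V^{2} + 1/\sigma_T^{2}},
\qquad
\sigma_{VT}^{2} = \frac{\sigma_V^{2}\,\sigma_T^{2}}{\sigma_V^{2} + \sigma_T^{2}}
```

Because no linear weighting of the two noisy estimates can achieve a variance below this bound, visual-tactile performance exceeding the prediction, as reported here, implies a nonlinear synergy between the channels.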
Ueno, Daisuke; Masumoto, Kouhei; Sutani, Kouichi; Iwaki, Sunao
2015-04-15
This study used magnetoencephalography (MEG) to examine the latency of modality-specific reactivation in the visual and auditory cortices during a recognition task, to determine the effects of reactivation on episodic memory retrieval. Nine right-handed healthy young adults participated in the experiment. The experiment consisted of a word-encoding phase and two recognition phases. Three encoding conditions were included: encoding words alone (word-only) and encoding words presented with either related pictures (visual) or related sounds (auditory). The recognition task was conducted in the MEG scanner 15 min after the completion of the encoding phase. After the recognition test, a source-recognition task was given, in which participants were required to choose whether each recognition word had not been presented or had been presented, and if so, with which type of information during the encoding phase. Word recognition in the auditory condition was higher than that in the word-only condition. Confidence-of-recognition scores (d') and the source-recognition test showed superior performance in both the visual and the auditory conditions compared with the word-only condition. An equivalent current dipole (ECD) analysis of the MEG data indicated higher ECD amplitudes in the right fusiform gyrus during the visual condition and in the superior temporal auditory cortices during the auditory condition, both 450-550 ms after onset of the recognition stimuli. These results suggest that reactivation of visual and auditory brain regions during recognition binds language with modality-specific information and that reactivation enhances confidence in one's recognition performance.
States of Awareness I: Subliminal Perception Relationship to Situational Awareness
1993-05-01
In one experiment, the visual detection threshold was raised by simultaneous auditory stimulation involving subliminal emotional words. Similar results... an assessment was made of the effects of both subliminal and supraliminal auditory accessory stimulation (white noise) on a visual detection task... Both subliminal and supraliminal auditory stimulation were employed to evaluate possible differential effects in visual illusions
ERIC Educational Resources Information Center
Kodak, Tiffany; Clements, Andrea; Paden, Amber R.; LeBlanc, Brittany; Mintz, Joslyn; Toussaint, Karen A.
2015-01-01
The current investigation evaluated repertoires that may be related to performance on auditory-to-visual conditional discrimination training with 9 students who had been diagnosed with autism spectrum disorder. The skills included in the assessment were matching, imitation, scanning, an auditory discrimination, and a visual discrimination. The…
Facilitation of listening comprehension by visual information under noisy listening condition
NASA Astrophysics Data System (ADS)
Kashimada, Chiho; Ito, Takumi; Ogita, Kazuki; Hasegawa, Hiroshi; Kamata, Kazuo; Ayama, Miyoshi
2009-02-01
Comprehension of a sentence was measured under a wide range of delays between auditory and visual stimuli, in an environment with low auditory clarity (-10 dB and -15 dB pink-noise conditions). Results showed that the image was helpful for comprehension of the noise-obscured voice stimulus when the delay between the auditory and visual stimuli was 4 frames (= 132 ms, i.e., 33 ms per frame) or less; that the image was not helpful when the delay was 8 frames (= 264 ms) or more; and that in some cases at the largest delay (32 frames, = 1056 ms), the video image interfered with comprehension.
Hertz, Uri; Amedi, Amir
2015-01-01
The classical view of sensory processing involves independent processing in sensory cortices and multisensory integration in associative areas. This hierarchical structure has been challenged by evidence of multisensory responses in sensory areas, and dynamic weighting of sensory inputs in associative areas, thus far reported independently. Here, we used a visual-to-auditory sensory substitution algorithm (SSA) to manipulate the information conveyed by sensory inputs while keeping the stimuli intact. During scan sessions before and after SSA learning, subjects were presented with visual images and auditory soundscapes. The findings reveal 2 dynamic processes. First, crossmodal attenuation of sensory cortices changed direction after SSA learning from visual attenuations of the auditory cortex to auditory attenuations of the visual cortex. Secondly, associative areas changed their sensory response profile from strongest response for visual to that for auditory. The interaction between these phenomena may play an important role in multisensory processing. Consistent features were also found in the sensory dominance in sensory areas and audiovisual convergence in associative area Middle Temporal Gyrus. These 2 factors allow for both stability and a fast, dynamic tuning of the system when required. PMID:24518756
Elevated audiovisual temporal interaction in patients with migraine without aura
2014-01-01
Background Photophobia and phonophobia are the most prominent symptoms in patients with migraine without aura. Hypersensitivity to visual stimuli can lead to greater hypersensitivity to auditory stimuli, which suggests that the interaction between visual and auditory stimuli may play an important role in the pathogenesis of migraine. However, audiovisual temporal interactions in migraine have not been well studied. Therefore, our aim was to examine auditory and visual interactions in migraine. Methods In this study, visual, auditory, and audiovisual stimuli with different temporal intervals between the visual and auditory stimuli were randomly presented to the left or right hemispace. During this time, the participants were asked to respond promptly to target stimuli. We used cumulative distribution functions to analyze the response times as a measure of audiovisual integration. Results Our results showed that audiovisual integration was significantly elevated in the migraineurs compared with the normal controls (p < 0.05); however, audiovisual suppression was weaker in the migraineurs compared with the normal controls (p < 0.05). Conclusions Our findings further objectively support the notion that migraineurs without aura are hypersensitive to external visual and auditory stimuli. Our study offers a new quantitative and objective method to evaluate hypersensitivity to audio-visual stimuli in patients with migraine. PMID:24961903
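The abstract's CDF-based analysis is not fully specified; a common choice for this kind of redundant-target design is Miller's race-model inequality, which bounds the multisensory response-time CDF by the sum of the two unisensory CDFs. A minimal sketch under that assumption (variable names and the time grid are illustrative):

```python
# Sketch of a race-model test (Miller, 1982), assuming this is the
# CDF-based integration measure referred to in the abstract.
import numpy as np

def empirical_cdf(rts, t_grid):
    """Fraction of response times at or below each time in t_grid."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t_grid, side="right") / len(rts)

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Positive values indicate integration beyond what independent
    auditory and visual channels racing each other could produce."""
    bound = np.minimum(empirical_cdf(rt_a, t_grid)
                       + empirical_cdf(rt_v, t_grid), 1.0)
    return empirical_cdf(rt_av, t_grid) - bound

# Example: evaluate the violation every 10 ms from 100 to 600 ms.
t_grid = np.arange(100, 601, 10)
```

On a measure of this kind, stronger early violations in the migraine group than in controls would correspond to the elevated audiovisual integration the authors report.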
ERIC Educational Resources Information Center
Rule, Audrey C.
2011-01-01
New tactile curriculum materials for teaching Earth and planetary science lessons on rotation=revolution, silhouettes of objects from different views, contour maps, impact craters, asteroids, and topographic features of Mars to 11 elementary and middle school students with sight impairments at a week-long residential summer camp are presented…
Monaco, Simona; Gallivan, Jason P; Figley, Teresa D; Singhal, Anthony; Culham, Jody C
2017-11-29
The role of the early visual cortex and higher-order occipitotemporal cortex has been studied extensively for visual recognition and to a lesser degree for haptic recognition and visually guided actions. Using a slow event-related fMRI experiment, we investigated whether tactile and visual exploration of objects recruit the same "visual" areas (and in the case of visual cortex, the same retinotopic zones) and if these areas show reactivation during delayed actions in the dark toward haptically explored objects (and if so, whether this reactivation might be due to imagery). We examined activation during visual or haptic exploration of objects and action execution (grasping or reaching) separated by an 18 s delay. Twenty-nine human volunteers (13 females) participated in this study. Participants had their eyes open and fixated on a point in the dark. The objects were placed below the fixation point and accordingly visual exploration activated the cuneus, which processes retinotopic locations in the lower visual field. Strikingly, the occipital pole (OP), representing foveal locations, showed higher activation for tactile than visual exploration, although the stimulus was unseen and location in the visual field was peripheral. Moreover, the lateral occipital tactile-visual area (LOtv) showed comparable activation for tactile and visual exploration. Psychophysiological interaction analysis indicated that the OP showed stronger functional connectivity with anterior intraparietal sulcus and LOtv during the haptic than visual exploration of shapes in the dark. After the delay, the cuneus, OP, and LOtv showed reactivation that was independent of the sensory modality used to explore the object. These results show that haptic actions not only activate "visual" areas during object touch, but also that this information appears to be used in guiding grasping actions toward targets after a delay. SIGNIFICANCE STATEMENT Visual presentation of an object activates shape-processing areas and retinotopic locations in early visual areas. Moreover, if the object is grasped in the dark after a delay, these areas show "reactivation." Here, we show that these areas are also activated and reactivated for haptic object exploration and haptically guided grasping. Touch-related activity occurs not only in the retinotopic location of the visual stimulus, but also at the occipital pole (OP), corresponding to the foveal representation, even though the stimulus was unseen and located peripherally. That is, the same "visual" regions are implicated in both visual and haptic exploration; however, touch also recruits high-acuity central representation within early visual areas during both haptic exploration of objects and subsequent actions toward them. Functional connectivity analysis shows that the OP is more strongly connected with ventral and dorsal stream areas when participants explore an object in the dark than when they view it. Copyright © 2017 the authors 0270-6474/17/3711572-20$15.00/0.
What colour does that feel? Tactile-visual mapping and the development of cross-modality.
Ludwig, Vera U; Simner, Julia
2013-04-01
Humans share implicit preferences for cross-modal mappings (e.g., low pitch sounds are preferentially paired with darker colours). Individuals with synaesthesia experience cross-modal mappings to a conscious degree (e.g., they may see colours when they hear sounds). The neonatal synaesthesia hypothesis claims that all humans may be born with this explicit cross-modal perception, which dies out in most people through childhood, leaving only implicit associations in the average adult. Although there is evidence for decreasing cross-modality throughout early infancy, it is unclear whether this decline continues to take place throughout childhood and adolescence. This large-scale study had two goals. First, we aimed to establish whether human non-synaesthetes systematically map tactile and visual dimensions - a combination that has rarely been studied. Second, we asked whether tactile-visual associations may be more pronounced in younger compared to older participants. 210 participants between the ages of 5-74 years assigned colours to tactile stimuli. Smoothness, softness and roundness of stimuli positively correlated with luminance of the chosen colour; and smoothness and softness also positively correlated with chroma. Moreover, tactile sensations were associated with specific colours (e.g., softness with pink). There were no age differences for luminance effects. Chroma effects, however, were found exclusively in children and adolescents. Our findings are consistent with the neonatal synaesthesia hypothesis which suggests that all humans are born with strong cross-modal perception which is pruned away or inhibited throughout development. Moreover, the findings suggest that a decline of some forms of cross-modality may take place over a much longer time span than previously assumed. Copyright © 2012 Elsevier Ltd. All rights reserved.
Zhu, Lin L; Beauchamp, Michael S
2017-03-08
Cortex in and around the human posterior superior temporal sulcus (pSTS) is known to be critical for speech perception. The pSTS responds to both the visual modality (especially biological motion) and the auditory modality (especially human voices). Using fMRI in single subjects with no spatial smoothing, we show that visual and auditory selectivity are linked. Regions of the pSTS were identified that preferred visually presented moving mouths (presented in isolation or as part of a whole face) or moving eyes. Mouth-preferring regions responded strongly to voices and showed a significant preference for vocal compared with nonvocal sounds. In contrast, eye-preferring regions did not respond to either vocal or nonvocal sounds. The converse was also true: regions of the pSTS that showed a significant response to speech or preferred vocal to nonvocal sounds responded more strongly to visually presented mouths than eyes. These findings can be explained by environmental statistics. In natural environments, humans see visual mouth movements at the same time as they hear voices, while there is no auditory accompaniment to visual eye movements. The strength of a voxel's preference for visual mouth movements was strongly correlated with the magnitude of its auditory speech response and its preference for vocal sounds, suggesting that visual and auditory speech features are coded together in small populations of neurons within the pSTS. SIGNIFICANCE STATEMENT Humans interacting face to face make use of auditory cues from the talker's voice and visual cues from the talker's mouth to understand speech. The human posterior superior temporal sulcus (pSTS), a brain region known to be important for speech perception, is complex, with some regions responding to specific visual stimuli and others to specific auditory stimuli. Using BOLD fMRI, we show that the natural statistics of human speech, in which voices co-occur with mouth movements, are reflected in the neural architecture of the pSTS. Different pSTS regions prefer visually presented faces containing either a moving mouth or moving eyes, but only mouth-preferring regions respond strongly to voices. Copyright © 2017 the authors 0270-6474/17/372697-12$15.00/0.
Synchronization to auditory and visual rhythms in hearing and deaf individuals
Iversen, John R.; Patel, Aniruddh D.; Nicodemus, Brenda; Emmorey, Karen
2014-01-01
A striking asymmetry in human sensorimotor processing is that humans synchronize movements to rhythmic sound with far greater precision than to temporally equivalent visual stimuli (e.g., to an auditory vs. a flashing visual metronome). Traditionally, this finding is thought to reflect a fundamental difference in auditory vs. visual processing, i.e., superior temporal processing by the auditory system and/or privileged coupling between the auditory and motor systems. It is unclear whether this asymmetry is an inevitable consequence of brain organization or whether it can be modified (or even eliminated) by stimulus characteristics or by experience. With respect to stimulus characteristics, we found that a moving, colliding visual stimulus (a silent image of a bouncing ball with a distinct collision point on the floor) was able to drive synchronization nearly as accurately as sound in hearing participants. To study the role of experience, we compared synchronization to flashing metronomes in hearing and profoundly deaf individuals. Deaf individuals performed better than hearing individuals when synchronizing with visual flashes, suggesting that cross-modal plasticity enhances the ability to synchronize with temporally discrete visual stimuli. Furthermore, when deaf (but not hearing) individuals synchronized with the bouncing ball, their tapping patterns suggest that visual timing may access higher-order beat perception mechanisms for deaf individuals. These results indicate that the auditory advantage in rhythmic synchronization is more experience- and stimulus-dependent than has been previously reported. PMID:25460395
Haptograph Representation of Real-World Haptic Information by Wideband Force Control
NASA Astrophysics Data System (ADS)
Katsura, Seiichiro; Irie, Kouhei; Ohishi, Kiyoshi
Artificial acquisition and reproduction of human sensations are basic technologies of communication engineering. For example, auditory information is obtained by a microphone, and a speaker reproduces it by artificial means. Furthermore, a video camera and a television make it possible to transmit visual sensation by broadcasting. In contrast, since tactile or haptic information is subject to Newton's "law of action and reaction" in the real world, a device that acquires, transmits, and reproduces such information has not been established. From this point of view, real-world haptics is the key technology for future haptic communication engineering. This paper proposes a novel acquisition method for haptic information named the "haptograph". The haptograph visualizes haptic information in the way a photograph visualizes a scene. The proposed haptograph is applied to haptic recognition of the contact environment. A linear motor contacts the surface of the environment and its reaction force is used to make a haptograph. Robust contact motion and sensor-less sensing of the reaction force are attained by using a disturbance observer. As a result, an encyclopedia of contact environments is obtained. Since temporal and spatial analyses are conducted to represent haptic information as a haptograph, the information can be recognized and evaluated intuitively.
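The abstract names the enabling technique (a disturbance observer giving robust contact motion and sensor-less reaction-force sensing) but gives no equations. A minimal discrete-time sketch, with the motor constants and observer bandwidth as placeholder assumptions rather than the authors' values:

```python
# Minimal sketch of sensor-less reaction-force estimation with a
# first-order disturbance observer, the technique named in the abstract.
# Thrust constant kt, moving mass m, and cutoff g are placeholders.
def reaction_force_estimates(current_refs, velocities, dt,
                             kt=1.0, m=0.5, g=200.0):
    """current_refs: commanded motor currents (A), one per sample.
    velocities: measured motor velocities (m/s), one per sample.
    dt: sample period (s); g: observer cutoff frequency (rad/s)."""
    alpha = g * dt / (1.0 + g * dt)  # backward-Euler low-pass coefficient
    force_hat, prev_v, out = 0.0, velocities[0], []
    for i_ref, v in zip(current_refs, velocities):
        accel = (v - prev_v) / dt
        prev_v = v
        # Estimated disturbance = motor thrust minus the force accounted
        # for by the observed acceleration; friction is neglected here.
        raw = kt * i_ref - m * accel
        force_hat += alpha * (raw - force_hat)  # low-pass the noisy estimate
        out.append(force_hat)
    return out
```

Sampling such estimates while the motor scans across a surface, and plotting them against position and time, yields the kind of temporal-spatial map the authors call a haptograph.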
Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O
2008-09-16
Event-related potential studies revealed an early posterior negativity (EPN) for emotional compared to neutral pictures. Exploring the emotion-attention relationship, a previous study observed that a primary visual discrimination task interfered with the emotional modulation of the EPN component. To specify the locus of interference, the present study assessed the fate of selective visual emotion processing while attention is directed towards the auditory modality. While simply viewing a rapid and continuous stream of pleasant, neutral, and unpleasant pictures in one experimental condition, processing demands of a concurrent auditory target discrimination task were systematically varied in three further experimental conditions. Participants successfully performed the auditory task as revealed by behavioral performance and selected event-related potential components. Replicating previous results, emotional pictures were associated with a larger posterior negativity compared to neutral pictures. Of main interest, increasing demands of the auditory task did not modulate the selective processing of emotional visual stimuli. With regard to the locus of interference, selective emotion processing as indexed by the EPN does not seem to reflect shared processing resources of visual and auditory modality.
Xia, Jing; Zhang, Wei; Jiang, Yizhou; Li, You; Chen, Qi
2018-05-16
Practice and experience gradually shape the central nervous system, from the synaptic level to large-scale neural networks. In a natural multisensory environment, even when inundated by streams of information from multiple sensory modalities, our brain does not give equal weight to different modalities. Rather, visual information more frequently receives preferential processing and eventually dominates consciousness and behavior, i.e., visual dominance. It remains unknown, however, whether practice effects during cross-modal selective attention are supra-modal or modality-specific, and whether they show the same modality preference as the visual dominance effect in the multisensory environment. To answer these two questions, we adopted a cross-modal selective attention paradigm in conjunction with a hybrid fMRI design. Behaviorally, visual performance significantly improved with practice while auditory performance remained constant, indicating that visual attention adapted behavior more flexibly with practice than auditory attention. At the neural level, the practice effect was associated with decreasing neural activity in the frontoparietal executive network and increasing activity in the default mode network, and this occurred independently of the modality attended, i.e., a supra-modal mechanism. On the other hand, functional decoupling between the auditory and visual systems was observed as practice progressed, and it varied as a function of the modality attended. The auditory system was functionally decoupled from both the dorsal and the ventral visual stream during auditory attention, but only from the ventral visual stream during visual attention. That is, to efficiently suppress irrelevant visual information with practice, auditory attention must additionally decouple the auditory system from the dorsal visual stream. These modality-specific mechanisms, together with the behavioral effect, support the visual dominance model with respect to practice effects during cross-modal selective attention. Copyright © 2018 Elsevier Ltd. All rights reserved.
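Functional coupling of the kind reported here is commonly quantified as the correlation between regional fMRI time series, with decoupling appearing as a drop in that correlation across practice blocks. The sketch below is a generic illustration under that assumption; the study's actual connectivity analysis may differ.

    import numpy as np

    def coupling(ts_a, ts_b):
        """Functional coupling as the Pearson correlation of two ROI time series."""
        return np.corrcoef(ts_a, ts_b)[0, 1]

    def coupling_by_block(ts_a, ts_b, n_blocks):
        """Coupling per practice block; a decreasing sequence indicates decoupling."""
        pairs = zip(np.array_split(ts_a, n_blocks), np.array_split(ts_b, n_blocks))
        return [coupling(a, b) for a, b in pairs]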
Non-visual spatial tasks reveal increased interactions with stance postural control.
Woollacott, Marjorie; Vander Velde, Timothy
2008-05-07
The current investigation aimed to contrast the level and quality of dual-task interactions resulting from the combined performance of a challenging primary postural task and three specific, yet categorically dissociated, secondary central executive tasks. The experiments determined the extent to which modality-specific (visual vs. auditory) and code-specific (non-spatial vs. spatial) cognitive resources contributed to postural interference in young adults (n=9) in a dual-task setting. We hypothesized that the different forms of executive n-back task processing employed (visual-object, auditory-object, and auditory-spatial) would interact to contrasting degrees with tandem Romberg stance postural control, and that tasks drawing on spatial resources would prove most vulnerable to dual-task interference. Across all cognitive tasks employed, including the auditory-object (aOBJ), auditory-spatial (aSPA), and visual-object (vOBJ) tasks, increasing n-back task complexity produced corresponding increases in verbal reaction time. Increasing cognitive task complexity also resulted in consistent decreases in judgment accuracy. Postural performance was significantly influenced by the type of cognitive loading delivered. At comparable levels of cognitive task difficulty (n-back demands and accuracy judgments), performance of challenging auditory-spatial tasks produced significantly greater postural sway than either the auditory-object or visual-object tasks. These results suggest that it is the employment of limited non-visual, spatially based coding resources that may underlie previously observed visual dual-task interference effects on stance postural control in healthy young adults.
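Postural sway of the kind measured here is typically summarized from the center-of-pressure (COP) trajectory, for example as total path length or RMS excursion. A minimal sketch under that assumption; the exact sway measure used in the study is not specified here.

    import numpy as np

    def sway_path_length(cop_x, cop_y):
        """Total COP excursion: summed point-to-point distances of the trajectory."""
        return np.hypot(np.diff(cop_x), np.diff(cop_y)).sum()

    def sway_rms(cop):
        """RMS sway about the mean position along one axis."""
        cop = np.asarray(cop, dtype=float)
        return np.sqrt(np.mean((cop - cop.mean()) ** 2))

Greater path length or RMS under cognitive load is the usual operationalization of the dual-task postural interference discussed above.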
Jacobsen, Leslie K; Slotkin, Theodore A; Mencl, W Einar; Frost, Stephen J; Pugh, Kenneth R
2007-12-01
Prenatal exposure to active maternal tobacco smoking elevates the risk of cognitive and auditory processing deficits, and of smoking, in offspring. Recent preclinical work has demonstrated a sex-specific pattern of reduction in cortical cholinergic markers following prenatal, adolescent, or combined prenatal and adolescent exposure to nicotine, the primary psychoactive component of tobacco smoke. Given the importance of cortical cholinergic neurotransmission to attentional function, we examined auditory and visual selective and divided attention in 181 male and female adolescent smokers and nonsmokers with and without prenatal exposure to maternal smoking. Groups did not differ in age, educational attainment, symptoms of inattention, or years of parent education. A subset of 63 subjects also underwent functional magnetic resonance imaging while performing an auditory and visual selective and divided attention task. Among females, exposure to tobacco smoke during prenatal or adolescent development was associated with reductions in auditory and visual attention performance accuracy that were greatest in female smokers with prenatal exposure (combined exposure). Among males, combined exposure was associated with marked deficits in auditory attention, suggesting that, in males, the neurocircuitry supporting auditory attention is particularly vulnerable to insult from developmental exposure to tobacco smoke. Activation of brain regions that support auditory attention was greater in adolescents with prenatal or adolescent exposure to tobacco smoke than in adolescents with neither exposure. These findings extend earlier preclinical work and suggest that, in humans, prenatal and adolescent exposure to nicotine exerts sex-specific deleterious effects on auditory and visual attention, with concomitant alterations in the efficiency of the neurocircuitry supporting auditory attention.