Headphone and Head-Mounted Visual Displays for Virtual Environments
NASA Technical Reports Server (NTRS)
Begault, Durand R.; Ellis, Stephen R.; Wenzel, Elizabeth M.; Trejo, Leonard J. (Technical Monitor)
1998-01-01
A realistic auditory environment can contribute both to the overall subjective sense of presence in a virtual display and to a quantitative metric predicting human performance. Here, the role of audio in a virtual display and the importance of auditory-visual interaction are examined. Conjectures are proposed regarding the effectiveness of audio compared to visual information for creating a sensation of immersion, the frame of reference within a virtual display, and the extent to which auditory information can compensate for reduced visual fidelity. Future areas of research are outlined for improving simulations of virtual visual and acoustic spaces. The paper also describes some of the intersensory phenomena that arise during operator interaction within combined visual and auditory virtual environments.
Auditory spatial representations of the world are compressed in blind humans.
Kolarik, Andrew J; Pardhan, Shahina; Cirstea, Silvia; Moore, Brian C J
2017-02-01
Compared to sighted listeners, blind listeners often display enhanced auditory spatial abilities such as localization in azimuth. However, less is known about whether blind humans can accurately judge distance in extrapersonal space using auditory cues alone. Using virtualization techniques, we show that auditory spatial representations of the world beyond the peripersonal space of blind listeners are compressed compared to those for normally sighted controls. Blind participants overestimated the distance to nearby sources and underestimated the distance to remote sound sources, in both reverberant and anechoic environments, and for speech, music, and noise signals. Functions relating judged and actual virtual distance were well fitted by compressive power functions, indicating that the absence of visual information regarding the distance of sound sources may prevent accurate calibration of the distance information provided by auditory signals.
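The compressive mapping reported here is commonly summarized by a power function, judged distance = k * (actual distance)^a, where an exponent a < 1 indicates compression. As a rough illustration only (not the study's data or analysis code), a fit of that form might look like the following sketch; the distance arrays are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(d, k, a):
    return k * d ** a

actual = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # virtual source distances (m)
judged = np.array([1.4, 2.3, 3.6, 5.5, 8.1])    # hypothetical mean judgments (m)

(k, a), _ = curve_fit(power_law, actual, judged, p0=(1.0, 1.0))
print(f"k = {k:.2f}, exponent a = {a:.2f}")      # a < 1: compressive mapping
```

With a < 1, near sources are overestimated and far sources underestimated, matching the pattern the abstract describes.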
NASA Astrophysics Data System (ADS)
Lee, Wendy
The advent of multisensory display systems, such as virtual and augmented reality, has fostered a new relationship between humans and space. Not only can these systems mimic real-world environments, they can also create a new space typology made solely of data. In these spaces, two-dimensional information is displayed in three dimensions, requiring human senses to be used to understand virtual, attention-based elements. Studies in the field of big data have predominantly focused on visual representations and extractions of information, with little focus on sound. The goal of this research is to evaluate the most efficient methods of perceptually extracting visual data using auditory stimuli in immersive environments. Using Rensselaer's CRAIVE-Lab, a virtual reality space with 360-degree panoramic visuals and an array of 128 loudspeakers, participants were asked questions based on complex visual displays using a variety of auditory cues ranging from sine tones to camera shutter sounds. Analysis of the speed and accuracy of participant responses revealed that auditory cues that were more favorable for localization and were positively perceived were best for data extraction and could help create more user-friendly systems in the future.
Kolarik, Andrew J; Cirstea, Silvia; Pardhan, Shahina
2013-02-01
Totally blind listeners often demonstrate better than normal capabilities when performing spatial hearing tasks. Accurate representation of three-dimensional auditory space requires the processing of available distance information between the listener and the sound source; however, auditory distance cues vary greatly depending upon the acoustic properties of the environment, and it is not known which distance cues are important to totally blind listeners. Our data show that totally blind listeners display better performance compared to sighted age-matched controls for distance discrimination tasks in anechoic and reverberant virtual rooms simulated using a room-image procedure. Totally blind listeners use two major auditory distance cues to stationary sound sources, level and direct-to-reverberant ratio, more effectively than sighted controls for many of the virtual distances tested. These results show that significant compensation among totally blind listeners for virtual auditory spatial distance leads to benefits across a range of simulated acoustic environments. No significant differences in performance were observed between listeners with partial non-correctable visual losses and sighted controls, suggesting that sensory compensation for virtual distance does not occur for listeners with partial vision loss.
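Of the two distance cues named here, the direct-to-reverberant ratio (DRR) can be estimated directly from a room impulse response. Below is a minimal sketch; the 2.5 ms direct-sound window and the toy impulse response are illustrative assumptions, not the authors' procedure:

```python
import numpy as np

def drr_db(ir, fs, window_ms=2.5):
    """Direct-to-reverberant ratio from an impulse response, in dB."""
    peak = int(np.argmax(np.abs(ir)))            # direct-sound arrival
    w = int(fs * window_ms / 1000)
    direct = np.sum(ir[max(0, peak - w):peak + w] ** 2)
    reverb = np.sum(ir[peak + w:] ** 2)
    return 10 * np.log10(direct / reverb)

fs = 48000
rng = np.random.default_rng(0)
ir = np.zeros(fs)                                # toy impulse response
ir[100] = 1.0                                    # direct path
ir[2000:] = 0.05 * rng.standard_normal(fs - 2000) * np.exp(-np.arange(fs - 2000) / 8000)
print(f"DRR = {drr_db(ir, fs):.1f} dB")
```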
Intelligibility of speech in a virtual 3-D environment.
MacDonald, Justin A; Balakrishnan, J D; Orosz, Michael D; Karplus, Walter J
2002-01-01
In a simulated air traffic control task, improvement in the detection of auditory warnings when using virtual 3-D audio depended on the spatial configuration of the sounds. Performance improved substantially when two of four sources were placed to the left and the remaining two were placed to the right of the participant. Surprisingly, little or no benefits were observed for configurations involving the elevation or transverse (front/back) dimensions of virtual space, suggesting that position on the interaural (left/right) axis is the crucial factor to consider in auditory display design. The relative importance of interaural spacing effects was corroborated in a second, free-field (real space) experiment. Two additional experiments showed that (a) positioning signals to the side of the listener is superior to placing them in front even when two sounds are presented in the same location, and (b) the optimal distance on the interaural axis varies with the amplitude of the sounds. These results are well predicted by the behavior of an ideal observer under the different display conditions. This suggests that guidelines for auditory display design that allow for effective perception of speech information can be developed from an analysis of the physical sound patterns.
Human Exploration of Enclosed Spaces through Echolocation.
Flanagin, Virginia L; Schörnich, Sven; Schranner, Michael; Hummel, Nadine; Wallmeier, Ludwig; Wahlberg, Magnus; Stephan, Thomas; Wiegrebe, Lutz
2017-02-08
Some blind humans have developed echolocation as a method of navigating space. Echolocation is a truly active sense because subjects analyze echoes of dedicated, self-generated sounds to assess the space around them. Using a special virtual space technique, we assess how humans perceive enclosed spaces through echolocation, thereby revealing the interplay between sensory and vocal-motor neural activity while humans perform this task. Sighted subjects were trained to detect small changes in virtual-room size by analyzing real-time generated echoes of their vocalizations. Individual differences in performance were related to the type and number of vocalizations produced. We then asked subjects to estimate virtual-room size with either active or passive sounds while measuring their brain activity with fMRI. Subjects were better at estimating room size when actively vocalizing. This was reflected in the hemodynamic activity of vocal-motor cortices, even after individual motor and sensory components were removed. Activity in these areas also varied with perceived room size, although the vocal-motor output was unchanged. In addition, thalamic and auditory-midbrain activity was correlated with perceived room size, a likely result of top-down auditory pathways for human echolocation, comparable with those described in echolocating bats. Our data provide evidence that human echolocation is supported by active sensing, both behaviorally and in terms of brain activity. The neural sensory-motor coupling complements the fundamental acoustic motor-sensory coupling via the environment in echolocation. SIGNIFICANCE STATEMENT Passive listening is the predominant method for examining brain activity during echolocation, the auditory analysis of self-generated sounds. We show that sighted humans perform better when they actively vocalize than during passive listening. Correspondingly, vocal motor and cerebellar activity is greater during active echolocation than vocalization alone. Motor and subcortical auditory brain activity covaries with the auditory percept, although motor output is unchanged. Our results reveal behaviorally relevant neural sensory-motor coupling during echolocation.
Berger, Christopher C; Gonzalez-Franco, Mar; Tajadura-Jiménez, Ana; Florencio, Dinei; Zhang, Zhengyou
2018-01-01
Auditory spatial localization in humans is performed using a combination of interaural time differences, interaural level differences, as well as spectral cues provided by the geometry of the ear. To render spatialized sounds within a virtual reality (VR) headset, either individualized or generic Head Related Transfer Functions (HRTFs) are usually employed. The former require arduous calibrations, but enable accurate auditory source localization, which may lead to a heightened sense of presence within VR. The latter obviate the need for individualized calibrations, but result in less accurate auditory source localization. Previous research on auditory source localization in the real world suggests that our representation of acoustic space is highly plastic. In light of these findings, we investigated whether auditory source localization could be improved for users of generic HRTFs via cross-modal learning. The results show that pairing a dynamic auditory stimulus, with a spatio-temporally aligned visual counterpart, enabled users of generic HRTFs to improve subsequent auditory source localization. Exposure to the auditory stimulus alone or to asynchronous audiovisual stimuli did not improve auditory source localization. These findings have important implications for human perception as well as the development of VR systems as they indicate that generic HRTFs may be enough to enable good auditory source localization in VR.
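For readers unfamiliar with HRTF-based rendering, the core operation is a pair of convolutions of the source signal with left- and right-ear head-related impulse responses (HRIRs). The sketch below illustrates this with stub HRIRs; a real display would load measured responses from a database rather than the placeholder arrays used here:

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 44100
rng = np.random.default_rng(0)
mono = rng.standard_normal(fs)                   # 1 s of noise as the source signal

hrir_left = np.zeros(256); hrir_left[10] = 1.0   # stub HRIRs for a source to the
hrir_right = np.zeros(256); hrir_right[25] = 0.6 # left: earlier, louder left ear

left = fftconvolve(mono, hrir_left)[:fs]
right = fftconvolve(mono, hrir_right)[:fs]
binaural = np.stack([left, right], axis=1)       # two-channel headphone signal
```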
NASA Astrophysics Data System (ADS)
Simonnet, Mathieu; Jacobson, Dan; Vieilledent, Stephane; Tisseau, Jacques
Navigating consists of coordinating egocentric and allocentric spatial frames of reference. Virtual environments have provided researchers in the spatial community with tools to investigate the learning of space. The issue of transfer between virtual and real situations is not trivial. A central question concerns the role of frames of reference in mediating spatial knowledge transfer to external surroundings, as is the effect of the different sensory modalities accessed in simulated and real worlds. This challenges the capacity of blind people to use virtual reality to explore a scene without graphics. The present experiment involves a haptic and auditory maritime virtual environment. In triangulation tasks we measured systematic errors, and preliminary results show an ability to learn configurational knowledge and to navigate through it without vision. Subjects appeared to take advantage of episodes of getting lost in an egocentric "haptic" view of the virtual environment to improve performance in the real environment.
Reaching nearby sources: comparison between real and virtual sound and visual targets
Parseihian, Gaëtan; Jouffrais, Christophe; Katz, Brian F. G.
2014-01-01
Sound localization studies over the past century have predominantly been concerned with directional accuracy for far-field sources. Few studies have examined near-field sources and distance perception. The current study concerns localization and pointing accuracy, examining source positions in the peripersonal space, specifically those associated with a typical tabletop surface. Accuracy is studied with respect to the reporting hand (dominant or secondary) for auditory sources. Results show no effect of the reporting hand, with azimuthal errors increasing equally for the most extreme source positions. Distance errors show a consistent compression toward the center of the reporting area. A second evaluation compares auditory and visual stimuli to examine any bias in reporting protocol or biomechanical difficulties. No common bias error was observed between auditory and visual stimuli, indicating that reporting errors were not due to biomechanical limitations in the pointing task. A final evaluation compares real auditory sources and anechoic-condition virtual sources created using binaural rendering. Results showed increased azimuthal errors, with virtual source positions being consistently overestimated to more lateral positions, while no significant distance perception was observed, indicating a deficiency in the binaural rendering condition relative to the real stimuli. Various potential reasons for this discrepancy are discussed, with several proposals for improving distance perception in peripersonal virtual environments. PMID:25228855
Handzel, Ophir; Wang, Haobing; Fiering, Jason; Borenstein, Jeffrey T.; Mescher, Mark J.; Leary Swan, Erin E.; Murphy, Brian A.; Chen, Zhiqiang; Peppi, Marcello; Sewell, William F.; Kujawa, Sharon G.; McKenna, Michael J.
2009-01-01
Temporal bone implants can be used to electrically stimulate the auditory nerve, to amplify sound, to deliver drugs to the inner ear, and potentially for other future applications. The implants require storage space and access to the middle or inner ear. The most acceptable space is the cavity created by a canal wall up mastoidectomy. Detailed knowledge of the available space for implantation and of pathways to access the middle and inner ears is necessary for the design of implants and successful implantation. Based on temporal bone CT scans, a method for three-dimensional reconstruction of a virtual canal wall up mastoidectomy space is described. Using Amira® software, the area to be removed during such surgery is marked on axial CT slices, and a three-dimensional model of that space is created. The average volume of 31 reconstructed models is 12.6 cm³, with a standard deviation of 3.69 cm³ and a range from 7.97 to 23.25 cm³. Critical distances were measured directly from the model and their averages were calculated: height 3.69 cm, depth 2.43 cm, length above the external auditory canal (EAC) 4.45 cm, and length posterior to the EAC 3.16 cm. These linear measurements did not correlate well with the volume measurements. The shape of the models varied to a significant extent, making the prediction of successful implantation for a given design based on linear and volumetric measurements unreliable. Hence, to assure successful implantation, preoperative assessment should include a virtual fitting of an implant into the intended storage space. The above-mentioned three-dimensional models were exported from Amira to a Solidworks application, where virtual fitting was performed. Our results are compared to other temporal bone implant virtual fitting studies. Virtual fitting has also been suggested for other human applications. PMID:19372649
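Computationally, the volumetric measurement described here reduces to counting segmented voxels and scaling by the voxel volume. A minimal sketch under assumed, illustrative voxel spacing (not the study's actual resolution):

```python
import numpy as np

mask = np.zeros((100, 100, 60), dtype=bool)   # per-slice segmentations stacked in 3D
mask[30:70, 30:70, 10:50] = True              # placeholder marked region

voxel_mm = (0.5, 0.5, 1.0)                    # in-plane spacing and slice thickness (mm)
voxel_cm3 = np.prod(voxel_mm) / 1000.0        # mm^3 per voxel -> cm^3
volume_cm3 = mask.sum() * voxel_cm3
print(f"model volume = {volume_cm3:.2f} cm^3")
```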
Call sign intelligibility improvement using a spatial auditory display
NASA Technical Reports Server (NTRS)
Begault, Durand R.
1993-01-01
A spatial auditory display was used to convolve speech stimuli, consisting of 130 different call signs used in the communications protocol of NASA's John F. Kennedy Space Center, to different virtual auditory positions. An adaptive staircase method was used to determine intelligibility levels of the signal against diotic speech babble, with spatial positions at 30 deg azimuth increments. Non-individualized, minimum-phase approximations of head-related transfer functions were used. The results showed a maximal intelligibility improvement of about 6 dB when the signal was spatialized to 60 deg or 90 deg azimuth positions.
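The adaptive staircase referred to here adjusts the signal level trial by trial and estimates threshold from reversal points. The sketch below shows a generic 1-up/1-down variant with a stubbed trial function; the specific rule, step size, and stopping criterion used in the study are not given in the abstract, so these are assumptions:

```python
import random

def present_trial(snr_db):
    # Stub psychometric function: probability correct rises with SNR.
    p = min(1.0, max(0.0, 0.5 + snr_db / 20.0))
    return random.random() < p

snr, step, reversals, last_dir = 0.0, 2.0, [], None
while len(reversals) < 8:
    correct = present_trial(snr)
    direction = -1 if correct else +1        # harder after correct, easier after miss
    if last_dir is not None and direction != last_dir:
        reversals.append(snr)
    snr += direction * step
    last_dir = direction

threshold = sum(reversals[-6:]) / 6          # mean of the final reversals
print(f"estimated threshold: {threshold:.1f} dB SNR")
```

A 1-up/1-down rule converges on the 50% point of the psychometric function; transformed rules (e.g., 2-down/1-up) target higher performance levels.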
The Perception of Auditory Motion
Leung, Johahn
2016-01-01
The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029
Meyer, Georg F.; Shao, Fei; White, Mark D.; Hopkins, Carl; Robotham, Antony J.
2013-01-01
Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPR). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points that can be haptic, visual or auditory reference signals; 2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or by the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses to laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high fidelity virtual environments should mimic those seen in real situations, we propose to use the observed effect as a robust objective test for presence and fidelity in VR. PMID:23840760
Psychophysical Evaluation of Three-Dimensional Auditory Displays
NASA Technical Reports Server (NTRS)
Wightman, Frederic L. (Principal Investigator)
1995-01-01
This report describes the progress made during the first year of a three-year Cooperative Research Agreement (CRA NCC2-542). The CRA proposed a program of applied psychophysical research designed to determine the requirements and limitations of three-dimensional (3-D) auditory display systems. These displays present synthesized stimuli to a pilot or virtual workstation operator that evoke auditory images at predetermined positions in space. The images can be either stationary or moving. In previous years, we completed a number of studies that provided data on listeners' abilities to localize stationary sound sources with 3-D displays. The current focus is on the use of 3-D displays in 'natural' listening conditions, which include listeners' head movements, moving sources, multiple sources and 'echoic' sources. The results of our research on two of these topics, the role of head movements and the role of echoes and reflections, were reported in the most recent Semi-Annual Progress Report (Appendix A). In the period since the last Progress Report we have been studying a third topic, the localizability of moving sources. The results of this research are described. The fidelity of a virtual auditory display is critically dependent on precise measurement of the listener's Head-Related Transfer Functions (HRTFs), which are used to produce the virtual auditory images. We continue to explore methods for improving our HRTF measurement technique. During this reporting period we compared HRTFs measured using our standard open-canal probe tube technique and HRTFs measured with the closed-canal insert microphones from the Crystal River Engineering Snapshot system.
Auditory spatial processing in Alzheimer’s disease
Golden, Hannah L.; Nicholas, Jennifer M.; Yong, Keir X. X.; Downey, Laura E.; Schott, Jonathan M.; Mummery, Catherine J.; Crutch, Sebastian J.
2015-01-01
The location and motion of sounds in space are important cues for encoding the auditory world. Spatial processing is a core component of auditory scene analysis, a cognitively demanding function that is vulnerable in Alzheimer’s disease. Here we designed a novel neuropsychological battery based on a virtual space paradigm to assess auditory spatial processing in patient cohorts with clinically typical Alzheimer’s disease (n = 20) and its major variant syndrome, posterior cortical atrophy (n = 12) in relation to healthy older controls (n = 26). We assessed three dimensions of auditory spatial function: externalized versus non-externalized sound discrimination, moving versus stationary sound discrimination and stationary auditory spatial position discrimination, together with non-spatial auditory and visual spatial control tasks. Neuroanatomical correlates of auditory spatial processing were assessed using voxel-based morphometry. Relative to healthy older controls, both patient groups exhibited impairments in detection of auditory motion, and stationary sound position discrimination. The posterior cortical atrophy group showed greater impairment for auditory motion processing and the processing of a non-spatial control complex auditory property (timbre) than the typical Alzheimer’s disease group. Voxel-based morphometry in the patient cohort revealed grey matter correlates of auditory motion detection and spatial position discrimination in right inferior parietal cortex and precuneus, respectively. These findings delineate auditory spatial processing deficits in typical and posterior Alzheimer’s disease phenotypes that are related to posterior cortical regions involved in both syndromic variants and modulated by the syndromic profile of brain degeneration. Auditory spatial deficits contribute to impaired spatial awareness in Alzheimer’s disease and may constitute a novel perceptual model for probing brain network disintegration across the Alzheimer’s disease syndromic spectrum. PMID:25468732
Psychophysical Evaluation of Three-Dimensional Auditory Displays
NASA Technical Reports Server (NTRS)
Wightman, Frederic L.
1996-01-01
This report describes the progress made during the second year of a three-year Cooperative Research Agreement. The CRA proposed a program of applied psychophysical research designed to determine the requirements and limitations of three-dimensional (3-D) auditory display systems. These displays present synthesized stimuli to a pilot or virtual workstation operator that evoke auditory images at predetermined positions in space. The images can be either stationary or moving. In previous years, we completed a number of studies that provided data on listeners' abilities to localize stationary sound sources with 3-D displays. The current focus is on the use of 3-D displays in 'natural' listening conditions, which include listeners' head movements, moving sources, multiple sources and 'echoic' sources. The results of our research on one of these topics, the localization of multiple sources, were reported in the most recent Semi-Annual Progress Report (Appendix A). That same progress report described work on two related topics, the influence of a listener's a-priori knowledge of source characteristics and the discriminability of real and virtual sources. In the period since the last Progress Report we have conducted several new studies to evaluate the effectiveness of a new and simpler method for measuring the HRTFs that are used to synthesize virtual sources, and have expanded our studies of multiple sources. The results of this research are described below.
Impact of Spatial and Verbal Short-Term Memory Load on Auditory Spatial Attention Gradients.
Golob, Edward J; Winston, Jenna; Mock, Jeffrey R
2017-01-01
Short-term memory load can impair attentional control, but prior work shows that the extent of the effect ranges from being very general to very specific. One factor for the mixed results may be reliance on point estimates of memory load effects on attention. Here we used auditory attention gradients as an analog measure to map out the impact of short-term memory load over space. Verbal or spatial information was maintained during an auditory spatial attention task and compared to a no-load condition. Stimuli were presented from five virtual locations in the frontal azimuth plane, and subjects focused on the midline. Reaction times progressively increased for lateral stimuli, indicating an attention gradient. Spatial load further slowed responses at lateral locations, particularly in the left hemispace, but had little effect at midline. Verbal memory load had no (Experiment 1), or a minimal (Experiment 2), influence on reaction times. Spatial and verbal load increased switch costs between memory encoding and attention tasks relative to the no-load condition. The findings show that short-term memory influences the distribution of auditory attention over space, and that the specific pattern depends on the type of information in short-term memory.
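An attention gradient of this kind is naturally summarized as the slope of reaction time against stimulus eccentricity, compared across load conditions. A minimal sketch with illustrative numbers (not the study's data):

```python
import numpy as np

eccentricity = np.array([-60, -30, 0, 30, 60])       # stimulus azimuth (deg)
rt_no_load = np.array([470.0, 440, 420, 445, 475])   # illustrative mean RTs (ms)
rt_spatial = np.array([540.0, 495, 425, 480, 510])

for label, rt in [("no load", rt_no_load), ("spatial load", rt_spatial)]:
    slope = np.polyfit(np.abs(eccentricity), rt, 1)[0]
    print(f"{label}: gradient = {slope:.2f} ms/deg")
```

A steeper slope under spatial load, as in these toy values, corresponds to the sharpened gradient the abstract reports.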
The plastic ear and perceptual relearning in auditory spatial perception
Carlile, Simon
2014-01-01
The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear molds to modify the spectral cues, resulting in significant degradation in localization performance. Following chronic exposure (10–60 days) performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This raises the question of what teacher signal drives this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state in auditory localization, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear molds or through virtual auditory space stimulation using non-individualized spectral cues. The work with ear molds demonstrates that a relatively short period of training involving audio-motor feedback (5–10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide spatial cues but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses. PMID:25147497
Nisha, Kavassery Venkateswaran; Kumar, Ajith Uppunda
2017-04-01
Localization involves processing of subtle yet highly enriched monaural and binaural spatial cues. Remediation programs aimed at resolving spatial deficits are surprisingly scarce in the literature. The present study was designed to explore the changes that occur in the spatial performance of normal-hearing listeners before and after subjecting them to a virtual acoustic space (VAS) training paradigm, using behavioral and electrophysiological measures. Ten normal-hearing listeners participated in the study, which was conducted in three phases: a pre-training phase, a training phase, and a post-training phase. At the pre- and post-training phases, both behavioral measures of spatial acuity and the electrophysiological P300 were administered. The spatial acuity of the participants in the free field and closed field was measured, in addition to quantifying their binaural processing abilities. The training phase consisted of 5-8 sessions (20 min each) carried out using a hierarchy of graded VAS stimuli. Descriptive statistics indicated an improvement in all the spatial acuity measures in the post-training phase. Statistically significant changes were noted in interaural time difference (ITD) and virtual acoustic space identification scores measured in the post-training phase. Effect sizes (r) for all of these measures were substantially large, indicating the clinical relevance of these measures in documenting the impact of training. However, the same was not reflected in the P300. The training protocol used in the present study proves, on a preliminary basis, to be effective in normal-hearing listeners, and its implications can be extended to other clinical populations as well.
Lahav, Orly; Schloerb, David W.; Srinivasan, Mandayam A.
2014-01-01
This paper presents the integration of a virtual environment (BlindAid) in an orientation and mobility rehabilitation program as a training aid for people who are blind. BlindAid allows the users to interact with different virtual structures and objects through auditory and haptic feedback. This research explores if and how use of the BlindAid in conjunction with a rehabilitation program can help people who are blind train themselves in familiar and unfamiliar spaces. The study focused on nine participants who were congenitally, adventitiously, and newly blind, during their orientation and mobility rehabilitation program at the Carroll Center for the Blind (Newton, Massachusetts, USA). The research was implemented using virtual environment (VE) exploration tasks and orientation tasks in virtual environments and real spaces. The methodology encompassed both qualitative and quantitative methods, including interviews, a questionnaire, videotape recording, and user computer logs. The results demonstrated, first, that the BlindAid training gave participants additional time to explore the virtual environment systematically and, second, that it helped elucidate several issues concerning the potential strengths of the BlindAid system as a training aid for orientation and mobility for both adults and teenagers who are congenitally, adventitiously, and newly blind. PMID:25284952
Peripersonal space as the space of the bodily self.
Noel, Jean-Paul; Pfeiffer, Christian; Blanke, Olaf; Serino, Andrea
2015-11-01
Bodily self-consciousness (BSC) refers to the experience of one's self as located within an owned body (self-identification) and as occupying a specific location in space (self-location). BSC can be altered through multisensory stimulation, as in the Full Body Illusion (FBI). If participants view a virtual body from a distance being stroked, while receiving synchronous tactile stroking on their physical body, they feel as if the virtual body were their own and subjectively experience a drift toward it. Here we hypothesized that, while normally the experience of the body in space depends on the integration of multisensory body-related signals within a limited space surrounding the body (i.e., peripersonal space, PPS), during the FBI the boundaries of PPS would shift toward the virtual body, that is, toward the position of experienced self-location. To test this hypothesis, we used synchronous visuo-tactile stroking to induce the FBI, as contrasted with a control condition of asynchronous stroking. Concurrently, we applied an audio-tactile interaction paradigm to estimate the boundaries of PPS. PPS was measured in front of and behind the participants' body as the distance at which tactile information interacted with auditory stimuli looming in space toward the participant's physical body. We found that during synchronous stroking, i.e. when participants experienced the FBI, PPS boundaries extended in the front-space, toward the avatar, and concurrently shrunk in the back-space, as compared to the asynchronous stroking control condition, in which no FBI was induced. These findings support the view that during the FBI, PPS boundaries translate toward the virtual body, such that the PPS representation shifts from being centered at the location of the physical body to being centered at the subjectively experienced location of the self.
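Audio-tactile PPS estimates of this sort are often derived by fitting a sigmoid to tactile reaction times as a function of the looming sound's distance and taking the inflection point as the boundary. The abstract does not give the authors' exact analysis, so the following is a generic sketch with made-up values:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(d, rt_near, rt_far, boundary, slope):
    return rt_near + (rt_far - rt_near) / (1 + np.exp(-(d - boundary) / slope))

distance = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2])   # looming sound distance (m)
rt = np.array([355, 360, 375, 410, 430, 435])          # tactile RT (ms), faster near body

p0 = (rt.min(), rt.max(), 0.7, 0.1)                    # initial guesses for the fit
(rt_near, rt_far, boundary, slope), _ = curve_fit(sigmoid, distance, rt, p0=p0)
print(f"estimated PPS boundary ~ {boundary:.2f} m")
```

A shift of the fitted boundary between synchronous and asynchronous stroking conditions would quantify the PPS translation the abstract describes.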
Wallmeier, Ludwig; Kish, Daniel; Wiegrebe, Lutz; Flanagin, Virginia L
2015-03-01
Some blind humans have developed the remarkable ability to detect and localize objects through the auditory analysis of self-generated tongue clicks. These echolocation experts show a corresponding increase in 'visual' cortex activity when listening to echo-acoustic sounds. Echolocation in real-life settings involves multiple reflections as well as active sound production, neither of which has been systematically addressed. We developed a virtualization technique that allows participants to actively perform such biosonar tasks in virtual echo-acoustic space during magnetic resonance imaging (MRI). Tongue clicks, emitted in the MRI scanner, are picked up by a microphone, convolved in real time with the binaural impulse responses of a virtual space, and presented via headphones as virtual echoes. In this manner, we investigated the brain activity during active echo-acoustic localization tasks. Our data show that, in blind echolocation experts, activations in the calcarine cortex are dramatically enhanced when a single reflector is introduced into otherwise anechoic virtual space. A pattern-classification analysis revealed that, in the blind, calcarine cortex activation patterns could discriminate left-side from right-side reflectors. This was found in both blind experts, but the effect was significant for only one of them. In sighted controls, 'visual' cortex activations were insignificant, but activation patterns in the planum temporale were sufficient to discriminate left-side from right-side reflectors. Our data suggest that blind and echolocation-trained, sighted subjects may recruit different neural substrates for the same active-echolocation task.
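The real-time virtualization described here hinges on low-latency convolution of the picked-up clicks with binaural impulse responses. A block-wise overlap-add convolver, sketched below with placeholder impulse responses, illustrates the idea; the actual system's implementation details are not specified in the abstract:

```python
import numpy as np
from scipy.signal import fftconvolve

fs, block = 48000, 512
rng = np.random.default_rng(0)
brir = rng.standard_normal((4096, 2)) * np.exp(-np.arange(4096) / 4000)[:, None]

tail = np.zeros((brir.shape[0] - 1, 2))     # convolution tail carried across blocks

def process_block(x):
    """Convolve one microphone block with the BRIR via overlap-add."""
    global tail
    y = np.stack([fftconvolve(x, brir[:, ch]) for ch in range(2)], axis=1)
    y[: tail.shape[0]] += tail              # add the tail from previous blocks
    out, tail = y[:block], y[block:]        # emit one block, keep the remainder
    return out

headphone_out = process_block(rng.standard_normal(block))  # one click block in, echoes out
```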
The many facets of auditory display
NASA Technical Reports Server (NTRS)
Blattner, Meera M.
1995-01-01
In this presentation we will examine some of the ways sound can be used in a virtual world. We make the case that many different types of audio experience are available to us. The full range of audio experiences includes music, speech, real-world sounds, auditory displays, and auditory cues or messages. The technology of recreating real-world sounds through physical modeling has advanced in the past few years, allowing better simulation of virtual worlds. Three-dimensional audio has further enriched our sensory experiences.
Cogné, Mélanie; Knebel, Jean-François; Klinger, Evelyne; Bindschaedler, Claire; Rapin, Pierre-André; Joseph, Pierre-Alain; Clarke, Stephanie
2018-01-01
Topographical disorientation is a frequent deficit among patients suffering from brain injury. Spatial navigation can be explored in this population using virtual reality environments, even in the presence of motor or sensory disorders. Furthermore, the positive or negative impact of specific stimuli can be investigated. We studied how auditory stimuli influence the performance of brain-injured patients in a navigational task, using the Virtual Action Planning-Supermarket (VAP-S) with the addition of contextual ("sonar effect" and "name of product") and non-contextual ("periodic randomised noises") auditory stimuli. The study included 22 patients with a first unilateral hemispheric brain lesion and 17 healthy age-matched control subjects. After a software familiarisation, all subjects were tested without auditory stimuli, with a sonar effect or periodic random sounds in a random order, and with the stimulus "name of product". Contextual auditory stimuli improved patient performance more than control group performance. Contextual stimuli benefited most patients with severe executive dysfunction or with severe unilateral neglect. These results indicate that contextual auditory stimuli are useful in the assessment of navigational abilities in brain-damaged patients and that they should be used in rehabilitation paradigms.
NASA Astrophysics Data System (ADS)
McMullen, Kyla A.
Although the concept of virtual spatial audio has existed for almost twenty-five years, only in the past fifteen years has modern computing technology enabled the real-time processing needed to deliver high-precision spatial audio. Furthermore, until recently the concept of virtually walking through an auditory environment did not exist. Such an interface has numerous potential applications, ranging from enhancing sounds delivered in virtual gaming worlds to conveying spatial locations in real-time emergency response systems. To incorporate this technology in real-world systems, various concerns should be addressed. First, to widely incorporate spatial audio into real-world systems, head-related transfer functions (HRTFs) must be inexpensively created for each user. The present study further investigated an HRTF subjective selection procedure previously developed within our research group. Users discriminated auditory cues to subjectively select their preferred HRTF from a publicly available database. Next, the issue of training to find virtual sources was addressed. Listeners participated in a localization training experiment using their selected HRTFs. The training procedure was created from the characterization of successful search strategies in prior auditory search experiments. Search accuracy significantly improved after listeners performed the training procedure. Next, in the investigation of auditory spatial memory, listeners completed three search and recall tasks with differing recall methods. Recall accuracy significantly decreased in tasks that required the storage of sound source configurations in memory. To assess the impact of practical scenarios, the present work assessed the performance effects of signal uncertainty, visual augmentation, and different attenuation modeling. Fortunately, source uncertainty did not affect listeners' ability to recall or identify sound sources. The present study also found that the presence of visual reference frames significantly increased recall accuracy. Additionally, the incorporation of drastic attenuation significantly improved environment recall accuracy. Through investigating these concerns, the present study took initial steps toward guiding the design of virtual auditory environments that support spatial configuration recall.
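One of the practical factors examined here, attenuation modeling, is often implemented as an inverse-distance gain law. The sketch below shows a generic rolloff, not necessarily the dissertation's model:

```python
import numpy as np

def distance_gain(d, d_ref=1.0, rolloff=1.0):
    """Inverse-distance amplitude gain, clamped at the reference distance."""
    return (d_ref / np.maximum(d, d_ref)) ** rolloff

for d in (1.0, 2.0, 4.0, 8.0):
    print(f"{d:4.1f} m: {20 * np.log10(distance_gain(d)):6.1f} dB")
```

A "drastic" attenuation model would correspond to a rolloff exponent greater than 1, attenuating distant sources faster than the physical 6 dB per doubling of distance.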
Psychophysical evaluation of three-dimensional auditory displays
NASA Technical Reports Server (NTRS)
Wightman, Frederic L.
1991-01-01
Work during this reporting period included the completion of our research on the use of principal components analysis (PCA) to model the acoustical head-related transfer functions (HRTFs) that are used to synthesize virtual sources for three-dimensional auditory displays. In addition, a series of studies was initiated on the perceptual errors made by listeners when localizing free-field and virtual sources. Previous research has revealed that under certain conditions these perceptual errors, often called 'confusions' or 'reversals', are both large and frequent, thus seriously compromising the utility of a 3-D virtual auditory display. The long-range goal of our work in this area is to elucidate the sources of the confusions and to develop signal-processing strategies to reduce or eliminate them.
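PCA modeling of HRTFs typically operates on (log-)magnitude spectra stacked across directions, keeping a few basis spectra plus per-direction weights. The following sketch uses an SVD on random placeholder data purely to illustrate the decomposition; it is not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_dirs, n_freqs = 265, 128
hrtf_db = rng.standard_normal((n_dirs, n_freqs))  # placeholder log-magnitude HRTFs

mean_spec = hrtf_db.mean(axis=0)
u, s, vt = np.linalg.svd(hrtf_db - mean_spec, full_matrices=False)

k = 5                                      # principal components retained
weights = u[:, :k] * s[:k]                 # one weight vector per direction
approx = mean_spec + weights @ vt[:k]      # low-order reconstruction
explained = (s[:k] ** 2).sum() / (s ** 2).sum()
print(f"{k} components explain {100 * explained:.1f}% of the variance")
```

On measured HRTFs (unlike the random placeholders here), a handful of components typically captures most of the spectral variance across directions.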
Lahav, Orly; Gedalevitz, Hadas; Battersby, Steven; Brown, David; Evett, Lindsay; Merritt, Patrick
2018-05-01
This paper examines the ability of people who are blind to construct a mental map and perform orientation tasks in real space by using Nintendo Wii technologies to explore virtual environments. The participant explores new spaces through haptic and auditory feedback triggered by pointing or walking in the virtual environments and later constructs a mental map, which can be used to navigate in real space. The study included 10 participants who were congenitally or adventitiously blind, divided into experimental and control groups. The research was implemented by using virtual environment exploration and orientation tasks in real spaces, using both qualitative and quantitative methods in its methodology. The results show that the mode of exploration afforded to the experimental group is radically new in orientation and mobility training; as a result, 60% of the experimental participants constructed mental maps that were based on the map model, compared with only 30% of the control group participants. Using technology that enabled them to explore and to collect spatial information in a way that does not exist in real space influenced the ability of the experimental group to construct a mental map based on the map model. Implications for rehabilitation: The virtual cane system for the first time enables people who are blind to explore and collect spatial information via the look-around mode in addition to the walk-around mode. People who are blind prefer to use the look-around mode to explore new spaces, as opposed to the walk-around mode. Although the look-around mode requires users to establish a complex collecting and processing procedure for the spatial data, people who are blind using this mode are able to construct a mental map as a map model. For people who are blind (as for the sighted), construction of a mental map based on the map model offers more flexibility in choosing a walking path in a real space, accounting for changes that occur in the space.
Challenges and solutions for realistic room simulation
NASA Astrophysics Data System (ADS)
Begault, Durand R.
2002-05-01
Virtual room acoustic simulation (auralization) techniques have traditionally focused on answering questions related to speech intelligibility or musical quality, typically in large volumetric spaces. More recently, auralization techniques have been found to be important for the externalization of headphone-reproduced virtual acoustic images. Although externalization can be accomplished using a minimal simulation, data indicate that realistic auralizations need to be responsive to head motion cues for accurate localization. Computational demands increase when providing for the simulation of coupled spaces, small rooms lacking meaningful reverberant decays, or reflective surfaces in outdoor environments. Auditory threshold data for both early reflections and late reverberant energy levels indicate that much of the information captured in acoustical measurements is inaudible, minimizing the intensive computational requirements of real-time auralization systems. Results are presented for early reflection thresholds as a function of azimuth angle, arrival time, and sound-source type, and reverberation thresholds as a function of reverberation time and level within 250 Hz to 2 kHz octave bands. Good agreement is found between data obtained in virtual room simulations and those obtained in real rooms, allowing a strategy for minimizing computational requirements of real-time auralization systems.
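Auralization systems of the kind discussed here often generate early reflections with an image-source model, whose arrival times can then be compared against audibility thresholds. A first-order sketch for a rectangular room, with illustrative geometry:

```python
import numpy as np

room = np.array([6.0, 4.0, 3.0])          # room dimensions (m)
src = np.array([2.0, 1.5, 1.2])           # source position
lis = np.array([4.0, 2.5, 1.2])           # listener position
c = 343.0                                  # speed of sound (m/s)

direct = np.linalg.norm(src - lis)
for axis in range(3):
    for wall in (0.0, room[axis]):        # mirror the source across each wall
        image = src.copy()
        image[axis] = 2 * wall - src[axis]
        extra = np.linalg.norm(image - lis) - direct
        print(f"axis {axis}, wall at {wall:.1f} m: reflection +{1000 * extra / c:.1f} ms")
```

Reflections whose delay and level fall below the measured thresholds can be dropped from the real-time rendering, which is the computational strategy the abstract argues for.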
NASA Technical Reports Server (NTRS)
Begault, Durand R.; Bittner, Rachel M.; Anderson, Mark R.
2012-01-01
Auditory communication displays within the NextGen data link system may use multiple synthetic speech messages replacing traditional ATC and company communications. The design of an interface for selecting amongst multiple incoming messages can impact both performance (time to select, audit and release a message) and preference. Two design factors were evaluated: physical pressure-sensitive switches versus flat panel "virtual switches", and the presence or absence of auditory feedback from switch contact. Performance with stimuli using physical switches was 1.2 s faster than virtual switches (2.0 s vs. 3.2 s); auditory feedback provided a 0.54 s performance advantage (2.33 s vs. 2.87 s). There was no interaction between these variables. Preference data were highly correlated with performance.
Nambu, Isao; Ebisawa, Masashi; Kogure, Masumi; Yano, Shohei; Hokari, Haruhide; Wada, Yasuhiro
2013-01-01
The auditory Brain-Computer Interface (BCI) using electroencephalograms (EEG) is a subject of intensive study. As a cue, auditory BCIs can deal with many of the characteristics of stimuli such as tone, pitch, and voices. Spatial information on auditory stimuli also provides useful information for a BCI. However, in a portable system, virtual auditory stimuli have to be presented spatially through earphones or headphones, instead of loudspeakers. We investigated the possibility of an auditory BCI using the out-of-head sound localization technique, which enables us to present virtual auditory stimuli to users from any direction, through earphones. The feasibility of a BCI using this technique was evaluated in an EEG oddball experiment and offline analysis. A virtual auditory stimulus was presented to the subject from one of six directions. Using a support vector machine, we were able to classify whether the subject attended the direction of a presented stimulus from EEG signals. The mean accuracy across subjects was 70.0% in the single-trial classification. When we used trial-averaged EEG signals as inputs to the classifier, the mean accuracy across seven subjects reached 89.5% (for 10-trial averaging). Further analysis showed that the P300 event-related potential responses from 200 to 500 ms in central and posterior regions of the brain contributed to the classification. In comparison with the results obtained from a loudspeaker experiment, we confirmed that stimulus presentation by out-of-head sound localization achieved similar event-related potential responses and classification performances. These results suggest that out-of-head sound localization enables us to provide a high-performance and loudspeaker-less portable BCI system. PMID:23437338
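The offline analysis described (epoching, optional trial averaging, and linear SVM classification) can be outlined as follows. The EEG here is simulated with a toy attended-trial deflection; real inputs would be the recorded oddball epochs:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_epochs, n_channels, n_samples = 200, 8, 128
X = rng.standard_normal((n_epochs, n_channels, n_samples))   # simulated EEG epochs
y = rng.integers(0, 2, n_epochs)                 # 1 = stimulus from attended direction
X[y == 1, :, 40:100] += 0.3                      # toy P300-like deflection

features = X.reshape(n_epochs, -1)               # channels x time as one feature vector
scores = cross_val_score(SVC(kernel="linear"), features, y, cv=5)
print(f"mean single-trial accuracy: {scores.mean():.2f}")
```

Averaging several epochs per class before classification, as the authors report, raises the effective signal-to-noise ratio and hence accuracy.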
NASA Technical Reports Server (NTRS)
Wenzel, Elizabeth M.; Fisher, Scott S.; Stone, Philip K.; Foster, Scott H.
1991-01-01
The real time acoustic display capabilities are described which were developed for the Virtual Environment Workstation (VIEW) Project at NASA-Ames. The acoustic display is capable of generating localized acoustic cues in real time over headphones. An auditory symbology, a related collection of representational auditory 'objects' or 'icons', can be designed using ACE (Auditory Cue Editor), which links both discrete and continuously varying acoustic parameters with information or events in the display. During a given display scenario, the symbology can be dynamically coordinated in real time with 3-D visual objects, speech, and gestural displays. The types of displays feasible with the system range from simple warnings and alarms to the acoustic representation of multidimensional data or events.
Education about Hallucinations Using an Internet Virtual Reality System: A Qualitative Survey
ERIC Educational Resources Information Center
Yellowlees, Peter M.; Cook, James N.
2006-01-01
Objective: The authors evaluate an Internet virtual reality technology as an education tool about the hallucinations of psychosis. Method: This is a pilot project using Second Life, an Internet-based virtual reality system, in which a virtual reality environment was constructed to simulate the auditory and visual hallucinations of two patients…
Demonstrating the Potential for Dynamic Auditory Stimulation to Contribute to Motion Sickness
Keshavarz, Behrang; Hettinger, Lawrence J.; Kennedy, Robert S.; Campos, Jennifer L.
2014-01-01
Auditory cues can create the illusion of self-motion (vection) in the absence of visual or physical stimulation. The present study aimed to determine whether auditory cues alone can also elicit motion sickness and how auditory cues contribute to motion sickness when added to visual motion stimuli. Twenty participants were seated in front of a curved projection display and were exposed to a virtual scene that constantly rotated around the participant's vertical axis. The virtual scene contained either visual-only, auditory-only, or a combination of corresponding visual and auditory cues. All participants performed all three conditions in a counterbalanced order. Participants tilted their heads alternately towards the right or left shoulder in all conditions during stimulus exposure in order to create pseudo-Coriolis effects and to maximize the likelihood for motion sickness. Measurements of motion sickness (onset, severity), vection (latency, strength, duration), and postural steadiness (center of pressure) were recorded. Results showed that adding auditory cues to the visual stimuli did not, on average, affect motion sickness and postural steadiness, but it did reduce vection onset times and increased vection strength compared to pure visual or pure auditory stimulation. Eighteen of the 20 participants reported at least slight motion sickness in the two conditions including visual stimuli. More interestingly, six participants also reported slight motion sickness during pure auditory stimulation and two of the six participants stopped the pure auditory test session due to motion sickness. The present study is the first to demonstrate that motion sickness may be caused by pure auditory stimulation, which we refer to as “auditorily induced motion sickness”. PMID:24983752
Perceptual effects in auralization of virtual rooms
NASA Astrophysics Data System (ADS)
Kleiner, Mendel; Larsson, Pontus; Vastfjall, Daniel; Torres, Rendell R.
2002-05-01
By using various types of binaural simulation (or "auralization") of physical environments, it is now possible to study basic perceptual issues relevant to room acoustics, as well as to simulate the acoustic conditions found in concert halls and other auditoria. Binaural simulation of physical spaces in general is also important to virtual reality systems. This presentation will begin with an overview of the issues encountered in the auralization of rooms and other environments. We will then discuss the influence of various approximations in room modeling, in particular edge and surface scattering, on the perceived room response. Finally, we will discuss cross-modal effects, such as the influence of visual cues on the perception of auditory cues, and the influence of cross-modal effects on the judgement of "perceived presence" and the rating of room acoustic quality.
Carlile, Simon; Ciccarelli, Gregory; Cockburn, Jane; Diedesch, Anna C.; Finnegan, Megan K.; Hafter, Ervin; Henin, Simon; Kalluri, Sridhar; Kell, Alexander J. E.; Ozmeral, Erol J.; Roark, Casey L.
2017-01-01
Here we report the methods and output of a workshop examining possible futures of speech and hearing science out to 2030. Using a design thinking approach, a range of human-centered problems in communication were identified that could provide the motivation for a wide range of research. Nine main research programs were distilled and are summarized: (a) measuring brain and other physiological parameters, (b) auditory and multimodal displays of information, (c) auditory scene analysis, (d) enabling and understanding shared auditory virtual spaces, (e) holistic approaches to health management and hearing impairment, (f) universal access to evolving and individualized technologies, (g) biological intervention for hearing dysfunction, (h) understanding the psychosocial interactions with technology and other humans as mediated by technology, and (i) the impact of changing models of security and privacy. The design thinking approach attempted to link the judged level of importance of different research areas to the “end in mind” through empathy for the real-life problems embodied in the personas created during the workshop. PMID:29090640
A model of head-related transfer functions based on a state-space analysis
NASA Astrophysics Data System (ADS)
Adams, Norman Herkamp
This dissertation develops and validates a novel state-space method for binaural auditory display. Binaural displays seek to immerse a listener in a 3D virtual auditory scene with a pair of headphones. The challenge for any binaural display is to compute the two signals to supply to the headphones. The present work considers a general framework capable of synthesizing a wide variety of auditory scenes. The framework models collections of head-related transfer functions (HRTFs) simultaneously. This framework improves the flexibility of contemporary displays, but it also compounds the steep computational cost of the display. The cost is reduced dramatically by formulating the collection of HRTFs in the state-space and employing order-reduction techniques to design efficient approximants. Order-reduction techniques based on the Hankel-operator are found to yield accurate low-cost approximants. However, the inter-aural time difference (ITD) of the HRTFs degrades the time-domain response of the approximants. Fortunately, this problem can be circumvented by employing a state-space architecture that allows the ITD to be modeled outside of the state-space. Accordingly, three state-space architectures are considered. Overall, a multiple-input, single-output (MISO) architecture yields the best compromise between performance and flexibility. The state-space approximants are evaluated both empirically and psychoacoustically. An array of truncated FIR filters is used as a pragmatic reference system for comparison. For a fixed cost bound, the state-space systems yield lower approximation error than FIR arrays for D>10, where D is the number of directions in the HRTF collection. A series of headphone listening tests are also performed to validate the state-space approach, and to estimate the minimum order N of indiscriminable approximants. For D = 50, the state-space systems yield order thresholds less than half those of the FIR arrays. Depending upon the stimulus uncertainty, a minimum state-space order of 7≤N≤23 appears to be adequate. In conclusion, the proposed state-space method enables a more flexible and immersive binaural display with low computational cost.
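As an aside for implementers, the order reduction described above can be illustrated with a minimal Python sketch under stated assumptions: random stand-in data in place of measured HRIRs, and classic square-root balanced truncation rather than the dissertation's Hankel-operator methods (a related but not identical technique). The sketch builds the direct state-space realization of a MISO FIR bank and truncates it to a low order.

    import numpy as np
    from scipy.linalg import solve_discrete_lyapunov, svd

    def miso_fir_realization(h):
        """Unreduced state-space realization of a MISO FIR bank: input d
        is filtered by h[d] (one HRIR per direction), outputs summed."""
        D, L = h.shape
        n = D * (L - 1)
        A = np.zeros((n, n)); B = np.zeros((n, D)); C = np.zeros((1, n))
        for d in range(D):
            s = d * (L - 1)
            A[s + 1:s + L - 1, s:s + L - 2] = np.eye(L - 2)  # delay line
            B[s, d] = 1.0
            C[0, s:s + L - 1] = h[d, 1:]
        return A, B, C, h[:, :1].T  # feedthrough = first tap per direction

    def balanced_truncation(A, B, C, D_, r):
        """Reduce to order r by square-root balanced truncation."""
        Wc = solve_discrete_lyapunov(A, B @ B.T)    # controllability Gramian
        Wo = solve_discrete_lyapunov(A.T, C.T @ C)  # observability Gramian
        Uc, sc, _ = svd(Wc); Lc = Uc * np.sqrt(sc)  # PSD square roots
        Uo, so, _ = svd(Wo); Lo = Uo * np.sqrt(so)
        U, hsv, Vt = svd(Lo.T @ Lc)                 # Hankel singular values
        S = np.sqrt(hsv[:r])
        T = Lc @ Vt[:r].T / S                       # balancing projections
        Ti = (U[:, :r] / S).T @ Lo.T
        return Ti @ A @ T, Ti @ B, C @ T, D_

    hrirs = 0.1 * np.random.randn(10, 32)  # stand-in for measured HRIRs
    Ar, Br, Cr, Dr = balanced_truncation(*miso_fir_realization(hrirs), r=16)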
NASA Technical Reports Server (NTRS)
Lehnert, H.; Blauert, Jens; Pompetzki, W.
1991-01-01
In every-day listening the auditory event perceived by a listener is determined not only by the sound signal that a sound source emits but also by a variety of environmental parameters. These parameters are the position, orientation and directional characteristics of the sound source, the listener's position and orientation, the geometrical and acoustical properties of surfaces which affect the sound field, and the sound propagation properties of the surrounding fluid. A complete set of these parameters can be called an Acoustic Environment. If the auditory event perceived by a listener is manipulated in such a way that the listener is shifted acoustically into a different acoustic environment without moving physically, a Virtual Acoustic Environment has been created. Here, we deal with a special technique to set up nearly arbitrary Virtual Acoustic Environments, the Binaural Room Simulation. The purpose of the Binaural Room Simulation is to compute the binaural impulse response related to a virtual acoustic environment taking into account all parameters mentioned above. One possible way to describe a Virtual Acoustic Environment is the concept of virtual sound sources. Each of the virtual sources emits a certain signal which is correlated, but not necessarily identical, with the signal emitted by the direct sound source. If source and receiver are not moving, the acoustic environment becomes a linear time-invariant system. Then, the Binaural Impulse Response from the source to a listener's eardrums contains all relevant auditory information related to the Virtual Acoustic Environment. Listening into the simulated environment can easily be achieved by convolving the Binaural Impulse Response with dry signals and presenting the results via headphones.
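The final rendering step described above is a plain convolution; a minimal Python sketch, assuming a dry mono recording and a two-channel binaural room impulse response (synthetic stand-in data here), could look like this:

    import numpy as np
    from scipy.signal import fftconvolve

    def auralize(dry, brir):
        """Convolve a dry mono signal with a binaural room impulse
        response; dry: (N,), brir: (M, 2) left/right responses."""
        out = np.stack([fftconvolve(dry, brir[:, 0]),
                        fftconvolve(dry, brir[:, 1])], axis=1)
        return out / np.max(np.abs(out))  # normalize for headphone playback

    fs = 48000
    dry = np.random.randn(fs)  # 1 s stand-in for an anechoic recording
    decay = np.exp(-np.arange(int(0.3 * fs)) / (0.05 * fs))
    brir = np.random.randn(int(0.3 * fs), 2) * decay[:, None]  # toy response
    stereo = auralize(dry, brir)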
NASA Astrophysics Data System (ADS)
Rosyidah, T. H.; Firman, H.; Rusyati, L.
2017-02-01
This research compared virtual and paper-based tests for measuring students' critical thinking based on the VAK (Visual-Auditory-Kinesthetic) learning style model. A quasi-experimental method with a one-group post-test-only design was applied. The sample comprised 40 eighth-grade students at a public junior high school in Bandung. Quantitative data were obtained through 26 questions about living things and environmental sustainability, constructed around the eight elements of critical thinking and provided in both virtual and paper-based form. Analysis of the results showed no significant difference between the virtual and paper-based tests for visual, auditory, or kinesthetic learners. In addition, a questionnaire on students' responses to the virtual test yielded a mean of 3.47 on a 4-point scale, indicating a positive response on all measured aspects: interest, impression, and expectation.
A virtual display system for conveying three-dimensional acoustic information
NASA Technical Reports Server (NTRS)
Wenzel, Elizabeth M.; Wightman, Frederic L.; Foster, Scott H.
1988-01-01
The development of a three-dimensional auditory display system is discussed. Theories of human sound localization and techniques for synthesizing various features of auditory spatial perceptions are examined. Psychophysical data validating the system are presented. The human factors applications of the system are considered.
Robotic and Virtual Reality BCIs Using Spatial Tactile and Auditory Oddball Paradigms
Rutkowski, Tomasz M.
2016-01-01
The paper reviews nine robotic and virtual reality (VR) brain–computer interface (BCI) projects developed by the author, in collaboration with his graduate students, within the BCI–lab research group during its association with the University of Tsukuba, Japan. The nine novel approaches are discussed as applied to direct brain-robot and brain-virtual-reality-agent control interfaces using tactile and auditory BCI technologies. The BCI user's intentions are decoded from the brainwaves in real time using non-invasive electroencephalography (EEG) and translated into thought-based control commands for a symbiotic robot or virtual reality agent. A communication protocol between the BCI output and the robot or the virtual environment is realized in a symbiotic communication scenario using the user datagram protocol (UDP), which constitutes an internet of things (IoT) control scenario. Results obtained from healthy users reproducing simple brain-robot and brain-virtual-agent control tasks in online experiments support the feasibility of interacting with robotic devices and virtual reality agents using symbiotic thought-based BCI technologies. An offline BCI classification accuracy boosting method, using a previously proposed information-geometry-derived approach, is also discussed in order to further support the reviewed robotic and virtual reality thought-based control paradigms. PMID:27999538
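The UDP link mentioned above is simple to realize; a minimal Python sketch follows, in which the address, port, and JSON payload are illustrative assumptions, since the abstract does not specify the message format:

    import json
    import socket

    ROBOT_ADDR = ("192.168.0.42", 9000)  # hypothetical robot/VR-agent endpoint
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def send_intention(label, confidence):
        """Forward one decoded BCI intention as a fire-and-forget UDP
        datagram, matching an IoT-style control loop."""
        msg = json.dumps({"cmd": label, "conf": confidence}).encode()
        sock.sendto(msg, ROBOT_ADDR)

    send_intention("turn_left", 0.87)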
Psychophysics of human echolocation.
Schörnich, Sven; Wallmeier, Ludwig; Gessele, Nikodemus; Nagy, Andreas; Schranner, Michael; Kish, Daniel; Wiegrebe, Lutz
2013-01-01
The ability of some blind humans to orient in their environment through the auditory analysis of reflections from self-generated sounds has received little scientific attention to date. Here we present data from a series of formal psychophysical experiments with sighted subjects trained to evaluate features of a virtual echo-acoustic space, allowing precise, fine-grained control of the stimulus parameters. The data show how subjects shape both their vocalisations and their auditory analysis of the echoes to serve specific echo-acoustic tasks. First, we show that humans can echo-acoustically discriminate target distances with a resolution of less than 1 m for reference distances above 3.4 m. For a reference distance of 1.7 m, corresponding to an echo delay of only 10 ms, distance JNDs were typically around 0.5 m. Second, we explore the interplay between the precedence effect and echolocation. We show that the strong perceptual asymmetry between lead and lag is weakened during echolocation. Finally, we show that through the auditory analysis of self-generated sounds, subjects discriminate room-size changes as small as 10%. In summary, the current data confirm the practical efficacy of human echolocation, and they provide a rigorous psychophysical basis for addressing its neural foundations.
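The echo delay quoted above follows directly from the round-trip path of the reflection; a one-line check in Python (speed of sound taken as 343 m/s):

    C = 343.0  # speed of sound in air, m/s

    def echo_delay_ms(distance_m):
        """Round-trip delay of an echo from a reflector at distance_m."""
        return 2.0 * distance_m / C * 1000.0

    print(echo_delay_ms(1.7))  # ~9.9 ms, the ~10 ms cited for 1.7 m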
Effect of Blast Injury on Auditory Localization in Military Service Members.
Kubli, Lina R; Brungart, Douglas; Northern, Jerry
Among the many advantages of binaural hearing are the abilities to localize sounds in space and to attend to one sound in the presence of many sounds. Binaural hearing provides benefits for all listeners, but it may be especially critical for military personnel who must maintain situational awareness in complex tactical environments with multiple speech and noise sources. There is concern that Military Service Members who have been exposed to one or more high-intensity blasts during their tour of duty may have difficulty with binaural and spatial ability due to degradation in auditory and cognitive processes. The primary objective of this study was to assess the ability of blast-exposed Military Service Members to localize speech sounds in quiet and in multisource environments with one or two competing talkers. Participants were presented with one, two, or three topic-related (e.g., sports, food, travel) sentences under headphones and required to attend to, and then locate the source of, the sentence pertaining to a prespecified target topic within a virtual space. The listener's head position was monitored by a head-mounted tracking device that continuously updated the apparent spatial location of the target and competing speech sounds as the subject turned within the virtual space. Measurements of auditory localization ability included mean absolute error in locating the source of the target sentence, the time it took to locate the target sentence within 30 degrees, target/competitor confusion errors, response time, and cumulative head motion. Twenty-one blast-exposed Active-Duty or Veteran Military Service Members (blast-exposed group) and 33 non-blast-exposed Service Members and beneficiaries (control group) were evaluated. In general, the blast-exposed group performed as well as the control group if the task involved localizing the source of a single speech target. However, if the task involved two or three simultaneous talkers, localization ability was compromised for some participants in the blast-exposed group. Blast-exposed participants were less accurate in their localization responses and required more exploratory head movements to find the location of the target talker. Results suggest that blast-exposed participants have more difficulty than non-blast-exposed participants in localizing sounds in complex acoustic environments. This apparent deficit in spatial hearing ability highlights the need to develop new diagnostic tests using complex listening tasks that involve multiple sound sources that require speech segregation and comprehension.
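For azimuth measures such as the mean absolute error above, the wraparound at 360 degrees matters; a small Python sketch of one reasonable definition (the study's exact computation is not given here):

    import numpy as np

    def mean_abs_angular_error(responses_deg, targets_deg):
        """Mean absolute azimuth error, wrapping differences into
        [-180, 180) so that 350 vs. 5 degrees counts as 15, not 345."""
        d = (np.asarray(responses_deg, dtype=float)
             - np.asarray(targets_deg, dtype=float) + 180.0) % 360.0 - 180.0
        return np.mean(np.abs(d))

    print(mean_abs_angular_error([350, 10], [5, 355]))  # 15.0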
ERIC Educational Resources Information Center
Lahav, Orly; Schloerb, David W.; Srinivasan, Mandayam A.
2015-01-01
Introduction: The BlindAid, a virtual system developed for orientation and mobility (O&M) training of people who are blind or have low vision, allows interaction with different virtual components (structures and objects) via auditory and haptic feedback. This research examined if and how the BlindAid that was integrated within an O&M…
Angle-Dependent Distortions in the Perceptual Topology of Acoustic Space
2018-01-01
By moving sounds around the head and asking listeners to report which ones moved more, it was found that sound sources at the side of a listener must move at least twice as much as ones in front to be judged as moving the same amount. A relative expansion of space in the front and compression at the side has consequences for spatial perception of moving sounds by both static and moving listeners. An accompanying prediction that the apparent location of static sound sources ought to also be distorted agrees with previous work and suggests that this is a general perceptual phenomenon that is not limited to moving signals. A mathematical model that mimics the measured expansion of space can be used to successfully capture several previous findings in spatial auditory perception. The inverse of this function could be used alongside individualized head-related transfer functions and motion tracking to produce hyperstable virtual acoustic environments. PMID:29764312
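The abstract does not give the model's functional form, so purely as an illustration, the sketch below uses a hypothetical warp that expands frontal space and compresses lateral space, together with the numerical inverse that a hyperstable renderer would need:

    import numpy as np
    from scipy.optimize import brentq

    def warp(az_deg, k=0.25):
        """Hypothetical perceptual warp: expansion near 0 deg (front),
        compression near +/-90 deg (sides); not the paper's fitted model."""
        a = np.deg2rad(az_deg)
        return np.rad2deg(a + k * np.sin(2.0 * a))

    def unwarp(perceived_deg, k=0.25):
        """Numerically invert the warp, e.g., to pre-distort rendered
        source positions so they are perceived at the intended azimuth."""
        return brentq(lambda x: warp(x, k) - perceived_deg, -90.0, 90.0)

    print(warp(45.0), unwarp(warp(45.0)))  # round-trips to 45.0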
Meyer, Georg F.; Wong, Li Ting; Timson, Emma; Perfect, Philip; White, Mark D.
2012-01-01
We argue that objective fidelity evaluation of virtual environments, such as flight simulation, should be human-performance-centred and task-specific rather than measure the match between simulation and physical reality. We show how principled experimental paradigms and behavioural models to quantify human performance in simulated environments, which have emerged from research in multisensory perception, provide a framework for the objective evaluation of the contribution of individual cues to human performance measures of fidelity. We present three examples in a flight simulation environment as a case study: Experiment 1: detection and categorisation of auditory and kinematic motion cues; Experiment 2: performance evaluation in a target-tracking task; Experiment 3: transferrable learning of auditory motion cues. We show how the contribution of individual cues to human performance can be robustly evaluated for each task and that the contribution is highly task dependent. The same auditory cues that are discriminable and optimally integrated in Experiment 1 do not contribute to target-tracking performance in an in-flight refuelling simulation without training (Experiment 2). In Experiment 3, however, we demonstrate that the auditory cue leads to significant, transferrable performance improvements with training. We conclude that objective fidelity evaluation requires a task-specific analysis of the contribution of individual cues. PMID:22957068
Cogné, Mélanie; Violleau, Marie-Hélène; Klinger, Evelyne; Joseph, Pierre-Alain
2018-01-31
Topographical disorientation is frequent among patients after a stroke and can be well explored with virtual environments (VEs). VEs also allow for the addition of stimuli. A previous study did not find any effect of non-contextual auditory stimuli on navigational performance in the virtual action planning-supermarket (VAP-S), which simulates a medium-sized 3D supermarket. However, the perceptual or cognitive load of the sounds used was not high. We investigated how non-contextual auditory stimuli with high load affect navigational performance in the VAP-S for patients who have had a stroke, and whether this performance correlates with dysexecutive disorders. Four kinds of stimuli were considered: sounds from living beings, sounds from supermarket objects, beeping sounds, and names of other products that were not available in the VAP-S. The condition without auditory stimuli was the control. The Groupe de réflexion pour l'évaluation des fonctions exécutives (GREFEX) battery was used to evaluate patients' executive functions. The study included 40 patients who have had a stroke (n=22 right-hemisphere and n=18 left-hemisphere stroke). Patients' navigational performance was decreased under the 4 conditions with non-contextual auditory stimuli (P<0.05), especially for those with dysexecutive disorders. Across the 5 conditions, the lower the performance, the more GREFEX tests were failed. Patients felt significantly disadvantaged by the non-contextual sounds from living beings, sounds from supermarket objects and names of other products as compared with beeping sounds (P<0.01). Patients' verbal recall of the collected objects was significantly lower under the condition with names of other products (P<0.001). Left and right brain-damaged patients did not differ in navigational performance in the VAP-S under the 5 auditory conditions. These non-contextual auditory stimuli could be used in neurorehabilitation paradigms to train patients with dysexecutive disorders to inhibit disruptive stimuli. Copyright © 2018 Elsevier Masson SAS. All rights reserved.
Zahorik, Pavel; Carney, Laurel H.; Bishop, Brian B.; Kuwada, Shigeyuki
2015-01-01
Mechanisms underlying sound source distance localization are not well understood. Here we tested the hypothesis that a novel mechanism can create monaural distance sensitivity: a combination of auditory midbrain neurons' sensitivity to amplitude modulation (AM) depth and distance-dependent loss of AM in reverberation. We used virtual auditory space (VAS) methods for sounds at various distances in anechoic and reverberant environments. Stimulus level was constant across distance. With increasing modulation depth, some rabbit inferior colliculus neurons increased firing rates whereas others decreased. These neurons exhibited monotonic relationships between firing rates and distance for monaurally presented noise when two conditions were met: (1) the sound had AM, and (2) the environment was reverberant. The firing rates as a function of distance remained approximately constant without AM in either environment and, in the anechoic condition, even with AM. We corroborated this finding by reproducing the distance sensitivity using a neural model. We also conducted a human psychophysical study using similar methods. Normal-hearing listeners reported perceived distance in response to monaural 1-octave noise sounds centered at 4 kHz, presented at distances of 35–200 cm. We found parallels between the rabbit neural and human responses. In both, sound distance could be discriminated only if the monaural sound in reverberation had AM. These observations support the hypothesis. When other cues are available (e.g., in binaural hearing), how much the auditory system actually uses the AM as a distance cue remains to be determined. PMID:25834060
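The distance cue at work above, loss of AM depth with increasing reverberant energy, is easy to reproduce qualitatively; a rough Python sketch with a synthetic exponentially decaying impulse response standing in for the study's VAS stimuli (the direct-to-reverberant ratio falls as source distance grows):

    import numpy as np
    from scipy.signal import fftconvolve, hilbert

    fs = 40000
    t = np.arange(0, 1.0, 1.0 / fs)
    dry = (1.0 + np.sin(2.0 * np.pi * 32.0 * t)) * np.random.randn(t.size)

    def synthetic_rir(t60, fs, dur=0.5):
        """Exponentially decaying noise as a crude room impulse response."""
        n = int(dur * fs)
        return np.random.randn(n) * np.exp(-6.91 * np.arange(n) / (t60 * fs))

    def modulation_depth(x, fs, fmod=32):
        """Rough AM-depth estimate from the Hilbert envelope's spectrum."""
        env = np.abs(hilbert(x))
        spec = np.abs(np.fft.rfft(env))
        return 2.0 * spec[int(round(fmod * len(env) / fs))] / spec[0]

    rir = synthetic_rir(1.0, fs)
    for drr_db in (10.0, 0.0, -10.0):  # falling D/R ratio ~ greater distance
        rev = fftconvolve(dry, rir)[: dry.size]
        rev *= 10.0 ** (-drr_db / 20.0) * np.std(dry) / np.std(rev)
        print(drr_db, round(modulation_depth(dry + rev, fs), 2))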
Interdependent encoding of pitch, timbre and spatial location in auditory cortex
Bizley, Jennifer K.; Walker, Kerry M. M.; Silverman, Bernard W.; King, Andrew J.; Schnupp, Jan W. H.
2009-01-01
Because we can perceive the pitch, timbre and spatial location of a sound source independently, it seems natural to suppose that cortical processing of sounds might separate out spatial from non-spatial attributes. Indeed, recent studies support the existence of anatomically segregated ‘what’ and ‘where’ cortical processing streams. However, few attempts have been made to measure the responses of individual neurons in different cortical fields to sounds that vary simultaneously across spatial and non-spatial dimensions. We recorded responses to artificial vowels presented in virtual acoustic space to investigate the representations of pitch, timbre and sound source azimuth in both core and belt areas of ferret auditory cortex. A variance decomposition technique was used to quantify the way in which altering each parameter changed neural responses. Most units were sensitive to two or more of these stimulus attributes. While these results indicate that the neural encoding of pitch, location and timbre cues is distributed across auditory cortex, significant differences in average neuronal sensitivity were observed across cortical areas and depths, which could form the basis for the segregation of spatial and non-spatial cues at higher cortical levels. Some units exhibited significant non-linear interactions between particular combinations of pitch, timbre and azimuth. These interactions were most pronounced for pitch and timbre and were less commonly observed between spatial and non-spatial attributes. Such non-linearities were most prevalent in primary auditory cortex, although they tended to be small compared with stimulus main effects. PMID:19228960
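A noise-free toy version of the variance decomposition used above can be written in a few lines of Python; for mean responses on a fully crossed stimulus grid, the variance of each marginal mean gives a main effect, and the remainder is interaction (the data below are stand-ins, not the study's recordings):

    import numpy as np

    def variance_decomposition(r):
        """Partition the variance of mean responses r[pitch, timbre,
        azimuth] into main effects plus an interaction remainder."""
        total = np.var(r)
        shares = {}
        for ax, name in enumerate(("pitch", "timbre", "azimuth")):
            other = tuple(i for i in range(r.ndim) if i != ax)
            shares[name] = np.var(r.mean(axis=other))
        shares["interactions"] = total - sum(shares.values())
        return {k: v / total for k, v in shares.items()}

    # Toy unit mainly sensitive to pitch, weakly to azimuth:
    p, tb, az = np.meshgrid(np.arange(4), np.arange(4), np.arange(8),
                            indexing="ij")
    print(variance_decomposition(2.0 * p + 0.5 * az))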
Kuwada, Shigeyuki; Bishop, Brian; Kim, Duck O.
2012-01-01
The major functions of the auditory system are recognition (what is the sound) and localization (where is the sound). Although each of these has received considerable attention, rarely are they studied in combination. Furthermore, the stimuli used in the bulk of studies did not represent sound location in real environments and ignored the effects of reverberation. Another ignored dimension is the distance of a sound source. Finally, there is a scarcity of studies conducted in unanesthetized animals. We illustrate a set of efficient methods that overcome these shortcomings. We use the virtual auditory space (VAS) method to efficiently present sounds at different azimuths, different distances and in different environments. Additionally, this method allows for efficient switching between binaural and monaural stimulation and alteration of acoustic cues singly or in combination to elucidate neural mechanisms underlying localization and recognition. Such procedures cannot be performed with real sound-field stimulation. Our research is designed to address the following questions: Are IC neurons specialized to process what and where auditory information? How do reverberation and distance of the sound source affect this processing? How do IC neurons represent sound source distance? Are neural mechanisms underlying envelope processing binaural or monaural? PMID:22754505
Hartmeyer, Steffen; Grzeschik, Ramona; Wolbers, Thomas; Wiener, Jan M.
2017-01-01
Route learning is a common navigation task affected by cognitive aging. Here we present a novel experimental paradigm to investigate whether age-related declines in executive control of attention contribute to route learning deficits. A young and an older participant group were repeatedly presented with a route through a virtual maze comprising 12 decision points (DPs) and non-decision points (non-DPs). To investigate attentional engagement with the route learning task, participants had to respond to auditory probes at both DPs and non-DPs. Route knowledge was assessed by showing participants screenshots or landmarks from DPs and non-DPs and asking them to indicate the movement direction required to continue the route. Results demonstrate better performance for DPs than for non-DPs and slower responses to auditory probes at DPs compared to non-DPs. As expected, we found slower route learning and slower responses to the auditory probes in the older participant group. Interestingly, differences in response times to the auditory probes between DPs and non-DPs can predict the success of route learning in both age groups and may explain slower knowledge acquisition in the older participant group. PMID:28775689
Listeners' expectation of room acoustical parameters based on visual cues
NASA Astrophysics Data System (ADS)
Valente, Daniel L.
Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audio-visual study, in which participants are instructed to make spatial congruency and quantity judgments in dynamic cross-modal environments. The results of these psychophysical tests suggest the importance of consilient audio-visual presentation to the legibility of an auditory scene. Several studies have looked into audio-visual interaction in room perception in recent years, but these studies rely on static images, speech signals, or photographs alone to represent the visual scene. Building on these studies, the aim is to propose a testing method that uses monochromatic compositing (blue-screen technique) to position a studio recording of a musical performance in a number of virtual acoustical environments and to ask subjects to assess these environments. In the first experiment of the study, video footage was taken from five rooms varying in physical size from a small studio to a small performance hall. Participants were asked to perceptually align two distinct acoustical parameters---early-to-late reverberant energy ratio and reverberation time---of two solo musical performances in five contrasting visual environments according to their expectations of how the room should sound given its visual appearance. In the second experiment in the study, video footage shot from four different listening positions within a general-purpose space was coupled with sounds derived from measured binaural impulse responses (IRs). The relationship between the presented image, sound, and virtual receiver position was examined. Many visual cues were found to change the perceived auditory events of the acoustic environment. These included the visual attributes of the space in which the performance was located as well as the visual attributes of the performer. The visual makeup of the performer comprised: (1) an actual video of the performance, (2) a surrogate image of the performance, for example a loudspeaker's image reproducing the performance, (3) no visual image of the performance (empty room), or (4) a multi-source visual stimulus (actual video of the performance coupled with two images of loudspeakers positioned to the left and right of the performer). For this experiment, perceived auditory events were measured in terms of two subjective spatial metrics: Listener Envelopment (LEV) and Apparent Source Width (ASW). These metrics were hypothesized to be dependent on the visual imagery of the presented performance. Data were also collected as participants matched direct and reverberant sound levels for the presented audio-visual scenes. In the final experiment, participants judged spatial expectations of an ensemble of musicians presented in the five physical spaces from Experiment 1. Supporting data were accumulated in two stages. First, participants were given an audio-visual matching test, in which they were instructed to align the auditory width of a performing ensemble to a varying set of audio and visual cues. In the second stage, a conjoint analysis design paradigm was explored to extrapolate the relative magnitude of the explored audio-visual factors in affecting three assessed response criteria: Congruency (the perceived match-up of the auditory and visual cues in the assessed performance), ASW, and LEV.
Results show that both auditory and visual factors affect the collected responses, and that the two sensory modalities interact in distinct ways. This study reveals participant resiliency in the presence of forced auditory-visual mismatch: participants are able to adjust the acoustic component of the cross-modal environment in a statistically similar way despite randomized starting values for the monitored parameters. Subjective results of the experiments are presented along with objective measurements for verification.
ERIC Educational Resources Information Center
Reinertsen, Gloria M.
A study compared performances on a test of selective auditory attention between students educated in open-space versus closed classroom environments. An open-space classroom environment was defined as having no walls separating it from hallways or other classrooms. It was hypothesized that the incidence of auditory figure-ground (ability to focus…
2009-09-01
[Fragmentary report text: glossary entries (USN: United States Navy; VAE: Virtual Air Environment; VACP: Visual, Auditory, Cognitive, Psychomotor demand; VR: Virtual Reality; SAF: semi-automated forces, from http://www.sedris.org/glossary.htm#C_grp), and a note that virtual reality-based training research offers a useful approach to capturing leg, trunk, whole-body, or movement tasks.]
Fernández-Caballero, Antonio; Navarro, Elena; Fernández-Sotos, Patricia; González, Pascual; Ricarte, Jorge J.; Latorre, José M.; Rodriguez-Jimenez, Roberto
2017-01-01
This perspective paper faces the future of alternative treatments that take advantage of a social and cognitive approach with regard to pharmacological therapy of auditory verbal hallucinations (AVH) in patients with schizophrenia. AVH are the perception of voices in the absence of auditory stimulation and represent a severe mental health symptom. Virtual/augmented reality (VR/AR) and brain-computer interfaces (BCI) are technologies increasingly used in different medical and psychological applications. Our position is that their combined use in computer-based therapies offers still unforeseen possibilities for the treatment of physical and mental disabilities. The paper therefore anticipates that researchers and clinicians will move along a pathway toward human-avatar symbiosis for AVH by taking full advantage of new technologies. This outlook requires addressing challenging issues in the understanding of non-pharmacological treatment of schizophrenia-related disorders and the exploitation of VR/AR and BCI to achieve a real human-avatar symbiosis. PMID:29209193
Beyond the real world: attention debates in auditory mismatch negativity.
Chung, Kyungmi; Park, Jin Young
2018-04-11
The aim of this study was to address the potential for the auditory mismatch negativity (aMMN) to be used in applied event-related potential (ERP) studies by determining whether the aMMN is an attention-dependent ERP component and whether it is differently modulated across visual tasks or virtual reality (VR) stimuli with different visual properties and visual complexity levels. A total of 80 participants, aged 19-36 years, were assigned to either a reading-task (21 men and 19 women) or a VR-task (22 men and 18 women) group. The two visual-task groups of healthy young adults were matched in age, sex, and handedness. All participants were instructed to focus only on the given visual tasks and to ignore auditory change detection. While participants in the reading-task group read text slides, those in the VR-task group viewed three 360° VR videos in a random order and rated how visually complex the given virtual environment was immediately after each VR video ended. Although perceived visual complexity differed partially with the brightness of the virtual environments, neither of the visual properties of distance and brightness significantly modulated aMMN amplitudes. A further analysis compared the aMMN amplitudes elicited by a typical MMN task and an applied VR task. No significant difference in aMMN amplitudes was found across the two groups, which completed visual tasks with different visual-task demands. In conclusion, the aMMN is a reliable ERP marker of preattentive cognitive processing for auditory deviance detection.
Auditory spatial processing in the human cortex.
Salminen, Nelli H; Tiitinen, Hannu; May, Patrick J C
2012-12-01
The auditory system codes spatial locations in a way that deviates from the spatial representations found in other modalities. This difference is especially striking in the cortex, where neurons form topographical maps of visual and tactile space but where auditory space is represented through a population rate code. In this hemifield code, sound source location is represented in the activity of two widely tuned opponent populations, one tuned to the right and the other to the left side of auditory space. Scientists are only beginning to uncover how this coding strategy adapts to various spatial processing demands. This review presents the current understanding of auditory spatial processing in the cortex. To this end, the authors consider how various implementations of the hemifield code may exist within the auditory cortex and how these may be modulated by the stimulation and task context. As a result, a coherent set of neural strategies for auditory spatial processing emerges.
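A toy Python version of the hemifield code makes the opponent-channel idea concrete; two sigmoidally tuned populations suffice to represent and read out azimuth (the parameter values are illustrative, not fitted to neural data):

    import numpy as np

    def hemifield_rates(az_deg, slope=20.0):
        """Two opponent populations with broad sigmoidal azimuth tuning,
        one preferring the right hemifield and one the left."""
        r_right = 1.0 / (1.0 + np.exp(-az_deg / slope))
        return r_right, 1.0 - r_right

    def decode_azimuth(r_right, r_left, slope=20.0):
        """Read the source azimuth out of the two population rates."""
        return slope * np.log(r_right / r_left)

    r, l = hemifield_rates(35.0)
    print(decode_azimuth(r, l))  # recovers 35.0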
Multisensory Integration in the Virtual Hand Illusion with Active Movement
Satoh, Satoru; Hachimura, Kozaburo
2016-01-01
Improving the sense of immersion is one of the core issues in virtual reality. Perceptual illusions of ownership can be perceived over a virtual body in a multisensory virtual reality environment. Rubber Hand and Virtual Hand Illusions showed that body ownership can be manipulated by applying suitable visual and tactile stimulation. In this study, we investigate the effects of multisensory integration in the Virtual Hand Illusion with active movement. A virtual xylophone playing system which can interactively provide synchronous visual, tactile, and auditory stimulation was constructed. We conducted two experiments regarding different movement conditions and different sensory stimulations. Our results demonstrate that multisensory integration with free active movement can improve the sense of immersion in virtual reality. PMID:27847822
Revolutionizing Education: The Promise of Virtual Reality
ERIC Educational Resources Information Center
Gadelha, Rene
2018-01-01
Virtual reality (VR) has the potential to revolutionize education, as it immerses students in their learning more than any other available medium. By blocking out visual and auditory distractions in the classroom, it has the potential to help students deeply connect with the material they are learning in a way that has never been possible before.…
Attentional Demand of a Virtual Reality-Based Reaching Task in Nondisabled Older Adults
Chen, Yi-An; Chung, Yu-Chen; Proffitt, Rachel; Wade, Eric; Winstein, Carolee
2015-01-01
Attention during exercise is known to affect performance; however, the attentional demand inherent to virtual reality (VR)-based exercise is not well understood. We used a dual-task paradigm to compare the attentional demands of VR-based and non-VR-based (conventional, real-world) exercise: 22 non-disabled older adults performed a primary reaching task to virtual and real targets in a counterbalanced block order while verbally responding to an unanticipated auditory tone in one third of the trials. The attentional demand of the primary reaching task was inferred from the voice response time (VRT) to the auditory tone. Participants' engagement level and task experience were also obtained using questionnaires. The virtual target condition was more attention demanding (significantly longer VRT) than the real target condition. Secondary analyses revealed a significant interaction between engagement level and target condition on attentional demand. For participants who were highly engaged, attentional demand was high and independent of target condition. However, for those who were less engaged, attentional demand was low and depended on target condition (i.e., virtual > real). These findings add important knowledge to the growing body of research pertaining to the development and application of technology-enhanced exercise for elders and for rehabilitation purposes. PMID:27004233
A selective impairment of perception of sound motion direction in peripheral space: A case study.
Thaler, Lore; Paciocco, Joseph; Daley, Mark; Lesniak, Gabriella D; Purcell, David W; Fraser, J Alexander; Dutton, Gordon N; Rossit, Stephanie; Goodale, Melvyn A; Culham, Jody C
2016-01-08
It is still an open question whether the auditory system, similar to the visual system, processes auditory motion independently from other aspects of spatial hearing, such as static location. Here, we report psychophysical data from a patient (female, 42 and 44 years old at the time of two testing sessions), who suffered a bilateral occipital infarction over 12 years earlier, and who has extensive damage in the occipital lobe bilaterally, extending into inferior posterior temporal cortex bilaterally and into right parietal cortex. We measured the patient's spatial hearing ability to discriminate static location, detect motion and perceive motion direction in both central (straight ahead), and right and left peripheral auditory space (50° to the left and right of straight ahead). Compared to control subjects, the patient was impaired in her perception of the direction of auditory motion in peripheral auditory space, and the deficit was more pronounced on the right side. However, there was no impairment in her perception of the direction of auditory motion in central space. Furthermore, detection of motion and discrimination of static location were normal in both central and peripheral space. The patient also performed normally in a wide battery of non-spatial audiological tests. Our data are consistent with previous neuropsychological and neuroimaging results that link posterior temporal cortex and parietal cortex with the processing of auditory motion. Most importantly, however, our data break new ground by suggesting a division of auditory motion processing in terms of speed and direction and in terms of central and peripheral space. Copyright © 2015 Elsevier Ltd. All rights reserved.
Hearing Scenes: A Neuromagnetic Signature of Auditory Source and Reverberant Space Separation
Oliva, Aude
2017-01-01
Perceiving the geometry of surrounding space is a multisensory process, crucial to contextualizing object perception and guiding navigation behavior. Humans can make judgments about surrounding spaces from reverberation cues, caused by sounds reflecting off multiple interior surfaces. However, it remains unclear how the brain represents reverberant spaces separately from sound sources. Here, we report separable neural signatures of auditory space and source perception during magnetoencephalography (MEG) recording as subjects listened to brief sounds convolved with monaural room impulse responses (RIRs). The decoding signature of sound sources began at 57 ms after stimulus onset and peaked at 130 ms, while space decoding started at 138 ms and peaked at 386 ms. Importantly, these neuromagnetic responses were readily dissociable in form and time: while sound source decoding exhibited an early and transient response, the neural signature of space was sustained and independent of the original source that produced it. The reverberant space response was robust to variations in sound source, and vice versa, indicating a generalized response not tied to specific source-space combinations. These results provide the first neuromagnetic evidence for robust, dissociable auditory source and reverberant space representations in the human brain and reveal the temporal dynamics of how auditory scene analysis extracts percepts from complex naturalistic auditory signals. PMID:28451630
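Time-resolved decoding of the kind reported above is commonly implemented by cross-validating one classifier per time point; a minimal Python sketch with stand-in data follows (this illustrates the general technique, not the study's MEG pipeline):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def decode_over_time(X, y, cv=5):
        """X: (trials, sensors, times) array; y: labels (e.g., source or
        space identity). Onset = first time accuracy exceeds chance."""
        acc = np.empty(X.shape[2])
        for t in range(X.shape[2]):
            clf = LogisticRegression(max_iter=1000)
            acc[t] = cross_val_score(clf, X[:, :, t], y, cv=cv).mean()
        return acc

    rng = np.random.default_rng(0)
    X = rng.standard_normal((80, 30, 100))  # 80 trials, 30 sensors, 100 times
    y = np.repeat([0, 1], 40)
    X[y == 1, :, 40:] += 0.5  # injected effect emerging at time point 40
    print(decode_over_time(X, y).round(2))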
NASA Technical Reports Server (NTRS)
Bargar, Robin
1995-01-01
The commercial music industry offers a broad range of plug 'n' play hardware and software scaled to music professionals and scaled to a broad consumer market. The principles of sound synthesis utilized in these products are relevant to application in virtual environments (VE). However, the closed architectures used in commercial music synthesizers are prohibitive to low-level control during real-time rendering, and the algorithms and sounds themselves are not standardized from product to product. To bring sound into VE requires a new generation of open architectures designed for human-controlled performance from interfaces embedded in immersive environments. This presentation addresses the state of the sonic arts in scientific computing and VE, analyzes research challenges facing sound computation, and offers suggestions regarding tools we might expect to become available during the next few years. A list of classes of audio functionality in VE includes sonification -- the use of sound to represent data from numerical models; 3D auditory display (spatialization and localization, also called externalization); navigation cues for positional orientation and for finding items or regions inside large spaces; voice recognition for controlling the computer; external communications between users in different spaces; and feedback to the user concerning his own actions or the state of the application interface. To effectively convey this considerable variety of signals, we apply principles of acoustic design to ensure the messages are neither confusing nor competing. We approach the design of auditory experience through a comprehensive structure for messages, and message interplay we refer to as an Automated Sound Environment. Our research addresses real-time sound synthesis, real-time signal processing and localization, interactive control of high-dimensional systems, and synchronization of sound and graphics.
NASA Technical Reports Server (NTRS)
Begault, Durand R.
2018-01-01
This document reviews non-auditory effects of noise relevant to habitable volume requirements in cislunar space. The non-auditory effects of noise in future long-term space habitats are likely to affect team and individual performance, sleep, and cognitive well-being. The report provides several recommendations for future standards and procedures for long-term spaceflight habitats, along with recommendations for NASA's Human Research Program in support of DST mission success.
Evaluation of Domain-Specific Collaboration Interfaces for Team Command and Control Tasks
2012-05-01
[Fragmentary report text: references to a virtual whiteboard and cognitive theories of the utilization, storage, and retrieval of verbal and spatial information; a fragment of a table of perceptual-cognitive subscale abbreviations (e.g., auditory linguistic AL, spatial positional SP, facial figural FF); and a note that results were driven by the auditory linguistic (AL), short-term memory (STM), spatial attentive (SA), visual temporal (VT), and vocal process (V) subscales.]
Functional subdivisions in low-frequency primary auditory cortex (AI).
Wallace, M N; Palmer, A R
2009-04-01
We wished to test the hypothesis that there are modules in low-frequency AI that can be identified by their responsiveness to communication calls or particular regions of space. Units were recorded in anaesthetised guinea pig AI and stimulated with conspecific vocalizations and a virtual motion stimulus (binaural beats) presented via a closed sound system. Recording tracks were mainly oriented orthogonally to the cortical surface. Some of these contained units that were all time-locked to the structure of the chutter call (14/22 tracks) and/or the purr call (12/22 tracks) and/or that had a preference for stimuli from a particular region of space (8/20 tracks with four contralateral, two ipsilateral and two midline), or where there was a strong asymmetry in the response to beats of different direction (two tracks). We conclude that about half of low-frequency AI is organized into modules that are consistent with separate "what" and "where" pathways.
Plaze, Marion; Paillère-Martinot, Marie-Laure; Penttilä, Jani; Januel, Dominique; de Beaurepaire, Renaud; Bellivier, Franck; Andoh, Jamila; Galinowski, André; Gallarda, Thierry; Artiges, Eric; Olié, Jean-Pierre; Mangin, Jean-François; Martinot, Jean-Luc; Cachia, Arnaud
2011-01-01
Auditory verbal hallucinations are a cardinal symptom of schizophrenia. Bleuler and Kraepelin distinguished 2 main classes of hallucinations: hallucinations heard outside the head (outer space, or external, hallucinations) and hallucinations heard inside the head (inner space, or internal, hallucinations). This distinction has been confirmed by recent phenomenological studies that identified 3 independent dimensions in auditory hallucinations: language complexity, self-other misattribution, and spatial location. Brain imaging studies in schizophrenia patients with auditory hallucinations have already investigated language complexity and self-other misattribution, but the neural substrate of hallucination spatial location remains unknown. Magnetic resonance images of 45 right-handed patients with schizophrenia and persistent auditory hallucinations and 20 healthy right-handed subjects were acquired. Two homogeneous subgroups of patients were defined based on the hallucination spatial location: patients with only outer space hallucinations (N=12) and patients with only inner space hallucinations (N=15). Between-group differences were then assessed using 2 complementary brain morphometry approaches: voxel-based morphometry and sulcus-based morphometry. Convergent anatomical differences were detected between the patient subgroups in the right temporoparietal junction (rTPJ). In comparison to healthy subjects, opposite deviations in white matter volumes and sulcus displacements were found in patients with inner space hallucination and patients with outer space hallucination. The current results indicate that spatial location of auditory hallucinations is associated with the rTPJ anatomy, a key region of the "where" auditory pathway. The detected tilt in the sulcal junction suggests deviations during early brain maturation, when the superior temporal sulcus and its anterior terminal branch appear and merge.
Auditory peripersonal space in humans.
Farnè, Alessandro; Làdavas, Elisabetta
2002-10-01
In the present study we report neuropsychological evidence of the existence of an auditory peripersonal space representation around the head in humans and its characteristics. In a group of right brain-damaged patients with tactile extinction, we found that a sound delivered near the ipsilesional side of the head (20 cm) strongly extinguished a tactile stimulus delivered to the contralesional side of the head (cross-modal auditory-tactile extinction). By contrast, when an auditory stimulus was presented far from the head (70 cm), cross-modal extinction was dramatically reduced. This spatially specific cross-modal extinction was most consistently found (i.e., both in the front and back spaces) when a complex sound was presented, like a white noise burst. Pure tones produced spatially specific cross-modal extinction when presented in the back space, but not in the front space. In addition, the most severe cross-modal extinction emerged when sounds came from behind the head, thus showing that the back space is more sensitive than the front space to the sensory interaction of auditory-tactile inputs. Finally, when cross-modal effects were investigated by reversing the spatial arrangement of cross-modal stimuli (i.e., touch on the right and sound on the left), we found that an ipsilesional tactile stimulus, although inducing a small amount of cross-modal tactile-auditory extinction, did not produce any spatial-specific effect. Therefore, the selective aspects of cross-modal interaction found near the head cannot be explained by a competition between a damaged left spatial representation and an intact right spatial representation. Thus, consistent with neurophysiological evidence from monkeys, our findings strongly support the existence, in humans, of an integrated cross-modal system coding auditory and tactile stimuli near the body, that is, in the peripersonal space.
Aging and Sensory Substitution in a Virtual Navigation Task.
Levy-Tzedek, S; Maidenbaum, S; Amedi, A; Lackner, J
2016-01-01
Virtual environments are becoming ubiquitous, and are used in a variety of contexts, from entertainment to training and rehabilitation. Recently, technology for making them more accessible to blind or visually impaired users has been developed by using sound to represent visual information. The ability of older individuals to interpret these cues has not yet been studied. In this experiment, we studied the effects of age and sensory modality (visual or auditory) on navigation through a virtual maze. We added a layer of complexity by conducting the experiment in a rotating room, in order to test the effect of the spatial bias induced by the rotation on performance. Results from 29 participants showed that with the auditory cues, participants took a longer time to complete the mazes, took a longer path through the maze, paused more, and had more collisions with the walls, compared to navigation with the visual cues. The older group took a longer time to complete the mazes, paused more, and had more collisions with the walls, compared to the younger group. There was no effect of room rotation on performance, nor were there any significant interactions among age, feedback modality and room rotation. We conclude that there is a decline in performance with age, and that while navigation with auditory cues is possible even at an old age, it presents more challenges than visual navigation.
High sensitivity to multisensory conflicts in agoraphobia exhibited by virtual reality.
Viaud-Delmon, Isabelle; Warusfel, Olivier; Seguelas, Angeline; Rio, Emmanuel; Jouvent, Roland
2006-10-01
The primary aim of this study was to evaluate the effect of auditory feedback in a VR system planned for clinical use and to address the different factors that should be taken into account in building a bimodal virtual environment (VE). We conducted an experiment in which we assessed spatial performance in agoraphobic patients and normal subjects, comparing two kinds of VEs, visual alone (Vis) and auditory-visual (AVis), during separate sessions. Subjects were equipped with a head-mounted display coupled with an electromagnetic sensor system and immersed in a virtual town. Their task was to locate different landmarks and become familiar with the town. In the AVis condition subjects were equipped with the head-mounted display and headphones, which delivered a soundscape updated in real time according to their movement in the virtual town. While general performance remained comparable across the conditions, the reported feeling of immersion was more compelling in the AVis environment. However, patients exhibited more cybersickness symptoms in this condition. The results of this study point to the multisensory integration deficit of agoraphobic patients and underline the need for further research on multimodal VR systems for clinical use.
1990-03-01
[Fragmentary proceedings text: the meeting was organized into invited-paper sessions, panel discussions, and poster sessions; applications in medicine, involving exploration and operation within the human body, were noted as receiving increased attention, along with issues important for the design of auditory interfaces and the importance of appropriate auditory inputs to observers with normal hearing.]
Gallagher, Rosemary; Damodaran, Harish; Werner, William G; Powell, Wendy; Deutsch, Judith E
2016-08-19
Evidence-based virtual environments (VEs) that incorporate compensatory strategies such as cueing may change motor behavior and increase exercise intensity while also being engaging and motivating. The purpose of this study was to determine if persons with Parkinson's disease and age-matched healthy adults responded to auditory and visual cueing embedded in a bicycling VE as a method to increase exercise intensity. We tested two groups of participants, persons with Parkinson's disease (PD) (n = 15) and age-matched healthy adults (n = 13), as they cycled on a stationary bicycle while interacting with a VE. Participants cycled under two conditions: auditory cueing (provided by a metronome) and visual cueing (represented as central road markers in the VE). The auditory condition had four trials in which auditory cues or the VE were presented alone or in combination. The visual condition had five trials in which the VE and visual cue rate presentation were manipulated. Data were analyzed by condition using factorial RMANOVAs with planned t-tests corrected for multiple comparisons. There were no differences in pedaling rates between groups for either the auditory or the visual cueing condition. Persons with PD increased their pedaling rate in the auditory (F = 4.78, p = 0.029) and visual cueing (F = 26.48, p < 0.001) conditions. Age-matched healthy adults also increased their pedaling rate in the auditory (F = 24.72, p < 0.001) and visual cueing (F = 40.69, p < 0.001) conditions. Trial-to-trial comparisons in the visual condition in age-matched healthy adults showed a step-wise increase in pedaling rate (p = 0.003 to p < 0.001). In contrast, persons with PD increased their pedaling rate only when explicitly instructed to attend to the visual cues (p < 0.001). An evidence-based cycling VE can modify pedaling rate in persons with PD and age-matched healthy adults. Persons with PD required attention directed to the visual cues in order to obtain an increase in cycling intensity. The combination of the VE and auditory cues was neither additive nor interfering. These data serve as preliminary evidence that embedding auditory and visual cues in a VE to alter cycling speed is a viable method of increasing exercise intensity that may promote fitness.
NASA Astrophysics Data System (ADS)
Dhingra, Shonali; Sandler, Roman; Rios, Rodrigo; Vuong, Cliff; Mehta, Mayank
All animals naturally perceive the abstract concept of space-time. A brain region called the hippocampus is known to be important in creating these perceptions, but the underlying mechanisms are unknown. In our lab we employ several experimental and computational techniques from physics to tackle this fundamental puzzle. Experimentally, we use ideas from nanoscience and materials science to develop techniques to measure the activity of hippocampal neurons in freely behaving animals. Computationally, we develop models to study neuronal activity patterns, which are point processes that are highly stochastic and multidimensional. We then apply these techniques to collect and analyze neuronal signals from rodents while they explore space in the real world or in virtual reality with various stimuli. Our findings show that under these conditions neuronal activity depends on various parameters, such as sensory cues, including visual and auditory, and behavioral cues, including linear and angular position and velocity. Further, neuronal networks create internally generated rhythms, which influence the perception of space and time. In totality, these results further our understanding of how the brain develops a cognitive map of our surrounding space and keeps track of time.
2011-11-08
kinesthetic VR stimuli with patient arousal responses. Treatment consisted of 10 sessions (2x/week) for 5 weeks, and a control group received structured... that provided the treatment therapist control over the visual, auditory, and kinesthetic elements experienced by the participant. The experimental... graded presentation of visual, auditory, and kinesthetic stimuli to stimulate memory recall of traumatic combat events in a safe
NASA Technical Reports Server (NTRS)
Begault, Durand R.; Null, Cynthia H. (Technical Monitor)
1997-01-01
This talk will overview the basic technologies related to the creation of virtual acoustic images, and the potential of including spatial auditory displays in human-machine interfaces. Research into the perceptual error inherent in both natural and virtual spatial hearing is reviewed, since the development of improved technologies is tied to psychoacoustic research. This includes a discussion of Head-Related Transfer Function (HRTF) measurement techniques (the HRTF provides important perceptual cues within a virtual acoustic display). Many commercial applications of virtual acoustics have so far focused on games and entertainment; in this review, other types of applications are examined, including aeronautic safety, voice communications, virtual reality, and room acoustic simulation. In particular, it examines the notion that realistic simulation is optimized within a virtual acoustic display when head motion and reverberation cues are included within a perceptual model.
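To make the rendering idea concrete, the sketch below spatializes a mono signal by convolving it with a left/right pair of head-related impulse responses; the impulse responses here are fabricated placeholders carrying only crude ITD/ILD cues, where a real display would use measured HRTFs.

# Hedged sketch: minimal binaural rendering by convolution with a
# left/right impulse-response pair. The "HRIRs" below are fabricated
# placeholders, not measured head-related impulse responses.
import numpy as np
from scipy.signal import fftconvolve

fs = 44100
t = np.arange(0, 0.5, 1 / fs)
mono = 0.5 * np.sin(2 * np.pi * 440 * t)          # source: a 440 Hz tone

itd_samples = int(0.0006 * fs)                    # ~0.6 ms interaural delay
hrir_left = np.zeros(256)
hrir_left[0] = 1.0                                # leading (near) ear
hrir_right = np.zeros(256)
hrir_right[itd_samples] = 0.6                     # delayed, attenuated far ear

left = fftconvolve(mono, hrir_left)[:len(mono)]
right = fftconvolve(mono, hrir_right)[:len(mono)]
binaural = np.stack([left, right], axis=1)        # two-channel headphone signal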
Effects of Bone Vibrator Position on Auditory Spatial Perception Tasks.
McBride, Maranda; Tran, Phuong; Pollard, Kimberly A; Letowski, Tomasz; McMillan, Garnett P
2015-12-01
This study assessed listeners' ability to localize spatially differentiated virtual audio signals delivered by bone conduction (BC) vibrators and circumaural air conduction (AC) headphones. Although the skull offers little intracranial sound wave attenuation, previous studies have demonstrated listeners' ability to localize auditory signals delivered by a pair of BC vibrators coupled to the mandibular condyle bones. The current study extended this research to other BC vibrator locations on the skull. Each participant listened to virtual audio signals originating from 16 different horizontal locations using circumaural headphones or BC vibrators placed in front of, above, or behind the listener's ears. The listener's task was to indicate the signal's perceived direction of origin. Localization accuracy with the BC front and BC top positions was comparable to that with the headphones, but responses for the BC back position were less accurate than both the headphones and BC front position. This study supports the conclusion of previous studies that listeners can localize virtual 3D signals equally well using AC and BC transducers. Based on these results, it is apparent that BC devices could be substituted for AC headphones with little to no localization performance degradation. BC headphones can be used when spatial auditory information needs to be delivered without occluding the ears. Although vibrator placement in front of the ears appears optimal from the localization standpoint, the top or back position may be acceptable from an operational standpoint or if the BC system is integrated into headgear. © 2015, Human Factors and Ergonomics Society.
Fusion interfaces for tactical environments: An application of virtual reality technology
NASA Technical Reports Server (NTRS)
Haas, Michael W.
1994-01-01
The term Fusion Interface is defined as a class of interface which integrally incorporates both virtual and nonvirtual concepts and devices across the visual, auditory, and haptic sensory modalities. A fusion interface is a multisensory virtually-augmented synthetic environment. A new facility has been developed within the Human Engineering Division of the Armstrong Laboratory dedicated to exploratory development of fusion interface concepts. This new facility, the Fusion Interfaces for Tactical Environments (FITE) Facility, is a specialized flight simulator enabling efficient concept development through rapid prototyping and direct experience of new fusion concepts. The FITE Facility also supports evaluation of fusion concepts by operational fighter pilots in an air combat environment. The facility is utilized by a multidisciplinary design team composed of human factors engineers, electronics engineers, computer scientists, experimental psychologists, and operational pilots. The FITE computational architecture is composed of twenty-five 80486-based microcomputers operating in real time. The microcomputers generate out-the-window visuals, in-cockpit and head-mounted visuals, localized auditory presentations, and haptic displays on the stick and rudder pedals, as well as executing weapons models, aerodynamic models, and threat models.
Forebrain pathway for auditory space processing in the barn owl.
Cohen, Y E; Miller, G L; Knudsen, E I
1998-02-01
The forebrain plays an important role in many aspects of sound localization behavior. Yet the forebrain pathway that processes auditory spatial information is not known for any species. Using standard anatomic labeling techniques, we took a "top-down" approach to trace the flow of auditory spatial information from an output area of the forebrain sound localization pathway (the auditory archistriatum, AAr), back through the forebrain, and into the auditory midbrain. Previous work has demonstrated that AAr units are specialized for auditory space processing. The results presented here show that the AAr receives afferent input from Field L both directly and indirectly via the caudolateral neostriatum. Afferent input to Field L originates mainly in the auditory thalamus, nucleus ovoidalis, which, in turn, receives input from the central nucleus of the inferior colliculus. In addition, we confirmed previously reported projections of the AAr to the basal ganglia, the external nucleus of the inferior colliculus (ICX), the deep layers of the optic tectum, and various brain stem nuclei. A series of inactivation experiments demonstrated that the sharp tuning of AAr sites for binaural spatial cues depends on Field L input but not on input from the auditory space map in the midbrain ICX: pharmacological inactivation of Field L completely eliminated auditory responses in the AAr, whereas bilateral ablation of the midbrain ICX had no appreciable effect on AAr responses. We conclude, therefore, that the forebrain sound localization pathway can process auditory spatial information independently of the midbrain localization pathway.
Do you see what I hear? Vantage point preference and visual dominance in a time-space synaesthete.
Jarick, Michelle; Stewart, Mark T; Smilek, Daniel; Dixon, Michael J
2013-01-01
Time-space synaesthetes "see" time units organized in a spatial form. While the structure might be invariant for most synaesthetes, the perspective by which some view their calendar is somewhat flexible. One well-studied synaesthete L adopts different viewpoints for months seen vs. heard. Interestingly, L claims to prefer her auditory perspective, even though the month names are represented visually upside down. To verify this, we used a spatial-cueing task that included audiovisual month cues. These cues were either congruent with L's preferred "auditory" viewpoint (auditory-only and auditory + month inverted) or incongruent (upright visual-only and auditory + month upright). Our prediction was that L would show enhanced cueing effects (a larger response time difference between valid and invalid targets) following the audiovisual congruent cues, since both elicit the "preferred" auditory perspective. Also, when faced with conflicting cues, we predicted L would choose the preferred auditory perspective over the visual perspective. As we expected, L did show enhanced cueing effects following the audiovisual congruent cues that corresponded with her preferred auditory perspective, but the visual perspective dominated when L was faced with both viewpoints simultaneously. The results are discussed in relation to the reification hypothesis of sequence-space synaesthesia (Eagleman, 2009).
Integration of auditory and vibrotactile stimuli: Effects of frequency
Wilson, E. Courtenay; Reed, Charlotte M.; Braida, Louis D.
2010-01-01
Perceptual integration of vibrotactile and auditory sinusoidal tone pulses was studied in detection experiments as a function of stimulation frequency. Vibrotactile stimuli were delivered through a single channel vibrator to the left middle fingertip. Auditory stimuli were presented diotically through headphones in a background of 50 dB sound pressure level broadband noise. Detection performance for combined auditory-tactile presentations was measured using stimulus levels that yielded 63% to 77% correct unimodal performance. In Experiment 1, the vibrotactile stimulus was 250 Hz and the auditory stimulus varied between 125 and 2000 Hz. In Experiment 2, the auditory stimulus was 250 Hz and the tactile stimulus varied between 50 and 400 Hz. In Experiment 3, the auditory and tactile stimuli were always equal in frequency and ranged from 50 to 400 Hz. The highest rates of detection for the combined-modality stimulus were obtained when stimulating frequencies in the two modalities were equal or closely spaced (and within the Pacinian range). Combined-modality detection for closely spaced frequencies was generally consistent with an algebraic sum model of perceptual integration; wider-frequency spacings were generally better fit by a Pythagorean sum model. Thus, perceptual integration of auditory and tactile stimuli at near-threshold levels appears to depend both on absolute frequency and relative frequency of stimulation within each modality. PMID:21117754
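For reference, the two combination rules can be stated in terms of unimodal detectability indices: the algebraic sum predicts d'_AT = d'_A + d'_T, while the Pythagorean sum predicts d'_AT = sqrt(d'_A^2 + d'_T^2). The sketch below computes both predictions for hypothetical d' values; the numbers are illustrative, not from the study.

# Hedged sketch: two standard models for combined-modality detectability.
# d_a and d_t are hypothetical unimodal d' values, not data from the study.
import math

d_a, d_t = 1.0, 0.9                        # illustrative auditory/tactile d'
algebraic_sum = d_a + d_t                  # linear (algebraic) summation
pythagorean_sum = math.hypot(d_a, d_t)     # sqrt(d_a**2 + d_t**2)

print(f"algebraic sum prediction:   d' = {algebraic_sum:.2f}")
print(f"pythagorean sum prediction: d' = {pythagorean_sum:.2f}")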
Simultaneous neural and movement recording in large-scale immersive virtual environments.
Snider, Joseph; Plank, Markus; Lee, Dongpyo; Poizner, Howard
2013-10-01
Virtual reality (VR) allows precise control and manipulation of rich, dynamic stimuli that, when coupled with on-line motion capture and neural monitoring, can provide a powerful means both of understanding brain-behavioral relations in the high-dimensional world and of assessing and treating a variety of neural disorders. Here we present a system that combines state-of-the-art, fully immersive, 3D, multi-modal VR with temporally aligned electroencephalographic (EEG) recordings. The VR system is dynamic and interactive across visual, auditory, and haptic interactions, providing sight, sound, touch, and force. Crucially, it does so with simultaneous EEG recordings while subjects actively move about a 20 × 20 ft space. The overall end-to-end latency between real movement and its simulated movement in the VR is approximately 40 ms. Spatial precision of the various devices is on the order of millimeters. The temporal alignment with the neural recordings is accurate to within approximately 1 ms. This powerful combination of systems opens up a new window into brain-behavioral relations and a new means of assessment and rehabilitation of individuals with motor and other disorders.
Binaural fusion and the representation of virtual pitch in the human auditory cortex.
Pantev, C; Elbert, T; Ross, B; Eulitz, C; Terhardt, E
1996-10-01
The auditory system derives the pitch of complex tones from the tone's harmonics. Research in psychoacoustics predicted that binaural fusion was an important feature of pitch processing. Based on neuromagnetic human data, the first neurophysiological confirmation of binaural fusion in hearing is presented. The centre of activation within the cortical tonotopic map corresponds to the location of the perceived pitch and not to the locations that are activated when the single frequency constituents are presented. This is also true when the different harmonics of a complex tone are presented dichotically. We conclude that the pitch processor includes binaural fusion to determine the particular pitch location which is activated in the auditory cortex.
Amplitude modulation detection by human listeners in sound fields.
Zahorik, Pavel; Kim, Duck O; Kuwada, Shigeyuki; Anderson, Paul W; Brandewie, Eugene; Srinivasan, Nirmal
2011-10-01
The temporal modulation transfer function (TMTF) approach allows techniques from linear systems analysis to be used to predict how the auditory system will respond to arbitrary patterns of amplitude modulation (AM). Although this approach forms the basis for a standard method of predicting speech intelligibility based on estimates of the acoustical modulation transfer function (MTF) between source and receiver, human sensitivity to AM as characterized by the TMTF has not been extensively studied under realistic listening conditions, such as in reverberant sound fields. Here, TMTFs (octave bands from 2 to 512 Hz) were obtained in three listening conditions simulated using virtual auditory space techniques: diotic, anechoic sound field, and reverberant room sound field. TMTFs were then related to acoustical MTFs estimated using two different methods in each of the listening conditions. Both diotic and anechoic data were found to be in good agreement with classic results, but AM thresholds in the reverberant room were lower than predictions based on acoustical MTFs. This result suggests that simple linear systems techniques may not be appropriate for predicting TMTFs from acoustical MTFs in reverberant sound fields, and may be suggestive of mechanisms that functionally enhance modulation during reverberant listening.
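One common way to estimate an acoustical MTF is Schroeder's formula, which integrates over the squared room impulse response: m(F) = |integral of h^2(t) exp(-j 2 pi F t) dt| / integral of h^2(t) dt. The sketch below implements this for a synthetic exponential-decay impulse response; the sampling rate and decay constant are arbitrary illustrations.

# Hedged sketch: acoustical modulation transfer function (MTF) from an
# impulse response via Schroeder's formula:
#   m(F) = |sum(h^2 * exp(-j*2*pi*F*t))| / sum(h^2)
# The impulse response below is a synthetic exponential decay.
import numpy as np

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
rt60 = 0.6                                   # illustrative reverberation time (s)
h = np.exp(-3 * np.log(10) * t / rt60) * np.random.default_rng(1).normal(size=t.size)

h2 = h ** 2
mod_freqs = np.array([2, 4, 8, 16, 32, 64, 128, 256, 512], dtype=float)
mtf = np.abs(h2 @ np.exp(-2j * np.pi * np.outer(t, mod_freqs))) / h2.sum()

for freq, m in zip(mod_freqs, mtf):
    print(f"{freq:5.0f} Hz: m = {m:.3f}")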
Auditory Confrontation Naming in Alzheimer’s Disease
Brandt, Jason; Bakker, Arnold; Maroof, David Aaron
2010-01-01
Naming is a fundamental aspect of language and is virtually always assessed with visual confrontation tests. Tests of the ability to name objects by their characteristic sounds would be particularly useful in the assessment of visually impaired patients, and may be particularly sensitive in Alzheimer’s disease (AD). We developed an Auditory Naming Task, requiring the identification of the source of environmental sounds (i.e., animal calls, musical instruments, vehicles) and multiple-choice recognition of those not identified. In two separate studies, mild-to-moderate AD patients performed more poorly than cognitively normal elderly on the Auditory Naming Task. This task was also more difficult than two versions of a comparable Visual Naming Task, and correlated more highly with Mini-Mental State Exam score. Internal consistency reliability was acceptable, although ROC analysis revealed auditory naming to be slightly less successful than visual confrontation naming in discriminating AD patients from normal subjects. Nonetheless, our Auditory Naming Test may prove useful in research and clinical practice, especially with visually-impaired patients. PMID:20981630
NASA Astrophysics Data System (ADS)
West, Ruth G.; Margolis, Todd; Prudhomme, Andrew; Schulze, Jürgen P.; Mostafavi, Iman; Lewis, J. P.; Gossmann, Joachim; Singh, Rajvikram
2014-02-01
Scalable Metadata Environments (MDEs) are an artistic approach for designing immersive environments for large-scale data exploration in which users interact with data by forming multiscale patterns that they alternately disrupt and reform. Developed and prototyped as part of an art-science research collaboration, we define an MDE as a 4D virtual environment structured by quantitative and qualitative metadata describing multidimensional data collections. Entire data sets (e.g., tens of millions of records) can be visualized and sonified at multiple scales and at different levels of detail so they can be explored interactively in real time within MDEs. They are designed to reflect similarities and differences in the underlying data or metadata such that patterns can be visually/aurally sorted in an exploratory fashion by an observer who is not familiar with the details of the mapping from data to visual, auditory, or dynamic attributes. While many approaches for visual and auditory data mining exist, MDEs are distinct in that they utilize qualitative and quantitative data and metadata to construct multiple interrelated conceptual coordinate systems. These "regions" function as conceptual lattices for scalable auditory and visual representations within virtual environments computationally driven by multi-GPU CUDA-enabled fluid dynamics systems.
ERIC Educational Resources Information Center
Tye-Murray, Nancy; Spehar, Brent; Barcroft, Joe; Sommers, Mitchell
2017-01-01
Purpose: The spacing effect in human memory research refers to situations in which people learn items better when they study items in spaced intervals rather than massed intervals. This investigation was conducted to compare the efficacy of meaning-oriented auditory training when administered with a spaced versus massed practice schedule. Method:…
Call sign intelligibility improvement using a spatial auditory display
NASA Technical Reports Server (NTRS)
Begault, Durand R.
1994-01-01
A spatial auditory display was designed for separating the multiple communication channels usually heard over one ear to different virtual auditory positions. The single 19-inch rack-mount device utilizes digital filtering algorithms to separate up to four communication channels. The filters use four different binaural transfer functions, synthesized from actual outer-ear measurements, to impose localization cues on the incoming sound. Hardware design features include 'fail-safe' operation in the case of power loss, and microphone/headset interfaces to the mobile launch communication system in use at KSC. An experiment designed to verify the intelligibility advantage of the display used 130 different call signs taken from the communications protocol used at NASA KSC. A 6 to 7 dB intelligibility advantage was found when multiple channels were spatially displayed, compared to monaural listening. The findings suggest that the use of a spatial auditory display could enhance both occupational and operational safety and the efficiency of NASA operations.
The effects of auditory and visual cues on timing synchronicity for robotic rehabilitation.
English, Brittney A; Howard, Ayanna M
2017-07-01
In this paper, we explore how the integration of auditory and visual cues can help teach the timing of motor skills for the purpose of motor function rehabilitation. We conducted a study using Amazon's Mechanical Turk in which 106 participants played a virtual therapy game requiring wrist movements. To validate that our results would translate to trends that could also be observed during robotic rehabilitation sessions, we recreated this experiment with 11 participants using a robotic wrist rehabilitation system as a means to control the therapy game. During interaction with the therapy game, users were asked to learn and reconstruct a tapping sequence as defined by musical notes flashing on the screen. Participants were divided into 2 test groups: (1) control: participants only received visual cues to prompt them on the timing sequence, and (2) experimental: participants received both visual and auditory cues to prompt them on the timing sequence. To evaluate performance, the timing and length of the sequence were measured. Performance was determined by calculating the number of trials needed before the participant was able to master the specific aspect of the timing task. In the virtual experiment, the group that received visual and auditory cues was able to master all aspects of the timing task faster than the visual-cue-only group, with p-values < 0.05. This trend was also verified for participants using the robotic arm exoskeleton in the physical experiment.
Sound localization by echolocating bats
NASA Astrophysics Data System (ADS)
Aytekin, Murat
Echolocating bats emit ultrasonic vocalizations and listen to echoes reflected back from objects in the path of the sound beam to build a spatial representation of their surroundings. Important to understanding the representation of space through echolocation are detailed studies of the cues used for localization, the sonar emission patterns and how this information is assembled. This thesis includes three studies, one on the directional properties of the sonar receiver, one on the directional properties of the sonar transmitter, and a model that demonstrates the role of action in building a representation of auditory space. The general importance of this work to a broader understanding of spatial localization is discussed. Investigations of the directional properties of the sonar receiver reveal that interaural level difference and monaural spectral notch cues are both dependent on sound source azimuth and elevation. This redundancy allows flexibility that an echolocating bat may need when coping with complex computational demands for sound localization. Using a novel method to measure bat sonar emission patterns from freely behaving bats, I show that the sonar beam shape varies between vocalizations. Consequently, the auditory system of a bat may need to adapt its computations to accurately localize objects using changing acoustic inputs. Extra-auditory signals that carry information about pinna position and beam shape are required for auditory localization of sound sources. The auditory system must learn associations between extra-auditory signals and acoustic spatial cues. Furthermore, the auditory system must adapt to changes in acoustic input that occur with changes in pinna position and vocalization parameters. These demands on the nervous system suggest that sound localization is achieved through the interaction of behavioral control and acoustic inputs. A sensorimotor model demonstrates how an organism can learn space through auditory-motor contingencies. The model also reveals how different aspects of sound localization, such as experience-dependent acquisition, adaptation, and extra-auditory influences, can be brought together under a comprehensive framework. This thesis presents a foundation for understanding the representation of auditory space that builds upon acoustic cues, motor control, and learning dynamic associations between action and auditory inputs.
Ranging in Human Sonar: Effects of Additional Early Reflections and Exploratory Head Movements
Wallmeier, Ludwig; Wiegrebe, Lutz
2014-01-01
Many blind people rely on echoes from self-produced sounds to assess their environment. It has been shown that human subjects can use echolocation for directional localization and orientation in a room, but echo-acoustic distance perception - e.g. to determine one's position in a room - has received little scientific attention, and systematic studies on the influence of additional early reflections and exploratory head movements are lacking. This study investigates echo-acoustic distance discrimination in virtual echo-acoustic space, using the impulse responses of a real corridor. Six blindfolded sighted subjects and a blind echolocation expert had to discriminate between two positions in the virtual corridor, which differed by their distance to the front wall, but not to the lateral walls. To solve this task, participants evaluated echoes that were generated in real time from self-produced vocalizations. Across experimental conditions, we systematically varied the restrictions for head rotations, the subjects' orientation in virtual space and the reference position. Three key results were observed. First, all participants successfully solved the task with discrimination thresholds below 1 m for all reference distances (0.75–4 m). Performance was best for the smallest reference distance of 0.75 m, with thresholds around 20 cm. Second, distance discrimination performance was relatively robust against additional early reflections, compared to other echolocation tasks like directional localization. Third, free head rotations during echolocation can improve distance discrimination performance in complex environmental settings. However, head movements do not necessarily provide a benefit over static echolocation from an optimal single orientation. These results show that accurate distance discrimination through echolocation is possible over a wide range of reference distances and environmental conditions. This is an important functional benefit of human echolocation, which may also play a major role in the calibration of auditory space representations. PMID:25551226
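Discrimination thresholds of this kind are typically estimated by fitting a psychometric function to proportion-correct data. The sketch below fits a cumulative Gaussian to fabricated two-alternative distance-discrimination data; all numbers are invented, and the study's actual fitting procedure may differ.

# Hedged sketch: threshold estimation by fitting a cumulative-Gaussian
# psychometric function to fabricated 2AFC discrimination data.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

delta = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])            # distance differences (m)
p_correct = np.array([0.55, 0.62, 0.74, 0.86, 0.93, 0.97])  # fabricated data

def psychometric(d, mu, sigma):
    # 2AFC: chance level 0.5, asymptote 1.0
    return 0.5 + 0.5 * norm.cdf(d, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, delta, p_correct, p0=[0.4, 0.3])
print(f"75%-correct threshold: {mu:.2f} m")   # mu is this curve's 75% point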
Compression of auditory space during forward self-motion.
Teramoto, Wataru; Sakamoto, Shuichi; Furune, Fumimasa; Gyoba, Jiro; Suzuki, Yôiti
2012-01-01
Spatial inputs from the auditory periphery can be changed with movements of the head or whole body relative to the sound source. Nevertheless, humans can perceive a stable auditory environment and appropriately react to a sound source. This suggests that the inputs are reinterpreted in the brain, while being integrated with information about the movements. Little is known, however, about how these movements modulate auditory perceptual processing. Here, we investigate the effect of linear acceleration on auditory space representation. Participants were passively transported forward/backward at constant accelerations using a robotic wheelchair. An array of loudspeakers was aligned parallel to the motion direction along a wall to the right of the listener. A short noise burst was presented during the self-motion from one of the loudspeakers when the listener's physical coronal plane reached the location of one of the speakers (null point). In Experiments 1 and 2, the participants indicated in which direction the sound was presented, forward or backward relative to their subjective coronal plane. The results showed that the sound position aligned with the subjective coronal plane was displaced ahead of the null point only during forward self-motion and that the magnitude of the displacement increased with increasing acceleration. Experiment 3 investigated the structure of the auditory space in the traveling direction during forward self-motion. The sounds were presented at various distances from the null point. The participants indicated the perceived sound location by pointing with a rod. All the sounds that were actually located in the traveling direction were perceived as being biased towards the null point. These results suggest a distortion of the auditory space in the direction of movement during forward self-motion. The underlying mechanism might involve anticipatory spatial shifts in auditory receptive field locations driven by afferent signals from the vestibular system.
Characterizing the audibility of sound field with diffusion in architectural spaces
NASA Astrophysics Data System (ADS)
Utami, Sentagi Sesotya
The significance of diffusion control in room acoustics is that it attempts to avoid echoes by dispersing reflections while removing less valuable sound energy. Some applications place emphasis on the enhancement of late reflections to promote a sense of envelopment, and on methods required to measure the performance of diffusers. What still remains unclear is the impact of diffusion on audibility quality due to the geometric arrangement of architectural elements. The objective of this research is to characterize the audibility of the sound field with diffusion in architectural space. In order to address this objective, an approach utilizing various methods and new techniques relevant to room acoustics standards was applied. An array of microphones based on beamforming (i.e., an acoustic camera) was utilized for field measurements in a recording studio, classrooms, auditoriums, concert halls, and sports arenas. Given the ability to combine a visual image with acoustical data, the impulse responses measured were analyzed to identify the impact of diffusive surfaces on the early, late, and reverberant sound fields. The effects of the room geometry and the proportions of the diffusive and absorptive surfaces were observed by utilizing geometrical room acoustics simulations. The degree of diffuseness in each space was measured by coherences from different measurement positions, along with the acoustical conditions predicted by well-known objective parameters such as T30, EDT, C80, and C50. Noticeable differences in the auditory experience were investigated by utilizing computer-based survey techniques, including the use of an immersive virtual environment system, given current software auralization capabilities. The results, based on statistical analysis, demonstrate the users' ability to localize the sound and to distinguish the intensity, clarity, and reverberation created within the virtual environment. The impact of architectural elements on diffusion control is evaluated through design-variable interactions, both objectively and subjectively. The effectiveness of the diffusive surfaces is determined by the echo reduction and the sense of complete immersion in a given room-acoustic volume. Applying such a methodology at various stages of design provides the ability to create a better auditory experience for the users. The results from the cases studied have contributed to the development of new acoustical treatments based on diffusion characteristics.
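For context, the clarity indices mentioned here (C50, C80) compare early to late energy in the impulse response: C_te = 10 log10(sum of h^2 up to te / sum of h^2 after te), with te = 50 ms for speech and 80 ms for music. The sketch below computes both for a synthetic decaying impulse response; the decay parameters are illustrative.

# Hedged sketch: clarity indices C50/C80 from a room impulse response.
# The impulse response below is a synthetic exponential decay.
import numpy as np

fs = 48000
t = np.arange(0, 1.5, 1 / fs)
h = np.exp(-3 * np.log(10) * t / 1.2) * np.random.default_rng(2).normal(size=t.size)

def clarity(h, fs, te_ms):
    n = int(fs * te_ms / 1000)                  # early/late boundary in samples
    return 10 * np.log10(np.sum(h[:n] ** 2) / np.sum(h[n:] ** 2))

print(f"C50 = {clarity(h, fs, 50):.1f} dB")
print(f"C80 = {clarity(h, fs, 80):.1f} dB")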
Fully Three-Dimensional Virtual-Reality System
NASA Technical Reports Server (NTRS)
Beckman, Brian C.
1994-01-01
Proposed virtual-reality system presents visual displays to simulate free flight in three-dimensional space. System, virtual space pod, is testbed for control and navigation schemes. Unlike most virtual-reality systems, virtual space pod would not depend for orientation on ground plane, which hinders free flight in three dimensions. Space pod provides comfortable seating, convenient controls, and dynamic virtual-space images for virtual traveler. Controls include buttons plus joysticks with six degrees of freedom.
Relative size of auditory pathways in symmetrically and asymmetrically eared owls.
Gutiérrez-Ibáñez, Cristián; Iwaniuk, Andrew N; Wylie, Douglas R
2011-01-01
Owls are highly efficient predators with a specialized auditory system designed to aid in the localization of prey. One of the most unique anatomical features of the owl auditory system is the evolution of vertically asymmetrical ears in some species, which improves their ability to localize the elevational component of a sound stimulus. In the asymmetrically eared barn owl, interaural time differences (ITD) are used to localize sounds in azimuth, whereas interaural level differences (ILD) are used to localize sounds in elevation. These two features are processed independently in two separate neural pathways that converge in the external nucleus of the inferior colliculus to form an auditory map of space. Here, we present a comparison of the relative volume of 11 auditory nuclei in both the ITD and the ILD pathways of 8 species of symmetrically and asymmetrically eared owls in order to investigate evolutionary changes in the auditory pathways in relation to ear asymmetry. Overall, our results indicate that asymmetrically eared owls have much larger auditory nuclei than owls with symmetrical ears. In asymmetrically eared owls we found that both the ITD and ILD pathways are equally enlarged, and other auditory nuclei, not directly involved in binaural comparisons, are also enlarged. We suggest that the hypertrophy of auditory nuclei in asymmetrically eared owls likely reflects both an improved ability to precisely locate sounds in space and an expansion of the hearing range. Additionally, our results suggest that the hypertrophy of nuclei that compute space may have preceded that of the expansion of the hearing range and evolutionary changes in the size of the auditory system occurred independently of phylogeny. Copyright © 2011 S. Karger AG, Basel.
Mohammadi, Alireza; Kargar, Mahmoud; Hesami, Ehsan
2018-03-01
Spatial disorientation is a hallmark of amnestic mild cognitive impairment (aMCI) and Alzheimer's disease. Our aim was to use virtual reality to determine the allocentric and egocentric memory deficits of subjects with single-domain aMCI (aMCIsd) and multiple-domain aMCI (aMCImd). For this purpose, we introduced an advanced virtual reality navigation task (VRNT) to distinguish these deficits in mild Alzheimer's disease (miAD), aMCIsd, and aMCImd. The VRNT performance of 110 subjects, including 20 with miAD, 30 with pure aMCIsd, 30 with pure aMCImd, and 30 cognitively normal controls, was compared. Our newly developed VRNT consists of a virtual neighbourhood (allocentric memory) and a virtual maze (egocentric memory). Verbal and visuospatial memory impairments were also examined with the Rey Auditory-Verbal Learning Test and the Rey-Osterrieth Complex Figure Test, respectively. We found that miAD and aMCImd subjects were impaired in both allocentric and egocentric memory, but aMCIsd subjects performed similarly to the normal controls on both tasks. The miAD, aMCImd, and aMCIsd subjects performed worse on finding the target or required more time in the virtual environment than the aMCImd, aMCIsd, and normal controls, respectively. Our findings indicated that the aMCImd and miAD subjects, as well as the aMCIsd subjects, were more impaired in egocentric orientation than in allocentric orientation. We concluded that the VRNT can distinguish aMCImd subjects, but not aMCIsd subjects, from normal elderly subjects. The VRNT, along with the Rey Auditory-Verbal Learning Test and Rey-Osterrieth Complex Figure Test, can be used as a valid diagnostic tool for properly distinguishing different forms of aMCI. © 2018 Japanese Psychogeriatric Society.
Multisensory guidance of orienting behavior.
Maier, Joost X; Groh, Jennifer M
2009-12-01
We use both vision and audition when localizing objects and events in our environment. However, these sensory systems receive spatial information in different coordinate systems: sounds are localized using inter-aural and spectral cues, yielding a head-centered representation of space, whereas the visual system uses an eye-centered representation of space, based on the site of activation on the retina. In addition, the visual system employs a place-coded, retinotopic map of space, whereas the auditory system's representational format is characterized by broad spatial tuning and a lack of topographical organization. A common view is that the brain needs to reconcile these differences in order to control behavior, such as orienting gaze to the location of a sound source. To accomplish this, it seems that either auditory spatial information must be transformed from a head-centered rate code to an eye-centered map to match the frame of reference used by the visual system, or vice versa. Here, we review a number of studies that have focused on the neural basis underlying such transformations in the primate auditory system. Although these studies have found some evidence for such transformations, many differences in the way the auditory and visual systems encode space exist throughout the auditory pathway. We will review these differences at the neural level and discuss them in relation to differences in the way auditory and visual information is used in guiding orienting movements.
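As a toy illustration of such a reference-frame transformation, the sketch below converts a head-centered sound azimuth into an eye-centered azimuth by subtracting the current eye-in-head position. This is only the geometric idea; the neural transformations reviewed above operate on population codes, and the angles here are invented.

# Hedged sketch: toy head-centered -> eye-centered remapping of azimuth.
# Real neural transforms are population codes, not a scalar subtraction.
def eye_centered_azimuth(sound_az_head_deg: float, eye_az_head_deg: float) -> float:
    """Azimuth of a sound relative to the current gaze direction (degrees)."""
    az = sound_az_head_deg - eye_az_head_deg
    return (az + 180.0) % 360.0 - 180.0        # wrap to (-180, 180]

# Sound 20 deg right of the head with gaze 15 deg left: 35 deg right of gaze.
print(eye_centered_azimuth(20.0, -15.0))       # -> 35.0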
Kostopoulos, Penelope; Petrides, Michael
2016-02-16
There is evidence from the visual, verbal, and tactile memory domains that the midventrolateral prefrontal cortex plays a critical role in the top-down modulation of activity within posterior cortical areas for the selective retrieval of specific aspects of a memorized experience, a functional process often referred to as active controlled retrieval. In the present functional neuroimaging study, we explore the neural bases of active retrieval for auditory nonverbal information, about which almost nothing is known. Human participants were scanned with functional magnetic resonance imaging (fMRI) in a task in which they were presented with short melodies from different locations in a simulated virtual acoustic environment within the scanner and were then instructed to retrieve selectively either the particular melody presented or its location. There were significant activity increases specifically within the midventrolateral prefrontal region during the selective retrieval of nonverbal auditory information. During the selective retrieval of information from auditory memory, the right midventrolateral prefrontal region increased its interaction with the auditory temporal region and the inferior parietal lobule in the right hemisphere. These findings provide evidence that the midventrolateral prefrontal cortical region interacts with specific posterior cortical areas in the human cerebral cortex for the selective retrieval of object and location features of an auditory memory experience.
Ingham, N J; Thornton, S K; McCrossan, D; Withington, D J
1998-12-01
Neurotransmitter involvement in development and maintenance of the auditory space map in the guinea pig superior colliculus. J. Neurophysiol. 80: 2941-2953, 1998. The mammalian superior colliculus (SC) is a complex area of the midbrain in terms of anatomy, physiology, and neurochemistry. The SC bears representations of the major sensory modalites integrated with a motor output system. It is implicated with saccade generation, in behavioral responses to novel sensory stimuli and receives innervation from diverse regions of the brain using many neurotransmitter classes. Ethylene-vinyl acetate copolymer (Elvax-40W polymer) was used here to deliver chronically neurotransmitter receptor antagonists to the SC of the guinea pig to investigate the potential role played by the major neurotransmitter systems in the collicular representation of auditory space. Slices of polymer containing different drugs were implanted onto the SC of guinea pigs before the development of the SC azimuthal auditory space map, at approximately 20 days after birth (DAB). A further group of animals was exposed to aminophosphonopentanoic acid (AP5) at approximately 250 DAB. Azimuthal spatial tuning properties of deep layer multiunits of anesthetized guinea pigs were examined approximately 20 days after implantation of the Elvax polymer. Broadband noise bursts were presented to the animals under anechoic, free-field conditions. Neuronal responses were used to construct polar plots representative of the auditory spatial multiunit receptive fields (MURFs). Animals exposed to control polymer could develop a map of auditory space in the SC comparable with that seen in unimplanted normal animals. Exposure of the SC of young animals to AP5, 6-cyano-7-nitroquinoxaline-2,3-dione, or atropine, resulted in a reduction in the proportion of spatially tuned responses with an increase in the proportion of broadly tuned responses and a degradation in topographic order. Thus N-methyl--aspartate (NMDA) and non-NMDA glutamate receptors and muscarinic acetylcholine receptors appear to play vital roles in the development of the SC auditory space map. A group of animals exposed to AP5 beginning at approximately 250 DAB produced results very similar to those obtained in the young group exposed to AP5. Thus NMDA glutamate receptors also seem to be involved in the maintenance of the SC representation of auditory space in the adult guinea pig. Exposure of the SC of young guinea pigs to gamma-aminobutyric acid (GABA) receptor blocking agents produced some but not total disruption of the spatial tuning of auditory MURFs. Receptive fields were large compared with controls, but a significant degree of topographical organization was maintained. GABA receptors may play a role in the development of fine tuning and sharpening of auditory spatial responses in the SC but not necessarily in the generation of topographical order of the these responses.
Maidenbaum, Shachar; Levy-Tzedek, Shelly; Chebat, Daniel Robert; Namer-Furstenberg, Rinat; Amedi, Amir
2014-01-01
Mobility training programs for helping the blind navigate through unknown places with a White-Cane significantly improve their mobility. However, what is the effect of new assistive technologies, offering more information to the blind user, on the underlying premises of these programs such as navigation patterns? We developed the virtual-EyeCane, a minimalistic sensory substitution device translating single-point-distance into auditory cues identical to the EyeCane's in the real world. We compared performance in virtual environments when using the virtual-EyeCane, a virtual-White-Cane, no device and visual navigation. We show that the characteristics of virtual-EyeCane navigation differ from navigation with a virtual-White-Cane or no device, and that virtual-EyeCane users complete more levels successfully, taking shorter paths and with less collisions than these groups, and we demonstrate the relative similarity of virtual-EyeCane and visual navigation patterns. This suggests that additional distance information indeed changes navigation patterns from virtual-White-Cane use, and brings them closer to visual navigation.
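The core mapping in such a device is simple: nearer obstacles produce faster or higher-pitched auditory cues. The sketch below shows one plausible distance-to-cue-rate mapping; the constants are invented and do not describe the actual EyeCane encoding.

# Hedged sketch: a plausible distance-to-auditory-cue mapping for a
# single-point sensory-substitution device. Constants are invented and
# do not describe the actual EyeCane's encoding.
def cue_rate_hz(distance_m: float, d_min: float = 0.2, d_max: float = 5.0,
                rate_min: float = 1.0, rate_max: float = 20.0) -> float:
    """Click rate rises linearly as the sensed obstacle gets closer."""
    d = min(max(distance_m, d_min), d_max)
    closeness = (d_max - d) / (d_max - d_min)   # 0 = far, 1 = near
    return rate_min + closeness * (rate_max - rate_min)

print(cue_rate_hz(4.5))   # far obstacle  -> slow clicks
print(cue_rate_hz(0.5))   # near obstacle -> fast clicks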
ERIC Educational Resources Information Center
Vercillo, Tiziana; Burr, David; Gori, Monica
2016-01-01
A recent study has shown that congenitally blind adults, who have never had visual experience, are impaired on an auditory spatial bisection task (Gori, Sandini, Martinoli, & Burr, 2014). In this study we investigated how thresholds for auditory spatial bisection and auditory discrimination develop with age in sighted and congenitally blind…
Kolarik, Andrew J; Moore, Brian C J; Zahorik, Pavel; Cirstea, Silvia; Pardhan, Shahina
2016-02-01
Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.
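Judged auditory distance in this literature is often summarized by a compressive power function of physical distance, r' = k * r^a with exponent a < 1. The sketch below fits that function to fabricated judgment data via log-log regression; the "judgments" are invented for illustration.

# Hedged sketch: fitting the compressive power function r' = k * r**a
# often used to summarize judged vs. physical auditory distance.
import numpy as np

r = np.array([0.5, 1, 2, 4, 8, 16])                   # physical distance (m)
r_judged = np.array([0.8, 1.2, 1.9, 3.1, 4.8, 7.5])   # fabricated judgments

# Linear regression in log-log space: log r' = log k + a * log r
a, log_k = np.polyfit(np.log(r), np.log(r_judged), 1)
print(f"exponent a = {a:.2f} (a < 1 indicates compression), k = {np.exp(log_k):.2f}")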
Jin, Seung-A Annie
2009-12-01
The rapid growth of virtual worlds is one of the most recent Internet trends. Some distinguishing features of virtual environments include the employment of avatars and multimodal communication among avatars. This study examined the effects of the modality (text vs. audio) of message presentation on people's evaluation of spokes-avatar credibility and the informational value of promotional messages in avatar-based advertising inside 3D virtual environments. An experiment was conducted in the virtual Apple retail store inside Second Life, the most popular and fastest growing virtual world. The author designed a two-group (textual advertisement vs. auditory advertisement) comparison experiment by manipulating the modality of conveying advertisement messages. The author also created a spokes-avatar that represents a real-life organization (Apple) and presents promotional messages about its innovative product, the iPhone. Data analyses showed that (a) textual modality (vs. auditory modality) resulted in greater source expertise, informational value of the advertisement message, and social presence; and that (b) high product involvement (vs. low product involvement) resulted in a more positive attitude toward the product, higher buying intention, and a higher level of perceived interactivity. In addition to the main effects of product involvement and modality, results showed significant interaction between involvement and modality. Modality effects were stronger for people with low product involvement than for those with high product involvement, thus confirming the moderating effects of product involvement. Results of a path analysis also showed that social presence mediated the effects of modality on the perceived informational value of the advertisement message.
An assessment of auditory-guided locomotion in an obstacle circumvention task.
Kolarik, Andrew J; Scarfe, Amy C; Moore, Brian C J; Pardhan, Shahina
2016-06-01
This study investigated how effectively audition can be used to guide navigation around an obstacle. Ten blindfolded, normally sighted participants navigated around a 0.6 × 2 m obstacle while producing self-generated mouth-click sounds. Objective movement performance was measured using a Vicon motion capture system. Performance with full vision, without generating sound, was used as a baseline for comparison. The obstacle's location was varied randomly from trial to trial: it was either straight ahead or 25 cm to the left or right relative to the participant. Although audition provided sufficient information to detect the obstacle and guide participants around it without collision in the majority of trials, buffer space (clearance between the shoulder and obstacle), overall movement times, and number of velocity corrections were significantly (p < 0.05) greater with auditory guidance than with visual guidance. Collisions sometimes occurred under auditory guidance, suggesting that audition did not always provide an accurate estimate of the space between the participant and the obstacle. Unlike under visual guidance, participants did not always walk around the side that afforded the most space during auditory guidance. Mean buffer space was 1.8 times higher under auditory than under visual guidance. Results suggest that sound can be used to generate buffer space when vision is unavailable, allowing navigation around an obstacle without collision in the majority of trials.
Spatial Audio on the Web: Or Why Can't I hear Anything Over There?
NASA Technical Reports Server (NTRS)
Wenzel, Elizabeth M.; Schlickenmaier, Herbert (Technical Monitor); Johnson, Gerald (Technical Monitor); Frey, Mary Anne (Technical Monitor); Schneider, Victor S. (Technical Monitor); Ahumada, Albert J. (Technical Monitor)
1997-01-01
Auditory complexity, freedom of movement, and interactivity are not always possible in a "true" virtual environment, much less in web-based audio. However, many of the perceptual and engineering constraints (and frustrations) that researchers, engineers, and listeners have experienced in virtual audio are relevant to spatial audio on the web. My talk will discuss some of these engineering constraints and their perceptual consequences, and attempt to relate these issues to implementation on the web.
Using visuo-kinetic virtual reality to induce illusory spinal movement: the MoOVi Illusion
Harvie, Daniel S.; Smith, Ross T.; Hunter, Estin V.; Davis, Miles G.; Sterling, Michele; Moseley, G. Lorimer
2017-01-01
Background: Illusions that alter perception of the body provide novel opportunities to target brain-based contributions to problems such as persistent pain. One example of this, mirror therapy, uses vision to augment perceived movement of a painful limb to treat pain. Since mirrors can't be used to induce augmented neck or other spinal movement, we aimed to test whether such an illusion could be achieved using virtual reality, in advance of testing its potential therapeutic benefit. We hypothesised that perceived head rotation would depend on visually suggested movement. Method: In a within-subjects repeated measures experiment, 24 healthy volunteers performed neck movements to 50° of rotation, while a virtual reality system delivered corresponding visual feedback that was offset by a factor of 50%–200%, the Motor Offset Visual Illusion (MoOVi), thus simulating more or less movement than that actually occurring. At 50° of real-world head rotation, participants pointed in the direction that they perceived they were facing. The discrepancy between actual and perceived direction was measured and compared between conditions. The impact of including multisensory (auditory and visual) feedback, the presence of a virtual body reference, and the use of 360° immersive virtual reality with and without three-dimensional properties, was also investigated. Results: Perception of head movement was dependent on visual-kinaesthetic feedback (p = 0.001, partial eta squared = 0.17). That is, altered visual feedback caused a kinaesthetic drift in the direction of the visually suggested movement. The magnitude of the drift was not moderated by secondary variables such as the addition of illusory auditory feedback, the presence of a virtual body reference, or three-dimensionality of the scene. Discussion: Virtual reality can be used to augment perceived movement and body position, such that one can perform a small movement, yet perceive a large one. The MoOVi technique tested here has clear potential for assessment and therapy of people with spinal pain. PMID:28243537
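The manipulation at the heart of the illusion is a gain applied to tracked head rotation before rendering; a gain above 1 visually suggests more movement than actually occurred. The sketch below shows that relationship; the gain range mirrors the 50%–200% offsets described above, and everything else is illustrative.

# Hedged sketch: the visual-gain manipulation behind a MoOVi-style
# illusion. Displayed rotation = tracked rotation * gain.
def displayed_rotation_deg(tracked_deg: float, gain: float) -> float:
    assert 0.5 <= gain <= 2.0, "study describes offsets of 50%-200%"
    return tracked_deg * gain

for gain in (0.5, 1.0, 1.5, 2.0):
    print(f"gain {gain:.1f}: 50 deg of real rotation is rendered as "
          f"{displayed_rotation_deg(50.0, gain):.0f} deg")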
Absence of modulatory action on haptic height perception with musical pitch
Geronazzo, Michele; Avanzini, Federico; Grassi, Massimo
2015-01-01
Although acoustic frequency is not a spatial property of physical objects, in common language pitch, i.e., the psychological correlate of frequency, is often labeled spatially (i.e., "high in pitch" or "low in pitch"). Pitch-height is known to modulate (and interact with) the responses of participants when they are asked to judge spatial properties of non-auditory stimuli (e.g., visual) in a variety of behavioral tasks. In the current study we investigated whether the modulatory action of pitch-height extended to the haptic estimation of the height of a virtual step. We implemented a HW/SW setup which is able to render virtual 3D objects (stair-steps) haptically through a PHANTOM device, and to provide real-time continuous auditory feedback depending on the user's interaction with the object. The haptic exploration was associated with a sinusoidal tone whose pitch varied as a function of the interaction point's height within (i) a narrower and (ii) a wider pitch range, or (iii) a random pitch variation acting as a control audio condition. Explorations were also performed with no sound (haptic only). Participants were instructed to explore the virtual step freely and to communicate their height estimation by opening their thumb and index finger to mimic the step riser height, or verbally by reporting the height in centimeters of the step riser. We analyzed the role of musical expertise by dividing participants into non-musicians and musicians. Results showed no effect of musical pitch on highly realistic haptic feedback. Overall there was no difference between the two groups in the proposed multimodal conditions. Additionally, we observed a different haptic response distribution between musicians and non-musicians when estimations in the auditory conditions were matched with estimations in the no-sound condition. PMID:26441745
Sugi, Miho; Hagimoto, Yutaka; Nambu, Isao; Gonzalez, Alejandro; Takei, Yoshinori; Yano, Shohei; Hokari, Haruhide; Wada, Yasuhiro
2018-01-01
Recently, a brain-computer interface (BCI) using virtual sound sources has been proposed for estimating user intention via electroencephalogram (EEG) in an oddball task. However, its performance is still insufficient for practical use. In this study, we examined the impact of shortening the stimulus onset asynchrony (SOA) on this auditory BCI. While a very short SOA might improve performance, sound perception and the task itself become difficult, and event-related potentials (ERPs) may not be induced if the SOA is too short. Therefore, we carried out behavioral and EEG experiments to determine the optimal SOA. In the experiments, participants were instructed to direct attention to one of six virtual sounds (target direction). We used eight SOA conditions: 200, 300, 400, 500, 600, 700, 800, and 1,100 ms. In the behavioral experiment, we recorded participants' behavioral responses to the target direction and evaluated recognition performance for the stimuli. In all SOA conditions, recognition accuracy was over 85%, indicating that participants could recognize the target stimuli correctly. Next, using a silent counting task in the EEG experiment, we found significant differences between target and non-target sound directions in all but the 200-ms SOA condition. When we calculated identification accuracy using Fisher discriminant analysis (FDA), the SOA could be shortened to 400 ms without decreasing identification accuracy, yielding improvements in performance as evaluated by BCI utility. On average, higher BCI utilities were obtained in the 400- and 500-ms SOA conditions. Thus, auditory BCI performance can be optimized for both behavioral and neurophysiological responses by shortening the SOA. PMID:29535602
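The identification step can be illustrated with a linear (Fisher) discriminant over epoch features. A self-contained sketch with synthetic data standing in for real EEG (the feature dimensions, class effect, and cross-validation scheme are arbitrary assumptions, not the study's pipeline):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_epochs, n_features = 300, 64          # e.g., channels x samples, flattened
X = rng.normal(size=(n_epochs, n_features))
y = rng.integers(0, 2, size=n_epochs)   # 1 = epoch from the attended direction
X[y == 1, :8] += 0.5                    # synthetic P300-like class difference

clf = LinearDiscriminantAnalysis()
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated identification accuracy: {acc:.2f}")
```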
NASA Astrophysics Data System (ADS)
Shinn-Cunningham, Barbara
2003-04-01
One of the key functions of hearing is to help us monitor and orient to events in our environment (including those outside the line of sight). The ability to compute the spatial location of a sound source is also important for detecting, identifying, and understanding the content of a sound source, especially in the presence of competing sources from other positions. Determining the spatial location of a sound source poses difficult computational challenges; however, we perform this complex task with proficiency, even in the presence of noise and reverberation. This tutorial will review the acoustic, psychoacoustic, and physiological processes underlying spatial auditory perception. First, the tutorial will examine how the many different features of the acoustic signals reaching a listener's ears provide cues for source direction and distance, both in anechoic and reverberant space. Then we will discuss psychophysical studies of three-dimensional sound localization in different environments and the basic neural mechanisms by which spatial auditory cues are extracted. Finally, "virtual reality" approaches for simulating sounds at different directions and distances under headphones will be reviewed. The tutorial will be structured to appeal to a diverse audience with interests in all fields of acoustics and will incorporate concepts from many areas, such as psychological and physiological acoustics, architectural acoustics, and signal processing.
Filling-in visual motion with sounds.
Väljamäe, A; Soto-Faraco, S
2008-10-01
Information about the motion of objects can be extracted by multiple sensory modalities, and, as a consequence, object motion perception typically involves the integration of multi-sensory information. Often, in naturalistic settings, the flow of such information can be rather discontinuous (e.g. a cat racing through the furniture in a cluttered room is partly seen and partly heard). This study addressed audio-visual interactions in the perception of time-sampled object motion by measuring adaptation after-effects. We found significant auditory after-effects following adaptation to unisensory auditory and visual motion in depth, sampled at 12.5 Hz. The visually induced (cross-modal) auditory motion after-effect was eliminated if visual adaptors flashed at half of the rate (6.25 Hz). Remarkably, the addition of the high-rate acoustic flutter (12.5 Hz) to this ineffective, sparsely time-sampled, visual adaptor restored the auditory after-effect to a level comparable to what was seen with high-rate bimodal adaptors (flashes and beeps). Our results suggest that this auditory-induced reinstatement of the motion after-effect from the poor visual signals resulted from the occurrence of sound-induced illusory flashes. This effect was found to be dependent both on the directional congruency between modalities and on the rate of auditory flutter. The auditory filling-in of time-sampled visual motion supports the feasibility of using reduced frame rate visual content in multisensory broadcasting and virtual reality applications.
AULA-Advanced Virtual Reality Tool for the Assessment of Attention: Normative Study in Spain.
Iriarte, Yahaira; Diaz-Orueta, Unai; Cueto, Eduardo; Irazustabarrena, Paula; Banterla, Flavio; Climent, Gema
2016-06-01
The present study describes the collection of normative data for the AULA test, a virtual reality tool designed to evaluate attention problems, especially in children and adolescents. The normative sample comprised 1,272 participants (48.2% female) with an age range from 6 to 16 years (M = 10.25, SD = 2.83). The AULA test presents both visual and auditory stimuli while randomized distractors of an ecological nature appear progressively. Variables provided by AULA were clustered in different categories for subsequent analysis. Differences by age and gender were analyzed, resulting in 14 normative groups, 7 per sex. Differences between visual and auditory attention were also obtained. The normative data obtained are relevant for using AULA to evaluate attention in Spanish children and adolescents in a more ecological way. Further studies will be needed to determine the sensitivity and specificity of AULA for measuring attention in different clinical populations. (J. of Att. Dis. 2016; 20(6) 542-568). © The Author(s) 2012.
Emotion modulates activity in the 'what' but not 'where' auditory processing pathway.
Kryklywy, James H; Macpherson, Ewan A; Greening, Steven G; Mitchell, Derek G V
2013-11-15
Auditory cortices can be separated into dissociable processing pathways similar to those observed in the visual domain. Emotional stimuli elicit enhanced neural activation within sensory cortices when compared to neutral stimuli. This effect is particularly notable in the ventral visual stream. Little is known, however, about how emotion interacts with dorsal processing streams, and essentially nothing is known about the impact of emotion on auditory stimulus localization. In the current study, we used fMRI in concert with individualized auditory virtual environments to investigate the effect of emotion during an auditory stimulus localization task. Surprisingly, participants were significantly slower to localize emotional relative to neutral sounds. A separate localizer scan was performed to isolate neural regions sensitive to stimulus location independent of emotion. When applied to the main experimental task, a significant main effect of location, but not emotion, was found in this ROI. A whole-brain analysis of the data revealed that posterior-medial regions of auditory cortex were modulated by sound location; however, additional anterior-lateral areas of auditory cortex demonstrated enhanced neural activity to emotional compared to neutral stimuli. The latter region resembled areas described in dual pathway models of auditory processing as the 'what' processing stream, prompting a follow-up task to generate an identity-sensitive ROI (the 'what' pathway) independent of location and emotion. Within this region, significant main effects of location and emotion were identified, as well as a significant interaction. These results suggest that emotion modulates activity in the 'what,' but not the 'where,' auditory processing pathway. Copyright © 2013 Elsevier Inc. All rights reserved.
Auditory Space Perception in Left- and Right-Handers
ERIC Educational Resources Information Center
Ocklenburg, Sebastian; Hirnstein, Marco; Hausmann, Markus; Lewald, Jorg
2010-01-01
Several studies have shown that handedness has an impact on visual spatial abilities. Here we investigated the effect of laterality on auditory space perception. Participants (33 right-handers, 20 left-handers) completed two tasks of sound localization. In a dark, anechoic, and sound-proof room, sound stimuli (broadband noise) were presented via…
Spatial Hearing with Incongruent Visual or Auditory Room Cues
Gil-Carvajal, Juan C.; Cubick, Jens; Santurette, Sébastien; Dau, Torsten
2016-01-01
In day-to-day life, humans usually perceive the location of sound sources as outside their heads. This externalized auditory spatial perception can be reproduced through headphones by recreating the sound pressure generated by the source at the listener’s eardrums. This requires the acoustical features of the recording environment and listener’s anatomy to be recorded at the listener’s ear canals. Although the resulting auditory images can be indistinguishable from real-world sources, their externalization may be less robust when the playback and recording environments differ. Here we tested whether a mismatch between playback and recording room reduces perceived distance, azimuthal direction, and compactness of the auditory image, and whether this is mostly due to incongruent auditory cues or to expectations generated from the visual impression of the room. Perceived distance ratings decreased significantly when collected in a more reverberant environment than the recording room, whereas azimuthal direction and compactness remained room independent. Moreover, modifying visual room-related cues had no effect on these three attributes, while incongruent auditory room-related cues between the recording and playback room did affect distance perception. Consequently, the external perception of virtual sounds depends on the degree of congruency between the acoustical features of the environment and the stimuli. PMID:27853290
Misperception of exocentric directions in auditory space
Arthur, Joeanna C.; Philbeck, John W.; Sargent, Jesse; Dopkins, Stephen
2008-01-01
Previous studies have demonstrated large errors (over 30°) in visually perceived exocentric directions (the direction between two objects that are both displaced from the observer’s location; e.g., Philbeck et al., in press). Here, we investigated whether a similar pattern occurs in auditory space. Blindfolded participants either attempted to aim a pointer at auditory targets (an exocentric task) or gave a verbal estimate of the egocentric target azimuth. Targets were located at 20° to 160° azimuth in the right hemispace. For comparison, we also collected pointing and verbal judgments for visual targets. We found that exocentric pointing responses exhibited sizeable undershooting errors, for both auditory and visual targets, that tended to become more strongly negative as azimuth increased (up to −19° for visual targets at 160°). Verbal estimates of the auditory and visual target azimuths, however, showed a dramatically different pattern, with relatively small overestimations of azimuths in the rear hemispace. At least some of the differences between verbal and pointing responses appear to be due to the frames of reference underlying the responses; when participants used the pointer to reproduce the egocentric target azimuth rather than the exocentric target direction relative to the pointer, the pattern of pointing errors more closely resembled that seen in verbal reports. These results show that there are similar distortions in perceiving exocentric directions in visual and auditory space. PMID:18555205
Fast localized orthonormal virtual orbitals which depend smoothly on nuclear coordinates.
Subotnik, Joseph E; Dutoi, Anthony D; Head-Gordon, Martin
2005-09-15
We present here an algorithm for computing stable, well-defined, localized orthonormal virtual orbitals that depend smoothly on nuclear coordinates. The algorithm is very fast, limited only by the diagonalization of two matrices whose dimension equals the number of virtual orbitals. Furthermore, we require no more than quadratic (in the number of electrons) storage. The basic premise behind our algorithm is that any given atomic-orbital (AO) vector space can be decomposed into a minimal-basis space (which includes the occupied and valence virtual spaces) and a hard-virtual (HV) space (which includes everything else). The valence virtual space localizes easily with standard methods, while the hard-virtual space is constructed to be atom-centered and automatically local. The orbitals presented here may be computed almost as quickly as projecting the AO basis onto the virtual space, and they are almost as local (according to orbital variance), yet orthonormal rather than redundant and nonorthogonal. We expect this algorithm to find use in local-correlation methods.
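For orientation, the cheap operation the authors benchmark against (projecting the AO basis onto the virtual space) can be sketched with dense linear algebra. This is not the paper's algorithm, only that baseline projection followed by a Löwdin orthonormalization, with random stand-ins for the overlap matrix and occupied orbitals:

```python
import numpy as np

rng = np.random.default_rng(1)
n_ao, n_occ = 10, 3
A = rng.normal(size=(n_ao, n_ao))
S = A @ A.T + n_ao * np.eye(n_ao)        # SPD stand-in for the AO overlap

# Fake S-orthonormal orbitals: C.T @ S @ C = I
w, V = np.linalg.eigh(S)
C = (V @ np.diag(w**-0.5) @ V.T) @ np.linalg.qr(rng.normal(size=(n_ao, n_ao)))[0]
C_occ = C[:, :n_occ]

# Project the AO basis onto the virtual (occupied-orthogonal) space:
Q = np.eye(n_ao) - C_occ @ C_occ.T @ S   # projector in the AO metric
P = Q @ np.eye(n_ao)                     # redundant, nonorthogonal virtuals

# Lowdin orthonormalization, dropping the n_occ null directions:
M = P.T @ S @ P
w, V = np.linalg.eigh(M)
keep = w > 1e-8
C_virt = P @ V[:, keep] @ np.diag(w[keep]**-0.5)
assert np.allclose(C_virt.T @ S @ C_virt, np.eye(keep.sum()), atol=1e-8)
```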
Strategies for Analyzing Tone Languages
ERIC Educational Resources Information Center
Coupe, Alexander R.
2014-01-01
This paper outlines a method of auditory and acoustic analysis for determining the tonemes of a language starting from scratch, drawing on the author's experience of recording and analyzing tone languages of north-east India. The methodology is applied to a preliminary analysis of tone in the Thang dialect of Khiamniungan, a virtually undocumented…
Visual-Auditory Integration during Speech Imitation in Autism
ERIC Educational Resources Information Center
Williams, Justin H. G.; Massaro, Dominic W.; Peel, Natalie J.; Bosseler, Alexis; Suddendorf, Thomas
2004-01-01
Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional "mirror neuron" systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a "virtual" head (Baldi), delivered speech stimuli for…
The Influence of Tactile Cognitive Maps on Auditory Space Perception in Sighted Persons.
Tonelli, Alessia; Gori, Monica; Brayda, Luca
2016-01-01
We have recently shown that vision is important for improving spatial auditory cognition. In this study, we investigated whether touch is as effective as vision in creating a cognitive map of a soundscape. In particular, we tested whether the creation of a mental representation of a room, obtained through tactile exploration of a 3D model, can influence the perception of a complex auditory task in sighted people. We tested two groups of blindfolded sighted people, one experimental and one control group, in an auditory space bisection task. In the first group, the bisection task was performed three times: the participants explored the 3D tactile model of the room with their hands and were led along the perimeter of the room between the first and second executions of the space bisection, and they were then allowed to remove the blindfold for a few minutes and look at the room between the second and third executions. The control group instead performed the space bisection task twice in a row without any environmental exploration in between. Taking the first execution as a baseline, we found an improvement in precision after the tactile exploration of the 3D model. Interestingly, no additional gain was obtained when room observation followed the tactile exploration, suggesting that visual cues provided no further benefit once spatial tactile cues had been internalized. No improvement was found between the first and second executions of the space bisection in the control group, indicating that the improvement was not due to task learning. Our results show that tactile information modulates the precision of an ongoing auditory spatial task just as visual information does. This suggests that cognitive maps elicited by touch may participate in cross-modal calibration and supra-modal representations of space that increase implicit knowledge about sound propagation.
Virtual fixtures as tools to enhance operator performance in telepresence environments
NASA Astrophysics Data System (ADS)
Rosenberg, Louis B.
1993-12-01
This paper introduces the notion of virtual fixtures for use in telepresence systems and presents an empirical study demonstrating that such virtual fixtures can greatly enhance operator performance within remote environments. Just as tools and fixtures in the real world can enhance human performance by guiding manual operations, providing localizing references, and reducing the mental processing required to perform a task, virtual fixtures are computer-generated percepts overlaid on top of the reflection of a remote workspace that can provide similar benefits. Like a ruler guiding a pencil in a real manipulation task, a virtual fixture overlaid on a remote workspace can reduce the mental processing required to perform a task, limit the workload of certain sensory modalities, and, above all, allow precision and performance to exceed natural human abilities. Because such perceptual overlays are virtual constructions, they can be diverse in modality, abstract in form, and custom-tailored to individual task or user needs. This study investigates the potential of virtual fixtures by implementing simple combinations of haptic and auditory sensations as perceptual overlays during a standardized telemanipulation task.
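As a concrete illustration of the "ruler" analogy, a haptic virtual fixture can be rendered as a spring that pulls the operator's tool toward a guide line. This is a generic guidance-fixture sketch under assumed gains and geometry, not the study's apparatus:

```python
import numpy as np

def fixture_force(tool_pos, line_point, line_dir, stiffness=200.0):
    """Spring force (N) pulling tool_pos toward the nearest point on the
    guide line {line_point + t * line_dir}; acts like a ruler edge."""
    d = np.asarray(line_dir, dtype=float)
    d /= np.linalg.norm(d)
    v = np.asarray(tool_pos, dtype=float) - line_point
    nearest = line_point + np.dot(v, d) * d
    return stiffness * (nearest - np.asarray(tool_pos, dtype=float))

# A tool 1 cm off an x-axis fixture feels a 2 N restoring force:
print(fixture_force([0.1, 0.01, 0.0], np.zeros(3), [1.0, 0.0, 0.0]))
```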
Enhanced auditory spatial localization in blind echolocators.
Vercillo, Tiziana; Milne, Jennifer L; Gori, Monica; Goodale, Melvyn A
2015-01-01
Echolocation is the extraordinary ability to represent the external environment by using reflected sound waves from self-generated auditory pulses. Blind human expert echolocators show extremely precise spatial acuity and high accuracy in determining the shape and motion of objects by using echoes. In the current study, we investigated whether or not the use of echolocation would improve the representation of auditory space, which is severely compromised in congenitally blind individuals (Gori et al., 2014). The performance of three blind expert echolocators was compared to that of 6 blind non-echolocators and 11 sighted participants. Two tasks were performed: (1) a space bisection task in which participants judged whether the second of a sequence of three sounds was closer in space to the first or the third sound and (2) a minimum audible angle task in which participants reported which of two sounds presented successively was located more to the right. The blind non-echolocating group showed a severe impairment only in the space bisection task compared to the sighted group. Remarkably, the three blind expert echolocators performed both spatial tasks with similar or even better precision and accuracy than the sighted group. These results suggest that echolocation may improve the general sense of auditory space, most likely through a process of sensory calibration. Copyright © 2014 Elsevier Ltd. All rights reserved.
2017-01-01
While a topographic map of auditory space exists in the vertebrate midbrain, it is absent in the forebrain. Yet both brain regions are implicated in sound localization. The heterogeneous spatial tuning of adjacent sites in the forebrain, compared to the midbrain, reflects different underlying circuitries, which is expected to affect the correlation structure, i.e., signal correlations (similarity of tuning) and noise correlations (trial-by-trial variability). Recent studies have drawn attention to the impact of response correlations on the information readout from a neural population. We thus analyzed the correlation structure in midbrain and forebrain regions of the barn owl's auditory system. Tetrodes were used to record in the midbrain and in two forebrain regions, Field L and the downstream auditory arcopallium (AAr), in anesthetized owls. Nearby neurons in the midbrain showed high signal correlations and high response noise correlations (RNCs), consistent with shared inputs. As previously reported, Field L was arranged in random clusters of similarly tuned neurons. Interestingly, AAr neurons displayed homogeneous monotonic azimuth tuning, while the response variability of nearby neurons was significantly less correlated than in the midbrain. Using a decoding approach, we demonstrate that low RNC in AAr limits the potentially detrimental effect it can have on information, assuming a rate code proposed for mammalian sound localization. This study harnesses the power of correlation-structure analysis to investigate the coding of auditory space. Our findings demonstrate distinct correlation structures in the auditory midbrain and forebrain, which would be beneficial for a rate-code framework for sound localization in the nontopographic forebrain representation of auditory space. PMID:28674698
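The two correlation measures named here are easy to pin down in code: signal correlation compares trial-averaged tuning curves, noise correlation compares trial-by-trial residuals. A sketch on synthetic spike counts standing in for tetrode data (tuning shapes and the strength of the shared noise are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
azimuths = np.linspace(-90, 90, 13)          # stimulus directions (deg)
n_trials = 50

tuning_a = 10 + 8 * np.cos(np.radians(azimuths))
tuning_b = 12 + 6 * np.cos(np.radians(azimuths - 20))
shared = rng.normal(size=(n_trials, azimuths.size))   # common input noise
counts_a = rng.poisson(tuning_a, (n_trials, azimuths.size)) + 2 * shared
counts_b = rng.poisson(tuning_b, (n_trials, azimuths.size)) + 2 * shared

# Signal correlation: similarity of the mean tuning across stimuli.
sig_corr = np.corrcoef(counts_a.mean(0), counts_b.mean(0))[0, 1]

# Noise correlation: co-variability of residuals, pooled over stimuli.
res_a = (counts_a - counts_a.mean(0)).ravel()
res_b = (counts_b - counts_b.mean(0)).ravel()
noise_corr = np.corrcoef(res_a, res_b)[0, 1]
print(f"signal r = {sig_corr:.2f}, noise r = {noise_corr:.2f}")
```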
Can You Hear That Peak? Utilization of Auditory and Visual Feedback at Peak Limb Velocity.
Loria, Tristan; de Grosbois, John; Tremblay, Luc
2016-09-01
At rest, the central nervous system combines and integrates multisensory cues to yield an optimal percept. When engaging in action, the relative weighting of sensory modalities has been shown to be altered. Because the timing of peak velocity is the critical moment in some goal-directed movements (e.g., overarm throwing), the current study sought to test whether visual and auditory cues are optimally integrated at that specific kinematic marker when it is the critical part of the trajectory. Participants performed an upper-limb movement in which they were required to reach their peak limb velocity when the right index finger intersected a virtual target (i.e., a flinging movement). Brief auditory, visual, or audiovisual feedback (i.e., 20 ms in duration) was provided to participants at peak limb velocity. Performance was assessed primarily through the resultant position of peak limb velocity and the variability of that position. Relative to when no feedback was provided, auditory feedback significantly reduced the resultant endpoint variability of the finger position at peak limb velocity. However, no such reductions were found for the visual or audiovisual feedback conditions. Further, providing both auditory and visual cues concurrently also failed to yield the theoretically predicted improvements in endpoint variability. Overall, the central nervous system can make significant use of an auditory cue but may not optimally integrate visual and auditory cues at peak limb velocity, even when peak velocity is the critical part of the trajectory.
Theoretical Limitations on Functional Imaging Resolution in Auditory Cortex
Chen, Thomas L.; Watkins, Paul V.; Barbour, Dennis L.
2010-01-01
Functional imaging can reveal detailed organizational structure in cerebral cortical areas, but neuronal response features and local neural interconnectivity can influence the resulting images, possibly limiting the inferences that can be drawn about neural function. Discerning the fundamental principles of organizational structure in the auditory cortex of multiple species has been somewhat challenging historically both with functional imaging and with electrophysiology. A possible limitation affecting any methodology using pooled neuronal measures may be the relative distribution of response selectivity throughout the population of auditory cortex neurons. One neuronal response type inherited from the cochlea, for example, exhibits a receptive field that increases in size (i.e., decreases in selectivity) at higher stimulus intensities. Even though these neurons appear to represent a minority of auditory cortex neurons, they are likely to contribute disproportionately to the activity detected in functional images, especially if intense sounds are used for stimulation. To evaluate the potential influence of neuronal subpopulations upon functional images of primary auditory cortex, a model array representing cortical neurons was probed with virtual imaging experiments under various assumptions about the local circuit organization. As expected, different neuronal subpopulations were activated preferentially under different stimulus conditions. In fact, stimulus protocols that can preferentially excite selective neurons, resulting in a relatively sparse activation map, have the potential to improve the effective resolution of functional auditory cortical images. These experimental results also make predictions about auditory cortex organization that can be tested with refined functional imaging experiments. PMID:20079343
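The logic of the "virtual imaging experiment" can be conveyed with a toy model array: pooled activation becomes dominated by a sparse, low-selectivity subpopulation once stimulus level is high. All parameters below are illustrative assumptions, not the paper's model:

```python
import numpy as np

freqs = np.linspace(0.0, 1.0, 200)       # cortical place / best frequency
is_broad = np.zeros(200, dtype=bool)
is_broad[::10] = True                    # sparse low-selectivity neurons

def activation(stim_freq, level_db):
    """Gaussian receptive fields; the 'broad' minority widens with level."""
    width = np.where(is_broad, 0.02 + 0.004 * level_db, 0.03)
    return np.exp(-0.5 * ((freqs - stim_freq) / width) ** 2)

for level in (30, 90):
    act = activation(0.5, level)
    print(f"{level} dB SPL: fraction of array above threshold = "
          f"{(act > 0.5).mean():.2f}")
```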
Lean on Wii: physical rehabilitation with virtual reality Wii peripherals.
Anderson, Fraser; Annett, Michelle; Bischof, Walter F
2010-01-01
In recent years, a growing number of occupational therapists have integrated video game technologies, such as the Nintendo Wii, into rehabilitation programs. 'Wiihabilitation', or the use of the Wii in rehabilitation, has been successful in increasing patients' motivation and encouraging full body movement. The non-rehabilitative focus of Wii applications, however, presents a number of problems: games are too difficult for patients, they mainly target upper-body gross motor functions, and they lack support for task customization, grading, and quantitative measurements. To overcome these problems, we have designed a low-cost, virtual-reality based system. Our system, Virtual Wiihab, records performance and behavioral measurements, allows for activity customization, and uses auditory, visual, and haptic elements to provide extrinsic feedback and motivation to patients.
Influence of Visual Prism Adaptation on Auditory Space Representation.
Pochopien, Klaudia; Fahle, Manfred
2017-01-01
Prisms shifting the visual input sideways produce a mismatch between the visual and felt position of one's hand. Prism adaptation eliminates this mismatch, realigning hand proprioception with visual input. Whether this realignment concerns exclusively the visuo-(hand)motor system or generalizes to acoustic inputs is controversial. Here we show that there is indeed a slight influence of visual adaptation on the perceived direction of acoustic sources. However, this shift in perceived auditory direction can be fully explained by a subconscious head rotation during prism exposure and by changes in arm proprioception. Hence, prism adaptation generalizes to auditory space perception only indirectly.
Distractibility in Attention-Deficit/Hyperactivity Disorder (ADHD): the virtual reality classroom.
Adams, Rebecca; Finn, Paul; Moes, Elisabeth; Flannery, Kathleen; Rizzo, Albert Skip
2009-03-01
Nineteen boys aged 8 to 14 with a diagnosis of ADHD and 16 age-matched controls were compared in a virtual reality (VR) classroom version of a continuous performance task (CPT), with a second standard CPT presentation using the same projection display dome system. The Virtual Classroom included simulated "real-world" auditory and visual distracters. Parent ratings of attention, hyperactivity, internalizing problems, and adaptive skills on the Behavior Assessment System for Children (BASC) Monitor for ADHD confirmed that the ADHD children had more problems in these areas than controls. The difference between the ADHD group (who performed worse) and the control group approached significance (p = .05; adjusted p = .02) in the Virtual Classroom presentation, and the classification rate of the Virtual Classroom was better than when the standard CPT was used (87.5% versus 68.8%). Children with ADHD were more affected by distractions in the VR classroom than those without ADHD. Results are discussed in relation to distractibility in ADHD.
Localization of virtual sound at 4 Gz.
Sandor, Patrick M B; McAnally, Ken I; Pellieux, Lionel; Martin, Russell L
2005-02-01
Acceleration directed along the body's z-axis (Gz) leads to misperception of the elevation of visual objects (the "elevator illusion"), most probably as a result of errors in the transformation from eye-centered to head-centered coordinates. We have investigated whether the location of sound sources is misperceived under increased Gz. Visually guided localization responses were made, using a remotely controlled laser pointer, to virtual auditory targets under conditions of 1 and 4 Gz induced in a human centrifuge. As these responses would be expected to be affected by the elevator illusion, we also measured the effect of Gz on the accuracy with which subjects could point to the horizon. Horizon judgments were lower at 4 Gz than at 1 Gz, so sound localization responses at 4 Gz were corrected for this error in the transformation from eye-centered to head-centered coordinates. We found that the accuracy and bias of sound localization are not significantly affected by increased Gz. The auditory modality is likely to provide a reliable means of conveying spatial information to operators in dynamic environments in which Gz can vary.
Demonstrations of simple and complex auditory psychophysics for multiple platforms and environments
NASA Astrophysics Data System (ADS)
Horowitz, Seth S.; Simmons, Andrea M.; Blue, China
2005-09-01
Sound is arguably the most widely perceived and pervasive form of energy in our world, and among the least understood, in part because of the complexity of its underlying principles. A series of interactive displays has been developed that demonstrates how sound propagates energy through space and illustrates psychoacoustics: the way listeners map the physical aspects of sound and vibration onto their brains. These displays use auditory illusions and commonly experienced music and sound in novel presentations (driven by interactive computer algorithms) to show that what you hear is not always what you get. The demonstrations range from simple and complex auditory localization, which illustrates why humans are bad at echolocation yet excellent at determining the contents of auditory space, to auditory illusions that manipulate fine phase information and make listeners think their head is changing size. Another demonstration shows how auditory and visual localization coincide and how sound can be used to change visual tracking. These demonstrations are designed to run on a wide variety of student-accessible platforms, including web pages, stand-alone presentations, and even hardware-based systems for museum displays.
An Expanded Role for the Dorsal Auditory Pathway in Sensorimotor Control and Integration
Rauschecker, Josef P.
2010-01-01
The dual-pathway model of auditory cortical processing assumes that two largely segregated processing streams originating in the lateral belt subserve the two main functions of hearing: identification of auditory “objects”, including speech; and localization of sounds in space (Rauschecker and Tian, 2000). Evidence has accumulated, chiefly from work in humans and nonhuman primates, that an antero-ventral pathway supports the former function, whereas a postero-dorsal stream supports the latter, i.e. processing of space and motion-in-space. In addition, the postero-dorsal stream has also been postulated to subserve some functions of speech and language in humans. A recent review (Rauschecker and Scott, 2009) has proposed the possibility that both functions of the postero-dorsal pathway can be subsumed under the same structural forward model: an efference copy sent from prefrontal and premotor cortex provides the basis for “optimal state estimation” in the inferior parietal lobe and in sensory areas of the posterior auditory cortex. The current article corroborates this model by adding and discussing recent evidence. PMID:20850511
Aurally aided visual search performance in a dynamic environment
NASA Astrophysics Data System (ADS)
McIntire, John P.; Havig, Paul R.; Watamaniuk, Scott N. J.; Gilkey, Robert H.
2008-04-01
Previous research has repeatedly shown that people can find a visual target significantly faster if spatial (3D) auditory displays direct attention to the corresponding spatial location. However, previous research has only examined searches for static (non-moving) targets in static visual environments. Since motion has been shown to affect visual acuity, auditory acuity, and visual search performance, it is important to characterize aurally-aided search performance in environments that contain dynamic (moving) stimuli. In the present study, visual search performance in both static and dynamic environments is investigated with and without 3D auditory cues. Eight participants searched for a single visual target hidden among 15 distracting stimuli. In the baseline audio condition, no auditory cues were provided. In the 3D audio condition, a virtual 3D sound cue originated from the same spatial location as the target. In the static search condition, the target and distractors did not move. In the dynamic search condition, all stimuli moved on various trajectories at 10 deg/s. The results showed a clear benefit of 3D audio that was present in both static and dynamic environments, suggesting that spatial auditory displays continue to be an attractive option for a variety of aircraft, motor vehicle, and command & control applications.
NASA Technical Reports Server (NTRS)
Ishii, Masahiro; Sukanya, P.; Sato, Makoto
1994-01-01
This paper describes the construction of a virtual work space for tasks performed by two-handed manipulation. We intend to provide a virtual environment that encourages users to accomplish tasks as they usually would in a real environment. Our approach uses a three-dimensional spatial interface device that allows the user to handle virtual objects by hand and to feel some of their physical properties, such as contact and weight. We investigated suitable conditions for constructing our virtual work space by simulating basic assembly work (a face-and-fit task). We then selected the conditions under which the subjects felt most comfortable performing this task and set up our virtual work space accordingly. Finally, we verified the possibility of performing more complex tasks in this virtual work space by providing simple virtual models and letting the subjects create new models by assembling these components. The subjects could naturally perform assembly operations and accomplish the task. Our evaluation shows that this virtual work space has the potential to be used for tasks that require two-handed manipulation or cooperation between both hands in a natural manner.
Keshavarz, Behrang; Campos, Jennifer L; DeLucia, Patricia R; Oberfeld, Daniel
2017-04-01
Estimating time to contact (TTC) involves multiple sensory systems, including vision and audition. Previous findings suggested that the ratio of an object's instantaneous optical size/sound intensity to its instantaneous rate of change in optical size/sound intensity (τ) drives TTC judgments. Other evidence has shown that heuristic-based cues are used, including final optical size or final sound pressure level. Most previous studies have used decontextualized and unfamiliar stimuli (e.g., geometric shapes on a blank background). Here we measured TTC estimates using a traffic scene with an approaching vehicle to evaluate the weights of visual and auditory TTC cues under more realistic conditions. Younger (18-39 years) and older (65+ years) participants made TTC estimates in three sensory conditions: visual-only, auditory-only, and audio-visual. Stimuli were presented within an immersive virtual-reality environment, and cue weights were calculated for both visual cues (e.g., visual τ, final optical size) and auditory cues (e.g., auditory τ, final sound pressure level). The results demonstrated the use of visual τ as well as heuristic cues in the visual-only condition. TTC estimates in the auditory-only condition, however, were based primarily on an auditory heuristic cue (final sound pressure level) rather than on auditory τ. In the audio-visual condition, the visual cues dominated overall, with the highest weight assigned to visual τ by younger adults and a more equal weighting of visual τ and heuristic cues in older adults. Overall, better characterizing the effects of combined sensory inputs, stimulus characteristics, and age on the cues used to estimate TTC will provide important insights into how these factors may affect everyday behavior.
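The τ variables referred to can be written down directly: visual τ is the instantaneous optical angle divided by its rate of change, and an analogous intensity-based τ exists for sound. A sketch under the standard τ-hypothesis definitions (the study's exact stimulus computations may differ):

```python
import numpy as np

def visual_tau(theta, theta_dot):
    """TTC from optical angle theta (rad) and its rate theta_dot (rad/s)."""
    return theta / theta_dot

def auditory_tau(intensity, intensity_dot):
    """TTC from sound intensity: for a point source I ~ 1/d**2,
    so time to contact is 2 * I / (dI/dt)."""
    return 2.0 * intensity / intensity_dot

# A 2-m-wide vehicle 40 m away closing at 10 m/s (true TTC = 4 s):
d, w, v = 40.0, 2.0, 10.0
theta = 2 * np.arctan(w / (2 * d))
theta_dot = w * v / (d**2 + (w / 2) ** 2)
print(visual_tau(theta, theta_dot))      # ~4.0 s for small angles
```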
Optale, Gabriele; Urgesi, Cosimo; Busato, Valentina; Marin, Silvia; Piron, Lamberto; Priftis, Konstantinos; Gamberini, Luciano; Capodieci, Salvatore; Bordin, Adalberto
2010-05-01
Memory decline is a prevalent aspect of aging but may also be the first sign of cognitive pathology. Virtual reality (VR) using immersion and interaction may provide new approaches to the treatment of memory deficits in elderly individuals. The authors implemented a VR training intervention to try to lessen cognitive decline and improve memory functions. The authors randomly assigned 36 elderly residents of a rest care facility (median age 80 years) who were impaired on the Verbal Story Recall Test either to the experimental group (EG) or the control group (CG). The EG underwent 6 months of VR memory training (VRMT) that involved auditory stimulation and VR experiences in path finding. The initial training phase lasted 3 months (3 auditory and 3 VR sessions every 2 weeks), and there was a booster training phase during the following 3 months (1 auditory and 1 VR session per week). The CG underwent equivalent face-to-face training sessions using music therapy. Both groups participated in social and creative and assisted-mobility activities. Neuropsychological and functional evaluations were performed at baseline, after the initial training phase, and after the booster training phase. The EG showed significant improvements in memory tests, especially in long-term recall with an effect size of 0.7 and in several other aspects of cognition. In contrast, the CG showed progressive decline. The authors suggest that VRMT may improve memory function in elderly adults by enhancing focused attention.
Aerospace applications of virtual environment technology.
Loftin, R B
1996-11-01
The uses of virtual environment technology in the space program are examined with emphasis on training for the Hubble Space Telescope Repair and Maintenance Mission in 1993. Project ScienceSpace at the Virtual Environment Technology Lab is discussed.
Cortical mechanisms for the segregation and representation of acoustic textures.
Overath, Tobias; Kumar, Sukhbinder; Stewart, Lauren; von Kriegstein, Katharina; Cusack, Rhodri; Rees, Adrian; Griffiths, Timothy D
2010-02-10
Auditory object analysis requires two fundamental perceptual processes: the definition of the boundaries between objects, and the abstraction and maintenance of an object's characteristic features. Although it is intuitive to assume that the detection of the discontinuities at an object's boundaries precedes the subsequent precise representation of the object, the specific underlying cortical mechanisms for segregating and representing auditory objects within the auditory scene are unknown. We investigated the cortical bases of these two processes for one type of auditory object, an "acoustic texture," composed of multiple frequency-modulated ramps. In these stimuli, we independently manipulated the statistical rules governing (1) the frequency-time space within individual textures (comprising ramps with a given spectrotemporal coherence) and (2) the boundaries between textures (adjacent textures with different spectrotemporal coherences). Using functional magnetic resonance imaging, we show mechanisms defining boundaries between textures with different coherences in primary and association auditory cortices, whereas texture coherence is represented only in association cortex. Furthermore, participants' superior detection of boundaries across which texture coherence increased (as opposed to decreased) was reflected in a greater neural response in auditory association cortex at these boundaries. The results suggest a hierarchical mechanism for processing acoustic textures that is relevant to auditory object analysis: boundaries between objects are first detected as a change in statistical rules over frequency-time space, before a representation that corresponds to the characteristics of the perceived object is formed.
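To make the stimulus construction concrete, an acoustic texture of this kind can be sketched as a sum of frequency-modulated ramps whose sweep statistics are governed by a single coherence parameter. The ramp counts, frequency ranges, and durations below are illustrative assumptions, not the study's stimulus values:

```python
import numpy as np

def texture(coherence, n_ramps=40, dur=1.0, sr=16000, shared_rate=2.0):
    """Sum of FM ramps; 'coherence' is the probability that a ramp uses
    the shared sweep rate (octaves/s) rather than a random one."""
    rng = np.random.default_rng(3)
    t = np.arange(int(dur * sr)) / sr
    sig = np.zeros_like(t)
    for _ in range(n_ramps):
        f0 = rng.uniform(300.0, 3000.0)
        rate = shared_rate if rng.random() < coherence else rng.uniform(-4.0, 4.0)
        onset = rng.uniform(0.0, 0.7 * dur)
        live = (t >= onset) & (t < onset + 0.3 * dur)
        # Phase of an exponential sweep: integral of f0 * 2**(rate * tau)
        phase = 2 * np.pi * f0 * (2 ** (rate * (t - onset)) - 1) / (rate * np.log(2) + 1e-9)
        sig += live * np.sin(phase)
    return sig / n_ramps

# Two textures abutted to create a coherence boundary:
stim = np.concatenate([texture(0.2), texture(0.9)])
```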
Visual influences on auditory spatial learning
King, Andrew J.
2008-01-01
The visual and auditory systems frequently work together to facilitate the identification and localization of objects and events in the external world. Experience plays a critical role in establishing and maintaining congruent visual–auditory associations, so that the different sensory cues associated with targets that can be both seen and heard are synthesized appropriately. For stimulus location, visual information is normally more accurate and reliable and provides a reference for calibrating the perception of auditory space. During development, vision plays a key role in aligning neural representations of space in the brain, as revealed by the dramatic changes produced in auditory responses when visual inputs are altered, and is used throughout life to resolve short-term spatial conflicts between these modalities. However, accurate, and even supra-normal, auditory localization abilities can be achieved in the absence of vision, and the capacity of the mature brain to relearn to localize sound in the presence of substantially altered auditory spatial cues does not require visuomotor feedback. Thus, while vision is normally used to coordinate information across the senses, the neural circuits responsible for spatial hearing can be recalibrated in a vision-independent fashion. Nevertheless, early multisensory experience appears to be crucial for the emergence of an ability to match signals from different sensory modalities and therefore for the outcome of audiovisual-based rehabilitation of deaf patients in whom hearing has been restored by cochlear implantation. PMID:18986967
Virtual performer: single camera 3D measuring system for interaction in virtual space
NASA Astrophysics Data System (ADS)
Sakamoto, Kunio; Taneji, Shoto
2006-10-01
The authors developed interaction media systems in 3D virtual space. In these systems, a musician virtually plays an instrument such as a theremin in the virtual space, or a performer puts on a show using a virtual character such as a puppet. The interactive virtual media system performs image capture, measurement of the performer's position, detection and recognition of motions, and video image synthesis on a personal computer. In this paper, we propose some applications of interaction media systems: a virtual musical instrument and a superimposed CG character. Moreover, the paper describes the method for measuring the positions of the performer, his/her head, and both eyes using a single camera.
Ma, Wei Ji; Zhou, Xiang; Ross, Lars A; Foxe, John J; Parra, Lucas C
2009-01-01
Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
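The model sketched in this abstract (words as points in a feature space, recognition as posterior inference over noisy auditory and visual observations) can be written compactly. The word count, dimensionality, and noise levels below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
words = rng.normal(size=(50, 8))          # 50 words in an 8-D feature space

def posterior(x_aud, x_vis, sig_aud, sig_vis):
    """Posterior over words given independent Gaussian observations."""
    log_p = (-np.sum((words - x_aud) ** 2, 1) / (2 * sig_aud**2)
             - np.sum((words - x_vis) ** 2, 1) / (2 * sig_vis**2))
    p = np.exp(log_p - log_p.max())
    return p / p.sum()

true = words[7]
for sig_aud in (0.5, 2.0, 8.0):           # increasing auditory noise
    x_aud = true + rng.normal(0, sig_aud, 8)
    x_vis = true + rng.normal(0, 1.5, 8)  # fixed visual (speechreading) noise
    p = posterior(x_aud, x_vis, sig_aud, 1.5)
    print(f"sigma_aud = {sig_aud}: P(correct word) = {p[7]:.2f}")
```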
Development of the auditory system
Litovsky, Ruth
2015-01-01
Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur naturally, and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics, as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear prolonged maturation of auditory development well into the teenage years. Maturation involves auditory pathways. However, non-auditory changes (attention, memory, cognition) play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity. PMID:25726262
Integration of auditory and kinesthetic information in motion: alterations in Parkinson's disease.
Sabaté, Magdalena; Llanos, Catalina; Rodríguez, Manuel
2008-07-01
The main aim of this work was to study the interaction between auditory and kinesthetic stimuli and its influence on motion control. The study was performed on healthy subjects and patients with Parkinson's disease (PD). Thirty-five right-handed volunteers (young participants, age-matched healthy participants, and PD patients) were studied with three different motor tasks (slow cyclic movements, fast cyclic movements, and slow continuous movements) under the action of kinesthetic stimuli and sounds at different beat rates. The action of kinesthesia was evaluated by comparing real movements with virtual movements (movements imagined but not executed). The fast cyclic task was accelerated by kinesthetic but not by auditory stimuli. The slow cyclic task changed with the beat rate of sounds but not with kinesthetic stimuli. The slow continuous task showed an integrated response to both sensory modalities. These data show that the influence of multisensory integration on motion changes with the motor task and that some motor patterns are modulated by the simultaneous action of auditory and kinesthetic information, a cross-modal integration that was altered in PD patients. PsycINFO Database Record (c) 2008 APA, all rights reserved.
NASA Technical Reports Server (NTRS)
Wenzel, Elizabeth M.
1991-01-01
A 3D auditory display can potentially enhance information transfer by combining directional and iconic information in a quite naturalistic representation of dynamic objects in the interface. Another aspect of auditory spatial cues is that, in conjunction with other modalities, they can act as potentiators of information in the display. For example, visual and auditory cues together can reinforce the information content of the display and provide a greater sense of presence or realism in a manner not readily achievable by either modality alone. This phenomenon will be particularly useful in telepresence applications, such as advanced teleconferencing environments, shared electronic workspaces, and monitoring of telerobotic activities in remote or hazardous situations. Thus, the combination of direct spatial cues with good principles of iconic design could provide an extremely powerful and information-rich display which is also quite easy to use. An alternative approach, recently developed at ARC, generates externalized, 3D sound cues over headphones in real time using digital signal processing. Here, the synthesis technique involves the digital generation of stimuli using Head-Related Transfer Functions (HRTFs) measured in the two ear canals of individual subjects. Other similar approaches include an analog system developed by Loomis et al. (1990) and digital systems that make use of transforms derived from normative manikins and simulations of room acoustics. Such an interface also requires careful psychophysical evaluation of listeners' ability to localize the virtual or synthetic sound sources accurately. From an applied standpoint, measurement of each potential listener's HRTFs may not be possible in practice, so performance with nonindividualized HRTFs matters. For experienced listeners, localization performance with such stimuli was only slightly degraded compared to a subject's inherent ability. Moreover, even inexperienced listeners may be able to adapt to a particular set of HRTFs as long as they provide adequate cues for localization. In general, these data suggest that most listeners can obtain useful directional information from an auditory display without requiring individually tailored HRTFs.
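The core synthesis step described (rendering a source at a given direction by filtering it with the two ears' head-related impulse responses) is a pair of convolutions. A minimal sketch with placeholder HRIRs; a real display would use measured, ideally individual, filters:

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Binaural signal for headphones: convolve the source with the
    left- and right-ear head-related impulse responses."""
    return np.stack([np.convolve(mono, hrir_left),
                     np.convolve(mono, hrir_right)], axis=1)

rng = np.random.default_rng(5)
source = rng.normal(size=4800)                       # 100 ms at 48 kHz
hrir_l = rng.normal(size=256) * np.exp(-np.arange(256) / 40.0)
hrir_r = 0.7 * np.roll(hrir_l, 20)                   # crude ITD/ILD stand-in
binaural = spatialize(source, hrir_l, hrir_r)        # shape (5055, 2)
```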
Marmel, Frederic; Marrufo-Pérez, Miriam I; Heeren, Jan; Ewert, Stephan; Lopez-Poveda, Enrique A
2018-06-14
The detection of high-frequency spectral notches has been shown to be worse at 70-80 dB sound pressure level (SPL) than at higher levels up to 100 dB SPL. The performance improvement at levels higher than 70-80 dB SPL has been related to an 'ideal observer' comparison of population auditory nerve spike trains to stimuli with and without high-frequency spectral notches. Insofar as vertical localization partly relies on information provided by pinna-based high-frequency spectral notches, we hypothesized that localization would be worse at 70-80 dB SPL than at higher levels. Results from a first experiment using a virtual localization set-up and non-individualized head-related transfer functions (HRTFs) were consistent with this hypothesis, but a second experiment using a free-field set-up showed that vertical localization deteriorates monotonically with increasing level up to 100 dB SPL. These results suggest that listeners use different cues when localizing sound sources in virtual and free-field conditions. In addition, they confirm that the worsening in vertical localization with increasing level continues beyond 70-80 dB SPL, the highest levels tested by previous studies. Further, they suggest that vertical localization, unlike high-frequency spectral notch detection, does not rely on an 'ideal observer' analysis of auditory nerve spike trains. Copyright © 2018 Elsevier B.V. All rights reserved.
Song, Min-Ho; Choi, Jung-Woo; Kim, Yang-Hann
2012-02-01
A focused source can provide an auditory illusion of a virtual source placed between the loudspeaker array and the listener. When a focused source is generated by a time-reversed acoustic focusing solution, its use as a virtual source is limited by artifacts caused by convergent waves traveling toward the focusing point. This paper proposes an array activation method that reduces these artifacts for a selected listening point inside an array of arbitrary shape. Results show that the energy of the convergent waves can be reduced by up to 60 dB over a large region including the selected listening point. © 2012 Acoustical Society of America
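For context, the time-reversal focusing solution that the proposed method modifies simply delays each loudspeaker so that all wavefronts coincide at the focus point. A sketch of those driving delays for a circular array (geometry and weighting are illustrative assumptions; the paper's selective array-activation scheme is not reproduced here):

```python
import numpy as np

c = 343.0                                     # speed of sound (m/s)
angles = np.linspace(0.0, 2 * np.pi, 32, endpoint=False)
speakers = 2.0 * np.stack([np.cos(angles), np.sin(angles)], axis=1)  # r = 2 m
focus = np.array([0.5, 0.0])                  # virtual source inside the array

dist = np.linalg.norm(speakers - focus, axis=1)
delays = (dist.max() - dist) / c              # farthest speaker fires first
gains = 1.0 / dist                            # simple spreading compensation
print(np.round(delays * 1e3, 2))              # per-speaker delays (ms)
```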
Cognitive/emotional models for human behavior representation in 3D avatar simulations
NASA Astrophysics Data System (ADS)
Peterson, James K.
2004-08-01
Simplified models of human cognition and emotional response are presented which are based on models of auditory/visual polymodal fusion. At the core of these models is a computational model of Area 37 of the temporal cortex which is based on new isocortex models presented recently by Grossberg. These models are trained using carefully chosen auditory (musical sequences), visual (paintings) and higher level abstract (meta level) data obtained from studies of how optimization strategies are chosen in response to outside managerial inputs. The software modules developed are then used as inputs to character generation codes in standard 3D virtual world simulations. The auditory and visual training data also enable the development of simple music and painting composition generators which significantly enhance one's ability to validate the cognitive model. The cognitive models are handled as interacting software agents implemented as CORBA objects to allow the use of multiple language coding choices (C++, Java, Python, etc.) and efficient use of legacy code.
Babies in traffic: infant vocalizations and listener sex modulate auditory motion perception.
Neuhoff, John G; Hamilton, Grace R; Gittleson, Amanda L; Mejia, Adolfo
2014-04-01
Infant vocalizations and "looming sounds" are classes of environmental stimuli that are critically important to survival but can have dramatically different emotional valences. Here, we simultaneously presented listeners with a stationary infant vocalization and a 3D virtual looming tone for which listeners made auditory time-to-arrival judgments. Negatively valenced infant cries produced more cautious (anticipatory) estimates of auditory arrival time of the tone over a no-vocalization control. Positively valenced laughs had the opposite effect, and across all conditions, men showed smaller anticipatory biases than women. In Experiment 2, vocalization-matched vocoded noise stimuli did not influence concurrent auditory time-to-arrival estimates compared with a control condition. In Experiment 3, listeners estimated the egocentric distance of a looming tone that stopped before arriving. For distant stopping points, women estimated the stopping point as closer when the tone was presented with an infant cry than when it was presented with a laugh. For near stopping points, women showed no differential effect of vocalization type. Men did not show differential effects of vocalization type at either distance. Our results support the idea that both the sex of the listener and the emotional valence of infant vocalizations can influence auditory motion perception and can modulate motor responses to other behaviorally relevant environmental sounds. We also find support for previous work that shows sex differences in emotion processing are diminished under conditions of higher stress.
Semi-Immersive Virtual Turbine Engine Simulation System
NASA Astrophysics Data System (ADS)
Abidi, Mustufa H.; Al-Ahmari, Abdulrahman M.; Ahmad, Ali; Darmoul, Saber; Ameen, Wadea
2018-05-01
The design and verification of assembly operations are essential for planning production operations. Recently, virtual prototyping has witnessed tremendous progress and has reached a stage where current environments enable rich, multi-modal interaction between designers and models through stereoscopic visuals, surround sound, and haptic feedback. This paper discusses the benefits of building and using Virtual Reality (VR) models in assembly process verification and presents the virtual assembly (VA) of an aircraft turbine engine. The assembly parts and sequences are explained using a virtual reality design system that provides stereoscopic visuals, surround sound, and ample, intuitive interaction with the developed models. A special software architecture is suggested to describe the assembly parts and assembly sequence in VR. A collision detection mechanism is employed that provides visual feedback to check for interference between components. The system is tested on virtual prototyping and assembly sequencing of a turbine engine. We show that the developed system is comprehensive in terms of VR feedback mechanisms, including visual, auditory, tactile, and force feedback, and that it is effective and efficient for validating the design of the assembly, part design, and operations planning.
World Reaction to Virtual Space
NASA Technical Reports Server (NTRS)
1999-01-01
DRaW Computing developed virtual reality software for the International Space Station. The software, named Open Worlds, can be made to support Java scripting and virtual reality hardware devices, and it permits the use of VRML script nodes to add virtual reality capabilities to the user's applications.
Physiological and behavioral effects of tilt-induced body fluid shifts
NASA Technical Reports Server (NTRS)
Parker, D. E.; Tjernstrom, O.; Ivarsson, A.; Gulledge, W. L.; Poston, R. L.
1983-01-01
This paper addresses the 'fluid shift theory' of space motion sickness. The primary purpose of the research was the development of procedures to assess individual differences in response to rostral body fluid shifts on earth. Experiment I examined inner ear fluid pressure changes during head-down tilt in intact human beings. Tilt produced reliable changes. Differences among subjects and between ears within the same subject were observed. Experiment II examined auditory threshold changes during tilt. Tilt elicited increased auditory thresholds, suggesting that sensory depression may result from increased inner ear fluid pressure. Additional observations on rotation magnitude estimation during head-down tilt, which indicate that rostral fluid shifts may depress semicircular canal activity, are briefly described. The results of this research suggest that the inner ear pressure and auditory threshold shift procedures could be used to assess individual differences among astronauts prior to space flight. Results from the terrestrial observations could be related to reported incidence/severity of motion sickness in space and used to evaluate the fluid shift theory of space motion sickness.
STS-134 crew in Virtual Reality Lab during their MSS/EVAA SUPT2 Team training
2010-08-27
JSC2010-E-121049 (27 Aug. 2010) --- NASA astronaut Andrew Feustel (foreground), STS-134 mission specialist, uses the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of his duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare crew members for dealing with space station elements. Photo credit: NASA or National Aeronautics and Space Administration
STS-133 crew during MSS/EVAA TEAM training in Virtual Reality Lab
2010-10-01
JSC2010-E-170878 (1 Oct. 2010) --- NASA astronaut Michael Barratt, STS-133 mission specialist, uses the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of his duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare crew members for dealing with space station elements. Photo credit: NASA or National Aeronautics and Space Administration
STS-134 crew in Virtual Reality Lab during their MSS/EVAA SUPT2 Team training
2010-08-27
JSC2010-E-121056 (27 Aug. 2010) --- NASA astronaut Gregory H. Johnson, STS-134 pilot, uses the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of his duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare crew members for dealing with space station elements. Photo credit: NASA or National Aeronautics and Space Administration
STS-133 crew during MSS/EVAA TEAM training in Virtual Reality Lab
2010-10-01
JSC2010-E-170888 (1 Oct. 2010) --- NASA astronaut Nicole Stott, STS-133 mission specialist, uses the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of her duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare crew members for dealing with space station elements. Photo credit: NASA or National Aeronautics and Space Administration
STS-133 crew during MSS/EVAA TEAM training in Virtual Reality Lab
2010-10-01
JSC2010-E-170882 (1 Oct. 2010) --- NASA astronaut Nicole Stott, STS-133 mission specialist, uses the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of her duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare crew members for dealing with space station elements. Photo credit: NASA or National Aeronautics and Space Administration
Zenner, André; Krüger, Antonio
2017-04-01
We define the concept of Dynamic Passive Haptic Feedback (DPHF) for virtual reality by introducing the weight-shifting physical DPHF proxy object Shifty. This concept combines actuators known from active haptics with physical proxies known from passive haptics to construct proxies that automatically adapt their passive haptic feedback. We describe the concept behind our ungrounded weight-shifting DPHF proxy Shifty and the implementation of our prototype. In two experiments, we then investigate how Shifty can enhance the user's perception of virtual objects by automatically changing its internal weight distribution. In the first experiment, we show that Shifty can enhance the perception of virtual objects changing in shape, especially in length and thickness; here, Shifty significantly increased the user's fun and perceived realism compared to an equivalent passive haptic proxy. In the second experiment, Shifty is used to pick up virtual objects of different virtual weights. The results show that Shifty enhances the perception of weight, and thus the perceived realism, by adapting its kinesthetic feedback to the picked-up virtual object. In the same experiment, we additionally show that specific combinations of haptic, visual, and auditory feedback during the pick-up interaction help to compensate for the visual-haptic mismatch perceived during the shifting process.
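Shifty's mechanical details are not given in this summary; as a rough physical intuition for why shifting an internal mass changes kinesthetic feedback, the static torque a held rod exerts about the grip grows linearly with the mass's distance from the hand, while its moment of inertia grows quadratically. A small sketch with illustrative values:

```python
def grip_loads(mass=0.2, distance=0.3, g=9.81):
    """Static torque (N*m) and moment of inertia (kg*m^2) about the grip
    for a point mass at the given distance (m) along a held rod."""
    torque = mass * g * distance    # grows linearly with distance
    inertia = mass * distance ** 2  # grows quadratically with distance
    return torque, inertia

for d in (0.05, 0.15, 0.30):  # internal weight shifted outward
    torque, inertia = grip_loads(distance=d)
    print(f"d={d:.2f} m  torque={torque:.3f} N*m  inertia={inertia:.4f} kg*m^2")
```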
A training system of orientation and mobility for blind people using acoustic virtual reality.
Seki, Yoshikazu; Sato, Tetsuji
2011-02-01
A new auditory orientation training system was developed for blind people using acoustic virtual reality (VR) based on a head-related transfer function (HRTF) simulation. The present training system can reproduce a virtual training environment for orientation and mobility (O&M) instruction, and the trainee can walk through the virtual training environment safely while listening to sounds such as vehicles, stores, and ambient noise rendered three-dimensionally through headphones. The system can reproduce not only sound sources but also sound reflection and insulation, so that the trainee can learn both sound-localization and obstacle-perception skills. The virtual training environment is described in extensible markup language (XML), and the O&M instructor can edit it easily according to the training curriculum. Evaluation experiments were conducted to test the efficiency of some features of the system. Thirty subjects who had not acquired O&M skills attended the experiments. The subjects were separated into three groups: a no-training group, a virtual-training group using the present system, and a real-training group trained in real environments. The results suggested that virtual training can reduce "veering" more than real training and can reduce stress as much as real training does. The subjective technical and anxiety scores also improved.
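At the core of such a system is HRTF-based rendering: a mono source is convolved with a left/right head-related impulse response (HRIR) pair for the desired direction. The sketch below uses placeholder HRIRs (a pure delay-and-attenuation pair); a real trainer would load measured HRIRs and add the reflection and insulation filtering the abstract describes.

```python
import numpy as np
from scipy.signal import fftconvolve

def spatialize(mono, hrir_left, hrir_right):
    """Binaural rendering: convolve a mono signal with an HRIR pair."""
    return np.stack([fftconvolve(mono, hrir_left),
                     fftconvolve(mono, hrir_right)])

fs = 44100
mono = np.random.randn(fs)                # 1 s of noise, e.g. traffic ambience
hrir_l = np.zeros(128); hrir_l[0] = 1.0   # placeholder: direct, full level
hrir_r = np.zeros(128); hrir_r[30] = 0.5  # placeholder: delayed, attenuated
stereo = spatialize(mono, hrir_l, hrir_r)  # source heard toward the left
```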
The sonar aperture and its neural representation in bats.
Heinrich, Melina; Warmbold, Alexander; Hoffmann, Susanne; Firzlaff, Uwe; Wiegrebe, Lutz
2011-10-26
As opposed to visual imaging, biosonar imaging of spatial object properties represents a challenge for the auditory system because its sensory epithelium is not arranged along space axes. For echolocating bats, object width is encoded not only by the amplitude of its echo (echo intensity) but also by the naturally covarying spread of angles of incidence from which the echoes impinge on the bat's ears (sonar aperture). It is unclear whether bats use the echo intensity and/or the sonar aperture to estimate an object's width. We addressed this question in a combined psychophysical and electrophysiological approach. In three virtual-object playback experiments, bats of the species Phyllostomus discolor had to discriminate simple reflections of their own echolocation calls differing in echo intensity, sonar aperture, or both. Discrimination performance for objects with physically correct covariation of sonar aperture and echo intensity ("object width") did not differ from discrimination performance when only the sonar aperture was varied. Thus, the bats were able to detect changes in object width in the absence of intensity cues. The psychophysical results are reflected in the responses of a population of units in the auditory midbrain and cortex that responded most strongly to echoes from objects with a specific sonar aperture, regardless of variations in echo intensity. Neurometric functions obtained from cortical units encoding the sonar aperture are sufficient to explain the behavioral performance of the bats. These data show that the sonar aperture is a behaviorally relevant and reliably encoded cue for object size in bat sonar.
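The sonar aperture can be pictured as audition's analogue of visual angle: the spread of echo directions an object of a given width subtends at the bat's ears. A minimal geometric sketch under a free-field idealization (widths and range are illustrative, not the study's stimuli):

```python
import math

def sonar_aperture_deg(width, distance):
    """Angle (degrees) subtended by an object of the given width (m)
    at the given distance (m); in natural echoes this covaries with
    echo intensity, the cue the bats turned out not to need."""
    return 2.0 * math.degrees(math.atan(width / (2.0 * distance)))

for width in (0.02, 0.04, 0.08):  # doubling object width
    print(width, round(sonar_aperture_deg(width, distance=1.0), 2))
```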
Discrimination of sound source velocity in human listeners
NASA Astrophysics Data System (ADS)
Carlile, Simon; Best, Virginia
2002-02-01
The ability of six human subjects to discriminate the velocity of moving sound sources was examined using broadband stimuli presented in virtual auditory space. Subjects were presented with two successive stimuli moving in the frontal horizontal plane level with the ears, and were required to judge which moved the fastest. Discrimination thresholds were calculated for reference velocities of 15, 30, and 60 degrees/s under three stimulus conditions. In one condition, stimuli were centered on 0° azimuth and their duration varied randomly to prevent subjects from using displacement as an indicator of velocity. Performance varied between subjects giving median thresholds of 5.5, 9.1, and 14.8 degrees/s for the three reference velocities, respectively. In a second condition, pairs of stimuli were presented for a constant duration and subjects would have been able to use displacement to assist their judgment as faster stimuli traveled further. It was found that thresholds decreased significantly for all velocities (3.8, 7.1, and 9.8 degrees/s), suggesting that the subjects were using the additional displacement cue. The third condition differed from the second in that the stimuli were "anchored" on the same starting location rather than centered on the midline, thus doubling the spatial offset between stimulus endpoints. Subjects showed the lowest thresholds in this condition (2.9, 4.0, and 7.0 degrees/s). The results suggested that the auditory system is sensitive to velocity per se, but velocity comparisons are greatly aided if displacement cues are present.
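The logic of the first condition can be made explicit in a few lines: when duration is drawn at random, displacement (velocity times duration) no longer reliably indicates which stimulus was faster. A sketch of such trial generation, with illustrative parameter ranges rather than the study's actual values:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_trial(ref_vel=30.0, cmp_vel=36.0):
    """Randomized durations decouple displacement from velocity."""
    velocities = np.array([ref_vel, cmp_vel])  # deg/s
    durations = rng.uniform(0.5, 1.5, size=2)  # s, drawn independently
    displacements = velocities * durations     # deg traveled
    return velocities, durations, displacements

vel, dur, disp = make_trial()
print(disp)  # the slower stimulus can easily travel farther
```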
Tools for evaluation of restriction on auditory participation: systematic review of the literature.
Souza, Valquíria Conceição; Lemos, Stela Maris Aguiar
2015-01-01
To systematically review studies that used questionnaires for the evaluation of restriction on auditory participation in adults and the elderly. Studies from the last five years were selected through a bibliographic search of national and international journals in the following electronic databases: ISI Web of Science and the Virtual Health Library - BIREME, which includes the LILACS and MEDLINE databases. Inclusion criteria were: studies available in full text; published in Portuguese, English, or Spanish; whose participants were adults and/or the elderly; and that used questionnaires for the evaluation of restriction on auditory participation. Initially, the studies were selected based on the reading of titles and abstracts. Then, the articles were read in full and the information was entered into the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) checklist. Three hundred seventy studies were found in the researched databases; 14 of these studies were excluded because they were found in more than one database. The titles and abstracts of 356 articles were analyzed; 40 of them were selected for full reading, of which 26 articles were finally selected. In the present review, nine instruments were found for the evaluation of restriction on auditory participation. The most used questionnaires for the assessment of the restriction on auditory participation were the Hearing Handicap Inventory for the Elderly (HHIE), Hearing Handicap Inventory for Adults (HHIA), and Hearing Handicap Inventory for the Elderly - Screening (HHIE-S). The use of restriction on auditory participation questionnaires can assist in validating decisions in audiology practice and be useful in the fitting of hearing aids and in assessing the results of aural rehabilitation.
Aoyama, Atsushi; Haruyama, Tomohiro; Kuriki, Shinya
2013-09-01
Unconscious monitoring of multimodal stimulus changes enables humans to effectively sense the external environment. Such automatic change detection is thought to be reflected in auditory and visual mismatch negativity (MMN) and mismatch negativity fields (MMFs). These are event-related potentials and magnetic fields, respectively, evoked by deviant stimuli within a sequence of standard stimuli, and both are typically studied during irrelevant visual tasks that cause the stimuli to be ignored. Due to the sensitivity of MMN/MMF to potential effects of explicit attention to vision, however, it is unclear whether multisensory co-occurring changes can purely facilitate early sensory change detection reciprocally across modalities. We adopted a tactile task involving the reading of Braille patterns as a neutral ignore condition, while measuring magnetoencephalographic responses to concurrent audiovisual stimuli that were infrequently deviated in the auditory, visual, or audiovisual dimension; 1000-Hz standard tones were switched to 1050-Hz deviant tones and/or two-by-two standard check patterns displayed on both sides of the visual field were switched to deviant reversed patterns. The check patterns were set to be faint enough that the reversals could be easily ignored even during Braille reading. While visual MMFs were virtually undetectable even for visual and audiovisual deviants, significant auditory MMFs were observed for auditory and audiovisual deviants, originating from bilateral supratemporal auditory areas. Notably, auditory MMFs were significantly enhanced for audiovisual deviants from about 100 ms post-stimulus, as compared with the summed responses to auditory and visual deviants or to each of the unisensory deviants recorded in separate sessions. Evidenced by high tactile task performance with unawareness of visual changes, we conclude that Braille reading can successfully suppress explicit attention and that simultaneous multisensory changes can implicitly strengthen automatic change detection from an early stage in a cross-sensory manner, at least in the vision-to-audition direction.
Navigating Mythic Space in the Digital Age
ERIC Educational Resources Information Center
Foley, Drew Thomas
2012-01-01
In prior ages, alternate worlds were associated with symbolic expressions of storied space, here termed "mythic space." The digital age brings new forms of virtual space that are co-existent with physical space. These virtual spaces may be understood as a contemporary representation of mythic space. This dissertation explores the paths by…
Virtually-augmented interfaces for tactical aircraft.
Haas, M W
1995-05-01
The term Fusion Interface is defined as a class of interface which integrally incorporates both virtual and non-virtual concepts and devices across the visual, auditory and haptic sensory modalities. A fusion interface is a multi-sensory virtually-augmented synthetic environment. A new facility has been developed within the Human Engineering Division of the Armstrong Laboratory dedicated to exploratory development of fusion-interface concepts. One of the virtual concepts to be investigated in the Fusion Interfaces for Tactical Environments facility (FITE) is the application of EEG and other physiological measures for virtual control of functions within the flight environment. FITE is a specialized flight simulator which allows efficient concept development through the use of rapid prototyping followed by direct experience of new fusion concepts. The FITE facility also supports evaluation of fusion concepts by operational fighter pilots in a high fidelity simulated air combat environment. The facility was utilized by a multi-disciplinary team composed of operational pilots, human-factors engineers, electronics engineers, computer scientists, and experimental psychologists to prototype and evaluate the first multi-sensory, virtually-augmented cockpit. The cockpit employed LCD-based head-down displays, a helmet-mounted display, three-dimensionally localized audio displays, and a haptic display. This paper will endeavor to describe the FITE facility architecture, some of the characteristics of the FITE virtual display and control devices, and the potential application of EEG and other physiological measures within the FITE facility.
Double dissociation of 'what' and 'where' processing in auditory cortex.
Lomber, Stephen G; Malhotra, Shveta
2008-05-01
Studies of cortical connections or neuronal function in different cerebral areas support the hypothesis that parallel cortical processing streams, similar to those identified in visual cortex, may exist in the auditory system. However, this model has not yet been behaviorally tested. We used reversible cooling deactivation to investigate whether the individual regions in cat nonprimary auditory cortex that are responsible for processing the pattern of an acoustic stimulus or localizing a sound in space could be doubly dissociated in the same animal. We found that bilateral deactivation of the posterior auditory field resulted in deficits in a sound-localization task, whereas bilateral deactivation of the anterior auditory field resulted in deficits in a pattern-discrimination task, but not vice versa. These findings support a model of cortical organization that proposes that identifying an acoustic stimulus ('what') and its spatial location ('where') are processed in separate streams in auditory cortex.
Higgins, Nathan C; McLaughlin, Susan A; Rinne, Teemu; Stecker, G Christopher
2017-09-05
Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues, particularly interaural time and level differences (ITD and ILD), that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks where targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and, critically, for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues.
STS-134 crew in Virtual Reality Lab during their MSS/EVAA SUPT2 Team training
2010-08-27
JSC2010-E-121045 (27 Aug. 2010) --- NASA astronaut Andrew Feustel (right), STS-134 mission specialist, uses the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of his duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare crew members for dealing with space station elements. David Homan assisted Feustel. Photo credit: NASA or National Aeronautics and Space Administration
Effect of Auditory Motion Velocity on Reaction Time and Cortical Processes
ERIC Educational Resources Information Center
Getzmann, Stephan
2009-01-01
The study investigated the processing of sound motion, employing a psychophysical motion discrimination task in combination with electroencephalography. Following stationary auditory stimulation from a central space position, the onset of left- and rightward motion elicited a specific cortical response that was lateralized to the hemisphere…
Representative Model of the Learning Process in Virtual Spaces Supported by ICT
ERIC Educational Resources Information Center
Capacho, José
2014-01-01
This paper shows the results of research activities for building a representative model of the learning process in virtual spaces (e-Learning). The formal basis of the model is supported by the analysis of models of learning assessment in virtual spaces, specifically Dembo's teaching-learning model and the systemic approach to evaluating…
Virtual Glovebox (VGX) Aids Astronauts in Pre-Flight Training
NASA Technical Reports Server (NTRS)
2003-01-01
NASA's Virtual Glovebox (VGX) was developed to allow astronauts on Earth to train for complex biology research tasks in space. The astronauts may reach into the virtual environment, naturally manipulating specimens, tools, equipment, and accessories in a simulated microgravity environment as they would do in space. Such virtual reality technology also provides engineers and space operations staff with rapid prototyping, planning, and human performance modeling capabilities. Other Earth-based applications being explored for this technology include biomedical procedural training and training for disarming bio-terrorism weapons.
Magnetic resonance imaging of the saccular otolithic mass.
Sbarbati, A; Leclercq, F; Antonakis, K; Osculati, F
1992-01-01
The frog's inner ear was studied in vivo by high spatial resolution magnetic resonance imaging at 7 Tesla. The vestibule, the internal acoustic meatus, and the auditory tube have been identified. The large otolithic mass contained in the vestibule showed a virtual absence of magnetic resonance signal probably due to its composition of closely packed otoconia. PMID:1295875
Measuring Presence in Virtual Environments
1994-10-01
…viewpoint to change what they see, or to reposition their head to affect binaural hearing, or to search the environment haptically, they will experience a… increase presence in an alternate environment. For example, a head-mounted display that isolates the user from the real world may increase the sense… movement interface devices such as treadmills and trampolines, different gloves, and auditory equipment. Even as a low-end technological implementation of…
ERIC Educational Resources Information Center
Jacob, Laura Beth
2012-01-01
Virtual world environments have evolved from object-oriented, text-based online games to complex three-dimensional immersive social spaces where the lines between reality and computer-generated begin to blur. Educators use virtual worlds to create engaging three-dimensional learning spaces for students, but the impact of virtual worlds in…
Neural Correlates of Sound Localization in Complex Acoustic Environments
Zündorf, Ida C.; Lewald, Jörg; Karnath, Hans-Otto
2013-01-01
Listening to and understanding people in a "cocktail-party situation" is a remarkable feature of the human auditory system. Here we investigated the neural correlates of the ability to localize a particular sound among others in an acoustically cluttered environment with healthy subjects. In a sound localization task, five different natural sounds were presented from five virtual spatial locations during functional magnetic resonance imaging (fMRI). Activity related to auditory stream segregation was revealed in the posterior superior temporal gyrus bilaterally, the anterior insula, the supplementary motor area, and a frontoparietal network. Moreover, the results indicated critical roles of the left planum temporale in extracting the sound of interest among acoustic distracters and of the precuneus in orienting spatial attention to the target sound. We hypothesized that the left-sided lateralization of the planum temporale activation is related to the higher specialization of the left hemisphere for the analysis of spectrotemporal sound features. Furthermore, the precuneus, a brain area known to be involved in the computation of spatial coordinates across diverse frames of reference for reaching to objects, also seems to be a crucial area for accurately determining the locations of auditory targets in an acoustically complex scene of multiple sound sources. The precuneus thus may not only be involved in visuo-motor processes, but may also subserve related functions in the auditory modality. PMID:23691185
Parsons, Thomas D; Courtney, Christopher G
2014-01-30
Numerous studies have demonstrated that the Paced Auditory Serial Addition Test (PASAT) has utility for the detection of cognitive processing deficits. While the PASAT has demonstrated high levels of internal consistency and test-retest reliability, its administration has been known to create undue anxiety and frustration in participants. As a result, performance on the PASAT may degrade, and the test's difficult nature may decrease the probability that participants return for follow-up testing. This study is a preliminary attempt at assessing the potential of a PASAT embedded in a virtual reality environment. The Virtual Reality PASAT (VR-PASAT) was compared with a paper-and-pencil version of the PASAT as well as other standardized neuropsychological measures. Both modalities of the PASAT were administered to a sample of 50 healthy university students between the ages of 19 and 34 years. Equivalent distributions were found for age, gender, education, and computer familiarity. Moderate relationships were found between the VR-PASAT and other putative attentional processing measures. The VR-PASAT was unrelated to indices of learning, memory, or visuospatial processing. Comparison of the VR-PASAT with the traditional paper-and-pencil PASAT indicated that both versions require the examinee to sustain attention at an increasingly demanding, externally determined rate. Results offer preliminary support for the construct validity (in a college sample) of the VR-PASAT as an attentional processing measure and suggest that this task may provide some unique information not tapped by traditional attentional processing tasks. Copyright © 2013 Elsevier B.V. All rights reserved.
A Novel Treatment of Fear of Flying Using a Large Virtual Reality System.
Czerniak, Efrat; Caspi, Asaf; Litvin, Michal; Amiaz, Revital; Bahat, Yotam; Baransi, Hani; Sharon, Hanania; Noy, Shlomo; Plotnik, Meir
2016-04-01
Fear of flying (FoF), a common phobia in the developed world, is usually treated with cognitive behavioral therapy, most effectively when combined with exposure methods, e.g., virtual reality exposure therapy (VRET). We evaluated FoF treatment using VRET in a large motion-based VR system. The treated subjects were seated on a moving platform. The virtual scenery included the interior of an aircraft and a window view of the outside world, accompanied by platform movements simulating, e.g., takeoff, landing, and air turbulence. Relevant auditory stimuli were also incorporated. Three male patients with FoF underwent a clinical interview followed by three VRET sessions in the presence and with the guidance of a therapist. Scores on the Flight Anxiety Situation (FAS) and Flight Anxiety Modality (FAM) questionnaires were obtained on the first and fourth visits. Anxiety levels were assessed using the subjective units of distress (SUDs) scale during the exposure. All three subjects expressed satisfaction with the procedure and did not skip or avoid any of its stages. Consistent improvement was seen in the SUDs throughout each VRET session and across sessions, while patients' scores on the FAS and FAM showed inconsistent trends. Two patients participated in actual flights in the months following the treatment, bringing 12 and 16 years of avoidance to an end. This VR-based treatment includes critical elements of the flying experience for exposure beyond visual and auditory stimuli. The current case reports suggest VRET sessions may have a meaningful impact on anxiety levels, yet additional research seems warranted.
Audiovisual temporal recalibration: space-based versus context-based.
Yuan, Xiangyong; Li, Baolin; Bi, Cuihua; Yin, Huazhan; Huang, Xiting
2012-01-01
Recalibration of perceived simultaneity is widely accepted to minimise the delay between multisensory signals owing to different physical and neural conduction times. With concurrent exposure, temporal recalibration is either contextually or spatially based. Context-based recalibration was recently described in detail, but evidence for space-based recalibration is scarce, and the competition between these two reference frames is unclear. Here, participants watched two distinct blob-and-tone pairs that alternated laterally, one asynchronous and the other synchronous, and then judged their perceived simultaneity and temporal order as the pairs swapped positions and varied in timing. For low-level stimuli with abundant auditory location cues, space-based aftereffects were significantly more apparent (8.3%) than context-based aftereffects (4.2%); without such auditory cues, space-based aftereffects were less apparent (4.4%) and numerically smaller than context-based aftereffects (6.0%). These results suggested that stimulus level and auditory location cues are both determinants of the recalibration frame. Through such joint judgments and a simple reaction time task, our results further revealed that the criterion separating perceived simultaneity from successiveness shifted profoundly across adaptations without accompanying changes in perceptual latency, implying that criterion shifts, rather than perceptual latency changes, account for both space-based and context-based temporal recalibration.
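One standard way to quantify such recalibration is to fit a simultaneity window to the proportion of "simultaneous" responses as a function of stimulus onset asynchrony (SOA) and track how its centre shifts after adaptation. A sketch with made-up response data and a Gaussian window; the study's actual fitting model may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def window(soa, pss, width, peak):
    """Gaussian simultaneity window centred on the point of
    subjective simultaneity (PSS)."""
    return peak * np.exp(-((soa - pss) ** 2) / (2.0 * width ** 2))

soa = np.array([-300, -200, -100, 0, 100, 200, 300])          # ms
p_sim = np.array([0.10, 0.35, 0.75, 0.90, 0.80, 0.40, 0.10])  # hypothetical
(pss, width, peak), _ = curve_fit(window, soa, p_sim, p0=(0.0, 100.0, 1.0))
print(f"PSS = {pss:.1f} ms")  # an adaptation-induced shift in the PSS
                              # indexes the recalibration aftereffect
```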
Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex.
Sloas, David C; Zhuo, Ran; Xue, Hongbo; Chambers, Anna R; Kolaczyk, Eric; Polley, Daniel B; Sen, Kamal
2016-01-01
Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high-dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices.
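A minimal sketch of the GAM approach described, using the pygam library on synthetic data: smooth terms for two stimulus dimensions plus a tensor-product term for their interaction, the kind of across-dimension sensitivity the study reports. The data and term choices are illustrative, not the paper's analysis pipeline.

```python
import numpy as np
from pygam import LinearGAM, s, te

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(500, 2))   # two stimulus dimensions
# Synthetic "firing rate" with a genuine interaction between dimensions.
y = (np.sin(3.0 * X[:, 0]) + X[:, 1]
     + 2.0 * X[:, 0] * X[:, 1] + rng.normal(0.0, 0.1, 500))

# Additive smooths for each dimension plus a tensor-product interaction.
gam = LinearGAM(s(0) + s(1) + te(0, 1)).fit(X, y)
gam.summary()  # per-term statistics: is the te(0, 1) interaction needed?
```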
Audiovisual integration increases the intentional step synchronization of side-by-side walkers.
Noy, Dominic; Mouta, Sandra; Lamas, Joao; Basso, Daniel; Silva, Carlos; Santos, Jorge A
2017-12-01
When people walk side-by-side, they often synchronize their steps. To achieve this, individuals might cross-modally match audiovisual signals from the movements of the partner with kinesthetic, cutaneous, visual, and auditory signals from their own movements. Because signals from different sensory systems are processed with noise and asynchronously, the challenge for the CNS is to derive the best estimate based on this conflicting information. This is currently thought to be done by a mechanism operating as a Maximum Likelihood Estimator (MLE). The present work investigated whether audiovisual signals from the partner are integrated according to MLE in order to synchronize steps during walking. Three experiments were conducted in which the sensory cues from a walking partner were virtually simulated. In Experiment 1, seven participants were instructed to synchronize with human-sized point-light walkers and/or footstep sounds. Results revealed the highest synchronization performance with auditory and audiovisual cues, as quantified by the time to achieve synchronization and by synchronization variability. However, this auditory dominance effect might have been due to artifacts of the setup. Therefore, in Experiment 2, human-sized virtual mannequins were implemented, and the audiovisual stimuli were rendered in real time and thus were synchronous and co-localized. All four participants synchronized best with audiovisual cues. For three of the four participants, the results point toward optimal integration consistent with the MLE model. Experiment 3 yielded performance decrements for all three participants when the cues were incongruent. Overall, these findings suggest that individuals might optimally integrate audiovisual cues to synchronize steps during side-by-side walking. Copyright © 2017 Elsevier B.V. All rights reserved.
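The MLE rule referenced here has a compact closed form: each cue is weighted by its inverse variance, and the fused estimate is at least as reliable as the best single cue. A minimal sketch with hypothetical step-timing cues (the numbers are invented for illustration):

```python
import numpy as np

def mle_combine(estimates, variances):
    """Maximum-likelihood fusion of independent Gaussian cues.
    Weights are proportional to inverse variances; the fused variance
    1 / sum(1/var_i) is never larger than the best single cue's."""
    inv = 1.0 / np.asarray(variances, dtype=float)
    weights = inv / inv.sum()
    fused = float(np.dot(weights, estimates))
    fused_var = 1.0 / inv.sum()
    return fused, fused_var

# Hypothetical estimates of the partner's next footstep time (s):
# vision (noisier) versus audition (more reliable).
print(mle_combine([0.52, 0.48], [0.010, 0.004]))  # fused ≈ 0.491 s
```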
Lee, Hyung Young; Kim, You Lim; Lee, Suk Min
2015-06-01
[Purpose] This study aimed to investigate the clinical effects of virtual reality-based training and task-oriented training on balance performance in stroke patients. [Subjects and Methods] The subjects were randomly allocated to 2 groups: virtual reality-based training group (n = 12) and task-oriented training group (n = 12). The patients in the virtual reality-based training group used the Nintendo Wii Fit Plus, which provided visual and auditory feedback as well as the movements that enabled shifting of weight to the right and left sides, for 30 min/day, 3 times/week for 6 weeks. The patients in the task-oriented training group practiced additional task-oriented programs for 30 min/day, 3 times/week for 6 weeks. Patients in both groups also underwent conventional physical therapy for 60 min/day, 5 times/week for 6 weeks. [Results] Balance and functional reach test outcomes were examined in both groups. The results showed that the static balance and functional reach test outcomes were significantly higher in the virtual reality-based training group than in the task-oriented training group. [Conclusion] This study suggested that virtual reality-based training might be a more feasible and suitable therapeutic intervention for dynamic balance in stroke patients compared to task-oriented training.
NASA Technical Reports Server (NTRS)
Wightman, Frederic L.; Jenison, Rick
1995-01-01
All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.
Development of Virtual Auditory Interfaces
2001-03-01
…reference to compare the sound in the VE with the real-world experience. … systems are currently being evaluated. The first system uses a portable Sony TCD-D8 DAT audio… data set… including sound recordings and sound measurements… 4. Lessons from the Entertainment Industry: The entertainment industry has… created a system called "Fantasound," which wrapped the musical compositions and sound… even though we have the technology to create astounding…
Xu, Long-Chun; Zhang, Gang; Zou, Yue; Zhang, Min-Feng; Zhang, Dong-Sheng; Ma, Hua; Zhao, Wen-Bo; Zhang, Guang-Yu
2017-10-13
The objective of this study is to provide implications for the rehabilitation of hearing impairment by investigating changes in the neural activity of directional brain networks in patients with long-term bilateral hearing loss. First, we administered neuropsychological tests to 21 subjects (11 patients with long-term bilateral hearing loss and 10 subjects with normal hearing); these tests revealed significant differences between the deaf group and the controls. We then constructed an individual-specific virtual brain for each participant from functional magnetic resonance data, utilizing effective connectivity and multivariate regression methods. We applied a stimulating signal to the primary auditory cortices of the virtual brain and observed the resulting brain region activations. We found that patients with long-term bilateral hearing loss presented weaker activations in the auditory and language networks, but enhanced neural activity in the default mode network, as compared with normally hearing subjects. In particular, the right cerebral hemisphere showed more changes than the left. Additionally, weaker neural activity in the primary auditory cortices was strongly associated with poorer cognitive performance. Finally, causal analysis revealed several interactional circuits among activated brain regions, and these interregional causal interactions implied that abnormal neural activity of the directional brain networks in the deaf patients impacted cognitive function.
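The "virtual brain" machinery is only named here, not specified; the sketch below illustrates the general idea under strong simplifying assumptions: a linear network whose directed effective-connectivity matrix propagates a stimulus injected at an "auditory" node, with the steady-state pattern read out as regional activation. The network is random and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6                                    # toy network of six regions
A = rng.uniform(-1.0, 1.0, (n, n))       # directed effective connectivity
np.fill_diagonal(A, 0.0)
A *= 0.4 / np.max(np.abs(np.linalg.eigvals(A)))  # scale for stable dynamics

stim = np.zeros(n)
stim[0] = 1.0                            # stimulate the "auditory" node

x = np.zeros(n)
for _ in range(200):                     # iterate x <- A x + stim
    x = A @ x + stim                     # converges to (I - A)^{-1} stim
print(np.round(x, 3))                    # regional activation profile
```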
1998-03-01
…Research Laboratory's Virtual Reality Responsive Workbench (VRRWB) and Dragon software system, which together address the problem of battle space… and describe the lessons which have been learned. Keywords: interactive graphics, workbench, battle space visualization, virtual reality, user interface.
Interaction Design and Usability of Learning Spaces in 3D Multi-user Virtual Worlds
NASA Astrophysics Data System (ADS)
Minocha, Shailey; Reeves, Ahmad John
Three-dimensional virtual worlds are multimedia, simulated environments, often managed over the Web, which users can 'inhabit' and interact with via their own graphical self-representations known as 'avatars'. 3D virtual worlds are being used in many applications: education/training, gaming, social networking, marketing and commerce. Second Life is the most widely used 3D virtual world in education. However, problems associated with usability, navigation and wayfinding in 3D virtual worlds may impact on student learning and engagement. Based on empirical investigations of learning spaces in Second Life, this paper presents design guidelines to improve the usability and ease of navigation in 3D spaces. Methods of data collection include semi-structured interviews with Second Life students, educators and designers. The findings have revealed that design principles from the fields of urban planning, Human-Computer Interaction, Web usability, geography and psychology can influence the design of spaces in 3D multi-user virtual environments.
Effects of sensory cueing in virtual motor rehabilitation. A review.
Palacios-Navarro, Guillermo; Albiol-Pérez, Sergio; García-Magariño García, Iván
2016-04-01
To critically identify studies that evaluate the effects of cueing in virtual motor rehabilitation in patients with different neurological disorders and to make recommendations for future studies. Data from MEDLINE®, IEEE Xplore, ScienceDirect, the Cochrane Library, and Web of Science were searched up to February 2015. We included studies that investigate the effects of cueing in virtual motor rehabilitation related to interventions for the upper or lower extremities using auditory, visual, and tactile cues on motor performance in non-immersive, semi-immersive, or fully immersive virtual environments. These studies compared virtual cueing with an alternative or no intervention. Ten studies with a total of 153 patients were included in the review. All of them address the impact of cueing in virtual motor rehabilitation, regardless of the pathological condition. After selecting the articles, the following variables were extracted: year of publication, sample size, study design, type of cueing, intervention procedures, outcome measures, and main findings. The outcome evaluation was done at baseline and at the end of treatment in most of the studies. All of the studies except one showed improvements in some or all outcomes after intervention or, in some cases, outcomes favoring the virtual rehabilitation group over the control group. Virtual cueing seems to be a promising approach to improve motor learning, providing a channel for non-pharmacological therapeutic intervention in different neurological disorders. However, further studies using larger and more homogeneous groups of patients are required to confirm these findings. Copyright © 2016 Elsevier Inc. All rights reserved.
Defense applications of the CAVE (CAVE automatic virtual environment)
NASA Astrophysics Data System (ADS)
Isabelle, Scott K.; Gilkey, Robert H.; Kenyon, Robert V.; Valentino, George; Flach, John M.; Spenny, Curtis H.; Anderson, Timothy R.
1997-07-01
The CAVE is a multi-person, room-sized, high-resolution, 3D video and auditory environment that can be used to present very immersive virtual environment experiences. This paper describes the CAVE technology and the capability of the CAVE system as originally developed at the Electronic Visualization Laboratory of the University of Illinois at Chicago and as more recently implemented by Wright State University (WSU) in the Armstrong Laboratory at Wright-Patterson Air Force Base (WPAFB). One planned use of the WSU/WPAFB CAVE is research addressing the appropriate design of display and control interfaces for controlling uninhabited aerial vehicles. The WSU/WPAFB CAVE has a number of features that make it well-suited to this work: (1) 360-degree surround, plus floor, high-resolution visual displays; (2) virtual spatialized audio; (3) the ability to integrate real and virtual objects; and (4) rapid and flexible reconfiguration. However, even though the CAVE is likely to have broad utility for military applications, it does have certain limitations that may make it less well-suited to applications that require 'natural' haptic feedback, vestibular stimulation, or an ability to interact with close detailed objects.
Reliance on auditory feedback in children with childhood apraxia of speech.
Iuzzini-Seigel, Jenya; Hogan, Tiffany P; Guarino, Anthony J; Green, Jordan R
2015-01-01
Children with childhood apraxia of speech (CAS) have been hypothesized to continuously monitor their speech through auditory feedback to minimize speech errors. We used an auditory masking paradigm to determine the effect of attenuating auditory feedback on speech in 30 children: 9 with CAS, 10 with speech delay, and 11 with typical development. The masking affected only the speech of the children with CAS, as measured by voice onset time and vowel space area. These findings provide preliminary support for a greater reliance on auditory feedback among children with CAS. Readers of this article should be able to (i) describe the motivation for investigating the role of auditory feedback in children with CAS; (ii) report the effects of feedback attenuation on speech production in children with CAS, speech delay, and typical development; and (iii) understand how the current findings may support a feedforward program deficit in children with CAS. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
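Vowel space area, one of the outcome measures here, is conventionally computed as the area of the convex hull (or polygon) spanned by a speaker's vowel tokens in F1-F2 formant space. A sketch with hypothetical corner-vowel formants, not values from the study:

```python
import numpy as np
from scipy.spatial import ConvexHull

# Hypothetical mean formants (F1, F2) in Hz for /i/, /ae/, /a/, /u/.
formants = np.array([[300, 2300],
                     [700, 2000],
                     [850, 1600],
                     [350,  900]])
hull = ConvexHull(formants)
print(f"vowel space area: {hull.volume:.0f} Hz^2")  # .volume is area in 2D
```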
NASA Astrophysics Data System (ADS)
Moore, Brian C. J.
Psychoacoustics
Music and learning-induced cortical plasticity.
Pantev, Christo; Ross, Bernhard; Fujioka, Takako; Trainor, Laurel J; Schulte, Michael; Schulz, Matthias
2003-11-01
Auditory stimuli are encoded by frequency-tuned neurons in the auditory cortex. There are a number of tonotopic maps, indicating that there are multiple representations, as in a mosaic. However, the cortical organization is not fixed due to the brain's capacity to adapt to current requirements of the environment. Several experiments on cerebral cortical organization in musicians demonstrate an astonishing plasticity. We used the MEG technique in a number of studies to investigate the changes that occur in the human auditory cortex when a skill is acquired, such as when learning to play a musical instrument. We found enlarged cortical representation of tones of the musical scale as compared to pure tones in skilled musicians. Enlargement was correlated with the age at which musicians began to practice. We also investigated cortical representations for notes of different timbre (violin and trumpet) and found that they are enhanced in violinists and trumpeters, preferentially for the timbre of the instrument on which the musician was trained. In recent studies we extended these findings in three ways. First, we show that we can use MEG to measure the effects of relatively short-term laboratory training involving learning to perceive virtual instead of spectral pitch and that the switch to perceiving virtual pitch is manifested in the gamma band frequency. Second, we show that there is cross-modal plasticity in that when the lips of trumpet players are stimulated (trumpet players assess their auditory performance by monitoring the position and pressure of their lips touching the mouthpiece of their instrument) at the same time as a trumpet tone, activation in the somatosensory cortex is increased more than it is during the sum of the separate lip and trumpet tone stimulation. Third, we show that musicians' automatic encoding and discrimination of pitch contour and interval information in melodies are specifically enhanced compared to those in nonmusicians in that musicians show larger functional mismatch negativity (MMNm) responses to occasional changes in melodic contour or interval, but that the two groups show similar MMNm responses to changes in the frequency of a pure tone.
Suma, Evan A; Lipps, Zachary; Finkelstein, Samantha; Krum, David M; Bolas, Mark
2012-04-01
Natural walking is possible only within immersive virtual environments that fit inside the boundaries of the user's physical workspace. To reduce the severity of the restrictions imposed by limited physical area, we introduce "impossible spaces," a new design mechanic for virtual environments that seek to maximize the size of the virtual environment that can be explored with natural locomotion. Such environments make use of self-overlapping architectural layouts, effectively compressing comparatively large interior environments into smaller physical areas. We conducted two formal user studies to explore the perception and experience of impossible spaces. In the first experiment, we showed that reasonably small virtual rooms may overlap by as much as 56% before users begin to detect that they are in an impossible space, and that larger virtual rooms that expanded to maximally fill our available 9.14 m x 9.14 m workspace may overlap by up to 31%. Our results also demonstrate that users perceive distances to objects in adjacent overlapping rooms as if the overall space was uncompressed, even at overlap levels that were overtly noticeable. In our second experiment, we combined several well-known redirection techniques to string together a chain of impossible spaces in an expansive outdoor scene. We then conducted an exploratory analysis of users' verbal feedback during exploration, which indicated that impossible spaces provide an even more powerful illusion when users are naive to the manipulation.
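The overlap levels reported (56%, 31%) can be expressed as the intersection area of two room footprints divided by a room's floor area. A small sketch for axis-aligned rectangular rooms, with illustrative coordinates:

```python
def overlap_fraction(a, b):
    """Rooms as (xmin, ymin, xmax, ymax); returns the fraction of
    room a's floor area that room b overlaps."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    intersection = max(w, 0.0) * max(h, 0.0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    return intersection / area_a

room_a = (0.0, 0.0, 4.0, 4.0)
room_b = (2.5, 0.0, 6.5, 4.0)            # slid sideways so the rooms self-overlap
print(overlap_fraction(room_a, room_b))  # 0.375, below the ~0.56 detection threshold
```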
Very large virtual compound spaces: construction, storage and utility in drug discovery.
Peng, Zhengwei
2013-09-01
This report reviews recent activities in the construction, storage, and exploration of very large virtual compound spaces. As expected, the systematic exploration of compound spaces at the highest resolution (individual atoms and bonds) is intrinsically intractable. By contrast, by staying within a finite number of reactions and a finite number of reactants or fragments, several virtual compound spaces have been constructed in a combinatorial fashion, with sizes ranging from 10^11 to 10^20 compounds. Multiple search methods have been developed to perform searches (e.g., similarity, exact, and substructure) in those compound spaces without the need for full enumeration. The up-front investment in synthetic feasibility during the construction of some of those virtual compound spaces enables wider adoption by medicinal chemists to design and synthesize important compounds for drug discovery. Recent activities in exploring virtual compound spaces via evolutionary approaches based on Genetic Algorithms also suggest a positive shift of focus from method development to workflow, integration, and ease of use, all of which are required for this approach to be widely adopted by medicinal chemists.
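The quoted sizes follow from simple combinatorics: for each reaction scheme, multiply the sizes of its compatible reactant pools, then sum over schemes. A sketch with invented pool sizes; the reaction names and counts are placeholders, not data from the review.

```python
from math import prod

# Hypothetical reaction schemes mapped to reactant-pool sizes.
reactions = {
    "amide_coupling":      (120_000, 95_000),
    "suzuki_coupling":     (80_000, 60_000),
    "reductive_amination": (120_000, 40_000, 30_000),  # three components
}

space_size = sum(prod(pools) for pools in reactions.values())
print(f"{space_size:.2e} virtual products, counted without enumeration")
```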
Sengül, Ali; van Elk, Michiel; Rognini, Giulio; Aspell, Jane Elizabeth; Bleuler, Hannes; Blanke, Olaf
2012-01-01
The effects of real-world tool use on body or space representations are relatively well established in cognitive neuroscience. Several studies have shown, for example, that active tool use results in a facilitated integration of multisensory information in peripersonal space, i.e. the space directly surrounding the body. However, it remains unknown to what extent similar mechanisms apply to the use of virtual-robotic tools, such as those used in the field of surgical robotics, in which a surgeon may use bimanual haptic interfaces to control a surgery robot at a remote location. This paper presents two experiments in which participants used a haptic handle, originally designed for a commercial surgery robot, to control a virtual tool. The integration of multisensory information related to the virtual-robotic tool was assessed by means of the crossmodal congruency task, in which subjects responded to tactile vibrations applied to their fingers while ignoring visual distractors superimposed on the tip of the virtual-robotic tool. Our results show that active virtual-robotic tool use changes the spatial modulation of the crossmodal congruency effects, comparable to changes in the representation of peripersonal space observed during real-world tool use. Moreover, when the virtual-robotic tools were held in a crossed position, the visual distractors interfered strongly with tactile stimuli delivered to the hand that was connected to them via the tool, reflecting a remapping of peripersonal space. Such remapping was observed not only when the virtual-robotic tools were actively used (Experiment 1), but also when the tools were merely held passively (Experiment 2). The present study extends earlier findings on the extension of peripersonal space from physical and pointing tools to virtual-robotic tools using techniques from haptics and virtual reality. We discuss our data with respect to learning and human factors in the field of surgical robotics and discuss the use of new technologies in the field of cognitive neuroscience. PMID:23227142
Keeping Timbre in Mind: Working Memory for Complex Sounds that Can't Be Verbalized
ERIC Educational Resources Information Center
Golubock, Jason L.; Janata, Petr
2013-01-01
Properties of auditory working memory for sounds that lack strong semantic associations and are not readily verbalized or sung are poorly understood. We investigated auditory working memory capacity for lists containing 2-6 easily discriminable abstract sounds synthesized within a constrained timbral space, at delays of 1-6 s (Experiment 1), and…
Auditory Sequential Organization among Children with and without a Hearing Loss.
ERIC Educational Resources Information Center
Jutras, Benoit; Gagne, Jean-Pierre
1999-01-01
Forty-eight children, either with or without a sensorineural hearing loss and either young (6 and 7 years old) or older (9 and 10 years old) reproduced sequences of acoustic stimuli that varied in number, temporal spacing, and type. Results suggested that the poorer performance of the hearing-impaired children was due to auditory processing…
Sonic morphology: Aesthetic dimensional auditory spatial awareness
NASA Astrophysics Data System (ADS)
Whitehouse, Martha M.
The sound and ceramic sculpture installation, "Skirting the Edge: Experiences in Sound & Form," is an integration of art and science demonstrating the concept of sonic morphology. "Sonic morphology" is herein defined as aesthetic three-dimensional auditory spatial awareness. The exhibition explicates my empirical phenomenal observations that sound has a three-dimensional form. Composed of ceramic sculptures that allude to different social and physical situations, coupled with sound compositions that enhance and create a three-dimensional auditory and visual aesthetic experience (see accompanying DVD), the exhibition supports the research question, "What is the relationship between sound and form?" Precisely how people aurally experience three-dimensional space involves an integration of spatial properties, auditory perception, individual history, and cultural mores. People also utilize environmental sound events as a guide in social situations and in remembering their personal history, as well as a guide in moving through space. Aesthetically, sound affects the fascination, meaning, and attention one has within a particular space. Sonic morphology brings art forms such as a movie, video, sound composition, and musical performance into the cognitive scope by generating meaning from the link between the visual and auditory senses. This research examined sonic morphology as an extension of musique concrète (sound as object), originating in Pierre Schaeffer's work in the 1940s. Pointing, as John Cage did, to the corporeal three-dimensional experience of "all sound," I composed works that took their total form only through the perceiver-participant's participation in the exhibition. While contemporary artist Alvin Lucier creates artworks that draw attention to making sound visible, "Skirting the Edge" engages the perceiver-participant visually and aurally, leading to recognition of sonic morphology.
Adaptation to stimulus statistics in the perception and neural representation of auditory space.
Dahmen, Johannes C; Keating, Peter; Nodal, Fernando R; Schulz, Andreas L; King, Andrew J
2010-06-24
Sensory systems are known to adapt their coding strategies to the statistics of their environment, but little is known about the perceptual implications of such adjustments. We investigated how auditory spatial processing adapts to stimulus statistics by presenting human listeners and anesthetized ferrets with noise sequences in which interaural level differences (ILD) rapidly fluctuated according to a Gaussian distribution. The mean of the distribution biased the perceived laterality of a subsequent stimulus, whereas the distribution's variance changed the listeners' spatial sensitivity. The responses of neurons in the inferior colliculus changed in line with these perceptual phenomena. Their ILD preference adjusted to match the stimulus distribution mean, resulting in large shifts in rate-ILD functions, while their gain adapted to the stimulus variance, producing pronounced changes in neural sensitivity. Our findings suggest that processing of auditory space is geared toward emphasizing relative spatial differences rather than the accurate representation of absolute position.
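The reported adaptation pattern (tuning recentred on the stimulus mean, gain scaled to the stimulus spread) can be caricatured in a few lines. This is my own simplification, not the authors' model; the distribution parameters and `base_gain` are invented.

```python
# Illustrative simulation: a sigmoidal rate-ILD function whose midpoint shifts
# to the adapting distribution's mean and whose slope scales inversely with
# the distribution's standard deviation.

import numpy as np

rng = np.random.default_rng(0)
ilds = rng.normal(loc=10.0, scale=4.0, size=2000)  # dB, adapting sequence

mu, sigma = ilds.mean(), ilds.std()

def adapted_rate(ild, base_gain=1.0):
    """Normalized firing rate after adaptation to the stimulus statistics."""
    return 1.0 / (1.0 + np.exp(-(ild - mu) * (base_gain / sigma)))

print(adapted_rate(np.array([0.0, 10.0, 20.0])))  # low, mid, high firing
```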
Data communications in a parallel active messaging interface of a parallel computer
Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.
2014-09-02
Eager send data communications in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints that specify a client, a context, and a task, including receiving an eager send data communications instruction with transfer data disposed in a send buffer characterized by a read/write send buffer memory address in a read/write virtual address space of the origin endpoint; determining for the send buffer a read-only send buffer memory address in a read-only virtual address space, the read-only virtual address space shared by both the origin endpoint and the target endpoint, with all frames of physical memory mapped to pages of virtual memory in the read-only virtual address space; and communicating by the origin endpoint to the target endpoint an eager send message header that includes the read-only send buffer memory address.
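As a rough illustration of the address arithmetic this abstract describes (a toy model, not the patented implementation): the origin endpoint advertises its send buffer to the target by the buffer's alias in a read-only virtual address space shared by both endpoints. The base addresses below are hypothetical.

```python
# Toy sketch: translate a send buffer's private read/write address into its
# alias in a shared read-only mapping of the same physical frames, and ship
# the alias in the eager send message header.

RW_BASE = 0x7F00_0000_0000  # hypothetical base of the private read/write mapping
RO_BASE = 0x5500_0000_0000  # hypothetical base of the shared read-only mapping

def read_only_address(rw_send_buffer_addr: int) -> int:
    """Translate a private read/write address to the shared read-only alias."""
    return RO_BASE + (rw_send_buffer_addr - RW_BASE)

header = {"ro_send_buffer": read_only_address(0x7F00_0000_2000)}
print(hex(header["ro_send_buffer"]))  # 0x550000002000, sent in the eager header
```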
Data communications in a parallel active messaging interface of a parallel computer
Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.
2014-09-16
Eager send data communications in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints that specify a client, a context, and a task, including receiving an eager send data communications instruction with transfer data disposed in a send buffer characterized by a read/write send buffer memory address in a read/write virtual address space of the origin endpoint; determining for the send buffer a read-only send buffer memory address in a read-only virtual address space, the read-only virtual address space shared by both the origin endpoint and the target endpoint, with all frames of physical memory mapped to pages of virtual memory in the read-only virtual address space; and communicating by the origin endpoint to the target endpoint an eager send message header that includes the read-only send buffer memory address.
ERIC Educational Resources Information Center
Foreman, Nigel
2009-01-01
The benefits of using virtual environments (VEs) in psychology arise from the fact that movements in virtual space, and accompanying perceptual changes, are treated by the brain in much the same way as those in equivalent real space. The research benefits of using VEs, in areas of psychology such as spatial learning and cognition, include…
NASA Technical Reports Server (NTRS)
Lehtonen, Ken
1999-01-01
This is a report to the Third Annual International Virtual Company Conference on the development of a virtual company to support the reengineering of the NASA/Goddard Hubble Space Telescope (HST) Control Center System. It begins with an HST Science "Commercial": Brief Tour of Our Universe, showing various pictures taken by the Hubble Space Telescope. The presentation then reviews the project background and goals, and traces the evolution of the Control Center System ("CCS Inc."). Topics of interest to "virtual companies" are reviewed: (1) "How To Choose A Team" (2) "Organizational Model" (3) "The Human Component" (4) "'Virtual Trust' Among Teaming Companies" (5) "Unique Challenges to Working Horizontally" (6) "The Cultural Impact" (7) "Lessons Learned".
Short-term memory stores organized by information domain.
Noyce, Abigail L; Cestero, Nishmar; Shinn-Cunningham, Barbara G; Somers, David C
2016-04-01
Vision and audition have complementary affinities, with vision excelling in spatial resolution and audition excelling in temporal resolution. Here, we investigated the relationships among the visual and auditory modalities and spatial and temporal short-term memory (STM) using change detection tasks. We created short sequences of visual or auditory items, such that each item within a sequence arose at a unique spatial location at a unique time. On each trial, two successive sequences were presented; subjects attended to either space (the sequence of locations) or time (the sequence of inter-item intervals) and reported whether the patterns of locations or intervals were identical. Each subject completed blocks of unimodal trials (both sequences presented in the same modality) and crossmodal trials (Sequence 1 visual, Sequence 2 auditory, or vice versa) for both spatial and temporal tasks. We found a strong interaction between modality and task: Spatial performance was best on unimodal visual trials, whereas temporal performance was best on unimodal auditory trials. The order of modalities on crossmodal trials also mattered, suggesting that perceptual fidelity at encoding is critical to STM. Critically, no cost was attributable to crossmodal comparison: In both tasks, performance on crossmodal trials was as good as or better than on the weaker unimodal trials. STM representations of space and time can guide change detection in either the visual or the auditory modality, suggesting that the temporal or spatial organization of STM may supersede sensory-specific organization.
Adaptive space warping to enhance passive haptics in an arthroscopy surgical simulator.
Spillmann, Jonas; Tuchschmid, Stefan; Harders, Matthias
2013-04-01
Passive haptics, also known as tactile augmentation, denotes the use of a physical counterpart to a virtual environment to provide tactile feedback. Employing passive haptics can result in more realistic touch sensations than those from active force feedback, especially for rigid contacts. However, changes in the virtual environment would necessitate modifications of the physical counterparts. In recent work space warping has been proposed as one solution to overcome this limitation. In this technique virtual space is distorted such that a variety of virtual models can be mapped onto one single physical object. In this paper, we propose as an extension adaptive space warping; we show how this technique can be employed in a mixed-reality surgical training simulator in order to map different virtual patients onto one physical anatomical model. We developed methods to warp different organ geometries onto one physical mock-up, to handle different mechanical behaviors of the virtual patients, and to allow interactive modifications of the virtual structures, while the physical counterparts remain unchanged. Various practical examples underline the wide applicability of our approach. To the best of our knowledge this is the first practical usage of such a technique in the specific context of interactive medical training.
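The core of space warping can be sketched in one dimension: landmark points on the virtual organ are mapped onto the corresponding points of the physical mock-up, and the displacement is interpolated over the rest of virtual space. The landmark values below are invented, and `np.interp` stands in for whatever smooth warp the simulator actually uses.

```python
# Minimal 1-D space-warping sketch: a piecewise-linear map from virtual
# coordinates onto the fixed physical prop, so differently shaped virtual
# patients can share one mock-up.

import numpy as np

virtual_landmarks = np.array([0.0, 2.0, 5.0, 9.0])   # cm, on the virtual organ
physical_landmarks = np.array([0.0, 2.5, 4.5, 9.0])  # cm, on the mock-up

def warp(x):
    """Map a virtual coordinate onto the physical prop by interpolation."""
    return np.interp(x, virtual_landmarks, physical_landmarks)

print(warp(np.array([1.0, 3.5, 7.0])))  # warped contact points
```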
Harris, Robert; de Jong, Bauke M
2015-10-22
Using fMRI, cerebral activations were studied in 24 classically-trained keyboard performers and 12 musically unskilled control subjects. Two groups of musicians were recruited: improvising (n=12) and score-dependent (non-improvising) musicians (n=12). While listening to both familiar and unfamiliar music, subjects either (covertly) appraised the presented music performance or imagined they were playing the music themselves. We hypothesized that improvising musicians would exhibit enhanced efficiency of audiomotor transformation reflected by stronger ventral premotor activation. Statistical Parametric Mapping revealed that, while virtually 'playing along' with the music, improvising musicians exhibited activation of a right-hemisphere distribution of cerebral areas including posterior-superior parietal and dorsal premotor cortex. Involvement of these right-hemisphere dorsal stream areas suggests that improvising musicians recruited an amodal spatial processing system subserving pitch-to-space transformations to facilitate their virtual motor performance. Score-dependent musicians recruited a primarily left-hemisphere pattern of motor areas together with the posterior part of the right superior temporal sulcus, suggesting a relationship between aural discrimination and symbolic representation. Activations in bilateral auditory cortex were significantly larger for improvising musicians than for score-dependent musicians, suggesting enhanced top-down effects on aural perception. Our results suggest that learning to play a musical instrument primarily from notation predisposes musicians toward aural identification and discrimination, while learning by improvisation involves audio-spatial-motor transformations, not only during performance but also during perception. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
Cross-modal metaphorical mapping of spoken emotion words onto vertical space.
Montoro, Pedro R; Contreras, María José; Elosúa, María Rosa; Marmolejo-Ramos, Fernando
2015-01-01
From the field of embodied cognition, previous studies have reported evidence of metaphorical mapping of emotion concepts onto a vertical spatial axis. Most of the work on this topic has used visual words as the typical experimental stimuli. However, to our knowledge, no previous study has examined the association between affect and vertical space using a cross-modal procedure. The current research is a first step toward the study of the metaphorical mapping of emotions onto vertical space by means of an auditory to visual cross-modal paradigm. In the present study, we examined whether auditory words with an emotional valence can interact with the vertical visual space according to a 'positive-up/negative-down' embodied metaphor. The general method consisted in the presentation of a spoken word denoting a positive/negative emotion prior to the spatial localization of a visual target in an upper or lower position. In Experiment 1, the spoken words were passively heard by the participants and no reliable interaction between emotion concepts and bodily simulated space was found. In contrast, Experiment 2 required more active listening of the auditory stimuli. A metaphorical mapping of affect and space was evident but limited to the participants engaged in an emotion-focused task. Our results suggest that the association of affective valence and vertical space is not activated automatically during speech processing since an explicit semantic and/or emotional evaluation of the emotionally valenced stimuli was necessary to obtain an embodied effect. The results are discussed within the framework of the embodiment hypothesis.
Wolter, Sibylla; Dudschig, Carolin; Kaup, Barbara
2017-11-01
This study explored differences between pianists and non-musicians during reading of sentences describing high- or low-pitched auditory events. Based on the embodied model of language comprehension, it was hypothesized that the experience of playing the piano fosters an association between high-pitched sounds and the right side of space, and between low-pitched sounds and the left. This pitch-space association is assumed to be elicited during comprehension of sentences describing either a high- or low-pitched auditory event. In this study, pianists and non-musicians were tested, based on the hypothesis that only pianists would show a compatibility effect between implied pitch height and horizontal space, because only pianists have the corresponding experience with the piano keyboard. Participants read pitch-related sentences (e.g., the bear growls deeply, the soprano singer sings an aria) and judged whether the sentence was sensible or not by pressing either a left or right response key. The results indicated that only the pianists showed the predicted compatibility effect between implied pitch height and response location. Based on the results, it can be inferred that the experience of playing the piano led to an association between horizontal space and pitch height in pianists, while no such spatial association was elicited in non-musicians.
Virtual Acoustics, Aeronautics and Communications
NASA Technical Reports Server (NTRS)
Begault, Durand R.; Null, Cynthia H. (Technical Monitor)
1996-01-01
An optimal approach to auditory display design for commercial aircraft would utilize both spatialized ("3-D") audio techniques and active noise cancellation for safer operations. Results from several aircraft simulator studies conducted at NASA Ames Research Center are reviewed, including Traffic alert and Collision Avoidance System (TCAS) warnings, spoken orientation "beacons" for gate identification and collision avoidance on the ground, and hardware for improved speech intelligibility. The implications of hearing loss amongst pilots are also considered.
Virtual acoustics, aeronautics, and communications
NASA Technical Reports Server (NTRS)
Begault, D. R.; Wenzel, E. M. (Principal Investigator)
1998-01-01
An optimal approach to auditory display design for commercial aircraft would utilize both spatialized (3-D) audio techniques and active noise cancellation for safer operations. Results from several aircraft simulator studies conducted at NASA Ames Research Center are reviewed, including Traffic alert and Collision Avoidance System (TCAS) warnings, spoken orientation "beacons" for gate identification and collision avoidance on the ground, and hardware for improved speech intelligibility. The implications of hearing loss among pilots are also considered.
David, Nicole; Skoruppa, Stefan; Gulberti, Alessandro
2016-01-01
The sense of agency describes the ability to experience oneself as the agent of one's own actions. Previous studies of the sense of agency manipulated the predicted sensory feedback related either to movement execution or to the movement’s outcome, for example by delaying the movement of a virtual hand or the onset of a tone that resulted from a button press. Such temporal sensorimotor discrepancies reduce the sense of agency. It remains unclear whether movement-related feedback is processed differently than outcome-related feedback in terms of agency experience, especially if these types of feedback differ with respect to sensory modality. We employed a mixed-reality setup, in which participants tracked their finger movements by means of a virtual hand. They performed a single tap, which elicited a sound. The temporal contingency between the participants’ finger movements and (i) the movement of the virtual hand or (ii) the expected auditory outcome was systematically varied. In a visual control experiment, the tap elicited a visual outcome. For each feedback type and participant, changes in the sense of agency were quantified using a forced-choice paradigm and the Method of Constant Stimuli. Participants were more sensitive to delays of outcome than to delays of movement execution. This effect was very similar for visual or auditory outcome delays. Our results indicate different contributions of movement- versus outcome-related sensory feedback to the sense of agency, irrespective of the modality of the outcome. We propose that this differential sensitivity reflects the behavioral importance of assessing authorship of the outcome of an action. PMID:27536948
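Under the Method of Constant Stimuli with a forced-choice response, each probed delay yields a proportion of "delay detected" responses, and a detection threshold can be read off a fitted psychometric function. A hedged sketch of that analysis with invented data (the authors' exact fitting procedure is not specified here):

```python
# Fit a cumulative-Gaussian psychometric function to forced-choice data and
# read off the delay at which detection reaches 50%. Data are invented.

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

delays = np.array([0, 80, 160, 240, 320, 400])            # ms
p_detected = np.array([0.05, 0.15, 0.45, 0.70, 0.90, 0.97])

def psychometric(x, mu, sigma):
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, delays, p_detected, p0=[200, 80])
print(f"detection threshold ~ {mu:.0f} ms, slope parameter ~ {sigma:.0f} ms")
```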
Computer Applications and Virtual Environments (CAVE)
NASA Technical Reports Server (NTRS)
1993-01-01
Virtual Reality (VR) can provide cost effective methods to design and evaluate components and systems for maintenance and refurbishment operations. Marshall Space Flight Center (MSFC) is beginning to utilize VR for design analysis in the X-34 experimental reusable space vehicle. Analysts at MSFC's Computer Applications and Virtual Environments (CAVE) used Head Mounted Displays (HMD) (pictured), spatial trackers and gesture inputs as a means to animate or inhabit a properly sized virtual human model. These models are used in a VR scenario as a way to determine functionality of space and maintenance requirements for the virtual X-34. The primary functions of the virtual X-34 mockup are to support operations development and design analysis for engine removal, the engine compartment and the aft fuselage. This capability provides general visualization support to engineers and designers at MSFC and to the System Design Freeze Review at Orbital Sciences Corporation (OSC).
Computer Applications and Virtual Environments (CAVE)
NASA Technical Reports Server (NTRS)
1993-01-01
Virtual Reality (VR) can provide cost effective methods to design and evaluate components and systems for maintenance and refurbishment operations. The Marshall Space Flight Center (MSFC) in Huntsville, Alabama began to utilize VR for design analysis in the X-34 experimental reusable space vehicle. Analysts at MSFC's Computer Applications and Virtual Environments (CAVE) used Head Mounted Displays (HMD) (pictured), spatial trackers and gesture inputs as a means to animate or inhabit a properly sized virtual human model. These models were used in a VR scenario as a way to determine functionality of space and maintenance requirements for the virtual X-34. The primary functions of the virtual X-34 mockup were to support operations development and design analysis for engine removal, the engine compartment and the aft fuselage. This capability provided general visualization support to engineers and designers at MSFC and to the System Design Freeze Review at Orbital Sciences Corporation (OSC). The X-34 program was cancelled in 2001.
Avatars Talking: The Use of Virtual Worlds within Communication Courses
ERIC Educational Resources Information Center
Sarachan, Jeremy; Burk, Nanci; Day, Kenneth; Trevett-Smith, Matthew
2013-01-01
Virtual worlds have become an invaluable space for online learning and the exploration of digital cultures. Communication departments can benefit from using these spaces to educate their students in the logistics of virtual worlds and as a way to better understand how the process of interpersonal and global communication functions in both online…
ERIC Educational Resources Information Center
Nunez Esquer, Gustavo; Sheremetov, Leonid
This paper reports on the results and future research work within the paradigm of Configurable Collaborative Distance Learning, called Espacios Virtuales de Aprendizaje ("Virtual Learning Spaces", EVA). The paper focuses on: (1) description of the main concepts, including virtual learning spaces for knowledge, collaboration, consulting, and experimentation, a…
Coming down to Earth: Helping Teachers Use 3D Virtual Worlds in Across-Spaces Learning Situations
ERIC Educational Resources Information Center
Muñoz-Cristóbal, Juan A.; Prieto, Luis P.; Asensio-Pérez, Juan I.; Martínez-Monés, Alejandra; Jorrín-Abellán, Iván M.; Dimitriadis, Yannis
2015-01-01
Different approaches have explored how to provide seamless learning across multiple ICT-enabled physical and virtual spaces, including three-dimensional virtual worlds (3DVW). However, these approaches present limitations that may reduce their acceptance in authentic educational practice: The difficulties of authoring and sharing teacher-created…
The NonConforming Virtual Element Method for the Stokes Equations
Cangiani, Andrea; Gyrya, Vitaliy; Manzini, Gianmarco
2016-01-01
In this paper, we present the nonconforming virtual element method (VEM) for the numerical approximation of velocity and pressure in the steady Stokes problem. The pressure is approximated using discontinuous piecewise polynomials, while each component of the velocity is approximated using the nonconforming virtual element space. On each mesh element the local virtual space contains the space of polynomials of up to a given degree, plus suitable nonpolynomial functions. The virtual element functions are implicitly defined as the solution of local Poisson problems with polynomial Neumann boundary conditions. As typical in VEM approaches, the explicit evaluation of the nonpolynomial functions is not required. This approach makes it possible to construct nonconforming (virtual) spaces for any polynomial degree regardless of the parity, for two- and three-dimensional problems, and for meshes with very general polygonal and polyhedral elements. We show that the nonconforming VEM is inf-sup stable and establish optimal a priori error estimates for the velocity and pressure approximations. Finally, numerical examples confirm the convergence analysis and the effectiveness of the method in providing high-order accurate approximations.
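For concreteness, the local nonconforming virtual element space of order k on a polygonal element K is commonly written as follows in the VEM literature (my paraphrase, assuming the paper follows the standard construction; e ranges over the edges of K):

```latex
V_h^k(K) \;=\; \Bigl\{\, v_h \in H^1(K) \;:\;
    \Delta v_h \in \mathbb{P}_{k-2}(K), \quad
    \frac{\partial v_h}{\partial n}\Big|_{e} \in \mathbb{P}_{k-1}(e)
    \;\;\forall\, e \subset \partial K \,\Bigr\}.
```

This space contains all polynomials of degree up to k, and its degrees of freedom are edge and interior moments, which is why the nonpolynomial members never need to be evaluated pointwise.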
2006-08-01
[Extraction residue of this 2006 report's front matter; only fragments survive. Recoverable details: the study used the NASA Task Load Index (NASA-TLX) and SITREP questionnaires, and reported mean ratings of physical and temporal demand by cue condition.]
Searching Fragment Spaces with feature trees.
Lessel, Uta; Wellenzohn, Bernd; Lilienthal, Markus; Claussen, Holger
2009-02-01
Virtual combinatorial chemistry easily produces billions of compounds, for which conventional virtual screening cannot be performed even with the fastest methods available. An efficient solution for such a scenario is the generation of Fragment Spaces, which encode huge numbers of virtual compounds by their fragments/reagents and rules of how to combine them. Similarity-based searches can be performed in such spaces without ever fully enumerating all virtual products. Here we describe the generation of a huge Fragment Space encoding about 5 × 10^11 compounds based on established in-house synthesis protocols for combinatorial libraries, i.e., we encode practically evaluated combinatorial chemistry protocols in a machine readable form, rendering them accessible to in silico search methods. We show how such searches in this Fragment Space can be integrated as a first step in an overall workflow. It reduces the extremely huge number of virtual products by several orders of magnitude so that the resulting list of molecules becomes more manageable for further, more elaborate and time-consuming analysis steps. Results of a case study are presented and discussed, which lead to some general conclusions for an efficient expansion of the chemical space to be screened in pharmaceutical companies.
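One intuition for why similarity searches can avoid full enumeration: if the similarity score decomposes (approximately) additively over fragment slots, the best of the combinatorially many products is found by optimizing each slot independently. This is my own simplification for illustration, not the feature-tree algorithm itself; fragment names and scores are hypothetical.

```python
# Illustrative sketch: under an additive per-slot scoring model, the best
# product over size(slot1)*size(slot2)*... combinations is found by scoring
# each slot on its own -- no enumeration of the full space required.

def best_product(slots, score):
    """slots: list of fragment lists; score(fragment) -> partial similarity."""
    return [max(slot, key=score) for slot in slots]

slots = [["amide_A", "amide_B"], ["aryl_X", "aryl_Y", "aryl_Z"]]
partial_scores = {"amide_A": 0.4, "amide_B": 0.7,
                  "aryl_X": 0.2, "aryl_Y": 0.9, "aryl_Z": 0.5}
print(best_product(slots, partial_scores.get))  # ['amide_B', 'aryl_Y']
```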
VirtualSpace: A vision of a machine-learned virtual space environment
NASA Astrophysics Data System (ADS)
Bortnik, J.; Sarno-Smith, L. K.; Chu, X.; Li, W.; Ma, Q.; Angelopoulos, V.; Thorne, R. M.
2017-12-01
Spaceborne instrumentation tends to come and go. A typical instrument will go through a phase of design and construction, be deployed on a spacecraft for several years while it collects data, and then be decommissioned and fade into obscurity. The data collected from that instrument will typically receive much attention while it is being collected, perhaps in the form of event studies, conjunctions with other instruments, or a few statistical surveys, but once the instrument or spacecraft is decommissioned, the data will be archived and receive progressively less attention with every passing year. This is the fate of all historical data, and will be the fate of data being collected by instruments even at the present time. But what if those instruments could come alive, and all be simultaneously present at any and every point in time and space? Imagine the scientific insights and societal gains that could be achieved with a grand (virtual) heliophysical observatory consisting of every current and historical mission ever deployed. We propose that this is not just fantasy but is eminently doable with the data currently available, with the present computational resources, and with currently available algorithms. This project revitalizes existing data resources and lays the groundwork for incorporating data from every future mission to expand the scope and refine the resolution of the virtual observatory. We call this project VirtualSpace: a machine-learned virtual space environment.
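A toy version of the idea, under my own assumptions: learn a mapping from (position, time, geomagnetic driver) to an instrument's measurement from archived data, then query the model where and when no spacecraft ever flew. The feature choices and synthetic "archive" below are invented for illustration.

```python
# Sketch of a "virtual instrument": a regressor trained on archived
# measurements, queried at arbitrary points of the space environment.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
# Hypothetical archive: columns = [L-shell, magnetic local time, Kp index]
X = rng.uniform([2, 0, 0], [7, 24, 9], size=(5000, 3))
y = np.exp(-X[:, 0]) * (1 + 0.1 * X[:, 2]) + 0.01 * rng.standard_normal(5000)

model = RandomForestRegressor(n_estimators=100).fit(X, y)
print(model.predict([[4.5, 12.0, 3.0]]))  # "virtual instrument" readout
```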
Rus-Calafell, M; Garety, P; Sason, E; Craig, T J K; Valmaggia, L R
2018-02-01
Over the last two decades, there has been a rapid increase of studies testing the efficacy and acceptability of virtual reality in the assessment and treatment of mental health problems. This systematic review was carried out to investigate the use of virtual reality in the assessment and the treatment of psychosis. Web of Science, PsycINFO, EMBASE, Scopus, ProQuest and PubMed databases were searched, resulting in the identification of 638 articles potentially eligible for inclusion; of these, 50 studies were included in the review. The main fields of research in virtual reality and psychosis are: safety and acceptability of the technology; neurocognitive evaluation; functional capacity and performance evaluation; assessment of paranoid ideation and auditory hallucinations; and interventions. The studies reviewed indicate that virtual reality offers a valuable method of assessing the presence of symptoms in ecologically valid environments, with the potential to facilitate learning new emotional and behavioural responses. Virtual reality is a promising method to be used in the assessment of neurocognitive deficits and the study of relevant clinical symptoms. Furthermore, preliminary findings suggest that it can be applied to the delivery of cognitive rehabilitation, social skills training interventions and virtual reality-assisted therapies for psychosis. The potential benefits for enhancing treatment are highlighted. Recommendations for future research include demonstrating generalisability to real-life settings, examining potential negative effects, larger sample sizes and long-term follow-up studies. The present review has been registered in the PROSPERO register: CRD 4201507776.
Valente, Daniel L.; Braasch, Jonas; Myrbeck, Shane A.
2012-01-01
Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audiovisual environment in which participants were instructed to make auditory width judgments in dynamic bi-modal settings. The results of these psychophysical tests suggest the importance of congruent audio-visual presentation to the ecological interpretation of an auditory scene. Supporting data were accumulated in five rooms of ascending volumes and varying reverberation times. Participants were given an audiovisual matching test in which they were instructed to pan the auditory width of a performing ensemble to a varying set of audio and visual cues in rooms. Results show that both auditory and visual factors affect the collected responses and that the two sensory modalities coincide in distinct interactions. The greatest differences between the panned audio stimuli given a fixed visual width were found in the physical space with the largest volume and the greatest source distance. These results suggest, in this specific instance, a predominance of auditory cues in the spatial analysis of the bi-modal scene. PMID:22280585
Crossmodal attention switching: auditory dominance in temporal discrimination tasks.
Lukas, Sarah; Philipp, Andrea M; Koch, Iring
2014-11-01
Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual-auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are differentially effectively processed depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual-auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set. Copyright © 2014 Elsevier B.V. All rights reserved.
Auditory salience using natural soundscapes.
Huang, Nicholas; Elhilali, Mounya
2017-03-01
Salience describes the phenomenon by which an object stands out from a scene. While its underlying processes are extensively studied in vision, mechanisms of auditory salience remain largely unknown. Previous studies have used well-controlled auditory scenes to shed light on some of the acoustic attributes that drive the salience of sound events. Unfortunately, the use of constrained stimuli in addition to a lack of well-established benchmarks of salience judgments hampers the development of comprehensive theories of sensory-driven auditory attention. The present study explores auditory salience in a set of dynamic natural scenes. A behavioral measure of salience is collected by having human volunteers listen to two concurrent scenes and indicate continuously which one attracts their attention. By using natural scenes, the study takes a data-driven rather than experimenter-driven approach to exploring the parameters of auditory salience. The findings indicate that the space of auditory salience is multidimensional (spanning loudness, pitch, spectral shape, as well as other acoustic attributes), nonlinear and highly context-dependent. Importantly, the results indicate that contextual information about the entire scene over both short and long scales needs to be considered in order to properly account for perceptual judgments of salience.
An intelligent control and virtual display system for evolutionary space station workstation design
NASA Technical Reports Server (NTRS)
Feng, Xin; Niederjohn, Russell J.; Mcgreevy, Michael W.
1992-01-01
Research and development of the Advanced Display and Computer Augmented Control System (ADCACS) for the space station Body-Ported Cupola Virtual Workstation (BP/VCWS) were pursued. The potential applications of body-ported virtual display and intelligent control technology for human-system interfacing in the space station environment were explored. The new system is designed to enable crew members to control and monitor a variety of space operations with greater flexibility and efficiency than existing fixed consoles. The technologies being studied include helmet-mounted virtual displays, voice and special command input devices, and microprocessor-based intelligent controllers. Several research topics, such as human factors, decision-support expert systems, and wide-field-of-view color displays, are being addressed. The study showed the significant advantages of this uniquely integrated display and control system, and its feasibility for human-system interfacing applications in the space station command and control environment.
Centrally managed unified shared virtual address space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilkes, John
Systems, apparatuses, and methods for managing a unified shared virtual address space. A host may execute system software and manage a plurality of nodes coupled to the host. The host may send work tasks to the nodes, and for each node, the host may externally manage the node's view of the system's virtual address space. Each node may have a central processing unit (CPU) style memory management unit (MMU) with an internal translation lookaside buffer (TLB). In one embodiment, the host may be coupled to a given node via an input/output memory management unit (IOMMU) interface, where the IOMMU frontend interface shares the TLB with the given node's MMU. In another embodiment, the host may control the given node's view of virtual address space via memory-mapped control registers.
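The MMU/TLB machinery the abstract leans on reduces to a cached page-table lookup. A toy sketch (illustrative only; page size and table contents are invented):

```python
# Minimal virtual-to-physical translation with a TLB: on a miss, walk the
# page table and cache the translation; on a hit, reuse the cached entry.

PAGE = 4096

page_table = {0: 8, 1: 2}  # virtual page number -> physical frame number
tlb = {}                   # cached translations

def translate(vaddr: int) -> int:
    vpn, offset = divmod(vaddr, PAGE)
    if vpn not in tlb:            # TLB miss: walk the page table
        tlb[vpn] = page_table[vpn]
    return tlb[vpn] * PAGE + offset

print(hex(translate(0x1004)))     # page 1, offset 4 -> frame 2 -> 0x2004
```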
The NASA Goddard Space Flight Center Virtual Science Fair
NASA Technical Reports Server (NTRS)
Bolognese, Jeff; Walden, Harvey; Obenschain, Arthur F. (Technical Monitor)
2002-01-01
This report describes the development of the NASA Goddard Space Flight Center Virtual Science Fair, including its history and outgrowth from the traditional regional science fairs supported by NASA. The results of the 1999 Virtual Science Fair pilot program, the mechanics of running the 2000 Virtual Science Fair and its results, and comments and suggestions for future Virtual Science Fairs are provided. The appendices to the report include the original proposal for this project, the judging criteria, the user's guide and the judge's guide to the Virtual Science Fair Web site, the Fair publicity brochure and the Fair award designs, judges' and students' responses to survey questions about the Virtual Science Fair, and lists of student entries to both the 1999 and 2000 Fairs.
Expedition 15 Crew Members training in the Virtual Reality (VR) Laboratory
2006-09-25
JSC2006-E-41640 (25 Sept. 2006) --- Cosmonaut Fyodor N. Yurchikhin, Expedition 15 commander representing Russia's Federal Space Agency, participates in a camera review training session in the virtual reality lab in the Space Vehicle Mockup Facility at Johnson Space Center.
Expedition 15 Crew Members training in the Virtual Reality (VR) Laboratory
2006-09-25
JSC2006-E-41641 (25 Sept. 2006) --- Cosmonaut Oleg V. Kotov, Expedition 15 flight engineer representing Russia's Federal Space Agency, participates in a camera review training session in the virtual reality lab in the Space Vehicle Mockup Facility at Johnson Space Center.
Vision-based overlay of a virtual object into real scene for designing room interior
NASA Astrophysics Data System (ADS)
Harasaki, Shunsuke; Saito, Hideo
2001-10-01
In this paper, we introduce a geometric registration method for augmented reality (AR) and an application system, an interior simulator, in which a virtual (CG) object can be overlaid onto a real world space. The interior simulator is developed as an example AR application of the proposed method. Using the interior simulator, users can visually simulate the placement of virtual furniture and articles in the living room, so that they can easily design the living room interior without placing real furniture and articles, viewing it from many different locations and orientations in real time. In our system, two base images of a real world space are captured from two different views to define a projective coordinate frame for the 3D object space. Each projective view of a virtual object in the base images is then registered interactively. After such coordinate determination, an image sequence of the real world space is captured by a hand-held camera while tracking non-metrically measured feature points, and virtual objects are overlaid onto the image sequence using the relationships between the images. With the proposed system, 3D position-tracking devices, such as magnetic trackers, are not required for the overlay of virtual objects. Experimental results demonstrate that 3D virtual furniture can be overlaid onto an image sequence of a living room scene at nearly video rate (20 frames per second).
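A hedged sketch of the registration step, assuming coplanar feature points for simplicity (the paper's projective-coordinate method is more general): estimate a homography from tracked feature points in a base image to the current frame, then map the virtual object's anchor points through it. All point values are invented.

```python
# Planar stand-in for image-to-image registration: fit a homography from
# base-image feature points to current-frame feature points, then transfer
# the virtual object's anchor point into the current frame.

import numpy as np
import cv2

base_pts = np.float32([[100, 100], [400, 110], [390, 420], [110, 400]])
frame_pts = np.float32([[120, 90], [430, 130], [400, 450], [105, 420]])

H, _ = cv2.findHomography(base_pts, frame_pts)

anchor = np.float32([[[250, 250]]])           # object's anchor in the base image
warped = cv2.perspectiveTransform(anchor, H)  # where to draw it in this frame
print(warped.ravel())
```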
Three-dimensional virtual acoustic displays
NASA Technical Reports Server (NTRS)
Wenzel, Elizabeth M.
1991-01-01
The development of an alternative medium for displaying information in complex human-machine interfaces is described. The 3-D virtual acoustic display is a means for accurately transferring information to a human operator using the auditory modality; it combines directional and semantic characteristics to form naturalistic representations of dynamic objects and events in remotely sensed or simulated environments. Although the technology can stand alone, it is envisioned as a component of a larger multisensory environment and will no doubt find its greatest utility in that context. The general philosophy in the design of the display has been that the development of advanced computer interfaces should be driven first by an understanding of human perceptual requirements, and later by technological capabilities or constraints. In expanding on this view, current and potential uses of virtual acoustic displays are addressed, such displays are characterized, recent approaches to their implementation and application are reviewed, the research project at NASA Ames is described in detail, and finally some critical research issues for the future are outlined.
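A real virtual acoustic display convolves the source signal with measured head-related impulse responses (HRIRs); as a crude stand-in, the two dominant binaural cues, interaural time and level differences, can be applied directly. The delay and gain values below are rough assumptions, not measured data.

```python
# Skeleton of binaural spatialization: delay and attenuate the far ear to
# place a mono source on the left. Real displays use measured HRIRs instead.

import numpy as np

fs = 44100
t = np.arange(fs) / fs
mono = np.sin(2 * np.pi * 440 * t)        # 1 s test tone

itd_samples = int(0.0006 * fs)            # ~0.6 ms interaural time difference
ild_gain = 0.6                            # far-ear level drop (source on the left)

left = mono
right = np.concatenate([np.zeros(itd_samples), mono[:-itd_samples]]) * ild_gain
stereo = np.stack([left, right], axis=1)  # write with soundfile/scipy to audition
print(stereo.shape)                       # (44100, 2)
```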
STS-133 crew during MSS/EVAA TEAM training in Virtual Reality Lab
2010-10-01
JSC2010-E-170885 (1 Oct. 2010) --- NASA astronauts Alvin Drew (left) and Tim Kopra, both STS-133 mission specialists, use virtual reality hardware in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to rehearse some of their duties on the upcoming mission to the International Space Station. This type of virtual reality training allows the astronauts to wear a helmet and special gloves while looking at computer displays simulating actual movements around the various locations on the station hardware with which they will be working. Photo credit: NASA or National Aeronautics and Space Administration
STS-133 crew during MSS/EVAA TEAM training in Virtual Reality Lab
2010-10-01
JSC2010-E-170892 (1 Oct. 2010) --- NASA astronaut Alvin Drew, STS-133 mission specialist, uses virtual reality hardware in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to rehearse some of his duties on the upcoming mission to the International Space Station. This type of virtual reality training allows the astronauts to wear a helmet and special gloves while looking at computer displays simulating actual movements around the various locations on the station hardware with which they will be working. Photo credit: NASA or National Aeronautics and Space Administration
STS-133 crew during MSS/EVAA TEAM training in Virtual Reality Lab
2010-10-01
JSC2010-E-170871 (1 Oct. 2010) --- NASA astronaut Tim Kopra, STS-133 mission specialist, uses virtual reality hardware in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to rehearse some of his duties on the upcoming mission to the International Space Station. This type of virtual reality training allows the astronauts to wear a helmet and special gloves while looking at computer displays simulating actual movements around the various locations on the station hardware with which they will be working. Crew trainer David Homan assisted Kopra. Photo credit: NASA or National Aeronautics and Space Administration
STS-133 crew during MSS/EVAA TEAM training in Virtual Reality Lab
2010-10-01
JSC2010-E-170897 (1 Oct. 2010) --- NASA astronaut Tim Kopra, STS-133 mission specialist, uses virtual reality hardware in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to rehearse some of his duties on the upcoming mission to the International Space Station. This type of virtual reality training allows the astronauts to wear a helmet and special gloves while looking at computer displays simulating actual movements around the various locations on the station hardware with which they will be working. Photo credit: NASA or National Aeronautics and Space Administration
STS-133 crew during MSS/EVAA TEAM training in Virtual Reality Lab
2010-10-01
JSC2010-E-170873 (1 Oct. 2010) --- NASA astronaut Tim Kopra, STS-133 mission specialist, uses virtual reality hardware in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to rehearse some of his duties on the upcoming mission to the International Space Station. This type of virtual reality training allows the astronauts to wear a helmet and special gloves while looking at computer displays simulating actual movements around the various locations on the station hardware with which they will be working. Crew trainer David Homan assisted Kopra. Photo credit: NASA or National Aeronautics and Space Administration
STS-134 crew in Virtual Reality Lab during their MSS/EVAA SUPT2 Team training
2010-08-27
JSC2010-E-121053 (27 Aug. 2010) --- NASA astronaut Greg Chamitoff, STS-134 mission specialist, uses virtual reality hardware in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to rehearse some of his duties on the upcoming mission to the International Space Station. This type of virtual reality training allows the astronauts to wear a helmet and special gloves while looking at computer displays simulating actual movements around the various locations on the station hardware with which they will be working. Photo credit: NASA or National Aeronautics and Space Administration
STS-118 Astronaut Dave Williams Trains Using Virtual Reality Hardware
NASA Technical Reports Server (NTRS)
2007-01-01
STS-118 astronaut and mission specialist Dafydd R. 'Dave' Williams, representing the Canadian Space Agency, uses virtual reality hardware in the Space Vehicle Mock-up Facility at the Johnson Space Center to rehearse some of his duties for the upcoming mission. This type of virtual reality training allows the astronauts to wear special gloves and other gear while looking at computer displays simulating actual movements around the various locations on the station hardware with which they will be working.
Serino, Andrea; Canzoneri, Elisa; Marzolla, Marilena; di Pellegrino, Giuseppe; Magosso, Elisa
2015-01-01
Stimuli from different sensory modalities occurring on or close to the body are integrated in a multisensory representation of the space surrounding the body, i.e., peripersonal space (PPS). PPS dynamically modifies depending on experience, e.g., it extends after using a tool to reach far objects. However, the neural mechanism underlying PPS plasticity after tool use is largely unknown. Here we use a combined computational-behavioral approach to propose and test a possible mechanism accounting for PPS extension. We first present a neural network model simulating audio-tactile representation in the PPS around one hand. Simulation experiments showed that our model reproduced the main property of PPS neurons, i.e., selective multisensory response for stimuli occurring close to the hand. We used the neural network model to simulate the effects of a tool-use training. In terms of sensory inputs, tool use was conceptualized as a concurrent tactile stimulation from the hand, due to holding the tool, and an auditory stimulation from the far space, due to tool-mediated action. Results showed that after exposure to those inputs, PPS neurons responded also to multisensory stimuli far from the hand. The model thus suggests that synchronous pairing of tactile hand stimulation and auditory stimulation from the far space is sufficient to extend PPS, such as after tool-use. Such prediction was confirmed by a behavioral experiment, where we used an audio-tactile interaction paradigm to measure the boundaries of PPS representation. We found that PPS extended after synchronous tactile-hand stimulation and auditory-far stimulation in a group of healthy volunteers. Control experiments both in simulation and behavioral settings showed that the same amount of tactile and auditory inputs administered out of synchrony did not change PPS representation. We conclude by proposing a simple, biological-plausible model to explain plasticity in PPS representation after tool-use, which is supported by computational and behavioral data. PMID:25698947
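The pairing mechanism the model proposes can be caricatured in a few lines: synchronous tactile and far auditory co-activation strengthens the auditory weights feeding a hand-centred multisensory neuron, so its response field (the PPS boundary) extends. This is a deliberately reduced sketch under my own assumptions; the published model is a full dynamic neural network.

```python
# Toy Hebbian account of PPS extension after tool use: pairing touch on the
# hand with sound at the far tool tip strengthens the far auditory weights.

import numpy as np

distances = np.array([10, 30, 60, 100, 150])  # cm from the hand
w_aud = np.exp(-distances / 30.0)             # initially, near-space weights only
lr = 0.05

for _ in range(20):                                  # tool-use training trials
    tactile = 1.0                                    # touch on the hand holding the tool
    auditory = (distances == 150).astype(float)      # sound at the far tool tip
    w_aud = np.minimum(w_aud + lr * tactile * auditory, 1.0)  # bounded Hebbian update

print(np.round(w_aud, 2))  # far-space weight now comparable to near-space ones
```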
NASA Technical Reports Server (NTRS)
Trolinger, James D.; Lal, Ravindra B.; Rangel, Roger; Witherow, William; Rogers, Jan
2001-01-01
The IML-1 spaceflight produced over 1000 holograms of a well-defined particle field in the low-g Spacelab environment, each containing as much as 1000 megabytes of information. This project took advantage of these data and the concept of holographic "virtual" spaceflight to advance the understanding of convection in the space shuttle environment, g-jitter effects on crystal growth, and complex transport phenomena in low Reynolds number flows. The first objective of the proposed work was to advance the understanding of microgravity effects on crystal growth. This objective was achieved through the use of existing holographic data recorded during the IML-1 spaceflight. The second objective was to design a spaceflight experiment that exploits the "virtual space chamber" concept, in which holograms of space chambers can provide virtual access to space. This led to a flight definition project, which is now underway under a separate contract known as SHIVA, Spaceflight Holography Investigation in a Virtual Apparatus.
Cogné, M; Taillade, M; N'Kaoua, B; Tarruella, A; Klinger, E; Larrue, F; Sauzéon, H; Joseph, P-A; Sorita, E
2017-06-01
Spatial navigation, which involves higher cognitive functions, is frequently implemented in daily activities, and is critical to the participation of human beings in mainstream environments. Virtual reality is an expanding tool, which enables on the one hand the assessment of the cognitive functions involved in spatial navigation, and on the other the rehabilitation of patients with spatial navigation difficulties. Topographical disorientation is a frequent deficit among patients suffering from neurological diseases. The use of virtual environments enables the information incorporated into the virtual environment to be manipulated empirically. But the impact of such manipulations seems to differ according to their nature (quantity, occurrence, and characteristics of the stimuli) and the target population. We performed a systematic review of research on virtual spatial navigation covering the period from 2005 to 2015. We focused first on the contribution of virtual spatial navigation for patients with brain injury or schizophrenia, or in the context of ageing and dementia, and then on the impact of visual or auditory stimuli on virtual spatial navigation. On the basis of 6521 abstracts identified in 2 databases (PubMed and Scopus) with the keywords "navigation" and "virtual", 1103 abstracts were selected by adding the keywords "ageing", "dementia", "brain injury", "stroke", "schizophrenia", "aid", "help", "stimulus" and "cue"; among these, 63 articles were included in the present qualitative analysis. Unlike pencil-and-paper tests, virtual reality is useful to assess large-scale navigation strategies in patients with brain injury or schizophrenia, or in the context of ageing and dementia. Better knowledge about both the impact of the different aids and the cognitive processes involved is essential for the use of aids in neurorehabilitation. Copyright © 2016. Published by Elsevier Masson SAS.
Spacing and Induction: Application to Exemplars Presented as Auditory and Visual Text
ERIC Educational Resources Information Center
Zulkiply, Norehan; McLean, John; Burt, Jennifer S.; Bath, Debra
2012-01-01
It is an established finding that spacing repetitions generally facilitates memory for the repeated events. However, the effect of spacing exemplars on inductive learning is less well established. Two experiments using textual material were conducted to investigate the effect of spacing on induction. Experiments 1 and 2 extended the generality of…
Needle in the external auditory canal: an unusual complication of inferior alveolar nerve block.
Ribeiro, Leandro; Ramalho, Sara; Gerós, Sandra; Ferreira, Edite Coimbra; Faria e Almeida, António; Condé, Artur
2014-06-01
Inferior alveolar nerve block is used to anesthetize the ipsilateral mandible. The most commonly used technique is one in which the anesthetic is injected directly into the pterygomandibular space, by an intraoral approach. The fracture of the needle, although uncommon, can lead to potentially serious complications. The needle is usually found in the pterygomandibular space, although it can migrate and damage adjacent structures, with variable consequences. The authors report an unusual case of a fractured needle, migrating to the external auditory canal, as a result of an inferior alveolar nerve block. Copyright © 2014 Elsevier Inc. All rights reserved.
Virtual tour: INL's space battery facility
Johnson, Steve
2018-05-07
This virtual tour shows how INL fuels and tests nuclear power systems for deep space missions. To learn more about INL's contribution to the Mars Science Laboratory, visit http://www.inl.gov/marsrover.
Visual-auditory integration during speech imitation in autism.
Williams, Justin H G; Massaro, Dominic W; Peel, Natalie J; Bosseler, Alexis; Suddendorf, Thomas
2004-01-01
Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional 'mirror neuron' systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a 'virtual' head (Baldi), delivered speech stimuli for identification in auditory, visual or bimodal conditions. Children with ASD were poorer than controls at recognizing stimuli in the unimodal conditions, but once performance on this measure was controlled for, no group difference was found in the bimodal condition. A group of participants with ASD were also trained to develop their speech-reading ability. Training improved visual accuracy and this also improved the children's ability to utilize visual information in their processing of speech. Overall results were compared to predictions from mathematical models based on integration and non-integration, and were most consistent with the integration model. We conclude that, whilst they are less accurate in recognizing stimuli in the unimodal condition, children with ASD show normal integration of visual and auditory speech stimuli. Given that training in recognition of visual speech was effective, children with ASD may benefit from multi-modal approaches in imitative therapy and language training.
Malinvaud, D; Londero, A; Niarra, R; Peignard, Ph; Warusfel, O; Viaud-Delmon, I; Chatellier, G; Bonfils, P
2016-03-01
Subjective tinnitus (ST) is a frequent audiologic condition that still lacks an effective treatment. This study aimed at evaluating two therapeutic approaches: Virtual Reality (VR) immersion in auditory and visual 3D environments and Cognitive Behaviour Therapy (CBT). This open, randomized therapeutic-equivalence trial tested VR against CBT. Adult patients displaying unilateral or predominantly unilateral ST and fulfilling the inclusion criteria were included after giving their written informed consent. We measured the therapeutic effects by comparing the mean scores of validated questionnaires and visual analog scales before and after the protocol. Equivalence was established if the two strategies did not differ by more than a predetermined limit. We used univariate and multivariate analyses adjusted on baseline values to assess treatment efficacy. In addition to this trial, a purely exploratory comparison to a waiting-list group (WL) was provided. Between August 2009 and November 2011, 148 of 162 screened patients were enrolled (VR n = 61, CBT n = 58, WL n = 29). These groups did not differ at baseline in demographic data. Three months after the end of treatment, we found no difference between the VR and CBT groups in either tinnitus severity (p = 0.99) or tinnitus handicap (p = 0.36). VR appears to be at least as effective as CBT in unilateral ST patients. Copyright © 2016 Elsevier B.V. All rights reserved.
1993-09-15
Virtual Reality (VR) can provide cost-effective methods to design and evaluate components and systems for maintenance and refurbishment operations. Marshall Space Flight Center (MSFC) is beginning to utilize VR for design analysis of the X-34 experimental reusable space vehicle. Analysts at MSFC's Computer Applications and Virtual Environments (CAVE) facility used Head Mounted Displays (HMD) (pictured), spatial trackers and gesture inputs as a means to animate or inhabit a properly sized virtual human model. These models are used in a VR scenario as a way to determine functionality of space and maintenance requirements for the virtual X-34. The primary function of the virtual X-34 mockup is to support operations development and design analysis for engine removal, the engine compartment and the aft fuselage. This capability provides general visualization support to engineers and designers at MSFC and to the System Design Freeze Review at Orbital Sciences Corporation (OSC).
Ontological implications of being in immersive virtual environments
NASA Astrophysics Data System (ADS)
Morie, Jacquelyn F.
2008-02-01
The idea of Virtual Reality once conjured up visions of new territories to explore, and expectations of awaiting worlds of wonder. VR has matured to become a practical tool for therapy, medicine and commercial interests, yet artists, in particular, continue to expand the possibilities for the medium. Artistic virtual environments created over the past two decades probe the phenomenological nature of these virtual environments. When we inhabit a fully immersive virtual environment, we have entered into a new form of Being. Not only does our body continue to exist in the real, physical world; we are also embodied within the virtual by means of technology that translates our bodied actions into interactions with the virtual environment. Very few states in human existence allow this bifurcation of our Being, where we can exist simultaneously in two spaces at once, with the possible exception of metaphysical states such as shamanistic trance and out-of-body experiences. This paper discusses the nature of this simultaneous Being: how we enter the virtual space, what forms of persona we can don there, what forms of spaces we can inhabit, and what type of wondrous experiences we can both hope for and expect.
Effects of Team Emotional Authenticity on Virtual Team Performance.
Connelly, Catherine E; Turel, Ofir
2016-01-01
Members of virtual teams lack many of the visual or auditory cues that are usually used as the basis for impressions about fellow team members. We focus on the effects of the impressions formed in this context, and use social exchange theory to understand how these impressions affect team performance. Our pilot study, using content analysis (n = 191 students), suggested that most individuals believe that they can assess others' emotional authenticity in online settings by focusing on the content and tone of the messages. Our quantitative study examined the effects of these assessments. Structural equation modeling (SEM) analysis (n = 81 student teams) suggested that team-level trust and teamwork behaviors mediate the relationship between team emotional authenticity and team performance, and illuminate the importance of team emotional authenticity for team processes and outcomes.
Networked Experiments and Scientific Resource Sharing in Cooperative Knowledge Spaces
ERIC Educational Resources Information Center
Cikic, Sabine; Jeschke, Sabina; Ludwig, Nadine; Sinha, Uwe; Thomsen, Christian
2007-01-01
Cooperative knowledge spaces create new potentials for the experimental fields in natural sciences and engineering because they enhance the accessibility of experimental setups through virtual laboratories and remote technology, opening them for collaborative and distributed usage. A concept for extending existing virtual knowledge spaces for the…
Reynolds, Christopher R; Muggleton, Stephen H; Sternberg, Michael J E
2015-01-01
The use of virtual screening has become increasingly central to the drug development pipeline, with ligand-based virtual screening used to screen databases of compounds to predict their bioactivity against a target. These databases can only represent a small fraction of chemical space, and this paper describes a method of exploring synthetic space by applying virtual reactions to promising compounds within a database and generating focussed libraries of predicted derivatives. A ligand-based virtual screening tool, Investigational Novel Drug Discovery by Example (INDDEx), is used as the basis for a system of virtual reactions. The use of virtual reactions is estimated to open up a space of 1.21×10^12 potential molecules. A de novo design algorithm known as Partial Logical-Rule Reactant Selection (PLoRRS) is introduced and incorporated into the INDDEx methodology. PLoRRS uses logical rules from the INDDEx model to select reactants for the de novo generation of potentially active products. The PLoRRS method is found to significantly increase the likelihood of retrieving molecules similar to known actives (p = 0.016). Case studies demonstrate that the virtual reactions produce molecules highly similar to known actives, including known blockbuster drugs. PMID:26583052
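For readers unfamiliar with what a "virtual reaction" looks like in practice, the fragment below shows the generic idea using RDKit's reaction-SMARTS machinery. This is ordinary RDKit usage, not the INDDEx/PLoRRS code, and the reaction template and reactants are arbitrary examples.

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# Generic illustration of a virtual reaction (plain RDKit, not INDDEx/PLoRRS):
# an amide-coupling template applied to an acid and an amine enumerates a
# predicted derivative for a focused library.
rxn = AllChem.ReactionFromSmarts(
    "[C:1](=[O:2])[OX2H1].[N;H2:3]>>[C:1](=[O:2])[N:3]")
acid = Chem.MolFromSmiles("CC(=O)O")         # acetic acid
amine = Chem.MolFromSmiles("NCc1ccccc1")     # benzylamine
for prods in rxn.RunReactants((acid, amine)):
    print(Chem.MolToSmiles(prods[0]))        # expected: CC(=O)NCc1ccccc1
```

Enumerating many such templates over many promising database hits is what expands the searchable space to the order of 10^12 molecules quoted above.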
The NASA Goddard Space Flight Center Virtual Science Fair
NASA Technical Reports Server (NTRS)
Bolognese, Jeff; Walden, Harvey; Obenschain, Arthur F. (Technical Monitor)
2001-01-01
This report describes the development of the NASA Goddard Space Flight Center Virtual Science Fair, including its history and outgrowth from the traditional regional science fairs supported by NASA. The results of the 1999 Virtual Science Fair pilot program, the mechanics of running the 2000 Virtual Science Fair and its results, and comments and suggestions for future Virtual Science Fairs are provided. The appendices to the report contain supporting documentation, including the original proposal for this project, the judging criteria, the user's guide and the judge's guide to the Virtual Science Fair Web site, the Fair publicity brochure and the Fair award designs, judges' and students' responses to survey questions about the Virtual Science Fair, and lists of student entries to both the 1999 and 2000 Fairs.
2005-06-07
JSC2005-E-21191 (7 June 2005) --- Astronaut Steven G. MacLean, STS-115 mission specialist representing the Canadian Space Agency, uses the virtual reality lab at the Johnson Space Center to train for his duties aboard the space shuttle. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare the entire team for dealing with space station elements.
ERIC Educational Resources Information Center
Nicholls, Jennifer; Philip, Robyn
2012-01-01
This paper explores the design of virtual and physical learning spaces developed for students of drama and theatre studies. What can we learn from the traditional drama workshop that will inform the design of drama and theatre spaces created in technology-mediated learning environments? The authors examine four examples of spaces created for…
A decrease in brain activation associated with driving when listening to someone speak.
Just, Marcel Adam; Keller, Timothy A; Cynkar, Jacquelyn
2008-04-18
Behavioral studies have shown that engaging in a secondary task, such as talking on a cellular telephone, disrupts driving performance. This study used functional magnetic resonance imaging (fMRI) to investigate the impact of concurrent auditory language comprehension on the brain activity associated with a simulated driving task. Participants steered a vehicle along a curving virtual road, either undisturbed or while listening to spoken sentences that they judged as true or false. The dual-task condition produced a significant deterioration in driving accuracy caused by the processing of the auditory sentences. At the same time, the parietal lobe activation associated with spatial processing in the undisturbed driving task decreased by 37% when participants concurrently listened to sentences. The findings show that language comprehension performed concurrently with driving draws mental resources away from the driving and produces deterioration in driving performance, even when it does not require holding or dialing a phone.
Characterization of active hair-bundle motility by a mechanical-load clamp
NASA Astrophysics Data System (ADS)
Salvi, Joshua D.; Maoiléidigh, Dáibhid Ó.; Fabella, Brian A.; Tobin, Mélanie; Hudspeth, A. J.
2015-12-01
Active hair-bundle motility endows hair cells with several traits that augment auditory stimuli. The activity of a hair bundle might be controlled by adjusting its mechanical properties. Indeed, the mechanical properties of bundles vary between different organisms and along the tonotopic axis of a single auditory organ. Motivated by these biological differences and a dynamical model of hair-bundle motility, we explore how adjusting the mass, drag, stiffness, and offset force applied to a bundle controls its dynamics and response to external perturbations. Utilizing a mechanical-load clamp, we systematically mapped the two-dimensional state diagram of a hair bundle. The clamp system used a real-time processor to tightly control each of the virtual mechanical elements. Increasing the stiffness of a hair bundle advances its operating point from a spontaneously oscillating regime into a quiescent regime. As predicted by a dynamical model of hair-bundle mechanics, this boundary constitutes a Hopf bifurcation.
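The oscillating-to-quiescent boundary described in the last two sentences is the signature of a Hopf bifurcation, which can be illustrated with the generic Hopf normal form. The sketch below is a hypothetical toy, not the published hair-bundle model: a single control parameter mu stands in for the effect of load stiffness, and crossing mu = 0 silences a spontaneously oscillating system.

```python
import numpy as np

# Toy illustration (not the published model): Hopf normal form
#   dz/dt = (mu + i*omega) z - |z|^2 z,
# where mu plays the role of (negative) load stiffness. mu > 0 gives a
# spontaneous limit cycle; mu < 0 gives a quiescent fixed point.

def simulate(mu, omega=2 * np.pi * 10, dt=1e-4, n=50000):
    z = 0.01 + 0j                        # small initial displacement
    traj = np.empty(n)
    for i in range(n):
        z += dt * ((mu + 1j * omega) * z - (abs(z) ** 2) * z)
        traj[i] = z.real
    return traj

for mu in (5.0, -5.0):                   # soft load vs. stiff load
    rms = simulate(mu)[-10000:].std()    # RMS of steady-state displacement
    state = "oscillating" if rms > 1e-3 else "quiescent"
    print(f"mu = {mu:+.1f}: RMS displacement ≈ {rms:.4f} -> {state}")
```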
Auditory memory can be object based.
Dyson, Benjamin J; Ishfaq, Feraz
2008-04-01
Identifying how memories are organized remains a fundamental issue in psychology. Previous work has shown that visual short-term memory is organized according to the object of origin, with participants being better at retrieving multiple pieces of information from the same object than from different objects. However, it is not yet clear whether similar memory structures are employed for other modalities, such as audition. Under analogous conditions in the auditory domain, we found that short-term memories for sound can also be organized according to object, with a same-object advantage being demonstrated for the retrieval of information in an auditory scene defined by two complex sounds overlapping in both space and time. Our results provide support for the notion of an auditory object, in addition to the continued identification of similar processing constraints across visual and auditory domains. The identification of modality-independent organizational principles of memory, such as object-based coding, suggests possible mechanisms by which the human processing system remembers multimodal experiences.
The capture and recreation of 3D auditory scenes
NASA Astrophysics Data System (ADS)
Li, Zhiyun
The main goal of this research is to develop the theory and implement practical tools (in both software and hardware) for the capture and recreation of 3D auditory scenes. Our research is expected to have applications in virtual reality, telepresence, film, music, video games, auditory user interfaces, and sound-based surveillance. The first part of our research is concerned with sound capture via a spherical microphone array. The advantage of this array is that it can be steered into any 3D direction digitally with the same beampattern. We develop design methodologies to achieve flexible microphone layouts, optimal beampattern approximation and robustness constraints. We also design novel hemispherical and circular microphone array layouts for more spatially constrained auditory scenes. Using the captured audio, we then propose a unified and simple approach for recreating auditory scenes by exploiting the reciprocity principle that holds between the capture and recreation processes. This approach makes the system easy to build and practical. Using this approach, we can capture the 3D sound field with a spherical microphone array and recreate it using a spherical loudspeaker array, ensuring that the recreated sound field matches the recorded field up to a high order of spherical harmonics. For some regular or semi-regular microphone layouts, we design an efficient parallel implementation of the multi-directional spherical beamformer by using the rotational symmetries of the beampattern and of the spherical microphone array. This can be implemented in either software or hardware and easily adapted to other regular or semi-regular microphone layouts. In addition, we extend this approach to headphone-based systems. Design examples and simulation results are presented to verify our algorithms. Prototypes are built and tested in real-world auditory scenes.
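The claim that the array "can be steered into any 3D direction digitally with the same beampattern" has a compact textbook illustration: for an ideal order-N spherical-harmonic (plane-wave decomposition) beamformer, the pattern depends only on the angle Theta between the look and arrival directions, so it is identical for every steering direction. The sketch below evaluates that axisymmetric pattern; it is a standard result, not the dissertation's design code.

```python
import numpy as np
from numpy.polynomial.legendre import legval

# Standard result (not the thesis code): an ideal order-N spherical-harmonic
# beamformer has the axisymmetric beampattern
#   B(Theta) = sum_{n=0}^{N} (2n+1)/(4*pi) * P_n(cos Theta).
# Because B depends only on Theta, steering changes nothing but the look
# direction: the same pattern points anywhere on the sphere.

def beampattern(theta, order):
    coeffs = [(2 * n + 1) / (4 * np.pi) for n in range(order + 1)]
    return legval(np.cos(theta), coeffs)   # Legendre series in cos(theta)

theta = np.linspace(0, np.pi, 181)
for N in (1, 3, 6):
    b = beampattern(theta, N)
    b_db = 20 * np.log10(np.abs(b) / np.abs(b).max())
    width = np.degrees(theta[np.argmax(b_db < -3)])   # main lobe narrows with order
    print(f"order {N}: -3 dB half-width ≈ {width:.1f} deg")
```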
Using Virtual Simulations in the Design of 21st Century Space Science Environments
NASA Technical Reports Server (NTRS)
Hutchinson, Sonya L.; Alves, Jeffery R.
1996-01-01
Space technology has been advancing rapidly in the past decade. This can be attributed to the future construction of the International Space Station (ISS). New innovations must constantly be engineered to make the ISS the safest, highest-quality research facility in space. Since space science data must often be gathered by crew members, more attention must be paid to crew safety and comfort. Virtual simulations are now being used to design environments that crew members can live in for long periods of time without harmful effects on their bodies. This paper gives a few examples of the ergonomic design problems that arise on manned space flights, and design solutions that follow NASA's strategic commitment to customer satisfaction. The conclusions show that virtual simulations are a great asset to 21st century design.
STS-134 crew in Virtual Reality Lab during their MSS/EVAA SUPT2 Team training
2010-08-27
JSC2010-E-121058 (27 Aug. 2010) --- NASA astronauts Michael Fincke (foreground) and Greg Chamitoff, both STS-134 mission specialists, use virtual reality hardware in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to rehearse some of their duties on the upcoming mission to the International Space Station. This type of virtual reality training allows the astronauts to wear a helmet and special gloves while looking at computer displays simulating actual movements around the various locations on the station hardware with which they will be working. Photo credit: NASA or National Aeronautics and Space Administration
STS-134 crew in Virtual Reality Lab during their MSS/EVAA SUPT2 Team training
2010-08-27
JSC2010-E-121052 (27 Aug. 2010) --- NASA astronauts Michael Fincke (foreground) and Greg Chamitoff, both STS-134 mission specialists, use virtual reality hardware in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to rehearse some of their duties on the upcoming mission to the International Space Station. This type of virtual reality training allows the astronauts to wear a helmet and special gloves while looking at computer displays simulating actual movements around the various locations on the station hardware with which they will be working. Photo credit: NASA or National Aeronautics and Space Administration
STS-134 crew in Virtual Reality Lab during their MSS/EVAA SUPT2 Team training
2010-08-27
JSC2010-E-121055 (27 Aug. 2010) --- NASA astronauts Michael Fincke (right) and Greg Chamitoff, both STS-134 mission specialists, use virtual reality hardware in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to rehearse some of their duties on the upcoming mission to the International Space Station. This type of virtual reality training allows the astronauts to wear a helmet and special gloves while looking at computer displays simulating actual movements around the various locations on the station hardware with which they will be working. Photo credit: NASA or National Aeronautics and Space Administration
Movement goals and feedback and feedforward control mechanisms in speech production
Perkell, Joseph S.
2010-01-01
Studies of speech motor control are described that support a theoretical framework in which fundamental control variables for phonemic movements are multi-dimensional regions in auditory and somatosensory spaces. Auditory feedback is used to acquire and maintain auditory goals and in the development and function of feedback and feedforward control mechanisms. Several lines of evidence support the idea that speakers with more acute sensory discrimination acquire more distinct goal regions and therefore produce speech sounds with greater contrast. Feedback modification findings indicate that fluently produced sound sequences are encoded as feedforward commands, and feedback control serves to correct mismatches between expected and produced sensory consequences. PMID:22661828
NASA Technical Reports Server (NTRS)
Sierhuis, Maarten; Clancey, William J.; Damer, Bruce; Brodsky, Boris; vanHoff, Ron
2007-01-01
A virtual worlds presentation technique with embodied, intelligent agents is being developed as an instructional medium suitable for presenting in situ training on long-term space flight. The system combines a behavioral element based on finite state automata, a behavior-based reactive architecture (also described as a subsumption architecture), and a belief-desire-intention agent structure. These three features are being integrated to describe a Brahms virtual environment model of extravehicular crew activity which could become a basis for procedure training during extended space flight.
Młynarski, Wiktor
2014-01-01
To date a number of studies have shown that receptive field shapes of early sensory neurons can be reproduced by optimizing coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient coding hypothesis explains the formation of neurons which explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. Firstly, it is demonstrated that a linear efficient coding transform, Independent Component Analysis (ICA), trained on spectrograms of naturalistic simulated binaural sounds extracts spatial information present in the signal. A simple hierarchical ICA extension allowing for decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of learned spectrogram features suffices to perform accurate sound localization. Representation of the auditory space is therefore learned in a purely unsupervised way by maximizing the coding efficiency and without any task-specific constraints. These results imply that efficient coding is a useful strategy for learning structures which allow for making behaviorally vital inferences about the environment. PMID:24639644
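The first-layer result, that ICA trained on binaural spectrograms extracts spatial information, can be imitated on toy data. The sketch below is a deliberately crude stand-in for the paper's pipeline: simulated "spectrogram frames" carry an interaural level difference that varies with azimuth, FastICA is trained on the concatenated left/right frames, and spatially selective components are identified by correlating their activations with the true azimuth. The level-difference model and all parameters are invented.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Toy stand-in for the paper's approach (not its actual pipeline): binaural
# frames whose interaural level difference (ILD) depends on source azimuth.
rng = np.random.default_rng(0)
n_frames, n_freq = 5000, 32
azimuth = rng.uniform(-90, 90, n_frames)             # degrees
spectra = rng.exponential(1.0, (n_frames, n_freq))   # random source spectra
ild = azimuth / 90.0                                 # crude ILD model
left = spectra * (1 - 0.5 * ild[:, None])
right = spectra * (1 + 0.5 * ild[:, None])
X = np.log(np.hstack([left, right]) + 1e-6)          # log "spectrogram" features

ica = FastICA(n_components=10, random_state=0, max_iter=1000)
S = ica.fit_transform(X)                             # per-frame activations

# Components whose activation tracks azimuth behave like "spatial" units.
corr = [abs(np.corrcoef(S[:, k], azimuth)[0, 1]) for k in range(S.shape[1])]
print("max |corr(component, azimuth)| =", round(max(corr), 2))
```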
STS-134 crew and Expedition 24/25 crew member Shannon Walker
2010-03-25
JSC2010-E-043667 (25 March 2010) --- NASA astronaut Mark Kelly, STS-134 commander, uses the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of his duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare crew members for dealing with space station elements.
STS-120 crew along with Expedition crew members Dan Tani and Sandra Magnus
2007-08-09
JSC2007-E-41540 (9 Aug. 2007) --- Astronauts Pamela A. Melroy, STS-120 commander, and European Space Agency's (ESA) Paolo Nespoli, mission specialist, use the virtual reality lab at Johnson Space Center to train for their duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare the entire team for dealing with space station elements.
STS-126 crew during preflight VR LAB MSS EVA2 training
2008-04-14
JSC2008-E-033771 (14 April 2008) --- Astronaut Eric A. Boe, STS-126 pilot, uses the virtual reality lab in the Space Vehicle Mockup Facility at NASA's Johnson Space Center to train for some of his duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare the entire team for dealing with space station elements.
STS-133 crew during MSS/EVAA TEAM training in Virtual Reality Lab
2010-10-01
JSC2010-E-170877 (1 Oct. 2010) --- A large monitor is featured in this image during STS-133 crew members' training activities in the virtual reality laboratory in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center. Photo credit: NASA or National Aeronautics and Space Administration
1993-09-15
Virtual Reality (VR) can provide cost-effective methods to design and evaluate components and systems for maintenance and refurbishment operations. The Marshall Space Flight Center (MSFC) in Huntsville, Alabama began to utilize VR for design analysis of the X-34 experimental reusable space vehicle. Analysts at MSFC's Computer Applications and Virtual Environments (CAVE) facility used Head Mounted Displays (HMD) (pictured), spatial trackers and gesture inputs as a means to animate or inhabit a properly sized virtual human model. These models were used in a VR scenario as a way to determine functionality of space and maintenance requirements for the virtual X-34. The primary function of the virtual X-34 mockup was to support operations development and design analysis for engine removal, the engine compartment and the aft fuselage. This capability provided general visualization support to engineers and designers at MSFC and to the System Design Freeze Review at Orbital Sciences Corporation (OSC). The X-34 program was cancelled in 2001.
STS-120 crew along with Expedition crew members Dan Tani and Sandra Magnus
2007-08-09
JSC2007-E-41539 (9 Aug. 2007) --- Astronaut Pamela A. Melroy, STS-120 commander, uses the virtual reality lab at Johnson Space Center to train for her duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare the entire team for dealing with space station elements.
STS-EVA Mass Ops training of the STS-117 EVA crewmembers
2006-11-01
JSC2006-E-47612 (1 Nov. 2006) --- Astronaut Steven R. Swanson, STS-117 mission specialist, uses the virtual reality lab at Johnson Space Center to train for his duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare the entire team for dealing with space station elements.
STS-120 crew along with Expedition crew members Dan Tani and Sandra Magnus
2007-08-09
JSC2007-E-41532 (9 Aug. 2007) --- Astronaut Stephanie D. Wilson, STS-120 mission specialist, uses the virtual reality lab at Johnson Space Center to train for her duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare the entire team for dealing with space station elements.
STS-120 crew along with Expedition crew members Dan Tani and Sandra Magnus
2007-08-09
JSC2007-E-41531 (9 Aug. 2007) --- Astronaut Pamela A. Melroy, STS-120 commander, uses the virtual reality lab at Johnson Space Center to train for her duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare the entire team for dealing with space station elements.
STS-133 crew training in VR Lab with replacement crew member Steve Bowen
2011-01-24
JSC2011-E-006293 (24 Jan. 2011) --- NASA astronaut Michael Barratt, STS-133 mission specialist, uses the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of his duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare crew members for dealing with space station elements. Photo credit: NASA or National Aeronautics and Space Administration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cangiani, Andrea; Gyrya, Vitaliy; Manzini, Gianmarco
In this paper, we present the nonconforming virtual element method (VEM) for the numerical approximation of velocity and pressure in the steady Stokes problem. The pressure is approximated using discontinuous piecewise polynomials, while each component of the velocity is approximated using the nonconforming virtual element space. On each mesh element the local virtual space contains the space of polynomials of up to a given degree, plus suitable nonpolynomial functions. The virtual element functions are implicitly defined as the solution of local Poisson problems with polynomial Neumann boundary conditions. As typical in VEM approaches, the explicit evaluation of the non-polynomial functions is not required. This approach makes it possible to construct nonconforming (virtual) spaces for any polynomial degree regardless of the parity, for two- and three-dimensional problems, and for meshes with very general polygonal and polyhedral elements. We show that the nonconforming VEM is inf-sup stable and establish optimal a priori error estimates for the velocity and pressure approximations. Finally, numerical examples confirm the convergence analysis and the effectiveness of the method in providing high-order accurate approximations.
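For readers outside the VEM literature, the discrete problem and the stability property the abstract refers to take the standard saddle-point form below (generic notation, not reproduced from the paper): discrete velocity and pressure spaces V_h and Q_h, discrete bilinear forms a_h and b_h, and the inf-sup (Ladyzhenskaya-Babuska-Brezzi) condition that guarantees well-posedness of the pair.

```latex
% Generic discrete Stokes saddle-point problem and inf-sup condition
% (standard notation; not copied from the paper): find (u_h, p_h) in
% V_h x Q_h such that
\begin{aligned}
  a_h(u_h, v_h) + b_h(v_h, p_h) &= (f, v_h) && \forall\, v_h \in V_h,\\
  b_h(u_h, q_h) &= 0 && \forall\, q_h \in Q_h,
\end{aligned}
\qquad\text{with}\qquad
\inf_{q_h \in Q_h \setminus \{0\}}\;
\sup_{v_h \in V_h \setminus \{0\}}
  \frac{b_h(v_h, q_h)}{\|v_h\|_{V}\,\|q_h\|_{Q}} \;\ge\; \beta > 0
% for a mesh-independent constant beta, which is what "inf-sup stable"
% asserts for the nonconforming virtual element pair.
```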
An investigation of the relation between sibilant production and somatosensory and auditory acuity
Ghosh, Satrajit S.; Matthies, Melanie L.; Maas, Edwin; Hanson, Alexandra; Tiede, Mark; Ménard, Lucie; Guenther, Frank H.; Lane, Harlan; Perkell, Joseph S.
2010-01-01
The relation between auditory acuity, somatosensory acuity and the magnitude of produced sibilant contrast was investigated with data from 18 participants. To measure auditory acuity, stimuli from a synthetic sibilant continuum ([s]-[ʃ]) were used in a four-interval, two-alternative forced choice adaptive-staircase discrimination task. To measure somatosensory acuity, small plastic domes with grooves of different spacing were pressed against each participant’s tongue tip and the participant was asked to identify one of four possible orientations of the grooves. Sibilant contrast magnitudes were estimated from productions of the words ‘said,’ ‘shed,’ ‘sid,’ and ‘shid’. Multiple linear regression revealed a significant relation indicating that a combination of somatosensory and auditory acuity measures predicts produced acoustic contrast. When the participants were divided into high- and low-acuity groups based on their median somatosensory and auditory acuity measures, separate ANOVA analyses with sibilant contrast as the dependent variable yielded a significant main effect for each acuity group. These results provide evidence that sibilant productions have auditory as well as somatosensory goals and are consistent with prior results and the theoretical framework underlying the DIVA model of speech production. PMID:21110603
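The core analysis, a multiple linear regression predicting produced contrast from the two acuity measures, is simple enough to sketch. The data below are made up; only the analysis structure mirrors the study.

```python
import numpy as np

# Sketch of the reported analysis on invented data: predict each speaker's
# produced sibilant contrast from auditory and somatosensory acuity.
rng = np.random.default_rng(2)
n = 18                                   # number of participants in the study
aud = rng.normal(0, 1, n)                # auditory acuity (standardized, fake)
som = rng.normal(0, 1, n)                # somatosensory acuity (standardized, fake)
contrast = 0.5 * aud + 0.4 * som + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), aud, som])   # intercept + two predictors
beta, *_ = np.linalg.lstsq(X, contrast, rcond=None)
pred = X @ beta
r2 = 1 - ((contrast - pred) ** 2).sum() / ((contrast - contrast.mean()) ** 2).sum()
print("coefficients:", np.round(beta, 2), " R^2 =", round(r2, 2))
```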
Transduction between worlds: using virtual and mixed reality for earth and planetary science
NASA Astrophysics Data System (ADS)
Hedley, N.; Lochhead, I.; Aagesen, S.; Lonergan, C. D.; Benoy, N.
2017-12-01
Virtual reality (VR) and augmented reality (AR) have the potential to transform the way we visualize multidimensional geospatial datasets in support of geoscience research, exploration and analysis. The beauty of virtual environments is that they can be built at any scale, users can view them at many levels of abstraction, move through them in unconventional ways, and experience spatial phenomena as if they had superpowers. Similarly, augmented reality allows you to bring the power of virtual 3D data visualizations into everyday spaces. Spliced together, these interface technologies hold incredible potential to support 21st-century geoscience. In my ongoing research, my team and I have made significant advances to connect data and virtual simulations with real geographic spaces, using virtual environments, geospatial augmented reality and mixed reality. These research efforts have yielded new capabilities to connect users with spatial data and phenomena. These innovations include: geospatial x-ray vision; flexible mixed reality; augmented 3D GIS; situated augmented reality 3D simulations of tsunamis and other phenomena interacting with real geomorphology; augmented visual analytics; and immersive GIS. These new modalities redefine the ways in which we can connect digital spaces of spatial analysis, simulation and geovisualization, with geographic spaces of data collection, fieldwork, interpretation and communication. In a way, we are talking about transduction between real and virtual worlds; taking a mixed reality approach, we can link the two. This paper presents a selection of our 3D geovisual interface projects in terrestrial, coastal, underwater and other environments. Using rigorous applied geoscience data, analyses and simulations, our research aims to transform the novelty of virtual and augmented reality interface technologies into game-changing mixed reality geoscience.
Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong
2013-01-01
This article aims to investigate whether auditory stimuli in the horizontal plane, particularly those originating from behind the participant, affect audiovisual integration by using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants, auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants, and audiovisual stimuli that included both visual stimuli and auditory stimuli originating from one of the four locations were simultaneously presented. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in front of or behind the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160-200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360-400 milliseconds. Our results confirmed that audiovisual integration was elicited even when auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be more sensitive to information received from behind than from either side.
STS-132 crew during their MSS/SIMP EVA3 OPS 4 training
2010-01-28
JSC2010-E-014952 (28 Jan. 2010) --- NASA astronauts Michael Good (seated) and Garrett Reisman, both STS-132 mission specialists, use the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of their duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare crew members for dealing with space station elements.
STS-134 crew and Expedition 24/25 crew member Shannon Walker
2010-03-25
JSC2010-E-043666 (25 March 2010) --- NASA astronauts Mark Kelly (background), STS-134 commander; and Andrew Feustel, mission specialist, use the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of their duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare crew members for dealing with space station elements.
STS-134 crew and Expedition 24/25 crew member Shannon Walker
2010-03-25
JSC2010-E-043668 (25 March 2010) --- NASA astronauts Mark Kelly (background), STS-134 commander; and Andrew Feustel, mission specialist, use the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of their duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare crew members for dealing with space station elements.
The Virtual Space Telescope: A New Class of Science Missions
NASA Technical Reports Server (NTRS)
Shah, Neerav; Calhoun, Philip
2016-01-01
Many science investigations proposed by GSFC require the alignment of two spacecraft across a long distance to form a virtual space telescope. Forming a virtual space telescope requires advances in Guidance, Navigation, and Control (GNC), enabling the distribution of monolithic telescopes across multiple space platforms. The capability to align multiple spacecraft to an inertial target is at a low maturity state, and we present a roadmap to advance the system-level capability to flight readiness in preparation for various science applications. An engineering proof of concept, the CANYVAL-X CubeSat mission, is presented. CANYVAL-X's advancement will decrease risk for a potential starshade mission that would fly with WFIRST.
Embodied collaboration support system for 3D shape evaluation in virtual space
NASA Astrophysics Data System (ADS)
Okubo, Masashi; Watanabe, Tomio
2005-12-01
Collaboration mainly consists of two tasks: the task each partner performs individually, and communication with the other partner. Both are important objectives for any collaboration support system. In this paper, a collaboration support system for 3D shape evaluation in virtual space is proposed on the basis of studies of both 3D shape evaluation and communication support in virtual space. The proposed system provides two viewpoints, one for each task. One is the viewpoint from behind the user's own avatar, for smooth communication. The other is that of the avatar's eyes, for 3D shape evaluation. Switching between the viewpoints satisfies the task conditions for 3D shape evaluation and communication. The system basically consists of a PC, an HMD and magnetic sensors, and users can share embodied interaction by observing the interaction between their avatars in virtual space. However, the HMD and magnetic sensors worn by the users would restrict nonverbal communication. We have therefore tried to compensate for the loss of the partner's avatar's nodding by introducing the speech-driven embodied interactive actor InterActor. A sensory evaluation by paired comparison of 3D shapes in collaborative situations in virtual space and in real space, together with a questionnaire, was performed. The result demonstrates the effectiveness of InterActor's nodding in the collaborative situation.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-22
... office space'' leased by Respondents from a company called Servcorp. See note 4. infra. It is BIS's understanding that other persons also rent ``virtual office space'' at this address. The only current users at... ship those items to Iran through third countries. Respondents use the leased virtual office space in...
Attention to sound improves auditory reliability in audio-tactile spatial optimal integration.
Vercillo, Tiziana; Gori, Monica
2015-01-01
The role of attention in multisensory processing is still poorly understood. In particular, it is unclear whether directing attention toward a sensory cue dynamically reweights cue reliability during integration of multiple sensory signals. In this study, we investigated the impact of attention on combining audio-tactile signals in an optimal fashion. We used the Maximum Likelihood Estimation (MLE) model to predict audio-tactile spatial localization on the body surface. We developed a new audio-tactile device composed of several small units, each consisting of a speaker and a tactile vibrator independently controllable by external software. We tested participants in an attentional and a non-attentional condition. In the attentional experiment, participants performed a dual-task paradigm: they were required to evaluate the duration of a sound while performing an audio-tactile spatial task. Three unisensory or multisensory stimuli, conflicting or non-conflicting sounds and vibrations arranged along the horizontal axis, were presented sequentially. In the primary task, participants had to evaluate, in a space bisection task, the position of the second stimulus (the probe) with respect to the others (the standards). In the secondary task, they had to report occasional changes in the duration of the second auditory stimulus. In the non-attentional condition, participants performed only the primary task (space bisection). Our results showed enhanced auditory precision (and auditory weights) in the attentional condition with respect to the non-attentional control condition. The results of this study support the idea that modality-specific attention modulates multisensory integration.
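The MLE prediction the study tests is the standard reliability-weighted cue-combination rule: each cue is weighted by its inverse variance, so sharpening auditory precision (e.g., by attending to sound) increases the auditory weight. A minimal sketch, with invented numbers:

```python
import numpy as np

# Textbook MLE cue-combination rule (the model named in the abstract; all
# parameter values below are made up):
#   w_i = (1/sigma_i^2) / sum_j (1/sigma_j^2)
#   x_hat = sum_i w_i * x_i,  sigma_hat^2 = 1 / sum_j (1/sigma_j^2)

def mle_integrate(estimates, sigmas):
    prec = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    w = prec / prec.sum()
    x_hat = float(np.dot(w, estimates))
    sigma_hat = float(np.sqrt(1.0 / prec.sum()))
    return x_hat, sigma_hat, w

# Audio-tactile example: attending to sound sharpens auditory precision,
# which shifts weight toward the auditory cue.
x_aud, x_tac = 10.0, 6.0                      # conflicting position estimates (cm)
for label, s_aud in (("unattended", 4.0), ("attended", 2.0)):
    x_hat, s_hat, w = mle_integrate([x_aud, x_tac], [s_aud, 1.5])
    print(f"{label}: auditory weight = {w[0]:.2f}, fused = {x_hat:.1f} cm")
```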
Młynarski, Wiktor
2015-05-01
In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. The peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a "panoramic" code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that the properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left- and right-ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude; both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. The spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding.
Auditory and Vestibular Issues Related to Human Spaceflight
NASA Technical Reports Server (NTRS)
Danielson, Richard W.; Wood, Scott J.
2009-01-01
Human spaceflight provides unique opportunities to study human vestibular and auditory systems. This session will discuss 1) vestibular adaptive processes reflected by pronounced perceptual and motor coordination problems during, and after, space missions; 2) vestibular diagnostic and rehabilitative techniques (used to promote recovery after living in altered gravity environments) that may be relevant to treatment of vestibular disorders on earth; and 3) unique acoustical challenges to hearing loss prevention and crew performance during spaceflight missions.
NASA Astrophysics Data System (ADS)
Terzopoulos, Demetri; Qureshi, Faisal Z.
Computer vision and sensor networks researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called "Virtual Vision", which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.
Akram, Sahar; Presacco, Alessandro; Simon, Jonathan Z.; Shamma, Shihab A.; Babadi, Behtash
2015-01-01
The underlying mechanism of how the human brain solves the cocktail party problem is largely unknown. Recent neuroimaging studies, however, suggest salient temporal correlations between the auditory neural response and the attended auditory object. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects, we propose a decoding approach for tracking the attentional state while subjects are selectively listening to one of the two speech streams embedded in a competing-speaker environment. We develop a biophysically inspired state-space model to account for the modulation of the neural response with respect to the attentional state of the listener. The constructed decoder is based on a maximum a posteriori (MAP) estimate of the state parameters via the Expectation Maximization (EM) algorithm. Using only the envelopes of the two speech streams as covariates, the proposed decoder enables us to track the attentional state of the listener with a temporal resolution on the order of seconds, together with statistical confidence intervals. We evaluate the performance of the proposed model using numerical simulations and experimentally measured evoked MEG responses from the human brain. Our analysis reveals considerable performance gains provided by the state-space model in terms of temporal resolution, computational complexity and decoding accuracy. PMID:26436490
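The paper's decoder is an EM-fitted state-space model; the toy below captures only the general idea with a much simpler stand-in, not the authors' algorithm. A binary latent state marks the attended speaker, each analysis window yields a noisy measure of how strongly the neural response tracks each speaker's envelope, and a forward HMM filter tracks the posterior over the attended stream. All means, variances, and transition probabilities are invented.

```python
import numpy as np

# Toy stand-in for the attention decoder (not the paper's EM/MAP method):
# z_t in {0, 1} is the attended speaker; y_t[k] is a per-window neural
# tracking score for speaker k, with a higher mean for the attended one.
rng = np.random.default_rng(1)
T = 120                                      # one-second analysis windows
z = np.where(np.arange(T) < 60, 0, 1)        # listener switches attention at t = 60
mu_att, mu_un, sd = 0.3, 0.1, 0.15
y = rng.normal(mu_un, sd, (T, 2))
y[np.arange(T), z] = rng.normal(mu_att, sd, T)   # attended stream tracks better

p_stay = 0.95                                # attention switches are rare
A = np.array([[p_stay, 1 - p_stay], [1 - p_stay, p_stay]])

def lik(y_t):                                # observation likelihood per state
    g = lambda x, m: np.exp(-0.5 * ((x - m) / sd) ** 2)
    return np.array([g(y_t[0], mu_att) * g(y_t[1], mu_un),
                     g(y_t[0], mu_un) * g(y_t[1], mu_att)])

post = np.full(2, 0.5)                       # uniform prior over attended stream
decoded = []
for t in range(T):
    post = lik(y[t]) * (A.T @ post)          # predict, then update
    post /= post.sum()
    decoded.append(int(np.argmax(post)))
print("decoding accuracy:", np.mean(np.array(decoded) == z))
```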
Psychophysics and Neuronal Bases of Sound Localization in Humans
Ahveninen, Jyrki; Kopco, Norbert; Jääskeläinen, Iiro P.
2013-01-01
Localization of sound sources is a considerable computational challenge for the human brain. Whereas the visual system can process basic spatial information in parallel, the auditory system lacks a straightforward correspondence between external spatial locations and sensory receptive fields. Consequently, the question how different acoustic features supporting spatial hearing are represented in the central nervous system is still open. Functional neuroimaging studies in humans have provided evidence for a posterior auditory “where” pathway that encompasses non-primary auditory cortex areas, including the planum temporale (PT) and posterior superior temporal gyrus (STG), which are strongly activated by horizontal sound direction changes, distance changes, and movement. However, these areas are also activated by a wide variety of other stimulus features, posing a challenge for the interpretation that the underlying areas are purely spatial. This review discusses behavioral and neuroimaging studies on sound localization, and some of the competing models of representation of auditory space in humans. PMID:23886698
NASA Astrophysics Data System (ADS)
Freund, Eckhard; Rossmann, Juergen
2002-02-01
In 2004, the European COLUMBUS module is to be attached to the International Space Station. On the way to the successful planning, deployment and operation of the module, computer-generated and animated models are being used to optimize performance. Under contract of the German Space Agency DLR, it has become IRF's task to provide a Projective Virtual Reality system: a virtual world built after the planned layout of the COLUMBUS module that lets astronauts and experimenters practice operational procedures and the handling of experiments. The key features of the system currently being realized comprise the possibility for distributed multi-user access to the virtual lab and the visualization of real-world experiment data. Through the capability to share the virtual world, cooperative operations can be practiced easily, and trainers and trainees can work together more effectively in the shared virtual environment. The capability to visualize real-world data will be used to introduce measured experiment data into the virtual world online in order to interact realistically with the science-reference model hardware: the user's actions in the virtual world are translated into corresponding changes of the inputs of the science-reference model hardware; the measured data is then in turn fed back into the virtual world. During the operation of COLUMBUS, the capabilities for distributed access and for visualizing measured data through metaphors and augmentations of the virtual world may be used to provide virtual access to the COLUMBUS module, e.g. via the Internet. Currently, finishing touches are being put on the system. In November 2001 the virtual world is to be operational, so that besides the design and the key ideas, first experimental results can be presented.
STS-134 crew and Expedition 24/25 crew member Shannon Walker
2010-03-25
JSC2010-E-043673 (25 March 2010) --- NASA astronauts Gregory H. Johnson, STS-134 pilot; and Shannon Walker, Expedition 24/25 flight engineer, use the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of their duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare crew members for dealing with space station elements.
STS-134 crew and Expedition 24/25 crew member Shannon Walker
2010-03-25
JSC2010-E-043661 (25 March 2010) --- NASA astronauts Gregory H. Johnson, STS-134 pilot; and Shannon Walker, Expedition 24/25 flight engineer, use the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of their duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare crew members for dealing with space station elements.
STS-132 crew during their MSS/SIMP EVA3 OPS 4 training
2010-01-28
JSC2010-E-014953 (28 Jan. 2010) --- NASA astronauts Piers Sellers, STS-132 mission specialist; and Tracy Caldwell Dyson, Expedition 23/24 flight engineer, use the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of their duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare crew members for dealing with space station elements.
STS-132 crew during their MSS/SIMP EVA3 OPS 4 training
2010-01-28
JSC2010-E-014949 (28 Jan. 2010) --- NASA astronauts Piers Sellers, STS-132 mission specialist; and Tracy Caldwell Dyson, Expedition 23/24 flight engineer, use the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of their duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare crew members for dealing with space station elements.
STS-132 crew during their MSS/SIMP EVA3 OPS 4 training
2010-01-28
JSC2010-E-014956 (28 Jan. 2010) --- NASA astronauts Ken Ham (left foreground), STS-132 commander; Michael Good, mission specialist; and Tony Antonelli (right), pilot, use the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of their duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare crew members for dealing with space station elements.
STS-131 crew during VR Lab MSS/EVAB SUPT3 Team 91016 training
2009-09-25
JSC2009-E-214346 (25 Sept. 2009) --- Japan Aerospace Exploration Agency (JAXA) astronaut Naoko Yamazaki, STS-131 mission specialist, uses the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of her duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare the entire team for dealing with space station elements.
STS-131 crew during VR Lab MSS/EVAB SUPT3 Team 91016 training
2009-09-25
JSC2009-E-214328 (25 Sept. 2009) --- Japan Aerospace Exploration Agency (JAXA) astronaut Naoko Yamazaki, STS-131 mission specialist, uses the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of her duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare the entire team for dealing with space station elements.
STS-132 crew during their MSS/SIMP EVA3 OPS 4 training
2010-01-28
JSC2010-E-014951 (28 Jan. 2010) --- NASA astronauts Michael Good (seated), Garrett Reisman (right foreground), both STS-132 mission specialists; and Tony Antonelli, pilot, use the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of their duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare crew members for dealing with space station elements.
STS-134 crew and Expedition 24/25 crew member Shannon Walker
2010-03-25
JSC2010-E-043662 (25 March 2010) --- NASA astronauts Gregory H. Johnson, STS-134 pilot; and Shannon Walker, Expedition 24/25 flight engineer, use the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of their duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare crew members for dealing with space station elements.
STS-131 crew during VR Lab MSS/EVAB SUPT3 Team 91016 training
2009-09-25
JSC2009-E-214321 (25 Sept. 2009) --- NASA astronauts James P. Dutton Jr., STS-131 pilot; and Stephanie Wilson, mission specialist, use the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of their duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare the entire team for dealing with space station elements.
NASA Technical Reports Server (NTRS)
Searcy, Brittani
2017-01-01
Using virtual environments to assess complex large scale human tasks provides timely and cost effective results to evaluate designs and to reduce operational risks during assembly and integration of the Space Launch System (SLS). NASA's Marshall Space Flight Center (MSFC) uses a suite of tools to conduct integrated virtual analysis during the design phase of the SLS Program. Siemens Jack is a simulation tool that allows engineers to analyze human interaction with CAD designs by placing a digital human model into the environment to test different scenarios and assess the design's compliance to human factors requirements. Engineers at MSFC are using Jack in conjunction with motion capture and virtual reality systems in MSFC's Virtual Environments Lab (VEL). The VEL provides additional capability beyond standalone Jack to record and analyze a person performing a planned task to assemble the SLS at Kennedy Space Center (KSC). The VEL integrates the Vicon Blade motion capture system, Siemens Jack, Oculus Rift, and other virtual tools to perform human factors assessments. By using motion capture and virtual reality, a more accurate breakdown and understanding of how an operator will perform a task can be gained. Through virtual analysis, engineers are able to determine whether a specific task can be safely performed by both a 5th percentile (approx. 5 ft) female and a 95th percentile (approx. 6 ft 1 in) male. In addition, the analysis helps identify any tools or other accommodations that may be needed to complete the task. These assessments are critical for the safety of ground support engineers and for keeping launch operations on schedule. Motion capture allows engineers to save and examine human movements on a frame by frame basis, while virtual reality gives the actor (person performing a task in the VEL) an immersive view of the task environment. This presentation will discuss the need for human factors analysis for SLS and the benefits of analyzing tasks in NASA MSFC's VEL.
STS-120 crew along with Expedition crew members Dan Tani and Sandra Magnus
2007-08-09
JSC2007-E-41541 (9 Aug. 2007) --- Astronauts Stephanie Wilson, STS-120 mission specialist, and Dan Tani, Expedition 16 flight engineer, use the virtual reality lab at Johnson Space Center to train for their duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare the entire team for dealing with space station elements.
STS-105 Crew Training in VR Lab
2001-03-15
JSC2001-00751 (15 March 2001) --- Astronaut Scott J. Horowitz, STS-105 mission commander, uses the virtual reality lab at the Johnson Space Center (JSC) to train for his duties aboard the Space Shuttle Discovery. This type of computer interface paired with virtual reality training hardware and software helps to prepare the entire team for dealing with International Space Station (ISS) elements.
STS-105 Crew Training in VR Lab
2001-03-15
JSC2001-00758 (15 March 2001) --- Astronaut Frederick W. Sturckow, STS-105 pilot, uses the virtual reality lab at the Johnson Space Center (JSC) to train for his duties aboard the Space Shuttle Discovery. This type of computer interface paired with virtual reality training hardware and software helps to prepare the entire team for dealing with International Space Station (ISS) elements.
2005-06-07
JSC2005-E-21192 (7 June 2005) --- Astronauts Christopher J. Ferguson (left), STS-115 pilot, and Daniel C. Burbank, mission specialist, use the virtual reality lab at the Johnson Space Center to train for their duties aboard the space shuttle. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare the entire team for dealing with space station elements.
ERIC Educational Resources Information Center
Chen, Judy F.; Warden, Clyde A.; Tai, David Wen-Shung; Chen, Farn-Shing; Chao, Chich-Yang
2011-01-01
Virtual spaces allow abstract representations of reality that not only encourage student self-directed learning but also reinforce core content of the learning objective through visual metaphors not reproducible in the physical world. One of the advantages of such a space is the ability to escape the restrictions of the physical classroom, yet…
Intelligent Virtual Station (IVS)
NASA Technical Reports Server (NTRS)
2002-01-01
The Intelligent Virtual Station (IVS) is enabling the integration of design, training, and operations capabilities into an intelligent virtual station for the International Space Station (ISS). A viewgraph of the IVS Remote Server is presented.
Human Machine Interfaces for Teleoperators and Virtual Environments Conference
NASA Technical Reports Server (NTRS)
1990-01-01
In a teleoperator system the human operator senses, moves within, and operates upon a remote or hazardous environment by means of a slave mechanism (a mechanism often referred to as a teleoperator). In a virtual environment system the interactive human machine interface is retained but the slave mechanism and its environment are replaced by a computer simulation. Video is replaced by computer graphics. The auditory and force sensations imparted to the human operator are similarly computer generated. In contrast to a teleoperator system, where the purpose is to extend the operator's sensorimotor system in a manner that facilitates exploration and manipulation of the physical environment, in a virtual environment system the purpose is to train, inform, alter, or study the human operator, or to modify the state of the computer and the information environment. A major application in which the human operator is the target is that of flight simulation. Although flight simulators have been around for more than a decade, they have had little impact outside aviation, presumably because the application was so specialized and so expensive.
Acute Inactivation of Primary Auditory Cortex Causes a Sound Localisation Deficit in Ferrets
Wood, Katherine C.; Town, Stephen M.; Atilgan, Huriye; Jones, Gareth P.
2017-01-01
The objective of this study was to demonstrate the efficacy of acute inactivation of brain areas by cooling in the behaving ferret and to demonstrate that cooling auditory cortex produced a localisation deficit that was specific to auditory stimuli. The effect of cooling on neural activity was measured in anesthetized ferret cortex. The behavioural effect of cooling was determined in a benchmark sound localisation task in which inactivation of primary auditory cortex (A1) is known to impair performance. Cooling strongly suppressed the spontaneous and stimulus-evoked firing rates of cortical neurons when the cooling loop was held at temperatures below 10°C, and this suppression was reversed when the cortical temperature recovered. Cooling of ferret auditory cortex during behavioural testing impaired sound localisation performance, with unilateral cooling producing selective deficits in the hemifield contralateral to cooling, and bilateral cooling producing deficits on both sides of space. The deficit in sound localisation induced by inactivation of A1 was not caused by motivational or locomotor changes since inactivation of A1 did not affect localisation of visual stimuli in the same context. PMID:28099489
NASA Technical Reports Server (NTRS)
Smith, Jeffrey D.; Twombly, I. Alexander; Maese, A. Christopher; Cagle, Yvonne; Boyle, Richard
2003-01-01
The International Space Station demonstrates the greatest capabilities of human ingenuity, international cooperation and technology development. The complexity of this space structure is unprecedented; and training astronaut crews to maintain all its systems, as well as perform a multitude of research experiments, requires the most advanced training tools and techniques. Computer simulation and virtual environments are currently used by astronauts to train for robotic arm manipulations and extravehicular activities; but now, with the latest computer technologies and recent successes in areas of medical simulation, the capability exists to train astronauts for more hands-on research tasks using immersive virtual environments. We have developed a new technology, the Virtual Glovebox (VGX), for simulation of experimental tasks that astronauts will perform aboard the Space Station. The VGX may also be used by crew support teams for design of experiments, testing equipment integration capability and optimizing the procedures astronauts will use. This is done through the 3D, desk-top sized, reach-in virtual environment that can simulate the microgravity environment in space. Additional features of the VGX allow for networking multiple users over the internet and operation of tele-robotic devices through an intuitive user interface. Although the system was developed for astronaut training and assisting support crews, Earth-bound applications, many emphasizing homeland security, have also been identified. Examples include training experts to handle hazardous biological and/or chemical agents in a safe simulation, operation of tele-robotic systems for assessing and defusing threats such as bombs, and providing remote medical assistance to field personnel through a collaborative virtual environment. Thus, the emerging VGX simulation technology, while developed for space-based applications, can serve a dual use facilitating homeland security here on Earth.
The Effects of Vision-Related Aspects on Noise Perception of Wind Turbines in Quiet Areas
Maffei, Luigi; Iachini, Tina; Masullo, Massimiliano; Aletta, Francesco; Sorrentino, Francesco; Senese, Vincenzo Paolo; Ruotolo, Francesco
2013-01-01
Preserving the soundscape and geographic extension of quiet areas is a great challenge against the wide-spreading of environmental noise. The E.U. Environmental Noise Directive underlines the need to preserve quiet areas as a new aim for the management of noise in European countries. At the same time, due to their low population density, rural areas characterized by suitable wind are considered appropriate locations for installing wind farms. However, despite the fact that wind farms are represented as environmentally friendly projects, these plants are often viewed as visual and audible intruders that spoil the landscape and generate noise. Even though the correlations are still unclear, it is obvious that visual impacts of wind farms could increase due to their size and coherence with respect to the rural/quiet environment. In this paper, by using the Immersive Virtual Reality technique, some visual and acoustical aspects of the impact of a wind farm on a sample of subjects were assessed and analyzed. The subjects were immersed in a virtual scenario that represented a typical rural outdoor scenario that they experienced at different distances from the wind turbines. The influence of the number and the colour of wind turbines on global, visual and auditory judgment was investigated. The main results showed that, regarding the number of wind turbines, the visual component has a weak effect on individual reactions, while the colour influences both visual and auditory individual reactions, although in a different way. PMID:23624578
NASA Astrophysics Data System (ADS)
Schmeil, Andreas; Eppler, Martin J.
Although virtual worlds and other types of multi-user 3D collaboration spaces have long been subjects of research and application, it remains unclear how best to benefit from meeting with colleagues and peers in a virtual environment with the aim of working together. Making use of the potential of virtual embodiment, i.e. being immersed in a space as a personal avatar, allows for innovative new forms of collaboration. In this paper, we present a framework that serves as a systematic formalization of collaboration elements in virtual environments. The framework is based on the semiotic distinctions among pragmatic, semantic and syntactic perspectives. It serves as a blueprint to guide users in designing, implementing, and executing virtual collaboration patterns tailored to their needs. We present two team and two community collaboration pattern examples as a result of the application of the framework: Virtual Meeting, Virtual Design Studio, Spatial Group Configuration, and Virtual Knowledge Fair. In conclusion, we also point out future research directions for this emerging domain.
Mulert, C; Juckel, G; Augustin, H; Hegerl, U
2002-10-01
The loudness dependency of the auditory evoked potentials (LDAEP) is used as an indicator of the central serotonergic system and predicts clinical response to serotonin agonists. So far, LDAEP has typically been investigated with dipole source analysis, because with this method the primary and secondary auditory cortex (with high versus low serotonergic innervation) can be separated at least in part. We have developed a new analysis procedure that uses an MRI probabilistic map of the primary auditory cortex in Talairach space and analyzed the current density in this region of interest with low resolution electromagnetic tomography (LORETA). LORETA is a tomographic localization method that calculates the current density distribution in Talairach space. In a group of patients with major depression (n=15), this new method predicted the response to a selective serotonin reuptake inhibitor (citalopram) at least to the same degree as the traditional dipole source analysis method (P=0.019 vs. P=0.028). The improvement on the Hamilton scale correlated significantly with the LORETA LDAEP values (0.56; P=0.031) but not with the dipole source analysis LDAEP values (0.43; P=0.11). The new tomographic LDAEP analysis is a promising tool for the analysis of the central serotonergic system.
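As context for the measure itself (separate from the LORETA source analysis above), LDAEP is commonly quantified as the slope of the N1/P2 peak-to-peak amplitude regressed on stimulus intensity; the following sketch uses made-up amplitudes purely to show the computation.

```python
# LDAEP sketch: slope of the N1/P2 peak-to-peak amplitude regressed on
# stimulus intensity. Amplitudes below are invented for illustration.
import numpy as np

intensities_db = np.array([60, 70, 80, 90, 100])   # stimulus levels (dB SPL)
n1p2_uv = np.array([4.1, 5.0, 6.2, 7.1, 8.3])      # N1/P2 amplitude (uV)

slope, intercept = np.polyfit(intensities_db, n1p2_uv, 1)
print(f"LDAEP slope: {slope:.3f} uV/dB")   # steeper slopes are taken to index
                                           # weaker serotonergic modulation
```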
Synchronous Learning Best Practices: An Action Research Study
ERIC Educational Resources Information Center
Warden, Clyde A.; Stanworth, James O.; Ren, Jian Biao; Warden, Antony R.
2013-01-01
Low cost and significant advances in technology now allow instructors to create their own virtual learning environments. Creating social interactions within a virtual space that emulates the physical classroom remains challenging. While students are familiar with virtual worlds and video meetings, they are inexperienced as virtual learners. Over a…
International Space Station (ISS)
2007-05-21
STS-118 astronaut and mission specialist Dafydd R. “Dave” Williams, representing the Canadian Space Agency, uses virtual reality hardware in the Space Vehicle Mock Up Facility at the Johnson Space Center to rehearse some of his duties for the upcoming mission. This type of virtual reality training allows the astronauts to wear special gloves and other gear while looking at computer displays simulating actual movements around the various locations on the station hardware with which they will be working.
STS-120 crew along with Expedition crew members Dan Tani and Sandra Magnus
2007-08-09
JSC2007-E-41533 (9 Aug. 2007) --- Astronauts Stephanie Wilson (left), STS-120 mission specialist; Sandra Magnus, Expedition 17 flight engineer; and Dan Tani, Expedition 16 flight engineer, use the virtual reality lab at Johnson Space Center to train for their duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare the entire team for dealing with space station elements.
Comprehension and Memory of Spatial and Temporal Event Components
2008-01-01
sitting in the leather chair listening to some music. [PROBE LAMP (filler)] He had headphones on, but Mary Agnes could still make out the lyrics. She... representation and processing of virtual spaces results in performance that is essentially identical to real spaces (e.g., Sun, Chan, & Campos, 2004) or with... that people treat virtual spaces in a manner very similar to real spaces (e.g., Sun, Chan, & Campos, 2004; Waller, Loomis, & Haun, 2004). The aim of
NASA Technical Reports Server (NTRS)
Dumas, Joseph D., II
1998-01-01
Several virtual reality I/O peripherals were successfully configured and integrated as part of the author's 1997 Summer Faculty Fellowship work. These devices, which were not supported by the developers of VR software packages, use new software drivers and configuration files developed by the author to allow them to be used with simulations developed using those software packages. The successful integration of these devices has added significant capability to the ANVIL lab at MSFC. In addition, the author was able to complete the integration of a networked virtual reality simulation of the Space Shuttle Remote Manipulator System docking Space Station modules which was begun as part of his 1996 Fellowship. The successful integration of this simulation demonstrates the feasibility of using VR technology for ground-based training as well as on-orbit operations.
An Optimized Trajectory Planning for Welding Robot
NASA Astrophysics Data System (ADS)
Chen, Zhilong; Wang, Jun; Li, Shuting; Ren, Jun; Wang, Quan; Cheng, Qunchao; Li, Wentao
2018-03-01
In order to improve welding efficiency and quality, this paper studies the combined planning of welding process parameters and spatial trajectory for a welding robot and proposes a trajectory planning method with high real-time performance, strong controllability and small welding error. By adding a virtual joint at the end-effector, an appropriate virtual joint model is established and the welding process parameters are represented by the virtual joint variables. The trajectory planning is carried out in the robot joint space, which makes the control of the welding process parameters more intuitive and convenient. By using the virtual joint model combined with the affine invariance of B-spline curves, the welding process parameters are indirectly controlled by controlling the motion curves of the real joints. With minimum time as the optimization goal, the welding process parameters and the joint-space trajectory are planned and optimized jointly.
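A minimal sketch of the virtual-joint idea, under stated assumptions: a hypothetical process parameter (wire-feed rate) is appended to the real joint vector and all columns are interpolated together as one joint-space B-spline, so the process parameter inherits the same smooth time law as the motion. The waypoints, units, and use of SciPy are illustrative choices, not the paper's implementation.

```python
# Virtual-joint sketch: a hypothetical wire-feed rate is planned together
# with three real joints as one cubic B-spline in joint space.
# Waypoints, units, and SciPy usage are illustrative assumptions.
import numpy as np
from scipy.interpolate import make_interp_spline

t_way = np.array([0.0, 1.0, 2.0, 3.0])     # waypoint times (s)
q_way = np.array([
    #  j1    j2    j3   virtual joint (assumed wire-feed rate, m/min)
    [0.00, 0.50, 0.20, 2.0],
    [0.30, 0.70, 0.10, 2.5],
    [0.60, 0.60, 0.05, 2.5],
    [0.90, 0.40, 0.00, 2.0],
])

spline = make_interp_spline(t_way, q_way, k=3)   # one spline, all columns
t = np.linspace(0.0, 3.0, 7)
q = spline(t)                   # joint positions + process parameter
qd = spline.derivative()(t)     # joint velocities + process-parameter rate

print(np.round(q, 3))
print(np.round(qd, 3))
```

Differentiating the same spline yields joint velocities and the process-parameter rate on a common time base, which is what makes the combined planning convenient.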
Iachini, Tina; Coello, Yann; Frassinetti, Francesca; Ruggiero, Gennaro
2014-01-01
Background Do peripersonal space for acting on objects and interpersonal space for interacting with conspecifics share common mechanisms and reflect the social valence of stimuli? To answer this question, we investigated whether these spaces refer to a similar or different physical distance. Methodology Participants provided reachability-distance (for potential action) and comfort-distance (for social processing) judgments towards human and non-human virtual stimuli while standing still (passive) or walking toward stimuli (active). Principal Findings Comfort-distance was larger than reachability-distance when participants were passive, but reachability and comfort distances were similar when participants were active. Both spaces were modulated by the social valence of stimuli (reduction with virtual females vs males, expansion with cylinder vs robot) and the gender of participants. Conclusions These findings reveal that peripersonal reaching space and interpersonal comfort space share a common motor nature and are sensitive, to different degrees, to social modulation. Therefore, social processing seems embodied and grounded in the body acting in space. PMID:25405344
Course Crash in Hybrid Space: An Exploration and Recommendations for Virtual Course Space
ERIC Educational Resources Information Center
Gerard, Joseph G.; Gerard, Reena Lederman; Casile, Maureen
2010-01-01
Understanding what hybrid space is, much less understanding what happens in that virtual realm, can raise difficult questions. For example, our campus's question "How do we define hybrid?" has kept us busy and guessing for over a year now. In this article, we offer a few suggestions on how to proceed with hybrid issues, including how to deal with…
Audiomotor Perceptual Training Enhances Speech Intelligibility in Background Noise.
Whitton, Jonathon P; Hancock, Kenneth E; Shannon, Jeffrey M; Polley, Daniel B
2017-11-06
Sensory and motor skills can be improved with training, but learning is often restricted to practice stimuli. As an exception, training on closed-loop (CL) sensorimotor interfaces, such as action video games and musical instruments, can impart a broad spectrum of perceptual benefits. Here we ask whether computerized CL auditory training can enhance speech understanding in levels of background noise that approximate a crowded restaurant. Elderly hearing-impaired subjects trained for 8 weeks on a CL game that, like a musical instrument, challenged them to monitor subtle deviations between predicted and actual auditory feedback as they moved their fingertip through a virtual soundscape. We performed our study as a randomized, double-blind, placebo-controlled trial by training other subjects in an auditory working-memory (WM) task. Subjects in both groups improved at their respective auditory tasks and reported comparable expectations for improved speech processing, thereby controlling for placebo effects. Whereas speech intelligibility was unchanged after WM training, subjects in the CL training group could correctly identify 25% more words in spoken sentences or digit sequences presented in high levels of background noise. Numerically, CL audiomotor training provided more than three times the benefit of our subjects' hearing aids for speech processing in noisy listening conditions. Gains in speech intelligibility could be predicted from gameplay accuracy and baseline inhibitory control. However, benefits did not persist in the absence of continuing practice. These studies employ stringent clinical standards to demonstrate that perceptual learning on a computerized audio game can transfer to "real-world" communication challenges.
Students' Experience of Problem-Based Learning in Virtual Space
ERIC Educational Resources Information Center
Gibbings, Peter; Lidstone, John; Bruce, Christine
2015-01-01
This paper reports outcomes of a study focused on discovering qualitatively different ways students experience problem-based learning in virtual space. A well-accepted and documented qualitative research method was adopted for this study. Five qualitatively different conceptions are described, each revealing characteristics of increasingly complex…
EXPLORING ENVIRONMENTAL DATA IN A HIGHLY IMMERSIVE VIRTUAL REALITY ENVIRONMENT
Geography inherently fills a 3D space and yet we struggle with displaying geography using, primarily, 2D display devices. Virtual environments offer a more realistically-dimensioned display space and this is being realized in the expanding area of research on 3D Geographic Infor...
Embodiment, Virtual Space, Temporality and Interpersonal Relations in Online Writing
ERIC Educational Resources Information Center
Adams, Catherine; van Manen, Max
2006-01-01
In this paper we discuss how online seminar participants experience dimensions of embodiment, virtual space, interpersonal relations, and temporality; and how interacting through reading-writing, by means of online technologies, creates conditions, situations, and actions of pedagogical influence and relational affectivities. We investigate what…
Młynarski, Wiktor
2015-01-01
In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for the maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a “panoramic” code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding. PMID:25996373
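The paper's hierarchical complex-valued model is beyond a short example, but the underlying principle, learning a sparse and efficient code for binaural input, can be demonstrated with an off-the-shelf dictionary learner on synthetic two-ear snippets; the data, dimensions, and hyperparameters below are all placeholders.

```python
# Sparse-coding demonstration (not the paper's hierarchical complex-valued
# model): learn a sparse dictionary for synthetic two-ear sound snippets.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(1)
n_snippets, n_per_ear = 500, 32
left = rng.standard_normal((n_snippets, n_per_ear))
# Right ear: delayed copy of the left plus noise, a crude stand-in for
# the interaural structure of natural binaural recordings.
right = np.roll(left, 2, axis=1) + 0.1 * rng.standard_normal((n_snippets, n_per_ear))
X = np.hstack([left, right])             # ears concatenated per snippet

model = DictionaryLearning(n_components=24, alpha=1.0, max_iter=20,
                           transform_algorithm="lasso_lars", random_state=0)
codes = model.fit_transform(X)           # sparse coefficients per snippet

print("dictionary shape:", model.components_.shape)
print("fraction of active coefficients:", float((codes != 0).mean()))
```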
Akeroyd, Michael A.; Chambers, John; Bullock, David; Palmer, Alan R.; Summerfield, A. Quentin; Nelson, Philip A.; Gatehouse, Stuart
2013-01-01
Cross-talk cancellation is a method for synthesising virtual auditory space using loudspeakers. One implementation is the “Optimal Source Distribution” technique [T. Takeuchi and P. Nelson, J. Acoust. Soc. Am. 112, 2786-2797 (2002)], in which the audio bandwidth is split across three pairs of loudspeakers, placed at azimuths of ±90°, ±15°, and ±3°, conveying low, mid and high frequencies, respectively. A computational simulation of this system was developed and verified against measurements made on an acoustic system using a manikin. Both the acoustic system and the simulation gave a wideband average cancellation of almost 25 dB. The simulation showed that when there was a mismatch between the head-related transfer functions used to set up the system and those of the final listener, the cancellation was reduced to an average of 13 dB. Moreover, in this case the binaural interaural time differences (ITDs) and interaural level differences (ILDs) delivered by the simulation of the OSD system often differed from the target values. It is concluded that only when the OSD system is set up with “matched” head-related transfer functions can it deliver accurate binaural cues. PMID:17348528
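A generic frequency-domain crosstalk canceller (not the three-way OSD layout) can illustrate what the simulation above evaluates: invert the 2x2 matrix of loudspeaker-to-ear transfer functions per frequency bin, with Tikhonov regularization to keep the inverse well behaved. The transfer functions below are synthetic placeholders, so the residual figure is illustrative only.

```python
# Generic frequency-domain crosstalk cancellation (not the OSD three-way
# layout): per frequency bin, invert the 2x2 speaker-to-ear matrix with
# Tikhonov regularization. Transfer functions are synthetic placeholders.
import numpy as np

n_f = 256
f = np.linspace(1.0, 8000.0, n_f)
delay = lambda tau: np.exp(-2j * np.pi * f * tau)

# H[k]: rows = (left ear, right ear), cols = (left speaker, right speaker).
H = np.empty((n_f, 2, 2), complex)
H[:, 0, 0] = H[:, 1, 1] = delay(3.0e-3)          # direct (ipsilateral) paths
H[:, 0, 1] = H[:, 1, 0] = 0.5 * delay(3.2e-3)    # crosstalk paths

beta = 1e-3                                       # regularization constant
eye = np.eye(2)
C = np.array([np.linalg.solve(Hk.conj().T @ Hk + beta * eye, Hk.conj().T)
              for Hk in H])                       # C ~= H^-1 per bin

residual = C @ H - eye             # off-diagonal terms = leftover crosstalk
xtalk_db = 20 * np.log10(np.abs(residual[:, 0, 1]) + 1e-12)
print(f"median residual crosstalk: {np.median(xtalk_db):.1f} dB")
```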
[Modification of the retrolabyrinthine approach with hearing preservation in CPA tumors].
Schipper, J; Lohnstein, P; Stummer, W; Knapp, F; Turowski, B; Klenzner, T
2010-02-01
In an anatomical study including a CT scan of the cadaver sections by means of a virtual model analysis, the option of a modified retrolabyrinthine passage to the cerebellopontine angle (CPA) preserving the endolymphatic sac and the superior petrosal sinus was analysed. Due to the individual anatomical variations of the petrous bone, the results showed several limitations with regard to the retrolabyrinthine passage to the CPA. The smallest distance between the dura of the posterior fossa and the posterior semicircular canal, measured in a high resolution CT, was of particular importance as to how much room was available for surgical manipulation in the retrolabyrinthine space. As the back-side angle to the petrous bone is much flatter in a translabyrinthine approach than in a retrosigmoidal approach, the internal auditory canal needed to be controlled by using a 30 degree endoscope. In five patients the translabyrinthine approach was modified by temporarily preserving the labyrinth in an effort to remove the CPA tumors. Based on our clinical experience and on the findings of the anatomical and radiological studies, we eventually removed the CPA tumors type B2 or C3 in three patients preserving hearing by using a modified retrolabyrinthine approach.
CANYVAL-X: Enabling a new class of scientific instruments
NASA Astrophysics Data System (ADS)
Shah, Neerav; Calhoun, Philip C.; Park, Sang-young; Keidar, Michael
2016-05-01
Significant new discoveries in space science can be realized by replacing the traditional large monolithic space telescopes with precision formation flying spacecraft to form a “virtual telescope.” Such virtual telescopes will revolutionize occulting imaging systems, providing images of the Sun, accretion disks, and other astronomical objects with unprecedented milli-arcsecond resolution (several orders of magnitude beyond current capability). Since the days of Apollo, NASA and other organizations have been conducting formation flying in space, but not with the precision required for virtual telescopes. These efforts have focused on rendezvous and docking (e.g., crew docking, satellite servicing, etc.) and/or ground-controlled coordinated flight (e.g., EO-1, GRAIL, MMS, etc.). While the TRL of the component level technology for formation flying is high, the capability for the system-level guidance, navigation, and control (GN&C) technology required to align a virtual telescope to an inertial astronomical target with sub-arcsecond precision is not fully developed. The CANYVAL-X (CubeSat Astronomy by NASA and Yonsei using Virtual Telescope Alignment eXperiment) mission is an engineering proof of concept featuring a pair of CubeSats flying as a tandem telescope with a goal of demonstrating the system-level GN&C needed to form a virtual telescope. NASA partnered with the George Washington University and Yonsei University to design and develop CANYVAL-X. CANYVAL-X will demonstrate key technologies for using virtual telescopes in space, including micro-propulsion using millinewton thrusters, relative position sensing, and communications control between the two spacecraft. CANYVAL-X is scheduled to launch on a Falcon 9 in the summer of 2016.
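The core GN&C quantity for such a virtual telescope can be stated compactly: the angle between the inertial line joining the two spacecraft and the direction to the astronomical target. The toy geometry below (a 100 m baseline and a near-boresight target, both assumed values) shows why micro-radian relative-position control translates into the sub-arcsecond alignment the mission targets.

```python
# Toy alignment-error calculation for a two-spacecraft virtual telescope:
# angle between the spacecraft-to-spacecraft line and the target direction.
# The 100 m baseline and target offset are assumed values.
import numpy as np

r_optics = np.array([0.0, 0.0, 0.0])          # optics spacecraft (m)
r_detector = np.array([0.0, 0.0, -100.0])     # detector craft, 100 m behind
target_dir = np.array([1.0e-6, 0.0, 1.0])     # inertial target, ~1 urad off axis

baseline = r_optics - r_detector
cos_ang = baseline @ target_dir / (np.linalg.norm(baseline)
                                   * np.linalg.norm(target_dir))
err_arcsec = np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0))) * 3600.0
print(f"alignment error: {err_arcsec:.2f} arcsec")   # ~0.21 arcsec per urad
```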
Marshall Engineers Use Virtual Reality
NASA Technical Reports Server (NTRS)
1993-01-01
Virtual Reality (VR) can provide cost effective methods to design and evaluate components and systems for maintenance and refurbishment operations. Marshall Space Flight Center (MSFC) is beginning to utilize VR for design analysis in the X-34 experimental reusable space vehicle. Analysts at MSFC's Computer Applications and Virtual Environments (CAVE) used Head Mounted Displays (HMD) (pictured), spatial trackers and gesture inputs as a means to animate or inhabit a properly sized virtual human model. These models are used in a VR scenario as a way to determine functionality of space and maintenance requirements for the virtual X-34. The primary functions of the virtual X-34 mockup are to support operations development and design analysis for engine removal, the engine compartment and the aft fuselage. This capability provides general visualization support to engineers and designers at MSFC and to the System Design Freeze Review at Orbital Sciences Corporation (OSC).
Three dimensional tracking with misalignment between display and control axes
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Tyler, Mitchell; Kim, Won S.; Stark, Lawrence
1992-01-01
Human operators confronted with misaligned display and control frames of reference performed three-dimensional pursuit tracking in virtual environment and virtual space simulations. Analysis of the components of the tracking errors in the perspective displays presenting virtual space showed that components of the error due to visual-motor misalignment may be linearly separated from those associated with the mismatch between display and control coordinate systems. Tracking performance improved with several hours of practice despite previous reports that such improvement did not take place.
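The linear separability claim can be made concrete with a toy calculation: a fixed rotation between display and control frames turns a straight corrective command into a predictable error whose magnitude grows as 2·sin(theta/2). The sketch below assumes a pure rotation about the display's z axis; the angles are illustrative.

```python
# Toy misalignment effect: a unit corrective movement commanded straight
# at the target, interpreted through a control frame rotated about the
# display's z axis. Error magnitude grows as 2*sin(theta/2).
import numpy as np

def rot_z(deg):
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

intended = np.array([1.0, 0.0, 0.0])      # unit movement in display frame
for mis in (0, 15, 30, 45):
    achieved = rot_z(mis) @ intended      # same command, rotated control frame
    err = np.linalg.norm(achieved - intended)
    print(f"{mis:2d} deg misalignment -> error magnitude {err:.3f}")
```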
Virtual Reality Training Environments: Contexts and Concerns.
ERIC Educational Resources Information Center
Harmon, Stephen W.; Kenney, Patrick J.
1994-01-01
Discusses the contexts where virtual reality (VR) training environments might be appropriate; examines the advantages and disadvantages of VR as a training technology; and presents a case study of a VR training environment used at the NASA Johnson Space Center in preparation for the repair of the Hubble Space Telescope. (AEF)
Claiming Unclaimed Spaces: Virtual Spaces for Learning
ERIC Educational Resources Information Center
Miller, Nicole C.
2016-01-01
The purpose of this study was to describe and examine the environments used by teacher candidates in multi-user virtual environments. Secondary data analysis of a case study methodology was employed. Multiple data sources including interviews, surveys, observations, snapshots, course artifacts, and the researcher's journal were used in the initial…
2005-02-03
JSC2005-E-04513 (3 Feb. 2005) --- European Space Agency (ESA) astronaut Christer Fuglesang, STS-116 mission specialist, uses virtual reality hardware in the Space Vehicle Mockup Facility at the Johnson Space Center to rehearse some of his duties on the upcoming mission to the international space station. This type of virtual reality training allows the astronauts to wear a helmet and special gloves while looking at computer displays simulating actual movements around the various locations on the station hardware with which they will be working.
STS-120 crew along with Expedition crew members Dan Tani and Sandra Magnus
2007-08-09
JSC2007-E-41538 (9 Aug. 2007) --- Astronauts Stephanie Wilson, STS-120 mission specialist; Sandra Magnus, Expedition 17 flight engineer; and Dan Tani, Expedition 16 flight engineer, use the virtual reality lab at Johnson Space Center to train for their duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare the entire team for dealing with space station elements. A computer display is visible in the foreground.
NASA Technical Reports Server (NTRS)
Black, F. O.; Brackmann, D. E.; Hitselberger, W. E.; Purdy, J.
1995-01-01
The outcome of acoustic neuroma (vestibular schwannoma) surgery continues to improve rapidly. Advances can be attributed to several fields, but the most important contributions have arisen from the identification of the genes responsible for the dominant inheritance of neurofibromatosis types 1 (NF1) and 2 (NF2) and the development of magnetic resonance imaging with gadolinium enhancement for the early anatomic confirmation of the pathognomonic, bilateral vestibular schwannomas in NF2. These advances enable early diagnosis and treatment when the tumors are small in virtually all subjects at risk for NF2. The authors suggest that advising young NF2 patients to wait until complications develop, especially hearing loss, before diagnosing and operating for bilateral eighth nerve schwannomas may not always be in the best interest of the patient. To the authors' knowledge, this is the first reported case of preservation of both auditory and vestibular function in a patient after bilateral vestibular schwannoma excision.
Hannah, Beverly; Wang, Yue; Jongman, Allard; Sereno, Joan A.; Cao, Jiguo; Nie, Yunlong
2017-01-01
Speech perception involves multiple input modalities. Research has indicated that perceivers establish cross-modal associations between auditory and visuospatial events to aid perception. Such intermodal relations can be particularly beneficial for speech development and learning, where infants and non-native perceivers need additional resources to acquire and process new sounds. This study examines how facial articulatory cues and co-speech hand gestures mimicking pitch contours in space affect non-native Mandarin tone perception. Native English as well as Mandarin perceivers identified tones embedded in noise with either congruent or incongruent Auditory-Facial (AF) and Auditory-FacialGestural (AFG) inputs. Native Mandarin results showed the expected ceiling-level performance in the congruent AF and AFG conditions. In the incongruent conditions, while AF identification was primarily auditory-based, AFG identification was partially based on gestures, demonstrating the use of gestures as valid cues in tone identification. The English perceivers’ performance was poor in the congruent AF condition, but improved significantly in AFG. While the incongruent AF identification showed some reliance on facial information, incongruent AFG identification relied more on gestural than auditory-facial information. These results indicate positive effects of facial and especially gestural input on non-native tone perception, suggesting that cross-modal (visuospatial) resources can be recruited to aid auditory perception when phonetic demands are high. The current findings may inform patterns of tone acquisition and development, suggesting how multi-modal speech enhancement principles may be applied to facilitate speech learning. PMID:29255435
Designing for Virtual Windows in a Deep Space Habitat
NASA Technical Reports Server (NTRS)
Howe, A. Scott; Howard, Robert L.; Moore, Nathan; Amoroso, Michael
2013-01-01
This paper discusses configurations and test analogs toward the design of a virtual window capability in a Deep Space Habitat. Long-duration space missions will require crews to remain in the confines of a spacecraft for extended periods of time, with possible harmful effects if a crewmember cannot cope with the small habitable volume. Virtual windows expand perceived volume using a minimal amount of image projection equipment and computing resources, and allow a limited immersion in remote environments. Uses for the virtual window include: live or augmented reality views of the external environment; flight deck, piloting, observation, or other participation in remote missions through live transmission of cameras mounted to remote vehicles; pre-recorded background views of nature areas, seasonal occurrences, or cultural events; and pre-recorded events such as birthdays, anniversaries, and other meaningful events prepared by ground support and families of the crewmembers.
The effects of viewpoint on the virtual space of pictures
NASA Technical Reports Server (NTRS)
Sedgwick, H. A.
1989-01-01
Pictorial displays whose primary purpose is to convey accurate information about the 3-D spatial layout of an environment are discussed. How, and how well, pictures can convey such information is discussed. It is suggested that picture perception is not best approached as a unitary, indivisible process. Rather, it is a complex process depending on multiple, partially redundant, interacting sources of visual information for both the real surface of the picture and the virtual space beyond. Each picture must be assessed for the particular information that it makes available. This will determine how accurately the virtual space represented by the picture is seen, as well as how it is distorted when seen from the wrong viewpoint.
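One source of such distortion can be worked out with similar triangles: a picture made for one station point, viewed from a different distance, implies a different virtual depth. The on-axis toy case below (all dimensions assumed) shows the depicted depth doubling when the viewing distance doubles.

```python
# Wrong-viewpoint depth distortion, on-axis case with similar triangles.
# All dimensions are assumed for illustration.
def projected_size(true_size, depth_behind_picture, station_distance):
    """Size on the picture plane of an object lying behind it."""
    return true_size * station_distance / (station_distance + depth_behind_picture)

d_correct, d_viewer = 1.0, 2.0     # correct and actual viewing distances (m)
size, depth = 0.5, 3.0             # object width (m) and depth behind plane (m)

s = projected_size(size, depth, d_correct)
# Depth the displaced viewer must assign for the same image to remain
# consistent: solve s = size * d_viewer / (d_viewer + depth_seen).
depth_seen = size * d_viewer / s - d_viewer
print(f"depicted depth {depth} m appears as {depth_seen:.1f} m from {d_viewer} m")
```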
Virtual Reality: Developing a VR space for Academic activities
NASA Astrophysics Data System (ADS)
Kaimaris, D.; Stylianidis, E.; Karanikolas, N.
2014-05-01
Virtual reality (VR) is extensively used in various applications: in industry, in academia, in business; and it is becoming more and more affordable for end users. At the same time, in academia and higher education more and more applications are being developed, for example in medicine and engineering, and students need to be well prepared for their professional life after their educational life cycle. Moreover, VR offers the possibility not only to improve skills but also to understand space. This paper presents the methodology used during a course, namely "Geoinformatics applications" at the School of Spatial Planning and Development (Eng.), Aristotle University of Thessaloniki, to create a virtual School space. The course design focuses on the methods and techniques used to develop the virtual environment. In addition, the project aspires to become more effective for the students and to provide a realistic virtual environment with useful information not only for the students but also for any citizen interested in the academic life at the School.
Gainotti, Guido
2010-02-01
The aim of the present survey was to review scientific articles dealing with the non-visual (auditory and tactile) forms of neglect to determine: (a) whether behavioural patterns similar to those observed in the visual modality can also be observed in the non-visual modalities; (b) whether a different severity of neglect can be found in the visual and in the auditory and tactile modalities; (c) the reasons for the possible differences between the visual and non-visual modalities. Data pointing to a contralesional orienting of attention in the auditory and the tactile modalities in visual neglect patients were separately reviewed. Results showed: (a) that in patients with right brain damage manifestations of neglect for the contralesional side of space can be found not only in the visual but also in the auditory and tactile modalities; (b) that the severity of neglect is greater in the visual than in the non-visual modalities. This asymmetry in the severity of neglect across modalities seems due to the greater role that the automatic capture of attention by irrelevant ipsilesional stimuli plays in the visual modality.
Dynamic Learning in Virtual Spaces: Producers and Consumers of Meaning
ERIC Educational Resources Information Center
Abrams, Sandra Schamroth; Rowsell, Jennifer
2011-01-01
Twenty-first century education includes dynamic learning that is complicated by interactions in both fixed and protean virtual spaces, and it is important to consider the degree of power, agency, and awareness students have as producers and consumers of interactive technology. Outside of school, students engage in meaning making practices, and…
Prospects for Digital Campus with Extensive Applications of Virtual Collaborative Space
ERIC Educational Resources Information Center
Nishide, Ryo
2011-01-01
This paper proposes extensive applications of virtual collaborative space in order to enhance the efficiency and capability of Digital Campus. The usability of Digital Campus has been tested in different learning environments and evaluated by questionnaire, indicating that presence technology and a sense of solidarity influence the participants'…
Enhancement of Spatial Thinking with Virtual Spaces 1.0
ERIC Educational Resources Information Center
Hauptman, Hanoch
2010-01-01
Developing a software environment to enhance 3D geometric proficiency demands the consideration of theoretical views of the learning process. Simultaneously, this effort requires taking into account the range of tools that technology offers, as well as their limitations. In this paper, we report on the design of Virtual Spaces 1.0 software, a…
Educational Community: Among the Real and Virtual Civic Initiative
ERIC Educational Resources Information Center
Arsenijevic, Jasmina; Andevski, Milica
2016-01-01
The new media enable numerous advantages in the strengthening of civic engagement, through removing barriers in space and time and through networking of individuals of the same social, civic or political interests at the global level. Different forms of civic engagement and civic responsibility in the virtual space are ever more present, and…
Dixon, Benjamin J; Daly, Michael J; Chan, Harley; Vescan, Allan; Witterick, Ian J; Irish, Jonathan C
2014-04-01
Image-guided surgery (IGS) systems are frequently utilized during cranial base surgery to aid in orientation and facilitate targeted surgery. We wished to assess the performance of our recently developed localized intraoperative virtual endoscopy (LIVE)-IGS prototype in a preclinical setting prior to deployment in the operating room. This system combines real-time ablative instrument tracking, critical structure proximity alerts, three-dimensional virtual endoscopic views, and intraoperative cone-beam computed tomographic image updates. Randomized-controlled trial plus qualitative analysis. Skull base procedures were performed on 14 cadaver specimens by seven fellowship-trained skull base surgeons. Each subject performed two endoscopic transclival approaches; one with LIVE-IGS and one using a conventional IGS system in random order. National Aeronautics and Space Administration Task Load Index (NASA-TLX) scores were documented for each dissection, and a semistructured interview was recorded for qualitative assessment. The NASA-TLX scores for mental demand, effort, and frustration were significantly reduced with the LIVE-IGS system in comparison to conventional navigation (P < .05). The system interface was judged to be intuitive and most useful when there was a combination of high spatial demand, reduced or absent surface landmarks, and proximity to critical structures. The development of auditory icons for proximity alerts during the trial better informed the surgeon while limiting distraction. The LIVE-IGS system provided accurate, intuitive, and dynamic feedback to the operating surgeon. Further refinements to proximity alerts and visualization settings will enhance orientation while limiting distraction. The system is currently being deployed in a prospective clinical trial in skull base surgery.
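For context on the workload metric reported above: an overall NASA-TLX score is a weighted mean of six subscale ratings, with weights derived from 15 pairwise comparisons. The ratings and tallies in this sketch are invented solely to show the arithmetic.

```python
# Weighted NASA-TLX arithmetic: six subscale ratings (0-100) combined with
# weights from 15 pairwise comparisons. All numbers are invented.
ratings = {
    "mental": 70, "physical": 20, "temporal": 55,
    "performance": 40, "effort": 65, "frustration": 35,
}
weights = {            # times each subscale won a pairwise comparison
    "mental": 5, "physical": 0, "temporal": 2,
    "performance": 2, "effort": 4, "frustration": 2,
}
assert sum(weights.values()) == 15      # 6 choose 2 comparisons
overall = sum(ratings[k] * weights[k] for k in ratings) / 15
print(f"weighted NASA-TLX score: {overall:.1f}")   # 58.0 for these numbers
```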
Send, Robert; Kaila, Ville R. I.; Sundholm, Dage
2011-01-01
We investigate how the reduction of the virtual space affects coupled-cluster excitation energies at the approximate singles and doubles coupled-cluster level (CC2). In this reduced-virtual-space (RVS) approach, all virtual orbitals above a certain energy threshold are omitted in the correlation calculation. The effects of the RVS approach are assessed by calculations on the two lowest excitation energies of 11 biochromophores using different sizes of the virtual space. Our set of biochromophores consists of common model systems for the chromophores of the photoactive yellow protein, the green fluorescent protein, and rhodopsin. The RVS calculations show that most of the high-lying virtual orbitals can be neglected without significantly affecting the accuracy of the obtained excitation energies. Omitting all virtual orbitals above 50 eV in the correlation calculation introduces errors in the excitation energies that are smaller than 0.1 eV. By using a RVS energy threshold of 50 eV, the CC2 calculations using triple-ζ basis sets (TZVP) on protonated Schiff base retinal are accelerated by a factor of 6. We demonstrate the applicability of the RVS approach by performing CC2/TZVP calculations on the lowest singlet excitation energy of a rhodopsin model consisting of 165 atoms using RVS thresholds between 20 eV and 120 eV. The calculations on the rhodopsin model show that the RVS errors determined in the gas-phase are a very good approximation to the RVS errors in the protein environment. The RVS approach thus renders purely quantum mechanical treatments of chromophores in protein environments feasible and offers an ab initio alternative to quantum mechanics/molecular mechanics separation schemes. PMID:21663351
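The RVS bookkeeping is simple enough to sketch: drop virtual orbitals above an energy cutoff and estimate the nominal speed-up of the virtual-space-dominated CC2 step, taken here to scale roughly as the cube of the number of virtuals; both the orbital energies and the scaling exponent are illustrative assumptions, not values from the paper.

```python
# Reduced-virtual-space bookkeeping: drop virtual orbitals above a cutoff
# and estimate the nominal speed-up of a CC2 step assumed to scale as the
# cube of the number of virtuals. Energies and exponent are illustrative.
import numpy as np

rng = np.random.default_rng(3)
virt_ev = np.sort(rng.uniform(0.0, 150.0, 500))   # synthetic virtual energies (eV)

threshold_ev = 50.0
n_kept = int((virt_ev <= threshold_ev).sum())
ratio = n_kept / virt_ev.size
speedup = 1.0 / ratio**3                          # assumed v^3 cost scaling

print(f"kept {n_kept}/{virt_ev.size} virtuals below {threshold_ev} eV "
      f"-> nominal speed-up ~{speedup:.1f}x")
```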
Send, Robert; Kaila, Ville R I; Sundholm, Dage
2011-06-07
We investigate how the reduction of the virtual space affects coupled-cluster excitation energies at the approximate singles and doubles coupled-cluster level (CC2). In this reduced-virtual-space (RVS) approach, all virtual orbitals above a certain energy threshold are omitted in the correlation calculation. The effects of the RVS approach are assessed by calculations on the two lowest excitation energies of 11 biochromophores using different sizes of the virtual space. Our set of biochromophores consists of common model systems for the chromophores of the photoactive yellow protein, the green fluorescent protein, and rhodopsin. The RVS calculations show that most of the high-lying virtual orbitals can be neglected without significantly affecting the accuracy of the obtained excitation energies. Omitting all virtual orbitals above 50 eV in the correlation calculation introduces errors in the excitation energies that are smaller than 0.1 eV. By using an RVS energy threshold of 50 eV, the CC2 calculations using triple-ζ basis sets (TZVP) on protonated Schiff base retinal are accelerated by a factor of 6. We demonstrate the applicability of the RVS approach by performing CC2/TZVP calculations on the lowest singlet excitation energy of a rhodopsin model consisting of 165 atoms using RVS thresholds between 20 eV and 120 eV. The calculations on the rhodopsin model show that the RVS errors determined in the gas phase are a very good approximation to the RVS errors in the protein environment. The RVS approach thus renders purely quantum mechanical treatments of chromophores in protein environments feasible and offers an ab initio alternative to quantum mechanics/molecular mechanics separation schemes. © 2011 American Institute of Physics
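The RVS selection rule itself is simple enough to state in a few lines of code. The NumPy sketch below is a minimal illustration of the orbital-truncation step, assuming orbital energies are already available in eV from a preceding SCF calculation; the function name and toy energies are invented for the example.

```python
import numpy as np

def rvs_active_virtuals(mo_energies_eV, n_occupied, threshold_eV=50.0):
    """Return indices of virtual orbitals kept in the correlation step.

    Implements the RVS selection rule described above: every virtual
    orbital whose energy exceeds the threshold is omitted from the
    CC2 correlation treatment."""
    mo_energies_eV = np.asarray(mo_energies_eV)
    virtuals = np.arange(n_occupied, mo_energies_eV.size)
    return virtuals[mo_energies_eV[virtuals] <= threshold_eV]

# Toy example: 5 occupied orbitals followed by 7 virtuals (energies in eV).
energies = [-520.0, -35.1, -28.4, -15.2, -10.9,      # occupied
            -1.8, 3.2, 12.5, 47.9, 61.0, 120.3, 178.6]
print(rvs_active_virtuals(energies, n_occupied=5))
# -> [5 6 7 8]; the three virtuals above 50 eV are dropped before CC2.
```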
Influence of auditory and audiovisual stimuli on the right-left prevalence effect.
Vu, Kim-Phuong L; Minakata, Katsumi; Ngo, Mary Kim
2014-01-01
When auditory stimuli are used in two-dimensional spatial compatibility tasks, where the stimulus and response configurations vary along the horizontal and vertical dimensions simultaneously, a right-left prevalence effect occurs in which horizontal compatibility dominates over vertical compatibility. The right-left prevalence effects obtained with auditory stimuli are typically larger than those obtained with visual stimuli, even though less attention should be demanded by the horizontal dimension in auditory processing. In the present study, we examined whether auditory or visual dominance occurs when the two-dimensional stimuli are audiovisual, as well as whether there would be cross-modal facilitation of response selection for the horizontal and vertical dimensions. We also examined whether there is an additional benefit of adding a pitch dimension to the auditory stimulus to facilitate vertical coding through use of the spatial-musical association of response codes (SMARC) effect, where pitch is coded in terms of height in space. In Experiment 1, we found a larger right-left prevalence effect for unimodal auditory than visual stimuli. Neutral, non-pitch-coded audiovisual stimuli did not result in cross-modal facilitation, but did show evidence of visual dominance. The right-left prevalence effect was eliminated in the presence of SMARC audiovisual stimuli, but the effect influenced horizontal rather than vertical coding. Experiment 2 showed that the influence of the pitch dimension was not in terms of influencing response selection on a trial-to-trial basis, but in terms of altering the salience of the task environment. Taken together, these findings indicate that in the absence of salient vertical cues, auditory and audiovisual stimuli tend to be coded along the horizontal dimension and vision tends to dominate audition in this two-dimensional spatial stimulus-response task.
Brown, Trecia A; Joanisse, Marc F; Gati, Joseph S; Hughes, Sarah M; Nixon, Pam L; Menon, Ravi S; Lomber, Stephen G
2013-01-01
Much of what is known about the cortical organization for audition in humans draws from studies of auditory cortex in the cat. However, these data build largely on electrophysiological recordings that are both highly invasive and provide less evidence concerning macroscopic patterns of brain activation. Optical imaging, using intrinsic signals or dyes, allows visualization of surface-based activity but is also quite invasive. Functional magnetic resonance imaging (fMRI) overcomes these limitations by providing a large-scale perspective of distributed activity across the brain in a non-invasive manner. The present study used fMRI to characterize stimulus-evoked activity in auditory cortex of an anesthetized (ketamine/isoflurane) cat, focusing specifically on the blood-oxygen-level-dependent (BOLD) signal time course. Functional images were acquired for adult cats in a 7 T MRI scanner. To determine the BOLD signal time course, we presented 1-s broadband noise bursts between widely spaced scan acquisitions at randomized delays (1-12 s in 1-s increments) prior to each scan. Baseline trials in which no stimulus was presented were also acquired. Our results indicate that the BOLD response peaks at about 3.5 s in primary auditory cortex (AI) and at about 4.5 s in non-primary areas (AII, PAF) of cat auditory cortex. The observed peak latency is within the range reported for humans and non-human primates (3-4 s). The time course of hemodynamic activity in cat auditory cortex also occurs on a comparatively shorter scale than in cat visual cortex. The results of this study will provide a foundation for future auditory fMRI studies in the cat to incorporate these hemodynamic response properties into appropriate analyses of cat auditory cortex. Copyright © 2012 Elsevier Inc. All rights reserved.
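The delay-randomized sparse-sampling design lends itself to a simple peak-latency estimate: group trials by stimulus-to-scan delay, average the baseline-corrected responses, and take the delay with the largest mean. The sketch below is a hypothetical simplification of such an analysis; variable names and the example numbers are invented.

```python
import numpy as np

def bold_peak_latency(delays_s, signals, baseline):
    """Estimate BOLD peak latency from sparse, widely spaced acquisitions.

    delays_s: stimulus-to-scan delay of each trial (1-12 s in the study)
    signals:  one BOLD sample per trial, acquired at that delay
    baseline: mean signal from the no-stimulus trials
    """
    delays_s = np.asarray(delays_s, dtype=float)
    responses = np.asarray(signals, dtype=float) - baseline
    delays = np.unique(delays_s)
    mean_resp = np.array([responses[delays_s == d].mean() for d in delays])
    return delays[np.argmax(mean_resp)]   # delay at which the response peaks

# Illustrative call with fabricated numbers (not data from the study):
print(bold_peak_latency([1, 2, 3, 4, 3, 5],
                        [0.1, 0.5, 0.9, 0.7, 1.0, 0.4], 0.0))
```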
Paladini, Rebecca E.; Diana, Lorenzo; Zito, Giuseppe A.; Nyffeler, Thomas; Wyss, Patric; Mosimann, Urs P.; Müri, René M.; Nef, Tobias
2018-01-01
Cross-modal spatial cueing can affect performance in a visual search task. For example, search performance improves if a visual target and an auditory cue originate from the same spatial location, and it deteriorates if they originate from different locations. Moreover, it has recently been postulated that multisensory settings, i.e., experimental settings in which critical stimuli are concurrently presented in different sensory modalities (e.g., visual and auditory), may trigger asymmetries in visuospatial attention: facilitation has been observed for visual stimuli presented in the right compared to the left visual space. However, it remains unclear whether auditory cueing of attention differentially affects search performance in the left and the right hemifields in audio-visual search tasks. The present study investigated whether spatial asymmetries would occur in a search task with cross-modal spatial cueing. Participants completed a visual search task that contained no auditory cues (i.e., a unimodal visual condition), spatially congruent, spatially incongruent, and spatially non-informative auditory cues. To further assess participants' accuracy in localising the auditory cues, a unimodal auditory spatial localisation task was also administered. The results demonstrated no left/right asymmetries in the unimodal visual search condition. Both an incongruent and a spatially non-informative auditory cue resulted in lateral asymmetries, with search times increased for targets presented in the left compared to the right hemifield. No such spatial asymmetry was observed in the congruent condition. However, participants' performance in the congruent condition was modulated by their tone localisation accuracy. The findings of the present study demonstrate that spatial asymmetries in multisensory processing depend on the validity of the cross-modal cues, and occur under specific attentional conditions, i.e., when visual attention has to be reoriented towards the left hemifield. PMID:29293637
1993-12-15
Virtual Reality (VR) can provide cost-effective methods to design and evaluate components and systems for maintenance and refurbishment operations. Marshall Space Flight Center (MSFC) is beginning to utilize VR for design analysis in the X-34 experimental reusable space vehicle. Analysts at MSFC's Computer Applications and Virtual Environments (CAVE) facility used Head Mounted Displays (HMD) (pictured), spatial trackers, and gesture inputs as a means to animate or inhabit a properly sized virtual human model. These models are used in a VR scenario as a way to determine functionality of space and maintenance requirements for the virtual X-34. The primary functions of the virtual X-34 mockup are to support operations development and design analysis for engine removal, the engine compartment, and the aft fuselage. This capability provides general visualization support to engineers and designers at MSFC and to the System Design Freeze Review at Orbital Sciences Corporation (OSC).
Shared protection based virtual network mapping in space division multiplexing optical networks
NASA Astrophysics Data System (ADS)
Zhang, Huibin; Wang, Wei; Zhao, Yongli; Zhang, Jie
2018-05-01
Space Division Multiplexing (SDM) has been introduced to improve the capacity of optical networks. In SDM optical networks, there are multiple cores/modes in each fiber link, and spectrum resources are multiplexed in both the frequency and core/mode dimensions. Enabled by network virtualization technology, one SDM optical network substrate can be shared by several virtual network operators. As with point-to-point connection services, virtual networks (VNs) also need a certain level of survivability to guard against network failures. Based on customers' heterogeneous requirements for the survivability of their virtual networks, this paper studies the shared-protection-based VN mapping problem and proposes a Minimum Free Frequency Slots (MFFS) mapping algorithm to improve spectrum efficiency. Simulation results show that the proposed algorithm can optimize SDM optical networks significantly in terms of blocking probability and spectrum utilization.
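The paper's MFFS algorithm is not spelled out here, but its guiding idea, packing a virtual link onto the spectrum resource with the fewest free frequency slots that still fits, can be sketched as a best-fit heuristic. The data model below (a flat dict of core to free-slot count) is a hypothetical simplification; the real algorithm must also handle path computation, slot continuity, and shared backup capacity.

```python
def mffs_assign(free_slots, request):
    """Best-fit sketch of the minimum-free-frequency-slots idea.

    free_slots: {core_id: number of contiguous free slots}
    request:    slots needed by one virtual link
    Returns the chosen core (tightest fit) or None if blocked."""
    candidates = {c: f for c, f in free_slots.items() if f >= request}
    if not candidates:
        return None                                 # blocked: no core fits
    best = min(candidates, key=candidates.get)      # fewest leftover slots
    free_slots[best] -= request                     # commit the assignment
    return best

cores = {"link1/core0": 12, "link1/core1": 5, "link1/core2": 8}
print(mffs_assign(cores, 4))   # -> 'link1/core1' (tightest fit wins)
```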
Brain bases for auditory stimulus-driven figure-ground segregation.
Teki, Sundeep; Chait, Maria; Kumar, Sukhbinder; von Kriegstein, Katharina; Griffiths, Timothy D
2011-01-05
Auditory figure-ground segregation, listeners' ability to selectively hear out a sound of interest from a background of competing sounds, is a fundamental aspect of scene analysis. In contrast to the disordered acoustic environment we experience during everyday listening, most studies of auditory segregation have used relatively simple, temporally regular signals. We developed a new figure-ground stimulus that incorporates stochastic variation of the figure and background that captures the rich spectrotemporal complexity of natural acoustic scenes. Figure and background signals overlap in spectrotemporal space, but vary in the statistics of fluctuation, such that the only way to extract the figure is by integrating the patterns over time and frequency. Our behavioral results demonstrate that human listeners are remarkably sensitive to the appearance of such figures. In a functional magnetic resonance imaging experiment, aimed at investigating preattentive, stimulus-driven, auditory segregation mechanisms, naive subjects listened to these stimuli while performing an irrelevant task. Results demonstrate significant activations in the intraparietal sulcus (IPS) and the superior temporal sulcus related to bottom-up, stimulus-driven figure-ground decomposition. We did not observe any significant activation in the primary auditory cortex. Our results support a role for automatic, bottom-up mechanisms in the IPS in mediating stimulus-driven, auditory figure-ground segregation, which is consistent with accumulating evidence implicating the IPS in structuring sensory input and perceptual organization.
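A stimulus of this kind can be approximated with a few lines of NumPy: random multi-tone chords form the background, and a fixed set of frequency components repeated across consecutive chords forms the figure. The parameters below are illustrative, not the values used in the study.

```python
import numpy as np

def sfg_stimulus(n_chords=40, chord_dur=0.05, fs=44100,
                 n_background=10, n_figure=4, figure_span=(10, 25), seed=0):
    """Sketch of a stochastic figure-ground stimulus: each chord contains
    random background tones; within figure_span, a fixed set of 'figure'
    frequencies repeats across chords and can only be heard out by
    integrating over time and frequency."""
    rng = np.random.default_rng(seed)
    pool = np.geomspace(180.0, 7000.0, 120)          # candidate frequencies
    figure_freqs = rng.choice(pool, n_figure, replace=False)
    t = np.arange(int(chord_dur * fs)) / fs
    chords = []
    for i in range(n_chords):
        freqs = list(rng.choice(pool, n_background, replace=False))
        if figure_span[0] <= i < figure_span[1]:
            freqs += list(figure_freqs)              # coherent figure tones
        chord = sum(np.sin(2 * np.pi * f * t) for f in freqs)
        chords.append(chord / len(freqs))            # crude normalization
    return np.concatenate(chords)

signal = sfg_stimulus()                              # 2 s of audio at 44.1 kHz
```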
STS-116 and Expedition 12 Preflight Training, VR Lab Bldg. 9.
2005-05-06
JSC2005-E-18147 (6 May 2005) --- Astronauts Sunita L. Williams (left), Expedition 14 flight engineer, and Joan E. Higginbotham, STS-116 mission specialist, use the virtual reality lab at the Johnson Space Center to train for their duties aboard the space shuttle. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare the entire team for dealing with space station elements. Williams will join Expedition 14 in progress and serve as a flight engineer after traveling to the station on space shuttle mission STS-116.
STS-116 Preflight Training, VR Lab
2006-08-07
JSC2006-E-33308 (7 Aug. 2006) --- European Space Agency (ESA) astronaut Christer Fuglesang, STS-116 mission specialist, uses virtual reality hardware in the Space Vehicle Mockup Facility at the Johnson Space Center to rehearse some of his duties on the upcoming mission to the International Space Station. This type of virtual reality training allows the astronauts to wear a helmet and special gloves while looking at computer displays simulating actual movements around the various locations on the station hardware with which they will be working. David J. Homan assisted Fuglesang.
Photographic coverage of STS-112 during EVA 3 in VR Lab.
2002-08-21
JSC2002-E-34618 (21 August 2002) --- Astronaut Piers J. Sellers, STS-112 mission specialist, uses virtual reality hardware in the Space Vehicle Mockup Facility at the Johnson Space Center (JSC) to rehearse some of his duties on the upcoming mission to the International Space Station (ISS). This type of virtual reality training allows the astronauts to wear a helmet and special gloves while looking at computer displays simulating actual movements around the various locations on the International Space Station (ISS) hardware with which they will be working.
The Auditory System of the Dipteran Parasitoid Emblemasoma auditrix (Sarcophagidae).
Tron, Nanina; Stölting, Heiko; Kampschulte, Marian; Martels, Gunhild; Stumpner, Andreas; Lakes-Harlan, Reinhard
2016-01-01
Several taxa of insects evolved a tympanate ear at different body positions, whereby the ear is composed of common parts: a scolopidial sense organ, a tracheal air space, and a tympanal membrane. Here, we analyzed the anatomy and physiology of the ear at the ventral prothorax of the sarcophagid fly, Emblemasoma auditrix (Soper). We used micro-computed tomography to analyze the ear and its tracheal air space in relation to the body morphology. Both tympana are separated by a small cuticular bridge, face in the same frontal direction, and are backed by a single tracheal enlargement. This enlargement is connected to the anterior spiracles at the dorsofrontal thorax and is continuous with the tracheal network in the thorax and in the abdomen. Analyses of responses of auditory afferents and interneurons show that the ear is broadly tuned, with a sensitivity peak at 5 kHz. Single-cell recordings of auditory interneurons indicate a frequency- and intensity-dependent tuning, whereby some neurons react best to 9 kHz, the peak frequency of the host's calling song. The results are compared to the convergently evolved ear in Tachinidae (Diptera). © The Authors 2016. Published by Oxford University Press on behalf of Entomological Society of America.
A comprehensive three-dimensional cortical map of vowel space.
Scharinger, Mathias; Idsardi, William J; Poe, Samantha
2011-12-01
Mammalian cortex is known to contain various kinds of spatial encoding schemes for sensory information including retinotopic, somatosensory, and tonotopic maps. Tonotopic maps are especially interesting for human speech sound processing because they encode linguistically salient acoustic properties. In this study, we mapped the entire vowel space of a language (Turkish) onto cortical locations by using the magnetic N1 (M100), an auditory-evoked component that peaks approximately 100 msec after auditory stimulus onset. We found that dipole locations could be structured into two distinct maps, one for vowels produced with the tongue positioned toward the front of the mouth (front vowels) and one for vowels produced in the back of the mouth (back vowels). Furthermore, we found spatial gradients in lateral-medial, anterior-posterior, and inferior-superior dimensions that encoded the phonetic, categorical distinctions between all the vowels of Turkish. Statistical model comparisons of the dipole locations suggest that the spatial encoding scheme is not entirely based on acoustic bottom-up information but crucially involves featural-phonetic top-down modulation. Thus, multiple areas of excitation along the unidimensional basilar membrane are mapped into higher dimensional representations in auditory cortex.
Effect of virtual reality on cognitive dysfunction in patients with brain tumor.
Yang, Seoyon; Chun, Min Ho; Son, Yu Ri
2014-12-01
To investigate whether virtual reality (VR) training helps the recovery of cognitive function in brain tumor patients. Thirty-eight brain tumor patients (19 men and 19 women) with cognitive impairment recruited for this study were assigned to either the VR group (n=19, IREX system) or the control group (n=19). The VR group received both VR training (30 minutes a day, three times a week) and a computer-based cognitive rehabilitation program (30 minutes a day, two times a week) for 4 weeks. The control group received only the computer-based cognitive rehabilitation program (30 minutes a day, 5 days a week) for 4 weeks. Computerized neuropsychological tests (CNTs), the Korean version of the Mini-Mental Status Examination (K-MMSE), and the Korean version of the Modified Barthel Index (K-MBI) were used to evaluate cognitive function and functional status. The VR group showed improvements in the K-MMSE, visual and auditory continuous performance tests (CPTs), forward and backward digit span tests (DSTs), forward and backward visual span tests (VSTs), visual and verbal learning tests, Trail Making Test type A (TMT-A), and K-MBI. The VR group showed significantly (p<0.05) better improvements than the control group in visual and auditory CPTs, backward DST and VST, and TMT-A after treatment. VR training can have beneficial effects on cognitive improvement when it is combined with computer-assisted cognitive rehabilitation. Further randomized controlled studies with large samples according to brain tumor type and location are needed to investigate how VR training improves cognitive impairment.
Multiple Causal Links Between Magnocellular-Dorsal Pathway Deficit and Developmental Dyslexia.
Gori, Simone; Seitz, Aaron R; Ronconi, Luca; Franceschini, Sandro; Facoetti, Andrea
2016-10-17
Although impaired auditory-phonological processing is the most popular explanation of developmental dyslexia (DD), the literature shows that a combination of several causes, rather than a single factor, contributes to DD. Functioning of the visual magnocellular-dorsal (MD) pathway, which plays a key role in motion perception, is a much debated but heavily suspected factor contributing to DD. Here, we employ a comprehensive approach that incorporates all the accepted methods required to test the relationship between MD pathway dysfunction and DD. The results of 4 experiments show that (1) motion perception is impaired in children with dyslexia in comparison both with age-matched and with reading-level controls; (2) pre-reading visual motion perception, independently of auditory-phonological skill, predicts future reading development; and (3) targeted MD training, not involving any auditory-phonological stimulation, leads to improved reading skill in children and adults with DD. Our findings demonstrate, for the first time, a causal relationship between MD deficits and DD, virtually closing a 30-year-long debate. Since MD dysfunction can be diagnosed much earlier than reading and language disorders, our findings pave the way for low resource-intensive, early prevention programs that could drastically reduce the incidence of DD. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
ERIC Educational Resources Information Center
Valencia, Heriberto Gonzalez; Villota Enriquez, Jackeline Amparo; Agredo, Patricia Medina
2017-01-01
This study characterized the strategies used by professors as implemented through virtual educational platforms. The context of this research was the classrooms of the Santiago de Cali University and the virtual space of the Chamilo virtual platform, where two professors from the Faculty of Education of the same university…
ERIC Educational Resources Information Center
Hwang, Wu-Yuin; Su, Jia-Han; Huang, Yueh-Min; Dong, Jian-Jie
2009-01-01
In this paper, the development of an innovative Virtual Manipulatives and Whiteboard (VMW) system is described. The VMW system allowed users to manipulate virtual objects in 3D space and find clues to solve geometry problems. To assist with multi-representation transformation, translucent multimedia whiteboards were used to provide a virtual 3D…
The Use of Virtual Ethnography in Distance Education Research
ERIC Educational Resources Information Center
Uzun, Kadriye; Aydin, Cengiz Hakan
2012-01-01
3D virtual worlds can and have been used as a meeting place for distance education courses. Virtual worlds allow for group learning of the kind enjoyed by students gathered in a virtual classroom, where they know they are in a communal space, they are aware of the social process of learning and are affected by the presence and behaviour of their…
ERIC Educational Resources Information Center
Heyward, Kamela S.
2012-01-01
This dissertation examines the strategic practice of virtual racial embodiment, as a case study of African Americans attempting to complicate current constructions of race and social justice in new media. I suggest that dominant racial constructions online teeter between racial stereotypes and the absence of race. Virtual racial classification and…
Grasping trajectories in a virtual environment adhere to Weber's law.
Ozana, Aviad; Berman, Sigal; Ganel, Tzvi
2018-06-01
Virtual-reality and telerobotic devices simulate local motor control of virtual objects within computerized environments. Here, we explored grasping kinematics within a virtual environment and tested whether, as in normal 3D grasping, trajectories in the virtual environment are performed analytically, violating Weber's law with respect to object size. Participants were asked to grasp a series of 2D objects using a haptic system, which projected their movements to a virtual space presented on a computer screen. The apparatus also provided object-specific haptic information upon "touching" the edges of the virtual targets. The results showed that grasping movements performed within the virtual environment did not produce the typical analytical trajectory pattern obtained during 3D grasping. Unlike in 3D grasping, grasping trajectories in the virtual environment adhered to Weber's law, which indicates relative resolution in size processing. In addition, the trajectory patterns differed from typical trajectories obtained during 3D grasping, with longer times to complete the movement and with maximum grip apertures appearing relatively early in the movement. The results suggest that grasping movements within a virtual environment can differ from those performed in real space, and are subject to irrelevant effects of perceptual information. Such an atypical pattern of visuomotor control may be mediated by the lack of complete transparency between the interface and the virtual environment in terms of the provided visual and haptic feedback. Possible implications of the findings for movement control within robotic and virtual environments are further discussed.
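Adherence to Weber's law is typically tested by checking whether the just-noticeable difference, approximated by the within-size variability of the maximum grip aperture, grows linearly with object size. A minimal sketch of that check, with invented field names, follows.

```python
import numpy as np

def weber_fraction(object_sizes_mm, mga_mm):
    """Test for adherence to Weber's law in grasping data.

    For each object size, the JND is approximated by the within-size
    standard deviation of the maximum grip aperture (MGA). Weber's law
    predicts JND = k * size, so the fitted slope k should be clearly
    positive; analytic 3D grasping instead yields roughly flat JNDs.
    Requires at least two trials per size."""
    object_sizes_mm = np.asarray(object_sizes_mm, dtype=float)
    mga_mm = np.asarray(mga_mm, dtype=float)
    sizes = np.unique(object_sizes_mm)
    jnds = np.array([mga_mm[object_sizes_mm == s].std(ddof=1) for s in sizes])
    k, _ = np.polyfit(sizes, jnds, 1)   # slope = estimated Weber fraction
    return k, sizes, jnds
```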
Ivanova, T N; Matthews, A; Gross, C; Mappus, R C; Gollnick, C; Swanson, A; Bassell, G J; Liu, R C
2011-05-05
Acquiring the behavioral significance of sound has repeatedly been shown to correlate with long term changes in response properties of neurons in the adult primary auditory cortex. However, the molecular and cellular basis for such changes is still poorly understood. To address this, we have begun examining the auditory cortical expression of an activity-dependent effector immediate early gene (IEG) with documented roles in synaptic plasticity and memory consolidation in the hippocampus: Arc/Arg3.1. For initial characterization, we applied a repeated 10 min (24 h separation) sound exposure paradigm to determine the strength and consistency of sound-evoked Arc/Arg3.1 mRNA expression in the absence of explicit behavioral contingencies for the sound. We used 3D surface reconstruction methods in conjunction with fluorescent in situ hybridization (FISH) to assess the layer-specific subcellular compartmental expression of Arc/Arg3.1 mRNA. We unexpectedly found that both the intranuclear and cytoplasmic patterns of expression depended on the prior history of sound stimulation. Specifically, the percentage of neurons with expression only in the cytoplasm increased for repeated versus singular sound exposure, while intranuclear expression decreased. In contrast, the total cellular expression did not differ, consistent with prior IEG studies of primary auditory cortex. Our results were specific for cortical layers 3-6, as there was virtually no sound driven Arc/Arg3.1 mRNA in layers 1-2 immediately after stimulation. Our results are consistent with the kinetics and/or detectability of cortical subcellular Arc/Arg3.1 mRNA expression being altered by the initial exposure to the sound, suggesting exposure-induced modifications in the cytoplasmic Arc/Arg3.1 mRNA pool. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.
The 'F-complex' and MMN tap different aspects of deviance.
Laufer, Ilan; Pratt, Hillel
2005-02-01
To compare the 'F(fusion)-complex' with the mismatch negativity (MMN), both components associated with automatic detection of changes in the acoustic stimulus flow. Ten right-handed adult native Hebrew speakers discriminated vowel-consonant-vowel (V-C-V) sequences /ada/ (deviant) and /aga/ (standard) in an active auditory 'oddball' task, and the brain potentials associated with performance of the task were recorded from 21 electrodes. Stimuli were generated by fusing the acoustic elements of the V-C-V sequences as follows: the base was always presented in front of the subject, and formant transitions were presented to the front, left, or right in a virtual reality room. An illusion of a lateralized echo (duplex sensation) accompanied base fusion with the lateralized formant locations. Source current density estimates were derived for the net response to the fusion of the speech elements (F-complex) and for the MMN, using low-resolution electromagnetic tomography (LORETA). Statistical non-parametric mapping was used to estimate the current density differences between the brain sources of the F-complex and the MMN. Occipito-parietal and prefrontal regions were associated with the F-complex in all formant locations, whereas the vicinity of the supratemporal plane was bilaterally associated with the MMN, but only in the case of front fusion (no duplex effect). The MMN is sensitive to the novelty of the auditory object in relation to other stimuli in a sequence, whereas the F-complex is sensitive to the acoustic features of the auditory object and reflects a process of matching them with target categories. The F-complex and MMN thus reflect different aspects of auditory processing in a stimulus-rich and changing environment: content analysis of the stimulus and novelty detection, respectively.
STS-105 Crew Training in VR Lab
2001-03-15
JSC2001-00754 (15 March 2001) --- Astronaut Patrick G. Forrester, STS-105 mission specialist, uses specialized gear in the virtual reality lab at the Johnson Space Center (JSC) to train for his duties aboard the Space Shuttle Discovery. This type of virtual reality training allows the astronauts to wear a helmet and special gloves while looking at computer displays simulating actual movements around the various locations on the International Space Station (ISS) hardware with which they will be working.
STS-109 Crew Training in VR Lab, Building 9
2001-08-08
JSC2001-E-24452 (8 August 2001) --- Astronauts John M. Grunsfeld (left), STS-109 payload commander, and Nancy J. Currie, mission specialist, use the virtual reality lab at the Johnson Space Center (JSC) to train for some of their duties aboard the Space Shuttle Columbia. This type of computer interface paired with virtual reality training hardware and software helps to prepare the entire team to perform its duties during the fourth Hubble Space Telescope (HST) servicing mission.
A Novel Computer-Based Set-Up to Study Movement Coordination in Human Ensembles
Alderisio, Francesco; Lombardi, Maria; Fiore, Gianfranco; di Bernardo, Mario
2017-01-01
Existing experimental works on movement coordination in human ensembles mostly investigate situations where each subject is connected to all the others through direct visual and auditory coupling, so that unavoidable social interaction affects their coordination level. Here, we present a novel computer-based set-up to study movement coordination in human groups so as to minimize the influence of social interaction among participants and implement different visual pairings between them. In so doing, players can only take into consideration the motion of a designated subset of the others. This allows the evaluation of the exclusive effects on coordination of the structure of interconnections among the players in the group and their own dynamics. In addition, our set-up enables the deployment of virtual computer players to investigate dyadic interaction between a human and a virtual agent, as well as group synchronization in mixed teams of human and virtual agents. We show how this novel set-up can be employed to study coordination both in dyads and in groups over different structures of interconnections, in the presence as well as in the absence of virtual agents acting as followers or leaders. Finally, in order to illustrate the capabilities of the architecture, we describe some preliminary results. The platform is available to any researcher who wishes to unfold the mechanisms underlying group synchronization in human ensembles and shed light on its socio-psychological aspects. PMID:28649217
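Group synchronization over a designated interconnection structure is often modeled with coupled phase oscillators. The Kuramoto-style sketch below is not the authors' model, just an illustration of how an adjacency matrix can encode who sees whom, including a virtual leader whose motion is unaffected by the group.

```python
import numpy as np

def simulate_group(adjacency, omega, coupling=1.5, dt=0.01, steps=3000, seed=0):
    """Kuramoto-style sketch of movement coordination over a given
    interconnection structure. adjacency[i, j] = 1 if player i sees
    player j; an all-zero row makes that player a virtual leader whose
    motion is unaffected by the others."""
    rng = np.random.default_rng(seed)
    n = len(omega)
    theta = rng.uniform(0, 2 * np.pi, n)
    sync = []
    for _ in range(steps):
        diff = np.sin(theta[None, :] - theta[:, None])   # sin(theta_j - theta_i)
        degree = np.maximum(adjacency.sum(axis=1), 1)
        theta = theta + dt * (omega + coupling *
                              (adjacency * diff).sum(axis=1) / degree)
        sync.append(abs(np.exp(1j * theta).mean()))      # order parameter
    return np.array(sync)

# Ring of 4 humans plus one virtual leader that everyone sees.
A = np.array([[0, 1, 0, 1, 1],
              [1, 0, 1, 0, 1],
              [0, 1, 0, 1, 1],
              [1, 0, 1, 0, 1],
              [0, 0, 0, 0, 0]])                 # leader ignores the group
omega = np.array([1.0, 1.1, 0.9, 1.05, 1.0])   # natural frequencies (rad/s)
print(simulate_group(A, omega)[-1])             # value near 1 => synchronized
```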
Exploring the simulation requirements for virtual regional anesthesia training
NASA Astrophysics Data System (ADS)
Charissis, V.; Zimmer, C. R.; Sakellariou, S.; Chan, W.
2010-01-01
This paper presents an investigation of the simulation requirements for virtual regional anaesthesia training. To this end we have developed a prototype human-computer interface designed to facilitate Virtual Reality (VR)-augmented educational tactics for regional anaesthesia training. The proposed interface system aims to complement nerve-blocking techniques. The system is designed to operate in a real-time 3D environment, presenting anatomical information and enabling the user to explore the spatial relation of different human parts without any physical constraints. Furthermore, the proposed system aims to assist trainee anaesthetists in building a mental, three-dimensional map of the anatomical elements and their depictive relationship to the ultrasound imaging that is used for navigation of the anaesthetic needle. Opting for a sophisticated approach to interaction, the interface elements are based on simplified visual representations of real objects, and can be operated through haptic devices and surround auditory cues. This paper discusses the challenges involved in the HCI design, introduces the visual components of the interface, and presents a tentative plan for future work, which involves the development of realistic haptic feedback and various regional anaesthesia training scenarios.
Virtual reality and cognitive rehabilitation: a review of current outcome research.
Larson, Eric B; Feigon, Maia; Gagliardo, Pablo; Dvorkin, Assaf Y
2014-01-01
Recent advancement in the technology of virtual reality (VR) has allowed improved applications for cognitive rehabilitation. The aim of this review is to facilitate comparisons of therapeutic efficacy of different VR interventions. A systematic approach for the review of VR cognitive rehabilitation outcome research addressed the nature of each sample, treatment apparatus, experimental treatment protocol, control treatment protocol, statistical analysis and results. Using this approach, studies that provide valid evidence of efficacy of VR applications are summarized. Applications that have not yet undergone controlled outcome study but which have promise are introduced. Seventeen studies conducted over the past eight years are reviewed. The few randomized controlled trials that have been completed show that some applications are effective in treating cognitive deficits in people with neurological diagnoses although further study is needed. Innovations requiring further study include the use of enriched virtual environments that provide haptic sensory input in addition to visual and auditory inputs and the use of commercially available gaming systems to provide tele-rehabilitation services. Recommendations are offered to improve efficacy of rehabilitation, to improve scientific rigor of rehabilitation research and to broaden access to the evidence-based treatments that this research has identified.
Designing a Virtual Social Space for Language Acquisition
ERIC Educational Resources Information Center
Woolson, Maria Alessandra
2012-01-01
Middleverse de Español (MdE) is an evolving platform for foreign language (FL) study, aligned to the goals of ACTFL's National Standards and 2007 MLA report. The project simulates an immersive environment in a virtual 3-D space for the acquisition of translingual and transcultural competence in Spanish meant to support content-based and…
ERIC Educational Resources Information Center
Kasperiuniene, Judita; Zydziunaite, Vilma; Eriksson, Malin
2017-01-01
This qualitative study explored the self-regulated learning (SRL) of teachers and their students in virtual social spaces. The processes of SRL were analyzed from 24 semi-structured individual interviews with professors, instructors and their students from five Lithuanian universities. A core category, 'stroking the net whale', showed the process of…
Hiding and Searching Strategies of Adult Humans in a Virtual and a Real-Space Room
ERIC Educational Resources Information Center
Talbot, Katherine J.; Legge, Eric L. G.; Bulitko, Vadim; Spetch, Marcia L.
2009-01-01
Adults searched for or cached three objects in nine hiding locations in a virtual room or a real-space room. In both rooms, the locations selected by participants differed systematically between searching and hiding. Specifically, participants moved farther from origin and dispersed their choices more when hiding objects than when searching for…
Source Space Estimation of Oscillatory Power and Brain Connectivity in Tinnitus
Zobay, Oliver; Palmer, Alan R.; Hall, Deborah A.; Sereda, Magdalena; Adjamian, Peyman
2015-01-01
Tinnitus is the perception of an internally generated sound that is postulated to emerge as a result of structural and functional changes in the brain. However, the precise pathophysiology of tinnitus remains unknown. Llinas’ thalamocortical dysrhythmia model suggests that neural deafferentation due to hearing loss causes a dysregulation of coherent activity between thalamus and auditory cortex. This leads to a pathological coupling of theta and gamma oscillatory activity in the resting state, localised to the auditory cortex, where normally alpha oscillations should occur. Numerous studies also suggest that tinnitus perception relies on the interplay between auditory and non-auditory brain areas. According to the Global Brain Model, a network of global fronto-parietal-cingulate areas is important in the generation and maintenance of the conscious perception of tinnitus. Thus, the distress experienced by many individuals with tinnitus is related to the top-down influence of this global network on auditory areas. In this magnetoencephalographic study, we compare resting-state oscillatory activity of tinnitus participants and normal-hearing controls to examine effects on spectral power as well as functional and effective connectivity. The analysis is based on beamformer source projection and an atlas-based region-of-interest approach. We find increased functional connectivity within the auditory cortices in the alpha band. A significant increase is also found for the effective connectivity from a global brain network to the auditory cortices in the alpha and beta bands. We do not find evidence of effects on spectral power. Overall, our results provide only limited support for the thalamocortical dysrhythmia and Global Brain models of tinnitus. PMID:25799178
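As one concrete example of the kind of functional-connectivity measure used in such studies, alpha-band coherence between two source-projected ROI time series can be computed with SciPy; the synthetic signals below merely stand in for beamformed MEG data.

```python
import numpy as np
from scipy.signal import coherence

# Illustrative only: alpha-band (8-12 Hz) coherence between two ROI time
# series (e.g., left and right auditory cortex). A real MEG analysis would
# first beamform sensor data to these ROIs; here we fabricate the series.
fs = 250.0
rng = np.random.default_rng(1)
shared = np.sin(2 * np.pi * 10 * np.arange(0, 60, 1 / fs))  # common 10 Hz drive
roi_left = shared + rng.standard_normal(shared.size)
roi_right = shared + rng.standard_normal(shared.size)

f, coh = coherence(roi_left, roi_right, fs=fs, nperseg=512)
alpha = (f >= 8) & (f <= 12)
print(f"alpha-band coherence: {coh[alpha].mean():.2f}")
```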
The Encoding of Sound Source Elevation in the Human Auditory Cortex.
Trapeau, Régis; Schönwiesner, Marc
2018-03-28
Spatial hearing is a crucial capacity of the auditory system. While the encoding of horizontal sound direction has been extensively studied, very little is known about the representation of vertical sound direction in the auditory cortex. Using high-resolution fMRI, we measured voxelwise sound elevation tuning curves in human auditory cortex and show that sound elevation is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. We changed the ear shape of participants (male and female) with silicone molds for several days. This manipulation reduced or abolished the ability to discriminate sound elevation and flattened cortical tuning curves. Tuning curves recovered their original shape as participants adapted to the modified ears and regained elevation perception over time. These findings suggest that the elevation tuning observed in low-level auditory cortex did not arise from the physical features of the stimuli but is contingent on experience with spectral cues and covaries with the change in perception. One explanation for this observation may be that the tuning in low-level auditory cortex underlies the subjective perception of sound elevation. SIGNIFICANCE STATEMENT This study addresses two fundamental questions about the brain representation of sensory stimuli: how the vertical spatial axis of auditory space is represented in the auditory cortex and whether low-level sensory cortex represents physical stimulus features or subjective perceptual attributes. Using high-resolution fMRI, we show that vertical sound direction is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. In addition, we demonstrate that the shape of these tuning functions is contingent on experience with spectral cues and covaries with the change in perception, which may indicate that the tuning functions in low-level auditory cortex underlie the perceived elevation of a sound source. Copyright © 2018 the authors 0270-6474/18/383252-13$15.00/0.
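Voxelwise tuning curves of the narrow type described here are commonly summarized by fitting a parametric function, for example a Gaussian of elevation, to the per-elevation responses. The sketch below fits such a curve to invented responses; it is not the authors' analysis pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_tuning(elev, amp, pref, width, baseline):
    """Tuning component peaking at a preferred elevation (degrees)."""
    return baseline + amp * np.exp(-0.5 * ((elev - pref) / width) ** 2)

# Hypothetical voxel responses at 7 virtual elevations (degrees).
elevations = np.array([-45., -30., -15., 0., 15., 30., 45.])
responses = np.array([1.9, 1.5, 1.1, 0.9, 0.7, 0.6, 0.55])  # low-preferring

params, _ = curve_fit(gaussian_tuning, elevations, responses,
                      p0=(1.0, -45.0, 30.0, 0.5), maxfev=5000)
print(f"preferred elevation: {params[1]:.1f} deg")
```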
Music perception: sounds lost in space.
Stewart, Lauren; Walsh, Vincent
2007-10-23
A recent study of spatial processing in amusia makes a controversial claim that such musical deficits may be understood in terms of a problem in the representation of space. If such a link is demonstrated to be causal, it would challenge the prevailing view that deficits in amusia are specific to the musical or even the auditory domain.
Content Sharing Based on Personal Information in Virtually Secured Space
NASA Astrophysics Data System (ADS)
Sohn, Hosik; Ro, Yong Man; Plataniotis, Kostantinos N.
User generated contents (UGC) are shared in open spaces like social media, where users can upload and consume contents freely. Since access to the contents is not restricted, the contents can sometimes be delivered to unwanted users or misused. In this paper, we propose a method for sharing UGCs securely based on the personal information of users. With the proposed method, a virtual secure space is created for content delivery. The virtual secure space allows the UGC creator to deliver contents to users who have similar personal information, and they can consume the contents without any leakage of personal information. In order to verify the usefulness of the proposed method, an experiment was performed in which content was encrypted with the personal information of the creator, and users with similar personal information decrypted and consumed the content. The results showed that UGCs were securely shared among users who have similar personal information.
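One simple way to realize "decryptable only by users with matching personal information" is to derive the symmetric key from a canonicalized attribute profile, so identical (or identically quantized) profiles derive identical keys. The sketch below uses SHA-256 and Fernet; the quantization into coarse attribute bands is a hypothetical stand-in for the paper's similarity-based construction.

```python
import base64, hashlib, json
from cryptography.fernet import Fernet   # pip install cryptography

def attribute_key(profile: dict) -> bytes:
    """Derive a Fernet key from a quantized personal-attribute profile.
    Users whose quantized profiles match derive the same key and can
    decrypt; attribute names/values here are invented examples."""
    canonical = json.dumps(profile, sort_keys=True).encode()
    return base64.urlsafe_b64encode(hashlib.sha256(canonical).digest())

creator = {"age_band": "20s", "region": "Seoul", "interest": "music"}
token = Fernet(attribute_key(creator)).encrypt(b"user generated content")

viewer = {"age_band": "20s", "region": "Seoul", "interest": "music"}
print(Fernet(attribute_key(viewer)).decrypt(token))  # matching profile decrypts
```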
Robot Teleoperation and Perception Assistance with a Virtual Holographic Display
NASA Technical Reports Server (NTRS)
Goddard, Charles O.
2012-01-01
Teleoperation of robots in space from Earth has historically been difficult. Speed-of-light delays make direct joystick-type control infeasible, so it is desirable to command a robot in a very high-level fashion. However, in order to provide such an interface, knowledge of what objects are in the robot's environment and how they can be interacted with is required. In addition, many tasks that would be desirable to perform are highly spatial, requiring some form of six-degree-of-freedom input. These two issues can be combined, allowing the user to assist the robot's perception by identifying the locations of objects in the scene. The zSpace system, a virtual holographic environment, provides a virtual three-dimensional space superimposed over real space and a stylus tracking position and rotation inside of it. Using this system, a possible interface for this sort of robot control is proposed.
The road to virtual: the Sauls Memorial Virtual Library journey.
Waddell, Stacie; Harkness, Amy; Cohen, Mark L
2014-01-01
The Sauls Memorial Virtual Library closed its physical space in 2012. This article outlines the reasons for this change and how the library staff and hospital leadership planned and executed the enormous undertaking. Outcomes of the change and lessons learned from the process are discussed.
[The Museu da Saúde in Portugal: a physical space, a virtual space].
Oliveira, Inês Cavadas de; Andrade, Helena Rebelo de; Miguel, José Pereira
2015-12-01
Museu da Saúde (Museum of Health) in Portugal, based on the dual concept of a multifaceted physical space and a virtual space, is preparing an inventory of its archive. So far, it has studied five of its collections in greater depth: tuberculosis, urology, psychology, medicine, and malaria. In this article, these collections are presented, and the specificities of developing museological activities within a national laboratory, Instituto Nacional de Saúde Doutor Ricardo Jorge, are also discussed, highlighting the issues of the store rooms and exhibition spaces, the inventory process, and the communication activities, with a view to overcoming the challenges inherent to operating in a non-museological space.
2011-01-18
JSC2011-E-003204 (18 Jan. 2011) --- NASA astronauts Rex Walheim, STS-135 mission specialist; and Mike Fossum (foreground), Expedition 28 flight engineer and Expedition 29 commander; use the virtual reality lab in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to train for some of their duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare crew members for dealing with space station elements. STS-135 is planned to be the final mission of the space shuttle program. Photo credit: NASA or National Aeronautics and Space Administration
STS-105 Crew Training in VR Lab
2001-03-15
JSC2001-00748 (15 March 2001) --- Astronaut Patrick G. Forrester, STS-105 mission specialist, prepares to use specialized gear in the virtual reality lab at the Johnson Space Center (JSC) to train for his duties aboard the Space Shuttle Discovery. This type of virtual reality training allows the astronauts to wear a helmet and special gloves while looking at computer displays simulating actual movements around the various locations on the International Space Station (ISS) hardware with which they will be working.
STS-111 Training in VR lab with Expedition IV and V Crewmembers
2001-10-18
JSC2001-E-39083 (18 October 2001) --- Astronaut Franklin R. Chang-Diaz, STS-111 mission specialist, uses specialized gear in the virtual reality lab at the Johnson Space Center (JSC) to train for his duties aboard the Space Shuttle Endeavour. This type of virtual reality training allows the astronauts to wear a helmet and special gloves while looking at computer displays simulating actual movements around the various locations on the International Space Station (ISS) hardware with which they will be working.
STS-120 crew along with Expedition crew members Dan Tani and Sandra Magnus
2007-08-09
JSC2007-E-41535 (9 Aug. 2007) --- Astronaut Douglas H. Wheelock, STS-120 mission specialist, uses virtual reality hardware in the Space Vehicle Mockup Facility at Johnson Space Center to rehearse some of his duties on the upcoming mission to the International Space Station. This type of virtual reality training allows the astronauts to wear special gloves and other gear while looking at computer displays simulating actual movements around the various locations on the station hardware with which they will be working.
STS-134 crew and Expedition 24/25 crew member Shannon Walker
2010-03-25
JSC2010-E-043660 (25 March 2010) --- NASA astronaut Greg Chamitoff, STS-134 mission specialist, uses virtual reality hardware in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to rehearse some of his duties on the upcoming mission to the International Space Station. This type of virtual reality training allows the astronauts to wear a helmet and special gloves while looking at computer displays simulating actual movements around the various locations on the station hardware with which they will be working.
STS-134 crew and Expedition 24/25 crew member Shannon Walker
2010-03-25
JSC2010-E-043685 (25 March 2010) --- NASA astronaut Michael Fincke, STS-134 mission specialist, uses virtual reality hardware in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to rehearse some of his duties on the upcoming mission to the International Space Station. This type of virtual reality training allows the astronauts to wear a helmet and special gloves while looking at computer displays simulating actual movements around the various locations on the station hardware with which they will be working.
STS-120 crew along with Expedition crew members Dan Tani and Sandra Magnus
2007-08-09
JSC2007-E-41537 (9 Aug. 2007) --- Astronaut Douglas H. Wheelock, STS-120 mission specialist, uses virtual reality hardware in the Space Vehicle Mockup Facility at Johnson Space Center to rehearse some of his duties on the upcoming mission to the International Space Station. This type of virtual reality training allows the astronauts to wear special gloves and other gear while looking at computer displays simulating actual movements around the various locations on the station hardware with which they will be working.
Meldrum, Dara; Herdman, Susan; Moloney, Roisin; Murray, Deirdre; Duffy, Douglas; Malone, Kareena; French, Helen; Hone, Stephen; Conroy, Ronan; McConn-Walsh, Rory
2012-03-26
Unilateral peripheral vestibular loss results in gait and balance impairment, dizziness and oscillopsia. Vestibular rehabilitation benefits patients, but optimal treatment remains unknown. Virtual reality is an emerging tool in rehabilitation and provides opportunities to improve both outcomes and patient satisfaction with treatment. The Nintendo Wii Fit Plus® (NWFP) is a low-cost virtual reality system that challenges balance and provides visual and auditory feedback. It may augment the motor learning that is required to improve balance and gait, but no trials to date have investigated efficacy. In a single-blind (assessor-blinded), two-centre randomised controlled superiority trial, 80 patients with unilateral peripheral vestibular loss will be randomised to either conventional or virtual reality based (NWFP) vestibular rehabilitation for 6 weeks. The primary outcome measure is gait speed (measured with three-dimensional gait analysis). Secondary outcomes include computerised posturography, dynamic visual acuity, and validated questionnaires on dizziness, confidence and anxiety/depression. Outcome will be assessed post treatment (8 weeks) and at 6 months. Advances in the gaming industry have allowed mass production of highly sophisticated, low-cost virtual reality systems that incorporate technology previously not accessible to most therapists and patients. Importantly, they are not confined to rehabilitation departments, can be used at home, and provide an accurate record of adherence to exercise. The benefits of providing augmented feedback, increasing the intensity of exercise, and accurately measuring adherence may improve conventional vestibular rehabilitation, but efficacy must first be demonstrated. ClinicalTrials.gov identifier: NCT01442623.
Neguț, Alexandra; Jurma, Anda Maria; David, Daniel
2017-08-01
Virtual-reality-based assessment may be a good alternative to classical or computerized neuropsychological assessment due to increased ecological validity. ClinicaVR: Classroom-CPT (VC) is a neuropsychological test embedded in virtual reality that is designed to assess attention deficits in children with attention deficit hyperactivity disorder (ADHD) or other conditions associated with impaired attention. The present study aimed to (1) investigate the diagnostic validity of VC in comparison to a traditional continuous performance test (CPT), (2) explore the task difficulty of VC, (3) address the effect of distractors on the performance of ADHD participants and typically-developing (TD) controls, and (4) compare the two measures on cognitive absorption. A total of 33 children diagnosed with ADHD and 42 TD children, aged between 7 and 13 years, participated in the study and were tested with a traditional CPT or with VC, along with several cognitive measures and an adapted version of the Cognitive Absorption Scale. A mixed multivariate analysis of covariance (MANCOVA) revealed that the children with ADHD made fewer correct responses and more commission and omission errors than the TD children, and had slower target reaction times. The results showed significant differences between performance in the virtual environment and the traditional computerized one, with longer reaction times in virtual reality. The data analysis highlighted the negative influence of auditory distractors on attention performance in the case of the children with ADHD, but not for the TD children. Finally, the two measures did not differ on the cognitive absorption perceived by the children.
Effect of virtual reality on cognition in stroke patients.
Kim, Bo Ryun; Chun, Min Ho; Kim, Lee Suk; Park, Ji Young
2011-08-01
To investigate the effect of virtual reality on the recovery of cognitive impairment in stroke patients. Twenty-eight patients (11 males and 17 females, mean age 64.2) with cognitive impairment following stroke were recruited for this study. All patients were randomly assigned to one of two groups, the virtual reality (VR) group (n=15) or the control group (n=13). The VR group received both virtual reality training and computer-based cognitive rehabilitation, whereas the control group received only computer-based cognitive rehabilitation. To measure activities of daily living and cognitive and motor functions, the following assessment tools were used: a computerized neuropsychological test and the Tower of London (TOL) test for cognitive function assessment, the Korean-Modified Barthel Index (K-MBI) for functional status evaluation, and the Motricity Index (MI) for motor function assessment. All recruited patients underwent these evaluations before rehabilitation and four weeks after rehabilitation. The VR group showed significant improvement in the K-MMSE, visual and auditory continuous performance tests (CPTs), forward digit span test (DST), forward and backward visual span tests (VSTs), visual and verbal learning tests, TOL, K-MBI, and MI scores, while the control group showed significant improvement in the K-MMSE, forward DST, visual and verbal learning tests, Trail Making Test type A, TOL, K-MBI, and MI scores after rehabilitation. The changes in the visual CPT and backward VST in the VR group after rehabilitation were significantly higher than those in the control group. Our findings suggest that virtual reality training combined with computer-based cognitive rehabilitation may be of additional benefit for treating cognitive impairment in stroke patients.
Integrating an Awareness of Selfhood and Society into Virtual Learning
ERIC Educational Resources Information Center
Stricker, Andrew, Ed.; Calongne, Cynthia, Ed.; Truman, Barbara, Ed.; Arenas, Fil, Ed.
2017-01-01
Recent technological advances have opened new platforms for learning and teaching. By utilizing virtual spaces, more educational opportunities are created for students who cannot attend a physical classroom environment. "Integrating an Awareness of Selfhood and Society into Virtual Learning" is a pivotal reference source that discusses…
Virtual Instruction: Issues and Insights from an International Perspective.
ERIC Educational Resources Information Center
Feyten, Carine M., Ed.; Nutta, Joyce W., Ed.
The essays in this book, by contributors from around the world, clarify predominant theoretical issues that pertain to virtual instruction, and offer practical suggestions for implementing these programs in any setting. Chapters include: "Mapping Space and Time: Virtual Instruction as Global Ritual" (Joyce W. Nutta and Carine M. Feyten);…
A Virtual Map to Support People Who Are Blind in Navigation through Real Spaces
ERIC Educational Resources Information Center
Lahav, Orly; Schloerb, David W.; Kumar, Siddarth; Srinivasan, Mandayam A.
2011-01-01
Most of the spatial information needed by sighted people to construct cognitive maps of spaces is gathered through the visual channel. Unfortunately, people who are blind lack the ability to collect the required spatial information in advance. The use of virtual reality as a learning and rehabilitation tool for people with disabilities has been on…
Photographic coverage of STS-112 during EVA 3 in VR Lab.
2002-08-21
JSC2002-E-34622 (21 August 2002) --- Astronaut David A. Wolf, STS-112 mission specialist, uses the virtual reality lab at the Johnson Space Center (JSC) to train for his duties aboard the Space Shuttle Atlantis. This type of computer interface paired with virtual reality training hardware and software helps to prepare the entire team for dealing with ISS elements.
Assessment of Student Learning in Virtual Spaces, Using Orders of Complexity in Levels of Thinking
ERIC Educational Resources Information Center
Capacho, Jose
2017-01-01
This paper aims at showing a new methodology to assess student learning in virtual spaces supported by Information and Communications Technology (ICT). The methodology is based on the Conceptual Pedagogy Theory and is supported by both knowledge instruments (KI) and intellectual operations (IO). KI are made up of teaching materials embedded in the…
Design of Learning Spaces in 3D Virtual Worlds: An Empirical Investigation of "Second Life"
ERIC Educational Resources Information Center
Minocha, Shailey; Reeves, Ahmad John
2010-01-01
"Second Life" (SL) is a three-dimensional (3D) virtual world, and educational institutions are adopting SL to support their teaching and learning. Although the question of how 3D learning spaces should be designed to support student learning and engagement has been raised among SL educators and designers, there is hardly any guidance or…
ERIC Educational Resources Information Center
Zheng, Dongping; Schmidt, Matthew; Hu, Ying; Liu, Min; Hsu, Jesse
2017-01-01
The purpose of this research was to explore the relationships between design, learning, and translanguaging in a 3D collaborative virtual learning environment for adolescent learners of Chinese and English. We designed an open-ended space congruent with ecological and dialogical perspectives on second language acquisition. In such a space,…
NASA Technical Reports Server (NTRS)
Smith, Jeffrey
2003-01-01
The Bio-Visualization, Imaging and Simulation (BioVIS) Technology Center at NASA's Ames Research Center is dedicated to developing and applying advanced visualization, computation and simulation technologies to support NASA Space Life Sciences research and the objectives of the Fundamental Biology Program. Research ranges from high resolution 3D cell imaging and structure analysis, virtual environment simulation of fine sensory-motor tasks, computational neuroscience and biophysics to biomedical/clinical applications. Computer simulation research focuses on the development of advanced computational tools for astronaut training and education. Virtual Reality (VR) and Virtual Environment (VE) simulation systems have become important training tools in many fields from flight simulation to, more recently, surgical simulation. The type and quality of training provided by these computer-based tools ranges widely, but the value of real-time VE computer simulation as a method of preparing individuals for real-world tasks is well established. Astronauts routinely use VE systems for various training tasks, including Space Shuttle landings, robot arm manipulations and extravehicular activities (space walks). Currently, there are no VE systems to train astronauts for the basic and applied research experiments which are an important part of many missions. The Virtual Glovebox (VGX) is a prototype VE system for real-time physically-based simulation of the Life Sciences Glovebox, where astronauts will perform many complex tasks supporting research experiments aboard the International Space Station. The VGX consists of a physical display system utilizing dual LCD projectors and circular polarization to produce a desktop-sized 3D virtual workspace. Physically-based modeling tools (Arachi Inc.) provide real-time collision detection, rigid body dynamics, physical properties and force-based controls for objects. The human-computer interface consists of two magnetic tracking devices (Ascension Inc.) attached to instrumented gloves (Immersion Inc.) which co-locate the user's hands with hand/forearm representations in the virtual workspace. Force-feedback is possible in a work volume defined by a Phantom Desktop device (SensAble Inc.). Graphics are written in OpenGL. The system runs on a 2.2 GHz Pentium 4 PC. The prototype VGX provides astronauts and support personnel with a real-time physically-based VE system to simulate basic research tasks both on Earth and in the microgravity of space. The immersive virtual environment of the VGX also makes it a useful tool for virtual engineering applications including CAD development, procedure design and simulation of human-system interactions in a desktop-sized work volume.
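The co-location step described above maps tracked hand positions into the virtual workspace. A minimal sketch of such a tracker-to-workspace registration, assuming a calibrated rigid transform (the names and numbers are illustrative, not the VGX implementation):

```python
import numpy as np

def make_transform(rotation, translation):
    # 4x4 homogeneous transform from tracker frame to workspace frame
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def co_locate(tracker_pos, T_tracker_to_workspace):
    # Map a tracked hand position (metres, tracker frame) into the workspace
    p = np.append(tracker_pos, 1.0)  # homogeneous coordinates
    return (T_tracker_to_workspace @ p)[:3]

# Illustrative calibration: tracker frame rotated 90 degrees about z, offset 20 cm
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
T = make_transform(Rz, np.array([0.2, 0.0, -0.1]))
print(co_locate(np.array([0.1, 0.05, 0.3]), T))
```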
Kaya, Emine Merve
2017-01-01
Sounds in everyday life seldom appear in isolation. Both humans and machines are constantly flooded with a cacophony of sounds that need to be sorted through and scoured for relevant information—a phenomenon referred to as the ‘cocktail party problem’. A key component in parsing acoustic scenes is the role of attention, which mediates perception and behaviour by focusing both sensory and cognitive resources on pertinent information in the stimulus space. The current article provides a review of modelling studies of auditory attention. The review highlights how the term attention refers to a multitude of behavioural and cognitive processes that can shape sensory processing. Attention can be modulated by ‘bottom-up’ sensory-driven factors, as well as ‘top-down’ task-specific goals, expectations and learned schemas. Essentially, it acts as a selection process or processes that focus both sensory and cognitive resources on the most relevant events in the soundscape; with relevance being dictated by the stimulus itself (e.g. a loud explosion) or by a task at hand (e.g. listen to announcements in a busy airport). Recent computational models of auditory attention provide key insights into its role in facilitating perception in cluttered auditory scenes. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044012
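The review distinguishes bottom-up, stimulus-driven salience (e.g. a loud explosion) from top-down, task-driven selection. A toy sketch of the bottom-up half, flagging the frame with the largest energy onset in a signal (an illustrative heuristic, not one of the reviewed models):

```python
import numpy as np

def most_salient_frame(signal, frame=512):
    # Crude bottom-up salience: frame-to-frame increase in RMS energy
    n = len(signal) // frame
    rms = np.array([np.sqrt(np.mean(signal[i * frame:(i + 1) * frame] ** 2))
                    for i in range(n)])
    onsets = np.maximum(np.diff(rms), 0.0)  # only energy increases count
    return int(np.argmax(onsets)) + 1

# A quiet tone with a loud burst (the "explosion") partway through
t = np.linspace(0.0, 1.0, 16000)
sig = 0.1 * np.sin(2 * np.pi * 440 * t)
sig[10000:10512] += np.random.randn(512)
print("most salient frame:", most_salient_frame(sig))
```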
The Architectonic Experience of Body and Space in Augmented Interiors
Pasqualini, Isabella; Blefari, Maria Laura; Tadi, Tej; Serino, Andrea; Blanke, Olaf
2018-01-01
The environment shapes our experience of space in constant interaction with the body. Architectonic interiors amplify the perception of space through the bodily senses; an effect also known as embodiment. The interaction of the bodily senses with the space surrounding the body can be tested experimentally through the manipulation of multisensory stimulation and measured via a range of behaviors related to bodily self-consciousness. Many studies have used Virtual Reality to show that visuotactile conflicts mediated via a virtual body or avatar can disrupt the unified subjective experience of the body and self. In the full-body illusion paradigm, participants feel as if the avatar was their body (ownership, self-identification) and they shift their center of awareness toward the position of the avatar (self-location). However, the influence of non-bodily spatial cues around the body on embodiment remains unclear, and data about the impact of architectonic space on human perception and self-conscious states are sparse. We placed participants into a Virtual Reality arena, where large and narrow virtual interiors were displayed with and without an avatar. We then applied synchronous or asynchronous visuotactile strokes to the back of the participants and avatar, or, to the front wall of the void interiors. During conditions of illusory self-identification with the avatar, participants reported sensations of containment, drift, and touch with the architectonic environment. The absence of the avatar suppressed such feelings, yet, in the large space, we found an effect of continuity between the physical and the virtual interior depending on the full-body illusion. We discuss subjective feelings evoked by architecture and compare the full-body illusion in augmented interiors to architectonic embodiment. A relevant outcome of this study is the potential to dissociate the egocentric, first-person view from the physical point of view through augmented architectonic space. PMID:29755378
The Architectonic Experience of Body and Space in Augmented Interiors.
Pasqualini, Isabella; Blefari, Maria Laura; Tadi, Tej; Serino, Andrea; Blanke, Olaf
2018-01-01
The environment shapes our experience of space in constant interaction with the body. Architectonic interiors amplify the perception of space through the bodily senses; an effect also known as embodiment. The interaction of the bodily senses with the space surrounding the body can be tested experimentally through the manipulation of multisensory stimulation and measured via a range of behaviors related to bodily self-consciousness. Many studies have used Virtual Reality to show that visuotactile conflicts mediated via a virtual body or avatar can disrupt the unified subjective experience of the body and self. In the full-body illusion paradigm, participants feel as if the avatar was their body (ownership, self-identification) and they shift their center of awareness toward the position of the avatar (self-location). However, the influence of non-bodily spatial cues around the body on embodiment remains unclear, and data about the impact of architectonic space on human perception and self-conscious states are sparse. We placed participants into a Virtual Reality arena, where large and narrow virtual interiors were displayed with and without an avatar. We then applied synchronous or asynchronous visuotactile strokes to the back of the participants and avatar, or, to the front wall of the void interiors. During conditions of illusory self-identification with the avatar, participants reported sensations of containment, drift, and touch with the architectonic environment. The absence of the avatar suppressed such feelings, yet, in the large space, we found an effect of continuity between the physical and the virtual interior depending on the full-body illusion. We discuss subjective feelings evoked by architecture and compare the full-body illusion in augmented interiors to architectonic embodiment. A relevant outcome of this study is the potential to dissociate the egocentric, first-person view from the physical point of view through augmented architectonic space.
An integrated system for dynamic control of auditory perspective in a multichannel sound field
NASA Astrophysics Data System (ADS)
Corey, Jason Andrew
An integrated system providing dynamic control of sound source azimuth, distance and proximity to a room boundary within a simulated acoustic space is proposed for use in multichannel music and film sound production. The system has been investigated, implemented, and psychoacoustically tested within the ITU-R BS.775 recommended five-channel (3/2) loudspeaker layout. The work brings together physical and perceptual models of room simulation to allow dynamic placement of virtual sound sources at any location of a simulated space within the horizontal plane. The control system incorporates a number of modules including simulated room modes, "fuzzy" sources, and tracking early reflections, whose parameters are dynamically changed according to sound source location within the simulated space. The control functions of the basic elements, derived from theories of perception of a source in a real room, have been carefully tuned to provide efficient, effective, and intuitive control of a sound source's perceived location. Seven formal listening tests were conducted to evaluate the effectiveness of the algorithm design choices. The tests evaluated: (1) loudness calibration of multichannel sound images; (2) the effectiveness of distance control; (3) the resolution of distance control provided by the system; (4) the effectiveness of the proposed system when compared to a commercially available multichannel room simulation system in terms of control of source distance and proximity to a room boundary; (5) the role of tracking early reflection patterns on the perception of sound source distance; (6) the role of tracking early reflection patterns on the perception of lateral phantom images. The listening tests confirm the effectiveness of the system for control of perceived sound source distance, proximity to room boundaries, and azimuth, through fine, dynamic adjustment of parameters according to source location. All of the parameters are grouped and controlled together to create a perceptually strong impression of source location and movement within a simulated space.
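The distance control described above rests on the perceptual fact that the direct sound attenuates with distance while a room's reverberant level stays roughly constant, so the direct-to-reverberant ratio falls as a source recedes. A minimal sketch of that relation (the constants are illustrative assumptions, not the tuning of the tested system):

```python
import numpy as np

def direct_gain(distance_m):
    # Direct sound: about -6 dB per doubling of distance (inverse law)
    return 1.0 / max(distance_m, 0.1)

def direct_to_reverb_db(distance_m, reverb_gain=0.25):
    # D/R ratio in dB, a primary auditory distance cue in rooms
    return 20.0 * np.log10(direct_gain(distance_m) / reverb_gain)

for d in (0.5, 1, 2, 4, 8):
    print(f"{d:>4} m: D/R = {direct_to_reverb_db(d):6.1f} dB")
```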
Egocentric and allocentric representations in auditory cortex
Brimijoin, W. Owen; Bizley, Jennifer K.
2017-01-01
A key function of the brain is to provide a stable representation of an object’s location in the world. In hearing, sound azimuth and elevation are encoded by neurons throughout the auditory system, and auditory cortex is necessary for sound localization. However, the coordinate frame in which neurons represent sound space remains undefined: classical spatial receptive fields in head-fixed subjects can be explained either by sensitivity to sound source location relative to the head (egocentric) or relative to the world (allocentric encoding). This coordinate frame ambiguity can be resolved by studying freely moving subjects; here we recorded spatial receptive fields in the auditory cortex of freely moving ferrets. We found that most spatially tuned neurons represented sound source location relative to the head across changes in head position and direction. In addition, we also recorded a small number of neurons in which sound location was represented in a world-centered coordinate frame. We used measurements of spatial tuning across changes in head position and direction to explore the influence of sound source distance and speed of head movement on auditory cortical activity and spatial tuning. Modulation depth of spatial tuning increased with distance for egocentric but not allocentric units, whereas, for both populations, modulation was stronger at faster movement speeds. Our findings suggest that early auditory cortex primarily represents sound source location relative to ourselves but that a minority of cells can represent sound location in the world independent of our own position. PMID:28617796
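The egocentric/allocentric distinction can be made concrete with the frame transform itself: an egocentric unit's preferred location follows the head, while an allocentric unit's does not. A small sketch, assuming 2D positions and a head yaw angle (illustrative, not the study's analysis code):

```python
import numpy as np

def world_to_head(p_world, head_pos, head_yaw):
    # Rotate the world-frame offset into the head (egocentric) frame
    c, s = np.cos(-head_yaw), np.sin(-head_yaw)
    R_inv = np.array([[c, -s], [s, c]])
    return R_inv @ (np.asarray(p_world) - np.asarray(head_pos))

source = [2.0, 0.0]  # fixed loudspeaker position in the room (allocentric)
for yaw in (0.0, np.pi / 2):  # the animal turns its head 90 degrees
    print(world_to_head(source, head_pos=[0.0, 0.0], head_yaw=yaw))
```

An egocentric neuron's firing would track the changing head-frame vector printed here; an allocentric neuron's would track the fixed world position.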
Michalka, Samantha W; Kong, Lingqiang; Rosen, Maya L; Shinn-Cunningham, Barbara G; Somers, David C
2015-08-19
The frontal lobes control wide-ranging cognitive functions; however, functional subdivisions of human frontal cortex are only coarsely mapped. Here, functional magnetic resonance imaging reveals two distinct visual-biased attention regions in lateral frontal cortex, superior precentral sulcus (sPCS) and inferior precentral sulcus (iPCS), anatomically interdigitated with two auditory-biased attention regions, transverse gyrus intersecting precentral sulcus (tgPCS) and caudal inferior frontal sulcus (cIFS). Intrinsic functional connectivity analysis demonstrates that sPCS and iPCS fall within a broad visual-attention network, while tgPCS and cIFS fall within a broad auditory-attention network. Interestingly, we observe that spatial and temporal short-term memory (STM), respectively, recruit visual and auditory attention networks in the frontal lobe, independent of sensory modality. These findings not only demonstrate that both sensory modality and information domain influence frontal lobe functional organization, they also demonstrate that spatial processing co-localizes with visual processing and that temporal processing co-localizes with auditory processing in lateral frontal cortex. Copyright © 2015 Elsevier Inc. All rights reserved.
Audio Spatial Representation Around the Body
Aggius-Vella, Elena; Campus, Claudio; Finocchietti, Sara; Gori, Monica
2017-01-01
Studies have found that portions of space around our body are differently coded by our brain. Numerous works have investigated visual and auditory spatial representation, focusing mostly on the spatial representation of stimuli presented at head level, especially in the frontal space. Only few studies have investigated spatial representation around the entire body and its relationship with motor activity. Moreover, it is still not clear whether the space surrounding us is represented as a unitary dimension or whether it is split up into different portions, differently shaped by our senses and motor activity. To clarify these points, we investigated audio localization of dynamic and static sounds at different body levels. In order to understand the role of a motor action in auditory space representation, we asked subjects to localize sounds by pointing with the hand or the foot, or by giving a verbal answer. We found that the audio sound localization was different depending on the body part considered. Moreover, a different pattern of response was observed when subjects were asked to make actions with respect to the verbal responses. These results suggest that the audio space around our body is split in various spatial portions, which are perceived differently: front, back, around chest, and around foot, suggesting that these four areas could be differently modulated by our senses and our actions. PMID:29249999
Research on The Construction of Flexible Multi-body Dynamics Model based on Virtual Components
NASA Astrophysics Data System (ADS)
Dong, Z. H.; Ye, X.; Yang, F.
2018-05-01
Focusing on the harsh operating conditions of space manipulators, which cannot withstand relatively large collision momentum, this paper proposes a new concept and technology called soft-contact technology. To solve the resulting problem of collision dynamics in flexible multi-body systems, this paper also proposes the concepts of virtual components and virtual hinges, constructs a flexible dynamics model based on virtual components, and studies its solution. On this basis, this paper uses NX to model and simulate a space manipulator in three different modes. The results show that a model of multi-rigid body + flexible body hinge + controllable damping can effectively control the amplitude of the force and torque caused by collision with a target satellite.
Distracting people from sources of discomfort in a simulated aircraft environment.
Lewis, Laura; Patel, Harshada; Cobb, Sue; D'Cruz, Mirabelle; Bues, Matthias; Stefani, Oliver; Grobler, Tredeaux
2016-07-19
Comfort is an important factor in the acceptance of transport systems. In 2010 and 2011, the European Commission (EC) put forward its vision for air travel in the year 2050, which envisaged the use of in-flight virtual reality. This paper addressed the EC vision by investigating the effect of virtual environments on comfort. Research has shown that virtual environments can provide entertaining experiences and can be effective distracters from painful experiences. The aim was to determine the extent to which a virtual environment could distract people from sources of discomfort. Experiments involved inducing discomfort commonly experienced in-flight (e.g. limited space, noise) in order to determine the extent to which viewing a virtual environment could distract people from it. Virtual environments can fully or partially distract people from sources of discomfort, becoming more effective when they are interesting. They are also more effective at distracting people from discomfort caused by restricted space than by noise disturbances. Virtual environments have the potential to enhance passenger comfort by providing positive distractions from sources of discomfort. Further research is required to understand more fully why the effect was stronger for one source of discomfort than the other.
STS-111 Training in VR lab with Expedition IV and V Crewmembers
2001-10-18
JSC2001-E-39090 (18 October 2001) --- Cosmonaut Valeri G. Korzun, Expedition Five mission commander representing Rosaviakosmos, uses the virtual reality lab at the Johnson Space Center (JSC) to train for his duties on the International Space Station (ISS). This type of computer interface paired with virtual reality training hardware and software helps to prepare the entire team for dealing with ISS elements.
ERIC Educational Resources Information Center
Kallkvist, Marie; Gomez, Stephen; Andersson, Holger; Lush, David
2009-01-01
The purpose of this study was to create and evaluate personalised virtual learning spaces (PVLSs) in a course that was previously delivered face-to-face only. The study addressed three related questions: (1) Can a PVLS successfully be introduced into a course where IT has not previously featured? (2) Can the PVLSs be used to enhance the assessment…
The Commercial Side of Virtual Play Worlds
ERIC Educational Resources Information Center
Kargin, Tolga
2018-01-01
In recent years, virtual play spaces have become enormously popular among young children around the world. As yet, though, there has been relatively little research into the ways in which children interact on such sites and what they learn in the process. This article describes a study of kids' experiences with one such virtual world, Club…
Virtual Worlds, Virtual Literacy: An Educational Exploration
ERIC Educational Resources Information Center
Stoerger, Sharon
2008-01-01
Virtual worlds enable students to learn through seeing, knowing, and doing within visually rich and mentally engaging spaces. Rather than reading about events, students become part of the events through the adoption of a pre-set persona. Along with visual feedback that guides the players' activities and the development of visual skills, visual…
ERIC Educational Resources Information Center
Triggs, Riley; Jarmon, Leslie; Villareal, Tracy A.
2010-01-01
Virtual environments can resolve many practical and pedagogical challenges within higher education. Economic considerations, accessibility issues, and safety concerns can all be somewhat alleviated by creating learning activities in a virtual space. Because of the removal of real-world physical limitations like gravity, durability and scope,…
Using EMG to anticipate head motion for virtual-environment applications
NASA Technical Reports Server (NTRS)
Barniv, Yair; Aguilar, Mario; Hasanbelliu, Erion
2005-01-01
In virtual environment (VE) applications, where virtual objects are presented in a see-through head-mounted display, virtual images must be continuously stabilized in space in response to user's head motion. Time delays in head-motion compensation cause virtual objects to "swim" around instead of being stable in space which results in misalignment errors when overlaying virtual and real objects. Visual update delays are a critical technical obstacle for implementing head-mounted displays in applications such as battlefield simulation/training, telerobotics, and telemedicine. Head motion is currently measurable by a head-mounted 6-degrees-of-freedom inertial measurement unit. However, even given this information, overall VE-system latencies cannot be reduced under about 25 ms. We present a novel approach to eliminating latencies, which is premised on the fact that myoelectric signals from a muscle precede its exertion of force, thereby limb or head acceleration. We thus suggest utilizing neck-muscles' myoelectric signals to anticipate head motion. We trained a neural network to map such signals onto equivalent time-advanced inertial outputs. The resulting network can achieve time advances of up to 70 ms.
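The core idea is a learned mapping from a short window of neck-EMG history to the inertial output some 70 ms in the future. A minimal sketch with a linear read-out in place of the paper's neural network, on synthetic stand-in data (the lead time and signal model are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, lags, advance = 5000, 50, 70  # samples at a nominal 1 kHz; 70-sample (70 ms) lead

# Synthetic stand-ins: the EMG envelope leads head velocity by ~70 ms
head_vel = np.convolve(rng.standard_normal(n), np.ones(100) / 100, mode="same")
emg = np.roll(head_vel, -advance) + 0.1 * rng.standard_normal(n)

# Each row: a sliding window of past EMG; target: head velocity 70 ms ahead
X = np.array([emg[i - lags:i] for i in range(lags, n - advance)])
y = head_vel[lags + advance:n]
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print("fit correlation:", np.corrcoef(X @ w, y)[0, 1])
```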
An adaptive process-based cloud infrastructure for space situational awareness applications
NASA Astrophysics Data System (ADS)
Liu, Bingwei; Chen, Yu; Shen, Dan; Chen, Genshe; Pham, Khanh; Blasch, Erik; Rubin, Bruce
2014-06-01
Space situational awareness (SSA) and defense space control capabilities are top priorities for groups that own or operate man-made spacecraft. Also, with the growing amount of space debris, there is an increased demand for contextual understanding that necessitates the capability of collecting and processing a vast amount of sensor data. Cloud computing, which features scalable and flexible storage and computing services, has been recognized as an ideal candidate that can meet the large-data contextual challenges of SSA. Cloud computing consists of physical service providers and middleware virtual machines together with infrastructure, platform, and software as a service (IaaS, PaaS, SaaS) models. However, the typical Virtual Machine (VM) abstraction is on a per-operating-system basis, which is too low-level and limits the flexibility of a mission application architecture. In response to this technical challenge, a novel adaptive process-based cloud infrastructure for SSA applications is proposed in this paper. In addition, the design rationale is detailed and a prototype is examined. The SSA Cloud (SSAC) conceptual capability will potentially support space situation monitoring and tracking, object identification, and threat assessment. Lastly, the benefits of a more granular and flexible allocation of cloud computing resources are illustrated for data processing and implementation considerations within a representative SSA system environment. We show that container-based virtualization performs better than hypervisor-based virtualization technology in an SSA scenario.
Virtual Boutique: a 3D modeling and content-based management approach to e-commerce
NASA Astrophysics Data System (ADS)
Paquet, Eric; El-Hakim, Sabry F.
2000-12-01
The Virtual Boutique is made out of three modules: the decor, the market and the search engine. The decor is the physical space occupied by the Virtual Boutique. It can reproduce any existing boutique. For this purpose, photogrammetry is used. A set of pictures of a real boutique or space is taken, and a virtual 3D representation of this space is calculated from them. Calculations are performed with software developed at NRC. This representation consists of meshes and texture maps. The camera used in the acquisition process determines the resolution of the texture maps. Decorative elements are added, like paintings, computer-generated objects and scanned objects. The objects are scanned with a laser scanner developed at NRC. This scanner allows simultaneous acquisition of range and color information based on white laser beam triangulation. The second module, the market, is made out of all the merchandise and the manipulators, which are used to manipulate and compare the objects. The third module, the search engine, can search the inventory based on an object shown by the customer in order to retrieve similar objects based on shape and color. The items of interest are displayed in the boutique by reconfiguring the market space, which means that the boutique can be continuously customized according to the customer's needs. The Virtual Boutique is entirely written in Java 3D, can run in mono and stereo mode, and has been optimized to allow high quality rendering.
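The third module ranks inventory items by their similarity to a query object in shape and color. A toy sketch of the color half of such a search, using histogram intersection on synthetic "images" (the abstract does not specify NRC's actual descriptors; this is an illustrative stand-in):

```python
import numpy as np

def color_hist(img, bins=8):
    # Joint RGB histogram, normalised to sum to 1
    h, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                          range=[(0, 256)] * 3)
    return h.ravel() / h.sum()

def rank_by_color(query, inventory):
    # Histogram intersection: 1.0 means identical color distributions
    q = color_hist(query)
    scores = [np.minimum(q, color_hist(item)).sum() for item in inventory]
    return np.argsort(scores)[::-1]

rng = np.random.default_rng(1)
inventory = [rng.integers(0, 256, (32, 32, 3)) for _ in range(5)]
print("ranking:", rank_by_color(inventory[2], inventory))  # item 2 ranks first
```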
Fawcett, Kayleigh; Ratcliffe, John M
2015-03-01
We compared the influence of conspecifics and clutter on echolocation and flight speed in the bat Myotis daubentonii. In a large room, actual pairs of bats exhibited greater disparity in peak frequency (PF), minimum frequency (FMIN) and call period compared to virtual pairs of bats, each flying alone. Greater inter-individual disparity in PF and FMIN may reduce acoustic interference and/or increase signal self-recognition in the presence of conspecifics. Bats flying alone in a smaller flight room, to simulate a more cluttered habitat as compared to the large flight room, produced calls of shorter duration and call period, lower intensity, and flew at lower speeds. In cluttered space, shorter call duration should reduce masking, while a shorter call period means more frequent updates to the bat's auditory scene. Lower intensity likely reflects reduced range-detection requirements; reduced speed, the demands of flying in clutter. Our results show that some changes (e.g. PF separation) are associated with conspecifics, others with closed habitat (e.g. reduced call intensity). However, we demonstrate that call duration, call period, and flight speed appear similarly influenced by conspecifics and clutter. We suggest that some changes reduce conspecific interference and/or improve self-recognition, while others indicate that bats experience each other like clutter.
Microgravity vestibular investigations (10-IML-1)
NASA Technical Reports Server (NTRS)
Reschke, Millard F.
1992-01-01
Our perception of how we are oriented in space is dependent on the interaction of virtually every sensory system. For example, to move about in our environment we integrate inputs in our brain from visual, haptic (kinesthetic, proprioceptive, and cutaneous), auditory systems, and labyrinths. In addition to this multimodal system for orientation, our expectations about the direction and speed of our chosen movement are also important. Changes in our environment and the way we interact with the new stimuli will result in a different interpretation by the nervous system of the incoming sensory information. We will adapt to the change in appropriate ways. Because our orientation system is adaptable and complex, it is often difficult to trace a response or change in behavior to any one source of information in this synergistic orientation system. However, with a carefully designed investigation, it is possible to measure signals at the appropriate level of response (both electrophysiological and perceptual) and determine the effect that stimulus rearrangement has on our sense of orientation. The environment of orbital flight represents the stimulus arrangement that is our immediate concern. The Microgravity Vestibular Investigations (MVI) represent a group of experiments designed to investigate the effects of orbital flight and a return to Earth on our orientation system.
Iachini, Tina; Pagliaro, Stefano; Ruggiero, Gennaro
2015-10-01
Near body distance is a key component of action and social interaction. Recent research has shown that peripersonal space (reachability-distance for acting with objects) and interpersonal space (comfort-distance for interacting with people) share common mechanisms and reflect the social valence of stimuli. The social psychological literature has demonstrated that information about morality is crucial because it affects impression formation and the intention to approach-avoid others. Here we explore whether peripersonal/interpersonal spaces are modulated by moral information. Thirty-six participants interacted with male/female virtual confederates described by moral/immoral/neutral sentences. The modulation of body space was measured by reachability-distance and comfort-distance while participants stood still or walked toward virtual confederates. Results showed that distance expanded with immorally described confederates and contracted with morally described confederates. This pattern was present in both spaces, although it was stronger in comfort-distance. Consistent with an embodied cognition approach, the findings suggest that high-level socio-cognitive processes are linked to sensorimotor-spatial processes. Copyright © 2015. Published by Elsevier B.V.
The Benefits of Virtual Presence in Space (VPS) to Deep Space Missions
NASA Technical Reports Server (NTRS)
De Jong, Eric M.; McGuffie, Barbara A; Levoe, Steven R.; Suzuki, Shigeru; Gorjian, Zareh; Leung, Chris; Cordell, Christopher; Loaiza, Frank; Baldwin, Robert; Craig, Jason;
2006-01-01
Understanding our place in the Universe is one of mankind's greatest scientific and technological challenges and achievements. The invention of the telescope, the Copernican Revolution, the development of Newtonian mechanics, and the Space Age exploration of our solar system provided us with a deeper understanding of our place in the Universe, based on better observations and models. As we approach the end of the first decade of the new millennium, the same quest, to understand our place in the Universe, remains a great challenge. New technologies will enable us to construct and interact with a "Virtual Universe" based on remote and in situ observations of other worlds. As we continue the exploration that began in the last century, we will experience a "Virtual Presence in Space (VPS)" in this century. This paper describes VPS technology, the mechanisms for VPS product distribution and display, the benefits of this technology, and future plans. Deep space mission stereo observations and frames from stereo High Definition Television (HDTV) mission animations are used to illustrate the effectiveness of VPS technology.
STS-131 crew during VR Lab MSS/EVAB SUPT3 Team 91016 training
2009-09-25
JSC2009-E-214340 (25 Sept. 2009) --- NASA astronaut Clayton Anderson, STS-131 mission specialist, uses virtual reality hardware in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to rehearse some of his duties on the upcoming mission to the International Space Station. This type of virtual reality training allows the astronauts to wear a helmet and special gloves while looking at computer displays simulating actual movements around the various locations on the station hardware with which they will be working.
STS-132 crew during their MSS/SIMP EVA3 OPS 4 training
2010-01-28
JSC2010-E-014958 (28 Jan. 2010) --- NASA astronaut Michael Good, STS-132 mission specialist, uses virtual reality hardware in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to rehearse some of his duties on the upcoming mission to the International Space Station. This type of virtual reality training allows the astronauts to wear a helmet and special gloves while looking at computer displays simulating actual movements around the various locations on the station hardware with which they will be working.
STS-132 crew during their MSS/SIMP EVA3 OPS 4 training
2010-01-28
JSC2010-E-014962 (28 Jan. 2010) --- NASA astronauts Michael Good (foreground) and Garrett Reisman, both STS-132 mission specialists, use virtual reality hardware in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to rehearse some of their duties on the upcoming mission to the International Space Station. This type of virtual reality training allows the astronauts to wear a helmet and special gloves while looking at computer displays simulating actual movements around the various locations on the station hardware with which they will be working.
STS-132 crew during their MSS/SIMP EVA3 OPS 4 training
2010-01-28
JSC2010-E-014957 (28 Jan. 2010) --- NASA astronaut Michael Good, STS-132 mission specialist, uses virtual reality hardware in the Space Vehicle Mock-up Facility at NASA's Johnson Space Center to rehearse some of his duties on the upcoming mission to the International Space Station. This type of virtual reality training allows the astronauts to wear a helmet and special gloves while looking at computer displays simulating actual movements around the various locations on the station hardware with which they will be working. David Homan assisted Good.
du Sert, Olivier Percie; Potvin, Stéphane; Lipp, Olivier; Dellazizzo, Laura; Laurelli, Mélanie; Breton, Richard; Lalonde, Pierre; Phraxayavong, Kingsada; O'Connor, Kieron; Pelletier, Jean-François; Boukhalfi, Tarik; Renaud, Patrice; Dumais, Alexandre
2018-02-24
Schizophrenia is a chronic and severe mental illness that poses significant challenges. While many pharmacological and psychosocial interventions are available, many treatment-resistant schizophrenia patients continue to suffer from persistent psychotic symptoms, notably auditory verbal hallucinations (AVH), which are highly disabling. This unmet clinical need requires new innovative treatment options. Recently, a psychological therapy using computerized technology has shown large therapeutic effects on AVH severity by enabling patients to engage in a dialogue with a computerized representation of their voices. These very promising results have been extended by our team using immersive virtual reality (VR). Our study was a 7-week, phase-II, randomized, partial cross-over trial. Nineteen schizophrenia patients with refractory AVH were recruited and randomly allocated to either VR-assisted therapy (VRT) or treatment-as-usual (TAU). TAU consisted of antipsychotic treatment and usual meetings with clinicians. The TAU group then received a delayed 7 weeks of VRT. A follow-up was conducted 3 months after the last VRT session. Changes in psychiatric symptoms, before and after TAU or VRT, were assessed using a linear mixed-effects model. Our findings showed that VRT produced significant improvements in AVH severity, depressive symptoms and quality of life that lasted through the 3-month follow-up period. Consistent with previous research, our results suggest that VRT might be efficacious in reducing AVH-related distress. The therapeutic effects of VRT on the distress associated with the voices were particularly prominent (d=1.2). VRT is a highly novel and promising intervention for refractory AVH in schizophrenia. Copyright © 2018. Published by Elsevier B.V.
Neuro-parity pattern recognition system and method
Gross, Kenneth C.; Singer, Ralph M.; Van Alstine, Rollin G.; Wegerich, Stephan W.; Yue, Yong
2000-01-01
A method and system for monitoring a process and determining its condition. Initial data is sensed, a first set of virtual data is produced by applying a system state analysis to the initial data, a second set of virtual data is produced by applying a neural network analysis to the initial data, and a parity space analysis is applied to the first and second sets of virtual data and also to the initial data to provide a parity space decision about the condition of the process. A logic test can further be applied to produce a further system decision about the state of the process.
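The decision stage combines the sensed data with the two independently derived virtual estimates and checks their mutual consistency. A toy sketch of such a parity-style check (the threshold and aggregation rule are illustrative assumptions, not the patented method):

```python
import numpy as np

def parity_decision(sensed, virtual_a, virtual_b, threshold=0.5):
    # Pairwise parity residuals among the three redundant readings
    readings = np.array([sensed, virtual_a, virtual_b])
    residuals = np.abs(readings[:, None] - readings[None, :])
    disagreement = residuals.sum(axis=1)  # total disagreement per reading
    worst = int(np.argmax(disagreement))
    if disagreement[worst] > 2 * threshold:
        return f"reading {worst} inconsistent with the others"
    return "process nominal"

print(parity_decision(5.0, 5.1, 4.9))  # consistent -> nominal
print(parity_decision(7.0, 5.1, 4.9))  # drifted sensor -> flagged
```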
Direct access inter-process shared memory
Brightwell, Ronald B; Pedretti, Kevin; Hudson, Trammell B
2013-10-22
A technique for directly sharing physical memory between processes executing on processor cores is described. The technique includes loading a plurality of processes into the physical memory for execution on a corresponding plurality of processor cores sharing the physical memory. An address space is mapped to each of the processes by populating a first entry in a top level virtual address table for each of the processes. The address space of each of the processes is cross-mapped into each of the processes by populating one or more subsequent entries of the top level virtual address table with the first entry in the top level virtual address table from other processes.
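At the user level, the observable effect of such cross-mapping is that cooperating processes read and write the same physical pages without copying. A minimal sketch of that effect using Python's standard shared-memory API (Python 3.8+); this demonstrates the behaviour, not the patent's page-table mechanism:

```python
from multiprocessing import Process, shared_memory

def writer(name):
    # Attach to the same underlying physical memory by name
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[:5] = b"hello"
    shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(create=True, size=16)
    p = Process(target=writer, args=(shm.name,))
    p.start()
    p.join()
    print(bytes(shm.buf[:5]))  # b'hello', written by the other process
    shm.close()
    shm.unlink()
```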
Astronauts Prepare for Mission With Virtual Reality Hardware
NASA Technical Reports Server (NTRS)
2001-01-01
Astronauts John M. Grunsfeld (left), STS-109 payload commander, and Nancy J. Currie, mission specialist, use the virtual reality lab at Johnson Space Center to train for upcoming duties aboard the Space Shuttle Columbia. This type of computer interface paired with virtual reality training hardware and software helps to prepare the entire team to perform its duties for the fourth Hubble Space Telescope Servicing mission. The most familiar form of virtual reality technology is some form of headpiece, which fits over your eyes and displays a three dimensional computerized image of another place. Turn your head left and right, and you see what would be to your sides; turn around, and you see what might be sneaking up on you. An important part of the technology is some type of data glove that you use to propel yourself through the virtual world. Currently, the medical community is using the new technologies in four major ways: To see parts of the body more accurately, for study, to make better diagnosis of disease and to plan surgery in more detail; to obtain a more accurate picture of a procedure during surgery; to perform more types of surgery with the most noninvasive, accurate methods possible; and to model interactions among molecules at a molecular level.
Yasuda, Kazuhiro; Muroi, Daisuke; Ohira, Masahiro; Iwata, Hiroyasu
2017-10-01
Unilateral spatial neglect (USN) is defined as an impaired ability to attend to and perceive one side of space; when present, it interferes seriously with daily life. These symptoms can exist for near and far spaces combined or independently, and it is important to provide effective intervention for both near and far space neglect. The purpose of this pilot study was to propose an immersive virtual reality (VR) rehabilitation program, using a head-mounted display, that is able to train both near and far space neglect, and to validate the immediate effect of the VR program on both. Ten USN patients underwent the VR program with a pre-post design and no control. In the virtual environment, we developed visual searching and reaching tasks using an immersive VR system. Behavioral inattention test (BIT) scores obtained pre- and immediately post-VR program were compared. They revealed that far space neglect, but not near space neglect, improved promptly after the VR program. This effect for far space neglect was observed in the cancellation task, but not in the line bisection task. Positive effects of the immersive VR program for far space neglect are suggested by the results of the present pilot study. However, further studies with rigorous designs are needed to validate its clinical effectiveness.
Non-auditory factors affecting urban soundscape evaluation.
Jeon, Jin Yong; Lee, Pyoung Jik; Hong, Joo Young; Cabrera, Densil
2011-12-01
The aim of this study is to characterize urban spaces, which combine landscape, acoustics, and lighting, and to investigate people's perceptions of urban soundscapes through quantitative and qualitative analyses. A general questionnaire survey and soundwalk were performed to investigate soundscape perception in urban spaces. Non-auditory factors (visual image, day lighting, and olfactory perceptions), as well as acoustic comfort, were selected as the main contexts that affect soundscape perception, and context preferences and overall impressions were evaluated using an 11-point numerical scale. For qualitative analysis, a semantic differential test was performed in the form of a social survey, and subjects were also asked to describe their impressions during a soundwalk. The results showed that urban soundscapes can be characterized by soundmarks, and soundscape perceptions are dominated by acoustic comfort, visual images, and day lighting, whereas reverberance in urban spaces does not yield consistent preference judgments. It is posited that the subjective evaluation of reverberance can be replaced by physical measurements. The categories extracted from the qualitative analysis revealed that spatial impressions such as openness and density emerged as some of the contexts of soundscape perception. © 2011 Acoustical Society of America
Auditory cortex of newborn bats is prewired for echolocation.
Kössl, Manfred; Voss, Cornelia; Mora, Emanuel C; Macias, Silvio; Foeller, Elisabeth; Vater, Marianne
2012-04-10
Neuronal computation of object distance from echo delay is an essential task that echolocating bats must master for spatial orientation and the capture of prey. In the dorsal auditory cortex of bats, neurons specifically respond to combinations of short frequency-modulated components of emitted call and delayed echo. These delay-tuned neurons are thought to serve in target range calculation. It is unknown whether neuronal correlates of active space perception are established by experience-dependent plasticity or by innate mechanisms. Here we demonstrate that in the first postnatal week, before onset of echolocation and flight, dorsal auditory cortex already contains functional circuits that calculate distance from the temporal separation of a simulated pulse and echo. This innate cortical implementation of a purely computational processing mechanism for sonar ranging should enhance survival of juvenile bats when they first engage in active echolocation behaviour and flight.
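The computation these delay-tuned neurons are credited with is ordinary sonar ranging: distance equals the speed of sound times the pulse-echo delay, halved because the sound travels out and back. A worked example (standard acoustics, not code from the study):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

def target_range_m(echo_delay_ms):
    # Out-and-back travel: halve the round-trip distance
    return SPEED_OF_SOUND * (echo_delay_ms / 1000.0) / 2.0

for delay in (1, 5, 10, 20):  # milliseconds
    print(f"{delay:>3} ms delay -> {target_range_m(delay):5.2f} m")
```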
Auditory and visual cortex of primates: a comparison of two sensory systems
Rauschecker, Josef P.
2014-01-01
A comparative view of the brain, comparing related functions across species and sensory systems, offers a number of advantages. In particular, it allows separating the formal purpose of a model structure from its implementation in specific brains. Models of auditory cortical processing can be conceived by analogy to the visual cortex, incorporating neural mechanisms that are found in both the visual and auditory systems. Examples of such canonical features on the columnar level are direction selectivity, size/bandwidth selectivity, as well as receptive fields with segregated versus overlapping on- and off-sub-regions. On a larger scale, parallel processing pathways have been envisioned that represent the two main facets of sensory perception: 1) identification of objects and 2) processing of space. Expanding this model in terms of sensorimotor integration and control offers an overarching view of cortical function independent of sensory modality. PMID:25728177
Space and Time Partitioning with Hardware Support for Space Applications
NASA Astrophysics Data System (ADS)
Pinto, S.; Tavares, A.; Montenegro, S.
2016-08-01
Complex and critical systems like airplanes and spacecraft implement a fast-growing number of functions. Historically, those systems were implemented with fully federated architectures, but the number and complexity of functions desired of today's systems led the aerospace industry to follow another strategy. Integrated Modular Avionics (IMA) arose as an attractive approach for consolidation, by combining several applications onto one single generic computing resource. The current approach moves toward the higher integration provided by the space and time partitioning (STP) of system virtualization. The problem is that existing virtualization solutions are not ready to fully provide what the future of aerospace demands: performance, flexibility, safety, and security, while simultaneously containing Size, Weight, Power and Cost (SWaP-C). This work describes a real-time hypervisor for space applications assisted by commercial off-the-shelf (COTS) hardware. ARM TrustZone technology is exploited to implement a secure virtualization solution with low overhead and a low memory footprint. This is demonstrated by running multiple guest partitions of the RODOS operating system on a Xilinx Zynq platform.
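Time partitioning in such IMA-style systems is commonly a fixed cyclic schedule: each partition owns a predetermined window inside a repeating major frame, so no guest can starve another. A toy sketch of the scheduling policy only (window lengths and partition names are invented; a real hypervisor enforces the windows with hardware timers):

```python
# Major frame: fixed time windows (ms) per partition, repeated indefinitely
MAJOR_FRAME = [("guidance", 20), ("telemetry", 10), ("payload", 20)]

def cyclic_schedule(major_frame, n_frames=2):
    t = 0
    for _ in range(n_frames):
        for partition, window_ms in major_frame:
            print(f"t={t:3d} ms: run {partition:<9} for {window_ms} ms")
            t += window_ms

cyclic_schedule(MAJOR_FRAME)
```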
Deficient gaze pattern during virtual multiparty conversation in patients with schizophrenia.
Han, Kiwan; Shin, Jungeun; Yoon, Sang Young; Jang, Dong-Pyo; Kim, Jae-Jin
2014-06-01
Virtual reality has been used to measure abnormal social characteristics, particularly in one-to-one situations. In real life, however, conversations with multiple companions are common and more complicated than two-party conversations. In this study, we explored the features of social behaviors in patients with schizophrenia during virtual multiparty conversations. Twenty-three patients with schizophrenia and 22 healthy controls performed the virtual three-party conversation task, which included leading and aiding avatars, positive- and negative-emotion-laden situations, and listening and speaking phases. Patients showed a significant negative correlation in the listening phase between the amount of gaze on the between-avatar space and reasoning ability, and demonstrated increased gaze on the between-avatar space in the speaking phase that was uncorrelated with attentional ability. These results suggest that patients with schizophrenia have active avoidance of eye contact during three-party conversations. Virtual reality may provide a useful way to measure abnormal social characteristics during multiparty conversations in schizophrenia. Copyright © 2014 Elsevier Ltd. All rights reserved.
Cooper, Natalia; Milella, Ferdinando; Pinto, Carlo; Cant, Iain; White, Mark; Meyer, Georg
2018-01-01
Objective and subjective measures of performance in virtual reality environments increase as more sensory cues are delivered and as simulation fidelity increases. Some cues (colour or sound) are easier to present than others (object weight, vestibular cues) so that substitute cues can be used to enhance informational content in a simulation at the expense of simulation fidelity. This study evaluates how substituting cues in one modality by alternative cues in another modality affects subjective and objective performance measures in a highly immersive virtual reality environment. Participants performed a wheel change in a virtual reality (VR) environment. Auditory, haptic and visual cues, signalling critical events in the simulation, were manipulated in a factorial design. Subjective ratings were recorded via questionnaires. The time taken to complete the task was used as an objective performance measure. The results show that participants performed best and felt an increased sense of immersion and involvement, collectively referred to as ‘presence’, when substitute multimodal sensory feedback was provided. Significant main effects of audio and tactile cues on task performance and on participants' subjective ratings were found. A significant negative relationship was found between the objective (overall completion times) and subjective (ratings of presence) performance measures. We conclude that increasing informational content, even if it disrupts fidelity, enhances performance and user’s overall experience. On this basis we advocate the use of substitute cues in VR environments as an efficient method to enhance performance and user experience. PMID:29390023
Photorealistic virtual anatomy based on Chinese Visible Human data.
Heng, P A; Zhang, S X; Xie, Y M; Wong, T T; Chui, Y P; Cheng, C Y
2006-04-01
Virtual reality-based learning of human anatomy is feasible when a database of 3D organ models is available for the learner to explore, visualize, and dissect in virtual space interactively. In this article, we present our latest work on photorealistic virtual anatomy applications based on the Chinese Visible Human (CVH) data. We have focused on the development of state-of-the-art virtual environments that feature interactive photo-realistic visualization and dissection of virtual anatomical models constructed from ultra-high resolution CVH datasets. We also outline our latest progress in applying these highly accurate virtual and functional organ models to generate realistic look and feel in advanced surgical simulators. © 2006 Wiley-Liss, Inc.
Expedition 11 Training with Krikalev/Henderson
2004-08-12
Expedition 11 Training with Krikalev/Henderson as they continued their training in the Virtual Reality Laboratory in building 9. View includes: Sergei Krikalev and Henderson using the virtual optics to view the International Space Station.
NASA Technical Reports Server (NTRS)
Ross, M. D.
2001-01-01
Safety of astronauts during long-term space exploration is a priority for NASA. This paper describes efforts to produce Earth-based models for providing expert medical advice when unforeseen medical emergencies occur on spacecraft. These models are Virtual Collaborative Clinics that reach into remote sites using telecommunications and emerging stereo-imaging and sensor technologies. © 2001 Elsevier Science Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Twombly, I. Alexander; Smith, Jeffrey; Bruyns, Cynthia; Montgomery, Kevin; Boyle, Richard
2003-01-01
The International Space Station will soon provide an unparalleled research facility for studying the near- and longer-term effects of microgravity on living systems. Using the Space Station Glovebox Facility - a compact, fully contained reach-in environment - astronauts will conduct technically challenging life sciences experiments. Virtual environment technologies are being developed at NASA Ames Research Center to help realize the scientific potential of this unique resource by facilitating the experimental hardware and protocol designs and by assisting the astronauts in training. The Virtual GloveboX (VGX) integrates high-fidelity graphics, force-feedback devices and real-time computer simulation engines to achieve an immersive training environment. Here, we describe the prototype VGX system, the distributed processing architecture used in the simulation environment, and modifications to the visualization pipeline required to accommodate the display configuration.
Mattotti, M; Micholt, L; Braeken, D; Kovačić, D
2015-04-01
One of the strategies to improve cochlear implant technology is to increase the number of electrodes in the neuro-electronic interface. The objective was to characterize in vitro cultures of spiral ganglion neurons (SGN) cultured on surfaces of novel silicon micro-pillar substrates (MPS). SGN from P5 rat pups were cultured on MPS with different micro-pillar widths (1-5.6 μm) and spacings (0.6-15 μm) and were compared with control SGN cultures on glass coverslips by immunocytochemistry and scanning electron microscopy (SEM). Overall, MPS support SGN growth as well as the control glass surfaces. Micro-pillars of a particular size range (1.2-2.4 μm) were optimal in promoting SGN presence, neurite growth and alignment: on micro-pillars of this size, more SGN were present, and neurites were longer and more aligned. SEM pictures highlight how cells on micro-pillars with smaller spacings grow directly on top of the pillars, while at wider spacings (from 3.2 to 15 μm) they grow on the bottom of the surface, losing contact guidance. Further, we found that MPS encourage more monopolar and bipolar SGN morphologies compared to the control condition. Finally, MPS induce the longest neurite growth with minimal interaction of S100+ glial cells. These results indicate that silicon micro-pillar substrates create a permissive environment for the growth of primary auditory neurons, promoting neurite sprouting, and are a promising technology for future high-density three-dimensional CMOS-based auditory neuro-electronic interfaces.
Soft Where? Licensing Struggles in a Virtual World
ERIC Educational Resources Information Center
Ramaswami, Rama
2011-01-01
As virtualization becomes commonplace in higher education, it is clear that the traditional licensing options for software are woefully inadequate. The definitions of who is licensed to use what--and where--are blurring, as users move from physical to virtual spaces and can access software from a variety of devices. In discussing the need for new…
Attitude and Self-Efficacy Change: English Language Learning in Virtual Worlds
ERIC Educational Resources Information Center
Zheng, Dongping; Young, Michael F.; Brewer, Robert A.; Wagner, Manuela
2009-01-01
This study explored affective factors in learning English as a foreign language in a 3D game-like virtual world, Quest Atlantis (QA). Through the use of communication tools (e.g., chat, bulletin board, telegrams, and email), 3D avatars, and 2D webpage navigation tools in virtual space, nonnative English speakers (NNES) co-solved online…
Virtual Team Governance: Addressing the Governance Mechanisms and Virtual Team Performance
NASA Astrophysics Data System (ADS)
Zhan, Yihong; Bai, Yu; Liu, Ziheng
As technology has improved and collaborative software has been developed, virtual teams with members spread across diverse physical locations have become increasingly prominent. Supported by advancing communication technologies, virtual teams can largely transcend time and space, and they have changed the corporate landscape. They are also more complex and dynamic than traditional teams, since their members are geographically dispersed and occupy differing roles. How to achieve good governance of a virtual team, and with it good virtual team performance, is therefore a critical and challenging question: good governance is essential for a high-performance virtual team. This paper explores the performance and governance mechanisms of virtual teams and establishes a model explaining the relationship between governance mechanisms and performance. The focus is on managing virtual teams, and the aim is to identify strategies that help business organizations improve the performance of their virtual teams and meet the objectives of good virtual team management.
Haptic interfaces: Hardware, software and human performance
NASA Technical Reports Server (NTRS)
Srinivasan, Mandayam A.
1995-01-01
Virtual environments are computer-generated synthetic environments with which a human user can interact to perform a wide variety of perceptual and motor tasks. At present, most of the virtual environment systems engage only the visual and auditory senses, and not the haptic sensorimotor system that conveys the sense of touch and feel of objects in the environment. Computer keyboards, mice, and trackballs constitute relatively simple haptic interfaces. Gloves and exoskeletons that track hand postures have more interaction capabilities and are available in the market. Although desktop and wearable force-reflecting devices have been built and implemented in research laboratories, the current capabilities of such devices are quite limited. To realize the full promise of virtual environments and teleoperation of remote systems, further developments of haptic interfaces are critical. In this paper, the status and research needs in human haptics, technology development and interactions between the two are described. In particular, the excellent performance characteristics of Phantom, a haptic interface recently developed at MIT, are highlighted. Realistic sensations of single point of contact interactions with objects of variable geometry (e.g., smooth, textured, polyhedral) and material properties (e.g., friction, impedance) in the context of a variety of tasks (e.g., needle biopsy, switch panels) achieved through this device are described and the associated issues in haptic rendering are discussed.
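Single-point-of-contact haptic rendering of the kind demonstrated with the Phantom is commonly implemented as a spring-damper penalty force at the contact point. The sketch below is illustrative only (Python with numpy; flat-wall geometry and invented gains, not the Phantom's actual control law):

    import numpy as np

    def contact_force(pos, vel, k=800.0, b=2.0):
        # Spring-damper penalty force for a haptic point against the plane z = 0.
        # pos, vel: 3-vectors (m, m/s); k: stiffness (N/m); b: damping (N*s/m).
        penetration = -pos[2]                # depth below the surface
        if penetration <= 0.0:
            return np.zeros(3)               # free space: no force
        normal = np.array([0.0, 0.0, 1.0])
        f_spring = k * penetration * normal  # push the point back out
        f_damp = -b * vel[2] * normal        # resist normal-direction motion
        return f_spring + f_damp

    # 2 mm penetration while still moving into the surface:
    print(contact_force(np.array([0.0, 0.0, -0.002]), np.array([0.0, 0.0, -0.05])))

Friction and texture effects are typically layered on top of this normal force as tangential terms.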
Virtual Images: Going Through the Looking Glass
NASA Astrophysics Data System (ADS)
Mota, Ana Rita; dos Santos, João Lopes
2017-01-01
Virtual images are often introduced through a "geometric" perspective, with little conceptual or qualitative illustrations, hindering a deeper understanding of this physical concept. In this paper, we present two rather simple observations that force a critical reflection on the optical nature of a virtual image. This approach is supported by the reflect-view, a useful device in geometrical optics classes because it allows a visual confrontation between virtual images and real objects that seemingly occupy the same region of space.
Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.
Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo
2013-02-16
We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common for all three tasks within each modality or interaction of processing task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of processing irrelevant speech presumably distracting the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.
Effect of Virtual Reality on Cognition in Stroke Patients
Kim, Bo Ryun; Kim, Lee Suk; Park, Ji Young
2011-01-01
Objective To investigate the effect of virtual reality on the recovery of cognitive impairment in stroke patients. Method Twenty-eight patients (11 males and 17 females, mean age 64.2) with cognitive impairment following stroke were recruited for this study. All patients were randomly assigned to one of two groups, the virtual reality (VR) group (n=15) or the control group (n=13). The VR group received both virtual reality training and computer-based cognitive rehabilitation, whereas the control group received only computer-based cognitive rehabilitation. To measure activities of daily living and cognitive and motor functions, the following assessment tools were used: a computerized neuropsychological test and the Tower of London (TOL) test for cognitive function assessment, the Korean-Modified Barthel Index (K-MBI) for functional status evaluation, and the Motricity Index (MI) for motor function assessment. All recruited patients underwent these evaluations before rehabilitation and four weeks after rehabilitation. Results The VR group showed significant improvement in the K-MMSE, visual and auditory continuous performance tests (CPT), forward digit span test (DST), forward and backward visual span tests (VST), visual and verbal learning tests, TOL, K-MBI, and MI scores, while the control group showed significant improvement in the K-MMSE, forward DST, visual and verbal learning tests, trail-making test-type A, TOL, K-MBI, and MI scores after rehabilitation. The changes in the visual CPT and backward VST in the VR group after rehabilitation were significantly higher than those in the control group. Conclusion Our findings suggest that virtual reality training combined with computer-based cognitive rehabilitation may be of additional benefit for treating cognitive impairment in stroke patients. PMID:22506159
Realigning thunder and lightning: temporal adaptation to spatiotemporally distant events.
Navarra, Jordi; Fernández-Prieto, Irune; Garcia-Morera, Joel
2013-01-01
The brain is able to realign asynchronous signals that approximately coincide in both space and time. Given that many experience-based links between visual and auditory stimuli are established in the absence of spatiotemporal proximity, we investigated whether or not temporal realignment arises in these conditions. Participants received a 3-min exposure to visual and auditory stimuli that were separated by 706 ms and appeared either from the same (Experiment 1) or from different spatial positions (Experiment 2). A simultaneity judgment task (SJ) was administered right afterwards. Temporal realignment between vision and audition was observed, in both Experiment 1 and 2, when comparing the participants' SJs after this exposure phase with those obtained after a baseline exposure to audiovisual synchrony. However, this effect was present only when the visual stimuli preceded the auditory stimuli during the exposure to asynchrony. A similar pattern of results (temporal realignment after exposure to visual-leading asynchrony but not after exposure to auditory-leading asynchrony) was obtained using temporal order judgments (TOJs) instead of SJs (Experiment 3). Taken together, these results suggest that temporal recalibration still occurs for visual and auditory stimuli that fall clearly outside the so-called temporal window for multisensory integration and appear from different spatial positions. This temporal realignment may be modulated by long-term experience with the kind of asynchrony (vision-leading) that we most frequently encounter in the outside world (e.g., while perceiving distant events).
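The comparison of simultaneity judgments before and after exposure is usually summarized by the shift in the point of subjective simultaneity (PSS), obtained by fitting the proportion of "simultaneous" responses across SOAs. A minimal sketch with invented response proportions (Python with scipy's curve_fit; this is not the authors' analysis code):

    import numpy as np
    from scipy.optimize import curve_fit

    def sj_gauss(soa, pss, sigma, amp):
        # Proportion of "simultaneous" responses as a Gaussian over SOA (ms).
        return amp * np.exp(-0.5 * ((soa - pss) / sigma) ** 2)

    # Invented data; negative SOA = auditory leading, positive = visual leading.
    soa = np.array([-300.0, -200.0, -100.0, 0.0, 100.0, 200.0, 300.0])
    p_baseline = np.array([0.05, 0.25, 0.70, 0.95, 0.75, 0.30, 0.08])
    p_adapted  = np.array([0.04, 0.15, 0.55, 0.90, 0.88, 0.45, 0.12])

    (pss_b, _, _), _ = curve_fit(sj_gauss, soa, p_baseline, p0=[0.0, 100.0, 1.0])
    (pss_a, _, _), _ = curve_fit(sj_gauss, soa, p_adapted,  p0=[0.0, 100.0, 1.0])
    # A positive shift indicates realignment toward the visual-leading exposure.
    print(f"PSS shift: {pss_a - pss_b:+.1f} ms")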
Space-Time Coordinate Metadata for the Virtual Observatory Version 1.33
NASA Astrophysics Data System (ADS)
Rots, A. H.
2007-10-01
This document provides a complete design description of the Space-Time Coordinate (STC) metadata for the Virtual Observatory. It explains the various components, highlights some implementation considerations, presents a complete set of UML diagrams, and discusses the relation between STC and certain other parts of the Data Model. Two serializations are discussed: XML Schema (STC-X) and String (STC-S); the former is an integral part of this Recommendation.
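For orientation, an STC-S string serialization of a simple region looks roughly like the line below; this fragment is adapted from the STC-S conventions as an illustration, and the Recommendation itself should be consulted for the exact grammar.

    Circle ICRS 147.6 69.9 0.05

That is, a circular sky region in the ICRS coordinate frame, centered at (RA, Dec) = (147.6, 69.9) degrees with a 0.05-degree radius; the STC-X serialization expresses the same metadata in XML Schema form.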
Distributed computing environments for future space control systems
NASA Technical Reports Server (NTRS)
Viallefont, Pierre
1993-01-01
The aim of this paper is to present the results of a CNES research project on distributed computing systems. The purpose of this research was to study the impact of the use of new computer technologies in the design and development of future space applications. The first part of this study was a state-of-the-art review of distributed computing systems. One of the interesting ideas arising from this review is the concept of a 'virtual computer' allowing the distributed hardware architecture to be hidden from a software application. The 'virtual computer' can improve system performance by adapting the best architecture (addition of computers) to the software application without having to modify its source code. This concept can also decrease the cost and obsolescence of the hardware architecture. In order to verify the feasibility of the 'virtual computer' concept, a prototype representative of a distributed space application is being developed independently of the hardware architecture.
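The 'virtual computer' idea can be caricatured in a few lines: the application submits work through one interface, and the pool of machines behind that interface can grow without touching the application's source code. The sketch below (Python; a single-host process pool standing in for distributed computers, and all names hypothetical) illustrates the concept rather than the CNES prototype:

    from concurrent.futures import ProcessPoolExecutor

    class VirtualComputer:
        # One submission interface; how many workers sit behind it is
        # invisible to the application, so hardware can be added without
        # modifying application source code.
        def __init__(self, workers=4):
            self._pool = ProcessPoolExecutor(max_workers=workers)

        def run(self, fn, *args):
            return self._pool.submit(fn, *args)

    def telemetry_checksum(block):       # stand-in for a space-application task
        return sum(block) % 256

    if __name__ == "__main__":
        vc = VirtualComputer(workers=8)  # scaled up; application code unchanged
        futures = [vc.run(telemetry_checksum, list(range(i, i + 64)))
                   for i in range(10)]
        print([f.result() for f in futures])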
Learning Behaviors and Interaction Patterns among Students in Virtual Learning Worlds
ERIC Educational Resources Information Center
Lin, Chi-Syan; Ma, Jung Tsan; Chen, Yi-Lung; Kuo, Ming-Shiou
2010-01-01
The goal of this study is to investigate how students behave themselves in the virtual learning worlds. The study creates a 3D virtual learning world, entitled the Best Digital Village, and implements a learning program on it. The learning program, the Expo, takes place at the Exhibition Center in the Best Digital Village. The space in the Expo is…
White Paper for Virtual Control Room
NASA Technical Reports Server (NTRS)
Little, William; Tully-Hanson, Benjamin
2015-01-01
The Virtual Control Room (VCR) Proof of Concept (PoC) project is the result of an award given by the Fourth Annual NASA T&I Labs Challenge Project Call. This paper will outline the work done over the award period to build and enhance the capabilities of the Augmented/Virtual Reality (AVR) Lab at NASA's Kennedy Space Center (KSC) to create the VCR.
ERIC Educational Resources Information Center
Flores, Serena; Walters, Nicole McZeal; Kiekel, Jean
2018-01-01
The purpose of this qualitative study was to examine holistic perceptions of teachers in a virtual high school who deliver secondary instruction using an online format. The demand for equitable learning spaces to support both teachers and students have led to the increased demand of virtual schools. The questionnaire administered to eight online…
Virtual Worlds; Real Learning: Design Principles for Engaging Immersive Environments
NASA Technical Reports Server (NTRS)
Wu
2012-01-01
The EMDT master's program at Full Sail University embarked on a small project to use a virtual environment to teach graduate students. The property used for this project has evolved over several iterations and has yielded some basic design principles and pedagogy for virtual spaces. As a result, students are emerging from the program with a better grasp of future possibilities.
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.
1991-01-01
Natural environments have a content, i.e., the objects in them; a geometry, i.e., a pattern of rules for positioning and displacing the objects; and a dynamics, i.e., a system of rules describing the effects of forces acting on the objects. Human interaction with most common natural environments has been optimized by centuries of evolution. Virtual environments created through the human-computer interface similarly have a content, geometry, and dynamics, but the arbitrary character of the computer simulation creating them does not insure that human interaction with these virtual environments will be natural. The interaction, indeed, could be supernatural but it also could be impossible. An important determinant of the comprehensibility of a virtual environment is the correspondence between the environmental frames of reference and those associated with the control of environmental objects. The effects of rotation and displacement of control frames of reference with respect to corresponding environmental references differ depending upon whether perceptual judgement or manual tracking performance is measured. The perceptual effects of frame of reference displacement may be analyzed in terms of distortions in the process of virtualizing the synthetic environment space. The effects of frame of reference displacement and rotation have been studied by asking subjects to estimate exocentric direction in a virtual space.
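The displaced and rotated control frames studied here amount to a rigid transform between control input and displayed motion. A two-dimensional sketch (Python with numpy; angle and offset values are illustrative) of how a hand displacement expressed in the control frame maps into the environment frame:

    import numpy as np

    def to_environment_frame(d_control, theta_deg, offset=(0.0, 0.0)):
        # Map a control-frame displacement into the environment frame.
        # theta_deg: rotation of the control frame relative to the environment;
        # offset: displacement of the control-frame origin.
        t = np.radians(theta_deg)
        rot = np.array([[np.cos(t), -np.sin(t)],
                        [np.sin(t),  np.cos(t)]])
        return rot @ np.asarray(d_control) + np.asarray(offset)

    # With the control frame rotated 90 degrees, a "push right" is displayed
    # as "push up" -- the classic mismatch behind degraded tracking performance.
    print(to_environment_frame([1.0, 0.0], 90.0))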
Accessibility Standards, Illustrated.
ERIC Educational Resources Information Center
Jones, Michael A.
The book sets forth Illinois environmental accessibility standards for disabled persons based on observation and interview data. Photographs, drawings, and detailed floor plans are included in sections dealing with human data (including space requirements for maneuvering wheelchairs, color blindness, incontinence, and severe auditory or visual…
NASA Astrophysics Data System (ADS)
Boffi, Nicholas M.; Jain, Manish; Natan, Amir
2016-02-01
A real-space high-order finite difference method is used to analyze the effect of spherical domain size on the Hartree-Fock (and density functional theory) virtual eigenstates. We show the domain size dependence of both positive and negative virtual eigenvalues of the Hartree-Fock equations for small molecules. We demonstrate that positive states behave like particles in a spherical well and show how they approach zero. For the negative eigenstates, we show that large domains are needed to get the correct eigenvalues. We compare our results to those of Gaussian basis sets and draw some conclusions for real-space, basis-set, and plane-wave calculations.
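The behavior of the positive eigenvalues follows from the particle-in-a-spherical-well picture the authors invoke. For an infinite spherical well of radius R (a textbook result, quoted here for orientation rather than taken from the paper), the eigenvalues are

    E_{nl} = \frac{\hbar^2 z_{nl}^2}{2 m R^2},

where z_{nl} is the n-th zero of the spherical Bessel function j_l; every positive eigenvalue therefore falls toward zero as 1/R^2 when the spherical domain is enlarged.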
Handling knowledge via Concept Maps: a space weather use case
NASA Astrophysics Data System (ADS)
Messerotti, Mauro; Fox, Peter
Concept Maps (Cmaps) are powerful means for coding knowledge in graphical form. As flexible software tools exist to manipulate the knowledge embedded in Cmaps in machine-readable form, such complex entities are suitable candidates not only for the representation of ontologies and semantics in Virtual Observatory (VO) architectures, but also for knowledge handling and knowledge discovery. In this work, we present a use case relevant to space weather applications and elaborate on its possible implementation and advanced use in Semantic Virtual Observatories dedicated to Sun-Earth connections. This analysis was carried out in the framework of the Electronic Geophysical Year (eGY) and represents a collaborative achievement of the eGY Virtual Observatories Working Group.
STS-103 crew perform virtual reality training in building 9N
1999-05-24
S99-05678 (24 May 1999)--- Astronaut Jean-Francois Clervoy (right), STS-103 mission specialist representing the European Space Agency (ESA), "controls" the shuttle's remote manipulator system (RMS) during a simulation using virtual reality type hardware at the Johnson Space Center (JSC). Looking on is astronaut John M. Grunsfeld, mission specialist. Both astronauts are assigned to separate duties supporting NASA's third Hubble Space Telescope (HST) servicing mission. Clervoy will be controlling Discovery's RMS and Grunsfeld is one of four astronauts that will be paired off for a total of three spacewalks on the mission.
Ranky, Richard G; Sivak, Mark L; Lewis, Jeffrey A; Gade, Venkata K; Deutsch, Judith E; Mavroidis, Constantinos
2014-06-05
Cycling has been used in the rehabilitation of individuals with both chronic and post-surgical conditions. Among the challenges with implementing bicycling for rehabilitation is the recruitment of both extremities, in particular when one is weaker or less coordinated. Feedback embedded in virtual reality (VR) augmented cycling may serve to address the requirements for efficacious cycling, specifically recruitment of both extremities and exercising at a high intensity. In this paper a mechatronic rehabilitation bicycling system with an interactive virtual environment, called the Virtual Reality Augmented Cycling Kit (VRACK), is presented. Novel hardware components embedded with sensors were implemented on a stationary exercise bicycle to monitor physiological and biomechanical parameters of participants while immersing them in an augmented reality simulation providing the user with visual, auditory and haptic feedback. This modular and adaptable system attaches to commercially available stationary bicycle systems and interfaces with a personal computer for simulation and data-acquisition processes. The complete bicycle system includes: a) handlebars based on hydraulic pressure sensors; b) pedals that monitor pedal kinematics with an inertial measurement unit (IMU) and forces on the pedals while providing vibratory feedback; c) off-the-shelf electronics to monitor heart rate; and d) customized software for rehabilitation. Bench testing of the handle and pedal systems is presented for calibration of the sensors detecting force and angle. The modular mechatronic kit for exercise bicycles was evaluated in bench and human tests. Bench tests performed on the sensorized handlebars and the instrumented pedals validated the measurement accuracy of these components. Rider tests with the VRACK system focused on the pedal system and successfully monitored kinetic and kinematic parameters of the riders' lower extremities. The VRACK system, a modular virtual reality mechatronic rehabilitation kit, was designed to convert most stationary bicycles into virtual reality (VR) cycles. Preliminary testing of the augmented reality bicycle system successfully demonstrated that a modular mechatronic kit can monitor and record kinetic and kinematic parameters of several riders.
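Pedal kinematics from an embedded IMU of the kind described are typically recovered by fusing the gyroscope and accelerometer. A minimal complementary-filter sketch (Python; the gain, sample period, and readings are invented, and this is not the VRACK firmware):

    import math

    def complementary_filter(angle, gyro_rate, accel_x, accel_z, dt=0.01, alpha=0.98):
        # Fuse gyro rate (rad/s) with a gravity-referenced accelerometer tilt.
        # The gyro integrates smoothly but drifts; the accelerometer is noisy
        # but drift-free; alpha trades the two off. Returns the updated angle (rad).
        accel_angle = math.atan2(accel_x, accel_z)
        return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

    angle = 0.0
    for gyro, ax, az in [(0.5, 0.05, 0.99), (0.5, 0.10, 0.99), (0.5, 0.15, 0.98)]:
        angle = complementary_filter(angle, gyro, ax, az)
    print(f"estimated pedal angle: {math.degrees(angle):.2f} deg")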
2014-09-01
Auditory distance estimation in an open space. In: Glotin H, editor. Soundscape semiotics: localisation and categorisation. Rijeka (Croatia): InTech; 2014. Available at: http://www.intechopen.com/books/soundscape-semiotics-localisation-and-categorisation/auditory-distance-estimation-in-an-open-space. The purpose of the study was to expand our…
3-D Sound for Virtual Reality and Multimedia
NASA Technical Reports Server (NTRS)
Begault, Durand R.; Trejo, Leonard J. (Technical Monitor)
2000-01-01
Technology and applications for the rendering of virtual acoustic spaces are reviewed. Chapter 1 deals with acoustics and psychoacoustics. Chapters 2 and 3 cover cues to spatial hearing and review psychoacoustic literature. Chapter 4 covers signal processing and systems overviews of 3-D sound systems. Chapter 5 covers applications to computer workstations, communication systems, aeronautics and space, and sonic arts. Chapter 6 lists resources. This TM is a reprint of the 1994 book from Academic Press.
STS-103 crew perform virtual reality training in building 9N
1999-05-24
S99-05679 (24 May 1999) --- Astronauts Claude Nicollier (seated), representing the European Space Agency (ESA), and John M. Grunsfeld use virtual reality hardware to rehearse some of their duties for the upcoming STS-103 mission, NASA's third servicing visit to the Earth-orbiting Hubble Space Telescope (HST). The two mission specialists will be joined by five other astronauts, including a second ESA representative, for the STS-103 mission, scheduled for autumn of this year.
STS-111 Training in VR lab with Expedition IV and V Crewmembers
2001-10-18
JSC2001-E-39082 (18 October 2001) --- Cosmonaut Valeri G. Korzun (left), Expedition Five mission commander, and astronaut Carl E. Walz, Expedition Four flight engineer, use the virtual reality lab at the Johnson Space Center (JSC) to train for their duties on the International Space Station (ISS). This type of computer interface, paired with virtual reality training hardware and software, helps prepare the entire team for dealing with ISS elements. Korzun represents Rosaviakosmos.
Reduced event-related current density in the anterior cingulate cortex in schizophrenia.
Mulert, C; Gallinat, J; Pascual-Marqui, R; Dorn, H; Frick, K; Schlattmann, P; Mientus, S; Herrmann, W M; Winterer, G
2001-04-01
There is good evidence from neuroanatomic postmortem and functional imaging studies that dysfunction of the anterior cingulate cortex plays a prominent role in the pathophysiology of schizophrenia. So far, no electrophysiological localization study has been performed to investigate this deficit. We investigated 18 drug-free schizophrenic patients and 25 normal subjects with an auditory choice reaction task and measured event-related activity with 19 electrodes. Estimation of the current source density distribution in Talairach space was performed with low-resolution electromagnetic tomography (LORETA). In normals, we could differentiate between an early event-related potential peak of the N1 (90-100 ms) and a later N1 peak (120-130 ms). Subsequent current-density LORETA analysis in Talairach space showed increased activity in the auditory cortex area during the first N1 peak and increased activity in the anterior cingulate gyrus during the second N1 peak. No activation difference was observed in the auditory cortex between normals and patients with schizophrenia. However, schizophrenics showed significantly less anterior cingulate gyrus activation and slowed reaction times. Our results confirm previous findings of an electrical source in the anterior cingulate and an anterior cingulate dysfunction in schizophrenics. Our data also suggest that anterior cingulate function in schizophrenics is disturbed at a relatively early time point in the information-processing stream (100-140 ms poststimulus). Copyright 2001 Academic Press.
The representation of sound localization cues in the barn owl's inferior colliculus
Singheiser, Martin; Gutfreund, Yoram; Wagner, Hermann
2012-01-01
The barn owl is a well-known model system for studying auditory processing and sound localization. This article reviews the morphological and functional organization, as well as the role of the underlying microcircuits, of the barn owl's inferior colliculus (IC). We focus on the processing of frequency and interaural time (ITD) and level differences (ILD). We first summarize the morphology of the sub-nuclei belonging to the IC and their differentiation by antero- and retrograde labeling and by staining with various antibodies. We then focus on the response properties of neurons in the three major sub-nuclei of IC [core of the central nucleus of the IC (ICCc), lateral shell of the central nucleus of the IC (ICCls), and the external nucleus of the IC (ICX)]. ICCc projects to ICCls, which in turn sends its information to ICX. The responses of neurons in ICCc are sensitive to changes in ITD but not to changes in ILD. The distribution of ITD sensitivity with frequency in ICCc can only partly be explained by optimal coding. We continue with the tuning properties of ICCls neurons, the first station in the midbrain where the ITD and ILD pathways merge after they have split at the level of the cochlear nucleus. The ICCc and ICCls share similar ITD and frequency tuning. By contrast, ICCls shows sigmoidal ILD tuning which is absent in ICCc. Both ICCc and ICCls project to the forebrain, and ICCls also projects to ICX, where space-specific neurons are found. Space-specific neurons exhibit side peak suppression in ITD tuning, bell-shaped ILD tuning, and are broadly tuned to frequency. These neurons respond only to restricted positions of auditory space and form a map of two-dimensional auditory space. Finally, we briefly review major IC features, including multiplication-like computations, correlates of echo suppression, plasticity, and adaptation. PMID:22798945
NASA Astrophysics Data System (ADS)
Allitt, B. J.; Benjaminsen, C.; Morgan, S. J.; Paolini, A. G.
2013-08-01
Objective. Auditory midbrain implants (AMI) provide inadequate frequency discrimination for open-set speech perception. AMIs that can take advantage of the tonotopic laminae of the midbrain may be able to deliver frequency-specific percepts more effectively and lead to enhanced performance. Stimulation strategies that best elicit frequency-specific activity need to be identified. This research examined the characteristic frequency (CF) relationship between regions of the auditory cortex (AC), in response to stimulated regions of the inferior colliculus (IC), comparing monopolar and intralaminar bipolar electrical stimulation. Approach. Electrical stimulation using multi-channel micro-electrode arrays in the IC was used to elicit AC responses in anaesthetized male hooded Wistar rats. The rate of activity in AC regions with CFs within 3 kHz (CF-aligned) and unaligned CFs was used to assess the frequency specificity of responses. Main results. Both monopolar and bipolar IC stimulation led to CF-aligned neural activity in the AC. Altering the distance between the stimulation and reference electrodes in the IC led to changes in both threshold and dynamic range, with bipolar stimulation at 400 µm spacing evoking the lowest AC threshold and widest dynamic range. At saturation, bipolar stimulation elicited a significantly higher mean spike count in the AC at CF-aligned areas than at CF-unaligned areas when electrode spacing was 400 µm or less. Bipolar stimulation using electrode spacing of 400 µm or less also elicited a higher rate of activity in the AC in both CF-aligned and CF-unaligned regions than monopolar stimulation. When electrodes were spaced 600 µm apart, no benefit over monopolar stimulation was observed. Furthermore, monopolar stimulation of the external cortex of the IC resulted in more localized frequency responses than bipolar stimulation when stimulation and reference sites were 200 µm apart. Significance. These findings have implications for the future development of AMIs, as a bipolar stimulation strategy may improve the ability of implant users to discriminate between frequencies.
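The CF-alignment criterion used here (an AC site counts as aligned when its characteristic frequency lies within 3 kHz of the stimulated IC site's CF) reduces to a simple grouping of recording sites. A sketch with invented recordings (Python with numpy; not the authors' analysis code):

    import numpy as np

    # Invented example: CF (kHz) of six AC recording sites and their spike
    # counts in response to stimulating an IC site with CF = 8 kHz.
    ic_cf = 8.0
    ac_cf  = np.array([5.1, 7.2, 8.4, 9.9, 12.5, 16.0])
    spikes = np.array([ 12,  34,  41,  30,    9,    4])

    aligned = np.abs(ac_cf - ic_cf) <= 3.0   # within 3 kHz counts as CF-aligned
    print("mean spikes, CF-aligned:  ", spikes[aligned].mean())
    print("mean spikes, CF-unaligned:", spikes[~aligned].mean())

A larger aligned-versus-unaligned difference is what marks a stimulation configuration as more frequency-specific in this analysis.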
Poganiatz, I; Wagner, H
2001-04-01
Interaural level differences play an important role for elevational sound localization in barn owls. The changes of this cue with sound location are complex and frequency dependent. We exploited the opportunities offered by the virtual space technique to investigate the behavioral relevance of the overall interaural level difference by fixing this parameter in virtual stimuli to a constant value or introducing additional broadband level differences to normal virtual stimuli. Frequency-specific monaural cues in the stimuli were not manipulated. We observed an influence of the broadband interaural level differences on elevational, but not on azimuthal sound localization. Since results obtained with our manipulations explained only part of the variance in elevational turning angle, we conclude that frequency-specific cues are also important. The behavioral consequences of changes of the overall interaural level difference in a virtual sound depended on the combined interaural time difference contained in the stimulus, indicating an indirect influence of temporal cues on elevational sound localization as well. Thus, elevational sound localization is influenced by a combination of many spatial cues including frequency-dependent and temporal features.
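Fixing the overall interaural level difference of a virtual stimulus while leaving frequency-specific monaural cues untouched amounts to applying a single broadband gain per ear, since a flat gain changes level but not spectral shape. A sketch of that manipulation (Python with numpy; the target value and symmetric gain split are illustrative):

    import numpy as np

    def fix_overall_ild(left, right, target_ild_db=0.0):
        # Rescale the two channels so the broadband ILD (dB, right re left)
        # equals target_ild_db. Splitting the correction symmetrically keeps
        # the average binaural level (in dB) constant, and a flat gain leaves
        # each ear's spectral shape -- the monaural cues -- untouched.
        rms_l = np.sqrt(np.mean(left ** 2))
        rms_r = np.sqrt(np.mean(right ** 2))
        adjust = target_ild_db - 20.0 * np.log10(rms_r / rms_l)
        return left * 10.0 ** (-adjust / 40.0), right * 10.0 ** (adjust / 40.0)

    rng = np.random.default_rng(0)
    l, r = rng.standard_normal(48000), 0.5 * rng.standard_normal(48000)
    l2, r2 = fix_overall_ild(l, r, target_ild_db=0.0)
    print(20 * np.log10(np.sqrt(np.mean(r2**2)) / np.sqrt(np.mean(l2**2))))  # ~0 dB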
Virtualizing Resources for the Application Services and Framework Team
NASA Technical Reports Server (NTRS)
Varner, Justin T.; Crawford, Linda K.
2010-01-01
Virtualization is an emerging technology that will undoubtedly have a major impact on the future of Information Technology. It allows for the centralization of resources in an enterprise system without the need to make any changes to the host operating system, file system, or registry. In turn, this significantly reduces cost and administration, and provides a much greater level of security, compatibility, and efficiency. This experiment examined the practicality, methodology, challenges, and benefits of implementing the technology for the Launch Control System (LCS), and more specifically the Application Services (AS) group of the National Aeronautics and Space Administration (NASA) at the Kennedy Space Center (KSC). In order to carry out this experiment, I used several tools from the virtualization company known as VMWare; these programs included VMWare ThinApp, VMWare Workstation, and VMWare ACE. Used in conjunction, these utilities provided the engine necessary to virtualize and deploy applications in a desktop environment on any Windows platform available. The results clearly show that virtualization is a viable technology that can, when implemented properly, dramatically cut costs, enhance stability and security, and provide easier management for administrators.
Zhou, Xiangmin; Zhang, Nan; Sha, Desong; Shen, Yunhe; Tamma, Kumar K; Sweet, Robert
2009-01-01
The inability to render realistic soft-tissue behavior in real time has remained a barrier to the face and content validity of many virtual reality surgical training systems. Biophysically based models are suitable not only for training purposes but also for patient-specific clinical applications, physiological modeling and surgical planning. Among existing approaches to modeling soft tissue for virtual reality surgical simulation, the computer graphics-based approach lacks predictive capability; the mass-spring model (MSM) approach lacks biophysically realistic soft-tissue dynamic behavior; and finite element method (FEM) approaches fail to meet the real-time requirement. The present development stems from the first law of thermodynamics: for a space-discrete dynamic system it directly formulates the space-discrete but time-continuous governing equation, with the material constitutive relation embedded, and results in a discrete mechanics framework that possesses a unique balance between computational effort and physically realistic soft-tissue dynamic behavior. We describe the development of the discrete mechanics framework with focused attention towards a virtual laparoscopic nephrectomy application.
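For orientation, a space-discrete but time-continuous governing equation of this general kind takes the familiar semi-discrete form (a generic illustration; the paper derives its own operators from the thermodynamic first law with the constitutive relation embedded):

    M \ddot{q}(t) + C \dot{q}(t) + K(q)\, q(t) = f(t),

where q collects the discrete nodal positions, M, C, and K(q) are mass, damping, and (possibly nonlinear) stiffness operators carrying the material law, and f gathers the applied tool loads; integrating this system forward in time is what must run at interactive rates.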
Virtual Environments: Issues and Opportunities for Researching Inclusive Educational Practices
NASA Astrophysics Data System (ADS)
Sheehy, Kieron
This chapter argues that virtual environments offer new research areas for those concerned with inclusive education. Further, it proposes that they also present opportunities for developing increasingly inclusive research processes. This chapter considers how researchers might approach researching some of these affordances. It discusses the relationship between specific features of inclusive pedagogy, derived from an international systematic literature review, and the affordances of different forms of virtual characters and environments. Examples are drawn from research in Second LifeTM (SL), virtual tutors and augmented reality. In doing this, the chapter challenges a simplistic separation between physical and virtual worlds and, in the context of inclusion, between the practice of research and the research topic itself. There are a growing number of virtual worlds in which identified educational activities are taking place, or whose activities are being noted for their educational merit. These encompass non-themed worlds such as SL and Active Worlds, game-based worlds such as World of Warcraft and Runescape, and even Club Penguin, a themed virtual world where younger players interact through a variety of penguin-themed environments and activities. It has been argued that these spaces, outside traditional education, are able to offer pedagogical insights (Twining 2009), and these global virtual communities have been identified as creative educational environments (Delwiche 2006; Sheehy 2009). This chapter will explore how researchers might use these spaces to investigate and create inclusive educational experiences for learners. In order to do this the chapter considers three interrelated issues: What is inclusive education? How might inclusive education influence virtual world research? And what might inclusive education look like in virtual worlds?
Business Case Analysis of the Marine Corps Base Pendleton Virtual Smart Grid
2017-06-01
Metering Infrastructure on DOD installations. An examination of five case studies highlights the costs and benefits of the Virtual Smart Grid (VSG) developed by Space and Naval Warfare Systems Command for use at Marine Corps Base Pendleton.