Sample records for auditory virtual environments

  1. Headphone and Head-Mounted Visual Displays for Virtual Environments

    NASA Technical Reports Server (NTRS)

Begault, Durand R.; Ellis, Stephen R.; Wenzel, Elizabeth M.; Trejo, Leonard J. (Technical Monitor)

    1998-01-01

    A realistic auditory environment can contribute to both the overall subjective sense of presence in a virtual display, and to a quantitative metric predicting human performance. Here, the role of audio in a virtual display and the importance of auditory-visual interaction are examined. Conjectures are proposed regarding the effectiveness of audio compared to visual information for creating a sensation of immersion, the frame of reference within a virtual display, and the compensation of visual fidelity by supplying auditory information. Future areas of research are outlined for improving simulations of virtual visual and acoustic spaces. This paper will describe some of the intersensory phenomena that arise during operator interaction within combined visual and auditory virtual environments. Conjectures regarding audio-visual interaction will be proposed.

  2. Evidence for enhanced discrimination of virtual auditory distance among blind listeners using level and direct-to-reverberant cues.

    PubMed

    Kolarik, Andrew J; Cirstea, Silvia; Pardhan, Shahina

    2013-02-01

    Totally blind listeners often demonstrate better than normal capabilities when performing spatial hearing tasks. Accurate representation of three-dimensional auditory space requires the processing of available distance information between the listener and the sound source; however, auditory distance cues vary greatly depending upon the acoustic properties of the environment, and it is not known which distance cues are important to totally blind listeners. Our data show that totally blind listeners display better performance compared to sighted age-matched controls for distance discrimination tasks in anechoic and reverberant virtual rooms simulated using a room-image procedure. Totally blind listeners use two major auditory distance cues to stationary sound sources, level and direct-to-reverberant ratio, more effectively than sighted controls for many of the virtual distances tested. These results show that significant compensation among totally blind listeners for virtual auditory spatial distance leads to benefits across a range of simulated acoustic environments. No significant differences in performance were observed between listeners with partial non-correctable visual losses and sighted controls, suggesting that sensory compensation for virtual distance does not occur for listeners with partial vision loss.

  3. SeaTouch: A Haptic and Auditory Maritime Environment for Non Visual Cognitive Mapping of Blind Sailors

    NASA Astrophysics Data System (ADS)

    Simonnet, Mathieu; Jacobson, Dan; Vieilledent, Stephane; Tisseau, Jacques

Navigating consists of coordinating egocentric and allocentric spatial frames of reference. Virtual environments have afforded researchers in the spatial community tools to investigate the learning of space. The issue of the transfer between virtual and real situations is not trivial. A central question is the role of frames of reference in mediating spatial knowledge transfer to external surroundings, as is the effect of different sensory modalities accessed in simulated and real worlds. This challenges the capacity of blind people to use virtual reality to explore a scene without graphics. The present experiment involves a haptic and auditory maritime virtual environment. In triangulation tasks, we measure systematic errors, and preliminary results show an ability to learn configurational knowledge and to navigate through it without vision. Subjects appeared to take advantage of getting lost in an egocentric “haptic” view in the virtual environment to improve performance in the real environment.

  4. Binaural room simulation

    NASA Technical Reports Server (NTRS)

    Lehnert, H.; Blauert, Jens; Pompetzki, W.

    1991-01-01

In everyday listening, the auditory event perceived by a listener is determined not only by the sound signal that a sound source emits but also by a variety of environmental parameters. These parameters are the position, orientation and directional characteristics of the sound source, the listener's position and orientation, the geometrical and acoustical properties of surfaces which affect the sound field, and the sound propagation properties of the surrounding fluid. A complete set of these parameters can be called an Acoustic Environment. If the auditory event perceived by a listener is manipulated in such a way that the listener is shifted acoustically into a different acoustic environment without moving physically, a Virtual Acoustic Environment has been created. Here, we deal with a special technique to set up nearly arbitrary Virtual Acoustic Environments, the Binaural Room Simulation. The purpose of the Binaural Room Simulation is to compute the binaural impulse response related to a virtual acoustic environment, taking into account all parameters mentioned above. One possible way to describe a Virtual Acoustic Environment is the concept of virtual sound sources. Each of the virtual sources emits a certain signal which is correlated with, but not necessarily identical to, the signal emitted by the direct sound source. If source and receiver are not moving, the acoustic environment becomes a linear time-invariant system. Then, the Binaural Impulse Response from the source to a listener's eardrums contains all relevant auditory information related to the Virtual Acoustic Environment. Listening into the simulated environment can easily be achieved by convolving the Binaural Impulse Response with dry signals and reproducing the results via headphones.
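The convolution step described above can be sketched in a few lines. This is a minimal illustration with a toy, hand-made two-reflection impulse response, not the authors' simulation pipeline; a real BRIR would be measured or computed from the room model.

```python
import numpy as np

def render_binaural(dry, brir_left, brir_right):
    """Place a dry (anechoic) mono signal in a virtual acoustic
    environment by convolving it with a binaural room impulse response."""
    left = np.convolve(dry, brir_left)
    right = np.convolve(dry, brir_right)
    return np.stack([left, right])  # shape: (2, len(dry) + len(brir) - 1)

# Toy example: a click rendered through a sparse BRIR
# (direct sound plus one reflection; ~0.5 ms interaural delay).
fs = 48000
dry = np.zeros(fs // 10)                     # 100 ms dry signal
dry[0] = 1.0                                 # unit click
brir_l = np.zeros(fs // 20); brir_l[0] = 1.0;  brir_l[480] = 0.5
brir_r = np.zeros(fs // 20); brir_r[24] = 0.9; brir_r[500] = 0.4
binaural = render_binaural(dry, brir_l, brir_r)
```

For realistically long impulse responses, FFT-based convolution (e.g. `scipy.signal.fftconvolve`) is the usual choice in practice.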

  5. Interface Design Implications for Recalling the Spatial Configuration of Virtual Auditory Environments

    NASA Astrophysics Data System (ADS)

    McMullen, Kyla A.

Although the concept of virtual spatial audio has existed for almost twenty-five years, only in the past fifteen years has modern computing technology enabled the real-time processing needed to deliver high-precision spatial audio. Until recently, however, the concept of virtually walking through an auditory environment did not exist. Such an interface has numerous potential uses, ranging from enhancing sounds delivered in virtual gaming worlds to conveying spatial locations in real-time emergency response systems. To incorporate this technology into real-world systems, several concerns should be addressed. First, head-related transfer functions (HRTFs) must be inexpensively created for each user. The present study further investigated an HRTF subjective selection procedure previously developed within our research group: users discriminated auditory cues to subjectively select their preferred HRTF from a publicly available database. Next, the issue of training to find virtual sources was addressed. Listeners participated in a localization training experiment using their selected HRTFs. The training procedure was created from the characterization of successful search strategies in prior auditory search experiments, and search accuracy significantly improved after listeners performed it. Next, in the investigation of auditory spatial memory, listeners completed three search-and-recall tasks with differing recall methods. Recall accuracy significantly decreased in tasks that required the storage of sound source configurations in memory. To assess the impact of practical scenarios, the present work examined the performance effects of signal uncertainty, visual augmentation, and different attenuation modeling. Fortunately, source uncertainty did not affect listeners' ability to recall or identify sound sources. The present study also found that the presence of visual reference frames significantly increased recall accuracy, and that the incorporation of drastic attenuation significantly improved environment recall accuracy. Through investigating these concerns, the present study took initial steps toward guiding the design of virtual auditory environments that support spatial configuration recall.

  6. Modulation of Visually Evoked Postural Responses by Contextual Visual, Haptic and Auditory Information: A ‘Virtual Reality Check’

    PubMed Central

    Meyer, Georg F.; Shao, Fei; White, Mark D.; Hopkins, Carl; Robotham, Antony J.

    2013-01-01

    Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPR). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points that can be haptic, visual and auditory reference signals; 2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses for laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high fidelity virtual environments should mimic those seen in real situations we propose to use the observed effect as a robust objective test for presence and fidelity in VR. PMID:23840760

  7. Education about Hallucinations Using an Internet Virtual Reality System: A Qualitative Survey

    ERIC Educational Resources Information Center

    Yellowlees, Peter M.; Cook, James N.

    2006-01-01

    Objective: The authors evaluate an Internet virtual reality technology as an education tool about the hallucinations of psychosis. Method: This is a pilot project using Second Life, an Internet-based virtual reality system, in which a virtual reality environment was constructed to simulate the auditory and visual hallucinations of two patients…

  8. Objective Fidelity Evaluation in Multisensory Virtual Environments: Auditory Cue Fidelity in Flight Simulation

    PubMed Central

    Meyer, Georg F.; Wong, Li Ting; Timson, Emma; Perfect, Philip; White, Mark D.

    2012-01-01

    We argue that objective fidelity evaluation of virtual environments, such as flight simulation, should be human-performance-centred and task-specific rather than measure the match between simulation and physical reality. We show how principled experimental paradigms and behavioural models to quantify human performance in simulated environments that have emerged from research in multisensory perception provide a framework for the objective evaluation of the contribution of individual cues to human performance measures of fidelity. We present three examples in a flight simulation environment as a case study: Experiment 1: Detection and categorisation of auditory and kinematic motion cues; Experiment 2: Performance evaluation in a target-tracking task; Experiment 3: Transferrable learning of auditory motion cues. We show how the contribution of individual cues to human performance can be robustly evaluated for each task and that the contribution is highly task dependent. The same auditory cues that can be discriminated and are optimally integrated in experiment 1, do not contribute to target-tracking performance in an in-flight refuelling simulation without training, experiment 2. In experiment 3, however, we demonstrate that the auditory cue leads to significant, transferrable, performance improvements with training. We conclude that objective fidelity evaluation requires a task-specific analysis of the contribution of individual cues. PMID:22957068

  9. Auditory spatial representations of the world are compressed in blind humans.

    PubMed

    Kolarik, Andrew J; Pardhan, Shahina; Cirstea, Silvia; Moore, Brian C J

    2017-02-01

    Compared to sighted listeners, blind listeners often display enhanced auditory spatial abilities such as localization in azimuth. However, less is known about whether blind humans can accurately judge distance in extrapersonal space using auditory cues alone. Using virtualization techniques, we show that auditory spatial representations of the world beyond the peripersonal space of blind listeners are compressed compared to those for normally sighted controls. Blind participants overestimated the distance to nearby sources and underestimated the distance to remote sound sources, in both reverberant and anechoic environments, and for speech, music, and noise signals. Functions relating judged and actual virtual distance were well fitted by compressive power functions, indicating that the absence of visual information regarding the distance of sound sources may prevent accurate calibration of the distance information provided by auditory signals.
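The compressive power-function fit described above amounts to linear regression in log-log space. A minimal sketch with made-up judged/actual distances (the real data are in the paper):

```python
import numpy as np

# Hypothetical judged vs. actual virtual distances (metres), showing the
# compressive pattern: near sources overestimated, far ones underestimated.
actual = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
judged = np.array([1.3, 2.2, 3.8, 6.5, 10.0])

# Fit judged = k * actual**a by least squares on log-transformed data.
a, log_k = np.polyfit(np.log(actual), np.log(judged), 1)
k = np.exp(log_k)
# An exponent a < 1 indicates compression of auditory distance.
```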

  10. Using Auditory Cues to Perceptually Extract Visual Data in Collaborative, Immersive Big-Data Display Systems

    NASA Astrophysics Data System (ADS)

    Lee, Wendy

The advent of multisensory display systems, such as virtual and augmented reality, has fostered a new relationship between humans and space. Not only can these systems mimic real-world environments, they have the ability to create a new space typology made solely of data. In these spaces, two-dimensional information is displayed in three dimensions, requiring human senses to be used to understand virtual, attention-based elements. Studies in the field of big data have predominantly focused on visual representations and extractions of information, with little focus on sounds. The goal of this research is to evaluate the most efficient methods of perceptually extracting visual data using auditory stimuli in immersive environments. Using Rensselaer's CRAIVE-Lab, a virtual reality space with 360-degree panorama visuals and an array of 128 loudspeakers, participants were asked questions based on complex visual displays using a variety of auditory cues ranging from sine tones to camera shutter sounds. Analysis of the speed and accuracy of participant responses revealed that auditory cues that were more favorable for localization and were positively perceived were best for data extraction and could help create more user-friendly systems in the future.

  11. The effect of contextual auditory stimuli on virtual spatial navigation in patients with focal hemispheric lesions.

    PubMed

    Cogné, Mélanie; Knebel, Jean-François; Klinger, Evelyne; Bindschaedler, Claire; Rapin, Pierre-André; Joseph, Pierre-Alain; Clarke, Stephanie

    2018-01-01

Topographical disorientation is a frequent deficit among patients suffering from brain injury. Spatial navigation can be explored in this population using virtual reality environments, even in the presence of motor or sensory disorders. Furthermore, the positive or negative impact of specific stimuli can be investigated. We studied how auditory stimuli influence the performance of brain-injured patients in a navigational task, using the Virtual Action Planning-Supermarket (VAP-S) with the addition of contextual ("sonar effect" and "name of product") and non-contextual ("periodic randomised noises") auditory stimuli. The study included 22 patients with a first unilateral hemispheric brain lesion and 17 healthy age-matched control subjects. After a software familiarisation, all subjects were tested without auditory stimuli, with a sonar effect or periodic random sounds in a random order, and with the stimulus "name of product". Contextual auditory stimuli improved patient performance more than control group performance. Contextual stimuli were most beneficial for patients with severe executive dysfunction or severe unilateral neglect. These results indicate that contextual auditory stimuli are useful in the assessment of navigational abilities in brain-damaged patients and that they should be used in rehabilitation paradigms.

  12. Scalable metadata environments (MDE): artistically impelled immersive environments for large-scale data exploration

    NASA Astrophysics Data System (ADS)

    West, Ruth G.; Margolis, Todd; Prudhomme, Andrew; Schulze, Jürgen P.; Mostafavi, Iman; Lewis, J. P.; Gossmann, Joachim; Singh, Rajvikram

    2014-02-01

Scalable Metadata Environments (MDEs) are an artistic approach for designing immersive environments for large-scale data exploration in which users interact with data by forming multiscale patterns that they alternatively disrupt and reform. Developed and prototyped as part of an art-science research collaboration, we define an MDE as a 4D virtual environment structured by quantitative and qualitative metadata describing multidimensional data collections. Entire data sets (e.g., 10s of millions of records) can be visualized and sonified at multiple scales and at different levels of detail so they can be explored interactively in real time within MDEs. They are designed to reflect similarities and differences in the underlying data or metadata such that patterns can be visually/aurally sorted in an exploratory fashion by an observer who is not familiar with the details of the mapping from data to visual, auditory or dynamic attributes. While many approaches for visual and auditory data mining exist, MDEs are distinct in that they utilize qualitative and quantitative data and metadata to construct multiple interrelated conceptual coordinate systems. These "regions" function as conceptual lattices for scalable auditory and visual representations within virtual environments computationally driven by multi-GPU CUDA-enabled fluid dynamics systems.
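As an illustration of the sonification idea (not the MDE implementation itself), a toy parameter-mapping sonification might map each data value to the pitch of a short tone; all parameter choices below are arbitrary.

```python
import numpy as np

def sonify(values, fs=22050, note_dur=0.12, f_lo=220.0, f_hi=880.0):
    """Map a 1-D data series onto a sequence of pure tones:
    larger values -> higher pitch (logarithmic frequency mapping)."""
    v = np.asarray(values, dtype=float)
    norm = (v - v.min()) / (v.max() - v.min() + 1e-12)   # scale to [0, 1]
    freqs = f_lo * (f_hi / f_lo) ** norm                 # map to pitch
    n = round(fs * note_dur)                             # samples per note
    t = np.arange(n) / fs
    notes = [np.sin(2 * np.pi * f * t) * np.hanning(n) for f in freqs]
    return np.concatenate(notes)                         # audio at rate fs

audio = sonify([3.0, 1.0, 4.0, 1.0, 5.0])
```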

  13. Virtual acoustics displays

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Fisher, Scott S.; Stone, Philip K.; Foster, Scott H.

    1991-01-01

    The real time acoustic display capabilities are described which were developed for the Virtual Environment Workstation (VIEW) Project at NASA-Ames. The acoustic display is capable of generating localized acoustic cues in real time over headphones. An auditory symbology, a related collection of representational auditory 'objects' or 'icons', can be designed using ACE (Auditory Cue Editor), which links both discrete and continuously varying acoustic parameters with information or events in the display. During a given display scenario, the symbology can be dynamically coordinated in real time with 3-D visual objects, speech, and gestural displays. The types of displays feasible with the system range from simple warnings and alarms to the acoustic representation of multidimensional data or events.

  14. Virtual acoustics displays

    NASA Astrophysics Data System (ADS)

    Wenzel, Elizabeth M.; Fisher, Scott S.; Stone, Philip K.; Foster, Scott H.

    1991-03-01

    The real time acoustic display capabilities are described which were developed for the Virtual Environment Workstation (VIEW) Project at NASA-Ames. The acoustic display is capable of generating localized acoustic cues in real time over headphones. An auditory symbology, a related collection of representational auditory 'objects' or 'icons', can be designed using ACE (Auditory Cue Editor), which links both discrete and continuously varying acoustic parameters with information or events in the display. During a given display scenario, the symbology can be dynamically coordinated in real time with 3-D visual objects, speech, and gestural displays. The types of displays feasible with the system range from simple warnings and alarms to the acoustic representation of multidimensional data or events.

  15. Fusion interfaces for tactical environments: An application of virtual reality technology

    NASA Technical Reports Server (NTRS)

    Haas, Michael W.

    1994-01-01

The term Fusion Interface is defined as a class of interface which integrally incorporates both virtual and nonvirtual concepts and devices across the visual, auditory, and haptic sensory modalities. A fusion interface is a multisensory virtually-augmented synthetic environment. A new facility has been developed within the Human Engineering Division of the Armstrong Laboratory dedicated to exploratory development of fusion interface concepts. This new facility, the Fusion Interfaces for Tactical Environments (FITE) Facility, is a specialized flight simulator enabling efficient concept development through rapid prototyping and direct experience of new fusion concepts. The FITE Facility also supports evaluation of fusion concepts by operational fighter pilots in an air combat environment. The facility is utilized by a multidisciplinary design team composed of human factors engineers, electronics engineers, computer scientists, experimental psychologists, and operational pilots. The FITE computational architecture is composed of twenty-five 80486-based microcomputers operating in real time. The microcomputers generate out-the-window visuals, in-cockpit and head-mounted visuals, localized auditory presentations, and haptic displays on the stick and rudder pedals, as well as executing weapons models, aerodynamic models, and threat models.

  16. Virtual Environments for People Who Are Visually Impaired Integrated into an Orientation and Mobility Program

    ERIC Educational Resources Information Center

    Lahav, Orly; Schloerb, David W.; Srinivasan, Mandayam A.

    2015-01-01

    Introduction: The BlindAid, a virtual system developed for orientation and mobility (O&M) training of people who are blind or have low vision, allows interaction with different virtual components (structures and objects) via auditory and haptic feedback. This research examined if and how the BlindAid that was integrated within an O&M…

  17. Human Behavior Representation in Constructive Simulation (La representation du comportement humain dans la simulation constructive)

    DTIC Science & Technology

    2009-09-01

Environmental Medicine USN United States Navy VAE Virtual Air Environment VACP Visual, Auditory, Cognitive, Psychomotor (demand) VR Virtual Reality ... 0.5 m/s. Another useful approach to capturing leg, trunk, whole body, or movement tasks comes from virtual reality-based training research and ... referred to as semi-automated forces (SAF). From: http://www.sedris.org/glossary.htm#C_grp. Constructive Models Abstractions from the reality to

  18. Reaching nearby sources: comparison between real and virtual sound and visual targets

    PubMed Central

    Parseihian, Gaëtan; Jouffrais, Christophe; Katz, Brian F. G.

    2014-01-01

Sound localization studies over the past century have predominantly been concerned with directional accuracy for far-field sources. Few studies have examined the condition of near-field sources and distance perception. The current study concerns localization and pointing accuracy by examining source positions in the peripersonal space, specifically those associated with a typical tabletop surface. Accuracy is studied with respect to the reporting hand (dominant or secondary) for auditory sources. Results show no effect of the reporting hand, with azimuthal errors increasing equally for the most extreme source positions. Distance errors show a consistent compression toward the center of the reporting area. A second evaluation is carried out comparing auditory and visual stimuli to examine any bias in reporting protocol or biomechanical difficulties. No common bias error was observed between auditory and visual stimuli, indicating that reporting errors were not due to biomechanical limitations in the pointing task. A final evaluation compares real auditory sources and anechoic-condition virtual sources created using binaural rendering. Results showed increased azimuthal errors, with virtual source positions being consistently overestimated to more lateral positions, while no significant distance perception was observed, indicating a deficiency in the binaural rendering condition relative to the real stimuli situation. Various potential reasons for this discrepancy are discussed with several proposals for improving distance perception in peripersonal virtual environments. PMID:25228855

  19. Comparing Science Virtual and Paper-Based Test to Measure Students’ Critical Thinking based on VAK Learning Style Model

    NASA Astrophysics Data System (ADS)

    Rosyidah, T. H.; Firman, H.; Rusyati, L.

    2017-02-01

This research compared virtual and paper-based tests for measuring students' critical thinking based on the VAK (Visual-Auditory-Kinesthetic) learning style model. A quasi-experimental method with a one-group post-test-only design was applied to analyze the data. The sample comprised 40 eighth-grade students at a public junior high school in Bandung. Quantitative data were obtained through 26 questions about living things and environmental sustainability, constructed around the eight elements of critical thinking and provided in both virtual and paper-based form. Analysis of the results shows that scores for visual, auditory, and kinesthetic learners did not differ significantly between the virtual and paper-based tests. In addition, these results were supported by a questionnaire on students' responses to the virtual test, which scored 3.47 on a scale of 4, meaning that students responded positively on all aspects measured: interest, impression, and expectation.

  20. High sensitivity to multisensory conflicts in agoraphobia exhibited by virtual reality.

    PubMed

    Viaud-Delmon, Isabelle; Warusfel, Olivier; Seguelas, Angeline; Rio, Emmanuel; Jouvent, Roland

    2006-10-01

The primary aim of this study was to evaluate the effect of auditory feedback in a VR system planned for clinical use and to address the different factors that should be taken into account in building a bimodal virtual environment (VE). We conducted an experiment in which we assessed spatial performances in agoraphobic patients and normal subjects comparing two kinds of VEs, visual alone (Vis) and auditory-visual (AVis), during separate sessions. Subjects were equipped with a head-mounted display coupled with an electromagnetic sensor system and immersed in a virtual town. Their task was to locate different landmarks and become familiar with the town. In the AVis condition subjects were equipped with the head-mounted display and headphones, which delivered a soundscape updated in real time according to their movement in the virtual town. While general performances remained comparable across the conditions, the reported feeling of immersion was more compelling in the AVis environment. However, patients exhibited more cybersickness symptoms in this condition. The results of this study point to the multisensory integration deficit of agoraphobic patients and underline the need for further research on multimodal VR systems for clinical use.

  1. Spatial Hearing with Incongruent Visual or Auditory Room Cues

    PubMed Central

    Gil-Carvajal, Juan C.; Cubick, Jens; Santurette, Sébastien; Dau, Torsten

    2016-01-01

    In day-to-day life, humans usually perceive the location of sound sources as outside their heads. This externalized auditory spatial perception can be reproduced through headphones by recreating the sound pressure generated by the source at the listener’s eardrums. This requires the acoustical features of the recording environment and listener’s anatomy to be recorded at the listener’s ear canals. Although the resulting auditory images can be indistinguishable from real-world sources, their externalization may be less robust when the playback and recording environments differ. Here we tested whether a mismatch between playback and recording room reduces perceived distance, azimuthal direction, and compactness of the auditory image, and whether this is mostly due to incongruent auditory cues or to expectations generated from the visual impression of the room. Perceived distance ratings decreased significantly when collected in a more reverberant environment than the recording room, whereas azimuthal direction and compactness remained room independent. Moreover, modifying visual room-related cues had no effect on these three attributes, while incongruent auditory room-related cues between the recording and playback room did affect distance perception. Consequently, the external perception of virtual sounds depends on the degree of congruency between the acoustical features of the environment and the stimuli. PMID:27853290

  2. Beyond the real world: attention debates in auditory mismatch negativity.

    PubMed

    Chung, Kyungmi; Park, Jin Young

    2018-04-11

    The aim of this study was to address the potential for the auditory mismatch negativity (aMMN) to be used in applied event-related potential (ERP) studies by determining whether the aMMN would be an attention-dependent ERP component and could be differently modulated across visual tasks or virtual reality (VR) stimuli with different visual properties and visual complexity levels. A total of 80 participants, aged 19-36 years, were assigned to either a reading-task (21 men and 19 women) or a VR-task (22 men and 18 women) group. Two visual-task groups of healthy young adults were matched in age, sex, and handedness. All participants were instructed to focus only on the given visual tasks and ignore auditory change detection. While participants in the reading-task group read text slides, those in the VR-task group viewed three 360° VR videos in a random order and rated how visually complex the given virtual environment was immediately after each VR video ended. Inconsistent with the finding of a partial significant difference in perceived visual complexity in terms of brightness of virtual environments, both visual properties of distance and brightness showed no significant differences in the modulation of aMMN amplitudes. A further analysis was carried out to compare elicited aMMN amplitudes of a typical MMN task and an applied VR task. No significant difference in the aMMN amplitudes was found across the two groups who completed visual tasks with different visual-task demands. In conclusion, the aMMN is a reliable ERP marker of preattentive cognitive processing for auditory deviance detection.

  3. Aurally aided visual search performance in a dynamic environment

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Havig, Paul R.; Watamaniuk, Scott N. J.; Gilkey, Robert H.

    2008-04-01

    Previous research has repeatedly shown that people can find a visual target significantly faster if spatial (3D) auditory displays direct attention to the corresponding spatial location. However, previous research has only examined searches for static (non-moving) targets in static visual environments. Since motion has been shown to affect visual acuity, auditory acuity, and visual search performance, it is important to characterize aurally-aided search performance in environments that contain dynamic (moving) stimuli. In the present study, visual search performance in both static and dynamic environments is investigated with and without 3D auditory cues. Eight participants searched for a single visual target hidden among 15 distracting stimuli. In the baseline audio condition, no auditory cues were provided. In the 3D audio condition, a virtual 3D sound cue originated from the same spatial location as the target. In the static search condition, the target and distractors did not move. In the dynamic search condition, all stimuli moved on various trajectories at 10 deg/s. The results showed a clear benefit of 3D audio that was present in both static and dynamic environments, suggesting that spatial auditory displays continue to be an attractive option for a variety of aircraft, motor vehicle, and command & control applications.
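The 3D audio cues in such experiments are typically rendered with HRTFs. As a rough, hypothetical illustration of the underlying binaural cues, the sketch below steers a tone using interaural time and level differences alone; the parameter values are illustrative and not taken from the study.

```python
import numpy as np

def spatial_cue(azimuth_deg, fs=44100, dur=0.2, freq=1000.0):
    """Crude stereo cue pointing toward a target azimuth (-90..+90 deg,
    positive = right) using interaural time/level differences only.
    Real 3D audio displays would use measured HRTFs instead."""
    t = np.arange(int(fs * dur)) / fs
    tone = np.sin(2 * np.pi * freq * t) * np.hanning(t.size)  # windowed tone
    itd = 0.0007 * np.sin(np.radians(abs(azimuth_deg)))  # up to ~0.7 ms delay
    shift = int(round(itd * fs))
    ild = 10 ** (-(abs(azimuth_deg) / 90.0 * 10.0) / 20.0)   # up to ~10 dB
    near = tone                                          # ear facing the source
    far = np.concatenate([np.zeros(shift), tone])[: tone.size] * ild
    left, right = (far, near) if azimuth_deg >= 0 else (near, far)
    return np.stack([left, right])

cue = spatial_cue(45.0)  # cue a target 45 degrees to the right
```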

  4. Human Machine Interfaces for Teleoperators and Virtual Environments: Conference Held in Santa Barbara, California on 4-9 March 1990.

    DTIC Science & Technology

    1990-03-01

    decided to have three kinds of sessions: invited-paper sessions, panel discussions, and poster sessions. The invited papers were divided into papers...soon followed. Applications in medicine, involving exploration and operation within the human body, are now receiving increased attention. Early... attention toward issues that may be important for the design of auditory interfaces. The importance of appropriate auditory inputs to observers with normal

  5. Multisensory Integration in the Virtual Hand Illusion with Active Movement

    PubMed Central

    Satoh, Satoru; Hachimura, Kozaburo

    2016-01-01

    Improving the sense of immersion is one of the core issues in virtual reality. Perceptual illusions of ownership can be perceived over a virtual body in a multisensory virtual reality environment. The Rubber Hand and Virtual Hand Illusions showed that body ownership can be manipulated by applying suitable visual and tactile stimulation. In this study, we investigate the effects of multisensory integration in the Virtual Hand Illusion with active movement. A virtual xylophone-playing system that can interactively provide synchronous visual, tactile, and auditory stimulation was constructed. We conducted two experiments regarding different movement conditions and different sensory stimulations. Our results demonstrate that multisensory integration with free active movement can improve the sense of immersion in virtual reality. PMID:27847822

  6. Robotic and Virtual Reality BCIs Using Spatial Tactile and Auditory Oddball Paradigms.

    PubMed

    Rutkowski, Tomasz M

    2016-01-01

    The paper reviews nine robotic and virtual reality (VR) brain-computer interface (BCI) projects developed by the author, in collaboration with his graduate students, within the BCI-lab research group during its association with the University of Tsukuba, Japan. The nine novel approaches are discussed as applications of tactile and auditory BCI technologies to direct brain-robot and brain-virtual-reality-agent control interfaces. The BCI user's intentions are decoded from brainwaves in real time using non-invasive electroencephalography (EEG) and translated into thought-based control of a symbiotic robot or virtual reality agent. A communication protocol between the BCI output and the robot or the virtual environment is realized in a symbiotic communication scenario using the user datagram protocol (UDP), which constitutes an Internet of Things (IoT) control scenario. Results obtained from healthy users reproducing simple brain-robot and brain-virtual-agent control tasks in online experiments support the research goal of interacting with robotic devices and virtual reality agents through symbiotic thought-based BCI technologies. An offline BCI classification-accuracy boosting method, using a previously proposed information-geometry-derived approach, is also discussed to further support the reviewed robotic and virtual reality thought-based control paradigms.
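
    The abstract above notes that the BCI output is carried to the robot or VR agent over UDP. A minimal sketch of such a link is below; the command vocabulary, port, and message format are assumptions for illustration, not the authors' actual protocol:

    ```python
    import socket

    # Hypothetical mapping from decoded BCI class labels to agent commands;
    # the real message format used by the BCI-lab projects is not specified here.
    COMMANDS = {0: "forward", 1: "left", 2: "right", 3: "stop"}

    def send_bci_command(class_label, host="127.0.0.1", port=9000):
        """Send one decoded user intention to the robot/VR agent as a UDP datagram."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            message = COMMANDS.get(class_label, "stop").encode("utf-8")
            sock.sendto(message, (host, port))
            return message
        finally:
            sock.close()
    ```

    Because UDP is connectionless, the BCI side can keep emitting decoded intentions at the EEG classification rate without blocking on the agent, which is what makes it a natural fit for the IoT-style control scenario described.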

  8. Rehabilitation Program Integrating Virtual Environment to Improve Orientation and Mobility Skills for People Who Are Blind

    PubMed Central

    Lahav, Orly; Schloerb, David W.; Srinivasan, Mandayam A.

    2014-01-01

    This paper presents the integration of a virtual environment (BlindAid) into an orientation and mobility rehabilitation program as a training aid for people who are blind. BlindAid allows users to interact with different virtual structures and objects through auditory and haptic feedback. This research explores whether and how use of the BlindAid in conjunction with a rehabilitation program can help people who are blind train themselves in familiar and unfamiliar spaces. The study focused on nine congenitally, adventitiously, and newly blind participants during their orientation and mobility rehabilitation program at the Carroll Center for the Blind (Newton, Massachusetts, USA). The research was implemented using virtual environment (VE) exploration tasks and orientation tasks in virtual environments and real spaces. The methodology encompassed both qualitative and quantitative methods, including interviews, a questionnaire, videotape recording, and user computer logs. The results demonstrated that the BlindAid training gave participants additional time to explore the virtual environment systematically. They also helped elucidate several issues concerning the potential strengths of the BlindAid system as a training aid for orientation and mobility for both adults and teenagers who are congenitally, adventitiously, and newly blind. PMID:25284952

  9. Measuring Presence in Virtual Environments

    DTIC Science & Technology

    1994-10-01

    viewpoint to change what they see, or to reposition their head to affect binaural hearing, or to search the environment haptically, they will experience a...increase presence in an alternate environment. For example, a head mounted display that isolates the user from the real world may increase the sense...movement interface devices such as treadmills and trampolines, different gloves, and auditory equipment. Even as a low end technological implementation of

  10. Intelligibility of speech in a virtual 3-D environment.

    PubMed

    MacDonald, Justin A; Balakrishnan, J D; Orosz, Michael D; Karplus, Walter J

    2002-01-01

    In a simulated air traffic control task, improvement in the detection of auditory warnings when using virtual 3-D audio depended on the spatial configuration of the sounds. Performance improved substantially when two of four sources were placed to the left and the remaining two were placed to the right of the participant. Surprisingly, little or no benefits were observed for configurations involving the elevation or transverse (front/back) dimensions of virtual space, suggesting that position on the interaural (left/right) axis is the crucial factor to consider in auditory display design. The relative importance of interaural spacing effects was corroborated in a second, free-field (real space) experiment. Two additional experiments showed that (a) positioning signals to the side of the listener is superior to placing them in front even when two sounds are presented in the same location, and (b) the optimal distance on the interaural axis varies with the amplitude of the sounds. These results are well predicted by the behavior of an ideal observer under the different display conditions. This suggests that guidelines for auditory display design that allow for effective perception of speech information can be developed from an analysis of the physical sound patterns.

  11. The Effects of Attentional Engagement on Route Learning Performance in a Virtual Environment: An Aging Study

    PubMed Central

    Hartmeyer, Steffen; Grzeschik, Ramona; Wolbers, Thomas; Wiener, Jan M.

    2017-01-01

    Route learning is a common navigation task affected by cognitive aging. Here we present a novel experimental paradigm to investigate whether age-related declines in executive control of attention contribute to route learning deficits. A young and an older participant group were repeatedly presented with a route through a virtual maze comprising 12 decision points (DPs) and non-decision points (non-DPs). To investigate attentional engagement with the route learning task, participants had to respond to auditory probes at both DPs and non-DPs. Route knowledge was assessed by showing participants screenshots of landmarks from DPs and non-DPs and asking them to indicate the movement direction required to continue the route. Results demonstrate better performance for DPs than for non-DPs and slower responses to auditory probes at DPs than at non-DPs. As expected, we found slower route learning and slower responses to the auditory probes in the older participant group. Interestingly, differences in response times to the auditory probes between DPs and non-DPs can predict the success of route learning in both age groups and may explain slower knowledge acquisition in the older participant group. PMID:28775689

  12. A training system of orientation and mobility for blind people using acoustic virtual reality.

    PubMed

    Seki, Yoshikazu; Sato, Tetsuji

    2011-02-01

    A new auditory orientation training system was developed for blind people using acoustic virtual reality (VR) based on a head-related transfer function (HRTF) simulation. The present training system can reproduce a virtual training environment for orientation and mobility (O&M) instruction, and the trainee can walk through the virtual training environment safely by listening to sounds such as vehicles, stores, ambient noise, etc., three-dimensionally through headphones. The system can reproduce not only sound sources but also sound reflection and insulation, so that the trainee can learn both sound location and obstacle perception skills. The virtual training environment is described in extensible markup language (XML), and the O&M instructor can edit it easily according to the training curriculum. Evaluation experiments were conducted to test the efficiency of some features of the system. Thirty subjects who had not acquired O&M skills attended the experiments. The subjects were separated into three groups: a no-training group, a virtual-training group using the present system, and a real-training group in real environments. The results suggested that virtual-training can reduce "veering" more than real-training and also can reduce stress as much as real training. The subjective technical and anxiety scores also improved.
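
    The abstract above states that the virtual training environment is described in XML so the O&M instructor can edit it. The published record does not give the schema, so the element and attribute names below are assumptions; the sketch only illustrates how such a scene description might be loaded:

    ```python
    import xml.etree.ElementTree as ET

    # Hypothetical scene description: sound sources plus walls that carry
    # reflection and insulation coefficients, as the system simulates both.
    SCENE_XML = """
    <scene>
      <source id="vehicle" x="3.0" y="0.0" file="car.wav"/>
      <source id="store" x="-2.0" y="5.0" file="door_chime.wav"/>
      <wall x1="0" y1="0" x2="0" y2="10" reflection="0.8" insulation="0.5"/>
    </scene>
    """

    def load_scene(xml_text):
        """Parse sound sources and acoustic walls from a scene description."""
        root = ET.fromstring(xml_text)
        sources = [
            {"id": s.get("id"), "pos": (float(s.get("x")), float(s.get("y")))}
            for s in root.iter("source")
        ]
        walls = [
            {"reflection": float(w.get("reflection")),
             "insulation": float(w.get("insulation"))}
            for w in root.iter("wall")
        ]
        return sources, walls

    sources, walls = load_scene(SCENE_XML)
    ```

    Keeping the scene in a plain text format like this is what lets an instructor rearrange sources and obstacles to match a training curriculum without touching the renderer.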

  13. Virtually-augmented interfaces for tactical aircraft.

    PubMed

    Haas, M W

    1995-05-01

    The term Fusion Interface is defined as a class of interface which integrally incorporates both virtual and non-virtual concepts and devices across the visual, auditory and haptic sensory modalities. A fusion interface is a multi-sensory virtually-augmented synthetic environment. A new facility has been developed within the Human Engineering Division of the Armstrong Laboratory dedicated to exploratory development of fusion-interface concepts. One of the virtual concepts to be investigated in the Fusion Interfaces for Tactical Environments facility (FITE) is the application of EEG and other physiological measures for virtual control of functions within the flight environment. FITE is a specialized flight simulator which allows efficient concept development through the use of rapid prototyping followed by direct experience of new fusion concepts. The FITE facility also supports evaluation of fusion concepts by operational fighter pilots in a high fidelity simulated air combat environment. The facility was utilized by a multi-disciplinary team composed of operational pilots, human-factors engineers, electronics engineers, computer scientists, and experimental psychologists to prototype and evaluate the first multi-sensory, virtually-augmented cockpit. The cockpit employed LCD-based head-down displays, a helmet-mounted display, three-dimensionally localized audio displays, and a haptic display. This paper will endeavor to describe the FITE facility architecture, some of the characteristics of the FITE virtual display and control devices, and the potential application of EEG and other physiological measures within the FITE facility.

  14. Spatial Audio on the Web: Or Why Can't I hear Anything Over There?

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Schlickenmaier, Herbert (Technical Monitor); Johnson, Gerald (Technical Monitor); Frey, Mary Anne (Technical Monitor); Schneider, Victor S. (Technical Monitor); Ahunada, Albert J. (Technical Monitor)

    1997-01-01

    Auditory complexity, freedom of movement, and interactivity are not always possible in a "true" virtual environment, much less in web-based audio. However, many of the perceptual and engineering constraints (and frustrations) that researchers, engineers, and listeners have experienced with virtual audio are relevant to spatial audio on the web. My talk will discuss some of these engineering constraints and their perceptual consequences, and attempt to relate these issues to implementation on the web.

  15. Virtual fixtures as tools to enhance operator performance in telepresence environments

    NASA Astrophysics Data System (ADS)

    Rosenberg, Louis B.

    1993-12-01

    This paper introduces the notion of virtual fixtures for use in telepresence systems and presents an empirical study which demonstrates that such virtual fixtures can greatly enhance operator performance within remote environments. Just as tools and fixtures in the real world can enhance human performance by guiding manual operations, providing localizing references, and reducing the mental processing required to perform a task, virtual fixtures are computer generated percepts overlaid on top of the reflection of a remote workspace which can provide similar benefits. Like a ruler guiding a pencil in a real manipulation task, a virtual fixture overlaid on top of a remote workspace can act to reduce the mental processing required to perform a task, limit the workload of certain sensory modalities, and most of all allow precision and performance to exceed natural human abilities. Because such perceptual overlays are virtual constructions they can be diverse in modality, abstract in form, and custom tailored to individual task or user needs. This study investigates the potential of virtual fixtures by implementing simple combinations of haptic and auditory sensations as perceptual overlays during a standardized telemanipulation task.

  16. Human Machine Interfaces for Teleoperators and Virtual Environments Conference

    NASA Technical Reports Server (NTRS)

    1990-01-01

    In a teleoperator system the human operator senses, moves within, and operates upon a remote or hazardous environment by means of a slave mechanism (a mechanism often referred to as a teleoperator). In a virtual environment system the interactive human machine interface is retained, but the slave mechanism and its environment are replaced by a computer simulation. Video is replaced by computer graphics. The auditory and force sensations imparted to the human operator are similarly computer generated. In contrast to a teleoperator system, where the purpose is to extend the operator's sensorimotor system in a manner that facilitates exploration and manipulation of the physical environment, in a virtual environment system the purpose is to train, inform, alter, or study the human operator, or to modify the state of the computer and the information environment. A major application in which the human operator is the target is that of flight simulation. Although flight simulators have been around for more than a decade, they have had little impact outside aviation, presumably because the application was so specialized and so expensive.

  17. Modality effects in Second Life: the mediating role of social presence and the moderating role of product involvement.

    PubMed

    Jin, Seung-A Annie

    2009-12-01

    The rapid growth of virtual worlds is one of the most recent Internet trends. Some distinguishing features of virtual environments include the employment of avatars and multimodal communication among avatars. This study examined the effects of the modality (text vs. audio) of message presentation on people's evaluation of spokes-avatar credibility and the informational value of promotional messages in avatar-based advertising inside 3D virtual environments. An experiment was conducted in the virtual Apple retail store inside Second Life, the most popular and fastest growing virtual world. The author designed a two-group (textual advertisement vs. auditory advertisement) comparison experiment by manipulating the modality of conveying advertisement messages. The author also created a spokes-avatar that represents a real-life organization (Apple) and presents promotional messages about its innovative product, the iPhone. Data analyses showed that (a) textual modality (vs. auditory modality) resulted in greater source expertise, informational value of the advertisement message, and social presence; and that (b) high product involvement (vs. low product involvement) resulted in a more positive attitude toward the product, higher buying intention, and a higher level of perceived interactivity. In addition to the main effects of product involvement and modality, results showed significant interaction between involvement and modality. Modality effects were stronger for people with low product involvement than for those with high product involvement, thus confirming the moderating effects of product involvement. Results of a path analysis also showed that social presence mediated the effects of modality on the perceived informational value of the advertisement message.

  18. Influence of non-contextual auditory stimuli on navigation in a virtual reality context involving executive functions among patients after stroke.

    PubMed

    Cogné, Mélanie; Violleau, Marie-Hélène; Klinger, Evelyne; Joseph, Pierre-Alain

    2018-01-31

    Topographical disorientation is frequent among patients after a stroke and can be well explored with virtual environments (VEs), which also allow for the addition of stimuli. A previous study did not find any effect of non-contextual auditory stimuli on navigational performance in the virtual action planning-supermarket (VAP-S), which simulates a medium-sized 3D supermarket; however, the perceptual or cognitive load of the sounds used was not high. We investigated how non-contextual auditory stimuli with high load affect navigational performance in the VAP-S for patients who have had a stroke, and whether this performance correlates with dysexecutive disorders. Four kinds of stimuli were considered: sounds from living beings, sounds from supermarket objects, beeping sounds, and names of other products that were not available in the VAP-S. The condition without auditory stimuli was the control. The Groupe de réflexion pour l'évaluation des fonctions exécutives (GREFEX) battery was used to evaluate patients' executive functions. The study included 40 patients who have had a stroke (n=22 right-hemisphere and n=18 left-hemisphere stroke). Patients' navigational performance was decreased under the 4 conditions with non-contextual auditory stimuli (P<0.05), especially for those with dysexecutive disorders. Across the 5 conditions, the lower the performance, the more GREFEX tests were failed. Patients felt significantly more disadvantaged by the sounds from living beings, sounds from supermarket objects, and names of other products than by the beeping sounds (P<0.01). Patients' verbal recall of the collected objects was significantly lower under the condition with names of other products (P<0.001). Left and right brain-damaged patients did not differ in navigational performance in the VAP-S under the 5 auditory conditions. These non-contextual auditory stimuli could be used in neurorehabilitation paradigms to train patients with dysexecutive disorders to inhibit disruptive stimuli. Copyright © 2018 Elsevier Masson SAS. All rights reserved.

  19. Aging and Sensory Substitution in a Virtual Navigation Task.

    PubMed

    Levy-Tzedek, S; Maidenbaum, S; Amedi, A; Lackner, J

    2016-01-01

    Virtual environments are becoming ubiquitous and are used in a variety of contexts, from entertainment to training and rehabilitation. Recently, technology for making them more accessible to blind or visually impaired users has been developed by using sound to represent visual information. The ability of older individuals to interpret these cues has not yet been studied. In this experiment, we studied the effects of age and sensory modality (visual or auditory) on navigation through a virtual maze. We added a layer of complexity by conducting the experiment in a rotating room, in order to test the effect of the spatial bias induced by the rotation on performance. Results from 29 participants showed that with the auditory cues, participants took longer to complete the mazes, followed a longer path through the maze, paused more, and collided with the walls more often than with the visual cues. The older group took longer to complete the mazes, paused more, and collided with the walls more often than the younger group. There was no effect of room rotation on performance, nor were there any significant interactions among age, feedback modality, and room rotation. We conclude that there is a decline in performance with age and that, while navigation with auditory cues is possible even at an old age, it presents more challenges than visual navigation.

  20. Defense applications of the CAVE (CAVE automatic virtual environment)

    NASA Astrophysics Data System (ADS)

    Isabelle, Scott K.; Gilkey, Robert H.; Kenyon, Robert V.; Valentino, George; Flach, John M.; Spenny, Curtis H.; Anderson, Timothy R.

    1997-07-01

    The CAVE is a multi-person, room-sized, high-resolution, 3D video and auditory environment, which can be used to present very immersive virtual environment experiences. This paper describes the CAVE technology and the capability of the CAVE system as originally developed at the Electronic Visualization Laboratory of the University of Illinois at Chicago and as more recently implemented by Wright State University (WSU) in the Armstrong Laboratory at Wright-Patterson Air Force Base (WPAFB). One planned use of the WSU/WPAFB CAVE is research addressing the appropriate design of display and control interfaces for controlling uninhabited aerial vehicles. The WSU/WPAFB CAVE has a number of features that make it well-suited to this work: (1) 360-degree surround, plus floor, high-resolution visual displays, (2) virtual spatialized audio, (3) the ability to integrate real and virtual objects, and (4) rapid and flexible reconfiguration. However, even though the CAVE is likely to have broad utility for military applications, it does have certain limitations that may make it less well-suited to applications that require 'natural' haptic feedback, vestibular stimulation, or an ability to interact with close detailed objects.

  1. Human Exploration of Enclosed Spaces through Echolocation.

    PubMed

    Flanagin, Virginia L; Schörnich, Sven; Schranner, Michael; Hummel, Nadine; Wallmeier, Ludwig; Wahlberg, Magnus; Stephan, Thomas; Wiegrebe, Lutz

    2017-02-08

    Some blind humans have developed echolocation as a method of navigating in space. Echolocation is a truly active sense because subjects analyze echoes of dedicated, self-generated sounds to assess the space around them. Using a special virtual-space technique, we assessed how humans perceive enclosed spaces through echolocation, thereby revealing the interplay between sensory and vocal-motor neural activity while humans perform this task. Sighted subjects were trained to detect small changes in virtual-room size by analyzing real-time generated echoes of their vocalizations. Individual differences in performance were related to the type and number of vocalizations produced. We then asked subjects to estimate virtual-room size with either active or passive sounds while measuring their brain activity with fMRI. Subjects were better at estimating room size when actively vocalizing. This was reflected in the hemodynamic activity of vocal-motor cortices, even after individual motor and sensory components were removed. Activity in these areas also varied with perceived room size, although the vocal-motor output was unchanged. In addition, thalamic and auditory-midbrain activity was correlated with perceived room size, a likely result of top-down auditory pathways for human echolocation comparable with those described in echolocating bats. Our data provide evidence that human echolocation is supported by active sensing, both behaviorally and in terms of brain activity. The neural sensory-motor coupling complements the fundamental acoustic motor-sensory coupling via the environment in echolocation. SIGNIFICANCE STATEMENT Passive listening is the predominant method for examining brain activity during echolocation, the auditory analysis of self-generated sounds. We show that sighted humans perform better when they actively vocalize than during passive listening. Correspondingly, vocal-motor and cerebellar activity is greater during active echolocation than during vocalization alone. Motor and subcortical auditory brain activity covaries with the auditory percept, although motor output is unchanged. Our results reveal behaviorally relevant neural sensory-motor coupling during echolocation. Copyright © 2017 the authors 0270-6474/17/371614-14$15.00/0.
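
    The virtual-space technique described above renders real-time echoes of the subject's own vocalizations, with echo delay encoding room size. A minimal sketch of the core operation under simplifying assumptions (a single first-order reflection rather than a full room impulse response; the sample rate and gain are illustrative):

    ```python
    import numpy as np

    FS = 44100  # sample rate, Hz
    SPEED_OF_SOUND = 343.0  # m/s

    def room_echo(vocalization, wall_distance_m, reflection_gain=0.5):
        """Simulate a first-order echo: delay the self-generated sound by the
        round-trip travel time to a wall and attenuate it. A full virtual-space
        renderer would convolve with a complete room impulse response; this
        sketch keeps only the earliest reflection."""
        delay_s = 2.0 * wall_distance_m / SPEED_OF_SOUND
        delay_samples = int(round(delay_s * FS))
        out = np.zeros(len(vocalization) + delay_samples)
        out[:len(vocalization)] += vocalization                # direct sound
        out[delay_samples:] += reflection_gain * vocalization  # echo
        return out

    click = np.zeros(64)
    click[0] = 1.0  # an idealized tongue click
    echo = room_echo(click, wall_distance_m=2.0)
    ```

    Growing the wall distance shifts the echo later in the output, which is exactly the cue the trained subjects used to detect small changes in virtual-room size.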

  2. The effect of extended sensory range via the EyeCane sensory substitution device on the characteristics of visionless virtual navigation.

    PubMed

    Maidenbaum, Shachar; Levy-Tzedek, Shelly; Chebat, Daniel Robert; Namer-Furstenberg, Rinat; Amedi, Amir

    2014-01-01

    Mobility training programs that help the blind navigate through unknown places with a White-Cane significantly improve their mobility. However, what is the effect of new assistive technologies, offering more information to the blind user, on the underlying premises of these programs, such as navigation patterns? We developed the virtual-EyeCane, a minimalistic sensory substitution device translating single-point distance into auditory cues identical to the EyeCane's in the real world. We compared performance in virtual environments when using the virtual-EyeCane, a virtual-White-Cane, no device, and visual navigation. We show that the characteristics of virtual-EyeCane navigation differ from navigation with a virtual-White-Cane or no device, that virtual-EyeCane users complete more levels successfully, taking shorter paths and with fewer collisions than these groups, and we demonstrate the relative similarity of virtual-EyeCane and visual navigation patterns. This suggests that additional distance information indeed changes navigation patterns from virtual-White-Cane use and brings them closer to visual navigation.
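
    The core of the device described above is a translation of a single-point distance reading into an auditory cue. The published record does not give the EyeCane's actual mapping, so the scheme below (closer obstacle gives a faster, higher-pitched beep; all ranges and constants assumed) is purely illustrative:

    ```python
    def distance_to_cue(distance_m, max_range_m=5.0):
        """Map a single-point distance reading to auditory cue parameters.
        Closer obstacles -> faster repetition and higher pitch. The actual
        EyeCane mapping is not reproduced here; this is an assumed scheme."""
        d = min(max(distance_m, 0.0), max_range_m)
        proximity = 1.0 - d / max_range_m      # 1.0 = touching, 0.0 = out of range
        rate_hz = 1.0 + 9.0 * proximity        # 1-10 beeps per second
        pitch_hz = 220.0 + 660.0 * proximity   # 220-880 Hz
        return rate_hz, pitch_hz
    ```

    A mapping of this kind is what extends the sensed range beyond a physical cane's reach: the same cue vocabulary covers obstacles several meters away, which plausibly underlies the shift toward more vision-like navigation patterns reported above.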

  3. The many facets of auditory display

    NASA Technical Reports Server (NTRS)

    Blattner, Meera M.

    1995-01-01

    In this presentation we will examine some of the ways sound can be used in a virtual world. We make the case that many different types of audio experience are available to us. A full range of audio experiences include: music, speech, real-world sounds, auditory displays, and auditory cues or messages. The technology of recreating real-world sounds through physical modeling has advanced in the past few years allowing better simulation of virtual worlds. Three-dimensional audio has further enriched our sensory experiences.

  4. Selective memory retrieval of auditory what and auditory where involves the ventrolateral prefrontal cortex.

    PubMed

    Kostopoulos, Penelope; Petrides, Michael

    2016-02-16

    There is evidence from the visual, verbal, and tactile memory domains that the midventrolateral prefrontal cortex plays a critical role in the top-down modulation of activity within posterior cortical areas for the selective retrieval of specific aspects of a memorized experience, a functional process often referred to as active controlled retrieval. In the present functional neuroimaging study, we explore the neural bases of active retrieval for auditory nonverbal information, about which almost nothing is known. Human participants were scanned with functional magnetic resonance imaging (fMRI) in a task in which they were presented with short melodies from different locations in a simulated virtual acoustic environment within the scanner and were then instructed to retrieve selectively either the particular melody presented or its location. There were significant activity increases specifically within the midventrolateral prefrontal region during the selective retrieval of nonverbal auditory information. During the selective retrieval of information from auditory memory, the right midventrolateral prefrontal region increased its interaction with the auditory temporal region and the inferior parietal lobule in the right hemisphere. These findings provide evidence that the midventrolateral prefrontal cortical region interacts with specific posterior cortical areas in the human cerebral cortex for the selective retrieval of object and location features of an auditory memory experience.

  5. Perceptual effects in auralization of virtual rooms

    NASA Astrophysics Data System (ADS)

    Kleiner, Mendel; Larsson, Pontus; Vastfjall, Daniel; Torres, Rendell R.

    2002-05-01

    By using various types of binaural simulation (or "auralization") of physical environments, it is now possible to study basic perceptual issues relevant to room acoustics, as well as to simulate the acoustic conditions found in concert halls and other auditoria. Binaural simulation of physical spaces in general is also important for virtual reality systems. This presentation will begin with an overview of the issues encountered in the auralization of rooms and other environments. We will then discuss the influence of various approximations in room modeling, in particular edge and surface scattering, on the perceived room response. Finally, we will discuss cross-modal effects, such as the influence of visual cues on the perception of auditory cues, and the influence of cross-modal effects on the judgment of "perceived presence" and the rating of room acoustic quality.

  6. Neural Correlates of Sound Localization in Complex Acoustic Environments

    PubMed Central

    Zündorf, Ida C.; Lewald, Jörg; Karnath, Hans-Otto

    2013-01-01

    Listening to and understanding people in a “cocktail-party situation” is a remarkable feature of the human auditory system. Here we investigated the neural correlates of the ability to localize a particular sound among others in an acoustically cluttered environment with healthy subjects. In a sound localization task, five different natural sounds were presented from five virtual spatial locations during functional magnetic resonance imaging (fMRI). Activity related to auditory stream segregation was revealed in the posterior superior temporal gyrus bilaterally, the anterior insula, the supplementary motor area, and a frontoparietal network. Moreover, the results indicated critical roles for the left planum temporale in extracting the sound of interest among acoustic distracters and for the precuneus in orienting spatial attention to the target sound. We hypothesized that the left-sided lateralization of the planum temporale activation is related to the higher specialization of the left hemisphere for the analysis of spectrotemporal sound features. Furthermore, the precuneus, a brain area known to be involved in the computation of spatial coordinates across diverse frames of reference for reaching to objects, also seems to be a crucial area for accurately determining the locations of auditory targets in an acoustically complex scene of multiple sound sources. The precuneus thus may not only be involved in visuo-motor processes but may also subserve related functions in the auditory modality. PMID:23691185

  7. Psychophysical evaluation of three-dimensional auditory displays

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.

    1991-01-01

    Work during this reporting period included the completion of our research on the use of principal components analysis (PCA) to model the acoustical head related transfer functions (HRTFs) that are used to synthesize virtual sources for three dimensional auditory displays. In addition, a series of studies was initiated on the perceptual errors made by listeners when localizing free-field and virtual sources. Previous research has revealed that under certain conditions these perceptual errors, often called 'confusions' or 'reversals', are both large and frequent, thus seriously compromising the utility of a 3-D virtual auditory display. The long-range goal of our work in this area is to elucidate the sources of the confusions and to develop signal-processing strategies to reduce or eliminate them.
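PCA modeling of HRTFs, as described in this abstract, amounts to finding a small basis of spectral shapes whose weighted sums approximate the measured transfer functions. A minimal sketch using stand-in random data (real log-magnitude HRTF measurements would replace the synthetic matrix; the sizes and component count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in data: one row per source direction, one column per frequency
# bin. Multiplying by a fixed mixing matrix adds correlations across
# frequency, loosely mimicking structured spectra.
n_directions, n_freqs = 72, 128
hrtfs = rng.standard_normal((n_directions, n_freqs)) \
        @ rng.standard_normal((n_freqs, n_freqs)) * 0.1

mean_spectrum = hrtfs.mean(axis=0)
centered = hrtfs - mean_spectrum

# PCA via SVD: rows of vt are the basis spectra (principal components).
u, s, vt = np.linalg.svd(centered, full_matrices=False)
k = 5                                # keep a handful of components
weights = u[:, :k] * s[:k]           # per-direction weights
approx = mean_spectrum + weights @ vt[:k]

explained = (s[:k] ** 2).sum() / (s ** 2).sum()
print(f"variance captured by {k} components: {explained:.1%}")
```

Each virtual source direction is then described by just `k` weights plus the shared basis, rather than a full spectrum per direction.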

  8. Three-dimensional virtual acoustic displays

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.

    1991-01-01

    The development of an alternative medium for displaying information in complex human-machine interfaces is described. The 3-D virtual acoustic display is a means for accurately transferring information to a human operator using the auditory modality; it combines directional and semantic characteristics to form naturalistic representations of dynamic objects and events in remotely sensed or simulated environments. Although the technology can stand alone, it is envisioned as a component of a larger multisensory environment and will no doubt find its greatest utility in that context. The general philosophy in the design of the display has been that the development of advanced computer interfaces should be driven first by an understanding of human perceptual requirements, and later by technological capabilities or constraints. In expanding on this view, current and potential uses of virtual acoustic displays are addressed, such displays are characterized, recent approaches to their implementation and application are reviewed, the research project at NASA-Ames is described in detail, and finally some critical research issues for the future are outlined.

  9. The contribution of virtual reality to the diagnosis of spatial navigation disorders and to the study of the role of navigational aids: A systematic literature review.

    PubMed

    Cogné, M; Taillade, M; N'Kaoua, B; Tarruella, A; Klinger, E; Larrue, F; Sauzéon, H; Joseph, P-A; Sorita, E

    2017-06-01

    Spatial navigation, which involves higher cognitive functions, is frequently implemented in daily activities, and is critical to the participation of human beings in mainstream environments. Virtual reality is an expanding tool, which enables on one hand the assessment of the cognitive functions involved in spatial navigation, and on the other the rehabilitation of patients with spatial navigation difficulties. Topographical disorientation is a frequent deficit among patients suffering from neurological diseases. The use of virtual environments enables the information incorporated into the virtual environment to be manipulated empirically. But the impact of these manipulations seems to differ according to their nature (quantity, occurrence, and characteristics of the stimuli) and the target population. We performed a systematic review of research on virtual spatial navigation covering the period from 2005 to 2015. We focused first on the contribution of virtual spatial navigation for patients with brain injury or schizophrenia, or in the context of ageing and dementia, and then on the impact of visual or auditory stimuli on virtual spatial navigation. On the basis of 6521 abstracts identified in 2 databases (Pubmed and Scopus) with the keywords "navigation" and "virtual", 1103 abstracts were selected by adding the keywords "ageing", "dementia", "brain injury", "stroke", "schizophrenia", "aid", "help", "stimulus" and "cue"; among these, 63 articles were included in the present qualitative analysis. Unlike pencil-and-paper tests, virtual reality is useful to assess large-scale navigation strategies in patients with brain injury or schizophrenia, or in the context of ageing and dementia. Better knowledge about both the impact of the different aids and the cognitive processes involved is essential for the use of aids in neurorehabilitation. Copyright © 2016. Published by Elsevier Masson SAS.

  10. Auditory Distance Coding in Rabbit Midbrain Neurons and Human Perception: Monaural Amplitude Modulation Depth as a Cue

    PubMed Central

    Zahorik, Pavel; Carney, Laurel H.; Bishop, Brian B.; Kuwada, Shigeyuki

    2015-01-01

    Mechanisms underlying sound source distance localization are not well understood. Here we tested the hypothesis that a novel mechanism can create monaural distance sensitivity: a combination of auditory midbrain neurons' sensitivity to amplitude modulation (AM) depth and distance-dependent loss of AM in reverberation. We used virtual auditory space (VAS) methods for sounds at various distances in anechoic and reverberant environments. Stimulus level was constant across distance. With increasing modulation depth, some rabbit inferior colliculus neurons increased firing rates whereas others decreased. These neurons exhibited monotonic relationships between firing rates and distance for monaurally presented noise when two conditions were met: (1) the sound had AM, and (2) the environment was reverberant. The firing rates as a function of distance remained approximately constant without AM in either environment and, in an anechoic condition, even with AM. We corroborated this finding by reproducing the distance sensitivity using a neural model. We also conducted a human psychophysical study using similar methods. Normal-hearing listeners reported perceived distance in response to monaural, one-octave noise sounds centered at 4 kHz, presented at distances of 35–200 cm. We found parallels between the rabbit neural and human responses. In both, sound distance could be discriminated only if the monaural sound in reverberation had AM. These observations support the hypothesis. When other cues are available (e.g., in binaural hearing), how much the auditory system actually uses the AM as a distance cue remains to be determined. PMID:25834060
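The monaural cue at issue here, loss of AM depth in reverberation, can be illustrated with a toy simulation. The sampling rate, modulator, and synthetic exponential-decay impulse response below are invented for illustration and do not reproduce the paper's VAS stimuli:

```python
import numpy as np
from scipy.signal import hilbert, fftconvolve

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(1)

# Fully amplitude-modulated noise carrier (4 Hz modulator).
carrier = rng.standard_normal(t.size)
am = (1 + np.sin(2 * np.pi * 4 * t)) * carrier

def mod_depth(x, fs, fmod=4.0):
    """Estimate AM depth as (max - min) / (max + min) of the smoothed
    Hilbert envelope."""
    env = np.abs(hilbert(x))
    # Smooth with a window shorter than the modulation period.
    win = int(fs / (4 * fmod))
    env = np.convolve(env, np.ones(win) / win, mode="same")
    e = env[win:-win]  # discard edge effects
    return (e.max() - e.min()) / (e.max() + e.min())

# Toy "reverberant" impulse response: a direct impulse followed by an
# exponentially decaying noise tail.
n_tail = int(0.4 * fs)
ir = rng.standard_normal(n_tail) * np.exp(-np.arange(n_tail) / (0.1 * fs))
ir[0] = 1.0
reverberant = fftconvolve(am, ir)[: am.size]

# Reverberation smears the envelope, reducing the measured AM depth.
print(mod_depth(am, fs) > mod_depth(reverberant, fs))
```

In the paper's framework, greater simulated distance in a reverberant room raises the reverberant-to-direct energy ratio, which reduces the received AM depth in just this way, giving a purely monaural distance cue.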

  11. Auditory and visual cueing modulate cycling speed of older adults and persons with Parkinson's disease in a Virtual Cycling (V-Cycle) system.

    PubMed

    Gallagher, Rosemary; Damodaran, Harish; Werner, William G; Powell, Wendy; Deutsch, Judith E

    2016-08-19

    Evidence based virtual environments (VEs) that incorporate compensatory strategies such as cueing may change motor behavior and increase exercise intensity while also being engaging and motivating. The purpose of this study was to determine if persons with Parkinson's disease and age-matched healthy adults responded to auditory and visual cueing embedded in a bicycling VE as a method to increase exercise intensity. We tested two groups of participants, persons with Parkinson's disease (PD) (n = 15) and age-matched healthy adults (n = 13), as they cycled on a stationary bicycle while interacting with a VE. Participants cycled under two conditions: auditory cueing (provided by a metronome) and visual cueing (represented as central road markers in the VE). The auditory condition had four trials in which auditory cues or the VE were presented alone or in combination. The visual condition had five trials in which the VE and visual cue rate presentation was manipulated. Data were analyzed by condition using factorial RMANOVAs with planned t-tests corrected for multiple comparisons. There were no differences in pedaling rates between groups for both the auditory and visual cueing conditions. Persons with PD increased their pedaling rate in the auditory (F = 4.78, p = 0.029) and visual cueing (F = 26.48, p < 0.001) conditions. Age-matched healthy adults also increased their pedaling rate in the auditory (F = 24.72, p < 0.001) and visual cueing (F = 40.69, p < 0.001) conditions. Trial-to-trial comparisons in the visual condition in age-matched healthy adults showed a step-wise increase in pedaling rate (p = 0.003 to p < 0.001). In contrast, persons with PD increased their pedaling rate only when explicitly instructed to attend to the visual cues (p < 0.001). An evidence-based cycling VE can modify pedaling rate in persons with PD and age-matched healthy adults. Persons with PD required attention directed to the visual cues in order to obtain an increase in cycling intensity. The combination of the VE and auditory cues was neither additive nor interfering. These data serve as preliminary evidence that embedding auditory and visual cues in a VE to alter cycling speed is a method of increasing exercise intensity that may promote fitness.

  12. Approaches to the study of neural coding of sound source location and sound envelope in real environments

    PubMed Central

    Kuwada, Shigeyuki; Bishop, Brian; Kim, Duck O.

    2012-01-01

    The major functions of the auditory system are recognition (what is the sound) and localization (where is the sound). Although each of these has received considerable attention, rarely are they studied in combination. Furthermore, the stimuli used in the bulk of studies did not represent sound location in real environments and ignored the effects of reverberation. Another ignored dimension is the distance of a sound source. Finally, there is a scarcity of studies conducted in unanesthetized animals. We illustrate a set of efficient methods that overcome these shortcomings. We use the virtual auditory space (VAS) method to efficiently present sounds at different azimuths, different distances and in different environments. Additionally, this method allows for efficient switching between binaural and monaural stimulation and alteration of acoustic cues singly or in combination to elucidate neural mechanisms underlying localization and recognition. Such procedures cannot be performed with real sound field stimulation. Our research is designed to address the following questions: Are IC neurons specialized to process what and where auditory information? How do reverberation and distance of the sound source affect this processing? How do IC neurons represent sound source distance? Are neural mechanisms underlying envelope processing binaural or monaural? PMID:22754505

  13. Haptic interfaces: Hardware, software and human performance

    NASA Technical Reports Server (NTRS)

    Srinivasan, Mandayam A.

    1995-01-01

    Virtual environments are computer-generated synthetic environments with which a human user can interact to perform a wide variety of perceptual and motor tasks. At present, most of the virtual environment systems engage only the visual and auditory senses, and not the haptic sensorimotor system that conveys the sense of touch and feel of objects in the environment. Computer keyboards, mice, and trackballs constitute relatively simple haptic interfaces. Gloves and exoskeletons that track hand postures have more interaction capabilities and are available in the market. Although desktop and wearable force-reflecting devices have been built and implemented in research laboratories, the current capabilities of such devices are quite limited. To realize the full promise of virtual environments and teleoperation of remote systems, further developments of haptic interfaces are critical. In this paper, the status and research needs in human haptics, technology development and interactions between the two are described. In particular, the excellent performance characteristics of Phantom, a haptic interface recently developed at MIT, are highlighted. Realistic sensations of single point of contact interactions with objects of variable geometry (e.g., smooth, textured, polyhedral) and material properties (e.g., friction, impedance) in the context of a variety of tasks (e.g., needle biopsy, switch panels) achieved through this device are described and the associated issues in haptic rendering are discussed.

  14. Using virtual reality to distinguish subjects with multiple- but not single-domain amnestic mild cognitive impairment from normal elderly subjects.

    PubMed

    Mohammadi, Alireza; Kargar, Mahmoud; Hesami, Ehsan

    2018-03-01

    Spatial disorientation is a hallmark of amnestic mild cognitive impairment (aMCI) and Alzheimer's disease. Our aim was to use virtual reality to determine the allocentric and egocentric memory deficits of subjects with single-domain aMCI (aMCIsd) and multiple-domain aMCI (aMCImd). For this purpose, we introduced an advanced virtual reality navigation task (VRNT) to distinguish these deficits in mild Alzheimer's disease (miAD), aMCIsd, and aMCImd. The VRNT performance of 110 subjects, including 20 with miAD, 30 with pure aMCIsd, 30 with pure aMCImd, and 30 cognitively normal controls, was compared. Our newly developed VRNT consists of a virtual neighbourhood (allocentric memory) and virtual maze (egocentric memory). Verbal and visuospatial memory impairments were also examined with the Rey Auditory-Verbal Learning Test and Rey-Osterrieth Complex Figure Test, respectively. We found that miAD and aMCImd subjects were impaired in both allocentric and egocentric memory, but aMCIsd subjects performed similarly to the normal controls on both tasks. The miAD, aMCImd, and aMCIsd subjects performed worse on finding the target or required more time in the virtual environment than the aMCImd, aMCIsd, and normal controls, respectively. Our findings indicated that the aMCImd and miAD subjects, as well as the aMCIsd subjects, were more impaired in egocentric orientation than allocentric orientation. We concluded that VRNT can distinguish aMCImd subjects, but not aMCIsd subjects, from normal elderly subjects. The VRNT, along with the Rey Auditory-Verbal Learning Test and Rey-Osterrieth Complex Figure Test, can be used as a valid diagnostic tool for properly distinguishing different forms of aMCI. © 2018 Japanese Psychogeriatric Society.

  15. The effects of substitute multisensory feedback on task performance and the sense of presence in a virtual reality environment

    PubMed Central

    Milella, Ferdinando; Pinto, Carlo; Cant, Iain; White, Mark; Meyer, Georg

    2018-01-01

    Objective and subjective measures of performance in virtual reality environments increase as more sensory cues are delivered and as simulation fidelity increases. Some cues (colour or sound) are easier to present than others (object weight, vestibular cues) so that substitute cues can be used to enhance informational content in a simulation at the expense of simulation fidelity. This study evaluates how substituting cues in one modality by alternative cues in another modality affects subjective and objective performance measures in a highly immersive virtual reality environment. Participants performed a wheel change in a virtual reality (VR) environment. Auditory, haptic and visual cues, signalling critical events in the simulation, were manipulated in a factorial design. Subjective ratings were recorded via questionnaires. The time taken to complete the task was used as an objective performance measure. The results show that participants performed best and felt an increased sense of immersion and involvement, collectively referred to as ‘presence’, when substitute multimodal sensory feedback was provided. Significant main effects of audio and tactile cues on task performance and on participants' subjective ratings were found. A significant negative relationship was found between the objective (overall completion times) and subjective (ratings of presence) performance measures. We conclude that increasing informational content, even if it disrupts fidelity, enhances performance and user’s overall experience. On this basis we advocate the use of substitute cues in VR environments as an efficient method to enhance performance and user experience. PMID:29390023

  16. The effects of substitute multisensory feedback on task performance and the sense of presence in a virtual reality environment.

    PubMed

    Cooper, Natalia; Milella, Ferdinando; Pinto, Carlo; Cant, Iain; White, Mark; Meyer, Georg

    2018-01-01

    Objective and subjective measures of performance in virtual reality environments increase as more sensory cues are delivered and as simulation fidelity increases. Some cues (colour or sound) are easier to present than others (object weight, vestibular cues) so that substitute cues can be used to enhance informational content in a simulation at the expense of simulation fidelity. This study evaluates how substituting cues in one modality by alternative cues in another modality affects subjective and objective performance measures in a highly immersive virtual reality environment. Participants performed a wheel change in a virtual reality (VR) environment. Auditory, haptic and visual cues, signalling critical events in the simulation, were manipulated in a factorial design. Subjective ratings were recorded via questionnaires. The time taken to complete the task was used as an objective performance measure. The results show that participants performed best and felt an increased sense of immersion and involvement, collectively referred to as 'presence', when substitute multimodal sensory feedback was provided. Significant main effects of audio and tactile cues on task performance and on participants' subjective ratings were found. A significant negative relationship was found between the objective (overall completion times) and subjective (ratings of presence) performance measures. We conclude that increasing informational content, even if it disrupts fidelity, enhances performance and user's overall experience. On this basis we advocate the use of substitute cues in VR environments as an efficient method to enhance performance and user experience.

  17. Generic HRTFs May be Good Enough in Virtual Reality. Improving Source Localization through Cross-Modal Plasticity.

    PubMed

    Berger, Christopher C; Gonzalez-Franco, Mar; Tajadura-Jiménez, Ana; Florencio, Dinei; Zhang, Zhengyou

    2018-01-01

    Auditory spatial localization in humans is performed using a combination of interaural time differences, interaural level differences, as well as spectral cues provided by the geometry of the ear. To render spatialized sounds within a virtual reality (VR) headset, either individualized or generic Head Related Transfer Functions (HRTFs) are usually employed. The former require arduous calibrations, but enable accurate auditory source localization, which may lead to a heightened sense of presence within VR. The latter obviate the need for individualized calibrations, but result in less accurate auditory source localization. Previous research on auditory source localization in the real world suggests that our representation of acoustic space is highly plastic. In light of these findings, we investigated whether auditory source localization could be improved for users of generic HRTFs via cross-modal learning. The results show that pairing a dynamic auditory stimulus, with a spatio-temporally aligned visual counterpart, enabled users of generic HRTFs to improve subsequent auditory source localization. Exposure to the auditory stimulus alone or to asynchronous audiovisual stimuli did not improve auditory source localization. These findings have important implications for human perception as well as the development of VR systems as they indicate that generic HRTFs may be enough to enable good auditory source localization in VR.
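The rendering step behind both generic and individualized HRTFs is convolution of a mono source with a direction-specific pair of head-related impulse responses. A hedged sketch, in which the toy impulse responses (a simple interaural time and level difference) stand in for measured HRIRs and are not taken from the paper:

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Render a mono signal at a virtual direction by convolving it
    with that direction's left/right head-related impulse responses."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    n = max(left.size, right.size)
    out = np.zeros((n, 2))
    out[: left.size, 0] = left
    out[: right.size, 1] = right
    return out

fs = 48000
# Toy "generic HRIRs" for a source toward the left: the near (left) ear
# gets the signal directly; the far (right) ear gets it delayed by
# ~0.5 ms (24 samples) and attenuated.
near = np.zeros(64); near[0] = 1.0
far = np.zeros(64); far[24] = 0.7
sig = np.random.default_rng(2).standard_normal(fs // 10)
binaural = spatialize(sig, near, far)
```

Individualized HRTFs replace these impulse responses with measured ones that also encode the listener's own pinna spectral cues; the cross-modal training studied in the paper compensates for the mismatch when generic responses are used instead.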

  18. Localization of virtual sound at 4 Gz.

    PubMed

    Sandor, Patrick M B; McAnally, Ken I; Pellieux, Lionel; Martin, Russell L

    2005-02-01

    Acceleration directed along the body's z-axis (Gz) leads to misperception of the elevation of visual objects (the "elevator illusion"), most probably as a result of errors in the transformation from eye-centered to head-centered coordinates. We have investigated whether the location of sound sources is misperceived under increased Gz. Visually guided localization responses were made, using a remotely controlled laser pointer, to virtual auditory targets under conditions of 1 and 4 Gz induced in a human centrifuge. As these responses would be expected to be affected by the elevator illusion, we also measured the effect of Gz on the accuracy with which subjects could point to the horizon. Horizon judgments were lower at 4 Gz than at 1 Gz, so sound localization responses at 4 Gz were corrected for this error in the transformation from eye-centered to head-centered coordinates. We found that the accuracy and bias of sound localization are not significantly affected by increased Gz. The auditory modality is likely to provide a reliable means of conveying spatial information to operators in dynamic environments in which Gz can vary.

  19. The Plausibility of a String Quartet Performance in Virtual Reality.

    PubMed

    Bergstrom, Ilias; Azevedo, Sergio; Papiotis, Panos; Saldanha, Nuno; Slater, Mel

    2017-04-01

    We describe an experiment that explores the contribution of auditory and other features to the illusion of plausibility in a virtual environment that depicts the performance of a string quartet. 'Plausibility' refers to the component of presence that is the illusion that the perceived events in the virtual environment are really happening. The features studied were: Gaze (the musicians ignored the participant, the musicians sometimes looked towards and followed the participant's movements), Sound Spatialization (Mono, Stereo, Spatial), Auralization (no sound reflections, reflections corresponding to a room larger than the one perceived, reflections that exactly matched the virtual room), and Environment (no sound from outside of the room, birdsong and wind corresponding to the outside scene). We adopted a methodology based on color matching theory, where 20 participants were first able to assess their feeling of plausibility in the environment with each of the four features at their highest setting. Then, on five occasions, participants started from a low setting on all features and were able to make transitions from one system configuration to another until they matched their original feeling of plausibility. From these transitions a Markov transition matrix was constructed, along with probabilities of a match conditional on feature configuration. The results show that Environment and Gaze were individually the most important factors influencing the level of plausibility. The highest probability transitions were to improve Environment and Gaze, and then Auralization and Spatialization. We present this work both as a contribution to the methodology of assessing presence without questionnaires, and as a demonstration of how various aspects of a musical performance can influence plausibility.

  20. Emotion modulates activity in the 'what' but not 'where' auditory processing pathway.

    PubMed

    Kryklywy, James H; Macpherson, Ewan A; Greening, Steven G; Mitchell, Derek G V

    2013-11-15

    Auditory cortices can be separated into dissociable processing pathways similar to those observed in the visual domain. Emotional stimuli elicit enhanced neural activation within sensory cortices when compared to neutral stimuli. This effect is particularly notable in the ventral visual stream. Little is known, however, about how emotion interacts with dorsal processing streams, and essentially nothing is known about the impact of emotion on auditory stimulus localization. In the current study, we used fMRI in concert with individualized auditory virtual environments to investigate the effect of emotion during an auditory stimulus localization task. Surprisingly, participants were significantly slower to localize emotional relative to neutral sounds. A separate localizer scan was performed to isolate neural regions sensitive to stimulus location independent of emotion. When applied to the main experimental task, a significant main effect of location, but not emotion, was found in this ROI. A whole-brain analysis of the data revealed that posterior-medial regions of auditory cortex were modulated by sound location; however, additional anterior-lateral areas of auditory cortex demonstrated enhanced neural activity to emotional compared to neutral stimuli. The latter region resembled areas described in dual pathway models of auditory processing as the 'what' processing stream, prompting a follow-up task to generate an identity-sensitive ROI (the 'what' pathway) independent of location and emotion. Within this region, significant main effects of location and emotion were identified, as well as a significant interaction. These results suggest that emotion modulates activity in the 'what,' but not the 'where,' auditory processing pathway. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. Multimodal information Management: Evaluation of Auditory and Haptic Cues for NextGen Communication Displays

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Bittner, Rachel M.; Anderson, Mark R.

    2012-01-01

    Auditory communication displays within the NextGen data link system may use multiple synthetic speech messages replacing traditional ATC and company communications. The design of an interface for selecting amongst multiple incoming messages can impact both performance (time to select, audit and release a message) and preference. Two design factors were evaluated: physical pressure-sensitive switches versus flat panel "virtual switches", and the presence or absence of auditory feedback from switch contact. Performance with stimuli using physical switches was 1.2 s faster than virtual switches (2.0 s vs. 3.2 s); auditory feedback provided a 0.54 s performance advantage (2.33 s vs. 2.87 s). There was no interaction between these variables. Preference data were highly correlated with performance.

  2. Estimating the Intended Sound Direction of the User: Toward an Auditory Brain-Computer Interface Using Out-of-Head Sound Localization

    PubMed Central

    Nambu, Isao; Ebisawa, Masashi; Kogure, Masumi; Yano, Shohei; Hokari, Haruhide; Wada, Yasuhiro

    2013-01-01

    The auditory Brain-Computer Interface (BCI) using electroencephalograms (EEG) is a subject of intensive study. As a cue, auditory BCIs can deal with many of the characteristics of stimuli such as tone, pitch, and voices. Spatial information on auditory stimuli also provides useful information for a BCI. However, in a portable system, virtual auditory stimuli have to be presented spatially through earphones or headphones, instead of loudspeakers. We investigated the possibility of an auditory BCI using the out-of-head sound localization technique, which enables us to present virtual auditory stimuli to users from any direction, through earphones. The feasibility of a BCI using this technique was evaluated in an EEG oddball experiment and offline analysis. A virtual auditory stimulus was presented to the subject from one of six directions. Using a support vector machine, we were able to classify whether the subject attended the direction of a presented stimulus from EEG signals. The mean accuracy across subjects was 70.0% in the single-trial classification. When we used trial-averaged EEG signals as inputs to the classifier, the mean accuracy across seven subjects reached 89.5% (for 10-trial averaging). Further analysis showed that the P300 event-related potential responses from 200 to 500 ms in central and posterior regions of the brain contributed to the classification. In comparison with the results obtained from a loudspeaker experiment, we confirmed that stimulus presentation by out-of-head sound localization achieved similar event-related potential responses and classification performances. These results suggest that out-of-head sound localization enables us to provide a high-performance and loudspeaker-less portable BCI system. PMID:23437338
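The classification step described here, deciding from EEG features whether the subject attended the stimulated direction, can be sketched with a linear support vector machine. The data below are synthetic stand-ins for epoched EEG, and the feature layout and effect size are invented for illustration (the paper's actual pipeline used P300 responses from 200 to 500 ms):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for epoched EEG: trials x flattened
# (channels * timepoints) features.
n_trials, n_features = 120, 64
X = rng.standard_normal((n_trials, n_features))
y = rng.integers(0, 2, n_trials)   # 1 = stimulus direction was attended

# Toy effect: attended trials get a "P300-like" amplitude boost on the
# first block of features (imagined central/posterior channels).
X[y == 1, :16] += 0.8

clf = SVC(kernel="linear", C=1.0)
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")  # well above chance (0.5)
```

Averaging several trials before classification, as in the paper's 10-trial averaging, shrinks the noise while preserving the attention-dependent offset, which is why averaged inputs yield higher accuracy.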

  3. Novel design of interactive multimodal biofeedback system for neurorehabilitation.

    PubMed

    Huang, He; Chen, Y; Xu, W; Sundaram, H; Olson, L; Ingalls, T; Rikakis, T; He, Jiping

    2006-01-01

    A previous design of a biofeedback system for neurorehabilitation in an interactive multimodal environment has demonstrated the potential of engaging stroke patients in task-oriented neuromotor rehabilitation. This report explores a new concept and alternative designs of multimedia-based biofeedback systems. In this system, the new interactive multimodal environment was constructed with an abstract presentation of movement parameters. Scenery images or pictures, and their clarity and orientation, are used to reflect the arm movement and relative position to the target instead of an animated arm. The multiple biofeedback parameters were classified into different hierarchical levels with respect to the importance of each movement parameter to performance. A new quantified measurement for these parameters was developed to assess the patient's performance both in real time and offline. These parameters were represented by combined visual and auditory presentations with various distinct musical instruments. Overall, the objective of the newly designed system is to explore what information to feed back, and how, in an interactive virtual environment so as to enhance the sensorimotor integration that may facilitate the efficient design and application of virtual-environment-based therapeutic intervention.

  4. Experimental Evaluation of Performance Feedback Using the Dismounted Infantry Virtual After Action Review System. Long Range Navy and Marine Corps Science and Technology Program

    DTIC Science & Technology

    2007-11-14

    Artificial intelligence and education, Volume 1: Learning environments and tutoring systems. Hillsdale, NJ: Erlbaum. Wickens, C.D. (1984). Processing...and how to use it to best optimize the learning process. Some researchers (see Loftin & Savely, 1991) have proposed adding intelligent systems to the...is experienced as the cognitive centers in an individual's brain process visual, tactile, kinesthetic, olfactory, proprioceptive, and auditory

  5. Semi-Immersive Virtual Turbine Engine Simulation System

    NASA Astrophysics Data System (ADS)

    Abidi, Mustufa H.; Al-Ahmari, Abdulrahman M.; Ahmad, Ali; Darmoul, Saber; Ameen, Wadea

    2018-05-01

    The design and verification of assembly operations is essential for planning product production operations. Recently, virtual prototyping has witnessed tremendous progress, and has reached a stage where current environments enable rich and multi-modal interaction between designers and models through stereoscopic visuals, surround sound, and haptic feedback. The benefits of building and using Virtual Reality (VR) models in assembly process verification are discussed in this paper. In this paper, we present the virtual assembly (VA) of an aircraft turbine engine. The assembly parts and sequences are explained using a virtual reality design system. The system enables stereoscopic visuals, surround sounds, and ample and intuitive interaction with developed models. A special software architecture is suggested to describe the assembly parts and assembly sequence in VR. A collision detection mechanism is employed that provides visual feedback to check the interference between components. The system is tested for virtual prototyping and assembly sequencing of a turbine engine. We show that the developed system is comprehensive in terms of VR feedback mechanisms, which include visual, auditory, and tactile feedback, as well as force feedback. The system is shown to be effective and efficient for validating the design of assembly, part design, and operations planning.

  6. Demonstrating the Potential for Dynamic Auditory Stimulation to Contribute to Motion Sickness

    PubMed Central

    Keshavarz, Behrang; Hettinger, Lawrence J.; Kennedy, Robert S.; Campos, Jennifer L.

    2014-01-01

    Auditory cues can create the illusion of self-motion (vection) in the absence of visual or physical stimulation. The present study aimed to determine whether auditory cues alone can also elicit motion sickness and how auditory cues contribute to motion sickness when added to visual motion stimuli. Twenty participants were seated in front of a curved projection display and were exposed to a virtual scene that constantly rotated around the participant's vertical axis. The virtual scene contained either visual-only, auditory-only, or a combination of corresponding visual and auditory cues. All participants performed all three conditions in a counterbalanced order. Participants tilted their heads alternately towards the right or left shoulder in all conditions during stimulus exposure in order to create pseudo-Coriolis effects and to maximize the likelihood of motion sickness. Measurements of motion sickness (onset, severity), vection (latency, strength, duration), and postural steadiness (center of pressure) were recorded. Results showed that adding auditory cues to the visual stimuli did not, on average, affect motion sickness and postural steadiness, but it reduced vection onset times and increased vection strength compared to pure visual or pure auditory stimulation. Eighteen of the 20 participants reported at least slight motion sickness in the two conditions including visual stimuli. More interestingly, six participants also reported slight motion sickness during pure auditory stimulation, and two of these six stopped the pure auditory test session due to motion sickness. The present study is the first to demonstrate that motion sickness may be caused by pure auditory stimulation, which we refer to as “auditorily induced motion sickness”. PMID:24983752

  7. Listeners' expectation of room acoustical parameters based on visual cues

    NASA Astrophysics Data System (ADS)

    Valente, Daniel L.

    Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audio-visual study, in which participants are instructed to make spatial congruency and quantity judgments in dynamic cross-modal environments. The results of these psychophysical tests suggest the importance of consilient audio-visual presentation to the legibility of an auditory scene. Several studies have looked into audio-visual interaction in room perception in recent years, but these studies rely on static images, speech signals, or photographs alone to represent the visual scene. Building on these studies, the aim is to propose a testing method that uses monochromatic compositing (blue-screen technique) to position a studio recording of a musical performance in a number of virtual acoustical environments and ask subjects to assess these environments. In the first experiment of the study, video footage was taken from five rooms varying in physical size from a small studio to a small performance hall. Participants were asked to perceptually align two distinct acoustical parameters---early-to-late reverberant energy ratio and reverberation time---of two solo musical performances in five contrasting visual environments according to their expectations of how the room should sound given its visual appearance. In the second experiment in the study, video footage shot from four different listening positions within a general-purpose space was coupled with sounds derived from measured binaural impulse responses (IRs). The relationship between the presented image, sound, and virtual receiver position was examined. It was found that visual cues altered the perceived characteristics of the acoustic environment.
This included the visual attributes of the space in which the performance was located as well as the visual attributes of the performer. The addressed visual makeup of the performer included: (1) an actual video of the performance, (2) a surrogate image of the performance, for example a loudspeaker's image reproducing the performance, (3) no visual image of the performance (empty room), or (4) a multi-source visual stimulus (actual video of the performance coupled with two images of loudspeakers positioned to the left and right of the performer). For this experiment, perceived auditory events of sound were measured in terms of two subjective spatial metrics: Listener Envelopment (LEV) and Apparent Source Width (ASW). These metrics were hypothesized to be dependent on the visual imagery of the presented performance. Data were also collected by having participants match direct and reverberant sound levels for the presented audio-visual scenes. In the final experiment, participants judged spatial expectations of an ensemble of musicians presented in the five physical spaces from Experiment 1. Supporting data were accumulated in two stages. First, participants were given an audio-visual matching test, in which they were instructed to align the auditory width of a performing ensemble to a varying set of audio and visual cues. In the second stage, a conjoint analysis design paradigm was explored to extrapolate the relative magnitude of the explored audio-visual factors in affecting three assessed response criteria: Congruency (the perceived match-up of the auditory and visual cues in the assessed performance), ASW, and LEV. Results show that both auditory and visual factors affect the collected responses, and that the two sensory modalities coincide in distinct interactions.
This study reveals participant resiliency in the presence of forced auditory-visual mismatch: Participants are able to adjust the acoustic component of the cross-modal environment in a statistically similar way despite randomized starting values for the monitored parameters. Subjective results of the experiments are presented along with objective measurements for verification.
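    The early-to-late reverberant energy ratio that participants adjusted in this study can be illustrated with a short sketch. This is not the study's code; the 50-ms split point and the synthetic exponentially decaying impulse responses are illustrative assumptions:

```python
import numpy as np

def clarity_db(ir, fs, split_ms=50.0):
    """Early-to-late energy ratio of a room impulse response, in dB:
    energy arriving before split_ms versus energy arriving after it."""
    k = int(fs * split_ms / 1000.0)
    early = np.sum(ir[:k] ** 2)
    late = np.sum(ir[k:] ** 2)
    return 10.0 * np.log10(early / late)

# Synthetic decaying impulse responses: a faster decay (shorter
# reverberation time) concentrates energy early, raising the ratio.
fs = 8000
t = np.arange(fs) / fs
dry = clarity_db(np.exp(-t / 0.05), fs)  # short decay
wet = clarity_db(np.exp(-t / 0.5), fs)   # long decay
```

    The sketch shows why the two adjusted parameters are linked: lengthening the reverberation time shifts energy past the split point and lowers the early-to-late ratio.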

  8. Estimating the relative weights of visual and auditory tau versus heuristic-based cues for time-to-contact judgments in realistic, familiar scenes by older and younger adults.

    PubMed

    Keshavarz, Behrang; Campos, Jennifer L; DeLucia, Patricia R; Oberfeld, Daniel

    2017-04-01

    Estimating time to contact (TTC) involves multiple sensory systems, including vision and audition. Previous findings suggested that the ratio of an object's instantaneous optical size/sound intensity to its instantaneous rate of change in optical size/sound intensity (τ) drives TTC judgments. Other evidence has shown that heuristic-based cues are used, including final optical size or final sound pressure level. Most previous studies have used decontextualized and unfamiliar stimuli (e.g., geometric shapes on a blank background). Here we used a traffic scene with an approaching vehicle to evaluate the weights of visual and auditory TTC cues under more realistic conditions. Younger (18-39 years) and older (65+ years) participants made TTC estimates in three sensory conditions: visual-only, auditory-only, and audio-visual. Stimuli were presented within an immersive virtual-reality environment, and cue weights were calculated for both visual cues (e.g., visual τ, final optical size) and auditory cues (e.g., auditory τ, final sound pressure level). The results demonstrated the use of visual τ as well as heuristic cues in the visual-only condition. TTC estimates in the auditory-only condition, however, were primarily based on an auditory heuristic cue (final sound pressure level), rather than on auditory τ. In the audio-visual condition, the visual cues dominated overall, with the highest weight being assigned to visual τ by younger adults, and a more equal weighting of visual τ and heuristic cues in older adults. Overall, better characterizing the effects of combined sensory inputs, stimulus characteristics, and age on the cues used to estimate TTC will provide important insights into how these factors may affect everyday behavior.
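    The visual τ variable described above can be sketched numerically. This is an illustrative toy calculation, not the study's code; the object size, distance, and speed are made-up values:

```python
def visual_tau(theta, dtheta_dt):
    """Instantaneous optical size divided by its rate of expansion."""
    return theta / dtheta_dt

# For an object of physical size s at distance d, approaching at
# speed v: theta ~ s/d and dtheta/dt ~ s*v/d**2, so tau = d/v,
# the true time to contact, without needing s, d, or v directly.
s, d, v = 2.0, 40.0, 10.0
theta = s / d
dtheta = s * v / d ** 2
tau = visual_tau(theta, dtheta)  # ~ 4.0 s (= d/v, the true TTC)
```

    The same ratio applies to sound intensity and its rate of change for the auditory τ discussed in the abstract.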

  9. Eye-tracking and EMG supported 3D Virtual Reality - an integrated tool for perceptual and motor development of children with severe physical disabilities: a research concept.

    PubMed

    Pulay, Márk Ágoston

    2015-01-01

    Enabling children with severe physical disabilities (such as tetraparesis spastica) to get relevant movement experiences of appropriate quality and quantity is now the greatest challenge in the field of neurorehabilitation. These movement experiences underpin many cognitive processes, and their absence may cause additional secondary cognitive dysfunctions such as disorders in body image, figure invariance, visual perception, auditory differentiation, concentration, analytic and synthetic thinking, visual memory, etc. Virtual Reality is a technology that provides a sense of presence in a real environment with the help of 3D pictures and animations formed in a computer environment and enables the person to interact with the objects in that environment. One of our biggest challenges is to find a well-suited input device (hardware) to let children with severe physical disabilities interact with the computer. Based on our own experiences and a thorough literature review, we have come to the conclusion that an effective combination of eye-tracking and EMG devices should work well.

  10. A virtual display system for conveying three-dimensional acoustic information

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Wightman, Frederic L.; Foster, Scott H.

    1988-01-01

    The development of a three-dimensional auditory display system is discussed. Theories of human sound localization and techniques for synthesizing various features of auditory spatial perceptions are examined. Psychophysical data validating the system are presented. The human factors applications of the system are considered.

  11. Psychophysical Evaluation of Three-Dimensional Auditory Displays

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L. (Principal Investigator)

    1995-01-01

    This report describes the progress made during the first year of a three-year Cooperative Research Agreement (CRA NCC2-542). The CRA proposed a program of applied psychophysical research designed to determine the requirements and limitations of three-dimensional (3-D) auditory display systems. These displays present synthesized stimuli to a pilot or virtual workstation operator that evoke auditory images at predetermined positions in space. The images can be either stationary or moving. In previous years, we completed a number of studies that provided data on listeners' abilities to localize stationary sound sources with 3-D displays. The current focus is on the use of 3-D displays in 'natural' listening conditions, which include listeners' head movements, moving sources, multiple sources, and 'echoic' sources. The results of our research on two of these topics, the role of head movements and the role of echoes and reflections, were reported in the most recent Semi-Annual Progress Report (Appendix A). In the period since the last Progress Report we have been studying a third topic, the localizability of moving sources. The results of this research are described. The fidelity of a virtual auditory display is critically dependent on precise measurement of the listener's Head-Related Transfer Functions (HRTFs), which are used to produce the virtual auditory images. We continue to explore methods for improving our HRTF measurement technique. During this reporting period we compared HRTFs measured using our standard open-canal probe-tube technique and HRTFs measured with the closed-canal insert microphones from the Crystal River Engineering Snapshot system.

  12. Virtual acoustic displays

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.

    1991-01-01

    A 3D auditory display can potentially enhance information transfer by combining directional and iconic information in a quite naturalistic representation of dynamic objects in the interface. Another aspect of auditory spatial cues is that, in conjunction with other modalities, they can act as a potentiator of information in the display. For example, visual and auditory cues together can reinforce the information content of the display and provide a greater sense of presence or realism in a manner not readily achievable by either modality alone. This phenomenon will be particularly useful in telepresence applications, such as advanced teleconferencing environments, shared electronic workspaces, and monitoring telerobotic activities in remote or hazardous situations. Thus, the combination of direct spatial cues with good principles of iconic design could provide an extremely powerful and information-rich display which is also quite easy to use. An alternative approach, recently developed at ARC, generates externalized, 3D sound cues over headphones in realtime using digital signal processing. Here, the synthesis technique involves the digital generation of stimuli using Head-Related Transfer Functions (HRTFs) measured in the two ear-canals of individual subjects. Other similar approaches include an analog system developed by Loomis et al. (1990) and digital systems which make use of transforms derived from normative manikins and simulations of room acoustics. Such an interface also requires the careful psychophysical evaluation of listeners' ability to accurately localize the virtual or synthetic sound sources. From an applied standpoint, measurement of each potential listener's HRTFs may not be possible in practice. For experienced listeners, localization performance was only slightly degraded compared to a subject's inherent ability.
Alternatively, even inexperienced listeners may be able to adapt to a particular set of HRTF's as long as they provide adequate cues for localization. In general, these data suggest that most listeners can obtain useful directional information from an auditory display without requiring the use of individually-tailored HRTF's.
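    The HRTF-based synthesis described above amounts to convolving a mono signal with a left-ear and a right-ear impulse response. A minimal sketch, using made-up four-sample "HRIRs" rather than measured ones:

```python
import numpy as np

def binaural_render(mono, hrir_l, hrir_r):
    """Convolve a mono source with left/right head-related impulse
    responses (HRIRs) to produce a two-channel virtual source."""
    return np.stack([np.convolve(mono, hrir_l),
                     np.convolve(mono, hrir_r)])

# Toy HRIRs for a source on the listener's left: the right ear hears
# the sound a few samples later (interaural time difference) and
# quieter (interaural level difference). Real systems use HRTFs
# measured in the listener's ear canals, as the abstract describes.
hrir_l = np.array([1.0, 0.0, 0.0, 0.0])
hrir_r = np.array([0.0, 0.0, 0.0, 0.5])
mono = np.array([1.0, 0.0, 0.0])
out = binaural_render(mono, hrir_l, hrir_r)
```

    In a real-time display, the HRIR pair is swapped or interpolated as the head tracker reports a new source-relative direction.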

  13. Virtual environment navigation with look-around mode to explore new real spaces by people who are blind.

    PubMed

    Lahav, Orly; Gedalevitz, Hadas; Battersby, Steven; Brown, David; Evett, Lindsay; Merritt, Patrick

    2018-05-01

    This paper examines the ability of people who are blind to construct a mental map and perform orientation tasks in real space by using Nintendo Wii technologies to explore virtual environments. The participant explores new spaces through haptic and auditory feedback triggered by pointing or walking in the virtual environments and later constructs a mental map, which can be used to navigate in real space. The study included 10 participants who were congenitally or adventitiously blind, divided into experimental and control groups. The research was implemented by using virtual environment exploration and orientation tasks in real spaces, using both qualitative and quantitative methods in its methodology. The results show that the mode of exploration afforded to the experimental group is radically new in orientation and mobility training; as a result, 60% of the experimental participants constructed mental maps that were based on the map model, compared with only 30% of the control group participants. Using technology that enabled them to explore and to collect spatial information in a way that does not exist in real space influenced the ability of the experimental group to construct a mental map based on the map model. Implications for rehabilitation: The virtual cane system for the first time enables people who are blind to explore and collect spatial information via the look-around mode in addition to the walk-around mode. People who are blind prefer to use the look-around mode to explore new spaces, as opposed to the walk-around mode. Although the look-around mode requires users to establish a complex collecting and processing procedure for the spatial data, people who are blind using this mode are able to construct a mental map as a map model. For people who are blind (as for the sighted), construction of a mental map based on the map model offers more flexibility in choosing a walking path in a real space, accounting for changes that occur in the space.

  14. An initial validation of the Virtual Reality Paced Auditory Serial Addition Test in a college sample.

    PubMed

    Parsons, Thomas D; Courtney, Christopher G

    2014-01-30

    Numerous studies have demonstrated that the Paced Auditory Serial Addition Test (PASAT) has utility for the detection of cognitive processing deficits. While the PASAT has demonstrated high levels of internal consistency and test-retest reliability, administration of the PASAT has been known to create undue anxiety and frustration in participants. As a result, degradation of performance may be found on the PASAT. The difficult nature of the PASAT may subsequently decrease the probability of participants' return for follow-up testing. This study is a preliminary attempt at assessing the potential of a PASAT embedded in a virtual reality environment. The Virtual Reality PASAT (VR-PASAT) was compared with a paper-and-pencil version of the PASAT as well as other standardized neuropsychological measures. The two modalities of the PASAT were conducted with a sample of 50 healthy university students, between the ages of 19 and 34 years. Equivalent distributions were found for age, gender, education, and computer familiarity. Moderate relationships were found between VR-PASAT and other putative attentional processing measures. The VR-PASAT was unrelated to indices of learning, memory, or visuospatial processing. Comparison of the VR-PASAT with the traditional paper-and-pencil PASAT indicated that both versions require the examinee to sustain attention at an increasingly demanding, externally determined rate. Results offer preliminary support for the construct validity (in a college sample) of the VR-PASAT as an attentional processing measure and suggest that this task may provide some unique information not tapped by traditional attentional processing tasks. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. Effects of sensory cueing in virtual motor rehabilitation. A review.

    PubMed

    Palacios-Navarro, Guillermo; Albiol-Pérez, Sergio; García-Magariño García, Iván

    2016-04-01

    To critically identify studies that evaluate the effects of cueing in virtual motor rehabilitation in patients having different neurological disorders and to make recommendations for future studies. Data from MEDLINE®, IEEExplore, Science Direct, the Cochrane Library, and Web of Science were searched until February 2015. We included studies that investigate the effects of cueing in virtual motor rehabilitation related to interventions for upper or lower extremities using auditory, visual, and tactile cues on motor performance in non-immersive, semi-immersive, or fully immersive virtual environments. These studies compared virtual cueing with an alternative or no intervention. Ten studies with a total of 153 patients were included in the review. All of them refer to the impact of cueing in virtual motor rehabilitation, regardless of the pathological condition. After selecting the articles, the following variables were extracted: year of publication, sample size, study design, type of cueing, intervention procedures, outcome measures, and main findings. The outcome evaluation was done at baseline and end of the treatment in most of the studies. All studies except one showed improvements in some or all outcomes after intervention, or, in some cases, in favor of the virtual rehabilitation group compared to the control group. Virtual cueing seems to be a promising approach to improve motor learning, providing a channel for non-pharmacological therapeutic intervention in different neurological disorders. However, further studies using larger and more homogeneous groups of patients are required to confirm these findings. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Early auditory change detection implicitly facilitated by ignored concurrent visual change during a Braille reading task.

    PubMed

    Aoyama, Atsushi; Haruyama, Tomohiro; Kuriki, Shinya

    2013-09-01

    Unconscious monitoring of multimodal stimulus changes enables humans to effectively sense the external environment. Such automatic change detection is thought to be reflected in auditory and visual mismatch negativity (MMN) and mismatch negativity fields (MMFs). These are event-related potentials and magnetic fields, respectively, evoked by deviant stimuli within a sequence of standard stimuli, and both are typically studied during irrelevant visual tasks that cause the stimuli to be ignored. Due to the sensitivity of MMN/MMF to potential effects of explicit attention to vision, however, it is unclear whether multisensory co-occurring changes can purely facilitate early sensory change detection reciprocally across modalities. We adopted a tactile task involving the reading of Braille patterns as a neutral ignore condition, while measuring magnetoencephalographic responses to concurrent audiovisual stimuli that were infrequently deviated either in auditory, visual, or audiovisual dimensions; 1000-Hz standard tones were switched to 1050-Hz deviant tones and/or two-by-two standard check patterns displayed on both sides of visual fields were switched to deviant reversed patterns. The check patterns were set to be faint enough so that the reversals could be easily ignored even during Braille reading. While visual MMFs were virtually undetectable even for visual and audiovisual deviants, significant auditory MMFs were observed for auditory and audiovisual deviants, originating from bilateral supratemporal auditory areas. Notably, auditory MMFs were significantly enhanced for audiovisual deviants from about 100 ms post-stimulus, as compared with the summation responses for auditory and visual deviants or for each of the unisensory deviants recorded in separate sessions. 
Evidenced by high tactile task performance with unawareness of visual changes, we conclude that Braille reading can successfully suppress explicit attention and that simultaneous multisensory changes can implicitly strengthen automatic change detection from an early stage in a cross-sensory manner, at least in the vision to audition direction.

  17. Psychophysics of human echolocation.

    PubMed

    Schörnich, Sven; Wallmeier, Ludwig; Gessele, Nikodemus; Nagy, Andreas; Schranner, Michael; Kish, Daniel; Wiegrebe, Lutz

    2013-01-01

    The skills of some blind humans orienting in their environment through the auditory analysis of reflections from self-generated sounds have received little scientific attention to date. Here we present data from a series of formal psychophysical experiments with sighted subjects trained to evaluate features of a virtual echo-acoustic space, allowing for rigid and fine-grain control of the stimulus parameters. The data show how subjects shape both their vocalisations and auditory analysis of the echoes to serve specific echo-acoustic tasks. First, we show that humans can echo-acoustically discriminate target distances with a resolution of less than 1 m for reference distances above 3.4 m. For a reference distance of 1.7 m, corresponding to an echo delay of only 10 ms, distance JNDs were typically around 0.5 m. Second, we explore the interplay between the precedence effect and echolocation. We show that the strong perceptual asymmetry between lead and lag is weakened during echolocation. Finally, we show that through the auditory analysis of self-generated sounds, subjects discriminate room-size changes as small as 10%. In summary, the current data confirm the practical efficacy of human echolocation, and they provide a rigid psychophysical basis for addressing its neural foundations.
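    The correspondence noted above between a 1.7 m reference distance and a 10 ms echo delay follows from the two-way travel time of sound. A one-line sketch, assuming a speed of sound of 343 m/s:

```python
def echo_distance_m(delay_s, c=343.0):
    """Target distance implied by the delay between a self-generated
    sound and its returning echo (the sound travels out and back)."""
    return c * delay_s / 2.0

d = echo_distance_m(0.010)  # 1.715 -> the ~1.7 m reference distance
```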

  18. Effect of Blast Injury on Auditory Localization in Military Service Members.

    PubMed

    Kubli, Lina R; Brungart, Douglas; Northern, Jerry

    Among the many advantages of binaural hearing are the abilities to localize sounds in space and to attend to one sound in the presence of many sounds. Binaural hearing provides benefits for all listeners, but it may be especially critical for military personnel who must maintain situational awareness in complex tactical environments with multiple speech and noise sources. There is concern that Military Service Members who have been exposed to one or more high-intensity blasts during their tour of duty may have difficulty with binaural and spatial ability due to degradation in auditory and cognitive processes. The primary objective of this study was to assess the ability of blast-exposed Military Service Members to localize speech sounds in quiet and in multisource environments with one or two competing talkers. Participants were presented with one, two, or three topic-related (e.g., sports, food, travel) sentences under headphones and required to attend to, and then locate the source of, the sentence pertaining to a prespecified target topic within a virtual space. The listener's head position was monitored by a head-mounted tracking device that continuously updated the apparent spatial location of the target and competing speech sounds as the subject turned within the virtual space. Measurements of auditory localization ability included mean absolute error in locating the source of the target sentence, the time it took to locate the target sentence within 30 degrees, target/competitor confusion errors, response time, and cumulative head motion. Twenty-one blast-exposed Active-Duty or Veteran Military Service Members (blast-exposed group) and 33 non-blast-exposed Service Members and beneficiaries (control group) were evaluated. In general, the blast-exposed group performed as well as the control group if the task involved localizing the source of a single speech target. 
However, if the task involved two or three simultaneous talkers, localization ability was compromised for some participants in the blast-exposed group. Blast-exposed participants were less accurate in their localization responses and required more exploratory head movements to find the location of the target talker. Results suggest that blast-exposed participants have more difficulty than non-blast-exposed participants in localizing sounds in complex acoustic environments. This apparent deficit in spatial hearing ability highlights the need to develop new diagnostic tests using complex listening tasks that involve multiple sound sources that require speech segregation and comprehension.

  19. How far away is plug 'n' play? Assessing the near-term potential of sonification and auditory display

    NASA Technical Reports Server (NTRS)

    Bargar, Robin

    1995-01-01

    The commercial music industry offers a broad range of plug 'n' play hardware and software scaled to music professionals and scaled to a broad consumer market. The principles of sound synthesis utilized in these products are relevant to application in virtual environments (VE). However, the closed architectures used in commercial music synthesizers are prohibitive to low-level control during real-time rendering, and the algorithms and sounds themselves are not standardized from product to product. To bring sound into VE requires a new generation of open architectures designed for human-controlled performance from interfaces embedded in immersive environments. This presentation addresses the state of the sonic arts in scientific computing and VE, analyzes research challenges facing sound computation, and offers suggestions regarding tools we might expect to become available during the next few years. A list of classes of audio functionality in VE includes sonification -- the use of sound to represent data from numerical models; 3D auditory display (spatialization and localization, also called externalization); navigation cues for positional orientation and for finding items or regions inside large spaces; voice recognition for controlling the computer; external communications between users in different spaces; and feedback to the user concerning his own actions or the state of the application interface. To effectively convey this considerable variety of signals, we apply principles of acoustic design to ensure the messages are neither confusing nor competing. We approach the design of auditory experience through a comprehensive structure for messages, and message interplay we refer to as an Automated Sound Environment. Our research addresses real-time sound synthesis, real-time signal processing and localization, interactive control of high-dimensional systems, and synchronization of sound and graphics.

  20. Virtual reality in the assessment and treatment of psychosis: a systematic review of its utility, acceptability and effectiveness.

    PubMed

    Rus-Calafell, M; Garety, P; Sason, E; Craig, T J K; Valmaggia, L R

    2018-02-01

    Over the last two decades, there has been a rapid increase of studies testing the efficacy and acceptability of virtual reality in the assessment and treatment of mental health problems. This systematic review was carried out to investigate the use of virtual reality in the assessment and the treatment of psychosis. Web of Science, PsychInfo, EMBASE, Scopus, ProQuest and PubMed databases were searched, resulting in the identification of 638 articles potentially eligible for inclusion; of these, 50 studies were included in the review. The main fields of research in virtual reality and psychosis are: safety and acceptability of the technology; neurocognitive evaluation; functional capacity and performance evaluation; assessment of paranoid ideation and auditory hallucinations; and interventions. The studies reviewed indicate that virtual reality offers a valuable method of assessing the presence of symptoms in ecologically valid environments, with the potential to facilitate learning new emotional and behavioural responses. Virtual reality is a promising method to be used in the assessment of neurocognitive deficits and the study of relevant clinical symptoms. Furthermore, preliminary findings suggest that it can be applied to the delivery of cognitive rehabilitation, social skills training interventions and virtual reality-assisted therapies for psychosis. The potential benefits for enhancing treatment are highlighted. Recommendations for future research include demonstrating generalisability to real-life settings, examining potential negative effects, larger sample sizes and long-term follow-up studies. The present review has been registered in the PROSPERO register: CDR 4201507776.

  1. Auditory and visual 3D virtual reality therapy as a new treatment for chronic subjective tinnitus: Results of a randomized controlled trial.

    PubMed

    Malinvaud, D; Londero, A; Niarra, R; Peignard, Ph; Warusfel, O; Viaud-Delmon, I; Chatellier, G; Bonfils, P

    2016-03-01

    Subjective tinnitus (ST) is a frequent audiologic condition that still requires effective treatment. This study aimed at evaluating two therapeutic approaches: Virtual Reality (VR) immersion in auditory and visual 3D environments and Cognitive Behaviour Therapy (CBT). This open, randomized, therapeutic-equivalence trial used bilateral testing of VR versus CBT. Adult patients displaying unilateral or predominantly unilateral ST, and fulfilling the inclusion criteria, were included after giving their written informed consent. We measured the therapeutic effects by comparing the mean scores of validated questionnaires and visual analog scales, pre- and post-protocol. Equivalence was established if the two strategies did not differ by more than a predetermined limit. We used univariate and multivariate analyses adjusted on baseline values to assess treatment efficacy. In addition to this trial, a purely exploratory comparison to a waiting-list group (WL) was provided. Between August 2009 and November 2011, 148 of 162 screened patients were enrolled (VR n = 61, CBT n = 58, WL n = 29). These groups did not differ at baseline on demographic data. Three months after the end of the treatment, we did not find any difference between the VR and CBT groups for either tinnitus severity (p = 0.99) or tinnitus handicap (p = 0.36). VR appears to be at least as effective as CBT in unilateral ST patients. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Challenges and solutions for realistic room simulation

    NASA Astrophysics Data System (ADS)

    Begault, Durand R.

    2002-05-01

    Virtual room acoustic simulation (auralization) techniques have traditionally focused on answering questions related to speech intelligibility or musical quality, typically in large volumetric spaces. More recently, auralization techniques have been found to be important for the externalization of headphone-reproduced virtual acoustic images. Although externalization can be accomplished with a minimal simulation, data indicate that realistic auralizations must respond to head-motion cues for accurate localization. Computational demands increase when simulating coupled spaces, small rooms lacking meaningful reverberant decays, or reflective surfaces in outdoor environments. Auditory threshold data for both early reflections and late reverberant energy levels indicate that much of the information captured in acoustical measurements is inaudible, which can reduce the otherwise intensive computational requirements of real-time auralization systems. Results are presented for early-reflection thresholds as a function of azimuth angle, arrival time, and sound-source type, and for reverberation thresholds as a function of reverberation time and level within 250 Hz to 2 kHz octave bands. Good agreement is found between data obtained in virtual room simulations and those obtained in real rooms, suggesting a strategy for minimizing the computational requirements of real-time auralization systems.
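
    The saving the abstract alludes to, dropping reflections that fall below an audibility threshold before rendering them, can be sketched as a simple culling pass. The threshold curve and reflection list below are toy values, not the measured thresholds reported in the study.

```python
# A minimal sketch of the computational saving described above: in a
# real-time auralization, early reflections whose level falls below a
# masked-audibility threshold (modeled here as a function of arrival delay
# relative to the direct sound) can be culled before convolution.
# The threshold curve and reflection list are toy values, not measured data.
def audible_reflections(reflections, threshold_db):
    """Keep (delay_ms, level_db) pairs at or above the masked threshold."""
    return [r for r in reflections if r[1] >= threshold_db(r[0])]

# Toy threshold: masking by the direct sound relaxes as delay grows.
toy_threshold = lambda delay_ms: -20.0 - 0.3 * delay_ms
reflections = [(5, -15.0), (20, -30.0), (60, -35.0), (120, -70.0)]
print(audible_reflections(reflections, toy_threshold))
```

    Every culled reflection is one less filter to run per audio block, which is where the real-time budget is won.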

  3. Human-Avatar Symbiosis for the Treatment of Auditory Verbal Hallucinations in Schizophrenia through Virtual/Augmented Reality and Brain-Computer Interfaces

    PubMed Central

    Fernández-Caballero, Antonio; Navarro, Elena; Fernández-Sotos, Patricia; González, Pascual; Ricarte, Jorge J.; Latorre, José M.; Rodriguez-Jimenez, Roberto

    2017-01-01

    This perspective paper addresses the future of alternative treatments that bring a social and cognitive approach to complement pharmacological therapy of auditory verbal hallucinations (AVH) in patients with schizophrenia. AVH, the perception of voices in the absence of auditory stimulation, represent a severe mental health symptom. Virtual/augmented reality (VR/AR) and brain-computer interfaces (BCI) are technologies increasingly used in medical and psychological applications. Our position is that their combined use in computer-based therapies offers as-yet-unforeseen possibilities for the treatment of physical and mental disabilities. The paper therefore anticipates that researchers and clinicians will follow a pathway toward human-avatar symbiosis for AVH by taking full advantage of these new technologies. This outlook requires addressing challenging issues in understanding non-pharmacological treatment of schizophrenia-related disorders and in exploiting VR/AR and BCI to achieve a true human-avatar symbiosis. PMID:29209193

  4. Human-Avatar Symbiosis for the Treatment of Auditory Verbal Hallucinations in Schizophrenia through Virtual/Augmented Reality and Brain-Computer Interfaces.

    PubMed

    Fernández-Caballero, Antonio; Navarro, Elena; Fernández-Sotos, Patricia; González, Pascual; Ricarte, Jorge J; Latorre, José M; Rodriguez-Jimenez, Roberto

    2017-01-01

    This perspective paper addresses the future of alternative treatments that bring a social and cognitive approach to complement pharmacological therapy of auditory verbal hallucinations (AVH) in patients with schizophrenia. AVH, the perception of voices in the absence of auditory stimulation, represent a severe mental health symptom. Virtual/augmented reality (VR/AR) and brain-computer interfaces (BCI) are technologies increasingly used in medical and psychological applications. Our position is that their combined use in computer-based therapies offers as-yet-unforeseen possibilities for the treatment of physical and mental disabilities. The paper therefore anticipates that researchers and clinicians will follow a pathway toward human-avatar symbiosis for AVH by taking full advantage of these new technologies. This outlook requires addressing challenging issues in understanding non-pharmacological treatment of schizophrenia-related disorders and in exploiting VR/AR and BCI to achieve a true human-avatar symbiosis.

  5. The Perception of Auditory Motion

    PubMed Central

    Leung, Johahn

    2016-01-01

    The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms for exploring the perception of auditory motion. At the same time, deployment of these technologies in command-and-control as well as entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves rapidly disentangling the changes in the locations of sound sources produced by rotations and translations of the head in space (self-motion) from those produced by actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029
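
    The self-motion compensation the review describes can be caricatured with a single yaw axis: a source is heard as stationary when the change in its head-relative azimuth is fully explained by the head's own rotation. A minimal kinematic sketch, not any model from the review:

```python
# One-axis sketch of self-motion compensation: recover a world-frame source
# azimuth from the head-relative azimuth plus the head's yaw (degrees).
def world_azimuth(head_relative_az_deg, head_yaw_deg):
    """World-frame source azimuth recovered from head-relative azimuth."""
    return (head_relative_az_deg + head_yaw_deg) % 360.0

# Turning the head 30 deg shifts the head-relative azimuth by -30 deg,
# so the world-frame estimate is unchanged: the source is heard as stable.
print(world_azimuth(45.0, 0.0), world_azimuth(15.0, 30.0))
```

    Any residual change not explained by the yaw term would be attributed to actual source motion, which is the perceptual judgment the studies above probe.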

  6. Music and learning-induced cortical plasticity.

    PubMed

    Pantev, Christo; Ross, Bernhard; Fujioka, Takako; Trainor, Laurel J; Schulte, Michael; Schulz, Matthias

    2003-11-01

    Auditory stimuli are encoded by frequency-tuned neurons in the auditory cortex. There are a number of tonotopic maps, indicating that there are multiple representations, as in a mosaic. However, the cortical organization is not fixed due to the brain's capacity to adapt to current requirements of the environment. Several experiments on cerebral cortical organization in musicians demonstrate an astonishing plasticity. We used the MEG technique in a number of studies to investigate the changes that occur in the human auditory cortex when a skill is acquired, such as when learning to play a musical instrument. We found enlarged cortical representation of tones of the musical scale as compared to pure tones in skilled musicians. Enlargement was correlated with the age at which musicians began to practice. We also investigated cortical representations for notes of different timbre (violin and trumpet) and found that they are enhanced in violinists and trumpeters, preferentially for the timbre of the instrument on which the musician was trained. In recent studies we extended these findings in three ways. First, we show that we can use MEG to measure the effects of relatively short-term laboratory training involving learning to perceive virtual instead of spectral pitch and that the switch to perceiving virtual pitch is manifested in the gamma band frequency. Second, we show that there is cross-modal plasticity in that when the lips of trumpet players are stimulated (trumpet players assess their auditory performance by monitoring the position and pressure of their lips touching the mouthpiece of their instrument) at the same time as a trumpet tone, activation in the somatosensory cortex is increased more than it is during the sum of the separate lip and trumpet tone stimulation. 
Third, we show that musicians' automatic encoding and discrimination of pitch contour and interval information in melodies are specifically enhanced compared to those in nonmusicians in that musicians show larger functional mismatch negativity (MMNm) responses to occasional changes in melodic contour or interval, but that the two groups show similar MMNm responses to changes in the frequency of a pure tone.

  7. Experience with V-STORE: considerations on presence in virtual environments for effective neuropsychological rehabilitation of executive functions.

    PubMed

    Lo Priore, Corrado; Castelnuovo, Gianluca; Liccione, Diego; Liccione, Davide

    2003-06-01

    The paper discusses the use of immersive virtual reality (IVR) systems for the cognitive rehabilitation of dysexecutive syndrome, usually caused by prefrontal brain injuries. With respect to classical paper-and-pencil (P&P) and flat-screen computer rehabilitative tools, IVR systems might prove capable of evoking a more intense and compelling sense of presence, thanks to the highly naturalistic subject-environment interaction they allow. Within a constructivist framework applied to holistic rehabilitation, we suggest that this difference might enhance the ecological validity of cognitive training, partly overcoming the implicit limits of a lab setting, which seem to affect non-immersive procedures especially when applied to dysexecutive symptoms. We tested presence in a pilot study of a new VR-based rehabilitation tool for executive functions, V-Store, which lets patients explore a virtual environment where they solve six series of tasks, ordered by complexity and designed to stimulate executive functions: programming, categorical abstraction, short-term memory and attention. We compared the sense of presence experienced by unskilled normal subjects, randomly assigned to immersive or non-immersive (flat-screen) sessions of V-Store, through four different indexes: a self-report questionnaire, a psychophysiological measure (GSR, skin conductance), a neuropsychological measure (incidental recall of auditory information coming from the "real" environment) and a count of breaks in presence (BIPs). Preliminary results show a significantly higher GSR response during tasks in the immersive group; the neuropsychological data (fewer recalled elements from "reality") and the BIP counts show a congruent but as yet non-significant advantage for the immersive condition, and no differences were evident from the self-report questionnaire. A larger experimental group is currently under examination to evaluate the significance of these data, which may also prove interesting with respect to the question of objective versus subjective measures of presence.

  8. [What do virtual reality tools bring to child and adolescent psychiatry?]

    PubMed

    Bioulac, S; de Sevin, E; Sagaspe, P; Claret, A; Philip, P; Micoulaud-Franchi, J A; Bouvard, M P

    2018-06-01

    Virtual reality is a relatively new technology that enables individuals to immerse themselves in a virtual world. It offers several advantages, including a more realistic, lifelike environment that may allow subjects to "forget" they are being assessed, encourage better participation and increase the generalization of learning. Moreover, a virtual reality system can present multimodal stimuli, such as visual and auditory stimuli, and can be used both to evaluate a patient's multimodal integration and to aid the rehabilitation of cognitive abilities. The use of virtual reality to treat various psychiatric disorders in adults (phobic anxiety disorders, post-traumatic stress disorder, eating disorders, addictions…) is supported by numerous efficacy studies. Similar research for children and adolescents is lagging behind, although the approach may be particularly beneficial to children, who often show great interest and considerable success in computer, console or videogame tasks. This article reviews the main studies that have used virtual reality with children and adolescents suffering from psychiatric disorders. The use of virtual reality exposure therapy (or in virtuo exposure) to treat anxiety disorders in adults is gaining popularity, and most studies attest to its significant efficacy. In children, studies have covered arachnophobia, social anxiety and school refusal phobia; despite their limited number, the results are very encouraging for the treatment of anxiety disorders. Several studies have reported the clinical use of virtual reality technology for children and adolescents with autism spectrum disorders (ASD). This research, focused on communication and on learning and social imitation skills, supports the effectiveness of such technologies as support tools for therapy, and virtual reality is well accepted by subjects with ASD. The virtual environment also offers the opportunity to administer controlled tasks, such as typical neuropsychological tools, in an environment much more like a standard classroom. The virtual reality classroom offers several advantages over classical tools: a more realistic, lifelike environment and the recording of various measures under standardized conditions. Most studies using a virtual classroom have found that children with Attention Deficit/Hyperactivity Disorder (ADHD) make significantly fewer correct hits and more commission errors than controls; the virtual classroom has proven to be a good clinical tool for evaluating attention in ADHD. For eating disorders, a cognitive behavioural therapy (CBT) program enhanced by a body image-specific component using virtual reality techniques was shown to be more efficient than CBT alone; the virtual reality component boosts efficiency and accelerates the CBT change process. Virtual reality is a relatively new technology and its application in child and adolescent psychiatry is recent; much work, including controlled trials, is needed before it can be introduced into routine clinical use. Virtual reality interventions should also investigate how newly acquired skills transfer to the real world. At present, virtual reality can be considered a useful tool in the evaluation and treatment of child and adolescent disorders. Copyright © 2017 L'Encéphale, Paris. Published by Elsevier Masson SAS. All rights reserved.

  9. Revolutionizing Education: The Promise of Virtual Reality

    ERIC Educational Resources Information Center

    Gadelha, Rene

    2018-01-01

    Virtual reality (VR) has the potential to revolutionize education, as it immerses students in their learning more than any other available medium. By blocking out visual and auditory distractions in the classroom, it has the potential to help students deeply connect with the material they are learning in a way that has never been possible before.…

  10. Attentional Demand of a Virtual Reality-Based Reaching Task in Nondisabled Older Adults.

    PubMed

    Chen, Yi-An; Chung, Yu-Chen; Proffitt, Rachel; Wade, Eric; Winstein, Carolee

    2015-12-01

    Attention during exercise is known to affect performance; however, the attentional demand inherent to virtual reality (VR)-based exercise is not well understood. We used a dual-task paradigm to compare the attentional demands of VR-based and non-VR-based (conventional, real-world) exercise: 22 non-disabled older adults performed a primary reaching task to virtual and real targets in a counterbalanced block order while verbally responding to an unanticipated auditory tone in one third of the trials. The attentional demand of the primary reaching task was inferred from the voice response time (VRT) to the auditory tone. Participants' engagement level and task experience were also obtained using questionnaires. The virtual target condition was more attention demanding (significantly longer VRT) than the real target condition. Secondary analyses revealed a significant interaction between engagement level and target condition on attentional demand. For participants who were highly engaged, attentional demand was high and independent of target condition. However, for those who were less engaged, attentional demand was low and depended on target condition (i.e., virtual > real). These findings add important knowledge to the growing body of research pertaining to the development and application of technology-enhanced exercise for elders and for rehabilitation purposes.

  11. Attentional Demand of a Virtual Reality-Based Reaching Task in Nondisabled Older Adults

    PubMed Central

    Chen, Yi-An; Chung, Yu-Chen; Proffitt, Rachel; Wade, Eric; Winstein, Carolee

    2015-01-01

    Attention during exercise is known to affect performance; however, the attentional demand inherent to virtual reality (VR)-based exercise is not well understood. We used a dual-task paradigm to compare the attentional demands of VR-based and non-VR-based (conventional, real-world) exercise: 22 non-disabled older adults performed a primary reaching task to virtual and real targets in a counterbalanced block order while verbally responding to an unanticipated auditory tone in one third of the trials. The attentional demand of the primary reaching task was inferred from the voice response time (VRT) to the auditory tone. Participants' engagement level and task experience were also obtained using questionnaires. The virtual target condition was more attention demanding (significantly longer VRT) than the real target condition. Secondary analyses revealed a significant interaction between engagement level and target condition on attentional demand. For participants who were highly engaged, attentional demand was high and independent of target condition. However, for those who were less engaged, attentional demand was low and depended on target condition (i.e., virtual > real). These findings add important knowledge to the growing body of research pertaining to the development and application of technology-enhanced exercise for elders and for rehabilitation purposes. PMID:27004233

  12. Modular mechatronic system for stationary bicycles interfaced with virtual environment for rehabilitation.

    PubMed

    Ranky, Richard G; Sivak, Mark L; Lewis, Jeffrey A; Gade, Venkata K; Deutsch, Judith E; Mavroidis, Constantinos

    2014-06-05

    Cycling has been used in the rehabilitation of individuals with both chronic and post-surgical conditions. Among the challenges in implementing bicycling for rehabilitation is the recruitment of both extremities, in particular when one is weaker or less coordinated. Feedback embedded in virtual reality (VR) augmented cycling may address the requirements for efficacious cycling, specifically recruitment of both extremities and exercising at a high intensity. In this paper, a mechatronic rehabilitation bicycling system with an interactive virtual environment, called the Virtual Reality Augmented Cycling Kit (VRACK), is presented. Novel hardware components embedded with sensors were implemented on a stationary exercise bicycle to monitor physiological and biomechanical parameters of participants while immersing them in an augmented reality simulation providing visual, auditory and haptic feedback. This modular, adaptable system attaches to commercially available stationary bicycles and interfaces with a personal computer for simulation and data-acquisition processes. The complete bicycle system includes: a) handlebars based on hydraulic pressure sensors; b) pedals that monitor pedal kinematics with an inertial measurement unit (IMU) and forces on the pedals while providing vibratory feedback; c) off-the-shelf electronics to monitor heart rate; and d) customized software for rehabilitation. Bench tests performed on the sensorized handlebars and the instrumented pedals validated the measurement accuracy of these components, including calibration of the sensors detecting force and angle. Rider tests with the VRACK system focused on the pedal system and successfully monitored kinetic and kinematic parameters of the riders' lower extremities. The VRACK system, a modular virtual reality mechatronic rehabilitation kit, was designed to convert most stationary bicycles into virtual reality (VR) cycles. Preliminary testing of the augmented reality bicycle system successfully demonstrated that a modular mechatronic kit can monitor and record kinetic and kinematic parameters of several riders.

  13. Hearing in three dimensions

    NASA Astrophysics Data System (ADS)

    Shinn-Cunningham, Barbara

    2003-04-01

    One of the key functions of hearing is to help us monitor and orient to events in our environment (including those outside the line of sight). The ability to compute the spatial location of a sound source is also important for detecting, identifying, and understanding the content of a sound source, especially in the presence of competing sources from other positions. Determining the spatial location of a sound source poses difficult computational challenges; however, we perform this complex task with proficiency, even in the presence of noise and reverberation. This tutorial will review the acoustic, psychoacoustic, and physiological processes underlying spatial auditory perception. First, the tutorial will examine how the many different features of the acoustic signals reaching a listener's ears provide cues for source direction and distance, both in anechoic and reverberant space. Then we will discuss psychophysical studies of three-dimensional sound localization in different environments and the basic neural mechanisms by which spatial auditory cues are extracted. Finally, "virtual reality" approaches for simulating sounds at different directions and distances under headphones will be reviewed. The tutorial will be structured to appeal to a diverse audience with interests in all fields of acoustics and will incorporate concepts from many areas, such as psychological and physiological acoustics, architectural acoustics, and signal processing.
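
    Among the direction cues this tutorial covers, the interaural time difference (ITD) has a well-known closed-form approximation: Woodworth's spherical-head formula for a distant source. The sketch below uses a typical textbook head radius, not a measured one.

```python
# Woodworth's spherical-head approximation of the interaural time
# difference (ITD) for a distant source; one of the classical direction
# cues reviewed above. Head radius is a typical textbook value.
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Approximate ITD in seconds (0 deg = ahead, 90 deg = to one side)."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(az, "deg ->", round(woodworth_itd(az) * 1e6), "us")
```

    The ITD grows from zero straight ahead to roughly 650 microseconds at the side, which is the range binaural systems must reproduce to convey direction.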

  14. Call sign intelligibility improvement using a spatial auditory display

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.

    1993-01-01

    A spatial auditory display was used to convolve speech stimuli, consisting of 130 different call signs used in the communications protocol of NASA's John F. Kennedy Space Center, to different virtual auditory positions. An adaptive staircase method was used to determine intelligibility levels of the signal against diotic speech babble, with spatial positions at 30 deg azimuth increments. Non-individualized, minimum-phase approximations of head-related transfer functions were used. The results showed a maximal intelligibility improvement of about 6 dB when the signal was spatialized to 60 deg or 90 deg azimuth positions.
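
    The spatialization step this abstract describes, convolving speech with a head-related transfer function for a target azimuth, can be sketched in the time domain. The two-tap impulse responses below are toy placeholders standing in for measured (or minimum-phase approximated) HRTFs.

```python
# Schematic binaural spatialization: convolve a dry mono signal with a
# left/right head-related impulse response (HRIR) pair for the target
# azimuth. The two-tap HRIRs are toy placeholders, not measured responses.
def convolve(x, h):
    """Direct-form FIR convolution of two sample lists."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def spatialize(mono, hrir_left, hrir_right):
    """Return a list of (left, right) sample pairs."""
    return list(zip(convolve(mono, hrir_left), convolve(mono, hrir_right)))

mono = [1.0, 0.0, 0.5]
# Toy pair: the right ear is louder and earlier, as for a source far right.
binaural = spatialize(mono, hrir_left=[0.0, 0.3], hrir_right=[0.8, 0.1])
print(binaural)
```

    A real display would use long measured HRIRs per azimuth and fast convolution, but the dataflow, one filter per ear, is the same.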

  15. Evaluation of Domain-Specific Collaboration Interfaces for Team Command and Control Tasks

    DTIC Science & Technology

    2012-05-01

    [Fragmentary indexing excerpt; full abstract not available. The record references virtual whiteboard collaboration technology, cognitive theories of the utilization, storage, and retrieval of verbal and spatial information, and results driven by the auditory linguistic (AL), short-term memory (STM), spatial attentive (SA), visual temporal (VT), and vocal process (V) subscales.]

  16. The Effects of Vision-Related Aspects on Noise Perception of Wind Turbines in Quiet Areas

    PubMed Central

    Maffei, Luigi; Iachini, Tina; Masullo, Massimiliano; Aletta, Francesco; Sorrentino, Francesco; Senese, Vincenzo Paolo; Ruotolo, Francesco

    2013-01-01

    Preserving the soundscape and geographic extent of quiet areas is a great challenge in the face of the spread of environmental noise. The E.U. Environmental Noise Directive underlines the need to preserve quiet areas as a new aim for noise management in European countries. At the same time, owing to their low population density, rural areas with suitable wind are considered appropriate locations for wind farms. However, although wind farms are presented as environmentally friendly projects, these plants are often viewed as visual and audible intruders that spoil the landscape and generate noise. Even though the correlations are still unclear, the visual impact of wind farms could increase with their size and their coherence with respect to the rural/quiet environment. In this paper, using an immersive virtual reality technique, some visual and acoustical aspects of the impact of a wind farm on a sample of subjects were assessed and analyzed. The subjects were immersed in a virtual scenario representing a typical rural outdoor setting, which they experienced at different distances from the wind turbines. The influence of the number and colour of wind turbines on global, visual and auditory judgments was investigated. The main results showed that, as regards the number of wind turbines, the visual component has a weak effect on individual reactions, while colour influences both visual and auditory reactions, although in different ways. PMID:23624578

  17. The effects of vision-related aspects on noise perception of wind turbines in quiet areas.

    PubMed

    Maffei, Luigi; Iachini, Tina; Masullo, Massimiliano; Aletta, Francesco; Sorrentino, Francesco; Senese, Vincenzo Paolo; Ruotolo, Francesco

    2013-04-26

    Preserving the soundscape and geographic extent of quiet areas is a great challenge in the face of the spread of environmental noise. The E.U. Environmental Noise Directive underlines the need to preserve quiet areas as a new aim for noise management in European countries. At the same time, owing to their low population density, rural areas with suitable wind are considered appropriate locations for wind farms. However, although wind farms are presented as environmentally friendly projects, these plants are often viewed as visual and audible intruders that spoil the landscape and generate noise. Even though the correlations are still unclear, the visual impact of wind farms could increase with their size and their coherence with respect to the rural/quiet environment. In this paper, using an immersive virtual reality technique, some visual and acoustical aspects of the impact of a wind farm on a sample of subjects were assessed and analyzed. The subjects were immersed in a virtual scenario representing a typical rural outdoor setting, which they experienced at different distances from the wind turbines. The influence of the number and colour of wind turbines on global, visual and auditory judgments was investigated. The main results showed that, as regards the number of wind turbines, the visual component has a weak effect on individual reactions, while colour influences both visual and auditory reactions, although in different ways.

  18. Psychophysical Evaluation of Three-Dimensional Auditory Displays

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.

    1996-01-01

    This report describes the progress made during the second year of a three-year Cooperative Research Agreement (CRA). The CRA proposed a program of applied psychophysical research designed to determine the requirements and limitations of three-dimensional (3-D) auditory display systems. These displays present synthesized stimuli to a pilot or virtual workstation operator that evoke auditory images at predetermined positions in space. The images can be either stationary or moving. In previous years, we completed a number of studies that provided data on listeners' abilities to localize stationary sound sources with 3-D displays. The current focus is on the use of 3-D displays in 'natural' listening conditions, which include listeners' head movements, moving sources, multiple sources, and 'echoic' sources. The results of our research on one of these topics, the localization of multiple sources, were reported in the most recent Semi-Annual Progress Report (Appendix A). That same progress report described work on two related topics: the influence of a listener's a priori knowledge of source characteristics and the discriminability of real and virtual sources. In the period since the last Progress Report, we have conducted several new studies to evaluate the effectiveness of a new and simpler method for measuring the HRTFs used to synthesize virtual sources, and have expanded our studies of multiple sources. The results of this research are described below.

  19. Exploring the simulation requirements for virtual regional anesthesia training

    NASA Astrophysics Data System (ADS)

    Charissis, V.; Zimmer, C. R.; Sakellariou, S.; Chan, W.

    2010-01-01

    This paper presents an investigation of the simulation requirements for virtual regional anaesthesia training. To this end, we have developed a prototype human-computer interface designed to facilitate Virtual Reality (VR) augmented educational tactics for regional anaesthesia training. The proposed interface system aims to complement nerve-blocking techniques. The system is designed to operate in a real-time 3D environment, presenting anatomical information and enabling the user to explore the spatial relations of different human parts without any physical constraints. Furthermore, the proposed system aims to help trainee anaesthetists build a mental, three-dimensional map of the anatomical elements and their relationship to the ultrasound imaging used for navigation of the anaesthetic needle. Opting for a sophisticated approach to interaction, the interface elements are based on simplified visual representations of real objects and can be operated through haptic devices and surround auditory cues. This paper discusses the challenges involved in the HCI design, introduces the visual components of the interface, and presents a tentative plan of future work involving the development of realistic haptic feedback and various regional anaesthesia training scenarios.

  20. Virtual reality and cognitive rehabilitation: a review of current outcome research.

    PubMed

    Larson, Eric B; Feigon, Maia; Gagliardo, Pablo; Dvorkin, Assaf Y

    2014-01-01

    Recent advancement in the technology of virtual reality (VR) has allowed improved applications for cognitive rehabilitation. The aim of this review is to facilitate comparisons of therapeutic efficacy of different VR interventions. A systematic approach for the review of VR cognitive rehabilitation outcome research addressed the nature of each sample, treatment apparatus, experimental treatment protocol, control treatment protocol, statistical analysis and results. Using this approach, studies that provide valid evidence of efficacy of VR applications are summarized. Applications that have not yet undergone controlled outcome study but which have promise are introduced. Seventeen studies conducted over the past eight years are reviewed. The few randomized controlled trials that have been completed show that some applications are effective in treating cognitive deficits in people with neurological diagnoses although further study is needed. Innovations requiring further study include the use of enriched virtual environments that provide haptic sensory input in addition to visual and auditory inputs and the use of commercially available gaming systems to provide tele-rehabilitation services. Recommendations are offered to improve efficacy of rehabilitation, to improve scientific rigor of rehabilitation research and to broaden access to the evidence-based treatments that this research has identified.

  1. Training to Facilitate Adaptation to Novel Sensory Environments

    NASA Technical Reports Server (NTRS)

    Bloomberg, J. J.; Peters, B. T.; Mulavara, A. P.; Brady, R. A.; Batson, C. D.; Ploutz-Snyder, R. J.; Cohen, H. S.

    2010-01-01

    After spaceflight, the process of readapting to Earth's gravity causes locomotor dysfunction. We are developing a gait training countermeasure to facilitate adaptive responses in locomotor function. Our training system comprises a treadmill placed on a motion base facing a virtual visual scene, providing an unstable walking surface combined with incongruent visual flow designed to train subjects to rapidly adapt their gait patterns to changes in the sensory environment. The goal of our present study was to determine whether training improved both locomotor and dual-tasking responses to a novel sensory environment, and to quantify the retention of training. Subjects completed three 30-minute training sessions during which they walked on the treadmill while receiving discordant support-surface and visual input. Control subjects walked on the treadmill without any support-surface or visual alterations. To determine the efficacy of training, all subjects were then tested using a novel visual flow and support-surface movement not previously experienced during training. This test was performed 20 minutes, 1 week, and 1, 3, and 6 months after the final training session. Stride frequency and auditory reaction time were collected as measures of postural stability and cognitive effort, respectively. Subjects who received training showed less alteration in stride frequency and auditory reaction time compared to controls. Trained subjects maintained their level of performance over 6 months. We conclude that, with training, individuals became more proficient at walking in novel discordant sensorimotor conditions and were able to devote more attention to competing tasks.

  2. Salient Feature of Haptic-Based Guidance of People in Low Visibility Environments Using Hard Reins.

    PubMed

    Ranasinghe, Anuradha; Sornkarn, Nantachai; Dasgupta, Prokar; Althoefer, Kaspar; Penders, Jacques; Nanayakkara, Thrishantha

    2016-02-01

    This paper presents salient features of human-human interaction where one person with limited auditory and visual perception of the environment (a follower) is guided by an agent with full perceptual capabilities (a guider) via a hard rein along a given path. We investigate several salient features of the interaction between the guider and follower, such as: 1) the order of an autoregressive (AR) control policy that maps states of the follower to actions of the guider; 2) how the guider may modulate the pulling force in response to the trust level of the follower; and 3) how learning may successively apportion the responsibility of control across different muscles of the guider. Based on experimental system identification of human demonstrations from ten pairs of naive subjects, we show that guiders tend to adopt a third-order AR predictive control policy and followers tend to adopt a second-order reactive control policy. Moreover, the extracted guider's control policy was implemented and validated in human-robot interaction experiments. By modeling the follower's dynamics with a time-varying virtual damped inertial system, we found that the coefficient of virtual damping is most sensitive to the trust level of the follower. We used these experimental insights to derive a novel controller that integrates an optimal-order control policy with a push/pull force modulator responsive to the trust level of the follower, monitored using a time-varying virtual damped inertial model.
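    The order-selection step described in this abstract can be made concrete with a small numerical sketch. This is an illustration, not the authors' implementation: it fits autoregressive models of increasing order by least squares and picks the order minimising BIC, a common criterion that stands in here for the unspecified identification procedure used in the paper.

    ```python
    import numpy as np

    def fit_ar(x, p):
        """Least-squares fit of an AR(p) model x[t] = a1*x[t-1] + ... + ap*x[t-p] + e[t]."""
        X = np.column_stack([x[p - k - 1 : -k - 1] for k in range(p)])  # lagged states
        y = x[p:]
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        mse = np.mean((y - X @ coeffs) ** 2)
        return coeffs, mse

    def select_order(x, max_order=5):
        """Pick the AR order minimising BIC (an assumed criterion; the paper's
        actual identification procedure is not given in the abstract)."""
        n = len(x)
        bics = []
        for p in range(1, max_order + 1):
            _, mse = fit_ar(x, p)
            bics.append(n * np.log(mse) + p * np.log(n))
        return int(np.argmin(bics)) + 1

    # Synthetic follower-state trace from a genuinely third-order process
    rng = np.random.default_rng(0)
    x = np.zeros(2000)
    for t in range(3, 2000):
        x[t] = 0.5 * x[t-1] - 0.3 * x[t-2] + 0.2 * x[t-3] + rng.normal(0.0, 0.1)
    ```

    Running `select_order` on such a trace recovers a third-order model, mirroring the kind of evidence behind the paper's claim that guiders adopt third-order predictive policies.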

  3. The Efficacy of Virtual Reality in Treating Post-traumatic Stress Disorder in U.S. Warfighters Returning from Iraq and Afghanistan Combat Theaters

    DTIC Science & Technology

    2011-11-08

    kinesthetic VR stimuli with patient arousal responses. Treatment consisted of 10 sessions (2x/week) for 5 weeks, and a control group received structured...that provided the treatment therapist control over the visual, auditory, and kinesthetic elements experienced by the participant. The experimental...graded presentation of visual, auditory, and kinesthetic stimuli to stimulate memory recall of traumatic combat events in a safe

  4. An Introduction to 3-D Sound

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Null, Cynthia H. (Technical Monitor)

    1997-01-01

    This talk will overview the basic technologies related to the creation of virtual acoustic images and the potential of including spatial auditory displays in human-machine interfaces. Research into the perceptual error inherent in both natural and virtual spatial hearing is reviewed, since the development of improved technologies is tied to psychoacoustic research. This includes a discussion of Head-Related Transfer Function (HRTF) measurement techniques (the HRTF provides important perceptual cues within a virtual acoustic display). Many commercial applications of virtual acoustics have so far focused on games and entertainment; in this review, other types of applications are examined, including aeronautic safety, voice communications, virtual reality, and room acoustic simulation. In particular, it is argued that realistic simulation is optimized within a virtual acoustic display when head motion and reverberation cues are included within a perceptual model.

  5. Effects of Bone Vibrator Position on Auditory Spatial Perception Tasks.

    PubMed

    McBride, Maranda; Tran, Phuong; Pollard, Kimberly A; Letowski, Tomasz; McMillan, Garnett P

    2015-12-01

    This study assessed listeners' ability to localize spatially differentiated virtual audio signals delivered by bone conduction (BC) vibrators and circumaural air conduction (AC) headphones. Although the skull offers little intracranial sound wave attenuation, previous studies have demonstrated listeners' ability to localize auditory signals delivered by a pair of BC vibrators coupled to the mandibular condyle bones. The current study extended this research to other BC vibrator locations on the skull. Each participant listened to virtual audio signals originating from 16 different horizontal locations using circumaural headphones or BC vibrators placed in front of, above, or behind the listener's ears. The listener's task was to indicate the signal's perceived direction of origin. Localization accuracy with the BC front and BC top positions was comparable to that with the headphones, but responses for the BC back position were less accurate than both the headphones and BC front position. This study supports the conclusion of previous studies that listeners can localize virtual 3D signals equally well using AC and BC transducers. Based on these results, it is apparent that BC devices could be substituted for AC headphones with little to no localization performance degradation. BC headphones can be used when spatial auditory information needs to be delivered without occluding the ears. Although vibrator placement in front of the ears appears optimal from the localization standpoint, the top or back position may be acceptable from an operational standpoint or if the BC system is integrated into headgear. © 2015, Human Factors and Ergonomics Society.
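    Localization accuracy in tasks like the one above is typically scored as angular error with circular wraparound, so that confusing 350° with 10° counts as a 20° error rather than 340°. A minimal sketch follows; the 22.5° spacing of the 16 positions is an assumption, since the abstract only says "16 different horizontal locations".

    ```python
    import numpy as np

    def angular_error(perceived_deg, actual_deg):
        """Signed angular difference wrapped to [-180, 180) degrees, so a response
        of 350 deg to a target at 10 deg scores as -20 deg, not +340 deg."""
        return (np.asarray(perceived_deg) - actual_deg + 180.0) % 360.0 - 180.0

    # 16 horizontal source positions at 22.5 deg spacing (assumed layout)
    positions = np.arange(16) * 22.5

    # Mean absolute localization error over a few (response, target) pairs
    responses = np.array([10.0, 350.0, 95.0])
    targets = np.array([0.0, 0.0, 90.0])
    mae = np.mean(np.abs(angular_error(responses, targets)))
    ```

    Comparing such mean absolute errors across transducer placements (headphones vs. BC front/top/back) is one straightforward way to express "comparable" versus "less accurate" performance.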

  6. An acoustic gap between the NICU and womb: a potential risk for compromised neuroplasticity of the auditory system in preterm infants.

    PubMed

    Lahav, Amir; Skoe, Erika

    2014-01-01

    The intrauterine environment allows the fetus to begin hearing low-frequency sounds in a protected fashion, ensuring initial optimal development of the peripheral and central auditory system. However, the auditory nursery provided by the womb vanishes once the preterm newborn enters the high-frequency (HF) noisy environment of the neonatal intensive care unit (NICU). The present article draws a concerning line between auditory system development and HF noise in the NICU, which we argue is not necessarily conducive to fostering this development. Overexposure to HF noise during critical periods disrupts the functional organization of auditory cortical circuits. As a result, we theorize that the ability to tune out noise and extract acoustic information in a noisy environment may be impaired, leading to increased risks for a variety of auditory, language, and attention disorders. Additionally, HF noise in the NICU often masks human speech sounds, further limiting quality exposure to linguistic stimuli. Understanding the impact of the sound environment on the developing auditory system is an important first step in meeting the developmental demands of preterm newborns undergoing intensive care.

  7. Binaural fusion and the representation of virtual pitch in the human auditory cortex.

    PubMed

    Pantev, C; Elbert, T; Ross, B; Eulitz, C; Terhardt, E

    1996-10-01

    The auditory system derives the pitch of complex tones from the tone's harmonics. Research in psychoacoustics predicted that binaural fusion was an important feature of pitch processing. Based on neuromagnetic human data, the first neurophysiological confirmation of binaural fusion in hearing is presented. The centre of activation within the cortical tonotopic map corresponds to the location of the perceived pitch and not to the locations that are activated when the single frequency constituents are presented. This is also true when the different harmonics of a complex tone are presented dichotically. We conclude that the pitch processor includes binaural fusion to determine the particular pitch location which is activated in the auditory cortex.

  8. Human Fear Conditioning Conducted in Full Immersion 3-Dimensional Virtual Reality

    PubMed Central

    Huff, Nicole C.; Zielinski, David J.; Fecteau, Matthew E.; Brady, Rachael; LaBar, Kevin S.

    2010-01-01

    Fear conditioning is a widely used paradigm in non-human animal research to investigate the neural mechanisms underlying fear and anxiety. A major challenge in conducting conditioning studies in humans is the ability to strongly manipulate or simulate the environmental contexts that are associated with conditioned emotional behaviors. In this regard, virtual reality (VR) technology is a promising tool. Yet, adapting this technology to meet experimental constraints requires special accommodations. Here we address the methodological issues involved when conducting fear conditioning in a fully immersive 6-sided VR environment and present fear conditioning data. In the real world, traumatic events occur in complex environments that are made up of many cues, engaging all of our sensory modalities. For example, cues that form the environmental configuration include not only visual elements but also aural, olfactory, and even tactile ones. In rodent studies of fear conditioning, animals are fully immersed in a context that is rich with novel visual, tactile and olfactory cues. However, standard laboratory tests of fear conditioning in humans are typically conducted in a nondescript room in front of a flat or 2D computer screen and do not replicate the complexity of real world experiences. On the other hand, a major limitation of clinical studies aimed at reducing (extinguishing) fear and preventing relapse in anxiety disorders is that treatment occurs after participants have acquired a fear in an uncontrolled and largely unknown context. Thus the experimenters are left without information about the duration of exposure, the true nature of the stimulus, and associated background cues in the environment [1]. In the absence of this information it can be difficult to truly extinguish a fear that is both cue- and context-dependent. 
Virtual reality environments address these issues by providing the complexity of the real world and, at the same time, allowing experimenters to constrain fear conditioning and extinction parameters to yield empirical data that can suggest better treatment options and/or analyze mechanistic hypotheses. In order to test the hypothesis that fear conditioning may be richly encoded and context-specific when conducted in a fully immersive environment, we developed distinct virtual reality 3-D contexts in which participants experienced fear conditioning to virtual snakes or spiders. Auditory cues co-occurred with the CS in order to further evoke orienting responses and a feeling of "presence" in subjects [2]. Skin conductance response served as the dependent measure of fear acquisition, memory retention and extinction. PMID:20736913

  9. Auditory Confrontation Naming in Alzheimer’s Disease

    PubMed Central

    Brandt, Jason; Bakker, Arnold; Maroof, David Aaron

    2010-01-01

    Naming is a fundamental aspect of language and is virtually always assessed with visual confrontation tests. Tests of the ability to name objects by their characteristic sounds would be particularly useful in the assessment of visually impaired patients, and may be particularly sensitive in Alzheimer’s disease (AD). We developed an Auditory Naming Task, requiring the identification of the source of environmental sounds (i.e., animal calls, musical instruments, vehicles) and multiple-choice recognition of those not identified. In two separate studies, mild-to-moderate AD patients performed more poorly than cognitively normal elderly on the Auditory Naming Task. This task was also more difficult than two versions of a comparable Visual Naming Task, and correlated more highly with Mini-Mental State Exam score. Internal consistency reliability was acceptable, although ROC analysis revealed auditory naming to be slightly less successful than visual confrontation naming in discriminating AD patients from normal subjects. Nonetheless, our Auditory Naming Test may prove useful in research and clinical practice, especially with visually-impaired patients. PMID:20981630

  10. The 'F-complex' and MMN tap different aspects of deviance.

    PubMed

    Laufer, Ilan; Pratt, Hillel

    2005-02-01

    The aim was to compare the 'F(fusion)-complex' with the mismatch negativity (MMN), two components associated with automatic detection of changes in the acoustic stimulus flow. Ten right-handed adult native Hebrew speakers discriminated vowel-consonant-vowel (V-C-V) sequences /ada/ (deviant) and /aga/ (standard) in an active auditory 'oddball' task, and the brain potentials associated with performance of the task were recorded from 21 electrodes. Stimuli were generated by fusing the acoustic elements of the V-C-V sequences as follows: the base was always presented in front of the subject, and formant transitions were presented to the front, left or right in a virtual reality room. An illusion of a lateralized echo (duplex sensation) accompanied fusion of the base with the lateralized formant locations. Source current density estimates were derived for the net response to the fusion of the speech elements (F-complex) and for the MMN, using low-resolution electromagnetic tomography (LORETA). Statistical non-parametric mapping was used to estimate the current density differences between the brain sources of the F-complex and the MMN. Occipito-parietal and prefrontal regions were associated with the F-complex in all formant locations, whereas the vicinity of the supratemporal plane was bilaterally associated with the MMN, but only in the case of front-fusion (no duplex effect). The MMN is sensitive to the novelty of the auditory object in relation to other stimuli in a sequence, whereas the F-complex is sensitive to the acoustic features of the auditory object and reflects a process of matching them with target categories. The F-complex and MMN thus reflect different aspects of auditory processing in a stimulus-rich and changing environment: content analysis of the stimulus and novelty detection, respectively.

  11. Modular mechatronic system for stationary bicycles interfaced with virtual environment for rehabilitation

    PubMed Central

    2014-01-01

    Background Cycling has been used in the rehabilitation of individuals with both chronic and post-surgical conditions. Among the challenges with implementing bicycling for rehabilitation is the recruitment of both extremities, in particular when one is weaker or less coordinated. Feedback embedded in virtual reality (VR) augmented cycling may serve to address the requirement for efficacious cycling; specifically recruitment of both extremities and exercising at a high intensity. Methods In this paper a mechatronic rehabilitation bicycling system with an interactive virtual environment, called Virtual Reality Augmented Cycling Kit (VRACK), is presented. Novel hardware components embedded with sensors were implemented on a stationary exercise bicycle to monitor physiological and biomechanical parameters of participants while immersing them in an augmented reality simulation providing the user with visual, auditory and haptic feedback. This modular and adaptable system attaches to commercially-available stationary bicycle systems and interfaces with a personal computer for simulation and data acquisition processes. The complete bicycle system includes: a) handle bars based on hydraulic pressure sensors; b) pedals that monitor pedal kinematics with an inertial measurement unit (IMU) and forces on the pedals while providing vibratory feedback; c) off the shelf electronics to monitor heart rate and d) customized software for rehabilitation. Bench testing for the handle and pedal systems is presented for calibration of the sensors detecting force and angle. Results The modular mechatronic kit for exercise bicycles was tested in bench testing and human tests. Bench tests performed on the sensorized handle bars and the instrumented pedals validated the measurement accuracy of these components. Rider tests with the VRACK system focused on the pedal system and successfully monitored kinetic and kinematic parameters of the rider’s lower extremities. 
Conclusions The VRACK system, a modular virtual reality mechatronic bicycle rehabilitation system, was designed to convert most stationary bicycles into virtual reality (VR) cycles. Preliminary testing of the augmented reality bicycle system successfully demonstrated that a modular mechatronic kit can monitor and record kinetic and kinematic parameters of several riders. PMID:24902780

  12. Improvements of sound localization abilities by the facial ruff of the barn owl (Tyto alba) as demonstrated by virtual ruff removal.

    PubMed

    Hausmann, Laura; von Campenhausen, Mark; Endler, Frank; Singheiser, Martin; Wagner, Hermann

    2009-11-05

    When sound arrives at the eardrum it has already been filtered by the body, head, and outer ear. This process is mathematically described by the head-related transfer functions (HRTFs), which are characteristic for the spatial position of a sound source and for the individual ear. HRTFs in the barn owl (Tyto alba) are also shaped by the facial ruff, a specialization that alters interaural time differences (ITD), interaural intensity differences (ILD), and the frequency spectrum of the incoming sound to improve sound localization. Here we created novel stimuli to simulate the removal of the barn owl's ruff in a virtual acoustic environment, thus creating a situation similar to passive listening in other animals, and used these stimuli in behavioral tests. HRTFs were recorded from an owl before and after removal of the ruff feathers. Normal and ruff-removed conditions were created by filtering broadband noise with the HRTFs. Under normal virtual conditions, no differences in azimuthal head-turning behavior between individualized and non-individualized HRTFs were observed. The owls were able to respond differently to stimuli from the back than to stimuli from the front having the same ITD. By contrast, such a discrimination was not possible after the virtual removal of the ruff. Elevational head-turn angles were (slightly) smaller with non-individualized than with individualized HRTFs. The removal of the ruff resulted in a large decrease in elevational head-turning amplitudes. The facial ruff a) improves azimuthal sound localization by increasing the ITD range and b) improves elevational sound localization in the frontal field by introducing a shift of iso-ILD lines out of the midsagittal plane, which causes ILDs to increase with increasing stimulus elevation. The changes at the behavioral level could be related to the changes in the binaural physical parameters that occurred after the virtual removal of the ruff. 
These data provide new insights into the function of external hearing structures and open up the possibility to apply the results on autonomous agents, creation of virtual auditory environments for humans, or in hearing aids.
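The virtual-stimulus construction described above, filtering broadband noise with measured HRTFs, amounts to convolving a mono signal with a left/right pair of head-related impulse responses (HRIRs). A minimal sketch, with synthetic impulse responses standing in for measured ones (real HRIRs come from in-ear recordings):

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Place a mono signal at a virtual position by convolving it with the
    left- and right-ear head-related impulse responses for that position."""
    return np.stack([np.convolve(mono, hrir_left),
                     np.convolve(mono, hrir_right)])

# Synthetic stand-ins for measured HRIRs: a pure ITD/ILD pair with the
# right-ear signal delayed and attenuated relative to the left.
fs = 44100
itd_samples = int(0.0004 * fs)            # ~0.4 ms interaural time difference
hrir_l = np.zeros(64); hrir_l[0] = 1.0    # left ear: unit impulse
hrir_r = np.zeros(64); hrir_r[itd_samples] = 0.5  # right ear: delayed, quieter

rng = np.random.default_rng(1)
noise = rng.standard_normal(4096)         # broadband noise burst
stereo = render_binaural(noise, hrir_l, hrir_r)
```

Swapping in HRIR pairs measured before and after ruff removal is, in essence, how the "normal" and "ruff-removed" virtual conditions of the study are created.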

  13. Auditory spatial processing in Alzheimer’s disease

    PubMed Central

    Golden, Hannah L.; Nicholas, Jennifer M.; Yong, Keir X. X.; Downey, Laura E.; Schott, Jonathan M.; Mummery, Catherine J.; Crutch, Sebastian J.

    2015-01-01

    The location and motion of sounds in space are important cues for encoding the auditory world. Spatial processing is a core component of auditory scene analysis, a cognitively demanding function that is vulnerable in Alzheimer’s disease. Here we designed a novel neuropsychological battery based on a virtual space paradigm to assess auditory spatial processing in patient cohorts with clinically typical Alzheimer’s disease (n = 20) and its major variant syndrome, posterior cortical atrophy (n = 12) in relation to healthy older controls (n = 26). We assessed three dimensions of auditory spatial function: externalized versus non-externalized sound discrimination, moving versus stationary sound discrimination and stationary auditory spatial position discrimination, together with non-spatial auditory and visual spatial control tasks. Neuroanatomical correlates of auditory spatial processing were assessed using voxel-based morphometry. Relative to healthy older controls, both patient groups exhibited impairments in detection of auditory motion, and stationary sound position discrimination. The posterior cortical atrophy group showed greater impairment for auditory motion processing and the processing of a non-spatial control complex auditory property (timbre) than the typical Alzheimer’s disease group. Voxel-based morphometry in the patient cohort revealed grey matter correlates of auditory motion detection and spatial position discrimination in right inferior parietal cortex and precuneus, respectively. These findings delineate auditory spatial processing deficits in typical and posterior Alzheimer’s disease phenotypes that are related to posterior cortical regions involved in both syndromic variants and modulated by the syndromic profile of brain degeneration. 
Auditory spatial deficits contribute to impaired spatial awareness in Alzheimer’s disease and may constitute a novel perceptual model for probing brain network disintegration across the Alzheimer’s disease syndromic spectrum. PMID:25468732

  14. Virtual-reality-based attention assessment of ADHD: ClinicaVR: Classroom-CPT versus a traditional continuous performance test.

    PubMed

    Neguț, Alexandra; Jurma, Anda Maria; David, Daniel

    2017-08-01

    Virtual-reality-based assessment may be a good alternative to classical or computerized neuropsychological assessment due to increased ecological validity. ClinicaVR: Classroom-CPT (VC) is a neuropsychological test embedded in virtual reality that is designed to assess attention deficits in children with attention deficit hyperactivity disorder (ADHD) or other conditions associated with impaired attention. The present study aimed to (1) investigate the diagnostic validity of VC in comparison to a traditional continuous performance test (CPT), (2) explore the task difficulty of VC, (3) address the effect of distractors on the performance of ADHD participants and typically-developing (TD) controls, and (4) compare the two measures on cognitive absorption. A total of 33 children diagnosed with ADHD and 42 TD children, aged between 7 and 13 years, participated in the study and were tested with a traditional CPT or with VC, along with several cognitive measures and an adapted version of the Cognitive Absorption Scale. A mixed multivariate analysis of covariance (MANCOVA) revealed that the children with ADHD made fewer correct responses and more commission and omission errors than the TD children, and had slower target reaction times. The results showed significant differences between performance in the virtual environment and the traditional computerized one, with longer reaction times in virtual reality. The data analysis highlighted the negative influence of auditory distractors on attention performance in the case of the children with ADHD, but not for the TD children. Finally, the two measures did not differ on the cognitive absorption perceived by the children.

  15. A Comparison of Selective Auditory Attention Abilities in Open-Space Versus Closed Classroom Students.

    ERIC Educational Resources Information Center

    Reinertsen, Gloria M.

    A study compared performances on a test of selective auditory attention between students educated in open-space versus closed classroom environments. An open-space classroom environment was defined as having no walls separating it from hallways or other classrooms. It was hypothesized that the incidence of auditory figure-ground (ability to focus…

  16. Transcranial fluorescence imaging of auditory cortical plasticity regulated by acoustic environments in mice.

    PubMed

    Takahashi, Kuniyuki; Hishida, Ryuichi; Kubota, Yamato; Kudoh, Masaharu; Takahashi, Sugata; Shibuki, Katsuei

    2006-03-01

    Functional brain imaging using endogenous fluorescence of mitochondrial flavoprotein is useful for investigating mouse cortical activities through the intact skull, which is thin and sufficiently transparent in mice. We applied this method to investigate auditory cortical plasticity regulated by acoustic environments. Normal mice of the C57BL/6 strain, reared in various acoustic environments for at least 4 weeks after birth, were anaesthetized with urethane (1.7 g/kg, i.p.). Auditory cortical images of endogenous green fluorescence under blue light were recorded by a cooled CCD camera through the intact skull. Cortical responses elicited by tonal stimuli (5, 10 and 20 kHz) exhibited mirror-symmetrical tonotopic maps in the primary auditory cortex (AI) and anterior auditory field (AAF). Depression of auditory cortical responses, in terms of response duration, was observed in sound-deprived mice compared with naïve mice reared in a normal acoustic environment. When mice were exposed to an environmental tonal stimulus at 10 kHz for more than 4 weeks after birth, the cortical responses were potentiated in a frequency-specific manner with respect to the peak amplitude of the responses in AI, but not the size of the responsive areas. Changes in AAF were less clear than those in AI. To determine which synapses were modified by the acoustic environments, neural responses in cortical slices were investigated with endogenous fluorescence imaging. The vertical thickness of responsive areas after supragranular electrical stimulation was significantly reduced in slices obtained from sound-deprived mice. These results suggest that acoustic environments regulate the development of vertical intracortical circuits in the mouse auditory cortex.

  17. Estimating Distance in Real and Virtual Environments: Does Order Make a Difference?

    PubMed Central

    Ziemer, Christine J.; Plumert, Jodie M.; Cremer, James F.; Kearney, Joseph K.

    2010-01-01

    This investigation examined how the order in which people experience real and virtual environments influences their distance estimates. Participants made two sets of distance estimates in one of the following conditions: 1) real environment first, virtual environment second; 2) virtual environment first, real environment second; 3) real environment first, real environment second; or 4) virtual environment first, virtual environment second. In Experiment 1, participants imagined how long it would take to walk to targets in real and virtual environments. Participants’ first estimates were significantly more accurate in the real than in the virtual environment. When the second environment was the same as the first environment (real-real and virtual-virtual), participants’ second estimates were also more accurate in the real than in the virtual environment. When the second environment differed from the first environment (real-virtual and virtual-real), however, participants’ second estimates did not differ significantly across the two environments. A second experiment in which participants walked blindfolded to targets in the real environment and imagined how long it would take to walk to targets in the virtual environment replicated these results. These subtle, yet persistent order effects suggest that memory can play an important role in distance perception. PMID:19525540

  18. Call sign intelligibility improvement using a spatial auditory display

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.

    1994-01-01

    A spatial auditory display was designed to separate the multiple communication channels usually heard over one ear into different virtual auditory positions. The single 19-inch rack-mount device utilizes digital filtering algorithms to separate up to four communication channels. The filters use four different binaural transfer functions, synthesized from actual outer ear measurements, to impose localization cues on the incoming sound. Hardware design features include 'fail-safe' operation in the case of power loss, and microphone/headset interfaces to the mobile launch communication system in use at KSC. An experiment designed to verify the intelligibility advantage of the display used 130 different call signs taken from the communications protocol used at NASA KSC. A 6 to 7 dB intelligibility advantage was found when multiple channels were spatially displayed, compared to monaural listening. The findings suggest that the use of a spatial auditory display could enhance both the occupational and operational safety and efficiency of NASA operations.

  19. The effects of auditory and visual cues on timing synchronicity for robotic rehabilitation.

    PubMed

    English, Brittney A; Howard, Ayanna M

    2017-07-01

    In this paper, we explore how the integration of auditory and visual cues can help teach the timing of motor skills for the purpose of motor function rehabilitation. We conducted a study using Amazon's Mechanical Turk in which 106 participants played a virtual therapy game requiring wrist movements. To validate that our results would translate to trends that could also be observed during robotic rehabilitation sessions, we recreated this experiment with 11 participants using a robotic wrist rehabilitation system as a means to control the therapy game. During interaction with the therapy game, users were asked to learn and reconstruct a tapping sequence as defined by musical notes flashing on the screen. Participants were divided into 2 test groups: (1) control: participants only received visual cues to prompt them on the timing sequence, and (2) experimental: participants received both visual and auditory cues to prompt them on the timing sequence. To evaluate performance, the timing and length of the sequence were measured. Performance was determined by calculating the number of trials needed before the participant was able to master the specific aspect of the timing task. In the virtual experiment, the group that received visual and auditory cues mastered all aspects of the timing task faster than the visual-cue-only group (p < 0.05). This trend was also verified for participants using the robotic arm exoskeleton in the physical experiment.
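    The trials-to-mastery measure used above can be made concrete as the first trial at which performance stays within a tolerance for several consecutive trials. A hedged sketch follows; the threshold and streak length here are illustrative assumptions, since the abstract does not state the exact mastery criterion.

    ```python
    def trials_to_mastery(errors, threshold=0.1, streak=3):
        """Number of trials until `streak` consecutive trials have timing error
        below `threshold`; returns None if mastery is never reached. Both the
        threshold and the streak length are illustrative assumptions."""
        run = 0
        for i, e in enumerate(errors):
            run = run + 1 if e < threshold else 0  # reset the streak on any miss
            if run == streak:
                return i + 1
        return None
    ```

    Comparing this count between the visual-only and visual-plus-auditory groups captures the "mastered the task faster" result in a single per-participant number.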

  20. Behavioral Indications of Auditory Processing Disorders.

    ERIC Educational Resources Information Center

    Hartman, Kerry McGoldrick

    1988-01-01

    Identifies disruptive behaviors of children that may indicate central auditory processing disorders (CAPDs), perceptual handicaps of auditory discrimination or auditory memory not related to hearing ability. Outlines steps to modify the communication environment for CAPD children at home and in the classroom. (SV)

  1. Simultaneous neural and movement recording in large-scale immersive virtual environments.

    PubMed

    Snider, Joseph; Plank, Markus; Lee, Dongpyo; Poizner, Howard

    2013-10-01

    Virtual reality (VR) allows precise control and manipulation of rich, dynamic stimuli that, when coupled with on-line motion capture and neural monitoring, can provide a powerful means both of understanding brain-behavioral relations in the high dimensional world and of assessing and treating a variety of neural disorders. Here we present a system that combines state-of-the-art, fully immersive, 3D, multi-modal VR with temporally aligned electroencephalographic (EEG) recordings. The VR system is dynamic and interactive across visual, auditory, and haptic interactions, providing sight, sound, touch, and force. Crucially, it does so with simultaneous EEG recordings while subjects actively move about a 20 × 20 ft space. The overall end-to-end latency between real movement and its simulated movement in the VR is approximately 40 ms. Spatial precision of the various devices is on the order of millimeters. The temporal alignment with the neural recordings is accurate to within approximately 1 ms. This powerful combination of systems opens up a new window into brain-behavioral relations and a new means of assessment and rehabilitation of individuals with motor and other disorders.
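    As a rough sketch of the temporal-alignment problem this record describes, the following resamples a lower-rate motion-capture trace onto an EEG clock by linear interpolation. The sampling rates and the movement signal are illustrative assumptions, not the authors' recording parameters.

```python
import numpy as np

# EEG sampled at 512 Hz and motion capture at 120 Hz each carry their own
# hardware timestamps; to relate movement to neural activity, resample the
# motion trace onto the EEG clock.
fs_eeg, fs_mocap, dur = 512, 120, 2.0
t_eeg = np.arange(0, dur, 1 / fs_eeg)
t_mocap = np.arange(0, dur, 1 / fs_mocap)
hand_x = np.sin(2 * np.pi * 0.5 * t_mocap)       # stand-in for a slow reach

# Linear interpolation aligns the 120 Hz trace to every EEG sample time.
hand_x_on_eeg_clock = np.interp(t_eeg, t_mocap, hand_x)
print(hand_x_on_eeg_clock.shape == t_eeg.shape)  # prints True
```

    Real systems additionally correct for fixed pipeline latencies and clock drift between devices, which interpolation alone does not address.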

  2. Seeing the Song: Left Auditory Structures May Track Auditory-Visual Dynamic Alignment

    PubMed Central

    Mossbridge, Julia A.; Grabowecky, Marcia; Suzuki, Satoru

    2013-01-01

    Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment. PMID:24194873

  3. Note-Taking and Memory in Different Media Environments

    ERIC Educational Resources Information Center

    Lin, Lin; Bigenho, Chris

    2011-01-01

    Through this study the authors investigated undergraduate students' memory recall in three media environments with three note-taking options, following an A x B design with nine experiments. The three environments included no-distraction, auditory-distraction, and auditory-visual-distraction; while the three note-taking options included…

  4. Feature Assignment in Perception of Auditory Figure

    ERIC Educational Resources Information Center

    Gregg, Melissa K.; Samuel, Arthur G.

    2012-01-01

    Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory…

  5. How age and linguistic competence alter the interplay of perceptual and cognitive factors when listening to conversations in a noisy environment

    PubMed Central

    Avivi-Reich, Meital; Daneman, Meredyth; Schneider, Bruce A.

    2013-01-01

    Multi-talker conversations challenge the perceptual and cognitive capabilities of older adults and those listening in their second language (L2). In older adults these difficulties could reflect declines in the auditory, cognitive, or linguistic processes supporting speech comprehension. The tendency of L2 listeners to invoke some of the semantic and syntactic processes from their first language (L1) may interfere with speech comprehension in L2. These challenges might also force them to reorganize the ways in which they perceive and process speech, thereby altering the balance between the contributions of bottom-up vs. top-down processes to speech comprehension. Younger and older L1s as well as young L2s listened to conversations played against a babble background, with or without spatial separation between the talkers and masker, when the spatial positions of the stimuli were specified either by loudspeaker placements (real location), or through use of the precedence effect (virtual location). After listening to a conversation, the participants were asked to answer questions regarding its content. Individual hearing differences were compensated for by creating the same degree of difficulty in identifying individual words in babble. Once compensation was applied, the number of questions correctly answered increased when a real or virtual spatial separation was introduced between babble and talkers. There was no evidence that performance differed between real and virtual locations. The contribution of vocabulary knowledge to dialog comprehension was found to be larger in the virtual conditions than in the real whereas the contribution of reading comprehension skill did not depend on the listening environment but rather differed as a function of age and language proficiency. The results indicate that the acoustic scene and the cognitive and linguistic competencies of listeners modulate how and when top-down resources are engaged in aid of speech comprehension. PMID:24578684

  6. How age and linguistic competence alter the interplay of perceptual and cognitive factors when listening to conversations in a noisy environment.

    PubMed

    Avivi-Reich, Meital; Daneman, Meredyth; Schneider, Bruce A

    2014-01-01

    Multi-talker conversations challenge the perceptual and cognitive capabilities of older adults and those listening in their second language (L2). In older adults these difficulties could reflect declines in the auditory, cognitive, or linguistic processes supporting speech comprehension. The tendency of L2 listeners to invoke some of the semantic and syntactic processes from their first language (L1) may interfere with speech comprehension in L2. These challenges might also force them to reorganize the ways in which they perceive and process speech, thereby altering the balance between the contributions of bottom-up vs. top-down processes to speech comprehension. Younger and older L1s as well as young L2s listened to conversations played against a babble background, with or without spatial separation between the talkers and masker, when the spatial positions of the stimuli were specified either by loudspeaker placements (real location), or through use of the precedence effect (virtual location). After listening to a conversation, the participants were asked to answer questions regarding its content. Individual hearing differences were compensated for by creating the same degree of difficulty in identifying individual words in babble. Once compensation was applied, the number of questions correctly answered increased when a real or virtual spatial separation was introduced between babble and talkers. There was no evidence that performance differed between real and virtual locations. The contribution of vocabulary knowledge to dialog comprehension was found to be larger in the virtual conditions than in the real whereas the contribution of reading comprehension skill did not depend on the listening environment but rather differed as a function of age and language proficiency. The results indicate that the acoustic scene and the cognitive and linguistic competencies of listeners modulate how and when top-down resources are engaged in aid of speech comprehension.

  7. Linking prenatal experience to the emerging musical mind.

    PubMed

    Ullal-Gupta, Sangeeta; Vanden Bosch der Nederlanden, Christina M; Tichko, Parker; Lahav, Amir; Hannon, Erin E

    2013-09-03

    The musical brain is built over time through experience with a multitude of sounds in the auditory environment. However, learning the melodies, timbres, and rhythms unique to the music and language of one's culture begins already within the mother's womb during the third trimester of human development. We review evidence that the intrauterine auditory environment plays a key role in shaping later auditory development and musical preferences. We describe evidence that externally and internally generated sounds influence the developing fetus, and argue that such prenatal auditory experience may set the trajectory for the development of the musical mind.

  8. Using visuo-kinetic virtual reality to induce illusory spinal movement: the MoOVi Illusion

    PubMed Central

    Smith, Ross T.; Hunter, Estin V.; Davis, Miles G.; Sterling, Michele; Moseley, G. Lorimer

    2017-01-01

    Background Illusions that alter perception of the body provide novel opportunities to target brain-based contributions to problems such as persistent pain. One example of this, mirror therapy, uses vision to augment perceived movement of a painful limb to treat pain. Since mirrors can’t be used to induce augmented neck or other spinal movement, we aimed to test whether such an illusion could be achieved using virtual reality, in advance of testing its potential therapeutic benefit. We hypothesised that perceived head rotation would depend on visually suggested movement. Method In a within-subjects repeated measures experiment, 24 healthy volunteers performed neck movements to 50° of rotation, while a virtual reality system delivered corresponding visual feedback that was offset by a factor of 50%–200%—the Motor Offset Visual Illusion (MoOVi)—thus simulating more or less movement than that actually occurring. At 50° of real-world head rotation, participants pointed in the direction that they perceived they were facing. The discrepancy between actual and perceived direction was measured and compared between conditions. The impact of including multisensory (auditory and visual) feedback, the presence of a virtual body reference, and the use of 360° immersive virtual reality with and without three-dimensional properties, was also investigated. Results Perception of head movement was dependent on visual-kinaesthetic feedback (p = 0.001, partial eta squared = 0.17). That is, altered visual feedback caused a kinaesthetic drift in the direction of the visually suggested movement. The magnitude of the drift was not moderated by secondary variables such as the addition of illusory auditory feedback, the presence of a virtual body reference, or three-dimensionality of the scene. Discussion Virtual reality can be used to augment perceived movement and body position, such that one can perform a small movement, yet perceive a large one. The MoOVi technique tested here has clear potential for assessment and therapy of people with spinal pain. PMID:28243537

  9. Using visuo-kinetic virtual reality to induce illusory spinal movement: the MoOVi Illusion.

    PubMed

    Harvie, Daniel S; Smith, Ross T; Hunter, Estin V; Davis, Miles G; Sterling, Michele; Moseley, G Lorimer

    2017-01-01

    Illusions that alter perception of the body provide novel opportunities to target brain-based contributions to problems such as persistent pain. One example of this, mirror therapy, uses vision to augment perceived movement of a painful limb to treat pain. Since mirrors can't be used to induce augmented neck or other spinal movement, we aimed to test whether such an illusion could be achieved using virtual reality, in advance of testing its potential therapeutic benefit. We hypothesised that perceived head rotation would depend on visually suggested movement. In a within-subjects repeated measures experiment, 24 healthy volunteers performed neck movements to 50° of rotation, while a virtual reality system delivered corresponding visual feedback that was offset by a factor of 50%-200%-the Motor Offset Visual Illusion (MoOVi)-thus simulating more or less movement than that actually occurring. At 50° of real-world head rotation, participants pointed in the direction that they perceived they were facing. The discrepancy between actual and perceived direction was measured and compared between conditions. The impact of including multisensory (auditory and visual) feedback, the presence of a virtual body reference, and the use of 360° immersive virtual reality with and without three-dimensional properties, was also investigated. Perception of head movement was dependent on visual-kinaesthetic feedback (p = 0.001, partial eta squared = 0.17). That is, altered visual feedback caused a kinaesthetic drift in the direction of the visually suggested movement. The magnitude of the drift was not moderated by secondary variables such as the addition of illusory auditory feedback, the presence of a virtual body reference, or three-dimensionality of the scene. Virtual reality can be used to augment perceived movement and body position, such that one can perform a small movement, yet perceive a large one. The MoOVi technique tested here has clear potential for assessment and therapy of people with spinal pain.

  10. Absence of modulatory action on haptic height perception with musical pitch

    PubMed Central

    Geronazzo, Michele; Avanzini, Federico; Grassi, Massimo

    2015-01-01

    Although acoustic frequency is not a spatial property of physical objects, in common language, pitch, i.e., the psychological correlate of frequency, is often labeled spatially (i.e., “high in pitch” or “low in pitch”). Pitch-height is known to modulate (and interact with) the response of participants when they are asked to judge spatial properties of non-auditory stimuli (e.g., visual) in a variety of behavioral tasks. In the current study we investigated whether the modulatory action of pitch-height extended to the haptic estimation of height of a virtual step. We implemented a hardware/software setup able to render virtual 3D objects (stair-steps) haptically through a PHANTOM device, and to provide real-time continuous auditory feedback depending on the user's interaction with the object. The haptic exploration was associated with a sinusoidal tone whose pitch varied as a function of the interaction point's height within (i) a narrower and (ii) a wider pitch range, or (iii) a random pitch variation acting as a control audio condition. Explorations were also performed with no sound (haptic only). Participants were instructed to explore the virtual step freely, and to communicate height estimation by opening their thumb and index finger to mimic the step riser height, or verbally by reporting the height in centimeters of the step riser. We analyzed the role of musical expertise by dividing participants into non-musicians and musicians. Results showed no effects of musical pitch on high-realistic haptic feedback. Overall there is no difference between the two groups in the proposed multimodal conditions. Additionally, we observed a different haptic response distribution between musicians and non-musicians when estimations of the auditory conditions are matched with estimations in the no sound condition. PMID:26441745

  11. Improving the Performance of an Auditory Brain-Computer Interface Using Virtual Sound Sources by Shortening Stimulus Onset Asynchrony

    PubMed Central

    Sugi, Miho; Hagimoto, Yutaka; Nambu, Isao; Gonzalez, Alejandro; Takei, Yoshinori; Yano, Shohei; Hokari, Haruhide; Wada, Yasuhiro

    2018-01-01

    Recently, a brain-computer interface (BCI) using virtual sound sources has been proposed for estimating user intention via electroencephalogram (EEG) in an oddball task. However, its performance is still insufficient for practical use. In this study, we examine the impact that shortening the stimulus onset asynchrony (SOA) has on this auditory BCI. While a very short SOA might improve its performance, sound perception and task performance become difficult, and event-related potentials (ERPs) may not be induced if the SOA is too short. Therefore, we carried out behavioral and EEG experiments to determine the optimal SOA. In the experiments, participants were instructed to direct attention to one of six virtual sounds (target direction). We used eight different SOA conditions: 200, 300, 400, 500, 600, 700, 800, and 1,100 ms. In the behavioral experiment, we recorded participant behavioral responses to target direction and evaluated recognition performance of the stimuli. In all SOA conditions, recognition accuracy was over 85%, indicating that participants could recognize the target stimuli correctly. Next, using a silent counting task in the EEG experiment, we found significant differences between target and non-target sound directions in all but the 200-ms SOA condition. When we calculated an identification accuracy using Fisher discriminant analysis (FDA), the SOA could be shortened to 400 ms without decreasing the identification accuracies. Thus, improvements in performance (evaluated by BCI utility) could be achieved. On average, higher BCI utilities were obtained in the 400- and 500-ms SOA conditions. Thus, auditory BCI performance can be optimized for both behavioral and neurophysiological responses by shortening the SOA. PMID:29535602
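    The classification step named in this record, Fisher discriminant analysis, can be sketched on simulated ERP features as follows. The feature dimensions, effect size, and midpoint thresholding rule are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated ERP feature vectors (e.g., mean amplitudes in post-stimulus
# windows): target epochs carry an attention-related offset, non-targets do not.
n, d = 200, 8
nontarget = rng.normal(0.0, 1.0, (n, d))
target = rng.normal(0.0, 1.0, (n, d)) + 1.5

# Fisher discriminant: the projection w maximizes between-class scatter
# relative to within-class scatter.
mu0, mu1 = nontarget.mean(0), target.mean(0)
Sw = np.cov(nontarget, rowvar=False) + np.cov(target, rowvar=False)
w = np.linalg.solve(Sw, mu1 - mu0)

# Classify by projecting onto w and thresholding at the class-mean midpoint.
threshold = w @ (mu0 + mu1) / 2
pred_t = target @ w > threshold
pred_n = nontarget @ w > threshold
accuracy = (pred_t.sum() + (~pred_n).sum()) / (2 * n)
print(round(accuracy, 2))
```

    In an oddball BCI, the direction whose averaged epochs score highest on this discriminant would be taken as the attended target.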

  12. Using smartphone technology to deliver a virtual pedestrian environment: usability and validation.

    PubMed

    Schwebel, David C; Severson, Joan; He, Yefei

    2017-09-01

    Various programs effectively teach children to cross streets more safely, but all are labor- and cost-intensive. Recent developments in mobile phone technology offer opportunity to deliver virtual reality pedestrian environments to mobile smartphone platforms. Such an environment may offer a cost- and labor-effective strategy to teach children to cross streets safely. This study evaluated usability, feasibility, and validity of a smartphone-based virtual pedestrian environment. A total of 68 adults completed 12 virtual crossings within each of two virtual pedestrian environments, one delivered by smartphone and the other a semi-immersive kiosk virtual environment. Participants completed self-report measures of perceived realism and simulator sickness experienced in each virtual environment, plus self-reported demographic and personality characteristics. All participants followed system instructions and used the smartphone-based virtual environment without difficulty. No significant simulator sickness was reported or observed. Users rated the smartphone virtual environment as highly realistic. Convergent validity was detected, with many aspects of pedestrian behavior in the smartphone-based virtual environment matching behavior in the kiosk virtual environment. Anticipated correlations between personality and kiosk virtual reality pedestrian behavior emerged for the smartphone-based system. A smartphone-based virtual environment can be usable and valid. Future research should develop and evaluate such a training system.

  13. Filling-in visual motion with sounds.

    PubMed

    Väljamäe, A; Soto-Faraco, S

    2008-10-01

    Information about the motion of objects can be extracted by multiple sensory modalities, and, as a consequence, object motion perception typically involves the integration of multi-sensory information. Often, in naturalistic settings, the flow of such information can be rather discontinuous (e.g. a cat racing through the furniture in a cluttered room is partly seen and partly heard). This study addressed audio-visual interactions in the perception of time-sampled object motion by measuring adaptation after-effects. We found significant auditory after-effects following adaptation to unisensory auditory and visual motion in depth, sampled at 12.5 Hz. The visually induced (cross-modal) auditory motion after-effect was eliminated if visual adaptors flashed at half of the rate (6.25 Hz). Remarkably, the addition of the high-rate acoustic flutter (12.5 Hz) to this ineffective, sparsely time-sampled, visual adaptor restored the auditory after-effect to a level comparable to what was seen with high-rate bimodal adaptors (flashes and beeps). Our results suggest that this auditory-induced reinstatement of the motion after-effect from the poor visual signals resulted from the occurrence of sound-induced illusory flashes. This effect was found to be dependent both on the directional congruency between modalities and on the rate of auditory flutter. The auditory filling-in of time-sampled visual motion supports the feasibility of using reduced frame rate visual content in multisensory broadcasting and virtual reality applications.

  14. AULA-Advanced Virtual Reality Tool for the Assessment of Attention: Normative Study in Spain.

    PubMed

    Iriarte, Yahaira; Diaz-Orueta, Unai; Cueto, Eduardo; Irazustabarrena, Paula; Banterla, Flavio; Climent, Gema

    2016-06-01

    The present study describes how normative data were obtained for the AULA test, a virtual reality tool designed to evaluate attention problems, especially in children and adolescents. The normative sample comprised 1,272 participants (48.2% female) with an age range from 6 to 16 years (M = 10.25, SD = 2.83). The AULA test presents both visual and auditory stimuli, while randomized distractors of an ecological nature appear progressively. Variables provided by AULA were clustered in different categories for their posterior analysis. Differences by age and gender were analyzed, resulting in 14 groups, 7 per sex. Differences between visual and auditory attention were also obtained. The normative data obtained are relevant for the use of AULA for evaluating attention in Spanish children and adolescents in a more ecological way. Further studies will be needed to determine the sensitivity and specificity of AULA for measuring attention in different clinical populations. (J. of Att. Dis. 2016; 20(6) 542-568).

  15. Spatial Cues Provided by Sound Improve Postural Stabilization: Evidence of a Spatial Auditory Map?

    PubMed Central

    Gandemer, Lennie; Parseihian, Gaetan; Kronland-Martinet, Richard; Bourdin, Christophe

    2017-01-01

    It has long been suggested that sound plays a role in the postural control process. Few studies however have explored sound and posture interactions. The present paper focuses on the specific impact of audition on posture, seeking to determine the attributes of sound that may be useful for postural purposes. We investigated the postural sway of young, healthy blindfolded subjects in two experiments involving different static auditory environments. In the first experiment, we compared the effect on sway of a simple environment built from three static sound sources in two different rooms: a normal vs. an anechoic room. In the second experiment, the same auditory environment was enriched in various ways, including the ambisonics synthesis of an immersive environment, and subjects stood on two different surfaces: a foam vs. a normal surface. The results of both experiments suggest that the spatial cues provided by sound can be used to improve postural stability. The richer the auditory environment, the better this stabilization. We interpret these results by invoking the “spatial hearing map” theory: listeners build their own mental representation of their surrounding environment, which provides them with spatial landmarks that help them to better stabilize. PMID:28694770
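    The ambisonics synthesis mentioned in this record encodes each source direction into spherical-harmonic channels. A minimal first-order (B-format) encoder is sketched below, assuming the traditional W-channel weighting of 1/sqrt(2); the record does not specify which convention the authors used.

```python
import math

def encode_first_order(sample, azimuth, elevation):
    """Encode one mono sample arriving from (azimuth, elevation), in
    radians, into first-order ambisonic B-format channels (W, X, Y, Z).
    W carries the omnidirectional component with a 1/sqrt(2) weighting."""
    w = sample / math.sqrt(2)
    x = sample * math.cos(azimuth) * math.cos(elevation)
    y = sample * math.sin(azimuth) * math.cos(elevation)
    z = sample * math.sin(elevation)
    return (w, x, y, z)

# A source directly ahead (azimuth 0, elevation 0) excites W and X only;
# a source at azimuth +90 degrees excites W and Y only.
front = encode_first_order(1.0, 0.0, 0.0)
left = encode_first_order(1.0, math.pi / 2, 0.0)
print(front, left)
```

    Summing the encoded channels of several such sources yields a single B-format scene, which a decoder then maps to the loudspeaker array surrounding the listener.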

  16. The what, where and how of auditory-object perception.

    PubMed

    Bizley, Jennifer K; Cohen, Yale E

    2013-10-01

    The fundamental perceptual unit in hearing is the 'auditory object'. Similar to visual objects, auditory objects are the computational result of the auditory system's capacity to detect, extract, segregate and group spectrotemporal regularities in the acoustic environment; the multitude of acoustic stimuli around us together form the auditory scene. However, unlike the visual scene, resolving the component objects within the auditory scene crucially depends on their temporal structure. Neural correlates of auditory objects are found throughout the auditory system. However, neural responses do not become correlated with a listener's perceptual reports until the level of the cortex. The roles of different neural structures and the contribution of different cognitive states to the perception of auditory objects are not yet fully understood.

  17. The what, where and how of auditory-object perception

    PubMed Central

    Bizley, Jennifer K.; Cohen, Yale E.

    2014-01-01

    The fundamental perceptual unit in hearing is the ‘auditory object’. Similar to visual objects, auditory objects are the computational result of the auditory system's capacity to detect, extract, segregate and group spectrotemporal regularities in the acoustic environment; the multitude of acoustic stimuli around us together form the auditory scene. However, unlike the visual scene, resolving the component objects within the auditory scene crucially depends on their temporal structure. Neural correlates of auditory objects are found throughout the auditory system. However, neural responses do not become correlated with a listener's perceptual reports until the level of the cortex. The roles of different neural structures and the contribution of different cognitive states to the perception of auditory objects are not yet fully understood. PMID:24052177

  18. Strategies for Analyzing Tone Languages

    ERIC Educational Resources Information Center

    Coupe, Alexander R.

    2014-01-01

    This paper outlines a method of auditory and acoustic analysis for determining the tonemes of a language starting from scratch, drawing on the author's experience of recording and analyzing tone languages of north-east India. The methodology is applied to a preliminary analysis of tone in the Thang dialect of Khiamniungan, a virtually undocumented…

  19. Visual-Auditory Integration during Speech Imitation in Autism

    ERIC Educational Resources Information Center

    Williams, Justin H. G.; Massaro, Dominic W.; Peel, Natalie J.; Bosseler, Alexis; Suddendorf, Thomas

    2004-01-01

    Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional "mirror neuron" systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a "virtual" head (Baldi), delivered speech stimuli for…

  20. Mastoid Cavity Dimensions and Shape: Method of Measurement and Virtual Fitting of Implantable Devices

    PubMed Central

    Handzel, Ophir; Wang, Haobing; Fiering, Jason; Borenstein, Jeffrey T.; Mescher, Mark J.; Leary Swan, Erin E.; Murphy, Brian A.; Chen, Zhiqiang; Peppi, Marcello; Sewell, William F.; Kujawa, Sharon G.; McKenna, Michael J.

    2009-01-01

    Temporal bone implants can be used to electrically stimulate the auditory nerve, to amplify sound, to deliver drugs to the inner ear and potentially for other future applications. The implants require storage space and access to the middle or inner ears. The most acceptable space is the cavity created by a canal wall up mastoidectomy. Detailed knowledge of the available space for implantation and pathways to access the middle and inner ears is necessary for the design of implants and successful implantation. Based on temporal bone CT scans, a method for three-dimensional reconstruction of a virtual canal wall up mastoidectomy space is described. Using Amira® software the area to be removed during such surgery is marked on axial CT slices, and a three-dimensional model of that space is created. The average volume of 31 reconstructed models is 12.6 cm³ with a standard deviation of 3.69 cm³, ranging from 7.97 to 23.25 cm³. Critical distances were measured directly from the model and their averages were calculated: height 3.69 cm, depth 2.43 cm, length above the external auditory canal (EAC) 4.45 cm and length posterior to EAC 3.16 cm. These linear measurements did not correlate well with volume measurements. The shape of the models was variable to a significant extent, making the prediction of successful implantation for a given design based on linear and volumetric measurement unreliable. Hence, to assure successful implantation, preoperative assessment should include a virtual fitting of an implant into the intended storage space. The above-mentioned three-dimensional models were exported from Amira to a Solidworks application where virtual fitting was performed. Our results are compared to other temporal bone implant virtual fitting studies. Virtual fitting has been suggested for other human applications. PMID:19372649

  1. The plastic ear and perceptual relearning in auditory spatial perception

    PubMed Central

    Carlile, Simon

    2014-01-01

    The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear molds to modify the spectral cues, resulting in significant degradation in localization performance. Following chronic exposure (10–60 days) performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This raises the question of what teaching signal drives this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state in auditory localization, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear molds or through virtual auditory space stimulation using non-individualized spectral cues. The work with ear molds demonstrates that a relatively short period of training involving audio-motor feedback (5–10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide spatial cues but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses. PMID:25147497

  2. Can You Hear That Peak? Utilization of Auditory and Visual Feedback at Peak Limb Velocity.

    PubMed

    Loria, Tristan; de Grosbois, John; Tremblay, Luc

    2016-09-01

    At rest, the central nervous system combines and integrates multisensory cues to yield an optimal percept. When engaging in action, the relative weighing of sensory modalities has been shown to be altered. Because the timing of peak velocity is the critical moment in some goal-directed movements (e.g., overarm throwing), the current study sought to test whether visual and auditory cues are optimally integrated at that specific kinematic marker when it is the critical part of the trajectory. Participants performed an upper-limb movement in which they were required to reach their peak limb velocity when the right index finger intersected a virtual target (i.e., a flinging movement). Brief auditory, visual, or audiovisual feedback (i.e., 20 ms in duration) was provided to participants at peak limb velocity. Performance was assessed primarily through the resultant position of peak limb velocity and the variability of that position. Relative to when no feedback was provided, auditory feedback significantly reduced the resultant endpoint variability of the finger position at peak limb velocity. However, no such reductions were found for the visual or audiovisual feedback conditions. Further, providing both auditory and visual cues concurrently also failed to yield the theoretically predicted improvements in endpoint variability. Overall, the central nervous system can make significant use of an auditory cue but may not optimally integrate a visual and auditory cue at peak limb velocity, when peak velocity is the critical part of the trajectory.

  3. Theoretical Limitations on Functional Imaging Resolution in Auditory Cortex

    PubMed Central

    Chen, Thomas L.; Watkins, Paul V.; Barbour, Dennis L.

    2010-01-01

    Functional imaging can reveal detailed organizational structure in cerebral cortical areas, but neuronal response features and local neural interconnectivity can influence the resulting images, possibly limiting the inferences that can be drawn about neural function. Discerning the fundamental principles of organizational structure in the auditory cortex of multiple species has been somewhat challenging historically both with functional imaging and with electrophysiology. A possible limitation affecting any methodology using pooled neuronal measures may be the relative distribution of response selectivity throughout the population of auditory cortex neurons. One neuronal response type inherited from the cochlea, for example, exhibits a receptive field that increases in size (i.e., decreases in selectivity) at higher stimulus intensities. Even though these neurons appear to represent a minority of auditory cortex neurons, they are likely to contribute disproportionately to the activity detected in functional images, especially if intense sounds are used for stimulation. To evaluate the potential influence of neuronal subpopulations upon functional images of primary auditory cortex, a model array representing cortical neurons was probed with virtual imaging experiments under various assumptions about the local circuit organization. As expected, different neuronal subpopulations were activated preferentially under different stimulus conditions. In fact, stimulus protocols that can preferentially excite selective neurons, resulting in a relatively sparse activation map, have the potential to improve the effective resolution of functional auditory cortical images. These experimental results also make predictions about auditory cortex organization that can be tested with refined functional imaging experiments. PMID:20079343
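    The pooled-response argument above can be sketched numerically. In this toy model (all names, bandwidth constants, and units are invented for illustration, not taken from the paper), a small minority of neurons whose receptive fields broaden with stimulus intensity comes to dominate a summed, voxel-like signal at high sound levels:

```python
def neuron_response(pref_freq, selective, freq, level):
    """Firing rate of a model neuron (arbitrary units).

    Selective neurons keep a narrow receptive field at all levels;
    non-selective ones broaden (respond to more frequencies) as the
    stimulus level rises -- the behavior described in the abstract.
    """
    # Bandwidth in octaves; the 0.02/level slope is purely illustrative.
    bandwidth = 0.1 if selective else 0.1 + 0.02 * level
    dist = abs(pref_freq - freq)
    return level * (1 - dist / bandwidth) if dist < bandwidth else 0.0

def pooled_response(neurons, freq, level):
    """Sum of responses, mimicking a voxel-level functional-imaging signal.

    `neurons` is a list of (preferred_frequency, is_selective) pairs.
    """
    return sum(neuron_response(pf, sel, freq, level) for pf, sel in neurons)
```

    Probing such an array at low versus high levels illustrates why quieter, selectivity-preserving stimulus protocols could yield sparser activation maps.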

  4. The harmonic organization of auditory cortex.

    PubMed

    Wang, Xiaoqin

    2013-12-17

    A fundamental structure of sounds encountered in the natural environment is harmonicity. Harmonicity is an essential component of music found in all cultures. It is also a unique feature of vocal communication sounds such as human speech and animal vocalizations. Harmonics in sounds are produced by a variety of acoustic generators and reflectors in the natural environment, including the vocal apparatuses of humans and other animal species as well as musical instruments of many types. We live in an acoustic world full of harmonicity. Given its widespread presence in many aspects of the hearing environment, it is natural to expect harmonicity to be reflected in the evolution and development of the auditory systems of both humans and animals, in particular the auditory cortex. Recent neuroimaging and neurophysiology experiments have identified regions of non-primary auditory cortex in humans and non-human primates that have selective responses to harmonic pitches. Accumulating evidence has also shown that neurons in many regions of the auditory cortex exhibit characteristic responses to harmonically related frequencies beyond the range of pitch. Together, these findings suggest that a fundamental organizational principle of auditory cortex is based on harmonicity. Such an organization likely plays an important role in music processing by the brain. It may also form the basis of the preference for particular classes of music and voice sounds.

  5. Music From the Very Beginning-A Neuroscience-Based Framework for Music as Therapy for Preterm Infants and Their Parents.

    PubMed

    Haslbeck, Friederike Barbara; Bassler, Dirk

    2018-01-01

    Human and animal studies demonstrate that early auditory experiences influence brain development. The findings are particularly crucial following preterm birth as the plasticity of auditory regions, and cortex development are heavily dependent on the quality of auditory stimulation. Brain maturation in preterm infants may be affected among other things by the overwhelming auditory environment of the neonatal intensive care unit (NICU). Conversely, auditory deprivation, (e.g., the lack of the regular intrauterine rhythms of the maternal heartbeat and the maternal voice) may also have an impact on brain maturation. Therefore, a nurturing enrichment of the auditory environment for preterm infants is warranted. Creative music therapy (CMT) addresses these demands by offering infant-directed singing in lullaby-style that is continually adapted to the neonate's needs. The therapeutic approach is tailored to the individual developmental stage, entrained to the breathing rhythm, and adapted to the subtle expressions of the newborn. Not only the therapist and the neonate but also the parents play a role in CMT. In this article, we describe how to apply music therapy in a neonatal intensive care environment to support very preterm infants and their families. We speculate that the enriched musical experience may promote brain development and we critically discuss the available evidence in support of our assumption.

  6. Motor learning from virtual reality to natural environments in individuals with Duchenne muscular dystrophy.

    PubMed

    Quadrado, Virgínia Helena; Silva, Talita Dias da; Favero, Francis Meire; Tonks, James; Massetti, Thais; Monteiro, Carlos Bandeira de Mello

    2017-11-10

    To examine whether performance improvements in a virtual environment generalize to the natural environment, we studied 64 individuals: 32 with DMD and 32 who were typically developing. The groups practiced two coincidence timing tasks. In the more tangible button-press task, the individuals were required to 'intercept' a falling virtual object at the moment it reached the interception point by pressing a key on the computer. In the more abstract task, they were instructed to 'intercept' the virtual object by making a hand movement in a virtual environment using a webcam. For individuals with DMD, conducting a coincidence timing task in a virtual environment facilitated transfer to the real environment. However, we emphasize that a task practiced in a virtual environment should have a higher level of difficulty than a task practiced in a real environment. IMPLICATIONS FOR REHABILITATION Virtual environments can be used to promote improved performance in 'real-world' environments. Virtual environments offer the opportunity to create paradigms similar to 'real-life' tasks; however, task complexity and difficulty levels can be manipulated, graded and enhanced to increase the likelihood of success in transfer of learning and performance. Individuals with DMD, in particular, showed immediate performance benefits after using virtual reality.

  7. Lean on Wii: physical rehabilitation with virtual reality Wii peripherals.

    PubMed

    Anderson, Fraser; Annett, Michelle; Bischof, Walter F

    2010-01-01

    In recent years, a growing number of occupational therapists have integrated video game technologies, such as the Nintendo Wii, into rehabilitation programs. 'Wiihabilitation', or the use of the Wii in rehabilitation, has been successful in increasing patients' motivation and encouraging full body movement. The non-rehabilitative focus of Wii applications, however, presents a number of problems: games are too difficult for patients, they mainly target upper-body gross motor functions, and they lack support for task customization, grading, and quantitative measurements. To overcome these problems, we have designed a low-cost, virtual-reality based system. Our system, Virtual Wiihab, records performance and behavioral measurements, allows for activity customization, and uses auditory, visual, and haptic elements to provide extrinsic feedback and motivation to patients.

  8. Auditory Environment Across the Life Span of Cochlear Implant Users: Insights From Data Logging.

    PubMed

    Busch, Tobias; Vanpoucke, Filiep; van Wieringen, Astrid

    2017-05-24

    We describe the natural auditory environment of people with cochlear implants (CIs), how it changes across the life span, and how it varies between individuals. We performed a retrospective cross-sectional analysis of Cochlear Nucleus 6 CI sound-processor data logs. The logs were obtained from 1,501 people with CIs (ages 0-96 years). They covered over 2.4 million hr of implant use and indicated how much time the CI users had spent in various acoustical environments. We investigated exposure to spoken language, noise, music, and quiet, and analyzed variation between age groups, users, and countries. CI users spent a substantial part of their daily life in noisy environments. As a consequence, most speech was presented in background noise. We found significant differences between age groups for all auditory scenes. Yet even within the same age group and country, variability between individuals was substantial. Regardless of their age, people with CIs face challenging acoustical environments in their daily life. Our results underline the importance of supporting them with assistive listening technology. Moreover, we found large differences between individuals' auditory diets that might contribute to differences in rehabilitation outcomes. Their causes and effects should be investigated further.
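    The kind of aggregation reported here, converting hours logged per auditory scene into exposure proportions, can be sketched as follows. The scene labels and the (scene, hours) log format are hypothetical stand-ins; the actual Nucleus data-log format is not public:

```python
from collections import defaultdict

def scene_proportions(log_entries):
    """Aggregate sound-processor data-log entries into the share of
    listening time spent in each auditory scene.

    `log_entries` is an iterable of (scene, hours) pairs, e.g. the
    output of a CI processor's scene classifier (hypothetical format).
    """
    totals = defaultdict(float)
    for scene, hours in log_entries:
        totals[scene] += hours
    grand_total = sum(totals.values())
    return {scene: hours / grand_total for scene, hours in totals.items()}
```

    Comparing such proportions across age groups would reproduce the article's basic analysis, e.g. the share of each day spent with speech in background noise.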

  9. Fit for the frontline? A focus group exploration of auditory tasks carried out by infantry and combat support personnel.

    PubMed

    Bevis, Zoe L; Semeraro, Hannah D; van Besouw, Rachel M; Rowan, Daniel; Lineton, Ben; Allsopp, Adrian J

    2014-01-01

    In order to preserve their operational effectiveness and ultimately their survival, military personnel must be able to detect important acoustic signals and maintain situational awareness. The possession of sufficient hearing ability to perform job-specific auditory tasks is defined as auditory fitness for duty (AFFD). Pure tone audiometry (PTA) is used to assess AFFD in the UK military; however, it is unclear whether PTA is able to accurately predict performance on job-specific auditory tasks. The aim of the current study was to gather information about auditory tasks carried out by infantry personnel on the frontline and the environment these tasks are performed in. The study consisted of 16 focus group interviews with an average of five participants per group. Eighty British army personnel were recruited from five infantry regiments. The focus group guideline included seven open-ended questions designed to elicit information about the auditory tasks performed on operational duty. Content analysis of the data resulted in two main themes: (1) the auditory tasks personnel are expected to perform and (2) situations where personnel felt their hearing ability was reduced. Auditory tasks were divided into subthemes of sound detection, speech communication and sound localization. Reasons for reduced performance included background noise, hearing protection and attention difficulties. The current study provided an important and novel insight into the complex auditory environment experienced by British infantry personnel and identified 17 auditory tasks carried out by personnel on operational duties. These auditory tasks will be used to inform the development of a functional AFFD test for infantry personnel.

  10. Neural correlates of auditory scene analysis and perception

    PubMed Central

    Cohen, Yale E.

    2014-01-01

    The auditory system is designed to transform acoustic information from low-level sensory representations into perceptual representations. These perceptual representations are the computational result of the auditory system's ability to group and segregate spectral, spatial and temporal regularities in the acoustic environment into stable perceptual units (i.e., sounds or auditory objects). Current evidence suggests that the cortex--specifically, the ventral auditory pathway--is responsible for the computations most closely related to perceptual representations. Here, we discuss how the transformations along the ventral auditory pathway relate to auditory percepts, with special attention paid to the processing of vocalizations and categorization, and explore recent models of how these areas may carry out these computations. PMID:24681354

  11. Shared virtual environments for aerospace training

    NASA Technical Reports Server (NTRS)

    Loftin, R. Bowen; Voss, Mark

    1994-01-01

    Virtual environments have the potential to significantly enhance the training of NASA astronauts and ground-based personnel for a variety of activities. A critical requirement is the need to share virtual environments, in real or near real time, between remote sites. It has been hypothesized that the training of international astronaut crews could be done more cheaply and effectively by utilizing such shared virtual environments in the early stages of mission preparation. The Software Technology Branch at NASA's Johnson Space Center has developed the capability for multiple users to simultaneously share the same virtual environment. Each user generates the graphics needed to create the virtual environment. All changes of object position and state are communicated to all users so that each virtual environment maintains its 'currency.' Examples of these shared environments will be discussed and plans for the utilization of the Department of Defense's Distributed Interactive Simulation (DIS) protocols for shared virtual environments will be presented. Finally, the impact of this technology on training and education in general will be explored.
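    The state-replication scheme described here (each site renders its own graphics while only object position and state changes cross the network) can be sketched as follows. The class and method names are illustrative, not NASA's actual software:

```python
class SharedEnvironment:
    """Minimal sketch of shared-virtual-environment state replication:
    each site keeps a full local copy and broadcasts only changes of
    object position and state, so every copy maintains its 'currency'.
    """

    def __init__(self):
        self.objects = {}   # object id -> (position, state)
        self.peers = []     # connected remote sites

    def connect(self, peer):
        """Link two sites bidirectionally (stands in for a network link)."""
        self.peers.append(peer)
        peer.peers.append(self)

    def update(self, obj_id, position, state):
        """Apply a local change, then broadcast it to all peers."""
        self.objects[obj_id] = (position, state)
        for peer in self.peers:
            peer.receive(obj_id, position, state)

    def receive(self, obj_id, position, state):
        """Apply a remote change locally (no re-broadcast, avoiding loops)."""
        self.objects[obj_id] = (position, state)
```

    A real implementation would serialize these updates into network packets, which is essentially what the DIS protocols mentioned in the abstract standardize.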

  12. The specificity of memory enhancement during interaction with a virtual environment.

    PubMed

    Brooks, B M; Attree, E A; Rose, F D; Clifford, B R; Leadbetter, A G

    1999-01-01

    Two experiments investigated differences between active and passive participation in a computer-generated virtual environment in terms of spatial memory, object memory, and object location memory. It was found that active participants, who controlled their movements in the virtual environment using a joystick, recalled the spatial layout of the virtual environment better than passive participants, who merely watched the active participants' progress. Conversely, there were no significant differences between the active and passive participants' recall or recognition of the virtual objects, nor in their recall of the correct locations of objects in the virtual environment. These findings are discussed in terms of subject-performed task research and the specificity of memory enhancement in virtual environments.

  13. Validation of virtual reality as a tool to understand and prevent child pedestrian injury.

    PubMed

    Schwebel, David C; Gaines, Joanna; Severson, Joan

    2008-07-01

    In recent years, virtual reality has emerged as an innovative tool for health-related education and training. Among the many benefits of virtual reality is the opportunity for novice users to engage unsupervised in a safe environment when the real environment might be dangerous. Virtual environments are only useful for health-related research, however, if behavior in the virtual world validly matches behavior in the real world. This study was designed to test the validity of an immersive, interactive virtual pedestrian environment. A sample of 102 children and 74 adults was recruited to complete simulated road-crossings in both the virtual environment and the identical real environment. In both the child and adult samples, construct validity was demonstrated via significant correlations between behavior in the virtual and real worlds. Results also indicate construct validity through developmental differences in behavior; convergent validity by showing correlations between parent-reported child temperament and behavior in the virtual world; internal reliability of various measures of pedestrian safety in the virtual world; and face validity, as measured by users' self-reported perception of realism in the virtual world. We discuss issues of generalizability to other virtual environments, and the implications for application of virtual reality to understanding and preventing pediatric pedestrian injuries.

  14. Distractibility in Attention-Deficit/Hyperactivity Disorder (ADHD): the virtual reality classroom.

    PubMed

    Adams, Rebecca; Finn, Paul; Moes, Elisabeth; Flannery, Kathleen; Rizzo, Albert Skip

    2009-03-01

    Nineteen boys aged 8 to 14 with a diagnosis of ADHD and 16 age-matched controls were compared in a virtual reality (VR) classroom version of a continuous performance task (CPT), with a second standard CPT presentation using the same projection display dome system. The Virtual Classroom included simulated "real-world" auditory and visual distracters. Parent ratings of attention, hyperactivity, internalizing problems, and adaptive skills on the Behavior Assessment System for Children (BASC) Monitor for ADHD confirmed that the ADHD children had more problems in these areas than controls. The difference between the ADHD group (who performed worse) and the control group approached significance (p = .05; adjusted p = .02) in the Virtual Classroom presentation, and the classification rate of the Virtual Classroom was better than when the standard CPT was used (87.5% versus 68.8%). Children with ADHD were more affected by distractions in the VR classroom than those without ADHD. Results are discussed in relation to distractibility in ADHD.

  15. An Audio Architecture Integrating Sound and Live Voice for Virtual Environments

    DTIC Science & Technology

    2002-09-01

    implementation of a virtual environment. As real world training locations become scarce and training budgets are trimmed, training system developers ...look more and more towards virtual environments as the answer. Virtual environments provide training system developers with several key benefits

  16. Realistic terrain visualization based on 3D virtual world technology

    NASA Astrophysics Data System (ADS)

    Huang, Fengru; Lin, Hui; Chen, Bin; Xiao, Cai

    2009-09-01

    The rapid advances in information technologies, e.g., network, graphics processing, and virtual world, have provided challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic environments, especially geographic visualization and collaboration. In order to achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments that support geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographic visualization, computer simulation, and virtual geographic environment applications. The paper introduces concepts and technologies of virtual worlds and virtual geographic environments, and explores the integration of realistic terrain with other geographic objects and phenomena of the natural geographic environment based on SL/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation for the construction of a mirror world or a sandbox model of the earth's landscape and geographic environment. The capabilities of interaction and collaboration on geographic information are discussed as well. Further virtual geographic applications can be developed based on the foundation work of realistic terrain visualization in virtual environments.

  18. Increasing Accessibility to the Blind of Virtual Environments, Using a Virtual Mobility Aid Based On the "EyeCane": Feasibility Study

    PubMed Central

    Maidenbaum, Shachar; Levy-Tzedek, Shelly; Chebat, Daniel-Robert; Amedi, Amir

    2013-01-01

    Virtual worlds and environments are becoming an increasingly central part of our lives, yet they are still far from accessible to the blind. This is especially unfortunate, as such environments hold great potential for uses such as social interaction, online education, and, in particular, letting visually impaired users familiarize themselves with a real environment virtually, from the comfort and safety of their own homes, before visiting it in the real world. We have implemented a simple algorithm to improve this situation using single-point depth information, enabling the blind to use a virtual cane, modeled on the “EyeCane” electronic travel aid, within any virtual environment with minimal pre-processing. Use of the Virtual-EyeCane enables this experience to potentially be transferred later to real-world environments with stimuli identical to those from the virtual environment. We show the fast-learned practical use of this algorithm for navigation in simple environments. PMID:23977316
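    A single-point virtual cane of this kind can be sketched in a few lines. The distance-to-beep-rate mapping and all constants below are illustrative guesses, not the published EyeCane parameters:

```python
def depth_to_beep_rate(distance_m, max_range_m=5.0, max_rate_hz=20.0):
    """Map a single-point depth reading to a beep repetition rate:
    nearer obstacles produce faster beeps, out-of-range gives silence.
    Constants are illustrative, not taken from the published device.
    """
    if distance_m >= max_range_m:
        return 0.0
    return max_rate_hz * (1.0 - distance_m / max_range_m)

def virtual_raycast(position, heading, obstacles):
    """Single-point 'virtual cane': distance along the heading to the
    nearest obstacle. Uses 1-D positions and heading +1/-1 for simplicity;
    a real virtual environment would ray-cast against scene geometry.
    """
    hits = [(obstacle - position) * heading for obstacle in obstacles]
    hits = [d for d in hits if d > 0]
    return min(hits) if hits else float("inf")
```

    Chaining the two (ray-cast each frame, then sonify the distance) is the minimal pre-processing loop the abstract alludes to.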

  19. G2H--graphics-to-haptic virtual environment development tool for PC's.

    PubMed

    Acosta, E; Temkin, B; Krummel, T M; Heinrichs, W L

    2000-01-01

    Existing surgical virtual environments for training and preparation have improved greatly, but mostly in their visual aspects. The incorporation of haptics into virtual-reality-based surgical simulations would greatly enhance the sense of realism. To aid the development of haptic surgical virtual environments, we have created a graphics-to-haptic (G2H) virtual environment development tool. G2H transforms graphical virtual environments (created or imported) into haptic virtual environments without programming. The G2H capability has been demonstrated using the complex 3D pelvic model of Lucy 2.0, the Stanford Visible Female. The pelvis was made haptic using G2H without any further programming effort.

  20. Grasping trajectories in a virtual environment adhere to Weber's law.

    PubMed

    Ozana, Aviad; Berman, Sigal; Ganel, Tzvi

    2018-06-01

    Virtual-reality and telerobotic devices simulate local motor control of virtual objects within computerized environments. Here, we explored grasping kinematics within a virtual environment and tested whether, as in normal 3D grasping, trajectories in the virtual environment are performed analytically, violating Weber's law with respect to object's size. Participants were asked to grasp a series of 2D objects using a haptic system, which projected their movements to a virtual space presented on a computer screen. The apparatus also provided object-specific haptic information upon "touching" the edges of the virtual targets. The results showed that grasping movements performed within the virtual environment did not produce the typical analytical trajectory pattern obtained during 3D grasping. Unlike as in 3D grasping, grasping trajectories in the virtual environment adhered to Weber's law, which indicates relative resolution in size processing. In addition, the trajectory patterns differed from typical trajectories obtained during 3D grasping, with longer times to complete the movement, and with maximum grip apertures appearing relatively early in the movement. The results suggest that grasping movements within a virtual environment could differ from those performed in real space, and are subjected to irrelevant effects of perceptual information. Such atypical pattern of visuomotor control may be mediated by the lack of complete transparency between the interface and the virtual environment in terms of the provided visual and haptic feedback. Possible implications of the findings to movement control within robotic and virtual environments are further discussed.
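    The Weber's-law analysis implied here, checking whether response variability grows in proportion to object size (a roughly constant Weber fraction), can be sketched as follows. The numbers and the 5% tolerance are illustrative, not the study's:

```python
def weber_fractions(sizes_mm, aperture_sd_mm):
    """Weber fraction = variability / magnitude for each object size."""
    return [sd / size for size, sd in zip(sizes_mm, aperture_sd_mm)]

def follows_webers_law(fractions, rel_tolerance=0.05):
    """Crude check: Weber's law holds if all fractions lie within
    `rel_tolerance` of their mean. In analytic 3D grasping, variability
    stays flat across sizes, so the fraction falls and this returns False.
    """
    mean = sum(fractions) / len(fractions)
    return all(abs(f - mean) <= rel_tolerance * mean for f in fractions)
```

    Adherence to Weber's law in the virtual environment, versus its violation in real 3D grasping, is the contrast the abstract turns on.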

  2. Controlling memory impairment in elderly adults using virtual reality memory training: a randomized controlled pilot study.

    PubMed

    Optale, Gabriele; Urgesi, Cosimo; Busato, Valentina; Marin, Silvia; Piron, Lamberto; Priftis, Konstantinos; Gamberini, Luciano; Capodieci, Salvatore; Bordin, Adalberto

    2010-05-01

    Memory decline is a prevalent aspect of aging but may also be the first sign of cognitive pathology. Virtual reality (VR) using immersion and interaction may provide new approaches to the treatment of memory deficits in elderly individuals. The authors implemented a VR training intervention to try to lessen cognitive decline and improve memory functions. The authors randomly assigned 36 elderly residents of a rest care facility (median age 80 years) who were impaired on the Verbal Story Recall Test either to the experimental group (EG) or the control group (CG). The EG underwent 6 months of VR memory training (VRMT) that involved auditory stimulation and VR experiences in path finding. The initial training phase lasted 3 months (3 auditory and 3 VR sessions every 2 weeks), and there was a booster training phase during the following 3 months (1 auditory and 1 VR session per week). The CG underwent equivalent face-to-face training sessions using music therapy. Both groups participated in social and creative and assisted-mobility activities. Neuropsychological and functional evaluations were performed at baseline, after the initial training phase, and after the booster training phase. The EG showed significant improvements in memory tests, especially in long-term recall with an effect size of 0.7 and in several other aspects of cognition. In contrast, the CG showed progressive decline. The authors suggest that VRMT may improve memory function in elderly adults by enhancing focused attention.

  3. Angle-Dependent Distortions in the Perceptual Topology of Acoustic Space

    PubMed Central

    2018-01-01

    By moving sounds around the head and asking listeners to report which ones moved more, it was found that sound sources at the side of a listener must move at least twice as much as ones in front to be judged as moving the same amount. A relative expansion of space in the front and compression at the side has consequences for spatial perception of moving sounds by both static and moving listeners. An accompanying prediction that the apparent location of static sound sources ought to also be distorted agrees with previous work and suggests that this is a general perceptual phenomenon that is not limited to moving signals. A mathematical model that mimics the measured expansion of space can be used to successfully capture several previous findings in spatial auditory perception. The inverse of this function could be used alongside individualized head-related transfer functions and motion tracking to produce hyperstable virtual acoustic environments. PMID:29764312
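    A warping function with the reported 2:1 front-to-side sensitivity ratio, together with the inverse the authors suggest for stabilizing virtual acoustic scenes, might look like the following sketch. The functional form and constant are invented for illustration, not the paper's fitted model:

```python
import math

A = 1 / 3  # chosen so sensitivity in front is twice that at the side

def perceived_azimuth(theta):
    """Illustrative warp of physical azimuth theta (radians, 0 = front):
    derivative is 1 + A*cos(2*theta), i.e. 4/3 in front and 2/3 at the
    side, reproducing the 2:1 expansion/compression described above.
    """
    return theta + A * math.sin(2 * theta) / 2

def physical_azimuth(phi, iterations=50):
    """Numerically invert the warp by fixed-point iteration, as one
    might when pre-distorting source positions for a 'hyperstable'
    virtual acoustic environment."""
    theta = phi
    for _ in range(iterations):
        theta = phi - A * math.sin(2 * theta) / 2
    return theta
```

    Applying `physical_azimuth` before spatialization would counteract the perceptual warp, which is the use the abstract proposes alongside individualized head-related transfer functions.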

  4. A preliminary study of MR sickness evaluation using visual motion aftereffect for advanced driver assistance systems.

    PubMed

    Nakajima, Sawako; Ino, Shuichi; Ifukube, Tohru

    2007-01-01

    Mixed Reality (MR) technologies have recently been explored in many areas of Human-Machine Interface (HMI) such as medicine, manufacturing, entertainment and education. However, MR sickness, a kind of motion sickness, is caused by sensory conflicts between the real world and the virtual world. The purpose of this paper is to develop a new evaluation method for motion and MR sickness. This paper investigates the relationship between whole-body vibration related to MR technologies and the motion aftereffect (MAE) phenomenon in the human visual system. The MR environment is modeled after advanced driver assistance systems in near-future vehicles. The seated subjects in the MR simulator were shaken in the pitch direction at frequencies ranging from 0.1 to 2.0 Hz. Results show that the MAE is useful for evaluating MR sickness incidence. In addition, a method to reduce MR sickness by auditory stimulation is proposed.

  5. Using Virtual Reality Environment to Improve Joint Attention Associated with Pervasive Developmental Disorder

    ERIC Educational Resources Information Center

    Cheng, Yufang; Huang, Ruowen

    2012-01-01

    The focus of this study is using a data glove to practice joint attention skills in a virtual reality environment for people with pervasive developmental disorder (PDD). The virtual reality environment provides a safe environment for people with PDD. Especially, when they make errors during practice in the virtual reality environment, there is no suffering or…

  6. Intelligent Motion and Interaction Within Virtual Environments

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R. (Editor); Slater, Mel (Editor); Alexander, Thomas (Editor)

    2007-01-01

    What makes virtual actors and objects in virtual environments seem real? How can the illusion of their reality be supported? What sorts of training or user-interface applications benefit from realistic user-environment interactions? These are some of the central questions that designers of virtual environments face. To be sure, simulation realism is not necessarily the major, or even a required, goal of a virtual environment intended to communicate specific information. But for some applications in entertainment, marketing, or aspects of vehicle simulation training, realism is essential. The following chapters examine how a sense of truly interacting with dynamic, intelligent agents may arise in users of virtual environments. These chapters are based on presentations at the London conference on Intelligent Motion and Interaction within Virtual Environments, held at University College London, U.K., 15-17 September 2003.

  7. A Practical Guide, with Theoretical Underpinnings, for Creating Effective Virtual Reality Learning Environments

    ERIC Educational Resources Information Center

    O'Connor, Eileen A.; Domingo, Jelia

    2017-01-01

    With the advent of open source virtual environments, the associated cost reductions, and the more flexible options, avatar-based virtual reality environments are within reach of educators. By using and repurposing readily available virtual environments, instructors can bring engaging, community-building, and immersive learning opportunities to…

  8. The Fidelity of ’Feel’: Emotional Affordance in Virtual Environments

    DTIC Science & Technology

    2005-07-01

    The Fidelity of “Feel”: Emotional Affordance in Virtual Environments Jacquelyn Ford Morie, Josh Williams, Aimee Dozois, Donat-Pierre Luigi... environment but also the participant. We do this with a focus on what emotional affordances this manipulation will provide. Our first evaluation scenario...emotionally affective VEs. Keywords: Immersive Environments, Virtual Environments, VEs, Virtual Reality, emotion, affordance, fidelity, presence

  9. Coercive Narratives, Motivation and Role Playing in Virtual Worlds

    DTIC Science & Technology

    2002-01-01

    resource for making immersive virtual environments highly engaging. Interaction also appeals to our natural desire to discover. Reading a book contains...participation in an open-ended Virtual Environment (VE). I intend to take advantage of a participant's natural tendency to prefer interaction when possible...I hope this work will expand the potential of experience within virtual worlds. Keywords: Immersive Environments, Virtual Environments

  10. An Investigation of Spatial Hearing in Children with Normal Hearing and with Cochlear Implants and the Impact of Executive Function

    NASA Astrophysics Data System (ADS)

    Misurelli, Sara M.

    The ability to analyze an "auditory scene"---that is, to selectively attend to a target source while simultaneously segregating and ignoring distracting information---is one of the most important and complex skills utilized by normal hearing (NH) adults. The NH adult auditory system and brain work rather well to segregate auditory sources in adverse environments. However, for some children and individuals with hearing loss, selectively attending to one source in noisy environments can be extremely challenging. In a normal auditory system, information arriving at each ear is integrated, and these binaural cues aid in speech understanding in noise. A growing number of individuals who are deaf now receive cochlear implants (CIs), which supply hearing through electrical stimulation of the auditory nerve. In particular, bilateral cochlear implants (BiCIs) are becoming more prevalent, especially in children. However, because CI sound processing lacks both fine structure cues and coordination between stimulation at the two ears, binaural cues may be either absent or inconsistent. For children with NH and with BiCIs, this difficulty in segregating sources is of particular concern because their learning and development commonly occur within the context of complex auditory environments. This dissertation intends to explore and understand the ability of children with NH and with BiCIs to function in everyday noisy environments. The goals of this work are to (1) investigate source segregation abilities in children with NH and with BiCIs; (2) examine the effect of target-interferer similarity and the benefits of source segregation for children with NH and with BiCIs; (3) investigate measures of executive function that may predict performance in complex and realistic auditory tasks of source segregation for listeners with NH; and (4) examine source segregation abilities in NH listeners, from school-age to adults.

  11. Effects of Exercise in Immersive Virtual Environments on Cortical Neural Oscillations and Mental State

    PubMed Central

    Vogt, Tobias; Herpers, Rainer; Askew, Christopher D.; Scherfgen, David; Strüder, Heiko K.; Schneider, Stefan

    2015-01-01

    Virtual reality environments are increasingly being used to encourage individuals to exercise more regularly, including as part of treatment for those with mental health or neurological disorders. The success of virtual environments likely depends on whether a sense of presence can be established, where participants become fully immersed in the virtual environment. Exposure to virtual environments is associated with physiological responses, including cortical activation changes. Whether the addition of real exercise within a virtual environment alters the perception of presence, or the accompanying physiological changes, is not known. In a randomized and controlled study design, moderate-intensity Exercise (i.e., self-paced cycling) and No-Exercise (i.e., automatic propulsion) trials were performed within three levels of virtual environment exposure. Each trial was 5 minutes in duration and was followed by posttrial assessments of heart rate, perceived sense of presence, EEG, and mental state. Changes in psychological strain and physical state were generally mirrored by neural activation patterns. Furthermore, these changes indicated that exercise augments the demands of virtual environment exposure, and this likely contributed to an enhanced sense of presence. PMID:26366305

  12. Effects of Exercise in Immersive Virtual Environments on Cortical Neural Oscillations and Mental State.

    PubMed

    Vogt, Tobias; Herpers, Rainer; Askew, Christopher D; Scherfgen, David; Strüder, Heiko K; Schneider, Stefan

    2015-01-01

    Virtual reality environments are increasingly being used to encourage individuals to exercise more regularly, including as part of treatment for those with mental health or neurological disorders. The success of virtual environments likely depends on whether a sense of presence can be established, where participants become fully immersed in the virtual environment. Exposure to virtual environments is associated with physiological responses, including cortical activation changes. Whether the addition of real exercise within a virtual environment alters the perception of presence, or the accompanying physiological changes, is not known. In a randomized and controlled study design, moderate-intensity Exercise (i.e., self-paced cycling) and No-Exercise (i.e., automatic propulsion) trials were performed within three levels of virtual environment exposure. Each trial was 5 minutes in duration and was followed by posttrial assessments of heart rate, perceived sense of presence, EEG, and mental state. Changes in psychological strain and physical state were generally mirrored by neural activation patterns. Furthermore, these changes indicated that exercise augments the demands of virtual environment exposure, and this likely contributed to an enhanced sense of presence.

  13. Performance of an Ambulatory Dry-EEG Device for Auditory Closed-Loop Stimulation of Sleep Slow Oscillations in the Home Environment

    PubMed Central

    Debellemaniere, Eden; Chambon, Stanislas; Pinaud, Clemence; Thorey, Valentin; Dehaene, David; Léger, Damien; Chennaoui, Mounir; Arnal, Pierrick J.; Galtier, Mathieu N.

    2018-01-01

    Recent research has shown that auditory closed-loop stimulation can enhance sleep slow oscillations (SO) to improve N3 sleep quality and cognition. Previous studies have been conducted in lab environments. The present study aimed to validate and assess the performance of a novel ambulatory wireless dry-EEG device (WDD) for auditory closed-loop stimulation of SO during N3 sleep at home. The performance of the WDD in automatically detecting N3 sleep and delivering auditory closed-loop stimulation on SO was tested on 20 young healthy subjects who slept with both the WDD and a miniaturized polysomnograph (part 1), in both stimulated and sham nights, within a double-blind, randomized, crossover design. The effects of auditory closed-loop stimulation on delta power increase were assessed after 1 and 10 nights of stimulation in an observational pilot study in the home environment including 90 middle-aged subjects (part 2). The first part, aimed at assessing the quality of the WDD as compared to a polysomnograph, showed that the sensitivity and specificity for automatically detecting N3 sleep in real time were 0.70 and 0.90, respectively. The stimulation accuracy of the SO ascending-phase targeting was 45 ± 52°. The second part of the study, conducted in the home environment, showed that the stimulation protocol induced an increase of 43.9% in delta power in the 4-s window following the first stimulation (including evoked potentials and an SO entrainment effect). The increase in SO response to auditory stimulation remained at the same level after 10 consecutive nights. The WDD thus shows good performance in automatically detecting N3 sleep in real time and in delivering auditory closed-loop stimulation on SO accurately. The stimulations increased SO amplitude during N3 sleep without any adaptation effect over 10 consecutive nights. This tool opens new perspectives for identifying novel sleep EEG biomarkers in longitudinal studies and for conducting broad studies on the effects of auditory stimulation during sleep. PMID:29568267
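
The sensitivity (0.70) and specificity (0.90) reported above are standard per-epoch detection metrics. As an illustrative sketch (the function and variable names are assumptions, not from the study), they can be computed from predicted and reference sleep-stage labels like this:

```python
def detection_metrics(predicted, reference, target="N3"):
    """Per-epoch sensitivity and specificity of automatic sleep staging,
    scored against a polysomnography reference (e.g. one label per 30-s epoch)."""
    tp = sum(p == target and r == target for p, r in zip(predicted, reference))
    fn = sum(p != target and r == target for p, r in zip(predicted, reference))
    tn = sum(p != target and r != target for p, r in zip(predicted, reference))
    fp = sum(p == target and r != target for p, r in zip(predicted, reference))
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    return sensitivity, specificity

# Toy example: 10 reference N3 epochs (7 detected) and 10 non-N3 epochs
# (9 correctly rejected) reproduce the reported 0.70 / 0.90 figures.
reference = ["N3"] * 10 + ["N2"] * 10
predicted = ["N3"] * 7 + ["N2"] * 3 + ["N2"] * 9 + ["N3"] * 1
```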

  14. Pilot study of methods and equipment for in-home noise level measurements.

    PubMed

    Neitzel, Richard L; Heikkinen, Maire S A; Williams, Christopher C; Viet, Susan Marie; Dellarco, Michael

    2015-01-15

    Knowledge of the auditory and non-auditory effects of noise has increased dramatically over the past decade, but indoor noise exposure measurement methods have not advanced appreciably, despite the introduction of applicable new technologies. This study evaluated various conventional and smart devices for exposure assessment in the National Children's Study. Three devices were tested: a sound level meter (SLM), a dosimeter, and a smart device with a noise measurement application installed. Instrument performance was evaluated in a series of semi-controlled tests in office environments over 96-hour periods, followed by measurements made continuously in two rooms (a child's bedroom and a most-used room) in nine participating homes over a 7-day period, with subsequent computation of a range of noise metrics. The SLMs and dosimeters yielded similar A-weighted average noise levels. Levels measured by the smart devices often differed substantially (showing both positive and negative bias, depending on the metric) from those measured via SLM and dosimeter, and showed attenuation in some frequency bands in spectral analysis compared to SLM results. Virtually all measurements exceeded the Environmental Protection Agency's 45 dBA day-night limit for indoor residential exposures. The measurement protocol developed here can be employed in homes, demonstrates the possibility of measuring long-term noise exposures in homes with technologies beyond traditional SLMs, and highlights potential pitfalls associated with measurements made by smart devices.
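
The EPA's 45 dBA criterion cited above is a day-night average sound level (Ldn), which energy-averages hourly levels after adding a 10 dB penalty to nighttime hours (22:00-07:00). A minimal sketch of that computation follows; the function name and the hourly-Leq input format are assumptions for illustration:

```python
import math

def day_night_level(hourly_leq):
    """Day-night average sound level (Ldn, dBA) from 24 hourly Leq values,
    where hourly_leq[0] covers 00:00-01:00. Nighttime hours (22:00-07:00)
    receive a 10 dB penalty before energy-averaging."""
    assert len(hourly_leq) == 24
    total = 0.0
    for hour, leq in enumerate(hourly_leq):
        penalty = 10.0 if hour >= 22 or hour < 7 else 0.0
        total += 10.0 ** ((leq + penalty) / 10.0)
    return 10.0 * math.log10(total / 24.0)
```

Because of the nighttime penalty, a home at a constant 45 dBA around the clock already has an Ldn of about 51.4 dBA, which illustrates how easily indoor residential measurements can exceed a 45 dBA day-night limit.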

  15. Auditory Environment across the Life Span of Cochlear Implant Users: Insights from Data Logging

    ERIC Educational Resources Information Center

    Busch, Tobias; Vanpoucke, Filiep; van Wieringen, Astrid

    2017-01-01

    Purpose: We describe the natural auditory environment of people with cochlear implants (CIs), how it changes across the life span, and how it varies between individuals. Method: We performed a retrospective cross-sectional analysis of Cochlear Nucleus 6 CI sound-processor data logs. The logs were obtained from 1,501 people with CIs (ages 0-96…

  16. Developing an EEG-based on-line closed-loop lapse detection and mitigation system

    PubMed Central

    Wang, Yu-Te; Huang, Kuan-Chih; Wei, Chun-Shu; Huang, Teng-Yi; Ko, Li-Wei; Lin, Chin-Teng; Cheng, Chung-Kuan; Jung, Tzyy-Ping

    2014-01-01

    In America, 60% of adults reported that they have driven a motor vehicle while feeling drowsy, and at least 15–20% of fatal car accidents are fatigue-related. This study translates previous laboratory-oriented neurophysiological research to design, develop, and test an On-line Closed-loop Lapse Detection and Mitigation (OCLDM) System featuring a mobile wireless dry-sensor EEG headgear and a cell-phone based real-time EEG processing platform. Eleven subjects participated in an event-related lane-keeping task, in which they were instructed to steer a randomly deviating, fixed-speed cruising car on a 4-lane highway, simulated in a first-person view within an 8-screen, 8-projector immersive virtual-reality environment. When the subjects experienced lapses or failed to respond to events during the experiment, an auditory warning was delivered to rectify the performance decrement. However, the arousing auditory signals were not always effective. The EEG spectra exhibited statistically significant differences between effective and ineffective arousing signals, suggesting that EEG spectra could be used as a measure of the efficacy of arousing signals. In this on-line pilot study, the proposed OCLDM System was able to continuously detect EEG signatures of fatigue, deliver arousing warnings to subjects suffering momentary cognitive lapses, and assess the efficacy of the warnings in near real time to rectify cognitive lapses. The on-line testing results of the OCLDM System validated the efficacy of the arousing signals in improving subjects' response times to subsequent lane-departure events. This study may lead to a practical on-line lapse detection and mitigation system in real-world environments. PMID:25352773

  17. Virtual Reality: The Future of Animated Virtual Instructor, the Technology and Its Emergence to a Productive E-Learning Environment.

    ERIC Educational Resources Information Center

    Jiman, Juhanita

    This paper discusses the use of Virtual Reality (VR) in e-learning environments where an intelligent three-dimensional (3D) virtual person plays the role of an instructor. With the existence of this virtual instructor, it is hoped that the teaching and learning in the e-environment will be more effective and productive. This virtual 3D animated…

  18. Cybersickness and Anxiety During Simulated Motion: Implications for VRET.

    PubMed

    Bruck, Susan; Watters, Paul

    2009-01-01

    Some clinicians have suggested using virtual reality environments to deliver psychological interventions to treat anxiety disorders. However, given a significant body of work on cybersickness symptoms which may arise in virtual environments - especially those involving simulated motion - we tested (a) whether being exposed to a virtual reality environment alone causes anxiety to increase, and (b) whether exposure to simulated motion in a virtual reality environment increases anxiety. Using a repeated measures design, we used Kim's Anxiety Scale questionnaire to compare baseline anxiety, anxiety after virtual environment exposure, and anxiety after simulated motion. While there was no significant effect on anxiety for being in a virtual environment with no simulated motion, the introduction of simulated motion caused anxiety to significantly increase, but not to a severe or extreme level. The implications of this work for virtual reality exposure therapy (VRET) are discussed.

  19. Characterizing the audibility of sound field with diffusion in architectural spaces

    NASA Astrophysics Data System (ADS)

    Utami, Sentagi Sesotya

    The significance of diffusion control in room acoustics is that it attempts to avoid echoes by dispersing reflections while removing less valuable sound energy. Some applications place emphasis on the enhancement of late reflections to promote a sense of envelopment, and on methods required to measure the performance of diffusers. What remains unclear is the impact of diffusion on audible quality arising from the geometric arrangement of architectural elements. The objective of this research is to characterize the audibility of the sound field with diffusion in architectural space. To address this objective, an approach utilizing various methods and new techniques relevant to room acoustics standards was applied. A beamforming microphone array (i.e., an acoustic camera) was utilized for field measurements in a recording studio, classrooms, auditoriums, concert halls, and sports arenas. Given the ability to combine a visual image with acoustical data, the measured impulse responses were analyzed to identify the impact of diffusive surfaces on the early, late, and reverberant sound fields. The effects of room geometry and the proportions of diffusive and absorptive surfaces were observed using geometrical room acoustics simulations. The degree of diffuseness in each space was measured by coherences from different measurement positions, along with the acoustical conditions predicted by well-known objective parameters such as T30, EDT, C80, and C50. Noticeable differences in the auditory experience were investigated using computer-based survey techniques, including an immersive virtual environment system, given current software auralization capabilities. The results, based on statistical analysis, demonstrate the users' ability to localize sound and to distinguish the intensity, clarity, and reverberation created within the virtual environment. The impact of architectural elements on diffusion control is evaluated through design-variable interactions, both objectively and subjectively. The effectiveness of the diffusive surfaces is determined by echo reduction and the sense of complete immersion in a given room acoustic volume. Applying this methodology at various stages of design makes it possible to create a better auditory experience for users. The results from the cases studied have contributed to the development of new acoustical treatments based on diffusion characteristics.

  20. Training Enhances Both Locomotor and Cognitive Adaptability to a Novel Sensory Environment

    NASA Technical Reports Server (NTRS)

    Bloomberg, J. J.; Peters, B. T.; Mulavara, A. P.; Brady, R. A.; Batson, C. D.; Ploutz-Snyder, R. J.; Cohen, H. S.

    2010-01-01

    During adaptation to novel gravitational environments, sensorimotor disturbances have the potential to disrupt the ability of astronauts to perform required mission tasks. The goal of this project is to develop a sensorimotor adaptability (SA) training program to facilitate rapid adaptation. We have developed a unique training system comprising a treadmill placed on a motion base facing a virtual visual scene, which provides an unstable walking surface combined with incongruent visual flow designed to enhance sensorimotor adaptability. The goal of the present study was to determine whether SA training improved both the locomotor and cognitive responses to a novel sensory environment and to quantify the extent to which training would be retained. Methods: Twenty subjects (10 training, 10 control) completed three 30-minute training sessions during which they walked on the treadmill while receiving discordant support-surface and visual input. Control subjects walked on the treadmill but did not receive any support-surface or visual alterations. To determine the efficacy of training, all subjects performed the Transfer Test upon completion of training. For this test, subjects were exposed to novel visual flow and support-surface movement not previously experienced during training. The Transfer Test was performed 20 minutes, 1 week, and 1, 3, and 6 months after the final training session. Stride frequency, auditory reaction time, and heart rate data were collected as measures of postural stability, cognitive effort, and anxiety, respectively. Results: Using mixed-effects regression methods, we determined that subjects who received SA training showed smaller alterations in stride frequency, auditory reaction time, and heart rate compared to controls. Conclusion: Subjects who received SA training improved performance across a number of modalities, including enhanced locomotor function, increased multi-tasking capability, and reduced anxiety during adaptation to novel discordant sensory information. Trained subjects maintained their level of performance over six months.

  1. Brain activity in patients with unilateral sensorineural hearing loss during auditory perception in noisy environments.

    PubMed

    Yamamoto, Katsura; Tabei, Kenichi; Katsuyama, Narumi; Taira, Masato; Kitamura, Ken

    2017-01-01

    Patients with unilateral sensorineural hearing loss (UHL) often complain of hearing difficulties in noisy environments. To clarify this, we compared brain activation in patients with UHL with that of healthy participants during speech perception in a noisy environment, using functional magnetic resonance imaging (fMRI). A pure tone of 1 kHz, or 14 monosyllabic speech sounds, at 65‒70 dB accompanied by MRI scan noise at 75 dB, was presented to both ears for 1 second each, and participants were instructed to press a button when they could hear the pure tone or speech sound. Based on the activation areas of healthy participants, the primary auditory cortex, the anterior auditory association areas, and the posterior auditory association areas were set as regions of interest (ROIs). In each of these regions, we compared brain activity between healthy participants and patients with UHL. The results revealed that patients with right-side UHL showed different brain activity in the right posterior auditory area during perception of pure tones versus monosyllables. Clinically, left-side and right-side UHL are not presently differentiated and are similarly diagnosed and treated; however, the results of this study suggest that a laterality-specific treatment should be chosen.

  2. Air-Track: a real-world floating environment for active sensing in head-fixed mice.

    PubMed

    Nashaat, Mostafa A; Oraby, Hatem; Sachdev, Robert N S; Winter, York; Larkum, Matthew E

    2016-10-01

    Natural behavior occurs in multiple sensory and motor modalities and in particular is dependent on sensory feedback that constantly adjusts behavior. To investigate the underlying neuronal correlates of natural behavior, it is useful to have access to state-of-the-art recording equipment (e.g., 2-photon imaging, patch recordings, etc.) that frequently requires head fixation. This limitation has been addressed with various approaches such as virtual reality/air ball or treadmill systems. However, achieving multimodal realistic behavior in these systems can be challenging. These systems are often also complex and expensive to implement. Here we present "Air-Track," an easy-to-build head-fixed behavioral environment that requires only minimal computational processing. The Air-Track is a lightweight physical maze floating on an air table that has all the properties of the "real" world, including multiple sensory modalities tightly coupled to motor actions. To test this system, we trained mice in Go/No-Go and two-alternative forced choice tasks in a plus maze. Mice chose lanes and discriminated apertures or textures by moving the Air-Track back and forth and rotating it around themselves. Mice rapidly adapted to moving the track and used visual, auditory, and tactile cues to guide them in performing the tasks. A custom-controlled camera system monitored animal location and generated data that could be used to calculate reaction times in the visual and somatosensory discrimination tasks. We conclude that the Air-Track system is ideal for eliciting natural behavior in concert with virtually any system for monitoring or manipulating brain activity. Copyright © 2016 the American Physiological Society.

  3. Air-Track: a real-world floating environment for active sensing in head-fixed mice

    PubMed Central

    Oraby, Hatem; Sachdev, Robert N. S.; Winter, York

    2016-01-01

    Natural behavior occurs in multiple sensory and motor modalities and in particular is dependent on sensory feedback that constantly adjusts behavior. To investigate the underlying neuronal correlates of natural behavior, it is useful to have access to state-of-the-art recording equipment (e.g., 2-photon imaging, patch recordings, etc.) that frequently requires head fixation. This limitation has been addressed with various approaches such as virtual reality/air ball or treadmill systems. However, achieving multimodal realistic behavior in these systems can be challenging. These systems are often also complex and expensive to implement. Here we present “Air-Track,” an easy-to-build head-fixed behavioral environment that requires only minimal computational processing. The Air-Track is a lightweight physical maze floating on an air table that has all the properties of the “real” world, including multiple sensory modalities tightly coupled to motor actions. To test this system, we trained mice in Go/No-Go and two-alternative forced choice tasks in a plus maze. Mice chose lanes and discriminated apertures or textures by moving the Air-Track back and forth and rotating it around themselves. Mice rapidly adapted to moving the track and used visual, auditory, and tactile cues to guide them in performing the tasks. A custom-controlled camera system monitored animal location and generated data that could be used to calculate reaction times in the visual and somatosensory discrimination tasks. We conclude that the Air-Track system is ideal for eliciting natural behavior in concert with virtually any system for monitoring or manipulating brain activity. PMID:27486102

  4. The Process of Auditory Distraction: Disrupted Attention and Impaired Recall in a Simulated Lecture Environment

    ERIC Educational Resources Information Center

    Zeamer, Charlotte; Fox Tree, Jean E.

    2013-01-01

    Literature on auditory distraction has generally focused on the effects of particular kinds of sounds on attention to target stimuli. In support of extensive previous findings that have demonstrated the special role of language as an auditory distractor, we found that a concurrent speech stream impaired recall of a short lecture, especially for…

  5. Impact of Spatial and Verbal Short-Term Memory Load on Auditory Spatial Attention Gradients.

    PubMed

    Golob, Edward J; Winston, Jenna; Mock, Jeffrey R

    2017-01-01

    Short-term memory load can impair attentional control, but prior work shows that the extent of the effect ranges from being very general to very specific. One factor behind the mixed results may be reliance on point estimates of memory load effects on attention. Here we used auditory attention gradients as an analog measure to map out the impact of short-term memory load over space. Verbal or spatial information was maintained during an auditory spatial attention task and compared to a no-load condition. Stimuli were presented from five virtual locations in the frontal azimuth plane, and subjects focused on the midline. Reaction times progressively increased for lateral stimuli, indicating an attention gradient. Spatial load further slowed responses at lateral locations, particularly in the left hemispace, but had little effect at midline. Verbal memory load had no (Experiment 1), or a minimal (Experiment 2), influence on reaction times. Spatial and verbal load increased switch costs between memory encoding and attention tasks relative to the no-load condition. The findings show that short-term memory influences the distribution of auditory attention over space, and that the specific pattern depends on the type of information in short-term memory.

  6. Impact of Spatial and Verbal Short-Term Memory Load on Auditory Spatial Attention Gradients

    PubMed Central

    Golob, Edward J.; Winston, Jenna; Mock, Jeffrey R.

    2017-01-01

    Short-term memory load can impair attentional control, but prior work shows that the extent of the effect ranges from being very general to very specific. One factor behind the mixed results may be reliance on point estimates of memory load effects on attention. Here we used auditory attention gradients as an analog measure to map out the impact of short-term memory load over space. Verbal or spatial information was maintained during an auditory spatial attention task and compared to a no-load condition. Stimuli were presented from five virtual locations in the frontal azimuth plane, and subjects focused on the midline. Reaction times progressively increased for lateral stimuli, indicating an attention gradient. Spatial load further slowed responses at lateral locations, particularly in the left hemispace, but had little effect at midline. Verbal memory load had no (Experiment 1), or a minimal (Experiment 2), influence on reaction times. Spatial and verbal load increased switch costs between memory encoding and attention tasks relative to the no-load condition. The findings show that short-term memory influences the distribution of auditory attention over space, and that the specific pattern depends on the type of information in short-term memory. PMID:29218024

  7. Integration of auditory and kinesthetic information in motion: alterations in Parkinson's disease.

    PubMed

    Sabaté, Magdalena; Llanos, Catalina; Rodríguez, Manuel

    2008-07-01

    The main aim of this work was to study the interaction between auditory and kinesthetic stimuli and its influence on motion control. The study was performed on healthy subjects and patients with Parkinson's disease (PD). Thirty-five right-handed volunteers (young and age-matched healthy participants, and PD patients) were studied with three different motor tasks (slow cyclic movements, fast cyclic movements, and slow continuous movements) under the action of kinesthetic stimuli and sounds at different beat rates. The action of kinesthesia was evaluated by comparing real movements with virtual movements (movements imagined but not executed). The fast cyclic task was accelerated by kinesthetic but not by auditory stimuli. The slow cyclic task changed with the beat rate of sounds but not with kinesthetic stimuli. The slow continuous task showed an integrated response to both sensory modalities. These data show that the influence of multisensory integration on motion changes with the motor task and that some motor patterns are modulated by the simultaneous action of auditory and kinesthetic information, a cross-modal integration that was different in PD patients. PsycINFO Database Record (c) 2008 APA, all rights reserved.

  8. Virtual Education: Guidelines for Using Games Technology

    ERIC Educational Resources Information Center

    Schofield, Damian

    2014-01-01

    Advanced three-dimensional virtual environment technology, similar to that used by the film and computer games industry, can allow educational developers to rapidly create realistic online virtual environments. This technology has been used to generate a range of interactive Virtual Reality (VR) learning environments across a spectrum of…

  9. Effect of sound level on virtual and free-field localization of brief sounds in the anterior median plane.

    PubMed

    Marmel, Frederic; Marrufo-Pérez, Miriam I; Heeren, Jan; Ewert, Stephan; Lopez-Poveda, Enrique A

    2018-06-14

    The detection of high-frequency spectral notches has been shown to be worse at 70-80 dB sound pressure level (SPL) than at higher levels up to 100 dB SPL. The performance improvement at levels higher than 70-80 dB SPL has been related to an 'ideal observer' comparison of population auditory nerve spike trains to stimuli with and without high-frequency spectral notches. Insofar as vertical localization partly relies on information provided by pinna-based high-frequency spectral notches, we hypothesized that localization would be worse at 70-80 dB SPL than at higher levels. Results from a first experiment using a virtual localization set-up and non-individualized head-related transfer functions (HRTFs) were consistent with this hypothesis, but a second experiment using a free-field set-up showed that vertical localization deteriorates monotonically with increasing level up to 100 dB SPL. These results suggest that listeners use different cues when localizing sound sources in virtual and free-field conditions. In addition, they confirm that the worsening in vertical localization with increasing level continues beyond 70-80 dB SPL, the highest levels tested by previous studies. Further, they suggest that vertical localization, unlike high-frequency spectral notch detection, does not rely on an 'ideal observer' analysis of auditory nerve spike trains. Copyright © 2018 Elsevier B.V. All rights reserved.
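
    Stimuli with high-frequency spectral notches of the kind discussed here can be approximated with a second-order notch filter. The sketch below uses the standard biquad notch formulas from the Audio EQ Cookbook; the centre frequency, Q and sample rate are arbitrary choices for illustration, not values from the study.

```python
import math

def notch_coeffs(fc, fs, q=5.0):
    """Second-order (biquad) notch filter coefficients, Audio EQ Cookbook form."""
    w0 = 2 * math.pi * fc / fs
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha
    b = [1 / a0, -2 * math.cos(w0) / a0, 1 / a0]
    a = [1.0, -2 * math.cos(w0) / a0, (1 - alpha) / a0]
    return b, a

def filt(b, a, x):
    """Direct-form I filtering with normalized a[0] = 1."""
    y = []
    for n in range(len(x)):
        acc = sum(b[i] * x[n - i] for i in range(3) if n - i >= 0)
        acc -= sum(a[i] * y[n - i] for i in range(1, 3) if n - i >= 0)
        y.append(acc)
    return y

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

fs = 44100.0
b, a = notch_coeffs(fc=8000.0, fs=fs)  # notch at 8 kHz (arbitrary)
t = [n / fs for n in range(4000)]
at_notch = filt(b, a, [math.sin(2 * math.pi * 8000 * ti) for ti in t])
off_notch = filt(b, a, [math.sin(2 * math.pi * 2000 * ti) for ti in t])
print(f"8 kHz RMS after notch: {rms(at_notch[1000:]):.4f}")   # strongly attenuated
print(f"2 kHz RMS after notch: {rms(off_notch[1000:]):.4f}")  # mostly passed
```

    A tone at the notch centre is almost entirely removed after the transient dies out, while a tone well outside the notch bandwidth passes nearly unchanged.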

  10. Using auditory pre-information to solve the cocktail-party problem: electrophysiological evidence for age-specific differences.

    PubMed

    Getzmann, Stephan; Lewald, Jörg; Falkenstein, Michael

    2014-01-01

    Speech understanding in complex and dynamic listening environments requires (a) auditory scene analysis, namely auditory object formation and segregation, and (b) allocation of the attentional focus to the talker of interest. There is evidence that pre-information is actively used to facilitate these two aspects of the so-called "cocktail-party" problem. Here, a simulated multi-talker scenario was combined with electroencephalography to study scene analysis and allocation of attention in young and middle-aged adults. Sequences of short words (combinations of brief company names and stock-price values) from four talkers at different locations were simultaneously presented, and the detection of target names and the discrimination between critical target values were assessed. Immediately prior to speech sequences, auditory pre-information was provided via cues that either prepared auditory scene analysis or attentional focusing, or non-specific pre-information was given. While performance was generally better in younger than older participants, both age groups benefited from auditory pre-information. The analysis of the cue-related event-related potentials revealed age-specific differences in the use of pre-cues: Younger adults showed a pronounced N2 component, suggesting early inhibition of concurrent speech stimuli; older adults exhibited a stronger late P3 component, suggesting increased resource allocation to process the pre-information. In sum, the results argue for an age-specific utilization of auditory pre-information to improve listening in complex dynamic auditory environments.

  11. Using auditory pre-information to solve the cocktail-party problem: electrophysiological evidence for age-specific differences

    PubMed Central

    Getzmann, Stephan; Lewald, Jörg; Falkenstein, Michael

    2014-01-01

    Speech understanding in complex and dynamic listening environments requires (a) auditory scene analysis, namely auditory object formation and segregation, and (b) allocation of the attentional focus to the talker of interest. There is evidence that pre-information is actively used to facilitate these two aspects of the so-called “cocktail-party” problem. Here, a simulated multi-talker scenario was combined with electroencephalography to study scene analysis and allocation of attention in young and middle-aged adults. Sequences of short words (combinations of brief company names and stock-price values) from four talkers at different locations were simultaneously presented, and the detection of target names and the discrimination between critical target values were assessed. Immediately prior to speech sequences, auditory pre-information was provided via cues that either prepared auditory scene analysis or attentional focusing, or non-specific pre-information was given. While performance was generally better in younger than older participants, both age groups benefited from auditory pre-information. The analysis of the cue-related event-related potentials revealed age-specific differences in the use of pre-cues: Younger adults showed a pronounced N2 component, suggesting early inhibition of concurrent speech stimuli; older adults exhibited a stronger late P3 component, suggesting increased resource allocation to process the pre-information. In sum, the results argue for an age-specific utilization of auditory pre-information to improve listening in complex dynamic auditory environments. PMID:25540608

  12. Reaction time for processing visual stimulus in a computer-assisted rehabilitation environment.

    PubMed

    Sanchez, Yerly; Pinzon, David; Zheng, Bin

    2017-10-01

    To examine the reaction time when human subjects process information presented in the visual channel under both a direct vision and a virtual rehabilitation environment while walking. Visual stimuli included eight math problems displayed in the peripheral vision to seven healthy human subjects in a virtual rehabilitation training system (computer-assisted rehabilitation environment, CAREN) and in a direct vision environment. Subjects were required to verbally report the results of these math calculations within a short period of time. Reaction time, measured by a Tobii eye tracker, and calculation accuracy were recorded and compared between the direct vision and virtual rehabilitation environments. Performance outcomes measured for both groups included reaction time, reading time, answering time and the verbal answer score. A significant difference between the groups was found only for reaction time (p = .004). Participants had more difficulty recognizing the first equation in the virtual environment, and their reaction time was faster in the direct vision environment. This reaction time delay should be kept in mind when designing skill training scenarios in virtual environments. This was a pilot project for a series of studies assessing the cognitive ability of stroke patients undertaking a rehabilitation program in a virtual training environment. Implications for rehabilitation: eye tracking is a reliable tool that can be employed in rehabilitation virtual environments, and reaction time differs between direct vision and virtual environments.

  13. A selective array activation method for the generation of a focused source considering listening position.

    PubMed

    Song, Min-Ho; Choi, Jung-Woo; Kim, Yang-Hann

    2012-02-01

    A focused source can provide an auditory illusion of a virtual source placed between the loudspeaker array and the listener. When a focused source is generated by a time-reversed acoustic focusing solution, its use as a virtual source is limited by artifacts caused by convergent waves traveling toward the focusing point. This paper proposes an array activation method to reduce these artifacts for a selected listening point inside an array of arbitrary shape. Results show that the energy of the convergent waves can be reduced by up to 60 dB over a large region that includes the selected listening point. © 2012 Acoustical Society of America
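
    The focusing idea can be sketched in a few lines: delay each loudspeaker's signal so that all wavefronts arrive at the focus point simultaneously (the time-reversal principle), then mute some speakers by a simple geometric criterion as a stand-in for selective activation. The array geometry, listening point, and muting criterion below are illustrative assumptions, not the authors' actual algorithm.

```python
import math

C = 343.0  # speed of sound in air, m/s

def focusing_delays(speakers, focus):
    """Time-reversal delays: delay each speaker so that every wavefront
    arrives at the focus point at the same instant."""
    d = [math.dist(s, focus) for s in speakers]
    dmax = max(d)
    return [(dmax - di) / C for di in d]

def active_speakers(speakers, focus, listener):
    """Crude selective-activation criterion (an assumption for this sketch):
    mute speakers whose wave reaches the listening point before the focus,
    since those contribute convergent-wave artifacts at the listener."""
    return [i for i, s in enumerate(speakers)
            if math.dist(s, listener) >= math.dist(s, focus)]

# Hypothetical linear array along x, focus in front of it, listener beyond.
speakers = [(float(x), 0.0) for x in range(-3, 4)]  # 7 loudspeakers, 1 m apart
focus = (0.0, 1.0)                                  # focused (virtual) source
listener = (1.5, 2.5)                               # selected listening point

delays = focusing_delays(speakers, focus)
active = active_speakers(speakers, focus, listener)
print("delays (ms):", [round(t * 1000, 2) for t in delays])
print("active speaker indices:", active)
```

    The speaker closest to the focus gets the largest delay (its wave has the shortest path), and speakers on the listener's side of the focus are muted by the simplified criterion.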

  14. Cognitive/emotional models for human behavior representation in 3D avatar simulations

    NASA Astrophysics Data System (ADS)

    Peterson, James K.

    2004-08-01

    Simplified models of human cognition and emotional response are presented, based on models of auditory/visual polymodal fusion. At the core of these models is a computational model of Area 37 of the temporal cortex, based on the new isocortex models recently presented by Grossberg. These models are trained using carefully chosen auditory (musical sequences), visual (paintings) and higher-level abstract (meta-level) data obtained from studies of how optimization strategies are chosen in response to outside managerial inputs. The software modules developed are then used as inputs to character generation codes in standard 3D virtual world simulations. The auditory and visual training data also enable the development of simple music and painting composition generators, which significantly enhance one's ability to validate the cognitive model. The cognitive models are handled as interacting software agents implemented as CORBA objects to allow the use of multiple language coding choices (C++, Java, Python, etc.) and efficient use of legacy code.

  15. Babies in traffic: infant vocalizations and listener sex modulate auditory motion perception.

    PubMed

    Neuhoff, John G; Hamilton, Grace R; Gittleson, Amanda L; Mejia, Adolfo

    2014-04-01

    Infant vocalizations and "looming sounds" are classes of environmental stimuli that are critically important to survival but can have dramatically different emotional valences. Here, we simultaneously presented listeners with a stationary infant vocalization and a 3D virtual looming tone for which listeners made auditory time-to-arrival judgments. Negatively valenced infant cries produced more cautious (anticipatory) estimates of auditory arrival time of the tone over a no-vocalization control. Positively valenced laughs had the opposite effect, and across all conditions, men showed smaller anticipatory biases than women. In Experiment 2, vocalization-matched vocoded noise stimuli did not influence concurrent auditory time-to-arrival estimates compared with a control condition. In Experiment 3, listeners estimated the egocentric distance of a looming tone that stopped before arriving. For distant stopping points, women estimated the stopping point as closer when the tone was presented with an infant cry than when it was presented with a laugh. For near stopping points, women showed no differential effect of vocalization type. Men did not show differential effects of vocalization type at either distance. Our results support the idea that both the sex of the listener and the emotional valence of infant vocalizations can influence auditory motion perception and can modulate motor responses to other behaviorally relevant environmental sounds. We also find support for previous work that shows sex differences in emotion processing are diminished under conditions of higher stress.

  16. [Which colours can we hear?: light stimulation of the hearing system].

    PubMed

    Wenzel, G I; Lenarz, T; Schick, B

    2014-02-01

    The success of conventional hearing aids and electrical auditory prostheses for hearing-impaired patients is still limited in noisy environments and for sounds more complex than speech (e.g., music). This is partially due to the difficulty of frequency-specific activation of the auditory system using these devices. Stimulation of the auditory system using light pulses represents an alternative to mechanical and electrical stimulation. Light is a source of energy that can be very precisely focused and applied with little scattering, thus offering perspectives for optimal activation of the auditory system. Studies investigating light stimulation of sectors along the auditory pathway have shown that stimulation of the auditory system using light pulses is possible. However, further studies and developments are needed before a new generation of light-stimulation-based auditory prostheses can be made available for clinical application.

  17. Brain Activity on Navigation in Virtual Environments.

    ERIC Educational Resources Information Center

    Mikropoulos, Tassos A.

    2001-01-01

    Assessed the cognitive processing that takes place in virtual environments by measuring electrical brain activity using Fast Fourier Transform analysis. University students performed the same task in a real and a virtual environment, and eye movement measurements showed that all subjects were more attentive when navigating in the virtual world.…

  18. A Virtual Education: Guidelines for Using Games Technology

    ERIC Educational Resources Information Center

    Schofield, Damian

    2014-01-01

    Advanced three-dimensional virtual environment technology, similar to that used by the film and computer games industry, can allow educational developers to rapidly create realistic online virtual environments. This technology has been used to generate a range of interactive Virtual Reality (VR) learning environments across a spectrum of…

  19. Microgravity vestibular investigations (10-IML-1)

    NASA Technical Reports Server (NTRS)

    Reschke, Millard F.

    1992-01-01

    Our perception of how we are oriented in space is dependent on the interaction of virtually every sensory system. For example, to move about in our environment we integrate inputs in our brain from visual, haptic (kinesthetic, proprioceptive, and cutaneous), auditory systems, and labyrinths. In addition to this multimodal system for orientation, our expectations about the direction and speed of our chosen movement are also important. Changes in our environment and the way we interact with the new stimuli will result in a different interpretation by the nervous system of the incoming sensory information. We will adapt to the change in appropriate ways. Because our orientation system is adaptable and complex, it is often difficult to trace a response or change in behavior to any one source of information in this synergistic orientation system. However, with a carefully designed investigation, it is possible to measure signals at the appropriate level of response (both electrophysiological and perceptual) and determine the effect that stimulus rearrangement has on our sense of orientation. The environment of orbital flight represents the stimulus arrangement that is our immediate concern. The Microgravity Vestibular Investigations (MVI) represent a group of experiments designed to investigate the effects of orbital flight and a return to Earth on our orientation system.

  20. Virtual reality system for treatment of the fear of public speaking using image-based rendering and moving pictures.

    PubMed

    Lee, Jae M; Ku, Jeong H; Jang, Dong P; Kim, Dong H; Choi, Young H; Kim, In Y; Kim, Sun I

    2002-06-01

    The fear of speaking is often cited as the world's most common social phobia. The rapid growth of computer technology enabled us to use virtual reality (VR) for the treatment of the fear of public speaking. There have been two techniques used to construct a virtual environment for the treatment of the fear of public speaking: model-based and movie-based. Virtual audiences and virtual environments made by model-based technique are unrealistic and unnatural. The movie-based technique has a disadvantage in that each virtual audience cannot be controlled respectively, because all virtual audiences are included in one moving picture file. To address this disadvantage, this paper presents a virtual environment made by using image-based rendering (IBR) and chroma keying simultaneously. IBR enables us to make the virtual environment realistic because the images are stitched panoramically with the photos taken from a digital camera. And the use of chroma keying allows a virtual audience to be controlled individually. In addition, a real-time capture technique was applied in constructing the virtual environment to give the subjects more interaction, in that they can talk with a therapist or another subject.
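
    The chroma-keying step this abstract describes can be sketched as follows: foreground pixels close to the key colour (pure green) are replaced by the panoramic background, which is what lets each audience member be keyed in or out individually. The tiny frames and tolerance below are hypothetical; real systems operate on video frames with more robust colour-distance tests.

```python
# Minimal chroma-key compositing sketch over hypothetical 1x3 RGB frames.
KEY = (0, 255, 0)  # chroma-key colour (pure green)

def is_key(pixel, tol=60):
    """True if the pixel is within `tol` of the key colour on every channel."""
    return all(abs(p - k) <= tol for p, k in zip(pixel, KEY))

def composite(foreground, background):
    """Replace key-coloured foreground pixels with the background pixels."""
    return [[bg if is_key(fg) else fg
             for fg, bg in zip(f_row, b_row)]
            for f_row, b_row in zip(foreground, background)]

# Audience frame shot against a green screen (hypothetical pixel values).
audience = [[(0, 255, 0), (200, 180, 150), (0, 250, 10)]]
panorama = [[(30, 30, 90), (30, 30, 90), (30, 30, 90)]]

frame = composite(audience, panorama)
print(frame)  # green-screen pixels replaced by the panorama
```

    Because each audience member is captured as a separate keyed layer, switching a member on or off is just a matter of including or skipping that layer in the composite.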

  1. Usability Studies In Virtual And Traditional Computer Aided Design Environments For Spatial Awareness

    DTIC Science & Technology

    2017-08-08

    Usability Studies In Virtual And Traditional Computer Aided Design Environments For Spatial Awareness Dr. Syed Adeel Ahmed, Xavier University of...virtual environment with wand interfaces compared directly with a workstation non-stereoscopic traditional CAD interface with keyboard and mouse. In...navigate through a virtual environment. The wand interface provides a significantly improved means of interaction. This study quantitatively measures the

  2. A virtual therapeutic environment with user projective agents.

    PubMed

    Ookita, S Y; Tokuda, H

    2001-02-01

    Today, the Internet is more than just an information infrastructure: it is a socializing place and a safe outlet for inner feelings. Many personalities develop apart from real-world life due to its anonymous environment. Virtual world interactions are bringing about new psychological illnesses ranging from Internet addiction to technostress, as well as online personality disorders and conflicts among the multiple identities that exist in the virtual world. Presently, there are no standard therapy models for the virtual environment, and there are very few therapeutic environments or tools designed especially for virtual therapy. The goal of our research is to provide a therapy model and middleware tools for psychologists to use in virtual therapeutic environments. We propose the Cyber Therapy Model, and Projective Agents, a tool used in the therapeutic environment. To evaluate the effectiveness of the tool, we created a prototype system, called the Virtual Group Counseling System, a therapeutic environment that allows users to participate in group counseling through the eyes of their Projective Agent. Projective Agents inherit the user's personality traits. During virtual group counseling, the user's Projective Agent interacts and collaborates to support recovery and psychological growth. The prototype system provides a simulation environment where psychologists can adjust parameters and customize their own simulation environment. The model and tool are a first attempt toward simulating online personalities that may exist only online and providing data for observation.

  3. Distributed virtual environment for emergency medical training

    NASA Astrophysics Data System (ADS)

    Stytz, Martin R.; Banks, Sheila B.; Garcia, Brian W.; Godsell-Stytz, Gayl M.

    1997-07-01

    In many professions where individuals must work in a team in a high-stress environment to accomplish a time-critical task, individual and team performance can benefit from joint training using distributed virtual environments (DVEs). One professional field that lacks but needs a high-fidelity team training environment is emergency medicine. Currently, emergency department (ED) medical personnel train by using words to create a mental picture of a situation for the physician and staff, who then cooperate to solve the problems portrayed by the word picture. The need in emergency medicine for realistic virtual team training is critical because ED staff typically encounter rarely occurring but life-threatening situations only once in their careers and because ED teams currently have no realistic environment in which to practice their team skills. The resulting lack of experience and teamwork makes diagnosis and treatment more difficult. Virtual environment based training has the potential to redress these shortfalls. The objective of our research is to develop a state-of-the-art virtual environment for emergency medicine team training. The virtual emergency room (VER) allows ED physicians and medical staff to realistically prepare for emergency medical situations by performing triage, diagnosis, and treatment on virtual patients within an environment that provides them with the tools they require and the team environment they need to realistically perform these three tasks. There are several issues that must be addressed before this vision is realized. The key issues deal with distribution of computations; the doctor and staff interface to the virtual patient and ED equipment; the accurate simulation of individual patient organs' response to injury, medication, and treatment; and accurate modeling of the symptoms and appearance of the patient while maintaining a real-time interaction capability. Our ongoing work addresses all of these issues.
In this paper we report on our prototype VER system and its distributed system architecture for an emergency department distributed virtual environment for emergency medical staff training. The virtual environment enables emergency department physicians and staff to develop their diagnostic and treatment skills using the virtual tools they need to perform diagnostic and treatment tasks. Virtual human imagery, and real-time virtual human response are used to create the virtual patient and present a scenario. Patient vital signs are available to the emergency department team as they manage the virtual case. The work reported here consists of the system architectures we developed for the distributed components of the virtual emergency room. The architectures we describe consist of the network level architecture as well as the software architecture for each actor within the virtual emergency room. We describe the role of distributed interactive simulation and other enabling technologies within the virtual emergency room project.

  4. Shifty: A Weight-Shifting Dynamic Passive Haptic Proxy to Enhance Object Perception in Virtual Reality.

    PubMed

    Zenner, Andre; Kruger, Antonio

    2017-04-01

    We define the concept of Dynamic Passive Haptic Feedback (DPHF) for virtual reality by introducing Shifty, a weight-shifting physical DPHF proxy object. This concept combines actuators known from active haptics with physical proxies known from passive haptics to construct proxies that automatically adapt their passive haptic feedback. We describe the concept behind our ungrounded weight-shifting DPHF proxy Shifty and the implementation of our prototype. In two experiments, we then investigate how Shifty can, by automatically changing its internal weight distribution, enhance the user's perception of the virtual objects being interacted with. In the first experiment, we show that Shifty can enhance the perception of virtual objects changing in shape, especially in length and thickness; here, Shifty significantly increased the user's fun and perceived realism compared to an equivalent passive haptic proxy. In the second experiment, Shifty is used to pick up virtual objects of different virtual weights. The results show that Shifty enhances the perception of weight, and thus the perceived realism, by adapting its kinesthetic feedback to the picked-up virtual object. In the same experiment, we additionally show that specific combinations of haptic, visual and auditory feedback during the pick-up interaction help to compensate for the visual-haptic mismatch perceived during the shifting process.

  5. Ecological validity of virtual environments to assess human navigation ability

    PubMed Central

    van der Ham, Ineke J. M.; Faber, Annemarie M. E.; Venselaar, Matthijs; van Kreveld, Marc J.; Löffler, Maarten

    2015-01-01

    Route memory is frequently assessed in virtual environments. These environments can be presented in a fully controlled manner and are easy to use. Yet they lack the physical involvement that participants have when navigating real environments. For some aspects of route memory this may result in reduced performance in virtual environments. We assessed route memory performance in four different environments: real, virtual, virtual with directional information (compass), and hybrid. In the hybrid environment, participants walked the route outside on an open field, while all route information (i.e., path, landmarks) was shown simultaneously on a handheld tablet computer. Results indicate that performance in the real life environment was better than in the virtual conditions for tasks relying on survey knowledge, like pointing to start and end point, and map drawing. Performance in the hybrid condition however, hardly differed from real life performance. Performance in the virtual environment did not benefit from directional information. Given these findings, the hybrid condition may offer the best of both worlds: the performance level is comparable to that of real life for route memory, yet it offers full control of visual input during route learning. PMID:26074831
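
    The pointing task used to probe survey knowledge reduces to an angular-error computation: the difference between the participant's pointing direction and the true bearing to the start point. The coordinates and headings below are hypothetical values for illustration.

```python
import math

def pointing_error(position, heading_deg, target):
    """Absolute angular error (degrees) between a participant's pointing
    direction and the true bearing from their position to the target."""
    dx, dy = target[0] - position[0], target[1] - position[1]
    true_bearing = math.degrees(math.atan2(dy, dx))
    # Wrap the difference into (-180, 180] before taking the magnitude.
    err = (heading_deg - true_bearing + 180) % 360 - 180
    return abs(err)

# Hypothetical end-of-route test: the participant stands at (40, 10) m and
# points back toward the start of the route at the origin.
participant_at = (40.0, 10.0)
true_start = (0.0, 0.0)

print(pointing_error(participant_at, 200.0, true_start))  # small error
print(pointing_error(participant_at, 90.0, true_start))   # large error
```

    Averaging this error over trials gives the survey-knowledge score on which the real-life and hybrid conditions outperformed the purely virtual ones.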

  6. The Effect of Desktop Illumination Realism on a User's Sense of Presence in a Virtual Learning Environment

    ERIC Educational Resources Information Center

    Ehrlich, Justin

    2010-01-01

    The application of virtual reality is becoming ever more important as technology reaches new heights allowing virtual environments (VE) complete with global illumination. One successful application of virtual environments is educational interventions meant to treat individuals with autism spectrum disorder (ASD). VEs are effective with these…

  7. Virtual Virtuosos: A Case Study in Learning Music in Virtual Learning Environments in Spain

    ERIC Educational Resources Information Center

    Alberich-Artal, Enric; Sangra, Albert

    2012-01-01

    In recent years, the development of Information and Communication Technologies (ICT) has contributed to the generation of a number of interesting initiatives in the field of music education and training in virtual learning environments. However, music education initiatives employing virtual learning environments have replicated and perpetuated the…

  8. Potentiation of Chemical Ototoxicity by Noise

    PubMed Central

    Steyger, Peter S.

    2010-01-01

    High-intensity and/or prolonged exposure to noise causes temporary or permanent threshold shifts in auditory perception. Occupational exposure to solvents or administration of clinically important drugs, such as aminoglycoside antibiotics and cisplatin, also can induce permanent hearing loss. The mechanisms by which these ototoxic insults cause auditory dysfunction are still being unraveled, yet they share common sequelae, particularly generation of reactive oxygen species, that ultimately lead to hearing loss and deafness. Individuals are frequently exposed to ototoxic chemical contaminants (e.g., fuel) and noise simultaneously in a variety of work and recreational environments. Does simultaneous exposure to chemical ototoxins and noise potentiate auditory dysfunction? Exposure to solvent vapor in noisy environments potentiates the permanent threshold shifts induced by noise alone. Moderate noise levels potentiate both aminoglycoside- and cisplatin-induced ototoxicity, in both rate of onset and severity of auditory dysfunction. Thus, simultaneous exposure to chemical ototoxins and moderate levels of noise can potentiate auditory dysfunction. Preventing the ototoxic synergy of noise and chemical ototoxins requires removing exposure to ototoxins and/or attenuating noise exposure levels when chemical ototoxins are present. PMID:20523755

  9. Tools for evaluation of restriction on auditory participation: systematic review of the literature.

    PubMed

    Souza, Valquíria Conceição; Lemos, Stela Maris Aguiar

    2015-01-01

    To systematically review studies that used questionnaires for the evaluation of restriction on auditory participation in adults and the elderly. Studies from the last five years were selected through a bibliographic search of national and international journals in the following electronic databases: ISI Web of Science and the Virtual Health Library - BIREME, which includes the LILACS and MEDLINE databases. Inclusion criteria were studies available in full text; published in Portuguese, English, or Spanish; whose participants were adults and/or the elderly; and that used questionnaires for the evaluation of restriction on auditory participation. Initially, the studies were selected based on the reading of titles and abstracts. Then, the articles were read in full and the information was entered into the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) checklist. Three hundred seventy studies were found in the researched databases; 14 of these were excluded because they appeared in more than one database. The titles and abstracts of 356 articles were analyzed; 40 of them were selected for full reading, of which 26 were finally selected. In the present review, nine instruments were found for the evaluation of restriction on auditory participation. The most used questionnaires were the Hearing Handicap Inventory for the Elderly (HHIE), the Hearing Handicap Inventory for Adults (HHIA), and the Hearing Handicap Inventory for the Elderly - Screening (HHIE-S). Questionnaires on restriction on auditory participation can assist in validating decisions in audiology practice and be useful in hearing aid fitting and in assessing the results of aural rehabilitation.

  10. Transfer of motor learning from virtual to natural environments in individuals with cerebral palsy.

    PubMed

    de Mello Monteiro, Carlos Bandeira; Massetti, Thais; da Silva, Talita Dias; van der Kamp, John; de Abreu, Luiz Carlos; Leone, Claudio; Savelsbergh, Geert J P

    2014-10-01

    With the growing accessibility of computer-assisted technology, rehabilitation programs for individuals with cerebral palsy (CP) increasingly use virtual reality environments to enhance motor practice. Thus, it is important to examine whether performance improvements in the virtual environment generalize to the natural environment. To examine this issue, we had 64 individuals, 32 with CP and 32 typically developing, practice two coincidence-timing tasks. In the more tangible button-press task, the individuals were required to 'intercept' a falling virtual object at the moment it reached the interception point by pressing a key. In the more abstract, less tangible task, they were instructed to 'intercept' the virtual object by making a hand movement in a virtual environment. The results showed that individuals with CP timed less accurately than typically developing individuals, especially in the more abstract task in the virtual environment. The individuals with CP did improve coincidence timing with practice on both tasks, as did their typically developing peers. Importantly, however, these improvements were specific to the practice environment; there was no transfer of learning. It is concluded that the implementation of virtual environments for motor rehabilitation in individuals with CP should not be taken for granted but needs to be considered carefully. Copyright © 2014 Elsevier Ltd. All rights reserved.
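
    Coincidence timing in tasks like these is typically scored as the signed difference between the response time and the object's true arrival time at the interception point. A minimal sketch (all values hypothetical, not taken from the study):

```python
# Coincidence-timing sketch: a virtual object moves toward an interception
# point at constant speed; timing error is response time minus arrival time.

def arrival_time(distance_m, speed_m_s):
    """Time for an object moving at constant speed to reach the target."""
    return distance_m / speed_m_s

def timing_error_ms(response_s, distance_m, speed_m_s):
    """Signed timing error in ms (negative = early, positive = late)."""
    return (response_s - arrival_time(distance_m, speed_m_s)) * 1000.0

# Hypothetical trial: object falls 1.2 m at 0.8 m/s, arriving at 1.5 s.
print(timing_error_ms(1.46, 1.2, 0.8))  # about -40 ms: pressed early
print(timing_error_ms(1.55, 1.2, 0.8))  # about +50 ms: pressed late
```

    Averaging the magnitude of this error over trials yields the accuracy measure on which the CP and typically developing groups were compared.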

  11. Can virtual reality be used to conduct mass prophylaxis clinic training? A pilot program.

    PubMed

    Yellowlees, Peter; Cook, James N; Marks, Shayna L; Wolfe, Daniel; Mangin, Elanor

    2008-03-01

    To create and evaluate a pilot bioterrorism defense training environment using virtual reality technology. The present pilot project used Second Life, an internet-based virtual world system, to construct a virtual reality environment to mimic an actual setting that might be used as a Strategic National Stockpile (SNS) distribution site for northern California in the event of a bioterrorist attack. Scripted characters were integrated into the system as mock patients to analyze various clinic workflow scenarios. Users tested the virtual environment over two sessions. Thirteen users who toured the environment were asked to complete an evaluation survey. Respondents reported that the virtual reality system was relevant to their practice and had potential as a method of bioterrorism defense training. Computer simulations of bioterrorism defense training scenarios are feasible with existing personal computer technology. The use of internet-connected virtual environments holds promise for bioterrorism defense training. Recommendations are made for public health agencies regarding the implementation and benefits of using virtual reality for mass prophylaxis clinic training.

  12. Development of a virtual speaking simulator using Image Based Rendering.

    PubMed

    Lee, J M; Kim, H; Oh, M J; Ku, J H; Jang, D P; Kim, I Y; Kim, S I

    2002-01-01

    The fear of speaking is often cited as the world's most common social phobia. The rapid growth of computer technology has enabled the use of virtual reality (VR) for the treatment of the fear of public speaking. Two techniques have been used to build virtual environments for the treatment of this fear: model-based and movie-based methods. Both share the weakness that they are unrealistic and cannot be controlled individually. To overcome these disadvantages, this paper presents a virtual environment produced with Image Based Rendering (IBR) and chroma-keying used together. IBR enables the creation of realistic virtual environments in which photographs taken with a digital camera are stitched into panoramic images. Chroma-keying, in turn, places virtual audience members under individual control in the environment. In addition, a real-time capture technique is used in constructing the virtual environments, enabling spoken interaction between the subject and a therapist or another subject.
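
    The chroma-key step described above can be illustrated with a small sketch. The function below is not the authors' implementation; it is a minimal pure-Python illustration of the general technique, assuming images represented as nested lists of RGB tuples and a green key colour:

```python
def chroma_key_composite(foreground, background, key=(0, 255, 0), tol=80):
    """Composite a chroma-keyed foreground over a background image.

    Images are lists of rows of (r, g, b) tuples of equal shape. Pixels
    whose Euclidean RGB distance to `key` is below `tol` are treated as
    the key colour and replaced by the corresponding background pixel.
    """
    out = []
    for fg_row, bg_row in zip(foreground, background):
        row = []
        for fg_px, bg_px in zip(fg_row, bg_row):
            # squared distance avoids a needless sqrt
            d2 = sum((a - b) ** 2 for a, b in zip(fg_px, key))
            row.append(bg_px if d2 < tol * tol else fg_px)
        out.append(row)
    return out
```

    In a production system the mask would typically be softened at its edges and computed on GPU, but the per-pixel decision is the same.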

  13. Cross-modal links among vision, audition, and touch in complex environments.

    PubMed

    Ferris, Thomas K; Sarter, Nadine B

    2008-02-01

    This study sought to determine whether performance effects of cross-modal spatial links that were observed in earlier laboratory studies scale to more complex environments and need to be considered in multimodal interface design. It also revisits the unresolved issue of cross-modal cuing asymmetries. Previous laboratory studies employing simple cues, tasks, and/or targets have demonstrated that the efficiency of processing visual, auditory, and tactile stimuli is affected by the modality, lateralization, and timing of surrounding cues. Very few studies have investigated these cross-modal constraints in the context of more complex environments to determine whether they scale and how complexity affects the nature of cross-modal cuing asymmetries. A microworld simulation of battlefield operations with a complex task set and meaningful visual, auditory, and tactile stimuli was used to investigate cuing effects for all cross-modal pairings. Significant asymmetric performance effects of cross-modal spatial links were observed. Auditory cues shortened response latencies for collocated visual targets but visual cues did not do the same for collocated auditory targets. Responses to contralateral (rather than ipsilateral) targets were faster for tactually cued auditory targets and each visual-tactile cue-target combination, suggesting an inhibition-of-return effect. The spatial relationships between multimodal cues and targets significantly affect target response times in complex environments. The performance effects of cross-modal links and the observed cross-modal cuing asymmetries need to be examined in more detail and considered in future interface design. The findings from this study have implications for the design of multimodal and adaptive interfaces and for supporting attention management in complex, data-rich domains.

  14. Should visual speech cues (speechreading) be considered when fitting hearing aids?

    NASA Astrophysics Data System (ADS)

    Grant, Ken

    2002-05-01

    When talker and listener are face-to-face, visual speech cues become an important part of the communication environment, and yet, these cues are seldom considered when designing hearing aids. Models of auditory-visual speech recognition highlight the importance of complementary versus redundant speech information for predicting auditory-visual recognition performance. Thus, for hearing aids to work optimally when visual speech cues are present, it is important to know whether the cues provided by amplification and the cues provided by speechreading complement each other. In this talk, data will be reviewed that show nonmonotonicity between auditory-alone speech recognition and auditory-visual speech recognition, suggesting that efforts designed solely to improve auditory-alone recognition may not always result in improved auditory-visual recognition. Data will also be presented showing that one of the most important speech cues for enhancing auditory-visual speech recognition performance, voicing, is often the cue that benefits least from amplification.

  15. Open Source Meets Virtual Reality--An Instructor's Journey Unearths New Opportunities for Learning, Community, and Academia

    ERIC Educational Resources Information Center

    O'Connor, Eileen A.

    2015-01-01

    Opening with the history, recent advances, and emerging ways to use avatar-based virtual reality, an instructor who has used virtual environments since 2007 shares how these environments bring more options to community building, teaching, and education. With the open-source movement, where the source code for virtual environments was made…

  16. Perturbed Communication in a Virtual Environment to Train Medical Team Leaders.

    PubMed

    Huguet, Lauriane; Lourdeaux, Domitile; Sabouret, Nicolas; Ferrer, Marie-Hélène

    2016-01-01

    The VICTEAMS project aims at designing a virtual environment for training medical team leaders in non-technical skills. The virtual environment is populated with autonomous virtual agents that are able to make mistakes (in action or communication) in order to train rescue team leaders and make them adaptable to all kinds of situations and teams.

  17. Fit for the frontline? identification of mission-critical auditory tasks (MCATs) carried out by infantry and combat-support personnel.

    PubMed

    Semeraro, Hannah D; Bevis, Zoë L; Rowan, Daniel; van Besouw, Rachel M; Allsopp, Adrian J

    2015-01-01

    The ability to listen to commands in noisy environments and understand acoustic signals, while maintaining situational awareness, is an important skill for military personnel and can be critical for mission success. Seventeen auditory tasks carried out by British infantry and combat-support personnel were identified through a series of focus groups conducted by Bevis et al. For military personnel, these auditory tasks are termed mission-critical auditory tasks (MCATs) if they are carried out in a military-specific environment and have a negative consequence when performed below a specified level. A questionnaire study was conducted to find out which of the auditory tasks identified by Bevis et al. satisfy the characteristics of an MCAT. Seventy-nine British infantry and combat-support personnel from four regiments across the South of England participated. For each auditory task participants indicated: 1) the consequences of poor performance on the task, 2) who performs the task, and 3) how frequently the task is carried out. The data were analysed to determine which tasks are carried out by which personnel, which have the most negative consequences when performed poorly, and which are performed the most frequently. This resulted in a list of 9 MCATs (7 speech communication tasks, 1 sound localization task, and 1 sound detection task) that should be prioritised for representation in a measure of auditory fitness for duty (AFFD) for these personnel. Incorporating MCATs in AFFD measures will help to ensure that personnel have the necessary auditory skills for safe and effective deployment on operational duties.

  18. Fit for the frontline? Identification of mission-critical auditory tasks (MCATs) carried out by infantry and combat-support personnel

    PubMed Central

    Semeraro, Hannah D.; Bevis, Zoë L.; Rowan, Daniel; van Besouw, Rachel M.; Allsopp, Adrian J.

    2015-01-01

    The ability to listen to commands in noisy environments and understand acoustic signals, while maintaining situational awareness, is an important skill for military personnel and can be critical for mission success. Seventeen auditory tasks carried out by British infantry and combat-support personnel were identified through a series of focus groups conducted by Bevis et al. For military personnel, these auditory tasks are termed mission-critical auditory tasks (MCATs) if they are carried out in a military-specific environment and have a negative consequence when performed below a specified level. A questionnaire study was conducted to find out which of the auditory tasks identified by Bevis et al. satisfy the characteristics of an MCAT. Seventy-nine British infantry and combat-support personnel from four regiments across the South of England participated. For each auditory task participants indicated: 1) the consequences of poor performance on the task, 2) who performs the task, and 3) how frequently the task is carried out. The data were analysed to determine which tasks are carried out by which personnel, which have the most negative consequences when performed poorly, and which are performed the most frequently. This resulted in a list of 9 MCATs (7 speech communication tasks, 1 sound localization task, and 1 sound detection task) that should be prioritised for representation in a measure of auditory fitness for duty (AFFD) for these personnel. Incorporating MCATs in AFFD measures will help to ensure that personnel have the necessary auditory skills for safe and effective deployment on operational duties. PMID:25774613

  19. Neural coding strategies in auditory cortex.

    PubMed

    Wang, Xiaoqin

    2007-07-01

    In contrast to the visual system, the auditory system has longer subcortical pathways and more spiking synapses between the peripheral receptors and the cortex. This unique organization reflects the needs of the auditory system to extract behaviorally relevant information from a complex acoustic environment using strategies different from those used by other sensory systems. The neural representations of acoustic information in auditory cortex can be characterized by three types: (1) isomorphic (faithful) representations of acoustic structures; (2) non-isomorphic transformations of acoustic features and (3) transformations from acoustical to perceptual dimensions. The challenge facing auditory neurophysiologists is to understand the nature of the latter two transformations. In this article, I will review recent studies from our laboratory regarding temporal discharge patterns in auditory cortex of awake marmosets and cortical representations of time-varying signals. Findings from these studies show that (1) firing patterns of neurons in auditory cortex are dependent on stimulus optimality and context and (2) the auditory cortex forms internal representations of sounds that are no longer faithful replicas of their acoustic structures.

  20. Aerospace applications of virtual environment technology.

    PubMed

    Loftin, R B

    1996-11-01

    The uses of virtual environment technology in the space program are examined with emphasis on training for the Hubble Space Telescope Repair and Maintenance Mission in 1993. Project ScienceSpace at the Virtual Environment Technology Lab is discussed.

  1. Electrophysiological measurement of interest during walking in a simulated environment.

    PubMed

    Takeda, Yuji; Okuma, Takashi; Kimura, Motohiro; Kurata, Takeshi; Takenaka, Takeshi; Iwaki, Sunao

    2014-09-01

    A reliable neuroscientific technique for objectively estimating the degree of interest in a real environment is currently required in the research fields of neuroergonomics and neuroeconomics. Toward the development of such a technique, the present study explored electrophysiological measures that reflect an observer's interest in a nearly-real visual environment. Participants were asked to walk through a simulated shopping mall and the attractiveness of the shopping mall was manipulated by opening and closing the shutters of stores. During the walking task, participants were exposed to task-irrelevant auditory probes (two-stimulus oddball sequence). The results showed a smaller P2/early P3a component of task-irrelevant auditory event-related potentials and a larger lambda response of eye-fixation-related potentials in an interesting environment (i.e., open-shutter condition) than in a boring environment (i.e., closed-shutter condition); these findings can be reasonably explained by supposing that participants allocated more attentional resources to visual information in an interesting environment than in a boring environment, and thus residual attentional resources that could be allocated to task-irrelevant auditory probes were reduced. The P2/early P3a component and the lambda response may be useful measures of interest in a real visual environment. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Social Interaction Development through Immersive Virtual Environments

    ERIC Educational Resources Information Center

    Beach, Jason; Wendt, Jeremy

    2014-01-01

    The purpose of this pilot study was to determine if participants could improve their social interaction skills by participating in a virtual immersive environment. The participants used a developing virtual reality head-mounted display to engage themselves in a fully-immersive environment. While in the environment, participants had an opportunity…

  3. Virtual environments simulation in research reactor

    NASA Astrophysics Data System (ADS)

    Muhamad, Shalina Bt. Sheik; Bahrin, Muhammad Hannan Bin

    2017-01-01

    Virtual reality based simulations are interactive and engaging, and have useful potential for improving safety training. Virtual reality technology can be used to train workers who are unfamiliar with the physical layout of an area. In this study, a simulation program based on a virtual environment of a research reactor was developed. The platform used for the virtual simulation is the 3DVia software, whose rendering capabilities, physics for movement and collision, and interactive navigation features have been taken advantage of. A real research reactor was virtually modelled and simulated, with avatar models adopted to simulate walking. Collision detection algorithms were developed for various parts of the 3D building and the avatars to restrain the avatars to certain regions of the virtual environment. A user can control an avatar to move around inside the virtual environment. This work can thus assist in the training of personnel, as well as in evaluating the radiological safety of the research reactor facility.
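
    The region-restraint idea described in the abstract can be sketched simply. The code below is not the 3DVia implementation; it is a minimal illustration, under the assumption of a 2D floor plan with axis-aligned walkable and obstacle boxes, of how an avatar's step can be rejected when it would leave the permitted region or enter an obstacle:

```python
from dataclasses import dataclass


@dataclass
class AABB:
    """Axis-aligned box on the floor plan (x and z extents)."""
    xmin: float
    xmax: float
    zmin: float
    zmax: float

    def contains(self, x, z):
        return self.xmin <= x <= self.xmax and self.zmin <= z <= self.zmax


def step_avatar(pos, delta, walkable, obstacles):
    """Move the avatar by `delta` only if the new position stays inside
    the walkable region and outside every obstacle box; otherwise the
    avatar stays where it is."""
    nx, nz = pos[0] + delta[0], pos[1] + delta[1]
    if not walkable.contains(nx, nz):
        return pos                       # would leave the permitted region
    if any(ob.contains(nx, nz) for ob in obstacles):
        return pos                       # would walk into a wall or object
    return (nx, nz)
```

    Real engines test swept volumes rather than points, but the reject-the-step structure is the same.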

  4. SoundView: an auditory guidance system based on environment understanding for the visually impaired people.

    PubMed

    Nie, Min; Ren, Jie; Li, Zhengjun; Niu, Jinhai; Qiu, Yihong; Zhu, Yisheng; Tong, Shanbao

    2009-01-01

    Without visual information, blind people face hardships with shopping, reading, finding objects, and many other daily tasks. We therefore developed a portable auditory guidance system, called SoundView, for visually impaired people. This prototype system consists of a mini-CCD camera, a digital signal processing unit, and an earphone, working with built-in customizable auditory coding algorithms. Employing environment understanding techniques, SoundView processes the images from the camera and detects objects tagged with barcodes. The recognized objects in the environment are then encoded into stereo speech signals delivered to the blind user through the earphone. The user is able to recognize the type, motion state, and location of objects of interest with the help of SoundView. Compared with other visual assistance techniques, SoundView is object-oriented and has the advantages of low cost, small size, light weight, low power consumption, and easy customization.
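
    The idea of encoding an object's location into a stereo signal can be sketched as follows. This is not SoundView's actual coding algorithm (which is customizable and not specified in the abstract); it is a hypothetical illustration that maps azimuth to a constant-power level pan and distance to attenuation:

```python
import math


def encode_object_stereo(azimuth_deg, distance_m, freq_hz=440.0,
                         dur_s=0.2, rate=8000):
    """Encode an object's horizontal direction and range as a stereo tone.

    Direction -> constant-power pan (an interaural level difference);
    distance  -> overall 1/d attenuation. Azimuth is in [-90, 90]
    degrees, negative meaning left. Returns (left, right) sample lists.
    """
    pan = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2)   # 0 .. pi/2
    l_gain = math.cos(pan) / max(distance_m, 1.0)
    r_gain = math.sin(pan) / max(distance_m, 1.0)
    n = int(dur_s * rate)
    tone = [math.sin(2 * math.pi * freq_hz * i / rate) for i in range(n)]
    return [l_gain * s for s in tone], [r_gain * s for s in tone]
```

    A fuller sonification would also vary timbre or speech content by object type; only the spatial part is sketched here.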

  5. Magnetic resonance imaging of the saccular otolithic mass.

    PubMed Central

    Sbarbati, A; Leclercq, F; Antonakis, K; Osculati, F

    1992-01-01

    The frog's inner ear was studied in vivo by high spatial resolution magnetic resonance imaging at 7 Tesla. The vestibule, the internal acoustic meatus, and the auditory tube were identified. The large otolithic mass contained in the vestibule showed a virtual absence of magnetic resonance signal, probably due to its composition of closely packed otoconia. PMID:1295875

  6. Evaluation of the cognitive effects of travel technique in complex real and virtual environments.

    PubMed

    Suma, Evan A; Finkelstein, Samantha L; Reid, Myra; V Babu, Sabarish; Ulinski, Amy C; Hodges, Larry F

    2010-01-01

    We report a series of experiments conducted to investigate the effects of travel technique on information gathering and cognition in complex virtual environments. In the first experiment, participants completed a non-branching multilevel 3D maze at their own pace using either real walking or one of two virtual travel techniques. In the second experiment, we constructed a real-world maze with branching pathways and modeled an identical virtual environment. Participants explored either the real or virtual maze for a predetermined amount of time using real walking or a virtual travel technique. Our results across experiments suggest that for complex environments requiring a large number of turns, virtual travel is an acceptable substitute for real walking if the goal of the application involves learning or reasoning based on information presented in the virtual world. However, for applications that require fast, efficient navigation or travel that closely resembles real-world behavior, real walking has advantages over common joystick-based virtual travel techniques.

  7. Butterfly valve in a virtual environment

    NASA Astrophysics Data System (ADS)

    Talekar, Aniruddha; Patil, Saurabh; Thakre, Prashant; Rajkumar, E.

    2017-11-01

    Assembly of components is one of the processes involved in product design and development. The present paper deals with the assembly of the components of a simple butterfly valve in a virtual environment. The assembly has been carried out in virtual reality software by trial and error. The parts are modelled in parametric software (SolidWorks), meshed accordingly, and then imported into the virtual environment for assembly.

  8. Ergonomic aspects of a virtual environment.

    PubMed

    Ahasan, M R; Väyrynen, S

    1999-01-01

    A virtual environment is an interactive graphic system, mediated through computer technology, that allows a certain level of reality, or a sense of presence, when accessing virtual information. This paper explores the ergonomics issues involved in creating reality in a virtual environment, with the aim of developing presentation formats, together with their related information, that achieve and maintain user-friendly applications.

  9. The feasibility and acceptability of virtual environments in the treatment of childhood social anxiety disorder.

    PubMed

    Sarver, Nina Wong; Beidel, Deborah C; Spitalnick, Josh S

    2014-01-01

    Two significant challenges for the dissemination of social skills training programs are the need to assure generalizability and to provide sufficient practice opportunities. In the case of social anxiety disorder, virtual environments may provide one strategy to address these issues. This study evaluated the utility of an interactive virtual school environment for the treatment of social anxiety disorder in preadolescent children. Eleven children with a primary diagnosis of social anxiety disorder, between 8 and 12 years of age, participated in this initial feasibility trial. All children were treated with Social Effectiveness Therapy for Children, an empirically supported treatment for children with social anxiety disorder; however, the in vivo peer generalization sessions and standard parent-assisted homework assignments were replaced by practice in a virtual environment. Overall, the virtual environment programs were acceptable, feasible, and credible treatment components. Both children and clinicians were satisfied with the virtual environment technology, and children believed it was a high-quality program overall. In addition, parents were satisfied with the virtual environment augmented treatment and indicated that they would recommend the program to family and friends. Findings indicate that virtual environments are viewed as acceptable and credible by potential recipients. Furthermore, they are easy to implement by even novice users and appear to be useful adjunctive elements for the treatment of childhood social anxiety disorder.

  10. Human Rights and Private Ordering in Virtual Worlds

    NASA Astrophysics Data System (ADS)

    Oosterbaan, Olivier

    This paper explores the application of human rights in (persistent) virtual world environments. The paper begins by describing a number of elements that most virtual environments share and that are relevant for the application of human rights in such a setting, and by describing in general terms the application of human rights between private individuals. The paper then discusses the application in virtual environments of two universally recognized human rights, namely freedom of expression and freedom from discrimination. As these specific rights are discussed, a number of more general conclusions on the application of human rights in virtual environments are drawn. The first general conclusion is that, because virtual worlds are private environments, participants are subject to private ordering. The second is that participants and non-participants alike have to accept at times that in-world expressions are, to an extent, private speech. The third is that, where participants represent themselves in-world, other participants cannot assume that such in-world representations share the characteristics of the human player, and that, where virtual environments contain game elements, participants and non-participants alike should not take everything that happens in the virtual environment at face value or literally. This does not, however, amount to having to accept a higher level of infringement on their rights for things that happen in such an environment.

  11. The feasibility and acceptability of virtual environments in the treatment of childhood social anxiety disorder

    PubMed Central

    Wong, Nina; Beidel, Deborah C.; Spitalnick, Josh

    2013-01-01

    Objective Two significant challenges for the dissemination of social skills training programs are the need to assure generalizability and to provide sufficient practice opportunities. In the case of social anxiety disorder, virtual environments may provide one strategy to address these issues. This study evaluated the utility of an interactive virtual school environment for the treatment of social anxiety disorder in preadolescent children. Method Eleven children with a primary diagnosis of social anxiety disorder, between 8 and 12 years of age, participated in this initial feasibility trial. All children were treated with Social Effectiveness Therapy for Children, an empirically supported treatment for children with social anxiety disorder; however, the in vivo peer generalization sessions and standard parent-assisted homework assignments were replaced by practice in a virtual environment. Results Overall, the virtual environment programs were acceptable, feasible, and credible treatment components. Both children and clinicians were satisfied with the virtual environment technology, and children believed it was a high-quality program overall. Additionally, parents were satisfied with the virtual environment augmented treatment and indicated that they would recommend the program to family and friends. Conclusion Virtual environments are viewed as acceptable and credible by potential recipients. Furthermore, they are easy to implement by even novice users and appear to be useful adjunctive elements for the treatment of childhood social anxiety disorder. PMID:24144182

  12. A synthetic computational environment: To control the spread of respiratory infections in a virtual university

    NASA Astrophysics Data System (ADS)

    Ge, Yuanzheng; Chen, Bin; Liu, Liang; Qiu, Xiaogang; Song, Hongbin; Wang, Yong

    2018-02-01

    Individual-based computational environment provides an effective solution to study complex social events by reconstructing scenarios. Challenges remain in reconstructing the virtual scenarios and reproducing the complex evolution. In this paper, we propose a framework to reconstruct a synthetic computational environment, reproduce the epidemic outbreak, and evaluate management interventions in a virtual university. The reconstructed computational environment includes 4 fundamental components: the synthetic population, behavior algorithms, multiple social networks, and geographic campus environment. In the virtual university, influenza H1N1 transmission experiments are conducted, and gradually enhanced interventions are evaluated and compared quantitatively. The experiment results indicate that the reconstructed virtual environment provides a solution to reproduce complex emergencies and evaluate policies to be executed in the real world.
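
    The kind of experiment described, reproducing an outbreak and quantitatively comparing interventions, can be sketched with a toy model. The code below is not the authors' synthetic environment (which includes behavior algorithms, multiple social networks, and campus geography); it is a minimal homogeneous-mixing agent-based SIR sketch in which an assumed `contact_scale` parameter stands in for an intervention that reduces daily contacts:

```python
import random


def simulate_outbreak(n=200, contacts_per_day=8, p_transmit=0.05,
                      recovery_days=7, days=60, contact_scale=1.0, seed=1):
    """Toy agent-based SIR outbreak with homogeneous random mixing.

    `contact_scale` < 1 models an intervention (e.g. suspending classes)
    that cuts daily contacts. Returns the number of agents ever infected.
    """
    rng = random.Random(seed)
    S = set(range(1, n))                 # susceptible agents
    I = {0: recovery_days}               # infected agent -> days remaining
    R = set()                            # recovered agents
    for _day in range(days):
        newly = set()
        k = max(0, round(contacts_per_day * contact_scale))
        for _infector in list(I):
            for _contact in range(k):
                other = rng.randrange(n)
                if other in S and rng.random() < p_transmit:
                    newly.add(other)
        for a in list(I):                # progress existing infections
            I[a] -= 1
            if I[a] == 0:
                del I[a]
                R.add(a)
        for a in newly:                  # seed today's new infections
            S.discard(a)
            I[a] = recovery_days
    return n - len(S)
```

    With these (made-up) parameters the unmitigated basic reproduction number is roughly 0.05 x 8 x 7 = 2.8, so cutting contacts to a quarter pushes it below 1 and the simulated outbreak shrinks markedly.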

  13. The Use of Music and Other Forms of Organized Sound as a Therapeutic Intervention for Students with Auditory Processing Disorder: Providing the Best Auditory Experience for Children with Learning Differences

    ERIC Educational Resources Information Center

    Faronii-Butler, Kishasha O.

    2013-01-01

    This auto-ethnographical inquiry used vignettes and interviews to examine the therapeutic use of music and other forms of organized sound in the learning environment of individuals with Central Auditory Processing Disorders. It is an investigation of the traditions of healing with sound vibrations, from its earliest cultural roots in shamanism and…

  14. Virtual Scavenger Hunt: An AI-Powered Virtual Environment Designed for Training Individuals in Effective Teamwork, and Analyzing Cross-Cultural Behavior

    DTIC Science & Technology

    2009-03-20

    involved the development of an environment within the Multiverse virtual world, oriented toward allowing individuals to acquire and reinforce skills via...PetBrain software G2: Creation of a scavenger hunt scenario in the Multiverse virtual world, in which humans and AIs can collaboratively play scavenger...carried out by Novamente LLC for AOARD during June 2008 ? February 2009. It involved the development of an environment within the Multiverse virtual world

  15. Validation of smoking-related virtual environments for cue exposure therapy.

    PubMed

    García-Rodríguez, Olaya; Pericot-Valverde, Irene; Gutiérrez-Maldonado, José; Ferrer-García, Marta; Secades-Villa, Roberto

    2012-06-01

    Craving is considered one of the main factors responsible for relapse after smoking cessation. Cue exposure therapy (CET) consists of controlled and repeated exposure to drug-related stimuli in order to extinguish associated responses. The main objective of this study was to assess the validity of 7 virtual reality environments for producing craving in smokers that can be used within the CET paradigm. Forty-six smokers and 44 never-smokers were exposed to 7 complex virtual environments with smoking-related cues that reproduce typical situations in which people smoke, and to a neutral virtual environment without smoking cues. Self-reported subjective craving and psychophysiological measures were recorded during the exposure. All virtual environments with smoking-related cues were able to generate subjective craving in smokers, while no increase was observed for the neutral environment. The most sensitive psychophysiological variable to craving increases was heart rate. The findings provide evidence of the utility of virtual reality for simulating real situations capable of eliciting craving. We also discuss how CET for smoking cessation can be improved through these virtual tools. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. Pinniped Hearing in Complex Acoustic Environments

    DTIC Science & Technology

    2013-09-30

    published] Mulsow, J. & Reichmuth, C. (2013). The binaural click-evoked auditory brainstem response of the California sea lion (Zalophus...California sea lion can keep the beat : Motor entrainment to rhythmic auditory stimuli in a non vocal mimic. Journal of Comparative Psychology, online first. [published

  17. Separation of concurrent broadband sound sources by human listeners

    NASA Astrophysics Data System (ADS)

    Best, Virginia; van Schaik, André; Carlile, Simon

    2004-01-01

    The effect of spatial separation on the ability of human listeners to resolve a pair of concurrent broadband sounds was examined. Stimuli were presented in a virtual auditory environment using individualized outer ear filter functions. Subjects were presented with two simultaneous noise bursts that were either spatially coincident or separated (horizontally or vertically), and responded as to whether they perceived one or two source locations. Testing was carried out at five reference locations on the audiovisual horizon (0°, 22.5°, 45°, 67.5°, and 90° azimuth). Results from experiment 1 showed that at more lateral locations, a larger horizontal separation was required for the perception of two sounds. The reverse was true for vertical separation. Furthermore, it was observed that subjects were unable to separate stimulus pairs if they delivered the same interaural differences in time (ITD) and level (ILD). These findings suggested that the auditory system exploited differences in one or both of the binaural cues to resolve the sources, and could not use monaural spectral cues effectively for the task. In experiments 2 and 3, separation of concurrent noise sources was examined upon removal of low-frequency content (and ITDs), onset/offset ITDs, both of these in conjunction, and all ITD information. While onset and offset ITDs did not appear to play a major role, differences in ongoing ITDs were robust cues for separation under these conditions, including those in the envelopes of high-frequency channels.
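
    The ongoing-ITD cue discussed above can be illustrated computationally. The sketch below is illustrative rather than a model of the auditory system: it estimates the interaural time difference between two ear signals by brute-force cross-correlation over candidate lags:

```python
def estimate_itd(left, right, rate):
    """Estimate the interaural time difference between two ear signals.

    Searches lags up to a quarter of the signal length and returns the
    lag (in seconds) at which the right signal best matches the left;
    a positive value means the sound reached the left ear first.
    """
    n = len(left)
    max_lag = n // 4
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        # overlap region where both indices are in range
        lo, hi = max(0, -lag), min(n, n - lag)
        score = sum(left[i] * right[i + lag] for i in range(lo, hi))
        if score > best_score:
            best_score, best_lag = score, lag
    return best_lag / rate
```

    Real estimators work per frequency channel and interpolate around the peak; this brute-force version only shows the principle.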

  18. Distracting people from sources of discomfort in a simulated aircraft environment.

    PubMed

    Lewis, Laura; Patel, Harshada; Cobb, Sue; D'Cruz, Mirabelle; Bues, Matthias; Stefani, Oliver; Grobler, Tredeaux

    2016-07-19

    Comfort is an important factor in the acceptance of transport systems. In 2010 and 2011, the European Commission (EC) put forward its vision for air travel in the year 2050, which envisaged the use of in-flight virtual reality. This paper addresses the EC vision by investigating the effect of virtual environments on comfort. Research has shown that virtual environments can provide entertaining experiences and can be effective distracters from painful experiences. The objective was to determine the extent to which a virtual environment could distract people from sources of discomfort. Experiments were conducted in which discomfort commonly experienced in-flight (e.g. limited space, noise) was induced, in order to determine the extent to which viewing a virtual environment could distract people from it. Virtual environments can fully or partially distract people from sources of discomfort, becoming more effective when they are interesting. They are also more effective at distracting people from discomfort caused by restricted space than from noise disturbances. Virtual environments thus have the potential to enhance passenger comfort by providing positive distractions from sources of discomfort. Further research is required to understand more fully why the effect was stronger for one source of discomfort than the other.

  19. Virtual Auditory Space Training-Induced Changes of Auditory Spatial Processing in Listeners with Normal Hearing.

    PubMed

    Nisha, Kavassery Venkateswaran; Kumar, Ajith Uppunda

    2017-04-01

    Localization involves the processing of subtle yet highly enriched monaural and binaural spatial cues. Remediation programs aimed at resolving spatial deficits are surprisingly scarce in the literature. The present study was designed to explore the changes that occur in the spatial performance of normal-hearing listeners before and after subjecting them to a virtual acoustic space (VAS) training paradigm, using behavioral and electrophysiological measures. Ten normal-hearing listeners participated in the study, which was conducted in three phases: pre-training, training, and post-training. At the pre- and post-training phases, both behavioral measures of spatial acuity and the electrophysiological P300 were administered. The spatial acuity of the participants was measured in the free field and the closed field, and their binaural processing abilities were also quantified. The training phase consisted of 5-8 sessions (20 min each) carried out using a hierarchy of graded VAS stimuli. Descriptive statistics indicated an improvement in all the spatial acuity measures in the post-training phase. Statistically significant changes were noted in interaural time difference (ITD) and virtual acoustic space identification scores measured in the post-training phase. Effect sizes (r) for all of these measures were substantially large, indicating their clinical relevance in documenting the impact of training. However, the same was not reflected in the P300. On this preliminary basis, the training protocol used in the present study proves effective in normal-hearing listeners, and its implications can be extended to other clinical populations as well.

  20. A Novel Treatment of Fear of Flying Using a Large Virtual Reality System.

    PubMed

    Czerniak, Efrat; Caspi, Asaf; Litvin, Michal; Amiaz, Revital; Bahat, Yotam; Baransi, Hani; Sharon, Hanania; Noy, Shlomo; Plotnik, Meir

    2016-04-01

    Fear of flying (FoF), a common phobia in the developed world, is usually treated with cognitive behavioral therapy, most efficiently when combined with exposure methods, e.g., virtual reality exposure therapy (VRET). We evaluated FoF treatment using VRET in a large motion-based VR system. The treated subjects were seated on a moving platform. The virtual scenery included the interior of an aircraft and a window view of the outside world, accompanied by platform movements simulating, e.g., takeoff, landing, and air turbulence. Relevant auditory stimuli were also incorporated. Three male patients with FoF underwent a clinical interview followed by three VRET sessions in the presence and with the guidance of a therapist. Scores on the Flight Anxiety Situation (FAS) and Flight Anxiety Modality (FAM) questionnaires were obtained on the first and fourth visits. Anxiety levels were assessed using the subjective units of distress (SUDs) scale during the exposure. All three subjects expressed satisfaction with the procedure and did not skip or avoid any of its stages. Consistent improvement was seen in the SUDs throughout each VRET session and across sessions, while patients' scores on the FAS and FAM showed inconsistent trends. Two patients participated in actual flights in the months following the treatment, bringing 12 and 16 yr of avoidance to an end. This VR-based treatment incorporates critical elements of the flying experience, beyond visual and auditory stimuli, into the exposure. The current case reports suggest VRET sessions may have a meaningful impact on anxiety levels, yet additional research seems warranted.

  1. Virtual environment architecture for rapid application development

    NASA Technical Reports Server (NTRS)

    Grinstein, Georges G.; Southard, David A.; Lee, J. P.

    1993-01-01

    We describe the MITRE Virtual Environment Architecture (VEA), a product of nearly two years of investigations and prototypes of virtual environment technology. This paper discusses the requirements for rapid prototyping, and an architecture we are developing to support virtual environment construction. VEA supports rapid application development by providing a variety of pre-built modules that can be reconfigured for each application session. The modules supply interfaces for several types of interactive I/O devices, in addition to large-screen or head-mounted displays.

  2. Clandestine Message Passing in Virtual Environments

    DTIC Science & Technology

    2008-09-01

    accessed April 4, 2008). Weir, Laila. "Boring Game? Outsource It." (August 24, 2004). http://www.wired.com/entertainment/music/news/2004/08/64638... ...Multiplayer Online; MOVES - Modeling Virtual Environments and Simulation; MTV - Music Television; NPS - Naval Postgraduate School; PAN - Personal Area Network; PSP - PlayStation Portable; RPG - Role-playing Game; SL - Second Life; SVN - Subversion; VE - Virtual Environments; vMTV - Virtual Music...

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruotolo, Francesco, E-mail: francesco.ruotolo@unina2.it; Maffei, Luigi, E-mail: luigi.maffei@unina2.it; Di Gabriele, Maria, E-mail: maria.digabriele@unina2.it

    Several international studies have shown that traffic noise has a negative impact on people's health and that people's annoyance does not depend only on noise energetic levels, but rather on multi-perceptual factors. The combination of virtual reality technology and audio rendering techniques allows us to experiment with a new approach to environmental noise assessment that can help to investigate in advance the potential negative effects of noise associated with a specific project, and that in turn can help designers to make educated decisions. In the present study, the audio-visual impact of a new motorway project on people has been assessed by means of immersive virtual reality technology. In particular, participants were exposed to 3D reconstructions of an actual landscape without the projected motorway (ante operam condition), and of the same landscape with the projected motorway (post operam condition). Furthermore, individuals' reactions to noise were assessed by means of objective cognitive measures (short-term verbal memory and executive functions) and subjective evaluations (noise and visual annoyance). Overall, the results showed that the introduction of a projected motorway in the environment can have immediate detrimental effects on people's well-being, depending on the distance from the noise source. In particular, noise due to the new infrastructure seems to exert a negative influence on short-term verbal memory and to increase both visual and noise annoyance. The theoretical and practical implications of these findings are discussed. -- Highlights: ► Impact of traffic noise on people's well-being depends on multi-perceptual factors. ► A multisensory virtual reality technology is used to simulate a projected motorway. ► Effects on short-term memory and auditory and visual subjective annoyance were found. ► The closer the distance from the motorway, the stronger the effect. ► Multisensory virtual reality methodologies can be used to study environmental impact.

  4. Brain activity during a lower limb functional task in a real and virtual environment: A comparative study.

    PubMed

    Pacheco, Thaiana Barbosa Ferreira; Oliveira Rego, Isabelle Ananda; Campos, Tania Fernandes; Cavalcanti, Fabrícia Azevedo da Costa

    2017-01-01

    Virtual Reality (VR) has been contributing to Neurological Rehabilitation because of its interactive and multisensory nature, providing the potential of brain reorganization. Given the availability of mobile EEG devices, it is now possible to investigate how the virtual therapeutic environment influences brain activity. The aim was to compare theta, alpha, beta and gamma power in healthy young adults during a lower limb motor task in a virtual and a real environment. Ten healthy adults underwent an EEG assessment while performing a one-minute task that consisted of going up and down a step in a virtual environment - the Nintendo Wii virtual game "Basic step" - and in a real environment. The real environment caused an increase in theta and alpha power, with small to large effect sizes, mainly in the frontal region. VR caused a greater increase in beta and gamma power, however with small or negligible effects across a variety of regions for the beta frequency, and medium to very large effects in the frontal and occipital regions for the gamma frequency. Theta, alpha, beta and gamma activity during the execution of a motor task thus differs according to the environment to which the individual is exposed - real or virtual - and may show varying effect sizes when brain-area activation and frequency spectrum in each environment are taken into consideration.

  5. Virtual World Currency Value Fluctuation Prediction System Based on User Sentiment Analysis.

    PubMed

    Kim, Young Bin; Lee, Sang Hyeok; Kang, Shin Jin; Choi, Myung Jin; Lee, Jung; Kim, Chang Hun

    2015-01-01

    In this paper, we present a method for predicting the value of virtual currencies used in virtual gaming environments that support multiple users, such as massively multiplayer online role-playing games (MMORPGs). Predicting virtual currency values in a virtual gaming environment has rarely been explored; it is difficult to apply real-world methods for predicting fluctuating currency values or shares to the virtual gaming world on account of differences in domains between the two worlds. To address this issue, we herein predict virtual currency value fluctuations by collecting user opinion data from a virtual community and analyzing user sentiments or emotions from the opinion data. The proposed method is straightforward and applicable to predicting virtual currencies as well as to gaming environments, including MMORPGs. We test the proposed method using large-scale MMORPGs and demonstrate that virtual currencies can be effectively and efficiently predicted with it.
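The pipeline the abstract describes (collect community posts, score their sentiment, predict the direction of currency-value moves) can be roughly sketched as follows. This is an illustrative toy, not the authors' system: the lexicon, `sentiment_score`, and `predict_direction` are all invented for the sketch.

```python
# Toy sketch of sentiment-based prediction of virtual-currency moves.
# Lexicon and posts are made-up illustrations, not the paper's data or method.

POSITIVE = {"great", "rally", "buy", "up"}
NEGATIVE = {"crash", "dump", "sell", "down"}

def sentiment_score(post: str) -> int:
    """Count positive minus negative lexicon hits in one post."""
    words = [w.strip(".,!?") for w in post.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def predict_direction(posts: list[str]) -> str:
    """Predict 'up'/'down'/'flat' from the mean sentiment of recent posts."""
    mean = sum(sentiment_score(p) for p in posts) / len(posts)
    if mean > 0:
        return "up"
    if mean < 0:
        return "down"
    return "flat"

posts = ["Gold prices rally, time to buy",
         "Server merge caused a crash, sell now",
         "buy buy buy"]
print(predict_direction(posts))  # prints "up"
```

A real system of this kind would replace the word-count lexicon with a trained sentiment classifier and regress the aggregated scores against observed price changes.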

  6. Virtual World Currency Value Fluctuation Prediction System Based on User Sentiment Analysis

    PubMed Central

    Kim, Young Bin; Lee, Sang Hyeok; Kang, Shin Jin; Choi, Myung Jin; Lee, Jung; Kim, Chang Hun

    2015-01-01

    In this paper, we present a method for predicting the value of virtual currencies used in virtual gaming environments that support multiple users, such as massively multiplayer online role-playing games (MMORPGs). Predicting virtual currency values in a virtual gaming environment has rarely been explored; it is difficult to apply real-world methods for predicting fluctuating currency values or shares to the virtual gaming world on account of differences in domains between the two worlds. To address this issue, we herein predict virtual currency value fluctuations by collecting user opinion data from a virtual community and analyzing user sentiments or emotions from the opinion data. The proposed method is straightforward and applicable to predicting virtual currencies as well as to gaming environments, including MMORPGs. We test the proposed method using large-scale MMORPGs and demonstrate that virtual currencies can be effectively and efficiently predicted with it. PMID:26241496

  7. [Learning virtual routes: what does verbal coding do in working memory?].

    PubMed

    Gyselinck, Valérie; Grison, Élise; Gras, Doriane

    2015-03-01

    Two experiments were run to complete our understanding of the role of verbal and visuospatial encoding in the construction of a spatial model from visual input. In Experiment 1, a dual-task paradigm was applied to young adults who learned a route in a virtual environment and then performed a series of nonverbal tasks to assess spatial knowledge. Results indicated that landmark knowledge, as assessed by the visual recognition of landmarks, was not impaired by any of the concurrent tasks. Route knowledge, assessed by recognition of directions, was impaired both by a tapping task and by a concurrent articulation task. Interestingly, the pattern was modulated when no landmarks were available to perform the direction task. A second experiment was designed to explore the role of verbal coding in the construction of landmark and route knowledge. A lexical-decision task was used as a verbal-semantic dual task, and a tone-decision task as a nonsemantic auditory task. Results show that these new concurrent tasks differentially impaired landmark knowledge and route knowledge. The results can be interpreted as showing that the coding of route knowledge could be grounded both in a coding of the sequence of events and in a semantic coding of information. These findings also point to some limits of Baddeley's working memory model. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  8. The singular nature of auditory and visual scene analysis in autism

    PubMed Central

    Lin, I.-Fan; Shirama, Aya; Kato, Nobumasa

    2017-01-01

    Individuals with autism spectrum disorder often have difficulty acquiring relevant auditory and visual information in daily environments, despite not being diagnosed as hearing impaired or having low vision. Recent psychophysical and neurophysiological studies have shown that autistic individuals have highly specific individual differences at various levels of information processing, including feature extraction, automatic grouping and top-down modulation in auditory and visual scene analysis. Comparison of the characteristics of scene analysis between auditory and visual modalities reveals some essential commonalities, which could provide clues about the underlying neural mechanisms. Further progress in this line of research may suggest effective methods for diagnosing and supporting autistic individuals. This article is part of the themed issue ‘Auditory and visual scene analysis'. PMID:28044025

  9. The effect of viewing a virtual environment through a head-mounted display on balance.

    PubMed

    Robert, Maxime T; Ballaz, Laurent; Lemay, Martin

    2016-07-01

    In the next few years, several head-mounted displays (HMD) will be publicly released, making virtual reality more accessible. HMD are expected to be widely popular at home for gaming but also in clinical settings, notably for training and rehabilitation. HMD can be used in both seated and standing positions; however, the impact of HMD on balance presently remains largely unknown. It is therefore crucial to examine the impact of viewing a virtual environment through a HMD on standing balance. The aim was to compare static and dynamic balance in a virtual environment perceived through a HMD and in the physical environment. The visual representation of the virtual environment was based on filmed images of the physical environment and was therefore highly similar to it. This is an observational study in healthy adults. No significant difference was observed between the two environments for static balance. However, dynamic balance was more perturbed in the virtual environment when compared to the physical environment. HMD should be used with caution because of their detrimental impact on dynamic balance. Sensorimotor conflict possibly explains the impact of HMD on balance. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Vision-based navigation in a dynamic environment for virtual human

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Sun, Ji-Zhou; Zhang, Jia-Wan; Li, Ming-Chu

    2004-06-01

    Intelligent virtual humans are widely required in computer games, ergonomics software, virtual environments and so on. We present a vision-based behavior modeling method to realize smart navigation in a dynamic environment. This behavior model can be divided into three modules: vision, global planning and local planning. Vision is the only channel through which the smart virtual actor gets information from the outside world. The global and local planning modules then use the A* and D* algorithms to find a path for the virtual human in a dynamic environment. Finally, experiments on our test platform (the Smart Human System) verify the feasibility of this behavior model.
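The abstract names A* for the planning modules; a minimal grid-based A* of the standard textbook form might look like the sketch below. The grid, unit step costs, 4-connectivity, and Manhattan heuristic are assumptions for illustration, not details from the paper (whose D*-based dynamic replanning is not shown).

```python
# Minimal A* path planner on a 4-connected grid (0 = free, 1 = obstacle).
# Illustrative sketch only; not the paper's implementation.
import heapq
import itertools

def astar(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    tie = itertools.count()  # tie-breaker so heap never compares cells/parents
    open_heap = [(h(start), 0, next(tie), start, None)]      # (f, g, tie, cell, parent)
    came_from = {}
    g_best = {start: 0}
    while open_heap:
        f, g, _, cell, parent = heapq.heappop(open_heap)
        if cell in came_from:          # already expanded with a better f
            continue
        came_from[cell] = parent
        if cell == goal:               # reconstruct path by walking parents back
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nbr in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nbr
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1             # unit cost per step
                if ng < g_best.get(nbr, float("inf")):
                    g_best[nbr] = ng
                    heapq.heappush(open_heap, (ng + h(nbr), ng, next(tie), nbr, cell))
    return None                        # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # detours around the obstacle row
```

D*, the other algorithm named in the abstract, extends this idea by repairing the computed path incrementally as obstacles move, rather than replanning from scratch.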

  11. Audiovisual integration increases the intentional step synchronization of side-by-side walkers.

    PubMed

    Noy, Dominic; Mouta, Sandra; Lamas, Joao; Basso, Daniel; Silva, Carlos; Santos, Jorge A

    2017-12-01

    When people walk side-by-side, they often synchronize their steps. To achieve this, individuals might cross-modally match audiovisual signals from the movements of the partner and kinesthetic, cutaneous, visual and auditory signals from their own movements. Because signals from different sensory systems are processed with noise and asynchronously, the challenge of the CNS is to derive the best estimate based on this conflicting information. This is currently thought to be done by a mechanism operating as a Maximum Likelihood Estimator (MLE). The present work investigated whether audiovisual signals from the partner are integrated according to MLE in order to synchronize steps during walking. Three experiments were conducted in which the sensory cues from a walking partner were virtually simulated. In Experiment 1 seven participants were instructed to synchronize with human-sized Point Light Walkers and/or footstep sounds. Results revealed highest synchronization performance with auditory and audiovisual cues. This was quantified by the time to achieve synchronization and by synchronization variability. However, this auditory dominance effect might have been due to artifacts of the setup. Therefore, in Experiment 2 human-sized virtual mannequins were implemented. Also, audiovisual stimuli were rendered in real-time and thus were synchronous and co-localized. All four participants synchronized best with audiovisual cues. For three of the four participants results point toward their optimal integration consistent with the MLE model. Experiment 3 yielded performance decrements for all three participants when the cues were incongruent. Overall, these findings suggest that individuals might optimally integrate audiovisual cues to synchronize steps during side-by-side walking. Copyright © 2017 Elsevier B.V. All rights reserved.
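The MLE rule the study tests against is the standard cue-combination result: under independent Gaussian noise, the optimal combined estimate weights each cue by its inverse variance, and the combined variance is never larger than either cue alone. A sketch with made-up numbers (the estimates and variances below are illustrative, not the paper's data):

```python
# Inverse-variance (MLE) combination of an auditory and a visual estimate,
# e.g. of a walking partner's step time. Toy values, not the study's data.

def mle_combine(est_a, var_a, est_v, var_v):
    """Return the MLE-combined estimate and its variance."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)  # weight of the auditory cue
    w_v = 1 - w_a
    combined = w_a * est_a + w_v * est_v
    combined_var = 1 / (1 / var_a + 1 / var_v)   # <= min(var_a, var_v)
    return combined, combined_var

est, var = mle_combine(est_a=0.50, var_a=0.01, est_v=0.56, var_v=0.04)
# the combined estimate (0.512) lies nearer the more reliable auditory cue,
# and its variance (0.008) is below both single-cue variances
```

Empirical tests like Experiment 2 compare observed bimodal weights and variances against these predictions computed from the unimodal conditions.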

  12. Effects of virtual reality-based training and task-oriented training on balance performance in stroke patients.

    PubMed

    Lee, Hyung Young; Kim, You Lim; Lee, Suk Min

    2015-06-01

    [Purpose] This study aimed to investigate the clinical effects of virtual reality-based training and task-oriented training on balance performance in stroke patients. [Subjects and Methods] The subjects were randomly allocated to 2 groups: virtual reality-based training group (n = 12) and task-oriented training group (n = 12). The patients in the virtual reality-based training group used the Nintendo Wii Fit Plus, which provided visual and auditory feedback as well as the movements that enabled shifting of weight to the right and left sides, for 30 min/day, 3 times/week for 6 weeks. The patients in the task-oriented training group practiced additional task-oriented programs for 30 min/day, 3 times/week for 6 weeks. Patients in both groups also underwent conventional physical therapy for 60 min/day, 5 times/week for 6 weeks. [Results] Balance and functional reach test outcomes were examined in both groups. The results showed that the static balance and functional reach test outcomes were significantly higher in the virtual reality-based training group than in the task-oriented training group. [Conclusion] This study suggested that virtual reality-based training might be a more feasible and suitable therapeutic intervention for dynamic balance in stroke patients compared to task-oriented training.

  13. Effects of virtual reality-based training and task-oriented training on balance performance in stroke patients

    PubMed Central

    Lee, Hyung Young; Kim, You Lim; Lee, Suk Min

    2015-01-01

    [Purpose] This study aimed to investigate the clinical effects of virtual reality-based training and task-oriented training on balance performance in stroke patients. [Subjects and Methods] The subjects were randomly allocated to 2 groups: virtual reality-based training group (n = 12) and task-oriented training group (n = 12). The patients in the virtual reality-based training group used the Nintendo Wii Fit Plus, which provided visual and auditory feedback as well as the movements that enabled shifting of weight to the right and left sides, for 30 min/day, 3 times/week for 6 weeks. The patients in the task-oriented training group practiced additional task-oriented programs for 30 min/day, 3 times/week for 6 weeks. Patients in both groups also underwent conventional physical therapy for 60 min/day, 5 times/week for 6 weeks. [Results] Balance and functional reach test outcomes were examined in both groups. The results showed that the static balance and functional reach test outcomes were significantly higher in the virtual reality-based training group than in the task-oriented training group. [Conclusion] This study suggested that virtual reality-based training might be a more feasible and suitable therapeutic intervention for dynamic balance in stroke patients compared to task-oriented training. PMID:26180341

  14. Emotional context enhances auditory novelty processing in superior temporal gyrus.

    PubMed

    Domínguez-Borràs, Judith; Trautmann, Sina-Alexa; Erhard, Peter; Fehr, Thorsten; Herrmann, Manfred; Escera, Carles

    2009-07-01

    Visualizing emotionally loaded pictures intensifies peripheral reflexes toward sudden auditory stimuli, suggesting that the emotional context may potentiate responses elicited by novel events in the acoustic environment. However, psychophysiological results have reported that attentional resources available to sounds become depleted, as attention allocation to emotional pictures increases. These findings have raised the challenging question of whether an emotional context actually enhances or attenuates auditory novelty processing at a central level in the brain. To solve this issue, we used functional magnetic resonance imaging to first identify brain activations induced by novel sounds (NOV) when participants made a color decision on visual stimuli containing both negative (NEG) and neutral (NEU) facial expressions. We then measured modulation of these auditory responses by the emotional load of the task. Contrary to what was assumed, activation induced by NOV in superior temporal gyrus (STG) was enhanced when subjects responded to faces with a NEG emotional expression compared with NEU ones. Accordingly, NOV yielded stronger behavioral disruption on subjects' performance in the NEG context. These results demonstrate that the emotional context modulates the excitability of auditory and possibly multimodal novelty cerebral regions, enhancing acoustic novelty processing in a potentially harming environment.

  15. Establishing a virtual learning environment: a nursing experience.

    PubMed

    Wood, Anya; McPhee, Carolyn

    2011-11-01

    The use of virtual worlds has exploded in popularity, but getting started may not be easy. In this article, the authors, members of the corporate nursing education team at University Health Network, outline their experience with incorporating virtual technology into their learning environment. Over a period of several months, a virtual hospital, including two nursing units, was created in Second Life®, allowing more than 500 nurses to role-play in a safe environment without the fear of making a mistake. This experience has provided valuable insight into the best ways to develop and learn in a virtual environment. The authors discuss the challenges of installing and building the Second Life® platform and provide guidelines for preparing users and suggestions for crafting educational activities. This article provides a starting point for organizations planning to incorporate virtual worlds into their learning environment. Copyright 2011, SLACK Incorporated.

  16. Height effects in real and virtual environments.

    PubMed

    Simeonov, Peter I; Hsiao, Hongwei; Dotson, Brian W; Ammons, Douglas E

    2005-01-01

    The study compared human perceptions of height, danger, and anxiety, as well as skin conductance and heart rate responses and postural instability effects, in real and virtual height environments. The 24 participants (12 men, 12 women), whose average age was 23.6 years, performed "lean-over-the-railing" and standing tasks on real and comparable virtual balconies, using a surround-screen virtual reality (SSVR) system. The results indicate that the virtual display of elevation provided realistic perceptual experience and induced some physiological responses and postural instability effects comparable to those found in a real environment. It appears that a simulation of elevated work environment in a SSVR system, although with reduced visual fidelity, is a valid tool for safety research. Potential applications of this study include the design of virtual environments that will help in safe evaluation of human performance at elevation, identification of risk factors leading to fall incidents, and assessment of new fall prevention strategies.

  17. Auditory Attentional Control and Selection during Cocktail Party Listening

    PubMed Central

    Hill, Kevin T.

    2010-01-01

    In realistic auditory environments, people rely on both attentional control and attentional selection to extract intelligible signals from a cluttered background. We used functional magnetic resonance imaging to examine auditory attention to natural speech under such high processing-load conditions. Participants attended to a single talker in a group of 3, identified by the target talker's pitch or spatial location. A catch-trial design allowed us to distinguish activity due to top-down control of attention versus attentional selection of bottom-up information in both the spatial and spectral (pitch) feature domains. For attentional control, we found a left-dominant fronto-parietal network with a bias toward spatial processing in dorsal precentral sulcus and superior parietal lobule, and a bias toward pitch in inferior frontal gyrus. During selection of the talker, attention modulated activity in left intraparietal sulcus when using talker location and in bilateral but right-dominant superior temporal sulcus when using talker pitch. We argue that these networks represent the sources and targets of selective attention in rich auditory environments. PMID:19574393

  18. Virtual button interface

    DOEpatents

    Jones, Jake S.

    1999-01-01

    An apparatus and method of issuing commands to a computer by a user interfacing with a virtual reality environment. To issue a command, the user directs gaze at a virtual button within the virtual reality environment, causing a perceptible change in the virtual button, which then sends a command corresponding to the virtual button to the computer, optionally after a confirming action is performed by the user, such as depressing a thumb switch.

  19. Perception of Graphical Virtual Environments by Blind Users via Sensory Substitution

    PubMed Central

    Maidenbaum, Shachar; Buchs, Galit; Abboud, Sami; Lavi-Rotbain, Ori; Amedi, Amir

    2016-01-01

    Graphical virtual environments are currently far from accessible to blind users as their content is mostly visual. This is especially unfortunate as these environments hold great potential for this population for purposes such as safe orientation, education, and entertainment. Previous tools have increased accessibility but there is still a long way to go. Visual-to-audio Sensory-Substitution-Devices (SSDs) can increase accessibility generically by sonifying on-screen content regardless of the specific environment and offer increased accessibility without the use of expensive dedicated peripherals like electrode/vibrator arrays. Using SSDs virtually utilizes similar skills as when using them in the real world, enabling both training on the device and training on environments virtually before real-world visits. This could enable more complex, standardized and autonomous SSD training and new insights into multisensory interaction and the visually-deprived brain. However, whether congenitally blind users, who have never experienced virtual environments, will be able to use this information for successful perception and interaction within them is currently unclear. We tested this using the EyeMusic SSD, which conveys whole-scene visual information, to perform virtual tasks otherwise impossible without vision. Congenitally blind users had to navigate virtual environments and find doors, differentiate between them based on their features (Experiment 1: task 1) and surroundings (Experiment 1: task 2) and walk through them; these tasks were accomplished with a 95% and 97% success rate, respectively. We further explored the reactions of congenitally blind users during their first interaction with a more complex virtual environment than in the previous tasks: walking down a virtual street, recognizing different features of houses and trees, navigating to cross-walks, etc. Users reacted enthusiastically and reported feeling immersed within the environment. They highlighted the potential usefulness of such environments for understanding what visual scenes are supposed to look like and their potential for complex training, and suggested many future environments they wished to experience. PMID:26882473
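The column-scan principle behind visual-to-audio SSDs of this family (horizontal position maps to time, vertical position to pitch) can be illustrated with a toy sonifier. This is a generic sketch of the idea only; the frequency range, scan rate, and mapping are invented and are not the EyeMusic's actual encoding.

```python
# Toy visual-to-audio sonification in the spirit of column-scan SSDs
# (NOT the EyeMusic's real encoding): scan a binary image column by column
# over time; each lit pixel becomes a tone whose frequency rises with height.

def sonify(image, f_low=220.0, f_high=880.0, col_dur=0.1):
    """Return a list of (onset_seconds, frequency_hz) tone events.
    image: 2D list with at least 2 rows, row 0 = top; 1 = lit pixel."""
    rows = len(image)
    events = []
    for col_idx in range(len(image[0])):
        onset = col_idx * col_dur          # time encodes horizontal position
        for row_idx in range(rows):
            if image[row_idx][col_idx]:
                frac = 1 - row_idx / (rows - 1)  # top rows -> high frequencies
                events.append((onset, f_low + frac * (f_high - f_low)))
    return events

# a diagonal stroke: pitch falls as the scan moves right
image = [[1, 0, 0],
         [0, 1, 0],
         [0, 0, 1]]
for onset, freq in sonify(image):
    print(f"t={onset:.1f}s  f={freq:.0f}Hz")
```

A real SSD would synthesize and mix the tones into audio; the point here is only the geometry-to-sound mapping that lets a listener recover shape from a soundscape.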

  20. Perception of Graphical Virtual Environments by Blind Users via Sensory Substitution.

    PubMed

    Maidenbaum, Shachar; Buchs, Galit; Abboud, Sami; Lavi-Rotbain, Ori; Amedi, Amir

    2016-01-01

    Graphical virtual environments are currently far from accessible to blind users as their content is mostly visual. This is especially unfortunate as these environments hold great potential for this population for purposes such as safe orientation, education, and entertainment. Previous tools have increased accessibility but there is still a long way to go. Visual-to-audio Sensory-Substitution-Devices (SSDs) can increase accessibility generically by sonifying on-screen content regardless of the specific environment and offer increased accessibility without the use of expensive dedicated peripherals like electrode/vibrator arrays. Using SSDs virtually utilizes similar skills as when using them in the real world, enabling both training on the device and training on environments virtually before real-world visits. This could enable more complex, standardized and autonomous SSD training and new insights into multisensory interaction and the visually-deprived brain. However, whether congenitally blind users, who have never experienced virtual environments, will be able to use this information for successful perception and interaction within them is currently unclear. We tested this using the EyeMusic SSD, which conveys whole-scene visual information, to perform virtual tasks otherwise impossible without vision. Congenitally blind users had to navigate virtual environments and find doors, differentiate between them based on their features (Experiment 1: task 1) and surroundings (Experiment 1: task 2) and walk through them; these tasks were accomplished with a 95% and 97% success rate, respectively. We further explored the reactions of congenitally blind users during their first interaction with a more complex virtual environment than in the previous tasks: walking down a virtual street, recognizing different features of houses and trees, navigating to cross-walks, etc. Users reacted enthusiastically and reported feeling immersed within the environment. They highlighted the potential usefulness of such environments for understanding what visual scenes are supposed to look like and their potential for complex training, and suggested many future environments they wished to experience.

  1. Evaluation of procedural learning transfer from a virtual environment to a real situation: a case study on tank maintenance training.

    PubMed

    Ganier, Franck; Hoareau, Charlotte; Tisseau, Jacques

    2014-01-01

    Virtual reality opens new opportunities for operator training in complex tasks. It lowers costs and has fewer constraints than traditional training. The ultimate goal of virtual training is to transfer knowledge gained in a virtual environment to an actual real-world setting. This study tested whether a maintenance procedure could be learnt equally well by virtual-environment and conventional training. Forty-two adults were divided into three equally sized groups: virtual training (GVT® [generic virtual training]), conventional training (using a real tank suspension and preparation station) and control (no training). Participants then performed the procedure individually in the real environment. Both training types (conventional and virtual) produced similar levels of performance when the procedure was carried out in real conditions. Performance level for the two trained groups was better in terms of success and time taken to complete the task, time spent consulting job instructions and number of times the instructor provided guidance.

  2. Focus, locus, and sensus: the three dimensions of virtual experience.

    PubMed

    Waterworth, E L; Waterworth, J A

    2001-04-01

    A model of virtual/physical experience is presented, which provides a three dimensional conceptual space for virtual and augmented reality (VR and AR) comprising the dimensions of focus, locus, and sensus. Focus is most closely related to what is generally termed presence in the VR literature. When in a virtual environment, presence is typically shared between the VR and the physical world. "Breaks in presence" are actually shifts of presence away from the VR and toward the external environment. But we can also have "breaks in presence" when attention moves toward absence--when an observer is not attending to stimuli present in the virtual environment, nor to stimuli present in the surrounding physical environment--when the observer is present in neither the virtual nor the physical world. We thus have two dimensions of presence: focus of attention (between presence and absence) and the locus of attention (the virtual vs. the physical world). A third dimension is the sensus of attention--the level of arousal determining whether the observer is highly conscious or relatively unconscious while interacting with the environment. After expanding on each of these three dimensions of experience in relation to VR, we present a couple of educational examples as illustrations, and also relate our model to a suggested spectrum of evaluation methods for virtual environments.

  3. Virtual Environments in Scientific Visualization

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Lisinski, T. A. (Technical Monitor)

    1994-01-01

    Virtual environment technology is a new way of approaching the interface between computers and humans. Emphasizing display and user control that conform to the user's natural ways of perceiving and thinking about space, virtual environment technologies enhance the ability to perceive and interact with computer generated graphic information. This enhancement potentially has a major effect on the field of scientific visualization. Current examples of this technology include the Virtual Windtunnel being developed at NASA Ames Research Center. Other major institutions such as the National Center for Supercomputing Applications and SRI International are also exploring this technology. This talk will describe several implementations of virtual environments for use in scientific visualization. Examples include the visualization of unsteady fluid flows (the virtual windtunnel), the visualization of geodesics in curved spacetime, surface manipulation, and examples developed at various laboratories.

  4. An association between auditory-visual synchrony processing and reading comprehension: Behavioral and electrophysiological evidence

    PubMed Central

    Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru

    2016-01-01

    The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension. PMID:28129060
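    As an aid to interpreting the reported effect sizes: in a simple linear model, variance explained is the square of the correlation coefficient, so the 16% and 25% figures correspond to correlations of roughly 0.4 and 0.5. A minimal sketch of this relationship (the simple-regression framing is an assumption for illustration, not a detail taken from the record):

    ```python
    # Variance explained (R^2) in a simple linear regression equals the
    # squared Pearson correlation between predictor and outcome.
    def variance_explained(r: float) -> float:
        """Proportion of outcome variance explained by a predictor with correlation r."""
        return r ** 2

    print(round(variance_explained(0.4), 2))  # 0.16 -> the reported 16%
    print(round(variance_explained(0.5), 2))  # 0.25 -> the reported 25%
    ```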

  5. An Association between Auditory-Visual Synchrony Processing and Reading Comprehension: Behavioral and Electrophysiological Evidence.

    PubMed

    Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru

    2017-03-01

    The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension.

  6. Digital Immersive Virtual Environments and Instructional Computing

    ERIC Educational Resources Information Center

    Blascovich, Jim; Beall, Andrew C.

    2010-01-01

    This article reviews theory and research relevant to the development of digital immersive virtual environment-based instructional computing systems. The review is organized within the context of a multidimensional model of social influence and interaction within virtual environments that models the interaction of four theoretical factors: theory…

  7. Listening Into 2030 Workshop: An Experiment in Envisioning the Future of Hearing and Communication Science

    PubMed Central

    Carlile, Simon; Ciccarelli, Gregory; Cockburn, Jane; Diedesch, Anna C.; Finnegan, Megan K.; Hafter, Ervin; Henin, Simon; Kalluri, Sridhar; Kell, Alexander J. E.; Ozmeral, Erol J.; Roark, Casey L.

    2017-01-01

    Here we report the methods and output of a workshop examining possible futures of speech and hearing science out to 2030. Using a design thinking approach, a range of human-centered problems in communication were identified that could provide the motivation for a wide range of research. Nine main research programs were distilled and are summarized: (a) measuring brain and other physiological parameters, (b) auditory and multimodal displays of information, (c) auditory scene analysis, (d) enabling and understanding shared auditory virtual spaces, (e) holistic approaches to health management and hearing impairment, (f) universal access to evolving and individualized technologies, (g) biological intervention for hearing dysfunction, (h) understanding the psychosocial interactions with technology and other humans as mediated by technology, and (i) the impact of changing models of security and privacy. The design thinking approach attempted to link the judged level of importance of different research areas to the “end in mind” through empathy for the real-life problems embodied in the personas created during the workshop. PMID:29090640

  8. Development of Virtual Auditory Interfaces

    DTIC Science & Technology

    2001-03-01

    reference to compare the sound in the VE with the real world experience. ... 4. Lessons from the Entertainment Industry: The entertainment industry has ... created a system called "Fantasound" which wrapped the musical compositions and sound ... even though we have the technology to create astounding ... systems are currently being evaluated. The first system uses a portable Sony TCD-D8 DAT audio ... data set including sound recordings and sound measurements

  9. Abnormal neural activities of directional brain networks in patients with long-term bilateral hearing loss.

    PubMed

    Xu, Long-Chun; Zhang, Gang; Zou, Yue; Zhang, Min-Feng; Zhang, Dong-Sheng; Ma, Hua; Zhao, Wen-Bo; Zhang, Guang-Yu

    2017-10-13

    The objective of the study is to provide some implications for rehabilitation of hearing impairment by investigating changes in neural activities of directional brain networks in patients with long-term bilateral hearing loss. Firstly, we administered neuropsychological tests to 21 subjects (11 patients with long-term bilateral hearing loss, and 10 subjects with normal hearing), and these tests revealed significant differences between the deaf group and the controls. Then we constructed the individual specific virtual brain based on functional magnetic resonance data of participants by utilizing effective connectivity and multivariate regression methods. We exerted the stimulating signal to the primary auditory cortices of the virtual brain and observed the brain region activations. We found that patients with long-term bilateral hearing loss presented weaker brain region activations in the auditory and language networks, but enhanced neural activities in the default mode network as compared with normally hearing subjects. Especially, the right cerebral hemisphere presented more changes than the left. Additionally, weaker neural activities in the primary auditory cortices were also strongly associated with poorer cognitive performance. Finally, causal analysis revealed several interactional circuits among activated brain regions, and these interregional causal interactions implied that abnormal neural activities of the directional brain networks in the deaf patients impacted cognitive function.

  10. Auditory Processing Disorders: Acquisition and Treatment

    ERIC Educational Resources Information Center

    Moore, David R.

    2007-01-01

    Auditory processing disorder (APD) describes a mixed and poorly understood listening problem characterised by poor speech perception, especially in challenging environments. APD may include an inherited component, and this may be major, but studies reviewed here of children with long-term otitis media with effusion (OME) provide strong evidence…

  11. Virtual button interface

    DOEpatents

    Jones, J.S.

    1999-01-12

    An apparatus and method of issuing commands to a computer by a user interfacing with a virtual reality environment are disclosed. To issue a command, the user directs gaze at a virtual button within the virtual reality environment, causing a perceptible change in the virtual button, which then sends a command corresponding to the virtual button to the computer, optionally after a confirming action is performed by the user, such as depressing a thumb switch. 4 figs.

  12. Virtual Learning Environment for Interactive Engagement with Advanced Quantum Mechanics

    ERIC Educational Resources Information Center

    Pedersen, Mads Kock; Skyum, Birk; Heck, Robert; Müller, Romain; Bason, Mark; Lieberoth, Andreas; Sherson, Jacob F.

    2016-01-01

    A virtual learning environment can engage university students in the learning process in ways that the traditional lectures and lab formats cannot. We present our virtual learning environment "StudentResearcher," which incorporates simulations, multiple-choice quizzes, video lectures, and gamification into a learning path for quantum…

  13. Design and Implementation of an Intelligent Virtual Environment for Improving Speaking and Listening Skills

    ERIC Educational Resources Information Center

    Hassani, Kaveh; Nahvi, Ali; Ahmadi, Ali

    2016-01-01

    In this paper, we present an intelligent architecture, called intelligent virtual environment for language learning, with embedded pedagogical agents for improving listening and speaking skills of non-native English language learners. The proposed architecture integrates virtual environments into the Intelligent Computer-Assisted Language…

  14. Usability Evaluation of an Adaptive 3D Virtual Learning Environment

    ERIC Educational Resources Information Center

    Ewais, Ahmed; De Troyer, Olga

    2013-01-01

    Using 3D virtual environments for educational purposes is becoming attractive because of their rich presentation and interaction capabilities. Furthermore, dynamically adapting the 3D virtual environment to the personal preferences, prior knowledge, skills and competence, learning goals, and the personal or (social) context in which the learning…

  15. Exploring Non-Traditional Learning Methods in Virtual and Real-World Environments

    ERIC Educational Resources Information Center

    Lukman, Rebeka; Krajnc, Majda

    2012-01-01

    This paper identifies the commonalities and differences within non-traditional learning methods regarding virtual and real-world environments. The non-traditional learning methods in real-world have been introduced within the following courses: Process Balances, Process Calculation, and Process Synthesis, and within the virtual environment through…

  16. Learning Relative Motion Concepts in Immersive and Non-Immersive Virtual Environments

    ERIC Educational Resources Information Center

    Kozhevnikov, Michael; Gurlitt, Johannes; Kozhevnikov, Maria

    2013-01-01

    The focus of the current study is to understand which unique features of an immersive virtual reality environment have the potential to improve learning relative motion concepts. Thirty-seven undergraduate students learned relative motion concepts using computer simulation either in immersive virtual environment (IVE) or non-immersive desktop…

  17. Strategies for Increasing the Interactivity of Children's Synchronous Learning in Virtual Environments

    ERIC Educational Resources Information Center

    Katlianik, Ivan

    2013-01-01

    Enabling distant individuals to assemble in one virtual environment, synchronous distance learning appeals to researchers and practitioners alike because of its unique educational opportunities. One of the vital components of successful synchronous distance learning is interactivity. In virtual environments, interactivity is limited by the…

  18. Nature and origins of virtual environments - A bibliographical essay

    NASA Technical Reports Server (NTRS)

    Ellis, S. R.

    1991-01-01

    Virtual environments presented via head-mounted, computer-driven displays provide a new medium for communication. They may be analyzed by considering: (1) what may be meant by an environment; (2) what is meant by the process of virtualization; and (3) some aspects of human performance that constrain environmental design. Their origins are traced from previous work in vehicle simulation and multimedia research. Pointers are provided to key technical references, in the dispersed, archival literature, that are relevant to the development and evaluation of virtual-environment interface systems.

  19. Ontological implications of being in immersive virtual environments

    NASA Astrophysics Data System (ADS)

    Morie, Jacquelyn F.

    2008-02-01

    The idea of Virtual Reality once conjured up visions of new territories to explore, and expectations of awaiting worlds of wonder. VR has matured to become a practical tool for therapy, medicine and commercial interests, yet artists, in particular, continue to expand the possibilities for the medium. Artistic virtual environments created over the past two decades probe the phenomenological nature of these virtual environments. When we inhabit a fully immersive virtual environment, we have entered into a new form of Being. Not only does our body continue to exist in the real, physical world, we are also embodied within the virtual by means of technology that translates our bodied actions into interactions with the virtual environment. Very few states in human existence allow this bifurcation of our Being, where we can exist in two spaces at once, with the possible exception of metaphysical states such as shamanistic trance and out-of-body experiences. This paper discusses the nature of this simultaneous Being, how we enter the virtual space, what forms of persona we can don there, what forms of spaces we can inhabit, and what type of wondrous experiences we can both hope for and expect.

  20. Facilitation of listening comprehension by visual information under noisy listening condition

    NASA Astrophysics Data System (ADS)

    Kashimada, Chiho; Ito, Takumi; Ogita, Kazuki; Hasegawa, Hiroshi; Kamata, Kazuo; Ayama, Miyoshi

    2009-02-01

    Comprehension of a sentence was measured under a wide range of delay conditions between auditory and visual stimuli, in an environment with low auditory clarity (-10 dB and -15 dB pink noise). Results showed that the image was helpful for comprehension of the noise-obscured voice stimulus when the delay between the auditory and visual stimuli was 4 frames (132 msec) or less; the image was not helpful when the delay was 8 frames (264 msec) or more; and in some cases at the largest delay (32 frames), the video image interfered with comprehension.
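    The frame-to-millisecond conversions in this record are consistent with an approximately 30 fps video stream; a minimal arithmetic check (the frame rate is an inference from the reported numbers, not stated in the abstract):

    ```python
    # Rough check of the delay conditions reported above.
    # Assumption: ~30 fps video, i.e. about 33 ms per frame,
    # which reproduces the figures given in the abstract.
    MS_PER_FRAME = 33

    def frames_to_ms(frames: int) -> int:
        """Convert a delay expressed in video frames to milliseconds."""
        return frames * MS_PER_FRAME

    print(frames_to_ms(4))   # 132 ms: image still aids comprehension
    print(frames_to_ms(8))   # 264 ms: image no longer helps
    print(frames_to_ms(32))  # 1056 ms: image can interfere
    ```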

  1. Augmented Virtuality: A Real-time Process for Presenting Real-world Visual Sensory Information in an Immersive Virtual Environment for Planetary Exploration

    NASA Astrophysics Data System (ADS)

    McFadden, D.; Tavakkoli, A.; Regenbrecht, J.; Wilson, B.

    2017-12-01

    Virtual Reality (VR) and Augmented Reality (AR) applications have recently seen impressive growth, thanks to the advent of commercial Head Mounted Displays (HMDs). This new visualization era has opened the possibility of presenting researchers from multiple disciplines with data visualization techniques not possible via traditional 2D screens. In a purely VR environment, researchers are presented with the visual data in a virtual environment, whereas in a purely AR application, a virtual object is projected into the real world with which researchers can interact. There are several limitations to purely VR or AR applications when taken within the context of remote planetary exploration. For example, in a purely VR environment, contents of the planet surface (e.g. rocks, terrain, or other features) must be created off-line from a multitude of images using image processing techniques to generate 3D mesh data that will populate the virtual surface of the planet. This process usually takes a tremendous amount of computational resources and cannot be delivered in real-time. As an alternative, video frames may be superimposed on the virtual environment to save processing time. However, such rendered video frames will lack 3D visual information, i.e. depth information. In this paper, we present a technique to utilize a remotely situated robot's stereoscopic cameras to provide a live visual feed from the real world into the virtual environment in which planetary scientists are immersed. Moreover, the proposed technique blends the virtual environment with the real world in such a way as to preserve both the depth and visual information from the real world while allowing for the sensation of immersion when the entire sequence is viewed via an HMD such as the Oculus Rift. The figure shows the virtual environment with an overlay of the real-world stereoscopic video presented in real-time; note the preservation of the object's shape, shadows, and depth information. The distortions shown in the image are due to the rendering of the stereoscopic data into a 2D image for the purposes of taking screenshots.

  2. The virtual environment display system

    NASA Technical Reports Server (NTRS)

    Mcgreevy, Michael W.

    1991-01-01

    Virtual environment technology is a display and control technology that can surround a person in an interactive computer generated or computer mediated virtual environment. It has evolved at NASA-Ames since 1984 to serve NASA's missions and goals. The exciting potential of this technology, sometimes called Virtual Reality, Artificial Reality, or Cyberspace, has been recognized recently by the popular media, industry, academia, and government organizations. Much research and development will be necessary to bring it to fruition.

  3. Measurement Tools for the Immersive Visualization Environment: Steps Toward the Virtual Laboratory.

    PubMed

    Hagedorn, John G; Dunkers, Joy P; Satterfield, Steven G; Peskin, Adele P; Kelso, John T; Terrill, Judith E

    2007-01-01

    This paper describes a set of tools for performing measurements of objects in a virtual reality based immersive visualization environment. These tools enable the use of the immersive environment as an instrument for extracting quantitative information from data representations that had hitherto been used solely for qualitative examination. We provide, within the virtual environment, ways for the user to analyze and interact with the quantitative data generated. We describe results generated by these methods to obtain dimensional descriptors of tissue engineered medical products. We regard this toolbox as our first step in the implementation of a virtual measurement laboratory within an immersive visualization environment.

  4. Auditory Processing Testing: In the Booth versus Outside the Booth.

    PubMed

    Lucker, Jay R

    2017-09-01

    Many audiologists believe that auditory processing testing must be carried out in a soundproof booth. This requirement is especially a problem in places such as elementary schools. Research comparing pure-tone thresholds obtained in sound booths with those obtained in quiet test environments outside of these booths does not support that belief. Auditory processing testing is generally carried out at above-threshold levels, and therefore may be even less likely to require a soundproof booth. The present study was carried out to compare test results in soundproof booths versus quiet rooms. The purpose of this study was to determine whether auditory processing tests can be administered in a quiet test room rather than in a soundproof test suite. The outcomes would establish whether audiologists can provide auditory processing testing for children under various test conditions, including quiet rooms at their school. A battery of auditory processing tests was administered at a test level equivalent to 50 dB HL through headphones. The same equipment was used for testing in both locations. Twenty participants identified with normal hearing were included in this study, ten having no auditory processing concerns and ten exhibiting auditory processing problems. All participants underwent a battery of tests, both inside the test booth and outside the booth in a quiet room. Order of testing (inside versus outside) was counterbalanced. Participants were first determined to have normal hearing thresholds for tones and speech. Auditory processing tests were recorded and presented from an HP EliteBook laptop computer with noise-canceling headphones attached to a y-cord that not only presented the test stimuli to the participants but also allowed monitor headphones to be worn by the evaluator. The same equipment was used inside as well as outside the booth. No differences were found for any auditory processing measure as a function of the test setting or the order in which testing was done, that is, in the booth or in the room. Results from the present study indicate that one can obtain the same results on auditory processing tests regardless of whether testing is completed in a soundproof booth or in a quiet test environment. Therefore, audiologists should not be required to test for auditory processing in a soundproof booth. This study shows that audiologists can conduct testing in a quiet room so long as the background noise is sufficiently controlled.

  5. Assessment and Mitigation of the Effects of Noise on Habitability in Deep Space Environments: Report on Non-Auditory Effects of Noise

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.

    2018-01-01

    This document reviews non-auditory effects of noise relevant to habitable volume requirements in cislunar space. The non-auditory effects of noise in future long-term space habitats are likely to be impactful on team and individual performance, sleep, and cognitive well-being. This report has provided several recommendations for future standards and procedures for long-term space flight habitats, along with recommendations for NASA's Human Research Program in support of DST mission success.

  6. Technologies That Capitalize on Study Skills with Learning Style Strengths

    ERIC Educational Resources Information Center

    Howell, Dusti D.

    2008-01-01

    This article addresses the tools available in the rapidly changing digital learning environment and offers a variety of approaches for how they can assist students with visual, auditory, or kinesthetic learning strengths. Teachers can use visual, auditory, and kinesthetic assessment tests to identify learning preferences and then recommend…

  7. Modeling Environmental Impacts on Cognitive Performance for Artificially Intelligent Entities

    DTIC Science & Technology

    2017-06-01

    of the agent behavior model is presented in a military-relevant virtual game environment. We then outline a quantitative approach to test the agent behavior model within the virtual environment. Results show ... (Figure: Game View of Hot Environment Condition Displaying Total "f" Cost for Each Searched Waypoint Node)

  8. A Case Study of the Experiences of Instructors and Students in a Virtual Learning Environment (VLE) with Different Cultural Backgrounds

    ERIC Educational Resources Information Center

    Lim, Keol; Kim, Mi Hwa

    2015-01-01

    The use of virtual learning environments (VLEs) has become more common and educators recognized the potential of VLEs as educational environments. The learning community in VLEs can be a mixture of people from all over the world with different cultural backgrounds. However, despite many studies about the use of virtual environments for learning,…

  9. Early multisensory interactions affect the competition among multiple visual objects.

    PubMed

    Van der Burg, Erik; Talsma, Durk; Olivers, Christian N L; Hickey, Clayton; Theeuwes, Jan

    2011-04-01

    In dynamic cluttered environments, audition and vision may benefit from each other in determining what deserves further attention and what does not. We investigated the underlying neural mechanisms responsible for attentional guidance by audiovisual stimuli in such an environment. Event-related potentials (ERPs) were measured during visual search through dynamic displays consisting of line elements that randomly changed orientation. Search accuracy improved when a target orientation change was synchronized with an auditory signal as compared to when the auditory signal was absent or synchronized with a distractor orientation change. The ERP data show that behavioral benefits were related to an early multisensory interaction over left parieto-occipital cortex (50-60 ms post-stimulus onset), which was followed by an early positive modulation (80-100 ms) over occipital and temporal areas contralateral to the audiovisual event, an enhanced N2pc (210-250 ms), and a contralateral negative slow wave (CNSW). The early multisensory interaction was correlated with behavioral search benefits, indicating that participants with a strong multisensory interaction benefited the most from the synchronized auditory signal. We suggest that an auditory signal enhances the neural response to a synchronized visual event, which increases the chances of selection in a multiple object environment.

  10. Methods and systems relating to an augmented virtuality environment

    DOEpatents

    Nielsen, Curtis W; Anderson, Matthew O; McKay, Mark D; Wadsworth, Derek C; Boyce, Jodie R; Hruska, Ryan C; Koudelka, John A; Whetten, Jonathan; Bruemmer, David J

    2014-05-20

    Systems and methods relating to an augmented virtuality system are disclosed. A method of operating an augmented virtuality system may comprise displaying imagery of a real-world environment in an operating picture. The method may further include displaying a plurality of virtual icons in the operating picture representing at least some assets of a plurality of assets positioned in the real-world environment. Additionally, the method may include displaying at least one virtual item in the operating picture representing data sensed by one or more of the assets of the plurality of assets and remotely controlling at least one asset of the plurality of assets by interacting with a virtual icon associated with the at least one asset.

  11. Object Creation and Human Factors Evaluation for Virtual Environments

    NASA Technical Reports Server (NTRS)

    Lindsey, Patricia F.

    1998-01-01

    The main objective of this project is to provide test objects for simulated environments utilized by the recently established Army/NASA Virtual Innovations Lab (ANVIL) at Marshall Space Flight Center, Huntsville, Al. The objective of the ANVIL lab is to provide virtual reality (VR) models and environments and to provide visualization and manipulation methods for the purpose of training and testing. Visualization equipment used in the ANVIL lab includes head-mounted and boom-mounted immersive virtual reality display devices. Objects in the environment are manipulated using data glove, hand controller, or mouse. These simulated objects are solid or surfaced three dimensional models. They may be viewed or manipulated from any location within the environment and may be viewed on-screen or via immersive VR. The objects are created using various CAD modeling packages and are converted into the virtual environment using dVise. This enables the object or environment to be viewed from any angle or distance for training or testing purposes.

  12. The Influence of Virtual Learning Environments in Students' Performance

    ERIC Educational Resources Information Center

    Alves, Paulo; Miranda, Luísa; Morais, Carlos

    2017-01-01

    This paper focuses mainly on the relation between the use of a virtual learning environment (VLE) and students' performance. Therefore, virtual learning environments are characterised and a study is presented emphasising the frequency of access to a VLE and its relation with the students' performance from a public higher education institution…

  13. Full Immersive Virtual Environment Cave[TM] in Chemistry Education

    ERIC Educational Resources Information Center

    Limniou, M.; Roberts, D.; Papadopoulos, N.

    2008-01-01

    By comparing two-dimensional (2D) chemical animations designed for computer's desktop with three-dimensional (3D) chemical animations designed for the full immersive virtual reality environment CAVE[TM] we studied how virtual reality environments could raise student's interest and motivation for learning. By using the 3ds max[TM], we can visualize…

  14. Temporal Issues in the Design of Virtual Learning Environments.

    ERIC Educational Resources Information Center

    Bergeron, Bryan; Obeid, Jihad

    1995-01-01

    Describes design methods used to influence user perception of time in virtual learning environments. Examines the use of temporal cues in medical education and clinical competence testing. Finds that user perceptions of time affects user acceptance, ease of use, and the level of realism of a virtual learning environment. Contains 51 references.…

  15. The Doubtful Guest? A Virtual Research Environment for Education

    ERIC Educational Resources Information Center

    Laterza, Vito; Carmichael, Patrick; Procter, Richard

    2007-01-01

    In this paper the authors describe a novel "Virtual Research Environment" (VRE) based on the Sakai Virtual Collaboration Environment and designed to support education research. This VRE has been used for the past two years by projects of the UK Economic and Social Research Council's Teaching and Learning Research Programme, 10 of which…

  16. Using Virtual Reality to Help Students with Social Interaction Skills

    ERIC Educational Resources Information Center

    Beach, Jason; Wendt, Jeremy

    2015-01-01

    The purpose of this study was to determine if participants could improve their social interaction skills by participating in a virtual immersive environment. The participants used a developing virtual reality head-mounted display to engage themselves in a fully-immersive environment. While in the environment, participants had an opportunity to…

  17. Students' Collective Knowledge Construction in the Virtual Learning Environment ""ToLigado"--Your School Interactive Newspaper"

    ERIC Educational Resources Information Center

    Passarelli, Brasilina

    2008-01-01

    Introduction: The ToLigado Project--Your School Interactive Newspaper is an interactive virtual learning environment conceived, developed, implemented and supported by researchers at the School of the Future Research Laboratory of the University of Sao Paulo, Brazil. Method: This virtual learning environment aims to motivate trans-disciplinary…

  18. Using SOLO to Evaluate an Educational Virtual Environment in a Technology Education Setting

    ERIC Educational Resources Information Center

    Padiotis, Ioannis; Mikropoulos, Tassos A.

    2010-01-01

    The present research investigates the contribution of an interactive educational virtual environment on milk pasteurization to the learning outcomes of 40 students in a technical secondary school, using the SOLO taxonomy. After the interaction with the virtual environment, the majority of the students moved to higher hierarchical levels of understanding…

  19. Virtual Environments Supporting Learning and Communication in Special Needs Education

    ERIC Educational Resources Information Center

    Cobb, Sue V. G.

    2007-01-01

    Virtual reality (VR) describes a set of technologies that allow users to explore and experience 3-dimensional computer-generated "worlds" or "environments." These virtual environments can contain representations of real or imaginary objects on a small or large scale (from modeling of molecular structures to buildings, streets, and scenery of a…

  20. Prospective Teachers' Likelihood of Performing Unethical Behaviors in the Real and Virtual Environments

    ERIC Educational Resources Information Center

    Akdemir, Ömür; Vural, Ömer F.; Çolakoglu, Özgür M.

    2015-01-01

    Individuals act differently in virtual environments than in real life. The primary purpose of this study is to investigate prospective teachers' likelihood of performing unethical behaviors in the real and virtual environments. Prospective teachers were surveyed online and their perceptions were collected for various scenarios. Findings revealed…

  1. [Auditory training in workshops: group therapy option].

    PubMed

    Santos, Juliana Nunes; do Couto, Isabel Cristina Plais; Amorim, Raquel Martins da Costa

    2006-01-01

    BACKGROUND: auditory training in groups. AIM: to verify, in a group of individuals with mental retardation, the efficacy of auditory training in a workshop environment. METHOD: a longitudinal prospective study with 13 individuals with mental retardation from the Associação de Pais e Amigos do Excepcional (APAE) of Congonhas, divided into two groups, case (n=5) and control (n=8), who were submitted to ten auditory training sessions after the integrity of the peripheral auditory system was verified through evoked otoacoustic emissions. Participants were evaluated using a specific protocol concerning the auditory abilities (sound localization, auditory identification, memory, sequencing, auditory discrimination and auditory comprehension) at the beginning and at the end of the project. Data entry, processing and analyses were performed with the Epi Info 6.04 software. RESULTS: the groups did not differ regarding age (mean = 23.6 years) or gender (40% male). In the first evaluation both groups presented similar performances. In the final evaluation an improvement in the auditory abilities was observed for the individuals in the case group. When comparing the mean number of correct answers obtained by both groups in the first and final evaluations, statistically significant results were obtained for sound localization (p=0.02), auditory sequencing (p=0.006) and auditory discrimination (p=0.03). CONCLUSION: group auditory training proved effective in individuals with mental retardation, with an observed improvement in the auditory abilities. More studies, with larger numbers of participants, are necessary to confirm the findings of the present research. These results will help public health professionals to reassess the therapy models they use, so that they can apply specific methods according to individual needs, such as auditory training workshops.

  2. Virtual restorative environment therapy as an adjunct to pain control during burn dressing changes: study protocol for a randomised controlled trial.

    PubMed

    Small, Charlotte; Stone, Robert; Pilsbury, Jane; Bowden, Michael; Bion, Julian

    2015-08-05

    The pain of a severe burn injury is often characterised by intense background pain, coupled with severe exacerbations associated with essential procedures such as dressing changes. The experience of pain is affected by patients' psychological state and can be enhanced by the anxiety, fear and distress caused by environmental and visual inputs. Virtual Reality (VR) distraction has been used with success in areas such as burns, paediatrics and oncology. The underlying principle of VR is that attention is diverted from the painful stimulus by the use of engaging, dynamic 3D visual content and associated auditory stimuli. Functional magnetic resonance imaging (fMRI) studies undertaken during VR distraction from experimental pain have demonstrated enhancement of the descending cortical pain-control system. The present study will evaluate the feasibility of introducing a novel VR system to the Burns Unit at the Queen Elizabeth Hospital Birmingham for dressing changes: virtual restorative environment therapy (VRET). The study will also explore the system's impact on pain during and after the dressing changes compared to conventional analgesia for ward-based burn dressing changes. A within-subject crossover design will be used to compare the following three conditions: 1. Interactive VRET plus conventional analgesics. 2. Passive VRET with conventional analgesics. 3. Conventional analgesics alone. Using the Monte Carlo method, and on the basis of previous local audit data, a sample size of 25 will detect a clinically significant 33 % reduction in worst pain scores experienced during dressing changes. The study accrual rate is currently slower than predicted by previous audits of admission data. A review of the screening log has found that recruitment has been limited by the nature of burn care, the ability of burn inpatients to provide informed consent and the ability of patients to use the VR equipment. 
Prior to the introduction of novel interactive technologies for patient use, the characteristics and capabilities of the target population need to be evaluated, to ensure that the interface devices and simulations are usable. Current Controlled Trials ISRCTN23330756. Date of registration: 25 February 2014.
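
    The protocol's Monte Carlo sample-size argument can be sketched generically. The Python sketch below is purely illustrative: the pain-score distributions, their variances, and the normal-approximation paired test are assumptions for demonstration, not details taken from the trial or its audit data.

```python
import random
import statistics

def simulate_power(n=25, reduction=0.33, sims=2000, alpha_z=1.96, seed=1):
    """Monte Carlo power estimate for a paired (within-subject) comparison
    of worst-pain scores on a 0-10 scale. All distributional choices here
    (means, SDs, normal-approximation test) are illustrative assumptions."""
    random.seed(seed)
    hits = 0
    for _ in range(sims):
        # Simulated control-condition scores, clipped to the 0-10 scale.
        control = [min(10.0, max(0.0, random.gauss(7.0, 2.0))) for _ in range(n)]
        # Treated scores: the assumed proportional reduction plus noise.
        treated = [min(10.0, max(0.0, c * (1 - reduction) + random.gauss(0, 1.0)))
                   for c in control]
        diffs = [c - t for c, t in zip(control, treated)]
        se = statistics.stdev(diffs) / n ** 0.5
        # Normal approximation to a paired test: |mean/se| > 1.96 ~ p < 0.05.
        if abs(statistics.mean(diffs) / se) > alpha_z:
            hits += 1
    return hits / sims

power = simulate_power()
```

    Under these assumed parameters the simulated power is close to 1; in practice the input distributions would come from prior audit data, as the protocol describes.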

  3. Auditory distance perception in humans: a review of cues, development, neuronal bases, and effects of sensory loss.

    PubMed

    Kolarik, Andrew J; Moore, Brian C J; Zahorik, Pavel; Cirstea, Silvia; Pardhan, Shahina

    2016-02-01

    Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.

  4. Virtual environments for scene of crime reconstruction and analysis

    NASA Astrophysics Data System (ADS)

    Howard, Toby L. J.; Murta, Alan D.; Gibson, Simon

    2000-02-01

    This paper describes research conducted in collaboration with Greater Manchester Police (UK), to evaluate the utility of virtual environments for scene of crime analysis, forensic investigation, and law enforcement briefing and training. We present an illustrated case study of the construction of a high-fidelity virtual environment, intended to match a particular real-life crime scene as closely as possible. We describe and evaluate the combination of several approaches including: the use of the Manchester Scene Description Language for constructing complex geometrical models; the application of a radiosity rendering algorithm with several novel features based on human perceptual considerations; texture extraction from forensic photography; and experiments with interactive walkthroughs and large-screen stereoscopic display of the virtual environment implemented using the MAVERIK system. We also discuss the potential applications of virtual environment techniques in the law enforcement and forensic communities.

  5. Emerging CAE technologies and their role in Future Ambient Intelligence Environments

    NASA Astrophysics Data System (ADS)

    Noor, Ahmed K.

    2011-03-01

    Dramatic improvements are on the horizon in Computer Aided Engineering (CAE) and various simulation technologies. The improvements are due, in part, to developments in a number of leading-edge technologies and their synergistic combinations/convergence. The technologies include ubiquitous, cloud, and petascale computing; ultra-high-bandwidth networks; pervasive wireless communication; knowledge-based engineering; networked immersive virtual environments and virtual worlds; novel human-computer interfaces; and powerful game engines and facilities. This paper describes the frontiers and emerging simulation technologies, and their role in future virtual product creation and learning/training environments. These will be ambient intelligence environments, incorporating a synergistic combination of novel agent-supported visual simulations (with cognitive learning and understanding abilities); immersive 3D virtual world facilities; development chain management systems and facilities (incorporating a synergistic combination of intelligent engineering and management tools); nontraditional methods; intelligent, multimodal and human-like interfaces; and mobile wireless devices. The virtual product creation environment will significantly enhance productivity and will stimulate creativity and innovation in future global virtual collaborative enterprises. The facilities in the learning/training environment will provide timely, engaging, personalized/collaborative and tailored visual learning.

  6. The Problem Patron and the Academic Library Web Site as Virtual Reference Desk.

    ERIC Educational Resources Information Center

    Taylor, Daniel; Porter, George S.

    2002-01-01

    Considers problem library patrons in a virtual environment based on experiences at California Institute of Technology's Web site and its use for virtual reference. Discusses the virtual reference desk concept; global visibility and access to the World Wide Web; problematic email; and advantages in the electronic environment. (LRW)

  7. Utility of virtual reality environments to examine physiological reactivity and subjective distress in adults who stutter.

    PubMed

    Brundage, Shelley B; Brinton, James M; Hancock, Adrienne B

    2016-12-01

    Virtual reality environments (VREs) allow for immersion in speaking environments that mimic real-life interactions while maintaining researcher control. VREs have been used successfully to engender arousal in other disorders. The purpose of this study was to investigate the utility of virtual reality environments to examine physiological reactivity and subjective ratings of distress in persons who stutter (PWS). Subjective and objective measures of arousal were collected from 10 PWS during four-minute speeches to a virtual audience and to a virtual empty room. Stuttering frequency and physiological measures (skin conductance level and heart rate) did not differ across speaking conditions, but subjective ratings of distress were significantly higher in the virtual audience condition compared to the virtual empty room. VREs have utility in elevating subjective ratings of distress in PWS. VREs have the potential to be useful tools for practicing treatment targets in a safe, controlled, and systematic manner. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. A Proposed Framework for Collaborative Design in a Virtual Environment

    NASA Astrophysics Data System (ADS)

    Breland, Jason S.; Shiratuddin, Mohd Fairuz

    This paper describes a proposed framework for collaborative design in a virtual environment. The framework consists of components that support true collaborative design in a real-time 3D virtual environment. In support of the proposed framework, a prototype application is being developed. The authors envision that the framework will include, but not be limited to, the following features: (1) real-time manipulation of 3D objects across the network, (2) support for multi-designer activities and information access, and (3) co-existence within the same virtual space. This paper also discusses proposed testing to determine the possible benefits of collaborative design in a virtual environment over other forms of collaboration, and results from a pilot test.

  9. Information Visualization in Virtual Environments

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Kwak, Dochan (Technical Monitor)

    2001-01-01

    Virtual environments provide a natural setting for a wide range of information visualization applications, particularly when the information to be visualized is defined on a three-dimensional domain (Bryson, 1996). This chapter provides an overview of the issues that arise when designing and implementing an information visualization application in a virtual environment. Many design issues that arise, such as display and user tracking, are common to any application of virtual environments. In this chapter we focus on those issues that are special to information visualization applications, as issues of wider concern are addressed elsewhere in this book.

  10. Logistic Model to Support Service Modularity for the Promotion of Reusability in a Web Objects-Enabled IoT Environment.

    PubMed

    Kibria, Muhammad Golam; Ali, Sajjad; Jarwar, Muhammad Aslam; Kumar, Sunil; Chong, Ilyoung

    2017-09-22

    Due to the very large number of connected virtual objects in the surrounding environment, intelligent service features in the Internet of Things require the reuse of existing virtual objects and composite virtual objects. If a new virtual object were created for each new service request, the number of virtual objects would increase exponentially. The Web of Objects applies the principle of service modularity in terms of virtual objects and composite virtual objects. Service modularity is a key concept in the Web Objects-Enabled Internet of Things (IoT) environment which allows for the reuse of existing virtual objects and composite virtual objects in heterogeneous ontologies. In the case of similar service requests occurring at the same or different locations, the already-instantiated virtual objects and their composites that exist in the same or different ontologies can be reused. In this case, similar types of virtual objects and composite virtual objects are searched and matched. Their reuse avoids duplication under similar circumstances, and reduces the time it takes to search for and instantiate them from their repositories, where similar functionalities are provided by similar types of virtual objects and their composites. Controlling and maintaining a virtual object means controlling and maintaining a real-world object in the real world. Even though the functional costs of virtual objects are just a fraction of those for deploying and maintaining real-world objects, this article focuses on reusing virtual objects and composite virtual objects, and discusses similarity matching of virtual objects and composite virtual objects. This article proposes a logistic model that supports service modularity for the promotion of reusability in the Web Objects-enabled IoT environment. Necessary functional components and a flowchart of an algorithm for reusing composite virtual objects are discussed.
Also, to realize the service modularity, a use case scenario is studied and implemented.
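
    The reuse principle the abstract describes can be illustrated with a toy registry. The class and method names below are hypothetical, and "similarity matching" is reduced to an exact functionality match; the article's actual model, with composite virtual objects and heterogeneous ontologies, is far richer.

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    """Minimal stand-in for a virtual object: a functionality label plus
    the location it serves. Names are illustrative, not from the spec."""
    functionality: str
    location: str

class VORegistry:
    """Registry that reuses an already-instantiated virtual object when a
    similar service request arrives, instead of creating a duplicate."""
    def __init__(self):
        self._objects = []

    def request(self, functionality, location):
        # Similarity matching, simplified to exact functionality match;
        # location may differ, since the abstract notes that reuse can
        # occur at the same or different locations.
        for vo in self._objects:
            if vo.functionality == functionality:
                return vo, True          # reused existing object
        vo = VirtualObject(functionality, location)
        self._objects.append(vo)
        return vo, False                 # newly instantiated

reg = VORegistry()
a, reused_a = reg.request("temperature-sensing", "room-1")
b, reused_b = reg.request("temperature-sensing", "room-2")
```

    The second request reuses the object created by the first, so the registry holds one object rather than two, which is the duplication-avoidance effect the article targets.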

  11. Logistic Model to Support Service Modularity for the Promotion of Reusability in a Web Objects-Enabled IoT Environment

    PubMed Central

    Chong, Ilyoung

    2017-01-01

    Due to the very large number of connected virtual objects in the surrounding environment, intelligent service features in the Internet of Things require the reuse of existing virtual objects and composite virtual objects. If a new virtual object were created for each new service request, the number of virtual objects would increase exponentially. The Web of Objects applies the principle of service modularity in terms of virtual objects and composite virtual objects. Service modularity is a key concept in the Web Objects-Enabled Internet of Things (IoT) environment which allows for the reuse of existing virtual objects and composite virtual objects in heterogeneous ontologies. In the case of similar service requests occurring at the same or different locations, the already-instantiated virtual objects and their composites that exist in the same or different ontologies can be reused. In this case, similar types of virtual objects and composite virtual objects are searched and matched. Their reuse avoids duplication under similar circumstances, and reduces the time it takes to search for and instantiate them from their repositories, where similar functionalities are provided by similar types of virtual objects and their composites. Controlling and maintaining a virtual object means controlling and maintaining a real-world object in the real world. Even though the functional costs of virtual objects are just a fraction of those for deploying and maintaining real-world objects, this article focuses on reusing virtual objects and composite virtual objects, and discusses similarity matching of virtual objects and composite virtual objects. This article proposes a logistic model that supports service modularity for the promotion of reusability in the Web Objects-enabled IoT environment. Necessary functional components and a flowchart of an algorithm for reusing composite virtual objects are discussed.
Also, to realize the service modularity, a use case scenario is studied and implemented. PMID:28937590

  12. Proof-of-Concept Part Task Trainer for Close Air Support Procedures

    DTIC Science & Technology

    2016-06-01

    …in training of USMC pilots for close air support operations? • What is the feasibility of developing a prototype virtual reality (VR) system that…Chapter IV provides a review of virtual reality (VR)/virtual environment (VE) and part-task trainers currently used in military training

  13. School performance and wellbeing of children with CI in different communicative-educational environments.

    PubMed

    Langereis, Margreet; Vermeulen, Anneke

    2015-06-01

    This study aimed to evaluate the long-term effects of CI on the auditory, language, educational and social-emotional development of deaf children in different educational-communicative settings. The outcomes of 58 children with profound hearing loss and normal non-verbal cognition, after 60 months of CI use, were analyzed. At testing, the children were enrolled in three different educational settings: mainstream education, where spoken language is used; hard-of-hearing education, where sign-supported spoken language is used; and bilingual deaf education, with Sign Language of the Netherlands and Sign Supported Dutch. Children were assessed on auditory speech perception, receptive language, educational attainment and wellbeing. The auditory speech perception of children with CI in mainstream education enables them to acquire language and educational levels that are comparable to those of their normal-hearing peers. Although the children in mainstream and hard-of-hearing settings show similar speech perception abilities, language development in children in hard-of-hearing settings lags significantly behind. Speech perception, language and educational attainments of children in deaf education remained extremely poor. Furthermore, more children in mainstream and hard-of-hearing environments are resilient than in deaf educational settings. Regression analyses showed an important influence of educational setting. Children with CI who are placed in early intervention environments that facilitate auditory development are able to achieve good auditory speech perception, language and educational levels in the long term. Most parents of these children report no social-emotional concerns. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  14. 3D workflow for HDR image capture of projection systems and objects for CAVE virtual environments authoring with wireless touch-sensitive devices

    NASA Astrophysics Data System (ADS)

    Prusten, Mark J.; McIntyre, Michelle; Landis, Marvin

    2006-02-01

    A 3D workflow pipeline is presented for High Dynamic Range (HDR) image capture of projected scenes or objects for presentation in CAVE virtual environments. The methods of HDR digital photography of environments vs. objects are reviewed. Samples of both types of virtual authoring, the actual CAVE environment and a sculpture, are shown. A series of software tools are incorporated into a pipeline called CAVEPIPE, allowing high-resolution objects and scenes to be composited together in natural illumination environments [1] and presented in our CAVE virtual reality environment. We also present a way to enhance the user interface for CAVE environments. Traditional methods of controlling navigation through virtual environments include gloves, HUDs and 3D mouse devices. By integrating a wireless network that includes both WiFi (IEEE 802.11b/g) and Bluetooth (IEEE 802.15.1) protocols, the non-graphical input control device can be eliminated. Wireless devices can then be added, including PDAs, smart phones, TabletPCs, portable gaming consoles, and PocketPCs.

  15. Auditory evoked functions in ground crew working in high noise environment of Mumbai airport.

    PubMed

    Thakur, L; Anand, J P; Banerjee, P K

    2004-10-01

    The continuous exposure to the relatively high level of noise in the surroundings of an airport is likely to affect the central pathway of the auditory system as well as the cognitive functions of the people working in that environment. The Brainstem Auditory Evoked Responses (BAER), Mid Latency Response (MLR) and P300 response of the ground crew employees working in Mumbai airport were studied to evaluate the effects of continuous exposure to the high level of noise of the surroundings of the airport on these responses. BAER, P300 and MLR were recorded by using a Nicolet Compact-4 (USA) instrument. Audiometry was also monitored with the help of a GSI-16 Audiometer. There was a significant increase in the peak III latency of the BAER in the subjects exposed to noise compared to controls, with no change in their P300 values. The exposed group showed hearing loss at different frequencies. The exposure to the high level of noise caused a considerable decline in auditory conduction up to the level of the brainstem, with no significant change in conduction in the midbrain, subcortical areas, auditory cortex and associated areas. There was also no significant change in cognitive function as measured by the P300 response.

  16. Computer Vision Assisted Virtual Reality Calibration

    NASA Technical Reports Server (NTRS)

    Kim, W.

    1999-01-01

    A computer vision assisted semi-automatic virtual reality (VR) calibration technology has been developed that can accurately match a virtual environment of graphically simulated three-dimensional (3-D) models to the video images of the real task environment.
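
    The core registration step behind such calibration, aligning model geometry to image measurements, can be illustrated in two dimensions. This is a toy least-squares rigid alignment (rotation plus translation) that assumes point correspondences are already known; the actual system recovers correspondences semi-automatically with computer vision and registers full 3-D models to video.

```python
import math

def align_2d(model_pts, image_pts):
    """Least-squares rigid alignment of 2-D model points onto
    corresponding image points (a 2-D analogue of the Kabsch method).
    Returns the rotation angle and translation that map model onto image."""
    n = len(model_pts)
    mx = sum(p[0] for p in model_pts) / n   # model centroid
    my = sum(p[1] for p in model_pts) / n
    ix = sum(p[0] for p in image_pts) / n   # image centroid
    iy = sum(p[1] for p in image_pts) / n
    # Accumulate cross-covariance terms of the centred point sets.
    s_cos = s_sin = 0.0
    for (ax, ay), (bx, by) in zip(model_pts, image_pts):
        ax, ay, bx, by = ax - mx, ay - my, bx - ix, by - iy
        s_cos += ax * bx + ay * by
        s_sin += ax * by - ay * bx
    theta = math.atan2(s_sin, s_cos)          # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = ix - (c * mx - s * my)               # translation after rotation
    ty = iy - (s * mx + c * my)
    return theta, (tx, ty)

def apply(theta, t, p):
    """Apply the recovered rotation and translation to a point."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1] + t[0], s * p[0] + c * p[1] + t[1])

# The image points are the model rotated 90 degrees and shifted by (2, 3);
# the solver should recover exactly that transform.
theta, t = align_2d([(0, 0), (1, 0), (0, 1)], [(2, 3), (2, 4), (1, 3)])
```

    In a real calibration the "image points" would come from tracked video features, and residual error after alignment indicates how well the virtual environment matches the task environment.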

  17. A workout for virtual bodybuilders (design issues for embodiment in multi-actor virtual environments)

    NASA Technical Reports Server (NTRS)

    Benford, Steve; Bowers, John; Fahlen, Lennart E.; Greenhalgh, Chris; Snowdon, Dave

    1994-01-01

    This paper explores the issue of user embodiment within collaborative virtual environments. By user embodiment we mean the provision of users with appropriate body images so as to represent them to others and also to themselves. By collaborative virtual environments we mean multi-user virtual reality systems which support cooperative work (although we argue that the results of our exploration may also be applied to other kinds of collaborative systems). The main part of the paper identifies a list of embodiment design issues including: presence, location, identity, activity, availability, history of activity, viewpoint, action point, gesture, facial expression, voluntary versus involuntary expression, degree of presence, reflecting capabilities, manipulating the user's view of others, representation across multiple media, autonomous and distributed body parts, truthfulness and efficiency. Following this, we show how these issues are reflected in our own DIVE and MASSIVE prototype collaborative virtual environments.

  18. Perfecting Scientists' Collaboration and Problem-Solving Skills in the Virtual Team Environment

    NASA Astrophysics Data System (ADS)

    Jabro, A.; Jabro, J.

    2012-04-01

    Numerous factors have contributed to the proliferation of work conducted in virtual teams at the domestic, national, and global levels: innovations in technology, critical developments in software, co-located research partners and diverse funding sources, dynamic economic and political environments, and a changing workforce. Today's scientists must be prepared not only to perform work in the virtual team environment, but to work effectively and efficiently despite physical and cultural barriers. Research supports that students who have been exposed to virtual team experiences are desirable in the professional and academic arenas. Research also supports that establishing and maintaining protocols for communication behavior prior to task discussion provides for successful team outcomes. Research conducted on graduate and undergraduate virtual teams' behaviors led to the development of successful pedagogic practices and assessment strategies.

  19. Building interactive virtual environments for simulated training in medicine using VRML and Java/JavaScript.

    PubMed

    Korocsec, D; Holobar, A; Divjak, M; Zazula, D

    2005-12-01

    Medicine is a difficult thing to learn. Experimenting with real patients should not be the only option; simulation deserves special attention here. Virtual Reality Modelling Language (VRML), as a tool for building virtual objects and scenes, has a good record of educational applications in medicine, especially for static and animated visualisations of body parts and organs. However, the level of interactivity and dynamics required to create computer simulations resembling situations in real environments is difficult to achieve. In the present paper we describe some approaches and techniques which we used to push the limits of current VRML technology further toward dynamic 3D representation of virtual environments (VEs). Our demonstration is based on the implementation of a virtual baby model whose vital signs can be controlled from an external Java application. The main contributions of this work are: (a) an outline and evaluation of the three-level VRML/Java implementation of the dynamic virtual environment, (b) a proposal for a modified VRML TimeSensor node, which greatly improves overall control of system performance, and (c) the architecture of a prototype distributed virtual environment for training in neonatal resuscitation, comprising the interactive virtual newborn, an active bedside monitor for vital signs and a full 3D representation of the surgery room.

  20. 3D multiplayer virtual pets game using Google Card Board

    NASA Astrophysics Data System (ADS)

    Herumurti, Darlis; Riskahadi, Dimas; Kuswardayan, Imam

    2017-08-01

    Virtual Reality (VR) is a technology which allows the user to interact with a virtual environment generated and simulated by computer. This technology can give users the sensation of actually being in the virtual environment. VR presents the virtual environment directly to the user rather than on a screen, but it requires an additional device to show the view of the virtual environment, known as a Head Mounted Device (HMD). The Oculus Rift and Microsoft HoloLens are the most famous HMD devices used in VR. In 2014, Google Card Board was introduced at the Google I/O developers conference. Google Card Board is a VR platform which allows users to enjoy VR in a simple and cheap way. In this research, we explore Google Card Board to develop a simulation game of raising a pet. Google Card Board is used to create the view of the VR environment, while the view and control in the VR environment are built using the Unity game engine. The simulation process is designed using a Finite State Machine (FSM), which helps to specify the process clearly so that it models raising a pet well. Raising a pet is a fun activity, but many conditions can make it difficult, e.g. environmental conditions, disease, and high cost. This research aims to explore and implement Google Card Board in a simulation of raising a pet.
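
    The FSM design approach this abstract mentions can be sketched minimally. The states and events below are illustrative guesses for a pet-raising simulation, not the game's published design.

```python
# Transition table: (current state, event) -> next state.
# States and events are hypothetical examples of pet-care behaviour.
TRANSITIONS = {
    ("idle", "feed"): "eating",
    ("idle", "play"): "playing",
    ("eating", "done"): "idle",
    ("playing", "tired"): "sleeping",
    ("sleeping", "wake"): "idle",
}

class PetFSM:
    """Finite state machine driving a virtual pet's behaviour."""
    def __init__(self, state="idle"):
        self.state = state

    def handle(self, event):
        """Apply an event; unknown (state, event) pairs leave the state
        unchanged, keeping the pet's behaviour well-defined at all times."""
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

pet = PetFSM()
pet.handle("feed")   # idle -> eating
pet.handle("done")   # eating -> idle
pet.handle("play")   # idle -> playing
```

    In a game-engine setting, player input and timers would raise the events, and each state would select the animation and interaction logic to run, which is why an explicit transition table keeps the simulation easy to reason about.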

  1. Virtual Acoustics, Aeronautics and Communications

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    An optimal approach to auditory display design for commercial aircraft would utilize both spatialized ("3-D") audio techniques and active noise cancellation for safer operations. Results from several aircraft simulator studies conducted at NASA Ames Research Center are reviewed, including Traffic alert and Collision Avoidance System (TCAS) warnings, spoken orientation "beacons" for gate identification and collision avoidance on the ground, and hardware for improved speech intelligibility. The implications of hearing loss amongst pilots are also considered.

  2. Virtual acoustics, aeronautics, and communications

    NASA Technical Reports Server (NTRS)

    Begault, D. R.; Wenzel, E. M. (Principal Investigator)

    1998-01-01

    An optimal approach to auditory display design for commercial aircraft would utilize both spatialized (3-D) audio techniques and active noise cancellation for safer operations. Results from several aircraft simulator studies conducted at NASA Ames Research Center are reviewed, including Traffic alert and Collision Avoidance System (TCAS) warnings, spoken orientation "beacons" for gate identification and collision avoidance on the ground, and hardware for improved speech intelligibility. The implications of hearing loss among pilots are also considered.

  3. The Sense of Agency Is More Sensitive to Manipulations of Outcome than Movement-Related Feedback Irrespective of Sensory Modality

    PubMed Central

    David, Nicole; Skoruppa, Stefan; Gulberti, Alessandro

    2016-01-01

    The sense of agency describes the ability to experience oneself as the agent of one's own actions. Previous studies of the sense of agency manipulated the predicted sensory feedback related either to movement execution or to the movement’s outcome, for example by delaying the movement of a virtual hand or the onset of a tone that resulted from a button press. Such temporal sensorimotor discrepancies reduce the sense of agency. It remains unclear whether movement-related feedback is processed differently than outcome-related feedback in terms of agency experience, especially if these types of feedback differ with respect to sensory modality. We employed a mixed-reality setup, in which participants tracked their finger movements by means of a virtual hand. They performed a single tap, which elicited a sound. The temporal contingency between the participants’ finger movements and (i) the movement of the virtual hand or (ii) the expected auditory outcome was systematically varied. In a visual control experiment, the tap elicited a visual outcome. For each feedback type and participant, changes in the sense of agency were quantified using a forced-choice paradigm and the Method of Constant Stimuli. Participants were more sensitive to delays of outcome than to delays of movement execution. This effect was very similar for visual or auditory outcome delays. Our results indicate different contributions of movement- versus outcome-related sensory feedback to the sense of agency, irrespective of the modality of the outcome. We propose that this differential sensitivity reflects the behavioral importance of assessing authorship of the outcome of an action. PMID:27536948
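The forced-choice procedure with the Method of Constant Stimuli amounts to presenting a fixed set of delays many times and interpolating the delay at which "delayed" responses cross 50%. A minimal sketch, using hypothetical delay levels and response proportions (not the study's data):

```python
# Method-of-constant-stimuli threshold sketch: each delay level is presented
# many times; the 50% "delayed" point is linearly interpolated from the
# observed response proportions. All numbers below are illustrative.

def detection_threshold(delays_ms, p_delayed, criterion=0.5):
    """Interpolate the delay at which p('delayed') crosses the criterion."""
    for (d0, p0), (d1, p1) in zip(zip(delays_ms, p_delayed),
                                  zip(delays_ms[1:], p_delayed[1:])):
        if p0 <= criterion <= p1:
            return d0 + (criterion - p0) * (d1 - d0) / (p1 - p0)
    return None  # criterion never crossed within the tested range

delays = [0, 100, 200, 300, 400]         # ms of added feedback delay
p_resp = [0.05, 0.20, 0.55, 0.85, 0.95]  # proportion judged "delayed"
threshold = detection_threshold(delays, p_resp)  # ~185.7 ms
```

A lower threshold for outcome delays than for movement delays, in this framing, is exactly the greater sensitivity the study reports.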

  4. Incidental Learning in 3D Virtual Environments: Relationships to Learning Style, Digital Literacy and Information Display

    ERIC Educational Resources Information Center

    Thomas, Wayne W.; Boechler, Patricia M.

    2014-01-01

    With teachers taking more interest in utilizing 3D virtual environments for educational purposes, research is needed to understand how learners perceive and process information within virtual environments (Eschenbrenner, Nah, & Siau, 2008). In this study, the authors sought to determine if learning style or digital literacy predict incidental…

  5. Trends in Studies on Virtual Learning Environments in Turkey between 1996-2014 Years: A Content Analysis

    ERIC Educational Resources Information Center

    Demirer, Veysel; Erbas, Cagdas

    2016-01-01

    This study reviews research on virtual learning environments in Turkey through the content analysis method. Sixty-three studies, consisting of theses, articles, and proceedings published in Turkish and English between 1996 and 2014, were analyzed. It was observed that "Second Life" was the most frequently preferred virtual learning environment.…

  6. An Examination of Usability of a Virtual Environment for Students Enrolled in a College of Agriculture

    ERIC Educational Resources Information Center

    Murphrey, Theresa Pesl; Rutherford, Tracy A.; Doerfert, David L.; Edgar, Leslie D.; Edgar, Don W.

    2014-01-01

    Educational technology continues to expand with multi-user virtual environments (e.g., Second Life™) being the latest technology. Understanding a virtual environment's usability can enhance educational planning and effective use. Usability includes the interaction quality between an individual and the item being assessed. The purpose was to assess…

  7. The Efficacy of an Immersive 3D Virtual versus 2D Web Environment in Intercultural Sensitivity Acquisition

    ERIC Educational Resources Information Center

    Coffey, Amy Jo; Kamhawi, Rasha; Fishwick, Paul; Henderson, Julie

    2017-01-01

    Relatively few studies have empirically tested computer-based immersive virtual environments' efficacy in teaching or enhancing pro-social attitudes, such as intercultural sensitivity. This channel study experiment was conducted (N = 159) to compare what effects, if any, an immersive 3D virtual environment would have upon subjects' intercultural…

  8. Design of Virtual Environments for the Comprehension of Planetary Phenomena Based on Students' Ideas.

    ERIC Educational Resources Information Center

    Bakas, Christos; Mikropoulos, Tassos A.

    2003-01-01

    Explains the design and development of an educational virtual environment to support the teaching of planetary phenomena, particularly the movements of Earth and the sun, day and night cycle, and change of seasons. Uses an interactive, three-dimensional (3D) virtual environment. Initial results show that the majority of students enthused about…

  9. The Validity of Virtual Environments for Eliciting Emotional Responses in Patients with Eating Disorders and in Controls

    ERIC Educational Resources Information Center

    Ferrer-Garcia, Marta; Gutierrez-Maldonado, Jose; Caqueo-Urizar, Alejandra; Moreno, Elena

    2009-01-01

    This article explores the efficacy of virtual environments representing situations that are emotionally significant to patients with eating disorders (ED) to modify depression and anxiety levels both in these patients and in controls. Eighty-five ED patients and 108 students were randomly exposed to five experimental virtual environments (a…

  10. The Effective Use of Virtual Environments in the Education and Rehabilitation of Students with Intellectual Disabilities.

    ERIC Educational Resources Information Center

    Standen, P. J.; Brown, D. J.; Cromby, J. J.

    2001-01-01

    Reviews the use of one type of computer software, virtual environments, for its potential in the education and rehabilitation of people with intellectual disabilities. Topics include virtual environments in special education; transfer of learning; adult learning; the role of the tutor; and future directions, including availability, accessibility,…

  11. Comparative Study of the Effectiveness of Three Learning Environments: Hyper-Realistic Virtual Simulations, Traditional Schematic Simulations and Traditional Laboratory

    ERIC Educational Resources Information Center

    Martinez, Guadalupe; Naranjo, Francisco L.; Perez, Angel L.; Suero, Maria Isabel; Pardo, Pedro J.

    2011-01-01

    This study compared the educational effects of computer simulations developed in a hyper-realistic virtual environment with the educational effects of either traditional schematic simulations or a traditional optics laboratory. The virtual environment was constructed on the basis of Java applets complemented with a photorealistic visual output.…

  12. GEARS a 3D Virtual Learning Environment and Virtual Social and Educational World Used in Online Secondary Schools

    ERIC Educational Resources Information Center

    Barkand, Jonathan; Kush, Joseph

    2009-01-01

    Virtual Learning Environments (VLEs) are becoming increasingly popular in online education environments and have multiple pedagogical advantages over more traditional approaches to education. VLEs include 3D worlds where students can engage in simulated learning activities such as Second Life. According to Claudia L'Amoreaux at Linden Lab, "at…

  13. A Model Supported Interactive Virtual Environment for Natural Resource Sharing in Environmental Education

    ERIC Educational Resources Information Center

    Barbalios, N.; Ioannidou, I.; Tzionas, P.; Paraskeuopoulos, S.

    2013-01-01

    This paper introduces a realistic 3D model supported virtual environment for environmental education, that highlights the importance of water resource sharing by focusing on the tragedy of the commons dilemma. The proposed virtual environment entails simulations that are controlled by a multi-agent simulation model of a real ecosystem consisting…

  14. Learning in 3D Virtual Environments: Collaboration and Knowledge Spirals

    ERIC Educational Resources Information Center

    Burton, Brian G.; Martin, Barbara N.

    2010-01-01

    The purpose of this case study was to determine if learning occurred within a 3D virtual learning environment by determining if elements of collaboration and Nonaka and Takeuchi's (1995) knowledge spiral were present. A key portion of this research was the creation of a Virtual Learning Environment. This 3D VLE utilized the Torque Game Engine…

  15. Virtual hand: a 3D tactile interface to virtual environments

    NASA Astrophysics Data System (ADS)

    Rogowitz, Bernice E.; Borrel, Paul

    2008-02-01

    We introduce a novel system that allows users to experience the sensation of touch in a computer graphics environment. In this system, the user places his/her hand on an array of pins, which is moved about space on a 6 degree-of-freedom robot arm. The surface of the pins defines a surface in the virtual world. This "virtual hand" can move about the virtual world. When the virtual hand encounters an object in the virtual world, the heights of the pins are adjusted so that they represent the object's shape, surface, and texture. A control system integrates pin and robot arm motions to transmit information about objects in the computer graphics world to the user. It also allows the user to edit, change and move the virtual objects, shapes and textures. This system provides a general framework for touching, manipulating, and modifying objects in a 3-D computer graphics environment, which may be useful in a wide range of applications, including computer games, computer aided design systems, and immersive virtual worlds.

  16. Optimal resource allocation for novelty detection in a human auditory memory.

    PubMed

    Sinkkonen, J; Kaski, S; Huotilainen, M; Ilmoniemi, R J; Näätänen, R; Kaila, K

    1996-11-04

    A theory of resource allocation for neuronal low-level filtering is presented, based on an analysis of optimal resource allocation in simple environments. A quantitative prediction of the theory was verified in measurements of the magnetic mismatch response (MMR), an auditory event-related magnetic response of the human brain. The amplitude of the MMR was found to be directly proportional to the information conveyed by the stimulus. To the extent that the amplitude of the MMR can be used to measure resource usage by the auditory cortex, this finding supports our theory that, at least for early auditory processing, energy resources are used in proportion to the information content of incoming stimulus flow.
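The reported proportionality between MMR amplitude and the information conveyed by a stimulus can be made concrete with Shannon surprisal, -log2 p: rarer stimuli carry more bits. A minimal illustration, not the authors' model:

```python
import math

# Shannon surprisal: the information (in bits) conveyed by an event of
# probability p. Rarer deviants carry more information, consistent with the
# reported proportionality between MMR amplitude and stimulus information.

def surprisal_bits(p):
    if not 0 < p <= 1:
        raise ValueError("probability must be in (0, 1]")
    return -math.log2(p)

surprisal_bits(0.5)  # 1.0 bit (a frequent standard)
surprisal_bits(0.1)  # ~3.32 bits (a rare deviant)
```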

  17. Effects of selective attention on the electrophysiological representation of concurrent sounds in the human auditory cortex.

    PubMed

    Bidet-Caulet, Aurélie; Fischer, Catherine; Besle, Julien; Aguera, Pierre-Emmanuel; Giard, Marie-Helene; Bertrand, Olivier

    2007-08-29

    In noisy environments, we use auditory selective attention to actively ignore distracting sounds and select relevant information, as during a cocktail party to follow one particular conversation. The present electrophysiological study aims at deciphering the spatiotemporal organization of the effect of selective attention on the representation of concurrent sounds in the human auditory cortex. Sound onset asynchrony was manipulated to induce the segregation of two concurrent auditory streams. Each stream consisted of amplitude modulated tones at different carrier and modulation frequencies. Electrophysiological recordings were performed in epileptic patients with pharmacologically resistant partial epilepsy, implanted with depth electrodes in the temporal cortex. Patients were presented with the stimuli while they either performed an auditory distracting task or actively selected one of the two concurrent streams. Selective attention was found to affect steady-state responses in the primary auditory cortex, and transient and sustained evoked responses in secondary auditory areas. The results provide new insights on the neural mechanisms of auditory selective attention: stream selection during sound rivalry would be facilitated not only by enhancing the neural representation of relevant sounds, but also by reducing the representation of irrelevant information in the auditory cortex. Finally, they suggest a specialization of the left hemisphere in the attentional selection of fine-grained acoustic information.

  18. Achievement of Virtual and Real Objects Using a Short-Term Motor Learning Protocol in People with Duchenne Muscular Dystrophy: A Crossover Randomized Controlled Trial.

    PubMed

    Massetti, Thais; Fávero, Francis Meire; Menezes, Lilian Del Ciello de; Alvarez, Mayra Priscila Boscolo; Crocetta, Tânia Brusque; Guarnieri, Regiani; Nunes, Fátima L S; Monteiro, Carlos Bandeira de Mello; Silva, Talita Dias da

    2018-04-01

    To evaluate whether people with Duchenne muscular dystrophy (DMD) practicing a task in a virtual environment could improve performance given a similar task in a real environment, as well as distinguishing whether there is transference between performing the practice in virtual environment and then a real environment and vice versa. Twenty-two people with DMD were evaluated and divided into two groups. The goal was to reach out and touch a red cube. Group A began with the real task and had to touch a real object, and Group B began with the virtual task and had to reach a virtual object using the Kinect system. ANOVA showed that all participants decreased the movement time from the first (M = 973 ms) to the last block of acquisition (M = 783 ms) in both virtual and real tasks and motor learning could be inferred by the short-term retention and transfer task (with increasing distance of the target). However, the evaluation of task performance demonstrated that the virtual task provided an inferior performance when compared to the real task in all phases of the study, and there was no effect for sequence. Both virtual and real tasks promoted improvement of performance in the acquisition phase, short-term retention, and transfer. However, there was no transference of learning between environments. In conclusion, it is recommended that the use of virtual environments for individuals with DMD needs to be considered carefully.

  19. Verbalizing, Visualizing, and Navigating: The Effect of Strategies on Encoding a Large-Scale Virtual Environment

    ERIC Educational Resources Information Center

    Kraemer, David J. M.; Schinazi, Victor R.; Cawkwell, Philip B.; Tekriwal, Anand; Epstein, Russell A.; Thompson-Schill, Sharon L.

    2017-01-01

    Using novel virtual cities, we investigated the influence of verbal and visual strategies on the encoding of navigation-relevant information in a large-scale virtual environment. In 2 experiments, participants watched videos of routes through 4 virtual cities and were subsequently tested on their memory for observed landmarks and their ability to…

  20. Virtual agents in a simulated virtual training environment

    NASA Technical Reports Server (NTRS)

    Achorn, Brett; Badler, Norman L.

    1993-01-01

    A drawback to live-action training simulations is the need to gather a large group of participants in order to train a few individuals. One solution to this difficulty is the use of computer-controlled agents in a virtual training environment. This allows a human participant to be replaced by a virtual, or simulated, agent when only limited responses are needed. Each agent possesses a specified set of behaviors and is capable of limited autonomous action in response to its environment or the direction of a human trainee. The paper describes these agents in the context of a simulated hostage rescue training session, involving two human rescuers assisted by three virtual (computer-controlled) agents and opposed by three other virtual agents.

  1. Auditory Stream Segregation in Autism Spectrum Disorder: Benefits and Downsides of Superior Perceptual Processes

    ERIC Educational Resources Information Center

    Bouvet, Lucie; Mottron, Laurent; Valdois, Sylviane; Donnadieu, Sophie

    2016-01-01

    Auditory stream segregation allows us to organize our sound environment, by focusing on specific information and ignoring what is unimportant. One previous study reported difficulty in stream segregation ability in children with Asperger syndrome. In order to investigate this question further, we used an interleaved melody recognition task with…

  2. Training in virtual environments: putting theory into practice.

    PubMed

    Moskaliuk, Johannes; Bertram, Johanna; Cress, Ulrike

    2013-01-01

    Virtual training environments are used when training in reality is challenging because of the high costs, danger, time or effort involved. In this paper we argue for a theory-driven development of such environments, with the aim of connecting theory to practice and ensuring that the training provided fits the needs of the trained persons and their organisations. As an example, we describe the development of VirtualPolice (ViPOL), a training environment for police officers in a federal state of Germany. We provided the theoretical foundation for ViPOL concerning the feeling of being present, social context, learning motivation and perspective-taking. We developed a framework to put theory into practice. To evaluate our framework we interviewed the stakeholders of ViPOL and surveyed current challenges and limitations of virtual training. The results led to a review of a theory-into-practice framework which is presented in the conclusion. Feeling of presence, social context, learning motivation and perspective-taking are relevant for training in virtual environments. The theory-into-practice framework presented here supports developers and trainers in implementing virtual training tools. The framework was validated with an interview study of stakeholders of a virtual training project. We identified limitations, opportunities and challenges.

  3. Immersive virtual reality improves movement patterns in patients after ACL reconstruction: implications for enhanced criteria-based return-to-sport rehabilitation.

    PubMed

    Gokeler, Alli; Bisschop, Marsha; Myer, Gregory D; Benjaminse, Anne; Dijkstra, Pieter U; van Keeken, Helco G; van Raay, Jos J A M; Burgerhof, Johannes G M; Otten, Egbert

    2016-07-01

    The purpose of this study was to evaluate the influence of immersion in a virtual reality environment on knee biomechanics in patients after ACL reconstruction (ACLR). It was hypothesized that virtual reality techniques aimed to change attentional focus would influence altered knee flexion angle, knee extension moment and peak vertical ground reaction force (vGRF) in patients following ACLR. Twenty athletes following ACLR and 20 healthy controls (CTRL) performed a step-down task in both a non-virtual reality environment and a virtual reality environment displaying a pedestrian traffic scene. A motion analysis system and force plates were used to measure kinematics and kinetics during a step-down task to analyse each single-leg landing. A significant main effect was found for environment for knee flexion excursion (P = n.s.). Significant interaction differences were found between environment and groups for vGRF (P = 0.004), knee moment (P < 0.001), knee angle at peak vGRF (P = 0.01) and knee flexion excursion (P = 0.03). There was larger effect of virtual reality environment on knee biomechanics in patients after ACLR compared with controls. Patients after ACLR immersed in virtual reality environment demonstrated knee joint biomechanics that approximate those of CTRL. The results of this study indicate that a realistic virtual reality scenario may distract patients after ACLR from conscious motor control. Application of clinically available technology may aid in current rehabilitation programmes to target altered movement patterns after ACLR. Diagnostic study, Level III.

  4. Attending to auditory memory.

    PubMed

    Zimmermann, Jacqueline F; Moscovitch, Morris; Alain, Claude

    2016-06-01

    Attention to memory describes the process of attending to memory traces when the object is no longer present. It has been studied primarily for representations of visual stimuli, with only a few studies examining attention to sound object representations in short-term memory. Here, we review the interplay of attention and auditory memory with an emphasis on 1) attending to auditory memory in the absence of related external stimuli (i.e., reflective attention) and 2) effects of existing memory on guiding attention. Attention to auditory memory is discussed in the context of change deafness, and we argue that failures to detect changes in our auditory environments are most likely the result of a faulty comparison system of incoming and stored information. Also, objects are the primary building blocks of auditory attention, but attention can also be directed to individual features (e.g., pitch). We review short-term and long-term memory guided modulation of attention based on characteristic features, location, and/or semantic properties of auditory objects, and propose that auditory attention to memory pathways emerge after sensory memory. A neural model for auditory attention to memory is developed, which comprises two separate pathways in the parietal cortex, one involved in attention to higher-order features and the other involved in attention to sensory information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Virtual environments special needs and evaluative methods.

    PubMed

    Brown, D J; Standen, P J; Cobb, S V

    1998-01-01

    This paper presents an overview of the development of the Learning in Virtual Environments programme (LIVE), carried out in special education over the last four years. It is more precisely a project chronology, so that the reader can sense the historical development of the programme rather than giving emphasis to any one particular feature or breakthrough, which are covered in other papers and available through the authors. The project conception in a special school in Nottingham is followed by a description of the development of experiential and communicational virtual learning environments. These are followed, in turn, by the results of our testing programmes which show that experience gained in a virtual environment can transfer to the real world and that their use can encourage self-directed activity in students with severe learning difficulties. Also included is a discussion of the role of virtual learning environments (VLEs) in special education and of its attributes in the context of contemporary educational theory.

  6. Seeing ahead: experience and language in spatial perspective.

    PubMed

    Alloway, Tracy Packiam; Corley, Martin; Ramscar, Michael

    2006-03-01

    Spatial perspective can be directed by various reference frames, as well as by the direction of motion. In the present study, we explored how ambiguity in spatial tasks can be resolved. Participants were presented with virtual reality environments in order to stimulate a spatial reference frame based on motion. They interacted with an ego-moving spatial system in Experiment 1 and an object-moving spatial system in Experiment 2. While interacting with the virtual environment, the participants were presented with either a question representing a motion system different from that of the virtual environment or a nonspatial question relating to physical features of the virtual environment. They then performed the target task: assigning the label "front" in an ambiguous spatial task. The findings indicate that the disambiguation of spatial terms can be influenced by embodied experiences, as represented by the virtual environment, as well as by linguistic context.

  7. Auditory Exposure in the Neonatal Intensive Care Unit: Room Type and Other Predictors.

    PubMed

    Pineda, Roberta; Durant, Polly; Mathur, Amit; Inder, Terrie; Wallendorf, Michael; Schlaggar, Bradley L

    2017-04-01

    To quantify early auditory exposures in the neonatal intensive care unit (NICU) and evaluate how these are related to medical and environmental factors. We hypothesized that there would be less auditory exposure in the NICU private room, compared with the open ward. Preterm infants born at ≤ 28 weeks gestation (33 in the open ward, 25 in private rooms) had auditory exposure quantified at birth, 30 and 34 weeks postmenstrual age (PMA), and term equivalent age using the Language Environmental Acquisition device. Meaningful language (P < .0001), the number of adult words (P < .0001), and electronic noise (P < .0001) increased across PMA. Silence increased (P = .0007) and noise decreased (P < .0001) across PMA. There was more silence in the private room (P = .02) than the open ward, with an average of 1.9 hours more silence in a 16-hour period. There was an interaction between PMA and room type for distant words (P = .01) and average decibels (P = .04), indicating that changes in auditory exposure across PMA were different for infants in private rooms compared with infants in the open ward. Medical interventions were related to more noise in the environment, although parent presence (P = .009) and engagement (P = .002) were related to greater language exposure. Average sound levels in the NICU were 58.9 ± 3.6 decibels, with an average peak level of 86.9 ± 1.4 decibels. Understanding the NICU auditory environment paves the way for interventions that reduce high levels of adverse sound and enhance positive forms of auditory exposure, such as language. Copyright © 2016 Elsevier Inc. All rights reserved.
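Averaging sound levels expressed in decibels is normally done in the power domain (an equivalent continuous level, Leq) rather than arithmetically, because decibels are logarithmic. The abstract does not state which average was used; the sketch below just illustrates the standard computation:

```python
import math

# Energy-based ("equivalent level") average of sound pressure levels in dB:
# convert each level to relative power, average, and convert back.
# Example values are illustrative, not the study's measurements.

def leq_db(levels_db):
    mean_power = sum(10 ** (level / 10) for level in levels_db) / len(levels_db)
    return 10 * math.log10(mean_power)

leq_db([50, 50, 50])  # 50.0 dB: identical levels are unchanged
leq_db([50, 80])      # ~77.0 dB: the louder interval dominates the average
```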

  8. ISMCR 1994: Topical Workshop on Virtual Reality. Proceedings of the Fourth International Symposium on Measurement and Control in Robotics

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This symposium on measurement and control in robotics included sessions on: (1) rendering, including tactile perception and applied virtual reality; (2) applications in simulated medical procedures and telerobotics; (3) tracking sensors in a virtual environment; (4) displays for virtual reality applications; (5) sensory feedback including a virtual environment application with partial gravity simulation; and (6) applications in education, entertainment, technical writing, and animation.

  9. [The use of virtual learning environment in teaching basic and advanced life support].

    PubMed

    Cogo, Ana Luísa Petersen; Silveira, Denise Tolfo; Lírio, Aline de Morais; Severo, Carolina Lopes

    2003-12-01

    The present paper is the result of an experiment conducted as part of the Nursing: basic and advanced life support course, which was offered as a semi-online course using the virtual learning environment called Learning Space. The virtual learning environment optimizes classroom dynamics, since in the classroom setting, practical activities may be privileged; besides, learning is customized as students may access the environment whenever and wherever they wish.

  10. Verification of Emmert's law in actual and virtual environments.

    PubMed

    Nakamizo, Sachio; Imamura, Mariko

    2004-11-01

    We examined Emmert's law by measuring the perceived size of an afterimage and the perceived distance of the surface on which the afterimage was projected in actual and virtual environments. The actual environment consisted of a corridor with ample cues as to distance and depth. The virtual environment was made from the CAVE of a virtual reality system. The afterimage, disc-shaped and one degree in diameter, was produced by flashing with an electric photoflash. The observers were asked to estimate the perceived distance to surfaces located at various physical distances (1 to 24 m) by the magnitude estimation method and to estimate the perceived size of the afterimage projected on the surfaces by a matching method. The results show that the perceived size of the afterimage was directly proportional to the perceived distance in both environments; thus, Emmert's law holds in virtual as well as actual environments. We suggest that Emmert's law is a specific case of a functional principle of distance scaling by the visual system.
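The direct proportionality reported here is Emmert's law itself: for a fixed retinal size (visual angle θ), perceived size grows linearly with perceived distance, S = 2D·tan(θ/2). A sketch using the study's 1-degree afterimage:

```python
import math

# Emmert's law sketch: an afterimage of fixed retinal size (1 degree here,
# as in the study) appears larger the farther away the projection surface
# is perceived to be: S = 2 * D * tan(theta / 2).

def perceived_size_m(distance_m, visual_angle_deg=1.0):
    return 2 * distance_m * math.tan(math.radians(visual_angle_deg) / 2)

perceived_size_m(1)   # ~0.017 m at 1 m
perceived_size_m(24)  # ~0.419 m at 24 m: size scales linearly with distance
```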

  11. Effects of Hatchery Rearing on the Structure and Function of Salmonid Mechanosensory Systems.

    PubMed

    Brown, Andrew D; Sisneros, Joseph A; Jurasin, Tyler; Coffin, Allison B

    2016-01-01

    This paper reviews recent studies on the effects of hatchery rearing on the auditory and lateral line systems of salmonid fishes. Major conclusions are that (1) hatchery-reared juveniles exhibit abnormal lateral line morphology (relative to wild-origin conspecifics), suggesting that the hatchery environment affects lateral line structure, perhaps due to differences in the hydrodynamic conditions of hatcheries versus natural rearing environments, and (2) hatchery-reared salmonids have a high proportion of abnormal otoliths, a condition associated with reduced auditory sensitivity and suggestive of inner ear dysfunction.

  12. Involvement of the human midbrain and thalamus in auditory deviance detection.

    PubMed

    Cacciaglia, Raffaele; Escera, Carles; Slabu, Lavinia; Grimm, Sabine; Sanjuán, Ana; Ventura-Campos, Noelia; Ávila, César

    2015-02-01

    Prompt detection of unexpected changes in the sensory environment is critical for survival. In the auditory domain, the occurrence of a rare stimulus triggers a cascade of neurophysiological events spanning over multiple time-scales. Besides the role of the mismatch negativity (MMN), whose cortical generators are located in supratemporal areas, cumulative evidence suggests that violations of auditory regularities can be detected earlier and lower in the auditory hierarchy. Recent human scalp recordings have shown signatures of auditory mismatch responses at shorter latencies than those of the MMN. Moreover, animal single-unit recordings have demonstrated that rare stimulus changes cause a release from stimulus-specific adaptation in neurons of the primary auditory cortex, the medial geniculate body (MGB), and the inferior colliculus (IC). Although these data suggest that change detection is a pervasive property of the auditory system which may reside upstream cortical sites, direct evidence for the involvement of subcortical stages in the human auditory novelty system is lacking. Using event-related functional magnetic resonance imaging during a frequency oddball paradigm, we here report that auditory deviance detection occurs in the MGB and the IC of healthy human participants. By implementing a random condition controlling for neural refractoriness effects, we show that auditory change detection in these subcortical stations involves the encoding of statistical regularities from the acoustic input. These results provide the first direct evidence of the existence of multiple mismatch detectors nested at different levels along the human ascending auditory pathway. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Effects of optokinetic stimulation induced by virtual reality on locomotion: a preliminary study.

    PubMed

    Ohyama, Seizo; Nishiike, Suetaka; Watanabe, Hiroshi; Matsuoka, Katsunori; Takeda, Noriaki

    2008-11-01

    Exposure to a virtual environment for 20 min was sufficient to cause adaptive changes in locomotion in healthy subjects, suggesting that virtual environments might improve locomotor deviation in patients with unilateral labyrinthine defects. Postural and locomotor control in patients with unilateral labyrinthine defects deviates towards the lesion side. The aim of this study was to examine whether active locomotion within a virtual environment can increase the functionality of rehabilitation. We examined the effects of optokinetic stimulation produced by a virtual reality environment on ocular movement and locomotor tracks in 10 healthy subjects. During the 20 min experiment, the mean locomotor deviation and the mean frequency and mean amplitude of optokinetic nystagmus during the last period of the experiment were significantly higher than those during the initial period.

  14. How challenges in auditory fMRI led to general advancements for the field.

    PubMed

    Talavage, Thomas M; Hall, Deborah A

    2012-08-15

    In the early years of fMRI research, the auditory neuroscience community sought to expand its knowledge of the underlying physiology of hearing, while also seeking to come to grips with the inherent acoustic disadvantages of working in the fMRI environment. Early collaborative efforts between prominent auditory research laboratories and prominent fMRI centers led to development of a number of key technical advances that have subsequently been widely used to elucidate principles of auditory neurophysiology. Perhaps the key imaging advance was the simultaneous and parallel development of strategies to use pulse sequences in which the volume acquisitions were "clustered," providing gaps in which stimuli could be presented without direct masking. Such sequences have become widespread in fMRI studies using auditory stimuli and also in a range of translational research domains. This review presents the parallel stories of the people and the auditory neurophysiology research that led to these sequences. Copyright © 2011 Elsevier Inc. All rights reserved.
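    The "clustered" (sparse-sampling) strategy described above can be illustrated with a toy timing calculation: each volume is acquired in a short burst at the end of a long TR, leaving a silent gap in which auditory stimuli can be presented unmasked by scanner noise. All timing values below are assumptions for illustration, not figures from the review.

```python
def sparse_schedule(n_volumes=5, tr=10.0, acq_time=2.0):
    """Sketch of a clustered ('sparse') fMRI acquisition timeline.

    Each TR of `tr` seconds ends with an `acq_time`-second volume
    acquisition; the preceding tr - acq_time seconds are scanner-silent
    and available for stimulus presentation. Values are hypothetical.
    """
    schedule = []
    for v in range(n_volumes):
        t0 = v * tr                  # start of the silent gap
        t1 = t0 + (tr - acq_time)    # scanner starts acquiring
        schedule.append({"silent_gap": (t0, t1),
                         "acquisition": (t1, t1 + acq_time)})
    return schedule
```

With these toy numbers, each 10 s TR yields an 8 s silent window followed by a 2 s acquisition, so stimuli land well clear of the gradient noise.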

  15. Apparatus for providing sensory substitution of force feedback

    NASA Technical Reports Server (NTRS)

    Massimino, Michael J. (Inventor); Sheridan, Thomas B. (Inventor)

    1995-01-01

    A feedback apparatus for an operator to control an effector that is remote from the operator to interact with a remote environment has a local input device to be manipulated by the operator. Sensors in the effector's environment are capable of sensing the amplitude of forces arising between the effector and its environment, the direction of application of such forces, or both amplitude and direction. A feedback signal corresponding to such a component of the force is generated and transmitted to the environment of the operator. The signal is transduced into an auditory sensory substitution signal to which the operator is sensitive. Sound production apparatus presents the auditory signal to the operator. The full range of the force amplitude may be represented by a single audio speaker. Auditory display elements may be stereo headphones or free-standing audio speakers, numbering from one to many more than two. The location of the application of the force may also be specified by the location of audio speakers that generate signals corresponding to specific forces. Alternatively, the location may be specified by the frequency of an audio signal, or by the apparent location of an audio signal, as simulated by a combination of signals originating at different locations.
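    The sensory-substitution idea in this record, force amplitude mapped to an audio parameter and force direction to apparent sound location, can be sketched as follows. Every name, range, and mapping here is a hypothetical illustration, not the patented implementation.

```python
def force_to_audio(force_newtons, direction_deg,
                   f_min=200.0, f_max=2000.0, max_force=50.0):
    """Hypothetical force-to-audio sensory-substitution mapping.

    Force amplitude maps linearly to pitch; force direction maps to a
    stereo balance between two speakers. All parameter names and ranges
    are illustrative assumptions.
    """
    # normalize force to 0..1, clipping to the representable range
    amp = max(0.0, min(force_newtons, max_force)) / max_force
    freq_hz = f_min + amp * (f_max - f_min)        # amplitude -> pitch
    # direction (-90 deg = full left, +90 deg = full right) -> pan in [-1, 1]
    pan = max(-90.0, min(direction_deg, 90.0)) / 90.0
    left_gain = (1.0 - pan) / 2.0
    right_gain = (1.0 + pan) / 2.0
    return freq_hz, left_gain, right_gain
```

A driver loop would feed the returned frequency and channel gains to an audio synthesis backend; with more than two speakers, the same pan value generalizes to a gain per speaker position.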

  16. Usability Studies in Virtual and Traditional Computer Aided Design Environments for Fault Identification

    DTIC Science & Technology

    2017-08-08

    Usability Studies In Virtual And Traditional Computer Aided Design Environments For Fault Identification Dr. Syed Adeel Ahmed, Xavier University...virtual environment with wand interfaces compared directly with a workstation non-stereoscopic traditional CAD interface with keyboard and mouse. In...the differences in interaction when compared with traditional human computer interfaces. This paper provides analysis via usability study methods

  17. The Use of Immersive Virtual Reality in the Learning Sciences: Digital Transformations of Teachers, Students, and Social Context

    ERIC Educational Resources Information Center

    Bailenson, Jeremy N.; Yee, Nick; Blascovich, Jim; Beall, Andrew C.; Lundblad, Nicole; Jin, Michael

    2008-01-01

    This article illustrates the utility of using virtual environments to transform social interaction via behavior and context, with the goal of improving learning in digital environments. We first describe the technology and theories behind virtual environments and then report data from 4 empirical studies. In Experiment 1, we demonstrated that…

  18. Instructional Features for Training in Virtual Environments

    DTIC Science & Technology

    2006-07-01

    Technical Report 1184 Instructional Features for Training in Virtual Environments Michael J. Singer U. S. Army Research Institute Jason P. Kring University of...provides in comparison to traditional, real world experience training programs (Hays & Singer, 1989; Swezey & Andrews, 2001). First, as with the

  19. Human Machine Interfaces for Teleoperators and Virtual Environments

    NASA Technical Reports Server (NTRS)

    Durlach, Nathaniel I. (Compiler); Sheridan, Thomas B. (Compiler); Ellis, Stephen R. (Compiler)

    1991-01-01

    In Mar. 1990, a meeting organized around the general theme of teleoperation research into virtual environment display technology was conducted. This is a collection of conference-related fragments that will give a glimpse of the potential of the following fields and how they interplay: sensorimotor performance; human-machine interfaces; teleoperation; virtual environments; performance measurement and evaluation methods; and design principles and predictive models.

  20. Feelings of Challenge and Threat among Pre-Service Teachers Studying in Different Learning Environments--Virtual vs. Blended Courses

    ERIC Educational Resources Information Center

    Zeichner, Orit; Zilka, Gila

    2016-01-01

    This study focused on feelings of threat and challenge among pre-service teachers in different learning environments--virtual and blended courses. The two goals of this study were (1) to define the subjects' feelings in virtual and blended learning environments, and the relationship between them, and (2) to examine how their feelings changed…

  1. Virtual Golden Foods Corporation: Generic Skills in a Virtual Crisis Environment (A Pilot Study)

    ERIC Educational Resources Information Center

    Godat, Meredith

    2007-01-01

    Workplace learning in a crisis-rich environment is often difficult if not impossible to integrate into programs so that students are able to experience and apply crisis management practices and principles. This study presents the results of a pilot project that examined the effective use of a virtual reality (VR) environment as a tool to teach…

  2. Varieties of virtualization

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.

    1991-01-01

    Natural environments have a content, i.e., the objects in them; a geometry, i.e., a pattern of rules for positioning and displacing the objects; and a dynamics, i.e., a system of rules describing the effects of forces acting on the objects. Human interaction with most common natural environments has been optimized by centuries of evolution. Virtual environments created through the human-computer interface similarly have a content, geometry, and dynamics, but the arbitrary character of the computer simulation creating them does not ensure that human interaction with these virtual environments will be natural. The interaction, indeed, could be supernatural, but it also could be impossible. An important determinant of the comprehensibility of a virtual environment is the correspondence between the environmental frames of reference and those associated with the control of environmental objects. The effects of rotation and displacement of control frames of reference with respect to corresponding environmental references differ depending upon whether perceptual judgement or manual tracking performance is measured. The perceptual effects of frame of reference displacement may be analyzed in terms of distortions in the process of virtualizing the synthetic environment space. The effects of frame of reference displacement and rotation have been studied by asking subjects to estimate exocentric direction in a virtual space.

  3. A New Continent of Ideas

    NASA Technical Reports Server (NTRS)

    1990-01-01

    While a new technology called 'virtual reality' is still at the 'ground floor' level, one of its basic components, 3D computer graphics, is already in wide commercial use and expanding. Other components that permit a human operator to 'virtually' explore an artificial environment and to interact with it are being demonstrated routinely at Ames and elsewhere. Virtual reality might be defined as an environment capable of being virtually entered - telepresence, it is called - or interacted with by a human. The Virtual Interface Environment Workstation (VIEW) is a head-mounted stereoscopic display system in which the display may be an artificial computer-generated environment or a real environment relayed from remote video cameras. The operator can 'step into' this environment and interact with it. The DataGlove has a series of fiber optic cables and sensors that detect any movement of the wearer's fingers and transmit the information to a host computer; a computer-generated image of the hand will move exactly as the operator moves his gloved hand. With appropriate software, the operator can use the glove to interact with the computer scene by grasping an object. The DataSuit is a sensor-equipped full-body garment that greatly increases the sphere of performance for virtual reality simulations.

  4. Novel Virtual Environment for Alternative Treatment of Children with Cerebral Palsy

    PubMed Central

    de Oliveira, Juliana M.; Fernandes, Rafael Carneiro G.; Pinto, Cristtiano S.; Pinheiro, Plácido R.; Ribeiro, Sidarta

    2016-01-01

    Cerebral palsy is a severe condition usually caused by decreased brain oxygenation during pregnancy, at birth or soon after birth. Conventional treatments for cerebral palsy are often tiresome and expensive, leading patients to quit treatment. In this paper, we describe a virtual environment for patients to engage in a playful therapeutic game for neuropsychomotor rehabilitation, based on the experience of the occupational therapy program of the Nucleus for Integrated Medical Assistance (NAMI) at the University of Fortaleza, Brazil. Integration between patient and virtual environment occurs through the hand motion sensor “Leap Motion,” plus the electroencephalographic sensor “MindWave,” responsible for measuring attention levels during task execution. To evaluate the virtual environment, eight clinical experts on cerebral palsy were subjected to a questionnaire regarding the potential of the experimental virtual environment to promote cognitive and motor rehabilitation, as well as the potential of the treatment to enhance risks and/or negatively influence the patient's development. Based on the very positive appraisal of the experts, we propose that the experimental virtual environment is a promising alternative tool for the rehabilitation of children with cerebral palsy. PMID:27403154

  5. Physical environment virtualization for human activities recognition

    NASA Astrophysics Data System (ADS)

    Poshtkar, Azin; Elangovan, Vinayak; Shirkhodaie, Amir; Chan, Alex; Hu, Shuowen

    2015-05-01

    Human activity recognition research relies heavily on extensive datasets to verify and validate the performance of activity recognition algorithms. However, obtaining real datasets is expensive and highly time-consuming. A physics-based virtual simulation can accelerate the development of context-based human activity recognition algorithms and techniques by generating relevant training and testing videos simulating diverse operational scenarios. In this paper, we discuss in detail the requisite capabilities of a virtual environment to aid as a test bed for evaluating and enhancing activity recognition algorithms. To demonstrate the numerous advantages of virtual environment development, a newly developed virtual environment simulation modeling (VESM) environment is presented here to generate calibrated multisource imagery datasets suitable for the development and testing of recognition algorithms for context-based human activities. The VESM environment serves as a versatile test bed to generate a vast amount of realistic data for training and testing of sensor processing algorithms. To demonstrate the effectiveness of the VESM environment, we present various simulated scenarios and processed results to infer proper semantic annotations from the high-fidelity imagery data for human-vehicle activity recognition under different operational contexts.

  6. Exploring Learner Acceptance of the Use of Virtual Reality in Medical Education: A Case Study of Desktop and Projection-Based Display Systems

    ERIC Educational Resources Information Center

    Huang, Hsiu-Mei; Liaw, Shu-Sheng; Lai, Chung-Min

    2016-01-01

    Advanced technologies have been widely applied in medical education, including human-patient simulators, immersive virtual reality Cave Automatic Virtual Environment systems, and video conferencing. Evaluating learner acceptance of such virtual reality (VR) learning environments is a critical issue for ensuring that such technologies are used to…

  7. Facilitating 3D Virtual World Learning Environments Creation by Non-Technical End Users through Template-Based Virtual World Instantiation

    ERIC Educational Resources Information Center

    Liu, Chang; Zhong, Ying; Ozercan, Sertac; Zhu, Qing

    2013-01-01

    This paper presents a template-based solution to overcome technical barriers non-technical computer end users face when developing functional learning environments in three-dimensional virtual worlds (3DVW). "iVirtualWorld," a prototype of a platform-independent 3DVW creation tool that implements the proposed solution, facilitates 3DVW…

  8. Taking Science Online: Evaluating Presence and Immersion through a Laboratory Experience in a Virtual Learning Environment for Entomology Students

    ERIC Educational Resources Information Center

    Annetta, Leonard; Klesath, Marta; Meyer, John

    2009-01-01

    A 3-D virtual field trip was integrated into an online college entomology course and developed as a trial for the possible incorporation of future virtual environments to supplement online higher education laboratories. This article provides an explanation of the rationale behind creating the virtual experience, the Bug Farm; the method and…

  9. Pre-Service Teachers' Perspectives on Using Scenario-Based Virtual Worlds in Science Education

    ERIC Educational Resources Information Center

    Kennedy-Clark, Shannon

    2011-01-01

    This paper presents the findings of a study on the current knowledge and attitudes of pre-service teachers on the use of scenario-based multi-user virtual environments in science education. The 28 participants involved in the study were introduced to "Virtual Singapura," a multi-user virtual environment, and completed an open-ended questionnaire.…

  10. Introducing and Evaluating the Behavior of Non-Verbal Features in the Virtual Learning

    ERIC Educational Resources Information Center

    Dharmawansa, Asanka D.; Fukumura, Yoshimi; Marasinghe, Ashu; Madhuwanthi, R. A. M.

    2015-01-01

    The objective of this research is to introduce the behavior of non-verbal features of e-Learners in the virtual learning environment to establish a fair representation of the real user by an avatar who represents the e-Learner in the virtual environment and to distinguish the deportment of the non-verbal features during the virtual learning…

  11. Empirical Evidence of Priming, Transfer, Reinforcement, and Learning in the Real and Virtual Trillium Trails

    ERIC Educational Resources Information Center

    Harrington, M. C. R.

    2011-01-01

    Over the past 20 years, there has been a debate on the effectiveness of virtual reality used for learning with young children, producing many ideas but little empirical proof. This empirical study compared learning activity in situ of a real environment (Real) and a desktop virtual reality (Virtual) environment, built with video game technology,…

  12. Virtual reality environments for post-stroke arm rehabilitation.

    PubMed

    Subramanian, Sandeep; Knaut, Luiz A; Beaudoin, Christian; McFadyen, Bradford J; Feldman, Anatol G; Levin, Mindy F

    2007-06-22

    Optimal practice and feedback elements are essential requirements for maximal motor recovery in patients with motor deficits due to central nervous system lesions. A virtual environment (VE) was created that incorporates practice and feedback elements necessary for maximal motor recovery. It permits varied and challenging practice in a motivating environment that provides salient feedback. The VE gives the user knowledge of results feedback about motor behavior and knowledge of performance feedback about the quality of pointing movements made in a virtual elevator. Movement distances are related to length of body segments. We describe an immersive and interactive experimental protocol developed in a virtual reality environment using the CAREN system. The VE can be used as a training environment for the upper limb in patients with motor impairments.

  13. Virtual Satellite

    NASA Technical Reports Server (NTRS)

    Hammers, Stephan R.

    2008-01-01

    Virtual Satellite (VirtualSat) is a computer program that creates an environment that facilitates the development, verification, and validation of flight software for a single spacecraft or for multiple spacecraft flying in formation. In this environment, enhanced functionality and autonomy of the navigation, guidance, and control systems of a spacecraft are provided by a virtual satellite, that is, a computational model that simulates the dynamic behavior of the spacecraft. Within this environment, it is possible to execute any associated software, the development of which could benefit from knowledge of, and possible interaction (typically, exchange of data) with, the virtual satellite. Examples of associated software include programs for simulating spacecraft power and thermal-management systems. This environment is independent of the flight hardware that will eventually host the flight software, making it possible to develop the software simultaneously with, or even before, delivery of the hardware. Optionally, by use of interfaces included in VirtualSat, real hardware can be used in place of simulated components. The flight software, coded in the C or C++ programming language, is compilable and loadable into VirtualSat without any special modifications. Thus, VirtualSat can serve as a relatively inexpensive software test bed for the development, testing, integration, and post-launch maintenance of spacecraft flight software.

  14. Future Evolution of Virtual Worlds as Communication Environments

    NASA Astrophysics Data System (ADS)

    Prisco, Giulio

    Extensive experience creating locations and activities inside virtual worlds provides the basis for contemplating their future. Users of virtual worlds are diverse in their goals for these online environments; for example, immersionists want them to be alternative realities disconnected from real life, whereas augmentationists want them to be communication media supporting real-life activities. As the technology improves, the diversity of virtual worlds will increase along with their significance. Many will incorporate more advanced virtual reality, or serve as major media for long-distance collaboration, or become the venues for futurist social movements. Key issues are how people can create their own virtual worlds, travel across worlds, and experience a variety of multimedia immersive environments. This chapter concludes by noting the view among some computer scientists that future technologies will permit uploading human personalities to artificial intelligence avatars, thereby enhancing human beings and rendering the virtual worlds entirely real.

  15. Agreements in Virtual Organizations

    NASA Astrophysics Data System (ADS)

    Pankowska, Malgorzata

    This chapter is an attempt to explain the important impact that contract theory delivers with respect to the concept of virtual organization. The author believes that not enough research has been conducted to transfer the theoretical foundations of networking to the phenomena of virtual organizations and the open autonomic computing environment, so as to ensure their controllability and management. The main research problem of this chapter is to explain the significance of agreements for virtual organization governance. The first part of the chapter explains the differences between virtual machines and virtual organizations, and then describes the significance of the former for the development of the latter. Next, virtual organization development tendencies are presented and problems of IT governance in highly distributed organizational environments are discussed. The last part of the chapter covers the analysis of contract and agreement management for governance in open computing environments.

  16. Exploring a novel environment improves motivation and promotes recall of words.

    PubMed

    Schomaker, Judith; van Bronkhorst, Marthe L V; Meeter, Martijn

    2014-01-01

    Active exploration of novel environments is known to increase plasticity in animals, promoting long-term potentiation in the hippocampus and enhancing memory formation. These effects can occur during as well as after exploration. In humans, novelty's effects on memory have been investigated with other methods, but never in an active exploration paradigm. We therefore investigated whether active spatial exploration of a novel, compared to a previously familiarized, virtual environment promotes performance on an unrelated word learning task. Exploration of the novel environment enhanced recall, generally thought to be hippocampus-dependent, but not recognition, believed to rely less on the hippocampus. Recall was better for participants who gave higher presence ratings for their experience in the virtual environment. These ratings were higher for the novel than for the familiar virtual environment, suggesting that novelty increased attention to the virtual rather than the real environment; however, this did not explain the effect of novelty on recall.

  17. Virtual interface environment workstations

    NASA Technical Reports Server (NTRS)

    Fisher, S. S.; Wenzel, E. M.; Coler, C.; Mcgreevy, M. W.

    1988-01-01

    A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture has been developed at NASA's Ames Research Center for use as a multipurpose interface environment. This Virtual Interface Environment Workstation (VIEW) system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems, and human factors research. System configuration, research scenarios, and research directions are described.

  18. Altering User Movement Behaviour in Virtual Environments.

    PubMed

    Simeone, Adalberto L; Mavridou, Ifigeneia; Powell, Wendy

    2017-04-01

    In immersive Virtual Reality systems, users tend to move in a Virtual Environment as they would in an analogous physical environment. In this work, we investigated how user behaviour is affected when the Virtual Environment differs from the physical space. We created two sets of four environments each, plus a virtual replica of the physical environment as a baseline. The first set focused on aesthetic discrepancies, such as a water surface in place of solid ground. The second focused on mixing immaterial objects together with those paired to tangible objects, for example, barring an area with walls or obstacles. We designed a study where participants had to reach three waypoints laid out in such a way as to prompt a decision on which path to follow, based on the conflict between the mismatching visual stimuli and their awareness of the real layout of the room. We analysed their performance to determine whether their trajectories deviated significantly from the shortest route. Our results indicate that participants altered their trajectories in the presence of surfaces representing higher walking difficulty (for example, water instead of grass). However, when the graphical appearance was found to be ambiguous, there was no significant trajectory alteration. The environments mixing immaterial with physical objects had the most impact on trajectories, with a mean deviation from the shortest route of 60 cm against the 37 cm of environments with aesthetic alterations. The co-existence of paired and unpaired virtual objects was reported to support the idea that all objects participants saw were backed by physical props. From these results and our observations, we derive guidelines on how to alter user movement behaviour in Virtual Environments.
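    The trajectory measure reported above (mean deviation from the shortest route) can be approximated by averaging each sampled position's perpendicular distance to the straight line between two waypoints. This is a toy sketch under that assumption, not the authors' exact analysis.

```python
import math

def mean_path_deviation(path, start, goal):
    """Mean perpendicular distance of sampled 2-D positions from the
    straight (shortest) route between `start` and `goal`.

    `path` is a list of (x, y) samples; an illustrative measure only.
    """
    ax, ay = start
    bx, by = goal
    seg_len = math.hypot(bx - ax, by - ay)
    total = 0.0
    for px, py in path:
        # |cross product of (goal - start) and (point - start)| / |segment|
        total += abs((bx - ax) * (ay - py) - (ax - px) * (by - ay)) / seg_len
    return total / len(path)
```

A walk sampled exactly on the start-goal line yields a deviation of zero; positions sampled 1 m off the line contribute 1 m each to the average.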

  19. Using voice input and audio feedback to enhance the reality of a virtual experience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miner, N.E.

    1994-04-01

    Virtual Reality (VR) is a rapidly emerging technology which allows participants to experience a virtual environment through stimulation of the participant's senses. Intuitive and natural interactions with the virtual world help to create a realistic experience. Typically, a participant is immersed in a virtual environment through the use of a 3-D viewer. Realistic, computer-generated environment models and accurate tracking of a participant's view are important factors for adding realism to a virtual experience. Stimulating a participant's sense of sound and providing a natural form of communication for interacting with the virtual world are equally important. This paper discusses the advantages and importance of incorporating voice recognition and audio feedback capabilities into a virtual world experience. Various approaches and levels of complexity are discussed. Examples of the use of voice and sound are presented through the description of a research application developed in the VR laboratory at Sandia National Laboratories.

  20. Electrophysiologic Assessment of Auditory Training Benefits in Older Adults

    PubMed Central

    Anderson, Samira; Jenkins, Kimberly

    2015-01-01

    Older adults often exhibit speech perception deficits in difficult listening environments. At present, hearing aids or cochlear implants are the main options for therapeutic remediation; however, they only address audibility and do not compensate for central processing changes that may accompany aging and hearing loss or declines in cognitive function. It is unknown whether long-term hearing aid or cochlear implant use can restore changes in central encoding of temporal and spectral components of speech or improve cognitive function. Therefore, consideration should be given to auditory/cognitive training that targets auditory processing and cognitive declines, taking advantage of the plastic nature of the central auditory system. The demonstration of treatment efficacy is an important component of any training strategy. Electrophysiologic measures can be used to assess training-related benefits. This article will review the evidence for neuroplasticity in the auditory system and the use of evoked potentials to document treatment efficacy. PMID:27587912

  1. Visual-auditory integration during speech imitation in autism.

    PubMed

    Williams, Justin H G; Massaro, Dominic W; Peel, Natalie J; Bosseler, Alexis; Suddendorf, Thomas

    2004-01-01

    Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional 'mirror neuron' systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a 'virtual' head (Baldi), delivered speech stimuli for identification in auditory, visual or bimodal conditions. Children with ASD were poorer than controls at recognizing stimuli in the unimodal conditions, but once performance on this measure was controlled for, no group difference was found in the bimodal condition. A group of participants with ASD were also trained to develop their speech-reading ability. Training improved visual accuracy and this also improved the children's ability to utilize visual information in their processing of speech. Overall results were compared to predictions from mathematical models based on integration and non-integration, and were most consistent with the integration model. We conclude that, whilst they are less accurate in recognizing stimuli in the unimodal condition, children with ASD show normal integration of visual and auditory speech stimuli. Given that training in recognition of visual speech was effective, children with ASD may benefit from multi-modal approaches in imitative therapy and language training.

  2. Virtual Workshop Environment (VWE): A Taxonomy and Service Oriented Architecture (SOA) Framework for Modularized Virtual Learning Environments (VLE)--Applying the Learning Object Concept to the VLE

    ERIC Educational Resources Information Center

    Paulsson, Fredrik; Naeve, Ambjorn

    2006-01-01

    Based on existing Learning Object taxonomies, this article suggests an alternative Learning Object taxonomy, combined with a general Service Oriented Architecture (SOA) framework, aiming to transfer the modularized concept of Learning Objects to modularized Virtual Learning Environments. The taxonomy and SOA-framework exposes a need for a clearer…

  3. The Effects of Sound-Field Amplification on Children with Hearing Impairment and Other Diagnoses in Preschool and Primary Classes

    ERIC Educational Resources Information Center

    Furno, Lois Ehrler

    2012-01-01

    Effective learning occurs in auditory environments. Background noise is inherent to classrooms; recommended noise levels are 15 decibels softer than instruction, a target that is rarely achieved. Learning is diminished by interference with the auditory reception of information, especially for students who are hard of hearing or have other diagnoses. Sound-field…

  4. Delayed Auditory Feedback in the Treatment of Stuttering: Clients as Consumers

    ERIC Educational Resources Information Center

    Van Borsel, John; Reunes, Gert; Van den Bergh, Nathalie

    2003-01-01

    Purpose: To investigate the effect of repeated exposure to delayed auditory feedback (DAF) during a 3-month period outside a clinical environment and with only minimal clinical guidance on speech fluency in people who stutter. Method: A pretest-post-test design was used with repeated exposure to DAF during 3 months as the independent variable.…

  5. Virtual Heritage Tours: Developing Interactive Narrative-Based Environments for Historical Sites

    NASA Astrophysics Data System (ADS)

    Tuck, Deborah; Kuksa, Iryna

    In the last decade there has been a noticeable growth in the use of virtual reality (VR) technologies for reconstructing cultural heritage sites. However, many of these virtual reconstructions evidence little of sites' social histories. Narrating the Past is a research project that aims to re-address this issue by investigating methods for embedding social histories within cultural heritage sites and by creating narrative based virtual environments (VEs) within them. The project aims to enhance the visitor's knowledge and understanding by developing a navigable 3D story space, in which participants are immersed. This has the potential to create a malleable virtual environment allowing the visitor to configure their own narrative paths.

  6. Formalizing and Promoting Collaboration in 3D Virtual Environments - A Blueprint for the Creation of Group Interaction Patterns

    NASA Astrophysics Data System (ADS)

    Schmeil, Andreas; Eppler, Martin J.

    Although virtual worlds and other types of multi-user 3D collaboration spaces have long been subjects of research and application experience, it remains unclear how to best benefit from meeting with colleagues and peers in a virtual environment with the aim of working together. Making use of the potential of virtual embodiment, i.e. being immersed in a space as a personal avatar, allows for innovative forms of collaboration. In this paper, we present a framework that serves as a systematic formalization of collaboration elements in virtual environments. The framework is based on the semiotic distinctions among pragmatic, semantic, and syntactic perspectives. It serves as a blueprint to guide users in designing, implementing, and executing virtual collaboration patterns tailored to their needs. We present two team and two community collaboration pattern examples resulting from the application of the framework: Virtual Meeting, Virtual Design Studio, Spatial Group Configuration, and Virtual Knowledge Fair. In conclusion, we also point out future research directions for this emerging domain.

  7. Perfecting scientists’ collaboration and problem-solving skills in the virtual team environment

    USDA-ARS?s Scientific Manuscript database

    Numerous factors have contributed to the proliferation of conducting work in virtual teams at the domestic, national, and global levels: innovations in technology, critical developments in software, co-lo...

  8. Virtual Reality Calibration for Telerobotic Servicing

    NASA Technical Reports Server (NTRS)

    Kim, W.

    1994-01-01

    A virtual reality calibration technique has been developed that matches a virtual environment of simulated graphics models, in 3-D geometry and perspective, with actual camera views of the remote-site task environment. It enables high-fidelity preview/predictive displays with calibrated graphics overlaid on live video.
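
    Matching graphics models to camera views of this kind amounts to estimating camera parameters so that projected model points coincide with their observed image locations. A minimal pinhole-projection sketch (the focal length and principal point below are illustrative assumptions, not values from the NASA system):

```python
def project(point_3d, focal_length, cx, cy):
    """Project a 3-D point in camera coordinates onto the image plane
    using the ideal pinhole model (no lens distortion)."""
    x, y, z = point_3d
    if z <= 0:
        raise ValueError("point must be in front of the camera")
    return (focal_length * x / z + cx, focal_length * y / z + cy)

# A model vertex 10 m in front of the camera, with assumed intrinsics:
# focal length 500 px, principal point (320, 240)
u, v = project((1.0, 2.0, 10.0), focal_length=500.0, cx=320.0, cy=240.0)
# -> (370.0, 340.0): the pixel where a calibrated overlay would draw this vertex
```

    Calibration then adjusts the camera parameters (and the camera pose) until such projections line up with the live video features.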

  9. Why Virtual, Why Environments? Implementing Virtual Reality Concepts in Computer-Assisted Language Learning.

    ERIC Educational Resources Information Center

    Schwienhorst, Klaus

    2002-01-01

    Discussion of computer-assisted language learning focuses on the benefits of virtual reality environments, particularly for foreign language contexts. Topics include three approaches to learner autonomy; supporting reflection, including self-awareness; supporting interaction, including collaboration; and supporting experimentation, including…

  10. Towards Gesture-Based Multi-User Interactions in Collaborative Virtual Environments

    NASA Astrophysics Data System (ADS)

    Pretto, N.; Poiesi, F.

    2017-11-01

    We present a virtual reality (VR) setup that enables multiple users to participate in collaborative virtual environments and interact via gestures. A collaborative VR session is established through a network of users that is composed of a server and a set of clients. The server manages the communication amongst clients and is created by one of the users. Each user's VR setup consists of a Head Mounted Display (HMD) for immersive visualisation, a hand tracking system to interact with virtual objects and a single-hand joypad to move in the virtual environment. We use Google Cardboard as an HMD for the VR experience and a Leap Motion for hand tracking, thus making our solution low cost. We evaluate our VR setup through a forensics use case, where real-world objects pertaining to a simulated crime scene are included in a VR environment, acquired using a smartphone-based 3D reconstruction pipeline. Users can interact using virtual gesture-based tools such as pointers and rulers.

  11. Effects of Team Emotional Authenticity on Virtual Team Performance.

    PubMed

    Connelly, Catherine E; Turel, Ofir

    2016-01-01

    Members of virtual teams lack many of the visual or auditory cues that are usually used as the basis for impressions about fellow team members. We focus on the effects of the impressions formed in this context, and use social exchange theory to understand how these impressions affect team performance. Our pilot study, using content analysis (n = 191 students), suggested that most individuals believe that they can assess others' emotional authenticity in online settings by focusing on the content and tone of the messages. Our quantitative study examined the effects of these assessments. Structural equation modeling (SEM) analysis (n = 81 student teams) suggested that team-level trust and teamwork behaviors mediate the relationship between team emotional authenticity and team performance, illuminating the importance of team emotional authenticity for team processes and outcomes.

  12. Assessment of radiation awareness training in immersive virtual environments

    NASA Astrophysics Data System (ADS)

    Whisker, Vaughn E., III

    The prospect of new nuclear power plant orders in the near future and the graying of the current workforce create a need to train new personnel faster and better. Immersive virtual reality (VR) may offer a solution to the training challenge. VR technology presented in a CAVE Automatic Virtual Environment (CAVE) provides a high-fidelity, one-to-one scale environment where areas of the power plant can be recreated and virtual radiation environments can be simulated, making it possible to safely expose workers to virtual radiation in the context of the actual work environment. The use of virtual reality for training is supported by many educational theories; constructivism and discovery learning, in particular. Educational theory describes the importance of matching the training to the task. Plant access training and radiation worker training, common forms of training in the nuclear industry, rely on computer-based training methods in most cases, which effectively transfer declarative knowledge, but are poor at transferring skills. If an activity were to be added, the training would provide personnel with the opportunity to develop skills and apply their knowledge so they could be more effective when working in the radiation environment. An experiment was developed to test immersive virtual reality's suitability for training radiation awareness. Using a mixed methodology of quantitative and qualitative measures, the subjects' performances before and after training were assessed. First, subjects completed a pre-test to measure their knowledge prior to completing any training. Next they completed unsupervised computer-based training, which consisted of a PowerPoint presentation and a PDF document. 
After completing a brief orientation activity in the virtual environment, one group of participants received supplemental radiation awareness training in a simulated radiation environment presented in the CAVE, while a second group, the control group, moved directly to the assessment phase of the experiment. The CAVE supplied an activity-based training environment where learners were able to use a virtual survey meter to explore the properties of radiation sources and the effects of time and distance on radiation exposure. Once the training stage had ended, the subjects completed an assessment activity where they were asked to complete four tasks in a simulated radiation environment in the CAVE, which was designed to provide a more authentic assessment than simply testing understanding using a quiz. After the practicum, the subjects completed a post-test. Survey information was also collected to assist the researcher with interpretation of the collected data. Response to the training was measured by completion time, radiation exposure received, successful completion of the four tasks in the practicum, and scores on the post-test. These results were combined to create a radiation awareness score. In addition, observational data was collected as the subjects completed the tasks. The radiation awareness scores of the control group and the group that received supplemental training in the virtual environment were compared. T-tests showed that the effect of the supplemental training was not significant; however, calculation of the effect size showed a small-to-medium effect of the training. The CAVE group received significantly less radiation exposure during the assessment activity, and they completed the activities an average of one minute faster. These results indicate that the training was effective, primarily for instilling radiation sensitivity. Observational data collected during the assessment supports this conclusion. 
The training environment provided by the immersive virtual reality recreated a radiation environment where learners could apply knowledge they had been taught by computer-based training. Activity-based training has been shown to be a more effective way to transfer skills because of the similarity between the training environment and the application environment. Virtual reality enables the training environment to look and feel like the application environment. The results of this experiment therefore support consideration of radiation awareness training in immersive virtual environments by the nuclear industry.
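
    The "small-to-medium effect" mentioned above refers to a standardized effect size such as Cohen's d. A minimal sketch of the computation, using hypothetical awareness scores rather than the study's data:

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d for two independent samples, using the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    mean_a = sum(group_a) / na
    mean_b = sum(group_b) / nb
    # Sample variances with Bessel's correction in each group
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical radiation-awareness scores for the CAVE-trained and control groups
cave = [78, 82, 75, 88, 80]
control = [74, 79, 70, 83, 77]
d = cohens_d(cave, control)
```

    By Cohen's conventions, d around 0.2 is "small" and around 0.5 "medium", which is why an effect of that size can still fail a significance test in a small sample.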

  13. Brave New (Interactive) Worlds: A Review of the Design Affordances and Constraints of Two 3D Virtual Worlds as Interactive Learning Environments

    ERIC Educational Resources Information Center

    Dickey, Michele D.

    2005-01-01

    Three-dimensional virtual worlds are an emerging medium currently being used in both traditional classrooms and for distance education. Three-dimensional (3D) virtual worlds are a combination of desktop interactive Virtual Reality within a chat environment. This analysis provides an overview of Active Worlds Educational Universe and Adobe…

  14. VECTR: Virtual Environment Computational Training Resource

    NASA Technical Reports Server (NTRS)

    Little, William L.

    2018-01-01

    The Westridge Middle School Curriculum and Community Night is an annual event designed to introduce students and parents to potential employers in the Central Florida area. NASA participated in the event in 2017, and has been asked to come back for the 2018 event on January 25. We will be demonstrating our Microsoft Hololens Virtual Rovers project, and the Virtual Environment Computational Training Resource (VECTR) virtual reality tool.

  15. Virtual acoustic environments for comprehensive evaluation of model-based hearing devices.

    PubMed

    Grimm, Giso; Luberadzka, Joanna; Hohmann, Volker

    2018-06-01

    The aim was to create virtual acoustic environments (VAEs) with interactive dynamic rendering for applications in audiology. A toolbox for the creation and rendering of dynamic virtual acoustic environments (TASCAR) that allows direct user interaction was developed for application in hearing aid research and audiology. The software architecture and the simulation methods used to produce VAEs are outlined. Example environments are described and analysed. With the proposed software, a tool for the simulation of VAEs is available, and a set of VAEs rendered with it is described.

  16. Measuring sense of presence and user characteristics to predict effective training in an online simulated virtual environment.

    PubMed

    De Leo, Gianluca; Diggs, Leigh A; Radici, Elena; Mastaglio, Thomas W

    2014-02-01

    Virtual-reality solutions have successfully been used to train distributed teams. This study aimed to investigate the correlation between user characteristics and sense of presence in an online virtual-reality environment where distributed teams are trained. A greater sense of presence has the potential to make training in the virtual environment more effective, leading to the formation of teams that perform better in a real environment. Being able to identify, before starting online training, those user characteristics that are predictors of a greater sense of presence can lead to the selection of trainees who would benefit most from the online simulated training. This is an observational study with a retrospective postsurvey of participants' user characteristics and degree of sense of presence. Twenty-nine members from 3 Air Force National Guard Medical Service expeditionary medical support teams participated in an online virtual environment training exercise and completed the Independent Television Commission-Sense of Presence Inventory survey, which measures sense of presence and user characteristics. Nonparametric statistics were applied to determine the statistical significance of user characteristics to sense of presence. Comparing user characteristics to the 4 scales of the Independent Television Commission-Sense of Presence Inventory using the Kendall τ test gave the following results: the user characteristics "how often you play video games" (τ(26)=-0.458, P<0.01) and "television/film production knowledge" (τ(27)=-0.516, P<0.01) were significantly related to negative effects. Negative effects refer to adverse physiologic reactions owing to the virtual environment experience such as dizziness, nausea, headache, and eyestrain. The user characteristic "knowledge of virtual reality" was significantly related to engagement (τ(26)=0.463, P<0.01) and negative effects (τ(26)=-0.404, P<0.05). 
Individuals who have knowledge about virtual environments and experience with gaming environments report a higher sense of presence, which indicates that they will likely benefit more from online virtual training. Future research could include a larger population of expeditionary medical support teams, and the results obtained could be used to create a model that predicts the level of presence from user characteristics. To maximize results and minimize costs, only those individuals who, based on their characteristics, are expected to have a higher sense of presence and fewer negative effects could be selected for online simulated virtual environment training.
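
    The τ statistics reported above measure rank agreement between a user characteristic and a presence scale. A minimal sketch of the computation (the tau-a variant without tie correction, on purely hypothetical data; the study's analysis would typically use the tie-corrected tau-b):

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant pairs) / total pairs.
    No correction for ties (tau-b adjusts the denominator for them)."""
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (len(x) * (len(x) - 1) // 2)

# Hypothetical ranks: more gaming experience, fewer reported negative effects
gaming_hours = [0, 1, 2, 4, 6, 8, 10]
negative_effects = [9, 8, 7, 5, 4, 3, 1]
tau = kendall_tau(gaming_hours, negative_effects)  # -1.0: perfectly discordant
```

    A negative τ here mirrors the study's finding that gaming experience was associated with fewer adverse reactions.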

  17. Deficits in auditory processing contribute to impairments in vocal affect recognition in autism spectrum disorders: A MEG study.

    PubMed

    Demopoulos, Carly; Hopkins, Joyce; Kopald, Brandon E; Paulson, Kim; Doyle, Lauren; Andrews, Whitney E; Lewine, Jeffrey David

    2015-11-01

    The primary aim of this study was to examine whether there is an association between magnetoencephalography-based (MEG) indices of basic cortical auditory processing and vocal affect recognition (VAR) ability in individuals with autism spectrum disorder (ASD). MEG data were collected from 25 children/adolescents with ASD and 12 control participants using a paired-tone paradigm to measure quality of auditory physiology, sensory gating, and rapid auditory processing. Group differences were examined in auditory processing and vocal affect recognition ability. The relationship between differences in auditory processing and vocal affect recognition deficits was examined in the ASD group. Replicating prior studies, participants with ASD showed longer M1n latencies and impaired rapid processing compared with control participants. These variables were significantly related to VAR, with the linear combination of auditory processing variables accounting for approximately 30% of the variability after controlling for age and language skills in participants with ASD. VAR deficits in ASD are typically interpreted as part of a core, higher order dysfunction of the "social brain"; however, these results suggest they also may reflect basic deficits in auditory processing that compromise the extraction of socially relevant cues from the auditory environment. As such, they also suggest that therapeutic targeting of sensory dysfunction in ASD may have additional positive implications for other functional deficits. (c) 2015 APA, all rights reserved.

  18. Effects of Background Music on Objective and Subjective Performance Measures in an Auditory BCI.

    PubMed

    Zhou, Sijie; Allison, Brendan Z; Kübler, Andrea; Cichocki, Andrzej; Wang, Xingyu; Jin, Jing

    2016-01-01

    Several studies have explored brain computer interface (BCI) systems based on auditory stimuli, which could help patients with visual impairments. Usability and user satisfaction are important considerations in any BCI. Although background music can influence emotion and performance in other task environments, and many users may wish to listen to music while using a BCI, auditory and other BCIs are typically studied without background music. Some work has explored the possibility of using polyphonic music in auditory BCI systems. However, this approach requires users with good musical skills, and has not been explored in online experiments. Our hypothesis was that an auditory BCI with background music would be preferred by subjects over a similar BCI without background music, without any difference in BCI performance. We introduce a simple paradigm (which does not require musical skill) using percussion instrument sound stimuli and background music, and evaluated it in both offline and online experiments. The results showed that subjects preferred the auditory BCI with background music. Different performance measures did not reveal any significant performance effect when comparing background music vs. no background music. Since the addition of background music does not impair BCI performance but is preferred by users, auditory (and perhaps other) BCIs should consider including it. Our study also indicates that auditory BCIs can be effective even if the auditory channel is simultaneously otherwise engaged.

  19. A study on haptic collaborative game in shared virtual environment

    NASA Astrophysics Data System (ADS)

    Lu, Keke; Liu, Guanyang; Liu, Lingzhi

    2013-03-01

    A study of a collaborative game in a shared virtual environment with haptic feedback over computer networks is introduced in this paper. A collaborative task was used in which players located at remote sites played the game together. Unlike in traditional networked multiplayer games, players receive both visual and haptic feedback in the virtual environment. The experiment was designed with two conditions: visual feedback only and visual-haptic feedback. The goal of the experiment was to assess the impact of force feedback on collaborative task performance. Results indicate that haptic feedback enhances performance in a collaborative game in a shared virtual environment. The outcomes of this research can have a powerful impact on networked computer games.

  20. Proceedings of the 1993 Conference on Intelligent Computer-Aided Training and Virtual Environment Technology, Volume 1

    NASA Technical Reports Server (NTRS)

    Hyde, Patricia R.; Loftin, R. Bowen

    1993-01-01

    These proceedings are organized in the same manner as the conference's contributed sessions, with the papers grouped by topic area. These areas are as follows: VE (virtual environment) training for Space Flight, Virtual Environment Hardware, Knowledge Acquisition for ICAT (Intelligent Computer-Aided Training) & VE, Multimedia in ICAT Systems, VE in Training & Education (1 & 2), Virtual Environment Software (1 & 2), Models in ICAT systems, ICAT Commercial Applications, ICAT Architectures & Authoring Systems, ICAT Education & Medical Applications, Assessing VE for Training, VE & Human Systems (1 & 2), ICAT Theory & Natural Language, ICAT Applications in the Military, VE Applications in Engineering, Knowledge Acquisition for ICAT, and ICAT Applications in Aerospace.

  1. A collaborative molecular modeling environment using a virtual tunneling service.

    PubMed

    Lee, Jun; Kim, Jee-In; Kang, Lin-Woo

    2012-01-01

    Collaborative research on three-dimensional molecular modeling can be limited by different time zones and locations. A networked virtual environment can be utilized to overcome the problems caused by these temporal and spatial differences. However, traditional approaches did not sufficiently consider the integration of different computing environments, characterized by types of applications, roles of users, and so on. We propose a collaborative molecular modeling environment that integrates different molecular modeling systems using a virtual tunneling service. We integrated Co-Coot, a collaborative crystallographic object-oriented toolkit, with VRMMS, a virtual reality molecular modeling system, through a collaborative tunneling system. The proposed system showed reliable quantitative and qualitative results in pilot experiments.

  2. Auditory environmental context affects visual distance perception.

    PubMed

    Etchemendy, Pablo E; Abregú, Ezequiel; Calcagno, Esteban R; Eguia, Manuel C; Vechiatti, Nilda; Iasi, Federico; Vergara, Ramiro O

    2017-08-03

    In this article, we show that visual distance perception (VDP) is influenced by the auditory environmental context through reverberation-related cues. We performed two VDP experiments in two dark rooms with extremely different reverberation times: an anechoic chamber and a reverberant room. Subjects assigned to the reverberant room perceived the targets as farther away than did subjects assigned to the anechoic chamber. We also found a positive correlation between the maximum perceived distance and the auditorily perceived room size. We then performed a second experiment in which the same subjects from Experiment 1 were interchanged between rooms. We found that subjects preserved their responses from the previous experiment provided these were compatible with the present perception of the environment; if not, perceived distance was biased towards the auditorily perceived boundaries of the room. Results of both experiments show that the auditory environment can influence VDP, presumably through reverberation cues related to the perception of room size.

  3. A decrease in brain activation associated with driving when listening to someone speak.

    PubMed

    Just, Marcel Adam; Keller, Timothy A; Cynkar, Jacquelyn

    2008-04-18

    Behavioral studies have shown that engaging in a secondary task, such as talking on a cellular telephone, disrupts driving performance. This study used functional magnetic resonance imaging (fMRI) to investigate the impact of concurrent auditory language comprehension on the brain activity associated with a simulated driving task. Participants steered a vehicle along a curving virtual road, either undisturbed or while listening to spoken sentences that they judged as true or false. The dual-task condition produced a significant deterioration in driving accuracy caused by the processing of the auditory sentences. At the same time, the parietal lobe activation associated with spatial processing in the undisturbed driving task decreased by 37% when participants concurrently listened to sentences. The findings show that language comprehension performed concurrently with driving draws mental resources away from the driving and produces deterioration in driving performance, even when it does not require holding or dialing a phone.

  4. A Decrease in Brain Activation Associated with Driving When Listening to Someone Speak

    PubMed Central

    Just, Marcel Adam; Keller, Timothy A.; Cynkar, Jacquelyn

    2009-01-01

    Behavioral studies have shown that engaging in a secondary task, such as talking on a cellular telephone, disrupts driving performance. This study used functional magnetic resonance imaging (fMRI) to investigate the impact of concurrent auditory language comprehension on the brain activity associated with a simulated driving task. Participants steered a vehicle along a curving virtual road, either undisturbed or while listening to spoken sentences that they judged as true or false. The dual task condition produced a significant deterioration in driving accuracy caused by the processing of the auditory sentences. At the same time, the parietal lobe activation associated with spatial processing in the undisturbed driving task decreased by 37% when participants concurrently listened to sentences. The findings show that language comprehension performed concurrently with driving draws mental resources away from the driving and produces deterioration in driving performance, even when it does not require holding or dialing a phone. PMID:18353285

  5. Characterization of active hair-bundle motility by a mechanical-load clamp

    NASA Astrophysics Data System (ADS)

    Salvi, Joshua D.; Maoiléidigh, Dáibhid Ó.; Fabella, Brian A.; Tobin, Mélanie; Hudspeth, A. J.

    2015-12-01

    Active hair-bundle motility endows hair cells with several traits that augment auditory stimuli. The activity of a hair bundle might be controlled by adjusting its mechanical properties. Indeed, the mechanical properties of bundles vary between different organisms and along the tonotopic axis of a single auditory organ. Motivated by these biological differences and a dynamical model of hair-bundle motility, we explore how adjusting the mass, drag, stiffness, and offset force applied to a bundle controls its dynamics and response to external perturbations. Utilizing a mechanical-load clamp, we systematically mapped the two-dimensional state diagram of a hair bundle. The clamp system used a real-time processor to tightly control each of the virtual mechanical elements. Increasing the stiffness of a hair bundle advances its operating point from a spontaneously oscillating regime into a quiescent regime. As predicted by a dynamical model of hair-bundle mechanics, this boundary constitutes a Hopf bifurcation.
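
    The transition from spontaneous oscillation to quiescence at a Hopf bifurcation can be illustrated with the generic Hopf normal form (a schematic sketch, not the authors' hair-bundle model): dz/dt = (mu + i*omega)z - |z|^2 z, where increasing bundle stiffness corresponds here to lowering the control parameter mu below zero.

```python
def steady_amplitude(mu, omega=1.0, dt=0.01, steps=20000):
    """Integrate the Hopf normal form dz/dt = (mu + i*omega) z - |z|^2 z
    with forward Euler from a small initial condition; return |z| at the end."""
    z = complex(0.1, 0.0)
    for _ in range(steps):
        z += dt * ((mu + 1j * omega) * z - abs(z) ** 2 * z)
    return abs(z)

# mu > 0: stable limit cycle of amplitude ~sqrt(mu) (spontaneous oscillation)
# mu < 0: the fixed point z = 0 is stable (quiescence)
oscillating = steady_amplitude(0.5)   # settles near sqrt(0.5) ~ 0.71
quiescent = steady_amplitude(-0.5)    # decays toward 0
```

    Sweeping mu through zero reproduces the boundary between the oscillating and quiescent regimes that the clamp experiments map out in the stiffness/offset-force plane.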

  6. Effects of training and motivation on auditory P300 brain-computer interface performance.

    PubMed

    Baykara, E; Ruf, C A; Fioravanti, C; Käthner, I; Simon, N; Kleih, S C; Kübler, A; Halder, S

    2016-01-01

    Brain-computer interface (BCI) technology aims at helping end-users with severe motor paralysis to communicate with their environment without using the natural output pathways of the brain. For end-users in complete paralysis, loss of gaze control may necessitate non-visual BCI systems. The present study investigated the effect of training on performance with an auditory P300 multi-class speller paradigm. For half of the participants, spatial cues were added to the auditory stimuli to see whether performance could be further optimized. The influence of motivation, mood and workload on performance and the P300 component was also examined. In five sessions, 16 healthy participants were instructed to spell several words by attending to animal sounds representing the rows and columns of a 5 × 5 letter matrix. 81% of the participants achieved an average online accuracy of ⩾ 70%. From the first to the fifth session, information transfer rates increased from 3.72 bits/min to 5.63 bits/min. Motivation significantly influenced P300 amplitude and online ITR. No significant facilitative effect of spatial cues on performance was observed. Training improves performance in an auditory BCI paradigm. Motivation influences performance and P300 amplitude. The described auditory BCI system may help end-users to communicate independently of gaze control with their environment. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
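
    Information transfer rates of this kind are conventionally computed with Wolpaw's ITR formula. A sketch of the bits-per-selection calculation (the selections-per-minute rate below is an assumed illustration, not a figure from the study):

```python
import math

def bits_per_selection(n_classes, accuracy):
    """Wolpaw's ITR formula: bits carried by one selection among n_classes
    targets when the selection is correct with probability `accuracy`."""
    n, p = n_classes, accuracy
    if p >= 1.0:
        return math.log2(n)
    return math.log2(n) + p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))

# A 5 x 5 letter matrix gives 25 classes; 70% is the study's accuracy criterion
b = bits_per_selection(25, 0.70)     # ~2.39 bits per selection
# Converting to bits/min requires the selection rate, e.g. 2 selections/min (assumed)
itr_bits_per_min = b * 2
```

    At a fixed accuracy, the reported rise from 3.72 to 5.63 bits/min over sessions would reflect faster selections, higher accuracy, or both.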

  7. Virtual community centre for power wheelchair training: Experience of children and clinicians.

    PubMed

    Torkia, Caryne; Ryan, Stephen E; Reid, Denise; Boissy, Patrick; Lemay, Martin; Routhier, François; Contardo, Resi; Woodhouse, Janet; Archambault, Phillipe S

    2017-11-02

    The objectives were to: 1) characterize the overall experience of using the McGill immersive wheelchair - community centre (miWe-CC) simulator; and 2) investigate the experience of presence (i.e., the sense of being in the virtual rather than in the real, physical environment) while driving a power wheelchair (PW) in the miWe-CC. A qualitative research design with structured interviews was used. Fifteen clinicians and 11 children were interviewed after driving a PW in the miWe-CC simulator. Data were analyzed using the conventional and directed content analysis approaches. Overall, participants enjoyed using the simulator and experienced a sense of presence in the virtual space. They felt a sense of being in the virtual environment, involved and focused on driving the virtual PW rather than on the surroundings of the actual room where they were. Participants reported several similarities between the virtual community centre layout and activities of the miWe-CC and the day-to-day reality of paediatric PW users. The simulator replicated participants' expectations of real-life PW use and promises to have an effect on improving the driving skills of new PW users. Implications for rehabilitation: Among young users, the McGill immersive wheelchair (miWe) simulator provides an experience of presence within the virtual environment. This experience of presence is generated by a sense of being in the virtual scene, a sense of being involved, engaged, and focused on interacting within the virtual environment, and by the perception that the virtual environment is consistent with the real world. The miWe is a relevant and accessible approach, complementary to real-world power wheelchair training for young users.

  8. Pain modulation during drives through cold and hot virtual environments.

    PubMed

    Mühlberger, Andreas; Wieser, Matthias J; Kenntner-Mabiala, Ramona; Pauli, Paul; Wiederhold, Brenda K

    2007-08-01

    Evidence exists that virtual worlds reduce pain perception by providing distraction. However, there is no experimental study to show that the type of world used in virtual reality (VR) distraction influences pain perception. Therefore, we investigated whether pain triggered by heat or cold stimuli is modulated by "warm" or "cold" virtual environments and whether virtual worlds reduce pain perception more than does static picture presentation. We expected that cold worlds would reduce pain perception from heat stimuli, while warm environments would reduce pain perception from cold stimuli. Additionally, both virtual worlds should reduce pain perception in general. Heat and cold pain thresholds were assessed outside VR in 48 volunteers in a balanced crossover design. Participants completed three 4-minute assessment periods: virtual "walks" through (1) a winter and (2) an autumn landscape and static exposure to (3) a neutral landscape. During each period, five heat stimuli or three cold stimuli were delivered via a thermode on the participant's arm, and affective and sensory pain perceptions were rated. Then the thermode was changed to the other arm, and the procedure was repeated with the opposite pain stimuli (heat or cold). We found that both warm and cold virtual environments reduced pain intensity and unpleasantness for heat and cold pain stimuli when compared to the control condition. Since participants wore a head-mounted display (HMD) in both the control condition and VR, we concluded that the distracting value of virtual environments is not explained solely by excluding perception of the real world. Although VR reduced pain unpleasantness, we found no difference in efficacy between the types of virtual world used for each pain stimulus.

  9. Virtual Environments: Issues and Opportunities for Researching Inclusive Educational Practices

    NASA Astrophysics Data System (ADS)

    Sheehy, Kieron

    This chapter argues that virtual environments offer new research areas for those concerned with inclusive education. Further, it proposes that they also present opportunities for developing increasingly inclusive research processes. This chapter considers how researchers might approach researching some of these affordances. It discusses the relationship between specific features of inclusive pedagogy, derived from an international systematic literature review, and the affordances of different forms of virtual characters and environments. Examples are drawn from research in Second Life™ (SL), virtual tutors, and augmented reality. In doing this, the chapter challenges a simplistic notion of isolated physical and virtual worlds and, in the context of inclusion, of a division between the practice of research and the research topic itself. There are a growing number of virtual worlds in which identified educational activities are taking place, or whose activities are being noted for their educational merit. These encompass non-themed worlds such as SL and Active Worlds, game-based worlds such as World of Warcraft and Runescape, and even Club Penguin, a themed virtual world where younger players interact through a variety of penguin-themed environments and activities. It has been argued that these spaces, outside traditional education, can offer pedagogical insights (Twining 2009), and these global virtual communities have been identified as creative educational environments (Delwiche 2006; Sheehy 2009). This chapter will explore how researchers might use these spaces to investigate and create inclusive educational experiences for learners. In order to do this, the chapter considers three interrelated issues: What is inclusive education? How might inclusive education influence virtual world research? And what might inclusive education look like in virtual worlds?

  10. Virtual Learning Environments.

    ERIC Educational Resources Information Center

    Follows, Scott B.

    1999-01-01

    Illustrates the possibilities and educational benefits of virtual learning environments (VLEs), based on experiences with "Thirst for Knowledge," a VLE that simulates the workplace of a major company. While working in this virtual office world, students walk through the building, attend meetings, read reports, receive e-mail, answer the telephone,…

  11. Handbook of Research on Collaborative Teaching Practice in Virtual Learning Environments

    ERIC Educational Resources Information Center

    Panconesi, Gianni, Ed.; Guida, Maria, Ed.

    2017-01-01

    Modern technology has enhanced many aspects of life, including classroom education. By offering virtual learning experiences, educational systems can become more efficient and effective at teaching the student population. The "Handbook of Research on Collaborative Teaching Practice in Virtual Learning Environments" highlights program…

  12. Information Seeking in a Virtual Learning Environment.

    ERIC Educational Resources Information Center

    Byron, Suzanne M.; Young, Jon I.

    2000-01-01

    Examines the applicability of Kuhlthau's Information Search Process Model in the context of a virtual learning environment at the University of North Texas that used virtual collaborative software. Highlights include cognitive and affective aspects of information seeking; computer experience and confidence; and implications for future research.…

  13. The capture and recreation of 3D auditory scenes

    NASA Astrophysics Data System (ADS)

    Li, Zhiyun

The main goal of this research is to develop the theory and implement practical tools (in both software and hardware) for the capture and recreation of 3D auditory scenes. Our research is expected to have applications in virtual reality, telepresence, film, music, video games, auditory user interfaces, and sound-based surveillance. The first part of our research is concerned with sound capture via a spherical microphone array. The advantage of this array is that it can be steered digitally in any 3D direction while maintaining the same beampattern. We develop design methodologies to achieve flexible microphone layouts, optimal beampattern approximation, and robustness constraints. We also design novel hemispherical and circular microphone array layouts for more spatially constrained auditory scenes. Using the captured audio, we then propose a unified and simple approach for recreating the scenes by exploiting the reciprocity principle that holds between the two processes. Our approach makes the system easy to build and practical. Using this approach, we can capture the 3D sound field with a spherical microphone array and recreate it with a spherical loudspeaker array, ensuring that the recreated sound field matches the recorded field up to a high order of spherical harmonics. For some regular or semi-regular microphone layouts, we design an efficient parallel implementation of the multi-directional spherical beamformer by using the rotational symmetries of the beampattern and of the spherical microphone array. This can be implemented in either software or hardware and easily adapted for other regular or semi-regular layouts of microphones. In addition, we extend this approach to headphone-based systems. Design examples and simulation results are presented to verify our algorithms. Prototypes are built and tested in real-world auditory scenes.
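The "match up to a high order of spherical harmonics" step described in this abstract can be illustrated with a minimal sketch: sample the pressure field at a set of capsule directions, build a spherical-harmonic basis matrix up to a truncation order N, and recover the (N+1)² field coefficients by least squares. The array geometry, order, and coefficient values below are illustrative assumptions, not the dissertation's actual design.

```python
import numpy as np
from scipy.special import sph_harm

def sh_matrix(order, theta, phi):
    """Spherical-harmonic basis matrix: one row per capsule direction,
    one column per (n, m) term up to the given truncation order."""
    cols = [sph_harm(m, n, theta, phi)
            for n in range(order + 1)
            for m in range(-n, n + 1)]
    return np.stack(cols, axis=1)

rng = np.random.default_rng(0)
order = 2                       # truncation order N; (N+1)^2 = 9 coefficients
n_mics = 16                     # hypothetical capsule count (>= 9 needed)
theta = rng.uniform(0, 2 * np.pi, n_mics)    # azimuth of each capsule
phi = np.arccos(rng.uniform(-1, 1, n_mics))  # polar angle, uniform on sphere

Y = sh_matrix(order, theta, phi)

# Synthesize a band-limited pressure field from known coefficients, then
# recover those coefficients from the "microphone" samples by least squares.
c_true = rng.normal(size=(order + 1) ** 2) + 1j * rng.normal(size=(order + 1) ** 2)
p = Y @ c_true                  # pressure sampled at the capsules
c_est, *_ = np.linalg.lstsq(Y, p, rcond=None)
err = np.max(np.abs(c_est - c_true))
```

With the coefficients in hand, driving a loudspeaker array amounts to evaluating the same basis at the loudspeaker directions, which is where the reciprocity between capture and recreation shows up.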

  14. Apparatus and method for modifying the operation of a robotic vehicle in a real environment, to emulate the operation of the robotic vehicle operating in a mixed reality environment

    DOEpatents

    Garretson, Justin R [Albuquerque, NM; Parker, Eric P [Albuquerque, NM; Gladwell, T Scott [Albuquerque, NM; Rigdon, J Brian [Edgewood, NM; Oppel, III, Fred J.

    2012-05-29

Apparatus and methods for modifying the operation of a robotic vehicle in a real environment to emulate the operation of the robotic vehicle in a mixed reality environment include a vehicle sensing system having a communications module attached to the robotic vehicle for communicating operating parameters related to the robotic vehicle in a real environment to a simulation controller for simulating the operation of the robotic vehicle in a mixed (live, virtual and constructive) environment wherein the effects of virtual and constructive entities on the operation of the robotic vehicle (and vice versa) are simulated. These effects are communicated to the vehicle sensing system, which generates a modified control command for the robotic vehicle including the effects of virtual and constructive entities, causing the robot in the real environment to behave as if virtual and constructive entities existed in the real environment.

  15. Simulation fidelity of a virtual environment display

    NASA Technical Reports Server (NTRS)

    Nemire, Kenneth; Jacoby, Richard H.; Ellis, Stephen R.

    1994-01-01

We assessed the degree to which a virtual environment system produced a faithful simulation of three-dimensional space by investigating the influence of a pitched optic array on the perception of gravity-referenced eye level (GREL). We compared the results with those obtained in a physical environment. In a within-subjects factorial design, 12 subjects indicated GREL while viewing virtual three-dimensional arrays at different static orientations. A physical array biased GREL more than did a geometrically identical virtual pitched array. However, addition of two sets of orthogonal parallel lines (a grid) to the virtual pitched array resulted in as large a bias as that obtained with the physical pitched array. The increased bias was caused by the longitudinal, but not the transverse, components of the grid. We discuss implications of our results for spatial orientation models and for designs of virtual displays.

  16. A Stochastic Model of Plausibility in Live Virtual Constructive Environments

    DTIC Science & Technology

    2017-09-14

    objective in virtual environment research and design is the maintenance of adequate consistency levels in the face of limited system resources such as...provides some commentary with regard to system design considerations and future research directions. II. SYSTEM MODEL DVEs are often designed as a...exceed the system’s requirements. Research into predictive models of virtual environment consistency is needed to provide designers the tools to

  17. Inspiring Equal Contribution and Opportunity in a 3D Multi-User Virtual Environment: Bringing Together Men Gamers and Women Non-Gamers in Second Life[R]

    ERIC Educational Resources Information Center

    deNoyelles, Aimee; Seo, Kay Kyeong-Ju

    2012-01-01

    A 3D multi-user virtual environment holds promise to support and enhance student online learning communities due to its ability to promote global synchronous interaction and collaboration, rich multisensory experience and expression, and elaborate design capabilities. Second Life[R], a multi-user virtual environment intended for adult users 18 and…

  18. Topographic Distribution of Stimulus-Specific Adaptation across Auditory Cortical Fields in the Anesthetized Rat

    PubMed Central

    Nieto-Diego, Javier; Malmierca, Manuel S.

    2016-01-01

Stimulus-specific adaptation (SSA) in single neurons of the auditory cortex was suggested to be a potential neural correlate of the mismatch negativity (MMN), a widely studied component of the auditory event-related potentials (ERP) that is elicited by changes in the auditory environment. However, several aspects of this SSA/MMN relation remain unresolved. SSA occurs in the primary auditory cortex (A1), but detailed studies on SSA beyond A1 are lacking. To study the topographic organization of SSA, we mapped the whole rat auditory cortex with multiunit activity recordings, using an oddball paradigm. We demonstrate that SSA occurs outside A1 and differs between primary and nonprimary cortical fields. In particular, SSA is much stronger and develops faster in the nonprimary than in the primary fields, paralleling the organization of subcortical SSA. Importantly, strong SSA is present in the nonprimary auditory cortex within the latency range of the MMN in the rat and correlates with an MMN-like difference wave in the simultaneously recorded local field potentials (LFP). We present new and strong evidence linking SSA at the cellular level to the MMN, a central tool in cognitive and clinical neuroscience. PMID:26950883

  19. Comparing perceived auditory width to the visual image of a performing ensemble in contrasting bi-modal environments

    PubMed Central

    Valente, Daniel L.; Braasch, Jonas; Myrbeck, Shane A.

    2012-01-01

Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audiovisual environment in which participants were instructed to make auditory width judgments in dynamic bi-modal settings. The results of these psychophysical tests suggest the importance of congruent audio-visual presentation to the ecological interpretation of an auditory scene. Supporting data were accumulated in five rooms of ascending volumes and varying reverberation times. Participants were given an audiovisual matching test in which they were instructed to pan the auditory width of a performing ensemble to a varying set of audio and visual cues in rooms. Results show that both auditory and visual factors affect the collected responses and that the two sensory modalities coincide in distinct interactions. The greatest differences between the panned audio stimuli given a fixed visual width were found in the physical space with the largest volume and the greatest source distance. These results suggest, in this specific instance, a predominance of auditory cues in the spatial analysis of the bi-modal scene. PMID:22280585

  20. Touch activates human auditory cortex.

    PubMed

    Schürmann, Martin; Caetano, Gina; Hlushchuk, Yevhen; Jousmäki, Veikko; Hari, Riitta

    2006-05-01

    Vibrotactile stimuli can facilitate hearing, both in hearing-impaired and in normally hearing people. Accordingly, the sounds of hands exploring a surface contribute to the explorer's haptic percepts. As a possible brain basis of such phenomena, functional brain imaging has identified activations specific to audiotactile interaction in secondary somatosensory cortex, auditory belt area, and posterior parietal cortex, depending on the quality and relative salience of the stimuli. We studied 13 subjects with non-invasive functional magnetic resonance imaging (fMRI) to search for auditory brain areas that would be activated by touch. Vibration bursts of 200 Hz were delivered to the subjects' fingers and palm and tactile pressure pulses to their fingertips. Noise bursts served to identify auditory cortex. Vibrotactile-auditory co-activation, addressed with minimal smoothing to obtain a conservative estimate, was found in an 85-mm3 region in the posterior auditory belt area. This co-activation could be related to facilitated hearing at the behavioral level, reflecting the analysis of sound-like temporal patterns in vibration. However, even tactile pulses (without any vibration) activated parts of the posterior auditory belt area, which therefore might subserve processing of audiotactile events that arise during dynamic contact between hands and environment.

  1. Sense of presence and anxiety during virtual social interactions between a human and virtual humans.

    PubMed

    Morina, Nexhmedin; Brinkman, Willem-Paul; Hartanto, Dwi; Emmelkamp, Paul M G

    2014-01-01

Virtual reality exposure therapy (VRET) has been shown to be effective in the treatment of anxiety disorders. Yet, there is a lack of research on the extent to which interaction between the individual and virtual humans can be successfully implemented to increase levels of anxiety for therapeutic purposes. This proof-of-concept pilot study aimed at examining levels of the sense of presence and anxiety during exposure to virtual environments involving social interaction with virtual humans and using different virtual reality displays. A non-clinical sample of 38 participants was randomly assigned to either a head-mounted display (HMD) with motion tracker and stereoscopic view condition or a one-screen projection-based virtual reality display condition. Participants in both conditions engaged in free speech dialogues with virtual humans controlled by research assistants. It was hypothesized that exposure to virtual social interactions would elicit moderate levels of sense of presence and anxiety in both groups. Further, it was expected that participants in the HMD condition would report higher scores of sense of presence and anxiety than participants in the one-screen projection-based display condition. Results revealed that in both conditions virtual social interactions were associated with moderate levels of sense of presence and anxiety. Additionally, participants in the HMD condition reported significantly higher levels of presence than those in the one-screen projection-based display condition (p = .001). However, contrary to the expectations, neither the average level of anxiety nor the highest level of anxiety during exposure to social virtual environments differed between the groups (p = .97 and p = .75, respectively). The findings suggest that virtual social interactions can be successfully applied in VRET to enhance the sense of presence and anxiety. Furthermore, our results indicate that one-screen projection-based displays can successfully activate levels of anxiety in social virtual environments. The outcome can prove helpful in using low-cost projection-based virtual reality environments for treating individuals with social phobia.

  2. The virtual windtunnel: Visualizing modern CFD datasets with a virtual environment

    NASA Technical Reports Server (NTRS)

    Bryson, Steve

    1993-01-01

    This paper describes work in progress on a virtual environment designed for the visualization of pre-computed fluid flows. The overall problems involved in the visualization of fluid flow are summarized, including computational, data management, and interface issues. Requirements for a flow visualization are summarized. Many aspects of the implementation of the virtual windtunnel were uniquely determined by these requirements. The user interface is described in detail.

  3. Cortical Representations of Speech in a Multitalker Auditory Scene.

    PubMed

    Puvvada, Krishna C; Simon, Jonathan Z

    2017-09-20

    The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. 
We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory scene, with both attended and unattended speech streams represented with almost equal fidelity. We also show that higher-order auditory cortical areas, by contrast, represent an attended speech stream separately from, and with significantly higher fidelity than, unattended speech streams. Furthermore, the unattended background streams are represented as a single undivided background object rather than as distinct background objects.

  4. Current limitations into the application of virtual reality to mental health research.

    PubMed

    Huang, M P; Alessi, N E

    1998-01-01

    Virtual Reality (VR) environments have significant potential as a tool in mental health research, but are limited by technical factors and by mental health research factors. Technical difficulties include cost and complexity of virtual environment creation. Mental health research difficulties include current inadequacy of standards to specify needed details for virtual environment design. Technical difficulties are disappearing with technological advances, but the mental health research difficulties will take a concerted effort to overcome. Some of this effort will need to be directed at the formation of collaborative projects and standards for how such collaborations should proceed.

  5. Virtual Learning: Possibilities and Realization

    ERIC Educational Resources Information Center

    Kerimbayev, Nurassyl

    2016-01-01

This article considers two key questions: how the use of a virtual environment in the training process within one faculty of the university affects training quality, and what outcomes can be achieved with it. The significance of the work consists in studying the effect of the virtual environment instead of traditional educational…

  6. Large-Scale Networked Virtual Environments: Architecture and Applications

    ERIC Educational Resources Information Center

    Lamotte, Wim; Quax, Peter; Flerackers, Eddy

    2008-01-01

    Purpose: Scalability is an important research topic in the context of networked virtual environments (NVEs). This paper aims to describe the ALVIC (Architecture for Large-scale Virtual Interactive Communities) approach to NVE scalability. Design/methodology/approach: The setup and results from two case studies are shown: a 3-D learning environment…

  7. Designing a Virtual-Reality-Based, Gamelike Math Learning Environment

    ERIC Educational Resources Information Center

    Xu, Xinhao; Ke, Fengfeng

    2016-01-01

    This exploratory study examined the design issues related to a virtual-reality-based, gamelike learning environment (VRGLE) developed via OpenSimulator, an open-source virtual reality server. The researchers collected qualitative data to examine the VRGLE's usability, playability, and content integration for math learning. They found it important…

  8. Virtual Reality and Special Needs

    ERIC Educational Resources Information Center

    Jeffs, Tara L.

    2009-01-01

    The use of virtual environments for special needs is as diverse as the field of Special Education itself and the individuals it serves. Individuals with special needs often face challenges with attention, language, spatial abilities, memory, higher reasoning and knowledge acquisition. Research in the use of Virtual Learning Environments (VLE)…

  9. Pedagogical Intercultural Practice of Teachers in Virtual Environments

    ERIC Educational Resources Information Center

    Barreto, Carmen Ricardo; Haydar, Jorge Mizzuno

    2016-01-01

    This study presents some of the results of the project "Training and Development of Intercultural Competency of Teachers in Virtual Environments", carried out in ten Colombian Caribbean higher education institutions (HEI) offering virtual programs. It was performed in three steps: 1-diagnosis, 2-training, and 3-analysis of the…

  10. Meal-Maker: A Virtual Meal Preparation Environment for Children with Cerebral Palsy

    ERIC Educational Resources Information Center

    Kirshner, Sharon; Weiss, Patrice L.; Tirosh, Emanuel

    2011-01-01

    Virtual reality (VR) technology enables evaluation and practice of specific skills in a motivating, user-friendly and safe way. The implementation of virtual game environments within clinical settings has increased substantially in recent years. However, the psychometric properties and feasibility of many applications have not been fully…

  11. Two-photon calcium imaging in mice navigating a virtual reality environment.

    PubMed

    Leinweber, Marcus; Zmarz, Pawel; Buchmann, Peter; Argast, Paul; Hübener, Mark; Bonhoeffer, Tobias; Keller, Georg B

    2014-02-20

    In recent years, two-photon imaging has become an invaluable tool in neuroscience, as it allows for chronic measurement of the activity of genetically identified cells during behavior(1-6). Here we describe methods to perform two-photon imaging in mouse cortex while the animal navigates a virtual reality environment. We focus on the aspects of the experimental procedures that are key to imaging in a behaving animal in a brightly lit virtual environment. The key problems that arise in this experimental setup that we here address are: minimizing brain motion related artifacts, minimizing light leak from the virtual reality projection system, and minimizing laser induced tissue damage. We also provide sample software to control the virtual reality environment and to do pupil tracking. With these procedures and resources it should be possible to convert a conventional two-photon microscope for use in behaving mice.

  12. Sounds of silence: How to animate virtual worlds with sound

    NASA Technical Reports Server (NTRS)

    Astheimer, Peter

    1993-01-01

    Sounds are an integral and sometimes annoying part of our daily life. Virtual worlds which imitate natural environments gain a lot of authenticity from fast, high quality visualization combined with sound effects. Sounds help to increase the degree of immersion for human dwellers in imaginary worlds significantly. The virtual reality toolkit of IGD (Institute for Computer Graphics) features a broad range of standard visual and advanced real-time audio components which interpret an object-oriented definition of the scene. The virtual reality system 'Virtual Design' realized with the toolkit enables the designer of virtual worlds to create a true audiovisual environment. Several examples on video demonstrate the usage of the audio features in Virtual Design.

  13. A MOO-Based Virtual Training Environment.

    ERIC Educational Resources Information Center

    Mateas, Michael; Lewis, Scott

    1996-01-01

    Describes the implementation of a virtual environment to support the training of engineers in Panels of Experts (POE), a vehicle for gathering customer data. Describes the environment, discusses some issues of communication and interaction raised by the technology, and relays the experiences of new users within this environment. (RS)

  14. Virtual Environments in Biology Teaching

    ERIC Educational Resources Information Center

    Mikropoulos, Tassos A.; Katsikis, Apostolos; Nikolou, Eugenia; Tsakalis, Panayiotis

    2003-01-01

    This article reports on the design, development and evaluation of an educational virtual environment for biology teaching. In particular it proposes a highly interactive three-dimensional synthetic environment involving certain learning tasks for the support of teaching plant cell biology and the process of photosynthesis. The environment has been…

  15. LVC interaction within a mixed-reality training system

    NASA Astrophysics Data System (ADS)

    Pollock, Brice; Winer, Eliot; Gilbert, Stephen; de la Cruz, Julio

    2012-03-01

The United States military is increasingly pursuing advanced live, virtual, and constructive (LVC) training systems for reduced cost, greater training flexibility, and decreased training times. Combining the advantages of realistic training environments and virtual worlds, mixed reality LVC training systems can enable live and virtual trainees to interact as if co-located. However, LVC interaction in these systems often requires constructing immersive environments, developing hardware for live-virtual interaction, tracking in occluded environments, and an architecture that supports real-time transfer of entity information across many systems. This paper discusses a system that overcomes these challenges to empower LVC interaction in a reconfigurable, mixed reality environment. This system was developed and tested in an immersive, reconfigurable, and mixed reality LVC training system for the dismounted warfighter at ISU, known as the Veldt, to overcome LVC interaction challenges and as a test bed for cutting-edge technology to meet future U.S. Army battlefield requirements. Trainees interact physically in the Veldt and virtually through commercial and developed game engines. Evaluation involving military trained personnel found this system to be effective, immersive, and useful for developing the critical decision-making skills necessary for the battlefield. Procedural terrain modeling, model-matching database techniques, and a central communication server process all live and virtual entity data from system components to create a cohesive virtual world across all distributed simulators and game engines in real time. This system achieves rare LVC interaction within multiple physical and virtual immersive environments for training in real time across many distributed systems.

  16. Constraint, Intelligence, and Control Hierarchy in Virtual Environments. Chapter 1

    NASA Technical Reports Server (NTRS)

    Sheridan, Thomas B.

    2007-01-01

This paper seeks to deal directly with the question of what makes virtual actors and objects that are experienced in virtual environments seem real. (The term virtual reality, while more common in public usage, is an oxymoron; therefore virtual environment is the preferred term in this paper.) Reality is a difficult topic, treated for centuries in those sub-fields of philosophy called ontology ("of or relating to being or existence") and epistemology ("the study of the method and grounds of knowledge, especially with reference to its limits and validity") (both from Webster's, 1965). Advances in recent decades in the technologies of computers, sensors and graphics software have permitted human users to feel present or experience immersion in computer-generated virtual environments. This has motivated a keen interest in probing this phenomenon of presence and immersion not only philosophically but also psychologically and physiologically, in terms of the parameters of the senses and sensory stimulation that correlate with the experience (Ellis, 1991). The pages of the journal Presence: Teleoperators and Virtual Environments have seen much discussion of what makes virtual environments seem real (see, e.g., Slater, 1999; Slater et al. 1994; Sheridan, 1992, 2000). Stephen Ellis, when organizing the meeting that motivated this paper, suggested to invited authors that "We may adopt as an organizing principle for the meeting that the genesis of apparently intelligent interaction arises from an upwelling of constraints determined by a hierarchy of lower levels of behavioral interaction." My first reaction was "huh?" and my second was "yeah, that seems to make sense." Accordingly, the paper seeks to explain, from the author's viewpoint, why Ellis's hypothesis makes sense. What is the connection of "presence" or "immersion" of an observer in a virtual environment to "constraints", and what types of constraints? What of "intelligent interaction", and is it the intelligence of the observer or the intelligence of the environment (whatever the latter may mean) that is salient? And finally, what might be relevant about "upwelling" of constraints as determined by a hierarchy of levels of interaction?

  17. Interdependent encoding of pitch, timbre and spatial location in auditory cortex

    PubMed Central

    Bizley, Jennifer K.; Walker, Kerry M. M.; Silverman, Bernard W.; King, Andrew J.; Schnupp, Jan W. H.

    2009-01-01

    Because we can perceive the pitch, timbre and spatial location of a sound source independently, it seems natural to suppose that cortical processing of sounds might separate out spatial from non-spatial attributes. Indeed, recent studies support the existence of anatomically segregated ‘what’ and ‘where’ cortical processing streams. However, few attempts have been made to measure the responses of individual neurons in different cortical fields to sounds that vary simultaneously across spatial and non-spatial dimensions. We recorded responses to artificial vowels presented in virtual acoustic space to investigate the representations of pitch, timbre and sound source azimuth in both core and belt areas of ferret auditory cortex. A variance decomposition technique was used to quantify the way in which altering each parameter changed neural responses. Most units were sensitive to two or more of these stimulus attributes. Whilst indicating that neural encoding of pitch, location and timbre cues is distributed across auditory cortex, significant differences in average neuronal sensitivity were observed across cortical areas and depths, which could form the basis for the segregation of spatial and non-spatial cues at higher cortical levels. Some units exhibited significant non-linear interactions between particular combinations of pitch, timbre and azimuth. These interactions were most pronounced for pitch and timbre and were less commonly observed between spatial and non-spatial attributes. Such non-linearities were most prevalent in primary auditory cortex, although they tended to be small compared with stimulus main effects. PMID:19228960
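The variance decomposition the authors describe can be sketched as a classical ANOVA-style partition of sums of squares over a fully crossed pitch × timbre × azimuth stimulus grid. The factor sizes and the simulated "unit" below are illustrative assumptions, not the study's actual data or analysis code; the fraction left over after the three main effects is lumped together here as interactions.

```python
import numpy as np

def variance_decomposition(R):
    """Fraction of a unit's response variance attributable to each
    stimulus factor, given a 3-D array R with one mean response per
    (pitch, timbre, azimuth) combination."""
    I, J, K = R.shape
    grand = R.mean()
    ss_total = ((R - grand) ** 2).sum()
    ss = {
        'pitch':   J * K * ((R.mean(axis=(1, 2)) - grand) ** 2).sum(),
        'timbre':  I * K * ((R.mean(axis=(0, 2)) - grand) ** 2).sum(),
        'azimuth': I * J * ((R.mean(axis=(0, 1)) - grand) ** 2).sum(),
    }
    fracs = {k: v / ss_total for k, v in ss.items()}
    fracs['interactions'] = 1.0 - sum(fracs.values())
    return fracs

# A simulated unit that sums independent pitch and timbre effects and
# ignores azimuth: the decomposition should assign azimuth zero variance.
rng = np.random.default_rng(1)
pitch_fx, timbre_fx = rng.normal(size=5), rng.normal(size=4)
R = pitch_fx[:, None, None] + timbre_fx[None, :, None] + np.zeros((5, 4, 6))
fracs = variance_decomposition(R)
```

For a purely additive unit like this one, the pitch and timbre fractions account for all the variance, which is what makes departures from additivity (the non-linear pitch/timbre interactions the abstract reports) measurable.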

  18. Outcome survey of auditory-verbal graduates: study of clinical efficacy.

    PubMed

    Goldberg, D M; Flexer, C

    1993-05-01

    Audiologists must be knowledgeable about the efficacy of aural habilitation practices because we are often the first professionals to inform parents about their child's hearing impairment. The purpose of this investigation was to document the status of graduates of one aural habilitation option; auditory-verbal. A consumer survey was completed by graduates from auditory-verbal programs in the United States and Canada. Graduates were queried regarding degree and etiology of hearing loss, age of onset, amplification, and educational and employment history, among other topics. Results indicated that the majority of the respondents were integrated into regular learning and living environments.

  19. Auditory Evidence Grids

    DTIC Science & Technology

    2006-01-01

    information of the robot (Figure 1) acquired via laser-based localization techniques. The results are maps of the global soundscape . The algorithmic...environments than noise maps. Furthermore, provided the acoustic localization algorithm can detect the sources, the soundscape can be mapped with many...gathering information about the auditory soundscape in which it is working. In addition to robustness in the presence of noise, it has also been

  20. Open Touch/Sound Maps: A system to convey street data through haptic and auditory feedback

    NASA Astrophysics Data System (ADS)

    Kaklanis, Nikolaos; Votis, Konstantinos; Tzovaras, Dimitrios

    2013-08-01

    The use of spatial (geographic) information is becoming ever more central and pervasive in today's internet society, but most of it is currently inaccessible to visually impaired users. Access to visual maps is severely restricted for people with low vision or blindness because of their inability to interpret graphical information. Alternative ways of presenting a map therefore have to be explored in order to make maps accessible. Other types of sensory perception, such as touch and hearing, may work as a substitute for vision in the exploration of maps, and the use of multimodal virtual environments seems a promising alternative for people with visual impairments. The present paper introduces a tool for automatic multimodal map generation with haptic and audio feedback using OpenStreetMap data. For a desired map area, an elevation map is automatically generated and can be explored by touch using a haptic device. A sonification and a text-to-speech (TTS) mechanism also provide audio navigation information during the haptic exploration of the map.
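As a toy illustration of the sonification idea, a row of elevation samples can be mapped to a sequence of tones whose pitch rises with height. The frequency range, tone duration, and linear mapping below are assumptions for the sketch; the paper does not specify its actual scheme:

```python
import numpy as np

def sonify_elevation(elev, fs=8000, dur=0.15, f_lo=200.0, f_hi=1000.0):
    """Map elevation samples to a sequence of sine tones whose pitch
    rises with height. Returns one concatenated waveform at rate fs."""
    elev = np.asarray(elev, dtype=float)
    span = max(np.ptp(elev), 1e-9)             # guard against flat terrain
    freqs = f_lo + (elev - elev.min()) / span * (f_hi - f_lo)
    t = np.arange(int(fs * dur)) / fs
    return np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])

# Four elevation samples along one scan line of the haptic map
wave = sonify_elevation([0.0, 5.0, 12.0, 3.0])
```

The resulting waveform could be played back as the haptic cursor sweeps across the map, with the TTS layer handling street names separately.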

  1. Responses of auditory-cortex neurons to structural features of natural sounds.

    PubMed

    Nelken, I; Rotman, Y; Bar Yosef, O

    1999-01-14

    Sound-processing strategies that use the highly non-random structure of natural sounds may confer evolutionary advantage to many species. Auditory processing of natural sounds has been studied almost exclusively in the context of species-specific vocalizations, although these form only a small part of the acoustic biotope. To study the relationships between properties of natural soundscapes and neuronal processing mechanisms in the auditory system, we analysed sound from a range of different environments. Here we show that for many non-animal sounds and background mixtures of animal sounds, energy in different frequency bands is coherently modulated. Co-modulation of different frequency bands in background noise facilitates the detection of tones in noise by humans, a phenomenon known as co-modulation masking release (CMR). We show that co-modulation also improves the ability of auditory-cortex neurons to detect tones in noise, and we propose that this property of auditory neurons may underlie behavioural CMR. This correspondence may represent an adaptation of the auditory system for the use of an attribute of natural sounds to facilitate real-world processing tasks.
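The co-modulation property described here can be demonstrated by synthesizing two frequency bands that share one slow amplitude envelope and checking that their extracted envelopes are highly correlated, unlike an independently modulated control band. This is a synthetic sketch; the carrier frequencies, envelope bandwidth, and envelope-extraction method are arbitrary choices, not the authors' analysis pipeline:

```python
import numpy as np

fs = 16000
t = np.arange(fs) / fs            # 1 s of samples
rng = np.random.default_rng(1)

def slow_envelope(n, cutoff_bins=10):
    """Random positive envelope with only slow (<10 Hz) fluctuations,
    made by zeroing the high-frequency FFT bins of white noise."""
    spec = np.fft.rfft(rng.standard_normal(n))
    spec[cutoff_bins:] = 0
    env = np.fft.irfft(spec, n)
    return env - env.min() + 0.1  # keep strictly positive

def envelope(x, win=int(0.02 * fs)):
    """Crude amplitude envelope: rectify, then 20 ms moving average."""
    return np.convolve(np.abs(x), np.ones(win) / win, mode="same")

env_shared = slow_envelope(len(t))
env_other = slow_envelope(len(t))

# Co-modulated: carriers in different bands share the SAME envelope.
band1 = env_shared * np.sin(2 * np.pi * 500 * t)
band2 = env_shared * np.sin(2 * np.pi * 2000 * t)
# Independently modulated control band.
band3 = env_other * np.sin(2 * np.pi * 2000 * t)

r_co = np.corrcoef(envelope(band1), envelope(band2))[0, 1]
r_indep = np.corrcoef(envelope(band1), envelope(band3))[0, 1]
```

`r_co` comes out near 1 while `r_indep` does not, which is the across-band envelope coherence the study reports for many natural background sounds.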

  2. Use of virtual reality technique for the training of motor control in the elderly. Some theoretical considerations.

    PubMed

    de Bruin, E D; Schoene, D; Pichierri, G; Smith, S T

    2010-08-01

    Virtual augmented exercise, an emerging technology that can help to promote physical activity and combines the strengths of indoor and outdoor exercise, has recently been proposed as having the potential to increase exercise behavior in older adults. By creating a strong presence in a virtual, interactive environment, distraction can be taken to greater levels while maintaining the benefits of indoor exercise, which may result in a shift from negative to positive thoughts about exercise. Recent findings in young participants show that virtual reality training enhances mood, thus increasing enjoyment and energy. For older adults, virtual, interactive environments can influence postural control and fall events by stimulating the sensory cues that are responsible for maintaining balance and orientation. However, the potential of virtual reality training has yet to be fully explored for older adults. This manuscript describes the potential of dance pad training protocols in the elderly and reports on the theoretical rationale of combining physical game-like exercises with sensory and cognitive challenges in a virtual environment.

  3. Command & Control in Virtual Environments: Designing a Virtual Environment for Experimentation

    DTIC Science & Technology

    2010-06-01

    proceed with the research: Second Life/ OpenSim A popular leader in the desktop virtual worlds revolution, for many Second Life has become...prototype environments and adapt them quickly within the world. OpenSim is an open-source community built around the Second Life platform...functionality natively present in Second Life and the OpenSim platform. With the recent release of Second Life Viewer 2.0, which contains a complete

  4. Proceedings of the 1993 Conference on Intelligent Computer-Aided Training and Virtual Environment Technology

    NASA Technical Reports Server (NTRS)

    Hyde, Patricia R.; Loftin, R. Bowen

    1993-01-01

    The volume 2 proceedings from the 1993 Conference on Intelligent Computer-Aided Training and Virtual Environment Technology are presented. Topics discussed include intelligent computer assisted training (ICAT) systems architectures, ICAT educational and medical applications, virtual environment (VE) training and assessment, human factors engineering and VE, ICAT theory and natural language processing, ICAT military applications, VE engineering applications, ICAT knowledge acquisition processes and applications, and ICAT aerospace applications.

  5. The Role of Virtual Learning Environment in Improving Information and Communication Technology Adoption in Teaching Exploring How Virtual Learning Environments Improve University Teacher's Attitudes about the Use of Information and Communication Technology

    ERIC Educational Resources Information Center

    Ageel, Mohammed

    2012-01-01

    The adoption of ICT-enabled teaching in contemporary schools has largely lagged behind despite its obvious and many benefits, mainly because teachers still hold ignorant, misinformed and highly negative attitudes towards ICT-enabled teaching. This article aimed at investigating the effect of Virtual Learning Environments (VLE) on university…

  6. A Collaborative Molecular Modeling Environment Using a Virtual Tunneling Service

    PubMed Central

    Lee, Jun; Kim, Jee-In; Kang, Lin-Woo

    2012-01-01

    Collaborative research on three-dimensional molecular modeling can be limited by different time zones and locations. A networked virtual environment can be utilized to overcome the problems caused by these temporal and spatial differences. However, traditional approaches did not sufficiently consider the integration of different computing environments, which are characterized by types of applications, roles of users, and so on. We propose a collaborative molecular modeling environment that integrates different molecular modeling systems using a virtual tunneling service. We integrated Co-Coot, a collaborative crystallographic object-oriented toolkit, with VRMMS, a virtual reality molecular modeling system, through a collaborative tunneling system. The proposed system showed reliable quantitative and qualitative results in pilot experiments. PMID:22927721

  7. Consistency of performance of robot-assisted surgical tasks in virtual reality.

    PubMed

    Suh, I H; Siu, K-C; Mukherjee, M; Monk, E; Oleynikov, D; Stergiou, N

    2009-01-01

    The purpose of this study was to investigate consistency of performance of robot-assisted surgical tasks in a virtual reality environment. Eight subjects performed two surgical tasks, bimanual carrying and needle passing, with both the da Vinci surgical robot and a virtual reality equivalent environment. Nonlinear analysis was utilized to evaluate consistency of performance by calculating the regularity and the amount of divergence in the movement trajectories of the surgical instrument tips. Our results revealed that movement patterns for both training tasks were statistically similar between the two environments. Consistency of performance as measured by nonlinear analysis could be an appropriate methodology to evaluate the complexity of the training tasks between actual and virtual environments and assist in developing better surgical training programs.
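The "regularity" measure used in nonlinear analyses of movement trajectories is commonly approximate entropy (ApEn); a compact version is sketched below. This is an illustrative implementation with conventional parameters (m = 2, tolerance 0.2 × SD), not necessarily the exact algorithm or settings used in the study:

```python
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    """Approximate entropy of a 1-D series: lower values indicate a more
    regular, repeatable movement pattern. Tolerance r is conventionally
    0.2 times the standard deviation of the series."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def phi(m):
        n = len(x) - m + 1
        emb = np.array([x[i:i + m] for i in range(n)])   # embedded vectors
        # Chebyshev distance between all pairs of embedded vectors
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = (d <= r).mean(axis=1)                        # match fractions
        return np.log(c).mean()

    return phi(m) - phi(m + 1)

# A periodic (regular) trace vs. white noise of matched length
t = np.linspace(0, 8 * np.pi, 400)
regular = np.sin(t)
noisy = np.random.default_rng(2).standard_normal(400)
```

Applied to instrument-tip coordinate traces, a comparable ApEn between the da Vinci and virtual reality conditions would indicate similarly regular movement patterns in the two environments.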

  8. [Virtual + 1] * Reality

    NASA Astrophysics Data System (ADS)

    Beckhaus, Steffi

    Virtual Reality aims at creating an artificial environment that can be perceived as a substitute to a real setting. Much effort in research and development goes into the creation of virtual environments that in their majority are perceivable only by eyes and hands. The multisensory nature of our perception, however, allows and, arguably, also expects more than that. As long as we are not able to simulate and deliver a fully sensory believable virtual environment to a user, we could make use of the fully sensory, multi-modal nature of real objects to fill in for this deficiency. The idea is to purposefully integrate real artifacts into the application and interaction, instead of dismissing anything real as hindering the virtual experience. The term virtual reality - denoting the goal, not the technology - shifts from a core virtual reality to an “enriched” reality, technologically encompassing both the computer generated and the real, physical artifacts. Together, either simultaneously or in a hybrid way, real and virtual jointly provide stimuli that are perceived by users through their senses and are later formed into an experience by the user's mind.

  9. NASA Virtual Glovebox: An Immersive Virtual Desktop Environment for Training Astronauts in Life Science Experiments

    NASA Technical Reports Server (NTRS)

    Twombly, I. Alexander; Smith, Jeffrey; Bruyns, Cynthia; Montgomery, Kevin; Boyle, Richard

    2003-01-01

    The International Space Station will soon provide an unparalleled research facility for studying the near- and longer-term effects of microgravity on living systems. Using the Space Station Glovebox Facility - a compact, fully contained reach-in environment - astronauts will conduct technically challenging life sciences experiments. Virtual environment technologies are being developed at NASA Ames Research Center to help realize the scientific potential of this unique resource by facilitating the experimental hardware and protocol designs and by assisting the astronauts in training. The Virtual GloveboX (VGX) integrates high-fidelity graphics, force-feedback devices and real- time computer simulation engines to achieve an immersive training environment. Here, we describe the prototype VGX system, the distributed processing architecture used in the simulation environment, and modifications to the visualization pipeline required to accommodate the display configuration.

  10. A specification of 3D manipulation in virtual environments

    NASA Technical Reports Server (NTRS)

    Su, S. Augustine; Furuta, Richard

    1994-01-01

    In this paper we discuss the modeling of three basic kinds of 3-D manipulations in the context of a logical hand device and our virtual panel architecture. The logical hand device is a useful software abstraction representing hands in virtual environments. The virtual panel architecture is the 3-D component of the 2-D window systems. Both of the abstractions are intended to form the foundation for adaptable 3-D manipulation.

  11. Collaborative Project Work Development in a Virtual Environment with Low-Intermediate Undergraduate Colombian Students (Desarrollo de trabajo colaborativo en un ambiente virtual con estudiantes colombianos de pregrado de nivel intermedio-bajo)

    ERIC Educational Resources Information Center

    Salinas Vacca, Yakelin

    2014-01-01

    This paper reports on an exploratory, descriptive, and interpretive study in which the roles of discussion boards, the students, the teacher, and the monitors were explored as they constructed a collaborative class project in a virtual environment. This research was conducted in the virtual program of a Colombian public university. Data were…

  12. Inducing physiological stress recovery with sounds of nature in a virtual reality forest--results from a pilot study.

    PubMed

    Annerstedt, Matilda; Jönsson, Peter; Wallergård, Mattias; Johansson, Gerd; Karlson, Björn; Grahn, Patrik; Hansen, Ase Marie; Währborg, Peter

    2013-06-13

    Experimental research on stress recovery in natural environments is limited, as is study of the effect of sounds of nature. After inducing stress by means of a virtual stress test, we explored physiological recovery in two different virtual natural environments (with and without exposure to sounds of nature) and in one control condition. Cardiovascular data and saliva cortisol were collected. Repeated-measures ANOVA indicated parasympathetic activation in the group subjected to sounds of nature in a virtual natural environment, suggesting that enhanced stress recovery may occur in such surroundings. The group that recovered in virtual nature without sound and the control group displayed no particular autonomic activation or deactivation. The results demonstrate a potential mechanistic link between nature, the sounds of nature, and stress recovery, and suggest the potential importance of virtual reality as a tool in this research field. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. A fast simulation method for radiation maps using interpolation in a virtual environment.

    PubMed

    Li, Meng-Kun; Liu, Yong-Kuo; Peng, Min-Jun; Xie, Chun-Li; Yang, Li-Qun

    2018-05-10

    In nuclear decommissioning, virtual simulation technology is a useful tool to achieve an effective work process by using virtual environments to represent the physical and logical scheme of a real decommissioning project. This technology is cost-saving and time-saving, with the capacity to develop various decommissioning scenarios and reduce the risk of retrofitting. The method utilises a radiation map in a virtual simulation as the basis for the assessment of exposure to a virtual human. In this paper, we propose a fast simulation method using a known radiation source. The method has a unique advantage over point kernel and Monte Carlo methods because it generates the radiation map using interpolation in a virtual environment. The simulation of the radiation map including the calculation and the visualisation were realised using UNITY and MATLAB. The feasibility of the proposed method was tested on a hypothetical case and the results obtained are discussed in this paper.
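To illustrate the interpolation step, sparse dose-rate samples can be spread onto a regular grid with inverse-distance weighting. IDW is one plausible interpolant chosen for this sketch; the abstract does not specify which scheme the UNITY/MATLAB pipeline used, and the source geometry below is hypothetical:

```python
import numpy as np

def idw_map(sample_xy, sample_dose, grid_x, grid_y, power=2.0):
    """Inverse-distance-weighted interpolation of sparse dose-rate
    samples onto a regular (x, y) grid."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)          # (G, 2)
    d = np.linalg.norm(pts[:, None, :] - sample_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)                   # avoid div-by-zero at samples
    w = d ** -power
    dose = (w * sample_dose).sum(axis=1) / w.sum(axis=1)
    return dose.reshape(gx.shape)

# Hypothetical point source at the origin with 1/r^2 falloff,
# sampled at a few survey positions
src = np.array([0.0, 0.0])
samples = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 3.0], [-2.0, 1.0]])
dose = 1.0 / np.maximum(np.linalg.norm(samples - src, axis=1), 0.5) ** 2
grid = idw_map(samples, dose, np.linspace(-3, 3, 25), np.linspace(-3, 3, 25))
```

Because IDW returns convex combinations of the samples, the interpolated map never exceeds the measured extremes, which keeps virtual-human exposure estimates conservative between survey points.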

  14. Declarative Knowledge Acquisition in Immersive Virtual Learning Environments

    ERIC Educational Resources Information Center

    Webster, Rustin

    2016-01-01

    The author investigated the interaction effect of immersive virtual reality (VR) in the classroom. The objective of the project was to develop and provide a low-cost, scalable, and portable VR system containing purposely designed and developed immersive virtual learning environments for the US Army. The purpose of the mixed design experiment was…

  15. A "Second Life" for Gross Anatomy: Applications for Multiuser Virtual Environments in Teaching the Anatomical Sciences

    ERIC Educational Resources Information Center

    Richardson, April; Hazzard, Matthew; Challman, Sandra D.; Morgenstein, Aaron M.; Brueckner, Jennifer K.

    2011-01-01

    This article describes the emerging role of educational multiuser virtual environments, specifically Second Life[TM], in anatomical sciences education. Virtual worlds promote inquiry-based learning and conceptual understanding, potentially making them applicable for teaching and learning gross anatomy. A short introduction to Second Life as an…

  16. A Test of Spatial Contiguity for Virtual Human's Gestures in Multimedia Learning Environments

    ERIC Educational Resources Information Center

    Craig, Scotty D.; Twyford, Jessica; Irigoyen, Norma; Zipp, Sarah A.

    2015-01-01

    Virtual humans are becoming an easily available and popular component of multimedia learning that are often used in online learning environments. There is still a need for systematic research into their effectiveness. The current study investigates the positioning of a virtual human's gestures when guiding the learner through a multimedia…

  17. An Interdisciplinary Design Project in Second Life: Creating a Virtual Marine Science Learning Environment

    ERIC Educational Resources Information Center

    Triggs, Riley; Jarmon, Leslie; Villareal, Tracy A.

    2010-01-01

    Virtual environments can resolve many practical and pedagogical challenges within higher education. Economic considerations, accessibility issues, and safety concerns can all be somewhat alleviated by creating learning activities in a virtual space. Because of the removal of real-world physical limitations like gravity, durability and scope,…

  18. Spatial Integration under Contextual Control in a Virtual Environment

    ERIC Educational Resources Information Center

    Molet, Mikael; Gambet, Boris; Bugallo, Mehdi; Miller, Ralph R.

    2012-01-01

    The role of context was examined in the selection and integration of independently learned spatial relationships. Using a dynamic 3D virtual environment, participants learned one spatial relationship between landmarks A and B which was established in one virtual context (e.g., A is left of B) and a different spatial relationship which was…

  19. Active Learning through the Use of Virtual Environments

    ERIC Educational Resources Information Center

    Mayrose, James

    2012-01-01

    Immersive Virtual Reality (VR) has seen explosive growth over the last decade. Immersive VR attempts to give users the sensation of being fully immersed in a synthetic environment by providing them with 3D hardware, and allowing them to interact with objects in virtual worlds. The technology is extremely effective for learning and exploration, and…

  20. A Virtual World Workshop Environment for Learning Agile Software Development Techniques

    ERIC Educational Resources Information Center

    Parsons, David; Stockdale, Rosemary

    2012-01-01

    Multi-User Virtual Environments (MUVEs) are the subject of increasing interest for educators and trainers. This article reports on a longitudinal project that seeks to establish a virtual agile software development workshop hosted in the Open Wonderland MUVE, designed to help learners to understand the basic principles of some core agile software…

  1. Exploring "Magic Cottage": A Virtual Reality Environment for Stimulating Children's Imaginative Writing

    ERIC Educational Resources Information Center

    Patera, Marianne; Draper, Steve; Naef, Martin

    2008-01-01

    This paper presents an exploratory study that created a virtual reality environment (VRE) to stimulate motivation and creativity in imaginative writing at primary school level. The main aim of the study was to investigate if an interactive, semi-immersive virtual reality world could increase motivation and stimulate pupils' imagination in the…

  2. Pre-Service Teachers Designing Virtual World Learning Environments

    ERIC Educational Resources Information Center

    Jacka, Lisa; Booth, Kate

    2012-01-01

    Integrating Information Technology Communications in the classroom has been an important part of pre-service teacher education for over a decade. The advent of virtual worlds provides the pre-service teacher with an opportunity to study teaching and learning in a highly immersive 3D computer-based environment. Virtual worlds also provide a place…

  3. Impact of Virtual Work Environment on Traditional Team Domains.

    ERIC Educational Resources Information Center

    Geroy, Gary D.; Olson, Joel; Hartman, Jackie

    2002-01-01

    Examines a virtual work team to determine the domains of the team and the effect the virtual work environment had on the domains. Discusses results of a literature review and a phenomenological heuristic case study, including the effects of post-modern philosophy and postindustrial society on changes in the marketplace. (Contains 79 references.)…

  4. Walk, Fly, or Teleport to Learning: Virtual Worlds in the Classroom

    ERIC Educational Resources Information Center

    Yoder, Maureen Brown

    2009-01-01

    For educators looking for new ways to engage their students, multiuser virtual environments (MUVEs) offer a great opportunity for creative teaching and learning. MUVEs teach students social, technical, and practical life skills in a setting that is engaging and playful. One might be surprised how much these virtual environments teach students…

  5. Virtual reality and physical rehabilitation: a new toy or a new research and rehabilitation tool?

    PubMed Central

    Keshner, Emily A

    2004-01-01

    Virtual reality (VR) technology is rapidly becoming a popular application for physical rehabilitation and motor control research. But questions remain about whether this technology really extends our ability to influence the nervous system or whether moving within a virtual environment just motivates the individual to perform. I served as guest editor of this month's issue of the Journal of NeuroEngineering and Rehabilitation (JNER) for a group of papers on augmented and virtual reality in rehabilitation. These papers demonstrate a variety of approaches taken for applying VR technology to physical rehabilitation. The papers by Kenyon et al. and Sparto et al. address critical questions about how this technology can be applied to physical rehabilitation and research. The papers by Sveistrup and Viau et al. explore whether action within a virtual environment is equivalent to motor performance within the physical environment. Finally, papers by Riva et al. and Weiss et al. discuss the important characteristics of a virtual environment that will be most effective for obtaining changes in the motor system. PMID:15679943

  6. Estradiol-dependent Modulation of Serotonergic Markers in Auditory Areas of a Seasonally Breeding Songbird

    PubMed Central

    Matragrano, Lisa L.; Sanford, Sara E.; Salvante, Katrina G.; Beaulieu, Michaël; Sockman, Keith W.; Maney, Donna L.

    2011-01-01

    Because no organism lives in an unchanging environment, sensory processes must remain plastic so that in any context, they emphasize the most relevant signals. As the behavioral relevance of sociosexual signals changes along with reproductive state, the perception of those signals is altered by reproductive hormones such as estradiol (E2). We showed previously that in white-throated sparrows, immediate early gene responses in the auditory pathway of females are selective for conspecific male song only when plasma E2 is elevated to breeding-typical levels. In this study, we looked for evidence that E2-dependent modulation of auditory responses is mediated by serotonergic systems. In female nonbreeding white-throated sparrows treated with E2, the density of fibers immunoreactive for serotonin transporter innervating the auditory midbrain and rostral auditory forebrain increased compared with controls. E2 treatment also increased the concentration of the serotonin metabolite 5-HIAA in the caudomedial mesopallium of the auditory forebrain. In a second experiment, females exposed to 30 min of conspecific male song had higher levels of 5-HIAA in the caudomedial nidopallium of the auditory forebrain than birds not exposed to song. Overall, we show that in this seasonal breeder, (1) serotonergic fibers innervate auditory areas; (2) the density of those fibers is higher in females with breeding-typical levels of E2 than in nonbreeding, untreated females; and (3) serotonin is released in the auditory forebrain within minutes in response to conspecific vocalizations. Our results are consistent with the hypothesis that E2 acts via serotonin systems to alter auditory processing. PMID:21942431

  7. Sensory Intelligence for Extraction of an Abstract Auditory Rule: A Cross-Linguistic Study.

    PubMed

    Guo, Xiao-Tao; Wang, Xiao-Dong; Liang, Xiu-Yuan; Wang, Ming; Chen, Lin

    2018-02-21

    In a complex linguistic environment, while speech sounds can greatly vary, some shared features are often invariant. These invariant features constitute so-called abstract auditory rules. Our previous study has shown that with auditory sensory intelligence, the human brain can automatically extract the abstract auditory rules in the speech sound stream, presumably serving as the neural basis for speech comprehension. However, whether the sensory intelligence for extraction of abstract auditory rules in speech is inherent or experience-dependent remains unclear. To address this issue, we constructed a complex speech sound stream using auditory materials in Mandarin Chinese, in which syllables had a flat lexical tone but differed in other acoustic features to form an abstract auditory rule. This rule was occasionally and randomly violated by the syllables with the rising, dipping or falling tone. We found that both Chinese and foreign speakers detected the violations of the abstract auditory rule in the speech sound stream at a pre-attentive stage, as revealed by the whole-head recordings of mismatch negativity (MMN) in a passive paradigm. However, MMNs peaked earlier in Chinese speakers than in foreign speakers. Furthermore, Chinese speakers showed different MMN peak latencies for the three deviant types, which paralleled recognition points. These findings indicate that the sensory intelligence for extraction of abstract auditory rules in speech sounds is innate but shaped by language experience. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.

  8. Auditory Attention and Comprehension During a Simulated Night Shift: Effects of Task Characteristics.

    PubMed

    Pilcher, June J; Jennings, Kristen S; Phillips, Ginger E; McCubbin, James A

    2016-11-01

    The current study investigated performance on a dual auditory task during a simulated night shift. Night shifts and sleep deprivation negatively affect performance on vigilance-based tasks, but less is known about the effects on complex tasks. Because language processing is necessary for successful work performance, it is important to understand how it is affected by night work and sleep deprivation. Sixty-two participants completed a simulated night shift resulting in 28 hr of total sleep deprivation. Performance on a vigilance task and a dual auditory language task was examined across four testing sessions. The results indicate that working at night negatively impacts vigilance, auditory attention, and comprehension. The effects on the auditory task varied based on the content of the auditory material. When the material was interesting and easy, the participants performed better. Night work had a greater negative effect when the auditory material was less interesting and more difficult. These findings support research that vigilance decreases during the night. The results suggest that auditory comprehension suffers when individuals are required to work at night. Maintaining attention and controlling effort especially on passages that are less interesting or more difficult could improve performance during night shifts. The results from the current study apply to many work environments where decision making is necessary in response to complex auditory information. Better predicting the effects of night work on language processing is important for developing improved means of coping with shiftwork. © 2016, Human Factors and Ergonomics Society.

  9. The role of auditory transient and deviance processing in distraction of task performance: a combined behavioral and event-related brain potential study

    PubMed Central

    Berti, Stefan

    2013-01-01

    Distraction of goal-oriented performance by a sudden change in the auditory environment is an everyday experience. Different types of changes can be distracting, including the sudden onset of a transient sound and a slight deviation from an otherwise regular auditory background stimulation. With regard to deviance detection, it is assumed that slight changes in a continuous sequence of auditory stimuli are detected by a predictive coding mechanism, and it has been demonstrated that this mechanism is capable of distracting ongoing task performance. In contrast, it remains an open question whether transient detection, which does not rely on predictive coding mechanisms, can also trigger behavioral distraction. In the present study, the effect of rare auditory changes on visual task performance is tested in an auditory-visual cross-modal distraction paradigm. The rare changes are either embedded within a continuous standard stimulation (triggering deviance detection) or are presented within an otherwise silent situation (triggering transient detection). In the event-related brain potentials, deviants elicited the mismatch negativity (MMN) while transients elicited an enhanced N1 component, mirroring pre-attentive change detection in both conditions but on the basis of different neuro-cognitive processes. These sensory components are followed by attention-related ERP components, including the P3a and the reorienting negativity (RON), demonstrating that both types of changes trigger switches of attention. Finally, distraction of task performance is observable too, but the impact of deviants is greater than that of transients. These findings suggest different routes of distraction, allowing for the automatic processing of a wide range of potentially relevant changes in the environment as a pre-requisite for adaptive behavior. PMID:23874278

  10. Neural practice effect during cross-modal selective attention: Supra-modal and modality-specific effects.

    PubMed

    Xia, Jing; Zhang, Wei; Jiang, Yizhou; Li, You; Chen, Qi

    2018-05-16

    Practice and experiences gradually shape the central nervous system, from the synaptic level to large-scale neural networks. In a natural multisensory environment, even when inundated by streams of information from multiple sensory modalities, our brain does not give equal weight to different modalities. Rather, visual information more frequently receives preferential processing and eventually dominates consciousness and behavior, i.e., visual dominance. It remains unknown, however, how practice affects supra-modal and modality-specific processing during cross-modal selective attention, and whether the practice effect shows the same modality preferences as the visual dominance effect in the multisensory environment. To answer these two questions, we adopted a cross-modal selective attention paradigm in conjunction with a hybrid fMRI design. Behaviorally, visual performance significantly improved while auditory performance remained constant with practice, indicating that visual attention adapted behavior more flexibly with practice than auditory attention. At the neural level, the practice effect was associated with decreasing neural activity in the frontoparietal executive network and increasing activity in the default mode network, which occurred independently of the modality attended, i.e., supra-modal mechanisms. On the other hand, functional decoupling between the auditory and the visual system was observed with the progress of practice, which varied as a function of the modality attended. The auditory system was functionally decoupled from both the dorsal and the ventral visual stream during auditory attention, but only from the ventral visual stream during visual attention. To efficiently suppress irrelevant visual information with practice, auditory attention needs to additionally decouple the auditory system from the dorsal visual stream. The modality-specific mechanisms, together with the behavioral effect, thus support the visual dominance model in terms of the practice effect during cross-modal selective attention. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. Using SecondLife Online Virtual World Technology to Introduce Educators to the Digital Culture

    NASA Technical Reports Server (NTRS)

    Jamison, John

    2008-01-01

    The rapidly changing culture resulting from new technologies and digital gaming has created an increasing language gap between traditional educators and today's learners (Natkin, 2006; Seely-Brown, 2000). This study seeks to use the online virtual world of SecondLife.com as a tool to introduce educators to this new environment for learning. This study observes the activities and perceptions of a group of educators given unscripted access to this virtual environment. The results suggest that although serious technology limitations do currently exist, the potential of this virtual world environment as a learning experience for educators is strong.

  12. Enhancing Security by System-Level Virtualization in Cloud Computing Environments

    NASA Astrophysics Data System (ADS)

    Sun, Dawei; Chang, Guiran; Tan, Chunguang; Wang, Xingwei

    Many trends are opening up the era of cloud computing, which will reshape the IT industry. Virtualization techniques have become an indispensable ingredient of almost all cloud computing systems. Through virtual environments, a cloud provider is able to run the variety of operating systems needed by each cloud user. Virtualization can improve the reliability, security, and availability of applications through consolidation, isolation, and fault tolerance. In addition, it is possible to balance workloads using live migration techniques. In this paper, the definition of cloud computing is given, and the service and deployment models are introduced. Security issues and challenges in the implementation of cloud computing are then analyzed. Moreover, a system-level virtualization case is presented to show how the security of cloud computing environments can be enhanced.

  13. Guidelines for developing distributed virtual environment applications

    NASA Astrophysics Data System (ADS)

    Stytz, Martin R.; Banks, Sheila B.

    1998-08-01

    We have conducted a variety of projects that served to investigate the limits of virtual environment and distributed virtual environment (DVE) technology for the military and medical professions. The projects include an application that allows the user to interactively explore a high-fidelity, dynamic scale model of the Solar System and a high-fidelity, photorealistic, rapidly reconfigurable aircraft simulator. Additional projects are a project for observing, analyzing, and understanding the activity in a military distributed virtual environment, a project to develop a distributed threat simulator for training Air Force pilots, a virtual spaceplane to determine user interface requirements for a planned military spaceplane system, and an automated wingman for use in supplementing or replacing human-controlled systems in a DVE. The last two projects are a virtual environment user interface framework and a project for training hospital emergency department personnel. In the process of designing and assembling the DVE applications in support of these projects, we have developed rules of thumb and insights into assembling DVE applications and the environment itself. In this paper, we open with a brief review of the applications that were the source for our insights and then present the lessons learned as a result of these projects. The lessons we have learned fall primarily into five areas: requirements development, software architecture, human-computer interaction, graphical database modeling, and construction of computer-generated forces.

  14. Visual landmarks facilitate rodent spatial navigation in virtual reality environments

    PubMed Central

    Youngstrom, Isaac A.; Strowbridge, Ben W.

    2012-01-01

    Because many different sensory modalities contribute to spatial learning in rodents, it has been difficult to determine whether spatial navigation can be guided solely by visual cues. Rodents moving within physical environments with visual cues engage a variety of nonvisual sensory systems that cannot be easily inhibited without lesioning brain areas. Virtual reality offers a unique approach to ask whether visual landmark cues alone are sufficient to improve performance in a spatial task. We found that mice could learn to navigate between two water reward locations along a virtual bidirectional linear track using a spherical treadmill. Mice exposed to a virtual environment with vivid visual cues rendered on a single monitor increased their performance over a 3-day training regimen. Training significantly increased the percentage of time avatars controlled by the mice spent near reward locations in probe trials without water rewards. Neither improvement during training nor spatial learning of reward locations occurred with mice operating a virtual environment without vivid landmarks or with mice deprived of all visual feedback. Mice operating the vivid environment developed stereotyped avatar turning behaviors when alternating between reward zones that were positively correlated with their performance on the probe trial. These results suggest that mice are able to learn to navigate to specific locations using only visual cues presented within a virtual environment rendered on a single computer monitor. PMID:22345484

  15. Influence of anatomic landmarks in the virtual environment on simulated angled laparoscope navigation

    PubMed Central

    Christie, Lorna S.; Goossens, Richard H. M.; de Ridder, Huib; Jakimowicz, Jack J.

    2010-01-01

    Background The aim of this study is to investigate the influence of the presence of anatomic landmarks on the performance of angled laparoscope navigation on the SimSurgery SEP simulator. Methods Twenty-eight experienced laparoscopic surgeons (familiar with 30° angled laparoscope, >100 basic laparoscopic procedures, >5 advanced laparoscopic procedures) and 23 novices (no laparoscopy experience) performed the Camera Navigation task in an abstract virtual environment (CN-box) and in a virtual representation of the lower abdomen (CN-abdomen). They also rated the realism and added value of the virtual environments on seven-point scales. Results Within both groups, the CN-box task was accomplished in less time and with shorter tip trajectory than the CN-abdomen task (Wilcoxon test, p < 0.05). No significant differences were found between the performances of the experienced participants and the novices on the CN tasks (Mann–Whitney U test, p > 0.05). In both groups, the CN tasks were perceived as hard work and more challenging than anticipated. Conclusions Performance of the angled laparoscope navigation task is influenced by the virtual environment surrounding the exercise. The task was performed better in an abstract environment than in a virtual environment with anatomic landmarks. More insight is required into the influence and function of different types of intrinsic and extrinsic feedback on the effectiveness of preclinical simulator training. PMID:20419318

  16. Impossible spaces: maximizing natural walking in virtual environments with self-overlapping architecture.

    PubMed

    Suma, Evan A; Lipps, Zachary; Finkelstein, Samantha; Krum, David M; Bolas, Mark

    2012-04-01

    Natural walking is only possible within immersive virtual environments that fit inside the boundaries of the user's physical workspace. To reduce the severity of the restrictions imposed by limited physical area, we introduce "impossible spaces," a new design mechanic for virtual environments that aims to maximize the size of the virtual environment that can be explored with natural locomotion. Such environments make use of self-overlapping architectural layouts, effectively compressing comparatively large interior environments into smaller physical areas. We conducted two formal user studies to explore the perception and experience of impossible spaces. In the first experiment, we showed that reasonably small virtual rooms may overlap by as much as 56% before users begin to detect that they are in an impossible space, and that larger virtual rooms that expanded to maximally fill our available 9.14 m x 9.14 m workspace may overlap by up to 31%. Our results also demonstrate that users perceive distances to objects in adjacent overlapping rooms as if the overall space were uncompressed, even at overlap levels that were overtly noticeable. In our second experiment, we combined several well-known redirection techniques to string together a chain of impossible spaces in an expansive outdoor scene. We then conducted an exploratory analysis of users' verbal feedback during exploration, which indicated that impossible spaces provide an even more powerful illusion when users are naive to the manipulation.
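The room-overlap percentages reported above are plain floor-plan geometry. As a hypothetical sketch (this code is not from the paper, and the room coordinates are invented for illustration), the fraction by which two axis-aligned rectangular rooms overlap could be computed as:

```python
def overlap_fraction(room_a, room_b):
    """Overlapping floor area of two axis-aligned rooms, as a fraction
    of room_a's area. Rooms are (x_min, y_min, x_max, y_max) in metres."""
    ax0, ay0, ax1, ay1 = room_a
    bx0, by0, bx1, by1 = room_b
    # Overlap extent along each axis (zero if the rooms are disjoint).
    w = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    d = max(0.0, min(ay1, by1) - max(ay0, by0))
    return (w * d) / ((ax1 - ax0) * (ay1 - ay0))

# Two 4 m x 4 m rooms sharing half their floor area:
print(overlap_fraction((0, 0, 4, 4), (2, 0, 6, 4)))  # 0.5
```

A detection-threshold study like the one described would then vary this fraction (e.g., toward the reported 56% for small rooms) while asking users whether the space seems possible.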

  17. Assessment of emotional reactivity produced by exposure to virtual environments in patients with eating disorders.

    PubMed

    Gutiérrez-Maldonado, José; Ferrer-García, Marta; Caqueo-Urízar, Alejandra; Letosa-Porta, Alex

    2006-10-01

    The aim of this study was to assess the usefulness of virtual environments representing situations that are emotionally significant to subjects with eating disorders (ED). These environments may be applied with both evaluative and therapeutic aims and in simulation procedures to carry out a range of experimental studies. This paper is part of a wider research project analyzing the influence of the situation to which subjects are exposed on their performance on body image estimation tasks. Thirty female patients with eating disorders were exposed to six virtual environments: a living room (neutral situation), a kitchen with high-calorie food, a kitchen with low-calorie food, a restaurant with high-calorie food, a restaurant with low-calorie food, and a swimming pool. After exposure to each environment the STAI-S (a measurement of state anxiety) and the CDB (a measurement of depression) were administered to all subjects. The results show that virtual reality instruments are particularly useful for simulating everyday situations that may provoke emotional reactions, such as anxiety and depression, in patients with ED. Virtual environments in which subjects are obliged to ingest high-calorie food provoke the highest levels of state anxiety and depression.

  18. Auditory attention in childhood and adolescence: An event-related potential study of spatial selective attention to one of two simultaneous stories.

    PubMed

    Karns, Christina M; Isbell, Elif; Giuliano, Ryan J; Neville, Helen J

    2015-06-01

    Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) across five age groups: 3-5 years; 10 years; 13 years; 16 years; and young adults. Using a naturalistic dichotic listening paradigm, we characterized the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. Auditory attention in childhood and adolescence: An event-related potential study of spatial selective attention to one of two simultaneous stories

    PubMed Central

    Karns, Christina M.; Isbell, Elif; Giuliano, Ryan J.; Neville, Helen J.

    2015-01-01

    Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) in human children across five age groups: 3–5 years; 10 years; 13 years; 16 years; and young adults using a naturalistic dichotic listening paradigm, characterizing the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages. PMID:26002721

  20. The Evolution of Constructivist Learning Environments: Immersion in Distributed, Virtual Worlds.

    ERIC Educational Resources Information Center

    Dede, Chris

    1995-01-01

    Discusses the evolution of constructivist learning environments and examines the collaboration of simulated software models, virtual environments, and evolving mental models via immersion in artificial realities. A sidebar gives a realistic example of a student navigating through cyberspace. (JMV)

  1. Real-time recording and classification of eye movements in an immersive virtual environment.

    PubMed

    Diaz, Gabriel; Cooper, Joseph; Kit, Dmitry; Hayhoe, Mary

    2013-10-10

    Despite the growing popularity of virtual reality environments, few laboratories are equipped to investigate eye movements within these environments. This primer is intended to reduce the time and effort required to incorporate eye-tracking equipment into a virtual reality environment. We discuss issues related to the initial startup and provide algorithms necessary for basic analysis. Algorithms are provided for the calculation of gaze angle within a virtual world using a monocular eye-tracker in a three-dimensional environment. In addition, we provide algorithms for the calculation of the angular distance between the gaze and a relevant virtual object and for the identification of fixations, saccades, and pursuit eye movements. Finally, we provide tools that temporally synchronize gaze data and the visual stimulus and enable real-time assembly of a video-based record of the experiment using the Quicktime MOV format, available at http://sourceforge.net/p/utdvrlibraries/. This record contains the visual stimulus, the gaze cursor, and associated numerical data and can be used for data exportation, visual inspection, and validation of calculated gaze movements.
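The gaze-angle and event-classification computations described here are conceptually simple. The following is a generic sketch (not the authors' released library, and the 130 deg/s threshold is an illustrative value) of the angular distance between a gaze direction and the direction to a virtual object, plus velocity-threshold fixation/saccade labeling of successive samples:

```python
import math

def angular_distance_deg(gaze, target):
    """Angle in degrees between two 3D direction vectors."""
    dot = sum(g * t for g, t in zip(gaze, target))
    norm = math.sqrt(sum(g * g for g in gaze)) * math.sqrt(sum(t * t for t in target))
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def classify_samples(angles_deg, dt, saccade_thresh=130.0):
    """Label each inter-sample interval 'saccade' or 'fixation'
    by angular velocity in deg/s (threshold is illustrative)."""
    labels = []
    for a0, a1 in zip(angles_deg, angles_deg[1:]):
        velocity = abs(a1 - a0) / dt
        labels.append('saccade' if velocity > saccade_thresh else 'fixation')
    return labels

print(angular_distance_deg((1, 0, 0), (0, 1, 0)))   # 90.0
print(classify_samples([0.0, 1.0, 20.0], dt=0.01))  # ['fixation', 'saccade']
```

In a head-mounted setup, the gaze vector would first be transformed from eye/head coordinates into world coordinates before comparing it with the direction to a virtual object; that transformation is omitted here.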

  2. Real-time recording and classification of eye movements in an immersive virtual environment

    PubMed Central

    Diaz, Gabriel; Cooper, Joseph; Kit, Dmitry; Hayhoe, Mary

    2013-01-01

    Despite the growing popularity of virtual reality environments, few laboratories are equipped to investigate eye movements within these environments. This primer is intended to reduce the time and effort required to incorporate eye-tracking equipment into a virtual reality environment. We discuss issues related to the initial startup and provide algorithms necessary for basic analysis. Algorithms are provided for the calculation of gaze angle within a virtual world using a monocular eye-tracker in a three-dimensional environment. In addition, we provide algorithms for the calculation of the angular distance between the gaze and a relevant virtual object and for the identification of fixations, saccades, and pursuit eye movements. Finally, we provide tools that temporally synchronize gaze data and the visual stimulus and enable real-time assembly of a video-based record of the experiment using the Quicktime MOV format, available at http://sourceforge.net/p/utdvrlibraries/. This record contains the visual stimulus, the gaze cursor, and associated numerical data and can be used for data exportation, visual inspection, and validation of calculated gaze movements. PMID:24113087

  3. Virtual reality training improves students' knowledge structures of medical concepts.

    PubMed

    Stevens, Susan M; Goldsmith, Timothy E; Summers, Kenneth L; Sherstyuk, Andrei; Kihmm, Kathleen; Holten, James R; Davis, Christopher; Speitel, Daniel; Maris, Christina; Stewart, Randall; Wilks, David; Saland, Linda; Wax, Diane; Panaiotis; Saiki, Stanley; Alverson, Dale; Caudell, Thomas P

    2005-01-01

    Virtual environments can provide training that is difficult to achieve under normal circumstances. Medical students can work on high-risk cases in a realistic, time-critical environment, where they practice skills in a cognitively demanding and emotionally compelling situation. Research from cognitive science has shown that as students acquire domain expertise, their semantic organization of core domain concepts becomes more similar to that of an expert. In the current study, we hypothesized that students' knowledge structures would become more expert-like as a result of their diagnosing and treating a patient experiencing a hematoma within a virtual environment. Forty-eight medical students diagnosed and treated a hematoma case within a fully immersive virtual environment. Students' semantic organization of 25 case-related concepts was assessed before and after training. Students' knowledge structures became more integrated and more similar to an expert knowledge structure of the concepts as a result of the learning experience. The methods used here for eliciting, representing, and evaluating knowledge structures offer a sensitive and objective means for evaluating student learning in virtual environments and medical simulations.

  4. Treasure hunt of mineral resources: a serious game in a virtual world

    NASA Astrophysics Data System (ADS)

    Boniello, Annalisa

    2015-04-01

    This poster describes a geoscience activity on mineral resources for students 14-18 years old. The activity is designed as a treasure hunt: students must pass tests, solve questions, and search for minerals in different environments: near a volcano, in a river, in a lake, in a cave, under the sea, and on a mountain. The activity takes place in a virtual world built with OpenSim, an open-source platform. In this virtual world, each student, as an avatar (a virtual representation of himself or herself), searches for information, objects, and minerals, as in a digital serious game. In the serious game, built as a treasure hunt, students interact with the environment through learning by doing, and with other students in a cooperative, collaborative setting. The hunt poses a challenge the students must overcome: understanding what a mineral resource is by collecting data on minerals and analyzing the environments where they form. In this way students can improve their motivation, learn, and develop scientific skills.

  5. Virtual Jupiter - Real Learning

    NASA Astrophysics Data System (ADS)

    Ruzhitskaya, Lanika; Speck, A.; Laffey, J.

    2010-01-01

    How many earthlings have visited Jupiter? None. How many students have visited a virtual Jupiter to fulfill their introductory astronomy course requirements? Within the next six months, over 100 students from the University of Missouri will get a chance to explore the planet and its Galilean moons using a 3D virtual environment created especially for them to learn Kepler's and Newton's laws, eclipses, parallax, and other concepts in astronomy. The virtual world of the Jupiter system is a unique 3D environment that allows students to learn course material, physical laws and concepts in astronomy, while engaging them in exploration of the Jupiter system and encouraging their imagination, curiosity, and motivation. The virtual learning environment lets students work individually or collaborate with their teammates. The 3D world is also a great opportunity for research in astronomy education to investigate the impact of social interaction, gaming features, and the use of manipulatives offered by a learning tool on students' motivation and learning outcomes. It is also a valuable setting for exploring how learners' spatial awareness can be enhanced by working in a 3-dimensional environment.

  6. Real enough: using virtual public speaking environments to evoke feelings and behaviors targeted in stuttering assessment and treatment.

    PubMed

    Brundage, Shelley B; Hancock, Adrienne B

    2015-05-01

    Virtual reality environments (VREs) are computer-generated, 3-dimensional worlds that allow users to experience situations similar to those encountered in the real world. The purpose of this study was to investigate VREs for potential use in assessing and treating persons who stutter (PWS) by determining the extent to which PWS's affective, behavioral, and cognitive measures in a VRE correlate with those same measures in a similar live environment. Ten PWS delivered speeches, first to a live audience and, on another day, to two virtual audiences (one neutral and one challenging). Participants completed standard tests of communication apprehension and confidence prior to each condition, and frequency of stuttering was measured during each speech. Correlational analyses revealed significant, positive correlations between virtual and live conditions for affective and cognitive measures as well as for frequency of stuttering. These findings suggest that virtual public speaking environments engender affective, behavioral, and cognitive reactions in PWS that correspond to those experienced in the real world. Therefore, the authentic, safe, and controlled environments provided by VREs may be useful for stuttering assessment and treatment.

  7. The Effect of the Use of the 3-D Multi-User Virtual Environment "Second Life" on Student Motivation and Language Proficiency in Courses of Spanish as a Foreign Language

    ERIC Educational Resources Information Center

    Pares-Toral, Maria T.

    2013-01-01

    The ever-increasing popularity of 3-D multi-user virtual environments (MUVEs), also known simply as virtual worlds, provides language instructors with a new tool they can exploit in their courses. For now, "Second Life" is one of the most popular MUVEs used for teaching and learning, and although "Second Life"…

  8. Effects of Team Emotional Authenticity on Virtual Team Performance

    PubMed Central

    Connelly, Catherine E.; Turel, Ofir

    2016-01-01

    Members of virtual teams lack many of the visual or auditory cues that are usually used as the basis for impressions about fellow team members. We focus on the effects of the impressions formed in this context, and use social exchange theory to understand how these impressions affect team performance. Our pilot study, using content analysis (n = 191 students), suggested that most individuals believe that they can assess others' emotional authenticity in online settings by focusing on the content and tone of the messages. Our quantitative study examined the effects of these assessments. Structural equation modeling (SEM) analysis (n = 81 student teams) suggested that team-level trust and teamwork behaviors mediate the relationship between team emotional authenticity and team performance, and illuminate the importance of team emotional authenticity for team processes and outcomes. PMID:27630605

  9. Auditory Support in Linguistically Diverse Classrooms: Factors Related to Bilingual Text-to-Speech Use

    ERIC Educational Resources Information Center

    Van Laere, E.; Braak, J.

    2017-01-01

    Text-to-speech technology can act as an important support tool in computer-based learning environments (CBLEs) as it provides auditory input, next to on-screen text. Particularly for students who use a language at home other than the language of instruction (LOI) applied at school, text-to-speech can be useful. The CBLE E-Validiv offers content in…

  10. An investigation of the efficacy of collaborative virtual reality systems for moderated remote usability testing.

    PubMed

    Chalil Madathil, Kapil; Greenstein, Joel S

    2017-11-01

    Collaborative virtual reality-based systems have integrated high-fidelity voice-based communication, immersive audio and screen-sharing tools into virtual environments. Such three-dimensional collaborative virtual environments can mirror the collaboration among usability test participants and facilitators when they are physically collocated, potentially enabling moderated usability tests to be conducted effectively when the facilitator and participant are located in different places. We developed a virtual collaborative three-dimensional remote moderated usability testing laboratory and employed it in a controlled study to compare the effectiveness of moderated usability testing in a collaborative virtual reality-based environment with two other moderated usability testing methods: the traditional lab approach and Cisco WebEx, a web-based conferencing and screen-sharing approach. Using a mixed-methods experimental design, 36 test participants and 12 test facilitators were asked to complete representative tasks on a simulated online shopping website. The dependent variables included the time taken to complete the tasks; the usability defects identified and their severity; and the subjective ratings on the workload index, presence and satisfaction questionnaires. The remote moderated usability testing methodology using a collaborative virtual reality system performed similarly to the other two methodologies in terms of the total number of defects identified, the number of high-severity defects identified and the time taken to complete the tasks. The overall workload experienced by the test participants and facilitators was lowest with the traditional lab condition. No significant differences were identified in the workload experienced with the virtual reality and WebEx conditions. However, test participants experienced greater involvement and a more immersive experience in the virtual environment than in the WebEx condition. 
The ratings for the virtual environment condition were not significantly different from those for the traditional lab condition. The results of this study suggest that participants were productive and enjoyed the virtual lab condition, indicating the potential of a virtual world based approach as an alternative to conventional approaches for synchronous usability testing. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Envisioning the future of home care: applications of immersive virtual reality.

    PubMed

    Brennan, Patricia Flatley; Arnott Smith, Catherine; Ponto, Kevin; Radwin, Robert; Kreutz, Kendra

    2013-01-01

    Accelerating the design of technologies to support health in the home requires (1) a better understanding of how the household context shapes consumer health behaviors and (2) the opportunity to afford engineers, designers, and health professionals the chance to systematically study the home environment. We developed the Living Environments Laboratory (LEL) with a fully immersive, six-sided virtual reality CAVE to enable recreation of a broad range of household environments. We have successfully developed a virtual apartment, including a kitchen, living space, and bathroom. Over 2000 people have visited the LEL CAVE. Participants use an electronic wand to activate common household affordances, such as opening a refrigerator door or lifting a cup. Challenges currently being explored include creating natural gestures for interacting with virtual objects, developing robust, simple procedures to capture actual living environments and render them in a 3D visualization, and devising systematic, stable terminologies to characterize home environments.

  12. Development of a method for estimating oesophageal temperature by multi-locational temperature measurement inside the external auditory canal

    NASA Astrophysics Data System (ADS)

    Nakada, Hirofumi; Horie, Seichi; Kawanami, Shoko; Inoue, Jinro; Iijima, Yoshinori; Sato, Kiyoharu; Abe, Takeshi

    2017-09-01

    We aimed to develop a practical method to estimate oesophageal temperature by measuring auditory canal temperature at multiple locations. This method can be applied to prevent heatstroke by simultaneously and continuously monitoring the core temperatures of people working in hot environments. We asked 11 healthy male volunteers to exercise, generating 80 W for 45 min in a climatic chamber set at 24, 32 and 40 °C, at 50% relative humidity. We also exposed the participants to radiation at 32 °C. We continuously measured temperatures at the oesophagus, rectum and three different locations along the external auditory canal. We developed equations for estimating oesophageal temperature from auditory canal temperatures and compared their fitness and errors. The rectal temperature increased or decreased faster than oesophageal temperature at the start or end of exercise in all conditions. Estimated temperature showed good similarity with oesophageal temperature, and the square of the correlation coefficient of the best-fitting model reached 0.904. We observed intermediate values between rectal and oesophageal temperatures during the rest phase. Even under the condition with radiation, the estimated oesophageal temperature moved concordantly with the measured oesophageal temperature, with approximately 0.1 °C overestimation. Our method measured temperatures at three different locations along the external auditory canal. We confirmed that the approach can credibly estimate oesophageal temperature from 24 to 40 °C for people exercising in place in a windless environment.
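The paper's actual estimation equations are not reproduced in this abstract. As an illustrative sketch only (the calibration numbers below are invented), an oesophageal-temperature estimate from three auditory-canal sensors could be obtained by a least-squares linear fit:

```python
import numpy as np

# Hypothetical calibration data: rows are time points, columns are the
# three auditory-canal sensors (deg C); y is oesophageal temperature.
X = np.array([[36.1, 36.0, 35.8],
              [36.4, 36.3, 36.0],
              [36.8, 36.6, 36.3],
              [37.1, 36.9, 36.6]])
y = np.array([36.6, 36.9, 37.2, 37.5])

# Least-squares fit with an intercept column appended.
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def estimate_oesophageal(canal_temps):
    """Apply the fitted linear model to a new triple of readings."""
    return float(np.dot(coef[:3], canal_temps) + coef[3])

# Goodness of fit on the calibration data (cf. the reported R^2 = 0.904).
r2 = 1 - np.sum((A @ coef - y) ** 2) / np.sum((y - y.mean()) ** 2)
```

In practice such a model would be fitted per condition (or with condition covariates) against simultaneously recorded oesophageal probes, then evaluated on held-out data rather than on the calibration set itself.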

  13. Hearing and toluene exposure - a contribution to the theme

    PubMed Central

    Augusto, Lívia Sanches Calvi; Kulay, Luiz Alexandre; Franco, Eloisa Sartori

    2012-01-01

    Summary Introduction: With technological advances and changes in production processes, workers are exposed to different physical and chemical agents in their work environment. Toluene is an organic solvent present in glues, paints, oils, and other products. Objective: To compare literature findings indicating that workers exposed simultaneously to noise and this solvent have a greater probability of developing hearing loss of peripheral origin. Method: Review of the literature on occupational hearing loss in workers exposed to noise and toluene. Results: Isolated exposure to toluene can also trigger an alteration of auditory thresholds. The audiometric findings for ototoxic toluene exposure are similar to those for noise exposure, which makes it difficult to distinguish an audiometric result of combined exposure (noise and toluene) from one of noise exposure alone. Conclusion: Most of the studies were designed to generate hypotheses and should be considered preliminary steps toward additional research. To date, agents in the work environment and their effects have been studied in isolation, and the tolerance limits for these agents do not consider combined exposures. Considering that workers are exposed to multiple agents and that hearing loss is irreversible, the tests implemented must be more complete, and all workers must take part in a hearing-prevention program, even those exposed to doses below the recommended exposure limit. PMID:25991943

  14. Selective entrainment of brain oscillations drives auditory perceptual organization.

    PubMed

    Costa-Faidella, Jordi; Sussman, Elyse S; Escera, Carles

    2017-10-01

    Perceptual sound organization supports our ability to make sense of the complex acoustic environment, to understand speech and to enjoy music. However, the neuronal mechanisms underlying the subjective experience of perceiving univocal auditory patterns that can be listened to, despite hearing all sounds in a scene, are poorly understood. We hereby investigated the manner in which competing sound organizations are simultaneously represented by specific brain activity patterns and the way attention and task demands prime the internal model generating the current percept. Using a selective attention task on ambiguous auditory stimulation coupled with EEG recordings, we found that the phase of low-frequency oscillatory activity dynamically tracks multiple sound organizations concurrently. However, whereas the representation of ignored sound patterns is circumscribed to auditory regions, large-scale oscillatory entrainment in auditory, sensory-motor and executive-control network areas reflects the active perceptual organization, thereby giving rise to the subjective experience of a unitary percept. Copyright © 2017 Elsevier Inc. All rights reserved.
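    The central measure here, how consistently the phase of low-frequency activity locks to a sound stream across trials, is commonly quantified as inter-trial phase coherence (ITC). The sketch below is an illustration of that general measure using the Hilbert transform, not the cited study's actual analysis pipeline:

    ```python
    import numpy as np
    from scipy.signal import hilbert

    # Minimal sketch of inter-trial phase coherence (ITC), one common way to
    # quantify low-frequency phase entrainment to a stimulus stream.
    # Synthetic data; this is not the analysis pipeline of the cited study.
    fs = 250.0                       # sampling rate (Hz)
    t = np.arange(0, 2.0, 1.0 / fs)  # 2-second trials
    rng = np.random.default_rng(1)

    def itc(trials):
        """ITC per sample: magnitude of the mean unit phase vector across trials."""
        phases = np.angle(hilbert(trials, axis=1))
        return np.abs(np.exp(1j * phases).mean(axis=0))

    # "Entrained" condition: a 2 Hz oscillation with a fixed phase, plus noise
    entrained = np.array([np.sin(2 * np.pi * 2 * t) + 0.5 * rng.normal(size=t.size)
                          for _ in range(40)])
    # "Unentrained" condition: random phase on every trial
    unentrained = np.array([np.sin(2 * np.pi * 2 * t + rng.uniform(0, 2 * np.pi))
                            + 0.5 * rng.normal(size=t.size)
                            for _ in range(40)])

    # Phase is consistent across entrained trials, so ITC is markedly higher
    print(itc(entrained).mean() > itc(unentrained).mean())
    ```

    An ITC near 1 means the oscillatory phase repeats trial after trial; studies of this kind typically band-pass around the stimulus rate first, which is omitted here for brevity.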

  15. Virtual Learning Environment for Interactive Engagement with Advanced Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Pedersen, Mads Kock; Skyum, Birk; Heck, Robert; Müller, Romain; Bason, Mark; Lieberoth, Andreas; Sherson, Jacob F.

    2016-06-01

    A virtual learning environment can engage university students in the learning process in ways that traditional lecture and lab formats cannot. We present our virtual learning environment StudentResearcher, which incorporates simulations, multiple-choice quizzes, video lectures, and gamification into a learning path for quantum mechanics at the advanced university level. StudentResearcher is built upon experience gathered from workshops with the citizen science game Quantum Moves at the high-school and university levels, where the games were used extensively to illustrate the basic concepts of quantum mechanics. The first test of this new virtual learning environment was a 2014 course in advanced quantum mechanics at Aarhus University with 47 enrolled students. We found increased learning for the students who were more active on the platform, independent of their previous performance.

  16. Visualizing vascular structures in virtual environments

    NASA Astrophysics Data System (ADS)

    Wischgoll, Thomas

    2013-01-01

    In order to learn more about the cause of coronary heart diseases and develop diagnostic tools, the extraction and visualization of vascular structures from volumetric scans for further analysis is an important step. By determining a geometric representation of the vasculature, the geometry can be inspected and additional quantitative data calculated and incorporated into the visualization of the vasculature. To provide a more user-friendly visualization tool, virtual environment paradigms can be utilized. This paper describes techniques for interactive rendering of large-scale vascular structures within virtual environments. This can be applied to almost any virtual environment configuration, such as CAVE-type displays. Specifically, the tools presented in this paper were tested on a Barco I-Space and a large 62x108 inch passive projection screen with a Kinect sensor for user tracking.

  17. Hybrid Cloud Computing Environment for EarthCube and Geoscience Community

    NASA Astrophysics Data System (ADS)

    Yang, C. P.; Qin, H.

    2016-12-01

    The NSF EarthCube Integration and Test Environment (ECITE) has built a hybrid cloud computing environment that provides resources from private clouds, using the cloud system software OpenStack and Eucalyptus, and also manages a public cloud, Amazon Web Services, allowing resources to be synchronized and burst between private and public clouds. On the ECITE hybrid cloud platform, the EarthCube and geoscience communities can deploy and manage applications using base or customized virtual machine images, analyze big datasets using virtual clusters, and monitor virtual resource usage on the cloud in real time. A number of EarthCube projects have deployed or started migrating to this platform, such as CHORDS, BCube, CINERGI, OntoSoft, and other EarthCube building blocks. To accomplish a deployment or migration, the administrator of the ECITE hybrid cloud platform prepares each project's specific needs (e.g. images, port numbers, usable cloud capacity) in advance, based on communications between ECITE and the participating projects; the scientists or IT technicians in those projects then launch one or more virtual machines, access them to set up the computing environment if needed, and migrate their code, documents or data without having to deal with the heterogeneity in structure and operations among different cloud platforms.

  18. Knowledge Acquisition and Job Training for Advanced Technical Skills Using Immersive Virtual Environment

    NASA Astrophysics Data System (ADS)

    Watanuki, Keiichi; Kojima, Kazuyuki

    The environment in which Japanese industry has earned great respect is changing tremendously due to the globalization of world economies, while Asian countries are undergoing economic and technical development and benefiting from advances in information technology. For example, in the design of custom-made casting products, a designer who lacks knowledge of casting may not be able to produce a good design. To obtain good design and manufacturing results, designers and manufacturers need a support system for casting design, a so-called knowledge transfer and creation system. This paper proposes a new virtual-reality-based knowledge acquisition and job training system for casting design, composed of explicit and tacit knowledge transfer systems using synchronized multimedia and a knowledge internalization system using a portable virtual environment. In the proposed system, educational content is displayed in the immersive virtual environment, whereby a trainee may experience work at a virtual operation site. Once the trainee has gained explicit and tacit knowledge of casting through the multimedia-based knowledge transfer system, the immersive virtual environment catalyzes the internalization of that knowledge and enables the trainee to gain tacit knowledge before undergoing on-the-job training at a real operation site.

  19. Learning Rationales and Virtual Reality Technology in Education.

    ERIC Educational Resources Information Center

    Chiou, Guey-Fa

    1995-01-01

    Defines and describes virtual reality technology and differentiates between virtual learning environments, learning materials, and learning tools. Links learning rationales to virtual reality technology to lay conceptual foundations for applying virtual reality technology in education. Constructivism, case-based learning, problem-based learning,…

  20. A Multi-User Virtual Environment for Building and Assessing Higher Order Inquiry Skills in Science

    ERIC Educational Resources Information Center

    Ketelhut, Diane Jass; Nelson, Brian C.; Clarke, Jody; Dede, Chris

    2010-01-01

    This study investigated novel pedagogies for helping teachers infuse inquiry into a standards-based science curriculum. Using a multi-user virtual environment (MUVE) as a pedagogical vehicle, teams of middle-school students collaboratively solved problems around disease in a virtual town called River City. The students interacted with "avatars" of…
