Sample records for "visual environments improve"

  1. Visual landmarks facilitate rodent spatial navigation in virtual reality environments

    PubMed Central

    Youngstrom, Isaac A.; Strowbridge, Ben W.

    2012-01-01

    Because many different sensory modalities contribute to spatial learning in rodents, it has been difficult to determine whether spatial navigation can be guided solely by visual cues. Rodents moving within physical environments with visual cues engage a variety of nonvisual sensory systems that cannot be easily inhibited without lesioning brain areas. Virtual reality offers a unique approach to ask whether visual landmark cues alone are sufficient to improve performance in a spatial task. We found that mice could learn to navigate between two water reward locations along a virtual bidirectional linear track using a spherical treadmill. Mice exposed to a virtual environment with vivid visual cues rendered on a single monitor increased their performance over a 3-day training regimen. Training significantly increased the percentage of time avatars controlled by the mice spent near reward locations in probe trials without water rewards. Neither improvement during training nor spatial learning for reward locations occurred with mice operating a virtual environment without vivid landmarks or with mice deprived of all visual feedback. Mice operating the vivid environment developed stereotyped avatar turning behaviors when alternating between reward zones that were positively correlated with their performance on the probe trial. These results suggest that mice are able to learn to navigate to specific locations using only visual cues presented within a virtual environment rendered on a single computer monitor. PMID:22345484
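The probe-trial measure reported above (percentage of time spent near the reward locations) reduces to a simple occupancy calculation. The sketch below is an illustration with hypothetical names and input format, not the authors' analysis code:

```python
import numpy as np

def pct_time_near_rewards(positions, reward_locs, radius):
    """Percentage of equally spaced position samples that fall within
    `radius` of any reward location on a 1-D virtual linear track.

    positions:   (T,) sampled avatar positions along the track
    reward_locs: iterable of reward-zone centers
    radius:      half-width counted as "near" a reward zone
    """
    positions = np.asarray(positions, dtype=float)
    near = np.zeros(len(positions), dtype=bool)
    for loc in reward_locs:
        near |= np.abs(positions - loc) <= radius
    return 100.0 * near.mean()
```

With equally spaced samples, the time fraction equals the sample fraction, so no timestamps are needed.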

  2. Headphone and Head-Mounted Visual Displays for Virtual Environments

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Ellis, Stephen R.; Wenzel, Elizabeth M.; Trejo, Leonard J. (Technical Monitor)

    1998-01-01

    A realistic auditory environment can contribute to both the overall subjective sense of presence in a virtual display, and to a quantitative metric predicting human performance. Here, the role of audio in a virtual display and the importance of auditory-visual interaction are examined. Conjectures are proposed regarding the effectiveness of audio compared to visual information for creating a sensation of immersion, the frame of reference within a virtual display, and the compensation of visual fidelity by supplying auditory information. Future areas of research are outlined for improving simulations of virtual visual and acoustic spaces. This paper will describe some of the intersensory phenomena that arise during operator interaction within combined visual and auditory virtual environments. Conjectures regarding audio-visual interaction will be proposed.

  3. Adapting the iSNOBAL model for improved visualization in a GIS environment

    NASA Astrophysics Data System (ADS)

    Johansen, W. J.; Delparte, D.

    2014-12-01

    Snowmelt is a primary source of crucial water resources in much of the western United States. Researchers are developing models that estimate snowmelt to aid in water resource management. One such model is the image snowcover energy and mass balance (iSNOBAL) model. It uses input climate grids to simulate the development and melting of snowpack in mountainous regions. This study looks at applying this model to the Reynolds Creek Experimental Watershed in southwestern Idaho, utilizing novel approaches incorporating geographic information systems (GIS). To improve visualization of the iSNOBAL model, we have adapted it to run in a GIS environment. This type of environment is suited to both the input grid creation and the visualization of results. The data used for input grid creation can be stored locally or on a web server. Kriging interpolation embedded within Python scripts is used to create air temperature, soil temperature, humidity, and precipitation grids, while built-in GIS and existing tools are used to create solar radiation and wind grids. Additional Python scripting is then used to perform model calculations. The final product is a user-friendly and accessible version of the iSNOBAL model, including the ability to easily visualize and interact with model results, all within a web- or desktop-based GIS environment. This environment allows for interactive manipulation of model parameters and visualization of the resulting input grids for the model calculations. Future work is moving towards adapting the model further for use in a 3D gaming engine for improved visualization and interaction.
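The kriging step described above can be sketched in plain Python. This is a minimal ordinary-kriging illustration with an assumed linear semivariogram, not the authors' GIS implementation:

```python
import numpy as np

def ordinary_kriging(xy, z, targets, gamma=lambda h: h):
    """Minimal ordinary kriging with a user-supplied semivariogram model.

    xy:      (n, 2) station coordinates
    z:       (n,)  observed values (e.g. air temperature)
    targets: (m, 2) grid-cell coordinates to estimate
    gamma:   semivariogram model; linear, gamma(h) = h, by default
    """
    xy, z, targets = map(np.asarray, (xy, z, targets))
    n = len(xy)
    # Kriging system: station-to-station semivariances plus a Lagrange
    # row/column enforcing that the weights sum to one.
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[n, n] = 0.0
    est = np.empty(len(targets))
    for i, p in enumerate(targets):
        b = np.append(gamma(np.linalg.norm(xy - p, axis=1)), 1.0)
        w = np.linalg.solve(A, b)
        est[i] = w[:n] @ z
    return est
```

Ordinary kriging is an exact interpolator: estimating at a station location returns that station's value, which makes a convenient sanity check.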

  4. The Contribution of Visualization to Learning Computer Architecture

    ERIC Educational Resources Information Center

    Yehezkel, Cecile; Ben-Ari, Mordechai; Dreyfus, Tommy

    2007-01-01

    This paper describes a visualization environment and associated learning activities designed to improve learning of computer architecture. The environment, EasyCPU, displays a model of the components of a computer and the dynamic processes involved in program execution. We present the results of a research program that analysed the contribution of…

  5. Audio-Visual Situational Awareness for General Aviation Pilots

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly; Lodha, Suresh K.; Clancy, Daniel (Technical Monitor)

    2001-01-01

    Weather is one of the major causes of general aviation accidents. Researchers are addressing this problem from various perspectives including improving meteorological forecasting techniques, collecting additional weather data automatically via on-board sensors and "flight" modems, and improving weather data dissemination and presentation. We approach the problem from the improved presentation perspective and propose weather visualization and interaction methods tailored for general aviation pilots. Our system, Aviation Weather Data Visualization Environment (AWE), utilizes information visualization techniques, a direct manipulation graphical interface, and a speech-based interface to improve a pilot's situational awareness of relevant weather data. The system design is based on a user study and feedback from pilots.

  6. Audio-visual interactions in environment assessment.

    PubMed

    Preis, Anna; Kociński, Jędrzej; Hafke-Dys, Honorata; Wrzosek, Małgorzata

    2015-08-01

    The aim of the study was to examine how visual and audio information influences audio-visual environment assessment. Original audio-visual recordings were made at seven different places in the city of Poznań. Participants of the psychophysical experiments were asked to rate, on a numerical standardized scale, the degree of comfort they would feel if they were in such an environment. The assessments of audio-visual comfort were carried out in a laboratory in four different conditions: (a) audio samples only, (b) original audio-visual samples, (c) video samples only, and (d) mixed audio-visual samples. The general results of this experiment showed a significant difference between the investigated conditions, but not for all the investigated samples. When conditions (a) and (b) were compared, there was a significant improvement in comfort assessment when visual information was added (in only three out of seven cases). On the other hand, the results show that the comfort assessment of audio-visual samples could be changed by manipulating the audio rather than the video part of the audio-visual sample. Finally, it seems that people can differentiate audio-visual representations of a given place in the environment based on the sound sources' compositions rather than on the sound level. Object identification is responsible for both landscape and soundscape grouping. Copyright © 2015. Published by Elsevier B.V.

  7. Scaffolding Learning from Molecular Visualizations

    ERIC Educational Resources Information Center

    Chang, Hsin-Yi; Linn, Marcia C.

    2013-01-01

    Powerful online visualizations can make unobservable scientific phenomena visible and improve student understanding. Instead, they often confuse or mislead students. To clarify the impact of molecular visualizations for middle school students we explored three design variations implemented in a Web-based Inquiry Science Environment (WISE) unit on…

  8. A visual ergonomics intervention in mail sorting facilities: effects on eyes, muscles and productivity.

    PubMed

    Hemphälä, Hillevi; Eklund, Jörgen

    2012-01-01

    Visual requirements are high when sorting mail. The purpose of this visual ergonomics intervention study was to evaluate the visual environment in mail sorting facilities and to explore opportunities for improving the work situation by reducing visual strain, improving the visual work environment and reducing mail sorting time. Twenty-seven postmen/women participated in a pre-intervention study, which included questionnaires on their experiences of light, visual ergonomics, health, and musculoskeletal symptoms. Measurements of lighting conditions and productivity were also performed along with eye examinations of the postmen/women. The results from the pre-intervention study showed that the postmen/women who suffered from eyestrain had a higher prevalence of musculoskeletal disorders (MSD) and sorted more slowly than those without eyestrain. Illuminance and illuminance uniformity improved as a result of the intervention. The two post-intervention follow-ups showed a higher prevalence of MSD among the postmen/women with eyestrain than among those without. The previous differences in sorting time for employees with and without eyestrain disappeared. After the intervention, the postmen/women felt better in general, experienced less work induced stress, and considered that the total general lighting had improved. The most pronounced decreases in eyestrain, MSD, and mail sorting time were seen among the younger participants of the group. Copyright © 2011 Elsevier Ltd and The Ergonomics Society. All rights reserved.
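The illuminance uniformity mentioned above is conventionally computed as the ratio of minimum to average illuminance over a grid of lux readings (an assumption about the exact metric used in this study):

```python
def illuminance_uniformity(lux_readings):
    """Overall uniformity U0 = E_min / E_avg over a grid of lux readings.

    A value near 1.0 means even lighting; lighting standards typically
    require U0 above some threshold for task areas.
    """
    readings = [float(r) for r in lux_readings]
    return min(readings) / (sum(readings) / len(readings))
```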

  9. Simulating Visual Attention Allocation of Pilots in an Advanced Cockpit Environment

    NASA Technical Reports Server (NTRS)

    Frische, F.; Osterloh, J.-P.; Luedtke, A.

    2011-01-01

    This paper describes the results of experiments conducted with human line pilots and a cognitive pilot model during interaction with a new 4D Flight Management System (FMS). The aim of these experiments was to gather human pilot behavior data in order to calibrate the behavior of the model. Human behavior is mainly triggered by visual perception. Thus, the main aspect was to set up a profile of human pilots' visual attention allocation in a cockpit environment containing the new FMS. We first performed statistical analyses of eye tracker data and then compared our results to results of similar analyses in standard cockpit environments. The comparison showed a significant influence of the new system on the visual performance of human pilots. Furthermore, analyses of the pilot model's visual performance were performed. A comparison to human pilots' visual performance revealed important improvement potentials.
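Building a profile of visual attention allocation from eye-tracker output typically amounts to aggregating fixation durations per area of interest (AOI). A minimal sketch, with a hypothetical input format, not the authors' pipeline:

```python
from collections import defaultdict

def attention_allocation(fixations):
    """Percentage of total dwell time spent on each area of interest.

    fixations: iterable of (aoi_name, duration_ms) pairs, e.g. as
    produced by an eye tracker's fixation-detection stage.
    """
    totals = defaultdict(float)
    for aoi, duration in fixations:
        totals[aoi] += duration
    grand_total = sum(totals.values())
    return {aoi: 100.0 * t / grand_total for aoi, t in totals.items()}
```

Comparing such profiles between cockpit configurations (or between pilot and model) is then a straightforward per-AOI comparison.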

  10. Engaging Direct Care Providers in Improving Infection Prevention and Control Practices Using Participatory Visual Methods.

    PubMed

    Backman, Chantal; Bruce, Natalie; Marck, Patricia; Vanderloo, Saskia

    2016-01-01

    The purpose of this quality improvement project was to determine the feasibility of using provider-led participatory visual methods to scrutinize 4 hospital units' infection prevention and control practices. Methods included provider-led photo walkabouts, photo elicitation sessions, and postimprovement photo walkabouts. Nurses readily engaged in using the methods to examine and improve their units' practices and reorganize their work environment.

  11. A Three Pronged Approach for Improved Data Understanding: 3-D Visualization, Use of Gaming Techniques, and Intelligent Advisory Agents

    DTIC Science & Technology

    2006-10-01

    Fragmentary index excerpt (no full abstract available); recoverable citation: C. Cruz-Neira, D.J. Sandin, T.A. DeFanti, R.V. Kenyon and J.C. Hart, "The CAVE: Audio Visual Experience Automatic Virtual Environment," SIGGRAPH '92.

  12. Beneficial effects of enriched environment following status epilepticus in immature rats.

    PubMed

    Faverjon, S; Silveira, D C; Fu, D D; Cha, B H; Akman, C; Hu, Y; Holmes, G L

    2002-11-12

    There is increasing evidence that enriching the environment can improve cognitive and motor deficits following a variety of brain injuries. Whether environmental enrichment can improve cognitive impairment following status epilepticus (SE) is not known. We sought to determine whether the environment in which animals are raised influences cognitive function in normal rats and in rats subjected to SE. Rats (n = 100) underwent lithium-pilocarpine-induced SE at postnatal (P) day 20 and were then placed in either an enriched environment consisting of a large play area with toys, climbing objects, and music, or in standard vivarium cages for 30 days. Control rats (n = 32) were handled similarly to the SE rats but received saline injections instead of lithium-pilocarpine. Rats were then tested in the water maze, a measure of visual-spatial memory. A subset of the rats was killed during exposure to the enriched or nonenriched environment and the brains examined for dentate granule cell neurogenesis using bromodeoxyuridine (BrdU) and phosphorylated cyclic AMP response element binding protein (pCREB) immunostaining, a brain transcription factor important in long-term memory. Both control and SE rats exposed to the enriched environment performed significantly better than the nonenriched group in the water maze. There was a significant increase in neurogenesis and pCREB immunostaining in the dentate gyrus in both control and SE animals exposed to the enriched environment compared to the nonenriched groups. Environmental enrichment resulted in no change in SE-induced histologic damage. Exposure to an enriched environment in weanling rats significantly improves visual-spatial learning. Even following SE, an enriched environment enhances cognitive function. An increase in neurogenesis and activation of transcription factors may contribute to this enhanced visual-spatial memory.

  13. Should visual speech cues (speechreading) be considered when fitting hearing aids?

    NASA Astrophysics Data System (ADS)

    Grant, Ken

    2002-05-01

    When talker and listener are face-to-face, visual speech cues become an important part of the communication environment, and yet, these cues are seldom considered when designing hearing aids. Models of auditory-visual speech recognition highlight the importance of complementary versus redundant speech information for predicting auditory-visual recognition performance. Thus, for hearing aids to work optimally when visual speech cues are present, it is important to know whether the cues provided by amplification and the cues provided by speechreading complement each other. In this talk, data will be reviewed that show nonmonotonicity between auditory-alone speech recognition and auditory-visual speech recognition, suggesting that efforts designed solely to improve auditory-alone recognition may not always result in improved auditory-visual recognition. Data will also be presented showing that one of the most important speech cues for enhancing auditory-visual speech recognition performance, voicing, is often the cue that benefits least from amplification.

  14. The effectiveness of visual art on environment in nursing home.

    PubMed

    Chang, Chia-Hsiu; Lu, Ming-Shih; Lin, Tsyr-En; Chen, Chung-Hey

    2013-06-01

    This Taiwan study investigated the effect of a visual art-based friendly environment on nursing home residents' satisfaction with their living environment. A pre-experimental design was used. Thirty-three residents in a nursing home were recruited in a one-group pre- and post-test study. The four-floor living environment was integrated using visual art, reminiscence, and gardening based on the local culture and history. Each floor was given a different theme, one that was familiar to most of the residents on the floor. The Satisfaction with Living Environment at Nursing Home Scale (SLE-NHS) was developed to measure outcomes. Of the 33 participants recruited, 27 (81.8%) were women and 6 (18.2%) were men. Their mean age was 79.24 ± 7.40 years, and 48.5% were severely dependent in activities of daily living. The SLE-NHS showed adequate reliability and validity. Its three domains were generated and defined using factor analysis. After the visual art-based intervention, the score on the "recalling old memories" subscale was significantly higher (t = -13.32, p < .001). However, there were no significant score changes on the "convenience" and "pretty and pleasurable" subscales. In general, the participants were satisfied with the redesigned environment and felt happy in the sunny rooms. Visual art in a nursing home is a novel method for representing the local culture and stressing the spiritual value of the elderly residents who helped create it. Older adults' aesthetic activities through visual art, including reminiscence and local culture, may enrich their spirits in later life. Older adults' aesthetic activities through visual art have been shown to improve their satisfaction with their living environment. The SLE-NHS is a useful tool for evaluating their satisfaction. © 2013 Sigma Theta Tau International.
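The pre/post comparison reported above (t = -13.32) is a paired-samples t-test. A minimal sketch of the statistic itself, not the authors' analysis pipeline:

```python
import numpy as np

def paired_t(pre, post):
    """Paired-samples t statistic for pre- vs post-intervention scores.

    Computes t = mean(d) / (sd(d) / sqrt(n)) over the paired
    differences d = post - pre, with the sample standard deviation.
    """
    d = np.asarray(post, dtype=float) - np.asarray(pre, dtype=float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
```

The degrees of freedom are n - 1; a p-value would come from the t distribution (e.g. scipy.stats.ttest_rel wraps both steps).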

  15. The development of organized visual search

    PubMed Central

    Woods, Adam J.; Goksun, Tilbe; Chatterjee, Anjan; Zelonis, Sarah; Mehta, Anika; Smith, Sabrina E.

    2013-01-01

    Visual search plays an important role in guiding behavior. Children have more difficulty performing conjunction search tasks than adults. The present research evaluates whether developmental differences in children's ability to organize serial visual search (i.e., search organization skills) contribute to performance limitations in a typical conjunction search task. We evaluated 134 children between the ages of 2 and 17 on separate tasks measuring search for targets defined by a conjunction of features or by distinct features. Our results demonstrated that children organize their visual search better as they get older. As children's skills at organizing visual search improve they become more accurate at locating targets with conjunction of features amongst distractors, but not for targets with distinct features. Developmental limitations in children's abilities to organize their visual search of the environment are an important component of poor conjunction search in young children. In addition, our findings provide preliminary evidence that, like other visuospatial tasks, exposure to reading may influence children's spatial orientation to the visual environment when performing a visual search. PMID:23584560
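One common index of search organization is the "best r" statistic: the stronger of the correlations between the order in which targets are marked and their horizontal or vertical coordinates. Whether this study used exactly this measure is an assumption; the sketch below illustrates the idea:

```python
import numpy as np

def best_r(xs, ys):
    """'Best r' search-organization index.

    xs, ys: coordinates of marked targets in the order they were found.
    Returns the larger absolute Pearson correlation between serial
    position and either coordinate; values near 1 indicate an
    organized (e.g. left-to-right or top-to-bottom) search path.
    """
    order = np.arange(len(xs))
    r_x = np.corrcoef(order, xs)[0, 1]
    r_y = np.corrcoef(order, ys)[0, 1]
    return max(abs(r_x), abs(r_y))
```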

  16. BilKristal 2.0: A tool for pattern information extraction from crystal structures

    NASA Astrophysics Data System (ADS)

    Okuyan, Erhan; Güdükbay, Uğur

    2014-01-01

    We present a revised version of the BilKristal tool of Okuyan et al. (2007). We converted the development environment to Microsoft Visual Studio 2005 in order to resolve compatibility issues. We added multi-core CPU support, and improvements were made to graphics functions in order to improve performance. Discovered bugs were fixed, and exporting functionality to a material visualization tool was added.

  17. Experiences of visually impaired students in higher education: Bodily perspectives on inclusive education.

    PubMed

    Lourens, Heidi; Swartz, Leslie

    Although previous literature sheds light on the experiences of visually impaired students on tertiary grounds, these studies failed to provide an embodied understanding of their lives. In-depth interviews with 15 visually impaired students at one university demonstrated the ways in which they experienced their disability and the built environment in their bodies. At the same time, lost, fearful, shameful and aching bodies revealed prevailing gaps in provision for disabled students. Through this research it becomes clear how the environment is acutely felt within fleshly worlds, while bodies do not fail to tell of disabling societal structures. Based on the bodily stories, we thus make recommendations to improve the lives of visually impaired students on tertiary campuses.

  18. Experiences of visually impaired students in higher education: Bodily perspectives on inclusive education

    PubMed Central

    Lourens, Heidi; Swartz, Leslie

    2016-01-01

    Although previous literature sheds light on the experiences of visually impaired students on tertiary grounds, these studies failed to provide an embodied understanding of their lives. In-depth interviews with 15 visually impaired students at one university demonstrated the ways in which they experienced their disability and the built environment in their bodies. At the same time, lost, fearful, shameful and aching bodies revealed prevailing gaps in provision for disabled students. Through this research it becomes clear how the environment is acutely felt within fleshly worlds, while bodies do not fail to tell of disabling societal structures. Based on the bodily stories, we thus make recommendations to improve the lives of visually impaired students on tertiary campuses. PMID:27917028

  19. Degraded visual environment image/video quality metrics

    NASA Astrophysics Data System (ADS)

    Baumgartner, Dustin D.; Brown, Jeremy B.; Jacobs, Eddie L.; Schachter, Bruce J.

    2014-06-01

    A number of image quality metrics (IQMs) and video quality metrics (VQMs) have been proposed in the literature for evaluating techniques and systems for mitigating degraded visual environments. Some require both pristine and corrupted imagery. Others require patterned target boards in the scene. None of these metrics relates well to the task of landing a helicopter in conditions such as a brownout dust cloud. We have developed and used a variety of IQMs and VQMs related to the pilot's ability to detect hazards in the scene and to maintain situational awareness. Some of these metrics can be made agnostic to sensor type. Not only are the metrics suitable for evaluating algorithm and sensor variation, they are also suitable for choosing the most cost effective solution to improve operating conditions in degraded visual environments.
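As a minimal example of the full-reference family of metrics mentioned above (those requiring both pristine and corrupted imagery), peak signal-to-noise ratio (PSNR) can be computed as follows. This is an illustration only, not one of the authors' hazard-detection metrics:

```python
import numpy as np

def psnr(reference, degraded, peak=255.0):
    """Peak signal-to-noise ratio in dB between a pristine frame and a
    corrupted frame of the same shape (e.g. degraded by dust obscuration)."""
    ref = np.asarray(reference, dtype=float)
    deg = np.asarray(degraded, dtype=float)
    mse = np.mean((ref - deg) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)
```

A full VQM would aggregate a per-frame metric like this over a sequence, and task-related metrics would additionally weight regions containing hazards.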

  20. Cooperative Monocular-Based SLAM for Multi-UAV Systems in GPS-Denied Environments †

    PubMed Central

    Guerra, Edmundo

    2018-01-01

    This work presents a cooperative monocular-based SLAM approach for multi-UAV systems that can operate in GPS-denied environments. The main contribution of the work is to show that, using visual information obtained from monocular cameras mounted onboard aerial vehicles flying in formation, the observability properties of the whole system are improved. This improvement is especially notable when compared with other related visual SLAM configurations. In order to improve the observability properties, some measurements of the relative distance between the UAVs are included in the system. These relative distances are also obtained from visual information. The proposed approach is theoretically validated by means of a nonlinear observability analysis. Furthermore, an extensive set of computer simulations is presented in order to validate the proposed approach. The numerical simulation results show that the proposed system is able to provide a good position and orientation estimation of the aerial vehicles flying in formation. PMID:29701722

  21. Cooperative Monocular-Based SLAM for Multi-UAV Systems in GPS-Denied Environments.

    PubMed

    Trujillo, Juan-Carlos; Munguia, Rodrigo; Guerra, Edmundo; Grau, Antoni

    2018-04-26

    This work presents a cooperative monocular-based SLAM approach for multi-UAV systems that can operate in GPS-denied environments. The main contribution of the work is to show that, using visual information obtained from monocular cameras mounted onboard aerial vehicles flying in formation, the observability properties of the whole system are improved. This improvement is especially notable when compared with other related visual SLAM configurations. In order to improve the observability properties, some measurements of the relative distance between the UAVs are included in the system. These relative distances are also obtained from visual information. The proposed approach is theoretically validated by means of a nonlinear observability analysis. Furthermore, an extensive set of computer simulations is presented in order to validate the proposed approach. The numerical simulation results show that the proposed system is able to provide a good position and orientation estimation of the aerial vehicles flying in formation.
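The relative-distance measurements described above enter a SLAM filter through a range measurement model. A sketch of that model and its Jacobian with respect to the two UAV positions (an illustration of the general form, not the authors' formulation):

```python
import numpy as np

def range_measurement(p1, p2):
    """Relative-distance measurement between two UAV positions."""
    return np.linalg.norm(p1 - p2)

def range_jacobian(p1, p2):
    """Jacobian of the range w.r.t. the stacked state [p1, p2].

    For h = ||p1 - p2||, dh/dp1 is the unit vector from p2 to p1
    and dh/dp2 is its negative.
    """
    u = (p1 - p2) / np.linalg.norm(p1 - p2)
    return np.concatenate([u, -u])
```

In an EKF, this Jacobian forms the measurement matrix row that couples the two vehicles' states, which is what adds the observability information.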

  22. Helmet-mounted display systems for flight simulation

    NASA Technical Reports Server (NTRS)

    Haworth, Loren A.; Bucher, Nancy M.

    1989-01-01

    Simulation scientists are continually improving simulation technology with the goal of more closely replicating the physical environment of the real world. The presentation or display of visual information is one area in which recent technical improvements have been made that are fundamental to conducting simulated operations close to the terrain. Detailed and appropriate visual information is especially critical for nap-of-the-earth helicopter flight simulation where the pilot maintains an 'eyes-out' orientation to avoid obstructions and terrain. This paper describes visually coupled wide field of view helmet-mounted display (WFOVHMD) system technology as a viable visual presentation system for helicopter simulation. Tradeoffs associated with this mode of presentation as well as research and training applications are discussed.

  23. Using high-resolution displays for high-resolution cardiac data.

    PubMed

    Goodyer, Christopher; Hodrien, John; Wood, Jason; Kohl, Peter; Brodlie, Ken

    2009-07-13

    The ability to perform fast, accurate, high-resolution visualization is fundamental to improving our understanding of anatomical data. As the volumes of data increase from improvements in scanning technology, the methods applied to visualization must evolve. In this paper, we address the interactive display of data from high-resolution magnetic resonance imaging of a rabbit heart and subsequent histological imaging. We describe a visualization environment involving a tiled liquid crystal display (LCD) wall and associated software, which provides an interactive and intuitive user interface. The oView software is an OpenGL application written for the VR Juggler environment. This environment abstracts displays and devices away from the application itself, aiding portability between different systems, from desktop PCs to multi-tiled display walls. Portability between display walls has been demonstrated through its use on walls at the universities of both Leeds and Oxford. We discuss important factors to be considered for interactive two-dimensional display of large three-dimensional datasets, including the use of intuitive input devices and level-of-detail aspects.

  24. Visual and flight performance recovery after PRK or LASIK in helicopter pilots.

    PubMed

    Van de Pol, Corina; Greig, Joanna L; Estrada, Art; Bissette, Gina M; Bower, Kraig S

    2007-06-01

    Refractive surgery, specifically photorefractive keratectomy (PRK) and laser in situ keratomileusis (LASIK), is becoming more accepted in the military environment. Determination of the impact on visual performance in the more demanding aviation environment was the impetus for this study. A prospective evaluation of 20 Black Hawk pilots pre-surgically and at 1 wk, 1 mo, and 6 mo postsurgery was conducted to assess both PRK and LASIK visual and flight performance outcomes on the return of aviators to duty. Of 20 pilots, 19 returned to flight status at 1 mo after surgery; 1 PRK subject was delayed due to corneal haze and subjective visual symptoms. Improvements were seen under simulator night and night vision goggle flight after LASIK; no significant changes in flight performance were measured in the aircraft. Results indicated a significantly faster recovery of all visual performance outcomes 1 wk after LASIK vs. PRK, with no difference between procedures at 1 and 6 mo. Low contrast acuity and contrast sensitivity only weakly correlated to flight performance in the early post-operative period. Overall flight performance assessed in this study after PRK and LASIK was stable or improved from baseline, indicating a resilience of performance despite measured decrements in visual performance, especially in PRK. More visually demanding flight tasks may be impacted by subtle changes in visual performance. Contrast tests are more sensitive to the effects of refractive surgical intervention and may prove to be a better indicator of visual recovery for return to flight status.

  25. Simplification of Visual Rendering in Simulated Prosthetic Vision Facilitates Navigation.

    PubMed

    Vergnieux, Victor; Macé, Marc J-M; Jouffrais, Christophe

    2017-09-01

    Visual neuroprostheses are still limited and simulated prosthetic vision (SPV) is used to evaluate potential and forthcoming functionality of these implants. SPV has been used to evaluate the minimum requirement on visual neuroprosthetic characteristics to restore various functions such as reading, objects and face recognition, object grasping, etc. Some of these studies focused on obstacle avoidance but only a few investigated orientation or navigation abilities with prosthetic vision. The resolution of current arrays of electrodes is not sufficient to allow navigation tasks without additional processing of the visual input. In this study, we simulated a low resolution array (15 × 18 electrodes, similar to a forthcoming generation of arrays) and evaluated the navigation abilities restored when visual information was processed with various computer vision algorithms to enhance the visual rendering. Three main visual rendering strategies were compared to a control rendering in a wayfinding task within an unknown environment. The control rendering corresponded to a resizing of the original image onto the electrode array size, according to the average brightness of the pixels. In the first rendering strategy, vision distance was limited to 3, 6, or 9 m, respectively. In the second strategy, the rendering was not based on the brightness of the image pixels, but on the distance between the user and the elements in the field of view. In the last rendering strategy, only the edges of the environments were displayed, similar to a wireframe rendering. All the tested renderings, except the 3 m limitation of the viewing distance, improved navigation performance and decreased cognitive load. Interestingly, the distance-based and wireframe renderings also improved the cognitive mapping of the unknown environment. These results show that low resolution implants are usable for wayfinding if specific computer vision algorithms are used to select and display appropriate information regarding the environment. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
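The control rendering described above (resizing the input image onto the electrode array according to average pixel brightness) can be sketched as a block-average downsample to a 15 × 18 grid. A minimal illustration, not the authors' SPV software:

```python
import numpy as np

def to_electrode_grid(image, rows=15, cols=18):
    """Block-average a grayscale frame onto a low-resolution electrode
    array, mimicking a brightness-based 'control' rendering."""
    image = np.asarray(image, dtype=float)
    h, w = image.shape
    # Trim so the frame tiles evenly into rows x cols blocks,
    # then take the mean brightness of each block.
    image = image[: h - h % rows, : w - w % cols]
    blocks = image.reshape(rows, image.shape[0] // rows,
                           cols, image.shape[1] // cols)
    return blocks.mean(axis=(1, 3))
```

The distance-based variant would apply the same downsampling to a depth map (proximity mapped to brightness) instead of the camera image.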

  26. Comparison of Data from the STN and IMPROVE Networks

    EPA Science Inventory

    Two national chemical speciation-monitoring networks operate currently within the United States. The Interagency Monitoring of Protected Visual Environments (IMPROVE) monitoring network operates primarily in rural areas collecting aerosol and optical data to better understand th...

  7. Visual analysis of fluid dynamics at NASA's numerical aerodynamic simulation facility

    NASA Technical Reports Server (NTRS)

    Watson, Velvin R.

    1991-01-01

    A study describing and illustrating the visualization tools used in computational fluid dynamics (CFD), and indicating how these tools are likely to change by projecting the evolution of the human-computer interface, is presented. The following are outlined using a graphically based format: the revolution of human-computer environments for CFD research; comparison of current environments with each other and with the ideal; predictions for future CFD environments; and what can be done to accelerate the improvements. The following comments are given: when acquiring visualization tools, potentially rapid changes must be considered; the changes in human-computer environments over the next ten years can hardly be foreseen; data-flow packages such as AVS, apE, Explorer and Data Explorer are easy to learn and use for small problems and excellent for prototyping, but not as efficient for large problems; the approximation techniques used in visualization software must be appropriate for the data; it has become more cost effective to move jobs that fit onto workstations and run only memory-intensive jobs on the supercomputer; use of three-dimensional skills will be maximized when the three-dimensional environment is built in from the start.

  8. Sensorimotor Adaptability Training Improves Motor and Dual-Task Performance

    NASA Technical Reports Server (NTRS)

    Bloomberg, J.J.; Peters, B.T.; Mulavara, A.P.; Brady, R.; Batson, C.; Cohen, H.S.

    2009-01-01

    The overall objective of our project is to develop a sensorimotor adaptability (SA) training program designed to facilitate recovery of functional capabilities when astronauts transition to different gravitational environments. The goal of our current study was to determine if SA training using variation in visual flow and support surface motion produces improved performance in a novel sensory environment and demonstrate the retention characteristics of SA training.

  9. Using Real-Time Visual Feedback to Improve Posture at Computer Workstations

    ERIC Educational Resources Information Center

    Sigurdsson, Sigurdur O.; Austin, John

    2008-01-01

    The purpose of the current study was to examine the effects of a multicomponent intervention that included discrimination training, real-time visual feedback, and self-monitoring on postural behavior at a computer workstation in a simulated office environment. Using a nonconcurrent multiple baseline design across 8 participants, the study assessed…

  10. Visemic Processing in Audiovisual Discrimination of Natural Speech: A Simultaneous fMRI-EEG Study

    ERIC Educational Resources Information Center

    Dubois, Cyril; Otzenberger, Helene; Gounot, Daniel; Sock, Rudolph; Metz-Lutz, Marie-Noelle

    2012-01-01

    In a noisy environment, visual perception of articulatory movements improves natural speech intelligibility. Parallel to phonemic processing based on auditory signal, visemic processing constitutes a counterpart based on "visemes", the distinctive visual units of speech. Aiming at investigating the neural substrates of visemic processing in a…

  11. Designing a Visual Interface for Online Searching.

    ERIC Educational Resources Information Center

    Lin, Xia

    1999-01-01

    "MedLine Search Assistant" is a new interface for MEDLINE searching that improves both search precision and recall by helping the user convert a free text search to a controlled vocabulary-based search in a visual environment. Features of the interface are described, followed by details of the conceptual design and the physical design of…

  12. Emerging CAE technologies and their role in Future Ambient Intelligence Environments

    NASA Astrophysics Data System (ADS)

    Noor, Ahmed K.

    2011-03-01

    Dramatic improvements are on the horizon in computer-aided engineering (CAE) and various simulation technologies. The improvements are due, in part, to developments in a number of leading-edge technologies and their synergistic combination/convergence. The technologies include ubiquitous, cloud, and petascale computing; ultra-high-bandwidth networks and pervasive wireless communication; knowledge-based engineering; networked immersive virtual environments and virtual worlds; novel human-computer interfaces; and powerful game engines and facilities. This paper describes the frontiers of emerging simulation technologies and their role in future virtual product creation and learning/training environments. The environments will be ambient intelligence environments, incorporating a synergistic combination of novel agent-supported visual simulations (with cognitive learning and understanding abilities); immersive 3D virtual world facilities; development chain management systems and facilities (incorporating a synergistic combination of intelligent engineering and management tools); nontraditional methods; intelligent, multimodal and human-like interfaces; and mobile wireless devices. The virtual product creation environment will significantly enhance productivity and stimulate creativity and innovation in future global virtual collaborative enterprises. The facilities in the learning/training environment will provide timely, engaging, personalized/collaborative and tailored visual learning.

  13. The Hologram in My Hand: How Effective is Interactive Exploration of 3D Visualizations in Immersive Tangible Augmented Reality?

    PubMed

    Bach, Benjamin; Sicat, Ronell; Beyer, Johanna; Cordeil, Maxime; Pfister, Hanspeter

    2018-01-01

    We report on a controlled user study comparing three visualization environments for common 3D exploration. Our environments differ in how they exploit natural human perception and interaction capabilities. We compare an augmented-reality head-mounted display (Microsoft HoloLens), a handheld tablet, and a desktop setup. The novel head-mounted HoloLens display projects stereoscopic images of virtual content into a user's real world and allows for interaction in situ at the spatial position of the 3D hologram. The tablet allows interaction with 3D content through touch, spatial positioning, and tangible markers; however, 3D content is still presented on a 2D surface. Our hypothesis is that visualization environments that better match human perceptual and interaction capabilities to the task at hand improve understanding of 3D visualizations. To better understand the space of display and interaction modalities in visualization environments, we first propose a classification based on three dimensions: perception, interaction, and the spatial and cognitive proximity of the two. Each technique in our study is located at a different position along these three dimensions. We asked 15 participants to perform four tasks, each with different levels of difficulty for both spatial perception and degrees of freedom for interaction. Our results show that each of the tested environments is more effective for certain tasks, but that the desktop environment is generally still the fastest and most precise in almost all cases.

  14. An Environmental Experience. Man: Steward of His Environment.

    ERIC Educational Resources Information Center

    New York State Education Dept., Albany.

    Environmental awareness experiences described in this guide are designed to serve as models or suggestions for teachers conducting activities directed toward environmental improvement. "Man: Steward of His Environment" is the theme of the 14 experiences utilizing behavior profiles, audio-visual exhibits, area studies, service projects, mass…

  15. The Effects of a Novel Head-Mounted Symbology on Spatial Disorientation and Flight Performance in U.S. Air Force Pilots

    DTIC Science & Technology

    2012-10-24

    has been tested in a clinical environment and has proven capable of improving vestibular symptoms (e.g., dizziness, spinning, vertigo) and...vestibular problems (e.g., dizziness, vertigo). They also had no history of visual deficits and all possessed a Snellen visual acuity of 20/20 or

  16. Does the Integration of Haptic and Visual Cues Reduce the Effect of a Biased Visual Reference Frame on the Subjective Head Orientation?

    PubMed Central

    Gueguen, Marc; Vuillerme, Nicolas; Isableu, Brice

    2012-01-01

    Background: The selection of appropriate frames of reference (FOR) is a key factor in the elaboration of spatial perception and the production of robust interaction with our environment. The extent to which we perceive the head axis orientation (subjective head orientation, SHO) with both accuracy and precision likely contributes to the efficiency of these spatial interactions. A first goal of this study was to investigate the relative contribution of both the visual and egocentric FOR (centre of mass) to SHO processing. A second goal was to investigate humans' ability to process SHO in various sensory response modalities (visual, haptic and visuo-haptic), and the way these modify reliance on either the visual or egocentric FOR. A third goal was to question whether subjects combined visual and haptic cues optimally to increase SHO certainty and to decrease the FOR disruption effect. Methodology/Principal Findings: Thirteen subjects were asked to indicate their SHO while the visual and/or egocentric FOR were deviated. Four results emerged from our study. First, visual rod settings to SHO were altered by the tilted visual frame but not by the egocentric FOR alteration, whereas no alteration of haptic settings was observed under either the egocentric FOR alteration or the tilted visual frame. These results are modulated by individual analysis. Second, visual and egocentric FOR dependency appear to be negatively correlated. Third, enriching the response modality appears to improve SHO. Fourth, several combination rules for the visuo-haptic cues, such as the Maximum Likelihood Estimation (MLE), Winner-Take-All (WTA) and Unweighted Mean (UWM) rules, seem to account for SHO improvements. However, the UWM rule seems to best account for the improvement of visuo-haptic estimates, especially in situations with high FOR incongruence. Finally, the data also indicated that FOR reliance resulted from the application of the UWM rule; this was observed particularly in visually dependent subjects. Conclusions: Taken together, these findings emphasize the importance of identifying individual spatial FOR preferences to assess the efficiency of our interaction with the environment whilst performing spatial tasks. PMID:22509295
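    The combination rules named in this abstract can be made concrete. A minimal sketch assuming Gaussian cue noise; the estimates and variances in the usage comment are illustrative, not the study's data:

    ```python
    def mle_combine(est_v, var_v, est_h, var_h):
        # Maximum Likelihood Estimation: reliability-weighted fusion of two cues.
        w_v = (1 / var_v) / (1 / var_v + 1 / var_h)
        fused = w_v * est_v + (1 - w_v) * est_h
        fused_var = 1 / (1 / var_v + 1 / var_h)   # never worse than either cue
        return fused, fused_var

    def uwm_combine(est_v, est_h):
        # Unweighted Mean: averages the cues, ignoring their reliability.
        return (est_v + est_h) / 2

    def wta_combine(est_v, var_v, est_h, var_h):
        # Winner-Take-All: trusts the more reliable cue alone.
        return est_v if var_v <= var_h else est_h
    ```

    For example, with a visual SHO estimate biased to 8 degrees by the tilted frame and a haptic estimate of 2 degrees, equal variances make MLE and UWM agree at 5 degrees; halving the haptic variance pulls the MLE estimate toward the haptic cue, while UWM is unchanged. That insensitivity to reliability is what distinguishes the UWM rule the study favours.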

  17. Concept of Operations for Commercial and Business Aircraft Synthetic Vision Systems. 1.0

    NASA Technical Reports Server (NTRS)

    Williams Daniel M.; Waller, Marvin C.; Koelling, John H.; Burdette, Daniel W.; Capron, William R.; Barry, John S.; Gifford, Richard B.; Doyle, Thomas M.

    2001-01-01

    A concept of operations (CONOPS) for Commercial and Business (CaB) aircraft synthetic vision systems (SVS) is described. The CaB SVS is expected to provide increased safety and operational benefits in normal and low-visibility conditions. Providing operational benefits will promote SVS implementation in the fleet, improve aviation safety, and assist in meeting the national aviation safety goal. SVS will enhance safety and enable consistent gate-to-gate aircraft operations in normal and low-visibility conditions. The goal for developing SVS is to support operational minima as low as Category 3b in a variety of environments. For departure and ground operations, the SVS goal is to enable operations with a runway visual range of 300 feet. The system is an integrated display concept that provides a virtual visual environment. The SVS virtual visual environment is composed of three components: an enhanced intuitive view of the flight environment, hazard and obstacle detection and display, and precision navigation guidance. The virtual visual environment will support enhanced operational procedures during all phases of flight: ground operations, departure, en route, and arrival. The applications selected for emphasis in this document include low-visibility departures and arrivals, including parallel runway operations, and low-visibility airport surface operations. These particular applications were selected because of the significant potential benefits afforded by SVS.

  18. 38 CFR 21.216 - Special equipment.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... aids to closed-circuit TV systems which amplify reading material for veterans with severe visual impairments. (3) Modifications to improve access. This category includes adaptations of environment not...

  19. Visualizing the process of interaction in a 3D environment

    NASA Astrophysics Data System (ADS)

    Vaidya, Vivek; Suryanarayanan, Srikanth; Krishnan, Kajoli; Mullick, Rakesh

    2007-03-01

    As the imaging modalities used in medicine transition to increasingly three-dimensional data the question of how best to interact with and analyze this data becomes ever more pressing. Immersive virtual reality systems seem to hold promise in tackling this, but how individuals learn and interact in these environments is not fully understood. Here we will attempt to show some methods in which user interaction in a virtual reality environment can be visualized and how this can allow us to gain greater insight into the process of interaction/learning in these systems. Also explored is the possibility of using this method to improve understanding and management of ergonomic issues within an interface.

  20. Students using visual thinking to learn science in a Web-based environment

    NASA Astrophysics Data System (ADS)

    Plough, Jean Margaret

    United States students' science test scores are low, especially in problem solving, and traditional science instruction could be improved. Consequently, visual thinking, constructing science structures, and problem solving in a web-based environment may be valuable strategies for improving science learning. This ethnographic study examined the science learning of fifteen fourth grade students in an after school computer club involving diverse students at an inner city school. The investigation was done from the perspective of the students, and it described the processes of visual thinking, web page construction, and problem solving in a web-based environment. The study utilized informal group interviews, field notes, Visual Learning Logs, and student web pages, and incorporated a Standards-Based Rubric which evaluated students' performance on eight science and technology standards. The Visual Learning Logs were drawings done on the computer to represent science concepts related to the Food Chain. Students used the internet to search for information on a plant or animal of their choice. Next, students used this internet information, with the information from their Visual Learning Logs, to make web pages on their plant or animal. Later, students linked their web pages to form Science Structures. Finally, students linked their Science Structures with the structures of other students, and used these linked structures as models for solving problems. Further, during informal group interviews, students answered questions about visual thinking, problem solving, and science concepts. The results of this study showed clearly that (1) making visual representations helped students understand science knowledge, (2) making links between web pages helped students construct Science Knowledge Structures, and (3) students themselves said that visual thinking helped them learn science. 
In addition, this study found that when using Visual Learning Logs, the main overall ideas of the science concepts were usually represented accurately. Further, looking for information on the internet may cause new problems in learning. Likewise, being absent, starting late, and/or dropping out all may negatively influence students' proficiency on the standards. Finally, the way Science Structures are constructed and linked may provide insights into the way individual students think and process information.

  1. Experiments in teleoperator and autonomous control of space robotic vehicles

    NASA Technical Reports Server (NTRS)

    Alexander, Harold L.

    1991-01-01

    A program of research embracing teleoperator and automatic navigational control of freely flying satellite robots is presented. Current research goals include: (1) developing visual operator interfaces for improved vehicle teleoperation; (2) determining the effects of different visual interface system designs on operator performance; and (3) achieving autonomous vision-based vehicle navigation and control. This research program combines virtual-environment teleoperation studies and neutral-buoyancy experiments using a space-robot simulator vehicle currently under development. Visual-interface design options under investigation include monoscopic versus stereoscopic displays and cameras, helmet-mounted versus panel-mounted display monitors, head-tracking versus fixed or manually steerable remote cameras, and the provision of vehicle-fixed visual cues, or markers, in the remote scene for improved sensing of vehicle position, orientation, and motion.

  2. Three-dimensional visualization and display technologies; Proceedings of the Meeting, Los Angeles, CA, Jan. 18-20, 1989

    NASA Technical Reports Server (NTRS)

    Robbins, Woodrow E. (Editor); Fisher, Scott S. (Editor)

    1989-01-01

    Special attention was given to problems of stereoscopic display devices, such as CAD for enhancement of the design process in the visual arts, stereo-TV improvement of remote manipulator performance, a voice-controlled stereographic video camera system, and head-mounted displays and their low-cost design alternatives. Also discussed were a novel approach to chromostereoscopic microscopy, computer-generated barrier-strip autostereography and lenticular stereograms, and parallax-barrier three-dimensional TV. Additional topics include processing and user-interface issues and visualization applications, including automated analysis of fluid flow topology, optical tomographic measurements of mixing fluids, visualization of complex data, visualization environments, and visualization management systems.

  3. Vids: Version 2.0 Alpha Visualization Engine

    DTIC Science & Technology

    2018-04-25

    fidelity than existing efforts. Vids is a project aimed at producing more dynamic and interactive visualization tools using modern computer game ...move through and interact with the data to improve informational understanding. The Vids software leverages off-the-shelf modern game development...analysis and correlations. Recently, an ARL-pioneered project named Virtual Reality Data Analysis Environment (VRDAE) used VR and a modern game engine

  4. Traditional Project Management and the Visual Workplace Environment to Improve Project Success

    ERIC Educational Resources Information Center

    Fichera, Christopher E.

    2016-01-01

    A majority of large IT projects fail to meet scheduled deadlines, are over budget and do not satisfy the end user. Many projects fail in spite of utilizing traditional project management techniques. Research of project management has not identified the use of a visual workspace as a feature affecting or influencing the success of a project during…

  5. The use of mobile devices as assistive technology in resource-limited environments: access for learners with visual impairments in Kenya.

    PubMed

    Foley, Alan R; Masingila, Joanna O

    2015-07-01

    In this paper, the authors explore the use of mobile devices as assistive technology for students with visual impairments in resource-limited environments. This paper provides initial data and analysis from an ongoing project in Kenya using tablet devices to provide access to education and independence for university students with visual impairments. The project is a design-based research project in which we have developed and are refining a theoretically grounded intervention: a model for developing communities of practice to support the use of mobile technology as an assistive technology. We are collecting data to assess the efficacy of and improve the model, as well as to inform the literature that has guided the design of the intervention. In examining the impact of the use of mobile devices for the students with visual impairments, we found that the devices provide the students with (a) access to education, (b) the means to participate in everyday life and (c) the opportunity to create a community of practice. Findings from this project suggest that communities of practice are both a viable and a valuable approach for facilitating the diffusion and support of mobile devices as assistive technology for students with visual impairments in resource-limited environments. Implications for Rehabilitation: The use of mobile devices as assistive technology in resource-limited environments provides students with visual impairments access to education and enhanced means to participate in everyday life. Communities of practice are both a viable and a valuable approach for facilitating the diffusion and support of mobile devices as assistive technology for students with visual impairments in resource-limited environments. Providing access to assistive technology early and consistently throughout students' schooling builds both their skill and confidence, and also demonstrates the capabilities of people with visual impairments to the larger society.

  6. Manipulating the fidelity of lower extremity visual feedback to identify obstacle negotiation strategies in immersive virtual reality.

    PubMed

    Kim, Aram; Zhou, Zixuan; Kretch, Kari S; Finley, James M

    2017-07-01

    The ability to successfully navigate obstacles in our environment requires integration of visual information about the environment with estimates of our body's state. Previous studies have used partial occlusion of the visual field to explore how information about the body and impending obstacles is integrated to mediate a successful clearance strategy. However, because these manipulations often remove information about both the body and the obstacle, it remains to be seen how information about the lower extremities alone is utilized during obstacle crossing. Here, we used an immersive virtual reality (VR) interface to explore how visual feedback of the lower extremities influences obstacle crossing performance. Participants wore a head-mounted display while walking on a treadmill and were instructed to step over obstacles in a virtual corridor in four different feedback trials. The trials involved: (1) no visual feedback of the lower extremities, (2) an endpoint-only model, (3) a link-segment model, and (4) a volumetric multi-segment model. We found that, compared to no model, the volumetric model improved success rate and led participants to place their trailing foot before crossing and their leading foot after crossing more consistently, and to place their leading foot closer to the obstacle after crossing. This knowledge is critical for the design of obstacle negotiation tasks in immersive virtual environments, as it may provide information about the fidelity necessary to reproduce ecologically valid practice environments.

  7. Validity of using photographs to simulate visible qualities of forest recreation environments

    Treesearch

    Robin E. Hoffman; James F. Palmer

    1995-01-01

    Forest recreation managers and researchers interested in conserving and improving the visual quality and recreation opportunities available in forest environments must often resort to simulations as a means of illustrating alternatives for potential users to evaluate. This paper reviews the results of prior research evaluating the validity of using photographic...

  8. G2H--graphics-to-haptic virtual environment development tool for PC's.

    PubMed

    Acosta, E; Temkin, B; Krummel, T M; Heinrichs, W L

    2000-01-01

    Existing surgical virtual environments for training and preparation have improved greatly, but the improvements are mostly in the visual aspect. The incorporation of haptics into virtual-reality-based surgical simulations would greatly enhance the sense of realism. To aid the development of haptic surgical virtual environments, we have created a graphics-to-haptic (G2H) virtual environment development tool. G2H transforms graphical virtual environments (created or imported) into haptic virtual environments without programming. The G2H capability has been demonstrated using the complex 3D pelvic model of Lucy 2.0, the Stanford Visible Female. The pelvis was made haptic using G2H without any further programming effort.

  9. Lexical Link Analysis (LLA) Application: Improving Web Service to Defense Acquisition Visibility Environment (DAVE)

    DTIC Science & Technology

    2015-05-01

    LEXICAL LINK ANALYSIS (LLA) APPLICATION: IMPROVING WEB SERVICE TO DEFENSE ACQUISITION VISIBILITY ENVIRONMENT (DAVE). May 13-14, 2015. Dr. Ying... Methods: Lexical Link Analysis (LLA) Core (LLA reports and visualizations); Collaborative Learning Agents (CLA).

  10. Distributed GPU Computing in GIScience

    NASA Astrophysics Data System (ADS)

    Jiang, Y.; Yang, C.; Huang, Q.; Li, J.; Sun, M.

    2013-12-01

    Geoscientists strive to discover potential principles and patterns hidden inside ever-growing Big Data for scientific discoveries. To better achieve this objective, more capable computing resources are required to process, analyze and visualize Big Data (Ferreira de Oliveira and Levkowitz, 2003; Li et al., 2013). Current CPU-based computing techniques cannot promptly meet the computing challenges caused by the increasing amount of datasets from different domains, such as social media, earth observation and environmental sensing (Li et al., 2013), while CPU-based computing resources structured as clusters or supercomputers are costly. In the past several years, as GPU-based technology has matured in both capability and performance, GPU-based computing has emerged as a new computing paradigm. Compared to the traditional microprocessor, the modern GPU is a compelling alternative with outstanding parallel processing capability, cost-effectiveness and efficiency (Owens et al., 2008), although it was initially designed for graphical rendering in the visualization pipeline. This presentation reports a distributed GPU computing framework for integrating GPU-based computing within a distributed environment. Within this framework, 1) on each single computer, both GPU-based and CPU-based computing resources can be fully utilized to improve the performance of visualizing and processing Big Data; 2) within a network environment, a variety of computers can be used to build a virtual supercomputer supporting CPU-based and GPU-based computing in a distributed computing environment; 3) GPUs, as graphics-targeted devices, are used to greatly improve rendering efficiency in distributed geo-visualization, especially for 3D/4D visualization. Key words: Geovisualization, GIScience, Spatiotemporal Studies. References: 1. Ferreira de Oliveira, M. C., & Levkowitz, H. (2003). From visual data exploration to visual data mining: A survey. IEEE Transactions on Visualization and Computer Graphics, 9(3), 378-394. 2. Li, J., Jiang, Y., Yang, C., Huang, Q., & Rice, M. (2013). Visualizing 3D/4D environmental data using many-core graphics processing units (GPUs) and multi-core central processing units (CPUs). Computers & Geosciences, 59(9), 78-89. 3. Owens, J. D., Houston, M., Luebke, D., Green, S., Stone, J. E., & Phillips, J. C. (2008). GPU computing. Proceedings of the IEEE, 96(5), 879-899.
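    The partition-and-gather pattern behind point 2) of this framework can be sketched with a CPU stand-in. A minimal sketch using Python's concurrent.futures; shade_tile and render_distributed are illustrative names, and a thread pool merely stands in for the GPU and networked-machine back ends the abstract describes:

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def shade_tile(args):
        # Hypothetical per-tile kernel: stands in for the rendering or
        # processing work a GPU would run on each data partition.
        tile_id, values = args
        return tile_id, [v * 0.5 + 0.25 for v in values]

    def render_distributed(values, n_workers=4):
        # Partition the dataset into tiles, farm them out to workers, then
        # gather the results back in order -- the same pattern the framework
        # applies across GPUs and networked computers.
        step = (len(values) + n_workers - 1) // n_workers
        tiles = [(i, values[i * step:(i + 1) * step]) for i in range(n_workers)]
        with ThreadPoolExecutor(max_workers=n_workers) as pool:
            done = dict(pool.map(shade_tile, tiles))
        return [v for i in range(n_workers) for v in done[i]]

    pixels = [float(i) for i in range(10)]
    result = render_distributed(pixels)
    ```

    In the real framework each tile would be dispatched to a GPU kernel or a remote node rather than a thread, but the split/dispatch/reassemble structure is the same.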

  11. Inheriting the Learner's View: A Google Glass-Based Wearable Computing Platform for Improving Surgical Trainee Performance.

    PubMed

    Brewer, Zachary E; Fann, Hutchinson C; Ogden, W David; Burdon, Thomas A; Sheikh, Ahmad Y

    2016-01-01

    It is speculated that, in operative environments, real-time visualization of the trainee's viewpoint by the instructor may improve performance and teaching efficacy. We hypothesized that introduction of a wearable surgical visualization system allowing the instructor to visualize otherwise "blind" areas in the operative field could improve trainee performance in a simulated operative setting. A total of 11 surgery residents (4 in general surgery training and 7 in an integrated 6-year cardiothoracic surgery program) participated in the study. Google (Mountain View, CA) Glass hardware running proprietary software from CrowdOptic (San Francisco, CA) was utilized for creation of the wearable surgical visualization system. Both the learner and trainer wore the system, and video was streamed from the learner's system in real time to the trainer, who directed the learner to place needles in a simulated operative field. Subjects placed a total of 5 needles in each of 4 quadrants. A composite error score was calculated based on the accuracy of needle placement in relation to the intended needle trajectories as described by the trainer. Time to task completion (TTC) was also measured and participants completed an exit questionnaire. All residents completed the protocol tasks and the survey. Introduction of the wearable surgical visualization system did not affect mean time to task completion (278 ± 50 vs. 282 ± 69 seconds, p = NS). However, mean composite error score fell significantly once the wearable system was deployed (18 ± 5 vs. 15 ± 4, p < 0.05), demonstrating improved accuracy of needle placement. Most of the participants deemed the device unobtrusive, easy to operate, and useful for communication and instruction. This study suggests that wearable surgical visualization systems allowing for adoption of the learner's perspective may be a useful educational adjunct in the training of surgeons. 
Further evaluations of the efficacy of wearable technology in the operating room environment are warranted. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  12. The role of vision in auditory distance perception.

    PubMed

    Calcagno, Esteban R; Abregú, Ezequiel L; Eguía, Manuel C; Vergara, Ramiro

    2012-01-01

    In humans, multisensory interaction is an important strategy for improving the detection of stimuli of different natures and reducing response variability. It is known that the presence of visual information affects auditory perception in the horizontal plane (azimuth), but few studies have examined the influence of vision on auditory distance perception. In general, the data obtained from these studies are contradictory and do not completely define the way in which visual cues affect the apparent distance of a sound source. Here, psychophysical experiments on auditory distance perception in humans were performed, including and excluding visual cues. The results show that the apparent distance of the source is affected by the presence of visual information and that subjects can store in memory a representation of the environment that later improves their perception of distance.

  13. "Building" 3D visualization skills in mineralogy

    NASA Astrophysics Data System (ADS)

    Gaudio, S. J.; Ajoku, C. N.; McCarthy, B. S.; Lambart, S.

    2016-12-01

    Studying mineralogy is fundamental for understanding the composition and physical behavior of natural materials in terrestrial and extraterrestrial environments. However, some students struggle and ultimately get discouraged with mineralogy course material because they lack the well-developed spatial visualization skills needed to deal with three-dimensional (3D) objects, such as crystal forms or atomic-scale structures, typically represented in two-dimensional (2D) space. Fortunately, spatial visualization can improve with practice. Our presentation demonstrates a set of experiential learning activities designed to support the development and improvement of spatial visualization skills in mineralogy using commercially available magnetic building tiles, rods, and spheres. These instructional support activities guide students in the creation of 3D models that replicate macroscopic crystal forms and atomic-scale structures in a low-pressure learning environment and at low cost. Students physically manipulate square and triangular magnetic tiles to build 3D open and closed crystal forms (Platonic solids, prisms, pyramids and pinacoids). Prismatic shapes with different closing forms are used to demonstrate the relationship between crystal faces and Miller indices. Silica tetrahedra and octahedra are constructed out of magnetic rods (bonds) and spheres (oxygen atoms) to illustrate polymerization, connectivity, and the consequences for mineral formulae. In another activity, students practice the identification of symmetry elements and plane lattice types by laying magnetic rods and spheres over wallpaper patterns. The spatial visualization skills developed and improved through our experiential learning activities are critical to the study of mineralogy and many other geology sub-disciplines. We will also present pre- and post-activity assessments that are aligned with explicit learning outcomes.

  14. Designing sound and visual components for enhancement of urban soundscapes.

    PubMed

    Hong, Joo Young; Jeon, Jin Yong

    2013-09-01

    The aim of this study is to investigate the effect of audio-visual components on environmental quality in order to improve the soundscape. Natural sounds combined with road traffic noise and visual components in urban streets were evaluated through laboratory experiments. Waterfall and stream water sounds, as well as bird sounds, were selected to enhance the soundscape. Sixteen photomontages of a streetscape were constructed in combination with two types of water features and three types of vegetation, which were chosen as positive visual components. The experiments consisted of audio-only, visual-only, and audio-visual conditions. The preferences and environmental qualities of the stimuli were evaluated on a numerical scale and with 12 pairs of adjectives, respectively. The results showed that bird sounds were the most preferred among the natural sounds, while the sound of falling water was found to degrade the soundscape quality when the road traffic noise level was high. The visual effects of vegetation on aesthetic preference were significant, but those of water features were relatively small. The perceptual dimensions of the environment were also found to differ depending on the noise level. In particular, the acoustic comfort factor related to soundscape quality considerably influenced preference for the overall environment at higher levels of road traffic noise.

  15. Developing Social Skills in Children Who Have Disabilities through the Use of Social Stories and Visual Supports

    ERIC Educational Resources Information Center

    Fisher, Kristi; Haufe, Theresa

    2009-01-01

    The purpose of this action research project was to improve the social skills of eight preschool students and four first grade and second grade students through the use of Social Stories and visual supports to create a more positive learning environment. The teacher researchers wanted to increase the social skills of students who had been diagnosed…

  16. Use of cues in virtual reality depends on visual feedback.

    PubMed

    Fulvio, Jacqueline M; Rokers, Bas

    2017-11-22

    3D motion perception is of central importance to daily life. However, when tested in laboratory settings, sensitivity to 3D motion signals is found to be poor, leading to the view that heuristics and prior assumptions are critical for 3D motion perception. Here we explore an alternative: sensitivity to 3D motion signals is context-dependent and must be learned based on explicit visual feedback in novel environments. The need for action-contingent visual feedback is well-established in the developmental literature. For example, young kittens that are passively moved through an environment, but unable to move through it themselves, fail to develop accurate depth perception. We find that these principles also obtain in adult human perception. Observers that do not experience visual consequences of their actions fail to develop accurate 3D motion perception in a virtual reality environment, even after prolonged exposure. By contrast, observers that experience the consequences of their actions improve performance based on available sensory cues to 3D motion. Specifically, we find that observers learn to exploit the small motion parallax cues provided by head jitter. Our findings advance understanding of human 3D motion processing and form a foundation for future study of perception in virtual and natural 3D environments.

  17. Seafloor Environments North St. Croix Margin and Virgin Islands Trough. Part 1. Introduction. Part 2. Geology and Geophysics. Part 3. Geotechnical Investigations. Part 4. Engineering Significance,

    DTIC Science & Technology

    1982-12-01

    Visual observations indicate that rock outcrops are generally infrequent. Pelagic deposition, downslope creep, slumping, and turbidity currents are...investigation represents a major improvement in the current knowledge of the seafloor environment in the VIT region. In particular, it is the first...the VIT to supplement long-range planning of Navy activities in this area. This investigation represents a major improvement in the current knowledge

  18. Smart-system of distance learning of visually impaired people based on approaches of artificial intelligence

    NASA Astrophysics Data System (ADS)

    Samigulina, Galina A.; Shayakhmetova, Assem S.

    2016-11-01

    The research objective is the creation of innovative intelligent technology and an information Smart-system of distance learning for visually impaired people. The organization of an accessible environment for receiving a quality education and the social adaptation of visually impaired people in society are important and topical issues in modern education. The proposed Smart-system of distance learning for visually impaired people can significantly improve the efficiency and quality of education for this category of people. The scientific novelty of the proposed Smart-system lies in its use of intelligent and statistical methods for processing multi-dimensional data, taking into account the psycho-physiological characteristics of perception and comprehension of learning information by visually impaired people.

  19. Selecting Advanced Software Technology in Two Small Manufacturing Enterprises

    DTIC Science & Technology

    2004-05-01

    improving workflow to further reduce delivery times, enhance customer service, and obtain a competitive advantage. The company wanted help...environment, stakeholders' needs, ecommerce, shop floor visualization, and collaboration capability. These statements are not significantly different...for the purpose of describing a software environment. This identification does not imply any recommendation or endorsement by NIST, the SEI, CMU, or

  20. Auditory biofeedback substitutes for loss of sensory information in maintaining stance.

    PubMed

    Dozza, Marco; Horak, Fay B; Chiari, Lorenzo

    2007-03-01

    The importance of sensory feedback for postural control in stance is evident from the balance improvements that occur when sensory information from the vestibular, somatosensory, and visual systems is available. However, the extent to which audio-biofeedback (ABF) information can also improve balance has not been determined. It is also unknown why additional artificial sensory feedback is more effective for some subjects than others, and in some environmental contexts than others. The aim of this study was to determine the relative effectiveness of an ABF system in reducing postural sway in stance in healthy control subjects and in subjects with bilateral vestibular loss, under conditions of reduced vestibular, visual, and somatosensory inputs. This ABF system used a threshold region and non-linear scaling parameters customized for each individual to provide subjects with pitch and volume coding of their body sway. ABF had the largest effect on reducing the body sway of the subjects with bilateral vestibular loss when the environment provided limited visual and somatosensory information; it had the smallest effect on reducing their sway when the environment provided full somatosensory information. The extent to which subjects substituted ABF information for their loss of sensory information was related to the extent to which each subject was visually dependent or somatosensory-dependent for postural control. Comparison of postural sway under a variety of sensory conditions suggests that patients with profound bilateral loss of vestibular function show larger-than-normal information redundancy among the remaining senses and ABF of trunk sway. The results support the hypothesis that the nervous system uses augmented sensory information differently depending both on the environment and on individual proclivities to rely on vestibular, somatosensory, or visual information to control sway.
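    The pitch-and-volume coding described above (a subject-specific threshold region plus non-linear scaling of body sway) can be sketched as follows. All numeric parameters here are illustrative assumptions, not values from the study:

    ```python
    import math

    def abf_mapping(sway_deg, threshold_deg=1.0, max_sway_deg=5.0,
                    f0_hz=440.0, max_shift_hz=220.0, gamma=2.0):
        """Map trunk sway (degrees) to an audio pitch and volume.

        Inside the threshold region no feedback change is produced;
        beyond it, pitch and volume grow non-linearly with sway, and
        the sign of the sway shifts pitch up or down from a base tone.
        """
        excess = abs(sway_deg) - threshold_deg
        if excess <= 0:
            return f0_hz, 0.0  # within threshold region: neutral, silent
        # Normalize to [0, 1] and apply power-law (non-linear) scaling.
        x = min(excess / (max_sway_deg - threshold_deg), 1.0) ** gamma
        pitch = f0_hz + math.copysign(max_shift_hz * x, sway_deg)
        volume = x  # relative loudness, 0..1
        return pitch, volume
    ```

    With these illustrative parameters, sway within one degree of upright produces no feedback, while larger excursions raise both the pitch shift and the volume at an accelerating rate.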

  1. Combining patient journey modelling and visual multi-agent computer simulation: a framework to improving knowledge translation in a healthcare environment.

    PubMed

    Curry, Joanne; Fitzgerald, Anneke; Prodan, Ante; Dadich, Ann; Sloan, Terry

    2014-01-01

    This article focuses on a framework that will investigate the integration of two disparate methodologies: patient journey modelling and visual multi-agent simulation, and its impact on the speed and quality of knowledge translation to healthcare stakeholders. Literature describes patient journey modelling and visual simulation as discrete activities. This paper suggests that their combination and their impact on translating knowledge to practitioners are greater than the sum of the two technologies. The test-bed is ambulatory care and the goal is to determine if this approach can improve health services delivery, workflow, and patient outcomes and satisfaction. The multidisciplinary research team is comprised of expertise in patient journey modelling, simulation, and knowledge translation.

  2. A Framework for the Design of Effective Graphics for Scientific Visualization

    NASA Technical Reports Server (NTRS)

    Miceli, Kristina D.

    1992-01-01

    This proposal presents a visualization framework, based on a data model, that supports the production of effective graphics for scientific visualization. Visual representations are effective only if they augment comprehension of the increasing amounts of data being generated by modern computer simulations. These representations are created by taking into account the goals and capabilities of the scientist, the type of data to be displayed, and software and hardware considerations. This framework is embodied in an assistant-based visualization system to guide the scientist in the visualization process. This will improve the quality of the visualizations and decrease the time the scientist is required to spend in generating the visualizations. I intend to prove that such a framework will create a more productive environment for the analysis and interpretation of large, complex data sets.

  3. Research on conflict detection algorithm in 3D visualization environment of urban rail transit line

    NASA Astrophysics Data System (ADS)

    Wang, Li; Xiong, Jing; You, Kuokuo

    2017-03-01

    In this paper, a method of collision detection is introduced, and three-dimensional models of underground buildings and urban rail lines are used to rapidly extract the buildings that conflict with the track area in a 3D visualization environment. According to the characteristics of the buildings, CSG and B-rep methods are used to model them. On the basis of these modeling characteristics, this paper proposes to use the hierarchical AABB bounding volume method as a first, coarse conflict check to improve detection efficiency, then a fast triangle intersection algorithm to confirm the conflict, and finally to determine whether a building collides with the track area. With this algorithm, buildings colliding with the influence area of the track line can be quickly extracted, helping designers choose the best route and calculate land acquisition costs in the three-dimensional visualization environment.
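    The broad-phase step of the two-stage test described above, an axis-aligned bounding box (AABB) overlap check that cheaply filters candidates before the exact triangle intersection test, can be sketched as follows (a minimal illustration under assumed data layouts, not the paper's implementation):

    ```python
    def aabb_overlap(a_min, a_max, b_min, b_max):
        """Broad-phase test: two axis-aligned bounding boxes overlap
        only if their extents overlap on every one of the three axes."""
        return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i]
                   for i in range(3))

    def candidate_conflicts(track_aabb, building_aabbs):
        """Return indices of buildings whose AABBs intersect the track
        corridor's AABB; only these candidates proceed to the exact
        (and far more expensive) triangle intersection test."""
        t_min, t_max = track_aabb
        return [i for i, (b_min, b_max) in enumerate(building_aabbs)
                if aabb_overlap(t_min, t_max, b_min, b_max)]
    ```

    Because the per-axis interval test is a handful of comparisons, most distant buildings are rejected before any triangle-level geometry is touched, which is the efficiency gain the paper attributes to the AABB stage.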

  4. Visual Environments for CFD Research

    NASA Technical Reports Server (NTRS)

    Watson, Val; George, Michael W. (Technical Monitor)

    1994-01-01

    This viewgraph presentation gives an overview of the visual environments for computational fluid dynamics (CFD) research. It includes details on critical needs from the future computer environment, features needed to attain this environment, prospects for changes in and the impact of the visualization revolution on the human-computer interface, human processing capabilities, limits of personal environment and the extension of that environment with computers. Information is given on the need for more 'visual' thinking (including instances of visual thinking), an evaluation of the alternate approaches for and levels of interactive computer graphics, a visual analysis of computational fluid dynamics, and an analysis of visualization software.

  5. Lidar and Electro-Optics for Atmospheric Hazard Sensing and Mitigation

    NASA Technical Reports Server (NTRS)

    Clark, Ivan O.

    2012-01-01

    This paper provides an overview of the research and development efforts of the Lidar and Electro-Optics element of NASA's Aviation Safety Program. This element is seeking to improve the understanding of the atmospheric environments encountered by aviation and to provide enhanced situation awareness for atmospheric hazards. The improved understanding of atmospheric conditions is specifically to develop sensor signatures for atmospheric hazards. The current emphasis is on kinetic air hazards such as turbulence, aircraft wake vortices, mountain rotors, and windshear. Additional efforts are underway to identify and quantify the hazards arising from multi-phase atmospheric conditions including liquid and solid hydrometeors and volcanic ash. When the multi-phase conditions act as obscurants that result in reduced visual awareness, the element seeks to mitigate the hazards associated with these diminished visual environments. The overall purpose of these efforts is to enable safety improvements for air transport class and business jet class aircraft as the transition to the Next Generation Air Transportation System occurs.

  6. 3-D localization of virtual sound sources: effects of visual environment, pointing method, and training.

    PubMed

    Majdak, Piotr; Goupell, Matthew J; Laback, Bernhard

    2010-02-01

    The ability to localize sound sources in three-dimensional space was tested in humans. In Experiment 1, naive subjects listened to noises filtered with subject-specific head-related transfer functions. The tested conditions included the pointing method (head or manual pointing) and the visual environment (VE; darkness or virtual VE). The localization performance was not significantly different between the pointing methods. The virtual VE significantly improved the horizontal precision and reduced the number of front-back confusions. These results show the benefit of using a virtual VE in sound localization tasks. In Experiment 2, subjects were provided with sound localization training. Over the course of training, the performance improved for all subjects, with the largest improvements occurring during the first 400 trials. The improvements beyond the first 400 trials were smaller. After the training, there was still no significant effect of pointing method, showing that the choice of either head- or manual-pointing method plays a minor role in sound localization performance. The results of Experiment 2 reinforce the importance of perceptual training for at least 400 trials in sound localization studies.

  7. Stereo depth and the control of locomotive heading

    NASA Astrophysics Data System (ADS)

    Rushton, Simon K.; Harris, Julie M.

    1998-04-01

    Does the addition of stereoscopic depth aid steering--the perceptual control of locomotor heading--around an environment? This is a critical question when designing a tele-operation or Virtual Environment system, with implications for computational resources and visual comfort. We examined the role of stereoscopic depth in the perceptual control of heading by employing an active steering task. Three conditions were tested: stereoscopic depth; incorrect stereoscopic depth and no stereoscopic depth. Results suggest that stereoscopic depth does not improve performance in a visual control task. A further set of experiments examined the importance of a ground plane. As a ground plane is a common feature of all natural environments and provides a pictorial depth cue, it has been suggested that the visual system may be especially attuned to exploit its presence. Thus it would be predicted that a ground plane would aid judgments of locomotor heading. Results suggest that the presence of rich motion information in the lower visual field produces significant performance advantages and that provision of such information may prove a better target for system resources than stereoscopic depth. These findings have practical consequences for a system designer and also challenge previous theoretical and psychophysical perceptual research.

  8. Effects of environmental design on patient outcome: a systematic review.

    PubMed

    Laursen, Jannie; Danielsen, Anne; Rosenberg, Jacob

    2014-01-01

    The aim of this systematic review was to assess how inpatients were affected by built environment design during their hospitalization. Over the last decade, the healthcare system has become increasingly aware of how a focus on the healthcare environment might affect patient satisfaction. Environmental design has become a field with great potential because of its possible impact on cost control while improving quality of care. A systematic literature search was conducted to identify current and past studies about evidence-based healthcare design. The following databases were searched: Medline/PubMed, Cinahl, and Embase. Inclusion criteria were randomized clinical trials (RCTs) investigating the effect of built environment design interventions such as music, natural murals, and plants on patients' health outcomes. Built environment design aspects such as the audio environment and visual environment had a positive influence on patients' health outcomes. Specifically, the studies indicated a decrease in patients' anxiety, pain, and stress levels when exposed to certain built environment design interventions. The built environment, especially specific audio and visual aspects, seems to play an important role in patient outcomes, making hospitals a better healing environment for patients. Keywords: built environment, evidence-based design, healing environments, hospitals, literature review.

  9. Distributed and collaborative synthetic environments

    NASA Technical Reports Server (NTRS)

    Bajaj, Chandrajit L.; Bernardini, Fausto

    1995-01-01

    Fast graphics workstations and increased computing power, together with improved interface technologies, have created new and diverse possibilities for developing and interacting with synthetic environments. A synthetic environment system is generally characterized by input/output devices that constitute the interface between the human senses and the synthetic environment generated by the computer; and a computation system running a real-time simulation of the environment. A basic need of a synthetic environment system is that of giving the user a plausible reproduction of the visual aspect of the objects with which he is interacting. The goal of our Shastra research project is to provide a substrate of geometric data structures and algorithms which allow the distributed construction and modification of the environment, efficient querying of objects attributes, collaborative interaction with the environment, fast computation of collision detection and visibility information for efficient dynamic simulation and real-time scene display. In particular, we address the following issues: (1) A geometric framework for modeling and visualizing synthetic environments and interacting with them. We highlight the functions required for the geometric engine of a synthetic environment system. (2) A distribution and collaboration substrate that supports construction, modification, and interaction with synthetic environments on networked desktop machines.

  10. CasCADe: A Novel 4D Visualization System for Virtual Construction Planning.

    PubMed

    Ivson, Paulo; Nascimento, Daniel; Celes, Waldemar; Barbosa, Simone Dj

    2018-01-01

    Building Information Modeling (BIM) provides an integrated 3D environment to manage large-scale engineering projects. The Architecture, Engineering and Construction (AEC) industry explores 4D visualizations over these datasets for virtual construction planning. However, existing solutions lack adequate visual mechanisms to inspect the underlying schedule and make inconsistencies readily apparent. The goal of this paper is to apply best practices of information visualization to improve 4D analysis of construction plans. We first present a review of previous work that identifies common use cases and limitations. We then consulted with AEC professionals to specify the main design requirements for such applications. These guided the development of CasCADe, a novel 4D visualization system where task sequencing and spatio-temporal simultaneity are immediately apparent. This unique framework enables the combination of diverse analytical features to create an information-rich analysis environment. We also describe how engineering collaborators used CasCADe to review the real-world construction plans of an Oil & Gas process plant. The system made evident schedule uncertainties, identified work-space conflicts and helped analyze other constructability issues. The results and contributions of this paper suggest new avenues for future research in information visualization for the AEC industry.

  11. Effectiveness of Personalised Learning Paths on Students Learning Experiences in an e-Learning Environment

    ERIC Educational Resources Information Center

    Santally, Mohammad Issack; Senteni, Alain

    2013-01-01

    Personalisation of e-learning environments is an interesting research area in which the learning experience of learners is generally believed to be improved when his or her personal learning preferences are taken into account. One such learning preference is the V-A-K instrument that classifies learners as visual, auditory or kinaesthetic. In this…

  12. CEOS visualization environment (COVE) tool for intercalibration of satellite instruments

    USGS Publications Warehouse

    Kessler, P.D.; Killough, B.D.; Gowda, S.; Williams, B.R.; Chander, G.; Qu, Min

    2013-01-01

    Increasingly, data from multiple instruments are used to gain a more complete understanding of land surface processes at a variety of scales. Intercalibration, comparison, and coordination of satellite instrument coverage areas is a critical effort of international and domestic space agencies and organizations. The Committee on Earth Observation Satellites Visualization Environment (COVE) is a suite of browser-based applications that leverage Google Earth to display past, present, and future satellite instrument coverage areas and coincident calibration opportunities. This forecasting and ground coverage analysis and visualization capability greatly benefits the remote sensing calibration community in preparation for multisatellite ground calibration campaigns or individual satellite calibration studies. COVE has been developed for use by a broad international community to improve the efficiency and efficacy of such calibration planning efforts, whether those efforts require past, present, or future predictions. This paper provides a brief overview of the COVE tool, its validation, accuracies, and limitations with emphasis on the applicability of this visualization tool for supporting ground field campaigns and intercalibration of satellite instruments.

  13. CEOS Visualization Environment (COVE) Tool for Intercalibration of Satellite Instruments

    NASA Technical Reports Server (NTRS)

    Kessler, Paul D.; Killough, Brian D.; Gowda, Sanjay; Williams, Brian R.; Chander, Gyanesh; Qu, Min

    2013-01-01

    Increasingly, data from multiple instruments are used to gain a more complete understanding of land surface processes at a variety of scales. Intercalibration, comparison, and coordination of satellite instrument coverage areas is a critical effort of space agencies and of international and domestic organizations. The Committee on Earth Observation Satellites Visualization Environment (COVE) is a suite of browser-based applications that leverage Google Earth to display past, present, and future satellite instrument coverage areas and coincident calibration opportunities. This forecasting and ground coverage analysis and visualization capability greatly benefits the remote sensing calibration community in preparation for multisatellite ground calibration campaigns or individual satellite calibration studies. COVE has been developed for use by a broad international community to improve the efficiency and efficacy of such calibration efforts. This paper provides a brief overview of the COVE tool, its validation, accuracies and limitations with emphasis on the applicability of this visualization tool for supporting ground field campaigns and intercalibration of satellite instruments.

  14. The perception of naturalness correlates with low-level visual features of environmental scenes.

    PubMed

    Berman, Marc G; Hout, Michael C; Kardan, Omid; Hunter, MaryCarol R; Yourganov, Grigori; Henderson, John M; Hanayik, Taylor; Karimi, Hossein; Jonides, John

    2014-01-01

    Previous research has shown that interacting with natural environments vs. more urban or built environments can have salubrious psychological effects, such as improvements in attention and memory. Even viewing pictures of nature vs. pictures of built environments can produce similar effects. A major question is: what is it about natural environments that produces these benefits? Problematically, there are many differing qualities between natural and urban environments, making it difficult to narrow down the dimensions of nature that may lead to these benefits. In this study, we set out to uncover visual features that related to individuals' perceptions of naturalness in images. We quantified naturalness in two ways: first, implicitly using a multidimensional scaling analysis and second, explicitly with direct naturalness ratings. The features that seemed most related to perceptions of naturalness were the density of contrast changes in the scene, the density of straight lines, the average color saturation, and the average hue diversity. We then trained a machine-learning algorithm to predict whether a scene was perceived as natural or not based on these low-level visual features, and could do so with 81% accuracy. We were thus able to reliably predict subjective perceptions of naturalness with objective low-level visual features. Our results can be used in future studies to determine if these features, which are related to naturalness, may also lead to the benefits attained from interacting with nature.
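    The classification step, predicting perceived naturalness from low-level scene features, can be illustrated with a from-scratch logistic-regression sketch on toy feature vectors. The study's actual learner and data are not reproduced here; the feature values in the usage note are invented for illustration:

    ```python
    import math

    def train_logistic(X, y, lr=0.1, epochs=500):
        """Fit a logistic-regression classifier by stochastic gradient
        descent. X holds per-scene feature vectors (e.g. straight-line
        density, hue diversity); y holds labels (1 = natural, 0 = built)."""
        w, b = [0.0] * len(X[0]), 0.0
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                z = sum(wj * xj for wj, xj in zip(w, xi)) + b
                p = 1.0 / (1.0 + math.exp(-z))  # predicted P(natural)
                err = p - yi                    # gradient of the log-loss
                w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
                b -= lr * err
        return w, b

    def predict(w, b, x):
        """Label a new scene from its feature vector."""
        return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else 0
    ```

    Trained on invented vectors where natural scenes have low straight-line density and high hue diversity (e.g. `[0.1, 0.9]`) and built scenes the reverse, the learned weights separate the two classes, mirroring in miniature the feature-based prediction the study reports.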

  15. A visualization environment for supercomputing-based applications in computational mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavlakos, C.J.; Schoof, L.A.; Mareda, J.F.

    1993-06-01

    In this paper, we characterize a visualization environment that has been designed and prototyped for a large community of scientists and engineers, with an emphasis on supercomputing-based computational mechanics. The proposed environment makes use of a visualization server concept to provide effective, interactive visualization to the user's desktop. Benefits of using the visualization server approach are discussed. Some thoughts regarding desirable features for visualization server hardware architectures are also addressed. A brief discussion of the software environment is included. The paper concludes by summarizing certain observations which we have made regarding the implementation of such visualization environments.

  16. Evaluation of stiffness feedback for hard nodule identification on a phantom silicone model

    PubMed Central

    Konstantinova, Jelizaveta; Xu, Guanghua; He, Bo; Aminzadeh, Vahid; Xie, Jun; Wurdemann, Helge; Althoefer, Kaspar

    2017-01-01

    Haptic information in robotic surgery can significantly improve clinical outcomes and help detect hard soft-tissue inclusions that indicate potential abnormalities. Visual representation of tissue stiffness information is a cost-effective technique. Meanwhile, direct force feedback, although considerably more expensive than visual representation, is an intuitive method of conveying information regarding tissue stiffness to surgeons. In this study, real-time visual stiffness feedback by sliding indentation palpation is proposed, validated, and compared with force feedback involving human subjects. In an experimental tele-manipulation environment, a dynamically updated color map depicting the stiffness of probed soft tissue is presented via a graphical interface. The force feedback is provided, aided by a master haptic device. The haptic device uses data acquired from an F/T sensor attached to the end-effector of a tele-manipulated robot. Hard nodule detection performance is evaluated for 2 modes (force feedback and visual stiffness feedback) of stiffness feedback on an artificial organ containing buried stiff nodules. From this artificial organ, a virtual-environment tissue model is generated based on sliding indentation measurements. Employing this virtual-environment tissue model, we compare the performance of human participants in distinguishing differently sized hard nodules by force feedback and visual stiffness feedback. Results indicate that the proposed distributed visual representation of tissue stiffness can be used effectively for hard nodule identification. The representation can also be used as a sufficient substitute for force feedback in tissue palpation. PMID:28248996

  17. Evaluation of stiffness feedback for hard nodule identification on a phantom silicone model.

    PubMed

    Li, Min; Konstantinova, Jelizaveta; Xu, Guanghua; He, Bo; Aminzadeh, Vahid; Xie, Jun; Wurdemann, Helge; Althoefer, Kaspar

    2017-01-01

    Haptic information in robotic surgery can significantly improve clinical outcomes and help detect hard soft-tissue inclusions that indicate potential abnormalities. Visual representation of tissue stiffness information is a cost-effective technique. Meanwhile, direct force feedback, although considerably more expensive than visual representation, is an intuitive method of conveying information regarding tissue stiffness to surgeons. In this study, real-time visual stiffness feedback by sliding indentation palpation is proposed, validated, and compared with force feedback involving human subjects. In an experimental tele-manipulation environment, a dynamically updated color map depicting the stiffness of probed soft tissue is presented via a graphical interface. The force feedback is provided, aided by a master haptic device. The haptic device uses data acquired from an F/T sensor attached to the end-effector of a tele-manipulated robot. Hard nodule detection performance is evaluated for 2 modes (force feedback and visual stiffness feedback) of stiffness feedback on an artificial organ containing buried stiff nodules. From this artificial organ, a virtual-environment tissue model is generated based on sliding indentation measurements. Employing this virtual-environment tissue model, we compare the performance of human participants in distinguishing differently sized hard nodules by force feedback and visual stiffness feedback. Results indicate that the proposed distributed visual representation of tissue stiffness can be used effectively for hard nodule identification. The representation can also be used as a sufficient substitute for force feedback in tissue palpation.
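    The dynamically updated stiffness color map described in the two records above can be illustrated with a simple mapping from a local stiffness estimate to a display color. The soft-to-stiff blue-to-red ramp and the value range are assumptions for illustration; the papers do not specify their color scale here:

    ```python
    def stiffness_to_rgb(k, k_min=0.0, k_max=1.0):
        """Map a probed stiffness value onto a soft-to-stiff color ramp:
        low stiffness renders blue, high stiffness renders red."""
        t = (k - k_min) / (k_max - k_min)
        t = max(0.0, min(1.0, t))  # clamp out-of-range readings
        return (int(255 * t), 0, int(255 * (1 - t)))

    def update_map(grid, row, col, k):
        """Write one sliding-indentation reading into the 2D color map,
        so the display refreshes as the probe traverses the tissue."""
        grid[row][col] = stiffness_to_rgb(k)
        return grid
    ```

    A buried stiff nodule then shows up as a cluster of red cells against the blue background of the surrounding softer tissue, which is the visual cue participants used to identify nodules.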

  18. Emotion-induced trade-offs in spatiotemporal vision.

    PubMed

    Bocanegra, Bruno R; Zeelenberg, René

    2011-05-01

    It is generally assumed that emotion facilitates human vision in order to promote adaptive responses to a potential threat in the environment. Surprisingly, we recently found that emotion in some cases impairs the perception of elementary visual features (Bocanegra & Zeelenberg, 2009b). Here, we demonstrate that emotion improves fast temporal vision at the expense of fine-grained spatial vision. We tested participants' threshold resolution with Landolt circles containing a small spatial or brief temporal discontinuity. The prior presentation of a fearful face cue, compared with a neutral face cue, impaired spatial resolution but improved temporal resolution. In addition, we show that these benefits and deficits were triggered selectively by the global configural properties of the faces, which were transmitted only through low spatial frequencies. Critically, the common locus of these opposite effects suggests a trade-off between magno- and parvocellular-type visual channels, which contradicts the common assumption that emotion invariably improves vision. We show that, rather than being a general "boost" for all visual features, affective neural circuits sacrifice the slower processing of small details for a coarser but faster visual signal.

  19. Low Vision Rehabilitation for Adult African Americans in Two Settings.

    PubMed

    Draper, Erin M; Feng, Rui; Appel, Sarah D; Graboyes, Marcy; Engle, Erin; Ciner, Elise B; Ellenberg, Jonas H; Stambolian, Dwight

    2016-07-01

    The Vision Rehabilitation for African Americans with Central Vision Impairment (VISRAC) study is a demonstration project evaluating how modifications in vision rehabilitation can improve the use of functional vision. Fifty-five African Americans 40 years of age and older with central vision impairment were randomly assigned to receive either clinic-based (CB) or home-based (HB) low vision rehabilitation services. Forty-eight subjects completed the study. The primary outcome was the change in functional vision in activities of daily living, as assessed with the Veterans Administration Low-Vision Visual Function Questionnaire (VFQ-48), which yields scores for overall visual ability and for visual ability domains (reading, mobility, visual information processing, and visual motor skills). Each score was normalized into logit estimates by Rasch analysis. Linear regression models were used to compare the difference in the total score and each domain score between the two intervention groups, with the significance level for each comparison set at 0.05. Both the CB and HB groups showed significant improvement in overall visual ability at the final visit compared with baseline. The CB group showed greater improvement than the HB group (mean change of 1.28 vs. 0.87 logits), although the group difference was not significant (p = 0.057). The CB group's visual motor skills score showed significantly greater improvement than the HB group's (mean change of 3.30 vs. 1.34 logits, p = 0.044). The differences in improvement of the reading and visual information processing scores were not significant between groups (p = 0.054 and p = 0.509, respectively). Neither group improved significantly in the mobility score, which was not part of the rehabilitation program. Vision rehabilitation is effective for this study population regardless of location. Possible reasons why the CB group performed better than the HB group include a number of psychosocial factors as well as the more standardized, distraction-free work environment of the clinic setting.

  20. Research on Intelligent Synthesis Environments

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Lobeck, William E.

    2002-01-01

    Four research activities related to Intelligent Synthesis Environment (ISE) have been performed under this grant. The four activities are: 1) non-deterministic approaches that incorporate technologies such as intelligent software agents, visual simulations and other ISE technologies; 2) virtual labs that leverage modeling, simulation and information technologies to create an immersive, highly interactive virtual environment tailored to the needs of researchers and learners; 3) advanced learning modules that incorporate advanced instructional, user interface and intelligent agent technologies; and 4) assessment and continuous improvement of engineering team effectiveness in distributed collaborative environments.

  1. Research on Intelligent Synthesis Environments

    NASA Astrophysics Data System (ADS)

    Noor, Ahmed K.; Loftin, R. Bowen

    2002-12-01

    Four research activities related to Intelligent Synthesis Environment (ISE) have been performed under this grant. The four activities are: 1) non-deterministic approaches that incorporate technologies such as intelligent software agents, visual simulations and other ISE technologies; 2) virtual labs that leverage modeling, simulation and information technologies to create an immersive, highly interactive virtual environment tailored to the needs of researchers and learners; 3) advanced learning modules that incorporate advanced instructional, user interface and intelligent agent technologies; and 4) assessment and continuous improvement of engineering team effectiveness in distributed collaborative environments.

  2. Improving physics teaching materials on sound for visually impaired students in high school

    NASA Astrophysics Data System (ADS)

    Toenders, Frank G. C.; de Putter-Smits, Lesley G. A.; Sanders, Wendy T. M.; den Brok, Perry

    2017-09-01

    When visually impaired students attend regular high school, additional materials are necessary to help them understand physics concepts. The time for teachers to develop teaching materials for such students is scarce. Visually impaired students in regular high school physics classes often use a braille version of the physics textbook. Previously, we evaluated the physics learning environment of a blind high school student in a regular Dutch high school. In this research we evaluate the use of a revised braille textbook, relief drawings and 3D models. The research focussed on the topic of sound in grade 10.

  3. Can visual arts training improve physician performance?

    PubMed

    Katz, Joel T; Khoshbin, Shahram

    2014-01-01

    Clinical educators use medical humanities as a means to improve patient care by training more self-aware, thoughtful, and collaborative physicians. We present three examples of integrating fine arts - a subset of medical humanities - into the preclinical and clinical training as models that can be adapted to other medical environments to address a wide variety of perceived deficiencies. This novel teaching method has promise to improve physician skills, but requires further validation.

  4. Use of Visual and Proprioceptive Feedback to Improve Gait Speed and Spatiotemporal Symmetry Following Chronic Stroke: A Case Series

    PubMed Central

    Feasel, Jeff; Wentz, Erin; Brooks, Frederick P.; Whitton, Mary C.

    2012-01-01

    Background and Purpose: Persistent deficits in gait speed and spatiotemporal symmetry are prevalent following stroke and can limit the achievement of community mobility goals. Rehabilitation can improve gait speed, but has shown limited ability to improve spatiotemporal symmetry. The incorporation of combined visual and proprioceptive feedback regarding spatiotemporal symmetry has the potential to be effective at improving gait. Case Description: A 60-year-old man (18 months poststroke) and a 53-year-old woman (21 months poststroke) each participated in gait training to improve gait speed and spatiotemporal symmetry. Each patient performed 18 sessions (6 weeks) of combined treadmill-based gait training followed by overground practice. To assist with relearning spatiotemporal symmetry, treadmill-based training for both patients was augmented with continuous, real-time visual and proprioceptive feedback from an immersive virtual environment and a dual-belt treadmill, respectively. Outcomes: Both patients improved gait speed (patient 1: 0.35 m/s improvement; patient 2: 0.26 m/s improvement) and spatiotemporal symmetry. Patient 1, who trained with step-length symmetry feedback, improved his step-length symmetry ratio, but not his stance-time symmetry ratio. Patient 2, who trained with stance-time symmetry feedback, improved her stance-time symmetry ratio. She had no step-length asymmetry before training. Discussion: Both patients made improvements in gait speed and spatiotemporal symmetry that exceeded those reported in the literature. Further work is needed to ascertain the role of combined visual and proprioceptive feedback for improving gait speed and spatiotemporal symmetry after chronic stroke. PMID:22228605
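The spatiotemporal symmetry ratios used as outcome measures here are commonly computed as the ratio of the paretic to the nonparetic side's step length (or stance time). A minimal sketch, assuming that standard definition (the case series itself may normalize differently), with illustrative values:

```python
def symmetry_ratio(paretic, nonparetic):
    """Spatiotemporal symmetry ratio: 1.0 indicates perfect symmetry.

    `paretic` and `nonparetic` are mean step lengths (m) or stance times (s).
    """
    if nonparetic <= 0:
        raise ValueError("nonparetic value must be positive")
    return paretic / nonparetic

# Illustrative step lengths: 0.48 m (paretic) vs. 0.60 m (nonparetic).
ratio = symmetry_ratio(0.48, 0.60)  # a value below 1.0 indicates asymmetric gait
```

Feedback that drives this ratio toward 1.0 during treadmill training corresponds to the symmetry improvements reported for both patients.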

  5. AUVA - Augmented Reality Empowers Visual Analytics to explore Medical Curriculum Data.

    PubMed

    Nifakos, Sokratis; Vaitsis, Christos; Zary, Nabil

    2015-01-01

    Medical curriculum data play a key role in the structure and organization of medical programs in universities around the world. The effective processing and use of these data may improve the educational environment of medical students; as a consequence, new generations of health professionals would have improved skills compared with previous ones. This study introduces the process of enhancing curriculum data through the use of augmented reality technology as a management and presentation tool. The final goal is to enrich the information presented by a visual analytics approach applied to medical curriculum data while keeping these data easy to understand.

  6. Scientific Assistant Virtual Laboratory (SAVL)

    NASA Astrophysics Data System (ADS)

    Alaghband, Gita; Fardi, Hamid; Gnabasik, David

    2007-03-01

    The Scientific Assistant Virtual Laboratory (SAVL) is a scientific discovery environment, an interactive simulated virtual laboratory, for learning physics and mathematics. The purpose of this computer-assisted intervention is to improve middle and high school student interest, insight and scores in physics and mathematics. SAVL develops scientific and mathematical imagination in a visual, symbolic, and experimental simulation environment. It directly addresses the issues of scientific and technological competency by providing critical thinking training through integrated modules. This on-going research provides a virtual laboratory environment in which the student directs the building of the experiment rather than observing a packaged simulation. SAVL:
    * Engages the persistent interest of young minds in physics and math by visually linking simulation objects and events with mathematical relations.
    * Teaches integrated concepts by the hands-on exploration and focused visualization of classic physics experiments within software.
    * Systematically and uniformly assesses and scores students by their ability to answer their own questions within the context of a Master Question Network.
    We will demonstrate how the Master Question Network uses polymorphic interfaces and C# lambda expressions to manage simulation objects.

  7. Atmospheric ammonia and particulate inorganic nitrogen over the United States

    EPA Science Inventory

    We use in situ observations from the Interagency Monitoring of PROtected Visual Environments (IMPROVE) network, the Midwest Ammonia Monitoring Project, 11 surface site campaigns as well as Infrared Atmospheric Sounding Interferometer (IASI) satellite measurements with the GEOS-Ch...

  8. Task set induces dynamic reallocation of resources in visual short-term memory.

    PubMed

    Sheremata, Summer L; Shomstein, Sarah

    2017-08-01

    Successful interaction with the environment requires the ability to flexibly allocate resources to different locations in the visual field. Recent evidence suggests that visual short-term memory (VSTM) resources are distributed asymmetrically across the visual field based upon task demands. Here, we propose that context, rather than the stimulus itself, determines the asymmetrical distribution of VSTM resources. To test whether context modulates the reallocation of resources to the right visual field, task set, defined by memory load, was manipulated to influence visual short-term memory performance. Performance was measured for single-feature objects embedded within predominantly single- or two-feature memory blocks. Context was thus varied to determine whether task set directly predicts changes in visual field biases. In accord with the dynamic reallocation of resources hypothesis, task set, rather than aspects of the physical stimulus, drove improvements in performance in the right visual field. Our results show, for the first time, that preparation for upcoming memory demands directly determines how resources are allocated across the visual field.

  9. Visual Bias Predicts Gait Adaptability in Novel Sensory Discordant Conditions

    NASA Technical Reports Server (NTRS)

    Brady, Rachel A.; Batson, Crystal D.; Peters, Brian T.; Mulavara, Ajitkumar P.; Bloomberg, Jacob J.

    2010-01-01

    We designed a gait training study that presented combinations of visual flow and support-surface manipulations to investigate the response of healthy adults to novel discordant sensorimotor conditions. We aimed to determine whether a relationship existed between subjects' visual dependence and their postural stability and cognitive performance in a new discordant environment presented at the conclusion of training (Transfer Test). Our training system comprised a treadmill placed on a motion base facing a virtual visual scene that provided a variety of sensory challenges. Ten healthy adults completed 3 training sessions during which they walked on a treadmill at 1.1 m/s while receiving discordant support-surface and visual manipulations. At the first visit, in an analysis of normalized torso translation measured in a scene-movement-only condition, 3 of 10 subjects were classified as visually dependent. During the Transfer Test, all participants received a 2-minute novel exposure. In a combined measure of stride frequency and reaction time, the non-visually dependent subjects showed improved adaptation on the Transfer Test compared to their visually dependent counterparts. This finding suggests that individual differences in the ability to adapt to new sensorimotor conditions may be explained by individuals' innate sensory biases. An accurate preflight assessment of crewmembers' biases for visual dependence could be used to predict their propensities to adapt to novel sensory conditions. It may also facilitate the development of customized training regimens that could expedite adaptation to alternate gravitational environments.

  10. U.S. Forest Service Region 1 Lake Chemistry, NADP, and IMPROVE air quality data analysis

    Treesearch

    Jill Grenon; Mark Story

    2009-01-01

    This report was developed to address the need for comprehensive analysis of U.S. Forest Service (USFS) Region 1 air quality monitoring data. The monitoring data includes Phase 3 (long-term data) lakes, National Atmospheric Deposition Program (NADP), and Interagency Monitoring of Protected Visual Environments (IMPROVE). Annual and seasonal data for the periods of record...

  11. The Role of Audio-Visual Feedback in a Thought-Based Control of a Humanoid Robot: A BCI Study in Healthy and Spinal Cord Injured People.

    PubMed

    Tidoni, Emmanuele; Gergondet, Pierre; Fusco, Gabriele; Kheddar, Abderrahmane; Aglioti, Salvatore M

    2017-06-01

    The efficient control of our body and successful interaction with the environment are possible through the integration of multisensory information. Brain-computer interface (BCI) may allow people with sensorimotor disorders to actively interact in the world. In this study, visual information was paired with auditory feedback to improve the BCI control of a humanoid surrogate. Healthy and spinal cord injured (SCI) people were asked to embody a humanoid robot and complete a pick-and-place task by means of a visual evoked potentials BCI system. Participants observed the remote environment from the robot's perspective through a head mounted display. Human-footsteps and computer-beep sounds were used as synchronous/asynchronous auditory feedback. Healthy participants achieved better placing accuracy when listening to human footstep sounds relative to a computer-generated sound. SCI people demonstrated more difficulty in steering the robot during asynchronous auditory feedback conditions. Importantly, subjective reports highlighted that the BCI mask overlaying the display did not limit the observation of the scenario and the feeling of being in control of the robot. Overall, the data seem to suggest that sensorimotor-related information may improve the control of external devices. Further studies are required to understand how the contribution of residual sensory channels could improve the reliability of BCI systems.

  12. Can walking motions improve visually induced rotational self-motion illusions in virtual reality?

    PubMed

    Riecke, Bernhard E; Freiberg, Jacob B; Grechkin, Timofey Y

    2015-02-04

    Illusions of self-motion (vection) can provide compelling sensations of moving through virtual environments without the need for complex motion simulators or large tracked physical walking spaces. Here we explore the interaction between biomechanical cues (stepping along a rotating circular treadmill) and visual cues (viewing simulated self-rotation) for providing stationary users a compelling sensation of rotational self-motion (circular vection). When tested individually, biomechanical and visual cues were similarly effective in eliciting self-motion illusions. However, in combination they yielded significantly more intense self-motion illusions. These findings provide the first compelling evidence that walking motions can be used to significantly enhance visually induced rotational self-motion perception in virtual environments (and vice versa) without having to provide for physical self-motion or motion platforms. This is noteworthy, as linear treadmills have been found to actually impair visually induced translational self-motion perception (Ash, Palmisano, Apthorp, & Allison, 2013). Given the predominant focus on linear walking interfaces for virtual-reality locomotion, our findings suggest that investigating circular and curvilinear walking interfaces offers a promising direction for future research and development and can help to enhance self-motion illusions, presence and immersion in virtual-reality systems. © 2015 ARVO.

  13. Telerobotic Haptic Exploration in Art Galleries and Museums for Individuals with Visual Impairments.

    PubMed

    Park, Chung Hyuk; Ryu, Eun-Seok; Howard, Ayanna M

    2015-01-01

    This paper presents a haptic telepresence system that enables visually impaired users to explore locations with rich visual observation, such as art galleries and museums, by using a telepresence robot, an RGB-D sensor (color and depth camera), and a haptic interface. Recent improvements in RGB-D sensors have enabled real-time access to 3D spatial information in the form of point clouds. However, the real-time representation of these data as a tangible haptic experience has not been sufficiently explored, especially in the case of telepresence for individuals with visual impairments. Thus, the proposed system addresses the real-time haptic exploration of remote 3D information through video encoding and real-time 3D haptic rendering of the remote real-world environment. This paper investigates two scenarios in haptic telepresence, i.e., mobile navigation and object exploration in a remote environment. Participants with and without visual impairments took part in experiments based on the two scenarios, and the system performance was validated. In conclusion, the proposed framework provides a new methodology of haptic telepresence for individuals with visual impairments, offering an enhanced interactive experience in which they can remotely access public places (art galleries and museums) with the aid of the haptic modality and robotic telepresence.

  14. The Effect of Conventional and Transparent Surgical Masks on Speech Understanding in Individuals with and without Hearing Loss.

    PubMed

    Atcherson, Samuel R; Mendel, Lisa Lucks; Baltimore, Wesley J; Patro, Chhayakanta; Lee, Sungmin; Pousson, Monique; Spann, M Joshua

    2017-01-01

    It is generally well known that speech perception is often improved with integrated audiovisual input, whether in quiet or in noise. In many health-care environments, however, conventional surgical masks block visual access to the mouth and obscure other potential facial cues. In addition, these environments can be noisy. Although these masks may not alter the acoustic properties of speech, the presence of noise combined with the lack of visual input can have a deleterious effect on speech understanding. A transparent ("see-through") surgical mask may help to overcome this issue. The purpose of this study was to compare the effect of noise and various visual input conditions on speech understanding for listeners with normal hearing (NH) and hearing impairment using different surgical masks. Participants were assigned to one of three groups based on hearing sensitivity in this quasi-experimental, cross-sectional study. A total of 31 adults participated: one talker, ten listeners with NH, ten listeners with moderate sensorineural hearing loss, and ten listeners with severe-to-profound hearing loss. Selected lists from the Connected Speech Test were digitally recorded with and without surgical masks and then presented to the listeners at 65 dB HL in five conditions against a background of four-talker babble (+10 dB SNR): without a mask (auditory only), without a mask (auditory and visual), with a transparent mask (auditory only), with a transparent mask (auditory and visual), and with a paper mask (auditory only). A significant difference was found in the spectral analyses of the speech stimuli with and without the masks, although it was no more than ∼2 dB root mean square. Listeners with NH performed consistently well across all conditions. Both groups of listeners with hearing impairment benefitted from the visual input available through the transparent mask, and the magnitude of improvement in speech perception in noise was greatest for the severe-to-profound group. Findings confirm improved speech perception performance in noise for listeners with hearing impairment when visual input is provided using a transparent surgical mask. Most importantly, the use of the transparent mask did not negatively affect speech perception performance in noise. American Academy of Audiology

  15. Processing reafferent and exafferent visual information for action and perception.

    PubMed

    Reichenbach, Alexandra; Diedrichsen, Jörn

    2015-01-01

    A recent study suggests that reafferent hand-related visual information utilizes a privileged, attention-independent processing channel for motor control. This process was termed visuomotor binding to reflect its proposed function: linking visual reafferences to the corresponding motor control centers. Here, we ask whether the advantage of processing reafferent over exafferent visual information is a specific feature of the motor processing stream or whether the improved processing also benefits the perceptual processing stream. Human participants performed a bimanual reaching task in a cluttered visual display, and one of the visual hand cursors could be displaced laterally during the movement. We measured the rapid feedback responses of the motor system as well as matched perceptual judgments of which cursor was displaced. Perceptual judgments were either made by watching the visual scene without moving or made simultaneously to the reaching tasks, such that the perceptual processing stream could also profit from the specialized processing of reafferent information in the latter case. Our results demonstrate that perceptual judgments in the heavily cluttered visual environment were improved when performed based on reafferent information. Even in this case, however, the filtering capability of the perceptual processing stream suffered more from the increasing complexity of the visual scene than the motor processing stream. These findings suggest partly shared and partly segregated processing of reafferent information for vision for motor control versus vision for perception.

  16. Visual attention distracter insertion for improved EEG rapid serial visual presentation (RSVP) target stimuli detection

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Huber, David J.; Martin, Kevin

    2017-05-01

    This paper describes a technique in which we improve upon the prior performance of the Rapid Serial Visual Presentation (RSVP) EEG paradigm for image classification through the insertion of visual attention distracters and overall sequence reordering based upon the expected ratio of rare to common "events" in the environment and operational context. Inserting distracter images maintains the ratio of common events to rare events at an ideal level, maximizing rare-event detection via the P300 EEG response to the RSVP stimuli. The method has two steps: first, we compute the optimal number of distracters needed for an RSVP stimulus sequence based on the desired sequence length and expected number of targets and insert the distracters into the RSVP sequence; then we reorder the RSVP sequence to maximize P300 detection. We show that by reducing the ratio of target events to nontarget events using this method, we can allow RSVP sequences with more targets without sacrificing area under the ROC curve (Az).
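The distracter-insertion step can be sketched as: given a maximum acceptable target-to-total ratio and the number of expected targets, compute how many distracter images to add, then reorder. The 10% ceiling, the function name, and the uniform reshuffle policy below are illustrative assumptions, not the paper's exact procedure:

```python
import math
import random

def pad_with_distracters(sequence, is_target, max_target_ratio=0.1,
                         distracter="DISTRACTER"):
    """Insert distracter items so targets make up at most max_target_ratio
    of the RSVP sequence, preserving the P300 'oddball' rarity of targets.

    `sequence` is a list of stimulus identifiers; `is_target` flags each one.
    """
    n_targets = sum(1 for t in is_target if t)
    n_total = len(sequence)
    # Minimum sequence length so that targets / total <= max_target_ratio.
    needed_total = math.ceil(n_targets / max_target_ratio) if n_targets else n_total
    n_distracters = max(0, needed_total - n_total)
    padded = list(sequence) + [distracter] * n_distracters
    random.shuffle(padded)  # spread the rare targets through the sequence
    return padded

seq = ["img%02d" % i for i in range(20)]
targets = [i < 4 for i in range(20)]         # 4 targets in 20 images: 20%, too high
padded = pad_with_distracters(seq, targets)  # padded to 40 items: 10% targets
```

A smarter reordering (e.g., enforcing a minimum gap between targets) could replace the plain shuffle; the key point is that the target rate is held at the oddball level.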

  17. Evaluation of Total Nitrite Pattern Visualization as an Improved Method for Gunshot Residue Detection and its Application to Casework Samples.

    PubMed

    Berger, Jason; Upton, Colin; Springer, Elyah

    2018-04-23

    Visualization of nitrite residues is essential in gunshot distance determination. Current protocols for the detection of nitrites include, among other tests, the Modified Griess Test (MGT). This method is limited because nitrite residues are unstable in the environment and are confined to partially burned gunpowder. Previous research demonstrated the ability of alkaline hydrolysis to convert nitrates to nitrites, allowing visualization of unburned gunpowder particles using the MGT; this is referred to as Total Nitrite Pattern Visualization (TNV). TNV techniques were modified and a study conducted to streamline the procedure outlined in the literature and maximize the efficacy of TNV in casework, reducing the required time from 1 h to 5 min and enhancing effectiveness on blood-soiled samples. The TNV method was found to provide a significant improvement in the ability to detect nitrite residues, without sacrificing efficiency, allowing determination of the muzzle-to-target distance. © 2018 American Academy of Forensic Sciences.

  18. Training Enhances Both Locomotor and Cognitive Adaptability to a Novel Sensory Environment

    NASA Technical Reports Server (NTRS)

    Bloomberg, J. J.; Peters, B. T.; Mulavara, A. P.; Brady, R. A.; Batson, C. D.; Ploutz-Snyder, R. J.; Cohen, H. S.

    2010-01-01

    During adaptation to novel gravitational environments, sensorimotor disturbances have the potential to disrupt the ability of astronauts to perform required mission tasks. The goal of this project is to develop a sensorimotor adaptability (SA) training program to facilitate rapid adaptation. We have developed a unique training system comprised of a treadmill placed on a motion base facing a virtual visual scene; it provides an unstable walking surface combined with incongruent visual flow designed to enhance sensorimotor adaptability. The goal of our present study was to determine if SA training improved both the locomotor and cognitive responses to a novel sensory environment and to quantify the extent to which training would be retained. Methods: Twenty subjects (10 training, 10 control) completed three 30-minute training sessions during which they walked on the treadmill while receiving discordant support-surface and visual input. Control subjects walked on the treadmill but did not receive any support-surface or visual alterations. To determine the efficacy of training, all subjects performed the Transfer Test upon completion of training. For this test, subjects were exposed to novel visual flow and support-surface movement not previously experienced during training. The Transfer Test was performed 20 minutes, 1 week, and 1, 3, and 6 months after the final training session. Stride frequency, auditory reaction time, and heart rate data were collected as measures of postural stability, cognitive effort, and anxiety, respectively. Results: Using mixed-effects regression methods, we determined that subjects who received SA training showed smaller alterations in stride frequency, auditory reaction time, and heart rate compared to controls.
Conclusion: Subjects who received SA training improved performance across a number of modalities including enhanced locomotor function, increased multi-tasking capability and reduced anxiety during adaptation to novel discordant sensory information. Trained subjects maintained their level of performance over six months.

  19. Can Visual Arts Training Improve Physician Performance?

    PubMed Central

    Katz, Joel T.; Khoshbin, Shahram

    2014-01-01

    Clinical educators use medical humanities as a means to improve patient care by training more self-aware, thoughtful, and collaborative physicians. We present three examples of integrating fine arts — a subset of medical humanities — into the preclinical and clinical training as models that can be adapted to other medical environments to address a wide variety of perceived deficiencies. This novel teaching method has promise to improve physician skills, but requires further validation. PMID:25125749

  20. Improving School Lighting for Video Display Units.

    ERIC Educational Resources Information Center

    Parker-Jenkins, Marie; Parker-Jenkins, William

    1985-01-01

    Provides information to identify and implement the key characteristics which contribute to an efficient and comfortable visual display unit (VDU) lighting installation. Areas addressed include VDU lighting requirements, glare, lighting controls, VDU environment, lighting retrofit, optical filters, and lighting recommendations. A checklist to…

  1. Visual and tactile interfaces for bi-directional human robot communication

    NASA Astrophysics Data System (ADS)

    Barber, Daniel; Lackey, Stephanie; Reinerman-Jones, Lauren; Hudson, Irwin

    2013-05-01

    Seamless integration of unmanned systems and Soldiers in the operational environment requires robust communication capabilities. Multi-Modal Communication (MMC) facilitates achieving this goal by providing redundancy and levels of communication superior to single-mode interaction, using auditory, visual, and tactile modalities. Visual signaling using arm and hand gestures is a natural method of communication between people. Visual signals standardized within the U.S. Army Field Manual and in use by Soldiers provide a foundation for developing gestures for human-to-robot communication. Emerging technologies using Inertial Measurement Units (IMUs) enable classification of arm and hand gestures for communication with a robot without the line-of-sight requirement of computer vision techniques. These devices improve the robustness of interpreting gestures in noisy environments and are capable of classifying signals relevant to operational tasks. Closing the communication loop between Soldiers and robots necessitates that robots have the ability to return equivalent messages. Existing visual signals from robots to humans typically require highly anthropomorphic features not present on military vehicles. Tactile displays tap into an unused modality for robot-to-human communication. Typically used for hands-free navigation and cueing, existing tactile display technologies are used here to deliver equivalents of the visual signals from the U.S. Army Field Manual. This paper describes ongoing research to collaboratively develop tactile communication methods with Soldiers, measure the classification accuracy of visual signal interfaces, and provide an integration example including two robotic platforms.

  2. Reverse alignment "mirror image" visualization as a laparoscopic training tool improves task performance.

    PubMed

    Dunnican, Ward J; Singh, T Paul; Ata, Ashar; Bendana, Emma E; Conlee, Thomas D; Dolce, Charles J; Ramakrishnan, Rakesh

    2010-06-01

    Reverse alignment (mirror image) visualization is a disconcerting situation occasionally faced during laparoscopic operations. It occurs when the camera faces back at the surgeon, in the direction opposite to that which the surgeon's body and instruments are facing. Most surgeons will attempt to optimize trocar and camera placement to avoid this situation. The authors' objective was to determine whether the intentional use of reverse alignment visualization during laparoscopic training would improve performance. A standard box trainer was configured for reverse alignment, and 34 medical students and junior surgical residents were randomized to train with either forward alignment (DIRECT) or reverse alignment (MIRROR) visualization. Enrollees were tested on both modalities before and after a 4-week structured training program specific to their modality. Student's t test was used to determine differences in task performance between the 2 groups. Twenty-one participants completed the study (10 DIRECT, 11 MIRROR). There were no significant differences in performance time between DIRECT and MIRROR participants during forward or reverse alignment initial testing. At final testing, DIRECT participants had improved times only in forward alignment performance; they demonstrated no significant improvement in reverse alignment performance. MIRROR participants had significant time improvement in both forward and reverse alignment performance at final testing. Reverse alignment imaging for laparoscopic training improves task performance for both reverse alignment and forward alignment tasks. This may translate into improved performance in the operating room when surgeons are faced with reverse alignment situations, and even minimal laboratory training can produce substantial adaptation to this environment.

  3. Lightness Constancy in Surface Visualization

    PubMed Central

    Szafir, Danielle Albers; Sarikaya, Alper; Gleicher, Michael

    2016-01-01

    Color is a common channel for displaying data in surface visualization, but is affected by the shadows and shading used to convey surface depth and shape. Understanding encoded data in the context of surface structure is critical for effective analysis in a variety of domains, such as in molecular biology. In the physical world, lightness constancy allows people to accurately perceive shadowed colors; however, its effectiveness in complex synthetic environments such as surface visualizations is not well understood. We report a series of crowdsourced and laboratory studies that confirm the existence of lightness constancy effects for molecular surface visualizations using ambient occlusion. We provide empirical evidence of how common visualization design decisions can impact viewers’ abilities to accurately identify encoded surface colors. These findings suggest that lightness constancy aids in understanding color encodings in surface visualization and reveal a correlation between visualization techniques that improve color interpretation in shadow and those that enhance perceptions of surface depth. These results collectively suggest that understanding constancy in practice can inform effective visualization design. PMID:26584495

  4. Public Health Analysis Transport Optimization Model v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beyeler, Walt; Finley, Patrick; Walser, Alex

PHANTOM models logistics functions of national public health systems. The system enables public health officials to visualize and coordinate options for public health surveillance, diagnosis, response and administration in an integrated analytical environment. Users may simulate and analyze system performance under scenarios that represent current conditions or future contingencies, supporting what-if analyses of potential systemic improvements. Public health networks are visualized as interactive maps, with graphical displays of relevant system performance metrics as calculated by the simulation modeling components.

  5. Digitization and Visualization of Greenhouse Tomato Plants in Indoor Environments

    PubMed Central

    Li, Dawei; Xu, Lihong; Tan, Chengxiang; Goodman, Erik D.; Fu, Daichang; Xin, Longjiao

    2015-01-01

This paper is concerned with the digitization and visualization of potted greenhouse tomato plants in indoor environments. For the digitization, an inexpensive and efficient commercial stereo sensor—a Microsoft Kinect—is used to separate visual information about tomato plants from the background. Based on the Kinect, a 4-step approach that can automatically detect and segment stems of tomato plants is proposed, including acquisition and preprocessing of image data, detection of stem segments, removal of false detections and automatic segmentation of stem segments. Correctly segmented texture samples including stems and leaves are then stored in a texture database for further usage. Two types of tomato plants—the cherry tomato variety and the ordinary variety—are studied in this paper. The stem detection accuracy (under a simulated greenhouse environment) for the cherry tomato variety is 98.4% at a true positive rate of 78.0%, whereas the detection accuracy for the ordinary variety is 94.5% at a true positive rate of 72.5%. In visualization, we combine L-system theory and digitized tomato organ texture data to build realistic 3D virtual tomato plant models that are capable of exhibiting various structures and poses in real time. In particular, we also simulate the growth process on virtual tomato plants by exerting controls on two L-systems via parameters concerning the age and the form of lateral branches. This research may provide useful visual cues for improving intelligent greenhouse control systems and meanwhile may facilitate research on artificial organisms. PMID:25675284
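
The L-system mechanism referenced in this record can be illustrated with a minimal string-rewriting sketch. The production rule below is hypothetical and stands in for the paper's parameterized, age- and branch-controlled rules; bracketed symbols conventionally denote lateral branches.

```python
# Minimal illustrative L-system (string rewriting). The rule set here is
# hypothetical; the paper uses parameterized L-systems with controls for
# plant age and lateral-branch form that are not reproduced here.

def expand(axiom, rules, iterations):
    """Repeatedly apply production rules to every symbol of the axiom.

    Symbols without a rule are copied unchanged (identity production).
    """
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Hypothetical rule: a stem segment F elongates and spawns two lateral
# branches, one turned each way ([+F] and [-F]).
rules = {"F": "FF[+F][-F]"}
derived = expand("F", rules, 2)
```

A renderer (e.g. turtle graphics) would then interpret the derived string geometrically, texturing each segment with the stem and leaf samples stored in the database.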

  6. Digitization and visualization of greenhouse tomato plants in indoor environments.

    PubMed

    Li, Dawei; Xu, Lihong; Tan, Chengxiang; Goodman, Erik D; Fu, Daichang; Xin, Longjiao

    2015-02-10

This paper is concerned with the digitization and visualization of potted greenhouse tomato plants in indoor environments. For the digitization, an inexpensive and efficient commercial stereo sensor—a Microsoft Kinect—is used to separate visual information about tomato plants from the background. Based on the Kinect, a 4-step approach that can automatically detect and segment stems of tomato plants is proposed, including acquisition and preprocessing of image data, detection of stem segments, removal of false detections and automatic segmentation of stem segments. Correctly segmented texture samples including stems and leaves are then stored in a texture database for further usage. Two types of tomato plants—the cherry tomato variety and the ordinary variety—are studied in this paper. The stem detection accuracy (under a simulated greenhouse environment) for the cherry tomato variety is 98.4% at a true positive rate of 78.0%, whereas the detection accuracy for the ordinary variety is 94.5% at a true positive rate of 72.5%. In visualization, we combine L-system theory and digitized tomato organ texture data to build realistic 3D virtual tomato plant models that are capable of exhibiting various structures and poses in real time. In particular, we also simulate the growth process on virtual tomato plants by exerting controls on two L-systems via parameters concerning the age and the form of lateral branches. This research may provide useful visual cues for improving intelligent greenhouse control systems and meanwhile may facilitate research on artificial organisms.

  7. Training to Facilitate Adaptation to Novel Sensory Environments

    NASA Technical Reports Server (NTRS)

    Bloomberg, J. J.; Peters, B. T.; Mulavara, A. P.; Brady, R. A.; Batson, C. D.; Ploutz-Snyder, R. J.; Cohen, H. S.

    2010-01-01

After spaceflight, the process of readapting to Earth's gravity causes locomotor dysfunction. We are developing a gait training countermeasure to facilitate adaptive responses in locomotor function. Our training system comprises a treadmill placed on a motion-base facing a virtual visual scene; it provides an unstable walking surface combined with incongruent visual flow designed to train subjects to rapidly adapt their gait patterns to changes in the sensory environment. The goal of our present study was to determine if training improved both the locomotor and dual-tasking responses to a novel sensory environment and to quantify the retention of training. Subjects completed three 30-minute training sessions during which they walked on the treadmill while receiving discordant support surface and visual input. Control subjects walked on the treadmill without any support surface or visual alterations. To determine the efficacy of training, all subjects were then tested using a novel visual flow and support surface movement not previously experienced during training. This test was performed 20 minutes, 1 week, and 1, 3, and 6 months after the final training session. Stride frequency and auditory reaction time were collected as measures of postural stability and cognitive effort, respectively. Subjects who received training showed less alteration in stride frequency and auditory reaction time compared to controls. Trained subjects maintained their level of performance over 6 months. We conclude that, with training, individuals became more proficient at walking in novel discordant sensorimotor conditions and were able to devote more attention to competing tasks.

  8. Underwater image enhancement through depth estimation based on random forest

    NASA Astrophysics Data System (ADS)

    Tai, Shen-Chuan; Tsai, Ting-Chou; Huang, Jyun-Han

    2017-11-01

    Light absorption and scattering in underwater environments can result in low-contrast images with a distinct color cast. This paper proposes a systematic framework for the enhancement of underwater images. Light transmission is estimated using the random forest algorithm. RGB values, luminance, color difference, blurriness, and the dark channel are treated as features in training and estimation. Transmission is calculated using an ensemble machine learning algorithm to deal with a variety of conditions encountered in underwater environments. A color compensation and contrast enhancement algorithm based on depth information was also developed with the aim of improving the visual quality of underwater images. Experimental results demonstrate that the proposed scheme outperforms existing methods with regard to subjective visual quality as well as objective measurements.
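
The transmission-estimation step described in this record can be sketched with scikit-learn's random forest regressor. The feature set follows the abstract (RGB, luminance, dark channel, blurriness); the training data below is synthetic stand-in data, not the authors' dataset, and the blurriness measure is a placeholder.

```python
# Sketch of per-patch light-transmission estimation with a random forest,
# in the spirit of the method above. All data here is synthetic; in the
# real pipeline, features would come from labeled underwater image patches.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
rgb = rng.random((n, 3))                              # per-patch mean RGB
luminance = rgb @ np.array([0.299, 0.587, 0.114])     # Rec. 601 luma weights
dark_channel = rgb.min(axis=1)                        # darkest color channel
blurriness = rng.random(n)                            # placeholder blur measure
X = np.column_stack([rgb, luminance, dark_channel, blurriness])

# Synthetic "ground truth": transmission decreases as the dark channel rises
# (a rough stand-in for haze/backscatter strength), plus noise.
t = np.clip(1.0 - 0.8 * dark_channel + 0.05 * rng.standard_normal(n), 0.0, 1.0)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, t)
pred = model.predict(X)
```

With a transmission map in hand, the compensation step would scale each color channel by the estimated transmission to undo depth-dependent attenuation.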

  9. Lake and bulk sampling chemistry, NADP, and IMPROVE air quality data analysis on the Bridger-Teton National Forest (USFS Region 4)

    Treesearch

    Jill Grenon; Terry Svalberg; Ted Porwoll; Mark Story

    2010-01-01

Air quality monitoring data from several programs in and around the Bridger-Teton (B-T) National Forest - National Atmospheric Deposition Program (NADP), long-term lake monitoring, long-term bulk precipitation monitoring (both snow and rain), and Interagency Monitoring of Protected Visual Environments (IMPROVE) - were analyzed in this report. Trends were analyzed using...

  10. Setting visual pre-placement testing in a technology manufacturing environment.

    PubMed

    Gowan, Nancy J

    2014-01-01

Every day we use our eyes to perform activities of daily living and work. Aging changes as well as health conditions can impact an individual's visual function, making it more difficult to accurately perform work activities. Occupational therapists work closely with optometrists and employers to develop ways to accommodate these changes so that the employee can continue to perform the work tasks. This manuscript outlines a case study of systematically developing visual demands analyses and pre-placement vision screening assessment protocols for individuals working in quality inspection positions. When the vision screening was completed, it was discovered that over 20% of the employees had visual deficits that were correctable. This screening process not only yielded improved quality results but also identified previously undetected visual deficits. Further development of vision screening in the workplace is supported.

  11. The Web Measurement Environment (WebME): A Tool for Combining and Modeling Distributed Data

    NASA Technical Reports Server (NTRS)

    Tesoriero, Roseanne; Zelkowitz, Marvin

    1997-01-01

    Many organizations have incorporated data collection into their software processes for the purpose of process improvement. However, in order to improve, interpreting the data is just as important as the collection of data. With the increased presence of the Internet and the ubiquity of the World Wide Web, the potential for software processes being distributed among several physically separated locations has also grown. Because project data may be stored in multiple locations and in differing formats, obtaining and interpreting data from this type of environment becomes even more complicated. The Web Measurement Environment (WebME), a Web-based data visualization tool, is being developed to facilitate the understanding of collected data in a distributed environment. The WebME system will permit the analysis of development data in distributed, heterogeneous environments. This paper provides an overview of the system and its capabilities.

  12. Real-time 3D visualization of volumetric video motion sensor data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlson, J.; Stansfield, S.; Shawver, D.

    1996-11-01

This paper addresses the problem of improving detection, assessment, and response capabilities of security systems. Our approach combines two state-of-the-art technologies: volumetric video motion detection (VVMD) and virtual reality (VR). This work capitalizes on the ability of VVMD technology to provide three-dimensional (3D) information about the position, shape, and size of intruders within a protected volume. The 3D information is obtained by fusing motion detection data from multiple video sensors. The second component involves the application of VR technology to display information relating to the sensors and the sensor environment. VR technology enables an operator, or security guard, to be immersed in a 3D graphical representation of the remote site. VVMD data is transmitted from the remote site via ordinary telephone lines. There are several benefits to displaying VVMD information in this way. Because the VVMD system provides 3D information and because the sensor environment is a physical 3D space, it seems natural to display this information in 3D. Also, the 3D graphical representation depicts essential details within and around the protected volume in a natural way for human perception. Sensor information can also be more easily interpreted when the operator can 'move' through the virtual environment and explore the relationships between the sensor data, objects and other visual cues present in the virtual environment. By exploiting the powerful ability of humans to understand and interpret 3D information, we expect to improve the means for visualizing and interpreting sensor information, allow a human operator to assess a potential threat more quickly and accurately, and enable a more effective response. This paper will detail both the VVMD and VR technologies and will discuss a prototype system based upon their integration.

  13. Virtual Environmental Enrichment through Video Games Improves Hippocampal-Associated Memory

    PubMed Central

    Clemenson, Gregory D.

    2015-01-01

    The positive effects of environmental enrichment and their neural bases have been studied extensively in the rodent (van Praag et al., 2000). For example, simply modifying an animal's living environment to promote sensory stimulation can lead to (but is not limited to) enhancements in hippocampal cognition and neuroplasticity and can alleviate hippocampal cognitive deficits associated with neurodegenerative diseases and aging. We are interested in whether these manipulations that successfully enhance cognition (or mitigate cognitive decline) have similar influences on humans. Although there are many “enriching” aspects to daily life, we are constantly adapting to new experiences and situations within our own environment on a daily basis. Here, we hypothesize that the exploration of the vast and visually stimulating virtual environments within video games is a human correlate of environmental enrichment. We show that video gamers who specifically favor complex 3D video games performed better on a demanding recognition memory task that assesses participants' ability to discriminate highly similar lure items from repeated items. In addition, after 2 weeks of training on the 3D video game Super Mario 3D World, naive video gamers showed improved mnemonic discrimination ability and improvements on a virtual water maze task. Two control conditions (passive and training in a 2D game, Angry Birds), showed no such improvements. Furthermore, individual performance in both hippocampal-associated behaviors correlated with performance in Super Mario but not Angry Birds, suggesting that how individuals explored the virtual environment may influence hippocampal behavior. SIGNIFICANCE STATEMENT The hippocampus has long been associated with episodic memory and is commonly thought to rely on neuroplasticity to adapt to the ever-changing environment. 
In animals, it is well understood that exposing animals to a more stimulating environment, known as environmental enrichment, can stimulate neuroplasticity and improve hippocampal function and performance on hippocampally mediated memory tasks. Here, we suggest that the exploration of vast and visually stimulating environments within modern-day video games can act as a human correlate of environmental enrichment. Training naive video gamers in a rich 3D, but not 2D, video game, resulted in a significant improvement in hippocampus-associated cognition using several behavioral measures. Our results suggest that modern day video games may provide meaningful stimulation to the human hippocampus. PMID:26658864

  14. Virtual Environmental Enrichment through Video Games Improves Hippocampal-Associated Memory.

    PubMed

    Clemenson, Gregory D; Stark, Craig E L

    2015-12-09

    The positive effects of environmental enrichment and their neural bases have been studied extensively in the rodent (van Praag et al., 2000). For example, simply modifying an animal's living environment to promote sensory stimulation can lead to (but is not limited to) enhancements in hippocampal cognition and neuroplasticity and can alleviate hippocampal cognitive deficits associated with neurodegenerative diseases and aging. We are interested in whether these manipulations that successfully enhance cognition (or mitigate cognitive decline) have similar influences on humans. Although there are many "enriching" aspects to daily life, we are constantly adapting to new experiences and situations within our own environment on a daily basis. Here, we hypothesize that the exploration of the vast and visually stimulating virtual environments within video games is a human correlate of environmental enrichment. We show that video gamers who specifically favor complex 3D video games performed better on a demanding recognition memory task that assesses participants' ability to discriminate highly similar lure items from repeated items. In addition, after 2 weeks of training on the 3D video game Super Mario 3D World, naive video gamers showed improved mnemonic discrimination ability and improvements on a virtual water maze task. Two control conditions (passive and training in a 2D game, Angry Birds), showed no such improvements. Furthermore, individual performance in both hippocampal-associated behaviors correlated with performance in Super Mario but not Angry Birds, suggesting that how individuals explored the virtual environment may influence hippocampal behavior. The hippocampus has long been associated with episodic memory and is commonly thought to rely on neuroplasticity to adapt to the ever-changing environment. 
In animals, it is well understood that exposing animals to a more stimulating environment, known as environmental enrichment, can stimulate neuroplasticity and improve hippocampal function and performance on hippocampally mediated memory tasks. Here, we suggest that the exploration of vast and visually stimulating environments within modern-day video games can act as a human correlate of environmental enrichment. Training naive video gamers in a rich 3D, but not 2D, video game, resulted in a significant improvement in hippocampus-associated cognition using several behavioral measures. Our results suggest that modern day video games may provide meaningful stimulation to the human hippocampus.

  15. A method to improve visual similarity of breast masses for an interactive computer-aided diagnosis environment.

    PubMed

    Zheng, Bin; Lu, Amy; Hardesty, Lara A; Sumkin, Jules H; Hakim, Christiane M; Ganott, Marie A; Gur, David

    2006-01-01

The purpose of this study was to develop and test a method for selecting "visually similar" regions of interest depicting breast masses from a reference library to be used in an interactive computer-aided diagnosis (CAD) environment. A reference library including 1000 malignant mass regions and 2000 benign and CAD-generated false-positive regions was established. When a suspicious mass region is identified, the scheme segments the region and searches for similar regions from the reference library using a multifeature-based k-nearest neighbor (KNN) algorithm. To improve the selection of reference images, we added an interactive step. All actual masses in the reference library were subjectively rated on a scale from 1 to 9 as to their visual margin "spiculation". When an observer identifies a suspected mass region during a case interpretation, he/she first rates the margins, and the computerized search is then limited only to regions rated as having similar levels of spiculation (within +/-1 scale difference). In an observer preference study including 85 test regions, two sets of the six "similar" reference regions selected by the KNN with and without the interactive step were displayed side by side with each test region. Four radiologists and five nonclinician observers selected the more appropriate ("similar") reference set in a two-alternative forced-choice preference experiment. The four radiologists and five nonclinician observers preferred the sets of regions selected by the interactive method with average frequencies of 76.8% and 74.6%, respectively. The overall preference for the interactive method was highly significant (p < 0.001). The study demonstrated that a simple interactive approach that includes subjectively perceived ratings of one feature alone, namely a rating of margin "spiculation", could substantially improve the selection of "visually similar" reference images.
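
The two-stage selection described in this record can be sketched in a few lines: first restrict the reference library to regions whose observer-rated spiculation is within +/-1 of the query's rating, then rank the survivors by multifeature nearest-neighbor distance. The feature vectors, ratings, and Euclidean metric below are synthetic stand-ins, not the study's actual features or distance measure.

```python
# Sketch of rating-filtered k-nearest-neighbor retrieval of "visually
# similar" reference regions. All data here is synthetic stand-in data.
import numpy as np

def select_similar(query_feat, query_rating, ref_feats, ref_ratings, k=6):
    """Return indices of the k reference regions most similar to the query,
    restricted to regions rated within +/-1 of the query's spiculation rating.
    """
    candidates = np.flatnonzero(np.abs(ref_ratings - query_rating) <= 1)
    d = np.linalg.norm(ref_feats[candidates] - query_feat, axis=1)
    return candidates[np.argsort(d)[:k]]

rng = np.random.default_rng(1)
ref_feats = rng.random((3000, 8))         # e.g. 8 computed region features
ref_ratings = rng.integers(1, 10, 3000)   # observer ratings on the 1-9 scale
idx = select_similar(rng.random(8), 5, ref_feats, ref_ratings)
```

The interactive step thus only prunes the candidate pool; the KNN ranking itself is unchanged, which is why the method adds negligible computational cost.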

  16. A Fuzzy-Based Approach for Sensing, Coding and Transmission Configuration of Visual Sensors in Smart City Applications

    PubMed Central

    Costa, Daniel G.; Collotta, Mario; Pau, Giovanni; Duran-Faundez, Cristian

    2017-01-01

    The advance of technologies in several areas has allowed the development of smart city applications, which can improve the way of life in modern cities. When employing visual sensors in that scenario, still images and video streams may be retrieved from monitored areas, potentially providing valuable data for many applications. Actually, visual sensor networks may need to be highly dynamic, reflecting the changing of parameters in smart cities. In this context, characteristics of visual sensors and conditions of the monitored environment, as well as the status of other concurrent monitoring systems, may affect how visual sensors collect, encode and transmit information. This paper proposes a fuzzy-based approach to dynamically configure the way visual sensors will operate concerning sensing, coding and transmission patterns, exploiting different types of reference parameters. This innovative approach can be considered as the basis for multi-systems smart city applications based on visual monitoring, potentially bringing significant results for this research field. PMID:28067777
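
The fuzzy configuration idea in this record can be illustrated with a toy Mamdani-style controller that maps two reference parameters to a sensor frame rate. The membership functions, rule base, and output values below are hypothetical; the paper's actual parameters and rules are not reproduced here.

```python
# Toy fuzzy-inference sketch: configure a visual sensor's frame rate from
# battery level and monitoring relevance (both normalized to [0, 1]).
# Memberships, rules, and output rates are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def frame_rate(battery, relevance):
    """Min/max rule evaluation with weighted-average defuzzification."""
    low_batt = tri(battery, -0.5, 0.0, 0.6)
    high_batt = tri(battery, 0.4, 1.0, 1.5)
    low_rel = tri(relevance, -0.5, 0.0, 0.6)
    high_rel = tri(relevance, 0.4, 1.0, 1.5)
    # Rule 1: low battery OR low relevance -> slow sensing (5 fps)
    slow = max(low_batt, low_rel)
    # Rule 2: high battery AND high relevance -> fast sensing (30 fps)
    fast = min(high_batt, high_rel)
    if slow + fast == 0:
        return 5.0  # conservative default when no rule fires
    return (slow * 5.0 + fast * 30.0) / (slow + fast)
```

In the paper's setting the same pattern would extend to coding and transmission parameters (e.g. quantization level, packet priority), each driven by its own rule base.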

  17. A Fuzzy-Based Approach for Sensing, Coding and Transmission Configuration of Visual Sensors in Smart City Applications.

    PubMed

    Costa, Daniel G; Collotta, Mario; Pau, Giovanni; Duran-Faundez, Cristian

    2017-01-05

    The advance of technologies in several areas has allowed the development of smart city applications, which can improve the way of life in modern cities. When employing visual sensors in that scenario, still images and video streams may be retrieved from monitored areas, potentially providing valuable data for many applications. Actually, visual sensor networks may need to be highly dynamic, reflecting the changing of parameters in smart cities. In this context, characteristics of visual sensors and conditions of the monitored environment, as well as the status of other concurrent monitoring systems, may affect how visual sensors collect, encode and transmit information. This paper proposes a fuzzy-based approach to dynamically configure the way visual sensors will operate concerning sensing, coding and transmission patterns, exploiting different types of reference parameters. This innovative approach can be considered as the basis for multi-systems smart city applications based on visual monitoring, potentially bringing significant results for this research field.

  18. ASSESSING THE COMPARABILITY OF AMMONIUM, NITRATE AND SULFATE CONCENTRATIONS MEASURED BY THREE AIR QUALITY MONITORING NETWORKS

    EPA Science Inventory

    Airborne fine particulate matter across the United States is monitored by different networks, the three prevalent ones presently being the Clean Air Status and Trend Network (CASTNet), the Interagency Monitoring of PROtected Visual Environment Network (IMPROVE) and the Speciati...

  19. Evaluation of Visual Analytics Environments: The Road to the Visual Analytics Science and Technology Challenge Evaluation Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scholtz, Jean; Plaisant, Catherine; Whiting, Mark A.

The evaluation of visual analytics environments was a topic in Illuminating the Path [Thomas 2005] as a critical aspect of moving research into practice. For a thorough understanding of the utility of the systems available, evaluation not only involves assessing the visualizations, interactions or data processing algorithms themselves, but also the complex processes that a tool is meant to support (such as exploratory data analysis and reasoning, communication through visualization, or collaborative data analysis [Lam 2012; Carpendale 2007]). Researchers and practitioners in the field have long identified many of the challenges faced when planning, conducting, and executing an evaluation of a visualization tool or system [Plaisant 2004]. Evaluation is needed to verify that algorithms and software systems work correctly and that they represent improvements over the current infrastructure. Additionally, to effectively transfer new software into a working environment, it is necessary to ensure that the software has utility for the end-users and that the software can be incorporated into the end-user’s infrastructure and work practices. Evaluation test beds require datasets, tasks, metrics and evaluation methodologies. As noted in [Thomas 2005], it is difficult and expensive for any one researcher to set up an evaluation test bed, so in many cases evaluation is set up for communities of researchers or for various research projects or programs. Examples of successful community evaluations can be found [Chinchor 1993; Voorhees 2007; FRGC 2012]. As visual analytics environments are intended to facilitate the work of human analysts, one aspect of evaluation needs to focus on the utility of the software to the end-user. This requires representative users, representative tasks, and metrics that measure the utility to the end-user. This is even more difficult as now one aspect of the test methodology is access to representative end-users to participate in the evaluation.
In many cases the sensitive nature of data and tasks and difficult access to busy analysts puts even more of a burden on researchers to complete this type of evaluation. User-centered design goes beyond evaluation and starts with the user [Beyer 1997, Shneiderman 2009]. Having some knowledge of the type of data, tasks, and work practices helps researchers and developers know the correct paths to pursue in their work. When access to the end-users is problematic at best and impossible at worst, user-centered design becomes difficult. Researchers are unlikely to go to work on the type of problems faced by inaccessible users. Commercial vendors have difficulties evaluating and improving their products when they cannot observe real users working with their products. In well-established fields such as web site design or office software design, user-interface guidelines have been developed based on the results of empirical studies or the experience of experts. Guidelines can speed up the design process and replace some of the need for observation of actual users [heuristics review references]. In 2006 when the visual analytics community was initially getting organized, no such guidelines existed. Therefore, we were faced with the problem of developing an evaluation framework for the field of visual analytics that would provide representative situations and datasets, representative tasks and utility metrics, and finally a test methodology which would include a surrogate for representative users, increase interest in conducting research in the field, and provide sufficient feedback to the researchers so that they could improve their systems.

  20. Walking simulator for evaluation of ophthalmic devices

    NASA Astrophysics Data System (ADS)

    Barabas, James; Woods, Russell L.; Peli, Eli

    2005-03-01

Simulating mobility tasks in a virtual environment reduces risk for research subjects, and allows for improved experimental control and measurement. We are currently using a simulated shopping mall environment (where subjects walk on a treadmill in front of a large projected video display) to evaluate a number of ophthalmic devices developed at the Schepens Eye Research Institute for people with vision impairment, particularly visual field defects. We have conducted experiments to study subjects' perception of "safe passing distance" when walking towards stationary obstacles. The subjects' binary responses about potential collisions are analyzed by fitting a psychometric function, which gives an estimate of each subject's perceived safe passing distance, and the variability of subject responses. The system also enables simulations of visual field defects using head and eye tracking, enabling better understanding of the impact of visual field loss. Technical infrastructure for our simulated walking environment includes a custom eye and head tracking system, a gait feedback system to adjust treadmill speed, and a handheld 3-D pointing device. Images are generated by a graphics workstation, which contains a model with photographs of storefronts from an actual shopping mall, where concurrent validation experiments are being conducted.
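
The psychometric-function analysis mentioned in this record can be sketched as a logistic fit to binary collision judgments. The data below is simulated and the maximum-likelihood grid search is one simple fitting choice; the study's actual fitting procedure is not specified in the abstract.

```python
# Sketch: estimate perceived "safe passing distance" (the logistic threshold)
# from binary safe/unsafe judgments via a maximum-likelihood grid search.
# All data here is simulated stand-in data.
import numpy as np

def fit_psychometric(distance, judged_safe):
    """Return (threshold, slope) of the logistic maximizing the Bernoulli
    log-likelihood of the observed binary judgments."""
    best, best_ll = None, -np.inf
    for t in np.linspace(distance.min(), distance.max(), 60):
        for s in np.linspace(0.5, 10.0, 40):
            p = 1.0 / (1.0 + np.exp(-s * (distance - t)))
            p = np.clip(p, 1e-9, 1 - 1e-9)  # guard the log at 0 and 1
            ll = np.sum(judged_safe * np.log(p) + (1 - judged_safe) * np.log(1 - p))
            if ll > best_ll:
                best, best_ll = (t, s), ll
    return best

rng = np.random.default_rng(2)
dist = rng.uniform(0.0, 2.0, 200)   # simulated passing distances (m)
p_safe = 1.0 / (1.0 + np.exp(-6.0 * (dist - 1.0)))   # true threshold at 1.0 m
resp = (rng.random(200) < p_safe).astype(float)
threshold, slope = fit_psychometric(dist, resp)
```

The fitted threshold is the distance at which the subject judges a pass safe 50% of the time, and the slope reflects the variability of those judgments.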

  1. Effects of the Visual Exercise Environments on Cognitive Directed Attention, Energy Expenditure and Perceived Exertion

    PubMed Central

    Rogerson, Mike; Barton, Jo

    2015-01-01

    Green exercise research often reports psychological health outcomes without rigorously controlling exercise. This study examines effects of visual exercise environments on directed attention, perceived exertion and time to exhaustion, whilst measuring and controlling the exercise component. Participants completed three experimental conditions in a randomized counterbalanced order. Conditions varied by video content viewed (nature; built; control) during two consistently-ordered exercise bouts (Exercise 1: 60% VO2peakInt for 15-mins; Exercise 2: 85% VO2peakInt to voluntary exhaustion). In each condition, participants completed modified Backwards Digit Span tests (a measure of directed attention) pre- and post-Exercise 1. Energy expenditure, respiratory exchange ratio and perceived exertion were measured during both exercise bouts. Time to exhaustion in Exercise 2 was also recorded. There was a significant time by condition interaction for Backwards Digit Span scores (F2,22 = 6.267, p = 0.007). Scores significantly improved in the nature condition (p < 0.001) but did not in the built or control conditions. There were no significant differences between conditions for either perceived exertion or physiological measures during either Exercise 1 or Exercise 2, or for time to exhaustion in Exercise 2. This was the first study to demonstrate effects of controlled exercise conducted in different visual environments on post-exercise directed attention. Via psychological mechanisms alone, visual nature facilitates attention restoration during moderate-intensity exercise. PMID:26133125

  2. Effects of the Visual Exercise Environments on Cognitive Directed Attention, Energy Expenditure and Perceived Exertion.

    PubMed

    Rogerson, Mike; Barton, Jo

    2015-06-30

    Green exercise research often reports psychological health outcomes without rigorously controlling exercise. This study examines effects of visual exercise environments on directed attention, perceived exertion and time to exhaustion, whilst measuring and controlling the exercise component. Participants completed three experimental conditions in a randomized counterbalanced order. Conditions varied by video content viewed (nature; built; control) during two consistently-ordered exercise bouts (Exercise 1: 60% VO2peakInt for 15 min; Exercise 2: 85% VO2peakInt to voluntary exhaustion). In each condition, participants completed modified Backwards Digit Span tests (a measure of directed attention) pre- and post-Exercise 1. Energy expenditure, respiratory exchange ratio and perceived exertion were measured during both exercise bouts. Time to exhaustion in Exercise 2 was also recorded. There was a significant time by condition interaction for Backwards Digit Span scores (F(2,22) = 6.267, p = 0.007). Scores significantly improved in the nature condition (p < 0.001) but did not in the built or control conditions. There were no significant differences between conditions for either perceived exertion or physiological measures during either Exercise 1 or Exercise 2, or for time to exhaustion in Exercise 2. This was the first study to demonstrate effects of controlled exercise conducted in different visual environments on post-exercise directed attention. Via psychological mechanisms alone, visual nature facilitates attention restoration during moderate-intensity exercise.

  3. Enabling scientific workflows in virtual reality

    USGS Publications Warehouse

    Kreylos, O.; Bawden, G.; Bernardin, T.; Billen, M.I.; Cowgill, E.S.; Gold, R.D.; Hamann, B.; Jadamec, M.; Kellogg, L.H.; Staadt, O.G.; Sumner, D.Y.

    2006-01-01

    To advance research and improve the scientific return on data collection and interpretation efforts in the geosciences, we have developed methods of interactive visualization, with a special focus on immersive virtual reality (VR) environments. Earth sciences employ a strongly visual approach to the measurement and analysis of geologic data due to the spatial and temporal scales over which such data ranges. As observations and simulations increase in size and complexity, the Earth sciences are challenged to manage and interpret increasing amounts of data. Reaping the full intellectual benefits of immersive VR requires us to tailor exploratory approaches to scientific problems. These applications build on the visualization method's strengths, using both 3D perception and interaction with data and models, to take advantage of the skills and training of the geological scientists exploring their data in the VR environment. This interactive approach has enabled us to develop a suite of tools that are adaptable to a range of problems in the geosciences and beyond. Copyright © 2008 by the Association for Computing Machinery, Inc.

  4. pV3-Gold Visualization Environment for Computer Simulations

    NASA Technical Reports Server (NTRS)

    Babrauckas, Theresa L.

    1997-01-01

    A new visualization environment, pV3-Gold, can be used during and after a computer simulation to extract and visualize the physical features in the results. This environment, which is an extension of the pV3 visualization environment developed at the Massachusetts Institute of Technology with guidance and support by researchers at the NASA Lewis Research Center, features many tools that allow users to display data in various ways.

  5. Determining the Spatial and Seasonal Variability in OM/OC Ratios across the U.S. Using Multiple Regression

    EPA Science Inventory

    Data from the Interagency Monitoring of Protected Visual Environments (IMPROVE) network are used to estimate organic mass to organic carbon (OM/OC) ratios across the United States by extending previously published multiple regression techniques. Our new methodology addresses com...
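    The core of the regression approach can be sketched in one step: treat measured fine mass as a weighted sum of chemical components and read the fitted coefficient on organic carbon as the implied OM/OC ratio. This is a hedged illustration only; the published IMPROVE methodology is considerably more elaborate (intercorrelated species, seasonal stratification, measurement error), and the function and variable names here are assumptions.

```python
import numpy as np

def estimate_om_oc(mass, oc, other_components):
    # Least-squares fit of: fine mass ≈ r * OC + sum(beta_i * component_i).
    # The fitted coefficient r on organic carbon is the implied OM/OC ratio,
    # i.e. how much organic mass each unit of measured carbon carries.
    X = np.column_stack([oc] + list(other_components))
    coefs, *_ = np.linalg.lstsq(X, mass, rcond=None)
    return coefs[0]
```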

  6. Emotion-Induced Trade-Offs in Spatiotemporal Vision

    ERIC Educational Resources Information Center

    Bocanegra, Bruno R.; Zeelenberg, Rene

    2011-01-01

    It is generally assumed that emotion facilitates human vision in order to promote adaptive responses to a potential threat in the environment. Surprisingly, we recently found that emotion in some cases impairs the perception of elementary visual features (Bocanegra & Zeelenberg, 2009b). Here, we demonstrate that emotion improves fast temporal…

  7. Spotlight on Arts Education. Volume 3, Spring, 1988.

    ERIC Educational Resources Information Center

    North Carolina State Dept. of Public Instruction, Raleigh. Div. of Arts Education.

    This volume focuses on four North Carolina school systems that have developed strategies for improving teaching and learning environments in arts education. Article 1 discusses the challenge of providing adequate levels of visual arts instruction for exceptional children in Dare County and describes a specific art project for handicapped students…

  8. Applied Augmented Reality for High Precision Maintenance

    NASA Astrophysics Data System (ADS)

    Dever, Clark

    Augmented Reality had a major consumer breakthrough this year with Pokemon Go. The underlying technologies that made that app a success with gamers can be applied to improve the efficiency and efficacy of workers. This session will explore some of the use cases for augmented reality in an industrial environment. In doing so, the environmental impacts and human factors that must be considered will be explored. Additionally, the sensors, algorithms, and visualization techniques used to realize augmented reality will be discussed. The benefits of augmented reality solutions in industrial environments include automated data recording, improved quality assurance, reduction in training costs and improved mean-time-to-resolution. As technology continues to follow Moore's law, more applications will become feasible as performance-per-dollar increases across all system components.

  9. TUTORIAL: Development of a cortical visual neuroprosthesis for the blind: the relevance of neuroplasticity

    NASA Astrophysics Data System (ADS)

    Fernández, E.; Pelayo, F.; Romero, S.; Bongard, M.; Marin, C.; Alfaro, A.; Merabet, L.

    2005-12-01

    Clinical applications such as artificial vision require extraordinary, diverse, lengthy and intimate collaborations among basic scientists, engineers and clinicians. In this review, we present the state of research on a visual neuroprosthesis designed to interface with the occipital visual cortex as a means through which a limited, but useful, visual sense could be restored in profoundly blind individuals. We review the most important physiological principles regarding this neuroprosthetic approach and emphasize the role of neural plasticity in order to achieve desired behavioral outcomes. While full restoration of fine detailed vision with current technology is unlikely in the near future, the discrimination of shapes and the localization of objects should be possible, allowing blind subjects to navigate in an unfamiliar environment and perhaps even to read enlarged text. Continued research and development in neuroprosthesis technology will likely result in a substantial improvement in the quality of life of blind and visually impaired individuals.

  10. Towards Determination of Visual Requirements for Augmented Reality Displays and Virtual Environments for the Airport Tower

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.

    2006-01-01

    The visual requirements for augmented reality or virtual environments displays that might be used in real or virtual towers are reviewed with respect to similar displays already used in aircraft. As an example of the type of human performance studies needed to determine the useful specifications of augmented reality displays, an optical see-through display was used in an ATC Tower simulation. Three different binocular fields of view (14 deg, 28 deg, and 47 deg) were examined to determine their effect on subjects' ability to detect aircraft maneuvering and landing. The results suggest that binocular fields of view much greater than 47 deg are unlikely to dramatically improve search performance and that partial binocular overlap is a feasible display technique for augmented reality Tower applications.

  11. Advanced Multimodal Solutions for Information Presentation

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Godfroy-Cooper, Martine

    2018-01-01

    High-workload, fast-paced, and degraded sensory environments are the likeliest candidates to benefit from multimodal information presentation. For example, during EVA (Extra-Vehicular Activity) and telerobotic operations, the sensory restrictions associated with a space environment provide a major challenge to maintaining the situation awareness (SA) required for safe operations. Multimodal displays hold promise to enhance situation awareness and task performance by utilizing different sensory modalities and maximizing their effectiveness based on appropriate interaction between modalities. During EVA, the visual and auditory channels are likely to be the most utilized with tasks such as monitoring the visual environment, attending visual and auditory displays, and maintaining multichannel auditory communications. Previous studies have shown that compared to unimodal displays (spatial auditory or 2D visual), bimodal presentation of information can improve operator performance during simulated extravehicular activity on planetary surfaces for tasks as diverse as orientation, localization or docking, particularly when the visual environment is degraded or workload is increased. Tactile displays offer a third sensory channel that may both offload information processing effort and provide a means to capture attention when urgently required. For example, recent studies suggest that including tactile cues may result in increased orientation and alerting accuracy, improved task response time and decreased workload, as well as provide self-orientation cues in microgravity on the ISS (International Space Station). An important overall issue is that context-dependent factors like task complexity, sensory degradation, peripersonal vs. extrapersonal space operations, workload, experience level, and operator fatigue tend to vary greatly in complex real-world environments and it will be difficult to design a multimodal interface that performs well under all conditions. 
As a possible solution, adaptive systems have been proposed in which the information presented to the user changes as a function of task/context-dependent factors. However, this presupposes that adequate methods for detecting and/or predicting such factors are developed. Further, research in adaptive systems for aviation suggests that they can sometimes serve to increase workload and reduce situational awareness. It will be critical to develop multimodal display guidelines that include consideration of smart systems that can select the best display method for a particular context/situation. The scope of the current work is an analysis of potential multimodal display technologies for long duration missions and, in particular, will focus on their potential role in EVA activities. The review will address multimodal (combined visual, auditory and/or tactile) displays investigated by NASA, industry, and DoD (Dept. of Defense). It also considers the need for adaptive information systems to accommodate a variety of operational contexts such as crew status (e.g., fatigue, workload level) and task environment (e.g., EVA, habitat, rover, spacecraft). Current approaches to guidelines and best practices for combining modalities for the most effective information displays are also reviewed. Potential issues in developing interface guidelines for the Exploration Information System (EIS) are briefly considered.

  12. Eye Movement Training and Suggested Gaze Strategies in Tunnel Vision - A Randomized and Controlled Pilot Study.

    PubMed

    Ivanov, Iliya V; Mackeben, Manfred; Vollmer, Annika; Martus, Peter; Nguyen, Nhung X; Trauzettel-Klosinski, Susanne

    2016-01-01

    Degenerative retinal diseases, especially retinitis pigmentosa (RP), lead to severe peripheral visual field loss (tunnel vision), which impairs mobility. The lack of peripheral information leads to fewer horizontal eye movements and, thus, diminished scanning in RP patients in a natural environment walking task. This randomized controlled study aimed to improve mobility and the dynamic visual field by applying a compensatory Exploratory Saccadic Training (EST). Oculomotor responses during walking and avoiding obstacles in a controlled environment were studied before and after saccade or reading training in 25 RP patients. Eye movements were recorded using a mobile infrared eye tracker (Tobii glasses) that measured a range of spatial and temporal variables. Patients were randomly assigned to two training conditions: Saccade (experimental) and reading (control) training. All subjects who first performed reading training underwent experimental training later (waiting list control group). To assess the effect of training on subjects, we measured performance in the training task and the following outcome variables related to daily life: Response Time (RT) during exploratory saccade training, Percent Preferred Walking Speed (PPWS), the number of collisions with obstacles, eye position variability, fixation duration, and the total number of fixations including the ones in the subjects' blind area of the visual field. In the saccade training group, RTs on average decreased, while the PPWS significantly increased. The improvement persisted, as tested 6 weeks after the end of the training. On average, the eye movement range of RP patients before and after training was similar to that of healthy observers. In both the experimental and reading training groups, we found many fixations outside the subjects' seeing visual field before and after training. 
The average fixation duration was significantly shorter after the training, but only in the experimental training condition. We conclude that the exploratory saccade training was beneficial for RP patients and resulted in shorter fixation durations after the training. We also found a significant improvement in relative walking speed during navigation in a real-world-like controlled environment.
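    Percent Preferred Walking Speed (PPWS), one of the outcome variables above, is conventionally the walking speed achieved during the mobility task expressed as a percentage of the subject's unobstructed preferred speed. A minimal sketch, assuming that conventional definition:

```python
def ppws(task_speed, preferred_speed):
    # Percent Preferred Walking Speed: speed during the obstacle-avoidance
    # task as a percentage of the unobstructed preferred walking speed.
    # 100% means the subject walks as fast as in the unobstructed baseline.
    if preferred_speed <= 0:
        raise ValueError("preferred speed must be positive")
    return 100.0 * task_speed / preferred_speed
```

    For example, a subject who prefers 1.2 m/s but walks at 0.6 m/s while avoiding obstacles scores a PPWS of 50%; the reported training benefit corresponds to this percentage rising.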

  13. Eye Movement Training and Suggested Gaze Strategies in Tunnel Vision - A Randomized and Controlled Pilot Study

    PubMed Central

    Ivanov, Iliya V.; Mackeben, Manfred; Vollmer, Annika; Martus, Peter; Nguyen, Nhung X.; Trauzettel-Klosinski, Susanne

    2016-01-01

    Purpose: Degenerative retinal diseases, especially retinitis pigmentosa (RP), lead to severe peripheral visual field loss (tunnel vision), which impairs mobility. The lack of peripheral information leads to fewer horizontal eye movements and, thus, diminished scanning in RP patients in a natural environment walking task. This randomized controlled study aimed to improve mobility and the dynamic visual field by applying a compensatory Exploratory Saccadic Training (EST). Methods: Oculomotor responses during walking and avoiding obstacles in a controlled environment were studied before and after saccade or reading training in 25 RP patients. Eye movements were recorded using a mobile infrared eye tracker (Tobii glasses) that measured a range of spatial and temporal variables. Patients were randomly assigned to two training conditions: Saccade (experimental) and reading (control) training. All subjects who first performed reading training underwent experimental training later (waiting list control group). To assess the effect of training on subjects, we measured performance in the training task and the following outcome variables related to daily life: Response Time (RT) during exploratory saccade training, Percent Preferred Walking Speed (PPWS), the number of collisions with obstacles, eye position variability, fixation duration, and the total number of fixations including the ones in the subjects' blind area of the visual field. Results: In the saccade training group, RTs on average decreased, while the PPWS significantly increased. The improvement persisted, as tested 6 weeks after the end of the training. On average, the eye movement range of RP patients before and after training was similar to that of healthy observers. In both the experimental and reading training groups, we found many fixations outside the subjects' seeing visual field before and after training. 
The average fixation duration was significantly shorter after the training, but only in the experimental training condition. Conclusions: We conclude that the exploratory saccade training was beneficial for RP patients and resulted in shorter fixation durations after the training. We also found a significant improvement in relative walking speed during navigation in a real-world-like controlled environment. PMID:27351629

  14. Will musculoskeletal and visual stress change when Visual Display Unit (VDU) operators move from small offices to an ergonomically optimized office landscape?

    PubMed

    Helland, Magne; Horgen, Gunnar; Kvikstad, Tor Martin; Garthus, Tore; Aarås, Arne

    2011-11-01

    This study investigated the effect of moving from small offices to a landscape environment for 19 Visual Display Unit (VDU) operators at Alcatel Denmark AS. The operators reported significantly improved lighting condition and glare situation. Further, visual discomfort was also significantly reduced on a Visual Analogue Scale (VAS). There was no significant correlation between lighting condition and visual discomfort either in the small offices or in the office landscape. However, visual discomfort correlated significantly with glare in small offices, i.e., more glare was related to more visual discomfort. This correlation disappeared after the lighting system in the office landscape had been improved. There was also a significant correlation between glare and itching of the eyes as well as blurred vision in the small offices, i.e., more glare, more visual symptoms. Experience of pain was found to reduce the subjective assessment of work capacity during VDU tasks. There was a significant correlation between visual discomfort and reduced work capacity in small offices and in the office landscape. When moving from the small offices to the office landscape, there was a significant reduction in headache as well as back pain. No significant changes in pain intensity in the neck, shoulder, forearm, and wrist/hand were observed. The pain levels in different body areas were significantly correlated with subjective assessment of reduced work capacity in small offices and in the office landscape. By careful design and construction of an office landscape with regard to lighting and visual conditions, transfer from small offices may be acceptable from a visual-ergonomic point of view. Copyright © 2011 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  15. Updates on measurements and modeling techniques for expendable countermeasures

    NASA Astrophysics Data System (ADS)

    Gignilliat, Robert; Tepfer, Kathleen; Wilson, Rebekah F.; Taczak, Thomas M.

    2016-10-01

    The potential threat of recently-advertised anti-ship missiles has instigated research at the United States (US) Naval Research Laboratory (NRL) into the improvement of measurement techniques for visual band countermeasures. The goal of measurements is the collection of radiometric imagery for use in the building and validation of digital models of expendable countermeasures. This paper will present an overview of measurement requirements unique to the visual band and differences between visual band and infrared (IR) band measurements. A review of the metrics used to characterize signatures in the visible band will be presented and contrasted to those commonly used in IR band measurements. For example, the visual band measurements require higher fidelity characterization of the background, including improved high-transmittance measurements and better characterization of solar conditions to correlate results more closely with changes in the environment. The range of relevant engagement angles has also been expanded to include higher altitude measurements of targets and countermeasures. In addition to the discussion of measurement techniques, a top-level qualitative summary of modeling approaches will be presented. No quantitative results or data will be presented.

  16. Multisensory Integration in the Virtual Hand Illusion with Active Movement

    PubMed Central

    Satoh, Satoru; Hachimura, Kozaburo

    2016-01-01

    Improving the sense of immersion is one of the core issues in virtual reality. Perceptual illusions of ownership can be perceived over a virtual body in a multisensory virtual reality environment. Rubber Hand and Virtual Hand Illusions showed that body ownership can be manipulated by applying suitable visual and tactile stimulation. In this study, we investigate the effects of multisensory integration in the Virtual Hand Illusion with active movement. A virtual xylophone playing system which can interactively provide synchronous visual, tactile, and auditory stimulation was constructed. We conducted two experiments regarding different movement conditions and different sensory stimulations. Our results demonstrate that multisensory integration with free active movement can improve the sense of immersion in virtual reality. PMID:27847822

  17. Viewfinders: A Visual Environmental Literacy Curriculum. Elementary Unit: Exploring Community Appearance and the Environment.

    ERIC Educational Resources Information Center

    Dunn Foundation, Warwick, RI.

    Recognizing that community growth and change are inevitable, Viewfinders' goals are as follows: to introduce students and teachers to the concept of the visual environment; enhance an understanding of the interrelationship between the built and natural environment; create an awareness that the visual environment affects the economy and quality of…

  18. Idiosyncratic characteristics of saccadic eye movements when viewing different visual environments.

    PubMed

    Andrews, T J; Coppola, D M

    1999-08-01

    Eye position was recorded in different viewing conditions to assess whether the temporal and spatial characteristics of saccadic eye movements in different individuals are idiosyncratic. Our aim was to determine the degree to which oculomotor control is based on endogenous factors. A total of 15 naive subjects viewed five visual environments: (1) the absence of visual stimulation (i.e., a dark room); (2) a repetitive visual environment (i.e., simple textured patterns); (3) a complex natural scene; (4) a visual search task; and (5) reading text. Although differences in visual environment had significant effects on eye movements, idiosyncrasies were also apparent. For example, the mean fixation duration and size of an individual's saccadic eye movements when passively viewing a complex natural scene covaried significantly with those same parameters in the absence of visual stimulation and in a repetitive visual environment. In contrast, an individual's spatio-temporal characteristics of eye movements during active tasks such as reading text or visual search covaried together, but did not correlate with the pattern of eye movements detected when viewing a natural scene, simple patterns or in the dark. These idiosyncratic patterns of eye movements in normal viewing reveal an endogenous influence on oculomotor control. The independent covariance of eye movements during different visual tasks shows that saccadic eye movements during active tasks like reading or visual search differ from those engaged during the passive inspection of visual scenes.
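    The parameters compared across conditions here, such as mean fixation duration, are typically derived by segmenting the eye-position trace with a velocity threshold. The sketch below is a simplified 1-D version of that idea (I-VT-style segmentation); real gaze data is 2-D and noisier, needs filtering, and the 30 deg/s threshold is an assumed, commonly cited value rather than one taken from this study.

```python
import numpy as np

def fixation_durations(x_deg, t_s, vel_thresh=30.0):
    # Label samples as fixation when eye velocity stays below vel_thresh
    # (deg/s), then return the duration (s) of each contiguous fixation run.
    v = np.abs(np.gradient(x_deg, t_s))
    fix = v < vel_thresh
    durations, start = [], None
    for i, f in enumerate(fix):
        if f and start is None:
            start = i                      # fixation run begins
        elif not f and start is not None:
            durations.append(t_s[i - 1] - t_s[start])  # run ended by a saccade
            start = None
    if start is not None:                  # trace ends mid-fixation
        durations.append(t_s[-1] - t_s[start])
    return durations
```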

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scholtz, Jean

    A new field of research, visual analytics, has recently been introduced. This has been defined as "the science of analytical reasoning facilitated by visual interfaces." Visual analytic environments, therefore, support analytical reasoning using visual representations and interactions, with data representations and transformation capabilities, to support production, presentation and dissemination. As researchers begin to develop visual analytic environments, it will be advantageous to develop metrics and methodologies to help researchers measure the progress of their work and understand the impact their work will have on the users who will work in such environments. This paper presents five areas or aspects of visual analytic environments that should be considered as metrics and methodologies for evaluation are developed. Evaluation aspects need to include usability, but it is necessary to go beyond basic usability. The areas of situation awareness, collaboration, interaction, creativity, and utility are proposed as areas for initial consideration. The steps that need to be undertaken to develop systematic evaluation methodologies and metrics for visual analytic environments are outlined.

  20. A visual-environment simulator with variable contrast

    NASA Astrophysics Data System (ADS)

    Gusarova, N. F.; Demin, A. V.; Polshchikov, G. V.

    1987-01-01

    A visual-environment simulator is proposed in which the image contrast can be varied continuously up to the reversal of the image. Contrast variability can be achieved by using two independently adjustable light sources to simultaneously illuminate the carrier of visual information (e.g., a slide or a cinematographic film). It is shown that such a scheme makes it possible to adequately model a complex visual environment.
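    The two-source scheme can be illustrated with a toy luminance model: one source illuminates the transparency directly, while the second adds an adjustable uniform term, so Michelson contrast falls continuously as the second source is raised (and the image reverses if the second source instead illuminates a negative of the scene). This model, and all names in it, are my illustrative assumptions, not the simulator's actual optics.

```python
def michelson_contrast(direct, veiling, t_max=0.9, t_min=0.1):
    # Hypothetical model: `direct` lights the transparency (transmittance
    # between t_min and t_max); `veiling` adds uniform luminance on top.
    # Raising `veiling` washes out the displayed contrast continuously.
    l_max = direct * t_max + veiling
    l_min = direct * t_min + veiling
    return (l_max - l_min) / (l_max + l_min)
```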

  1. Neural practice effect during cross-modal selective attention: Supra-modal and modality-specific effects.

    PubMed

    Xia, Jing; Zhang, Wei; Jiang, Yizhou; Li, You; Chen, Qi

    2018-05-16

    Practice and experiences gradually shape the central nervous system, from the synaptic level to large-scale neural networks. In a natural multisensory environment, even when inundated by streams of information from multiple sensory modalities, our brain does not give equal weight to different modalities. Rather, visual information more frequently receives preferential processing and eventually dominates consciousness and behavior, i.e., visual dominance. It remains unknown, however, whether the practice effect during cross-modal selective attention is supra-modal or modality-specific, and whether it shows the same modality preference as the visual dominance effect in the multisensory environment. To answer these two questions, we adopted a cross-modal selective attention paradigm in conjunction with the hybrid fMRI design. Behaviorally, visual performance significantly improved while auditory performance remained constant with practice, indicating that visual attention more flexibly adapted behavior with practice than auditory attention. At the neural level, the practice effect was associated with decreasing neural activity in the frontoparietal executive network and increasing activity in the default mode network, which occurred independently of the modality attended, i.e., the supra-modal mechanisms. On the other hand, functional decoupling between the auditory and the visual system was observed with the progress of practice, which varied as a function of the modality attended. The auditory system was functionally decoupled from both the dorsal and ventral visual streams during auditory attention, while it was decoupled only from the ventral visual stream during visual attention. To efficiently suppress the irrelevant visual information with practice, auditory attention needs to additionally decouple the auditory system from the dorsal visual stream. 
The modality-specific mechanisms, together with the behavioral effect, thus support the visual dominance model in terms of the practice effect during cross-modal selective attention. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. Perception-action coupling and anticipatory performance in baseball batting.

    PubMed

    Ranganathan, Rajiv; Carlton, Les G

    2007-09-01

    The authors examined 10 expert and 10 novice baseball batters' ability to distinguish between a fastball and a change-up in a virtual environment. They used 2 different response modes: (a) an uncoupled response in which the batters verbally predicted the type of pitch and (b) a coupled response in which the batters swung a baseball bat to try and hit the virtual ball. The authors manipulated visual information from the pitcher and ball in 6 visual conditions. The batters were more accurate in predicting the type of pitch when the response was uncoupled. In coupled responses, experts were better able to use the first 100 ms of ball flight independently of the pitcher's kinematics. In addition, the skilled batters' stepping patterns were related to the pitcher's kinematics, whereas their swing time was related to ball speed. Those findings suggest that specific task requirements determine whether a highly coupled perception-action environment improves anticipatory performance. The authors also highlight the need for research on interceptive actions to be conducted in the performer's natural environment.

  3. Middle School Students' Mathematics Knowledge Retention: Online or Face-To-Face Environments

    ERIC Educational Resources Information Center

    Edwards, Clayton M.; Rule, Audrey C.; Boody, Robert M.

    2017-01-01

    Educators seek to develop students' mathematical knowledge retention to increase student efficacy in follow-on classwork, improvement of test scores, attainment of standards, and preparation for careers. Interactive visuals, feedback during problem solving, and incorporation of higher-order thinking skills are known to increase retention, but a…

  4. Integrating automated support for a software management cycle into the TAME system

    NASA Technical Reports Server (NTRS)

    Sunazuka, Toshihiko; Basili, Victor R.

    1989-01-01

    Software managers are interested in the quantitative management of software quality, cost and progress. An integrated software management methodology, which can be applied throughout the software life cycle for any number of purposes, is required. The TAME (Tailoring A Measurement Environment) methodology is based on the improvement paradigm and the goal/question/metric (GQM) paradigm. This methodology helps generate a software engineering process and measurement environment based on the project characteristics. The SQMAR (software quality measurement and assurance technology) is a software quality metric system and methodology applied to the development processes. It is based on the feed-forward control principle. Quality target setting is carried out before the plan-do-check-action activities are performed. These methodologies are integrated to realize goal oriented measurement, process control and visual management. A metric setting procedure based on the GQM paradigm, a management system called the software management cycle (SMC), and its application to a case study based on NASA/SEL data are discussed. The expected effects of SMC are quality improvement, managerial cost reduction, accumulation and reuse of experience, and a highly visual management reporting system.
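    The goal/question/metric hierarchy mentioned above maps naturally onto a small data structure: each measurement goal spawns questions, and each question is answered by concrete metrics. A hypothetical sketch of that hierarchy (the class names and fields are my assumptions, not the TAME implementation):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Metric:
    name: str
    value: Optional[float] = None  # filled in once measurement data arrives

@dataclass
class Question:
    text: str
    metrics: List[Metric] = field(default_factory=list)

@dataclass
class Goal:
    purpose: str
    questions: List[Question] = field(default_factory=list)

    def all_metrics(self) -> List[Metric]:
        # Flatten goal -> question -> metric for data-collection planning.
        return [m for q in self.questions for m in q.metrics]
```

    Walking the tree top-down yields the metric set a project must collect; walking it bottom-up interprets collected data against the original goal, which is the essence of GQM.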

  5. Analysis and visualization of intracardiac electrograms in diagnosis and research: Concept and application of KaPAVIE.

    PubMed

    Oesterlein, Tobias Georg; Schmid, Jochen; Bauer, Silvio; Jadidi, Amir; Schmitt, Claus; Dössel, Olaf; Luik, Armin

    2016-04-01

    Progress in biomedical engineering has improved the hardware available for diagnosis and treatment of cardiac arrhythmias. But although huge amounts of intracardiac electrograms (EGMs) can be acquired during electrophysiological examinations, there is still a lack of software aiding diagnosis. The development of novel algorithms for the automated analysis of EGMs has proven difficult, due to the highly interdisciplinary nature of this task and hampered data access in clinical systems. Thus we developed a software platform, which allows rapid implementation of new algorithms, verification of their functionality and suitable visualization for discussion in the clinical environment. A software for visualization was developed in Qt5 and C++ utilizing the class library of VTK. The algorithms for signal analysis were implemented in MATLAB. Clinical data for analysis was exported from electroanatomical mapping systems. The visualization software KaPAVIE (Karlsruhe Platform for Analysis and Visualization of Intracardiac Electrograms) was implemented and tested on several clinical datasets. Both common and novel algorithms were implemented which address important clinical questions in diagnosis of different arrhythmias. It proved useful in discussions with clinicians due to its interactive and user-friendly design. Time after export from the clinical mapping system to visualization is below 5 min. KaPAVIE is a powerful platform for the development of novel algorithms in the clinical environment. Simultaneous and interactive visualization of measured EGM data and the results of analysis will aid diagnosis and help in understanding the underlying mechanisms of complex arrhythmias like atrial fibrillation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  6. An Advanced Framework for Improving Situational Awareness in Electric Power Grid Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Huang, Zhenyu; Zhou, Ning

    With the deployment of new smart grid technologies and the penetration of renewable energy in power systems, significant uncertainty and variability is being introduced into power grid operation. Traditionally, the Energy Management System (EMS) operates the power grid in a deterministic mode, and thus will not be sufficient for the future control center in a stochastic environment with faster dynamics. One of the main challenges is to improve situational awareness. This paper reviews the current status of power grid operation and presents a vision of improving wide-area situational awareness for a future control center. An advanced framework, consisting of parallel state estimation, state prediction, parallel contingency selection, parallel contingency analysis, and advanced visual analytics, is proposed to provide capabilities needed for better decision support by utilizing high performance computing (HPC) techniques and advanced visual analytic techniques. Research results are presented to support the proposed vision and framework.

  7. Towards Determination of Visual Requirements for Augmented Reality Displays and Virtual Environments for the Airport Tower

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.

    2006-01-01

    The visual requirements for augmented reality or virtual environment displays that might be used in real or virtual towers are reviewed with respect to similar displays already used in aircraft. As an example of the type of human performance studies needed to determine the useful specifications of augmented reality displays, an optical see-through display was used in an ATC Tower simulation. Three different binocular fields of view (14 deg, 28 deg, and 47 deg) were examined to determine their effect on subjects' ability to detect aircraft maneuvering and landing. The results suggest that binocular fields of view much greater than 47 deg are unlikely to dramatically improve search performance and that partial binocular overlap is a feasible display technique for augmented reality Tower applications.

  8. Robot computer problem solving system

    NASA Technical Reports Server (NTRS)

    Merriam, E. W.; Becker, J. D.

    1973-01-01

    A robot computer problem solving system which represents a robot exploration vehicle in a simulated Mars environment is described. The model exhibits changes and improvements made on a previously designed robot in a city environment. The Martian environment is modeled in Cartesian coordinates; objects are scattered about a plane; arbitrary restrictions on the robot's vision have been removed; and the robot's path contains arbitrary curves. New environmental features, particularly the visual occlusion of objects by other objects, were added to the model. Two different algorithms were developed for computing occlusion. Movement and vision capabilities of the robot were established in the Mars environment, using a LISP/FORTRAN interface for computational efficiency. The graphical display program was redesigned to reflect the change to the Mars-like environment.
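    The report's two occlusion algorithms are not given in the record. As a hedged sketch of the simplest possible approach (all names and geometry invented for illustration), the 2-D test below decides whether a disc-shaped object hides a target point from the robot's viewpoint by comparing range and angular extent:

    ```python
    import math

    def occludes(viewer, near, far, half_width):
        """True if the disc at `near` (radius half_width) hides point `far`
        as seen from `viewer`. A minimal 2-D visibility test."""
        def bearing(p):
            return math.atan2(p[1] - viewer[1], p[0] - viewer[0])
        def dist(p):
            return math.hypot(p[0] - viewer[0], p[1] - viewer[1])
        if dist(near) >= dist(far):
            return False                   # occluder lies behind the target
        ang = abs(bearing(far) - bearing(near))
        # target is hidden if it falls within the occluder's angular half-size
        return ang <= math.atan2(half_width, dist(near))

    print(occludes((0, 0), (5, 0), (10, 0.4), 1.0))   # target behind the disc
    print(occludes((0, 0), (5, 3), (10, -4), 1.0))    # target on a clear bearing
    ```

    A real implementation would also handle bearing wrap-around at ±π and partially occluded extended objects; this sketch only captures the point-versus-disc case.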

  9. Effects of chronic iTBS-rTMS and enriched environment on visual cortex early critical period and visual pattern discrimination in dark-reared rats.

    PubMed

    Castillo-Padilla, Diana V; Funke, Klaus

    2016-01-01

    The early cortical critical period is a state of enhanced neuronal plasticity enabling the establishment of specific neuronal connections during first sensory experience. Visual performance with regard to pattern discrimination is impaired if the cortex is deprived of visual input during the critical period. We wondered how unspecific activation of the visual cortex before closure of the critical period using repetitive transcranial magnetic stimulation (rTMS) could affect the critical period and the visual performance of the experimental animals. Would it cause premature closure of the plastic state and thus worsen experience-dependent visual performance, or would it be able to preserve plasticity? Effects of intermittent theta-burst stimulation (iTBS) were compared with those of an enriched environment (EE) during dark-rearing (DR) from birth. Rats dark-reared in a standard cage showed poor improvement in a visual pattern discrimination task, while rats housed in EE or treated with iTBS showed a performance indistinguishable from rats reared in a normal light/dark cycle. The behavioral effects were accompanied by correlated changes in the expression of brain-derived neurotrophic factor (BDNF) and atypical PKC (PKCζ/PKMζ), two factors controlling stabilization of synaptic potentiation. It appears that not only nonvisual sensory activity and exercise but also cortical activation induced by rTMS has the potential to alleviate the effects of DR on cortical development, most likely due to stimulation of BDNF synthesis and release. As we showed previously, iTBS reduced the expression of parvalbumin in inhibitory cortical interneurons, indicating that modulation of the activity of fast-spiking interneurons contributes to the observed effects of iTBS. © 2015 Wiley Periodicals, Inc.

  10. Early multisensory interactions affect the competition among multiple visual objects.

    PubMed

    Van der Burg, Erik; Talsma, Durk; Olivers, Christian N L; Hickey, Clayton; Theeuwes, Jan

    2011-04-01

    In dynamic cluttered environments, audition and vision may benefit from each other in determining what deserves further attention and what does not. We investigated the underlying neural mechanisms responsible for attentional guidance by audiovisual stimuli in such an environment. Event-related potentials (ERPs) were measured during visual search through dynamic displays consisting of line elements that randomly changed orientation. Search accuracy improved when a target orientation change was synchronized with an auditory signal as compared to when the auditory signal was absent or synchronized with a distractor orientation change. The ERP data show that behavioral benefits were related to an early multisensory interaction over left parieto-occipital cortex (50-60 ms post-stimulus onset), which was followed by an early positive modulation (80-100 ms) over occipital and temporal areas contralateral to the audiovisual event, an enhanced N2pc (210-250 ms), and a contralateral negative slow wave (CNSW). The early multisensory interaction was correlated with behavioral search benefits, indicating that participants with a strong multisensory interaction benefited the most from the synchronized auditory signal. We suggest that an auditory signal enhances the neural response to a synchronized visual event, which increases the chances of selection in a multiple object environment. Copyright © 2010 Elsevier Inc. All rights reserved.

  11. Multimodal Communication in a Noisy Environment: A Case Study of the Bornean Rock Frog Staurois parvus

    PubMed Central

    Grafe, T. Ulmar; Preininger, Doris; Sztatecsny, Marc; Kasah, Rosli; Dehling, J. Maximilian; Proksch, Sebastian; Hödl, Walter

    2012-01-01

    High background noise is an impediment to signal detection and perception. We report the use of multiple solutions to improve signal perception in the acoustic and visual modality by the Bornean rock frog, Staurois parvus. We discovered that vocal communication was not impaired by continuous abiotic background noise characterised by fast-flowing water. Males modified amplitude, pitch, repetition rate and duration of notes within their advertisement call. The difference in sound pressure between advertisement calls and background noise at the call dominant frequency of 5578 Hz was 8 dB, a difference sufficient for receiver detection. In addition, males used several visual signals to communicate with conspecifics with foot flagging and foot flashing being the most common and conspicuous visual displays, followed by arm waving, upright posture, crouching, and an open-mouth display. We used acoustic playback experiments to test the efficacy-based alerting signal hypothesis of multimodal communication. In support of the alerting hypothesis, we found that acoustic signals and foot flagging are functionally linked with advertisement calling preceding foot flagging. We conclude that S. parvus has solved the problem of continuous broadband low-frequency noise by both modifying its advertisement call in multiple ways and by using numerous visual signals. This is the first example of a frog using multiple acoustic and visual solutions to communicate in an environment characterised by continuous noise. PMID:22655089

  12. A distributed analysis and visualization system for model and observational data

    NASA Technical Reports Server (NTRS)

    Wilhelmson, Robert B.

    1994-01-01

    Software was developed with NASA support to aid in the analysis and display of the massive amounts of data generated from satellites, observational field programs, and from model simulations. This software was developed in the context of the PATHFINDER (Probing ATmospHeric Flows in an Interactive and Distributed EnviRonment) Project. The overall aim of this project is to create a flexible, modular, and distributed environment for data handling, modeling simulations, data analysis, and visualization of atmospheric and fluid flows. Software completed with NASA support includes GEMPAK analysis, data handling, and display modules for which collaborators at NASA had primary responsibility, and prototype software modules for three-dimensional interactive and distributed control and display as well as data handling, for which NCSA was responsible. Overall process control was handled through a scientific and visualization application builder from Silicon Graphics known as the Iris Explorer. In addition, the GEMPAK related work (GEMVIS) was also ported to the Advanced Visualization System (AVS) application builder. Many modules were developed to enhance those already available in Iris Explorer including HDF file support, improved visualization and display, simple lattice math, and the handling of metadata through development of a new grid datatype. Complete source and runtime binaries, along with on-line documentation, are available via the World Wide Web at: http://redrock.ncsa.uiuc.edu/PATHFINDER/pathre12/top/top.html.

  13. Postural and Spatial Orientation Driven by Virtual Reality

    PubMed Central

    Keshner, Emily A.; Kenyon, Robert V.

    2009-01-01

    Orientation in space is a perceptual variable intimately related to postural orientation that relies on visual and vestibular signals to correctly identify our position relative to vertical. We have combined a virtual environment with motion of a posture platform to produce visual-vestibular conditions that allow us to explore how motion of the visual environment may affect perception of vertical and, consequently, affect postural stabilizing responses. In order to involve a higher level perceptual process, we needed to create a visual environment that was immersive. We did this by developing visual scenes that possess contextual information using color, texture, and 3-dimensional structures. Update latency of the visual scene was close to physiological latencies of the vestibulo-ocular reflex. Using this system we found that even when healthy young adults stand and walk on a stable support surface, they are unable to ignore wide field of view visual motion and they adapt their postural orientation to the parameters of the visual motion. Balance training within our environment elicited measurable rehabilitation outcomes. Thus we believe that virtual environments can serve as a clinical tool for evaluation and training of movement in situations that closely reflect conditions found in the physical world. PMID:19592796

  14. Implementing a Nurse Manager Profile to Improve Unit Performance.

    PubMed

    Krugman, Mary E; Sanders, Carolyn L

    2016-06-01

    Nurse managers face significant pressures in the rapidly changing healthcare environment. Staying current with multiple sources of data, including reports that detail institutional and unit performance outcomes, is particularly challenging. A Nurse Manager Customized Profile was developed at a western academic hospital to provide a 1-page visual of pertinent data to help managers and director supervisors focus coaching to improve unit performance. Use of the Decisional Involvement Scale provided new insights into measuring manager performance.

  15. Influence of moving visual environment on sit-to-stand kinematics in children and adults.

    PubMed

    Slaboda, Jill C; Barton, Joseph E; Keshner, Emily A

    2009-08-01

    The effect of visual field motion on the sit-to-stand kinematics of adults and children was investigated. Children (8 to 12 years of age) and adults (21 to 49 years of age) were seated in a virtual environment that rotated in the pitch and roll directions. Participants stood up either (1) concurrent with onset of visual motion or (2) after an immersion period in the moving visual environment, and (3) without visual input. Angular velocities of the head with respect to the trunk, and of the trunk with respect to the environment, were calculated, as were head and trunk center of mass. Both adults and children reduced head and trunk angular velocity after immersion in the moving visual environment. Unlike adults, children demonstrated significant differences in displacement of the head center of mass during the immersion and concurrent trials when compared to trials without visual input. Results suggest a time-dependent effect of vision on sit-to-stand kinematics in adults, whereas children are influenced by the immediate presence or absence of vision.

  16. Making Time for Nature: Visual Exposure to Natural Environments Lengthens Subjective Time Perception and Reduces Impulsivity

    PubMed Central

    Berry, Meredith S.; Repke, Meredith A.; Nickerson, Norma P.; Conway, Lucian G.; Odum, Amy L.; Jordan, Kerry E.

    2015-01-01

    Impulsivity in delay discounting is associated with maladaptive behaviors such as overeating and drug and alcohol abuse. Researchers have recently noted that delay discounting, even when measured by a brief laboratory task, may be the best predictor of human health-related behaviors (e.g., exercise) currently available. Identifying techniques to decrease impulsivity in delay discounting, therefore, could help improve decision-making on a global scale. Visual exposure to natural environments is one recent approach shown to decrease impulsive decision-making in a delay discounting task, although the mechanism driving this result is currently unknown. The present experiment was thus designed to evaluate not only whether visual exposure to natural (mountains, lakes) relative to built (buildings, cities) environments resulted in less impulsivity, but also whether this exposure influenced time perception. Participants were randomly assigned to either a natural environment condition or a built environment condition. Participants viewed photographs of either natural scenes or built scenes before and during a delay discounting task in which they made choices about receiving immediate or delayed hypothetical monetary outcomes. Participants also completed an interval bisection task in which natural or built stimuli were judged as relatively longer or shorter presentation durations. Following the delay discounting and interval bisection tasks, additional measures of time perception were administered, including how many minutes participants thought had passed during the session and a scale measurement of whether time "flew" or "dragged" during the session. Participants exposed to natural as opposed to built scenes were less impulsive and also reported longer subjective session times, although no differences across groups were revealed with the interval bisection task. These results are the first to suggest that decreased impulsivity from exposure to natural as opposed to built environments may be related to lengthened time perception. PMID:26558610
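    As background for the delay-discounting task described above (the record does not give the exact task parameters), the standard hyperbolic model values a delayed reward as V = A / (1 + kD), where the fitted parameter k indexes impulsivity: larger k means steeper discounting. A minimal sketch:

    ```python
    def subjective_value(amount, delay, k):
        """Hyperbolic discounting: value of `amount` received after `delay`
        time units, for a chooser with discounting rate k."""
        return amount / (1.0 + k * delay)

    # A steep discounter (k = 0.05 per day) devalues $100 in 180 days far
    # more than a shallow discounter (k = 0.005 per day):
    print(subjective_value(100, 180, 0.05))    # 10.0
    print(subjective_value(100, 180, 0.005))   # ~52.6
    ```

    Experiments like the one above estimate k from choices between immediate and delayed amounts; a manipulation that lowers k is described as reducing impulsivity.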

  17. Evaluating visual and auditory contributions to the cognitive restoration effect.

    PubMed

    Emfield, Adam G; Neider, Mark B

    2014-01-01

    It has been suggested that certain real-world environments can have a restorative effect on an individual, as expressed in changes in cognitive performance and mood. Much of this research builds on Attention Restoration Theory (ART), which suggests that environments that have certain characteristics induce cognitive restoration via variations in attentional demands. Specifically, natural environments that require little top-down processing have a positive effect on cognitive performance, while city-like environments show no effect. We characterized the cognitive restoration effect further by examining (1) whether natural visual stimuli, such as blue spaces, were more likely to provide a restorative effect over urban visual stimuli, (2) if increasing immersion with environment-related sound produces a similar or superior effect, (3) if this effect extends to other cognitive tasks, such as the functional field of view (FFOV), and (4) if we could better understand this effect by providing controls beyond previous works. We had 202 participants complete a cognitive task battery, consisting of a reverse digit span task, the attention network task, and the FFOV task prior to and immediately after a restoration period. In the restoration period, participants were assigned to one of seven conditions in which they listened to natural or urban sounds, watched images of natural or urban environments, or a combination of both. Additionally, some participants were in a control group with exposure to neither picture nor sound. While we found some indication of practice effects, there were no differential effects of restoration observed in any of our cognitive tasks, regardless of condition. We did, however, find evidence that our nature images and sounds were more relaxing than their urban counterparts. Overall, our findings suggest that acute exposure to relaxing pictorial and auditory stimulus is insufficient to induce improvements in cognitive performance.

  18. Radiation Blocking Lenses

    NASA Technical Reports Server (NTRS)

    1989-01-01

    Biomedical Optical Company of America's (BOCA) suntiger lenses, similar in principle to natural filters in the eyes of hawks and eagles, bar 99 percent of potentially harmful wavelengths, while allowing visually useful colors of light (red, orange, green) to pass through. They also improve visual acuity, night vision and haze or fog visibility. The lenses evolved from work done by James B. Stephens and Dr. Charles G. Miller of the Jet Propulsion Laboratory. They developed a formula and produced a commercial welding curtain that absorbs, filters, and scatters light. This research led to protective glasses now used by dentists, workers in hazardous environments, CRT operators and skiers.

  19. Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot

    PubMed Central

    Tidoni, Emmanuele; Gergondet, Pierre; Kheddar, Abderrahmane; Aglioti, Salvatore M.

    2014-01-01

    Advances in brain-computer interface (BCI) technology allow people to actively interact with the world through surrogates. Controlling real humanoid robots using a BCI as intuitively as we control our own body represents a challenge for current research in robotics and neuroscience. In order to successfully interact with the environment the brain integrates multiple sensory cues to form a coherent representation of the world. Cognitive neuroscience studies demonstrate that multisensory integration may imply a gain with respect to a single modality and ultimately improve the overall sensorimotor performance. For example, reactivity to simultaneous visual and auditory stimuli may be higher than to the sum of the same stimuli delivered in isolation or in temporal sequence. Yet, knowledge about whether audio-visual integration may improve the control of a surrogate is meager. To explore this issue, we provided human footstep sounds as audio feedback to BCI users while controlling a humanoid robot. Participants were asked to steer their robot surrogate and perform a pick-and-place task through BCI-SSVEPs. We found that audio-visual synchrony between the footstep sounds and the humanoid's actual walk reduces the time required for steering the robot. Thus, auditory feedback congruent with the humanoid's actions may improve the BCI user's motor decisions and enhance the feeling of control over the surrogate. Our results shed light on the possibility of increasing control over a robot through combined multisensory feedback to a BCI user. PMID:24987350

  20. Information Visualization in Virtual Environments

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Kwak, Dochan (Technical Monitor)

    2001-01-01

    Virtual environments provide a natural setting for a wide range of information visualization applications, particularly when the information to be visualized is defined on a three-dimensional domain (Bryson, 1996). This chapter provides an overview of the issues that arise when designing and implementing an information visualization application in a virtual environment. Many design issues that arise, such as issues of display and user tracking, are common to any application of virtual environments. In this chapter we focus on those issues that are special to information visualization applications, as issues of wider concern are addressed elsewhere in this book.

  1. Distributed Observer Network

    NASA Technical Reports Server (NTRS)

    Conroy, Michael; Mazzone, Rebecca; Little, William; Elfrey, Priscilla; Mann, David; Mabie, Kevin; Cuddy, Thomas; Loundermon, Mario; Spiker, Stephen; McArthur, Frank

    2010-01-01

    The Distributed Observer network (DON) is a NASA-collaborative environment that leverages game technology to bring three-dimensional simulations to conventional desktop and laptop computers in order to allow teams of engineers working on design and operations, either individually or in groups, to view and collaborate on 3D representations of data generated by authoritative tools such as Delmia Envision, Pro/Engineer, or Maya. The DON takes models and telemetry from these sources and, using commercial game engine technology, displays the simulation results in a 3D visual environment. DON has been designed to enhance accessibility and user ability to observe and analyze visual simulations in real time. A variety of NASA mission segment simulations [Synergistic Engineering Environment (SEE) data, NASA Enterprise Visualization Analysis (NEVA) ground processing simulations, the DSS simulation for lunar operations, and the Johnson Space Center (JSC) TRICK tool for guidance, navigation, and control analysis] were experimented with. Desired functionalities, [i.e. Tivo-like functions, the capability to communicate textually or via Voice-over-Internet Protocol (VoIP) among team members, and the ability to write and save notes to be accessed later] were targeted. The resulting DON application was slated for early 2008 release to support simulation use for the Constellation Program and its teams. Those using the DON connect through a client that runs on their PC or Mac. This enables them to observe and analyze the simulation data as their schedule allows, and to review it as frequently as desired. DON team members can move freely within the virtual world. Preset camera points can be established, enabling team members to jump to specific views. This improves opportunities for shared analysis of options, design reviews, tests, operations, training, and evaluations, and improves prospects for verification of requirements, issues, and approaches among dispersed teams.

  2. Countermeasures to Enhance Sensorimotor Adaptability

    NASA Technical Reports Server (NTRS)

    Bloomberg, J. J.; Peters, B. T.; Mulavara, A. P.; Brady, R. A.; Batson, C. C.; Miller, C. A.; Cohen, H. S.

    2011-01-01

    During exploration-class missions, sensorimotor disturbances may lead to disruption in the ability to ambulate and perform functional tasks during the initial introduction to a novel gravitational environment following a landing on a planetary surface. The goal of our current project is to develop a sensorimotor adaptability (SA) training program to facilitate rapid adaptation to novel gravitational environments. We have developed a unique training system comprised of a treadmill placed on a motion base facing a virtual visual scene that provides an unstable walking surface combined with incongruent visual flow designed to enhance sensorimotor adaptability. We have conducted a series of studies that have shown: Training using a combination of modified visual flow and support surface motion during treadmill walking enhances locomotor adaptability to a novel sensorimotor environment. Trained individuals become more proficient at performing multiple competing tasks while walking during adaptation to novel discordant sensorimotor conditions. Trained subjects can retain their increased level of adaptability over a six-month period. SA training is effective in producing increased adaptability in a more complex over-ground ambulatory task on an obstacle course. This confirms that for a complex task like walking, treadmill training contains enough of the critical features of overground walking to be an effective training modality. The structure of individual training sessions can be optimized to promote fast/strategic motor learning. Training sessions that each contain short-duration exposures to multiple perturbation stimuli allow subjects to acquire a greater ability to rapidly reorganize appropriate response strategies when encountering a novel sensory environment. Individual sensory biases (i.e., increased visual dependency) can predict adaptive responses to novel sensory environments, suggesting that customized training prescriptions can be developed to enhance adaptability. These results indicate that SA training techniques can be added to existing treadmill exercise equipment and procedures to produce a single integrated countermeasure system to improve performance of astro/cosmonauts during prolonged exploratory space missions.

  3. Quantifying Pilot Visual Attention in Low Visibility Terminal Operations

    NASA Technical Reports Server (NTRS)

    Ellis, Kyle K.; Arthur, J. J.; Latorella, Kara A.; Kramer, Lynda J.; Shelton, Kevin J.; Norman, Robert M.; Prinzel, Lawrence J.

    2012-01-01

    Quantifying pilot visual behavior allows researchers to determine not only where a pilot is looking and when, but also to track specific behaviors when these data are coupled with flight technical performance. Remote eye tracking systems have been integrated into simulators at NASA Langley with effectively no impact on the pilot environment. This paper discusses the installation and use of a remote eye tracking system. The data collection techniques from a complex human-in-the-loop (HITL) research experiment are discussed, in particular the data reduction algorithms and logic used to transform raw eye tracking data into quantified visual behavior metrics, and the analysis methods used to interpret visual behavior. The findings suggest superior performance for Head-Up Display (HUD) and improved attentional behavior for Head-Down Display (HDD) implementations of Synthetic Vision System (SVS) technologies for low visibility terminal area operations. Keywords: eye tracking, flight deck, NextGen, human machine interface, aviation
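    The record's specific data-reduction algorithms are not given. As a hedged illustration of one common visual-behavior metric, the sketch below (function name and area-of-interest labels invented for the example) reduces a stream of per-sample gaze labels to dwell-time percentages per area of interest:

    ```python
    from collections import defaultdict

    def dwell_percentages(samples, dt):
        """samples: one area-of-interest (AOI) label per eye-tracker sample
        (e.g., at 60 Hz); dt: sample period in seconds.
        Returns {aoi: percent of total time spent looking at it}."""
        totals = defaultdict(float)
        for aoi in samples:
            totals[aoi] += dt
        duration = dt * len(samples)
        return {aoi: 100.0 * t / duration for aoi, t in totals.items()}

    # 2.5 s on the HUD and 0.5 s on the airspeed tape at 60 Hz:
    samples = ["HUD"] * 90 + ["airspeed"] * 30 + ["HUD"] * 60
    print(dwell_percentages(samples, 1 / 60))
    ```

    Real pipelines first classify raw gaze coordinates into fixations and map them to AOIs; only then do dwell-time and transition metrics like this become meaningful.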

  4. Evaluation of visual acuity with Gen 3 night vision goggles

    NASA Technical Reports Server (NTRS)

    Bradley, Arthur; Kaiser, Mary K.

    1994-01-01

    Using laboratory simulations, visual performance was measured at luminance and night vision imaging system (NVIS) radiance levels typically encountered in the natural nocturnal environment. Comparisons were made between visual performance with unaided vision and that observed with subjects using image intensification. An Amplified Night Vision Imaging System (ANVIS6) binocular image intensifier was used. Light levels available in the experiments (using video display technology and filters) were matched to those of reflecting objects illuminated by representative night-sky conditions (e.g., full moon, starlight). Results show that as expected, the precipitous decline in foveal acuity experienced with decreasing mesopic luminance levels is effectively shifted to much lower light levels by use of an image intensification system. The benefits of intensification are most pronounced foveally, but still observable at 20 deg eccentricity. Binocularity provides a small improvement in visual acuity under both intensified and unintensified conditions.

  5. Real-time tracking using stereo and motion: Visual perception for space robotics

    NASA Technical Reports Server (NTRS)

    Nishihara, H. Keith; Thomas, Hans; Huber, Eric; Reid, C. Ann

    1994-01-01

    The state-of-the-art in computing technology is rapidly attaining the performance necessary to implement many early vision algorithms at real-time rates. This new capability is helping to accelerate progress in vision research by improving our ability to evaluate the performance of algorithms in dynamic environments. In particular, we are becoming much more aware of the relative stability of various visual measurements in the presence of camera motion and system noise. This new processing speed is also allowing us to raise our sights toward accomplishing much higher-level processing tasks, such as figure-ground separation and active object tracking, in real-time. This paper describes a methodology for using early visual measurements to accomplish higher-level tasks; it then presents an overview of the high-speed accelerators developed at Teleos to support early visual measurements. The final section describes the successful deployment of a real-time vision system to provide visual perception for the Extravehicular Activity Helper/Retriever robotic system in tests aboard NASA's KC135 reduced gravity aircraft.

  6. Nonlinear circuits for naturalistic visual motion estimation

    PubMed Central

    Fitzgerald, James E; Clark, Damon A

    2015-01-01

    Many animals use visual signals to estimate motion. Canonical models suppose that animals estimate motion by cross-correlating pairs of spatiotemporally separated visual signals, but recent experiments indicate that humans and flies perceive motion from higher-order correlations that signify motion in natural environments. Here we show how biologically plausible processing motifs in neural circuits could be tuned to extract this information. We emphasize how known aspects of Drosophila's visual circuitry could embody this tuning and predict fly behavior. We find that segregating motion signals into ON/OFF channels can enhance estimation accuracy by accounting for natural light/dark asymmetries. Furthermore, a diversity of inputs to motion detecting neurons can provide access to more complex higher-order correlations. Collectively, these results illustrate how non-canonical computations improve motion estimation with naturalistic inputs. This argues that the complexity of the fly's motion computations, implemented in its elaborate circuits, represents a valuable feature of its visual motion estimator. DOI: http://dx.doi.org/10.7554/eLife.09123.001 PMID:26499494
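    The canonical pairwise model mentioned above is the Hassenstein-Reichardt correlator: delay one input, multiply it with the undelayed neighboring input, and subtract the mirror-symmetric half-detector. A minimal sketch (using a pure delay in place of the usual low-pass filter, and signal parameters chosen for illustration):

    ```python
    import numpy as np

    def hrc_response(left, right, tau, fs):
        """Hassenstein-Reichardt correlator: mean opponent output of two
        delay-and-multiply half-detectors. Positive = rightward motion."""
        d = int(tau * fs)
        delayed_left = np.roll(left, d)
        delayed_right = np.roll(right, d)
        return np.mean(delayed_left * right - left * delayed_right)

    # A rightward-moving sinusoid: the right input sees the signal 20 ms later.
    fs, f = 1000, 5.0
    t = np.arange(0.0, 2.0, 1 / fs)
    left = np.sin(2 * np.pi * f * t)
    right = np.sin(2 * np.pi * f * (t - 0.02))
    print(hrc_response(left, right, tau=0.01, fs=fs) > 0)  # rightward: positive
    ```

    This second-order model responds only to pairwise correlations; the record's point is that real fly and human motion estimators additionally exploit higher-order correlations present in natural scenes.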

  7. Exploiting Listener Gaze to Improve Situated Communication in Dynamic Virtual Environments.

    PubMed

    Garoufi, Konstantina; Staudte, Maria; Koller, Alexander; Crocker, Matthew W

    2016-09-01

    Beyond the observation that both speakers and listeners rapidly inspect the visual targets of referring expressions, it has been argued that such gaze may constitute part of the communicative signal. In this study, we investigate whether a speaker may, in principle, exploit listener gaze to improve communicative success. In the context of a virtual environment where listeners follow computer-generated instructions, we provide two kinds of support for this claim. First, we show that listener gaze provides a reliable real-time index of understanding even in dynamic and complex environments, and on a per-utterance basis. Second, we show that a language generation system that uses listener gaze to provide rapid feedback improves overall task performance in comparison with two systems that do not use gaze. Aside from demonstrating the utility of listener gaze in situated communication, our findings open the door to new methods for developing and evaluating multi-modal models of situated interaction. Copyright © 2015 Cognitive Science Society, Inc.

  8. 3D elastic control for mobile devices.

    PubMed

    Hachet, Martin; Pouderoux, Joachim; Guitton, Pascal

    2008-01-01

    To increase the input space of mobile devices, the authors developed a proof-of-concept 3D elastic controller that easily adapts to mobile devices. This embedded device improves the completion of high-level interaction tasks such as visualization of large documents and navigation in 3D environments. It also opens new directions for tomorrow's mobile applications.

  9. Simulation-Visualization and Self-Assessment Modules' Capabilities in Structural Analysis Course Including Survey Analysis Results

    ERIC Educational Resources Information Center

    Kadiam, Subhash Chandra Bose S. V.; Mohammed, Ahmed Ali; Nguyen, Duc T.

    2010-01-01

    In this paper, we describe an approach to analyzing 2D truss/frame/beam structures in a Flash-based environment. A Stiffness Matrix Method (SMM) module was developed as part of ongoing projects on the broad topic "Students' Learning Improvements in Science, Technology, Engineering and Mathematics (STEM) Related Areas" at Old Dominion…

  10. Using multimedia effectively in the teaching-learning process.

    PubMed

    DiGiacinto, Dora

    2007-01-01

    This report presents current learning theories that relate to multimedia use. It is important to understand how these learning theories apply to the instructional environments in which faculty teach today. Textual information is often presented concurrently with visual information, but the way the two are combined can improve or hinder the learning process of novice students.

  11. Discovery of 20,000 RAD-SNPs and development of a 52-SNP array for monitoring river otters

    Treesearch

    Jeffrey B. Stetz; Seth Smith; Michael A. Sawaya; Alan B. Ramsey; Stephen J. Amish; Michael K. Schwartz; Gordon Luikart

    2016-01-01

    Many North American river otter (Lontra canadensis) populations are threatened or recovering but are difficult to study because they occur at low densities, it is difficult to visually identify individuals, and they inhabit aquatic environments that accelerate degradation of biological samples. Single nucleotide polymorphisms (SNPs) can improve our ability to...

  12. Critical loads and levels: Leveraging existing monitoring data

    Treesearch

    D. G. Fox; A. R. Riebau; R. Fisher

    2006-01-01

    A snapshot of current air quality in the National Parks and Wilderness areas of the US is presented, based on data from the 165-site Interagency Monitoring of Protected Visual Environments (IMPROVE) program and other relevant air quality monitoring programs. This snapshot is provided using the VIEWS web service, an on-line, web-based data warehouse, analysis, and...

  13. Influence of Alice 3: Reducing the Hurdles to Success in a CS1 Programming Course

    ERIC Educational Resources Information Center

    Daly, Tebring

    2013-01-01

    Learning the syntax, semantics, and concepts behind software engineering can be a challenging task for many individuals. This paper examines the Alice 3 software, a three-dimensional visual environment for teaching programming concepts, to determine if it is an effective tool for improving student achievement, raising self-efficacy, and engaging…

  14. Emotion and anxiety potentiate the way attention alters visual appearance.

    PubMed

    Barbot, Antoine; Carrasco, Marisa

    2018-04-12

    The ability to swiftly detect and prioritize the processing of relevant information around us is critical for the way we interact with our environment. Selective attention is a key mechanism that serves this purpose, improving performance in numerous visual tasks. Reflexively attending to sudden information helps detect impending threat or danger, a possible reason why emotion modulates the way selective attention affects perception. For instance, the sudden appearance of a fearful face potentiates the effects of exogenous (involuntary, stimulus-driven) attention on performance. Internal states such as trait anxiety can also modulate the impact of attention on early visual processing. However, attention does not only improve performance; it also alters the way visual information appears to us, e.g., by enhancing perceived contrast. Here we show that emotion potentiates the effects of exogenous attention on both performance and perceived contrast. Moreover, we found that trait anxiety mediates these effects, with stronger influences of attention and emotion in anxious observers. Finally, changes in performance and appearance correlated with each other, likely reflecting common attentional modulations. Altogether, our findings show that emotion and anxiety interact with selective attention to truly alter how we see.

  15. Rocinante, a virtual collaborative visualizer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDonald, M.J.; Ice, L.G.

    1996-12-31

    With the goal of improving the ability of people around the world to share the development and use of intelligent systems, Sandia National Laboratories' Intelligent Systems and Robotics Center is developing new Virtual Collaborative Engineering (VCE) and Virtual Collaborative Control (VCC) technologies. A key area of VCE and VCC research is shared visualization of virtual environments. This paper describes a Virtual Collaborative Visualizer (VCV), named Rocinante, that Sandia developed for VCE and VCC applications. Rocinante allows multiple participants to simultaneously view dynamic, geometrically defined environments. Each viewer can exclude extraneous detail or include additional information in the scene as desired. Shared information can be saved and later replayed in a stand-alone mode. Rocinante automatically scales visualization requirements with computer system capabilities. Models with 30,000 polygons and 4 megabytes of texture display at 12 to 15 frames per second (fps) on an SGI Onyx and at 3 to 8 fps (without texture) on Indigo 2 Extreme computers. In its networked mode, Rocinante synchronizes its local geometric model with remote simulators and sensory systems by monitoring data transmitted through UDP packets. Rocinante's scalability and performance make it an ideal VCC tool. Users throughout the country can monitor robot motions and the thinking behind their motion planners and simulators.
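
    The UDP-based model synchronization described above can be illustrated with a minimal loopback sketch; the datagram layout, field names, and port handling here are hypothetical, not Rocinante's actual wire format:

    ```python
    import socket
    import struct

    # Hypothetical pose datagram: a sequence number plus three joint angles.
    POSE_FMT = "!Iddd"  # network byte order: uint32 seq + 3 doubles (radians)

    def pack_pose(seq, joints):
        return struct.pack(POSE_FMT, seq, *joints)

    def unpack_pose(data):
        seq, *joints = struct.unpack(POSE_FMT, data)
        return seq, list(joints)

    # Loopback demonstration: a viewer binds a UDP socket, a simulator sends
    # its latest state, and the viewer updates its local geometric model.
    viewer = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    viewer.bind(("127.0.0.1", 0))  # OS-assigned port
    viewer.settimeout(2.0)

    sim = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sim.sendto(pack_pose(42, [0.1, -0.5, 1.2]), viewer.getsockname())

    data, _ = viewer.recvfrom(1024)
    seq, joints = unpack_pose(data)
    print(seq, joints)  # 42 [0.1, -0.5, 1.2]
    sim.close()
    viewer.close()
    ```

    UDP suits this kind of state streaming because a stale pose is worthless once a newer one arrives: dropped datagrams need no retransmission, and the viewer simply keeps the highest sequence number seen.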

  16. Realistic terrain visualization based on 3D virtual world technology

    NASA Astrophysics Data System (ADS)

    Huang, Fengru; Lin, Hui; Chen, Bin; Xiao, Cai

    2009-09-01

    The rapid advances in information technologies, e.g., network, graphics processing, and virtual world technologies, have provided challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic environments, especially geographic visualization and collaboration. In order to achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments that help engage geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographical visualization, computer simulation, and virtual geographic environment applications. The paper introduces concepts and technologies of virtual worlds and virtual geographic environments, and explores the integration of realistic terrain with other geographic objects and phenomena of the natural geographic environment based on SL/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation for the construction of a mirror world or a sandbox model of the Earth's landscape and geographic environment. The capabilities of interaction and collaboration on geographic information are discussed as well. Further virtual geographic applications can be developed based on this foundation work of realistic terrain visualization in virtual environments.

  18. Scientific Visualization in High Speed Network Environments

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi; Kutler, Paul (Technical Monitor)

    1997-01-01

    In several cases, new visualization techniques have vastly increased the researcher's ability to analyze and comprehend data. Similarly, the role of networks in providing an efficient supercomputing environment has become more critical and continues to grow at a faster rate than the processing capabilities of supercomputers. A close relationship between scientific visualization and high-speed networks is identified as an important link in supporting efficient supercomputing. The two technologies are driven by the increasing complexity and volume of supercomputer data. The interaction of scientific visualization and high-speed networks in a Computational Fluid Dynamics simulation/visualization environment is described. Current capabilities supported by high-speed networks, supercomputers, and high-performance graphics workstations at the Numerical Aerodynamic Simulation (NAS) Facility at NASA Ames Research Center are described. Applied research on providing a supercomputer visualization environment to support future computational requirements is summarized.

  19. Visual Data Comm: A Tool for Visualizing Data Communication in the Multi Sector Planner Study

    NASA Technical Reports Server (NTRS)

    Lee, Hwasoo Eric

    2010-01-01

    Data comm is a new technology proposed for the future air transport system as a potential means of providing comprehensive data connectivity. It is a key enabler for managing 4D trajectories digitally, potentially resulting in improved flight times and increased throughput. Future concepts with data comm integration have been tested in a number of human-in-the-loop studies, but analyzing the results has proven particularly challenging because the future traffic environment in which data comm is fully enabled assumes high traffic density, resulting in data sets with large amounts of information. This paper describes the motivation, design, and current and potential future applications of Visual Data Comm (VDC), a data visualization tool developed in Java using the Processing library, a package designed for interactive visualization programming. The paper includes an example application of VDC to data from the most recent Multi Sector Planner study, conducted at NASA's Airspace Operations Laboratory in 2009, in which VDC was used to visualize and interpret data comm activities.

  20. Three dimensional tracking with misalignment between display and control axes

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.; Tyler, Mitchell; Kim, Won S.; Stark, Lawrence

    1992-01-01

    Human operators confronted with misaligned display and control frames of reference performed three-dimensional pursuit tracking in virtual environment and virtual space simulations. Analysis of the components of the tracking errors in the perspective displays presenting virtual space showed that components of the error due to visual-motor misalignment may be linearly separated from those associated with the mismatch between display and control coordinate systems. Tracking performance improved with several hours of practice, despite previous reports that such improvement did not take place.

  1. Prenatal and postnatal polybrominated diphenyl ether exposure and visual spatial abilities in children

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vuong, Ann M.

    Polybrominated diphenyl ethers (PBDEs) are associated with impaired visual spatial abilities in toxicological studies, but no epidemiologic study has investigated PBDEs and visual spatial abilities in children. The Health Outcomes and Measures of the Environment Study, a prospective birth cohort (2003–2006, Cincinnati, OH), was used to examine prenatal and childhood PBDEs and visual spatial abilities in 199 children. PBDEs were measured at 16±3 weeks gestation and at 1, 2, 3, 5, and 8 years using gas chromatography/isotope dilution high-resolution mass spectrometry. We used the Virtual Morris Water Maze to measure visual spatial abilities at 8 years. In covariate-adjusted models, 10-fold increases in BDE-47, −99, and −100 at 5 years were associated with shorter completion times by 5.2 s (95% Confidence Interval [CI] −9.3, −1.1), 4.5 s (95% CI −8.1, −0.9), and 4.7 s (95% CI −9.0, −0.3), respectively. However, children with higher BDE-153 at 3 years had longer completion times (β=5.4 s, 95% CI −0.3, 11.1). Prenatal PBDEs were associated with improved visual spatial memory retention, with children spending a higher percentage of their search path in the correct quadrant. Child sex modified some associations between PBDEs and visual spatial learning. Longer path lengths were observed among males with increased BDE-47 at 2 and 3 years, while females had shorter paths. In conclusion, prenatal and postnatal BDE-28, −47, −99, and −100 at 5 and 8 years were associated with improved visual spatial abilities, whereas a pattern of impairments in visual spatial learning was noted with early childhood BDE-153 concentrations. Highlights: • The VMWM test was used to assess visual spatial abilities in children at 8 years. • BDE-153 at 3 years was adversely associated with visual spatial learning. • BDE-47, −99, and −100 at 5 years were associated with better visual spatial learning. • Prenatal PBDEs were associated with improved visual spatial memory retention. • Male children were observed to perform more poorly on the VMWM than females.

  2. Ground-Based Robotic Sensing of an Agricultural Sub-Canopy Environment

    NASA Astrophysics Data System (ADS)

    Burns, A.; Peschel, J.

    2015-12-01

    Airborne remote sensing is a useful method for measuring agricultural crop parameters over large areas; however, the approach becomes limited to above-canopy characterization as a crop matures due to reduced visual access of the sub-canopy environment. During the growth cycle of an agricultural crop, such as soybeans, the micrometeorology of the sub-canopy environment can significantly impact pod development and reduced yields may result. Larger-scale environmental conditions aside, the physical structure and configuration of the sub-canopy matrix will logically influence local climate conditions for a single plant; understanding the state and development of the sub-canopy could inform crop models and improve best practices but there are currently no low-cost methods to quantify the sub-canopy environment at a high spatial and temporal resolution over an entire growth cycle. This work describes the modification of a small tactical and semi-autonomous, ground-based robotic platform with sensors capable of mapping the physical structure of an agricultural row crop sub-canopy; a soybean crop is used as a case study. Point cloud data representing the sub-canopy structure are stored in LAS format and can be used for modeling and visualization in standard GIS software packages.
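
    As a rough illustration of turning such a sub-canopy point cloud into a structure map, the sketch below bins (x, y, z) points into a coarse maximum-height grid. The function name, cell size, and sample points are invented for illustration; a real workflow would read the LAS files with GIS tooling rather than raw tuples:

    ```python
    from collections import defaultdict

    def canopy_height_grid(points, cell=0.5):
        """Bin (x, y, z) points into square cells of side `cell` meters and
        keep the maximum z per cell -- a crude canopy-height model of the
        kind a ground robot's sub-canopy point cloud could feed into GIS
        software."""
        grid = defaultdict(float)
        for x, y, z in points:
            key = (int(x // cell), int(y // cell))
            grid[key] = max(grid[key], z)
        return dict(grid)

    # Three toy returns from a scan of one crop row:
    points = [(0.1, 0.2, 0.3), (0.4, 0.3, 0.9), (1.2, 0.1, 0.5)]
    print(canopy_height_grid(points))  # {(0, 0): 0.9, (2, 0): 0.5}
    ```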

  3. Real-time evaluation and visualization of learner performance in a mixed-reality environment for clinical breast examination.

    PubMed

    Kotranza, Aaron; Lind, D Scott; Lok, Benjamin

    2012-07-01

    We investigate the efficacy of incorporating real-time feedback of user performance within mixed-reality environments (MREs) for training real-world tasks with tightly coupled cognitive and psychomotor components. This paper presents an approach to providing real-time evaluation and visual feedback of learner performance in an MRE for training clinical breast examination (CBE). In a user study of experienced and novice CBE practitioners (n = 69), novices receiving real-time feedback performed equivalently or better than more experienced practitioners in the completeness and correctness of the exam. A second user study (n = 8) followed novices through repeated practice of CBE in the MRE. Results indicate that skills improvement in the MRE transfers to the real-world task of CBE of human patients. This initial case study demonstrates the efficacy of MREs incorporating real-time feedback for training real-world cognitive-psychomotor tasks.

  4. The Virtual Pelvic Floor, a tele-immersive educational environment.

    PubMed Central

    Pearl, R. K.; Evenhouse, R.; Rasmussen, M.; Dech, F.; Silverstein, J. C.; Prokasy, S.; Panko, W. B.

    1999-01-01

    This paper describes the development of the Virtual Pelvic Floor, a new method of teaching the complex anatomy of the pelvic region utilizing virtual reality and advanced networking technology. Virtual reality technology allows improved visualization of three-dimensional structures over conventional media because it supports stereo vision, viewer-centered perspective, large angles of view, and interactivity. Two or more ImmersaDesk systems, drafting table format virtual reality displays, are networked together providing an environment where teacher and students share a high quality three-dimensional anatomical model, and are able to converse, see each other, and to point in three dimensions to indicate areas of interest. This project was realized by the teamwork of surgeons, medical artists and sculptors, computer scientists, and computer visualization experts. It demonstrates the future of virtual reality for surgical education and applications for the Next Generation Internet. PMID:10566378

  5. Associative visual learning by tethered bees in a controlled visual environment.

    PubMed

    Buatois, Alexis; Pichot, Cécile; Schultheiss, Patrick; Sandoz, Jean-Christophe; Lazzari, Claudio R; Chittka, Lars; Avarguès-Weber, Aurore; Giurfa, Martin

    2017-10-10

    Free-flying honeybees exhibit remarkable cognitive capacities but the neural underpinnings of these capacities cannot be studied in flying insects. Conversely, immobilized bees are accessible to neurobiological investigation but display poor visual learning. To overcome this limitation, we aimed at establishing a controlled visual environment in which tethered bees walking on a spherical treadmill learn to discriminate visual stimuli video projected in front of them. Freely flying bees trained to walk into a miniature Y-maze displaying these stimuli in a dark environment learned the visual discrimination efficiently when one of them (CS+) was paired with sucrose and the other with quinine solution (CS-). Adapting this discrimination to the treadmill paradigm with a tethered, walking bee was successful as bees exhibited robust discrimination and preferred the CS+ to the CS- after training. As learning was better in the maze, movement freedom, active vision and behavioral context might be important for visual learning. The nature of the punishment associated with the CS- also affects learning as quinine and distilled water enhanced the proportion of learners. Thus, visual learning is amenable to a controlled environment in which tethered bees learn visual stimuli, a result that is important for future neurobiological studies in virtual reality.

  6. A Proposed Treatment for Visual Field Loss caused by Traumatic Brain Injury using Interactive Visuotactile Virtual Environment

    NASA Astrophysics Data System (ADS)

    Farkas, Attila J.; Hajnal, Alen; Shiratuddin, Mohd F.; Szatmary, Gabriella

    In this paper, we propose a novel approach that uses interactive virtual environment technology in Vision Restoration Therapy for visual field loss caused by Traumatic Brain Injury. We call the new system the Interactive Visuotactile Virtual Environment, and it holds the promise of expanding the scope of existing rehabilitation techniques. Traditional vision rehabilitation methods are based on passive psychophysical training procedures and can last up to six months before any modest improvements are seen in patients. A highly immersive and interactive virtual environment will allow the patient to practice everyday activities such as object identification and object manipulation through the use of 3D motion-sensing handheld devices such as a data glove or the Nintendo Wiimote. Employing both perceptual and action components in the training procedures holds the promise of more efficient sensorimotor rehabilitation. Increased stimulation of visual and sensorimotor areas of the brain should facilitate a comprehensive recovery of visuomotor function by exploiting the plasticity of the central nervous system. Integrated with a motion tracking system and an eye tracking device, the interactive virtual environment allows for the creation and manipulation of a wide variety of stimuli, as well as real-time recording of hand, eye, and body movements and coordination. The goal of the project is to design a cost-effective and efficient vision restoration system.

  7. Virtual Environments in Scientific Visualization

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Lisinski, T. A. (Technical Monitor)

    1994-01-01

    Virtual environment technology is a new way of approaching the interface between computers and humans. Emphasizing display and user control that conform to the user's natural ways of perceiving and thinking about space, virtual environment technologies enhance the ability to perceive and interact with computer-generated graphic information. This enhancement potentially has a major effect on the field of scientific visualization. Current examples of this technology include the Virtual Windtunnel being developed at NASA Ames Research Center. Other major institutions such as the National Center for Supercomputing Applications and SRI International are also exploring this technology. This talk describes several implementations of virtual environments for use in scientific visualization. Examples include the visualization of unsteady fluid flows (the Virtual Windtunnel), the visualization of geodesics in curved spacetime, surface manipulation, and examples developed at various laboratories.

  8. Flowfield visualization for SSME hot gas manifold

    NASA Technical Reports Server (NTRS)

    Roger, Robert P.

    1988-01-01

    The objective of this research, as defined by NASA Marshall Space Flight Center, was two-fold: (1) to numerically simulate viscous subsonic flow in a proposed elliptical two-duct version of the fuel-side Hot Gas Manifold (HGM) for the Space Shuttle Main Engine (SSME), and (2) to provide analytical support for SSME-related numerical computational experiments being performed by the Computational Fluid Dynamics staff in the Aerophysics Division of the Structures and Dynamics Laboratory at NASA-MSFC. The numerical HGM calculations were intended to complement both water flow and air flow visualization experiments in two-duct geometries performed at NASA-MSFC and Rocketdyne. In addition, code modification and improvement efforts were to strengthen the CFD capabilities of NASA-MSFC for producing reliable predictions of flow environments within the SSME.

  9. Improving Aviation Safety with information Visualization: A Flight Simulation Study

    NASA Technical Reports Server (NTRS)

    Aragon, Cecilia R.; Hearst, Marti

    2005-01-01

    Many aircraft accidents each year are caused by encounters with invisible airflow hazards. Recent advances in aviation sensor technology offer the potential for aircraft-based sensors that can gather large amounts of airflow velocity data in real-time. With this influx of data comes the need to study how best to present it to the pilot - a cognitively overloaded user focused on a primary task other than that of information visualization. In this paper, we present the results of a usability study of an airflow hazard visualization system that significantly reduced the crash rate among experienced helicopter pilots flying a high fidelity, aerodynamically realistic fixed-base rotorcraft flight simulator into hazardous conditions. We focus on one particular aviation application, but the results may be relevant to user interfaces in other operationally stressful environments.

  10. Unipedal balance in healthy adults: effect of visual environments yielding decreased lateral velocity feedback.

    PubMed

    Deyer, T W; Ashton-Miller, J A

    1999-09-01

    To test the (null) hypotheses that the reliability of unipedal balance is unaffected by the attenuation of visual velocity feedback; that, relative to baseline performance, deterioration of balance success rates from attenuated visual velocity feedback will not differ between groups of young men and older women; and that the presence (or absence) of a vertical foreground object will not affect balance success rates. Single-blind, single-case study. University research laboratory. Two volunteer samples: 26 healthy young men (mean age, 20.0 yrs; SD, 1.6); 23 healthy older women (mean age, 64.9 yrs; SD, 7.8). Normalized success rates in a unipedal balance task. Subjects were asked to transfer to and maintain unipedal stance for 5 seconds in a task near the limit of their balance capabilities. Subjects completed 64 trials: 54 trials of three experimental visual scenes in blocked randomized sequences of 18 trials, and 10 trials in a normal visual environment. The experimental scenes included two that provided strong velocity/weak position feedback, one with a vertical foreground object (SVWP+) and one without (SVWP-), and one scene providing weak velocity/strong position (WVSP) feedback. Subjects' success rates in the experimental environments were normalized by the success rate in the normal environment to allow comparisons between subjects using a mixed-model repeated-measures analysis of variance. The normalized success rate was significantly greater in SVWP+ than in WVSP (p = .0001) and SVWP- (p = .013). Visual feedback significantly affected the normalized unipedal balance success rates (p = .001); neither the group effect nor the group X visual environment interaction was significant (p = .9362 and p = .5634, respectively). Normalized success rates did not differ significantly between the young men and older women in any visual environment.
Near the limit of the young men's or older women's balance capability, the reliability of transfer to unipedal balance was adversely affected by visual environments offering attenuated visual velocity feedback cues and those devoid of vertical foreground objects.
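
    The normalization step is simple arithmetic: each subject's success rate in an experimental scene is divided by that same subject's rate in the normal environment. A hypothetical worked example (the numbers are not from the study):

    ```python
    def normalized_success_rate(successes, trials, baseline_rate):
        """Success rate in an experimental visual scene divided by the same
        subject's rate in the normal environment; values below 1.0 mean the
        scene degraded balance reliability.  All numbers are hypothetical."""
        return (successes / trials) / baseline_rate

    # e.g. 9 of 18 successful transfers under weak-velocity feedback,
    # against a baseline of 8 of 10 in the normal scene:
    print(normalized_success_rate(9, 18, 8 / 10))  # 0.625
    ```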

  11. A rehabilitation tool for functional balance using altered gravity and virtual reality.

    PubMed

    Oddsson, Lars I E; Karlsson, Robin; Konrad, Janusz; Ince, Serdar; Williams, Steve R; Zemkova, Erika

    2007-07-10

    There is a need for effective and early functional rehabilitation of patients with gait and balance problems, including those with spinal cord injury, neurological diseases, and those recovering from hip fractures, a common consequence of falls, especially in the elderly population. Gait training in these patients using partial body weight support (BWS) on a treadmill, a technique that involves unloading the subject through a harness, improves walking better than training with full weight bearing. One problem with this technique, not commonly acknowledged, is that the harness provides external support that essentially eliminates the associated postural adjustments (APAs) required for independent gait. We have developed a device to address this issue and conducted a training study as proof of concept of efficacy. We present a tool that can enhance the concept of BWS training by allowing natural APAs to occur mediolaterally. While in a supine position in a 90-deg tilted environment built around a modified hospital bed, subjects wear a backpack frame that moves freely on air bearings (cf. a puck on an air hockey table) and is attached through a cable to a pneumatic cylinder that provides a load that can be set to emulate various G-like loads. Veridical visual input is provided through two 3-D automultiscopic displays that allow glasses-free 3-D vision, representing a virtual surrounding environment that may be acquired from sites chosen by the patient. Two groups of 12 healthy subjects were exposed to either strength training alone or a combination of strength and balance training in such a tilted environment over a period of four weeks. Isokinetic strength measured during upright squat extension improved similarly in both groups. Measures of balance assessed upright showed statistically significant improvements only when balance was part of the training in the tilted environment.
Postural measures indicated less reliance on visual and/or increased use of somatosensory cues after training. Upright balance function can be improved following balance specific training performed in a supine position in an environment providing the perception of an upright position with respect to gravity. Future studies will implement this concept in patients.

  12. Promotion of a healthy public living environment: participatory design of public toilets with visually impaired persons.

    PubMed

    Siu, Kin Wai Michael; Wong, M M Y

    2013-07-01

    The principal objective of a healthy living environment is to improve the quality of everyday life. Visually impaired persons (VIPs) encounter many difficulties in everyday life through a series of barriers, particularly in relation to public toilets. This study aimed to explore the concerns of VIPs in accessing public toilets, and identify methods for improvement. Considerations about user participation are also discussed. Adopting a case study approach, VIPs were invited to participate in the research process. In addition to in-depth interviews and field visits, models and a simulated full-scale environment were produced to facilitate the VIPs to voice their opinions. The key findings indicate that the design of public toilets for promoting public health should be considered and tackled from a three-level framework: plain, line and point. Governments, professionals and the public need to consider the quality of public toilets in terms of policy, implementation and management. VIPs have the right to access public toilets. Governments and professionals should respect the particular needs and concerns of VIPs. A three-level framework (plain, line and point) is required to consider the needs of VIPs in accessing public toilets, and user participation is a good way to reveal the actual needs of VIPs. Copyright © 2013 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  13. Image-Based Visual Servoing for Robotic Systems: A Nonlinear Lyapunov-Based Control Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dixon, Warren

    2004-06-01

    There is significant motivation to provide robotic systems with improved autonomy as a means to significantly accelerate deactivation and decommissioning (D&D) operations while also reducing the associated costs, removing human operators from hazardous environments, and reducing the required burden and skill of human operators. To achieve improved autonomy, this project focused on the basic science challenges leading to the development of visual servo controllers. The challenge in developing these controllers is that a camera provides 2-dimensional image information about the 3-dimensional Euclidean space through a perspective (range-dependent) projection that can be corrupted by uncertainty in the camera calibration matrix and by disturbances such as nonlinear radial distortion. Disturbances in this relationship (i.e., corruption in the sensor information) propagate erroneous information to the feedback controller of the robot, leading to potentially unpredictable task execution. This research project focused on the development of a visual servo control methodology that compensates for disturbances in the camera model (i.e., camera calibration and the recovery of range information) as a means to achieve a predictable response by the robotic system operating in unstructured environments. The fundamental idea is to use nonlinear Lyapunov-based techniques along with photogrammetry methods to overcome the complex control issues and alleviate many of the restrictive assumptions that impact current robotic applications. The outcome of this control methodology is a plug-and-play visual servoing control module that can be utilized in conjunction with current technology such as feature recognition and extraction to enable robotic systems with increased accuracy, autonomy, and robustness, and with a larger field of view (and hence a larger workspace).
The developed methodology has been reported in numerous peer-reviewed publications, and the performance and enabling capabilities of the resulting visual servo control modules have been demonstrated on mobile robot and robot manipulator platforms.

  14. Self-motivated visual scanning predicts flexible navigation in a virtual environment.

    PubMed

    Ploran, Elisabeth J; Bevitt, Jacob; Oshiro, Jaris; Parasuraman, Raja; Thompson, James C

    2014-01-01

    The ability to navigate flexibly (e.g., reorienting oneself based on distal landmarks to reach a learned target from a new position) may rely on visual scanning during both initial experiences with the environment and subsequent test trials. Reliance on visual scanning during navigation harkens back to the concept of vicarious trial and error, a description of the side-to-side head movements made by rats as they explore previously traversed sections of a maze in an attempt to find a reward. In the current study, we examined whether visual scanning predicted the extent to which participants would navigate to a learned location in a virtual environment defined by its position relative to distal landmarks. Our results demonstrated a significant positive relationship between the amount of visual scanning and participant accuracy in identifying the trained target location from a new starting position, as long as the landmarks within the environment remained consistent with those present during original learning. Our findings indicate that active visual scanning of the environment is a deliberative attentional strategy that supports the formation of spatial representations for flexible navigation.

  15. Coastal Bathymetry Using Satellite Observation in Support of Intelligence Preparation of the Environment

    DTIC Science & Technology

    2011-09-01

    (Abstract not available; the record contains only front-matter fragments: a table-of-contents entry for "The Environment for Visualizing Images 4.7 (ENVI)" and acronym-list entries: DEM, Digital Elevation Model; ENVI, Environment for Visualizing Images; HADR, Humanitarian and Disaster Relief; IfSAR, Interferometric Synthetic Aperture Radar.)

  16. Development of a Bayesian Estimator for Audio-Visual Integration: A Neurocomputational Study

    PubMed Central

    Ursino, Mauro; Crisafulli, Andrea; di Pellegrino, Giuseppe; Magosso, Elisa; Cuppini, Cristiano

    2017-01-01

    The brain integrates information from different sensory modalities to generate a coherent and accurate percept of external events. Several experimental studies suggest that this integration follows the principle of Bayesian estimation. However, the neural mechanisms responsible for this behavior, and its development in a multisensory environment, are still insufficiently understood. We recently presented a neural network model of audio-visual integration (Neural Computation, 2017) to investigate how a Bayesian estimator can spontaneously develop from the statistics of external stimuli. The model assumes the presence of two topologically organized unimodal areas (auditory and visual). Neurons in each area receive an input from the external environment, computed as the inner product of the sensory-specific stimulus and the receptive field synapses, and a cross-modal input from neurons of the other modality. Based on sensory experience, synapses were trained via Hebbian potentiation and a decay term. The aim of this work is to improve the previous model by including a more realistic distribution of visual stimuli: visual stimuli have higher spatial accuracy at the central azimuthal coordinate and lower accuracy at the periphery. Moreover, their prior probability is higher at the center and decreases toward the periphery. Simulations show that, after training, the receptive fields of visual and auditory neurons shrink to reproduce the accuracy of the input (both at the center and at the periphery in the visual case), thus realizing the likelihood estimate of unimodal spatial position. Moreover, the preferred positions of visual neurons contract toward the center, thus encoding the prior probability of the visual input. Finally, a prior probability of the co-occurrence of audio-visual stimuli is encoded in the cross-modal synapses. The model is able to simulate the main properties of a Bayesian estimator and to reproduce behavioral data in all conditions examined. 
In particular, in unisensory conditions the visual estimates exhibit a bias toward the fovea, which increases with the level of noise. In cross-modal conditions, the SD of the estimates decreases when using congruent audio-visual stimuli, and a ventriloquism effect becomes evident in the case of spatially disparate stimuli. Moreover, the ventriloquism effect decreases with eccentricity. PMID:29046631
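The synaptic learning rule summarized above (unimodal input computed as an inner product with receptive-field synapses; training via Hebbian potentiation plus a decay term) can be sketched as follows. This is a minimal illustration only: the function names and the learning-rate and decay parameters are assumptions, not taken from the published model.

```python
import numpy as np

def unimodal_input(stimulus, receptive_fields):
    """Each neuron's external input is the inner product of the
    sensory-specific stimulus with that neuron's receptive-field synapses."""
    return receptive_fields @ stimulus          # shape: (n_neurons,)

def hebbian_step(W, pre, post, lr=0.01, decay=0.001):
    """One training step: Hebbian potentiation proportional to correlated
    pre-/post-synaptic activity, followed by a uniform decay term."""
    W = W + lr * np.outer(post, pre)            # potentiation
    W = (1.0 - decay) * W                       # decay toward zero
    return W
```

Repeating `hebbian_step` over many co-occurring audio-visual stimuli would let the cross-modal weights absorb the co-occurrence statistics, which is the mechanism the abstract attributes to the prior encoded in the cross-modal synapses.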

  17. Joint Prior Learning for Visual Sensor Network Noisy Image Super-Resolution

    PubMed Central

    Yue, Bo; Wang, Shuang; Liang, Xuefeng; Jiao, Licheng; Xu, Caijin

    2016-01-01

    The visual sensor network (VSN), a new type of wireless sensor network composed of low-cost wireless camera nodes, is being applied to numerous complex visual analyses in wild environments, such as visual surveillance, object recognition, etc. However, the captured images/videos are often low resolution and noisy, and such visual data cannot be directly used for advanced visual analysis. In this paper, we propose a joint-prior image super-resolution (JPISR) method using the expectation maximization (EM) algorithm to improve VSN image quality. Unlike conventional methods that only focus on upscaling images, JPISR alternately solves upscaling mapping and denoising in the E-step and M-step. To meet the requirement of the M-step, we introduce a novel non-local group-sparsity image filtering method to learn the explicit prior, and induce the geometric duality between images to learn the implicit prior. The EM algorithm inherently combines the explicit prior and implicit prior by joint learning. Moreover, JPISR does not rely on large external datasets for training, which is much more practical in a VSN. Extensive experiments show that JPISR outperforms five state-of-the-art methods in terms of PSNR, SSIM and visual perception. PMID:26927114

  18. Training haptic stiffness discrimination: time course of learning with or without visual information and knowledge of results.

    PubMed

    Teodorescu, Kinneret; Bouchigny, Sylvain; Korman, Maria

    2013-08-01

    In this study, we explored the time course of haptic stiffness discrimination learning and how it was affected by two experimental factors, the addition of visual information and/or knowledge of results (KR) during training. Stiffness perception may integrate both haptic and visual modalities. However, in many tasks the visual field is typically occluded, forcing stiffness perception to depend exclusively on haptic information. No studies to date have addressed the time course of haptic stiffness perceptual learning. Using a virtual environment (VE) haptic interface and a two-alternative forced-choice discrimination task, the haptic stiffness discrimination ability of 48 participants was tested across 2 days. Each day included two haptic test blocks separated by a training block. Additional visual information and/or KR were manipulated between participants during training blocks. Practice repetitions alone induced significant improvement in haptic stiffness discrimination. Between days, accuracy improved slightly, but decision time performance deteriorated. The addition of visual information and/or KR had only temporary effects on decision time, without affecting the time course of haptic discrimination learning. Learning in haptic stiffness discrimination appears to evolve through at least two distinct phases: a single training session resulted in both immediate and latent learning. This learning was not affected by the training manipulations inspected. Training skills in VE in spaced sessions can be beneficial for tasks in which haptic perception is critical, such as surgical procedures where the visual field is occluded. However, training protocols for such tasks should account for the low impact of multisensory information and KR.

  19. High-performance execution of psychophysical tasks with complex visual stimuli in MATLAB

    PubMed Central

    Asaad, Wael F.; Santhanam, Navaneethan; McClellan, Steven

    2013-01-01

    Behavioral, psychological, and physiological experiments often require the ability to present sensory stimuli, monitor and record subjects' responses, interface with a wide range of devices, and precisely control the timing of events within a behavioral task. Here, we describe our recent progress developing an accessible and full-featured software system for controlling such studies using the MATLAB environment. Compared with earlier reports on this software, key new features have been implemented to allow the presentation of more complex visual stimuli, increase temporal precision, and enhance user interaction. These features greatly improve the performance of the system and broaden its applicability to a wider range of possible experiments. This report describes these new features and improvements, current limitations, and quantifies the performance of the system in a real-world experimental setting. PMID:23034363

  20. Interactive 3D visualization of structural changes in the brain of a person with corticobasal syndrome

    PubMed Central

    Hänel, Claudia; Pieperhoff, Peter; Hentschel, Bernd; Amunts, Katrin; Kuhlen, Torsten

    2014-01-01

    The visualization of the progression of brain tissue loss in neurodegenerative diseases like corticobasal syndrome (CBS) can provide not only information about the localization and distribution of the volume loss, but also helps to understand the course and the causes of this neurodegenerative disorder. The visualization of such medical imaging data is often based on 2D sections, because they show both internal and external structures in one image. Spatial information, however, is lost. 3D visualization of imaging data can solve this problem, but it faces the difficulty that more internally located structures may be occluded by structures near the surface. Here, we present an application with two designs for the 3D visualization of the human brain to address these challenges. In the first design, brain anatomy is displayed semi-transparently; it is supplemented by an anatomical section and cortical areas for spatial orientation, and by the volumetric data of volume loss. The second design is guided by the principle of importance-driven volume rendering: a direct line of sight to the relevant structures in the deeper parts of the brain is provided by cutting out a frustum-like piece of brain tissue. The application was developed to run both in standard desktop environments and in immersive virtual reality environments with stereoscopic viewing for improved depth perception. We conclude that the presented application facilitates the perception of the extent of brain degeneration with respect to its localization and affected regions. PMID:24847243

  1. Modulation of Visually Evoked Postural Responses by Contextual Visual, Haptic and Auditory Information: A ‘Virtual Reality Check’

    PubMed Central

    Meyer, Georg F.; Shao, Fei; White, Mark D.; Hopkins, Carl; Robotham, Antony J.

    2013-01-01

    Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPR). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points that can be haptic, visual and auditory reference signals; 2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses for laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high fidelity virtual environments should mimic those seen in real situations we propose to use the observed effect as a robust objective test for presence and fidelity in VR. PMID:23840760

  2. Top-down estimates of biomass burning emissions of black carbon in the western United States

    Treesearch

    Y. H. Mao; Q. B. Li; D. Chen; L. Zhang; W. -M. Hao; K.-N. Liou

    2014-01-01

    We estimate biomass burning and anthropogenic emissions of black carbon (BC) in the western US for May-October 2006 by inverting surface BC concentrations from the Interagency Monitoring of PROtected Visual Environments (IMPROVE) network using a global chemical transport model. We first use active fire counts from the Moderate Resolution Imaging Spectroradiometer (MODIS...

  3. Improvements to the design process for a real-time passive millimeter-wave imager to be used for base security and helicopter navigation in degraded visual environments

    NASA Astrophysics Data System (ADS)

    Anderton, Rupert N.; Cameron, Colin D.; Burnett, James G.; Güell, Jeff J.; Sanders-Reed, John N.

    2014-06-01

    This paper discusses the design of an improved passive millimeter wave imaging system intended to be used for base security in degraded visual environments. The discussion starts with the selection of the optimum frequency band. The trade-offs between requirements on detection, recognition and identification ranges and optical aperture are discussed with reference to the Johnson Criteria. It is shown that these requirements also affect image sampling, receiver numbers and noise temperature, frame rate, field of view, focusing requirements and mechanisms, and tolerance budgets. The effect of image quality degradation is evaluated and a single testable metric is derived that best describes the effects of degradation on meeting the requirements. The discussion is extended to tolerance budgeting constraints if significant degradation is to be avoided, including surface roughness, receiver position errors and scan conversion errors. Although the reflective twist-polarization imager design proposed is potentially relatively low cost and high performance, there is a significant problem with obscuration of the beam by the receiver array. Methods of modeling this accurately and thus designing for best performance are given.

  4. Toward autonomous rotorcraft flight in degraded visual environments: experiments and lessons learned

    NASA Astrophysics Data System (ADS)

    Stambler, Adam; Spiker, Spencer; Bergerman, Marcel; Singh, Sanjiv

    2016-05-01

    Unmanned cargo delivery to combat outposts will inevitably involve operations in degraded visual environments (DVE). When DVE occurs, the aircraft autonomy system needs to be able to function regardless of the obscurant level. In 2014, Near Earth Autonomy established a baseline perception system for autonomous rotorcraft operating in clear air conditions, when its m3 sensor suite and perception software enabled autonomous, no-hover landings onto unprepared sites populated with obstacles. The m3's long-range lidar scanned the helicopter's path and the perception software detected obstacles and found safe locations for the helicopter to land. This paper presents the results of initial tests with the Near Earth perception system in a variety of DVE conditions and analyzes them from the perspective of mission performance and risk. Tests were conducted with the m3's lidar and a lightweight synthetic aperture radar in rain, smoke, snow, and controlled brownout experiments. These experiments showed the capability to penetrate through mild DVE but the perceptual capabilities became degraded with the densest brownouts. The results highlight the need for not only improved ability to see through DVE, but also for improved algorithms to monitor and report DVE conditions.

  5. Development and implementation of Inflight Neurosensory Training for Adaptation/Readaptation (INSTAR)

    NASA Technical Reports Server (NTRS)

    Harm, D. L.; Guedry, F. E.; Parker, Donald E.; Reschke, M. F.

    1993-01-01

    Resolution of space motion sickness, and improvements in spatial orientation, posture and motion control, and compensatory eye movements occur as a function of neurosensory and sensorimotor adaptation to microgravity. These adaptive responses, however, are inappropriate for return to Earth. Even following relatively brief Space Shuttle missions, significant re-adaptation disturbances related to visual performance, locomotion, and perceived self-motion have been observed. Russian reports suggest that these disturbances increase with mission duration and may be severe following landing after prolonged microgravity exposure such as during a voyage to Mars. Consequently, there is a need to enable astronauts to prepare for and more quickly re-adapt to a gravitational environment following extended space missions. Several devices to meet this need are proposed, including a virtual environment - centrifuge device (VECD). A short-arm centrifuge will provide centripetal acceleration parallel to the astronaut's longitudinal body axis, and a restraint system will be configured to permit head movements only in the plane of rotation (to prevent 'cross-coupling'). A head-mounted virtual environment system will be used to develop appropriate 'calibration' between visual motion/orientation signals and inertial motion/orientation signals generated by the centrifuge. This will permit vestibular, visual and somatosensory signal matches to bias central interpretation of otolith signals toward the 'position' responses and to recalibrate the vestibulo-ocular reflex (VOR).

  6. Obstacle Detection in Indoor Environment for Visually Impaired Using Mobile Camera

    NASA Astrophysics Data System (ADS)

    Rahman, Samiur; Ullah, Sana; Ullah, Sehat

    2018-01-01

    Obstacle detection can improve the mobility as well as the safety of visually impaired people. In this paper, we present a system using a mobile camera for visually impaired people. The proposed algorithm works in indoor environments and uses a very simple technique based on a few pre-stored floor images. In an indoor environment, all unique floor types are considered and a single image is stored for each unique floor type. These floor images serve as reference images. The algorithm acquires an input image frame, selects a region of interest, and scans it for obstacles using the pre-stored floor images. The algorithm compares the present frame with the next frame and computes the mean square error between the two. If the mean square error is less than a threshold value α, there is no obstacle in the next frame. If the mean square error is greater than α, there are two possibilities: either there is an obstacle or the floor type has changed. To check whether the floor has changed, the algorithm computes the mean square error between the next frame and each stored floor type. If the minimum of these mean square errors is less than α, the floor has changed; otherwise there is an obstacle. The proposed algorithm works in real time, and 96% accuracy has been achieved.
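The decision logic described above can be sketched as follows. This is a minimal illustration with assumed helper names (the paper does not publish code), using grayscale frames and a single threshold α as in the text.

```python
import numpy as np

def mse(a, b):
    """Mean square error between two equal-sized grayscale frames."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def classify_frame(prev_roi, next_roi, floor_refs, alpha):
    """Classify the next frame's region of interest as 'clear',
    'floor_changed', or 'obstacle' following the described algorithm."""
    if mse(prev_roi, next_roi) < alpha:
        return "clear"                          # scene essentially unchanged
    # Large change: either the floor type changed or an obstacle appeared.
    if min(mse(next_roi, ref) for ref in floor_refs) < alpha:
        return "floor_changed"                  # next frame matches a known floor
    return "obstacle"
```

In practice the region of interest would be cropped from the camera frame and the reference images resampled to the same size before comparison.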

  7. Visual summation in night-flying sweat bees: a theoretical study.

    PubMed

    Theobald, Jamie Carroll; Greiner, Birgit; Wcislo, William T; Warrant, Eric J

    2006-07-01

    Bees are predominantly diurnal; only a few groups fly at night. An evolutionary limitation that bees must overcome to inhabit dim environments is their eye type: bees possess apposition compound eyes, which are poorly suited to vision in dim light. Here, we theoretically examine how the nocturnal bee Megalopta genalis flies at light levels usually reserved for insects bearing more sensitive superposition eyes. We find that neural summation should greatly increase M. genalis's visual reliability. Predicted spatial summation closely matches the morphology of lamina neurons believed to mediate such summation. Improved reliability costs acuity, but dark-adapted bees already suffer optical blurring, and summation further degrades vision only slightly.

  8. Aurally aided visual search performance in a dynamic environment

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Havig, Paul R.; Watamaniuk, Scott N. J.; Gilkey, Robert H.

    2008-04-01

    Previous research has repeatedly shown that people can find a visual target significantly faster if spatial (3D) auditory displays direct attention to the corresponding spatial location. However, previous research has only examined searches for static (non-moving) targets in static visual environments. Since motion has been shown to affect visual acuity, auditory acuity, and visual search performance, it is important to characterize aurally-aided search performance in environments that contain dynamic (moving) stimuli. In the present study, visual search performance in both static and dynamic environments is investigated with and without 3D auditory cues. Eight participants searched for a single visual target hidden among 15 distracting stimuli. In the baseline audio condition, no auditory cues were provided. In the 3D audio condition, a virtual 3D sound cue originated from the same spatial location as the target. In the static search condition, the target and distractors did not move. In the dynamic search condition, all stimuli moved on various trajectories at 10 deg/s. The results showed a clear benefit of 3D audio that was present in both static and dynamic environments, suggesting that spatial auditory displays continue to be an attractive option for a variety of aircraft, motor vehicle, and command & control applications.

  9. Application of advanced computing techniques to the analysis and display of space science measurements

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M.; Lapolla, M. V.; Horblit, B.

    1995-01-01

    A prototype system has been developed to aid the experimental space scientist in the display and analysis of spaceborne data acquired from direct measurement sensors in orbit. We explored the implementation of a rule-based environment for semi-automatic generation of visualizations that assist the domain scientist in exploring their data. The goal has been to enable rapid generation of visualizations which enhance the scientist's ability to thoroughly mine their data. Transferring the task of visualization generation from the human programmer to the computer produced a rapid prototyping environment for visualizations. The visualization and analysis environment has been tested against a set of data obtained from the Hot Plasma Composition Experiment on the AMPTE/CCE satellite, creating new visualizations which provided new insight into the data.

  10. OnSight: Multi-platform Visualization of the Surface of Mars

    NASA Astrophysics Data System (ADS)

    Abercrombie, S. P.; Menzies, A.; Winter, A.; Clausen, M.; Duran, B.; Jorritsma, M.; Goddard, C.; Lidawer, A.

    2017-12-01

    A key challenge of planetary geology is to develop an understanding of an environment that humans cannot (yet) visit. Instead, scientists rely on visualizations created from images sent back by robotic explorers, such as the Curiosity Mars rover. OnSight is a multi-platform visualization tool that helps scientists and engineers to visualize the surface of Mars. Terrain visualization allows scientists to understand the scale and geometric relationships of the environment around the Curiosity rover, both for scientific understanding and for tactical consideration in safely operating the rover. OnSight includes a web-based 2D/3D visualization tool, as well as an immersive mixed reality visualization. In addition, OnSight offers a novel feature for communication among the science team. Using the multiuser feature of OnSight, scientists can meet virtually on Mars, to discuss geology in a shared spatial context. Combining web-based visualization with immersive visualization allows OnSight to leverage strengths of both platforms. This project demonstrates how 3D visualization can be adapted to either an immersive environment or a computer screen, and will discuss advantages and disadvantages of both platforms.

  11. Development and learning of saccadic eye movements in 7- to 42-month-old children.

    PubMed

    Alahyane, Nadia; Lemoine-Lardennois, Christelle; Tailhefer, Coline; Collins, Thérèse; Fagard, Jacqueline; Doré-Mazars, Karine

    2016-01-01

    From birth, infants move their eyes to explore their environment, interact with it, and progressively develop a multitude of motor and cognitive abilities. The characteristics and development of oculomotor control in early childhood remain poorly understood today. Here, we examined reaction time and amplitude of saccadic eye movements in 93 children aged 7 to 42 months while they oriented toward animated cartoon characters appearing at unpredictable locations on a computer screen over 140 trials. Results revealed that saccade performance is immature in children compared to a group of adults: saccade reaction times were longer, and saccade amplitude relative to target location (10° eccentricity) was shorter. Results also indicated that performance is flexible in children. Saccade reaction time decreased as age increased, suggesting developmental improvements in saccade control, and saccade amplitude gradually improved over trials. Moreover, similar to adults, children were able to modify saccade amplitude based on the visual error made in the previous trial. This second set of results suggests that short visual experience and/or rapid sensorimotor learning are functional in children and can also affect saccade performance.

  12. An Active Vision Approach to Understanding and Improving Visual Training in the Geosciences

    NASA Astrophysics Data System (ADS)

    Voronov, J.; Tarduno, J. A.; Jacobs, R. A.; Pelz, J. B.; Rosen, M. R.

    2009-12-01

    Experience in the field is a fundamental aspect of geologic training, and its effectiveness is largely unchallenged because of anecdotal evidence of its success among expert geologists. However, there have been only a few quantitative studies based on large data collection efforts to investigate how Earth scientists learn in the field. In a recent collaboration between Earth scientists, cognitive scientists and imaging science experts at the University of Rochester and the Rochester Institute of Technology, we are conducting such a study. Within cognitive science, one school of thought, referred to as the Active Vision approach, emphasizes that visual perception is an active process requiring us to move our eyes to acquire new information about our environment. The Active Vision approach indicates the perceptual skills which experts possess and which novices will need to acquire to achieve expert performance. We describe data collection efforts using portable eye-trackers to assess how novice and expert geologists acquire visual knowledge in the field. We also discuss our efforts to collect images for use in a semi-immersive classroom environment, useful for further testing of novices and experts using eye-tracking technologies.

  13. Metric Scale Calculation for Visual Mapping Algorithms

    NASA Astrophysics Data System (ADS)

    Hanel, A.; Mitschke, A.; Boerner, R.; Van Opdenbosch, D.; Hoegner, L.; Brodie, D.; Stilla, U.

    2018-05-01

    Visual SLAM algorithms allow localizing the camera by mapping its environment as a point cloud based on visual cues. To obtain the camera locations in a metric coordinate system, the metric scale of the point cloud has to be known. This contribution describes a method to calculate the metric scale for a point cloud of an indoor environment, like a parking garage, by fusing multiple individual scale values. The individual scale values are calculated from structures and objects with a-priori known metric extension, which can be identified in the unscaled point cloud. Extensions of building structures, like the driving lane or the room height, are derived from density peaks in the point distribution. The extensions of objects, like traffic signs with a known metric size, are derived using projections of their detections in images onto the point cloud. The method is tested with synthetic image sequences of a drive with a front-looking mono camera through a virtual 3D model of a parking garage. It has been shown that each individual scale value either improves the robustness of the fused scale value or reduces its error. The error of the fused scale is comparable to other recent works.
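The contribution does not state how the individual scale values are fused; one simple rule consistent with the reported behavior (each added estimate either improves robustness or reduces the error of the fused scale) is inverse-variance weighting, sketched here with assumed names as an illustration only.

```python
import math

def fuse_scale(estimates):
    """Fuse individual metric-scale estimates, given as (value, sigma)
    pairs, by inverse-variance weighting. Returns (fused_value, fused_sigma);
    the fused sigma never exceeds the smallest individual sigma."""
    weights = [1.0 / (sigma ** 2) for _, sigma in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return value, math.sqrt(1.0 / total)
```

For example, fusing a lane-width scale estimate with a traffic-sign scale estimate of equal uncertainty yields their mean, with a smaller combined uncertainty than either alone.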

  14. Development of Techniques for Visualization of Scalar and Vector Fields in the Immersive Environment

    NASA Technical Reports Server (NTRS)

    Bidasaria, Hari B.; Wilson, John W.; Nealy, John E.

    2005-01-01

    Visualization of scalar and vector fields in the immersive environment (CAVE - Cave Automated Virtual Environment) is important for its application to radiation shielding research at NASA Langley Research Center. A complete methodology and the underlying software for this purpose have been developed. The developed software has been put to use for the visualization of the Earth's magnetic field, and in particular for the study of the South Atlantic Anomaly. The methodology has also been put to use for the visualization of geomagnetically trapped protons and electrons within Earth's magnetosphere.

  15. Design and implementation of a 3D ocean virtual reality and visualization engine

    NASA Astrophysics Data System (ADS)

    Chen, Ge; Li, Bo; Tian, Fenglin; Ji, Pengbo; Li, Wenqing

    2012-12-01

    In this study, a 3D virtual reality and visualization engine for rendering the ocean, named VV-Ocean, is designed for marine applications. The design goals of VV-Ocean are high-fidelity simulation of the ocean environment, visualization of massive and multidimensional marine data, and imitation of marine life. VV-Ocean is composed of five modules: memory management, resource management, scene management, rendering process management, and interaction management. It has three core functions: reconstructing vivid virtual ocean scenes, visualizing real data dynamically in real time, and intuitively imitating and simulating marine life. Based on VV-Ocean, we establish a sea-land integration platform that can reproduce the drifting and diffusion processes of oil spilling from the sea bottom to the surface. Environmental factors such as ocean currents and the wind field are considered in this simulation. On this platform the oil-spilling process is abstracted as the movement of abundant oil particles. The results show that the oil particles blend well with the water and that the platform meets the requirements for real-time, interactive rendering. VV-Ocean can be widely used in ocean applications such as demonstrating marine operations, facilitating maritime communications, developing ocean games, reducing marine hazards, forecasting weather over the oceans, and serving marine tourism. Finally, further technological improvements of VV-Ocean are discussed.
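    High-fidelity seawater rendering of the kind VV-Ocean targets typically starts from an FFT-synthesized height field. The sketch below is a rough, hypothetical illustration reduced to one dimension with a Phillips-like spectrum; VV-Ocean's actual implementation details are not given in the abstract:

    ```python
    import numpy as np

    def synthesize_surface(n=256, length=100.0, wind=8.0, g=9.81, seed=0):
        """Synthesize a 1D sea-surface height profile by shaping complex white
        noise with a Phillips-like wave spectrum and inverse-FFTing it, the
        standard FFT approach to ocean rendering (here reduced to 1D)."""
        rng = np.random.default_rng(seed)
        # Wavenumbers for an n-sample domain of the given physical length.
        k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)
        L = wind ** 2 / g  # largest wave sustained by the wind speed
        spectrum = np.zeros(n)
        nonzero = k != 0
        spectrum[nonzero] = (
            np.exp(-1.0 / (k[nonzero] * L) ** 2) / k[nonzero] ** 4
        )
        amplitude = np.sqrt(spectrum)
        noise = rng.normal(size=n) + 1j * rng.normal(size=n)
        # The real part of the inverse FFT gives the height profile.
        return np.fft.ifft(amplitude * noise).real
    ```

    In a renderer, the shaped spectrum would be animated per frame with a dispersion relation before the inverse FFT; the static profile above only shows the spectral-synthesis core.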

  16. Learning feedback and feedforward control in a mirror-reversed visual environment.

    PubMed

    Kasuga, Shoko; Telgen, Sebastian; Ushiba, Junichi; Nozaki, Daichi; Diedrichsen, Jörn

    2015-10-01

    When we learn a novel task, the motor system needs to acquire both feedforward and feedback control. Currently, little is known about how the learning of these two mechanisms relates to each other. In the present study, we tested whether feedforward and feedback control need to be learned separately, or whether they are learned as a common mechanism when a new control policy is acquired. Participants were trained to reach to two lateral and one central target in an environment with mirror (left-right)-reversed visual feedback. One group was allowed to make online movement corrections, whereas the other group only received visual information after the end of the movement. Learning of feedforward control was assessed by measuring the accuracy of the initial movement direction to lateral targets. Feedback control was measured in the responses to sudden visual perturbations of the cursor when reaching to the central target. Although feedforward control improved in both groups, it was significantly better when online corrections were not allowed. In contrast, feedback control only adaptively changed in participants who received online feedback and remained unchanged in the group without online corrections. Our findings suggest that when a new control policy is acquired, feedforward and feedback control are learned separately, and that there may be a trade-off in learning between feedback and feedforward controllers. Copyright © 2015 the American Physiological Society.

  17. Learning feedback and feedforward control in a mirror-reversed visual environment

    PubMed Central

    Kasuga, Shoko; Telgen, Sebastian; Ushiba, Junichi; Nozaki, Daichi

    2015-01-01

    When we learn a novel task, the motor system needs to acquire both feedforward and feedback control. Currently, little is known about how the learning of these two mechanisms relates to each other. In the present study, we tested whether feedforward and feedback control need to be learned separately, or whether they are learned as a common mechanism when a new control policy is acquired. Participants were trained to reach to two lateral and one central target in an environment with mirror (left-right)-reversed visual feedback. One group was allowed to make online movement corrections, whereas the other group only received visual information after the end of the movement. Learning of feedforward control was assessed by measuring the accuracy of the initial movement direction to lateral targets. Feedback control was measured in the responses to sudden visual perturbations of the cursor when reaching to the central target. Although feedforward control improved in both groups, it was significantly better when online corrections were not allowed. In contrast, feedback control only adaptively changed in participants who received online feedback and remained unchanged in the group without online corrections. Our findings suggest that when a new control policy is acquired, feedforward and feedback control are learned separately, and that there may be a trade-off in learning between feedback and feedforward controllers. PMID:26245313

  18. Measurement Tools for the Immersive Visualization Environment: Steps Toward the Virtual Laboratory.

    PubMed

    Hagedorn, John G; Dunkers, Joy P; Satterfield, Steven G; Peskin, Adele P; Kelso, John T; Terrill, Judith E

    2007-01-01

    This paper describes a set of tools for performing measurements of objects in a virtual reality based immersive visualization environment. These tools enable the use of the immersive environment as an instrument for extracting quantitative information from data representations that had hitherto been used solely for qualitative examination. We provide, within the virtual environment, ways for the user to analyze and interact with the quantitative data generated. We describe results generated by these methods to obtain dimensional descriptors of tissue engineered medical products. We regard this toolbox as our first step in the implementation of a virtual measurement laboratory within an immersive visualization environment.

  19. jAMVLE, a New Integrated Molecular Visualization Learning Environment

    ERIC Educational Resources Information Center

    Bottomley, Steven; Chandler, David; Morgan, Eleanor; Helmerhorst, Erik

    2006-01-01

    A new computer-based molecular visualization tool has been developed for teaching, and learning, molecular structure. This java-based jmol Amalgamated Molecular Visualization Learning Environment (jAMVLE) is platform-independent, integrated, and interactive. It has an overall graphical user interface that is intuitive and easy to use. The…

  20. The Physical Environment and the Visually Impaired.

    ERIC Educational Resources Information Center

    Braf, Per-Gunnar

    Reported are results of a project carried out at the Swedish Institute for the Handicapped to determine needs of the visually impaired in the planning and adaptation of buildings and other forms of physical environment. Chapter 1 considers implications of impaired vision and includes definitions, statistics, and problems of the visually impaired…

  1. Training Modalities to Increase Sensorimotor Adaptability

    NASA Technical Reports Server (NTRS)

    Bloomberg, J. J.; Mulavara, A. P.; Peters, B. T.; Brady, R.; Audas, C.; Cohen, H. S.

    2009-01-01

    During the acute phase of adaptation to novel gravitational environments, sensorimotor disturbances have the potential to disrupt the ability of astronauts to perform required mission tasks. The goal of our current series of studies is to develop a sensorimotor adaptability (SA) training program designed to facilitate recovery of functional capabilities when astronauts transition to different gravitational environments. The project has conducted a series of studies investigating the efficacy of treadmill training combined with a variety of sensory challenges (incongruent visual input, support-surface instability) designed to increase adaptability. SA training using a treadmill combined with exposure to altered visual input was effective in producing increased adaptability in a more complex over-ground ambulatory task on an obstacle course. This confirms that for a complex task like walking, treadmill training contains enough of the critical features of overground walking to be an effective training modality. SA training can be optimized by using a periodized training schedule: test sessions that each contain short-duration exposures to multiple perturbation stimuli allow subjects to acquire a greater ability to rapidly reorganize appropriate response strategies when encountering a novel sensory environment. Using a treadmill mounted on top of a six-degree-of-freedom motion-base platform, we investigated locomotor training responses produced by subjects introduced to a dynamic walking surface combined with alterations in visual flow. Subjects who received this training had improved locomotor performance and faster reaction times when exposed to the novel sensory stimuli compared to control subjects. Results also demonstrate that individual sensory biases (i.e., increased visual dependency) can predict adaptive responses to novel sensory environments, suggesting that individual training prescriptions can be developed to enhance adaptability. These data indicate that SA training can be effectively integrated with treadmill exercise and optimized to provide a unique system that combines multiple training requirements in a single countermeasure system. Learning Objectives: The development of a new countermeasure approach that enhances sensorimotor adaptability will be discussed.

  2. Validation of SplitVectors Encoding for Quantitative Visualization of Large-Magnitude-Range Vector Fields

    PubMed Central

    Zhao, Henan; Bryant, Garnett W.; Griffin, Wesley; Terrill, Judith E.; Chen, Jian

    2017-01-01

    We designed and evaluated SplitVectors, a new vector field display approach to help scientists perform new discrimination tasks on large-magnitude-range scientific data shown in three-dimensional (3D) visualization environments. SplitVectors uses scientific notation to display vector magnitude, thus improving legibility. We present an empirical study comparing the SplitVectors approach with three other approaches commonly used in scientific visualizations: direct linear representation, logarithmic mapping, and text display. Twenty participants performed three domain analysis tasks: reading numerical values (a discrimination task), finding the ratio between values (a discrimination task), and finding the larger of two vectors (a pattern detection task). Participants used both mono and stereo conditions. Our results suggest the following: (1) SplitVectors improves accuracy by about 10 times compared to linear mapping and by about four times compared to logarithmic mapping in discrimination tasks; (2) SplitVectors shows no significant difference from the textual display approach, but reduces clutter in the scene; (3) SplitVectors and textual display are less sensitive to data scale than the linear and logarithmic approaches; (4) logarithmic mapping can be problematic, as participants' confidence was as high as when reading directly from the textual display, but their accuracy was poor; and (5) stereoscopy improved performance, especially in the more challenging discrimination tasks. PMID:28113469
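    The scientific-notation encoding at the heart of SplitVectors can be sketched as splitting each magnitude into a mantissa and an exponent, each of which can then be mapped to a separate visual variable. The function below is a minimal illustration under that assumption; the paper's actual glyph mapping is not reproduced here:

    ```python
    import math

    def split_vector_magnitude(m):
        """Split a positive vector magnitude into (mantissa, exponent) as in
        scientific notation, so that m == mantissa * 10**exponent with
        1 <= mantissa < 10. A display could then encode the mantissa with
        glyph length, say, and the exponent with a second visual variable."""
        if m <= 0:
            raise ValueError("magnitude must be positive")
        exponent = math.floor(math.log10(m))
        mantissa = m / 10 ** exponent
        return mantissa, exponent
    ```

    For instance, a magnitude of 3500 splits into mantissa 3.5 and exponent 3, so values spanning many orders of magnitude stay legible on a common mantissa scale.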

  3. Validation of SplitVectors Encoding for Quantitative Visualization of Large-Magnitude-Range Vector Fields.

    PubMed

    Henan Zhao; Bryant, Garnett W; Griffin, Wesley; Terrill, Judith E; Jian Chen

    2017-06-01

    We designed and evaluated SplitVectors, a new vector field display approach to help scientists perform new discrimination tasks on large-magnitude-range scientific data shown in three-dimensional (3D) visualization environments. SplitVectors uses scientific notation to display vector magnitude, thus improving legibility. We present an empirical study comparing the SplitVectors approach with three other approaches commonly used in scientific visualizations: direct linear representation, logarithmic mapping, and text display. Twenty participants performed three domain analysis tasks: reading numerical values (a discrimination task), finding the ratio between values (a discrimination task), and finding the larger of two vectors (a pattern detection task). Participants used both mono and stereo conditions. Our results suggest the following: (1) SplitVectors improves accuracy by about 10 times compared to linear mapping and by about four times compared to logarithmic mapping in discrimination tasks; (2) SplitVectors shows no significant difference from the textual display approach, but reduces clutter in the scene; (3) SplitVectors and textual display are less sensitive to data scale than the linear and logarithmic approaches; (4) logarithmic mapping can be problematic, as participants' confidence was as high as when reading directly from the textual display, but their accuracy was poor; and (5) stereoscopy improved performance, especially in the more challenging discrimination tasks.

  4. Improving Wayfinding for Older Users With Selective Attention Deficits

    PubMed Central

    Mishler, Ada D.; Neider, Mark B.

    2016-01-01

    Feature at a Glance: Older adults experience difficulties with navigating their environments and may need to rely on signs more heavily than younger adults. However, older adults also experience difficulties with focusing their visual attention, which suggests that signs need to be designed with the goal of making them as easy as possible to attend to. This article discusses some design principles that may be especially important for compensating for declining attentional focus. These principles include distinctiveness, consistent appearance and location, standardized images, simplicity, isolation from other elements of the environment, and reassurance about the current route. PMID:28286405

  5. Eye movements, visual search and scene memory, in an immersive virtual environment.

    PubMed

    Kit, Dmitry; Katz, Leor; Sullivan, Brian; Snyder, Kat; Ballard, Dana; Hayhoe, Mary

    2014-01-01

    Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images. Natural experience, in contrast, entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, as in two-dimensional contexts, viewers rapidly learn the locations of objects in the environment over time and use spatial memory to guide search. Incidental fixations did not provide an obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day of experience in the environment, previous search items changed in color. These items were fixated with increased probability relative to control objects, suggesting that memory-guided prioritization (or surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency.

  6. The virtual windtunnel: Visualizing modern CFD datasets with a virtual environment

    NASA Technical Reports Server (NTRS)

    Bryson, Steve

    1993-01-01

    This paper describes work in progress on a virtual environment designed for the visualization of pre-computed fluid flows. The overall problems involved in the visualization of fluid flow are summarized, including computational, data management, and interface issues. Requirements for a flow visualization are summarized. Many aspects of the implementation of the virtual windtunnel were uniquely determined by these requirements. The user interface is described in detail.

  7. Visualizing turbulent mixing of gases and particles

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu; Smith, Philip J.; Jain, Sandeep

    1995-01-01

    A physical model and interactive computer graphics techniques have been developed for the visualization of the basic physical process of stochastic dispersion and mixing from steady-state CFD calculations. The mixing of massless particles and inertial particles is visualized by transforming the vector field from a traditionally Eulerian reference frame into a Lagrangian reference frame. Groups of particles are traced through the vector field for the mean path as well as their statistical dispersion about the mean position by using added scalar information about the root mean square value of the vector field and its Lagrangian time scale. In this way, clouds of particles in a turbulent environment are traced, not just mean paths. In combustion simulations of many industrial processes, good mixing is required to achieve a sufficient degree of combustion efficiency. The ability to visualize this multiphase mixing can not only help identify poor mixing but also explain the mechanism for poor mixing. The information gained from the visualization can be used to improve the overall combustion efficiency in utility boilers or propulsion devices. We have used this technique to visualize steady-state simulations of the combustion performance in several furnace designs.
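    The dispersion-tracing idea described above can be sketched as advecting each particle by the mean flow plus a Gaussian perturbation scaled by the local RMS fluctuation, so a cloud spreads about the mean path rather than collapsing onto one streamline. This is a simplified 2D illustration; the `velocity` and `rms` fields here are hypothetical stand-ins for interpolated CFD output:

    ```python
    import random

    def trace_particles(velocity, rms, dt, steps, n_particles, origin=(0.0, 0.0)):
        """Trace a cloud of massless particles through a steady 2D vector field.

        `velocity(x, y)` returns the mean (u, v) at a point; `rms(x, y)` returns
        the root-mean-square turbulent fluctuation there. Each step adds the
        mean flow plus an independent Gaussian perturbation per component,
        so the returned positions sample the dispersed cloud, not just the
        mean trajectory.
        """
        particles = [list(origin) for _ in range(n_particles)]
        for _ in range(steps):
            for p in particles:
                u, v = velocity(p[0], p[1])
                s = rms(p[0], p[1])
                p[0] += (u + random.gauss(0.0, s)) * dt
                p[1] += (v + random.gauss(0.0, s)) * dt
        return particles
    ```

    With `rms` returning 0 everywhere this degenerates to plain streamline integration; a fuller model would also correlate successive perturbations using the Lagrangian time scale mentioned in the abstract.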

  8. Scalable metadata environments (MDE): artistically impelled immersive environments for large-scale data exploration

    NASA Astrophysics Data System (ADS)

    West, Ruth G.; Margolis, Todd; Prudhomme, Andrew; Schulze, Jürgen P.; Mostafavi, Iman; Lewis, J. P.; Gossmann, Joachim; Singh, Rajvikram

    2014-02-01

    Scalable Metadata Environments (MDEs) are an artistic approach for designing immersive environments for large scale data exploration in which users interact with data by forming multiscale patterns that they alternatively disrupt and reform. Developed and prototyped as part of an art-science research collaboration, we define an MDE as a 4D virtual environment structured by quantitative and qualitative metadata describing multidimensional data collections. Entire data sets (e.g.10s of millions of records) can be visualized and sonified at multiple scales and at different levels of detail so they can be explored interactively in real-time within MDEs. They are designed to reflect similarities and differences in the underlying data or metadata such that patterns can be visually/aurally sorted in an exploratory fashion by an observer who is not familiar with the details of the mapping from data to visual, auditory or dynamic attributes. While many approaches for visual and auditory data mining exist, MDEs are distinct in that they utilize qualitative and quantitative data and metadata to construct multiple interrelated conceptual coordinate systems. These "regions" function as conceptual lattices for scalable auditory and visual representations within virtual environments computationally driven by multi-GPU CUDA-enabled fluid dyamics systems.

  9. Scientific Visualization and Simulation for Multi-dimensional Marine Environment Data

    NASA Astrophysics Data System (ADS)

    Su, T.; Liu, H.; Wang, W.; Song, Z.; Jia, Z.

    2017-12-01

    With growing attention to the ocean and the rapid development of marine sensing, there is increasing demand for realistic simulation and interactive visualization of the marine environment in real time. Based on advanced technology such as GPU rendering, CUDA parallel computing, and a rapid grid-oriented strategy, a series of efficient and high-quality visualization methods, which can deal with large-scale and multi-dimensional marine data in different environmental circumstances, is proposed in this paper. Firstly, a high-quality seawater simulation is realized with an FFT algorithm, bump mapping, and texture animation. Secondly, large-scale multi-dimensional marine hydrological data is visualized with 3D interactive technologies and volume rendering. Thirdly, seabed terrain data is simulated with an improved Delaunay algorithm, surface reconstruction, a dynamic LOD algorithm, and GPU programming techniques. Fourthly, seamless real-time modelling of both ocean and land on a digital globe is achieved with WebGL to meet the requirements of web-based applications. The experiments suggest that these methods not only produce a satisfying simulation of the marine environment, but also meet the rendering requirements of global multi-dimensional marine data. Additionally, a simulation system for underwater oil spills is established with the OSG 3D rendering engine. It is integrated with the marine visualization methods mentioned above, and dynamically and simultaneously shows, in multiple dimensions, the movement processes, physical parameters, and current velocity and direction for different types of deep-water oil-spill particles (oil particles, hydrate particles, gas particles, etc.). With such an application, valuable reference and decision-making information can be provided for understanding the progress of an oil spill in deep water, which is helpful for ocean disaster forecasting, warning, and emergency response.

  10. Improvement in spatial imagery following sight onset late in childhood.

    PubMed

    Gandhi, Tapan K; Ganesh, Suma; Sinha, Pawan

    2014-03-01

    The factors contributing to the development of spatial imagery skills are not well understood. Here, we consider whether visual experience shapes these skills. Although differences in spatial imagery between sighted and blind individuals have been reported, it is unclear whether these differences are truly due to visual deprivation or instead are due to extraneous factors, such as reduced opportunities for the blind to interact with their environment. A direct way of assessing vision's contribution to the development of spatial imagery is to determine whether spatial imagery skills change soon after the onset of sight in congenitally blind individuals. We tested 10 children who gained sight after several years of congenital blindness and found significant improvements in their spatial imagery skills following sight-restoring surgeries. These results provide evidence of vision's contribution to spatial imagery and also have implications for the nature of internal spatial representations.

  11. Understanding the visual skills and strategies of train drivers in the urban rail environment.

    PubMed

    Naweed, Anjum; Balakrishnan, Ganesh

    2014-01-01

    Due to the growth of information in the urban rail environment, there is a need to better understand the ergonomics profile underpinning visual behaviours in train drivers. The aim of this study was to examine the tasks and activities of urban/metropolitan passenger train drivers in order to better understand the nature of the visual demands of their task activities. Data were collected from 34 passenger train drivers in four different Australian states. The research approach used a novel participative ergonomics methodology that fused interviews and observations with generative tools. Data analysis was conducted thematically. The results suggested that participants did not so much drive their trains as manage the intensity of visually demanding work in their environment. The density of this information and the opacity of the task invoked an ergonomics profile more closely aligned with diagnosis and error detection than with actual train regulation. The paper discusses the relative proportion of strategies corresponding with specific tasks, the visual-perceptual load in substantive activities, and the visual skills required for navigation in the urban rail environment. These findings provide the basis for developing measures of complexity to further specify the visual demands of passenger train driving.

  12. Adequacy of the Regular Early Education Classroom Environment for Students with Visual Impairment

    ERIC Educational Resources Information Center

    Brown, Cherylee M.; Packer, Tanya L.; Passmore, Anne

    2013-01-01

    This study describes the classroom environment that students with visual impairment typically experience in regular Australian early education. Adequacy of the classroom environment (teacher training and experience, teacher support, parent involvement, adult involvement, inclusive attitude, individualization of the curriculum, physical…

  13. Coastal On-line Assessment and Synthesis Tool 2.0

    NASA Technical Reports Server (NTRS)

    Brown, Richard; Navard, Andrew; Nguyen, Beth

    2011-01-01

    COAST (Coastal On-line Assessment and Synthesis Tool) is a 3D, open-source Earth data browser developed by leveraging and enhancing previous NASA open-source tools. These tools use satellite imagery and elevation data in a way that allows any user to zoom from orbit view down into any place on Earth, and enables the user to experience Earth terrain in a visually rich 3D view. The benefits associated with taking advantage of an open-source geo-browser are that it is free, extensible, and offers a worldwide developer community that is available to provide additional development and improvement potential. What makes COAST unique is that it simplifies the process of locating and accessing data sources, and allows a user to combine them into a multi-layered and/or multi-temporal visual analytical look into possible data interrelationships and coeffectors for coastal environment phenomenology. COAST provides users with new data visual analytic capabilities. COAST has been upgraded to maximize use of open-source data access, viewing, and data manipulation software tools. The COAST 2.0 toolset has been developed to increase access to a larger realm of the most commonly implemented data formats used by the coastal science community. New and enhanced functionalities that upgrade COAST to COAST 2.0 include the development of the Temporal Visualization Tool (TVT) plug-in, the Recursive Online Remote Data-Data Mapper (RECORD-DM) utility, the Import Data Tool (IDT), and the Add Points Tool (APT). With these improvements, users can integrate their own data with other data sources, and visualize the resulting layers of different data types (such as spatial and spectral, for simultaneous visual analysis), and visualize temporal changes in areas of interest.

  14. Visual operations control in administrative environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carson, M.L.; Levine, L.O.

    1995-03-01

    When asked what comes to mind when they think of "controlling work" in the office, people may respond with "overbearing boss," "no autonomy," or "Theory X management." The idea of controlling work in white-collar or administrative environments can have a negative connotation. However, office life is often chaotic and miserable precisely because the work processes are out of control, and managers must spend their time looking over people's shoulders and fighting fires. While management styles and structures vary, the need for control of work processes does not. Workers in many environments are being reorganized into self-managed work teams. These teams are expected to manage their own work through increased autonomy and empowerment. However, even empowered work teams must manage their work processes because of process variation. The amount of incoming jobs varies with both expected (seasonal) and unexpected demand. The mixture of job types varies over time, changing the need for certain skills or knowledge. And illness and turnover affect the availability of workers with needed skills and knowledge. Clearly, there is still a need to control work, whether the authority for controlling work is vested in one person or many. Visual control concepts provide simple, inexpensive, and flexible mechanisms for managing processes in work teams and continuous-improvement administrative environments.

  15. Increasing Accessibility to the Blind of Virtual Environments, Using a Virtual Mobility Aid Based On the "EyeCane": Feasibility Study

    PubMed Central

    Maidenbaum, Shachar; Levy-Tzedek, Shelly; Chebat, Daniel-Robert; Amedi, Amir

    2013-01-01

    Virtual worlds and environments are becoming an increasingly central part of our lives, yet they are still far from accessible to the blind. This is especially unfortunate, as such environments hold great potential for uses such as social interaction, online education, and especially familiarizing visually impaired users with a real environment virtually, from the comfort and safety of their own homes, before visiting it in the real world. We have implemented a simple algorithm to improve this situation using single-point depth information, enabling the blind to use a virtual cane, modeled on the "EyeCane" electronic travel aid, within any virtual environment with minimal pre-processing. Use of the Virtual-EyeCane enables this experience to potentially be used later in real-world environments with stimuli identical to those from the virtual environment. We demonstrate the quickly learned practical use of this algorithm for navigation in simple environments. PMID:23977316
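    The single-point depth idea can be sketched as mapping the distance returned by a virtual ray cast from the cane to an auditory or vibratory cue rate, with closer obstacles producing faster cues. The mapping below is purely hypothetical; the paper's actual cue encoding and parameter values are not given in the abstract:

    ```python
    def depth_to_cue_rate(depth, max_depth=5.0, min_rate=2.0, max_rate=40.0):
        """Map a single-point depth reading (metres) to a cue rate (Hz) for an
        EyeCane-style virtual travel aid: linear ramp from `min_rate` clicks
        per second at `max_depth` up to `max_rate` at zero distance. All
        parameter values are illustrative assumptions."""
        d = max(0.0, min(depth, max_depth))
        frac = 1.0 - d / max_depth  # 1.0 when touching, 0.0 at max range
        return min_rate + frac * (max_rate - min_rate)
    ```

    In a virtual environment the `depth` argument would come from a single ray cast along the cane direction each frame, which is what keeps the pre-processing per environment minimal.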

  16. Visual Landmarks Facilitate Rodent Spatial Navigation in Virtual Reality Environments

    ERIC Educational Resources Information Center

    Youngstrom, Isaac A.; Strowbridge, Ben W.

    2012-01-01

    Because many different sensory modalities contribute to spatial learning in rodents, it has been difficult to determine whether spatial navigation can be guided solely by visual cues. Rodents moving within physical environments with visual cues engage a variety of nonvisual sensory systems that cannot be easily inhibited without lesioning brain…

  17. Integrated instrumentation & computation environment for GRACE

    NASA Astrophysics Data System (ADS)

    Dhekne, P. S.

    2002-03-01

    The project GRACE (Gamma Ray Astrophysics with Coordinated Experiments) aims at setting up a state-of-the-art gamma-ray observatory at Mt. Abu, Rajasthan, for comprehensive scientific exploration over a wide spectral window (10s of keV to 100s of TeV) from a single location through four coordinated experiments. The cumulative data collection rate of all the telescopes is expected to be about 1 GB/hr, necessitating innovations in the data management environment, as the real-time data acquisition and control as well as the off-line data processing, analysis, and visualization environments of these systems are based on the use of cutting-edge and affordable technologies in the fields of computers, communications, and the Internet. We propose to provide a single, unified environment by seamless integration of instrumentation and computation, taking advantage of recent advancements in Web-based technologies. This new environment will allow researchers better access to facilities, improve resource utilization, and enhance collaboration by providing identical environments for online as well as offline use of this facility from any location. We present here a proposed implementation strategy for a platform-independent Web-based system that supplements automated functions with video-guided interactive and collaborative remote viewing, remote control through a virtual instrumentation console, remote acquisition of telescope data, data analysis, data visualization, and an active imaging system. This end-to-end Web-based solution will enhance collaboration among researchers at the national and international levels for undertaking scientific studies using the telescope systems of the GRACE project.

  18. Investigating the Use of 3d Geovisualizations for Urban Design in Informal Settlement Upgrading in South Africa

    NASA Astrophysics Data System (ADS)

    Rautenbach, V.; Coetzee, S.; Çöltekin, A.

    2016-06-01

    Informal settlements are a common occurrence in South Africa, and to improve in-situ circumstances of communities living in informal settlements, upgrades and urban design processes are necessary. Spatial data and maps are essential throughout these processes to understand the current environment, plan new developments, and communicate the planned developments. All stakeholders need to understand maps to actively participate in the process. However, previous research demonstrated that map literacy was relatively low for many planning professionals in South Africa, which might hinder effective planning. Because 3D visualizations resemble the real environment more than traditional maps, many researchers posited that they would be easier to interpret. Thus, our goal is to investigate the effectiveness of 3D geovisualizations for urban design in informal settlement upgrading in South Africa. We consider all involved processes: 3D modelling, visualization design, and cognitive processes during map reading. We found that procedural modelling is a feasible alternative to time-consuming manual modelling, and can produce high quality models. When investigating the visualization design, the visual characteristics of 3D models and relevance of a subset of visual variables for urban design activities of informal settlement upgrades were qualitatively assessed. The results of three qualitative user experiments contributed to understanding the impact of various levels of complexity in 3D city models and map literacy of future geoinformatics and planning professionals when using 2D maps and 3D models. The research results can assist planners in designing suitable 3D models that can be used throughout all phases of the process.

  19. Visualizing vascular structures in virtual environments

    NASA Astrophysics Data System (ADS)

    Wischgoll, Thomas

    2013-01-01

    In order to learn more about the cause of coronary heart diseases and develop diagnostic tools, the extraction and visualization of vascular structures from volumetric scans for further analysis is an important step. By determining a geometric representation of the vasculature, the geometry can be inspected and additional quantitative data calculated and incorporated into the visualization of the vasculature. To provide a more user-friendly visualization tool, virtual environment paradigms can be utilized. This paper describes techniques for interactive rendering of large-scale vascular structures within virtual environments. This can be applied to almost any virtual environment configuration, such as CAVE-type displays. Specifically, the tools presented in this paper were tested on a Barco I-Space and a large 62x108 inch passive projection screen with a Kinect sensor for user tracking.

  20. UnAdulterated - children and adults' visual attention to healthy and unhealthy food.

    PubMed

    Junghans, Astrid F; Hooge, Ignace T C; Maas, Josje; Evers, Catharine; De Ridder, Denise T D

    2015-04-01

    Visually attending to unhealthy food creates a desire to consume the food. To resist the temptation, people have to employ self-regulation strategies, such as visual avoidance. Past research has shown that self-regulatory skills develop throughout childhood and adolescence, suggesting adults' superior self-regulation skills compared to children's. This study employed a novel method to investigate self-regulatory skills. Children's and adults' initial (bottom-up) and maintained (top-down) visual attention to simultaneously presented healthy and unhealthy food were examined in an eye-tracking paradigm. Results showed that both children and adults initially attended most to the unhealthy food. Subsequently, adults self-regulated their visual attention away from the unhealthy food. Despite children's high self-reported attempts to eat healthily and the importance they attached to eating healthily, children did not self-regulate visual attention away from unhealthy food. Children remained influenced by the attention-driven desire to consume the unhealthy food, whereas adults visually attended more strongly to the healthy food, thereby avoiding the desire to consume the unhealthy option. The findings emphasize the necessity of improving children's self-regulatory skills to support their desire to remain healthy and to protect children from the influences of the obesogenic environment.
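    The bottom-up/top-down distinction above maps onto two simple eye-tracking measures: where the first fixation lands (initial capture) and how long gaze dwells on each item (maintained attention). A minimal sketch, with an assumed fixation format rather than the study's actual pipeline:

```python
# Illustrative sketch (not the study's code): separating initial (bottom-up)
# from maintained (top-down) attention in a two-item display. Fixations are
# assumed to arrive as (onset_ms, region) tuples, region being "healthy" or
# "unhealthy"; each fixation is assumed to last until the next one begins.

def first_fixation(fixations):
    """Region of the first fixation: a proxy for bottom-up capture."""
    return fixations[0][1]

def dwell_proportion(fixations, region, trial_ms):
    """Share of trial time spent fixating a region: a proxy for
    maintained, top-down attention."""
    dwell = 0.0
    for (onset, r), (next_onset, _) in zip(fixations,
                                           fixations[1:] + [(trial_ms, None)]):
        if r == region:
            dwell += next_onset - onset
    return dwell / trial_ms

fix = [(0, "unhealthy"), (400, "unhealthy"), (900, "healthy"), (1600, "healthy")]
print(first_fixation(fix))                     # initial capture by unhealthy food
print(dwell_proportion(fix, "healthy", 2000))  # maintained attention to healthy food
```

    On this made-up trial, the first fixation goes to the unhealthy item (bottom-up capture) while the majority of dwell time ends up on the healthy item, the adult-like pattern described above.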

  1. Matching optical flow to motor speed in virtual reality while running on a treadmill

    PubMed Central

    Caramenti, Martina; Lafortuna, Claudio L.; Mugellini, Elena; Abou Khaled, Omar; Bresciani, Jean-Pierre; Dubois, Amandine

    2018-01-01

    We investigated how visual and kinaesthetic/efferent information is integrated for speed perception in running. Twelve moderately trained to trained subjects ran on a treadmill at three different speeds (8, 10, 12 km/h) in front of a moving virtual scene. They were asked to match the visual speed of the scene to their running speed, i.e., the treadmill's speed. For each trial, participants indicated whether the scene was moving slower or faster than they were running. Visual speed was adjusted according to their response using a staircase until the Point of Subjective Equality (PSE) was reached, i.e., until visual and running speed were perceived as equivalent. For all three running speeds, participants systematically underestimated the visual speed relative to their actual running speed. Indeed, the speed of the visual scene had to exceed the actual running speed in order to be perceived as equivalent to the treadmill speed. The underestimation of visual speed was speed-dependent, and the percentage of underestimation relative to running speed ranged from 15% at 8 km/h to 31% at 12 km/h. We suggest that this fact should be taken into consideration to improve the design of attractive treadmill-mediated virtual environments enhancing engagement into physical activity for healthier lifestyles and disease prevention and care. PMID:29641564
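    The adaptive staircase described above can be sketched as follows; the step size, trial count, and simulated observer are illustrative assumptions, not details from the paper:

```python
# Hedged sketch of a simple 1-up/1-down staircase converging to the Point of
# Subjective Equality (PSE). Visual speed is nudged up after "scene slower"
# responses and down after "scene faster" responses; the PSE is estimated as
# the mean of the last few reversal points.

def run_staircase(respond, start, step=0.5, n_trials=40, n_reversals_used=6):
    speed, last, reversals = start, None, []
    for _ in range(n_trials):
        answer = respond(speed)          # "slower" or "faster" than running speed
        direction = +step if answer == "slower" else -step
        if last is not None and (direction > 0) != (last > 0):
            reversals.append(speed)      # direction flipped: record a reversal
        speed += direction
        last = direction
    tail = reversals[-n_reversals_used:]
    return sum(tail) / len(tail)         # PSE estimate

# Simulated observer who perceives visual speed as 80% of its true value,
# so the PSE should land near running_speed / 0.8.
running = 10.0
pse = run_staircase(lambda v: "slower" if 0.8 * v < running else "faster",
                    start=running)
print(pse)  # close to 12.5 km/h for this simulated observer
```

    In the actual experiment a participant's judgment replaces the simulated `respond` function; a PSE above the running speed corresponds to the underestimation of visual speed reported above.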

  2. Matching optical flow to motor speed in virtual reality while running on a treadmill.

    PubMed

    Caramenti, Martina; Lafortuna, Claudio L; Mugellini, Elena; Abou Khaled, Omar; Bresciani, Jean-Pierre; Dubois, Amandine

    2018-01-01

    We investigated how visual and kinaesthetic/efferent information is integrated for speed perception in running. Twelve moderately trained to trained subjects ran on a treadmill at three different speeds (8, 10, 12 km/h) in front of a moving virtual scene. They were asked to match the visual speed of the scene to their running speed, i.e., the treadmill's speed. For each trial, participants indicated whether the scene was moving slower or faster than they were running. Visual speed was adjusted according to their response using a staircase until the Point of Subjective Equality (PSE) was reached, i.e., until visual and running speed were perceived as equivalent. For all three running speeds, participants systematically underestimated the visual speed relative to their actual running speed. Indeed, the speed of the visual scene had to exceed the actual running speed in order to be perceived as equivalent to the treadmill speed. The underestimation of visual speed was speed-dependent, and the percentage of underestimation relative to running speed ranged from 15% at 8 km/h to 31% at 12 km/h. We suggest that this fact should be taken into consideration to improve the design of attractive treadmill-mediated virtual environments enhancing engagement into physical activity for healthier lifestyles and disease prevention and care.

  3. The remarkable visual capacities of nocturnal insects: vision at the limits with small eyes and tiny brains

    PubMed Central

    2017-01-01

    Nocturnal insects have evolved remarkable visual capacities, despite small eyes and tiny brains. They can see colour, control flight and land, react to faint movements in their environment, navigate using dim celestial cues and find their way home after a long and tortuous foraging trip using learned visual landmarks. These impressive visual abilities occur at light levels when only a trickle of photons is being absorbed by each photoreceptor, raising the question of how the visual system nonetheless generates the reliable signals needed to steer behaviour. In this review, I attempt to provide an answer to this question. Part of the answer lies in their compound eyes, which maximize light capture. Part lies in the slow responses and high gains of their photoreceptors, which improve the reliability of visual signals. And a very large part lies in the spatial and temporal summation of these signals in the optic lobe, a strategy that substantially enhances contrast sensitivity in dim light and allows nocturnal insects to see a brighter world, albeit a slower and coarser one. What is abundantly clear, however, is that during their evolution insects have overcome several serious potential visual limitations, endowing them with truly extraordinary night vision. This article is part of the themed issue ‘Vision in dim light’. PMID:28193808

  4. Use of Perturbation-Based Gait Training in a Virtual Environment to Address Mediolateral Instability in an Individual With Unilateral Transfemoral Amputation

    PubMed Central

    Rábago, Christopher A.; Rylander, Jonathan H.; Dingwell, Jonathan B.; Wilken, Jason M.

    2016-01-01

    Background and Purpose Roughly 50% of individuals with lower limb amputation report a fear of falling and fall at least once a year. Perturbation-based gait training and the use of virtual environments have been shown independently to be effective at improving walking stability in patient populations. An intervention was developed combining the strengths of the 2 paradigms utilizing continuous, walking surface angle oscillations within a virtual environment. This case report describes walking function and mediolateral stability outcomes of an individual with a unilateral transfemoral amputation following a novel perturbation-based gait training intervention in a virtual environment. Case Description The patient was a 43-year-old male veteran who underwent a right transfemoral amputation 7+ years previously as a result of a traumatic blast injury. He used a microprocessor-controlled knee and an energy storage and return foot. Outcomes Following the intervention, multiple measures indicated improved function and stability, including faster self-selected walking speed and reduced functional stepping time, mean step width, and step width variability. These changes were seen during normal level walking and mediolateral visual field or platform perturbations. In addition, benefits were retained at least 5 weeks after the final training session. Discussion The perturbation-based gait training program in the virtual environment resulted in the patient's improved walking function and mediolateral stability. Although the patient had completed intensive rehabilitation following injury and was fully independent, the intervention still induced notable improvements to mediolateral stability. Thus, perturbation-based gait training in challenging simulated environments shows promise for improving walking stability and may be beneficial when integrated into a rehabilitation program. PMID:27277497

  5. Integration of bio-inspired, control-based visual and olfactory data for the detection of an elusive target

    NASA Astrophysics Data System (ADS)

    Duong, Tuan A.; Duong, Nghi; Le, Duong

    2017-01-01

    In this paper, we present an integration technique using a bio-inspired, control-based visual and olfactory receptor system to search for elusive targets in practical environments where the targets cannot be seen clearly in either type of sensory data. The bio-inspired visual system is based on a model of the extended visual pathway, which consists of saccadic eye movements and the visual pathway (vertebrate retina, lateral geniculate nucleus, and visual cortex), to enable powerful target detection from noisy, partial, or incomplete visual data. The olfactory receptor algorithm, namely spatial invariant independent component analysis, which was developed based on data from the olfactory receptor-electronic nose (enose) at Caltech, is adopted to enable odorant target detection in an unknown environment. The integration of the two systems is a vital approach and sets a cornerstone for effective, low-cost miniaturized UAVs or fly robots for future DOD and NASA missions, as well as for security systems in Internet of Things environments.

  6. Conservation implications of anthropogenic impacts on visual communication and camouflage.

    PubMed

    Delhey, Kaspar; Peters, Anne

    2017-02-01

    Anthropogenic environmental impacts can disrupt the sensory environment of animals and affect important processes from mate choice to predator avoidance. Currently, these effects are best understood for auditory and chemosensory modalities, and recent reviews highlight their importance for conservation. We examined how anthropogenic changes to the visual environment (ambient light, transmission, and backgrounds) affect visual communication and camouflage and considered the implications of these effects for conservation. Human changes to the visual environment can increase predation risk by affecting camouflage effectiveness, lead to maladaptive patterns of mate choice, and disrupt mutualistic interactions between pollinators and plants. Implications for conservation are particularly evident for disrupted camouflage due to its tight links with survival. The conservation importance of impaired visual communication is less documented. The effects of anthropogenic changes on visual communication and camouflage may be severe when they affect critical processes such as pollination or species recognition. However, when impaired mate choice does not lead to hybridization, the conservation consequences are less clear. We suggest that the demographic effects of human impacts on visual communication and camouflage will be particularly strong when human-induced modifications to the visual environment are evolutionarily novel (i.e., very different from natural variation); affected species and populations have low levels of intraspecific (genotypic and phenotypic) variation and behavioral, sensory, or physiological plasticity; and the processes affected are directly related to survival (camouflage), species recognition, or number of offspring produced, rather than offspring quality or attractiveness. Our findings suggest that anthropogenic effects on the visual environment may be of similar conservation importance as anthropogenic effects on other sensory modalities.

  7. Integrated Data Visualization and Virtual Reality Tool

    NASA Technical Reports Server (NTRS)

    Dryer, David A.

    1998-01-01

    The Integrated Data Visualization and Virtual Reality Tool (IDVVRT) Phase II effort was for the design and development of an innovative Data Visualization Environment Tool (DVET) for NASA engineers and scientists, enabling them to visualize complex multidimensional and multivariate data in a virtual environment. The objectives of the project were to: (1) demonstrate the transfer and manipulation of standard engineering data in a virtual world; (2) demonstrate the effects of design and changes using finite element analysis tools; and (3) determine the training and engineering design and analysis effectiveness of the visualization system.

  8. Balance rehabilitation: promoting the role of virtual reality in patients with diabetic peripheral neuropathy.

    PubMed

    Grewal, Gurtej S; Sayeed, Rashad; Schwenk, Michael; Bharara, Manish; Menzies, Robert; Talal, Talal K; Armstrong, David G; Najafi, Bijan

    2013-01-01

    Individuals with diabetic peripheral neuropathy frequently experience concomitant impaired proprioception and postural instability. Conventional exercise training has been demonstrated to be effective in improving balance but does not incorporate visual feedback targeting joint perception, which is an integral mechanism that helps compensate for impaired proprioception in diabetic peripheral neuropathy. This prospective cohort study recruited 29 participants (mean ± SD: age, 57 ± 10 years; body mass index [calculated as weight in kilograms divided by height in meters squared], 26.9 ± 3.1). Participants satisfying the inclusion criteria performed predefined ankle exercises through reaching tasks, with visual feedback from the ankle joint projected on a screen. Ankle motion in the mediolateral and anteroposterior directions was captured using wearable sensors attached to the participant's shank. Improvements in postural stability were quantified by measuring center of mass sway area and the reciprocal compensatory index before and after training using validated body-worn sensor technology. Findings revealed a significant reduction in center of mass sway after training (mean, 22%; P = .02). A higher postural stability deficit (high body sway) at baseline was associated with higher training gains in postural balance (reduction in center of mass sway) (r = -0.52, P < .05). In addition, significant improvement was observed in postural coordination between the ankle and hip joints (mean, 10.4%; P = .04). The present research implemented a novel balance rehabilitation strategy based on virtual reality technology. The method included wearable sensors and an interactive user interface for real-time visual feedback based on ankle joint motion, similar to a video gaming environment, for compensating impaired joint proprioception. These findings support that visual feedback generated from the ankle joint coupled with motor learning may be effective in improving postural stability in patients with diabetic peripheral neuropathy.
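    One common way to quantify center of mass sway of the kind reported above is the area of the 95% confidence ellipse fitted to the mediolateral/anteroposterior excursions. A minimal stdlib-only sketch; the study's exact metric and data format are assumptions here:

```python
import math

# Illustrative sway metric (not the study's code): area of the 95% confidence
# ellipse fitted to center-of-mass samples in the mediolateral (x) and
# anteroposterior (y) directions.

CHI2_95_2DOF = 5.991  # chi-square critical value, 2 degrees of freedom

def sway_area(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    syy = sum((y - my) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    det = sxx * syy - sxy ** 2            # determinant of the 2x2 covariance
    return math.pi * CHI2_95_2DOF * math.sqrt(det)

# Hypothetical pre-/post-training excursions (cm); post is a scaled-down copy.
before = sway_area([0.0, 1.0, -1.0, 2.0, -2.0], [0.0, 0.5, -0.5, -1.0, 1.0])
after = sway_area([0.0, 0.5, -0.5, 1.0, -1.0], [0.0, 0.25, -0.25, -0.5, 0.5])
print(1 - after / before)  # fractional reduction in sway area after training
```

    A post-training reduction in this area corresponds to the ~22% mean sway reduction reported in the abstract, though the values here are made up.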

  9. Analysing the physics learning environment of visually impaired students in high schools

    NASA Astrophysics Data System (ADS)

    Toenders, Frank G. C.; de Putter-Smits, Lesley G. A.; Sanders, Wendy T. M.; den Brok, Perry

    2017-07-01

    Although visually impaired students attend regular high school, their enrolment in advanced science classes is dramatically low. In our research we evaluated the physics learning environment of a blind high school student in a regular Dutch high school. For visually impaired students to grasp physics concepts, time and additional materials to support the learning process are key, yet time for teachers to develop teaching methods for such students is scarce. Suggestions for changes to the learning environment and the materials used are given.

  10. Open cyberGIS software for geospatial research and education in the big data era

    NASA Astrophysics Data System (ADS)

    Wang, Shaowen; Liu, Yan; Padmanabhan, Anand

    CyberGIS represents an interdisciplinary field combining advanced cyberinfrastructure, geographic information science and systems (GIS), spatial analysis and modeling, and a number of geospatial domains to improve research productivity and enable scientific breakthroughs. It has emerged as a new generation of GIS that enables unprecedented advances in data-driven knowledge discovery, visualization and visual analytics, and collaborative problem solving and decision-making. This paper describes three open software strategies (open access, open source, and open integration) to serve various research and education purposes of diverse geospatial communities. These strategies have been implemented in a leading-edge cyberGIS software environment through three corresponding software modalities: CyberGIS Gateway, Toolkit, and Middleware, and have achieved broad and significant impacts.

  11. Stroboscopic Training Enhances Anticipatory Timing.

    PubMed

    Smith, Trevor Q; Mitroff, Stephen R

    The dynamic aspects of sports often place heavy demands on visual processing. As such, an important goal for sports training should be to enhance visual abilities. Recent research has suggested that training in a stroboscopic environment, where visual experiences alternate between visible and obscured, may provide a means of improving attentional and visual abilities. The current study explored whether stroboscopic training could impact anticipatory timing: the ability to predict where a moving stimulus will be at a specific point in time. Anticipatory timing is a critical skill for both sports and non-sports activities, and thus finding training improvements could have broad impacts. Participants completed a pre-training assessment that used a Bassin Anticipation Timer to measure their abilities to accurately predict the timing of a moving visual stimulus. Immediately after this initial assessment, the participants completed training trials, but in one of two conditions. Those in the Control condition proceeded as before with no change. Those in the Strobe condition completed the training trials while wearing specialized eyewear with lenses that alternated between transparent and opaque (100 ms visible alternating with 150 ms opaque). Post-training assessments were administered immediately after training, 10 minutes after training, and 10 days after training. Compared to the Control group, the Strobe group was significantly more accurate immediately after training, was more likely to respond early than late immediately after training and 10 minutes later, and was more consistent in their timing estimates immediately after training and 10 minutes later.
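    Anticipation-timer data of the kind described above are commonly summarized by constant error (signed bias; negative values mean early responses) and variable error (consistency of the estimates). A small illustrative sketch with made-up trial values, not the study's data:

```python
import statistics

# Hedged sketch of two standard anticipation-timing scores: constant error
# (mean signed error; negative = responding early on average) and variable
# error (standard deviation of the signed errors; lower = more consistent).

def timing_scores(response_ms, arrival_ms):
    errors = [r - a for r, a in zip(response_ms, arrival_ms)]
    constant_error = statistics.mean(errors)   # bias: early (-) vs. late (+)
    variable_error = statistics.stdev(errors)  # consistency of the estimates
    return constant_error, variable_error

# Hypothetical post-training trials: responses mostly precede stimulus arrival.
responses = [980, 990, 1005, 985, 995]
arrivals = [1000] * 5
ce, ve = timing_scores(responses, arrivals)
print(ce)  # negative constant error: anticipations are early on average
```

    The Strobe group's pattern in the abstract (more early responses, more consistent estimates) would show up here as a more negative constant error and a smaller variable error.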

  12. Toward Bridging the Mechanistic Gap Between Genes and Traits by Emphasizing the Role of Proteins in a Computational Environment

    NASA Astrophysics Data System (ADS)

    Haskel-Ittah, Michal; Yarden, Anat

    2017-12-01

    Previous studies have shown that students often ignore molecular mechanisms when describing genetic phenomena. Specifically, students tend to directly link genes to their encoded traits, ignoring the role of proteins as mediators in this process. We tested the ability of 10th grade students to connect genes to traits through proteins, using concept maps and reasoning questions. The context of this study was a computational learning environment developed specifically to foster this ability. This environment presents proteins as the mechanism-mediating genetic phenomena. We found that students' ability to connect genes, proteins, and traits, or to reason using this connection, was initially poor. However, significant improvement was obtained when using the learning environment. Our results suggest that visual representations of proteins' functions in the context of a specific trait contributed to this improvement. One significant aspect of these results is the indication that 10th graders are capable of accurately describing genetic phenomena and their underlying mechanisms, a task that has been shown to raise difficulties, even in higher grades of high school.

  13. The social computing room: a multi-purpose collaborative visualization environment

    NASA Astrophysics Data System (ADS)

    Borland, David; Conway, Michael; Coposky, Jason; Ginn, Warren; Idaszak, Ray

    2010-01-01

    The Social Computing Room (SCR) is a novel collaborative visualization environment for viewing and interacting with large amounts of visual data. The SCR consists of a square room with 12 projectors (3 per wall) used to display a single 360-degree desktop environment that provides a large physical real estate for arranging visual information. The SCR was designed to be cost-effective, collaborative, configurable, widely applicable, and approachable for naive users. Because the SCR displays a single desktop, a wide range of applications is easily supported, making it possible for a variety of disciplines to take advantage of the room. We provide a technical overview of the room and highlight its application to scientific visualization, arts and humanities projects, research group meetings, and virtual worlds, among other uses.

  14. Distributed Observer Network

    NASA Technical Reports Server (NTRS)

    2008-01-01

    NASA's advanced visual simulations are essential for analyses associated with life cycle planning, design, training, testing, operations, and evaluation. Kennedy Space Center, in particular, uses simulations for ground services and space exploration planning in an effort to reduce risk and costs while improving safety and performance. However, it has been difficult to circulate and share the results of simulation tools among the field centers, and distance and travel expenses have made timely collaboration even harder. In response, NASA joined with Valador Inc. to develop the Distributed Observer Network (DON), a collaborative environment that leverages game technology to bring 3-D simulations to conventional desktop and laptop computers. DON enables teams of engineers working on design and operations to view and collaborate on 3-D representations of data generated by authoritative tools. DON takes models and telemetry from these sources and, using commercial game engine technology, displays the simulation results in a 3-D visual environment. Multiple widely dispersed users, working individually or in groups, can view and analyze simulation results on desktop and laptop computers in real time.

  15. Virtual reality simulation in neurosurgery: technologies and evolution.

    PubMed

    Chan, Sonny; Conti, François; Salisbury, Kenneth; Blevins, Nikolas H

    2013-01-01

    Neurosurgeons are faced with the challenge of learning, planning, and performing increasingly complex surgical procedures in which there is little room for error. With improvements in computational power and advances in visual and haptic display technologies, virtual surgical environments can now offer potential benefits for surgical training, planning, and rehearsal in a safe, simulated setting. This article introduces the various classes of surgical simulators and their respective purposes through a brief survey of representative simulation systems in the context of neurosurgery. Many technical challenges currently limit the application of virtual surgical environments. Although we cannot yet expect a digital patient to be indistinguishable from reality, new developments in computational methods and related technology bring us closer every day. We recognize that the design and implementation of an immersive virtual reality surgical simulator require expert knowledge from many disciplines. This article highlights a selection of recent developments in research areas related to virtual reality simulation, including anatomic modeling, computer graphics and visualization, haptics, and physics simulation, and discusses their implication for the simulation of neurosurgery.

  16. The Model of Landscape Development in Big Cities Of Central Java

    NASA Astrophysics Data System (ADS)

    Darmawan, E.; Murtini, T. W.

    2018-05-01

    The existence of urban parks as part of urban green space is very important for the environment and the citizens of a city, and parks are an inseparable part of the urban landscape. Well-developed urban parks can create a safe, comfortable, productive, and visually aesthetic environment. The problem now arising is that urban parks are often unsuited to their surroundings; as a result, the parks are neither functional nor visually significant. This research therefore aims to reveal a model of landscape development for big cities in Central Java. The method used is descriptive and qualitative, describing the problem in detail in order to determine a plan for overcoming it. The research focuses on big cities in Central Java with landscape potential that can be improved. The results will be published in international scientific journals and are expected to serve as a reference in the field of urban landscape design.

  17. Classification of Movement and Inhibition Using a Hybrid BCI.

    PubMed

    Chmura, Jennifer; Rosing, Joshua; Collazos, Steven; Goodwin, Shikha J

    2017-01-01

    Brain-computer interfaces (BCIs) are an emerging technology that are capable of turning brain electrical activity into commands for an external device. Motor imagery (MI)-when a person imagines a motion without executing it-is widely employed in BCI devices for motor control because of the endogenous origin of its neural control mechanisms, and the similarity in brain activation to actual movements. Challenges with translating a MI-BCI into a practical device used outside laboratories include the extensive training required, often due to poor user engagement and visual feedback response delays; poor user flexibility/freedom to time the execution/inhibition of their movements, and to control the movement type (right arm vs. left leg) and characteristics (reaching vs. grabbing); and high false positive rates of motion control. Solutions to improve sensorimotor activation and user performance of MI-BCIs have been explored. Virtual reality (VR) motor-execution tasks have replaced simpler visual feedback (smiling faces, arrows) and have solved this problem to an extent. Hybrid BCIs (hBCIs) implementing an additional control signal to MI have improved user control capabilities to a limited extent. These hBCIs either fail to allow the patients to gain asynchronous control of their movements, or have a high false positive rate. We propose an immersive VR environment which provides visual feedback that is both engaging and immediate, but also uniquely engages a different cognitive process in the patient that generates event-related potentials (ERPs). These ERPs provide a key executive function for the users to execute/inhibit movements. Additionally, we propose signal processing strategies and machine learning algorithms to move BCIs toward developing long-term signal stability in patients with distinctive brain signals and capabilities to control motor signals. The hBCI itself and the VR environment we propose would help to move BCI technology outside laboratory environments for motor rehabilitation in hospitals, and potentially for controlling a prosthetic.

  18. Classification of Movement and Inhibition Using a Hybrid BCI

    PubMed Central

    Chmura, Jennifer; Rosing, Joshua; Collazos, Steven; Goodwin, Shikha J.

    2017-01-01

    Brain-computer interfaces (BCIs) are an emerging technology that are capable of turning brain electrical activity into commands for an external device. Motor imagery (MI)—when a person imagines a motion without executing it—is widely employed in BCI devices for motor control because of the endogenous origin of its neural control mechanisms, and the similarity in brain activation to actual movements. Challenges with translating a MI-BCI into a practical device used outside laboratories include the extensive training required, often due to poor user engagement and visual feedback response delays; poor user flexibility/freedom to time the execution/inhibition of their movements, and to control the movement type (right arm vs. left leg) and characteristics (reaching vs. grabbing); and high false positive rates of motion control. Solutions to improve sensorimotor activation and user performance of MI-BCIs have been explored. Virtual reality (VR) motor-execution tasks have replaced simpler visual feedback (smiling faces, arrows) and have solved this problem to an extent. Hybrid BCIs (hBCIs) implementing an additional control signal to MI have improved user control capabilities to a limited extent. These hBCIs either fail to allow the patients to gain asynchronous control of their movements, or have a high false positive rate. We propose an immersive VR environment which provides visual feedback that is both engaging and immediate, but also uniquely engages a different cognitive process in the patient that generates event-related potentials (ERPs). These ERPs provide a key executive function for the users to execute/inhibit movements. Additionally, we propose signal processing strategies and machine learning algorithms to move BCIs toward developing long-term signal stability in patients with distinctive brain signals and capabilities to control motor signals. 
The hBCI itself and the VR environment we propose would help to move BCI technology outside laboratory environments for motor rehabilitation in hospitals, and potentially for controlling a prosthetic. PMID:28860986
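    The abstract's central claim, that adding an ERP-based execute/inhibit channel to an MI classifier can cut false activations, can be illustrated with a toy calculation. The AND-gating rule, the per-detector error rates, and the independence assumption below are all invented for illustration, not taken from the authors' design:

```python
import random

def hybrid_gate(mi_detected, erp_detected):
    """Issue a movement command only when both the motor-imagery (MI)
    classifier and the ERP-based 'execute' detector agree (AND-gating)."""
    return mi_detected and erp_detected

def false_positive_rate(trials, p_fp_mi, p_fp_erp, gate):
    """Monte-Carlo estimate of the false-positive rate on rest trials,
    assuming the two detectors err independently (a simplification)."""
    rng = random.Random(0)
    fps = sum(
        gate(rng.random() < p_fp_mi, rng.random() < p_fp_erp)
        for _ in range(trials)
    )
    return fps / trials

# Hypothetical per-detector false-positive rates of 20% (MI) and 10% (ERP):
# the AND-gated hybrid fires spuriously on roughly 2% of rest trials,
# versus roughly 20% for the MI detector alone.
fp_hybrid = false_positive_rate(100_000, 0.20, 0.10, hybrid_gate)
fp_mi_alone = false_positive_rate(100_000, 0.20, 1.0, hybrid_gate)
print(fp_hybrid, fp_mi_alone)
```

    Under the independence assumption the gated rate is simply the product of the two rates, which is why even a modest second channel can sharply reduce spurious commands.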

  19. Recovery of Visual Search following Moderate to Severe Traumatic Brain Injury

    PubMed Central

    Schmitter-Edgecombe, Maureen; Robertson, Kayela

    2015-01-01

    Introduction Deficits in attentional abilities can significantly impact rehabilitation and recovery from traumatic brain injury (TBI). This study investigated the nature and recovery of pre-attentive (parallel) and attentive (serial) visual search abilities after TBI. Methods Participants were 40 individuals with moderate to severe TBI who were tested following emergence from post-traumatic amnesia and approximately 8 months post-injury, as well as 40 age- and education-matched controls. Pre-attentive (automatic) and attentive (controlled) visual search situations were created by manipulating the saliency of the target item amongst distractor items in visual displays. The relationship between pre-attentive and attentive visual search rates and follow-up community integration was also explored. Results The results revealed intact parallel (automatic) processing skills in the TBI group both post-acutely and at follow-up. In contrast, when attentional demands on visual search were increased by reducing the saliency of the target, the TBI group demonstrated poorer performance compared to the control group both post-acutely and 8 months post-injury. Neither pre-attentive nor attentive visual search slope values correlated with follow-up community integration. Conclusions These results suggest that utilizing intact pre-attentive visual search skills during rehabilitation may help to reduce high mental workload situations, thereby improving the rehabilitation process. For example, making commonly used objects more salient in the environment should increase reliance on more automatic visual search processes and reduce visual search time for individuals with TBI. PMID:25671675
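    The parallel/serial distinction referred to above is conventionally quantified as the slope of reaction time against display set size: near-flat slopes indicate pre-attentive "pop-out" search, steep slopes indicate serial deployment of attention. A minimal sketch with made-up reaction times (not the study's data):

```python
def search_slope(set_sizes, reaction_times_ms):
    """Least-squares slope (ms per item) of reaction time vs. display set
    size -- the standard index separating parallel ('pre-attentive',
    near-flat slope) from serial ('attentive') visual search."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(reaction_times_ms) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(set_sizes, reaction_times_ms))
    sxx = sum((x - mx) ** 2 for x in set_sizes)
    return sxy / sxx

# Hypothetical data: a salient ('pop-out') target vs. a low-salience target.
sizes = [4, 8, 16, 32]
popout_rt = [450, 452, 455, 458]      # nearly flat: parallel search
conjunct_rt = [520, 620, 820, 1220]   # ~25 ms/item: serial search
print(search_slope(sizes, popout_rt), search_slope(sizes, conjunct_rt))
```

    Comparing such slope values between TBI and control groups is what allows the spared (parallel) and impaired (serial) components to be dissociated.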

  20. SeaTouch: A Haptic and Auditory Maritime Environment for Non Visual Cognitive Mapping of Blind Sailors

    NASA Astrophysics Data System (ADS)

    Simonnet, Mathieu; Jacobson, Dan; Vieilledent, Stephane; Tisseau, Jacques

    Navigating consists of coordinating egocentric and allocentric spatial frames of reference. Virtual environments have afforded researchers in the spatial community tools to investigate the learning of space. The issue of transfer between virtual and real situations is not trivial. A central question is the role of frames of reference in mediating spatial knowledge transfer to external surroundings, as is the effect of the different sensory modalities accessed in simulated and real worlds. This challenges the capacity of blind people to use virtual reality to explore a scene without graphics. The present experiment involves a haptic and auditory maritime virtual environment. In triangulation tasks, we measure systematic errors, and preliminary results show an ability to learn configurational knowledge and to navigate through the environment without vision. Subjects appeared to take advantage of getting lost in an egocentric “haptic” view in the virtual environment to improve performance in the real environment.
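    Triangulation (triangle-completion) tasks of the kind mentioned are typically scored against the geometrically ideal homing response. A sketch under our own simplifying assumptions (planar path, angles in degrees, counterclockwise turns positive; not the authors' exact scoring procedure):

```python
import math

def homing_vector(leg1, turn_deg, leg2):
    """After walking `leg1` metres, turning `turn_deg` degrees, and walking
    `leg2` metres, return (distance, signed turn in degrees) needed to head
    straight back to the start -- the ideal response in a triangle-
    completion (path-integration) task."""
    # Walk leg 1 along +x, then leg 2 along the new heading.
    heading = math.radians(turn_deg)
    x = leg1 + leg2 * math.cos(heading)
    y = leg2 * math.sin(heading)
    home_dist = math.hypot(x, y)
    # Bearing of the start point relative to the current heading,
    # normalized to (-180, 180].
    bearing = math.degrees(math.atan2(-y, -x)) - turn_deg
    return home_dist, (bearing + 180) % 360 - 180

# Right-angle triangle with 3 m and 4 m legs: the homeward path is 5 m.
dist, turn = homing_vector(3.0, 90.0, 4.0)
print(round(dist, 3), round(turn, 1))
```

    Systematic error is then the difference between a subject's produced distance/turn and this ideal pair.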

  1. High-Performance 3D Articulated Robot Display

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.; Torres, Recaredo J.; Mittman, David S.; Kurien, James A.; Abramyan, Lucy

    2011-01-01

    In the domain of telerobotic operations, the primary challenge facing the operator is to understand the state of the robotic platform. One key aspect of understanding the state is to visualize the physical location and configuration of the platform. As there is a wide variety of mobile robots, the requirements for visualizing their configurations vary widely across platforms. There is also diversity in mechanical mobility, such as wheeled, tracked, or legged locomotion over surfaces. Adaptable 3D articulated robot visualization software can accommodate a wide variety of robotic platforms and environments. The visualization has been used for surface, aerial, space, and water robotic vehicle visualization during field testing. It has been used to enable operations of wheeled and legged surface vehicles, and can be readily adapted to facilitate other mechanical mobility solutions. The 3D visualization can render an articulated 3D model of a robotic platform for any environment. Given the model, the software receives real-time telemetry from the avionics system onboard the vehicle and animates the robot visualization to reflect the telemetered physical state. This is used to track the position and attitude in real time to monitor the progress of the vehicle as it traverses its environment. It is also used to monitor the state of any or all articulated elements of the vehicle, such as arms, legs, or control surfaces. The visualization can also render other sorts of telemetered states visually, such as stresses or strains that are measured by the avionics. Such data can be used to color or annotate the virtual vehicle to indicate nominal or off-nominal states during operation. The visualization is also able to render the simulated environment where the vehicle is operating. 
For surface and aerial vehicles, it can render the terrain under the vehicle as the avionics sends it location information (GPS, odometry, or star tracking), and locate the vehicle over or on the terrain correctly. For long traverses over terrain, the visualization can stream in terrain piecewise in order to maintain the current area of interest for the operator without incurring unreasonable resource constraints on the computing platform. The visualization software is designed to run on laptops that can operate in field-testing environments without Internet access, which is a frequently encountered situation when testing in remote locations that simulate planetary environments such as Mars and other planetary bodies.
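    The telemetry-driven posing and state annotation described above reduce, at their core, to forward kinematics plus threshold-based coloring. A much-simplified sketch (a hypothetical planar two-link chain and an invented strain limit, not the actual software's model format):

```python
import math

def fk_2link(base_xy, link_lengths, joint_angles_deg):
    """Planar forward kinematics for a hypothetical 2-link arm: given
    telemetered joint angles, return the joint positions used to pose an
    articulated model (reduced here to 2D for brevity)."""
    x, y = base_xy
    theta = 0.0
    points = [(x, y)]
    for length, angle in zip(link_lengths, joint_angles_deg):
        theta += math.radians(angle)  # angles accumulate along the chain
        x += length * math.cos(theta)
        y += length * math.sin(theta)
        points.append((x, y))
    return points

def strain_color(strain, limit):
    """Annotate an element green (nominal) or red (off-nominal) from a
    telemetered strain value, as the visualization does for stress data."""
    return "green" if abs(strain) <= limit else "red"

pts = fk_2link((0.0, 0.0), [1.0, 1.0], [90.0, -90.0])
print(pts, strain_color(0.8, 1.0), strain_color(1.5, 1.0))
```

    Each telemetry packet simply re-runs the kinematics and recolors the model, which is why the display can track the vehicle in real time.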

  2. An Update on Improvements to NiCE Support for PROTEUS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, Andrew; McCaskey, Alexander J.; Billings, Jay Jay

    2015-09-01

    The Department of Energy Office of Nuclear Energy's Nuclear Energy Advanced Modeling and Simulation (NEAMS) program has supported the development of the NEAMS Integrated Computational Environment (NiCE), a modeling and simulation workflow environment that provides services and plugins to facilitate tasks such as code execution, model input construction, visualization, and data analysis. This report details the development of workflows for the reactor core neutronics application, PROTEUS. This advanced neutronics application (primarily developed at Argonne National Laboratory) aims to improve nuclear reactor design and analysis by providing an extensible and massively parallel, finite-element solver for current and advanced reactor fuel neutronics modeling. The integration of PROTEUS-specific tools into NiCE is intended to make the advanced capabilities that PROTEUS provides more accessible to the nuclear energy research and development community. This report will detail the work done to improve existing PROTEUS workflow support in NiCE. We will demonstrate and discuss these improvements, including the development of flexible IO services, an improved interface for input generation, and the addition of advanced Fortran development tools natively in the platform.

  3. Towards an assistive peripheral visual prosthesis for long-term treatment of retinitis pigmentosa: evaluating mobility performance in immersive simulations

    NASA Astrophysics Data System (ADS)

    Zapf, Marc Patrick H.; Boon, Mei-Ying; Matteucci, Paul B.; Lovell, Nigel H.; Suaning, Gregg J.

    2015-06-01

    Objective. The prospective efficacy of a future peripheral retinal prosthesis complementing residual vision to raise mobility performance in non-end stage retinitis pigmentosa (RP) was evaluated using simulated prosthetic vision (SPV). Approach. Normally sighted volunteers were fitted with a wide-angle head-mounted display and carried out mobility tasks in photorealistic virtual pedestrian scenarios. Circumvention of low-lying obstacles, path following, and navigating around static and moving pedestrians were performed either with central simulated residual vision of 10° alone or enhanced by assistive SPV in the lower and lateral peripheral visual field (VF). Three layouts of assistive vision corresponding to hypothetical electrode array layouts were compared, emphasizing higher visual acuity, a wider visual angle, or eccentricity-dependent acuity across an intermediate angle. Movement speed, task time, distance walked and collisions with the environment were analysed as performance measures. Main results. Circumvention of low-lying obstacles was improved with all tested configurations of assistive SPV. Higher-acuity assistive vision allowed for greatest improvement in walking speeds—14% above that of plain residual vision, while only wide-angle and eccentricity-dependent vision significantly reduced the number of collisions—both by 21%. Navigating around pedestrians, there were significant reductions in collisions with static pedestrians by 33% and task time by 7.7% with the higher-acuity layout. Following a path, higher-acuity assistive vision increased walking speed by 9%, and decreased collisions with stationary cars by 18%. Significance. The ability of assistive peripheral prosthetic vision to improve mobility performance in persons with constricted VFs has been demonstrated. In a prospective peripheral visual prosthesis, electrode array designs need to be carefully tailored to the scope of tasks in which a device aims to assist. 
We posit that maximum benefit might come from application alongside existing visual aids, to further raise life quality of persons living through the prolonged early stages of RP.

  4. Visual Analytics in Public Safety: Example Capabilities for Example Government Agencies

    DTIC Science & Technology

    2011-10-01

    is not limited to: the Police Records Information Management Environment for British Columbia (PRIME-BC), the Police Reporting and Occurrence System...and filtering for rapid identification of relevant documents - Graphical environment for visual evidence marshaling - Interactive linking and...analytical reasoning facilitated by interactive visual interfaces and integration with computational analytics. Indeed, a wide variety of technologies

  5. Preparing Content-Rich Learning Environments with VPython and Excel, Controlled by Visual Basic for Applications

    ERIC Educational Resources Information Center

    Prayaga, Chandra

    2008-01-01

    A simple interface between VPython and Microsoft (MS) Office products such as Word and Excel, controlled by Visual Basic for Applications, is described. The interface allows the preparation of content-rich, interactive learning environments by taking advantage of the three-dimensional (3D) visualization capabilities of VPython and the GUI…

  6. The use of a tactile interface to convey position and motion perceptions

    NASA Technical Reports Server (NTRS)

    Rupert, A. H.; Guedry, F. E.; Reschke, M. F.

    1994-01-01

    Under normal terrestrial conditions, perception of position and motion is determined by central nervous system integration of concordant and redundant information from multiple sensory channels (somatosensory, vestibular, visual), which collectively yield veridical perceptions. In the acceleration environment experienced by pilots, the somatosensory and vestibular sensors frequently present false information concerning the direction of gravity. When presented with conflicting sensory information, it is normal for pilots to experience episodes of disorientation. We have developed a tactile interface that obtains vertical roll and pitch information from a gyro-stabilized attitude indicator and maps this information in a one-to-one correspondence onto the torso of the body using a matrix of vibrotactors. This enables the pilot to continuously maintain an awareness of aircraft attitude without reference to visual cues, utilizing a sensory channel that normally operates at the subconscious level. Although initially developed to improve pilot spatial awareness, this device has obvious applications to 1) simulation and training, 2) nonvisual tracking of targets, which can reduce the need for pilots to make head movements in the high-G environment of aerial combat, and 3) orientation in environments with minimal somatosensory cues (e.g., underwater) or gravitational cues (e.g., space).
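    The attitude-to-torso mapping can be sketched in a few lines. The tactor layout (an 8-column ring around the torso), the dead-zone threshold, and the angle convention below are invented for illustration; the actual device's matrix geometry is not specified in the abstract:

```python
import math

def tactor_for_attitude(roll_deg, pitch_deg, columns=8):
    """Map aircraft attitude to one vibrotactor column in a hypothetical
    torso ring: the column encodes the direction of tilt around the body,
    and no tactor fires inside a small dead zone around level flight."""
    tilt = math.hypot(roll_deg, pitch_deg)
    if tilt < 2.0:                      # near level: stay silent
        return None
    azimuth = math.degrees(math.atan2(roll_deg, pitch_deg)) % 360
    return int(azimuth // (360 / columns))

print(tactor_for_attitude(0.0, 0.0),    # level flight
      tactor_for_attitude(30.0, 0.0),   # right roll
      tactor_for_attitude(0.0, -30.0))  # nose down
```

    Because the mapping is one-to-one, the pilot can read attitude from the active tactor's position without any visual reference.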

  7. A Novel Robot Visual Homing Method Based on SIFT Features

    PubMed Central

    Zhu, Qidan; Liu, Chuanjia; Cai, Chengtao

    2015-01-01

    Warping is an effective visual homing method for robot local navigation. However, the performance of the warping method can be greatly influenced by the changes of the environment in a real scene, thus resulting in lower accuracy. In order to solve the above problem and to get higher homing precision, a novel robot visual homing algorithm is proposed by combining SIFT (scale-invariant feature transform) features with the warping method. The algorithm is novel in using SIFT features as landmarks instead of the pixels in the horizon region of the panoramic image. In addition, to further improve the matching accuracy of landmarks in the homing algorithm, a novel mismatching elimination algorithm, based on the distribution characteristics of landmarks in the catadioptric panoramic image, is proposed. Experiments on image databases and on a real scene confirm the effectiveness of the proposed method. PMID:26473880
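    For flavor of how matched landmarks yield a homing direction (using the classic average-landmark-vector scheme rather than the warping algorithm itself, and assuming feature matching such as SIFT has already paired the landmarks between views):

```python
import math

def avg_landmark_vector(bearings_deg):
    """Sum of unit vectors toward each landmark (the 'average landmark
    vector', ALV)."""
    x = sum(math.cos(math.radians(b)) for b in bearings_deg)
    y = sum(math.sin(math.radians(b)) for b in bearings_deg)
    return x, y

def homing_direction(current_bearings, home_bearings):
    """Simplified snapshot homing: the difference between the ALVs at the
    current and home locations points roughly toward home (assumes the two
    bearing lists are already in landmark correspondence and the views are
    compass-aligned)."""
    cx, cy = avg_landmark_vector(current_bearings)
    hx, hy = avg_landmark_vector(home_bearings)
    return math.degrees(math.atan2(cy - hy, cx - hx)) % 360

def bearing(frm, to):
    return math.degrees(math.atan2(to[1] - frm[1], to[0] - frm[0]))

# Two landmarks, home at the origin, robot displaced to (2, 2):
landmarks = [(10.0, 0.0), (0.0, 10.0)]
home, current = (0.0, 0.0), (2.0, 2.0)
d = homing_direction([bearing(current, l) for l in landmarks],
                     [bearing(home, l) for l in landmarks])
print(round(d, 1))  # direction back toward home
```

    Replacing raw horizon pixels with SIFT-matched landmarks, as the paper proposes, makes such correspondences far more robust to scene changes.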

  8. Flies and humans share a motion estimation strategy that exploits natural scene statistics

    PubMed Central

    Clark, Damon A.; Fitzgerald, James E.; Ales, Justin M.; Gohl, Daryl M.; Silies, Marion A.; Norcia, Anthony M.; Clandinin, Thomas R.

    2014-01-01

    Sighted animals extract motion information from visual scenes by processing spatiotemporal patterns of light falling on the retina. The dominant models for motion estimation exploit intensity correlations only between pairs of points in space and time. Moving natural scenes, however, contain more complex correlations. Here we show that fly and human visual systems encode the combined direction and contrast polarity of moving edges using triple correlations that enhance motion estimation in natural environments. Both species extract triple correlations with neural substrates tuned for light or dark edges, and sensitivity to specific triple correlations is retained even as light and dark edge motion signals are combined. Thus, both species separately process light and dark image contrasts to capture motion signatures that can improve estimation accuracy. This striking convergence argues that statistical structures in natural scenes have profoundly affected visual processing, driving a common computational strategy over 500 million years of evolution. PMID:24390225
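    The pairwise-versus-triple distinction can be seen in a toy computation. The correlators below are generic two- and three-point averages on a synthetic moving edge (the paper's actual correlator geometry may differ): pairwise correlations are even under contrast inversion, so they cannot distinguish light from dark edges, while triple correlations are odd and flip sign with polarity.

```python
def pair_corr(s, dx, dt):
    """Two-point spatiotemporal correlation <s(x,t) s(x+dx,t+dt)>."""
    T, X = len(s), len(s[0])
    vals = [s[t][x] * s[t + dt][x + dx]
            for t in range(T - dt) for x in range(X - dx)]
    return sum(vals) / len(vals)

def triple_corr(s, dx, dt):
    """Three-point correlation <s(x,t) s(x+dx,t) s(x,t+dt)>."""
    T, X = len(s), len(s[0])
    vals = [s[t][x] * s[t][x + dx] * s[t + dt][x]
            for t in range(T - dt) for x in range(X - dx)]
    return sum(vals) / len(vals)

# A rightward-moving edge as zero-mean contrast values (+/- 0.5),
# and its contrast-inverted (dark-edge) counterpart.
light_edge = [[0.5 if x <= t else -0.5 for x in range(12)] for t in range(12)]
dark_edge = [[-v for v in row] for row in light_edge]

# Pairwise correlations are blind to contrast polarity; triple
# correlations change sign, carrying the light/dark-edge distinction.
print(pair_corr(light_edge, 1, 1) == pair_corr(dark_edge, 1, 1))
print(triple_corr(light_edge, 1, 1) == -triple_corr(dark_edge, 1, 1))
```

    This odd symmetry is why any purely pairwise model must discard the edge-polarity information that both flies and humans demonstrably use.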

  9. 3D Visualization of Cooperative Trajectories

    NASA Technical Reports Server (NTRS)

    Schaefer, John A.

    2014-01-01

    Aerodynamicists and biologists have long recognized the benefits of formation flight. When birds or aircraft fly in the upwash region of the vortex generated by leaders in a formation, induced drag is reduced for the trail bird or aircraft, and efficiency improves. The major consequence of this is that fuel consumption can be greatly reduced. When two aircraft are separated by a large enough longitudinal distance, the aircraft are said to be flying in a cooperative trajectory. A simulation has been developed to model autonomous cooperative trajectories of aircraft; however, it does not provide any 3D representation of the multi-body system dynamics. The topic of this research is the development of an accurate visualization of the multi-body system observable in a 3D environment. This visualization includes two aircraft (lead and trail), a landscape for a static reference, and simplified models of the vortex dynamics and trajectories at several locations between the aircraft.

  10. The Last Meter: Blind Visual Guidance to a Target.

    PubMed

    Manduchi, Roberto; Coughlan, James M

    2014-01-01

    Smartphone apps can use object recognition software to provide information to blind or low vision users about objects in the visual environment. A crucial challenge for these users is aiming the camera properly to take a well-framed picture of the desired target object. We investigate the effects of two fundamental constraints of object recognition - frame rate and camera field of view - on a blind person's ability to use an object recognition smartphone app. The app was used by 18 blind participants to find visual targets beyond arm's reach and approach them to within 30 cm. While we expected that a faster frame rate or wider camera field of view should always improve search performance, our experimental results show that in many cases increasing the field of view does not help, and may even hurt, performance. These results have important implications for the design of object recognition systems for blind users.

  11. Visual analytics as a translational cognitive science.

    PubMed

    Fisher, Brian; Green, Tera Marie; Arias-Hernández, Richard

    2011-07-01

    Visual analytics is a new interdisciplinary field of study that calls for a more structured scientific approach to understanding the effects of interaction with complex graphical displays on human cognitive processes. Its primary goal is to support the design and evaluation of graphical information systems that better support cognitive processes in areas as diverse as scientific research and emergency management. The methodologies that make up this new field are as yet ill defined. This paper proposes a pathway for development of visual analytics as a translational cognitive science that bridges fundamental research in human/computer cognitive systems and design and evaluation of information systems in situ. Achieving this goal will require the development of enhanced field methods for conceptual decomposition of human/computer cognitive systems that map onto laboratory studies, and improved methods for conducting laboratory investigations that might better map onto real-world cognitive processes in technology-rich environments. Copyright © 2011 Cognitive Science Society, Inc.

  12. Image and emotion: from outcomes to brain behavior.

    PubMed

    Nanda, Upali; Zhu, Xi; Jansen, Ben H

    2012-01-01

    This is a systematic review of neuroscience articles on the emotional states of fear, anxiety, and pain, undertaken to understand how emotional response is linked to the visual characteristics of an image at the level of brain behavior. A number of outcome studies link exposure to visual images (with nature content) to improvements in stress, anxiety, and pain perception. However, an understanding of the underlying perceptual mechanisms has been lacking. In this article, neuroscience studies that use visual images to induce fear, anxiety, or pain are reviewed to gain an understanding of how the brain processes visual images in this context and to explore whether this processing can be linked to specific visual characteristics. The amygdala was identified as one of the key regions of the brain involved in the processing of fear, anxiety, and pain (induced by visual images). Other key areas included the thalamus, insula, and hippocampus. Characteristics of visual images such as the emotional dimension (valence/arousal), subject matter (familiarity, ambiguity, novelty, realism, and facial expressions), and form (sharp and curved contours) were identified as key factors influencing emotional processing. The broad structural properties of an image and overall content were found to have a more pivotal role in the emotional response than the specific details of an image. Insights on specific visual properties were translated to recommendations for what should be incorporated, and avoided, in healthcare environments.

  13. Development and Evaluation of a Compartmental Picture Archiving and Communications System Model for Integration and Visualization of Multidisciplinary Biomedical Data to Facilitate Student Learning in an Integrative Health Clinic

    ERIC Educational Resources Information Center

    Chow, Meyrick; Chan, Lawrence

    2010-01-01

    Information technology (IT) has the potential to improve the clinical learning environment. The extent to which IT enhances or detracts from healthcare professionals' role performance can be expected to affect both student learning and patient outcomes. This study evaluated nursing students' satisfaction with a novel compartmental Picture…

  14. Center for Nonlinear Phenomena and Magnetic Materials

    DTIC Science & Technology

    1992-12-04

    [Report documentation page (SF-298) extraction residue.] Recoverable details: performing organization Howard University/ComSERC, 2216 6th St., N.W., Suite 205, Washington, D.C. 20059; the report covers the contract's impact on the research environment at Howard University and lists ComSERC seminars, including talks by Dr. Gerald Chachere (Math Dept., Howard University) on October 25, 1991 and January 27, 1992, among them "Visualization - Improved Marching Cubes."

  15. Eye Movements, Visual Search and Scene Memory, in an Immersive Virtual Environment

    PubMed Central

    Sullivan, Brian; Snyder, Kat; Ballard, Dana; Hayhoe, Mary

    2014-01-01

    Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images. Natural experience, however, entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, similar to two-dimensional contexts, viewers rapidly learn the location of objects in the environment over time, and use spatial memory to guide search. Incidental fixations did not provide obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day of the experience in the environment, previous search items changed in color. These items were fixated upon with increased probability relative to control objects, suggesting that memory-guided prioritization (or Surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency. PMID:24759905

  16. Bridging the gap between PAT concepts and implementation: An integrated software platform for fermentation.

    PubMed

    Chopda, Viki R; Gomes, James; Rathore, Anurag S

    2016-01-01

    Bioreactor control significantly impacts both the amount and quality of the product being manufactured. The complexity of the control strategy that is implemented increases with reactor size, which may vary from thousands to tens of thousands of litres in commercial manufacturing. The Process Analytical Technology (PAT) initiative has highlighted the need for having robust monitoring tools and effective control schemes that are capable of taking real-time information about the critical quality attributes (CQAs) and the critical process parameters (CPPs) and executing an immediate response as soon as a deviation occurs. However, the limited flexibility that present commercial software packages offer creates a hurdle. Visual programming environments have gradually emerged as potential alternatives to the available text-based languages. This paper showcases development of an integrated programme using a visual programming environment for a Sartorius BIOSTAT® B Plus 5L bioreactor through which various peripheral devices are interfaced. The proposed programme facilitates real-time access to data and allows for execution of control actions to follow the desired trajectory. Major benefits of such an integrated software system include: (i) improved real-time monitoring and control; (ii) reduced variability; (iii) improved performance; (iv) reduced operator-training time; (v) enhanced knowledge management; and (vi) easier PAT implementation. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
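    The "immediate response on deviation" behaviour described can be caricatured in a few lines. The parameter names, acceptance band, and proportional correction rule below are invented for illustration, not taken from the authors' software:

```python
def control_step(measured, setpoint, band, valve):
    """One pass of a hypothetical deviation-triggered supervisory loop:
    act as soon as a critical process parameter leaves its acceptance
    band, in the spirit of PAT real-time control."""
    deviation = measured - setpoint
    if abs(deviation) <= band:
        return "nominal", valve
    # Simple proportional correction, clamped to the actuator range 0..1.
    valve = min(1.0, max(0.0, valve - 0.1 * deviation))
    return "corrected", valve

# Dissolved-oxygen setpoint of 30% with a +/-2% band (illustrative numbers).
state, valve = control_step(29.5, 30.0, 2.0, 0.5)
print(state, valve)            # within band: no action taken
state, valve = control_step(25.0, 30.0, 2.0, valve)
print(state, round(valve, 2))  # below band: the gas valve is opened
```

    In a visual programming environment, each box in the diagram corresponds to a step like this; the integration work lies in wiring sensors, such logic, and actuators into one executing loop.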

  17. Visually Coupled Systems (VCS): The Virtual Panoramic Display (VPD) System

    NASA Technical Reports Server (NTRS)

    Kocian, Dean F.

    1992-01-01

    The development and impact of new visually coupled system (VCS) equipment designed to support engineering and human factors research in the military aircraft cockpit environment are described. VCS represents an advanced man-machine interface (MMI). Its potential to improve aircrew situational awareness seems enormous, but its superiority over the conventional cockpit MMI has not been established in a conclusive and rigorous fashion. What has been missing is a 'systems' approach to technology advancement that is comprehensive enough to produce conclusive results concerning the operational viability of the VCS concept and verify any risk factors that might be involved with its general use in the cockpit. The advanced VCS configuration described here was ruggedized for use in military aircraft environments and was dubbed the Virtual Panoramic Display (VPD). It was designed to answer the VCS portion of the systems problem, and is implemented as a modular system whose performance can be tailored to specific application requirements. The overall system concept and the design of the two most important electronic subsystems that support the helmet-mounted parts, a new militarized version of the magnetic helmet-mounted sight and correspondingly similar helmet display electronics, are discussed in detail. Significant emphasis is given to illustrating how particular design features in the hardware improve overall system performance and support research activities.

  18. Magnetic beads-based DNAzyme recognition and AuNPs-based enzymatic catalysis amplification for visual detection of trace uranyl ion in aqueous environment.

    PubMed

    Zhang, Hongyan; Lin, Ling; Zeng, Xiaoxue; Ruan, Yajuan; Wu, Yongning; Lin, Minggui; He, Ye; Fu, FengFu

    2016-04-15

    We herein developed a novel biosensor for the visual detection of trace uranyl ion (UO2(2+)) in aqueous environments with high sensitivity and specificity by using DNAzyme-functionalized magnetic beads (MBs) for UO2(2+) recognition and gold nanoparticles (AuNPs)-based enzymatic catalysis oxidation of TMB (3,3',5,5'-tetramethylbenzidine sulfate) for signal generation. The utilization of MBs facilitates the magnetic separation and collection of the sensing system from a complex sample solution, which leads to more convenient experimental operation and stronger resistance of the biosensor to the sample matrix, and the utilization of AuNPs-based enzymatic catalysis amplification greatly improved the sensitivity of the biosensor. Compared with previous DNAzyme-based UO2(2+) sensors, the proposed biosensor has outstanding advantages such as relatively high sensitivity and specificity, operational convenience, low cost and stronger resistance to the sample matrix. It can be used to detect as little as 0.02 ppb (74 pM) of UO2(2+) in an aqueous environment by only naked-eye observation and 1.89 ppt (7.0 pM) of UO2(2+) by UV-visible spectrophotometer, with a recovery of 93-99% and a RSD ≤ 5.0% (n=6) within 3 h. Especially, the visual detection limit of 0.02 ppb (74 pM) is much lower than the maximum allowable level of UO2(2+) (130 nM) in drinking water defined by the U.S. Environmental Protection Agency (EPA), indicating that our method meets the requirement of rapid and on-site detection of UO2(2+) in the aqueous environment by only naked-eye observation. Copyright © 2015 Elsevier B.V. All rights reserved.
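    The paired detection limits quoted above can be sanity-checked by unit conversion, taking ppb as µg/L in dilute aqueous solution and using a nominal molar mass for UO2(2+):

```python
# Cross-check the reported detection limits by converting mass
# concentration to molarity for the uranyl ion UO2(2+).
M_UO2 = 238.03 + 2 * 15.999   # g/mol: U plus two O (charge mass negligible)

def ppb_to_pM(ppb, molar_mass_g_mol):
    """Mass concentration (1 ppb = 1 ug/L in dilute water) -> picomolar."""
    return ppb * 1e-6 / molar_mass_g_mol * 1e12

print(round(ppb_to_pM(0.02, M_UO2)))        # visual limit: ~74 pM
print(round(ppb_to_pM(1.89e-3, M_UO2), 1))  # 1.89 ppt: ~7.0 pM
```

    Both conversions reproduce the molarities stated in the abstract, confirming the ppb/pM pairs are internally consistent.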

  19. Developing Guidelines for Assessing Visual Analytics Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scholtz, Jean

    2011-07-01

    In this paper, we develop guidelines for evaluating visual analytic environments based on a synthesis of reviews for the entries to the 2009 Visual Analytics Science and Technology (VAST) Symposium Challenge and from a user study with professional intelligence analysts. By analyzing the 2009 VAST Challenge reviews we gained a better understanding of what is important to our reviewers, both visualization researchers and professional analysts. We also report on a small user study with professional analysts to determine the important factors that they use in evaluating visual analysis systems. We then looked at guidelines developed by researchers in various domains and synthesized these into an initial set for use by others in the community. In a second part of the user study, we looked at guidelines for a new aspect of visual analytic systems: the generation of reports. Future visual analytic systems have been challenged to help analysts generate their reports. In our study we worked with analysts to understand the criteria they used to evaluate the quality of analytic reports. We propose that this knowledge will be useful as researchers look at systems to automate some of the report generation. Based on these efforts, we produced some initial guidelines for evaluating visual analytic environments and for evaluating analytic reports. It is important to understand that these guidelines are initial drafts and are limited in scope because of the type of tasks for which the visual analytic systems used in the studies in this paper were designed. More research and refinement are needed by the Visual Analytics Community to provide additional evaluation guidelines for different types of visual analytic environments.

  20. RAVE: Rapid Visualization Environment

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M.; Anderson, Kevin; Simoudis, Avangelos

    1994-01-01

    Visualization is used in the process of analyzing large, multidimensional data sets. However, the selection and creation of visualizations that are appropriate for the characteristics of a particular data set and the satisfaction of the analyst's goals is difficult. The process consists of three tasks that are performed iteratively: generate, test, and refine. The performance of these tasks requires the utilization of several types of domain knowledge that data analysts do not often have. Existing visualization systems and frameworks do not adequately support the performance of these tasks. In this paper we present the RApid Visualization Environment (RAVE), a knowledge-based system that interfaces with commercial visualization frameworks and assists a data analyst in quickly and easily generating, testing, and refining visualizations. RAVE was used for the visualization of in situ measurement data captured by spacecraft.

  1. Augmented Virtuality: A Real-time Process for Presenting Real-world Visual Sensory Information in an Immersive Virtual Environment for Planetary Exploration

    NASA Astrophysics Data System (ADS)

    McFadden, D.; Tavakkoli, A.; Regenbrecht, J.; Wilson, B.

    2017-12-01

    Virtual Reality (VR) and Augmented Reality (AR) applications have recently seen an impressive growth, thanks to the advent of commercial Head Mounted Displays (HMDs). This new visualization era has opened the possibility of presenting researchers from multiple disciplines with data visualization techniques not possible via traditional 2D screens. In a purely VR environment researchers are presented with the visual data in a virtual environment, whereas in a purely AR application a virtual object is projected into the real world, with which researchers can interact. There are several limitations to purely VR or AR applications when taken within the context of remote planetary exploration. For example, in a purely VR environment, contents of the planet surface (e.g. rocks, terrain, or other features) must be created off-line from a multitude of images using image processing techniques to generate the 3D mesh data that will populate the virtual surface of the planet. This process usually takes a tremendous amount of computational resources and cannot be delivered in real-time. As an alternative, video frames may be superimposed on the virtual environment to save processing time. However, such rendered video frames will lack 3D visual information, i.e. depth information. In this paper, we present a technique to utilize a remotely situated robot's stereoscopic cameras to provide a live visual feed from the real world into the virtual environment in which planetary scientists are immersed. Moreover, the proposed technique will blend the virtual environment with the real world in such a way as to preserve both the depth and visual information from the real world while allowing for the sensation of immersion when the entire sequence is viewed via an HMD such as the Oculus Rift. The figure shows the virtual environment with an overlay of the real-world stereoscopic video being presented in real-time into the virtual environment. 
Notice the preservation of the object's shape, shadows, and depth information. The distortions shown in the image are due to the rendering of the stereoscopic data into a 2D image for the purposes of taking screenshots.
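    Recovering depth from a stereoscopic feed like the one described above typically rests on the standard pinhole-stereo relation depth = f·B/d. A minimal sketch of that conversion follows; the focal length and baseline values are illustrative, not the parameters of the authors' rig.

    ```python
    import numpy as np

    def disparity_to_depth(disparity, focal_length_px, baseline_m):
        """Convert a stereo disparity map (pixels) to metric depth.

        Uses the pinhole-stereo relation depth = f * B / d.
        Pixels with zero disparity (no stereo match) map to infinity.
        """
        disparity = np.asarray(disparity, dtype=float)
        depth = np.full(disparity.shape, np.inf)
        valid = disparity > 0
        depth[valid] = focal_length_px * baseline_m / disparity[valid]
        return depth

    # Illustrative numbers: 800 px focal length, 12 cm baseline.
    d = np.array([[16.0, 8.0],
                  [0.0, 4.0]])
    depth = disparity_to_depth(d, focal_length_px=800.0, baseline_m=0.12)
    # depth = [[6.0, 12.0], [inf, 24.0]] metres
    ```

    A depth map obtained this way is what allows real video pixels to be placed at their correct distances inside the virtual scene rather than on a flat billboard.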

  2. A GIS-Enabled, Michigan-Specific, Hierarchical Groundwater Modeling and Visualization System

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Li, S.; Mandle, R.; Simard, A.; Fisher, B.; Brown, E.; Ross, S.

    2005-12-01

    Efficient management of groundwater resources relies on a comprehensive database that represents the characteristics of the natural groundwater system as well as analysis and modeling tools to describe the impacts of decision alternatives. Many agencies in Michigan have spent several years compiling expensive and comprehensive surface water and groundwater inventories and other related spatial data that describe their respective areas of responsibility. However, most often this wealth of descriptive data has only been utilized for basic mapping purposes. The benefits from analyzing these data, using GIS analysis functions or externally developed analysis models or programs, have yet to be systematically realized. In this talk, we present a comprehensive software environment that allows Michigan groundwater resources managers and frontline professionals to make more effective use of the available data and improve their ability to manage and protect groundwater resources, address potential conflicts, design cleanup schemes, and prioritize investigation activities. In particular, we take advantage of the Interactive Ground Water (IGW) modeling system and convert it to a customized software environment specifically for analyzing, modeling, and visualizing the Michigan statewide groundwater database. The resulting Michigan IGW modeling system (IGW-M) is completely window-based, fully interactive, and seamlessly integrated with a GIS mapping engine. The system operates in real-time (on the fly), providing dynamic, hierarchical mapping, modeling, spatial analysis, and visualization. Specifically, IGW-M allows water resources and environmental professionals in Michigan to: * Access and utilize the extensive data from the statewide groundwater database, interactively manipulate GIS objects, and display and query the associated data and attributes; * Analyze and model the statewide groundwater database, interactively convert GIS objects into numerical model features, automatically extract data and attributes, and simulate unsteady groundwater flow and contaminant transport in response to water and land management decisions; * Visualize and map model simulations and predictions with data from the statewide groundwater database in a seamless interactive environment. IGW-M has the potential to significantly improve the productivity of Michigan groundwater management investigations. It shifts the role of engineers and scientists in modeling and analyzing the statewide groundwater database from heavily manual tasks toward cognitive problem-solving and decision-making tasks. The seamless real-time integration, real-time visual interaction, and real-time processing capabilities allow a user to focus on critical management issues, conflicts, and constraints, to quickly and iteratively examine conceptual approximations, management and planning scenarios, and site characterization assumptions, to identify dominant processes, to evaluate data worth and sensitivity, and to guide further data-collection activities. We illustrate the power and effectiveness of the IGW-M modeling and visualization system with a real case study and a real-time, live demonstration.
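    As a toy illustration of the kind of computation such a groundwater modeling system performs (this is not IGW-M's solver), steady two-dimensional flow of hydraulic head in a homogeneous aquifer obeys Laplace's equation, which can be solved by simple relaxation on a grid:

    ```python
    import numpy as np

    def steady_head(nx, ny, h_left, h_right, iters=5000):
        """Solve steady 2D groundwater flow (Laplace equation for
        hydraulic head) on a rectangular grid: fixed-head left/right
        boundaries, no-flow top/bottom boundaries, Jacobi iteration."""
        h = np.zeros((ny, nx))
        h[:, 0] = h_left    # fixed head on the left boundary
        h[:, -1] = h_right  # fixed head on the right boundary
        for _ in range(iters):
            # Mirror top/bottom rows to enforce no-flow (zero gradient).
            h[0, 1:-1] = h[1, 1:-1]
            h[-1, 1:-1] = h[-2, 1:-1]
            # Each interior node relaxes toward the mean of its neighbors.
            h[1:-1, 1:-1] = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1]
                                    + h[1:-1, :-2] + h[1:-1, 2:])
        return h

    # With these boundaries the converged head drops linearly left to right.
    heads = steady_head(nx=11, ny=5, h_left=10.0, h_right=0.0)
    ```

    Real systems such as IGW-M add heterogeneous conductivity fields, transient storage terms, and contaminant transport on top of this basic flow calculation.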

  3. Virtual reality for health care: a survey.

    PubMed

    Moline, J

    1997-01-01

    This report surveys the state of the art in applications of virtual environments and related technologies for health care. Applications of these technologies are being developed for health care in the following areas: surgical procedures (remote surgery or telepresence, augmented or enhanced surgery, and planning and simulation of procedures before surgery); medical therapy; preventive medicine and patient education; medical education and training; visualization of massive medical databases; skill enhancement and rehabilitation; and architectural design for health-care facilities. To date, such applications have improved the quality of health care, and in the future they will result in substantial cost savings. Tools that respond to the needs of present virtual environment systems are being refined or developed. However, additional large-scale research is necessary in the following areas: user studies, use of robots for telepresence procedures, enhanced system reality, and improved system functionality.

  4. Visualizing Complex Environments in the Geo- and BioSciences

    NASA Astrophysics Data System (ADS)

    Prabhu, A.; Fox, P. A.; Zhong, H.; Eleish, A.; Ma, X.; Zednik, S.; Morrison, S. M.; Moore, E. K.; Muscente, D.; Meyer, M.; Hazen, R. M.

    2017-12-01

    Earth's living and non-living components have co-evolved for 4 billion years through numerous positive and negative feedbacks. Earth and life scientists have amassed vast amounts of data in diverse fields related to planetary evolution through deep time: mineralogy and petrology, paleobiology and paleontology, paleotectonics and paleomagnetism, geochemistry and geochronology, genomics and proteomics, and more. Integrating the data from these complementary disciplines is very useful in gaining an understanding of the evolution of our planet's environment. The integrated data, however, represent many extremely complex environments. In order to gain insights and make discoveries using these data, it is important for us to model and visualize these complex environments. As part of work in understanding the "Co-Evolution of Geo and Biospheres using Data Driven Methodologies," we have developed several visualizations to help represent the information stored in the datasets from complementary disciplines. These visualizations include 2D and 3D force-directed networks, chord diagrams, 3D Klee diagrams, evolving network diagrams, skyline diagrams, and tree diagrams. Combining these visualizations with the results of machine learning and data analysis methods leads to a powerful way to discover patterns and relationships about the Earth's past and today's changing environment.

  5. Improving the discrimination of hand motor imagery via virtual reality based visual guidance.

    PubMed

    Liang, Shuang; Choi, Kup-Sze; Qin, Jing; Pang, Wai-Man; Wang, Qiong; Heng, Pheng-Ann

    2016-08-01

    While research on the brain-computer interface (BCI) has been active in recent years, how to obtain high-quality electrical brain signals that accurately reflect human intentions for reliable communication and interaction is still a challenging task. Evidence has shown that visually guided motor imagery (MI) can modulate sensorimotor electroencephalographic (EEG) rhythms in humans, but how to design and implement efficient visual guidance during MI in order to produce better event-related desynchronization (ERD) patterns is still unclear. The aim of this paper is to investigate the effect of using object-oriented movements in a virtual environment as visual guidance on the modulation of sensorimotor EEG rhythms generated by hand MI. To improve the classification accuracy on MI, we further propose an algorithm to automatically extract subject-specific optimal frequency and time bands for the discrimination of ERD patterns produced by left and right hand MI. The experimental results show that the average classification accuracy of object-directed scenarios is much better than that of non-object-directed scenarios (76.87% vs. 69.66%). The result of the t-test measuring the difference between them is statistically significant (p = 0.0207). When compared to algorithms based on fixed frequency and time bands, contralateral dominant ERD patterns can be enhanced by using the subject-specific optimal frequency and time bands obtained by our proposed algorithm. These findings have the potential to improve the efficacy and robustness of MI-based BCI applications.
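    A subject-specific band search of the general kind described above can be sketched as a grid search that scores each candidate frequency band by how well its log-bandpower separates the two classes of trials. The Fisher-ratio criterion, candidate bands, and synthetic data below are illustrative assumptions, not the paper's actual algorithm or parameters.

    ```python
    import numpy as np

    def band_power(trials, fs, lo, hi):
        """Mean FFT power of each trial within the [lo, hi) Hz band."""
        freqs = np.fft.rfftfreq(trials.shape[-1], d=1.0 / fs)
        spec = np.abs(np.fft.rfft(trials, axis=-1)) ** 2
        mask = (freqs >= lo) & (freqs < hi)
        return spec[..., mask].mean(axis=-1)

    def best_band(trials_a, trials_b, fs, candidates):
        """Pick the candidate band whose log-bandpower best separates
        the two classes (Fisher discriminant ratio)."""
        best, best_score = None, -np.inf
        for lo, hi in candidates:
            pa = np.log(band_power(trials_a, fs, lo, hi))
            pb = np.log(band_power(trials_b, fs, lo, hi))
            score = (pa.mean() - pb.mean()) ** 2 / (pa.var() + pb.var() + 1e-12)
            if score > best_score:
                best, best_score = (lo, hi), score
        return best

    # Synthetic demo: class A carries extra 10 Hz (mu-band) power.
    rng = np.random.default_rng(0)
    fs, t = 128, np.arange(256) / 128
    a = rng.normal(0, 1, (40, 256)) + 2.0 * np.sin(2 * np.pi * 10 * t)
    b = rng.normal(0, 1, (40, 256))
    band = best_band(a, b, fs, [(4, 8), (8, 12), (12, 30)])  # picks (8, 12)
    ```

    The paper additionally selects subject-specific time windows; the same scoring idea extends to a joint search over time and frequency.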

  6. Functional Mobility Performance and Balance Confidence in Older Adults after Sensorimotor Adaptation Training

    NASA Technical Reports Server (NTRS)

    Buccello-Stout, Regina R.; Cromwell, Ronita L.; Bloomberg, Jacob J.; Weaver, G. D.

    2010-01-01

    Research indicates that falling is a main contributor to injury in older adults. Declines in sensory systems limit the information needed to successfully maneuver through the environment. The objective of this study was to determine whether prolonged exposure to a realignment of perceptual-motor systems increases the adaptability of balance, and whether balance confidence improves after training. A total of 16 older adults aged 65-85 were randomized to a control group (walking on a treadmill while viewing a static visual scene) or an experimental group (walking on a treadmill while viewing a rotating visual scene). Prior to visual exposure, participants completed six trials of walking through a soft-foam obstacle course. Participants came in twice a week for 4 weeks to complete training sessions of walking on a treadmill while viewing the visual scene for 20 minutes each session. Participants completed the obstacle course again after training and four weeks later. Average time, penalty, and Activities-specific Balance Confidence Scale scores were computed for both groups across testing times. The older adults who trained significantly improved their time through the obstacle course, F(2, 28) = 9.41, p < 0.05, and reduced their penalty scores, F(2, 28) = 21.03, p < 0.05, compared with those who did not train. There was no difference in balance confidence scores between groups across testing times, F(2, 28) = 0.503, p > 0.05. Although the training group improved mobility through the obstacle course, there were no differences between the groups in balance confidence.

  7. The research of autonomous obstacle avoidance of mobile robot based on multi-sensor integration

    NASA Astrophysics Data System (ADS)

    Zhao, Ming; Han, Baoling

    2016-11-01

    The object of this study is a bionic quadruped mobile robot. The study proposes a system design for mobile-robot obstacle avoidance that integrates a binocular stereo vision sensor and a self-developed 3D Lidar with modified ant colony optimization path planning to reconstruct an environmental map. Because the working conditions of a mobile robot are complex, 3D reconstruction with a single binocular sensor is unreliable when feature points are few and lighting is poor. Therefore, this system integrates the Bumblebee2 stereo vision sensor and the Lidar sensor to detect the 3D point cloud of environmental obstacles, and this paper proposes a sensor information fusion technique to rebuild the environment map. First, obstacles are detected separately from the Lidar data and from the visual data; the two detection results are then fused to obtain a more complete and accurate distribution of obstacles in the scene. The thesis then introduces the ant colony algorithm, analyzes in depth the advantages and disadvantages of ant colony optimization and their causes, and improves the algorithm to increase its convergence rate and precision in robot path planning. These improvements and integrations overcome shortcomings of ant colony optimization such as easily falling into local optima, slow search speed, and poor search results. The experiment processes images and programs the motor drive in the Matlab and Visual Studio environments and establishes a visual 2.5D grid map. Finally, a global path is planned for the mobile robot according to the ant colony algorithm. The feasibility and effectiveness of the system are confirmed with ROS and a Linux simulation platform.
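    The abstract does not specify the improved ant colony algorithm itself, so the following is a generic ACO grid-planner sketch: ants walk from start to goal with pheromone-weighted random moves biased by a Manhattan-distance heuristic, and shorter completed paths deposit more pheromone. All parameters here are illustrative, not the paper's tuned values.

    ```python
    import random

    def aco_grid_path(grid, start, goal, n_ants=30, n_iters=40,
                      evap=0.5, seed=1):
        """Ant colony optimisation sketch for grid path planning.
        grid[r][c] == 1 marks an obstacle. Returns the best path
        found (a list of cells) or None."""
        rng = random.Random(seed)
        rows, cols = len(grid), len(grid[0])
        tau = {}  # pheromone per cell; unvisited cells default to 1.0

        def neighbours(cell):
            r, c = cell
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    yield (nr, nc)

        def walk():
            path, seen = [start], {start}
            while path[-1] != goal and len(path) < rows * cols:
                opts = [n for n in neighbours(path[-1]) if n not in seen]
                if not opts:
                    return None  # dead end: this ant gives up
                # Pheromone weight divided by a Manhattan-distance heuristic.
                w = [tau.get(n, 1.0)
                     / (1 + abs(n[0] - goal[0]) + abs(n[1] - goal[1]))
                     for n in opts]
                nxt = rng.choices(opts, weights=w)[0]
                path.append(nxt)
                seen.add(nxt)
            return path if path[-1] == goal else None

        best = None
        for _ in range(n_iters):
            paths = [p for p in (walk() for _ in range(n_ants)) if p]
            for cell in tau:                  # pheromone evaporation
                tau[cell] *= (1 - evap)
            for p in paths:                   # deposit: shorter path, more
                for cell in p:
                    tau[cell] = tau.get(cell, 1.0) + 1.0 / len(p)
                if best is None or len(p) < len(best):
                    best = p
        return best

    maze = [[0, 0, 0, 0],
            [1, 1, 0, 1],
            [0, 0, 0, 0],
            [0, 1, 1, 0]]
    path = aco_grid_path(maze, (0, 0), (3, 3))
    ```

    Published ACO variants counter premature convergence with measures such as pheromone bounds (MAX-MIN ant system) or rank-based deposits; the evaporation step above is the simplest such safeguard.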

  8. Contributions of Head-Mounted Cameras to Studying the Visual Environments of Infants and Young Children

    ERIC Educational Resources Information Center

    Smith, Linda B.; Yu, Chen; Yoshida, Hanako; Fausey, Caitlin M.

    2015-01-01

    Head-mounted video cameras (with and without an eye camera to track gaze direction) are being increasingly used to study infants' and young children's visual environments and provide new and often unexpected insights about the visual world from a child's point of view. The challenge in using head cameras is principally conceptual and concerns the…

  9. SCSODC: Integrating Ocean Data for Visualization Sharing and Application

    NASA Astrophysics Data System (ADS)

    Xu, C.; Li, S.; Wang, D.; Xie, Q.

    2014-02-01

    The South China Sea Ocean Data Center (SCSODC) was founded in 2010 in order to improve the collection and management of ocean data at the South China Sea Institute of Oceanology (SCSIO). The mission of SCSODC is to ensure the long term scientific stewardship of ocean data, information and products - collected through research groups, monitoring stations and observation cruises - and to facilitate their efficient use and distribution to potential users. However, data sharing and applications were limited by the distributed and heterogeneous nature of the data, which made integration difficult. To surmount those difficulties, the Data Sharing System has been developed by the SCSODC using the most appropriate information management and information technology. The Data Sharing System uses open standards and tools to promote the capability to integrate ocean data and to interact with other data portals or users, and it covers a full range of processes such as data discovery, evaluation and access, combining client/server (C/S) and browser/server (B/S) modes. It provides a visualized management interface for data managers and a transparent and seamless data access and application environment for users. Users may access data using the client software or use the interactive visualization application interface via a web browser. The architecture, key technologies and functionality of the system are discussed briefly in this paper. It is shown that the SCSODC system is able to implement web-based visualization sharing and seamless access to ocean data in a distributed and heterogeneous environment.

  10. 3D Visualization for Phoenix Mars Lander Science Operations

    NASA Technical Reports Server (NTRS)

    Edwards, Laurence; Keely, Leslie; Lees, David; Stoker, Carol

    2012-01-01

    Planetary surface exploration missions present considerable operational challenges in the form of substantial communication delays, limited communication windows, and limited communication bandwidth. 3D visualization software was developed and delivered to the 2008 Phoenix Mars Lander (PML) mission. The components of the system include an interactive 3D visualization environment called Mercator, terrain reconstruction software called the Ames Stereo Pipeline, and a server providing distributed access to terrain models. The software was successfully utilized during the mission for science analysis, site understanding, and science operations activity planning. A terrain server was implemented that provided distribution of terrain models from a central repository to clients running the Mercator software. The Ames Stereo Pipeline generates accurate, high-resolution, texture-mapped, 3D terrain models from stereo image pairs. These terrain models can then be visualized within the Mercator environment. The central cross-cutting goal for these tools is to provide an easy-to-use, high-quality, full-featured visualization environment that enhances the mission science team's ability to develop low-risk productive science activity plans. In addition, for the Mercator and Viz visualization environments, extensibility and adaptability to different missions and application areas are key design goals.

  11. Colour change of twig-mimicking peppered moth larvae is a continuous reaction norm that increases camouflage against avian predators

    PubMed Central

    Rowland, Hannah M.; Edmonds, Nicola; Saccheri, Ilik J.

    2017-01-01

    Camouflage, and in particular background-matching, is one of the most common anti-predator strategies observed in nature. Animals can improve their match to the colour/pattern of their surroundings through background selection, and/or by plastic colour change. Colour change can occur rapidly (a few seconds), or it may be slow, taking hours to days. Many studies have explored the cues and mechanisms behind rapid colour change, but there is a considerable lack of information about slow colour change in the context of predation: the cues that initiate it, and the range of phenotypes that are produced. Here we show that peppered moth (Biston betularia) larvae respond to colour and luminance of the twigs they rest on, and exhibit a continuous reaction norm of phenotypes. When presented with a heterogeneous environment of mixed twig colours, individual larvae specialise crypsis towards one colour rather than developing an intermediate colour. Flexible colour change in this species has likely evolved in association with wind dispersal and polyphagy, which result in caterpillars settling and feeding in a diverse range of visual environments. This is the first example of visually induced slow colour change in Lepidoptera that has been objectively quantified and measured from the visual perspective of natural predators. PMID:29158965

  12. Colour change of twig-mimicking peppered moth larvae is a continuous reaction norm that increases camouflage against avian predators.

    PubMed

    Eacock, Amy; Rowland, Hannah M; Edmonds, Nicola; Saccheri, Ilik J

    2017-01-01

    Camouflage, and in particular background-matching, is one of the most common anti-predator strategies observed in nature. Animals can improve their match to the colour/pattern of their surroundings through background selection, and/or by plastic colour change. Colour change can occur rapidly (a few seconds), or it may be slow, taking hours to days. Many studies have explored the cues and mechanisms behind rapid colour change, but there is a considerable lack of information about slow colour change in the context of predation: the cues that initiate it, and the range of phenotypes that are produced. Here we show that peppered moth (Biston betularia) larvae respond to colour and luminance of the twigs they rest on, and exhibit a continuous reaction norm of phenotypes. When presented with a heterogeneous environment of mixed twig colours, individual larvae specialise crypsis towards one colour rather than developing an intermediate colour. Flexible colour change in this species has likely evolved in association with wind dispersal and polyphagy, which result in caterpillars settling and feeding in a diverse range of visual environments. This is the first example of visually induced slow colour change in Lepidoptera that has been objectively quantified and measured from the visual perspective of natural predators.

  13. Improve Problem Solving Skills through Adapting Programming Tools

    NASA Technical Reports Server (NTRS)

    Shaykhian, Linda H.; Shaykhian, Gholam Ali

    2007-01-01

    There are numerous ways for engineers and students to become better problem-solvers. The use of command-line and visual programming tools can help to model a problem and formulate a solution through visualization. The analysis of problem attributes and constraints provides insight into the scope and complexity of the problem. The visualization aspect of the problem-solving approach tends to make students and engineers more systematic in their thought process and helps them catch errors before proceeding too far in the wrong direction. The problem-solver identifies and defines important terms, variables, rules, and procedures required for solving a problem. Every step required to construct the problem solution can be defined in program commands that produce intermediate output. This paper advocates improving problem-solving skills through the use of a programming tool. MatLab, created by MathWorks, is an interactive numerical computing environment and programming language. It is a matrix-based system that lends itself easily to matrix manipulation and the plotting of functions and data. MatLab can be used interactively at a command line, or as a sequence of commands saved in a file as a script or as named functions. Prior programming experience is not required to use MatLab commands. GNU Octave, a free program for numerical computation developed as part of the GNU Project, is comparable to MatLab. MatLab visual and command programming are presented here.

  14. Low-cost, high-resolution scanning laser ophthalmoscope for the clinical environment

    NASA Astrophysics Data System (ADS)

    Soliz, P.; Larichev, A.; Zamora, G.; Murillo, S.; Barriga, E. S.

    2010-02-01

    Researchers have sought to gain greater insight into the mechanisms of the retina and the optic disc at high spatial resolutions that would enable the visualization of small structures such as photoreceptors and nerve fiber bundles. The sources of retinal image quality degradation are aberrations within the human eye, which limit the achievable resolution and the contrast of small image details. To overcome these fundamental limitations, researchers have been applying adaptive optics (AO) techniques to correct for the aberrations. Today, deformable mirror based adaptive optics devices have been developed to overcome the limitations of standard fundus cameras, but at prices that are typically unaffordable for most clinics. In this paper we demonstrate a clinically viable fundus camera with auto-focus and astigmatism correction that is easy to use and has improved resolution. We have shown that removal of low-order aberrations results in significantly better resolution and quality images. Additionally, through the application of image restoration and super-resolution techniques, the images present considerably improved quality. The improvements lead to enhanced visualization of retinal structures associated with pathology.

  15. Integration of today's digital state with tomorrow's visual environment

    NASA Astrophysics Data System (ADS)

    Fritsche, Dennis R.; Liu, Victor; Markandey, Vishal; Heimbuch, Scott

    1996-03-01

    New developments in visual communication technologies, and the increasingly digital nature of the industry infrastructure as a whole, are converging to enable new visual environments with an enhanced visual component in interaction, entertainment, and education. New applications and markets can be created, but this depends on the ability of the visual communications industry to provide market solutions that are cost effective and user friendly. Industry-wide cooperation in the development of integrated, open architecture applications enables the realization of such market solutions. This paper describes the work being done by Texas Instruments, in the development of its Digital Light Processing (TM) technology, to support the development of new visual communications technologies and applications.

  16. BactoGeNIE: A large-scale comparative genome visualization for big displays

    DOE PAGES

    Aurisano, Jillian; Reda, Khairi; Johnson, Andrew; ...

    2015-08-13

    The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. In this paper, we present Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. In conclusion, BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics.

  17. BactoGeNIE: a large-scale comparative genome visualization for big displays

    PubMed Central

    2015-01-01

    Background The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. Results In this paper, we present Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. Conclusions BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics. PMID:26329021

  18. BactoGeNIE: A large-scale comparative genome visualization for big displays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aurisano, Jillian; Reda, Khairi; Johnson, Andrew

    The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. In this paper, we present Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. In conclusion, BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics.

  19. Feature reliability determines specificity and transfer of perceptual learning in orientation search.

    PubMed

    Yashar, Amit; Denison, Rachel N

    2017-12-01

    Training can modify the visual system to produce a substantial improvement on perceptual tasks and therefore has applications for treating visual deficits. Visual perceptual learning (VPL) is often specific to the trained feature, which gives insight into processes underlying brain plasticity, but limits VPL's effectiveness in rehabilitation. Under what circumstances VPL transfers to untrained stimuli is poorly understood. Here we report a qualitatively new phenomenon: intrinsic variation in the representation of features determines the transfer of VPL. Orientations around cardinal are represented more reliably than orientations around oblique in V1, which has been linked to behavioral consequences such as visual search asymmetries. We studied VPL for visual search of near-cardinal or oblique targets among distractors of the other orientation while controlling for other display and task attributes, including task precision, task difficulty, and stimulus exposure. Learning was the same in all training conditions; however, transfer depended on the orientation of the target, with full transfer of learning from near-cardinal to oblique targets but not the reverse. To evaluate the idea that representational reliability was the key difference between the orientations in determining VPL transfer, we created a model that combined orientation-dependent reliability, improvement of reliability with learning, and an optimal search strategy. Modeling suggested that not only search asymmetries but also the asymmetric transfer of VPL depended on preexisting differences between the reliability of near-cardinal and oblique representations. Transfer asymmetries in model behavior also depended on having different learning rates for targets and distractors, such that greater learning for low-reliability distractors facilitated transfer. 
These findings suggest that training on sensory features with intrinsically low reliability may maximize the generalizability of learning in complex visual environments.
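    The modeling idea above - that preexisting differences in encoding reliability drive search performance and transfer asymmetries - can be illustrated with a toy max-rule search observer. The noise levels, target offset, and set size below are illustrative assumptions, not the authors' fitted model.

    ```python
    import random

    def search_accuracy(sigma_target, sigma_distractor, n_items=8,
                        n_trials=4000, seed=0):
        """Toy max-rule visual search: one target (orientation offset
        +10 deg) appears among distractors (offset 0), each measured
        with Gaussian noise whose s.d. reflects how reliably that
        orientation is encoded. The observer picks the item with the
        largest measurement. Returns proportion correct."""
        rng = random.Random(seed)
        correct = 0
        for _ in range(n_trials):
            target = 10 + rng.gauss(0, sigma_target)
            distractors = [rng.gauss(0, sigma_distractor)
                           for _ in range(n_items - 1)]
            if target > max(distractors):
                correct += 1
        return correct / n_trials

    # Near-cardinal orientations are encoded more reliably (smaller
    # noise), so search involving reliably encoded items is more
    # accurate than search among less reliable, oblique-like items.
    acc_cardinal = search_accuracy(sigma_target=4.0, sigma_distractor=4.0)
    acc_oblique = search_accuracy(sigma_target=8.0, sigma_distractor=8.0)
    ```

    In the paper's full model, learning reduces these noise parameters at different rates for targets and distractors, which is what produces the asymmetric transfer; this sketch only captures the static reliability difference.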

  20. Feature reliability determines specificity and transfer of perceptual learning in orientation search

    PubMed Central

    2017-01-01

    Training can modify the visual system to produce a substantial improvement on perceptual tasks and therefore has applications for treating visual deficits. Visual perceptual learning (VPL) is often specific to the trained feature, which gives insight into processes underlying brain plasticity, but limits VPL’s effectiveness in rehabilitation. Under what circumstances VPL transfers to untrained stimuli is poorly understood. Here we report a qualitatively new phenomenon: intrinsic variation in the representation of features determines the transfer of VPL. Orientations around cardinal are represented more reliably than orientations around oblique in V1, which has been linked to behavioral consequences such as visual search asymmetries. We studied VPL for visual search of near-cardinal or oblique targets among distractors of the other orientation while controlling for other display and task attributes, including task precision, task difficulty, and stimulus exposure. Learning was the same in all training conditions; however, transfer depended on the orientation of the target, with full transfer of learning from near-cardinal to oblique targets but not the reverse. To evaluate the idea that representational reliability was the key difference between the orientations in determining VPL transfer, we created a model that combined orientation-dependent reliability, improvement of reliability with learning, and an optimal search strategy. Modeling suggested that not only search asymmetries but also the asymmetric transfer of VPL depended on preexisting differences between the reliability of near-cardinal and oblique representations. Transfer asymmetries in model behavior also depended on having different learning rates for targets and distractors, such that greater learning for low-reliability distractors facilitated transfer. 
These findings suggest that training on sensory features with intrinsically low reliability may maximize the generalizability of learning in complex visual environments. PMID:29240813

  1. A habituation based approach for detection of visual changes in surveillance camera

    NASA Astrophysics Data System (ADS)

    Sha'abani, M. N. A. H.; Adan, N. F.; Sabani, M. S. M.; Abdullah, F.; Nadira, J. H. S.; Yasin, M. S. M.

    2017-09-01

    This paper investigates a habituation-based approach to detecting visual changes with video surveillance systems in a passive environment. Various techniques have been introduced for dynamic environments, such as motion detection, object classification, and behaviour analysis. In a passive environment, however, most of the scenes recorded by the surveillance system are normal, so running a complex analysis continuously is computationally expensive, especially at high video resolutions. A mechanism of attention is therefore required, in which the system responds only to abnormal events. This paper proposes a novelty detection mechanism for detecting visual changes and a habituation-based approach to measuring the level of novelty. The objective of the paper is to investigate the feasibility of the habituation-based approach for detecting visual changes. Experimental results show that the approach is able to accurately detect the presence of novelty as deviations from the learned knowledge.
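
    The habituation mechanism described above can be sketched as a response that starts high for an unseen stimulus and decays with each repetition, so that only still-novel events cross an attention threshold. This is a minimal illustrative model, assuming an exponential decay with rate `tau` and a fixed threshold; the paper's actual formulation and stimulus representation are not specified here.

    ```python
    class HabituationDetector:
        """Responds strongly to novel stimuli and habituates to repeated ones."""

        def __init__(self, tau=0.9, threshold=0.5):
            self.tau = tau              # per-exposure habituation factor (0 < tau < 1)
            self.threshold = threshold  # response level below which a stimulus is "normal"
            self.strength = {}          # stimulus id -> current response strength

        def respond(self, stimulus_id):
            # Response starts at 1.0 for an unseen stimulus and decays on each repeat.
            s = self.strength.get(stimulus_id, 1.0)
            self.strength[stimulus_id] = s * self.tau
            return s

        def is_novel(self, stimulus_id):
            # Flag the event as novel (worth running the complex analysis)
            # only while the response is still above threshold.
            return self.respond(stimulus_id) > self.threshold
    ```

    Under this model, repeated exposure to the same scene gradually suppresses the response, approximating the "attention" behaviour the paper targets.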

  2. A collaborative interaction and visualization multi-modal environment for surgical planning.

    PubMed

    Foo, Jung Leng; Martinez-Escobar, Marisol; Peloquin, Catherine; Lobe, Thom; Winer, Eliot

    2009-01-01

    The proliferation of virtual reality visualization and interaction technologies has changed the way medical image data are analyzed and processed. This paper presents a multi-modal environment that combines a virtual reality application with a desktop application for collaborative surgical planning. Both visualization applications can function independently but can also be synced over a network connection for collaborative work. Any change to either application is immediately synced and updated in the other. This is an efficient collaboration tool that allows multiple teams of doctors, with only an internet connection, to visualize and interact with the same patient data simultaneously. With this multi-modal environment framework, one team working in the VR environment and another team at a remote location working on a desktop machine can collaborate in the examination and discussion of procedures such as diagnosis, surgical planning, teaching, and tele-mentoring.

  3. High-fidelity bilateral teleoperation systems and the effect of multimodal haptics.

    PubMed

    Tavakoli, Mahdi; Aziminejad, Arash; Patel, Rajni V; Moallem, Mehrdad

    2007-12-01

    In master-slave teleoperation applications that deal with a delicate and sensitive environment, it is important to provide haptic feedback of slave/environment interactions to the user's hand as it improves task performance and teleoperation transparency (fidelity), which is the extent of telepresence of the remote environment available to the user through the master-slave system. For haptic teleoperation, in addition to a haptics-capable master interface, often one or more force sensors are also used, which warrant new bilateral control architectures while increasing the cost and the complexity of the teleoperation system. In this paper, we investigate the added benefits of using force sensors that measure hand/master and slave/environment interactions and of utilizing local feedback loops on the teleoperation transparency. We compare the two-channel and the four-channel bilateral control systems in terms of stability and transparency, and study the stability and performance robustness of the four-channel method against nonidealities that arise during bilateral control implementation, which include master-slave communication latency and changes in the environment dynamics. The next issue addressed in the paper deals with the case where the master interface is not haptics capable, but the slave is equipped with a force sensor. In the context of robotics-assisted soft-tissue surgical applications, we explore through human factors experiments whether slave/environment force measurements can be of any help with regard to improving task performance. The last problem we study is whether slave/environment force information, with and without haptic capability in the master interface, can help improve outcomes under degraded visual conditions.

  4. Mobile assistive technologies for the visually impaired.

    PubMed

    Hakobyan, Lilit; Lumsden, Jo; O'Sullivan, Dympna; Bartlett, Hannah

    2013-01-01

    There are around 285 million visually impaired people worldwide, and around 370,000 people are registered as blind or partially sighted in the UK. Ongoing advances in information technology (IT) are increasing the scope for IT-based mobile assistive technologies to facilitate the independence, safety, and improved quality of life of the visually impaired. Research is being directed at making mobile phones and other handheld devices accessible via our haptic (touch) and audio sensory channels. We review research and innovation within the field of mobile assistive technology for the visually impaired and, in so doing, highlight the need for successful collaboration between clinical expertise, computer science, and domain users to realize fully the potential benefits of such technologies. We initially reflect on research that has been conducted to make mobile phones more accessible to people with vision loss. We then discuss innovative assistive applications designed for the visually impaired that are either delivered via mainstream devices and can be used while in motion (e.g., mobile phones) or are embedded within an environment that may be in motion (e.g., public transport) or within which the user may be in motion (e.g., smart homes). Copyright © 2013 Elsevier Inc. All rights reserved.

  5. Mobile in vivo camera robots provide sole visual feedback for abdominal exploration and cholecystectomy.

    PubMed

    Rentschler, M E; Dumpert, J; Platt, S R; Ahmed, S I; Farritor, S M; Oleynikov, D

    2006-01-01

    The use of small incisions in laparoscopy reduces patient trauma, but also limits the surgeon's ability to view and touch the surgical environment directly. These limitations generally restrict the application of laparoscopy to procedures less complex than those performed during open surgery. Although current robot-assisted laparoscopy improves the surgeon's ability to manipulate and visualize the target organs, the instruments and cameras remain fundamentally constrained by the entry incisions. This limits tool tip orientation and optimal camera placement. The current work focuses on developing a new miniature mobile in vivo adjustable-focus camera robot to provide sole visual feedback to surgeons during laparoscopic surgery. A miniature mobile camera robot was inserted through a trocar into the insufflated abdominal cavity of an anesthetized pig. The mobile robot allowed the surgeon to explore the abdominal cavity remotely and view trocar and tool insertion and placement without entry incision constraints. The surgeon then performed a cholecystectomy using the robot camera alone for visual feedback. This successful trial has demonstrated that miniature in vivo mobile robots can provide surgeons with sufficient visual feedback to perform common procedures while reducing patient trauma.

  6. Visualization Co-Processing of a CFD Simulation

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi

    1999-01-01

    OVERFLOW, a widely used CFD simulation code, is combined with a visualization system, pV3, to experiment with an environment for simulation/visualization co-processing on an SGI Origin 2000 (O2K) system. The shared-memory version of the solver is used, with the O2K 'pfa' preprocessor invoked to automatically discover parallelism in the source code. No other explicit parallelism is enabled. In order to study the scaling and performance of the visualization co-processing system, sample runs are made with different processor groups in the range of 1 to 254 processors. The data exchange between the visualization system and the simulation system is rapid enough for user interactivity when the problem size is small. This shared-memory version of OVERFLOW, with minimal parallelization, does not scale well to an increasing number of available processors. The visualization task takes about 18 to 30% of the total processing time and does not appear to be a major contributor to the poor scaling. Improper load balancing and inter-processor communication overhead are contributors to this poor performance. Work in progress aims to improve the parallel performance of the solver and to remove the limitation of serial data transfer to pV3 by examining various parallelization/communication strategies, including the use of explicit message passing.

  7. Colour, vision and ergonomics.

    PubMed

    Pinheiro, Cristina; da Silva, Fernando Moreira

    2012-01-01

    This paper is based on a research project, Visual Communication and Inclusive Design: Colour, Legibility and Aged Vision, developed at the Faculty of Architecture of Lisbon. The research aims to determine specific design principles for printed visual communication design objects so that they can be easily read and perceived by all. The study's target group comprised socially active individuals between 55 and 80 years of age, and we used cultural event posters as objects of study and observation. The main objective is to bring together the study of areas such as colour, vision, older people's colour vision, ergonomics, chromatic contrast, typography, and legibility. In the end we will produce a manual with guidelines and information for applying scientific knowledge to communication design practice. In the normal aging process, visual functions gradually decline: the quality of vision worsens, and colour vision and contrast sensitivity are also affected. As people's needs change with age, design should help people and communities and improve quality of life in the present. By applying the principles of visually accessible design and ergonomics, printed design objects (as well as interior spaces, urban environments, products, signage, and all kinds of visual information) will be effective and easier on the eyes, not only for visually impaired people but for all of us as we age.

  8. The Visual Geophysical Exploration Environment: A Multi-dimensional Scientific Visualization

    NASA Astrophysics Data System (ADS)

    Pandya, R. E.; Domenico, B.; Murray, D.; Marlino, M. R.

    2003-12-01

    The Visual Geophysical Exploration Environment (VGEE) is an online learning environment designed to help undergraduate students understand fundamental Earth system science concepts. The guiding principle of the VGEE is the importance of hands-on interaction with scientific visualization and data. The VGEE consists of four elements: 1) an online, inquiry-based curriculum for guiding student exploration; 2) a suite of El Nino-related data sets adapted for student use; 3) a learner-centered interface to a scientific visualization tool; and 4) a set of concept models (interactive tools that help students understand fundamental scientific concepts). There are two key innovations featured in this interactive poster session. One is the integration of concept models and the visualization tool. Concept models are simple, interactive, Java-based illustrations of fundamental physical principles. We developed eight concept models and integrated them into the visualization tool to enable students to probe data. The ability to probe data using a concept model addresses the common problem of transfer: the difficulty students have in applying theoretical knowledge to everyday phenomena. The other innovation is a visualization environment and data that are discoverable in digital libraries and can be installed, configured, and used for investigations over the web. By collaborating with the Integrated Data Viewer developers, we were able to embed a web-launchable visualization tool and access to distributed data sets into the online curricula. The Thematic Real-time Environmental Data Distributed Services (THREDDS) project is working to provide catalogs of datasets that can be used in new VGEE curricula under development. By cataloging these curricula in the Digital Library for Earth System Education (DLESE), learners and educators can discover the data and visualization tool within a framework that guides their use.

  9. Classroom Environments: An Experiential Analysis of the Pupil-Teacher Visual Interaction in Uruguay

    ERIC Educational Resources Information Center

    Cardellino, Paula; Araneda, Claudio; García Alvarado, Rodrigo

    2017-01-01

    We argue that the traditional physical environment is commonly taken for granted and that little consideration has been given to how this affects pupil-teacher interactions. This article presents evidence that certain physical environments do not allow equal visual interaction and, as a result, we derive a set of basic guiding principles that…

  10. The Effects of Visual Cues and Learners' Field Dependence in Multiple External Representations Environment for Novice Program Comprehension

    ERIC Educational Resources Information Center

    Wei, Liew Tze; Sazilah, Salam

    2012-01-01

    This study investigated the effects of visual cues in multiple external representations (MER) environment on the learning performance of novices' program comprehension. Program codes and flowchart diagrams were used as dual representations in multimedia environment to deliver lessons on C-Programming. 17 field independent participants and 16 field…

  11. A study on haptic collaborative game in shared virtual environment

    NASA Astrophysics Data System (ADS)

    Lu, Keke; Liu, Guanyang; Liu, Lingzhi

    2013-03-01

    This paper introduces a study of a collaborative game in a shared virtual environment with haptic feedback over computer networks. A collaborative task was used in which players located at remote sites played the game together. Compared to traditional networked multiplayer games, players receive both visual and haptic feedback in the virtual environment. The experiment was designed with two conditions: visual feedback only and visual-haptic feedback. The goal of the experiment was to assess the impact of force feedback on collaborative task performance. Results indicate that haptic feedback enhances performance in a collaborative game in a shared virtual environment. The outcomes of this research can have a powerful impact on networked computer games.

  12. Visual spatial cue use for guiding orientation in two-to-three-year-old children

    PubMed Central

    van den Brink, Danielle; Janzen, Gabriele

    2013-01-01

    In spatial development representations of the environment and the use of spatial cues change over time. To date, the influence of individual differences in skills relevant for orientation and navigation has not received much attention. The current study investigated orientation abilities on the basis of visual spatial cues in 2–3-year-old children, and assessed factors that possibly influence spatial task performance. Thirty-month and 35-month-olds performed an on-screen Virtual Reality (VR) orientation task searching for an animated target in the presence of visual self-movement cues and landmark information. Results show that, in contrast to 30-month-old children, 35-month-olds were successful in using visual spatial cues for maintaining orientation. Neither age group benefited from landmarks present in the environment, suggesting that successful task performance relied on the use of optic flow cues, rather than object-to-object relations. Analysis of individual differences revealed that 2-year-olds who were relatively more independent in comparison to their peers, as measured by the daily living skills scale of the parental questionnaire Vineland-Screener were most successful at the orientation task. These results support previous findings indicating that the use of various spatial cues gradually improves during early childhood. Our data show that a developmental transition in spatial cue use can be witnessed within a relatively short period of 5 months only. Furthermore, this study indicates that rather than chronological age, individual differences may play a role in successful use of visual cues for spatial updating in an orientation task. Future studies are necessary to assess the exact nature of these individual differences. PMID:24368903

  13. Visual spatial cue use for guiding orientation in two-to-three-year-old children.

    PubMed

    van den Brink, Danielle; Janzen, Gabriele

    2013-01-01

    In spatial development representations of the environment and the use of spatial cues change over time. To date, the influence of individual differences in skills relevant for orientation and navigation has not received much attention. The current study investigated orientation abilities on the basis of visual spatial cues in 2-3-year-old children, and assessed factors that possibly influence spatial task performance. Thirty-month and 35-month-olds performed an on-screen Virtual Reality (VR) orientation task searching for an animated target in the presence of visual self-movement cues and landmark information. Results show that, in contrast to 30-month-old children, 35-month-olds were successful in using visual spatial cues for maintaining orientation. Neither age group benefited from landmarks present in the environment, suggesting that successful task performance relied on the use of optic flow cues, rather than object-to-object relations. Analysis of individual differences revealed that 2-year-olds who were relatively more independent in comparison to their peers, as measured by the daily living skills scale of the parental questionnaire Vineland-Screener were most successful at the orientation task. These results support previous findings indicating that the use of various spatial cues gradually improves during early childhood. Our data show that a developmental transition in spatial cue use can be witnessed within a relatively short period of 5 months only. Furthermore, this study indicates that rather than chronological age, individual differences may play a role in successful use of visual cues for spatial updating in an orientation task. Future studies are necessary to assess the exact nature of these individual differences.

  14. VISUAL3D - An EIT network on visualization of geomodels

    NASA Astrophysics Data System (ADS)

    Bauer, Tobias

    2017-04-01

    When it comes to interpreting data and understanding deep geological structures and bodies at different scales, modelling tools and modelling experience are vital for deep exploration. Geomodelling provides a platform for integrating different types of data, including new kinds of information (e.g., from new, improved measuring methods). EIT Raw Materials, initiated by the EIT (European Institute of Innovation and Technology) and funded by the European Commission, is the largest and strongest consortium in the raw materials sector worldwide. The VISUAL3D infrastructure network is an initiative of EIT Raw Materials that aims to bring together partners with 3D-4D visualisation infrastructure and 3D-4D modelling experience. The recently formed network interlinks hardware, software, and expert knowledge in model visualization and output. A special focus will be linking research, education, and industry, integrating multi-disciplinary data, and visualizing those data in three and four dimensions. Through these network collaborations we aim to improve the combination of geomodels with differing file formats and data characteristics. This will create greater competency in model visualization and the ability to interchange and communicate models more easily. By combining knowledge and experience in geomodelling with expertise in Virtual Reality visualization, partners of EIT Raw Materials, as well as external parties, will be able to visualize, analyze, and validate their geomodels in immersive VR environments.
The current network combines partners from universities, research institutes, geological surveys and industry with a strong background in geological 3D-modelling and 3D visualization and comprises: Luleå University of Technology, Geological Survey of Finland, Geological Survey of Denmark and Greenland, TUBA Freiberg, Uppsala University, Geological Survey of France, RWTH Aachen, DMT, KGHM Cuprum, Boliden, Montan Universität Leoben, Slovenian National Building and Civil Engineering Institute, Tallinn University of Technology and Turku University. The infrastructure within the network comprises different types of capturing and visualization hardware, ranging from high resolution cubes, VR walls, VR goggle solutions, high resolution photogrammetry, UAVs, lidar-scanners, and many more.

  15. A Visual Programming Methodology for Tactical Aircrew Scheduling and Other Applications

    DTIC Science & Technology

    1991-12-01

    programming methodology and environment of a user-specific application remains with and is delivered as part of the application, then there is another factor...animation is useful, not only for scheduling applications, but as a general programming methodology. Of course, there are a number of improvements...possible using Excel because there is nothing to prevent access to cells. However, it is easy to imagine a spreadsheet which can support the

  16. Real-time tracking of visually attended objects in virtual environments and its application to LOD.

    PubMed

    Lee, Sungkil; Kim, Gerard Jounghyun; Choi, Seungmoon

    2009-01-01

    This paper presents a real-time framework for computationally tracking the objects visually attended by the user while navigating interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors, and identifies the most plausibly attended objects among candidates in the object saliency map. The computational framework was implemented on the GPU, exhibiting computational performance adequate for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing the objects the framework regarded as visually attended against actual human gaze collected with an eye tracker. The results indicated that the accuracy was at a level well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially owing to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of detail in virtual environments, without any hardware for head or eye tracking.
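
    The combination of bottom-up saliency with top-down context can be illustrated as a weighted sum over candidate objects. The function below is a hypothetical sketch: the `alpha` weighting, the score ranges, and the object tuples are assumptions for illustration, not the paper's GPU implementation.

    ```python
    def attended_object(objects, alpha=0.6):
        """Pick the most plausibly attended object.

        objects: list of (name, bottom_up, top_down) tuples with scores in [0, 1];
        alpha weights stimulus-driven saliency against goal-directed context.
        """
        def score(obj):
            _, bottom_up, top_down = obj
            return alpha * bottom_up + (1 - alpha) * top_down

        # The candidate with the highest combined score is deemed attended.
        return max(objects, key=score)[0]
    ```

    For example, a visually bland object that matches the user's inferred goal can outrank a flashy but task-irrelevant one, which is the effect the top-down term is meant to capture.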

  17. The remarkable visual capacities of nocturnal insects: vision at the limits with small eyes and tiny brains.

    PubMed

    Warrant, Eric J

    2017-04-05

    Nocturnal insects have evolved remarkable visual capacities, despite small eyes and tiny brains. They can see colour, control flight and land, react to faint movements in their environment, navigate using dim celestial cues and find their way home after a long and tortuous foraging trip using learned visual landmarks. These impressive visual abilities occur at light levels when only a trickle of photons is being absorbed by each photoreceptor, begging the question of how the visual system nonetheless generates the reliable signals needed to steer behaviour. In this review, I attempt to provide an answer to this question. Part of the answer lies in their compound eyes, which maximize light capture. Part lies in the slow responses and high gains of their photoreceptors, which improve the reliability of visual signals. And a very large part lies in the spatial and temporal summation of these signals in the optic lobe, a strategy that substantially enhances contrast sensitivity in dim light and allows nocturnal insects to see a brighter world, albeit a slower and coarser one. What is abundantly clear, however, is that during their evolution insects have overcome several serious potential visual limitations, endowing them with truly extraordinary night vision. This article is part of the themed issue 'Vision in dim light'. © 2017 The Author(s).

  18. Using component technologies for web based wavelet enhanced mammographic image visualization.

    PubMed

    Sakellaropoulos, P; Costaridou, L; Panayiotakis, G

    2000-01-01

    The poor contrast detectability of mammography can be addressed with domain-specific software visualization tools. Remote desktop client access and the time-performance limitations of a previously reported visualization tool are addressed here, aiming at more efficient visualization of mammographic image resources residing on web or PACS image servers. This effort is also motivated by the fact that, at present, web browsers do not support domain-specific medical image visualization. To provide desktop client access, the tool was redesigned using component technologies, enabling the integration of stand-alone domain-specific mammographic image functionality in a web browsing environment (web adaptation). The integration method is based on ActiveX Document Server technology. ActiveX Document is a part of Object Linking and Embedding (OLE) extensible systems object technology, offering new services in existing applications. The standard DICOM 3.0 Part 10 compatible image-format specification Papyrus 3.0 is supported, in addition to standard digitization formats such as TIFF. The visualization functionality of the tool has been enhanced with a fast wavelet transform implementation, which allows for real-time wavelet-based contrast enhancement and denoising operations. Initial use of the tool with mammograms of various breast structures demonstrated its potential for improving the visualization of diagnostic mammographic features. Web adaptation and real-time wavelet processing enhance the potential of the previously reported tool in remote diagnosis and education in mammography.
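
    The wavelet denoising step can be illustrated in miniature. The sketch below applies one level of a 1D Haar transform and soft-thresholds the detail coefficients; the actual tool works on 2D mammograms with a fast wavelet transform, so this shows only the principle, with an assumed even-length input and an illustrative threshold.

    ```python
    def haar_step(signal):
        # One level of the 1D Haar transform: pairwise averages and details.
        # Assumes len(signal) is even.
        avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
        det = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
        return avg, det

    def soft_threshold(coeffs, t):
        # Shrink detail coefficients toward zero; small (noisy) details vanish.
        return [0.0 if abs(c) <= t else (c - t if c > 0 else c + t) for c in coeffs]

    def denoise(signal, t=0.5):
        avg, det = haar_step(signal)
        det = soft_threshold(det, t)
        # Inverse Haar transform: reconstruct each pair from its average/detail.
        out = []
        for a, d in zip(avg, det):
            out.extend([a + d, a - d])
        return out
    ```

    Small pairwise fluctuations (below the threshold) are smoothed away while larger structure survives; contrast enhancement would instead amplify selected detail coefficients before the inverse transform.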

  19. Gait Adaptability Training Improves Both Postural Stability and Dual-Tasking Ability

    NASA Technical Reports Server (NTRS)

    Brady, Rachel A.; Batson, Crystal D.; Peters, Brian T.; Ploutz-Snyder, Robert J.; Mulavara, Ajitkumar P.; Bloomberg, Jacob J.

    2010-01-01

    After spaceflight, the process of readapting to Earth's gravity commonly presents crewmembers with a variety of locomotor challenges. Our recent work has shown that the ability to adapt to a novel discordant sensorimotor environment can be increased through preflight training, so one focus of our laboratory has been the development of a gait training countermeasure to expedite the return of normal locomotor function after spaceflight. We used a training system comprising a treadmill mounted on a motion base facing a virtual visual scene that provided a variety of sensory challenges. As part of their participation in a larger retention study, 10 healthy adults completed 3 training sessions during which they walked on a treadmill at 1.1 m/s while receiving discordant support-surface and visual manipulations. After a single training session, subjects' stride frequencies improved, and after 2 training sessions their auditory reaction times improved, where improvement was indicated by a return toward baseline values. Interestingly, improvements in reaction time came after stride frequency improvements plateaued. This finding suggests that postural stability was given a higher priority than a competing cognitive task. Further, it demonstrates that improvement in both postural stability and dual-tasking can be achieved with multiple training exposures. We conclude that, with training, individuals become more proficient at walking in discordant sensorimotor conditions and are able to devote more attention to competing tasks.

  20. How color enhances visual memory for natural scenes.

    PubMed

    Spence, Ian; Wong, Patrick; Rusan, Maria; Rastegar, Naghmeh

    2006-01-01

    We offer a framework for understanding how color operates to improve visual memory for images of the natural environment, and we present an extensive data set that quantifies the contribution of color in the encoding and recognition phases. Using a continuous recognition task with colored and monochrome gray-scale images of natural scenes at short exposure durations, we found that color enhances recognition memory by conferring an advantage during encoding and by strengthening the encoding-specificity effect. Furthermore, because the pattern of performance was similar at all exposure durations, and because form and color are processed in different areas of cortex, the results imply that color must be bound as an integral part of the representation at the earliest stages of processing.

  1. Realistic tissue visualization using photoacoustic image

    NASA Astrophysics Data System (ADS)

    Cho, Seonghee; Managuli, Ravi; Jeon, Seungwan; Kim, Jeesu; Kim, Chulhong

    2018-02-01

    Visualization methods are very important in biomedical imaging. As a technology for understanding life, biomedical imaging has the unique advantage of providing highly intuitive information in the image, and this advantage can be greatly enhanced by choosing an appropriate visualization method. This is more complicated for volumetric data. Volume data have the advantage of containing 3D spatial information, but the data themselves cannot directly convey that value: because images are always displayed in 2D space, visualization is the key step that creates the real value of volume data. However, visualizing 3D data requires complicated algorithms and a high computational budget, so specialized algorithms and computational optimization are important issues for volume data. Photoacoustic imaging is a unique imaging modality that can visualize the optical properties of deep tissue. Because the color of an organism is mainly determined by its light-absorbing components, photoacoustic data can provide color information for tissue that is closer to its real color. In this research, we developed realistic tissue visualization using acoustic-resolution photoacoustic volume data. To achieve realistic visualization, we designed a specialized color transfer function that depends on the depth of the tissue from the skin. We used a direct ray casting method and processed color while computing the shader parameters. In the rendering results, we succeeded in obtaining texture similar to real tissue from the photoacoustic data: rays reflected at the surface were visualized in white, and color reflected from deep tissue was visualized in red, like skin tissue. We also implemented the CUDA algorithm in an OpenGL environment for real-time interactive imaging.
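
    A depth-dependent color transfer function of the kind described can be sketched as a linear blend from white at the surface to red at depth. The ramp, the depth scale, and the channel mapping below are illustrative assumptions, not the authors' shader.

    ```python
    def depth_color(intensity, depth_mm, max_depth_mm=10.0):
        """Map a voxel intensity and its depth below the skin to an RGB triple."""
        # Normalized depth: 0 = at the surface, 1 = at or beyond max_depth_mm.
        t = min(max(depth_mm / max_depth_mm, 0.0), 1.0)
        red = intensity                # red carries the full signal at any depth
        green = intensity * (1.0 - t)  # green and blue fade with depth, so the
        blue = intensity * (1.0 - t)   # color shifts from white toward skin-like red
        return (red, green, blue)
    ```

    In a real renderer this mapping would run per sample inside the ray-casting loop, with the depth taken along the ray from the detected skin surface.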

  2. Semantics of the visual environment encoded in parahippocampal cortex

    PubMed Central

    Bonner, Michael F.; Price, Amy Rose; Peelle, Jonathan E.; Grossman, Murray

    2016-01-01

    Semantic representations capture the statistics of experience and store this information in memory. A fundamental component of this memory system is knowledge of the visual environment, including knowledge of objects and their associations. Visual semantic information underlies a range of behaviors, from perceptual categorization to cognitive processes such as language and reasoning. Here we examine the neuroanatomic system that encodes visual semantics. Across three experiments, we found converging evidence indicating that knowledge of verbally mediated visual concepts relies on information encoded in a region of the ventral-medial temporal lobe centered on parahippocampal cortex. In an fMRI study, this region was strongly engaged by the processing of concepts relying on visual knowledge but not by concepts relying on other sensory modalities. In a study of patients with the semantic variant of primary progressive aphasia (semantic dementia), atrophy that encompassed this region was associated with a specific impairment in verbally mediated visual semantic knowledge. Finally, in a structural study of healthy adults from the fMRI experiment, gray matter density in this region related to individual variability in the processing of visual concepts. The anatomic location of these findings aligns with recent work linking the ventral-medial temporal lobe with high-level visual representation, contextual associations, and reasoning through imagination. Together this work suggests a critical role for parahippocampal cortex in linking the visual environment with knowledge systems in the human brain. PMID:26679216

  3. Semantics of the Visual Environment Encoded in Parahippocampal Cortex.

    PubMed

    Bonner, Michael F; Price, Amy Rose; Peelle, Jonathan E; Grossman, Murray

    2016-03-01

    Semantic representations capture the statistics of experience and store this information in memory. A fundamental component of this memory system is knowledge of the visual environment, including knowledge of objects and their associations. Visual semantic information underlies a range of behaviors, from perceptual categorization to cognitive processes such as language and reasoning. Here we examine the neuroanatomic system that encodes visual semantics. Across three experiments, we found converging evidence indicating that knowledge of verbally mediated visual concepts relies on information encoded in a region of the ventral-medial temporal lobe centered on parahippocampal cortex. In an fMRI study, this region was strongly engaged by the processing of concepts relying on visual knowledge but not by concepts relying on other sensory modalities. In a study of patients with the semantic variant of primary progressive aphasia (semantic dementia), atrophy that encompassed this region was associated with a specific impairment in verbally mediated visual semantic knowledge. Finally, in a structural study of healthy adults from the fMRI experiment, gray matter density in this region related to individual variability in the processing of visual concepts. The anatomic location of these findings aligns with recent work linking the ventral-medial temporal lobe with high-level visual representation, contextual associations, and reasoning through imagination. Together, this work suggests a critical role for parahippocampal cortex in linking the visual environment with knowledge systems in the human brain.

  4. Risk factors and visual fatigue of baggage X-ray security screeners: a structural equation modelling analysis.

    PubMed

    Yu, Rui-Feng; Yang, Lin-Dong; Wu, Xin

    2017-05-01

    This study identified the risk factors influencing visual fatigue in baggage X-ray security screeners and estimated the strength of correlations between those factors and visual fatigue using a structural equation modelling approach. Two hundred and five X-ray security screeners participated in a questionnaire survey. The results showed that satisfaction with the VDT's physical features and with the work environment conditions was negatively correlated with the intensity of visual fatigue, whereas job stress and job burnout had direct positive influences. The path coefficient between the image quality of the VDT and visual fatigue was not significant. The total effects of job burnout, job stress, the VDT's physical features and the work environment conditions on visual fatigue were 0.471, 0.469, -0.268 and -0.251 respectively. These findings indicated that both extrinsic factors relating to the VDT and workplace environment and psychological factors including job burnout and job stress should be considered in the workplace design and work organisation of security screening tasks to reduce screeners' visual fatigue. Practitioner Summary: This study identified the risk factors influencing visual fatigue in baggage X-ray security screeners and estimated the strength of correlations between those factors and visual fatigue. The findings are of great importance to the workplace design and work organisation of security screening tasks to reduce screeners' visual fatigue.

  5. Localization Using Visual Odometry and a Single Downward-Pointing Camera

    NASA Technical Reports Server (NTRS)

    Swank, Aaron J.

    2012-01-01

    Stereo imaging is a technique commonly employed for vision-based navigation. For such applications, two images are acquired from different vantage points and then compared using transformations to extract depth information. The technique is commonly used in robotics for obstacle avoidance or for Simultaneous Localization and Mapping (SLAM). Yet the process requires a number of image-processing steps and therefore tends to be CPU-intensive, which limits the real-time data rate and its use in power-limited applications. Evaluated here is a technique where a monocular camera is used for vision-based odometry. In this work, an optical-flow technique with feature recognition is performed to generate odometry measurements. The visual odometry sensor measurements are intended to be used as control inputs or measurements in a sensor-fusion algorithm using low-cost MEMS-based inertial sensors to provide improved localization information. Presented here are visual odometry results which demonstrate the challenges associated with using ground-pointing cameras for visual odometry. The focus is on rover-based robotic applications for localization within GPS-denied environments.
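    The odometry update at the heart of such a downward-camera system can be illustrated with a closed-form estimate of planar camera motion from matched ground features. This is a generic sketch (Kabsch/Procrustes alignment, not the specific algorithm evaluated in this report); feature detection and optical-flow matching are assumed to have already produced the point correspondences.

```python
import numpy as np

def estimate_planar_motion(pts_prev, pts_curr):
    """Estimate the 2D rigid motion (rotation R, translation t) such that
    pts_curr ~ R @ pts_prev + t, via the closed-form Kabsch solution.
    pts_prev, pts_curr: (N, 2) arrays of matched ground-feature positions."""
    mu_p = pts_prev.mean(axis=0)
    mu_c = pts_curr.mean(axis=0)
    H = (pts_prev - mu_p).T @ (pts_curr - mu_c)   # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_c - R @ mu_p
    return R, t
```

    Integrating these per-frame increments gives a dead-reckoned pose, which a sensor-fusion filter can then combine with the MEMS inertial measurements.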

  6. Multi-focused geospatial analysis using probes.

    PubMed

    Butkiewicz, Thomas; Dou, Wenwen; Wartell, Zachary; Ribarsky, William; Chang, Remco

    2008-01-01

    Traditional geospatial information visualizations often present views that restrict the user to a single perspective. When zoomed out, local trends and anomalies become suppressed and lost; when zoomed in for local inspection, spatial awareness and comparison between regions become limited. In our model, coordinated visualizations are integrated within individual probe interfaces, which depict the local data in user-defined regions-of-interest. Our probe concept can be incorporated into a variety of geospatial visualizations to empower users with the ability to observe, coordinate, and compare data across multiple local regions. It is especially useful when dealing with complex simulations or analyses where behavior in various localities differs from other localities and from the system as a whole. We illustrate the effectiveness of our technique over traditional interfaces by incorporating it within three existing geospatial visualization systems: an agent-based social simulation, a census data exploration tool, and a 3D GIS environment for analyzing urban change over time. In each case, the probe-based interaction enhances spatial awareness, improves inspection and comparison capabilities, expands the range of scopes, and facilitates collaboration among multiple users.

  7. Traffic Signs in Complex Visual Environments

    DOT National Transportation Integrated Search

    1982-11-01

    The effects of sign luminance on detection and recognition of traffic control devices is mediated through contrast with the immediate surround. Additionally, complex visual scenes are known to degrade visual performance with targets well above visual...

  8. Bring NASA Scientific Data into GIS

    NASA Astrophysics Data System (ADS)

    Xu, H.

    2016-12-01

    NASA's Earth Observing System (EOS) and many other missions produce data of huge volume in near real time, driving the research and understanding of climate change. Geographic Information System (GIS) technology is used for the management, visualization and analysis of spatial data. Since its inception in the 1960s, GIS has been applied to many fields at the city, state, national, and world scales. People continue to use it today to analyze and visualize trends, patterns, and relationships from massive scientific datasets. There is great interest in both the scientific and GIS communities in improving technologies that can bring scientific data into a GIS environment, where scientific research and analysis can be shared through the GIS platform with the public. Most NASA scientific data are delivered in the Hierarchical Data Format (HDF), a format that is both flexible and powerful. However, this flexibility creates challenges for GIS software support: data stored in HDF formats lack a unified standard and convention across products. This presentation introduces an information model that enables ArcGIS software to ingest NASA scientific data and create a multidimensional raster - univariate and multivariate hypercubes - for scientific visualization and analysis. We will present the framework by which ArcGIS leverages the open-source GDAL (Geospatial Data Abstraction Library) for its raster data access, discuss how we overcame the GDAL drivers' limitations in handling scientific products stored in HDF4 and HDF5 formats, and describe how we improved the modeling of multidimensionality with GDAL. In addition, we will talk about the direction of ArcGIS handling of NASA products and demonstrate how the multidimensional information model can help scientists work with data products such as MODIS, MOPITT and SMAP, as well as many other data products, in a GIS environment.

  9. Motor effects from visually induced disorientation in man.

    DOT National Transportation Integrated Search

    1969-11-01

    The problem of disorientation in a moving optical environment was examined. Egocentric disorientation can be experienced by a pilot if the entire visual environment moves relative to his body without a clue of the objective position of the airplane i...

  10. Jupiter Environment Tool

    NASA Technical Reports Server (NTRS)

    Sturm, Erick J.; Monahue, Kenneth M.; Biehl, James P.; Kokorowski, Michael; Ngalande, Cedrick; Boedeker, Jordan

    2012-01-01

    The Jupiter Environment Tool (JET) is a custom UI plug-in for STK that provides an interface to Jupiter environment models for visualization and analysis. Users can visualize the different magnetic field models of Jupiter through various rendering methods, which are fully integrated within STK's 3D Window. This allows users to take snapshots and make animations of their scenarios with magnetic field visualizations. Analytical data can be accessed in the form of custom vectors. Given these custom vectors, users have access to magnetic field data in custom reports, graphs, access constraints, coverage analysis, and anywhere else vectors are used within STK.

  11. Perception of Graphical Virtual Environments by Blind Users via Sensory Substitution

    PubMed Central

    Maidenbaum, Shachar; Buchs, Galit; Abboud, Sami; Lavi-Rotbain, Ori; Amedi, Amir

    2016-01-01

    Graphical virtual environments are currently far from accessible to blind users as their content is mostly visual. This is especially unfortunate as these environments hold great potential for this population for purposes such as safe orientation, education, and entertainment. Previous tools have increased accessibility but there is still a long way to go. Visual-to-audio Sensory-Substitution-Devices (SSDs) can increase accessibility generically by sonifying on-screen content regardless of the specific environment and offer increased accessibility without the use of expensive dedicated peripherals like electrode/vibrator arrays. Using SSDs virtually utilizes similar skills as when using them in the real world, enabling both training on the device and training on environments virtually before real-world visits. This could enable more complex, standardized and autonomous SSD training and new insights into multisensory interaction and the visually-deprived brain. However, whether congenitally blind users, who have never experienced virtual environments, will be able to use this information for successful perception and interaction within them is currently unclear. We tested this using the EyeMusic SSD, which conveys whole-scene visual information, to perform virtual tasks otherwise impossible without vision. Congenitally blind users had to navigate virtual environments and find doors, differentiate between them based on their features (Experiment 1: task 1) and surroundings (Experiment 1: task 2) and walk through them; these tasks were accomplished with 95% and 97% success rates, respectively. We further explored the reactions of congenitally blind users during their first interaction with a more complex virtual environment than in the previous tasks: walking down a virtual street, recognizing different features of houses and trees, navigating to cross-walks, etc. Users reacted enthusiastically and reported feeling immersed within the environment. They highlighted the potential usefulness of such environments for understanding what visual scenes are supposed to look like and their potential for complex training and suggested many future environments they wished to experience. PMID:26882473
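    The sonification principle behind visual-to-audio SSDs of this kind can be sketched generically: sweep the image left to right, rendering each column as a chord whose component pitches encode vertical position and whose amplitudes encode pixel brightness. The mapping below is an illustrative assumption, not the actual EyeMusic encoding (which, among other things, uses musical scales and conveys color).

```python
import numpy as np

def sonify_image(image, duration=2.0, rate=8000, f_lo=220.0, f_hi=1760.0):
    """Generic visual-to-audio sensory-substitution sketch: the image is
    swept left to right; each row maps to a fixed tone (higher rows ->
    higher pitch) and each pixel's brightness sets that tone's amplitude.
    image: (rows, cols) array of brightness values in [0, 1]."""
    rows, cols = image.shape
    freqs = np.geomspace(f_hi, f_lo, rows)        # top row = highest pitch
    col_len = int(duration * rate / cols)         # samples per column
    t = np.arange(col_len) / rate
    audio = []
    for c in range(cols):
        tones = np.sin(2 * np.pi * freqs[:, None] * t)   # one tone per row
        audio.append((image[:, c][:, None] * tones).mean(axis=0))
    return np.concatenate(audio)
```

    Playing the returned waveform at the given sample rate produces the left-to-right "soundscape" of the scene; a virtual environment only has to feed its rendered frames through such a function to become audible.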

  12. Perception of Graphical Virtual Environments by Blind Users via Sensory Substitution.

    PubMed

    Maidenbaum, Shachar; Buchs, Galit; Abboud, Sami; Lavi-Rotbain, Ori; Amedi, Amir

    2016-01-01

    Graphical virtual environments are currently far from accessible to blind users as their content is mostly visual. This is especially unfortunate as these environments hold great potential for this population for purposes such as safe orientation, education, and entertainment. Previous tools have increased accessibility but there is still a long way to go. Visual-to-audio Sensory-Substitution-Devices (SSDs) can increase accessibility generically by sonifying on-screen content regardless of the specific environment and offer increased accessibility without the use of expensive dedicated peripherals like electrode/vibrator arrays. Using SSDs virtually utilizes similar skills as when using them in the real world, enabling both training on the device and training on environments virtually before real-world visits. This could enable more complex, standardized and autonomous SSD training and new insights into multisensory interaction and the visually-deprived brain. However, whether congenitally blind users, who have never experienced virtual environments, will be able to use this information for successful perception and interaction within them is currently unclear. We tested this using the EyeMusic SSD, which conveys whole-scene visual information, to perform virtual tasks otherwise impossible without vision. Congenitally blind users had to navigate virtual environments and find doors, differentiate between them based on their features (Experiment 1: task 1) and surroundings (Experiment 1: task 2) and walk through them; these tasks were accomplished with 95% and 97% success rates, respectively. We further explored the reactions of congenitally blind users during their first interaction with a more complex virtual environment than in the previous tasks: walking down a virtual street, recognizing different features of houses and trees, navigating to cross-walks, etc. Users reacted enthusiastically and reported feeling immersed within the environment. They highlighted the potential usefulness of such environments for understanding what visual scenes are supposed to look like and their potential for complex training and suggested many future environments they wished to experience.

  13. IN13B-1660: Analytics and Visualization Pipelines for Big Data on the NASA Earth Exchange (NEX) and OpenNEX

    NASA Technical Reports Server (NTRS)

    Chaudhary, Aashish; Votava, Petr; Nemani, Ramakrishna R.; Michaelis, Andrew; Kotfila, Chris

    2016-01-01

    We are developing capabilities for an integrated petabyte-scale Earth science collaborative analysis and visualization environment. The ultimate goal is to deploy this environment within the NASA Earth Exchange (NEX) and OpenNEX in order to enhance existing science data production pipelines in both high-performance computing (HPC) and cloud environments. Bridging of HPC and cloud is a fairly new concept under active research, and this system significantly enhances the ability of the scientific community to accelerate analysis and visualization of Earth science data from NASA missions, model outputs and other sources. We have developed a web-based system that seamlessly interfaces with both high-performance computing (HPC) and cloud environments, providing tools that enable science teams to develop and deploy large-scale analysis, visualization and QA pipelines of both the production process and the data products, and enable sharing results with the community. Our project is developed in several stages, each addressing a separate challenge: workflow integration, parallel execution in either cloud or HPC environments, and big-data analytics and visualization. This work benefits a number of existing and upcoming projects supported by NEX, such as the Web Enabled Landsat Data (WELD) project, where we are developing a new QA pipeline for the 25PB system.

  14. Analytics and Visualization Pipelines for Big ­Data on the NASA Earth Exchange (NEX) and OpenNEX

    NASA Astrophysics Data System (ADS)

    Chaudhary, A.; Votava, P.; Nemani, R. R.; Michaelis, A.; Kotfila, C.

    2016-12-01

    We are developing capabilities for an integrated petabyte-scale Earth science collaborative analysis and visualization environment. The ultimate goal is to deploy this environment within the NASA Earth Exchange (NEX) and OpenNEX in order to enhance existing science data production pipelines in both high-performance computing (HPC) and cloud environments. Bridging of HPC and cloud is a fairly new concept under active research, and this system significantly enhances the ability of the scientific community to accelerate analysis and visualization of Earth science data from NASA missions, model outputs and other sources. We have developed a web-based system that seamlessly interfaces with both high-performance computing (HPC) and cloud environments, providing tools that enable science teams to develop and deploy large-scale analysis, visualization and QA pipelines of both the production process and the data products, and enable sharing results with the community. Our project is developed in several stages, each addressing a separate challenge: workflow integration, parallel execution in either cloud or HPC environments, and big-data analytics and visualization. This work benefits a number of existing and upcoming projects supported by NEX, such as the Web Enabled Landsat Data (WELD) project, where we are developing a new QA pipeline for the 25PB system.

  15. New Media, Evolving Multimodal Literacy Practices and the Potential Impact of Increased Use of the Visual Mode in the Urban Environment on Young Children's Learning

    ERIC Educational Resources Information Center

    Yamada-Rice, Dylan

    2011-01-01

    This article looks at the way in which the changing visual environment affects education at two levels: in communication patterns and research methodologies. The research considers differences in the variance and quantity of types of visual media and their relationship to the written mode in the urban landscapes of Tokyo and London, using…

  16. The "serendipitous brain": Low expectancy and timing uncertainty of conscious events improve awareness of unconscious ones (evidence from the Attentional Blink).

    PubMed

    Lasaponara, Stefano; Dragone, Alessio; Lecce, Francesca; Di Russo, Francesco; Doricchi, Fabrizio

    2015-10-01

    To anticipate upcoming sensory events, the brain picks up and exploits statistical regularities in the sensory environment. However, it is untested whether cumulated predictive knowledge about consciously seen stimuli improves the access to awareness of stimuli that usually go unseen. To explore this issue, we exploited the Attentional Blink (AB) effect, where conscious processing of a first visual target (T1) hinders detection of early following targets (T2). We report that timing uncertainty and low expectancy about the occurrence of consciously seen T2s presented outside the AB period improve detection of early and otherwise often unseen T2s presented inside the AB. Recording of high-resolution Event Related Potentials (ERPs) and the study of their intracranial sources showed that the brain achieves this improvement by initially amplifying and extending the pre-conscious storage of T2s' traces signalled by the N2 wave originating in the extra-striate cortex. This enhancement in the N2 wave is followed by specific changes in the latency and amplitude of later components in the P3 wave (P3a and P3b), signalling access of the sensory trace to the network of parietal and frontal areas modulating conscious processing. These findings show that the interaction between conscious and unconscious processing changes adaptively as a function of the probabilistic properties of the sensory environment, and that the combination of an active attentional state with loose probabilistic and temporal expectancies on forthcoming conscious events favors the emergence to awareness of otherwise unnoticed visual events. This likely provides an insight into the attentional conditions that predispose an active observer to unexpected "serendipitous" findings. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Content-Aware Video Adaptation under Low-Bitrate Constraint

    NASA Astrophysics Data System (ADS)

    Hsiao, Ming-Ho; Chen, Yi-Wen; Chen, Hua-Tsung; Chou, Kuan-Hung; Lee, Suh-Yin

    2007-12-01

    With the development of wireless networks and the improvement of mobile device capabilities, video streaming is more and more widespread in such environments. Under conditions of limited resources and inherent constraints, appropriate video adaptation has become one of the most important and challenging issues in wireless multimedia applications. In this paper, we propose a novel content-aware video adaptation in order to effectively utilize resources and improve visual perceptual quality. First, the attention model is derived by analyzing the characteristics of brightness, location, motion-vector, and energy features in the compressed domain to reduce computational complexity. Then, through the integration of the attention model, the capability of the client device, and a correlational statistic model, attractive regions of video scenes are derived. The information-object-weighted (IOB-weighted) rate-distortion model is used for adjusting the bit allocation. Finally, the video adaptation scheme dynamically adjusts the video bitstream at the frame level and object level. Experimental results validate that the proposed scheme achieves better visual quality effectively and efficiently.
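    The attention-weighted bit-allocation idea can be illustrated with a toy scheme: every region gets a guaranteed floor so nothing degrades to zero quality, and the remaining budget is divided in proportion to each region's attention (saliency) weight. The function and its parameters are hypothetical, a much-simplified stand-in for the IOB-weighted rate-distortion model.

```python
import numpy as np

def allocate_bits(saliency, total_bits, floor_bits=8):
    """Toy attention-weighted bit allocation: each region receives
    floor_bits, and the spare budget is split in proportion to its
    saliency weight. Returns per-region bit budgets summing to total_bits."""
    saliency = np.asarray(saliency, dtype=float)
    spare = total_bits - floor_bits * saliency.size
    weights = saliency / saliency.sum()
    return floor_bits + spare * weights
```

    A real encoder would additionally feed these budgets through a rate-distortion model per region, but the monotone relation (more attention, more bits) is the content-aware core.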

  18. Intelligent Entity Behavior Within Synthetic Environments. Chapter 3

    NASA Technical Reports Server (NTRS)

    Kruk, R. V.; Howells, P. B.; Siksik, D. N.

    2007-01-01

    This paper describes some elements in the development of realistic performance and behavior in the synthetic entities (players) which support Modeling and Simulation (M&S) applications, particularly military training. Modern human-in-the-loop (virtual) training systems incorporate sophisticated synthetic environments, which provide: 1. The operational environment, including, for example, terrain databases; 2. Physical entity parameters which define performance in engineered systems, such as aircraft aerodynamics; 3. Platform/system characteristics such as acoustic, IR and radar signatures; 4. Behavioral entity parameters which define interactive performance, including knowledge/reasoning about terrain and tactics; and 5. Doctrine, which combines knowledge and tactics into behavior rule sets. The resolution and fidelity of these model/database elements can vary substantially, but as synthetic environments are designed to be composable, attributes may easily be added (e.g., adding a new radar to an aircraft) or enhanced (e.g., amending or replacing missile seeker-head/Electronic Counter Measures (ECM) models to improve the realism of their interaction). To a human in the loop with synthetic entities, their observed veridicality is assessed via engagement responses (e.g., the effect of countermeasures upon a closing missile), as seen on system displays, and via visual (image) behavior. The realism of visual models in a simulation (level of detail as well as motion fidelity) remains a challenge in the realistic articulation of elements such as vehicle antennae and turrets or, with human figures, posture, joint articulation, and response to uneven ground. Currently the adequacy of visual representation depends more upon the quality and resolution of the physical models driving those entities than on graphics processing power per se. Synthetic entities in M&S applications have traditionally represented engineered systems (e.g., aircraft) with human-in-the-loop performance characteristics (e.g., visual acuity) included in the system behavioral specification. As well, performance-affecting human parameters such as experience level, fatigue and stress are coming into wider use (via AI approaches) to incorporate more uncertainty in response type as well as in performance (e.g., where an opposing entity might go and what it might do, as well as how well it might perform).

  19. Adaptive proxy map server for efficient vector spatial data rendering

    NASA Astrophysics Data System (ADS)

    Sayar, Ahmet

    2013-01-01

    The rapid transmission of vector map data over the Internet is becoming a bottleneck of spatial data delivery and visualization in web-based environments because of increasing data volumes and limited network bandwidth. In order to improve both the transmission and rendering performance of vector spatial data over the Internet, we propose a proxy map server enabling parallel vector-data fetching as well as caching to improve the performance of web-based map servers in a dynamic environment. The proxy map server is placed seamlessly anywhere between the client and the final services, intercepting users' requests. It employs an efficient parallelization technique based on spatial proximity and data density in case distributed replicas exist for the same spatial data. The effectiveness of the proposed technique is demonstrated at the end of the article by the application of creating map images enriched with earthquake seismic data records.
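    The cache-plus-parallel-fetch behavior of such a proxy can be sketched minimally (this is an illustration, not the article's implementation): cached tiles are served directly, and misses are fetched concurrently from the backend through a caller-supplied fetch function.

```python
from concurrent.futures import ThreadPoolExecutor

class TileProxy:
    """Minimal sketch of a caching proxy between map clients and backend
    map servers: cached tiles are returned directly, cache misses are
    fetched in parallel via the (caller-supplied) fetch callable."""
    def __init__(self, fetch, workers=4):
        self.fetch = fetch          # fetch(key) -> tile bytes
        self.cache = {}
        self.pool = ThreadPoolExecutor(max_workers=workers)

    def get_tiles(self, keys):
        missing = [k for k in keys if k not in self.cache]
        # fetch all misses concurrently, then populate the cache
        for k, tile in zip(missing, self.pool.map(self.fetch, missing)):
            self.cache[k] = tile
        return {k: self.cache[k] for k in keys}
```

    The article's proxy additionally partitions requests across replicas by spatial proximity and data density; here the partitioning policy is collapsed into the fetch function for brevity.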

  20. Visualization and Tracking of Parallel CFD Simulations

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi; Kremenetsky, Mark

    1995-01-01

    We describe a system for interactive visualization and tracking of a 3-D unsteady computational fluid dynamics (CFD) simulation on a parallel computer. CM/AVS, a distributed, parallel implementation of a visualization environment (AVS), runs on the CM-5 parallel supercomputer. A CFD solver is run as a CM/AVS module on the CM-5. Data communication between the solver, other parallel visualization modules, and a graphics workstation running AVS is handled by CM/AVS. Partitioning of the visualization task between the CM-5 and the workstation can be done interactively in the visual programming environment provided by AVS. Flow-solver parameters can also be altered through programmable interactive widgets. This system partially removes the requirement of storing large solution files at frequent time steps, a characteristic of the traditional 'simulate → store → visualize' post-processing approach.

  1. Individual Differences in a Spatial-Semantic Virtual Environment.

    ERIC Educational Resources Information Center

    Chen, Chaomei

    2000-01-01

    Presents two empirical case studies concerning the role of individual differences in searching through a spatial-semantic virtual environment. Discusses information visualization in information systems; cognitive factors, including associative memory, spatial ability, and visual memory; user satisfaction; and cognitive abilities and search…

  2. Vision

    NASA Technical Reports Server (NTRS)

    Taylor, J. H.

    1973-01-01

    Some data on human vision, important in present and projected space activities, are presented. The visual environment, visual performance, and the structure of the visual system are also considered. Visual perception during stress is included.

  3. Divergence in cryptic leaf colour provides local camouflage in an alpine plant.

    PubMed

    Niu, Yang; Chen, Zhe; Stevens, Martin; Sun, Hang

    2017-10-11

    The efficacy of camouflage through background matching is highly environment-dependent, often resulting in intraspecific colour divergence in animals to optimize crypsis in different visual environments. This phenomenon is largely unexplored in plants, although several lines of evidence suggest they do use crypsis to avoid damage by herbivores. Using Corydalis hemidicentra, an alpine plant with cryptic leaf colour, we quantified background matching between leaves and surrounding rocks in five populations based on an approximate model of their butterfly enemy's colour perception. We also investigated the pigment basis of leaf colour variation and the association between feeding risk and camouflage efficacy. We show that plants exhibit remarkable colour divergence between populations, consistent with differences in rock appearances. Leaf colour varies because of a different quantitative combination of two basic pigments, chlorophyll and anthocyanin, plus different air spaces. As expected, leaf colours are better matched against their native backgrounds than against foreign ones in the eyes of the butterfly. Furthermore, improved crypsis tends to be associated with a higher level of feeding risk. These results suggest that divergent cryptic leaf colour may have evolved to optimize local camouflage in various visual environments, extending our understanding of colour evolution and intraspecific phenotype diversity in plants. © 2017 The Author(s).

  4. Broad-based visual benefits from training with an integrated perceptual-learning video game.

    PubMed

    Deveau, Jenni; Lovcik, Gary; Seitz, Aaron R

    2014-06-01

    Perception is the window through which we understand all information about our environment, and therefore deficits in perception due to disease, injury, stroke or aging can have significant negative impacts on individuals' lives. Research in the field of perceptual learning has demonstrated that vision can be improved in both normally seeing and visually impaired individuals; however, a limitation of most perceptual-learning approaches is their emphasis on isolating particular mechanisms. In the current study, we adopted an integrative approach where the goal is not to achieve highly specific learning but instead to achieve general improvements to vision. We combined multiple perceptual-learning approaches that have individually contributed to increasing the speed, magnitude and generality of learning into a perceptual-learning-based video game. Our results demonstrate broad-based benefits to vision in a healthy adult population. Transfer from the game includes improvements in acuity (measured with self-paced standard eye charts), improvement along the full contrast sensitivity function, and improvements in peripheral acuity and contrast thresholds. This type of custom video-game framework, built up from psychophysical approaches, takes advantage of the benefits found from video-game training while maintaining a tight link to psychophysical designs that enable understanding of the mechanisms of perceptual learning, and it has great potential both as a scientific tool and as a therapy to help improve vision. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Improving the Organisational Effectiveness of Coalition Operations (Amelioration de l’efficacite structurelle des operations en coalition)

    DTIC Science & Technology

    2012-11-01

    peak capacity 2. Prosthesis or visual aids 3. Adaptation 4. Selection Motives 1. Assessment of people's motives to work 2. Recruitment of people to...after each rotation. In addition, feelings of isolation, frustration, and deprivation of a group identity [35] or difficulties in adopting new...people feel uncomfortable by feedback". Finally, trust may be a sensitive issue in a multi-national environment, and can cause dilemma situations. In

  6. Working memory training improves visual short-term memory capacity.

    PubMed

    Schwarb, Hillary; Nail, Jayde; Schumacher, Eric H

    2016-01-01

    Since antiquity, philosophers, theologians, and scientists have been interested in human memory. However, researchers today are still working to understand its capabilities, boundaries, and architecture. While the storage capabilities of long-term memory are seemingly unlimited (Bahrick, J Exp Psychol 113:1-2, 1984), working memory, or the ability to maintain and manipulate information held in memory, seems to have stringent capacity limits (e.g., Cowan, Behav Brain Sci 24:87-185, 2001). Individual differences, however, do exist, and these differences can often predict performance on a wide variety of tasks (cf. Engle, What is working-memory capacity? 297-314, 2001). Recently, researchers have promoted the enticing possibility that simple behavioral training can expand the limits of working memory, which may in turn lead to improvements in other cognitive processes as well (cf. Morrison and Chein, Psychol Bull Rev 18:46-60, 2011). However, initial investigations across a wide variety of cognitive functions have produced mixed results regarding the transferability of training-related improvements. Across two experiments, the present research focuses on the benefit of working memory training on visual short-term memory capacity, a cognitive process that has received little attention in the training literature. Data reveal training-related improvement of global measures of visual short-term memory as well as of measures of the independent sub-processes that contribute to capacity (Awh et al., Psychol Sci 18(7):622-628, 2007). These results suggest that the ability to inhibit irrelevant information within and between trials is enhanced via n-back training, allowing for selective improvement on untrained tasks. Additionally, we highlight a potential limitation of the standard adaptive training procedure and propose a modified design to ensure variability in the training environment.
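    The n-back task used in such training regimens can be sketched as a simple sequence generator: with some probability, each letter repeats the one shown n positions earlier (a target); otherwise a non-matching letter is drawn. The parameters and alphabet are illustrative, and the adaptive difficulty adjustment the authors discuss is omitted.

```python
import random

def nback_stream(n, length, alphabet="ABCDEFGH", p_target=0.3, seed=0):
    """Generate an n-back training sequence. Returns (stream, targets):
    targets[i] is True exactly when stream[i] repeats stream[i - n]."""
    rng = random.Random(seed)
    stream, targets = [], []
    for i in range(length):
        if i >= n and rng.random() < p_target:
            stream.append(stream[i - n])          # planted target
            targets.append(True)
        else:
            # draw any letter, excluding the one that would form a target
            choices = [c for c in alphabet if i < n or c != stream[i - n]]
            stream.append(rng.choice(choices))
            targets.append(False)
    return stream, targets
```

    An adaptive trainer would raise n after high-accuracy blocks and lower it after poor ones; the modified design proposed above would additionally vary the stimulus environment across sessions.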

  7. Effect of Developmental Binocular Vision Abnormalities on Visual Vertigo Symptoms and Treatment Outcome.

    PubMed

    Pavlou, Marousa; Acheson, James; Nicolaou, Despina; Fraser, Clare L; Bronstein, Adolfo M; Davies, Rosalyn A

    2015-10-01

    Customized vestibular rehabilitation incorporating optokinetic (OK) stimulation improves visual vertigo (VV) symptoms; however, the degree of improvement varies among individuals. Binocular vision abnormalities (misalignment of the ocular axes, ie, strabismus) may be a potential risk factor. This study aimed to investigate the influence of binocular vision abnormalities on VV symptoms and treatment outcome. Sixty subjects with refractory peripheral vestibular symptoms, recruited for participation in an 8-week customized program incorporating OK training via a full-field visual environment rotator or video display, supervised or unsupervised, were invited to undergo an orthoptic assessment. Treatment response was assessed at baseline and at 8 weeks with dynamic posturography, the Functional Gait Assessment (FGA), and questionnaires for symptoms, symptom triggers, and psychological state. As no significant effect of OK training type was noted for any variable, data were combined and new groups were identified on the basis of the absence or presence of a binocular vision abnormality. Thirty-four of the 60 subjects consented to the orthoptic assessment; 8 of these had binocular vision abnormalities, and 30 completed both the binocular function assessment and the vestibular rehabilitation program. No significant between-group differences were noted at baseline. The only significant between-group difference was observed for pre-/post-treatment change in VV symptoms (P = 0.01), with significant improvements noted only for the group without binocular vision abnormalities (P < 0.0005). Common vestibular symptoms, posturography, and the FGA improved significantly for both groups (P < 0.05). Binocular vision abnormalities may affect VV symptom improvement. These findings may have important implications for the management of subjects with refractory vestibular symptoms. A Video Abstract is available for insights from the authors regarding clinical implications of the study findings (see Video, Supplemental Digital Content 1, http://links.lww.com/JNPT/A115).

  8. Comparison of a Visual and Head Tactile Display for Soldier Navigation

    DTIC Science & Technology

    2013-12-01

    environments for nuclear power plant operators, air traffic controllers, and pilots are information intensive. These environments usually involve the indirect...queue, correcting aircraft conflicts, giving instruction, clearance, and advice to pilots, and assigning aircraft to other work queues and airports...these dynamic, complex, and multitask environments (1) collect and integrate a plethora of visual information into decisions that are critical for

  9. The VIPER project (Visualization Integration Platform for Exploration Research): a biologically inspired autonomous reconfigurable robotic platform for diverse unstructured environments

    NASA Astrophysics Data System (ADS)

    Schubert, Oliver J.; Tolle, Charles R.

    2004-09-01

    Over the last decade the world has seen numerous autonomous vehicle programs. Wheel and track designs are the basis for many of these vehicles, primarily for four main reasons: a vast preexisting knowledge base for these designs, the energy efficiency of available power sources, the scalability of actuators, and the lack of control systems technologies for handling alternative, highly complex distributed systems. Though large efforts seek to improve the mobility of these vehicles, many limitations still exist for these systems within unstructured environments, e.g. limited mobility within industrial and nuclear accident sites where existing plant configurations have been extensively changed. These unstructured operational environments include missions for exploration, reconnaissance, and emergency recovery of objects within reconfigured or collapsed structures, e.g. bombed buildings. More importantly, these environments present a clear and present danger for direct human interaction during the initial phases of recovery operations. Clearly, the current classes of autonomous vehicles are incapable of performing in these environments, so the next generation of designs must include highly reconfigurable and flexible autonomous robotic platforms. This new breed of autonomous vehicles will be both highly flexible and environmentally adaptable. Presented in this paper is one of the most successful designs from nature, the snake-eel-worm (SEW). This design implements shape memory alloy (SMA) actuators, which allow the robotic SEW design to scale from sub-micron to heavy industrial implementations without the major conceptual redesigns required in traditional hydraulic, pneumatic, or motor-driven systems. Autonomous vehicles based on the SEW design possess the ability to move easily between air-based and fluid-based environments with limited or no reconfiguration. A SEW-based vehicle not only achieves vastly improved maneuverability within a highly unstructured environment but also gains robotic manipulation abilities, normally relegated to secondary add-ons within existing vehicles, all within one small, condensed package. The prototype design presented includes a Beowulf-style computing system for advanced guidance calculations and visualization computations. All of the design and implementation pertaining to the SEW robot discussed in this paper is the product of a student team under the summer fellowship program at the DOE's INEEL.

  10. Advanced engineering environment collaboration project.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lamph, Jane Ann; Pomplun, Alan R.; Kiba, Grant W.

    2008-12-01

    The Advanced Engineering Environment (AEE) is a model for an engineering design and communications system that will enhance project collaboration throughout the nuclear weapons complex (NWC). Sandia National Laboratories and Parametric Technology Corporation (PTC) worked together on a prototype project to evaluate the suitability of a portion of PTC's Windchill 9.0 suite of data management, design and collaboration tools as the basis for an AEE. The AEE project team implemented Windchill 9.0 development servers in both classified and unclassified domains and used them to test and evaluate the Windchill tool suite relative to the needs of the NWC using weapons project use cases. A primary deliverable was the development of a new real time collaborative desktop design and engineering process using PDMLink (data management tool), Pro/Engineer (mechanical computer aided design tool) and ProductView Lite (visualization tool). Additional project activities included evaluations of PTC's electrical computer aided design, visualization, and engineering calculations applications. This report documents the AEE project work to share information and lessons learned with other NWC sites. It also provides PTC with recommendations for improving their products for NWC applications.

  11. Ergonomics solution for crossing collisions based on field assessment of visual environment at urban intersections in Japan.

    PubMed

    Mori, Midori; Horino, Sadao; Kitajima, Sou; Ueyama, Masaru; Ebara, Takeshi; Itani, Toru

    2008-11-01

    This paper aims to quantitatively assess the actual visual environment of uncontrolled urban downtown intersections in Japan in relation to frequently occurring crossing collisions and to discuss safety countermeasures for them. In Field Study 1, dealing with direct visibility, our ultra-wide-angle photograph analysis revealed that the right/left-ward visible range at most of the 11 intersections was insufficient to check safety, and that the quality of direct visibility was closely associated with crossing collisions; countermeasures to reduce blind areas were determined to be a top priority. In Field Study 2, dealing with indirect visibility, more than half of the 25 traffic convex mirrors had marked shortcomings for preventive safety, and ergonomics guidelines ensuring indirect visibility were proposed for installing traffic convex mirrors. Low-cost, low-technology countermeasures that give drivers clear, sufficient mirror images of crucial information, in accordance with these ergonomics guidelines, are highly recommended. Crossing collisions could be prevented by improving poor direct and indirect visibility.

  12. Real-time processing of dual band HD video for maintaining operational effectiveness in degraded visual environments

    NASA Astrophysics Data System (ADS)

    Parker, Steve C. J.; Hickman, Duncan L.; Smith, Moira I.

    2015-05-01

    Effective reconnaissance, surveillance, and situational awareness using dual-band sensor systems require the extraction, enhancement and fusion of salient features, with the processed video presented to the user in an ergonomic and interpretable manner. HALO™ is designed to meet these requirements and provides an affordable, real-time, and low-latency image fusion solution on a low size, weight and power (SWAP) platform. The system has been progressively refined through field trials to increase its operating envelope and robustness. The result is a video processor that improves detection, recognition and identification (DRI) performance, whilst lowering operator fatigue and reaction times in complex and highly dynamic situations. This paper compares the performance of HALO™, both qualitatively and quantitatively, with conventional blended fusion for operation in degraded visual environments (DVEs), such as those experienced during ground and air-based operations. Although image blending provides a simple fusion solution, which explains its common adoption, the results presented demonstrate that its performance is poor compared to the HALO™ fusion scheme in DVE scenarios.
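
    The "conventional blended fusion" baseline that HALO™ is compared against is simply a per-pixel weighted average of the co-registered bands. A minimal sketch (the function name, fixed weight, and 8-bit frame format are assumptions; HALO's own feature-based scheme is proprietary and not shown):

    ```python
    import numpy as np

    def blend_fusion(visible, ir, alpha=0.5):
        """Per-pixel weighted average of co-registered visible and IR
        frames -- the conventional blended-fusion baseline, not HALO's
        feature-extraction scheme."""
        fused = alpha * visible.astype(np.float32) + (1.0 - alpha) * ir.astype(np.float32)
        return np.clip(fused, 0, 255).astype(np.uint8)
    ```

    Because blending applies the same weight everywhere, salient detail present in only one band is diluted by the other; that is precisely the weakness a salient-feature fusion scheme targets in DVE scenarios.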

  13. Robust visual tracking via multiscale deep sparse networks

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Hou, Zhiqiang; Yu, Wangsheng; Xue, Yang; Jin, Zefenfen; Dai, Bo

    2017-04-01

    In visual tracking, deep learning with offline pretraining can extract intrinsic and robust features and has achieved significant success in resolving tracking drift in complicated environments. However, offline pretraining requires numerous auxiliary training datasets and is considerably time-consuming for tracking tasks. To solve these problems, a multiscale sparse networks-based tracker (MSNT) under the particle filter framework is proposed. Based on stacked sparse autoencoders and rectified linear units, the tracker has a flexible and adjustable architecture without an offline pretraining process and effectively exploits robust and powerful features through online training on a limited amount of labeled data. Meanwhile, the tracker builds four deep sparse networks of different scales according to the target's profile type. During tracking, the tracker adaptively selects the matched tracking network in accordance with the initial target's profile type, preserving inherent structural information more efficiently than single-scale networks. Additionally, a corresponding update strategy is proposed to improve the robustness of the tracker. Extensive experimental results on a large-scale benchmark dataset show that the proposed method performs favorably against state-of-the-art methods in challenging environments.
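
    One layer of the kind of network the abstract describes -- a sparse autoencoder with rectified linear units, trained online rather than pretrained offline -- can be sketched as follows. The dimensions, learning rate, and L1 sparsity weight are illustrative assumptions, not the MSNT settings:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def relu(x):
        return np.maximum(0.0, x)

    class SparseAutoencoder:
        """One layer of a stacked sparse autoencoder with ReLU units
        (hypothetical minimal sketch, not the paper's architecture)."""

        def __init__(self, n_in, n_hidden, sparsity_weight=1e-3, lr=1e-2):
            self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_in))
            self.b1 = np.zeros(n_hidden)
            self.W2 = rng.normal(0.0, 0.1, (n_in, n_hidden))
            self.b2 = np.zeros(n_in)
            self.lam = sparsity_weight
            self.lr = lr

        def encode(self, x):
            return relu(self.W1 @ x + self.b1)

        def train_step(self, x):
            """One online gradient step on 0.5*||x_hat - x||^2 + lam*||h||_1."""
            h = self.encode(x)
            x_hat = self.W2 @ h + self.b2
            err = x_hat - x                     # reconstruction error
            dW2 = np.outer(err, h)
            db2 = err
            dh = self.W2.T @ err + self.lam * np.sign(h)
            dpre = dh * (h > 0)                 # ReLU gate
            dW1 = np.outer(dpre, x)
            db1 = dpre
            for p, g in ((self.W1, dW1), (self.b1, db1),
                         (self.W2, dW2), (self.b2, db2)):
                p -= self.lr * g
            return 0.5 * float(err @ err) + self.lam * float(np.abs(h).sum())

    # Online training on a small unlabeled sample, no offline pretraining.
    ae = SparseAutoencoder(n_in=16, n_hidden=8)
    data = rng.normal(size=(200, 16))
    losses = [np.mean([ae.train_step(x) for x in data]) for _ in range(30)]
    ```

    The multiscale element of MSNT would instantiate several such networks with input patches of different aspect ratios and pick one per target profile; the sketch above shows only the building block.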

  14. Visualizer: 3D Gridded Data Visualization Software for Geoscience Education and Research

    NASA Astrophysics Data System (ADS)

    Harwood, C.; Billen, M. I.; Kreylos, O.; Jadamec, M.; Sumner, D. Y.; Kellogg, L. H.; Hamann, B.

    2008-12-01

    In both research and education, learning is an interactive and iterative process of exploring and analyzing data or model results. However, visualization software often presents challenges on the path to learning because it assumes the user already knows the locations and types of features of interest, instead of enabling flexible and intuitive examination of results. We present examples of research and teaching using the software Visualizer, specifically designed to create an effective and intuitive environment for interactive, scientific analysis of 3D gridded data. Visualizer runs in a range of 3D virtual reality environments (e.g., GeoWall, ImmersaDesk, or CAVE), but also provides a similar level of real-time interactivity on a desktop computer. When using Visualizer in a 3D-enabled environment, the software allows the user to interact with the data images as real objects, grabbing, rotating or walking around the data to gain insight and perspective. On the desktop, simple features, such as a set of cross-bars marking the plane of the screen, provide extra 3D spatial cues that allow the user to more quickly understand geometric relationships within the data. This platform portability allows the user to more easily integrate research results into classroom demonstrations and exercises, while the interactivity provides an engaging environment for self-directed and inquiry-based learning by students. Visualizer software is freely available for download (www.keckcaves.org) and runs on Mac OSX and Linux platforms.

  15. Proximal versus distal cue utilization in spatial navigation: the role of visual acuity?

    PubMed

    Carman, Heidi M; Mactutus, Charles F

    2002-09-01

    Proximal versus distal cue use in the Morris water maze is a widely accepted strategy for the dissociation of various problems affecting spatial navigation in rats such as aging, head trauma, lesions, and pharmacological or hormonal agents. Of the limited number of ontogenetic rat studies conducted, the majority have approached the problem of preweanling spatial navigation through a similar proximal-distal dissociation. An implicit assumption among all of these studies has been that the animal's visual system is sufficient to permit robust spatial navigation. We challenged this assumption and have addressed the role of visual acuity in spatial navigation in the preweanling Fischer 344-N rat by training animals to locate a visible (proximal) or hidden (distal) platform using double or null extramaze cues within the testing environment. All pups demonstrated improved performance across training, but animals presented with a visible platform, regardless of extramaze cues, simultaneously reached asymptotic performance levels; animals presented with a hidden platform, dependent upon location of extramaze cues, differentially reached asymptotic performance levels. Probe trial performance, defined by quadrant time and platform crossings, revealed that distal-double-cue pups demonstrated spatial navigational ability superior to that of the remaining groups. These results suggest that a pup's ability to spatially navigate a hidden platform is dependent on not only its response repertoire and task parameters, but also its visual acuity, as determined by the extramaze cue location within the testing environment. The standard hidden versus visible platform dissociation may not be a satisfactory strategy for the control of potential sensory deficits.

  16. Multi-modal virtual environment research at Armstrong Laboratory

    NASA Technical Reports Server (NTRS)

    Eggleston, Robert G.

    1995-01-01

    One mission of the Paul M. Fitts Human Engineering Division of Armstrong Laboratory is to improve the user interface for complex systems through user-centered exploratory development and research activities. In support of this goal, many current projects attempt to advance and exploit user-interface concepts made possible by virtual reality (VR) technologies. Virtual environments may be used as a general purpose interface medium, an alternative display/control method, a data visualization and analysis tool, or a graphically based performance assessment tool. An overview is given of research projects within the division on prototype interface hardware/software development, integrated interface concept development, interface design and evaluation tool development, and user and mission performance evaluation tool development.

  17. Predictive Measures of Locomotor Performance on an Unstable Walking Surface

    NASA Technical Reports Server (NTRS)

    Bloomberg, J. J.; Peters, B. T.; Mulavara, A. P.; Caldwell, E. E.; Batson, C. D.; De Dios, Y. E.; Gadd, N. E.; Goel, R.; Wood, S. J.; Cohen, H. S.

    2016-01-01

    Locomotion requires integration of visual, vestibular, and somatosensory information to produce the appropriate motor output to control movement. The degree to which these sensory inputs are weighted and reorganized in discordant sensory environments varies by individual and may be predictive of the ability to adapt to novel environments. The goals of this project are to: 1) develop a set of predictive measures capable of identifying individual differences in sensorimotor adaptability, and 2) use this information to inform the design of training countermeasures that enhance the ability of astronauts to adapt to gravitational transitions, improving balance and locomotor performance after a Mars landing and enhancing egress capability after landing on Earth.

  18. Mapping students' ideas to understand learning in a collaborative programming environment

    NASA Astrophysics Data System (ADS)

    Harlow, Danielle Boyd; Leak, Anne Emerson

    2014-07-01

    Recent studies in learning programming have largely focused on high school and college students; less is known about how young children learn to program. From video data of 20 students using a graphical programming interface, we identified ideas that were shared and evolved through an elementary school classroom. In mapping these ideas and their resulting changes in programs and outputs, we were able to identify the contextual features which contributed to how ideas moved through the classroom as students learned. We suggest this process of idea mapping in visual programming environments as a viable method for understanding collaborative, constructivist learning as well as a context under which experiences can be developed to improve student learning.

  19. Non-lane-discipline-based car-following model under honk environment

    NASA Astrophysics Data System (ADS)

    Rong, Ying; Wen, Huiying

    2018-04-01

    This study proposes a non-lane-discipline-based car-following model that jointly considers drivers' visual angles and their timid/aggressive characteristics under a honk environment. We first derived the neutral stability condition using linear stability theory, which showed that the parameters related to visual angles and to driving characteristics under the honk environment all have a significant impact on the stability of non-lane-discipline traffic flow. To better understand the underlying mechanism, we further analyzed how each parameter affects the traffic flow, and how visual-angle information interacts with the other parameters to influence non-lane-discipline traffic flow under the honk environment. The results showed that driving characteristics and the honk effect all interact with the visual-angle factor, so the effect of the visual angle cannot be reduced to simply enlarging or shrinking the stable region, as in existing studies. Finally, we verified the proposed model through numerical simulation under periodic boundary conditions; the simulation results agree well with the theoretical findings.
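
    The family of models invoked here can be sketched in generic form (an illustrative visual-angle optimal-velocity formulation; the paper's actual equation, including its honk and timid/aggressive terms, is not reproduced):

    ```latex
    % Illustrative sketch only -- not the paper's exact model.
    % Vehicle n follows its leader at headway \Delta x_n(t);
    % w is the apparent width of the leading vehicle.
    \theta_n(t) = 2\arctan\!\left(\frac{w}{2\,\Delta x_n(t)}\right),
    \qquad
    \frac{\mathrm{d}v_n(t)}{\mathrm{d}t}
      = a\!\left[V\!\bigl(\theta_n(t)\bigr) - v_n(t)\right],
    ```

    where $V$ is an optimal-velocity function that decreases as the visual angle $\theta_n$ grows (a larger angle signals a closer leader) and $a$ is the driver's sensitivity. Linear stability analysis perturbs the uniform-flow steady state of such a model and yields a neutral stability condition relating $a$ to the slope of $V$, which is the kind of condition the authors derive.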

  20. Simulation Evaluation of Equivalent Vision Technologies for Aerospace Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Williams, Steven P.; Wilz, Susan J.; Arthur, Jarvis J.

    2009-01-01

    A fixed-based simulation experiment was conducted in NASA Langley Research Center's Integration Flight Deck simulator to investigate enabling technologies for equivalent visual operations (EVO) in the emerging Next Generation Air Transportation System operating environment. EVO implies the capability to achieve or even improve on the safety of current-day Visual Flight Rules (VFR) operations, maintain the operational tempos of VFR, and perhaps even retain VFR procedures - all independent of the actual weather and visibility conditions. Twenty-four air transport-rated pilots evaluated the use of Synthetic/Enhanced Vision Systems (S/EVS) and eXternal Vision Systems (XVS) technologies as enabling technologies for future all-weather operations. The experimental objectives were to determine the feasibility of XVS/SVS/EVS to provide for all-weather (visibility) landing capability without the need (or ability) for a visual approach segment and to determine the interaction of XVS/EVS and peripheral vision cues for terminal area and surface operations. Another key element of the testing investigated the pilots' awareness of and reaction to non-normal events (i.e., failure conditions) that were unexpectedly introduced into the experiment. These non-normal runs served as critical determinants in the underlying safety of all-weather operations. Experimental data from this test are cast into performance-based approach and landing standards which might establish a basis for future all-weather landing operations. Glideslope tracking performance appears to have improved with the elimination of the approach visual segment. This improvement can most likely be attributed to the fact that the pilots did not have to simultaneously perform glideslope corrections and find required visual landing references in order to continue a landing. Lateral tracking performance was excellent regardless of the display concept being evaluated or whether or not there were peripheral cues in the side window. Although workload ratings were significantly less when peripheral cues were present compared to when there were none, these differences appear to be operationally inconsequential. Larger display concepts tested in this experiment showed significant situation awareness (SA) improvements and workload reductions compared to smaller display concepts. With a fixed display size, a color display was more influential in SA and workload ratings than a collimated display.

  1. Bedmap2; Mapping, visualizing and communicating the Antarctic sub-glacial environment.

    NASA Astrophysics Data System (ADS)

    Fretwell, Peter; Pritchard, Hamish

    2013-04-01

    The Bedmap2 project has been a large cooperative effort to compile, model, map and visualize the ice-rock interface beneath the Antarctic ice sheet. Here we present the final output of that project: the Bedmap2 printed map. The map is an A1, double-sided print showing 2D and 3D visualizations of the dataset. It includes scientific interpretations, cross sections and comparisons with other areas. Paper copies of the colour double-sided map will be freely distributed at this session.

  2. Experimenter's Laboratory for Visualized Interactive Science

    NASA Technical Reports Server (NTRS)

    Hansen, Elaine R.; Rodier, Daniel R.; Klemp, Marjorie K.

    1994-01-01

    ELVIS (Experimenter's Laboratory for Visualized Interactive Science) is an interactive visualization environment that enables scientists, students, and educators to visualize and analyze large, complex, and diverse sets of scientific data. It accomplishes this by presenting the data sets as 2-D, 3-D, color, stereo, and graphic images with movable and multiple light sources combined with displays of solid-surface, contours, wire-frame, and transparency. By simultaneously rendering diverse data sets acquired from multiple sources, formats, and resolutions and by interacting with the data through an intuitive, direct-manipulation interface, ELVIS provides an interactive and responsive environment for exploratory data analysis.

  3. Perceptual learning in children with visual impairment improves near visual acuity.

    PubMed

    Huurneman, Bianca; Boonstra, F Nienke; Cox, Ralf F A; van Rens, Ger; Cillessen, Antonius H N

    2013-09-17

    This study investigated whether visual perceptual learning can improve near visual acuity and reduce foveal crowding effects in four- to nine-year-old children with visual impairment. Participants were 45 children with visual impairment and 29 children with normal vision. Children with visual impairment were divided into three groups: a magnifier group (n = 12), a crowded perceptual learning group (n = 18), and an uncrowded perceptual learning group (n = 15). Children with normal vision were also divided into three groups, but were measured only at baseline. Dependent variables were single near visual acuity (NVA), crowded NVA, LH line 50% crowding NVA, number of trials, accuracy, performance time, number of small errors, and number of large errors. Children with visual impairment trained for six weeks, twice per week, for 30 minutes (12 training sessions). After training, children showed significant improvement of NVA in addition to specific improvements on the training task. The crowded perceptual learning group showed the largest acuity improvements (1.7 logMAR lines on the crowded chart, P < 0.001). Only the children in the crowded perceptual learning group showed improvements on all NVA charts. Children with visual impairment benefit from perceptual training. While task-specific improvements were observed in all training groups, transfer to crowded NVA was largest in the crowded perceptual learning group. To our knowledge, this is the first study to provide evidence for the improvement of NVA by perceptual learning in children with visual impairment (http://www.trialregister.nl number, NTR2537).

  4. Understanding Immersivity: Image Generation and Transformation Processes in 3D Immersive Environments

    PubMed Central

    Kozhevnikov, Maria; Dhond, Rupali P.

    2012-01-01

    Most research on three-dimensional (3D) visual-spatial processing has been conducted using traditional non-immersive 2D displays. Here we investigated how individuals generate and transform mental images within 3D immersive (3DI) virtual environments, in which the viewers perceive themselves as being surrounded by a 3D world. In Experiment 1, we compared participants’ performance on the Shepard and Metzler (1971) mental rotation (MR) task across three types of visual presentation environments: traditional 2D non-immersive (2DNI), 3D non-immersive (3DNI; anaglyphic glasses), and 3DI (head-mounted display with position and head-orientation tracking). In Experiment 2, we examined how the use of different backgrounds affected MR processes within the 3DI environment. In Experiment 3, we compared electroencephalogram data recorded while participants were mentally rotating visual-spatial images presented in 3DI vs. 2DNI environments. Overall, the findings of the three experiments suggest that visual-spatial processing is different in immersive and non-immersive environments, and that immersive environments may require different image encoding and transformation strategies than the two non-immersive environments. Specifically, in a non-immersive environment, participants may utilize a scene-based frame of reference and allocentric encoding, whereas immersive environments may encourage the use of a viewer-centered frame of reference and egocentric encoding. These findings also suggest that MR performed in laboratory conditions using a traditional 2D computer screen may not reflect spatial processing as it would occur in the real world. PMID:22908003

  5. Effectiveness of Environment-Based Interventions That Address Behavior, Perception, and Falls in People With Alzheimer's Disease and Related Major Neurocognitive Disorders: A Systematic Review.

    PubMed

    Jensen, Lou; Padilla, René

    This systematic review evaluated the effectiveness of environment-based interventions that address behavior, perception, and falls in the home and other settings for people with Alzheimer's disease (AD) and related major neurocognitive disorders (NCDs). Database searches were limited to outcomes studies published in English in peer-reviewed journals between January 2006 and April 2014. A total of 1,854 articles were initially identified, of which 42 met inclusion criteria. Strong evidence indicates that person-centered approaches can improve behavior. Moderate evidence supports noise regulation, environmental design, unobtrusive visual barriers, and environmental relocation strategies to reduce problematic behaviors. Evidence is insufficient for the effectiveness of mealtime ambient music, bright light, proprioceptive input, wander gardens, optical strategies, and sensory devices in improving behavior or reducing wandering and falls. Although evidence supports many environment-based interventions used by occupational therapy practitioners to address behavior, perception, and falls in people with AD and related major NCDs, more studies are needed. Copyright © 2017 by the American Occupational Therapy Association, Inc.

  6. Seeing the Errors You Feel Enhances Locomotor Performance but Not Learning.

    PubMed

    Roemmich, Ryan T; Long, Andrew W; Bastian, Amy J

    2016-10-24

    In human motor learning, it is thought that the more information we have about our errors, the faster we learn. Here, we show that additional error information can lead to improved motor performance without any concomitant improvement in learning. We studied split-belt treadmill walking that drives people to learn a new gait pattern using sensory prediction errors detected by proprioceptive feedback. When we also provided visual error feedback, participants acquired the new walking pattern far more rapidly and showed accelerated restoration of the normal walking pattern during washout. However, when the visual error feedback was removed during either learning or washout, errors reappeared with performance immediately returning to the level expected based on proprioceptive learning alone. These findings support a model with two mechanisms: a dual-rate adaptation process that learns invariantly from sensory prediction error detected by proprioception and a visual-feedback-dependent process that monitors learning and corrects residual errors but shows no learning itself. We show that our voluntary correction model accurately predicted behavior in multiple situations where visual feedback was used to change acquisition of new walking patterns while the underlying learning was unaffected. The computational and behavioral framework proposed here suggests that parallel learning and error correction systems allow us to rapidly satisfy task demands without necessarily committing to learning, as the relative permanence of learning may be inappropriate or inefficient when facing environments that are liable to change. Copyright © 2016 Elsevier Ltd. All rights reserved.
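
    The two-mechanism account can be made concrete with a small simulation: a standard dual-rate state-space learner (in the style of Smith et al., 2006) driven only by the proprioceptive prediction error, plus a visual correction that cancels a fraction of the residual error without updating the learned states. All parameter values below are illustrative assumptions, not fits from the paper:

    ```python
    import numpy as np

    def simulate_walking(perturb, visual_feedback,
                         A_f=0.92, B_f=0.10, A_s=0.996, B_s=0.02, g=0.8):
        """Dual-rate adaptation plus a non-learning visual correction.

        perturb         -- per-trial perturbation (e.g., belt-speed asymmetry)
        visual_feedback -- per-trial flag: is visual error feedback shown?
        g               -- fraction of residual error the voluntary visual
                           correction cancels (assumed value)
        """
        x_fast = x_slow = 0.0
        perf = np.zeros(len(perturb))
        for n in range(len(perturb)):
            x = x_fast + x_slow                       # learned compensation
            e = perturb[n] - x                        # proprioceptive prediction error
            c = g * e if visual_feedback[n] else 0.0  # visual correction (not learned)
            perf[n] = e - c                           # observed performance error
            # Learning is driven by e alone, so the correction leaves it untouched.
            x_fast = A_f * x_fast + B_f * e
            x_slow = A_s * x_slow + B_s * e
        return perf
    ```

    With feedback on throughout, the performance error is simply (1 - g) times the no-feedback trajectory, while the underlying learning is identical in both cases; switching feedback off mid-block makes the error jump straight back to the proprioceptive-learning level, reproducing the paper's key observation.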

  7. [Efficacy of topical ketorolac for improving visual function after photocoagulation in diabetic patients with focal macular edema].

    PubMed

    Razo Blanco-Hernández, Dulce Milagros; Lima-Gómez, Virgilio; Asbun-Bojalil, Juan

    2014-01-01

    Photocoagulation reduces the incidence of visual loss in diabetic patients with focal macular edema, but it can induce visual loss for 6 weeks after treatment and produces visual improvement in only some cases. Topical ketorolac may reduce the inflammation caused by photocoagulation and improve visual outcome. To determine the efficacy of topical ketorolac for improving visual function after photocoagulation in diabetic patients with focal macular edema. An experimental, comparative, prospective, longitudinal study in diabetic patients with focal macular edema was conducted. Eyes were randomized into two groups of topical treatment for 3 weeks after photocoagulation (A: ketorolac, B: placebo). Best corrected visual acuity before and after treatment was compared in each group (paired t test), and the proportion of eyes with visual improvement was compared between groups (χ(2)). The evaluation was repeated after stratifying for initial visual acuity (≥ 0.5, < 0.5). There were 105 eyes included. In group A (n = 46) mean visual acuity changed from 0.50 to 0.58 (p = 0.003), and in group B (n = 59) from 0.55 to 0.55 (p = 0.83); mean percent change was 22.3% in group A and 3.5% in group B (p = 0.03). Visual improvement was identified in 25 eyes from group A (54.3%) and 19 from group B (32.2%; p = 0.019, RR 1.65); the difference persisted only when initial visual acuity was ≥ 0.5 (10 [40%] in group A vs. 5 [14.7%] in group B; p = 0.02, RR 2.72). Topical ketorolac was more effective than placebo in improving best corrected visual acuity in diabetic patients with focal macular edema.

  8. Sensorimotor enhancement with a mixed reality system for balance and mobility rehabilitation.

    PubMed

    Fung, Joyce; Perez, Claire F

    2011-01-01

We have developed a mixed reality system incorporating virtual reality (VR), surface perturbations, and light touch for gait rehabilitation. Haptic touch has emerged as a novel and efficient technique to improve postural control and dynamic stability. Our system combines visual display with the manipulation of physical environments and the addition of haptic feedback to enhance balance and mobility post stroke. A research study involving 9 participants with stroke and 9 age-matched healthy individuals showed that the haptic cue provided while walking is an effective means of improving gait stability in people post stroke, especially during challenging environmental conditions such as downslope walking.

  9. Visualization of stereoscopic anatomic models of the paranasal sinuses and cervical vertebrae from the surgical and procedural perspective.

    PubMed

    Chen, Jian; Smith, Andrew D; Khan, Majid A; Sinning, Allan R; Conway, Marianne L; Cui, Dongmei

    2017-11-01

Recent improvements in three-dimensional (3D) virtual modeling software allow anatomists to generate high-resolution, visually appealing, colored, anatomical 3D models from computed tomography (CT) images. In this study, high-resolution CT images of a cadaver were used to develop clinically relevant anatomic models, including the facial skull, nasal cavity, septum, turbinates, paranasal sinuses, optic nerve, pituitary gland, carotid artery, cervical vertebrae, atlanto-axial joint, cervical spinal cord, cervical nerve roots, and vertebral artery, that can be used to teach clinical trainees (students, residents, and fellows) approaches for trans-sphenoidal pituitary surgery and cervical spine injection procedures. Volume rendering, surface rendering, and a new semi-auto-combined rendering technique were applied in the study. These models enable visualization, manipulation, and interaction on a computer and can be presented in a stereoscopic 3D virtual environment, which makes users feel as if they are inside the model. Anat Sci Educ 10: 598-606. © 2017 American Association of Anatomists.

  10. Driving while using a smartphone-based mobility application: Evaluating the impact of three multi-choice user interfaces on visual-manual distraction.

    PubMed

    Louveton, N; McCall, R; Koenig, V; Avanesov, T; Engel, T

    2016-05-01

    Innovative in-car applications provided on smartphones can deliver real-time alternative mobility choices and subsequently generate visual-manual demand. Prior studies have found that multi-touch gestures such as kinetic scrolling are problematic in this respect. In this study we evaluate three prototype tasks which can be found in common mobile interaction use-cases. In a repeated-measures design, 29 participants interacted with the prototypes in a car-following task within a driving simulator environment. Task completion, driving performance and eye gaze have been analysed. We found that the slider widget used in the filtering task was too demanding and led to poor performance, while kinetic scrolling generated a comparable amount of visual distraction despite it requiring a lower degree of finger pointing accuracy. We discuss how to improve continuous list browsing in a dual-task context. Copyright © 2016 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  11. Faces in Context: Does Face Perception Depend on the Orientation of the Visual Scene?

    PubMed

    Taubert, Jessica; van Golde, Celine; Verstraten, Frans A J

    2016-10-01

    The mechanisms held responsible for familiar face recognition are thought to be orientation dependent; inverted faces are more difficult to recognize than their upright counterparts. Although this effect of inversion has been investigated extensively, researchers have typically sliced faces from photographs and presented them in isolation. As such, it is not known whether the perceived orientation of a face is inherited from the visual scene in which it appears. Here, we address this question by measuring performance in a simultaneous same-different task while manipulating both the orientation of the faces and the scene. We found that the face inversion effect survived scene inversion. Nonetheless, an improvement in performance when the scene was upside down suggests that sensitivity to identity increased when the faces were more easily segmented from the scene. Thus, while these data identify congruency with the visual environment as a contributing factor in recognition performance, they imply different mechanisms operate on upright and inverted faces. © The Author(s) 2016.

  12. Effects of cane length and diameter and judgment type on the constant error ratio for estimated height in blindfolded, visually impaired, and sighted participants.

    PubMed

    Huang, Kuo-Chen; Leung, Cherng-Yee; Wang, Hsiu-Feng

    2010-04-01

    The purpose of this study was to assess the ability of blindfolded, visually impaired, and sighted individuals to estimate object height as a function of cane length, cane diameter, and judgment type. 48 undergraduate students (ages 20 to 23 years) were recruited to participate in the study. Participants were divided into low-vision, severely myopic, and normal-vision groups. Five stimulus heights were explored with three cane lengths, varying cane diameters, and judgment types. The participants were asked to estimate the stimulus height with or without reference to a standard block. Results showed that the constant error ratio for estimated height improved with decreasing cane length and comparative judgment. The findings were unclear regarding the effect of cane length on haptic perception of height. Implications were discussed for designing environments, such as stair heights, chairs, the magnitude of apertures, etc., for visually impaired individuals.

  13. Novel virtual reality system integrating online self-face viewing and mirror visual feedback for stroke rehabilitation: rationale and feasibility.

    PubMed

    Shiri, Shimon; Feintuch, Uri; Lorber-Haddad, Adi; Moreh, Elior; Twito, Dvora; Tuchner-Arieli, Maya; Meiner, Zeev

    2012-01-01

To introduce the rationale of a novel virtual reality system based on self-face viewing and mirror visual feedback, and to examine its feasibility as a rehabilitation tool for poststroke patients. A novel motion capture virtual reality system integrating online self-face viewing and mirror visual feedback has been developed for stroke rehabilitation. The system allows the replacement of the impaired arm by a virtual arm. Upon making small movements of the paretic arm, patients view themselves virtually performing healthy full-range movements. A sample of 6 patients in the acute poststroke phase received the virtual reality treatment concomitantly with conservative rehabilitation treatment. Feasibility was assessed during 10 sessions for each participant. All participants succeeded in operating the system, demonstrating its feasibility in terms of adherence and improvement in task performance. Patients' performance within the virtual environment and a set of clinical-functional measures recorded before the virtual reality treatment, at 1 week, and after 3 months indicated improvement in neurological status and general functioning. These preliminary results indicate that this newly developed virtual reality system is safe and feasible. Future randomized controlled studies are required to assess whether this system has beneficial effects in terms of enhancing upper limb function and quality of life in poststroke patients.

  14. Moving beyond the White Cane: Building an Online Learning Environment for the Visually Impaired Professional.

    ERIC Educational Resources Information Center

    Mitchell, Donald P.; Scigliano, John A.

    2000-01-01

    Describes the development of an online learning environment for a visually impaired professional. Topics include physical barriers, intellectual barriers, psychological barriers, and technological barriers; selecting appropriate hardware and software; and combining technologies that include personal computers, Web-based resources, network…

  15. Plant a tree in cyberspace: metaphor and analogy as design elements in Web-based learning environments.

    PubMed

    Wolfe, C R

    2001-02-01

Analogy and metaphor are figurative forms of communication that help people integrate new information with prior knowledge to facilitate comprehension and appropriate inferences. The novelty and versatility of the Web place cognitive burdens on learners that can be overcome through the use of analogies and metaphors. This paper explores three uses of figurative communication as design elements in Web-based learning environments, and provides empirical illustrations of each. First, extended analogies can be used as the basis of cover stories that create an analogy between the learner's position and a hypothetical situation. The Dragonfly Web pages make extensive use of analogous cover stories in the design of interactive decision-making games. Feedback from visitors, patterns of usage, and external reviews provide evidence of effectiveness. A second approach is visual analogies based on the principles of ecological psychology. An empirical example suggests that visual analogies are most effective when there is a one-to-one correspondence between the base and visual target analogs. The use of learner-generated analogies is a third approach. Data from an offline study with undergraduate science students are presented indicating that generating analogies is associated with significant improvements in the ability to place events in natural history on a time line. It is concluded that cyberspace itself might form the basis of the next guiding metaphor of mind.

  16. Look, Snap, See: Visual Literacy through the Camera.

    ERIC Educational Resources Information Center

    Spoerner, Thomas M.

    1981-01-01

    Activities involving photographs stimulate visual perceptual awareness. Children understand visual stimuli before having verbal capacity to deal with the world. Vision becomes the primary means for learning, understanding, and adjusting to the environment. Photography can provide an effective avenue to visual literacy. (Author)

  17. A Three-Dimensional Variational Assimilation Scheme for Satellite AOD

    NASA Astrophysics Data System (ADS)

    Liang, Y.; Zang, Z.; You, W.

    2018-04-01

A three-dimensional variational data assimilation scheme is designed for satellite AOD based on the IMPROVE (Interagency Monitoring of Protected Visual Environments) equation. The observation operator that simulates AOD from the control variables is established by the IMPROVE equation. All 16 control variables in the assimilation scheme are the mass concentrations of aerosol species from the Model for Simulating Aerosol Interactions and Chemistry scheme, so as to take advantage of that scheme's comprehensive analyses of species concentrations and size distributions while remaining computationally efficient. The assimilation scheme saves computational resources because the IMPROVE equation is a quadratic equation. A single-point observation experiment shows that the information from a single AOD observation is effectively spread horizontally and vertically.
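For illustration, a linearized observation operator mapping species mass concentrations to column AOD admits the standard closed-form 3D-Var analysis. The extinction efficiencies, grid size, and error covariances below are invented for the sketch; the real IMPROVE equation also carries relative-humidity growth factors and a quadratic fine/large-mass split that this linear toy omits.

```python
import numpy as np

# Illustrative dry-extinction efficiencies (m^2/g) for five aerosol species
# (values assumed for the sketch, not the published IMPROVE coefficients).
beta = np.array([3.0, 3.0, 10.0, 4.0, 1.0])
nlev, nspec, dz = 4, 5, 500.0        # toy vertical grid: 4 levels, 500 m thick

def aod(x):
    """Observation operator: column AOD from concentrations x (nlev x nspec)."""
    return float(np.sum(x @ beta) * dz)

xb = np.full((nlev, nspec), 1e-6)             # background concentrations (g/m^3)
y = 0.25                                      # observed AOD
B = np.eye(nlev * nspec) * (5e-7) ** 2        # background error covariance
R = np.array([[0.02 ** 2]])                   # observation error covariance

# Linear operator => closed-form 3D-Var analysis:
#   xa = xb + B H^T (H B H^T + R)^-1 (y - H xb)
Hmat = (np.tile(beta, nlev) * dz).reshape(1, -1)
K = B @ Hmat.T @ np.linalg.inv(Hmat @ B @ Hmat.T + R)   # Kalman-style gain
xa = xb.ravel() + (K * (y - aod(xb))).ravel()
```

In a real system the state would be the full MOSAIC species/size variables and B would carry spatial correlations, which is what spreads a single-point AOD observation horizontally and vertically as described in the abstract.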

  18. Graphical programming interface: A development environment for MRI methods.

    PubMed

    Zwart, Nicholas R; Pipe, James G

    2015-11-01

    To introduce a multiplatform, Python language-based, development environment called graphical programming interface for prototyping MRI techniques. The interface allows developers to interact with their scientific algorithm prototypes visually in an event-driven environment making tasks such as parameterization, algorithm testing, data manipulation, and visualization an integrated part of the work-flow. Algorithm developers extend the built-in functionality through simple code interfaces designed to facilitate rapid implementation. This article shows several examples of algorithms developed in graphical programming interface including the non-Cartesian MR reconstruction algorithms for PROPELLER and spiral as well as spin simulation and trajectory visualization of a FLORET example. The graphical programming interface framework is shown to be a versatile prototyping environment for developing numeric algorithms used in the latest MR techniques. © 2014 Wiley Periodicals, Inc.

  19. The Perceptual Root of Object-Based Storage: An Interactive Model of Perception and Visual Working Memory

    ERIC Educational Resources Information Center

    Gao, Tao; Gao, Zaifeng; Li, Jie; Sun, Zhongqiang; Shen, Mowei

    2011-01-01

    Mainstream theories of visual perception assume that visual working memory (VWM) is critical for integrating online perceptual information and constructing coherent visual experiences in changing environments. Given the dynamic interaction between online perception and VWM, we propose that how visual information is processed during visual…

  20. Development of a Computerized Visual Search Test

    ERIC Educational Resources Information Center

    Reid, Denise; Babani, Harsha; Jon, Eugenia

    2009-01-01

    Visual attention and visual search are the features of visual perception, essential for attending and scanning one's environment while engaging in daily occupations. This study describes the development of a novel web-based test of visual search. The development information including the format of the test will be described. The test was designed…

  1. Lip-reading aids word recognition most in moderate noise: a Bayesian explanation using high-dimensional feature space.

    PubMed

    Ma, Wei Ji; Zhou, Xiang; Ross, Lars A; Foxe, John J; Parra, Lucas C

    2009-01-01

    Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
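The "general doctrine of combining information from different sensory modalities" that the authors test against is, for Gaussian cues, the familiar precision-weighted combination rule. A minimal sketch of that textbook rule follows (the means and variances are hypothetical; this is not the paper's high-dimensional word model):

```python
def integrate(mu_a, var_a, mu_v, var_v):
    """Optimal (maximum-likelihood) fusion of two independent Gaussian cues."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)   # precision weight on audition
    mu = w_a * mu_a + (1 - w_a) * mu_v            # fused estimate
    var = 1.0 / (1 / var_a + 1 / var_v)           # fused variance, below both inputs
    return mu, var
```

The fused variance is always smaller than either input's, so vision always helps; the paper's contribution is that once words are points in a high-dimensional feature space, the same Bayesian machinery predicts the largest visual benefit at intermediate rather than maximal auditory noise.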

  2. Listeners' expectation of room acoustical parameters based on visual cues

    NASA Astrophysics Data System (ADS)

    Valente, Daniel L.

    Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audio-visual study, in which participants are instructed to make spatial congruency and quantity judgments in dynamic cross-modal environments. The results of these psychophysical tests suggest the importance of consilient audio-visual presentation to the legibility of an auditory scene. Several studies have looked into audio-visual interaction in room perception in recent years, but these studies rely on static images, speech signals, or photographs alone to represent the visual scene. Building on these studies, the aim is to propose a testing method that uses monochromatic compositing (blue-screen technique) to position a studio recording of a musical performance in a number of virtual acoustical environments and ask subjects to assess these environments. In the first experiment of the study, video footage was taken from five rooms varying in physical size from a small studio to a small performance hall. Participants were asked to perceptually align two distinct acoustical parameters---early-to-late reverberant energy ratio and reverberation time---of two solo musical performances in five contrasting visual environments according to their expectations of how the room should sound given its visual appearance. In the second experiment in the study, video footage shot from four different listening positions within a general-purpose space was coupled with sounds derived from measured binaural impulse responses (IRs). The relationship between the presented image, sound, and virtual receiver position was examined. It was found that many visual cues caused different perceived events of the acoustic environment. 
This included the visual attributes of the space in which the performance was located as well as the visual attributes of the performer. The addressed visual makeup of the performer included: (1) an actual video of the performance, (2) a surrogate image of the performance, for example a loudspeaker's image reproducing the performance, (3) no visual image of the performance (empty room), or (4) a multi-source visual stimulus (actual video of the performance coupled with two images of loudspeakers positioned to the left and right of the performer). For this experiment, perceived auditory events of sound were measured in terms of two subjective spatial metrics: Listener Envelopment (LEV) and Apparent Source Width (ASW). These metrics were hypothesized to be dependent on the visual imagery of the presented performance. Data were also collected as participants matched direct and reverberant sound levels for the presented audio-visual scenes. In the final experiment, participants judged spatial expectations of an ensemble of musicians presented in the five physical spaces from Experiment 1. Supporting data were accumulated in two stages. First, participants were given an audio-visual matching test, in which they were instructed to align the auditory width of a performing ensemble to a varying set of audio and visual cues. In the second stage, a conjoint analysis design paradigm was explored to extrapolate the relative magnitude of the explored audio-visual factors in affecting three assessed response criteria: Congruency (the perceived match-up of the auditory and visual cues in the assessed performance), ASW, and LEV. Results show that both auditory and visual factors affect the collected responses, and that the two sensory modalities coincide in distinct interactions. 
This study reveals participant resiliency in the presence of forced auditory-visual mismatch: Participants are able to adjust the acoustic component of the cross-modal environment in a statistically similar way despite randomized starting values for the monitored parameters. Subjective results of the experiments are presented along with objective measurements for verification.

  3. A Software Developer’s Guide to Informal Evaluation of Visual Analytics Environments Using VAST Challenge Information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, Kristin A.; Scholtz, Jean; Whiting, Mark A.

The VAST Challenge has been a popular venue for academic and industry participants for over ten years. Many participants comment that the majority of their time in preparing VAST Challenge entries is spent discovering elements in their software environments that need to be redesigned in order to solve the given task. Fortunately, there is no need to wait until the VAST Challenge is announced to test out software systems. The Visual Analytics Benchmark Repository contains all past VAST Challenge tasks, data, solutions, and submissions. In this paper we describe how developers can do informal evaluations of various aspects of their visual analytics environments using VAST Challenge information. Aspects that can be evaluated include the appropriateness of the software for various tasks, the various data types and formats that can be accommodated, the effectiveness and efficiency of the process supported by the software, and the intuitiveness of the visualizations and interactions. Researchers can compare their visualizations and interactions to those submitted to determine novelty. In addition, the paper provides pointers to various guidelines that software teams can use to evaluate the usability of their software. While these evaluations are not a replacement for formal evaluation methods, this information can be extremely useful during the development of visual analytics environments.

  4. Multiscale Modeling, Simulation and Visualization and Their Potential for Future Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler)

    2002-01-01

    This document contains the proceedings of the Training Workshop on Multiscale Modeling, Simulation and Visualization and Their Potential for Future Aerospace Systems held at NASA Langley Research Center, Hampton, Virginia, March 5 - 6, 2002. The workshop was jointly sponsored by Old Dominion University's Center for Advanced Engineering Environments and NASA. Workshop attendees were from NASA, other government agencies, industry, and universities. The objectives of the workshop were to give overviews of the diverse activities in hierarchical approach to material modeling from continuum to atomistics; applications of multiscale modeling to advanced and improved material synthesis; defects, dislocations, and material deformation; fracture and friction; thin-film growth; characterization at nano and micro scales; and, verification and validation of numerical simulations, and to identify their potential for future aerospace systems.

  5. Audio-visual sensory deprivation degrades visuo-tactile peri-personal space.

    PubMed

    Noel, Jean-Paul; Park, Hyeong-Dong; Pasqualini, Isabella; Lissek, Herve; Wallace, Mark; Blanke, Olaf; Serino, Andrea

    2018-05-01

    Self-perception is scaffolded upon the integration of multisensory cues on the body, the space surrounding the body (i.e., the peri-personal space; PPS), and from within the body. We asked whether reducing information available from external space would change: PPS, interoceptive accuracy, and self-experience. Twenty participants were exposed to 15 min of audio-visual deprivation and performed: (i) a visuo-tactile interaction task measuring their PPS; (ii) a heartbeat perception task measuring interoceptive accuracy; and (iii) a series of questionnaires related to self-perception and mental illness. These tasks were carried out in two conditions: while exposed to a standard sensory environment and under a condition of audio-visual deprivation. Results suggest that while PPS becomes ill defined after audio-visual deprivation, interoceptive accuracy is unaltered at a group-level, with some participants improving and some worsening in interoceptive accuracy. Interestingly, correlational individual differences analyses revealed that changes in PPS after audio-visual deprivation were related to interoceptive accuracy and self-reports of "unusual experiences" on an individual subject basis. Taken together, the findings argue for a relationship between the malleability of PPS, interoceptive accuracy, and an inclination toward aberrant ideation often associated with mental illness. Copyright © 2018. Published by Elsevier Inc.

  6. Research on strategy marine noise map based on i4ocean platform: Constructing flow and key approach

    NASA Astrophysics Data System (ADS)

    Huang, Baoxiang; Chen, Ge; Han, Yong

    2016-02-01

Noise levels in the marine environment have raised extensive concern in the scientific community. The research is carried out on the i4Ocean platform, following a workflow of ocean noise model integration; noise data extraction, processing, visualization, and interpretation; and ocean noise map construction and publishing. For the convenience of numerical computation, a hybrid propagation model that depends on spatial location is suggested, based on the characteristics of the ocean noise field: the normal-mode K/I model is used for the far field and the ray-based CANARY model for the near field. Visualizing marine ambient noise data is critical to understanding and predicting marine noise for relevant decision making. The marine noise map is constructed within a virtual ocean scene. The systematic marine noise visualization framework includes preprocessing, coordinate transformation and interpolation, and rendering. Because the simulation of ocean noise depends on a realistic sea surface, the dynamic water simulation grid was improved with GPU fusion to combine seamlessly with the ocean noise visualization. Profile and spherical visualizations spanning the space and time dimensions were also provided for the vertical field characteristics of ocean ambient noise. Finally, the marine noise map can be published with grid pre-processing and multistage caching to better serve the public.

  7. Development of driver’s assistant system of additional visual information of blind areas for Gazelle Next

    NASA Astrophysics Data System (ADS)

    Makarov, V.; Korelin, O.; Koblyakov, D.; Kostin, S.; Komandirov, A.

    2018-02-01

The article is devoted to the development of an Advanced Driver Assistance System (ADAS) for the GAZelle NEXT car. The project aims at developing a visual information system for the driver integrated into the windshield pillars. The developed system implements the following functions: assistance in maneuvering and parking; recognition of road signs; warning the driver about the possibility of a frontal collision; monitoring of blind zones; "transparent" vision through the windshield pillars, widening the field of view behind them; visual and audible information about the traffic situation; lane departure monitoring; monitoring of the driver's condition; navigation; and an all-round view. The layout of the sensors of the developed driver visual-information system is provided, the placement of the systems on a prototype vehicle is considered, and possible changes to the interior and dashboard of the car are given. The implementation is aimed at improved informing of the driver about the environment and the development of an ergonomic interior for this system within the new functional cabin of the GAZelle NEXT vehicle equipped with the driver visual-information system.

  8. QuakeSim 2.0

    NASA Technical Reports Server (NTRS)

    Donnellan, Andrea; Parker, Jay W.; Lyzenga, Gregory A.; Granat, Robert A.; Norton, Charles D.; Rundle, John B.; Pierce, Marlon E.; Fox, Geoffrey C.; McLeod, Dennis; Ludwig, Lisa Grant

    2012-01-01

QuakeSim 2.0 improves understanding of earthquake processes by providing modeling tools and integrating model applications and various heterogeneous data sources within a Web services environment. QuakeSim is a multisource, synergistic, data-intensive environment for modeling the behavior of earthquake faults individually, and as part of complex interacting systems. Remotely sensed geodetic data products may be explored, compared with faults and landscape features, mined by pattern analysis applications, and integrated with models and pattern analysis applications in a rich Web-based and visualization environment. Integration of heterogeneous data products with pattern informatics tools enables efficient development of models. Federated database components and visualization tools allow rapid exploration of large datasets, while pattern informatics enables identification of subtle, but important, features in large data sets. QuakeSim is valuable for earthquake investigations and modeling in its current state, and also serves as a prototype and nucleus for broader systems under development. The framework provides access to physics-based simulation tools that model the earthquake cycle and related crustal deformation. Spaceborne GPS and Interferometric Synthetic Aperture Radar (InSAR) data provide information on near-term crustal deformation, while paleoseismic geologic data provide longer-term information on earthquake fault processes. These data sources are integrated into QuakeSim's QuakeTables database system, and are accessible by users or various model applications. UAVSAR repeat-pass interferometry data products are added to the QuakeTables database, and are available through a browseable map interface or Representational State Transfer (REST) interfaces. Model applications can retrieve data from QuakeTables, or from third-party GPS velocity data services; alternatively, users can manually input parameters into the models. 
Pattern analysis of GPS and seismicity data has proved useful for mid-term forecasting of earthquakes, and for detecting subtle changes in crustal deformation. The GPS time series analysis has also proved useful as a data-quality tool, enabling the discovery of station anomalies and data processing and distribution errors. Improved visualization tools enable more efficient data exploration and understanding. Tools provide flexibility to science users for exploring data in new ways through download links, but also facilitate standard, intuitive, and routine uses for science users and end users such as emergency responders.

  9. VisSearch: A Collaborative Web Searching Environment

    ERIC Educational Resources Information Center

    Lee, Young-Jin

    2005-01-01

    VisSearch is a collaborative Web searching environment intended for sharing Web search results among people with similar interests, such as college students taking the same course. It facilitates students' Web searches by visualizing various Web searching processes. It also collects the visualized Web search results and applies an association rule…

  10. Visual resource management of the sea

    Treesearch

    Louis V. Mills Jr.

    1979-01-01

    The scenic quality of the marine environment has become an important concern for the design and planning professions. Increased public use of the underwater environment has resulted from technological advancements in SCUBA, recreational submarines and through development of under-water restaurants and parks. This paper presents an approach to an underwater visual...

  11. Using Visualization to Motivate Student Participation in Collaborative Online Learning Environments

    ERIC Educational Resources Information Center

    Jin, Sung-Hee

    2017-01-01

    Online participation in collaborative online learning environments is instrumental in motivating students to learn and promoting their learning satisfaction, but there has been little research on the technical supports for motivating students' online participation. The purpose of this study was to develop a visualization tool to motivate learners…

  12. Visual Literacy in Instructional Design Programs

    ERIC Educational Resources Information Center

    Ervine, Michelle D.

    2016-01-01

    In this technologically advanced environment, users have become highly visual, with television, videos, web sites and images dominating the learning environment. These new forms of searching and learning are changing the perspective of what it means to be literate. Literacy can no longer solely rely on text-based materials, but should also…

  13. Audio Visual Technology and the Teaching of Foreign Languages.

    ERIC Educational Resources Information Center

    Halbig, Michael C.

    Skills in comprehending the spoken language source are becoming increasingly important due to the audio-visual orientation of our culture. It would seem natural, therefore, to adjust the learning goals and environment accordingly. The video-cassette machine is an ideal means for creating this learning environment and developing the listening…

  14. The climate visualizer: Sense-making through scientific visualization

    NASA Astrophysics Data System (ADS)

    Gordin, Douglas N.; Polman, Joseph L.; Pea, Roy D.

    1994-12-01

    This paper describes the design of a learning environment, called the Climate Visualizer, intended to facilitate scientific sense-making in high school classrooms by providing students the ability to craft, inspect, and annotate scientific visualizations. The theoretical background for our design presents a view of learning as acquiring and critiquing cultural practices and stresses the need for students to appropriate the social and material aspects of practice when learning an area. This is followed by a description of the design of the Climate Visualizer, including detailed accounts of its provision of spatial and temporal context and the quantitative and visual representations it employs. A broader context is then explored by describing its integration into the high school science classroom. This discussion explores how visualizations can promote the creation of scientific theories, especially in conjunction with the Collaboratory Notebook, an embedded environment for creating and critiquing scientific theories and visualizations. Finally, we discuss the design trade-offs we have made in light of our theoretical orientation, and our hopes for further progress.

  15. Variable practice with lenses improves visuo-motor plasticity

    NASA Technical Reports Server (NTRS)

    Roller, C. A.; Cohen, H. S.; Kimball, K. T.; Bloomberg, J. J.

    2001-01-01

    Novel sensorimotor situations present a unique challenge to an individual's adaptive ability. Using the simple and easily measured paradigm of visual-motor rearrangement created by the use of visual displacement lenses, we sought to determine whether an individual's ability to adapt to visuo-motor discordance could be improved through training. Subjects threw small balls at a stationary target during a 3-week practice regimen involving repeated exposure to one set of lenses in block practice (x 2.0 magnifying lenses), multiple sets of lenses in variable practice (x 2.0 magnifying, x 0.5 minifying and up-down reversing lenses) or sham lenses. At the end of training, adaptation to a novel visuo-motor situation (20-degree right shift lenses) was tested. We found that (1) training with variable practice can increase adaptability to a novel visuo-motor situation, (2) increased adaptability is retained for at least 1 month and is transferable to further novel visuo-motor permutations and (3) variable practice improves performance of a simple motor task even in the undisturbed state. These results have implications for the design of clinical rehabilitation programs and countermeasures to enhance astronaut adaptability, facilitating adaptive transitions between gravitational environments.

  16. The influence of the aquatic environment on the control of postural sway.

    PubMed

    Marinho-Buzelli, Andresa R; Rouhani, Hossein; Masani, Kei; Verrier, Mary C; Popovic, Milos R

    2017-01-01

    Balance training in the aquatic environment is often used in rehabilitation practice to improve static and dynamic balance. Although aquatic therapy is widely used in clinical practice, we still lack evidence on how immersion in water actually impacts postural control. We examined how postural sway, measured using centre of pressure and trunk acceleration parameters, is influenced by the aquatic environment, along with the effects of visual information. Our results suggest that the aquatic environment increases postural instability, as measured by the centre of pressure parameters in the time domain. The mean velocity and area were more significantly affected when individuals stood with eyes closed in the aquatic environment. In addition, a more forward posture was assumed in water with eyes closed in comparison to standing on land. In water, the low frequencies of sway were more dominant than on dry land. Trunk acceleration differed between water and dry land only in the larger upper trunk acceleration in the mediolateral direction during standing in water. This finding shows that the study participants potentially resorted to using their upper trunk to compensate for postural instability in the mediolateral direction. Only the lower trunk seemed to change its acceleration pattern in the anteroposterior and mediolateral directions when the eyes were closed, and it did so depending on the environmental conditions. The increased postural instability and the change in postural control strategies that the aquatic environment elicits may be a beneficial stimulus for improving balance control.
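    The time-domain centre-of-pressure measures referred to above have standard definitions; mean velocity, for instance, is the total sway path length divided by trial duration. A minimal sketch of that computation (the function name and sampling-rate parameter are illustrative, not taken from the paper):

```python
import math

def cop_mean_velocity(xs, ys, fs):
    """Mean COP velocity: total sway path length divided by trial duration.

    xs, ys are centre-of-pressure coordinates sampled at fs Hz.
    """
    pts = list(zip(xs, ys))
    # Sum the Euclidean distances between consecutive COP samples.
    path = sum(math.hypot(b[0] - a[0], b[1] - a[1]) for a, b in zip(pts, pts[1:]))
    duration = (len(pts) - 1) / fs  # seconds between first and last sample
    return path / duration
```

    A larger mean velocity indicates a longer sway path per unit time, i.e. greater postural instability.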

  17. The quality of visual information about the lower extremities influences visuomotor coordination during virtual obstacle negotiation.

    PubMed

    Kim, Aram; Kretch, Kari S; Zhou, Zixuan; Finley, James M

    2018-05-09

    Successful negotiation of obstacles during walking relies on the integration of visual information about the environment with ongoing locomotor commands. When information about the body and environment are removed through occlusion of the lower visual field, individuals increase downward head pitch angle, reduce foot placement precision, and increase safety margins during crossing. However, whether these effects are mediated by loss of visual information about the lower extremities, the obstacle, or both remains to be seen. Here, we used a fully immersive, virtual obstacle negotiation task to investigate how visual information about the lower extremities is integrated with information about the environment to facilitate skillful obstacle negotiation. Participants stepped over virtual obstacles while walking on a treadmill with one of three types of visual feedback about the lower extremities: no feedback, end-point feedback, or a link-segment model. We found that absence of visual information about the lower extremities led to an increase in the variability of leading foot placement after crossing. The presence of a visual representation of the lower extremities promoted greater downward head pitch angle during the approach to and subsequent crossing of an obstacle. In addition, having greater downward head pitch was associated with closer placement of the trailing foot to the obstacle, further placement of the leading foot after the obstacle, and higher trailing foot clearance. These results demonstrate that the fidelity of visual information about the lower extremities influences both feed-forward and feedback aspects of visuomotor coordination during obstacle negotiation.

  18. Behavior Selection of Mobile Robot Based on Integration of Multimodal Information

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Kaneko, Masahide

    Recently, biologically inspired robots have been developed to acquire the capacity for directing visual attention to salient stimuli generated from the audiovisual environment. To realize this behavior, a general method is to calculate saliency maps representing how much the external information attracts the robot's visual attention, where the audiovisual information and the robot's motion status should be involved. In this paper, we present a visual attention model in which three modalities, that is, audio information, visual information and the robot's motor status, are considered, whereas previous research has not considered all three. Firstly, we introduce a 2-D density map, on which the value denotes how much the robot pays attention to each spatial location. Then we model the attention density using a Bayesian network in which the robot's motion statuses are involved. Secondly, the information from both the audio and visual modalities is integrated with the attention density map in integrate-and-fire neurons. The robot can direct its attention to the locations where the integrate-and-fire neurons fire. Finally, the visual attention model is applied to make the robot select visual information from the environment and react to the selected content. Experimental results show that it is possible for robots to acquire the visual information related to their behaviors by using the attention model considering motion statuses. The robot can select its behaviors to adapt to the dynamic environment as well as switch to another task according to the recognition results of visual attention.
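    The integrate-and-fire mechanism mentioned in this record can be illustrated with a leaky integrator that accumulates a combined audio-visual saliency signal at one location and fires when a threshold is crossed. The threshold, leak factor, and reset behavior below are illustrative assumptions, not values from the paper:

```python
def integrate_and_fire(saliency, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron driven by a saliency time series.

    Returns the time steps at which the neuron fires (attention is drawn).
    """
    v, spikes = 0.0, []
    for t, s in enumerate(saliency):
        v = leak * v + s       # leaky accumulation of input saliency
        if v >= threshold:
            spikes.append(t)   # neuron fires at this time step
            v = 0.0            # reset membrane potential after firing
    return spikes
```

    For example, a steady input of 0.5 per step first crosses the default threshold at step 2: `integrate_and_fire([0.5, 0.5, 0.5, 0.5])` returns `[2]`. Persistent salient input thus accumulates until it wins the robot's attention.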

  19. Interface Design Implications for Recalling the Spatial Configuration of Virtual Auditory Environments

    NASA Astrophysics Data System (ADS)

    McMullen, Kyla A.

    Although the concept of virtual spatial audio has existed for almost twenty-five years, only in the past fifteen years has modern computing technology enabled the real-time processing needed to deliver high-precision spatial audio. Furthermore, the concept of virtually walking through an auditory environment did not exist. The applications of such an interface have numerous potential uses. Spatial audio has the potential to be used in various manners ranging from enhancing sounds delivered in virtual gaming worlds to conveying spatial locations in real-time emergency response systems. To incorporate this technology in real-world systems, various concerns should be addressed. First, to widely incorporate spatial audio into real-world systems, head-related transfer functions (HRTFs) must be inexpensively created for each user. The present study further investigated an HRTF subjective selection procedure previously developed within our research group. Users discriminated auditory cues to subjectively select their preferred HRTF from a publicly available database. Next, the issue of training to find virtual sources was addressed. Listeners participated in a localization training experiment using their selected HRTFs. The training procedure was created from the characterization of successful search strategies in prior auditory search experiments. Search accuracy significantly improved after listeners performed the training procedure. Next, in the investigation of auditory spatial memory, listeners completed three search and recall tasks with differing recall methods. Recall accuracy significantly decreased in tasks that required the storage of sound source configurations in memory. To assess the impacts of practical scenarios, the present work assessed the performance effects of: signal uncertainty, visual augmentation, and different attenuation modeling. Fortunately, source uncertainty did not affect listeners' ability to recall or identify sound sources. 
The present study also found that the presence of visual reference frames significantly increased recall accuracy. Additionally, the incorporation of drastic attenuation significantly improved environment recall accuracy. Through investigating the aforementioned concerns, the present study took initial steps toward guiding the design of virtual auditory environments that support spatial configuration recall.
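    The record does not specify the attenuation model used; a common choice in virtual auditory displays is inverse-distance rolloff, where level falls by about 6 dB per doubling of distance and a rolloff factor above 1 produces the kind of "drastic" attenuation mentioned. A hedged sketch of that standard model (parameter names are illustrative):

```python
import math

def inverse_distance_gain_db(distance_m, ref_m=1.0, rolloff=1.0):
    """Gain (dB) of a source at distance_m under an inverse-distance law.

    rolloff > 1 exaggerates attenuation; distances inside ref_m are clamped
    so the gain never exceeds 0 dB.
    """
    d = max(distance_m, ref_m)
    return -20.0 * rolloff * math.log10(d / ref_m)
```

    At 2 m the gain is about -6 dB with rolloff 1, and about -12 dB with rolloff 2, which makes distance differences between sources far easier to hear.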

  20. ITEMS Project: An online sequence for teaching mathematics and astronomy

    NASA Astrophysics Data System (ADS)

    Martínez, Bernat; Pérez, Josep

    2010-10-01

    This work describes an e-learning sequence for teaching geometry and astronomy in lower secondary school, created within the ITEMS (Improving Teacher Education in Mathematics and Science) project. It is based on results from astronomy education research about students' difficulties in understanding elementary astronomical observations and models. The sequence consists of a set of computer animations embedded in an e-learning environment aimed at supporting students in learning astronomy ideas that require the use of geometrical concepts and visual-spatial reasoning.

  1. HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition.

    PubMed

    Fan, Jianping; Zhao, Tianyi; Kuang, Zhenzhong; Zheng, Yu; Zhang, Ji; Yu, Jun; Peng, Jinye

    2017-02-09

    In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to accomplish the coarse-to-fine tasks for hierarchical visual recognition more effectively. A visual tree is then learned by assigning the visually-similar atomic object classes with similar learning complexities into the same group, which can provide a good environment for determining the interrelated learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for distinguishing the visually-similar atomic object classes effectively. Our HD-MTL algorithm can integrate two discriminative regularization terms to control the inter-level error propagation effectively, and it provides an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and a more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to new training images and new object classes. Our experimental results demonstrate that the HD-MTL algorithm achieves very competitive accuracy rates for large-scale visual recognition.

  2. Stereoscopic visualization and haptic technology used to create a virtual environment for remote surgery - biomed 2011.

    PubMed

    Bornhoft, J M; Strabala, K W; Wortman, T D; Lehman, A C; Oleynikov, D; Farritor, S M

    2011-01-01

    The objective of this research is to study the effectiveness of using a stereoscopic visualization system for performing remote surgery. The use of stereoscopic vision has become common with the advent of the da Vinci® system (Intuitive, Sunnyvale, CA). This system creates a virtual environment that consists of a 3-D display for visual feedback and haptic tactile feedback, together providing an intuitive environment for remote surgical applications. This study will use simple in vivo robotic surgical devices and compare the performance of surgeons using the stereoscopic interfacing system to the performance of surgeons using conventional two-dimensional monitors. The stereoscopic viewing system consists of two cameras, two monitors, and four mirrors. The cameras are mounted to a multi-functional miniature in vivo robot and mimic the depth perception of the actual human eyes. This is done by placing the cameras at a calculated angle and distance apart. Live video streams from the left and right cameras are displayed on the left and right monitors, respectively. A system of angled mirrors allows the left and right eyes to see the video stream from the left and right monitor, respectively, creating the illusion of depth. The haptic interface consists of two PHANTOM Omni® (SensAble, Woburn, MA) controllers. These controllers measure the position and orientation of a pen-like end effector with three degrees of freedom. As the surgeon uses this interface, they see a 3-D image and feel force feedback for collisions and workspace limits. The stereoscopic viewing system has been used in several surgical training tests and shows a potential improvement in depth perception and 3-D vision. The haptic system accurately gives force feedback that aids in surgery. Both have been used in non-survival animal surgeries, and have successfully been used in suturing and gallbladder removal. Bench top experiments using the interfacing system have also been conducted.
A group of participants completed two different surgical training tasks using both a two-dimensional visual system and the stereoscopic visual system. Results suggest that the stereoscopic visual system decreased the amount of time taken to complete the tasks. All participants also reported that the stereoscopic system was easier to use than the two-dimensional system. Haptic controllers combined with stereoscopic vision provide a more intuitive virtual environment. This system provides the surgeon with 3-D vision, depth perception, and the ability to receive force feedback through the haptic controller while performing surgery. These capabilities potentially enable the performance of more complex surgeries with a higher level of precision.
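    The depth cue such a two-camera rig provides follows from standard stereo triangulation: the two cameras see the same point at slightly different image positions (the disparity), and depth is inversely proportional to that disparity. A minimal sketch of the standard relation (the numeric values in the example are illustrative, not the robot's actual camera geometry):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point seen by a rectified stereo pair: Z = f * B / d.

    focal_px: focal length in pixels; baseline_m: camera separation in metres;
    disparity_px: horizontal shift of the point between the two images.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

    With an 800 px focal length and a 10 mm baseline, a 20 px disparity corresponds to a depth of 0.4 m; halving the disparity doubles the estimated depth, which is why nearby structures produce the strongest stereo effect.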

  3. Are children with low vision adapted to the visual environment in classrooms of mainstream schools?

    PubMed

    Negiloni, Kalpa; Ramani, Krishna Kumar; Jeevitha, R; Kalva, Jayashree; Sudhir, Rachapalle Reddi

    2018-02-01

    The study aimed to evaluate the classroom environment of children with low vision and provide recommendations to reduce visual stress, with a focus on mainstream schooling. The medical records of 110 children (5-17 years) seen in the low vision clinic during a 1-year period (2015) at a tertiary care center in south India were extracted. The visual function levels of the children were compared to the details of their classroom environment. The study evaluated and recommended the chalkboard visual task size and viewing distance required for children with mild, moderate, and severe visual impairment (VI). The major causes of low vision based on the site of abnormality and etiology were retinal (80%) and hereditary (67%) conditions, respectively, in children with mild (n = 18), moderate (n = 72), and severe (n = 20) VI. Many of the children (72%) had difficulty viewing the chalkboard; common strategies used for better visibility included copying from friends (47%) and moving closer to the chalkboard (42%). To view the chalkboard with reduced visual stress, a child with mild VI can be seated at a maximum distance of 4.3 m from the chalkboard, with the minimum size of the visual task (height of lowercase letter writing on the chalkboard) recommended to be 3 cm. For the 3/60-6/60 range, with a visual task size of 4 cm, the maximum viewing distance is recommended to be 85 cm to 1.7 m. Simple modifications of the visual task size and seating arrangements can give children with low vision better visibility of the chalkboard and reduced visual stress, helping them manage in mainstream schools.
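    The recommended letter heights and viewing distances above are linked by the visual angle the chalkboard writing subtends at the child's eye. A minimal sketch of that geometric relation (the function is illustrative; the study's exact acuity-reserve calculations are not reproduced here):

```python
import math

def visual_angle_arcmin(height_m, distance_m):
    """Visual angle (arc minutes) subtended by an object of height_m
    viewed from distance_m, using the exact half-angle formula."""
    angle_rad = 2.0 * math.atan(height_m / (2.0 * distance_m))
    return math.degrees(angle_rad) * 60.0
```

    For instance, a 3 cm letter viewed from 4.3 m subtends about 24 arc minutes; seating a child closer increases the subtended angle and hence legibility, which is the geometric basis for the seating recommendations.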

  4. Functional vision in children with perinatal brain damage.

    PubMed

    Alimović, Sonja; Jurić, Nikolina; Bošnjak, Vlatka Mejaški

    2014-09-01

    Many authors have discussed the effects of visual stimulations on visual functions, but there is no research about the effects on using vision in everyday activities (i.e. functional vision). Children with perinatal brain damage can develop cerebral visual impairment with preserved visual functions (e.g. visual acuity, contrast sensitivity) but poor functional vision. Our aim was to discuss the importance of assessing and stimulating functional vision in children with perinatal brain damage. We assessed visual functions (grating visual acuity, contrast sensitivity) and functional vision (the ability of maintaining visual attention and using vision in communication) in 99 children with perinatal brain damage and visual impairment. All children were assessed before and after the visual stimulation program. Our first assessment results showed that children with perinatal brain damage had significantly more problems in functional vision than in basic visual functions. During the visual stimulation program both variables of functional vision and contrast sensitivity improved significantly, while grating acuity improved only in 2.7% of children. We also found that improvement of visual attention significantly correlated to improvement on all other functions describing vision. Therefore, functional vision assessment, especially assessment of visual attention is indispensable in early monitoring of child with perinatal brain damage.

  5. Using virtual reality technology for aircraft visual inspection training: presence and comparison studies.

    PubMed

    Vora, Jeenal; Nair, Santosh; Gramopadhye, Anand K; Duchowski, Andrew T; Melloy, Brian J; Kanki, Barbara

    2002-11-01

    The aircraft maintenance industry is a complex system consisting of several interrelated human and machine components. Recognizing this, the Federal Aviation Administration (FAA) has pursued human factors related research. In the maintenance arena the research has focused on the aircraft inspection process and the aircraft inspector. Training has been identified as the primary intervention strategy to improve the quality and reliability of aircraft inspection. If training is to be successful, it is critical that we provide aircraft inspectors with appropriate training tools and environments. In response to this need, the paper outlines the development of a virtual reality (VR) system for aircraft inspection training. VR has generated much excitement but little formal proof that it is useful. However, since VR interfaces are difficult and expensive to build, the computer graphics community needs to be able to predict which applications will benefit from VR. To address this important issue, this research measured the degree of immersion and presence felt by subjects in a virtual environment simulator. Specifically, it conducted two controlled studies using the VR system developed for the visual inspection task of an aft-cargo bay at the VR Lab of Clemson University. Beyond assembling the visual inspection virtual environment, a significant goal of this project was to explore subjective presence as it affects task performance. The results of this study indicated that the system scored high on the issues related to the degree of presence felt by the subjects. As a next logical step, this study then compared VR to an existing PC-based aircraft inspection simulator. The results showed that the VR system was better and preferred over the PC-based training tool.

  6. Not Just a Game … When We Play Together, We Learn Together: Interactive Virtual Environments and Gaming Engines for Geospatial Visualization

    NASA Astrophysics Data System (ADS)

    Shipman, J. S.; Anderson, J. W.

    2017-12-01

    An ideal tool for ecologists and land managers to investigate the impacts of both projected environmental changes and policy alternatives is the creation of immersive, interactive, virtual landscapes. As a new frontier in visualizing and understanding geospatial data, virtual landscapes require a new toolbox for data visualization that includes traditional GIS tools and less common tools such as the Unity3d game engine. Game engines provide capabilities not only to explore data but to build and interact with dynamic models collaboratively. These virtual worlds can be used to display and illustrate data in ways that are often more understandable and plausible to both stakeholders and policy makers than is achieved using traditional maps. Within this context we will present funded research that has been developed utilizing virtual landscapes for geographic visualization and decision support among varied stakeholders. We will highlight the challenges and lessons learned when developing interactive virtual environments that require large multidisciplinary team efforts with varied competencies. The results will emphasize the importance of visualization and interactive virtual environments and the link with emerging research disciplines within Visual Analytics.

  7. XML-Based Visual Specification of Multidisciplinary Applications

    NASA Technical Reports Server (NTRS)

    Al-Theneyan, Ahmed; Jakatdar, Amol; Mehrotra, Piyush; Zubair, Mohammad

    2001-01-01

    The advancements in the Internet and Web technologies have fueled a growing interest in developing a web-based distributed computing environment. We have designed and developed Arcade, a web-based environment for designing, executing, monitoring, and controlling distributed heterogeneous applications, which is easy to use and access, portable, and provides support through all phases of the application development and execution. A major focus of the environment is the specification of heterogeneous, multidisciplinary applications. In this paper we focus on the visual and script-based specification interface of Arcade. The web/browser-based visual interface is designed to be intuitive to use and can also be used for visual monitoring during execution. The script specification is based on XML to: (1) make it portable across different frameworks, and (2) make the development of our tools easier by using the existing freely available XML parsers and editors. There is a one-to-one correspondence between the visual and script-based interfaces allowing users to go back and forth between the two. To support this we have developed translators that translate a script-based specification to a visual-based specification, and vice-versa. These translators are integrated with our tools and are transparent to users.

  8. Visual improvement for bad handwriting based on Monte-Carlo method

    NASA Astrophysics Data System (ADS)

    Shi, Cao; Xiao, Jianguo; Xu, Canhui; Jia, Wenhua

    2014-03-01

    A visual improvement algorithm based on Monte Carlo simulation is proposed in this paper to enhance the visual effect of bad handwriting. The whole improvement process uses a well designed typeface to optimize the bad handwriting image. In this process, a series of linear operators for image transformation are defined to transform the typeface image to approach the handwriting image, and the specific parameters of the linear operators are estimated by the Monte Carlo method. Visual improvement experiments illustrate that the proposed algorithm can effectively enhance the visual effect of a handwriting image while maintaining the original handwriting features, such as tilt, stroke order and drawing direction. The proposed visual improvement algorithm has great potential to be applied on tablet computers and the mobile Internet to improve the user experience of handwriting.
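    The Monte Carlo estimation step described in this record can be sketched as a random search over the parameters of a 2x2 linear operator, keeping whichever candidate best maps typeface points onto handwriting points. The parameter ranges, sample count, and squared-error objective below are assumptions for illustration; the paper's actual operators and objective may differ:

```python
import random

def apply_linear(params, pts):
    """Apply the 2x2 linear operator [[a, b], [c, d]] to (x, y) points."""
    a, b, c, d = params
    return [(a * x + b * y, c * x + d * y) for x, y in pts]

def mse(p, q):
    """Mean squared distance between two equal-length point lists."""
    return sum((px - qx) ** 2 + (py - qy) ** 2
               for (px, py), (qx, qy) in zip(p, q)) / len(p)

def monte_carlo_fit(src, dst, n_samples=5000, seed=0):
    """Randomly sample operator parameters; keep the best-fitting candidate."""
    rng = random.Random(seed)
    best = (1.0, 0.0, 0.0, 1.0)  # start from the identity operator
    best_err = mse(apply_linear(best, src), dst)
    for _ in range(n_samples):
        cand = tuple(rng.uniform(-2.0, 2.0) for _ in range(4))
        err = mse(apply_linear(cand, src), dst)
        if err < best_err:
            best, best_err = cand, err
    return best, best_err
```

    Fitting against points produced by a known scale-and-shear operator drives the error well below that of the identity transform, illustrating how random sampling alone can estimate the operator parameters.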

  9. Computer-Based Tools for Inquiry in Undergraduate Classrooms: Results from the VGEE

    NASA Astrophysics Data System (ADS)

    Pandya, R. E.; Bramer, D. J.; Elliott, D.; Hay, K. E.; Mallaiahgari, L.; Marlino, M. R.; Middleton, D.; Ramamurhty, M. K.; Scheitlin, T.; Weingroff, M.; Wilhelmson, R.; Yoder, J.

    2002-05-01

    The Visual Geophysical Exploration Environment (VGEE) is a suite of computer-based tools designed to help learners connect observable, large-scale geophysical phenomena to underlying physical principles. Technologically, this connection is mediated by Java-based interactive tools: a multi-dimensional visualization environment, authentic scientific data-sets, concept models that illustrate fundamental physical principles, and an interactive web-based work management system for archiving and evaluating learners' progress. Our preliminary investigations showed, however, that the tools alone are not sufficient to empower undergraduate learners; learners have trouble organizing inquiry and using the visualization tools effectively. To address these issues, the VGEE includes an inquiry strategy and scaffolding activities similar to strategies used successfully in K-12 classrooms. The strategy is organized around the steps: identify, relate, explain, and integrate. In the first step, students construct visualizations from data to try to identify salient features of a particular phenomenon. They compare their previous conceptions of a phenomenon to the data to examine their current knowledge and motivate investigation. Next, students use the multivariable functionality of the visualization environment to relate the different features they identified. Explain moves the learner temporarily outside the visualization to the concept models, where they explore fundamental physical principles. Finally, in integrate, learners use these fundamental principles within the visualization environment by literally placing the concept model within the visualization environment as a probe and watching it respond to larger-scale patterns. This capability, unique to the VGEE, addresses the disconnect that novice learners often experience between fundamental physics and observable phenomena.
It also allows learners the opportunity to reflect on and refine their knowledge as well as anchor it within a context for long-term retention. We are implementing the VGEE in one of two otherwise identical entry-level atmospheric courses. In addition to comparing student learning and attitudes in the two courses, we are analyzing student participation with the VGEE to evaluate the effectiveness and usability of the VGEE. In particular, we seek to identify the scaffolding students need to construct physically meaningful multi-dimensional visualizations, and evaluate the effectiveness of the visualization-embedded concept-models in addressing inert knowledge. We will also examine the utility of the inquiry strategy in developing content knowledge, process-of-science knowledge, and discipline-specific investigatory skills. Our presentation will include video examples of student use to illustrate our findings.

  10. Examining sensory ability, feature matching and assessment-based adaptation for a brain-computer interface using the steady-state visually evoked potential.

    PubMed

    Brumberg, Jonathan S; Nguyen, Anh; Pitt, Kevin M; Lorenz, Sean D

    2018-01-31

    We investigated how overt visual attention and oculomotor control influence successful use of a visual feedback brain-computer interface (BCI) for accessing augmentative and alternative communication (AAC) devices in a heterogeneous population of individuals with profound neuromotor impairments. BCIs are often tested within a single patient population, limiting generalization of results. This study focuses on examining individual sensory abilities with an eye toward possible interface adaptations to improve device performance. Five individuals with a range of neuromotor disorders participated in a four-choice BCI control task involving the steady state visually evoked potential. The BCI graphical interface was designed to simulate a commercial AAC device to examine whether an integrated device could be used successfully by individuals with neuromotor impairment. All participants were able to interact with the BCI, and the highest performance was found for participants able to employ an overt visual attention strategy. For participants with visual deficits due to impaired oculomotor control, effective performance increased after accounting for mismatches between the graphical layout and participant visual capabilities. As BCIs are translated from research environments to clinical applications, the assessment of BCI-related skills will help facilitate proper device selection and provide individuals who use BCI the greatest likelihood of immediate and long term communicative success. Overall, our results indicate that adaptations can be an effective strategy to reduce barriers and increase access to BCI technology. These efforts should be directed by comprehensive assessments for matching individuals to the most appropriate device to support their complex communication needs.
Implications for Rehabilitation: Brain-computer interfaces using the steady-state visually evoked potential can be integrated with an augmentative and alternative communication device to provide access to language and literacy for individuals with neuromotor impairment. Comprehensive assessments are needed to fully understand the sensory, motor, and cognitive abilities of individuals who may use brain-computer interfaces, for proper feature matching and selection of the most appropriate device, including optimization of device layouts and control paradigms. Oculomotor impairments negatively impact brain-computer interfaces that use the steady-state visually evoked potential, but modifications that place interface stimuli and communication items in the intact visual field can improve successful outcomes.
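The record above concerns a steady-state visually evoked potential (SSVEP) interface. As a hedged illustration only (not the authors' pipeline), a four-choice SSVEP selection is commonly decided by comparing EEG spectral power at each target's flicker frequency; the signal, sampling rate, and frequencies below are invented for the sketch.

```python
import numpy as np

def ssvep_classify(eeg, fs, target_freqs):
    """Pick the flicker frequency with the highest spectral power in the EEG."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)     # frequency bins (Hz)
    # Power at the bin nearest each candidate flicker frequency
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in target_freqs]
    return int(np.argmax(powers))                     # index of the chosen target

# Synthetic example: a 10 Hz oscillation buried in noise
fs = 250
t = np.arange(0, 4, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
print(ssvep_classify(eeg, fs, [8, 10, 12, 15]))  # -> 1 (the 10 Hz target)
```

Real systems typically add harmonics and canonical correlation analysis, but the frequency-power comparison above is the core idea.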

  11. Some lessons learned in three years with ADS-33C. [rotorcraft handling qualities specification]

    NASA Technical Reports Server (NTRS)

    Key, David L.; Blanken, Chris L.; Hoh, Roger H.

    1993-01-01

    Three years of using the U.S. Army's rotorcraft handling qualities specification, Aeronautical Design Standard-33, have shown it to be surprisingly robust. It appears to provide an excellent basis for design and for assessment; however, as the subtleties became better understood, several areas needing refinement became apparent. Three responses to these needs are documented in this paper: (1) the yaw-axis attitude quickness for hover target acquisition and tracking can be relaxed slightly; (2) understanding and application of criteria for degraded visual environments needed elaboration, and this, along with guidelines for testing to obtain visual cue ratings, has been documented; (3) the flight test maneuvers were an innovation that turned out to be very valuable, and their extensive use has made it necessary to tighten definitions and testing guidance. This was accomplished for a good visual environment and is underway for degraded visual environments.

  12. VIPER: Virtual Intelligent Planetary Exploration Rover

    NASA Technical Reports Server (NTRS)

    Edwards, Laurence; Flueckiger, Lorenzo; Nguyen, Laurent; Washington, Richard

    2001-01-01

    Simulation and visualization of rover behavior are critical capabilities for scientists and rover operators to construct, test, and validate plans for commanding a remote rover. The VIPER system links these capabilities, using a high-fidelity virtual-reality (VR) environment, a kinematically accurate simulator, and a flexible plan executive to allow users to simulate and visualize possible execution outcomes of a plan under development. This work is part of a larger vision of a science-centered rover control environment, where a scientist may inspect and explore the environment via VR tools, specify science goals, and visualize the expected and actual behavior of the remote rover. The VIPER system is constructed from three generic systems, linked together via a minimal amount of customization into the integrated system. The complete system points out the power of combining plan execution, simulation, and visualization for envisioning rover behavior; it also demonstrates the utility of developing generic technologies, which can be combined in novel and useful ways.

  13. SoundView: an auditory guidance system based on environment understanding for the visually impaired people.

    PubMed

    Nie, Min; Ren, Jie; Li, Zhengjun; Niu, Jinhai; Qiu, Yihong; Zhu, Yisheng; Tong, Shanbao

    2009-01-01

    Without visual information, blind people face various hardships with shopping, reading, and finding objects. We therefore developed a portable auditory guidance system, called SoundView, for visually impaired people. This prototype system consists of a mini-CCD camera, a digital signal processing unit, and an earphone, working with built-in customizable auditory coding algorithms. Employing environment understanding techniques, SoundView processes the images from the camera and detects objects tagged with barcodes. The recognized objects in the environment are then encoded into stereo speech signals delivered to the user through an earphone. The user is able to recognize the type, motion state, and location of objects of interest with the help of SoundView. Compared with other visual assistant techniques, SoundView is object-oriented and has the advantages of low cost, small size, light weight, low power consumption, and easy customization.

  14. Acoustic facilitation of object movement detection during self-motion

    PubMed Central

    Calabro, F. J.; Soto-Faraco, S.; Vaina, L. M.

    2011-01-01

    In humans, as well as most animal species, perception of object motion is critical to successful interaction with the surrounding environment. Yet, as the observer also moves, the retinal projections of the various motion components add to each other and extracting accurate object motion becomes computationally challenging. Recent psychophysical studies have demonstrated that observers use a flow-parsing mechanism to estimate and subtract self-motion from the optic flow field. We investigated whether concurrent acoustic cues for motion can facilitate visual flow parsing, thereby enhancing the detection of moving objects during simulated self-motion. Participants identified an object (the target) that moved either forward or backward within a visual scene containing nine identical textured objects simulating forward observer translation. We found that spatially co-localized, directionally congruent, moving auditory stimuli enhanced object motion detection. Interestingly, subjects who performed poorly on the visual-only task benefited more from the addition of moving auditory stimuli. When auditory stimuli were not co-localized to the visual target, improvements in detection rates were weak. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow can operate on multisensory object representations. PMID:21307050
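The flow-parsing mechanism described in the record above subtracts an estimate of self-motion from the retinal optic flow field to expose independent object motion. As a hedged toy sketch (the point set, expansion rate, and helper names are invented, and real flow parsing is a neural estimate, not an exact subtraction), forward translation produces radial expansion from a focus of expansion, and removing that component isolates the moving object:

```python
import numpy as np

def radial_flow(points, focus, rate):
    """Optic flow from forward translation: expansion away from the focus of expansion."""
    return rate * (points - focus)

def parse_object_motion(retinal_flow, points, focus, rate):
    """Subtract the estimated self-motion flow field to expose object motion."""
    return retinal_flow - radial_flow(points, focus, rate)

# Ten scene points; point 3 also moves leftward at 2 units/s of its own accord
points = np.random.default_rng(1).uniform(-1, 1, size=(10, 2))
focus = np.zeros(2)
rate = 0.5
object_motion = np.zeros((10, 2))
object_motion[3] = [-2.0, 0.0]
retinal = radial_flow(points, focus, rate) + object_motion
recovered = parse_object_motion(retinal, points, focus, rate)
print(np.argmax(np.linalg.norm(recovered, axis=1)))  # -> 3 (the moving object)
```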

  15. Visualization in aerospace research with a large wall display system

    NASA Astrophysics Data System (ADS)

    Matsuo, Yuichi

    2002-05-01

    The National Aerospace Laboratory of Japan has built a large-scale visualization system with a large wall-type display. The system has been operational since April 2001 and comprises a 4.6x1.5-meter (15x5-foot) rear-projection screen with three BARCO 812 high-resolution CRT projectors. We adopted the 3-gun CRT projectors for their support for stereoscopic viewing, ease of color/luminosity matching, and accuracy of edge blending. The system is driven by a new SGI Onyx 3400 server of distributed shared-memory architecture with 32 CPUs, 64 GB of memory, a 1.5 TB FC RAID disk, and 6 IR3 graphics pipelines. Software is another important issue in making full use of the system. We have introduced some applications available in a multi-projector environment, such as AVS/MPE, EnSight Gold, and COVISE, and have been developing software tools that create volumetric images using SGI graphics libraries. The system is mainly used for visualization of computational fluid dynamics (CFD) simulations in aerospace research. Visualized CFD results help us design improved configurations of aerospace vehicles and analyze their aerodynamic performance. These days we also use the system for various collaborations among researchers.

  16. GWVis: A Tool for Comparative Ground-Water Data Visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Best, Daniel M.; Lewis, Robert R.

    2010-11-01

    The Ground-Water Visualization application (GWVis) presents ground-water data visually in order to educate the public on ground-water issues. It is also intended for presentations to government and other funding agencies. Current three-dimensional models of ground water are overly complex, while two-dimensional representations (i.e., on paper) are neither comprehensive nor engaging. At present, GWVis operates on water head elevation data over a given time span, together with a matching (fixed) underlying geography. Two elevation scenarios are compared with each other, typically a control data set (actual field data) and a simulation. The scenario comparison can be animated over the time span provided. We developed GWVis using the Python programming language, associated libraries, and PyOpenGL extension packages to improve performance and control of attributes of the model (such as color, positioning, scale, and interpolation). GWVis bridges the gap between two-dimensional and dynamic three-dimensional research visualizations by providing an intuitive, interactive design that allows participants to view the model from different perspectives and to infer information about scenarios. By incorporating scientific data in an environment that can be easily understood, GWVis allows the information to be presented to a large audience base.
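At the core of the comparison GWVis describes is a per-cell difference between two head-elevation grids (control vs. simulation) at each time step. The sketch below is a hedged illustration of that data step only, with invented grid sizes and values, not GWVis code:

```python
import numpy as np

def head_difference(control, simulation):
    """Per-cell difference between two water-head elevation grids (control - simulation)."""
    return control - simulation

# Two illustrative head-elevation data sets: 3 time steps of a 4x4 grid (meters)
rng = np.random.default_rng(2)
control = 100 + rng.normal(0.0, 0.5, size=(3, 4, 4))      # "field data"
simulation = control + rng.normal(0.0, 0.1, size=(3, 4, 4))  # "model output"
diff = head_difference(control, simulation)
# Worst mismatch per time step, e.g. to scale the color map of an animation frame
print(np.round(np.abs(diff).max(axis=(1, 2)), 3))
```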

  17. Risk analysis of urban gas pipeline network based on improved bow-tie model

    NASA Astrophysics Data System (ADS)

    Hao, M. J.; You, Q. J.; Yue, Z.

    2017-11-01

    Gas pipeline networks are a major hazard source in urban areas. In the event of an accident, there could be grave consequences. In order to understand more clearly the causes and consequences of gas pipeline network accidents, and to develop prevention and mitigation measures, the authors propose applying an improved bow-tie model to analyze risks of urban gas pipeline networks. The improved bow-tie model analyzes accident causes from four aspects: human, materials, environment, and management; it also analyzes the consequences from four aspects: casualties, property loss, environment, and society. It then quantifies the causes and consequences. Risk identification, risk analysis, risk assessment, risk control, and risk management are clearly shown in the model figures. The model can then suggest prevention and mitigation measures to help reduce the accident rate of gas pipeline networks. The results show that the whole process of an accident can be visually investigated using the bow-tie model, which can also provide reasons for, and predict the consequences of, an unfortunate event. It is of great significance for analyzing leakage failures of gas pipeline networks.
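The record does not give its quantification scheme, so the following is only a hedged sketch of one plausible way a bow-tie model joins cause likelihoods (left side) and consequence severities (right side) through a top event; every number and weighting choice below is invented for illustration:

```python
# Illustrative bow-tie quantification: cause likelihoods on the left side of the
# "bow", consequence severities on the right, joined by the top event (a leak).
causes = {          # probability each factor triggers a leak (invented values)
    "human": 0.02, "materials": 0.04, "environment": 0.01, "management": 0.03,
}
consequences = {    # severity score in [0, 1] if a leak occurs (invented values)
    "casualties": 0.9, "property_loss": 0.6, "environment": 0.5, "society": 0.4,
}

# Probability of the top event, assuming the causes act independently
p_leak = 1.0
for p in causes.values():
    p_leak *= (1.0 - p)
p_leak = 1.0 - p_leak

# A simple risk index: top-event probability times mean consequence severity
risk = p_leak * (sum(consequences.values()) / len(consequences))
print(round(p_leak, 4), round(risk, 4))  # -> 0.0965 0.0579
```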

  18. 3D visualization of movements can amplify motor cortex activation during subsequent motor imagery

    PubMed Central

    Sollfrank, Teresa; Hart, Daniel; Goodsell, Rachel; Foster, Jonathan; Tan, Tele

    2015-01-01

    A repetitive movement practice by motor imagery (MI) can influence motor cortical excitability in the electroencephalogram (EEG). This study investigated whether a realistic 3D visualization of upper and lower limb movements can amplify motor-related potentials during subsequent MI. We hypothesized that a richer sensory visualization might be more effective during instrumental conditioning, resulting in a more pronounced event-related desynchronization (ERD) of the upper alpha band (10–12 Hz) over the sensorimotor cortices, thereby potentially improving MI-based brain-computer interface (BCI) protocols for motor rehabilitation. The results show a strong increase of the characteristic patterns of ERD of the upper alpha band components for left and right limb MI over the sensorimotor areas in both visualization conditions. Overall, significant differences were observed as a function of visualization modality (VM; 2D vs. 3D). The largest upper alpha band power decrease was obtained during MI after a 3-dimensional visualization. In total, in 12 out of 20 tasks the end-users of the 3D visualization group showed an enhanced upper alpha ERD relative to the 2D VM group, with statistical significance in nine tasks. With a realistic visualization of the limb movements, we tried to increase motor cortex activation during subsequent MI. The feedback and the feedback environment should be inherently motivating and relevant for the learner and should have an appeal of novelty, real-world relevance, or aesthetic value (Ryan and Deci, 2000; Merrill, 2007). Realistic visual feedback, consistent with the participant’s MI, might be helpful for accomplishing successful MI, and the use of such feedback may assist in making BCI a more natural interface for MI-based BCI rehabilitation. PMID:26347642

  19. 3D visualization of movements can amplify motor cortex activation during subsequent motor imagery.

    PubMed

    Sollfrank, Teresa; Hart, Daniel; Goodsell, Rachel; Foster, Jonathan; Tan, Tele

    2015-01-01

    A repetitive movement practice by motor imagery (MI) can influence motor cortical excitability in the electroencephalogram (EEG). This study investigated whether a realistic 3D visualization of upper and lower limb movements can amplify motor-related potentials during subsequent MI. We hypothesized that a richer sensory visualization might be more effective during instrumental conditioning, resulting in a more pronounced event-related desynchronization (ERD) of the upper alpha band (10-12 Hz) over the sensorimotor cortices, thereby potentially improving MI-based brain-computer interface (BCI) protocols for motor rehabilitation. The results show a strong increase of the characteristic patterns of ERD of the upper alpha band components for left and right limb MI over the sensorimotor areas in both visualization conditions. Overall, significant differences were observed as a function of visualization modality (VM; 2D vs. 3D). The largest upper alpha band power decrease was obtained during MI after a 3-dimensional visualization. In total, in 12 out of 20 tasks the end-users of the 3D visualization group showed an enhanced upper alpha ERD relative to the 2D VM group, with statistical significance in nine tasks. With a realistic visualization of the limb movements, we tried to increase motor cortex activation during subsequent MI. The feedback and the feedback environment should be inherently motivating and relevant for the learner and should have an appeal of novelty, real-world relevance, or aesthetic value (Ryan and Deci, 2000; Merrill, 2007). Realistic visual feedback, consistent with the participant's MI, might be helpful for accomplishing successful MI, and the use of such feedback may assist in making BCI a more natural interface for MI-based BCI rehabilitation.
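The ERD measure used in records 18 and 19 is conventionally computed as the percentage change of band power during the task relative to a reference (baseline) interval, with negative values indicating desynchronization. The sketch below illustrates that convention on synthetic signals; the sampling rate, window, and amplitudes are invented, not the study's data:

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Mean spectral power of x within the [lo, hi] Hz band."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return spectrum[band].mean()

def erd_percent(baseline, task, fs, lo=10.0, hi=12.0):
    """ERD% = (task - baseline) / baseline * 100; negative means desynchronization."""
    ref = band_power(baseline, fs, lo, hi)
    act = band_power(task, fs, lo, hi)
    return (act - ref) / ref * 100.0

# Illustrative signals: alpha amplitude halves during imagery (power drops by 75%)
fs = 250
t = np.arange(0, 2, 1.0 / fs)
baseline = np.sin(2 * np.pi * 11 * t)
task = 0.5 * np.sin(2 * np.pi * 11 * t)
print(round(erd_percent(baseline, task, fs)))  # -> -75
```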

  20. Interactive Learning Environment: Web-based Virtual Hydrological Simulation System using Augmented and Immersive Reality

    NASA Astrophysics Data System (ADS)

    Demir, I.

    2014-12-01

    Recent developments in internet technologies make it possible to manage and visualize large data sets on the web. Novel visualization techniques and interactive user interfaces allow users to create realistic environments and interact with data to gain insight from simulations and environmental observations. The hydrological simulation system is a web-based 3D interactive learning environment for teaching hydrological processes and concepts. The simulation system provides a visually striking platform with realistic terrain information and water simulation. Students can create or load predefined scenarios, control environmental parameters, and evaluate environmental mitigation alternatives. The web-based simulation system provides an environment for students to learn about hydrological processes (e.g., flooding and flood damage) and the effects of development and human activity in the floodplain. The system utilizes the latest web technologies and the graphics processing unit (GPU) for water simulation and object collisions on the terrain. Users can access the system in three visualization modes: virtual reality, augmented reality, and immersive reality using a heads-up display. The system provides various scenarios customized to fit the age and education level of various users. This presentation provides an overview of the web-based flood simulation system and demonstrates the capabilities of the system for various visualization and interaction modes.

  1. Color polymorphic lures target different visual channels in prey.

    PubMed

    White, Thomas E; Kemp, Darrell J

    2016-06-01

    Selection for signal efficacy in variable environments may favor color polymorphism, but little is known about this possibility outside of sexual systems. Here we used the color polymorphic orb-web spider Gasteracantha fornicata, whose yellow- or white-banded dorsal signal attracts dipteran prey, to test the hypothesis that morphs may be tuned to optimize either chromatic or achromatic conspicuousness in their visually noisy forest environments. We used data from extensive observations of naturally existing spiders and precise assessments of visual environments to model signal conspicuousness according to dipteran vision. Modeling supported a distinct bias in the chromatic (yellow morph) or achromatic (white morph) contrast presented by spiders at the times when they caught prey, as opposed to all other times at which they may be viewed. Hence, yellow spiders were most successful when their signal produced maximum color contrast against viewing backgrounds, whereas white spiders were most successful when they presented relatively greatest luminance contrast. Further modeling across a hypothetical range of lure variation confirmed that yellow versus white signals should, respectively, enhance chromatic versus achromatic conspicuousness to flies, in G. fornicata's visual environments. These findings suggest that color polymorphism may be adaptively maintained by selection for conspicuousness within different visual channels in receivers. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.
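The study above contrasts chromatic (color) with achromatic (luminance) conspicuousness. Full visual modeling of that kind typically uses a receptor-noise-limited model of dipteran vision, which is beyond a short sketch; as a hedged illustration of just the achromatic side, Weber contrast measures how strongly a lure's luminance departs from its viewing background (all luminance values below are invented):

```python
def weber_contrast(target_luminance, background_luminance):
    """Achromatic (luminance) contrast of a signal against its viewing background."""
    return (target_luminance - background_luminance) / background_luminance

# Illustrative values: a bright white band vs a duller yellow band on dim foliage
print(round(weber_contrast(0.8, 0.2), 2))  # -> 3.0  (white band: high luminance contrast)
print(round(weber_contrast(0.5, 0.2), 2))  # -> 1.5  (yellow band: lower luminance contrast)
```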

  2. A Wheelchair User with Visual and Intellectual Disabilities Managing Simple Orientation Technology for Indoor Travel

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; O'Reilly, Mark F.; Singh, Nirbhay N.; Sigafoos, Jeff; Campodonico, Francesca; Oliva, Doretta

    2009-01-01

    Persons with profound visual impairments and other disabilities, such as neuromotor and intellectual disabilities, may encounter serious orientation and mobility problems even in familiar indoor environments, such as their homes. Teaching these persons to develop maps of their daily environment, using miniature replicas of the areas or some…

  3. Motor Effects from Visually Induced Disorientation in Man.

    ERIC Educational Resources Information Center

    Brecher, M. Herbert; Brecher, Gerhard A.

    The problem of disorientation in a moving optical environment was examined. A pilot can experience egocentric disorientation if the entire visual environment moves relative to his body without a clue as to the objective position of the airplane with respect to the ground. A simple method of measuring disorientation was devised. In this method…

  4. Reconfigurable Image Generator

    NASA Technical Reports Server (NTRS)

    Archdeacon, John L. (Inventor); Iwai, Nelson H. (Inventor); Kato, Kenji H. (Inventor); Sweet, Barbara T. (Inventor)

    2017-01-01

    A RiG may simulate visual conditions of a real world environment, and generate the necessary amount of pixels in a visual simulation at rates up to 120 frames per second. RiG may also include a database generation system capable of producing visual databases suitable to drive the visual fidelity required by the RiG.

  5. Learning about Locomotion Patterns from Visualizations: Effects of Presentation Format and Realism

    ERIC Educational Resources Information Center

    Imhof, Birgit; Scheiter, Katharina; Gerjets, Peter

    2011-01-01

    The rapid development of computer graphics technology has made possible an easy integration of dynamic visualizations into computer-based learning environments. This study examines the relative effectiveness of dynamic visualizations, compared either to sequentially or simultaneously presented static visualizations. Moreover, the degree of realism…

  6. Visual Programming: A Programming Tool for Increasing Mathematics Achievement

    ERIC Educational Resources Information Center

    Swanier, Cheryl A.; Seals, Cheryl D.; Billionniere, Elodie V.

    2009-01-01

    This paper aims to address the need of increasing student achievement in mathematics using a visual programming language such as Scratch. This visual programming language facilitates creating an environment where students in K-12 education can develop mathematical simulations while learning a visual programming language at the same time.…

  7. Evolutionary adaptations: theoretical and practical implications for visual ergonomics.

    PubMed

    Fostervold, Knut Inge; Watten, Reidulf G; Volden, Frode

    2014-01-01

    The literature discussing visual ergonomics often mentions that human vision is adapted to light emitted by the sun. However, the theoretical and practical implications of this viewpoint are seldom discussed or taken into account. This paper discusses some of the main theoretical implications of an evolutionary approach to visual ergonomics. Based on interactional theory and ideas from ecological psychology, an evolutionary stress model is proposed as a theoretical framework for future research in ergonomics and human factors. The model stresses the importance of developing work environments that fit our evolutionary adaptations. In accordance with evolutionary psychology, the environment of evolutionary adaptedness (EEA) and evolutionarily novel environments (EN) are used as key concepts. Using work with visual display units (VDUs) as an example, the paper discusses how this knowledge can be utilized in an ergonomic analysis of risk factors in the work environment. The paper emphasises the importance of incorporating evolutionary theory into the field of ergonomics and encourages scientific practices that further our understanding of phenomena beyond the borders of traditional proximal explanations.

  8. Discovering new methods of data fusion, visualization, and analysis in 3D immersive environments for hyperspectral and laser altimetry data

    NASA Astrophysics Data System (ADS)

    Moore, C. A.; Gertman, V.; Olsoy, P.; Mitchell, J.; Glenn, N. F.; Joshi, A.; Norpchen, D.; Shrestha, R.; Pernice, M.; Spaete, L.; Grover, S.; Whiting, E.; Lee, R.

    2011-12-01

    Immersive virtual reality environments such as the IQ-Station or CAVE (Cave Automated Virtual Environment) offer new and exciting ways to visualize and explore scientific data and are powerful research and educational tools. Combining remote sensing data from a range of sensor platforms in immersive 3D environments can enhance the spectral, textural, spatial, and temporal attributes of the data, enabling scientists to interact with and analyze the data in ways never before possible. Visualization and analysis of large remote sensing datasets in immersive environments require software customization for integrating LiDAR point cloud data with hyperspectral raster imagery, the generation of quantitative tools for multidimensional analysis, and the development of methods to capture 3D visualizations for stereographic playback. This study uses hyperspectral and LiDAR data acquired over the China Hat geologic study area near Soda Springs, Idaho, USA. The data are fused into a 3D image cube for interactive data exploration, and several methods of recording and playback are investigated, including: 1) creating and implementing a Virtual Reality User Interface (VRUI) patch configuration file to enable recording and playback of VRUI interactive sessions within the CAVE, and 2) using the LiDAR and hyperspectral remote sensing data and GIS data to create an ArcScene 3D animated flyover, where left- and right-eye visuals are captured from two independent monitors for playback in a stereoscopic player. These visualizations can be used as outreach tools to demonstrate how integrated data and geotechnology techniques can help scientists see, explore, and more adequately comprehend scientific phenomena, both real and abstract.

  9. Computer-based multisensory learning in children with developmental dyslexia.

    PubMed

    Kast, Monika; Meyer, Martin; Vögeli, Christian; Gross, Markus; Jäncke, Lutz

    2007-01-01

    Several attempts have been made to remediate developmental dyslexia using various training environments. Based on the well-known retrieval structure model, the memory strength of phonemes and graphemes should be strengthened by visual and auditory associations between graphemes and phonemes. Using specifically designed training software, we examined whether establishing a multitude of visuo-auditory associations might help to mitigate writing errors in children with developmental dyslexia. Forty-three children with developmental dyslexia and 37 carefully matched normal-reading children performed computer-based writing training (15-20 minutes, 4 days a week) for three months, with the aim of recoding a sequential textual input string into a multi-sensory representation comprising visual and auditory codes (including musical tones). The study included four matched groups: a group of children with developmental dyslexia (n=20) and a control group (n=18) practiced with the training software in the first period (3 months, 15-20 minutes, 4 days a week), while a second group of children with developmental dyslexia (n=23) (waiting group) and a second control group (n=19) received no training during the first period. In the second period, the children with developmental dyslexia and controls who did not receive training during the first period took part in the training. Children with developmental dyslexia who did not perform computer-based training during the first period hardly improved their writing skills (post-pre improvement of 0-9%), whereas the dyslexic children receiving training strongly improved their writing skills (post-pre improvement of 19-35%). The group who did the training during the second period also showed improved writing skills (post-pre improvement of 27-35%).
Interestingly, we noticed a strong transfer from trained to non-trained words, in that the children who underwent the training were also better able to correctly write words that were not part of the training software. In addition, even non-impaired readers and writers (controls) benefited from this training. Three months of visual-auditory multimedia training strongly improved writing skills in children with developmental dyslexia and non-dyslexic children. Thus, according to the retrieval structure model, multi-sensory training using visual and auditory cues enhances writing performance in children with developmental dyslexia and non-dyslexic children.

  10. The visual-landscape analysis during the integration of high-rise buildings within the historic urban environment

    NASA Astrophysics Data System (ADS)

    Akristiniy, Vera A.; Dikova, Elena A.

    2018-03-01

    The article is devoted to one type of urban planning study: the visual-landscape analysis performed when integrating high-rise buildings within the historic urban environment, for the purposes of providing pre-design and design studies that preserve the historical urban environment and realize the reconstructional resource of the area. The article forms and systematizes the stages and methods of conducting the visual-landscape analysis, taking into account the influence of high-rise buildings on objects of cultural heritage and valuable historical buildings of the city. Practical application of the visual-landscape analysis provides an opportunity to assess the influence of a hypothetical location of high-rise buildings on the perception of the historically developed environment and on optimal building parameters. The contents of the main stages of the visual-landscape analysis and their key aspects are presented, concerning the construction of predicted zones of visibility of significant, historically valuable urban development objects and hypothetically planned high-rise buildings. The obtained data are oriented toward the successive development of the planning and typological structure of the city territory and the preservation of the compositional influence of valuable fragments of the historical environment within the structure of the urban landscape. On this basis, an information database is formed to determine the permissible urban development parameters of high-rise buildings for preserving the compositional integrity of the urban area.

  11. Teaching Technology Education to Visually Impaired Students.

    ERIC Educational Resources Information Center

    Mann, Rene

    1987-01-01

    Discusses various types of visual impairments and how the learning environment can be adapted to limit their effect. Presents suggestions for adapting industrial arts laboratory activities to maintain safety standards while allowing the visually impaired to participate. (CH)

  12. Simulation environment and graphical visualization environment: a COPD use-case.

    PubMed

    Huertas-Migueláñez, Mercedes; Mora, Daniel; Cano, Isaac; Maier, Dieter; Gomez-Cabrero, David; Lluch-Ariet, Magí; Miralles, Felip

    2014-11-28

    Today, many different tools have been developed to execute and visualize physiological models that represent human physiology. Most of these tools run models written in very specific programming languages, which in turn simplifies the communication among models. Nevertheless, not all of these tools are able to run models written in different programming languages, and interoperability between such models remains an unresolved issue. In this paper we present a simulation environment that allows, first, the execution of models developed in different programming languages and, second, the communication of parameters to interconnect these models. This simulation environment, developed within the Synergy-COPD project, aims at helping bio-researchers and medical students understand the internal mechanisms of the human body through the use of physiological models. The tool is composed of a graphical visualization environment, a web interface through which the user can interact with the models, and a simulation workflow management system comprising a control module and a data warehouse manager. The control module monitors the correct functioning of the whole system. The data warehouse manager is responsible for managing the stored information and supporting its flow among the different modules. The simulation environment presented here has been shown to allow users to research and study the internal mechanisms of human physiology through models via a graphical visualization environment. A new tool for bio-researchers is ready for deployment in various use-case scenarios.

  13. Interobject grouping facilitates visual awareness.

    PubMed

    Stein, Timo; Kaiser, Daniel; Peelen, Marius V

    2015-01-01

    In organizing perception, the human visual system takes advantage of regularities in the visual input to perceptually group related image elements. Simple stimuli that can be perceptually grouped based on physical regularities, for example by forming an illusory contour, have a competitive advantage in entering visual awareness. Here, we show that regularities that arise from the relative positioning of complex, meaningful objects in the visual environment also modulate visual awareness. Using continuous flash suppression, we found that pairs of objects that were positioned according to real-world spatial regularities (e.g., a lamp above a table) accessed awareness more quickly than the same object pairs shown in irregular configurations (e.g., a table above a lamp). This advantage was specific to upright stimuli and abolished by stimulus inversion, meaning that it did not reflect physical stimulus confounds or the grouping of simple image elements. Thus, knowledge of the spatial configuration of objects in the environment shapes the contents of conscious perception.

  14. Ergonomics in the office environment

    NASA Technical Reports Server (NTRS)

    Courtney, Theodore K.

    1993-01-01

    Perhaps the four most popular 'ergonomic' office culprits are: (1) the computer or visual display terminal (VDT); (2) the office chair; (3) the workstation; and (4) other automated equipment such as the facsimile machine, photocopier, etc. Among the ergonomics issues in the office environment are visual fatigue, musculoskeletal disorders, and radiation/electromagnetic (VLF, ELF) field exposure from VDT's. We address each of these in turn and then review some regulatory considerations regarding such stressors in the office and general industrial environment.

  15. Pictorial communication in virtual and real environments

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R. (Editor)

    1991-01-01

    Papers about the communication between human users and machines in real and synthetic environments are presented. Individual topics addressed include: pictorial communication, distortions in memory for visual displays, cartography and map displays, efficiency of graphical perception, volumetric visualization of 3D data, spatial displays to increase pilot situational awareness, teleoperation of land vehicles, computer graphics system for visualizing spacecraft in orbit, visual display aid for orbital maneuvering, multiaxis control in telemanipulation and vehicle guidance, visual enhancements in pick-and-place tasks, target axis effects under transformed visual-motor mappings, adapting to variable prismatic displacement. Also discussed are: spatial vision within egocentric and exocentric frames of reference, sensory conflict in motion sickness, interactions of form and orientation, perception of geometrical structure from congruence, prediction of three-dimensionality across continuous surfaces, effects of viewpoint in the virtual space of pictures, visual slant underestimation, spatial constraints of stereopsis in video displays, stereoscopic stance perception, paradoxical monocular stereopsis and perspective vergence. (No individual items are abstracted in this volume)

  16. Reaction time for processing visual stimulus in a computer-assisted rehabilitation environment.

    PubMed

    Sanchez, Yerly; Pinzon, David; Zheng, Bin

    2017-10-01

    To examine reaction time when human subjects process information presented in the visual channel, under both direct vision and a virtual rehabilitation environment, while walking. Visual stimuli comprised eight math problems displayed in the peripheral vision of seven healthy human subjects in a virtual rehabilitation training environment (computer-assisted rehabilitation environment (CAREN)) and a direct vision environment. Subjects were required to verbally report the results of these math calculations within a short period of time. Reaction time, measured by a Tobii eye tracker, and calculation accuracy were recorded and compared between the direct vision and virtual rehabilitation environments. Performance outcomes measured for both groups included reaction time, reading time, answering time and the verbal answer score. A significant difference between the groups was found only for reaction time (p = .004). Participants had more difficulty recognizing the first equation in the virtual environment, and their reaction time was faster in the direct vision environment. This reaction time delay should be kept in mind when designing skill training scenarios in virtual environments. This was a pilot project for a series of studies assessing the cognitive ability of stroke patients undertaking a rehabilitation program in a virtual training environment. Implications for rehabilitation: eye tracking is a reliable tool that can be employed in rehabilitation virtual environments, and reaction time differs between direct vision and virtual environments.

  17. Change in vision, visual disability, and health after cataract surgery.

    PubMed

    Helbostad, Jorunn L; Oedegaard, Maria; Lamb, Sarah E; Delbaere, Kim; Lord, Stephen R; Sletvold, Olav

    2013-04-01

    Cataract surgery improves vision and visual functioning; the effect on general health is not established. We investigated if vision, visual functioning, and general health follow the same trajectory of change the year after cataract surgery and if changes in vision explain changes in visual disability and general health. One hundred forty-eight persons, with a mean (SD) age of 78.9 (5.0) years (70% bilateral surgery), were assessed before and 6 weeks and 12 months after surgery. Visual disability and general health were assessed by the CatQuest-9SF and the Short Form-36. Corrected binocular visual acuity, visual field, stereo acuity, and contrast vision improved (P < 0.001) from before to 6 weeks after surgery, with further improvements of visual acuity evident up to 12 months (P = 0.034). Cataract surgery had an effect on visual disability 1 year later (P < 0.001). Physical and mental health improved after surgery (P < 0.01) but had returned to presurgery level after 12 months. Vision changes did not explain visual disability and general health 6 weeks after surgery. Vision improved and visual disability decreased in the year after surgery, whereas changes in general health and visual functioning were short-term effects. Lack of associations between changes in vision and self-reported disability and general health suggests that the degree of vision changes and self-reported health do not have a linear relationship.

  18. Supplementation with macular carotenoids improves visual performance of transgenic mice.

    PubMed

    Li, Binxing; Rognon, Gregory T; Mattinson, Ty; Vachali, Preejith P; Gorusupudi, Aruna; Chang, Fu-Yen; Ranganathan, Arunkumar; Nelson, Kelly; George, Evan W; Frederick, Jeanne M; Bernstein, Paul S

    2018-07-01

    Carotenoid supplementation can improve human visual performance, but there is still no validated rodent model to test their effects on visual function in laboratory animals. We recently showed that mice deficient in β-carotene oxygenase 2 (BCO2) and/or β-carotene oxygenase 1 (BCO1) enzymes can accumulate carotenoids in their retinas, allowing us to investigate the effects of carotenoids on the visual performance of mice. Using OptoMotry, a device to measure visual function in rodents, we examined the effect of zeaxanthin, lutein, and β-carotene on visual performance of various BCO knockout mice. We then transgenically expressed the human zeaxanthin-binding protein GSTP1 (hGSTP1) in the rods of bco2 -/- mice to examine if delivering more zeaxanthin to retina will improve their visual function further. The visual performance of bco2 -/- mice fed with zeaxanthin or lutein was significantly improved relative to control mice fed with placebo beadlets. β-Carotene had no significant effect in bco2 -/- mice but modestly improved cone visual function of bco1 -/- mice. Expression of hGSTP1 in the rods of bco2 -/- mice resulted in a 40% increase of retinal zeaxanthin and further improvement of visual performance. This work demonstrates that these "macular pigment mice" may serve as animal models to study carotenoid function in the retina. Copyright © 2018 Elsevier Inc. All rights reserved.

  19. A Dual Track Treadmill in a Virtual Reality Environment as a Countermeasure for Neurovestibular Adaptations in Microgravity

    NASA Technical Reports Server (NTRS)

    D'Andrea, Susan E.; Kahelin, Michael W.; Horowitz, Jay G.; O'Connor, Philip A.

    2004-01-01

    While the neurovestibular system is capable of adapting to altered environments such as microgravity, the adaptive state achieved in space is inadequate for 1G. This leads to gait and postural instabilities when returning to a gravity environment and may create serious problems in future missions to Mars. New methods are needed to improve the understanding of the adaptive capabilities of the human neurovestibular system and to develop more effective countermeasures. The concept behind the current study is that by challenging the neurovestibular system while walking or running, a treadmill can help to readjust the relationship between the visual, vestibular and proprioceptive signals that are altered in a microgravity environment. As a countermeasure, this device could also benefit the musculoskeletal and cardiovascular systems and at the same time decrease the overall time spent exercising. The overall goal of this research is to design, develop, build and test a dual track treadmill, which utilizes virtual reality (VR) displays.

  1. Spatial partitions systematize visual search and enhance target memory.

    PubMed

    Solman, Grayden J F; Kingstone, Alan

    2017-02-01

    Humans are remarkably capable of finding desired objects in the world, despite the scale and complexity of naturalistic environments. Broadly, this ability is supported by an interplay between exploratory search and guidance from episodic memory for previously observed target locations. Here we examined how the environment itself may influence this interplay. In particular, we examined how partitions in the environment, like buildings, rooms, and furniture, can impact memory during repeated search. We report that the presence of partitions in a display, independent of item configuration, reliably improves episodic memory for item locations. Repeated search through partitioned displays was faster overall and was characterized by more rapid ballistic orienting in later repetitions. Explicit recall was also both faster and more accurate when displays were partitioned. Finally, we found that search paths were more regular and systematic when displays were partitioned. Given the ubiquity of partitions in real-world environments, these results provide important insights into the mechanisms of naturalistic search and its relation to memory.

  2. On the Usability and Usefulness of 3d (geo)visualizations - a Focus on Virtual Reality Environments

    NASA Astrophysics Data System (ADS)

    Çöltekin, A.; Lokka, I.; Zahner, M.

    2016-06-01

    Whether and when we should show data in 3D is an ongoing debate in communities conducting visualization research. A strong opposition exists in the information visualization (Infovis) community, and seemingly unnecessary or unwarranted use of 3D, e.g., in plots, bar or pie charts, is heavily criticized. The scientific visualization (Scivis) community, on the other hand, is more supportive of the use of 3D, as it allows 'seeing' invisible phenomena, or designing and printing things that are used in, e.g., surgeries or educational settings. Geographic visualization (Geovis) stands between the Infovis and Scivis communities. In geographic information science, most visuo-spatial analyses have been sufficiently conducted in 2D or 2.5D, including analyses related to terrain and much of the urban phenomena. On the other hand, there has always been a strong interest in 3D, with similar motivations as in the Scivis community. Among many types of 3D visualizations, a popular one that is exploited both for visual analysis and visualization is the highly realistic (geo)virtual environment. Such environments may be engaging and memorable for the viewers because they offer highly immersive experiences. However, it is not yet well established whether we should opt to show the data in 3D; and if yes, a) what type of 3D we should use, b) for what task types, and c) for whom. In this paper, we identify some of the central arguments for and against the use of 3D visualizations around these three considerations in a concise interdisciplinary literature review.

  3. Is improved contrast sensitivity a natural consequence of visual training?

    PubMed Central

    Levi, Aaron; Shaked, Danielle; Tadin, Duje; Huxlin, Krystel R.

    2015-01-01

    Many studies have shown that training and testing conditions modulate specificity of visual learning to trained stimuli and tasks. In visually impaired populations, generalizability of visual learning to untrained stimuli/tasks is almost always reported, with contrast sensitivity (CS) featuring prominently among these collaterally-improved functions. To understand factors underlying this difference, we measured CS for direction and orientation discrimination in the visual periphery of three groups of visually-intact subjects. Group 1 trained on an orientation discrimination task with static Gabors whose luminance contrast was decreased as performance improved. Group 2 trained on a global direction discrimination task using high-contrast random dot stimuli previously used to recover motion perception in cortically blind patients. Group 3 underwent no training. Both forms of training improved CS with some degree of specificity for basic attributes of the trained stimulus/task. Group 1's largest enhancement was in CS around the trained spatial/temporal frequencies; similarly, Group 2's largest improvements occurred in CS for discriminating moving and flickering stimuli. Group 3 saw no significant CS changes. These results indicate that CS improvements may be a natural consequence of multiple forms of visual training in visually intact humans, albeit with some specificity to the trained visual domain(s). PMID:26305736

  4. A year-long caregiver training program improves cognition in preschool Ugandan children with human immunodeficiency virus.

    PubMed

    Boivin, Michael J; Bangirana, Paul; Nakasujja, Noeline; Page, Connie F; Shohet, Cilly; Givon, Deborah; Bass, Judith K; Opoka, Robert O; Klein, Pnina S

    2013-11-01

    To evaluate whether mediational intervention for sensitizing caregivers (MISC) biweekly caregiver training significantly enhanced child development compared with biweekly training on health and nutrition (active control), and to evaluate whether MISC training improved the emotional well-being of the caregivers compared with controls. Sixty of 120 rural Ugandan preschool child/caregiver dyads with HIV were assigned by randomized clusters to biweekly MISC training, alternating between home and clinic, for 1 year. Control dyads received a health and nutrition curriculum. Children were evaluated at baseline, 6 months, and 1 year with the Mullen Early Learning Scales and the Color-Object Association Test for memory. The Caldwell Home Observation for Measurement of the Environment and videotaped child/caregiver MISC interactions were also evaluated. Caregivers were evaluated for depression and anxiety with the Hopkins Symptoms Checklist. Between-group repeated-measures ANCOVA comparisons were made with age, sex, CD4 levels, viral load, material socioeconomic status, physical development, and highly active anti-retroviral therapy treatment status as covariates. The children given MISC had significantly greater gains compared with controls on the Mullen Visual Reception scale (visual-spatial memory) and on Color-Object Association Test memory. MISC caregivers significantly improved on the Caldwell Home Observation for Measurement of the Environment scale and total frequency of MISC videotaped interactions. MISC caregivers were also less depressed. Mortality was lower for children given MISC compared with controls during the training year. MISC was effective in teaching Ugandan caregivers to enhance their children's cognitive development through practical and sustainable techniques applied during daily interactions in the home. Copyright © 2013 Mosby, Inc. All rights reserved.

  5. Are children with low vision adapted to the visual environment in classrooms of mainstream schools?

    PubMed Central

    Negiloni, Kalpa; Ramani, Krishna Kumar; Jeevitha, R; Kalva, Jayashree; Sudhir, Rachapalle Reddi

    2018-01-01

    Purpose: The study aimed to evaluate the classroom environment of children with low vision and provide recommendations to reduce visual stress, with focus on mainstream schooling. Methods: The medical records of 110 children (5–17 years) seen in the low vision clinic during a 1-year period (2015) at a tertiary care center in south India were extracted. The visual function levels of children were compared to the details of their classroom environment. The study evaluated and recommended the chalkboard visual task size and viewing distance required for children with mild, moderate, and severe visual impairment (VI). Results: The major causes of low vision based on the site of abnormality and etiology were retinal (80%) and hereditary (67%) conditions, respectively, in children with mild (n = 18), moderate (n = 72), and severe (n = 20) VI. Many of the children (72%) had difficulty in viewing the chalkboard, and common strategies used for better visibility included copying from friends (47%) and going closer to the chalkboard (42%). To view the chalkboard with reduced visual stress, a child with mild VI can be seated at a maximum distance of 4.3 m from the chalkboard, with the minimum size of visual task (height of lowercase letter writing on chalkboard) recommended to be 3 cm. For the 3/60–6/60 range, the maximum viewing distance with a visual task size of 4 cm is recommended to be 85 cm to 1.7 m. Conclusion: Simple modifications of the visual task size and seating arrangements can aid children with low vision with better visibility of the chalkboard and reduced visual stress to manage in mainstream schools. PMID:29380777

  6. Asking better questions: How presentation formats influence information search.

    PubMed

    Wu, Charley M; Meder, Björn; Filimon, Flavia; Nelson, Jonathan D

    2017-08-01

    While the influence of presentation formats has been widely studied in Bayesian reasoning tasks, we present the first systematic investigation of how presentation formats influence information search decisions. Four experiments were conducted across different probabilistic environments, where subjects (N = 2,858) chose between 2 possible search queries, each with binary probabilistic outcomes, with the goal of maximizing classification accuracy. We studied 14 different numerical and visual formats for presenting information about the search environment, constructed across 6 design features that have been prominently related to improvements in Bayesian reasoning accuracy (natural frequencies, posteriors, complement, spatial extent, countability, and part-to-whole information). The posterior variants of the icon array and bar graph formats led to the highest proportion of correct responses, and were substantially better than the standard probability format. Results suggest that presenting information in terms of posterior probabilities and visualizing natural frequencies using spatial extent (a perceptual feature) were especially helpful in guiding search decisions, although environments with a mixture of probabilistic and certain outcomes were challenging across all formats. Subjects who made more accurate probability judgments did not perform better on the search task, suggesting that simple decision heuristics may be used to make search decisions without explicitly applying Bayesian inference to compute probabilities. We propose a new take-the-difference (TTD) heuristic that identifies the accuracy-maximizing query without explicit computation of posterior probabilities. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
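    The query-selection problem in this record can be made concrete. A minimal sketch with two candidate binary queries and two classes, using made-up probabilities: the expected classification accuracy of a query is the sum, over its outcomes, of the joint probability of the more probable class, which is what a rational searcher maximizes. (The paper's TTD heuristic is a shortcut to the same choice; its exact rule is not reproduced here.)

```python
# Expected classification accuracy of a binary query: for each query
# outcome, classify as the more probable class; summing the winning joint
# probabilities over outcomes gives the expected accuracy.

def expected_accuracy(p_class, likelihoods):
    """p_class: prior P(class) for each of two classes.
    likelihoods[c][o]: P(outcome o | class c) for the query's two outcomes."""
    acc = 0.0
    for o in range(2):
        # Joint probabilities P(class, outcome) for this outcome
        joint = [p_class[c] * likelihoods[c][o] for c in range(2)]
        acc += max(joint)  # classify as the more probable class
    return acc

priors = [0.7, 0.3]
query_a = [[0.9, 0.1], [0.3, 0.7]]   # fairly diagnostic query
query_b = [[0.5, 0.5], [0.5, 0.5]]   # uninformative query
print(expected_accuracy(priors, query_a))  # ≈ 0.84
print(expected_accuracy(priors, query_b))  # ≈ 0.70 (no better than the prior)
```

    Note that the uninformative query leaves accuracy at the prior's 0.7, so the diagnostic query is the accuracy-maximizing choice here.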

  7. Subjective evaluation of HEVC in mobile devices

    NASA Astrophysics Data System (ADS)

    Garcia, Ray; Kalva, Hari

    2013-03-01

    Mobile computing environments provide a unique set of user needs and expectations that designers must consider. With increased multimedia use in mobile environments, video encoding methods within the smart phone market segment are key factors that contribute to positive user experience. Currently available display resolutions and expected cellular bandwidth are major factors the designer must consider when determining which encoding methods should be supported. The desired goal is to maximize the consumer experience, reduce cost, and reduce time to market. This paper presents a comparative evaluation of the quality of user experience when HEVC and AVC/H.264 video coding standards were used. The goal of the study was to evaluate any improvements in user experience when using HEVC. Subjective comparisons were made between H.264/AVC and HEVC encoding standards in accordance with the double-stimulus impairment scale (DSIS) as defined by ITU-R BT.500-13. Test environments are based on smart phone LCD resolutions and expected cellular bit rates, such as 200 kbps and 400 kbps. Subjective feedback shows both encoding methods are adequate at 400 kbps constant bit rate. However, a noticeable consumer experience gap was observed for 200 kbps. Significantly lower H.264 subjective quality was noticed with video sequences that have multiple moving objects and no single point of visual attraction. Video sequences with single points of visual attraction or few moving objects tended to have higher H.264 subjective quality.
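    DSIS ratings like those used in this record are conventionally summarized as a mean opinion score (MOS) with a 95% confidence interval. A minimal sketch with made-up ratings (the study's actual data are not reproduced here):

```python
# Mean opinion score (MOS) and a normal-approximation 95% confidence
# interval for a set of DSIS ratings (5 = imperceptible ... 1 = very
# annoying). Ratings below are illustrative, not from the study.
import math

def mos_ci(ratings):
    n = len(ratings)
    mean = sum(ratings) / n
    var = sum((r - mean) ** 2 for r in ratings) / (n - 1)  # sample variance
    ci = 1.96 * math.sqrt(var / n)  # half-width of the 95% interval
    return mean, ci

hevc_200k = [4, 4, 5, 3, 4, 4, 5, 4]   # hypothetical panel at 200 kbps
avc_200k = [3, 2, 3, 3, 2, 4, 3, 3]
m1, c1 = mos_ci(hevc_200k)
m2, c2 = mos_ci(avc_200k)
print(f"HEVC MOS {m1:.2f} ± {c1:.2f}, AVC MOS {m2:.2f} ± {c2:.2f}")
```

    Non-overlapping confidence intervals at a given bit rate are the usual evidence for a real subjective-quality gap between codecs.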

  8. Experience Report: Visual Programming in the Real World

    NASA Technical Reports Server (NTRS)

    Baroth, E.; Hartsough, C.

    1994-01-01

    This paper reports direct experience with two commercial, widely used visual programming environments. While neither of these systems is object oriented, the tools have transformed the development process and indicate a direction for visual object oriented tools to proceed.

  9. Nocturnality constrains morphological and functional diversity in the eyes of reef fishes.

    PubMed

    Schmitz, Lars; Wainwright, Peter C

    2011-11-19

    Ambient light levels are often considered to drive the evolution of eye form and function. Diel activity pattern is the main mechanism controlling the visual environment of teleost reef fish, with day-active (diurnal) fish active in well-illuminated conditions, whereas night-active (nocturnal) fish cope with dim light. Physiological optics predicts several specific evolutionary responses to dim-light vision that should be reflected in visual performance features of the eye. We analyzed a large comparative dataset on morphological traits of the eyes in 265 species of teleost reef fish in 43 different families. The eye morphology of nocturnal reef teleosts is characterized by a syndrome that indicates better light sensitivity, including large relative eye size, high optical ratio and large, rounded pupils. Improved dim-light image formation comes at the cost of reduced depth of focus and reduction of potential accommodative lens movement. Diurnal teleost reef fish, released from the stringent functional requirements of dim-light vision, have much higher morphological and optical diversity than nocturnal species, with large ranges of optical ratio, depth of focus, and lens accommodation. Physical characteristics of the environment are an important factor in the evolution and diversification of the vertebrate eye. Both teleost reef fish and terrestrial amniotes meet the functional requirements of dim-light vision with a similar evolutionary response of morphological and optical modifications. The trade-off between improved dim-light vision and reduced optical diversity may be a key factor in explaining the lower trophic diversity of nocturnal reef teleosts.

  10. Six easy steps on how to create a lean sigma value stream map for a multidisciplinary clinical operation.

    PubMed

    Lee, Emily; Grooms, Richard; Mamidala, Soumya; Nagy, Paul

    2014-12-01

    Value stream mapping (VSM) is a very useful technique to visualize and quantify the complex workflows often seen in clinical environments. VSM brings together multidisciplinary teams to identify parts of processes, collect data, and develop interventional ideas. An example VSM involving pediatric MRI with general anesthesia is outlined. As the process progresses, the map shows a large delay between the fax referral and the date of the scheduled and registered appointment. Ideas for improved efficiency and metrics were identified to measure improvement within a 6-month period, and an intervention package was developed for the department. Copyright © 2014. Published by Elsevier Inc.

  11. Colour coding scrubs as a means of improving perioperative communication.

    PubMed

    Litak, Dominika

    2011-05-01

    Effective communication within the operating department is essential for achieving patient safety. A large part of the perioperative communication is non-verbal. One type of non-verbal communication is 'object communication', the most common form of which is clothing. The colour coding of clothing such as scrubs has the potential to optimise perioperative communication with the patients and between the staff. A colour contains a coded message, and is a visual cue for an immediate identification of personnel. This is of key importance in the perioperative environment. The idea of colour coded scrubs in the perioperative setting has not been much explored to date and, given the potential contribution towards improvement of patient outcomes, deserves consideration.

  12. Improved Discrimination of Visual Stimuli Following Repetitive Transcranial Magnetic Stimulation

    PubMed Central

    Waterston, Michael L.; Pack, Christopher C.

    2010-01-01

    Background: Repetitive transcranial magnetic stimulation (rTMS) at certain frequencies increases thresholds for motor-evoked potentials and phosphenes following stimulation of cortex. Consequently rTMS is often assumed to introduce a "virtual lesion" in stimulated brain regions, with correspondingly diminished behavioral performance. Methodology/Principal Findings: Here we investigated the effects of rTMS to visual cortex on subjects' ability to perform visual psychophysical tasks. Contrary to expectations of a visual deficit, we find that rTMS often improves the discrimination of visual features. For coarse orientation tasks, discrimination of a static stimulus improved consistently following theta-burst stimulation of the occipital lobe. Using a reaction-time task, we found that these improvements occurred throughout the visual field and lasted beyond one hour post-rTMS. Low-frequency (1 Hz) stimulation yielded similar improvements. In contrast, we did not find consistent effects of rTMS on performance in a fine orientation discrimination task. Conclusions/Significance: Overall our results suggest that rTMS generally improves or has no effect on visual acuity, with the nature of the effect depending on the type of stimulation and the task. We interpret our results in the context of an ideal-observer model of visual perception. PMID:20442776

  13. Rapid adaptation to a novel light environment: The importance of ontogeny and phenotypic plasticity in shaping the visual system of Nicaraguan Midas cichlid fish (Amphilophus citrinellus spp.).

    PubMed

    Härer, Andreas; Torres-Dowdall, Julián; Meyer, Axel

    2017-10-01

    Colonization of novel habitats is typically challenging to organisms. In the initial stage after colonization, approximation to fitness optima in the new environment can occur by selection acting on standing genetic variation, modification of developmental patterns or phenotypic plasticity. Midas cichlids have recently colonized crater Lake Apoyo from great Lake Nicaragua. The photic environment of crater Lake Apoyo is shifted towards shorter wavelengths compared to great Lake Nicaragua and Midas cichlids from both lakes differ in visual sensitivity. We investigated the contribution of ontogeny and phenotypic plasticity in shaping the visual system of Midas cichlids after colonizing this novel photic environment. To this end, we measured cone opsin expression both during development and after experimental exposure to different light treatments. Midas cichlids from both lakes undergo ontogenetic changes in cone opsin expression, but visual sensitivity is consistently shifted towards shorter wavelengths in crater lake fish, which leads to a paedomorphic retention of their visual phenotype. This shift might be mediated by lower levels of thyroid hormone in crater lake Midas cichlids (measured indirectly as dio2 and dio3 gene expression). Exposing fish to different light treatments revealed that cone opsin expression is phenotypically plastic in both species during early development, with short and long wavelength light slowing or accelerating ontogenetic changes, respectively. Notably, this plastic response was maintained into adulthood only in the derived crater lake Midas cichlids. We conclude that the rapid evolution of Midas cichlids' visual system after colonizing crater Lake Apoyo was mediated by a shift in visual sensitivity during ontogeny and was further aided by phenotypic plasticity during development. © 2017 John Wiley & Sons Ltd.

  14. Navigation and Self-Semantic Location of Drones in Indoor Environments by Combining the Visual Bug Algorithm and Entropy-Based Vision.

    PubMed

    Maravall, Darío; de Lope, Javier; Fuentes, Juan P

    2017-01-01

    We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks.

  15. Navigation and Self-Semantic Location of Drones in Indoor Environments by Combining the Visual Bug Algorithm and Entropy-Based Vision

    PubMed Central

    Maravall, Darío; de Lope, Javier; Fuentes, Juan P.

    2017-01-01

    We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks. PMID:28900394
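    The entropy computation at the core of the algorithm above can be sketched directly: the Shannon entropy of an image's gray-level histogram is low when one gray value dominates (a single flat object) and maximal when all values are equally frequent (clutter). The threshold, function names, and toy patches below are illustrative assumptions, not values from the paper.

```python
# Shannon entropy of an image's gray-level histogram, and a toy
# landmark-vs-obstacle decision in the spirit of the paper: low entropy
# suggests a single dominant object (candidate landmark), high entropy
# suggests several objects (treat as obstacle).
import math

def image_entropy(pixels, levels=256):
    """pixels: flat list of integer gray values in [0, levels)."""
    counts = [0] * levels
    for p in pixels:
        counts[p] += 1
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

def classify_view(pixels, threshold=4.0):  # threshold is illustrative
    return "landmark" if image_entropy(pixels) < threshold else "obstacle"

uniform_patch = [128] * 1024          # one flat object: entropy 0 bits
cluttered_patch = list(range(256)) * 4  # all gray levels equal: entropy 8 bits
print(classify_view(uniform_patch))     # prints landmark
print(classify_view(cluttered_patch))   # prints obstacle
```

    In the paper's scheme, a "landmark" decision triggers a lookup in the visual topological map for the next waypoint, while an "obstacle" decision starts the collision-avoidance maneuver.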

  16. Corridor One: An Integrated Distance Visualization Environment for SSI+ASCI Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christopher R. Johnson, Charles D. Hansen

    2001-10-29

    The goal of Corridor One: An Integrated Distance Visualization Environment for ASCI and SSI Applications was to combine the forces of six leading-edge laboratories working in the areas of visualization, distributed computing, and high-performance networking (Argonne National Laboratory, Lawrence Berkeley National Laboratory, Los Alamos National Laboratory, University of Illinois, University of Utah and Princeton University) to develop and deploy the most advanced integrated distance visualization environment for large-scale scientific visualization and demonstrate it on applications relevant to the DOE SSI and ASCI programs. The Corridor One team brought world-class expertise in parallel rendering, deep image-based rendering, immersive environment technology, large-format multi-projector wall-based displays, volume and surface visualization algorithms, collaboration tools and streaming media technology, network protocols for image transmission, high-performance networking, quality-of-service technology, and distributed computing middleware. Our strategy was to build on the very successful teams that produced the I-WAY, "Computational Grids" and CAVE technology and to add these to the teams that had developed the fastest parallel visualization systems and the most widely used networking infrastructure for multicast and distributed media. Unfortunately, just as we were getting going on the Corridor One project, DOE cut the program after the first year. As such, our final report consists of our progress during year one of the grant.

  17. Adaptation to Laterally Displacing Prisms in Anisometropic Amblyopia.

    PubMed

    Sklar, Jaime C; Goltz, Herbert C; Gane, Luke; Wong, Agnes M F

    2015-06-01

    Using visual feedback to modify sensorimotor output in response to changes in the external environment is essential for daily function. Prism adaptation is a well-established experimental paradigm to quantify sensorimotor adaptation; that is, how the sensorimotor system adapts to an optically-altered visuospatial environment. Amblyopia is a neurodevelopmental disorder characterized by spatiotemporal deficits in vision that impact manual and oculomotor function. This study explored the effects of anisometropic amblyopia on prism adaptation. Eight participants with anisometropic amblyopia and 11 visually-normal adults, all right-handed, were tested. Participants pointed to visual targets and were presented with feedback of hand position near the terminus of limb movement in three blocks: baseline, adaptation, and deadaptation. Adaptation was induced by viewing with binocular 11.4° (20 prism diopter [PD]) left-shifting prisms. All tasks were performed during binocular viewing. Participants with anisometropic amblyopia required significantly more trials (i.e., increased time constant) to adapt to prismatic optical displacement than visually-normal controls. During the rapid error correction phase of adaptation, people with anisometropic amblyopia also exhibited greater variance in motor output than visually-normal controls. Amblyopia impairs the ability to adapt the sensorimotor system to an optically-displaced visual environment. The increased time constant and greater variance in motor output during the rapid error correction phase of adaptation may indicate deficits in processing of visual information as a result of degraded spatiotemporal vision in amblyopia.
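    The "time constant" of adaptation can be made concrete with a small fitting sketch: model the pointing error on trial n as a decaying exponential a·exp(−n/τ) and pick the τ that best fits the data. The exponential model, the grid-search fit, and all parameter values here are illustrative assumptions, not the study's actual analysis.

```python
import numpy as np

def fit_time_constant(errors, taus=np.linspace(0.5, 50, 200)):
    """Estimate the adaptation time constant tau by fitting
    errors[n] ~ a * exp(-n / tau); for each candidate tau the amplitude a
    has a closed-form least-squares solution, so only tau is grid-searched."""
    n = np.arange(len(errors))
    y = np.asarray(errors, dtype=float)
    best_tau, best_sse = None, np.inf
    for tau in taus:
        basis = np.exp(-n / tau)
        a = (basis @ y) / (basis @ basis)          # least-squares amplitude
        sse = np.sum((y - a * basis) ** 2)
        if sse < best_sse:
            best_tau, best_sse = tau, sse
    return best_tau

# Noise-free synthetic pointing errors decaying with tau = 8 trials.
demo = 10.0 * np.exp(-np.arange(40) / 8.0)
```

    A larger fitted τ corresponds to the paper's finding of "significantly more trials" needed to adapt.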

  18. Immersive volume rendering of blood vessels

    NASA Astrophysics Data System (ADS)

    Long, Gregory; Kim, Han Suk; Marsden, Alison; Bazilevs, Yuri; Schulze, Jürgen P.

    2012-03-01

    In this paper, we present a novel method of visualizing flow in blood vessels. Our approach reads unstructured tetrahedral data, resamples it, and uses slice-based 3D texture volume rendering. Due to the sparse structure of blood vessels, we utilize an octree to efficiently store the resampled data by discarding empty regions of the volume. We use animation to convey time series data, a wireframe surface to convey structure, and the StarCAVE, a 3D virtual reality environment, to add a fully immersive element to the visualization. Our tool has great value in interdisciplinary work, helping scientists collaborate with clinicians by improving the understanding of blood flow simulations. Full immersion in the flow field allows for a more intuitive understanding of the flow phenomena and can be a great help to medical experts for treatment planning.
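    The empty-region culling idea can be sketched as a simple octree builder that recursively subdivides a cubic volume and stores nothing for all-zero octants. This is an illustrative sketch of the sparse-storage concept, not the StarCAVE renderer's actual data structure.

```python
import numpy as np

def build_octree(vol, x0=0, y0=0, z0=0, size=None, min_size=2):
    """Recursively subdivide a cubic volume, discarding all-empty octants.
    Returns a nested dict, or None for a fully empty region."""
    if size is None:
        size = vol.shape[0]
    block = vol[x0:x0+size, y0:y0+size, z0:z0+size]
    if not block.any():
        return None                      # empty region: store nothing
    if size <= min_size:
        return {"origin": (x0, y0, z0), "size": size, "data": block.copy()}
    h = size // 2
    children = []
    for dx in (0, h):
        for dy in (0, h):
            for dz in (0, h):
                c = build_octree(vol, x0+dx, y0+dy, z0+dz, h, min_size)
                if c is not None:
                    children.append(c)
    return {"origin": (x0, y0, z0), "size": size, "children": children}

def count_leaves(node):
    """Number of data-carrying leaves actually stored."""
    if node is None:
        return 0
    if "data" in node:
        return 1
    return sum(count_leaves(c) for c in node["children"])
```

    For a vessel tree occupying a small fraction of the bounding volume, most octants are culled at a coarse level, so memory scales with the occupied voxels rather than the full grid.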

  19. Disaster Response Modeling Through Discrete-Event Simulation

    NASA Technical Reports Server (NTRS)

    Wang, Jeffrey; Gilmer, Graham

    2012-01-01

    Organizations today are required to plan against a rapidly changing, high-cost environment. This is especially true for first responders to disasters and other incidents, where critical decisions must be made in a timely manner to save lives and resources. Discrete-event simulations enable organizations to make better decisions by visualizing complex processes and the impact of proposed changes before they are implemented. A discrete-event simulation using Simio software has been developed to effectively analyze and quantify the imagery capabilities of domestic aviation resources conducting relief missions. This approach has helped synthesize large amounts of data to better visualize process flows, manage resources, and pinpoint capability gaps and shortfalls in disaster response scenarios. Simulation outputs and results have supported decision makers in the understanding of high risk locations, key resource placement, and the effectiveness of proposed improvements.
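    At its core, a discrete-event simulation of the kind described is an event loop driven by a time-ordered priority queue. A minimal sketch of the idea (the mission scenario, parameter names, and values are hypothetical stand-ins, not the Simio model):

```python
import heapq

def simulate(num_missions, flight_time, turnaround, aircraft):
    """Minimal discrete-event loop: a pool of aircraft flies imagery missions.
    Each mission ties up one aircraft for `flight_time`; after landing, the
    aircraft needs `turnaround` time before it can fly again. Returns the
    time at which the last mission completes (the makespan)."""
    t, free, pending, completed = 0.0, aircraft, num_missions, 0
    last_completion = 0.0
    events = []  # min-heap of (time, kind) future events
    while completed < num_missions:
        # Dispatch missions while aircraft and work remain at the current time.
        while free > 0 and pending > 0:
            free -= 1
            pending -= 1
            heapq.heappush(events, (t + flight_time, "done"))
        t, kind = heapq.heappop(events)   # advance the clock to the next event
        if kind == "done":
            completed += 1
            last_completion = t
            heapq.heappush(events, (t + turnaround, "free"))
        else:
            free += 1
    return last_completion
```

    Comparing makespans across resource levels is exactly the kind of what-if question (e.g., "does a second aircraft close the capability gap?") such models answer before changes are implemented.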

  20. The Role of the Human Extrastriate Visual Cortex in Mirror Symmetry Discrimination: A TMS-Adaptation Study

    ERIC Educational Resources Information Center

    Cattaneo, Zaira; Mattavelli, Giulia; Papagno, Costanza; Herbert, Andrew; Silvanto, Juha

    2011-01-01

    The human visual system is able to efficiently extract symmetry information from the visual environment. Prior neuroimaging evidence has revealed symmetry-preferring neuronal representations in the dorsolateral extrastriate visual cortex; the objective of the present study was to investigate the necessity of these representations in symmetry…

  1. The contribution of dynamic visual cues to audiovisual speech perception.

    PubMed

    Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador

    2015-08-01

    Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this aim, we measured word identification performance in noise using unimodal auditory stimuli, and with audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point light displays achieved via motion capture of the original talker. Point light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Verbalizing, Visualizing, and Navigating: The Effect of Strategies on Encoding a Large-Scale Virtual Environment

    ERIC Educational Resources Information Center

    Kraemer, David J. M.; Schinazi, Victor R.; Cawkwell, Philip B.; Tekriwal, Anand; Epstein, Russell A.; Thompson-Schill, Sharon L.

    2017-01-01

    Using novel virtual cities, we investigated the influence of verbal and visual strategies on the encoding of navigation-relevant information in a large-scale virtual environment. In 2 experiments, participants watched videos of routes through 4 virtual cities and were subsequently tested on their memory for observed landmarks and their ability to…

  3. Effects of Visual Cues and Self-Explanation Prompts: Empirical Evidence in a Multimedia Environment

    ERIC Educational Resources Information Center

    Lin, Lijia; Atkinson, Robert K.; Savenye, Wilhelmina C.; Nelson, Brian C.

    2016-01-01

    The purpose of this study was to investigate the impacts of visual cues and different types of self-explanation prompts on learning, cognitive load, and intrinsic motivation in an interactive multimedia environment that was designed to deliver a computer-based lesson about the human cardiovascular system. A total of 126 college students were…

  4. Visual Reasoning in Computational Environment: A Case of Graph Sketching

    ERIC Educational Resources Information Center

    Leung, Allen; Chan, King Wah

    2004-01-01

    This paper reports the case of a form six (grade 12) Hong Kong student's exploration of graph sketching in a computational environment. In particular, the student summarized his discovery in the form of two empirical laws. The student was interviewed and the interviewed data were used to map out a possible path of his visual reasoning. Critical…

  5. I Scratch and Sense but Can I Program? An Investigation of Learning with a Block Based Programming Language

    ERIC Educational Resources Information Center

    Simpkins, N. K.

    2014-01-01

    This article reports an investigation into undergraduate student experiences and views of a visual or "blocks" based programming language and its environment. An additional and central aspect of this enquiry is to substantiate the perceived degree of transferability of programming skills learnt within the visual environment to a typical…

  6. Data sonification and sound visualization.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaper, H. G.; Tipei, S.; Wiebel, E.

    1999-07-01

    Sound can help us explore and analyze complex data sets in scientific computing. The authors describe a digital instrument for additive sound synthesis (Diass) and a program to visualize sounds in a virtual reality environment (M4Cave). Both are part of a comprehensive music composition environment that includes additional software for computer-assisted composition and automatic music notation.
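    Additive synthesis of the kind Diass performs builds a tone as a sum of sinusoidal partials; sonification then maps data values onto partial parameters such as amplitude. A toy sketch of that mapping (the parameter choices and the mapping itself are illustrative assumptions, not the Diass design):

```python
import numpy as np

def sonify(data, base_freq=220.0, duration=0.5, sr=8000):
    """Toy additive-synthesis sonification: each data value sets the
    amplitude of one harmonic partial of a base tone. Returns a waveform
    normalized to stay within [-1, 1]."""
    t = np.arange(int(duration * sr)) / sr
    amps = np.asarray(data, dtype=float)
    amps = amps / (np.abs(amps).max() or 1.0)      # normalize the data values
    wave = np.zeros_like(t)
    for k, a in enumerate(amps, start=1):          # k-th harmonic partial
        wave += a * np.sin(2 * np.pi * base_freq * k * t)
    return wave / max(1, len(amps))                # bound the summed amplitude
```

    Richer mappings (data to pitch, onset time, or spectral envelope) follow the same pattern: a data stream parameterizes the partials of a synthesized sound.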

  7. Reusable science tools for analog exploration missions: xGDS Web Tools, VERVE, and Gigapan Voyage

    NASA Astrophysics Data System (ADS)

    Lee, Susan Y.; Lees, David; Cohen, Tamar; Allan, Mark; Deans, Matthew; Morse, Theodore; Park, Eric; Smith, Trey

    2013-10-01

    The Exploration Ground Data Systems (xGDS) project led by the Intelligent Robotics Group (IRG) at NASA Ames Research Center creates software tools to support multiple NASA-led planetary analog field experiments. The two primary tools that fall under the xGDS umbrella are the xGDS Web Tools (xGDS-WT) and Visual Environment for Remote Virtual Exploration (VERVE). IRG has also developed a hardware and software system that is closely integrated with our xGDS tools and is used in multiple field experiments called Gigapan Voyage. xGDS-WT, VERVE, and Gigapan Voyage are examples of IRG projects that improve the ratio of science return versus development effort by creating generic and reusable tools that leverage existing technologies in both hardware and software. xGDS Web Tools provides software for gathering and organizing mission data for science and engineering operations, including tools for planning traverses, monitoring autonomous or piloted vehicles, visualization, documentation, analysis, and search. VERVE provides high performance three dimensional (3D) user interfaces used by scientists, robot operators, and mission planners to visualize robot data in real time. Gigapan Voyage is a gigapixel image capturing and processing tool that improves situational awareness and scientific exploration in human and robotic analog missions. All of these technologies emphasize software reuse and leverage open source and/or commercial-off-the-shelf tools to greatly improve the utility and reduce the development and operational cost of future similar technologies. Over the past several years these technologies have been used in many NASA-led robotic field campaigns including the Desert Research and Technology Studies (DRATS), the Pavilion Lake Research Project (PLRP), the K10 Robotic Follow-Up tests, and most recently we have become involved in the NASA Extreme Environment Mission Operations (NEEMO) field experiments. 
A major objective of these joint robot and crew experiments is to improve NASA's understanding of how to most effectively execute and increase science return from exploration missions. This paper focuses on an integrated suite of xGDS software and compatible hardware tools: xGDS Web Tools, VERVE, and Gigapan Voyage, how they are used, and the design decisions that were made to allow them to be easily developed, integrated, tested, and reused by multiple NASA field experiments and robotic platforms.

  8. [Visually-impaired adolescents' interpersonal relationships at school].

    PubMed

    Bezerra, Camilla Pontes; Pagliuca, Lorita Marlena Freitag

    2007-09-01

    This study describes the school environment and how interpersonal relationships are conducted in view of the needs of visually handicapped adolescents. Data were collected through observations of the physical environment of two schools in Fortaleza, Ceara, Brazil, with the support of a checklist, in order to analyze the existence of obstacles. Four visually handicapped adolescents from 14 to 20 years of age were interviewed. Conclusions were that the obstacles that hamper the free locomotion, communication, and physical and social interaction of the blind--or people with other eye disorders--during their activities at school are numerous.

  9. Seeing the Song: Left Auditory Structures May Track Auditory-Visual Dynamic Alignment

    PubMed Central

    Mossbridge, Julia A.; Grabowecky, Marcia; Suzuki, Satoru

    2013-01-01

    Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment. PMID:24194873
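    The kind of auditory-visual alignment tracking discussed above can be illustrated with a zero-lag correlation between an auditory feature envelope (e.g., loudness) and a visual feature envelope (e.g., luminance). This is only a conceptual proxy, not the study's analysis method:

```python
import numpy as np

def alignment_score(audio_env, visual_env):
    """Zero-lag Pearson correlation between two feature streams sampled at
    the same rate: near 1 for temporally aligned dynamics, near 0 when the
    streams are misaligned or unrelated."""
    a = (audio_env - np.mean(audio_env)) / np.std(audio_env)
    v = (visual_env - np.mean(visual_env)) / np.std(visual_env)
    return float(np.mean(a * v))

# Aligned vs. quarter-cycle-shifted envelopes (0.5 Hz modulation, 10 s).
t = np.linspace(0, 10, 500)
loudness = np.sin(2 * np.pi * 0.5 * t)
aligned = loudness.copy()
misaligned = np.sin(2 * np.pi * 0.5 * t + np.pi / 2)
```

    A neural mechanism sensitive to temporal alignment would, by analogy, respond differently to the aligned and misaligned pairs even though their individual statistics are identical.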

  10. PERSPECTIVE: Is acuity enough? Other considerations in clinical investigations of visual prostheses

    NASA Astrophysics Data System (ADS)

    Lepri, Bernard P.

    2009-06-01

    Visual impairing eye diseases are the major frontier facing ophthalmic research today in light of our rapidly aging population. The visual skills necessary for improving the quality of daily function and life are inextricably linked to these impairing diseases. Both research and reimbursement programs are emphasizing outcome-based results. Is improvement in visual acuity alone enough to improve the function and quality of life of visually impaired persons? This perspective summarizes the types of effectiveness endpoints for clinical investigations of visual prostheses that go beyond visual acuity. The clinical investigation of visual prostheses should include visual function, functional vision and quality of life measures. Specifically, they encompass contrast sensitivity, orientation and mobility, activities of daily living and quality of life assessments. The perspective focuses on the design of clinical trials for visual prostheses and the methods of determining effectiveness above and beyond visual acuity that will yield outcomes that are measured by improved function in the visual world and quality of life. The visually impaired population is the primary consideration in this presentation with particular emphases on retinitis pigmentosa and age-related macular degeneration. Clinical trials for visual prostheses cannot be isolated from the need for medical rehabilitation in order to obtain measurements of effectiveness that produce outcomes/evidence-based success. This approach will facilitate improvement in daily function and quality of life of patients with diseases that cause chronic vision impairment. The views and opinions are those of the author and do not necessarily reflect those of the US Food and Drug Administration, the US Department of Health and Human Services or the Public Health Service.

  11. The impact of modality and working memory capacity on achievement in a multimedia environment

    NASA Astrophysics Data System (ADS)

    Stromfors, Charlotte M.

    This study explored how working memory capacity affects student learning in a dual-modality multimedia environment titled Visualizing Topography. This computer-based instructional program focused on the basic skills in reading and interpreting topographic maps. Two versions of the program presented the same instructional content but varied the modality of verbal information: the audio-visual condition coordinated topographic maps and narration; the visual-visual condition provided the same topographic maps with readable text. An analysis of covariance procedure was conducted to evaluate the effects due to the two conditions in relation to working memory capacity, controlling for individual differences in spatial visualization and prior knowledge. The scores on the Figural Intersection Test were used to separate subjects into three levels in terms of their measured working memory capacity: low, medium, and high. Subjects accessed Visualizing Topography by way of the Internet and proceeded independently through the program. The program architecture was linear in format. Subjects had a minimum amount of flexibility within each of five segments, but not between segments. One hundred and fifty-one subjects were randomly assigned to either the audio-visual or the visual-visual condition. The average time spent in the program was thirty-one minutes. The results of the ANCOVA revealed a small to moderate modality effect favoring the audio-visual condition. The results also showed that subjects with low and medium working memory capacity benefited more from the audio-visual condition than the visual-visual condition, while subjects with a high working memory capacity did not benefit from either condition. Although splitting the data reduced group sizes, ANCOVA results by gender suggested that the audio-visual condition favored females with low working memory capacities. 
The results have implications for designers of educational software, the teachers who select software, and the students themselves. Splitting information into two, non-redundant sources, one audio and one visual, may effectively extend working memory capacity. This is especially significant for the student population encountering difficult science concepts that require the formation and manipulation of mental representations. It is recommended that multimedia environments be designed or selected with attention to modality conditions that facilitate student learning.

  12. Accurately Decoding Visual Information from fMRI Data Obtained in a Realistic Virtual Environment

    DTIC Science & Technology

    2015-06-09

    Floren, Andrew; Naylor, Bruce; Miikkulainen, Risto; Ress, David. Center for Learning and Memory, The University of Texas at Austin, 100 E 24th Street, Stop C7000, Austin, TX 78712, USA. Accurately decoding visual information from fMRI data obtained in a realistic virtual environment. Front. Hum. Neurosci. 9:327. doi: 10.3389/fnhum.2015.00327

  13. Planning, Implementation and Optimization of Future Space Missions using an Immersive Visualization Environment (IVE) Machine

    NASA Astrophysics Data System (ADS)

    Harris, E.

    Planning, Implementation and Optimization of Future Space Missions using an Immersive Visualization Environment (IVE) Machine E. N. Harris, Lockheed Martin Space Systems, Denver, CO and George W. Morgenthaler, U. of Colorado at Boulder History: A team of 3-D engineering visualization experts at the Lockheed Martin Space Systems Company has developed innovative virtual prototyping simulation solutions for ground processing and real-time visualization of design and planning of aerospace missions over the past 6 years. At the University of Colorado, a team of 3-D visualization experts are developing the science of 3-D visualization and immersive visualization at the newly founded BP Center for Visualization, which began operations in October 2001. (See IAF/IAA-01-13.2.09, "The Use of 3-D Immersive Visualization Environments (IVEs) to Plan Space Missions," G. A. Dorn and G. W. Morgenthaler.) Progressing from Today's 3-D Engineering Simulations to Tomorrow's 3-D IVE Mission Planning, Simulation and Optimization Techniques: 3-D IVEs and visualization simulation tools can be combined for efficient planning and design engineering of future aerospace exploration and commercial missions. This technology is currently being developed and will be demonstrated by Lockheed Martin in the IVE at the BP Center using virtual simulation for clearance checks, collision detection, ergonomics and reachability analyses to develop fabrication and processing flows for spacecraft and launch vehicle ground support operations and to optimize mission architecture and vehicle design subject to realistic constraints. Demonstrations: Immediate aerospace applications to be demonstrated include developing streamlined processing flows for Reusable Space Transportation Systems and Atlas Launch Vehicle operations and Mars Polar Lander visual work instructions. 
Long-range goals include future international human and robotic space exploration missions such as the development of a Mars Reconnaissance Orbiter and Lunar Base construction scenarios. Innovative solutions utilizing Immersive Visualization provide the key to streamlining the mission planning and optimizing engineering design phases of future aerospace missions.

  14. Feasibility study on mental healthcare using indoor plants for office workers

    NASA Astrophysics Data System (ADS)

    Kubota, Tsuyoshi; Matsumoto, Hiroshi; Genjo, Kaori; Nakano, Takaoki

    2017-10-01

    In recent years, office workers' stress has become a recognized problem affecting their intellectual productivity. As one strategy for mitigating stress at work, the introduction of indoor plants into offices has been studied extensively. The psychological and physiological effects of indoor plants are expected to mitigate office workers' stress. Green amenity effects such as improvement of productivity, control of the indoor thermal environment, relaxation and recovery of visual fatigue, and improvement of air quality are also expected. In this study, a field investigation on the green amenity effects of indoor plants on office workers' psychological and physiological responses in an actual office was conducted and discussed. This paper describes the measurement results of the physical environment and workers' psychological and physiological responses under the condition with shelves of indoor plants installed in an office room. It was suggested that indoor plants such as mint, basil and begonia, and a combination of red and green plants, were effective for mitigating workers' stress.

  15. Using animation quality metric to improve efficiency of global illumination computation for dynamic environments

    NASA Astrophysics Data System (ADS)

    Myszkowski, Karol; Tawara, Takehiro; Seidel, Hans-Peter

    2002-06-01

    In this paper, we consider applications of perception-based video quality metrics to improve the performance of global lighting computations for dynamic environments. For this purpose we extend the Visible Difference Predictor (VDP) developed by Daly to handle computer animations. We incorporate into the VDP the spatio-velocity CSF model developed by Kelly. The CSF model requires data on the velocity of moving patterns across the image plane. We use the 3D image warping technique to compensate for the camera motion, and we conservatively assume that the motion of animated objects (usually strong attractors of the visual attention) is fully compensated by the smooth pursuit eye motion. Our global illumination solution is based on stochastic photon tracing and takes advantage of temporal coherence of lighting distribution, by processing photons both in the spatial and temporal domains. The VDP is used to keep noise inherent in stochastic methods below the sensitivity level of the human observer. As a result a perceptually-consistent quality across all animation frames is obtained.
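    A spatio-velocity CSF of the kind referenced can be written down directly. The constants below follow one commonly cited parameterization of Kelly's model (as adapted by Daly); treat them as assumptions to be checked against the original papers rather than the exact values used in this work:

```python
import numpy as np

def spatio_velocity_csf(rho, v):
    """Contrast sensitivity as a function of spatial frequency rho
    (cycles/degree) and retinal velocity v (degrees/second, v > 0).
    Constants are from one published parameterization and are assumptions."""
    s1, s2 = 6.1, 7.3
    p1 = 45.9
    c0, c1, c2 = 1.14, 0.67, 1.7
    k = s1 + s2 * abs(np.log10(c2 * v / 3.0)) ** 3
    rho_max = p1 / (c2 * v + 2.0)    # peak sensitivity shifts with velocity
    return (k * c0 * c2 * v * (2 * np.pi * c1 * rho) ** 2
            * np.exp(-4 * np.pi * c1 * rho / rho_max))
```

    The exponential roll-off means high spatial frequencies on fast-moving surfaces contribute little to perceived differences, which is what lets the renderer tolerate more noise in those regions.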

  16. Visual exploration and analysis of human-robot interaction rules

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Boyles, Michael J.

    2013-01-01

    We present a novel interaction paradigm for the visual exploration, manipulation and analysis of human-robot interaction (HRI) rules; our development is implemented using a visual programming interface and exploits key techniques drawn from both information visualization and visual data mining to facilitate the interaction design and knowledge discovery process. HRI is often concerned with manipulations of multi-modal signals, events, and commands that form various kinds of interaction rules. Depicting, manipulating and sharing such design-level information is a compelling challenge. Furthermore, the closed loop between HRI programming and knowledge discovery from empirical data is a relatively long cycle. This, in turn, makes design-level verification nearly impossible to perform in an earlier phase. In our work, we exploit a drag-and-drop user interface and visual languages to support depicting responsive behaviors from social participants when they interact with their partners. For our principal test case of gaze-contingent HRI interfaces, this permits us to program and debug the robots' responsive behaviors through a graphical data-flow chart editor. We exploit additional program manipulation interfaces to provide still further improvement to our programming experience: by simulating the interaction dynamics between a human and a robot behavior model, we allow the researchers to generate, trace and study the perception-action dynamics with a social interaction simulation to verify and refine their designs. Finally, we extend our visual manipulation environment with a visual data-mining tool that allows the user to investigate interesting phenomena such as joint attention and sequential behavioral patterns from multiple multi-modal data streams. We have created instances of HRI interfaces to evaluate and refine our development paradigm. 
As far as we are aware, this paper reports the first program manipulation paradigm that integrates visual programming interfaces, information visualization, and visual data mining methods to facilitate designing, comprehending, and evaluating HRI interfaces.

  17. The Resource, Spring 2002

    DTIC Science & Technology

    2002-01-01

    wrappers to other widely used languages, namely TCL/TK, Java, and Python. VTK is very powerful and covers polygonal models and image processing classes... Large Data Visualization and Rendering; Information Visualization for Beginners; Rendering and Visualization in Parallel Environments

  18. Estimation of Visual Maps with a Robot Network Equipped with Vision Sensors

    PubMed Central

    Gil, Arturo; Reinoso, Óscar; Ballesta, Mónica; Juliá, Miguel; Payá, Luis

    2010-01-01

    In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves along the environment, obtains measurements with its sensors and uses them to construct a model of the space where it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows the robots to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional position of the visual landmarks. In addition, we consider that each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the position of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment. PMID:22399930
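    The Rao-Blackwellized structure (a particle per robot path, with an independent small Kalman filter per landmark conditioned on that path) can be sketched in simplified form. This toy version uses a single robot, 2-D positions, and direct relative-position measurements rather than the paper's stereo observation model; all noise parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

class Particle:
    def __init__(self, pose):
        self.pose = np.array(pose, dtype=float)   # robot (x, y)
        self.lm_mean = {}    # landmark id -> estimated 2-D position
        self.lm_cov = {}     # landmark id -> 2x2 covariance
        self.weight = 1.0

def predict(particles, control, motion_noise=0.05):
    """Sample each particle's new pose from the motion model."""
    for p in particles:
        p.pose += control + rng.normal(0, motion_noise, 2)

def update(particles, lm_id, rel_meas, meas_noise=0.1):
    """Per-particle Kalman update of one landmark from a relative measurement."""
    R = np.eye(2) * meas_noise**2
    for p in particles:
        z = p.pose + rel_meas            # landmark observation in world frame
        if lm_id not in p.lm_mean:       # first sighting: initialize the filter
            p.lm_mean[lm_id] = z.copy()
            p.lm_cov[lm_id] = R.copy()
            continue
        S = p.lm_cov[lm_id] + R                       # innovation covariance
        innov = z - p.lm_mean[lm_id]
        K = p.lm_cov[lm_id] @ np.linalg.inv(S)        # Kalman gain (H = I)
        p.lm_mean[lm_id] = p.lm_mean[lm_id] + K @ innov
        p.lm_cov[lm_id] = (np.eye(2) - K) @ p.lm_cov[lm_id]
        p.weight *= np.exp(-0.5 * innov @ np.linalg.inv(S) @ innov)

def resample(particles):
    """Draw a new particle set proportional to the importance weights."""
    w = np.array([p.weight for p in particles])
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    out = []
    for i in idx:
        src = particles[i]
        q = Particle(src.pose.copy())
        q.lm_mean = {k: v.copy() for k, v in src.lm_mean.items()}
        q.lm_cov = {k: v.copy() for k, v in src.lm_cov.items()}
        out.append(q)
    return out
```

    The multi-robot case in the paper extends this by letting measurements from every robot in the network update the shared landmark estimates, and by attaching a visual descriptor to each landmark for data association.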

  20. Enhancing astronaut performance using sensorimotor adaptability training.

    PubMed

    Bloomberg, Jacob J; Peters, Brian T; Cohen, Helen S; Mulavara, Ajitkumar P

    2015-01-01

    Astronauts experience disturbances in balance and gait function when they return to Earth. The highly plastic human brain enables individuals to modify their behavior to match the prevailing environment. Subjects participating in specially designed variable sensory challenge training programs can enhance their ability to rapidly adapt to novel sensory situations. This is useful in our application because we aim to train astronauts to rapidly formulate effective strategies to cope with the balance and locomotor challenges associated with new gravitational environments, enhancing their ability to "learn to learn." We do this by coupling various combinations of sensorimotor challenges with treadmill walking. A unique training system has been developed that comprises a treadmill mounted on a motion base to produce movement of the support surface during walking. This system provides challenges to gait stability. Additional sensory variation and challenge are imposed with a virtual visual scene that presents subjects with various combinations of discordant visual information during treadmill walking. This experience allows them to practice resolving challenging and conflicting novel sensory information to improve their ability to adapt rapidly. Information obtained from this work will inform the design of the next generation of sensorimotor countermeasures for astronauts.

  1. 4D microscope-integrated OCT improves accuracy of ophthalmic surgical maneuvers

    NASA Astrophysics Data System (ADS)

    Carrasco-Zevallos, Oscar; Keller, Brenton; Viehland, Christian; Shen, Liangbo; Todorich, Bozho; Shieh, Christine; Kuo, Anthony; Toth, Cynthia; Izatt, Joseph A.

    2016-03-01

    Ophthalmic surgeons manipulate micron-scale tissues using stereopsis through an operating microscope and instrument shadowing for depth perception. While ophthalmic microsurgery has benefitted from rapid advances in instrumentation and techniques, the basic principles of the stereo operating microscope have not changed since the 1930s. Optical Coherence Tomography (OCT) has revolutionized ophthalmic imaging and is now the gold standard for preoperative and postoperative evaluation of most retinal and many corneal procedures. We and others have developed initial microscope-integrated OCT (MIOCT) systems for concurrent OCT and operating microscope imaging, but these are limited to 2D real-time imaging and require offline post-processing for 3D rendering and visualization. Our previously presented 4D MIOCT system can record and display the 3D surgical field stereoscopically through the microscope oculars using a dual-channel heads-up display (HUD) at up to 10 micron-scale volumes per second. In this work, we show that 4D MIOCT guidance improves the accuracy of depth-based microsurgical maneuvers (with statistical significance) in mock surgery trials in a wet lab environment. Additionally, 4D MIOCT was successfully performed in 38/45 (84%) posterior and 14/14 (100%) anterior eye human surgeries, and revealed previously unrecognized lesions that were invisible through the operating microscope. These lesions, such as residual and potentially damaging retinal deformation during pathologic membrane peeling, were visualized in real-time by the surgeon. Our integrated system provides an enhanced 4D surgical visualization platform that can improve current ophthalmic surgical practice and may help develop and refine future microsurgical techniques.

  2. Figure ground discrimination in age-related macular degeneration.

    PubMed

    Tran, Thi Ha Chau; Guyader, Nathalie; Guerin, Anne; Despretz, Pascal; Boucart, Muriel

    2011-03-01

    To investigate impairment in discriminating a figure from its background and to study its relation to visual acuity and lesion size in patients with neovascular age-related macular degeneration (AMD). Seventeen patients with neovascular AMD and visual acuity <20/50 were included. Seventeen age-matched healthy subjects participated as controls. Complete ophthalmologic examination was performed on all participants. The stimuli were photographs of scenes containing animals (targets) or other objects (distractors), displayed on a computer monitor. Performance was compared in four background conditions: the target in the natural scene; the target isolated on a white background; the target separated by a white space from a structured scene; the target separated by a white space from a nonstructured, shapeless background. Target discriminability (d') was recorded. Performance was lower for patients than for controls. For the patients, it was easier to detect the target when it was separated from its background (under isolated, structured, and nonstructured conditions) than when it was located in a scene. Performance improved in patients with increasing exposure time but remained below that of controls. Correlations were found between visual acuity, lesion size, and sensitivity for patients. Figure/ground segregation is impaired in patients with AMD. A white space surrounding an object is sufficient to improve the object's detection and to facilitate figure/ground segregation. These results may have practical applications to the rehabilitation of the environment in patients with AMD.
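Target discriminability (d') in this paradigm is the standardized difference between the hit rate and the false-alarm rate. A minimal sketch of the computation, using hypothetical trial counts and a standard log-linear correction for extreme rates (neither taken from the paper):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate). The +0.5/+1.0 log-linear
    correction keeps the z-transform finite when a raw rate is 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical observer: 40 animal targets, 40 distractor scenes.
print(round(d_prime(34, 6, 8, 32), 2))
```

Chance performance (equal hit and false-alarm rates) gives d' = 0; better discrimination pushes d' higher regardless of response bias.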

  3. Long-Term Visual Training Increases Visual Acuity and Long-Term Monocular Deprivation Promotes Ocular Dominance Plasticity in Adult Standard Cage-Raised Mice.

    PubMed

    Hosang, Leon; Yusifov, Rashad; Löwel, Siegrid

    2018-01-01

    For routine behavioral tasks, mice predominantly rely on olfactory cues and tactile information. In contrast, their visual capabilities appear rather restricted, raising the question whether they can improve if vision gets more behaviorally relevant. We therefore performed long-term training using the visual water task (VWT): adult standard cage (SC)-raised mice were trained to swim toward a rewarded grating stimulus, so that using visual information allowed them to avoid excessive swimming toward nonrewarded stimuli. Indeed, and in contrast to old mice raised in a generally enriched environment (Greifzu et al., 2016), long-term VWT training increased visual acuity (VA) on average by more than 30% to 0.82 cycles per degree (cyc/deg). In an individual animal, VA even increased to 1.49 cyc/deg, i.e., beyond the rat range of VAs. Since visual experience enhances the spatial frequency threshold of the optomotor (OPT) reflex of the open eye after monocular deprivation (MD), we also quantified monocular vision after VWT training. Monocular VA did not increase reliably, and eye reopening did not initiate a decline to pre-MD values as observed by optomotry; VA values rather increased with continued VWT training. Thus, optomotry and the VWT measure different parameters of mouse spatial vision. Finally, we tested whether long-term MD induced ocular dominance (OD) plasticity in the visual cortex of adult [postnatal day (P)162-P182] SC-raised mice. This was indeed the case: 40-50 days of MD induced OD shifts toward the open eye in both VWT-trained and, surprisingly, also in age-matched mice without VWT training. These data indicate that (1) long-term VWT training increases adult mouse VA, and (2) long-term MD induces OD shifts also in adult SC-raised mice.

  4. Electrophysiological measurement of interest during walking in a simulated environment.

    PubMed

    Takeda, Yuji; Okuma, Takashi; Kimura, Motohiro; Kurata, Takeshi; Takenaka, Takeshi; Iwaki, Sunao

    2014-09-01

    A reliable neuroscientific technique for objectively estimating the degree of interest in a real environment is currently required in the research fields of neuroergonomics and neuroeconomics. Toward the development of such a technique, the present study explored electrophysiological measures that reflect an observer's interest in a nearly-real visual environment. Participants were asked to walk through a simulated shopping mall and the attractiveness of the shopping mall was manipulated by opening and closing the shutters of stores. During the walking task, participants were exposed to task-irrelevant auditory probes (two-stimulus oddball sequence). The results showed a smaller P2/early P3a component of task-irrelevant auditory event-related potentials and a larger lambda response of eye-fixation-related potentials in an interesting environment (i.e., open-shutter condition) than in a boring environment (i.e., closed-shutter condition); these findings can be reasonably explained by supposing that participants allocated more attentional resources to visual information in an interesting environment than in a boring environment, and thus residual attentional resources that could be allocated to task-irrelevant auditory probes were reduced. The P2/early P3a component and the lambda response may be useful measures of interest in a real visual environment.

  5. BasinVis 1.0: A MATLAB®-based program for sedimentary basin subsidence analysis and visualization

    NASA Astrophysics Data System (ADS)

    Lee, Eun Young; Novotny, Johannes; Wagreich, Michael

    2016-06-01

    Stratigraphic and structural mapping is important to understand the internal structure of sedimentary basins. Subsidence analysis provides significant insights for basin evolution. We designed a new software package to process and visualize the stratigraphic setting and subsidence evolution of sedimentary basins from well data. BasinVis 1.0 is implemented in MATLAB®, a multi-paradigm numerical computing environment, and employs two numerical methods: interpolation and subsidence analysis. Five interpolation methods (linear, natural, cubic spline, Kriging, and thin-plate spline) are provided for surface modeling. The subsidence analysis consists of decompaction and backstripping techniques. BasinVis 1.0 incorporates five main processing steps: (1) setup (study area and stratigraphic units), (2) loading well data, (3) stratigraphic setting visualization, (4) subsidence parameter input, and (5) subsidence analysis and visualization. For in-depth analysis, our software provides cross-section and dip-slip fault backstripping tools. The graphical user interface guides users through the workflow and provides tools to analyze and export the results. Interpolation and subsidence results are cached to minimize redundant computations and improve the interactivity of the program. All 2D and 3D visualizations are created using MATLAB plotting functions, which enables users to fine-tune the results using the full range of available plot options in MATLAB. We demonstrate all functions in a case study of Miocene sediments in the central Vienna Basin.
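The decompaction and backstripping steps named in this abstract follow standard textbook relations: an exponential porosity-depth law for decompaction, and Airy isostasy for removing the sediment load to recover tectonic subsidence. A one-layer sketch of both, with illustrative parameter values that are not taken from BasinVis:

```python
import math

# Densities for mantle, water, and sediment grains (kg/m3); illustrative.
RHO_M, RHO_W, RHO_G = 3300.0, 1030.0, 2650.0

def decompacted_thickness(z_top, z_bot, phi0, c, new_top):
    """Move a layer buried at [z_top, z_bot] to a new top depth, conserving
    grain volume under the porosity law phi(z) = phi0 * exp(-c * z)."""
    # Grain (solid) thickness: layer thickness minus integrated porosity.
    solid = (z_bot - z_top) - (phi0 / c) * (math.exp(-c * z_top) - math.exp(-c * z_bot))
    # Solve for the new thickness T by fixed-point iteration.
    T = z_bot - z_top
    for _ in range(50):
        T = solid + (phi0 / c) * (math.exp(-c * new_top) - math.exp(-c * (new_top + T)))
    return T

def tectonic_subsidence(thickness, phi_mean):
    """Airy-isostatic backstripping, ignoring paleobathymetry and sea level."""
    rho_s = phi_mean * RHO_W + (1 - phi_mean) * RHO_G   # bulk sediment density
    return thickness * (RHO_M - rho_s) / (RHO_M - RHO_W)

# A 500 m layer buried at 1000-1500 m, restored to the surface:
T0 = decompacted_thickness(1000.0, 1500.0, phi0=0.5, c=5e-4, new_top=0.0)
print(round(T0, 1))                       # thicker than the buried 500 m
print(round(tectonic_subsidence(T0, 0.4), 1))
```

Real workflows repeat this layer by layer down the well at each time step and add paleobathymetry and sea-level terms, which are omitted here for brevity.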

  6. Focal and Ambient Processing of Built Environments: Intellectual and Atmospheric Experiences of Architecture

    PubMed Central

    Rooney, Kevin K.; Condia, Robert J.; Loschky, Lester C.

    2017-01-01

    Neuroscience has well established that human vision divides into the central and peripheral fields of view. Central vision extends from the point of gaze (where we are looking) out to about 5° of visual angle (the width of one’s fist at arm’s length), while peripheral vision is the vast remainder of the visual field. These visual fields project to the parvo and magno ganglion cells, which process distinctly different types of information from the world around us and project that information to the ventral and dorsal visual streams, respectively. Building on the dorsal/ventral stream dichotomy, we can further distinguish between focal processing of central vision, and ambient processing of peripheral vision. Thus, our visual processing of and attention to objects and scenes depends on how and where these stimuli fall on the retina. The built environment is no exception to these dependencies, specifically in terms of how focal object perception and ambient spatial perception create different types of experiences we have with built environments. We argue that these foundational mechanisms of the eye and the visual stream are limiting parameters of architectural experience. We hypothesize that people experience architecture in two basic ways based on these visual limitations: by intellectually assessing architecture consciously through focal object processing and assessing architecture in terms of atmosphere through pre-conscious ambient spatial processing. Furthermore, these separate ways of processing architectural stimuli operate in parallel throughout the visual perceptual system. Thus, a more comprehensive understanding of architecture must take into account that built environments are stimuli that are treated differently by focal and ambient vision, which enable intellectual analysis of architectural experience versus the experience of architectural atmosphere, respectively. 
We offer this theoretical model to help advance a more precise understanding of the experience of architecture, which can be tested through future experimentation. PMID:28360867

  7. Focal and Ambient Processing of Built Environments: Intellectual and Atmospheric Experiences of Architecture.

    PubMed

    Rooney, Kevin K; Condia, Robert J; Loschky, Lester C

    2017-01-01

    Neuroscience has well established that human vision divides into the central and peripheral fields of view. Central vision extends from the point of gaze (where we are looking) out to about 5° of visual angle (the width of one's fist at arm's length), while peripheral vision is the vast remainder of the visual field. These visual fields project to the parvo and magno ganglion cells, which process distinctly different types of information from the world around us and project that information to the ventral and dorsal visual streams, respectively. Building on the dorsal/ventral stream dichotomy, we can further distinguish between focal processing of central vision, and ambient processing of peripheral vision. Thus, our visual processing of and attention to objects and scenes depends on how and where these stimuli fall on the retina. The built environment is no exception to these dependencies, specifically in terms of how focal object perception and ambient spatial perception create different types of experiences we have with built environments. We argue that these foundational mechanisms of the eye and the visual stream are limiting parameters of architectural experience. We hypothesize that people experience architecture in two basic ways based on these visual limitations: by intellectually assessing architecture consciously through focal object processing and assessing architecture in terms of atmosphere through pre-conscious ambient spatial processing. Furthermore, these separate ways of processing architectural stimuli operate in parallel throughout the visual perceptual system. Thus, a more comprehensive understanding of architecture must take into account that built environments are stimuli that are treated differently by focal and ambient vision, which enable intellectual analysis of architectural experience versus the experience of architectural atmosphere, respectively. 
We offer this theoretical model to help advance a more precise understanding of the experience of architecture, which can be tested through future experimentation.

  8. Novel 3D/VR interactive environment for MD simulations, visualization and analysis.

    PubMed

    Doblack, Benjamin N; Allis, Tim; Dávila, Lilian P

    2014-12-18

    The increasing development of computing (hardware and software) in the last decades has impacted scientific research in many fields including materials science, biology, chemistry and physics among many others. A new computational system for the accurate and fast simulation and 3D/VR visualization of nanostructures is presented here, using the open-source molecular dynamics (MD) computer program LAMMPS. This alternative computational method uses modern graphics processors, NVIDIA CUDA technology and specialized scientific codes to overcome processing speed barriers common to traditional computing methods. In conjunction with a virtual reality system used to model materials, this enhancement allows the addition of accelerated MD simulation capability. The motivation is to provide a novel research environment which simultaneously allows visualization, simulation, modeling and analysis. The research goal is to investigate the structure and properties of inorganic nanostructures (e.g., silica glass nanosprings) under different conditions using this innovative computational system. The work presented outlines a description of the 3D/VR Visualization System and basic components, an overview of important considerations such as the physical environment, details on the setup and use of the novel system, a general procedure for the accelerated MD enhancement, technical information, and relevant remarks. The impact of this work is the creation of a unique computational system combining nanoscale materials simulation, visualization and interactivity in a virtual environment, which is both a research and teaching instrument at UC Merced.

  9. Novel 3D/VR Interactive Environment for MD Simulations, Visualization and Analysis

    PubMed Central

    Doblack, Benjamin N.; Allis, Tim; Dávila, Lilian P.

    2014-01-01

    The increasing development of computing (hardware and software) in the last decades has impacted scientific research in many fields including materials science, biology, chemistry and physics among many others. A new computational system for the accurate and fast simulation and 3D/VR visualization of nanostructures is presented here, using the open-source molecular dynamics (MD) computer program LAMMPS. This alternative computational method uses modern graphics processors, NVIDIA CUDA technology and specialized scientific codes to overcome processing speed barriers common to traditional computing methods. In conjunction with a virtual reality system used to model materials, this enhancement allows the addition of accelerated MD simulation capability. The motivation is to provide a novel research environment which simultaneously allows visualization, simulation, modeling and analysis. The research goal is to investigate the structure and properties of inorganic nanostructures (e.g., silica glass nanosprings) under different conditions using this innovative computational system. The work presented outlines a description of the 3D/VR Visualization System and basic components, an overview of important considerations such as the physical environment, details on the setup and use of the novel system, a general procedure for the accelerated MD enhancement, technical information, and relevant remarks. The impact of this work is the creation of a unique computational system combining nanoscale materials simulation, visualization and interactivity in a virtual environment, which is both a research and teaching instrument at UC Merced. PMID:25549300

  10. Wayfinding and Glaucoma: A Virtual Reality Experiment.

    PubMed

    Daga, Fábio B; Macagno, Eduardo; Stevenson, Cory; Elhosseiny, Ahmed; Diniz-Filho, Alberto; Boer, Erwin R; Schulze, Jürgen; Medeiros, Felipe A

    2017-07-01

    Wayfinding, the process of determining and following a route between an origin and a destination, is an integral part of everyday tasks. The purpose of this study was to investigate the impact of glaucomatous visual field loss on wayfinding behavior using an immersive virtual reality (VR) environment. This cross-sectional study included 31 glaucomatous patients and 20 healthy subjects without evidence of overall cognitive impairment. Wayfinding experiments were modeled after the Morris water maze navigation task and conducted in an immersive VR environment. Two rooms were built varying only in the complexity of the visual scene in order to promote allocentric-based (room A, with multiple visual cues) versus egocentric-based (room B, with a single visual cue) spatial representations of the environment. Wayfinding tasks in each room consisted of revisiting previously visible targets that subsequently became invisible. For room A, glaucoma patients spent on average 35.0 seconds to perform the wayfinding task, whereas healthy subjects spent an average of 24.4 seconds (P = 0.001). For room B, no statistically significant difference was seen in average time to complete the task (26.2 seconds versus 23.4 seconds, respectively; P = 0.514). For room A, each 1-dB worse binocular mean sensitivity was associated with a 3.4% (P = 0.001) increase in time to complete the task. Glaucoma patients performed significantly worse on allocentric-based wayfinding tasks conducted in a VR environment, suggesting visual field loss may affect the construction of spatial cognitive maps relevant to successful wayfinding. VR environments may represent a useful approach for assessing functional vision endpoints for clinical trials of emerging therapies in ophthalmology.

  11. Multimodal fusion of polynomial classifiers for automatic person recognition

    NASA Astrophysics Data System (ADS)

    Broun, Charles C.; Zhang, Xiaozheng

    2001-03-01

    With the prevalence of the information age, privacy and personalization are forefront in today's society. As such, biometrics are viewed as essential components of current evolving technological systems. Consumers demand unobtrusive and non-invasive approaches. In our previous work, we have demonstrated a speaker verification system that meets these criteria. However, there are additional constraints for fielded systems. The required recognition transactions are often performed in adverse environments and across diverse populations, necessitating robust solutions. There are two significant problem areas in current generation speaker verification systems. The first is the difficulty in acquiring clean audio signals in all environments without encumbering the user with a head-mounted close-talking microphone. Second, unimodal biometric systems do not work with a significant percentage of the population. To combat these issues, multimodal techniques are being investigated to improve system robustness to environmental conditions, as well as improve overall accuracy across the population. We propose a multimodal approach that builds on our current state-of-the-art speaker verification technology. In order to maintain the transparent nature of the speech interface, we focus on optical sensing technology to provide the additional modality, giving us an audio-visual person recognition system. For the audio domain, we use our existing speaker verification system. For the visual domain, we focus on lip motion. This is chosen, rather than static face or iris recognition, because it provides dynamic information about the individual. In addition, the lip dynamics can aid speech recognition to provide liveness testing. The visual processing method makes use of both color and edge information, combined within a Markov random field (MRF) framework, to localize the lips. Geometric features are extracted and input to a polynomial classifier for the person recognition process. 
A late integration approach, based on a probabilistic model, is employed to combine the two modalities. The system is tested on the XM2VTS database combined with AWGN in the audio domain over a range of signal-to-noise ratios.
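The late-integration step described above can be sketched, under a simple independence assumption, as a reliability-weighted sum of per-modality scores interpreted as log-likelihood ratios. The weights, scores, and threshold below are illustrative, not the paper's trained fusion model:

```python
# Late fusion of audio and visual classifier scores. Each score is treated
# as a log-likelihood ratio (positive = evidence for the claimed identity);
# under independence the fused evidence is a weighted sum.

def fuse(audio_llr, visual_llr, w_audio=0.6, w_visual=0.4):
    """Weighted-sum fusion of per-modality log-likelihood-ratio scores."""
    return w_audio * audio_llr + w_visual * visual_llr

def accept(fused_llr, threshold=0.0):
    """Verification decision on the fused score."""
    return fused_llr > threshold

# Weak (noisy) audio evidence rescued by confident visual evidence:
score = fuse(audio_llr=-0.2, visual_llr=1.5)
print(accept(score))  # True: 0.6*(-0.2) + 0.4*1.5 = 0.48 > 0
```

In practice the weights would be tuned to the relative reliability of each modality, e.g. down-weighting audio as the signal-to-noise ratio drops, which is how such a scheme gains robustness over either modality alone.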

  12. Retinal Prosthesis System for Advanced Retinitis Pigmentosa: A Health Technology Assessment

    PubMed Central

    Lee, Christine; Tu, Hong Anh; Weir, Mark; Holubowich, Corinne

    2016-01-01

    Background Retinitis pigmentosa is a group of genetic disorders that involves the breakdown and loss of photoreceptors in the retina, resulting in progressive retinal degeneration and eventual blindness. The Argus II Retinal Prosthesis System is the only currently available surgical implantable device approved by Health Canada. It has been shown to improve visual function in patients with severe visual loss from advanced retinitis pigmentosa. The objective of this analysis was to examine the clinical effectiveness, cost-effectiveness, budget impact, and safety of the Argus II system in improving visual function, as well as exploring patient experiences with the system. Methods We performed a systematic search of the literature for studies examining the effects of the Argus II retinal prosthesis system in patients with advanced retinitis pigmentosa, and appraised the evidence according to the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) Working Group criteria, focusing on visual function, functional outcomes, quality of life, and adverse events. We developed a Markov decision-analytic model to assess the cost-effectiveness of the Argus II system compared with standard care over a 10-year time horizon. We also conducted a 5-year budget impact analysis. We used a qualitative design and an interview methodology to examine patients’ lived experience, and we used a modified grounded theory methodology to analyze information from interviews. Transcripts were coded, and themes were compared against one another. Results One multicentre international study and one single-centre study were included in the clinical review. In both studies, patients showed improved visual function with the Argus II system. However, the sight-threatening surgical complication rate was substantial. 
In the base-case analysis, the Argus II system was cost-effective compared with standard care only if willingness-to-pay was more than $207,616 per quality-adjusted life-year. The 5-year budget impact of funding the Argus II system ranged from $800,404 to $837,596. Retinitis pigmentosa significantly affects people's ability to navigate physical and virtual environments. Argus II was described as enabling the fundamental elements of sight. As such, it had a positive impact on quality of life for people with retinitis pigmentosa. Conclusions Based on evidence of moderate quality, patients with advanced retinitis pigmentosa who were implanted with the Argus II retinal prosthesis system showed significant improvement in visual function, real-life functional outcomes, and quality of life, but there were complications associated with the surgery that could be managed through standard ophthalmologic treatments. The costs for the technology are high. PMID:27468325

  13. Retinal Prosthesis System for Advanced Retinitis Pigmentosa: A Health Technology Assessment.

    PubMed

    2016-01-01

    Retinitis pigmentosa is a group of genetic disorders that involves the breakdown and loss of photoreceptors in the retina, resulting in progressive retinal degeneration and eventual blindness. The Argus II Retinal Prosthesis System is the only currently available surgical implantable device approved by Health Canada. It has been shown to improve visual function in patients with severe visual loss from advanced retinitis pigmentosa. The objective of this analysis was to examine the clinical effectiveness, cost-effectiveness, budget impact, and safety of the Argus II system in improving visual function, as well as exploring patient experiences with the system. We performed a systematic search of the literature for studies examining the effects of the Argus II retinal prosthesis system in patients with advanced retinitis pigmentosa, and appraised the evidence according to the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) Working Group criteria, focusing on visual function, functional outcomes, quality of life, and adverse events. We developed a Markov decision-analytic model to assess the cost-effectiveness of the Argus II system compared with standard care over a 10-year time horizon. We also conducted a 5-year budget impact analysis. We used a qualitative design and an interview methodology to examine patients' lived experience, and we used a modified grounded theory methodology to analyze information from interviews. Transcripts were coded, and themes were compared against one another. One multicentre international study and one single-centre study were included in the clinical review. In both studies, patients showed improved visual function with the Argus II system. However, the sight-threatening surgical complication rate was substantial. In the base-case analysis, the Argus II system was cost-effective compared with standard care only if willingness-to-pay was more than $207,616 per quality-adjusted life-year. 
The 5-year budget impact of funding the Argus II system ranged from $800,404 to $837,596. Retinitis pigmentosa significantly affects people's ability to navigate physical and virtual environments. Argus II was described as enabling the fundamental elements of sight. As such, it had a positive impact on quality of life for people with retinitis pigmentosa. Based on evidence of moderate quality, patients with advanced retinitis pigmentosa who were implanted with the Argus II retinal prosthesis system showed significant improvement in visual function, real-life functional outcomes, and quality of life, but there were complications associated with the surgery that could be managed through standard ophthalmologic treatments. The costs for the technology are high.
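The Markov decision-analytic structure described in this assessment can be sketched as a minimal two-state (alive/dead) cohort model with annual cycles, discounted costs and QALYs, and an incremental cost-effectiveness ratio. Every number below is illustrative, not an input from the HTA:

```python
# Minimal Markov cohort sketch of a cost-utility comparison between a
# device arm (upfront cost, higher utility) and standard care.

def run_cohort(years, annual_cost, utility, p_die, upfront=0.0, disc=0.03):
    """Accumulate discounted cost and QALYs for a cohort of size 1.0."""
    alive, cost, qaly = 1.0, upfront, 0.0
    for t in range(years):
        d = (1 + disc) ** -t          # discount factor for cycle t
        cost += alive * annual_cost * d
        qaly += alive * utility * d
        alive *= 1 - p_die            # transition to the absorbing state
    return cost, qaly

c1, q1 = run_cohort(10, 2000.0, 0.60, 0.02, upfront=150000.0)  # device arm
c0, q0 = run_cohort(10, 1000.0, 0.55, 0.02)                    # standard care
icer = (c1 - c0) / (q1 - q0)          # cost per QALY gained
print(round(icer))
```

The ICER is then compared against a willingness-to-pay threshold; real models add health states (e.g. surgical complications), state-specific transition probabilities, and probabilistic sensitivity analysis.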

  14. Improving the performance of the amblyopic visual system

    PubMed Central

    Levi, Dennis M.; Li, Roger W.

    2008-01-01

    Experience-dependent plasticity is closely linked with the development of sensory function; however, there is also growing evidence for plasticity in the adult visual system. This review re-examines the notion of a sensitive period for the treatment of amblyopia in the light of recent experimental and clinical evidence for neural plasticity. One recently proposed method for improving the effectiveness and efficiency of treatment that has received considerable attention is ‘perceptual learning’. Specifically, both children and adults with amblyopia can improve their perceptual performance through extensive practice on a challenging visual task. The results suggest that perceptual learning may be effective in improving a range of visual performance and, importantly, the improvements may transfer to visual acuity. Recent studies have sought to explore the limits and time course of perceptual learning as an adjunct to occlusion and to investigate the neural mechanisms underlying the visual improvement. These findings, along with the results of new clinical trials, suggest that it might be time to reconsider our notions about neural plasticity in amblyopia. PMID:19008199

  15. Optical cylinder designs to increase the field of vision in the osteo-odonto-keratoprosthesis.

    PubMed

    Hull, C C; Liu, C S; Sciscio, A; Eleftheriadis, H; Herold, J

    2000-12-01

    The single optical cylinders used in the osteo-odonto-keratoprosthesis (OOKP) are known to produce very small visual fields. Values of 40 degrees are typically quoted. The purpose of this paper is to present designs for new optical cylinders that significantly increase the field of view and therefore improve the visual rehabilitation of patients having an OOKP. Computer ray-tracing techniques were used to design and analyse improved one- and two-piece optical cylinders made from polymethyl methacrylate. All designs were required to have a potential visual acuity of 6/6 before consideration was given to the visual field and optimising off-axis image quality. Aspheric surfaces were used where this significantly improved off-axis image quality. Single optical cylinders, with increased posterior cylinder (intraocular) diameters, gave an increase in the theoretical visual field of 18% (from 76 degrees to 90 degrees) over current designs. Two-piece designs based on an inverted telephoto principle gave theoretical field angles over 120 degrees. Aspheric surfaces were shown to improve the off-axis image quality while maintaining a potential visual acuity of at least 6/6. This may well increase the measured visual field by improving the retinal illuminance off-axis. Results demonstrate that it is possible to significantly increase the theoretical maximum visual field through OOKP optical cylinders. Such designs will improve the visual rehabilitation of patients undergoing this procedure.

  16. Object-Based Visual Attention in 8-Month-Old Infants: Evidence from an Eye-Tracking Study

    ERIC Educational Resources Information Center

    Bulf, Hermann; Valenza, Eloisa

    2013-01-01

    Visual attention is one of the infant's primary tools for gathering relevant information from the environment for further processing and learning. The space-based component of visual attention in infants has been widely investigated; however, the object-based component of visual attention has received scarce interest. This scarcity is…

  17. Colouring the Gaps in Learning Design: Aesthetics and the Visual in Learning

    ERIC Educational Resources Information Center

    Carroll, Fiona; Kop, Rita

    2016-01-01

    The visual is a dominant mode of information retrieval and understanding; however, the focus on the visual dimension of Technology Enhanced Learning (TEL) is still quite weak compared with its predominant focus on usability. To accommodate the future needs of the visual learner, designers of e-learning environments should advance the current…

  18. A proposed biophysical approach to Visual absorption capability (VAC)

    Treesearch

    W. C. Yeomans

    1979-01-01

    In British Columbia, visual analysis is in its formative stages and has only recently been accepted by Government as a resource component, notably within the Resource Analysis Branch, Ministry of Environment. Visual absorption capability (VAC) is an integral factor in visual resource assessment. VAC is examined by the author in terms of the degree to which it relates to...

  19. Application of Frameworks in the Analysis and (Re)design of Interactive Visual Learning Tools

    ERIC Educational Resources Information Center

    Liang, Hai-Ning; Sedig, Kamran

    2009-01-01

    Interactive visual learning tools (IVLTs) are software environments that encode and display information visually and allow learners to interact with the visual information. This article examines the application and utility of frameworks in the analysis and design of IVLTs at the micro level. Frameworks play an important role in any design. They…

  20. Streaming Visual Analytics Workshop Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, Kristin A.; Burtner, Edwin R.; Kritzstein, Brian P.

    How can we best enable users to understand complex emerging events and make appropriate assessments from streaming data? This was the central question addressed at a three-day workshop on streaming visual analytics, organized by Pacific Northwest National Laboratory for a government sponsor. It brought together forty researchers and subject matter experts from government, industry, and academia. This report summarizes the outcomes from that workshop. It describes elements of the vision for a streaming visual analytics environment and a set of important research directions needed to achieve this vision. Streaming data analysis is in many ways the analysis and understanding of change. However, current visual analytics systems usually focus on static data collections, meaning that dynamically changing conditions are not appropriately addressed. The envisioned mixed-initiative streaming visual analytics environment creates a collaboration between the analyst and the system to support the analysis process. It raises the level of discourse from low-level data records to higher-level concepts. The system supports the analyst’s rapid orientation and reorientation as situations change. It provides an environment to support the analyst’s critical thinking. It infers tasks and interests based on the analyst’s interactions. The system works as both an assistant and a devil’s advocate, finding relevant data and alerts as well as considering alternative hypotheses. Finally, the system supports sharing of findings with others. Making such an environment a reality requires research in several areas. The workshop discussions focused on four broad areas: support for critical thinking, visual representation of change, mixed-initiative analysis, and the use of narratives for analysis and communication.
