ERIC Educational Resources Information Center
Smith, Linda B.; Yu, Chen; Yoshida, Hanako; Fausey, Caitlin M.
2015-01-01
Head-mounted video cameras (with and without an eye camera to track gaze direction) are being increasingly used to study infants' and young children's visual environments and provide new and often unexpected insights about the visual world from a child's point of view. The challenge in using head cameras is principally conceptual and concerns the…
Meyer, Georg F.; Shao, Fei; White, Mark D.; Hopkins, Carl; Robotham, Antony J.
2013-01-01
Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPRs). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high-quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory, and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points that can be haptic, visual, and auditory reference signals; 2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses to laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high-fidelity virtual environments should mimic those seen in real situations, we propose to use the observed effect as a robust objective test for presence and fidelity in VR. PMID:23840760
Serchi, V; Peruzzi, A; Cereatti, A; Della Croce, U
2016-01-01
Knowledge of the visual strategies adopted while walking in cognitively engaging environments is extremely valuable. Analyzing gaze when a treadmill and a virtual reality environment are used as motor rehabilitation tools is therefore critical. Being completely unobtrusive, remote eye-trackers are the most appropriate way to measure the point of gaze. Still, point-of-gaze measurements are affected by experimental conditions such as head range of motion and visual stimuli. This study assesses the usability limits and measurement reliability of a remote eye-tracker during treadmill walking while visual stimuli are projected. During treadmill walking, the head remained within the remote eye-tracker's workspace. Generally, the quality of the point-of-gaze measurements declined as the distance from the remote eye-tracker increased, and data loss occurred for large gaze angles. The stimulus location (a dot-target) did not influence point-of-gaze accuracy, precision, or trackability during either standing or walking. Similar results were obtained when the dot-target was replaced by a static or moving 2D target and "region of interest" analysis was applied. These findings support the feasibility of using a remote eye-tracker for the analysis of gaze during treadmill walking in virtual reality environments.
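Accuracy and precision of point-of-gaze data of the kind assessed above are commonly summarized as the mean offset from a known target (accuracy) and the dispersion of successive samples (precision). The sketch below is a minimal illustration of these two metrics; the function name and the use of 2D screen coordinates are assumptions, not the authors' implementation:

```python
import math

def gaze_accuracy_precision(gaze_points, target):
    """Accuracy: mean Euclidean offset of gaze samples from the target.
    Precision: RMS of the sample-to-sample dispersion."""
    offsets = [math.dist(p, target) for p in gaze_points]
    accuracy = sum(offsets) / len(offsets)
    # RMS of consecutive inter-sample distances
    steps = [math.dist(a, b) for a, b in zip(gaze_points, gaze_points[1:])]
    precision = math.sqrt(sum(s * s for s in steps) / len(steps)) if steps else 0.0
    return accuracy, precision
```

With angular data, the same formulas apply to gaze angles instead of screen coordinates.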
Metric Scale Calculation for Visual Mapping Algorithms
NASA Astrophysics Data System (ADS)
Hanel, A.; Mitschke, A.; Boerner, R.; Van Opdenbosch, D.; Hoegner, L.; Brodie, D.; Stilla, U.
2018-05-01
Visual SLAM algorithms allow localizing the camera by mapping its environment as a point cloud based on visual cues. To obtain the camera locations in a metric coordinate system, the metric scale of the point cloud has to be known. This contribution describes a method to calculate the metric scale for a point cloud of an indoor environment, like a parking garage, by fusing multiple individual scale values. The individual scale values are calculated from structures and objects with a priori known metric extension, which can be identified in the unscaled point cloud. Extensions of building structures, like the driving lane or the room height, are derived from density peaks in the point distribution. The extensions of objects, like traffic signs with a known metric size, are derived using projections of their detections in images onto the point cloud. The method is tested with synthetic image sequences of a drive with a front-looking mono camera through a virtual 3D model of a parking garage. It has been shown that each individual scale value either improves the robustness of the fused scale value or reduces its error. The error of the fused scale is comparable to other recent works.
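A standard way to fuse several independent scale estimates, each with its own uncertainty, is inverse-variance weighting; the paper's exact fusion scheme is not reproduced here, so the following is only a plausible sketch with hypothetical names:

```python
def fuse_scales(estimates):
    """Fuse individual metric-scale estimates by inverse-variance weighting.
    `estimates` is a list of (scale_value, variance) pairs; returns the
    fused scale and its (reduced) variance."""
    weights = [1.0 / var for _, var in estimates]
    fused = sum(w * s for (s, _), w in zip(estimates, weights)) / sum(weights)
    fused_var = 1.0 / sum(weights)  # adding estimates always shrinks variance
    return fused, fused_var
```

The shrinking fused variance mirrors the paper's observation that each additional scale value improves robustness or reduces error.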
ERIC Educational Resources Information Center
Lorenzo, Gonzalo; Pomares, Jorge; Lledo, Asuncion
2013-01-01
This paper presents the use of immersive virtual reality systems in the educational intervention with Asperger students. The starting points of this study are features of these students' cognitive style that requires an explicit teaching style supported by visual aids and highly structured environments. The proposed immersive virtual reality…
Maravall, Darío; de Lope, Javier; Fuentes, Juan P
2017-01-01
We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks. PMID:28900394
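The image entropy driving landmark detection and obstacle avoidance above is the Shannon entropy of the intensity histogram; a minimal sketch (hypothetical function name, plain-Python pixel list rather than a camera frame):

```python
import math
from collections import Counter

def image_entropy(pixels):
    """Shannon entropy (bits) of a grayscale intensity distribution.
    Low entropy suggests one dominant object or uniform background;
    high entropy suggests several different objects in view."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A robot could then threshold this value to decide between landmark matching and a collision-avoidance maneuver, per the scheme described above.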
How virtual reality works: illusions of vision in "real" and virtual environments
NASA Astrophysics Data System (ADS)
Stark, Lawrence W.
1995-04-01
Visual illusions abound in normal vision--illusions of clarity and completeness, of continuity in time and space, of presence and vivacity--and are part and parcel of the visual world in which we live. These illusions are discussed in terms of the human visual system, with its high-resolution fovea, moved from point to point in the visual scene by rapid saccadic eye movements (EMs). This sampling of visual information is supplemented by a low-resolution, wide peripheral field of view, especially sensitive to motion. Cognitive-spatial models controlling perception, imagery, and 'seeing' also control the EMs that shift the fovea in the Scanpath mode. These illusions provide for presence, the sense of being within an environment. They equally well lead to 'Telepresence,' the sense of being within a virtual display, especially if the operator is intensely interacting within an eye-hand and head-eye human-machine interface that provides for congruent visual and motor frames of reference. Interaction, immersion, and interest compel telepresence; intuitive functioning and engineered information flows can optimize human adaptation to the artificial new world of virtual reality, as virtual reality expands into entertainment, simulation, telerobotics, scientific visualization, and other professional work.
Effect of a moving optical environment on the subjective median.
DOT National Transportation Integrated Search
1971-04-01
The placement of a point in the median vertical plane under the influence of a moving optical environment was tested in 12 subjects. It was found that the median plane was displaced in the same direction as the movement of the visual environment when...
NASA Technical Reports Server (NTRS)
Saganti, P. B.; Zapp, E. N.; Wilson, J. W.; Cucinotta, F. A.
2001-01-01
The US Lab module of the International Space Station (ISS) is a primary working area where the crewmembers are expected to spend the majority of their time. Because of the directionality of radiation fields caused by the Earth shadow, trapped radiation pitch angle distribution, and inherent variations in the ISS shielding, a model is needed to account for these local variations in the radiation distribution. We present the calculated radiation dose (rem/yr) values for over 3,000 different points in the working area of the Lab module and estimated radiation dose values for over 25,000 different points in the human body for a given ambient radiation environment. These estimated radiation dose values are presented in a three-dimensional, animated, interactive visualization format. Such interactive animated visualization of the radiation distribution can be generated in near real-time to track changes in the radiation environment during the orbit precession of the ISS.
ViSBARD: Visual System for Browsing, Analysis and Retrieval of Data
NASA Astrophysics Data System (ADS)
Roberts, D. Aaron; Boller, Ryan; Rezapkin, V.; Coleman, J.; McGuire, R.; Goldstein, M.; Kalb, V.; Kulkarni, R.; Luckyanova, M.; Byrnes, J.; Kerbel, U.; Candey, R.; Holmes, C.; Chimiak, R.; Harris, B.
2018-04-01
ViSBARD interactively visualizes and analyzes space physics data. It provides an interactive integrated 3-D and 2-D environment to determine correlations between measurements across many spacecraft. It supports a variety of spacecraft data products and MHD models and is easily extensible to others. ViSBARD provides a way of visualizing multiple vector and scalar quantities as measured by many spacecraft at once. The data are displayed three-dimensionally along the spacecraft orbits, which may be rendered either as connected lines or as points. The data display allows the rapid determination of vector configurations, correlations between many measurements at multiple points, and global relationships. With the addition of magnetohydrodynamic (MHD) model data, this environment can also be used to validate simulation results against observed data, use simulated data to provide a global context for sparse observed data, and apply feature detection techniques to the simulated data.
Majdak, Piotr; Goupell, Matthew J; Laback, Bernhard
2010-02-01
The ability to localize sound sources in three-dimensional space was tested in humans. In Experiment 1, naive subjects listened to noises filtered with subject-specific head-related transfer functions. The tested conditions included the pointing method (head or manual pointing) and the visual environment (VE; darkness or virtual VE). The localization performance was not significantly different between the pointing methods. The virtual VE significantly improved the horizontal precision and reduced the number of front-back confusions. These results show the benefit of using a virtual VE in sound localization tasks. In Experiment 2, subjects were provided with sound localization training. Over the course of training, the performance improved for all subjects, with the largest improvements occurring during the first 400 trials. The improvements beyond the first 400 trials were smaller. After the training, there was still no significant effect of pointing method, showing that the choice of either head- or manual-pointing method plays a minor role in sound localization performance. The results of Experiment 2 reinforce the importance of perceptual training for at least 400 trials in sound localization studies.
An evaluation of attention models for use in SLAM
NASA Astrophysics Data System (ADS)
Dodge, Samuel; Karam, Lina
2013-12-01
In this paper we study the application of visual saliency models to the simultaneous localization and mapping (SLAM) problem. We consider visual SLAM, where the location of the camera and a map of the environment can be generated using images from a single moving camera. In visual SLAM, the interest point detector is of key importance. This detector must be invariant to certain image transformations so that features can be matched across different frames. Recent work has used a model of human visual attention to detect interest points; however, it is unclear which attention model is best suited for this purpose. To this end, we compare the performance of interest points from four saliency models (Itti, GBVS, RARE, and AWS) with the performance of four traditional interest point detectors (Harris, Shi-Tomasi, SIFT, and FAST). We evaluate these detectors under several different types of image transformation and find that the Itti saliency model, in general, achieves the best performance in terms of keypoint repeatability.
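Keypoint repeatability, the evaluation criterion named above, is typically the fraction of interest points re-detected after a known image transformation; a minimal sketch under that definition (function names and the pixel tolerance are assumptions, not the paper's protocol):

```python
import math

def repeatability(keypoints_ref, keypoints_warped, warp, tol=3.0):
    """Fraction of reference keypoints re-detected within `tol` pixels
    after mapping them through the known transformation `warp`.
    Keypoints are (x, y) tuples; `warp` maps reference to warped frame."""
    if not keypoints_ref:
        return 0.0
    matched = 0
    for kp in keypoints_ref:
        projected = warp(kp)
        if any(math.dist(projected, q) <= tol for q in keypoints_warped):
            matched += 1
    return matched / len(keypoints_ref)
```

In practice `warp` would be the homography or affine map used to generate the transformed test image.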
NASA Astrophysics Data System (ADS)
Hess, M. R.; Petrovic, V.; Kuester, F.
2017-08-01
Digital documentation of cultural heritage structures is increasingly common through the application of different imaging techniques. Many works have focused on the application of laser scanning and photogrammetry techniques for the acquisition of three-dimensional (3D) geometry detailing cultural heritage sites and structures. With an abundance of these 3D data assets, there must be a digital environment where these data can be visualized and analyzed. Presented here is a feedback-driven visualization framework that seamlessly enables interactive exploration and manipulation of massive point cloud data. The focus of this work is on the classification of different building materials with the goal of building more accurate as-built information models of historical structures. User-defined functions have been tested within the interactive point cloud visualization framework to evaluate automated and semi-automated classification of 3D point data. These functions include decisions based on observed color, laser intensity, normal vector, or local surface geometry. Multiple case studies are presented here to demonstrate the flexibility and utility of the presented point cloud visualization framework to achieve classification objectives.
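A user-defined classification function of the kind described, deciding on color, laser intensity, and normal vector per point, might look like the toy rule set below; the thresholds and material labels are invented for illustration, not taken from the case studies:

```python
def classify_point(color, intensity, normal):
    """Toy rule-based material label from per-point attributes:
    RGB in [0, 1], laser return intensity in [0, 1], unit normal (x, y, z)."""
    r, g, b = color
    nz = abs(normal[2])          # 1.0 for horizontal surfaces (floor/ceiling)
    if intensity > 0.8 and nz > 0.9:
        return "plaster_ceiling"  # bright, horizontal return
    if r > 0.5 and r > g and r > b:
        return "brick"            # red-dominant color
    if nz < 0.2:
        return "wall_generic"     # near-vertical surface
    return "unknown"
```

Such per-point rules can be mapped over millions of points interactively, which is the workflow the framework above supports.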
NASA Astrophysics Data System (ADS)
Viertler, Franz; Hajek, Manfred
2015-05-01
To overcome the challenge of helicopter flight in degraded visual environments, current research considers head-mounted displays (HMDs) with 3D-conformal (scene-linked) visual cues as the most promising display technology. For pilot-in-the-loop simulations with HMDs, a highly accurate registration of the augmented visual system is required. In rotorcraft flight simulators the outside visual cues are usually provided by a dome projection system, since a wide field-of-view (e.g. horizontally > 200° and vertically > 80°) is required, which can hardly be achieved with collimated viewing systems. However, optical see-through HMDs mostly do not have a focus equivalent to the distance from the pilot's eye-point position to the curved screen, which is also dependent on head motion. Hence, a dynamic vergence correction has been implemented to avoid binocular disparity. In addition, the parallax error induced by even small translational head motions is corrected with a head-tracking system so that the imagery is adjusted onto the projected screen. For this purpose, two options are presented. The correction can be achieved by rendering the view with yaw and pitch offset angles dependent on the deviation of the head position from the design eye-point of the spherical projection system. Alternatively, it can be solved by implementing a dynamic eye-point in the multi-channel projection system for the outside visual cues. Both options have been investigated for the integration of a binocular HMD into the Rotorcraft Simulation Environment (ROSIE) at the Technische Universitaet Muenchen. Pros and cons of both possibilities with regard to integration issues and usability in flight simulations are discussed.
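The first correction option, rendering with yaw and pitch offsets derived from the head's deviation from the design eye-point, can be sketched to first order as below; the geometry is simplified to a spherical screen of constant radius, and all names are assumptions rather than the ROSIE implementation:

```python
import math

def parallax_offsets(head_pos, design_eye_point, screen_radius):
    """First-order yaw/pitch offsets (radians) that re-aim the rendered
    view for a head displaced from the design eye-point of a spherical
    projection screen of radius `screen_radius` (metres)."""
    dx = head_pos[0] - design_eye_point[0]  # lateral displacement
    dy = head_pos[1] - design_eye_point[1]  # vertical displacement
    yaw = math.atan2(dx, screen_radius)
    pitch = math.atan2(dy, screen_radius)
    return yaw, pitch
```

With head tracking, these offsets would be recomputed every frame so the HMD imagery stays registered on the dome.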
The Developing Infant Creates a Curriculum for Statistical Learning.
Smith, Linda B; Jayaraman, Swapnaa; Clerkin, Elizabeth; Yu, Chen
2018-04-01
New efforts are using head cameras and eye-trackers worn by infants to capture everyday visual environments from the point of view of the infant learner. From this vantage point, the training sets for statistical learning develop as the sensorimotor abilities of the infant develop, yielding a series of ordered datasets for visual learning that differ in content and structure between timepoints but are highly selective at each timepoint. These changing environments may constitute a developmentally ordered curriculum that optimizes learning across many domains. Future advances in computational models will be necessary to connect the developmentally changing content and statistics of infant experience to the internal machinery that does the learning. Copyright © 2018 Elsevier Ltd. All rights reserved.
VIPER: Virtual Intelligent Planetary Exploration Rover
NASA Technical Reports Server (NTRS)
Edwards, Laurence; Flueckiger, Lorenzo; Nguyen, Laurent; Washington, Richard
2001-01-01
Simulation and visualization of rover behavior are critical capabilities for scientists and rover operators to construct, test, and validate plans for commanding a remote rover. The VIPER system links these capabilities using a high-fidelity virtual-reality (VR) environment, a kinematically accurate simulator, and a flexible plan executive to allow users to simulate and visualize possible execution outcomes of a plan under development. This work is part of a larger vision of a science-centered rover control environment, where a scientist may inspect and explore the environment via VR tools, specify science goals, and visualize the expected and actual behavior of the remote rover. The VIPER system is constructed from three generic systems, linked together via a minimal amount of customization into the integrated system. The complete system points out the power of combining plan execution, simulation, and visualization for envisioning rover behavior; it also demonstrates the utility of developing generic technologies, which can be combined in novel and useful ways.
Data Visualization Using Immersive Virtual Reality Tools
NASA Astrophysics Data System (ADS)
Cioc, Alexandru; Djorgovski, S. G.; Donalek, C.; Lawler, E.; Sauer, F.; Longo, G.
2013-01-01
The growing complexity of scientific data poses serious challenges for an effective visualization. Data sets, e.g., catalogs of objects detected in sky surveys, can have a very high dimensionality, ~ 100 - 1000. Visualizing such hyper-dimensional data parameter spaces is essentially impossible, but there are ways of visualizing up to ~ 10 dimensions in a pseudo-3D display. We have been experimenting with the emerging technologies of immersive virtual reality (VR) as a platform for a scientific, interactive, collaborative data visualization. Our initial experiments used the virtual world of Second Life, and more recently VR worlds based on its open source code, OpenSimulator. There we can visualize up to ~ 100,000 data points in ~ 7 - 8 dimensions (3 spatial and others encoded as shapes, colors, sizes, etc.), in an immersive virtual space where scientists can interact with their data and with each other. We are now developing a more scalable visualization environment using the popular (practically an emerging standard) Unity 3D Game Engine, coded using C#, JavaScript, and the Unity Scripting Language. This visualization tool can be used through a standard web browser, or a standalone browser of its own. Rather than merely plotting data points, the application creates interactive three-dimensional objects whose shapes, colors, sizes, and XYZ positions encode various dimensions of the parameter space and can be assigned interactively. Multiple users can navigate through this data space simultaneously, either with their own, independent vantage points, or with a shared view. At this stage ~ 100,000 data points can be easily visualized within seconds on a simple laptop. The displayed data points can contain linked information; e.g., upon clicking on a data point, a webpage with additional information can be rendered within the 3D world. A range of functionalities has already been deployed, and more are being added.
We expect to make this visualization tool freely available to the academic community within a few months, on an experimental (beta testing) basis.
Error amplification to promote motor learning and motivation in therapy robotics.
Shirzad, Navid; Van der Loos, H F Machiel
2012-01-01
To study the effects of different feedback error amplification methods on a subject's upper-limb motor learning and affect during a point-to-point reaching exercise, we developed a real-time controller for a robotic manipulandum. The reaching environment was visually distorted by implementing a thirty-degree rotation between the coordinate systems of the robot's end-effector and the visual display. Feedback error amplification was provided to subjects as they trained to learn reaching within the visually rotated environment. Error amplification was provided either visually or through both haptic and visual means, each method with two different amplification gains. Subjects' performance (i.e., trajectory error) and self-reports to a questionnaire were used to study the speed and amount of adaptation promoted by each error amplification method and subjects' emotional changes. We found that providing haptic and visual feedback promotes faster adaptation to the distortion and increases subjects' satisfaction with the task, leading to a higher level of attentiveness during the exercise. This finding can be used to design a novel exercise regimen, where alternating between error amplification methods is used to both increase a subject's motor learning and maintain a minimum level of motivational engagement in the exercise. In future experiments, we will test whether such exercise methods will lead to a faster learning time and greater motivation to pursue a therapy exercise regimen.
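The visuomotor distortion and feedback error amplification described above can be sketched as a planar rotation plus a gain applied to the deviation from the ideal path; this is an illustrative reconstruction with assumed names, not the authors' controller code:

```python
import math

def rotate(point, degrees=30.0):
    """Apply the visual rotation between the robot end-effector frame
    and the display frame (the thirty-degree distortion)."""
    t = math.radians(degrees)
    x, y = point
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t))

def amplified_error(cursor, ideal, gain=1.5):
    """Display (or haptically render) the cursor with its deviation from
    the ideal straight-line point `ideal` scaled by a feedback gain."""
    ex, ey = cursor[0] - ideal[0], cursor[1] - ideal[1]
    return (ideal[0] + gain * ex, ideal[1] + gain * ey)
```

Running the rotation on each end-effector sample and the gain on each frame's error reproduces, in outline, the two manipulations the experiment varies.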
Robot Manipulations: A Synergy of Visualization, Computation and Action for Spatial Instruction
ERIC Educational Resources Information Center
Verner, Igor M.
2004-01-01
This article considers the use of a learning environment, RoboCell, where manipulations of objects are performed by robot operations specified through the learner's application of mathematical and spatial reasoning. A curriculum is proposed relating to robot kinematics and point-to-point motion, rotation of objects, and robotic assembly of spatial…
A Student's Construction of Transformations of Functions in a Multiple Representational Environment.
ERIC Educational Resources Information Center
Borba, Marcelo C.; Confrey, Jere
1996-01-01
Reports on a case study of a 16-year-old student working on transformations of functions in a computer-based, multirepresentational environment. Presents an analysis of the work during the transition from the use of visualization and analysis of discrete points to the use of algebraic symbolism. (AIM)
Luminance gradient at object borders communicates object location to the human oculomotor system.
Kilpeläinen, Markku; Georgeson, Mark A
2018-01-25
The locations of objects in our environment constitute arguably the most important piece of information our visual system must convey to facilitate successful visually guided behaviour. However, the relevant objects are usually not point-like and do not have one unique location attribute. Relatively little is known about how the visual system represents the location of such large objects, as visual processing is, on both the neural and perceptual levels, highly edge-dominated. In this study, human observers made saccades to the centres of luminance-defined squares (width 4 deg), which appeared at random locations (8 deg eccentricity). The phase structure of the square was manipulated such that the points of maximum luminance gradient at the square's edges shifted from trial to trial. The average saccade endpoints of all subjects followed those shifts in remarkable quantitative agreement. Further experiments showed that the shifts were caused by the edge manipulations, not by changes in luminance structure near the centre of the square or outside the square. We conclude that the human visual system programs saccades to large luminance-defined square objects based on edge locations derived from the points of maximum luminance gradient at the square's edges.
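Locating the points of maximum luminance gradient along a 1-D profile across the square's edges, the quantity the saccade endpoints tracked, can be sketched as below; the function name is hypothetical, and the "implied centre" is simply the midpoint of the two steepest edges:

```python
def max_gradient_points(profile):
    """Indices of the steepest rising and falling luminance changes along
    a 1-D luminance profile, plus the centre location they imply."""
    grads = [profile[i + 1] - profile[i] for i in range(len(profile) - 1)]
    rising = max(range(len(grads)), key=lambda i: grads[i])   # left edge
    falling = min(range(len(grads)), key=lambda i: grads[i])  # right edge
    return rising, falling, (rising + falling) / 2.0
```

Shifting the edge phase structure moves `rising` and `falling`, and with them the implied centre, mirroring the trial-to-trial manipulation in the study.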
Localization Using Visual Odometry and a Single Downward-Pointing Camera
NASA Technical Reports Server (NTRS)
Swank, Aaron J.
2012-01-01
Stereo imaging is a technique commonly employed for vision-based navigation. For such applications, two images are acquired from different vantage points and then compared using transformations to extract depth information. The technique is commonly used in robotics for obstacle avoidance or for Simultaneous Localization and Mapping (SLAM). Yet the process requires a number of image processing steps and therefore tends to be CPU-intensive, which limits the real-time data rate and use in power-limited applications. Evaluated here is a technique where a monocular camera is used for vision-based odometry. In this work, an optical flow technique with feature recognition is performed to generate odometry measurements. The visual odometry sensor measurements are intended to be used as control inputs or measurements in a sensor fusion algorithm using low-cost MEMS-based inertial sensors to provide improved localization information. Presented here are visual odometry results which demonstrate the challenges associated with using ground-pointing cameras for visual odometry. The focus is on rover-based robotic applications for localization within GPS-denied environments.
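For a downward-pointing monocular camera over a flat floor, mean optical flow converts to metric ground displacement through the camera height and focal length; the sketch below shows only that scale relation (names and the flat-ground assumption are ours, not the report's):

```python
def flow_to_displacement(flow_vectors, height_m, focal_px):
    """Convert mean image flow (pixels/frame) from a downward camera into
    metric ground displacement per frame, assuming a flat floor at a known
    height and a pinhole camera with focal length `focal_px` (pixels)."""
    if not flow_vectors:
        return (0.0, 0.0)
    n = len(flow_vectors)
    mean_u = sum(u for u, _ in flow_vectors) / n
    mean_v = sum(v for _, v in flow_vectors) / n
    scale = height_m / focal_px  # metres per pixel at the ground plane
    return (mean_u * scale, mean_v * scale)
```

Accumulating these per-frame displacements gives the odometry estimate that would feed the inertial sensor fusion described above.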
Immersive Visualization of the Solid Earth
NASA Astrophysics Data System (ADS)
Kreylos, O.; Kellogg, L. H.
2017-12-01
Immersive visualization using virtual reality (VR) display technology offers unique benefits for the visual analysis of complex three-dimensional data such as tomographic images of the mantle and higher-dimensional data such as computational geodynamics models of mantle convection or even planetary dynamos. Unlike "traditional" visualization, which has to project 3D scalar data or vectors onto a 2D screen for display, VR can display 3D data in a pseudo-holographic (head-tracked stereoscopic) form, and therefore does not suffer from the distortions of relative positions, sizes, distances, and angles that are inherent in 2D projection and interfere with interpretation. As a result, researchers can apply their spatial reasoning skills to 3D data in the same way they can to real objects or environments, as well as to complex objects like vector fields. 3D Visualizer is an application to visualize 3D volumetric data, such as results from mantle convection simulations or seismic tomography reconstructions, using VR display technology and a strong focus on interactive exploration. Unlike other visualization software, 3D Visualizer does not present static visualizations, such as a set of cross-sections at pre-selected positions and orientations, but instead lets users ask questions of their data, for example by dragging a cross-section through the data's domain with their hands and seeing data mapped onto that cross-section in real time, or by touching a point inside the data domain and immediately seeing an isosurface connecting all points having the same data value as the touched point. Combined with tools allowing 3D measurements of positions, distances, and angles, and with annotation tools that allow free-hand sketching directly in 3D data space, the outcome of using 3D Visualizer is not primarily a set of pictures, but derived data to be used for subsequent analysis.
3D Visualizer works best in virtual reality, either in high-end facility-scale environments such as CAVEs, or using commodity low-cost virtual reality headsets such as HTC's Vive. The recent emergence of high-quality commodity VR means that researchers can buy a complete VR system off the shelf, install it and the 3D Visualizer software themselves, and start using it for data analysis immediately.
Kim, Aram; Kretch, Kari S; Zhou, Zixuan; Finley, James M
2018-05-09
Successful negotiation of obstacles during walking relies on the integration of visual information about the environment with ongoing locomotor commands. When information about the body and environment are removed through occlusion of the lower visual field, individuals increase downward head pitch angle, reduce foot placement precision, and increase safety margins during crossing. However, whether these effects are mediated by loss of visual information about the lower extremities, the obstacle, or both remains to be seen. Here, we used a fully immersive, virtual obstacle negotiation task to investigate how visual information about the lower extremities is integrated with information about the environment to facilitate skillful obstacle negotiation. Participants stepped over virtual obstacles while walking on a treadmill with one of three types of visual feedback about the lower extremities: no feedback, end-point feedback, or a link-segment model. We found that absence of visual information about the lower extremities led to an increase in the variability of leading foot placement after crossing. The presence of a visual representation of the lower extremities promoted greater downward head pitch angle during the approach to and subsequent crossing of an obstacle. In addition, having greater downward head pitch was associated with closer placement of the trailing foot to the obstacle, further placement of the leading foot after the obstacle, and higher trailing foot clearance. These results demonstrate that the fidelity of visual information about the lower extremities influences both feed-forward and feedback aspects of visuomotor coordination during obstacle negotiation.
Estimation of Visual Maps with a Robot Network Equipped with Vision Sensors
Gil, Arturo; Reinoso, Óscar; Ballesta, Mónica; Juliá, Miguel; Payá, Luis
2010-01-01
In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves along the environment, obtains measurements with its sensors and uses them to construct a model of the space where it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows the robots to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional position of the visual landmarks. In addition, we consider that each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the position of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment. PMID:22399930
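The Rao-Blackwellized factorization the abstract relies on can be illustrated with a toy sketch: particles sample the robot path, while each particle carries an independent Kalman estimate (mean, variance) of a landmark position. This is a 1-D illustrative example, not the authors' implementation; all noise values and the single-landmark setup are assumptions.

```python
import math
import random

random.seed(0)

# Toy 1-D Rao-Blackwellized particle filter: particles sample the robot
# pose, and each particle conditions a Kalman landmark estimate on it.
N_PARTICLES = 100
MOTION_NOISE = 0.02    # std dev of odometry noise (illustrative)
MEAS_NOISE = 0.2       # std dev of range-measurement noise (illustrative)

def predict(particles, control):
    """Sample each particle's new pose from the motion model."""
    return [(p + control + random.gauss(0.0, MOTION_NOISE), mu, var)
            for (p, mu, var) in particles]

def pick(probs):
    """Draw an index from a discrete distribution (multinomial resampling)."""
    r, acc = random.random(), 0.0
    for i, q in enumerate(probs):
        acc += q
        if acc >= r:
            return i
    return len(probs) - 1

def update(particles, z):
    """Kalman-update each particle's landmark estimate, then resample."""
    weighted = []
    for (p, mu, var) in particles:
        pred = mu - p                      # expected range to the landmark
        s = var + MEAS_NOISE ** 2          # innovation variance
        k = var / s                        # Kalman gain
        mu_new = mu + k * (z - pred)
        var_new = (1.0 - k) * var
        w = math.exp(-0.5 * (z - pred) ** 2 / s) / math.sqrt(2 * math.pi * s)
        weighted.append(((p, mu_new, var_new), w))
    total = sum(w for _, w in weighted)
    probs = [w / total for _, w in weighted]
    return [weighted[pick(probs)][0] for _ in range(len(particles))]

# Robot stays at 0; the landmark is truly at 5.0 with a broad prior.
particles = [(0.0, 5.0 + random.gauss(0.0, 1.0), 1.0) for _ in range(N_PARTICLES)]
for _ in range(20):
    particles = predict(particles, control=0.0)
    z = 5.0 + random.gauss(0.0, MEAS_NOISE)    # simulated range reading
    particles = update(particles, z)

est = sum(mu for _, mu, _ in particles) / len(particles)
print(round(est, 2))
```

After twenty simulated range readings the particle-averaged landmark estimate converges near the true position of 5.0.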
CurveSLAM: Utilizing Higher-Level Structure in Stereo Vision-Based Navigation
2012-01-01
…consider their application to SLAM. The work of [31], [32] develops a spline-based SLAM framework, but this is only for application to LIDAR-based SLAM… Existing approaches to visual Simultaneous Localization and Mapping (SLAM) typically utilize points as visual feature primitives to represent landmarks… regions of interest. Further, previous SLAM techniques that propose the use of higher-level structures often place constraints on the environment, such as…
Adaptation to Laterally Displacing Prisms in Anisometropic Amblyopia.
Sklar, Jaime C; Goltz, Herbert C; Gane, Luke; Wong, Agnes M F
2015-06-01
Using visual feedback to modify sensorimotor output in response to changes in the external environment is essential for daily function. Prism adaptation is a well-established experimental paradigm to quantify sensorimotor adaptation; that is, how the sensorimotor system adapts to an optically-altered visuospatial environment. Amblyopia is a neurodevelopmental disorder characterized by spatiotemporal deficits in vision that impacts manual and oculomotor function. This study explored the effects of anisometropic amblyopia on prism adaptation. Eight participants with anisometropic amblyopia and 11 visually-normal adults, all right-handed, were tested. Participants pointed to visual targets and were presented with feedback of hand position near the terminus of limb movement in three blocks: baseline, adaptation, and deadaptation. Adaptation was induced by viewing with binocular 11.4° (20 prism diopter [PD]) left-shifting prisms. All tasks were performed during binocular viewing. Participants with anisometropic amblyopia required significantly more trials (i.e., increased time constant) to adapt to prismatic optical displacement than visually-normal controls. During the rapid error correction phase of adaptation, people with anisometropic amblyopia also exhibited greater variance in motor output than visually-normal controls. Amblyopia impacts the ability to adapt the sensorimotor system to an optically-displaced visual environment. The increased time constant and greater variance in motor output during the rapid error correction phase of adaptation may indicate deficits in processing of visual information as a result of degraded spatiotemporal vision in amblyopia.
Mannerisms: A Preschool Practitioner's Point of View.
ERIC Educational Resources Information Center
Heiner, Donna
1980-01-01
The extent and nature of remediation are said to depend on careful observation of children in the environment. Remedial techniques appropriate for older children must be adapted to meet the individual situation of each preschool visually handicapped child. (Author)
Photogrammetric point cloud compression for tactical networks
NASA Astrophysics Data System (ADS)
Madison, Andrew C.; Massaro, Richard D.; Wayant, Clayton D.; Anderson, John E.; Smith, Clint B.
2017-05-01
We report progress toward the development of a compression schema suitable for use in the Army's Common Operating Environment (COE) tactical network. The COE facilitates the dissemination of information across all Warfighter echelons through the establishment of data standards and networking methods that coordinate the readout and control of a multitude of sensors in a common operating environment. When integrated with a robust geospatial mapping functionality, the COE enables force tracking, remote surveillance, and heightened situational awareness to Soldiers at the tactical level. Our work establishes a point cloud compression algorithm through image-based deconstruction and photogrammetric reconstruction of three-dimensional (3D) data that is suitable for dissemination within the COE. An open source visualization toolkit was used to deconstruct 3D point cloud models based on ground mobile light detection and ranging (LiDAR) into a series of images and associated metadata that can be easily transmitted on a tactical network. Stereo photogrammetric reconstruction is then conducted on the received image stream to reveal the transmitted 3D model. The reported method boasts nominal compression ratios typically on the order of 250 while retaining tactical information and accurate georegistration. Our work advances the scope of persistent intelligence, surveillance, and reconnaissance through the development of 3D visualization and data compression techniques relevant to the tactical operations environment.
PL-VIO: Tightly-Coupled Monocular Visual–Inertial Odometry Using Point and Line Features
Zhao, Ji; Guo, Yue; He, Wenhao; Yuan, Kui
2018-01-01
To address the problem of estimating camera trajectory and to build a structural three-dimensional (3D) map based on inertial measurements and visual observations, this paper proposes point–line visual–inertial odometry (PL-VIO), a tightly-coupled monocular visual–inertial odometry system exploiting both point and line features. Compared with point features, lines provide significantly more geometrical structure information on the environment. To obtain both computation simplicity and representational compactness of a 3D spatial line, Plücker coordinates and orthonormal representation for the line are employed. To tightly and efficiently fuse the information from inertial measurement units (IMUs) and visual sensors, we optimize the states by minimizing a cost function which combines the pre-integrated IMU error term together with the point and line re-projection error terms in a sliding window optimization framework. The experiments evaluated on public datasets demonstrate that the PL-VIO method that combines point and line features outperforms several state-of-the-art VIO systems which use point features only. PMID:29642648
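The Plücker coordinates the PL-VIO abstract mentions admit a compact sketch. This is an illustration of the representation itself, not the authors' code: a 3D line through points a and b is stored as the pair (n, d), with direction d = b − a and moment n = a × b, and any valid pair satisfies the Plücker constraint n · d = 0.

```python
import numpy as np

def plucker_from_points(a, b):
    """Plücker coordinates (moment n, direction d) of the line through a and b."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = b - a                  # line direction
    n = np.cross(a, b)         # moment of the line about the origin
    return n, d

def point_line_distance(n, d, p):
    """Distance from point p to the Plücker line (n, d)."""
    p = np.asarray(p, float)
    return np.linalg.norm(np.cross(d, p) + n) / np.linalg.norm(d)

n, d = plucker_from_points([0, 0, 0], [1, 0, 0])   # the x-axis
assert abs(np.dot(n, d)) < 1e-12                   # Plücker constraint holds
dist = point_line_distance(n, d, [0, 3, 4])        # 3-4-5 triangle off the axis
print(dist)   # → 5.0
```

A line re-projection error term such as the one in the paper's cost function is built on exactly this kind of point-to-line distance.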
Testing of visual field with virtual reality goggles in manual and visual grasp modes.
Wroblewski, Dariusz; Francis, Brian A; Sadun, Alfredo; Vakili, Ghazal; Chopra, Vikas
2014-01-01
Automated perimetry is used for the assessment of visual function in a variety of ophthalmic and neurologic diseases. We report development and clinical testing of a compact, head-mounted, and eye-tracking perimeter (VirtualEye) that provides a more comfortable test environment than the standard instrumentation. VirtualEye performs the equivalent of a full threshold 24-2 visual field in two modes: (1) manual, with patient response registered with a mouse click, and (2) visual grasp, where the eye tracker senses change in gaze direction as evidence of target acquisition. 59 patients successfully completed the test in manual mode and 40 in visual grasp mode, with 59 undergoing the standard Humphrey field analyzer (HFA) testing. Large visual field defects were reliably detected by VirtualEye. Point-by-point comparison between the results obtained with the different modalities indicates: (1) minimal systematic differences between measurements taken in visual grasp and manual modes, (2) the average standard deviation of the difference distributions of about 5 dB, and (3) a systematic shift (of 4-6 dB) to lower sensitivities for VirtualEye device, observed mostly in high dB range. The usability survey suggested patients' acceptance of the head-mounted device. The study appears to validate the concepts of a head-mounted perimeter and the visual grasp mode.
Collaborative volume visualization with applications to underwater acoustic signal processing
NASA Astrophysics Data System (ADS)
Jarvis, Susan; Shane, Richard T.
2000-08-01
Distributed collaborative visualization systems represent a technology whose time has come. Researchers at the Fraunhofer Center for Research in Computer Graphics have been working in the areas of collaborative environments and high-end visualization systems for several years. The medical application, TeleInVivo, is an example of a system which marries visualization and collaboration. With TeleInVivo, users can exchange and collaboratively interact with volumetric data sets in geographically distributed locations. Since examination of many physical phenomena produces data that are naturally volumetric, the visualization frameworks used by TeleInVivo have been extended for non-medical applications. The system can now be made compatible with almost any dataset that can be expressed in terms of magnitudes within a 3D grid. Coupled with advances in telecommunications, telecollaborative visualization is now possible virtually anywhere. Expert data quality assurance and analysis can occur remotely and interactively without having to send all the experts into the field. Building upon this point-to-point concept of collaborative visualization, one can envision a larger pooling of resources to form a large overview of a region of interest from contributions of numerous distributed members.
Acquiring Semantically Meaningful Models for Robotic Localization, Mapping and Target Recognition
2014-12-21
…Representations • point features tracking • recovery of relative motion, visual odometry • loop closure • environment models, sparse clouds of points… features that co-occur with the object of interest • object-level segmentation (chair vs. background, table vs. background) evaluated by Jaccard index against prior methods…
NASA Astrophysics Data System (ADS)
Böhm, J.; Bredif, M.; Gierlinger, T.; Krämer, M.; Lindenberg, R.; Liu, K.; Michel, F.; Sirmacek, B.
2016-06-01
Current 3D data capturing as implemented on, for example, airborne or mobile laser scanning systems is able to efficiently sample the surface of a city by billions of unselective points during one working day. What is still difficult is to extract and visualize meaningful information hidden in these point clouds with the same efficiency. This is where the FP7 IQmulus project enters the scene. IQmulus is an interactive facility for processing and visualizing big spatial data. In this study the potential of IQmulus is demonstrated on a laser mobile mapping point cloud of 1 billion points sampling ~10 km of street environment in Toulouse, France. After the data is uploaded to the IQmulus Hadoop Distributed File System, a workflow is defined by the user consisting of retiling the data followed by a PCA-driven local dimensionality analysis, which runs efficiently on the IQmulus cloud facility using a Spark implementation. Points scattering in 3 directions are clustered in the tree class, and are then separated into individual trees. Five hours of processing at the 12-node computing cluster results in the automatic identification of 4000+ urban trees. Visualization of the results in the IQmulus fat client helps users to appreciate the results, and developers to identify remaining flaws in the processing workflow.
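The PCA-driven local dimensionality analysis in this workflow is commonly computed as follows: the sorted eigenvalues l1 ≥ l2 ≥ l3 of a point neighbourhood's covariance yield linearity, planarity and scattering features, and neighbourhoods that scatter in all three directions (high scattering) are candidates for the tree class. The synthetic neighbourhoods below are illustrative assumptions, not the Toulouse data.

```python
import numpy as np

rng = np.random.default_rng(0)

def dimensionality_features(points):
    """Linearity, planarity, scattering from a neighbourhood's PCA eigenvalues."""
    pts = np.asarray(points, float)
    l = np.sort(np.linalg.eigvalsh(np.cov(pts.T)))[::-1]   # l1 >= l2 >= l3
    l = np.maximum(l, 1e-12)                               # guard against zeros
    linearity = (l[0] - l[1]) / l[0]
    planarity = (l[1] - l[2]) / l[0]
    scattering = l[2] / l[0]
    return linearity, planarity, scattering

# A nearly flat patch (road-like) versus an isotropic blob (foliage-like).
plane = np.column_stack([rng.uniform(0, 1, 500),
                         rng.uniform(0, 1, 500),
                         rng.normal(0, 0.001, 500)])
blob = rng.normal(0.0, 1.0, (500, 3))

_, _, scat_plane = dimensionality_features(plane)
_, _, scat_blob = dimensionality_features(blob)
print(scat_plane < 0.1, scat_blob > 0.5)
```

The flat patch scores near zero on scattering while the isotropic blob scores high, which is the separation the tree-clustering step exploits.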
NCWin — A Component Object Model (COM) for processing and visualizing NetCDF data
Liu, Jinxun; Chen, J.M.; Price, D.T.; Liu, S.
2005-01-01
NetCDF (Network Common Data Form) is a data sharing protocol and library that is commonly used in large-scale atmospheric and environmental data archiving and modeling. The NetCDF tool described here, named NCWin and coded with Borland C++ Builder, was built as a standard executable as well as a COM (component object model) for the Microsoft Windows environment. COM is a powerful technology that enhances the reuse of applications (as components). Environmental model developers from different modeling environments, such as Python, JAVA, VISUAL FORTRAN, VISUAL BASIC, VISUAL C++, and DELPHI, can reuse NCWin in their models to read, write and visualize NetCDF data. Some Windows applications, such as ArcGIS and Microsoft PowerPoint, can also call NCWin within the application. NCWin has three major components: 1) The data conversion part is designed to convert binary raw data to and from NetCDF data. It can process six data types (unsigned char, signed char, short, int, float, double) and three spatial data formats (BIP, BIL, BSQ); 2) The visualization part is designed for displaying grid map series (playing forward or backward) with simple map legend, and displaying temporal trend curves for data on individual map pixels; and 3) The modeling interface is designed for environmental model development by which a set of integrated NetCDF functions is provided for processing NetCDF data. To demonstrate that the NCWin can easily extend the functions of some current GIS software and the Office applications, examples of calling NCWin within ArcGIS and MS PowerPoint for showing NetCDF map animations are given.
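The three spatial data formats the conversion component handles (BIP, BIL, BSQ) differ only in how band, row, and column indices map to a flat file offset. A small sketch of the standard offset arithmetic (not NCWin's internal code):

```python
def offset(fmt, band, row, col, bands, rows, cols):
    """Flat-file offset, in element units, of sample (band, row, col)."""
    if fmt == "BSQ":    # band sequential: one full band image after another
        return band * rows * cols + row * cols + col
    if fmt == "BIL":    # band interleaved by line
        return row * bands * cols + band * cols + col
    if fmt == "BIP":    # band interleaved by pixel
        return (row * cols + col) * bands + band
    raise ValueError(f"unknown interleave format: {fmt}")

# The same element lands at three different positions in the file:
offs = {fmt: offset(fmt, band=1, row=2, col=3, bands=4, rows=5, cols=6)
        for fmt in ("BSQ", "BIL", "BIP")}
print(offs)   # → {'BSQ': 45, 'BIL': 57, 'BIP': 61}
```

Converting between the formats amounts to reading elements at one mapping and writing them at another.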
ERIC Educational Resources Information Center
Sinclair, Nathalie; Moss, Joan
2012-01-01
The overall aim of our research project is to explore the impact of dynamic geometry environments (DGEs) on children's geometrical thinking. The point of departure for the study presented in this paper is the analytically and empirically grounded assumption that as the geometric discourse develops, the direct visual identification of geometric…
TLS for generating multi-LOD of 3D building model
NASA Astrophysics Data System (ADS)
Akmalia, R.; Setan, H.; Majid, Z.; Suwardhi, D.; Chong, A.
2014-02-01
Terrestrial Laser Scanners (TLS) are widely used to capture three-dimensional (3D) objects for various applications. Developments in 3D modelling have also led people to visualize the environment in 3D. Visualization of objects in a city environment in 3D can be useful for many applications. However, different applications require different kinds of 3D models. Since buildings are important objects, CityGML has defined a standard for 3D building models at four different levels of detail (LOD). In this research, the advantages of TLS for capturing buildings and for modelling the resulting point cloud are explored. TLS is used to capture all the building details needed to generate multiple LODs; in previous work, this task usually involved the integration of several sensors. Here, the point cloud from TLS is processed to generate the LOD3 model, and LOD2 and LOD1 are then generalized from the resulting LOD3 model. The result of this research is a guiding process for generating multi-LOD 3D building models starting from LOD3 using TLS. Lastly, the visualization of the multi-LOD model is also shown.
NASA Astrophysics Data System (ADS)
Moore, C. A.; Gertman, V.; Olsoy, P.; Mitchell, J.; Glenn, N. F.; Joshi, A.; Norpchen, D.; Shrestha, R.; Pernice, M.; Spaete, L.; Grover, S.; Whiting, E.; Lee, R.
2011-12-01
Immersive virtual reality environments such as the IQ-Station or CAVE (Cave Automated Virtual Environment) offer new and exciting ways to visualize and explore scientific data and are powerful research and educational tools. Combining remote sensing data from a range of sensor platforms in immersive 3D environments can enhance the spectral, textural, spatial, and temporal attributes of the data, which enables scientists to interact and analyze the data in ways never before possible. Visualization and analysis of large remote sensing datasets in immersive environments requires software customization for integrating LiDAR point cloud data with hyperspectral raster imagery, the generation of quantitative tools for multidimensional analysis, and the development of methods to capture 3D visualizations for stereographic playback. This study uses hyperspectral and LiDAR data acquired over the China Hat geologic study area near Soda Springs, Idaho, USA. The data are fused into a 3D image cube for interactive data exploration and several methods of recording and playback are investigated that include: 1) creating and implementing a Virtual Reality User Interface (VRUI) patch configuration file to enable recording and playback of VRUI interactive sessions within the CAVE and 2) using the LiDAR and hyperspectral remote sensing data and GIS data to create an ArcScene 3D animated flyover, where left- and right-eye visuals are captured from two independent monitors for playback in a stereoscopic player. These visualizations can be used as outreach tools to demonstrate how integrated data and geotechnology techniques can help scientists see, explore, and more adequately comprehend scientific phenomena, both real and abstract.
Peeters, David; Snijders, Tineke M; Hagoort, Peter; Özyürek, Aslı
2017-01-27
In everyday communication speakers often refer in speech and/or gesture to objects in their immediate environment, thereby shifting their addressee's attention to an intended referent. The neurobiological infrastructure involved in the comprehension of such basic multimodal communicative acts remains unclear. In an event-related fMRI study, we presented participants with pictures of a speaker and two objects while they concurrently listened to her speech. In each picture, one of the objects was singled out, either through the speaker's index-finger pointing gesture or through a visual cue that made the object perceptually more salient in the absence of gesture. A mismatch (compared to a match) between speech and the object singled out by the speaker's pointing gesture led to enhanced activation in left IFG and bilateral pMTG, showing the importance of these areas in conceptual matching between speech and referent. Moreover, a match (compared to a mismatch) between speech and the object made salient through a visual cue led to enhanced activation in the mentalizing system, arguably reflecting an attempt to converge on a jointly attended referent in the absence of pointing. These findings shed new light on the neurobiological underpinnings of the core communicative process of comprehending a speaker's multimodal referential act and stress the power of pointing as an important natural device to link speech to objects. Copyright © 2016 Elsevier Ltd. All rights reserved.
Nemati, Farshad; Whishaw, Ian Q
2007-08-22
The exploratory behavior of rats on an open field is organized in that animals spend disproportionate amounts of time at certain locations, termed home bases, which serve as centers for excursions. Although home bases are preferentially formed near distinctive cues, including visual cues, animals also visit and pause and move slowly, or linger, at many other locations in a test environment. In order to further examine the organization of exploratory behavior, the present study examined the influence of the point of entry on animals placed on an open field table that was illuminated either by room light or infrared light (a wavelength in which they cannot see) and near which, or on which, distinctive cues were placed. The main findings were that in both room light and infrared light tests, rats visited and lingered at the point of entry significantly more often than comparative control locations. Although the rats also visited and lingered in the vicinity of salient visual cues, the point of entry still remained a focus of visits. Finally, the preference for the point of entry increased as a function of salience of the cues marking that location. That the point of entry influences the organization of exploratory behavior is discussed in relation to the idea that the exploratory behavior of the rat is directed toward optimizing security as well as forming a spatial representation of the environment.
NASA Astrophysics Data System (ADS)
Madokoro, H.; Tsukada, M.; Sato, K.
2013-07-01
This paper presents an unsupervised learning-based object category formation and recognition method for mobile robot vision. Our method has the following features: detection of feature points and description of features using a scale-invariant feature transform (SIFT), selection of target feature points using one-class support vector machines (OC-SVMs), generation of visual words using self-organizing maps (SOMs), formation of labels using adaptive resonance theory 2 (ART-2), and creation and classification of categories on a category map of counter propagation networks (CPNs) for visualizing spatial relations between categories. Classification results for dynamic images, using time-series images obtained from two robots of different sizes and with different movements, demonstrate that our method can visualize spatial relations between categories while maintaining time-series characteristics. Moreover, we emphasize the effectiveness of our method for category formation of appearance changes of objects.
Bias to experience approaching motion in a three-dimensional virtual environment.
Lewis, Clifford F; McBeath, Michael K
2004-01-01
We used two-frame apparent motion in a three-dimensional virtual environment to test whether observers had biases to experience approaching or receding motion in depth. Observers viewed a tunnel of tiles receding in depth that moved ambiguously either toward or away from them. We found that observers exhibited biases to experience approaching motion. The strengths of the biases were decreased when stimuli pointed away, but the size of the display screen had no effect. Tests with diamond-shaped tiles that varied in the degree of pointing asymmetry resulted in a linear trend in which the bias was strongest for stimuli pointing toward the viewer, and weakest for stimuli pointing away. We show that the overall bias to experience approaching motion is consistent with a computational strategy of matching corresponding features between adjacent foreshortened stimuli in consecutive visual frames. We conclude that there are both adaptational and geometric reasons to favor the experience of approaching motion.
Rooney, Kevin K.; Condia, Robert J.; Loschky, Lester C.
2017-01-01
Neuroscience has well established that human vision divides into the central and peripheral fields of view. Central vision extends from the point of gaze (where we are looking) out to about 5° of visual angle (the width of one’s fist at arm’s length), while peripheral vision is the vast remainder of the visual field. These visual fields project to the parvo and magno ganglion cells, which process distinctly different types of information from the world around us and project that information to the ventral and dorsal visual streams, respectively. Building on the dorsal/ventral stream dichotomy, we can further distinguish between focal processing of central vision, and ambient processing of peripheral vision. Thus, our visual processing of and attention to objects and scenes depends on how and where these stimuli fall on the retina. The built environment is no exception to these dependencies, specifically in terms of how focal object perception and ambient spatial perception create different types of experiences we have with built environments. We argue that these foundational mechanisms of the eye and the visual stream are limiting parameters of architectural experience. We hypothesize that people experience architecture in two basic ways based on these visual limitations; by intellectually assessing architecture consciously through focal object processing and assessing architecture in terms of atmosphere through pre-conscious ambient spatial processing. Furthermore, these separate ways of processing architectural stimuli operate in parallel throughout the visual perceptual system. Thus, a more comprehensive understanding of architecture must take into account that built environments are stimuli that are treated differently by focal and ambient vision, which enable intellectual analysis of architectural experience versus the experience of architectural atmosphere, respectively. 
We offer this theoretical model to help advance a more precise understanding of the experience of architecture, which can be tested through future experimentation. PMID:28360867
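The abstract's rule of thumb (central vision extends to about 5°, roughly one fist-width at arm's length) follows from the standard visual-angle formula, angle = 2·atan(size / (2·distance)). A quick check, where the fist width and arm length used are illustrative assumptions:

```python
import math

def visual_angle_deg(size, distance):
    """Visual angle, in degrees, subtended by an object `size` wide at `distance`."""
    return math.degrees(2.0 * math.atan(size / (2.0 * distance)))

# Assumed values: a fist ~6 cm wide held ~70 cm from the eye.
angle = visual_angle_deg(size=6.0, distance=70.0)
print(round(angle, 1))   # → 4.9
```

The result lands near the 5° figure cited for the extent of central vision.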
Visual space under free viewing conditions.
Doumen, Michelle J A; Kappers, Astrid M L; Koenderink, Jan J
2005-10-01
Most research on visual space has been done under restricted viewing conditions and in reduced environments. In our experiments, observers performed an exocentric pointing task, a collinearity task, and a parallelity task in an entirely visible room. We varied the relative distances between the objects and the observer and the separation angle between the two objects. We were able to compare our data directly with data from experiments in an environment with less monocular depth information present. We expected that in a richer environment and under less restrictive viewing conditions, the settings would deviate less from the veridical settings. However, large systematic deviations from veridical settings were found for all three tasks. The structure of these deviations was task dependent, and the structure and the deviations themselves were comparable to those obtained under more restricted circumstances. Thus, the additional information was not used effectively by the observers.
Simple Smartphone-Based Guiding System for Visually Impaired People
Lin, Bor-Shing; Lee, Cheng-Che; Chiang, Pei-Ying
2017-01-01
Visually impaired people are often unaware of dangers in front of them, even in familiar environments. Furthermore, in unfamiliar environments, such people require guidance to reduce the risk of colliding with obstacles. This study proposes a simple smartphone-based guiding system for solving the navigation problems for visually impaired people and achieving obstacle avoidance to enable visually impaired people to travel smoothly from a beginning point to a destination with greater awareness of their surroundings. In this study, a computer image recognition system and smartphone application were integrated to form a simple assisted guiding system. Two operating modes, online mode and offline mode, can be chosen depending on network availability. When the system begins to operate, the smartphone captures the scene in front of the user and sends the captured images to the backend server to be processed. The backend server uses the faster region-based convolutional neural network (Faster R-CNN) algorithm or the you only look once (YOLO) algorithm to recognize multiple obstacles in every image, and it subsequently sends the results back to the smartphone. The obstacle recognition accuracy in this study reached 60%, which is sufficient for assisting visually impaired people in realizing the types and locations of obstacles around them. PMID:28608811
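Once the backend returns detections, the smartphone must translate them into guidance the user can act on. A hypothetical sketch of that last step (not the paper's code): each detection is a label plus a bounding box in image pixels, and the box centre determines whether the obstacle is announced to the left, ahead, or to the right. The image width and the three-way split are assumptions.

```python
IMG_WIDTH = 640   # assumed capture width in pixels

def guidance(detections):
    """Map (label, (x0, y0, x1, y1)) detections to spoken-style messages."""
    messages = []
    for label, (x0, _, x1, _) in detections:
        cx = (x0 + x1) / 2.0              # horizontal centre of the box
        if cx < IMG_WIDTH / 3:
            side = "left"
        elif cx < 2 * IMG_WIDTH / 3:
            side = "ahead"
        else:
            side = "right"
        messages.append(f"{label} {side}")
    return messages

msgs = guidance([("chair", (50, 200, 150, 400)),
                 ("person", (300, 100, 380, 480))])
print(msgs)   # → ['chair left', 'person ahead']
```

In a real deployment these strings would feed a text-to-speech engine rather than be printed.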
Combining Multiple Forms Of Visual Information To Specify Contact Relations In Spatial Layout
NASA Astrophysics Data System (ADS)
Sedgwick, Hal A.
1990-03-01
An expert system, called Layout2, has been described, which models a subset of available visual information for spatial layout. The system is used to examine detailed interactions between multiple, partially redundant forms of information in an environment-centered geometrical model of an environment obeying certain rather general constraints. This paper discusses the extension of Layout2 to include generalized contact relations between surfaces. In an environment-centered model, the representation of viewer-centered distance is replaced by the representation of environmental location. This location information is propagated through the representation of the environment by a network of contact relations between contiguous surfaces. Perspective information interacts with other forms of information to specify these contact relations. The experimental study of human perception of contact relations in extended spatial layouts is also discussed. Differences between human results and Layout2 results reveal limitations in the human ability to register available information; they also point to the existence of certain forms of information not yet formalized in Layout2.
Telerobotic Haptic Exploration in Art Galleries and Museums for Individuals with Visual Impairments.
Park, Chung Hyuk; Ryu, Eun-Seok; Howard, Ayanna M
2015-01-01
This paper presents a haptic telepresence system that enables visually impaired users to explore locations rich in visual interest, such as art galleries and museums, by using a telepresence robot, an RGB-D sensor (color and depth camera), and a haptic interface. Recent improvements in RGB-D sensors have enabled real-time access to 3D spatial information in the form of point clouds. However, the real-time representation of these data as a tangible haptic experience has received little attention, especially in the case of telepresence for individuals with visual impairments. Thus, the proposed system addresses the real-time haptic exploration of remote 3D information through video encoding and real-time 3D haptic rendering of the remote real-world environment. This paper investigates two scenarios in haptic telepresence, i.e., mobile navigation and object exploration in a remote environment. Participants with and without visual impairments participated in our experiments based on the two scenarios, and the system performance was validated. In conclusion, the proposed framework provides a new methodology of haptic telepresence for individuals with visual impairments by providing an enhanced interactive experience where they can remotely access public places (art galleries and museums) with the aid of haptic modality and robotic telepresence.
Dynamic Stimuli And Active Processing In Human Visual Perception
NASA Astrophysics Data System (ADS)
Haber, Ralph N.
1990-03-01
Theories of visual perception traditionally have considered a static retinal image to be the starting point for processing, and have treated that processing as passive and as a literal translation of that frozen, two-dimensional, pictorial image. This paper considers five problem areas in the analysis of human visually guided locomotion in which the traditional approach is contrasted with newer ones that utilize dynamic definitions of stimulation and an active perceiver: (1) differentiation between object motion and self motion, and among the various kinds of self motion (e.g., eyes only, head only, whole body, and their combinations); (2) the sources and contents of visual information that guide movement; (3) the acquisition and performance of perceptual motor skills; (4) the nature of spatial representations, percepts, and the perceived layout of space; and (5) why the retinal image is a poor starting point for perceptual processing. These newer approaches argue that stimuli must be considered as dynamic: humans process the systematic changes in patterned light when objects move and when they themselves move. Furthermore, the processing of visual stimuli must be active and interactive, so that perceivers can construct panoramic and stable percepts from an interaction of stimulus information and expectancies about what is contained in the visual environment. These developments all suggest a very different approach to the computational analyses of object location and identification, and of the visual guidance of locomotion.
The extent of visual space inferred from perspective angles
Erkelens, Casper J.
2015-01-01
Retinal images are perspective projections of the visual environment. Perspective projections do not explain why we perceive perspective in 3-D space. Analysis of underlying spatial transformations shows that visual space is a perspective transformation of physical space if parallel lines in physical space vanish at finite distance in visual space. Perspective angles, i.e., the angle perceived between parallel lines in physical space, were estimated for rails of a straight railway track. Perspective angles were also estimated from pictures taken from the same point of view. Perspective angles between rails ranged from 27% to 83% of their angular size in the retinal image. Perspective angles prescribe the distance of vanishing points of visual space. All computed distances were shorter than 6 m. The shallow depth of a hypothetical space inferred from perspective angles does not match the depth of visual space, as it is perceived. Incongruity between the perceived shape of a railway line on the one hand and the experienced ratio between width and length of the line on the other hand is huge, but apparently so unobtrusive that it has remained unnoticed. The incompatibility between perspective angles and perceived distances casts doubt on evidence for a curved visual space that has been presented in the literature and was obtained from combining judgments of distances and angles with physical positions. PMID:26034567
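The abstract above reports computing the distance of a vanishing point from a perceived angle between rails. A minimal geometric sketch of one plausible reading (our reconstruction, not the paper's method): if two parallel rails of gauge w appear to converge at perspective angle phi, symmetry gives phi = 2·atan(w / (2·D)), so D = (w/2) / tan(phi/2).

```python
import math

def vanishing_distance(gauge_m, perspective_angle_deg):
    """Distance D of the vanishing point implied by perceived angle phi
    between rails of separation gauge_m (symmetric-viewing assumption)."""
    half = math.radians(perspective_angle_deg) / 2.0
    return (gauge_m / 2.0) / math.tan(half)

# Standard-gauge track (1.435 m) with a 20 degree perceived angle:
print(round(vanishing_distance(1.435, 20.0), 2))  # 4.07
```

Under this reading, the reported perspective angles would indeed place all vanishing points within a few meters, consistent with the abstract's "shorter than 6 m".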
The role of vision in odor-plume tracking by walking and flying insects.
Willis, Mark A; Avondet, Jennifer L; Zheng, Elizabeth
2011-12-15
The walking paths of male cockroaches, Periplaneta americana, tracking point-source plumes of female pheromone often appear similar in structure to those observed from flying male moths. Flying moths use visual-flow-field feedback of their movements to control steering and speed over the ground and to detect the wind speed and direction while tracking plumes of odors. Walking insects are also known to use flow field cues to steer their trajectories. Can the upwind steering we observe in plume-tracking walking male cockroaches be explained by visual-flow-field feedback, as in flying moths? To answer this question, we experimentally occluded the compound eyes and ocelli of virgin P. americana males, separately and in combination, and challenged them with different wind and odor environments in our laboratory wind tunnel. They were observed responding to: (1) still air and no odor, (2) wind and no odor, (3) a wind-borne point-source pheromone plume and (4) a wide pheromone plume in wind. If walking cockroaches require visual cues to control their steering with respect to their environment, we would expect their tracks to be less directed and more variable if they cannot see. Instead, we found few statistically significant differences among behaviors exhibited by intact control cockroaches or those with their eyes occluded, under any of our environmental conditions. Working towards our goal of a comprehensive understanding of chemo-orientation in insects, we then challenged flying and walking male moths to track pheromone plumes with and without visual feedback. Neither walking nor flying moths performed as well as walking cockroaches when there was no visual information available.
Local Homing Navigation Based on the Moment Model for Landmark Distribution and Features
Lee, Changmin; Kim, DaeEun
2017-01-01
For local homing navigation, an agent is supposed to return home based on the surrounding environmental information. According to the snapshot model, the home snapshot and the current view are compared to determine the homing direction. In this paper, we propose a novel homing navigation method using the moment model. The suggested moment model also follows the snapshot theory to compare the home snapshot and the current view, but the moment model defines a moment of landmark inertia as the sum of the product of the feature of the landmark particle with the square of its distance. The method thus uses range values of landmarks in the surrounding view and the visual features. The center of the moment can be estimated as the reference point, which is the unique convergence point in the moment potential from any view. The homing vector can easily be extracted from the centers of the moment measured at the current position and the home location. The method effectively guides homing direction in real environments, as well as in the simulation environment. In this paper, we take a holistic approach to use all pixels in the panoramic image as landmarks and use the RGB color intensity for the visual features in the moment model, in which a set of three moment functions is encoded to determine the homing vector. We also tested visual homing with the moment model using only visual features, but the suggested moment model with both the visual feature and the landmark distance shows superior performance. We demonstrate homing performance with various methods classified by the status of the feature, the distance and the coordinate alignment. PMID:29149043
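The moment potential described above, sum over landmarks of feature times squared distance to a candidate point, has a unique minimum at the feature-weighted centroid of the landmarks; a minimal sketch of that reading (function and variable names are ours, not the paper's):

```python
# Moment potential: sum_i f_i * |p_i - x|^2. Setting its gradient to zero
# gives the feature-weighted centroid as the unique convergence point.

def moment_center(landmarks, features):
    """Feature-weighted centroid of 2D landmark positions."""
    total = sum(features)
    cx = sum(f * p[0] for p, f in zip(landmarks, features)) / total
    cy = sum(f * p[1] for p, f in zip(landmarks, features)) / total
    return (cx, cy)

def homing_vector(center_home, center_current):
    """Direction from the current view's moment center toward home's."""
    return (center_home[0] - center_current[0],
            center_home[1] - center_current[1])

# With uniform features the center reduces to the plain centroid:
lm = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (2.0, 2.0)]
print(moment_center(lm, [1.0, 1.0, 1.0, 1.0]))  # (1.0, 1.0)
```

The paper's full method encodes three such moment functions over RGB features; this sketch only illustrates why the potential has a single well-defined center.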
Mapping the World through Science and Art.
ERIC Educational Resources Information Center
Dambekalns, Lydia
One of the most interesting challenges facing educators today is how to engage students in meaningful study of the environment in which they live. This paper presents the benefits of studying scientific data from an aesthetic point of view. The visual display of the earth's surface through aerial photographs and satellite map images was used as…
Demonstrating a Web-Design Technique in a Distance-Learning Environment
ERIC Educational Resources Information Center
Zdenek, Sean
2004-01-01
Objective: To lead a brief training session over a distance-learning network. Type of speech: Informative. Point value: 20% of course grade. Requirements: (a) References: Not specified; (b) Length: 15 minutes; (c) Visual aid: Yes; (d) Outline: No; (e) Prerequisite reading: Chapters 12-16, 18 (Bailey, 2002); (f) Additional requirements: None. This…
NASA Technical Reports Server (NTRS)
Arnold, Steven M.; Bednarcyk, Brett A.; Hussain, Aquila; Katiyar, Vivek
2010-01-01
A unified framework is presented that enables coupled multiscale analysis of composite structures and associated graphical pre- and postprocessing within the Abaqus/CAE environment. The recently developed, free, Finite Element Analysis--Micromechanics Analysis Code (FEAMAC) software couples NASA's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) with Abaqus/Standard and Abaqus/Explicit to perform micromechanics based FEA such that the nonlinear composite material response at each integration point is modeled at each increment by MAC/GMC. The Graphical User Interfaces (FEAMAC-Pre and FEAMAC-Post), developed through collaboration between SIMULIA Erie and the NASA Glenn Research Center, enable users to employ a new FEAMAC module within Abaqus/CAE that provides access to the composite microscale. FEAMAC-Pre is used to define and store constituent material properties, set up and store composite repeating unit cells, and assign composite materials as sections, with all data being stored within the CAE database. Likewise, FEAMAC-Post enables multiscale field quantity visualization (contour plots, X-Y plots), with point-and-click access to the microscale (i.e., fiber and matrix fields).
Augmented Citizen Science for Environmental Monitoring and Education
NASA Astrophysics Data System (ADS)
Albers, B.; de Lange, N.; Xu, S.
2017-09-01
Environmental monitoring and ecological studies detect and visualize changes of the environment over time. Some agencies are committed to documenting the development of conservation and the status of geotopes and geosites, which is time-consuming and cost-intensive. Citizen science and crowd sourcing are modern approaches to collect data and at the same time raise user awareness of environmental changes. Citizen scientists can take photographs of points of interest (POIs) with smartphones and the PAN App, which is presented in this article. The user is navigated to a specific point and is then guided with an augmented reality approach to take a photo in a specific direction. The collected photographs are processed into time-lapse videos to visualize environmental changes. Users and experts in environmental agencies can use these data for long-term documentation.
Practical method for appearance match between soft copy and hard copy
NASA Astrophysics Data System (ADS)
Katoh, Naoya
1994-04-01
CRT monitors are often used as a soft-proofing device for hard copy image output. However, what the user sees on the monitor does not match the output, even if the monitor and the output device are calibrated with CIE/XYZ or CIE/Lab. This is especially obvious when the correlated color temperature (CCT) of the CRT monitor's white point differs significantly from that of the ambient light. In a typical office environment, one uses a computer graphics monitor having a CCT of 9300 K in a room lit by white fluorescent light of 4150 K CCT. In such a case, the human visual system is partially adapted to the CRT monitor's white point and partially to the ambient light. Visual experiments were performed on the effect of the ambient lighting. A practical method for soft copy color reproduction that matches the hard copy image in appearance is presented in this paper. This method is fundamentally based on a simple von Kries adaptation model and takes into account the human visual system's partial adaptation and contrast matching.
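The partial-adaptation idea above can be sketched in a few lines: blend the monitor and ambient white points by an adaptation ratio, then apply channelwise von Kries scaling toward that effective white. The mixing weight and channel values below are illustrative assumptions, not the paper's data.

```python
def mix_white(w_monitor, w_ambient, adaptation_ratio):
    """Effective adapting white: weighted mix of monitor and ambient whites."""
    return [adaptation_ratio * m + (1.0 - adaptation_ratio) * a
            for m, a in zip(w_monitor, w_ambient)]

def von_kries(color, w_src, w_dst):
    """Channelwise von Kries scaling from source white to destination white."""
    return [c * d / s for c, s, d in zip(color, w_src, w_dst)]

# Toy channel values for a 9300 K monitor white vs 4150 K ambient white:
w_mon, w_amb = [0.95, 1.00, 1.25], [1.05, 1.00, 0.75]
w_eff = mix_white(w_mon, w_amb, 0.6)   # observer 60% adapted to the CRT
print(von_kries([0.5, 0.5, 0.5], w_mon, w_eff))
```

A full implementation would work in a cone-response space and add the contrast-matching step the abstract mentions; this only shows the adaptation blend.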
Filling gaps in visual motion for target capture
Bosco, Gianfranco; Delle Monache, Sergio; Gravano, Silvio; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Zago, Myrka; Lacquaniti, Francesco
2015-01-01
A remarkable challenge our brain must face constantly when interacting with the environment is represented by ambiguous and, at times, even missing sensory information. This is particularly compelling for visual information, being the main sensory system we rely upon to gather cues about the external world. It is not uncommon, for example, that objects catching our attention may disappear temporarily from view, occluded by visual obstacles in the foreground. Nevertheless, we are often able to keep our gaze on them throughout the occlusion or even catch them on the fly in the face of the transient lack of visual motion information. This implies that the brain can fill the gaps of missing sensory information by extrapolating the object motion through the occlusion. In recent years, much experimental evidence has been accumulated that both perceptual and motor processes exploit visual motion extrapolation mechanisms. Moreover, neurophysiological and neuroimaging studies have identified brain regions potentially involved in the predictive representation of the occluded target motion. Within this framework, ocular pursuit and manual interceptive behavior have proven to be useful experimental models for investigating visual extrapolation mechanisms. Studies in these fields have pointed out that visual motion extrapolation processes depend on manifold information related to short-term memory representations of the target motion before the occlusion, as well as to longer term representations derived from previous experience with the environment. We will review recent oculomotor and manual interception literature to provide up-to-date views on the neurophysiological underpinnings of visual motion extrapolation. PMID:25755637
Filling gaps in visual motion for target capture.
Bosco, Gianfranco; Monache, Sergio Delle; Gravano, Silvio; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Zago, Myrka; Lacquaniti, Francesco
2015-01-01
A remarkable challenge our brain must face constantly when interacting with the environment is represented by ambiguous and, at times, even missing sensory information. This is particularly compelling for visual information, being the main sensory system we rely upon to gather cues about the external world. It is not uncommon, for example, that objects catching our attention may disappear temporarily from view, occluded by visual obstacles in the foreground. Nevertheless, we are often able to keep our gaze on them throughout the occlusion or even catch them on the fly in the face of the transient lack of visual motion information. This implies that the brain can fill the gaps of missing sensory information by extrapolating the object motion through the occlusion. In recent years, much experimental evidence has been accumulated that both perceptual and motor processes exploit visual motion extrapolation mechanisms. Moreover, neurophysiological and neuroimaging studies have identified brain regions potentially involved in the predictive representation of the occluded target motion. Within this framework, ocular pursuit and manual interceptive behavior have proven to be useful experimental models for investigating visual extrapolation mechanisms. Studies in these fields have pointed out that visual motion extrapolation processes depend on manifold information related to short-term memory representations of the target motion before the occlusion, as well as to longer term representations derived from previous experience with the environment. We will review recent oculomotor and manual interception literature to provide up-to-date views on the neurophysiological underpinnings of visual motion extrapolation.
SimITK: rapid ITK prototyping using the Simulink visual programming environment
NASA Astrophysics Data System (ADS)
Dickinson, A. W. L.; Mousavi, P.; Gobbi, D. G.; Abolmaesumi, P.
2011-03-01
The Insight Segmentation and Registration Toolkit (ITK) is a long-established software package used for image analysis, visualization, and image-guided surgery applications. This package is a collection of C++ libraries, which can pose usability problems for users without C++ programming experience. To bridge the gap between the programming complexities and the required learning curve of ITK, we present a higher-level visual programming environment that represents ITK methods and classes by wrapping them into "blocks" within MATLAB's visual programming environment, Simulink. These blocks can be connected to form workflows: visual schematics that closely represent the structure of a C++ program. Due to the heavily C++ templated nature of ITK, direct interaction between Simulink and ITK requires an intermediary to convert their respective datatypes and allow intercommunication. We have developed a "Virtual Block" that serves as an intermediate wrapper around the ITK class and is responsible for resolving the templated datatypes used by ITK to native types used by Simulink. Presently, the wrapping procedure for SimITK is semi-automatic in that it requires XML descriptions of the ITK classes as a starting point, as these data are used to create all other necessary integration files. The generation of all source code and object code from the XML is done automatically by a CMake build script that yields Simulink blocks as the final result. An example 3D segmentation workflow using cranial CT data as well as a 3D MR-to-CT registration workflow are presented as a proof of concept.
Ecological validity of virtual environments to assess human navigation ability
van der Ham, Ineke J. M.; Faber, Annemarie M. E.; Venselaar, Matthijs; van Kreveld, Marc J.; Löffler, Maarten
2015-01-01
Route memory is frequently assessed in virtual environments. These environments can be presented in a fully controlled manner and are easy to use. Yet they lack the physical involvement that participants have when navigating real environments. For some aspects of route memory this may result in reduced performance in virtual environments. We assessed route memory performance in four different environments: real, virtual, virtual with directional information (compass), and hybrid. In the hybrid environment, participants walked the route outside on an open field, while all route information (i.e., path, landmarks) was shown simultaneously on a handheld tablet computer. Results indicate that performance in the real-life environment was better than in the virtual conditions for tasks relying on survey knowledge, such as pointing to the start and end points, and map drawing. Performance in the hybrid condition, however, hardly differed from real-life performance. Performance in the virtual environment did not benefit from directional information. Given these findings, the hybrid condition may offer the best of both worlds: the performance level is comparable to that of real life for route memory, yet it offers full control of visual input during route learning. PMID:26074831
Modeling Color Difference for Visualization Design.
Szafir, Danielle Albers
2018-01-01
Color is frequently used to encode values in visualizations. For color encodings to be effective, the mapping between colors and values must preserve important differences in the data. However, most guidelines for effective color choice in visualization are based either on color perceptions measured using large, uniform fields in optimal viewing environments or on qualitative intuitions. These limitations may cause data misinterpretation in visualizations, which frequently use small, elongated marks. Our goal is to develop quantitative metrics to help people use color more effectively in visualizations. We present a series of crowdsourced studies measuring color difference perceptions for three common mark types: points, bars, and lines. Our results indicate that people's ability to perceive color differences varies significantly across mark types. Probabilistic models constructed from the resulting data can provide objective guidance for designers, allowing them to anticipate viewer perceptions in order to inform effective encoding design.
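For context, the baseline metric that mark-type-specific models like the one above refine is a plain color-difference distance in CIELAB; a minimal sketch of the classic CIE76 formula (the paper's per-mark adjustments are not reproduced here):

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two CIELAB colors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Two colors differing by 3 in L* and 4 in a*:
print(delta_e76((50.0, 0.0, 0.0), (53.0, 4.0, 0.0)))  # 5.0
```

The study's point is precisely that a fixed threshold on such a distance mispredicts perception for small points and thin lines, so the same ΔE may be noticeable on a bar but invisible on a line.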
Jiang, Kang; Ling, Feiyang; Feng, Zhongxiang; Ma, Changxi; Kumfer, Wesley; Shao, Chen; Wang, Kun
2018-06-01
With the rapid growth in mobile phone use worldwide, traffic safety experts have begun to consider the impact of mobile phone distractions on pedestrian crossing safety. This study sought to investigate how mobile phone distractions (music distraction, phone conversation distraction and text distraction) affect the behavior of pedestrians while they are crossing the street. An outdoor-environment experiment was conducted among 28 college student pedestrians. Two HD videos and an eye tracker were employed to record and analyze crossing behavior and visual attention allocation. The results of the research showed that the three mobile phone distractions cause different levels of impairment to pedestrians' crossing performance, with the greatest effect from text distraction, followed by phone conversation distraction and music distraction. Pedestrians distracted by music initiate crossing later, have increased pupil diameter, and reduce their scanning frequency, fixation points and fixation times toward traffic signal area priorities. In addition to the above effects, pedestrians distracted by phone conversation cross the street more slowly, direct fewer fixation points to the right traffic area, and spend less fixation time and lower average fixation duration on the left traffic area. Moreover, pedestrians distracted by texting look left and right less often and switch, distribute and maintain less visual attention on the traffic environment. These findings may inform researchers, policy makers, and pedestrians. Copyright © 2018 Elsevier Ltd. All rights reserved.
Visual Depth from Motion Parallax and Eye Pursuit
Stroyan, Keith; Nawrot, Mark
2012-01-01
A translating observer viewing a rigid environment experiences “motion parallax,” the relative movement upon the observer’s retina of variously positioned objects in the scene. This retinal movement of images provides a cue to the relative depth of objects in the environment; however, retinal motion alone cannot mathematically determine the relative depth of the objects. Visual perception of depth from lateral observer translation uses both retinal image motion and eye movement. In (Nawrot & Stroyan, 2009, Vision Res. 49, p. 1969) we showed mathematically that the ratio of the rate of retinal motion over the rate of smooth eye pursuit determines depth relative to the fixation point in central vision. We also reported on psychophysical experiments indicating that this ratio is the important quantity for perception. Here we analyze the motion/pursuit cue for the more general, and more complicated, case when objects are distributed across the horizontal viewing plane beyond central vision. We show how the mathematical motion/pursuit cue varies with different points across the plane and with time as an observer translates. If time-varying retinal motion and smooth eye pursuit are the only signals used for this visual process, it is important to know what is mathematically possible to derive about depth and structure. Our analysis shows that the motion/pursuit ratio determines an excellent description of depth and structure in these broader stimulus conditions, provides a detailed quantitative hypothesis of these visual processes for the perception of depth and structure from motion parallax, and provides a computational foundation to analyze the dynamic geometry of future experiments. PMID:21695531
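The central quantity above is simple to state in code: the motion/pursuit ratio is the rate of retinal image motion divided by the rate of smooth eye pursuit. A hedged sketch (the sign convention and the interpretation of the ratio as an approximate fraction of fixation distance are simplifications; the exact law is in the cited 2009 paper):

```python
def motion_pursuit_ratio(retinal_motion_deg_s, pursuit_deg_s):
    """Signed motion/pursuit ratio. Convention assumed here:
    positive means nearer than fixation, negative means farther."""
    if pursuit_deg_s == 0:
        raise ValueError("ratio undefined without eye pursuit")
    return retinal_motion_deg_s / pursuit_deg_s

# A point with 1 deg/s of retinal slip during 4 deg/s pursuit:
r = motion_pursuit_ratio(1.0, 4.0)
print(r)  # 0.25
```

The division-by-zero guard mirrors the theoretical point that retinal motion alone, without a pursuit signal, cannot determine relative depth.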
Walking simulator for evaluation of ophthalmic devices
NASA Astrophysics Data System (ADS)
Barabas, James; Woods, Russell L.; Peli, Eli
2005-03-01
Simulating mobility tasks in a virtual environment reduces risk for research subjects and allows for improved experimental control and measurement. We are currently using a simulated shopping mall environment (where subjects walk on a treadmill in front of a large projected video display) to evaluate a number of ophthalmic devices developed at the Schepens Eye Research Institute for people with vision impairment, particularly visual field defects. We have conducted experiments to study subjects' perception of "safe passing distance" when walking toward stationary obstacles. The subjects' binary responses about potential collisions are analyzed by fitting a psychometric function, which gives an estimate of the subject's perceived safe passing distance and the variability of subject responses. The system also enables simulations of visual field defects using head and eye tracking, enabling better understanding of the impact of visual field loss. Technical infrastructure for our simulated walking environment includes a custom eye and head tracking system, a gait feedback system to adjust treadmill speed, and a handheld 3-D pointing device. Images are generated by a graphics workstation, which contains a model with photographs of storefronts from an actual shopping mall, where concurrent validation experiments are being conducted.
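The psychometric-function analysis mentioned above can be sketched with a logistic curve: binary "safe / not safe" judgments as a function of passing distance are fit with a threshold (the perceived safe passing distance) and a spread (response variability). A minimal illustrative form, with parameter names assumed rather than taken from the paper:

```python
import math

def psychometric(x, pse, s):
    """P(judged 'safe') as passing distance x grows; logistic with
    threshold pse (point of subjective equality) and spread s."""
    return 1.0 / (1.0 + math.exp(-(x - pse) / s))

# At the fitted threshold the response probability is exactly 0.5:
print(psychometric(0.8, pse=0.8, s=0.1))  # 0.5
```

Fitting pse and s to observed responses (e.g., by maximum likelihood) yields the two quantities the abstract names: the perceived safe passing distance and the variability of responses.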
Mapping Gnss Restricted Environments with a Drone Tandem and Indirect Position Control
NASA Astrophysics Data System (ADS)
Cledat, E.; Cucci, D. A.
2017-08-01
The problem of autonomously mapping highly cluttered environments, such as urban and natural canyons, is intractable with current UAV technology. The reason lies in the absence or unreliability of GNSS signals due to partial sky occlusion or multi-path effects. High quality carrier-phase observations are also required in efficient mapping paradigms, such as Assisted Aerial Triangulation, to achieve high ground accuracy without the need for dense networks of ground control points. In this work we consider a drone tandem in which the first drone flies outside the canyon, where the GNSS constellation is ideal, visually tracks the second drone, and provides indirect position control for it. This enables both autonomous guidance and accurate mapping of GNSS-restricted environments without the need for ground control points. We address the technical feasibility of this concept considering preliminary real-world experiments in comparable conditions, and we perform a mapping accuracy prediction based on a simulation scenario.
A Framework for Voxel-Based Global Scale Modeling of Urban Environments
NASA Astrophysics Data System (ADS)
Gehrung, Joachim; Hebel, Marcus; Arens, Michael; Stilla, Uwe
2016-10-01
The generation of 3D city models is a very active field of research. Modeling environments as point clouds may be fast, but it has disadvantages that volumetric representations readily address, especially with regard to selective data acquisition, change detection, and fast-changing environments. This paper therefore proposes a framework for the volumetric modeling and visualization of large-scale urban environments. Besides an architecture and the right mix of algorithms for the task, two compression strategies for volumetric models and a data-quality-based approach for the import of range measurements are proposed. The capabilities of the framework are demonstrated on a mobile laser scanning dataset of the Technical University of Munich. Furthermore, the loss introduced by the compression techniques is evaluated, and their memory consumption is compared to that of raw point clouds. The presented results show that generation, storage, and real-time rendering of even large urban models are feasible with off-the-shelf hardware.
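The voxel representation at the core of such a framework can be illustrated with a minimal octree-style occupancy sketch. This is hypothetical code, not the paper's implementation; the key point is that occupied leaves, addressed by their root-to-leaf child indices, are stored instead of raw points.

```python
import numpy as np

def voxel_keys(points, origin, size, depth):
    """Map 3-D points to octree leaf keys at a given depth.
    Each key is the tuple of child indices (0-7) from root to leaf."""
    res = 2 ** depth
    idx = np.floor((points - origin) / size * res).astype(int)
    idx = np.clip(idx, 0, res - 1)
    keys = set()
    for ix, iy, iz in idx:
        key = []
        for level in range(depth - 1, -1, -1):
            # Interleave one bit of x, y, z per tree level into a child index
            key.append(((ix >> level & 1) << 2) | ((iy >> level & 1) << 1) | (iz >> level & 1))
        keys.add(tuple(key))
    return keys

pts = np.random.default_rng(1).random((1000, 3))          # synthetic scan points
leaves = voxel_keys(pts, origin=np.zeros(3), size=1.0, depth=3)
```

At depth 3 there are at most 512 leaves, so the occupancy set is bounded regardless of how many raw points fall into the volume, which is the basic storage argument for volumetric models.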
Vinken, Kasper; Vogels, Rufin; Op de Beeck, Hans
2017-03-20
From an ecological point of view, it is generally suggested that the main goal of vision in rats and mice is navigation and (aerial) predator evasion [1-3]. The latter requires fast and accurate detection of a change in the visual environment. An outstanding question is whether there are mechanisms in the rodent visual system that would support and facilitate visual change detection. An experimental protocol frequently used to investigate change detection in humans is the oddball paradigm, in which a rare, unexpected stimulus is presented in a train of stimulus repetitions [4]. A popular "predictive coding" theory of cortical responses states that neural responses should decrease for expected sensory input and increase for unexpected input [5, 6]. Despite evidence for response suppression and enhancement in noninvasive scalp recordings in humans with this paradigm [7, 8], it has proven challenging to observe both phenomena in invasive action potential recordings in other animals [9-11]. During a visual oddball experiment, we recorded multi-unit spiking activity in rat primary visual cortex (V1) and latero-intermediate area (LI), which is a higher area of the rodent ventral visual stream. In rat V1, there was only evidence for response suppression related to stimulus-specific adaptation, and not for response enhancement. However, higher up in area LI, spiking activity showed clear surprise-based response enhancement in addition to stimulus-specific adaptation. These results show that neural responses along the rat ventral visual stream become increasingly sensitive to changes in the visual environment, suggesting a system specialized in the detection of unexpected events. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Jones, P. W.; Strelitz, R. A.
2012-12-01
The output of a simulation is best comprehended through the agency and methods of visualization, but a vital component of good science is knowledge of uncertainty. While great strides have been made in the quantification of uncertainty, especially in simulation, there is still a notable gap: there is no widely accepted means of simultaneously viewing the data and the associated uncertainty in one pane. Visualization saturates the screen, using the full range of color, shadow, opacity, and tricks of perspective to display even a single variable; there is no room left in the visualization expert's repertoire for uncertainty. We present a method of visualizing uncertainty without sacrificing the clarity and power of the underlying visualization, one that works as well in 3-D and time-varying visualizations as it does in 2-D. At its heart, it relies on a principal tenet of continuum mechanics, replacing the notion of value at a point with the more diffuse notion of density as a measure of content in a region. First, the uncertainties calculated or tabulated at each point are transformed into a piecewise continuous field of uncertainty density. We next compute a weighted Voronoi tessellation of N user-specified convex polygonal/polyhedral cells such that each cell contains the same amount of uncertainty as defined by this density field; the problem thus devolves into a minimization. Computing such a spatial decomposition is O(N*N), and it can be done iteratively, making it straightforward to update over time. The polygonal mesh does not interfere with the visualization of the data and can easily be toggled on or off. In this representation, a small cell implies a great concentration of uncertainty, and conversely. The content-weighted polygons are identical to the cartograms familiar to the information visualization community from depictions of, e.g., voting results per state.
Furthermore, one can dispense with the mesh or edges entirely, replacing them with symbols or glyphs at the generating points (effectively the centers of the polygons). This methodology readily admits rigorous statistical analysis using standard components found in R, and is thus entirely compatible with the visualization packages we use (VisIt and/or ParaView), the language we use (Python), and the UVCDAT environment that provides the programmer and analyst workbench. We will demonstrate the power and effectiveness of this methodology in climate studies. We will further argue that our method of defining (or predicting) values in a region has many advantages over the traditional visualization notion of value at a point.
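One simple way to approximate the equal-content tessellation described above is to sample points in proportion to the uncertainty density and run Lloyd/k-means iterations on the samples: generators then crowd into high-uncertainty regions, so cells there shrink. This is an illustrative stand-in for the paper's weighted Voronoi computation, using an invented density field.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented uncertainty density on a 2-D grid, concentrated near (0.8, 0.8)
x, y = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
rho = np.exp(-((x - 0.8) ** 2 + (y - 0.8) ** 2) / 0.05)

# Sample grid cells with probability proportional to the density
cells = np.column_stack([x.ravel(), y.ravel()])
samples = cells[rng.choice(len(cells), size=4000, p=rho.ravel() / rho.sum())]

# Lloyd / k-means iterations: each generator's cell ends up holding roughly
# the same number of samples, i.e. roughly the same amount of uncertainty
N = 16
centers = samples[rng.choice(len(samples), N, replace=False)].copy()
for _ in range(20):
    lab = np.argmin(((samples[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    for k in range(N):
        if np.any(lab == k):
            centers[k] = samples[lab == k].mean(axis=0)

counts = np.bincount(lab, minlength=N)   # samples (~uncertainty) per cell
```

Because the density is concentrated in one corner, the converged generators migrate toward it; small cells there signal a great concentration of uncertainty, exactly as in the cartogram analogy.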
Subjective evaluation of HEVC in mobile devices
NASA Astrophysics Data System (ADS)
Garcia, Ray; Kalva, Hari
2013-03-01
Mobile compute environments provide a unique set of user needs and expectations that designers must consider. With increased multimedia use in mobile environments, video encoding methods within the smart phone market segment are key factors that contribute to a positive user experience. Currently available display resolutions and expected cellular bandwidth are major factors the designer must consider when determining which encoding methods should be supported. The desired goal is to maximize the consumer experience, reduce cost, and reduce time to market. This paper presents a comparative evaluation of the quality of user experience when the HEVC and AVC/H.264 video coding standards were used. The goal of the study was to evaluate any improvements in user experience when using HEVC. Subjective comparisons were made between the H.264/AVC and HEVC encoding standards in accordance with the double-stimulus impairment scale (DSIS) defined by ITU-R BT.500-13. Test environments are based on smart phone LCD resolutions and expected cellular bit rates of 200 kbps and 400 kbps. Subjective feedback shows both encoding methods are adequate at a 400 kbps constant bit rate. However, a noticeable consumer experience gap was observed at 200 kbps. H.264 subjective quality was significantly lower for video sequences with multiple moving objects and no single point of visual attraction, whereas sequences with a single point of visual attraction or few moving objects tended to receive higher H.264 subjective quality.
Siu, Kin Wai Michael; Wong, M M Y
2013-07-01
The principal objective of a healthy living environment is to improve the quality of everyday life. Visually impaired persons (VIPs) encounter many difficulties in everyday life through a series of barriers, particularly in relation to public toilets. This study aimed to explore the concerns of VIPs in accessing public toilets, and identify methods for improvement. Considerations about user participation are also discussed. Adopting a case study approach, VIPs were invited to participate in the research process. In addition to in-depth interviews and field visits, models and a simulated full-scale environment were produced to facilitate the VIPs to voice their opinions. The key findings indicate that the design of public toilets for promoting public health should be considered and tackled from a three-level framework: plain, line and point. Governments, professionals and the public need to consider the quality of public toilets in terms of policy, implementation and management. VIPs have the right to access public toilets. Governments and professionals should respect the particular needs and concerns of VIPs. A three-level framework (plain, line and point) is required to consider the needs of VIPs in accessing public toilets, and user participation is a good way to reveal the actual needs of VIPs. Copyright © 2013 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
Multisensory and Modality-Specific Influences on Adaptation to Optical Prisms
Calzolari, Elena; Albini, Federica; Bolognini, Nadia; Vallar, Giuseppe
2017-01-01
Visuo-motor adaptation to optical prisms displacing the visual scene (prism adaptation, PA) is a method used for investigating visuo-motor plasticity in healthy individuals and, in clinical settings, for the rehabilitation of unilateral spatial neglect. In the standard paradigm, the adaptation phase involves repeated pointings to visual targets, while wearing optical prisms displacing the visual scene laterally. Here we explored differences in PA, and its aftereffects (AEs), as related to the sensory modality of the target. Visual, auditory, and multisensory – audio-visual – targets in the adaptation phase were used, while participants wore prisms displacing the visual field rightward by 10°. Proprioceptive, visual, visual-proprioceptive, auditory-proprioceptive straight-ahead shifts were measured. Pointing to auditory and to audio-visual targets in the adaptation phase produces proprioceptive, visual-proprioceptive, and auditory-proprioceptive AEs, as the typical visual targets did. This finding reveals that cross-modal plasticity effects involve both the auditory and the visual modality, and their interactions (Experiment 1). Even a shortened PA phase, requiring only 24 pointings to visual and audio-visual targets (Experiment 2), is sufficient to bring about AEs, as compared to the standard 92-pointings procedure. Finally, pointings to auditory targets cause AEs, although PA with a reduced number of pointings (24) to auditory targets brings about smaller AEs, as compared to the 92-pointings procedure (Experiment 3). Together, results from the three experiments extend to the auditory modality the sensorimotor plasticity underlying the typical AEs produced by PA to visual targets. Importantly, PA to auditory targets appears characterized by less accurate pointings and error correction, suggesting that the auditory component of the PA process may be less central to the building up of the AEs, than the sensorimotor pointing activity per se. 
These findings highlight both the effectiveness of a reduced number of pointings for bringing about AEs, and the possibility of inducing PA with auditory targets, which may be used as a compensatory route in patients with visual deficits. PMID:29213233
Interactive modeling and simulation of peripheral nerve cords in virtual environments
NASA Astrophysics Data System (ADS)
Ullrich, Sebastian; Frommen, Thorsten; Eckert, Jan; Schütz, Astrid; Liao, Wei; Deserno, Thomas M.; Ntouba, Alexandre; Rossaint, Rolf; Prescher, Andreas; Kuhlen, Torsten
2008-03-01
This paper contributes to the modeling, simulation, and visualization of peripheral nerve cords. Until now, only sparse datasets of nerve cords have been available, and this data has not yet been used in simulators because it is only static. To build up a more flexible anatomical structure of peripheral nerve cords, we propose a hierarchical tree data structure in which each node represents a nerve branch. The shape of each nerve segment is approximated by spline curves. Interactive modeling allows for the creation and editing of control points, which are used for branching nerve sections, calculating spline curves, and editing spline representations via cross sections. Furthermore, the control points can be attached to different anatomic structures. Through this approach, nerve cords deform in accordance with the movement of the connected structures, e.g., muscles or bones. As a result, we have developed an intuitive modeling system that runs on desktop computers and in immersive environments. It allows anatomical experts to create movable peripheral nerve cords for articulated virtual humanoids. Direct feedback of changes induced by movement or deformation is achieved by visualization in real time. The techniques and the resulting data are already used in medical simulators.
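The hierarchical tree of spline branches can be sketched as follows: a minimal illustration with Bezier segments evaluated by de Casteljau's algorithm. The class name and geometry are hypothetical; the paper's system uses general spline curves with control points attachable to anatomical structures.

```python
import numpy as np

class NerveBranch:
    """Node of the hierarchical nerve tree: a branch holds its own spline
    control points plus the child branches that fork off it."""
    def __init__(self, control_points, children=None):
        self.cp = np.asarray(control_points, float)
        self.children = children or []

    def point(self, t):
        # de Casteljau evaluation of the Bezier curve at parameter t in [0, 1]
        pts = self.cp.copy()
        while len(pts) > 1:
            pts = (1 - t) * pts[:-1] + t * pts[1:]
        return pts[0]

# Hypothetical geometry: a trunk with one child branch forking off near its middle
child = NerveBranch([[0.5, 0.5, 0], [0.8, 1.0, 0], [1.2, 1.5, 0]])
trunk = NerveBranch([[0, 0, 0], [0.5, 1, 0], [1, 0, 0]], children=[child])
```

Moving a control point (e.g., because an attached muscle moves) changes `cp` and the whole branch deforms on the next evaluation, which mirrors the deformation-with-connected-structures behavior described in the abstract.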
Live Aircraft Encounter Visualization at FutureFlight Central
NASA Technical Reports Server (NTRS)
Murphy, James R.; Chinn, Fay; Monheim, Spencer; Otto, Neil; Kato, Kenji; Archdeacon, John
2018-01-01
Researchers at the National Aeronautics and Space Administration (NASA) have developed an aircraft data streaming capability that can be used to visualize live aircraft in near real-time. During a joint Federal Aviation Administration (FAA)/NASA Airborne Collision Avoidance System flight series, test sorties between unmanned aircraft and manned intruder aircraft were shown in real-time at NASA Ames' FutureFlight Central tower facility as a virtual representation of the encounter. This capability leveraged existing live surveillance, video, and audio data streams distributed through a Live, Virtual, Constructive test environment, then depicted the encounter from the point of view of any aircraft in the system, showing the proximity of the other aircraft. For the demonstration, position report data were sent to the ground from on-board sensors on the unmanned aircraft. The point of view can be changed dynamically, allowing encounters to be observed from all angles. Visualizing the encounters in real-time provides a safe and effective method for observing live flight testing and a strong alternative to travel to the remote test range.
Strategies to Evaluate the Visibility Along an Indoor Path in a Point Cloud Representation
NASA Astrophysics Data System (ADS)
Grasso, N.; Verbree, E.; Zlatanova, S.; Piras, M.
2017-09-01
Much research has been oriented toward formulating algorithms for estimating paths in indoor environments from three-dimensional representations of space. The architectural configuration, the actions that take place within it, and the location of objects in the space influence the paths along which it is possible to move, as they may cause visibility problems. To overcome the visibility issue, different methods have been proposed that identify the areas visible from a certain point of view, but they often do not take into account the user's visual perception of the environment and do not allow estimating how complicated it may be to follow a certain path. In the fields of space syntax and cognitive science, attempts have been made to describe the characteristics of a building or an urban environment using isovists and visibility graphs; some numerical properties of these representations describe the space as it is perceived by a user. However, most of these studies analyze the environment in a two-dimensional space. In this paper we propose a method to evaluate quantitatively the complexity of a certain path within an environment represented by a three-dimensional point cloud, combining some of the previously mentioned techniques and considering the space visible from a certain point of view, depending on the moving agent (pedestrians, people in wheelchairs, UAVs, UGVs, robots).
Visual Environments for CFD Research
NASA Technical Reports Server (NTRS)
Watson, Val; George, Michael W. (Technical Monitor)
1994-01-01
This viewgraph presentation gives an overview of the visual environments for computational fluid dynamics (CFD) research. It includes details on critical needs from the future computer environment, features needed to attain this environment, prospects for changes in and the impact of the visualization revolution on the human-computer interface, human processing capabilities, limits of personal environment and the extension of that environment with computers. Information is given on the need for more 'visual' thinking (including instances of visual thinking), an evaluation of the alternate approaches for and levels of interactive computer graphics, a visual analysis of computational fluid dynamics, and an analysis of visualization software.
NASA Astrophysics Data System (ADS)
Sasaki, T.; Azuma, S.; Matsuda, S.; Nagayama, A.; Ogido, M.; Saito, H.; Hanafusa, Y.
2016-12-01
The Japan Agency for Marine-Earth Science and Technology (JAMSTEC) archives a large amount of deep-sea research videos and photos obtained by JAMSTEC's research submersibles and vehicles with cameras. The web site "JAMSTEC E-library of Deep-sea Images: J-EDI" (http://www.godac.jamstec.go.jp/jedi/e/) has made videos and photos available to the public via the Internet since 2011. Users can search for target videos and photos at J-EDI by keywords, easy-to-understand icons, and dive information, because operations staff classify videos and photos by content, e.g., living organisms and geological environment, and add comments to them. Dive survey data including videos and photos are not only valuable academically but also helpful for education and outreach activities. To improve visibility for broader communities, this year we added new functions that display various dive survey data in 3-dimensional views synchronized with videos. New functions: Users can search for dive survey data on 3D maps with plotted dive points using the WebGL virtual map engine "Cesium". By selecting a dive point, users can watch deep-sea videos and photos and associated environmental data, e.g., water temperature, salinity, and rock and biological sample photos, obtained by the dive survey. Users can browse a dive track visualized in a 3D virtual space using a WebGL JavaScript library. By synchronizing this virtual dive track with videos, users can watch deep-sea videos recorded at any point on the dive track. Users can play an animation in which a submersible-shaped polygon automatically traces the 3D virtual dive track while the displays of dive survey data stay synchronized with the trace. Users can also refer directly to additional information from other JAMSTEC data sites, such as the marine biodiversity database, marine biological sample database, rock sample database, and cruise and dive information database, on each page where a 3D virtual dive track is displayed.
A 3D visualization of a dive track lets users experience a virtual dive survey. In addition, synchronizing a virtual dive track with videos makes it easy to understand the living organisms and geological environment at a dive point. These functions will therefore visually support the understanding of deep-sea environments in lectures and educational activities.
2008-01-01
Objective: To compare optical coherence tomography (OCT)-measured retinal thickness and visual acuity in eyes with diabetic macular edema (DME) both before and after macular laser photocoagulation. Design: Cross-sectional and longitudinal study. Participants: 210 subjects (251 eyes) with DME enrolled in a randomized clinical trial of laser techniques. Methods: Retinal thickness was measured with OCT and visual acuity was measured with the electronic-ETDRS procedure. Main Outcome Measures: OCT-measured center point thickness and visual acuity. Results: The correlation coefficients for visual acuity versus OCT center point thickness were 0.52 at baseline and 0.49, 0.36, and 0.38 at 3.5, 8, and 12 months post-laser photocoagulation. The slope of the best-fit line to the baseline data was approximately 4.4 letters (95% C.I.: 3.5, 5.3) better visual acuity for every 100 microns decrease in center point thickness at baseline, with no important difference at follow-up visits. Approximately one-third of the variation in visual acuity could be predicted by a linear regression model that incorporated OCT center point thickness, age, hemoglobin A1C, and severity of fluorescein leakage in the center and inner subfields. The correlation between change in visual acuity and change in OCT center point thickening 3.5 months after laser treatment was 0.44, with no important difference at the other follow-up times. A subset of eyes showed paradoxical improvements in visual acuity with increased center point thickening (7–17% at the three time points) or paradoxical worsening of visual acuity with a decrease in center point thickening (18–26% at the three time points). Conclusions: There is modest correlation between OCT-measured center point thickness and visual acuity, and modest correlation of changes in retinal thickening and visual acuity following focal laser treatment for DME.
However, a wide range of visual acuity may be observed for a given degree of retinal edema and paradoxical increases in center point thickening with increases in visual acuity as well as paradoxical decreases in center point thickening with decreases in visual acuity were not uncommon. Thus, although OCT measurements of retinal thickness represent an important tool in clinical evaluation, they cannot reliably substitute as a surrogate for visual acuity at a given point in time. This study does not address whether short-term changes on OCT are predictive of long-term effects on visual acuity. PMID:17123615
Compression-based integral curve data reuse framework for flow visualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Fan; Bi, Chongke; Guo, Hanqi
Currently, by default, integral curves are repeatedly re-computed in different flow visualization applications, such as FTLE field computation and source-destination queries, leading to unnecessary resource cost. We present a compression-based data reuse framework for integral curves, to greatly reduce their retrieval cost, especially in a resource-limited environment. In our design, a hierarchical and hybrid compression scheme is proposed to balance three objectives: high compression ratio, controllable error, and low decompression cost. Specifically, we use and combine digitized curve sparse representation, floating-point data compression, and octree space partitioning to adaptively achieve these objectives. Results have shown that our data reuse framework can achieve a speedup of tens of times in the resource-limited environment compared to on-the-fly particle tracing, while keeping information loss controllable. Moreover, our method provides fast integral curve retrieval for more complex data, such as unstructured mesh data.
The Virtual Pelvic Floor, a tele-immersive educational environment.
Pearl, R. K.; Evenhouse, R.; Rasmussen, M.; Dech, F.; Silverstein, J. C.; Prokasy, S.; Panko, W. B.
1999-01-01
This paper describes the development of the Virtual Pelvic Floor, a new method of teaching the complex anatomy of the pelvic region utilizing virtual reality and advanced networking technology. Virtual reality technology allows improved visualization of three-dimensional structures over conventional media because it supports stereo vision, viewer-centered perspective, large angles of view, and interactivity. Two or more ImmersaDesk systems, drafting table format virtual reality displays, are networked together providing an environment where teacher and students share a high quality three-dimensional anatomical model, and are able to converse, see each other, and to point in three dimensions to indicate areas of interest. This project was realized by the teamwork of surgeons, medical artists and sculptors, computer scientists, and computer visualization experts. It demonstrates the future of virtual reality for surgical education and applications for the Next Generation Internet. PMID:10566378
Evaluation of Pseudo-Haptic Interactions with Soft Objects in Virtual Environments.
Li, Min; Sareh, Sina; Xu, Guanghua; Ridzuan, Maisarah Binti; Luo, Shan; Xie, Jun; Wurdemann, Helge; Althoefer, Kaspar
2016-01-01
This paper proposes a pseudo-haptic feedback method conveying simulated soft surface stiffness information through a visual interface. The method exploits a combination of two feedback techniques, namely visual feedback of soft surface deformation and control of the indenter avatar speed, to convey stiffness information of a simulated surface of a soft object in virtual environments. The proposed method was effective in distinguishing different sizes of virtual hard nodules integrated into the simulated soft bodies. To further improve the interactive experience, the approach was extended creating a multi-point pseudo-haptic feedback system. A comparison with regards to (a) nodule detection sensitivity and (b) elapsed time as performance indicators in hard nodule detection experiments to a tablet computer incorporating vibration feedback was conducted. The multi-point pseudo-haptic interaction is shown to be more time-efficient than the single-point pseudo-haptic interaction. It is noted that multi-point pseudo-haptic feedback performs similarly well when compared to a vibration-based feedback method based on both performance measures elapsed time and nodule detection sensitivity. This proves that the proposed method can be used to convey detailed haptic information for virtual environmental tasks, even subtle ones, using either a computer mouse or a pressure sensitive device as an input device. This pseudo-haptic feedback method provides an opportunity for low-cost simulation of objects with soft surfaces and hard inclusions, as, for example, occurring in ever more realistic video games with increasing emphasis on interaction with the physical environment and minimally invasive surgery in the form of soft tissue organs with embedded cancer nodules. 
Hence, the method can be used in many low-budget applications where haptic sensation is required, such as surgeon training or video games, either using desktop computers or portable devices, showing reasonably high fidelity in conveying stiffness perception to the user.
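The two feedback techniques named in the abstract, slowing the indenter avatar and visually deforming the surface, can be sketched in a few lines. These are illustrative formulas only; the gain `k`, the linear spring model, and the function names are assumptions, not the authors' implementation.

```python
def avatar_speed(input_speed, stiffness, k=1.0):
    """Pseudo-haptic control/display-ratio sketch: the on-screen indenter
    slows down over stiffer regions, perceived by the user as resistance."""
    return input_speed / (1.0 + k * stiffness)

def indentation_depth(force, stiffness, max_depth):
    """Visual deformation cue: softer material deforms more under equal force
    (assumes a linear spring, capped at a maximum displayed depth)."""
    return min(force / stiffness, max_depth)

# A stiff hard nodule under the soft surface slows the avatar and deforms less
soft, nodule = 1.0, 4.0
speed_over_soft, speed_over_nodule = avatar_speed(1.0, soft), avatar_speed(1.0, nodule)
```

The perceptual trick is that both cues are purely visual, so they work with a mouse or a pressure-sensitive device and need no haptic hardware.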
Quasi-Monochromatic Visual Environments and the Resting Point of Accommodation
1988-01-01
...accommodation. No statistically significant differences were revealed to support the possibility of color-mediated differential regression to resting... discussed with respect to the general findings of the total sample as well as the specific behavior of individual participants. The summarized statistics... remaining ten varied considerably with respect to the averaged trends reported in the above descriptive statistics as well as with respect to precision...
Induced Stress, Artificial Environment, Simulated Tactical Operations Center Model
1973-06-01
...oriented activities or, at best, the application of doctrinal concepts to command post exercises. Unlike mechanical skills, weapon's... training model identified as APSTRAT, an acronym indicating aptitude and strategies, be considered as a point of reference. Several instructional... post providing visual and aural sensing tasks and training-objective-oriented performance tasks. Finally, he concludes that failure should be...
Photorealistic ray tracing to visualize automobile side mirror reflective scenes.
Lee, Hocheol; Kim, Kyuman; Lee, Gang; Lee, Sungkoo; Kim, Jingu
2014-10-20
We describe an interactive visualization procedure for determining the optimal surface of a special automobile side mirror, thereby removing the blind spot, without the need for feedback from the error-prone manufacturing process. If the horizontally progressive curvature distributions are set to the semi-mathematical expression for a free-form surface, the surface point set can then be derived through numerical integration. This is then converted to a NURBS surface while retaining the surface curvature. Then, reflective scenes from the driving environment can be virtually realized using photorealistic ray tracing, in order to evaluate how these reflected images would appear to drivers.
A cognitive approach to vision for a mobile robot
NASA Astrophysics Data System (ADS)
Benjamin, D. Paul; Funk, Christopher; Lyons, Damian
2013-05-01
We describe a cognitive vision system for a mobile robot. This system works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion. These 3D models are embedded within an overall 3D model of the robot's environment. This approach turns the computer vision problem into a search problem, with the goal of constructing a physically realistic model of the entire environment. At each step, the vision system selects a point in the visual input to focus on. The distance, shape, texture and motion information are computed in a small region and used to build a mesh in a 3D virtual world. Background knowledge is used to extend this structure as appropriate, e.g. if a patch of wall is seen, it is hypothesized to be part of a large wall and the entire wall is created in the virtual world, or if part of an object is recognized, the whole object's mesh is retrieved from the library of objects and placed into the virtual world. The difference between the input from the real camera and from the virtual camera is compared using local Gaussians, creating an error mask that indicates the main differences between them. This is then used to select the next points to focus on. This approach permits us to use very expensive algorithms on small localities, thus generating very accurate models. It also is task-oriented, permitting the robot to use its knowledge about its task and goals to decide which parts of the environment need to be examined. The software components of this architecture include PhysX for the 3D virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture, which controls the perceptual processing and robot planning. The hardware is a custom-built pan-tilt stereo color camera. 
We describe experiments using both static and moving objects.
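The comparison between the real camera image and the virtual camera image can be sketched with per-block local statistics. The box-filtered block mean below is a stand-in for the local Gaussians mentioned in the abstract; the block size and threshold are illustrative assumptions.

```python
import numpy as np

def block_mean(img, b):
    """Local mean over non-overlapping b-by-b blocks (box-filter stand-in
    for the local Gaussian statistics)."""
    h, w = img.shape
    return img[:h // b * b, :w // b * b].reshape(h // b, b, w // b, b).mean(axis=(1, 3))

def error_mask(real, virtual, b=8, thresh=0.1):
    """Blocks whose local statistics differ between the real view and the
    rendered (virtual) view; these mark where the robot should fixate next."""
    return np.abs(block_mean(real, b) - block_mean(virtual, b)) > thresh

real = np.zeros((64, 64))
virtual = np.zeros((64, 64))
virtual[:16, :16] = 1.0        # a 16x16 region where the 3D model is wrong
mask = error_mask(real, virtual)
```

Only the blocks covering the mismatched corner light up, so the next fixation points are exactly the regions where the virtual world disagrees with the camera.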
The effect of visual context on manual localization of remembered targets
NASA Technical Reports Server (NTRS)
Barry, S. R.; Bloomberg, J. J.; Huebner, W. P.
1997-01-01
This paper examines the contribution of egocentric cues and visual context to manual localization of remembered targets. Subjects pointed in the dark to the remembered position of a target previously viewed without or within a structured visual scene. Without a remembered visual context, subjects pointed to within 2 degrees of the target. The presence of a visual context with cues of straight ahead enhanced pointing performance to the remembered location of central but not off-center targets. Thus, visual context provides strong visual cues of target position and the relationship of body position to target location. Without a visual context, egocentric cues provide sufficient input for accurate pointing to remembered targets.
A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera.
Ci, Wenyan; Huang, Yingping
2016-10-17
Visual odometry estimates the ego-motion of an agent (e.g., vehicle and robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion, using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth and camera ego-motion parameters through the camera's 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function, using the iterative Levenberg-Marquard method. One of key points for visual odometry is that the feature points selected for the computation should contain inliers as much as possible. In this work, the feature points and their optical flows are initially detected by using the Kanade-Lucas-Tomasi (KLT) algorithm. A circle matching is followed to remove the outliers caused by the mismatching of the KLT algorithm. A space position constraint is imposed to filter out the moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method.
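The RANSAC stage of this pipeline can be illustrated in isolation: fit a dominant motion model to noisy optical-flow vectors and reject independently moving points as outliers. For brevity this sketch uses a pure 2-D translation model rather than the paper's 6-DoF objective function; all data are synthetic.

```python
import numpy as np

def ransac_translation(flows, iters=200, tol=0.5, seed=0):
    """Keep the largest consensus set of flow vectors under a pure 2-D
    translation model; independently moving points become outliers.
    (The paper fits full 6-DoF ego-motion; this isolates the RANSAC idea.)"""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(flows), bool)
    for _ in range(iters):
        cand = flows[rng.integers(len(flows))]           # 1-point hypothesis
        inl = np.linalg.norm(flows - cand, axis=1) < tol
        if inl.sum() > best.sum():
            best = inl
    return flows[best].mean(axis=0), best

rng = np.random.default_rng(3)
true = np.array([1.0, -0.5])
static = true + rng.normal(0, 0.1, (80, 2))    # flow from the static scene
movers = rng.uniform(-5, 5, (20, 2))           # flow from moving objects
model, inliers = ransac_translation(np.vstack([static, movers]))
```

The recovered model tracks the static-scene motion while most of the independently moving points are excluded, which is why the refined feature set yields robust ego-motion estimates.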
NASA Astrophysics Data System (ADS)
Kim, Min Young; Cho, Hyung Suck; Kim, Jae H.
2002-10-01
In recent years, intelligent autonomous mobile robots have drawn tremendous interest, whether as service robots serving humans or as industrial robots replacing them. To carry out their tasks, robots must be able to sense and recognize the 3D space in which they live or work. In this paper, we deal with a 3D sensing system for environment recognition by mobile robots. Structured lighting is utilized for the 3D visual sensor system because of its robustness to the nature of the navigation environment and the easy extraction of the feature information of interest. The proposed sensing system is a trinocular vision system composed of a flexible multi-stripe laser projector and two cameras. The principle of extracting the 3D information is the optical triangulation method. By modeling the projector as another camera and using the epipolar constraints among all three cameras, the point-to-point correspondence between the line feature points in each image is established. In this work, the principle of this sensor is described in detail, and a series of experimental tests is performed to show the simplicity, efficiency, and accuracy of the sensor system for 3D environment sensing and recognition.
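The optical triangulation principle at the core of such sensors reduces, in the rectified two-view case, to the standard relation Z = f*b/d; a sketch (a simplification of the trinocular setup, with made-up numbers):

```python
def triangulate_depth(disparity_px, focal_px, baseline_m):
    """Depth from the rectified triangulation relation Z = f * b / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative values: 700-pixel focal length, 12 cm baseline.
z = triangulate_depth(14.0, 700.0, 0.12)  # -> 6.0 m
```

The multi-stripe projector plays the role of one more "camera" whose rays, intersected with the real cameras' rays under the epipolar constraints, yield the same kind of depth estimate per stripe point.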
A visualization environment for supercomputing-based applications in computational mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pavlakos, C.J.; Schoof, L.A.; Mareda, J.F.
1993-06-01
In this paper, we characterize a visualization environment that has been designed and prototyped for a large community of scientists and engineers, with an emphasis on supercomputing-based computational mechanics. The proposed environment makes use of a visualization server concept to provide effective, interactive visualization to the user's desktop. Benefits of the visualization server approach are discussed, along with some thoughts on desirable features for visualization server hardware architectures. A brief discussion of the software environment is included. The paper concludes by summarizing our observations on implementing such visualization environments.
Research of cartographer laser SLAM algorithm
NASA Astrophysics Data System (ADS)
Xu, Bo; Liu, Zhengjun; Fu, Yiran; Zhang, Changsai
2017-11-01
Because indoor spaces are relatively closed and small, total stations, GPS, and close-range photogrammetry have difficulty accomplishing fast and accurate indoor three-dimensional reconstruction. LIDAR SLAM technology does not rely on a priori knowledge of the external environment: using only portable sensors such as a lidar, an IMU, and an odometer, it builds a map of the environment independently, which solves this problem well. This paper analyzes the Google Cartographer laser SLAM algorithm in terms of point cloud matching and closed-loop detection. Finally, the algorithm's results are presented in the 3D visualization tool RViz, covering the process from data acquisition and processing to creation of the environment map, completing the SLAM workflow and realizing indoor three-dimensional space reconstruction.
Risk Identification and Visualization in a Concurrent Engineering Team Environment
NASA Technical Reports Server (NTRS)
Hihn, Jairus; Chattopadhyay, Debarati; Shishko, Robert
2010-01-01
Incorporating risk assessment into the dynamic environment of a concurrent engineering team requires rapid response and adaptation. Generating consistent risk lists with inputs from all the relevant subsystems, and presenting the results clearly to stakeholders in a concurrent engineering environment, is difficult because of the speed with which decisions are made. In this paper we describe the various approaches and techniques that have been explored for the point designs of JPL's Team X and the Trade Space Studies of the Rapid Mission Architecture Team. The paper also focuses on the recurring misuse of categorical and ordinal data in current engineering risk approaches and in the applied risk literature.
NASA Technical Reports Server (NTRS)
Walatka, Pamela P.; Clucas, Jean; McCabe, R. Kevin; Plessel, Todd; Potter, R.; Cooper, D. M. (Technical Monitor)
1994-01-01
The Flow Analysis Software Toolkit, FAST, is a software environment for visualizing data. FAST is a collection of separate programs (modules) that run simultaneously and allow the user to examine the results of numerical and experimental simulations. The user can load data files, perform calculations on the data, visualize the results of these calculations, construct scenes of 3D graphical objects, and plot, animate and record the scenes. Computational Fluid Dynamics (CFD) visualization is the primary intended use of FAST, but FAST can also assist in the analysis of other types of data. FAST combines the capabilities of such programs as PLOT3D, RIP, SURF, and GAS into one environment with modules that share data. Sharing data between modules eliminates the drudgery of transferring data between programs. All the modules in the FAST environment have a consistent, highly interactive graphical user interface. Most commands are entered by pointing and clicking. The modular construction of FAST makes it flexible and extensible. The environment can be custom configured and new modules can be developed and added as needed. The following modules have been developed for FAST: VIEWER, FILE IO, CALCULATOR, SURFER, TOPOLOGY, PLOTTER, TITLER, TRACER, ARCGRAPH, GQ, SURFERU, SHOTET, and ISOLEVU. A utility is also included to make the inclusion of user defined modules in the FAST environment easy. The VIEWER module is the central control for the FAST environment. From VIEWER, the user can change object attributes, interactively position objects in three-dimensional space, define and save scenes, create animations, spawn new FAST modules, add additional view windows, and save and execute command scripts. The FAST User Guide uses text and FAST MAPS (graphical representations of the entire user interface) to guide the user through the use of FAST.
Chapters include: Maps, Overview, Tips, Getting Started Tutorial, a separate chapter for each module, file formats, and system administration.
Ground-Based Robotic Sensing of an Agricultural Sub-Canopy Environment
NASA Astrophysics Data System (ADS)
Burns, A.; Peschel, J.
2015-12-01
Airborne remote sensing is a useful method for measuring agricultural crop parameters over large areas; however, the approach becomes limited to above-canopy characterization as a crop matures due to reduced visual access of the sub-canopy environment. During the growth cycle of an agricultural crop, such as soybeans, the micrometeorology of the sub-canopy environment can significantly impact pod development, and reduced yields may result. Larger-scale environmental conditions aside, the physical structure and configuration of the sub-canopy matrix will logically influence local climate conditions for a single plant; understanding the state and development of the sub-canopy could inform crop models and improve best practices, but there are currently no low-cost methods to quantify the sub-canopy environment at a high spatial and temporal resolution over an entire growth cycle. This work describes the modification of a small tactical and semi-autonomous, ground-based robotic platform with sensors capable of mapping the physical structure of an agricultural row crop sub-canopy; a soybean crop is used as a case study. Point cloud data representing the sub-canopy structure are stored in LAS format and can be used for modeling and visualization in standard GIS software packages.
Mishra, Ajay; Aloimonos, Yiannis
2009-01-01
The human visual system observes and understands a scene/image by making a series of fixations. Every fixation point lies inside a particular region of arbitrary shape and size in the scene, which can be either an object or just a part of one. We define as a basic segmentation problem the task of segmenting the region containing the fixation point. Segmenting the region containing the fixation is equivalent to finding the enclosing contour (a connected set of boundary edge fragments in the edge map of the scene) around the fixation. This enclosing contour should be a depth boundary. We present here a novel algorithm that finds this bounding contour and achieves the segmentation of one object, given the fixation. The proposed segmentation framework combines monocular cues (color/intensity/texture) with stereo and/or motion, in a cue-independent manner. The semantic robots of the immediate future will be able to use this algorithm to automatically find objects in any environment. The capability of automatically segmenting objects in their visual field can bring visual processing to the next level. Our approach differs from current approaches: while existing work attempts to segment the whole scene at once into many areas, we segment only one image region, specifically the one containing the fixation point. Experiments with real imagery collected by our active robot and from known databases demonstrate the promise of the approach.
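As a toy illustration of segmenting the region that contains the fixation (not the paper's actual cue-combination algorithm), one can flood-fill outward from the fixation point in a binary edge map, stopping at edge pixels; the filled region's border plays the role of the enclosing contour:

```python
from collections import deque

def segment_from_fixation(edge_map, fixation):
    """Flood-fill outward from `fixation`, stopping at edge pixels.
    `edge_map` is a 2D list of 0 (free) / 1 (edge).  Returns the set of
    (row, col) cells in the region containing the fixation."""
    h, w = len(edge_map), len(edge_map[0])
    seen = {fixation}
    queue = deque([fixation])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in seen \
                    and edge_map[nr][nc] == 0:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen

# A 5x5 scene with a closed square contour around the center:
edges = [[0, 0, 0, 0, 0],
         [0, 1, 1, 1, 0],
         [0, 1, 0, 1, 0],
         [0, 1, 1, 1, 0],
         [0, 0, 0, 0, 0]]
region = segment_from_fixation(edges, (2, 2))  # fixation inside the square
# region contains only the single interior cell (2, 2)
```

Fixating inside the square yields just its interior; fixating outside yields the surrounding background, which is exactly the fixation-dependence the abstract emphasizes.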
Piecewise-Planar StereoScan: Sequential Structure and Motion using Plane Primitives.
Raposo, Carolina; Antunes, Michel; P Barreto, Joao
2017-08-09
The article describes a pipeline that receives as input a sequence of stereo images, and outputs the camera motion and a Piecewise-Planar Reconstruction (PPR) of the scene. The pipeline, named Piecewise-Planar StereoScan (PPSS), works as follows: the planes in the scene are detected for each stereo view using semi-dense depth estimation; the relative pose is computed by a new closed-form minimal algorithm that only uses point correspondences whenever plane detections do not fully constrain the motion; the camera motion and the PPR are jointly refined by alternating between discrete optimization and continuous bundle adjustment; and, finally, the detected 3D planes are segmented in images using a new framework that handles low texture and visibility issues. PPSS is extensively validated in indoor and outdoor datasets, and benchmarked against two popular point-based SfM pipelines. The experiments confirm that plane-based visual odometry is resilient to situations of small image overlap, poor texture, specularity, and perceptual aliasing where the fast LIBVISO2 pipeline fails. The comparison against VisualSfM+CMVS/PMVS shows that, for a similar computational complexity, PPSS is more accurate and provides much more compelling and visually pleasant 3D models. These results strongly suggest that plane primitives are an advantageous alternative to point correspondences for applications of SfM and 3D reconstruction in man-made environments.
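Plane detection from semi-dense depth ultimately rests on fitting planes to 3D points; a minimal least-squares plane fit (an illustrative sketch, not the PPSS detector itself) takes the normal as the singular vector of the centered points with the smallest singular value:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an (n, 3) point cloud.
    Returns (unit normal, centroid); the normal is the right singular
    vector associated with the smallest singular value."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return vt[-1], centroid  # numpy orders singular values descending

# Points lying on the plane z = 2 (normal expected along +/- z):
pts = np.array([[0, 0, 2], [1, 0, 2], [0, 1, 2], [1, 1, 2], [2, 3, 2]],
               dtype=float)
normal, c = fit_plane(pts)
```

A RANSAC loop over such fits would yield the per-view plane detections that the pipeline then registers across stereo views.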
StreamMap: Smooth Dynamic Visualization of High-Density Streaming Points.
Li, Chenhui; Baciu, George; Han, Yu
2018-03-01
Interactive visualization of streaming points for real-time scatterplots and linear blending of correlation patterns is increasingly becoming the dominant mode of visual analytics for both big data and streaming data from active sensors and broadcasting media. To better visualize and interact with inter-stream patterns, it is generally necessary to smooth out gaps or distortions in the streaming data. Previous approaches either animate the points directly or present a sampled static heat-map. We propose a new approach, called StreamMap, to smoothly blend high-density streaming points and create a visual flow that emphasizes the density pattern distributions. In essence, we present three new contributions for the visualization of high-density streaming points. The first contribution is a density-based method called super kernel density estimation that aggregates streaming points using an adaptive kernel to solve the overlapping problem. The second contribution is a robust density morphing algorithm that generates several smooth intermediate frames for a given pair of frames. The third contribution is a trend representation design that can help convey the flow directions of the streaming points. The experimental results on three datasets demonstrate the effectiveness of StreamMap when dynamic visualization and visual analysis of trend patterns on streaming points are required.
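The density-aggregation idea can be sketched with a fixed-bandwidth Gaussian kernel (a toy stand-in for the paper's adaptive "super kernel density estimation"; grid size and bandwidth are made-up values):

```python
import numpy as np

def density_grid(points, grid_size=64, bandwidth=0.05):
    """Aggregate 2D points in the unit square into a density image by
    summing an isotropic Gaussian kernel centered on each point."""
    ax = (np.arange(grid_size) + 0.5) / grid_size  # cell centers
    gx, gy = np.meshgrid(ax, ax)
    img = np.zeros((grid_size, grid_size))
    for px, py in points:
        img += np.exp(-((gx - px)**2 + (gy - py)**2) / (2 * bandwidth**2))
    # normalize so the image integrates (approximately) to 1
    return img / (2 * np.pi * bandwidth**2 * len(points))

img = density_grid(np.array([[0.3, 0.3], [0.7, 0.7]]))
# density peaks near the two input points
```

StreamMap's morphing step would then interpolate between successive such density frames to produce the smooth visual flow described above.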
Realization of ActiveX control based on ATL in VC 2008
NASA Astrophysics Data System (ADS)
Li, Shuhua; Tie, Yong
2011-10-01
ActiveX plays a key role in web development. This paper implements the classic Polygon program in the Visual C++ 2008 environment and tests each function of the control in the ActiveX Control Test Container. Web code is then created rapidly via ActiveX Control Pad and verified in HTML. The development process and key points of attention are summarized systematically to guide related developers.
Simulator Study of Helmet-Mounted Symbology System Concepts in Degraded Visual Environments.
Cheung, Bob; McKinley, Richard A; Steels, Brad; Sceviour, Robert; Cosman, Vaughn; Holst, Peter
2015-07-01
A sudden loss of external visual cues during critical phases of flight results in spatial disorientation. This is due to undetected horizontal and vertical drift when there is little tolerance for error and correction delay as the helicopter is close to the ground. Three helmet-mounted symbology system concepts were investigated in the simulator as potential solutions for the legacy Griffon helicopters. Thirteen Royal Canadian Air Force (RCAF) Griffon pilots were exposed to the Helmet Display Tracking System for Degraded Visual Environments (HDTS), the BrownOut Symbology System (BOSS), and the current RCAF AVS7 symbology system. For each symbology system, the pilot performed a two-stage departure and a single-stage approach. The presentation order of the symbology systems was randomized. Objective performance metrics included aircraft speed, altitude, attitude, and distance from the landing point. Subjective measurements included situation awareness, mental effort, perceived performance, perceptual cue rating, and NASA Task Load Index. Repeated measures analysis of variance and subsequent planned comparison for all the objective and subjective measurements were performed between the AVS7, HDTS, and BOSS. Our results demonstrated that HDTS and BOSS showed general improvement over AVS7 in two-stage departure. However, only HDTS performed significantly better in heading error than AVS7. During the single-stage approach, BOSS performed worse than AVS7 in heading root mean square error, and only HDTS performed significantly better in distance to landing point and approach heading than the others. Both the HDTS and BOSS possess their own limitations; however, HDTS is the pilots' preferred flight display.
An Approach of Web-based Point Cloud Visualization without Plug-in
NASA Astrophysics Data System (ADS)
Ye, Mengxuan; Wei, Shuangfeng; Zhang, Dongmei
2016-11-01
With the advances in three-dimensional laser scanning technology, the demand for visualization of massive point clouds is increasingly urgent. Until the introduction of WebGL, point cloud visualization was limited to desktop-based solutions; several web renderers are now available. This paper addresses the current issues in web-based point cloud visualization and proposes a method that requires no plug-in. The method combines ASP.NET and WebGL technologies, using the spatial database PostgreSQL to store data and the open web technologies HTML5 and CSS3 to implement the user interface; an online visualization system for 3D point clouds is developed in JavaScript with web interactions. Finally, the method is applied to a real case. Experiments show that the new model is of great practical value and avoids the shortcomings of existing WebGIS solutions.
Bloch, Natasha I.; Morrow, James M.; Chang, Belinda S.W.; Price, Trevor D.
2014-01-01
Distantly related clades that occupy similar environments may differ due to the lasting imprint of their ancestors – historical contingency. The New World warblers (Parulidae) and Old World warblers (Phylloscopidae) are ecologically similar clades that differ strikingly in plumage coloration. We studied genetic and functional evolution of the short-wavelength sensitive visual pigments (SWS2 and SWS1) to ask if altered color perception could contribute to the plumage color differences between clades. We show SWS2 is short-wavelength shifted in birds that occupy open environments, such as finches, compared to those in closed environments, including warblers. Phylogenetic reconstructions indicate New World warblers were derived from a finch-like form that colonized from the Old World 15-20 Ma. During this process the SWS2 gene accumulated six substitutions in branches leading to New World warblers, inviting the hypothesis that passage through a finch-like ancestor resulted in SWS2 evolution. In fact, we show spectral tuning remained similar across warblers as well as the finch ancestor. Results reject the hypothesis of historical contingency based on opsin spectral tuning, but point to evolution of other aspects of visual pigment function. Using the approach outlined here, historical contingency becomes a generally testable theory in systems where genotype and phenotype can be connected. PMID:25496318
Imaging laser radar for high-speed monitoring of the environment
NASA Astrophysics Data System (ADS)
Froehlich, Christoph; Mettenleiter, M.; Haertl, F.
1998-01-01
In order to establish mobile robot operations and to carry out survey and inspection tasks, robust and precise measurement of the geometry of the 3D environment is the basic sensing requirement. For visual inspection, surface classification, and documentation purposes, however, additional information on the reflectance of measured objects is necessary. High-speed acquisition of both geometric and visual information is achieved by means of an active laser radar supporting consistent range and reflectance images. The laser radar developed at Zoller + Froehlich (ZF) is an optical-wavelength system measuring the range between sensor and target surface as well as the reflectance of the target surface, which corresponds to the magnitude of the backscattered laser energy. In contrast to other range-sensing devices, the ZF system is designed for high-speed, high-performance operation in real indoor and outdoor environments, emitting a minimum of near-IR laser energy. It integrates a single-point laser measurement system and a mechanical deflection system for 3D environmental measurements. This paper reports details of the laser radar, which is designed to cover medium-range applications. It outlines the performance requirements and introduces the two-frequency phase-shift measurement principle. The hardware design of the single-point laser measurement system, including the main modules such as the laser head, the high-frequency unit, and the signal-processing unit, is discussed in detail. The paper focuses on performance data of the laser radar, including noise, drift over time, precision, and accuracy of the measurements. It discusses the influences of ambient light, target surface material, and ambient temperature on range accuracy and precision. Furthermore, experimental results from the inspection of tunnels, buildings, monuments, and industrial environments are presented.
The paper concludes by summarizing results and gives a short outlook to future work.
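The two-frequency phase-shift principle can be illustrated numerically: the fine modulation tone gives a precise but wrap-ambiguous range, and the coarse tone selects the correct wrap. The frequencies below are illustrative, not the ZF sensor's actual values:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def phase(r, f):
    """Measured phase (radians, wrapped to [0, 2*pi)) for round-trip
    range r at modulation frequency f."""
    return (4 * np.pi * f * r / C) % (2 * np.pi)

def range_from_phases(phi_fine, phi_coarse, f_fine, f_coarse):
    """Resolve the fine tone's wrap ambiguity with the coarse tone."""
    u_fine = C / (2 * f_fine)                  # unambiguous fine interval
    r_fine = phi_fine * C / (4 * np.pi * f_fine)
    r_coarse = phi_coarse * C / (4 * np.pi * f_coarse)
    n = round((r_coarse - r_fine) / u_fine)    # number of whole wraps
    return r_fine + n * u_fine

f_fine, f_coarse = 10e6, 100e3  # ~15 m and ~1500 m unambiguous ranges
true_r = 37.3
est = range_from_phases(phase(true_r, f_fine), phase(true_r, f_coarse),
                        f_fine, f_coarse)
```

The fine tone alone would report about 7.3 m (37.3 m modulo its ~15 m interval); combining it with the coarse phase recovers the full range while keeping the fine tone's precision.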
A pose estimation method for unmanned ground vehicles in GPS denied environments
NASA Astrophysics Data System (ADS)
Tamjidi, Amirhossein; Ye, Cang
2012-06-01
This paper presents a pose estimation method based on the 1-Point RANSAC EKF (Extended Kalman Filter) framework. The method fuses depth data from a LIDAR and visual data from a monocular camera to estimate the pose of an Unmanned Ground Vehicle (UGV) in a GPS-denied environment. The estimation framework continuously updates the vehicle's 6D pose state and temporary estimates of the extracted visual features' 3D positions. In contrast to conventional EKF-SLAM (Simultaneous Localization And Mapping) frameworks, the proposed method discards feature estimates from the extended state vector once they are no longer observed for several steps. As a result, the extended state vector always maintains a reasonable size that is suitable for online calculation. The fusion of laser and visual data is performed both in the feature initialization part of the EKF-SLAM process and in the motion prediction stage. A RANSAC pose calculation procedure is devised to produce a pose estimate for the motion model. The proposed method has been successfully tested on the Ford campus LIDAR-vision dataset. The results are compared with the dataset's ground truth, and the estimation error is ~1.9% of the path length.
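The state-pruning bookkeeping described above (dropping features unobserved for several steps so the filter state stays small) can be sketched as follows; the 3-state feature blocks, 6-state pose block, and age threshold are assumptions for illustration:

```python
import numpy as np

def prune_features(x, P, last_seen, step, max_age=5, pose_dim=6):
    """Drop feature blocks (3 states each, stored after the pose) that
    have not been observed within `max_age` steps, shrinking both the
    state vector x and the covariance P consistently.
    Returns (pruned x, pruned P, indices of surviving features)."""
    keep = list(range(pose_dim))  # always keep the 6-DoF pose states
    kept_ids = []
    for i, seen in enumerate(last_seen):
        if step - seen <= max_age:
            start = pose_dim + 3 * i
            keep.extend(range(start, start + 3))
            kept_ids.append(i)
    keep = np.array(keep)
    return x[keep], P[np.ix_(keep, keep)], kept_ids

# Pose plus three features; feature 1 was last seen 10 steps ago.
x = np.arange(15, dtype=float)
P = np.eye(15)
x2, P2, ids = prune_features(x, P, last_seen=[20, 10, 19], step=20)
```

Keeping the covariance rows and columns aligned with the surviving states is the essential detail; pruning only the mean vector would corrupt later EKF updates.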
3D Virtual Environment Used to Support Lighting System Management in a Building
NASA Astrophysics Data System (ADS)
Sampaio, A. Z.; Ferreira, M. M.; Rosário, D. P.
The main aim of the research project in progress at the UTL is to develop a virtual interactive model as a tool to support decision-making in the planning of construction maintenance and facilities management. The virtual model makes it possible to transmit to the user, visually and interactively, information related to the components of a building, defined as a function of the time variable. In addition, the analysis of solutions for repair work/substitution and the inherent cost are predicted, the results being obtained interactively and visualized in the virtual environment itself. The first component of the virtual prototype concerns the management of lamps in a lighting system, and it was applied in a case study. The interactive application allows the examination of the physical model, visualizing, for each element modeled in 3D and linked to a database, the corresponding technical information concerned with the use of the material, calculated for different points in time during its life. The control of a lamp stock, the constant updating of lifetime information, and the planning of periodic local inspections are handled by the prototype. This is an important means of cooperation between the collaborators involved in building management.
Using a virtual world for robot planning
NASA Astrophysics Data System (ADS)
Benjamin, D. Paul; Monaco, John V.; Lin, Yixia; Funk, Christopher; Lyons, Damian
2012-06-01
We are building a robot cognitive architecture that constructs a real-time virtual copy of itself and its environment, including people, and uses the model to process perceptual information and to plan its movements. This paper describes the structure of this architecture. The software components of this architecture include PhysX for the virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture that controls the perceptual processing and task planning. The RS (Robot Schemas) language is implemented in Soar, providing the ability to reason about concurrency and time. This Soar/RS component controls visual processing, deciding which objects and dynamics to render into PhysX, and the degree of detail required for the task. As the robot runs, its virtual model diverges from physical reality, and errors grow. The Match-Mediated Difference component monitors these errors by comparing the visual data with corresponding data from virtual cameras, and notifies Soar/RS of significant differences, e.g. a new object that appears, or an object that changes direction unexpectedly. Soar/RS can then run PhysX much faster than real-time and search among possible future world paths to plan the robot's actions. We report experimental results in indoor environments.
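The Match-Mediated Difference idea (comparing real camera frames against renders from corresponding virtual cameras and flagging divergence) can be sketched as a simple mean-absolute-error check; the threshold and frame sizes are assumptions, not the architecture's actual parameters:

```python
import numpy as np

def significant_difference(real_img, virtual_img, thresh=0.1):
    """Compare a real camera frame with the matching virtual-camera
    render; report whether the mean absolute pixel error exceeds the
    threshold, signalling that the virtual model has diverged."""
    err = np.abs(real_img.astype(float) - virtual_img.astype(float)).mean()
    return err > thresh, err

real = np.zeros((4, 4))
virtual = np.zeros((4, 4))
virtual[0, 0] = 8.0  # a new "object" appears in one cell of the render
diverged, err = significant_difference(real, virtual)
```

In the architecture described above, such a flag is what would notify Soar/RS to re-render the offending object into PhysX before planning resumes.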
2017-04-01
Advanced Visualization and Interactive Display Rapid Innovation and Discovery Evaluation Research (VISRIDER) Program, Task 6: Point Cloud Visualization (report period Oct 2013 - Sep 2014). The report evaluates various point cloud visualization techniques for viewing large-scale LiDAR datasets and assesses their potential use on thick-client desktop platforms.
Moles: Tool-Assisted Environment Isolation with Closures
NASA Astrophysics Data System (ADS)
de Halleux, Jonathan; Tillmann, Nikolai
Isolating test cases from environment dependencies is often desirable, as it increases test reliability and reduces test execution time. However, code that calls non-virtual methods or consumes sealed classes is often impossible to test in isolation. Moles is a new lightweight framework which addresses this problem. For any .NET method, Moles allows test-code to provide alternative implementations, given as .NET delegates, for which C# provides very concise syntax while capturing local variables in a closure object. Using code instrumentation, the Moles framework will redirect calls to provided delegates instead of the original methods. The Moles framework is designed to work together with the dynamic symbolic execution tool Pex to enable automated test generation. In a case study, testing code programmed against the Microsoft SharePoint Foundation API, we achieved full code coverage while running tests in isolation without an actual SharePoint server. The Moles framework integrates with .NET and Visual Studio.
Maidenbaum, Shachar; Levy-Tzedek, Shelly; Chebat, Daniel-Robert; Amedi, Amir
2013-01-01
Virtual worlds and environments are becoming an increasingly central part of our lives, yet they are still far from accessible to the blind. This is especially unfortunate, as such environments hold great potential for uses such as social interaction and online education, and especially for familiarizing the visually impaired user with a real environment virtually, from the comfort and safety of his own home, before visiting it in the real world. We have implemented a simple algorithm to improve this situation using single-point depth information, enabling the blind to use a virtual cane, modeled on the “EyeCane” electronic travel aid, within any virtual environment with minimal pre-processing. Use of the Virtual-EyeCane enables this experience to potentially be used later in real-world environments with stimuli identical to those from the virtual environment. We show the fast-learned practical use of this algorithm for navigation in simple environments. PMID:23977316
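A single-point depth cue like the Virtual-EyeCane's can be sketched as a monotone mapping from distance to an auditory cue rate; the range and rates below are made-up illustrations, as the abstract does not specify the device's actual transfer function:

```python
def depth_to_beep_rate(distance_m, max_range_m=5.0,
                       min_hz=2.0, max_hz=40.0):
    """Map a single-point depth reading to a beep rate that rises as an
    obstacle gets closer (hypothetical parameters)."""
    d = min(max(distance_m, 0.0), max_range_m)  # clamp to sensor range
    closeness = 1.0 - d / max_range_m           # 0 = far ... 1 = touching
    return min_hz + closeness * (max_hz - min_hz)

# Nearer obstacles produce faster beeps:
near, far = depth_to_beep_rate(0.5), depth_to_beep_rate(4.5)
```

Because the same mapping can be driven by a virtual ray-cast depth or a real rangefinder, practice in the virtual environment transfers directly, which is the point the abstract makes about identical stimuli.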
Direct manipulation of virtual objects
NASA Astrophysics Data System (ADS)
Nguyen, Long K.
Interacting with a Virtual Environment (VE) generally requires the user to correctly perceive the relative position and orientation of virtual objects. For applications requiring interaction in personal space, the user may also need to accurately judge the position of the virtual object relative to that of a real object, for example, a virtual button and the user's real hand. This is difficult since VEs generally only provide a subset of the cues experienced in the real world. Complicating matters further, VEs presented by currently available visual displays may be inaccurate or distorted due to technological limitations. Fundamental physiological and psychological aspects of vision as they pertain to the task of object manipulation were thoroughly reviewed. Other sensory modalities -- proprioception, haptics, and audition -- and their cross-interactions with each other and with vision are briefly discussed. Visual display technologies, the primary component of any VE, were canvassed and compared. Current applications and research were gathered and categorized by different VE types and object interaction techniques. While object interaction research abounds in the literature, pockets of research gaps remain. Direct, dexterous, manual interaction with virtual objects in Mixed Reality (MR), where the real, seen hand accurately and effectively interacts with virtual objects, has not yet been fully quantified. An experimental test bed was designed to provide the highest accuracy attainable for salient visual cues in personal space. Optical alignment and user calibration were carefully performed. The test bed accommodated the full continuum of VE types and sensory modalities for comprehensive comparison studies. Experimental designs included two sets, each measuring depth perception and object interaction. The first set addressed the extreme end points of the Reality-Virtuality (R-V) continuum -- Immersive Virtual Environment (IVE) and Reality Environment (RE). 
This validated, linked, and extended several previous research findings, using one common test bed and participant pool. The results provided a proven method and solid reference points for further research. The second set of experiments leveraged the first to explore the full R-V spectrum and included additional, relevant sensory modalities. It consisted of two full-factorial experiments providing for rich data and key insights into the effect of each type of environment and each modality on accuracy and timeliness of virtual object interaction. The empirical results clearly showed that mean depth perception error in personal space was less than four millimeters whether the stimuli presented were real, virtual, or mixed. Likewise, mean error for the simple task of pushing a button was less than four millimeters whether the button was real or virtual. Mean task completion time was less than one second. Key to the high accuracy and quick task performance time observed was the correct presentation of the visual cues, including occlusion, stereoscopy, accommodation, and convergence. With performance results already near optimal level with accurate visual cues presented, adding proprioception, audio, and haptic cues did not significantly improve performance. Recommendations for future research include enhancement of the visual display and further experiments with more complex tasks and additional control variables.
My Ideal City (mic): Virtual Environments to Design the Future Town
NASA Astrophysics Data System (ADS)
Borgherini, M.; Garbin, E.
2011-09-01
MIC is an EU-funded project that explores the use of shared virtual environments as part of a public discussion on building the city of the future. An interactive exploration of four European cities, whose digital models translate urban places, everyday problems, and citizens' wishes, offers a chance to see the cities in different ways and from different points of view, and to imagine new scenarios that overcome barriers and outdated stereotypes. This paper describes the process from data to visualization of the virtual cities and, in detail, the design of two interactive digital models (Trento and Lisbon).
NASA Technical Reports Server (NTRS)
Bergeron, H. P.; Haynie, A. T.; Mcdede, J. B.
1980-01-01
A general aviation single-pilot instrument flight rules (IFR) simulation capability was developed, and problems experienced by single pilots flying in IFR conditions were investigated. The simulation required a three-dimensional spatial navaid environment of a flight navigational area. A computer simulation of all the navigational aids plus 12 selected airports located in the Washington/Norfolk area was developed. All programmed locations were referenced to a Cartesian coordinate system with the origin located at a specified airport's reference point. All navigational aids with their associated frequencies, call letters, locations, and orientations, plus runways and true headings, are included in the database. The simulation included a TV-displayed out-the-window visual scene of country and suburban terrain and a scaled model runway complex. Any of the programmed runways, with all of its associated navaids, can be referenced to a runway on the airport in this visual scene. This allows simulation of a full mission scenario, including breakout and landing.
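The abstract's coordinate scheme, in which every navaid is referenced to a Cartesian origin at an airport's reference point, can be sketched with a flat-earth projection. This is a minimal illustration under stated assumptions (mean-Earth-radius equirectangular approximation, hypothetical airport and navaid coordinates), not the simulator's actual code:

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius

def geodetic_to_local_xy(lat_deg, lon_deg, origin_lat_deg, origin_lon_deg):
    """Flat-earth (equirectangular) projection of latitude/longitude onto a
    local Cartesian plane whose origin is an airport's reference point.
    Adequate over the limited span of a terminal area."""
    lat0 = math.radians(origin_lat_deg)
    x_east = EARTH_RADIUS_M * math.radians(lon_deg - origin_lon_deg) * math.cos(lat0)
    y_north = EARTH_RADIUS_M * math.radians(lat_deg - origin_lat_deg)
    return x_east, y_north  # metres east/north of the reference point

# Hypothetical navaid 0.1 degrees east of a hypothetical origin airport:
x, y = geodetic_to_local_xy(37.0, -76.9, 37.0, -77.0)
```

Every navaid, runway, and airport record can then be stored directly in these local (x, y) coordinates.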
The research of autonomous obstacle avoidance of mobile robot based on multi-sensor integration
NASA Astrophysics Data System (ADS)
Zhao, Ming; Han, Baoling
2016-11-01
The object of this study is a bionic quadruped mobile robot. We propose a system design for mobile-robot obstacle avoidance that integrates a binocular stereo vision sensor and a self-built 3D lidar with a modified ant colony optimization (ACO) path planner to reconstruct an environmental map. Because a mobile robot's working conditions are complex, 3D reconstruction from a single binocular sensor is unreliable when feature points are few and lighting is poor. This system therefore fuses the Bumblebee2 stereo vision sensor and the lidar sensor to detect 3D point cloud information about environmental obstacles. We propose a sensor information fusion approach to rebuild the environment map: obstacles are first detected separately from the lidar data and the visual data, and the two resulting obstacle distributions are then fused to obtain a more complete and accurate distribution of obstacles in the scene. The thesis then introduces the ant colony algorithm, analyzes its advantages and disadvantages and their underlying causes in depth, and improves it to increase the convergence rate and precision of robot path planning. These improvements and integrations overcome shortcomings of ACO such as easily falling into local optima, slow search speed, and poor search results. The experiment processes images and drives the motors under Matlab and Visual Studio, establishes a 2.5D visual grid map, and finally plans a global path for the mobile robot according to the improved ant colony algorithm. The feasibility and effectiveness of the system are confirmed with ROS and a Linux simulation platform.
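The abstract does not specify the modified planner itself, but the underlying ant colony idea can be sketched on an occupancy grid. This is a generic, toy ACO (all parameters illustrative; not the thesis's modified algorithm): ants take pheromone-weighted self-avoiding walks, and shorter successful paths deposit more pheromone.

```python
import random

def aco_grid_path(grid, start, goal, n_ants=30, n_iters=40,
                  evaporation=0.5, seed=0):
    """Toy ant colony optimizer on a 4-connected occupancy grid
    (grid[r][c] == 1 marks an obstacle). Returns the best path found."""
    rng = random.Random(seed)
    rows, cols = len(grid), len(grid[0])
    tau = {}                      # pheromone per directed edge (default 1.0)
    best = None
    for _ in range(n_iters):
        successful = []
        for _ in range(n_ants):
            cell, path, seen = start, [start], {start}
            while cell != goal:
                r, c = cell
                moves = [(r + dr, c + dc)
                         for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                         if 0 <= r + dr < rows and 0 <= c + dc < cols
                         and grid[r + dr][c + dc] == 0
                         and (r + dr, c + dc) not in seen]
                if not moves:
                    break                       # dead end: abandon this ant
                weights = [tau.get((cell, m), 1.0) for m in moves]
                cell = rng.choices(moves, weights=weights)[0]
                path.append(cell)
                seen.add(cell)
            if cell == goal:
                successful.append(path)
                if best is None or len(path) < len(best):
                    best = path
        tau = {e: (1.0 - evaporation) * t for e, t in tau.items()}  # evaporate
        for path in successful:                 # deposit more on shorter paths
            for edge in zip(path, path[1:]):
                tau[edge] = tau.get(edge, 1.0) + 1.0 / len(path)
    return best

# A small maze: the planner should thread the free cells from corner to corner.
demo_grid = [[0, 0, 0, 0, 0],
             [0, 1, 1, 1, 0],
             [0, 0, 0, 1, 0],
             [1, 1, 0, 1, 0],
             [0, 0, 0, 0, 0]]
demo_path = aco_grid_path(demo_grid, (0, 0), (4, 4))
```

The thesis's improvements (faster convergence, avoiding local optima) would replace the uniform deposit and evaporation rules above with tuned heuristics.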
Flies and humans share a motion estimation strategy that exploits natural scene statistics
Clark, Damon A.; Fitzgerald, James E.; Ales, Justin M.; Gohl, Daryl M.; Silies, Marion A.; Norcia, Anthony M.; Clandinin, Thomas R.
2014-01-01
Sighted animals extract motion information from visual scenes by processing spatiotemporal patterns of light falling on the retina. The dominant models for motion estimation exploit intensity correlations only between pairs of points in space and time. Moving natural scenes, however, contain more complex correlations. Here we show that fly and human visual systems encode the combined direction and contrast polarity of moving edges using triple correlations that enhance motion estimation in natural environments. Both species extract triple correlations with neural substrates tuned for light or dark edges, and sensitivity to specific triple correlations is retained even as light and dark edge motion signals are combined. Thus, both species separately process light and dark image contrasts to capture motion signatures that can improve estimation accuracy. This striking convergence argues that statistical structures in natural scenes have profoundly affected visual processing, driving a common computational strategy over 500 million years of evolution. PMID:24390225
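A small numerical sketch shows what a triple correlation adds over pairwise correlators. The specific offsets and the step-edge stimulus below are illustrative assumptions, not the correlators measured in the paper: a moving light edge and a moving dark edge have identical pairwise statistics, but the odd-order triple correlation flips sign with contrast polarity.

```python
import numpy as np

def triple_correlation(stim, dx, dt1, dt2):
    """Illustrative third-order correlator <s(x,t) s(x+dx,t+dt1) s(x,t+dt2)>,
    averaged over space and time. `stim` is indexed [t, x] and holds
    mean-subtracted contrast values."""
    T, X = stim.shape
    span = max(dt1, dt2)
    a = stim[:T - span, :X - dx]
    b = stim[dt1:T - span + dt1, dx:]
    c = stim[dt2:T - span + dt2, :X - dx]
    return float(np.mean(a * b * c))

# Rightward-moving light edge vs. its contrast-inverted (dark) twin:
T, X, v = 40, 60, 1
t, x = np.mgrid[0:T, 0:X]
light_edge = (x - v * t > X // 2).astype(float) - 0.5
dark_edge = -light_edge
tc_light = triple_correlation(light_edge, 1, 1, 2)
tc_dark = triple_correlation(dark_edge, 1, 1, 2)
```

Because (-s)(-s') = ss', any pairwise correlator gives the same answer for both stimuli; only the third-order statistic distinguishes light-edge from dark-edge motion.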
A spatially collocated sound thrusts a flash into awareness
Aller, Máté; Giani, Anette; Conrad, Verena; Watanabe, Masataka; Noppeney, Uta
2015-01-01
To interact effectively with the environment, the brain integrates signals from multiple senses. It is currently unclear to what extent spatial information can be integrated across different senses in the absence of awareness. Combining dynamic continuous flash suppression (CFS) and spatial audiovisual stimulation, the current study investigated whether a sound facilitates a concurrent visual flash to elude flash suppression and enter perceptual awareness depending on audiovisual spatial congruency. Our results demonstrate that a concurrent sound boosts unaware visual signals into perceptual awareness. Critically, this process depended on the spatial congruency of the auditory and visual signals, pointing towards low-level mechanisms of audiovisual integration. Moreover, the concurrent sound biased the reported location of the flash as a function of flash visibility. The spatial bias of sounds on reported flash location was strongest for flashes that were judged invisible. Our results suggest that multisensory integration is a critical mechanism that enables signals to enter conscious perception. PMID:25774126
Spoerer, Courtney J; Eguchi, Akihiro; Stringer, Simon M
2016-02-01
In order to develop transformation-invariant representations of objects, the visual system must make use of constraints placed upon object transformation by the environment. For example, objects transform continuously from one point to another in both space and time. These two constraints have been exploited separately in order to develop translation and view invariance in a hierarchical multilayer model of the primate ventral visual pathway in the form of continuous transformation learning and temporal trace learning. We show for the first time that these two learning rules can work cooperatively in the model. Using these two learning rules together can support the development of invariance in cells and help maintain object selectivity when stimuli are presented over a large number of locations or when trained separately over a large number of viewing angles. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Pictorial depth probed through relative sizes
Wagemans, Johan; van Doorn, Andrea J; Koenderink, Jan J
2011-01-01
In the physical environment familiar size is an effective depth cue because the distance from the eye to an object equals the ratio of its physical size to its angular extent in the visual field. Such simple geometrical relations do not apply to pictorial space, since the eye itself is not in pictorial space, and consequently the notion “distance from the eye” is meaningless. Nevertheless, relative size in the picture plane is often used by visual artists to suggest depth differences. The depth domain has no natural origin, nor a natural unit; thus only ratios of depth differences could have an invariant significance. We investigate whether the pictorial relative size cue yields coherent depth structures in pictorial spaces. Specifically, we measure the depth differences for all pairs of points in a 20-point configuration in pictorial space, and we account for these observations through 19 independent parameters (the depths of the points modulo an arbitrary offset), with no meaningful residuals. We discuss a simple formal framework that allows one to handle individual differences. We also compare the depth scale obtained by way of this method with depth scales obtained in totally different ways, finding generally good agreement. PMID:23145258
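The geometrical relation behind the familiar-size cue, that for small angles distance equals physical size divided by angular size, can be illustrated directly. The values below are assumed for illustration:

```python
import math

def distance_from_familiar_size(physical_size_m, angular_size_rad):
    """Familiar-size depth cue: for small angles, the distance from the eye
    to an object equals the ratio of its physical size to the angle it
    subtends in the visual field."""
    return physical_size_m / angular_size_rad

# A 1.7 m tall person subtending 1 degree of visual angle is ~97 m away.
d = distance_from_familiar_size(1.7, math.radians(1.0))
```

As the abstract notes, this relation holds only for the physical eye; in pictorial space no such absolute scale exists, which is why only relative sizes (and hence ratios of depth differences) can carry invariant meaning.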
Anti-pointing is mediated by a perceptual bias of target location in left and right visual space.
Heath, Matthew; Maraj, Anika; Gradkowski, Ashlee; Binsted, Gordon
2009-01-01
We sought to determine whether mirror-symmetrical limb movements (so-called anti-pointing) elicit a pattern of endpoint bias commensurate with perceptual judgments. In particular, we examined whether asymmetries related to the perceptual over- and under-estimation of target extent in respective left and right visual space impact the trajectories of anti-pointing. In Experiment 1, participants completed direct (i.e. pro-pointing) and mirror-symmetrical (i.e. anti-pointing) responses to targets in left and right visual space with their right hand. In line with the anti-saccade literature, anti-pointing yielded longer reaction times than pro-pointing: a result suggesting increased top-down processing for the sensorimotor transformations underlying a mirror-symmetrical response. Most interestingly, pro-pointing yielded comparable endpoint accuracy in left and right visual space; however, anti-pointing produced an under- and overshooting bias in respective left and right visual space. In Experiment 2, we replicated the findings from Experiment 1 and further demonstrated that the endpoint bias of anti-pointing is independent of the reaching limb (i.e. left vs. right hand) and between-task differences in saccadic drive. We thus propose that the visual field-specific endpoint bias observed here is related to the cognitive (i.e. top-down) nature of anti-pointing and the corollary use of visuo-perceptual networks to support the sensorimotor transformations underlying such actions.
NASA Astrophysics Data System (ADS)
Kreylos, O.; Bawden, G. W.; Kellogg, L. H.
2005-12-01
We are developing a visualization application to display and interact with very large (tens of millions of points) four-dimensional point position datasets in an immersive environment, such that point groups from repeated tripod LiDAR (Light Detection And Ranging) surveys can be selected, measured, and analyzed for land surface change using 3D interactions. Ground-based tripod or terrestrial LiDAR (T-LiDAR) can remotely collect ultra-high resolution (centimeter to subcentimeter) and accurate (±4 mm) digital imagery of the scanned target; at scanning rates of 2,000 (x, y, z, i) (3D position + intensity) points per second, over 7 million points can be collected for a given target in an hour. We developed a multiresolution point set data representation based on octrees to display large T-LiDAR point cloud datasets at the frame rates required for immersive display (between 60 Hz and 120 Hz). Data inside an observer's region of interest is shown in full detail, whereas data outside the field of view or far away from the observer is shown at reduced resolution to provide context. Using 3D input devices at the University of California Davis KeckCAVES, users can navigate large point sets, accurately select related point groups in two or more point sets by sweeping regions of space, and guide the software in deriving positional information from point groups to compute their displacements between surveys. We used this new software application in the KeckCAVES to analyze 4D T-LiDAR imagery from the June 1, 2005 Blue Bird Canyon landslide in Laguna Beach, southern California. Over 50 million (x, y, z, i) data points were collected between 10 and 21 days after the landslide to evaluate T-LiDAR as a natural hazards response tool. The visualization of the T-LiDAR scans within the immediate landslide area showed minor readjustments in the weeks following the main slide, with no observable continued motion on the primary landslide.
Recovery and demolition efforts across the landslide, such as the building of new roads and removal of unstable structures, are easily identified and assessed with the new software through the differencing of aligned imagery.
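The octree-based level-of-detail scheme described above can be sketched in a few lines. This toy version (one centroid per node as the coarse representative, an illustrative distance threshold) is an assumption-laden stand-in for the authors' multiresolution renderer, not their actual data structure:

```python
class OctreeNode:
    """Minimal multiresolution octree over 3D points: each node stores a
    centroid as its coarse representative; leaves store the raw points."""
    def __init__(self, points, center, half, max_leaf=4, depth=0):
        self.center, self.half = center, half
        self.centroid = tuple(sum(p[i] for p in points) / len(points)
                              for i in range(3))
        self.children, self.points = [], points
        if len(points) > max_leaf and depth < 10:
            buckets = {}
            for p in points:
                key = tuple(int(p[i] >= center[i]) for i in range(3))
                buckets.setdefault(key, []).append(p)
            for key, pts in buckets.items():
                c = tuple(center[i] + (half / 2) * (1 if key[i] else -1)
                          for i in range(3))
                self.children.append(OctreeNode(pts, c, half / 2, max_leaf, depth + 1))
            self.points = []

    def collect(self, viewer, lod_factor=2.0):
        """Return full-resolution points near the viewer and one centroid
        per distant subtree, mimicking view-dependent LOD rendering."""
        d = sum((self.center[i] - viewer[i]) ** 2 for i in range(3)) ** 0.5
        if not self.children:
            return list(self.points)
        if d > lod_factor * self.half:
            return [self.centroid]          # far away: coarse representative
        out = []
        for ch in self.children:
            out.extend(ch.collect(viewer, lod_factor))
        return out

# Two clusters; a viewer near one cluster sees it in full detail and the
# other collapsed to a single representative point.
pts = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.0, 0.1, 0.0), (0.2, 0.2, 0.0),
       (0.1, 0.1, 0.1), (10.0, 10.0, 10.0), (10.1, 10.0, 10.0),
       (10.0, 10.1, 10.0), (9.9, 10.0, 10.0), (10.0, 9.9, 10.0)]
root = OctreeNode(pts, (5.0, 5.0, 5.0), 5.5)
near = root.collect((0.0, 0.0, 0.0))
far = root.collect((30.0, 30.0, 30.0))
```

A production renderer would pick the cut through the tree per frame to hit a 60-120 Hz point budget, rather than using a fixed distance factor.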
NASA Astrophysics Data System (ADS)
Tabrizian, P.; Petrasova, A.; Baran, P.; Petras, V.; Mitasova, H.; Meentemeyer, R. K.
2017-12-01
Viewshed modelling, a process of defining, parsing, and analysing the structure of landscape visual space within GIS, has been commonly used in applications ranging from landscape planning and ecosystem services assessment to geography and archaeology. However, less effort has been made to understand whether and to what extent these objective analyses predict the actual on-the-ground perception of a human observer. Moreover, viewshed modelling at the human scale requires incorporating fine-grained landscape structure (e.g., vegetation) and patterns (e.g., land cover) that are typically omitted from visibility calculations or simulated unrealistically, leading to significant error in predicting visual attributes. This poster illustrates how photorealistic immersive virtual environments and high-resolution geospatial data can be used to integrate objective and subjective assessments of visual characteristics at the human scale. We performed viewshed modelling for a systematically sampled set of viewpoints (N=340) across an urban park using open-source GIS (GRASS GIS). For each point a binary viewshed was computed on a 3D surface model derived from high-density leaf-off LIDAR (QL2) points. The viewshed map was combined with high-resolution landcover (0.5 m) derived through fusion of orthoimagery, lidar vegetation, and vector data. Geostatistics and landscape structure analysis were performed to compute topological and compositional metrics for visual scale (e.g., openness), complexity (pattern, shape, and object diversity), and naturalness. Based on the viewshed model output, a sample of 24 viewpoints representing the variation of visual characteristics was selected and geolocated. For each location, 360° imagery was captured using a DSLR camera mounted on a GigaPan robot.
We programmed a virtual reality application through which human subjects (N=100) immersively experienced a random presentation of the selected environments via a head-mounted display (Oculus Rift CV1) and rated each location on perceived openness, naturalness, and complexity. Regression models were used to correlate model outputs with participants' responses. The results indicated strong, significant correlations for openness and naturalness, and a moderate correlation for complexity estimations.
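The core of a binary viewshed is a line-of-sight test along sampled rays over a digital elevation model. This is a minimal sketch under stated assumptions (unit cell size, nearest-cell sampling, simple slope comparison), not the actual GRASS GIS implementation used in the study:

```python
def visible(dem, observer, target, h=1.7):
    """Line-of-sight test on a gridded elevation model (unit cell size):
    the target is visible unless some intervening cell rises to a steeper
    slope from the observer's eye than the target itself."""
    (r0, c0), (r1, c1) = observer, target
    z0 = dem[r0][c0] + h                       # eye height above terrain
    steps = max(abs(r1 - r0), abs(c1 - c0))
    if steps == 0:
        return True
    target_slope = (dem[r1][c1] - z0) / steps
    for i in range(1, steps):                  # sample along the ray
        r = round(r0 + (r1 - r0) * i / steps)
        c = round(c0 + (c1 - c0) * i / steps)
        if (dem[r][c] - z0) / i >= target_slope:
            return False                       # blocked by terrain
    return True

def binary_viewshed(dem, observer, h=1.7):
    """Visibility flag for every cell: a binary viewshed map."""
    return [[visible(dem, observer, (r, c), h) for c in range(len(dem[0]))]
            for r in range(len(dem))]

# Flat 5x5 terrain with a single 10 m spike at the center: cells behind
# the spike, as seen from (2, 0), fall in its shadow.
dem = [[0.0] * 5 for _ in range(5)]
dem[2][2] = 10.0
vs = binary_viewshed(dem, (2, 0))
```

On real data the DEM would be the lidar-derived surface model, and the per-cell booleans would then be intersected with land cover to compute openness and related metrics.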
Effects of aging on pointing movements under restricted visual feedback conditions.
Zhang, Liancun; Yang, Jiajia; Inai, Yoshinobu; Huang, Qiang; Wu, Jinglong
2015-04-01
The goal of this study was to investigate the effects of aging on pointing movements under restricted visual feedback of hand movement and target location. Fifteen young subjects and fifteen elderly subjects performed pointing movements under four visual feedback conditions: full visual feedback of hand movement and target location (FV), no visual feedback of hand movement or target location (NV), no visual feedback of hand movement (NM), and no visual feedback of target location (NT). The results suggested that Fitts' law held for the pointing movements of elderly adults under the different visual restriction conditions. Moreover, a significant main effect of aging on movement time was found in all four tasks; peripheral and central changes may be the key factors behind these different characteristics. Furthermore, no significant main effect of age on mean accuracy rate was found under the restricted visual feedback conditions, suggesting that the elderly subjects made very similar use of the available sensory information as the young subjects. In addition, during the pointing movement, information about the hand's movement was more useful than information about the target location for both young and elderly subjects. Copyright © 2014 Elsevier B.V. All rights reserved.
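Fitts' law, which the study tests across feedback conditions, predicts movement time from an index of difficulty ID = log2(2D/W). A small worked example with hypothetical fitted coefficients (the a and b values below are assumed, not from this study):

```python
import math

def fitts_mt(a, b, distance, width):
    """Fitts' law: movement time grows linearly with the index of
    difficulty ID = log2(2D/W). `a` (intercept, s) and `b` (slope, s/bit)
    are empirically fitted per subject and condition."""
    index_of_difficulty = math.log2(2 * distance / width)
    return a + b * index_of_difficulty

# Hypothetical coefficients a=0.2 s, b=0.1 s/bit:
mt_easy = fitts_mt(0.2, 0.1, distance=0.10, width=0.05)  # ID = 2 bits
mt_hard = fitts_mt(0.2, 0.1, distance=0.40, width=0.01)  # ID ≈ 6.3 bits
```

Comparing the fitted a and b across age groups and feedback conditions is how effects like the reported aging-related slowing can be localized.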
Matching optical flow to motor speed in virtual reality while running on a treadmill
Caramenti, Martina; Lafortuna, Claudio L.; Mugellini, Elena; Abou Khaled, Omar; Bresciani, Jean-Pierre; Dubois, Amandine
2018-01-01
We investigated how visual and kinaesthetic/efferent information is integrated for speed perception in running. Twelve moderately trained to trained subjects ran on a treadmill at three different speeds (8, 10, 12 km/h) in front of a moving virtual scene. They were asked to match the visual speed of the scene to their running speed, i.e., the treadmill's speed. For each trial, participants indicated whether the scene was moving slower or faster than they were running. Visual speed was adjusted according to their response using a staircase until the Point of Subjective Equality (PSE) was reached, i.e., until visual and running speed were perceived as equivalent. For all three running speeds, participants systematically underestimated the visual speed relative to their actual running speed. Indeed, the speed of the visual scene had to exceed the actual running speed in order to be perceived as equivalent to the treadmill speed. The underestimation of visual speed was speed-dependent, and the percentage of underestimation relative to running speed ranged from 15% at 8 km/h to 31% at 12 km/h. We suggest that this fact should be taken into consideration to improve the design of attractive treadmill-mediated virtual environments enhancing engagement in physical activity for healthier lifestyles and disease prevention and care. PMID:29641564
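The staircase procedure used to find the PSE can be sketched with a simulated observer. The step size, noise level, seed, and convergence rule below are illustrative assumptions, not the study's actual protocol:

```python
import random

def staircase_pse(true_pse, start, step, n_trials=200, seed=1):
    """Toy 1-up/1-down staircase, as used to find a Point of Subjective
    Equality: the stimulus level moves down after a 'faster' response and
    up after a 'slower' one, so it oscillates around the PSE."""
    rng = random.Random(seed)
    level, levels = start, []
    for _ in range(n_trials):
        # Simulated observer: responds "faster" when the visual speed
        # (plus a little judgment noise) exceeds the true PSE.
        faster = level + rng.gauss(0, 0.3) > true_pse
        level += -step if faster else step
        levels.append(level)
    return sum(levels[-50:]) / 50   # late-trial average estimates the PSE

# If the perceptual match for a 12 km/h run sits at a higher visual speed,
# the staircase converges there rather than at 12 km/h; with a veridical
# observer it settles near the true value:
estimate = staircase_pse(12.0, 8.0, 0.2)
```

In the study, the systematic gap between the converged visual speed and the treadmill speed is precisely the reported underestimation.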
Visual-Vestibular Conflict Detection Depends on Fixation.
Garzorz, Isabelle T; MacNeilage, Paul R
2017-09-25
Visual and vestibular signals are the primary sources of sensory information for self-motion. Conflict among these signals can be seriously debilitating, resulting in vertigo [1], inappropriate postural responses [2], and motion, simulator, or cyber sickness [3-8]. Despite this significance, the mechanisms mediating conflict detection are poorly understood. Here we model conflict detection simply as crossmodal discrimination with benchmark performance limited by variabilities of the signals being compared. In a series of psychophysical experiments conducted in a virtual reality motion simulator, we measure these variabilities and assess conflict detection relative to this benchmark. We also examine the impact of eye movements on visual-vestibular conflict detection. In one condition, observers fixate a point that is stationary in the simulated visual environment by rotating the eyes opposite head rotation, thereby nulling retinal image motion. In another condition, eye movement is artificially minimized via fixation of a head-fixed fixation point, thereby maximizing retinal image motion. Visual-vestibular integration performance is also measured, similar to previous studies [9-12]. We observe that there is a tradeoff between integration and conflict detection that is mediated by eye movements. Minimizing eye movements by fixating a head-fixed target leads to optimal integration but highly impaired conflict detection. Minimizing retinal motion by fixating a scene-fixed target improves conflict detection at the cost of impaired integration performance. The common tendency to fixate scene-fixed targets during self-motion [13] may indicate that conflict detection is typically a higher priority than the increase in precision of self-motion estimation that is obtained through integration. Copyright © 2017 Elsevier Ltd. All rights reserved.
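The benchmark logic above (integration benefits from variance reduction, while the smallest detectable conflict is limited by the combined variability of the two signals being compared) reduces to two standard formulas. A sketch assuming independent Gaussian visual and vestibular noise (the sigma values are illustrative):

```python
import math

def integration_and_conflict_benchmarks(sigma_vis, sigma_vest):
    """For independent Gaussian cues: the optimally integrated estimate has
    reduced variance, while crossmodal discrimination (conflict detection)
    is limited by the variability of the difference signal."""
    sigma_integrated = math.sqrt((sigma_vis**2 * sigma_vest**2)
                                 / (sigma_vis**2 + sigma_vest**2))
    sigma_conflict = math.sqrt(sigma_vis**2 + sigma_vest**2)
    return sigma_integrated, sigma_conflict

# Illustrative single-cue noise levels (arbitrary units):
sigma_int, sigma_conf = integration_and_conflict_benchmarks(3.0, 4.0)
```

The tradeoff reported in the paper can be read off these formulas: conditions that push behavior toward the integrated estimate gain precision but lose access to the difference signal on which conflict detection depends.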
Ronchi, Roberta; Revol, Patrice; Katayama, Masahiro; Rossetti, Yves; Farnè, Alessandro
2011-01-01
During prism adaptation, subjects execute pointing movements to visual targets under a lateral optical displacement: as a consequence of the discrepancy between visual and proprioceptive inputs, their visuo-motor activity is characterized by pointing errors. The perception of these final errors triggers error-correction processes that eventually result in sensori-motor compensation opposite to the prismatic displacement (i.e., after-effects). Here we tested whether the mere observation of erroneous pointing movements, similar to those executed during prism adaptation, is sufficient to produce adaptation-like after-effects. Neurotypical participants observed, from a first-person perspective, the examiner's arm making incorrect pointing movements that systematically overshot the visual target location to the right, thus simulating a rightward optical deviation. Three classical after-effect measures (proprioceptive, visual, and visual-proprioceptive shift) were recorded before and after first-person observation of pointing errors. Results showed that mere visual exposure to an arm that systematically points to the right of a target (i.e., without error correction) produces a leftward after-effect, which mostly affects the observer's proprioceptive estimation of her body midline. In addition, exposure to such a constant visual error induced in the observer the illusion of "feeling" the seen movement. These findings indicate that it is possible to elicit sensori-motor after-effects by mere observation of movement errors. PMID:21731649
Visual laterality in dolphins: importance of the familiarity of stimuli
Blois-Heulin, Catherine; Crével, Mélodie; Böye, Martin; Lemasson, Alban
2012-01-01
Background Many studies of cerebral asymmetries in different species lead, on the one hand, to a better understanding of the functions of each cerebral hemisphere and, on the other hand, to the development of an evolutionary history of hemispheric laterality. Our animal model is particularly interesting because of its original evolutionary path, i.e. a return to aquatic life after a terrestrial phase. The rare reports concerning visual laterality of marine mammals have investigated mainly discrimination processes. As dolphins are migratory species, they are confronted with a changing environment. Being able to categorize novel versus familiar objects would allow dolphins to adapt rapidly to novel environments, and visual laterality could be a prerequisite for this adaptability. To date, no study, to our knowledge, has analyzed the environmental factors that could influence their visual laterality. Results We investigated visual laterality expressed spontaneously at the water surface by a group of five common bottlenose dolphins (Tursiops truncatus) in response to various stimuli. The stimuli ranged from very familiar objects (known and manipulated previously) to familiar objects (known but never manipulated) to unfamiliar objects (unknown, never seen previously). At the group level, dolphins used their left eye to observe very familiar objects and their right eye to observe unfamiliar objects. However, both eyes were used indifferently to observe familiar objects of intermediate valence. Conclusion Our results suggest different visual cerebral processes based either on the global shape of well-known objects or on local details of unknown objects. Moreover, manipulation of an object appears necessary for these dolphins to construct a global representation of the object, enabling its immediate categorization for subsequent use.
Our experimental results pointed out some cognitive capacities of dolphins which might be crucial for their wild life given their fission-fusion social system and migratory behaviour. PMID:22239860
3D modeling of building indoor spaces and closed doors from imagery and point clouds.
Díaz-Vilariño, Lucía; Khoshelham, Kourosh; Martínez-Sánchez, Joaquín; Arias, Pedro
2015-02-03
3D models of indoor environments are increasingly gaining importance due to the wide range of applications in which they can be used: from redesign and visualization to monitoring and simulation. These models usually exist only for newly constructed buildings; therefore, the development of automatic approaches for reconstructing 3D indoor models from imagery and/or point clouds can make the process easier, faster, and cheaper. Among the constructive elements defining a building interior, doors are very common, and their detection can be very useful for understanding the environment's structure, performing efficient navigation, or planning appropriate evacuation routes. The fact that doors are topologically connected to walls by being coplanar, together with the unavoidable presence of clutter and occlusions indoors, increases the inherent complexity of automating the recognition process. In this work, we present a pipeline of techniques for the reconstruction and interpretation of building interiors based on point clouds and images. The methodology analyses the visibility problem of indoor environments and examines door-candidate detection in depth. The presented approach is tested on real datasets, showing its potential with a high door detection rate and applicability for robust and efficient envelope reconstruction.
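The coplanarity property that the pipeline exploits (doors lying in their wall's plane) reduces to a point-to-plane distance test. A minimal sketch with a hypothetical wall plane and tolerance (both assumed for illustration):

```python
def point_plane_distance(point, plane_point, normal):
    """Distance from a 3D point to a plane given by a point on it and a
    normal vector: the basic coplanarity test for grouping door-candidate
    points with their wall."""
    n_len = sum(c * c for c in normal) ** 0.5
    return abs(sum((point[i] - plane_point[i]) * normal[i]
                   for i in range(3))) / n_len

# Points within a small tolerance of the wall plane are treated as
# coplanar with it (e.g. a closed door flush with the wall).
wall_pt, wall_n = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)  # wall in the x=0 plane
door_pt = (0.01, 1.2, 1.0)                           # a scanned door point
coplanar = point_plane_distance(door_pt, wall_pt, wall_n) < 0.05
```

Candidate detection would then look for door-sized, door-shaped regions among the coplanar points, while handling the gaps left by clutter and occlusion.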
Testing and evaluation of a wearable augmented reality system for natural outdoor environments
NASA Astrophysics Data System (ADS)
Roberts, David; Menozzi, Alberico; Cook, James; Sherrill, Todd; Snarski, Stephen; Russler, Pat; Clipp, Brian; Karl, Robert; Wenger, Eric; Bennett, Matthew; Mauger, Jennifer; Church, William; Towles, Herman; MacCabe, Stephen; Webb, Jeffrey; Lupo, Jasper; Frahm, Jan-Michael; Dunn, Enrique; Leslie, Christopher; Welch, Greg
2013-05-01
This paper describes performance evaluation of a wearable augmented reality system for natural outdoor environments. Applied Research Associates (ARA), as prime integrator on the DARPA ULTRA-Vis (Urban Leader Tactical, Response, Awareness, and Visualization) program, is developing a soldier-worn system to provide intuitive 'heads-up' visualization of tactically-relevant geo-registered icons. Our system combines a novel pose estimation capability, a helmet-mounted see-through display, and a wearable processing unit to accurately overlay geo-registered iconography (e.g., navigation waypoints, sensor points of interest, blue forces, aircraft) on the soldier's view of reality. We achieve accurate pose estimation through fusion of inertial, magnetic, GPS, terrain data, and computer-vision inputs. We leverage a helmet-mounted camera and custom computer vision algorithms to provide terrain-based measurements of absolute orientation (i.e., orientation of the helmet with respect to the earth). These orientation measurements, which leverage mountainous terrain horizon geometry and mission planning landmarks, enable our system to operate robustly in the presence of external and body-worn magnetic disturbances. Current field testing activities across a variety of mountainous environments indicate that we can achieve high icon geo-registration accuracy (<10 mrad) using these vision-based methods.
Visualization of conserved structures by fusing highly variable datasets.
Silverstein, Jonathan C; Chhadia, Ankur; Dech, Fred
2002-01-01
Skill, effort, and time are required to identify and visualize anatomic structures in three-dimensions from radiological data. Fundamentally, automating these processes requires a technique that uses symbolic information not in the dynamic range of the voxel data. We were developing such a technique based on mutual information for automatic multi-modality image fusion (MIAMI Fuse, University of Michigan). This system previously demonstrated facility at fusing one voxel dataset with integrated symbolic structure information to a CT dataset (different scale and resolution) from the same person. The next step of development of our technique was aimed at accommodating the variability of anatomy from patient to patient by using warping to fuse our standard dataset to arbitrary patient CT datasets. A standard symbolic information dataset was created from the full color Visible Human Female by segmenting the liver parenchyma, portal veins, and hepatic veins and overwriting each set of voxels with a fixed color. Two arbitrarily selected patient CT scans of the abdomen were used for reference datasets. We used the warping functions in MIAMI Fuse to align the standard structure data to each patient scan. The key to successful fusion was the focused use of multiple warping control points that place themselves around the structure of interest automatically. The user assigns only a few initial control points to align the scans. Fusion 1 and 2 transformed the atlas with 27 points around the liver to CT1 and CT2 respectively. Fusion 3 transformed the atlas with 45 control points around the liver to CT1 and Fusion 4 transformed the atlas with 5 control points around the portal vein. The CT dataset is augmented with the transformed standard structure dataset, such that the warped structure masks are visualized in combination with the original patient dataset. 
This combined volume visualization is then rendered interactively in stereo on the ImmersaDesk in an immersive Virtual Reality (VR) environment. The accuracy of the fusions was determined qualitatively by comparing the transformed atlas overlaid on the appropriate CT. The overlay was examined for regions where the transformed structure atlas was incorrectly overlaid (false positives) and where it was incorrectly not overlaid (false negatives). According to this method, fusions 1 and 2 were correct roughly 50-75% of the time, while fusions 3 and 4 were correct roughly 75-100%. The CT dataset augmented with the transformed dataset was viewed arbitrarily in user-centered perspective stereo, taking advantage of features such as scaling, windowing and volumetric region-of-interest selection. This process of auto-coloring conserved structures in variable datasets is a step toward the goal of a broader, standardized automatic structure visualization method for radiological data. If successful, it would permit identification, visualization or deletion of structures in radiological data by semi-automatically applying canonical structure information to the radiological data (not just processing and visualization of the data's intrinsic dynamic range). More sophisticated selection of control points and patterns of warping may allow for more accurate transforms, and thus advances in visualization, simulation, education, diagnostics, and treatment planning.
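MIAMI Fuse scores candidate alignments by mutual information. As an illustrative sketch only (not the MIAMI Fuse implementation), mutual information between two co-registered images can be estimated from their joint intensity histogram:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (in nats) between two co-registered images,
    estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability table
    px = pxy.sum(axis=1)                      # marginal of a
    py = pxy.sum(axis=0)                      # marginal of b
    nz = pxy > 0                              # avoid log(0)
    outer = px[:, None] * py[None, :]
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / outer[nz])))

# MI peaks when images align: an image shares all information with itself,
# and almost none with an unrelated noise field.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
noise = rng.random((64, 64))
print(mutual_information(img, img) > mutual_information(img, noise))  # True
```

A registration routine would maximize this score over the warp parameters (here, the positions of the control points).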
Multi-contact Variable-Compliance Manipulation in Extreme Clutter
2014-06-16
(Figure-caption fragments recovered from the report: (a) a raccoon reaches into a bird house to find eggs and young; (b) when noodling, people find catfish holes from which to pull fish out; (c)-(d) a person makes contact along his arm while reaching into clutter; Figure 7 shows a haptic map of detected rigid contacts, built by mapping all rigid taxels at every time instant.)
Independent Research and Independent Exploratory Development FY 1985
1986-01-01
(The abstract text is OCR-garbled beyond recovery; legible fragments describe a Maintenance System (TMS) that maintains a set of assumptions within an environment, graphically indicating design relationships and critique comments by visually marking the elements involved, and note that how to utilize a TMS within Designer has not been clearly defined.)
NASA Astrophysics Data System (ADS)
Cheng, D. L. C.; Quinn, J. D.; Larour, E. Y.; Halkides, D. J.
2017-12-01
The Virtual Earth System Laboratory (VESL) is a Web application, under continued development at the Jet Propulsion Laboratory and UC Irvine, for the visualization of Earth System data and process simulations. As with any project of its size, we have encountered both successes and challenges during the course of development. Our principal point of success is the fact that VESL users can interact seamlessly with our earth science simulations within their own Web browser. Some of the challenges we have faced include retrofitting the VESL Web application to respond to touch gestures, reducing page load time (especially as the application has grown), and accounting for the differences between the various Web browsers and computing platforms.
NASA Astrophysics Data System (ADS)
Moody, Marc; Fisher, Robert; Little, J. Kristin
2014-06-01
Boeing has developed a degraded-visual-environment navigational aid that is flying on the Boeing AH-6 light attack helicopter. The navigational aid is a two-dimensional software digital map underlay generated by the Boeing™ Geospatial Embedded Mapping Software (GEMS) and fully integrated with the operational flight program. The page format on the aircraft's multi-function displays (MFDs) is termed the Approach page. The existing work utilizes Digital Terrain Elevation Data (DTED) and OpenGL ES 2.0 graphics capabilities to compute the pertinent graphics underlay entirely on the graphics processing unit (GPU) within the AH-6 mission computer. The next release will incorporate cultural databases containing Digital Vertical Obstructions (DVO) to warn the crew of towers, buildings, and power lines when choosing an opportune landing site. Future IRAD will include Light Detection and Ranging (LIDAR) point-cloud-generating sensors to provide 2D and 3D synthetic vision on the final approach to the landing zone. Collision detection with respect to terrain, cultural, and point cloud datasets may be used to further augment the crew warning system. The techniques for creating the digital map underlay leverage the GPU almost entirely, making this solution viable on most embedded mission computing systems with an OpenGL ES 2.0 capable GPU. This paper focuses on the AH-6 crew interface process for determining a landing zone and flying the aircraft to it.
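The abstract describes the terrain underlay only at a high level. A common per-cell operation in such map underlays is hillshading from elevation data; the minimal CPU sketch below is hypothetical (the AH-6 system performs this class of work in OpenGL ES 2.0 shaders, and its actual shading scheme is not given):

```python
import numpy as np

def slope_shade(height, cell_size=30.0, light=(0.5, 0.5, 0.7071)):
    """Hillshade a DTED-like height grid by dotting per-cell surface
    normals with a light direction (illustrative, not the GEMS method)."""
    dz_dy, dz_dx = np.gradient(height, cell_size)
    # Surface normal of z = f(x, y) is (-dz/dx, -dz/dy, 1), normalized.
    nx, ny, nz = -dz_dx, -dz_dy, np.ones_like(height)
    norm = np.sqrt(nx**2 + ny**2 + nz**2)
    lx, ly, lz = light
    shade = (nx * lx + ny * ly + nz * lz) / norm
    return np.clip(shade, 0.0, 1.0)

# Flat terrain has upward-facing normals, so shading equals the light's
# z component everywhere.
flat = np.zeros((8, 8))
print(slope_shade(flat)[0, 0])  # ≈ 0.707
```

Because each output cell depends only on its immediate neighbors, the computation maps directly onto a per-fragment GPU shader.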
Method for visualization and presentation of priceless old prints based on precise 3D scan
NASA Astrophysics Data System (ADS)
Bunsch, Eryk; Sitnik, Robert
2014-02-01
Graphic prints and manuscripts constitute a major part of the cultural heritage objects created by most known civilizations. Their presentation has always been a problem due to their high sensitivity to light and to changes in external conditions (temperature, humidity). Today it is possible to use advanced digitization techniques for the documentation and visualization of such objects. When presentation of the original heritage object is impossible, there is a need for a method that allows documentation, and then presentation to the audience, of all the aesthetic features of the object. During the course of the project, scans of several pages of one of the most valuable books in the collection of the Museum of Warsaw Archdiocese were performed. The book, known as the "Great Dürer Trilogy," consists of three series of woodcuts by Albrecht Dürer. The measurement system used consists of a custom-designed, structured-light-based, high-resolution measurement head with an automated digitization system mounted on an industrial robot. This device was custom built to meet conservators' requirements, especially the absence of ultraviolet or infrared radiation emitted toward the measured object. Documentation of one page from the book requires about 380 directional measurements, which together constitute about 3 billion sample points. The distance between points in the cloud is 20 μm. Measurement with an MSD (measurement sampling density) of 2500 points makes it possible to show the public the spatial structure of this graphic print. An important aspect is the complexity of the software environment created for data processing, in which massive data sets can be automatically processed and visualized. A very important advantage of software that works directly on point clouds is the ability to freely manipulate the virtual light source.
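The two sampling figures in the abstract are consistent with each other if the MSD is read as points per square millimeter; this unit is an assumption on our part, since the abstract does not state it:

```python
# A 20 µm point-to-point spacing implies a grid of 50 x 50 points in each
# square millimeter, i.e. the abstract's "MSD of 2500 points" if that
# figure is per mm^2 (assumed; the unit is not stated in the abstract).
spacing_mm = 0.020                 # 20 µm spacing between cloud points
points_per_mm = 1.0 / spacing_mm   # 50 points along each axis per mm
msd = points_per_mm ** 2           # points per square millimeter
print(round(msd))  # 2500
```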
pV3-Gold Visualization Environment for Computer Simulations
NASA Technical Reports Server (NTRS)
Babrauckas, Theresa L.
1997-01-01
A new visualization environment, pV3-Gold, can be used during and after a computer simulation to extract and visualize the physical features in the results. This environment, which is an extension of the pV3 visualization environment developed at the Massachusetts Institute of Technology with guidance and support by researchers at the NASA Lewis Research Center, features many tools that allow users to display data in various ways.
ERIC Educational Resources Information Center
Sehati, Samira; Khodabandehlou, Morteza
2017-01-01
The present investigation was an attempt to study the effect of PowerPoint-enhanced teaching (visual input) on Iranian Intermediate EFL learners' listening comprehension ability. To that end, a null hypothesis was formulated: that PowerPoint-enhanced teaching (visual input) has no effect on Iranian Intermediate EFL learners' listening…
Kim, Sung-Min
2018-01-01
Cessation of dewatering following underground mine closure typically results in groundwater rebound, because mine voids and surrounding strata undergo flooding up to the levels of the decant points, such as shafts and drifts. SIMPL (Simplified groundwater program In Mine workings using the Pipe equation and Lumped parameter model), a simplified lumped parameter model-based program for predicting groundwater levels in abandoned mines, is presented herein. The program comprises a simulation engine module, 3D visualization module, and graphical user interface, which aids data processing, analysis, and visualization of results. The 3D viewer facilitates effective visualization of the predicted groundwater level rebound phenomenon together with a topographic map, mine drift, goaf, and geological properties from borehole data. SIMPL is applied to data from the Dongwon coal mine and Dalsung copper mine in Korea, with strong similarities in simulated and observed results. By considering mine workings and interpond connections, SIMPL can thus be used to effectively analyze and visualize groundwater rebound. In addition, the predictions by SIMPL can be utilized to prevent the surrounding environment (water and soil) from being polluted by acid mine drainage. PMID:29747480
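SIMPL's governing equations are not given in the abstract. As a generic stand-in (an assumption, not SIMPL's actual pipe-equation model), a single-pond lumped-parameter sketch illustrates the core idea of groundwater rebound capped at a decant elevation:

```python
def rebound(level, decant_level, inflow, area, dt=1.0, steps=1000):
    """Generic single-pond lumped-parameter rebound: the water level rises
    with recharge inflow until it reaches the decant elevation (e.g., a
    shaft or drift), after which the void overflows and the level holds."""
    history = [level]
    for _ in range(steps):
        level = min(level + inflow * dt / area, decant_level)
        history.append(level)
    return history

# Flood a mine void from -120 m toward a decant point at -20 m
levels = rebound(level=-120.0, decant_level=-20.0, inflow=2000.0, area=10_000.0)
print(levels[-1])  # -20.0: rebound is capped at the decant elevation
```

SIMPL extends this idea to multiple interconnected ponds, with flow between them governed by pipe equations.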
ERIC Educational Resources Information Center
Dunn Foundation, Warwick, RI.
Recognizing that community growth and change are inevitable, Viewfinders' goals are as follows: to introduce students and teachers to the concept of the visual environment; enhance an understanding of the interrelationship between the built and natural environment; create an awareness that the visual environment affects the economy and quality of…
Lighting design for globally illuminated volume rendering.
Zhang, Yubo; Ma, Kwan-Liu
2013-12-01
With the evolution of graphics hardware, high-quality global illumination has become available for real-time volume rendering. Compared to local illumination, global illumination can produce realistic shading effects which are closer to real-world scenes, and it has proven useful for enhancing volume data visualization to enable better depth and shape perception. However, setting up optimal lighting can be a nontrivial task for average users. Previous lighting design work for volume visualization did not consider global light transport. In this paper, we present a lighting design method for volume visualization employing global illumination. The resulting system takes into account the view- and transfer-function-dependent content of the volume data to automatically generate an optimized three-point lighting environment. Our method fully exploits the back light, which is not used by previous volume visualization systems. By also including global shadows and multiple scattering, our lighting system can effectively enhance the depth and shape perception of volumetric features of interest. In addition, we propose an automatic tone mapping operator which recovers visual details from overexposed areas while maintaining sufficient contrast in dark areas. We show that our method is effective for visualizing volume datasets with complex structures. The structural information is more clearly and correctly presented under the automatically generated light sources.
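The paper's tone mapping operator is not specified in the abstract. As an illustrative stand-in, a classic global operator in the Reinhard style shows the general mechanism: high radiances are compressed toward 1 while dark values stay nearly linear, so detail survives at both ends of the range:

```python
import numpy as np

def reinhard_tonemap(radiance, white=None):
    """Global Reinhard-style operator (illustrative, not the paper's
    method): compresses high radiances toward 1 while leaving dark
    values nearly linear."""
    L = np.asarray(radiance, dtype=float)
    if white is None:
        white = L.max()              # brightest value maps to ~1
    return L * (1.0 + L / white**2) / (1.0 + L)

hdr = np.array([0.05, 0.5, 4.0, 40.0])   # radiances spanning ~3 decades
ldr = reinhard_tonemap(hdr)
print(np.all(np.diff(ldr) > 0))  # True: brightness order is preserved
```

An automatic operator like the paper's would additionally adapt its parameters to the rendered image's luminance statistics.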
Field: a new meta-authoring platform for data-intensive scientific visualization
NASA Astrophysics Data System (ADS)
Downie, M.; Ameres, E.; Fox, P. A.; Goebel, J.; Graves, A.; Hendler, J.
2012-12-01
This presentation will demonstrate a new platform for data-intensive scientific visualization, called Field, that rethinks the problem of visual data exploration. Several new opportunities for scientific visualization present themselves at this moment in time. We believe that, taken together, they may catalyze a transformation of the practice of science and begin to seed a technical culture within science that fuses data analysis, programming and myriad visual strategies. It is at the integrative levels that the principal challenges exist, for many fundamental technical components of our field are now well understood and widely available. File formats from CSV through HDF all have broad library support; low-level high-performance graphics APIs (OpenGL) are in a period of stable growth; and a dizzying ecosystem of analysis and machine learning libraries abounds. The hardware of computer graphics offers unprecedented computing power within commodity components; programming languages and platforms are coalescing around a core set of umbrella runtimes. Each of these trends is set to continue — computer graphics hardware is developing at a super-Moore-law rate, and trends in publication and dissemination point only towards increasing access to code and data. The critical opportunity for scientific visualization is, we maintain, not in developing a new statistical library, nor a new tool centered on a particular technique, but rather a new visual, "live" programming environment that is promiscuous in its scope. We can identify the necessary methodological practices and traditions required here not in science or engineering but in the "live-coding" practices prevalent in the fields of digital art and design. We can define this practice as an approach to programming that is live, iterative, integrative, speculative and exploratory.
"Live" because it is exclusively practiced in real-time (often during performance); "iterative", because intermediate programs and their visual results are constantly being made and remade en route; "speculative", because these programs and images result from a mode of inquiry into image-making not unlike that of hypothesis formation and testing; "integrative" because this style draws deeply upon the libraries of algorithms and materials available online today; and "exploratory" because the results of these speculations are inherently open to the data and the unforeseen at the outset. To this end our development environment — Field — comprises a minimal core and a powerful plug-in system that can be extended from within the environment itself. By providing a hybrid text editor that can incorporate text-based programming alongside graphical user-interface elements, its flexible and extensible interface provides space as necessary for notation, visualization, interface construction, and introspection. In addition, it provides an advanced GPU-accelerated graphics system ideal for large-scale data visualization. Since Field was created in the context of widely divergent interdisciplinary projects, its aim is to give its users not only the ability to work rapidly, but also to shape their Field environment extensively and flexibly for their own demands.
Infants' visual and auditory communication when a partner is or is not visually attending.
Liszkowski, Ulf; Albrecht, Konstanze; Carpenter, Malinda; Tomasello, Michael
2008-04-01
In the current study we investigated infants' communication in the visual and auditory modalities as a function of the recipient's visual attention. We elicited pointing at interesting events from thirty-two 12-month olds and thirty-two 18-month olds in two conditions: when the recipient either was or was not visually attending to them before and during the point. The main result was that infants initiated more pointing when the recipient's visual attention was on them than when it was not. In addition, when the recipient did not respond by sharing interest in the designated event, infants initiated more repairs (repeated pointing) than when she did, again, especially when the recipient was visually attending to them. Interestingly, accompanying vocalizations were used intentionally and increased in both experimental conditions when the recipient did not share attention and interest. However, there was little evidence that infants used their vocalizations to direct attention to their gestures when the recipient was not attending to them.
Idiosyncratic characteristics of saccadic eye movements when viewing different visual environments.
Andrews, T J; Coppola, D M
1999-08-01
Eye position was recorded in different viewing conditions to assess whether the temporal and spatial characteristics of saccadic eye movements in different individuals are idiosyncratic. Our aim was to determine the degree to which oculomotor control is based on endogenous factors. A total of 15 naive subjects viewed five visual environments: (1) The absence of visual stimulation (i.e. a dark room); (2) a repetitive visual environment (i.e. simple textured patterns); (3) a complex natural scene; (4) a visual search task; and (5) reading text. Although differences in visual environment had significant effects on eye movements, idiosyncrasies were also apparent. For example, the mean fixation duration and size of an individual's saccadic eye movements when passively viewing a complex natural scene covaried significantly with those same parameters in the absence of visual stimulation and in a repetitive visual environment. In contrast, an individual's spatio-temporal characteristics of eye movements during active tasks such as reading text or visual search covaried together, but did not correlate with the pattern of eye movements detected when viewing a natural scene, simple patterns or in the dark. These idiosyncratic patterns of eye movements in normal viewing reveal an endogenous influence on oculomotor control. The independent covariance of eye movements during different visual tasks shows that saccadic eye movements during active tasks like reading or visual search differ from those engaged during the passive inspection of visual scenes.
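The study's central analysis is the across-subject covariation of eye-movement parameters between viewing conditions. The sketch below illustrates that kind of analysis on synthetic data (the numbers are hypothetical, not the study's): if each individual has a stable idiosyncratic fixation duration, per-subject means correlate across conditions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects = 15
# Hypothetical per-subject mean fixation durations (ms): an idiosyncratic
# trait shared across conditions, plus condition-specific noise.
trait = rng.normal(300, 40, n_subjects)
scene = trait + rng.normal(0, 15, n_subjects)   # viewing a natural scene
dark = trait + rng.normal(0, 15, n_subjects)    # sitting in a dark room
r = np.corrcoef(scene, dark)[0, 1]
print(round(r, 2))  # a strong positive across-subject correlation
```

The study's contrasting result, that active tasks (reading, search) covary with each other but not with passive viewing, would correspond to two such trait factors rather than one.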
Spontaneous Group Learning in Ambient Learning Environments
NASA Astrophysics Data System (ADS)
Bick, Markus; Jughardt, Achim; Pawlowski, Jan M.; Veith, Patrick
Spontaneous Group Learning is a concept to form and facilitate face-to-face, ad-hoc learning groups in collaborative settings. We show how to use Ambient Intelligence to identify, support, and initiate group processes. Learners' positions are determined by widely used technologies, e.g., Bluetooth and WLAN. As a second step, learners' positions, tasks, and interests are visualized. Finally, a group process is initiated supported by relevant documents and services. Our solution is a starting point to develop new didactical solutions for collaborative processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scholtz, Jean
A new field of research, visual analytics, has recently been introduced. This has been defined as "the science of analytical reasoning facilitated by visual interfaces." Visual analytic environments, therefore, support analytical reasoning using visual representations and interactions, with data representations and transformation capabilities, to support production, presentation and dissemination. As researchers begin to develop visual analytic environments, it will be advantageous to develop metrics and methodologies to help researchers measure the progress of their work and understand the impact their work will have on the users who will work in such environments. This paper presents five areas or aspects of visual analytic environments that should be considered as metrics and methodologies for evaluation are developed. Evaluation aspects need to include usability, but it is necessary to go beyond basic usability. The areas of situation awareness, collaboration, interaction, creativity, and utility are proposed as areas for initial consideration. The steps that need to be undertaken to develop systematic evaluation methodologies and metrics for visual analytic environments are outlined.
A visual-environment simulator with variable contrast
NASA Astrophysics Data System (ADS)
Gusarova, N. F.; Demin, A. V.; Polshchikov, G. V.
1987-01-01
A visual-environment simulator is proposed in which the image contrast can be varied continuously up to the reversal of the image. Contrast variability can be achieved by using two independently adjustable light sources to simultaneously illuminate the carrier of visual information (e.g., a slide or a cinematographic film). It is shown that such a scheme makes it possible to adequately model a complex visual environment.
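The abstract does not give the simulator's optics, but the two-source scheme can be illustrated with a simple assumed model: one source lights the slide and the other its complement, so the signed Michelson contrast between a bright and a dark region passes through zero and reverses as the second source overtakes the first:

```python
def signed_contrast(s1, s2, t_high=0.9, t_low=0.1):
    """Assumed model (not from the paper): a slide region of transmittance
    t, lit by source s1 through the slide and s2 through its complement,
    has luminance L(t) = s1*t + s2*(1 - t). Return the signed Michelson
    contrast between a bright (t_high) and a dark (t_low) region."""
    L_hi = s1 * t_high + s2 * (1 - t_high)
    L_lo = s1 * t_low + s2 * (1 - t_low)
    return (L_hi - L_lo) / (L_hi + L_lo)

print(signed_contrast(1.0, 0.0))   # positive: normal image
print(signed_contrast(0.5, 0.5))   # zero: contrast nulled
print(signed_contrast(0.0, 1.0))   # negative: reversed image
```

Adjusting the two sources independently thus sweeps the contrast continuously from positive through zero to reversed, as the abstract describes.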
Edwards, Terra
2015-01-01
This article is concerned with social and interactional processes that simplify pragmatic acts of intention attribution. The empirical focus is a series of interactions among DeafBlind people in Seattle, Washington, where pointing signs are used to individuate objects of reference in the immediate environment. Most members of this community are born deaf and slowly become blind. They come to Seattle using Visual American Sign Language, which has emerged and developed in a field organized around visual modes of access. As vision deteriorates, however, links between deictic signs (such as pointing) and the present, remembered, or imagined environment erode in idiosyncratic ways across the community of language-users, and as a result, it becomes increasingly difficult for participants to converge on objects of reference. In the past, DeafBlind people addressed this problem by relying on sighted interpreters. Under the influence of the recent “pro-tactile” movement, they have turned instead to one another to find new solutions to these referential problems. Drawing on analyses of 120 h of videorecorded interaction and language-use, detailed fieldnotes collected during 12 months of sustained anthropological fieldwork, and more than 15 years of involvement in this community in a range of capacities, I argue that DeafBlind people are generating new and reciprocal modes of access to their environment, and this process is aligning language with context in novel ways. I discuss two mechanisms that can account for this process: embedding in the social field and deictic integration. I argue that together, these social and interactional processes yield a deictic system set to retrieve a restricted range of values from the extra-linguistic context, thereby attenuating the cognitive demands of intention attribution and narrowing the gap between DeafBlind minds. PMID:26500576
Perceptual organization and visual attention.
Kimchi, Ruth
2009-01-01
Perceptual organization--the processes structuring visual information into coherent units--and visual attention--the processes by which some visual information in a scene is selected--are crucial for the perception of our visual environment and to visuomotor behavior. Recent research points to important relations between attentional and organizational processes. Several studies demonstrated that perceptual organization constrains attentional selectivity, and other studies suggest that attention can also constrain perceptual organization. In this chapter I focus on two aspects of the relationship between perceptual organization and attention. The first addresses the question of whether or not perceptual organization can take place without attention. I present findings demonstrating that some forms of grouping and figure-ground segmentation can occur without attention, whereas others require controlled attentional processing, depending on the processes involved and the conditions prevailing for each process. These findings challenge the traditional view, which assumes that perceptual organization is a unitary entity that operates preattentively. The second issue addresses the question of whether perceptual organization can affect the automatic deployment of attention. I present findings showing that the mere organization of some elements in the visual field by Gestalt factors into a coherent perceptual unit (an "object"), with no abrupt onset or any other unique transient, can capture attention automatically in a stimulus-driven manner. Taken together, the findings discussed in this chapter demonstrate the multifaceted, interactive relations between perceptual organization and visual attention.
Visual EKF-SLAM from Heterogeneous Landmarks †
Esparza-Jiménez, Jorge Othón; Devy, Michel; Gordillo, José L.
2016-01-01
Many applications require the localization of a moving object, e.g., a robot, using sensory data acquired from embedded devices. Simultaneous localization and mapping from vision performs both the spatial and temporal fusion of these data on a map when a camera moves in an unknown environment. Such a SLAM process executes two interleaved functions: the front-end detects and tracks features from images, while the back-end interprets features as landmark observations and estimates both the landmarks and the robot positions with respect to a selected reference frame. This paper describes a complete visual SLAM solution, combining both point and line landmarks on a single map. The proposed method has an impact on both the back-end and the front-end. The contributions comprise the use of heterogeneous landmark-based EKF-SLAM (the management of a map composed of both point and line landmarks) and, from this perspective, a comparison between landmark parametrizations and an evaluation of how heterogeneity improves the accuracy of camera localization; the development of a front-end active-search process for linear landmarks integrated into SLAM; and the experimentation methodology. PMID:27070602
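The paper's heterogeneous point/line formulation is far richer than can be shown here, but the EKF-SLAM back-end it builds on can be sketched in one dimension: the state stacks robot and landmark positions, a motion model drives the prediction, and landmark observations correct both estimates jointly (all numbers below are hypothetical):

```python
import numpy as np

# Minimal 1-D EKF-SLAM sketch: state = [robot_x, landmark_x].
x = np.array([0.0, 5.0])          # initial estimates (landmark guess is off)
P = np.diag([0.1, 4.0])           # landmark position poorly known
Q = np.diag([0.05, 0.0])          # motion noise (the landmark is static)
R = 0.01                          # range-measurement noise

def predict(x, P, u):
    x = x + np.array([u, 0.0])    # robot moves by u, landmark stays put
    P = P + Q                     # identity motion Jacobian in this model
    return x, P

def update(x, P, z):
    H = np.array([[-1.0, 1.0]])   # z = landmark_x - robot_x (signed range)
    y = z - (x[1] - x[0])         # innovation
    S = H @ P @ H.T + R
    K = P @ H.T / S               # Kalman gain (2x1)
    x = x + (K * y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Drive toward the landmark (true position 6.0), measuring the true range.
true_robot, true_lm = 0.0, 6.0
for _ in range(10):
    true_robot += 0.5
    x, P = predict(x, P, 0.5)
    x, P = update(x, P, true_lm - true_robot)
print(x[1], P[1, 1])  # landmark estimate near 6.0, variance greatly shrunk
```

The paper's contribution lies in making the state heterogeneous, so that `x` mixes point parametrizations with line parametrizations, each with its own measurement Jacobian `H`.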
Visual object recognition for mobile tourist information systems
NASA Astrophysics Data System (ADS)
Paletta, Lucas; Fritz, Gerald; Seifert, Christin; Luley, Patrick; Almer, Alexander
2005-03-01
We describe a mobile vision system that is capable of automated object identification using images captured from a PDA or a camera phone. We present a solution for the enabling technology of outdoor vision-based object recognition that will extend state-of-the-art location- and context-aware services towards object-based awareness in urban environments. In the proposed application scenario, tourist pedestrians are equipped with GPS, W-LAN and a camera attached to a PDA or a camera phone. They are interested in whether their field of view contains tourist sights that would point to more detailed information. Multimedia-type data about related history, architecture, or other cultural context of historic or artistic relevance might be explored by a mobile user who is intending to learn within the urban environment. Learning from ambient cues is in this way achieved by pointing the device towards the urban sight, capturing an image, and consequently getting information about the object on site and within the focus of attention, i.e., the user's current field of view.
NASA Astrophysics Data System (ADS)
Kobayashi, Hayato; Osaki, Tsugutoyo; Okuyama, Tetsuro; Gramm, Joshua; Ishino, Akira; Shinohara, Ayumi
This paper describes an interactive experimental environment for autonomous soccer robots, which is a soccer field augmented by utilizing camera input and projector output. This environment, in a sense, plays an intermediate role between simulated environments and real environments. We can simulate some parts of real environments, e.g., real objects such as robots or a ball, and reflect simulated data into the real environments, e.g., to visualize the positions on the field, so as to create a situation that allows easy debugging of robot programs. The significant point compared with analogous work is that virtual objects are touchable in this system owing to projectors. We also show the portable version of our system that does not require ceiling cameras. As an application in the augmented environment, we address the learning of goalie strategies on real quadruped robots in penalty kicks. We make our robots utilize virtual balls in order to perform only quadruped locomotion in real environments, which is quite difficult to simulate accurately. Our robots autonomously learn and acquire more beneficial strategies without human intervention in our augmented environment than those in a fully simulated environment.
NASA Technical Reports Server (NTRS)
Senger, Steven O.
1998-01-01
Volumetric data sets have become common in medicine and many sciences through technologies such as computed x-ray tomography (CT), magnetic resonance (MR), positron emission tomography (PET), confocal microscopy and 3D ultrasound. When presented with 2D images, humans immediately and unconsciously begin a visual analysis of the scene. The viewer surveys the scene, identifying significant landmarks and building an internal mental model of the presented information. The identification of features is strongly influenced by the viewer's expectations based upon their expert knowledge of what the image should contain. While not a conscious activity, the viewer makes a series of choices about how to interpret the scene. These choices occur in parallel with viewing the scene and effectively change the way the viewer sees the image. It is this interaction of viewing and choice which is the basis of many familiar visual illusions. This is especially important in the interpretation of medical images, where it is the expert knowledge of the radiologist that interprets the image. For 3D data sets this interaction of viewing and choice is frustrated, because choices must precede the visualization of the data set. It is not possible to visualize the data set without making some initial choices that determine how the volume of data is presented to the eye. These choices include viewpoint orientation, region identification, and color and opacity assignments. Further compounding the problem is the fact that these visualization choices are defined in terms of computer graphics rather than the language of the expert's knowledge. The long-term goal of this project is to develop an environment where the user can interact with volumetric data sets using tools which promote the utilization of expert knowledge by incorporating visualization and choice into a tight computational loop.
The tools will support activities involving the segmentation of structures, construction of surface meshes and local filtering of the data set. To conform to this environment, tools should have several key attributes. First, they should rely only on computations over a local neighborhood of the probe position. Second, they should operate iteratively over time, converging towards a limit behavior. Third, they should adapt to user input, modifying their operational parameters over time.
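The first two attributes above (local neighborhood computation, iteration toward a limit) can be sketched with a classic segmentation probe: region growing seeded at the probe position. This is an illustrative example of the tool class, not the project's implementation:

```python
import numpy as np
from collections import deque

def grow_region(volume, seed, tol=0.5):
    """Iterative segmentation probe: starting at the seed voxel, repeatedly
    absorb face-connected neighbors whose intensity is within `tol` of the
    seed's. Each step touches only a local neighborhood of the frontier,
    and the region converges to a fixed set (its limit behavior)."""
    mask = np.zeros(volume.shape, dtype=bool)
    mask[seed] = True
    frontier = deque([seed])
    seed_val = volume[seed]
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while frontier:
        z, y, x = frontier.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
               and not mask[n] and abs(volume[n] - seed_val) <= tol:
                mask[n] = True
                frontier.append(n)
    return mask

# A bright 3x3x3 block inside a dark volume is segmented exactly.
vol = np.zeros((8, 8, 8))
vol[2:5, 2:5, 2:5] = 10.0
region = grow_region(vol, seed=(3, 3, 3))
print(int(region.sum()))  # 27
```

The third attribute would correspond to letting the user adjust `tol` (or the seed) interactively while the growth iterates, closing the viewing-and-choice loop the project describes.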
NASA Astrophysics Data System (ADS)
Giorgino, Toni
2014-03-01
PLUMED-GUI is an interactive environment to develop and test complex PLUMED scripts within the Visual Molecular Dynamics (VMD) environment. Computational biophysicists can take advantage of both PLUMED’s rich syntax to define collective variables (CVs) and VMD’s chemically-aware atom selection language, while working within a natural point-and-click interface. Pre-defined templates and syntax mnemonics facilitate the definition of well-known reaction coordinates. Complex CVs, e.g. involving reference snapshots used for RMSD or native contacts calculations, can be built through dialogs that provide a synoptic view of the available options. Scripts can be either exported for use in simulation programs, or evaluated on the currently loaded molecular trajectories. Script development takes place without leaving VMD, thus enabling an incremental try-see-modify development model for molecular metrics.
Synthetic environment employing a craft for providing user perspective reference
Maples, Creve; Peterson, Craig A.
1997-10-21
A multi-dimensional user-oriented synthetic environment system allows application programs to be programmed and accessed with input/output-device-independent, generic functional commands which are a distillation of the actual functions performed by any application program. A shared memory structure allows the translation of device-specific commands to device-independent, generic functional commands. Complete flexibility in the mapping of synthetic environment data to the user is thereby allowed. Accordingly, synthetic environment data may be provided to the user on parallel user information processing channels, allowing the subcognitive mind to act as a filter, eliminating irrelevant information and allowing the processing of increased amounts of data by the user. The user is further provided with a craft surrounding the user within the synthetic environment, which craft imparts important visual reference and motion parallax cues, enabling the user to better appreciate distances and directions within the synthetic environment. Display of this craft in close proximity to the user's point of perspective may be accomplished without substantially degrading the image resolution of the displayed portions of the synthetic environment.
Harris, Magdalena; Rhodes, Tim
2018-06-01
A life history approach enables study of how risk or health protection is shaped by critical transitions and turning points in a life trajectory and in the context of social environment and time. We employed visual and narrative life history methods with people who inject drugs to explore how hepatitis C protection was enabled and maintained over the life course. We overview our methodological approach, with a focus on the ethics in practice of using life history timelines and life-grids with 37 participants. The life-grid evoked mixed emotions for participants: pleasure in receiving a personalized visual history and pain elicited by its contents. A minority managed this pain with additional heroin use. The methodological benefits of using life history methods and visual aids have been extensively reported. Crucial to consider are the ethical implications of this process, particularly for people who lack socially ascribed markers of a "successful life."
Louveton, N; McCall, R; Koenig, V; Avanesov, T; Engel, T
2016-05-01
Innovative in-car applications provided on smartphones can deliver real-time alternative mobility choices and subsequently generate visual-manual demand. Prior studies have found that multi-touch gestures such as kinetic scrolling are problematic in this respect. In this study we evaluate three prototype tasks which can be found in common mobile interaction use-cases. In a repeated-measures design, 29 participants interacted with the prototypes in a car-following task within a driving simulator environment. Task completion, driving performance and eye gaze have been analysed. We found that the slider widget used in the filtering task was too demanding and led to poor performance, while kinetic scrolling generated a comparable amount of visual distraction despite it requiring a lower degree of finger pointing accuracy. We discuss how to improve continuous list browsing in a dual-task context. Copyright © 2016 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Using WorldWide Telescope in Observing, Research and Presentation
NASA Astrophysics Data System (ADS)
Roberts, Douglas A.; Fay, J.
2014-01-01
WorldWide Telescope (WWT) is free software that enables researchers to interactively explore observational data using a user-friendly interface. Reference, all-sky datasets and pointed observations are available as layers along with the ability to easily overlay additional FITS images and catalog data. Connections to the Astrophysics Data System (ADS) are included which enable visual investigation using WWT to drive document searches in ADS. WWT can be used to capture and share visual exploration with colleagues during observational planning and analysis. Finally, researchers can use WorldWide Telescope to create videos for professional, education and outreach presentations. I will conclude with an example of how I have used WWT in a research project. Specifically, I will discuss how WorldWide Telescope helped our group to prepare for radio observations and following them, in the analysis of multi-wavelength data taken in the inner parsec of the Galaxy. A concluding video will show how WWT brought together disparate datasets in a unified interactive visualization environment.
Art as behaviour--an ethological approach to visual and verbal art, music and architecture.
Sütterlin, Christa; Schiefenhövel, Wulf; Lehmann, Christian; Forster, Johanna; Apfelauer, Gerhard
2014-01-01
In recent years, the fine arts, architecture, music and literature have increasingly been examined from the vantage point of human ethology and evolutionary psychology. In 2011 the authors formed the research group 'Ethology of the Arts' concentrating on the evolution and biology of perception and behaviour. These novel approaches aim at a better understanding of the various facets represented by the arts by focusing on possible phylogenetic adaptations that have shaped the artistic capacities of our ancestors. Rather than culture specificity, which is stressed e.g. by cultural anthropology and numerous other disciplines, universal human tendencies to perceive, feel, think and behave are postulated. Artistic expressive behaviour is understood as an integral part of the human condition, whether expressed in ritual, visual, verbal or musical art. The 'Ethology of the Arts' group's research focuses on visual and verbal art, music and built environment/architecture and is designed to contribute to the incipient interdisciplinarity in the field of evolutionary art research.
Neural Circuit to Integrate Opposing Motions in the Visual Field.
Mauss, Alex S; Pankova, Katarina; Arenz, Alexander; Nern, Aljoscha; Rubin, Gerald M; Borst, Alexander
2015-07-16
When navigating in their environment, animals use visual motion cues as feedback signals that are elicited by their own motion. Such signals are provided by wide-field neurons sampling motion directions at multiple image points as the animal maneuvers. Each one of these neurons responds selectively to a specific optic flow field representing the spatial distribution of motion vectors on the retina. Here, we describe the discovery of a group of local, inhibitory interneurons in the fruit fly Drosophila that are key for filtering these cues. Using anatomy, molecular characterization, activity manipulation, and physiological recordings, we demonstrate that these interneurons convey direction-selective inhibition to wide-field neurons with the opposite preferred direction and provide evidence for how their connectivity enables the computation required for integrating opposing motions. Our results indicate that, rather than sharpening directional selectivity per se, these circuit elements reduce noise by eliminating non-specific responses to complex visual information. Copyright © 2015 Elsevier Inc. All rights reserved.
Real-time visual mosaicking and navigation on the seafloor
NASA Astrophysics Data System (ADS)
Richmond, Kristof
Remote robotic exploration holds vast potential for gaining knowledge about extreme environments accessible to humans only with great difficulty. Robotic explorers have been sent to other solar system bodies, and on this planet into inaccessible areas such as caves and volcanoes. In fact, the largest unexplored land area on earth lies hidden in the airless cold and intense pressure of the ocean depths. Exploration in the oceans is further hindered by water's high absorption of electromagnetic radiation, which both inhibits remote sensing from the surface, and limits communications with the bottom. The Earth's oceans thus provide an attractive target for developing remote exploration capabilities. As a result, numerous robotic vehicles now routinely survey this environment, from remotely operated vehicles piloted over tethers from the surface to torpedo-shaped autonomous underwater vehicles surveying the mid-waters. However, these vehicles are limited in their ability to navigate relative to their environment. This limits their ability to return to sites with precision without the use of external navigation aids, and to maneuver near and interact with objects autonomously in the water and on the sea floor. The enabling of environment-relative positioning on fully autonomous underwater vehicles will greatly extend their power and utility for remote exploration in the furthest reaches of the Earth's waters---even under ice and under ground---and eventually in extraterrestrial liquid environments such as Europa's oceans. This thesis presents an operational, fielded system for visual navigation of underwater robotic vehicles in unexplored areas of the seafloor. The system does not depend on external sensing systems, using only instruments on board the vehicle. As an area is explored, a camera is used to capture images and a composite view, or visual mosaic, of the ocean bottom is created in real time. 
Side-to-side visual registration of images is combined with dead-reckoned navigation information in a framework allowing the creation and updating of large, locally consistent mosaics. These mosaics are used as maps in which the vehicle can navigate and localize itself with respect to points in the environment. The system achieves real-time performance in several ways. First, wherever possible, direct sensing of motion parameters is used in place of extracting them from visual data. Second, trajectories are chosen to enable a hierarchical search for side-to-side links which limits the amount of searching performed without sacrificing robustness. Finally, the map estimation is formulated as a sparse, linear information filter allowing rapid updating of large maps. The visual navigation enabled by the work in this thesis represents a new capability for remotely operated vehicles, and an enabling capability for a new generation of autonomous vehicles which explore and interact with remote, unknown and unstructured underwater environments. The real-time mosaic can be used on current tethered vehicles to create pilot aids and provide a vehicle user with situational awareness of the local environment and the position of the vehicle within it. For autonomous vehicles, the visual navigation system enables precise environment-relative positioning and mapping, without requiring external navigation systems, opening the way for ever-expanding autonomous exploration capabilities. The utility of this system was demonstrated in the field at sites of scientific interest using the ROVs Ventana and Tiburon operated by the Monterey Bay Aquarium Research Institute. A number of sites in and around Monterey Bay, California were mosaicked using the system, culminating in a complete imaging of the wreck site of the USS Macon , where real-time visual mosaics containing thousands of images were generated while navigating using only sensor systems on board the vehicle.
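The sparse information-filter formulation mentioned above can be conveyed with a toy one-dimensional sketch (this is not the thesis code; the state, measurements, and variances are invented for illustration). In information form, fusing a new side-to-side registration link or dead-reckoned fix is a cheap additive update, which is why large maps stay fast to maintain:

```python
# Minimal 1D information-filter sketch (illustrative, not the thesis code).
# State: vehicle position x. Information form: Lambda (precision), eta.
# Each measurement z with variance r adds Lambda += 1/r and eta += z/r,
# so updates remain cheap even as many links accumulate.

def make_filter():
    return {"Lambda": 0.0, "eta": 0.0}

def add_measurement(f, z, r):
    """Fuse measurement z with noise variance r (information update)."""
    f["Lambda"] += 1.0 / r
    f["eta"] += z / r

def estimate(f):
    """Recover mean and variance from the information form."""
    return f["eta"] / f["Lambda"], 1.0 / f["Lambda"]

f = make_filter()
add_measurement(f, 10.0, 4.0)   # dead-reckoned fix, variance 4
add_measurement(f, 12.0, 1.0)   # visual registration link, variance 1
mean, var = estimate(f)
# mean = (10/4 + 12/1) / (1/4 + 1/1) = 14.5 / 1.25 = 11.6, variance = 0.8
```

The real system carries a full vehicle-and-map state with a sparse information matrix, but the additive structure of the update is the same.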
Helland, Magne; Horgen, Gunnar; Kvikstad, Tor Martin; Garthus, Tore; Aarås, Arne
2011-11-01
This study investigated the effect of moving from small offices to a landscape environment for 19 Visual Display Unit (VDU) operators at Alcatel Denmark AS. The operators reported significantly improved lighting and glare conditions. Further, visual discomfort was also significantly reduced on a Visual Analogue Scale (VAS). There was no significant correlation between lighting condition and visual discomfort either in the small offices or in the office landscape. However, visual discomfort correlated significantly with glare in small offices, i.e. more glare was related to more visual discomfort. This correlation disappeared after the lighting system in the office landscape had been improved. There was also a significant correlation between glare and itching of the eyes as well as blurred vision in the small offices, i.e. more glare, more visual symptoms. Experience of pain was found to reduce the subjective assessment of work capacity during VDU tasks. There was a significant correlation between visual discomfort and reduced work capacity in small offices and in the office landscape. When moving from the small offices to the office landscape, there was a significant reduction in headache as well as back pain. No significant changes in pain intensity in the neck, shoulder, forearm, and wrist/hand were observed. The pain levels in different body areas were significantly correlated with subjective assessment of reduced work capacity in small offices and in the office landscape. By careful design and construction of an office landscape with regard to lighting and visual conditions, transfer from small offices may be acceptable from a visual-ergonomic point of view. Copyright © 2011 Elsevier Ltd and The Ergonomics Society. All rights reserved.
An Investigation of Soft Proof to Print Agreement under Bright Surround
NASA Astrophysics Data System (ADS)
Zunjarrao, Vickrant J.
Color quality is a vital concern in the printing industry. The ability of an LCD monitor to accurately and consistently predict the color of a printed work is often in doubt. According to Chung (2005), color reproduction technology is different for soft proofing and hard proofing which could lead a layman to believe that the two technologies may not produce the same result. Nevertheless, it is still possible for both reproduction technologies to achieve a metameric match which gives the same perceived color sensation between display and print. ISO/CD 14681 provides guidelines for creating the conditions required to perform soft proofing. This standard builds on ISO 12646 requirements for monitors and introduces a new softproofing environment (lightbooth with integrated monitor) to better meet the needs of industrial users. The ISO 14681 integrated viewing environment removes one important obstacle to achieving print to softproof match, i.e., the problem of simultaneous color contrast inherent in using a dim monitor surround with a bright paper viewing condition for soft proofing. Thus, the first objective of this research was to assess print to softproof visual match in the ISO 14681 integrated viewing environment. Nevertheless, even in this environment, inconsistency between paper white and monitor white remains as the next major obstacle to achieving consistent print to softproof match. Thus, a second objective of this research is to develop a methodology for matching the monitor's white point to the white point of the paper viewed in an ISO 14681 integrated viewing environment. The methodology for fulfilling these objectives began with the creation of the hardware/software environment required to support experimentation. This environment consisted of a 24-inch EIZO CG242W display conforming to ISO 12646 and an integrated viewing environment conforming to the P2 specification in ISO 3664:2009. 
Two ISO 12647-2 conforming press sheets were prepared and became the reference for the experiment. The researcher next developed a methodology for matching the monitor white point to the white point of the paper under the P2 viewing condition. Finally, a panel of observers was used to compare print to softproof match for four display conditions in a paired comparison experiment. The results of the experiment were highly encouraging. The mismatch between monitor and paper white points, as measured by the sum of the differences in R, G, and B counts between the monitor and the paper, was reduced by nearly 90%. In addition, the paired comparison experiment demonstrated that the use of a custom monitor white point and optimized monitor gamma outperformed the use of standard D65 and D50 white points with the same optimized gamma at the .05 level of significance.
Postural and Spatial Orientation Driven by Virtual Reality
Keshner, Emily A.; Kenyon, Robert V.
2009-01-01
Orientation in space is a perceptual variable intimately related to postural orientation that relies on visual and vestibular signals to correctly identify our position relative to vertical. We have combined a virtual environment with motion of a posture platform to produce visual-vestibular conditions that allow us to explore how motion of the visual environment may affect perception of vertical and, consequently, affect postural stabilizing responses. In order to involve a higher level perceptual process, we needed to create a visual environment that was immersive. We did this by developing visual scenes that possess contextual information using color, texture, and 3-dimensional structures. Update latency of the visual scene was close to physiological latencies of the vestibulo-ocular reflex. Using this system we found that even when healthy young adults stand and walk on a stable support surface, they are unable to ignore wide field of view visual motion and they adapt their postural orientation to the parameters of the visual motion. Balance training within our environment elicited measurable rehabilitation outcomes. Thus we believe that virtual environments can serve as a clinical tool for evaluation and training of movement in situations that closely reflect conditions found in the physical world. PMID:19592796
Haptograph Representation of Real-World Haptic Information by Wideband Force Control
NASA Astrophysics Data System (ADS)
Katsura, Seiichiro; Irie, Kouhei; Ohishi, Kiyoshi
Artificial acquisition and reproduction of human sensations are basic technologies of communication engineering. For example, auditory information is obtained by a microphone, and a speaker reproduces it by artificial means. Furthermore, a video camera and a television make it possible to transmit visual sensation by broadcasting. In contrast, since tactile or haptic information is subject to Newton's “law of action and reaction” in the real world, a device which acquires, transmits, and reproduces this information has not been established. From this point of view, real-world haptics is the key technology for future haptic communication engineering. This paper proposes a novel acquisition method of haptic information named the “haptograph”. The haptograph visualizes haptic information in the manner of a photograph. The proposed haptograph is applied to haptic recognition of the contact environment. A linear motor contacts the surface of the environment and its reaction force is used to make a haptograph. Robust contact motion and sensorless sensing of the reaction force are attained by using a disturbance observer. As a result, an encyclopedia of contact environments is attained. Since temporal and spatial analyses are conducted to represent haptic information as the haptograph, it can be recognized and evaluated intuitively.
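The disturbance-observer idea, estimating the reaction force without a dedicated force sensor, can be sketched with a toy first-order observer in discrete time. This is illustrative only; the mass, cutoff frequency, and forces below are assumed, not taken from the paper:

```python
# Toy reaction-force observer sketch (all parameters assumed).
# A disturbance observer estimates the external force as the
# low-pass-filtered difference between applied force and mass*acceleration.
g = 50.0     # observer cutoff [rad/s] (assumed)
dt = 1e-3    # sampling period [s]
M = 1.0      # motor mass [kg] (assumed)

def dob_step(dhat, u, a):
    """One Euler step of a first-order disturbance observer."""
    return dhat + dt * g * ((u - M * a) - dhat)

# Constant external (reaction) force of 2 N: with u = 5 N applied,
# the measured acceleration is a = (u - 2) / M = 3 m/s^2.
dhat = 0.0
for _ in range(1000):
    dhat = dob_step(dhat, 5.0, 3.0)
# dhat converges toward the 2 N reaction force
```

The paper's observer operates on the real motor in closed loop; the point of the sketch is only the filtered force-balance estimate.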
NASA Astrophysics Data System (ADS)
Thébault, Cédric; Doyen, Didier; Routhier, Pierre; Borel, Thierry
2013-03-01
To ensure an immersive, yet comfortable experience, significant work is required during post-production to adapt the stereoscopic 3D (S3D) content to the targeted display and its environment. On the one hand, the content needs to be reconverged using horizontal image translation (HIT) so as to harmonize the depth across the shots. On the other hand, to prevent edge violation, specific re-convergence is required and, depending on the viewing conditions, floating windows need to be positioned. In order to simplify this time-consuming work we propose a depth grading tool that automatically adapts S3D content to digital cinema or home viewing environments. Based on a disparity map, a stereo point of interest in each shot is automatically evaluated. This point of interest is used for depth matching, i.e. to position the objects of interest of consecutive shots in the same plane so as to reduce visual fatigue. The tool adapts the re-convergence to avoid edge violation, hyper-convergence and hyper-divergence. Floating windows are also automatically positioned. The method has been tested on various types of S3D content, and the results have been validated by a stereographer.
NASA Technical Reports Server (NTRS)
Nguyen, Lac; Kenney, Patrick J.
1993-01-01
Development of interactive virtual environments (VE) has typically consisted of three primary activities: model (object) development, model relationship tree development, and environment behavior definition and coding. The model and relationship tree development activities are accomplished with a variety of well-established graphic library (GL) based programs - most utilizing graphical user interfaces (GUI) with point-and-click interactions. Because of this GUI format, little programming expertise on the part of the developer is necessary to create the 3D graphical models or to establish interrelationships between the models. However, the third VE development activity, environment behavior definition and coding, has generally required the greatest amount of time and programmer expertise. Behaviors, characteristics, and interactions between objects and the user within a VE must be defined via command line C coding prior to rendering the environment scenes. In an effort to simplify this environment behavior definition phase for non-programmers, and to provide easy access to model and tree tools, a graphical interface and development tool has been created. The principal thrust of this research is to effect rapid development and prototyping of virtual environments. This presentation will discuss the 'Visual Interface for Virtual Interaction Development' (VIVID) tool; an X-Windows based system employing drop-down menus for user selection of program access, models and trees, behavior editing, and code generation. Examples of these selections will be highlighted in this presentation, as will the currently available program interfaces. The functionality of this tool allows non-programming users access to all facets of VE development while providing experienced programmers with a collection of pre-coded behaviors. In conjunction with its existing interfaces and predefined suite of behaviors, future development plans for VIVID will be described. 
These include incorporation of dual user virtual environment enhancements, tool expansion, and additional behaviors.
Influence of moving visual environment on sit-to-stand kinematics in children and adults.
Slaboda, Jill C; Barton, Joseph E; Keshner, Emily A
2009-08-01
The effect of visual field motion on the sit-to-stand kinematics of adults and children was investigated. Children (8 to 12 years of age) and adults (21 to 49 years of age) were seated in a virtual environment that rotated in the pitch and roll directions. Participants stood up either (1) concurrent with onset of visual motion or (2) after an immersion period in the moving visual environment, and (3) without visual input. Angular velocities of the head with respect to the trunk, and of the trunk with respect to the environment, were calculated, as were the centers of mass of the head and trunk. Both adults and children reduced head and trunk angular velocity after immersion in the moving visual environment. Unlike adults, children demonstrated significant differences in displacement of the head center of mass during the immersion and concurrent trials when compared to trials without visual input. Results suggest a time-dependent effect of vision on sit-to-stand kinematics in adults, whereas children are influenced by the immediate presence or absence of vision.
A Three-Dimensional Variational Assimilation Scheme for Satellite AOD
NASA Astrophysics Data System (ADS)
Liang, Y.; Zang, Z.; You, W.
2018-04-01
A three-dimensional variational data assimilation scheme is designed for satellite AOD based on the IMPROVE (Interagency Monitoring of Protected Visual Environments) equation. The observation operator that simulates AOD from the control variables is established by the IMPROVE equation. All 16 control variables in the assimilation scheme are the mass concentrations of aerosol species from the Model for Simulating Aerosol Interactions and Chemistry scheme, so as to take advantage of this scheme in providing comprehensive analyses of species concentrations and size distributions while remaining computationally efficient. The assimilation scheme saves computational resources because the IMPROVE equation is quadratic. A single-point observation experiment shows that the information from the single-point AOD is effectively spread horizontally and vertically.
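The variational machinery can be illustrated at a single grid point with one control variable and one AOD observation (all coefficients below are invented; the real scheme uses 16 species concentrations and the full IMPROVE relation). The cost balances departure from the background against misfit to the observed AOD through a quadratic operator:

```python
# Toy scalar 3D-Var sketch: one control variable x (a species concentration)
# and one AOD observation y, with an assumed quadratic observation operator H.

def H(x):
    """Hypothetical quadratic AOD operator, loosely IMPROVE-like."""
    return 0.003 * x + 1e-5 * x * x

def J(x, xb=20.0, B=25.0, y=0.10, R=1e-4):
    """Background term plus observation term of the 3D-Var cost."""
    return (x - xb) ** 2 / (2 * B) + (H(x) - y) ** 2 / (2 * R)

# Minimize by brute-force scan; adequate for a one-variable sketch.
xs = [i / 100 for i in range(10001)]
x_a = min(xs, key=J)   # analysis value, pulled from xb toward the observation
```

Because H is quadratic in the control variables, the cost stays a low-order polynomial, which is one reason such a scheme is cheap to minimize.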
Indoor Photogrammetry Aided with UWB Navigation
NASA Astrophysics Data System (ADS)
Masiero, A.; Fissore, F.; Guarnieri, A.; Vettore, A.
2018-05-01
The subject of photogrammetric surveying with mobile devices, in particular smartphones, is becoming of significant interest in the research community. Nowadays, the process of producing 3D point clouds with photogrammetric procedures is well known. However, external information is still typically needed in order to move from the point cloud obtained from images to a 3D metric reconstruction. This paper investigates the integration of information provided by a UWB positioning system with visual-based reconstruction to produce a metric reconstruction. Furthermore, the orientation (with respect to North-East directions) of the obtained model is assessed thanks to the use of inertial sensors included in the considered UWB devices. Results of this integration are shown on two case studies in indoor environments.
Pivots for Pointing: Visually-Monitored Pointing Has Higher Arm Elevations than Pointing Blindfolded
ERIC Educational Resources Information Center
Wnuczko, Marta; Kennedy, John M.
2011-01-01
Observers pointing to a target viewed directly may elevate their fingertip close to the line of sight. However, pointing blindfolded, after viewing the target, they may pivot lower, from the shoulder, aligning the arm with the target as if reaching to the target. Indeed, in Experiment 1 participants elevated their arms more in visually monitored…
The contribution of dynamic visual cues to audiovisual speech perception.
Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador
2015-08-01
Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this aim, we measured word identification performance in noise using unimodal auditory stimuli, and with audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point light displays achieved via motion capture of the original talker. Point light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech. Copyright © 2015 Elsevier Ltd. All rights reserved.
Reaching nearby sources: comparison between real and virtual sound and visual targets
Parseihian, Gaëtan; Jouffrais, Christophe; Katz, Brian F. G.
2014-01-01
Sound localization studies over the past century have predominantly been concerned with directional accuracy for far-field sources. Few studies have examined the condition of near-field sources and distance perception. The current study concerns localization and pointing accuracy by examining source positions in the peripersonal space, specifically those associated with a typical tabletop surface. Accuracy is studied with respect to the reporting hand (dominant or secondary) for auditory sources. Results show no effect on the reporting hand with azimuthal errors increasing equally for the most extreme source positions. Distance errors show a consistent compression toward the center of the reporting area. A second evaluation is carried out comparing auditory and visual stimuli to examine any bias in reporting protocol or biomechanical difficulties. No common bias error was observed between auditory and visual stimuli indicating that reporting errors were not due to biomechanical limitations in the pointing task. A final evaluation compares real auditory sources and anechoic condition virtual sources created using binaural rendering. Results showed increased azimuthal errors, with virtual source positions being consistently overestimated to more lateral positions, while no significant distance perception was observed, indicating a deficiency in the binaural rendering condition relative to the real stimuli situation. Various potential reasons for this discrepancy are discussed with several proposals for improving distance perception in peripersonal virtual environments. PMID:25228855
Headphone and Head-Mounted Visual Displays for Virtual Environments
NASA Technical Reports Server (NTRS)
Begault, Duran R.; Ellis, Stephen R.; Wenzel, Elizabeth M.; Trejo, Leonard J. (Technical Monitor)
1998-01-01
A realistic auditory environment can contribute to both the overall subjective sense of presence in a virtual display, and to a quantitative metric predicting human performance. Here, the role of audio in a virtual display and the importance of auditory-visual interaction are examined. Conjectures are proposed regarding the effectiveness of audio compared to visual information for creating a sensation of immersion, the frame of reference within a virtual display, and the compensation of visual fidelity by supplying auditory information. Future areas of research are outlined for improving simulations of virtual visual and acoustic spaces. This paper will describe some of the intersensory phenomena that arise during operator interaction within combined visual and auditory virtual environments. Conjectures regarding audio-visual interaction will be proposed.
Information Visualization in Virtual Environments
NASA Technical Reports Server (NTRS)
Bryson, Steve; Kwak, Dochan (Technical Monitor)
2001-01-01
Virtual Environments provide a natural setting for a wide range of information visualization applications, particularly when the information to be visualized is defined on a three-dimensional domain (Bryson, 1996). This chapter provides an overview of the issues that arise when designing and implementing an information visualization application in a virtual environment. Many design issues that arise, such as display and user tracking, are common to any application of virtual environments. In this chapter we focus on those issues that are special to information visualization applications, as issues of wider concern are addressed elsewhere in this book.
Multimodal Microchannel and Nanowell-Based Microfluidic Platforms for Bioimaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geng, Tao; Smallwood, Chuck R.; Zhu, Ying
2017-03-30
Modern live-cell imaging approaches permit real-time visualization of biological processes. However, limitations for unicellular organism trapping, culturing and long-term imaging can preclude complete understanding of how such microorganisms respond to perturbations in their local environment or linking single-cell variability to whole population dynamics. We have developed microfluidic platforms to overcome prior technical bottlenecks to allow both chemostat and compartmentalized cellular growth conditions using the same device. Additionally, a nanowell-based platform enables a high throughput approach to scale up compartmentalized imaging optimized within the microfluidic device. These channel and nanowell platforms are complementary, and both provide fine control over the local environment as well as the ability to add/replace media components at any experimental time point.
NASA Astrophysics Data System (ADS)
Zhou, Q.; Tong, X.; Liu, S.; Lu, X.; Liu, S.; Chen, P.; Jin, Y.; Xie, H.
2017-07-01
Visual Odometry (VO) is a critical component for planetary robot navigation and safety. It estimates the ego-motion using stereo images frame by frame. Feature point extraction and matching is one of the key steps for robotic motion estimation, and it largely influences precision and robustness. In this work, we choose the Oriented FAST and Rotated BRIEF (ORB) features by considering both accuracy and speed. For more robustness in challenging environments, e.g., rough terrain or planetary surfaces, this paper presents a robust outlier elimination method based on a Euclidean Distance Constraint (EDC) and the Random Sample Consensus (RANSAC) algorithm. In the matching process, a set of ORB feature points is extracted from the current left and right synchronous images, and the Brute Force (BF) matcher is used to find the correspondences between the two images for the Space Intersection. Then the EDC and RANSAC algorithms are carried out to eliminate mismatches whose distances are beyond a predefined threshold. Similarly, when the left image at the next time step is matched against the current left image, the EDC and RANSAC steps are performed iteratively. Even after these steps, a few mismatched points occasionally remain, so RANSAC is applied a third time to eliminate the effect of those outliers on the estimation of the ego-motion parameters (Interior Orientation and Exterior Orientation). The proposed approach has been tested on a real-world vehicle dataset and the results demonstrate its high robustness.
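The outlier-elimination step can be illustrated with a pure-Python toy (not the paper's implementation; the match coordinates and threshold are invented). A RANSAC loop hypothesizes a transform from one sampled match, here simplified to a pure 2D translation, and keeps only the matches whose residual satisfies a Euclidean distance constraint:

```python
# Toy RANSAC outlier rejection for feature matches (illustrative only).
# Each pair is ((x1, y1), (x2, y2)): a point and its matched point.
import random

def ransac_translation(pairs, threshold=2.0, iters=200, seed=0):
    """Estimate a 2D translation; return matches consistent with it."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.choice(pairs)   # hypothesize from one match
        dx, dy = x2 - x1, y2 - y1
        inliers = [p for p in pairs
                   if ((p[1][0] - p[0][0] - dx) ** 2 +
                       (p[1][1] - p[0][1] - dy) ** 2) ** 0.5 <= threshold]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

pairs = [((i, i), (i + 5.0, i + 3.0)) for i in range(20)]          # true shift (5, 3)
pairs += [((0.0, 0.0), (40.0, -7.0)), ((1.0, 2.0), (-30.0, 9.0))]  # gross mismatches
inliers = ransac_translation(pairs)
# the two gross mismatches are rejected; the 20 consistent matches remain
```

Repeating the same rejection when matching across time steps, as the paper does, progressively removes mismatches before the ego-motion estimation.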
Audio-visual interactions in environment assessment.
Preis, Anna; Kociński, Jędrzej; Hafke-Dys, Honorata; Wrzosek, Małgorzata
2015-08-01
The aim of the study was to examine how visual and audio information influence audio-visual environment assessment. Original audio-visual recordings were made at seven different places in the city of Poznań. Participants in the psychophysical experiments were asked to rate, on a standardized numerical scale, the degree of comfort they would feel in such an environment. The assessments of audio-visual comfort were carried out in a laboratory under four different conditions: (a) audio samples only, (b) original audio-visual samples, (c) video samples only, and (d) mixed audio-visual samples. The general results showed a significant difference between the investigated conditions, but not for all the investigated samples. Comparing conditions (a) and (b), there was a significant improvement in comfort assessment when visual information was added, but only in three out of seven cases. On the other hand, the results show that the comfort assessment of audio-visual samples could be changed by manipulating the audio rather than the video part of the sample. Finally, it seems that people differentiate audio-visual representations of a given place in the environment based on the composition of sound sources rather than on the sound level. Object identification is responsible for both landscape and soundscape grouping. Copyright © 2015. Published by Elsevier B.V.
Comparison of User Performance with Interactive and Static 3d Visualization - Pilot Study
NASA Astrophysics Data System (ADS)
Herman, L.; Stachoň, Z.
2016-06-01
Interactive 3D visualizations of spatial data are now widely available and popular through applications such as Google Earth, ArcScene, etc. Several scientific studies have focused on user performance with 3D visualization, but most use static perspective views as stimuli. The main objective of this paper is to identify potential differences in user performance between static perspective views and interactive visualizations. This research is an exploratory study. The experiment was designed as a between-subject study, and a customized testing tool based on open web technologies was used. The testing set consists of an initial questionnaire, a training task and four experimental tasks: selection of the highest point and determination of visibility from the top of a mountain. The speed and accuracy of each participant's task performance were recorded; in the interactive variant, movement and actions in the virtual environment were recorded as well. The results show that participants dealt with the tasks faster when using static visualization, but the average error rate was also higher in the static variant. The findings from this pilot study will be used for further testing, especially for formulating hypotheses and designing subsequent experiments.
Butson, Christopher R.; Tamm, Georg; Jain, Sanket; Fogal, Thomas; Krüger, Jens
2012-01-01
In recent years there has been significant growth in the use of patient-specific models to predict the effects of neuromodulation therapies such as deep brain stimulation (DBS). However, translating these models from a research environment to the everyday clinical workflow has been a challenge, primarily due to the complexity of the models and the expertise required in specialized visualization software. In this paper, we deploy the interactive visualization system ImageVis3D Mobile, which has been designed for mobile computing devices such as the iPhone or iPad, in an evaluation environment to visualize models of Parkinson’s disease patients who received DBS therapy. Selection of DBS settings is a significant clinical challenge that requires repeated revisions to achieve optimal therapeutic response, and is often performed without any visual representation of the stimulation system in the patient. We used ImageVis3D Mobile to provide models to movement disorders clinicians and asked them to use the software to determine: 1) which of the four DBS electrode contacts they would select for therapy; and 2) what stimulation settings they would choose. We compared the stimulation protocol chosen from the software versus the stimulation protocol that was chosen via clinical practice (independently of the study). Lastly, we compared the amount of time required to reach these settings using the software versus the time required through standard practice. We found that the stimulation settings chosen using ImageVis3D Mobile were similar to those used in standard of care, but were selected in drastically less time. We show how our visualization system, available directly at the point of care on a device familiar to the clinician, can be used to guide clinical decision making for selection of DBS settings. In our view, the positive impact of the system could also translate to areas other than DBS. PMID:22450824
Using Data Visualization to Examine an Academic Library Collection
ERIC Educational Resources Information Center
Finch, Jannette L.; Flenner, Angela R.
2016-01-01
The authors generated data visualizations to compare sections of the library book collection, expenditures in those areas, student enrollment in majors and minors, and number of courses. The visualizations resulting from the entered data provide an excellent starting point for conversations about possible imbalances in the collection and point to…
Ma, Wei Ji; Zhou, Xiang; Ross, Lars A; Foxe, John J; Parra, Lucas C
2009-01-01
Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicting findings. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
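The reliability-weighting idea behind such Bayesian cue-integration models can be sketched in a few lines. This is the standard Gaussian cue-fusion rule, not the paper's full word-recognition model (which operates over a multidimensional word space); the numbers are illustrative.

```python
def fuse_gaussian_cues(mu_a, var_a, mu_v, var_v):
    """Precision-weighted fusion of an auditory and a visual estimate.
    Under independent Gaussian noise, the statistically optimal combined
    estimate weights each cue by its inverse variance (its reliability),
    and the fused variance is smaller than either cue's alone."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    mu = w_a * mu_a + (1.0 - w_a) * mu_v
    var = 1.0 / (1.0 / var_a + 1.0 / var_v)
    return mu, var

# noisy audio (variance 4) is down-weighted relative to clean video
mu, var = fuse_gaussian_cues(mu_a=0.0, var_a=4.0, mu_v=1.0, var_v=1.0)
```

Here the fused estimate sits four times closer to the reliable visual cue than to the noisy auditory one, which is the sense in which visual information helps most when audition is degraded but not destroyed.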
Guidance of retinal axons in mammals.
Herrera, Eloísa; Erskine, Lynda; Morenilla-Palao, Cruz
2017-11-26
In order to navigate through the surrounding environment many mammals, including humans, primarily rely on vision. The eye, composed of the choroid, sclera, retinal pigmented epithelium, cornea, lens, iris and retina, is the structure that receives light and converts it into electrical impulses. The retina contains six major types of neurons involved in receiving and modifying visual information and passing it on to higher visual processing centres in the brain. Visual information is relayed to the brain via the axons of retinal ganglion cells (RGCs), a projection known as the optic pathway. The proper formation of this pathway during development is essential for normal vision in the adult. Along this pathway there are several points where visual axons face 'choices' in their direction of growth. Understanding how these choices are made has significantly advanced our knowledge of axon guidance mechanisms. Thus, the development of the visual pathway has served as an extremely useful model for revealing general principles of axon pathfinding throughout the nervous system. However, due to its particularities, some cellular and molecular mechanisms are specific to the visual circuit. Here we review both general and specific mechanisms involved in the guidance of mammalian RGC axons as they travel from the retina to the brain to establish the precise and stereotyped connections that sustain vision. Copyright © 2017 Elsevier Ltd. All rights reserved.
Realistic terrain visualization based on 3D virtual world technology
NASA Astrophysics Data System (ADS)
Huang, Fengru; Lin, Hui; Chen, Bin; Xiao, Cai
2009-09-01
The rapid advances in information technologies, e.g., networking, graphics processing, and virtual worlds, have provided challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic environments, especially geographic visualization and collaboration. To achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments that help engage geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographic visualization, computer simulation, and virtual geographic environment applications. This paper introduces the concepts and technologies of virtual worlds and virtual geographic environments, and explores the integration of realistic terrain with other geographic objects and phenomena of the natural geographic environment based on SL/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation for constructing a mirror world or a sandbox model of the earth's landscape and geographic environment. The capabilities of interaction and collaboration on geographic information are discussed as well. Further virtual geographic applications can be developed on the foundation of realistic terrain visualization in virtual environments.
Scientific Visualization in High Speed Network Environments
NASA Technical Reports Server (NTRS)
Vaziri, Arsi; Kutler, Paul (Technical Monitor)
1997-01-01
In several cases, new visualization techniques have vastly increased researchers' ability to analyze and comprehend data. Similarly, the role of networks in providing an efficient supercomputing environment has become more critical and continues to grow at a faster rate than the processing capabilities of supercomputers. A close relationship between scientific visualization and high-speed networks is identified as an important link in supporting efficient supercomputing. The two technologies are driven by the increasing complexity and volume of supercomputer data. The interaction of scientific visualization and high-speed networks in a Computational Fluid Dynamics simulation/visualization environment is described. Current capabilities supported by high-speed networks, supercomputers, and high-performance graphics workstations at the Numerical Aerodynamic Simulation (NAS) Facility at NASA Ames Research Center are described. Applied research on providing a supercomputer visualization environment to support future computational requirements is summarized.
Combined Influence of Visual Scene and Body Tilt on Arm Pointing Movements: Gravity Matters!
Scotto Di Cesare, Cécile; Sarlegna, Fabrice R.; Bourdin, Christophe; Mestre, Daniel R.; Bringoux, Lionel
2014-01-01
Performing accurate actions such as goal-directed arm movements requires taking into account visual and body orientation cues to localize the target in space and produce appropriate reaching motor commands. We experimentally tilted the body and/or the visual scene to investigate how visual and body orientation cues are combined for the control of unseen arm movements. Subjects were asked to point toward a visual target using an upward movement during slow body and/or visual scene tilts. When the scene was tilted, final pointing errors varied as a function of the direction of the scene tilt (forward or backward). Actual forward body tilt resulted in systematic target undershoots, suggesting that the brain may have overcompensated for the biomechanical movement facilitation arising from body tilt. Combined body and visual scene tilts also affected final pointing errors according to the orientation of the visual scene. The data were further analysed using either a body-centered or a gravity-centered reference frame to encode visual scene orientation with simple additive models (i.e., ‘combined’ tilts equal to the sum of ‘single’ tilts). We found that the body-centered model could account only for some of the data regarding kinematic parameters and final errors. In contrast, the gravity-centered modeling in which the body and visual scene orientations were referred to vertical could explain all of these data. Therefore, our findings suggest that the brain uses gravity, thanks to its invariant properties, as a reference for the combination of visual and non-visual cues. PMID:24925371
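The comparison of the two additive models can be sketched as follows. The gain values and the linear form are hypothetical placeholders, not the fitted parameters from the study; the point is only the structural difference between encoding scene orientation relative to the body versus relative to gravity.

```python
def scene_effect(scene_tilt_deg, gain=0.3):
    # error contributed by a tilted visual scene (linear approximation;
    # gain is an illustrative placeholder)
    return gain * scene_tilt_deg

def body_effect(body_tilt_deg, gain=-0.2):
    # error contributed by body tilt alone (negative gain models the
    # observed undershoot for forward body tilt)
    return gain * body_tilt_deg

def predict_error(body_tilt, scene_tilt_in_room, frame="gravity"):
    """Additive model: 'combined' tilt error = sum of 'single' tilt
    effects, with scene orientation encoded in one of two frames."""
    if frame == "body":
        # body-centered: what matters is scene tilt relative to the body
        scene_rel = scene_tilt_in_room - body_tilt
    else:
        # gravity-centered: the room-fixed (gravity-referenced) tilt is used
        scene_rel = scene_tilt_in_room
    return body_effect(body_tilt) + scene_effect(scene_rel)

# combined condition: body and scene both tilted 10 deg forward --
# the two frames make different predictions for the pointing error
e_body = predict_error(10.0, 10.0, frame="body")      # scene_rel = 0
e_grav = predict_error(10.0, 10.0, frame="gravity")   # scene_rel = 10
```

Fitting both variants to the combined-tilt data and seeing which reproduces the single-tilt sums is the logic by which the study concluded that the gravity-centered frame accounts for the results.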
Warrant, Eric J; Locket, N Adam
2004-08-01
The deep sea is the largest habitat on earth. Its three great faunal environments--the twilight mesopelagic zone, the dark bathypelagic zone and the vast flat expanses of the benthic habitat--are home to a rich fauna of vertebrates and invertebrates. In the mesopelagic zone (150-1000 m), the down-welling daylight creates an extended scene that becomes increasingly dimmer and bluer with depth. The available daylight also originates increasingly from vertically above, and bioluminescent point-source flashes, well contrasted against the dim background daylight, become increasingly visible. In the bathypelagic zone below 1000 m no daylight remains, and the scene becomes entirely dominated by point-like bioluminescence. This changing nature of visual scenes with depth--from extended source to point source--has had a profound effect on the designs of deep-sea eyes, both optically and neurally, a fact that until recently was not fully appreciated. Recent measurements of the sensitivity and spatial resolution of deep-sea eyes--particularly from the camera eyes of fishes and cephalopods and the compound eyes of crustaceans--reveal that ocular designs are well matched to the nature of the visual scene at any given depth. This match between eye design and visual scene is the subject of this review. The greatest variation in eye design is found in the mesopelagic zone, where dim down-welling daylight and bio-luminescent point sources may be visible simultaneously. Some mesopelagic eyes rely on spatial and temporal summation to increase sensitivity to a dim extended scene, while others sacrifice this sensitivity to localise pinpoints of bright bioluminescence. Yet other eyes have retinal regions separately specialised for each type of light. In the bathypelagic zone, eyes generally get smaller and therefore less sensitive to point sources with increasing depth. 
In fishes, this insensitivity, combined with surprisingly high spatial resolution, is very well adapted to the detection and localisation of point-source bioluminescence at ecologically meaningful distances. At all depths, the eyes of animals active on and over the nutrient-rich sea floor are generally larger than the eyes of pelagic species. In fishes, the retinal ganglion cells are also frequently arranged in a horizontal visual streak, an adaptation for viewing the wide flat horizon of the sea floor, and all animals living there. These and many other aspects of light and vision in the deep sea are reviewed in support of the following conclusion: it is not only the intensity of light at different depths, but also its distribution in space, which has been a major force in the evolution of deep-sea vision.
Associative visual learning by tethered bees in a controlled visual environment.
Buatois, Alexis; Pichot, Cécile; Schultheiss, Patrick; Sandoz, Jean-Christophe; Lazzari, Claudio R; Chittka, Lars; Avarguès-Weber, Aurore; Giurfa, Martin
2017-10-10
Free-flying honeybees exhibit remarkable cognitive capacities, but the neural underpinnings of these capacities cannot be studied in flying insects. Conversely, immobilized bees are accessible to neurobiological investigation but display poor visual learning. To overcome this limitation, we aimed to establish a controlled visual environment in which tethered bees walking on a spherical treadmill learn to discriminate visual stimuli video-projected in front of them. Freely flying bees trained to walk into a miniature Y-maze displaying these stimuli in a dark environment learned the visual discrimination efficiently when one of them (CS+) was paired with sucrose and the other (CS-) with quinine solution. Adapting this discrimination to the treadmill paradigm with a tethered, walking bee was successful, as bees exhibited robust discrimination and preferred the CS+ to the CS- after training. As learning was better in the maze, movement freedom, active vision and behavioral context might be important for visual learning. The nature of the punishment associated with the CS- also affects learning, as quinine and distilled water enhanced the proportion of learners. Thus, visual learning is amenable to a controlled environment in which tethered bees learn visual stimuli, a result that is important for future neurobiological studies in virtual reality.
Ocular cells and light: harmony or conflict?
Jurja, Sanda; Hîncu, Mihaela; Dobrescu, Mihaela Amelia; Golu, Andreea Elena; Bălăşoiu, Andrei Theodor; Coman, Mălina
2014-01-01
Vision is based on the sensitivity of the eye to the visible rays of the solar spectrum, which allows the recording and transfer of visual information by photoelectric reaction. Any electromagnetic radiation, if sufficiently intense, may damage living tissues. In a changing environment, the aim of this paper is to point out the impact of light radiation on ocular cells, with its phototoxic potential for eye tissues. In fact, faced with light and oxygen, the eye behaves like an ephemeral aggregate of unstable molecules, like a temporary crystallization threatened by entropy.
Enhancing mHealth Technology in the PCMH Environment to Activate Chronic Care Patients
2016-09-01
Appendices: abstract for AMSUS Poster #1; abstract for AMSUS Poster #2; PowerPoint sample slides from the mCare product. Phase II research will address design and process requirements (e.g., interface with wireless communication providers, visualization capabilities and options, data analytic structure).
Murakoshi, Takuma; Masuda, Tomohiro; Utsumi, Ken; Tsubota, Kazuo; Wada, Yuji
2013-01-01
Previous studies have reported effects of luminance-distribution statistics on visual freshness perception, using pictures that captured the degradation process of food samples. However, these studies did not examine the effect of individual differences between items of the same kind of food. Here we elucidate whether luminance distribution continues to have a significant effect on visual freshness perception even when visual stimuli include individual differences in addition to the degradation process. We took pictures of the degradation of three fishes over 3.29 hours in a controlled environment, then cropped square patches of their eyes from the original images as visual stimuli. Eleven participants performed paired-comparison tests judging the visual freshness of the fish eyes at three points of degradation. Perceived freshness scores (PFS) were calculated for each image using the Bradley-Terry model. An ANOVA revealed that the PFS for each fish decreased as degradation time increased; however, the differences in PFS between individual fish were larger at the shorter degradation time and smaller at the longer degradation time. A multiple linear regression analysis was conducted to determine the relative importance of the luminance-distribution statistics of the stimulus images in predicting PFS. The results show that the standard deviation and skewness of the luminance distribution have a significant influence on PFS. Thus, even when foodstuffs show individual differences, visual freshness perception and changes in luminance distribution correlate with degradation time.
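The two significant predictors, standard deviation and skewness of the luminance distribution, are straightforward to compute. A minimal sketch follows, with synthetic patches standing in for the fish-eye photographs; modeling degradation as a loss of bright specular highlights is an illustrative assumption, not the paper's data.

```python
import numpy as np

def luminance_stats(img):
    """Standard deviation and skewness of an image's luminance
    distribution -- the two statistics found to predict PFS."""
    x = np.asarray(img, dtype=float).ravel()
    mu, sd = x.mean(), x.std()
    skew = ((x - mu) ** 3).mean() / sd ** 3
    return sd, skew

# synthetic stand-ins for eye patches: the 'fresh' patch gets sparse
# bright highlights, which raise both the spread and the positive skew
rng = np.random.default_rng(0)
fresh = np.clip(rng.normal(90, 25, (64, 64))
                + 120 * (rng.random((64, 64)) > 0.97), 0, 255)
degraded = np.clip(rng.normal(90, 12, (64, 64)), 0, 255)

sd_fresh, skew_fresh = luminance_stats(fresh)
sd_degraded, skew_degraded = luminance_stats(degraded)
```

With stimuli like these, both statistics fall as the simulated highlights disappear, which is the direction of effect a regression on PFS could then pick up.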
NASA Technical Reports Server (NTRS)
DiZio, P.; Lackner, J. R.
2000-01-01
Reaching movements made to visual targets in a rotating room are initially deviated in path and endpoint in the direction of transient Coriolis forces generated by the motion of the arm relative to the rotating environment. With additional reaches, movements become progressively straighter and more accurate. Such adaptation can occur even in the absence of visual feedback about movement progression or terminus. Here we examined whether congenitally blind and sighted subjects without visual feedback would demonstrate adaptation to Coriolis forces when they pointed to a haptically specified target location. Subjects were tested pre-, per-, and postrotation at 10 rpm counterclockwise. Reaching to straight ahead targets prerotation, both groups exhibited slightly curved paths. Per-rotation, both groups showed large initial deviations of movement path and curvature but within 12 reaches on average had returned to prerotation curvature levels and endpoints. Postrotation, both groups showed mirror image patterns of curvature and endpoint to the per-rotation pattern. The groups did not differ significantly on any of the performance measures. These results provide compelling evidence that motor adaptation to Coriolis perturbations can be achieved on the basis of proprioceptive, somatosensory, and motor information in the complete absence of visual experience.
Acquisition and Visualization Techniques of Human Motion Using Master-Slave System and Haptograph
NASA Astrophysics Data System (ADS)
Katsura, Seiichiro; Ohishi, Kiyoshi
Artificial acquisition and reproduction of human sensations are basic technologies of communication engineering. For example, auditory information is obtained by a microphone and reproduced by a speaker, and a video camera and a television make it possible to transmit visual sensation by broadcasting. By contrast, since tactile or haptic information is subject to Newton's “law of action and reaction” in the real world, a device which acquires, transmits, and reproduces such information has not been established. From this point of view, real-world haptics is the key technology for future haptic communication engineering. This paper proposes a novel acquisition method for haptic information named the “haptograph”. The haptograph visualizes haptic information much as a photograph records visual information. Since temporal and spatial analyses are conducted to represent haptic information as a haptograph, it can be recognized and evaluated intuitively. In this paper, the proposed haptograph is applied to the visualization of human motion. It can represent motion characteristics such as an expert's skill or a personal habit; in other words, a personal encyclopedia is attained. Once such a personal encyclopedia is stored in a ubiquitous environment, future human support technologies can be developed.
Wnuczko, Marta; Kennedy, John M
2011-10-01
Observers pointing to a target viewed directly may elevate their fingertip close to the line of sight. However, pointing blindfolded, after viewing the target, they may pivot lower, from the shoulder, aligning the arm with the target as if reaching to the target. Indeed, in Experiment 1 participants elevated their arms more in visually monitored than blindfolded pointing. In Experiment 2, pointing to a visible target they elevated a short pointer more than a long one, raising its tip to the line of sight. In Experiment 3, the Experimenter aligned the participant's arm with the target. Participants judged they were pointing below a visually monitored target. In Experiment 4, participants viewing another person pointing, eyes-open or eyes-closed, judged the target was aligned with the pointing arm. In Experiment 5, participants viewed their arm and the target via a mirror and posed their arm so that it was aligned with the target. Arm elevation was higher in pointing directly.
NASA Astrophysics Data System (ADS)
Zlinszky, András; Schroiff, Anke; Otepka, Johannes; Mandlburger, Gottfried; Pfeifer, Norbert
2014-05-01
LIDAR point clouds hold valuable information for land cover and vegetation analysis, not only in the spatial distribution of the points but also in their various attributes. However, LIDAR point clouds are rarely used for visual interpretation, since for most users, the point cloud is difficult to interpret compared to passive optical imagery. Meanwhile, point cloud viewing software is available allowing interactive 3D interpretation, but typically only one attribute at a time. This results in a large number of points with the same colour, crowding the scene and often obscuring detail. We developed a scheme for mapping information from multiple LIDAR point attributes to the Red, Green, and Blue channels of a widely used LIDAR data format, which are otherwise mostly used to add information from imagery to create "photorealistic" point clouds. The possible combinations of parameters are therefore represented in a wide range of colours, but relative differences in individual parameter values of points can be well understood. The visualization was implemented in OPALS software, using a simple and robust batch script, and is viewer independent since the information is stored in the point cloud data file itself. In our case, the following colour channel assignment delivered the best results: echo amplitude in the Red, echo width in the Green and normalized height above a Digital Terrain Model in the Blue channel. With correct parameter scaling (but completely without point classification), points belonging to asphalt and bare soil are dark red, low grassland and crop vegetation are bright red to yellow, shrubs and low trees are green and high trees are blue. Depending on roof material and DTM quality, buildings are shown from red through purple to dark blue. Erroneously high or low points, or points with incorrect amplitude or echo width usually have colours contrasting from terrain or vegetation.
This allows efficient visual interpretation of the point cloud in planar, profile and 3D views since it reduces crowding of the scene and delivers intuitive contextual information. The resulting visualization has proved useful for vegetation analysis for habitat mapping, and can also be applied as a first step for point cloud level classification. An interactive demonstration of the visualization script is shown during poster attendance, including the opportunity to view your own point cloud sample files.
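The described attribute-to-RGB mapping can be sketched as follows. The scaling ranges are illustrative placeholders, since the correct parameter scaling depends on the sensor and scene, and the real implementation runs as an OPALS batch script rather than Python.

```python
import numpy as np

def attributes_to_rgb(amplitude, echo_width, height_above_dtm,
                      amp_range=(0.0, 300.0), width_range=(1.0, 10.0),
                      height_range=(0.0, 30.0)):
    """Map per-point attributes to 8-bit RGB: echo amplitude -> Red,
    echo width -> Green, normalized height above the DTM -> Blue.
    Each attribute is clipped to its range and scaled to 0..255."""
    def scale(v, lo, hi):
        v = np.asarray(v, dtype=float)
        return np.clip((v - lo) / (hi - lo), 0.0, 1.0) * 255.0
    return np.stack([scale(amplitude, *amp_range),
                     scale(echo_width, *width_range),
                     scale(height_above_dtm, *height_range)],
                    axis=-1).astype(np.uint8)

# a grass point (strong, narrow echo near the terrain) maps to red;
# a tree-canopy point (well above the DTM) maps toward blue
rgb = attributes_to_rgb([250.0, 80.0], [2.0, 6.0], [0.2, 25.0])
```

Writing the result into the RGB fields of the point cloud file is what makes the scheme viewer independent, as the abstract notes.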
Virtual Environments in Scientific Visualization
NASA Technical Reports Server (NTRS)
Bryson, Steve; Lisinski, T. A. (Technical Monitor)
1994-01-01
Virtual environment technology is a new way of approaching the interface between computers and humans. By emphasizing display and user control that conform to the user's natural ways of perceiving and thinking about space, virtual environment technologies enhance the ability to perceive and interact with computer-generated graphic information. This enhancement potentially has a major effect on the field of scientific visualization. Current examples of this technology include the Virtual Windtunnel being developed at NASA Ames Research Center. Other major institutions such as the National Center for Supercomputing Applications and SRI International are also exploring this technology. This talk will describe several implementations of virtual environments for use in scientific visualization. Examples include the visualization of unsteady fluid flows (the Virtual Windtunnel), the visualization of geodesics in curved spacetime, surface manipulation, and examples developed at various laboratories.
Optical flow versus retinal flow as sources of information for flight guidance
NASA Technical Reports Server (NTRS)
Cutting, James E.
1991-01-01
The appropriate description of visual information for flight guidance is considered: optical flow versus retinal flow. Most descriptions in the psychological literature are based on optical flow. However, human eyes move, and this movement complicates the issues at stake, particularly when movement of the observer is involved. The question addressed is whether an observer, whose eyes register only retinal flow, can use information in optical flow. It is suggested that the observer cannot and does not reconstruct the image in optical flow; instead, retinal flow is used. The retinal array is defined as the projection of a three-space onto a point and beyond to a movable, nearly hemispheric sensing device, like the retina. The optical array is defined as the projection of a three-space environment to a point within that space. Flow is defined as global motion as a field of vectors, best placed on a spherical projection surface. Specifically, flow is the mapping onto a point of the field of changes in position of corresponding points on objects in three-space, where that point has moved in position.
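The optical/retinal distinction can be made concrete with the classic motion-field equations (after Longuet-Higgins and Prazdny), under one common sign convention. This is an illustrative sketch, not the paper's own formulation: with zero eye rotation the expression gives the optical flow of a stationary eye, and adding a rotation term (e.g. smooth pursuit) gives the retinal flow.

```python
def motion_field(x, y, Z, T, omega):
    """Image-plane velocity at point (x, y) with depth Z, for observer
    translation T = (Tx, Ty, Tz) and eye rotation omega = (wx, wy, wz)
    (pinhole camera, focal length 1). The translational part depends on
    depth; the rotational part does not."""
    Tx, Ty, Tz = T
    wx, wy, wz = omega
    u = (-Tx + x * Tz) / Z + (x * y * wx - (1 + x**2) * wy + y * wz)
    v = (-Ty + y * Tz) / Z + ((1 + y**2) * wx - x * y * wy - x * wz)
    return u, v

# forward translation, stationary eye: optical flow radiates from the
# focus of expansion at the image centre
u_opt, v_opt = motion_field(0.1, 0.0, Z=10.0, T=(0, 0, 1.0), omega=(0, 0, 0))
# same translation plus a small pursuit rotation: the retinal flow at
# the same image point differs from the optical flow
u_ret, v_ret = motion_field(0.1, 0.0, Z=10.0, T=(0, 0, 1.0), omega=(0, 0.01, 0))
```

The rotation term is exactly what eye movements add to the retinal array, which is why the two flow fields diverge whenever the eyes move during self-motion.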
Deyer, T W; Ashton-Miller, J A
1999-09-01
To test the (null) hypotheses that the reliability of unipedal balance is unaffected by the attenuation of visual velocity feedback and that, relative to baseline performance, deterioration of balance success rates from attenuated visual velocity feedback will not differ between groups of young men and older women, and the presence (or absence) of a vertical foreground object will not affect balance success rates. Single blind, single case study. University research laboratory. Two volunteer samples: 26 healthy young men (mean age, 20.0 yrs; SD, 1.6); 23 healthy older women (mean age, 64.9 yrs; SD, 7.8). Normalized success rates in a unipedal balance task. Subjects were asked to transfer to and maintain unipedal stance for 5 seconds in a task near the limit of their balance capabilities. Subjects completed 64 trials: 54 trials of three experimental visual scenes in blocked randomized sequences of 18 trials and 10 trials in a normal visual environment. The experimental scenes included two that provided strong velocity/weak position feedback, one with a vertical foreground object (SVWP+) and one without (SVWP-), and one scene providing weak velocity/strong position (WVSP) feedback. Subjects' success rates in the experimental environments were normalized by the success rate in the normal environment to allow comparisons between subjects using a mixed model repeated measures analysis of variance. The normalized success rate was significantly greater in SVWP+ than in WVSP (p = .0001) and SVWP- (p = .013). Visual feedback significantly affected the normalized unipedal balance success rates (p = .001); neither the group effect nor the group × visual environment interaction was significant (p = .9362 and p = .5634, respectively). Normalized success rates did not differ significantly between the young men and older women in any visual environment.
Near the limit of the young men's or older women's balance capability, the reliability of transfer to unipedal balance was adversely affected by visual environments offering attenuated visual velocity feedback cues and those devoid of vertical foreground objects.
Shoemaker, Ritchie C; House, Dennis E
2005-01-01
The human health risk for chronic illnesses involving multiple body systems following inhalation exposure to the indoor environments of water-damaged buildings (WDBs) has remained poorly characterized and the subject of intense controversy. The current study assessed the hypothesis that exposure to the indoor environments of WDBs with visible microbial colonization was associated with illness. The study used a cross-sectional design with assessments at five time points, and the interventions of cholestyramine (CSM) therapy, exposure avoidance following therapy, and reexposure to the buildings after illness resolution. The methodological approach included oral administration of questionnaires, medical examinations, laboratory analyses, pulmonary function testing, and measurements of visual function. Of the 21 study volunteers, 19 completed assessment at each of the five time points. Data at Time Point 1 indicated multiple symptoms involving at least four organ systems in all study participants, a restrictive respiratory condition in four participants, and abnormally low visual contrast sensitivity (VCS) in 18 participants. Serum leptin levels were abnormally high and alpha melanocyte stimulating hormone (MSH) levels were abnormally low. Assessments at Time Point 2, following 2 weeks of CSM therapy, indicated a highly significant improvement in health status. Improvement was maintained at Time Point 3, which followed exposure avoidance without therapy. Reexposure to the WDBs resulted in illness reacquisition in all participants within 1 to 7 days. Following another round of CSM therapy, assessments at Time Point 5 indicated a highly significant improvement in health status. The group-mean number of symptoms decreased from 14.9+/-0.8 S.E.M. at Time Point 1 to 1.2+/-0.3 S.E.M., and the VCS deficit of approximately 50% at Time Point 1 was fully resolved. Leptin and MSH levels showed statistically significant improvement. 
The results indicated that CSM was an effective therapeutic agent, that VCS was a sensitive and specific indicator of neurologic function, and that illness involved systemic and hypothalamic processes. Although the results supported the general hypothesis that illness was associated with exposure to the WDBs, this conclusion was tempered by several study limitations. Exposure to specific agents was not demonstrated, study participants were not randomly selected, and double-blinding procedures were not used. Additional human and animal studies are needed to confirm this conclusion, investigate the role of complex mixtures of bacteria, fungi, mycotoxins, endotoxins, and antigens in illness causation, and characterize modes of action. Such data will improve the assessment of human health risk from chronic exposure to WDBs.
Dennerlein, J T; Yang, M C
2001-01-01
Pointing devices, essential input tools for the graphical user interface (GUI) of desktop computers, require precise motor control and dexterity to use. Haptic force-feedback devices provide the human operator with tactile cues, adding the sense of touch to existing visual and auditory interfaces. However, the performance enhancements, comfort, and possible musculoskeletal loading of using a force-feedback device in an office environment are unknown. Hypothesizing that the time to perform a task and the self-reported pain and discomfort of the task improve with the addition of force feedback, 26 people ranging in age from 22 to 44 years performed a point-and-click task 540 times with and without an attractive force field surrounding the desired target. The point-and-click movements were approximately 25% faster with the addition of force feedback (paired t-tests, p < 0.001). Perceived user discomfort and pain, as measured through a questionnaire, were also smaller with the addition of force feedback (p < 0.001). However, this difference decreased as additional distracting force fields were added to the task environment, simulating a more realistic work situation. These results suggest that for a given task, use of a force-feedback device improves performance, and potentially reduces musculoskeletal loading during mouse use. Actual or potential applications of this research include human-computer interface design, specifically that of the pointing device extensively used for the graphical user interface.
Homeostatic Agent for General Environment
NASA Astrophysics Data System (ADS)
Yoshida, Naoto
2018-03-01
One of the essential aspects of biological agents is dynamic stability. This aspect, called homeostasis, is widely discussed in ethology, in neuroscience, and in the early stages of artificial intelligence. Ashby's homeostats are general-purpose learning machines for stabilizing essential variables of the agent in the face of general environments. However, despite their generality, the original homeostats could not be scaled up because they searched their parameters randomly. In this paper, we first re-define the objective of homeostats as the maximization of a multi-step survival probability from the viewpoint of sequential decision theory and probability theory. Then we show that this optimization problem can be treated by reinforcement learning algorithms with special agent architectures and theoretically derived intrinsic reward functions. Finally, we empirically demonstrate that agents with our architecture automatically learn to survive in a given environment, including environments with visual stimuli. Our survival agents can learn to eat food, avoid poison, and stabilize essential variables through theoretically derived single intrinsic reward formulations.
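The re-definition above (homeostasis as maximizing multi-step survival) can be illustrated with a toy tabular sketch. Everything below is our own illustration, not the paper's architecture: a made-up one-variable "energy" environment, an intrinsic reward of 1 per step survived as a crude stand-in for survival probability, and ordinary Q-learning.

```python
import random

random.seed(0)

# Toy homeostat: keep the essential variable "energy" inside the viable
# band 1..9. Levels 0 and 10 are lethal. (Illustrative, not the paper's setup.)
N_LEVELS = 11
ACTIONS = (0, 1)   # 0 = wait (energy -1), 1 = eat (energy +1 or +2, at random)

def step(energy, action):
    energy += -1 if action == 0 else random.choice((1, 2))
    energy = max(0, min(N_LEVELS - 1, energy))
    alive = 0 < energy < N_LEVELS - 1
    # Intrinsic reward: 1 for every step the agent remains viable.
    return energy, (1.0 if alive else 0.0), alive

# Tabular Q-learning over the discrete energy state.
Q = [[0.0, 0.0] for _ in range(N_LEVELS)]
alpha, gamma, eps = 0.2, 0.95, 0.1

for episode in range(2000):
    e = random.randint(1, N_LEVELS - 2)      # start anywhere in the viable band
    for _ in range(50):
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[e][x])
        e2, r, alive = step(e, a)
        target = r + (gamma * max(Q[e2]) if alive else 0.0)
        Q[e][a] += alpha * (target - Q[e][a])
        e = e2
        if not alive:
            break

print(max(ACTIONS, key=lambda a: Q[1][a]))   # greedy action when nearly starved
print(max(ACTIONS, key=lambda a: Q[9][a]))   # greedy action when nearly full
```

With only the survival reward, the greedy policy comes to eat when energy is nearly depleted and to wait when a further meal would overshoot the viable band, i.e., the essential variable is actively stabilized.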
Water environmental management with the aid of remote sensing and GIS technology
NASA Astrophysics Data System (ADS)
Chen, Xiaoling; Yuan, Zhongzhi; Li, Yok-Sheung; Song, Hong; Hou, Yingzi; Xu, Zhanhua; Liu, Honghua; Wai, Onyx W.
2005-01-01
The water environment involves many disciplinary fields, spanning both science and management, which makes it difficult to study. Timely observation, data acquisition, and analysis of the water environment are very important for decision makers, who play a key role in maintaining sustainable development. This study focused on developing a water environment management platform based on remote sensing and GIS technology; its main goal is to provide the necessary information on the water environment through spatial analysis and visual display in a suitable way. The work focused on three points. The first relates to technical issues of spatial data organization and communication through a combination of GIS and statistical software; a data-related model was proposed to handle data communication between the two systems. The second is spatio-temporal analysis based on remote sensing and GIS. Water quality parameters, namely suspended sediment concentration and BOD5, were analyzed in this case, and the results suggested a quantitatively evident influence of land-source pollution in the spatial domain. The third is 3D visualization of surface features based on RS and GIS technology. The Pearl River estuary and Hong Kong's coastal waters in the South China Sea were taken as the study case. The ArcGIS software served as the basic platform for developing the water environmental management system. Water quality sampling data from 76 monitoring stations in coastal water bodies and remotely sensed images were used in this study.
Self-motivated visual scanning predicts flexible navigation in a virtual environment.
Ploran, Elisabeth J; Bevitt, Jacob; Oshiro, Jaris; Parasuraman, Raja; Thompson, James C
2014-01-01
The ability to navigate flexibly (e.g., reorienting oneself based on distal landmarks to reach a learned target from a new position) may rely on visual scanning during both initial experiences with the environment and subsequent test trials. Reliance on visual scanning during navigation harkens back to the concept of vicarious trial and error, a description of the side-to-side head movements made by rats as they explore previously traversed sections of a maze in an attempt to find a reward. In the current study, we examined if visual scanning predicted the extent to which participants would navigate to a learned location in a virtual environment defined by its position relative to distal landmarks. Our results demonstrated a significant positive relationship between the amount of visual scanning and participant accuracy in identifying the trained target location from a new starting position as long as the landmarks within the environment remain consistent with the period of original learning. Our findings indicate that active visual scanning of the environment is a deliberative attentional strategy that supports the formation of spatial representations for flexible navigation.
User-assisted video segmentation system for visual communication
NASA Astrophysics Data System (ADS)
Wu, Zhengping; Chen, Chun
2002-01-01
Video segmentation plays an important role in efficient storage and transmission for visual communication. In this paper, we introduce a novel video segmentation system using point tracking and contour formation techniques. Inspired by results from the study of the human visual system, we decompose the video segmentation problem into three separate phases: user-assisted feature point selection, automatic feature point tracking, and contour formation. This splitting relieves the computer of ill-posed automatic segmentation problems and gives the method a higher level of flexibility. First, precise feature points are found using a combination of user assistance and an eigenvalue-based adjustment. Second, the feature points in the remaining frames are obtained using motion estimation and point refinement. Finally, contour formation is used to extract the object, along with a point insertion process that provides the feature points for the next frame's tracking.
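The "eigenvalue-based adjustment" for selecting feature points is in the spirit of minimum-eigenvalue (Shi-Tomasi) corner scoring: a point is reliably trackable when both eigenvalues of the local gradient structure tensor are large. A minimal numpy sketch of that score (our illustration; the paper's exact formulation may differ):

```python
import numpy as np

def min_eigen_score(img, x, y, win=3):
    """Shi-Tomasi-style score: the smaller eigenvalue of the gradient
    structure tensor summed over a (2*win+1)^2 window centered at (x, y)."""
    gy, gx = np.gradient(img.astype(float))   # gradients along rows, columns
    sl = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
    ixx = float((gx[sl] ** 2).sum())
    iyy = float((gy[sl] ** 2).sum())
    ixy = float((gx[sl] * gy[sl]).sum())
    # Smaller eigenvalue of the 2x2 tensor [[ixx, ixy], [ixy, iyy]].
    tr, det = ixx + iyy, ixx * iyy - ixy ** 2
    return tr / 2 - np.sqrt(max((tr / 2) ** 2 - det, 0.0))

# Synthetic test image: a bright square on a dark background.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0

print(min_eigen_score(img, 8, 8))    # square corner: both gradients strong
print(min_eigen_score(img, 16, 8))   # straight edge: one dominant gradient
print(min_eigen_score(img, 4, 4))    # flat region: no gradient at all
```

Only the corner receives a large score; edges and flat regions score near zero, which is why such points make poor tracking anchors and why user assistance plus this adjustment yields stable features.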
2011-09-01
Visual landmarks facilitate rodent spatial navigation in virtual reality environments
Youngstrom, Isaac A.; Strowbridge, Ben W.
2012-01-01
Because many different sensory modalities contribute to spatial learning in rodents, it has been difficult to determine whether spatial navigation can be guided solely by visual cues. Rodents moving within physical environments with visual cues engage a variety of nonvisual sensory systems that cannot be easily inhibited without lesioning brain areas. Virtual reality offers a unique approach to ask whether visual landmark cues alone are sufficient to improve performance in a spatial task. We found that mice could learn to navigate between two water reward locations along a virtual bidirectional linear track using a spherical treadmill. Mice exposed to a virtual environment with vivid visual cues rendered on a single monitor increased their performance over a 3-d training regimen. Training significantly increased the percentage of time avatars controlled by the mice spent near reward locations in probe trials without water rewards. Neither improvement during training nor spatial learning of reward locations occurred with mice operating a virtual environment without vivid landmarks or with mice deprived of all visual feedback. Mice operating the vivid environment developed stereotyped avatar turning behaviors when alternating between reward zones that were positively correlated with their performance on the probe trial. These results suggest that mice are able to learn to navigate to specific locations using only visual cues presented within a virtual environment rendered on a single computer monitor. PMID:22345484
Cheng, Yufang; Huang, Ruowen
2012-01-01
The focus of this study is using a data glove to practice joint attention skills in a virtual reality environment for people with pervasive developmental disorder (PDD). The virtual reality environment provides a safe environment for people with PDD. In particular, when they make errors during practice in the virtual reality environment, there are no distressing or dangerous consequences to deal with. Joint attention is a critical skill among the disorder characteristics of children with PDD, and its absence is a deficit that frequently affects their social relationships in daily life. Therefore, this study designed the Joint Attention Skills Learning (JASL) system with a data glove tool to help children with PDD practice joint attention behavior skills. The JASL specifically targets the skills of pointing, showing, sharing things, and behavioral interaction with other children with PDD. The system is designed as a playroom scene and presented from a first-person perspective for users. Its functions include pointing and showing, moving virtual objects, 3D animation, text, speech sounds, and feedback. The study employed a single-subject multiple-probe design across subjects, with analysis by visual inspection. The experimental phase took 3 months to complete. Surprisingly, the experimental results reveal that the participants continued to improve their joint attention skills in daily life after using the JASL system. The significant potential of this particular treatment of joint attention for each participant is discussed in detail in this paper. Copyright © 2012 Elsevier Ltd. All rights reserved.
The role of visual context in manual target localization
NASA Technical Reports Server (NTRS)
Barry, Susan R.
1993-01-01
During space flight and immediately after return to the 1-g environment of earth, astronauts experience perceptual and sensory-motor disturbances. These changes result from adaptation of the astronaut to the microgravity environment of space. During space flight, sensory information from the eyes, limbs, and vestibular organs is reinterpreted by the central nervous system in order to produce appropriate body movements in microgravity. This adaptation takes several days to develop. Upon return to earth, the changes in the sensory-motor system are no longer appropriate to a 1-g environment. Over several days, the astronaut must re-adapt to the terrestrial environment. Alterations in sensory-motor function may affect eye-head-hand coordination and, thus, the crewmember's ability to manually locate objects in extrapersonal space. Previous reports have demonstrated that crewmembers have difficulty in estimating joint and limb position and in pointing to memorized target positions on orbit and immediately postflight. The ability to point at or reach toward an object or perform other manual tasks is essential for safe Shuttle operation and may be compromised particularly during re-entry and landing sequences and during possible emergency egress from the Shuttle. An understanding of eye-head-hand coordination and the changes produced during space flight is necessary to develop effective countermeasures. This summer's project formed part of the study of the sensory cues used in the manual localization of objects.
[Visual field progression in glaucoma: cluster analysis].
Bresson-Dumont, H; Hatton, J; Foucher, J; Fonteneau, M
2012-11-01
Visual field progression analysis is one of the key points in glaucoma monitoring, but distinguishing true progression from random fluctuation is sometimes difficult. There are several different algorithms but no real consensus for detecting visual field progression. The trend analysis of global indices (MD, sLV) may miss localized deficits or be affected by media opacities. Conversely, point-by-point analysis makes progression difficult to differentiate from physiological variability, particularly when the sensitivity of a point is already low. The goal of our study was to analyse visual field progression with the EyeSuite™ Octopus Perimetry Clusters algorithm in patients with no significant changes in global indices or worsening on pointwise linear regression analysis. We analyzed the visual fields of 162 eyes (100 patients - 58 women, 42 men, average age 66.8 ± 10.91) with ocular hypertension or glaucoma. For inclusion, at least six reliable visual fields per eye were required, and the trend analysis (EyeSuite™ Perimetry) of visual field global indices (MD and sLV) could show no significant progression. The analysis of changes in cluster mode was then performed. In a second step, eyes with statistically significant worsening of at least one of their clusters were analyzed point-by-point with the Octopus Field Analysis (OFA). Fifty-four eyes (33.33%) had significant worsening in some clusters, while their global indices remained stable over time. This group showed more advanced glaucoma than the stable group (MD 6.41 dB vs. 2.87 dB); 64.82% (35/54) of the eyes in which the clusters progressed, however, had no statistically significant change on trend analysis by pointwise linear regression. Most software algorithms for analyzing visual field progression are essentially trend analyses of global indices, or point-by-point linear regression. This study shows the potential role of cluster trend analysis.
However, for best results, it is preferable to compare the analyses of several tests in combination with morphologic exam. Copyright © 2012 Elsevier Masson SAS. All rights reserved.
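The pointwise linear regression discussed above fits each test location's sensitivity (in dB) against exam date and flags locations whose slope is significantly negative. A minimal sketch of the slope fit with made-up sensitivities (the numbers and any cut-off are illustrative, not from the study):

```python
import numpy as np

def pointwise_slope(sensitivities_db, years):
    """Least-squares slope (dB/year) of one visual-field location over time."""
    slope, _intercept = np.polyfit(years, sensitivities_db, 1)
    return slope

# Six reliable exams over 2.5 years (illustrative data, not the study's).
years = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
stable = [28.0, 28.5, 27.8, 28.2, 27.9, 28.1]    # fluctuation around 28 dB
worsening = [28.0, 27.0, 26.2, 25.1, 24.3, 23.2]  # steady sensitivity loss

print(round(pointwise_slope(stable, years), 2))
print(round(pointwise_slope(worsening, years), 2))  # clearly negative
```

A cluster-trend analysis applies the same idea to the averaged sensitivity of a group of neighboring points, which dampens the point-level variability the abstract identifies as the main obstacle.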
NASA Technical Reports Server (NTRS)
Conroy, Michael; Mazzone, Rebecca; Little, William; Elfrey, Priscilla; Mann, David; Mabie, Kevin; Cuddy, Thomas; Loundermon, Mario; Spiker, Stephen; McArthur, Frank;
2010-01-01
The Distributed Observer Network (DON) is a NASA-collaborative environment that leverages game technology to bring three-dimensional simulations to conventional desktop and laptop computers in order to allow teams of engineers working on design and operations, either individually or in groups, to view and collaborate on 3D representations of data generated by authoritative tools such as Delmia Envision, Pro/Engineer, or Maya. The DON takes models and telemetry from these sources and, using commercial game engine technology, displays the simulation results in a 3D visual environment. DON has been designed to enhance accessibility and user ability to observe and analyze visual simulations in real time. A variety of NASA mission segment simulations [Synergistic Engineering Environment (SEE) data, NASA Enterprise Visualization Analysis (NEVA) ground processing simulations, the DSS simulation for lunar operations, and the Johnson Space Center (JSC) TRICK tool for guidance, navigation, and control analysis] were evaluated. Desired functionalities [i.e., TiVo-like functions, the capability to communicate textually or via Voice-over-Internet Protocol (VoIP) among team members, and the ability to write and save notes to be accessed later] were targeted. The resulting DON application was slated for early 2008 release to support simulation use for the Constellation Program and its teams. Those using the DON connect through a client that runs on their PC or Mac. This enables them to observe and analyze the simulation data as their schedule allows, and to review it as frequently as desired. DON team members can move freely within the virtual world. Preset camera points can be established, enabling team members to jump to specific views. This improves opportunities for shared analysis of options, design reviews, tests, operations, training, and evaluations, and improves prospects for verification of requirements, issues, and approaches among dispersed teams.
Collecting, Managing, and Visualizing Data during Planetary Surface Exploration
NASA Astrophysics Data System (ADS)
Young, K. E.; Graff, T. G.; Bleacher, J. E.; Whelley, P.; Garry, W. B.; Rogers, A. D.; Glotch, T. D.; Coan, D.; Reagan, M.; Evans, C. A.; Garrison, D. H.
2017-12-01
While the Apollo lunar surface missions were highly successful in collecting valuable samples to help us understand the history and evolution of the Moon, technological advancements since 1969 point us toward a new generation of planetary surface exploration characterized by large volumes of data being collected and used to inform traverse execution in real time. Specifically, the advent of field portable technologies means that future planetary explorers will have vast quantities of in situ geochemical and geophysical data that can be used to inform sample collection and curation as well as strategic and tactical decision making that will impact mission planning in real time. The RIS4E SSERVI (Remote, In Situ and Synchrotron Studies for Science and Exploration; Solar System Exploration Research Virtual Institute) team has been working for several years to deploy a variety of in situ instrumentation in relevant analog environments. RIS4E seeks both to determine ideal instrumentation suites for planetary surface exploration and to develop a framework for EVA (extravehicular activity) mission planning that incorporates this new generation of technology. Results from the last several field campaigns will be discussed, as will recommendations for how to rapidly mine in situ datasets for tactical and strategic planning. Initial thoughts about autonomy in mining field data will also be presented. The NASA Extreme Environments Mission Operations (NEEMO) missions focus on a combination of science, science operations, and technology objectives in a planetary analog environment. Recently, the increase of high-fidelity marine science objectives during NEEMO EVAs has led to the ability to evaluate how real-time data collection and visualization can influence tactical and strategic planning for traverse execution and mission planning. Results of the last few NEEMO missions will be discussed in the context of data visualization strategies for real-time operations.
Bjørgen, Kathrine
2016-01-01
This article examines the characteristics of affordances of different outdoor environments in relation to children's physical activity levels. Qualitative observation studies of 3- to 5-year-olds in a Norwegian kindergarten were conducted both in a natural environment and in the kindergarten's outdoor area. An ecological approach was important from both an analytical and a theoretical point of view, using concepts from Gibson's (The ecological approach to visual perception. Houghton Mifflin Company, Boston, 1979) theory of affordances. The concept of affordances in an environment can explain children's movement behaviour. The findings reveal that situations with high physical activity levels among the children are more often created in natural environments than in the kindergarten's outdoor area. Natural environments offer potential qualities that act as a catalyst for physical activity. The study shows that certain characteristics of the physical outdoor environment are important for children's opportunities and inspiration for physically active play. The findings also show that social possibilities and opportunities for human interaction in the environment have the greatest influence on the duration and intensity of physically active play. Knowledge of the physical and social opportunities of outdoor environments, educational practice, and the content of outdoor time in kindergartens should be given greater attention.
Safety assessment on pedestrian crossing environments using MLS data.
Soilán, Mario; Riveiro, Belén; Sánchez-Rodríguez, Ana; Arias, Pedro
2018-02-01
In the framework of infrastructure analysis and maintenance in an urban environment, it is important to address the safety of every road user. This paper presents a methodology for the evaluation of several safety indicators in pedestrian crossing environments using geometric and radiometric information extracted from 3D point clouds collected by a Mobile Mapping System (MMS). The methodology is divided into four main modules, which analyze the accessibility of the crossing area, the presence of traffic lights and traffic signs, and the visibility between a driver and a pedestrian in the proximity of a pedestrian crossing. The outputs of the analysis are exported to a Geographic Information System (GIS), where they are visualized and can be further processed in the context of city management. The methodology has been tested on approximately 30 pedestrian crossings in cluttered urban environments of two different cities. Results show that MMS are a valid means of assessing the safety of a specific urban environment with regard to its geometric conditions. Remarkable results are presented for traffic light classification, with a global F-score close to 95%. Copyright © 2017 Elsevier Ltd. All rights reserved.
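For reference, the global F-score reported above is the harmonic mean of the classifier's precision and recall. A minimal sketch of how it is computed from detection counts (the counts below are illustrative, not the paper's data):

```python
def f_score(tp, fp, fn, beta=1.0):
    """F-beta score from true positive, false positive, and false negative counts."""
    precision = tp / (tp + fp)   # fraction of detections that are correct
    recall = tp / (tp + fn)      # fraction of real objects that were detected
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Illustrative traffic-light detection counts (not from the study):
print(round(f_score(tp=95, fp=5, fn=5), 2))  # 0.95
```

Because the harmonic mean punishes imbalance, a global F-score near 95% implies both few missed traffic lights and few spurious detections.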
Aurally aided visual search performance in a dynamic environment
NASA Astrophysics Data System (ADS)
McIntire, John P.; Havig, Paul R.; Watamaniuk, Scott N. J.; Gilkey, Robert H.
2008-04-01
Previous research has repeatedly shown that people can find a visual target significantly faster if spatial (3D) auditory displays direct attention to the corresponding spatial location. However, previous research has only examined searches for static (non-moving) targets in static visual environments. Since motion has been shown to affect visual acuity, auditory acuity, and visual search performance, it is important to characterize aurally-aided search performance in environments that contain dynamic (moving) stimuli. In the present study, visual search performance in both static and dynamic environments is investigated with and without 3D auditory cues. Eight participants searched for a single visual target hidden among 15 distracting stimuli. In the baseline audio condition, no auditory cues were provided. In the 3D audio condition, a virtual 3D sound cue originated from the same spatial location as the target. In the static search condition, the target and distractors did not move. In the dynamic search condition, all stimuli moved on various trajectories at 10 deg/s. The results showed a clear benefit of 3D audio that was present in both static and dynamic environments, suggesting that spatial auditory displays continue to be an attractive option for a variety of aircraft, motor vehicle, and command & control applications.
NASA Technical Reports Server (NTRS)
Klumpar, D. M.; Lapolla, M. V.; Horblit, B.
1995-01-01
A prototype system has been developed to aid the experimental space scientist in the display and analysis of spaceborne data acquired from direct measurement sensors in orbit. We explored the implementation of a rule-based environment for semi-automatic generation of visualizations that assist domain scientists in exploring their data. The goal has been to enable rapid generation of visualizations that enhance the scientist's ability to thoroughly mine their data. Transferring the task of visualization generation from the human programmer to the computer produced a rapid prototyping environment for visualizations. The visualization and analysis environment has been tested against a set of data obtained from the Hot Plasma Composition Experiment on the AMPTE/CCE satellite, creating new visualizations that provided new insight into the data.
OnSight: Multi-platform Visualization of the Surface of Mars
NASA Astrophysics Data System (ADS)
Abercrombie, S. P.; Menzies, A.; Winter, A.; Clausen, M.; Duran, B.; Jorritsma, M.; Goddard, C.; Lidawer, A.
2017-12-01
A key challenge of planetary geology is to develop an understanding of an environment that humans cannot (yet) visit. Instead, scientists rely on visualizations created from images sent back by robotic explorers, such as the Curiosity Mars rover. OnSight is a multi-platform visualization tool that helps scientists and engineers to visualize the surface of Mars. Terrain visualization allows scientists to understand the scale and geometric relationships of the environment around the Curiosity rover, both for scientific understanding and for tactical consideration in safely operating the rover. OnSight includes a web-based 2D/3D visualization tool, as well as an immersive mixed reality visualization. In addition, OnSight offers a novel feature for communication among the science team. Using the multiuser feature of OnSight, scientists can meet virtually on Mars, to discuss geology in a shared spatial context. Combining web-based visualization with immersive visualization allows OnSight to leverage strengths of both platforms. This project demonstrates how 3D visualization can be adapted to either an immersive environment or a computer screen, and will discuss advantages and disadvantages of both platforms.
Towards a Comprehensive Computational Simulation System for Turbomachinery
NASA Technical Reports Server (NTRS)
Shih, Ming-Hsin
1994-01-01
The objective of this work is to develop algorithms associated with a comprehensive computational simulation system for turbomachinery flow fields. This development is accomplished in a modular fashion. The modules include grid generation, visualization, network, simulation, toolbox, and flow modules. An interactive grid generation module is customized to facilitate the grid generation process associated with complicated turbomachinery configurations. With its user-friendly graphical user interface, the user may interactively manipulate the default settings to obtain a quality grid in a fraction of the time usually required for building a grid about the same geometry with a general-purpose grid generation code. Non-Uniform Rational B-Spline formulations are utilized in the algorithm to maintain geometry fidelity while redistributing grid points on the solid surfaces. The Bezier curve formulation is used to allow interactive construction of inner boundaries, and it is also utilized to allow interactive point distribution. Cascade surfaces are transformed from three-dimensional surfaces of revolution into two-dimensional parametric planes for easy manipulation. Such a transformation allows the manipulated plane grids to be mapped to surfaces of revolution by any generatrix definition. A sophisticated visualization module is developed to allow visualization of both grids and flow solutions, steady or unsteady. A network module is built to allow data transfer in the heterogeneous environment. A flow module is integrated into this system, using an existing turbomachinery flow code. A simulation module is developed to combine the network, flow, and visualization modules to achieve near real-time flow simulation about turbomachinery geometries. A toolbox module is developed to support the overall task.
A batch version of the grid generation module is developed to allow portability and has been extended to allow dynamic grid generation for pitch changing turbomachinery configurations. Various applications with different characteristics are presented to demonstrate the success of this system.
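The Bezier-curve construction described above can be illustrated with the standard de Casteljau evaluation scheme. This is a generic sketch of the formulation, not code from the thesis:

```python
# Illustrative sketch: de Casteljau evaluation of a Bezier curve, the kind of
# formulation used for interactive construction of inner boundaries and for
# point distribution. Repeated linear interpolation of the control polygon
# converges to the curve point at parameter t.

def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1]."""
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        pts = [
            tuple((1 - t) * a + t * b for a, b in zip(p, q))
            for p, q in zip(pts, pts[1:])
        ]
    return pts[0]

# A quadratic curve from (0, 0) to (2, 0) bulging through (1, 2):
print(de_casteljau([(0, 0), (1, 2), (2, 0)], 0.5))  # (1.0, 1.0)
```

De Casteljau evaluation is often preferred over expanding the Bernstein polynomials directly because it is numerically stable.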
Allocentric and contra-aligned spatial representations of a town environment in blind people.
Chiesa, Silvia; Schmidt, Susanna; Tinti, Carla; Cornoldi, Cesare
2017-10-01
Evidence concerning the representation of space by blind individuals is still unclear: sometimes blind people behave as sighted people do, while at other times they show difficulties. A better understanding of these difficulties, especially with reference to the strategies used to form a representation of the environment, may help to enhance knowledge of the consequences of the absence of vision. The present study examined the representation of the locations of landmarks in a real town by using pointing tasks that entailed either allocentric points of reference with mental rotations of different degrees, or contra-aligned representations. Results showed that, in general, people had difficulties when they had to point from a different perspective to aligned landmarks or from the original perspective to contra-aligned landmarks, but this difficulty was particularly evident for the blind participants. Examination of the strategies adopted to perform the tasks showed that only a small group of blind participants used a survey strategy, and that this group performed better than participants who adopted route or verbal strategies. Implications for understanding the consequences of the absence of visual experience for spatial cognition are discussed, focusing in particular on conceivable interventions. Copyright © 2017 Elsevier B.V. All rights reserved.
Indoor Navigation Design Integrated with Smart Phones and Rfid Devices
NASA Astrophysics Data System (ADS)
Ortakci, Y.; Demiral, E.; Atila, U.; Karas, I. R.
2015-10-01
High-rise, complex and huge buildings in cities are almost small cities in themselves, with their tens of floors and hundreds of corridors, rooms and passages. Due to the size and complexity of these buildings, people need guidance to find their way to a destination inside them. In this study, a mobile application is developed to visualize a pedestrian's indoor position in 3D on their smartphone, and RFID technology is used to detect the pedestrian's position. While the pedestrian walks along the route, the smartphone guides him or her by displaying photos of the indoor environment on the route. An RFID (Radio-Frequency Identification) device is integrated into the system; the pedestrian carries it during the tour of the building, and it sends position data directly to the server every two seconds. On the other side, the pedestrian simply selects the destination point in the mobile application, which sends it to the server. A script on the server computes the shortest path from the pedestrian's position to the destination point and returns the environment photo of the first node on that path to the client as an indoor navigation module.
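The server-side path computation the abstract describes can be sketched with Dijkstra's algorithm over a graph of corridor nodes. The node names and distances below are invented for illustration; the study does not specify which shortest-path method its server script uses:

```python
# Hypothetical sketch of the server-side step: shortest path from the
# pedestrian's RFID-reported node to the chosen destination, via Dijkstra's
# algorithm on a weighted graph of indoor waypoints.
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm; graph maps node -> {neighbor: distance}."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, dist in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(queue, (cost + dist, nbr, path + [nbr]))
    return float("inf"), []

corridors = {
    "entrance": {"hall": 10, "stairs": 25},
    "hall": {"entrance": 10, "room_204": 15},
    "stairs": {"entrance": 25, "room_204": 8},
    "room_204": {},
}
print(shortest_path(corridors, "entrance", "room_204"))
# (25, ['entrance', 'hall', 'room_204'])
```

The first node on the returned path is the one whose environment photo would be sent to the client next.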
NASA Astrophysics Data System (ADS)
Ying, Shen; Li, Lin; Gao, Yurong
2009-10-01
Spatial visibility analysis is an important approach to pedestrian behavior, because visual perception of space is the most direct way to acquire environmental information and guide one's actions. Based on agent modeling and a top-down method, the paper develops a framework for analyzing pedestrian flow as a function of visibility. We use viewsheds in the visibility analysis and impose the resulting parameters on an agent simulation to direct the agents' motion in urban space. We analyze pedestrian behavior at both the micro-scale and the macro-scale of urban open space. At the micro-scale of an urban street or district, an individual agent uses visual affordances to determine its direction of motion. At the macro-scale, we compare the distribution of pedestrian flow with the spatial configuration of the urban environment, and mine the relationships between pedestrian flow and the distribution of urban facilities and functions. The paper first computes visibility conditions at vantage points in urban open space, such as a street network, and quantifies the visibility parameters. The agents then use these parameters to decide their direction of motion, and through multi-agent simulation the pedestrian flow reaches a stable state in the urban environment. Finally, the paper compares the morphology of the visibility parameters and the pedestrian distribution with the layout of urban functions and facilities to confirm the consistency between them, which can support decision-making in urban design.
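The viewshed computation underlying such agent behavior can be approximated by a line-of-sight test on an occupancy grid. This is an assumed simplification for illustration, not the paper's actual method:

```python
# Minimal sketch of grid-based visibility: an agent at cell a can "see"
# cell b if no obstacle cell (marked 1) lies on the straight line between
# them, sampled at small steps. A full viewshed is this test repeated over
# all cells around a vantage point.

def visible(grid, a, b, steps=100):
    """True if the line from cell a to cell b crosses no obstacle cells."""
    (ax, ay), (bx, by) = a, b
    for i in range(steps + 1):
        t = i / steps
        x = round(ax + t * (bx - ax))
        y = round(ay + t * (by - ay))
        if grid[y][x] == 1 and (x, y) not in (a, b):
            return False
    return True

street = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],   # a building blocks the middle row
    [0, 0, 0, 0],
]
print(visible(street, (0, 0), (3, 0)))  # True: clear along the top row
print(visible(street, (0, 1), (3, 1)))  # False: building in the way
```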
Visual Communication in PowerPoint Presentations in Applied Linguistics
ERIC Educational Resources Information Center
Kmalvand, Ayad
2014-01-01
PowerPoint knowledge presentation as a digital genre has established itself as the main software by which the findings of theses are disseminated in the academic settings. Although the importance of PowerPoint presentations is typically realized in academic settings like lectures, conferences, and seminars, the study of the visual features of…
Visual Perception of Touchdown Point During Simulated Landing
ERIC Educational Resources Information Center
Palmisano, Stephen; Gillam, Barbara
2005-01-01
Experiments examined the accuracy of visual touchdown point perception during oblique descents (1.5?-15?) toward a ground plane consisting of (a) randomly positioned dots, (b) a runway outline, or (c) a grid. Participants judged whether the perceived touchdown point was above or below a probe that appeared at a random position following each…
Painting Supramolecular Polymers in Organic Solvents by Super-resolution Microscopy
2018-01-01
Despite the rapid development of complex functional supramolecular systems, visualization of these architectures under native conditions at high resolution has remained a challenging endeavor. Super-resolution microscopy was recently proposed as an effective tool to unveil one-dimensional nanoscale structures in aqueous media upon chemical functionalization with suitable fluorescent probes. Building upon our previous work, which enabled photoactivation localization microscopy in organic solvents, herein, we present the imaging of one-dimensional supramolecular polymers in their native environment by interface point accumulation for imaging in nanoscale topography (iPAINT). The noncovalent staining, typical of iPAINT, allows the investigation of supramolecular polymers’ structure in situ without any chemical modification. The quasi-permanent adsorption of the dye to the polymer is exploited to identify block-like arrangements within supramolecular fibers, which were obtained upon mixing homopolymers that were prestained with different colors. The staining of the blocks, maintained by the lack of exchange of the dyes, permits the imaging of complex structures for multiple days. This study showcases the potential of PAINT-like strategies such as iPAINT to visualize multicomponent dynamic systems in their native environment with an easy, synthesis-free approach and high spatial resolution. PMID:29697958
High sensitivity to multisensory conflicts in agoraphobia exhibited by virtual reality.
Viaud-Delmon, Isabelle; Warusfel, Olivier; Seguelas, Angeline; Rio, Emmanuel; Jouvent, Roland
2006-10-01
The primary aim of this study was to evaluate the effect of auditory feedback in a VR system planned for clinical use and to address the different factors that should be taken into account in building a bimodal virtual environment (VE). We conducted an experiment in which we assessed spatial performance in agoraphobic patients and normal subjects, comparing two kinds of VEs, visual alone (Vis) and auditory-visual (AVis), during separate sessions. Subjects were equipped with a head-mounted display coupled with an electromagnetic sensor system and immersed in a virtual town. Their task was to locate different landmarks and become familiar with the town. In the AVis condition subjects were equipped with the head-mounted display and headphones, which delivered a soundscape updated in real time according to their movement in the virtual town. While general performance remained comparable across the conditions, the reported feeling of immersion was more compelling in the AVis environment. However, patients exhibited more cybersickness symptoms in this condition. The results of this study point to the multisensory integration deficit of agoraphobic patients and underline the need for further research on multimodal VR systems for clinical use.
Perception of biological motion from size-invariant body representations.
Lappe, Markus; Wittinghofer, Karin; de Lussanet, Marc H E
2015-01-01
The visual recognition of action is one of the socially most important and computationally demanding capacities of the human visual system. It combines visual shape recognition with complex non-rigid motion perception. Action presented as a point-light animation is a striking visual experience for anyone who sees it for the first time. Information about the shape and posture of the human body is sparse in point-light animations, but it is essential for action recognition. In the posturo-temporal filter model of biological motion perception posture information is picked up by visual neurons tuned to the form of the human body before body motion is calculated. We tested whether point-light stimuli are processed through posture recognition of the human body form by using a typical feature of form recognition, namely size invariance. We constructed a point-light stimulus that can only be perceived through a size-invariant mechanism. This stimulus changes rapidly in size from one image to the next. It thus disrupts continuity of early visuo-spatial properties but maintains continuity of the body posture representation. Despite this massive manipulation at the visuo-spatial level, size-changing point-light figures are spontaneously recognized by naive observers, and support discrimination of human body motion.
Hayne, Harlene; Jaeger, Katja; Sonne, Trine; Gross, Julien
2016-11-01
The visual recognition memory (VRM) paradigm has been widely used to measure memory during infancy and early childhood; it has also been used to study memory in human and nonhuman adults. Typically, participants are familiarized with stimuli that have no special significance to them. Under these conditions, greater attention to the novel stimulus during the test (i.e., novelty preference) is used as the primary index of memory. Here, we took a novel approach to the VRM paradigm and tested 1-, 2-, and 3-year-olds using photos of meaningful stimuli drawn from the participants' own environment (e.g., photos of their mother, father, siblings, house). We also compared their performance to that of participants of the same age who were tested in an explicit pointing version of the VRM task. Two- and 3-year-olds exhibited a strong familiarity preference for some, but not all, of the meaningful stimuli; 1-year-olds did not. At no age did participants exhibit the kind of novelty preference that is commonly used to define memory in the VRM task. Furthermore, when compared to pointing, looking measures provided a rough approximation of recognition memory, but in some instances the looking measure underestimated retention. The use of meaningful stimuli raises important questions about the way in which visual attention is interpreted in the VRM paradigm, and may provide new opportunities to measure memory during infancy and early childhood. © 2016 Wiley Periodicals, Inc.
Bourdin, C; Bock, O
2006-11-20
The ability of our sensorimotor system to adapt to changing and complex environmental demands has been under experimental scrutiny for more than a century. Previous works have shown that aimed arm movements adapt quickly and completely to Coriolis force, but incompletely to the combination of Coriolis and centrifugal forces without visual cues. Two hypotheses may be advanced to explain this discrepancy: the workspace-exploration hypothesis, and the degraded-proprioception hypothesis. The aim of this study was to distinguish between the above two alternatives by comparing adaptive improvement during off-axis rotation in subjects pointing at one, three or seven different targets in complete darkness. Two main results emerge: (a) off-axis rotation led initially to errors in the direction of Coriolis force and in the opposite direction of the centrifugal force; (b) the size of the visited workspace has no effect on the way the subjects adapt to a multi-force environment. The lack of a target-number effect and the persistence of lateral errors in the pointing movements performed during rotation of the platform, support the degraded-proprioception rather than the workspace-exploration hypothesis of adaptation to a multi-force environment.
Supramodal Enhancement of Auditory Perceptual and Cognitive Learning by Video Game Playing.
Zhang, Yu-Xuan; Tang, Ding-Lan; Moore, David R; Amitay, Sygal
2017-01-01
Medical rehabilitation involving behavioral training can produce highly successful outcomes, but those successes are obtained at the cost of long periods of often tedious training, reducing compliance. By contrast, arcade-style video games can be entertaining and highly motivating. We examine here the impact of video game play on contiguous perceptual training. We alternated several periods of auditory pure-tone frequency discrimination (FD) with the popular spatial visual-motor game Tetris played in silence. Tetris play alone did not produce any auditory or cognitive benefits. However, when alternated with FD training it enhanced learning of FD and auditory working memory. The learning-enhancing effects of Tetris play cannot be explained simply by the visual-spatial training involved, as the effects were gone when Tetris play was replaced with another visual-spatial task using Tetris-like stimuli but not incorporated into a game environment. The results indicate that game play enhances learning and transfer of the contiguous auditory experiences, pointing to a promising approach for increasing the efficiency and applicability of rehabilitative training.
NASA Astrophysics Data System (ADS)
Park, Young Woo; Guo, Bing; Mogensen, Monique; Wang, Kevin; Law, Meng; Liu, Brent
2010-03-01
When a patient suspected of stroke is admitted to the emergency room, time is of the utmost importance. The infarcted brain area suffers irreparable damage as soon as three hours after the onset of stroke symptoms. A CT scan is one of the standard first-line imaging investigations and is crucial to identify and properly triage stroke cases. The limited availability of an expert radiologist in the emergency environment to diagnose the stroke patient in a timely manner only adds to the challenges of the clinical workflow. Therefore, a truly zero-footprint web-based system with powerful advanced visualization tools for volumetric imaging, including 2D, MIP/MPR, and 3D displays, can greatly facilitate this dynamic clinical workflow for stroke patients. Together with mobile technology, the proper visualization tools can be delivered at the point of decision, anywhere and anytime. We present a small pilot project to evaluate the use of mobile devices such as iPhones in evaluating stroke patients. The results of the evaluation, as well as challenges in setting up the system, are also discussed.
Maidenbaum, Shachar; Levy-Tzedek, Shelly; Chebat, Daniel Robert; Namer-Furstenberg, Rinat; Amedi, Amir
2014-01-01
Mobility training programs for helping the blind navigate through unknown places with a White-Cane significantly improve their mobility. However, what is the effect of new assistive technologies, which offer more information to the blind user, on the underlying premises of these programs, such as navigation patterns? We developed the virtual-EyeCane, a minimalistic sensory substitution device translating single-point distance into auditory cues identical to the EyeCane's in the real world. We compared performance in virtual environments when using the virtual-EyeCane, a virtual-White-Cane, no device, and visual navigation. We show that the characteristics of virtual-EyeCane navigation differ from navigation with a virtual-White-Cane or no device, that virtual-EyeCane users complete more levels successfully, taking shorter paths and with fewer collisions than these groups, and we demonstrate the relative similarity of virtual-EyeCane and visual navigation patterns. This suggests that additional distance information indeed changes navigation patterns from virtual-White-Cane use and brings them closer to visual navigation.
Development of Techniques for Visualization of Scalar and Vector Fields in the Immersive Environment
NASA Technical Reports Server (NTRS)
Bidasaria, Hari B.; Wilson, John W.; Nealy, John E.
2005-01-01
Visualization of scalar and vector fields in the immersive environment (CAVE - Cave Automated Virtual Environment) is important for its application to radiation shielding research at NASA Langley Research Center. A complete methodology and the underlying software for this purpose have been developed. The developed software has been put to use for the visualization of the Earth's magnetic field, and in particular for the study of the South Atlantic Anomaly. The methodology has also been put to use for the visualization of geomagnetically trapped protons and electrons within Earth's magnetosphere.
Landscape control points: a procedure for predicting and monitoring visual impacts
R. Burton Litton
1973-01-01
The visual impacts of alterations to the landscape can be studied by setting up Landscape Control Points, a network of permanently established observation sites. Such observations enable the forest manager to anticipate visual impacts of management decisions, select from a choice of alternative solutions, cover an area for comprehensive viewing, and establish a method to...
Wetlands for Wastewater: a Visual Approach to Microbial Dynamics
NASA Astrophysics Data System (ADS)
Joubert, L.; Wolfaardt, G.; Du Plessis, K.
2007-12-01
The complex character of distillery wastewater comprises high concentrations of sugars, lignins, hemicelluloses, dextrans, resins, polyphenols and organic acids which are recalcitrant to biodegradation. Microorganisms play a key role in the production and degradation of organic matter, environmental pollutants, and cycling of nutrients and metals. Due to their short life cycles microbes respond rapidly to external nutrient loading, with major consequences for the stability of biological systems. We evaluated the feasibility of wetlands to treat winery and distillery effluents in experimental systems based on constructed wetlands, including down-scaled on-site distillery wetlands, small-scale controlled greenhouse systems, and bench-scale mesocosms. Chemical, visual and molecular fingerprinting (t-RFLP) techniques were applied to study the dynamics of planktonic and attached (biofilm) communities at various points in wetlands of different size, retention time and geological substrate, and under influence of shock nutrient loadings. Variable-Pressure Scanning Electron Microscopy (VP-SEM) was applied to visualize microbial colonization, morphotype diversity and distribution, and 3D biofilm architecture. Cross-taxon and predator-prey interactions were markedly influenced by organic loading, while the presence of algae affected microbial community composition and biofilm structure. COD removal varied with geological substrate, and was positively correlated with retention time in gravel wetlands. Planktonic and biofilm communities varied markedly in different regions of the wetland and over time, as indicated by whole-community t-RFLP and VP-SEM. An integrative visual approach to community dynamics enhanced data retrieval not afforded by molecular techniques alone.
The high microbial diversity along spatial and temporal gradients, and responsiveness to the physico-chemical environment, suggest that microbial communities maintain metabolic function by modifying species composition in response to fluctuations in their environment. It seems apparent that microbial community plasticity may indeed be the distinguishing characteristic of a successful wetland system.
Systems and Methods for Data Visualization Using Three-Dimensional Displays
NASA Technical Reports Server (NTRS)
Davidoff, Scott (Inventor); Djorgovski, Stanislav G. (Inventor); Estrada, Vicente (Inventor); Donalek, Ciro (Inventor)
2017-01-01
Data visualization systems and methods for generating 3D visualizations of a multidimensional data space are described. In one embodiment a 3D data visualization application directs a processing system to: load a set of multidimensional data points into a visualization table; create representations of a set of 3D objects corresponding to the set of data points; receive mappings of data dimensions to visualization attributes; determine the visualization attributes of the set of 3D objects based upon the selected mappings of data dimensions to 3D object attributes; update a visibility dimension in the visualization table for each of the 3D objects to reflect the visibility of each 3D object based upon the selected mappings of data dimensions to visualization attributes; and interactively render 3D data visualizations of the 3D objects within the virtual space from viewpoints determined based upon received user input.
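The mapping and visibility-update steps in this claim can be sketched as follows. The dimension names, attribute names, and filter scheme are hypothetical, chosen only to illustrate the pattern:

```python
# Hedged sketch of the described pipeline step: each data dimension is
# assigned to a visualization attribute (here position and size), and a
# per-object visibility flag is kept based on range filters over dimensions.

def apply_mappings(points, mappings, filters=None):
    """Turn raw data points into 3D-object attribute dicts plus visibility."""
    filters = filters or {}
    objects = []
    for point in points:
        obj = {attr: point[dim] for dim, attr in mappings.items()}
        obj["visible"] = all(
            lo <= point[dim] <= hi for dim, (lo, hi) in filters.items()
        )
        objects.append(obj)
    return objects

# Invented astronomical-catalog-style example data:
stars = [{"ra": 10.2, "dec": -3.1, "mag": 4.5},
         {"ra": 11.0, "dec": 2.0, "mag": 9.8}]
mapping = {"ra": "x", "dec": "y", "mag": "size"}
print(apply_mappings(stars, mapping, filters={"mag": (0, 6)}))
```

Keeping visibility as a per-object flag rather than deleting filtered objects lets the renderer toggle objects cheaply when the user changes mappings.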
Visualizing Uncertainty of Point Phenomena by Redesigned Error Ellipses
NASA Astrophysics Data System (ADS)
Murphy, Christian E.
2018-05-01
Visualizing uncertainty remains one of the great challenges in modern cartography. There is no overarching strategy for displaying the nature of uncertainty, as an effective and efficient visualization depends, besides on the spatial data feature type, heavily on the type of uncertainty. This work presents a design strategy to visualize uncertainty connected to point features. The error ellipse, well-known from mathematical statistics, is adapted to display the uncertainty of point information originating from spatial generalization. Modified designs of the error ellipse show the potential of quantitative and qualitative symbolization and simultaneous point-based uncertainty symbolization. The user can intuitively perceive the centers of gravity and the major orientation of the point arrays, as well as estimate the extents and possible spatial distributions of multiple point phenomena. The error ellipse represents uncertainty in an intuitive way, particularly suitable for laymen. Furthermore, it is shown how well an adapted design of the error ellipse can display the uncertainty of point features originating from incomplete data. The suitability of the error ellipse for displaying the uncertainty of point information is demonstrated in two showcases: (1) the analysis of formations of association football players, and (2) uncertain positioning of events on maps for the media.
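For a 2x2 covariance matrix, the error ellipse the paper builds on has semi-axes equal to the square roots of the eigenvalues and an orientation given by the major eigenvector. A minimal sketch, assuming the standard statistical construction rather than the paper's specific redesigns:

```python
# Sketch of the classical error-ellipse parameters from a 2x2 covariance
# matrix [[sxx, sxy], [sxy, syy]]: closed-form eigendecomposition gives the
# semi-major axis a, semi-minor axis b, and orientation theta of the ellipse.
import math

def error_ellipse(sxx, syy, sxy):
    """Semi-major/semi-minor axes and orientation (radians) of the ellipse."""
    mean = (sxx + syy) / 2.0
    diff = (sxx - syy) / 2.0
    radius = math.hypot(diff, sxy)
    lam1, lam2 = mean + radius, mean - radius       # eigenvalues, lam1 >= lam2
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)  # major-axis orientation
    return math.sqrt(lam1), math.sqrt(lam2), theta

# An uncorrelated point twice as uncertain in x as in y:
print(error_ellipse(4.0, 1.0, 0.0))  # (2.0, 1.0, 0.0)
```

Scaling both axes by a chi-square quantile turns this 1-sigma ellipse into a confidence ellipse at a chosen probability level.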
Measurement Tools for the Immersive Visualization Environment: Steps Toward the Virtual Laboratory.
Hagedorn, John G; Dunkers, Joy P; Satterfield, Steven G; Peskin, Adele P; Kelso, John T; Terrill, Judith E
2007-01-01
This paper describes a set of tools for performing measurements of objects in a virtual reality based immersive visualization environment. These tools enable the use of the immersive environment as an instrument for extracting quantitative information from data representations that hitherto had been used solely for qualitative examination. We provide, within the virtual environment, ways for the user to analyze and interact with the quantitative data generated. We describe results generated by these methods to obtain dimensional descriptors of tissue engineered medical products. We regard this toolbox as our first step in the implementation of a virtual measurement laboratory within an immersive visualization environment.
Selective Attention Modulates the Direction of Audio-Visual Temporal Recalibration
Ikumi, Nara; Soto-Faraco, Salvador
2014-01-01
Temporal recalibration of cross-modal synchrony has been proposed as a mechanism to compensate for timing differences between sensory modalities. However, far from the rich complexity of everyday life sensory environments, most studies to date have examined recalibration on isolated cross-modal pairings. Here, we hypothesize that selective attention might provide an effective filter to help resolve which stimuli are selected when multiple events compete for recalibration. We addressed this question by testing audio-visual recalibration following an adaptation phase where two opposing audio-visual asynchronies were present. The direction of voluntary visual attention, and therefore to one of the two possible asynchronies (flash leading or flash lagging), was manipulated using colour as a selection criterion. We found a shift in the point of subjective audio-visual simultaneity as a function of whether the observer had focused attention to audio-then-flash or to flash-then-audio groupings during the adaptation phase. A baseline adaptation condition revealed that this effect of endogenous attention was only effective toward the lagging flash. This hints at the role of exogenous capture and/or additional endogenous effects producing an asymmetry toward the leading flash. We conclude that selective attention helps promote selected audio-visual pairings to be combined and subsequently adjusted in time, but stimulus organization exerts a strong impact on recalibration. We tentatively hypothesize that the resolution of recalibration in complex scenarios involves the orchestration of top-down selection mechanisms and stimulus-driven processes. PMID:25004132
jAMVLE, a New Integrated Molecular Visualization Learning Environment
ERIC Educational Resources Information Center
Bottomley, Steven; Chandler, David; Morgan, Eleanor; Helmerhorst, Erik
2006-01-01
A new computer-based molecular visualization tool has been developed for teaching, and learning, molecular structure. This java-based jmol Amalgamated Molecular Visualization Learning Environment (jAMVLE) is platform-independent, integrated, and interactive. It has an overall graphical user interface that is intuitive and easy to use. The…
The Physical Environment and the Visually Impaired.
ERIC Educational Resources Information Center
Braf, Per-Gunnar
Reported are results of a project carried out at the Swedish Institute for the Handicapped to determine needs of the visually impaired in the planning and adaptation of buildings and other forms of physical environment. Chapter 1 considers implications of impaired vision and includes definitions, statistics, and problems of the visually impaired…
A software module for implementing auditory and visual feedback on a video-based eye tracking system
NASA Astrophysics Data System (ADS)
Rosanlall, Bharat; Gertner, Izidor; Geri, George A.; Arrington, Karl F.
2016-05-01
We describe here the design and implementation of a software module that provides both auditory and visual feedback of the eye position measured by a commercially available eye tracking system. The present audio-visual feedback module (AVFM) serves as an extension to the Arrington Research ViewPoint EyeTracker, but it can be easily modified for use with other similar systems. Two modes of audio feedback and one mode of visual feedback are provided in reference to a circular area-of-interest (AOI). Auditory feedback can be either a click tone emitted when the user's gaze point enters or leaves the AOI, or a sinusoidal waveform with frequency inversely proportional to the distance from the gaze point to the center of the AOI. Visual feedback is in the form of a small circular light patch that is presented whenever the gaze-point is within the AOI. The AVFM processes data that are sent to a dynamic-link library by the EyeTracker. The AVFM's multithreaded implementation also allows real-time data collection (1 kHz sampling rate) and graphics processing that allow display of the current/past gaze-points as well as the AOI. The feedback provided by the AVFM described here has applications in military target acquisition and personnel training, as well as in visual experimentation, clinical research, marketing research, and sports training.
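The second auditory feedback mode, a tone whose frequency is inversely proportional to the gaze-to-AOI-center distance, can be sketched as a simple mapping function. The scaling constant and clamp values below are assumptions for illustration, not the AVFM's actual parameters:

```python
# Illustrative sketch of distance-to-frequency audio feedback: the closer the
# gaze point is to the AOI center, the higher the tone, clamped to an audible
# band. All parameter values here are invented.
import math

def feedback_frequency(gaze, aoi_center, k=20000.0, f_min=200.0, f_max=2000.0):
    """Map gaze-to-center distance (pixels) to a clamped tone frequency (Hz)."""
    dist = math.hypot(gaze[0] - aoi_center[0], gaze[1] - aoi_center[1])
    if dist == 0:
        return f_max
    return max(f_min, min(f_max, k / dist))

print(feedback_frequency((400, 300), (400, 300)))  # 2000.0 at the center
print(feedback_frequency((500, 300), (400, 300)))  # 200.0 at 100 px away
```

In a real feedback loop this function would be re-evaluated at each eye-tracker sample (here, 1 kHz) to update the oscillator driving the audio output.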
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chizhov, K.; Simakov, A.; Seregin, V.
2013-07-01
The report is an overview of an information-analytical system designed to assure the radiation safety of workers. The system was implemented in the Northwest Radioactive Waste Management Center 'SevRAO' (a branch of the Federal State Unitary Enterprise 'Radioactive Waste Management Enterprise RosRAO'), located in Northwest Russia. With respect to 'SevRAO', the regulatory body dealing with radiation control is the Federal Medical-Biological Agency. The main document regulating radiation control is 'Reference levels of radiation factors in radioactive wastes management center', which contains about 250 parameters. We have developed a software tool to simplify control of these parameters. The software includes: an input interface, the database, a dose-calculating module and an analytical block. The input interface is used to enter radiation environment data. The dose-calculating module calculates the dose on a route. The analytical block optimizes and analyzes radiation situation maps. Much attention is paid to the GUI and the graphical representation of results. The operator can enter a route at the industrial site or watch the fluctuations of the dose-rate field on the map. Most of the results are presented in visual form. Here we present some analytical tasks to be solved for the purpose of radiation safety control, such as comparison of the dose rate at some point with the control levels at that point. The program helps to identify points making the largest contribution to the collective dose of the personnel. The tool can automatically calculate the route with the lowest dose, and compare and choose the best route. The program uses several options to visualize the radiation environment at the industrial site. This system will be useful to radiation monitoring services during operation, planning of works and development of scenarios.
The paper presents some applications of this system on real data over three years, from March 2009 to February 2012. (authors)
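The lowest-dose route search described above amounts to a shortest-path problem in which edge weights are accumulated dose rather than distance. A minimal sketch of the idea, using a generic Dijkstra search over a hypothetical grid of dose-rate samples (the function, grid layout, and cost model are illustrative assumptions, not the authors' implementation):

```python
import heapq

def lowest_dose_route(dose_rate, start, goal, speed=1.0):
    """Dijkstra over a grid of dose-rate samples (one value per cell).

    Hypothetical sketch: edge cost = mean dose rate of the two adjacent
    cells divided by walking speed, so the 'shortest' path is the route
    accumulating the least dose, not the least distance.
    """
    rows, cols = len(dose_rate), len(dose_rate[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                step = (dose_rate[r][c] + dose_rate[nr][nc]) / 2 / speed
                nd = d + step
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Reconstruct the route from goal back to start.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]
```

On a grid with a high-dose corridor, the search detours around the hot cells even though the detour is longer in distance.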
Visual input enhances selective speech envelope tracking in auditory cortex at a "cocktail party".
Zion Golumbic, Elana; Cogan, Gregory B; Schroeder, Charles E; Poeppel, David
2013-01-23
Our ability to selectively attend to one auditory signal amid competing input streams, epitomized by the "Cocktail Party" problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared with responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance. We investigated whether congruent visual input of an attended speaker enhances cortical selectivity in auditory cortex, leading to diminished representation of ignored stimuli. We recorded magnetoencephalographic signals from human participants as they attended to segments of natural continuous speech. Using two complementary methods of quantifying the neural response to speech, we found that viewing a speaker's face enhances the capacity of auditory cortex to track the temporal speech envelope of that speaker. This mechanism was most effective in a Cocktail Party setting, promoting preferential tracking of the attended speaker, whereas without visual input no significant attentional modulation was observed. These neurophysiological results underscore the importance of visual input in resolving perceptual ambiguity in a noisy environment. Since visual cues in speech precede the associated auditory signals, they likely serve a predictive role in facilitating auditory processing of speech, perhaps by directing attentional resources to appropriate points in time when to-be-attended acoustic input is expected to arrive.
NASA Astrophysics Data System (ADS)
Lee, O. A.
2016-12-01
Significant changes to the Arctic marine environment are anticipated as a result of decreasing sea ice and the increasing anthropogenic activity that may accompany greater access to ice-free waters. Two different collaboration efforts between scientists and artists on projects related to changes in Alaskan Arctic waters are compared to present the different outcomes of two collaboration strategies. The first collaboration was a funded project to develop visualizations of change on the North Slope as part of an outreach effort for the North Slope Science Initiative Scenarios project. The second was a voluntary art-science collaboration to develop artwork about changing sea-ice habitat for walrus as one contribution to a featured art show during the 2016 Arctic Science Summit Week. Both collaborations resulted in compelling visualizations. However, the funded collaboration provided for more iterative discussions between the scientist and the collaborators on the film and animation products throughout the duration of the project, which ensured that the science remained an important focal point. In contrast, the product of the voluntary collaboration was primarily driven by the artist's perspective, although discussions with the scientist played a role in connecting the content of the three panels in the final art and sculpture piece. This comparison of different levels of scientist involvement and of the resources used to develop the visualizations highlights the importance of defining the intended audience and expectations for all collaborators early.
Eye movements, visual search and scene memory, in an immersive virtual environment.
Kit, Dmitry; Katz, Leor; Sullivan, Brian; Snyder, Kat; Ballard, Dana; Hayhoe, Mary
2014-01-01
Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images, whereas natural experience entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, as in two-dimensional contexts, viewers rapidly learn the locations of objects in the environment over time and use spatial memory to guide search. Incidental fixations did not provide an obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day of experience in the environment, previous search items changed in color. These items were fixated with increased probability relative to control objects, suggesting that memory-guided prioritization (or surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency.
The virtual windtunnel: Visualizing modern CFD datasets with a virtual environment
NASA Technical Reports Server (NTRS)
Bryson, Steve
1993-01-01
This paper describes work in progress on a virtual environment designed for the visualization of precomputed fluid flows. The overall problems involved in the visualization of fluid flow are summarized, including computational, data-management, and interface issues, and the requirements for a flow-visualization environment are outlined. Many aspects of the implementation of the virtual windtunnel were uniquely determined by these requirements. The user interface is described in detail.
NASA Astrophysics Data System (ADS)
West, Ruth G.; Margolis, Todd; Prudhomme, Andrew; Schulze, Jürgen P.; Mostafavi, Iman; Lewis, J. P.; Gossmann, Joachim; Singh, Rajvikram
2014-02-01
Scalable Metadata Environments (MDEs) are an artistic approach to designing immersive environments for large-scale data exploration in which users interact with data by forming multiscale patterns that they alternately disrupt and reform. Developed and prototyped as part of an art-science research collaboration, an MDE is defined as a 4D virtual environment structured by quantitative and qualitative metadata describing multidimensional data collections. Entire data sets (e.g., tens of millions of records) can be visualized and sonified at multiple scales and at different levels of detail so that they can be explored interactively in real time within MDEs. They are designed to reflect similarities and differences in the underlying data or metadata such that patterns can be visually/aurally sorted in an exploratory fashion by an observer who is not familiar with the details of the mapping from data to visual, auditory, or dynamic attributes. While many approaches to visual and auditory data mining exist, MDEs are distinct in that they utilize qualitative and quantitative data and metadata to construct multiple interrelated conceptual coordinate systems. These "regions" function as conceptual lattices for scalable auditory and visual representations within virtual environments computationally driven by multi-GPU CUDA-enabled fluid-dynamics systems.
The effectiveness of visual art on environment in nursing home.
Chang, Chia-Hsiu; Lu, Ming-Shih; Lin, Tsyr-En; Chen, Chung-Hey
2013-06-01
This Taiwan study investigated the effect of a visual-art-based friendly environment on nursing home residents' satisfaction with their living environment. A pre-experimental design was used: thirty-three residents of a nursing home were recruited into a one-group pre- and post-test study. The four-floor living environment was integrated using visual art, reminiscence, and gardening based on the local culture and history. Each floor was given a different theme, one familiar to most of the residents on that floor. The Satisfaction with Living Environment at Nursing Home Scale (SLE-NHS) was developed to measure outcomes. Of the 33 participants recruited, 27 (81.8%) were women and 6 (18.2%) were men. Their mean age was 79.24 ± 7.40 years, and 48.5% were severely dependent in activities of daily living. The SLE-NHS showed adequate reliability and validity; its three domains were generated and defined using factor analysis. After the visual-art-based intervention, the score on the "recalling old memories" subscale was significantly higher (t = -13.32, p < .001). However, there were no significant score changes on the "convenience" and "pretty and pleasurable" subscales. In general, the participants were satisfied with the redesigned environment and felt happy in the sunny rooms. Visual art in a nursing home is a novel method for representing the local culture and stressing the spiritual value of the elderly residents who helped create it. Older adults' aesthetic activities through visual art, including reminiscence and local culture, may enrich their spirits in later life and have been shown to improve their satisfaction with their living environment. The SLE-NHS is a useful tool for evaluating that satisfaction. © 2013 Sigma Theta Tau International.
Understanding the visual skills and strategies of train drivers in the urban rail environment.
Naweed, Anjum; Balakrishnan, Ganesh
2014-01-01
Due to the growth of information in the urban rail environment, there is a need to better understand the ergonomics profile underpinning visual behaviours in train drivers. The aim of this study was to examine the tasks and activities of urban/metropolitan passenger train drivers in order to better understand the nature of the visual demands in their task activities. Data were collected from 34 passenger train drivers in four different Australian states. The research approach used a novel participative ergonomics methodology that fused interviews and observations with generative tools, and data analysis was conducted thematically. Results suggested that participants did not so much drive their trains as manage the intensity of visually demanding work in their environment. The density of this information and the opacity of the task invoked an ergonomics profile more closely aligned with diagnosis and error detection than with actual train regulation. The paper discusses the relative proportion of strategies corresponding with specific tasks, the visual-perceptual load in substantive activities, and the visual skills required for navigating the urban rail environment. These findings provide the basis for developing measures of complexity to further specify the visual demands of passenger train driving.
Adequacy of the Regular Early Education Classroom Environment for Students with Visual Impairment
ERIC Educational Resources Information Center
Brown, Cherylee M.; Packer, Tanya L.; Passmore, Anne
2013-01-01
This study describes the classroom environment that students with visual impairment typically experience in regular Australian early education. Adequacy of the classroom environment (teacher training and experience, teacher support, parent involvement, adult involvement, inclusive attitude, individualization of the curriculum, physical…
A Signal Detection Theory Approach to Evaluating Oculometer Data Quality
NASA Technical Reports Server (NTRS)
Latorella, Kara; Lynn, William, III; Barry, John S.; Kelly, Lon; Shih, Ming-Yun
2013-01-01
Currently, data quality is described in terms of spatial and temporal accuracy and precision [Holmqvist et al., in press]. While this approach provides precise errors in pixels or visual angle, experiments are often more concerned with whether subjects' points of gaze can be said to be reliable with respect to experimentally relevant areas of interest (AOIs). This paper proposes a method to characterize oculometer data quality using Signal Detection Theory (SDT) [Marcum 1947]. SDT classification results in four cases: Hit (correct report of a signal), Miss (failure to report a signal), False Alarm (a signal falsely reported), and Correct Reject (absence of a signal correctly reported). A technique is proposed in which subjects are directed to look at points inside and outside of an AOI, and the resulting points of gaze (POGs) are classified as Hits (points known to be internal to an AOI are classified as such), Misses (AOI points are not indicated as such), False Alarms (points external to AOIs are indicated as in the AOI), or Correct Rejects (points external to the AOI are indicated as such). SDT metrics describe performance in terms of discriminability, sensitivity, and specificity. This paper presentation will provide the procedure for conducting this assessment and an example of data collected for AOIs in a simulated flightdeck environment.
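The SDT classification above reduces to a few standard summary statistics once gaze samples have been counted into the four cases. A minimal sketch (the function name and the log-linear correction for extreme rates are illustrative choices, not from the paper):

```python
from statistics import NormalDist

def sdt_metrics(hits, misses, false_alarms, correct_rejects):
    """Summarize gaze-vs-AOI classification counts with SDT metrics.

    Returns (hit rate, false-alarm rate, d').  A log-linear correction
    keeps the rates strictly between 0 and 1 so the z-scores are finite
    even when a subject makes no misses or no false alarms.
    """
    n_signal = hits + misses                 # trials with gaze directed into the AOI
    n_noise = false_alarms + correct_rejects # trials with gaze directed outside it
    hr = (hits + 0.5) / (n_signal + 1)
    far = (false_alarms + 0.5) / (n_noise + 1)
    z = NormalDist().inv_cdf                 # inverse standard-normal CDF
    return hr, far, z(hr) - z(far)           # d' = z(HR) - z(FAR)
```

Higher d' indicates that the oculometer discriminates in-AOI from out-of-AOI gaze more reliably.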
plas.io: Open Source, Browser-based WebGL Point Cloud Visualization
NASA Astrophysics Data System (ADS)
Butler, H.; Finnegan, D. C.; Gadomski, P. J.; Verma, U. K.
2014-12-01
Point cloud data, in the form of Light Detection and Ranging (LiDAR), RADAR, or semi-global matching (SGM) image processing, are rapidly becoming a foundational data type for quantifying and characterizing geospatial processes. Visualization of these data, due to their overall volume and irregular arrangement, is often difficult. Technological advancements in web browsers, in the form of WebGL and HTML5, have made ubiquitously available the interactivity and visualization capabilities that once existed only in desktop software. plas.io is an open-source JavaScript application that provides point cloud visualization, exploitation, and compression features in a web-browser platform, reducing the reliance on client-based desktop applications. The wide reach of WebGL and browser-based technologies means plas.io's capabilities can be delivered to a diverse list of devices, from phones and tablets to high-end workstations, with very little custom software development. These properties make plas.io an ideal open platform for researchers and software developers to communicate visualizations of complex and rich point cloud data to devices to which everyone has easy access.
Visual Landmarks Facilitate Rodent Spatial Navigation in Virtual Reality Environments
ERIC Educational Resources Information Center
Youngstrom, Isaac A.; Strowbridge, Ben W.
2012-01-01
Because many different sensory modalities contribute to spatial learning in rodents, it has been difficult to determine whether spatial navigation can be guided solely by visual cues. Rodents moving within physical environments with visual cues engage a variety of nonvisual sensory systems that cannot be easily inhibited without lesioning brain…
Visualizing vascular structures in virtual environments
NASA Astrophysics Data System (ADS)
Wischgoll, Thomas
2013-01-01
In order to learn more about the cause of coronary heart diseases and develop diagnostic tools, the extraction and visualization of vascular structures from volumetric scans for further analysis is an important step. By determining a geometric representation of the vasculature, the geometry can be inspected and additional quantitative data calculated and incorporated into the visualization of the vasculature. To provide a more user-friendly visualization tool, virtual environment paradigms can be utilized. This paper describes techniques for interactive rendering of large-scale vascular structures within virtual environments. This can be applied to almost any virtual environment configuration, such as CAVE-type displays. Specifically, the tools presented in this paper were tested on a Barco I-Space and a large 62x108 inch passive projection screen with a Kinect sensor for user tracking.
Electric field effects on a near-critical fluid in microgravity
NASA Technical Reports Server (NTRS)
Zimmerli, G.; Wilkinson, R. A.; Ferrell, R. A.; Hao, H.; Moldover, M. R.
1994-01-01
The effects of an electric field on a sample of SF6 fluid in the vicinity of the liquid-vapor critical point are studied. The isothermal increase of the density of a near-critical sample as a function of the applied electric field was measured. In agreement with theory, this electrostriction effect diverges near the critical point as the isothermal compressibility diverges. Also as expected, turning on the electric field in the presence of density gradients can induce flow within the fluid, in a way analogous to turning on gravity. These effects were observed in a microgravity environment by using the Critical Point Facility, which flew onboard the Space Shuttle Columbia in July 1994 as part of the Second International Microgravity Laboratory Mission. Both visual and interferometric images of two separate sample cells were obtained by means of video downlink. The interferometric images provided quantitative information about the density distribution throughout the sample. The electric field was generated by applying 500 volts to a fine wire passing through the critical fluid.
Context effects on smooth pursuit and manual interception of a disappearing target.
Kreyenmeier, Philipp; Fooken, Jolande; Spering, Miriam
2017-07-01
In our natural environment, we interact with moving objects that are surrounded by richly textured, dynamic visual contexts. Yet most laboratory studies on vision and movement show visual objects in front of uniform gray backgrounds. Context effects on eye movements have been widely studied, but it is less well known how visual contexts affect hand movements. Here we ask whether eye and hand movements integrate motion signals from target and context similarly or differently, and whether context effects on eye and hand change over time. We developed a track-intercept task requiring participants to track the initial launch of a moving object ("ball") with smooth pursuit eye movements. The ball disappeared after a brief presentation, and participants had to intercept it in a designated "hit zone." In two experiments (n = 18 human observers each), the ball was shown in front of a uniform or a textured background that either was stationary or moved along with the target. Eye and hand movement latencies and speeds were similarly affected by the visual context, but eye and hand interception (eye position at time of interception, and hand interception timing error) did not differ significantly between context conditions. Eye and hand interception timing errors were strongly correlated on a trial-by-trial basis across all context conditions, highlighting the close relation between these responses in manual interception tasks. Our results indicate that visual contexts similarly affect eye and hand movements but that these effects may be short-lasting, affecting movement trajectories more than movement end points. NEW & NOTEWORTHY In a novel track-intercept paradigm, human observers tracked a briefly shown object moving across a textured, dynamic context and intercepted it with their finger after it had disappeared.
Context motion significantly affected eye and hand movement latency and speed, but not interception accuracy; eye and hand position at interception were correlated on a trial-by-trial basis. Visual context effects may be short-lasting, affecting movement trajectories more than movement end points. Copyright © 2017 the American Physiological Society.
Mistaken identity? Visual similarities of marine debris to natural prey items of sea turtles.
Schuyler, Qamar A; Wilcox, Chris; Townsend, Kathy; Hardesty, B Denise; Marshall, N Justin
2014-05-09
There are two predominant hypotheses as to why animals ingest plastic: 1) they are opportunistic feeders, eating plastic when they encounter it, and 2) they eat plastic because it resembles prey items. To assess which hypothesis is most likely, we created a model sea turtle visual system and used it to analyse debris samples from beach surveys and from necropsied turtles. We investigated colour, contrast, and luminance of the debris items as they would appear to the turtle. We also incorporated measures of texture and translucency to determine which of the two hypotheses is more plausible as a driver of selectivity in green sea turtles. Turtles preferred more flexible and translucent items to what was available in the environment, lending support to the hypothesis that they prefer debris that resembles prey, particularly jellyfish. They also ate fewer blue items, suggesting that such items may be less conspicuous against the background of open water where they forage. Using visual modelling we determined the characteristics that drive ingestion of marine debris by sea turtles, from the point of view of the turtles themselves. This technique can be utilized to determine debris preferences of other visual predators, and help to more effectively focus management or remediation actions.
The effect of colour and design in labour and delivery: A scientific approach
NASA Astrophysics Data System (ADS)
Duncan, Jane
2011-03-01
This study was part of a broader three-year research project at London's Chelsea and Westminster Hospital, "A Study of the Effect of the Visual and Performing Arts in Healthcare", exploring whether the visual and performing arts have any measurable effect on physiological, psychological and biological outcomes of clinical significance for patient recovery, and whether they provide a potential cost-saving benefit to the NHS. In this specific study of women in labour, two measurements were identified as having clinical significance for achieving optimal outcomes during labour and delivery: length of labour and frequency of requirement for analgesia. A screen was designed to hide emergency equipment, with the joint aim of reducing women's anxieties and (through visual art) acting as a focal point of attention and distraction during labour, thus diminishing requirements for analgesia. Results demonstrated, in the presence of the screen, a statistically significant shortening of the duration of labour by 2.1 h, with the frequency of requests for epidural analgesia 7% lower in the study group than in the control group. The significant clinical outcomes of this research provide evidence of the value of integrating visual art into the environment of a labour and delivery room, improving the quality of the maternity service and potentially delivering real cost-saving benefits to hospitals.
Planning in subsumption architectures
NASA Technical Reports Server (NTRS)
Chalfant, Eugene C.
1994-01-01
A subsumption planner using a parallel distributed computational paradigm based on the subsumption architecture for control of real-world-capable robots is described. Virtual sensor state space is used as a planning tool to visualize the robot's anticipated effect on its environment. Decision sequences are generated based on the environmental situation expected at the time the robot must commit to a decision. Between decision points, the robot performs in a preprogrammed manner. A rudimentary, domain-specific partial world model contains enough information to extrapolate the end results of the rote behavior between decision points. A collective network of predictors operates in parallel with the reactive network, forming a recurrent network that generates plans as a hierarchy. Details of a plan segment are generated only when its execution is imminent. The use of the subsumption planner is demonstrated by a simple maze navigation problem.
Simulating Visual Attention Allocation of Pilots in an Advanced Cockpit Environment
NASA Technical Reports Server (NTRS)
Frische, F.; Osterloh, J.-P.; Luedtke, A.
2011-01-01
This paper describes the results of experiments conducted with human line pilots and a cognitive pilot model during interaction with a new 4D Flight Management System (FMS). The aim of these experiments was to gather human pilot behavior data in order to calibrate the behavior of the model. Human behavior is mainly triggered by visual perception; thus, the main aspect was to set up a profile of human pilots' visual attention allocation in a cockpit environment containing the new FMS. We first performed statistical analyses of eye-tracker data and then compared our results to published results of comparable analyses in standard cockpit environments. The comparison showed a significant influence of the new system on the visual performance of human pilots. Analyses of the pilot model's visual performance were also performed; a comparison to human pilots' visual performance revealed important potential for improvement.
NASA Astrophysics Data System (ADS)
Duong, Tuan A.; Duong, Nghi; Le, Duong
2017-01-01
In this paper, we present an integration technique using a bio-inspired, control-based visual and olfactory receptor system to search for elusive targets in practical environments where the targets cannot be seen clearly in either sensory data stream alone. The bio-inspired visual system is based on a model of the extended visual pathway, consisting of saccadic eye movements and the visual pathway (vertebrate retina, lateral geniculate nucleus, and visual cortex), to enable powerful target detection from noisy, partial, and incomplete visual data. The olfactory receptor algorithm, namely spatial-invariant independent component analysis, which was developed from Caltech olfactory receptor-electronic nose (enose) data, is adopted to enable odorant target detection in an unknown environment. The integration of the two systems is a vital approach and sets a cornerstone for effective, low-cost miniaturized UAVs or fly robots for future DOD and NASA missions, as well as for security systems in Internet of Things environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Imura, K; Fujibuchi, T; Hirata, H
Purpose: Patient set-up skills in the radiotherapy treatment room have a great influence on the treatment effect of image-guided radiotherapy. In this study, we developed a training system for improving practical set-up skills, including rotational correction, in a virtual environment away from the pressure of the actual treatment room, using a three-dimensional computer graphics (3DCG) engine. Methods: The treatment room for external-beam radiotherapy was reproduced in the virtual environment using a 3DCG engine (Unity). The viewpoints for performing patient set-up in the virtual treatment room were arranged on both sides of the virtual operable treatment couch to reflect actual performance by two clinical staff members. The position errors relative to the mechanical isocenter, based on alignment between the skin marker and laser on the virtual patient model, were displayed as numerical values expressed in SI units and as directional arrow marks. The rotational errors, calculated with a point on the virtual body axis as the center of each rotation axis in the virtual environment, were corrected by adjusting the rotational position of a body phantom wearing a gyroscope-equipped belt on a table in real space. These rotational errors were evaluated by describing vector cross-product operations and trigonometric functions in the script for the patient set-up technique. Results: The viewpoints in the virtual environment allowed individual users to visually recognize the position discrepancy relative to the mechanical isocenter until positional errors of several millimeters were eliminated. The rotational errors between the two points calculated with the center point could be efficiently corrected, and the minimum correction could be displayed mathematically by utilizing the script.
Conclusion: By utilizing the script to correct rotational errors as well as providing accurate positional recognition for the patient set-up technique, the training system developed for improving patient set-up skills enabled individual users to identify efficient positional correction methods easily.
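The cross-product and trigonometric operations mentioned in the Methods can be sketched generically. Assuming the measured and planned orientations are available as 3D direction vectors, the rotation axis follows from their cross product and the angle from atan2 of the cross-product magnitude against the dot product (an illustrative sketch, not the authors' actual script):

```python
import math

def rotation_correction(measured, planned):
    """Rotation (unit axis, angle in degrees) taking `measured` onto `planned`.

    Illustrative sketch: the axis is the normalized cross product
    measured x planned, and the angle is atan2(|cross|, dot), which is
    numerically stable for both small and near-180-degree rotations.
    """
    ax = (measured[1] * planned[2] - measured[2] * planned[1],
          measured[2] * planned[0] - measured[0] * planned[2],
          measured[0] * planned[1] - measured[1] * planned[0])
    dot = sum(m * p for m, p in zip(measured, planned))
    norm = math.sqrt(sum(c * c for c in ax))
    angle = math.degrees(math.atan2(norm, dot))
    # Parallel vectors give a zero-length axis; report a null axis then.
    axis = tuple(c / norm for c in ax) if norm else (0.0, 0.0, 0.0)
    return axis, angle
```

For example, rotating a vector along x onto a vector along y requires a 90-degree turn about the z axis.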
Conservation implications of anthropogenic impacts on visual communication and camouflage.
Delhey, Kaspar; Peters, Anne
2017-02-01
Anthropogenic environmental impacts can disrupt the sensory environment of animals and affect important processes from mate choice to predator avoidance. Currently, these effects are best understood for the auditory and chemosensory modalities, and recent reviews highlight their importance for conservation. We examined how anthropogenic changes to the visual environment (ambient light, transmission, and backgrounds) affect visual communication and camouflage, and considered the implications of these effects for conservation. Human changes to the visual environment can increase predation risk by affecting camouflage effectiveness, lead to maladaptive patterns of mate choice, and disrupt mutualistic interactions between pollinators and plants. Implications for conservation are particularly evident for disrupted camouflage due to its tight links with survival. The conservation importance of impaired visual communication is less well documented. The effects of anthropogenic changes on visual communication and camouflage may be severe when they affect critical processes such as pollination or species recognition. However, when impaired mate choice does not lead to hybridization, the conservation consequences are less clear. We suggest that the demographic effects of human impacts on visual communication and camouflage will be particularly strong when human-induced modifications to the visual environment are evolutionarily novel (i.e., very different from natural variation); when affected species and populations have low levels of intraspecific (genotypic and phenotypic) variation and of behavioral, sensory, or physiological plasticity; and when the processes affected are directly related to survival (camouflage), species recognition, or the number of offspring produced, rather than offspring quality or attractiveness. Our findings suggest that anthropogenic effects on the visual environment may be of similar conservation importance as anthropogenic effects on other sensory modalities.
© 2016 Society for Conservation Biology.
Integrated Data Visualization and Virtual Reality Tool
NASA Technical Reports Server (NTRS)
Dryer, David A.
1998-01-01
The Integrated Data Visualization and Virtual Reality Tool (IDVVRT) Phase II effort was for the design and development of an innovative Data Visualization Environment Tool (DVET) for NASA engineers and scientists, enabling them to visualize complex multidimensional and multivariate data in a virtual environment. The objectives of the project were to: (1) demonstrate the transfer and manipulation of standard engineering data in a virtual world; (2) demonstrate the effects of design changes using finite element analysis tools; and (3) determine the training and engineering design and analysis effectiveness of the visualization system.
Analysing the physics learning environment of visually impaired students in high schools
NASA Astrophysics Data System (ADS)
Toenders, Frank G. C.; de Putter-Smits, Lesley G. A.; Sanders, Wendy T. M.; den Brok, Perry
2017-07-01
Although visually impaired students attend regular high school, their enrolment in advanced science classes is dramatically low. In our research we evaluated the physics learning environment of a blind high school student in a regular Dutch high school. For visually impaired students to grasp physics concepts, time and additional materials to support the learning process are key. Time for teachers to develop teaching methods for such students is scarce. Suggestions for changes to the learning environment and to the materials used are given.
Li, Wenxun; Matin, Leonard
2005-03-01
Measurements were made of the accuracy of open-loop manual pointing and height-matching to a visual target whose elevation was perceptually mislocalized. Accuracy increased linearly with distance of the hand from the body, approaching complete accuracy at full extension; with the hand close to the body (within the midfrontal plane), the manual errors equaled the magnitude of the perceptual mislocalization. The visual inducing stimulus responsible for the perceptual errors was a single pitched-from-vertical line that was long (50 degrees), eccentrically-located (25 degrees horizontal), and viewed in otherwise total darkness. The line induced perceptual errors in the elevation of a small, circular visual target set to appear at eye level (VPEL), a setting that changed linearly with the change in the line's visual pitch as has been previously reported (pitch: -30 degrees top-backward to 30 degrees top-forward); the elevation errors measured by VPEL settings varied systematically with pitch through an 18 degrees range. In a fourth experiment the visual inducing stimulus responsible for the perceptual errors was shown to induce separately-measured errors in the manual setting of the arm to feel horizontal that were also distance-dependent. The distance-dependence of the visually-induced changes in felt arm position accounts quantitatively for the distance-dependence of the manual errors in pointing/reaching and height matching to the visual target: The near equality of the changes in felt horizontal and changes in pointing/reaching with the finger at the end of the fully extended arm is responsible for the manual accuracy of the fully-extended point; with the finger in the midfrontal plane their large difference is responsible for the inaccuracies of the midfrontal-plane point.
The results are inconsistent with the widely-held but controversial theory that visual spatial information employed for perception and action are dissociated and different with no illusory visual influence on action. A different two-system theory, the Proximal/Distal model, employing the same signals from vision and from the body-referenced mechanism with different weights for different hand-to-body distances, accounts for both the perceptual and the manual results in the present experiments.
The social computing room: a multi-purpose collaborative visualization environment
NASA Astrophysics Data System (ADS)
Borland, David; Conway, Michael; Coposky, Jason; Ginn, Warren; Idaszak, Ray
2010-01-01
The Social Computing Room (SCR) is a novel collaborative visualization environment for viewing and interacting with large amounts of visual data. The SCR consists of a square room with 12 projectors (3 per wall) used to display a single 360-degree desktop environment that provides a large physical real estate for arranging visual information. The SCR was designed to be cost-effective, collaborative, configurable, widely applicable, and approachable for naive users. Because the SCR displays a single desktop, a wide range of applications is easily supported, making it possible for a variety of disciplines to take advantage of the room. We provide a technical overview of the room and highlight its application to scientific visualization, arts and humanities projects, research group meetings, and virtual worlds, among other uses.
Dogs account for body orientation but not visual barriers when responding to pointing gestures
MacLean, Evan L.; Krupenye, Christopher; Hare, Brian
2014-01-01
In a series of 4 experiments we investigated whether dogs use information about a human’s visual perspective when responding to pointing gestures. While there is evidence that dogs may know what humans can and cannot see, and that they flexibly use human communicative gestures, it is unknown if they can integrate these two skills. In Experiment 1 we first determined that dogs were capable of using basic information about a human’s body orientation (indicative of her visual perspective) in a point following context. Subjects were familiarized with experimenters who either faced the dog and accurately indicated the location of hidden food, or faced away from the dog and (falsely) indicated the un-baited container. In test trials these cues were pitted against one another and dogs tended to follow the gesture from the individual who faced them while pointing. In Experiments 2–4 the experimenter pointed ambiguously toward two possible locations where food could be hidden. On test trials a visual barrier occluded the pointer’s view of one container, while dogs could always see both containers. We predicted that if dogs could take the pointer’s visual perspective they should search in the only container visible to the pointer. This hypothesis was supported only in Experiment 2. We conclude that while dogs are skilled both at following human gestures, and exploiting information about others’ visual perspectives, they may not integrate these skills in the manner characteristic of human children. PMID:24611643
Visual Search for Wines with a Triangle on the Label in a Virtual Store
Zhao, Hui; Huang, Fuxing; Spence, Charles; Wan, Xiaoang
2017-01-01
Two experiments were conducted in a virtual reality (VR) environment in order to investigate participants’ in-store visual search for bottles of wines displaying a prominent triangular shape on their label. The experimental task involved virtually moving along a wine aisle in a virtual supermarket while searching for the wine bottle on the shelf that had a different triangle on its label from the other bottles. The results of Experiment 1 revealed that the participants identified the bottle with a downward-pointing triangle on its label more rapidly than when looking for an upward-pointing triangle on the label instead. This finding replicates the downward-pointing triangle superiority (DPTS) effect, though the magnitude of this effect was more pronounced in the first as compared to the second half of the experiment, suggesting a modulating role of practice. The results of Experiment 2 revealed that the DPTS effect was also modulated by the location of the target on the shelf. Interestingly, however, the results of a follow-up survey demonstrate that the orientation of the triangle did not influence the participants’ evaluation of the wine bottles. Taken together, these findings reveal how in-store the attention of consumers might be influenced by the design elements in product packaging. These results therefore suggest that shopping in a virtual supermarket might offer a practical means of assessing the shelf standout of product packaging, which has important implications for food marketing. PMID:29326624
Learning optimal eye movements to unusual faces
Peterson, Matthew F.; Eckstein, Miguel P.
2014-01-01
Eye movements, which guide the fovea’s high resolution and computational power to relevant areas of the visual scene, are integral to efficient, successful completion of many visual tasks. How humans modify their eye movements through experience with their perceptual environments, and its functional role in learning new tasks, has not been fully investigated. Here, we used a face identification task where only the mouth discriminated exemplars to assess if, how, and when eye movement modulation may mediate learning. By interleaving trials of unconstrained eye movements with trials of forced fixation, we attempted to separate the contributions of eye movements and covert mechanisms to performance improvements. Without instruction, a majority of observers substantially increased accuracy and learned to direct their initial eye movements towards the optimal fixation point. The proximity of an observer’s default face identification eye movement behavior to the new optimal fixation point and the observer’s peripheral processing ability were predictive of performance gains and eye movement learning. After practice in a subsequent condition in which observers were directed to fixate different locations along the face, including the relevant mouth region, all observers learned to make eye movements to the optimal fixation point. In this fully learned state, augmented fixation strategy accounted for 43% of total efficiency improvements while covert mechanisms accounted for the remaining 57%. The findings suggest a critical role for eye movement planning to perceptual learning, and elucidate factors that can predict when and how well an observer can learn a new task with unusual exemplars. PMID:24291712
A Preliminary Work on Layout SLAM for Reconstruction of Indoor Corridor Environments
NASA Astrophysics Data System (ADS)
Baligh Jahromi, A.; Sohn, G.; Shahbazi, M.; Kang, J.
2017-09-01
We propose a real-time indoor corridor layout estimation method based on visual Simultaneous Localization and Mapping (SLAM). The proposed method adopts the Manhattan World Assumption in indoor spaces and uses detected single-image straight line segments and their corresponding orthogonal vanishing points to improve the feature matching scheme in the adopted visual SLAM system. Using the proposed real-time indoor corridor layout estimation method, the system is able to build an online sparse map of structural corner point features. The challenges presented by abrupt camera rotation in 3D space are successfully handled by matching the vanishing directions of consecutive video frames on the Gaussian sphere. Using single-image indoor layout features to initialize the system permits the proposed method to perform real-time layout estimation and camera localization in indoor corridor areas. For matching layout structural corner points, we adopted features that are invariant under scale, translation, and rotation. We propose a new feature matching cost function that considers both local and global context information. The cost function consists of a unary term, which measures pixel-to-pixel orientation differences of the matched corners, and a binary term, which measures the angle differences between directly connected layout corner features. We performed experiments on real scenes at York University campus buildings and on the available RAWSEEDS dataset. The results indicate that the proposed method performs robustly while producing very limited position and orientation errors.
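The two-term matching cost described above (a unary term over pixel-to-pixel orientation differences of matched corners, plus a binary term over angle differences between directly connected corners) can be sketched as follows; the data layout, field names, and weights are illustrative assumptions, not the authors' implementation.

```python
import math

def unary_cost(corner_a, corner_b):
    # Orientation difference (radians) between two matched corner observations.
    return abs(corner_a["orientation"] - corner_b["orientation"])

def binary_cost(pair_prev, pair_curr):
    # Difference in the angle of the segment joining two directly
    # connected layout corners, measured in each frame.
    def angle(p, q):
        return math.atan2(q["y"] - p["y"], q["x"] - p["x"])
    return abs(angle(*pair_prev) - angle(*pair_curr))

def matching_cost(match, edges, w_unary=1.0, w_binary=0.5):
    """Total cost of a candidate corner matching between two frames.

    match: list of (corner_prev, corner_curr) dict pairs
    edges: index pairs (i, j) of directly connected layout corners
    """
    cost = w_unary * sum(unary_cost(a, b) for a, b in match)
    for i, j in edges:
        cost += w_binary * binary_cost(
            (match[i][0], match[j][0]), (match[i][1], match[j][1]))
    return cost
```

A lower total cost would indicate a more consistent corner correspondence between consecutive frames.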
High-Performance 3D Articulated Robot Display
NASA Technical Reports Server (NTRS)
Powell, Mark W.; Torres, Recaredo J.; Mittman, David S.; Kurien, James A.; Abramyan, Lucy
2011-01-01
In the domain of telerobotic operations, the primary challenge facing the operator is to understand the state of the robotic platform. One key aspect of understanding the state is to visualize the physical location and configuration of the platform. As there is a wide variety of mobile robots, the requirements for visualizing their configurations vary diversely across different platforms. There can also be diversity in the mechanical mobility, such as wheeled, tracked, or legged mobility over surfaces. Adaptable 3D articulated robot visualization software can accommodate a wide variety of robotic platforms and environments. The visualization has been used for surface, aerial, space, and water robotic vehicle visualization during field testing. It has been used to enable operations of wheeled and legged surface vehicles, and can be readily adapted to facilitate other mechanical mobility solutions. The 3D visualization can render an articulated 3D model of a robotic platform for any environment. Given the model, the software receives real-time telemetry from the avionics system onboard the vehicle and animates the robot visualization to reflect the telemetered physical state. This is used to track the position and attitude in real time to monitor the progress of the vehicle as it traverses its environment. It is also used to monitor the state of any or all articulated elements of the vehicle, such as arms, legs, or control surfaces. The visualization can also render other sorts of telemetered states visually, such as stress or strains that are measured by the avionics. Such data can be used to color or annotate the virtual vehicle to indicate nominal or off-nominal states during operation. The visualization is also able to render the simulated environment where the vehicle is operating. 
For surface and aerial vehicles, it can render the terrain under the vehicle as the avionics sends it location information (GPS, odometry, or star tracking), and locate the vehicle over or on the terrain correctly. For long traverses over terrain, the visualization can stream in terrain piecewise in order to maintain the current area of interest for the operator without incurring unreasonable resource constraints on the computing platform. The visualization software is designed to run on laptops that can operate in field-testing environments without Internet access, which is a frequently encountered situation when testing in remote locations that simulate planetary environments such as Mars and other planetary bodies.
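As a minimal illustration of the telemetry-driven animation described above, the sketch below updates an articulated model's joint state from an incoming telemetry packet; the class and field names are hypothetical, not taken from the actual software.

```python
from dataclasses import dataclass, field

@dataclass
class Joint:
    name: str
    angle: float = 0.0  # joint angle in radians

@dataclass
class ArticulatedModel:
    joints: dict = field(default_factory=dict)

    def apply_telemetry(self, packet):
        """Update joint state from a telemetry packet ({joint_name: angle}).

        Unknown joint names are ignored, as a real system might do when a
        packet carries channels the model does not render.
        """
        for name, angle in packet.items():
            if name in self.joints:
                self.joints[name].angle = angle

# Hypothetical two-joint arm driven by one telemetry update.
model = ArticulatedModel({"arm_shoulder": Joint("arm_shoulder"),
                          "arm_elbow": Joint("arm_elbow")})
model.apply_telemetry({"arm_shoulder": 0.7, "arm_elbow": -1.2})
```

In the real software, each update of this kind would trigger a re-render of the 3D model so the visualization tracks the telemetered physical state.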
Plastic Bags and Environmental Pollution
ERIC Educational Resources Information Center
Sang, Anita Ng Heung
2010-01-01
The "Hong Kong Visual Arts Curriculum Guide," covering Primary 1 to Secondary 3 grades (Curriculum Development Committee, 2003), points to three domains of learning in visual arts: (1) visual arts knowledge; (2) visual arts appreciation and criticism; and (3) visual arts making. The "Guide" suggests learning should develop…
Visual Analytics in Public Safety: Example Capabilities for Example Government Agencies
2011-10-01
is not limited to: the Police Records Information Management Environment for British Columbia (PRIME-BC), the Police Reporting and Occurrence System...and filtering for rapid identification of relevant documents - Graphical environment for visual evidence marshaling - Interactive linking and...analytical reasoning facilitated by interactive visual interfaces and integration with computational analytics. Indeed, a wide variety of technologies
ERIC Educational Resources Information Center
Prayaga, Chandra
2008-01-01
A simple interface between VPython and Microsoft (MS) Office products such as Word and Excel, controlled by Visual Basic for Applications, is described. The interface allows the preparation of content-rich, interactive learning environments by taking advantage of the three-dimensional (3D) visualization capabilities of VPython and the GUI…
Development and demonstration of autonomous behaviors for urban environment exploration
NASA Astrophysics Data System (ADS)
Ahuja, Gaurav; Fellars, Donald; Kogut, Gregory; Pacis Rius, Estrellina; Schoolov, Misha; Xydes, Alexander
2012-06-01
Under the Urban Environment Exploration project, the Space and Naval Warfare Systems Center Pacific (SSC-PAC) is maturing technologies and sensor payloads that enable man-portable robots to operate autonomously within the challenging conditions of urban environments. Previously, SSC-PAC has demonstrated robotic capabilities to navigate and localize without GPS and to map the ground floors of buildings of various sizes. SSC-PAC has since extended those capabilities to localize and map multiple multi-story buildings within a specified area. To facilitate these capabilities, SSC-PAC developed technologies that enable the robot to detect stairs/stairwells, maintain localization across multiple environments (e.g., in a 3D world, on stairs, with/without GPS), visualize data in 3D, plan paths between any two points within the specified area, and avoid 3D obstacles. These technologies have been developed as independent behaviors under the Autonomous Capabilities Suite, a behavior architecture, and demonstrated at a MOUT site at Camp Pendleton. This paper describes the perceptions and behaviors used to produce these capabilities, as well as an example demonstration scenario.
Eye Movements, Visual Search and Scene Memory, in an Immersive Virtual Environment
Sullivan, Brian; Snyder, Kat; Ballard, Dana; Hayhoe, Mary
2014-01-01
Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images. Natural experience, by contrast, entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, similar to two-dimensional contexts, viewers rapidly learn the location of objects in the environment over time and use spatial memory to guide search. Incidental fixations did not provide obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day of the experience in the environment, previous search items changed in color. These items were fixated upon with increased probability relative to control objects, suggesting that memory-guided prioritization (or Surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency. PMID:24759905
NASA Technical Reports Server (NTRS)
Lathrop, William B.; Kaiser, Mary K.
2002-01-01
Two experiments examined perceived spatial orientation in a small environment as a function of experiencing that environment under three conditions: real-world, desktop-display (DD), and head-mounted display (HMD). Across the three conditions, participants acquired two targets located on a perimeter surrounding them, and attempted to remember the relative locations of the targets. Subsequently, participants were tested on how accurately and consistently they could point in the remembered direction of a previously seen target. Results showed that participants were significantly more consistent in the real-world and HMD conditions than in the DD condition. Further, it is shown that the advantages observed in the HMD and real-world conditions were not simply due to nonspatial response strategies. These results suggest that the additional idiothetic information afforded in the real-world and HMD conditions is useful for orientation purposes in our presented task domain. Our results are relevant to interface design issues concerning tasks that require spatial search, navigation, and visualization.
Developing Guidelines for Assessing Visual Analytics Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scholtz, Jean
2011-07-01
In this paper, we develop guidelines for evaluating visual analytic environments based on a synthesis of reviews for the entries to the 2009 Visual Analytics Science and Technology (VAST) Symposium Challenge and from a user study with professional intelligence analysts. By analyzing the 2009 VAST Challenge reviews we gained a better understanding of what is important to our reviewers, both visualization researchers and professional analysts. We also report on a small user study with professional analysts to determine the important factors that they use in evaluating visual analysis systems. We then looked at guidelines developed by researchers in various domains and synthesized these into an initial set for use by others in the community. In a second part of the user study, we looked at guidelines for a new aspect of visual analytic systems: the generation of reports. Future visual analytic systems have been challenged to help analysts generate their reports. In our study we worked with analysts to understand the criteria they used to evaluate the quality of analytic reports. We propose that this knowledge will be useful as researchers look at systems to automate some of the report generation. Based on these efforts, we produced initial guidelines for evaluating visual analytic environments and for evaluating analytic reports. It is important to understand that these guidelines are initial drafts and are limited in scope because of the type of tasks for which the visual analytic systems used in the studies in this paper were designed. More research and refinement is needed by the visual analytics community to provide additional evaluation guidelines for different types of visual analytic environments.
Premotor cortex is sensitive to auditory-visual congruence for biological motion.
Wuerger, Sophie M; Parkes, Laura; Lewis, Penelope A; Crocker-Buque, Alex; Rutschmann, Roland; Meyer, Georg F
2012-03-01
The auditory and visual perception systems have developed special processing strategies for ecologically valid motion stimuli, utilizing some of the statistical properties of the real world. A well-known example is the perception of biological motion, for example, the perception of a human walker. The aim of the current study was to identify the cortical network involved in the integration of auditory and visual biological motion signals. We first determined the cortical regions of auditory and visual coactivation (Experiment 1); a conjunction analysis based on unimodal brain activations identified four regions: middle temporal area, inferior parietal lobule, ventral premotor cortex, and cerebellum. The brain activations arising from bimodal motion stimuli (Experiment 2) were then analyzed within these regions of coactivation. Auditory footsteps were presented concurrently with either an intact visual point-light walker (biological motion) or a scrambled point-light walker; auditory and visual motion in depth (walking direction) could either be congruent or incongruent. Our main finding is that motion incongruency (across modalities) increases the activity in the ventral premotor cortex, but only if the visual point-light walker is intact. Our results extend our current knowledge by providing new evidence consistent with the idea that the premotor area assimilates information across the auditory and visual modalities by comparing the incoming sensory input with an internal representation.
RAVE: Rapid Visualization Environment
NASA Technical Reports Server (NTRS)
Klumpar, D. M.; Anderson, Kevin; Simoudis, Avangelos
1994-01-01
Visualization is used in the process of analyzing large, multidimensional data sets. However, the selection and creation of visualizations that are appropriate for the characteristics of a particular data set and the satisfaction of the analyst's goals is difficult. The process consists of three tasks that are performed iteratively: generate, test, and refine. The performance of these tasks requires the utilization of several types of domain knowledge that data analysts do not often have. Existing visualization systems and frameworks do not adequately support the performance of these tasks. In this paper we present the RApid Visualization Environment (RAVE), a knowledge-based system that interfaces with commercial visualization frameworks and assists a data analyst in quickly and easily generating, testing, and refining visualizations. RAVE was used for the visualization of in situ measurement data captured by spacecraft.
Code of Federal Regulations, 2010 CFR
2010-07-01
... rights-of-way, or other vantage point (e.g., aerial photography), including a visual inspection of areas... the facility is located from the nearest accessible vantage point, such as the property line or public...
47 CFR 90.429 - Control point and dispatch point requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
.... 90.429 Section 90.429 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND...-operated device which will provide continuous visual indication when the transmitter is radiating, or, a pilot lamp or meter which will provide continuous visual indication when the transmitter circuits have...
Terrain shape estimation from optical flow, using Kalman filtering
NASA Astrophysics Data System (ADS)
Hoff, William A.; Sklair, Cheryl W.
1990-01-01
As one moves through a static environment, the visual world as projected on the retina seems to flow past. This apparent motion, called optical flow, can be an important source of depth perception for autonomous robots. An important application is in planetary exploration -the landing vehicle must find a safe landing site in rugged terrain, and an autonomous rover must be able to navigate safely through this terrain. In this paper, we describe a solution to this problem. Image edge points are tracked between frames of a motion sequence, and the range to the points is calculated from the displacement of the edge points and the known motion of the camera. Kalman filtering is used to incrementally improve the range estimates to those points, and provide an estimate of the uncertainty in each range. Errors in camera motion and image point measurement can also be modelled with Kalman filtering. A surface is then interpolated to these points, providing a complete map from which hazards such as steeply sloping areas can be detected. Using the method of extended Kalman filtering, our approach allows arbitrary camera motion. Preliminary results of an implementation are presented, and show that the resulting range accuracy is on the order of 1-2% of the range.
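The incremental range refinement described above can be sketched as a scalar Kalman update applied per tracked edge point; the numeric values and variances below are illustrative assumptions, not data from the paper.

```python
def kalman_update(range_est, var_est, measurement, meas_var):
    """One scalar Kalman update of a point's range estimate.

    range_est, var_est: prior range and its variance
    measurement, meas_var: new range (from edge displacement and known
    camera motion) and its measurement variance
    Returns the refined range and its reduced variance.
    """
    gain = var_est / (var_est + meas_var)
    new_range = range_est + gain * (measurement - range_est)
    new_var = (1.0 - gain) * var_est
    return new_range, new_var

# Incrementally refine one edge point's range over a motion sequence.
r, v = 100.0, 25.0               # initial estimate: 100 m, variance 25 m^2
for z in [104.0, 98.0, 101.0]:   # ranges implied by successive frames
    r, v = kalman_update(r, v, z, meas_var=16.0)
```

Each update pulls the estimate toward the new measurement while shrinking the variance, which is the "incrementally improve the range estimates" behavior the abstract describes; a full implementation would use an extended Kalman filter to handle arbitrary camera motion.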
A newly identified calculation discrepancy of the Sunset semi-continuous carbon analyzer
NASA Astrophysics Data System (ADS)
Zheng, G.; Cheng, Y.; He, K.; Duan, F.; Ma, Y.
2014-01-01
The Sunset Semi-Continuous Carbon Analyzer (SCCA) is an instrument widely used for carbonaceous aerosol measurement. Despite previous validation work, here we identified a new type of SCCA calculation discrepancy caused by the default multi-point baseline correction method. When a certain threshold carbon load is exceeded, multi-point correction can cause significant Total Carbon (TC) underestimation. This calculation discrepancy was characterized for both sucrose and ambient samples with three temperature protocols. For ambient samples, 22%, 36% and 12% of TC was underestimated by the three protocols, respectively, with the corresponding thresholds being ~0, 20 and 25 μg C. For sucrose, however, such discrepancy was observed with only one of these protocols, indicating the need for a more refractory SCCA calibration substance. The discrepancy was less significant for the NIOSH (National Institute for Occupational Safety and Health)-like protocol than for the other two protocols based on IMPROVE (Interagency Monitoring of PROtected Visual Environments). Although the calculation discrepancy could be largely reduced by the single-point baseline correction method, the instrumental blanks of the single-point method were higher. The proposed correction method is to use multi-point corrected data below the determined threshold and single-point results beyond that threshold. The effectiveness of this correction method was supported by correlation with optical data.
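The hybrid correction described above (multi-point result below the protocol-specific threshold load, single-point result beyond it) can be sketched as follows; the function name and example values are illustrative assumptions.

```python
def corrected_tc(tc_multi, tc_single, carbon_load, threshold):
    """Select the baseline-correction result for total carbon (TC).

    Below the protocol-specific threshold carbon load, the multi-point
    correction is reliable; at or beyond it, fall back to the single-point
    result to avoid the TC underestimation described above (at the cost of
    the single-point method's higher instrumental blanks).
    """
    return tc_multi if carbon_load < threshold else tc_single

# Hypothetical sample under a protocol with a ~20 ug C threshold.
tc = corrected_tc(tc_multi=18.0, tc_single=19.5,
                  carbon_load=15.0, threshold=20.0)
```

The threshold would be determined per temperature protocol, matching the ~0, 20 and 25 μg C values reported for the three protocols.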
NASA Astrophysics Data System (ADS)
McFadden, D.; Tavakkoli, A.; Regenbrecht, J.; Wilson, B.
2017-12-01
Virtual Reality (VR) and Augmented Reality (AR) applications have recently seen impressive growth, thanks to the advent of commercial Head Mounted Displays (HMDs). This new visualization era has opened the possibility of presenting researchers from multiple disciplines with data visualization techniques not possible via traditional 2D screens. In a purely VR environment researchers are presented with the visual data in a virtual environment, whereas in a purely AR application a virtual object is projected into the real world with which researchers can interact. Purely VR or AR applications have several limitations in the context of remote planetary exploration. For example, in a purely VR environment, the contents of the planet surface (e.g. rocks, terrain, or other features) must be created off-line from a multitude of images using image processing techniques to generate the 3D mesh data that populates the virtual surface of the planet. This process usually takes a tremendous amount of computational resources and cannot be delivered in real time. As an alternative, video frames may be superimposed on the virtual environment to save processing time. However, such rendered video frames lack 3D visual information, i.e., depth information. In this paper, we present a technique to utilize a remotely situated robot's stereoscopic cameras to provide a live visual feed from the real world into the virtual environment in which planetary scientists are immersed. Moreover, the proposed technique blends the virtual environment with the real world in such a way as to preserve both the depth and visual information from the real world while allowing for the sensation of immersion when the entire sequence is viewed via an HMD such as the Oculus Rift. The figure shows the virtual environment with an overlay of the real-world stereoscopic video being presented in real time into the virtual environment.
Notice the preservation of the object's shape, shadows, and depth information. The distortions shown in the image are due to the rendering of the stereoscopic data into a 2D image for the purposes of taking screenshots.
Visualizing Complex Environments in the Geo- and BioSciences
NASA Astrophysics Data System (ADS)
Prabhu, A.; Fox, P. A.; Zhong, H.; Eleish, A.; Ma, X.; Zednik, S.; Morrison, S. M.; Moore, E. K.; Muscente, D.; Meyer, M.; Hazen, R. M.
2017-12-01
Earth's living and non-living components have co-evolved for 4 billion years through numerous positive and negative feedbacks. Earth and life scientists have amassed vast amounts of data in diverse fields related to planetary evolution through deep time: mineralogy and petrology, paleobiology and paleontology, paleotectonics and paleomagnetism, geochemistry and geochronology, genomics and proteomics, and more. Integrating the data from these complementary disciplines is very useful in gaining an understanding of the evolution of our planet's environment. The integrated data, however, represent many extremely complex environments. In order to gain insights and make discoveries using this data, it is important for us to model and visualize these complex environments. As part of work in understanding the "Co-Evolution of Geo and Biospheres using Data Driven Methodologies," we have developed several visualizations to help represent the information stored in the datasets from complementary disciplines. These visualizations include 2D and 3D force-directed networks, chord diagrams, 3D Klee diagrams, evolving network diagrams, skyline diagrams and tree diagrams. Combining these visualizations with the results of machine learning and data analysis methods leads to a powerful way to discover patterns and relationships about the Earth's past and today's changing environment.
Three main paradigms of simultaneous localization and mapping (SLAM) problem
NASA Astrophysics Data System (ADS)
Imani, Vandad; Haataja, Keijo; Toivanen, Pekka
2018-04-01
Simultaneous Localization and Mapping (SLAM) is one of the most challenging research areas within computer and machine vision for automated scene commentary and explanation. SLAM has been a developing research area in robotics during recent years. By utilizing SLAM, a robot can estimate its position at distinct points in time, which indicates the trajectory of the robot, while simultaneously generating a map of the environment. The unique trait of SLAM is that it estimates the location of the robot while building a map, and it is effective in many types of environment: indoor, outdoor, aerial, underwater, underground and space. Several approaches have been investigated to apply the SLAM technique in these distinct environments. The purpose of this paper is to provide an accurate review of the history of SLAM based on laser/ultrasonic sensors and cameras as perception input. In addition, we focus on three main paradigms of the SLAM problem, with their pros and cons. In future work, intelligent methods and new ideas will be applied to visual SLAM to estimate the motion of an intelligent underwater robot and build a feature map of the marine environment.
NASA Astrophysics Data System (ADS)
Li, W.; Shigeta, K.; Hasegawa, K.; Li, L.; Yano, K.; Tanaka, S.
2017-09-01
Recently, laser-scanning technology, especially mobile mapping systems (MMSs), has been applied to measure 3D urban scenes. Thus, it has become possible to simulate a traditional cultural event in a virtual space constructed from measured point clouds. In this paper, we take as a case study the festival float procession of the Gion Festival, which has a long history in Kyoto City, Japan. The city government plans to revive the original procession route, which is narrow and not used at present. For the revival, it is important to know whether a festival float would collide with houses, billboards, electric wires or other objects along the original route. Therefore, in this paper, we propose a method for visualizing the collisions of point cloud objects. The advantageous features of our method are (1) a see-through visualization with a correct sense of depth that helps to robustly determine the collision areas, (2) the ability to visualize areas of high collision risk as well as actual collision areas, and (3) the ability to highlight target visualized areas by increasing the point densities there.
Use of active noise cancellation devices in caregivers in the intensive care unit.
Akhtar, S; Weigle, C G; Cheng, E Y; Toohill, R; Berens, R J
2000-04-01
Recent development of noise cancellation devices may offer relief from noise in the intensive care unit environment. This study was conducted to evaluate the effect of noise cancellation devices on subjective hearing assessment by caregivers in the intensive care units. Randomized, double-blind. Adult medical intensive care unit and pediatric intensive care unit of a teaching hospital. Caregivers of patients, including nurses, parents, respiratory therapists, and nursing assistants from a medical intensive care unit and a pediatric intensive care unit, were enrolled in the study. Each participant was asked to wear headphones (functional or nonfunctional noise cancellation devices) for a minimum of 30 mins. Subjective ambient noise level was assessed on a 10-point visual analog scale (VAS) before and during headphone use by each participant. Headphone comfort and the preference of the caregiver to wear the headphones were also evaluated on a 10-point VAS. Simultaneously, objective measurement of noise was done with a sound level meter using the decibel-A scale and at each of nine octave bands at each bedspace. The functional headphones significantly reduced the subjective assessment of noise by 2 (out of 10) VAS points (p < 0.05) in environments of equal objective noise profiles, based on decibel-A and octave band assessments. Noise cancellation devices improve subjective assessment of noise in caretakers. The benefit of these devices on hearing loss needs further evaluation in caregivers and critically ill patients.
3D Visualization for Phoenix Mars Lander Science Operations
NASA Technical Reports Server (NTRS)
Edwards, Laurence; Keely, Leslie; Lees, David; Stoker, Carol
2012-01-01
Planetary surface exploration missions present considerable operational challenges in the form of substantial communication delays, limited communication windows, and limited communication bandwidth. 3D visualization software was developed and delivered to the 2008 Phoenix Mars Lander (PML) mission. The components of the system include an interactive 3D visualization environment called Mercator, terrain reconstruction software called the Ames Stereo Pipeline, and a server providing distributed access to terrain models. The software was successfully utilized during the mission for science analysis, site understanding, and science operations activity planning. A terrain server was implemented that provided distribution of terrain models from a central repository to clients running the Mercator software. The Ames Stereo Pipeline generates accurate, high-resolution, texture-mapped, 3D terrain models from stereo image pairs. These terrain models can then be visualized within the Mercator environment. The central cross-cutting goal for these tools is to provide an easy-to-use, high-quality, full-featured visualization environment that enhances the mission science team's ability to develop low-risk productive science activity plans. In addition, for the Mercator and Viz visualization environments, extensibility and adaptability to different missions and application areas are key design goals.
A unified dynamic neural field model of goal directed eye movements
NASA Astrophysics Data System (ADS)
Quinton, J. C.; Goffart, L.
2018-01-01
Primates heavily rely on their visual system, which exploits signals of graded precision based on the eccentricity of the target in the visual field. Interactions with the environment involve actively selecting and focusing on visual targets or regions of interest, instead of contemplating an omnidirectional visual flow. Eye movements specifically allow foveating targets and tracking their motion. Once a target is brought within the central visual field, eye movements are usually classified into catch-up saccades (jumping from one orientation or fixation to another) and smooth pursuit (continuously tracking a target with low velocity). Building on existing dynamic neural field equations, we introduce a novel model that incorporates internal projections to better estimate the current target location (associated with a peak of activity). This estimate is then used to trigger an eye movement, leading to qualitatively different behaviours depending on the dynamics of the whole oculomotor system: (1) fixational eye movements due to small variations in the weights of projections when the target is stationary, (2) interceptive and catch-up saccades when peaks build and relax on the neural field, (3) smooth pursuit when the peak stabilises near the centre of the field, the system reaching a fixed-point attractor. Learning is nevertheless required for tracking a rapidly moving target, and the proposed model thus replicates recent results in the monkey, in which repeated exercise permits the maintenance of the target within the central visual field at its current (here-and-now) location, despite the delays involved in transmitting retinal signals to the oculomotor neurons.
Coastal On-line Assessment and Synthesis Tool 2.0
NASA Technical Reports Server (NTRS)
Brown, Richard; Navard, Andrew; Nguyen, Beth
2011-01-01
COAST (Coastal On-line Assessment and Synthesis Tool) is a 3D, open-source Earth data browser developed by leveraging and enhancing previous NASA open-source tools. These tools use satellite imagery and elevation data in a way that allows any user to zoom from orbit view down into any place on Earth, and enables the user to experience Earth terrain in a visually rich 3D view. The benefits associated with taking advantage of an open-source geo-browser are that it is free, extensible, and offers a worldwide developer community that is available to provide additional development and improvement potential. What makes COAST unique is that it simplifies the process of locating and accessing data sources, and allows a user to combine them into a multi-layered and/or multi-temporal visual analytical look into possible data interrelationships and coeffectors for coastal environment phenomenology. COAST provides users with new data visual analytic capabilities. COAST has been upgraded to maximize use of open-source data access, viewing, and data manipulation software tools. The COAST 2.0 toolset has been developed to increase access to a larger realm of the most commonly implemented data formats used by the coastal science community. New and enhanced functionalities that upgrade COAST to COAST 2.0 include the development of the Temporal Visualization Tool (TVT) plug-in, the Recursive Online Remote Data-Data Mapper (RECORD-DM) utility, the Import Data Tool (IDT), and the Add Points Tool (APT). With these improvements, users can integrate their own data with other data sources, and visualize the resulting layers of different data types (such as spatial and spectral, for simultaneous visual analysis), and visualize temporal changes in areas of interest.
NASA Astrophysics Data System (ADS)
Wang, Jinhu; Lindenbergh, Roderik; Menenti, Massimo
2017-06-01
Urban road environments contain a variety of objects, including different types of lamp poles and traffic signs. Monitoring them is traditionally conducted by visual inspection, which is time consuming and expensive. Mobile laser scanning (MLS) systems sample the road environment efficiently by acquiring large and accurate point clouds. This work proposes a methodology for urban road object recognition from MLS point clouds. The proposed method uses, for the first time, shape descriptors of complete objects to match repetitive objects in large point clouds. To do so, a novel 3D multi-scale shape descriptor is introduced that is embedded in a workflow that efficiently and automatically identifies different types of lamp poles and traffic signs. The workflow starts by tiling the raw point clouds along the scanning trajectory and by identifying non-ground points. After voxelization of the non-ground points, connected voxels are clustered to form candidate objects. For automatic recognition of lamp poles and street signs, a 3D significant-eigenvector-based shape descriptor using voxels (SigVox) is introduced. The 3D SigVox descriptor is constructed by first subdividing the points with an octree into several levels. Next, significant eigenvectors of the points in each voxel are determined by principal component analysis (PCA) and mapped onto the appropriate triangle of an icosahedron approximating the unit sphere. This step is repeated for different scales. By determining the similarity of 3D SigVox descriptors between candidate point clusters and training objects, street furniture is automatically identified. The feasibility and quality of the proposed method are verified on two point clouds obtained in opposite directions along a 4 km stretch of road. Six types of lamp pole and four types of road sign were selected as objects of interest. Ground truth validation showed that the overall accuracy for the ∼170 automatically recognized objects is approximately 95%.
The results demonstrate that the proposed method is able to recognize street furniture in a practical scenario. Remaining difficult cases are touching objects, like a lamp pole close to a tree.
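The per-voxel PCA step at the heart of the SigVox descriptor can be sketched in a few lines. The following is a minimal, single-scale illustration only: the function names, the fixed voxel size, and the use of icosahedron vertices (rather than triangle faces) as directional bins are simplifying assumptions, not the authors' implementation.

```python
import numpy as np

def icosahedron_vertices():
    """The 12 vertices of a regular icosahedron, normalized to unit length.
    Used here as directional bins; the paper maps onto triangle faces."""
    phi = (1 + 5 ** 0.5) / 2
    v = []
    for s1 in (-1, 1):
        for s2 in (-1, 1):
            v += [(0, s1, s2 * phi), (s1, s2 * phi, 0), (s2 * phi, 0, s1)]
    v = np.array(v, dtype=float)
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def sigvox_histogram(points, voxel_size=1.0):
    """Toy SigVox-style descriptor: normalized histogram of per-voxel
    principal directions (largest PCA eigenvector) over directional bins."""
    bins = icosahedron_vertices()
    keys = np.floor(points / voxel_size).astype(int)
    hist = np.zeros(len(bins))
    for key in np.unique(keys, axis=0):
        pts = points[(keys == key).all(axis=1)]
        if len(pts) < 3:
            continue  # too few points for a stable covariance
        centered = pts - pts.mean(axis=0)
        _, vecs = np.linalg.eigh(centered.T @ centered)
        v = vecs[:, -1]                          # dominant eigenvector
        hist[np.argmax(np.abs(bins @ v))] += 1   # nearest bin, sign-invariant
    return hist / max(hist.sum(), 1)
```

Candidate clusters would then be matched against training objects by comparing such histograms across octree scales, e.g. with a Euclidean or chi-squared distance.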
Semantics of directly manipulating spatializations.
Hu, Xinran; Bradel, Lauren; Maiti, Dipayan; House, Leanna; North, Chris; Leman, Scotland
2013-12-01
When high-dimensional data is visualized in a 2D plane using parametric projection algorithms, users may wish to manipulate the layout of the data points to better reflect their domain knowledge or to explore alternative structures. However, few users are well-versed in the algorithms behind the visualizations, making parameter tweaking more of a guessing game than a series of decisive interactions. Translating user interactions into algorithmic input is a key component of Visual to Parametric Interaction (V2PI) [13]. Instead of adjusting parameters, users directly move data points on the screen, which then updates the underlying statistical model. However, we have found that some data points that are not moved by the user are just as important in the interactions as the data points that are moved. Users frequently move data points with respect to other 'unmoved' data points that they consider spatially contextual. However, in current V2PI interactions, these points are not explicitly identified when directly manipulating the moved points. We design a richer set of interactions that makes this context more explicit, and a new algorithm and sophisticated weighting scheme that incorporate the importance of these unmoved data points into V2PI.
Automatic visualization of 3D geometry contained in online databases
NASA Astrophysics Data System (ADS)
Zhang, Jie; John, Nigel W.
2003-04-01
In this paper, the application of the Virtual Reality Modeling Language (VRML) to efficient database visualization is analyzed. With the help of Java programming, three examples of automatic visualization from a database containing 3D geometry are given. The first example creates basic geometries. The second creates cylinders with defined start and end points. The third processes data from an old copper mine complex in Cheshire, United Kingdom. Interactive 3D visualization of all geometric data in an online database is achieved with JSP technology.
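The second example, generating a cylinder between two stored endpoints, hinges on one geometric step: VRML97 cylinders are y-axis-aligned and centred at the origin, so the two endpoints must be converted into a translation (midpoint), a rotation (carrying the y-axis onto the segment direction), and a height. A sketch of that conversion in Python; the function name and node layout are illustrative, not taken from the paper:

```python
import numpy as np

def vrml_cylinder(p0, p1, radius=0.1):
    """Emit a VRML97 Transform node for a cylinder spanning p0 -> p1."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    height = np.linalg.norm(d)
    u = d / height                         # unit direction of the segment
    y = np.array([0.0, 1.0, 0.0])          # VRML cylinders are y-aligned
    axis = np.cross(y, u)
    n = np.linalg.norm(axis)
    # normalized rotation axis; any axis works if the segment is y-aligned
    axis = axis / n if n > 1e-9 else np.array([1.0, 0.0, 0.0])
    angle = np.arccos(np.clip(np.dot(y, u), -1.0, 1.0))
    mid = (p0 + p1) / 2
    return (f"Transform {{\n"
            f"  translation {mid[0]:g} {mid[1]:g} {mid[2]:g}\n"
            f"  rotation {axis[0]:g} {axis[1]:g} {axis[2]:g} {angle:g}\n"
            f"  children [ Shape {{ geometry Cylinder {{ "
            f"radius {radius:g} height {height:g} }} }} ]\n"
            f"}}")
```

Iterating this function over endpoint pairs fetched from the database yields the VRML scene body directly.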
Assessment of a Static Multibeam Sonar Scanner for 3d Surveying in Confined Subaquatic Environments
NASA Astrophysics Data System (ADS)
Moisan, E.; Charbonnier, P.; Foucher, P.; Grussenmeyer, P.; Guillemin, S.; Samat, O.; Pagès, C.
2016-06-01
Mechanical Scanning Sonar (MSS) is a promising technology for surveying underwater environments. Such devices consist of a multibeam echosounder attached to a pan-and-tilt positioner that allows sweeping the scene in a similar way to Terrestrial Laser Scanners (TLS). In this paper, we report on the experimental assessment of a recent MSS, the BlueView BV5000, in a confined environment: lock number 50 on the Marne-Rhin canal (France). To this aim, we hung the system upside-down to scan the lock chamber from the surface, which allowed surveying the scanning positions up to a horizontal orientation. We propose a geometric method to estimate the remaining angle and register the scans in a coordinate system attached to the site. After reviewing the different errors that impair sonar data, we compare the resulting point cloud to a TLS model acquired the day before, while the lock was completely empty for maintenance. While the results exhibit a bias that can be partly explained by an imperfect setup, the maximum difference is less than 15 cm, and the standard deviation is about 3.5 cm. Visual inspection shows that coarse defects of the masonry, such as missing stones or cavities, can be detected in the MSS point cloud, while smaller details, e.g. damaged joints, are harder to notice.
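The sonar-to-TLS comparison reported above (a small bias, roughly 3.5 cm standard deviation, under 15 cm maximum) reduces to statistics over per-point nearest-neighbour distances between the two clouds. A minimal sketch, assuming brute-force search (real clouds would need a spatial index such as a k-d tree); names are illustrative:

```python
import numpy as np

def cloud_to_cloud_stats(test_cloud, reference_cloud):
    """Mean, standard deviation and maximum of the distances from each point
    of a test cloud (e.g. sonar) to its nearest neighbour in a reference
    cloud (e.g. TLS). Brute force: O(N*M), fine only for small clouds."""
    d = np.linalg.norm(test_cloud[:, None, :] - reference_cloud[None, :, :],
                       axis=2)
    nn = d.min(axis=1)   # nearest-neighbour distance per test point
    return nn.mean(), nn.std(), nn.max()
```

The mean acts as the bias estimate, while the standard deviation and maximum characterise noise and outliers, as in the figures quoted above.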
Ouwehand, Kim; van Gog, Tamara; Paas, Fred
2016-10-01
Research has shown that source memory functioning declines with ageing. Evidence suggests that encoding visual stimuli with manual pointing in addition to visual observation can have a positive effect on spatial memory compared with visual observation only. The present study investigated whether pointing at picture locations during encoding would lead to better spatial source memory than naming (Experiment 1) or visual observation only (Experiment 2) in young and older adults. Experiment 3 investigated whether response modality during the test phase would influence spatial source memory performance. Experiments 1 and 2 supported the hypothesis that pointing during encoding led to better source memory for picture locations than naming or observation only. Young adults outperformed older adults on the source memory task but not the item memory task in both Experiments 1 and 2. In Experiments 1 and 2, participants responded manually in the test phase. Experiment 3 showed that if participants had to respond verbally in the test phase, the positive effect of pointing compared with naming during encoding disappeared. The results suggest that pointing at picture locations during encoding can enhance spatial source memory in both young and older adults, but only if the response modality is congruent in the test phase.
Visualizing nD Point Clouds as Topological Landscape Profiles to Guide Local Data Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oesterling, Patrick; Heine, Christian; Weber, Gunther H.
2012-05-04
Analyzing high-dimensional point clouds is a classical challenge in visual analytics. Traditional techniques, such as projections or axis-based techniques, suffer from projection artifacts, occlusion, and visual complexity. We propose to split data analysis into two parts to address these shortcomings. First, a structural overview phase abstracts data by its density distribution. This phase performs topological analysis to support accurate and non-overlapping presentation of the high-dimensional cluster structure as a topological landscape profile. Utilizing a landscape metaphor, it presents clusters and their nesting as hills whose height, width, and shape reflect cluster coherence, size, and stability, respectively. A second local analysis phase utilizes this global structural knowledge to select individual clusters or point sets for further, localized data analysis. Focusing on structural entities significantly reduces visual clutter in established geometric visualizations and permits a clearer, more thorough data analysis. In conclusion, this analysis complements the global topological perspective and enables the user to study subspaces or geometric properties, such as shape.
Integration of today's digital state with tomorrow's visual environment
NASA Astrophysics Data System (ADS)
Fritsche, Dennis R.; Liu, Victor; Markandey, Vishal; Heimbuch, Scott
1996-03-01
New developments in visual communication technologies, and the increasingly digital nature of the industry infrastructure as a whole, are converging to enable new visual environments with an enhanced visual component in interaction, entertainment, and education. New applications and markets can be created, but this depends on the ability of the visual communications industry to provide market solutions that are cost effective and user friendly. Industry-wide cooperation in the development of integrated, open architecture applications enables the realization of such market solutions. This paper describes the work being done by Texas Instruments, in the development of its Digital Light Processing(TM) technology, to support the development of new visual communications technologies and applications.
BactoGeNIE: A large-scale comparative genome visualization for big displays
Aurisano, Jillian; Reda, Khairi; Johnson, Andrew; ...
2015-08-13
The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. In this paper, we present the Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. In conclusion, BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics.
Payá, Luis; Reinoso, Oscar; Jiménez, Luis M; Juliá, Miguel
2017-01-01
Over the past years, mobile robots have proliferated in both domestic and industrial environments to solve tasks such as cleaning, assistance, or material transportation. One of their advantages is the ability to operate in wide areas without the need to introduce changes into the existing infrastructure. Thanks to the sensors they may be equipped with and their processing systems, mobile robots constitute a versatile alternative for a wide range of applications. When designing the control system of a mobile robot so that it carries out a task autonomously in an unknown environment, the robot is expected to make decisions about its localization in the environment and about the trajectory it has to follow to arrive at the target points. More precisely, the robot has to find a reasonably good solution to two crucial problems: building a model of the environment, and estimating its position within this model. In this work, we propose a framework to solve these problems using only visual information. The mobile robot is equipped with a catadioptric vision sensor that provides omnidirectional images of the environment. First, the robot travels along the trajectories to include in the model and uses the visual information captured to build the model. After that, the robot is able to estimate its position and orientation with respect to the trajectory. Among the possible approaches to solve these problems, global appearance techniques are used in this work. They have emerged recently as a robust and efficient alternative to landmark extraction techniques. A global description method based on the Radon Transform is used to design mapping and localization algorithms, and a set of images captured by a mobile robot in a real environment, under realistic operating conditions, is used to test the performance of these algorithms.
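A global-appearance descriptor of this kind can be illustrated with a toy discrete Radon transform: each pixel's intensity is accumulated into bins according to its signed distance from lines at a set of orientations through the image centre, and the concatenated projections form the descriptor. This is a generic sketch under assumed parameters, not the authors' specific formulation:

```python
import numpy as np

def radon_descriptor(image, n_angles=8, n_bins=16):
    """Toy global-appearance descriptor: a discrete Radon transform built by
    binning pixel intensities along their projection onto each orientation."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    xs -= (w - 1) / 2
    ys -= (h - 1) / 2
    desc = []
    for theta in np.linspace(0, np.pi, n_angles, endpoint=False):
        # signed distance of each pixel from the line through the centre
        r = xs * np.cos(theta) + ys * np.sin(theta)
        edges = np.linspace(r.min(), r.max() + 1e-9, n_bins + 1)
        idx = np.digitize(r.ravel(), edges) - 1
        proj = np.bincount(idx, weights=image.ravel(),
                           minlength=n_bins)[:n_bins]
        desc.append(proj)
    return np.concatenate(desc)
```

Localization would then compare the descriptor of the current omnidirectional image against stored map descriptors, e.g. by Euclidean distance, retrieving the closest trajectory position.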
Stöckel, Tino; Fries, Udo
2013-01-01
We examined the influence of visual context information on skilled motor behaviour and motor adaptation in basketball. The rules of basketball in Europe have recently changed, such that the distance for three-point shots increased from 6.25 m to 6.75 m. As such, we tested the extent to which basketball experts can adapt to the longer distance when a) only the unfamiliar, new three-point line was provided as floor markings (NL group), or b) the familiar, old three-point line was provided in addition to the new floor markings (OL group). In the present study, 20 expert basketball players performed 40 three-point shots from 6.25 m and 40 shots from 6.75 m. We assessed the percentage of hits and analysed the landing position of the ball. Results showed better adaptation of throwing performance to the longer distance when the old three-point line was provided as a visual landmark than when only the new three-point line was provided. We hypothesise that the three-point line delivered relevant information needed to successfully adapt to the greater distance in the OL group, whereas it disturbed performance and the ability to adapt in the NL group. The importance of visual landmarks for motor adaptation in basketball throwing is discussed relative to the influence of other information sources (i.e. angle of elevation relative to the basket) and sport practice.
Visual Input Enhances Selective Speech Envelope Tracking in Auditory Cortex at a ‘Cocktail Party’
Golumbic, Elana Zion; Cogan, Gregory B.; Schroeder, Charles E.; Poeppel, David
2013-01-01
Our ability to selectively attend to one auditory signal amidst competing input streams, epitomized by the ‘Cocktail Party’ problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared to responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance. We investigated whether congruent visual input of an attended speaker enhances cortical selectivity in auditory cortex, leading to diminished representation of ignored stimuli. We recorded magnetoencephalographic (MEG) signals from human participants as they attended to segments of natural continuous speech. Using two complementary methods of quantifying the neural response to speech, we found that viewing a speaker’s face enhances the capacity of auditory cortex to track the temporal speech envelope of that speaker. This mechanism was most effective in a ‘Cocktail Party’ setting, promoting preferential tracking of the attended speaker, whereas without visual input no significant attentional modulation was observed. These neurophysiological results underscore the importance of visual input in resolving perceptual ambiguity in a noisy environment. Since visual cues in speech precede the associated auditory signals, they likely serve a predictive role in facilitating auditory processing of speech, perhaps by directing attentional resources to appropriate points in time when to-be-attended acoustic input is expected to arrive. PMID:23345218
Stroboscopic Training Enhances Anticipatory Timing.
Smith, Trevor Q; Mitroff, Stephen R
The dynamic aspects of sports often place heavy demands on visual processing. As such, an important goal for sports training should be to enhance visual abilities. Recent research has suggested that training in a stroboscopic environment, where visual experiences alternate between visible and obscured, may provide a means of improving attentional and visual abilities. The current study explored whether stroboscopic training could impact anticipatory timing: the ability to predict where a moving stimulus will be at a specific point in time. Anticipatory timing is a critical skill for both sports and non-sports activities, and thus finding training improvements could have broad impacts. Participants completed a pre-training assessment that used a Bassin Anticipation Timer to measure their ability to accurately predict the timing of a moving visual stimulus. Immediately after this initial assessment, the participants completed training trials in one of two conditions. Those in the Control condition proceeded as before with no change. Those in the Strobe condition completed the training trials while wearing specialized eyewear with lenses that alternated between transparent and opaque (100 ms visible, 150 ms opaque). Post-training assessments were administered immediately after training, 10 minutes after training, and 10 days after training. Compared to the Control group, the Strobe group was significantly more accurate immediately after training, was more likely to respond early than late immediately after training and 10 minutes later, and was more consistent in its timing estimates immediately after training and 10 minutes later.
Pasqualotto, Achille; Esenkaya, Tayfun
2016-01-01
Visual-to-auditory sensory substitution is used to convey visual information through audition, and it was initially created to compensate for blindness; it consists of software converting the visual images captured by a video-camera into the equivalent auditory images, or "soundscapes". Here, it was used by blindfolded sighted participants to learn the spatial position of simple shapes depicted in images arranged on the floor. Very few studies have used sensory substitution to investigate spatial representation, while it has been widely used to investigate object recognition. Additionally, with sensory substitution we could study the performance of participants actively exploring the environment through audition, rather than passively localizing sound sources. Blindfolded participants egocentrically learnt the position of six images by using sensory substitution and then a judgment of relative direction task (JRD) was used to determine how this scene was represented. This task consists of imagining being in a given location, oriented in a given direction, and pointing towards the required image. Before performing the JRD task, participants explored a map that provided allocentric information about the scene. Although spatial exploration was egocentric, surprisingly we found that performance in the JRD task was better for allocentric perspectives. This suggests that the egocentric representation of the scene was updated. This result is in line with previous studies using visual and somatosensory scenes, thus supporting the notion that different sensory modalities produce equivalent spatial representation(s). Moreover, our results have practical implications to improve training methods with sensory substitution devices (SSD).
Holmes, Nicholas P; Dakwar, Azar R
2015-12-01
Movements aimed towards objects occasionally have to be adjusted when the object moves. These online adjustments can be very rapid, occurring in as little as 100ms. More is known about the latency and neural basis of online control of movements to visual than to auditory target objects. We examined the latency of online corrections in reaching-to-point movements to visual and auditory targets that could change side and/or modality at movement onset. Visual or auditory targets were presented on the left or right sides, and participants were instructed to reach and point to them as quickly and as accurately as possible. On half of the trials, the targets changed side at movement onset, and participants had to correct their movements to point to the new target location as quickly as possible. Given different published approaches to measuring the latency for initiating movement corrections, we examined several different methods systematically. What we describe here as the optimal methods involved fitting a straight-line model to the velocity of the correction movement, rather than using a statistical criterion to determine correction onset. In the multimodal experiment, these model-fitting methods produced significantly lower latencies for correcting movements away from the auditory targets than away from the visual targets. Our results confirm that rapid online correction is possible for auditory targets, but further work is required to determine whether the underlying control system for reaching and pointing movements is the same for auditory and visual targets. Copyright © 2015 Elsevier Ltd. All rights reserved.
A habituation based approach for detection of visual changes in surveillance camera
NASA Astrophysics Data System (ADS)
Sha'abani, M. N. A. H.; Adan, N. F.; Sabani, M. S. M.; Abdullah, F.; Nadira, J. H. S.; Yasin, M. S. M.
2017-09-01
This paper investigates a habituation based approach to detecting visual changes using video surveillance systems in a passive environment. Various techniques have been introduced for dynamic environments, such as motion detection, object classification and behaviour analysis. However, in a passive environment, most of the scenes recorded by the surveillance system are normal. Therefore, running a complex analysis at all times in a passive environment is computationally expensive, especially at high video resolutions. Thus, a mechanism of attention is required, whereby the system responds only to abnormal events. This paper proposes a novelty detection mechanism for detecting visual changes and a habituation based approach for measuring the level of novelty. The objective of the paper is to investigate the feasibility of the habituation based approach in detecting visual changes. Experiment results show that the approach is able to accurately detect the presence of novelty as deviations from the learned knowledge.
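The abstract does not give the model details, but the habituation idea it describes can be sketched as follows: the detector's response to recurring scene content decays with repeated exposure, and a novelty score is raised when a frame deviates from the learned knowledge. The class, parameters and thresholds below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of habituation-based novelty detection. The detector
# habituates to recurring frame statistics: its response to familiar input
# decays, and it flags novelty when input deviates from learned knowledge.

class HabituationDetector:
    def __init__(self, alpha=0.2, threshold=0.2):
        self.alpha = alpha          # habituation (learning) rate
        self.threshold = threshold  # deviation level treated as novel
        self.memory = None          # learned representation of the normal scene

    def observe(self, frame):
        """frame: list of floats (e.g. downsampled pixel intensities).
        Returns (novelty_score, is_novel)."""
        if self.memory is None:
            self.memory = list(frame)
            return 0.0, False
        # Mean absolute deviation of the current frame from habituated memory.
        deviation = sum(abs(f - m) for f, m in zip(frame, self.memory)) / len(frame)
        is_novel = deviation > self.threshold
        # Habituate: slowly absorb the current frame into memory, so the
        # detector stops responding to persistent (normal) scene content.
        self.memory = [(1 - self.alpha) * m + self.alpha * f
                       for m, f in zip(self.memory, frame)]
        return deviation, is_novel

det = HabituationDetector()
static = [0.5, 0.5, 0.5, 0.5]
for _ in range(10):                 # repeated normal frames: response decays
    score, novel = det.observe(static)
intruder = [0.5, 0.5, 1.0, 1.0]     # abrupt visual change in part of the scene
score, novel = det.observe(intruder)
```

Here ten identical "normal" frames habituate the detector, so only the frame containing an abrupt change scores above threshold.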
A collaborative interaction and visualization multi-modal environment for surgical planning.
Foo, Jung Leng; Martinez-Escobar, Marisol; Peloquin, Catherine; Lobe, Thom; Winer, Eliot
2009-01-01
The proliferation of virtual reality visualization and interaction technologies has changed the way medical image data is analyzed and processed. This paper presents a multi-modal environment that combines a virtual reality application with a desktop application for collaborative surgical planning. Both visualization applications can function independently but can also be synced over a network connection for collaborative work. Any changes to either application are immediately synced and updated in the other. This is an efficient collaboration tool that allows multiple teams of doctors with only an internet connection to visualize and interact with the same patient data simultaneously. With this multi-modal environment framework, one team working in the VR environment and another team at a remote location working on a desktop machine can collaborate in the examination and discussion involved in diagnosis, surgical planning, teaching and tele-mentoring.
DataFed: A Federated Data System for Visualization and Analysis of Spatio-Temporal Air Quality Data
NASA Astrophysics Data System (ADS)
Husar, R. B.; Hoijarvi, K.
2017-12-01
DataFed is a distributed web-services-based computing environment for accessing, processing, and visualizing atmospheric data in support of air quality science and management. The flexible, adaptive environment facilitates the access and flow of atmospheric data from provider to users by enabling the creation of user-driven data processing/visualization applications. DataFed 'wrapper' components non-intrusively wrap heterogeneous, distributed datasets for access by standards-based GIS web services. The mediator components (also web services) map the heterogeneous data into a spatio-temporal data model. Chained web services provide homogeneous data views (e.g., geospatial, time views) using a global multi-dimensional data model. In addition to data access and rendering, the data processing component services can be programmed for filtering, aggregation, and fusion of multidimensional data. Complete applications are written in a custom data-flow language. Currently, the federated data pool consists of over 50 datasets originating from globally distributed data providers delivering surface-based air quality measurements, satellite observations, and emissions data, as well as regional and global-scale air quality models. The web browser-based user interface allows point-and-click navigation and browsing of the XYZT multi-dimensional data space. The key applications of DataFed are exploring spatial patterns of pollutants and their seasonal, weekly, and diurnal cycles and frequency distributions for exploratory air quality research. Since 2008, DataFed has been used to support EPA in the implementation of the Exceptional Event Rule. The data system is also used at universities in the US, Europe and Asia.
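As a rough illustration of the chained-service, data-flow style described above (DataFed's actual data-flow language and service APIs are not shown in the abstract, so every name here is a stand-in), a "program" can be modeled as a list of stages, each mapping one homogeneous view of the data to the next:

```python
# Illustrative sketch only: each stage is a function over a homogeneous table
# of (lat, lon, time, value) records, and a pipeline is a chained list of
# stages, mimicking DataFed's chained web services.

def temporal_filter(records, start, end):
    """Keep observations inside a time window (a filtering service)."""
    return [r for r in records if start <= r["time"] <= end]

def spatial_aggregate(records):
    """Average the value over all stations (a simple aggregation service)."""
    vals = [r["value"] for r in records]
    return sum(vals) / len(vals) if vals else None

def run_pipeline(records, stages):
    """Apply chained stages left-to-right, like chained web services."""
    out = records
    for stage in stages:
        out = stage(out)
    return out

pm25 = [
    {"lat": 38.6, "lon": -90.2,  "time": 3, "value": 12.0},
    {"lat": 41.9, "lon": -87.6,  "time": 5, "value": 20.0},
    {"lat": 34.0, "lon": -118.2, "time": 9, "value": 40.0},
]
mean_early = run_pipeline(pm25, [
    lambda rs: temporal_filter(rs, 0, 6),  # keep the first two observations
    spatial_aggregate,                     # then average them
])
```

The same composition pattern extends naturally to rendering stages (geospatial or time views) appended at the end of the chain.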
Eating with our eyes: From visual hunger to digital satiation.
Spence, Charles; Okajima, Katsunori; Cheok, Adrian David; Petit, Olivia; Michel, Charles
2016-12-01
One of the brain's key roles is to facilitate foraging and feeding. It is presumably no coincidence, then, that the mouth is situated close to the brain in most animal species. However, the environments in which our brains evolved were far less plentiful in terms of the availability of food resources (i.e., nutriments) than is the case for those of us living in the Western world today. The growing obesity crisis is but one of the signs that humankind is not doing such a great job in terms of optimizing the contemporary food landscape. While the blame here is often laid at the doors of the global food companies - offering addictive foods, designed to hit 'the bliss point' in terms of the pleasurable ingredients (sugar, salt, fat, etc.), and the ease of access to calorie-rich foods - we wonder whether there aren't other implicit cues in our environments that might be triggering hunger more often than is perhaps good for us. Here, we take a closer look at the potential role of vision. Specifically, we question the impact that our increasing exposure to images of desirable foods (what is often labelled 'food porn', or 'gastroporn') via digital interfaces might be having, and ask whether it might not inadvertently be exacerbating our desire for food (what we call 'visual hunger'). We review the growing body of cognitive neuroscience research demonstrating the profound effect that viewing such images can have on neural activity, physiological and psychological responses, and visual attention, especially in the 'hungry' brain. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Misperception of exocentric directions in auditory space
Arthur, Joeanna C.; Philbeck, John W.; Sargent, Jesse; Dopkins, Stephen
2008-01-01
Previous studies have demonstrated large errors (over 30°) in visually perceived exocentric directions (the direction between two objects that are both displaced from the observer’s location; e.g., Philbeck et al., in press). Here, we investigated whether a similar pattern occurs in auditory space. Blindfolded participants either attempted to aim a pointer at auditory targets (an exocentric task) or gave a verbal estimate of the egocentric target azimuth. Targets were located at 20° to 160° azimuth in the right hemispace. For comparison, we also collected pointing and verbal judgments for visual targets. We found that exocentric pointing responses exhibited sizeable undershooting errors, for both auditory and visual targets, that tended to become more strongly negative as azimuth increased (up to −19° for visual targets at 160°). Verbal estimates of the auditory and visual target azimuths, however, showed a dramatically different pattern, with relatively small overestimations of azimuths in the rear hemispace. At least some of the differences between verbal and pointing responses appear to be due to the frames of reference underlying the responses; when participants used the pointer to reproduce the egocentric target azimuth rather than the exocentric target direction relative to the pointer, the pattern of pointing errors more closely resembled that seen in verbal reports. These results show that there are similar distortions in perceiving exocentric directions in visual and auditory space. PMID:18555205
The Visual Geophysical Exploration Environment: A Multi-dimensional Scientific Visualization
NASA Astrophysics Data System (ADS)
Pandya, R. E.; Domenico, B.; Murray, D.; Marlino, M. R.
2003-12-01
The Visual Geophysical Exploration Environment (VGEE) is an online learning environment designed to help undergraduate students understand fundamental Earth system science concepts. The guiding principle of the VGEE is the importance of hands-on interaction with scientific visualization and data. The VGEE consists of four elements: 1) an online, inquiry-based curriculum for guiding student exploration; 2) a suite of El Nino-related data sets adapted for student use; 3) a learner-centered interface to a scientific visualization tool; and 4) a set of concept models (interactive tools that help students understand fundamental scientific concepts). There are two key innovations featured in this interactive poster session. One is the integration of concept models and the visualization tool. Concept models are simple, interactive, Java-based illustrations of fundamental physical principles. We developed eight concept models and integrated them into the visualization tool to enable students to probe data. The ability to probe data using a concept model addresses the common problem of transfer: the difficulty students have in applying theoretical knowledge to everyday phenomena. The other innovation is a visualization environment and data that are discoverable in digital libraries, and installed, configured, and used for investigations over the web. By collaborating with the Integrated Data Viewer developers, we were able to embed a web-launchable visualization tool and access to distributed data sets into the online curricula. The Thematic Real-time Environmental Data Distributed Services (THREDDS) project is working to provide catalogs of datasets that can be used in new VGEE curricula under development. Cataloging these curricula in the Digital Library for Earth System Education (DLESE) lets learners and educators discover the data and visualization tool within a framework that guides their use.
Fu, Si-Yao; Yang, Guo-Sheng; Kuai, Xin-Kai
2012-01-01
In this paper, we present a quantitative, highly structured cortex-simulated model, which can be simply described as feedforward, hierarchical simulation of ventral stream of visual cortex using biologically plausible, computationally convenient spiking neural network system. The motivation comes directly from recent pioneering works on detailed functional decomposition analysis of the feedforward pathway of the ventral stream of visual cortex and developments on artificial spiking neural networks (SNNs). By combining the logical structure of the cortical hierarchy and computing power of the spiking neuron model, a practical framework has been presented. As a proof of principle, we demonstrate our system on several facial expression recognition tasks. The proposed cortical-like feedforward hierarchy framework has the merit of capability of dealing with complicated pattern recognition problems, suggesting that, by combining the cognitive models with modern neurocomputational approaches, the neurosystematic approach to the study of cortex-like mechanism has the potential to extend our knowledge of brain mechanisms underlying the cognitive analysis and to advance theoretical models of how we recognize face or, more specifically, perceive other people's facial expression in a rich, dynamic, and complex environment, providing a new starting point for improved models of visual cortex-like mechanism. PMID:23193391
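The abstract does not specify the spiking neuron model used in the hierarchy. For background, the standard building block of such spiking neural networks is the leaky integrate-and-fire (LIF) unit, sketched below with illustrative parameters (this is textbook LIF dynamics, not the authors' code):

```python
# Leaky integrate-and-fire neuron: the membrane potential leaks toward rest,
# integrates input current, and emits a spike (then resets) on reaching
# threshold. Parameters are illustrative.

def simulate_lif(input_current, dt=1.0, tau=10.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0, r=1.0):
    """Return spike times (step indices) for a sampled input current trace."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Euler step of dv/dt = (-(v - v_rest) + R*I) / tau
        v += dt / tau * (-(v - v_rest) + r * i_in)
        if v >= v_thresh:        # threshold crossing emits a spike
            spikes.append(t)
            v = v_reset          # reset after firing
    return spikes

spikes = simulate_lif([1.5] * 50)  # constant suprathreshold drive
```

Under constant suprathreshold drive the unit fires at a regular interval, which is the rate-coding behaviour that layered SNN hierarchies like the one described build upon.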
Augmented Visual Experience of Simulated Solar Phenomena
NASA Astrophysics Data System (ADS)
Tucker, A. O., IV; Berardino, R. A.; Hahne, D.; Schreurs, B.; Fox, N. J.; Raouafi, N.
2017-12-01
The Parker Solar Probe (PSP) mission will explore the Sun's corona, studying solar wind, flares and coronal mass ejections. The effects of these phenomena can impact the technology that we use in ways that are not readily apparent, including affecting satellite communications and power grids. Determining the structure and dynamics of coronal magnetic fields, tracing the flow of energy that heats the corona, and exploring dusty plasma near the Sun to understand its influence on solar wind and energetic particle formation requires a suite of sensors on board the PSP spacecraft that are engineered to observe specific phenomena. Using models of these sensors and simulated observational data, we can visualize what the PSP spacecraft will "see" during its multiple passes around the Sun. Augmented reality (AR) technologies enable convenient user access to massive data sets. We are developing an application that allows users to experience environmental data from the point of view of the PSP spacecraft in AR using the Microsoft HoloLens. Observational data, including imagery, magnetism, temperature, and density, are visualized in 4D within the user's immediate environment. Our application provides an educational tool for comprehending the complex relationships of observational data, which aids in our understanding of the Sun.
Classroom Environments: An Experiential Analysis of the Pupil-Teacher Visual Interaction in Uruguay
ERIC Educational Resources Information Center
Cardellino, Paula; Araneda, Claudio; García Alvarado, Rodrigo
2017-01-01
We argue that the traditional physical environment is commonly taken for granted and that little consideration has been given to how this affects pupil-teacher interactions. This article presents evidence that certain physical environments do not allow equal visual interaction and, as a result, we derive a set of basic guiding principles that…
ERIC Educational Resources Information Center
Wei, Liew Tze; Sazilah, Salam
2012-01-01
This study investigated the effects of visual cues in multiple external representations (MER) environment on the learning performance of novices' program comprehension. Program codes and flowchart diagrams were used as dual representations in multimedia environment to deliver lessons on C-Programming. 17 field independent participants and 16 field…
Optical projectors simulate human eyes to establish operator's field of view
NASA Technical Reports Server (NTRS)
Beam, R. A.
1966-01-01
Device projects visual pattern limits of the field of view of an operator as his eyes are directed at a given point on a control panel. The device, which consists of two projectors, provides instant evaluation of visual ability at a point on a panel.
A study on haptic collaborative game in shared virtual environment
NASA Astrophysics Data System (ADS)
Lu, Keke; Liu, Guanyang; Liu, Lingzhi
2013-03-01
A study of a collaborative game in a shared virtual environment with haptic feedback over computer networks is introduced in this paper. A collaborative task was used in which players located at remote sites played the game together. Unlike in traditional networked multiplayer games, players receive both visual and haptic feedback in the virtual environment. The experiment was designed with two conditions: visual feedback only and visual-haptic feedback. The goal of the experiment was to assess the impact of force feedback on collaborative task performance. Results indicate that haptic feedback is beneficial for performance enhancement in collaborative games in shared virtual environments. The outcomes of this research can have a powerful impact on networked computer games.
Visualizing Matrix Multiplication
ERIC Educational Resources Information Center
Daugulis, Peteris; Sondore, Anita
2018-01-01
Efficient visualizations of computational algorithms are important tools for students, educators, and researchers. In this article, we point out an innovative visualization technique for matrix multiplication. This method differs from the standard, formal approach by using block matrices to make computations more visual. We find this method a…
[Working conditions for supermarket employees: from experimental data to best practices].
Martellotta, Francesco; Della Crociata, Sabrina; Simone, Antonio; Calderoni, Leonardo; D'Alba, Michele; Cervellati, Massimo; Papapietro, Nunzio
2014-07-15
Thermal, acoustic and visual comfort conditions for hypermarket workers have never been investigated with scientific methods. Taking advantage of a case study with characteristics that allow generalization of the results, we analytically measured the actual comfort conditions to which workers are exposed and point out possible improvements. A detailed survey based on instrumental measurements combined with subjective questionnaires was carried out to assess the indoor environment. Even though the analysis pointed out no significant risk conditions, several smaller problems appeared in terms of local discomfort (such as cold limbs, higher sound level exposure, and limited glare phenomena) for cashiers. The origin of these problems appeared to be the pivotal position of the cash registers. Taking into account the observed phenomena and their causes, a list of "best practices" has been defined, in the hope that their adoption could further limit any impact on workers' comfort conditions.
Simpson, Heidi
The ability to carry out and document a full respiratory assessment is an essential skill for all nurses. The elements included are: an initial assessment, history taking, inspection, palpation, percussion, auscultation and further investigations. A prompt initial assessment allows immediate evaluation of the severity of illness, and appropriate treatment measures may need to be instigated at this point. Following this, a comprehensive patient history is elicited. Clinical examination of the patient follows and involves inspection, palpation, percussion and auscultation. At this point, consideration must be given to preparing a light, warm, quiet, private environment for examination and to suitable patient positioning. Inspection is a comprehensive visual assessment, while palpation involves using touch to gather information. The next stages are percussion and auscultation. While percussion is striking the chest to determine the state of underlying tissues, auscultation entails listening to and interpreting sound transmission through the chest wall via a stethoscope. Finally, further investigations may be necessary to confirm or negate suspected diagnoses.
Real-time tracking of visually attended objects in virtual environments and its application to LOD.
Lee, Sungkil; Kim, Gerard Jounghyun; Choi, Seungmoon
2009-01-01
This paper presents a real-time framework for computationally tracking objects visually attended by the user while navigating in interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors, and identifies the most plausibly attended objects among candidates in the object saliency map. The computational framework was implemented on the GPU, exhibiting computational performance adequate for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing objects regarded as visually attended by the framework to actual human gaze collected with an eye tracker. The results indicated that the accuracy was at a level well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially owing to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of detail in virtual environments, without any hardware for head or eye tracking.
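A minimal sketch of the core idea (not the authors' implementation; the weighting scheme, field names and scores below are illustrative assumptions): each candidate object carries a bottom-up saliency score that is modulated by a top-down score inferred from the user's behaviour, and the object with the highest combined score is predicted as attended.

```python
# Combining stimulus-driven (bottom-up) and goal-directed (top-down) evidence
# to predict the attended object among candidates in an object saliency map.

def predict_attended(objects, topdown_weight=0.6):
    """objects: list of dicts with 'name', 'saliency' (bottom-up, 0..1) and
    'context' (top-down plausibility, 0..1). Returns the predicted name."""
    def score(obj):
        # Convex combination of bottom-up and top-down evidence.
        return ((1 - topdown_weight) * obj["saliency"]
                + topdown_weight * obj["context"])
    return max(objects, key=score)["name"]

scene = [
    {"name": "flashing_sign", "saliency": 0.9, "context": 0.1},  # salient, irrelevant
    {"name": "door_ahead",    "saliency": 0.4, "context": 0.9},  # goal-relevant
]
best = predict_attended(scene)  # top-down context overrides raw saliency
```

With the top-down term removed (`topdown_weight=0`), the purely stimulus-driven prediction flips to the flashing sign, which illustrates why the contextual information improved accuracy.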
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, Han; Sharma, Diksha; Badano, Aldo, E-mail: aldo.badano@fda.hhs.gov
2014-12-15
Purpose: Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS, to facilitate the setup of computational experiments via hybridMANTIS. Methods: The visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. Results: The output consists of point response and pulse-height spectrum, and optical transport statistics generated by hybridMANTIS. The users can download the output images and statistics through a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. Conclusions: The visualization tools visualMANTIS and webMANTIS provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying input parameters to receiving visual feedback for the model predictions.
Semantics of the visual environment encoded in parahippocampal cortex
Bonner, Michael F.; Price, Amy Rose; Peelle, Jonathan E.; Grossman, Murray
2016-01-01
Semantic representations capture the statistics of experience and store this information in memory. A fundamental component of this memory system is knowledge of the visual environment, including knowledge of objects and their associations. Visual semantic information underlies a range of behaviors, from perceptual categorization to cognitive processes such as language and reasoning. Here we examine the neuroanatomic system that encodes visual semantics. Across three experiments, we found converging evidence indicating that knowledge of verbally mediated visual concepts relies on information encoded in a region of the ventral-medial temporal lobe centered on parahippocampal cortex. In an fMRI study, this region was strongly engaged by the processing of concepts relying on visual knowledge but not by concepts relying on other sensory modalities. In a study of patients with the semantic variant of primary progressive aphasia (semantic dementia), atrophy that encompassed this region was associated with a specific impairment in verbally mediated visual semantic knowledge. Finally, in a structural study of healthy adults from the fMRI experiment, gray matter density in this region related to individual variability in the processing of visual concepts. The anatomic location of these findings aligns with recent work linking the ventral-medial temporal lobe with high-level visual representation, contextual associations, and reasoning through imagination. Together this work suggests a critical role for parahippocampal cortex in linking the visual environment with knowledge systems in the human brain. PMID:26679216
Yu, Rui-Feng; Yang, Lin-Dong; Wu, Xin
2017-05-01
This study identified the risk factors influencing visual fatigue in baggage X-ray security screeners and estimated the strength of correlations between those factors and visual fatigue using a structural equation modelling approach. Two hundred and five X-ray security screeners participated in a questionnaire survey. The results showed that satisfaction with the VDT's physical features and the work environment conditions were negatively correlated with the intensity of visual fatigue, whereas job stress and job burnout had direct positive influences. The path coefficient between the image quality of the VDT and visual fatigue was not significant. The total effects of job burnout, job stress, the VDT's physical features and the work environment conditions on visual fatigue were 0.471, 0.469, -0.268 and -0.251 respectively. These findings indicate that both extrinsic factors relating to the VDT and workplace environment and psychological factors including job burnout and job stress should be considered in the workplace design and work organisation of security screening tasks to reduce screeners' visual fatigue. Practitioner Summary: This study identified the risk factors influencing visual fatigue in baggage X-ray security screeners and estimated the strength of correlations between those factors and visual fatigue. The findings are of great importance to the workplace design and work organisation of security screening tasks to reduce screeners' visual fatigue.
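The distinction between path coefficients and total effects reported above follows standard structural equation modelling arithmetic: a predictor's total effect on an outcome is its direct path plus, for each mediated route, the product of the coefficients along that route. The coefficients in the example below are hypothetical (the abstract reports total effects but not the full path structure):

```python
# Total effect in SEM = direct effect + sum over indirect routes of the
# product of path coefficients along each route. Numbers are illustrative.

def total_effect(direct, indirect_routes):
    """indirect_routes: list of coefficient lists, one per mediated path."""
    effect = direct
    for route in indirect_routes:
        prod = 1.0
        for coef in route:
            prod *= coef
        effect += prod
    return effect

# Hypothetical structure: stress -> fatigue directly (0.35), plus
# stress -> burnout (0.40) -> fatigue (0.30) indirectly.
te = total_effect(0.35, [[0.40, 0.30]])  # 0.35 + 0.40 * 0.30 = 0.47
```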
Traffic Signs in Complex Visual Environments
DOT National Transportation Integrated Search
1982-11-01
The effects of sign luminance on detection and recognition of traffic control devices are mediated through contrast with the immediate surround. Additionally, complex visual scenes are known to degrade visual performance with targets well above visual...
News video story segmentation method using fusion of audio-visual features
NASA Astrophysics Data System (ADS)
Wen, Jun; Wu, Ling-da; Zeng, Pu; Luan, Xi-dao; Xie, Yu-xiang
2007-11-01
News story segmentation is an important aspect of news video analysis. This paper presents a method for news video story segmentation. Unlike prior works, which are based on visual features, the proposed technique uses audio features as a baseline and fuses visual features with them to refine the results. First, it selects silence clips as audio candidate points, and shot boundaries and anchor shots as two kinds of visual candidate points. It then uses the audio candidates as cues and develops fusion methods that exploit the diverse visual candidates to refine the audio candidates into story boundaries. Experimental results show that the method has high efficiency and adapts well to different kinds of news video.
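The fusion idea, silence-based candidate boundaries confirmed by nearby visual shot boundaries, can be sketched as follows. This is an illustrative reconstruction, not the paper's algorithm; the timestamps and the tolerance parameter are assumptions.

```python
def fuse_boundaries(silence_points, shot_boundaries, tol=1.0):
    """Keep only silence candidates (audio baseline) that fall within
    `tol` seconds of a visual shot boundary; treat those as story boundaries."""
    confirmed = []
    for s in silence_points:
        if any(abs(s - b) <= tol for b in shot_boundaries):
            confirmed.append(s)
    return confirmed

# Hypothetical timestamps in seconds.
print(fuse_boundaries([12.3, 47.9, 80.2], [12.0, 48.5, 95.0]))  # [12.3, 47.9]
```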
Person and gesture tracking with smart stereo cameras
NASA Astrophysics Data System (ADS)
Gordon, Gaile; Chen, Xiangrong; Buck, Ron
2008-02-01
Physical security increasingly involves sophisticated, real-time visual tracking of a person's location inside a given environment, often in conjunction with biometrics and other security-related technologies. However, demanding real-world conditions like crowded rooms, changes in lighting and physical obstructions have proved incredibly challenging for 2D computer vision technology. In contrast, 3D imaging technology is not affected by constant changes in lighting and apparent color, and thus allows tracking accuracy to be maintained in dynamically lit environments. In addition, person tracking with a 3D stereo camera can provide the location and movement of each individual very precisely, even in a very crowded environment. 3D vision only requires that the subject be partially visible to a single stereo camera to be correctly tracked; multiple cameras are used to extend the system's operational footprint and to contend with heavy occlusion. A successful person tracking system must not only perform visual analysis robustly, but also be small and inexpensive and consume relatively little power. The TYZX Embedded 3D Vision systems are well suited to the low power, small footprint, and low cost required by these types of volume applications. Several security-focused organizations, including the U.S. Government, have deployed TYZX 3D stereo vision systems in security applications. 3D image data is also advantageous in the related application area of gesture tracking. Visual (uninstrumented) tracking of natural hand gestures and movement provides new opportunities for interactive control, including video gaming, location-based entertainment, and interactive displays. 2D images have been used to extract the location of hands within a plane, but 3D hand location enables a much broader range of interactive applications.
In this paper, we provide some background on the TYZX smart stereo camera platform, describe the person tracking and gesture tracking systems implemented on this platform, and discuss some deployed applications.
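The 3D information such stereo cameras rely on comes from disparity between the two views; a minimal pinhole-model sketch is below. The focal length, baseline and disparity values are illustrative, not TYZX hardware specifics.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo model: depth Z = f * B / d, where f is the focal
    length in pixels, B the camera baseline in metres, d the disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# 700 px focal length, 12 cm baseline, 20 px disparity -> 4.2 m away.
print(depth_from_disparity(700, 0.12, 20))  # ≈ 4.2
```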
Motor effects from visually induced disorientation in man.
DOT National Transportation Integrated Search
1969-11-01
The problem of disorientation in a moving optical environment was examined. Egocentric disorientation can be experienced by a pilot if the entire visual environment moves relative to his body without a clue of the objective position of the airplane i...
NASA Technical Reports Server (NTRS)
Sturm, Erick J.; Monahue, Kenneth M.; Biehl, James P.; Kokorowski, Michael; Ngalande, Cedrick; Boedeker, Jordan
2012-01-01
The Jupiter Environment Tool (JET) is a custom UI plug-in for STK that provides an interface to Jupiter environment models for visualization and analysis. Users can visualize the different magnetic field models of Jupiter through various rendering methods, which are fully integrated within STK's 3D Window. This allows users to take snapshots and make animations of their scenarios with magnetic field visualizations. Analytical data can be accessed in the form of custom vectors. Given these custom vectors, users have access to magnetic field data in custom reports, graphs, access constraints, coverage analysis, and anywhere else vectors are used within STK.
Perception of Graphical Virtual Environments by Blind Users via Sensory Substitution
Maidenbaum, Shachar; Buchs, Galit; Abboud, Sami; Lavi-Rotbain, Ori; Amedi, Amir
2016-01-01
Graphical virtual environments are currently far from accessible to blind users, as their content is mostly visual. This is especially unfortunate as these environments hold great potential for this population for purposes such as safe orientation, education, and entertainment. Previous tools have increased accessibility, but there is still a long way to go. Visual-to-audio sensory substitution devices (SSDs) can increase accessibility generically by sonifying on-screen content regardless of the specific environment, and offer increased accessibility without the use of expensive dedicated peripherals like electrode/vibrator arrays. Using SSDs virtually draws on skills similar to those used when operating them in the real world, enabling both training on the device and training on environments virtually before real-world visits. This could enable more complex, standardized and autonomous SSD training and new insights into multisensory interaction and the visually deprived brain. However, whether congenitally blind users, who have never experienced virtual environments, will be able to use this information for successful perception and interaction within them is currently unclear. We tested this using the EyeMusic SSD, which conveys whole-scene visual information, to perform virtual tasks otherwise impossible without vision. Congenitally blind users had to navigate virtual environments and find doors, differentiate between them based on their features (Experiment 1, task 1) and surroundings (Experiment 1, task 2) and walk through them; these tasks were accomplished with 95% and 97% success rates, respectively. We further explored the reactions of congenitally blind users during their first interaction with a more complex virtual environment than in the previous tasks: walking down a virtual street, recognizing different features of houses and trees, navigating to cross-walks, etc. Users reacted enthusiastically and reported feeling immersed within the environment.
They highlighted the potential usefulness of such environments for understanding what visual scenes are supposed to look like, noted their potential for complex training, and suggested many future environments they wished to experience. PMID:26882473
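A toy sketch of the sonification principle behind such visual-to-audio SSDs follows: sweep an image column by column and map lit pixels to tones, with higher rows given higher pitches. The mapping constants and log-frequency spacing are assumptions for illustration, not the EyeMusic algorithm.

```python
def sonify_column(column, f_low=220.0, f_high=1760.0):
    """Return frequencies (Hz) for lit pixels in one image column.
    Row 0 is the top of the image and maps to the highest pitch."""
    n = len(column)
    freqs = []
    for row, lit in enumerate(column):
        if lit:
            frac = 1.0 - row / (n - 1)          # 1.0 at top, 0.0 at bottom
            freqs.append(f_low * (f_high / f_low) ** frac)  # log spacing
    return freqs

# A 5-pixel column with the top and bottom pixels lit.
print(sonify_column([1, 0, 0, 0, 1]))  # [1760.0, 220.0]
```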
Analytics and Visualization Pipelines for Big Data on the NASA Earth Exchange (NEX) and OpenNEX
NASA Technical Reports Server (NTRS)
Chaudhary, Aashish; Votava, Petr; Nemani, Ramakrishna R.; Michaelis, Andrew; Kotfila, Chris
2016-01-01
We are developing capabilities for an integrated petabyte-scale Earth science collaborative analysis and visualization environment. The ultimate goal is to deploy this environment within the NASA Earth Exchange (NEX) and OpenNEX in order to enhance existing science data production pipelines in both high-performance computing (HPC) and cloud environments. Bridging HPC and cloud is a fairly new concept under active research, and this system significantly enhances the ability of the scientific community to accelerate analysis and visualization of Earth science data from NASA missions, model outputs and other sources. We have developed a web-based system that seamlessly interfaces with both HPC and cloud environments, providing tools that enable science teams to develop and deploy large-scale analysis, visualization and QA pipelines covering both the production process and the data products, and to share results with the community. Our project is developed in several stages, each addressing a separate challenge: workflow integration, parallel execution in either cloud or HPC environments, and big-data analytics and visualization. This work benefits a number of existing and upcoming projects supported by NEX, such as the Web Enabled Landsat Data (WELD) project, for which we are developing a new QA pipeline for the 25 PB system.
Urodynamic catheter moisture sensor: A novel device to improve leak point pressure detection.
Marshall, Blake R; Arlen, Angela M; Kirsch, Andrew J
2016-06-01
High-quality urodynamic studies (UDS) in patients with neurogenic lower urinary tract dysfunction are important, as UDS may be the only reliable gauge of potential risk for upper tract deterioration and the optimal tool to guide lower urinary tract management. Reliance on direct visualization of leakage during typical UDS remains a potential source of error. Given the necessity of accurate leak point pressures, we developed a wireless leak detection sensor to eliminate the need for visual inspection during UDS. A mean decrease in detrusor leak point pressure of 3 cmH2O and a mean 11% decrease in capacity at leakage were observed when employing the sensor compared to visual inspection in children undergoing two fillings during a single UDS session. Removing the visual inspection component of UDS may improve the accuracy of pressure readings. Neurourol. Urodynam. 35:647-648, 2016. © 2015 Wiley Periodicals, Inc.
ERIC Educational Resources Information Center
Yamada-Rice, Dylan
2011-01-01
This article looks at the way in which the changing visual environment affects education at two levels: in communication patterns and research methodologies. The research considers differences in the variance and quantity of types of visual media and their relationship to the written mode in the urban landscapes of Tokyo and London, using…
Up by upwest: Is slope like north?
Weisberg, Steven M; Nardi, Daniele; Newcombe, Nora S; Shipley, Thomas F
2014-10-01
Terrain slope can be used to encode the location of a goal. However, this directional information may be encoded using a conceptual north (i.e., invariantly with respect to the environment), or in an observer-relative fashion (i.e., varying depending on the direction one faces when learning the goal). This study examines which representation is used, whether the sensory modality in which slope is encoded (visual, kinaesthetic, or both) influences representations, and whether use of slope varies for men and women. In a square room, with a sloped floor explicitly pointed out as the only useful cue, participants encoded the corner in which a goal was hidden. Without direct sensory access to slope cues, participants used a dial to point to the goal. For each trial, the goal was hidden uphill or downhill, and the participants were informed whether they faced uphill or downhill when pointing. In support of observer-relative representations, participants pointed more accurately and quickly when facing concordantly with the hiding position. There was no effect of sensory modality, providing support for functional equivalence. Sex did not interact with the findings on modality or reference frame, but spatial measures correlated with success on the slope task differently for each sex.
Jung, Eunice L.; Zadbood, Asieh; Lee, Sang-Hun; Tomarken, Andrew J.; Blake, Randolph
2013-01-01
We live in a cluttered, dynamic visual environment that poses a challenge for the visual system: for objects, including those that move about, to be perceived, information specifying those objects must be integrated over space and over time. Does a single, omnibus mechanism perform this grouping operation, or does grouping depend on separate processes specialized for different feature aspects of the object? To address this question, we tested a large group of healthy young adults on their abilities to perceive static fragmented figures embedded in noise and to perceive dynamic point-light biological motion figures embedded in dynamic noise. There were indeed substantial individual differences in performance on both tasks, but none of the statistical tests we applied to this data set uncovered a significant correlation between those performance measures. These results suggest that the two tasks, despite their superficial similarity, require different segmentation and grouping processes that are largely unrelated to one another. Whether those processes are embodied in distinct neural mechanisms remains an open question. PMID:24198799
Real-time visual simulation of APT system based on RTW and Vega
NASA Astrophysics Data System (ADS)
Xiong, Shuai; Fu, Chengyu; Tang, Tao
2012-10-01
The Matlab/Simulink simulation model of an APT (acquisition, pointing and tracking) system is analyzed and established. The model's C code, which can be used for real-time simulation, is then generated by RTW (Real-Time Workshop). Practical experiments show that running the C code produces the same simulation results as running the Simulink model directly in the Matlab environment. MultiGen-Vega is a real-time 3D scene simulation software system. With it and OpenGL, the APT scene simulation platform is developed and used to render and display the virtual scenes of the APT system. To add necessary graphics effects to the virtual scenes in real time, GLSL (OpenGL Shading Language) shaders are used on a programmable GPU. By calling the C code, the scene simulation platform can adjust the system parameters on-line and obtain the APT system's real-time simulation data to drive the scenes. Practical application shows that this visual simulation platform has high efficiency, low cost and good simulation results.
The IRGen infrared data base modeler
NASA Technical Reports Server (NTRS)
Bernstein, Uri
1993-01-01
IRGen is a modeling system which creates three-dimensional IR data bases for real-time simulation of thermal IR sensors. Starting from a visual data base, IRGen computes the temperature and radiance of every data base surface under a user-specified thermal environment. The predicted gray shade of each surface is then computed from the user-specified sensor characteristics. IRGen is based on first-principles models of heat transport and heat flux sources, and it accurately simulates the variations of IR imagery with time of day and with changing environmental conditions. The starting point for creating an IRGen data base is a visual faceted data base in which every facet has been labeled with a material code. This code is an index into a material data base which contains surface and bulk thermal properties for the material. IRGen uses the material properties to compute the surface temperature at the specified time of day. IRGen also supports image generator features such as texturing and smooth shading, which greatly enhance image realism.
A method for fast energy estimation and visualization of protein-ligand interaction
NASA Astrophysics Data System (ADS)
Tomioka, Nobuo; Itai, Akiko; Iitaka, Yoichi
1987-10-01
A new computational and graphical method for facilitating ligand-protein docking studies is developed on a three-dimensional computer graphics display. Various physical and chemical properties inside the ligand binding pocket of a receptor protein, whose structure is elucidated by X-ray crystal analysis, are calculated on three-dimensional grid points and are stored in advance. By utilizing those tabulated data, it is possible to estimate the non-bonded and electrostatic interaction energy and the number of possible hydrogen bonds between protein and ligand molecules in real time during an interactive docking operation. The method also provides a comprehensive visualization of the local environment inside the binding pocket. With this method, it becomes easier to find a roughly stable geometry of ligand molecules, and one can therefore make a rapid survey of the binding capability of many drug candidates. The method will be useful for drug design as well as for the examination of protein-ligand interactions.
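The precomputed-grid idea that makes the interactive energy estimate fast can be sketched as follows: store a potential on a 3D grid once, then score any ligand pose by per-atom grid lookups. This is an illustrative nearest-grid-point version with invented values, not the authors' code (which also handles electrostatics and hydrogen-bond counts).

```python
import numpy as np

def score_pose(grid, origin, spacing, atom_coords):
    """Sum precomputed potential values at the grid points nearest
    each ligand atom; O(atoms) per pose instead of O(atoms * protein atoms)."""
    idx = np.rint((np.asarray(atom_coords) - origin) / spacing).astype(int)
    return float(sum(grid[tuple(i)] for i in idx))

# Toy 4x4x4 grid with two favourable (negative-energy) points.
grid = np.zeros((4, 4, 4))
grid[1, 1, 1] = -2.5
grid[2, 2, 2] = -1.0
origin, spacing = np.zeros(3), 1.0

# A two-atom "ligand" near those points.
print(score_pose(grid, origin, spacing, [[1.1, 0.9, 1.0], [2.0, 2.0, 2.0]]))  # -3.5
```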
Changing Perspective: Zooming in and out during Visual Search
ERIC Educational Resources Information Center
Solman, Grayden J. F.; Cheyne, J. Allan; Smilek, Daniel
2013-01-01
Laboratory studies of visual search are generally conducted in contexts with a static observer vantage point, constrained by a fixation cross or a headrest. In contrast, in many naturalistic search settings, observers freely adjust their vantage point by physically moving through space. In two experiments, we evaluate behavior during free vantage…
Visualization of 3-D tensor fields
NASA Technical Reports Server (NTRS)
Hesselink, L.
1996-01-01
Second-order tensor fields have applications in many different areas of physics, such as general relativity and fluid mechanics. The wealth of multivariate information in tensor fields makes them more complex and abstract than scalar and vector fields. Visualization is a good technique for scientists to gain new insights from them. Visualizing a 3-D continuous tensor field is equivalent to simultaneously visualizing its three eigenvector fields. In the past, research has been conducted in the area of two-dimensional tensor fields. It was shown that degenerate points, defined as points where eigenvalues are equal to each other, are the basic singularities underlying the topology of tensor fields. Moreover, it was shown that eigenvectors never cross each other except at degenerate points. Since we live in a three-dimensional world, it is important for us to understand the underlying physics of this world. In this report, we describe a new method for locating degenerate points along with the conditions for classifying them in three-dimensional space. Finally, we discuss some topological features of three-dimensional tensor fields, and interpret topological patterns in terms of physical properties.
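The degeneracy condition described above, two eigenvalues coinciding, is straightforward to test numerically at a single point; a minimal sketch for symmetric tensors, with an assumed tolerance parameter:

```python
import numpy as np

def is_degenerate(tensor, tol=1e-8):
    """True if any two eigenvalues of a symmetric 3x3 tensor (nearly) coincide,
    i.e. the point is a degenerate point of the tensor field."""
    w = np.sort(np.linalg.eigvalsh(tensor))
    return bool(np.any(np.diff(w) < tol))

isotropic = np.eye(3)               # all eigenvalues equal -> degenerate
generic = np.diag([1.0, 2.0, 3.0])  # distinct eigenvalues -> well-defined eigenvectors
print(is_degenerate(isotropic), is_degenerate(generic))  # True False
```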
Basic visual function and cortical thickness patterns in posterior cortical atrophy.
Lehmann, Manja; Barnes, Josephine; Ridgway, Gerard R; Wattam-Bell, John; Warrington, Elizabeth K; Fox, Nick C; Crutch, Sebastian J
2011-09-01
Posterior cortical atrophy (PCA) is characterized by a progressive decline in higher-visual object and space processing, but the extent to which these deficits are underpinned by basic visual impairments is unknown. This study aimed to assess basic and higher-order visual deficits in 21 PCA patients. Basic visual skills including form detection and discrimination, color discrimination, motion coherence, and point localization were measured, and associations and dissociations between specific basic visual functions and measures of higher-order object and space perception were identified. All participants showed impairment in at least one aspect of basic visual processing. However, a number of dissociations between basic visual skills indicated a heterogeneous pattern of visual impairment among the PCA patients. Furthermore, basic visual impairments were associated with particular higher-order object and space perception deficits, but not with nonvisual parietal tasks, suggesting the specific involvement of visual networks in PCA. Cortical thickness analysis revealed trends toward lower cortical thickness in occipitotemporal (ventral) and occipitoparietal (dorsal) regions in patients with visuoperceptual and visuospatial deficits, respectively. However, there was also considerable overlap in their patterns of cortical thinning. These findings suggest that different presentations of PCA represent points in a continuum of phenotypical variation.
Learning GIS and exploring geolocated data with the all-in-one Geolokit toolbox for Google Earth
NASA Astrophysics Data System (ADS)
Watlet, A.; Triantafyllou, A.; Bastin, C.
2016-12-01
GIS software packages are today's essential tools to gather and visualize geological data, to apply spatial and temporal analyses and, finally, to create and share interactive maps for further investigations in geosciences. Such skills are especially essential for students who go through field trips, sample collections or field experiments. However, time is generally lacking to teach in detail all the aspects of visualizing geolocated geoscientific data. For these purposes, we developed Geolokit: a lightweight freeware dedicated to geodata visualization and written in Python, a high-level, cross-platform programming language. Geolokit is accessible through a graphical user interface designed to run in parallel with Google Earth, benefitting from its numerous interactive capabilities. It is designed as a very user-friendly toolbox that allows 'geo-users' to import their raw data (e.g. GPS, sample locations, structural data, field pictures, maps), to use fast data analysis tools and to visualize the results in the Google Earth environment using KML code, with no third-party software required except Google Earth itself. Geolokit comes with a large number of geosciences labels, symbols, colours and placemarks and can display several types of geolocated data, including: multi-point datasets; automatically computed contours of multi-point datasets via several interpolation methods; discrete planar and linear structural geology data in 2D or 3D, supporting a large range of structure input formats; clustered stereonets and rose diagrams; 2D cross-sections as vertical sections; georeferenced maps and grids with user-defined coordinates; and field pictures located using either geo-tagging metadata from a camera's built-in GPS module or the same-day track of an external GPS. In the end, Geolokit is helpful for quickly visualizing and exploring data without losing too much time in the numerous capabilities of GIS software suites.
We are looking for students and teachers to discover all the functionalities of Geolokit. As this project is under development and planned to be open source, we look forward to discussions regarding particular needs or ideas, and to contributions to the Geolokit project.
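The KML-export step at the heart of such a toolbox can be sketched as follows. This is an illustrative minimal Placemark generator, not Geolokit's actual code; the sample name and coordinates are invented.

```python
def sample_to_kml(name, lon, lat, elev=0.0):
    """Return a minimal KML document with one Placemark that
    Google Earth can load directly (KML uses lon,lat,elev order)."""
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
        '  <Placemark>\n'
        f'    <name>{name}</name>\n'
        f'    <Point><coordinates>{lon},{lat},{elev}</coordinates></Point>\n'
        '  </Placemark>\n'
        '</kml>\n'
    )

kml = sample_to_kml("sample-042", 4.87, 50.41)
print("<Placemark>" in kml and "4.87,50.41,0.0" in kml)  # True
```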
Zhang, Xin; Fu, Lingdi; Geng, Yuehua; Zhai, Xiang; Liu, Yanhua
2014-03-01
Here, we administered repeated-pulse transcranial magnetic stimulation to healthy people at the left Guangming (GB37) point and at a mock point, and calculated the sample entropy of electroencephalogram signals using nonlinear dynamics. Additionally, we compared the electroencephalogram sample entropy of signals in response to visual stimulation before, during, and after repeated-pulse transcranial magnetic stimulation at the Guangming point. Results showed that electroencephalogram sample entropy at the left (F3) and right (FP2) frontal electrodes differed significantly depending on where the magnetic stimulation was administered. Additionally, compared with the mock point, electroencephalogram sample entropy was higher after stimulating the Guangming point. When visual stimulation at Guangming was given before repeated-pulse transcranial magnetic stimulation, significant differences in sample entropy were found at five electrodes (C3, Cz, C4, P3, T8) in parietal cortex, the central gyrus, and the right temporal region compared with when it was given after repeated-pulse transcranial magnetic stimulation, indicating that repeated-pulse transcranial magnetic stimulation at Guangming can affect visual function. Analysis of the electroencephalogram revealed that when visual stimulation preceded repeated-pulse transcranial magnetic stimulation, sample entropy values were higher at the C3, C4, and P3 electrodes and lower at the Cz and T8 electrodes than when visual stimulation followed repeated-pulse transcranial magnetic stimulation.
The findings indicate that repeated-pulse transcranial magnetic stimulation at the Guangming point evokes different patterns of electroencephalogram signals than repeated-pulse transcranial magnetic stimulation at other nearby points on the body surface, and that repeated-pulse transcranial magnetic stimulation at the Guangming point is associated with changes in the complexity of visually evoked electroencephalogram signals in parietal regions, the central gyrus, and temporal regions.
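Sample entropy, the complexity measure used above, follows the standard definition SampEn(m, r) = -ln(A/B), where B counts pairs of matching templates of length m and A those of length m+1, within tolerance r. The simplified brute-force implementation below is illustrative (a textbook-style sketch, not the authors' code).

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) of a 1-D signal; r is given as a fraction of the signal's
    standard deviation. Lower values indicate more regular (predictable) signals."""
    x = np.asarray(x, dtype=float)
    r = r * x.std()

    def count_matches(k):
        # All templates of length k; count pairs within Chebyshev distance r,
        # excluding self-matches (i == j).
        templates = np.array([x[i:i + k] for i in range(len(x) - k + 1)])
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if np.max(np.abs(templates[i] - templates[j])) < r:
                    hits += 1
        return hits

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 400))  # periodic -> low entropy
noisy = rng.standard_normal(400)                   # white noise -> high entropy
print(sample_entropy(regular) < sample_entropy(noisy))  # True
```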
Visualization and Tracking of Parallel CFD Simulations
NASA Technical Reports Server (NTRS)
Vaziri, Arsi; Kremenetsky, Mark
1995-01-01
We describe a system for interactive visualization and tracking of a 3-D unsteady computational fluid dynamics (CFD) simulation on a parallel computer. CM/AVS, a distributed, parallel implementation of a visualization environment (AVS), runs on the CM-5 parallel supercomputer. A CFD solver is run as a CM/AVS module on the CM-5. Data communication between the solver, other parallel visualization modules, and a graphics workstation, which is running AVS, is handled by CM/AVS. Partitioning of the visualization task between the CM-5 and the workstation can be done interactively in the visual programming environment provided by AVS. Flow solver parameters can also be altered by programmable interactive widgets. This system partially removes the requirement of storing large solution files at frequent time steps, a characteristic of the traditional 'simulate → store → visualize' post-processing approach.
Individual Differences in a Spatial-Semantic Virtual Environment.
ERIC Educational Resources Information Center
Chen, Chaomei
2000-01-01
Presents two empirical case studies concerning the role of individual differences in searching through a spatial-semantic virtual environment. Discusses information visualization in information systems; cognitive factors, including associative memory, spatial ability, and visual memory; user satisfaction; and cognitive abilities and search…
NASA Technical Reports Server (NTRS)
Taylor, J. H.
1973-01-01
Some data on human vision, important in present and projected space activities, are presented. Visual environment and performance and structure of the visual system are also considered. Visual perception during stress is included.
On the performance of metrics to predict quality in point cloud representations
NASA Astrophysics Data System (ADS)
Alexiou, Evangelos; Ebrahimi, Touradj
2017-09-01
Point clouds are a promising alternative for immersive representation of visual contents. Recently, an increased interest has been observed in the acquisition, processing and rendering of this modality. Although subjective and objective evaluations are critical in order to assess the visual quality of media content, they still remain open problems for point cloud representation. In this paper we focus our efforts on subjective quality assessment of point cloud geometry, subject to typical types of impairments such as noise corruption and compression-like distortions. In particular, we propose a subjective methodology that is closer to real-life scenarios of point cloud visualization. The performance of the state-of-the-art objective metrics is assessed by considering the subjective scores as the ground truth. Moreover, we investigate the impact of adopting different test methodologies by comparing them. Advantages and drawbacks of every approach are reported, based on statistical analysis. The results and conclusions of this work provide useful insights that could be considered in future experimentation.
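One common family of objective metrics for point cloud geometry is point-to-point distortion; a minimal sketch follows (illustrative brute-force nearest neighbours, one of several metric designs such studies benchmark against subjective scores).

```python
import numpy as np

def mean_nn_distance(a, b):
    """Mean Euclidean distance from each point of cloud `a`
    to its nearest neighbour in cloud `b`."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # pairwise distances
    return d.min(axis=1).mean()

def symmetric_p2p(a, b):
    """Symmetric point-to-point error: worst of the two directed means."""
    return max(mean_nn_distance(a, b), mean_nn_distance(b, a))

ref = [[0, 0, 0], [1, 0, 0], [0, 1, 0]]            # reference geometry
noisy = [[0.1, 0, 0], [1, 0.1, 0], [0, 1.1, 0]]    # noise-corrupted version
print(round(symmetric_p2p(ref, noisy), 3))  # 0.1
```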
Comparison of a Visual and Head Tactile Display for Soldier Navigation
2013-12-01
environments for nuclear power plant operators, air traffic controllers, and pilots are information intensive. These environments usually involve the indirect...queue, correcting aircraft conflicts, giving instruction, clearance and advice to pilots, and assigning aircraft to other work queues and airports...these dynamic, complex, and multitask environments (1) collect and integrate a plethora of visual information into decisions that are critical for
Patino, Cecilia M.; Varma, Rohit; Azen, Stanley P.; Conti, David V.; Nichol, Michael B.; McKean-Cowdin, Roberta
2010-01-01
Purpose: To assess the impact of change in visual field (VF) on change in health-related quality of life (HRQoL) at the population level. Design: Prospective cohort study. Participants: 3,175 Los Angeles Latino Eye Study (LALES) participants. Methods: Objective measures of VF and visual acuity and self-reported HRQoL were collected at baseline and 4-year follow-up. Analysis of covariance was used to evaluate mean differences in change of HRQoL across severity levels of change in VF and to test for effect modification by covariates. Main Outcome Measures: General and vision-specific HRQoL. Results: Of 3,175 participants, 1,430 (46%) showed a change in VF (≥1 decibel [dB]) and 1,715 (54%) reported a clinically important change (≥5 points) in vision-specific HRQoL. Progressive worsening and improvement in the VF were associated with increasing losses and gains in vision-specific HRQoL for the composite score and 10 of its 11 subscales (all Ptrends<0.05). Losses in VF >5 dB and gains >3 dB were associated with clinically meaningful losses and gains in vision-specific HRQoL, respectively. Areas of vision-specific HRQoL most affected by greater losses in VF were driving, dependency, role-functioning, and mental health. The effect of change in VF (loss or gain) on mean change in vision-specific HRQoL varied by level of baseline vision loss (in visual field and/or visual acuity) and by change in visual acuity (all P-interactions<0.05). Those with moderate/severe VF loss at baseline and with a >5 dB loss in visual field during the study period had a mean loss of vision-specific HRQoL of 11.3 points, while those with no VF loss at baseline had a mean loss of 0.97 points. Similarly, with a >5 dB loss in VF and baseline visual acuity impairment (mild/severe) there was a loss in vision-specific HRQoL of 10.5 points, whereas with no visual acuity impairment at baseline there was a loss of vision-specific HRQoL of 3.7 points.
Conclusion: Both losses and gains in VF produce clinically meaningful changes in vision-specific HRQoL. In the presence of pre-existing vision loss (VF and visual acuity), similar levels of visual field change produce greater losses in quality of life. PMID:21458074
Allen, Thomas E; Letteri, Amy; Choi, Song Hoa; Dang, Daqian
2014-01-01
Brief review is provided of recent research on the impact of early visual language exposure on a variety of developmental outcomes, including literacy, cognition, and social adjustment. This body of work points to the great importance of giving young deaf children early exposure to a visual language as a critical precursor to the acquisition of literacy. Four analyses of data from the Visual Language and Visual Learning (VL2) Early Education Longitudinal Study are summarized. Each confirms findings from previously published laboratory findings and points to the positive effects of early sign language on, respectively, letter knowledge, social adaptability, sustained visual attention, and cognitive-behavioral milestones necessary for academic success. The article concludes with a consideration of the qualitative similarity hypothesis and a finding that the hypothesis is valid, but only if it can be presented as being modality independent.
Bach, Benjamin; Sicat, Ronell; Beyer, Johanna; Cordeil, Maxime; Pfister, Hanspeter
2018-01-01
We report on a controlled user study comparing three visualization environments for common 3D exploration. Our environments differ in how they exploit natural human perception and interaction capabilities. We compare an augmented-reality head-mounted display (Microsoft HoloLens), a handheld tablet, and a desktop setup. The novel head-mounted HoloLens display projects stereoscopic images of virtual content into a user's real world and allows for interaction in-situ at the spatial position of the 3D hologram. The tablet is able to interact with 3D content through touch, spatial positioning, and tangible markers; however, 3D content is still presented on a 2D surface. Our hypothesis is that visualization environments that better match human perceptual and interaction capabilities to the task at hand improve understanding of 3D visualizations. To better understand the space of display and interaction modalities in visualization environments, we first propose a classification based on three dimensions: perception, interaction, and the spatial and cognitive proximity of the two. Each technique in our study is located at a different position along these three dimensions. We asked 15 participants to perform four tasks, each task having different levels of difficulty for both spatial perception and degrees of freedom for interaction. Our results show that each of the tested environments is more effective for certain tasks, but that generally the desktop environment is still the fastest and most precise in almost all cases.
Visualizer: 3D Gridded Data Visualization Software for Geoscience Education and Research
NASA Astrophysics Data System (ADS)
Harwood, C.; Billen, M. I.; Kreylos, O.; Jadamec, M.; Sumner, D. Y.; Kellogg, L. H.; Hamann, B.
2008-12-01
In both research and education, learning is an interactive and iterative process of exploring and analyzing data or model results. However, visualization software often presents challenges on the path to learning because it assumes the user already knows the locations and types of features of interest, instead of enabling flexible and intuitive examination of results. We present examples of research and teaching using Visualizer, software specifically designed to create an effective and intuitive environment for interactive, scientific analysis of 3D gridded data. Visualizer runs in a range of 3D virtual reality environments (e.g., GeoWall, ImmersaDesk, or CAVE), but also provides a similar level of real-time interactivity on a desktop computer. When using Visualizer in a 3D-enabled environment, the software allows the user to interact with the data images as real objects, grabbing, rotating or walking around the data to gain insight and perspective. On the desktop, simple features, such as a set of cross-bars marking the plane of the screen, provide extra 3D spatial cues that allow the user to more quickly understand geometric relationships within the data. This platform portability allows the user to more easily integrate research results into classroom demonstrations and exercises, while the interactivity provides an engaging environment for self-directed and inquiry-based learning by students. Visualizer is freely available for download (www.keckcaves.org) and runs on Mac OS X and Linux platforms.
Dong, Han; Sharma, Diksha; Badano, Aldo
2014-12-01
Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridmantis, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webmantis and visualmantis, to facilitate the setup of computational experiments via hybridmantis. The visualization tools visualmantis and webmantis enable the user to control simulation properties through a user interface. In the case of webmantis, control via a web browser allows access through mobile devices such as smartphones or tablets. webmantis acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. The output consists of the point response, pulse-height spectrum, and optical transport statistics generated by hybridmantis. Users can download the output images and statistics as a zip file for future reference. In addition, webmantis provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. The visualization tools visualmantis and webmantis provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers, while allowing users to save simulation parameters and results from prior experiments.
The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying input parameters to receiving visual feedback for the model predictions.
Haptic perception and body representation in lateral and medial occipito-temporal cortices.
Costantini, Marcello; Urgesi, Cosimo; Galati, Gaspare; Romani, Gian Luca; Aglioti, Salvatore M
2011-04-01
Although vision is the primary sensory modality that humans and other primates use to identify objects in the environment, we can recognize crucial object features (e.g., shape, size) using the somatic modality. Previous studies have shown that the occipito-temporal areas dedicated to the visual processing of object forms, faces and bodies also show category-selective responses when the preferred stimuli are haptically explored out of view. Visual processing of human bodies engages specific areas in lateral (extrastriate body area, EBA) and medial (fusiform body area, FBA) occipito-temporal cortex. This study aimed at exploring the relative involvement of EBA and FBA in the haptic exploration of body parts. During fMRI scanning, participants were asked to haptically explore either real-size fake body parts or objects. We found a selective activation of right and left EBA, but not of right FBA, while participants haptically explored body parts as compared to real objects. This suggests that EBA may integrate visual body representations with somatosensory information regarding body parts and form a multimodal representation of the body. Furthermore, both left and right EBA showed a comparable level of body selectivity during haptic perception and visual imagery. However, right but not left EBA was more activated during haptic exploration than visual imagery of body parts, ruling out that the response to haptic body exploration was entirely due to the use of visual imagery. Overall, the results point to the existence of different multimodal body representations in the occipito-temporal cortex which are activated during perception and imagery of human body parts. Copyright © 2011 Elsevier Ltd. All rights reserved.
Non-lane-discipline-based car-following model under honk environment
NASA Astrophysics Data System (ADS)
Rong, Ying; Wen, Huiying
2018-04-01
This study proposes a non-lane-discipline-based car-following model that jointly considers drivers' visual angles and their timid/aggressive characteristics under a honk environment. We first derived the neutral stability condition using linear stability theory. It shows that the parameters related to visual angles and to driving characteristics under the honk environment all have a significant impact on the stability of non-lane-discipline traffic flow. To better understand the underlying mechanism, we further analyzed how each parameter affects the traffic flow, and how the visual-angle information influences the other parameters and, in turn, the non-lane-discipline traffic flow under the honk environment. The results show that the other aspects, such as driver characteristics and the honk effect, all interact with the visual-angle factor, and that the effect of the visual angle cannot be reduced to simply enlarging or shrinking the stable region, as in existing studies. Finally, to verify the proposed model, we carried out numerical simulations under periodic boundary conditions; the simulation results agree well with the theoretical findings.
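The abstract reports a neutral stability condition without giving the model equations. Purely as an illustrative sketch (the classical optimal-velocity car-following model, not the authors' extended non-lane-discipline model), linear stability theory of this kind proceeds as follows:

```latex
% Classical optimal-velocity model (illustrative; not the paper's model):
%   each vehicle n relaxes toward an optimal velocity V of its headway.
\frac{dv_n}{dt} = a\left[\, V(\Delta x_n) - v_n \,\right]
% Linearizing about the uniform flow with headway h and speed V(h),
% small perturbations neither grow nor decay on the neutral curve:
a = 2\,V'(h)
% so the uniform flow is linearly stable when a > 2 V'(h).
```

Extended models such as the one in this abstract add further terms (visual angle, driver character, honk effect), each of which shifts this neutral stability curve.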
Foley, Alan R; Masingila, Joanna O
2015-07-01
In this paper, the authors explore the use of mobile devices as assistive technology for students with visual impairments in resource-limited environments. This paper provides initial data and analysis from an ongoing project in Kenya using tablet devices to provide access to education and independence for university students with visual impairments in Kenya. The project is a design-based research project in which we have developed and are refining a theoretically grounded intervention--a model for developing communities of practice to support the use of mobile technology as an assistive technology. We are collecting data to assess the efficacy and improve the model as well as inform the literature that has guided the design of the intervention. In examining the impact of the use of mobile devices for the students with visual impairments, we found that the devices provide the students with (a) access to education, (b) the means to participate in everyday life and (c) the opportunity to create a community of practice. Findings from this project suggest that communities of practice are both a viable and a valuable approach for facilitating the diffusion and support of mobile devices as assistive technology for students with visual impairments in resource-limited environments. Implications for Rehabilitation: The use of mobile devices as assistive technology in resource-limited environments provides students with visual impairments access to education and enhanced means to participate in everyday life. Communities of practice are both a viable and a valuable approach for facilitating the diffusion and support of mobile devices as assistive technology for students with visual impairments in resource-limited environments. Providing access to assistive technology early and consistently throughout students' schooling builds both their skill and confidence and also demonstrates the capabilities of people with visual impairments to the larger society.
NASA Astrophysics Data System (ADS)
Quattrini, R.; Battini, C.; Mammoli, R.
2018-05-01
Recently we have seen an increasing availability of HBIM models that are rich in both geometric and informational terms. By contrast, there is still a lack of research implementing dedicated libraries for the architectural heritage that are based on parametric intelligence and are semantically aware. Additional challenges arise from their portability to non-desktop environments (such as VR). This research article demonstrates the validity of a workflow applied to the architectural heritage which, starting from semantic modeling, reaches visualization in a virtual reality environment, passing through the necessary phases of export, data migration and management. The three-dimensional modeling of the classical Doric order takes place in the BIM work environment and serves as the necessary starting point for the implementation of data, parametric intelligence and the definition of ontologies that uniquely qualify the model. The study also establishes an effective method for data migration from the BIM model to databases integrated into VR technologies for AH. Furthermore, the process proposes a methodology, applicable on a return path, suited to achieving appropriate data enrichment of each model and to enabling interaction with the model in the VR environment.
Bedmap2; Mapping, visualizing and communicating the Antarctic sub-glacial environment.
NASA Astrophysics Data System (ADS)
Fretwell, Peter; Pritchard, Hamish
2013-04-01
The Bedmap2 project has been a large cooperative effort to compile, model, map and visualize the ice-rock interface beneath the Antarctic ice sheet. Here we present the final output of that project: the Bedmap2 printed map. The map is an A1, double-sided print showing 2D and 3D visualizations of the dataset. It includes scientific interpretations, cross sections and comparisons with other areas. Paper copies of the colour double-sided map will be freely distributed at this session.
Experimenter's Laboratory for Visualized Interactive Science
NASA Technical Reports Server (NTRS)
Hansen, Elaine R.; Rodier, Daniel R.; Klemp, Marjorie K.
1994-01-01
ELVIS (Experimenter's Laboratory for Visualized Interactive Science) is an interactive visualization environment that enables scientists, students, and educators to visualize and analyze large, complex, and diverse sets of scientific data. It accomplishes this by presenting the data sets as 2-D, 3-D, color, stereo, and graphic images with movable and multiple light sources combined with displays of solid-surface, contours, wire-frame, and transparency. By simultaneously rendering diverse data sets acquired from multiple sources, formats, and resolutions and by interacting with the data through an intuitive, direct-manipulation interface, ELVIS provides an interactive and responsive environment for exploratory data analysis.
Ogourtsova, Tatiana; Archambault, Philippe S; Lamontagne, Anouk
2018-01-01
Unilateral spatial neglect (USN), a highly prevalent and disabling post-stroke deficit, has been shown to affect the recovery of locomotion. However, our current understanding of the role of USN in goal-directed locomotion control, particularly under the varied cognitive/perceptual conditions that tap into daily life demands, is limited. We aimed to examine goal-directed locomotion abilities in individuals with and without post-stroke USN vs. healthy controls. Participants (n = 45, n = 15 per group) performed goal-directed locomotion trials to actual, remembered and shifting targets located 7 m away at 0° and 15° right/left while immersed in a 3-D virtual environment. Greater end-point mediolateral displacement and heading errors (end-point accuracy measures) were found for the actual and the remembered left and right targets among those with post-stroke USN compared to the two other groups (p < 0.05). A delayed onset of reorientation to the left and right shifting targets was also observed in USN+ participants vs. the other two groups (p < 0.05). Results on clinical near-space USN assessment and walking speed explained only a third of the variance in goal-directed walking performance. Post-stroke USN was found to affect goal-directed locomotion under different perceptuo-cognitive conditions, both to contralesional and ipsilesional targets, demonstrating the presence of lateralized and non-lateralized deficits. Beyond neglect severity and walking capacity, other factors related to attention, executive functioning and higher-order visual perceptual abilities (e.g. optic flow perception) may account for the goal-directed walking deficits observed in post-stroke USN+ individuals. Goal-directed locomotion could be explored in the design of future VR-based evaluation and training tools for USN to improve on currently used conventional methods.
Kozhevnikov, Maria; Dhond, Rupali P.
2012-01-01
Most research on three-dimensional (3D) visual-spatial processing has been conducted using traditional non-immersive 2D displays. Here we investigated how individuals generate and transform mental images within 3D immersive (3DI) virtual environments, in which the viewers perceive themselves as being surrounded by a 3D world. In Experiment 1, we compared participants' performance on the Shepard and Metzler (1971) mental rotation (MR) task across the following three types of visual presentation environments: traditional 2D non-immersive (2DNI), 3D non-immersive (3DNI, anaglyphic glasses), and 3DI (head-mounted display with position and head orientation tracking). In Experiment 2, we examined how the use of different backgrounds affected MR processes within the 3DI environment. In Experiment 3, we compared electroencephalogram data recorded while participants were mentally rotating visual-spatial images presented in 3DI vs. 2DNI environments. Overall, the findings of the three experiments suggest that visual-spatial processing is different in immersive and non-immersive environments, and that immersive environments may require different image encoding and transformation strategies than the two other non-immersive environments. Specifically, in a non-immersive environment, participants may utilize a scene-based frame of reference and allocentric encoding, whereas immersive environments may encourage the use of a viewer-centered frame of reference and egocentric encoding. These findings also suggest that MR performed in laboratory conditions using a traditional 2D computer screen may not reflect spatial processing as it would occur in the real world. PMID:22908003
Effect of viewing distance on 3D fatigue caused by viewing mobile 3D content
NASA Astrophysics Data System (ADS)
Mun, Sungchul; Lee, Dong-Su; Park, Min-Chul; Yano, Sumio
2013-05-01
With the advent of autostereoscopic display techniques and the increased demand for smartphones, there has been significant growth in mobile TV markets. The rapid growth in technical, economical, and social aspects has encouraged 3D TV manufacturers to apply 3D rendering technology to mobile devices so that people have more opportunities to come into contact with 3D content anytime and anywhere. Even though mobile 3D technology is driving the current market growth, one important consideration remains for consistent development and growth in the display market: human factors linked to mobile 3D viewing should be taken into account before developing mobile 3D technology. Many studies have investigated whether mobile 3D viewing causes undesirable biomedical effects such as motion sickness and visual fatigue, but few have examined the main factors adversely affecting human health. Viewing distance is considered one of the main factors in establishing optimized viewing environments from a viewer's point of view. Thus, in an effort to determine human-friendly viewing environments, this study investigates the effect of viewing distance on the human visual system during exposure to mobile 3D environments. By recording and analyzing brainwaves before and after watching mobile 3D content, we explore how viewing distance affects the viewing experience from physiological and psychological perspectives. The results obtained in this study are expected to provide viewing guidelines for viewers, help protect viewers against undesirable 3D effects, and support gradual progress towards human-friendly mobile 3D viewing.
3D models as a platform for urban analysis and studies on human perception of space
NASA Astrophysics Data System (ADS)
Fisher-Gewirtzman, D.
2012-10-01
The objective of this work is to develop integrated visual analysis and modelling for environmental and urban systems with respect to interior space layout and functionality. The work involves interdisciplinary research that focuses primarily on the architectural design discipline, yet incorporates experts from different disciplines such as geoinformatics, computer science and environment-behavior studies. It integrates an advanced Spatial Openness Index (SOI) model within a realistic geovisualized Geographical Information System (GIS) environment, with assessment based on subjective residents' evaluations. The advanced SOI model measures the volume of visible space at any required viewpoint, practically for every room or function, and enables accurate 3D simulation of the built environment with regard to built structure and surrounding vegetation. This paper demonstrates the work on a case study: a 3D model of the Neve-Shaanan neighbourhood in Haifa. Students who live in this neighbourhood participated in the research. Their apartments were modelled in detail and inserted into a general model representing the topography and the volumes of buildings. The visible space for each room in every apartment was documented and measured, and at the same time the students were asked to answer questions regarding their perception of space and the view from their residence. The results show a potential contribution to professional users such as researchers, designers and city planners, and the model can be easily used by professionals and by non-professionals such as city dwellers, contractors and developers. The work continues with additional case studies having different building typologies and a variety of functions, using virtual reality tools.
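The core idea of measuring "the volume of visible space at any required viewpoint" can be sketched with a simple ray-casting estimate. This is a hypothetical miniature in the spirit of an openness index, with a caller-supplied occupancy predicate; it is not the published SOI computation:

```python
import numpy as np

def openness_index(viewpoint, is_occupied, step=0.5, max_dist=50.0, n_rays=256):
    """Monte-Carlo sketch of a spatial openness measure: cast rays in
    random 3D directions from `viewpoint` and accumulate unobstructed
    path length until `is_occupied(point)` reports a blocking volume.
    Illustrative stand-in only, not the published SOI model."""
    rng = np.random.default_rng(seed=0)
    dirs = rng.normal(size=(n_rays, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # unit directions
    total = 0.0
    for d in dirs:
        dist = 0.0
        while dist < max_dist and not is_occupied(viewpoint + dist * d):
            dist += step
        total += min(dist, max_dist)
    return total / n_rays  # mean free distance: larger means a more open view

# A solid wall occupying x > 5 blocks roughly half of the rays early on.
walled = openness_index(np.zeros(3), lambda p: p[0] > 5.0)
open_field = openness_index(np.zeros(3), lambda p: False)
```

Swapping the mean free distance for an accumulated visible volume, and the random directions for a structured sampling of rooms and windows, moves this toy toward the kind of per-room measurement the abstract describes.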
A GUI visualization system for airborne lidar image data to reconstruct 3D city model
NASA Astrophysics Data System (ADS)
Kawata, Yoshiyuki; Koizumi, Kohei
2015-10-01
A visualization toolbox system with graphical user interfaces (GUIs) was developed for the analysis of LiDAR point cloud data, as a compound object-oriented widget application in IDL (Interactive Data Language). The main features of our system include file input and output, conversion of ASCII-formatted LiDAR point cloud data into LiDAR image data whose pixel values correspond to the altitudes measured by LiDAR, visualization of 2D/3D images at various processing steps, and automatic reconstruction of a 3D city model. The performance and advantages of our GUI visualization system for LiDAR data are demonstrated.
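The conversion step described above, from ASCII point records to an image whose pixel values are LiDAR altitudes, can be sketched as a simple gridding pass. This is a hypothetical stand-in (function name and max-return rule are assumptions, not the toolbox's actual implementation):

```python
import numpy as np

def points_to_height_image(points, cell=1.0):
    """Rasterize (x, y, z) LiDAR returns into a 2D image whose pixel
    value is the highest altitude falling in each grid cell (a
    hypothetical stand-in for the toolbox's ascii-to-image step)."""
    xy = np.floor(points[:, :2] / cell).astype(int)
    xy -= xy.min(axis=0)                  # shift indices to start at 0
    rows, cols = xy[:, 1], xy[:, 0]
    img = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    for r, c, z in zip(rows, cols, points[:, 2]):
        if np.isnan(img[r, c]) or z > img[r, c]:
            img[r, c] = z                 # keep the highest return per cell
    return img

pts = np.array([[0.2, 0.3, 5.0], [0.7, 0.4, 9.0], [1.5, 0.2, 2.0]])
img = points_to_height_image(pts)         # 1 x 2 grid: [[9.0, 2.0]]
```

Keeping the maximum return per cell gives a first-return-style surface model, which is the usual starting point for extracting building footprints in 3D city reconstruction.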
ERIC Educational Resources Information Center
Mitchell, Donald P.; Scigliano, John A.
2000-01-01
Describes the development of an online learning environment for a visually impaired professional. Topics include physical barriers, intellectual barriers, psychological barriers, and technological barriers; selecting appropriate hardware and software; and combining technologies that include personal computers, Web-based resources, network…
Prism adaptation by mental practice.
Michel, Carine; Gaveau, Jérémie; Pozzo, Thierry; Papaxanthis, Charalambos
2013-09-01
The prediction of our actions and their interaction with the external environment is critical for sensorimotor adaptation. For instance, during prism exposure, which laterally deviates our visual field, we progressively correct movement errors by combining sensory feedback with forward model sensory predictions. However, very often we project our actions to the external environment without physically interacting with it (e.g., mental actions). An intriguing question is whether adaptation will occur if we imagine, instead of executing, an arm movement while wearing prisms. Here, we investigated prism adaptation during mental actions. In the first experiment, participants (n = 54) performed arm pointing movements before and after exposure to the optical device. They were equally divided into six groups according to prism exposure: Prisms-Active, Prisms-Imagery, Prisms-Stationary, Prisms-Stationary-Attention, No Conflict-Prisms-Imagery, No Prisms-Imagery. Adaptation, measured by the difference in pointing errors between pre-test and post-test, occurred only in Prisms-Active and Prisms-Imagery conditions. The second experiment confirmed the results of the first experiment and further showed that sensorimotor adaptation was mainly due to proprioceptive realignment in both Prisms-Active (n = 10) and Prisms-Imagery (n = 10) groups. In both experiments adaptation was greater following actual than imagined pointing movements. The present results are the first demonstration of prism adaptation by mental practice under prism exposure, and they are discussed in terms of internal forward models and sensorimotor plasticity. Copyright © 2012 Elsevier Ltd. All rights reserved.
Motion sickness and proprioceptive aftereffects following virtual environment exposure
NASA Technical Reports Server (NTRS)
Stanney, K. M.; Kennedy, R. S.; Drexler, J. M.; Harm, D. L.
1999-01-01
To study the potential aftereffects of virtual environments (VE), tests of visually guided behavior and felt limb position (pointing with eyes open and closed), along with self-reports of motion sickness-like discomfort, were administered before and after a 30-min exposure of 34 subjects. When post-exposure discomfort was compared to the pre-exposure baseline, the participants reported more sickness afterward (p < 0.03). The change in felt limb position resulted in subjects pointing higher (p < 0.038) and slightly to the left, although the latter difference was not statistically significant (p = 0.08). When findings from a second study using a different VE system were compared, they essentially replicated the results of the first study, with higher sickness afterward (p < 0.001); post-exposure pointing errors were also upward (p < 0.001) and to the left (p < 0.001). While alternative explanations (e.g. learning, fatigue, boredom, habituation) of these outcomes cannot be ruled out, the consistency of the post-exposure effects on felt limb position in the two VEs implies that these recalibrations may linger once interaction with the VE has concluded, rendering users potentially physiologically maladapted for the real world when they return. This suggests there may be safety concerns following VE exposures until pre-exposure functioning has been regained. The results of this study emphasize the need for developing and using objective measures of post-VE-exposure aftereffects in order to systematically determine under what conditions these effects may occur.
Look, Snap, See: Visual Literacy through the Camera.
ERIC Educational Resources Information Center
Spoerner, Thomas M.
1981-01-01
Activities involving photographs stimulate visual perceptual awareness. Children understand visual stimuli before having verbal capacity to deal with the world. Vision becomes the primary means for learning, understanding, and adjusting to the environment. Photography can provide an effective avenue to visual literacy. (Author)
NASA Astrophysics Data System (ADS)
Bichisao, Marta; Stallone, Angela
2017-04-01
Making science visual plays a crucial role in the process of building knowledge. In this view, art can considerably facilitate the representation of the scientific content, by offering a different perspective on how a specific problem could be approached. Here we explore the possibility of presenting the earthquake process through visual dance. From a choreographer's point of view, the focus is always on the dynamic relationships between moving objects. The observed spatial patterns (coincidences, repetitions, double and rhythmic configurations) suggest how objects organize themselves in the environment and what are the principles underlying that organization. The identified set of rules is then implemented as a basis for the creation of a complex rhythmic and visual dance system. Recently, scientists have turned seismic waves into sound and animations, introducing the possibility of "feeling" the earthquakes. We try to implement these results into a choreographic model with the aim to convert earthquake sound to a visual dance system, which could return a transmedia representation of the earthquake process. In particular, we focus on a possible method to translate and transfer the metric language of seismic sound and animations into body language. The objective is to involve the audience into a multisensory exploration of the earthquake phenomenon, through the stimulation of the hearing, eyesight and perception of the movements (neuromotor system). In essence, the main goal of this work is to develop a method for a simultaneous visual and auditory representation of a seismic event by means of a structured choreographic model. This artistic representation could provide an original entryway into the physics of earthquakes.
Pasqualotto, Achille; Esenkaya, Tayfun
2016-01-01
Visual-to-auditory sensory substitution is used to convey visual information through audition, and it was initially created to compensate for blindness; it consists of software converting the visual images captured by a video camera into equivalent auditory images, or "soundscapes". Here, it was used by blindfolded sighted participants to learn the spatial position of simple shapes depicted in images arranged on the floor. Very few studies have used sensory substitution to investigate spatial representation, while it has been widely used to investigate object recognition. Additionally, with sensory substitution we could study the performance of participants actively exploring the environment through audition, rather than passively localizing sound sources. Blindfolded participants egocentrically learnt the position of six images by using sensory substitution, and then a judgment of relative direction (JRD) task was used to determine how this scene was represented. This task consists of imagining being in a given location, oriented in a given direction, and pointing towards the required image. Before performing the JRD task, participants explored a map that provided allocentric information about the scene. Although spatial exploration was egocentric, surprisingly we found that performance in the JRD task was better for allocentric perspectives. This suggests that the egocentric representation of the scene was updated. This result is in line with previous studies using visual and somatosensory scenes, thus supporting the notion that different sensory modalities produce equivalent spatial representation(s). Moreover, our results have practical implications for improving training methods with sensory substitution devices (SSDs). PMID:27148000
Graphical programming interface: A development environment for MRI methods.
Zwart, Nicholas R; Pipe, James G
2015-11-01
To introduce a multiplatform, Python language-based development environment called graphical programming interface for prototyping MRI techniques. The interface allows developers to interact with their scientific algorithm prototypes visually in an event-driven environment, making tasks such as parameterization, algorithm testing, data manipulation, and visualization an integrated part of the workflow. Algorithm developers extend the built-in functionality through simple code interfaces designed to facilitate rapid implementation. This article shows several examples of algorithms developed in graphical programming interface, including the non-Cartesian MR reconstruction algorithms for PROPELLER and spiral as well as spin simulation and trajectory visualization of a FLORET example. The graphical programming interface framework is shown to be a versatile prototyping environment for developing numeric algorithms used in the latest MR techniques. © 2014 Wiley Periodicals, Inc.
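The modular, event-driven pattern this abstract describes can be sketched generically. The classes and method names below are hypothetical illustrations of the node-pipeline idea, not the actual graphical programming interface API:

```python
# Illustrative sketch of a node-based processing pipeline, the pattern
# described in the abstract. Class and method names are hypothetical,
# NOT the actual graphical programming interface (GPI) API.

class Node:
    """One processing step with named parameters and a compute method."""
    def __init__(self, **params):
        self.params = params

    def compute(self, data):
        raise NotImplementedError

class Scale(Node):
    def compute(self, data):
        k = self.params.get("factor", 1.0)
        return [x * k for x in data]

class Offset(Node):
    def compute(self, data):
        b = self.params.get("bias", 0.0)
        return [x + b for x in data]

def run_pipeline(nodes, data):
    # An event-driven framework would re-run downstream nodes whenever a
    # parameter widget changes; here the chain is simply evaluated once.
    for node in nodes:
        data = node.compute(data)
    return data

result = run_pipeline([Scale(factor=2.0), Offset(bias=1.0)], [1.0, 2.0, 3.0])
# result == [3.0, 5.0, 7.0]
```

In a real framework each node would also declare input/output ports and parameter widgets, so that editing a parameter triggers recomputation of only the affected subgraph.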
Audio-visual assistance in co-creating transition knowledge
NASA Astrophysics Data System (ADS)
Hezel, Bernd; Broschkowski, Ephraim; Kropp, Jürgen P.
2013-04-01
Earth system and climate impact research results point to the tremendous ecologic, economic and societal implications of climate change. Specifically, people will have to adopt lifestyles that are very different from those they currently strive for in order to mitigate severe changes of our known environment. It will most likely not suffice to transfer the scientific findings into international agreements and appropriate legislation. A transition is rather reliant on pioneers that define new role models, on change agents that mainstream the concept of sufficiency and on narratives that make different futures appealing. In order for the research community to be able to provide sustainable transition pathways that are viable, an integration of the physical constraints and the societal dynamics is needed. Hence the necessary transition knowledge is to be co-created by social and natural science and society. To this end, the Climate Media Factory - in itself a massively transdisciplinary venture - strives to provide an audio-visual connection between the different scientific cultures and a bi-directional link to stakeholders and society. Since the methodologies, particular languages and knowledge levels of those involved are not the same, we develop new entertaining formats on the basis of a "complexity on demand" approach. They present scientific information in an integrated and entertaining way with different levels of detail that provide entry points to users with different requirements. Two examples illustrate the advantages and restrictions of the approach.
Application of Andrew's Plots to Visualization of Multidimensional Data
ERIC Educational Resources Information Center
Grinshpun, Vadim
2016-01-01
Importance: The article raises a point of visual representation of big data, recently in demand for many scientific and real-life applications, and analyzes particulars for visualization of multi-dimensional data, giving examples of the visual analytics-related problems. Objectives: The purpose of this paper is to study application…
Adapting the iSNOBAL model for improved visualization in a GIS environment
NASA Astrophysics Data System (ADS)
Johansen, W. J.; Delparte, D.
2014-12-01
Snowmelt is a primary source of crucial water resources in much of the western United States. Researchers are developing models that estimate snowmelt to aid in water resource management. One such model is the image snowcover energy and mass balance (iSNOBAL) model. It uses input climate grids to simulate the development and melting of snowpack in mountainous regions. This study looks at applying this model to the Reynolds Creek Experimental Watershed in southwestern Idaho, utilizing novel approaches incorporating geographic information systems (GIS). To improve visualization of the iSNOBAL model, we have adapted it to run in a GIS environment. This type of environment is suited to both the input grid creation and the visualization of results. The data used for input grid creation can be stored locally or on a web server. Kriging interpolation embedded within Python scripts is used to create air temperature, soil temperature, humidity, and precipitation grids, while built-in GIS and existing tools are used to create solar radiation and wind grids. Additional Python scripting is then used to perform model calculations. The final product is a user-friendly and accessible version of the iSNOBAL model, including the ability to easily visualize and interact with model results, all within a web- or desktop-based GIS environment. This environment allows for interactive manipulation of model parameters and visualization of the resulting input grids for the model calculations. Future work is moving towards adapting the model further for use in a 3D gaming engine for improved visualization and interaction.
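The kriging step described above, interpolating scattered station observations onto a grid point, can be sketched in a few lines. This is a minimal ordinary-kriging illustration; the exponential variogram model and its parameters are assumptions for the example, not values from the iSNOBAL study:

```python
# Minimal ordinary-kriging sketch of the input-grid interpolation step.
# The exponential variogram and its sill/range are illustrative assumptions.
import numpy as np

def variogram(h, sill=1.0, rng=5000.0):
    """Exponential variogram model (h in metres)."""
    return sill * (1.0 - np.exp(-h / rng))

def krige(stations, values, target):
    """Ordinary kriging of scattered station values to one target point.

    stations : (n, 2) array of x/y coordinates
    values   : (n,) array of observations (e.g. air temperature)
    target   : (2,) coordinate to estimate
    """
    n = len(stations)
    d = np.linalg.norm(stations[:, None, :] - stations[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))        # kriging matrix with unbiasedness row
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(stations - target, axis=-1))
    w = np.linalg.solve(A, b)          # n weights plus a Lagrange multiplier
    return float(w[:n] @ values)

stations = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 1000.0], [1000.0, 1000.0]])
temps = np.array([-2.0, -1.0, -3.0, -2.5])
estimate = krige(stations, temps, np.array([500.0, 500.0]))
# By symmetry the centre estimate is the station mean, -2.125; at a station
# location the estimator is exact (reproduces the observed value).
```

In the GIS workflow this function would be evaluated at every cell of the model grid, once per climate variable.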
Radiological tele-immersion for next generation networks.
Ai, Z; Dech, F; Rasmussen, M; Silverstein, J C
2000-01-01
Since the acquisition of high-resolution three-dimensional patient images has become widespread, medical volumetric datasets (CT or MR) larger than 100 MB and encompassing more than 250 slices are common. It is important to make this patient-specific data quickly available and usable to many specialists at different geographical sites. Web-based systems have been developed to provide volume or surface rendering of medical data over networks with low fidelity, but these cannot adequately handle stereoscopic visualization or huge datasets. State-of-the-art virtual reality techniques and high-speed networks have made it possible to create an environment in which geographically distributed clinicians can immersively share these massive datasets in real time. An object-oriented method for instantaneously importing medical volumetric data into Tele-Immersive environments has been developed at the Virtual Reality in Medicine Laboratory (VRMedLab) at the University of Illinois at Chicago (UIC). This networked-VR setup is based on LIMBO, an application framework or template that provides the basic capabilities of Tele-Immersion. We have developed a modular general-purpose Tele-Immersion program that automatically combines 3D medical data with the methods for handling the data. For this purpose a DICOM loader for IRIS Performer has been developed. The loader was designed for SGI machines as a shared object, which is executed at LIMBO's runtime. The loader loads not only the selected DICOM dataset but also methods for rendering, handling, and interacting with the data, bringing networked, real-time, stereoscopic interaction with radiological data to reality. Collaborative, interactive methods currently implemented in the loader include cutting planes and windowing. The Tele-Immersive environment has been tested on the UIC campus over an ATM network.
We tested the environment with three nodes: an ImmersaDesk at the VRMedLab, a CAVE at the Electronic Visualization Laboratory (EVL) on east campus, and a CT scanner in UIC Hospital. CT data were pulled directly from the scanner to the Tele-Immersion server in our laboratory, and the data were then synchronously distributed by our Onyx2 Rack server to all the VR setups. Rather than confining medical volume visualization to a single VR device, by combining teleconferencing, tele-presence, and virtual reality, the Tele-Immersive environment will enable geographically distributed clinicians to intuitively interact with the same medical volumetric models, point, gesture, converse, and see each other. This environment will bring together clinicians at different geographic locations to participate in Tele-Immersive consultation and collaboration.
ERIC Educational Resources Information Center
Gao, Tao; Gao, Zaifeng; Li, Jie; Sun, Zhongqiang; Shen, Mowei
2011-01-01
Mainstream theories of visual perception assume that visual working memory (VWM) is critical for integrating online perceptual information and constructing coherent visual experiences in changing environments. Given the dynamic interaction between online perception and VWM, we propose that how visual information is processed during visual…
Development of a Computerized Visual Search Test
ERIC Educational Resources Information Center
Reid, Denise; Babani, Harsha; Jon, Eugenia
2009-01-01
Visual attention and visual search are the features of visual perception, essential for attending and scanning one's environment while engaging in daily occupations. This study describes the development of a novel web-based test of visual search. The development information including the format of the test will be described. The test was designed…
Listeners' expectation of room acoustical parameters based on visual cues
NASA Astrophysics Data System (ADS)
Valente, Daniel L.
Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audio-visual study, in which participants are instructed to make spatial congruency and quantity judgments in dynamic cross-modal environments. The results of these psychophysical tests suggest the importance of consilient audio-visual presentation to the legibility of an auditory scene. Several studies have looked into audio-visual interaction in room perception in recent years, but these studies rely on static images, speech signals, or photographs alone to represent the visual scene. Building on these studies, the aim is to propose a testing method that uses monochromatic compositing (blue-screen technique) to position a studio recording of a musical performance in a number of virtual acoustical environments and ask subjects to assess these environments. In the first experiment of the study, video footage was taken from five rooms varying in physical size from a small studio to a small performance hall. Participants were asked to perceptually align two distinct acoustical parameters---early-to-late reverberant energy ratio and reverberation time---of two solo musical performances in five contrasting visual environments according to their expectations of how the room should sound given its visual appearance. In the second experiment in the study, video footage shot from four different listening positions within a general-purpose space was coupled with sounds derived from measured binaural impulse responses (IRs). The relationship between the presented image, sound, and virtual receiver position was examined. Varying the visual cues was found to change how the acoustic environment was perceived. This included the visual attributes of the space in which the performance was located as well as the visual attributes of the performer.
The addressed visual makeup of the performer included: (1) an actual video of the performance, (2) a surrogate image of the performance, for example a loudspeaker's image reproducing the performance, (3) no visual image of the performance (empty room), or (4) a multi-source visual stimulus (actual video of the performance coupled with two images of loudspeakers positioned to the left and right of the performer). For this experiment, perceived auditory events of sound were measured in terms of two subjective spatial metrics: Listener Envelopment (LEV) and Apparent Source Width (ASW). These metrics were hypothesized to be dependent on the visual imagery of the presented performance. Data were also collected by having participants match direct and reverberant sound levels for the presented audio-visual scenes. In the final experiment, participants judged spatial expectations of an ensemble of musicians presented in the five physical spaces from Experiment 1. Supporting data were accumulated in two stages. First, participants were given an audio-visual matching test, in which they were instructed to align the auditory width of a performing ensemble to a varying set of audio and visual cues. In the second stage, a conjoint analysis design paradigm was explored to extrapolate the relative magnitude of the explored audio-visual factors in affecting three assessed response criteria: Congruency (the perceived match-up of the auditory and visual cues in the assessed performance), ASW and LEV. Results show that both auditory and visual factors affect the collected responses, and that the two sensory modalities combine in distinct interactions. This study reveals participant resiliency in the presence of forced auditory-visual mismatch: Participants are able to adjust the acoustic component of the cross-modal environment in a statistically similar way despite randomized starting values for the monitored parameters.
Subjective results of the experiments are presented along with objective measurements for verification.
Haptic interfaces: Hardware, software and human performance
NASA Technical Reports Server (NTRS)
Srinivasan, Mandayam A.
1995-01-01
Virtual environments are computer-generated synthetic environments with which a human user can interact to perform a wide variety of perceptual and motor tasks. At present, most of the virtual environment systems engage only the visual and auditory senses, and not the haptic sensorimotor system that conveys the sense of touch and feel of objects in the environment. Computer keyboards, mice, and trackballs constitute relatively simple haptic interfaces. Gloves and exoskeletons that track hand postures have more interaction capabilities and are available in the market. Although desktop and wearable force-reflecting devices have been built and implemented in research laboratories, the current capabilities of such devices are quite limited. To realize the full promise of virtual environments and teleoperation of remote systems, further developments of haptic interfaces are critical. In this paper, the status and research needs in human haptics, technology development and interactions between the two are described. In particular, the excellent performance characteristics of Phantom, a haptic interface recently developed at MIT, are highlighted. Realistic sensations of single point of contact interactions with objects of variable geometry (e.g., smooth, textured, polyhedral) and material properties (e.g., friction, impedance) in the context of a variety of tasks (e.g., needle biopsy, switch panels) achieved through this device are described and the associated issues in haptic rendering are discussed.
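The force-reflecting interaction described above is commonly rendered with a penalty-based spring-damper contact model. The following one-dimensional sketch illustrates that general technique; the stiffness and damping values are illustrative assumptions, not parameters of the Phantom device:

```python
# Penalty-based haptic contact rendering, sketched in 1-D.
# Stiffness (k) and damping (b) values are illustrative assumptions.

def contact_force(pos, vel, surface=0.0, k=800.0, b=2.0):
    """Spring-damper contact force for a single point of contact.

    pos     : probe position along the surface normal (m); below `surface`
              means the probe has penetrated the virtual object
    vel     : probe velocity along the normal (m/s), positive = outward
    returns : force (N) pushing the probe back out of the surface
    """
    depth = surface - pos              # penetration depth, > 0 when inside
    if depth <= 0:
        return 0.0                     # no contact, no force
    f = k * depth - b * vel            # spring pushes out, damper resists motion
    return max(f, 0.0)                 # never pull the user into the surface

# 1 cm of penetration at rest yields k * 0.01 = 8 N of restoring force.
force = contact_force(-0.01, 0.0)
```

In a real haptic loop this force is recomputed at roughly 1 kHz, since lower update rates make stiff virtual surfaces feel soft or unstable.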
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, Kristin A.; Scholtz, Jean; Whiting, Mark A.
The VAST Challenge has been a popular venue for academic and industry participants for over ten years. Many participants comment that the majority of their time in preparing VAST Challenge entries is discovering elements in their software environments that need to be redesigned in order to solve the given task. Fortunately, there is no need to wait until the VAST Challenge is announced to test out software systems. The Visual Analytics Benchmark Repository contains all past VAST Challenge tasks, data, solutions and submissions. This paper details the various types of evaluations that may be conducted using the Repository information. In this paper we describe how developers can do informal evaluations of various aspects of their visual analytics environments using VAST Challenge information. Aspects that can be evaluated include the appropriateness of the software for various tasks, the various data types and formats that can be accommodated, the effectiveness and efficiency of the process supported by the software, and the intuitiveness of the visualizations and interactions. Researchers can compare their visualizations and interactions to those submitted to determine novelty. In addition, the paper provides pointers to various guidelines that software teams can use to evaluate the usability of their software. While these evaluations are not a replacement for formal evaluation methods, this information can be extremely useful during the development of visual analytics environments.
Corrosion Evaluation of Tank 40 Leak Detection Box
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mickalonis, J.I.
1999-07-29
Leak detection from the transfer lines in the tank farm has been a concern for many years because of the need to minimize exposure of personnel and contamination of the environment. The leak detection box (LDB) is one line of defense, which must be maintained to meet this objective. The evaluation of a failed LDB was one item from an action plan aimed at minimizing the degradation of LDBs. The Tank 40 LDB, which failed in service, was dug up and shipped to SRTC for evaluation. During a video inspection while in service, this LDB was found to have black tubercles on the interior, which suggested possible microbial involvement. The failure point, however, was believed to have occurred in the drain line from the transfer line jacket. Visual, metallurgical, and biological analyses were performed on the LDB. The analysis results showed that there was not any adverse microbiological growth or significant localized corrosion. The corrosion of the LDB was caused by exposure to aqueous environments and was typical of carbon steel pipes in soil environments.
A biomimetic vision-based hovercraft accounts for bees' complex behaviour in various corridors.
Roubieu, Frédéric L; Serres, Julien R; Colonnier, Fabien; Franceschini, Nicolas; Viollet, Stéphane; Ruffier, Franck
2014-09-01
Here we present the first systematic comparison between the visual guidance behaviour of a biomimetic robot and that of honeybees flying in similar environments. We built a miniature hovercraft which can travel safely along corridors with various configurations. For the first time, we implemented on a real physical robot the 'lateral optic flow regulation autopilot', which we had previously studied in computer simulations. This autopilot, inspired by the results of experiments on various species of hymenoptera, consists of two intertwined feedback loops, the speed and lateral control loops, each of which has its own optic flow (OF) set-point. A heading-lock system makes the robot move straight ahead as fast as 69 cm s⁻¹ with a clearance from one wall as small as 31 cm, giving an unusually high translational OF value (125° s⁻¹). Our biomimetic robot was found to navigate safely along straight, tapered and bent corridors, and to react appropriately to perturbations such as the lack of texture on one wall, the presence of a tapering or non-stationary section of the corridor and even a sloping terrain equivalent to a wind disturbance. The front end of the visual system consists of only two local motion sensors (LMS), one on each side. This minimalistic visual system measuring the lateral OF suffices to control both the robot's forward speed and its clearance from the walls without ever measuring any speeds or distances. We added two additional LMSs oriented at ±45° to improve the robot's performance in steeply tapered corridors. The simple control system accounts for worker bees' ability to navigate safely in six challenging environments: straight corridors, single walls, tapered corridors, straight corridors with part of one wall moving or missing, as well as in the presence of wind.
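The dual-loop optic-flow regulation scheme described above can be sketched with two proportional controllers: a speed loop holding the sum of the two lateral OFs at one set-point, and a positioning loop holding the larger (unilateral) OF at another. Corridor width, set-points, and gains below are illustrative assumptions, not the robot's actual values:

```python
# Simplified sketch of dual optic-flow (OF) regulation in a straight corridor.
# All numeric values are illustrative assumptions.

D = 1.0              # corridor width (m)
SUM_SP = 6.0         # set-point for the sum of left + right OF (rad/s)
MAX_SP = 4.0         # set-point for the larger, unilateral OF (rad/s)
KV, KY = 0.02, 0.01  # proportional gains of the two feedback loops

v, y = 1.0, 0.25     # forward speed (m/s) and distance from the left wall (m)
for _ in range(5000):
    of_left, of_right = v / y, v / (D - y)     # lateral OF ~ speed / wall distance
    v += KV * (SUM_SP - (of_left + of_right))  # speed loop: regulate the OF sum
    if of_left >= of_right:                    # positioning loop: steer away
        y += KY * (of_left - MAX_SP)           # from the nearer (higher-OF) wall
    else:
        y -= KY * (of_right - MAX_SP)
    y = min(max(y, 0.05), D - 0.05)            # keep the craft inside the corridor

# Equilibrium: v / y = MAX_SP and v / y + v / (D - y) = SUM_SP
# give y = D / 3 and v = 4 D / 3 for these set-points.
```

Note that, as in the robot, neither speed nor distance is ever measured directly; both emerge from regulating the two optic-flow signals.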
VisSearch: A Collaborative Web Searching Environment
ERIC Educational Resources Information Center
Lee, Young-Jin
2005-01-01
VisSearch is a collaborative Web searching environment intended for sharing Web search results among people with similar interests, such as college students taking the same course. It facilitates students' Web searches by visualizing various Web searching processes. It also collects the visualized Web search results and applies an association rule…
Visual resource management of the sea
Louis V. Mills Jr.
1979-01-01
The scenic quality of the marine environment has become an important concern for the design and planning professions. Increased public use of the underwater environment has resulted from technological advancements in SCUBA, recreational submarines and through development of under-water restaurants and parks. This paper presents an approach to an underwater visual...
Using Visualization to Motivate Student Participation in Collaborative Online Learning Environments
ERIC Educational Resources Information Center
Jin, Sung-Hee
2017-01-01
Online participation in collaborative online learning environments is instrumental in motivating students to learn and promoting their learning satisfaction, but there has been little research on the technical supports for motivating students' online participation. The purpose of this study was to develop a visualization tool to motivate learners…
The Contribution of Visualization to Learning Computer Architecture
ERIC Educational Resources Information Center
Yehezkel, Cecile; Ben-Ari, Mordechai; Dreyfus, Tommy
2007-01-01
This paper describes a visualization environment and associated learning activities designed to improve learning of computer architecture. The environment, EasyCPU, displays a model of the components of a computer and the dynamic processes involved in program execution. We present the results of a research program that analysed the contribution of…
Visual Literacy in Instructional Design Programs
ERIC Educational Resources Information Center
Ervine, Michelle D.
2016-01-01
In this technologically advanced environment, users have become highly visual, with television, videos, web sites and images dominating the learning environment. These new forms of searching and learning are changing the perspective of what it means to be literate. Literacy can no longer solely rely on text-based materials, but should also…
Audio Visual Technology and the Teaching of Foreign Languages.
ERIC Educational Resources Information Center
Halbig, Michael C.
Skills in comprehending the spoken language source are becoming increasingly important due to the audio-visual orientation of our culture. It would seem natural, therefore, to adjust the learning goals and environment accordingly. The video-cassette machine is an ideal means for creating this learning environment and developing the listening…
Borgersen, Nanna Jo; Henriksen, Mikael Johannes Vuokko; Konge, Lars; Sørensen, Torben Lykke; Thomsen, Ann Sofia Skou; Subhi, Yousif
2016-01-01
Direct ophthalmoscopy is well-suited for video-based instruction, particularly if the videos enable the student to see what the examiner sees when performing direct ophthalmoscopy. We evaluated the pedagogical effectiveness of instructional YouTube videos on direct ophthalmoscopy by evaluating their content and approach to visualization. In order to synthesize main themes and points for direct ophthalmoscopy, we formed a broad panel consisting of a medical student and junior and senior physicians, and took into consideration book chapters targeting medical students and physicians in general. We then systematically searched YouTube. Two authors reviewed the videos to assess eligibility and extract data on video statistics, content, and approach to visualization. Correlations between video statistics and contents were investigated using two-tailed Spearman's correlation. We screened 7,640 videos, of which 27 were found eligible for this study. Overall, a median of 12 of the 18 key points (interquartile range: 8-14) was covered; no video covered all 18 points assessed. The greatest shortcomings were in visualizing how to approach the patient and how to examine the fundus. Time spent on fundus examination correlated with the number of views per week (Spearman's ρ=0.53; P=0.029). Videos may help overcome the pedagogical issues in teaching direct ophthalmoscopy; however, the few available videos on YouTube fail to address this particular issue adequately. There is a need for high-quality videos that include relevant points, provide realistic visualization of the examiner's view, and give particular emphasis to fundus examination.
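The rank correlation reported above (Spearman's ρ between viewing statistics and content coverage) can be sketched with the classic rank-difference formula. This simple version assumes no tied ranks; the study's statistics software would also handle ties:

```python
# Minimal Spearman's rho sketch using the rank-difference formula.
# Assumes no tied ranks (real statistical packages handle ties too).

def spearman_rho(x, y):
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    # rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

rho = spearman_rho([1, 2, 3, 4], [10, 20, 30, 40])  # perfectly monotone -> 1.0
```

A two-tailed p-value, as reported in the abstract, would additionally test the computed ρ against the null hypothesis of no monotone association.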
The climate visualizer: Sense-making through scientific visualization
NASA Astrophysics Data System (ADS)
Gordin, Douglas N.; Polman, Joseph L.; Pea, Roy D.
1994-12-01
This paper describes the design of a learning environment, called the Climate Visualizer, intended to facilitate scientific sense-making in high school classrooms by providing students the ability to craft, inspect, and annotate scientific visualizations. The theoretical background for our design presents a view of learning as acquiring and critiquing cultural practices and stresses the need for students to appropriate the social and material aspects of practice when learning an area. This is followed by a description of the design of the Climate Visualizer, including detailed accounts of its provision of spatial and temporal context and the quantitative and visual representations it employs. A broader context is then explored by describing its integration into the high school science classroom. This discussion explores how visualizations can promote the creation of scientific theories, especially in conjunction with the Collaboratory Notebook, an embedded environment for creating and critiquing scientific theories and visualizations. Finally, we discuss the design trade-offs we have made in light of our theoretical orientation, and our hopes for further progress.
The development of organized visual search
Woods, Adam J.; Goksun, Tilbe; Chatterjee, Anjan; Zelonis, Sarah; Mehta, Anika; Smith, Sabrina E.
2013-01-01
Visual search plays an important role in guiding behavior. Children have more difficulty performing conjunction search tasks than adults. The present research evaluates whether developmental differences in children's ability to organize serial visual search (i.e., search organization skills) contribute to performance limitations in a typical conjunction search task. We evaluated 134 children between the ages of 2 and 17 on separate tasks measuring search for targets defined by a conjunction of features or by distinct features. Our results demonstrated that children organize their visual search better as they get older. As children's skills at organizing visual search improve, they become more accurate at locating targets defined by a conjunction of features amongst distractors, but not targets defined by distinct features. Developmental limitations in children's abilities to organize their visual search of the environment are an important component of poor conjunction search in young children. In addition, our findings provide preliminary evidence that, like other visuospatial tasks, exposure to reading may influence children's spatial orientation to the visual environment when performing a visual search. PMID:23584560
Visual analysis of fluid dynamics at NASA's numerical aerodynamic simulation facility
NASA Technical Reports Server (NTRS)
Watson, Velvin R.
1991-01-01
A study describing and illustrating the visualization tools used in computational fluid dynamics (CFD), and indicating how these tools are likely to change given a projected evolution of the human-computer interface, is presented. The following are outlined using a graphically based format: the evolution of human-computer environments for CFD research; comparison of current environments with each other and with the ideal; predictions for future CFD environments; and what can be done to accelerate the improvements. The following comments are given: when acquiring visualization tools, potentially rapid changes must be considered; the environmental changes over the next ten years due to the human-computer interface are hard to foresee; data-flow packages such as AVS, apE, Explorer and Data Explorer are easy to learn and use for small problems, and excellent for prototyping, but not as efficient for large problems; the approximation techniques used in visualization software must be appropriate for the data; it has become more cost-effective to move jobs that fit onto workstations and run only memory-intensive jobs on the supercomputer; use of three-dimensional skills will be maximized when the three-dimensional environment is built in from the start.
Remembering from any angle: The flexibility of visual perspective during retrieval
Rice, Heather J.; Rubin, David C.
2010-01-01
When recalling autobiographical memories, individuals often experience visual images associated with the event. These images can be constructed from two different perspectives: first person, in which the event is visualized from the viewpoint experienced at encoding, or third person, in which the event is visualized from an external vantage point. Using a novel technique to measure visual perspective, we examined where the external vantage point is situated in third-person images. Individuals in two studies were asked to recall either 10 or 15 events from their lives and describe the perspectives they experienced. Wide variation in spatial locations was observed within third-person perspectives, with the location of these perspectives depending on the event being recalled. Results suggest remembering from an external viewpoint may be more common than previous studies have demonstrated. PMID:21109466
Visual Data Analysis for Satellites
NASA Technical Reports Server (NTRS)
Lau, Yee; Bhate, Sachin; Fitzpatrick, Patrick
2008-01-01
The Visual Data Analysis Package is a collection of programs and scripts that facilitate visual analysis of data available from NASA and NOAA satellites, as well as dropsonde, buoy, and conventional in-situ observations. The package features utilities for data extraction, data quality control, statistical analysis, and data visualization. The Hierarchical Data Format (HDF) satellite data extraction routines from NASA's Jet Propulsion Laboratory were customized for specific spatial coverage and file input/output. Statistical analysis includes the calculation of the relative error, the absolute error, and the root mean square error. Other capabilities include curve fitting through the data points to fill in missing data points between satellite passes or where clouds obscure satellite data. For data visualization, the software provides customizable Generic Mapping Tool (GMT) scripts to generate difference maps, scatter plots, line plots, vector plots, histograms, time series, and color fill images.
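The error statistics listed above (relative error, absolute error, root mean square error) are straightforward to compute when comparing satellite retrievals against in-situ observations. The relative-error convention below, normalizing each difference by the observation, is one common choice assumed for the sketch:

```python
# Sketch of the statistical comparison step: mean absolute error, RMSE,
# and mean relative error between satellite retrievals and observations.
# Normalizing relative error by the observation is an assumed convention.
import math

def error_stats(satellite, observed):
    diffs = [s - o for s, o in zip(satellite, observed)]
    mae = sum(abs(d) for d in diffs) / len(diffs)                 # absolute error
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))      # RMS error
    rel = sum(abs(d) / abs(o)                                     # relative error
              for d, o in zip(diffs, observed)) / len(diffs)
    return mae, rmse, rel

mae, rmse, rel = error_stats([2.0, 4.0, 6.0], [1.0, 4.0, 8.0])
# diffs = [1, 0, -2]: mae = 1.0, rmse = sqrt(5/3), rel = (1 + 0 + 0.25) / 3
```

Curve fitting across gaps between satellite passes would then operate on the quality-controlled series before these statistics are computed.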
Integrated evaluation of visually induced motion sickness in terms of autonomic nervous regulation.
Kiryu, Tohru; Tada, Gen; Toyama, Hiroshi; Iijima, Atsuhiko
2008-01-01
To evaluate visually-induced motion sickness, we integrated subjective and objective responses in terms of autonomic nervous regulation. Twenty-seven subjects viewed a 2-min-long first-person-view video section five times (10 min in total) continuously. The measured biosignals (RR interval, respiration, and blood pressure) were used to estimate indices related to autonomic nervous activity (ANA). We then determined trigger points and sensation sections based on the time-varying behavior of the ANA-related indices. We found a suitable combination of biosignals for presenting the symptoms of visually-induced motion sickness. Based on this combination, integrating trigger points and subjective scores allowed us to represent the time distribution of subjective responses during visual exposure and helped us understand what types of camera motion cause visually-induced motion sickness.

ERIC Educational Resources Information Center
Desmurget, Michel; Turner, Robert S.; Prablanc, Claude; Russo, Gary S.; Alexander, Garret E.; Grafton, Scott T.
2005-01-01
Six results are reported. (a) Reaching accuracy increases when visual capture of the target is allowed (e.g., target on vs. target off at saccade onset). (b) Whatever the visual condition, trajectories diverge only after peak acceleration, suggesting that accuracy is improved through feedback mechanisms. (c) Feedback corrections are smoothly…
Modeling and Visualization Process of the Curve of Pen Point by GeoGebra
ERIC Educational Resources Information Center
Aktümen, Muharem; Horzum, Tugba; Ceylan, Tuba
2013-01-01
This study describes the mathematical construction of a real-life model by means of parametric equations, as well as the two- and three-dimensional visualization of the model using the software GeoGebra. The model was initially considered as "determining the parametric equation of the curve formed on a plane by the point of a pen, positioned…
An automatic calibration procedure for remote eye-gaze tracking systems.
Model, Dmitri; Guestrin, Elias D; Eizenman, Moshe
2009-01-01
Remote gaze estimation systems use calibration procedures to estimate subject-specific parameters that are needed for the calculation of the point-of-gaze. In these procedures, subjects are required to fixate on a specific point or points at specific time instances. Advanced remote gaze estimation systems can estimate the optical axis of the eye without any personal calibration procedure, but use a single calibration point to estimate the angle between the optical axis and the visual axis (line-of-sight). This paper presents a novel automatic calibration procedure that does not require active user participation. To estimate the angles between the optical and visual axes of each eye, this procedure minimizes the distance between the intersections of the visual axes of the left and right eyes with the surface of a display while subjects look naturally at the display (e.g., watching a video clip). Simulation results demonstrate that the performance of the algorithm improves as the range of viewing angles increases. For a subject sitting 75 cm in front of an 80 cm x 60 cm display (40" TV) the standard deviation of the error in the estimation of the angles between the optical and visual axes is 0.5 degrees.
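The central idea, choosing the per-eye angular offsets that minimize the distance between the two eyes' gaze intersections on the display, can be sketched in one dimension. The geometry, the coarse grid search, and the synthetic optical-axis angles below are illustrative simplifications, not the paper's algorithm:

```python
import math

D = 0.75                      # viewing distance to the display (m)
EYE_L, EYE_R = -0.03, 0.03    # horizontal eye positions (m)
TRUE_A_L, TRUE_A_R = math.radians(-5.0), math.radians(5.0)  # hypothetical per-eye offsets

# Synthetic "optical axis" angles while the subject looks at points across the screen.
gaze_x = [-0.3, -0.15, 0.0, 0.15, 0.3]        # true points of gaze on the display
theta_l = [math.atan2(x - EYE_L, D) - TRUE_A_L for x in gaze_x]
theta_r = [math.atan2(x - EYE_R, D) - TRUE_A_R for x in gaze_x]

def screen_x(eye, theta, alpha):
    """Intersection of the visual axis (optical axis + offset alpha) with the display."""
    return eye + D * math.tan(theta + alpha)

def cost(a_l, a_r):
    """Mean squared distance between left- and right-eye intersections."""
    return sum((screen_x(EYE_L, tl, a_l) - screen_x(EYE_R, tr, a_r)) ** 2
               for tl, tr in zip(theta_l, theta_r)) / len(gaze_x)

# Coarse grid search over candidate offsets (a real system would use a proper optimizer).
grid = [math.radians(a / 2.0) for a in range(-16, 17)]   # -8..8 deg in 0.5 deg steps
best = min(((cost(al, ar), al, ar) for al in grid for ar in grid), key=lambda t: t[0])
```

The wider the spread of `gaze_x`, the more the nonlinearity of the tangent breaks the degeneracy between the two offsets, consistent with the paper's observation that accuracy improves with the range of viewing angles.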
Evaluation of stereoscopic display with visual function and interview
NASA Astrophysics Data System (ADS)
Okuyama, Fumio
1999-05-01
The influence of a binocular stereoscopic (3D) television display on the human eye was compared with that of a 2D display, using visual function testing and interviews. A 40-inch double-lenticular display was used for the 2D/3D comparison experiments. Twelve young adults observed the display for 30 minutes at a distance of 1.0 m, viewing comparable 2D and 3D material. The main visual functions measured were visual acuity, refraction, phoria, near vision point, and accommodation; the interview consisted of 17 questions. Testing was performed just before watching, just after watching, and forty-five minutes after watching. Changes in visual function were characterized by prolongation of the near vision point, decrease in accommodation, and increase in phoria. Interview results for 3D viewing showed much more visual fatigue than for 2D. The conclusions are: 1) changes in visual function are larger and visual fatigue is more intense when viewing 3D images; 2) the evaluation method combining visual function testing and interviews proved very satisfactory for analyzing the influence of stereoscopic displays on the human eye.
Behavior Selection of Mobile Robot Based on Integration of Multimodal Information
NASA Astrophysics Data System (ADS)
Chen, Bin; Kaneko, Masahide
Recently, biologically inspired robots have been developed that direct visual attention to salient stimuli in the audiovisual environment. To realize this behavior, a common approach is to calculate saliency maps representing how strongly external information attracts the robot's visual attention, taking into account both audiovisual information and the robot's motion status. In this paper, we present a visual attention model that considers three modalities, namely audio information, visual information, and the robot's motor status, whereas previous work has not considered all three together. First, we introduce a 2-D density map whose values denote how much attention the robot pays to each spatial location, and we model this attention density with a Bayesian network that incorporates the robot's motion status. Second, the information from the audio and visual modalities is integrated with the attention density map in integrate-and-fire neurons; the robot directs its attention to the locations where these neurons fire. Finally, the model is applied to make the robot select visual information from the environment and react to the selected content. Experimental results show that robots can acquire the visual information relevant to their behaviors by using an attention model that considers motion status, selecting behaviors to adapt to a dynamic environment and switching tasks according to the results of visual attention.
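The final integration stage can be illustrated with a toy one-dimensional sketch. The map size, input values, weights, and threshold below are hypothetical, and the Bayesian-network attention density is replaced by a fixed vector:

```python
THRESHOLD = 1.0

def step(membrane, visual, audio, attention):
    """One integration step: each neuron accumulates the attention-weighted
    sum of its audio and visual input, and fires on crossing THRESHOLD."""
    fired = []
    for i, (m, v, a, w) in enumerate(zip(membrane, visual, audio, attention)):
        m += w * (v + a)
        if m >= THRESHOLD:
            fired.append(i)
            m = 0.0          # reset the membrane potential after firing
        membrane[i] = m
    return fired

# Toy 1-D map of 5 spatial locations: a salient audiovisual event at index 2.
membrane  = [0.0] * 5
visual    = [0.1, 0.1, 0.6, 0.1, 0.1]
audio     = [0.0, 0.0, 0.5, 0.2, 0.0]
attention = [0.5, 0.5, 1.0, 0.5, 0.5]   # density from the (hypothetical) motion prior

for t in range(3):
    fired = step(membrane, visual, audio, attention)
```

Only the location where attention-weighted audio and visual evidence coincide accumulates fast enough to fire, which is the selection behavior the model exploits.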
Merem, Edmund; Robinson, Bennetta; Wesley, Joan M; Yerramilli, Sudha; Twumasi, Yaw A
2010-05-01
Geo-information technologies are valuable tools for ecological assessment in stressed environments. Visualizing natural features prone to disasters from the oil sector spatially not only helps in focusing the scope of environmental management with records of changes in affected areas, but it also furnishes information on the pace at which resource extraction affects nature. Notwithstanding the recourse to ecosystem protection, geo-spatial analysis of the impacts remains sketchy. This paper uses GIS and descriptive statistics to assess the ecological impacts of petroleum extraction activities in Texas. While the focus ranges from issues to mitigation strategies, the results point to growth in indicators of ecosystem decline. PMID:20623014
Experimental Study of Saddle Point of Attachment in Laminar Juncture Flow
NASA Technical Reports Server (NTRS)
Coon, Michael D.; Tobak, Murray
1995-01-01
An experimental study of laminar horseshoe vortex flows upstream of a cylinder/flat plate juncture has been conducted to verify the existence of saddle-point-of-attachment topologies. In the classical depiction of this flowfield, a saddle point of separation exists on the flat plate upstream of the cylinder, and the boundary layer separates from the surface. Recent computations have indicated that the topology may actually involve a saddle point of attachment on the surface and additional singular points in the flow. Laser light sheet flow visualizations have been performed on the symmetry plane and crossflow planes to identify the saddle-point-of-attachment flowfields. The visualizations reveal that saddle-point-of-attachment topologies occur over a range of Reynolds numbers in both single and multiple vortex regimes. An analysis of the flow topologies is presented that describes the existence and evolution of the singular points in the flowfield.
Are children with low vision adapted to the visual environment in classrooms of mainstream schools?
Negiloni, Kalpa; Ramani, Krishna Kumar; Jeevitha, R; Kalva, Jayashree; Sudhir, Rachapalle Reddi
2018-02-01
The study aimed to evaluate the classroom environment of children with low vision and provide recommendations to reduce visual stress, with a focus on mainstream schooling. The medical records of 110 children (5-17 years) seen in the low vision clinic of a tertiary care center in south India during a 1-year period (2015) were extracted. The children's visual function levels were compared to the details of their classroom environment. The study evaluated and recommended the chalkboard visual task size and viewing distance required for children with mild, moderate, and severe visual impairment (VI). The major causes of low vision, by site of abnormality and by etiology, were retinal (80%) and hereditary (67%) conditions, respectively, in children with mild (n = 18), moderate (n = 72), and severe (n = 20) VI. Many of the children (72%) had difficulty viewing the chalkboard; common strategies for better visibility included copying from friends (47%) and moving closer to the chalkboard (42%). To view the chalkboard with reduced visual stress, a child with mild VI can be seated at a maximum distance of 4.3 m from the chalkboard, with a minimum visual task size (height of lowercase letters written on the chalkboard) of 3 cm. For the 3/60-6/60 range, the maximum viewing distance with a visual task size of 4 cm is recommended to be 85 cm to 1.7 m. Simple modifications of visual task size and seating arrangements can give children with low vision better visibility of the chalkboard and reduced visual stress, helping them manage in mainstream schools.
The Role of Visual Thinking in Writing the News Story
ERIC Educational Resources Information Center
Choo, Suzanne
2010-01-01
In this article, the author begins with a proposition asking what if visual thinking were privileged in the English classroom and then proceeds to elaborate on a curriculum grounded on three principles: (1) sense and perception as starting points; (2) meta-conceptual links between visual and verbal texts; and (3) the art of visualization in…
The use of embodied self-rotation for visual and spatial perspective-taking
Surtees, Andrew; Apperly, Ian; Samson, Dana
2013-01-01
Previous research has shown that calculating whether something is to someone's left or right involves a simulative process recruiting representations of our own body in imagining ourselves in the position of the other person (Kessler and Rutherford, 2010). We compared left and right judgements from another's spatial position (spatial perspective judgements) to judgements of how a numeral appeared from another's point of view (visual perspective judgements). Experiment 1 confirmed that these visual and spatial perspective judgements involved a process of rotation, as they became more difficult with angular disparity between the self and other. There was evidence of some difference between the two, but both showed a linear pattern. Experiment 2 went a step further in showing that these judgements used embodied self-rotations, as their difficulty was also dependent on the current position of the self within the world. This effect was significantly stronger in spatial perspective-taking, but was present in both cases. We conclude that embodied self-rotations, through which we actively imagine ourselves assuming someone else's position in the world, can subserve not only reasoning about where objects are in relation to someone else but also how the objects in their environment appear to them. PMID:24204334
Modulation of thermal noise and spectral sensitivity in Lake Baikal cottoid fish rhodopsins.
Luk, Hoi Ling; Bhattacharyya, Nihar; Montisci, Fabio; Morrow, James M; Melaccio, Federico; Wada, Akimori; Sheves, Mudi; Fanelli, Francesca; Chang, Belinda S W; Olivucci, Massimo
2016-12-09
Lake Baikal is the deepest and one of the most ancient lakes in the world. Its unique ecology has resulted in the colonization of a diversity of depth habitats by a unique fauna that includes a group of teleost fish of the sub-order Cottoidei. This relatively recent radiation of cottoid fishes shows a gradual blue-shift in the wavelength of the absorption maximum of their visual pigments with increasing habitat depth. Here we combine homology modeling and quantum chemical calculations with experimental in vitro measurements of rhodopsins to investigate dim-light adaptation. The calculations, which were able to reproduce the trend of observed absorption maxima in both A1 and A2 rhodopsins, reveal a Barlow-type relationship between the absorption maxima and the thermal isomerization rate, suggesting a link between the observed blue-shift and a decrease in thermal noise. A Nakanishi point-charge analysis of the electrostatic effects of non-conserved and conserved amino acid residues surrounding the rhodopsin chromophore identified both close and distant sites that simultaneously affect spectral tuning and visual sensitivity. We propose that natural variation at these sites modulates both thermal noise and spectral shifting in Baikal cottoid visual pigments, resulting in adaptations that enable vision in deep-water light environments.
Chang, Hsiang-Chih; Lee, Po-Lei; Lo, Men-Tzung; Lee, I-Hui; Yeh, Ting-Kuang; Chang, Chun-Yen
2012-05-01
This study proposes a steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) that is independent of amplitude-frequency and phase calibrations. Six stepping delay flickering sequences (SDFSs) at a 32-Hz flickering frequency were used to implement a six-command BCI system. EEG signals recorded from the Oz position were first filtered within 29-35 Hz, segmented based on the trigger events of the SDFSs to obtain SDFS epochs, and stored separately in epoch registers. An epoch-average process suppressed inter-SDFS interference. For each detection point, the latest six SDFS epochs in each epoch register were averaged and the normalized power of the averaged responses was calculated; the stimulus that induced the maximum normalized power was identified as the attended visual target. Eight subjects were recruited, each requested to produce the "563241" command sequence four times. The averaged accuracy, command transfer interval, and information transfer rate (mean ± std.) across the eight subjects were 97.38 ± 5.97%, 3.56 ± 0.68 s, and 42.46 ± 11.17 bits/min, respectively. The proposed system requires no calibration of either the amplitude-frequency characteristic or the reference phase of the SSVEP, which may provide an efficient and reliable channel for people with neuromuscular disabilities to communicate with external environments.
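The detection rule, averaging the latest six epochs per register and selecting the register with maximum normalized power, can be sketched with synthetic data. The epoch length, frequency, and the sign-alternation model of non-target registers below are illustrative assumptions, not the paper's signal model:

```python
import math

N = 64   # samples per SDFS epoch (illustrative)

def power(x):
    return sum(v * v for v in x) / len(x)

def epoch_average(epochs):
    """Average the latest six epochs of one SDFS register: phase-locked
    SSVEP responses add coherently while unlocked activity cancels."""
    return [sum(e[k] for e in epochs) / len(epochs) for k in range(len(epochs[0]))]

# Toy data: register 2 is the attended target (phase-locked across epochs);
# the other registers get epochs with alternating sign, which cancel out.
registers = []
for cmd in range(6):
    epochs = []
    for j in range(6):
        sign = 1.0 if cmd == 2 else (-1.0) ** j
        epochs.append([sign * math.sin(2 * math.pi * 8 * k / N) for k in range(N)])
    registers.append(epochs)

powers = [power(epoch_average(e)) for e in registers]
total = sum(powers)
normalized = [p / total for p in powers]
detected = normalized.index(max(normalized))   # register with maximum normalized power
```

Because the decision compares normalized power across registers rather than against a fixed template, no amplitude or phase calibration is needed, which is the point of the design.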
Immersive Earth Science: Data Visualization in Virtual Reality
NASA Astrophysics Data System (ADS)
Skolnik, S.; Ramirez-Linan, R.
2017-12-01
Utilizing next-generation technology, Navteca's exploration of 3D and volumetric temporal data in Virtual Reality (VR) takes advantage of immersive user experiences in which stakeholders are literally inside the data. No longer restricted by the edges of a screen, VR provides an innovative way of viewing spatially distributed 2D and 3D data that leverages a 360° field of view and positional-tracking input, allowing users to see and experience data differently. These concepts are relevant to many sectors, industries, and fields of study, as real-time collaboration in VR visualizations that display temporally aware 3D, meteorological, and other volumetric datasets can enhance understanding and support mission goals. The ability to view data that is traditionally "difficult" to visualize, such as subsurface features or air columns, is a particularly compelling use of the technology. Various development iterations have resulted in Navteca's proof of concept, which imports and renders volumetric point-cloud data in the virtual reality environment by interfacing PC-based VR hardware to a back-end server and popular GIS software. The integration of geo-located data in VR and the subsequent display of changeable basemaps and overlaid datasets, together with the ability to zoom, navigate, and select specific areas, show the potential for immersive VR to revolutionize the way Earth data is viewed, analyzed, and communicated.
NASA Astrophysics Data System (ADS)
Shipman, J. S.; Anderson, J. W.
2017-12-01
An ideal tool for ecologists and land managers to investigate the impacts of both projected environmental changes and policy alternatives is the creation of immersive, interactive, virtual landscapes. As a new frontier in visualizing and understanding geospatial data, virtual landscapes require a new toolbox for data visualization that includes traditional GIS tools as well as less common ones such as the Unity3D game engine. Game engines provide capabilities not only to explore data but to build and interact with dynamic models collaboratively. These virtual worlds can display and illustrate data in ways that are often more understandable and plausible to stakeholders and policy makers than traditional maps. Within this context we will present funded research developed using virtual landscapes for geographic visualization and decision support among varied stakeholders. We will highlight the challenges and lessons learned in developing interactive virtual environments, which require large multidisciplinary team efforts with varied competencies. The results will emphasize the importance of visualization and interactive virtual environments and their link with emerging research disciplines within Visual Analytics.
XML-Based Visual Specification of Multidisciplinary Applications
NASA Technical Reports Server (NTRS)
Al-Theneyan, Ahmed; Jakatdar, Amol; Mehrotra, Piyush; Zubair, Mohammad
2001-01-01
The advancements in the Internet and Web technologies have fueled a growing interest in developing a web-based distributed computing environment. We have designed and developed Arcade, a web-based environment for designing, executing, monitoring, and controlling distributed heterogeneous applications, which is easy to use and access, portable, and provides support through all phases of the application development and execution. A major focus of the environment is the specification of heterogeneous, multidisciplinary applications. In this paper we focus on the visual and script-based specification interface of Arcade. The web/browser-based visual interface is designed to be intuitive to use and can also be used for visual monitoring during execution. The script specification is based on XML to: (1) make it portable across different frameworks, and (2) make the development of our tools easier by using the existing freely available XML parsers and editors. There is a one-to-one correspondence between the visual and script-based interfaces allowing users to go back and forth between the two. To support this we have developed translators that translate a script-based specification to a visual-based specification, and vice-versa. These translators are integrated with our tools and are transparent to users.
Computer-Based Tools for Inquiry in Undergraduate Classrooms: Results from the VGEE
NASA Astrophysics Data System (ADS)
Pandya, R. E.; Bramer, D. J.; Elliott, D.; Hay, K. E.; Mallaiahgari, L.; Marlino, M. R.; Middleton, D.; Ramamurhty, M. K.; Scheitlin, T.; Weingroff, M.; Wilhelmson, R.; Yoder, J.
2002-05-01
The Visual Geophysical Exploration Environment (VGEE) is a suite of computer-based tools designed to help learners connect observable, large-scale geophysical phenomena to underlying physical principles. Technologically, this connection is mediated by Java-based interactive tools: a multi-dimensional visualization environment, authentic scientific datasets, concept models that illustrate fundamental physical principles, and an interactive web-based work management system for archiving and evaluating learners' progress. Our preliminary investigations showed, however, that the tools alone are not sufficient to empower undergraduate learners; learners have trouble organizing inquiry and using the visualization tools effectively. To address these issues, the VGEE includes an inquiry strategy and scaffolding activities similar to strategies used successfully in K-12 classrooms. The strategy is organized around four steps: identify, relate, explain, and integrate. In the first step, students construct visualizations from data to identify salient features of a particular phenomenon; comparing their previous conceptions of the phenomenon to the data examines their current knowledge and motivates investigation. Next, students use the multivariable functionality of the visualization environment to relate the different features they identified. Explain moves the learner temporarily outside the visualization to the concept models, where they explore fundamental physical principles. Finally, in integrate, learners apply these fundamental principles within the visualization environment by literally placing a concept model inside the visualization as a probe and watching it respond to larger-scale patterns. This capability, unique to the VGEE, addresses the disconnect that novice learners often experience between fundamental physics and observable phenomena.
It also allows learners the opportunity to reflect on and refine their knowledge as well as anchor it within a context for long-term retention. We are implementing the VGEE in one of two otherwise identical entry-level atmospheric courses. In addition to comparing student learning and attitudes in the two courses, we are analyzing student participation with the VGEE to evaluate the effectiveness and usability of the VGEE. In particular, we seek to identify the scaffolding students need to construct physically meaningful multi-dimensional visualizations, and evaluate the effectiveness of the visualization-embedded concept-models in addressing inert knowledge. We will also examine the utility of the inquiry strategy in developing content knowledge, process-of-science knowledge, and discipline-specific investigatory skills. Our presentation will include video examples of student use to illustrate our findings.
Pursuit Eye-Movements in Curve Driving Differentiate between Future Path and Tangent Point Models
Lappi, Otto; Pekkanen, Jami; Itkonen, Teemu H.
2013-01-01
For nearly 20 years, looking at the tangent point on the road edge has been prominent in models of visual orientation in curve driving. It is the most common interpretation of the commonly observed pattern of car drivers looking through a bend, or at the apex of the curve. Indeed, in the visual science literature, visual orientation towards the inside of a bend has become known as “tangent point orientation”. Yet, it remains to be empirically established whether it is the tangent point the drivers are looking at, or whether some other reference point on the road surface, or several reference points, are being targeted in addition to, or instead of, the tangent point. Recently discovered optokinetic pursuit eye-movements during curve driving can provide complementary evidence over and above traditional gaze-position measures. This paper presents the first detailed quantitative analysis of pursuit eye movements elicited by curvilinear optic flow in real driving. The data implicates the far zone beyond the tangent point as an important gaze target area during steady-state cornering. This is in line with the future path steering models, but difficult to reconcile with any pure tangent point steering model. We conclude that the tangent point steering models do not provide a general explanation of eye movement and steering during a curve driving sequence and cannot be considered uncritically as the default interpretation when the gaze position distribution is observed to be situated in the region of the curve apex. PMID:23894300
Some lessons learned in three years with ADS-33C. [rotorcraft handling qualities specification
NASA Technical Reports Server (NTRS)
Key, David L.; Blanken, Chris L.; Hoh, Roger H.
1993-01-01
Three years of using the U.S. Army's rotorcraft handling qualities specification, Aeronautical Design Standard 33 (ADS-33C), have shown it to be surprisingly robust. It appears to provide an excellent basis for design and for assessment; however, as the subtleties became better understood, several areas needing refinement became apparent. Three responses to these needs are documented in this paper: (1) The yaw-axis attitude quickness for hover target acquisition and tracking can be relaxed slightly. (2) Understanding and application of the criteria for degraded visual environments needed elaboration; this elaboration, along with guidelines for testing to obtain visual cue ratings, has been documented. (3) The flight test maneuvers were an innovation that proved very valuable; their extensive use has made it necessary to tighten definitions and testing guidance, which has been accomplished for a good visual environment and is underway for degraded visual environments.
Nie, Min; Ren, Jie; Li, Zhengjun; Niu, Jinhai; Qiu, Yihong; Zhu, Yisheng; Tong, Shanbao
2009-01-01
Without visual information, blind people face hardships in shopping, reading, finding objects, and other daily tasks. We therefore developed a portable auditory guide system, called SoundView, for visually impaired people. This prototype system consists of a mini-CCD camera, a digital signal processing unit, and an earphone, working with built-in customizable auditory coding algorithms. Employing environment-understanding techniques, SoundView processes the images from the camera and detects objects tagged with barcodes. The recognized objects in the environment are then encoded into stereo speech signals delivered to the user through the earphone. The user can recognize the type, motion state, and location of objects of interest with the help of SoundView. Compared with other visual assistance techniques, SoundView is object-oriented and has the advantages of low cost, small size, light weight, low power consumption, and easy customization.
Visual efficiency among teenaged athletes and non-athletes
Omar, Rokiah; Kuan, Yau Meng; Zuhairi, Nurul Atikah; Manan, Faudziah Abd; Knight, Victor Feizal
2017-01-01
AIM To compare visual efficiency, specifically accommodation, vergence, and oculomotor functions, among athletes and non-athletes. METHODS A cross-sectional study on sports vision screening was used to evaluate the visual skills of 214 elementary students (107 athletes, 107 non-athletes), aged between 13 and 16y. The visual screening assessed visual parameters such as ocular motor alignment, accommodation, and vergence functions. RESULTS Mean visual parameters were compared between age-group matched athletes (mean age 14.82±0.98y) and non-athletes (mean age 15.00±1.04y). The refractive errors of all participants were corrected to a maximal attainable best corrected visual acuity of logMAR 0.0. Accommodation function assessment evaluated the amplitude of accommodation and accommodation facility. Vergence function assessment measured the near point of convergence, vergence facility, and distance fusional vergence at the break and recovery points. Ocular motor alignment did not differ significantly between the groups. Athletes had a significantly greater amplitude of accommodation for both the right eye (t=2.30, P=0.02) and the left eye (t=1.99, P=0.05). Conversely, non-athletes had better accommodation facility (t=-2.54, P=0.01) and near point of convergence (t=4.39, P<0.001) than athletes. Vergence facility was better among athletes (t=2.47, P=0.01). Nevertheless, non-athletes were significantly better for both distance negative and positive fusional vergence. CONCLUSION Although the findings remain inconclusive as to whether athletes have superior visual skills compared to non-athletes, it remains important to identify and elucidate the key visual skills athletes need in order to achieve higher performance in their sports. PMID:28944208
NASA Astrophysics Data System (ADS)
Park, Byeongjin; Sohn, Hoon
2018-04-01
The practicality of laser ultrasonic scanning is limited because scanning at a high spatial resolution demands a prohibitively long scanning time. Inspired by binary search, an accelerated defect visualization technique is developed to visualize defects with a reduced scanning time. The pitch-catch distance between the excitation point and the sensing point is fixed during scanning to maintain a high signal-to-noise ratio in the measured ultrasonic responses. The approximate defect boundary is identified by examining the interactions between ultrasonic waves and the defect observed at scanning points that are sparsely selected by a binary search algorithm. Here, a time-domain laser ultrasonic response is transformed into a spatial ultrasonic domain response using a basis pursuit approach, so that the interactions between ultrasonic waves and the defect can be better identified in the spatial ultrasonic domain. The area inside the identified defect boundary is then visualized as the defect. The performance of the proposed technique is validated through an experiment on a semiconductor chip. The proposed technique accelerates defect visualization in three respects: (1) the number of measurements necessary for defect visualization is dramatically reduced by the binary search algorithm; (2) the amount of averaging necessary to achieve a high signal-to-noise ratio is reduced by keeping the wave propagation distance short; and (3) the defect can be identified at a lower spatial resolution than that required by full-field wave propagation imaging.
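The binary-search edge localization can be sketched on a one-dimensional scan line. The `probe` function, the seed point known to lie on the defect, and the scan length are hypothetical stand-ins for actual laser ultrasonic measurements:

```python
def locate_defect(probe, n, seed):
    """Given one scan point `seed` known to lie on the defect, binary-search
    both defect edges instead of measuring every point on the scan line.
    probe(i) -> True if the ultrasonic response at point i shows the defect."""
    measurements = 0
    def measure(i):
        nonlocal measurements
        measurements += 1
        return probe(i)
    # Left edge: smallest index that still hits the defect.
    lo, hi = 0, seed
    while lo < hi:
        mid = (lo + hi) // 2
        if measure(mid):
            hi = mid
        else:
            lo = mid + 1
    left = lo
    # Right edge: largest index that still hits the defect.
    lo, hi = seed, n - 1
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if measure(mid):
            lo = mid
        else:
            hi = mid - 1
    return left, lo, measurements

# Toy scan line of 1024 points with a defect spanning indices 300..500:
left, right, used = locate_defect(lambda i: 300 <= i <= 500, 1024, 400)
```

Each edge costs only about log2(n) measurements (roughly 20 in total here, versus 1024 for an exhaustive scan), which is the source of the speed-up the paper reports.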
Chung, Byunghoon; Lee, Hun; Choi, Bong Joon; Seo, Kyung Ryul; Kim, Eung Kwon; Kim, Dae Yune; Kim, Tae-Im
2017-02-01
The purpose of this study was to investigate the clinical efficacy of an optimized prolate ablation procedure for correcting residual refractive errors following laser surgery. We analyzed 24 eyes of 15 patients who underwent an optimized prolate ablation procedure for the correction of residual refractive errors following laser in situ keratomileusis, laser-assisted subepithelial keratectomy, or photorefractive keratectomy surgeries. Preoperative ophthalmic examinations were performed, and uncorrected distance visual acuity, corrected distance visual acuity, manifest refraction values (sphere, cylinder, and spherical equivalent), point spread function, modulation transfer function, corneal asphericity (Q value), ocular aberrations, and corneal haze measurements were obtained postoperatively at 1, 3, and 6 months. Uncorrected distance visual acuity improved and refractive errors decreased significantly at 1, 3, and 6 months postoperatively. Total coma aberration increased at 3 and 6 months postoperatively, while changes in all other aberrations were not statistically significant. Similarly, no significant changes in point spread function were detected, but modulation transfer function increased significantly at the postoperative time points measured. The optimized prolate ablation procedure was effective in terms of improving visual acuity and objective visual performance for the correction of persistent refractive errors following laser surgery.
Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap
Roseboom, Warrick; Kawabe, Takahiro; Nishida, Shin'ya
2013-01-01
It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this is necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed only in featural content. Using both complex (audio visual speech; see Experiment 1) and simple stimuli (high and low pitch audio matched with either vertically or horizontally oriented Gabors; see Experiment 2) we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair alone can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap. PMID:23658549
NASA Astrophysics Data System (ADS)
Demir, I.
2014-12-01
Recent developments in internet technologies make it possible to manage and visualize large data on the web. Novel visualization techniques and interactive user interfaces allow users to create realistic environments and interact with data to gain insight from simulations and environmental observations. The hydrological simulation system is a web-based 3D interactive learning environment for teaching hydrological processes and concepts. The simulation system provides a visually striking platform with realistic terrain information and water simulation. Students can create or load predefined scenarios, control environmental parameters, and evaluate environmental mitigation alternatives. The web-based simulation system provides an environment for students to learn about hydrological processes (e.g. flooding and flood damage) and the effects of development and human activity in the floodplain. The system utilizes the latest web technologies and the graphics processing unit (GPU) for water simulation and object collisions on the terrain. Users can access the system in three visualization modes: virtual reality, augmented reality, and immersive reality using a heads-up display. The system provides various scenarios customized to fit the age and education level of various users. This presentation provides an overview of the web-based flood simulation system and demonstrates its capabilities across the various visualization and interaction modes.
Color polymorphic lures target different visual channels in prey.
White, Thomas E; Kemp, Darrell J
2016-06-01
Selection for signal efficacy in variable environments may favor color polymorphism, but little is known about this possibility outside of sexual systems. Here we used the color polymorphic orb-web spider Gasteracantha fornicata, whose yellow- or white-banded dorsal signal attracts dipteran prey, to test the hypothesis that morphs may be tuned to optimize either chromatic or achromatic conspicuousness in their visually noisy forest environments. We used data from extensive observations of naturally existing spiders and precise assessments of visual environments to model signal conspicuousness according to dipteran vision. Modeling supported a distinct bias in the chromatic (yellow morph) or achromatic (white morph) contrast presented by spiders at the times when they caught prey, as opposed to all other times at which they may be viewed. Hence, yellow spiders were most successful when their signal produced maximum color contrast against viewing backgrounds, whereas white spiders were most successful when they presented relatively greatest luminance contrast. Further modeling across a hypothetical range of lure variation confirmed that yellow versus white signals should, respectively, enhance chromatic versus achromatic conspicuousness to flies in G. fornicata's visual environments. These findings suggest that color polymorphism may be adaptively maintained by selection for conspicuousness within different visual channels in receivers. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.
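The chromatic/achromatic split driving this analysis can be illustrated with toy contrast metrics. This is a sketch only: the study models conspicuousness with dipteran visual systems (e.g. receptor-noise limited models), whereas here Michelson contrast and a normalized-chromaticity distance stand in, and the receptor-catch triplets are invented:

```python
import math

def michelson_contrast(l_target, l_background):
    """Achromatic (luminance) contrast of a signal against its background."""
    return abs(l_target - l_background) / (l_target + l_background)

def chromatic_distance(c_target, c_background):
    """Toy chromatic separation: Euclidean distance between normalized
    chromaticities, ignoring the receptor-noise weighting a real model
    of fly vision would apply."""
    norm = lambda c: [x / sum(c) for x in c]
    return math.dist(norm(c_target), norm(c_background))

# Hypothetical receptor catches (not measured values): a yellow lure
# viewed against green foliage.
foliage = (0.2, 0.5, 0.1)
yellow = (0.6, 0.6, 0.1)
print(round(michelson_contrast(sum(yellow), sum(foliage)), 2))  # luminance channel → 0.24
print(round(chromatic_distance(yellow, foliage), 2))            # color channel → 0.27
```

Comparing morphs on the two channels separately is what lets the analysis ask whether each morph is tuned to a different kind of conspicuousness.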
ERIC Educational Resources Information Center
Lancioni, Giulio E.; O'Reilly, Mark F.; Singh, Nirbhay N.; Sigafoos, Jeff; Campodonico, Francesca; Oliva, Doretta
2009-01-01
Persons with profound visual impairments and other disabilities, such as neuromotor and intellectual disabilities, may encounter serious orientation and mobility problems even in familiar indoor environments, such as their homes. Teaching these persons to develop maps of their daily environment, using miniature replicas of the areas or some…
Motor Effects from Visually Induced Disorientation in Man.
ERIC Educational Resources Information Center
Brecher, M. Herbert; Brecher, Gerhard A.
The problem of disorientation in a moving optical environment was examined. A pilot can experience egocentric disorientation if the entire visual environment moves relative to his body without a cue as to the objective position of the airplane with respect to the ground. A simple method of measuring disorientation was devised. In this method…
Reconfigurable Image Generator
NASA Technical Reports Server (NTRS)
Archdeacon, John L. (Inventor); Iwai, Nelson H. (Inventor); Kato, Kenji H. (Inventor); Sweet, Barbara T. (Inventor)
2017-01-01
A RiG may simulate the visual conditions of a real-world environment and generate the necessary number of pixels in a visual simulation at rates of up to 120 frames per second. A RiG may also include a database generation system capable of producing visual databases suitable to drive the visual fidelity required by the RiG.
Learning about Locomotion Patterns from Visualizations: Effects of Presentation Format and Realism
ERIC Educational Resources Information Center
Imhof, Birgit; Scheiter, Katharina; Gerjets, Peter
2011-01-01
The rapid development of computer graphics technology has made possible an easy integration of dynamic visualizations into computer-based learning environments. This study examines the relative effectiveness of dynamic visualizations, compared either to sequentially or simultaneously presented static visualizations. Moreover, the degree of realism…
Visual Programming: A Programming Tool for Increasing Mathematics Achievement
ERIC Educational Resources Information Center
Swanier, Cheryl A.; Seals, Cheryl D.; Billionniere, Elodie V.
2009-01-01
This paper aims to address the need to increase student achievement in mathematics through the use of a visual programming language such as Scratch. This visual programming language facilitates creating an environment in which students in K-12 education can develop mathematical simulations while learning a visual programming language at the same time.…
Purpura, Giulia; Cioni, Giovanni; Tinelli, Francesca
2018-07-01
Object recognition is a long and complex adaptive process, and its full maturation requires the combination of many different sensory experiences as well as the cognitive ability to manipulate previous experiences in order to develop new percepts and subsequently learn from the environment. It is well recognized that the transfer of visual and haptic information facilitates object recognition in adults, but less is known about the development of this ability. In this study, we explored the developmental course of object recognition capacity using unimodal visual information, unimodal haptic information, and visuo-haptic information transfer in children from 4 years to 10 years and 11 months of age. Participants were tested through a clinical protocol involving visual exploration of black-and-white photographs of common objects, haptic exploration of real objects, and visuo-haptic transfer of these two types of information. Results show an age-dependent development of object recognition abilities for visual, haptic, and visuo-haptic modalities. A significant effect of time on the development of unimodal and crossmodal recognition skills was found. Moreover, our data suggest that multisensory processes for common object recognition are active at 4 years of age. They facilitate recognition of common objects and, although not fully mature, are significant in adaptive behavior from the first years of life. The study of the typical development of visuo-haptic processes in childhood is a starting point for future studies regarding object recognition in impaired populations.
Evolutionary adaptations: theoretical and practical implications for visual ergonomics.
Fostervold, Knut Inge; Watten, Reidulf G; Volden, Frode
2014-01-01
The literature on visual ergonomics often mentions that human vision is adapted to light emitted by the sun. However, the theoretical and practical implications of this viewpoint are seldom discussed or taken into account. The paper discusses some of the main theoretical implications of an evolutionary approach to visual ergonomics. Based on interactional theory and ideas from ecological psychology, an evolutionary stress model is proposed as a theoretical framework for future research in ergonomics and human factors. The model stresses the importance of developing work environments that fit with our evolutionary adaptations. In accordance with evolutionary psychology, the environment of evolutionary adaptedness (EEA) and evolutionarily novel environments (EN) are used as key concepts. Using work with visual display units (VDU) as an example, the paper discusses how this knowledge can be utilized in an ergonomic analysis of risk factors in the work environment. The paper emphasises the importance of incorporating evolutionary theory into the field of ergonomics and encourages scientific practices that further our understanding of phenomena beyond the borders of traditional proximal explanations.
The Influence of Texture Symmetry in Marker Pointing: Experimenting with Humans and Algorithms
NASA Astrophysics Data System (ADS)
Cardaci, M.; Tabacchi, M. E.
2012-12-01
Symmetry plays a fundamental role in helping the visual system to organize environmental stimuli and to detect visual patterns in natural and artificial objects. Various kinds of symmetry exist, and we discuss how internal symmetry due to textures influences the choice of direction in visual tasks. Two experiments are presented: the first, with human subjects, deals with the effect of textures on preferences for a pointing direction. The second emulates the performance obtained in the first through the use of an algorithm based on a physics metaphor. Results from both experiments are shown and commented upon.
Seminar in Visual Communication.
ERIC Educational Resources Information Center
Western Washington Univ., Bellingham.
Teachers involved in the Visual Communication Education project attended a summer program in 1966 at which the following edited lectures were given by resource people who represented diverse points of view: (1) "The Design and Technical Foundations of Visual Communication" by Kenneth G. Scheid examines supply and demand, technological…
Visualizations and Mental Models - The Educational Implications of GEOWALL
NASA Astrophysics Data System (ADS)
Rapp, D.; Kendeou, P.
2003-12-01
Work in the earth sciences has outlined many of the faulty beliefs that students possess concerning particular geological systems and processes. Evidence from educational and cognitive psychology has demonstrated that students often have difficulty overcoming their naïve beliefs about science. Prior knowledge is often remarkably resistant to change, particularly when students' existing mental models for geological principles may be faulty or inaccurate. Figuring out how to help students revise their mental models to include appropriate information is a major challenge. Up until this point, research has tended to focus on whether 2-dimensional computer visualizations are useful tools for helping students develop scientifically correct models. Research suggests that when students are given the opportunity to use dynamic computer-based visualizations, they are more likely to recall the learned information and to transfer that knowledge to novel settings. Unfortunately, 2-dimensional visualization systems are often inadequate representations of the material that educators would like students to learn. For example, a 2-dimensional image of the Earth's surface does not adequately convey particular features that are critical for visualizing the geological environment. This may limit the models that students can construct following these visualizations. GEOWALL is a stereo projection system that attempts to address this issue. It can display multidimensional static geologic images and dynamic geologic animations in a 3-dimensional format. Our current research examines whether multidimensional visualization systems such as GEOWALL may facilitate learning by helping students to develop more complex mental models. This talk will address some of the cognitive issues that influence the construction of mental models and the difficulty of updating existing mental models. We will also discuss our current work, which seeks to examine whether GEOWALL is an effective tool for helping students to learn geological information (and potentially restructure their naïve conceptions of geologic principles).
NASA Astrophysics Data System (ADS)
Akristiniy, Vera A.; Dikova, Elena A.
2018-03-01
The article is devoted to one type of urban planning study, the visual-landscape analysis, conducted when high-rise buildings are integrated within a historic urban environment. Its purpose is to support pre-design and design studies that preserve the historical urban environment while realizing the reconstructional resource of the area. The article forms and systematizes the stages and methods of conducting visual-landscape analysis, taking into account the influence of high-rise buildings on objects of cultural heritage and valuable historical buildings of the city. Practical application of visual-landscape analysis provides an opportunity to assess the influence of a hypothetical location of high-rise buildings on the perception of the historically developed environment and to determine optimal building parameters. The contents of the main stages of visual-landscape analysis are set out, along with their key aspects concerning the construction of predicted visibility zones for significant, historically valuable urban development objects and hypothetically planned high-rise buildings. The resulting data are oriented toward the successive development of the planning and typological structure of the city territory and the preservation of the compositional influence of valuable fragments of the historical environment within the structure of the urban landscape. On this basis, an information database is formed to determine the permissible urban development parameters of high-rise buildings for preserving the compositional integrity of the urban area.
Attention during active visual tasks: counting, pointing, or simply looking
Wilder, John D.; Schnitzer, Brian S.; Gersch, Timothy M.; Dosher, Barbara A.
2009-01-01
Visual attention and saccades are typically studied in artificial situations, with stimuli presented to the steadily fixating eye, or saccades made along specified paths. By contrast, in the real world saccadic patterns are constrained only by the demands of the motivating task. We studied attention during pauses between saccades made to perform 3 free-viewing tasks: counting dots, pointing to the same dots with a visible cursor, or simply looking at the dots using a freely-chosen path. Attention was assessed by the ability to identify the orientation of a briefly-presented Gabor probe. All primary tasks produced losses in identification performance, with counting producing the largest losses, followed by pointing and then looking-only. Looking-only resulted in a 37% increase in contrast thresholds in the orientation task. Counting produced more severe losses that were not overcome by increasing Gabor contrast. Detection and localization of the Gabor, unlike identification, were largely unaffected by any of the primary tasks. Taken together, these results show that attention is required to control saccades, even with freely-chosen paths, but the attentional demands of saccades are less than those attached to tasks such as counting, which have a significant cognitive load. Counting proved to be a highly demanding task that either exhausted momentary processing capacity (e.g., working memory or executive functions), or, alternatively, encouraged a strategy of filtering out all signals irrelevant to counting itself. The fact that the attentional demands of saccades (as well as those of detection/localization) are relatively modest makes it possible to continually adjust both the spatial and temporal pattern of saccades so as to re-allocate attentional resources as needed to handle the complex and multifaceted demands of real-world environments. PMID:18649913
Teaching Technology Education to Visually Impaired Students.
ERIC Educational Resources Information Center
Mann, Rene
1987-01-01
Discusses various types of visual impairments and how the learning environment can be adapted to limit their effect. Presents suggestions for adapting industrial arts laboratory activities to maintain safety standards while allowing the visually impaired to participate. (CH)
Simulation environment and graphical visualization environment: a COPD use-case.
Huertas-Migueláñez, Mercedes; Mora, Daniel; Cano, Isaac; Maier, Dieter; Gomez-Cabrero, David; Lluch-Ariet, Magí; Miralles, Felip
2014-11-28
Today, many different tools have been developed to execute and visualize physiological models of human physiology. Most of these tools run models written in very specific programming languages, which in turn simplifies communication among those models. Nevertheless, not all of these tools are able to run models written in different programming languages, and interoperability between such models remains an unresolved issue. In this paper we present a simulation environment that allows, first, the execution of models developed in different programming languages and, second, the communication of parameters to interconnect these models. This simulation environment, developed within the Synergy-COPD project, aims at helping bio-researchers and medical students understand the internal mechanisms of the human body through the use of physiological models. The tool is composed of a graphical visualization environment, a web interface through which the user interacts with the models, and a simulation workflow management system comprising a control module and a data warehouse manager. The control module monitors the correct functioning of the whole system; the data warehouse manager is responsible for managing the stored information and supporting its flow among the different modules. The simulation environment presented here has been shown to allow users to research and study the internal mechanisms of human physiology via a graphical visualization environment. A new tool for bio-researchers is ready for deployment in various use-case scenarios.
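The described architecture — a control module orchestrating heterogeneous models through a data warehouse manager — can be sketched with a minimal shared parameter store. All model names, parameters, and equations below are invented for illustration and are not Synergy-COPD components:

```python
class ParameterStore:
    """Minimal stand-in for the data warehouse manager: a shared
    key-value store through which independent models exchange outputs."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data[key]

def lung_model(store):
    # Hypothetical model A: derives oxygen uptake from ventilation.
    store.put("o2_uptake", store.get("ventilation") * 0.05)

def circulation_model(store):
    # Hypothetical model B: consumes model A's output.
    store.put("o2_delivery", store.get("o2_uptake") * store.get("cardiac_output"))

def control_module(store, models):
    """Control module: runs each model in order; the store carries
    parameters between models regardless of how each is implemented."""
    for model in models:
        model(store)

store = ParameterStore()
store.put("ventilation", 8.0)      # L/min, illustrative
store.put("cardiac_output", 5.0)   # L/min, illustrative
control_module(store, [lung_model, circulation_model])
print(store.get("o2_delivery"))    # → 2.0
```

In the real system the models run in different languages, so the store would sit behind a language-neutral interface rather than in-process Python calls.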
Interobject grouping facilitates visual awareness.
Stein, Timo; Kaiser, Daniel; Peelen, Marius V
2015-01-01
In organizing perception, the human visual system takes advantage of regularities in the visual input to perceptually group related image elements. Simple stimuli that can be perceptually grouped based on physical regularities, for example by forming an illusory contour, have a competitive advantage in entering visual awareness. Here, we show that regularities that arise from the relative positioning of complex, meaningful objects in the visual environment also modulate visual awareness. Using continuous flash suppression, we found that pairs of objects that were positioned according to real-world spatial regularities (e.g., a lamp above a table) accessed awareness more quickly than the same object pairs shown in irregular configurations (e.g., a table above a lamp). This advantage was specific to upright stimuli and abolished by stimulus inversion, meaning that it did not reflect physical stimulus confounds or the grouping of simple image elements. Thus, knowledge of the spatial configuration of objects in the environment shapes the contents of conscious perception.
Linking Advanced Visualization and MATLAB for the Analysis of 3D Gene Expression Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruebel, Oliver; Keranen, Soile V.E.; Biggin, Mark
Three-dimensional gene expression PointCloud data generated by the Berkeley Drosophila Transcription Network Project (BDTNP) provides quantitative information about the spatial and temporal expression of genes in early Drosophila embryos at cellular resolution. The BDTNP team visualizes and analyzes PointCloud data using the software application PointCloudXplore (PCX). To maximize the impact of novel, complex data sets such as PointClouds, the data needs to be accessible to biologists and comprehensible to developers of analysis functions. We address this challenge by linking PCX and Matlab via a dedicated interface, thereby providing biologists seamless access to advanced data analysis functions and giving bioinformatics researchers the opportunity to integrate their analysis directly into the visualization application. To demonstrate the usefulness of this approach, we computationally model parts of the expression pattern of the gene even skipped using a genetic algorithm implemented in Matlab and integrated into PCX via our Matlab interface.
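The even skipped modeling step uses a genetic algorithm (in Matlab, via the PCX interface). A bare-bones GA of the same general shape, sketched here in Python under the assumption of a simple bit-string genome and a toy fitness function, looks like this:

```python
import random

def evolve(fitness, genome_len, pop_size=30, generations=60, seed=1):
    """Bare-bones genetic algorithm: tournament selection, one-point
    crossover, occasional bit-flip mutation. A toy stand-in for the
    Matlab GA used to fit expression-pattern models."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            a, b = rng.sample(pop, 2)          # size-2 tournament
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, genome_len)  # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.1:              # bit-flip mutation
                child[rng.randrange(genome_len)] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy objective: match a target on/off expression stripe pattern.
target = [1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0]
score = lambda g: sum(int(a == b) for a, b in zip(g, target))
best = evolve(score, len(target))
print(score(best))   # typically close to the maximum score of 12
```

The real fitness function scores a candidate model against measured PointCloud expression values rather than a fixed bit pattern.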
Ergonomics in the office environment
NASA Technical Reports Server (NTRS)
Courtney, Theodore K.
1993-01-01
Perhaps the four most popular 'ergonomic' office culprits are: (1) the computer or visual display terminal (VDT); (2) the office chair; (3) the workstation; and (4) other automated equipment such as the facsimile machine, photocopier, etc. Among the ergonomics issues in the office environment are visual fatigue, musculoskeletal disorders, and radiation/electromagnetic (VLF,ELF) field exposure from VDT's. We address each of these in turn and then review some regulatory considerations regarding such stressors in the office and general industrial environment.
Fly eye radar or micro-radar sensor technology
NASA Astrophysics Data System (ADS)
Molchanov, Pavlo; Asmolova, Olga
2014-05-01
To compensate for its inability to point its eyes at a target, the fly has compound eyes consisting of multiple angularly spaced sensors, giving it the wide-area visual coverage it needs to detect and avoid the threats around it. Based on a similar concept, a revolutionary new micro-radar sensor technology is proposed for detecting and tracking ground and/or airborne low-profile, low-altitude targets in harsh urban environments. Distributed along a border or around a protected object (military facilities and buildings, a camp, a stadium), small, low-power unattended radar sensors can be used for target detection and tracking, threat warning, and pre-shot sniper protection, and can provide effective support for homeland security. In addition, the technology can provide 3D recognition and target classification because it directs five orders of magnitude more pulses at each point in space than any scanning radar, by using a few points of view, diversity signals, and intelligent processing. The application of an array of directional antennas eliminates the need for a mechanically scanned antenna or a phase processor, radically decreasing radar size and increasing bearing accuracy severalfold. The proposed micro-radar sensors can be easily connected to one or several operators by invisible, protected point-to-point communication. The directional antennas have higher gain, can be multi-frequency, and can be connected to a multi-functional network. Fly-eye micro-radars are inexpensive, can be expendable, and will reduce the cost of defense.
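The fixed, angularly spaced sensor idea can be illustrated with toy amplitude-comparison direction finding: with no mechanical scanning, a bearing is estimated by weighting each sensor's pointing axis by its response. The Gaussian beam pattern and all numbers below are invented for illustration, not properties of the proposed radar:

```python
import math

def estimate_bearing(readings):
    """Toy amplitude-comparison direction finding for an array of fixed,
    angularly spaced directional sensors: the bearing estimate is the
    response-weighted mean of the sensor axes (angles in degrees)."""
    num = sum(gain * angle for angle, gain in readings)
    den = sum(gain for _, gain in readings)
    return num / den

# Sensor axes every 30 degrees; simulated target at 40 degrees seen
# through an assumed Gaussian beam pattern.
target = 40.0
pattern = lambda off: math.exp(-(off / 25.0) ** 2)
readings = [(a, pattern(a - target)) for a in range(0, 180, 30)]
print(round(estimate_bearing(readings), 1))   # → 39.9
```

Even with 30-degree sensor spacing, comparing amplitudes across neighbors recovers the bearing to within a fraction of a degree in this idealized, noise-free case.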
Pictorial communication in virtual and real environments
NASA Technical Reports Server (NTRS)
Ellis, Stephen R. (Editor)
1991-01-01
Papers about the communication between human users and machines in real and synthetic environments are presented. Individual topics addressed include: pictorial communication, distortions in memory for visual displays, cartography and map displays, efficiency of graphical perception, volumetric visualization of 3D data, spatial displays to increase pilot situational awareness, teleoperation of land vehicles, computer graphics system for visualizing spacecraft in orbit, visual display aid for orbital maneuvering, multiaxis control in telemanipulation and vehicle guidance, visual enhancements in pick-and-place tasks, target axis effects under transformed visual-motor mappings, adapting to variable prismatic displacement. Also discussed are: spatial vision within egocentric and exocentric frames of reference, sensory conflict in motion sickness, interactions of form and orientation, perception of geometrical structure from congruence, prediction of three-dimensionality across continuous surfaces, effects of viewpoint in the virtual space of pictures, visual slant underestimation, spatial constraints of stereopsis in video displays, stereoscopic stance perception, paradoxical monocular stereopsis and perspective vergence. (No individual items are abstracted in this volume)
Reaction time for processing visual stimulus in a computer-assisted rehabilitation environment.
Sanchez, Yerly; Pinzon, David; Zheng, Bin
2017-10-01
To examine reaction time when human subjects process information presented in the visual channel, under both direct vision and a virtual rehabilitation environment, while walking. Visual stimuli comprised eight math problems displayed in the peripheral vision of seven healthy human subjects in a virtual rehabilitation training environment (computer-assisted rehabilitation environment, CAREN) and a direct-vision environment. Subjects were required to verbally report the results of these math calculations within a short period of time. Reaction time, measured by a Tobii eye tracker, and calculation accuracy were recorded and compared between the direct-vision and virtual rehabilitation environments. Performance outcomes measured for both conditions included reaction time, reading time, answering time, and verbal answer score. A significant difference between the conditions was found only for reaction time (p = .004). Participants had more difficulty recognizing the first equation in the virtual environment, and their reaction time was faster in the direct-vision environment. This reaction-time delay should be kept in mind when designing skill-training scenarios in virtual environments. This was a pilot project for a series of studies assessing the cognitive ability of stroke patients undertaking a rehabilitation program in a virtual training environment. Implications for rehabilitation: Eye tracking is a reliable tool that can be employed in rehabilitation virtual environments. Reaction time changes between direct vision and virtual environments.
Borgersen, Nanna Jo; Henriksen, Mikael Johannes Vuokko; Konge, Lars; Sørensen, Torben Lykke; Thomsen, Ann Sofia Skou; Subhi, Yousif
2016-01-01
Background Direct ophthalmoscopy is well-suited for video-based instruction, particularly if the videos enable the student to see what the examiner sees when performing direct ophthalmoscopy. We evaluated the pedagogical effectiveness of instructional YouTube videos on direct ophthalmoscopy by evaluating their content and approach to visualization. Methods In order to synthesize main themes and points for direct ophthalmoscopy, we formed a broad panel consisting of a medical student and junior and senior physicians, and took into consideration book chapters targeting medical students and physicians in general. We then systematically searched YouTube. Two authors reviewed eligible videos to assess eligibility and extract data on video statistics, content, and approach to visualization. Correlations between video statistics and contents were investigated using two-tailed Spearman’s correlation. Results We screened 7,640 videos, of which 27 were found eligible for this study. Overall, a median of 12 out of 18 key points (interquartile range: 8–14) was covered; no videos covered all of the 18 points assessed. The greatest difficulties lay in the visualization of how to approach the patient and how to examine the fundus. Time spent on fundus examination correlated with the number of views per week (Spearman’s ρ=0.53; P=0.029). Conclusion Videos may help overcome the pedagogical issues in teaching direct ophthalmoscopy; however, the few available videos on YouTube fail to address this particular issue adequately. There is a need for high-quality videos that include relevant points, provide realistic visualization of the examiner’s view, and give particular emphasis to fundus examination. PMID:27574393
Luyckx, K; Dewulf, J; Van Weyenberg, S; Herman, L; Zoons, J; Vervaet, E; Heyndrickx, M; De Reu, K
2015-04-01
Cleaning and disinfection of the broiler stable environment is an essential part of farm hygiene management. Adequate cleaning and disinfection is essential for prevention and control of animal diseases and zoonoses. The goal of this study was to shed light on the dynamics of microbiological and non-microbiological parameters during the successive steps of cleaning and disinfection and to select the most suitable sampling methods and parameters to evaluate cleaning and disinfection in broiler houses. The effectiveness of cleaning and disinfection protocols was measured in six broiler houses on two farms through visual inspection, adenosine triphosphate hygiene monitoring and microbiological analyses. Samples were taken at three time points: 1) before cleaning, 2) after cleaning, and 3) after disinfection. Before cleaning and after disinfection, air samples were taken in addition to agar contact plates and swab samples taken from various sampling points for enumeration of total aerobic flora, Enterococcus spp., and Escherichia coli and the detection of E. coli and Salmonella. After cleaning, air samples, swab samples, and adenosine triphosphate swabs were taken and a visual score was also assigned for each sampling point. The mean total aerobic flora determined by swab samples decreased from 7.7±1.4 to 5.7±1.2 log CFU/625 cm2 after cleaning and to 4.2±1.6 log CFU/625 cm2 after disinfection. Agar contact plates were used as the standard for evaluating cleaning and disinfection, but in this study they were found to be less suitable than swabs for enumeration. In addition to measuring total aerobic flora, Enterococcus spp. seemed to be a better hygiene indicator to evaluate cleaning and disinfection protocols than E. coli. All stables were Salmonella negative, but the detection of its indicator organism E. coli provided additional information for evaluating cleaning and disinfection protocols. 
Adenosine triphosphate analyses gave additional information about the hygiene level of the different sampling points. © 2015 Poultry Science Association Inc.
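The reported mean counts imply a simple arithmetic of log10 reductions; a quick sketch, using only the values given in the abstract:

```python
# Fold reductions implied by the reported mean counts
# (from the abstract: 7.7 log CFU/625 cm2 before cleaning,
#  5.7 after cleaning, 4.2 after disinfection).
before, after_clean, after_disinfect = 7.7, 5.7, 4.2

cleaning_logs = before - after_clean               # 2.0 log reduction
disinfection_logs = after_clean - after_disinfect  # 1.5 log reduction

print(round(10 ** cleaning_logs))           # cleaning: ~100-fold reduction
print(round(10 ** disinfection_logs, 1))    # disinfection: a further ~31.6-fold
```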
Visual Communication: Integrating Visual Instruction into Business Communication Courses
ERIC Educational Resources Information Center
Baker, William H.
2006-01-01
Business communication courses are ideal for teaching visual communication principles and techniques. Many assignments lend themselves to graphic enrichment, such as flyers, handouts, slide shows, Web sites, and newsletters. Microsoft Publisher and Microsoft PowerPoint are excellent tools for these assignments, with Publisher being best for…
Robotic Vision-Based Localization in an Urban Environment
NASA Technical Reports Server (NTRS)
Mchenry, Michael; Cheng, Yang; Matthies
2007-01-01
A system of electronic hardware and software, now undergoing development, automatically estimates the location of a robotic land vehicle in an urban environment using a somewhat imprecise map, which has been generated in advance from aerial imagery. This system does not utilize the Global Positioning System and does not include any odometry, inertial measurement units, or any other sensors except a stereoscopic pair of black-and-white digital video cameras mounted on the vehicle. Of course, the system also includes a computer running software that processes the video image data. The software consists mostly of three components corresponding to the three major image-data-processing functions. Visual Odometry: This component automatically tracks point features in the imagery and computes the relative motion of the cameras between sequential image frames. This component incorporates a modified version of a visual-odometry algorithm originally published in 1989. The algorithm selects point features, performs multiresolution area-correlation computations to match the features in stereoscopic images, tracks the features through the sequence of images, and uses the tracking results to estimate the six-degree-of-freedom motion of the camera between consecutive stereoscopic pairs of images (see figure). Urban Feature Detection and Ranging: Using the same data as those processed by the visual-odometry component, this component strives to determine the three-dimensional (3D) coordinates of vertical and horizontal lines that are likely to be parts of, or close to, the exterior surfaces of buildings. The basic sequence of processes performed by this component is the following: 1. An edge-detection algorithm is applied, yielding a set of linked lists of edge pixels, a horizontal-gradient image, and a vertical-gradient image. 2. Straight-line segments of edges are extracted from the linked lists generated in step 1.
Any straight-line segments longer than an arbitrary threshold (e.g., 30 pixels) are assumed to belong to buildings or other artificial objects. 3. A gradient-filter algorithm is used to test straight-line segments longer than the threshold to determine whether they represent edges of natural or artificial objects. In somewhat oversimplified terms, the test is based on the assumption that the gradient of image intensity varies little along a segment that represents the edge of an artificial object.
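The gradient-filter test described above can be sketched as follows. This is an illustrative reconstruction, not the cited system's code: the circular-spread criterion and the threshold values are assumptions (only the 30-pixel length threshold comes from the abstract).

```python
import math

# Sketch: along an artificial (building) edge, the intensity-gradient
# direction varies little, so a low angular spread suggests "artificial".
MIN_LENGTH_PX = 30        # length threshold quoted in the abstract
MAX_ANGULAR_SPREAD = 0.1  # radians; hypothetical tuning parameter

def is_artificial_edge(gradient_angles, length_px):
    """Classify a straight-line segment from per-pixel gradient directions."""
    if length_px <= MIN_LENGTH_PX:
        return False
    # Circular mean via vector averaging, then mean absolute angular deviation.
    cx = sum(math.cos(a) for a in gradient_angles) / len(gradient_angles)
    cy = sum(math.sin(a) for a in gradient_angles) / len(gradient_angles)
    mean_angle = math.atan2(cy, cx)
    spread = sum(abs(math.atan2(math.sin(a - mean_angle),
                                math.cos(a - mean_angle)))
                 for a in gradient_angles) / len(gradient_angles)
    return spread < MAX_ANGULAR_SPREAD

# A building edge: gradient direction barely changes along the segment.
print(is_artificial_edge([1.50, 1.51, 1.49, 1.50], length_px=40))  # True
# A natural edge (e.g., a branch): direction wanders.
print(is_artificial_edge([0.2, 1.4, -0.9, 2.5], length_px=40))     # False
```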
Saliency-Guided Detection of Unknown Objects in RGB-D Indoor Scenes.
Bao, Jiatong; Jia, Yunyi; Cheng, Yu; Xi, Ning
2015-08-27
This paper studies the problem of detecting unknown objects within indoor environments in an active and natural manner. The visual saliency scheme utilizing both color and depth cues is proposed to arouse the interests of the machine system for detecting unknown objects at salient positions in a 3D scene. The 3D points at the salient positions are selected as seed points for generating object hypotheses using the 3D shape. We perform multi-class labeling on a Markov random field (MRF) over the voxels of the 3D scene, combining cues from object hypotheses and 3D shape. The results from MRF are further refined by merging the labeled objects, which are spatially connected and have high correlation between color histograms. Quantitative and qualitative evaluations on two benchmark RGB-D datasets illustrate the advantages of the proposed method. The experiments of object detection and manipulation performed on a mobile manipulator validate its effectiveness and practicability in robotic applications.
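The final refinement step (merging spatially connected labels whose color histograms are highly correlated) can be sketched as below. The helper names, the 8-bin intensity histogram, and the 0.8 correlation threshold are assumptions for illustration, not details from the paper.

```python
# Sketch of histogram-correlation-based merging of labeled objects.

def normalized_hist(pixels, bins=8):
    """8-bin intensity histogram (0-255 input), normalized to sum to 1."""
    h = [0] * bins
    for p in pixels:
        h[min(p * bins // 256, bins - 1)] += 1
    total = sum(h)
    return [c / total for c in h]

def hist_correlation(h1, h2):
    """Pearson correlation between two histograms."""
    n = len(h1)
    m1, m2 = sum(h1) / n, sum(h2) / n
    num = sum((a - m1) * (b - m2) for a, b in zip(h1, h2))
    d1 = sum((a - m1) ** 2 for a in h1) ** 0.5
    d2 = sum((b - m2) ** 2 for b in h2) ** 0.5
    return num / (d1 * d2) if d1 and d2 else 0.0

def should_merge(pixels_a, pixels_b, connected, threshold=0.8):
    """Merge two labels only if spatially connected and similarly colored."""
    if not connected:
        return False
    return hist_correlation(normalized_hist(pixels_a),
                            normalized_hist(pixels_b)) > threshold

# Two connected regions with similar intensity distributions merge:
print(should_merge([10, 12, 200, 210], [11, 14, 205, 199], connected=True))  # True
```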
Panoramic Epipolar Image Generation for Mobile Mapping System
NASA Astrophysics Data System (ADS)
Chen, T.; Yamamoto, K.; Chhatkuli, S.; Shimamura, H.
2012-07-01
The notable improvements in performance and the low cost of digital cameras and GPS/IMU devices have led MMSs (Mobile Mapping Systems) to gradually become, over the last 20 years, one of the most important tools for mapping highway and railway networks, generating and updating road navigation data and constructing urban 3D models. Moreover, the demand for large-scale street-level visual image database construction by internet giants such as Google and Microsoft has driven the further rapid development of this technology. As one of the most important sensors, omni-directional cameras are commonly utilized on many MMSs to collect panoramic images for 3D close-range photogrammetry and for fusion with 3D laser point clouds, since these cameras record a large amount of visual information about the real environment in a single image, with a field of view of 360° in the longitudinal direction and 180° in the latitudinal direction. This paper addresses the problem of panoramic epipolar image generation for 3D modelling and mapping by stereoscopic viewing. The panoramic images are captured with Point Grey's Ladybug3 mounted on top of a Mitsubishi MMS-X 220 at 2 m intervals along streets in an urban environment. Onboard GPS/IMU, speedometer and post-sequence image analysis technology such as bundle adjustment provide high-accuracy position and attitude data for these panoramic images and laser data; this makes it possible to construct the epipolar geometric relationship between any two adjacent panoramic images, from which the panoramic epipolar images can be generated. Three kinds of projection planes are considered as the epipolar image planes: sphere, cylinder and flat plane. Finally, we select the flat plane and use its effective parts (the middle parts on the two sides of the baseline) for epipolar image generation. The corresponding geometric relations and results are presented in this paper.
On the Usability and Usefulness of 3d (geo)visualizations - a Focus on Virtual Reality Environments
NASA Astrophysics Data System (ADS)
Çöltekin, A.; Lokka, I.; Zahner, M.
2016-06-01
Whether and when we should show data in 3D is an on-going debate in communities conducting visualization research. Strong opposition exists in the information visualization (Infovis) community, where seemingly unnecessary or unwarranted use of 3D, e.g., in plots, bar or pie charts, is heavily criticized. The scientific visualization (Scivis) community, on the other hand, is more supportive of the use of 3D, as it allows `seeing' invisible phenomena, or designing and printing things that are used in, e.g., surgeries and educational settings. Geographic visualization (Geovis) stands between the Infovis and Scivis communities. In geographic information science, most visuo-spatial analyses have been conducted satisfactorily in 2D or 2.5D, including analyses related to terrain and much of the urban phenomena. On the other hand, there has always been a strong interest in 3D, with similar motivations as in the Scivis community. Among many types of 3D visualizations, a popular one that is exploited both for visual analysis and for visualization is the highly realistic (geo)virtual environment. Such environments may be engaging and memorable for viewers because they offer highly immersive experiences. However, it is not yet well established whether we should opt to show data in 3D; and if so, a) what type of 3D we should use, b) for what task types, and c) for whom. In this paper, we identify some of the central arguments for and against the use of 3D visualizations around these three considerations in a concise interdisciplinary literature review.
Spits, Christine; Wallace, Luke; Reinke, Karin
2017-04-20
Visual assessment, following guides such as the Overall Fuel Hazard Assessment Guide (OFHAG), is a common approach to assessing the structure and hazard of varying bushfire fuel layers. Visual assessments can be vulnerable to imprecision due to subjectivity between assessors, while emerging techniques such as image-based point clouds can offer land managers potentially more repeatable descriptions of fuel structure. This study compared the variability of estimates of surface and near-surface fuel attributes generated by eight assessment teams using the OFHAG and Fuels3D, a smartphone method utilising image-based point clouds, within three assessment plots in an Australian lowland forest. Surface fuel hazard scores derived from the underpinning attributes were also assessed. Overall, this study found considerable variability between teams on most visually assessed variables, resulting in inconsistent hazard scores. Variability was also observed within point-cloud estimates but was, on average, two to eight times less than that seen in visual estimates, indicating the greater consistency and repeatability of this method. It is proposed that while variability within the Fuels3D method may be overcome through improved methods and equipment, inconsistencies in the OFHAG are likely due to the inherent subjectivity between assessors, which may be more difficult to overcome. This study demonstrates the capability of the Fuels3D method to efficiently and consistently collect data on fuel hazard and structure; as such, this method shows potential for use in fire management practices where accurate and reliable data are essential.
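The headline comparison (visual estimates several times more variable than point-cloud estimates) can be illustrated with a coefficient-of-variation calculation. The eight values per method below are made-up team estimates, not study data.

```python
# Between-team variability comparison via coefficient of variation (CV).

def coefficient_of_variation(xs):
    """Sample standard deviation divided by the mean."""
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / (n - 1)) ** 0.5
    return sd / mean

# Illustrative estimates of a fuel attribute (e.g., surface fuel depth, cm):
visual_estimates = [2.0, 4.5, 1.5, 5.0, 3.0, 6.0, 2.5, 4.0]  # eight OFHAG teams
cloud_estimates = [3.1, 3.4, 2.9, 3.3, 3.0, 3.2, 3.1, 3.5]   # eight Fuels3D runs

ratio = (coefficient_of_variation(visual_estimates)
         / coefficient_of_variation(cloud_estimates))
print(f"visual CV is {ratio:.1f}x the point-cloud CV")
```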
Gold, Diane R.; Rifas-Shiman, Sheryl L.; Melly, Steven J.; Zanobetti, Antonella; Coull, Brent A.; Schwartz, Joel D.; Gryparis, Alexandros; Kloog, Itai; Koutrakis, Petros; Bellinger, David C.; White, Roberta F.; Sagiv, Sharon K.; Oken, Emily
2015-01-01
Background Influences of prenatal and early-life exposures to air pollution on cognition are not well understood. Objectives We examined associations of gestational and childhood exposure to traffic-related pollution with childhood cognition. Methods We studied 1,109 mother–child pairs in Project Viva, a prospective birth cohort study in eastern Massachusetts (USA). In mid-childhood (mean age, 8.0 years), we measured verbal and nonverbal intelligence, visual motor abilities, and visual memory. For periods in late pregnancy and childhood, we estimated spatially and temporally resolved black carbon (BC) and fine particulate matter (PM2.5) exposures, residential proximity to major roadways, and near-residence traffic density. We used linear regression models to examine associations of exposures with cognitive assessment scores, adjusted for potential confounders. Results Compared with children living ≥ 200 m from a major roadway at birth, those living < 50 m away had lower nonverbal IQ [–7.5 points; 95% confidence interval (CI): –13.1, –1.9], and somewhat lower verbal IQ (–3.8 points; 95% CI: –8.2, 0.6) and visual motor abilities (–5.3 points; 95% CI: –11.0, 0.4). Cross-sectional associations of major roadway proximity and cognition at mid-childhood were weaker. Prenatal and childhood exposure to traffic density and PM2.5 did not appear to be associated with poorer cognitive performance. Third-trimester and childhood BC exposures were associated with lower verbal IQ in minimally adjusted models; but after adjustment for socioeconomic covariates, associations were attenuated or reversed. Conclusions Residential proximity to major roadways during gestation and early life may affect cognitive development. Influences of pollutants and socioeconomic conditions on cognition may be difficult to disentangle. Citation Harris MH, Gold DR, Rifas-Shiman SL, Melly SJ, Zanobetti A, Coull BA, Schwartz JD, Gryparis A, Kloog I, Koutrakis P, Bellinger DC, White RF, Sagiv SK, Oken E. 
2015. Prenatal and childhood traffic-related pollution exposure and childhood cognition in the Project Viva cohort (Massachusetts, USA). Environ Health Perspect 123:1072–1078; http://dx.doi.org/10.1289/ehp.1408803 PMID:25839914
Cehreli, S Burcak; Polat-Ozsoy, Omur; Sar, Cagla; Cubukcu, H Evren; Cehreli, Zafer C
2012-04-01
The amount of the residual adhesive after bracket debonding is frequently assessed in a qualitative manner, utilizing the adhesive remnant index (ARI). This study aimed to investigate whether quantitative assessment of the adhesive remnant yields more precise results compared to qualitative methods utilizing the 4- and 5-point ARI scales. Twenty debonded brackets were selected. Evaluation and scoring of the adhesive remnant on bracket bases were made consecutively using: 1. qualitative assessment (visual scoring) and 2. quantitative measurement (image analysis) on digital photographs. Image analysis was made on scanning electron micrographs (SEM) and high-precision elemental maps of the adhesive remnant as determined by energy dispersed X-ray spectrometry. Evaluations were made in accordance with the original 4-point and the modified 5-point ARI scales. Intra-class correlation coefficients (ICCs) were calculated, and the data were evaluated using Friedman test followed by Wilcoxon signed ranks test with Bonferroni correction. ICC statistics indicated high levels of agreement for qualitative visual scoring among examiners. The 4-point ARI scale was compliant with the SEM assessments but indicated significantly less adhesive remnant compared to the results of quantitative elemental mapping. When the 5-point scale was used, both quantitative techniques yielded similar results with those obtained qualitatively. These results indicate that qualitative visual scoring using the ARI is capable of generating similar results with those assessed by quantitative image analysis techniques. In particular, visual scoring with the 5-point ARI scale can yield similar results with both the SEM analysis and elemental mapping.
Are children with low vision adapted to the visual environment in classrooms of mainstream schools?
Negiloni, Kalpa; Ramani, Krishna Kumar; Jeevitha, R; Kalva, Jayashree; Sudhir, Rachapalle Reddi
2018-01-01
Purpose: The study aimed to evaluate the classroom environment of children with low vision and provide recommendations to reduce visual stress, with focus on mainstream schooling. Methods: The medical records of 110 children (5–17 years) seen in low vision clinic during 1 year period (2015) at a tertiary care center in south India were extracted. The visual function levels of children were compared to the details of their classroom environment. The study evaluated and recommended the chalkboard visual task size and viewing distance required for children with mild, moderate, and severe visual impairment (VI). Results: The major causes of low vision based on the site of abnormality and etiology were retinal (80%) and hereditary (67%) conditions, respectively, in children with mild (n = 18), moderate (n = 72), and severe (n = 20) VI. Many of the children (72%) had difficulty in viewing chalkboard and common strategies used for better visibility included copying from friends (47%) and going closer to chalkboard (42%). To view the chalkboard with reduced visual stress, a child with mild VI can be seated at a maximum distance of 4.3 m from the chalkboard, with the minimum size of visual task (height of lowercase letter writing on chalkboard) recommended to be 3 cm. For 3/60–6/60 range, the maximum viewing distance with the visual task size of 4 cm is recommended to be 85 cm to 1.7 m. Conclusion: Simple modifications of the visual task size and seating arrangements can aid children with low vision with better visibility of chalkboard and reduced visual stress to manage in mainstream schools. PMID:29380777
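The recommended letter sizes and viewing distances correspond to fixed visual angles. A small sketch to check the angle subtended; only the sizes and distances come from the abstract, and the derived angles are computed here, not stated in it.

```python
import math

# Visual angle subtended by a chalkboard letter at a given distance.
def visual_angle_arcmin(letter_height_m, distance_m):
    return math.degrees(2 * math.atan(letter_height_m / (2 * distance_m))) * 60

# 3 cm letter at 4.3 m (mild VI recommendation): ~24 arcmin.
mild_vi = visual_angle_arcmin(0.03, 4.3)
# 4 cm letter at 1.7 m (3/60-6/60 recommendation): ~81 arcmin.
severe_vi = visual_angle_arcmin(0.04, 1.7)
print(round(mild_vi), round(severe_vi))  # 24 81
```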
Chen, Song; Li, Xuena; Chen, Meijie; Yin, Yafu; Li, Na; Li, Yaming
2016-10-01
This study aimed to compare the diagnostic power of quantitative analysis versus visual analysis with single time point imaging (STPI) PET/CT and dual time point imaging (DTPI) PET/CT for the classification of solitary pulmonary nodule (SPN) lesions in granuloma-endemic regions. SPN patients who received early and delayed (18)F-FDG PET/CT at 60 min and 180 min post-injection were retrospectively reviewed. Diagnoses were confirmed by pathological results or follow-up. Three quantitative metrics, early SUVmax, delayed SUVmax and the retention index (RI; the percentage change between the early and delayed SUVmax), were measured for each lesion. Three 5-point scale scores were given in blinded interpretations performed by physicians based on STPI PET/CT images, DTPI PET/CT images and CT images, respectively. ROC analysis was performed on the three quantitative metrics and the three visual interpretation scores. One hundred forty-nine patients were retrospectively included. The areas under the curve (AUC) of the ROC curves for early SUVmax, delayed SUVmax, RI, the STPI PET/CT score, the DTPI PET/CT score and the CT score were 0.73, 0.74, 0.61, 0.77, 0.75 and 0.76, respectively. There were no significant differences between the AUCs in visual interpretation of STPI PET/CT images and DTPI PET/CT images, nor between early SUVmax and delayed SUVmax. The differences in sensitivity, specificity and accuracy between STPI PET/CT and DTPI PET/CT were not significant in either quantitative analysis or visual interpretation. In granuloma-endemic regions, DTPI PET/CT did not offer significant improvement over STPI PET/CT in differentiating malignant SPNs in either quantitative analysis or visual interpretation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
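The two kinds of quantitative measures described above, the retention index from dual-time-point SUVmax and a ROC AUC, can be sketched in a few lines. The AUC here uses the rank-based (Mann-Whitney) formulation, and the SUV values are illustrative only.

```python
# Retention index from dual-time-point SUVmax, and a rank-based AUC.

def retention_index(early_suvmax, delayed_suvmax):
    """Percentage change from the early to the delayed scan."""
    return (delayed_suvmax - early_suvmax) / early_suvmax * 100

def auc(scores_malignant, scores_benign):
    """P(random malignant score > random benign score); ties count 0.5."""
    wins = sum((m > b) + 0.5 * (m == b)
               for m in scores_malignant for b in scores_benign)
    return wins / (len(scores_malignant) * len(scores_benign))

print(retention_index(4.0, 5.0))  # 25.0 (% increase from early to delayed)

# Illustrative RI values for malignant vs. benign lesions:
ri_malignant = [25.0, 30.0, 10.0]
ri_benign = [5.0, -10.0, 12.0]
print(round(auc(ri_malignant, ri_benign), 2))  # 0.89
```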
Experience Report: Visual Programming in the Real World
NASA Technical Reports Server (NTRS)
Baroth, E.; Hartsough, C.
1994-01-01
This paper reports direct experience with two commercial, widely used visual programming environments. While neither of these systems is object oriented, the tools have transformed the development process and indicate a direction for visual object oriented tools to proceed.
Concept of Operations for Commercial and Business Aircraft Synthetic Vision Systems. 1.0
NASA Technical Reports Server (NTRS)
Williams Daniel M.; Waller, Marvin C.; Koelling, John H.; Burdette, Daniel W.; Capron, William R.; Barry, John S.; Gifford, Richard B.; Doyle, Thomas M.
2001-01-01
A concept of operations (CONOPS) for the Commercial and Business (CaB) aircraft synthetic vision systems (SVS) is described. The CaB SVS is expected to provide increased safety and operational benefits in normal and low visibility conditions. Providing operational benefits will promote SVS implementation in the Net, improve aviation safety, and assist in meeting the national aviation safety goal. SVS will enhance safety and enable consistent gate-to-gate aircraft operations in normal and low visibility conditions. The goal for developing SVS is to support operational minima as low as Category 3b in a variety of environments. For departure and ground operations, the SVS goal is to enable operations with a runway visual range of 300 feet. The system is an integrated display concept that provides a virtual visual environment. The SVS virtual visual environment is composed of three components: an enhanced intuitive view of the flight environment, hazard and obstacle detection and display, and precision navigation guidance. The virtual visual environment will support enhanced operations procedures during all phases of flight - ground operations, departure, en route, and arrival. The applications selected for emphasis in this document include low visibility departures and arrivals including parallel runway operations, and low visibility airport surface operations. These particular applications were selected because of significant potential benefits afforded by SVS.
Härer, Andreas; Torres-Dowdall, Julián; Meyer, Axel
2017-10-01
Colonization of novel habitats is typically challenging to organisms. In the initial stage after colonization, approximation to fitness optima in the new environment can occur by selection acting on standing genetic variation, modification of developmental patterns or phenotypic plasticity. Midas cichlids have recently colonized crater Lake Apoyo from great Lake Nicaragua. The photic environment of crater Lake Apoyo is shifted towards shorter wavelengths compared to great Lake Nicaragua and Midas cichlids from both lakes differ in visual sensitivity. We investigated the contribution of ontogeny and phenotypic plasticity in shaping the visual system of Midas cichlids after colonizing this novel photic environment. To this end, we measured cone opsin expression both during development and after experimental exposure to different light treatments. Midas cichlids from both lakes undergo ontogenetic changes in cone opsin expression, but visual sensitivity is consistently shifted towards shorter wavelengths in crater lake fish, which leads to a paedomorphic retention of their visual phenotype. This shift might be mediated by lower levels of thyroid hormone in crater lake Midas cichlids (measured indirectly as dio2 and dio3 gene expression). Exposing fish to different light treatments revealed that cone opsin expression is phenotypically plastic in both species during early development, with short and long wavelength light slowing or accelerating ontogenetic changes, respectively. Notably, this plastic response was maintained into adulthood only in the derived crater lake Midas cichlids. We conclude that the rapid evolution of Midas cichlids' visual system after colonizing crater Lake Apoyo was mediated by a shift in visual sensitivity during ontogeny and was further aided by phenotypic plasticity during development. © 2017 John Wiley & Sons Ltd.
Corridor One: An Integrated Distance Visualization Environment for SSI+ASCI Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christopher R. Johnson, Charles D. Hansen
2001-10-29
The goal of Corridor One: An Integrated Distance Visualization Environment for ASCI and SSI Applications was to combine the forces of six leading-edge laboratories working in the areas of visualization, distributed computing and high performance networking (Argonne National Laboratory, Lawrence Berkeley National Laboratory, Los Alamos National Laboratory, University of Illinois, University of Utah and Princeton University) to develop and deploy the most advanced integrated distance visualization environment for large-scale scientific visualization and demonstrate it on applications relevant to the DOE SSI and ASCI programs. The Corridor One team brought world-class expertise in parallel rendering, deep image-based rendering, immersive environment technology, large-format multi-projector wall-based displays, volume and surface visualization algorithms, collaboration tools and streaming media technology, network protocols for image transmission, high-performance networking, quality-of-service technology and distributed computing middleware. Our strategy was to build on the very successful teams that produced the I-WAY, ''Computational Grids'' and CAVE technology and to add these to the teams that had developed the fastest parallel visualization systems and the most widely used networking infrastructure for multicast and distributed media. Unfortunately, just as we were getting going on the Corridor One project, DOE cut the program after the first year. As such, our final report consists of our progress during year one of the grant.
ERIC Educational Resources Information Center
Cattaneo, Zaira; Mattavelli, Giulia; Papagno, Costanza; Herbert, Andrew; Silvanto, Juha
2011-01-01
The human visual system is able to efficiently extract symmetry information from the visual environment. Prior neuroimaging evidence has revealed symmetry-preferring neuronal representations in the dorsolateral extrastriate visual cortex; the objective of the present study was to investigate the necessity of these representations in symmetry…
ERIC Educational Resources Information Center
Kraemer, David J. M.; Schinazi, Victor R.; Cawkwell, Philip B.; Tekriwal, Anand; Epstein, Russell A.; Thompson-Schill, Sharon L.
2017-01-01
Using novel virtual cities, we investigated the influence of verbal and visual strategies on the encoding of navigation-relevant information in a large-scale virtual environment. In 2 experiments, participants watched videos of routes through 4 virtual cities and were subsequently tested on their memory for observed landmarks and their ability to…
Effects of Visual Cues and Self-Explanation Prompts: Empirical Evidence in a Multimedia Environment
ERIC Educational Resources Information Center
Lin, Lijia; Atkinson, Robert K.; Savenye, Wilhelmina C.; Nelson, Brian C.
2016-01-01
The purpose of this study was to investigate the impacts of visual cues and different types of self-explanation prompts on learning, cognitive load, and intrinsic motivation in an interactive multimedia environment that was designed to deliver a computer-based lesson about the human cardiovascular system. A total of 126 college students were…
Visual Reasoning in Computational Environment: A Case of Graph Sketching
ERIC Educational Resources Information Center
Leung, Allen; Chan, King Wah
2004-01-01
This paper reports the case of a form six (grade 12) Hong Kong student's exploration of graph sketching in a computational environment. In particular, the student summarized his discovery in the form of two empirical laws. The student was interviewed and the interviewed data were used to map out a possible path of his visual reasoning. Critical…
ERIC Educational Resources Information Center
Simpkins, N. K.
2014-01-01
This article reports an investigation into undergraduate student experiences and views of a visual or "blocks" based programming language and its environment. An additional and central aspect of this enquiry is to substantiate the perceived degree of transferability of programming skills learnt within the visual environment to a typical…
Data sonification and sound visualization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaper, H. G.; Tipei, S.; Wiebel, E.
1999-07-01
Sound can help us explore and analyze complex data sets in scientific computing. The authors describe a digital instrument for additive sound synthesis (Diass) and a program to visualize sounds in a virtual reality environment (M4Cave). Both are part of a comprehensive music composition environment that includes additional software for computer-assisted composition and automatic music notation.
[Visually-impaired adolescents' interpersonal relationships at school].
Bezerra, Camilla Pontes; Pagliuca, Lorita Marlena Freitag
2007-09-01
This study describes the school environment and how interpersonal relationships are conducted in view of the needs of visually handicapped adolescents. Data were collected through observations of the physical environment of two schools in Fortaleza, Ceara, Brazil, with the support of a checklist, in order to analyze the existence of obstacles. Four visually handicapped adolescents from 14 to 20 years of age were interviewed. Conclusions were that the obstacles that hamper the free locomotion, communication, and physical and social interaction of the blind--or people with other eye disorders--during their activities at school are numerous.
Seeing the Song: Left Auditory Structures May Track Auditory-Visual Dynamic Alignment
Mossbridge, Julia A.; Grabowecky, Marcia; Suzuki, Satoru
2013-01-01
Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment. PMID:24194873
Psychophysical Studies of Visual Cortical Function
1992-01-12
principle of generic image sampling, a hypothesis which provides a geometric tool to understand visual surface learning. We also have investigated the...perception of depth from unpaired points (DaVinci stereopsis), showing that such points lead to depth and subjective contours. In color filling-in, we have...outlined and systematized a number of important phenomena which we label under the rubric of DaVinci stereopsis. We summarize these results in a number of
Design of an off-axis visual display based on a free-form projection screen to realize stereo vision
NASA Astrophysics Data System (ADS)
Zhao, Yuanming; Cui, Qingfeng; Piao, Mingxu; Zhao, Lidong
2017-10-01
A free-form projection screen is designed for an off-axis visual display, which shows great potential in applications such as flight training by providing both accommodation and convergence cues for pilots. A method based on a point cloud is proposed for the design of the free-form surface, and the generation of the point cloud is controlled by a program written in the macro language. In the visual display based on the free-form projection screen, when the error of the screen along the Z-axis is 1 mm, the error of visual distance at each field is less than 1%. The resolution of the design over the full field is better than 1′, which meets the resolution requirement of the human eye.
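The 1′ (one arcminute) figure quoted above is the conventional resolution limit of the human eye. As a hedged illustration of how such a bound is checked (the pixel pitch and viewing distance below are invented for the example, not taken from the paper), the angle a single display element subtends at the eye can be computed directly:

```python
import math

def angular_resolution_arcmin(pixel_pitch_mm: float, distance_mm: float) -> float:
    """Angle subtended by one display element at the eye, in arcminutes."""
    angle_rad = 2 * math.atan(pixel_pitch_mm / (2 * distance_mm))
    return math.degrees(angle_rad) * 60

# Illustrative numbers: a 0.2 mm element viewed from 1 m subtends ~0.69',
# i.e. below the ~1' acuity limit, so individual elements are unresolvable.
subtense = angular_resolution_arcmin(pixel_pitch_mm=0.2, distance_mm=1000.0)
```

An element subtending less than 1′ at the design eye point cannot be resolved, so a display meeting this bound appears continuous to the viewer.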
Horn, R R; Williams, A M; Scott, M A; Hodges, N J
2005-07-01
The authors examined the observational learning of 24 participants whom they constrained to use the model by removing intrinsic visual knowledge of results (KR). Matched participants assigned to video (VID), point-light (PL), and no-model (CON) groups performed a soccer-chipping task in which vision was occluded at ball contact. Pre- and posttests were interspersed with alternating periods of demonstration and acquisition. The authors assessed delayed retention 2-3 days later. In support of the visual perception perspective, the participants who observed the models showed immediate and enduring changes to more closely imitate the model's relative motion. While observing the demonstration, the PL group participants were more selective in their visual search than were the VID group participants but did not perform more accurately or learn more.
Selective weighting of action-related feature dimensions in visual working memory.
Heuer, Anna; Schubö, Anna
2017-08-01
Planning an action primes feature dimensions that are relevant for that particular action, increasing the impact of these dimensions on perceptual processing. Here, we investigated whether action planning also affects the short-term maintenance of visual information. In a combined memory and movement task, participants were to memorize items defined by size or color while preparing either a grasping or a pointing movement. Whereas size is a relevant feature dimension for grasping, color can be used to localize the goal object and guide a pointing movement. The results showed that memory for items defined by size was better during the preparation of a grasping movement than during the preparation of a pointing movement. Conversely, memory for color tended to be better when a pointing movement rather than a grasping movement was being planned. This pattern was not only observed when the memory task was embedded within the preparation period of the movement, but also when the movement to be performed was only indicated during the retention interval of the memory task. These findings reveal that a weighting of information in visual working memory according to action relevance can even be implemented at the representational level during maintenance, demonstrating that our actions continue to influence visual processing beyond the perceptual stage.
Oehl, M; Sutter, C
2015-05-01
With aging, visual feedback becomes increasingly relevant in action control. Consequently, visual device and task characteristics should increasingly affect tool use. Focusing on late working age, the present study investigates age-related differences in processing task-irrelevant (display size) and task-relevant visual information (task difficulty). Young and middle-aged participants (20-35 and 36-64 years of age, respectively) sat in front of a touch screen with differently sized active touch areas (4″ to 12″) and performed pointing tasks of differing difficulty (1.8-5 bits). Both display size and age affected pointing performance, but the two variables did not interact, and aiming duration moderated both effects. Furthermore, task difficulty affected the pointing durations of middle-aged adults more than those of young adults. Again, aiming duration accounted for the variance in the data. The onset of an age-related decline in aiming duration can be clearly located in middle adulthood. Thus, the fine psychomotor ability "aiming" is a moderator and predictor of age-related differences in pointing tasks. The results support a user-specific design for small technical devices with touch interfaces. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
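The task difficulties of 1.8-5 bits quoted above are expressed in the usual Fitts'-law index of difficulty, which grows with target distance and shrinks with target width. A minimal sketch using the Shannon formulation (the example distances and widths are illustrative assumptions, not values from the study):

```python
import math

def index_of_difficulty(distance_mm: float, width_mm: float) -> float:
    """Fitts' index of difficulty (Shannon formulation), in bits."""
    return math.log2(distance_mm / width_mm + 1)

# Illustrative targets spanning roughly the study's stated range:
easy = index_of_difficulty(distance_mm=50.0, width_mm=20.0)   # ~1.8 bits
hard = index_of_difficulty(distance_mm=150.0, width_mm=5.0)   # ~5.0 bits
```

Movement time is then typically modeled as a linear function of this index, which is why difficulty rather than raw distance is the reported manipulation.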
NASA Astrophysics Data System (ADS)
Zbiciak, M.; Grabowik, C.; Janik, W.
2015-11-01
Nowadays the constructional design process is almost exclusively aided by CAD/CAE/CAM systems. It is estimated that nearly 80% of design activities have a routine nature, and these routine design tasks are highly susceptible to automation. Design automation is usually achieved with API tools which allow building original software responsible for aiding different engineering activities. In this paper, original software developed to automate engineering tasks at the stage of designing a product's geometrical shape is presented. The software works exclusively in the Siemens NX CAD/CAM/CAE environment and was prepared in Microsoft Visual Studio using the .NET technology and the NX SNAP library. Its functionality allows the design and modelling of spur and helical involute gears; moreover, it is possible to estimate relative manufacturing costs. With the Generator module it is possible to design and model both standard and non-standard gear wheels. The main advantage of a model generated in this way is its better representation of the involute curve in comparison to those drawn with the standard tools of specialized CAD systems. This stems from the fact that in CAD systems an involute curve is usually drawn through 3 points, corresponding to points located on the addendum circle, the reference diameter of the gear, and the base circle, respectively. In the Generator module the involute curve is drawn through 11 points located on and between the base and addendum circles, so the 3D gear wheel models are highly accurate. The Generator module also makes the modelling process very rapid: gear wheel modelling time is reduced to several seconds. During the research, an analysis of the differences between the standard 3-point and the 11-point involutes was made; the results and the conclusions drawn from the analysis are presented in detail.
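The 11-point construction described above follows from the standard parametric equations of a circle involute. The sampling scheme below is a sketch of how such points could be placed between the base and addendum circles; the abstract does not give the Generator module's actual NX SNAP implementation, so the function and its uniform parameter spacing are assumptions:

```python
import math

def involute_points(base_radius: float, addendum_radius: float, n: int = 11):
    """Sample n points on a circle involute from the base circle out to
    the addendum circle.

    Parametric form: x = rb*(cos t + t*sin t), y = rb*(sin t - t*cos t);
    the distance from the gear center at parameter t is rb*sqrt(1 + t**2),
    so the involute reaches the addendum circle at
    t_max = sqrt((ra/rb)**2 - 1).
    """
    t_max = math.sqrt((addendum_radius / base_radius) ** 2 - 1)
    points = []
    for i in range(n):
        t = t_max * i / (n - 1)
        x = base_radius * (math.cos(t) + t * math.sin(t))
        y = base_radius * (math.sin(t) - t * math.cos(t))
        points.append((x, y))
    return points
```

Fitting a spline through 11 such points tracks the true curve far more closely than a 3-point construction, which is the accuracy advantage the paper attributes to the Generator module.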
Experience, Context, and the Visual Perception of Human Movement
ERIC Educational Resources Information Center
Jacobs, Alissa; Pinto, Jeannine; Shiffrar, Maggie
2004-01-01
Why are human observers particularly sensitive to human movement? Seven experiments examined the roles of visual experience and motor processes in human movement perception by comparing visual sensitivities to point-light displays of familiar, unusual, and impossible gaits across gait-speed and identity discrimination tasks. In both tasks, visual…
Interpretation of human pointing by African elephants: generalisation and rationality.
Smet, Anna F; Byrne, Richard W
2014-11-01
Factors influencing the abilities of different animals to use cooperative social cues from humans are still unclear, in spite of long-standing interest in the topic. One of the few species that have been found successful at using human pointing is the African elephant (Loxodonta africana); despite few opportunities for learning about pointing, elephants follow a pointing gesture in an object-choice task, even when the pointing signal and experimenter's body position are in conflict, and when the gesture itself is visually subtle. Here, we show that the success of captive African elephants at using human pointing is not restricted to situations where the pointing signal is sustained until the time of choice: elephants followed human pointing even when the pointing gesture was withdrawn before they had responded to it. Furthermore, elephants rapidly generalised their response to a type of social cue they were unlikely to have seen before: pointing with the foot. However, unlike young children, they showed no sign of evaluating the 'rationality' of this novel pointing gesture according to its visual context: that is, whether the experimenter's hands were occupied or not.
The impact of modality and working memory capacity on achievement in a multimedia environment
NASA Astrophysics Data System (ADS)
Stromfors, Charlotte M.
This study explored the impact of modality and working memory capacity on student learning in a dual-modality multimedia environment titled Visualizing Topography. This computer-based instructional program focused on the basic skills of reading and interpreting topographic maps. Two versions of the program presented the same instructional content but varied the modality of verbal information: the audio-visual condition coordinated topographic maps with narration; the visual-visual condition provided the same topographic maps with readable text. An analysis of covariance procedure was conducted to evaluate the effects of the two conditions in relation to working memory capacity, controlling for individual differences in spatial visualization and prior knowledge. Scores on the Figural Intersection Test were used to separate subjects into three levels of measured working memory capacity: low, medium, and high. Subjects accessed Visualizing Topography by way of the Internet and proceeded independently through the program. The program architecture was linear in format; subjects had a minimal amount of flexibility within each of five segments, but none between segments. One hundred and fifty-one subjects were randomly assigned to either the audio-visual or the visual-visual condition. The average time spent in the program was thirty-one minutes. The results of the ANCOVA revealed a small to moderate modality effect favoring the audio-visual condition. The results also showed that subjects with low and medium working memory capacity benefited more from the audio-visual condition than from the visual-visual condition, while subjects with a high working memory capacity did not benefit from either condition. Although splitting the data reduced group sizes, ANCOVA results by gender suggested that the audio-visual condition favored females with low working memory capacities.
The results have implications for designers of educational software, the teachers who select software, and the students themselves. Splitting information into two non-redundant sources, one audio and one visual, may effectively extend working memory capacity. This is especially significant for students encountering difficult science concepts that require the formation and manipulation of mental representations. It is recommended that multimedia environments be designed or selected with attention to modality conditions that facilitate student learning.
NASA Astrophysics Data System (ADS)
Oliver, Joseph Steve; Hodges, Georgia W.; Moore, James N.; Cohen, Allan; Jang, Yoonsun; Brown, Scott A.; Kwon, Kyung A.; Jeong, Sophia; Raven, Sara P.; Jurkiewicz, Melissa; Robertson, Tom P.
2017-11-01
Research into the efficacy of modules featuring dynamic visualizations, case studies, and interactive learning environments is reported here. This quasi-experimental 2-year study examined the implementation of three interactive computer-based instructional modules within a curricular unit covering cellular biology concepts in an introductory high school biology course. The modules featured dynamic visualizations and focused on three processes that underlie much of cellular biology: diffusion, osmosis, and filtration. Pre-tests and post-tests were used to assess knowledge growth across the unit. A mixture Rasch model analysis of the post-test data revealed two groups of students. In both years of the study, a large proportion of the students were classified as low-achieving based on their pre-test scores. In year 2, the same teachers taught the same concepts as in year 1 but incorporated the interactive computer-based modules into the cell biology unit of the curriculum. The use of the modules in year 2 was associated with a much larger proportion of the students transitioning to the high-achieving group than in year 1: 67% of students initially classified as low-achieving were classified as high-achieving at the end of the unit. Examination of responses to assessments embedded within the modules, as well as post-test items, linked transition to the high-achieving group with correct responses to items that referenced both the visualization and its contextualization within the module. This study points to the importance of dynamic visualization within contextualized case studies as a means to support student knowledge acquisition in biology.
Accurately Decoding Visual Information from fMRI Data Obtained in a Realistic Virtual Environment
2015-06-09
Center for Learning and Memory, The University of Texas at Austin, 100 E 24th Street, Stop C7000, Austin, TX 78712, USA. afloren@utexas.edu Received: 18...information from fMRI data obtained in a realistic virtual environment. Front. Hum. Neurosci. 9:327. doi: 10.3389/fnhum.2015.00327. Accurately decoding...visual information from fMRI data obtained in a realistic virtual environment. Andrew Floren 1*, Bruce Naylor 2, Risto Miikkulainen 3 and David Ress 4
NASA Astrophysics Data System (ADS)
Harris, E.
Planning, Implementation and Optimization of Future Space Missions using an Immersive Visualization Environment (IVE) Machine. E. N. Harris, Lockheed Martin Space Systems, Denver, CO, and George W. Morgenthaler, U. of Colorado at Boulder. History: Over the past 6 years, a team of 3-D engineering visualization experts at the Lockheed Martin Space Systems Company has developed innovative virtual prototyping simulation solutions for ground processing and for real-time visualization of the design and planning of aerospace missions. At the University of Colorado, a team of 3-D visualization experts is developing the science of 3-D and immersive visualization at the newly founded BP Center for Visualization, which began operations in October 2001. (See IAF/IAA-01-13.2.09, "The Use of 3-D Immersive Visualization Environments (IVEs) to Plan Space Missions," G. A. Dorn and G. W. Morgenthaler.) Progressing from Today's 3-D Engineering Simulations to Tomorrow's 3-D IVE Mission Planning, Simulation and Optimization Techniques: 3-D IVEs and visualization simulation tools can be combined for efficient planning and design engineering of future aerospace exploration and commercial missions. This technology is currently being developed and will be demonstrated by Lockheed Martin in the IVE at the BP Center, using virtual simulation for clearance checks, collision detection, ergonomics, and reachability analyses to develop fabrication and processing flows for spacecraft and launch-vehicle ground support operations, and to optimize mission architecture and vehicle design subject to realistic constraints. Demonstrations: Immediate aerospace applications to be demonstrated include developing streamlined processing flows for Reusable Space Transportation Systems and Atlas Launch Vehicle operations, and Mars Polar Lander visual work instructions.
Long-range goals include future international human and robotic space exploration missions such as the development of a Mars Reconnaissance Orbiter and Lunar Base construction scenarios. Innovative solutions utilizing Immersive Visualization provide the key to streamlining the mission planning and optimizing engineering design phases of future aerospace missions.
NASA Technical Reports Server (NTRS)
Hindson, W. S.; Hardy, G. H.; Innis, R. C.
1981-01-01
Flight tests were carried out to assess the feasibility of piloted steep, curved, and decelerating approach profiles in powered-lift STOL aircraft. Several STOL control concepts representative of a variety of aircraft were evaluated in conjunction with suitably designed flight directors. The tests were carried out in a real navigation environment, employed special electronic cockpit displays, and included documentation of the performance achieved and the control utilization involved in flying 180-deg turning, descending, and decelerating approach profiles to landing. The results suggest that such moderately complex piloted instrument approaches may indeed be feasible from a pilot-acceptance point of view, given an acceptable navigation environment. Systems with the capability of those used in this experiment offer the potential of achieving instrument operations on curved, descending, and decelerating landing approaches to weather minima corresponding to CTOL Category 2 criteria, while also providing a means of realizing more efficient operations during visual flight conditions.
PCSIM: A Parallel Simulation Environment for Neural Circuits Fully Integrated with Python
Pecevski, Dejan; Natschläger, Thomas; Schuch, Klaus
2008-01-01
The Parallel Circuit SIMulator (PCSIM) is a software package for simulation of neural circuits. It is primarily designed for distributed simulation of large scale networks of spiking point neurons. Although its computational core is written in C++, PCSIM's primary interface is implemented in the Python programming language, which is a powerful programming environment and allows the user to easily integrate the neural circuit simulator with data analysis and visualization tools to manage the full neural modeling life cycle. The main focus of this paper is to describe PCSIM's full integration into Python and the benefits thereof. In particular we will investigate how the automatically generated bidirectional interface and PCSIM's object-oriented modular framework enable the user to adopt a hybrid modeling approach: using and extending PCSIM's functionality either employing pure Python or C++ and thus combining the advantages of both worlds. Furthermore, we describe several supplementary PCSIM packages written in pure Python and tailored towards setting up and analyzing neural simulations. PMID:19543450
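The "spiking point neurons" that PCSIM simulates reduce each cell to a single membrane-potential equation with threshold-and-reset spiking. As a toy, pure-Python illustration of that model class (this is not PCSIM's actual API; the function name and all parameter values here are invented for the sketch):

```python
def lif_spike_times(input_current, dt=1e-4, tau=0.02, r=1e7,
                    v_rest=-0.07, v_thresh=-0.05, v_reset=-0.07):
    """Forward-Euler integration of a leaky integrate-and-fire point neuron.

    Membrane dynamics: dV/dt = (v_rest - V + R*I) / tau.
    A spike is recorded and V is reset whenever V crosses v_thresh.
    Returns the list of spike times in seconds.
    """
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        v += dt * (v_rest - v + r * i_in) / tau
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
    return spikes
```

Driving such a neuron with a constant suprathreshold current yields a regular spike train; in PCSIM the analogous point-neuron dynamics run in the C++ core while Python handles network setup and analysis, which is the hybrid approach the abstract describes.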
Confidence Leak in Perceptual Decision Making.
Rahnev, Dobromir; Koizumi, Ai; McCurdy, Li Yan; D'Esposito, Mark; Lau, Hakwan
2015-11-01
People live in a continuous environment in which the visual scene changes on a slow timescale. It has been shown that, to exploit such environmental stability, the brain creates a continuity field in which objects seen seconds ago influence the perception of current objects. What is unknown is whether a similar mechanism exists at the level of metacognitive representations. In three experiments, we demonstrated a robust intertask confidence leak: confidence in one's response on a given task or trial influences confidence on the following task or trial. This confidence leak could not be explained by response priming or attentional fluctuations. A better ability to modulate confidence leak predicted higher metacognitive capacity as well as greater gray matter volume in the prefrontal cortex. A model based on normative principles of Bayesian inference explained the results by postulating that observers subjectively estimate the perceptual signal strength in a stable environment. These results point to the existence of a novel metacognitive mechanism mediated by regions in the prefrontal cortex. © The Author(s) 2015.