Sample records for including visual odometry

  1. Localization Using Visual Odometry and a Single Downward-Pointing Camera

    NASA Technical Reports Server (NTRS)

    Swank, Aaron J.

    2012-01-01

    Stereo imaging is a technique commonly employed for vision-based navigation. For such applications, two images are acquired from different vantage points and then compared using transformations to extract depth information. The technique is commonly used in robotics for obstacle avoidance or for Simultaneous Localization and Mapping (SLAM). Yet the process requires a number of image processing steps and therefore tends to be CPU-intensive, which limits the real-time data rate and its use in power-limited applications. Evaluated here is a technique where a monocular camera is used for vision-based odometry. In this work, an optical flow technique with feature recognition is performed to generate odometry measurements. The visual odometry sensor measurements are intended to be used as control inputs or measurements in a sensor fusion algorithm using low-cost MEMS-based inertial sensors to provide improved localization information. Presented here are visual odometry results which demonstrate the challenges associated with using ground-pointing cameras for visual odometry. The focus is on rover-based robotic applications for localization within GPS-denied environments.
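
The geometry behind a downward-pointing monocular odometer reduces, under the simplest assumptions (pinhole camera at a known fixed height over flat ground, pure translation), to scaling image flow by height over focal length. A minimal sketch of that dead-reckoning step, not the paper's algorithm; all names and numbers are illustrative:

```python
import numpy as np

def ground_displacement(flow_px, height_m, focal_px):
    # For a downward-pointing pinhole camera at height h over flat ground,
    # a feature shift of d pixels corresponds to h * d / f metres of travel.
    return np.asarray(flow_px, dtype=float) * height_m / focal_px

def integrate_track(flows_px, height_m, focal_px):
    # Dead-reckon a 2D position by summing per-frame ground displacements.
    pos = np.zeros(2)
    for f in flows_px:
        pos = pos + ground_displacement(f, height_m, focal_px)
    return pos
```

Drift accumulates with every summed increment, which is why the abstract pairs these measurements with inertial sensors in a fusion filter.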

  2. Applicability of Deep-Learning Technology for Relative Object-Based Navigation

    DTIC Science & Technology

    2017-09-01

    …possible selections for navigating an unmanned ground vehicle (UGV) is through real-time visual odometry. To navigate in such an environment, the UGV needs to be able to detect, identify, and relate the static…

  3. Adaptive Monocular Visual-Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices.

    PubMed

    Piao, Jin-Chun; Kim, Shin-Dug

    2017-11-07

    Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual-inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual-inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.
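
The adaptive execution module described above dynamically switches between full visual-inertial odometry and the cheaper optical-flow method. A toy sketch of such a selection policy; the inputs and thresholds are hypothetical, not taken from the paper:

```python
def select_odometry(gyro_norm_rad_s, accel_var, slow_thresh=0.3, var_thresh=0.5):
    # Illustrative policy: use the cheap optical-flow VO when motion is gentle,
    # fall back to full visual-inertial odometry when dynamics are high.
    if gyro_norm_rad_s < slow_thresh and accel_var < var_thresh:
        return "fast_optical_flow_vo"
    return "visual_inertial_odometry"
```

The point of such a module is that tracking time drops whenever the cheap path is judged safe, matching the reported 7.8-18.8% reductions under different policies.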

  4. Filtering Based Adaptive Visual Odometry Sensor Framework Robust to Blurred Images

    PubMed Central

    Zhao, Haiying; Liu, Yong; Xie, Xiaojia; Liao, Yiyi; Liu, Xixi

    2016-01-01

    Visual odometry (VO) estimation from blurred images is a challenging problem in practical robot applications, since blurred images severely reduce the estimation accuracy of the VO. In this paper, we address the problem of visual odometry estimation from blurred images and present an adaptive visual odometry estimation framework that is robust to them. Our approach employs an objective measure of images, named small image gradient distribution (SIGD), to evaluate the blurring degree of an image; an adaptive blurred-image classification algorithm is then proposed to recognize blurred images; finally, we propose an anti-blurred key-frame selection algorithm to make the VO robust to blurred images. We also carried out comparative experiments to evaluate the performance of VO algorithms with our anti-blur framework under various blurred images, and the experimental results show that our approach achieves superior performance compared with state-of-the-art methods on blurred images while not adding much computational cost to the original VO algorithms. PMID:27399704
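
The SIGD measure rates blur by how much of the image is dominated by small gradients. A rough numpy sketch of that idea (the exact statistic and thresholds in the paper differ; `eps` and `frac_thresh` here are illustrative):

```python
import numpy as np

def small_gradient_fraction(img, eps=1e-2):
    # SIGD-style statistic: fraction of pixels whose gradient magnitude is tiny.
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    return float(np.mean(np.hypot(gx, gy) < eps))

def is_blurred(img, frac_thresh=0.9, eps=1e-2):
    # Blurred frames are dominated by small gradients; sharp frames are not.
    return small_gradient_fraction(img, eps) > frac_thresh
```

A key-frame selector can then simply skip frames flagged by `is_blurred`, which is the spirit of the anti-blurred key-frame selection step.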

  5. RGB-D SLAM Combining Visual Odometry and Extended Information Filter

    PubMed Central

    Zhang, Heng; Liu, Yanli; Tan, Jindong; Xiong, Naixue

    2015-01-01

    In this paper, we present a novel RGB-D SLAM system based on visual odometry and an extended information filter, which does not require any other sensors or odometry. In contrast to the graph optimization approaches, this is more suitable for online applications. A visual dead reckoning algorithm based on visual residuals is devised, which is used to estimate motion control input. In addition, we use a novel descriptor called binary robust appearance and normals descriptor (BRAND) to extract features from the RGB-D frame and use them as landmarks. Furthermore, considering both the 3D positions and the BRAND descriptors of the landmarks, our observation model avoids explicit data association between the observations and the map by marginalizing the observation likelihood over all possible associations. Experimental validation is provided, which compares the proposed RGB-D SLAM algorithm with just RGB-D visual odometry and a graph-based RGB-D SLAM algorithm using the publicly-available RGB-D dataset. The results of the experiments demonstrate that our system is quicker than the graph-based RGB-D SLAM algorithm. PMID:26263990
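
The marginalized observation model above replaces a hard data-association decision with a sum of likelihoods over all candidate landmarks. A minimal sketch assuming a uniform association prior and isotropic Gaussian noise, using 3D positions only (the BRAND descriptor term is omitted):

```python
import numpy as np

def marginalized_likelihood(z, landmarks, sigma):
    # Observation likelihood marginalized over all possible landmark
    # associations (uniform prior), avoiding an explicit correspondence choice.
    z = np.asarray(z, dtype=float)
    L = np.asarray(landmarks, dtype=float)
    d2 = np.sum((L - z) ** 2, axis=1)          # squared distance to each landmark
    k = z.size
    norm = (2 * np.pi * sigma ** 2) ** (-k / 2)
    per_assoc = norm * np.exp(-0.5 * d2 / sigma ** 2)
    return float(per_assoc.mean())             # average over associations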

  6. A Multi-Sensor Fusion MAV State Estimation from Long-Range Stereo, IMU, GPS and Barometric Sensors.

    PubMed

    Song, Yu; Nuske, Stephen; Scherer, Sebastian

    2016-12-22

    State estimation is the most critical capability for MAV (Micro-Aerial Vehicle) localization, autonomous obstacle avoidance, robust flight control and 3D environmental mapping. There are three main challenges for MAV state estimation: (1) it must deal with aggressive 6 DOF (Degree Of Freedom) motion; (2) it should be robust to intermittent GPS (Global Positioning System) (even GPS-denied) situations; (3) it should work well for both low- and high-altitude flight. In this paper, we present a state estimation technique that fuses long-range stereo visual odometry, GPS, barometric and IMU (Inertial Measurement Unit) measurements. The estimation system has two main parts: a stochastic cloning EKF (Extended Kalman Filter) estimator that loosely fuses both absolute state measurements (GPS, barometer) and relative state measurements (IMU, visual odometry), which is derived and discussed in detail; and a long-range stereo visual odometry proposed for high-altitude MAV odometry calculation using both multi-view stereo triangulation and a multi-view stereo inverse depth filter. The odometry takes the EKF information (IMU integral) for robust camera pose tracking and image feature matching, and the stereo odometry output serves as the relative measurement for the update of the state estimation. Experimental results on a benchmark dataset and our real flight dataset show the effectiveness of the proposed state estimation system, especially for aggressive motion, intermittent GPS and high-altitude MAV flight.
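
The loose-coupling idea, relative measurements propagating the state and absolute measurements correcting it, can be sketched with a scalar Kalman filter (the paper's stochastic-cloning EKF is far richer; this only shows the fusion pattern):

```python
def predict(x, P, delta, Q):
    # Propagate the state with a relative measurement (e.g. a VO/IMU
    # displacement increment); uncertainty grows by the process noise Q.
    return x + delta, P + Q

def update(x, P, z, R):
    # Correct the state with an absolute measurement (e.g. GPS position
    # or barometric altitude) with measurement noise R.
    K = P / (P + R)                   # Kalman gain, scalar state
    return x + K * (z - x), (1 - K) * P
```

Running `predict` on every odometry increment and `update` whenever GPS is available gives graceful behaviour under intermittent GPS: the filter simply keeps predicting until the next fix.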

  7. The precision of locomotor odometry in humans.

    PubMed

    Durgin, Frank H; Akagi, Mikio; Gallistel, Charles R; Haiken, Woody

    2009-03-01

    Two experiments measured the human ability to reproduce locomotor distances of 4.6-100 m without visual feedback and compared distance production with time production. Subjects were not permitted to count steps. It was found that the precision of human odometry follows Weber's law that variability is proportional to distance. The coefficients of variation for distance production were much lower than those measured for time production for similar durations. Gait parameters recorded during the task (average step length and step frequency) were found to be even less variable suggesting that step integration could be the basis for non-visual human odometry.
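
Weber's law here means the coefficient of variation (std/mean) of produced distances stays roughly constant across target distances. A small simulation of that production model; the 8% CV is an assumed value for illustration, not the paper's measurement:

```python
import numpy as np

rng = np.random.default_rng(0)

def produce_distance(target_m, cv=0.08, n=2000):
    # Weber-law production model: response scatter grows in proportion to
    # the target distance, so the coefficient of variation stays constant.
    return target_m * rng.normal(1.0, cv, size=n)

def coefficient_of_variation(samples):
    s = np.asarray(samples, dtype=float)
    return float(s.std() / s.mean())
```

Under this model the CV at 4.6 m and at 100 m should agree, which is the signature the experiments tested for.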

  8. Visual Odometry for Autonomous Deep-Space Navigation

    NASA Technical Reports Server (NTRS)

    Robinson, Shane; Pedrotty, Sam

    2016-01-01

    Visual odometry fills two critical needs shared by all future exploration architectures considered by NASA: Autonomous Rendezvous and Docking (AR&D), and autonomous navigation during loss of comm. To do this, a camera is combined with cutting-edge algorithms (called visual odometry) into a unit that provides an accurate relative pose between the camera and the object in the imagery. Recent simulation analyses have demonstrated the ability of this new technology to reliably, accurately, and quickly compute a relative pose. This project advances the technology by both preparing the system to process flight imagery and creating an activity to capture said imagery. This technology can provide a pioneering optical navigation platform capable of supporting a wide variety of future mission scenarios: deep-space rendezvous, asteroid exploration, and loss-of-comm navigation.

  9. A Multi-Sensor Fusion MAV State Estimation from Long-Range Stereo, IMU, GPS and Barometric Sensors

    PubMed Central

    Song, Yu; Nuske, Stephen; Scherer, Sebastian

    2016-01-01

    State estimation is the most critical capability for MAV (Micro-Aerial Vehicle) localization, autonomous obstacle avoidance, robust flight control and 3D environmental mapping. There are three main challenges for MAV state estimation: (1) it must deal with aggressive 6 DOF (Degree Of Freedom) motion; (2) it should be robust to intermittent GPS (Global Positioning System) (even GPS-denied) situations; (3) it should work well for both low- and high-altitude flight. In this paper, we present a state estimation technique that fuses long-range stereo visual odometry, GPS, barometric and IMU (Inertial Measurement Unit) measurements. The estimation system has two main parts: a stochastic cloning EKF (Extended Kalman Filter) estimator that loosely fuses both absolute state measurements (GPS, barometer) and relative state measurements (IMU, visual odometry), which is derived and discussed in detail; and a long-range stereo visual odometry proposed for high-altitude MAV odometry calculation using both multi-view stereo triangulation and a multi-view stereo inverse depth filter. The odometry takes the EKF information (IMU integral) for robust camera pose tracking and image feature matching, and the stereo odometry output serves as the relative measurement for the update of the state estimation. Experimental results on a benchmark dataset and our real flight dataset show the effectiveness of the proposed state estimation system, especially for aggressive motion, intermittent GPS and high-altitude MAV flight. PMID:28025524

  10. Single-camera visual odometry to track a surgical X-ray C-arm base.

    PubMed

    Esfandiari, Hooman; Lichti, Derek; Anglin, Carolyn

    2017-12-01

    This study provides a framework for a single-camera odometry system for localizing a surgical C-arm base. An application-specific monocular visual odometry system (a downward-looking consumer-grade camera rigidly attached to the C-arm base) is proposed in this research. The cumulative dead-reckoning estimation of the base is extracted based on frame-to-frame homography estimation. Optical-flow results are utilized to feed the odometry. Online positional and orientation parameters are then reported. Positional accuracy of better than 2% (of the total traveled distance) for most of the cases and 4% for all the cases studied and angular accuracy of better than 2% (of absolute cumulative changes in orientation) were achieved with this method. This study provides a robust and accurate tracking framework that not only can be integrated with the current C-arm joint-tracking system (i.e. TC-arm) but also is capable of being employed for similar applications in other fields (e.g. robotics).
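
Frame-to-frame homographies of the floor plane yield planar motion increments that are chained into a cumulative base pose. A sketch of that composition step using SE(2) matrices; the homography decomposition itself is omitted, and the increments here are illustrative:

```python
import numpy as np

def se2(theta, tx, ty):
    # Homogeneous 2D rigid transform: rotate by theta, translate by (tx, ty).
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]])

def dead_reckon(increments):
    # Chain frame-to-frame planar motions (as recovered from homographies
    # of the floor plane) into a cumulative base pose.
    T = np.eye(3)
    for theta, tx, ty in increments:
        T = T @ se2(theta, tx, ty)
    x, y = T[0, 2], T[1, 2]
    heading = np.arctan2(T[1, 0], T[0, 0])
    return x, y, heading
```

Because each increment's error is compounded by the composition, reported accuracy is naturally quoted as a percentage of total travelled distance, as in the abstract.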

  11. Robot Vision Library

    NASA Technical Reports Server (NTRS)

    Howard, Andrew B.; Ansar, Adnan I.; Litwin, Todd E.; Goldberg, Steven B.

    2009-01-01

    The JPL Robot Vision Library (JPLV) provides real-time robot vision algorithms for developers who are not vision specialists. The package includes algorithms for stereo ranging, visual odometry and unsurveyed camera calibration, and has unique support for very wide-angle lenses.

  12. A Coordinated Control Architecture for Disaster Response Robots

    DTIC Science & Technology

    2016-01-01

    …to use these same algorithms to provide navigation odometry for the vehicle motions when the robot is driving. Visual Odometry: The YouTube link… depressed the accelerator pedal. We relied on the fact that the vehicle quickly comes to rest when the accelerator pedal is not being pressed. …

  13. Incorporating a Wheeled Vehicle Model in a New Monocular Visual Odometry Algorithm for Dynamic Outdoor Environments

    PubMed Central

    Jiang, Yanhua; Xiong, Guangming; Chen, Huiyan; Lee, Dah-Jye

    2014-01-01

    This paper presents a monocular visual odometry algorithm that incorporates a wheeled vehicle model for ground vehicles. The main innovation of this algorithm is to use the single-track bicycle model to interpret the relationship between the yaw rate and side slip angle, which are the two most important parameters that describe the motion of a wheeled vehicle. Additionally, the pitch angle is also considered, since the planar-motion hypothesis often fails due to the dynamic characteristics of wheel suspensions and tires in real-world environments. Linearization is used to calculate a closed-form solution of the motion parameters that works as a hypothesis generator in a RAndom SAmple Consensus (RANSAC) scheme, reducing the complexity of solving equations involving trigonometric functions. All inliers found are used to refine the winning solution by minimizing the reprojection error. Finally, the algorithm is applied to real-time on-board visual localization applications. Its performance is evaluated by comparing against state-of-the-art monocular visual odometry methods using both synthetic data and publicly available datasets over several kilometers in dynamic outdoor environments. PMID:25256109
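
The hypothesize-and-verify structure used above, a closed-form minimal-sample solution scoring inliers followed by a refit on all inliers, is the standard RANSAC pattern. A generic sketch, illustrated on simple line fitting rather than the paper's vehicle model:

```python
import numpy as np

def ransac(points, fit_minimal, residuals, n_min, thresh, iters=200, seed=0):
    # Generic hypothesize-and-verify loop: draw a minimal sample, compute a
    # closed-form hypothesis, count inliers, then refit on the best inlier set.
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(points), n_min, replace=False)
        model = fit_minimal(points[idx])
        inl = residuals(model, points) < thresh
        if inl.sum() > best_inliers.sum():
            best_inliers = inl
    return fit_minimal(points[best_inliers]), best_inliers

# Example plug-ins: least-squares line y = a*x + b.
def fit_line(pts):
    A = np.c_[pts[:, 0], np.ones(len(pts))]
    return np.linalg.lstsq(A, pts[:, 1], rcond=None)[0]

def line_residuals(ab, pts):
    a, b = ab
    return np.abs(pts[:, 1] - (a * pts[:, 0] + b))
```

In the paper the `fit_minimal` role is played by the linearized bicycle-model solution, which is what makes the hypothesis generation cheap.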

  14. Fast instantaneous center of rotation estimation algorithm for a skid-steered robot

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.

    2015-05-01

    Skid-steered robots are widely used as mobile platforms for machine vision systems. However, it is hard to achieve stable motion of such robots along a desired trajectory due to unpredictable wheel slip. It is possible to compensate for the unpredictable wheel slip and stabilize the motion of the robot using visual odometry. This paper presents a fast optical-flow-based algorithm for estimation of the instantaneous center of rotation, angular and longitudinal speed of the robot. The proposed algorithm is based on the Horn-Schunck variational optical flow estimation method. The instantaneous center of rotation and motion of the robot are estimated by back-projection of the optical flow field onto the ground surface. The developed algorithm was tested using a skid-steered mobile robot. The robot is based on a mobile platform that includes two pairs of differentially driven motors and a motor controller. A monocular visual odometry system consisting of a single-board computer and a low-cost webcam is mounted on the mobile platform. A state-space model of the robot was derived using standard black-box system identification. The input (commands) and the output (motion) were recorded using a dedicated external motion capture system. The obtained model was used to control the robot without visual odometry data. The paper concludes with an estimation of the algorithm's quality by comparing the trajectories estimated by the algorithm with the data from the motion capture system.
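
For rigid planar motion, the flow at ground point (x, y) is v = (vx − ωy, vy + ωx), and the instantaneous center of rotation is the point where this field vanishes. A least-squares sketch of that step, assuming the optical flow has already been back-projected onto the ground plane:

```python
import numpy as np

def fit_planar_motion(pts, vels):
    # Least-squares rigid planar motion (vx, vy, omega) from ground-plane
    # flow samples: v_i = (vx - omega*y_i, vy + omega*x_i).
    rows, rhs = [], []
    for (xi, yi), (vxi, vyi) in zip(pts, vels):
        rows.append([1.0, 0.0, -yi]); rhs.append(vxi)
        rows.append([0.0, 1.0, xi]);  rhs.append(vyi)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol  # (vx, vy, omega)

def instantaneous_center(vx, vy, omega):
    # The ICR is the point where the rigid velocity field vanishes.
    return np.array([-vy / omega, vx / omega])
```

The recovered (vx, omega) pair directly gives the longitudinal and angular speed the abstract mentions; the ICR follows from the same three parameters.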

  15. Enhanced Monocular Visual Odometry Integrated with Laser Distance Meter for Astronaut Navigation

    PubMed Central

    Wu, Kai; Di, Kaichang; Sun, Xun; Wan, Wenhui; Liu, Zhaoqin

    2014-01-01

    Visual odometry provides astronauts with accurate knowledge of their position and orientation. Wearable astronaut navigation systems should be simple and compact. Therefore, monocular vision methods are preferred over stereo vision systems, commonly used in mobile robots. However, the projective nature of monocular visual odometry causes a scale ambiguity problem. In this paper, we focus on the integration of a monocular camera with a laser distance meter to solve this problem. The most remarkable advantage of the system is its ability to recover a global trajectory for monocular image sequences by incorporating direct distance measurements. First, we propose a robust and easy-to-use extrinsic calibration method between camera and laser distance meter. Second, we present a navigation scheme that fuses distance measurements with monocular sequences to correct the scale drift. In particular, we explain in detail how to match the projection of the invisible laser pointer on other frames. Our proposed integration architecture is examined using a live dataset collected in a simulated lunar surface environment. The experimental results demonstrate the feasibility and effectiveness of the proposed method. PMID:24618780
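
One absolute range measurement resolves the monocular scale ambiguity: the ratio of the laser range to the corresponding up-to-scale depth rescales the whole trajectory. A minimal sketch of that correction; drift handling and the laser-spot matching described in the paper are omitted:

```python
import numpy as np

def scale_factor(laser_range_m, vo_depth_arbitrary):
    # Metric scale for a monocular reconstruction, from one absolute range
    # measurement of a point whose up-to-scale depth is known.
    return laser_range_m / vo_depth_arbitrary

def rescale_trajectory(positions, s):
    # Apply the recovered metric scale to the whole up-to-scale trajectory.
    return np.asarray(positions, dtype=float) * s
```

Repeating this correction along the sequence, as the navigation scheme does, also counters the gradual scale drift that pure monocular VO accumulates.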

  16. Intelligent Behavioral Action Aiding for Improved Autonomous Image Navigation

    DTIC Science & Technology

    2012-09-13

    …odometry, SICK laser scanning unit (Lidar), Inertial Measurement Unit (IMU) and ultrasonic distance measurement system (Figure 32). The Lidar, IMU… Milford, David McKinnon, Michael Warren, Gordon Wyeth, and Ben Upcroft, "Feature-based Visual Odometry and Featureless Place Recognition for SLAM in…

  17. Advanced Wireless Integrated Navy Network - AWINN

    DTIC Science & Technology

    2005-09-30

    …progress report No. 3 on AWINN hardware and software configurations of smart, wideband, multi-function antennas, secure configurable platform, close-in… results to the host PC via a UART soft core. The UART core used is a proprietary Xilinx core which incorporates features described in National… current software uses wheel odometry and visual landmarks to create a map and estimate position on an internal x, y grid. The wheel odometry provides a…

  18. Adaptive Monocular Visual–Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices

    PubMed Central

    Piao, Jin-Chun; Kim, Shin-Dug

    2017-01-01

    Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual–inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual–inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual–inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual–inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method. PMID:29112143

  19. Visual Odometry for Autonomous Deep-Space Navigation Project

    NASA Technical Reports Server (NTRS)

    Robinson, Shane; Pedrotty, Sam

    2016-01-01

    Autonomous rendezvous and docking (AR&D) is a critical need for manned spaceflight, especially in deep space where communication delays essentially leave crews on their own for critical operations like docking. Previously developed AR&D sensors have been large, heavy, power-hungry, and may still require further development (e.g. Flash LiDAR). Other approaches to vision-based navigation are not computationally efficient enough to operate quickly on slower, flight-like computers. The key technical challenge for visual odometry is to adapt it from the current terrestrial applications it was designed for to function in the harsh lighting conditions of space. This effort leveraged Draper Laboratory’s considerable prior development and expertise, benefitting both parties. The algorithm Draper has created is unique from other pose estimation efforts as it has a comparatively small computational footprint (suitable for use onboard a spacecraft, unlike alternatives) and potentially offers accuracy and precision needed for docking. This presents a solution to the AR&D problem that only requires a camera, which is much smaller, lighter, and requires far less power than competing AR&D sensors. We have demonstrated the algorithm’s performance and ability to process ‘flight-like’ imagery formats with a ‘flight-like’ trajectory, positioning ourselves to easily process flight data from the upcoming ‘ISS Selfie’ activity and then compare the algorithm’s quantified performance to the simulated imagery. This will bring visual odometry beyond TRL 5, proving its readiness to be demonstrated as part of an integrated system. Once beyond TRL 5, visual odometry will be poised to be demonstrated as part of a system in an in-space demo where relative pose is critical, like Orion AR&D, ISS robotic operations, asteroid proximity operations, and more.

  20. Quantitative Evaluation of Stereo Visual Odometry for Autonomous Vessel Localisation in Inland Waterway Sensing Applications

    PubMed Central

    Kriechbaumer, Thomas; Blackburn, Kim; Breckon, Toby P.; Hamilton, Oliver; Rivas Casado, Monica

    2015-01-01

    Autonomous survey vessels can increase the efficiency and availability of wide-area river environment surveying as a tool for environment protection and conservation. A key challenge is the accurate localisation of the vessel, where bank-side vegetation or urban settlement preclude the conventional use of line-of-sight global navigation satellite systems (GNSS). In this paper, we evaluate unaided visual odometry, via an on-board stereo camera rig attached to the survey vessel, as a novel, low-cost localisation strategy. Feature-based and appearance-based visual odometry algorithms are implemented on a six degrees of freedom platform operating under guided motion, but stochastic variation in yaw, pitch and roll. Evaluation is based on a 663 m-long trajectory (>15,000 image frames) and statistical error analysis against ground truth position from a target tracking tachymeter integrating electronic distance and angular measurements. The position error of the feature-based technique (mean of ±0.067 m) is three times smaller than that of the appearance-based algorithm. From multi-variable statistical regression, we are able to attribute this error to the depth of tracked features from the camera in the scene and variations in platform yaw. Our findings inform effective strategies to enhance stereo visual localisation for the specific application of river monitoring. PMID:26694411

  21. PL-VIO: Tightly-Coupled Monocular Visual–Inertial Odometry Using Point and Line Features

    PubMed Central

    Zhao, Ji; Guo, Yue; He, Wenhao; Yuan, Kui

    2018-01-01

    To address the problem of estimating camera trajectory and to build a structural three-dimensional (3D) map based on inertial measurements and visual observations, this paper proposes point–line visual–inertial odometry (PL-VIO), a tightly-coupled monocular visual–inertial odometry system exploiting both point and line features. Compared with point features, lines provide significantly more geometrical structure information on the environment. To obtain both computation simplicity and representational compactness of a 3D spatial line, Plücker coordinates and orthonormal representation for the line are employed. To tightly and efficiently fuse the information from inertial measurement units (IMUs) and visual sensors, we optimize the states by minimizing a cost function which combines the pre-integrated IMU error term together with the point and line re-projection error terms in a sliding window optimization framework. The experiments evaluated on public datasets demonstrate that the PL-VIO method that combines point and line features outperforms several state-of-the-art VIO systems which use point features only. PMID:29642648
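
The sliding-window objective combines an IMU preintegration residual with point and line re-projection terms. A much-simplified single-frame sketch of those error terms, assuming a pinhole projection and a normalized 2D line; the Plücker/orthonormal line parametrization and the Jacobians are omitted:

```python
import numpy as np

def project(K, p_cam):
    # Pinhole projection of a 3D point given in camera coordinates.
    u = K @ (p_cam / p_cam[2])
    return u[:2]

def point_reproj_error(K, p_cam, uv_obs):
    # Squared pixel error between the projected point and its observation.
    return float(np.sum((project(K, p_cam) - uv_obs) ** 2))

def line_reproj_error(K, a_cam, b_cam, line_img):
    # Squared distance of the projected 3D segment endpoints to the detected
    # 2D line l = (lx, ly, c), normalized so that hypot(lx, ly) = 1.
    err = 0.0
    for p in (a_cam, b_cam):
        u, v = project(K, p)
        err += (line_img[0] * u + line_img[1] * v + line_img[2]) ** 2
    return float(err)

def sliding_window_cost(imu_residual, point_terms, line_terms):
    # Toy stand-in for the PL-VIO objective: IMU preintegration error plus
    # point and line re-projection errors summed over the window.
    return float(np.sum(np.square(imu_residual)) + sum(point_terms) + sum(line_terms))
```

A nonlinear least-squares solver would minimize this total cost over the window states; here it only illustrates how the three residual families enter one objective.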

  22. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry

    NASA Astrophysics Data System (ADS)

    Zeller, Niclas; Quint, Franz; Stilla, Uwe

    2016-08-01

    This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth estimated from the light-field image and the metric object distance. These two methods are compared to a well-known curve fitting approach. Both model-based methods show significant advantages compared to the curve fitting method. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally focused and thus finding stereo correspondences is enhanced. In contrast to monocular visual odometry approaches, due to the calibration of the individual depth maps, the scale of the scene can be observed. Furthermore, due to the light-field information, better tracking capabilities compared to the monocular case can be expected. As a result, the depth information gained by the plenoptic-camera-based visual odometry algorithm proposed in this paper has superior accuracy and reliability compared to the depth estimated from a single light-field image.
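
The Kalman-like depth update described above amounts to inverse-variance fusion of two depth hypotheses. A minimal sketch of one such update:

```python
def fuse_depth(d1, var1, d2, var2):
    # Kalman-style update of a virtual-depth estimate with a second estimate
    # from another micro-image: inverse-variance weighting. The fused variance
    # is always smaller than either input variance.
    k = var1 / (var1 + var2)
    d = d1 + k * (d2 - d1)
    var = var1 * var2 / (var1 + var2)
    return d, var
```

Applying this per pixel across micro-images yields exactly the probabilistic depth map (depth plus variance) that the abstract describes.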

  23. Robotic Vision-Based Localization in an Urban Environment

    NASA Technical Reports Server (NTRS)

    Mchenry, Michael; Cheng, Yang; Matthies

    2007-01-01

    A system of electronic hardware and software, now undergoing development, automatically estimates the location of a robotic land vehicle in an urban environment using a somewhat imprecise map, which has been generated in advance from aerial imagery. This system does not utilize the Global Positioning System and does not include any odometry, inertial measurement units, or any other sensors except a stereoscopic pair of black-and-white digital video cameras mounted on the vehicle. Of course, the system also includes a computer running software that processes the video image data. The software consists mostly of three components corresponding to the three major image-data-processing functions.

    Visual Odometry: This component automatically tracks point features in the imagery and computes the relative motion of the cameras between sequential image frames. This component incorporates a modified version of a visual-odometry algorithm originally published in 1989. The algorithm selects point features, performs multiresolution area-correlation computations to match the features in stereoscopic images, tracks the features through the sequence of images, and uses the tracking results to estimate the six-degree-of-freedom motion of the camera between consecutive stereoscopic pairs of images (see figure).

    Urban Feature Detection and Ranging: Using the same data as those processed by the visual-odometry component, this component strives to determine the three-dimensional (3D) coordinates of vertical and horizontal lines that are likely to be parts of, or close to, the exterior surfaces of buildings. The basic sequence of processes performed by this component is the following: 1. An edge-detection algorithm is applied, yielding a set of linked lists of edge pixels, a horizontal-gradient image, and a vertical-gradient image. 2. Straight-line segments of edges are extracted from the linked lists generated in step 1. Any straight-line segments longer than an arbitrary threshold (e.g., 30 pixels) are assumed to belong to buildings or other artificial objects. 3. A gradient-filter algorithm is used to test straight-line segments longer than the threshold to determine whether they represent edges of natural or artificial objects. In somewhat oversimplified terms, the test is based on the assumption that the gradient of image intensity varies little along a segment that represents the edge of an artificial object.
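
    The gradient-consistency test in step 3 might be sketched as follows; the relative-standard-deviation threshold and the sample gradient values are illustrative assumptions, not taken from the system described:

```python
import statistics

def is_artificial_edge(gradient_magnitudes, max_rel_std=0.2):
    """Classify an edge segment as artificial (building-like) when the
    image-intensity gradient varies little along its length."""
    mean = statistics.fmean(gradient_magnitudes)
    if mean == 0:
        return False
    rel_std = statistics.pstdev(gradient_magnitudes) / mean
    return rel_std <= max_rel_std

# A building edge: near-constant gradient along the segment.
building = [100, 102, 98, 101, 99, 100]
# A natural edge (e.g. foliage): wildly varying gradient.
foliage = [20, 150, 45, 90, 10, 130]
```

    In a real pipeline the magnitudes would be sampled from the gradient images along each extracted line segment.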

  5. ExoMars VisLoc - The Visual Localisation System for the ExoMars Rover

    NASA Astrophysics Data System (ADS)

    Ward, R.; Hamilton, W.; Silva, N.; Pereira, V.

    2016-08-01

    Maintaining accurate knowledge of the current position of vehicles on the surface of Mars is a considerable problem. The lack of an orbital GPS means that the absolute position of a rover at any instant is very difficult to determine, and with that it is difficult to accurately and safely plan hazard avoidance manoeuvres. Some on-board methods of determining the evolving pose of a rover are well known, such as using wheel odometry to keep a log of the distance travelled. However, there are associated problems: wheels can slip in the Martian soil, providing odometry readings which can mislead navigation algorithms. One solution to this is to use a visual localisation system, which uses cameras to determine the actual rover motion from images of the terrain. By measuring movement from the terrain, an independent measure of the actual movement can be obtained to a high degree of accuracy. This paper presents the progress of the project to develop the Visual Localisation system for the ExoMars rover (VisLoc). The core algorithm used in the system is known as OVO (Oxford Visual Odometry), developed at the Mobile Robotics Group at the University of Oxford. Over a number of projects this system has been adapted from its original purpose (navigation systems for autonomous vehicles) to be a viable system for the unique challenges associated with extra-terrestrial use.

  6. Vision System Measures Motions of Robot and External Objects

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2008-01-01

    A prototype of an advanced robotic vision system both (1) measures its own motion with respect to a stationary background and (2) detects other moving objects and estimates their motions, all by use of visual cues. Like some prior robotic and other optoelectronic vision systems, this system is based partly on concepts of optical flow and visual odometry. Whereas prior optoelectronic visual-odometry systems have been limited to frame rates of no more than 1 Hz, a visual-odometry subsystem that is part of this system operates at a frame rate of 60 to 200 Hz, given optical-flow estimates. The overall system operates at an effective frame rate of 12 Hz. Moreover, unlike prior machine-vision systems for detecting motions of external objects, this system need not remain stationary: it can detect such motions while it is moving (even vibrating). The system includes a stereoscopic pair of cameras mounted on a moving robot. The outputs of the cameras are digitized, then processed to extract positions and velocities. The initial image-data-processing functions of this system are the same as those of some prior systems: Stereoscopy is used to compute three-dimensional (3D) positions for all pixels in the camera images. For each pixel of each image, optical flow between successive image frames is used to compute the two-dimensional (2D) apparent relative translational motion of the point transverse to the line of sight of the camera. The challenge in designing this system was to provide for utilization of the 3D information from stereoscopy in conjunction with the 2D information from optical flow to distinguish between motion of the camera pair and motions of external objects, compute the motion of the camera pair in all six degrees of translational and rotational freedom, and robustly estimate the motions of external objects, all in real time. 
To meet this challenge, the system is designed to perform the following image-data-processing functions: The visual-odometry subsystem (the subsystem that estimates the motion of the camera pair relative to the stationary background) utilizes the 3D information from stereoscopy and the 2D information from optical flow. It computes the relationship between the 3D and 2D motions and uses a least-mean-squares technique to estimate motion parameters. The least-mean-squares technique is suitable for real-time implementation when the number of external-moving-object pixels is smaller than the number of stationary-background pixels.
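
    The least-mean-squares idea can be illustrated for the simplest case, a pure 2D image translation: with mostly static-background pixels, the fit recovers the camera motion and large residuals flag moving-object pixels. The tolerance and flow values below are invented for illustration:

```python
def estimate_camera_motion(flow_vectors, inlier_tol=1.5):
    """Least-squares camera-motion estimate from per-pixel optical flow,
    assuming static-background pixels outnumber moving-object pixels.
    For a pure-translation model the least-squares fit is the mean flow."""
    n = len(flow_vectors)
    tx = sum(u for u, v in flow_vectors) / n
    ty = sum(v for u, v in flow_vectors) / n
    # Pixels whose flow deviates strongly from the fit are candidate
    # moving-object pixels.
    movers = [i for i, (u, v) in enumerate(flow_vectors)
              if ((u - tx) ** 2 + (v - ty) ** 2) ** 0.5 > inlier_tol]
    return (tx, ty), movers

# Eight background pixels moving (1, 0) and two object pixels moving (5, 3).
flow = [(1.0, 0.0)] * 8 + [(5.0, 3.0)] * 2
motion, movers = estimate_camera_motion(flow)
```

    The full system solves the analogous problem in six degrees of freedom using the 3D positions from stereoscopy.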

  7. Monocular Visual Odometry Based on Trifocal Tensor Constraint

    NASA Astrophysics Data System (ADS)

    Chen, Y. J.; Yang, G. L.; Jiang, Y. X.; Liu, X. Y.

    2018-02-01

    For the problem of real-time precise localization in the urban street, a monocular visual odometry based on extended Kalman fusion of optical-flow tracking and a trifocal tensor constraint is proposed. To diminish the influence of moving objects, such as pedestrians, we estimate the motion of the camera by extracting the features on the ground, which improves the robustness of the system. The observation equation based on the trifocal tensor constraint is derived, which can form the Kalman filter along with the state transition equation. An extended Kalman filter is employed to cope with the nonlinear system. Experimental results demonstrate that, compared with Yu's 2-step EKF method, the algorithm is more accurate, meeting the needs of real-time accurate localization in cities.
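
    The trifocal-tensor observation model itself is involved, but the extended-Kalman machinery the paper relies on can be sketched generically for a scalar state; the models f, h and the noise values below are illustrative stand-ins, not the paper's equations:

```python
def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One predict/update cycle of a scalar extended Kalman filter.
    f/h are the nonlinear transition/observation models; F/H are their
    Jacobians evaluated at the current estimate."""
    # Predict
    x_pred = f(x, u)
    Fx = F(x, u)
    P_pred = Fx * P * Fx + Q
    # Update
    Hx = H(x_pred)
    y = z - h(x_pred)              # innovation
    S = Hx * P_pred * Hx + R
    K = P_pred * Hx / S            # Kalman gain
    return x_pred + K * y, (1 - K * Hx) * P_pred

f = lambda x, u: x + u             # state transition
F = lambda x, u: 1.0               # its Jacobian
h = lambda x: x ** 2               # nonlinear observation
H = lambda x: 2.0 * x              # its Jacobian
# True state ~1.2, so the observation is 1.2**2 = 1.44.
x1, P1 = ekf_step(1.0, 1.0, u=0.1, z=1.44, f=f, F=F, h=h, H=H, Q=0.01, R=0.1)
```

    In the paper's formulation the observation equation comes from the trifocal constraint across three views, but the predict/update skeleton is the same.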

  8. Low Cost Embedded Stereo System for Underwater Surveys

    NASA Astrophysics Data System (ADS)

    Nawaf, M. M.; Boï, J.-M.; Merad, D.; Royer, J.-P.; Drap, P.

    2017-11-01

    This paper provides details of both the hardware and software conception and realization of a hand-held stereo embedded system for underwater imaging. The designed system can run most image processing techniques smoothly in real time. The developed functions provide direct visual feedback on the quality of the taken images, which helps in taking appropriate actions in terms of movement speed and lighting conditions. The proposed functionalities can be easily customized or upgraded, and new functions can be easily added thanks to the available supported libraries. Furthermore, by connecting the designed system to a more powerful computer, real-time visual odometry can run on the captured images to provide live navigation and a site coverage map. We use a visual odometry method adapted to systems with low computational resources and long autonomy. The system was tested in a real context and showed its robustness and promising further perspectives.
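
    One common way to provide the kind of direct image-quality feedback described above is the variance of a Laplacian response (a focus/motion-blur proxy); the metric choice is an assumption here, since the paper does not specify its quality measure:

```python
def sharpness(gray, w, h):
    """Variance of a discrete Laplacian over a row-major grayscale
    buffer: low values suggest a blurred or featureless frame."""
    vals = []
    for yy in range(1, h - 1):
        for xx in range(1, w - 1):
            i = yy * w + xx
            lap = 4 * gray[i] - gray[i - 1] - gray[i + 1] - gray[i - w] - gray[i + w]
            vals.append(lap)
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

flat = [100] * 25                                        # featureless frame
checker = [255 * ((x + y) % 2) for y in range(5) for x in range(5)]  # textured
```

    A threshold on such a score could then drive the on-screen feedback about movement speed and lighting.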

  9. Acquiring Semantically Meaningful Models for Robotic Localization, Mapping and Target Recognition

    DTIC Science & Technology

    2014-12-21

    • Point features tracking • Recovery of relative motion, visual odometry • Loop closure • Environment models, sparse clouds of points • Object-level segmentation of objects that co-occur with the object of interest

  10. Attitude and position estimation on the Mars Exploration Rovers

    NASA Technical Reports Server (NTRS)

    Ali, Khaled S.; Vanelli, C. Anthony; Biesiadecki, Jeffrey J.; Maimone, Mark W.; Yang Cheng, A.; San Martin, Miguel; Alexander, James W.

    2005-01-01

    NASA/JPL's Mars Exploration Rovers acquire their attitude upon command and autonomously propagate their attitude and position. The rovers use accelerometers and images of the sun to acquire attitude, autonomously searching the sky for the sun with a pointable camera. To propagate the attitude and position the rovers use either accelerometer and gyro readings or gyro readings and wheel odometry, depending on the nature of the movement ground operators are commanding. Where necessary, visual odometry is performed on images to fine-tune the position updates, particularly in high-slip environments. The capability also exists for visual odometry attitude updates. This paper describes the techniques used by the rovers to acquire and maintain attitude and position knowledge, the accuracy which is obtainable, and lessons learned after more than one year in operation.

  11. Reading the Rover Tracks

    NASA Image and Video Library

    2012-08-29

    The straight lines in Curiosity zigzag track marks are Morse code for JPL. The footprint is an important reference mark that the rover can use to drive more precisely via a system called visual odometry.

  12. Towards Guided Underwater Survey Using Light Visual Odometry

    NASA Astrophysics Data System (ADS)

    Nawaf, M. M.; Drap, P.; Royer, J. P.; Merad, D.; Saccone, M.

    2017-02-01

    A light distributed visual odometry method adapted to embedded hardware platform is proposed. The aim is to guide underwater surveys in real time. We rely on image stream captured using portable stereo rig attached to the embedded system. Taken images are analyzed on the fly to assess image quality in terms of sharpness and lightness, so that immediate actions can be taken accordingly. Images are then transferred over the network to another processing unit to compute the odometry. Relying on a standard ego-motion estimation approach, we speed up points matching between image quadruplets using a low level points matching scheme relying on fast Harris operator and template matching that is invariant to illumination changes. We benefit from having the light source attached to the hardware platform to estimate a priori rough depth belief following light divergence over distance low. The rough depth is used to limit points correspondence search zone as it linearly depends on disparity. A stochastic relative bundle adjustment is applied to minimize re-projection errors. The evaluation of the proposed method demonstrates the gain in terms of computation time w.r.t. other approaches that use more sophisticated feature descriptors. The built system opens promising areas for further development and integration of embedded computer vision techniques.
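
    The depth-prior trick can be sketched as follows: inverse-square light falloff gives a rough depth, which in turn bounds the disparity search window, since disparity depends linearly on inverse depth. The calibration constants and margin below are invented for illustration:

```python
def disparity_search_window(intensity, i_ref, z_ref, focal_px, baseline_m,
                            margin=0.25):
    """Rough depth from the inverse-square light-falloff law (calibrated
    at intensity i_ref at depth z_ref), then the matching disparity and
    a +/- margin search window in pixels."""
    # Inverse-square law: intensity ~ 1/z^2.
    z = z_ref * (i_ref / intensity) ** 0.5
    d = focal_px * baseline_m / z          # stereo disparity for depth z
    return (d * (1 - margin), d * (1 + margin))

# Calibrated: intensity 400 at 1 m. Observed intensity 100 -> roughly 2 m.
window = disparity_search_window(100.0, i_ref=400.0, z_ref=1.0,
                                 focal_px=800.0, baseline_m=0.1)
```

    Restricting template matching to this window is what cuts the correspondence search cost on the embedded platform.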

  13. Spectrally queued feature selection for robotic visual odometry

    NASA Astrophysics Data System (ADS)

    Pirozzo, David M.; Frederick, Philip A.; Hunt, Shawn; Theisen, Bernard; Del Rose, Mike

    2011-01-01

    Over the last two decades, research in Unmanned Vehicles (UV) has rapidly progressed and become more influenced by the field of biological sciences. Researchers have been investigating mechanical aspects of varying species to improve UV air and ground intrinsic mobility, exploring the computational aspects of the brain for the development of pattern recognition and decision algorithms, and exploring the perception capabilities of numerous animals and insects. This paper describes a 3-month exploratory applied research effort performed at the US ARMY Research, Development and Engineering Command's (RDECOM) Tank Automotive Research, Development and Engineering Center (TARDEC) in the area of biologically inspired, spectrally augmented feature selection for robotic visual odometry. The motivation for this applied research was to develop a feasibility analysis on multi-spectrally queued feature selection, with improved temporal stability, for the purposes of visual odometry. The intended application is future semi-autonomous Unmanned Ground Vehicle (UGV) control, as the richness of data sets required to enable human-like behavior in these systems has yet to be defined.

  14. Evaluation of odometry algorithm performances using a railway vehicle dynamic model

    NASA Astrophysics Data System (ADS)

    Allotta, B.; Pugi, L.; Ridolfi, A.; Malvezzi, M.; Vettori, G.; Rindi, A.

    2012-05-01

    In modern railway Automatic Train Protection and Automatic Train Control systems, odometry is a safety-relevant on-board subsystem which estimates the instantaneous speed and the travelled distance of the train; high reliability of the odometry estimate is fundamental, since an error on the train position may lead to a potentially dangerous overestimation of the distance available for braking. To improve the accuracy of the odometry estimate, data fusion of different inputs coming from a redundant sensor layout may be used. Simplified two-dimensional models of railway vehicles have usually been used for Hardware in the Loop test rig testing of conventional odometry algorithms and of on-board safety-relevant subsystems (like the Wheel Slide Protection braking system) in which the train speed is estimated from measures of the wheel angular speed. Two-dimensional models are not suitable for developing solutions like inertial localisation algorithms (using 3D accelerometers and 3D gyroscopes) or the introduction of the Global Positioning System (or similar) or magnetometers. In order to test these algorithms correctly and increase odometry performance, a three-dimensional multibody model of a railway vehicle has been developed using Matlab-Simulink™, including an efficient contact model which can simulate degraded adhesion conditions (the development and prototyping of odometry algorithms involve the simulation of realistic environmental conditions). In this paper, the authors show how a 3D railway vehicle model, able to simulate the complex interactions arising between different on-board subsystems, can be useful to evaluate the performance of odometry algorithms and safety-relevant on-board subsystems.
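
    A toy version of the slip/slide problem such a model is meant to exercise: compare each wheel's odometric speed against a speed propagated from the longitudinal accelerometer, and flag disagreements. The threshold and numbers are illustrative, not from the paper:

```python
def detect_slip(wheel_speeds_mps, accel_mps2, dt, v_prev, tol=0.5):
    """Flag wheels whose odometric speed disagrees with the speed
    propagated from the previous estimate and the longitudinal
    accelerometer (a minimal slip/slide plausibility check)."""
    v_inertial = v_prev + accel_mps2 * dt
    return [abs(v - v_inertial) > tol for v in wheel_speeds_mps]

# Braking at -1 m/s^2 from 20 m/s; the third wheel is sliding.
flags = detect_slip([19.8, 19.95, 15.0], accel_mps2=-1.0, dt=0.1, v_prev=20.0)
```

    Fusion algorithms of the kind the paper targets would weight the remaining, plausible wheels (plus GPS/magnetometer inputs) rather than simply discarding readings.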

  15. An artificial neural network architecture for non-parametric visual odometry in wireless capsule endoscopy

    NASA Astrophysics Data System (ADS)

    Dimas, George; Iakovidis, Dimitris K.; Karargyris, Alexandros; Ciuti, Gastone; Koulaouzidis, Anastasios

    2017-09-01

    Wireless capsule endoscopy is a non-invasive screening procedure of the gastrointestinal (GI) tract performed with an ingestible capsule endoscope (CE) of the size of a large vitamin pill. Such endoscopes are equipped with a usually low-frame-rate color camera which enables the visualization of the GI lumen and the detection of pathologies. The localization of the commercially available CEs is performed in the 3D abdominal space using radio-frequency (RF) triangulation from external sensor arrays, in combination with transit time estimation. State-of-the-art approaches, such as magnetic localization, which have been experimentally proved more accurate than the RF approach, are still at an early stage. Recently, we have demonstrated that CE localization is feasible using solely visual cues and geometric models. However, such approaches depend on camera parameters, many of which are unknown. In this paper the authors propose a novel non-parametric visual odometry (VO) approach to CE localization based on a feed-forward neural network architecture. The effectiveness of this approach in comparison to state-of-the-art geometric VO approaches is validated using a robotic-assisted in vitro experimental setup.
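
    As a sketch of the feed-forward idea only (not the authors' trained network, architecture, or features), a one-hidden-layer forward pass mapping image-derived features to a displacement estimate; all weights below are arbitrary:

```python
import math

def mlp_forward(x, w1, b1, w2, b2):
    """Forward pass of a one-hidden-layer feed-forward network:
    tanh hidden layer, linear output. In a non-parametric VO setting
    the inputs are image-derived cues rather than calibrated geometry."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w1, b1)]
    return [sum(wi * hi for wi, hi in zip(row, hidden)) + b
            for row, b in zip(w2, b2)]

# Tiny illustrative weights: one hidden unit, one output.
out = mlp_forward([1.0, 2.0], w1=[[0.0, 0.0]], b1=[0.0], w2=[[1.0]], b2=[0.5])
```

    "Non-parametric" here refers to the absence of explicit camera parameters; the network learns the mapping from visual cues to displacement directly from data.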

  16. Gps-Denied Geo-Localisation Using Visual Odometry

    NASA Astrophysics Data System (ADS)

    Gupta, Ashish; Chang, Huan; Yilmaz, Alper

    2016-06-01

    The primary method for geo-localization is based on GPS, which has issues of localization accuracy, power consumption, and unavailability. This paper proposes a novel approach to geo-localization in a GPS-denied environment for a mobile platform. Our approach has two principal components: public domain transport network data available in GIS databases or OpenStreetMap; and a trajectory of the mobile platform. This trajectory is estimated using visual odometry and 3D view geometry. The transport map information is abstracted as a graph data structure, where various types of roads are modelled as graph edges and intersections are typically modelled as graph nodes. A real-time search for the trajectory in the graph yields the geo-location of the mobile platform. Our approach uses a simple visual sensor and has a low memory and computational footprint. In this paper, we demonstrate our method for trajectory estimation and provide examples of geo-localization using public-domain map data. With the rapid proliferation of visual sensors as part of automated driving technology and continuous growth in public domain map data, our approach has the potential to augment, or even supplant, GPS-based navigation since it functions in all environments.
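
    The graph search can be illustrated with a toy road graph whose edges carry turn angles: the visually estimated trajectory is reduced to a turn sequence and matched against paths in the graph. The representation and tolerance are assumptions for illustration, not the paper's data structures:

```python
def match_turn_sequence(graph, turns, tol=15.0):
    """Find start nodes from which some path through the road graph
    reproduces the measured turn-angle sequence (degrees) within +/- tol.
    graph maps node -> list of (next_node, turn_angle_deg)."""
    def extend(node, remaining):
        if not remaining:
            return True
        return any(abs(turn - remaining[0]) <= tol and extend(nxt, remaining[1:])
                   for nxt, turn in graph.get(node, []))
    return [n for n in graph if extend(n, turns)]

# Toy map: only from 'A' can one turn right 90 deg then left 90 deg.
graph = {'A': [('B', 90.0)], 'B': [('C', -90.0)], 'C': [], 'D': [('B', 45.0)]}
candidates = match_turn_sequence(graph, [90.0, -90.0])
```

    As the trajectory grows, the candidate set shrinks until the platform's position on the map is unambiguous.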

  17. Benchmarking real-time RGBD odometry for light-duty UAVs

    NASA Astrophysics Data System (ADS)

    Willis, Andrew R.; Sahawneh, Laith R.; Brink, Kevin M.

    2016-06-01

    This article describes the theoretical and implementation challenges associated with generating 3D odometry estimates (delta-pose) from RGBD sensor data in real-time to facilitate navigation in cluttered indoor environments. The underlying odometry algorithm applies to general 6DoF motion; however, the computational platforms, trajectories, and scene content are motivated by their intended use on indoor, light-duty UAVs. Discussion outlines the overall software pipeline for sensor processing and details how algorithm choices for the underlying feature detection and correspondence computation impact the real-time performance and accuracy of the estimated odometry and associated covariance. This article also explores the consistency of odometry covariance estimates and the correlation between successive odometry estimates. The analysis is intended to provide users information needed to better leverage RGBD odometry within the constraints of their systems.

  18. Spirit rover localization and topographic mapping at the landing site of Gusev crater, Mars

    USGS Publications Warehouse

    Li, R.; Archinal, B.A.; Arvidson, R. E.; Bell, J.; Christensen, P.; Crumpler, L.; Des Marais, D.J.; Di, K.; Duxbury, T.; Golombek, M.P.; Grant, J. A.; Greeley, R.; Guinn, J.; Johnson, Aaron H.; Kirk, R.L.; Maimone, M.; Matthies, L.H.; Malin, M.; Parker, T.; Sims, M.; Thompson, S.; Squyres, S. W.; Soderblom, L.A.

    2006-01-01

    By sol 440, the Spirit rover has traversed a distance of 3.76 km (actual distance traveled instead of odometry). Localization of the lander and the rover along the traverse has been successfully performed at the Gusev crater landing site. We localized the lander in the Gusev crater using two-way Doppler radio positioning and cartographic triangulations through landmarks visible in both orbital and ground images. Additional high-resolution orbital images were used to verify the determined lander position. Visual odometry and bundle adjustment technologies were applied to compensate for wheel slippage, azimuthal angle drift, and other navigation errors (which were as large as 10.5% in the Husband Hill area). We generated topographic products, including 72 ortho maps and three-dimensional (3-D) digital terrain models, 11 horizontal and vertical traverse profiles, and one 3-D crater model (up to sol 440). Also discussed in this paper are uses of the data for science operations planning, geological traverse surveys, surveys of wind-related features, and other science applications. Copyright 2006 by the American Geophysical Union.

  19. Robust Stereo Visual Odometry Using Improved RANSAC-Based Methods for Mobile Robot Localization

    PubMed Central

    Liu, Yanqing; Gu, Yuzhang; Li, Jiamao; Zhang, Xiaolin

    2017-01-01

    In this paper, we present a novel approach for stereo visual odometry with robust motion estimation that is faster and more accurate than standard RANSAC (Random Sample Consensus). Our method improves RANSAC in three aspects: first, the hypotheses are preferentially generated by sampling the input feature points in order of the ages and similarities of the features; second, the evaluation of hypotheses is performed based on the SPRT (Sequential Probability Ratio Test), which discards bad hypotheses very quickly without verifying all the data points; third, we aggregate the three best hypotheses to get the final estimation instead of selecting only the best hypothesis. The first two aspects improve the speed of RANSAC by generating good hypotheses and discarding bad hypotheses in advance, respectively. The last aspect improves the accuracy of motion estimation. Our method was evaluated on the KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) and the New Tsukuba datasets. Experimental results show that the proposed method achieves better results than RANSAC for both speed and accuracy. PMID:29027935
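
    A toy version of the first and third ideas (age-ordered sampling and aggregating the best hypotheses), here for a 2D translation rather than full 6-DoF stereo motion, with deterministic cycling standing in for random sampling; all values are illustrative:

```python
def aggregated_translation(points_a, points_b, ages, iters=20, tol=0.5):
    """Hypothesis generation biased toward long-tracked (old) features,
    then an inlier-weighted average of the three best hypotheses instead
    of keeping only the single best."""
    order = sorted(range(len(points_a)), key=lambda i: -ages[i])
    hypos = []
    for it in range(iters):
        i = order[it % max(1, len(order) // 2)]   # favour older features
        dx = points_b[i][0] - points_a[i][0]
        dy = points_b[i][1] - points_a[i][1]
        inliers = sum(1 for (ax, ay), (bx, by) in zip(points_a, points_b)
                      if abs(bx - ax - dx) < tol and abs(by - ay - dy) < tol)
        hypos.append((inliers, dx, dy))
    top = sorted(hypos, reverse=True)[:3]
    w = sum(h[0] for h in top)
    return (sum(h[0] * h[1] for h in top) / w,
            sum(h[0] * h[2] for h in top) / w)

# Four features translated by (1, 2); the newest feature is an outlier.
pa = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0), (4.0, 0.0)]
pb = [(1.0, 2.0), (2.0, 2.0), (3.0, 2.0), (4.0, 2.0), (10.0, 10.0)]
motion = aggregated_translation(pa, pb, ages=[5, 4, 3, 2, 1])
```

    The paper's SPRT evaluation additionally abandons a hypothesis early, before all points are checked, once the running likelihood ratio falls below a bound.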

  20. Minimalistic optic flow sensors applied to indoor and outdoor visual guidance and odometry on a car-like robot.

    PubMed

    Mafrica, Stefano; Servel, Alain; Ruffier, Franck

    2016-11-10

    Here we present a novel bio-inspired optic flow (OF) sensor and its application to visual guidance and odometry on a low-cost car-like robot called BioCarBot. The minimalistic OF sensor was robust to high-dynamic-range lighting conditions and to the various visual patterns encountered, thanks to its M2APIX auto-adaptive pixels and the new cross-correlation OF algorithm implemented. The low-cost car-like robot estimated its velocity and steering angle, and therefore its position and orientation, via an extended Kalman filter (EKF), using only two downward-facing OF sensors and the Ackermann steering model. Indoor and outdoor experiments were carried out in which the robot was driven in closed-loop mode based on the velocity and steering angle estimates. The experimental results obtained show that our novel OF sensor can deliver high-frequency measurements ([Formula: see text]) in a wide OF range (1.5-[Formula: see text]) and in a 7-decade high-dynamic light level range. The OF resolution was constant and could be adjusted as required (up to [Formula: see text]), and the OF precision obtained was relatively high (standard deviation of [Formula: see text] with an average OF of [Formula: see text], under the most demanding lighting conditions). An EKF-based algorithm gave the robot's position and orientation with a relatively high accuracy (maximum errors outdoors at a very low light level: [Formula: see text] and [Formula: see text] over about [Formula: see text] and [Formula: see text]) despite the low-resolution control systems of the steering servo and the DC motor, as well as a simplified model identification and calibration. Finally, the minimalistic OF-based odometry results were compared to those obtained using measurements based on an inertial measurement unit (IMU) and a motor's speed sensor.
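
    The Ackermann (kinematic bicycle) motion model that such an EKF propagates can be sketched as a dead-reckoning step; the wheelbase and inputs below are arbitrary illustrative values:

```python
import math

def ackermann_step(x, y, heading, v, steer, wheelbase, dt):
    """One dead-reckoning step of the kinematic Ackermann (bicycle)
    model: velocity and steering angle propagate the planar pose."""
    x += v * math.cos(heading) * dt
    y += v * math.sin(heading) * dt
    heading += (v / wheelbase) * math.tan(steer) * dt
    return x, y, heading

# Driving straight at 1 m/s for 4 s moves the robot 4 m along x.
pose = (0.0, 0.0, 0.0)
for _ in range(4):
    pose = ackermann_step(*pose, v=1.0, steer=0.0, wheelbase=0.25, dt=1.0)
```

    In the paper's setup, v and steer are themselves estimated from the two downward-facing OF sensors rather than commanded values.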

  1. Tracking Positions and Attitudes of Mars Rovers

    NASA Technical Reports Server (NTRS)

    Ali, Khaled; vanelli, Charles; Biesiadecki, Jeffrey; Martin, Alejandro San; Maimone, Mark; Cheng, Yang; Alexander, James

    2006-01-01

    The Surface Attitude Position and Pointing (SAPP) software, which runs on computers aboard the Mars Exploration Rovers, tracks the positions and attitudes of the rovers on the surface of Mars. Each rover acquires data on attitude from a combination of accelerometer readings and images of the Sun acquired autonomously, using a pointable camera to search the sky for the Sun. Depending on the nature of movement commanded remotely by operators on Earth, the software propagates attitude and position by use of either (1) accelerometer and gyroscope readings or (2) gyroscope readings and wheel odometry. Where necessary, visual odometry is performed on images to fine-tune the position updates, particularly on high-wheel-slip terrain. The attitude data are used by other software and ground-based personnel for pointing a high-gain antenna, planning and execution of driving, and positioning and aiming scientific instruments.

  2. Stereo and IMU-Assisted Visual Odometry for Small Robots

    NASA Technical Reports Server (NTRS)

    2012-01-01

    This software performs two functions: (1) taking stereo image pairs as input, it computes stereo disparity maps from them by cross-correlation to achieve 3D (three-dimensional) perception; (2) taking a sequence of stereo image pairs as input, it tracks features in the image sequence to estimate the motion of the cameras between successive image pairs. A real-time stereo vision system with IMU (inertial measurement unit)-assisted visual odometry was implemented on a single 750 MHz/520 MHz OMAP3530 SoC (system on chip) from TI (Texas Instruments). Frame rates of 46 fps (frames per second) were achieved at QVGA (Quarter Video Graphics Array, i.e., 320×240) or 8 fps at VGA (Video Graphics Array, 640×480) resolutions, while simultaneously tracking up to 200 features, taking full advantage of the OMAP3530's integer DSP (digital signal processor) and floating-point ARM processors. This is a substantial advancement over previous work, as the stereo implementation produces 146 Mde/s (millions of disparities evaluated per second) in 2.5 W, yielding a stereo energy efficiency of 58.8 Mde/J, which is 3.75× better than prior DSP stereo while providing more functionality.
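
    The per-pixel disparity computation of function (1) can be sketched with a sum-of-absolute-differences cost, a common simple stand-in for the cross-correlation matching named above, on a rectified scanline (all data below are synthetic):

```python
def disparity_sad(left_row, right_row, x, window, max_disp):
    """Disparity for one pixel of a rectified stereo pair: slide a
    1D window along the right scanline and keep the shift with the
    lowest sum of absolute differences."""
    def sad(d):
        return sum(abs(left_row[x + k] - right_row[x - d + k])
                   for k in range(-window, window + 1))
    return min(range(0, max_disp + 1), key=sad)

# Synthetic scanlines: the left row is the right row shifted by 3 px.
right_row = list(range(20))
left_row = [i - 3 for i in range(20)]
d = disparity_sad(left_row, right_row, x=10, window=2, max_disp=5)
```

    Production implementations evaluate this cost for every pixel and disparity in fixed-point arithmetic, which is what the integer DSP accelerates.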

  3. Visual Odometry Based on Structural Matching of Local Invariant Features Using Stereo Camera Sensor

    PubMed Central

    Núñez, Pedro; Vázquez-Martín, Ricardo; Bandera, Antonio

    2011-01-01

    This paper describes a novel sensor system to estimate the motion of a stereo camera. Local invariant image features are matched between pairs of frames and linked into image trajectories at video rate, providing the so-called visual odometry, i.e., motion estimates from visual input alone. Our proposal conducts two matching sessions: the first one between sets of features associated with the images of the stereo pairs, and the second one between sets of features associated with consecutive frames. With respect to previously proposed approaches, the main novelty of this proposal is that both matching algorithms are conducted by means of a fast matching algorithm which combines absolute and relative feature constraints. Finding the largest-valued set of mutually consistent matches is equivalent to finding the maximum-weighted clique on a graph. The stereo matching allows the scene view to be represented as a graph which emerges from the features of the accepted clique. On the other hand, the frame-to-frame matching defines a graph whose vertices are features in 3D space. The efficiency of the approach is increased by minimizing the geometric and algebraic errors to estimate the final displacement of the stereo camera between consecutive acquired frames. The proposed approach has been tested for mobile robotics navigation purposes in real environments and using different features. Experimental results demonstrate the performance of the proposal, which could be applied in both industrial and service robot fields. PMID:22164016
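
    The mutual-consistency idea behind the clique construction can be illustrated with a simpler best-buddy check: a pair is kept only if each feature is the other's nearest neighbour. The squared-Euclidean descriptor distance and the toy descriptors are assumptions, not the paper's features:

```python
def mutual_matches(desc_a, desc_b):
    """Keep only mutually consistent (best-buddy) matches between two
    descriptor sets: i matches j only if each is the other's nearest
    neighbour under squared Euclidean distance."""
    def nearest(d, pool):
        return min(range(len(pool)),
                   key=lambda k: sum((a - b) ** 2 for a, b in zip(d, pool[k])))
    a_to_b = [nearest(d, desc_b) for d in desc_a]
    b_to_a = [nearest(d, desc_a) for d in desc_b]
    return [(i, j) for i, j in enumerate(a_to_b) if b_to_a[j] == i]

a = [(0.0, 0.0), (10.0, 10.0), (5.0, 5.0)]
b = [(10.0, 10.0), (0.0, 1.0), (20.0, 20.0)]
pairs = mutual_matches(a, b)
```

    The paper goes further: it also enforces relative (pairwise geometric) constraints among matches, which is what turns the filtering into a maximum-weighted clique search.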

  4. A stingless bee can use visual odometry to estimate both height and distance.

    PubMed

    Eckles, M A; Roubik, D W; Nieh, J C

    2012-09-15

    Bees move and forage within three dimensions and rely heavily on vision for navigation. The use of vision-based odometry has been studied extensively in horizontal distance measurement, but not vertical distance measurement. The honey bee Apis mellifera and the stingless bee Melipona seminigra measure distance visually using optic flow-movement of images as they pass across the retina. The honey bees gauge height using image motion in the ventral visual field. The stingless bees forage at different tropical forest canopy levels, ranging up to 40 m at our site. Thus, estimating height would be advantageous. We provide the first evidence that the stingless bee Melipona panamica utilizes optic flow information to gauge not only distance traveled but also height above ground, by processing information primarily from the lateral visual field. After training bees to forage at a set height in a vertical tunnel lined with black and white stripes, we observed foragers that explored a new tunnel with no feeder. In a new tunnel, bees searched at the same height they were trained to. In a narrower tunnel, bees experienced more image motion and significantly lowered their search height. In a wider tunnel, bees experienced less image motion and searched at significantly greater heights. In a tunnel without optic cues, bees were disoriented and searched at random heights. A horizontal tunnel testing these variables similarly affected foraging, but bees exhibited less precision (greater variance in search positions). Accurately gauging flight height above ground may be crucial for this species and others that compete for resources located at heights ranging from ground level to the high tropical forest canopies.

  5. Enhancement Strategies for Frame-to-Frame UAS Stereo Visual Odometry

    NASA Astrophysics Data System (ADS)

    Kersten, J.; Rodehorst, V.

    2016-06-01

    Autonomous navigation of indoor unmanned aircraft systems (UAS) requires accurate pose estimations usually obtained from indirect measurements. Navigation based on inertial measurement units (IMU) is known to be affected by high drift rates. The incorporation of cameras provides complementary information due to the different underlying measurement principle. The scale ambiguity problem for monocular cameras is avoided when a light-weight stereo camera setup is used. However, also frame-to-frame stereo visual odometry (VO) approaches are known to accumulate pose estimation errors over time. Several valuable real-time capable techniques for outlier detection and drift reduction in frame-to-frame VO, for example robust relative orientation estimation using random sample consensus (RANSAC) and bundle adjustment, are available. This study addresses the problem of choosing appropriate VO components. We propose a frame-to-frame stereo VO method based on carefully selected components and parameters. This method is evaluated regarding the impact and value of different outlier detection and drift-reduction strategies, for example keyframe selection and sparse bundle adjustment (SBA), using reference benchmark data as well as own real stereo data. The experimental results demonstrate that our VO method is able to estimate quite accurate trajectories. Feature bucketing and keyframe selection are simple but effective strategies which further improve the VO results. Furthermore, introducing the stereo baseline constraint in pose graph optimization (PGO) leads to significant improvements.
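
    Feature bucketing, one of the simple strategies reported effective above, can be sketched as keeping only the strongest few features per image-grid cell; the grid size and per-cell cap are illustrative choices:

```python
def bucket_features(features, img_w, img_h, nx=4, ny=3, per_cell=5):
    """Split the image into an nx-by-ny grid and keep at most per_cell
    strongest features in each cell, so a single textured region cannot
    dominate the motion estimate. features are (x, y, strength)."""
    buckets = {}
    for x, y, strength in sorted(features, key=lambda f: -f[2]):
        cell = (min(int(x * nx / img_w), nx - 1),
                min(int(y * ny / img_h), ny - 1))
        kept = buckets.setdefault(cell, [])
        if len(kept) < per_cell:
            kept.append((x, y, strength))
    return [f for kept in buckets.values() for f in kept]

# Ten features crowd one corner; bucketing keeps only the 3 strongest.
crowd = [(1.0, 1.0, float(s)) for s in range(10)]
kept = bucket_features(crowd, img_w=100, img_h=100, per_cell=3)
```

    Beyond suppressing clustered features, this also bounds the cost of the subsequent RANSAC and bundle-adjustment stages.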

  6. Underwater image mosaicking and visual odometry

    NASA Astrophysics Data System (ADS)

    Sadjadi, Firooz; Tangirala, Sekhar; Sorber, Scott

    2017-05-01

    This paper summarizes the results of studies in underwater odometry using a video camera for estimating the velocity of an unmanned underwater vehicle (UUV). Underwater vehicles are usually equipped with sonar and an Inertial Measurement Unit (IMU) - an integrated sensor package that combines multiple accelerometers and gyros to produce a three-dimensional measurement of both specific force and angular rate with respect to an inertial reference frame for navigation. In this study, we investigate the use of odometry information obtainable from a video camera mounted on a UUV to extract vehicle velocity relative to the ocean floor. A key challenge with this process is the seemingly bland (i.e. featureless) nature of video data obtained underwater, which could make conventional approaches to image-based motion estimation difficult. To address this problem, we perform image enhancement, followed by frame-to-frame image transformation, registration, and mosaicking/stitching. With this approach the velocity components associated with the moving sensor (vehicle) are readily obtained from (i) the components of the transform matrix at each frame; (ii) information about the height of the vehicle above the seabed; and (iii) the sensor resolution. Preliminary results are presented.
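
    Steps (i)-(iii) amount to a pinhole-model conversion from the registration transform's pixel shift to metric velocity; a minimal sketch with invented numbers (the paper does not give its calibration values):

```python
def ground_velocity(shift_px, altitude_m, focal_px, dt):
    """Convert a frame-to-frame pixel shift (the translation part of the
    registration transform) into metric ground velocity, using camera
    height above the seabed and focal length in pixels."""
    metres_per_px = altitude_m / focal_px   # ground sampling distance
    return (shift_px[0] * metres_per_px / dt,
            shift_px[1] * metres_per_px / dt)

# 20 px shift between frames 0.5 s apart, 5 m altitude, 1000 px focal.
vx, vy = ground_velocity((20.0, 0.0), altitude_m=5.0, focal_px=1000.0, dt=0.5)
```

    The rotation and scale components of the same transform yield yaw rate and vertical motion in an analogous way.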

  7. A Model for an Angular Velocity-Tuned Motion Detector Accounting for Deviations in the Corridor-Centering Response of the Bee.

    PubMed

    Cope, Alex J; Sabo, Chelsea; Gurney, Kevin; Vasilaki, Eleni; Marshall, James A R

    2016-05-01

    We present a novel neurally based model for estimating angular velocity (AV) in the bee brain, capable of quantitatively reproducing experimental observations of visual odometry and corridor-centering in free-flying honeybees, including previously unaccounted for manipulations of behaviour. The model is fitted using electrophysiological data, and tested using behavioural data. Based on our model we suggest that the AV response can be considered as an evolutionary extension to the optomotor response. The detector is tested behaviourally in silico with the corridor-centering paradigm, where bees navigate down a corridor with gratings (square wave or sinusoidal) on the walls. When combined with an existing flight control algorithm the detector reproduces the invariance of the average flight path to the spatial frequency and contrast of the gratings, including deviations from perfect centering behaviour as found in the real bee's behaviour. In addition, the summed response of the detector to a unit distance movement along the corridor is constant for a large range of grating spatial frequencies, demonstrating that the detector can be used as a visual odometer.

  8. A Model for an Angular Velocity-Tuned Motion Detector Accounting for Deviations in the Corridor-Centering Response of the Bee

    PubMed Central

    Sabo, Chelsea; Gurney, Kevin; Vasilaki, Eleni; Marshall, James A. R.

    2016-01-01

    We present a novel neurally based model for estimating angular velocity (AV) in the bee brain, capable of quantitatively reproducing experimental observations of visual odometry and corridor-centering in free-flying honeybees, including previously unaccounted for manipulations of behaviour. The model is fitted using electrophysiological data, and tested using behavioural data. Based on our model we suggest that the AV response can be considered as an evolutionary extension to the optomotor response. The detector is tested behaviourally in silico with the corridor-centering paradigm, where bees navigate down a corridor with gratings (square wave or sinusoidal) on the walls. When combined with an existing flight control algorithm the detector reproduces the invariance of the average flight path to the spatial frequency and contrast of the gratings, including deviations from perfect centering behaviour as found in the real bee’s behaviour. In addition, the summed response of the detector to a unit distance movement along the corridor is constant for a large range of grating spatial frequencies, demonstrating that the detector can be used as a visual odometer. PMID:27148968

  9. Assessing the Reliability and the Accuracy of Attitude Extracted from Visual Odometry for LIDAR Data Georeferencing

    NASA Astrophysics Data System (ADS)

    Leroux, B.; Cali, J.; Verdun, J.; Morel, L.; He, H.

    2017-08-01

    Airborne LiDAR systems require Direct Georeferencing (DG) in order to compute the coordinates of the surveyed points in the mapping frame. A UAV platform is no exception to this need, but its payload has to be lighter than that installed on board larger aircraft, so an alternative to heavy sensors and navigation systems is required. For the georeferencing of these data, a possible solution is to replace the Inertial Measurement Unit (IMU) by a camera and record the optical flow. The frames are then processed photogrammetrically so as to extract the Exterior Orientation Parameters (EOP) and, therefore, the path of the camera. The major advantages of this method, called Visual Odometry (VO), are its low cost, the absence of IMU-induced drift, and the option of using Ground Control Points (GCPs), as in airborne photogrammetric surveys. In this paper we present a test bench designed to assess the reliability and accuracy of the attitude estimated from VO outputs. The test bench consists of a trolley which carries a GNSS receiver, an IMU sensor and a camera. The LiDAR is replaced by a tacheometer in order to survey control points that are already known. We have also developed a methodology, applied to this test bench, for the calibration of the external parameters and the computation of the surveyed point coordinates. Several tests have revealed a difference of about 2-3 centimeters between the measured control point coordinates and the known values.

  10. A Robust Method for Stereo Visual Odometry Based on Multiple Euclidean Distance Constraint and RANSAC Algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Q.; Tong, X.; Liu, S.; Lu, X.; Liu, S.; Chen, P.; Jin, Y.; Xie, H.

    2017-07-01

    Visual Odometry (VO) is a critical component for planetary robot navigation and safety. It estimates the ego-motion using stereo images frame by frame. Feature point extraction and matching is one of the key steps in robotic motion estimation and largely influences precision and robustness. In this work, we choose the Oriented FAST and Rotated BRIEF (ORB) features by considering both accuracy and speed. For more robustness in challenging environments, e.g., rough terrain or planetary surfaces, this paper presents a robust outlier elimination method based on a Euclidean Distance Constraint (EDC) and the Random Sample Consensus (RANSAC) algorithm. In the matching process, a set of ORB feature points is extracted from the current left and right synchronous images, and the Brute Force (BF) matcher is used to find the correspondences between the two images for space intersection. The EDC and RANSAC algorithms are then carried out to eliminate mismatches whose distances are beyond a predefined threshold. Similarly, when the feature points of the next left image are matched with those of the current left image, EDC and RANSAC are iteratively performed. After these steps, exceptional mismatched points may still remain in some cases, so RANSAC is applied a third time to eliminate the effect of those outliers in the estimation of the ego-motion parameters (interior and exterior orientation). The proposed approach has been tested on a real-world vehicle dataset and the results demonstrate its high robustness.
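The distance-thresholded RANSAC outlier rejection described here can be illustrated on a toy problem. The sketch below hypothesises a 2-D translation from one random correspondence and keeps the largest consensus set; the paper estimates full ego-motion, so a pure translation with invented data is a deliberate simplification.

```python
import numpy as np

# RANSAC-style mismatch rejection: propose a 2-D translation from one random
# correspondence, count matches whose residual Euclidean distance is within a
# threshold, and keep the best consensus set.
def ransac_translation(p1, p2, thresh=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(p1), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(p1))
        t = p2[i] - p1[i]                        # translation hypothesis
        residual = np.linalg.norm(p2 - (p1 + t), axis=1)
        inliers = residual < thresh              # Euclidean distance constraint
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

p1 = np.array([[0, 0], [1, 1], [2, 0], [3, 2], [4, 4.0]])
t_true = np.array([5.0, -1.0])
p2 = p1 + t_true
p2[4] += np.array([30.0, 30.0])                  # one gross mismatch
print(ransac_translation(p1, p2))                # last match flagged as outlier
```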

  11. A small-scale hyperacute compound eye featuring active eye tremor: application to visual stabilization, target tracking, and short-range odometry.

    PubMed

    Colonnier, Fabien; Manecy, Augustin; Juston, Raphaël; Mallot, Hanspeter; Leitel, Robert; Floreano, Dario; Viollet, Stéphane

    2015-02-25

    In this study, a miniature artificial compound eye (15 mm in diameter) called the curved artificial compound eye (CurvACE) was endowed for the first time with hyperacuity, using similar micro-movements to those occurring in the fly's compound eye. A periodic micro-scanning movement of only a few degrees enables the vibrating compound eye to locate contrasting objects with a 40-fold greater resolution than that imposed by the interommatidial angle. In this study, we developed a new algorithm merging the output of 35 local processing units consisting of adjacent pairs of artificial ommatidia. The local measurements performed by each pair are processed in parallel with very few computational resources, which makes it possible to reach a high refresh rate of 500 Hz. An aerial robotic platform with two degrees of freedom equipped with the active CurvACE placed over naturally textured panels was able to assess its linear position accurately with respect to the environment thanks to its efficient gaze stabilization system. The algorithm was found to perform robustly at different light conditions as well as distance variations relative to the ground and featured small closed-loop positioning errors of the robot in the range of 45 mm. In addition, three tasks of interest were performed without having to change the algorithm: short-range odometry, visual stabilization, and tracking contrasting objects (hands) moving over a textured background.

  12. A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera.

    PubMed

    Ci, Wenyan; Huang, Yingping

    2016-10-17

    Visual odometry estimates the ego-motion of an agent (e.g., vehicle or robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion, using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth and camera ego-motion parameters through the camera's 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function, using the iterative Levenberg-Marquardt method. One of the key points for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected by using the Kanade-Lucas-Tomasi (KLT) algorithm. A circle-matching step follows to remove the outliers caused by mismatches of the KLT algorithm. A space position constraint is imposed to filter out moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method.
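A toy Levenberg-Marquardt loop of the kind used above to minimise an objective can be sketched as follows; here it fits a two-parameter exponential model to synthetic data rather than the six motion parameters, so the model and numbers are illustrative assumptions.

```python
import numpy as np

# Levenberg-Marquardt: damped Gauss-Newton steps, with the damping factor
# decreased on accepted steps and increased on rejected ones.
def levenberg_marquardt(x, y, a0, b0, iters=50, lam=1e-3):
    """Fit y = a * exp(b * x) to data by minimising squared residuals."""
    a, b = a0, b0
    for _ in range(iters):
        r = y - a * np.exp(b * x)                   # residuals
        J = np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])
        H = J.T @ J + lam * np.eye(2)               # damped normal equations
        step = np.linalg.solve(H, J.T @ r)
        a_new, b_new = a + step[0], b + step[1]
        if np.sum((y - a_new * np.exp(b_new * x)) ** 2) < np.sum(r ** 2):
            a, b, lam = a_new, b_new, lam * 0.5     # accept: reduce damping
        else:
            lam *= 10.0                             # reject: increase damping
    return a, b

x = np.linspace(0, 1, 20)
y = 2.0 * np.exp(1.5 * x)
a, b = levenberg_marquardt(x, y, a0=1.0, b0=1.0)
print(round(a, 3), round(b, 3))  # converges near a=2, b=1.5
```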

  13. A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera

    PubMed Central

    Ci, Wenyan; Huang, Yingping

    2016-01-01

    Visual odometry estimates the ego-motion of an agent (e.g., vehicle or robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion, using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth and camera ego-motion parameters through the camera’s 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function, using the iterative Levenberg–Marquardt method. One of the key points for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected by using the Kanade–Lucas–Tomasi (KLT) algorithm. A circle-matching step follows to remove the outliers caused by mismatches of the KLT algorithm. A space position constraint is imposed to filter out moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method. PMID:27763508

  14. Intelligent visual localization of wireless capsule endoscopes enhanced by color information.

    PubMed

    Dimas, George; Spyrou, Evaggelos; Iakovidis, Dimitris K; Koulaouzidis, Anastasios

    2017-10-01

    Wireless capsule endoscopy (WCE) is performed with a miniature swallowable endoscope enabling the visualization of the whole gastrointestinal (GI) tract. One of the most challenging problems in WCE is the localization of the capsule endoscope (CE) within the GI lumen. Contemporary, radiation-free localization approaches are mainly based on the use of external sensors and transit time estimation techniques, with practically low localization accuracy. Latest advances for the solution of this problem include localization approaches based solely on visual information from the CE camera. In this paper we present a novel visual localization approach based on an intelligent, artificial neural network, architecture which implements a generic visual odometry (VO) framework capable of estimating the motion of the CE in physical units. Unlike conventional geometric VO approaches, the proposed one is adaptive to the geometric model of the CE used; therefore, it does not require any prior knowledge about the camera or its intrinsic parameters. Furthermore, it exploits color as a cue to increase localization accuracy and robustness. Experiments were performed using a robotic-assisted setup providing ground truth information about the actual location of the CE. The lowest average localization error achieved is 2.70 ± 1.62 cm, which is significantly lower than the error obtained with the geometric approach. This result constitutes a promising step towards the in-vivo application of VO, which will open new horizons for accurate local treatment, including drug infusion and surgical interventions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Using Visual Odometry to Estimate Position and Attitude

    NASA Technical Reports Server (NTRS)

    Maimone, Mark; Cheng, Yang; Matthies, Larry; Schoppers, Marcel; Olson, Clark

    2007-01-01

    A computer program in the guidance system of a mobile robot generates estimates of the position and attitude of the robot, using features of the terrain on which the robot is moving, by processing digitized images acquired by a stereoscopic pair of electronic cameras mounted rigidly on the robot. Developed for use in localizing the Mars Exploration Rover (MER) vehicles on Martian terrain, the program can also be used for similar purposes on terrestrial robots moving in sufficiently visually textured environments: examples include low-flying robotic aircraft and wheeled robots moving on rocky terrain or inside buildings. In simplified terms, the program automatically detects visual features and tracks them across stereoscopic pairs of images acquired by the cameras. The 3D locations of the tracked features are then robustly processed into an estimate of overall vehicle motion. Testing has shown that by use of this software, the error in the estimate of the position of the robot can be limited to no more than 2 percent of the distance traveled, provided that the terrain is sufficiently rich in features. This software has proven extremely useful on the MER vehicles during driving on sandy and highly sloped terrains on Mars.
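The core motion-estimation step described above, turning tracked 3D feature locations into an overall vehicle motion, can be sketched with the closed-form SVD-based (Kabsch) least-squares fit. The flight software adds robust outlier handling that this minimal, synthetic-data version omits.

```python
import numpy as np

# Rigid motion from matched 3D points: subtract centroids, take the SVD of the
# cross-covariance, and compose rotation + translation (Kabsch algorithm).
def rigid_motion(P, Q):
    """P, Q: (N, 3) arrays of matched 3D points. Returns R, t with Q ~ R@P + t."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic check: rotate points 30 degrees about z and translate.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1.0]])
t_true = np.array([0.5, -0.2, 0.1])
P = np.random.default_rng(1).normal(size=(10, 3))
Q = P @ R_true.T + t_true
R, t = rigid_motion(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```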

  16. Method of mobile robot indoor navigation by artificial landmarks with use of computer vision

    NASA Astrophysics Data System (ADS)

    Glibin, E. S.; Shevtsov, A. A.; Enik, O. A.

    2018-05-01

    The article describes an algorithm for mobile robot indoor navigation based on the use of visual odometry. The results of an experiment identifying errors in the calculated distance traveled under wheel slip are presented. It is shown that the use of computer vision allows one to correct erroneous coordinates of the robot with the help of artificial landmarks. A control system utilizing the proposed method has been realized on the basis of an Arduino Mega 2560 controller and a single-board computer Raspberry Pi 3. The results of an experiment on mobile robot navigation with this control system are presented.

  17. Localization Methods for a Mobile Robot in Urban Environments

    DTIC Science & Technology

    2004-10-04

    [30] R. Brown and P. Hwang, Introduction to random signals and applied Kalman filtering, 3rd ed. … An extended Kalman filter integrates the sensor data and keeps track of the uncertainty associated with it. The second method is based on … (Fig. 4: a diagram of the extended Kalman filter fusing corrected odometry pose and odometry error estimates with compass/GPS error estimates.)
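The Kalman-filter fusion this record describes can be illustrated with a 1-D linear filter: odometry predicts the pose and accumulates drift, while an absolute compass/GPS-style fix corrects it. All noise values below are illustrative assumptions, not taken from the report.

```python
# Minimal 1-D Kalman filter: odometry prediction grows the variance, an
# absolute measurement shrinks it; the gain balances the two uncertainties.
def kalman_step(x, P, u, z, q=0.1, r=0.5):
    """x, P: pose estimate and variance; u: odometry increment; z: absolute fix."""
    x_pred = x + u                      # predict with odometry
    P_pred = P + q                      # odometry noise accumulates (drift)
    K = P_pred / (P_pred + r)           # Kalman gain
    x_new = x_pred + K * (z - x_pred)   # correct with the absolute measurement
    P_new = (1 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0
truth = 0.0
# True motion is +1.0 per step, but odometry over-reports (+1.1, slip-like bias).
for _ in range(20):
    truth += 1.0
    x, P = kalman_step(x, P, u=1.1, z=truth)
print(abs(x - truth) < 0.5, P < 0.5)  # fused estimate stays near the truth
```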

  18. An Outdoor Navigation Platform with a 3D Scanner and Gyro-assisted Odometry

    NASA Astrophysics Data System (ADS)

    Yoshida, Tomoaki; Irie, Kiyoshi; Koyanagi, Eiji; Tomono, Masahiro

    This paper proposes a light-weight navigation platform that consists of gyro-assisted odometry, a 3D laser scanner and map-based localization for human-scale robots. The gyro-assisted odometry provides highly accurate positioning by dead-reckoning alone. The 3D laser scanner has a wide field of view and a uniform measuring-point distribution. The map-based localization is robust and computationally inexpensive, utilizing a particle filter on a 2D grid map generated by projecting 3D points onto the ground. The system uses small and low-cost sensors, and can be applied to a variety of mobile robots in human-scale environments. Outdoor navigation experiments were conducted at the Tsukuba Challenge held in 2009 and 2010, which is an open proving ground for human-scale robots. Our robot successfully navigated the assigned 1-km courses in a fully autonomous mode multiple times.
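Particle-filter localization of the kind used above can be sketched compactly. The toy "sensor" below measures ranges to two known landmarks rather than scoring laser scans against a grid map, so the map, motion and noise values are all illustrative assumptions.

```python
import numpy as np

# Particle filter: spread hypotheses, move them with the (noisy) motion model,
# weight them by agreement with the measurement, and resample the survivors.
rng = np.random.default_rng(0)
landmarks = np.array([[15.0, 10.0], [3.0, 18.0]])   # two known map features
true_pos = np.array([5.0, 5.0])
particles = rng.uniform(0, 20, size=(1000, 2))      # uniform prior over the map

for _ in range(8):
    true_pos = true_pos + np.array([1.0, 0.5])                 # robot motion
    particles += np.array([1.0, 0.5]) + rng.normal(0, 0.3, particles.shape)
    z = np.linalg.norm(landmarks - true_pos, axis=1)           # measured ranges
    d = np.linalg.norm(particles[:, None, :] - landmarks[None], axis=2)
    w = np.exp(-np.sum((d - z) ** 2, axis=1) / (2 * 1.0 ** 2)) + 1e-300
    idx = rng.choice(len(particles), size=len(particles), p=w / w.sum())
    particles = particles[idx]                                 # resample

err = np.linalg.norm(particles.mean(axis=0) - true_pos)
print(err < 2.0)
```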

  19. Towards automated visual flexible endoscope navigation.

    PubMed

    van der Stap, Nanda; van der Heijden, Ferdinand; Broeders, Ivo A M J

    2013-10-01

    The design of flexible endoscopes has not changed significantly in the past 50 years. A trend is observed towards a wider application of flexible endoscopes with an increasing role in complex intraluminal therapeutic procedures. The nonintuitive and nonergonomic steering mechanism now forms a barrier to the extension of flexible endoscope applications. Automating the navigation of endoscopes could be a solution to this problem. This paper summarizes the current state of the art in image-based navigation algorithms. The objectives are to find the most promising navigation system(s) to date and to indicate fields for further research. A systematic literature search was performed using three general search terms in two medical-technological literature databases. Papers were included according to the inclusion criteria. A total of 135 papers were analyzed. Ultimately, 26 were included. Navigation is often based on visual information, which means steering the endoscope using the images that the endoscope produces. Two main techniques are described: lumen centralization and visual odometry. Although the research results are promising, no successful, commercially available automated flexible endoscopy system exists to date. Automated systems that employ conventional flexible endoscopes show the most promising prospects in terms of cost and applicability. To produce such a system, the research focus should lie on finding low-cost mechatronics and technologically robust steering algorithms. Additional functionality and increased efficiency can be obtained through software development. The first priority is to find real-time, robust steering algorithms. These algorithms need to handle bubbles, motion blur, and other image artifacts without disrupting the steering process.

  20. Neural basis of forward flight control and landing in honeybees.

    PubMed

    Ibbotson, M R; Hung, Y-S; Meffin, H; Boeddeker, N; Srinivasan, M V

    2017-11-06

    The impressive repertoire of honeybee visually guided behaviors, and their ability to learn, has made them an important tool for elucidating the visual basis of behavior. Like other insects, bees perform optomotor course correction in response to optic flow, a response that is dependent on the spatial structure of the visual environment. However, bees can also distinguish the speed of image motion during forward flight and landing, as well as estimate flight distances (odometry), irrespective of the visual scene. The neural pathways underlying these abilities are unknown. Here we report on a cluster of descending neurons (DNIIIs) that are shown to have the directional tuning properties necessary for detecting image motion during forward flight and landing on vertical surfaces. They have stable firing rates during prolonged periods of stimulation and respond to a wide range of image speeds, making them suitable to detect image flow during flight behaviors. While their responses are not strictly speed tuned, the shape and amplitudes of their speed tuning functions are resistant to large changes in spatial frequency. These cells are prime candidates not only for the control of flight speed and landing, but also for the basis of a neural 'front end' of the honeybee's visual odometer.

  1. Odometry and Laser Scanner Fusion Based on a Discrete Extended Kalman Filter for Robotic Platooning Guidance

    PubMed Central

    Espinosa, Felipe; Santos, Carlos; Marrón-Romera, Marta; Pizarro, Daniel; Valdés, Fernando; Dongil, Javier

    2011-01-01

    This paper describes a relative localization system used to achieve the navigation of a convoy of robotic units in indoor environments. This positioning system is implemented by fusing two sensor sources: (a) an odometric system and (b) a laser scanner together with artificial landmarks located on top of the units. The laser source allows one to compensate for the cumulative error inherent in dead-reckoning, whereas the odometry source provides less pose uncertainty in short trajectories. A discrete Extended Kalman Filter, customized for this application, is used in order to accomplish this aim under real-time constraints. Different experimental results with a convoy of Pioneer P3-DX units tracking non-linear trajectories are shown. The paper shows that a simple setup based on low-cost laser range systems and robot built-in odometry sensors is able to give a high degree of robustness and accuracy to the relative localization problem of convoy units for indoor applications. PMID:22164079

  2. Odometry and laser scanner fusion based on a discrete extended Kalman Filter for robotic platooning guidance.

    PubMed

    Espinosa, Felipe; Santos, Carlos; Marrón-Romera, Marta; Pizarro, Daniel; Valdés, Fernando; Dongil, Javier

    2011-01-01

    This paper describes a relative localization system used to achieve the navigation of a convoy of robotic units in indoor environments. This positioning system is implemented by fusing two sensor sources: (a) an odometric system and (b) a laser scanner together with artificial landmarks located on top of the units. The laser source allows one to compensate for the cumulative error inherent in dead-reckoning, whereas the odometry source provides less pose uncertainty in short trajectories. A discrete Extended Kalman Filter, customized for this application, is used in order to accomplish this aim under real-time constraints. Different experimental results with a convoy of Pioneer P3-DX units tracking non-linear trajectories are shown. The paper shows that a simple setup based on low-cost laser range systems and robot built-in odometry sensors is able to give a high degree of robustness and accuracy to the relative localization problem of convoy units for indoor applications.

  3. An enhanced inertial navigation system based on a low-cost IMU and laser scanner

    NASA Astrophysics Data System (ADS)

    Kim, Hyung-Soon; Baeg, Seung-Ho; Yang, Kwang-Woong; Cho, Kuk; Park, Sangdeok

    2012-06-01

    This paper describes an enhanced fusion method for an Inertial Navigation System (INS) based on a 3-axis accelerometer, a 3-axis gyroscope and a laser scanner. In GPS-denied environments, such as indoors or in dense forests, pure INS odometry is available for estimating the trajectory of a human or robot. However, it has a critical implementation problem: drift errors in velocity, position and heading angles. Commonly the problem is addressed by fusing visual landmarks, a magnetometer or radio beacons. These methods are not robust in diverse environments: darkness, fog or sunlight, an unstable magnetic field, and environmental obstacles. We propose to overcome the drift problem using an Iterative Closest Point (ICP) scan matching algorithm with a laser scanner. This system consists of three parts. The first is the INS. It estimates attitude, velocity and position based on a 6-axis Inertial Measurement Unit (IMU) with both 'Heuristic Reduction of Gyro Drift' (HRGD) and 'Heuristic Reduction of Velocity Drift' (HRVD) methods. The second is a frame-to-frame ICP matching algorithm for estimating position and attitude from laser scan data. The third is an extended Kalman filter method for fusing the multi-sensor data from the INS and the Laser Range Finder (LRF). The proposed method is simple and robust in diverse environments, so the drift error is reduced efficiently. We confirm the result by comparing the odometry of the experimental runs with the ICP- and LRF-aided INS in a long corridor.
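The frame-to-frame ICP matching step can be sketched in 2-D: repeatedly match each point to its nearest neighbour in the reference scan, then solve the best rigid alignment in closed form. This minimal version uses synthetic scans and no outlier handling, so it is an illustration rather than the paper's implementation.

```python
import numpy as np

# Point-to-point ICP: nearest-neighbour correspondences, then a closed-form
# (SVD/Kabsch) rigid alignment, iterated until the scans line up.
def icp(src, ref, iters=30):
    R_total, t_total = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None, :] - ref[None, :, :], axis=2)
        nn = ref[d.argmin(axis=1)]                    # nearest neighbours
        cs, cn = cur.mean(axis=0), nn.mean(axis=0)
        U, _, Vt = np.linalg.svd((cur - cs).T @ (nn - cn))
        D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = cn - R @ cs
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

theta = np.deg2rad(2)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([0.15, -0.1])
ref = np.random.default_rng(2).uniform(-5, 5, size=(80, 2))  # reference scan
src = (ref - t_true) @ R_true        # same scene seen from the displaced pose
R_est, t_est = icp(src, ref)
print(np.allclose(R_est, R_true, atol=1e-6), np.allclose(t_est, t_true, atol=1e-6))
```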

  4. Visual Target Tracking on the Mars Exploration Rovers

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Biesiadecki, Jeffrey J.; Ali, Khaled S.

    2008-01-01

    Visual Target Tracking (VTT) has been implemented in the new Mars Exploration Rover (MER) Flight Software (FSW) R9.2 release, which is now running on both Spirit and Opportunity rovers. Applying the normalized cross-correlation (NCC) algorithm with template image magnification and roll compensation on MER Navcam images, VTT tracks the target and enables the rover to approach the target within a few cm over a 10 m traverse. Each VTT update takes 1/2 to 1 minute on the rovers, 2-3 times faster than one Visual Odometry (Visodom) update. VTT is a key element to achieve a target approach and instrument placement over a 10-m run in a single sol in contrast to the original baseline of 3 sols. VTT has been integrated into the MER FSW so that it can operate with any combination of blind driving, Autonomous Navigation (Autonav) with hazard avoidance, and Visodom. VTT can either guide the rover towards the target or simply image the target as the rover drives by. Three recent VTT operational checkouts on Opportunity were all successful, tracking the selected target reliably within a few pixels.
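The normalized cross-correlation score at the heart of VTT can be sketched directly: it is insensitive to gain and offset changes between the stored template and the new image patch. This toy single-channel version omits the template magnification and roll compensation that the flight software adds.

```python
import numpy as np

# Normalized cross-correlation: subtract the means and divide by the norms,
# so uniform brightness/contrast changes do not affect the score.
def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def track(template, image):
    """Return the top-left offset in `image` where NCC with `template` peaks."""
    th, tw = template.shape
    ih, iw = image.shape
    scores = {(r, c): ncc(template, image[r:r + th, c:c + tw])
              for r in range(ih - th + 1) for c in range(iw - tw + 1)}
    return max(scores, key=scores.get)

rng = np.random.default_rng(3)
image = rng.uniform(size=(30, 30))
template = 2.0 * image[12:20, 7:15] + 0.3   # gain/offset change, same pattern
print(track(template, image))  # (12, 7)
```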

  5. High Accuracy Monocular SFM and Scale Correction for Autonomous Driving.

    PubMed

    Song, Shiyu; Chandraker, Manmohan; Guest, Clark C

    2016-04-01

    We present a real-time monocular visual odometry system that achieves high accuracy in real-world autonomous driving applications. First, we demonstrate robust monocular SFM that exploits multithreading to handle driving scenes with large motions and rapidly changing imagery. To correct for scale drift, we use known height of the camera from the ground plane. Our second contribution is a novel data-driven mechanism for cue combination that allows highly accurate ground plane estimation by adapting observation covariances of multiple cues, such as sparse feature matching and dense inter-frame stereo, based on their relative confidences inferred from visual data on a per-frame basis. Finally, we demonstrate extensive benchmark performance and comparisons on the challenging KITTI dataset, achieving accuracy comparable to stereo and exceeding prior monocular systems. Our SFM system is optimized to output pose within 50 ms in the worst case, while average case operation is over 30 fps. Our framework also significantly boosts the accuracy of applications like object localization that rely on the ground plane.
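The scale-drift correction from known camera height reduces to a single ratio: if the reconstruction places the ground plane at some estimated height while the true mounting height is known, the quotient fixes the metric scale of the translation. The numbers below are illustrative, not from the paper.

```python
import numpy as np

# Monocular SFM recovers translation only up to scale; the known height of the
# camera above the ground plane supplies the missing metric factor.
def correct_scale(t_unscaled, h_est, h_true):
    """Rescale an up-to-scale translation using the known camera height."""
    return t_unscaled * (h_true / h_est)

t = np.array([0.12, 0.0, 0.99])                       # unit-norm SFM translation
t_metric = correct_scale(t, h_est=0.55, h_true=1.65)  # camera mounted 1.65 m up
print(np.round(np.linalg.norm(t_metric), 2))          # three times the unscaled length
```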

  6. Estimation and Control for Autonomous Coring from a Rover Manipulator

    NASA Technical Reports Server (NTRS)

    Hudson, Nicolas; Backes, Paul; DiCicco, Matt; Bajracharya, Max

    2010-01-01

    A system consisting of a set of estimators and autonomous behaviors has been developed which allows robust coring from a low-mass rover platform, while accommodating for moderate rover slip. A redundant set of sensors, including a force-torque sensor, visual odometry, and accelerometers are used to monitor discrete critical and operational modes, as well as to estimate continuous drill parameters during the coring process. A set of critical failure modes pertinent to shallow coring from a mobile platform is defined, and autonomous behaviors associated with each critical mode are used to maintain nominal coring conditions. Autonomous shallow coring is demonstrated from a low-mass rover using a rotary-percussive coring tool mounted on a 5 degree-of-freedom (DOF) arm. A new architecture of using an arm-stabilized, rotary percussive tool with the robotic arm used to provide the drill z-axis linear feed is validated. Particular attention to hole start using this architecture is addressed. An end-to-end coring sequence is demonstrated, where the rover autonomously detects and then recovers from a series of slip events that exceeded 9 cm total displacement.

  7. Using virtual environment for autonomous vehicle algorithm validation

    NASA Astrophysics Data System (ADS)

    Levinskis, Aleksandrs

    2018-04-01

    This paper describes the possible use of a modern game engine for validating and proving the concept of algorithm design. As a result, a simple visual odometry algorithm is presented to demonstrate the concept and walk through all workflow stages. Some of the stages involve a Kalman filter, used in such a way that it estimates the optical flow velocity as well as the position of a moving camera mounted on the vehicle body. In particular, the Unreal Engine 4 game engine is used for generating optical flow patterns and the ground truth path. For optical flow determination, the Horn and Schunck method is applied. It is shown that this method can estimate the position of the camera attached to the vehicle with a certain displacement error with respect to the ground truth, depending on the optical flow pattern. The displacement RMS error is calculated between the estimated and actual positions.
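A minimal Horn-Schunck iteration, the optical-flow method named in this record, can be written with simple finite differences and a neighbour-averaging smoothness term. Boundary handling (periodic rolls), the test pattern and all parameters below are illustrative assumptions.

```python
import numpy as np

# Horn-Schunck: iterate between smoothing the flow field and pulling it toward
# the brightness-constancy constraint Ix*u + Iy*v + It = 0, weighted by alpha.
def horn_schunck(I1, I2, alpha=0.5, iters=300):
    Ix = np.gradient(I1, axis=1)
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(iters):
        u_avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                 np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4   # neighbour mean
        v_avg = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                 np.roll(v, 1, 1) + np.roll(v, -1, 1)) / 4
        s = (Ix * u_avg + Iy * v_avg + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u, v = u_avg - Ix * s, v_avg - Iy * s
    return u, v

x = np.arange(32)
I1 = np.tile(np.sin(2 * np.pi * x / 16), (32, 1))   # vertical stripe pattern
I2 = np.roll(I1, 1, axis=1)                         # pattern shifts +1 px in x
u, v = horn_schunck(I1, I2)
print(round(float(u.mean()), 2), float(v.mean()))   # u near 1, v exactly 0
```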

  8. Rover Slip Validation and Prediction Algorithm

    NASA Technical Reports Server (NTRS)

    Yen, Jeng

    2009-01-01

    A physical-based simulation has been developed for the Mars Exploration Rover (MER) mission that applies slope-induced wheel slippage to the rover location estimator. Using the digital elevation map from the stereo images, the computational method resolves the quasi-dynamic equations of motion that incorporate the actual wheel-terrain speed to estimate the gross velocity of the vehicle. Based on the empirical slippage measured by the Visual Odometry software of the rover, this algorithm computes two factors for the slip model by minimizing the distance between the predicted and actual vehicle locations, and then uses the model to predict the next drives. This technique, which has been deployed to operate the MER rovers in the extended mission periods, can accurately predict the rover position and attitude, mitigating the risk and uncertainties in path planning on high-slope areas.
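The idea of fitting two slip-model factors to slippage measured by Visual Odometry can be illustrated with a toy linear least-squares fit. The model form and all numbers below are assumptions for the sketch, not the MER flight implementation.

```python
import numpy as np

# Hypothetical two-parameter slip model: slip fraction = a*sin(slope) + b,
# fitted by least squares to slippage measured by VO on past drives.
slopes = np.deg2rad(np.array([5.0, 10.0, 15.0, 20.0, 25.0]))
measured_slip = np.array([0.06, 0.10, 0.15, 0.19, 0.23])   # synthetic VO data

A = np.column_stack([np.sin(slopes), np.ones_like(slopes)])
(a, b), *_ = np.linalg.lstsq(A, measured_slip, rcond=None)

def predict_drive(commanded_m, slope_rad):
    """Predicted actual displacement after slope-induced slippage."""
    return commanded_m * (1 - (a * np.sin(slope_rad) + b))

print(round(predict_drive(10.0, np.deg2rad(15.0)), 2))  # ~10 m minus ~15% slip
```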

  9. High-resolution hyperspectral ground mapping for robotic vision

    NASA Astrophysics Data System (ADS)

    Neuhaus, Frank; Fuchs, Christian; Paulus, Dietrich

    2018-04-01

    Recently released hyperspectral cameras use large, mosaiced filter patterns to capture different ranges of the light's spectrum in each of the camera's pixels. Spectral information is sparse, as it is not fully available in each location. We propose an online method that avoids explicit demosaicing of camera images by fusing raw, unprocessed, hyperspectral camera frames inside an ego-centric ground surface map. It is represented as a multilayer heightmap data structure, whose geometry is estimated by combining a visual odometry system with either dense 3D reconstruction or 3D laser data. We use a publicly available dataset to show that our approach is capable of constructing an accurate hyperspectral representation of the surface surrounding the vehicle. We show that in many cases our approach increases spatial resolution over a demosaicing approach, while providing the same amount of spectral information.

  10. An Autonomous Gps-Denied Unmanned Vehicle Platform Based on Binocular Vision for Planetary Exploration

    NASA Astrophysics Data System (ADS)

    Qin, M.; Wan, X.; Shao, Y. Y.; Li, S. Y.

    2018-04-01

    Vision-based navigation has become an attractive solution for autonomous navigation in planetary exploration. This paper presents our work of designing and building an autonomous, vision-based, GPS-denied unmanned vehicle and developing an ARFM (Adaptive Robust Feature Matching) based VO (Visual Odometry) software package for its autonomous navigation. The hardware system is mainly composed of a binocular stereo camera, a pan-and-tilt unit, a master machine, and a tracked chassis. The ARFM-based VO software system contains four modules: camera calibration, ARFM-based 3D reconstruction, position and attitude calculation, and BA (Bundle Adjustment). Two VO experiments were carried out using both outdoor images from an open dataset and indoor images captured by our vehicle; the results demonstrate that our vision-based unmanned vehicle is able to achieve autonomous localization and has the potential for future planetary exploration.
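
    The 3D-reconstruction module of a stereo VO pipeline rests on triangulating matched features from a rectified pair, where depth follows Z = f·B/d. A minimal sketch, assuming known focal length, principal point and baseline (all values hypothetical, not this vehicle's calibration):

```python
def stereo_point(uL, uR, v, f, cx, cy, baseline):
    """Triangulate a 3D point from a rectified stereo match.

    Depth follows Z = f*B/d with disparity d = uL - uR (pixels);
    f is the focal length in pixels, (cx, cy) the principal point,
    and baseline the camera separation in metres.
    """
    d = uL - uR
    if d <= 0:
        raise ValueError("non-positive disparity")
    Z = f * baseline / d
    X = (uL - cx) * Z / f
    Y = (v - cy) * Z / f
    return X, Y, Z
```

    The position-and-attitude module then estimates motion from such 3D points matched across frames, with bundle adjustment refining the result.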

  11. Autonomous Rock Tracking and Acquisition from a Mars Rover

    NASA Technical Reports Server (NTRS)

    Maimone, Mark W.; Nesnas, Issa A.; Das, Hari

    1999-01-01

    Future Mars exploration missions will perform two types of experiments: science instrument placement for close-up measurement, and sample acquisition for return to Earth. In this paper we describe algorithms we developed for these tasks, and demonstrate them in field experiments using a self-contained Mars rover prototype, the Rocky 7 rover. Our algorithms perform visual servoing on an elevation map instead of image features, because the latter are subject to abrupt scale changes during the approach. This allows us to compensate for the poor odometry that results from motion on loose terrain. We demonstrate the successful grasp of a 5 cm long rock over 1 m away using 103-degree field-of-view stereo cameras, and placement of a flexible mast on a rock outcropping over 5 m away using 43-degree FOV stereo cameras.

  12. Autonomous Deep-Space Optical Navigation Project

    NASA Technical Reports Server (NTRS)

    D'Souza, Christopher

    2014-01-01

    This project will advance the autonomous deep-space navigation capability applied to the Autonomous Rendezvous and Docking (AR&D) Guidance, Navigation and Control (GNC) system by testing it on hardware, particularly a flight processor, with a goal of limited testing in the Integrated Power, Avionics and Software (IPAS) facility with the ARCM (Asteroid Retrieval Crewed Mission) DRO (Distant Retrograde Orbit) AR&D scenario. The technology to be harnessed is called 'optical flow', also known as 'visual odometry'. It is being matured in automotive and SLAM (Simultaneous Localization and Mapping) applications but has yet to be applied to spacecraft navigation. In light of the tremendous potential of this technique, we believe that NASA needs to design an optical navigation architecture that will use it. The technique is flexible enough to be applicable to navigating around planetary bodies, such as asteroids.

  13. Absolute position calculation for a desktop mobile rehabilitation robot based on three optical mouse sensors.

    PubMed

    Zabaleta, Haritz; Valencia, David; Perry, Joel; Veneman, Jan; Keller, Thierry

    2011-01-01

    ArmAssist is a wireless robot for post-stroke upper limb rehabilitation. Knowing the position of the arm is essential for any rehabilitation device. In this paper, we describe a method based on an artificial landmark navigation system. The navigation system uses three optical mouse sensors, which enables the building of a cheap but reliable position sensor. Two of the sensors are the data source for odometry calculations, and the third optical mouse sensor takes very low resolution pictures of a custom-designed mat. These pictures are processed by an optical symbol recognition (OSR) algorithm that estimates the orientation of the robot and recognizes the landmarks placed on the mat. A data fusion strategy is described that detects misclassifications of the landmarks, so that only reliable information is fused. The orientation given by the OSR algorithm significantly improves the odometry, and the recognition of the landmarks is used to reference the odometry to an absolute coordinate system. The system was tested using a 3D motion capture system. With the actual mat configuration, in a field of motion of 710 × 450 mm, the maximum error in position estimation was 49.61 mm, with an average error of 36.70 ± 22.50 mm. The average test duration was 36.5 seconds and the average path length was 4173 mm.
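
    Odometry from two rigidly mounted mouse sensors can be sketched with planar rigid-body kinematics: for a small motion, a sensor at body position r reports a displacement d = t + dθ·perp(r), so two sensors suffice to recover both the translation t and the rotation dθ. A minimal sketch (the sensor positions are illustrative, not ArmAssist's layout):

```python
def mouse_odometry(d1, d2, r1, r2):
    """Small-motion body twist from two optical mouse sensors.

    Each sensor i, mounted at body position ri, reports displacement
    di = t + dtheta * perp(ri)  (planar rigid body, small angles).
    Subtracting the two readings isolates dtheta; substituting back
    recovers the translation t.
    """
    rx, ry = r1[0] - r2[0], r1[1] - r2[1]
    px, py = -ry, rx  # perp() of the sensor-offset vector
    dtheta = ((d1[0] - d2[0]) * px + (d1[1] - d2[1]) * py) / (rx * rx + ry * ry)
    # t = d1 - dtheta * perp(r1)
    t = (d1[0] + dtheta * r1[1], d1[1] - dtheta * r1[0])
    return t, dtheta
```

    Integrating (t, dtheta) over time yields the relative pose, which the OSR landmarks then anchor to the absolute mat frame.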

  14. Design, Implementation and Validation of the Three-Wheel Holonomic Motion System of the Assistant Personal Robot (APR).

    PubMed

    Moreno, Javier; Clotet, Eduard; Lupiañez, Ruben; Tresanchez, Marcel; Martínez, Dani; Pallejà, Tomàs; Casanovas, Jordi; Palacín, Jordi

    2016-10-10

    This paper presents the design, implementation and validation of the three-wheel holonomic motion system of a mobile robot designed to operate in homes. The holonomic motion system is described in terms of mechanical design and electronic control. The paper analyzes the kinematics of the motion system and validates the trajectory estimation by comparing the displacement estimated with the internal odometry of the motors against the displacement estimated with a SLAM procedure based on LIDAR information. Results obtained in different experiments have shown a difference of less than 30 mm between the positions estimated with SLAM and with odometry, and a difference in the angular orientation of the mobile robot of less than 5° for absolute displacements up to 1000 mm.
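
    The inverse kinematics of a three-wheel holonomic base can be sketched by projecting the commanded body velocity onto each omni wheel's drive direction and adding the rotational term; the wheel angles and platform radius below are generic assumptions, not the APR's actual geometry:

```python
import math

R = 0.19  # distance from platform centre to each wheel, metres (assumed)
WHEEL_ANGLES = [math.radians(a) for a in (0.0, 120.0, 240.0)]

def wheel_speeds(vx, vy, omega):
    """Rim speed of each omni wheel for body velocity (vx, vy) m/s
    and yaw rate omega rad/s: the body velocity projected on the
    wheel's drive direction plus the rotational term R*omega."""
    return [
        -math.sin(a) * vx + math.cos(a) * vy + R * omega
        for a in WHEEL_ANGLES
    ]
```

    Forward odometry inverts the same 3x3 linear relation from measured wheel speeds, which is what the motors' internal odometry integrates.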

  15. Design, Implementation and Validation of the Three-Wheel Holonomic Motion System of the Assistant Personal Robot (APR)

    PubMed Central

    Moreno, Javier; Clotet, Eduard; Lupiañez, Ruben; Tresanchez, Marcel; Martínez, Dani; Pallejà, Tomàs; Casanovas, Jordi; Palacín, Jordi

    2016-01-01

    This paper presents the design, implementation and validation of the three-wheel holonomic motion system of a mobile robot designed to operate in homes. The holonomic motion system is described in terms of mechanical design and electronic control. The paper analyzes the kinematics of the motion system and validates the trajectory estimation by comparing the displacement estimated with the internal odometry of the motors against the displacement estimated with a SLAM procedure based on LIDAR information. Results obtained in different experiments have shown a difference of less than 30 mm between the positions estimated with SLAM and with odometry, and a difference in the angular orientation of the mobile robot of less than 5° for absolute displacements up to 1000 mm. PMID:27735857

  16. Motion Estimation Utilizing Range Detection-Enhanced Visual Odometry

    NASA Technical Reports Server (NTRS)

    Morris, Daniel Dale (Inventor); Chang, Hong (Inventor); Friend, Paul Russell (Inventor); Chen, Qi (Inventor); Graf, Jodi Seaborn (Inventor)

    2016-01-01

    A motion determination system is disclosed. The system may receive a first and a second camera image from a camera, the first camera image received earlier than the second camera image. The system may identify corresponding features in the first and second camera images. The system may receive range data comprising at least one of a first and a second range data from a range detection unit, corresponding to the first and second camera images, respectively. The system may determine first positions and the second positions of the corresponding features using the first camera image and the second camera image. The first positions or the second positions may be determined by also using the range data. The system may determine a change in position of the machine based on differences between the first and second positions, and a VO-based velocity of the machine based on the determined change in position.
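
    Determining the change in position from corresponding 3D feature positions is a rigid-registration problem. A minimal sketch using the Kabsch (SVD) algorithm, one standard solution; the patent abstract does not specify which method the system uses:

```python
import numpy as np

def motion_from_features(P1, P2, dt):
    """Least-squares rigid motion between two sets of corresponding
    3D feature positions (Kabsch algorithm), plus a VO-style speed.

    P1, P2: (N, 3) arrays of the same features at the two frame
    times; dt: time between frames in seconds.
    """
    c1, c2 = P1.mean(0), P2.mean(0)
    H = (P1 - c1).T @ (P2 - c2)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard keeps the result a proper rotation.
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    Rm = Vt.T @ S @ U.T
    t = c2 - Rm @ c1
    return Rm, t, np.linalg.norm(t) / dt
```

    Range data enters this picture by supplying the metric depth that turns image features into the 3D positions P1 and P2.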

  17. FlyCap: Markerless Motion Capture Using Multiple Autonomous Flying Cameras.

    PubMed

    Xu, Lan; Liu, Yebin; Cheng, Wei; Guo, Kaiwen; Zhou, Guyue; Dai, Qionghai; Fang, Lu

    2017-07-18

    Aiming at automatic, convenient and non-intrusive motion capture, this paper presents a new-generation markerless motion capture technique, the FlyCap system, to capture surface motions of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles (UAVs), each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target, who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth data of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the visual-odometry information provided by the UAV platform, and formulate the surface tracking problem as a non-linear objective function that can be linearized and effectively minimized through a Gauss-Newton method. Quantitative and qualitative experimental results demonstrate plausible surface and motion reconstruction results.

  18. An innovative localisation algorithm for railway vehicles

    NASA Astrophysics Data System (ADS)

    Allotta, B.; D'Adamio, P.; Malvezzi, M.; Pugi, L.; Ridolfi, A.; Rindi, A.; Vettori, G.

    2014-11-01

    In modern railway automatic train protection and automatic train control systems, odometry is a safety-relevant on-board subsystem which estimates the instantaneous speed and the travelled distance of the train; high reliability of the odometry estimate is fundamental, since an error on the train position may lead to a potentially dangerous overestimation of the distance available for braking. To improve the accuracy of the odometry estimate, data fusion of different inputs coming from a redundant sensor layout may be used. The aim of this work has been to develop an innovative localisation algorithm for railway vehicles able to enhance the performance, in terms of speed and position estimation accuracy, of classical odometry algorithms such as the Italian Sistema Controllo Marcia Treno (SCMT). The proposed strategy consists of sensor fusion between the information coming from a tachometer and an Inertial Measurement Unit (IMU). The sensor outputs have been simulated through a 3D multibody model of a railway vehicle. The work has included the development of a custom IMU, designed by ECM S.p.A. to meet their industrial and business requirements. The industrial requirements have to be compliant with the European Train Control System (ETCS) standards: the European Rail Traffic Management System (ERTMS), a project developed by the European Union to improve interoperability among different countries, in particular as regards train control and command systems, fixes standard values for odometric (ODO) performance in terms of speed and travelled distance estimation. The reliability of the ODO estimation has to be assessed against the allowed speed profiles.
The results of the currently used ODO algorithms can be improved, especially in case of degraded adhesion conditions; it has been verified in the simulation environment that the results of the proposed localisation algorithm are always compliant with the ERTMS requirements. The estimation strategy performs well also under degraded adhesion conditions and could be put on board high-speed railway vehicles; it represents an accurate and reliable solution. The IMU board is tested via a dedicated Hardware-in-the-Loop (HIL) test rig, which includes an industrial robot able to replicate the motion of the railway vehicle. Through the generated experimental outputs, the performance of the innovative localisation algorithm has been evaluated: the HIL test rig made it possible to test the proposed algorithm while avoiding expensive and time-consuming on-track tests, with encouraging results. In fact, the preliminary results show a significant improvement of the position and speed estimation performance compared to that obtained with the SCMT algorithms currently in use on the Italian railway network.
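
    Tachometer/IMU speed fusion of the kind described can be sketched as a scalar Kalman filter: propagate the speed estimate with inertial acceleration, then correct it with each tachometer reading. The noise parameters below are illustrative assumptions; this is a sketch, not the SCMT or ECM implementation:

```python
def fuse_speed(accels, tach_speeds, dt, q=0.05, r=0.5):
    """Scalar Kalman filter over train speed.

    accels: IMU longitudinal accelerations (m/s^2) per step;
    tach_speeds: tachometer speed measurements (m/s) per step;
    q, r: assumed process and measurement noise variances.
    Returns the fused speed estimate at each step.
    """
    v, P = tach_speeds[0], 1.0
    out = []
    for a, z in zip(accels, tach_speeds):
        # Predict with inertial acceleration (immune to wheel slide).
        v, P = v + a * dt, P + q
        # Correct with the tachometer measurement.
        K = P / (P + r)
        v, P = v + K * (z - v), (1 - K) * P
        out.append(v)
    return out
```

    Under degraded adhesion, r for the tachometer would be inflated (the wheel slides), so the IMU prediction dominates, which is the intuition behind the fusion's robustness.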

  19. Honeybee Odometry: Performance in Varying Natural Terrain

    PubMed Central

    Tautz, Juergen; Zhang, Shaowu; Spaethe, Johannes; Brockmann, Axel; Si, Aung

    2004-01-01

    Recent studies have shown that honeybees flying through short, narrow tunnels with visually textured walls perform waggle dances that indicate a much greater flight distance than that actually flown. These studies suggest that the bee's “odometer” is driven by the optic flow (image motion) that is experienced during flight. One might therefore expect that, when bees fly to a food source through a varying outdoor landscape, their waggle dances would depend upon the nature of the terrain experienced en route. We trained honeybees to visit feeders positioned along two routes, each 580 m long. One route was exclusively over land. The other was initially over land, then over water and, finally, again over land. Flight over water resulted in a significantly flatter slope of the waggle-duration versus distance regression, compared to flight over land. The mean visual contrast of the scenes was significantly greater over land than over water. The results reveal that, in outdoor flight, the honeybee's odometer does not run at a constant rate; rather, the rate depends upon the properties of the terrain. The bee's perception of distance flown is therefore not absolute, but scene-dependent. These findings raise important and interesting questions about how these animals navigate reliably. PMID:15252454
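
    The slope comparison underlying the waggle-duration analysis is an ordinary per-terrain linear regression. A sketch with hypothetical data (not the study's measurements), illustrating how a flatter waggle-duration slope over water reflects the reduced optic flow:

```python
import numpy as np

# Hypothetical waggle-duration (ms) vs. flight-distance (m) samples.
dist = np.array([100.0, 200.0, 300.0, 400.0, 500.0])
land = np.array([120.0, 230.0, 335.0, 450.0, 560.0])   # high-contrast terrain
water = np.array([60.0, 110.0, 165.0, 215.0, 270.0])   # low-contrast terrain

# Slope of the waggle-duration vs. distance regression per terrain.
slope_land = np.polyfit(dist, land, 1)[0]
slope_water = np.polyfit(dist, water, 1)[0]
# A flatter slope over water means the optic-flow odometer
# under-reports distance flown over low-contrast terrain.
```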

  20. Piecewise-Planar StereoScan: Sequential Structure and Motion using Plane Primitives.

    PubMed

    Raposo, Carolina; Antunes, Michel; P Barreto, Joao

    2017-08-09

    The article describes a pipeline that receives as input a sequence of stereo images, and outputs the camera motion and a Piecewise-Planar Reconstruction (PPR) of the scene. The pipeline, named Piecewise-Planar StereoScan (PPSS), works as follows: the planes in the scene are detected for each stereo view using semi-dense depth estimation; the relative pose is computed by a new closed-form minimal algorithm that only uses point correspondences whenever plane detections do not fully constrain the motion; the camera motion and the PPR are jointly refined by alternating between discrete optimization and continuous bundle adjustment; and, finally, the detected 3D planes are segmented in images using a new framework that handles low texture and visibility issues. PPSS is extensively validated in indoor and outdoor datasets, and benchmarked against two popular point-based SfM pipelines. The experiments confirm that plane-based visual odometry is resilient to situations of small image overlap, poor texture, specularity, and perceptual aliasing where the fast LIBVISO2 pipeline fails. The comparison against VisualSfM+CMVS/PMVS shows that, for a similar computational complexity, PPSS is more accurate and provides much more compelling and visually pleasant 3D models. These results strongly suggest that plane primitives are an advantageous alternative to point correspondences for applications of SfM and 3D reconstruction in man-made environments.

  1. Localization of Mobile Robots Using Odometry and an External Vision Sensor

    PubMed Central

    Pizarro, Daniel; Mazo, Manuel; Santiso, Enrique; Marron, Marta; Jimenez, David; Cobreces, Santiago; Losada, Cristina

    2010-01-01

    This paper presents a sensor system for robot localization based on the information obtained from a single camera attached at a fixed place external to the robot. Our approach firstly obtains the 3D geometrical model of the robot based on the projection of its natural appearance in the camera while the robot performs an initialization trajectory. This paper proposes a structure-from-motion solution that uses the odometry sensors inside the robot as a metric reference. Secondly, an online localization method based on sequential Bayesian inference is proposed, which uses the geometrical model of the robot as a link between image measurements and pose estimation. The online approach is resistant to hard occlusions and the experimental setup proposed in this paper shows its effectiveness in real situations. The proposed approach has many applications in both the industrial and service robot fields. PMID:22319318

  2. Localization of mobile robots using odometry and an external vision sensor.

    PubMed

    Pizarro, Daniel; Mazo, Manuel; Santiso, Enrique; Marron, Marta; Jimenez, David; Cobreces, Santiago; Losada, Cristina

    2010-01-01

    This paper presents a sensor system for robot localization based on the information obtained from a single camera attached at a fixed place external to the robot. Our approach firstly obtains the 3D geometrical model of the robot based on the projection of its natural appearance in the camera while the robot performs an initialization trajectory. This paper proposes a structure-from-motion solution that uses the odometry sensors inside the robot as a metric reference. Secondly, an online localization method based on sequential Bayesian inference is proposed, which uses the geometrical model of the robot as a link between image measurements and pose estimation. The online approach is resistant to hard occlusions and the experimental setup proposed in this paper shows its effectiveness in real situations. The proposed approach has many applications in both the industrial and service robot fields.

  3. Estimating Position of Mobile Robots From Omnidirectional Vision Using an Adaptive Algorithm.

    PubMed

    Li, Luyang; Liu, Yun-Hui; Wang, Kai; Fang, Mu

    2015-08-01

    This paper presents a novel and simple adaptive algorithm for estimating the position of a mobile robot with high accuracy in an unknown and unstructured environment by fusing images of an omnidirectional vision system with measurements of odometry and inertial sensors. Based on a new derivation where the omnidirectional projection can be linearly parameterized by the positions of the robot and natural feature points, we propose a novel adaptive algorithm, which is similar to the Slotine-Li algorithm in model-based adaptive control, to estimate the robot's position by using the tracked feature points in image sequence, the robot's velocity, and orientation angles measured by odometry and inertial sensors. It is proved that the adaptive algorithm leads to global exponential convergence of the position estimation errors to zero. Simulations and real-world experiments are performed to demonstrate the performance of the proposed algorithm.

  4. PRoViScout: a planetary scouting rover demonstrator

    NASA Astrophysics Data System (ADS)

    Paar, Gerhard; Woods, Mark; Gimkiewicz, Christiane; Labrosse, Frédéric; Medina, Alberto; Tyler, Laurence; Barnes, David P.; Fritz, Gerald; Kapellos, Konstantinos

    2012-01-01

    Mobile systems exploring planetary surfaces will in the future require more autonomy than today. The EU FP7-SPACE project PRoViScout (2010-2012) establishes the building blocks of such autonomous exploration systems in terms of robotic vision through a decision-based combination of navigation and scientific target selection, and integrates them into a framework ready for, and exposed to, field demonstration. The PRoViScout on-board system consists of mission management components such as an Executive, a Mars Mission On-Board Planner and Scheduler, a Science Assessment Module, and Navigation & Vision Processing modules. The platform hardware consists of the rover with its sensors and pointing devices. We report on the major building blocks and their functions and interfaces, with emphasis on the computer vision parts such as image acquisition (using a novel zoomed 3D Time-of-Flight & RGB camera), mapping from 3D-TOF data, panoramic image and stereo reconstruction, hazard and slope maps, visual odometry, and the recognition of potential scientifically interesting targets.

  5. Validation of Underwater Sensor Package Using Feature Based SLAM

    PubMed Central

    Cain, Christopher; Leonessa, Alexander

    2016-01-01

    Robotic vehicles working in new, unexplored environments must be able to locate themselves in the environment while constructing a picture of the objects in the environment that could act as obstacles preventing the vehicles from completing their desired tasks. In enclosed environments, underwater range sensors based on acoustics suffer performance issues due to reflections. Additionally, their relatively high cost makes them less than ideal for use on low-cost vehicles designed to operate underwater. In this paper we propose a sensor package composed of a downward-facing camera, which is used to perform feature-tracking-based visual odometry, and a custom vision-based two-dimensional rangefinder that can be used on low-cost underwater unmanned vehicles. In order to examine the performance of this sensor package in a SLAM framework, experimental tests are performed using an unmanned ground vehicle and two feature-based SLAM algorithms, the extended Kalman filter based approach and the Rao-Blackwellized particle filter based approach, to validate the sensor package. PMID:26999142

  6. Precise visual navigation using multi-stereo vision and landmark matching

    NASA Astrophysics Data System (ADS)

    Zhu, Zhiwei; Oskiper, Taragay; Samarasekera, Supun; Kumar, Rakesh

    2007-04-01

    Traditional vision-based navigation systems often drift over time during navigation. In this paper, we propose a set of techniques which greatly reduce long-term drift and also improve robustness to many failure conditions. In our approach, two pairs of stereo cameras are integrated to form a forward/backward multi-stereo camera system. As a result, the field of view of the system is extended significantly to capture more natural landmarks from the scene. This helps to increase pose estimation accuracy and to reduce failure situations. Secondly, a global landmark matching technique is used to recognize previously visited locations during navigation; using the matched landmarks, a pose correction technique eliminates the accumulated navigation drift. Finally, in order to further improve the robustness of the system, measurements from low-cost Inertial Measurement Unit (IMU) and Global Positioning System (GPS) sensors are integrated with the visual odometry in an extended Kalman filtering framework. Our system is significantly more accurate and robust than previously published techniques (1~5% localization error) over long-distance navigation, both indoors and outdoors. Real-world experiments on a human-worn system show that the location can be estimated within 1 meter over 500 meters (around 0.1% localization error on average) without the use of GPS information.
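
    Landmark-based pose correction can be sketched in its simplest 2D form: on re-recognizing a mapped landmark, the discrepancy between where it is observed now and where it was first mapped estimates the accumulated drift, which is then subtracted from the pose. A toy sketch, not the authors' actual filtering formulation:

```python
def correct_drift(pose, landmark_id, observed, landmark_db):
    """Re-anchor a 2D pose using a re-recognized global landmark.

    pose: current (x, y) estimate; observed: landmark position as
    currently perceived; landmark_db: id -> first-mapped position.
    """
    if landmark_id not in landmark_db:
        landmark_db[landmark_id] = observed  # first visit: map it
        return pose
    mx, my = landmark_db[landmark_id]
    ox, oy = observed
    drift = (ox - mx, oy - my)  # accumulated odometric error
    return (pose[0] - drift[0], pose[1] - drift[1])
```

    In the full system this correction would be distributed over the trajectory and fused with IMU/GPS in the extended Kalman filter rather than applied as a single jump.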

  7. Allothetic and idiothetic sensor fusion in rat-inspired robot localization

    NASA Astrophysics Data System (ADS)

    Weitzenfeld, Alfredo; Fellous, Jean-Marc; Barrera, Alejandra; Tejera, Gonzalo

    2012-06-01

    We describe a spatial cognition model based on the rat's brain neurophysiology as a basis for new robotic navigation architectures. The model integrates allothetic (external visual landmarks) and idiothetic (internal kinesthetic information) cues to train either rat or robot to learn a path enabling it to reach a goal from multiple starting positions. It stands in contrast to most robotic architectures based on SLAM, where a map of the environment is built to provide probabilistic localization information computed from robot odometry and landmark perception. Allothetic cues suffer in general from perceptual ambiguity when trying to distinguish between places with equivalent visual patterns, while idiothetic cues suffer from imprecise motions and limited memory recalls. We experiment with both types of cues in different maze configurations by training rats and robots to find the goal starting from a fixed location, and then testing them to reach the same target from new starting locations. We show that the robot, after having pre-explored a maze, can find a goal with improved efficiency, and is able to (1) learn the correct route to reach the goal, (2) recognize places already visited, and (3) exploit allothetic and idiothetic cues to improve on its performance. We finally contrast our biologically-inspired approach to more traditional robotic approaches and discuss current work in progress.

  8. Position estimation and driving of an autonomous vehicle by monocular vision

    NASA Astrophysics Data System (ADS)

    Hanan, Jay C.; Kayathi, Pavan; Hughlett, Casey L.

    2007-04-01

    Automatic adaptive tracking in real-time for target recognition provided autonomous control of a scale model electric truck. The two-wheel drive truck was modified as an autonomous rover test-bed for vision based guidance and navigation. Methods were implemented to monitor tracking error and ensure a safe, accurate arrival at the intended science target. Some methods are situation independent relying only on the confidence error of the target recognition algorithm. Other methods take advantage of the scenario of combined motion and tracking to filter out anomalies. In either case, only a single calibrated camera was needed for position estimation. Results from real-time autonomous driving tests on the JPL simulated Mars yard are presented. Recognition error was often situation dependent. For the rover case, the background was in motion and may be characterized to provide visual cues on rover travel such as rate, pitch, roll, and distance to objects of interest or hazards. Objects in the scene may be used as landmarks, or waypoints, for such estimations. As objects are approached, their scale increases and their orientation may change. In addition, particularly on rough terrain, these orientation and scale changes may be unpredictable. Feature extraction combined with the neural network algorithm was successful in providing visual odometry in the simulated Mars environment.

  9. NASA Tech Briefs, March 2014

    NASA Technical Reports Server (NTRS)

    2014-01-01

    Topics include: Data Fusion for Global Estimation of Forest Characteristics From Sparse Lidar Data; Debris and Ice Mapping Analysis Tool - Database; Data Acquisition and Processing Software - DAPS; Metal-Assisted Fabrication of Biodegradable Porous Silicon Nanostructures; Post-Growth, In Situ Adhesion of Carbon Nanotubes to a Substrate for Robust CNT Cathodes; Integrated PEMFC Flow Field Design for Gravity-Independent Passive Water Removal; Thermal Mechanical Preparation of Glass Spheres; Mechanistic-Based Multiaxial-Stochastic-Strength Model for Transversely-Isotropic Brittle Materials; Methods for Mitigating Space Radiation Effects, Fault Detection and Correction, and Processing Sensor Data; Compact Ka-Band Antenna Feed with Double Circularly Polarized Capability; Dual-Leadframe Transient Liquid Phase Bonded Power Semiconductor Module Assembly and Bonding Process; Quad First Stage Processor: A Four-Channel Digitizer and Digital Beam-Forming Processor; Protective Sleeve for a Pyrotechnic Reefing Line Cutter; Metabolic Heat Regenerated Temperature Swing Adsorption; CubeSat Deployable Log Periodic Dipole Array; Re-entry Vehicle Shape for Enhanced Performance; NanoRacks-Scale MEMS Gas Chromatograph System; Variable Camber Aerodynamic Control Surfaces and Active Wing Shaping Control; Spacecraft Line-of-Sight Stabilization Using LWIR Earth Signature; Technique for Finding Retro-Reflectors in Flash LIDAR Imagery; Novel Hemispherical Dynamic Camera for EVAs; 360 deg Visual Detection and Object Tracking on an Autonomous Surface Vehicle; Simulation of Charge Carrier Mobility in Conducting Polymers; Observational Data Formatter Using CMOR for CMIP5; Propellant Loading Physics Model for Fault Detection Isolation and Recovery; Probabilistic Guidance for Swarms of Autonomous Agents; Reducing Drift in Stereo Visual Odometry; Future Air-Traffic Management Concepts Evaluation Tool; Examination and A Priori Analysis of a Direct Numerical Simulation Database for High-Pressure 
Turbulent Flows; and Resource-Constrained Application of Support Vector Machines to Imagery.

  10. Trajectories for Locomotion Systems: A Geometric and Computational Approach via Series Expansions

    DTIC Science & Technology

    2004-10-11

    speed controller. The model is endowed with a 100 count per revolution optical encoder for odometry. (2) On-board computation is performed by a single...

  11. High-Performance 3D Articulated Robot Display

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.; Torres, Recaredo J.; Mittman, David S.; Kurien, James A.; Abramyan, Lucy

    2011-01-01

    In the domain of telerobotic operations, the primary challenge facing the operator is to understand the state of the robotic platform. One key aspect of understanding the state is to visualize the physical location and configuration of the platform. As there is a wide variety of mobile robots, the requirements for visualizing their configurations vary diversely across different platforms. There can also be diversity in the mechanical mobility, such as wheeled, tracked, or legged mobility over surfaces. Adaptable 3D articulated robot visualization software can accommodate a wide variety of robotic platforms and environments. The visualization has been used for surface, aerial, space, and water robotic vehicle visualization during field testing. It has been used to enable operations of wheeled and legged surface vehicles, and can be readily adapted to facilitate other mechanical mobility solutions. The 3D visualization can render an articulated 3D model of a robotic platform for any environment. Given the model, the software receives real-time telemetry from the avionics system onboard the vehicle and animates the robot visualization to reflect the telemetered physical state. This is used to track the position and attitude in real time to monitor the progress of the vehicle as it traverses its environment. It is also used to monitor the state of any or all articulated elements of the vehicle, such as arms, legs, or control surfaces. The visualization can also render other sorts of telemetered states visually, such as stress or strains that are measured by the avionics. Such data can be used to color or annotate the virtual vehicle to indicate nominal or off-nominal states during operation. The visualization is also able to render the simulated environment where the vehicle is operating. 
For surface and aerial vehicles, it can render the terrain under the vehicle as the avionics sends it location information (GPS, odometry, or star tracking), and locate the vehicle over or on the terrain correctly. For long traverses over terrain, the visualization can stream in terrain piecewise in order to maintain the current area of interest for the operator without incurring unreasonable resource constraints on the computing platform. The visualization software is designed to run on laptops that can operate in field-testing environments without Internet access, which is a frequently encountered situation when testing in remote locations that simulate planetary environments such as Mars and other planetary bodies.

  12. Marker-Based Multi-Sensor Fusion Indoor Localization System for Micro Air Vehicles.

    PubMed

    Xing, Boyang; Zhu, Quanmin; Pan, Feng; Feng, Xiaoxue

    2018-05-25

    A novel multi-sensor fusion indoor localization algorithm based on ArUco markers is designed in this paper. The proposed ArUco mapping algorithm can build and correct the map of markers online with the Grubbs criterion and K-means clustering, which avoids the map distortion caused by a lack of correction. Based on the concept of multi-sensor information fusion, a federated Kalman filter is utilized to synthesize the multi-source information from markers, optical flow, an ultrasonic sensor, and the inertial sensor, which yields a continuous localization result and effectively reduces the position drift caused by long-term loss of markers in pure marker localization. The proposed algorithm can be easily implemented on hardware consisting of one Raspberry Pi Zero and two STM32 microcontrollers produced by STMicroelectronics (Geneva, Switzerland). Thus, a small-size and low-cost marker-based localization system is presented. The experimental results show that the speed estimation of the proposed system is better than that of Px4flow, and that it achieves centimeter-level mapping and positioning accuracy. The presented system not only gives satisfying localization precision, but can also be extended with other sensors (such as visual odometry, ultra-wideband (UWB) beacons, and lidar) to further improve localization performance. The proposed system can be reliably employed for Micro Aerial Vehicle (MAV) visual localization and robotics control.
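    The master step of a federated Kalman filter combines the local filters' estimates by information weighting. A minimal scalar sketch (the paper's filter operates on vector states with matrix covariances; the function name is illustrative):

```python
def federated_fuse(estimates):
    """Master-filter fusion step of a federated Kalman filter for a
    scalar state: each local filter i supplies (x_i, P_i); the master
    combines them by information weighting.  Vector states would use
    matrix inverses in place of the scalar reciprocals."""
    info = sum(1.0 / p for _, p in estimates)   # total information
    fused_p = 1.0 / info                        # fused covariance
    fused_x = fused_p * sum(x / p for x, p in estimates)
    return fused_x, fused_p
```

    Two equally trusted local estimates average out, while a tighter covariance pulls the fused estimate toward its filter, which is the behavior that lets marker fixes dominate when visible and optical flow take over when markers are lost.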

  13. Human Odometry Verifies the Symmetry Perspective on Bipedal Gaits

    ERIC Educational Resources Information Center

    Turvey, M. T.; Harrison, Steven J.; Frank, Till D.; Carello, Claudia

    2012-01-01

    Bipedal gaits have been classified on the basis of the group symmetry of the minimal network of identical differential equations (alias "cells") required to model them. Primary gaits are characterized by dihedral symmetry, whereas secondary gaits are characterized by a lower, cyclic symmetry. This fact was used in a test of human…

  14. Squeezeposenet: Image Based Pose Regression with Small Convolutional Neural Networks for Real Time Uas Navigation

    NASA Astrophysics Data System (ADS)

    Müller, M. S.; Urban, S.; Jutzi, B.

    2017-08-01

    The number of unmanned aerial vehicles (UAVs) is increasing, since low-cost airborne systems are available to a wide range of users. The outdoor navigation of such vehicles is mostly based on global navigation satellite system (GNSS) methods to obtain the vehicle's trajectory. The drawbacks of satellite-based navigation are failures caused by occlusions and multi-path interference. Besides this, local image-based solutions like Simultaneous Localization and Mapping (SLAM) and Visual Odometry (VO) can, for example, be used to support the GNSS solution by closing trajectory gaps, but are computationally expensive. However, if the trajectory estimation is interrupted or not available, a re-localization is mandatory. In this paper we provide a novel method for GNSS-free and fast image-based pose regression in a known area by utilizing a small convolutional neural network (CNN). With on-board processing in mind, we employ a lightweight CNN called SqueezeNet and use transfer learning to adapt the network to pose regression. Our experiments show promising results for GNSS-free and fast localization.

  15. Going with the flow: a brief history of the study of the honeybee's navigational 'odometer'.

    PubMed

    Srinivasan, Mandyam V

    2014-06-01

    Honeybees navigate to a food source using a sky-based compass to determine their travel direction, and an odometer to register how far they have travelled. The past 20 years have seen a renewed interest in understanding the nature of the odometer. Early work, pioneered by von Frisch and colleagues, hypothesized that travel distance is measured in terms of the energy that is consumed during the journey. More recent studies suggest that visual cues play a role as well. Specifically, bees appear to gauge travel distance by sensing the extent to which the image of the environment moves in the eye during the journey from the hive to the food source. Most of the evidence indicates that travel distance is measured during the outbound journey. Accumulation of odometric errors is restricted by resetting the odometer every time a prominent landmark is passed. When making detours around large obstacles, the odometer registers the total distance of the path that is flown to the destination, and not the "bee-line" distance. Finally, recent studies are revealing that bees can perform odometry in three dimensions.
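    The visually driven odometer hypothesis can be illustrated numerically: at ground clearance h, ground texture sweeping across the eye at angular velocity ω corresponds to translational speed v = hω, so distance is the time-integral of the sensed image motion. A sketch under these assumptions (an illustration of the principle, not a model from the paper):

```python
def visual_odometer(flow_samples, height, dt):
    """Integrate ventral optic flow into travelled distance.  Each sample
    is the angular image velocity omega (rad/s) of the ground texture;
    at clearance `height`, speed v = height * omega, so distance is the
    sum of height * omega * dt over the journey."""
    return sum(height * omega * dt for omega in flow_samples)
```

    Note the known consequence of this scheme: flying the same route closer to the ground produces more image motion and hence a longer odometer reading, which matches experimental results with bees flown through narrow tunnels.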

  16. An Embodied Multi-Sensor Fusion Approach to Visual Motion Estimation Using Unsupervised Deep Networks.

    PubMed

    Shamwell, E Jared; Nothwang, William D; Perlis, Donald

    2018-05-04

    Aimed at improving size, weight, and power (SWaP)-constrained robotic vision-aided state estimation, we describe our unsupervised, deep convolutional-deconvolutional sensor fusion network, Multi-Hypothesis DeepEfference (MHDE). MHDE learns to intelligently combine noisy heterogeneous sensor data to predict several probable hypotheses for the dense, pixel-level correspondence between a source image and an unseen target image. We show how our multi-hypothesis formulation provides increased robustness against dynamic, heteroscedastic sensor and motion noise by computing hypothesis image mappings and predictions at 76–357 Hz depending on the number of hypotheses being generated. MHDE fuses noisy, heterogeneous sensory inputs using two parallel, inter-connected architectural pathways and n (1–20 in this work) multi-hypothesis generating sub-pathways to produce n global correspondence estimates between a source and a target image. We evaluated MHDE on the KITTI Odometry dataset and benchmarked it against the vision-only DeepMatching and Deformable Spatial Pyramids algorithms and were able to demonstrate a significant runtime decrease and a performance increase compared to the next-best performing method.

  17. Continuous Mapping of Tunnel Walls in a Gnss-Denied Environment

    NASA Astrophysics Data System (ADS)

    Chapman, Michael A.; Min, Cao; Zhang, Deijin

    2016-06-01

    The need for reliable systems for capturing precise detail in tunnels has increased as the number of tunnels (e.g., for cars and trucks, trains, subways, mining and other infrastructure) has increased and as the age and subsequent deterioration of these structures has introduced structural degradation and eventual failures. Due to the hostile environments encountered in tunnels, mobile mapping systems are plagued with various problems such as loss of GNSS signals, drift of inertial measurement systems, low lighting conditions, dust and poor surface textures for feature identification and extraction. A tunnel mapping system using alternate sensors and algorithms that can deliver precise coordinates and feature attributes from surfaces along the entire tunnel path is presented. This system employs image bridging, or visual odometry, to estimate precise sensor positions and orientations. The fundamental concept is the use of image sequences to geometrically extend the control information in the absence of absolute positioning data sources. This is a non-trivial problem due to changes in scale, perceived resolution, image contrast and a lack of salient features. The sensors employed include forward-looking high-resolution digital frame cameras coupled with auxiliary light sources. In addition, a high-frequency lidar system and a thermal imager are included to offer three-dimensional point clouds of the tunnel walls along with thermal images for moisture detection. The mobile mapping system is equipped with an array of 16 cameras and light sources to capture the tunnel walls. Continuous images are produced using a semi-automated mosaicking process. Results of preliminary experimentation are presented to demonstrate the effectiveness of the system for the generation of seamless, precise tunnel maps.
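    Image bridging amounts to chaining relative frame-to-frame motion estimates into a global trajectory in the absence of absolute fixes. A minimal 2D pose-composition sketch (illustrative, not the paper's full six-degree-of-freedom formulation):

```python
import math

def compose(pose, delta):
    """Compose a global 2D pose (x, y, heading) with a relative motion
    (dx, dy, dtheta) expressed in the sensor frame, the way visual
    odometry chains frame-to-frame estimates along a tunnel."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

def integrate(start, deltas):
    """Dead-reckon a trajectory by folding relative estimates in order."""
    pose = start
    for d in deltas:
        pose = compose(pose, d)
    return pose
```

    Because each estimate is composed onto the last, small per-frame errors accumulate along the tunnel, which is why changes in scale, contrast, and salient-feature density make this a non-trivial problem.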

  18. Integrated multi-sensor fusion for mapping and localization in outdoor environments for mobile robots

    NASA Astrophysics Data System (ADS)

    Emter, Thomas; Petereit, Janko

    2014-05-01

    An integrated multi-sensor fusion framework for localization and mapping for autonomous navigation in unstructured outdoor environments, based on extended Kalman filters (EKF), is presented. The sensors for localization include an inertial measurement unit, a GPS, a fiber optic gyroscope, and wheel odometry. Additionally, a 3D LIDAR is used for simultaneous localization and mapping (SLAM). A 3D map is built while a localization in the 2D map established so far is concurrently estimated with the current scan of the LIDAR. Despite the longer run-time of the SLAM algorithm compared to the EKF update, a high update rate is still guaranteed by carefully joining and synchronizing two parallel localization estimators.
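    The EKF backbone of such a framework alternates dead-reckoning predictions with absolute corrections. A deliberately scalar sketch of one predict/update cycle (the paper's filter uses vector states, Jacobians, and several sensor streams):

```python
def ekf_step(x, p, u, z, q, r):
    """One predict/update cycle of a scalar Kalman filter: predict with
    an odometry increment u (process noise q), then correct with an
    absolute position fix z (measurement noise r)."""
    # Predict: dead-reckon forward with the odometry increment.
    x_pred = x + u
    p_pred = p + q
    # Update: blend in the absolute fix via the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new
```

    The synchronization problem the paper addresses arises because the SLAM correction corresponding to an older scan must be merged into a filter that has already predicted forward past that timestamp.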

  19. Control of a Quadcopter Aerial Robot Using Optic Flow Sensing

    NASA Astrophysics Data System (ADS)

    Hurd, Michael Brandon

    This thesis focuses on the motion control of a custom-built quadcopter aerial robot using optic flow sensing. Optic flow sensing is a vision-based approach that can give a robot the ability to fly in global positioning system (GPS) denied environments, such as indoor environments. In this work, optic flow sensors are used to stabilize the motion of the quadcopter robot, where an optic flow algorithm provides odometry measurements to the quadcopter's central processing unit to monitor the flight heading. The optic-flow sensor and algorithm are capable of gathering and processing images at 250 frames/sec, and the sensor package weighs 2.5 g with a footprint of 6 cm2 in area. The odometry value from the optic flow sensor is then used as feedback in a simple proportional-integral-derivative (PID) controller on the quadcopter. Experimental results are presented to demonstrate the effectiveness of using optic flow for controlling the motion of the quadcopter aerial robot. The technique presented herein can be applied to different types of aerial robotic systems or unmanned aerial vehicles (UAVs), as well as unmanned ground vehicles (UGVs).
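    A PID loop closed on an odometry measurement, as described above, can be sketched as follows (the class interface and gains are illustrative, not taken from the thesis):

```python
class PID:
    """Textbook PID loop; the quadcopter feeds the optic-flow odometry
    value back as the measured quantity each control cycle."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured):
        """Return the control command for one cycle of period dt."""
        error = setpoint - measured
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

    At a 250 frames/sec sensor rate, dt would be 4 ms, and the returned command would feed the motor-mixing stage.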

  20. FLEXnav: a fuzzy logic expert dead-reckoning system for the Segway RMP

    NASA Astrophysics Data System (ADS)

    Ojeda, Lauro; Raju, Mukunda; Borenstein, Johann

    2004-09-01

    Most mobile robots use a combination of absolute and relative sensing techniques for position estimation. Relative positioning techniques are generally known as dead-reckoning. Many systems use odometry as their only dead-reckoning means. However, in recent years fiber optic gyroscopes have become more affordable and are being used on many platforms to supplement odometry, especially in indoor applications. Still, if the terrain is not level (i.e., rugged or rolling terrain), the tilt of the vehicle introduces errors into the conversion of gyro readings to vehicle heading. In order to overcome this problem, vehicle tilt must be measured and factored into the heading computation. A unique new mobile robot is the Segway Robotics Mobility Platform (RMP). This functionally close relative of the innovative Segway Human Transporter (HT) dynamically stabilizes a statically unstable single-axle robot, based on the principle of the inverted pendulum. While this approach works very well for human transportation, it introduces a unique set of challenges for navigation equipment using an onboard gyro. This is due to the fact that in operation the Segway RMP constantly changes its forward tilt to keep from falling over. This paper introduces our new Fuzzy Logic Expert rule-based navigation (FLEXnav) method for fusing data from multiple gyroscopes and accelerometers in order to accurately estimate the attitude (i.e., heading and tilt) of a mobile robot. The attitude information is then further fused with wheel encoder data to estimate the three-dimensional position of the mobile robot. We have further extended this approach to include the special conditions of operation on the Segway RMP. The paper presents experimental results of a Segway RMP equipped with our system and running over moderately rugged terrain.
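    The tilt problem described above is visible in the standard Euler-angle kinematics: the heading (yaw) rate depends on the roll and pitch of the vehicle, not on the yaw gyro alone. A sketch of the conversion (a textbook relation, not FLEXnav itself):

```python
import math

def heading_rate(p, q, r, roll, pitch):
    """Convert body-frame gyro rates (p, q, r) to heading (yaw) rate,
    compensating for vehicle tilt.  With zero roll and pitch this
    reduces to psi_dot = r, which is why ignoring tilt on rugged
    terrain, or on a constantly pitching Segway RMP, corrupts the
    heading estimate."""
    return (q * math.sin(roll) + r * math.cos(roll)) / math.cos(pitch)
```

    On the RMP, pitch varies continuously even on flat ground, so the yaw gyro reading alone is never the true heading rate; this is the correction that attitude estimation must supply before fusing with wheel encoders.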

  1. Occupancy Grid Map Merging Using Feature Maps

    DTIC Science & Technology

    2010-11-01

    each robot begins exploring at different starting points, once two robots can communicate, they send their odometry data, LIDAR observations, and maps...robots [11]. Moreover, it is relevant to mention that significant success has been achieved in solving SLAM problems when using hybrid maps [12...represents the environment by parametric features. Our method is capable of representing a LIDAR scanned environment map in a parametric fashion. In general

  2. Multi-Target Single Cycle Instrument Placement

    NASA Technical Reports Server (NTRS)

    Pedersen, Liam; Smith, David E.; Deans, Matthew; Sargent, Randy; Kunz, Clay; Lees, David; Rajagopalan, Srikanth; Bualat, Maria

    2005-01-01

    This presentation is about the robotic exploration of Mars using multiple targets command cycle, safe instrument placements, safe operation, and K9 Rover which has a 6 wheel steer rocket-bogey chassis (FIDO, MER), 70% MER size, 1.2 GHz Pentium M laptop running Linux OS, Odometry and compass/inclinometer, CLARAty architecture, 5 DOF manipulator w/CHAMP microscopic camera, SciCams, NavCams and HazCams.

  3. Autonomous Navigation Results from the Mars Exploration Rover (MER) Mission

    NASA Technical Reports Server (NTRS)

    Maimone, Mark; Johnson, Andrew; Cheng, Yang; Willson, Reg; Matthies, Larry H.

    2004-01-01

    In January 2004, the Mars Exploration Rover (MER) mission landed two rovers, Spirit and Opportunity, on the surface of Mars. Several autonomous navigation capabilities were employed in space for the first time in this mission. In the Entry, Descent, and Landing (EDL) phase, both landers used a vision system called the Descent Image Motion Estimation System (DIMES) to estimate horizontal velocity during the last 2000 meters (m) of descent, by tracking features on the ground with a downlooking camera, in order to control retro-rocket firing to reduce horizontal velocity before impact. During surface operations, the rovers navigate autonomously using stereo vision for local terrain mapping and a local, reactive planning algorithm called Grid-based Estimation of Surface Traversability Applied to Local Terrain (GESTALT) for obstacle avoidance. In areas of high slip, stereo vision-based visual odometry has been used to estimate rover motion. As of mid-June, Spirit had traversed 3405 m, of which 1253 m were done autonomously; Opportunity had traversed 1264 m, of which 224 m were autonomous. These results have contributed substantially to the success of the mission and paved the way for increased levels of autonomy in future missions.

  4. PointCom: semi-autonomous UGV control with intuitive interface

    NASA Astrophysics Data System (ADS)

    Rohde, Mitchell M.; Perlin, Victor E.; Iagnemma, Karl D.; Lupa, Robert M.; Rohde, Steven M.; Overholt, James; Fiorani, Graham

    2008-04-01

    Unmanned ground vehicles (UGVs) will play an important role in the nation's next-generation ground force. Advances in sensing, control, and computing have enabled a new generation of technologies that bridge the gap between manual UGV teleoperation and full autonomy. In this paper, we present current research on a unique command and control system for UGVs named PointCom (Point-and-Go Command). PointCom is a semi-autonomous command system for one or multiple UGVs. The system, when complete, will be easy to operate and will enable significant reduction in operator workload by utilizing an intuitive image-based control framework for UGV navigation and allowing a single operator to command multiple UGVs. The project leverages new image processing algorithms for monocular visual servoing and odometry to yield a unique, high-performance fused navigation system. Human Computer Interface (HCI) techniques from the entertainment software industry are being used to develop video-game style interfaces that require little training and build upon the navigation capabilities. By combining an advanced navigation system with an intuitive interface, a semi-autonomous control and navigation system is being created that is robust, user friendly, and less burdensome than many current generation systems.

  5. 2D/3D Visual Tracker for Rover Mast

    NASA Technical Reports Server (NTRS)

    Bajracharya, Max; Madison, Richard W.; Nesnas, Issa A.; Bandari, Esfandiar; Kunz, Clayton; Deans, Matt; Bualat, Maria

    2006-01-01

    A visual-tracker computer program controls an articulated mast on a Mars rover to keep a designated feature (a target) in view while the rover drives toward the target, avoiding obstacles. Several prior visual-tracker programs have been tested on rover platforms; most require very small and well-estimated motion between consecutive image frames, a requirement that is not realistic for a rover on rough terrain. The present visual-tracker program is designed to handle large image motions that lead to significant changes in feature geometry and photometry between frames. When a point is selected in one of the images acquired from stereoscopic cameras on the mast, a stereo triangulation algorithm computes a three-dimensional (3D) location for the target. As the rover moves, its body-mounted cameras feed images to a visual-odometry algorithm, which tracks two-dimensional (2D) corner features and computes their old and new 3D locations. The algorithm rejects points, the 3D motions of which are inconsistent with a rigid-world constraint, and then computes the apparent change in the rover pose (i.e., translation and rotation). The mast pan and tilt angles needed to keep the target centered in the field-of-view of the cameras (thereby minimizing the area over which the 2D-tracking algorithm must operate) are computed from the estimated change in the rover pose, the 3D position of the target feature, and a model of kinematics of the mast. If the motion between the consecutive frames is still large (i.e., 3D tracking was unsuccessful), an adaptive view-based matching technique is applied to the new image. This technique uses correlation-based template matching, in which a feature template is scaled by the ratio between the depth in the original template and the depth of pixels in the new image. This is repeated over the entire search window and the best correlation results indicate the appropriate match. 
The program could be a core for building application programs for systems that require coordination of vision and robotic motion.
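    The pan/tilt computation that re-centres the target can be sketched under strong simplifying assumptions (a single pan axis over a single tilt axis at a fixed mast-head height; the actual program chains the full mast kinematics):

```python
import math

def pan_tilt_to_target(target, mast_height):
    """Pan/tilt angles that point a simplified mast head at a target
    (x, y, z) expressed in the rover frame, with the head located
    `mast_height` above the rover origin.  A real mast model would
    account for each link's offset and the rover's estimated pose
    change since the target was triangulated."""
    x, y, z = target
    pan = math.atan2(y, x)                      # rotate about the vertical axis
    dz = z - mast_height                        # height of target relative to head
    tilt = math.atan2(dz, math.hypot(x, y))     # elevate toward the target
    return pan, tilt
```

    Recomputing these angles from the visual-odometry pose update each frame keeps the target near the image centre, so the 2D tracker only needs to search a small window.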

  6. A floor-map-aided WiFi/pseudo-odometry integration algorithm for an indoor positioning system.

    PubMed

    Wang, Jian; Hu, Andong; Liu, Chunyan; Li, Xin

    2015-03-24

    This paper proposes a scheme for indoor positioning by fusing floor map, WiFi and smartphone sensor data to provide meter-level positioning without additional infrastructure. A topology-constrained K nearest neighbor (KNN) algorithm based on a floor map layout provides the coordinates required to integrate WiFi data with pseudo-odometry (P-O) measurements simulated using a pedestrian dead reckoning (PDR) approach. One method of further improving the positioning accuracy is to use a more effective multi-threshold step detection algorithm, as proposed by the authors. The "go and back" phenomenon caused by incorrect matching of the reference points (RPs) of a WiFi algorithm is eliminated using an adaptive fading-factor-based extended Kalman filter (EKF), taking WiFi positioning coordinates, P-O measurements and fused heading angles as observations. The "cross-wall" problem is solved based on the development of a floor-map-aided particle filter algorithm by weighting the particles, thereby also eliminating the gross-error effects originating from WiFi or P-O measurements. The performance observed in a field experiment performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building on the China University of Mining and Technology (CUMT) campus confirms that the proposed scheme can reliably achieve meter-level positioning.
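    The KNN stage of such a scheme can be sketched without the floor-map topology constraint: average the coordinates of the k reference points whose stored RSS fingerprints best match the observation. Function names and data layouts here are illustrative:

```python
def knn_position(rss, fingerprints, k=3):
    """Plain K-nearest-neighbour WiFi fingerprinting: each fingerprint is
    (rss_vector, (x, y)); return the mean position of the k reference
    points whose stored RSS vectors are closest (Euclidean) to the
    observed one.  The paper additionally restricts candidates by
    floor-map topology, which is omitted here."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    nearest = sorted(fingerprints, key=lambda fp: dist(rss, fp[0]))[:k]
    xs = [pos[0] for _, pos in nearest]
    ys = [pos[1] for _, pos in nearest]
    return sum(xs) / len(xs), sum(ys) / len(ys)
```

    The "go and back" failure the paper targets happens when this matching jumps between distant reference points; the proposed EKF with an adaptive fading factor smooths exactly those jumps.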

  7. Real-Time Identification of Wheel Terrain Interaction Models for Enhanced Autonomous Vehicle Mobility

    DTIC Science & Technology

    2014-04-24

    [Figure residue from PDF extraction: bar charts of position estimation error (cm, roughly 0–12 scale) comparing a color-statistics approach (after Angelova et al.) with the average slip error, for both local and global pose.] ...(get some kind of clearance for releasing pose and odometry data) collected at the following sites – Taylor, Gascola, Somerset, Fort Bliss and

  8. A Floor-Map-Aided WiFi/Pseudo-Odometry Integration Algorithm for an Indoor Positioning System

    PubMed Central

    Wang, Jian; Hu, Andong; Liu, Chunyan; Li, Xin

    2015-01-01

    This paper proposes a scheme for indoor positioning by fusing floor map, WiFi and smartphone sensor data to provide meter-level positioning without additional infrastructure. A topology-constrained K nearest neighbor (KNN) algorithm based on a floor map layout provides the coordinates required to integrate WiFi data with pseudo-odometry (P-O) measurements simulated using a pedestrian dead reckoning (PDR) approach. One method of further improving the positioning accuracy is to use a more effective multi-threshold step detection algorithm, as proposed by the authors. The “go and back” phenomenon caused by incorrect matching of the reference points (RPs) of a WiFi algorithm is eliminated using an adaptive fading-factor-based extended Kalman filter (EKF), taking WiFi positioning coordinates, P-O measurements and fused heading angles as observations. The “cross-wall” problem is solved based on the development of a floor-map-aided particle filter algorithm by weighting the particles, thereby also eliminating the gross-error effects originating from WiFi or P-O measurements. The performance observed in a field experiment performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building on the China University of Mining and Technology (CUMT) campus confirms that the proposed scheme can reliably achieve meter-level positioning. PMID:25811224

  9. Robust Parallel Motion Estimation and Mapping with Stereo Cameras in Underground Infrastructure

    NASA Astrophysics Data System (ADS)

    Liu, Chun; Li, Zhengning; Zhou, Yuan

    2016-06-01

    We developed a novel robust motion estimation method for localization and mapping in underground infrastructure using a pre-calibrated rigid stereo camera rig. Localization and mapping in underground infrastructure is important to safety, yet it is also nontrivial, since most underground infrastructure has poor lighting conditions and featureless structure. Overcoming these difficulties, we found that a parallel system is more efficient than the EKF-based SLAM approach, since a parallel system divides the motion estimation and 3D mapping tasks into separate threads, eliminating the data-association problem that is quite an issue in SLAM. Moreover, the motion estimation thread takes advantage of a state-of-the-art robust visual odometry algorithm which functions well under low illumination and provides accurate pose information. We designed and built an unmanned vehicle and used the vehicle to collect a dataset in an underground garage. The parallel system was evaluated on this dataset. Motion estimation results indicated a relative position error of 0.3%, and 3D mapping results showed a mean position error of 13 cm. Off-line processing reduced the position error to 2 cm. Performance evaluation on the actual dataset showed that our system is capable of robust motion estimation and accurate 3D mapping in poorly illuminated and featureless underground environments.

  10. Tracking Control of Mobile Robots Localized via Chained Fusion of Discrete and Continuous Epipolar Geometry, IMU and Odometry.

    PubMed

    Tick, David; Satici, Aykut C; Shen, Jinglin; Gans, Nicholas

    2013-08-01

    This paper presents a novel navigation and control system for autonomous mobile robots that includes path planning, localization, and control. A unique vision-based pose and velocity estimation scheme utilizing both the continuous and discrete forms of the Euclidean homography matrix is fused with inertial and optical encoder measurements to estimate the position, orientation, and velocity of the robot and ensure accurate localization and control signals. A depth estimation system is integrated in order to overcome the loss of scale inherent in vision-based estimation. A path following control system is introduced that is capable of guiding the robot along a designated curve. Stability analysis is provided for the control system and experimental results are presented that prove the combined localization and control system performs with high accuracy.

  11. Performance Analysis and Odometry Improvement of an Omnidirectional Mobile Robot for Outdoor Terrain

    DTIC Science & Technology

    2011-09-01

    [Equation residue from PDF extraction: Eq. (1), a kinematic relation with sin/cos terms linking the planar velocity vectors x_i of the i-th module and x_b of the body.] where x_i and x_b are the planar velocity vectors at the i-th... variables given by an operator. The wheel angular velocities, ω_i,L and ω_i,R, that yield the desired i-th ASOC planar velocity are formulated as follows

  12. Improving Odometric Accuracy for an Autonomous Electric Cart.

    PubMed

    Toledo, Jonay; Piñeiro, Jose D; Arnay, Rafael; Acosta, Daniel; Acosta, Leopoldo

    2018-01-12

    In this paper, a study of the odometric system for the autonomous cart Verdino, an electric vehicle based on a golf cart, is presented. A mathematical model of the odometric system is derived from the cart movement equations and is used to compute the vehicle position and orientation. The inputs of the system are the odometry encoders, and the model uses the wheel diameter and the distance between wheels as parameters. With this model, a least-squares minimization is performed in order to obtain the best nominal parameters. The model is then updated with a real-time wheel diameter measurement, improving the accuracy of the results. A neural network model is also used to learn the odometric model from data. Tests are made using this neural network in several configurations, and the results are compared to the mathematical model, showing that the neural network can outperform the first proposed model.
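    The kind of odometric model calibrated here is the classic differential-drive update, whose parameters are exactly the wheel diameter and the distance between wheels. A sketch with illustrative names, using midpoint heading integration:

```python
import math

def dead_reckon(pose, ticks_l, ticks_r, ticks_per_rev, wheel_diameter, track):
    """Classic differential-drive odometry update: encoder ticks become
    per-wheel arc lengths via the wheel circumference, and the wheel
    base (`track`) converts their difference into a heading change.
    These are the parameters a least-squares calibration would tune."""
    x, y, th = pose
    circ = math.pi * wheel_diameter
    dl = ticks_l / ticks_per_rev * circ        # left wheel arc length
    dr = ticks_r / ticks_per_rev * circ        # right wheel arc length
    ds = (dl + dr) / 2.0                       # forward travel
    dth = (dr - dl) / track                    # heading change
    return (x + ds * math.cos(th + dth / 2.0),
            y + ds * math.sin(th + dth / 2.0),
            th + dth)
```

    A small error in `wheel_diameter` scales every distance, while an error in `track` biases every turn, which is why real-time wheel diameter measurement measurably improves accuracy.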

  13. UAV Research at NASA Langley: Towards Safe, Reliable, and Autonomous Operations

    NASA Technical Reports Server (NTRS)

    Davila, Carlos G.

    2016-01-01

    Unmanned Aerial Vehicles (UAVs) are fundamental components in several aspects of research at NASA Langley, such as flight dynamics, mission-driven airframe design, airspace integration demonstrations, atmospheric science projects, and more. In particular, NASA Langley Research Center (Langley) is using UAVs to develop and demonstrate innovative capabilities that meet the autonomy and robotics challenges that are anticipated in science, space exploration, and aeronautics. These capabilities will enable new NASA missions such as asteroid rendezvous and retrieval (ARRM), Mars exploration, in-situ resource utilization (ISRU), pollution measurements in historically inaccessible areas, and the integration of UAVs into our everyday lives, all missions of increasing complexity, distance, pace, and/or accessibility. Building on decades of NASA experience and success in the design, fabrication, and integration of robust and reliable automated systems for space and aeronautics, the Langley Autonomy Incubator seeks to bridge the gap between automation and autonomy by enabling safe autonomous operations via onboard sensing and perception systems in both data-rich and data-deprived environments. The Autonomy Incubator is focused on the challenge of mobility and manipulation in dynamic and unstructured environments by integrating technologies such as computer vision, visual odometry, real-time mapping, path planning, object detection and avoidance, object classification, adaptive control, sensor fusion, machine learning, and natural human-machine teaming. These technologies are implemented in an architectural framework developed in-house for easy integration and interoperability of cutting-edge hardware and software.

  14. Automated Mobile System for Accurate Outdoor Tree Crop Enumeration Using an Uncalibrated Camera.

    PubMed

    Nguyen, Thuy Tuong; Slaughter, David C; Hanson, Bradley D; Barber, Andrew; Freitas, Amy; Robles, Daniel; Whelan, Erin

    2015-07-28

    This paper demonstrates an automated computer vision system for outdoor tree crop enumeration in a seedling nursery. The complete system incorporates both hardware components (including an embedded microcontroller, an odometry encoder, and an uncalibrated digital color camera) and software algorithms (including microcontroller algorithms and the proposed algorithm for tree crop enumeration) required to obtain robust performance in a natural outdoor environment. The enumeration system uses a three-step image analysis process based upon: (1) an orthographic plant projection method integrating a perspective transform with automatic parameter estimation; (2) a plant counting method based on projection histograms; and (3) a double-counting avoidance method based on a homography transform. Experimental results demonstrate the ability to count large numbers of plants automatically with no human effort. Results show that, for tree seedlings having a height up to 40 cm and a within-row tree spacing of approximately 10 cm, the algorithms successfully estimated the number of plants with an average accuracy of 95.2% for trees within a single image and 98% for counting of the whole plant population in a large sequence of images.
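    The projection-histogram counting step (2) can be sketched on a binary top-view mask: sum foreground pixels per column and count sufficiently wide runs above a threshold. The thresholds and mask layout here are illustrative, not the paper's calibrated values:

```python
def count_plants(mask, threshold=1, min_width=2):
    """Count plants in a binary top-view mask (a list of rows of 0/1)
    via a column projection histogram: a run of at least `min_width`
    consecutive columns whose foreground-pixel count reaches
    `threshold` is taken to be one plant."""
    width = len(mask[0])
    hist = [sum(row[c] for row in mask) for c in range(width)]
    count, run = 0, 0
    for h in hist + [0]:          # trailing sentinel closes a final run
        if h >= threshold:
            run += 1
        else:
            if run >= min_width:
                count += 1
            run = 0
    return count
```

    This works because the orthographic projection step (1) aligns the camera view with the row direction, so each seedling collapses to a distinct peak in the column histogram; step (3) then prevents the same peak from being counted in overlapping images.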

  15. Automated Mobile System for Accurate Outdoor Tree Crop Enumeration Using an Uncalibrated Camera

    PubMed Central

    Nguyen, Thuy Tuong; Slaughter, David C.; Hanson, Bradley D.; Barber, Andrew; Freitas, Amy; Robles, Daniel; Whelan, Erin

    2015-01-01

    This paper demonstrates an automated computer vision system for outdoor tree crop enumeration in a seedling nursery. The complete system incorporates both hardware components (including an embedded microcontroller, an odometry encoder, and an uncalibrated digital color camera) and software algorithms (including microcontroller algorithms and the proposed algorithm for tree crop enumeration) required to obtain robust performance in a natural outdoor environment. The enumeration system uses a three-step image analysis process based upon: (1) an orthographic plant projection method integrating a perspective transform with automatic parameter estimation; (2) a plant counting method based on projection histograms; and (3) a double-counting avoidance method based on a homography transform. Experimental results demonstrate the ability to count large numbers of plants automatically with no human effort. Results show that, for tree seedlings having a height up to 40 cm and a within-row tree spacing of approximately 10 cm, the algorithms successfully estimated the number of plants with an average accuracy of 95.2% for trees within a single image and 98% for counting of the whole plant population in a large sequence of images. PMID:26225982

  16. Speed Consistency in the Smart Tachograph.

    PubMed

    Borio, Daniele; Cano, Eduardo; Baldini, Gianmarco

    2018-05-16

    In the transportation sector, safety risks can be significantly reduced by monitoring the behaviour of drivers and by discouraging possible misconduct that entails fatigue and can increase the possibility of accidents. The Smart Tachograph (ST), the new revision of the Digital Tachograph (DT), has been designed with this purpose: to verify that speed limits and compulsory rest periods are respected by drivers. In order to operate properly, the ST periodically checks the consistency of data from different sensors, which could potentially be manipulated to avoid the monitoring of driver behaviour. In this respect, the ST regulation specifies a test procedure to detect motion conflicts originating from inconsistencies between Global Navigation Satellite System (GNSS) and odometry data. This paper provides an experimental evaluation of the speed verification procedure specified by the ST regulation. Several hours of data were collected using three vehicles in light urban and highway environments. The vehicles were equipped with an On-Board Diagnostics (OBD) data reader and a GPS/Galileo receiver. The tests prescribed by the regulation were implemented with specific focus on synchronization aspects. The experimental analysis also considered aspects such as the impact of tunnels and the presence of data gaps. The analysis shows that the metrics selected for the tests are resilient to data gaps, to latencies between GNSS and odometry data, and to simplistic manipulations such as data scaling. The new ST forces an attacker to falsify data from both sensors at the same time and in a coherent way, which makes frauds more difficult to implement than in the current version of the DT.
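    A consistency test of this kind can be sketched as pairing time-aligned GNSS and odometry speed samples, skipping gaps, and flagging a motion conflict when the disagreement is both large and persistent. This is a hedged illustration, not the ST regulation's exact metric or thresholds:

```python
def speed_conflict(gnss, odo, tolerance=10.0, min_ratio=0.5):
    """Flag a motion conflict between two time-aligned speed series
    (km/h).  Samples where either source is None (e.g. a tunnel outage
    on the GNSS side) are skipped, which is what makes a gap-tolerant
    metric possible; a conflict is declared only when more than
    `min_ratio` of the comparable pairs differ by over `tolerance`."""
    diffs = [abs(g - o) for g, o in zip(gnss, odo)
             if g is not None and o is not None]
    if not diffs:
        return False          # nothing comparable, no verdict
    bad = sum(1 for d in diffs if d > tolerance)
    return bad / len(diffs) > min_ratio
```

    Scaling one series (a simplistic fraud) makes every comparable pair disagree, so the test trips; falsifying both series coherently is what the ST design forces an attacker to do.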

  17. Mars Science Laboratory Mission and Science Investigation

    NASA Astrophysics Data System (ADS)

    Grotzinger, John P.; Crisp, Joy; Vasavada, Ashwin R.; Anderson, Robert C.; Baker, Charles J.; Barry, Robert; Blake, David F.; Conrad, Pamela; Edgett, Kenneth S.; Ferdowski, Bobak; Gellert, Ralf; Gilbert, John B.; Golombek, Matt; Gómez-Elvira, Javier; Hassler, Donald M.; Jandura, Louise; Litvak, Maxim; Mahaffy, Paul; Maki, Justin; Meyer, Michael; Malin, Michael C.; Mitrofanov, Igor; Simmonds, John J.; Vaniman, David; Welch, Richard V.; Wiens, Roger C.

    2012-09-01

    Scheduled to land in August of 2012, the Mars Science Laboratory (MSL) Mission was initiated to explore the habitability of Mars. This includes both modern environments and ancient environments recorded by the stratigraphic rock record preserved at the Gale crater landing site. The Curiosity rover has a designed lifetime of at least one Mars year (~23 months), and a drive capability of at least 20 km. Curiosity's science payload was specifically assembled to assess habitability and includes a gas chromatograph-mass spectrometer and gas analyzer that will search for organic carbon in rocks, regolith fines, and the atmosphere (SAM instrument); an x-ray diffractometer that will determine mineralogical diversity (CheMin instrument); focusable cameras that can image landscapes and rock/regolith textures in natural color (MAHLI, MARDI, and Mastcam instruments); an alpha-particle x-ray spectrometer for in situ determination of rock and soil chemistry (APXS instrument); a laser-induced breakdown spectrometer to remotely sense the chemical composition of rocks and minerals (ChemCam instrument); an active neutron spectrometer designed to search for water in rocks/regolith (DAN instrument); a weather station to measure modern-day environmental variables (REMS instrument); and a sensor designed for continuous monitoring of background solar and cosmic radiation (RAD instrument). The various payload elements will work together to detect and study potential sampling targets with remote and in situ measurements; to acquire samples of rock, soil, and atmosphere and analyze them in onboard analytical instruments; and to observe the environment around the rover. 
The 155-km diameter Gale crater was chosen as Curiosity's field site based on several attributes: an interior mountain of ancient flat-lying strata extending almost 5 km above the elevation of the landing site; the lower few hundred meters of the mountain show a progression with relative age from clay-bearing to sulfate-bearing strata, separated by an unconformity from overlying likely anhydrous strata; the landing ellipse is characterized by a mixture of alluvial fan and high thermal inertia/high albedo stratified deposits; and a number of stratigraphically/geomorphically distinct fluvial features. Samples of the crater wall and rim rock, and more recent to currently active surface materials, may also be studied. Gale has a well-defined regional context and strong evidence for a progression through multiple potentially habitable environments. These environments are represented by a stratigraphic record of extraordinary extent, and ensure preservation of a rich record of the environmental history of early Mars. The interior mountain of Gale Crater has been informally designated as Mount Sharp, in honor of the pioneering planetary scientist Robert Sharp. The major subsystems of the MSL Project consist of a single rover (with science payload), a Multi-Mission Radioisotope Thermoelectric Generator, an Earth-Mars cruise stage, an entry, descent, and landing system, a launch vehicle, and the mission operations and ground data systems. The primary communication path for downlink is relay through the Mars Reconnaissance Orbiter. The primary path for uplink to the rover is Direct-from-Earth. The secondary paths for downlink are Direct-to-Earth and relay through the Mars Odyssey orbiter. Curiosity is a scaled version of the 6-wheel drive, 4-wheel steering, rocker bogie system from the Mars Exploration Rovers (MER) Spirit and Opportunity and the Mars Pathfinder Sojourner. 
Like Spirit and Opportunity, Curiosity offers three primary modes of navigation: blind-drive, visual odometry, and visual odometry with hazard avoidance. Terrain maps created from HiRISE (High Resolution Imaging Science Experiment) and other remote sensing data were used to conduct simulated driving with Curiosity in these various modes, and allowed selection of the Gale crater landing site, which requires climbing the base of a mountain to achieve its primary science goals. The Sample Acquisition, Processing, and Handling (SA/SPaH) subsystem is responsible for the acquisition of rock and soil samples from the Martian surface and the processing of these samples into fine particles that are then distributed to the analytical science instruments. The SA/SPaH subsystem is also responsible for the placement of the two contact instruments (APXS, MAHLI) on rock and soil targets. SA/SPaH consists of a robotic arm and turret-mounted devices on the end of the arm, which include a drill, brush, soil scoop, sample processing device, and the mechanical and electrical interfaces to the two contact science instruments. SA/SPaH also includes drill bit boxes, the organic check material, and an observation tray, which are all mounted on the front of the rover, and inlet cover mechanisms that are placed over the SAM and CheMin solid sample inlet tubes on the rover top deck.

  18. NASA Tech Briefs, October 2012

    NASA Technical Reports Server (NTRS)

    2012-01-01

    Topics discussed include: Detection of Chemical Precursors of Explosives; Detecting Methane From Leaking Pipelines and as Greenhouse Gas in the Atmosphere; Onboard Sensor Data Qualification in Human-Rated Launch Vehicles; Rugged, Portable, Real-Time Optical Gaseous Analyzer for Hydrogen Fluoride; A Probabilistic Mass Estimation Algorithm for a Novel 7-Channel Capacitive Sample Verification Sensor; Low-Power Architecture for an Optical Life Gas Analyzer; Online Cable Tester and Rerouter; A Three-Frequency Feed for Millimeter-Wave Radiometry; Capacitance Probe Resonator for Multichannel Electrometer; Inverted Three-Junction Tandem Thermophotovoltaic Modules; Fabrication of Single Crystal MgO Capsules; Inflatable Hangar for Assembly of Large Structures in Space; Mars Aqueous Processing System; Hybrid Filter Membrane; Design for the Structure and the Mechanics of Moballs; Pressure Dome for High-Pressure Electrolyzer; Cascading Tesla Oscillating Flow Diode for Stirling Engine Gas Bearings; Compact, Low-Force, Low-Noise Linear Actuator; Ultra-Compact Motor Controller; Extreme Ionizing-Radiation-Resistant Bacterium; Wideband Single-Crystal Transducer for Bone Characterization; Fluorescence-Activated Cell Sorting of Live Versus Dead Bacterial Cells and Spores; Nonhazardous Urine Pretreatment Method; Laser-Ranging Transponders for Science Investigations of the Moon and Mars; Ka-Band Waveguide Three-Way Serial Combiner for MMIC Amplifiers; Structural Health Monitoring with Fiber Bragg Grating and Piezo Arrays; Low-Gain Circularly Polarized Antenna with Torus-Shaped Pattern; Stereo and IMU-Assisted Visual Odometry for Small Robots; Global Swath and Gridded Data Tiling; GOES-R: Satellite Insight; Aquarius iPhone Application; Monitoring of International Space Station Telemetry Using Shewhart Control Charts; Theory of a Traveling Wave Feed for a Planar Slot Array Antenna; Time Manager Software for a Flight Processor; Simulation of Oxygen Disintegration and Mixing With Hydrogen or Helium at Supercritical Pressure; A Superfluid Pulse Tube Refrigerator Without Moving Parts for Sub-Kelvin Cooling; Sapphire Viewports for a Venus Probe; The Mobile Chamber; Electric Propulsion Induced Secondary Mass Spectroscopy; and Radiation-Tolerant DC-DC Converters.

  19. NASA Tech Briefs, December 2007

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Topics include: Ka-Band TWT High-Efficiency Power Combiner for High-Rate Data Transmission; Reusable, Extensible High-Level Data-Distribution Concept; Processing Satellite Imagery To Detect Waste Tire Piles; Monitoring by Use of Clusters of Sensor-Data Vectors; Circuit and Method for Communication Over DC Power Line; Switched Band-Pass Filters for Adaptive Transceivers; Noncoherent DTTLs for Symbol Synchronization; High-Voltage Power Supply With Fast Rise and Fall Times; Waveguide Calibrator for Multi-Element Probe Calibration; Four-Way Ka-Band Power Combiner; Loss-of-Control-Inhibitor Systems for Aircraft; Improved Underwater Excitation-Emission Matrix Fluorometer; Metrology Camera System Using Two-Color Interferometry; Design and Fabrication of High-Efficiency CMOS/CCD Imagers; Foam Core Shielding for Spacecraft; CHEM-Based Self-Deploying Planetary Storage Tanks; Sequestration of Single-Walled Carbon Nanotubes in a Polymer; PPC750 Performance Monitor; Application-Program-Installer Builder; Using Visual Odometry to Estimate Position and Attitude; Design and Data Management System; Simple, Script-Based Science Processing Archive; Automated Rocket Propulsion Test Management; Online Remote Sensing Interface; Fusing Image Data for Calculating Position of an Object; Implementation of a Point Algorithm for Real-Time Convex Optimization; Handling Input and Output for COAMPS; Modeling and Grid Generation of Iced Airfoils; Automated Identification of Nucleotide Sequences; Balloon Design Software; Rocket Science 101 Interactive Educational Program; Creep Forming of Carbon-Reinforced Ceramic-Matrix Composites; Dog-Bone Horns for Piezoelectric Ultrasonic/Sonic Actuators; Benchtop Detection of Proteins; Recombinant Collagenlike Proteins; Remote Sensing of Parasitic Nematodes in Plants; Direct Coupling From WGM Resonator Disks to Photodetectors; Using Digital Radiography To Image Liquid Nitrogen in Voids; Multiple-Parameter, Low-False-Alarm Fire-Detection Systems; Mosaic-Detector-Based Fluorescence Spectral Imager; Plasmoid Thruster for High Specific-Impulse Propulsion; Analysis Method for Quantifying Vehicle Design Goals; Improved Tracking of Targets by Cameras on a Mars Rover; Sample Caching Subsystem; Multistage Passive Cooler for Spaceborne Instruments; GVIPS Models and Software; and Stowable Energy-Absorbing Rocker-Bogie Suspensions.

  20. Maximally Informative Statistics for Localization and Mapping

    NASA Technical Reports Server (NTRS)

    Deans, Matthew C.

    2001-01-01

    This paper presents an algorithm for localization and mapping for a mobile robot using monocular vision and odometry as its means of sensing. The approach uses the Variable State Dimension filtering (VSDF) framework to combine aspects of Extended Kalman filtering and nonlinear batch optimization. This paper describes two primary improvements to the VSDF. The first is to use an interpolation scheme based on Gaussian quadrature to linearize measurements rather than relying on analytic Jacobians. The second is to replace the inverse covariance matrix in the VSDF with its Cholesky factor to improve the computational complexity. Results of applying the filter to the problem of localization and mapping with omnidirectional vision are presented.
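
    The paper's second improvement, storing the Cholesky factor of the inverse covariance matrix, can be illustrated generically: with A = L Lᵀ, a linear system A x = b is solved by two triangular substitutions, with no explicit inverse ever formed. The small pure-Python routines below are a sketch of that standard linear-algebra idea, not the VSDF implementation.

```python
# Sketch: keep the lower-triangular Cholesky factor L (A = L L^T) of a
# symmetric positive-definite matrix and solve A x = b by two
# triangular solves instead of inverting A.

def cholesky(A):
    """Return lower-triangular L with A = L @ L.T (A must be SPD)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = (A[i][i] - s) ** 0.5
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def solve_cholesky(L, b):
    """Solve (L L^T) x = b via forward then backward substitution."""
    n = len(L)
    y = [0.0] * n
    for i in range(n):                    # forward: L y = b
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):          # backward: L^T x = y
        s = sum(L[k][i] * x[k] for k in range(i + 1, n))
        x[i] = (y[i] - s) / L[i][i]
    return x

A = [[4.0, 2.0], [2.0, 3.0]]
L = cholesky(A)                  # [[2, 0], [1, sqrt(2)]]
x = solve_cholesky(L, [2.0, 3.0])
print(x)  # solves A x = b without forming A^{-1}
```

    Maintaining the factor rather than the inverse is what improves the filter's computational complexity, since updates and solves operate on triangular matrices.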

  1. iGRaND: an invariant frame for RGBD sensor feature detection and descriptor extraction with applications

    NASA Astrophysics Data System (ADS)

    Willis, Andrew R.; Brink, Kevin M.

    2016-06-01

    This article describes a new 3D RGBD image feature, referred to as iGRaND, for use in real-time systems that use these sensors for tracking, motion capture, or robotic vision applications. iGRaND features use a novel local reference frame derived from the image gradient and depth normal (hence iGRaND) that is invariant to scale and viewpoint for Lambertian surfaces. Using this reference frame, Euclidean invariant feature components are computed at keypoints which fuse local geometric shape information with surface appearance information. The performance of the feature for real-time odometry is analyzed, and its computational complexity and accuracy are compared with leading alternative 3D features.

  2. MIMOS II on MER One Year of Mossbauer Spectroscopy on the Surface of Mars: From Jarosite at Meridiani Planum to Goethite at Gusev Crater

    NASA Technical Reports Server (NTRS)

    Klingelhoefer, G.; Rodionov, D. S.; Morris, R. V.; Schroeder, C.; deSouza, P. A.; Ming, D. W.; Yen, A. S.; Bernhardt, B.; Renz, F.; Fleischer, I.

    2005-01-01

    The miniaturized Mössbauer (MB) spectrometer MIMOS II [1] is part of the Athena payload of NASA's twin Mars Exploration Rovers "Spirit" (MER-A) and "Opportunity" (MER-B). It determines the Fe-bearing mineralogy of Martian soils and rocks at the rovers' respective landing sites, Gusev crater and Meridiani Planum. Both spectrometers performed successfully during the first year of operation. Total integration time is about 49 days for MER-A (79 samples) and 34 days for MER-B (85 samples). As a curiosity, it may be interesting to mention that the total odometry of the oscillating part of the MB drive exceeds 35 km for both rovers.

  3. LABRADOR: a learning autonomous behavior-based robot for adaptive detection and object retrieval

    NASA Astrophysics Data System (ADS)

    Yamauchi, Brian; Moseley, Mark; Brookshire, Jonathan

    2013-01-01

    As part of the TARDEC-funded CANINE (Cooperative Autonomous Navigation in a Networked Environment) Program, iRobot developed LABRADOR (Learning Autonomous Behavior-based Robot for Adaptive Detection and Object Retrieval). LABRADOR was based on the rugged, man-portable, iRobot PackBot unmanned ground vehicle (UGV) equipped with an explosives ordnance disposal (EOD) manipulator arm and a custom gripper. For LABRADOR, we developed a vision-based object learning and recognition system that combined a TLD (track-learn-detect) filter based on object shape features with a color-histogram-based object detector. Our vision system was able to learn in real-time to recognize objects presented to the robot. We also implemented a waypoint navigation system based on fused GPS, IMU (inertial measurement unit), and odometry data. We used this navigation capability to implement autonomous behaviors capable of searching a specified area using a variety of robust coverage strategies - including outward spiral, random bounce, random waypoint, and perimeter following behaviors. While the full system was not integrated in time to compete in the CANINE competition event, we developed useful perception, navigation, and behavior capabilities that may be applied to future autonomous robot systems.

  4. Opportunity's View After Drive on Sol 1806 (Polar)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings just after driving 60.86 meters (200 feet) on the 1,806th Martian day, or sol, of Opportunity's surface mission (Feb. 21, 2009). North is at the center; south at both ends.

    Tracks from the drive extend northward across dark-toned sand ripples and light-toned patches of exposed bedrock in the Meridiani Planum region of Mars. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches).

    Engineers designed the Sol 1806 drive to be driven backwards as a strategy to redistribute lubricant in the rover's wheels. The right-front wheel had been showing signs of increased friction.

    The rover's position after the Sol 1806 drive was about 2 kilometers (1.2 miles) south-southwest of Victoria Crater. Cumulative odometry was 14.74 kilometers (9.16 miles) since landing in January 2004, including 2.96 kilometers (1.84 miles) since climbing out of Victoria Crater on the west side of the crater on Sol 1634 (August 28, 2008).

    This view is presented as a polar projection with geometric seam correction.

  5. Opportunity's View After Drive on Sol 1806 (Vertical)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings just after driving 60.86 meters (200 feet) on the 1,806th Martian day, or sol, of Opportunity's surface mission (Feb. 21, 2009). North is at the center; south at both ends.

    Tracks from the drive extend northward across dark-toned sand ripples and light-toned patches of exposed bedrock in the Meridiani Planum region of Mars. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches).

    Engineers designed the Sol 1806 drive to be driven backwards as a strategy to redistribute lubricant in the rover's wheels. The right-front wheel had been showing signs of increased friction.

    The rover's position after the Sol 1806 drive was about 2 kilometers (1.2 miles) south-southwest of Victoria Crater. Cumulative odometry was 14.74 kilometers (9.16 miles) since landing in January 2004, including 2.96 kilometers (1.84 miles) since climbing out of Victoria Crater on the west side of the crater on Sol 1634 (August 28, 2008).

    This view is presented as a vertical projection with geometric seam correction.

  6. Opportunity's View After Drive on Sol 1806

    NASA Technical Reports Server (NTRS)

    2009-01-01

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings just after driving 60.86 meters (200 feet) on the 1,806th Martian day, or sol, of Opportunity's surface mission (Feb. 21, 2009). North is at the center; south at both ends.

    Tracks from the drive extend northward across dark-toned sand ripples and light-toned patches of exposed bedrock in the Meridiani Planum region of Mars. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches).

    Engineers designed the Sol 1806 drive to be driven backwards as a strategy to redistribute lubricant in the rover's wheels. The right-front wheel had been showing signs of increased friction.

    The rover's position after the Sol 1806 drive was about 2 kilometers (1.2 miles) south-southwest of Victoria Crater. Cumulative odometry was 14.74 kilometers (9.16 miles) since landing in January 2004, including 2.96 kilometers (1.84 miles) since climbing out of Victoria Crater on the west side of the crater on Sol 1634 (August 28, 2008).

    This view is presented as a cylindrical projection with geometric seam correction.

  7. Current state of the art of vision based SLAM

    NASA Astrophysics Data System (ADS)

    Muhammad, Naveed; Fofi, David; Ainouz, Samia

    2009-02-01

    The ability of a robot to localise itself and simultaneously build a map of its environment (Simultaneous Localisation and Mapping or SLAM) is a fundamental characteristic required for autonomous operation of the robot. Vision sensors are very attractive for application in SLAM because of their rich sensory output and cost effectiveness. Different issues are involved in the problem of vision based SLAM and many different approaches exist in order to solve these issues. This paper gives a classification of state-of-the-art vision based SLAM techniques in terms of (i) imaging systems used for performing SLAM, which include single cameras, stereo pairs, multiple camera rigs and catadioptric sensors, (ii) features extracted from the environment in order to perform SLAM, which include point features and line/edge features, (iii) initialisation of landmarks, which can either be delayed or undelayed, (iv) SLAM techniques used, which include Extended Kalman Filtering, Particle Filtering, biologically inspired techniques like RatSLAM, and other techniques like Local Bundle Adjustment, and (v) use of wheel odometry information. The paper also presents the implementation and analysis of stereo pair based EKF SLAM for synthetic data. Results show that the technique works successfully in the presence of considerable sensor noise. We believe that the state of the art presented in the paper can serve as a basis for future research in the area of vision based SLAM. It will permit further research in the area to be carried out in an efficient and application-specific way.
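
    The predict/update cycle underlying the Extended Kalman Filtering technique in category (iv) can be shown in its simplest form. The 1-D linear sketch below is a generic textbook illustration (all numbers are made up), not any system from the survey: odometry drives the prediction step and a vision measurement drives the correction step.

```python
# Generic 1-D Kalman predict/update sketch of the cycle EKF SLAM builds on.

def predict(mean, var, motion, motion_var):
    """Odometry step: propagate the state and inflate uncertainty."""
    return mean + motion, var + motion_var

def update(mean, var, measurement, meas_var):
    """Vision step: fuse a landmark-relative measurement."""
    k = var / (var + meas_var)            # Kalman gain
    return mean + k * (measurement - mean), (1 - k) * var

mean, var = 0.0, 1.0
mean, var = predict(mean, var, motion=1.0, motion_var=0.5)    # -> 1.0, 1.5
mean, var = update(mean, var, measurement=1.2, meas_var=0.5)  # fuse camera fix
print(round(mean, 3), round(var, 3))
```

    In real EKF SLAM the state is a joint vector of robot pose and landmark positions, and the nonlinear motion and measurement models are linearized with Jacobians; the gain-weighted correction shown here is the same in spirit.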

  8. A Low Cost Mobile Robot Based on Proportional Integral Derivative (PID) Control System and Odometer for Education

    NASA Astrophysics Data System (ADS)

    Haq, R.; Prayitno, H.; Dzulkiflih; Sucahyo, I.; Rahmawati, E.

    2018-03-01

    In this article, the development of a low-cost mobile robot based on a PID controller and odometer for education is presented. The PID controller and odometer are applied to control the mobile robot's position. A two-dimensional position vector in the Cartesian coordinate system is provided to the robot controller as the initial and final position. The mobile robot is based on a differential drive and a magnetic rotary encoder sensor, which measures the robot's position from the number of wheel rotations. The odometry method uses data from actuator movements to predict the change of position over time. The mobile robot was tested to reach the final position with three different heading angles, 30°, 45° and 60°, by applying various values of the KP, KI and KD constants.
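
    The two building blocks described here, encoder-based differential-drive odometry and a PID position loop, can be sketched as follows. The wheel geometry, encoder resolution, and gains are illustrative assumptions, not the authors' values.

```python
# Hedged sketch: differential-drive odometry (pose from incremental
# encoder counts) plus a textbook PID controller on the position error.
import math

WHEEL_RADIUS = 0.03      # m, assumed
WHEEL_BASE = 0.15        # m, distance between wheels, assumed
TICKS_PER_REV = 360      # encoder resolution, assumed

def odometry_update(x, y, theta, left_ticks, right_ticks):
    """Advance the pose (x, y, theta) from incremental encoder counts."""
    dl = 2 * math.pi * WHEEL_RADIUS * left_ticks / TICKS_PER_REV
    dr = 2 * math.pi * WHEEL_RADIUS * right_ticks / TICKS_PER_REV
    d = (dl + dr) / 2.0                 # distance travelled by the center
    theta += (dr - dl) / WHEEL_BASE     # heading change
    return x + d * math.cos(theta), y + d * math.sin(theta), theta

class PID:
    """Textbook PID acting on the distance-to-goal error."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=1.0, ki=0.1, kd=0.05)
x, y, th = odometry_update(0.0, 0.0, 0.0, 360, 360)  # one full rev, both wheels
speed_cmd = pid.step(error=1.0 - x, dt=0.1)          # drive toward x = 1 m
```

    Equal tick counts move the robot straight ahead by one wheel circumference; unequal counts rotate the heading, which is exactly the information the odometer feeds back to the PID loop.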

  9. Optic flow odometry operates independently of stride integration in carried ants.

    PubMed

    Pfeffer, Sarah E; Wittlinger, Matthias

    2016-09-09

    Cataglyphis desert ants are impressive navigators. When the foragers roam the desert, they employ path integration. For these ants, distance estimation is one key challenge. Distance information was thought to be provided by optic flow (OF)-that is, image motion experienced during travel-but this idea was abandoned when stride integration was discovered as an odometer mechanism in ants. We show that ants transported by nest mates are capable of measuring travel distance exclusively by the use of OF cues. Furthermore, we demonstrate that the information gained from the optic flowmeter cannot be transferred to the stride integrator. Our results suggest a dual information channel that allows the ants to measure distances by strides and OF cues, although both systems operate independently and in a redundant manner. Copyright © 2016, American Association for the Advancement of Science.

  10. A Universal Vacant Parking Slot Recognition System Using Sensors Mounted on Off-the-Shelf Vehicles.

    PubMed

    Suhr, Jae Kyu; Jung, Ho Gi

    2018-04-16

    An automatic parking system is an essential part of autonomous driving, and it starts by recognizing vacant parking spaces. This paper proposes a method that can recognize various types of parking slot markings in a variety of lighting conditions including daytime, nighttime, and underground. The proposed method can readily be commercialized since it uses only those sensors already mounted on off-the-shelf vehicles: an around-view monitor (AVM) system, ultrasonic sensors, and in-vehicle motion sensors. This method first detects separating lines by extracting parallel line pairs from AVM images. Parking slot candidates are generated by pairing separating lines based on the geometric constraints of the parking slot. These candidates are confirmed by recognizing their entrance positions using line and corner features and classifying their occupancies using ultrasonic sensors. For more reliable recognition, this method uses the separating lines and parking slots not only found in the current image but also found in previous images by tracking their positions using the in-vehicle motion-sensor-based vehicle odometry. The proposed method was quantitatively evaluated using a dataset obtained during the day, night, and underground, and it outperformed previous methods by showing a 95.24% recall and a 97.64% precision.
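
    The candidate-generation step, pairing separating lines under geometric constraints, can be sketched as below. The nominal slot width, tolerance, and 1-D line representation are simplifying assumptions for illustration, not the paper's exact formulation.

```python
# Illustrative sketch: generate parking-slot candidates by pairing detected
# separating lines whose spacing matches the expected slot width.

def pair_separating_lines(line_positions, slot_width, tol=0.3):
    """Pair lines (positions along the parking row, in meters) whose
    spacing is within tol meters of the nominal slot width."""
    candidates = []
    lines = sorted(line_positions)
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            if abs((lines[j] - lines[i]) - slot_width) <= tol:
                candidates.append((lines[i], lines[j]))
    return candidates

# Lines at 0.0, 2.5 and 5.1 m with a nominal 2.5 m slot width yield two
# candidate slots; the 0.0-5.1 pair is rejected as too wide.
print(pair_separating_lines([0.0, 5.1, 2.5], slot_width=2.5))
```

    In the full system, each surviving pair would still need its entrance position confirmed from line/corner features and its occupancy classified from the ultrasonic sensors.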

  11. A Universal Vacant Parking Slot Recognition System Using Sensors Mounted on Off-the-Shelf Vehicles

    PubMed Central

    2018-01-01

    An automatic parking system is an essential part of autonomous driving, and it starts by recognizing vacant parking spaces. This paper proposes a method that can recognize various types of parking slot markings in a variety of lighting conditions including daytime, nighttime, and underground. The proposed method can readily be commercialized since it uses only those sensors already mounted on off-the-shelf vehicles: an around-view monitor (AVM) system, ultrasonic sensors, and in-vehicle motion sensors. This method first detects separating lines by extracting parallel line pairs from AVM images. Parking slot candidates are generated by pairing separating lines based on the geometric constraints of the parking slot. These candidates are confirmed by recognizing their entrance positions using line and corner features and classifying their occupancies using ultrasonic sensors. For more reliable recognition, this method uses the separating lines and parking slots not only found in the current image but also found in previous images by tracking their positions using the in-vehicle motion-sensor-based vehicle odometry. The proposed method was quantitatively evaluated using a dataset obtained during the day, night, and underground, and it outperformed previous methods by showing a 95.24% recall and a 97.64% precision. PMID:29659512

  12. Ground truth and benchmarks for performance evaluation

    NASA Astrophysics Data System (ADS)

    Takeuchi, Ayako; Shneier, Michael; Hong, Tsai Hong; Chang, Tommy; Scrapper, Christopher; Cheok, Geraldine S.

    2003-09-01

    Progress in algorithm development and transfer of results to practical applications such as military robotics requires the setup of standard tasks and of standard qualitative and quantitative measurements for performance evaluation and validation. Although the evaluation and validation of algorithms have been discussed for over a decade, the research community still faces a lack of well-defined and standardized methodology. The range of fundamental problems includes a lack of quantifiable measures of performance, a lack of data from state-of-the-art sensors in calibrated real-world environments, and a lack of facilities for conducting realistic experiments. In this research, we propose three methods for creating ground truth databases and benchmarks using multiple sensors. The databases and benchmarks will provide researchers with high quality data from suites of sensors operating in complex environments representing real problems of great relevance to the development of autonomous driving systems. At NIST, we have prototyped a High Mobility Multi-purpose Wheeled Vehicle (HMMWV) system with a suite of sensors including a Riegl ladar, GDRS ladar, stereo CCD, several color cameras, Global Position System (GPS), Inertial Navigation System (INS), pan/tilt encoders, and odometry. All sensors are calibrated with respect to each other in space and time. This allows a database of features and terrain elevation to be built. Ground truth for each sensor can then be extracted from the database. The main goal of this research is to provide ground truth databases for researchers and engineers to evaluate algorithms for effectiveness, efficiency, reliability, and robustness, thus advancing the development of algorithms.

  13. Cheap or Robust? The practical realization of self-driving wheelchair technology.

    PubMed

    Burhanpurkar, Maya; Labbe, Mathieu; Guan, Charlie; Michaud, Francois; Kelly, Jonathan

    2017-07-01

    To date, self-driving experimental wheelchair technologies have been either inexpensive or robust, but not both. Yet, in order to achieve real-world acceptance, both qualities are fundamentally essential. We present a unique approach to achieve inexpensive and robust autonomous and semi-autonomous assistive navigation for existing fielded wheelchairs, of which there are approximately 5 million units in Canada and the United States alone. Our prototype wheelchair platform is capable of localization and mapping, as well as robust obstacle avoidance, using only a commodity RGB-D sensor and wheel odometry. As a specific example of the navigation capabilities, we focus on the single most common navigation problem: the traversal of narrow doorways in arbitrary environments. The software we have developed is generalizable to corridor following, desk docking, and other navigation tasks that are either extremely difficult or impossible for people with upper-body mobility impairments.

  14. Anisotropic encoding of three-dimensional space by place cells and grid cells

    PubMed Central

    Hayman, R.; Verriotis, M.; Jovalekic, A.; Fenton, A.A.; Jeffery, K.J.

    2011-01-01

    The subjective sense of space may result in part from the combined activity of place cells, in the hippocampus, and grid cells in posterior cortical regions such as entorhinal cortex and pre/parasubiculum. In horizontal planar environments, place cells provide focal positional information while grid cells supply odometric (distance-measuring) information. How these cells operate in three dimensions is unknown, even though the real world is three-dimensional. The present study explored this issue in rats exploring two different kinds of apparatus, a climbing wall (the "pegboard") and a helix. Place and grid cell firing fields had normal horizontal characteristics but were elongated vertically, with grid fields forming stripes. It appears that grid cell odometry (and by implication path integration) is impaired/absent in the vertical domain, at least when the animal itself remains horizontal. These findings suggest that the mammalian encoding of three-dimensional space is anisotropic. PMID:21822271

  15. Robust position estimation of a mobile vehicle

    NASA Astrophysics Data System (ADS)

    Conan, Vania; Boulanger, Pierre; Elgazzar, Shadia

    1994-11-01

    The ability to estimate the position of a mobile vehicle is a key task for navigation over large distances in complex indoor environments such as nuclear power plants. Schematics of the plants are available, but they are incomplete, as real settings contain many objects, such as pipes, cables or furniture, that mask part of the model. The position estimation method described in this paper matches 3-D data with a simple schematic of a plant. It is basically independent of odometry information and viewpoint, robust to noisy data and spurious points and largely insensitive to occlusions. The method is based on a hypothesis/verification paradigm and its complexity is polynomial; it runs in O(m⁴n⁴), where m represents the number of model patches and n the number of scene patches. Heuristics are presented to speed up the algorithm. Results on real 3-D data show good behavior even when the scene is very occluded.

  16. Laser- and Multi-Spectral Monitoring of Natural Objects from UAVs

    NASA Astrophysics Data System (ADS)

    Reiterer, Alexander; Frey, Simon; Koch, Barbara; Stemmler, Simon; Weinacker, Holger; Hoffmann, Annemarie; Weiler, Markus; Hergarten, Stefan

    2016-04-01

    The paper describes the research, development and evaluation of a lightweight sensor system for UAVs. The system is composed of three main components: (1) a laser scanning module, (2) a multi-spectral camera system, and (3) a processing/storage unit. All three components are newly developed. Besides measurement precision and frequency, the low weight has been one of the challenging tasks. The current system has a total weight of about 2.5 kg and is designed as a self-contained unit (incl. storage and battery units). The main features of the system are: laser-based multi-echo 3D measurement at a wavelength of 905 nm (completely eye-safe), measurement range up to 200 m, measurement frequency of 40 kHz, scanning frequency of 16 Hz, relative distance accuracy of 10 mm. The system is equipped with both GNSS and IMU. Alternatively, a multi-visual-odometry system has been integrated to estimate the trajectory of the UAV from image features (based on this system, a calculation of 3D coordinates without GNSS is possible). The integrated multi-spectral camera system is based on conventional CMOS image chips equipped with a special set of band-pass interference filters with a full width half maximum (FWHM) of 50 nm. Good results for calculating the normalized difference vegetation index (NDVI) and the wide dynamic range vegetation index (WDRVI) have been achieved using the band-pass interference filter set with a FWHM of 50 nm and exposure times between 5.000 μs and 7.000 μs. The system is currently used for monitoring of natural objects and surfaces, like forests, as well as for geo-risk analysis (landslides). By measuring 3D geometric and multi-spectral information, reliable monitoring and interpretation of the data set are possible. The paper gives an overview of the development steps, the system, the evaluation and first results.
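
    The two vegetation indices named here follow standard band-math definitions, which the sketch below computes from per-pixel red and near-infrared reflectance. The weighting coefficient in WDRVI is an assumption (values around 0.1 to 0.2 are common in the literature); the abstract does not state which value was used.

```python
# Standard vegetation-index band math (not code from the paper):
#   NDVI  = (NIR - Red) / (NIR + Red)
#   WDRVI = (a * NIR - Red) / (a * NIR + Red), with weighting coefficient a

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def wdrvi(nir, red, a=0.1):
    return (a * nir - red) / (a * nir + red)

print(ndvi(0.5, 0.1))          # dense vegetation: high NDVI
print(wdrvi(0.5, 0.1, a=0.1))  # same pixel on WDRVI's compressed scale
```

    WDRVI's weighting keeps the index from saturating over dense canopies, which is why the paper reports it alongside NDVI.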

  17. Opportunity's View After Long Drive on Sol 1770 (Polar)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings just after driving 104 meters (341 feet) on the 1,770th Martian day, or sol, of Opportunity's surface mission (January 15, 2009).

    This view is presented as a polar projection with geometric seam correction. North is at the top.

    Tracks from the drive extend northward across dark-toned sand ripples and light-toned patches of exposed bedrock in the Meridiani Planum region of Mars. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches).

    Prior to the Sol 1770 drive, Opportunity had driven less than a meter since Sol 1713 (November 17, 2008), while it used the tools on its robotic arm first to examine a meteorite called 'Santorini' during weeks of restricted communication while the sun was nearly in line between Mars and Earth, then to examine bedrock and soil targets near Santorini.

    The rover's position after the Sol 1770 drive was about 1.1 kilometers (two-thirds of a mile) south-southwest of Victoria Crater. Cumulative odometry was 13.72 kilometers (8.53 miles) since landing in January 2004, including 1.94 kilometers (1.21 miles) since climbing out of Victoria Crater on the west side of the crater on Sol 1634 (August 28, 2008).

  18. Opportunity's View After Long Drive on Sol 1770

    NASA Technical Reports Server (NTRS)

    2009-01-01

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings just after driving 104 meters (341 feet) on the 1,770th Martian day, or sol, of Opportunity's surface mission (January 15, 2009).

    Tracks from the drive extend northward across dark-toned sand ripples and light-toned patches of exposed bedrock in the Meridiani Planum region of Mars. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches).

    Prior to the Sol 1770 drive, Opportunity had driven less than a meter since Sol 1713 (November 17, 2008), while it used the tools on its robotic arm first to examine a meteorite called 'Santorini' during weeks of restricted communication while the sun was nearly in line between Mars and Earth, then to examine bedrock and soil targets near Santorini.

    The rover's position after the Sol 1770 drive was about 1.1 kilometers (two-thirds of a mile) south-southwest of Victoria Crater. Cumulative odometry was 13.72 kilometers (8.53 miles) since landing in January 2004, including 1.94 kilometers (1.21 miles) since climbing out of Victoria Crater on the west side of the crater on Sol 1634 (August 28, 2008).

    This view is presented as a cylindrical projection with geometric seam correction.

  19. Opportunity's View After Long Drive on Sol 1770 (Vertical)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings just after driving 104 meters (341 feet) on the 1,770th Martian day, or sol, of Opportunity's surface mission (January 15, 2009).

    This view is presented as a vertical projection with geometric seam correction. North is at the top.

    Tracks from the drive extend northward across dark-toned sand ripples and light-toned patches of exposed bedrock in the Meridiani Planum region of Mars. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches).

    Prior to the Sol 1770 drive, Opportunity had driven less than a meter since Sol 1713 (November 17, 2008), while it used the tools on its robotic arm first to examine a meteorite called 'Santorini' during weeks of restricted communication while the sun was nearly in line between Mars and Earth, then to examine bedrock and soil targets near Santorini.

    The rover's position after the Sol 1770 drive was about 1.1 kilometers (two-thirds of a mile) south-southwest of Victoria Crater. Cumulative odometry was 13.72 kilometers (8.53 miles) since landing in January 2004, including 1.94 kilometers (1.21 miles) since climbing out of Victoria Crater on the west side of the crater on Sol 1634 (August 28, 2008).

  20. Toward a generic UGV autopilot

    NASA Astrophysics Data System (ADS)

    Moore, Kevin L.; Whitehorn, Mark; Weinstein, Alejandro J.; Xia, Junjun

    2009-05-01

    Much of the success of small unmanned air vehicles (UAVs) has arguably been due to the widespread availability of low-cost, portable autopilots. While the development of unmanned ground vehicles (UGVs) has led to significant achievements, as typified by recent grand challenge events, to date the UGV equivalent of the UAV autopilot is not available. In this paper we describe our recent research aimed at the development of a generic UGV autopilot. Assuming we are given a drive-by-wire vehicle that accepts as inputs steering, brake, and throttle commands, we present a system that adds sonar ranging sensors, GPS/IMU/odometry, stereo camera, and scanning laser sensors, together with a variety of interfacing and communication hardware. The system also includes a finite state machine-based software architecture as well as a graphical user interface for the operator control unit (OCU). Algorithms are presented that enable an end-to-end scenario whereby an operator can view stereo images as seen by the vehicle and can input GPS waypoints either from a map or in the vehicle's scene-view image, at which point the system uses the environmental sensors as inputs to a Kalman filter for pose estimation and then computes control actions to move through the waypoint list, while avoiding obstacles. The long-term goal of the research is a system that is generically applicable to any drive-by-wire unmanned ground vehicle.
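    The end-to-end scenario this record describes rests on two generic building blocks: a dead-reckoning motion model that feeds the Kalman filter's prediction step, and a steering law that drives the vehicle through the waypoint list. A minimal sketch under assumed simplifications (a unicycle motion model and a proportional heading controller; the paper's actual filter and control law are not specified here):

```python
import math

def propagate_pose(x, y, theta, v, omega, dt):
    """Unicycle dead-reckoning step: the kind of odometry-driven motion
    model used in the prediction stage of a pose-estimation Kalman filter."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    # Wrap heading to (-pi, pi] so heading errors stay well-defined.
    theta = (theta + omega * dt + math.pi) % (2 * math.pi) - math.pi
    return x, y, theta

def steer_to_waypoint(x, y, theta, wx, wy, gain=1.0):
    """Proportional steering command toward a waypoint: turn rate is
    proportional to the wrapped heading error."""
    desired = math.atan2(wy - y, wx - x)
    err = (desired - theta + math.pi) % (2 * math.pi) - math.pi
    return gain * err
```

A real autopilot would replace `propagate_pose` with a full EKF prediction over the GPS/IMU/odometry state and layer obstacle avoidance on top of the steering command.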

  1. Algorithms and Sensors for Small Robot Path Following

    NASA Technical Reports Server (NTRS)

    Hogg, Robert W.; Rankin, Arturo L.; Roumeliotis, Stergios I.; McHenry, Michael C.; Helmick, Daniel M.; Bergh, Charles F.; Matthies, Larry

    2002-01-01

    Tracked mobile robots in the 20 kg size class are under development for applications in urban reconnaissance. For efficient deployment, it is desirable for teams of robots to be able to automatically execute path following behaviors, with one or more followers tracking the path taken by a leader. The key challenges to enabling such a capability are (1) to develop sensor packages for such small robots that can accurately determine the path of the leader and (2) to develop path following algorithms for the subsequent robots. To date, we have integrated gyros, accelerometers, compass/inclinometers, odometry, and differential GPS into an effective sensing package. This paper describes the sensor package, sensor processing algorithm, and path tracking algorithm we have developed for the leader/follower problem in small robots and shows the result of performance characterization of the system. We also document pragmatic lessons learned about design, construction, and electromagnetic interference issues particular to the performance of state sensors on small robots.

  2. Terrain Model Registration for Single Cycle Instrument Placement

    NASA Technical Reports Server (NTRS)

    Deans, Matthew; Kunz, Clay; Sargent, Randy; Pedersen, Liam

    2003-01-01

    This paper presents an efficient and robust method for registration of terrain models created using stereo vision on a planetary rover. Our approach projects two surface models into a virtual depth map, rendering the models as they would be seen from a single range sensor. Correspondence is established based on which points project to the same location in the virtual range sensor. A robust norm of the deviations in observed depth is used as the objective function, and the algorithm searches for the rigid transformation which minimizes the norm. An initial coarse search is done using rover pose information from odometry and orientation sensing. A fine search is done using Levenberg-Marquardt. Our method enables a planetary rover to keep track of designated science targets as it moves, and to hand off targets from one set of stereo cameras to another. These capabilities are essential for the rover to autonomously approach a science target and place an instrument in contact in a single command cycle.
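    The registration step this record describes minimizes a robust norm of depth deviations, seeded by a coarse search from odometry. A toy one-parameter sketch using the Huber norm (the function names and the single vertical-offset parameter are illustrative; the paper searches over a full rigid transformation and refines with Levenberg-Marquardt):

```python
def huber(r, delta=0.05):
    """Huber robust norm: quadratic near zero, linear in the tails,
    so outlier depth deviations do not dominate the fit."""
    a = abs(r)
    return 0.5 * r * r if a <= delta else delta * (a - 0.5 * delta)

def registration_cost(depths_a, depths_b, dz, delta=0.05):
    """Robust cost of a candidate offset dz between two depth maps
    sampled at the same virtual-sensor pixels."""
    return sum(huber((a + dz) - b, delta) for a, b in zip(depths_a, depths_b))

a = [1.0, 1.2, 1.1, 5.0]   # last sample is a gross outlier
b = [1.05, 1.25, 1.15, 1.1]

# Coarse search over candidate offsets, bracketed by the odometry prior;
# a fine search (Levenberg-Marquardt in the paper) would refine this.
best = min((registration_cost(a, b, i / 100), i / 100)
           for i in range(-20, 21))
```

Because the outlier's residual is penalized only linearly, the recovered offset stays near the 0.05 m shift implied by the three inlier samples instead of being dragged toward the outlier.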

  3. The Ascendancy of the Visual and Issues of Gender: Equality versus Difference.

    ERIC Educational Resources Information Center

    Damarin, Suzanne K.

    1993-01-01

    Discussion of visual literacy, visual cognition, visual thinking and learning, and visual knowledge focuses on women and gender differences. Topics addressed include educational equality and the visual, including equality versus difference; women and mass culture; difference and the design of visual instruction; and feminist education and the…

  4. Evaluating walking in patients with multiple sclerosis: which assessment tools are useful in clinical practice?

    PubMed

    Bethoux, Francois; Bennett, Susan

    2011-01-01

    Walking limitations are among the most visible manifestations of multiple sclerosis (MS). Regular walking assessments should be a component of patient management and require instruments that are appropriate from the clinician's and the patient's perspectives. This article reviews frequently used instruments to assess walking in patients with MS, with emphasis on their validity, reliability, and practicality in the clinical setting. Relevant articles were identified based on PubMed searches using the following terms: "multiple sclerosis AND (walking OR gait OR mobility OR physical activity) AND (disability evaluation)"; references of relevant articles were also searched. Although many clinician- and patient-driven instruments are available, not all have been validated in MS, and some are not sensitive enough to detect small but clinically important changes. Choosing among these depends on what needs to be measured, psychometric properties, the clinical relevance of results, and practicality with respect to space, time, and patient burden. Of the instruments available, the clinician-observed Timed 25-Foot Walk and patient self-report 12-Item Multiple Sclerosis Walking Scale have properties that make them suitable for routine evaluation of walking performance. The Dynamic Gait Index and the Timed Up and Go test involve other aspects of mobility, including balance. Tests of endurance, such as the 2- or 6-Minute Walk, may provide information on motor fatigue not captured by other tests. Quantitative measurement of gait kinetics and kinematics, and recordings of mobility in the patient's environment via accelerometry or Global Positioning System odometry, are currently not routinely used in the clinical setting.

  5. Evaluating Walking in Patients with Multiple Sclerosis

    PubMed Central

    Bennett, Susan

    2011-01-01

    Walking limitations are among the most visible manifestations of multiple sclerosis (MS). Regular walking assessments should be a component of patient management and require instruments that are appropriate from the clinician's and the patient's perspectives. This article reviews frequently used instruments to assess walking in patients with MS, with emphasis on their validity, reliability, and practicality in the clinical setting. Relevant articles were identified based on PubMed searches using the following terms: “multiple sclerosis AND (walking OR gait OR mobility OR physical activity) AND (disability evaluation)”; references of relevant articles were also searched. Although many clinician- and patient-driven instruments are available, not all have been validated in MS, and some are not sensitive enough to detect small but clinically important changes. Choosing among these depends on what needs to be measured, psychometric properties, the clinical relevance of results, and practicality with respect to space, time, and patient burden. Of the instruments available, the clinician-observed Timed 25-Foot Walk and patient self-report 12-Item Multiple Sclerosis Walking Scale have properties that make them suitable for routine evaluation of walking performance. The Dynamic Gait Index and the Timed Up and Go test involve other aspects of mobility, including balance. Tests of endurance, such as the 2- or 6-Minute Walk, may provide information on motor fatigue not captured by other tests. Quantitative measurement of gait kinetics and kinematics, and recordings of mobility in the patient's environment via accelerometry or Global Positioning System odometry, are currently not routinely used in the clinical setting. PMID:24453700

  6. Traveling in the dark: the legibility of a regular and predictable structure of the environment extends beyond its borders.

    PubMed

    Yaski, Osnat; Portugali, Juval; Eilam, David

    2012-04-01

    The physical structure of the surrounding environment shapes the paths of progression, which in turn reflect the structure of the environment and the way that it shapes behavior. A regular and coherent physical structure results in paths that extend over the entire environment. In contrast, irregular structure results in traveling over a confined sector of the area. In this study, rats were tested in a dark arena in which half the area contained eight objects in a regular grid layout, and the other half contained eight objects in an irregular layout. In subsequent trials, a salient landmark was placed first within the irregular half, and then within the grid. We hypothesized that rats would favor travel in the area with regular order, but found that activity in the area with irregular object layout did not differ from activity in the area with grid layout, even when the irregular half included a salient landmark. Thus, the grid impact in one arena half extended to the other half and overshadowed the presumed impact of the salient landmark. This could be explained by mechanisms that control spatial behavior, such as grid cells and odometry. However, when objects were spaced irregularly over the entire arena, the salient landmark became dominant and the paths converged upon it, especially from objects with direct access to the salient landmark. Altogether, three environmental properties: (i) regular and predictable structure; (ii) salience of landmarks; and (iii) accessibility, hierarchically shape the paths of progression in a dark environment. Copyright © 2012 Elsevier B.V. All rights reserved.

  7. Multidimensional structured data visualization method and apparatus, text visualization method and apparatus, method and apparatus for visualizing and graphically navigating the world wide web, method and apparatus for visualizing hierarchies

    DOEpatents

    Risch, John S [Kennewick, WA; Dowson, Scott T [West Richland, WA

    2012-03-06

    A method of displaying correlations among information objects includes receiving a query against a database; obtaining a query result set; and generating a visualization representing the components of the result set, the visualization including one of a plane and line to represent a data field, nodes representing data values, and links showing correlations among fields and values. Other visualization methods and apparatus are disclosed.

  8. Visual dysfunction in Parkinson’s disease

    PubMed Central

    Weil, Rimona S.; Schrag, Anette E.; Warren, Jason D.; Crutch, Sebastian J.; Lees, Andrew J.; Morris, Huw R.

    2016-01-01

    Patients with Parkinson’s disease have a number of specific visual disturbances. These include changes in colour vision and contrast sensitivity and difficulties with complex visual tasks such as mental rotation and emotion recognition. We review changes in visual function at each stage of visual processing from retinal deficits, including contrast sensitivity and colour vision deficits to higher cortical processing impairments such as object and motion processing and neglect. We consider changes in visual function in patients with common Parkinson’s disease-associated genetic mutations including GBA and LRRK2. We discuss the association between visual deficits and clinical features of Parkinson’s disease such as rapid eye movement sleep behavioural disorder and the postural instability and gait disorder phenotype. We review the link between abnormal visual function and visual hallucinations, considering current models for mechanisms of visual hallucinations. Finally, we discuss the role of visuo-perceptual testing as a biomarker of disease and predictor of dementia in Parkinson’s disease. PMID:27412389

  9. Creating Visual Materials for Multi-Handicapped Deaf Learners.

    ERIC Educational Resources Information Center

    Hack, Carole; Brosmith, Susan

    1980-01-01

    The article describes two groups of visual materials developed for multiply handicapped deaf teenagers. The daily living skills project includes vocabulary lists, visuals, games and a model related to household cleaning, personal grooming, or consumer skills. The occupational information project includes visuals of tools, materials, and clothing…

  10. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris; Tang, Diane L; Hanrahan, Patrick

    2015-03-03

    A computer displays a graphical user interface on its display. The graphical user interface includes a schema information region and a data visualization region. The schema information region includes multiple operand names, each operand corresponding to one or more fields of a multi-dimensional database that includes at least one data hierarchy. The data visualization region includes a columns shelf and a rows shelf. The computer detects user actions to associate one or more first operands with the columns shelf and to associate one or more second operands with the rows shelf. The computer generates a visual table in the data visualization region in accordance with the user actions. The visual table includes one or more panes. Each pane has an x-axis defined based on data for the one or more first operands, and each pane has a y-axis defined based on data for the one or more second operands.

  11. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick

    2015-11-10

    A computer displays a graphical user interface on its display. The graphical user interface includes a schema information region and a data visualization region. The schema information region includes a plurality of fields of a multi-dimensional database that includes at least one data hierarchy. The data visualization region includes a columns shelf and a rows shelf. The computer detects user actions to associate one or more first fields with the columns shelf and to associate one or more second fields with the rows shelf. The computer generates a visual table in the data visualization region in accordance with the user actions. The visual table includes one or more panes. Each pane has an x-axis defined based on data for the one or more first fields, and each pane has a y-axis defined based on data for the one or more second fields.

  12. Screening methods for post-stroke visual impairment: a systematic review.

    PubMed

    Hanna, Kerry Louise; Hepworth, Lauren Rachel; Rowe, Fiona

    2017-12-01

    To provide a systematic overview of the various tools available to screen for post-stroke visual impairment. A review of the literature was conducted, including randomised controlled trials, controlled trials, cohort studies, observational studies, systematic reviews and retrospective medical note reviews. All languages were included and translation was obtained. Participants included adults ≥18 years old diagnosed with a visual impairment as a direct cause of a stroke. We searched a broad range of scholarly online resources and hand-searched article registers of published, unpublished and on-going trials. Search terms included a variety of MeSH terms and alternatives in relation to stroke and visual conditions. Study selection was performed by two authors independently. The quality of the evidence and risk of bias were assessed using the STROBE, GRACE and PRISMA statements. A total of 25 articles (n = 2924) were included in this review. The appraised articles reported on tools screening solely for visual impairments or for general post-stroke disabilities inclusive of vision. The majority of identified tools screen for visual perception, including visual neglect (VN), with few screening for visual acuity (VA), visual field (VF) loss or ocular motility (OM) defects. Six articles reported on nine screening tools which combined visual screening assessment with screening for general stroke disabilities. Of these, three included screening for VA; three screened for VF loss; three screened for OM defects; and all screened for VN. Two tools screened for all visual impairments. A further 19 articles reported on individual vision screening tests in stroke populations: two for VF loss; 11 for VN; and six for other visual perceptual defects. Most tools cannot accurately account for those with aphasia or communicative deficits, which are common problems following a stroke. There is currently no standardised visual screening tool which can accurately assess all potential post-stroke visual impairments. The current tools screen for only a subset of potential stroke-related impairments, so many visual defects may be missed. The sensitivity of those which screen for all impairments is significantly lowered when patients are unable to report their visual symptoms. Future research is required to develop a tool that encompasses all potential visual deficits, is easy for patients to complete, and can be readily administered by health care professionals, so that all stroke survivors with visual impairment are accurately identified and managed. Implications for rehabilitation: Over 65% of stroke survivors will suffer from a visual impairment, yet 45% of stroke units do not assess vision. Visual impairment significantly reduces quality of life, with consequences including inability to return to work or drive, and depression. This review outlines the available screening methods to accurately identify stroke survivors with visual impairments. Identifying visual impairment after stroke can aid general rehabilitation and thus improve quality of life for these patients.

  13. An intelligent space for mobile robot localization using a multi-camera system.

    PubMed

    Rampinelli, Mariana; Covre, Vitor Buback; de Queiroz, Felippe Mendonça; Vassallo, Raquel Frizera; Bastos-Filho, Teodiano Freire; Mazo, Manuel

    2014-08-15

    This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.

  14. An Intelligent Space for Mobile Robot Localization Using a Multi-Camera System

    PubMed Central

    Rampinelli, Mariana; Covre, Vitor Buback; de Queiroz, Felippe Mendonça; Vassallo, Raquel Frizera; Bastos-Filho, Teodiano Freire; Mazo, Manuel

    2014-01-01

    This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization. PMID:25196009

  15. Active optical sensors for tree stem detection and classification in nurseries.

    PubMed

    Garrido, Miguel; Perez-Ruiz, Manuel; Valero, Constantino; Gliever, Chris J; Hanson, Bradley D; Slaughter, David C

    2014-06-19

    Active optical sensing (LIDAR and light curtain transmission) devices mounted on a mobile platform can correctly detect, localize, and classify trees. To conduct an evaluation and comparison of the different sensors, an optical encoder wheel was used for vehicle odometry and provided a measurement of the linear displacement of the prototype vehicle along a row of tree seedlings as a reference for each recorded sensor measurement. The field trials were conducted in a juvenile tree nursery with one-year-old grafted almond trees at Sierra Gold Nurseries, Yuba City, CA, United States. Through these tests and subsequent data processing, each sensor was individually evaluated to characterize its reliability, as well as its advantages and disadvantages for the proposed task. Test results indicated that 95.7% and 99.48% of the trees were successfully detected with the LIDAR and light curtain sensors, respectively. The LIDAR correctly classified trees as alive or dead at a 93.75% success rate, compared to 94.16% for the light curtain sensor. These results can help system designers select the most reliable sensor for the accurate detection and localization of each tree in a nursery, which might allow labor-intensive tasks, such as weeding, to be automated without damaging crops.
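    The encoder-wheel odometry used as the displacement reference above reduces to converting tick counts into linear distance via the wheel circumference; a minimal sketch (the resolution and wheel-diameter values below are illustrative, not from the paper):

```python
import math

def encoder_displacement(ticks, ticks_per_rev, wheel_diameter_m):
    """Linear displacement from an optical encoder wheel: each tick
    corresponds to a fixed fraction of the wheel circumference."""
    return ticks / ticks_per_rev * math.pi * wheel_diameter_m

# Example: a 1024-count encoder on a 0.2 m wheel after two revolutions.
print(round(encoder_displacement(2048, 1024, 0.2), 4))  # 1.2566
```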

  16. Localization Based on Magnetic Markers for an All-Wheel Steering Vehicle

    PubMed Central

    Byun, Yeun Sub; Kim, Young Chol

    2016-01-01

    Real-time continuous localization is a key technology in the development of intelligent transportation systems. In these systems, it is very important to have accurate information about the position and heading angle of the vehicle at all times. The most widely implemented methods for positioning are the global positioning system (GPS), vision-based system, and magnetic marker system. Among these methods, the magnetic marker system is less vulnerable to indoor and outdoor environment conditions; moreover, it requires minimal maintenance expenses. In this paper, we present a position estimation scheme based on magnetic markers and odometry sensors for an all-wheel-steering vehicle. The heading angle of the vehicle is determined by using the position coordinates of the last two detected magnetic markers and odometer data. The instant position and heading angle of the vehicle are integrated with an extended Kalman filter to estimate the continuous position. GPS data with the real-time kinematics mode was obtained to evaluate the performance of the proposed position estimation system. The test results show that the performance of the proposed localization algorithm is accurate (mean error: 3 cm; max error: 9 cm) and reliable under unexpected missing markers or incorrect markers. PMID:27916827
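    The heading estimate described above follows directly from the world coordinates of the last two detected magnetic markers; a minimal sketch (the scalar `fuse` blend is a toy stand-in for the paper's extended Kalman filter, and the gain value is illustrative):

```python
import math

def heading_from_markers(m_prev, m_curr):
    """Heading angle from the positions (x, y) of the last two detected
    magnetic markers, as in the record's estimation scheme."""
    return math.atan2(m_curr[1] - m_prev[1], m_curr[0] - m_prev[0])

def fuse(predicted, measured, gain):
    """One scalar Kalman-style correction: blend the odometry-predicted
    value toward the marker-derived measurement."""
    return predicted + gain * (measured - predicted)

# Driving from marker (0, 0) to marker (1, 1) implies a 45-degree heading.
print(round(math.degrees(heading_from_markers((0, 0), (1, 1))), 1))  # 45.0
```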

  17. Visual Environments for CFD Research

    NASA Technical Reports Server (NTRS)

    Watson, Val; George, Michael W. (Technical Monitor)

    1994-01-01

    This viewgraph presentation gives an overview of the visual environments for computational fluid dynamics (CFD) research. It includes details on critical needs from the future computer environment, features needed to attain this environment, prospects for changes in and the impact of the visualization revolution on the human-computer interface, human processing capabilities, limits of personal environment and the extension of that environment with computers. Information is given on the need for more 'visual' thinking (including instances of visual thinking), an evaluation of the alternate approaches for and levels of interactive computer graphics, a visual analysis of computational fluid dynamics, and an analysis of visualization software.

  18. Opportunity's View After Drive on Sol 1806 (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Left-eye and right-eye views of a color stereo pair for PIA11816 (figures removed for brevity; see original site).

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, full-circle view of the rover's surroundings just after driving 60.86 meters (200 feet) on the 1,806th Martian day, or sol, of Opportunity's surface mission (Feb. 21, 2009). North is at the center; south at both ends.

    This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left.

    Tracks from the drive extend northward across dark-toned sand ripples and light-toned patches of exposed bedrock in the Meridiani Planum region of Mars. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches).

    Engineers designed the Sol 1806 drive to be driven backwards as a strategy to redistribute lubricant in the rover's wheels. The right-front wheel had been showing signs of increased friction.

    The rover's position after the Sol 1806 drive was about 2 kilometers (1.2 miles) south-southwest of Victoria Crater. Cumulative odometry was 14.74 kilometers (9.16 miles) since landing in January 2004, including 2.96 kilometers (1.84 miles) since climbing out of Victoria Crater on the west side of the crater on Sol 1634 (August 28, 2008).

    This view is presented as a cylindrical-perspective projection with geometric seam correction.

  19. Opportunity's View After Long Drive on Sol 1770 (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Left-eye and right-eye views of a color stereo pair for PIA11791 (figures removed for brevity; see original site).

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, full-circle view of the rover's surroundings just after driving 104 meters (341 feet) on the 1,770th Martian day, or sol, of Opportunity's surface mission (January 15, 2009).

    This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left.

    Tracks from the drive extend northward across dark-toned sand ripples and light-toned patches of exposed bedrock in the Meridiani Planum region of Mars. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches).

    Prior to the Sol 1770 drive, Opportunity had driven less than a meter since Sol 1713 (November 17, 2008), while it used the tools on its robotic arm first to examine a meteorite called 'Santorini' during weeks of restricted communication while the sun was nearly in line between Mars and Earth, then to examine bedrock and soil targets near Santorini.

    The rover's position after the Sol 1770 drive was about 1.1 kilometers (two-thirds of a mile) south-southwest of Victoria Crater. Cumulative odometry was 13.72 kilometers (8.53 miles) since landing in January 2004, including 1.94 kilometers (1.21 miles) since climbing out of Victoria Crater on the west side of the crater on Sol 1634 (August 28, 2008).

    This view is presented as a cylindrical-perspective projection with geometric seam correction.

  20. Stroke survivors' views and experiences on impact of visual impairment.

    PubMed

    Rowe, Fiona J

    2017-09-01

    We sought to determine stroke survivors' views on impact of stroke-related visual impairment to quality of life. Stroke survivors with visual impairment, more than 1 year post stroke onset, were recruited. Semistructured biographical narrative interviews were audio-recorded and transcribed verbatim. A thematic approach to analysis of the qualitative data was adopted. Transcripts were systematically coded using NVivo10 software. Thirty-five stroke survivors were interviewed across the UK: 16 females, 19 males; aged 20-75 years at stroke onset. Five qualitative themes emerged: "Formal care," "Symptoms and self," "Adaptations," "Daily life," and "Information." Where visual problems existed, they were often not immediately recognized as part of the stroke syndrome and attributed to other causes such as migraine. Many participants did not receive early vision assessment or treatment for their visual problems. Visual problems included visual field loss, double vision, and perceptual problems. Impact of visual problems included loss in confidence, being a burden to others, increased collisions/accidents, and fear of falling. They made many self-identified adaptations to compensate for visual problems: magnifiers, large print, increased lighting, use of white sticks. There was a consistent lack of support and provision of information about visual problems. Poststroke visual impairment causes considerable impact to daily life which could be substantially improved by simple measures including early formal visual assessment, management and advice on adaptive strategies and self-management options. Improved education about poststroke visual impairment for the public and clinicians could aid earlier diagnosis of visual impairments.

  1. Visual Search Performance in Patients with Vision Impairment: A Systematic Review.

    PubMed

    Senger, Cassia; Margarido, Maria Rita Rodrigues Alves; De Moraes, Carlos Gustavo; De Fendi, Ligia Issa; Messias, André; Paula, Jayter Silva

    2017-11-01

    Patients with visual impairment are constantly facing challenges to achieve an independent and productive life, which depends upon both good visual discrimination and search capacities. Given that visual search is a critical skill for several daily tasks and could be used as an index of the overall visual function, we investigated the relationship between vision impairment and visual search performance. A comprehensive search was undertaken using electronic PubMed, EMBASE, LILACS, and Cochrane databases from January 1980 to December 2016, applying the following terms: "visual search", "visual search performance", "visual impairment", "visual exploration", "visual field", "hemianopia", "search time", "vision lost", "visual loss", and "low vision". Two hundred seventy-six studies from 12,059 electronic database files were selected, and 40 of them were included in this review. Studies included participants of all ages, both sexes, and the sample sizes ranged from 5 to 199 participants. Visual impairment was associated with worse visual search performance in several ophthalmologic conditions, which were either artificially induced or related to specific eye and neurological diseases. This systematic review details all the described circumstances interfering with visual search tasks, highlights the need for developing technical standards, and outlines patterns for diagnosis and therapy using visual search capabilities.

  2. Optic neuritis

    MedlinePlus

    ... optic neuritis is unknown. The optic nerve carries visual information from your eye to the brain. The nerve can swell when it becomes suddenly ... may include: Color vision testing MRI of the brain , including special images of the optic nerve Visual acuity testing Visual field testing Examination of the ...

  3. MindSeer: a portable and extensible tool for visualization of structural and functional neuroimaging data

    PubMed Central

    Moore, Eider B; Poliakov, Andrew V; Lincoln, Peter; Brinkley, James F

    2007-01-01

    Background Three-dimensional (3-D) visualization of multimodality neuroimaging data provides a powerful technique for viewing the relationship between structure and function. A number of applications are available that include some aspect of 3-D visualization, including both free and commercial products. These applications range from highly specific programs for a single modality, to general purpose toolkits that include many image processing functions in addition to visualization. However, few if any of these combine both stand-alone and remote multi-modality visualization in an open source, portable and extensible tool that is easy to install and use, yet can be included as a component of a larger information system. Results We have developed a new open source multimodality 3-D visualization application, called MindSeer, that has these features: integrated and interactive 3-D volume and surface visualization, Java and Java3D for true cross-platform portability, one-click installation and startup, integrated data management to help organize large studies, extensibility through plugins, transparent remote visualization, and the ability to be integrated into larger information management systems. We describe the design and implementation of the system, as well as several case studies that demonstrate its utility. These case studies are available as tutorials or demos on the associated website: http://sig.biostr.washington.edu/projects/MindSeer. Conclusion MindSeer provides a powerful visualization tool for multimodality neuroimaging data. Its architecture and unique features also allow it to be extended into other visualization domains within biomedicine. PMID:17937818

  4. MindSeer: a portable and extensible tool for visualization of structural and functional neuroimaging data.

    PubMed

    Moore, Eider B; Poliakov, Andrew V; Lincoln, Peter; Brinkley, James F

    2007-10-15

    Three-dimensional (3-D) visualization of multimodality neuroimaging data provides a powerful technique for viewing the relationship between structure and function. A number of applications are available that include some aspect of 3-D visualization, including both free and commercial products. These applications range from highly specific programs for a single modality, to general purpose toolkits that include many image processing functions in addition to visualization. However, few if any of these combine both stand-alone and remote multi-modality visualization in an open source, portable and extensible tool that is easy to install and use, yet can be included as a component of a larger information system. We have developed a new open source multimodality 3-D visualization application, called MindSeer, that has these features: integrated and interactive 3-D volume and surface visualization, Java and Java3D for true cross-platform portability, one-click installation and startup, integrated data management to help organize large studies, extensibility through plugins, transparent remote visualization, and the ability to be integrated into larger information management systems. We describe the design and implementation of the system, as well as several case studies that demonstrate its utility. These case studies are available as tutorials or demos on the associated website: http://sig.biostr.washington.edu/projects/MindSeer. MindSeer provides a powerful visualization tool for multimodality neuroimaging data. Its architecture and unique features also allow it to be extended into other visualization domains within biomedicine.

  5. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris; Tang, Diane L; Hanrahan, Patrick

    2014-04-29

    In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name.
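    The shelf-and-pane mechanism the abstract describes can be sketched in a few lines. The data model below is purely illustrative (the patent discloses no code, and all names here are invented): operand names dropped onto shelves define the axes of the visual table, one pane per row/column operand pairing.

    ```python
    from itertools import product

    # Hypothetical sketch: operands placed on shelves define pane axes.
    shelves = {
        "rows":    ["region", "product"],  # operands on the row shelf
        "columns": ["year"],               # operands on the column shelf
    }

    def build_panes(shelves):
        """One pane per (row operand, column operand) combination, each pane
        carrying the field names that define its axes."""
        return [{"row_axis": r, "col_axis": c}
                for r, c in product(shelves["rows"], shelves["columns"])]

    print(len(build_panes(shelves)))  # 2 panes: (region, year), (product, year)
    ```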

  6. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris [Palo Alto, CA]; Tang, Diane L [Palo Alto, CA]; Hanrahan, Patrick [Portola Valley, CA]

    2011-02-01

    In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name.

  7. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris [Palo Alto, CA]; Tang, Diane L [Palo Alto, CA]; Hanrahan, Patrick [Portola Valley, CA]

    2012-03-20

    In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name.

  8. A systematic review of visual processing and associated treatments in body dysmorphic disorder.

    PubMed

    Beilharz, F; Castle, D J; Grace, S; Rossell, S L

    2017-07-01

    Recent advances in body dysmorphic disorder (BDD) have explored abnormal visual processing, yet it is unclear how this relates to treatment. The aim of this study was to summarize our current understanding of visual processing in BDD and review associated treatments. The literature was collected through PsycInfo and PubMed. Visual processing articles were included if written in English after 1970, had a specific BDD group compared to healthy controls and were not case studies. Due to the lack of research regarding treatments associated with visual processing, case studies were included. A number of visual processing abnormalities are present in BDD, including face recognition, emotion identification, aesthetics, object recognition and gestalt processing. Differences to healthy controls include a dominance of detailed local processing over global processing and associated changes in brain activation in visual regions. Perceptual mirror retraining and some forms of self-exposure have demonstrated improved treatment outcomes, but have not been examined in isolation from broader treatments. Despite these abnormalities in perception, particularly concerning face and emotion recognition, few BDD treatments attempt to specifically remediate this. The development of a novel visual training programme which addresses these widespread abnormalities may provide an effective treatment modality. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  9. 37 CFR 202.3 - Registration of copyright.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) Class VA: Works of the visual arts. This class includes all published and unpublished pictorial, graphic... recordings), or VA (works of the visual arts, including architectural works). Copies of the generic... published photographs after consultation and with the permission and under the direction of the Visual Arts...

  10. Imagery and Visual Literacy: Selected Readings from the Annual Conference of the International Visual Literacy Association (26th, Tempe, Arizona, October 12-16, 1994).

    ERIC Educational Resources Information Center

    Beauchamp, Darrell G.; And Others

    This document contains selected conference papers all relating to visual literacy. The topics include: process issues in visual literacy; interpreting visual statements; what teachers need to know; multimedia presentations; distance education materials for correctional use; visual culture; audio-visual interaction in desktop multimedia; the…

  11. The Effects of Training on a Young Child with Cortical Visual Impairment: An Exploratory Study.

    ERIC Educational Resources Information Center

    Lueck, Amanda Hall; Dornbusch, Helen; Hart, Jeri

    1999-01-01

    This exploratory study investigated the effects of the components of visual environmental management, visual skills training, and visually dependent task training on the performance of visual behaviors of a toddler with multiple disabilities including cortical visual impairment. Training components were implemented by the mother during daily…

  12. Why Teach Visual Culture?

    ERIC Educational Resources Information Center

    Passmore, Kaye

    2007-01-01

    Visual culture is a hot topic in art education right now as some teachers are dedicated to teaching it and others are adamant that it has no place in a traditional art class. Visual culture, the author asserts, can include just about anything that is visually represented. Although people often think of visual culture as contemporary visuals such…

  13. Statistics on Children with Visual Impairments.

    ERIC Educational Resources Information Center

    Viisola, Michelle

    This report summarizes statistical data relating to children with visual impairments, including incidence, causes, and education. Data include: (1) prevalence estimates that indicate 1 percent of persons under the age of 18 in the United States have a visual impairment that cannot be corrected with glasses; (2) the leading cause of childhood…

  14. Developing Verbal and Visual Literacy through Experiences in the Visual Arts: 25 Tips for Teachers

    ERIC Educational Resources Information Center

    Johnson, Margaret H.

    2008-01-01

    Including talk about art--conversing with children about artwork, their own and others'--as a component of visual art activities extends children's experiences in and understanding of visual messages. Johnson discusses practices that help children develop visual and verbal expression through active experiences with the visual arts. She offers 25…

  15. Three-dimensional visualization and display technologies; Proceedings of the Meeting, Los Angeles, CA, Jan. 18-20, 1989

    NASA Technical Reports Server (NTRS)

    Robbins, Woodrow E. (Editor); Fisher, Scott S. (Editor)

    1989-01-01

    Special attention was given to problems of stereoscopic display devices, such as CAD for enhancement of the design process in visual arts, stereo-TV improvement of remote manipulator performance, a voice-controlled stereographic video camera system, and head-mounted displays and their low-cost design alternatives. Also discussed was a novel approach to chromostereoscopic microscopy, computer-generated barrier-strip autostereography and lenticular stereograms, and parallax barrier three-dimensional TV. Additional topics include processing and user interface issues and visualization applications, including automated analysis of fluid flow topology, optical tomographic measurements of mixing fluids, visualization of complex data, visualization environments, and visualization management systems.

  16. Visual impairment in children with congenital Zika syndrome.

    PubMed

    Ventura, Liana O; Ventura, Camila V; Lawrence, Linda; van der Linden, Vanessa; van der Linden, Ana; Gois, Adriana L; Cavalcanti, Milena M; Barros, Eveline A; Dias, Natalia C; Berrocal, Audina M; Miller, Marilyn T

    2017-08-01

    To describe the visual impairment associated with ocular and neurological abnormalities in a cohort of children with congenital Zika syndrome (CZS). This cross-sectional study included infants with microcephaly born in Pernambuco, Brazil, from May to December 2015. Immunoglobulin M antibody capture enzyme-linked immunosorbent assay for the Zika virus on the cerebrospinal fluid samples was positive for all infants. Clinical evaluation consisted of comprehensive ophthalmologic examination including visual acuity, visual function assessment, visual developmental milestone, neurologic examination, and neuroimaging. A total of 32 infants (18 males [56%]) were included. Mean age at examination was 5.7 ± 0.9 months (range, 4-7 months). Visual function and visual developmental milestone could not be tested in 1 child (3%). Visual impairment was detected in 32 infants (100%). Retinal and/or optic nerve findings were observed in 14 patients (44%). There was no statistical difference between the patients with ocular findings and those without (P = 0.180). All patients (100%) demonstrated neurological and neuroimaging abnormalities; 3 (9%) presented with late-onset of microcephaly. Children with CZS demonstrated visual impairment regardless of retina and/or optic nerve abnormalities. This finding suggests that cortical/cerebral visual impairment may be the most common cause of blindness identified in children with CZS. Copyright © 2017 American Association for Pediatric Ophthalmology and Strabismus. Published by Elsevier Inc. All rights reserved.

  17. Cerebral Visual Impairment: Which Perceptive Visual Dysfunctions Can Be Expected in Children with Brain Damage? A Systematic Review

    ERIC Educational Resources Information Center

    Boot, F. H.; Pel, J. J. M.; van der Steen, J.; Evenhuis, H. M.

    2010-01-01

    The current definition of Cerebral Visual Impairment (CVI) includes all visual dysfunctions caused by damage to, or malfunctioning of, the retrochiasmatic visual pathways in the absence of damage to the anterior visual pathways or any major ocular disease. CVI is diagnosed by exclusion and the existence of many different causes and symptoms make…

  18. Perceptions Concerning Visual Culture Dialogues of Visual Art Pre-Service Teachers

    ERIC Educational Resources Information Center

    Mamur, Nuray

    2012-01-01

    Commentary on visual art by visual art teachers is important in helping students process visual culture. This study attempts to describe the effect of including visual culture, grounded in everyday aesthetic experiences, in the art education learning process. The action research design, which is a qualitative approach, is conducted…

  19. Active Optical Sensors for Tree Stem Detection and Classification in Nurseries

    PubMed Central

    Garrido, Miguel; Perez-Ruiz, Manuel; Valero, Constantino; Gliever, Chris J.; Hanson, Bradley D.; Slaughter, David C.

    2014-01-01

    Active optical sensing (LIDAR and light curtain transmission) devices mounted on a mobile platform can correctly detect, localize, and classify trees. To conduct an evaluation and comparison of the different sensors, an optical encoder wheel was used for vehicle odometry and provided a measurement of the linear displacement of the prototype vehicle along a row of tree seedlings as a reference for each recorded sensor measurement. The field trials were conducted in a juvenile tree nursery with one-year-old grafted almond trees at Sierra Gold Nurseries, Yuba City, CA, United States. Through these tests and subsequent data processing, each sensor was individually evaluated to characterize its reliability, as well as its advantages and disadvantages for the proposed task. Test results indicated that 95.7% and 99.48% of the trees were successfully detected with the LIDAR and light curtain sensors, respectively. LIDAR correctly classified trees between alive and dead states at a 93.75% success rate, compared to 94.16% for the light curtain sensor. These results can help system designers select the most reliable sensor for the accurate detection and localization of each tree in a nursery, which might allow labor-intensive tasks, such as weeding, to be automated without damaging crops. PMID:24949638
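    The encoder-wheel odometry used as the reference measurement reduces to converting tick counts into rolled distance. A minimal sketch, with assumed (not reported) encoder resolution and wheel size:

    ```python
    import math

    TICKS_PER_REV = 1024     # encoder counts per revolution (assumed value)
    WHEEL_DIAMETER_M = 0.15  # encoder wheel diameter in meters (assumed value)

    def displacement_m(ticks: int) -> float:
        """Linear distance rolled by the encoder wheel for a given tick count,
        used to tag each LIDAR/light-curtain sample with a vehicle position."""
        circumference = math.pi * WHEEL_DIAMETER_M
        return ticks / TICKS_PER_REV * circumference

    print(round(displacement_m(2048), 3))  # two full revolutions: 0.942 m
    ```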

  20. Afocal optical flow sensor for reducing vertical height sensitivity in indoor robot localization and navigation.

    PubMed

    Yi, Dong-Hoon; Lee, Tae-Jae; Cho, Dong-Il Dan

    2015-05-13

    This paper introduces a novel afocal optical flow sensor (OFS) system for odometry estimation in indoor robotic navigation. The OFS used in computer optical mice has been adopted for mobile robots because it is not affected by wheel slippage. Vertical height variance is thought to be a dominant factor in systematic error when estimating moving distances in mobile robots driving on uneven surfaces. We propose an approach to mitigate this error by using an afocal (infinite effective focal length) system. We conducted experiments in a linear guide on carpet and three other materials with varying sensor heights from 30 to 50 mm and a moving distance of 80 cm. The same experiments were repeated 10 times. For the proposed afocal OFS module, a 1 mm change in sensor height induces a 0.1% systematic error; for comparison, the error for a conventional fixed-focal-length OFS module is 14.7%. Finally, the proposed afocal OFS module was installed on a mobile robot and tested 10 times on a carpet for distances of 1 m. The average distance estimation error and standard deviation are 0.02% and 17.6%, respectively, whereas those for a conventional OFS module are 4.09% and 25.7%, respectively.
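    A toy model shows why a fixed-focal-length OFS is height-sensitive: the counts-to-distance scale factor is calibrated at one height, but ground magnification varies roughly inversely with height, so operating at a different height biases the estimate. This first-order sketch is an assumption for illustration only; it does not reproduce the 14.7% figure reported for the actual module, which depends on the specific lens design.

    ```python
    def scale_error_pct(nominal_h_mm: float, actual_h_mm: float) -> float:
        """Percent distance-estimation error when a sensor calibrated at
        nominal_h_mm operates at actual_h_mm, assuming image magnification
        of the ground is inversely proportional to height (toy model)."""
        return (nominal_h_mm / actual_h_mm - 1.0) * 100.0

    # A 1 mm height change at a 40 mm nominal height, under this simple model:
    print(round(scale_error_pct(40.0, 41.0), 2))  # -2.44 (percent)
    ```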

  1. Applications of image processing and visualization in the evaluation of murder and assault

    NASA Astrophysics Data System (ADS)

    Oliver, William R.; Rosenman, Julian G.; Boxwala, Aziz; Stotts, David; Smith, John; Soltys, Mitchell; Symon, James; Cullip, Tim; Wagner, Glenn

    1994-09-01

    Recent advances in image processing and visualization are of increasing use in the investigation of violent crime. The Digital Image Processing Laboratory at the Armed Forces Institute of Pathology in collaboration with groups at the University of North Carolina at Chapel Hill are actively exploring visualization applications including image processing of trauma images, 3D visualization, forensic database management and telemedicine. Examples of recent applications are presented. Future directions of effort include interactive consultation and image manipulation tools for forensic data exploration.

  2. University Students' Visual Cognitive Styles with Respect to Majors and Years

    ERIC Educational Resources Information Center

    Kibar, Pinar Nuhoglu; Akkoyunlu, Buket

    2016-01-01

    Visual cognitive style is an individual difference related to a person's preference or tendency to process visual information as imagery. This study examines the visual cognitive styles of university students according to their study subject, study year, and gender, and includes 448 first- and third-year university students…

  3. Helping Children with Visual and Motor Impairments Make the Most of Their Visual Abilities.

    ERIC Educational Resources Information Center

    Amerson, Marie J.

    1999-01-01

    Lists strategies for promoting functional vision use in children with visual and motor impairments, including providing postural stability, presenting visual attention tasks when energy level is the highest, using a slanted work surface, placing target items in varied locations within reach, and determining the most effective visual adaptations.…

  4. Development of a Computerized Visual Search Test

    ERIC Educational Resources Information Center

    Reid, Denise; Babani, Harsha; Jon, Eugenia

    2009-01-01

    Visual attention and visual search are the features of visual perception, essential for attending and scanning one's environment while engaging in daily occupations. This study describes the development of a novel web-based test of visual search. The development information including the format of the test will be described. The test was designed…

  5. Visualizers, Visualizations, and Visualizees: Differences in Meaning-Making by Scientific Experts and Novices from Global Visualizations of Ocean Data

    ERIC Educational Resources Information Center

    Stofer, Kathryn A.

    2013-01-01

    Data visualizations designed for academic scientists are not immediately meaningful to everyday scientists. Communicating between a specialized, expert audience and a general, novice public is non-trivial; it requires careful translation. However, more widely available visualization technologies and platforms, including new three-dimensional…

  6. Visual Literacy and Visual Thinking.

    ERIC Educational Resources Information Center

    Hortin, John A.

    It is proposed that visual literacy be defined as the ability to understand (read) and use (write) images and to think and learn in terms of images. This definition includes three basic principles: (1) visuals are a language and thus analogous to verbal language; (2) a visually literate person should be able to understand (read) images and use…

  7. Dementia

    MedlinePlus

    ... living. Functions affected include memory, language skills, visual perception, problem solving, self-management, and the ability to ...

  8. STRAD Wheel: Web-Based Library for Visualizing Temporal Data.

    PubMed

    Fernandez-Prieto, Diana; Naranjo-Valero, Carol; Hernandez, Jose Tiberio; Hagen, Hans

    2017-01-01

    Recent advances in web development, including the introduction of HTML5, have opened a door for visualization researchers and developers to quickly access larger audiences worldwide. Open source libraries for the creation of interactive visualizations are becoming more specialized but also modular, which makes them easy to incorporate in domain-specific applications. In this context, the authors developed STRAD (Spatio-Temporal-Radar) Wheel, a web-based library that focuses on the visualization and interactive query of temporal data in a compact view with multiple temporal granularities. This article includes two application examples in urban planning to help illustrate the proposed visualization's use in practice.
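    The "wheel with multiple temporal granularities" idea can be sketched as a radial layout in which each granularity is a ring and a timestamp maps to an angle within its cycle. This is an assumption about the layout for illustration, not STRAD Wheel's actual API:

    ```python
    from datetime import datetime

    def angle_deg(t: datetime, granularity: str) -> float:
        """Angular position of timestamp t on the ring for one granularity."""
        cycles = {
            "hour_of_day":   (t.hour + t.minute / 60.0, 24.0),
            "day_of_week":   (t.weekday(), 7.0),
            "month_of_year": (t.month - 1, 12.0),
        }
        value, period = cycles[granularity]
        return 360.0 * value / period

    t = datetime(2017, 7, 1, 18, 0)
    print(angle_deg(t, "hour_of_day"))  # 18:00 maps to 270.0 degrees
    ```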

  9. Dynamic visualization of data streams

    DOEpatents

    Wong, Pak Chung [Richland, WA]; Foote, Harlan P [Richland, WA]; Adams, Daniel R [Kennewick, WA]; Cowley, Wendy E [Richland, WA]; Thomas, James J [Richland, WA]

    2009-07-07

    One embodiment of the present invention includes a data communication subsystem to receive a data stream, and a data processing subsystem responsive to the data communication subsystem to generate a visualization output based on a group of data vectors corresponding to a first portion of the data stream. The processing subsystem is further responsive to a change in rate of receipt of the data to modify the visualization output with one or more other data vectors corresponding to a second portion of the data stream as a function of eigenspace defined with the group of data vectors. The system further includes a display device responsive to the visualization output to provide a corresponding visualization.
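    The eigenspace mechanism in the claim can be sketched as: build a low-dimensional basis from the initial group of data vectors, then place later portions of the stream by projecting them into that fixed basis instead of recomputing the layout. A minimal sketch (illustrative only; the patent discloses no code):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    first_batch = rng.normal(size=(100, 10))  # initial group of data vectors

    # Eigenspace of the first batch: top components of the centered data's SVD.
    mean = first_batch.mean(axis=0)
    _, _, vt = np.linalg.svd(first_batch - mean, full_matrices=False)
    basis = vt[:2].T  # (10 x 2) projection onto the top-2 eigenvectors

    def project(vectors: np.ndarray) -> np.ndarray:
        """Map new stream vectors into the eigenspace of the first batch."""
        return (vectors - mean) @ basis

    later_batch = rng.normal(size=(5, 10))  # second portion of the stream
    print(project(later_batch).shape)       # (5, 2): plottable 2-D positions
    ```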

  10. Vision

    NASA Technical Reports Server (NTRS)

    Taylor, J. H.

    1973-01-01

    Some data on human vision, important in present and projected space activities, are presented. Visual environment and performance and structure of the visual system are also considered. Visual perception during stress is included.

  11. Educational Research and the Sight of Inquiry: Visual Methodologies before Visual Methods

    ERIC Educational Resources Information Center

    Metcalfe, Amy Scott

    2016-01-01

    As visual methods are increasingly validated in the social sciences, including educational research, we must interrogate the sight of inquiry as we employ visual methods to examine our research sites. In this essay, three layers of engagement with the visual are examined in relation to educational research: looking, seeing, and envisioning. This…

  12. The Effects of Visual Art Integration on Reading at the Elementary Level. A Review of Literature

    ERIC Educational Resources Information Center

    McCarty, Kristine A.

    2007-01-01

    Although visual art is considered a subject deemed by the federal government as part of the core curriculum, many elementary schools do not include this subject into the current core curriculum of studies. This review of literature provides insight through current qualitative and quantitative studies on the effectiveness of including visual art…

  13. VISUAL ACUITY IN PSEUDOXANTHOMA ELASTICUM.

    PubMed

    Risseeuw, Sara; Ossewaarde-van Norel, Jeannette; Klaver, Caroline C W; Colijn, Johanna M; Imhof, Saskia M; van Leeuwen, Redmer

    2018-04-12

    To assess the age-specific proportion of visual impairment in patients with pseudoxanthoma elasticum (PXE) and to compare this with foveal abnormality and similar data of late age-related macular degeneration patients. Cross-sectional data of 195 patients with PXE were reviewed, including best-corrected visual acuity and imaging. The World Health Organisation criteria were used to categorize bilateral visual impairment. These results were compared with similar data of 131 patients with late age-related macular degeneration from the Rotterdam study. Overall, 50 PXE patients (26.0%) were visually impaired, including 21 (11%) with legal blindness. Visual functioning declined with increasing age. In patients older than 50 years, 37% were visually impaired and 15% were legally blind. Foveal choroidal neovascularization was found in 84% of eyes with a best-corrected visual acuity lower than 20/70 (0.30) and macular atrophy in the fovea in 16%. In late age-related macular degeneration patients, 40% were visually impaired and 13% legally blind. Visual impairment started approximately 20 years later as compared with PXE patients. Visual impairment and blindness are frequent in PXE, particularly in patients older than 50 years. Although choroidal neovascularization is associated with the majority of vision loss, macular atrophy is also common. The proportion of visual impairment in PXE is comparable with late age-related macular degeneration but manifests earlier in life.

  14. Relating Standardized Visual Perception Measures to Simulator Visual System Performance

    NASA Technical Reports Server (NTRS)

    Kaiser, Mary K.; Sweet, Barbara T.

    2013-01-01

    Human vision is quantified through the use of standardized clinical vision measurements. These measurements typically include visual acuity (near and far), contrast sensitivity, color vision, stereopsis (a.k.a. stereo acuity), and visual field periphery. Simulator visual system performance is specified in terms such as brightness, contrast, color depth, color gamut, gamma, resolution, and field-of-view. How do these simulator performance characteristics relate to the perceptual experience of the pilot in the simulator? In this paper, visual acuity and contrast sensitivity will be related to simulator visual system resolution, contrast, and dynamic range; similarly, color vision will be related to color depth/color gamut. Finally, we will consider how some characteristics of human vision not typically included in current clinical assessments could be used to better inform simulator requirements (e.g., relating dynamic characteristics of human vision to update rate and other temporal display characteristics).

  15. Visual attention capacity: a review of TVA-based patient studies.

    PubMed

    Habekost, Thomas; Starrfelt, Randi

    2009-02-01

    Psychophysical studies have identified two distinct limitations of visual attention capacity: processing speed and apprehension span. Using a simple test, these cognitive factors can be analyzed by Bundesen's Theory of Visual Attention (TVA). The method has strong specificity and sensitivity, and measurements are highly reliable. As the method is theoretically founded, it also has high validity. TVA-based assessment has recently been used to investigate a broad range of neuropsychological and neurological conditions. We present the method, including the experimental paradigm and practical guidelines for patient testing, and review existing TVA-based patient studies organized by lesion anatomy. Lesions in three anatomical regions affect visual capacity: the parietal lobes, frontal cortex and basal ganglia, and extrastriate cortex. Visual capacity thus depends on large, bilaterally distributed anatomical networks that include several regions outside the visual system. The two visual capacity parameters are functionally separable, but seem to rely on largely overlapping brain areas.

  16. Retinal ganglion cell maps in the brain: implications for visual processing.

    PubMed

    Dhande, Onkar S; Huberman, Andrew D

    2014-02-01

    Everything the brain knows about the content of the visual world is built from the spiking activity of retinal ganglion cells (RGCs). As the output neurons of the eye, RGCs include ∼20 different subtypes, each responding best to a specific feature in the visual scene. Here we discuss recent advances in identifying where different RGC subtypes route visual information in the brain, including which targets they connect to and how their organization within those targets influences visual processing. We also highlight examples where causal links have been established between specific RGC subtypes, their maps of central connections and defined aspects of light-mediated behavior and we suggest the use of techniques that stand to extend these sorts of analyses to circuits underlying visual perception.

  17. Mapping visual cortex in monkeys and humans using surface-based atlases

    NASA Technical Reports Server (NTRS)

    Van Essen, D. C.; Lewis, J. W.; Drury, H. A.; Hadjikhani, N.; Tootell, R. B.; Bakircioglu, M.; Miller, M. I.

    2001-01-01

    We have used surface-based atlases of the cerebral cortex to analyze the functional organization of visual cortex in humans and macaque monkeys. The macaque atlas contains multiple partitioning schemes for visual cortex, including a probabilistic atlas of visual areas derived from a recent architectonic study, plus summary schemes that reflect a combination of physiological and anatomical evidence. The human atlas includes a probabilistic map of eight topographically organized visual areas recently mapped using functional MRI. To facilitate comparisons between species, we used surface-based warping to bring functional and geographic landmarks on the macaque map into register with corresponding landmarks on the human map. The results suggest that extrastriate visual cortex outside the known topographically organized areas is dramatically expanded in human compared to macaque cortex, particularly in the parietal lobe.

  18. Multidimensional structured data visualization method and apparatus, text visualization method and apparatus, method and apparatus for visualizing and graphically navigating the world wide web, method and apparatus for visualizing hierarchies

    DOEpatents

    Risch, John S [Kennewick, WA; Dowson, Scott T [West Richland, WA; Hart, Michelle L [Richland, WA; Hatley, Wes L [Kennewick, WA

    2008-05-13

    A method of displaying correlations among information objects comprises receiving a query against a database; obtaining a query result set; and generating a visualization representing the components of the result set, the visualization including one of a plane and line to represent a data field, nodes representing data values, and links showing correlations among fields and values. Other visualization methods and apparatus are disclosed.
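    The node/link scheme in the abstract can be sketched as a tiny in-memory structure: one axis (plane or line) per data field, a node per distinct value, and links counting co-occurrences between values of different fields. This is a hypothetical illustration of the idea (all names invented here), not the patented apparatus:

```python
from collections import defaultdict

class CorrelationVisualization:
    """Toy model of the abstract's scheme: a plane/line per data field,
    nodes for distinct values, links for co-occurrence across fields."""

    def __init__(self):
        self.fields = {}               # field name -> ordered list of value nodes
        self.links = defaultdict(int)  # (field_a, val_a, field_b, val_b) -> count

    def add_record(self, record):
        """Fold one query-result record (a dict of field -> value) into the model."""
        items = sorted(record.items())
        for field, value in items:
            nodes = self.fields.setdefault(field, [])
            if value not in nodes:
                nodes.append(value)
        # link every pair of field values that co-occur in this record
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                (fa, va), (fb, vb) = items[i], items[j]
                self.links[(fa, va, fb, vb)] += 1

viz = CorrelationVisualization()
viz.add_record({"author": "Risch", "year": 2008})
viz.add_record({"author": "Dowson", "year": 2008})
print(viz.fields["year"])                            # [2008]
print(viz.links[("author", "Risch", "year", 2008)])  # 1
```

    A renderer would then draw each entry of `fields` as a plane or line, each value as a node on it, and each nonzero `links` entry as a connecting edge.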

  19. Mobile device geo-localization and object visualization in sensor networks

    NASA Astrophysics Data System (ADS)

    Lemaire, Simon; Bodensteiner, Christoph; Arens, Michael

    2014-10-01

    In this paper we present a method to visualize geo-referenced objects on modern smartphones using a multi-functional application design. The application applies different localization and visualization methods, including the smartphone camera image. The presented application copes well with different scenarios. A generic application work flow and augmented reality visualization techniques are described. The feasibility of the approach is experimentally validated using an online desktop selection application in a network with a modern off-the-shelf smartphone. Applications are widespread and include, for instance, crisis and disaster management or military applications.

  20. Visual Representations of the Water Cycle in Science Textbooks

    ERIC Educational Resources Information Center

    Vinisha, K.; Ramadas, J.

    2013-01-01

    Visual representations, including photographs, sketches and schematic diagrams, are a valuable yet often neglected aspect of textbooks. Visual means of communication are particularly helpful in introducing abstract concepts in science. For effective communication, visuals and text need to be appropriately integrated within the textbook. This study…

  1. [Review of visual display system in flight simulator].

    PubMed

    Xie, Guang-hui; Wei, Shao-ning

    2003-06-01

    The visual display system is the key part of, and plays a very important role in, flight simulators and flight training devices. The developing history of visual display systems is recalled, and the principles and characteristics of several visual display systems, including collimated display systems and back-projected collimated display systems, are described. The future directions of visual display systems are analyzed.

  2. Does Differential Visual Exploration Contribute to Visual Memory Impairments in 22Q11.2 Microdeletion Syndrome?

    ERIC Educational Resources Information Center

    Bostelmann, M.; Glaser, B.; Zaharia, A.; Eliez, S.; Schneider, M.

    2017-01-01

    Background: Chromosome 22q11.2 microdeletion syndrome (22q11.2DS) is a genetic syndrome characterised by a unique cognitive profile. Individuals with the syndrome present several non-verbal deficits, including visual memory impairments and atypical exploration of visual information. In this study, we seek to understand how visual attention may…

  3. Acquired Codes of Meaning in Data Visualization and Infographics: Beyond Perceptual Primitives.

    PubMed

    Byrne, Lydia; Angus, Daniel; Wiles, Janet

    2016-01-01

    While information visualization frameworks and heuristics have traditionally been reluctant to include acquired codes of meaning, designers are making use of them in a wide variety of ways. Acquired codes leverage a user's experience to understand the meaning of a visualization. They range from figurative visualizations which rely on the reader's recognition of shapes, to conventional arrangements of graphic elements which represent particular subjects. In this study, we used content analysis to codify acquired meaning in visualization. We applied the content analysis to a set of infographics and data visualizations which are exemplars of innovative and effective design. 88% of the infographics and 71% of data visualizations in the sample contain at least one use of figurative visualization. Conventions on the arrangement of graphics are also widespread in the sample. In particular, a comparison of representations of time and other quantitative data showed that conventions can be specific to a subject. These results suggest that there is a need for information visualization research to expand its scope beyond perceptual channels, to include social and culturally constructed meaning. Our paper demonstrates a viable method for identifying figurative techniques and graphic conventions and integrating them into heuristics for visualization design.

  4. WebViz:A Web-based Collaborative Interactive Visualization System for large-Scale Data Sets

    NASA Astrophysics Data System (ADS)

    Yuen, D. A.; McArthur, E.; Weiss, R. M.; Zhou, J.; Yao, B.

    2010-12-01

    WebViz is a web-based application designed to conduct collaborative, interactive visualizations of large data sets for multiple users, allowing researchers situated all over the world to utilize the visualization services offered by the University of Minnesota’s Laboratory for Computational Sciences and Engineering (LCSE). This ongoing project has been under development for the last 3 1/2 years. The motivation behind WebViz lies primarily in the need to parse through an increasing amount of data produced by the scientific community as larger and faster multicore and massively parallel computers come to market, including the use of general-purpose GPU computing. WebViz allows these large data sets to be visualized online by anyone with an account, saving users time and resources by letting them visualize data ‘on the fly’ wherever they may be located. By leveraging AJAX via the Google Web Toolkit (http://code.google.com/webtoolkit/), we are able to provide users with a remote web portal to LCSE's (http://www.lcse.umn.edu) large-scale interactive visualization system already in place at the University of Minnesota. LCSE’s custom hierarchical volume rendering software provides high-resolution visualizations on the order of 15 million pixels and has been employed for visualizing data primarily from simulations in astrophysics and geophysical fluid dynamics. In the current version of WebViz, we have implemented a highly extensible back-end framework built around HTTP "server push" technology. The web application is accessible via a variety of devices including netbooks, iPhones, and other web- and JavaScript-enabled cell phones.
Features in the current version include the ability for users to (1) securely log in, (2) launch multiple visualizations, (3) conduct collaborative visualization sessions, (4) delegate control of aspects of a visualization to others, and (5) engage in collaborative chats with other users within the user interface of the web application. These features are all in addition to a full range of essential visualization functions, including 3-D camera and object orientation, position manipulation, time-stepping control, and custom color/alpha mapping.
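    The "server push" back-end described above can be approximated with a long-polling event broker: each session blocks on a queue until the server publishes an event (a camera move, a chat message). The sketch below is an in-process analogy under invented names, not WebViz's actual GWT/HTTP implementation:

```python
import queue
import threading

class PushBroker:
    """Minimal stand-in for HTTP 'server push': each session holds a queue
    that clients block on until the server publishes an event."""

    def __init__(self):
        self.sessions = {}
        self.lock = threading.Lock()

    def subscribe(self, session_id):
        with self.lock:
            return self.sessions.setdefault(session_id, queue.Queue())

    def publish(self, event):
        """Broadcast a visualization event (e.g. a camera move) to all sessions."""
        with self.lock:
            for q in self.sessions.values():
                q.put(event)

    def poll(self, session_id, timeout=1.0):
        """Client side of a long poll: block until an event arrives or time out."""
        try:
            return self.subscribe(session_id).get(timeout=timeout)
        except queue.Empty:
            return None

broker = PushBroker()
broker.subscribe("alice")
broker.subscribe("bob")
broker.publish({"type": "camera", "pos": (0, 0, 5)})
print(broker.poll("alice"))  # {'type': 'camera', 'pos': (0, 0, 5)}
```

    In a real HTTP deployment, `poll` would be the handler for a long-lived request; the blocking `get` is what lets the server "push" as soon as data exists instead of having clients re-poll on a timer.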

  5. Analysis of landscape character for visual resource management

    Treesearch

    Paul F. Anderson

    1979-01-01

    Description, classification and delineation of visual landscape character are initial steps in developing visual resource management plans. Landscape characteristics identified as key factors in visual landscape analysis include land cover/land use and landform. Landscape types, which are combinations of landform and surface features, were delineated for management...

  6. Reconfigurable Image Generator

    NASA Technical Reports Server (NTRS)

    Archdeacon, John L. (Inventor); Iwai, Nelson H. (Inventor); Kato, Kenji H. (Inventor); Sweet, Barbara T. (Inventor)

    2017-01-01

    The reconfigurable image generator (RIG) may simulate the visual conditions of a real-world environment and generate the necessary number of pixels in a visual simulation at rates up to 120 frames per second. The RIG may also include a database generation system capable of producing visual databases suitable for driving the visual fidelity required by the RIG.

  7. Special Problems of People with Diabetes and Visual Impairment.

    ERIC Educational Resources Information Center

    Rosenthal, J. L.

    1993-01-01

    This article addresses the types of visual impairment caused by diabetes and the unique problems that people with diabetes and visual impairment face. Diabetic retinopathy, cataracts, glaucoma, and diabetic optic neuropathy are discussed as causes of visual impairment, and specific problems in basic living are identified, including diet,…

  8. Visual Scripting.

    ERIC Educational Resources Information Center

    Halas, John

    Visual scripting is the coordination of words with pictures in sequence. This book presents the methods and viewpoints on visual scripting of fourteen film makers, from nine countries, who are involved in animated cinema; it contains concise examples of how a storybook and preproduction script can be prepared in visual terms; and it includes a…

  9. Clothing Construction: An Instructional Package with Adaptations for Visually Impaired Individuals.

    ERIC Educational Resources Information Center

    Crawford, Glinda B.; And Others

    Developed for the home economics teacher of mainstreamed visually impaired students, this guide provides clothing instruction lesson plans for the junior high level. First, teacher guidelines are given, including characteristics of the visually impaired, orienting such students to the classroom, orienting class members to the visually impaired,…

  10. Visual Immersion for Cultural Understanding and Multimodal Literacy

    ERIC Educational Resources Information Center

    Smilan, Cathy

    2017-01-01

    When considering inclusive art curriculum that accommodates all learners, including English language learners, two distinct yet inseparable issues come to mind. The first is that English language learner students can use visual language and visual literacy skills inherent in visual arts curriculum to scaffold learning in and through the arts.…

  11. A Prospective Curriculum Using Visual Literacy.

    ERIC Educational Resources Information Center

    Hortin, John A.

    This report describes the uses of visual literacy programs in the schools and outlines four categories for incorporating training in visual thinking into school curriculums as part of the back to basics movement in education. The report recommends that curriculum writers include materials pertaining to: (1) reading visual language and…

  12. Causes of visual impairment in children with low vision.

    PubMed

    Shah, Mufarriq; Khan, Mirzaman; Khan, Muhammad Tariq; Khan, Mohammad Younas; Saeed, Nasir

    2011-02-01

    To determine the main causes of visual impairment in children with low vision, to assess the need for spectacles and low vision devices (LVDs) in children, and to evaluate visual outcome after use of their LVDs for far and near distance. Observational study. Khyber Institute of Ophthalmic Medical Sciences, Peshawar, Pakistan, from June 2006 to December 2007. The clinical records of 270 children with low vision, aged 4-16 years, attending the Low Vision Clinic were reviewed. All children aged 4-16 years who had corrected visual acuity (VA) less than 6/18 in the better eye after medical or surgical treatment were included in the study. WHO low vision criteria were used to classify the children as visually impaired, severely visually impaired or blind. Results were described as percentage frequencies. One hundred and eighty-nine (70%) were males and 81 (30%) were females; the male to female ratio was 2.3:1. The main causes of visual impairment included nystagmus (15%), Stargardt's disease (14%), maculopathies (13%), myopic macular degeneration (11%) and oculocutaneous albinism (7%). The percentages of visually impaired, severely visually impaired and blind children were 33.8%, 27.2% and 39.0% respectively. Spectacles were prescribed to 146 patients and telescopes to 75 patients. Both spectacles and a telescope were prescribed to 179 patients, while an Ocutech telescope was prescribed to 4 patients. Retinal diseases, nystagmus and macular conditions were mainly responsible for low vision in children. Visually impaired children, especially those with hereditary/congenital ocular anomalies, benefit from refraction and low vision services, which facilitate vision enhancement and inclusive education.
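    The WHO low vision bands used in the study can be sketched as a simple classifier on corrected VA in the better eye (decimal notation). The thresholds 6/18, 6/60 and 3/60 follow the standard WHO definitions, but the function itself is an illustration, not the study's protocol:

```python
def who_low_vision_category(va_decimal):
    """Classify best-corrected visual acuity in the better eye (decimal
    notation) into the WHO low-vision bands: 6/18 ~ 0.33, 6/60 = 0.1,
    3/60 = 0.05."""
    if va_decimal >= 6 / 18:
        return "not low vision"
    if va_decimal >= 6 / 60:
        return "visually impaired"
    if va_decimal >= 3 / 60:
        return "severely visually impaired"
    return "blind"

print(who_low_vision_category(6 / 24))  # visually impaired
print(who_low_vision_category(0.04))    # blind
```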

  13. Migraine with Aura

    MedlinePlus

    ... processes visual signals (visual cortex) and causes these visual hallucinations. Many of the same factors that trigger migraine can also trigger migraine with aura, including stress, bright lights, some foods and medications, too much or too little sleep, ...

  14. Attitudes towards and perceptions of visual loss and its causes among Hong Kong Chinese adults.

    PubMed

    Lau, Joseph Tak Fai; Lee, Vincent; Fan, Dorothy; Lau, Mason; Michon, John

    2004-06-01

    As part of a study of visual function among Hong Kong Chinese adults, their attitudes and perceptions related to visual loss were examined. These included fear of visual loss, negative functional impacts of visual loss, the relationship between ageing and visual loss, and help-seeking behaviours related to visual loss. Demographic factors associated with these variables were also studied. The study population comprised people aged 40 and above, randomly selected from the Shatin district of Hong Kong. The participants underwent eye examinations that included visual acuity, intraocular pressure measurement, visual field, slit-lamp biomicroscopy and ophthalmoscopy. The primary cause of visual disability was recorded. The participants were also asked about their attitudes and perceptions regarding visual loss using a structured questionnaire. The prevalence of bilateral visual disability was 2.2% among adults aged 40 or above and 6.4% among adults aged 60 or above. Nearly 36% of the participants selected blindness as the most feared disabling medical condition, a substantially higher proportion than for conditions such as dementia, loss of limbs, deafness or aphasia. Inability to take care of oneself (21.0%), inconvenience related to mobility (20.2%) and inability to work (14.8%) were the three most commonly mentioned 'worst impact' effects of visual loss. Fully 68% of the participants believed that loss of vision is related to ageing. A majority of participants would seek help and advice from family members in case of visual loss. Visual function is perceived to be very important by Hong Kong Chinese adults. The fear of visual loss is widespread and particularly concerns self-care and functional abilities. Visual loss is commonly seen as related to ageing. Attitudes and perceptions in this population may be modified by educational and outreach efforts in order to take advantage of preventive measures.

  15. Visual examination apparatus

    NASA Technical Reports Server (NTRS)

    Haines, R. F.; Fitzgerald, J. W.; Rositano, S. A. (Inventor)

    1976-01-01

    An automated visual examination apparatus for measuring visual sensitivity and mapping blind spot location, including a projection system for displaying to a patient a series of visual stimuli, a response switch enabling the patient to indicate a reaction to the stimuli, and a recording system responsive to both the visual stimuli per se and the patient's response. The recording system thereby provides a correlated permanent record of both stimuli and response from which a substantive and readily apparent visual evaluation can be made.

  16. Perceptions of Visual Literacy. Selected Readings from the Annual Conference of the International Visual Literacy Association (21st, Scottsdale, Arizona, October 1989).

    ERIC Educational Resources Information Center

    Braden, Roberts A., Ed.; And Others

    These proceedings contain 37 papers from 51 authors noted for their expertise in the field of visual literacy. The collection is divided into three sections: (1) "Examining Visual Literacy" (including, in addition to a 7-year International Visual Literacy Association bibliography covering the period from 1983-1989, papers on the perception of…

  17. The impact of visual impairment on self-reported visual functioning in Latinos: The Los Angeles Latino Eye Study.

    PubMed

    Globe, Denise R; Wu, Joanne; Azen, Stanley P; Varma, Rohit

    2004-06-01

    To assess the association between presenting binocular visual acuity (VA) and self-reported visual function as measured by the 25-item National Eye Institute Visual Function Questionnaire (NEI-VFQ-25). A population-based, prevalence study of eye disease in Latinos 40 years and older residing in La Puente, California (Los Angeles Latino Eye Study [LALES]). Six thousand three hundred fifty-seven Latinos 40 years and older from 6 census tracts in La Puente. All participants completed a standardized interview, including the NEI-VFQ-25 to measure visual functioning, and a detailed eye examination. Two definitions of visual impairment were used: (1) presenting binocular distance VA of 20/40 or worse and (2) presenting binocular distance VA worse than 20/40. Analysis of variance was used to determine any systematic differences in mean NEI-VFQ-25 scores by visual impairment. Regression analyses were completed (1) to determine the association of age, gender, number of systemic comorbidities, depression, and VA with self-reported visual function and (2) to estimate a visual impairment-related difference for each subscale based on differences in VA. The main outcome measure was NEI-VFQ-25 scores in persons with visual impairment. Of the 5287 LALES participants with complete NEI-VFQ-25 data, 6.3% (including 20/40) and 4.2% (excluding 20/40) were visually impaired. In the visually impaired participants, the NEI-VFQ-25 subscale scores ranged from 46.2 (General Health) to 93.8 (Color Vision). In the regression model, only VA, depression, and number of comorbidities were significantly associated with all subscale scores (R(2) ranged from 0.09 for Ocular Pain to 0.33 for the composite score). For 9 of 11 subscales, a 5-point change was equivalent to a 1- or 2-line difference in VA. Relationships were similar regardless of the definition of visual impairment. In this population-based study of Latinos, the NEI-VFQ-25 was sensitive to differences in VA.
A 5-point difference on the NEI-VFQ-25 seems to be a minimal criterion for a visual impairment-related difference. Self-reported visual function is essentially unchanged if the definition of visual impairment includes or excludes a VA of 20/40.

  18. Visual consciousness and bodily self-consciousness.

    PubMed

    Faivre, Nathan; Salomon, Roy; Blanke, Olaf

    2015-02-01

    In recent years, consciousness has become a central topic in cognitive neuroscience. This review focuses on the relation between bodily self-consciousness - the feeling of being a subject in a body - and visual consciousness - the subjective experience associated with the perception of visual signals. Findings from clinical and experimental work have shown that bodily self-consciousness depends on specific brain networks and is related to the integration of signals from multiple sensory modalities including vision. In addition, recent experiments have shown that visual consciousness is shaped by the body, including vestibular, tactile, proprioceptive, and motor signals. Several lines of evidence suggest reciprocal relationships between vision and bodily signals, indicating that a comprehensive understanding of visual and bodily self-consciousness requires studying them in unison.

  19. The Development of a Visual-Perceptual Chemistry Specific (VPCS) Assessment Tool

    ERIC Educational Resources Information Center

    Oliver-Hoyo, Maria; Sloan, Caroline

    2014-01-01

    The development of the Visual-Perceptual Chemistry Specific (VPCS) assessment tool is based on items that align to eight visual-perceptual skills considered as needed by chemistry students. This tool includes a comprehensive range of visual operations and presents items within a chemistry context without requiring content knowledge to solve…

  20. A Content-Driven Approach to Visual Literacy: Gestalt Rediscovered.

    ERIC Educational Resources Information Center

    Schamber, Linda

    The goal of an introductory graphics course is fundamental visual literacy, which includes learning to appreciate the power of visuals in communication and to express ideas visually. Traditional principles of design--the focus of the course--are based on more fundamental gestalt theory, which relates to human pattern-seeking behavior, particularly…

  1. Visual Organizers as Scaffolds in Teaching English as a Foreign Language

    ERIC Educational Resources Information Center

    Chang, Yu-Liang

    2006-01-01

    This thesis deals with using visual organizers as scaffolds in teaching English as a foreign language (EFL). Based on the findings of scientific research, the review of literature explicates the effectiveness and fruitfulness of employing visual organizers in EFL instruction. It includes the five following components. First, visual organizers…

  2. Scopic Regime Change: The War of Terror, Visual Culture, and Art Education

    ERIC Educational Resources Information Center

    Darts, David; Tavin, Kevin; Sweeny, Robert W.; Derby, John

    2008-01-01

    This study examines visual dimensions and pedagogical repercussions of the war of terror. Iconographies of threat and prophylaxis are explored through a discussion of the actuarial gaze and the terr(or)itorialization of the visual field. Specific visual culture fallout from the war of terror is examined, including artistic responses and…

  3. Visual, Algebraic and Mixed Strategies in Visually Presented Linear Programming Problems.

    ERIC Educational Resources Information Center

    Shama, Gilli; Dreyfus, Tommy

    1994-01-01

    Identified and classified solution strategies of (n=49) 10th-grade students who were presented with linear programming problems in a predominantly visual setting in the form of a computerized game. Visual strategies were developed more frequently than either algebraic or mixed strategies. Appendix includes questionnaires. (Contains 11 references.)…

  4. Visualizing a High Recall Search Strategy Output for Undergraduates in an Exploration Stage of Researching a Term Paper.

    ERIC Educational Resources Information Center

    Cole, Charles; Mandelblatt, Bertie; Stevenson, John

    2002-01-01

    Discusses high recall search strategies for undergraduates and how to overcome information overload that results. Highlights include word-based versus visual-based schemes; five summarization and visualization schemes for presenting information retrieval citation output; and results of a study that recommend visualization schemes geared toward…

  5. Visual perception system and method for a humanoid robot

    NASA Technical Reports Server (NTRS)

    Chelian, Suhas E. (Inventor); Linn, Douglas Martin (Inventor); Wampler, II, Charles W. (Inventor); Bridgwater, Lyndon (Inventor); Wells, James W. (Inventor); Mc Kay, Neil David (Inventor)

    2012-01-01

    A robotic system includes a humanoid robot with robotic joints each moveable using an actuator(s), and a distributed controller for controlling the movement of each of the robotic joints. The controller includes a visual perception module (VPM) for visually identifying and tracking an object in the field of view of the robot under threshold lighting conditions. The VPM includes optical devices for collecting an image of the object, a positional extraction device, and a host machine having an algorithm for processing the image and positional information. The algorithm visually identifies and tracks the object, and automatically adapts an exposure time of the optical devices to prevent feature data loss of the image under the threshold lighting conditions. A method of identifying and tracking the object includes collecting the image, extracting positional information of the object, and automatically adapting the exposure time to thereby prevent feature data loss of the image.
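    The exposure-adaptation step can be illustrated with a simple feedback rule on image brightness: lengthen the exposure toward a target mean, and back off hard when too many pixels saturate, since saturation is what destroys feature data under threshold lighting. The patent abstract does not give the actual algorithm, so the function below is a hypothetical sketch:

```python
def adapt_exposure(exposure_ms, pixels, target=128, clip_frac_max=0.05,
                   gain=0.5, lo=0.1, hi=100.0):
    """One iteration of a toy auto-exposure loop over 8-bit pixel values.
    All parameter names and constants are illustrative assumptions."""
    mean = sum(pixels) / len(pixels)
    clipped = sum(p >= 255 for p in pixels) / len(pixels)
    if clipped > clip_frac_max:
        return max(lo, exposure_ms * 0.7)  # back off to recover clipped features
    # proportional step toward the target mean brightness, clamped to limits
    new = exposure_ms * (1 + gain * (target - mean) / target)
    return min(hi, max(lo, new))

# A dark image (mean 40) asks for a longer exposure:
print(adapt_exposure(10.0, [40] * 100))  # 13.4375
```

    Run once per frame between image capture and feature extraction, a rule like this keeps the tracked object's features inside the sensor's dynamic range.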

  6. Visual imagery and functional connectivity in blindness: a single-case study

    PubMed Central

    Boucard, Christine C.; Rauschecker, Josef P.; Neufang, Susanne; Berthele, Achim; Doll, Anselm; Manoliu, Andrej; Riedl, Valentin; Sorg, Christian; Wohlschläger, Afra; Mühlau, Mark

    2016-01-01

    We present a case report on visual brain plasticity after total blindness acquired in adulthood. SH lost her sight when she was 27. Despite having been totally blind for 43 years, she reported relying strongly on her vivid visual imagery. Three-Tesla magnetic resonance imaging (MRI) of SH and age-matched controls was performed. The MRI sequence included anatomical MRI, resting-state functional MRI, and task-related functional MRI where SH was instructed to imagine colours, faces, and motion. Compared to controls, voxel-based analysis revealed white matter loss along SH's visual pathway as well as grey matter atrophy in the calcarine sulci. Yet we demonstrated activation in visual areas, including V1, using functional MRI. Of the four identified visual resting-state networks, none showed alterations in spatial extent; hence, SH's preserved visual imagery seems to be mediated by intrinsic brain networks of normal extent. Time courses of two of these networks showed increased correlation with that of the inferior posterior default mode network, which may reflect adaptive changes supporting SH's strong internal visual representations. Overall, our findings demonstrate that conscious visual experience is possible even after years of absence of extrinsic input. PMID:25690326

  7. Visual imagery and functional connectivity in blindness: a single-case study.

    PubMed

    Boucard, Christine C; Rauschecker, Josef P; Neufang, Susanne; Berthele, Achim; Doll, Anselm; Manoliu, Andrej; Riedl, Valentin; Sorg, Christian; Wohlschläger, Afra; Mühlau, Mark

    2016-05-01

    We present a case report on visual brain plasticity after total blindness acquired in adulthood. SH lost her sight when she was 27. Despite having been totally blind for 43 years, she reported relying strongly on her vivid visual imagery. Three-Tesla magnetic resonance imaging (MRI) of SH and age-matched controls was performed. The MRI sequence included anatomical MRI, resting-state functional MRI, and task-related functional MRI where SH was instructed to imagine colours, faces, and motion. Compared to controls, voxel-based analysis revealed white matter loss along SH's visual pathway as well as grey matter atrophy in the calcarine sulci. Yet we demonstrated activation in visual areas, including V1, using functional MRI. Of the four identified visual resting-state networks, none showed alterations in spatial extent; hence, SH's preserved visual imagery seems to be mediated by intrinsic brain networks of normal extent. Time courses of two of these networks showed increased correlation with that of the inferior posterior default mode network, which may reflect adaptive changes supporting SH's strong internal visual representations. Overall, our findings demonstrate that conscious visual experience is possible even after years of absence of extrinsic input.

  8. What Do Patients With Glaucoma See? Visual Symptoms Reported by Patients With Glaucoma

    PubMed Central

    Hu, Cindy X.; Zangalli, Camila; Hsieh, Michael; Gupta, Lalita; Williams, Alice L.; Richman, Jesse

    2014-01-01

Abstract: Background: Vision loss from glaucoma has traditionally been described as loss of “peripheral vision.” In this prospective study, we aimed to improve our clinical understanding of the visual symptoms caused by glaucoma by asking patients specific, detailed questions about how they see. Methods: Patients who were clinically diagnosed with various types and stages of glaucoma were included. All had a comprehensive ocular examination, including Octopus visual field testing. Patients were excluded if they had other ocular conditions that affected their vision, including corneal, lens, or retinal pathologies. Patients responded to an oral questionnaire about their visual symptoms. We investigated the visual symptoms described by patients with glaucoma and correlated the severity of visual field loss with the visual symptoms reported. Results: Ninety-nine patients completed the questionnaire. Most patients (76%) were diagnosed with primary open-angle glaucoma. The most common symptoms reported by all patients, including patients with early or moderate glaucoma, were needing more light and blurry vision. Patients with a greater amount of field loss (Octopus mean defect >+9.4 dB) were more likely to report difficulty seeing objects to one or both sides, vision as if looking through dirty glasses, and trouble differentiating boundaries and colors. Conclusions: Vision loss in patients with glaucoma is not as simple as the traditional view of loss of peripheral vision. Needing more light and blurry vision were the most common symptoms reported by patients with glaucoma. PMID:24992392

  9. Visual examination apparatus

    NASA Technical Reports Server (NTRS)

    Haines, R. F.; Fitzgerald, J. W.; Rositano, S. A. (Inventor)

    1973-01-01

    An automated visual examination apparatus for measuring visual sensitivity and mapping blind spot location is described. The apparatus includes a projection system for displaying to a patient a series of visual stimuli, a response switch enabling him to indicate his reaction to the stimuli, and a recording system responsive to both the visual stimuli per se and the patient's response. The recording system provides a correlated permanent record of both stimuli and response from which a substantive and readily apparent visual evaluation can be made.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Springmeyer, R R; Brugger, E; Cook, R

The Data group provides data analysis and visualization support to its customers. This consists primarily of the development and support of VisIt, a data analysis and visualization tool. Support ranges from answering questions about the tool to providing classes on how to use it and performing data analysis and visualization for customers. The Information Management and Graphics Group supports and develops tools that enhance our ability to access, display, and understand large, complex data sets. Activities include applying visualization software for large scale data exploration; running video production labs on two networks; supporting graphics libraries and tools for end users; maintaining PowerWalls and assorted other displays; and developing software for searching and managing scientific data. Researchers in the Center for Applied Scientific Computing (CASC) work on various projects including the development of visualization techniques for large scale data exploration that are funded by the ASC program, among others. The researchers also have LDRD projects and collaborations with other lab researchers, academia, and industry. The IMG group is located in the Terascale Simulation Facility, home to Dawn, Atlas, BGL, and others, which includes both classified and unclassified visualization theaters, a visualization computer floor and deployment workshop, and video production labs. We continued to provide the traditional graphics group consulting and video production support. We maintained five PowerWalls and many other displays. We deployed a 576-node Opteron/IB cluster with 72 TB of memory providing a visualization production server on our classified network. We continue to support a 128-node Opteron/IB cluster providing a visualization production server for our unclassified systems and an older 256-node Opteron/IB cluster for the classified systems, as well as several smaller clusters to drive the PowerWalls.
The visualization production systems include NFS servers to provide dedicated storage for data analysis and visualization. The ASC projects have delivered new versions of visualization and scientific data management tools to end users and continue to refine them. VisIt had 4 releases during the past year, ending with VisIt 2.0. We released version 2.4 of Hopper, a Java application for managing and transferring files. This release included a graphical disk usage view which works on all types of connections and an aggregated copy feature for transferring massive datasets quickly and efficiently to HPSS. We continue to use and develop Blockbuster and Telepath. Both the VisIt and IMG teams were engaged in a variety of movie production efforts during the past year in addition to the development tasks.

  11. Cranial Nerve II

    PubMed Central

    Gillig, Paulette Marie; Sanders, Richard D.

    2009-01-01

    This article contains a brief review of the anatomy of the visual system, a survey of diseases of the retina, optic nerve and lesions of the optic chiasm, and other visual field defects of special interest to the psychiatrist. It also includes a presentation of the corticothalamic mechanisms, differential diagnosis, and various manifestations of visual illusions, and simple and complex visual hallucinations, as well as the differential diagnoses of these various visual phenomena. PMID:19855858

  12. Electrophysiologic evaluation of the visual pathway in patients with multiple sclerosis.

    PubMed

    Rodriguez-Mena, Diego; Almarcegui, Carmen; Dolz, Isabel; Herrero, Raquel; Bambo, Maria P; Fernandez, Javier; Pablo, Luis E; Garcia-Martin, Elena

    2013-08-01

    To evaluate the ability of visual evoked potentials and pattern electroretinograms (PERG) to detect subclinical axonal damage in patients during the early diagnostic stage of multiple sclerosis (MS). The authors also compared the ability of optical coherence tomography (OCT), PERG, and visual evoked potentials to detect axonal loss in MS patients and correlated the functional and structural properties of the retinal nerve fiber layer. Two hundred twenty-eight eyes of 114 subjects (57 MS patients and 57 age- and sex-matched healthy controls) were included. The visual pathway was evaluated based on functional and structural assessments. All patients underwent a complete ophthalmic examination that included assessment of visual acuity, ocular motility, intraocular pressure, visual field, papillary morphology, OCT, visual evoked potentials, and PERG. Visual evoked potentials (P100 latency and amplitude), PERG (N95 amplitude and N95/P50 ratio), and OCT parameters differed significantly between MS patients and healthy subjects. Moderate significant correlations were found between visual evoked potentials or PERG parameters and OCT measurements. Axonal damage in ganglion cells of the visual pathway can be detected based on structural measures provided by OCT in MS patients and by the N95 component and N95/P50 index of PERG, thus providing good correlation between function and structure.

  13. Visual Theorems.

    ERIC Educational Resources Information Center

    Davis, Philip J.

    1993-01-01

    Argues for a mathematics education that interprets the word "theorem" in a sense that is wide enough to include the visual aspects of mathematical intuition and reasoning. Defines the term "visual theorems" and illustrates the concept using the Marigold of Theodorus. (Author/MDH)

  14. Visualization case studies : a summary of three transportation applications of visualization

    DOT National Transportation Integrated Search

    2007-11-30

    The three case studies presented in "Visualization Case Studies" are intended to be helpful to transportation agencies in identifying effective techniques for enhancing and streamlining the project development process, including public outreach activ...

  15. Designing a visualization system for hydrological data

    NASA Astrophysics Data System (ADS)

    Fuhrmann, Sven

    2000-02-01

The field of hydrology is, like any other scientific field, strongly affected by a massive technological evolution. The spread of modern information and communication technology within the last three decades has led to increased collection, availability and use of spatial and temporal digital hydrological data. In a two-year research period a working group in Muenster applied and developed methods for the visualization of digital hydrological data and the documentation of hydrological models. A low-cost multimedia hydrological visualization system (HydroVIS) for the Weser river catchment was developed. The research group designed HydroVIS under freeware constraints and tried to show what kind of multimedia visualization techniques can be effectively used with a nonprofit hydrological visualization system. The system's visual components include features such as electronic maps, temporal and nontemporal cartographic animations, the display of geologic profiles, interactive diagrams and hypertext, including photographs and tables.

  16. Data Visualization Challenges and Opportunities in User-Oriented Application Development

    NASA Astrophysics Data System (ADS)

    Pilone, D.; Quinn, P.; Mitchell, A. E.; Baynes, K.; Shum, D.

    2015-12-01

This talk introduces the audience to some of the very real challenges associated with visualizing data from disparate data sources as encountered during the development of real-world applications. In addition to the fundamental challenges of dealing with the data and imagery, this talk discusses usability problems encountered while trying to provide interactive and user-friendly visualization tools. At the end of this talk the audience will be aware of some of the pitfalls of data visualization along with tools and techniques to help mitigate them. There are many sources of variable-resolution visualizations of science data available to application developers, including NASA's Global Imagery Browse Services (GIBS); however, integrating and leveraging visualizations in modern applications faces a number of challenges, including: - Varying visualized Earth "tile sizes" resulting in challenges merging disparate sources - Multiple visualization frameworks and toolkits with varying strengths and weaknesses - Global composite imagery vs. imagery matching EOSDIS granule distribution - Challenges visualizing geographically overlapping data with different temporal bounds - User interaction with overlapping or collocated data - Complex data boundaries and shapes combined with multi-orbit data and polar projections - Discovering the availability of visualizations and the specific parameters, color palettes, and configurations used to produce them In addition to discussing the challenges and approaches involved in visualizing disparate data, we will discuss solutions and components we'll be making available as open source to encourage reuse and accelerate application development.

  17. Autonomous mobile platform with simultaneous localisation and mapping system for patrolling purposes

    NASA Astrophysics Data System (ADS)

    Mitka, Łukasz; Buratowski, Tomasz

    2017-10-01

This work describes an autonomous mobile platform for supervision and surveillance purposes. The system can be adapted for mounting on different types of vehicles. The platform is based on a SLAM navigation system which performs the localization task. Sensor fusion including laser scanners, an inertial measurement unit (IMU), odometry and GPS lets the system determine its position reliably and precisely. The platform is able to create a 3D model of a supervised area and export it as a point cloud. The system can operate both inside and outside, as the navigation algorithm is resistant to typical localization errors caused by wheel slippage or temporary GPS signal loss. The system is equipped with a path-planning module which allows operating in two modes. The first mode is for periodical observation of points in a selected area. The second mode is turned on in case of an alarm: when it is called, the platform moves along the fastest route to the place of the alert. The path planning is always performed online with use of the most current scans, so the platform is able to adjust its trajectory to environment changes or obstacles that are in motion. The control algorithms are developed under the Robot Operating System (ROS), since it comes with drivers for many devices used in robotics. Such a solution allows for extending the system with any type of sensor in order to incorporate its data into the created area model. The proposed appliance can be ported to other existing robotic platforms or used to develop a new platform dedicated to a specific kind of surveillance. The platform's use cases are to patrol an area, such as an airport or metro station, in search of dangerous substances or suspicious objects and, in case of detection, instantly inform security forces. The second use case is tele-operation in hazardous areas for inspection purposes.
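The localisation idea in this abstract, blending dead-reckoned odometry with absolute fixes such as GPS, can be illustrated with a deliberately simplified sketch. This is not the platform's actual SLAM pipeline: the `fuse` helper, the 1-D state, and all variance values below are invented for illustration.

```python
# 1-D Kalman-style fusion of an odometry increment (prediction) with a
# GPS fix (correction). A real SLAM system fuses laser scans, IMU,
# odometry and GPS over a full pose; this collapses the idea to a
# single coordinate to show the weighting mechanism.

def fuse(position, variance, odom_delta, odom_var, gps, gps_var):
    # Predict: dead-reckon with the odometry increment; uncertainty grows.
    position += odom_delta
    variance += odom_var
    # Correct: blend in the GPS fix, weighted by relative uncertainty.
    gain = variance / (variance + gps_var)
    position += gain * (gps - position)
    variance *= 1.0 - gain
    return position, variance

pos, var = 0.0, 1.0  # initial position estimate and its variance
for odom, gps in [(1.0, 1.1), (1.0, 2.0), (1.0, 2.9)]:
    pos, var = fuse(pos, var, odom, odom_var=0.2, gps=gps, gps_var=0.5)
print(round(pos, 2))  # → 2.97
```

The gain term pulls the estimate toward whichever source is currently less uncertain, which is why such a system tolerates wheel slippage (odometry variance grows) or temporary GPS loss (the correction step is simply skipped).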

  18. Ergonomics for Online Searching.

    ERIC Educational Resources Information Center

    Wright, Carol; Friend, Linda

    1992-01-01

    Describes factors to be considered in the design of ergonomically correct workstations for online searchers. Topics discussed include visual factors, including lighting; acoustical factors; radiation and visual display terminals (VDTs); screen image characteristics; static electricity; hardware and equipment; workstation configuration; chairs;…

  19. The effect of linguistic and visual salience in visual world studies.

    PubMed

    Cavicchio, Federica; Melcher, David; Poesio, Massimo

    2014-01-01

Research using the visual world paradigm has demonstrated that visual input has a rapid effect on language interpretation tasks such as reference resolution and, conversely, that linguistic material-including verbs, prepositions and adjectives-can influence fixations to potential referents. More recent research has started to explore how this effect of linguistic input on fixations is mediated by properties of the visual stimulus, in particular by visual salience. In the present study we further explored the role of salience in the visual world paradigm by manipulating language-driven salience and visual salience. Specifically, we tested how linguistic salience (i.e., the greater accessibility of linguistically introduced entities) and visual salience (bottom-up, attention-grabbing visual aspects) interact. We recorded participants' eye movements during a MapTask, asking them to look from landmark to landmark displayed upon a map while hearing direction-giving instructions. The landmarks were of comparable size and color, except in the Visual Salience condition, in which one landmark had been made more visually salient. In the Linguistic Salience conditions, the instructions included references to an object not on the map. Response times and fixations were recorded. Visual Salience influenced the time course of fixations at both the beginning and the end of the trial but did not show a significant effect on response times. Linguistic Salience reduced response times and increased fixations to landmarks when they were associated with a linguistically salient entity not itself present on the map. When the target landmark was both visually and linguistically salient, it was fixated longer, but fixations were quicker when the target item was linguistically salient only. Our results suggest that the two types of salience work in parallel and that linguistic salience affects fixations even when the entity is not visually present.

  20. A Cross-sectional Study of Prevalence and Etiology of Childhood Visual Impairment in Auckland, New Zealand.

    PubMed

    Chong, Chee Foong; McGhee, Charles N J; Dai, Shuan

    2014-01-01

Childhood visual impairment has significant individual and socioeconomic costs with global differences in etiology and prevalence. This study aimed to determine prevalence, etiology, and avoidable causes of childhood visual impairment in New Zealand. Retrospective data analysis from a national referral center, the Blind and Low Vision Education Network New Zealand, Auckland. The World Health Organization Program for Prevention of Blindness eye examination records for visually impaired children, 16 years or younger, registered with the Auckland Visual Resource Centre, were included. Data analyzed included demographics, etiology, visual acuity, visual fields, educational setting, and rehabilitation plan. Charts of 340 children were examined, of which 267 children (144 blind, 123 low vision) were included in the analysis, whereas the remaining 73 charts of children with no visual impairment were excluded. The calculated prevalence of blindness and low vision was 0.05% and 0.04%, respectively, in the Auckland region. Principal causes of blindness affecting 91 children (63.9%) were cerebral visual impairment in 61 children (42.4%), optic nerve atrophy in 18 children (12.5%), and retinal dystrophy in 13 children (9.0%). The main potentially avoidable causes of blindness in 27 children (19%) were neonatal trauma and asphyxia in 9 children (33%) and nonaccidental injury in 6 children (22%). This first report of prevalence for childhood blindness and low vision in New Zealand is similar to data from Established Market Economy countries. The leading causes of blindness are also comparable to other high-income countries; however, proportions of avoidable causes differ significantly.

  1. Blind Spots: The Communicative Performance of Visual Impairment in Relationships and Social Interaction

    ERIC Educational Resources Information Center

    Frame, Melissa J.

    2004-01-01

    The purpose of this book is to understand the experiences of persons who are visually impaired, including those who are invisibly visually impaired. Through the use of a survey questionnaire and interviews and employing a cross-sectional survey design, the study examines the experiences of a large number of visually impaired respondents. The…

  2. Narrating the Visual: Accounting for and Projecting Actions in Webinar Q&As

    ERIC Educational Resources Information Center

    Yu, Di; Tadic, Nadja

    2018-01-01

    Visual conduct, including the use of gaze to attend to bodily-visual cues and other semiotic resources in interaction, has long been a topic of interest in ethnomethodology and conversation analysis (EMCA). Past EMCA work has examined visual conduct in face-to-face interaction, shedding light on the use of gaze to secure recipiency, facilitate…

  3. A Uniform Identity: Schoolgirl Snapshots and the Spoken Visual

    ERIC Educational Resources Information Center

    Spencer, Stephanie

    2007-01-01

    This article discusses the possibility for expanding our understanding of the visual to include the "spoken visual" within oral history analysis. It suggests that adding a further reading, that of the visualized body, to the voice-centred relational method we can consider the meaning of the uniformed body for the individual. It uses as a…

  4. A visualization environment for supercomputing-based applications in computational mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavlakos, C.J.; Schoof, L.A.; Mareda, J.F.

    1993-06-01

In this paper, we characterize a visualization environment that has been designed and prototyped for a large community of scientists and engineers, with an emphasis on supercomputing-based computational mechanics. The proposed environment makes use of a visualization server concept to provide effective, interactive visualization to the user's desktop. Benefits of using the visualization server approach are discussed. Some thoughts regarding desirable features for visualization server hardware architectures are also addressed. A brief discussion of the software environment is included. The paper concludes by summarizing certain observations which we have made regarding the implementation of such visualization environments.

  5. VISUAL FUNCTION CHANGES AFTER SUBCHRONIC TOLUENE INHALATION IN LONG-EVANS RATS.

    EPA Science Inventory

    Chronic exposure to volatile organic compounds, including toluene, has been associated with visual deficits such as reduced visual contrast sensitivity or impaired color discrimination in studies of occupational or residential exposure. These reports remain controversial, howeve...

  6. Distorted images of one's own body activates the prefrontal cortex and limbic/paralimbic system in young women: a functional magnetic resonance imaging study.

    PubMed

    Kurosaki, Mitsuhaya; Shirao, Naoko; Yamashita, Hidehisa; Okamoto, Yasumasa; Yamawaki, Shigeto

    2006-02-15

    Our aim was to study the gender differences in brain activation upon viewing visual stimuli of distorted images of one's own body. We performed functional magnetic resonance imaging on 11 healthy young men and 11 healthy young women using the "body image tasks" which consisted of fat, real, and thin shapes of the subject's own body. Comparison of the brain activation upon performing the fat-image task versus real-image task showed significant activation of the bilateral prefrontal cortex and left parahippocampal area including the amygdala in the women, and significant activation of the right occipital lobe including the primary and secondary visual cortices in the men. Comparison of brain activation upon performing the thin-image task versus real-image task showed significant activation of the left prefrontal cortex, left limbic area including the cingulate gyrus and paralimbic area including the insula in women, and significant activation of the occipital lobe including the left primary and secondary visual cortices in men. These results suggest that women tend to perceive distorted images of their own bodies by complex cognitive processing of emotion, whereas men tend to perceive distorted images of their own bodies by object visual processing and spatial visual processing.

  7. Virtual Environments in Scientific Visualization

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Lisinski, T. A. (Technical Monitor)

    1994-01-01

Virtual environment technology is a new way of approaching the interface between computers and humans. Emphasizing display and user control that conform to the user's natural ways of perceiving and thinking about space, virtual environment technologies enhance the ability to perceive and interact with computer-generated graphic information. This enhancement potentially has a major effect on the field of scientific visualization. Current examples of this technology include the Virtual Windtunnel being developed at NASA Ames Research Center. Other major institutions such as the National Center for Supercomputing Applications and SRI International are also exploring this technology. This talk will describe several implementations of virtual environments for use in scientific visualization. Examples include the visualization of unsteady fluid flows (the Virtual Windtunnel), the visualization of geodesics in curved spacetime, surface manipulation, and examples developed at various laboratories.

  8. Visual probes and methods for placing visual probes into subsurface areas

    DOEpatents

    Clark, Don T.; Erickson, Eugene E.; Casper, William L.; Everett, David M.

    2004-11-23

    Visual probes and methods for placing visual probes into subsurface areas in either contaminated or non-contaminated sites are described. In one implementation, the method includes driving at least a portion of a visual probe into the ground using direct push, sonic drilling, or a combination of direct push and sonic drilling. Such is accomplished without providing an open pathway for contaminants or fugitive gases to reach the surface. According to one implementation, the invention includes an entry segment configured for insertion into the ground or through difficult materials (e.g., concrete, steel, asphalt, metals, or items associated with waste), at least one extension segment configured to selectively couple with the entry segment, at least one push rod, and a pressure cap. Additional implementations are contemplated.

  9. GROTTO visualization for decision support

    NASA Astrophysics Data System (ADS)

    Lanzagorta, Marco O.; Kuo, Eddy; Uhlmann, Jeffrey K.

    1998-08-01

In this paper we describe the GROTTO visualization projects being carried out at the Naval Research Laboratory. GROTTO is a CAVE-like system, that is, a surround-screen, surround-sound, immersive virtual reality device. We have explored GROTTO visualization in a variety of scientific areas including oceanography, meteorology, chemistry, biochemistry, computational fluid dynamics and space sciences. Research has emphasized the applications of GROTTO visualization for military, land- and sea-based command and control. Examples include the visualization of ocean current models for the simulation and study of mine drifting and, inside our computational steering project, the effects of electro-magnetic radiation on missile defense satellites. We discuss plans to apply this technology to decision support applications involving the deployment of autonomous vehicles into contaminated battlefield environments, fire fighter control and hostage rescue operations.

  10. Timing the impact of literacy on visual processing

    PubMed Central

    Pegado, Felipe; Comerlato, Enio; Ventura, Fabricio; Jobert, Antoinette; Nakamura, Kimihiro; Buiatti, Marco; Ventura, Paulo; Dehaene-Lambertz, Ghislaine; Kolinsky, Régine; Morais, José; Braga, Lucia W.; Cohen, Laurent; Dehaene, Stanislas

    2014-01-01

    Learning to read requires the acquisition of an efficient visual procedure for quickly recognizing fine print. Thus, reading practice could induce a perceptual learning effect in early vision. Using functional magnetic resonance imaging (fMRI) in literate and illiterate adults, we previously demonstrated an impact of reading acquisition on both high- and low-level occipitotemporal visual areas, but could not resolve the time course of these effects. To clarify whether literacy affects early vs. late stages of visual processing, we measured event-related potentials to various categories of visual stimuli in healthy adults with variable levels of literacy, including completely illiterate subjects, early-schooled literate subjects, and subjects who learned to read in adulthood (ex-illiterates). The stimuli included written letter strings forming pseudowords, on which literacy is expected to have a major impact, as well as faces, houses, tools, checkerboards, and false fonts. To evaluate the precision with which these stimuli were encoded, we studied repetition effects by presenting the stimuli in pairs composed of repeated, mirrored, or unrelated pictures from the same category. The results indicate that reading ability is correlated with a broad enhancement of early visual processing, including increased repetition suppression, suggesting better exemplar discrimination, and increased mirror discrimination, as early as ∼100–150 ms in the left occipitotemporal region. These effects were found with letter strings and false fonts, but also were partially generalized to other visual categories. Thus, learning to read affects the magnitude, precision, and invariance of early visual processing. PMID:25422460

  11. Timing the impact of literacy on visual processing.

    PubMed

    Pegado, Felipe; Comerlato, Enio; Ventura, Fabricio; Jobert, Antoinette; Nakamura, Kimihiro; Buiatti, Marco; Ventura, Paulo; Dehaene-Lambertz, Ghislaine; Kolinsky, Régine; Morais, José; Braga, Lucia W; Cohen, Laurent; Dehaene, Stanislas

    2014-12-09

    Learning to read requires the acquisition of an efficient visual procedure for quickly recognizing fine print. Thus, reading practice could induce a perceptual learning effect in early vision. Using functional magnetic resonance imaging (fMRI) in literate and illiterate adults, we previously demonstrated an impact of reading acquisition on both high- and low-level occipitotemporal visual areas, but could not resolve the time course of these effects. To clarify whether literacy affects early vs. late stages of visual processing, we measured event-related potentials to various categories of visual stimuli in healthy adults with variable levels of literacy, including completely illiterate subjects, early-schooled literate subjects, and subjects who learned to read in adulthood (ex-illiterates). The stimuli included written letter strings forming pseudowords, on which literacy is expected to have a major impact, as well as faces, houses, tools, checkerboards, and false fonts. To evaluate the precision with which these stimuli were encoded, we studied repetition effects by presenting the stimuli in pairs composed of repeated, mirrored, or unrelated pictures from the same category. The results indicate that reading ability is correlated with a broad enhancement of early visual processing, including increased repetition suppression, suggesting better exemplar discrimination, and increased mirror discrimination, as early as ∼ 100-150 ms in the left occipitotemporal region. These effects were found with letter strings and false fonts, but also were partially generalized to other visual categories. Thus, learning to read affects the magnitude, precision, and invariance of early visual processing.

  12. Functional neural substrates of posterior cortical atrophy patients.

    PubMed

    Shames, H; Raz, N; Levin, Netta

    2015-07-01

    Posterior cortical atrophy (PCA) is a neurodegenerative syndrome in which the most pronounced pathologic involvement is in the occipito-parietal visual regions. Herein, we aimed to better define the cortical reflection of this unique syndrome using a thorough battery of behavioral and functional MRI (fMRI) tests. Eight PCA patients underwent extensive testing to map their visual deficits. Assessments included visual functions associated with lower and higher components of the cortical hierarchy, as well as dorsal- and ventral-related cortical functions. fMRI was performed on five patients to examine the neuronal substrate of their visual functions. The PCA patient cohort exhibited stereopsis, saccadic eye movements and higher dorsal stream-related functional impairments, including simultant perception, image orientation, figure-from-ground segregation, closure and spatial orientation. In accordance with the behavioral findings, fMRI revealed intact activation in the ventral visual regions of face and object perception while more dorsal aspects of perception, including motion and gestalt perception, revealed impaired patterns of activity. In most of the patients, there was a lack of activity in the word form area, which is known to be linked to reading disorders. Finally, there was evidence of reduced cortical representation of the peripheral visual field, corresponding to the behaviorally assessed peripheral visual deficit. The findings are discussed in the context of networks extending from parietal regions, which mediate navigationally related processing, visually guided actions, eye movement control and working memory, suggesting that damage to these networks might explain the wide range of deficits in PCA patients.

  13. [The status quo and expectation of optometry research in China].

    PubMed

    Qu, Jia

    2015-01-01

    The eye care problems related to optometry cover a wide range, including visual problems during recovery from eye disease, visual quality after surgical or non-surgical refractive correction, and the etiological investigation of functional eye diseases such as myopia. This article reviews the current challenges in visual health care and the academic developments and contributions of optometry in China, including fundamental research on myopia, refractive surgery and visual quality, and investigations of functional eye diseases. Some of this research has had an impact both domestically and overseas. Furthermore, scientific evidence for solving clinical problems and the current academic priorities that deserve attention are discussed.

  14. Information Visualization: The State of the Art for Maritime Domain Awareness

    DTIC Science & Technology

    2006-08-01

    …the authors, and quality assessments of certain documents, keywords, and links. The paper version of this database is located at… interface design principles, psychological studies, and perception research • include a review of visualization theory including current visualization… builds on that a theory of how maps are understood (knowledge schemata and cognitive representations), and then analyses the use of symbols and…

  15. Advancements to Visualization Control System (VCS, part of UV-CDAT), a Visualization Package Designed for Climate Scientists

    NASA Astrophysics Data System (ADS)

    Lipsa, D.; Chaudhary, A.; Williams, D. N.; Doutriaux, C.; Jhaveri, S.

    2017-12-01

    Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT, https://uvcdat.llnl.gov) is a data analysis and visualization software package developed at Lawrence Livermore National Laboratory and designed for climate scientists. Core components of UV-CDAT include: 1) the Community Data Management System (CDMS), which provides I/O support and a data model for climate data; 2) CDAT Utilities (GenUtil), which processes data using spatial and temporal averaging and statistical functions; and 3) the Visualization Control System (VCS) for interactive visualization of the data. VCS is a Python visualization package built primarily for climate scientists; however, because of its generality and breadth of functionality, it can be a useful tool for other scientific applications. VCS provides 1D, 2D, and 3D visualization functions: scatter plots and line graphs for 1D data; boxfill, meshfill, isofill, and isoline for 2D scalar data; vector glyphs and streamlines for 2D vector data; and 3d_scalar and 3d_vector for 3D data. Specifically for climate data, the plotting routines include map projections, Skew-T plots, and Taylor diagrams. While VCS provided a user-friendly API, its previous implementation relied on a slow vector-graphics (Cairo) backend suitable only for smaller datasets and non-interactive graphics. The LLNL and Kitware teams have added a new backend to VCS that uses the Visualization Toolkit (VTK). VTK is one of the most popular open-source, multi-platform scientific visualization libraries, written in C++. Its use of OpenGL and its pipeline-processing architecture result in a highly performant VCS library, and its multitude of supported data formats and visualization algorithms makes it easy to adopt new visualization methods and data formats in VCS.
In this presentation, we describe recent contributions to VCS, including new visualization plots, continuous integration testing using Conda and CircleCI, and tutorials and examples using Jupyter notebooks, as well as planned upgrades that will improve its ease of use and reliability and extend its capabilities.

  16. System Identification and Steering Control Characteristic of Rice Combine Harvester Model

    NASA Astrophysics Data System (ADS)

    Sutisna, S. P.; Setiawan, R. P. A.; Subrata, I. D. M.; Mandang, T.

    2018-05-01

    This study is preliminary research on rice combine harvester trajectory control. The vehicle model of the rice combine uses crawler tracks with differential steering; turning is produced by a speed difference between the right and left tracks. The study aims to characterize steering control of a rice combine harvester. In the real machine, a hydraulic brake on each track produces the speed difference. The model used two DC motors with a maximum speed of 100 rpm, one per track. A rotary encoder with a resolution of 600 pulses/rotation was connected to each DC motor shaft to monitor track speed, and each motor drove the input shaft of a gearbox with a 1:46 ratio. Motor speed for each track was controlled by pulse width modulation to produce the speed difference, and a gyroscope sensor with 0.01° resolution was used to determine the model's orientation angle. As in the real rice combine, the tracks cannot rotate in opposite directions at the same time, so the model cannot perform a pivot turn. The turning radius of the model was 28 cm and the maximum forward speed was 17.8 cm/s. Trajectory control of the model used a PID odometry controller, with the speed of each track and the vehicle orientation as inputs. A straight-line test showed that the controller could control the rice combine model's trajectory with an average error of 0.67 cm.
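    The differential-steering geometry described in this record (a track speed difference producing a turn, with encoder-and-gyro odometry feeding a controller) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the proportional gain, and the 30 cm track width are invented for the example (the paper gives only the 28 cm turning radius and 17.8 cm/s top speed).

    ```python
    import math

    def update_pose(x, y, theta, v_left, v_right, track_width, dt):
        """Dead-reckoning pose update for a differential-steer (crawler) vehicle.

        v_left/v_right are track speeds (cm/s), track_width is the distance
        between track centers (cm), theta is the heading in radians.
        """
        v = (v_left + v_right) / 2.0               # forward speed
        omega = (v_right - v_left) / track_width   # yaw rate from speed difference
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += omega * dt
        return x, y, theta

    def heading_correction(target_theta, theta, kp=1.0, v_max=17.8):
        """P term of a PID heading controller: command a track speed difference
        proportional to the heading error (hypothetical gain kp)."""
        delta = kp * (target_theta - theta)
        return v_max - delta / 2.0, v_max + delta / 2.0  # (v_left, v_right)
    ```

    With equal track speeds the pose integrates to a straight line, and a heading error commands opposite corrections on the two tracks, which is how the hydraulic brakes (or the model's PWM-driven motors) steer the vehicle.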

  17. Constructing an Indoor Floor Plan Using Crowdsourcing Based on Magnetic Fingerprinting

    PubMed Central

    Zhao, Fang; Jiang, Mengling; Ma, Hao; Zhang, Yuexia

    2017-01-01

    A large number of indoor positioning systems have recently been developed to cater for various location-based services. Indoor maps are a prerequisite of such indoor positioning systems; however, indoor maps are currently non-existent for most indoor environments. Construction of an indoor map by external experts precludes quick deployment and prevents widespread utilization of indoor localization systems. Here, we propose an algorithm for the automatic construction of an indoor floor plan, together with a magnetic fingerprint map of unmapped buildings using crowdsourced smartphone data. For floor plan construction, our system combines the use of dead reckoning technology, an observation model with geomagnetic signals, and trajectory fusion based on an affinity propagation algorithm. To obtain the indoor paths, the magnetic trajectory data obtained through crowdsourcing were first clustered using dynamic time warping similarity criteria. The trajectories were inferred from odometry tracing, and those belonging to the same cluster in the magnetic trajectory domain were then fused. Fusing these data effectively eliminates the inherent tracking errors originating from noisy sensors; as a result, we obtained highly accurate indoor paths. One advantage of our system is that no additional hardware such as a laser rangefinder or wheel encoder is required. Experimental results demonstrate that our proposed algorithm successfully constructs indoor floor plans with 0.48 m accuracy, which could benefit location-based services that lack indoor maps. PMID:29156639
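    The dynamic time warping similarity criterion used to cluster the magnetic trajectories can be sketched with the textbook DTW recurrence below; this is an illustrative implementation, not the authors' code, and a real system would compare multi-axis magnetic traces rather than 1-D lists.

    ```python
    def dtw_distance(a, b):
        """Dynamic time warping distance between two 1-D signal traces.

        Trajectories whose pairwise DTW distance is small would be assigned
        to the same cluster before fusion.
        """
        n, m = len(a), len(b)
        INF = float("inf")
        d = [[INF] * (m + 1) for _ in range(n + 1)]
        d[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                d[i][j] = cost + min(d[i - 1][j],      # insertion
                                     d[i][j - 1],      # deletion
                                     d[i - 1][j - 1])  # match
        return d[n][m]
    ```

    DTW's value here is that two walks along the same corridor at different speeds produce magnetic traces of different lengths; the warping aligns them anyway, which a plain Euclidean comparison cannot do.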

  18. Unhealthy behaviours and risk of visual impairment: The CONSTANCES population-based cohort.

    PubMed

    Merle, Bénédicte M J; Moreau, Gwendoline; Ozguler, Anna; Srour, Bernard; Cougnard-Grégoire, Audrey; Goldberg, Marcel; Zins, Marie; Delcourt, Cécile

    2018-04-26

    Unhealthy behaviours are linked to a higher risk of eye diseases, but their combined effect on visual function is unknown. We aimed to examine the individual and combined associations of diet, physical activity, smoking and alcohol consumption with visual impairment among French adults. 38 903 participants aged 18-73 years from the CONSTANCES nationwide cohort (2012-2016) with visual acuity measured and who completed lifestyle, medical and food frequency questionnaires were included. Visual impairment was defined as a presenting visual acuity <20/40 in the better eye. After full multivariate adjustment, the odds for visual impairment increased with decreasing diet quality (p for trend = 0.04), decreasing physical activity (p for trend = 0.02) and increasing smoking pack-years (p for trend = 0.03), whereas no statistically significant association with alcohol consumption was found. Combination of several unhealthy behaviours was associated with increasing odds for visual impairment (p for trend = 0.0002), with a fully-adjusted odds ratio of 1.81 (95% CI 1.18 to 2.79) for participants reporting 2 unhealthy behaviours and 2.92 (95% CI 1.60 to 5.32) for those reporting 3 unhealthy behaviours. An unhealthy lifestyle including low/intermediate diet quality, low physical activity and heavy smoking was associated with visual impairment in this large population-based study.
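    For readers unfamiliar with the reported statistic, an odds ratio and its confidence interval follow the standard 2x2-table computation, sketched below with a Woolf-type log-normal interval. The counts are invented for illustration; the cohort's fully covariate-adjusted estimates cannot be reproduced from raw counts this way.

    ```python
    import math

    def odds_ratio_ci(exposed_cases, exposed_controls,
                      unexposed_cases, unexposed_controls, z=1.96):
        """Unadjusted odds ratio with a Woolf (log-normal) 95% CI."""
        or_ = (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)
        # Standard error of log(OR): sqrt of summed reciprocal cell counts
        se = math.sqrt(1 / exposed_cases + 1 / exposed_controls
                       + 1 / unexposed_cases + 1 / unexposed_controls)
        lo = math.exp(math.log(or_) - z * se)
        hi = math.exp(math.log(or_) + z * se)
        return or_, lo, hi
    ```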

  19. Visual skills involved in decision making by expert referees.

    PubMed

    Ghasemi, Abdollah; Momeni, Maryam; Jafarzadehpur, Ebrahim; Rezaee, Meysam; Taheri, Hamid

    2011-02-01

    Previous studies have compared visual skills of expert and novice athletes; referees' performance has not been addressed. Visual skills of two groups of expert referees, successful and unsuccessful in decision making, were compared. Using video clips of soccer matches to assess decision-making success of 41 national and international referees from 31 to 42 years of age, 10 top referees were selected as the Successful group and 10 as the Unsuccessful group. Visual tests included visual memory, visual reaction time, peripheral vision, recognition speed, saccadic eye movement, and facility of accommodation. The Successful group had better visual skills than the Unsuccessful group. Such visual skills enhance soccer referees' performance and may be recommended for young referees.

  20. Integrating visualization and interaction research to improve scientific workflows.

    PubMed

    Keefe, Daniel F

    2010-01-01

    Scientific-visualization research is, nearly by necessity, interdisciplinary. In addition to their collaborators in application domains (for example, cell biology), researchers regularly build on close ties with disciplines related to visualization, such as graphics, human-computer interaction, and cognitive science. One of these ties is the connection between visualization and interaction research. This isn't a new direction for scientific visualization (see the "Early Connections" sidebar). However, momentum recently seems to be increasing toward integrating visualization research (for example, effective visual presentation of data) with interaction research (for example, innovative interactive techniques that facilitate manipulating and exploring data). We see evidence of this trend in several places, including the visualization literature and conferences.

  1. VisionQuest: Journeys toward Visual Literacy. Selected Readings from the Annual Conference of the International Visual Literacy Association (28th, Cheyenne, Wyoming, October, 1996).

    ERIC Educational Resources Information Center

    Griffin, Robert E., Ed.; And Others

    This document contains 59 selected papers from the 1996 International Visual Literacy Association (IVLA) conference. Topics include: learning to think visually; information design via the Internet; a program for inner-city at-risk children; dubbing versus subtitling television programs; connecting advertisements and classroom reading through…

  2. Visual Perception and Visual-Motor Integration in Very Preterm and/or Very Low Birth Weight Children: A Meta-Analysis

    ERIC Educational Resources Information Center

    Geldof, C. J. A.; van Wassenaer, A. G.; de Kieviet, J. F.; Kok, J. H.; Oosterlaan, J.

    2012-01-01

    A range of neurobehavioral impairments, including impaired visual perception and visual-motor integration, are found in very preterm born children, but reported findings show great variability. We aimed to aggregate the existing literature using meta-analysis, in order to provide robust estimates of the effect of very preterm birth on visual…

  3. Astronomy, Visual Literacy, and Liberal Arts Education

    NASA Astrophysics Data System (ADS)

    Crider, Anthony

    2016-01-01

    With the exponentially growing amount of visual content that twenty-first century students will face throughout their lives, teaching them to respond to it with visual and information literacy skills should be a clear priority for liberal arts education. While visual literacy is more commonly covered within humanities curricula, I will argue that because astronomy is inherently a visual science, it is a fertile academic discipline for the teaching and learning of visual literacy. Astronomers, like many scientists, rely on three basic types of visuals to convey information: images, qualitative diagrams, and quantitative plots. In this talk, I will highlight classroom methods that can be used to teach students to "read" and "write" these three separate visuals. Examples of "reading" exercises include questioning the authorship and veracity of images, confronting the distorted scales of many diagrams published in astronomy textbooks, and extracting quantitative information from published plots. Examples of "writing" exercises include capturing astronomical images with smartphones, re-sketching textbook diagrams on whiteboards, and plotting data with Google Motion Charts or IPython notebooks. Students can be further pushed to synthesize these skills with end-of-semester slide presentations that incorporate relevant images, diagrams, and plots rather than relying solely on bulleted lists.

  4. Chromatic and achromatic visual fields in relation to choroidal thickness in patients with high myopia: A pilot study.

    PubMed

    García-Domene, M C; Luque, M J; Díez-Ajenjo, M A; Desco-Esteban, M C; Artigas, J M

    2018-02-01

    To analyse the relationship between the choroidal thickness and the visual perception of patients with high myopia but without retinal damage. All patients underwent ophthalmic evaluation including a slit lamp examination and dilated ophthalmoscopy, subjective refraction, best corrected visual acuity, axial length, optical coherence tomography, contrast sensitivity function and sensitivity of the visual pathways. We included eleven eyes of subjects with high myopia. There are statistically significant correlations between choroidal thickness and almost all the contrast sensitivity values. The sensitivity of the magnocellular and koniocellular pathways is the most affected, and the homogeneity of the sensitivity of the magnocellular pathway depends on the choroidal thickness; when the thickness decreases, the sensitivity impairment extends from the center to the periphery of the visual field. Patients with high myopia without any fundus changes have visual impairments. We have found that choroidal thickness correlates with perceptual parameters such as contrast sensitivity or the mean defect and pattern standard deviation of the visual fields of some visual pathways. Our study shows that the magnocellular and koniocellular pathways are the most affected, so that these patients have impairment in motion perception and blue-yellow contrast perception. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  5. Near Real Time Integration of Satellite and Radar Data for Probabilistic Nearcasting of Severe Weather

    NASA Astrophysics Data System (ADS)

    Pilone, D.; Quinn, P.; Mitchell, A. E.; Baynes, K.; Shum, D.

    2014-12-01

    This talk introduces the audience to some of the very real challenges associated with visualizing data from disparate data sources as encountered during the development of real world applications. In addition to the fundamental challenges of dealing with the data and imagery, this talk discusses usability problems encountered while trying to provide interactive and user-friendly visualization tools. At the end of this talk the audience will be aware of some of the pitfalls of data visualization along with tools and techniques to help mitigate them. There are many sources of variable resolution visualizations of science data available to application developers including NASA's Global Imagery Browse Services (GIBS), however integrating and leveraging visualizations in modern applications faces a number of challenges, including: - Varying visualized Earth "tile sizes" resulting in challenges merging disparate sources - Multiple visualization frameworks and toolkits with varying strengths and weaknesses - Global composite imagery vs. imagery matching EOSDIS granule distribution - Challenges visualizing geographically overlapping data with different temporal bounds - User interaction with overlapping or collocated data - Complex data boundaries and shapes combined with multi-orbit data and polar projections - Discovering the availability of visualizations and the specific parameters, color palettes, and configurations used to produce them In addition to discussing the challenges and approaches involved in visualizing disparate data, we will discuss solutions and components we'll be making available as open source to encourage reuse and accelerate application development.

  6. Mainstreaming the Visually Impaired Child.

    ERIC Educational Resources Information Center

    Calovini, Gloria, Ed.

    Intended for school administrators and regular classroom teachers, the document presents guidelines for working with visually impaired students being integrated into regular classes. Included is a description of the special education program in Illinois. Sections cover the following topics: identification and referral of visually impaired…

  7. Visualization in Science and the Arts.

    ERIC Educational Resources Information Center

    Roth, Susan King

    Visualization as a factor of intelligence includes the mental manipulation of spatial configurations and has been associated with spatial abilities, creative thinking, and conceptual problem solving. There are numerous reports of scientists and mathematicians using visualization to anticipate transformation of the external world. Artists and…

  8. Visualization Tools for Teaching Computer Security

    ERIC Educational Resources Information Center

    Yuan, Xiaohong; Vega, Percy; Qadah, Yaseen; Archer, Ricky; Yu, Huiming; Xu, Jinsheng

    2010-01-01

    Using animated visualization tools has been an important teaching approach in computer science education. We have developed three visualization and animation tools that demonstrate various information security concepts and actively engage learners. The information security concepts illustrated include: packet sniffer and related computer network…

  9. What Do You See?

    ERIC Educational Resources Information Center

    Coleman, Julianne Maner; Goldston, M. Jenice

    2011-01-01

    When students draw observations or interpret and draw a diagram, they're communicating their understandings of science and demonstrating visual literacy abilities. Visual literacy includes skills needed to accurately interpret and produce visual and graphical information such as drawings, diagrams, tables, charts, maps, and graphs. Communication…

  10. D-Star Panorama by Opportunity (False Color)

    NASA Technical Reports Server (NTRS)

    2008-01-01

    NASA's twin Mars Exploration Rovers have been getting smarter as they get older. This view from Opportunity shows the tracks left by a drive executed with more onboard autonomy than has been used on any other drive by a Mars rover.

    Opportunity made the curving, 15.8-meter (52-foot) drive during its 1,160th Martian day, or sol (April 29, 2007). It was testing a navigational capability called 'Field D-Star,' which enables the rover to plan optimal long-range drives around any obstacles in order to travel the most direct safe route to the drive's designated destination. Opportunity and its twin, Spirit, did not have this capability until the third year after their January 2004 landings on Mars. Earlier, they could recognize hazards when they approached them closely, then back away and try another angle, but could not always find a safe route away from hazards. Field D-Star and several other upgrades were part of new onboard software uploaded from Earth in 2006. The Sol 1,160 drive by Opportunity was a Martian field test of Field D-Star and also used several other features of autonomy, including visual odometry to track the rover's actual position after each segment of the drive, avoidance of designated keep-out zones, and combining information from two sets of stereo images to consider a wide swath of terrain in analyzing the route.

    Two days later, on Sol 1,162, (May 1, 2007), Opportunity was still at the location it reached during that drive, and the rover's panoramic camera (Pancam) took the exposures combined into this image.

    Victoria Crater is in the background, at the top of the image. The Sol 1,160 drive began at the place near the center of the image where tracks overlap each other. Tracks farther away were left by earlier drives nearer to the northern rim of the crater. For scale, the distance between the parallel tracks left by the rover's wheels is about 1 meter (39 inches) from the middle of one track to the middle of the other. The rocks in the center foreground are roughly 7 to 10 centimeters (3 to 4 inches) tall. The rover could actually drive over them easily, but for this test, settings in the onboard hazard-detection software were adjusted to make these smaller rocks be considered dangerous to the rover. The patch of larger rocks to the right was set as a keep-out zone. The location from which this image was taken is where the rover stopped driving to communicate with Earth. A straight line from the starting point to the destination would be 11 meters (36 feet). Opportunity plotted and followed a smoothly curved, efficient path around the rocks, always keeping the rover in safe areas.

    This view combines separate images taken through the Pancam filters centered on wavelengths of 753 nanometers, 535 nanometers and 432 nanometers. It is presented in a false-color stretch to bring out subtle color differences in the scene.

  11. Deep Learning Predicts Correlation between a Functional Signature of Higher Visual Areas and Sparse Firing of Neurons.

    PubMed

    Zhuang, Chengxu; Wang, Yulong; Yamins, Daniel; Hu, Xiaolin

    2017-01-01

    Visual information in the visual cortex is processed in a hierarchical manner. Recent studies show that higher visual areas, such as V2, V3, and V4, respond more vigorously to images with naturalistic higher-order statistics than to images lacking them. This property is a functional signature of higher areas, as it is much weaker or even absent in the primary visual cortex (V1). However, the mechanism underlying this signature remains elusive. We studied this problem using computational models. In several typical hierarchical visual models including the AlexNet, VggNet, and SHMAX, this signature was found to be prominent in higher layers but much weaker in lower layers. By changing both the model structure and experimental settings, we found that the signature strongly correlated with sparse firing of units in higher layers but not with any other factors, including model structure, training algorithm (supervised or unsupervised), receptive field size, and property of training stimuli. The results suggest an important role of sparse neuronal activity underlying this special feature of higher visual areas.
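    One common way to quantify the "sparse firing" that the reported correlation rests on is the Treves-Rolls sparseness index; the sketch below is a standard formulation from the neural coding literature and not necessarily the exact measure used in this paper.

    ```python
    def treves_rolls_sparseness(responses):
        """Treves-Rolls sparseness of a set of non-negative unit activations.

        Returns a value in (0, 1]: values near 1 indicate dense (uniform)
        firing across units; values near 0 indicate firing concentrated
        in a few units, i.e. sparse activity.
        """
        n = len(responses)
        mean_r = sum(responses) / n
        mean_r2 = sum(r * r for r in responses) / n
        return (mean_r ** 2) / mean_r2
    ```

    Applied layer by layer to a network such as AlexNet, an index like this would let one test whether the higher-order-statistics signature tracks decreasing sparseness values (i.e., sparser firing) in deeper layers, as the abstract reports.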

  12. Deep Learning Predicts Correlation between a Functional Signature of Higher Visual Areas and Sparse Firing of Neurons

    PubMed Central

    Zhuang, Chengxu; Wang, Yulong; Yamins, Daniel; Hu, Xiaolin

    2017-01-01

    Visual information in the visual cortex is processed in a hierarchical manner. Recent studies show that higher visual areas, such as V2, V3, and V4, respond more vigorously to images with naturalistic higher-order statistics than to images lacking them. This property is a functional signature of higher areas, as it is much weaker or even absent in the primary visual cortex (V1). However, the mechanism underlying this signature remains elusive. We studied this problem using computational models. In several typical hierarchical visual models including the AlexNet, VggNet, and SHMAX, this signature was found to be prominent in higher layers but much weaker in lower layers. By changing both the model structure and experimental settings, we found that the signature strongly correlated with sparse firing of units in higher layers but not with any other factors, including model structure, training algorithm (supervised or unsupervised), receptive field size, and property of training stimuli. The results suggest an important role of sparse neuronal activity underlying this special feature of higher visual areas. PMID:29163117

  13. Feast for the Eyes: An Introduction to Data Visualization.

    PubMed

    Brigham, Tara J

    2016-01-01

    Data visualization is defined as the use of data presented in a graphical or pictorial manner. While data visualization is not a new concept, the ease with which anyone can create a data-driven chart, image, or visual has encouraged its growth. The increase in free sources of data and the demand for user-created content on social media have also contributed to data visualization's rise in popularity. This column will explore what data visualization is and how it is currently being used. It will also discuss the benefits, potential problems, and uses in libraries. A brief list of visualization guides is included.

  14. The multiple sclerosis visual pathway cohort: understanding neurodegeneration in MS.

    PubMed

    Martínez-Lapiscina, Elena H; Fraga-Pumar, Elena; Gabilondo, Iñigo; Martínez-Heras, Eloy; Torres-Torres, Ruben; Ortiz-Pérez, Santiago; Llufriu, Sara; Tercero, Ana; Andorra, Magi; Roca, Marc Figueras; Lampert, Erika; Zubizarreta, Irati; Saiz, Albert; Sanchez-Dalmau, Bernardo; Villoslada, Pablo

    2014-12-15

    Multiple Sclerosis (MS) is an immune-mediated disease of the Central Nervous System with two major underlying etiopathogenic processes: inflammation and neurodegeneration. The latter determines the prognosis of this disease. MS is the main cause of non-traumatic disability in middle-aged populations. The MS-VisualPath Cohort was set up to study the neurodegenerative component of MS using advanced imaging techniques by focusing on analysis of the visual pathway in a middle-aged MS population in Barcelona, Spain. We started the recruitment of patients in the early phase of MS in 2010 and it remains permanently open. All patients undergo a complete neurological and ophthalmological examination including measurements of physical and cognitive disability (Expanded Disability Status Scale, Multiple Sclerosis Functional Composite, and neuropsychological tests), disease activity (relapses) and visual function testing (visual acuity, color vision and visual field). The MS-VisualPath protocol also assesses the presence of anxiety and depressive symptoms (Hospital Anxiety and Depression Scale), general quality of life (SF-36) and visual quality of life (25-Item National Eye Institute Visual Function Questionnaire with the 10-Item Neuro-Ophthalmic Supplement). In addition, the imaging protocol includes both retinal (Optical Coherence Tomography and Wide-Field Fundus Imaging) and brain imaging (Magnetic Resonance Imaging). Finally, multifocal Visual Evoked Potentials are used to perform neurophysiological assessment of the visual pathway. The analysis of the visual pathway with advanced imaging and electrophysiological tools in parallel with clinical information will provide significant new knowledge regarding neurodegeneration in MS and provide new clinical and imaging biomarkers to help monitor disease progression in these patients.

  15. MRIVIEW: An interactive computational tool for investigation of brain structure and function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ranken, D.; George, J.

    MRIVIEW is a software system which uses image processing and visualization to provide neuroscience researchers with an integrated environment for combining functional and anatomical information. Key features of the software include semi-automated segmentation of volumetric head data and an interactive coordinate reconciliation method which utilizes surface visualization. The current system is a precursor to a computational brain atlas. We describe features this atlas will incorporate, including methods under development for visualizing brain functional data obtained from several different research modalities.

  16. Merging Psychophysical and Psychometric Theory to Estimate Global Visual State Measures from Forced-Choices

    NASA Astrophysics Data System (ADS)

    Massof, Robert W.; Schmidt, Karen M.; Laby, Daniel M.; Kirschen, David; Meadows, David

    2013-09-01

    Visual acuity, a forced-choice psychophysical measure of visual spatial resolution, is the sine qua non of clinical visual impairment testing in ophthalmology and optometry patients with visual system disorders ranging from refractive error to retinal, optic nerve, or central visual system pathology. Visual acuity measures are standardized against a norm, but it is well known that visual acuity depends on a variety of stimulus parameters, including contrast and exposure duration. This paper asks if it is possible to estimate a single global visual state measure from visual acuity measures as a function of stimulus parameters that can represent the patient's overall visual health state with a single variable. Psychophysical theory (at the sensory level) and psychometric theory (at the decision level) are merged to identify the conditions that must be satisfied to derive a global visual state measure from parameterised visual acuity measures. A global visual state measurement model is developed and tested with forced-choice visual acuity measures from 116 subjects with no visual impairments and 560 subjects with uncorrected refractive error. The results are in agreement with the expectations of the model.

  17. Visual cues for data mining

    NASA Astrophysics Data System (ADS)

    Rogowitz, Bernice E.; Rabenhorst, David A.; Gerth, John A.; Kalin, Edward B.

    1996-04-01

    This paper describes a set of visual techniques, based on principles of human perception and cognition, which can help users analyze and develop intuitions about tabular data. Collections of tabular data are widely available, including, for example, multivariate time series data, customer satisfaction data, stock market performance data, multivariate profiles of companies and individuals, and scientific measurements. In our approach, we show how visual cues can help users perform a number of data mining tasks, including identifying correlations and interaction effects, finding clusters and understanding the semantics of cluster membership, identifying anomalies and outliers, and discovering multivariate relationships among variables. These cues are derived from psychological studies on perceptual organization, visual search, perceptual scaling, and color perception. These visual techniques are presented as a complement to the statistical and algorithmic methods more commonly associated with these tasks, and provide an interactive interface for the human analyst.

  18. The effect of two different visual presentation modalities on the narratives of mainstream grade 3 children.

    PubMed

    Klop, D; Engelbrecht, L

    2013-12-01

    This study investigated whether a dynamic visual presentation method (a soundless animated video presentation) would elicit better narratives than a static visual presentation method (a wordless picture book). Twenty mainstream grade 3 children were randomly assigned to two groups and assessed with one of the visual presentation methods. Narrative performance was measured in terms of micro- and macrostructure variables. Microstructure variables included productivity (total number of words, total number of T-units), syntactic complexity (mean length of T-unit) and lexical diversity measures (number of different words). Macrostructure variables included episodic structure in terms of goal-attempt-outcome (GAO) sequences. Both visual presentation modalities elicited narratives of similar quantity and quality in terms of the micro- and macrostructure variables that were investigated. Animation of picture stimuli did not elicit better narratives than static picture stimuli.

  19. Visual Data Analysis for Satellites

    NASA Technical Reports Server (NTRS)

    Lau, Yee; Bhate, Sachin; Fitzpatrick, Patrick

    2008-01-01

    The Visual Data Analysis Package is a collection of programs and scripts that facilitate visual analysis of data available from NASA and NOAA satellites, as well as dropsonde, buoy, and conventional in-situ observations. The package features utilities for data extraction, data quality control, statistical analysis, and data visualization. The Hierarchical Data Format (HDF) satellite data extraction routines from NASA's Jet Propulsion Laboratory were customized for specific spatial coverage and file input/output. Statistical analysis includes the calculation of the relative error, the absolute error, and the root mean square error. Other capabilities include curve fitting through the data points to fill in missing data points between satellite passes or where clouds obscure satellite data. For data visualization, the software provides customizable Generic Mapping Tool (GMT) scripts to generate difference maps, scatter plots, line plots, vector plots, histograms, timeseries, and color fill images.
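
    The error metrics mentioned above are standard definitions; as a minimal sketch (the function below is illustrative, not part of the package, and assumes paired, nonzero in-situ values), a satellite-versus-observation comparison could be computed as:

```python
import math

def error_metrics(satellite, in_situ):
    """Mean relative error, mean absolute error, and RMSE for paired values.

    Assumes equal-length sequences and nonzero in-situ observations.
    """
    n = len(satellite)
    abs_err = sum(abs(s - o) for s, o in zip(satellite, in_situ)) / n
    rel_err = sum(abs(s - o) / abs(o) for s, o in zip(satellite, in_situ)) / n
    rmse = math.sqrt(sum((s - o) ** 2 for s, o in zip(satellite, in_situ)) / n)
    return rel_err, abs_err, rmse

# Toy comparison of two satellite retrievals against two in-situ observations.
rel, ae, rm = error_metrics([2.0, 4.0], [1.0, 5.0])
```

    Curve fitting to fill gaps between satellite passes would then operate on the same paired series.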

  20. Innovative Visualization Techniques applied to a Flood Scenario

    NASA Astrophysics Data System (ADS)

    Falcão, António; Ho, Quan; Lopes, Pedro; Malamud, Bruce D.; Ribeiro, Rita; Jern, Mikael

    2013-04-01

    The large and ever-increasing amounts of multi-dimensional, time-varying and geospatial digital information from multiple sources represent a major challenge for today's analysts. We present a set of visualization techniques that can be used for the interactive analysis of geo-referenced and time-sampled data sets, providing an integrated mechanism that aids the user in collaboratively exploring, presenting and communicating visually complex and dynamic data. Here we present these concepts in the context of a 4 hour flood scenario from Lisbon in 2010, with data that includes measures of water column (flood height) every 10 minutes at a 4.5 m x 4.5 m resolution, topography, building damage, building information, and online base maps. Techniques we use include web-based linked views, multiple charts, map layers and storytelling. We explain in more detail two of these that are not currently in common use for data visualization: storytelling and web-based linked views. Visual storytelling is a method for providing a guided but interactive process of visualizing data, allowing more engaging data exploration through interactive web-enabled visualizations. Within storytelling, a snapshot mechanism helps the author of a story to highlight data views of particular interest and subsequently share or guide others within the data analysis process. This allows a person to select relevant attributes for a snapshot, such as highlighted regions for comparison, the time step, class values for the colour legend, etc., and capture the current application state, which can then be shared as a hyperlink and recreated by someone else. Since data can be embedded within this snapshot, it is possible to interactively visualize and manipulate it. 
The second technique, web-based linked views, uses multiple windows that respond interactively to user selections, so that when an object is selected or changed in one window, it is automatically updated in all the other windows. These concepts can be part of a collaborative platform, where multiple people share and work together on the data via online access, which also allows remote usage from a mobile platform. Storytelling augments analysis and decision-making capabilities, allowing analysts to assimilate complex situations and reach informed decisions, in addition to helping the public visualize information. In our visualization scenario, developed in the context of the VA-4D project for the European Space Agency (see http://www.ca3-uninova.org/project_va4d), we make use of the GAV (GeoAnalytics Visualization) framework, a web-oriented visual analytics application based on multiple interactive views. The final visualization that we produce includes multiple interactive views, including a dynamic multi-layer map surrounded by other visualizations such as bar charts, time graphs and scatter plots. The map provides flood and building information on top of a base city map (street maps and/or satellite imagery provided by online map services such as Google Maps, Bing Maps, etc.). Damage over time for selected buildings, damage for all buildings at a chosen time period, and the correlation between damage and water depth can be analysed in the other views. This interactive web-based visualization, which incorporates the ideas of storytelling, web-based linked views, and other visualization techniques for a 4 hour flood event in Lisbon in 2010, can be found online at http://www.ncomva.se/flash/projects/esa/flooding/.
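
    The snapshot mechanism described above amounts to serializing the relevant application state into a shareable token; a minimal sketch, assuming the state is a flat dictionary of view settings (the field names and URL are illustrative, not from the GAV framework):

```python
import base64
import json

def make_snapshot(state: dict) -> str:
    """Encode the current application state as a URL-safe token."""
    raw = json.dumps(state, sort_keys=True).encode("utf-8")
    return base64.urlsafe_b64encode(raw).decode("ascii")

def load_snapshot(token: str) -> dict:
    """Recreate the application state captured in a shared token."""
    return json.loads(base64.urlsafe_b64decode(token.encode("ascii")))

# Illustrative state: time step, highlighted region, colour-legend classes.
state = {"time_step": 12, "highlight": "region-3", "legend_classes": [0.5, 1.0, 2.0]}
link = "http://example.org/flood?snapshot=" + make_snapshot(state)
```

    Because the state travels inside the hyperlink itself, the receiving application can recreate the exact view without server-side storage, which is the property the snapshot mechanism relies on.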

  1. Presentation of Information on Visual Displays.

    ERIC Educational Resources Information Center

    Pettersson, Rune

    This discussion of factors involved in the presentation of text, numeric data, and/or visuals using video display devices describes in some detail the following types of presentation: (1) visual displays, with attention to additive color combination; measurements, including luminance, radiance, brightness, and lightness; and standards, with…

  2. Visual Arts Research, 1995.

    ERIC Educational Resources Information Center

    Gardner, Nancy C., Ed.; Thompson, Christine, Ed.

    1995-01-01

    This document consists of the two issues of the journal "Visual Arts Research" published in 1995. This journal focuses on the theory and practice of visual arts education from educational, historical, philosophical, and psychological perspectives. Number 1 of this volume includes the following contributions: (1) "Children's Sensitivity to…

  3. Computers in the Classroom--Bane or Boon.

    ERIC Educational Resources Information Center

    Getman, G. N.

    1983-01-01

    The author cautions that far from being an educational panacea, computers may actually be frustrating or even harmful for children with visual problems, including difficulties with visual-tactual development, visual attention span, and nearsightedness. A coordinated program implemented by education and clinicians is advocated. (CL)

  4. 45 CFR 1308.13 - Eligibility criteria: Visual impairment including blindness.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    .... A child is visually impaired if: (1) The vision loss meets the definition of legal blindness in the... vision such that the widest diameter of the visual field subtends an angle no greater than 20 degrees. (b..., limited field of vision, cataracts, etc. ...

  5. 45 CFR 1308.13 - Eligibility criteria: Visual impairment including blindness.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    .... A child is visually impaired if: (1) The vision loss meets the definition of legal blindness in the... vision such that the widest diameter of the visual field subtends an angle no greater than 20 degrees. (b..., limited field of vision, cataracts, etc. ...

  6. RELEVANCE OF VISUAL EFFECTS OF VOLATILE ORGANIC COMPOUNDS TO HUMAN HEALTH RISK ASSESSMENT

    EPA Science Inventory

    Traditional measures of neurotoxicity have included assessment of sensory, cognitive, and motor function. Visual system function and the neurobiological substrates are well characterized across species. Dysfunction in the visual system may be specific or may be surrogate for mor...

  7. [Case of pediatric chronic myeloid leukemia with bilateral visual loss onset].

    PubMed

    Hara, Yusuke; Kamura, Yumi; Oikawa, Aki; Shichino, Hiroyuki; Mugishima, Hideo; Goto, Hiroshi

    2010-05-01

    Chronic myeloid leukemia (CML) during childhood is rare, and only a few cases have presented with visual disturbances as an initial symptom. We report a pediatric CML case diagnosed after bilateral visual loss. An 11-year-old boy complained of visual loss in both eyes. His best corrected visual acuity was 0.5 in the right eye and 0.2 in the left. Fundus examination showed disc swelling, dilated and tortuous retinal veins, and multiple elevated retinal lesions with hemorrhages of various sizes, from one-fourth to four disc diameters, in both eyes. He was diagnosed with CML based on leukocytosis and a systematic work-up, including detection of the Philadelphia chromosome and the BCR-ABL kinase domain in peripheral blood and bone marrow. The ocular findings improved after treatment with hydroxyurea, leukocytapheresis and imatinib. His best corrected visual acuity improved to 0.7 in both eyes. Recent leukemia therapy including imatinib is effective not only for ocular lesions but also for inducing hematological remission in childhood CML.

  8. AstroBlend: An astrophysical visualization package for Blender

    NASA Astrophysics Data System (ADS)

    Naiman, J. P.

    2016-04-01

    The rapid growth in scale and complexity of both computational and observational astrophysics over the past decade necessitates efficient and intuitive methods for examining and visualizing large datasets. Here, I present AstroBlend, an open-source Python library for use within the three dimensional modeling software, Blender. While Blender has been a popular open-source software among animators and visual effects artists, in recent years it has also become a tool for visualizing astrophysical datasets. AstroBlend combines the three dimensional capabilities of Blender with the analysis tools of the widely used astrophysical toolset, yt, to afford both computational and observational astrophysicists the ability to simultaneously analyze their data and create informative and appealing visualizations. The introduction of this package includes a description of features, work flow, and various example visualizations. A website - www.astroblend.com - has been developed which includes tutorials, and a gallery of example images and movies, along with links to downloadable data, three dimensional artistic models, and various other resources.

  9. The visual cliff's forgotten menagerie: rats, goats, babies, and myth-making in the history of psychology.

    PubMed

    Rodkey, Elissa N

    2015-01-01

    Eleanor Gibson and Richard Walk's famous visual cliff experiment is one of psychology's classic studies, included in most introductory textbooks. Yet the famous version which centers on babies is actually a simplification, the result of disciplinary myth-making. In fact the visual cliff's first subjects were rats, and a wide range of animals were tested on the cliff, including chicks, turtles, lambs, kid goats, pigs, kittens, dogs, and monkeys. The visual cliff experiment was more accurately a series of experiments, employing varying methods and a changing apparatus, modified to test different species. This paper focuses on the initial, nonhuman subjects of the visual cliff, resituating the study in its original experimental logic, connecting it to the history of comparative psychology, Gibson's interest in comparative psychology, as well as gender-based discrimination. Recovering the visual cliff's forgotten menagerie helps to counter the romanticization of experimentation by focusing on the role of extrascientific factors, chance, complexity, and uncertainty in the experimental process. © 2015 Wiley Periodicals, Inc.

  10. Shallow Habitat Air Dive Series (SHAD I and II): The Effects on Visual Performance and Physiology

    DTIC Science & Technology

    1974-10-02

    APPLICATION Since the tests employed cover all the major, known visual symptoms of oxygen toxicity, the data indicate that man can live under...included a number of measures of visual physiology and visual performance, since many of the symptoms of oxygen toxicity involve the visual system. The...oxygen toxicity. Nitrogen narcosis, which normally occurs at 200 to 300 ft, is the lesser of the two problems for shallow habitat divers, since

  11. A Comparative Study of Emotional Stability of Visually Impaired Students Studying at Secondary Level in Inclusive Setup and Special Schools

    ERIC Educational Resources Information Center

    Pant, Pankaj; Joshi, P. K.

    2016-01-01

    Visual impairment as an umbrella term includes all levels of vision loss. Research in the field of visual disability is far from satisfactory in India. Some attempts have been made to study different aspects of the lives of visually disabled children. Such attempts help by revealing the facts of their life, characteristics, activities,…

  12. Digital Images and Human Vision

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)

    1997-01-01

    Processing of digital images destined for visual consumption raises many interesting questions regarding human visual sensitivity. This talk will survey some of these questions, including some that have been answered and some that have not. There will be an emphasis upon visual masking, and a distinction will be drawn between masking due to contrast gain control processes, and due to processes such as hypothesis testing, pattern recognition, and visual search.

  13. The Influence of Manifest Strabismus and Stereoscopic Vision on Non-Verbal Abilities of Visually Impaired Children

    ERIC Educational Resources Information Center

    Gligorovic, Milica; Vucinic, Vesna; Eskirovic, Branka; Jablan, Branka

    2011-01-01

    This research was conducted in order to examine the influence of manifest strabismus and stereoscopic vision on non-verbal abilities of visually impaired children aged between 7 and 15. The sample included 55 visually impaired children from the 1st to the 6th grade of elementary schools for visually impaired children in Belgrade. RANDOT stereotest…

  14. Visual Literacy in the Digital Age: Selected Readings from the Annual Conference of the International Visual Literacy Association (25th, Rochester, New York, October 13-17, 1993).

    ERIC Educational Resources Information Center

    Beauchamp, Darrel G.; And Others

    This document contains selected papers from the 25th annual conference of the International Visual Literacy Association (IVLA). Topics addressed in the papers include the following: visual literacy; graphic information in research and education; evaluation criteria for instructional media; understanding symbols in business presentations;…

  15. A real-time articulatory visual feedback approach with target presentation for second language pronunciation learning.

    PubMed

    Suemitsu, Atsuo; Dang, Jianwu; Ito, Takayuki; Tiede, Mark

    2015-10-01

    Articulatory information can support learning or remediating pronunciation of a second language (L2). This paper describes an electromagnetic articulometer-based visual-feedback approach that presents an articulatory target in real time to facilitate L2 pronunciation learning. This approach trains learners to adjust articulatory positions to match targets for an L2 vowel estimated from productions of vowels that overlap in both L1 and L2. Training of Japanese learners on the American English vowel /æ/ that included visual feedback improved its pronunciation regardless of whether audio training was also included. Articulatory visual feedback is thus shown to be an effective method for facilitating L2 pronunciation learning.

  16. Discussing State-of-the-Art Spatial Visualization Techniques Applicable for the Epidemiological Surveillance Data on the Example of Campylobacter spp. in Raw Chicken Meat.

    PubMed

    Plaza-Rodríguez, C; Appel, B; Kaesbohrer, A; Filter, M

    2016-08-01

    Within the European activities for the 'Monitoring and Collection of Information on Zoonoses', EFSA annually publishes a European report including information on the prevalence of Campylobacter spp. in Germany. Spatial epidemiology is a fundamental tool for the generation of these reports, with the representation of prevalence as an essential element. Until now, choropleth maps have been the default visualization technique in epidemiological monitoring and surveillance reports by EFSA and German authorities. However, given their limitations, it seems reasonable to explore alternative chart types. Four maps, including choropleth, cartogram, graduated-symbol and dot-density maps, were created to visualize real-world sample data on the prevalence of Campylobacter spp. in raw chicken meat samples in Germany in 2011. In addition, adjacent and coincident maps were created to visualize the associated uncertainty as well. As an outcome, we found that no single data visualization technique encompasses all the features necessary to visualize prevalence data alone or together with its associated uncertainty. All the visualization techniques considered in this study were found to have both advantages and disadvantages. To determine which visualization technique should be used for future reports, we recommend creating a dialogue between end-users and epidemiologists on the basis of sample data and charts. The final decision should also consider the knowledge and experience of end-users as well as the specific objective to be achieved with the charts. © 2015 The Authors. Zoonoses and Public Health Published by Blackwell Verlag GmbH.

  17. Moving beyond the White Cane: Building an Online Learning Environment for the Visually Impaired Professional.

    ERIC Educational Resources Information Center

    Mitchell, Donald P.; Scigliano, John A.

    2000-01-01

    Describes the development of an online learning environment for a visually impaired professional. Topics include physical barriers, intellectual barriers, psychological barriers, and technological barriers; selecting appropriate hardware and software; and combining technologies that include personal computers, Web-based resources, network…

  18. Landscape values in public decisions

    Treesearch

    Richard N. L. Andrews

    1979-01-01

    The National Environmental Policy Act requires all agencies to develop techniques to insure appropriate consideration of all environmental amenities and values, including those presently unquantified, by all federal agencies in all their activities. These obviously include the values associated with the landscape and its visual resources. The visual resource, however,...

  19. Biology for the Visually Impaired Student.

    ERIC Educational Resources Information Center

    Cooperman, Susan

    1980-01-01

    This is a description of a beginning college biology course for visually impaired students. Equipment for instruction is discussed and methods for using the materials are included. Topics included in the course are chemical bonding, diffusion and osmosis, cell structure, meiosis and mitosis, reproduction, behavior, nutrition, and circulation. (SA)

  20. Fall with and without fracture in elderly: what's different?

    PubMed

    Kantayaporn, Choochat

    2012-10-01

    Fall-related fractures are among the major health problems in the elderly. This presentation aimed to identify the factors of falls that cause fractures. A retrospective case-control study was designed; the sample comprised all individuals in Lamphun who experienced a fall within 1 year. Factors studied included age, gender, underlying diseases, chronic drug use, history of parental fragility fracture, age at menopause, steroid use, body mass index, visual acuity, and the timed up-and-go test. Multivariate regression analysis was used. Among 1,244 cases of fall, 336 cases of fracture were found. Factors that significantly distinguished the fracture group from the fall-without-fracture group included age, female gender, menopause before the age of 45, and visual impairment. Visual impairment, rather than osteoporosis alone, was a key factor in falls with fracture. The author suggests that falling-fracture prevention programs should include correction of visual impairment in addition to osteoporosis treatment.

  1. Adapting for Impaired Patrons.

    ERIC Educational Resources Information Center

    Schuyler, Michael

    1999-01-01

    Describes how a library, with an MCI Corporation grant, approached the process of setting up computers for the visually impaired. Discusses preparations, which included hiring a visually-impaired user as a consultant and contacting the VIP (Visually Impaired Persons) group; equipment; problems with the graphical user interface; and training.…

  2. Visual Arts Research, 1994.

    ERIC Educational Resources Information Center

    Gardner, Nancy C., Ed.; Thompson, Christine, Ed.

    1994-01-01

    This document consists of the two issues of the journal "Visual Arts Research" published in 1994. This journal focuses on the theory and practice of visual arts education from educational, historical, philosophical, and psychological perspectives. Number 1 of this volume includes the following contributions: (1) "Zooming in on the Qualitative…

  3. 37 CFR 202.3 - Registration of copyright.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) Class VA: Works of the visual arts. This class includes all published and unpublished pictorial, graphic... permission and under the direction of the Visual Arts Division, the application may be submitted... published photographs after consultation and with the permission and under the direction of the Visual Arts...

  4. Fitting the Jigsaw of Citation: Information Visualization in Domain Analysis.

    ERIC Educational Resources Information Center

    Chen, Chaomei; Paul, Ray J.; O'Keefe, Bob

    2001-01-01

    Discusses the role of information visualization in modeling and representing intellectual structures associated with scientific disciplines and visualizes the domain of computer graphics based on bibliographic data from author cocitation patterns. Highlights include author cocitation maps, citation time lines, animation of a high-dimensional…

  5. Comparison of visual acuity of the patients on the first day after sub-Bowman keratomileusis or laser in situ keratomileusis.

    PubMed

    Zhao, Wei; Wu, Ting; Dong, Ze-Hong; Feng, Jie; Ren, Yu-Feng; Wang, Yu-Sheng

    2016-01-01

    To compare recovery of visual acuity in patients one day after sub-Bowman keratomileusis (SBK) or laser in situ keratomileusis (LASIK), data from 5923 eyes of 2968 patients that received LASIK (2755 eyes) or SBK (3168 eyes) were retrospectively analyzed. The eyes were divided into 4 groups according to preoperative spherical equivalent: -12.00 to -9.00 D, extremely high myopia (n=396, including 192 and 204 in the SBK and LASIK groups, respectively); -9.00 to -6.00 D, high myopia (n=1822, including 991 and 831); -6.00 to -3.00 D, moderate myopia (n=3071, including 1658 and 1413); and -3.00 to 0.00 D, low myopia (n=634, including 327 and 307). Uncorrected logMAR visual acuity values were assessed under standard natural light, and analysis of variance was used for comparisons among the groups. Uncorrected visual acuity values at day 1 after operation were 0.0115±0.1051 and 0.0466±0.1477 for patients receiving SBK and LASIK, respectively (P<0.01); values of 0.1854±0.1842, 0.0615±0.1326, -0.0033±0.0978, and -0.0164±0.0972 were obtained for the extremely high, high, moderate, and low myopia groups, respectively (P<0.01). In addition, significant differences in visual acuity at day 1 after operation were found between patients receiving SBK and LASIK in each myopia subgroup. Compared with LASIK, SBK is safer and more effective, with faster recovery; it is therefore more likely to be accepted by patients for better uncorrected visual acuity on the day following operation.
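
    The analysis of variance used above compares mean logMAR acuity across the myopia subgroups; a minimal sketch on synthetic data (the sample values below are illustrative, not the study's), using a stdlib-only one-way ANOVA F statistic:

```python
import random
import statistics

def one_way_anova_f(*groups):
    """One-way ANOVA F statistic: between-group over within-group mean square."""
    values = [x for g in groups for x in g]
    grand_mean = statistics.fmean(values)
    k, n = len(groups), len(values)
    ss_between = sum(len(g) * (statistics.fmean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - statistics.fmean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

random.seed(0)
# Synthetic logMAR samples for four illustrative subgroups (lower = better acuity).
low      = [random.gauss(-0.016, 0.10) for _ in range(50)]
moderate = [random.gauss(-0.003, 0.10) for _ in range(50)]
high     = [random.gauss( 0.062, 0.13) for _ in range(50)]
extreme  = [random.gauss( 0.185, 0.18) for _ in range(50)]
f_stat = one_way_anova_f(low, moderate, high, extreme)
```

    A large F indicates that the subgroup means differ by more than within-group variation would explain; the p-value would then come from the F distribution with (k-1, n-k) degrees of freedom.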

  6. Visual outcome after endoscopic third ventriculostomy for hydrocephalus.

    PubMed

    Jung, Ji-Ho; Chai, Yong-Hyun; Jung, Shin; Kim, In-Young; Jang, Woo-Youl; Moon, Kyung-Sub; Kim, Seul-Kee; Chong, Sangjoon; Kim, Seung-Ki; Jung, Tae-Young

    2018-02-01

    Hydrocephalus-related symptoms mostly improve after successful endoscopic third ventriculostomy (ETV); visual symptoms, however, can behave differently. This study focused on visual symptoms, analyzing magnetic resonance images (MRI) of the orbit and visual outcomes. From August 2006 to November 2016, 50 patients with hydrocephalus underwent ETV. The male-to-female ratio was 33:17, and the median age was 61 years (range, 5-74 years). There were 18 pediatric and 32 adult patients. Abnormal orbital MRI findings included prominent subarachnoid space around the optic nerves and vertical tortuosity of the optic nerves. We retrospectively analyzed clinical symptoms, causes of hydrocephalus, ETV success score (ETVSS), ETV success rate, ETV complications, orbital MRI findings, and visual impairment score (VIS). The median duration of follow-up was 59 months (range, 3-113 months). The most common symptoms were headache, vomiting, and gait disturbance. Visual symptoms were found in 6 patients (12%). The most common causes of hydrocephalus were posterior fossa tumor in 13 patients, pineal tumor in 12, aqueductal stenosis in 8, thalamic malignant glioma in 7, and tectal glioma in 4. ETVSS was 70 in 3 patients, 80 in 34 patients, and 90 in 13 patients. The ETV success rate was 80%. An ETVSS of 70 showed a trend toward shorter-term success compared to scores of 80 and 90. ETV complications included an epidural hematoma requiring operation in one patient, transient hemiparesis in two patients, and infection in two patients. Abnormal orbital MRI findings were found in 18 patients preoperatively and in 7 postoperatively. Four of the six patients with visual symptoms had abnormal MRI findings. Three patients did not show VIS improvement, including two with severe visual symptoms. Patients with severe visual impairment had poor outcomes. Visual symptoms related to increased intracranial pressure should be carefully monitored and controlled to improve outcomes.

  7. Influence of uncorrected refractive error and unmet refractive error on visual impairment in a Brazilian population.

    PubMed

    Ferraz, Fabio H; Corrente, José E; Opromolla, Paula; Schellini, Silvana A

    2014-06-25

    The World Health Organization (WHO) definitions of blindness and visual impairment are largely based on best-corrected visual acuity, excluding uncorrected refractive error (URE) as a cause of visual impairment. Recently, URE was included as a cause of visual impairment, emphasizing that the worldwide burden of visual impairment due to refractive error (RE) is substantially higher. The purpose of the present study is to determine the reversal of visual impairment and blindness in the population after correcting RE, and possible associations between RE and individual characteristics. A cross-sectional study was conducted in nine counties of the western region of the state of São Paulo, using systematic and random sampling of households between March 2004 and July 2005. Individuals aged more than 1 year were included and were evaluated for demographic data, eye complaints, history, and eye exam, including non-corrected visual acuity (NCVA), best-corrected visual acuity (BCVA), and automatic and manual refractive examination. URE was defined as NCVA > 0.15 logMAR with BCVA ≤ 0.15 logMAR after refractive correction, and unmet refractive error (UREN) as visual impairment or blindness (NCVA > 0.5 logMAR) with BCVA ≤ 0.5 logMAR after optical correction. A total of 70.2% of subjects had normal NCVA. URE was detected in 13.8%. Prevalences of 4.6% for optically reversible low vision and 1.8% for blindness reversible by optical correction were found. UREN was detected in 6.5% of individuals, more frequently observed in women over the age of 50 and in carriers of higher RE. Visual impairment related to eye diseases is not reversible with spectacles. Using multivariate analysis, associations of URE and UREN with sex, age and RE were observed. RE is an important cause of reversible blindness and low vision in the Brazilian population.
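
    The URE and UREN definitions above are simple threshold rules on logMAR acuity; a minimal sketch of how a record might be classified under them (the function is illustrative, not from the study):

```python
def classify_refractive_error(ncva: float, bcva: float) -> list:
    """Apply the study's logMAR thresholds (higher logMAR = worse acuity).

    URE:  uncorrected acuity worse than 0.15 logMAR, correctable to <= 0.15.
    UREN: impairment or blindness (worse than 0.5 logMAR) reversed by correction.
    """
    labels = []
    if ncva > 0.15 and bcva <= 0.15:
        labels.append("URE")
    if ncva > 0.5 and bcva <= 0.5:
        labels.append("UREN")
    return labels

# An eye at 0.7 logMAR uncorrected that corrects to 0.0 meets both definitions.
both = classify_refractive_error(0.7, 0.0)
```

    Note that every UREN case is also a URE case under these thresholds, which is why the study reports UREN as the subset of impairment reversible by optical correction.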

  8. Influence of uncorrected refractive error and unmet refractive error on visual impairment in a Brazilian population

    PubMed Central

    2014-01-01

    Background The World Health Organization (WHO) definitions of blindness and visual impairment are widely based on best-corrected visual acuity excluding uncorrected refractive errors (URE) as a visual impairment cause. Recently, URE was included as a cause of visual impairment, thus emphasizing the burden of visual impairment due to refractive error (RE) worldwide is substantially higher. The purpose of the present study is to determine the reversal of visual impairment and blindness in the population correcting RE and possible associations between RE and individual characteristics. Methods A cross-sectional study was conducted in nine counties of the western region of state of São Paulo, using systematic and random sampling of households between March 2004 and July 2005. Individuals aged more than 1 year old were included and were evaluated for demographic data, eye complaints, history, and eye exam, including no corrected visual acuity (NCVA), best corrected vision acuity (BCVA), automatic and manual refractive examination. The definition adopted for URE was applied to individuals with NCVA > 0.15 logMAR and BCVA ≤ 0.15 logMAR after refractive correction and unmet refractive error (UREN), individuals who had visual impairment or blindness (NCVA > 0.5 logMAR) and BCVA ≤ 0.5 logMAR after optical correction. Results A total of 70.2% of subjects had normal NCVA. URE was detected in 13.8%. Prevalence of 4.6% of optically reversible low vision and 1.8% of blindness reversible by optical correction were found. UREN was detected in 6.5% of individuals, more frequently observed in women over the age of 50 and in higher RE carriers. Visual impairment related to eye diseases is not reversible with spectacles. Using multivariate analysis, associations between URE and UREN with regard to sex, age and RE were observed. Conclusion RE is an important cause of reversible blindness and low vision in the Brazilian population. PMID:24965318

  9. Including Students with Visual Impairments: Softball

    ERIC Educational Resources Information Center

    Brian, Ali; Haegele, Justin A.

    2014-01-01

    Research has shown that while students with visual impairments are likely to be included in general physical education programs, they may not be as active as their typically developing peers. This article provides ideas for equipment modifications and game-like progressions for one popular physical education unit, softball. The purpose of these…

  10. Glenn Ligon: Re-Visioning Change

    ERIC Educational Resources Information Center

    Rhoades, Mindi; Sanders, Jim

    2007-01-01

    Glenn Ligon is a multifaceted artist working across multiple media, including painting, sculpture, printmaking, photography, video, and digital media. He is a conceptual artist, often working to include text with visuals and as visuals in his work. He appropriates text from classic authors, like Homer, from runaway slave broadsides, from Richard…

  11. Social Identity, Autism and Visual Impairment (VI) in the Early Years

    ERIC Educational Resources Information Center

    Dale, Naomi; Salt, Alison

    2008-01-01

    This article explores how visual impairment might impact on early social and emotional development including self-awareness and communication with others. Some children show a "developmental setback" and other worrying developmental trajectories in the early years, including autistic related behaviours and autistic spectrum disorders.…

  12. Frontal–Occipital Connectivity During Visual Search

    PubMed Central

    Pantazatos, Spiro P.; Yanagihara, Ted K.; Zhang, Xian; Meitzler, Thomas

    2012-01-01

    Abstract Although expectation- and attention-related interactions between ventral and medial prefrontal cortex and stimulus category-selective visual regions have been identified during visual detection and discrimination, it is not known if similar neural mechanisms apply to other tasks such as visual search. The current work tested the hypothesis that high-level frontal regions, previously implicated in expectation and visual imagery of object categories, interact with visual regions associated with object recognition during visual search. Using functional magnetic resonance imaging, subjects searched for a specific object that varied in size and location within a complex natural scene. A model-free, spatial-independent component analysis isolated multiple task-related components, one of which included visual cortex, as well as a cluster within ventromedial prefrontal cortex (vmPFC), consistent with the engagement of both top-down and bottom-up processes. Analyses of psychophysiological interactions showed increased functional connectivity between vmPFC and object-sensitive lateral occipital cortex (LOC), and results from dynamic causal modeling and Bayesian Model Selection suggested bidirectional connections between vmPFC and LOC that were positively modulated by the task. Using image-guided diffusion-tensor imaging, functionally seeded, probabilistic white-matter tracts between vmPFC and LOC, which presumably underlie this effective interconnectivity, were also observed. These connectivity findings extend previous models of visual search processes to include specific frontal–occipital neuronal interactions during a natural and complex search task. PMID:22708993

  13. Migraine with aura: visual disturbances and interrelationship with the pain phase. Vågå study of headache epidemiology.

    PubMed

    Sjaastad, Ottar; Bakketeig, Leiv S; Petersen, Hans C

    2006-06-01

    In the Vågå study of headache epidemiology, 1838 (88.6%) of the available 18-65-year-old inhabitants of the commune were included. Everyone was questioned and examined personally by the principal investigator (OS). There were 178 cases of various types of visual disturbances during the migraine attack, which corresponds to 9.7% of the study group. The prevalence among females was 11.9% and among males 7.4%; the female/male ratio was 1.70, as against 1.05 in the total Vågå study population. By far the most frequently occurring visual disturbance pattern was (A) 1. Visual disturbances --> 2. pain-free interlude --> 3. pain phase (in 78% of the cases). Other frequent patterns were: (B) Visual disturbances, but no pain phase (24%); and (C) 1. Pain phase --> 2. visual disturbances (23%). Evidently, a single individual might show more than one visual disturbance pattern. The most frequently occurring solitary visual disturbances were scintillating scotoma (62%) and obscuration (33%); rarer ones were also identified, such as anopsia, autokinesis (movement of stationary objects), tunnel vision and micropsia. Among the non-visual aura disturbances, paraesthesias and speech disturbances were the most frequent. The prevalence of migraine with aura seemed to be considerably higher than in similar studies, including studies that were carried out with a face-to-face interview technique.

  14. Objective Measures of Visual Function in Papilledema

    PubMed Central

    Moss, Heather E.

    2016-01-01

    Synopsis Visual function is an important parameter to consider when managing patients with papilledema. Though the current standard of care uses standard automated perimetry (SAP) to obtain this information, this test is inherently subjective and prone to patient errors. Objective visual function tests including the visual evoked potential, pattern electroretinogram, photopic negative response of the full field electroretinogram, and pupillary light response have the potential to replace or supplement subjective visual function tests in papilledema management. This article reviews the evidence for use of objective visual function tests to assess visual function in papilledema and discusses future investigations needed to develop them as clinically practical and useful measures for this purpose. PMID:28451649

  15. Temporal properties of material categorization and material rating: visual vs non-visual material features.

    PubMed

    Nagai, Takehiro; Matsushima, Toshiki; Koida, Kowa; Tani, Yusuke; Kitazaki, Michiteru; Nakauchi, Shigeki

    2015-10-01

    Humans can easily recognize the material categories of objects, such as glass, stone, and plastic, by sight. However, little is known about the kinds of surface quality features that contribute to such material class recognition. In this paper, we examine the relationship between perceptual surface features and material category discrimination performance for pictures of materials, focusing on temporal aspects, including reaction time and effects of stimulus duration. The stimuli were pictures of objects with an identical shape but made of different materials that could be categorized into seven classes (glass, plastic, metal, stone, wood, leather, and fabric). In a pre-experiment, observers rated the pictures on nine surface features, including visual (e.g., glossiness and transparency) and non-visual features (e.g., heaviness and warmness), on a 7-point scale. In the main experiments, observers judged whether two simultaneously presented pictures belonged to the same or different material categories. Reaction times and effects of stimulus duration were measured. The results showed that visual feature ratings were correlated with material discrimination performance for short reaction times or short stimulus durations, while non-visual feature ratings were correlated only with performance for long reaction times or long stimulus durations. These results suggest that the mechanisms underlying visual and non-visual feature processing may differ in terms of processing time, although the cause is unclear. Visual surface features may mainly contribute to material recognition in daily life, while non-visual features may contribute only weakly, if at all. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Mechanisms Underlying Development of Visual Maps and Receptive Fields

    PubMed Central

    Huberman, Andrew D.; Feller, Marla B.; Chapman, Barbara

    2008-01-01

    Patterns of synaptic connections in the visual system are remarkably precise. These connections dictate the receptive field properties of individual visual neurons and ultimately determine the quality of visual perception. Spontaneous neural activity is necessary for the development of various receptive field properties and visual feature maps. In recent years, attention has shifted to understanding the mechanisms by which spontaneous activity in the developing retina, lateral geniculate nucleus, and visual cortex instruct the axonal and dendritic refinements that give rise to orderly connections in the visual system. Axon guidance cues and a growing list of other molecules, including immune system factors, have also recently been implicated in visual circuit wiring. A major goal now is to determine how these molecules cooperate with spontaneous and visually evoked activity to give rise to the circuits underlying precise receptive field tuning and orderly visual maps. PMID:18558864

  17. The Chinese American Eye Study: Design and Methods

    PubMed Central

    Varma, Rohit; Hsu, Chunyi; Wang, Dandan; Torres, Mina; Azen, Stanley P.

    2016-01-01

    Purpose To summarize the study design, operational strategies and procedures of the Chinese American Eye Study (CHES), a population-based assessment of the prevalence of visual impairment, ocular disease, and visual functioning in Chinese Americans. Methods This population-based, cross-sectional study included 4,570 Chinese Americans, 50 years and older, residing in the city of Monterey Park, California. Each eligible participant completed a detailed interview and eye examination. The interview included an assessment of demographic, behavioral, and ocular risk factors and health-related and vision-related quality of life. The eye examination included measurements of visual acuity, intraocular pressure, visual fields, fundus and optic disc photography, a detailed anterior and posterior segment examination, and measurements of blood pressure, glycosylated hemoglobin levels, and blood glucose levels. Results The objectives of the CHES are to obtain prevalence estimates of visual impairment, refractive error, diabetic retinopathy, open-angle and angle-closure glaucoma, lens opacities, and age-related macular degeneration in Chinese Americans. In addition, outcomes include effect estimates for risk factors associated with eye diseases. Lastly, CHES will investigate the genetic determinants of myopia and glaucoma. Conclusion The CHES will provide information about the prevalence and risk factors of ocular diseases in one of the fastest growing minority groups in the United States. PMID:24044409

  18. The virtual windtunnel: Visualizing modern CFD datasets with a virtual environment

    NASA Technical Reports Server (NTRS)

    Bryson, Steve

    1993-01-01

    This paper describes work in progress on a virtual environment designed for the visualization of pre-computed fluid flows. The overall problems involved in the visualization of fluid flow are summarized, including computational, data management, and interface issues. Requirements for a flow visualization are summarized. Many aspects of the implementation of the virtual windtunnel were uniquely determined by these requirements. The user interface is described in detail.

  19. Software Aids Visualization of Computed Unsteady Flow

    NASA Technical Reports Server (NTRS)

    Kao, David; Kenwright, David

    2003-01-01

    Unsteady Flow Analysis Toolkit (UFAT) is a computer program that synthesizes motions of time-dependent flows represented by very large sets of data generated in computational fluid dynamics simulations. Prior to the development of UFAT, it was necessary to rely on static, single-snapshot depictions of time-dependent flows generated by flow-visualization software designed for steady flows. Whereas it typically takes weeks to analyze the results of a large-scale unsteady-flow simulation by use of steady-flow visualization software, the analysis time is reduced to hours when UFAT is used. UFAT can generate graphical objects from flow-visualization results on multi-block curvilinear grids in the format of a previously developed NASA data-visualization program, PLOT3D. These graphical objects can be rendered using FAST, another popular flow-visualization program developed at NASA. Flow-visualization techniques that can be exploited by use of UFAT include time-dependent tracking of particles, detection of vortex cores, extraction of stream ribbons and surfaces, and tetrahedral decomposition for optimal particle tracking. Unique computational features of UFAT include capabilities for automatic (batch) processing, restart, memory mapping, and parallel processing. These capabilities significantly reduce analysis time and storage requirements, relative to those of prior flow-visualization software. UFAT can be executed on a variety of supercomputers.
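    The time-dependent particle tracking that UFAT performs can be illustrated by a minimal pathline integrator. The sketch below is not UFAT code; it uses an assumed analytic 2-D velocity field and simple forward-Euler stepping, whereas a tool like UFAT would interpolate velocities from simulation data on a curvilinear grid at each time step:

```python
import math

def velocity(x, y, t):
    # Assumed illustrative unsteady 2-D field: a uniform stream
    # whose cross-flow component oscillates in time.
    return 1.0, 0.5 * math.sin(t)

def trace_pathline(x0, y0, t0, t1, steps=1000):
    """Trace one particle through the unsteady field (forward Euler)."""
    dt = (t1 - t0) / steps
    x, y, t = x0, y0, t0
    path = [(x, y)]
    for _ in range(steps):
        u, v = velocity(x, y, t)
        x, y, t = x + u * dt, y + v * dt, t + dt
        path.append((x, y))
    return path

# Over one full oscillation period the particle drifts downstream
# while its cross-stream displacement nearly cancels.
path = trace_pathline(0.0, 0.0, 0.0, 2 * math.pi)
print(len(path), path[-1])
```

    Because the field is time-dependent, this pathline differs from the streamlines of any single snapshot, which is exactly why steady-flow tools give misleading pictures of unsteady flows.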

  20. A systematic review of the technology-based assessment of visual perception and exploration behaviour in association football.

    PubMed

    McGuckian, Thomas B; Cole, Michael H; Pepping, Gert-Jan

    2018-04-01

    To visually perceive opportunities for action, athletes rely on the movements of their eyes, head and body to explore their surrounding environment. To date, the specific types of technology and their efficacy for assessing the exploration behaviours of association footballers have not been systematically reviewed. This review aimed to synthesise the visual perception and exploration behaviours of footballers according to the task constraints, action requirements of the experimental task, and level of expertise of the athlete, in the context of the technology used to quantify the visual perception and exploration behaviours of footballers. A systematic search for papers that included keywords related to football, technology, and visual perception was conducted. All 38 included articles utilised eye-movement registration technology to quantify visual perception and exploration behaviour. The experimental domain appears to influence the visual perception behaviour of footballers, however no studies investigated exploration behaviours of footballers in open-play situations. Studies rarely utilised representative stimulus presentation or action requirements. To fully understand the visual perception requirements of athletes, it is recommended that future research seek to validate alternate technologies that are capable of investigating the eye, head and body movements associated with the exploration behaviours of footballers during representative open-play situations.

  1. Estimating the relative weights of visual and auditory tau versus heuristic-based cues for time-to-contact judgments in realistic, familiar scenes by older and younger adults.

    PubMed

    Keshavarz, Behrang; Campos, Jennifer L; DeLucia, Patricia R; Oberfeld, Daniel

    2017-04-01

    Estimating time to contact (TTC) involves multiple sensory systems, including vision and audition. Previous findings suggested that the ratio of an object's instantaneous optical size/sound intensity to its instantaneous rate of change in optical size/sound intensity (τ) drives TTC judgments. Other evidence has shown that heuristic-based cues are used, including final optical size or final sound pressure level. Most previous studies have used decontextualized and unfamiliar stimuli (e.g., geometric shapes on a blank background). Here we evaluated TTC estimates by using a traffic scene with an approaching vehicle to evaluate the weights of visual and auditory TTC cues under more realistic conditions. Younger (18-39 years) and older (65+ years) participants made TTC estimates in three sensory conditions: visual-only, auditory-only, and audio-visual. Stimuli were presented within an immersive virtual-reality environment, and cue weights were calculated for both visual cues (e.g., visual τ, final optical size) and auditory cues (e.g., auditory τ, final sound pressure level). The results demonstrated the use of visual τ as well as heuristic cues in the visual-only condition. TTC estimates in the auditory-only condition, however, were primarily based on an auditory heuristic cue (final sound pressure level), rather than on auditory τ. In the audio-visual condition, the visual cues dominated overall, with the highest weight being assigned to visual τ by younger adults, and a more equal weighting of visual τ and heuristic cues in older adults. Overall, better characterizing the effects of combined sensory inputs, stimulus characteristics, and age on the cues used to estimate TTC will provide important insights into how these factors may affect everyday behavior.
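    The tau variable described above is the ratio of an object's instantaneous optical size to its instantaneous rate of change in optical size; for a constant-speed approach this ratio equals the physical time to contact (distance/speed). The sketch below is not from the study; it is a small worked illustration under a small-angle approximation, with a hypothetical function name:

```python
def optical_tau(size, distance, speed):
    """TTC from the optical tau variable, constant-speed approach.

    theta(t)    ~ size / distance          (small-angle approximation)
    dtheta/dt   ~ size * speed / distance**2
    tau = theta / (dtheta/dt) = distance / speed
    """
    theta = size / distance                 # instantaneous optical size
    theta_dot = size * speed / distance**2  # its rate of expansion
    return theta / theta_dot

# A 2 m wide vehicle 30 m away, closing at 10 m/s:
print(optical_tau(2.0, 30.0, 10.0))  # ≈ 3.0 s
```

    Note that tau requires neither the object's physical size nor its distance individually; the heuristic cues the study contrasts it with (final optical size, final sound pressure level) do depend on such scene-specific quantities.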

  2. Reconfigurable Auditory-Visual Display

    NASA Technical Reports Server (NTRS)

    Begault, Durand R. (Inventor); Anderson, Mark R. (Inventor); McClain, Bryan (Inventor); Miller, Joel D. (Inventor)

    2008-01-01

    System and method for visual and audible communication between a central operator and N mobile communicators (N greater than or equal to 2), including an operator transceiver and interface, configured to receive and display, for the operator, visually perceptible and audibly perceptible signals from each of the mobile communicators. The interface (1) presents an audible signal from each communicator as if the audible signal is received from a different location relative to the operator and (2) allows the operator to select, to assign priority to, and to display, the visual signals and the audible signals received from a specified communicator. Each communicator has an associated signal transmitter that is configured to transmit at least one of the visual signals and the audio signal associated with the communicator, where at least one of the signal transmitters includes at least one sensor that senses and transmits a sensor value representing a selected environmental or physiological parameter associated with the communicator.

  3. Visual Behaviors and Adaptations Associated with Cortical and Ocular Impairment in Children.

    ERIC Educational Resources Information Center

    Jan, J. E.; Groenveld, M.

    1993-01-01

    This article shows the usefulness of understanding visual behaviors in the diagnosis of various types of visual impairments that are due to ocular and cortical disorders. Behaviors discussed include nystagmus, ocular motor dyspraxia, head position, close viewing, field loss adaptations, mannerisms, photophobia, and abnormal color perception. (JDD)

  4. The Process of Probability Problem Solving: Use of External Visual Representations

    ERIC Educational Resources Information Center

    Zahner, Doris; Corter, James E.

    2010-01-01

    We investigate the role of external inscriptions, particularly those of a spatial or visual nature, in the solution of probability word problems. We define a taxonomy of external visual representations used in probability problem solving that includes "pictures," "spatial reorganization of the given information," "outcome listings," "contingency…

  5. The Visual Narrative: Kids, Comic Books, and Creativity.

    ERIC Educational Resources Information Center

    Hoff, Gary R.

    1982-01-01

    Discusses why junior high school students like comic books and examines how comic book art and visual narrative can be used in education. Copying comic book art can teach students several useful art techniques. Suggestions for using visual narratives to study science fiction, literature, folklore, and art history are included. (AM)

  6. Literacy Instruction Through Communicative and Visual Arts

    ERIC Educational Resources Information Center

    Lin, Chia-Hui

    2005-01-01

    The purpose of this article is to explore the evidence suggesting the effectiveness of literacy instruction through communicative and visual arts, according to Flood, Heath, and Lapp (1997). Visual arts includes everything from dramatic performances to comic books to television viewing. The communicative arts, such as reading, writing, and…

  7. Food Preparation: An Instructional Package with Adaptations for Visually Impaired Individuals.

    ERIC Educational Resources Information Center

    Crawford, Glinda B.; And Others

    This instructional package, developed for the home economics teacher of mainstreamed visually impaired students, provides food preparation lesson plans appropriate for the junior high level. First, teacher guidelines are given, including characteristics of the visually impaired, orienting such students to the classroom, orienting class members to…

  8. Visual Teaching Strategies for Children with Autism.

    ERIC Educational Resources Information Center

    Tissot, Catherine; Evans, Roy

    2003-01-01

    Describes the types of children with autism that would benefit from visual teaching strategies. Discusses the benefits and disadvantages of some of the more well-known programs that use visual teaching strategies, including movement-based systems relying on sign language, and materials-based systems such as Treatment and Education of Autistic and…

  9. Discovering Differences in the Nature of Verbal and Visual Messages

    ERIC Educational Resources Information Center

    Adler, Barbara Laughlin

    2006-01-01

    Objective: Students will identify several unique characteristics of verbal vs. visual messages, including the superior ability of language to communicate objective, factual, philosophical content in past, present, and future terms; and the superior ability of visual images to communicate social-emotional meaning and concrete information limited in…

  10. Visual Creativity across Cultures: A Comparison between Italians and Japanese

    ERIC Educational Resources Information Center

    Palmiero, Massimiliano; Nakatani, Chie; van Leeuwen, Cees

    2017-01-01

    Culture-related differences in visual creativity were investigated, comparing Italian and Japanese participants in terms of divergent (figural completion task) and product-oriented thinking (figural combination task). Visual restructuring ability was measured as the ability to reinterpret ambiguous figures and was included as a covariate. Results…

  11. Triage Visualization for Digital Media Exploitation

    DTIC Science & Technology

    2013-09-01

    and responding to threats. Previous work includes NVisionIP [17], a network visualization tool that processes Argus NetFlow [18] data.

  12. The Physical Environment and the Visually Impaired.

    ERIC Educational Resources Information Center

    Braf, Per-Gunnar

    Reported are results of a project carried out at the Swedish Institute for the Handicapped to determine needs of the visually impaired in the planning and adaptation of buildings and other forms of physical environment. Chapter 1 considers implications of impaired vision and includes definitions, statistics, and problems of the visually impaired…

  13. Multimedia Visualizer: An Animated, Object-Based OPAC.

    ERIC Educational Resources Information Center

    Lee, Newton S.

    1991-01-01

    Describes the Multimedia Visualizer, an online public access catalog (OPAC) that uses animated visualizations to make it more user friendly. Pictures of the system are shown that illustrate the interactive objects that patrons can access, including card catalog drawers, librarian desks, and bookshelves; and access to multimedia items is described.…

  14. Design for Visual Arts.

    ERIC Educational Resources Information Center

    Skeries, Larry

    Experiences suggested within this visual arts packet provide high school students with awareness of visual expression in graphic design, product design, architecture, and crafts. The unit may be used in whole or in part and includes information about art careers and art-related jobs found in major occupational fields. Specific lesson topics…

  15. Visualization of Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Gerald-Yamasaki, Michael; Hultquist, Jeff; Bryson, Steve; Kenwright, David; Lane, David; Walatka, Pamela; Clucas, Jean; Watson, Velvin; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    Scientific visualization serves the dual purpose of exploration and exposition of the results of numerical simulations of fluid flow. Along with the basic visualization process which transforms source data into images, there are four additional components to a complete visualization system: Source Data Processing, User Interface and Control, Presentation, and Information Management. The requirements imposed by the desired mode of operation (i.e. real-time, interactive, or batch) and the source data have their effect on each of these visualization system components. The special requirements imposed by the wide variety and size of the source data provided by the numerical simulation of fluid flow present an enormous challenge to the visualization system designer. We describe the visualization system components, including specific visualization techniques, and how the mode of operation and source data requirements affect the construction of computational fluid dynamics visualization systems.

  16. Horizontal visual search in a large field by patients with unilateral spatial neglect.

    PubMed

    Nakatani, Ken; Notoya, Masako; Sunahara, Nobuyuki; Takahashi, Shusuke; Inoue, Katsumi

    2013-06-01

    In this study, we investigated the ability and pattern of horizontal visual search in a large space by patients with unilateral spatial neglect (USN). Subjects included nine patients with right hemisphere damage caused by cerebrovascular disease showing left USN, nine patients with right hemisphere damage but no USN, and six healthy individuals with no history of brain damage who were age-matched to the groups with right hemisphere damage. The number of visual search tasks accomplished was recorded in the first experiment. Neck rotation angle was continuously measured during the task and quantitative data of the measurements were collected. There was a strong correlation between the number of visual search tasks accomplished and the total Behavioral Inattention Test Conventional Subtest (BITC) score in subjects with right hemisphere damage. In both USN and control groups, the head position during the visual search task showed a balanced bell-shaped distribution from the central point on the field to the left and right sides. Our results indicate that compensatory strategies, including cervical rotation, may improve visual search capability and achieve balance on the neglected side. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Vibrotactile Feedback for Brain-Computer Interface Operation

    PubMed Central

    Cincotti, Febo; Kauhanen, Laura; Aloise, Fabio; Palomäki, Tapio; Caporusso, Nicholas; Jylänki, Pasi; Mattia, Donatella; Babiloni, Fabio; Vanacker, Gerolf; Nuttin, Marnix; Marciani, Maria Grazia; Millán, José del R.

    2007-01-01

    To be correctly mastered, brain-computer interfaces (BCIs) need an uninterrupted flow of feedback to the user. This feedback is usually delivered through the visual channel. Our aim was to explore the benefits of vibrotactile feedback during users' training and control of EEG-based BCI applications. A protocol for delivering vibrotactile feedback, including specific hardware and software arrangements, was specified. In three studies with 33 subjects (including 3 with spinal cord injury), we compared vibrotactile and visual feedback, addressing: (I) the feasibility of subjects' training to master their EEG rhythms using tactile feedback; (II) the compatibility of this form of feedback in the presence of a visual distracter; (III) the performance in the presence of a complex visual task on the same (visual) or a different (tactile) sensory channel. The stimulation protocol we developed supports a general usage of the tactors, as explored in preliminary experimentation. All studies indicated that the vibrotactile channel can function as a valuable feedback modality with reliability comparable to the classical visual feedback. Advantages of using vibrotactile feedback emerged when the visual channel was highly loaded by a complex task. In all experiments, vibrotactile feedback felt, after some training, more natural for both controls and SCI users. PMID:18354734

  18. Clinical Outcomes after Binocular Implantation of a New Trifocal Diffractive Intraocular Lens

    PubMed Central

    Kretz, Florian T. A.; Breyer, Detlev; Diakonis, Vasilios F.; Klabe, Karsten; Henke, Franziska; Auffarth, Gerd U.; Kaymak, Hakan

    2015-01-01

    Purpose. To evaluate visual, refractive, and contrast sensitivity outcomes, as well as the incidence of pseudophakic photic phenomena and patient satisfaction after bilateral diffractive trifocal intraocular lens (IOL) implantation. Methods. This prospective nonrandomized study included consecutive patients undergoing cataract surgery with bilateral implantation of a diffractive trifocal IOL (AT LISA tri 839MP, Carl Zeiss Meditec). Distance, intermediate, and near visual outcomes were evaluated as well as the defocus curve and the refractive outcomes 3 months after surgery. Photopic and mesopic contrast sensitivity, patient satisfaction, and halo perception were also evaluated. Results. Seventy-six eyes of 38 patients were included; 90% of eyes showed a spherical equivalent within ±0.50 diopters 3 months after surgery. All patients had a binocular uncorrected distance visual acuity of 0.00 LogMAR or better and a binocular uncorrected intermediate visual acuity of 0.10 LogMAR or better, 3 months after surgery. Furthermore, 85% of patients achieved a binocular uncorrected near visual acuity of 0.10 LogMAR or better. Conclusions. Trifocal diffractive IOL implantation seems to provide an effective restoration of visual function for far, intermediate, and near distances, providing high levels of visual quality and patient satisfaction. PMID:26301104

  19. Pragmatic abilities in children with congenital visual impairment: an exploration of non-literal language and advanced theory of mind understanding.

    PubMed

    Pijnacker, Judith; Vervloed, Mathijs P J; Steenbergen, Bert

    2012-11-01

    Children with congenital visual impairment have been reported to be delayed in theory of mind development. So far, research has focused on first-order theory of mind and has included mainly blind children, whereas the majority of visually impaired children are not totally blind. The present study set out to explore whether children with a broader range of congenital visual impairments have a delay in more advanced theory of mind understanding, in particular second-order theory of mind (i.e. awareness that other people have beliefs about beliefs) and non-literal language (e.g. irony or figures of speech). Twenty-four children with congenital visual impairment and 24 typically developing sighted children aged between 6 and 13 were included. All children were presented with a series of stories involving understanding of theory of mind and non-literal language. When compared with sighted children of similar age and verbal intelligence, the performance of children with congenital visual impairment on advanced theory of mind and non-literal stories was alike. The ability to understand the motivations behind non-literal language was associated with age, verbal intelligence and theory of mind skills, but was not associated with visual ability.

  20. VisBOL: Web-Based Tools for Synthetic Biology Design Visualization.

    PubMed

    McLaughlin, James Alastair; Pocock, Matthew; Mısırlı, Göksel; Madsen, Curtis; Wipat, Anil

    2016-08-19

    VisBOL is a Web-based application that allows the rendering of genetic circuit designs, enabling synthetic biologists to visually convey designs in SBOL visual format. VisBOL designs can be exported to formats including PNG and SVG images to be embedded in Web pages, presentations and publications. The VisBOL tool enables the automated generation of visualizations from designs specified using the Synthetic Biology Open Language (SBOL) version 2.0, as well as a range of well-known bioinformatics formats including GenBank and Pigeoncad notation. VisBOL is provided both as a user-accessible Web site and as an open-source (BSD) JavaScript library that can be used to embed diagrams within other content and software.

  1. Visual Analytics 101

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scholtz, Jean; Burtner, Edwin R.; Cook, Kristin A.

    This course will introduce the field of Visual Analytics to HCI researchers and practitioners, highlighting the contributions they can make to this field. Topics will include a definition of visual analytics along with examples of current systems, types of tasks and end users, issues in defining user requirements, design of visualizations and interactions, guidelines and heuristics, the current state of user-centered evaluations, and metrics for evaluation. We encourage designers, HCI researchers, and HCI practitioners to attend to learn how their skills can contribute to advancing the state of the art of visual analytics.

  2. RVA: A Plugin for ParaView 3.14

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keefer, Donald A.; Shaffer, Eric G.; Storsved, Brynne

    A free software application, RVA, has been developed as a plugin to the US DOE-funded ParaView visualization package, to provide support in the visualization and analysis of complex reservoirs being managed using multi-fluid EOR techniques. RVA, for Reservoir Visualization and Analysis, was developed as an open-source plugin to the 64-bit Windows version of ParaView 3.14. RVA was developed at the University of Illinois at Urbana-Champaign, with contributions from the Illinois State Geological Survey, Department of Computer Science and National Center for Supercomputing Applications. RVA was designed to utilize and enhance the state-of-the-art visualization capabilities within ParaView, readily allowing joint visualization of geologic framework and reservoir fluid simulation model results. Particular emphasis was placed on enabling visualization and analysis of simulation results highlighting multiple fluid phases, multiple properties for each fluid phase (including flow lines), multiple geologic models and multiple time steps. Additional advanced functionality was provided through the development of custom code to implement data mining capabilities. The built-in functionality of ParaView provides the capacity to process and visualize data sets ranging from small models on local desktop systems to extremely large models created and stored on remote supercomputers. The RVA plugin that we developed and the associated User Manual provide improved functionality through new software tools, and instruction in the use of ParaView-RVA, targeted to petroleum engineers and geologists in industry and research. The RVA web site (http://rva.cs.illinois.edu) provides an overview of functions, and the development web site (https://github.com/shaffer1/RVA) provides ready access to the source code, compiled binaries, user manual, and a suite of demonstration data sets. Key functionality has been included to support a range of reservoir visualization and analysis needs, including: sophisticated connectivity analysis, cross sections through simulation results between selected wells, simplified volumetric calculations, global vertical exaggeration adjustments, ingestion of UTChem simulation results, ingestion of Isatis geostatistical framework models, interrogation of joint geologic and reservoir modeling results, joint visualization and analysis of well history files, location-targeted visualization, advanced correlation analysis, visualization of flow paths, and creation of static images and animations highlighting targeted reservoir features.

  3. [Are Visual Field Defects Reversible? - Visual Rehabilitation with Brains].

    PubMed

    Sabel, B A

    2017-02-01

    Visual field defects are considered irreversible because the retina and optic nerve do not regenerate. Nevertheless, there is some potential for recovery of the visual fields. This can be accomplished by the brain, which analyses and interprets visual information and is able to amplify residual signals through neuroplasticity. Neuroplasticity refers to the ability of the brain to change its own functional architecture by modulating synaptic efficacy. This is actually the neurobiological basis of normal learning. Plasticity is maintained throughout life and can be induced by repetitively stimulating (training) brain circuits. The question now arises as to how plasticity can be utilised to activate residual vision for the treatment of visual field loss. Just as in neurorehabilitation, visual field defects can be modulated by post-lesion plasticity to improve vision in glaucoma, diabetic retinopathy or optic neuropathy. Because almost all patients have some residual vision, the goal is to strengthen residual capacities by enhancing synaptic efficacy. New treatment paradigms have been tested in clinical studies, including vision restoration training and non-invasive alternating current stimulation. While vision training is a behavioural task to selectively stimulate "relative defects" with daily vision exercises for the duration of 6 months, treatment with alternating current stimulation (30 min. daily for 10 days) activates and synchronises the entire retina and brain. Though full restoration of vision is not possible, such treatments improve vision, both subjectively and objectively. This includes visual field enlargements, improved acuity and reaction time, improved orientation and vision related quality of life. About 70 % of the patients respond to the therapies and there are no serious adverse events. Physiological studies of the effect of alternating current stimulation using EEG and fMRI reveal massive local and global changes in the brain. 
These include local activation of the visual cortex and global reorganisation of neuronal brain networks. Because modulation of neuroplasticity can strengthen residual vision, the brain deserves a better reputation in ophthalmology for its role in visual rehabilitation. For patients, there is now more light at the end of the tunnel, because vision loss in some areas of the visual field defect is indeed reversible.

  4. RVA: A Plugin for ParaView 3.14

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-09-04

    RVA is a plugin developed for the 64-bit Windows version of the ParaView 3.14 visualization package. RVA is designed to provide support in the visualization and analysis of complex reservoirs being managed using multi-fluid EOR techniques. RVA, for Reservoir Visualization and Analysis, was developed at the University of Illinois at Urbana-Champaign, with contributions from the Illinois State Geological Survey, Department of Computer Science and National Center for Supercomputing Applications. RVA was designed to utilize and enhance the state-of-the-art visualization capabilities within ParaView, readily allowing joint visualization of geologic framework and reservoir fluid simulation model results. Particular emphasis was placed on enabling visualization and analysis of simulation results highlighting multiple fluid phases, multiple properties for each fluid phase (including flow lines), multiple geologic models and multiple time steps. Additional advanced functionality was provided through the development of custom code to implement data mining capabilities. The built-in functionality of ParaView provides the capacity to process and visualize data sets ranging from small models on local desktop systems to extremely large models created and stored on remote supercomputers. The RVA plugin that we developed and the associated User Manual provide improved functionality through new software tools, and instruction in the use of ParaView-RVA, targeted to petroleum engineers and geologists in industry and research. The RVA web site (http://rva.cs.illinois.edu) provides an overview of functions, and the development web site (https://github.com/shaffer1/RVA) provides ready access to the source code, compiled binaries, user manual, and a suite of demonstration data sets. Key functionality has been included to support a range of reservoir visualization and analysis needs, including: sophisticated connectivity analysis, cross sections through simulation results between selected wells, simplified volumetric calculations, global vertical exaggeration adjustments, ingestion of UTChem simulation results, ingestion of Isatis geostatistical framework models, interrogation of joint geologic and reservoir modeling results, joint visualization and analysis of well history files, location-targeted visualization, advanced correlation analysis, visualization of flow paths, and creation of static images and animations highlighting targeted reservoir features.

  5. Cerebral versus Ocular Visual Impairment: The Impact on Developmental Neuroplasticity.

    PubMed

    Martín, Maria B C; Santos-Lozano, Alejandro; Martín-Hernández, Juan; López-Miguel, Alberto; Maldonado, Miguel; Baladrón, Carlos; Bauer, Corinna M; Merabet, Lotfi B

    2016-01-01

    Cortical/cerebral visual impairment (CVI) is clinically defined as significant visual dysfunction caused by injury to visual pathways and structures occurring during early perinatal development. Depending on the location and extent of damage, children with CVI often present with a myriad of visual deficits including decreased visual acuity and impaired visual field function. Most striking, however, are impairments in visual processing and attention which have a significant impact on learning, development, and independence. Within the educational arena, current evidence suggests that strategies designed for individuals with ocular visual impairment are not effective in the case of CVI. We propose that this variance may be related to differences in compensatory neuroplasticity related to the type of visual impairment, as well as underlying alterations in brain structural connectivity. We discuss the etiology and nature of visual impairments related to CVI, and how advanced neuroimaging techniques (i.e., diffusion-based imaging) may help uncover differences between ocular and cerebral causes of visual dysfunction. Revealing these differences may help in developing future strategies for the education and rehabilitation of individuals living with visual impairment.

  6. Abnormal white matter tractography of visual pathways detected by high-angular-resolution diffusion imaging (HARDI) corresponds to visual dysfunction in cortical/cerebral visual impairment

    PubMed Central

    Bauer, Corinna M.; Heidary, Gena; Koo, Bang-Bon; Killiany, Ronald J.; Bex, Peter; Merabet, Lotfi B.

    2014-01-01

    Cortical (cerebral) visual impairment (CVI) is characterized by visual dysfunction associated with damage to the optic radiations and/or visual cortex. Typically it results from pre- or perinatal hypoxic damage to postchiasmal visual structures and pathways. The neuroanatomical basis of this condition remains poorly understood, particularly with regard to how the resulting maldevelopment of visual processing pathways relates to observations in the clinical setting. We report our investigation of 2 young adults diagnosed with CVI and visual dysfunction characterized by difficulties related to visually guided attention and visuospatial processing. Using high-angular-resolution diffusion imaging (HARDI), we characterized and compared their individual white matter projections of the extrageniculo-striate visual system with a normal-sighted control. Compared to a sighted control, both CVI cases revealed a striking reduction in association fibers, including the inferior frontal-occipital fasciculus as well as superior and inferior longitudinal fasciculi. This reduction in fibers associated with the major pathways implicated in visual processing may provide a neuroanatomical basis for the visual dysfunctions observed in these patients. PMID:25087644

  7. Cerebral versus Ocular Visual Impairment: The Impact on Developmental Neuroplasticity

    PubMed Central

    Martín, Maria B. C.; Santos-Lozano, Alejandro; Martín-Hernández, Juan; López-Miguel, Alberto; Maldonado, Miguel; Baladrón, Carlos; Bauer, Corinna M.; Merabet, Lotfi B.

    2016-01-01

    Cortical/cerebral visual impairment (CVI) is clinically defined as significant visual dysfunction caused by injury to visual pathways and structures occurring during early perinatal development. Depending on the location and extent of damage, children with CVI often present with a myriad of visual deficits including decreased visual acuity and impaired visual field function. Most striking, however, are impairments in visual processing and attention which have a significant impact on learning, development, and independence. Within the educational arena, current evidence suggests that strategies designed for individuals with ocular visual impairment are not effective in the case of CVI. We propose that this variance may be related to differences in compensatory neuroplasticity related to the type of visual impairment, as well as underlying alterations in brain structural connectivity. We discuss the etiology and nature of visual impairments related to CVI, and how advanced neuroimaging techniques (i.e., diffusion-based imaging) may help uncover differences between ocular and cerebral causes of visual dysfunction. Revealing these differences may help in developing future strategies for the education and rehabilitation of individuals living with visual impairment. PMID:28082927

  8. What makes a visualization memorable?

    PubMed

    Borkin, Michelle A; Vo, Azalea A; Bylinskii, Zoya; Isola, Phillip; Sunkavalli, Shashank; Oliva, Aude; Pfister, Hanspeter

    2013-12-01

    An ongoing debate in the Visualization community concerns the role that visualization types play in data understanding. In human cognition, understanding and memorability are intertwined. As a first step towards being able to ask questions about impact and effectiveness, here we ask: 'What makes a visualization memorable?' We ran the largest scale visualization study to date using 2,070 single-panel visualizations, categorized with visualization type (e.g., bar chart, line graph, etc.), collected from news media sites, government reports, scientific journals, and infographic sources. Each visualization was annotated with additional attributes, including ratings for data-ink ratios and visual densities. Using Amazon's Mechanical Turk, we collected memorability scores for hundreds of these visualizations, and discovered that observers are consistent in which visualizations they find memorable and forgettable. We find intuitive results (e.g., attributes like color and the inclusion of a human recognizable object enhance memorability) and less intuitive results (e.g., common graphs are less memorable than unique visualization types). Altogether our findings suggest that quantifying memorability is a general metric of the utility of information, an essential step towards determining how to design effective visualizations.

  9. Predictive factors of visual function recovery after pituitary adenoma resection: a literature review and Meta-analysis.

    PubMed

    Sun, Min; Zhang, Zhi-Qiang; Ma, Chi-Yuan; Chen, Sui-Hua; Chen, Xin-Jian

    2017-01-01

    To determine the dominant predictive factors of postoperative visual recovery for patients with pituitary adenoma. PubMed, Google Scholar, Web of Science and Cochrane Library were searched for relevant human studies, which investigated the prediction of the postoperative visual recovery of patients with pituitary adenoma, from January 2000 to May 2017. Meta-analyses were performed on the primary outcomes. After the related data were extracted by two independent investigators, pooled weighted mean difference (WMD) and odds ratio (OR) with 95% confidence interval (CI) were estimated using a random-effects or a fixed-effects model. Nineteen studies were included in the literature review, and nine trials were included in the Meta-analysis, which comprised 530 patients (975 eyes) with pituitary adenoma. For the primary outcomes, there was a significant difference between preoperative and postoperative mean deviation (MD) values of the visual field (WMD -5.85; 95%CI: -8.19 to -3.51; P <0.00001). Predictive characteristics of four factors were revealed in this Meta-analysis by assigning the patients to sufficient and insufficient groups according to postoperative visual field improvements, including preoperative visual field defect (WMD 10.09; 95%CI: 6.17 to 14.02; P <0.00001), patient age (WMD -12.32; 95%CI: -18.42 to -6.22; P <0.0001), symptom duration (WMD -5.04; 95%CI: -9.71 to -0.37; P =0.03), and preoperative peripapillary retinal nerve fiber layer (pRNFL) thickness (OR 0.1; 95% CI: 0.04 to 0.23; P <0.00001). Preoperative visual field defect, symptom duration, patient age, and preoperative pRNFL thickness are the dominant predictive factors of the postoperative recovery of the visual field for patients with pituitary adenoma.
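    The pooled estimates in this record come from standard inverse-variance meta-analysis. As a minimal sketch of the fixed-effect case, the following computes a pooled weighted mean difference (WMD) with its 95% CI; the per-study effect sizes and standard errors below are made-up illustrative values, not data from the paper:

```python
import math

def pooled_wmd_fixed(effects, ses):
    """Fixed-effect inverse-variance pooling of mean differences.

    effects: per-study mean differences; ses: their standard errors.
    Returns (pooled estimate, (ci_low, ci_high)) with a 95% CI.
    """
    weights = [1.0 / se ** 2 for se in ses]   # inverse-variance weights
    total = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, effects)) / total
    se_pooled = math.sqrt(1.0 / total)        # SE of the pooled estimate
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, ci

# Hypothetical per-study WMDs and standard errors, for illustration only:
est, (lo, hi) = pooled_wmd_fixed([-5.2, -6.8, -4.9], [1.1, 1.5, 0.9])
```

    A random-effects model, as also used in this Meta-analysis, would additionally estimate the between-study variance and add it to each study's variance before computing the weights.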

  10. Ultrasound visual feedback treatment and practice variability for residual speech sound errors

    PubMed Central

    Preston, Jonathan L.; McCabe, Patricia; Rivera-Campos, Ahmed; Whittle, Jessica L.; Landry, Erik; Maas, Edwin

    2014-01-01

    Purpose The goals were to (1) test the efficacy of a motor-learning based treatment that includes ultrasound visual feedback for individuals with residual speech sound errors, and (2) explore whether the addition of prosodic cueing facilitates speech sound learning. Method A multiple baseline single subject design was used, replicated across 8 participants. For each participant, one sound context was treated with ultrasound plus prosodic cueing for 7 sessions, and another sound context was treated with ultrasound but without prosodic cueing for 7 sessions. Sessions included ultrasound visual feedback as well as non-ultrasound treatment. Word-level probes assessing untreated words were used to evaluate retention and generalization. Results For most participants, increases in accuracy of target sound contexts at the word level were observed with the treatment program regardless of whether prosodic cueing was included. Generalization between onset singletons and clusters was observed, as well as generalization to sentence-level accuracy. There was evidence of retention during post-treatment probes, including at a two-month follow-up. Conclusions A motor-based treatment program that includes ultrasound visual feedback can facilitate learning of speech sounds in individuals with residual speech sound errors. PMID:25087938

  11. An Audio-Visual Resource Notebook for Adult Consumer Education. An Annotated Bibliography of Selected Audio-Visual Aids for Adult Consumer Education, with Special Emphasis on Materials for Elderly, Low-Income and Handicapped Consumers.

    ERIC Educational Resources Information Center

    Virginia State Dept. of Agriculture and Consumer Services, Richmond, VA.

    This document is an annotated bibliography of audio-visual aids in the field of consumer education, intended especially for use among low-income, elderly, and handicapped consumers. It was developed to aid consumer education program planners in finding audio-visual resources to enhance their presentations. Materials listed include 293 resources…

  12. Including a Student with Multiple Disabilities and Visual Impairment in Her Neighborhood School.

    ERIC Educational Resources Information Center

    Bowden, J.; Thorburn, J.

    1993-01-01

    This article discusses mainstreaming of a student (age five) with physical, intellectual, visual, and suspected auditory impairments in her neighborhood school in Auckland, New Zealand. Comments of the people involved in the program, including the principal, teachers, teacher's aide, family members, and fellow pupils are reported; and the success…

  13. Visual Literacy: Does It Enhance Leadership Abilities Required for the Twenty-First Century?

    ERIC Educational Resources Information Center

    Bintz, Carol

    2016-01-01

    The twenty-first century hosts a well-established global economy, where leaders are required to have increasingly complex skills that include creativity, innovation, vision, relatability, critical thinking and well-honed communications methods. The experience gained by learning to be visually literate includes the ability to see, observe, analyze,…

  14. Toward a New Theory for Selecting Instructional Visuals.

    ERIC Educational Resources Information Center

    Croft, Richard S.; Burton, John K.

    This paper provides a rationale for the selection of illustrations and visual aids for the classroom. The theories that describe the processing of visuals are dual coding theory and cue summation theory. Concept attainment theory offers a basis for selecting which cues are relevant for any learning task which includes a component of identification…

  15. Adaptive Behavior of Primary School Students with Visual Impairments: The Impact of Educational Settings

    ERIC Educational Resources Information Center

    Metsiou, Katerina; Papadopoulos, Konstantinos; Agaliotis, Ioannis

    2011-01-01

    This study explored the adaptive behavior of primary school students with visual impairments, as well as the impact of educational setting on their adaptive behavior. Instrumentation included an informal questionnaire and the Vineland Adaptive Behavior Scales. Participants were 36 primary school students with visual impairments. The educational…

  16. Supporting Multimedia Learning with Visual Signalling and Animated Pedagogical Agent: Moderating Effects of Prior Knowledge

    ERIC Educational Resources Information Center

    Johnson, A. M.; Ozogul, G.; Reisslein, M.

    2015-01-01

    An experiment examined the effects of visual signalling to relevant information in multiple external representations and the visual presence of an animated pedagogical agent (APA). Students learned electric circuit analysis using a computer-based learning environment that included Cartesian graphs, equations and electric circuit diagrams. The…

  17. An Empirical Comparison of Visualization Tools To Assist Information Retrieval on the Web.

    ERIC Educational Resources Information Center

    Heo, Misook; Hirtle, Stephen C.

    2001-01-01

    Discusses problems with navigation in hypertext systems, including cognitive overload, and describes a study that tested information visualization techniques to see which best represented the underlying structure of Web space. Considers the effects of visualization techniques on user performance on information searching tasks and the effects of…

  18. Predictors of Employment Outcomes for People with Visual Impairment in Taiwan: The Contribution of Disability Employment Services

    ERIC Educational Resources Information Center

    Jang, Yuh; Wang, Yun-Tung; Lin, Meng-Hsiu; Shih, Kevin J.

    2013-01-01

    Introduction: We investigated the employment status and identified factors that may affect the employment outcomes of people with visual impairments in Taiwan. Methods: A retrospective, ex post facto design study was conducted. The sample included 313 visually impaired clients who commenced and "closed" (completed) disability employment…

  19. Be the Volume: A Classroom Activity to Visualize Volume Estimation

    ERIC Educational Resources Information Center

    Mikhaylov, Jessica

    2011-01-01

    A hands-on activity can help multivariable calculus students visualize surfaces and understand volume estimation. This activity can be extended to include the concepts of Fubini's Theorem and the visualization of the curves resulting from cross-sections of the surface. This activity uses students as pillars and a sheet or tablecloth for the…

  20. Problems Confronting Visual Culture

    ERIC Educational Resources Information Center

    Efland, Arthur D.

    2005-01-01

    A new movement has appeared recommending, in part, that the field of art education should lessen its traditional ties to drawing, painting, and the study of masterpieces to become the study of visual culture. Visual cultural study refers to an all-encompassing category of cultural practice that includes the fine arts but also deals with the study…

  1. Cognitive Strategies for Learning from Static and Dynamic Visuals.

    ERIC Educational Resources Information Center

    Lewalter, D.

    2003-01-01

    Studied the effects of including static or dynamic visuals in an expository text on a learning outcome and the use of learning strategies when working with these visuals. Results for 60 undergraduates for both types of illustration indicate different frequencies in the use of learning strategies relevant for the learning outcome. (SLD)

  2. The Social Experiences of High School Students with Visual Impairments

    ERIC Educational Resources Information Center

    Jessup, Glenda; Bundy, Anita C.; Broom, Alex; Hancock, Nicola

    2017-01-01

    Introduction: This study explores the social experiences in high school of students with visual impairments. Methods: Experience sampling methodology was used to examine (a) how socially included students with visual impairments feel, (b) the internal qualities of their activities, and (c) the factors that influence a sense of inclusion. Twelve…

  3. Op art and visual perception.

    PubMed

    Wade, N J

    1978-01-01

    An attempt is made to list the visual phenomena exploited in op art. These include moiré fringes, afterimages, Hermann grid effects, Gestalt grouping principles, blurring and movement due to astigmatic fluctuations in accommodation, scintillation and streaming possibly due to eye movements, and visual persistence. The historical origins of these phenomena are also noted.

  4. Expectations for Visual Function: An Initial Evaluation of a New Clinical Instrument.

    ERIC Educational Resources Information Center

    Corn, Anne L.; Webne, Steve L.

    2001-01-01

    A study explored the internal consistency of items in a visual screening instrument developed by Project PAVE: Expectations for Visual Functioning (EVF). The test includes 20 items that evaluate a child's functional use of vision. A pilot test involving 129 teachers indicates the EVF is internally consistent. (Contains three references.) (CR)

  5. Visual Sociological Portrayals of Race and Childhood: Case Studies from the Thirties.

    ERIC Educational Resources Information Center

    Wieder, Alan

    1995-01-01

    Describes life during the Depression era by studying narrative and visual portrayals of African American children in both rural and urban settings. Sources include Richard Wright's "12 Million Black Voices" and Stella Gentry Sharpe's "Tobe" as well as work that critically analyzes these narrative and visual documentations of…

  6. Some Issues Concerning Access to Information by Blind and Partially Sighted Pupils.

    ERIC Educational Resources Information Center

    Green, Christopher F.

    This paper examines problems faced by visually-impaired secondary pupils in gaining access to information in print. The ever-increasing volume of information available inundates the sighted and is largely inaccessible in print format to the visually impaired. Important issues of availability for the visually impaired include whether information is…

  7. African American Youth and the Artist's Identity: Cultural Models and Aspirational Foreclosure

    ERIC Educational Resources Information Center

    Charland, William

    2010-01-01

    The decision to participate in visual arts studies in college and visual arts professions in adult life is the product of multiple factors, including the influences of family, community, peer group, mass culture, and K-12 schooling. Recognizing African American underrepresentation in visual arts studies and professions, this article explores how…

  8. Defense Styles Influencing Career Choice of Visually Challenged Students at Undergraduate Level

    ERIC Educational Resources Information Center

    Kumar, S. Raja

    2016-01-01

    Visually challenged students' career choice is influenced by many factors, including life context, personal aptitudes, and educational attainment. This study focuses on the defense styles of visually challenged students and also examines their career choice. A survey method was adopted in this investigation. In total, 77 samples were collected…

  9. Teaching Choice Making to Children with Visual Impairments and Multiple Disabilities in Preschool and Kindergarten Classrooms

    ERIC Educational Resources Information Center

    Clark, Christine; McDonnell, Andrea P.

    2008-01-01

    This study examined the effectiveness of an intervention package that included visual accommodations, daily preference assessments, and naturalistic instructional strategies on the accuracy of choice-making responses for three participants with visual impairments and multiple disabilities. It also examined the participants' ability to maintain and…

  10. SOCRAT Platform Design: A Web Architecture for Interactive Visual Analytics Applications

    PubMed Central

    Kalinin, Alexandr A.; Palanimalai, Selvam; Dinov, Ivo D.

    2018-01-01

    The modern web is a successful platform for large scale interactive web applications, including visualizations. However, there are no established design principles for building complex visual analytics (VA) web applications that could efficiently integrate visualizations with data management, computational transformation, hypothesis testing, and knowledge discovery. This imposes a time-consuming design and development process on many researchers and developers. To address these challenges, we consider the design requirements for the development of a module-based VA system architecture, adopting existing practices of large scale web application development. We present the preliminary design and implementation of an open-source platform for Statistics Online Computational Resource Analytical Toolbox (SOCRAT). This platform defines: (1) a specification for an architecture for building VA applications with multi-level modularity, and (2) methods for optimizing module interaction, re-usage, and extension. To demonstrate how this platform can be used to integrate a number of data management, interactive visualization, and analysis tools, we implement an example application for simple VA tasks including raw data input and representation, interactive visualization and analysis. PMID:29630069

  11. SOCRAT Platform Design: A Web Architecture for Interactive Visual Analytics Applications.

    PubMed

    Kalinin, Alexandr A; Palanimalai, Selvam; Dinov, Ivo D

    2017-04-01

    The modern web is a successful platform for large scale interactive web applications, including visualizations. However, there are no established design principles for building complex visual analytics (VA) web applications that could efficiently integrate visualizations with data management, computational transformation, hypothesis testing, and knowledge discovery. This imposes a time-consuming design and development process on many researchers and developers. To address these challenges, we consider the design requirements for the development of a module-based VA system architecture, adopting existing practices of large scale web application development. We present the preliminary design and implementation of an open-source platform for Statistics Online Computational Resource Analytical Toolbox (SOCRAT). This platform defines: (1) a specification for an architecture for building VA applications with multi-level modularity, and (2) methods for optimizing module interaction, re-usage, and extension. To demonstrate how this platform can be used to integrate a number of data management, interactive visualization, and analysis tools, we implement an example application for simple VA tasks including raw data input and representation, interactive visualization and analysis.

  12. Data visualization methods, data visualization devices, data visualization apparatuses, and articles of manufacture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, Alan E.; Crow, Vernon L.; Payne, Deborah A.

    Data visualization methods, data visualization devices, data visualization apparatuses, and articles of manufacture are described according to some aspects. In one aspect, a data visualization method includes accessing a plurality of initial documents at a first moment in time, first processing the initial documents providing processed initial documents, first identifying a plurality of first associations of the initial documents using the processed initial documents, generating a first visualization depicting the first associations, accessing a plurality of additional documents at a second moment in time after the first moment in time, second processing the additional documents providing processed additional documents, second identifying a plurality of second associations of the additional documents and at least some of the initial documents, wherein the second identifying comprises identifying using the processed initial documents and the processed additional documents, and generating a second visualization depicting the second associations.
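    The two-phase method claimed in this record (process an initial document set, identify its associations, then relate later documents back to that set) can be sketched as follows. The documents, the tokenization, and the shared-term association measure are all illustrative assumptions, not the patented processing, and the "visualizations" are left as plain association lists rather than rendered graphics:

```python
from itertools import combinations

def process(docs):
    # Stand-in for "processing": lowercase and tokenize each document.
    return {name: set(text.lower().split()) for name, text in docs.items()}

def associations(processed, min_shared=2):
    # Identify pairwise associations: documents sharing enough terms.
    links = []
    for (a, ta), (b, tb) in combinations(processed.items(), 2):
        shared = ta & tb
        if len(shared) >= min_shared:
            links.append((a, b, sorted(shared)))
    return links

# First moment in time: initial documents and their associations.
initial = process({"d1": "reservoir fluid simulation",
                   "d2": "fluid simulation results"})
first = associations(initial)

# Second moment in time: additional documents related back to the initial set.
additional = process({"d3": "reservoir simulation results"})
second = associations({**initial, **additional})
```

    Here `second` links the new document to each earlier one it shares terms with, mirroring the claim that second associations relate additional documents to at least some of the initial documents.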

  13. Widespread correlation patterns of fMRI signal across visual cortex reflect eccentricity organization.

    PubMed

    Arcaro, Michael J; Honey, Christopher J; Mruczek, Ryan E B; Kastner, Sabine; Hasson, Uri

    2015-02-19

    The human visual system can be divided into over two-dozen distinct areas, each of which contains a topographic map of the visual field. A fundamental question in vision neuroscience is how the visual system integrates information from the environment across different areas. Using neuroimaging, we investigated the spatial pattern of correlated BOLD signal across eight visual areas on data collected during rest conditions and during naturalistic movie viewing. The correlation pattern between areas reflected the underlying receptive field organization with higher correlations between cortical sites containing overlapping representations of visual space. In addition, the correlation pattern reflected the underlying widespread eccentricity organization of visual cortex, in which the highest correlations were observed for cortical sites with iso-eccentricity representations including regions with non-overlapping representations of visual space. This eccentricity-based correlation pattern appears to be part of an intrinsic functional architecture that supports the integration of information across functionally specialized visual areas.

  14. Identifying and characterising cerebral visual impairment in children: a review.

    PubMed

    Philip, Swetha Sara; Dutton, Gordon N

    2014-05-01

    Cerebral visual impairment (CVI) comprises visual malfunction due to retro-chiasmal visual and visual association pathway pathology. This can be isolated or accompany anterior visual pathway dysfunction. It is a major cause of low vision in children in both the developed and developing world, and its prevalence is increasing owing to improved survival rates in paediatric and neonatal care. CVI can present in many combinations and degrees. There are multiple causes and it is common in children with cerebral palsy. CVI can be identified easily if a structured approach to history-taking is employed. This review describes the features of CVI and describes practical management strategies aimed at helping affected children. A literature review was undertaken using 'Medline' and 'Pubmed'. Search terms included cerebral visual impairment, cortical visual impairment, dorsal stream dysfunction and visual function in cerebral palsy. © 2014 The Authors. Clinical and Experimental Optometry © 2014 Optometrists Association Australia.

  15. Visual Impairment in White, Chinese, Black, and Hispanic Participants from the Multi-Ethnic Study of Atherosclerosis Cohort.

    PubMed

    Fisher, Diana E; Shrager, Sandi; Shea, Steven J; Burke, Gregory L; Klein, Ronald; Wong, Tien Y; Klein, Barbara E; Cotch, Mary Frances

    2015-01-01

    To describe the prevalence of visual impairment and examine its association with demographic, socioeconomic, and health characteristics in the Multi-Ethnic Study of Atherosclerosis (MESA) cohort. Visual acuity data were obtained from 6134 participants, aged 46-87 years at time of examination between 2002 and 2004 (mean age 64 years, 47.6% male), from six communities in the United States. Visual impairment was defined as presenting visual acuity 20/50 or worse in the better-seeing eye. Risk factors were included in multivariable logistic regression models to determine their impact on visual impairment for men and women in each racial/ethnic group. Among all participants, 6.6% (n = 421) had visual impairment, including 5.6% of men (n = 178) and 7.5% of women (n = 243). Prevalence of impairment ranged from 4.2% (n = 52) and 6.0% (n = 77) in white men and women, respectively, to 7.6% (n = 37) and 11.6% (n = 44) in Chinese men and women, respectively. Older age was significantly associated with visual impairment in both men and women, particularly in those with lower socioeconomic status, but the effects of increasing age were more pronounced in men. Two-thirds of participants already wore distance correction, and not unexpectedly, a lower prevalence of visual impairment was seen in this group; however, 2.4% of men and 3.5% of women with current distance correction had correctable visual impairment, most notably among seniors. Even in the U.S. where prevalence of refractive correction is high, both visual impairment and uncorrected refractive error represent current public health challenges.

  16. Visual Impairment in White, Chinese, Black and Hispanic Participants from the Multi-Ethnic Study of Atherosclerosis Cohort

    PubMed Central

    Fisher, Diana E.; Shrager, Sandi; Shea, Steven J.; Burke, Gregory L.; Klein, Ronald; Wong, Tien Y.; Klein, Barbara E; Cotch, Mary Frances

    2016-01-01

    Purpose To describe the prevalence of visual impairment and examine its association with demographic, socioeconomic, and health characteristics in the Multi-Ethnic Study of Atherosclerosis (MESA) cohort. Methods Visual acuity data was obtained from 6134 participants, aged 46 to 87 years old at time of examination between 2002 and 2004 (mean age 64 years, 47.6% male), from six communities in the United States (U.S.). Visual impairment was defined as a presenting visual acuity of 20/50 or worse in the better-seeing eye. Risk factors were included in multivariable logistic regression models to determine their impact on visual impairment for men and women in each racial/ethnic group. Results Among all participants, 6.6% (N=421) had visual impairment, including 5.6% (N=178) of men and 7.5% (N=243) of women. Prevalence of impairment ranged from 4.2% (N=52) and 6.0% (N=77) in White men and women, respectively, to 7.6% (N=37) and 11.6% (N=44) in Chinese men and women, respectively. Older age was significantly associated with visual impairment in both men and women, particularly in those with lower socioeconomic status, but the effects of increasing age were more pronounced in men. Two-thirds of participants already wore distance correction and not unexpectedly, lower prevalence of visual impairment was seen in this group; however, 2.4% of men and 3.5% of women with current distance correction had correctable visual impairment, most notably among seniors. Conclusion Even in the United States where prevalence of refractive correction is high, both visual impairment and uncorrected refractive error represent current public health challenges. PMID:26395659

  17. The prevalence of visual impairment and blindness in underserved rural areas: a crucial issue for future.

    PubMed

    Hashemi, H; Yekta, A; Jafarzadehpur, E; Doostdar, A; Ostadimoghaddam, H; Khabazkhoob, M

    2017-08-01

    Purpose: To determine the prevalence of visual impairment and blindness in underserved Iranian villages and to identify the most common cause of visual impairment and blindness. Patients and methods: Multistage cluster sampling was used to select the participants, who were then invited to undergo complete examinations. Optometric examinations, including visual acuity and refraction, were performed for all individuals. Ophthalmic examinations included slit-lamp biomicroscopy and ophthalmoscopy. Visual impairment was determined according to the definitions of the WHO and presenting vision. Results: Of 3851 selected individuals, 3314 (86.5%) participated in the study. After applying the exclusion criteria, the present report was prepared based on the data of 3095 participants. The mean age of the participants was 37.6±20.7 years (3-93 years). The prevalence of visual impairment and blindness was 6.43% (95% confidence interval (CI): 3.71-9.14) and 1.18% (95% CI: 0.56-1.79), respectively. The prevalence of visual impairment varied from 0.75% in participants aged less than 5 years to 38.36% in individuals above the age of 70 years. Uncorrected refractive errors and cataract were the first and second leading causes of visual impairment; moreover, cataract and refractive errors were responsible for 35.90% and 20.51% of the cases of blindness, respectively. Conclusion: The prevalence of visual impairment was markedly high in this study. Lack of access to health services was the main reason for this high prevalence. Cataract and refractive errors are responsible for 80% of visual impairments, which may be attributable to poverty in underserved villages.

  18. BDNF Variants May Modulate Long-Term Visual Memory Performance in a Healthy Cohort

    PubMed Central

    Avgan, Nesli; Sutherland, Heidi G.; Spriggens, Lauren K.; Yu, Chieh; Ibrahim, Omar; Bellis, Claire; Haupt, Larisa M.; Shum, David H. K.; Griffiths, Lyn R.

    2017-01-01

    Brain-derived neurotrophic factor (BDNF) is involved in numerous cognitive functions including learning and memory. BDNF plays an important role in synaptic plasticity in humans and rats with BDNF shown to be essential for the formation of long-term memories. We previously identified a significant association between the BDNF Val66Met polymorphism (rs6265) and long-term visual memory (p-value = 0.003) in a small cohort (n = 181) comprised of healthy individuals who had been phenotyped for various aspects of memory function. In this study, we have extended the cohort to 597 individuals and examined multiple genetic variants across both the BDNF and BDNF-AS genes for association with visual memory performance as assessed by the Wechsler Memory Scale—Fourth Edition subtests Visual Reproduction I and II (VR I and II). VR I assesses immediate visual memory, whereas VR II assesses long-term visual memory. Genetic association analyses were performed for 34 single nucleotide polymorphisms genotyped on Illumina OmniExpress BeadChip arrays with the immediate and long-term visual memory phenotypes. While none of the BDNF and BDNF-AS variants were shown to be significant for immediate visual memory, we found 10 variants (including the Val66Met polymorphism (p-value = 0.006)) that were nominally associated, and three variants (two variants in BDNF and one variant in the BDNF-AS locus) that were significantly associated with long-term visual memory. Our data therefore suggests a potential role for BDNF, and its anti-sense transcript BDNF-AS, in long-term visual memory performance. PMID:28304362

  19. BDNF Variants May Modulate Long-Term Visual Memory Performance in a Healthy Cohort.

    PubMed

    Avgan, Nesli; Sutherland, Heidi G; Spriggens, Lauren K; Yu, Chieh; Ibrahim, Omar; Bellis, Claire; Haupt, Larisa M; Shum, David H K; Griffiths, Lyn R

    2017-03-17

    Brain-derived neurotrophic factor (BDNF) is involved in numerous cognitive functions including learning and memory. BDNF plays an important role in synaptic plasticity in humans and rats, with BDNF shown to be essential for the formation of long-term memories. We previously identified a significant association between the BDNF Val66Met polymorphism (rs6265) and long-term visual memory (p-value = 0.003) in a small cohort (n = 181) comprised of healthy individuals who had been phenotyped for various aspects of memory function. In this study, we have extended the cohort to 597 individuals and examined multiple genetic variants across both the BDNF and BDNF-AS genes for association with visual memory performance as assessed by the Wechsler Memory Scale-Fourth Edition subtests Visual Reproduction I and II (VR I and II). VR I assesses immediate visual memory, whereas VR II assesses long-term visual memory. Genetic association analyses were performed for 34 single nucleotide polymorphisms genotyped on Illumina OmniExpress BeadChip arrays with the immediate and long-term visual memory phenotypes. While none of the BDNF and BDNF-AS variants were shown to be significant for immediate visual memory, we found 10 variants (including the Val66Met polymorphism (p-value = 0.006)) that were nominally associated, and three variants (two variants in BDNF and one variant in the BDNF-AS locus) that were significantly associated with long-term visual memory. Our data therefore suggest a potential role for BDNF, and its anti-sense transcript BDNF-AS, in long-term visual memory performance.

  20. Solar System Visualizations

    NASA Technical Reports Server (NTRS)

    Brown, Alison M.

    2005-01-01

    Solar System Visualization products enable scientists to compare models and measurements in new ways that enhance the scientific discovery process, enhance the information content and understanding of the science results for both science colleagues and the public, and create visually appealing and intellectually stimulating visualization products. Missions supported include MER, MRO, and Cassini. Image products produced include pan and zoom animations of large mosaics to reveal the details of surface features and topography, animations into registered multi-resolution mosaics to provide context for microscopic images, 3D anaglyphs from left and right stereo pairs, and screen captures from video footage. Specific products include a three-part context animation of the Cassini Enceladus encounter highlighting images from 350 to 4 meters per pixel resolution; Mars Reconnaissance Orbiter screen captures illustrating various instruments during assembly and testing at the Payload Hazardous Servicing Facility at Kennedy Space Center; and an animation of Mars Exploration Rover Opportunity's 'Rub al Khali' panorama, where the rover was stuck in deep fine sand for more than a month. This task creates new visualization products that enable new science results and enhance the public's understanding of the Solar System and NASA's missions of exploration.

  1. Light and the laboratory mouse.

    PubMed

    Peirson, Stuart N; Brown, Laurence A; Pothecary, Carina A; Benson, Lindsay A; Fisk, Angus S

    2018-04-15

    Light exerts widespread effects on physiology and behaviour. As well as the widely-appreciated role of light in vision, light also plays a critical role in many non-visual responses, including regulating circadian rhythms, sleep, pupil constriction, heart rate, hormone release and learning and memory. In mammals, responses to light are all mediated via retinal photoreceptors, including the classical rods and cones involved in vision as well as the recently identified melanopsin-expressing photoreceptive retinal ganglion cells (pRGCs). Understanding the effects of light on the laboratory mouse therefore depends upon an appreciation of the physiology of these retinal photoreceptors, including their differing sensitivities to absolute light levels and wavelengths. The signals from these photoreceptors are often integrated, with different responses involving distinct retinal projections, making generalisations challenging. Furthermore, many commonly used laboratory mouse strains carry mutations that affect visual or non-visual physiology, ranging from inherited retinal degeneration to genetic differences in sleep and circadian rhythms. Here we provide an overview of the visual and non-visual systems before discussing practical considerations for the use of light for researchers and animal facility staff working with laboratory mice. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.

  2. PATTERNS OF CLINICALLY SIGNIFICANT COGNITIVE IMPAIRMENT IN HOARDING DISORDER.

    PubMed

    Mackin, R Scott; Vigil, Ofilio; Insel, Philip; Kivowitz, Alana; Kupferman, Eve; Hough, Christina M; Fekri, Shiva; Crothers, Ross; Bickford, David; Delucchi, Kevin L; Mathews, Carol A

    2016-03-01

    The cognitive characteristics of individuals with hoarding disorder (HD) are not well understood. Existing studies are relatively few and somewhat inconsistent but suggest that individuals with HD may have specific dysfunction in the cognitive domains of categorization, speed of information processing, and decision making. However, there have been no studies evaluating the degree to which cognitive dysfunction in these domains reflects clinically significant cognitive impairment (CI). Participants included 78 individuals who met DSM-V criteria for HD and 70 age- and education-matched controls. Cognitive performance on measures of memory, attention, information processing speed, abstract reasoning, visuospatial processing, decision making, and categorization ability was evaluated for each participant. Rates of clinical impairment for each measure were compared, as were age- and education-corrected raw scores for each cognitive test. HD participants showed greater incidence of CI on measures of visual memory, visual detection, and visual categorization relative to controls. Raw-score comparisons between groups showed similar results with HD participants showing lower raw-score performance on each of these measures. In addition, in raw-score comparisons HD participants also demonstrated relative strengths compared to control participants on measures of verbal and visual abstract reasoning. These results suggest that HD is associated with a pattern of clinically significant CI in some visually mediated neurocognitive processes including visual memory, visual detection, and visual categorization. Additionally, these results suggest HD individuals may also exhibit relative strengths, perhaps compensatory, in abstract reasoning in both verbal and visual domains. © 2015 Wiley Periodicals, Inc.

  3. Living Independently: Exploring the Experiences of Visually Impaired People Living in Age-Related and Lifetime Housing Through Qualitative Synthesis.

    PubMed

    Rooney, Clíona; Hadjri, Karim; Faith, Verity; Rooney, Máirin; McAllister, Keith; Craig, Cathy

    2017-01-01

    The aim of this study is to gain a deeper understanding of the experiences of visually impaired older people living independently at home. As populations age globally, the prevalence of visual impairment is increasing. This means that ongoing and future aging-in-place strategies, which seek to enable older people to remain independent for longer, need to give more attention to the needs of those with visual impairment. As people develop visual impairment, they use adaptive strategies including modifying long-term homes or relocating to more suitable accommodation. In the United Kingdom, aging-in-place strategies include employing statutory lifetime home standards (LTHS) in the home or relocating to sheltered housing to live independently with support available if required. To gain a better understanding of the needs of the visually impaired in the home, 12 interviews with six visually impaired occupants of LTHS homes and six from sheltered accommodation were analyzed separately using interpretative phenomenological analysis. Secondly, qualitative synthesis was used to further analyze themes generated from both samples before interview results were conceptualized in two superordinate concepts, namely, "negotiating priorities" and "understanding visual impairment." Participants from both groups had similar needs and were willing to compromise by living with some negative features. Those who coped well with moving utilized various resources. These findings will shed more understanding on providing good quality housing for those with visual impairment wanting to live either independently or within healthcare home environments.

  4. Neurorehabilitation of saccadic ocular movement in a patient with a homonymous hemianopia postgeniculate caused by an arteriovenous malformation

    PubMed Central

    Pineda-Ortíz, Mirna; Pacheco-López, Gustavo; Rubio-Osornio, Moisés; Rubio, Carmen; Valadez-Rodríguez, Juan

    2018-01-01

    Rationale: Visual therapy, which includes a restorative and compensatory approach, seems to be a viable treatment option for homonymous defects of the visual field in patients with postgeniculate injury of the visual pathway due to occipital arteriovenous malformation (AVM). Until now, the Mexican population suffering from homonymous hemianopia did not have health services that provided any type of visual therapy for their condition. Patient concerns: A 31-year-old patient, who underwent a surgical procedure for resection of the AVM, was referred with posterior low vision on the left side. Diagnoses: The patient was diagnosed with left homonymous hemianopia. Interventions: Visual neurorehabilitation therapy (NRT), which integrated restorative and compensatory approaches, was administered for 3 hours each week. NRT included fixation, follow-up, search, peripheral vision, and reading. Outcomes: The NRT did not change visual field defects and, retinotopically, the same campimetric defects remained. However, after training, the tracking ocular movements improved to standard values on the ENG, and the visual search became more organized. Reading reached a level without mistakes, with rhythm and good intonation. The Beck test demonstrated an improvement in depression symptoms. Regarding daily life activities, the patient reported significant improvements. Lessons: Visual NRT can significantly improve eye movements, as well as the quality of life and independence of the patient. This integral approach could be an effective therapeutic option for homonymous defects of the visual field. PMID:29538218

  5. Wave Propagation Through Inhomogeneities With Applications to Novel Sensing Techniques

    NASA Technical Reports Server (NTRS)

    Adamovsky, G.; Tokars, R.; Varga, D.; Floyd, B.

    2008-01-01

    The paper describes phenomena observed as a result of laser pencil beam interactions with abrupt interfaces, including aerodynamic shocks. Based on these phenomena, a novel flow visualization technique using a scanning laser pencil beam is introduced. The technique reveals properties of light interaction with interfaces, including aerodynamic shocks, that are not seen using conventional visualization. Various configurations of scanning beam devices, including those with no moving parts, as well as results of "proof-of-concept" tests, are included.

  6. The occurrence of visual and cognitive impairment, and eye diseases in the super-elderly in Japan: a cross-sectional single-center study.

    PubMed

    Fukuoka, Hideki; Nagaya, Masahiro; Toba, Kenji

    2015-10-29

    The current state of eye diseases and treatments in the elderly as well as the relationships between dementia and systemic diseases remain unclear. Therefore, this study evaluated the prevalence of eye diseases, visual impairment, cognitive impairment, and falls (which are an important health issue and are considered one of the Geriatric Giants) in super-elderly people in Japan. The subjects were 31 elderly people (62 eyes; mean age: 84.6 ± 8.8 years; age range 61-98 years) who were admitted to a geriatric health services facility. Eye treatment status, systemic diseases, dementia, and recent falls were investigated. Eye examinations, including vision and intraocular pressure measurement and slit-lamp biomicroscopy, were conducted. Mean best corrected visual acuity (logMAR) was 0.51 ± 0.56, and mean intraocular pressure was 13.7 ± 3.5 mmHg. Approximately half of the subjects exhibited eye diseases, including excavation of the optic nerve head, cataracts, and glaucoma. Ten subjects had visual impairment (i.e., visual acuity of the eye with the better vision <20/40). The mean Hasegawa dementia scale scores of the visually impaired and non-visually impaired groups were 10.2 ± 6 and 16 ± 8 points, respectively (p < 0.05). Furthermore, 70% of subjects with visual impairment experienced a fall in the past year compared to 48% of those without visual impairment, although the difference was not significant. Regarding systemic diseases, there were 6, 5, and 15 cases of diabetes, hyperlipidemia, and hypertension, respectively. There was no significant association between these systemic diseases and visual function after adjusting for age and gender. The percentages of patients with age-related eye diseases and poor visual acuity in a geriatric health services facility were extremely high. Compared to those without visual impairment, those with visual impairment had lower dementia scores and a higher rate of falls.

  7. Visual disability rates in a ten-year cohort of patients with anterior visual pathway meningiomas.

    PubMed

    Bor-Shavit, Elite; Hammel, Naama; Nahum, Yoav; Rappaport, Zvi Harry; Stiebel-Kalish, Hadas

    2015-01-01

    To examine the visual outcome of anterior visual pathway meningioma (AVPM) patients followed for at least one year. Data were collected on demographics, clinical course and management. Visual disability was classified at the first and last examination as follows: I--no visual disability; II--mild visual defect in one eye; III--mild visual defect in both eyes; IV--loss of driver's license; V--legally blind. Of 81 AVPM patients, the tumor originated in the clinoid process in 23 (28%), the sphenoid-wing area in 18 (22%), the cavernous sinus in 15 (19%), the tuberculum sellae in 8 (10%), and mixed locations in 17 (21%). On last examination, 46 patients (57%) had good visual acuity in one or both eyes (Class I or II) and 17 (21%) were mildly affected in both eyes. The rate of Class IV disability was 16%, and Class V disability was 6%. Attention needs to be paid to the considerable proportion of patients with AVPM (22% in this study) who may lose their driver's license or become legally blind. Occupational therapists should play an important role in the multidisciplinary management of those patients to help them adapt to their new physical and social situation. Anterior visual pathway meningiomas (AVPMs) are commonly not life-threatening but they can lead to profound visual disability, especially when the tumor originates in the tuberculum sellae and cavernous sinus. Particular attention should be paid to visual acuity and visual field deficits, as these can profoundly affect the patient's quality of life, including ability to drive and activities of daily living. The interdisciplinary management of patients with AVPM should include the neurosurgeon, neuro-ophthalmologist and occupational therapist. Also, early intervention by the occupational therapist can help patients adapt to their current physical and social situation and return to everyday tasks more rapidly.

  8. [Progressive visual agnosia].

    PubMed

    Sugimoto, Azusa; Futamura, Akinori; Kawamura, Mitsuru

    2011-10-01

    Progressive visual agnosia was discovered in the 20th century following the discovery of classical non-progressive visual agnosia. In contrast to the classical type, which is caused by cerebral vascular disease or traumatic injury, progressive visual agnosia is a symptom of neurological degeneration. The condition of progressive visual loss, including visual agnosia, and posterior cerebral atrophy was named posterior cortical atrophy (PCA) by Benson et al. (1988). Progressive visual agnosia is also observed in semantic dementia (SD) and other degenerative diseases, but there is a difference in the subtype of visual agnosia associated with these diseases. Lissauer (1890) classified visual agnosia into apperceptive and associative types, and in most cases, PCA is associated with the apperceptive type. However, SD patients exhibit symptoms of associative visual agnosia before changing to those of semantic memory disorder. Insights into progressive visual agnosia have helped us understand the visual system and discover how we "perceive" the outer world neuronally, with regard to consciousness. Although PCA is a type of atypical dementia, its diagnosis is important to enable patients to live better lives with appropriate functional support.

  9. 47 CFR 73.621 - Noncommercial educational TV stations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... interrupt regular programming. Note: Commission interpretation of this rule, including the acceptable form... Blanking Interval and in the Visual Signal. The provisions governing VBI and visual signal...

  10. 47 CFR 73.621 - Noncommercial educational TV stations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... interrupt regular programming. Note: Commission interpretation of this rule, including the acceptable form... Blanking Interval and in the Visual Signal. The provisions governing VBI and visual signal...

  11. A prospective profile of visual field loss following stroke: prevalence, type, rehabilitation, and outcome.

    PubMed

    Rowe, Fiona J; Wright, David; Brand, Darren; Jackson, Carole; Harrison, Shirley; Maan, Tallat; Scott, Claire; Vogwell, Linda; Peel, Sarah; Akerman, Nicola; Dodridge, Caroline; Howard, Claire; Shipman, Tracey; Sperring, Una; Macdiarmid, Sonia; Freeman, Cicely

    2013-01-01

    To profile site of stroke/cerebrovascular accident, type and extent of field loss, treatment options, and outcome. Prospective multicentre cohort trial. Standardised referral and investigation protocol of visual parameters. 915 patients were recruited with a mean age of 69 years (SD 14). 479 patients (52%) had visual field loss. 51 patients (10%) had no visual symptoms. Almost half of symptomatic patients (n = 226) complained only of visual field loss; almost half (n = 226) also had reading difficulty, blurred vision, diplopia, and perceptual difficulties. 31% (n = 151) had visual field loss as their only visual impairment; 69% (n = 328) had low vision, eye movement deficits, or visual perceptual difficulties. Occipital and parietal lobe strokes most commonly caused visual field loss. Treatment options included visual search training, visual awareness, typoscopes, substitutive prisms, low vision aids, refraction, and occlusive patches. At follow-up, 15 patients (7.5%) had full recovery, 78 (39%) had improvement, and 104 (52%) had no recovery. Two patients (1%) had further decline of visual field. Patients with visual field loss had lower quality of life scores than stroke patients without visual impairment. Stroke survivors with visual field loss require assessment to accurately define type and extent of loss, diagnose coexistent visual impairments, and offer targeted treatment.

  12. Parallel Visualization Co-Processing of Overnight CFD Propulsion Applications

    NASA Technical Reports Server (NTRS)

    Edwards, David E.; Haimes, Robert

    1999-01-01

    An interactive visualization system, pV3, is being developed for the investigation of advanced computational methodologies employing visualization and parallel processing for the extraction of information contained in large-scale transient engineering simulations. Visual techniques for extracting information from the data in terms of cutting planes, iso-surfaces, particle tracing and vector fields are included in this system. This paper discusses improvements to the pV3 system developed under NASA's Affordable High Performance Computing project.

  13. Flow Visualization and Laser Velocimetry for Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Hunter, W. W., Jr. (Editor); Foughner, J. T., Jr. (Editor)

    1982-01-01

    The need for flow visualization and laser velocimetry was discussed. The purpose was threefold: (1) provide a state-of-the-art overview; (2) provide a forum for industry, universities, and government agencies to address problems in developing useful and productive flow visualization and laser velocimetry measurement techniques; and (3) provide discussion of recent developments and applications of flow visualization and laser velocimetry measurement techniques and instrumentation systems for wind tunnels, including the 0.3-Meter Transonic Cryogenic Tunnel.

  14. Experiments in teleoperator and autonomous control of space robotic vehicles

    NASA Technical Reports Server (NTRS)

    Alexander, Harold L.

    1991-01-01

    A program of research embracing teleoperator and automatic navigational control of freely flying satellite robots is presented. Current research goals include: (1) developing visual operator interfaces for improved vehicle teleoperation; (2) determining the effects of different visual interface system designs on operator performance; and (3) achieving autonomous vision-based vehicle navigation and control. This research program combines virtual-environment teleoperation studies and neutral-buoyancy experiments using a space-robot simulator vehicle currently under development. Visual-interface design options under investigation include monoscopic versus stereoscopic displays and cameras, helmet-mounted versus panel-mounted display monitors, head-tracking versus fixed or manually steerable remote cameras, and the provision of vehicle-fixed visual cues, or markers, in the remote scene for improved sensing of vehicle position, orientation, and motion.

  15. Visual Impairments, "Including Blindness." NICHCY Disability Fact Sheet #13

    ERIC Educational Resources Information Center

    National Dissemination Center for Children with Disabilities, 2012

    2012-01-01

    Vision is one of the five senses. Being able to see gives tremendous access to learning about the world around--people's faces and the subtleties of expression, what different things look like and how big they are, and the physical environments, including approaching hazards. When a child has a visual impairment, it is cause for immediate…

  16. A graph algebra for scalable visual analytics.

    PubMed

    Shaverdian, Anna A; Zhou, Hao; Michailidis, George; Jagadish, Hosagrahar V

    2012-01-01

    Visual analytics (VA), which combines analytical techniques with advanced visualization features, is fast becoming a standard tool for extracting information from graph data. Researchers have developed many tools for this purpose, suggesting a need for formal methods to guide these tools' creation. Increased data demands on computing require redesigning VA tools to consider performance and reliability in the context of analysis of exascale datasets. Furthermore, visual analysts need a way to document their analyses for reuse and results justification. A VA graph framework encapsulated in a graph algebra helps address these needs. Its atomic operators include selection and aggregation. The framework employs a visual operator and supports dynamic attributes of data to enable scalable visual exploration of data.
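
    The algebra's atomic operators can be sketched in a few lines. The following is an illustrative Python sketch of selection (filter nodes by a predicate) and aggregation (merge nodes sharing an attribute value into super-nodes); the data structures and function names are assumptions for illustration, not the framework's actual implementation.

```python
# Sketch of two graph-algebra atomic operators: selection and aggregation.
# The graph representation (attribute dict + edge list) is an assumption
# made for illustration, not the paper's implementation.
from collections import defaultdict

def select(nodes, edges, predicate):
    """Keep only nodes satisfying the predicate, and edges between them."""
    kept = {n for n, attrs in nodes.items() if predicate(attrs)}
    return ({n: nodes[n] for n in kept},
            [(u, v) for u, v in edges if u in kept and v in kept])

def aggregate(nodes, edges, key):
    """Merge nodes sharing an attribute value into one super-node each."""
    groups = defaultdict(list)
    for n, attrs in nodes.items():
        groups[attrs[key]].append(n)
    member_of = {n: g for g, ns in groups.items() for n in ns}
    super_nodes = {g: {key: g, "size": len(ns)} for g, ns in groups.items()}
    super_edges = {(member_of[u], member_of[v]) for u, v in edges
                   if member_of[u] != member_of[v]}
    return super_nodes, sorted(super_edges)

nodes = {"a": {"dept": "x"}, "b": {"dept": "x"}, "c": {"dept": "y"}}
edges = [("a", "b"), ("b", "c")]
print(aggregate(nodes, edges, "dept"))
```

Selection shrinks the graph to a subgraph of interest; aggregation coarsens it, which is what makes visual exploration of very large graphs scalable.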

  17. SBOL Visual: A Graphical Language for Genetic Designs.

    PubMed

    Quinn, Jacqueline Y; Cox, Robert Sidney; Adler, Aaron; Beal, Jacob; Bhatia, Swapnil; Cai, Yizhi; Chen, Joanna; Clancy, Kevin; Galdzicki, Michal; Hillson, Nathan J; Le Novère, Nicolas; Maheshwari, Akshay J; McLaughlin, James Alastair; Myers, Chris J; P, Umesh; Pocock, Matthew; Rodriguez, Cesar; Soldatova, Larisa; Stan, Guy-Bart V; Swainston, Neil; Wipat, Anil; Sauro, Herbert M

    2015-12-01

    Synthetic Biology Open Language (SBOL) Visual is a graphical standard for genetic engineering. It consists of symbols representing DNA subsequences, including regulatory elements and DNA assembly features. These symbols can be used to draw illustrations for communication and instruction, and as image assets for computer-aided design. SBOL Visual is a community standard, freely available for personal, academic, and commercial use (Creative Commons CC0 license). We provide prototypical symbol images that have been used in scientific publications and software tools. We encourage users to use and modify them freely, and to join the SBOL Visual community: http://www.sbolstandard.org/visual.

  18. Visualizing SPH Cataclysmic Variable Accretion Disk Simulations with Blender

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.; Wood, Matthew A.

    2015-01-01

    We present innovative ways to use Blender, a 3D graphics package, to visualize smoothed particle hydrodynamics particle data of cataclysmic variable accretion disks. We focus on the methods of shape key data constructs to increase data I/O and manipulation speed. The implementation of the methods outlined allows for compositing of the various visualization layers into a final animation. Viewing the disk in 3D from different angles allows for a visual analysis of the physical system and orbits. The techniques have a wide-ranging set of applications in astronomical visualization, including both observational and theoretical data.
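
    The shape-key technique amounts to storing each SPH snapshot as a full set of particle positions and blending between stored snapshots at render time. A minimal Python sketch of that blending step, with synthetic positions (Blender's bpy API and the authors' pipeline are not shown):

```python
# Sketch of shape-key style blending between SPH particle snapshots.
# Each "shape key" stores all particle positions at one simulation time;
# frames in between are linear blends, which keeps per-frame mesh
# updates cheap. Synthetic data; not the authors' Blender pipeline.

def blend_snapshots(basis, target, factor):
    """Linearly interpolate every particle position from basis toward target."""
    return [tuple(b + factor * (t - b) for b, t in zip(p0, p1))
            for p0, p1 in zip(basis, target)]

# Two snapshots of a 3-particle "disk", each particle as (x, y, z).
snap_t0 = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
snap_t1 = [(0.0, 0.0, 1.0), (2.0, 0.0, 0.0), (0.0, 3.0, 0.0)]

halfway = blend_snapshots(snap_t0, snap_t1, 0.5)
print(halfway)  # each particle at the midpoint of its path
```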

  19. Move with Me: A Parents' Guide to Movement Development for Visually Impaired Babies.

    ERIC Educational Resources Information Center

    Blind Childrens Center, Los Angeles, CA.

    This booklet presents suggestions for parents to promote their visually impaired infant's motor development. It is pointed out that babies with serious visual loss often prefer their world to be constant and familiar and may resist change (including change in position); therefore, it is important that a wide range of movement activities be…

  20. Visualization of time-varying natural tree data

    Treesearch

    S. Brasch; L. Linsen; E.G. McPherson

    2007-01-01

    Given a set of global (natural) tree parameters measured for many specimens of different ages for a range of species, we have developed a tool that visualizes these parameters over time. The parameters include measures of tree dimensions like heights, diameters, and crown shape, and measures of costs and benefits for growing the tree. We visualize the tree dimensions...

  1. The Influence of Visual Arts Education on Children with ASD

    ERIC Educational Resources Information Center

    Çevirgen, Ayse; Aktas, Burcu; Kot, Mehtap

    2018-01-01

    The aim of this research is to examine the effects of visual arts on a child with Autism Spectrum Disorder (ASD). The research included a 13-year-old male student with ASD, the student's parents, and the visual arts teacher. The research was designed as a case study, a qualitative research model. Semi-structured interviewing and…

  2. Data Visualization and Animation Lab (DVAL) overview

    NASA Technical Reports Server (NTRS)

    Stacy, Kathy; Vonofenheim, Bill

    1994-01-01

    The general capabilities of the Langley Research Center Data Visualization and Animation Laboratory are described. These capabilities include digital image processing, 3-D interactive computer graphics, data visualization and analysis, video-rate acquisition and processing of video images, photo-realistic modeling and animation, video report generation, and color hardcopies. A specialized video image processing system is also discussed.

  3. Reflections on Visual and Material Culture: An Example from Southwest Chicago

    ERIC Educational Resources Information Center

    Ulbricht, J.

    2007-01-01

    Although several art educators have called our attention to the importance of studying visual culture, others have widened the discussion to include material culture and its effects on our lives. Because of growing concern for the value of exploring the personal and social functions of visual and material culture, the purpose of this article is to…

  4. Integration of visual and motion cues for flight simulator requirements and ride quality investigation

    NASA Technical Reports Server (NTRS)

    Young, L. R.

    1976-01-01

    Investigations for the improvement of flight simulators are reported. Topics include: visual cues in landing, comparison of linear and nonlinear washout filters using a model of the vestibular system, and visual vestibular interactions (yaw axis). An abstract is given for a thesis on the applications of human dynamic orientation models to motion simulation.

  5. Identification and Intervention for Students Who Are Visually Impaired and Who Have Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Li, Alicia

    2009-01-01

    At least 60% of children with disabilities have multiple disabilities including visual impairments (VI). Because the visual system is neurologically based, any problems of the neurological system will also likely affect vision. The estimated number of students with VI and additional disabilities has increased significantly over the years. Since…

  6. Qualitative Differences in the Representation of Abstract versus Concrete Words: Evidence from the Visual-World Paradigm

    ERIC Educational Resources Information Center

    Dunabeitia, Jon Andoni; Aviles, Alberto; Afonso, Olivia; Scheepers, Christoph; Carreiras, Manuel

    2009-01-01

    In the present visual-world experiment, participants were presented with visual displays that included a target item that was a semantic associate of an abstract or a concrete word. This manipulation allowed us to test a basic prediction derived from the qualitatively different representational framework that supports the view of different…

  7. The Impact of Baseline Trend Control on Visual Analysis of Single-Case Data

    ERIC Educational Resources Information Center

    Mercer, Sterett H.; Sterling, Heather E.

    2012-01-01

    The impact of baseline trend control on visual analyses of AB intervention graphs was examined with simulated data at various values of baseline trend, autocorrelation, and effect size. Participants included 202 undergraduate students with minimal training in visual analysis and 10 graduate students and faculty with more training and experience in…

  8. The Right Hemisphere Advantage in Visual Change Detection Depends on Temporal Factors

    ERIC Educational Resources Information Center

    Spotorno, Sara; Faure, Sylvane

    2011-01-01

    What accounts for the Right Hemisphere (RH) functional superiority in visual change detection? An original task which combines one-shot and divided visual field paradigms allowed us to direct change information initially to the RH or the Left Hemisphere (LH) by deleting, respectively, an object included in the left or right half of a scene…

  9. The Effectiveness of a Multidisciplinary Group Rehabilitation Program on the Psychosocial Functioning of Elderly People Who Are Visually Impaired

    ERIC Educational Resources Information Center

    Alma, Manna A.; Groothoff, Johan W.; Melis-Dankers, Bart J. M.; Suurmeijer, Theo P. B. M.; van der Mei, Sijrike F.

    2013-01-01

    Introduction: The pilot study reported here determined the effectiveness of a multidisciplinary group rehabilitation program, Visually Impaired Elderly Persons Participating (VIPP), on psychosocial functioning. Methods: The single-group pretest-posttest pilot study included 29 persons with visual impairments (aged 55 and older) who were referred…

  10. Lewy Body Dementia

    MedlinePlus

    ... People with Lewy body dementia may experience visual hallucinations, and changes in alertness and attention. Other effects ... body dementia signs and symptoms may include: Visual hallucinations. Hallucinations may be one of the first symptoms, ...

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berres, Anne Sabine

    This slide presentation describes basic topological concepts, including topological spaces, homeomorphisms, homotopy, and Betti numbers. It then explores scalar field topology (finding topological features and scalar field visualization) and vector field topology (finding topological features and vector field visualization).

  12. Occupational Health and the Visual Arts: An Introduction.

    PubMed

    Hinkamp, David; McCann, Michael; Babin, Angela R

    2017-09-01

    Occupational hazards in the visual arts often involve hazardous materials, though hazardous equipment and hazardous work conditions can also be found. Occupational health professionals are familiar with most of these hazards and are particularly qualified to contribute clinical and preventive expertise to these issues. Articles illustrating visual arts health issues were sought and reviewed. Literature sources included medical databases, unindexed art-health publications, and popular press articles. Few medical articles examine health issues in the visual arts directly, but exposures to pigments, solvents, and other hazards found in the visual arts are well described. The hierarchy of controls is an appropriate model for controlling hazards and promoting safer visual art workplaces. The health and safety of those working in the visual arts can benefit from the occupational health approach. Sources of further information are available.

  13. Detection Progress of Selected Drugs in TLC

    PubMed Central

    Pyka, Alina

    2014-01-01

    This entry describes applications of known indicators and dyes as new visualizing reagents and various visualizing systems as well as photocatalytic reactions and bioautography method for the detection of bioactive compounds including drugs and compounds isolated from herbal extracts. Broadening index, detection index, characteristics of densitometric band, modified contrast index, limit of detection, densitometric visualizing index, and linearity range of detected compounds were used for the evaluation of visualizing effects of applied visualizing reagents. It was shown that visualizing effect depends on the chemical structure of the visualizing reagent, the structure of the substance detected, and the chromatographic adsorbent applied. The usefulness of densitometry to direct detection of some drugs was also shown. Quoted papers indicate the detection progress of selected drugs investigated by thin-layer chromatography (TLC). PMID:24551853

  14. Effects of Normal Aging on Visuo-Motor Plasticity

    NASA Technical Reports Server (NTRS)

    Roller, Carrie A.; Cohen, Helen S.; Kimball, Kay T.; Bloomberg, Jacob J.

    2001-01-01

    Normal aging is associated with declines in neurologic function. Uncompensated visual and vestibular problems may have dire consequences including dangerous falls. Visuomotor plasticity is a form of behavioral neural plasticity which is important in the process of adapting to visual or vestibular alteration, including those changes due to pathology, pharmacotherapy, surgery or even entry into a microgravity or underwater environment. In order to determine the effects of aging on visuomotor plasticity, we chose the simple and easily measured paradigm of visual-motor re-arrangement created by using visual displacement prisms while throwing small balls at a target. Subjects threw balls before, during and after wearing a set of prisms which displace the visual scene by twenty degrees to the right. Data obtained during adaptation were modeled using multilevel analyses for 73 subjects aged 20 to 80 years. We found no statistically significant difference in measures of visuomotor plasticity with advancing age. Further studies are underway examining variable practice training as a potential mechanism for enhancing this form of behavioral neural plasticity.

  15. Effects of normal aging on visuo-motor plasticity

    NASA Technical Reports Server (NTRS)

    Roller, Carrie A.; Cohen, Helen S.; Kimball, Kay T.; Bloomberg, Jacob J.

    2002-01-01

    Normal aging is associated with declines in neurologic function. Uncompensated visual and vestibular problems may have dire consequences including dangerous falls. Visuo-motor plasticity is a form of behavioral neural plasticity, which is important in the process of adapting to visual or vestibular alteration, including those changes due to pathology, pharmacotherapy, surgery or even entry into microgravity or an underwater environment. To determine the effects of aging on visuo-motor plasticity, we chose the simple and easily measured paradigm of visual-motor rearrangement created by using visual displacement prisms while throwing small balls at a target. Subjects threw balls before, during and after wearing a set of prisms which displace the visual scene by twenty degrees to the right. Data obtained during adaptation were modeled using multilevel modeling techniques for 73 subjects, aged 20 to 80 years. We found no statistically significant difference in measures of visuo-motor plasticity with advancing age. Further studies are underway examining variable practice training as a potential mechanism for enhancing this form of behavioral neural plasticity.

  16. Association between visual impairment and patient-reported visual disability at different stages of cataract surgery.

    PubMed

    Acosta-Rojas, E Ruthy; Comas, Mercè; Sala, Maria; Castells, Xavier

    2006-10-01

    To evaluate the association between visual impairment (visual acuity, contrast sensitivity, stereopsis) and patient-reported visual disability at different stages of cataract surgery. A cohort of 104 patients aged 60 years and over with bilateral cataract was assessed preoperatively, after first-eye surgery (monocular pseudophakia) and after second-eye surgery (binocular pseudophakia). Partial correlation coefficients (PCC) and linear regression models were calculated. In patients with bilateral cataracts, visual disability was associated with visual acuity (PCC = -0.30) and, to a lesser extent, with contrast sensitivity (PCC = 0.16) and stereopsis (PCC = -0.09). In monocular and binocular pseudophakia, visual disability was more strongly associated with stereopsis (PCC = -0.26 monocular and -0.51 binocular) and contrast sensitivity (PCC = 0.18 monocular and 0.34 binocular) than with visual acuity (PCC = -0.18 monocular and -0.18 binocular). Visual acuity, contrast sensitivity and stereopsis accounted for between 17% and 42% of variance in visual disability. The association of visual impairment with patient-reported visual disability differed at each stage of cataract surgery. Measuring other forms of visual impairment independently from visual acuity, such as contrast sensitivity or stereopsis, could be important in evaluating both needs and outcomes in cataract surgery. More comprehensive assessment of the impact of cataract on patients should include measurement of both visual impairment and visual disability.
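
    The partial correlation coefficients (PCCs) reported above measure the association between two variables after removing the influence of a control variable; they can be computed by correlating the residuals left after regressing each variable on the control. A self-contained sketch for a single control variable; all numbers are synthetic illustrations, not the study's data:

```python
# Partial correlation of x and y controlling for z, computed as the
# Pearson correlation of the residuals from simple regressions on z.
# All data below are synthetic and purely illustrative.
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / math.sqrt(sum((x - ma) ** 2 for x in a) *
                           sum((y - mb) ** 2 for y in b))

def residuals(y, z):
    """Residuals of the least-squares fit y = a + b*z."""
    n = len(y)
    mz, my = sum(z) / n, sum(y) / n
    b = (sum((zi - mz) * (yi - my) for zi, yi in zip(z, y))
         / sum((zi - mz) ** 2 for zi in z))
    a = my - b * mz
    return [yi - (a + b * zi) for yi, zi in zip(y, z)]

def partial_corr(x, y, z):
    """Correlation of x and y with the linear effect of z removed."""
    return pearson(residuals(x, z), residuals(y, z))

acuity     = [0.1, 0.3, 0.5, 0.7, 0.9]  # hypothetical predictor
disability = [10, 22, 31, 38, 52]       # hypothetical outcome score
age        = [61, 65, 70, 74, 80]       # hypothetical control variable
print(round(partial_corr(acuity, disability, age), 3))
```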

  17. Comment on "Cheating prevention in visual cryptography".

    PubMed

    Chen, Yu-Chi; Horng, Gwoboa; Tsai, Du-Shiau

    2012-07-01

    Visual cryptography (VC), proposed by Naor and Shamir, has numerous applications, including visual authentication and identification, steganography, and image encryption. In 2006, Horng showed that cheating is possible in VC, where some participants can deceive the remaining participants by forged transparencies. Since then, designing cheating-prevention visual secret-sharing (CPVSS) schemes has been studied by many researchers. In this paper, we cryptanalyze the Hu-Tzeng CPVSS scheme and show that it is not cheating immune. We also outline an improvement that helps to overcome the problem.

  18. A study of methods to predict and measure the transmission of sound through the walls of light aircraft. A survey of techniques for visualization of noise fields

    NASA Technical Reports Server (NTRS)

    Marshall, S. E.; Bernhard, R.

    1984-01-01

    A survey of the most widely used methods for visualizing acoustic phenomena is presented. Emphasis is placed on acoustic processes in the audible frequencies. Many visual problems are analyzed on computer graphic systems. A brief description of the current technology in computer graphics is included. The visualization technique survey will serve as basis for recommending an optimum scheme for displaying acoustic fields on computer graphic systems.

  19. The Effects of Asynchronous Visual Delays on Simulator Flight Performance and the Development of Simulator Sickness Symptomatology

    DTIC Science & Technology

    1986-12-26

    NAVAL TRAINING SYSTEMS CENTER, ORLANDO, FLORIDA. The Effects of Asynchronous Visual Delays on Simulator Flight Performance and the Development of Simulator Sickness Symptomatology. K. C. Uliano, E. Y. Lambert, R. S. Kennedy.

  20. A workflow for the 3D visualization of meteorological data

    NASA Astrophysics Data System (ADS)

    Helbig, Carolin; Rink, Karsten

    2014-05-01

    In the future, climate change will strongly influence our environment and living conditions. To predict possible changes, climate models that include basic and process conditions have been developed, and large data sets are produced as a result of the simulations. The combination of various variables of climate models with spatial data from different sources helps to identify correlations and to study key processes. For our case study we use results of the Weather Research and Forecasting (WRF) model of two regions at different scales that include various landscapes in Northern Central Europe and Baden-Württemberg. We visualize these simulation results in combination with observation data and geographic data, such as river networks, to evaluate processes and analyze whether the model represents the atmospheric system sufficiently. For this purpose, a continuous workflow that leads from the integration of heterogeneous raw data to visualization using open source software (e.g. OpenGeoSys Data Explorer, ParaView) is developed. These visualizations can be displayed on a desktop computer or in an interactive virtual reality environment. We established a concept that includes recommended 3D representations and a color scheme for the variables of the data based on existing guidelines and established traditions in the specific domain. To examine changes over time in observation and simulation data, we added the temporal dimension to the visualization. In a first step of the analysis, the visualizations are used to get an overview of the data and detect areas of interest such as regions of convection or wind turbulence. Then, subsets of data sets are extracted and the included variables can be examined in detail. An evaluation by experts from the domains of visualization and atmospheric sciences establishes whether the visualizations are self-explanatory and clearly arranged. These easy-to-understand visualizations of complex data sets are the basis for scientific communication. 
In addition, they have become an essential medium for the evaluation and verification of models. Particularly in interdisciplinary research projects, they support the scientists in discussions and help to set a general level of knowledge.

  1. Is current eye-care-policy focus almost exclusively on cataract adequate to deal with blindness in India?

    PubMed

    Dandona, L; Dandona, R; Naduvilath, T J; McCarty, C A; Nanda, A; Srinivas, M; Mandal, P; Rao, G N

    1998-05-02

    India's National Programme for Control of Blindness focuses almost exclusively on cataract, based on a national survey done in the 1980s which reported that cataract caused 80% of the blindness in India. No current population-based data on the causes of blindness in India are available. We assessed the rate and causes of blindness in an urban population in southern India. We selected 2954 participants by stratified, random, cluster, systematic sampling from Hyderabad city. Eligible participants were interviewed and given a detailed ocular assessment, including visual acuity, refraction, slitlamp biomicroscopy, applanation intraocular pressure, gonioscopy, dilatation, grading of cataract, stereoscopic fundus assessment, and automated-threshold visual fields. 2522 participants, including 1399 aged 30 years or more, were assessed. 49 participants (all aged ≥30 years) were blind (presenting distance visual acuity <6/60 or central visual field <20° in the better eye). The rate of blindness among those aged 30 years or more, adjusted for age and sex, was 3.08% (95% CI 1.95-4.21). Causes included cataract (29.7%), retinal disease (17.1%), corneal disease (15.4%), refractive error (12.5%), glaucoma (12.1%), and optic atrophy (11.0%). 15.7% of the blindness caused by visual-field constriction would have been missed without visual-field examination. Also, without visual-field and detailed dilated-fundus assessments, blindness attributed to cataract would have been overestimated by up to 75.8%. If the use of cataract surgery in this urban population was half that found in this study, which simulates the situation in rural India, cataract would have caused 51.8% (39.4-64.2) of blindness, significantly less than the 80% accepted by current policy. Much of the blindness in this Indian population was due to non-cataract causes. 
The previous national survey did not include detailed dilated-fundus assessment and visual-field examination which could have led to overestimation of cataract as a cause of blindness in India. Policy-makers in India should encourage well-designed population-based epidemiological studies from which to develop a comprehensive long-term policy on blindness in addition to dealing with cataract.

  2. Comparison of visual acuity of the patients on the first day after sub-Bowman keratomileusis or laser in situ keratomileusis

    PubMed Central

    Zhao, Wei; Wu, Ting; Dong, Ze-Hong; Feng, Jie; Ren, Yu-Feng; Wang, Yu-Sheng

    2016-01-01

    AIM To compare recovery of the visual acuity in patients one day after sub-Bowman keratomileusis (SBK) or laser in situ keratomileusis (LASIK). METHODS Data from 5923 eyes in 2968 patients that received LASIK (2755 eyes) or SBK (3168 eyes) were retrospectively analyzed. The eyes were divided into 4 groups according to preoperative spherical equivalent: -12.00 to -9.00 D, extremely high myopia (n=396, including 192 and 204 in the SBK and LASIK groups, respectively); -9.00 to -6.00 D, high myopia (n=1822, including 991 and 831 in the SBK and LASIK groups, respectively); -6.00 to -3.00 D, moderate myopia (n=3071, including 1658 and 1413 in the SBK and LASIK groups, respectively); and -3.00 to 0.00 D, low myopia (n=634, including 327 and 307 in the SBK and LASIK groups, respectively). Uncorrected logMAR visual acuity values of patients were assessed under standard natural light. Analysis of variance was used for comparisons among different groups. RESULTS Uncorrected visual acuity values were 0.0115±0.1051 and 0.0466±0.1477 at day 1 after operation for patients receiving SBK and LASIK, respectively (P<0.01); visual acuity values of 0.1854±0.1842, 0.0615±0.1326, -0.0033±0.0978, and -0.0164±0.0972 were obtained for patients in the extremely high, high, moderate, and low myopia groups, respectively (P<0.01). In addition, significant differences in visual acuity at day 1 after operation were found between patients receiving SBK and LASIK in each myopia subgroup. CONCLUSION Compared with LASIK, SBK is safer and more effective, with faster recovery. Therefore, SBK is more likely to be accepted by patients than LASIK for better uncorrected visual acuity the day following operation. PMID:27158619

  3. Developing Visual Thinking in the Electronic Health Record.

    PubMed

    Boyd, Andrew D; Young, Christine D; Amatayakul, Margret; Dieter, Michael G; Pawola, Lawrence M

    2017-01-01

    The purpose of this vision paper is to identify how data visualization could transform healthcare. Electronic Health Records (EHRs) are maturing with new technology and tools being applied. Researchers are reaping the benefits of data visualization to better access compilations of EHR data for enhanced clinical research. Data visualization, while still primarily the domain of clinical researchers, is beginning to show promise for other stakeholders. A non-exhaustive review of the literature indicates that, relative to the growth and development of the EHR, the maturity of data visualization in healthcare is in its infancy. Visual analytics has been only cursorily applied to healthcare. A fundamental issue contributing to fragmentation and poor coordination of healthcare delivery is that each member of the healthcare team, including patients, has a different view. Summarizing all of this care comprehensively for any member of the healthcare team is a "wickedly hard" visual analytics and data visualization problem to solve.

  4. An evaluation-guided approach for effective data visualization on tablets

    NASA Astrophysics Data System (ADS)

    Games, Peter S.; Joshi, Alark

    2015-01-01

    There is a rising trend of data analysis and visualization tasks being performed on a tablet device. Apps with interactive data visualization capabilities are available for a wide variety of domains. We investigate whether users grasp how to effectively interpret and interact with visualizations. We conducted a detailed user evaluation to study the abilities of individuals with respect to analyzing data on a tablet through an interactive visualization app. Based upon the results of the user evaluation, we find that most subjects performed well at understanding and interacting with simple visualizations, specifically tables and line charts. A majority of the subjects struggled with identifying interactive widgets, recognizing interactive widgets with overloaded functionality, and understanding visualizations which do not display data for sorted attributes. Based on our study, we identify guidelines for designers and developers of mobile data visualization apps that include recommendations for effective data representation and interaction.

  5. Moderate perinatal thyroid hormone insufficiency alters visual system function in adult rats.

    PubMed

    Boyes, William K; Degn, Laura; George, Barbara Jane; Gilbert, Mary E

    2018-04-21

    Thyroid hormone (TH) is critical for many aspects of neurodevelopment and can be disrupted by a variety of environmental contaminants. Sensory systems, including audition and vision, are vulnerable to TH insufficiencies, but little data are available on visual system development at less than severe levels of TH deprivation. The goal of the current experiments was to explore dose-response relations between graded levels of TH insufficiency during development and the visual function of adult offspring. Pregnant Long Evans rats received 0 or 3 ppm (Experiment 1), or 0, 1, 2, or 3 ppm (Experiment 2) of propylthiouracil (PTU), an inhibitor of thyroid hormone synthesis, in drinking water from gestation day (GD) 6 to postnatal day (PN) 21. Treatment with PTU caused dose-related reductions of serum T4, with recovery on termination of exposure, and euthyroidism by the time of visual function testing. Tests of retinal (electroretinograms; ERGs) and visual cortex (visual evoked potentials; VEPs) function were assessed in adult offspring. Dark-adapted ERG a-waves, reflecting rod photoreceptors, were increased in amplitude by PTU. Light-adapted green flicker ERGs, reflecting M-cone photoreceptors, were reduced by PTU exposure. UV-flicker ERGs, reflecting S-cones, were not altered. Pattern-elicited VEPs were significantly reduced by 2 and 3 ppm PTU across a range of stimulus contrast values. The slope of VEP amplitude-log contrast functions was reduced by PTU, suggesting impaired visual contrast gain. Visual contrast gain primarily reflects function of visual cortex and is responsible for adjusting the sensitivity of perceptual mechanisms in response to changing visual scenes. The results indicate that moderate levels of pre- and postnatal TH insufficiency led to alterations in visual function of adult rats, including both retinal and visual cortex sites of dysfunction. Copyright © 2018. Published by Elsevier B.V.
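
    Contrast gain in this study is estimated as the slope of VEP amplitude against log stimulus contrast. A minimal sketch of that slope estimate via ordinary least squares, using synthetic amplitudes rather than the paper's measurements:

```python
# Estimate visual contrast gain as the slope of a least-squares fit of
# VEP amplitude to log10(stimulus contrast). Amplitudes below are
# synthetic illustrative values, not data from the study.
import math

def slope_vs_log_contrast(contrasts, amplitudes):
    """OLS slope of amplitude regressed on log10(contrast)."""
    x = [math.log10(c) for c in contrasts]
    n = len(x)
    mx, my = sum(x) / n, sum(amplitudes) / n
    return (sum((xi - mx) * (yi - my) for xi, yi in zip(x, amplitudes))
            / sum((xi - mx) ** 2 for xi in x))

contrast = [0.04, 0.08, 0.16, 0.32, 0.64]  # stimulus contrast levels
control  = [2.0, 4.0, 6.0, 8.0, 10.0]      # amplitude, microvolts (synthetic)
treated  = [1.5, 2.5, 3.5, 4.5, 5.5]       # shallower slope = reduced gain

print(slope_vs_log_contrast(contrast, control))
print(slope_vs_log_contrast(contrast, treated))
```

A shallower fitted slope for the treated group corresponds to the reduced contrast gain the abstract reports.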

  6. Planning, Implementation and Optimization of Future Space Missions using an Immersive Visualization Environment (IVE) Machine

    NASA Astrophysics Data System (ADS)

    Harris, E.

    Planning, Implementation and Optimization of Future Space Missions using an Immersive Visualization Environment (IVE) Machine. E. N. Harris, Lockheed Martin Space Systems, Denver, CO, and George W. Morgenthaler, U. of Colorado at Boulder. History: A team of 3-D engineering visualization experts at the Lockheed Martin Space Systems Company has developed innovative virtual prototyping simulation solutions for ground processing and real-time visualization of design and planning of aerospace missions over the past 6 years. At the University of Colorado, a team of 3-D visualization experts is developing the science of 3-D visualization and immersive visualization at the newly founded BP Center for Visualization, which began operations in October 2001. (See IAF/IAA-01-13.2.09, "The Use of 3-D Immersive Visualization Environments (IVEs) to Plan Space Missions," G. A. Dorn and G. W. Morgenthaler.) Progressing from Today's 3-D Engineering Simulations to Tomorrow's 3-D IVE Mission Planning, Simulation and Optimization Techniques: 3-D IVEs and visualization simulation tools can be combined for efficient planning and design engineering of future aerospace exploration and commercial missions. This technology is currently being developed and will be demonstrated by Lockheed Martin in the IVE at the BP Center using virtual simulation for clearance checks, collision detection, ergonomics and reachability analyses to develop fabrication and processing flows for spacecraft and launch vehicle ground support operations and to optimize mission architecture and vehicle design subject to realistic constraints. Demonstrations: Immediate aerospace applications to be demonstrated include developing streamlined processing flows for Reusable Space Transportation Systems and Atlas Launch Vehicle operations and Mars Polar Lander visual work instructions. 
Long-range goals include future international human and robotic space exploration missions such as the development of a Mars Reconnaissance Orbiter and Lunar Base construction scenarios. Innovative solutions utilizing Immersive Visualization provide the key to streamlining the mission planning and optimizing engineering design phases of future aerospace missions.

  7. Combining visual rehabilitative training and noninvasive brain stimulation to enhance visual function in patients with hemianopia: a comparative case study.

    PubMed

    Plow, Ela B; Obretenova, Souzana N; Halko, Mark A; Kenkel, Sigrid; Jackson, Mary Lou; Pascual-Leone, Alvaro; Merabet, Lotfi B

    2011-09-01

    To standardize a protocol for promoting visual rehabilitative outcomes in post-stroke hemianopia by combining occipital cortical transcranial direct current stimulation (tDCS) with Vision Restoration Therapy (VRT). A comparative case study assessing feasibility and safety in a controlled laboratory setting. Two patients, both with right hemianopia after occipital stroke damage. Both patients underwent an identical VRT protocol that lasted 3 months (30 minutes, twice a day, 3 days per week). In patient 1, anodal tDCS was delivered to the occipital cortex during VRT training, whereas in patient 2 sham tDCS with VRT was performed. The primary outcome, visual field border, was defined objectively by using high-resolution perimetry. Secondary outcomes included subjective characterization of visual deficit and functional surveys that assessed performance on activities of daily living. For patient 1, the neural correlates of visual recovery were also investigated by using functional magnetic resonance imaging. Delivery of combined tDCS with VRT was feasible and safe. High-resolution perimetry revealed a greater shift in visual field border for patient 1 versus patient 2. Patient 1 also showed greater recovery of function in activities of daily living. Contrary to expectation, patient 2 perceived greater subjective improvement in visual field despite objective high-resolution perimetry results that indicated otherwise. In patient 1, visual function recovery was associated with functional magnetic resonance imaging activity in surviving peri-lesional and bilateral higher-order visual areas. Results of these preliminary case comparisons suggest that occipital cortical tDCS may enhance recovery of visual function associated with concurrent VRT through visual cortical reorganization. 
Future studies may benefit from incorporating protocol refinements such as those described here, which include global capture of function, control for potential confounds, and investigation of underlying neural substrates of recovery. Copyright © 2011 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.

  8. UGV: security analysis of subsystem control network

    NASA Astrophysics Data System (ADS)

    Abbott-McCune, Sam; Kobezak, Philip; Tront, Joseph; Marchany, Randy; Wicks, Al

    2013-05-01

    Unmanned ground vehicles (UGVs) are becoming prolific in the heterogeneous superset of robotic platforms. The sensors which provide odometry, localization, perception, and vehicle diagnostics are fused to give the robotic platform a sense of the environment it is traversing. The automotive CAN bus has dominated the industry due to its fault tolerance and a message structure that allows high-priority messages to reach the desired node in a real-time environment. UGVs are being researched and produced at an accelerated rate to perform the arduous, repetitive, and dangerous missions that are associated with military action in a protracted conflict. The technology and applications of this research will inevitably be turned into dual-use platforms to aid civil agencies in the performance of their various operations. Our motivation is security of the holistic system; however, as subsystems are outsourced in the design, the overall security of the system may be diminished. We will focus on the CAN bus topology and the vulnerabilities introduced in UGVs, as well as recognizable security vulnerabilities that are inherent in the communications architecture. We will show how data can be extracted from an add-on CAN bus that can be customized to monitor subsystems. The information can be altered or spoofed to force the vehicle to exhibit unwanted actions or render the UGV unusable for the designed mission. The military relies heavily on technology to maintain information dominance, and the security of the information introduced onto the network by UGVs must be safeguarded from vulnerabilities that can be exploited.
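The priority behavior described above follows from CAN's bitwise arbitration: dominant (0) bits override recessive (1) bits during the identifier field, so the frame with the numerically lowest arbitration ID wins the bus. A minimal Python sketch of that rule (the frame IDs and payloads here are hypothetical, not taken from the paper):

```python
def arbitrate(pending_frames):
    """CAN bitwise arbitration: among frames contending for the bus at the
    same instant, the one with the numerically lowest arbitration ID wins,
    because each of its dominant (0) identifier bits overrides the
    recessive (1) bits of the competing frames."""
    return min(pending_frames, key=lambda frame: frame["id"])

# Hypothetical frames contending for the bus at the same instant.
pending = [
    {"id": 0x244, "data": b"\x10\x27"},  # e.g. an odometry broadcast
    {"id": 0x0A0, "data": b"\x01"},      # e.g. a high-priority control message
]

winner = arbitrate(pending)
print(f"frame 0x{winner['id']:03X} transmits first")
```

This same mechanism is why an attacker with bus access can preempt or starve legitimate traffic simply by injecting frames with low IDs, one of the spoofing risks the paper raises.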

  9. On Gait Analysis Estimation Errors Using Force Sensors on a Smart Rollator

    PubMed Central

    Ballesteros, Joaquin; Urdiales, Cristina; Martinez, Antonio B.; van Dieën, Jaap H.

    2016-01-01

    Gait analysis can provide valuable information on a person's condition and rehabilitation progress. Gait is typically captured using external equipment and/or wearable sensors. These tests are largely constrained to specific controlled environments. In addition, gait analysis often requires experts for calibration, operation and/or to place sensors on volunteers. Alternatively, mobility support devices like rollators can be equipped with onboard sensors to monitor gait parameters while users perform their Activities of Daily Living. Gait analysis in rollators may use odometry and force sensors in the handlebars. However, force-based estimation of gait parameters is less accurate than traditional methods, especially when rollators are not properly used. This paper presents an evaluation of force-based gait analysis using a smart rollator on different groups of users to determine when this methodology is applicable. In a second stage, the rollator is used in combination with two lab-based gait analysis systems to assess the rollator estimation error. Our results show that: (i) there is an inverse relation between the variance in the force difference between handlebars and support on the handlebars (related to the user condition) and the estimation error; and (ii) this error is lower than 10% when the variation in the force difference is above 7 N. This lower limit was exceeded by 95.83% of our challenged volunteers. In conclusion, rollators are useful for gait characterization as long as users really need the device for ambulation. PMID:27834911

  10. On Gait Analysis Estimation Errors Using Force Sensors on a Smart Rollator.

    PubMed

    Ballesteros, Joaquin; Urdiales, Cristina; Martinez, Antonio B; van Dieën, Jaap H

    2016-11-10

    Gait analysis can provide valuable information on a person's condition and rehabilitation progress. Gait is typically captured using external equipment and/or wearable sensors. These tests are largely constrained to specific controlled environments. In addition, gait analysis often requires experts for calibration, operation and/or to place sensors on volunteers. Alternatively, mobility support devices like rollators can be equipped with onboard sensors to monitor gait parameters while users perform their Activities of Daily Living. Gait analysis in rollators may use odometry and force sensors in the handlebars. However, force-based estimation of gait parameters is less accurate than traditional methods, especially when rollators are not properly used. This paper presents an evaluation of force-based gait analysis using a smart rollator on different groups of users to determine when this methodology is applicable. In a second stage, the rollator is used in combination with two lab-based gait analysis systems to assess the rollator estimation error. Our results show that: (i) there is an inverse relation between the variance in the force difference between handlebars and support on the handlebars (related to the user condition) and the estimation error; and (ii) this error is lower than 10% when the variation in the force difference is above 7 N. This lower limit was exceeded by 95.83% of our challenged volunteers. In conclusion, rollators are useful for gait characterization as long as users really need the device for ambulation.
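The applicability criterion in findings (i)-(ii) above reduces to a simple check. The abstract does not say whether the 7 N threshold applies to the variance or to the standard deviation of the force-difference signal; the sketch below assumes a standard deviation in newtons, and the force samples are hypothetical:

```python
from statistics import pstdev

def gait_estimation_applicable(left_n, right_n, threshold_n=7.0):
    """Force-based gait estimation is reported to stay under ~10% error
    only when the spread of the left-right handlebar force difference
    exceeds roughly 7 N; below that, the user is likely not loading the
    rollator enough for the force signal to reflect gait events."""
    diffs = [l - r for l, r in zip(left_n, right_n)]
    return pstdev(diffs) > threshold_n

# Hypothetical handlebar force samples (newtons) over a few gait events.
leaning_user = gait_estimation_applicable([52, 30, 61, 28], [18, 40, 15, 44])
light_touch = gait_estimation_applicable([12, 13, 12, 13], [11, 12, 11, 12])
print(leaning_user, light_touch)
```

A user who genuinely leans on the device produces large alternating left-right force swings and passes the check; a user who only touches the handlebars lightly does not, which matches the paper's conclusion that the method suits users who really need the rollator.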

  11. Odometry and Low-Cost Sensor Fusion in Tmm Dataset

    NASA Astrophysics Data System (ADS)

    Manzino, A. M.; Taglioretti, C.

    2016-03-01

    The aim of this study is to identify the most powerful motion model and filtering technique to represent an urban terrestrial mobile mapping (TMM) survey and ultimately to obtain the best representation of the car trajectory. The authors want to test how far a motion model and a more or less refined filtering technique can bring benefits in the determination of the car trajectory. To acquire the data needed for the application of the motion models and the filtering techniques described in the article, the authors carried out a TMM survey in the urban centre of Turin by equipping a vehicle with various instruments: a low-cost action-cam able to record the GPS trace of the vehicle even in the presence of obstructions, an inertial measurement system and an odometer. The results of the analysis indicate that the Unscented Kalman Filter (UKF) technique provides good results in the determination of the vehicle trajectory, especially if the motion model considers more states (such as the positions, the tangential velocity, the angular velocity, the heading and the acceleration). The authors also compared the results obtained with motion models characterized by four, five and six states. A natural corollary to this work would be the introduction into the UKF of the photogrammetric information obtained by the same camera placed on board the vehicle. These data would make it possible to establish how photogrammetric measurements can improve the quality of TMM solutions, especially in the absence of GPS signals (as in urban canyons).
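As an illustration of what a five-state motion model of the kind compared in the article looks like, here is the prediction step of a generic constant turn rate and velocity (CTRV) model. This is a textbook formulation, not the authors' implementation, and the state ordering is an assumption:

```python
import math

def ctrv_predict(state, dt):
    """Propagate a five-state CTRV model one step: position (x, y),
    heading yaw, tangential speed v, and angular velocity w.  Near-zero
    turn rates fall back to the straight-line limit to avoid dividing
    by w."""
    x, y, yaw, v, w = state
    if abs(w) > 1e-9:
        x += (v / w) * (math.sin(yaw + w * dt) - math.sin(yaw))
        y += (v / w) * (math.cos(yaw) - math.cos(yaw + w * dt))
    else:
        x += v * dt * math.cos(yaw)
        y += v * dt * math.sin(yaw)
    return (x, y, yaw + w * dt, v, w)

# Driving straight east at 1 m/s for one second.
print(ctrv_predict((0.0, 0.0, 0.0, 1.0, 0.0), 1.0))
```

In a UKF, a transition function like this is applied to each sigma point during the prediction step; a six-state variant, as mentioned in the article, would additionally carry the acceleration.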

  12. Correlation of visual in vitro cytotoxicity ratings of biomaterials with quantitative in vitro cell viability measurements.

    PubMed

    Bhatia, Sujata K; Yetter, Ann B

    2008-08-01

    Medical devices and implanted biomaterials are often assessed for biological reactivity using visual scores of cell-material interactions. In such testing, biomaterials are assigned cytotoxicity ratings based on visual evidence of morphological cellular changes, including cell lysis, rounding, spreading, and proliferation. For example, ISO 10993 cytotoxicity testing of medical devices allows the use of a visual grading scale. The present study compared visual in vitro cytotoxicity ratings to quantitative in vitro cytotoxicity measurements for biomaterials to determine the level of correlation between visual scoring and a quantitative cell viability assay. Biomaterials representing a spectrum of biological reactivity levels were evaluated, including organo-tin polyvinylchloride (PVC; a known cytotoxic material), ultra-high molecular weight polyethylene (a known non-cytotoxic material), and implantable tissue adhesives. Each material was incubated in direct contact with mouse 3T3 fibroblast cell cultures for 24 h. Visual scores were assigned to the materials using a 5-point rating scale; the scorer was blinded to the material identities. Quantitative measurements of cell viability were performed using a 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) colorimetric assay; again, the assay operator was blinded to material identities. The investigation revealed a high degree of correlation between visual cytotoxicity ratings and quantitative cell viability measurements; a Pearson's correlation gave a correlation coefficient of 0.90 between the visual cytotoxicity score and the percent viable cells. An equation relating the visual cytotoxicity score and the percent viable cells was derived. The results of this study are significant for the design and interpretation of in vitro cytotoxicity studies of novel biomaterials.
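The reported coefficient of 0.90 is a standard Pearson product-moment correlation, which can be computed directly from paired (visual score, percent viable cells) data. The pairs below are hypothetical, not the study's measurements; note that when a higher score means more cytotoxicity, the raw correlation with viability comes out negative, and the 0.90 presumably describes its magnitude:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical pairs: higher visual cytotoxicity score, lower viability.
scores = [0, 1, 2, 3, 4]
viability = [98.0, 85.0, 60.0, 31.0, 8.0]
print(round(pearson_r(scores, viability), 3))
```

A strong |r| like this is what justifies substituting the quick visual grading scale for the slower MTT assay in screening contexts.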

  13. Prevalence and Causes of Visual Loss Among the Indigenous Peoples of the World: A Systematic Review.

    PubMed

    Foreman, Joshua; Keel, Stuart; van Wijngaarden, Peter; Bourne, Rupert A; Wormald, Richard; Crowston, Jonathan; Taylor, Hugh R; Dirani, Mohamed

    2018-05-01

    Studies have documented a higher disease burden in indigenous compared with nonindigenous populations, but no global data on the epidemiology of visual loss in indigenous peoples are available. A systematic review of literature on visual loss in the world's indigenous populations could identify major gaps and inform interventions to reduce their burden of visual loss. To conduct a systematic review on the prevalence and causes of visual loss among the world's indigenous populations. A search of databases and alternative sources identified literature on the prevalence and causes of visual loss (visual impairment and blindness) and eye diseases in indigenous populations. Studies from January 1, 1990, through August 1, 2017, that included clinical eye examinations of indigenous participants and, where possible, compared findings with those of nonindigenous populations were included. Methodologic quality of studies was evaluated to reveal gaps in the literature. Limited data were available worldwide. A total of 85 articles described 64 unique studies from 24 countries that examined 79,598 unique indigenous participants. Nineteen studies reported comparator data on 42,085 nonindigenous individuals. The prevalence of visual loss was reported in 13 countries, with visual impairment ranging from 0.6% in indigenous Australian children to 48.5% in native Tibetans 50 years or older. Uncorrected refractive error was the main cause of visual impairment (21.0%-65.1%) in 5 of 6 studies that measured presenting visual acuity. Cataract was the main cause of visual impairment in all 6 studies measuring best-corrected acuity (25.4%-72.2%). Cataract was the leading cause of blindness in 13 studies (32.0%-79.2%), followed by uncorrected refractive error in 2 studies (33.0% and 35.8%). Most countries with indigenous peoples do not have data on the burden of visual loss in these populations. 
Although existing studies vary in methodologic quality and reliability, they suggest that most visual loss in indigenous populations is avoidable. Improvements in quality and frequency of research into the eye health of indigenous communities appear to be required, and coordinated eye care programs should be implemented to specifically target the indigenous peoples of the world.

  14. Visual Impairment, Including Blindness

    MedlinePlus

    ... Tips for parents Resources of more info Julian’s Story When Julian was almost two years old, he ... as orientation and mobility (O&M); use assistive technologies designed for children with visual impairments; use what ...

  15. Public health nurse perceptions of Omaha System data visualization.

    PubMed

    Lee, Seonah; Kim, Era; Monsen, Karen A

    2015-10-01

    Electronic health records (EHRs) provide many benefits related to the storage, deployment, and retrieval of large amounts of patient data. However, EHRs have not fully met the need to reuse data for decision making on follow-up care plans. Visualization offers new ways to present health data, especially in EHRs. Well-designed data visualization allows clinicians to communicate information efficiently and effectively, contributing to improved interpretation of clinical data and better patient care monitoring and decision making. Public health nurse (PHN) perceptions of Omaha System data visualization prototypes for use in EHRs have not been evaluated. To visualize PHN-generated Omaha System data and assess PHN perceptions regarding the visual validity, helpfulness, usefulness, and importance of the visualizations, including interactive functionality. Time-oriented visualization for problems and outcomes and Matrix visualization for problems and interventions were developed using PHN-generated Omaha System data to help PHNs consume data and plan care at the point of care. Eleven PHNs evaluated prototype visualizations. Overall, PHN responses to the visualizations were positive, and feedback for improvement was provided. This study demonstrated the potential for using visualization techniques within EHRs to summarize Omaha System patient data for clinicians. Further research is needed to improve and refine these visualizations and assess the potential to incorporate visualizations within clinical EHRs. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  16. Visual field defects of the contralateral eye of non-arteritic ischemic anterior optic neuropathy: are they related to sleep apnea?

    PubMed

    Aptel, Florent; Aryal-Charles, Nischal; Tamisier, Renaud; Pépin, Jean-Louis; Lesoin, Antoine; Chiquet, Christophe

    2017-06-01

    To evaluate whether obstructive sleep apnea (OSA) is responsible for the visual field defects found in the fellow eyes of patients with non-arteritic ischemic optic neuropathy (NAION). Prospective cross-sectional study. The visual fields of the fellow eyes of NAION subjects with OSA were compared to the visual fields of control OSA patients matched for OSA severity. All patients underwent comprehensive ophthalmological and general examination including Humphrey 24.2 SITA-Standard visual field and polysomnography. Visual field defects were classified according to the Ischemic Optic Neuropathy Decompression Trial (IONDT) classification. From a cohort of 78 consecutive subjects with NAION, 34 unaffected fellow eyes were compared to 34 control eyes of subjects matched for OSA severity (apnea-hypopnea index [AHI] 35.5 ± 11.6 vs 35.4 ± 9.4 events per hour, respectively, p = 0.63). After adjustment for age and body mass index, all visual field parameters were significantly different between the NAION fellow eyes and those of the control OSA groups, including mean deviation (-4.5 ± 3.7 vs -1.3 ± 1.8 dB, respectively, p < 0.05), visual field index (91.6 ± 10 vs 97.4 ± 3.5%, respectively, p = 0.002), pattern standard deviation (3.7 ± 2.3 vs 2.5 ± 2 dB, respectively, p = 0.015), and number of subjects with at least one defect on the IONDT classification (20 vs 10, respectively, p < 0.05). OSA alone does not explain the visual field defects frequently found in the fellow eyes of NAION patients.

  17. Ophthalmological outcome after resection of tumors based on the pineal gland.

    PubMed

    Hart, Michael G; Sarkies, Nicholas J; Santarius, Thomas; Kirollos, Ramez W

    2013-08-01

    Descriptions of visual dysfunction in pineal gland tumors tend to focus on upward gaze palsy alone. The authors aimed to characterize the nature, incidence, and functional significance of ophthalmological dysfunction after resection of tumors based on the pineal gland. Review of a retrospective case series was performed and included consecutive patients who underwent surgery performed by a consultant neurosurgeon between 2002 and 2011. Only tumors specifically based on the pineal gland were included; tumors encroaching on the pineal gland from other regions were excluded. All patients with visual signs and/or symptoms were reviewed by a specialist consultant neuroophthalmologist to accurately characterize the nature of their deficits. Visual disturbance was defined as visual symptoms caused by a disturbance of ocular motility. A total of 20 patients underwent resection of pineal gland tumors. Complete resection was obtained in 85%, and there were no perioperative deaths. Visual disturbance was present in 35% at presentation; of those who had normal ocular motility preoperatively 82% had normal motility postoperatively. In total, 55% of patients had residual visual disturbance postoperatively. Although upward gaze tended to improve, significant functional deficits remained, particularly with regard to complex convergence and accommodation dysfunction. Prisms were used in 25% but were only ever partially effective. Visual outcome was only related to preoperative visual status and tumor volume (multivariate analysis). Long-term visual morbidity after pineal gland tumor resection is common and leads to significant functional impairment. Improvement in deficits rarely occurs spontaneously, and prisms only have limited effectiveness, probably due to the dynamic nature of supranuclear ocular movement coordination.

  18. Visual impairment evaluation in 119 children with congenital Zika syndrome.

    PubMed

    Ventura, Liana O; Ventura, Camila V; Dias, Natália de C; Vilar, Isabelle G; Gois, Adriana L; Arantes, Tiago E; Fernandes, Luciene C; Chiang, Michael F; Miller, Marilyn T; Lawrence, Linda

    2018-06-01

    To assess visual impairment in a large sample of infants with congenital Zika syndrome (CZS) and to compare with a control group using the same assessment protocol. The study group was composed of infants with confirmed diagnosis of CZS. Controls were healthy infants matched for age, sex, and socioeconomic status. All infants underwent comprehensive ophthalmologic evaluation including visual acuity, visual function assessment, and visual developmental milestones. The CZS group included 119 infants; the control group, 85 infants. At examination, the mean age of the CZS group was 8.5 ± 1.2 months (range, 6-13 months); of the controls, 8.4 ± 1.8 months (range, 5-12 months; P = 0.598). Binocular Teller Acuity Card (TAC) testing was abnormal in 107 CZS infants and in 4 controls (89.9% versus 5% [P < 0.001]). In the study group, abnormal monocular TAC results were more frequent in eyes with funduscopic alterations (P = 0.008); however, 104 of 123 structurally normal eyes (84.6%) also presented abnormal TAC results. Binocular contrast sensitivity was reduced in 87 of 107 CZS infants and in 8 of 80 controls (81.3% versus 10% [P < 0.001]). The visual development milestones were less achieved by infants with CZS compared to controls (P < 0.001). Infants with CZS present with severe visual impairment. A protocol for assessment of the ocular findings, visual acuity, and visual developmental milestones tested against age-matched controls is suggested. Copyright © 2018 American Association for Pediatric Ophthalmology and Strabismus. Published by Elsevier Inc. All rights reserved.

  19. Visual acuity and visual skills in Malaysian children with learning disabilities

    PubMed Central

    Muzaliha, Mohd-Nor; Nurhamiza, Buang; Hussein, Adil; Norabibas, Abdul-Rani; Mohd-Hisham-Basrun, Jaafar; Sarimah, Abdullah; Leo, Seo-Wei; Shatriah, Ismail

    2012-01-01

    Background: There is limited data in the literature concerning the visual status and skills in children with learning disabilities, particularly within the Asian population. This study is aimed to determine visual acuity and visual skills in children with learning disabilities in primary schools within the suburban Kota Bharu district in Malaysia. Methods: We examined 1010 children with learning disabilities aged between 8–12 years from 40 primary schools in the Kota Bharu district, Malaysia from January 2009 to March 2010. These children were identified based on their performance in a screening test known as the Early Intervention Class for Reading and Writing Screening Test conducted by the Ministry of Education, Malaysia. Complete ocular examinations and visual skills assessment included near point of convergence, amplitude of accommodation, accommodative facility, convergence break and recovery, divergence break and recovery, and developmental eye movement tests for all subjects. Results: A total of 4.8% of students had visual acuity worse than 6/12 (20/40), 14.0% had convergence insufficiency, 28.3% displayed poor accommodative amplitude, and 26.0% showed signs of accommodative infacility. A total of 12.1% of the students had poor convergence break, 45.7% displayed poor convergence recovery, 37.4% showed poor divergence break, and 66.3% were noted to have poor divergence recovery. The mean horizontal developmental eye movement was significantly prolonged. Conclusion: Although their visual acuity was satisfactory, nearly 30% of the children displayed accommodation problems including convergence insufficiency, poor accommodation, and accommodative infacility. Convergence and divergence recovery are the most affected visual skills in children with learning disabilities in Malaysia. PMID:23055674

  20. Guidance, Counseling, and Support Services for High School Students with Physical Disabilities: Visual, Hearing, Orthopedic, Neuromuscular, Epilepsy, Chronic Health Conditions. Includes State Resource Directory.

    ERIC Educational Resources Information Center

    Foster, June C.; And Others

    Intended for use by high school guidance personnel, the two volumes provide general information and a resource guide on physical disabilities including visual impairment, hearing impairment, orthopedic handicap, neuromuscular handicap, epilepsy, diabetes, and other chronic health conditions. The first section provides an overview of each of the…

  1. Understanding visualization: a formal approach using category theory and semiotics.

    PubMed

    Vickers, Paul; Faith, Joe; Rossiter, Nick

    2013-06-01

    This paper combines the vocabulary of semiotics and category theory to provide a formal analysis of visualization. It shows how familiar processes of visualization fit the semiotic frameworks of both Saussure and Peirce, and extends these structures using the tools of category theory to provide a general framework for understanding visualization in practice, including: relationships between systems, data collected from those systems, renderings of those data in the form of representations, the reading of those representations to create visualizations, and the use of those visualizations to create knowledge and understanding of the system under inspection. The resulting framework is validated by demonstrating how familiar information visualization concepts (such as literalness, sensitivity, redundancy, ambiguity, generalizability, and chart junk) arise naturally from it and can be defined formally and precisely. This paper generalizes previous work on the formal characterization of visualization by, inter alia, Ziemkiewicz and Kosara and allows us to formally distinguish properties of the visualization process that previous work does not.

  2. How scientists develop competence in visual communication

    NASA Astrophysics Data System (ADS)

    Ostergren, Marilyn

    Visuals (maps, charts, diagrams and illustrations) are an important tool for communication in most scientific disciplines, which means that scientists benefit from having strong visual communication skills. This dissertation examines the nature of competence in visual communication and the means by which scientists acquire this competence. This examination takes the form of an extensive multi-disciplinary integrative literature review and a series of interviews with graduate-level science students. The results are presented as a conceptual framework that lays out the components of competence in visual communication, including the communicative goals of science visuals, the characteristics of effective visuals, the skills and knowledge needed to create effective visuals and the learning experiences that promote the acquisition of these forms of skill and knowledge. This conceptual framework can be used to inform pedagogy and thus help graduate students achieve a higher level of competency in this area; it can also be used to identify aspects of acquiring competence in visual communication that need further study.

  3. Widespread correlation patterns of fMRI signal across visual cortex reflect eccentricity organization

    PubMed Central

    Arcaro, Michael J; Honey, Christopher J; Mruczek, Ryan EB; Kastner, Sabine; Hasson, Uri

    2015-01-01

    The human visual system can be divided into more than two dozen distinct areas, each of which contains a topographic map of the visual field. A fundamental question in vision neuroscience is how the visual system integrates information from the environment across different areas. Using neuroimaging, we investigated the spatial pattern of correlated BOLD signal across eight visual areas on data collected during rest conditions and during naturalistic movie viewing. The correlation pattern between areas reflected the underlying receptive field organization with higher correlations between cortical sites containing overlapping representations of visual space. In addition, the correlation pattern reflected the underlying widespread eccentricity organization of visual cortex, in which the highest correlations were observed for cortical sites with iso-eccentricity representations including regions with non-overlapping representations of visual space. This eccentricity-based correlation pattern appears to be part of an intrinsic functional architecture that supports the integration of information across functionally specialized visual areas. DOI: http://dx.doi.org/10.7554/eLife.03952.001 PMID:25695154

  4. A survey on sensor coverage and visual data capturing/processing/transmission in wireless visual sensor networks.

    PubMed

    Yap, Florence G H; Yen, Hong-Hsu

    2014-02-20

    Wireless Visual Sensor Networks (WVSNs) where camera-equipped sensor nodes can capture, process and transmit image/video information have become an important new research area. As compared to the traditional wireless sensor networks (WSNs) that can only transmit scalar information (e.g., temperature), the visual data in WVSNs enable much wider applications, such as visual security surveillance and visual wildlife monitoring. However, as compared to the scalar data in WSNs, visual data is much bigger and more complicated so intelligent schemes are required to capture/process/transmit visual data in limited resources (hardware capability and bandwidth) WVSNs. WVSNs introduce new multi-disciplinary research opportunities of topics that include visual sensor hardware, image and multimedia capture and processing, wireless communication and networking. In this paper, we survey existing research efforts on the visual sensor hardware, visual sensor coverage/deployment, and visual data capture/processing/transmission issues in WVSNs. We conclude that WVSN research is still in an early age and there are still many open issues that have not been fully addressed. More new novel multi-disciplinary, cross-layered, distributed and collaborative solutions should be devised to tackle these challenging issues in WVSNs.

  5. A Survey on Sensor Coverage and Visual Data Capturing/Processing/Transmission in Wireless Visual Sensor Networks

    PubMed Central

    Yap, Florence G. H.; Yen, Hong-Hsu

    2014-01-01

    Wireless Visual Sensor Networks (WVSNs) where camera-equipped sensor nodes can capture, process and transmit image/video information have become an important new research area. As compared to the traditional wireless sensor networks (WSNs) that can only transmit scalar information (e.g., temperature), the visual data in WVSNs enable much wider applications, such as visual security surveillance and visual wildlife monitoring. However, as compared to the scalar data in WSNs, visual data is much bigger and more complicated so intelligent schemes are required to capture/process/transmit visual data in limited resources (hardware capability and bandwidth) WVSNs. WVSNs introduce new multi-disciplinary research opportunities of topics that include visual sensor hardware, image and multimedia capture and processing, wireless communication and networking. In this paper, we survey existing research efforts on the visual sensor hardware, visual sensor coverage/deployment, and visual data capture/processing/transmission issues in WVSNs. We conclude that WVSN research is still in an early age and there are still many open issues that have not been fully addressed. More new novel multi-disciplinary, cross-layered, distributed and collaborative solutions should be devised to tackle these challenging issues in WVSNs. PMID:24561401

  6. Adapting Artworks for People Who Are Blind or Visually Impaired Using Raised Printing

    ERIC Educational Resources Information Center

    Krivec, Tjaša; Muck, Tadeja; Germadnik, Rolanda Fugger; Majnaric, Igor; Golob, Gorazd

    2014-01-01

    Everyone has the right to freely participate in the cultural life of the community (United Nations, 2012). In Europe and around the globe, many efforts have been made in order to include people with visual impairments and blindness into the cultural life. The objects and artifacts exhibited in museums for people with visual impairments are…

  7. Eye-Tracking in the Study of Visual Expertise: Methodology and Approaches in Medicine

    ERIC Educational Resources Information Center

    Fox, Sharon E.; Faulkner-Jones, Beverly E.

    2017-01-01

    Eye-tracking is the measurement of eye motions and point of gaze of a viewer. Advances in this technology have been essential to our understanding of many forms of visual learning, including the development of visual expertise. In recent years, these studies have been extended to the medical professions, where eye-tracking technology has helped us…

  8. Visualizing planetary data by using 3D engines

    NASA Astrophysics Data System (ADS)

    Elgner, S.; Adeli, S.; Gwinner, K.; Preusker, F.; Kersten, E.; Matz, K.-D.; Roatsch, T.; Jaumann, R.; Oberst, J.

    2017-09-01

    We examined 3D gaming engines for their usefulness in visualizing large planetary image data sets. These tools allow us to include recent developments in the field of computer graphics in our scientific visualization systems and to present data products interactively and in higher quality than before. We have started to set up the first applications that will make use of virtual reality (VR) equipment.

  9. How a Visual Language of Abstract Shapes Facilitates Cultural and International Border Crossings

    ERIC Educational Resources Information Center

    Conroy, Arthur Thomas, III

    2016-01-01

    This article describes a visual language comprised of abstract shapes that has been shown to be effective in communicating prior knowledge between and within members of a small team or group. The visual language includes a set of geometric shapes and rules that guide the construction of the abstract diagrams that are the external representation of…

  10. Time Series Data Visualization in World Wide Telescope

    NASA Astrophysics Data System (ADS)

    Fay, J.

    WorldWide Telescope provides a rich set of time series visualizations for both archival and real-time data. WWT consists of desktop tools for interactive, immersive visualization and HTML5 web-based controls that can be utilized in customized web pages. WWT supports a range of display options including full dome, power walls, stereo and virtual reality headsets.

  11. Examination of the Relation between an Assessment of Skills and Performance on Auditory-Visual Conditional Discriminations for Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Kodak, Tiffany; Clements, Andrea; Paden, Amber R.; LeBlanc, Brittany; Mintz, Joslyn; Toussaint, Karen A.

    2015-01-01

    The current investigation evaluated repertoires that may be related to performance on auditory-to-visual conditional discrimination training with 9 students who had been diagnosed with autism spectrum disorder. The skills included in the assessment were matching, imitation, scanning, an auditory discrimination, and a visual discrimination. The…

  12. Foundations of Education, Volume I: History and Theory of Teaching Children and Youths with Visual Impairments. Second Edition.

    ERIC Educational Resources Information Center

    Holbrook, M. Cay, Ed.; Koenig, Alan J., Ed.

    This text, one of two volumes on the instruction of students with visual impairments, focuses on the history and theory of teaching such students. The following chapters are included: (1) "Historical Perspectives" (Phil Hatlen) with emphasis on the last 50 years; (2) "Visual Impairment" (Kathleen M. Huebner) which provides general information…

  13. Factors affecting the visual outcome in hyphema management in Guinness Eye Center Onitsha.

    PubMed

    Onyekwe, L O

    2008-12-01

    This study aims to determine the complications and outcomes of hyphema treatment and to recommend ways of enhancing good visual outcomes. The records of all cases of hyphema seen from 1st January 2001 to 31st December 2005 were reviewed retrospectively. The variables analyzed were the biodata of all the patients, the agents causing hyphema, associated injuries and complications. Visual acuity at presentation, discharge and last visit was analyzed. Seventy-four patients with hyphema were reviewed. The male:female ratio was 3.5:1. Trauma was the predominant cause of hyphema. The common agents of injury included whips (23.2%) and fists (18.8%). The common complications were secondary glaucoma (52.5%), corneal siderosis (30.0%) and rebleeding (10%). Visual outcome was related to time of presentation, complications and treatment. Significant improvement was achieved following treatment. Hyphema is a common complication of eye injuries. It is commonly associated with other eye injuries such as vitreous haemorrhage and cataract. Common complications include secondary glaucoma, corneal siderosis and rebleeding. Visual outcome depends on the time of presentation and the severity and nature of complications. Visual outcome can be improved by early presentation, early detection of complications and appropriate treatment.

  14. Visual function at altitude under night vision assisted conditions.

    PubMed

    Vecchi, Diego; Morgagni, Fabio; Guadagno, Anton G; Lucertini, Marco

    2014-01-01

    Hypoxia, even mild, is known to produce negative effects on visual function, including decreased visual acuity and sensitivity to contrast, mostly in low light. This is of special concern when night vision devices (NVDs) are used during flight because they also provide poor images in terms of resolution and contrast. While wearing NVDs in low light conditions, 16 healthy male aviators were exposed to a simulated altitude of 12,500 ft in a hypobaric chamber. Snellen visual acuity decreased significantly in normal light from 28.5 +/- 4.2/20 (normoxia) to 37.2 +/- 7.4/20 (hypoxia) and, in low light, from 33.8 +/- 6.1/20 (normoxia) to 42.2 +/- 8.4/20 (hypoxia). An association between blood oxygen saturation and visual acuity was found, but it did not reach significance. No changes occurred in terms of sensitivity to contrast. Our data demonstrate that mild hypoxia is capable of affecting visual acuity in the photopic/high mesopic range of NVD-aided vision. This may be due to several reasons, including the sensitivity to hypoxia of photoreceptors and other retinal cells. Contrast sensitivity is possibly preserved under NVD-aided vision due to its dependency on the goggles' gain.

  15. Harnessing the web information ecosystem with wiki-based visualization dashboards.

    PubMed

    McKeon, Matt

    2009-01-01

    We describe the design and deployment of Dashiki, a public website where users may collaboratively build visualization dashboards through a combination of a wiki-like syntax and interactive editors. Our goals are to extend existing research on social data analysis into presentation and organization of data from multiple sources, explore new metaphors for these activities, and participate more fully in the web's information ecology by providing tighter integration with real-time data. To support these goals, our design includes novel and low-barrier mechanisms for editing and layout of dashboard pages and visualizations, connection to data sources, and coordinating interaction between visualizations. In addition to describing these technologies, we provide a preliminary report on the public launch of a prototype based on this design, including a description of the activities of our users derived from observation and interviews.

  16. Effects of awareness interventions on children's attitudes toward peers with a visual impairment.

    PubMed

    Reina, Raul; López, Víctor; Jiménez, Mario; García-Calvo, Tomás; Hutzler, Yeshayahu

    2011-09-01

    The purpose of this study was to explore the effect of two awareness programs (a 6-day vs. a 1-day program) on children's attitudes toward peers with a visual impairment. Three hundred and forty-four Spanish physical education students (164 girls and 180 boys) aged 10-15 years took part in the study. A modified version of the Attitudes Toward Disability Questionnaire (ATDQ) was used, which includes three sub-scales: (i) cognitive perceptions, (ii) emotional perceptions, and (iii) behavioral readiness to interact with children with disabilities. The questionnaire was filled out during the regular physical education class before and immediately after the awareness activity. The 6-day didactical unit included a lecture on visual impairments and a video describing visual impairments and the game of 5-a-side soccer (first lesson), sensitization activities toward visual impairment (second and third lessons), training and competitive 5-a-side soccer tasks using blindfolded goggles (fourth and fifth lessons), and a sport show and chat with soccer players with a visual impairment (sixth lesson). The 1-day awareness unit included only the final session of the didactical activity. Repeated measures analysis of variance revealed significant time effects in the cognitive, emotional, and behavioral subscales. Sex also showed significant effects, with women demonstrating more favorable results than men. A time-by-group intervention effect was demonstrated only in the cognitive sub-scale, and the 6-day didactic intervention was more effective than the 1-day awareness unit.

  17. Illustrative visualization of 3D city models

    NASA Astrophysics Data System (ADS)

    Doellner, Juergen; Buchholz, Henrik; Nienhaus, Marc; Kirsch, Florian

    2005-03-01

    This paper presents an illustrative visualization technique that provides expressive representations of large-scale 3D city models, inspired by the tradition of artistic and cartographic visualizations typically found in bird's-eye view and panoramic maps. We define a collection of city model components and a real-time multi-pass rendering algorithm that achieves comprehensible, abstract 3D city model depictions based on edge enhancement, color-based and shadow-based depth cues, and procedural facade texturing. Illustrative visualization provides an effective visual interface to urban spatial information and associated thematic information complementing visual interfaces based on the Virtual Reality paradigm, offering a huge potential for graphics design. Primary application areas include city and landscape planning, cartoon worlds in computer games, and tourist information systems.

  18. Computer systems and methods for visualizing data

    DOEpatents

    Stolte, Chris; Hanrahan, Patrick

    2010-07-13

    A method for forming a visual plot using a hierarchical structure of a dataset. The dataset comprises a measure and a dimension. The dimension consists of a plurality of levels. The plurality of levels form a dimension hierarchy. The visual plot is constructed based on a specification. A first level from the plurality of levels is represented by a first component of the visual plot. A second level from the plurality of levels is represented by a second component of the visual plot. The dataset is queried to retrieve data in accordance with the specification. The data includes all or a portion of the dimension and all or a portion of the measure. The visual plot is populated with the retrieved data in accordance with the specification.
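    As an illustration of the method the patent describes, the sketch below builds a nested plot structure from a two-level dimension hierarchy and a measure, driven by a specification. The field names, data values, and specification format are invented for illustration and are not taken from the patent.

    ```python
    from collections import defaultdict

    # Hypothetical dataset: dimension hierarchy year -> quarter, measure = sales
    rows = [
        {"year": 2009, "quarter": "Q1", "sales": 10},
        {"year": 2009, "quarter": "Q2", "sales": 12},
        {"year": 2010, "quarter": "Q1", "sales": 14},
        {"year": 2010, "quarter": "Q2", "sales": 9},
    ]

    def build_plot(rows, spec):
        """Query the dataset per the specification: the first hierarchy
        level becomes the outer plot component, the second level the
        inner component, and the measure populates the plot."""
        outer, inner, measure = spec["level1"], spec["level2"], spec["measure"]
        plot = defaultdict(dict)
        for r in rows:
            plot[r[outer]][r[inner]] = r[measure]
        return dict(plot)

    spec = {"level1": "year", "level2": "quarter", "measure": "sales"}
    print(build_plot(rows, spec))
    # {2009: {'Q1': 10, 'Q2': 12}, 2010: {'Q1': 14, 'Q2': 9}}
    ```

    Each level of the dimension hierarchy maps to one visual component, which is the core of the claimed method.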

  19. Computer systems and methods for visualizing data

    DOEpatents

    Stolte, Chris; Hanrahan, Patrick

    2013-01-29

    A method for forming a visual plot using a hierarchical structure of a dataset. The dataset comprises a measure and a dimension. The dimension consists of a plurality of levels. The plurality of levels form a dimension hierarchy. The visual plot is constructed based on a specification. A first level from the plurality of levels is represented by a first component of the visual plot. A second level from the plurality of levels is represented by a second component of the visual plot. The dataset is queried to retrieve data in accordance with the specification. The data includes all or a portion of the dimension and all or a portion of the measure. The visual plot is populated with the retrieved data in accordance with the specification.

  20. Visual search in a forced-choice paradigm

    NASA Technical Reports Server (NTRS)

    Holmgren, J. E.

    1974-01-01

    The processing of visual information was investigated in the context of two visual search tasks. The first was a forced-choice task in which one of two alternative letters appeared in a visual display of from one to five letters. The second task included trials on which neither of the two alternatives was present in the display. Search rates were estimated from the slopes of best linear fits to response latencies plotted as a function of the number of items in the visual display. These rates were found to be much slower than those estimated in yes-no search tasks. This result was interpreted as indicating that the processes underlying visual search in yes-no and forced-choice tasks are not the same.
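    The slope-based estimate of search rate described in the abstract can be sketched in a few lines. The latency values below are invented for illustration and are not taken from the study.

    ```python
    def fit_line(xs, ys):
        """Ordinary least-squares slope and intercept for y ~ slope*x + base."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        return slope, my - slope * mx

    # Hypothetical response latencies (ms) for displays of 1..5 letters
    sizes = [1, 2, 3, 4, 5]
    latencies = [450, 520, 590, 660, 730]

    rate, base = fit_line(sizes, latencies)  # rate: ms per display item
    print(f"search rate = {rate:.1f} ms/item, base latency = {base:.1f} ms")
    ```

    A larger slope (more milliseconds per added item) corresponds to a slower search rate, which is what the forced-choice condition showed relative to yes-no search.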

  1. Do Bedside Visual Tools Improve Patient and Caregiver Satisfaction? A Systematic Review of the Literature.

    PubMed

    Goyal, Anupama A; Tur, Komalpreet; Mann, Jason; Townsend, Whitney; Flanders, Scott A; Chopra, Vineet

    2017-11-01

    Although common, the impact of low-cost bedside visual tools, such as whiteboards, on patient care is unclear. To systematically review the literature and assess the influence of bedside visual tools on patient satisfaction. Medline, Embase, SCOPUS, Web of Science, CINAHL, and CENTRAL. Studies of adult or pediatric hospitalized patients reporting physician identification, understanding of provider roles, patient-provider communication, and satisfaction with care from the use of visual tools were included. Outcomes were categorized as positive, negative, or neutral based on survey responses for identification, communication, and satisfaction. Two reviewers screened studies, extracted data, and assessed the risk of study bias. Sixteen studies met the inclusion criteria. Visual tools included whiteboards (n = 4), physician pictures (n = 7), whiteboard and picture (n = 1), electronic medical record-based patient portals (n = 3), and formatted notepads (n = 1). Tools improved patients' identification of providers (13/13 studies). The impact on understanding the providers' roles was largely positive (8/10 studies). Visual tools improved patient-provider communication (4/5 studies) and satisfaction (6/8 studies). In adults, satisfaction varied between positive with the use of whiteboards (2/5 studies) and neutral with pictures (1/5 studies). Satisfaction related to pictures in pediatric patients was either positive (1/3 studies) or neutral (1/3 studies). Differences in tool format (individual pictures vs handouts with pictures of all providers) and study design (randomized vs cohort) may explain variable outcomes. The use of bedside visual tools appears to improve patient recognition of providers and patient-provider communication. Future studies that include better design and outcome assessment are necessary before widespread use can be recommended. © 2017 Society of Hospital Medicine

  2. Auditory and visual health after ten years of exposure to metal-on-metal hip prostheses: a cross-sectional study follow up.

    PubMed

    Prentice, Jennifer R; Blackwell, Christopher S; Raoof, Naz; Bacon, Paul; Ray, Jaydip; Hickman, Simon J; Wilkinson, J Mark

    2014-01-01

    Case reports of patients with mal-functioning metal-on-metal hip replacement (MoMHR) prostheses suggest an association of elevated circulating metal levels with visual and auditory dysfunction. However, it is unknown if this is a cumulative exposure effect and the impact of prolonged low level exposure, relevant to the majority of patients with a well-functioning prosthesis, has not been studied. Twenty four male patients with a well-functioning MoMHR and an age and time since surgery matched group of 24 male patients with conventional total hip arthroplasty (THA) underwent clinical and electrophysiological assessment of their visual and auditory health at a mean of ten years after surgery. Median circulating cobalt and chromium concentrations were higher in patients after MoMHR versus those with THA (P<0.0001), but were within the Medicines and Healthcare Products Regulatory Agency (UK) investigation threshold. Subjective auditory tests including pure tone audiometric and speech discrimination findings were similar between groups (P>0.05). Objective assessments, including amplitude and signal-to-noise ratio of transient evoked and distortion product oto-acoustic emissions (TEOAE and DPOAE, respectively), were similar for all the frequencies tested (P>0.05). Auditory brainstem responses (ABR) and cortical evoked response audiometry (ACR) were also similar between groups (P>0.05). Ophthalmological evaluations, including self-reported visual function by visual functioning questionnaire, as well as binocular low contrast visual acuity and colour vision were similar between groups (P>0.05). Retinal nerve fibre layer thickness and macular volume measured by optical coherence tomography were also similar between groups (P>0.05). In the presence of moderately elevated metal levels associated with well-functioning implants, MoMHR exposure does not associate with clinically demonstrable visual or auditory dysfunction.

  3. Enhanced peripheral visual processing in congenitally deaf humans is supported by multiple brain regions, including primary auditory cortex.

    PubMed

    Scott, Gregory D; Karns, Christina M; Dow, Mark W; Stevens, Courtney; Neville, Helen J

    2014-01-01

    Brain reorganization associated with altered sensory experience clarifies the critical role of neuroplasticity in development. An example is enhanced peripheral visual processing associated with congenital deafness, but the neural systems supporting this have not been fully characterized. A gap in our understanding of deafness-enhanced peripheral vision is the contribution of primary auditory cortex. Previous studies of auditory cortex that use anatomical normalization across participants were limited by inter-subject variability of Heschl's gyrus. In addition to reorganized auditory cortex (cross-modal plasticity), a second gap in our understanding is the contribution of altered modality-specific cortices (visual intramodal plasticity in this case), as well as supramodal and multisensory cortices, especially when target detection is required across contrasts. Here we address these gaps by comparing fMRI signal change for peripheral vs. perifoveal visual stimulation (11-15° vs. 2-7°) in congenitally deaf and hearing participants in a blocked experimental design with two analytical approaches: a Heschl's gyrus region of interest analysis and a whole brain analysis. Our results using individually-defined primary auditory cortex (Heschl's gyrus) indicate that fMRI signal change for more peripheral stimuli was greater than perifoveal in deaf but not in hearing participants. Whole-brain analyses revealed differences between deaf and hearing participants for peripheral vs. perifoveal visual processing in extrastriate visual cortex including primary auditory cortex, MT+/V5, superior-temporal auditory, and multisensory and/or supramodal regions, such as posterior parietal cortex (PPC), frontal eye fields, anterior cingulate, and supplementary eye fields. Overall, these data demonstrate the contribution of neuroplasticity in multiple systems including primary auditory cortex, supramodal, and multisensory regions, to altered visual processing in congenitally deaf adults.

  4. Visual Masking in Schizophrenia: Overview and Theoretical Implications

    PubMed Central

    Green, Michael F.; Lee, Junghee; Wynn, Jonathan K.; Mathis, Kristopher I.

    2011-01-01

    Visual masking provides several key advantages for exploring the earliest stages of visual processing in schizophrenia: it allows for control over timing at the millisecond level, there are several well-supported theories of the underlying neurobiology of visual masking, and it is amenable to examination by electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI). In this paper, we provide an overview of the visual masking impairment in schizophrenia, including the relevant theoretical mechanisms for masking impairment. We will discuss its relationship to clinical symptoms, antipsychotic medications, diagnostic specificity, and presence in at-risk populations. As part of this overview, we will cover the neural correlates of visual masking based on recent findings from EEG and fMRI. Finally, we will suggest a possible mechanism that could explain the patterns of masking findings and other visual processing findings in schizophrenia. PMID:21606322

  5. 3DScapeCS: application of three dimensional, parallel, dynamic network visualization in Cytoscape

    PubMed Central

    2013-01-01

    Background The exponential growth of gigantic biological data from various sources, such as protein-protein interaction (PPI), genome sequences scaffolding, Mass spectrometry (MS) molecular networking and metabolic flux, demands an efficient way for better visualization and interpretation beyond the conventional, two-dimensional visualization tools. Results We developed a 3D Cytoscape Client/Server (3DScapeCS) plugin, which adopted Cytoscape in interpreting different types of data, and UbiGraph for three-dimensional visualization. The extra dimension is useful in accommodating, visualizing, and distinguishing large-scale networks with multiple crossed connections in five case studies. Conclusions Evaluation on several experimental data using 3DScapeCS and its special features, including multilevel graph layout, time-course data animation, and parallel visualization has proven its usefulness in visualizing complex data and help to make insightful conclusions. PMID:24225050

  6. 6th Yahya Cohen Lecture: visual experience during cataract surgery.

    PubMed

    Au Eong, K G

    2002-09-01

    The visual sensations many patients experience during cataract surgery under local anaesthesia have received little attention until recently. This paper reviews the recent studies on this phenomenon, discusses its clinical significance and suggests novel approaches to reduce its negative impact on the surgery. Literature review. Many patients who have cataract surgery under retrobulbar, peribulbar or topical anaesthesia experience a variety of visual sensations in their operated eye during surgery. These visual sensations include perception of light, movements, flashes, one or more colours, surgical instruments, the surgeon's hand/fingers, the surgeon and changes in light brightness. Some patients experience transient no light perception, even if the operation is performed under topical anaesthesia. The clinical significance of this phenomenon lies in the fact that approximately 7.1% to 15.4% of patients find their visual experience frightening. This fear and anxiety may cause some patients to become uncooperative during surgery and trigger a sympathetic surge, causing such undesirable effects as hypertension, tachycardia, ischaemic strain on the heart, hyperventilation and acute panic attack. Several approaches to reduce the negative impact of patients' visual experience are suggested, including appropriate preoperative counselling and reducing the ability of patients to see during surgery. The findings that some patients find their intraoperative visual experience distressing have a major impact on the way ophthalmologists manage their cataract patients. To reduce its negative impact, surgeons should consider incorporating appropriate preoperative counselling on potential intraoperative visual experience when obtaining informed consent for surgery.

  7. MONGKIE: an integrated tool for network analysis and visualization for multi-omics data.

    PubMed

    Jang, Yeongjun; Yu, Namhee; Seo, Jihae; Kim, Sun; Lee, Sanghyuk

    2016-03-18

    Network-based integrative analysis is a powerful technique for extracting biological insights from multilayered omics data such as somatic mutations, copy number variations, and gene expression data. However, integrated analysis of multi-omics data is quite complicated and can hardly be done in an automated way. Thus, a powerful interactive visual mining tool supporting diverse analysis algorithms for identification of driver genes and regulatory modules is much needed. Here, we present a software platform that integrates network visualization with omics data analysis tools seamlessly. The visualization unit supports various options for displaying multi-omics data as well as unique network models for describing sophisticated biological networks such as complex biomolecular reactions. In addition, we implemented diverse in-house algorithms for network analysis including network clustering and over-representation analysis. Novel functions include facile definition and optimized visualization of subgroups, comparison of a series of data sets in an identical network by data-to-visual mapping and subsequent overlaying function, and management of custom interaction networks. Utility of MONGKIE for network-based visual data mining of multi-omics data was demonstrated by analysis of the TCGA glioblastoma data. MONGKIE was developed in Java based on the NetBeans plugin architecture, thus being OS-independent with intrinsic support of module extension by third-party developers. We believe that MONGKIE would be a valuable addition to network analysis software by supporting many unique features and visualization options, especially for analysing multi-omics data sets in cancer and other diseases.
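    Over-representation analysis, one of the in-house algorithms mentioned, is commonly formulated as a hypergeometric tail test on the overlap between a selected gene set and a pathway. The sketch below shows that common formulation with toy numbers; it is not necessarily MONGKIE's exact implementation.

    ```python
    from math import comb

    def over_representation_p(N, K, n, k):
        """Hypergeometric tail probability: the chance of drawing at
        least k genes from a K-gene pathway when n genes are selected
        at random from a background of N genes."""
        return sum(
            comb(K, i) * comb(N - K, n - i)
            for i in range(k, min(K, n) + 1)
        ) / comb(N, n)

    # Toy numbers: 20 background genes, 5 in the pathway, 5 selected, 3 overlap
    p = over_representation_p(20, 5, 5, 3)
    print(f"p = {p:.4f}")
    ```

    A small p-value suggests the pathway is over-represented in the selection; in practice the test is repeated per pathway with multiple-testing correction.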

  8. Living in the dark does not mean a blind life: bird and mammal visual communication in dim light.

    PubMed

    Penteriani, Vincenzo; Delgado, María Del Mar

    2017-04-05

    For many years, it was believed that bird and mammal communication 'in the dark of the night' relied exclusively on vocal and chemical signalling. However, in recent decades, several case studies have conveyed the idea that the nocturnal world is rich in visual information. Clearly, a visual signal needs a source of light to work, but diurnal light (twilight included, i.e. any light directly dependent on the sun) is not the only source of luminosity on this planet. Actually, moonlight represents a powerful source of illumination that cannot be neglected from the perspective of visual communication. White patches of feathers and fur on a dark background have the potential to be used to communicate with conspecifics and heterospecifics in dim light across different contexts and for a variety of reasons. Here: (i) we review current knowledge on visual signalling in crepuscular and nocturnal birds and mammals; and (ii) we also present some possible cases of birds and mammals that, due to the characteristics of their feather and fur coloration pattern, might use visual signals in dim light. Visual signalling in nocturnal animals is still an emerging field and, to date, it has received less attention than many other means of communication, including visual communication under daylight. For this reason, many questions remain unanswered and, sometimes, even unasked. This article is part of the themed issue 'Vision in dim light'. © 2017 The Author(s).

  9. Living in the dark does not mean a blind life: bird and mammal visual communication in dim light

    PubMed Central

    2017-01-01

    For many years, it was believed that bird and mammal communication ‘in the dark of the night’ relied exclusively on vocal and chemical signalling. However, in recent decades, several case studies have conveyed the idea that the nocturnal world is rich in visual information. Clearly, a visual signal needs a source of light to work, but diurnal light (twilight included, i.e. any light directly dependent on the sun) is not the only source of luminosity on this planet. Actually, moonlight represents a powerful source of illumination that cannot be neglected from the perspective of visual communication. White patches of feathers and fur on a dark background have the potential to be used to communicate with conspecifics and heterospecifics in dim light across different contexts and for a variety of reasons. Here: (i) we review current knowledge on visual signalling in crepuscular and nocturnal birds and mammals; and (ii) we also present some possible cases of birds and mammals that, due to the characteristics of their feather and fur coloration pattern, might use visual signals in dim light. Visual signalling in nocturnal animals is still an emerging field and, to date, it has received less attention than many other means of communication, including visual communication under daylight. For this reason, many questions remain unanswered and, sometimes, even unasked. This article is part of the themed issue ‘Vision in dim light’. PMID:28193809

  10. A training tool for visual aids. Using tracing techniques to create visual aids.

    PubMed

    Clark, M; Walters, J E; Wileman, R

    1982-01-01

    This training tool explains the use of tracing techniques to create visuals that require few materials and no special training in drawing. Magazines, books, posters, and many other materials contain photographs and drawings which can be used to create visual aids for health training and public health education. The materials required are pencils, an eraser, crayons or colored marking pens, paper clips, tracing and drawing paper, carbon paper, and sources of visual images. The procedure is described. The material was prepared by INTRAH staff members. Other materials include how to evaluate teaching, how to create a family health case study, and training in group dynamics.

  11. SBOL Visual: A Graphical Language for Genetic Designs

    DOE PAGES

    Quinn, Jacqueline Y.; Cox, Robert Sidney; Adler, Aaron; ...

    2015-12-03

    Synthetic Biology Open Language (SBOL) Visual is a graphical standard for genetic engineering. We report that it consists of symbols representing DNA subsequences, including regulatory elements and DNA assembly features. These symbols can be used to draw illustrations for communication and instruction, and as image assets for computer-aided design. SBOL Visual is a community standard, freely available for personal, academic, and commercial use (Creative Commons CC0 license). We provide prototypical symbol images that have been used in scientific publications and software tools. We encourage users to use and modify them freely, and to join the SBOL Visual community: http://www.sbolstandard.org/visual.

  12. SBOL Visual: A Graphical Language for Genetic Designs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, Jacqueline Y.; Cox, Robert Sidney; Adler, Aaron

    Synthetic Biology Open Language (SBOL) Visual is a graphical standard for genetic engineering. We report that it consists of symbols representing DNA subsequences, including regulatory elements and DNA assembly features. These symbols can be used to draw illustrations for communication and instruction, and as image assets for computer-aided design. SBOL Visual is a community standard, freely available for personal, academic, and commercial use (Creative Commons CC0 license). We provide prototypical symbol images that have been used in scientific publications and software tools. We encourage users to use and modify them freely, and to join the SBOL Visual community: http://www.sbolstandard.org/visual.

  13. SBOL Visual: A Graphical Language for Genetic Designs

    PubMed Central

    Adler, Aaron; Beal, Jacob; Bhatia, Swapnil; Cai, Yizhi; Chen, Joanna; Clancy, Kevin; Galdzicki, Michal; Hillson, Nathan J.; Le Novère, Nicolas; Maheshwari, Akshay J.; McLaughlin, James Alastair; Myers, Chris J.; P, Umesh; Pocock, Matthew; Rodriguez, Cesar; Soldatova, Larisa; Stan, Guy-Bart V.; Swainston, Neil; Wipat, Anil; Sauro, Herbert M.

    2015-01-01

    Synthetic Biology Open Language (SBOL) Visual is a graphical standard for genetic engineering. It consists of symbols representing DNA subsequences, including regulatory elements and DNA assembly features. These symbols can be used to draw illustrations for communication and instruction, and as image assets for computer-aided design. SBOL Visual is a community standard, freely available for personal, academic, and commercial use (Creative Commons CC0 license). We provide prototypical symbol images that have been used in scientific publications and software tools. We encourage users to use and modify them freely, and to join the SBOL Visual community: http://www.sbolstandard.org/visual. PMID:26633141

  14. A knowledge based system for scientific data visualization

    NASA Technical Reports Server (NTRS)

    Senay, Hikmet; Ignatius, Eve

    1992-01-01

    A knowledge-based system, called visualization tool assistant (VISTA), which was developed to assist scientists in the design of scientific data visualization techniques, is described. The system derives its knowledge from several sources which provide information about data characteristics, visualization primitives, and effective visual perception. The design methodology employed by the system is based on a sequence of transformations which decomposes a data set into a set of data partitions, maps this set of partitions to visualization primitives, and combines these primitives into a composite visualization technique design. Although the primary function of the system is to generate an effective visualization technique design for a given data set by using principles of visual perception, the system also allows users to interactively modify the design, and renders the resulting image using a variety of rendering algorithms. The current version of the system primarily supports visualization techniques having applicability in earth and space sciences, although it may easily be extended to include other techniques useful in other disciplines such as computational fluid dynamics, finite-element analysis, and medical imaging.

  15. OnSight: Multi-platform Visualization of the Surface of Mars

    NASA Astrophysics Data System (ADS)

    Abercrombie, S. P.; Menzies, A.; Winter, A.; Clausen, M.; Duran, B.; Jorritsma, M.; Goddard, C.; Lidawer, A.

    2017-12-01

    A key challenge of planetary geology is to develop an understanding of an environment that humans cannot (yet) visit. Instead, scientists rely on visualizations created from images sent back by robotic explorers, such as the Curiosity Mars rover. OnSight is a multi-platform visualization tool that helps scientists and engineers to visualize the surface of Mars. Terrain visualization allows scientists to understand the scale and geometric relationships of the environment around the Curiosity rover, both for scientific understanding and for tactical consideration in safely operating the rover. OnSight includes a web-based 2D/3D visualization tool, as well as an immersive mixed reality visualization. In addition, OnSight offers a novel feature for communication among the science team. Using the multiuser feature of OnSight, scientists can meet virtually on Mars, to discuss geology in a shared spatial context. Combining web-based visualization with immersive visualization allows OnSight to leverage strengths of both platforms. This project demonstrates how 3D visualization can be adapted to either an immersive environment or a computer screen, and discusses the advantages and disadvantages of both platforms.

  16. Thinking in Pictures as a cognitive account of autism.

    PubMed

    Kunda, Maithilee; Goel, Ashok K

    2011-09-01

    We analyze the hypothesis that some individuals on the autism spectrum may use visual mental representations and processes to perform certain tasks that typically developing individuals perform verbally. We present a framework for interpreting empirical evidence related to this "Thinking in Pictures" hypothesis and then provide comprehensive reviews of data from several different cognitive tasks, including the n-back task, serial recall, dual task studies, Raven's Progressive Matrices, semantic processing, false belief tasks, visual search, spatial recall, and visual recall. We also discuss the relationships between the Thinking in Pictures hypothesis and other cognitive theories of autism including Mindblindness, Executive Dysfunction, Weak Central Coherence, and Enhanced Perceptual Functioning.

  17. AWE: Aviation Weather Data Visualization

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly; Lodha, Suresh K.

    2001-01-01

    The two official sources for aviation weather reports both require the pilot to mentally visualize the provided information. In contrast, our system, Aviation Weather Environment (AWE), presents aviation-specific weather available to pilots in an easy-to-visualize form. We start with a computer-generated textual briefing for a specific area. We map this briefing onto a grid specific to the pilot's route that includes only information relevant to his flight as defined by route, altitude, true airspeed, and proposed departure time. By modifying various parameters, the pilot can use AWE as a planning tool as well as a weather briefing tool.

  18. Visual acuity and contrast sensitivity are two important factors affecting vision-related quality of life in advanced age-related macular degeneration

    PubMed Central

    Selivanova, Alexandra; Shin, Hyun Joon; Miller, Joan W.; Jackson, Mary Lou

    2018-01-01

    Purpose Vision loss from age-related macular degeneration (AMD) has a profound effect on vision-related quality of life (VRQoL). The purpose of this study is to identify clinical factors associated with VRQoL using the Rasch-calibrated NEI VFQ-25 scales in bilateral advanced AMD patients. Methods We retrospectively reviewed 47 patients (mean age 83.2 years) with bilateral advanced AMD. Clinical assessment included age, gender, type of AMD, high contrast visual acuity (VA), history of medical conditions, contrast sensitivity (CS), central visual field loss, report of Charles Bonnet Syndrome, current treatment for AMD and Rasch-calibrated NEI VFQ-25 visual function and socioemotional function scales. The NEI VFQ visual function scale includes items of general vision, peripheral vision, distance vision and near vision-related activity while the socioemotional function scale includes items of vision-related social functioning, role difficulties, dependency, and mental health. Multiple regression analysis (structural regression model) was performed using fixed item parameters obtained from the one-parameter item response theory model. Results Multivariate analysis showed that high contrast VA and CS were two factors influencing the VRQoL visual function scale (β = -0.25, 95% CI -0.37 to -0.12, p<0.001 and β = 0.35, 95% CI 0.25 to 0.46, p<0.001) and the socioemotional functioning scale (β = -0.2, 95% CI -0.37 to -0.03, p = 0.023, and β = 0.3, 95% CI 0.18 to 0.43, p = 0.001). Central visual field loss was not associated with either the VRQoL visual or socioemotional functioning scale (β = -0.08, 95% CI -0.28 to 0.12, p = 0.44 and β = -0.09, 95% CI -0.03 to 0.16, p = 0.50, respectively). Conclusion In patients with vision impairment secondary to bilateral advanced AMD, high contrast VA and CS are two important factors affecting VRQoL. PMID:29746512

  19. Visual acuity and contrast sensitivity are two important factors affecting vision-related quality of life in advanced age-related macular degeneration.

    PubMed

    Roh, Miin; Selivanova, Alexandra; Shin, Hyun Joon; Miller, Joan W; Jackson, Mary Lou

    2018-01-01

    Vision loss from age-related macular degeneration (AMD) has a profound effect on vision-related quality of life (VRQoL). The purpose of this study is to identify clinical factors associated with VRQoL using the Rasch-calibrated NEI VFQ-25 scales in bilateral advanced AMD patients. We retrospectively reviewed 47 patients (mean age 83.2 years) with bilateral advanced AMD. Clinical assessment included age, gender, type of AMD, high contrast visual acuity (VA), history of medical conditions, contrast sensitivity (CS), central visual field loss, report of Charles Bonnet Syndrome, current treatment for AMD and Rasch-calibrated NEI VFQ-25 visual function and socioemotional function scales. The NEI VFQ visual function scale includes items of general vision, peripheral vision, distance vision and near vision-related activity while the socioemotional function scale includes items of vision-related social functioning, role difficulties, dependency, and mental health. Multiple regression analysis (structural regression model) was performed using fixed item parameters obtained from the one-parameter item response theory model. Multivariate analysis showed that high contrast VA and CS were two factors influencing the VRQoL visual function scale (β = -0.25, 95% CI -0.37 to -0.12, p<0.001 and β = 0.35, 95% CI 0.25 to 0.46, p<0.001) and the socioemotional functioning scale (β = -0.2, 95% CI -0.37 to -0.03, p = 0.023, and β = 0.3, 95% CI 0.18 to 0.43, p = 0.001). Central visual field loss was not associated with either the VRQoL visual or socioemotional functioning scale (β = -0.08, 95% CI -0.28 to 0.12, p = 0.44 and β = -0.09, 95% CI -0.03 to 0.16, p = 0.50, respectively). In patients with vision impairment secondary to bilateral advanced AMD, high contrast VA and CS are two important factors affecting VRQoL.

  20. An infrared/video fusion system for military robotics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, A.W.; Roberts, R.S.

    1997-08-05

    Sensory information is critical to the telerobotic operation of mobile robots. In particular, visual sensors are a key component of the sensor package on a robot engaged in urban military operations. Visual sensors provide the robot operator with a wealth of information including robot navigation and threat assessment. However, simple countermeasures such as darkness, smoke, or blinding by a laser, can easily neutralize visual sensors. In order to provide a robust visual sensing system, an infrared sensor is required to augment the primary visual sensor. An infrared sensor can acquire useful imagery in conditions that incapacitate a visual sensor. A simple approach to incorporating an infrared sensor into the visual sensing system is to display two images to the operator: side-by-side visual and infrared images. However, dual images might overwhelm the operator with information, and result in degraded robot performance. A better solution is to combine the visual and infrared images into a single image that maximizes scene information. Fusing visual and infrared images into a single image demands balancing the mixture of visual and infrared information. Humans are accustomed to viewing and interpreting visual images. They are not accustomed to viewing or interpreting infrared images. Hence, the infrared image must be used to enhance the visual image, not obfuscate it.

  1. Literature review of visual representation of the results of benefit-risk assessments of medicinal products.

    PubMed

    Hallgreen, Christine E; Mt-Isa, Shahrul; Lieftucht, Alfons; Phillips, Lawrence D; Hughes, Diana; Talbot, Susan; Asiimwe, Alex; Downey, Gerald; Genov, Georgy; Hermann, Richard; Noel, Rebecca; Peters, Ruth; Micaleff, Alain; Tzoulaki, Ioanna; Ashby, Deborah

    2016-03-01

    The PROTECT Benefit-Risk group is dedicated to research in methods for continuous benefit-risk monitoring of medicines, including the presentation of the results, with a particular emphasis on graphical methods. A comprehensive review was performed to identify visuals used for medical risk and benefit-risk communication. The identified visual displays were grouped into visual types, and each visual type was appraised based on five criteria: intended audience, intended message, knowledge required to understand the visual, unintentional messages that may be derived from the visual and missing information that may be needed to understand the visual. Sixty-six examples of visual formats were identified from the literature and classified into 14 visual types. We found that there is not one single visual format that is consistently superior to others for the communication of benefit-risk information. In addition, we found that most of the drawbacks found in the visual formats could be considered general to visual communication, although some appear more relevant to specific formats and should be considered when creating visuals for different audiences depending on the exact message to be communicated. We have arrived at recommendations for the use of visual displays for benefit-risk communication. The recommendation refers to the creation of visuals. We outline four criteria to determine audience-visual compatibility and consider these to be a key task in creating any visual. Next we propose specific visual formats of interest, to be explored further for their ability to address nine different types of benefit-risk analysis information. Copyright © 2015 John Wiley & Sons, Ltd.

  2. Mathematics Conceptual Visualization with HyperCard.

    ERIC Educational Resources Information Center

    Haws, LaDawn

    1992-01-01

    Hypermedia provides an easy-to-use option for adding visualization, via the computer, to the classroom. Some examples of this medium are presented, including applications in basic linear algebra and calculus, and a tutorial in electromagnetism. (Author)

  3. A Discussion of Assessment Needs in Manual Communication for Pre-College Students.

    ERIC Educational Resources Information Center

    Cokely, Dennis R.

    The paper reviews issues in evaluating the manual communications skills of pre-college hearing impaired students, including testing of visual discrimination and visual memory, simultaneous communication, and attention span. (CL)

  4. Visual Thinking, Algebraic Thinking, and a Full Unit-Circle Diagram.

    ERIC Educational Resources Information Center

    Shear, Jonathan

    1985-01-01

    The study of trigonometric functions in terms of the unit circle offers an example of how students can learn algebraic relations and operations while using visually oriented thinking. Illustrations are included. (MNS)

  5. Changes of Visual Pathway and Brain Connectivity in Glaucoma: A Systematic Review

    PubMed Central

    Nuzzi, Raffaele; Dallorto, Laura; Rolle, Teresa

    2018-01-01

    Background: Glaucoma is a leading cause of irreversible blindness worldwide. The increasing interest in the involvement of the cortical visual pathway in glaucomatous patients is due to the implications in recent therapies, such as neuroprotection and neuroregeneration. Objective: In this review, we outline the current understanding of brain structural, functional, and metabolic changes detected with the modern techniques of neuroimaging in glaucomatous subjects. Methods: We screened MEDLINE, EMBASE, CINAHL, CENTRAL, LILACS, Trip Database, and NICE for original contributions published until 31 October 2017. Studies with at least six patients affected by any type of glaucoma were considered. We included studies using the following neuroimaging techniques: functional Magnetic Resonance Imaging (fMRI), resting-state fMRI (rs-fMRI), magnetic resonance spectroscopy (MRS), voxel-based Morphometry (VBM), surface-based Morphometry (SBM), diffusion tensor MRI (DTI). Results: Of a total of 1,901 studies, 56 case series with a total of 2,381 patients were included. Evidence of a neurodegenerative process in glaucomatous patients was found both within and beyond the visual system. Structural alterations in visual cortex (mainly reduced cortex thickness and volume) have been demonstrated with SBM and VBM; these changes were not limited to primary visual cortex but also involved association visual areas. Other brain regions, associated with visual function, demonstrated a certain grade of increased or decreased gray matter volume. Functional and metabolic abnormalities resulted within primary visual cortex in all studies with fMRI and MRS. Studies with rs-fMRI found disrupted connectivity between the primary and higher visual cortex and between visual cortex and associative visual areas in the task-free state of glaucomatous patients. Conclusions: This review contributes to the better understanding of brain abnormalities in glaucoma.
It may stimulate further speculation about brain plasticity at a later age and therapeutic strategies, such as the prevention of cortical degeneration in patients with glaucoma. Structural, functional, and metabolic neuroimaging methods provided evidence of changes throughout the visual pathway in glaucomatous patients. Other brain areas, not directly involved in the processing of visual information, also showed alterations. PMID:29896087

  6. Views and practices of Australian optometrists regarding driving for patients with central visual impairment.

    PubMed

    Oberstein, Sharon L; Boon, Mei Ying; Chu, Byoung Sun; Wood, Joanne M

    2016-09-01

    Eye-care practitioners are often required to make recommendations regarding their patients' visual fitness for driving, including patients with visual impairment. This study aimed to understand the perspectives and management strategies adopted by optometrists regarding driving for their patients with central visual impairment. Optometrists were invited to participate in an online survey (from April to June 2012). Items were designed to explore the views and practices adopted by optometrists regarding driving for patients with central visual impairment (visual acuity [VA] poorer than 6/12, normal visual fields, cognitive and physical health), including conditional driver's licences and bioptic telescopes. Closed- and open-ended questions were used. The response rate was 14 per cent (n = 300 valid responses were received). Most respondents (83 per cent) reported that they advised their patients with visual impairment to 'always' or 'sometimes' stop driving. Most were confident in interpreting the visual licensing standards (78 per cent) and advising on legal responsibilities concerning driving (99 per cent). Respondents were familiar with VA requirements for unconditional licensing (98 per cent); however, the median response VA of 6/15 as the poorest VA suggested for conditional licences differed from international practice and Australian medical guidelines released a month prior to the survey's launch. Few respondents reported prescribing bioptic telescopes (two per cent). While 97 per cent of respondents stated that they discussed conditional licences with their patients with visual impairment, relatively few (28 per cent) reported having completed conditional licence applications for such individuals in the previous year. Those who had completed applications were more experienced in years of practice (p = 0.02) and spent more time practising in rural locations (p = 0.03) than those who had not. 
The majority of Australian optometrists were receptive to the possibilities of driving options for individuals with central visual impairment, although management approaches varied with respect to conditional licensing. © 2016 Optometry Australia.

  7. [Comparison of visual impairment caused by trachoma in China between 1987 and 2006].

    PubMed

    Hu, Ailian; Cai, Xiaogu; Qiao, Liya; Zhang, Ye; Zhang, Xu; Sun, Baochen; Wang, Ningli

    2015-10-01

    To understand the distribution of visual impairment caused by trachoma in China and provide evidence for evaluating the elimination of blinding trachoma in China in the mission of Vision 2020. Sampling study. The results from the first (year 1987) and second (year 2006) national sampling surveys of disabled persons were analyzed. Chi-square test was performed using SAS 9.30 to analyze the rates of visual impairment caused by trachoma in different groups. Unifactor and multifactor analyses were applied to analyze the relevance between visual impairment caused by trachoma and risk factors, including gender and age. The rate of visual impairment caused by trachoma was 102.01 persons/100 000 in 1987 and 17.62 persons/100 000 in 2006. The percentage of trachoma in all kinds of visual impairment was 14.25% in 1987 and 1.87% in 2006, and the difference was significant (F = 1 382.6, P < 0.01). Spatial aggregation was obvious in visual impairment caused by trachoma. H-aggregation areas included Hubei, Sichuan, Anhui, Shaanxi, Guizhou, Hunan provinces and Chongqing Municipality. Survival time without trachoma between 1987 and 2006 was significantly different (F = 2 745.9, P < 0.01). The rate and risk of visual impairment caused by trachoma increased with age. Except for the group of >85 years, the rate of visual impairment caused by trachoma in all age groups in 1987 was significantly higher than that in 2006. The risk of visual impairment caused by trachoma in 1987 was 5.8 times that in 2006. If the other risk factors were not involved, the risk in 1987 was 8.75 times that in 2006. The risk in females was twice that in males. Both the rate and risk of visual impairment caused by trachoma were significantly reduced in China. Impressive progress was achieved in trachoma prevention and control.
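    The headline figures in this record reduce to simple rate arithmetic: rates expressed per 100,000 persons, and a crude rate ratio comparing the two survey years. A minimal sketch of that arithmetic (variable names are ours, not from the survey):

    ```python
    # Crude rate-ratio arithmetic behind the trachoma figures above.
    # Rates are as reported in the abstract, per 100,000 persons.
    RATE_1987 = 102.01  # visual impairment from trachoma, 1987 survey
    RATE_2006 = 17.62   # visual impairment from trachoma, 2006 survey

    rate_ratio = RATE_1987 / RATE_2006
    print(f"Crude rate ratio, 1987 vs 2006: {rate_ratio:.1f}")  # ~5.8x, matching the reported risk
    ```

    Note this is the unadjusted ratio; the abstract's 8.75x figure comes from the multifactor model with other risk factors held out, which cannot be reproduced from the summary rates alone.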

  8. Visual and proprioceptive interaction in patients with bilateral vestibular loss☆

    PubMed Central

    Cutfield, Nicholas J.; Scott, Gregory; Waldman, Adam D.; Sharp, David J.; Bronstein, Adolfo M.

    2014-01-01

    Following bilateral vestibular loss (BVL) patients gradually adapt to the loss of vestibular input and rely more on other sensory inputs. Here we examine changes in the way proprioceptive and visual inputs interact. We used functional magnetic resonance imaging (fMRI) to investigate visual responses in the context of varying levels of proprioceptive input in 12 BVL subjects and 15 normal controls. A novel metal-free vibrator was developed to allow vibrotactile neck proprioceptive input to be delivered in the MRI system. A high level (100 Hz) and low level (30 Hz) control stimulus was applied over the left splenius capitis; only the high frequency stimulus generates a significant proprioceptive stimulus. The neck stimulus was applied in combination with static and moving (optokinetic) visual stimuli, in a factorial fMRI experimental design. We found that high level neck proprioceptive input had more cortical effect on brain activity in the BVL patients. This included a reduction in visual motion responses during high levels of proprioceptive input and differential activation in the midline cerebellum. In early visual cortical areas, the effect of high proprioceptive input was present for both visual conditions but in lateral visual areas, including V5/MT, the effect was only seen in the context of visual motion stimulation. The finding of a cortical visuo-proprioceptive interaction in BVL patients is consistent with behavioural data indicating that, in BVL patients, neck afferents partly replace vestibular input during the CNS-mediated compensatory process. An fMRI cervico-visual interaction may thus substitute the known visuo-vestibular interaction reported in normal subject fMRI studies. The results provide evidence for a cortical mechanism of adaptation to vestibular failure, in the form of an enhanced proprioceptive influence on visual processing. 
The results may provide the basis for a cortical mechanism involved in proprioceptive substitution of vestibular function in BVL patients. PMID:25061564

  9. Visual Acuity Reporting in Clinical Research Publications.

    PubMed

    Tsou, Brittany C; Bressler, Neil M

    2017-06-01

    Visual acuity results in publications typically are reported in Snellen or non-Snellen formats or both. A study in 2011 suggested that many ophthalmologists do not understand non-Snellen formats, such as logarithm of the Minimum Angle of Resolution (logMAR) or Early Treatment Diabetic Retinopathy Study (ETDRS) letter scores. As a result, some journals, since at least 2013, have instructed authors to provide approximate Snellen equivalents next to non-Snellen visual acuity values. To evaluate how authors currently report visual acuity and whether they provide Snellen equivalents when their reports include non-Snellen formats. From November 21, 2016, through December 14, 2016, one reviewer evaluated visual acuity reporting among all articles published in 4 ophthalmology clinical journals from November 2015 through October 2016, including 3 of 4 journals that instructed authors to provide Snellen equivalents for visual acuity reported in non-Snellen formats. Frequency of formats of visual acuity reporting and frequency of providing Snellen equivalents when non-Snellen formats are given. The 4 journals reviewed had the second, fourth, fifth, and ninth highest impact factors for ophthalmology journals in 2015. Of 1881 articles reviewed, 807 (42.9%) provided a visual acuity measurement. Of these, 396 (49.1%) used only a Snellen format; 411 (50.9%) used a non-Snellen format. Among those using a non-Snellen format, 145 (35.3%) provided a Snellen equivalent while 266 (64.7%) provided only a non-Snellen format. More than half of all articles in 4 ophthalmology clinical journals fail to provide a Snellen equivalent when visual acuity is not in a Snellen format. Since many US ophthalmologists may not comprehend non-Snellen formats easily, these data suggest that editors and publishing staff should encourage authors to provide Snellen equivalents whenever visual acuity data are reported in a non-Snellen format to improve ease of understanding visual acuity measurements.
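    The formats discussed above are related by fixed arithmetic: logMAR is the base-10 logarithm of the minimum angle of resolution (the reciprocal of decimal acuity), so a Snellen denominator is the numerator scaled by 10^logMAR, and the ETDRS letter score is commonly approximated as 85 - 50 × logMAR. A minimal conversion sketch (function names are ours; the ETDRS relation is the standard approximation, not an exact per-chart score):

    ```python
    def logmar_to_snellen(logmar: float, numerator: int = 20) -> str:
        """Approximate Snellen equivalent for a logMAR value.

        Decimal acuity = 10 ** (-logMAR) and Snellen acuity = numerator / denominator,
        so denominator = numerator * 10 ** logMAR (rounded for display).
        """
        denominator = round(numerator * 10 ** logmar)
        return f"{numerator}/{denominator}"

    def logmar_to_etdrs_letters(logmar: float) -> int:
        """Common approximation: ETDRS letter score = 85 - 50 * logMAR."""
        return round(85 - 50 * logmar)

    print(logmar_to_snellen(0.0))        # 20/20
    print(logmar_to_snellen(0.3))        # 20/40
    print(logmar_to_snellen(1.0))        # 20/200
    print(logmar_to_etdrs_letters(0.3))  # 70
    ```

    Reporting such an approximate Snellen equivalent alongside a logMAR or ETDRS value is exactly the practice the study above found missing in most articles.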

  10. Visual and proprioceptive interaction in patients with bilateral vestibular loss.

    PubMed

    Cutfield, Nicholas J; Scott, Gregory; Waldman, Adam D; Sharp, David J; Bronstein, Adolfo M

    2014-01-01

    Following bilateral vestibular loss (BVL) patients gradually adapt to the loss of vestibular input and rely more on other sensory inputs. Here we examine changes in the way proprioceptive and visual inputs interact. We used functional magnetic resonance imaging (fMRI) to investigate visual responses in the context of varying levels of proprioceptive input in 12 BVL subjects and 15 normal controls. A novel metal-free vibrator was developed to allow vibrotactile neck proprioceptive input to be delivered in the MRI system. A high level (100 Hz) and low level (30 Hz) control stimulus was applied over the left splenius capitis; only the high frequency stimulus generates a significant proprioceptive stimulus. The neck stimulus was applied in combination with static and moving (optokinetic) visual stimuli, in a factorial fMRI experimental design. We found that high level neck proprioceptive input had more cortical effect on brain activity in the BVL patients. This included a reduction in visual motion responses during high levels of proprioceptive input and differential activation in the midline cerebellum. In early visual cortical areas, the effect of high proprioceptive input was present for both visual conditions but in lateral visual areas, including V5/MT, the effect was only seen in the context of visual motion stimulation. The finding of a cortical visuo-proprioceptive interaction in BVL patients is consistent with behavioural data indicating that, in BVL patients, neck afferents partly replace vestibular input during the CNS-mediated compensatory process. An fMRI cervico-visual interaction may thus substitute the known visuo-vestibular interaction reported in normal subject fMRI studies. The results provide evidence for a cortical mechanism of adaptation to vestibular failure, in the form of an enhanced proprioceptive influence on visual processing. 
The results may provide the basis for a cortical mechanism involved in proprioceptive substitution of vestibular function in BVL patients.

  11. Visual hallucinations in Parkinson's disease: a review and phenomenological survey

    PubMed Central

    Barnes, J; David, A

    2001-01-01

    OBJECTIVES—Between 8% and 40% of patients with Parkinson's disease undergoing long term treatment will have visual hallucinations during the course of their illness. There were two main objectives: firstly, to review the literature on Parkinson's disease and summarise those factors most often associated with hallucinations; secondly, to carry out a clinical comparison of ambulant patients with Parkinson's disease with and without visual hallucinations, and provide a detailed phenomenological analysis of the hallucinations.
METHODS—A systematic literature search using standard electronic databases of published surveys and case-control studies was undertaken. In parallel, a two stage questionnaire survey was carried out based on members of a local branch of the Parkinson's Disease Society and followed up with a clinical interview.
RESULTS—The review disclosed common factors associated with visual hallucinations in Parkinson's disease including greater age and duration of illness, cognitive impairment, and depression and sleep disturbances. The survey comprised 21 patients with visual hallucinations and 23 without. The hallucinators had a longer duration and a greater severity of illness, and tended to show more depressed mood and cognitive impairment. The typical visual hallucination in these patients is a complex visual image experienced while they are alert and have their eyes open. The image appears without any known trigger or voluntary effort, is somewhat blurred, and commonly moves. It stays present for a period of "seconds" or "minutes". The content can be variable within and between hallucinators, and includes such entities as people, animals, buildings, or scenery. These features resemble those highlighted in hallucinations in the visually impaired (Charles Bonnet's syndrome).
    CONCLUSION—A consistent set of factors are associated with visual hallucinations in Parkinson's disease. The results of the phenomenological survey and those of visual hallucinations carried out in other settings suggest a common physiological substrate for visual hallucinations but with cognitive factors playing an as yet unspecified role. PMID:11385004

  12. The Tribal Odisha Eye Disease Study (TOES) 1: prevalence and causes of visual impairment among tribal children in an urban school in Eastern India.

    PubMed

    Warkad, Vivekanand U; Panda, Lapam; Behera, Pradeep; Das, Taraprasad; Mohanta, Bikash C; Khanna, Rohit

    2018-04-01

    To estimate the prevalence and causes of visual impairment and other ocular comorbidities among tribal children in an urban school population in eastern India. In this cross-sectional study, vision screening tests were administered to tribal school children. Demographic data, including name, age, sex, home district, height, and weight of each child, and examination data, including unaided and pinhole visual acuity, external eye examination with a flashlight, slit-lamp examination, intraocular pressure (IOP) measurement, and undilated fundus photography, were collected. Children with visual acuity of less than 20/20, abnormal anterior or posterior segment findings, and IOP of >21 mm Hg were referred for further evaluation. Of 10,038 children (5,840 males [58.2%]) screened, 335 (median age, 9 years; range, 6-17 years) were referred. Refractive error was the most common cause of visual impairment (59.52%; 95% CI, 51.97-66.65) followed by amblyopia (17.2%; 95% CI, 12.3-23.6) and posterior segment anomaly (14.88%; 95% CI, 10.2-21.0). The prevalence of best-corrected visual acuity of 20/40 was 0.13%. The prevalence of blindness was 0.03%. Visual impairment among tribal children in this residential school is an uncommon but important disability. Copyright © 2018 American Association for Pediatric Ophthalmology and Strabismus. Published by Elsevier Inc. All rights reserved.

  13. Fluorescent Imaging With Indocyanine Green During Laparoscopic Cholecystectomy in Patients at Increased Risk of Bile Duct Injury

    PubMed Central

    Ankersmit, Marjolein; van Dam, Dieuwertje A.; van Rijswijk, Anne-Sophie; van den Heuvel, Baukje; Tuynman, Jurriaan B.; Meijerink, Wilhelmus J. H. J.

    2017-01-01

    Background. Although rare, injury to the common bile duct (CBD) during laparoscopic cholecystectomy (LC) can be reduced by better intraoperative visualization of the cystic duct (CD) and CBD. The aim of this study was to establish the efficacy of early visualization of the CD and the added value of CBD identification, using near-infrared (NIR) light and the fluorescent agent indocyanine green (ICG), in patients at increased risk of bile duct injury. Materials and Methods. Patients diagnosed with complicated cholecystitis and scheduled for LC were included. The CBD and CD were visualized with NIR light before and during dissection of the liver hilus and at critical view of safety (CVS). Results. Of the 20 patients originally included, 2 were later excluded due to conversion. In 6 of 18 patients, the CD was visualized early during dissection and prior to imaging with conventional white light. The CBD was additionally visualized with ICG-NIR in 7 of 18 patients. In 1 patient, conversion was prevented due to detection of the CD and CBD with ICG-NIR. Conclusions. Early visualization of the CD or additional identification of the CBD using ICG-NIR in patients with complicated cholecystolithiasis can be helpful in preventing CBD injury. Future studies should attempt to establish the optimal dosage and time frame for ICG administration and bile duct visualization with respect to different gallbladder pathologies. PMID:28178882

  14. Mapping arealisation of the visual cortex of non-primate species: lessons for development and evolution

    PubMed Central

    Homman-Ludiye, Jihane; Bourne, James A.

    2014-01-01

    The integration of the visual stimulus takes place at the level of the neocortex, which is organized in anatomically distinct and functionally unique areas. Primates, including humans, are heavily dependent on vision: approximately 50% of their neocortical surface is dedicated to visual processing, and they possess many more visual areas than any other mammal, making them the model of choice for studying visual cortical arealisation. However, in order to identify the mechanisms responsible for patterning the developing neocortex and specifying area identity, as well as to elucidate the events that have enabled the evolution of the complex primate visual cortex, it is essential to gain access to the cortical maps of alternative species. To this end, species including the mouse have driven the identification of cellular markers with area-specific expression profiles, the development of new tools to label connections, and technological advances in imaging techniques that enable monitoring of cortical activity in behaving animals. In this review we present non-primate species that have contributed to elucidating the evolution and development of the visual cortex. We describe the current understanding of the mechanisms supporting the establishment of areal borders during development, gained mainly in the mouse thanks to the availability of genetically modified lines, as well as the limitations of the mouse model and the need for alternative species. PMID:25071460

  15. Using the virtual reality device Oculus Rift for neuropsychological assessment of visual processing capabilities

    PubMed Central

    Foerster, Rebecca M.; Poth, Christian H.; Behler, Christian; Botsch, Mario; Schneider, Werner X.

    2016-01-01

    Neuropsychological assessment of human visual processing capabilities strongly depends on visual testing conditions, including room lighting, stimuli, and viewing distance. This limits standardization, threatens reliability, and prevents the assessment of core visual functions such as visual processing speed. Increasingly available virtual reality devices make it possible to address these problems. One such device is the portable, light-weight, and easy-to-use Oculus Rift. It is head-mounted and covers the entire visual field, thereby shielding and standardizing the visual stimulation. A fundamental prerequisite for using Oculus Rift in neuropsychological assessment is sufficient test-retest reliability. Here, we compare the test-retest reliabilities of Bundesen’s visual processing components (visual processing speed, threshold of conscious perception, capacity of visual working memory) as measured with Oculus Rift and with a standard CRT computer screen. Our results show that Oculus Rift measures the processing components as reliably as the standard CRT. This means that Oculus Rift is suitable for standardized and reliable assessment and diagnosis of elementary cognitive functions in laboratory and clinical settings. Oculus Rift thus provides the opportunity to compare visual processing components between individuals and institutions and to establish statistical norm distributions. PMID:27869220
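Test-retest reliability, the prerequisite this study examines, is conventionally indexed by the Pearson correlation between two measurement sessions. A minimal sketch, using invented illustrative scores (these are not data from the study):

```python
import math

def pearson_r(x, y):
    """Pearson correlation, the usual test-retest reliability index."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical processing-speed estimates (items/s) for six participants,
# measured once per session; high r indicates a reliable instrument.
session1 = [22.1, 35.4, 28.9, 41.2, 30.5, 25.8]
session2 = [23.0, 33.8, 30.1, 39.5, 31.2, 27.0]
print(f"test-retest r = {pearson_r(session1, session2):.2f}")
```

Comparing this coefficient between the Oculus Rift and CRT conditions is the kind of analysis the abstract describes; in practice the study would also need comparable intervals between sessions and matched stimulus parameters on both displays.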

  16. Creating visual explanations improves learning.

    PubMed

    Bobek, Eliza; Tversky, Barbara

    2016-01-01

    Many topics in science are notoriously difficult for students to learn. Mechanisms and processes outside student experience present particular challenges. While instruction typically involves visualizations, students usually explain in words. Because visual explanations can show parts and processes of complex systems directly, creating them should have benefits beyond those of creating verbal explanations. We compared learning from creating visual or verbal explanations for two STEM domains, a mechanical system (bicycle pump) and a chemical system (bonding). Both kinds of explanations were analyzed for content, and learning was assessed by a post-test. For the mechanical system, creating a visual explanation increased understanding, particularly for participants of low spatial ability. For the chemical system, creating both visual and verbal explanations improved learning without new teaching. Creating a visual explanation was superior and benefitted participants of both high and low spatial ability. Visual explanations often included crucial yet invisible features. The greater effectiveness of visual explanations appears attributable to the checks they provide for completeness and coherence as well as to their roles as platforms for inference. The benefits should generalize to other domains like the social sciences, history, and archeology where important information can be visualized. Together, the findings provide support for the use of learner-generated visual explanations as a powerful learning tool.

  17. Using the virtual reality device Oculus Rift for neuropsychological assessment of visual processing capabilities.

    PubMed

    Foerster, Rebecca M; Poth, Christian H; Behler, Christian; Botsch, Mario; Schneider, Werner X

    2016-11-21

    Neuropsychological assessment of human visual processing capabilities strongly depends on visual testing conditions, including room lighting, stimuli, and viewing distance. This limits standardization, threatens reliability, and prevents the assessment of core visual functions such as visual processing speed. Increasingly available virtual reality devices make it possible to address these problems. One such device is the portable, light-weight, and easy-to-use Oculus Rift. It is head-mounted and covers the entire visual field, thereby shielding and standardizing the visual stimulation. A fundamental prerequisite for using Oculus Rift in neuropsychological assessment is sufficient test-retest reliability. Here, we compare the test-retest reliabilities of Bundesen's visual processing components (visual processing speed, threshold of conscious perception, capacity of visual working memory) as measured with Oculus Rift and with a standard CRT computer screen. Our results show that Oculus Rift measures the processing components as reliably as the standard CRT. This means that Oculus Rift is suitable for standardized and reliable assessment and diagnosis of elementary cognitive functions in laboratory and clinical settings. Oculus Rift thus provides the opportunity to compare visual processing components between individuals and institutions and to establish statistical norm distributions.

  18. Basic visual function and cortical thickness patterns in posterior cortical atrophy.

    PubMed

    Lehmann, Manja; Barnes, Josephine; Ridgway, Gerard R; Wattam-Bell, John; Warrington, Elizabeth K; Fox, Nick C; Crutch, Sebastian J

    2011-09-01

    Posterior cortical atrophy (PCA) is characterized by a progressive decline in higher-visual object and space processing, but the extent to which these deficits are underpinned by basic visual impairments is unknown. This study aimed to assess basic and higher-order visual deficits in 21 PCA patients. Basic visual skills including form detection and discrimination, color discrimination, motion coherence, and point localization were measured, and associations and dissociations between specific basic visual functions and measures of higher-order object and space perception were identified. All participants showed impairment in at least one aspect of basic visual processing. However, a number of dissociations between basic visual skills indicated a heterogeneous pattern of visual impairment among the PCA patients. Furthermore, basic visual impairments were associated with particular higher-order object and space perception deficits, but not with nonvisual parietal tasks, suggesting the specific involvement of visual networks in PCA. Cortical thickness analysis revealed trends toward lower cortical thickness in occipitotemporal (ventral) and occipitoparietal (dorsal) regions in patients with visuoperceptual and visuospatial deficits, respectively. However, there was also considerable overlap in their patterns of cortical thinning. These findings suggest that different presentations of PCA represent points in a continuum of phenotypical variation.

  19. Conceptual design study for an advanced cab and visual system, volume 2

    NASA Technical Reports Server (NTRS)

    Rue, R. J.; Cyrus, M. L.; Garnett, T. A.; Nachbor, J. W.; Seery, J. A.; Starr, R. L.

    1980-01-01

    The performance, design, construction and testing requirements are defined for developing an advanced cab and visual system. The rotorcraft system integration simulator is composed of the advanced cab and visual system and the rotorcraft system motion generator, and is part of an existing simulation facility. User's applications for the simulator include rotorcraft design development, product improvement, threat assessment, and accident investigation.

  20. Art, Science & Visual Literacy: Selected Readings from the Annual Conference of the International Visual Literacy Association (24th, Pittsburgh, Pennsylvania, September 30-October 4, 1992).

    ERIC Educational Resources Information Center

    Braden, Roberts A., Ed.; And Others

    Following an introductory paper on Pittsburgh and the arts, 57 conference papers are presented under the following four major categories: (1) "Imagery, Science and the Arts," including discovery in art and science, technology and art, visual design of newspapers, multimedia science education, science learning and interactive videodisc technology,…
