Sample records for monocular image sequence

  1. Optic disc boundary segmentation from diffeomorphic demons registration of monocular fundus image sequences versus 3D visualization of stereo fundus image pairs for automated early stage glaucoma assessment

    NASA Astrophysics Data System (ADS)

    Gatti, Vijay; Hill, Jason; Mitra, Sunanda; Nutter, Brian

    2014-03-01

Despite the current availability of advanced scanning and 3-D imaging technologies in resource-rich regions and in current ophthalmology practice, worldwide screening tests for early detection and progression of glaucoma still consist of a variety of simple tools, including fundus image-based parameters such as CDR (cup to disc diameter ratio) and CAR (cup to disc area ratio), especially in resource-poor regions. Reliable automated computation of the relevant parameters from fundus image sequences requires robust non-rigid registration and segmentation techniques. Recent research work demonstrated that proper non-rigid registration of multi-view monocular fundus image sequences could result in acceptable segmentation of cup boundaries for automated computation of CAR and CDR. This research work introduces a composite diffeomorphic demons registration algorithm for segmentation of cup boundaries from a sequence of monocular images and compares the resulting CAR and CDR values with those computed manually by experts and from 3-D visualization of stereo pairs. Our preliminary results show that the automated computation of CDR and CAR from composite diffeomorphic segmentation of monocular image sequences yields values comparable with those from the other two techniques and thus may provide global healthcare with a cost-effective yet accurate tool for management of glaucoma in its early stage.
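
    Once the cup and disc boundaries are segmented, CAR and CDR reduce to simple ratios of the two regions. The sketch below is an illustrative computation from hypothetical binary masks, not the authors' pipeline; CDR is taken here as the ratio of vertical extents.

    ```python
    import numpy as np

    def cup_disc_ratios(cup_mask: np.ndarray, disc_mask: np.ndarray):
        """Compute CAR (area ratio) and CDR (vertical diameter ratio)
        from binary segmentation masks of the optic cup and disc."""
        cup = cup_mask.astype(bool)
        disc = disc_mask.astype(bool)
        # Cup-to-disc area ratio.
        car = cup.sum() / disc.sum()
        # Vertical extents: number of image rows spanned by each region.
        cup_rows = np.flatnonzero(cup.any(axis=1))
        disc_rows = np.flatnonzero(disc.any(axis=1))
        cdr = (cup_rows[-1] - cup_rows[0] + 1) / (disc_rows[-1] - disc_rows[0] + 1)
        return car, cdr
    ```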

  2. Enhanced Monocular Visual Odometry Integrated with Laser Distance Meter for Astronaut Navigation

    PubMed Central

    Wu, Kai; Di, Kaichang; Sun, Xun; Wan, Wenhui; Liu, Zhaoqin

    2014-01-01

    Visual odometry provides astronauts with accurate knowledge of their position and orientation. Wearable astronaut navigation systems should be simple and compact. Therefore, monocular vision methods are preferred over stereo vision systems, commonly used in mobile robots. However, the projective nature of monocular visual odometry causes a scale ambiguity problem. In this paper, we focus on the integration of a monocular camera with a laser distance meter to solve this problem. The most remarkable advantage of the system is its ability to recover a global trajectory for monocular image sequences by incorporating direct distance measurements. First, we propose a robust and easy-to-use extrinsic calibration method between camera and laser distance meter. Second, we present a navigation scheme that fuses distance measurements with monocular sequences to correct the scale drift. In particular, we explain in detail how to match the projection of the invisible laser pointer on other frames. Our proposed integration architecture is examined using a live dataset collected in a simulated lunar surface environment. The experimental results demonstrate the feasibility and effectiveness of the proposed method. PMID:24618780
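
    The scale ambiguity itself can be removed with absolute range measurements: if a point seen by the camera has depth d in the arbitrary units of the monocular reconstruction and the laser reports distance r to the same point, the metric scale is s = r / d. Below is a hedged least-squares sketch over several such measurements (a hypothetical helper, not the paper's navigation scheme).

    ```python
    import numpy as np

    def recover_scale(vo_depths, laser_ranges) -> float:
        """Least-squares metric scale s minimizing sum_i (s * d_vo_i - d_laser_i)^2,
        where d_vo_i is the scale-free monocular depth of the laser-lit point and
        d_laser_i the corresponding absolute distance from the laser meter."""
        d = np.asarray(vo_depths, dtype=float)
        r = np.asarray(laser_ranges, dtype=float)
        return float(d @ r / (d @ d))

    # Example: apply the recovered scale to the whole monocular trajectory.
    # trajectory_metric = recover_scale(d_vo, d_laser) * trajectory_vo
    ```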

  3. The determination of high-resolution spatio-temporal glacier motion fields from time-lapse sequences

    NASA Astrophysics Data System (ADS)

    Schwalbe, Ellen; Maas, Hans-Gerd

    2017-12-01

This paper presents a comprehensive method for the determination of glacier surface motion vector fields at high spatial and temporal resolution. These vector fields can be derived from monocular terrestrial camera image sequences and are a valuable data source for glaciological analysis of the motion behaviour of glaciers. The measurement concepts for the acquisition of image sequences are presented, and an automated monoscopic image sequence processing chain is developed. Motion vector fields can be derived with high precision by applying automatic subpixel-accuracy image matching techniques on grey value patterns in the image sequences. Well-established matching techniques have been adapted to the special characteristics of the glacier data in order to achieve high reliability in automatic image sequence processing, including the handling of moving shadows as well as motion effects induced by small instabilities in the camera set-up. Suitable geo-referencing techniques were developed to transform image measurements into a reference coordinate system. The result of monoscopic image sequence analysis is a dense raster of glacier surface point trajectories for each image sequence. Each translation vector component in these trajectories can be determined with an accuracy of a few centimetres for points at a distance of several kilometres from the camera. Extensive practical validation experiments have shown that motion vector and trajectory fields derived from monocular image sequences can be used for the determination of high-resolution velocity fields of glaciers, including the analysis of tidal effects on glacier movement, the investigation of a glacier's motion behaviour during calving events, the determination of the position and migration of the grounding line and the detection of subglacial channels during glacier lake outburst floods.
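
    The core measurement is sub-pixel grey-value matching of surface patches between consecutive images. The following is a simplified illustration using OpenCV normalized cross-correlation with parabolic peak refinement, standing in for the least-squares matching used by the authors; patch and window sizes are assumptions.

    ```python
    import cv2
    import numpy as np

    def track_patch(prev_img, next_img, centre, patch=21, search=41):
        """Track a grey-value patch (8-bit or float32 greyscale) from prev_img to
        next_img with sub-pixel accuracy: normalized cross-correlation plus
        1-D parabolic peak fitting. Assumes the patch and search window lie
        fully inside both images."""
        r, c = centre
        hp, hs = patch // 2, search // 2
        tmpl = prev_img[r - hp:r + hp + 1, c - hp:c + hp + 1]
        win = next_img[r - hs:r + hs + 1, c - hs:c + hs + 1]
        score = cv2.matchTemplate(win, tmpl, cv2.TM_CCOEFF_NORMED)
        _, _, _, (px, py) = cv2.minMaxLoc(score)

        def subpx(v_m, v_0, v_p):          # parabolic refinement around the peak
            d = v_m - 2 * v_0 + v_p
            return 0.0 if d == 0 else 0.5 * (v_m - v_p) / d

        dy = subpx(score[py - 1, px], score[py, px], score[py + 1, px]) if 0 < py < score.shape[0] - 1 else 0.0
        dx = subpx(score[py, px - 1], score[py, px], score[py, px + 1]) if 0 < px < score.shape[1] - 1 else 0.0
        # Displacement of the patch centre in pixels (row, col); zero motion
        # corresponds to the peak at (hs - hp, hs - hp).
        return (py + dy - (hs - hp)), (px + dx - (hs - hp))
    ```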

  4. Non-Rigid Structure Estimation in Trajectory Space from Monocular Vision

    PubMed Central

    Wang, Yaming; Tong, Lingling; Jiang, Mingfeng; Zheng, Junbao

    2015-01-01

In this paper, the problem of non-rigid structure estimation in trajectory space from monocular vision is investigated. As in the Point Trajectory Approach (PTA), characteristic point trajectories are described by a predefined Discrete Cosine Transform (DCT) basis and the structure matrix is calculated by a factorization method. To further optimize the non-rigid structure estimation from monocular vision, a rank minimization problem on the structure matrix is formulated by introducing a low-rank condition. Moreover, the Accelerated Proximal Gradient (APG) algorithm is employed to solve the rank minimization problem, optimizing the initial structure matrix calculated by the PTA method. The APG algorithm converges quickly and noticeably reduces the reconstruction error. The reconstruction results of real image sequences indicate that the proposed approach runs reliably, and effectively improves the accuracy of non-rigid structure estimation from monocular vision. PMID:26473863
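
    Two ingredients of this formulation can be sketched compactly: a truncated DCT trajectory basis and the singular-value soft-thresholding step that a proximal-gradient method such as APG applies to enforce a low-rank structure matrix. This is an illustrative fragment under a simplified nuclear-norm formulation, not the authors' full algorithm.

    ```python
    import numpy as np

    def dct_basis(F: int, K: int) -> np.ndarray:
        """Truncated DCT basis for trajectories: F frames, K low-frequency
        basis vectors, returned as an (F x K) matrix with unit-norm columns."""
        t = np.arange(F)
        cols = [np.cos(np.pi * (2 * t + 1) * k / (2 * F)) for k in range(K)]
        basis = np.stack(cols, axis=1)
        return basis / np.linalg.norm(basis, axis=0)

    def svt(X: np.ndarray, tau: float) -> np.ndarray:
        """Singular-value soft-thresholding: the proximal operator of the
        nuclear norm used inside an accelerated proximal gradient iteration."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
    ```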

  5. Scale Estimation and Correction of the Monocular Simultaneous Localization and Mapping (SLAM) Based on Fusion of 1D Laser Range Finder and Vision Data.

    PubMed

    Zhang, Zhuang; Zhao, Rujin; Liu, Enhai; Yan, Kun; Ma, Yuebo

    2018-06-15

This article presents a new sensor fusion method for visual simultaneous localization and mapping (SLAM) through integration of a monocular camera and a 1D-laser range finder. Such a fusion method provides scale estimation and drift correction; it is not constrained by physical volume in the way a stereo camera is limited by its baseline, and it overcomes the limited depth range problem associated with SLAM for RGBD cameras. We first present the analytical feasibility for estimating the absolute scale through the fusion of 1D distance information and image information. Next, the analytical derivation of the laser-vision fusion is described in detail based on the local dense reconstruction of the image sequences. We also correct the scale drift of the monocular SLAM using the laser distance information, which is independent of the drift error. Finally, application of this approach to both indoor and outdoor scenes is verified on the Technical University of Munich (TUM) RGB-D dataset and self-collected data. We compare the scale estimation and drift correction of the proposed method with SLAM for a monocular camera and an RGBD camera.
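
    Because each laser range is free of accumulated drift, the local scale of the monocular SLAM map can be re-estimated as the reconstruction proceeds. A minimal sketch of such a correction, assuming hypothetical per-keyframe SLAM depths of the laser-illuminated point and the corresponding measured ranges (not the paper's derivation):

    ```python
    import numpy as np

    def per_keyframe_scale(slam_depths, laser_ranges, window=10):
        """Sliding-window scale estimate s_i = median(d_laser / d_slam) over the
        last `window` keyframes, so slowly accumulating scale drift is tracked.
        The returned scales can be applied to the corresponding local map and
        trajectory segments."""
        slam_depths = np.asarray(slam_depths, dtype=float)
        laser_ranges = np.asarray(laser_ranges, dtype=float)
        ratios = laser_ranges / slam_depths
        scales = np.empty_like(ratios)
        for i in range(len(ratios)):
            scales[i] = np.median(ratios[max(0, i - window + 1):i + 1])
        return scales
    ```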

  6. High resolution depth reconstruction from monocular images and sparse point clouds using deep convolutional neural network

    NASA Astrophysics Data System (ADS)

    Dimitrievski, Martin; Goossens, Bart; Veelaert, Peter; Philips, Wilfried

    2017-09-01

    Understanding the 3D structure of the environment is advantageous for many tasks in the field of robotics and autonomous vehicles. From the robot's point of view, 3D perception is often formulated as a depth image reconstruction problem. In the literature, dense depth images are often recovered deterministically from stereo image disparities. Other systems use an expensive LiDAR sensor to produce accurate, but semi-sparse depth images. With the advent of deep learning there have also been attempts to estimate depth by only using monocular images. In this paper we combine the best of the two worlds, focusing on a combination of monocular images and low cost LiDAR point clouds. We explore the idea that very sparse depth information accurately captures the global scene structure while variations in image patches can be used to reconstruct local depth to a high resolution. The main contribution of this paper is a supervised learning depth reconstruction system based on a deep convolutional neural network. The network is trained on RGB image patches reinforced with sparse depth information and the output is a depth estimate for each pixel. Using image and point cloud data from the KITTI vision dataset we are able to learn a correspondence between local RGB information and local depth, while at the same time preserving the global scene structure. Our results are evaluated on sequences from the KITTI dataset and our own recordings using a low cost camera and LiDAR setup.
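
    The input/output structure of such a network can be illustrated with a toy fully convolutional model that concatenates the RGB image with a sparse depth channel and regresses a dense depth map. This is a minimal PyTorch sketch, not the architecture used in the paper.

    ```python
    import torch
    import torch.nn as nn

    class DepthNet(nn.Module):
        """Toy fully convolutional network: RGB (3 ch) + sparse depth (1 ch) -> dense depth."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 1, 3, padding=1),
            )

        def forward(self, rgb, sparse_depth):
            # sparse_depth is zero where no LiDAR return is available.
            x = torch.cat([rgb, sparse_depth], dim=1)
            return self.net(x)

    # Example forward pass on a random image-sized tensor:
    # model = DepthNet()
    # pred = model(torch.rand(1, 3, 128, 416), torch.rand(1, 1, 128, 416))
    ```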

  7. Detection of Obstacles in Monocular Image Sequences

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia

    1997-01-01

    The ability to detect and locate runways/taxiways and obstacles in images captured using on-board sensors is an essential first step in the automation of low-altitude flight, landing, takeoff, and taxiing phase of aircraft navigation. Automation of these functions under different weather and lighting situations, can be facilitated by using sensors of different modalities. An aircraft-based Synthetic Vision System (SVS), with sensors of different modalities mounted on-board, complements the current ground-based systems in functions such as detection and prevention of potential runway collisions, airport surface navigation, and landing and takeoff in all weather conditions. In this report, we address the problem of detection of objects in monocular image sequences obtained from two types of sensors, a Passive Millimeter Wave (PMMW) sensor and a video camera mounted on-board a landing aircraft. Since the sensors differ in their spatial resolution, and the quality of the images obtained using these sensors is not the same, different approaches are used for detecting obstacles depending on the sensor type. These approaches are described separately in two parts of this report. The goal of the first part of the report is to develop a method for detecting runways/taxiways and objects on the runway in a sequence of images obtained from a moving PMMW sensor. Since the sensor resolution is low and the image quality is very poor, we propose a model-based approach for detecting runways/taxiways. We use the approximate runway model and the position information of the camera provided by the Global Positioning System (GPS) to define regions of interest in the image plane to search for the image features corresponding to the runway markers. Once the runway region is identified, we use histogram-based thresholding to detect obstacles on the runway and regions outside the runway. This algorithm is tested using image sequences simulated from a single real PMMW image.
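
    The obstacle detection step amounts to thresholding the intensity histogram inside the projected runway region. Below is a hedged sketch using Otsu's method as a stand-in for the report's specific thresholding rule, with a hypothetical runway mask obtained from the model projection.

    ```python
    import cv2
    import numpy as np

    def detect_runway_obstacles(gray: np.ndarray, runway_mask: np.ndarray) -> np.ndarray:
        """Binary obstacle map inside the runway region of interest.
        gray        : 8-bit single-channel PMMW or video frame
        runway_mask : binary mask of the projected runway model (same size).
        The histogram-based (Otsu) threshold is computed from runway pixels only;
        pixels above it are flagged as potential obstacles (an illustrative choice)."""
        roi_vals = gray[runway_mask > 0].reshape(-1, 1)
        thresh, _ = cv2.threshold(roi_vals, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        obstacles = ((gray > thresh) & (runway_mask > 0)).astype(np.uint8) * 255
        return obstacles
    ```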

  8. Human pose tracking from monocular video by traversing an image motion mapped body pose manifold

    NASA Astrophysics Data System (ADS)

    Basu, Saurav; Poulin, Joshua; Acton, Scott T.

    2010-01-01

    Tracking human pose from monocular video sequences is a challenging problem due to the large number of independent parameters affecting image appearance and nonlinear relationships between generating parameters and the resultant images. Unlike the current practice of fitting interpolation functions to point correspondences between underlying pose parameters and image appearance, we exploit the relationship between pose parameters and image motion flow vectors in a physically meaningful way. Change in image appearance due to pose change is realized as navigating a low dimensional submanifold of the infinite dimensional Lie group of diffeomorphisms of the two dimensional sphere S2. For small changes in pose, image motion flow vectors lie on the tangent space of the submanifold. Any observed image motion flow vector field is decomposed into the basis motion vector flow fields on the tangent space and combination weights are used to update corresponding pose changes in the different dimensions of the pose parameter space. Image motion flow vectors are largely invariant to style changes in experiments with synthetic and real data where the subjects exhibit variation in appearance and clothing. The experiments demonstrate the robustness of our method (within +/-4° of ground truth) to style variance.
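
    Decomposing the observed motion field into the basis motion fields of the tangent space is a linear least-squares problem in the combination weights. A minimal sketch with hypothetical precomputed basis fields:

    ```python
    import numpy as np

    def decompose_flow(observed_flow: np.ndarray, basis_flows: np.ndarray) -> np.ndarray:
        """Least-squares weights w minimizing ||B w - f||, where
        observed_flow : (H, W, 2) image motion field
        basis_flows   : (K, H, W, 2) basis motion fields (tangent vectors).
        The weights drive the corresponding pose-parameter updates."""
        f = observed_flow.reshape(-1)
        B = basis_flows.reshape(basis_flows.shape[0], -1).T   # (H*W*2, K)
        w, *_ = np.linalg.lstsq(B, f, rcond=None)
        return w
    ```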

  9. Sub-Pixel Accuracy Crack Width Determination on Concrete Beams in Load Tests by Triangle Mesh Geometry Analysis

    NASA Astrophysics Data System (ADS)

    Liebold, F.; Maas, H.-G.

    2018-05-01

    This paper deals with the determination of crack widths of concrete beams during load tests from monocular image sequences. The procedure starts in a reference image of the probe with suitable surface texture under zero load, where a large number of points is defined by an interest operator. Then a triangulated irregular network is established to connect the points. Image sequences are recorded during load tests with the load increasing continuously or stepwise, or at intermittently changing load. The vertices of the triangles are tracked through the consecutive images of the sequence with sub-pixel accuracy by least squares matching. All triangles are then analyzed for changes by principal strain calculation. For each triangle showing significant strain, a crack width is computed by a thorough geometric analysis of the relative movement of the vertices.
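
    For each triangle, the principal strains follow from the 2-D deformation gradient between the reference and tracked vertex positions. The sketch below uses the Green-Lagrange strain tensor as one common choice; the paper's exact strain formulation may differ.

    ```python
    import numpy as np

    def principal_strains(ref_tri: np.ndarray, def_tri: np.ndarray) -> np.ndarray:
        """Principal strains of one triangle.
        ref_tri, def_tri : (3, 2) vertex coordinates in the reference and
        deformed state (pixels). Assumes a non-degenerate reference triangle.
        Returns the two principal strain values in ascending order."""
        # Edge matrices relative to the first vertex.
        R = np.column_stack([ref_tri[1] - ref_tri[0], ref_tri[2] - ref_tri[0]])
        D = np.column_stack([def_tri[1] - def_tri[0], def_tri[2] - def_tri[0]])
        F = D @ np.linalg.inv(R)              # 2-D deformation gradient
        E = 0.5 * (F.T @ F - np.eye(2))       # Green-Lagrange strain tensor
        return np.linalg.eigvalsh(E)          # principal strains
    ```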

  10. Robust obstacle detection for unmanned surface vehicles

    NASA Astrophysics Data System (ADS)

    Qin, Yueming; Zhang, Xiuzhi

    2018-03-01

Obstacle detection is of essential importance for Unmanned Surface Vehicles (USV). Although some obstacles (e.g., ships, islands) can be detected by radar, there are many other obstacles (e.g., floating pieces of wood, swimmers) that are difficult to detect via radar because they have a low radar cross section. Therefore, detecting obstacles from images taken on board is an effective supplement. In this paper, a robust vision-based obstacle detection method for USVs is developed. The proposed method employs the monocular image sequence captured by the camera on the USV and detects obstacles on the sea surface from the image sequence. The experimental results show that the proposed scheme is efficient in fulfilling the obstacle detection task.

  11. Monocular deprivation of Fourier phase information boosts the deprived eye's dominance during interocular competition but not interocular phase combination.

    PubMed

    Bai, Jianying; Dong, Xue; He, Sheng; Bao, Min

    2017-06-03

Ocular dominance has been extensively studied, often with the goal to understand neuroplasticity, which is a key characteristic within the critical period. Recent work on monocular deprivation, however, demonstrates residual neuroplasticity in the adult visual cortex. After deprivation of patterned inputs by monocular patching, the patched eye becomes more dominant. Since patching blocks both the Fourier amplitude and phase information of the input image, it remains unclear whether deprivation of the Fourier phase information alone is able to reshape eye dominance. Here, for the first time, we show that removal of the phase regularity without changing the amplitude spectra of the input image induced a shift of eye dominance toward the deprived eye, but only if the eye dominance was measured with a binocular rivalry task rather than an interocular phase combination task. These different results indicate that the two measurements are supported by different mechanisms. Phase integration requires the fusion of monocular images. The fused percept highly relies on the weights of the phase-sensitive monocular neurons that respond to the two monocular images. However, binocular rivalry reflects the result of direct interocular competition that strongly weights the contour information transmitted along each monocular pathway. Monocular phase deprivation may not change the weights in the integration (fusion) mechanism much, but alters the balance in the rivalry (competition) mechanism. Our work suggests that ocular dominance plasticity may occur at different stages of visual processing, and that homeostatic compensation also occurs for the lack of phase regularity in natural scenes. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
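
    The deprivation stimulus removes phase regularity while leaving the Fourier amplitude spectrum untouched. A minimal sketch of such phase scrambling for a greyscale image (illustrative only; the study's stimuli were generated per its own protocol):

    ```python
    import numpy as np

    def phase_scramble(img: np.ndarray, seed=None) -> np.ndarray:
        """Randomize the Fourier phase of a greyscale image while keeping its
        amplitude spectrum. Taking the real part of the inverse FFT discards
        the small imaginary residue left by the non-symmetric random phase."""
        rng = np.random.default_rng(seed)
        F = np.fft.fft2(img.astype(float))
        random_phase = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, img.shape))
        return np.real(np.fft.ifft2(np.abs(F) * random_phase))
    ```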

  12. Human silhouette matching based on moment invariants

    NASA Astrophysics Data System (ADS)

    Sun, Yong-Chao; Qiu, Xian-Jie; Xia, Shi-Hong; Wang, Zhao-Qi

    2005-07-01

This paper aims to apply the method of silhouette matching based on moment invariants to infer human motion parameters from video sequences of a single monocular uncalibrated camera. Currently, there are two ways of tracking human motion: marker-based and markerless. A hybrid framework is introduced in this paper to recover the input video contents. A standard 3D motion database is built up by marker techniques in advance. Given a video sequence, human silhouettes are extracted as well as the viewpoint information of the camera, which is utilized to project the standard 3D motion database onto a 2D one. Therefore, the video recovery problem is formulated as a matching issue of finding the body pose in the standard 2D library most similar to the one in the video image. The framework is applied to the special trampoline sport, where we can obtain the complicated human motion parameters in single-camera video sequences, and extensive experiments demonstrate that this approach is feasible in the field of monocular video-based 3D motion reconstruction.
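
    Silhouette matching by moment invariants can be illustrated with OpenCV's Hu moments and a nearest-neighbour search over the projected 2D pose library; this is a hedged sketch, not the paper's similarity measure.

    ```python
    import cv2
    import numpy as np

    def hu_signature(silhouette: np.ndarray) -> np.ndarray:
        """Log-scaled Hu moment invariants of a binary (uint8) silhouette image."""
        hu = cv2.HuMoments(cv2.moments(silhouette, binaryImage=True)).flatten()
        return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

    def best_match(query: np.ndarray, library: list) -> int:
        """Index of the library silhouette whose Hu signature is closest to the query's."""
        q = hu_signature(query)
        dists = [np.linalg.norm(q - hu_signature(s)) for s in library]
        return int(np.argmin(dists))
    ```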

  13. Accurate estimation of object location in an image sequence using helicopter flight data

    NASA Technical Reports Server (NTRS)

    Tang, Yuan-Liang; Kasturi, Rangachar

    1994-01-01

In autonomous navigation, it is essential to obtain a three-dimensional (3D) description of the static environment in which the vehicle is traveling. For a rotorcraft conducting low-altitude flight, this description is particularly useful for obstacle detection and avoidance. In this paper, we address the problem of 3D position estimation for static objects from a monocular sequence of images captured from a low-altitude flying helicopter. Since the environment is static, it is well known that the optical flow in the image will produce a radiating pattern from the focus of expansion. We propose a motion analysis system which utilizes the epipolar constraint to accurately estimate 3D positions of scene objects in a real world image sequence taken from a low-altitude flying helicopter. Results show that this approach gives good estimates of object positions near the rotorcraft's intended flight-path.
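
    Under pure camera translation, the flow of a static point is directed away from the focus of expansion (FOE) with magnitude proportional to its image distance from the FOE and inversely proportional to its depth, so Z = Tz * |p - FOE| / |flow|. The sketch below illustrates only that relation; the paper additionally exploits the epipolar constraint and the helicopter flight data.

    ```python
    import numpy as np

    def depth_from_foe(points, flows, foe, forward_motion):
        """Depths of static points under (assumed) pure camera translation.
        points         : (N, 2) image coordinates (pixels)
        flows          : (N, 2) optical flow vectors (pixels/frame)
        foe            : (2,) focus of expansion (pixels)
        forward_motion : camera translation along the optical axis per frame,
                         e.g. from flight data; output depths share its units.
        Uses Z = Tz * |p - FOE| / |flow|."""
        points = np.asarray(points, dtype=float)
        flows = np.asarray(flows, dtype=float)
        radial = np.linalg.norm(points - np.asarray(foe, dtype=float), axis=1)
        speed = np.linalg.norm(flows, axis=1) + 1e-12   # guard against zero flow
        return forward_motion * radial / speed
    ```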

  14. The monocular visual imaging technology model applied in the airport surface surveillance

    NASA Astrophysics Data System (ADS)

    Qin, Zhe; Wang, Jian; Huang, Chao

    2013-08-01

At present, civil aviation airports use surface surveillance radar monitoring and positioning systems to monitor aircraft, vehicles and other moving objects. Surface surveillance radars can cover most of the airport scene, but because of the geometry of terminals, covered bridges and other buildings, surface surveillance radar systems inevitably have some small blind spots. This paper presents a monocular vision imaging technology model for airport surface surveillance, achieving perception and localization of moving objects in the scene such as aircraft, vehicles and personnel. This new model provides an important complement to airport surface surveillance that differs from traditional surface surveillance radar techniques. Such a technique not only provides a clear picture of object activities for air traffic control, but also provides image recognition and positioning of moving targets in the area. It can thereby improve the efficiency of airport operations and help avoid conflicts between aircraft and vehicles. This paper first introduces the monocular visual imaging technology model applied in airport surface surveillance and then analyzes the monocular vision measurement accuracy of the model. The monocular visual imaging technology model is simple, low cost, and highly efficient. It is an advanced monitoring technique that can make up for the blind-spot areas of surface surveillance radar monitoring and positioning systems.

  15. Stereomotion speed perception is contrast dependent

    NASA Technical Reports Server (NTRS)

    Brooks, K.

    2001-01-01

    The effect of contrast on the perception of stimulus speed for stereomotion and monocular lateral motion was investigated for successive matches in random-dot stimuli. The familiar 'Thompson effect'--that a reduction in contrast leads to a reduction in perceived speed--was found in similar proportions for both binocular images moving in depth, and for monocular images translating laterally. This result is consistent with the idea that the monocular motion system has a significant input to the stereomotion system, and dominates the speed percept for approaching motion.

  16. Binocular and Monocular Depth Cues in Online Feedback Control of 3-D Pointing Movement

    PubMed Central

    Hu, Bo; Knill, David C.

    2012-01-01

Previous work has shown that humans continuously use visual feedback of the hand to control goal-directed movements online. In most studies, visual error signals were predominantly in the image plane and thus were available in an observer’s retinal image. We investigate how humans use visual feedback about finger depth provided by binocular and monocular depth cues to control pointing movements. When binocularly viewing a scene in which the hand movement was made in free space, subjects were about 60 ms slower in responding to perturbations in depth than in the image plane. When monocularly viewing a scene designed to maximize the available monocular cues to finger depth (motion, changing size and cast shadows), subjects showed no response to perturbations in depth. Thus, binocular cues from the finger are critical to effective online control of hand movements in depth. An optimal feedback controller that takes into account the low peripheral stereoacuity and inherent ambiguity in cast shadows can explain the difference in response time in the binocular conditions and lack of response in monocular conditions. PMID:21724567

  17. Effects of Ocular Optics on Perceived Visual Direction and Depth

    NASA Astrophysics Data System (ADS)

    Ye, Ming

Most studies of human retinal image quality have specifically addressed the issues of image contrast; few have examined the problem of image location. However, one of the most impressive properties of human vision involves the location of objects. We are able to identify object location with great accuracy (less than 5 arcsec). The sensitivity we exhibit for image location indicates that any optical errors, such as refractive error, ocular aberrations, pupil decentration, etc., may have noticeable effects on perceived visual direction and distance of objects. The most easily observed effect of these optical factors is a binocular depth illusion called chromostereopsis, in which equidistant colored objects appear to lie at different distances. This dissertation covers a series of theoretical and experimental studies that examined the effects of ocular optics on perceived monocular visual direction and binocular chromostereopsis. Theoretical studies included development of an adequate eye model for predicting chromatic aberration, a major ocular aberration, using geometric optics. Also, a wave optical analysis is used to model the effects of defocus, optical aberrations, Stiles-Crawford effect (SCE) and pupil location on retinal image profiles. Experimental studies used psychophysical methods such as monocular vernier alignment tests, binocular stereoscopic tests, etc. This dissertation concludes: (1) With a decentered large pupil, the SCE reduces defocused image shifts compared with an eye without the SCE. (2) The blurred image location can be predicted by the centroid of the image profile. (3) Chromostereopsis with small pupils can be precisely accounted for by the interocular difference in monocular transverse chromatic aberration. (4) The SCE also plays an important role in the effect of pupil size on chromostereopsis. The reduction of chromostereopsis with large pupils can be accurately predicted by the interocular difference in monocular chromatic diplopia which is also reduced with large pupils. This supports the hypothesis that the effect of pupil size on chromostereopsis is due to monocular mechanisms.

  18. A Comparative Analysis of Three Monocular Passive Ranging Methods on Real Infrared Sequences

    NASA Astrophysics Data System (ADS)

    Bondžulić, Boban P.; Mitrović, Srđan T.; Barbarić, Žarko P.; Andrić, Milenko S.

    2013-09-01

Three monocular passive ranging methods are analyzed and tested on real infrared sequences. The first method exploits scale changes of an object in successive frames, while the other two use the Beer-Lambert law. The ranging methods are evaluated by comparison with simultaneously obtained reference data at the test site. The research addresses scenarios where multiple sensor views or active measurements are not possible. The results show that these methods for range estimation can provide the fidelity required for object tracking. Maximum values of relative distance estimation errors in near-ideal conditions are less than 8%.
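
    The first (scale-change) method can be summarized with the pinhole relation that apparent size is inversely proportional to range: if the sensor closes a known distance d between two frames in which the object subtends s_prev and s_curr pixels, then R_curr = s_prev * d / (s_curr - s_prev). The function below is an idealized sketch of that relation, not the paper's estimator.

    ```python
    def range_from_scale_change(size_prev: float, size_curr: float,
                                closing_distance: float) -> float:
        """Range to an object of fixed physical size from its apparent-size change.
        size_prev, size_curr : object extent in pixels in two successive frames
        closing_distance     : distance travelled toward the object between the
                               frames (e.g. from own velocity and frame interval)
        Pinhole model: size ~ 1/range. Assumes the object is being approached,
        i.e. size_curr > size_prev."""
        return size_prev * closing_distance / (size_curr - size_prev)
    ```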

  19. Construction of pixel-level resolution DEMs from monocular images by shape and albedo from shading constrained with low-resolution DEM

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Liu, Wai Chung; Grumpe, Arne; Wöhler, Christian

    2018-06-01

Lunar Digital Elevation Models (DEMs) are important for successful lunar landing and exploration missions. Lunar DEMs are typically generated by photogrammetry or laser altimetry approaches. Photogrammetric methods require multiple stereo images of the region of interest and may not be applicable in cases where stereo coverage is not available. In contrast, reflectance-based shape reconstruction techniques, such as shape from shading (SfS) and shape and albedo from shading (SAfS), apply monocular images to generate DEMs with pixel-level resolution. We present a novel hierarchical SAfS method that refines a lower-resolution DEM to pixel-level resolution given a monocular image with known light source. We also estimate the corresponding pixel-wise albedo map in the process and use it to regularize the pixel-level shape reconstruction constrained by the low-resolution DEM. In this study, a Lunar-Lambertian reflectance model is applied to estimate the albedo map. Experiments were carried out using monocular images from the Lunar Reconnaissance Orbiter Narrow Angle Camera (LRO NAC), with spatial resolution of 0.5-1.5 m per pixel, constrained by the Selenological and Engineering Explorer and LRO Elevation Model (SLDEM), with spatial resolution of 60 m. The results indicate that local details are well recovered by the proposed algorithm with plausible albedo estimation. The low-frequency topographic consistency depends on the quality of the low-resolution DEM and the resolution difference between the image and the low-resolution DEM.
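
    The Lunar-Lambertian (McEwen) reflectance model blends a Lommel-Seeliger term with a Lambertian term. A sketch of its evaluation for given incidence and emission angles; the limb-darkening weight L is treated as an assumed constant here, whereas in practice it is usually phase-angle dependent.

    ```python
    import numpy as np

    def lunar_lambert(albedo, incidence, emission, L=0.5):
        """Lunar-Lambert reflectance for incidence/emission angles in radians.
        Blends a Lommel-Seeliger term (2*cos(i)/(cos(i)+cos(e))) with a
        Lambertian term (cos(i)) using the weight L."""
        ci, ce = np.cos(incidence), np.cos(emission)
        return albedo * (2.0 * L * ci / (ci + ce) + (1.0 - L) * ci)
    ```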

  20. Improved disparity map analysis through the fusion of monocular image segmentations

    NASA Technical Reports Server (NTRS)

    Perlant, Frederic P.; Mckeown, David M.

    1991-01-01

The focus is to examine how estimates of three-dimensional scene structure, as encoded in a scene disparity map, can be improved by analysis of the original monocular imagery. Surface illumination information is exploited by segmenting the monocular image into fine surface patches of nearly homogeneous intensity, which are used to remove mismatches generated during stereo matching. These patches guide a statistical analysis of the disparity map based on the assumption that such patches correspond closely with physical surfaces in the scene. Such a technique is quite independent of whether the initial disparity map was generated by automated area-based or feature-based stereo matching. Stereo analysis results are presented on a complex urban scene containing various man-made and natural features. This scene contains a variety of problems, including low building height with respect to the stereo baseline, buildings and roads in complex terrain, and highly textured buildings and terrain. The improvements due to monocular fusion with a set of different region-based image segmentations are demonstrated. The generality of this approach to stereo analysis and its utility in the development of general three-dimensional scene interpretation systems are also discussed.
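
    The underlying assumption, that disparities within a nearly homogeneous intensity patch should be mutually consistent, can be illustrated with a per-segment robust filter that replaces outlying disparities by the segment median. This is a simplified stand-in for the paper's statistical analysis.

    ```python
    import numpy as np

    def filter_disparity_by_segments(disparity, labels, max_dev=2.0):
        """Replace disparity outliers and holes within each monocular segment by
        the segment's median disparity.
        disparity : (H, W) disparity map with NaN for unmatched pixels
        labels    : (H, W) integer segment labels from the monocular segmentation
        max_dev   : outlier threshold in robust (MAD-scaled) standard deviations."""
        out = disparity.copy()
        for lbl in np.unique(labels):
            seg = labels == lbl
            vals = disparity[seg]
            good = vals[np.isfinite(vals)]
            if good.size == 0:
                continue
            med = np.median(good)
            mad = 1.4826 * np.median(np.abs(good - med)) + 1e-6
            # Holes (NaN) and pixels far from the segment's robust centre are replaced.
            bad = seg & ~(np.abs(disparity - med) <= max_dev * mad)
            out[bad] = med
        return out
    ```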

  1. A smart telerobotic system driven by monocular vision

    NASA Technical Reports Server (NTRS)

    Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.

    1994-01-01

    A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.

  2. Neural correlates of monocular and binocular depth cues based on natural images: a LORETA analysis.

    PubMed

    Fischmeister, Florian Ph S; Bauer, Herbert

    2006-10-01

    Functional imaging studies investigating perception of depth rely solely on one type of depth cue based on non-natural stimulus material. To overcome these limitations and to provide a more realistic and complete set of depth cues natural stereoscopic images were used in this study. Using slow cortical potentials and source localization we aimed to identify the neural correlates of monocular and binocular depth cues. This study confirms and extends functional imaging studies, showing that natural images provide a good, reliable, and more realistic alternative to artificial stimuli, and demonstrates the possibility to separate the processing of different depth cues.

  3. Monocular Depth Perception and Robotic Grasping of Novel Objects

    DTIC Science & Technology

    2009-06-01

    resulting algorithm is able to learn monocular vision cues that accurately estimate the relative depths of obstacles in a scene. Reinforcement learning ... learning still make sense in these settings? Since many of the cues that are useful for estimating depth can be re-created in synthetic images, we...supervised learning approach to this problem, and use a Markov Random Field (MRF) to model the scene depth as a function of the image features. We show

  4. Virtual three-dimensional blackboard: three-dimensional finger tracking with a single camera

    NASA Astrophysics Data System (ADS)

    Wu, Andrew; Hassan-Shafique, Khurram; Shah, Mubarak; da Vitoria Lobo, N.

    2004-01-01

    We present a method for three-dimensional (3D) tracking of a human finger from a monocular sequence of images. To recover the third dimension from the two-dimensional images, we use the fact that the motion of the human arm is highly constrained owing to the dependencies between elbow and forearm and the physical constraints on joint angles. We use these anthropometric constraints to derive a 3D trajectory of a gesticulating arm. The system is fully automated and does not require human intervention. The system presented can be used as a visualization tool, as a user-input interface, or as part of some gesture-analysis system in which 3D information is important.

  5. Estimating Number of People Using Calibrated Monocular Camera Based on Geometrical Analysis of Surface Area

    NASA Astrophysics Data System (ADS)

    Arai, Hiroyuki; Miyagawa, Isao; Koike, Hideki; Haseyama, Miki

    We propose a novel technique for estimating the number of people in a video sequence; it has the advantages of being stable even in crowded situations and needing no ground-truth data. By analyzing the geometrical relationships between image pixels and their intersection volumes in the real world quantitatively, a foreground image directly indicates the number of people. Because foreground detection is possible even in crowded situations, the proposed method can be applied in such situations. Moreover, it can estimate the number of people in an a priori manner, so it needs no ground-truth data unlike existing feature-based estimation techniques. Experiments show the validity of the proposed method.
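
    With a calibrated camera, the count reduces to summing foreground pixels weighted by a precomputed geometric factor expressing how much of a person each pixel's back-projected volume accounts for. A minimal sketch assuming such a weight map has already been derived from the calibration (the derivation itself is the substance of the paper and is not reproduced here):

    ```python
    import numpy as np

    def estimate_people(foreground: np.ndarray, weight_map: np.ndarray) -> float:
        """Estimate the number of people in a frame.
        foreground : (H, W) binary foreground mask from background subtraction
        weight_map : (H, W) per-pixel geometric weights derived from the calibrated
                     camera (fraction of one person contributed by each pixel)."""
        return float((foreground.astype(bool) * weight_map).sum())
    ```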

  6. Bayesian depth estimation from monocular natural images.

    PubMed

    Su, Che-Chun; Cormack, Lawrence K; Bovik, Alan C

    2017-05-01

    Estimating an accurate and naturalistic dense depth map from a single monocular photographic image is a difficult problem. Nevertheless, human observers have little difficulty understanding the depth structure implied by photographs. Two-dimensional (2D) images of the real-world environment contain significant statistical information regarding the three-dimensional (3D) structure of the world that the vision system likely exploits to compute perceived depth, monocularly as well as binocularly. Toward understanding how this might be accomplished, we propose a Bayesian model of monocular depth computation that recovers detailed 3D scene structures by extracting reliable, robust, depth-sensitive statistical features from single natural images. These features are derived using well-accepted univariate natural scene statistics (NSS) models and recent bivariate/correlation NSS models that describe the relationships between 2D photographic images and their associated depth maps. This is accomplished by building a dictionary of canonical local depth patterns from which NSS features are extracted as prior information. The dictionary is used to create a multivariate Gaussian mixture (MGM) likelihood model that associates local image features with depth patterns. A simple Bayesian predictor is then used to form spatial depth estimates. The depth results produced by the model, despite its simplicity, correlate well with ground-truth depths measured by a current-generation terrestrial light detection and ranging (LIDAR) scanner. Such a strong form of statistical depth information could be used by the visual system when creating overall estimated depth maps incorporating stereopsis, accommodation, and other conditions. Indeed, even in isolation, the Bayesian predictor delivers depth estimates that are competitive with state-of-the-art "computer vision" methods that utilize highly engineered image features and sophisticated machine learning algorithms.

  7. Grouping of optic flow stimuli during binocular rivalry is driven by monocular information.

    PubMed

    Holten, Vivian; Stuit, Sjoerd M; Verstraten, Frans A J; van der Smagt, Maarten J

    2016-10-01

During binocular rivalry, perception alternates between two dissimilar images, presented dichoptically. Although binocular rivalry is thought to result from competition at a local level, neighboring image parts with similar features tend to be perceived together for longer durations than image parts with dissimilar features. This simultaneous dominance of two image parts is called grouping during rivalry. Previous studies have shown that this grouping depends on a shared eye-of-origin to a much larger extent than on image content, irrespective of the complexity of a static image. In the current study, we examine whether grouping of dynamic optic flow patterns is also primarily driven by monocular (eye-of-origin) information. In addition, we examine whether image parameters, such as optic flow direction, and partial versus full visibility of the optic flow pattern, affect grouping durations during rivalry. The results show that grouping of optic flow is, as is known for static images, primarily affected by its eye-of-origin. Furthermore, global motion can affect grouping durations, but only under specific conditions, namely when the two full optic flow patterns were presented locally. These results suggest that grouping during rivalry is primarily driven by monocular information even for motion stimuli thought to rely on higher-level motion areas. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Manifolds for pose tracking from monocular video

    NASA Astrophysics Data System (ADS)

    Basu, Saurav; Poulin, Joshua; Acton, Scott T.

    2015-03-01

    We formulate a simple human-pose tracking theory from monocular video based on the fundamental relationship between changes in pose and image motion vectors. We investigate the natural embedding of the low-dimensional body pose space into a high-dimensional space of body configurations that behaves locally in a linear manner. The embedded manifold facilitates the decomposition of the image motion vectors into basis motion vector fields of the tangent space to the manifold. This approach benefits from the style invariance of image motion flow vectors, and experiments to validate the fundamental theory show reasonable accuracy (within 4.9 deg of the ground truth).

  9. Trade-offs arising from mixture of color cueing and monocular, binoptic, and stereoscopic cueing information for simulated rotorcraft flight

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V.; Williams, Steven P.

    1993-01-01

    To provide stereopsis, binocular helmet-mounted display (HMD) systems must trade some of the total field of view available from their two monocular fields to obtain a partial overlap region. The visual field then provides a mixture of cues, with monocular regions on both peripheries and a binoptic (the same image in both eyes) region or, if lateral disparity is introduced to produce two images, a stereoscopic region in the overlapped center. This paper reports on in-simulator assessment of the trade-offs arising from the mixture of color cueing and monocular, binoptic, and stereoscopic cueing information in peripheral monitoring displays as utilized in HMD systems. The accompanying effect of stereoscopic cueing in the tracking information in the central region of the display is also assessed. The pilot's task for the study was to fly at a prescribed height above an undulating pathway in the sky while monitoring a dynamic bar chart displayed in the periphery of their field of view. Control of the simulated rotorcraft was limited to the longitudinal and vertical degrees of freedom to ensure the lateral separation of the viewing conditions of the concurrent tasks.

  10. Linear SFM: A hierarchical approach to solving structure-from-motion problems by decoupling the linear and nonlinear components

    NASA Astrophysics Data System (ADS)

    Zhao, Liang; Huang, Shoudong; Dissanayake, Gamini

    2018-07-01

    This paper presents a novel hierarchical approach to solving structure-from-motion (SFM) problems. The algorithm begins with small local reconstructions based on nonlinear bundle adjustment (BA). These are then joined in a hierarchical manner using a strategy that requires solving a linear least squares optimization problem followed by a nonlinear transform. The algorithm can handle ordered monocular and stereo image sequences. Two stereo images or three monocular images are adequate for building each initial reconstruction. The bulk of the computation involves solving a linear least squares problem and, therefore, the proposed algorithm avoids three major issues associated with most of the nonlinear optimization algorithms currently used for SFM: the need for a reasonably accurate initial estimate, the need for iterations, and the possibility of being trapped in a local minimum. Also, by summarizing all the original observations into the small local reconstructions with associated information matrices, the proposed Linear SFM manages to preserve all the information contained in the observations. The paper also demonstrates that the proposed problem formulation results in a sparse structure that leads to an efficient numerical implementation. The experimental results using publicly available datasets show that the proposed algorithm yields solutions that are very close to those obtained using a global BA starting with an accurate initial estimate. The C/C++ source code of the proposed algorithm is publicly available at https://github.com/LiangZhaoPKUImperial/LinearSFM.
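
    Joining two local reconstructions requires the relative similarity transform estimated from their common points. The sketch below uses the closed-form Umeyama solution as an illustration; the paper's linear formulation is more general and propagates the associated information matrices.

    ```python
    import numpy as np

    def umeyama(src: np.ndarray, dst: np.ndarray):
        """Closed-form similarity transform (s, R, t) minimizing ||s*R*src + t - dst||
        over common 3-D points of two local reconstructions. src, dst: (N, 3)."""
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        xs, xd = src - mu_s, dst - mu_d
        cov = xd.T @ xs / len(src)
        U, D, Vt = np.linalg.svd(cov)
        S = np.eye(3)
        if np.linalg.det(U) * np.linalg.det(Vt) < 0:
            S[2, 2] = -1.0                       # guard against reflections
        R = U @ S @ Vt
        s = np.trace(np.diag(D) @ S) * len(src) / (xs ** 2).sum()
        t = mu_d - s * R @ mu_s
        return s, R, t

    # dst ≈ s * (R @ src.T).T + t after alignment of the two local maps.
    ```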

  11. Peripheral prism glasses: effects of moving and stationary backgrounds.

    PubMed

    Shen, Jieming; Peli, Eli; Bowers, Alex R

    2015-04-01

    Unilateral peripheral prisms for homonymous hemianopia (HH) expand the visual field through peripheral binocular visual confusion, a stimulus for binocular rivalry that could lead to reduced predominance and partial suppression of the prism image, thereby limiting device functionality. Using natural-scene images and motion videos, we evaluated whether detection was reduced in binocular compared with monocular viewing. Detection rates of nine participants with HH or quadranopia and normal binocularity wearing peripheral prisms were determined for static checkerboard perimetry targets briefly presented in the prism expansion area and the seeing hemifield. Perimetry was conducted under monocular and binocular viewing with targets presented over videos of real-world driving scenes and still frame images derived from those videos. With unilateral prisms, detection rates in the prism expansion area were significantly lower in binocular than in monocular (prism eye) viewing on the motion background (medians, 13 and 58%, respectively, p = 0.008) but not the still frame background (medians, 63 and 68%, p = 0.123). When the stimulus for binocular rivalry was reduced by fitting prisms bilaterally in one HH and one normally sighted subject with simulated HH, prism-area detection rates on the motion background were not significantly different (p > 0.6) in binocular and monocular viewing. Conflicting binocular motion appears to be a stimulus for reduced predominance of the prism image in binocular viewing when using unilateral peripheral prisms. However, the effect was only found for relatively small targets. Further testing is needed to determine the extent to which this phenomenon might affect the functionality of unilateral peripheral prisms in more real-world situations.

  12. Peripheral Prism Glasses: Effects of Moving and Stationary Backgrounds

    PubMed Central

    Shen, Jieming; Peli, Eli; Bowers, Alex R.

    2015-01-01

    Purpose Unilateral peripheral prisms for homonymous hemianopia (HH) expand the visual field through peripheral binocular visual confusion, a stimulus for binocular rivalry that could lead to reduced predominance (partial local suppression) of the prism image and limit device functionality. Using natural-scene images and motion videos, we evaluated whether detection was reduced in binocular compared to monocular viewing. Methods Detection rates of nine participants with HH or quadranopia and normal binocularity wearing peripheral prisms were determined for static checkerboard perimetry targets briefly presented in the prism expansion area and the seeing hemifield. Perimetry was conducted under monocular and binocular viewing with targets presented over videos of real-world driving scenes and still frame images derived from those videos. Results With unilateral prisms, detection rates in the prism expansion area were significantly lower in binocular than monocular (prism eye) viewing on the motion background (medians 13% and 58%, respectively, p = 0.008), but not the still frame background (63% and 68%, p = 0.123). When the stimulus for binocular rivalry was reduced by fitting prisms bilaterally in 1 HH and 1 normally-sighted subject with simulated HH, prism-area detection rates on the motion background were not significantly different (p > 0.6) in binocular and monocular viewing. Conclusions Conflicting binocular motion appears to be a stimulus for reduced predominance of the prism image in binocular viewing when using unilateral peripheral prisms. However, the effect was only found for relatively small targets. Further testing is needed to determine the extent to which this phenomenon might affect the functionality of unilateral peripheral prisms in more real-world situations. PMID:25785533

  13. Changes in dynamics of accommodation after accommodative facility training in myopes and emmetropes.

    PubMed

    Allen, Peter M; Charman, W Neil; Radhakrishnan, Hema

    2010-05-12

This study evaluates the effect of accommodative facility training in myopes and emmetropes. Monocular accommodative facility was measured in nine myopes and nine emmetropes for distance and near. Subjective facility was recorded with automated flippers and objective measurements were simultaneously taken with a PowerRefractor. Accommodative facility training (a sequence of 5 min monocular right eye, 5 min monocular left eye, 5 min binocular) was given on three consecutive days and facility was re-assessed on the fifth day. The results showed that training improved the facility rate in both groups. The improvement in facility rates was linked to the time constants and peak velocity of accommodation. Some changes in amplitude seen in emmetropes indicate an improvement in facility rate at the expense of an accurate accommodation response. Copyright 2010 Elsevier Ltd. All rights reserved.

  14. Could visual neglect induce amblyopia?

    PubMed

    Bier, J C; Vokaer, M; Fery, P; Garbusinski, J; Van Campenhoudt, G; Blecic, S A; Bartholomé, E J

    2004-12-01

Oculomotor nerve disease is a common cause of diplopia. When strabismus is present, absence of diplopia should prompt investigation of either uncovering of visual fields or of monocular suppression, amblyopia or blindness. We describe the case of a 41-year-old woman presenting with right oculomotor paresis and left object-centred visual neglect due to a right fronto-parietal haemorrhage expanding to the right peri-mesencephalic cisterna caused by the rupture of a right middle cerebral artery aneurysm. She never complained of diplopia despite binocular vision and progressive recovery of strabismus, excluding uncovering of visual fields. Since all other causes were excluded in this case, we hypothesise that the absence of diplopia was due to the object-centred visual neglect. Partial internal right oculomotor paresis causes an ocular deviation in abduction, with the perceived image deviated contralaterally to the left. Thus, in our case, the neglect of the left image is equivalent to a right monocular functional blindness. However, bell cancellation test performance clearly worsened when assessed in left monocular vision, confirming that eye patching can worsen attentional visual neglect. In conclusion, our case argues for the possibility of a functional monocular blindness induced by visual neglect. We think that in the presence of strabismus, absence of diplopia should prompt the search for hemispatial visual neglect when supratentorial lesions are suspected.

  15. Integrating Millimeter Wave Radar with a Monocular Vision Sensor for On-Road Obstacle Detection Applications

    PubMed Central

    Wang, Tao; Zheng, Nanning; Xin, Jingmin; Ma, Zheng

    2011-01-01

    This paper presents a systematic scheme for fusing millimeter wave (MMW) radar and a monocular vision sensor for on-road obstacle detection. As a whole, a three-level fusion strategy based on visual attention mechanism and driver’s visual consciousness is provided for MMW radar and monocular vision fusion so as to obtain better comprehensive performance. Then an experimental method for radar-vision point alignment for easy operation with no reflection intensity of radar and special tool requirements is put forward. Furthermore, a region searching approach for potential target detection is derived in order to decrease the image processing time. An adaptive thresholding algorithm based on a new understanding of shadows in the image is adopted for obstacle detection, and edge detection is used to assist in determining the boundary of obstacles. The proposed fusion approach is verified through real experimental examples of on-road vehicle/pedestrian detection. In the end, the experimental results show that the proposed method is simple and feasible. PMID:22164117

  16. Integrating millimeter wave radar with a monocular vision sensor for on-road obstacle detection applications.

    PubMed

    Wang, Tao; Zheng, Nanning; Xin, Jingmin; Ma, Zheng

    2011-01-01

    This paper presents a systematic scheme for fusing millimeter wave (MMW) radar and a monocular vision sensor for on-road obstacle detection. As a whole, a three-level fusion strategy based on visual attention mechanism and driver's visual consciousness is provided for MMW radar and monocular vision fusion so as to obtain better comprehensive performance. Then an experimental method for radar-vision point alignment for easy operation with no reflection intensity of radar and special tool requirements is put forward. Furthermore, a region searching approach for potential target detection is derived in order to decrease the image processing time. An adaptive thresholding algorithm based on a new understanding of shadows in the image is adopted for obstacle detection, and edge detection is used to assist in determining the boundary of obstacles. The proposed fusion approach is verified through real experimental examples of on-road vehicle/pedestrian detection. In the end, the experimental results show that the proposed method is simple and feasible.

  17. Differential processing of binocular and monocular gloss cues in human visual cortex

    PubMed Central

    Di Luca, Massimiliano; Ban, Hiroshi; Muryy, Alexander; Fleming, Roland W.

    2016-01-01

    The visual impression of an object's surface reflectance (“gloss”) relies on a range of visual cues, both monocular and binocular. Whereas previous imaging work has identified processing within ventral visual areas as important for monocular cues, little is known about cortical areas involved in processing binocular cues. Here, we used human functional MRI (fMRI) to test for brain areas selectively involved in the processing of binocular cues. We manipulated stereoscopic information to create four conditions that differed in their disparity structure and in the impression of surface gloss that they evoked. We performed multivoxel pattern analysis to find areas whose fMRI responses allow classes of stimuli to be distinguished based on their depth structure vs. material appearance. We show that higher dorsal areas play a role in processing binocular gloss information, in addition to known ventral areas involved in material processing, with ventral area lateral occipital responding to both object shape and surface material properties. Moreover, we tested for similarities between the representation of gloss from binocular cues and monocular cues. Specifically, we tested for transfer in the decoding performance of an algorithm trained on glossy vs. matte objects defined by either binocular or by monocular cues. We found transfer effects from monocular to binocular cues in dorsal visual area V3B/kinetic occipital (KO), suggesting a shared representation of the two cues in this area. These results indicate the involvement of mid- to high-level visual circuitry in the estimation of surface material properties, with V3B/KO potentially playing a role in integrating monocular and binocular cues. PMID:26912596

  18. The nature of face representations in subcortical regions.

    PubMed

    Gabay, Shai; Burlingham, Charles; Behrmann, Marlene

    2014-07-01

    Studies examining the neural correlates of face perception in humans have focused almost exclusively on the distributed cortical network of face-selective regions. Recently, however, investigations have also identified subcortical correlates of face perception and the question addressed here concerns the nature of these subcortical face representations. To explore this issue, we presented to participants pairs of images sequentially to the same or to different eyes. Superior performance in the former over latter condition implicates monocular, prestriate portions of the visual system. Over a series of five experiments, we manipulated both lower-level (size, location) as well as higher-level (identity) similarity across the pair of faces. A monocular advantage was observed even when the faces in a pair differed in location and in size, implicating some subcortical invariance across lower-level image properties. A monocular advantage was also observed when the faces in a pair were two different images of the same individual, indicating the engagement of subcortical representations in more abstract, higher-level aspects of face processing. We conclude that subcortical structures of the visual system are involved, perhaps interactively, in multiple aspects of face perception, and not simply in deriving initial coarse representations. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Importance of phase alignment for interocular suppression.

    PubMed

    Maehara, Goro; Huang, Pi-Chun; Hess, Robert F

    2009-07-01

We measured contrast thresholds for Gabor targets in the presence of maskers which had higher or lower spatial frequencies than the targets. A high-pass fractal masker elevated target contrast thresholds at low and intermediate pedestal contrasts in both monocular and dichoptic modes of presentation, suggesting that the masking occurs after a monocular processing stage. Moreover we found that a high-pass checkerboard masker elevated thresholds at the low and intermediate pedestal contrasts and that most of this threshold elevation disappeared when the phases of the masker's spatial components were scrambled. This masking was effective only in the dichoptic presentation, not in the monocular presentation. These results indicate that phase alignment of the high spatial frequency components plays a crucial role for interocular suppression. We speculate that phase alignments signal the existence of a luminance contour in the monocular image and that this signal suppresses processing of information in the other eye when there is no corresponding signal in that eye.

  20. Monocular and binocular visual impairment in the UK Biobank study: prevalence, associations and diagnoses.

    PubMed

    McKibbin, Martin; Farragher, Tracey M; Shickle, Darren

    2018-01-01

    To determine the prevalence of, associations with and diagnoses leading to mild visual impairment or worse (logMAR >0.3) in middle-aged adults in the UK Biobank study. Prevalence estimates for monocular and binocular visual impairment were determined for the UK Biobank participants with fundus photographs and spectral domain optical coherence tomography images. Associations with socioeconomic, biometric, lifestyle and medical variables were investigated for cases with visual impairment and matched controls, using multinomial logistic regression models. Self-reported eye history and image grading results were used to identify the primary diagnoses leading to visual impairment for a sample of 25% of cases. For the 65 033 UK Biobank participants, aged 40-69 years and with fundus images, 6682 (10.3%) and 1677 (2.6%) had mild visual impairment or worse in one or both eyes, respectively. Increasing deprivation, age and ethnicity were independently associated with both monocular and binocular visual impairment. No primary diagnosis for the recorded level of visual impairment could be identified for 49.8% of eyes. The most common identifiable diagnoses leading to visual impairment were cataract, amblyopia, uncorrected refractive error and vitreoretinal interface abnormalities. The prevalence of visual impairment in the UK Biobank study cohort is lower than for population-based studies from other industrialised countries. Monocular and binocular visual impairment are associated with increasing deprivation, age and ethnicity. The UK Biobank dataset does not allow confident identification of the causes of visual impairment, and the results may not be applicable to the wider UK population.

  1. Monocular and binocular visual impairment in the UK Biobank study: prevalence, associations and diagnoses

    PubMed Central

    Farragher, Tracey M; Shickle, Darren

    2018-01-01

    Objective To determine the prevalence of, associations with and diagnoses leading to mild visual impairment or worse (logMAR >0.3) in middle-aged adults in the UK Biobank study. Methods and analysis Prevalence estimates for monocular and binocular visual impairment were determined for the UK Biobank participants with fundus photographs and spectral domain optical coherence tomography images. Associations with socioeconomic, biometric, lifestyle and medical variables were investigated for cases with visual impairment and matched controls, using multinomial logistic regression models. Self-reported eye history and image grading results were used to identify the primary diagnoses leading to visual impairment for a sample of 25% of cases. Results For the 65 033 UK Biobank participants, aged 40–69 years and with fundus images, 6682 (10.3%) and 1677 (2.6%) had mild visual impairment or worse in one or both eyes, respectively. Increasing deprivation, age and ethnicity were independently associated with both monocular and binocular visual impairment. No primary diagnosis for the recorded level of visual impairment could be identified for 49.8% of eyes. The most common identifiable diagnoses leading to visual impairment were cataract, amblyopia, uncorrected refractive error and vitreoretinal interface abnormalities. Conclusions The prevalence of visual impairment in the UK Biobank study cohort is lower than for population-based studies from other industrialised countries. Monocular and binocular visual impairment are associated with increasing deprivation, age and ethnicity. The UK Biobank dataset does not allow confident identification of the causes of visual impairment, and the results may not be applicable to the wider UK population. PMID:29657974

  2. Comparison of Prevalence of Diabetic Macular Edema Based on Monocular Fundus Photography vs Optical Coherence Tomography.

    PubMed

    Wang, Yu T; Tadarati, Mongkol; Wolfson, Yulia; Bressler, Susan B; Bressler, Neil M

    2016-02-01

    Diagnosing diabetic macular edema (DME) from monocular fundus photography vs optical coherence tomography (OCT) central subfield thickness (CST) can yield different prevalence rates for DME. Epidemiologic studies and telemedicine screening typically use monocular fundus photography, while treatment of DME uses OCT CST. To compare DME prevalence from monocular fundus photography and OCT. Retrospective cross-sectional study of DME grading based on monocular fundus photographs and OCT images obtained from patients with diabetic retinopathy at a single visit between July 1, 2011, and June 30, 2014, at a university-based practice and analyzed between July 30, 2014, and May 29, 2015. Presence of DME, including clinically significant macular edema (CSME), on monocular fundus photographs used definitions from the Multi-Ethnic Study of Atherosclerosis (MESA) and the National Health and Nutrition Examination Survey (NHANES). Presence of DME on OCT used Diabetic Retinopathy Clinical Research Network eligibility criteria thresholds of CST for trials evaluating anti-vascular endothelial growth factor treatments. Prevalence of DME based on monocular fundus photographs or OCT. A total of 246 eyes of 158 participants (mean [SD] age, 65.0 [11.9] years; 48.7% women; 60.8% white) were included. Among the 246 eyes, the prevalences of DME (61.4%) and CSME (48.5%) based on MESA definitions for monocular fundus photographs were greater than the DME prevalence based on OCT (21.1%) by 40.2% (95% CI, 32.8%-47.7%; P < .001) and 27.2% (95% CI, 19.2%-35.3%; P < .001), respectively. Using NHANES definitions, DME and CSME prevalences from monocular fundus photographs (28.5% and 21.0%, respectively) approximated the DME prevalence from OCT (21.1%). However, among eyes without DME on OCT, 58.2% (95% CI, 51.0%-65.3%) and 18.0% (95% CI, 12.9%-24.2%) were diagnosed as having DME on monocular fundus photographs using MESA and NHANES definitions, respectively, including 47.0% (95% CI, 39.7%-54.5%) and 10.3% (95% CI, 6.3%-15.7%), respectively, with CSME. Among eyes with DME on OCT, 26.9% (95% CI, 15.6%-41.0%) and 32.7% (95% CI, 20.3%-47.1%) were not diagnosed as having either DME or CSME on monocular fundus photographs using MESA and NHANES definitions, respectively. These data suggest that many eyes diagnosed as having DME or CSME on monocular fundus photographs have no DME based on OCT CST, while many eyes diagnosed as not having DME or CSME on monocular fundus photographs have DME on OCT. While limited to 1 clinical practice, caution is suggested when extrapolating prevalence of eyes that may benefit from anti-vascular endothelial growth factor therapy based on epidemiologic surveys using photographs to diagnose DME.

  3. Differential processing of binocular and monocular gloss cues in human visual cortex.

    PubMed

    Sun, Hua-Chun; Di Luca, Massimiliano; Ban, Hiroshi; Muryy, Alexander; Fleming, Roland W; Welchman, Andrew E

    2016-06-01

    The visual impression of an object's surface reflectance ("gloss") relies on a range of visual cues, both monocular and binocular. Whereas previous imaging work has identified processing within ventral visual areas as important for monocular cues, little is known about cortical areas involved in processing binocular cues. Here, we used human functional MRI (fMRI) to test for brain areas selectively involved in the processing of binocular cues. We manipulated stereoscopic information to create four conditions that differed in their disparity structure and in the impression of surface gloss that they evoked. We performed multivoxel pattern analysis to find areas whose fMRI responses allow classes of stimuli to be distinguished based on their depth structure vs. material appearance. We show that higher dorsal areas play a role in processing binocular gloss information, in addition to known ventral areas involved in material processing, with ventral area lateral occipital responding to both object shape and surface material properties. Moreover, we tested for similarities between the representation of gloss from binocular cues and monocular cues. Specifically, we tested for transfer in the decoding performance of an algorithm trained on glossy vs. matte objects defined by either binocular or by monocular cues. We found transfer effects from monocular to binocular cues in dorsal visual area V3B/kinetic occipital (KO), suggesting a shared representation of the two cues in this area. These results indicate the involvement of mid- to high-level visual circuitry in the estimation of surface material properties, with V3B/KO potentially playing a role in integrating monocular and binocular cues. Copyright © 2016 the American Physiological Society.

  4. Accommodative Performance of Children With Unilateral Amblyopia

    PubMed Central

    Manh, Vivian; Chen, Angela M.; Tarczy-Hornoch, Kristina; Cotter, Susan A.; Candy, T. Rowan

    2015-01-01

    Purpose. The purpose of this study was to compare the accommodative performance of the amblyopic eye of children with unilateral amblyopia to that of their nonamblyopic eye, and also to that of children without amblyopia, during both monocular and binocular viewing. Methods. Modified Nott retinoscopy was used to measure accommodative performance of 38 subjects with unilateral amblyopia and 25 subjects with typical vision from 3 to 13 years of age during monocular and binocular viewing at target distances of 50, 33, and 25 cm. The relationship between accommodative demand and interocular difference (IOD) in accommodative error was assessed in each group. Results. The mean IOD in monocular accommodative error for amblyopic subjects across all three viewing distances was 0.49 diopters (D) (95% confidence interval [CI], ±1.12 D) in the 180° meridian and 0.54 D (95% CI, ±1.27 D) in the 90° meridian, with the amblyopic eye exhibiting greater accommodative errors on average. Interocular difference in monocular accommodative error increased significantly with increasing accommodative demand; 5%, 47%, and 58% of amblyopic subjects had monocular errors in the amblyopic eye that fell outside the upper 95% confidence limit for the better eye of control subjects at viewing distances of 50, 33, and 25 cm, respectively. Conclusions. When viewing monocularly, children with unilateral amblyopia had greater mean accommodative errors in their amblyopic eyes than in their nonamblyopic eyes, and when compared with control subjects. This could lead to unintended retinal image defocus during patching therapy for amblyopia. PMID:25626970

  5. Monocular correspondence detection for symmetrical objects by template matching

    NASA Astrophysics Data System (ADS)

    Vilmar, G.; Besslich, Philipp W., Jr.

    1990-09-01

    We describe a method for reconstructing 3-D information from a single view of a 3-D bilaterally symmetric object. The symmetry assumption allows us to obtain a "second view" from a different viewpoint by a simple reflection of the monocular image. We therefore have to solve the correspondence problem in a special case where known feature-based or area-based binocular approaches fail. In principle, our approach is based on frequency-domain template matching of the features on the epipolar lines. During a training period, our system "learns" the assignment of correspondence models to image features. The object shape is interpolated when no template matches the image features. This is an important advantage of the methodology, because no "real world" image satisfies the symmetry assumption perfectly. To simplify the training process we used single views of human faces (e.g., passport photos), but our system is trainable on any other kind of object.

  6. Human Pose Estimation from Monocular Images: A Comprehensive Survey

    PubMed Central

    Gong, Wenjuan; Zhang, Xuena; Gonzàlez, Jordi; Sobral, Andrews; Bouwmans, Thierry; Tu, Changhe; Zahzah, El-hadi

    2016-01-01

    Human pose estimation refers to the estimation of the location of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing). Several surveys on human pose estimation can be found in the literature, but each focuses on a particular category, for example model-based approaches or human motion analysis. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms for this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out, including milestone works and recent advancements. Based on one standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Modeling methods are organized along two lines of categorization: one distinguishes top-down from bottom-up methods, and the other distinguishes generative from discriminative methods. Considering that one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections for motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper also collects 26 publicly available data sets for validation and provides error measurement methods that are frequently used. PMID:27898003

  7. Comparative assessment of techniques for initial pose estimation using monocular vision

    NASA Astrophysics Data System (ADS)

    Sharma, Sumant; D'Amico, Simone

    2016-06-01

    This work addresses the comparative assessment of initial pose estimation techniques for monocular navigation to enable formation-flying and on-orbit servicing missions. Monocular navigation relies on finding an initial pose, i.e., a coarse estimate of the attitude and position of the space-resident object with respect to the camera, based on a minimum number of features from a three-dimensional computer model and a single two-dimensional image. The initial pose is estimated without the use of fiducial markers, without any range measurements or any a priori relative motion information. Prior work has been done to compare different pose estimators for terrestrial applications, but there is a lack of functional and performance characterization of such algorithms in the context of missions involving rendezvous operations in the space environment. Use of state-of-the-art pose estimation algorithms designed for terrestrial applications is challenging in space due to factors such as limited on-board processing power, low carrier-to-noise ratio, and high image contrasts. This paper focuses on performance characterization of three initial pose estimation algorithms in the context of such missions and suggests improvements.
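
    The core problem above is a Perspective-n-Point (PnP) solve from a handful of 2D-3D feature correspondences. The following is a minimal, self-contained sketch of that setup using OpenCV's generic EPnP solver with synthetic values; it illustrates the problem formulation and is not one of the three algorithms assessed in the paper.

```python
# Illustrative PnP sketch: recover a coarse pose from 2D-3D correspondences with
# OpenCV's EPnP solver, assuming known camera intrinsics.  All values are synthetic.
import numpy as np
import cv2

# Hypothetical 3D feature points on the target model (metres, body frame).
model_points = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                         [0.0, 0.0, 1.0], [1.0, 1.0, 0.5], [0.5, 1.0, 1.0]])

# Assumed pinhole intrinsics with no lens distortion.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# Synthesize image observations from a known ground-truth pose ...
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([0.2, -0.1, 6.0])
image_points, _ = cv2.projectPoints(model_points, rvec_true, tvec_true, K, dist)

# ... and recover that pose from the single monocular view.
ok, rvec, tvec = cv2.solvePnP(model_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_EPNP)
R, _ = cv2.Rodrigues(rvec)                    # rotation matrix: camera <- body
print("recovered translation (m):", tvec.ravel())   # close to tvec_true
```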

  8. Deficient Binocular Combination Reveals Mechanisms of Anisometropic Amblyopia: Signal Attenuation and Interocular Inhibition

    PubMed Central

    Huang, Chang-Bing; Zhou, Jiawei; Lu, Zhong-Lin; Zhou, Yifeng

    2012-01-01

    Amblyopia is a developmental disorder that results in deficits of monocular and binocular vision. It's presently unclear whether these deficits result from attenuation of signals in the amblyopic eye, inhibition by signals in the fellow eye, or both. In this study, we characterize the mechanisms underlying anisometropic amblyopia using a binocular phase and contrast combination paradigm and a contrast-gain control model. Subjects dichoptically viewed two slightly different images and reported the perceived contrast and phase of the resulting cyclopean percept. We found that the properties of binocular combination were abnormal in many aspects, which is explained by a combination of (1) attenuated monocular signal in the amblyopic eye, (2) stronger interocular contrast-gain control from the fellow eye to the signal in amblyopic eye (direct interocular inhibition), and (3) stronger interocular contrast-gain control from the fellow eye to the contrast gain control signal from the amblyopic eye (indirect interocular inhibition). We conclude that anisometropic amblyopia led to both monocular and interocular deficits. A complete understanding of the mechanisms underlying amblyopia requires studies of both monocular deficits and binocular interactions. PMID:21546609

  9. The selection of the optimal baseline in the front-view monocular vision system

    NASA Astrophysics Data System (ADS)

    Xiong, Bincheng; Zhang, Jun; Zhang, Daimeng; Liu, Xiaomao; Tian, Jinwen

    2018-03-01

    In the front-view monocular vision system, the accuracy of solving the depth field depends on the length of the inter-frame baseline and the accuracy of the image matching result. In general, a longer baseline leads to higher precision in solving the depth field. At the same time, however, the difference between the inter-frame images increases, which makes image matching more difficult, decreases matching accuracy, and may ultimately cause the depth-field solution to fail. One common practice is to use a tracking-and-matching method to improve the matching accuracy between images, but this approach is prone to matching drift between images with a large interval, resulting in cumulative matching error, so that the accuracy of the solved depth field is still very low. In this paper, we propose a depth field fusion algorithm based on the optimal length of the baseline. Firstly, we analyze the quantitative relationship between the accuracy of the depth field calculation and the length of the inter-frame baseline, and find the optimal baseline length through extensive experiments; secondly, we introduce the inverse depth filtering technique from sparse SLAM and solve the depth field under the constraint of the optimal baseline length. A large number of experiments show that our algorithm can effectively eliminate the mismatches caused by image changes and can still solve the depth field correctly in the large-baseline case. Our algorithm is superior to the traditional SfM algorithm in time and space complexity. The optimal baseline obtained from these experiments provides guidance for depth field calculation in front-view monocular vision systems.
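
    The qualitative relation exploited above can be illustrated with the standard triangulation error model: depth Z = f·B/d, so the depth uncertainty grows roughly as Z²·σ_d/(f·B). A minimal numeric sketch follows; all values are hypothetical and do not reflect the paper's optimal baseline.

```python
# Minimal illustration of the baseline/accuracy trade-off: for triangulation,
# Z = f * B / d, so the depth error grows as sigma_Z ~ Z^2 * sigma_d / (f * B).
import numpy as np

f = 700.0          # focal length in pixels (hypothetical)
sigma_d = 0.5      # matching (parallax) error in pixels
Z = 20.0           # true scene depth in metres

for B in [0.05, 0.1, 0.2, 0.5, 1.0]:    # candidate inter-frame baselines (m)
    d = f * B / Z                        # expected parallax in pixels
    sigma_Z = (Z ** 2) * sigma_d / (f * B)
    print(f"baseline {B:4.2f} m -> parallax {d:5.1f} px, depth std {sigma_Z:5.2f} m")
```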

  10. Dynamic Human Body Modeling Using a Single RGB Camera.

    PubMed

    Zhu, Haiyu; Yu, Yao; Zhou, Yu; Du, Sidan

    2016-03-18

    In this paper, we present a novel automatic pipeline to build personalized parametric models of dynamic people using a single RGB camera. Compared to previous approaches that use monocular RGB images, our system can model a 3D human body automatically and incrementally, taking advantage of human motion. Based on coarse 2D and 3D poses estimated from image sequences, we first perform a kinematic classification of human body parts to refine the poses and obtain reconstructed body parts. Next, a personalized parametric human model is generated by driving a general template to fit the body parts and calculating the non-rigid deformation. Experimental results show that our shape estimation method achieves comparable accuracy with reconstructed models using depth cameras, yet requires neither user interaction nor any dedicated devices, leading to the feasibility of using this method on widely available smart phones.

  11. Dynamic Human Body Modeling Using a Single RGB Camera

    PubMed Central

    Zhu, Haiyu; Yu, Yao; Zhou, Yu; Du, Sidan

    2016-01-01

    In this paper, we present a novel automatic pipeline to build personalized parametric models of dynamic people using a single RGB camera. Compared to previous approaches that use monocular RGB images, our system can model a 3D human body automatically and incrementally, taking advantage of human motion. Based on coarse 2D and 3D poses estimated from image sequences, we first perform a kinematic classification of human body parts to refine the poses and obtain reconstructed body parts. Next, a personalized parametric human model is generated by driving a general template to fit the body parts and calculating the non-rigid deformation. Experimental results show that our shape estimation method achieves comparable accuracy with reconstructed models using depth cameras, yet requires neither user interaction nor any dedicated devices, leading to the feasibility of using this method on widely available smart phones. PMID:26999159

  12. 3-D model-based vehicle tracking.

    PubMed

    Lou, Jianguang; Tan, Tieniu; Hu, Weiming; Yang, Hao; Maybank, Steven J

    2005-10-01

    This paper aims at tracking vehicles from monocular intensity image sequences and presents an efficient and robust approach to three-dimensional (3-D) model-based vehicle tracking. Under the weak perspective assumption and the ground-plane constraint, the movements of model projection in the two-dimensional image plane can be decomposed into two motions: translation and rotation. They are the results of the corresponding movements of 3-D translation on the ground plane (GP) and rotation around the normal of the GP, which can be determined separately. A new metric based on point-to-line segment distance is proposed to evaluate the similarity between an image region and an instantiation of a 3-D vehicle model under a given pose. Based on this, we provide an efficient pose refinement method to refine the vehicle's pose parameters. An improved EKF is also proposed to track and to predict vehicle motion with a precise kinematics model. Experimental results with both indoor and outdoor data show that the algorithm obtains desirable performance even under severe occlusion and clutter.
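
    The geometric primitive behind the proposed similarity metric is the distance from an image point to a projected model edge segment. A generic sketch of that primitive follows; the paper's full metric, which aggregates this distance over the projected edges of the vehicle model under a given pose, is not reproduced here.

```python
# Generic point-to-line-segment distance, the geometric primitive behind the
# similarity metric described above.
import numpy as np

def point_segment_distance(p, a, b):
    """Euclidean distance from 2D point p to the segment a-b."""
    p, a, b = map(np.asarray, (p, a, b))
    ab = b - a
    denom = float(np.dot(ab, ab))
    if denom == 0.0:                      # degenerate segment
        return float(np.linalg.norm(p - a))
    t = np.clip(np.dot(p - a, ab) / denom, 0.0, 1.0)
    proj = a + t * ab                     # closest point on the segment
    return float(np.linalg.norm(p - proj))

# Example: distance from an image edge point to a projected model edge.
print(point_segment_distance([3.0, 4.0], [0.0, 0.0], [10.0, 0.0]))  # -> 4.0
```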

  13. Monocular Vision-Based Underwater Object Detection

    PubMed Central

    Zhang, Zhen; Dai, Fengzhao; Bu, Yang; Wang, Huibin

    2017-01-01

    In this paper, we propose an underwater object detection method using monocular vision sensors. In addition to commonly used visual features such as color and intensity, we investigate the potential of underwater object detection using light transmission information. The global contrast of various features is used to initially identify the region of interest (ROI), which is then filtered by the image segmentation method, producing the final underwater object detection results. We test the performance of our method with diverse underwater datasets. Samples of the datasets are acquired by a monocular camera with different qualities (such as resolution and focal length) and setups (viewing distance, viewing angle, and optical environment). It is demonstrated that our ROI detection method is necessary and can largely remove the background noise and significantly increase the accuracy of our underwater object detection method. PMID:28771194

  14. A Monocular Vision Measurement System of Three-Degree-of-Freedom Air-Bearing Test-Bed Based on FCCSP

    NASA Astrophysics Data System (ADS)

    Gao, Zhanyu; Gu, Yingying; Lv, Yaoyu; Xu, Zhenbang; Wu, Qingwen

    2018-06-01

    A monocular vision-based pose measurement system is provided for real-time measurement of a three-degree-of-freedom (3-DOF) air-bearing test-bed. Firstly, a circular plane cooperative target is designed. An image of a target fixed on the test-bed is then acquired. Blob analysis-based image processing is used to detect the object circles on the target. A fast algorithm (FCCSP) based on pixel statistics is proposed to extract the centers of object circles. Finally, pose measurements can be obtained when combined with the centers and the coordinate transformation relation. Experiments show that the proposed method is fast, accurate, and robust enough to satisfy the requirement of the pose measurement.
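
    As a rough stand-in for the centre-extraction step, the centroid of each bright blob after thresholding approximates a target circle centre. The sketch below uses OpenCV's connected-component statistics rather than the paper's FCCSP algorithm.

```python
# Hedged stand-in for circle-centre extraction: threshold the image and take the
# sub-pixel centroid of each sufficiently large blob (not the FCCSP algorithm).
import cv2
import numpy as np

def circle_centers(gray, min_area=50):
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    centers = []
    for i in range(1, n):                          # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            centers.append(tuple(centroids[i]))    # sub-pixel (x, y) centroid
    return centers

# Usage with a synthetic test image containing two filled circles.
img = np.zeros((240, 320), np.uint8)
cv2.circle(img, (80, 120), 20, 255, -1)
cv2.circle(img, (200, 100), 25, 255, -1)
print(circle_centers(img))
```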

  15. Depth-estimation-enabled compound eyes

    NASA Astrophysics Data System (ADS)

    Lee, Woong-Bi; Lee, Heung-No

    2018-04-01

    Most animals that have compound eyes determine object distances by using monocular cues, especially motion parallax. In artificial compound eye imaging systems inspired by natural compound eyes, object depths are typically estimated by measuring optic flow; however, this requires mechanical movement of the compound eyes or additional acquisition time. In this paper, we propose a method for estimating object depths in a monocular compound eye imaging system based on the computational compound eye (COMPU-EYE) framework. In the COMPU-EYE system, acceptance angles are considerably larger than interommatidial angles, causing overlap between the ommatidial receptive fields. In the proposed depth estimation technique, the disparities between these receptive fields are used to determine object distances. We demonstrate that the proposed depth estimation technique can estimate the distances of multiple objects.

  16. Pupil responses to near visual demand during human visual development

    PubMed Central

    Bharadwaj, Shrikant R.; Wang, Jingyun; Candy, T. Rowan

    2014-01-01

    Pupil responses of adults to near visual demands are well characterized, but those of typically developing infants and children are not. This study determined the following pupil characteristics of infants, children, and adults using a PowerRefractor (25 Hz): i) binocular and monocular responses to a cartoon movie that ramped between 80 and 33 cm (20 infants, 20 2–4-yr-olds and 20 adults participated); ii) binocular and monocular response thresholds for 0.1 Hz sinusoidal stimuli of 0.25 D, 0.5 D or 0.75 D amplitude (33 infants and 8 adults participated); iii) steady-state stability of pupil responses at 80 cm (8 infants and 8 adults participated). The change in pupil diameter with viewing distance (Δpd) was significantly smaller in infants and 2–4-yr-olds than in adults (p < 0.001) and significantly smaller under monocular than binocular conditions (p < 0.001). The 0.75 D sinusoidal stimulus elicited a significant binocular pupillary response in infants and a significant binocular and monocular pupillary response in adults. Steady-state pupillary fluctuations were similar in infants and adults (p = 0.25). The results suggest that the contribution of pupil size to changes in retinal image quality when tracking slow moving objects may be smaller during development than in adulthood. Smaller monocular Δpd reflects the importance of binocular cues in driving near-pupillary responses. PMID:21482712

  17. Slant Perception Under Stereomicroscopy.

    PubMed

    Horvath, Samantha; Macdonald, Kori; Galeotti, John; Klatzky, Roberta L

    2017-11-01

    Objective These studies used threshold and slant-matching tasks to assess and quantitatively measure human perception of 3-D planar images viewed through a stereomicroscope. The results are intended for use in developing augmented-reality surgical aids. Background Substantial research demonstrates that slant perception is performed with high accuracy from monocular and binocular cues, but less research concerns the effects of magnification. Viewing through a microscope affects the utility of monocular and stereo slant cues, but its impact is as yet unknown. Method Participants performed in a threshold slant-detection task and matched the slant of a tool to a surface. Different stimuli and monocular versus binocular viewing conditions were implemented to isolate stereo cues alone, stereo with perspective cues, accommodation cue only, and cues intrinsic to optical-coherence-tomography images. Results At magnification of 5x, slant thresholds with stimuli providing stereo cues approximated those reported for direct viewing, about 12°. Most participants (75%) who passed a stereoacuity pretest could match a tool to the slant of a surface viewed with stereo at 5x magnification, with mean compressive error of about 20% for optimized surfaces. Slant matching to optical coherence tomography images of the cornea viewed under the microscope was also demonstrated. Conclusion Despite the distortions and cue loss introduced by viewing under the stereomicroscope, most participants were able to detect and interact with slanted surfaces. Application The experiments demonstrated sensitivity to surface slant that supports the development of augmented-reality systems to aid microscope-aided surgery.

  18. Monocular Stereo Measurement Using High-Speed Catadioptric Tracking

    PubMed Central

    Hu, Shaopeng; Matsumoto, Yuji; Takaki, Takeshi; Ishii, Idaku

    2017-01-01

    This paper presents a novel concept of real-time catadioptric stereo tracking using a single ultrafast mirror-drive pan-tilt active vision system that can simultaneously switch between hundreds of different views in a second. By accelerating video-shooting, computation, and actuation at the millisecond-granularity level for time-division multithreaded processing in ultrafast gaze control, the active vision system can function virtually as two or more tracking cameras with different views. It enables a single active vision system to act as virtual left and right pan-tilt cameras that can simultaneously shoot a pair of stereo images for the same object to be observed at arbitrary viewpoints by switching the direction of the mirrors of the active vision system frame by frame. We developed a monocular galvano-mirror-based stereo tracking system that can switch between 500 different views in a second, and it functions as a catadioptric active stereo with left and right pan-tilt tracking cameras that can virtually capture 8-bit color 512×512 images each operating at 250 fps to mechanically track a fast-moving object with a sufficient parallax for accurate 3D measurement. Several tracking experiments for moving objects in 3D space are described to demonstrate the performance of our monocular stereo tracking system. PMID:28792483
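
    Once the mirror-defined left and right viewpoints are calibrated, 3D measurement reduces to standard two-view triangulation. A minimal sketch with synthetic projection matrices and observations follows; it is not the authors' implementation.

```python
# Minimal two-view triangulation from the virtual left/right viewpoints, assuming
# the projection matrix of each mirror-defined view is known from calibration.
import numpy as np
import cv2

K = np.array([[600.0, 0.0, 256.0],
              [0.0, 600.0, 256.0],
              [0.0, 0.0, 1.0]])

# Virtual left camera at the origin; virtual right camera shifted 0.3 m along x.
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-0.3], [0.0], [0.0]])])

# Matching observations of the same object point in the two views (pixels, 2x1).
x_left = np.array([[300.0], [260.0]])
x_right = np.array([[210.0], [260.0]])

X_h = cv2.triangulatePoints(P_left, P_right, x_left, x_right)
X = (X_h[:3] / X_h[3]).ravel()
print("triangulated point (m):", X)      # roughly [0.147, 0.013, 2.0]
```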

  19. Neural networks application to divergence-based passive ranging

    NASA Technical Reports Server (NTRS)

    Barniv, Yair

    1992-01-01

    The purpose of this report is to summarize the state of knowledge and outline the planned work in a divergence-based/neural-network approach to the problem of passive ranging derived from optical flow. Work in this and closely related areas is reviewed in order to provide the necessary background for further developments. New ideas about devising a monocular passive-ranging system are then introduced. It is shown that image-plane divergence is independent of image-plane location with respect to the focus of expansion and of camera maneuvers, because it directly measures the object's expansion, which, in turn, is related to the time-to-collision. Thus, a divergence-based method has the potential of providing a reliable range, complementing other monocular passive-ranging methods, which encounter difficulties in image areas close to the focus of expansion. Image-plane divergence can be thought of as a spatial/temporal pattern. A neural network realization was chosen for this task because neural networks have generally performed well in various other pattern recognition applications. The main goal of this work is to teach a neural network to derive the divergence from the imagery.
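
    For a fronto-parallel surface approached along the optical axis, the flow field is u = x/TTC, v = y/TTC, so its divergence equals 2/TTC. The sketch below illustrates only this geometric relation on synthetic flow; the report's neural-network estimator is not reproduced.

```python
# Minimal divergence-based time-to-collision, assuming a fronto-parallel surface
# approached along the optical axis (div(flow) = 2 / TTC under that assumption).
import numpy as np

def time_to_collision(u, v, dx=1.0):
    """u, v: dense optical-flow components (pixels/frame) on a regular grid."""
    du_dx = np.gradient(u, dx, axis=1)
    dv_dy = np.gradient(v, dx, axis=0)
    divergence = np.mean(du_dx + dv_dy)   # average over the image patch
    return 2.0 / divergence               # in frames

# Synthetic looming flow with a known TTC of 50 frames.
ttc_true = 50.0
y, x = np.mgrid[-100:101, -100:101].astype(float)
u, v = x / ttc_true, y / ttc_true
print(time_to_collision(u, v))            # ~50.0
```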

  20. Aerial vehicles collision avoidance using monocular vision

    NASA Astrophysics Data System (ADS)

    Balashov, Oleg; Muraviev, Vadim; Strotov, Valery

    2016-10-01

    In this paper, an image-based collision avoidance algorithm that provides detection of nearby aircraft and distance estimation is presented. The approach requires a vision system with a single moving camera and additional information about the carrier's speed and orientation from onboard sensors. The main idea is to create a multi-step approach based on preliminary detection, region of interest (ROI) selection, contour segmentation, object matching, and localization. The proposed algorithm is able to detect small targets but, unlike many other approaches, is designed to work with large-scale objects as well. To localize the aerial vehicle, a system of equations relating object coordinates in space to the observed image is solved. The solution of this system gives the current position and speed of the detected object in space. Using this information, the distance and time to collision can be estimated. Experimental research on real video sequences and modeled data was performed. The video database contained different types of aerial vehicles: aircraft, helicopters, and UAVs. The presented algorithm is able to detect aerial vehicles from several kilometers away under regular daylight conditions.

  1. A Novel Approach to Camera Calibration Method for Smart Phones Under Road Environment

    NASA Astrophysics Data System (ADS)

    Lee, Bijun; Zhou, Jian; Ye, Maosheng; Guo, Yuan

    2016-06-01

    Monocular vision-based lane departure warning systems have been increasingly used in advanced driver assistance systems (ADAS). Building on lane mark detection and identification, we propose an automatic and efficient camera calibration method for smart phones. First, we detect the lane marker features in perspective space and calculate the edges of the lane markers in image sequences. Second, because the width of the lane markers and of the road lane is fixed under a standard structured road environment, we can automatically build a transformation matrix between perspective space and 3D space and obtain a local map in the vehicle coordinate system. To verify the validity of this method, we installed a smart phone in the 'Tuzhi' self-driving car of Wuhan University and recorded more than 100 km of image data on roads in Wuhan. According to the results, the calculated positions of the lane markers are accurate enough for the self-driving car to run smoothly on the road.
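
    The core calibration step can be illustrated as estimating a perspective transform from the detected image corners of one lane-marker segment to its known metric dimensions on the road plane. The corner pixels and marker dimensions below are placeholders, not values from the paper.

```python
# Hedged sketch: map detected image corners of a lane-marker segment to road-plane
# coordinates using assumed marker dimensions (0.15 m x 6 m -- placeholder values).
import numpy as np
import cv2

# Detected marker corners in the image (pixels), ordered near-left, near-right,
# far-right, far-left -- placeholder detections.
img_pts = np.float32([[420, 700], [520, 700], [505, 560], [435, 560]])

# The same corners in road-plane coordinates (metres): x across, y along the road.
marker_width, marker_length = 0.15, 6.0
road_pts = np.float32([[0, 0], [marker_width, 0],
                       [marker_width, marker_length], [0, marker_length]])

H = cv2.getPerspectiveTransform(img_pts, road_pts)   # image -> road plane

# Project another image point (e.g. a lane-mark edge) into the local metric map.
pt = cv2.perspectiveTransform(np.float32([[[470, 630]]]), H)
print("road-plane position (m):", pt.ravel())
```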

  2. Binocular summation for reflexive eye movements

    PubMed Central

    Quaia, Christian; Optican, Lance M.; Cumming, Bruce G.

    2018-01-01

    Psychophysical studies and our own subjective experience suggest that, in natural viewing conditions (i.e., at medium to high contrasts), monocularly and binocularly viewed scenes appear very similar, with the exception of the improved depth perception provided by stereopsis. This phenomenon is usually described as a lack of binocular summation. We show here that there is an exception to this rule: Ocular following eye movements induced by the sudden motion of a large stimulus, which we recorded from three human subjects, are much larger when both eyes see the moving stimulus, than when only one eye does. We further discovered that this binocular advantage is a function of the interocular correlation between the two monocular images: It is maximal when they are identical, and reduced when the two eyes are presented with different images. This is possible only if the neurons that underlie ocular following are sensitive to binocular disparity. PMID:29621384

  3. The Role of Binocular Disparity in Stereoscopic Images of Objects in the Macaque Anterior Intraparietal Area

    PubMed Central

    Romero, Maria C.; Van Dromme, Ilse C. L.; Janssen, Peter

    2013-01-01

    Neurons in the macaque Anterior Intraparietal area (AIP) encode depth structure in random-dot stimuli defined by gradients of binocular disparity, but the importance of binocular disparity in real-world objects for AIP neurons is unknown. We investigated the effect of binocular disparity on the responses of AIP neurons to images of real-world objects during passive fixation. We presented stereoscopic images of natural and man-made objects in which the disparity information was congruent or incongruent with disparity gradients present in the real-world objects, and images of the same objects where such gradients were absent. Although more than half of the AIP neurons were significantly affected by binocular disparity, the great majority of AIP neurons remained image selective even in the absence of binocular disparity. AIP neurons tended to prefer stimuli in which the depth information derived from binocular disparity was congruent with the depth information signaled by monocular depth cues, indicating that these monocular depth cues have an influence upon AIP neurons. Finally, in contrast to neurons in the inferior temporal cortex, AIP neurons do not represent images of objects in terms of categories such as animate-inanimate, but utilize representations based upon simple shape features including aspect ratio. PMID:23408970

  4. Shape and Albedo from Shading (SAfS) for Pixel-Level DEM Generation from Monocular Images Constrained by Low-Resolution DEM

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Chung Liu, Wai; Grumpe, Arne; Wöhler, Christian

    2016-06-01

    Lunar topographic information, e.g., a lunar DEM (Digital Elevation Model), is very important for lunar exploration missions and scientific research. Lunar DEMs are typically generated from photogrammetric image processing or laser altimetry; the photogrammetric methods require multiple stereo images of an area. DEMs generated by these methods usually rely on various interpolation techniques, leading to interpolation artifacts in the resulting DEM. On the other hand, photometric shape reconstruction, e.g., SfS (Shape from Shading), extensively studied in the field of Computer Vision, has been introduced for pixel-level-resolution DEM refinement. SfS methods have the ability to reconstruct pixel-wise terrain details that explain a given image of the terrain. If the terrain and its corresponding pixel-wise albedo are to be estimated simultaneously, this becomes a SAfS (Shape and Albedo from Shading) problem, which is under-determined without additional information. Previous work shows strong statistical regularities in the albedo of natural objects, and this is even more plausible for the lunar surface because its albedo is less complex than the Earth's. In this paper we suggest a method that refines a lower-resolution DEM to pixel-level resolution given a monocular image of the covered area with a known light source, while simultaneously estimating the corresponding pixel-wise albedo map. We regulate the behaviour of albedo and shape such that the optimized terrain and albedo are the likely solutions that explain the corresponding image. The parameters in the approach are optimized through a kernel-based relaxation framework to gain computational advantages. In this research we experimentally employ the Lunar-Lambertian model for reflectance modelling; the framework of the algorithm is expected to be independent of a specific reflectance model. Experiments are carried out using monocular images from the Lunar Reconnaissance Orbiter (LRO) Narrow Angle Camera (NAC) (0.5 m spatial resolution), constrained by the SELENE and LRO Elevation Model (SLDEM 2015) of 60 m spatial resolution. The results indicate that local details are largely recovered by the algorithm, while low-frequency topographic consistency is constrained by the low-resolution DEM.
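
    The forward model at the heart of any SfS/SAfS scheme predicts image brightness from terrain slope, per-pixel albedo, and the illumination direction. A minimal sketch using a plain Lambertian model follows; the paper employs the Lunar-Lambertian model, and the DEM, albedo, and sun direction below are synthetic.

```python
# Minimal forward shading model underlying SfS/SAfS: predict image brightness from
# DEM slopes, per-pixel albedo and the light direction (plain Lambertian here).
import numpy as np

def render_shading(dem, albedo, sun_dir, cell_size=1.0):
    """dem: 2D height array (m); albedo: 2D array; sun_dir: unit 3-vector."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    # Surface normals from the height gradients.
    n = np.dstack([-dz_dx, -dz_dy, np.ones_like(dem)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    cos_i = np.clip(n @ sun_dir, 0.0, None)        # incidence-angle cosine
    return albedo * cos_i

# Toy example: a ramp with constant albedo under an oblique, hypothetical sun.
dem = np.tile(np.linspace(0.0, 10.0, 64), (64, 1))
albedo = np.full((64, 64), 0.12)
sun = np.array([0.5, 0.0, 0.866])                  # unit vector, ~60 deg elevation
print(render_shading(dem, albedo, sun).mean())
```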

  5. Focus information is used to interpret binocular images

    PubMed Central

    Hoffman, David M.; Banks, Martin S.

    2011-01-01

    Focus information—blur and accommodation—is highly correlated with depth in natural viewing. We examined the use of focus information in solving the binocular correspondence problem and in interpreting monocular occlusions. We presented transparent scenes consisting of two planes. Observers judged the slant of the farther plane, which was seen through the nearer plane. To do this, they had to solve the correspondence problem. In one condition, the two planes were presented with sharp rendering on one image plane, as is done in conventional stereo displays. In another condition, the planes were presented on two image planes at different focal distances, simulating focus information in natural viewing. Depth discrimination performance improved significantly when focus information was correct, which shows that the visual system utilizes the information contained in depth-of-field blur in solving binocular correspondence. In a second experiment, we presented images in which one eye could see texture behind an occluder that the other eye could not see. When the occluder's texture was sharp along with the occluded texture, binocular rivalry was prominent. When the occluded and occluding textures were presented with different blurs, rivalry was significantly reduced. This shows that blur aids the interpretation of scene layout near monocular occlusions. PMID:20616139

  6. Optimization of Visual Training for Full Recovery from Severe Amblyopia in Adults

    ERIC Educational Resources Information Center

    Eaton, Nicolette C.; Sheehan, Hanna Marie; Quinlan, Elizabeth M.

    2016-01-01

    The severe amblyopia induced by chronic monocular deprivation is highly resistant to reversal in adulthood. Here we use a rodent model to show that recovery from deprivation amblyopia can be achieved in adults by a two-step sequence, involving enhancement of synaptic plasticity in the visual cortex by dark exposure followed immediately by visual…

  7. Effects of complete monocular deprivation in visuo-spatial memory.

    PubMed

    Cattaneo, Zaira; Merabet, Lotfi B; Bhatt, Ela; Vecchi, Tomaso

    2008-09-30

    Monocular deprivation has been associated with both specific deficits and enhancements in visual perception and processing. In this study, performance on a visuo-spatial memory task was compared in congenitally monocular individuals and sighted control individuals viewing monocularly (i.e., patched) and binocularly. The task required the individuals to view and memorize a series of target locations on two-dimensional matrices. Overall, congenitally monocular individuals performed worse than sighted individuals (with a specific deficit in simultaneously maintaining distinct spatial representations in memory), indicating that the lack of binocular visual experience affects the way visual information is represented in visuo-spatial memory. No difference was observed between the monocular and binocular viewing control groups, suggesting that early monocular deprivation affects the development of cortical mechanisms mediating visuo-spatial cognition.

  8. A Bayesian framework for extracting human gait using strong prior knowledge.

    PubMed

    Zhou, Ziheng; Prügel-Bennett, Adam; Damper, Robert I

    2006-11-01

    Extracting full-body motion of walking people from monocular video sequences in complex, real-world environments is an important and difficult problem, going beyond simple tracking, whose satisfactory solution demands an appropriate balance between use of prior knowledge and learning from data. We propose a consistent Bayesian framework for introducing strong prior knowledge into a system for extracting human gait. In this work, the strong prior is built from a simple articulated model having both time-invariant (static) and time-variant (dynamic) parameters. The model is easily modified to cater to situations such as walkers wearing clothing that obscures the limbs. The statistics of the parameters are learned from high-quality (indoor laboratory) data and the Bayesian framework then allows us to "bootstrap" to accurate gait extraction on the noisy images typical of cluttered, outdoor scenes. To achieve automatic fitting, we use a hidden Markov model to detect the phases of images in a walking cycle. We demonstrate our approach on silhouettes extracted from fronto-parallel ("sideways on") sequences of walkers under both high-quality indoor and noisy outdoor conditions. As well as high-quality data with synthetic noise and occlusions added, we also test walkers with rucksacks, skirts, and trench coats. Results are quantified in terms of chamfer distance and average pixel error between automatically extracted body points and corresponding hand-labeled points. No one part of the system is novel in itself, but the overall framework makes it feasible to extract gait from very much poorer quality image sequences than hitherto. This is confirmed by comparing person identification by gait using our method and a well-established baseline recognition algorithm.
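
    The chamfer distance used for evaluation above is a symmetric mean nearest-neighbour distance between the automatically extracted and hand-labelled point sets. A generic sketch of the measure follows; it is not the authors' exact evaluation code.

```python
# Generic chamfer distance between extracted body points and hand-labelled points.
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(pts_a, pts_b):
    """Symmetric mean nearest-neighbour distance between two 2D point sets."""
    a, b = np.asarray(pts_a, float), np.asarray(pts_b, float)
    d_ab, _ = cKDTree(b).query(a)   # each extracted point -> nearest labelled point
    d_ba, _ = cKDTree(a).query(b)   # each labelled point -> nearest extracted point
    return 0.5 * (d_ab.mean() + d_ba.mean())

extracted = [[100, 50], [102, 80], [98, 120]]
labelled  = [[101, 52], [103, 78], [97, 123]]
print(chamfer_distance(extracted, labelled))   # mean pixel error
```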

  9. Combining 3D structure of real video and synthetic objects

    NASA Astrophysics Data System (ADS)

    Kim, Man-Bae; Song, Mun-Sup; Kim, Do-Kyoon

    1998-04-01

    This paper presents a new approach to combining real video and synthetic objects. The purpose of this work is to use the proposed technology in the fields of advanced animation, virtual reality, games, and so forth. Computer graphics has long been used in these fields. Recently, some applications have added real video to graphic scenes to augment the realism that computer graphics alone lacks. This approach, called augmented or mixed reality, can produce a more realistic environment than the exclusive use of computer graphics. Our approach differs from virtual reality and augmented reality in that computer-generated graphic objects are combined with 3D structure extracted from monocular image sequences. The extraction of the 3D structure requires the estimation of 3D depth followed by the construction of a height map. Graphic objects are then combined with the height map. The realization of our proposed approach is carried out in the following steps: (1) We derive 3D structure from test image sequences; the extraction of the 3D structure requires the estimation of depth and the construction of a height map, and due to the contents of the test sequence, the height map represents the 3D structure. (2) The height map is modeled by Delaunay triangulation or a Bezier surface, and each planar surface is texture-mapped. (3) Finally, graphic objects are combined with the height map. Because the 3D structure of the height map is already known, step (3) is easily accomplished. Following this procedure, we produced an animation video demonstrating the combination of the 3D structure and graphic models. Users can navigate the realistic 3D world whose associated image is rendered on the display monitor.
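
    Step (2) of the pipeline, triangulating the height map into a surface mesh, can be sketched as follows; the regular sampling and placeholder heights are illustrative only.

```python
# Minimal sketch of step (2): triangulate sampled height-map points into a mesh
# that graphic objects can later be placed on.
import numpy as np
from scipy.spatial import Delaunay

# Sample the height map on a coarse grid (z = height at each (x, y)).
xs, ys = np.meshgrid(np.arange(0, 64, 8), np.arange(0, 64, 8))
zs = 5.0 * np.sin(xs / 10.0) * np.cos(ys / 10.0)      # placeholder heights

points_2d = np.column_stack([xs.ravel(), ys.ravel()])
tri = Delaunay(points_2d)                             # 2D triangulation of (x, y)

vertices = np.column_stack([points_2d, zs.ravel()])   # lift to 3D using the heights
faces = tri.simplices                                 # triangle vertex indices
print(f"{len(vertices)} vertices, {len(faces)} triangles")
```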

  10. Monocular Perceptual Deprivation from Interocular Suppression Temporarily Imbalances Ocular Dominance.

    PubMed

    Kim, Hyun-Woong; Kim, Chai-Youn; Blake, Randolph

    2017-03-20

    Early visual experience sculpts neural mechanisms that regulate the balance of influence exerted by the two eyes on cortical mechanisms underlying binocular vision [1, 2], and experience's impact on this neural balancing act continues into adulthood [3-5]. One recently described, compelling example of adult neural plasticity is the effect of patching one eye for a relatively short period of time: contrary to intuition, monocular visual deprivation actually improves the deprived eye's competitive advantage during a subsequent period of binocular rivalry [6-8], the robust form of visual competition prompted by dissimilar stimulation of the two eyes [9, 10]. Neural concomitants of this improvement in monocular dominance are reflected in measurements of brain responsiveness following eye patching [11, 12]. Here we report that patching an eye is unnecessary for producing this paradoxical deprivation effect: interocular suppression of an ordinarily visible stimulus being viewed by one eye is sufficient to produce shifts in subsequent predominance of that eye to an extent comparable to that produced by patching the eye. Moreover, this imbalance in eye dominance can also be induced by prior, extended viewing of two monocular images differing only in contrast. Regardless of how shifts in eye dominance are induced, the effect decays once the two eyes view stimuli equal in strength. These novel findings implicate the operation of interocular neural gain control that dynamically adjusts the relative balance of activity between the two eyes [13, 14]. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Amaurosis fugax

    MedlinePlus

    ... other symptoms with the vision loss, seek medical attention right away. Alternative Names Transient monocular blindness; Transient monocular visual loss; TMLV; Transient binocular ...

  12. Temporal accommodation response measured by photorefractive accommodation measurement device

    NASA Astrophysics Data System (ADS)

    Song, Byoungsub; Leportier, Thibault; Park, Min-Chul

    2017-02-01

    Although the accommodation response plays an important role in the human visual system for perception of distance, some three-dimensional (3D) displays offer depth stimuli regardless of the accommodation response. The consequence is that most observers watching 3D displays have complained about visual fatigue. Measurement of the accommodation response is therefore necessary to develop human-friendly 3D displays. However, only a few studies of accommodation measurement have been reported. Most investigations have focused on the measurement and analysis of monocular accommodation responses only, because the accommodation response works individually in each eye. Moreover, the main eye dominates the perception of object distance. However, the binocular accommodation response should be examined because both eyes are used to watch a 3D display under natural conditions. The ophthalmic instrument we developed enabled us to measure changes in the accommodation response of the two eyes simultaneously. Two cameras separately acquired the infrared images reflected from each eye after the reflected beams passed through a cylindrical lens. The changes in the accommodation response could then be estimated from the changes in the astigmatism ratio of the infrared images, which were acquired in real time. In this paper, we compared the accommodation responses of the main eye between the monocular and binocular conditions. For the monocular condition, the two eyes were measured one by one, with only one eye open. The two eyes were then examined simultaneously for the binocular condition. The results showed similar tendencies in the main eye's accommodation response in both cases.

  13. Comparative evaluation of monocular augmented-reality display for surgical microscopes.

    PubMed

    Rodriguez Palma, Santiago; Becker, Brian C; Lobes, Louis A; Riviere, Cameron N

    2012-01-01

    Medical augmented reality has undergone much development recently. However, there is a lack of studies quantitatively comparing the different display options available. This paper compares the effects of different graphical overlay systems in a simple micromanipulation task with "soft" visual servoing. We compared positioning accuracy in a real-time visually-guided task using Micron, an active handheld tremor-canceling microsurgical instrument, using three different displays: 2D screen, 3D screen, and microscope with monocular image injection. Tested with novices and an experienced vitreoretinal surgeon, display of virtual cues in the microscope via an augmented reality injection system significantly decreased 3D error (p < 0.05) compared to the 2D and 3D monitors when confounding factors such as magnification level were normalized.

  14. Small Imaging Depth LIDAR and DCNN-Based Localization for Automated Guided Vehicle †

    PubMed Central

    Ito, Seigo; Hiratsuka, Shigeyoshi; Ohta, Mitsuhiko; Matsubara, Hiroyuki; Ogawa, Masaru

    2018-01-01

    We present our third prototype sensor and a localization method for Automated Guided Vehicles (AGVs), for which small imaging LIght Detection and Ranging (LIDAR) and fusion-based localization are fundamentally important. Our small imaging LIDAR, named the Single-Photon Avalanche Diode (SPAD) LIDAR, uses a time-of-flight method and SPAD arrays. A SPAD is a highly sensitive photodetector capable of detecting at the single-photon level, and the SPAD LIDAR has two SPAD arrays on the same chip for detection of laser light and environmental light. Therefore, the SPAD LIDAR simultaneously outputs range image data and monocular image data with the same coordinate system and does not require external calibration among outputs. As AGVs travel both indoors and outdoors with vibration, this calibration-less structure is particularly useful for AGV applications. We also introduce a fusion-based localization method, named SPAD DCNN, which uses the SPAD LIDAR and employs a Deep Convolutional Neural Network (DCNN). SPAD DCNN can fuse the outputs of the SPAD LIDAR: range image data, monocular image data and peak intensity image data. The SPAD DCNN has two outputs: the regression result of the position of the SPAD LIDAR and the classification result of the existence of a target to be approached. Our third prototype sensor and the localization method are evaluated in an indoor environment by assuming various AGV trajectories. The results show that the sensor and localization method improve the localization accuracy. PMID:29320434

  15. Small Imaging Depth LIDAR and DCNN-Based Localization for Automated Guided Vehicle.

    PubMed

    Ito, Seigo; Hiratsuka, Shigeyoshi; Ohta, Mitsuhiko; Matsubara, Hiroyuki; Ogawa, Masaru

    2018-01-10

    We present our third prototype sensor and a localization method for Automated Guided Vehicles (AGVs), for which small imaging LIght Detection and Ranging (LIDAR) and fusion-based localization are fundamentally important. Our small imaging LIDAR, named the Single-Photon Avalanche Diode (SPAD) LIDAR, uses a time-of-flight method and SPAD arrays. A SPAD is a highly sensitive photodetector capable of detecting at the single-photon level, and the SPAD LIDAR has two SPAD arrays on the same chip for detection of laser light and environmental light. Therefore, the SPAD LIDAR simultaneously outputs range image data and monocular image data with the same coordinate system and does not require external calibration among outputs. As AGVs travel both indoors and outdoors with vibration, this calibration-less structure is particularly useful for AGV applications. We also introduce a fusion-based localization method, named SPAD DCNN, which uses the SPAD LIDAR and employs a Deep Convolutional Neural Network (DCNN). SPAD DCNN can fuse the outputs of the SPAD LIDAR: range image data, monocular image data and peak intensity image data. The SPAD DCNN has two outputs: the regression result of the position of the SPAD LIDAR and the classification result of the existence of a target to be approached. Our third prototype sensor and the localization method are evaluated in an indoor environment by assuming various AGV trajectories. The results show that the sensor and localization method improve the localization accuracy.
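
    The fusion idea described in the two records above can be sketched as a small two-headed network: stacked range, monocular, and peak-intensity images go in, and a position regression plus a target-presence classification come out. The layer sizes and input resolution below are invented for illustration and are not the authors' architecture.

```python
# Sketch of the fusion idea: a small CNN taking the three SPAD LIDAR outputs as
# stacked channels, with a position-regression head and a target-presence head.
import torch
import torch.nn as nn

class FusionLocalizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.position_head = nn.Linear(64, 3)   # regressed (x, y, yaw) of the sensor
        self.target_head = nn.Linear(64, 2)     # target present / absent logits

    def forward(self, range_img, mono_img, intensity_img):
        x = torch.cat([range_img, mono_img, intensity_img], dim=1)  # (N, 3, H, W)
        features = self.encoder(x)
        return self.position_head(features), self.target_head(features)

# Dummy forward pass with a hypothetical 64x64 sensor resolution.
net = FusionLocalizer()
imgs = [torch.rand(1, 1, 64, 64) for _ in range(3)]
pose, target_logits = net(*imgs)
print(pose.shape, target_logits.shape)   # torch.Size([1, 3]) torch.Size([1, 2])
```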

  16. A Novel Visual Psychometric Test for Light-Induced Discomfort Using Red and Blue Light Stimuli Under Binocular and Monocular Viewing Conditions.

    PubMed

    Zivcevska, Marija; Lei, Shaobo; Blakeman, Alan; Goltz, Herbert C; Wong, Agnes M F

    2018-03-01

    To develop an objective psychophysical method to quantify light-induced visual discomfort, and to measure the effects of viewing condition and stimulus wavelength. Eleven visually normal subjects participated in the study. Their pupils were dilated (2.5% phenylephrine) before the experiment. A Ganzfeld system presented either red (1.5, 19.1, 38.2, 57.3, 76.3, 152.7, 305.3 cd/m2) or blue (1.4, 7.1, 14.3, 28.6, 42.9, 57.1, 71.4 cd/m2) randomized light intensities (1 s each) in four blocks. Constant white-light stimuli (3 cd/m2, 4 s duration) were interleaved with the chromatic trials. Participants reported each stimulus as either "uncomfortably bright" or "not uncomfortably bright." The experiment was done binocularly and monocularly in separate sessions, and the order of color/viewing condition sequence was randomized across participants. The proportion of "uncomfortable" responses was used to generate individual psychometric functions, from which 50% discomfort thresholds were calculated. Light-induced discomfort was higher under blue compared with red light stimulation, both during binocular (t(10) = 3.58, P < 0.01) and monocular viewing (t(10) = 3.15, P = 0.01). There was also a significant difference in discomfort between viewing conditions, with binocular viewing inducing more discomfort than monocular viewing for blue (P < 0.001), but not for red light stimulation. The light-induced discomfort characteristics reported here are consistent with features of the melanopsin-containing intrinsically photosensitive retinal ganglion cell light irradiance pathway, which may mediate photophobia, a prominent feature in many clinical disorders. This is the first psychometric assessment designed around melanopsin spectral properties that can be customized further to assess photophobia in different clinical populations.
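
    The threshold-extraction step can be sketched as fitting a logistic psychometric function to the proportion of "uncomfortably bright" responses per intensity and reading off its 50% point. The response proportions below are synthetic, and the logistic form is a common choice rather than necessarily the exact model used in the study.

```python
# Hedged sketch of psychometric-function fitting for the 50% discomfort threshold.
import numpy as np
from scipy.optimize import curve_fit

def logistic(intensity, threshold, slope):
    return 1.0 / (1.0 + np.exp(-slope * (intensity - threshold)))

# Blue-light intensities used in the study (cd/m^2) and made-up response proportions.
intensity = np.array([1.4, 7.1, 14.3, 28.6, 42.9, 57.1, 71.4])
p_uncomfortable = np.array([0.0, 0.05, 0.2, 0.55, 0.8, 0.95, 1.0])

(threshold, slope), _ = curve_fit(logistic, intensity, p_uncomfortable,
                                  p0=[30.0, 0.1])
print(f"50% discomfort threshold: {threshold:.1f} cd/m^2")
```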

  17. Three-dimensional face pose detection and tracking using monocular videos: tool and application.

    PubMed

    Dornaika, Fadi; Raducanu, Bogdan

    2009-08-01

    Recently, we have proposed a real-time tracker that simultaneously tracks the 3-D head pose and facial actions in monocular video sequences that can be provided by low-quality cameras. This paper has two main contributions. First, we propose an automatic 3-D face pose initialization scheme for the real-time tracker by adopting a 2-D face detector and an eigenface system. Second, we use the proposed methods, initialization and tracking, to enhance the human-machine interaction functionality of an AIBO robot. More precisely, we show how the orientation of the robot's camera (or any active vision system) can be controlled through the estimation of the user's head pose. Applications based on head-pose imitation, such as telepresence, virtual reality, and video games, can directly exploit the proposed techniques. Experiments on real videos confirm the robustness and usefulness of the proposed methods.

  18. Stereo using monocular cues within the tensor voting framework.

    PubMed

    Mordohai, Philippos; Medioni, Gérard

    2006-06-01

    We address the fundamental problem of matching in two static images. The remaining challenges are related to occlusion and lack of texture. Our approach addresses these difficulties within a perceptual organization framework, considering both binocular and monocular cues. Initially, matching candidates for all pixels are generated by a combination of matching techniques. The matching candidates are then embedded in disparity space, where perceptual organization takes place in 3D neighborhoods and, thus, does not suffer from problems associated with scanline or image neighborhoods. The assumption is that correct matches produce salient, coherent surfaces, while wrong ones do not. Matching candidates that are consistent with the surfaces are kept and grouped into smooth layers. Thus, we achieve surface segmentation based on geometric and not photometric properties. Surface overextensions, which are due to occlusion, can be corrected by removing matches whose projections are not consistent in color with their neighbors of the same surface in both images. Finally, the projections of the refined surfaces on both images are used to obtain disparity hypotheses for unmatched pixels. The final disparities are selected after a second tensor voting stage, during which information is propagated from more reliable pixels to less reliable ones. We present results on widely used benchmark stereo pairs.

  19. Age-Dependent Ocular Dominance Plasticity in Adult Mice

    PubMed Central

    Lehmann, Konrad; Löwel, Siegrid

    2008-01-01

    Background Short monocular deprivation (4 days) induces a shift in the ocular dominance of binocular neurons in the juvenile mouse visual cortex but is ineffective in adults. Recently, it has been shown that an ocular dominance shift can still be elicited in young adults (around 90 days of age) by longer periods of deprivation (7 days). Whether the same is also true for fully mature animals is not yet known. Methodology/Principal Findings We therefore studied the effects of different periods of monocular deprivation (4, 7, 14 days) on ocular dominance in C57Bl/6 mice of different ages (25 days, 90–100 days, 109–158 days, 208–230 days) using optical imaging of intrinsic signals. In addition, we used a virtual optomotor system to monitor visual acuity of the open eye in the same animals during deprivation. We observed that ocular dominance plasticity after 7 days of monocular deprivation was pronounced in young adult mice (90–100 days) but already significantly weaker in the next age group (109–158 days). In animals older than 208 days, ocular dominance plasticity was absent even after 14 days of monocular deprivation. Visual acuity of the open eye increased in all age groups, but this interocular plasticity also declined with age, although to a much lesser degree than the optically detected ocular dominance shift. Conclusions/Significance These data indicate that there is an age-dependence of both ocular dominance plasticity and the enhancement of vision after monocular deprivation in mice: ocular dominance plasticity in binocular visual cortex is most pronounced in young animals, reduced but present in adolescence and absent in fully mature animals older than 110 days of age. Mice thus do not differ fundamentally from cats and monkeys in ocular dominance plasticity, an essential prerequisite for their use as valid model systems of human visual disorders. PMID:18769674

  20. Humanoid monocular stereo measuring system with two degrees of freedom using bionic optical imaging system

    NASA Astrophysics Data System (ADS)

    Du, Jia-Wei; Wang, Xuan-Yin; Zhu, Shi-Qiang

    2017-10-01

    Based on the process by which a single eye obtains spatial depth cues, a monocular stereo vision approach for measuring the depth of spatial objects was proposed in this paper, and a humanoid monocular stereo measuring system with two degrees of freedom was demonstrated. The proposed system can effectively obtain the three-dimensional (3-D) structure of spatial objects at different distances without changing its own position, and it is compact, agile, and flexible. The bionic optical imaging system we proposed in a previous paper, named ZJU SY-I, was employed; its resolution decays from the center to the periphery of the field, mimicking that of the human eye. We simplified the eye's rotation in its socket and the coordinated rotation of other parts of the body into two rotations in orthogonal directions, and employed a rotating platform with two rotational degrees of freedom to drive ZJU SY-I. The structure of the proposed system was described in detail. The depth of a single feature point on a spatial object was derived, as well as its spatial coordinates. By adjusting the focal length of ZJU SY-I and controlling the rotation platform, the spatial coordinates of all feature points on the object can be obtained and its 3-D structure reconstructed. Measurement experiments on the 3-D structure of two spatial objects at different distances and of different sizes were conducted, and the main factors affecting the measurement accuracy of the proposed system were analyzed and discussed.

  1. Acute Monocular Blindness Due to Orbital Compartment Syndrome Following Pterional Craniotomy.

    PubMed

    Habets, Jeroen G V; Haeren, Roel H L; Lie, Suen A N; Bauer, Noel J C; Dings, Jim T A

    2018-06-01

    We present a case of orbital compartment syndrome (OCS) leading to irreversible monocular blindness following a pterional craniotomy for clipping of an anterior communicating artery aneurysm. OCS is an uncommon but vision-threatening entity requiring urgent decompression to reduce the risk of permanent visual loss. Iatrogenic orbital roof defects are a common finding following pterional craniotomies; however, complications related to these defects are rarely reported. A 65-year-old female who had undergone clipping of an anterior communicating artery aneurysm via a pterional approach 4 days earlier developed proptosis, ocular movement paresis, and irreversible visual impairment following an orthopedic operation. Computed tomography images revealed an intraorbital cerebrospinal fluid (CSF) collection, which was evacuated via an acute recraniotomy. The next day, the proptosis and the intraorbital CSF collection on computed tomography recurred, and an oral and maxillofacial surgeon evacuated the collection via a blepharoplasty incision and blunt dissection. In addition, the patient was treated with acetazolamide and external lumbar CSF drainage for 12 days, after which the CSF collection did not recur. Unfortunately, the monocular blindness persisted. We hypothesize that the CSF collection arose from the combination of a postoperative orbital roof defect and a temporarily increased intracranial pressure during the orthopedic operation. We call for greater awareness of this severe complication after pterional surgeries and emphasize the importance of 1) strict ophthalmologic examination after pterional craniotomies in case of events that raise intracranial pressure, 2) immediate consultation of an oral and maxillofacial surgeon, and 3) consideration of CSF-draining interventions, since symptoms are severely disabling and become irreversible within a couple of hours. Copyright © 2018 Elsevier Inc. All rights reserved.

  2. Stereomotion speed perception: contributions from both changing disparity and interocular velocity difference over a range of relative disparities

    NASA Technical Reports Server (NTRS)

    Brooks, Kevin R.; Stone, Leland S.

    2004-01-01

    The roles of two binocular cues to motion in depth, changing disparity (CD) and interocular velocity difference (IOVD), were investigated by measuring stereomotion speed discrimination and static disparity discrimination performance (stereoacuity). Speed discrimination thresholds were assessed both for random dot stereograms (RDS), and for their temporally uncorrelated equivalents, dynamic random dot stereograms (DRDS), at relative disparity pedestals of -19, 0, and +19 arcmin. While RDS stimuli contain both CD and IOVD cues, DRDS stimuli carry only CD information. On average, thresholds were a factor of 1.7 higher for DRDS than for RDS stimuli, with no clear effect of relative disparity pedestal. Results were similar for approaching and receding targets. Variations in stimulus duration had no significant effect on thresholds, and there was no observed correlation between stimulus displacement and perceived speed, confirming that subjects responded to stimulus speed in each condition. Stereoacuity was equally good for our RDS and DRDS stimuli, showing that the difference in stereomotion speed discrimination performance for these stimuli was not due to any difference in the precision of the disparity cue. In addition, when we altered stereomotion stimulus trajectory by independently manipulating the speeds and directions of its monocular half-images, perceived stereomotion speed remained accurate. This finding is inconsistent with response strategies based on properties of either monocular half-image motion, or any ad hoc combination of the monocular speeds. We conclude that although subjects are able to discriminate stereomotion speed reliably on the basis of CD information alone, IOVD provides a precise additional cue to stereomotion speed perception.

  3. The Computer Image Generation Applications Study.

    DTIC Science & Technology

    1980-07-01

    [Garbled excerpt from a table of CIG scene-model data, listing edge/polygon counts for models including a T62 Tank, Lexington Carrier, Sea Scape, Fresnel Lens Optical Landing System (FLOLS), Meatball, T37 Aircraft, and NATO scenes; the excerpt also refers to Section 7.1.5.5 for the definition of monocular movement parallax and to multiple simulations.]

  4. Case of penetrating orbitocranial injury caused by wood.

    PubMed Central

    Mutlukan, E; Fleck, B W; Cullen, J F; Whittle, I R

    1991-01-01

    A case of retained intraorbital and intracerebral wooden foreign body following an orbitocranial penetrating injury through the lower lid of an adult is described. Initial failure to recognise the true nature of the injury led to intracerebral abscess formation and monocular blindness. Diagnosis and management of such cases are discussed. PMID:2043585

  5. Location of planar targets in three space from monocular images

    NASA Technical Reports Server (NTRS)

    Cornils, Karin; Goode, Plesent W.

    1987-01-01

    Many pieces of existing and proposed space hardware that would be targets of interest for a telerobot can be represented as planar or near-planar surfaces. Examples include the biostack modules on the Long Duration Exposure Facility, the panels on Solar Max, large diameter struts, and refueling receptacles. Robust and temporally efficient methods for locating such objects with sufficient accuracy are therefore worth developing. Two techniques that derive the orientation and location of an object from its monocular image are discussed and the results of experiments performed to determine translational and rotational accuracy are presented. Both the quadrangle projection and elastic matching techniques extract three-space information using a minimum of four identifiable target points and the principles of the perspective transformation. The selected points must describe a convex polygon whose geometric characteristics are prespecified in a data base. The rotational and translational accuracy of both techniques was tested at various ranges. This experiment is representative of the sensing requirements involved in a typical telerobot target acquisition task. Both techniques determined target location to an accuracy sufficient for consistent and efficient acquisition by the telerobot.
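
    Both techniques recover pose from a minimum of four identified coplanar points under the perspective transformation. A comparable recovery can be sketched with a standard planar PnP solver; the point coordinates and camera intrinsics below are illustrative assumptions, and OpenCV's solver merely stands in for the quadrangle-projection and elastic-matching methods described in the paper:

```python
import cv2
import numpy as np

# Known 3D geometry of four coplanar target points (object frame, metres); illustrative.
object_points = np.array([[0.0, 0.0, 0.0],
                          [0.3, 0.0, 0.0],
                          [0.3, 0.2, 0.0],
                          [0.0, 0.2, 0.0]], dtype=np.float64)

# Their detected locations in the monocular image (pixels); illustrative.
image_points = np.array([[410.0, 305.0],
                         [530.0, 310.0],
                         [525.0, 390.0],
                         [405.0, 385.0]], dtype=np.float64)

# Pinhole intrinsics (focal length and principal point in pixels); illustrative.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Solve for the rotation and translation of the planar target in the camera frame.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None,
                              flags=cv2.SOLVEPNP_IPPE)
R, _ = cv2.Rodrigues(rvec)
print("target position in camera frame (m):", tvec.ravel())
```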

  6. Dissociative phenomena in congenital monocular elevation deficiency.

    PubMed

    Olson, R J; Scott, W E

    1998-04-01

    Monocular elevation deficiency is characterized by unilateral limitation of elevation in both adduction and abduction and is usually present at birth. Dissociative phenomena such as dissociated vertical deviation are well recognized in association with conditions such as congenital esotropia, but much less so in association with congenital monocular elevation deficiency. All 129 patients given the diagnosis of monocular elevation deficiency or double elevator palsy in the Pediatric Ophthalmology and Strabismus Clinic at the University of Iowa Hospitals and Clinics between 1971 and 1995 were reviewed. After those with a history of trauma, myasthenia gravis, thyroid eye disease, orbital lesions, Brown syndrome, or monocular elevation deficiency of acquired onset were excluded, 31 patients with congenital monocular elevation deficiency remained for retrospective study. The condition was first diagnosed at a median age of 2.6 years (although all cases were noted by parents at less than 6 months of age), with a mean follow-up of 5.0 years (up to 15.5 years). Nine of the 31 patients (29%) developed dissociated vertical deviation in the eye with monocular elevation deficiency; all of them had undergone strabismus surgery 0 to 9.7 years previously (mean 3.5 years). Those who developed dissociated vertical deviation were generally younger, were followed up longer, and had more accompanying horizontal strabismus than those who did not, although these differences did not reach statistical significance. The current study demonstrates that dissociated vertical deviation occurs in association with monocular elevation deficiency.

  7. Enduring critical period plasticity visualized by transcranial flavoprotein imaging in mouse primary visual cortex.

    PubMed

    Tohmi, Manavu; Kitaura, Hiroki; Komagata, Seiji; Kudoh, Masaharu; Shibuki, Katsuei

    2006-11-08

    Experience-dependent plasticity in the visual cortex was investigated using transcranial flavoprotein fluorescence imaging in mice anesthetized with urethane. On- and off-responses in the primary visual cortex were elicited by visual stimuli. Fluorescence responses and field potentials elicited by grating patterns decreased similarly as contrasts of visual stimuli were reduced. Fluorescence responses also decreased as spatial frequency of grating stimuli increased. Compared with intrinsic signal imaging in the same mice, fluorescence imaging showed faster responses with approximately 10 times larger signal changes. Retinotopic maps in the primary visual cortex and area LM were constructed using fluorescence imaging. After monocular deprivation (MD) of 4 d starting from postnatal day 28 (P28), deprived eye responses were suppressed compared with nondeprived eye responses in the binocular zone but not in the monocular zone. Imaging faithfully recapitulated a critical period for plasticity with maximal effects of MD observed around P28 and not in adulthood even under urethane anesthesia. Visual responses were compared before and after MD in the same mice, in which the skull was covered with clear acrylic dental resin. Deprived eye responses decreased after MD, whereas nondeprived eye responses increased. Effects of MD during a critical period were tested 2 weeks after reopening of the deprived eye. Significant ocular dominance plasticity was observed in responses elicited by moving grating patterns, but no long-lasting effect was found in visual responses elicited by light-emitting diode light stimuli. The present results indicate that transcranial flavoprotein fluorescence imaging is a powerful tool for investigating experience-dependent plasticity in the mouse visual cortex.

  8. Ergonomic evaluation of ubiquitous computing with monocular head-mounted display

    NASA Astrophysics Data System (ADS)

    Kawai, Takashi; Häkkinen, Jukka; Yamazoe, Takashi; Saito, Hiroko; Kishi, Shinsuke; Morikawa, Hiroyuki; Mustonen, Terhi; Kaistinen, Jyrki; Nyman, Göte

    2010-01-01

    In this paper, the authors conducted an experiment to evaluate the user experience (UX) in an actual outdoor environment, assuming casual use of a monocular HMD to view video content while walking short distances. Eight subjects were asked to view news videos on a monocular HMD while walking through a large shopping mall. Two types of monocular HMDs and a hand-held media player were used, and the psycho-physiological responses of the subjects were measured before, during, and after the experiment. The VSQ, SSQ, and NASA-TLX were used to assess subjective workloads and symptoms; the objective indexes were heart rate, stride, and a video recording of the environment in front of the subject's face. The results revealed differences between the two types of monocular HMDs as well as between the monocular HMDs and the other conditions. Differences between the types of monocular HMDs may have been due to screen vibration during walking, which was considered a major factor in the UX in terms of workload. Future experiments in other locations will impose higher cognitive loads in order to study performance and situation awareness with respect to both the actual and the media environments.

  9. Effect of Display Technology on Perceived Scale of Space.

    PubMed

    Geuss, Michael N; Stefanucci, Jeanine K; Creem-Regehr, Sarah H; Thompson, William B; Mohler, Betty J

    2015-11-01

    Our goal was to evaluate the degree to which display technologies influence the perception of size in an image. Research suggests that factors such as whether an image is displayed stereoscopically, whether a user's viewpoint is tracked, and the field of view of a given display can affect users' perception of scale in the displayed image. Participants directly estimated the size of a gap by matching the distance between their hands to the gap width and judged their ability to pass unimpeded through the gap in one of five common implementations of three display technologies (two head-mounted displays [HMD] and a back-projection screen). Both measures of gap width were similar for the two HMD conditions and the back projection with stereo and tracking. For the displays without tracking, stereo and monocular conditions differed from each other, with monocular viewing showing underestimation of size. Display technologies that are capable of stereoscopic display and tracking of the user's viewpoint are beneficial as perceived size does not differ from real-world estimates. Evaluations of different display technologies are necessary as display conditions vary and the availability of different display technologies continues to grow. The findings are important to those using display technologies for research, commercial, and training purposes when it is important for the displayed image to be perceived at an intended scale. © 2015, Human Factors and Ergonomics Society.

  10. Binocular contrast-gain control for natural scenes: Image structure and phase alignment.

    PubMed

    Huang, Pi-Chun; Dai, Yu-Ming

    2018-05-01

    In the context of natural scenes, we applied the pattern-masking paradigm to investigate how image structure and phase alignment affect contrast-gain control in binocular vision. We measured the discrimination thresholds of bandpass-filtered natural-scene images (targets) under various types of pedestals. Our first experiment had four pedestal types: bandpass-filtered pedestals, unfiltered pedestals, notch-filtered pedestals (which enabled removal of the spatial frequency), and misaligned pedestals (which involved rotation of unfiltered pedestals). Our second experiment featured six types of pedestals: bandpass-filtered, unfiltered, and notch-filtered pedestals, and the corresponding phase-scrambled pedestals. The thresholds were compared for monocular, binocular, and dichoptic viewing configurations. The bandpass-filtered pedestal and unfiltered pedestals showed classic dipper shapes; the dipper shapes of the notch-filtered, misaligned, and phase-scrambled pedestals were weak. We adopted a two-stage binocular contrast-gain control model to describe our results. We deduced that the phase-alignment information influenced the contrast-gain control mechanism before the binocular summation stage and that the phase-alignment information and structural misalignment information caused relatively strong divisive inhibition in the monocular and interocular suppression stages. When the pedestals were phase-scrambled, the elimination of the interocular suppression processing was the most convincing explanation of the results. Thus, our results indicated that both phase-alignment information and similar image structures cause strong interocular suppression. Copyright © 2018 Elsevier Ltd. All rights reserved.
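
    For readers unfamiliar with the model class mentioned above, the sketch below shows the generic structure of a two-stage binocular contrast-gain-control computation: divisive monocular and interocular suppression, binocular summation, then a second gain-control stage. The exponents and weights are illustrative placeholders, not the parameters or exact formulation fitted in this study:

```python
import numpy as np

def two_stage_response(c_left, c_right,
                       m=1.3, s=1.0, w_inter=1.0,
                       p=2.4, q=2.0, z=1.0):
    """Generic two-stage binocular contrast-gain control (illustrative parameters).

    Stage 1: each eye's signal is divisively suppressed by its own contrast
    and by the other eye's contrast (interocular suppression).
    Stage 2: the two monocular outputs are summed and passed through a
    second gain-control nonlinearity.
    """
    stage1_left = c_left ** m / (s + c_left + w_inter * c_right)
    stage1_right = c_right ** m / (s + c_right + w_inter * c_left)
    binocular_sum = stage1_left + stage1_right
    return binocular_sum ** p / (z + binocular_sum ** q)

# Example: a 10% contrast target in one eye with a 20% contrast pedestal
# presented dichoptically to the other eye.
print(two_stage_response(c_left=10.0, c_right=20.0))
```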

  11. Quantifying how the combination of blur and disparity affects the perceived depth

    NASA Astrophysics Data System (ADS)

    Wang, Junle; Barkowsky, Marcus; Ricordel, Vincent; Le Callet, Patrick

    2011-03-01

    The influence of a monocular depth cue, blur, on the apparent depth of stereoscopic scenes is studied in this paper. When 3D images are shown on a planar stereoscopic display, binocular disparity becomes a pre-eminent depth cue, but it simultaneously induces a conflict between accommodation and vergence, which is often considered a main cause of visual discomfort. If we limit this visual discomfort by decreasing the disparity, the apparent depth also decreases. We propose to decrease the (binocular) disparity of 3D presentations and to reinforce (monocular) cues to compensate for the loss of perceived depth and keep the apparent depth unaltered. We conducted a subjective experiment using a two-alternative forced-choice task. Observers were required to identify the larger perceived depth in a pair of 3D images with/without blur. By fitting the results to a psychometric function, we obtained points of subjective equality in terms of disparity. We found that when blur is added to the background of the image, the viewer perceives larger depth compared with images without any blur in the background. The increase in perceived depth can be considered a function of the relative distance between the foreground and background, while it is insensitive to the distance between the viewer and the depth plane at which the blur is added.

  12. Dichoptic training in adults with amblyopia: Additional stereoacuity gains over monocular training.

    PubMed

    Liu, Xiang-Yun; Zhang, Jun-Yun

    2017-08-04

    Dichoptic training is a recent focus of research on perceptual learning in adults with amblyopia, but whether and how dichoptic training is superior to traditional monocular training is unclear. Here we investigated whether dichoptic training could further boost visual acuity and stereoacuity in monocularly well-trained adult amblyopic participants. During dichoptic training the participants used the amblyopic eye to practice a contrast discrimination task, while a band-filtered noise masker was simultaneously presented in the non-amblyopic fellow eye. Dichoptic learning was indexed by the increase in maximal tolerable noise contrast for successful contrast discrimination in the amblyopic eye. The results showed that practice tripled maximal tolerable noise contrast in 13 monocularly well-trained amblyopic participants. Moreover, the training further improved stereoacuity by 27% beyond the 55% gain from previous monocular training, but left the visual acuity of the amblyopic eyes unchanged. Therefore our dichoptic training method may produce extra gains in stereoacuity, but not visual acuity, in adults with amblyopia after monocular training. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. The application of diffraction grating in the design of virtual reality (VR) system

    NASA Astrophysics Data System (ADS)

    Chen, Jiekang; Huang, Qitai; Guan, Min

    2017-10-01

    Virtual Reality (VR) products ultimately serve human eyes, and the optical properties of VR optical systems must be consistent with the characteristics of human vision. A monocular coaxial VR optical system was simulated in ZEMAX. A diffraction grating is added to the optical surface next to the eye, so that the light leaving the grating is deflected, which forms an asymmetrical field of view (FOV). The lateral chromatic aberration introduced by the diffraction grating was then corrected by the chromatic dispersion of a prism, and an aspheric surface was added for further optimization. The main problem in the optical design is how to balance the dispersion of the diffraction grating against that of the prism; the balance was achieved by repeatedly adjusting the parameters of the grating and the prism, and finally by using aspheric surfaces. In order to make the asymmetric FOV of the system consistent with the angle of the visual axis, and to keep the stereo vision area clear, the smaller half FOV of the monocular system is required to reach 30°. Eventually, a system with an asymmetrical FOV of 30° + 40° was designed. In addition, the aberration curves of the system were analyzed in ZEMAX, and the binocular FOV was calculated according to the principle of binocular overlap. The results show that the asymmetric FOV of the VR monocular optical system fits the human eyes and that the imaging quality matches human visual characteristics. At the same time, the diffraction grating increases the binocular FOV, which relaxes the requirement on the design FOV of the monocular system.
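
    As a rough illustration of the binocular-overlap calculation mentioned above, assuming parallel visual axes and that the 30° half-field lies on the nasal side of each eye (assumptions made for this sketch, not statements from the paper):

```python
# Monocular half fields of view (degrees), with an assumed nasal/temporal split.
nasal_half_fov = 30.0      # smaller half FOV, assumed to face the nose
temporal_half_fov = 40.0   # larger half FOV, assumed to face the temple

# With parallel visual axes, the two temporal halves bound the combined field,
# while the overlapping (stereo) region is bounded by the two nasal halves.
binocular_fov = 2 * temporal_half_fov   # 80 degrees total horizontal FOV
stereo_overlap = 2 * nasal_half_fov     # 60 degrees seen by both eyes

print(binocular_fov, stereo_overlap)
```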

  14. Ocular dominance in layer IV of the cat's visual cortex and the effects of monocular deprivation.

    PubMed Central

    Shatz, C J; Stryker, M P

    1978-01-01

    1. The relation between the physiological pattern of ocular dominance and the anatomical distribution of geniculocortical afferents serving each eye was studied in layer IV of the primary visual cortex of normal and monocularly deprived cats. 2. One eye was injected with radioactive label. After allowing sufficient time for transneuronal transport, micro-electrode recordings were made, and the geniculocortical afferents serving the injected eye were located autoradiographically. 3. In layer IV of normal cats, cells were clustered according to eye preference, and fewer cells were binocularly driven than in other layers. Points of transition between groups of cells dominated by one eye and those dominated by the other were marked with electrolytic lesions. A good correspondence was found between the location of cells dominated by the injected eye and the patches of radioactively labelled geniculocortical afferents. 4. Following prolonged early monocular deprivation, the patches of geniculocortical afferents in layer IV serving the deprived eye were smaller, and those serving the non-deprived eye larger, than normal. Again there was a coincidence between the patches of radioactively labelled afferents and the location of cells dominated by the injected eye. 5. The deprived eye was found to dominate a substantial fraction (22%) of cortical cells in the fourth layer. In other cortical layers, only 7% of the cells were dominated by the deprived eye. 6. These findings suggest that the thalamocortical projection is physically rearranged as a consequence of monocular deprivation, as has been demonstrated for layer IVc of the monkey's visual cortex (Hubel, Wiesel & Le Vay, 1977). PMID:702379

  15. A High Spatial Resolution Depth Sensing Method Based on Binocular Structured Light

    PubMed Central

    Yao, Huimin; Ge, Chenyang; Xue, Jianru; Zheng, Nanning

    2017-01-01

    Depth information has been used in many fields because of its low cost and easy availability, since the Microsoft Kinect was released. However, the Kinect and Kinect-like RGB-D sensors show limited performance in certain applications and place high demands on accuracy and robustness of depth information. In this paper, we propose a depth sensing system that contains a laser projector similar to that used in the Kinect, and two infrared cameras located on both sides of the laser projector, to obtain higher spatial resolution depth information. We apply the block-matching algorithm to estimate the disparity. To improve the spatial resolution, we reduce the size of matching blocks, but smaller matching blocks generate lower matching precision. To address this problem, we combine two matching modes (binocular mode and monocular mode) in the disparity estimation process. Experimental results show that our method can obtain higher spatial resolution depth without loss of the quality of the range image, compared with the Kinect. Furthermore, our algorithm is implemented on a low-cost hardware platform, and the system can support the resolution of 1280 × 960, and up to a speed of 60 frames per second, for depth image sequences. PMID:28397759
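
    The block-matching step described above can be sketched as a sum-of-absolute-differences search along the scanline of rectified images. Block size and search range below are illustrative, and the snippet does not reproduce the authors' combined binocular/monocular matching modes:

```python
import numpy as np

def disparity_at(left, right, row, col, block=7, max_disp=64):
    """Estimate the disparity of one pixel by SAD block matching on rectified images.

    Assumes the pixel lies at least block // 2 rows/columns away from the border.
    """
    half = block // 2
    ref = left[row - half:row + half + 1, col - half:col + half + 1].astype(np.float32)
    best_disp, best_cost = 0, np.inf
    for d in range(max_disp):
        if col - half - d < 0:
            break
        cand = right[row - half:row + half + 1,
                     col - half - d:col + half + 1 - d].astype(np.float32)
        cost = np.abs(ref - cand).sum()   # sum of absolute differences
        if cost < best_cost:
            best_cost, best_disp = cost, d
    return best_disp
```

    The trade-off noted in the abstract shows up directly here: shrinking the block raises spatial resolution but lowers matching precision, which is why the authors combine a second matching mode rather than simply using smaller blocks.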

  16. Monocular display unit for 3D display with correct depth perception

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Hosomi, Takashi

    2009-11-01

    Research on virtual-reality systems has become popular, and the technology has been applied to medical engineering, educational engineering, CAD/CAM systems, and so on. 3D imaging display systems come in two types of presentation method: one uses special glasses, and the other is a monitor system requiring no special glasses. Liquid crystal displays (LCDs) have recently come into common use, and such a display unit can provide a display area the same size as the image screen on the panel. A display system requiring no special glasses is useful for a 3D TV monitor, but it has the drawback that the size of the monitor restricts the visual field available for displaying images. The conventional display can thus show only one screen, and its screen size cannot be enlarged, for example, to twice the area. To enlarge the display area, the authors have developed an enlargement method that uses a mirror. This extension method lets observers see the virtual image plane and doubles the screen area. In the developed display unit, we made use of an image-separating technique using polarized glasses, a parallax barrier, or a lenticular lens screen for 3D imaging. The mirror generates the virtual image plane and doubles the screen area, while the 3D display system using special glasses can also display virtual images over a wide area. In this paper, we present a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth.

  17. Interocular velocity difference contributes to stereomotion speed perception

    NASA Technical Reports Server (NTRS)

    Brooks, Kevin R.

    2002-01-01

    Two experiments are presented assessing the contributions of the rate of change of disparity (CD) and interocular velocity difference (IOVD) cues to stereomotion speed perception. Using a two-interval forced-choice paradigm, the perceived speed of directly approaching and receding stereomotion and of monocular lateral motion in random dot stereogram (RDS) targets was measured. Prior adaptation using dysjunctively moving random dot stimuli induced a velocity aftereffect (VAE). The degree of interocular correlation in the adapting images was manipulated to assess the effectiveness of each cue. While correlated adaptation involved a conventional RDS stimulus, containing both IOVD and CD cues, uncorrelated adaptation featured an independent dot array in each monocular half-image, and hence lacked a coherent disparity signal. Adaptation produced a larger VAE for stereomotion than for monocular lateral motion, implying effects at neural sites beyond that of binocular combination. For motion passing through the horopter, correlated and uncorrelated adaptation stimuli produced equivalent stereomotion VAEs. The possibility that these results were due to the adaptation of a CD mechanism through random matches in the uncorrelated stimulus was discounted in a control experiment. Here both simultaneous and sequential adaptation of left and right eyes produced similar stereomotion VAEs. Motion at uncrossed disparities was also affected by both correlated and uncorrelated adaptation stimuli, but showed a significantly greater VAE in response to the former. These results show that (1) there are two separate, specialised mechanisms for encoding stereomotion: one through IOVD, the other through CD; (2) the IOVD cue dominates the perception of stereomotion speed for stimuli passing through the horopter; and (3) at a disparity pedestal both the IOVD and the CD cues have a significant influence.

  18. Sparse reconstruction of liver cirrhosis from monocular mini-laparoscopic sequences

    NASA Astrophysics Data System (ADS)

    Marcinczak, Jan Marek; Painer, Sven; Grigat, Rolf-Rainer

    2015-03-01

    Mini-laparoscopy is a technique which is used by clinicians to inspect the liver surface with ultra-thin laparoscopes. However, so far no quantitative measures based on mini-laparoscopic sequences are possible. This paper presents a Structure from Motion (SfM) based methodology to do 3D reconstruction of liver cirrhosis from mini-laparoscopic videos. The approach combines state-of-the-art tracking, pose estimation, outlier rejection and global optimization to obtain a sparse reconstruction of the cirrhotic liver surface. Specular reflection segmentation is included into the reconstruction framework to increase the robustness of the reconstruction. The presented approach is evaluated on 15 endoscopic sequences using three cirrhotic liver phantoms. The median reconstruction accuracy ranges from 0.3 mm to 1 mm.
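
    The SfM pipeline outlined above (tracking, pose estimation, outlier rejection, global optimization) can be sketched for a single pair of frames with standard OpenCV calls; the feature detector and parameters are illustrative choices, not necessarily those used in the paper, and the global optimization step is omitted:

```python
import cv2
import numpy as np

def two_view_sparse_reconstruction(img1, img2, K):
    """Minimal two-view structure from motion: match features, estimate pose, triangulate."""
    # 1. Detect and match features (ORB with brute-force Hamming matching).
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

    # 2. Estimate the relative pose with RANSAC outlier rejection.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

    # 3. Triangulate the inlier correspondences into a sparse 3D point cloud.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    good = mask.ravel() > 0
    pts4d = cv2.triangulatePoints(P1, P2, pts1[good].T, pts2[good].T)
    return (pts4d[:3] / pts4d[3]).T   # N x 3 sparse points, up to scale
```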

  19. Comparison of Subjective Refraction under Binocular and Monocular Conditions in Myopic Subjects.

    PubMed

    Kobashi, Hidenaga; Kamiya, Kazutaka; Handa, Tomoya; Ando, Wakako; Kawamorita, Takushi; Igarashi, Akihito; Shimizu, Kimiya

    2015-07-28

    To compare subjective refraction under binocular and monocular conditions, and to investigate the clinical factors affecting the difference in spherical refraction between the two conditions. We examined thirty eyes of 30 healthy subjects. Binocular and monocular refraction without cycloplegia was measured through circular polarizing lenses in both eyes, using the Landolt-C chart of the 3D visual function trainer-ORTe. Stepwise multiple regression analysis was used to assess the relations among several pairs of variables and the difference in spherical refraction in binocular and monocular conditions. Subjective spherical refraction in the monocular condition was significantly more myopic than that in the binocular condition (p < 0.001), whereas no significant differences were seen in subjective cylindrical refraction (p = 0.99). The explanatory variable relevant to the difference in spherical refraction between binocular and monocular conditions was the binocular spherical refraction (p = 0.032, partial regression coefficient B = 0.029) (adjusted R(2) = 0.230). No significant correlation was seen with other clinical factors. Subjective spherical refraction in the monocular condition was significantly more myopic than that in the binocular condition. Eyes with higher degrees of myopia are more predisposed to show the large difference in spherical refraction between these two conditions.

  20. Comparison of Subjective Refraction under Binocular and Monocular Conditions in Myopic Subjects

    PubMed Central

    Kobashi, Hidenaga; Kamiya, Kazutaka; Handa, Tomoya; Ando, Wakako; Kawamorita, Takushi; Igarashi, Akihito; Shimizu, Kimiya

    2015-01-01

    To compare subjective refraction under binocular and monocular conditions, and to investigate the clinical factors affecting the difference in spherical refraction between the two conditions. We examined thirty eyes of 30 healthy subjects. Binocular and monocular refraction without cycloplegia was measured through circular polarizing lenses in both eyes, using the Landolt-C chart of the 3D visual function trainer-ORTe. Stepwise multiple regression analysis was used to assess the relations among several pairs of variables and the difference in spherical refraction in binocular and monocular conditions. Subjective spherical refraction in the monocular condition was significantly more myopic than that in the binocular condition (p < 0.001), whereas no significant differences were seen in subjective cylindrical refraction (p = 0.99). The explanatory variable relevant to the difference in spherical refraction between binocular and monocular conditions was the binocular spherical refraction (p = 0.032, partial regression coefficient B = 0.029) (adjusted R2 = 0.230). No significant correlation was seen with other clinical factors. Subjective spherical refraction in the monocular condition was significantly more myopic than that in the binocular condition. Eyes with higher degrees of myopia are more predisposed to show the large difference in spherical refraction between these two conditions. PMID:26218972

  1. Loss of Neurofilament Labeling in the Primary Visual Cortex of Monocularly Deprived Monkeys

    PubMed Central

    Duffy, Kevin R.; Livingstone, Margaret S.

    2009-01-01

    Visual experience during early life is important for the development of neural organizations that support visual function. Closing one eye (monocular deprivation) during this sensitive period can cause a reorganization of neural connections within the visual system that leaves the deprived eye functionally disconnected. We have assessed the pattern of neurofilament labeling in monocularly deprived macaque monkeys to examine the possibility that a cytoskeleton change contributes to deprivation-induced reorganization of neural connections within the primary visual cortex (V-1). Monocular deprivation for three months starting around the time of birth caused a significant loss of neurofilament labeling within deprived-eye ocular dominance columns. Three months of monocular deprivation initiated in adulthood did not produce a loss of neurofilament labeling. The evidence that neurofilament loss was found only when deprivation occurred during the sensitive period supports the notion that the loss permits restructuring of deprived-eye neural connections within the visual system. These results provide evidence that, in addition to reorganization of LGN inputs, the intrinsic circuitry of V-1 neurons is altered when monocular deprivation occurs early in development. PMID:15563721

  2. Robust range estimation with a monocular camera for vision-based forward collision warning system.

    PubMed

    Park, Ki-Yeong; Hwang, Sun-Young

    2014-01-01

    We propose a range estimation method for vision-based forward collision warning systems with a monocular camera. To solve the problem of variation of camera pitch angle due to vehicle motion and road inclination, the proposed method estimates virtual horizon from size and position of vehicles in captured image at run-time. The proposed method provides robust results even when road inclination varies continuously on hilly roads or lane markings are not seen on crowded roads. For experiments, a vision-based forward collision warning system has been implemented and the proposed method is evaluated with video clips recorded in highway and urban traffic environments. Virtual horizons estimated by the proposed method are compared with horizons manually identified, and estimated ranges are compared with measured ranges. Experimental results confirm that the proposed method provides robust results both in highway and in urban traffic environments.
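
    Once the virtual horizon row is estimated, the range to a preceding vehicle follows from flat-road pinhole geometry. A minimal sketch, with camera height, focal length, and pixel rows as illustrative values rather than the system's actual calibration:

```python
def range_from_horizon(y_vehicle_bottom, y_horizon, focal_px=1200.0, cam_height=1.3):
    """Flat-road range estimate (metres) from a monocular image.

    y_vehicle_bottom : image row (px) where the target vehicle meets the road
    y_horizon        : image row (px) of the estimated virtual horizon
    focal_px         : focal length in pixels
    cam_height       : camera height above the road in metres
    """
    dy = y_vehicle_bottom - y_horizon      # pixels below the horizon
    if dy <= 0:
        raise ValueError("vehicle bottom must lie below the horizon")
    return focal_px * cam_height / dy

# Example: vehicle base 60 px below the estimated horizon gives roughly 26 m of range.
print(range_from_horizon(y_vehicle_bottom=460.0, y_horizon=400.0))
```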

  3. Robust Range Estimation with a Monocular Camera for Vision-Based Forward Collision Warning System

    PubMed Central

    2014-01-01

    We propose a range estimation method for vision-based forward collision warning systems with a monocular camera. To solve the problem of variation of camera pitch angle due to vehicle motion and road inclination, the proposed method estimates virtual horizon from size and position of vehicles in captured image at run-time. The proposed method provides robust results even when road inclination varies continuously on hilly roads or lane markings are not seen on crowded roads. For experiments, a vision-based forward collision warning system has been implemented and the proposed method is evaluated with video clips recorded in highway and urban traffic environments. Virtual horizons estimated by the proposed method are compared with horizons manually identified, and estimated ranges are compared with measured ranges. Experimental results confirm that the proposed method provides robust results both in highway and in urban traffic environments. PMID:24558344

  4. Real Time 3D Facial Movement Tracking Using a Monocular Camera

    PubMed Central

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-01-01

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework firstly utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and fuses the 2D data with a 3D face model using Extended Kalman Filter to yield 3D facial movement information. An alternating optimizing strategy is adopted to fit to different persons automatically. Experiments show that the proposed framework could track the 3D facial movement across various poses and illumination conditions. Given the real face scale the framework could track the eyelid with an error of 1 mm and mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference. PMID:27463714

  5. Real Time 3D Facial Movement Tracking Using a Monocular Camera.

    PubMed

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-07-25

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework firstly utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and fuses the 2D data with a 3D face model using Extended Kalman Filter to yield 3D facial movement information. An alternating optimizing strategy is adopted to fit to different persons automatically. Experiments show that the proposed framework could track the 3D facial movement across various poses and illumination conditions. Given the real face scale the framework could track the eyelid with an error of 1 mm and mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference.

  6. Emergence of binocular functional properties in a monocular neural circuit

    PubMed Central

    Ramdya, Pavan; Engert, Florian

    2010-01-01

    Sensory circuits frequently integrate converging inputs while maintaining precise functional relationships between them. For example, in mammals with stereopsis, neurons at the first stages of binocular visual processing show a close alignment of receptive-field properties for each eye. Still, basic questions about the global wiring mechanisms that enable this functional alignment remain unanswered, including whether the addition of a second retinal input to an otherwise monocular neural circuit is sufficient for the emergence of these binocular properties. We addressed this question by inducing a de novo binocular retinal projection to the larval zebrafish optic tectum and examining recipient neuronal populations using in vivo two-photon calcium imaging. Notably, neurons in rewired tecta were predominantly binocular and showed matching direction selectivity for each eye. We found that a model based on local inhibitory circuitry that computes direction selectivity using the topographic structure of both retinal inputs can account for the emergence of this binocular feature. PMID:19160507

  7. Stereo imaging with spaceborne radars

    NASA Technical Reports Server (NTRS)

    Leberl, F.; Kobrick, M.

    1983-01-01

    Stereo viewing is a valuable tool in photointerpretation and is used for the quantitative reconstruction of the three-dimensional shape of a topographic surface. Stereo viewing refers to the visual perception of space obtained by presenting an overlapping image pair to an observer so that a three-dimensional model is formed in the brain. Some of the observer's function can be performed by machine correlation of the overlapping images, so-called automated stereo correlation. The direct perception of space with two eyes is often called natural binocular vision; techniques for generating three-dimensional models of the surface from two sets of monocular image measurements are the topic of stereology.

  8. Diffractive-optical correlators: chances to make optical image preprocessing as intelligent as human vision

    NASA Astrophysics Data System (ADS)

    Lauinger, Norbert

    2004-10-01

    The human eye is a good model for the engineering of optical correlators. Three prominent intelligent functionalities of human vision could soon be realized by a new diffractive-optical hardware design for optical imaging sensors: (1) illuminant-adaptive RGB-based color vision, (2) monocular 3D vision based on RGB data processing, and (3) patchwise Fourier-optical object classification and identification. The hardware design of the human eye includes specific diffractive-optical elements (DOEs) in aperture space and in image space, and it appears to execute these three jobs at, or not far behind, the loci of the images of objects.

  9. Joint optic disc and cup boundary extraction from monocular fundus images.

    PubMed

    Chakravarty, Arunava; Sivaswamy, Jayanthi

    2017-08-01

    Accurate segmentation of optic disc and cup from monocular color fundus images plays a significant role in the screening and diagnosis of glaucoma. Though optic cup is characterized by the drop in depth from the disc boundary, most existing methods segment the two structures separately and rely only on color and vessel kink based cues due to the lack of explicit depth information in color fundus images. We propose a novel boundary-based Conditional Random Field formulation that extracts both the optic disc and cup boundaries in a single optimization step. In addition to the color gradients, the proposed method explicitly models the depth which is estimated from the fundus image itself using a coupled, sparse dictionary trained on a set of image-depth map (derived from Optical Coherence Tomography) pairs. The estimated depth achieved a correlation coefficient of 0.80 with respect to the ground truth. The proposed segmentation method outperformed several state-of-the-art methods on five public datasets. The average dice coefficient was in the range of 0.87-0.97 for disc segmentation across three datasets and 0.83 for cup segmentation on the DRISHTI-GS1 test set. The method achieved a good glaucoma classification performance with an average AUC of 0.85 for five fold cross-validation on RIM-ONE v2. We propose a method to jointly segment the optic disc and cup boundaries by modeling the drop in depth between the two structures. Since our method requires a single fundus image per eye during testing it can be employed in the large-scale screening of glaucoma where expensive 3D imaging is unavailable. Copyright © 2017 Elsevier B.V. All rights reserved.
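
    For reference, the Dice coefficient reported above measures the overlap between a predicted segmentation mask and the ground truth. A minimal implementation (the array names are illustrative):

```python
import numpy as np

def dice_coefficient(pred_mask, gt_mask):
    """Dice overlap between two binary masks: 2 * |intersection| / (|A| + |B|)."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    total = pred.sum() + gt.sum()
    if total == 0:
        return 1.0  # both masks empty: treat the overlap as perfect
    return 2.0 * np.logical_and(pred, gt).sum() / total
```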

  10. Cross-orientation masking in human color vision: application of a two-stage model to assess dichoptic and monocular sources of suppression.

    PubMed

    Kim, Yeon Jin; Gheiratmand, Mina; Mullen, Kathy T

    2013-05-28

    Cross-orientation masking (XOM) occurs when the detection of a test grating is masked by a superimposed grating at an orthogonal orientation, and is thought to reveal the suppressive effects mediating contrast normalization. Medina and Mullen (2009) reported that XOM was greater for chromatic than achromatic stimuli at equivalent spatial and temporal frequencies. Here we address whether the greater suppression found in binocular color vision originates from a monocular or interocular site, or both. We measure monocular and dichoptic masking functions for red-green color contrast and achromatic contrast at three different spatial frequencies (0.375, 0.75, and 1.5 cpd, 2 Hz). We fit these functions with a modified two-stage masking model (Meese & Baker, 2009) to extract the monocular and interocular weights of suppression. We find that the weight of monocular suppression is significantly higher for color than achromatic contrast, whereas dichoptic suppression is similar for both. These effects are invariant across spatial frequency. We then apply the model to the binocular masking data using the measured values of the monocular and interocular sources of suppression and show that these are sufficient to account for color binocular masking. We conclude that the greater strength of chromatic XOM has a monocular origin that transfers through to the binocular site.

  11. Template-Based 3D Reconstruction of Non-rigid Deformable Object from Monocular Video

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Peng, Xiaodong; Zhou, Wugen; Liu, Bo; Gerndt, Andreas

    2018-06-01

    In this paper, we propose a template-based 3D surface reconstruction system for non-rigid deformable objects from a monocular video sequence. First, we generate a semi-dense template of the target object with a structure-from-motion method applied to a subsequence of the video. This subsequence can be captured by a rigidly moving camera observing the static target object, or by a static camera observing the target object moving rigidly. Then, with the reference template mesh as input and following the framework of classical template-based methods, we solve an energy minimization problem to obtain the correspondence between the template and every frame, yielding a time-varying mesh that represents the deformation of the object. The energy combines a photometric cost, temporal and spatial smoothness costs, and an as-rigid-as-possible cost that enables elastic deformation. An easy and controllable way to generate the semi-dense template for complex objects is presented, and we use an effective iterative Schur-based linear solver for the energy minimization problem. The experimental evaluation presents qualitative reconstruction results for deforming objects on real sequences. Compared with results obtained using other templates as input, the reconstructions based on our template are more accurate and detailed in certain regions. The experimental results also show that the linear solver we use is more efficient than a traditional conjugate-gradient-based solver.

  12. A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots.

    PubMed

    Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il Dan

    2016-03-01

    This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%.
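
    The inverse perspective mapping step referred to above projects image pixels onto the assumed flat floor, so floor texture keeps a consistent scale while obstacles are stretched. A minimal sketch using the standard ground-plane homography H = K [r1 r2 t]; the calibration inputs are placeholders, not the authors' values:

```python
import cv2
import numpy as np

def ground_plane_homography(K, R, t):
    """Homography mapping ground-plane points (X, Y, 0) to image pixels.

    For points on the plane Z = 0, the projection K [R | t] reduces to
    H = K [r1 r2 t], where r1 and r2 are the first two columns of R and
    t is the camera translation given as a length-3 vector.
    """
    return K @ np.column_stack([R[:, 0], R[:, 1], t])

def inverse_perspective_map(image, K, R, t, out_size=(600, 400)):
    """Warp the camera image onto a bird's-eye view of the assumed flat floor.

    With WARP_INVERSE_MAP, each output pixel (x, y), interpreted as a
    ground-plane coordinate, is sampled from the image at H @ (x, y, 1).
    In practice H is composed with a scale/offset so that output pixels
    correspond to, e.g., centimetres of floor in front of the robot.
    """
    H = ground_plane_homography(K, R, t)
    return cv2.warpPerspective(image, H, out_size,
                               flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
```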

  13. Replicating and extending Bourdon's (1902) experiment on motion parallax.

    PubMed

    Ono, Hiroshi; Lillakas, Linda; Kapoor, Anjani; Wong, Irene

    2013-01-01

    Bourdon conducted the first laboratory experiment on observer-produced motion parallax as a cue to depth. In three experiments, we replicated and extended Bourdon's experiment. In experiment 1, we reproduced his finding: when the two cues, motion parallax and relative height, were combined, accuracy of depth perception was high, and when the two cues were in conflict, accuracy was lower. In experiment 2, the relative height cue was replaced with relative retinal image size. As in experiment 1, when the two cues (motion parallax and relative retinal image size) were combined, accuracy was high, but when they were in conflict, it was lower. In experiment 3, the stimuli from experiments 1 and 2 were viewed monocularly with head movement and binocularly without head movement. In the binocular conditions, accuracy, certainty, and the extent of perceived depth were higher than in the monocular condition. In the conflict conditions, accuracy, certainty, and the extent of perceived depth were lower than in the no-conflict condition, but the extent of perceived motion was larger. These results are discussed in terms of recent findings about the effectiveness of motion parallax as a cue for depth.

  14. Fusion of monocular cues to detect man-made structures in aerial imagery

    NASA Technical Reports Server (NTRS)

    Shufelt, Jefferey; Mckeown, David M.

    1991-01-01

    The extraction of buildings from aerial imagery is a complex problem for automated computer vision. It requires locating regions in a scene that possess properties distinguishing them as man-made objects as opposed to naturally occurring terrain features. It is reasonable to assume that no single detection method can correctly delineate or verify buildings in every scene. A cooperative-methods paradigm is useful in approaching the building extraction problem. Using this paradigm, each extraction technique provides information which can be added or assimilated into an overall interpretation of the scene. Thus, the main objective is to explore the development of computer vision system that integrates the results of various scene analysis techniques into an accurate and robust interpretation of the underlying three dimensional scene. The problem of building hypothesis fusion in aerial imagery is discussed. Building extraction techniques are briefly surveyed, including four building extraction, verification, and clustering systems. A method for fusing the symbolic data generated by these systems is described, and applied to monocular image and stereo image data sets. Evaluation methods for the fusion results are described, and the fusion results are analyzed using these methods.

  15. Dichoptic stimulation improves detection of glaucoma with multifocal visual evoked potentials.

    PubMed

    Arvind, Hemamalini; Klistorner, Alexander; Graham, Stuart; Grigg, John; Goldberg, Ivan; Klistorner, Asya; Billson, Frank A

    2007-10-01

    To determine whether simultaneous binocular (dichoptic) stimulation for multifocal visual evoked potentials (mfVEP) detects glaucomatous defects and decreases intereye variability. Twenty-eight patients with glaucoma and 30 healthy subjects underwent mfVEP on monocular and dichoptic stimulation. Dichoptic stimulation was presented with the use of virtual reality goggles (recording time, 7 minutes). Monocular mfVEPs were recorded sequentially for each eye (recording time, 10 minutes). Comparison of mean relative asymmetry coefficient (RAC; calculated as difference in amplitudes between eyes/sum of amplitudes of both eyes at each segment) on monocular and dichoptic mfVEP revealed significantly lower RAC on dichoptic (0.003 +/- 0.03) compared with monocular testing (-0.02 +/- 0.04; P = 0.002). In all 28 patients, dichoptic mfVEP identified defects with excellent topographic correspondence. Of 56 hemifields (28 eyes), 33 had Humphrey visual field (HFA) scotomas, all of which were detected by dichoptic mfVEP. Among 23 hemifields with normal HFA, two were abnormal on monocular and dichoptic mfVEP. Five hemifields (five patients) normal on HFA and monocular mfVEP were abnormal on dichoptic mfVEP. In all five patients, corresponding rim changes were observed on disc photographs. Mean RAC of glaucomatous eyes was significantly higher on dichoptic (0.283 +/- 0.18) compared with monocular (0.199 +/- 0.12) tests (P = 0.0006). Dichoptic mfVEP not only detects HFA losses, it may identify early defects in areas unaffected on HFA and monocular mfVEP while reducing testing time by 30%. Asymmetry was tighter among healthy subjects but wider in patients with glaucoma on simultaneous binocular stimulation, which is potentially a new tool in the early detection of glaucoma.
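
    The relative asymmetry coefficient used above is defined per segment as the inter-eye amplitude difference divided by the amplitude sum. A minimal implementation following that definition (the array names are illustrative):

```python
import numpy as np

def relative_asymmetry(amp_right, amp_left):
    """Relative asymmetry coefficient per mfVEP segment: (R - L) / (R + L)."""
    amp_right = np.asarray(amp_right, dtype=float)
    amp_left = np.asarray(amp_left, dtype=float)
    return (amp_right - amp_left) / (amp_right + amp_left)

# Mean RAC over all segments of one recording (illustrative usage):
# mean_rac = relative_asymmetry(right_eye_amplitudes, left_eye_amplitudes).mean()
```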

  16. Development of an immersive virtual reality head-mounted display with high performance.

    PubMed

    Wang, Yunqi; Liu, Weiqi; Meng, Xiangxiang; Fu, Hanyi; Zhang, Daliang; Kang, Yusi; Feng, Rui; Wei, Zhonglun; Zhu, Xiuqing; Jiang, Guohua

    2016-09-01

    To resolve the contradiction between a large field of view and high resolution in immersive virtual reality (VR) head-mounted displays (HMDs), an HMD monocular optical system with a large field of view and high resolution was designed. The system was fabricated using aspheric technology with CNC grinding and a high-resolution LCD as the image source. With this monocular optical system, an HMD binocular optical system with a continuously adjustable interpupillary distance over a wide range was achieved in the form of partially overlapping fields of view (FOV) combined with a screw adjustment mechanism. A fast image-processor-centered LCD driver circuit and an image preprocessing system were also built to address binocular vision inconsistency in the partially overlapping FOV binocular optical system. The distortions of the HMD optical system with a large field of view were measured, and the optical distortions in the display and the trapezoidal distortions introduced during image processing were corrected by a calibration model of reverse rotations and translations. A high-performance, non-see-through (immersive) VR HMD device with high resolution (1920×1080) and a large FOV [141.6°(H)×73.08°(V)] was developed. The average angular resolution over the full field of view is 18.6 pixels/degree. With the device, high-quality VR simulations can be carried out under various scenarios, and the device can be used for simulated training in aeronautics, astronautics, and other fields with corresponding platforms. The developed device is of practical significance.

  17. Effect of field of view and monocular viewing on angular size judgements in an outdoor scene

    NASA Technical Reports Server (NTRS)

    Denz, E. A.; Palmer, E. A.; Ellis, S. R.

    1980-01-01

    Observers typically overestimate the angular size of distant objects. Significantly, overestimations are greater in outdoor settings than in aircraft visual-scene simulators. The effect of field of view and monocular and binocular viewing conditions on angular size estimation in an outdoor field was examined. Subjects adjusted the size of a variable triangle to match the angular size of a standard triangle set at three greater distances. Goggles were used to vary the field of view from 11.5 deg to 90 deg for both monocular and binocular viewing. In addition, an unrestricted monocular and binocular viewing condition was used. It is concluded that neither restricted fields of view similar to those present in visual simulators nor the restriction of monocular viewing causes a significant loss in depth perception in outdoor settings. Thus, neither factor should significantly affect the depth realism of visual simulators.

  18. Recovering stereo vision by squashing virtual bugs in a virtual reality environment.

    PubMed

    Vedamurthy, Indu; Knill, David C; Huang, Samuel J; Yung, Amanda; Ding, Jian; Kwon, Oh-Sang; Bavelier, Daphne; Levi, Dennis M

    2016-06-19

    Stereopsis is the rich impression of three-dimensionality, based on binocular disparity (the differences between the two retinal images of the same world). However, a substantial proportion of the population is stereo-deficient, and relies mostly on monocular cues to judge the relative depth or distance of objects in the environment. Here we trained adults who were stereo blind or stereo-deficient owing to strabismus and/or amblyopia in a natural visuomotor task, a 'bug squashing' game, in a virtual reality environment. The subjects' task was to squash a virtual dichoptic bug on a slanted surface, by hitting it with a physical cylinder they held in their hand. The perceived surface slant was determined by monocular texture and stereoscopic cues, with these cues being either consistent or in conflict, allowing us to track the relative weighting of monocular versus stereoscopic cues as training in the task progressed. Following training, most participants showed greater reliance on stereoscopic cues, reduced suppression and improved stereoacuity. Importantly, the training-induced changes in relative stereo weights were significant predictors of the improvements in stereoacuity. We conclude that some adults deprived of normal binocular vision and insensitive to the disparity information can, with appropriate experience, recover access to more reliable stereoscopic information. This article is part of the themed issue 'Vision in our three-dimensional world'. © 2016 The Author(s).

  19. Recovering stereo vision by squashing virtual bugs in a virtual reality environment

    PubMed Central

    Vedamurthy, Indu; Knill, David C.; Huang, Samuel J.; Yung, Amanda; Ding, Jian; Kwon, Oh-Sang; Bavelier, Daphne

    2016-01-01

    Stereopsis is the rich impression of three-dimensionality, based on binocular disparity—the differences between the two retinal images of the same world. However, a substantial proportion of the population is stereo-deficient, and relies mostly on monocular cues to judge the relative depth or distance of objects in the environment. Here we trained adults who were stereo blind or stereo-deficient owing to strabismus and/or amblyopia in a natural visuomotor task—a ‘bug squashing’ game—in a virtual reality environment. The subjects' task was to squash a virtual dichoptic bug on a slanted surface, by hitting it with a physical cylinder they held in their hand. The perceived surface slant was determined by monocular texture and stereoscopic cues, with these cues being either consistent or in conflict, allowing us to track the relative weighting of monocular versus stereoscopic cues as training in the task progressed. Following training most participants showed greater reliance on stereoscopic cues, reduced suppression and improved stereoacuity. Importantly, the training-induced changes in relative stereo weights were significant predictors of the improvements in stereoacuity. We conclude that some adults deprived of normal binocular vision and insensitive to the disparity information can, with appropriate experience, recover access to more reliable stereoscopic information. This article is part of the themed issue ‘Vision in our three-dimensional world’. PMID:27269607

  20. Decreased cortical activation in response to a motion stimulus in anisometropic amblyopic eyes using functional magnetic resonance imaging.

    PubMed

    Bonhomme, Gabrielle R; Liu, Grant T; Miki, Atsushi; Francis, Ellie; Dobre, M-C; Modestino, Edward J; Aleman, David O; Haselgrove, John C

    2006-12-01

    Motion perception abnormalities and extrastriate abnormalities have been suggested in amblyopia. Functional MRI (fMRI) and motion stimuli were used to study whether interocular differences in activation are detectable in motion-sensitive cortical areas in patients with anisometropic amblyopia. We performed fMRI at 1.5 T in 4 control subjects (20/20 OU), 1 subject with monocular suppression (20/25), and 2 with anisometropic amblyopia (20/60, 20/800). Monocular suppression was thought to be a forme fruste of amblyopia. The experimental stimulus consisted of expanding and contracting concentric rings, whereas the control condition consisted of stationary concentric rings. Activation was determined by contrasting the 2 conditions for each eye. Significant fMRI activation and comparable right- and left-eye activation were found in V3a and V5 in all control subjects (average z-values in L vs R contrast 0.42, 0.43) and in the subject with monocular suppression (z = 0.19). The anisometropes exhibited decreased extrastriate activation in their amblyopic eyes compared with the fellow eyes (z = 2.12, 2.76). Our data suggest that motion-sensitive cortical structures may be less active when anisometropic amblyopic eyes are stimulated with moving rings. These results support the hypothesis that extrastriate cortex is affected in anisometropic amblyopia. Although suggestive of a magnocellular defect, the exact mechanism is unclear.

  1. Nogo Receptor 1 Limits Ocular Dominance Plasticity but not Turnover of Axonal Boutons in a Model of Amblyopia

    PubMed Central

    Frantz, Michael G.; Kast, Ryan J.; Dorton, Hilary M.; Chapman, Katherine S.; McGee, Aaron W.

    2016-01-01

    The formation and stability of dendritic spines on excitatory cortical neurons are correlated with adult visual plasticity, yet how the formation, loss, and stability of postsynaptic spines register with those of presynaptic axonal varicosities is unknown. Monocular deprivation has been demonstrated to increase the rate of formation of dendritic spines in visual cortex. However, we find that monocular deprivation does not alter the dynamics of intracortical axonal boutons in the visual cortex of either adult wild-type (WT) mice or adult NgR1 mutant (ngr1−/−) mice that retain critical-period visual plasticity. Restoring normal vision for a week following long-term monocular deprivation (LTMD), a model of amblyopia, partially restores ocular dominance (OD) in WT and ngr1−/− mice but does not alter the formation or stability of axonal boutons. Both WT and ngr1−/− mice displayed a rapid return of normal OD within 8 days after LTMD as measured with optical imaging of intrinsic signals. In contrast, single-unit recordings revealed that ngr1−/− mice exhibited greater recovery of OD by 8 days post-LTMD. Our findings support a model of structural plasticity in which changes in synaptic connectivity are largely postsynaptic. In contrast, axonal boutons appear to be stable during changes in cortical circuit function. PMID:25662716

  2. Action Control: Independent Effects of Memory and Monocular Viewing on Reaching Accuracy

    ERIC Educational Resources Information Center

    Westwood, D.A.; Robertson, C.; Heath, M.

    2005-01-01

    Evidence suggests that perceptual networks in the ventral visual pathway are necessary for action control when targets are viewed with only one eye, or when the target must be stored in memory. We tested whether memory-linked (i.e., open-loop versus memory-guided actions) and monocular-linked effects (i.e., binocular versus monocular actions) on…

  3. Monocular Patching May Induce Ipsilateral “Where” Spatial Bias

    PubMed Central

    Chen, Peii; Erdahl, Lillian; Barrett, Anna M.

    2009-01-01

    Spatial bias is an asymmetry of perception and/or representation of spatial information (“where” bias), or of spatially directed actions (“aiming” bias). A monocular patch may induce contralateral “where” spatial bias (the Sprague effect; Sprague (1966) Science, 153, 1544–1547). However, an ipsilateral patch-induced spatial bias may be observed if visual occlusion results in top-down, compensatory re-allocation of spatial perceptual or representational resources toward the region of visual deprivation. Tactile distraction from a monocular patch may also contribute to an ipsilateral bias. To examine these hypotheses, neurologically normal adults bisected horizontal lines at baseline without a patch, while wearing a monocular patch, and while wearing tactile-only and visual-only monocular occlusion. We fractionated “where” and “aiming” spatial bias components using a video apparatus to reverse visual feedback for half of the test trials. The results support monocular patch-induced ipsilateral “where” spatial errors, which are not consistent with the Sprague effect. Further, the present findings suggest that the ipsilateral bias may be primarily induced by visual deprivation, consistent with compensatory “where” resource re-allocation. PMID:19100274

  4. Relating binocular and monocular vision in strabismic and anisometropic amblyopia.

    PubMed

    Agrawal, Ritwick; Conner, Ian P; Odom, J V; Schwartz, Terry L; Mendola, Janine D

    2006-06-01

    To examine deficits in monocular and binocular vision in adults with amblyopia and to test the following 2 hypotheses: (1) Regardless of clinical subtype, the degree of impairment in binocular integration predicts the pattern of monocular acuity deficits. (2) Subjects who lack binocular integration exhibit the most severe interocular suppression. Seven subjects with anisometropia, 6 subjects with strabismus, and 7 control subjects were tested. Monocular tests included Snellen acuity, grating acuity, Vernier acuity, and contrast sensitivity. Binocular tests included Titmus stereo test, binocular motion integration, and dichoptic contrast masking. As expected, both groups showed deficits in monocular acuity, with subjects with strabismus showing greater deficits in Vernier acuity. Both amblyopic groups were then characterized according to the degree of residual stereoacuity and binocular motion integration ability, and 67% of subjects with strabismus compared with 29% of subjects with anisometropia were classified as having "nonbinocular" vision according to our criterion. For this nonbinocular group, Vernier acuity is most impaired. In addition, the nonbinocular group showed the most dichoptic contrast masking of the amblyopic eye and the least dichoptic contrast masking of the fellow eye. The degree of residual binocularity and interocular suppression predicts monocular acuity and may be a significant etiological mechanism of vision loss.

  5. Monocular precrash vehicle detection: features and classifiers.

    PubMed

    Sun, Zehang; Bebis, George; Miller, Ronald

    2006-07-01

    Robust and reliable vehicle detection from images acquired by a moving vehicle (i.e., on-road vehicle detection) is an important problem with applications to driver assistance systems and autonomous, self-guided vehicles. The focus of this work is on the issues of feature extraction and classification for rear-view vehicle detection. Specifically, by treating the problem of vehicle detection as a two-class classification problem, we have investigated several different feature extraction methods such as principal component analysis, wavelets, and Gabor filters. To evaluate the extracted features, we have experimented with two popular classifiers, neural networks and support vector machines (SVMs). Based on our evaluation results, we have developed an on-board real-time monocular vehicle detection system that is capable of acquiring grey-scale images, using Ford's proprietary low-light camera, achieving an average detection rate of 10 Hz. Our vehicle detection algorithm consists of two main steps: a multiscale driven hypothesis generation step and an appearance-based hypothesis verification step. During the hypothesis generation step, image locations where vehicles might be present are extracted. This step uses multiscale techniques not only to speed up detection, but also to improve system robustness. The appearance-based hypothesis verification step verifies the hypotheses using Gabor features and SVMs. The system has been tested in Ford's concept vehicle under different traffic conditions (e.g., structured highway, complex urban streets, and varying weather conditions), illustrating good performance.
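
    A minimal sketch of the kind of appearance-based verification step described above, with Gabor filter statistics fed to an SVM; the window size, filter bank, and synthetic training data below are illustrative assumptions, not the authors' actual configuration:

        import numpy as np
        from skimage.filters import gabor
        from sklearn.svm import SVC

        def gabor_features(patch, freqs=(0.1, 0.2, 0.3), thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
            """Mean and variance of Gabor response magnitude over a candidate window."""
            feats = []
            for f in freqs:
                for t in thetas:
                    real, imag = gabor(patch, frequency=f, theta=t)
                    mag = np.hypot(real, imag)
                    feats.extend([mag.mean(), mag.var()])
            return np.array(feats)

        # Hypothetical 32x32 grey-scale candidate windows with vehicle / non-vehicle labels
        rng = np.random.default_rng(0)
        train_windows = rng.random((40, 32, 32))
        train_labels = rng.integers(0, 2, 40)

        clf = SVC(kernel="rbf").fit(np.array([gabor_features(w) for w in train_windows]), train_labels)
        candidate = rng.random((32, 32))
        print("vehicle hypothesis verified:", bool(clf.predict([gabor_features(candidate)])[0]))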

  6. MONOCULAR DIPLOPIA DUE TO SPHEROCYLINDRICAL REFRACTIVE ERRORS (AN AMERICAN OPHTHALMOLOGICAL SOCIETY THESIS)

    PubMed Central

    Archer, Steven M.

    2007-01-01

    Purpose Ordinary spherocylindrical refractive errors have been recognized as a cause of monocular diplopia for over a century, yet explanation of this phenomenon using geometrical optics has remained problematic. This study tests the hypothesis that the diffraction theory treatment of refractive errors will provide a more satisfactory explanation of monocular diplopia. Methods Diffraction theory calculations were carried out for modulation transfer functions, point spread functions, and line spread functions under conditions of defocus, astigmatism, and mixed spherocylindrical refractive errors. Defocused photographs of inked and projected black lines were made to demonstrate the predicted consequences of the theoretical calculations. Results For certain amounts of defocus, line spread functions resulting from spherical defocus are predicted to have a bimodal intensity distribution that could provide the basis for diplopia with line targets. Multimodal intensity distributions are predicted in point spread functions and provide a basis for diplopia or polyopia of point targets under conditions of astigmatism. The predicted doubling effect is evident in defocused photographs of black lines, but the effect is not as robust as the subjective experience of monocular diplopia. Conclusions Monocular diplopia due to ordinary refractive errors can be predicted from diffraction theory. Higher-order aberrations—such as spherical aberration—are not necessary but may, under some circumstances, enhance the features of monocular diplopia. The physical basis for monocular diplopia is relatively subtle, and enhancement by neural processing is probably needed to account for the robustness of the percept. PMID:18427616
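
    The diffraction-theory quantities above can be reproduced numerically: a circular pupil function with a quadratic defocus phase is Fourier-transformed to give the point spread function, and integrating the PSF along one axis gives the line spread function. In this sketch the wavelength-normalized defocus coefficient is an illustrative assumption:

        import numpy as np

        N = 512
        x = np.linspace(-1, 1, N)                      # normalized pupil coordinates
        X, Y = np.meshgrid(x, x)
        r2 = X ** 2 + Y ** 2
        aperture = (r2 <= 1.0).astype(float)           # circular pupil

        W020 = 1.5                                     # defocus, in wavelengths (illustrative)
        pupil = aperture * np.exp(1j * 2 * np.pi * W020 * r2)

        psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil, s=(2 * N, 2 * N)))) ** 2
        psf /= psf.sum()
        lsf = psf.sum(axis=0)                          # line spread function

        # Count prominent local maxima; a bimodal LSF is the predicted basis of
        # monocular diplopia for line targets at certain amounts of defocus.
        mid = lsf[1:-1]
        peaks = np.where((mid > lsf[:-2]) & (mid > lsf[2:]) & (mid > 0.5 * lsf.max()))[0]
        print("prominent LSF peaks:", len(peaks))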

  7. Combined measurement system for double shield tunnel boring machine guidance based on optical and visual methods.

    PubMed

    Lin, Jiarui; Gao, Kai; Gao, Yang; Wang, Zheng

    2017-10-01

    In order to detect the position of the cutting shield at the head of a double shield tunnel boring machine (TBM) during the excavation, this paper develops a combined measurement system which is mainly composed of several optical feature points, a monocular vision sensor, a laser target sensor, and a total station. The different elements of the combined system are mounted on the TBM in suitable sequence, and the position of the cutting shield in the reference total station frame is determined by coordinate transformations. Subsequently, the structure of the feature points and matching technique for them are expounded, the position measurement method based on monocular vision is presented, and the calibration methods for the unknown relationships among different parts of the system are proposed. Finally, a set of experimental platforms to simulate the double shield TBM is established, and accuracy verification experiments are conducted. Experimental results show that the mean deviation of the system is 6.8 mm, which satisfies the requirements of double shield TBM guidance.
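
    The chain of coordinate transformations mentioned above can be sketched with homogeneous matrices; the rotations and offsets below are placeholders, not the paper's calibration values:

        import numpy as np

        def hom(R, t):
            """Assemble a 4x4 homogeneous transform from a rotation matrix and a translation."""
            T = np.eye(4)
            T[:3, :3], T[:3, 3] = R, t
            return T

        # Hypothetical calibrated transforms along the measurement chain
        T_station_laser = hom(np.eye(3), [12.0, 3.0, 1.5])   # laser target sensor in the total-station frame
        T_laser_cam = hom(np.eye(3), [0.4, 0.0, 0.2])        # monocular vision sensor in the laser-target frame
        T_cam_shield = hom(np.eye(3), [0.0, -0.1, 2.5])      # cutting shield in the camera frame

        # Cutting-shield position expressed in the reference total-station frame
        p_shield = (T_station_laser @ T_laser_cam @ T_cam_shield)[:3, 3]
        print(p_shield)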

  8. Mapping number to space in the two hemispheres of the avian brain.

    PubMed

    Rugani, Rosa; Vallortigara, Giorgio; Regolin, Lucia

    2016-09-01

    Pre-verbal infants and non-human animals associate small numbers with the left space and large numbers with the right space. Birds and primates trained to identify a given position in a sagittal series of identical positions, when required to respond to a left/right-oriented series, identified the given position starting from the left end. Here, we extended this evidence by selectively investigating the role of either cerebral hemisphere, using the temporary monocular occlusion technique. In birds, which lack a corpus callosum, visual input is fed mainly to the contralateral hemisphere. We trained 4-day-old chicks to identify the 4th element in a sagittal series of 10 identical elements. At test, the series was identical but left/right oriented. Testing was conducted under right monocular, left monocular, or binocular viewing conditions. Right monocular chicks pecked at the 4th right element; left monocular and binocular chicks pecked at the 4th left element. Data on monocular chicks demonstrate that both hemispheres can deal with an ordinal (sequential) task. Data on binocular chicks indicate that the left bias is linked to a right-hemisphere dominance, which allocates attention toward the left hemispace. This constitutes a first step towards understanding the neural basis of number-space mapping. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Peripheral Prism Glasses: Effects of Dominance, Suppression and Background

    PubMed Central

    Ross, Nicole C.; Bowers, Alex R.; Peli, Eli

    2012-01-01

    Purpose Unilateral peripheral prisms for homonymous hemianopia (HH) place different images on corresponding peripheral retinal points, a rivalrous situation in which local suppression of the prism image could occur and thus limit device functionality. Detection with peripheral prisms has primarily been evaluated using conventional perimetry where binocular rivalry is unlikely to occur. We quantified detection over more visually complex backgrounds and examined the effects of ocular dominance. Methods Detection rates of 8 participants with HH or quadranopia and normal binocularity wearing unilateral peripheral prism glasses were determined for static perimetry targets briefly presented in the prism expansion area (in the blind hemifield) and the seeing hemifield, under monocular and binocular viewing, over uniform gray and more complex patterned backgrounds. Results Participants with normal binocularity had mixed sensory ocular dominance, demonstrated no difference in detection rates when prisms were fitted on the side of the HH or the opposite side (p>0.2), and had detection rates in the expansion area that were not different for monocular and binocular viewing over both backgrounds (p>0.4). However, two participants with abnormal binocularity and strong ocular dominance demonstrated reduced detection in the expansion area when prisms were fitted in front of the non-dominant eye. Conclusions We found little evidence of local suppression of the peripheral prism image for HH patients with normal binocularity. However, in cases of strong ocular dominance, consideration should be given to fitting prisms before the dominant eye. Although these results are promising, further testing in more realistic conditions including image motion is needed. PMID:22885783

  10. Towards Unmanned Systems for Dismounted Operations in the Canadian Forces

    DTIC Science & Technology

    2011-01-01

    LIDAR, and RADAR) and lower power/mass, passive imaging techniques such as structure from motion and simultaneous localisation and mapping (SLAM) ... sensors and learning algorithms. 5.1.2 Simultaneous localisation and mapping: SLAM algorithms concurrently estimate a robot pose and a map of unique ... locations and vehicle pose are part of the SLAM state vector and are estimated in each update step. AISS developed a monocular camera-based SLAM

  11. Hardware in the Loop Performance Assessment of LIDAR-Based Spacecraft Pose Determination

    PubMed Central

    Fasano, Giancarmine; Grassi, Michele

    2017-01-01

    In this paper an original, easy to reproduce, semi-analytic calibration approach is developed for hardware-in-the-loop performance assessment of pose determination algorithms processing point cloud data, collected by imaging a non-cooperative target with LIDARs. The laboratory setup includes a scanning LIDAR, a monocular camera, a scaled-replica of a satellite-like target, and a set of calibration tools. The point clouds are processed by uncooperative model-based algorithms to estimate the target relative position and attitude with respect to the LIDAR. Target images, acquired by a monocular camera operated simultaneously with the LIDAR, are processed applying standard solutions to the Perspective-n-Points problem to get high-accuracy pose estimates which can be used as a benchmark to evaluate the accuracy attained by the LIDAR-based techniques. To this aim, a precise knowledge of the extrinsic relative calibration between the camera and the LIDAR is essential, and it is obtained by implementing an original calibration approach which does not need ad-hoc homologous targets (e.g., retro-reflectors) easily recognizable by the two sensors. The pose determination techniques investigated by this work are of interest to space applications involving close-proximity maneuvers between non-cooperative platforms, e.g., on-orbit servicing and active debris removal. PMID:28946651
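
    The camera-based benchmark described above rests on a standard Perspective-n-Points solution; a minimal sketch with OpenCV, in which the fiducial coordinates, intrinsics, and simulated pose are placeholders rather than the laboratory setup's values:

        import numpy as np
        import cv2

        # Hypothetical 3D fiducial points on the target, expressed in its body frame (metres)
        object_points = np.array([[0.0, 0.0, 0.0], [0.3, 0.0, 0.0], [0.3, 0.2, 0.0],
                                  [0.0, 0.2, 0.0], [0.15, 0.1, 0.1], [0.05, 0.15, 0.05]])
        K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])  # assumed intrinsics
        dist = np.zeros(5)                               # assume negligible lens distortion

        # Simulate pixel detections from a known relative pose (stand-in for real image measurements)
        rvec_true = np.array([0.1, -0.2, 0.05])
        tvec_true = np.array([0.1, -0.05, 2.0])          # target about 2 m in front of the camera
        image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, dist)

        # Recover the relative pose from the 2D-3D correspondences
        ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
        print("recovered relative position (m):", tvec.ravel())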

  12. Hardware in the Loop Performance Assessment of LIDAR-Based Spacecraft Pose Determination.

    PubMed

    Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele

    2017-09-24

    In this paper an original, easy to reproduce, semi-analytic calibration approach is developed for hardware-in-the-loop performance assessment of pose determination algorithms processing point cloud data, collected by imaging a non-cooperative target with LIDARs. The laboratory setup includes a scanning LIDAR, a monocular camera, a scaled-replica of a satellite-like target, and a set of calibration tools. The point clouds are processed by uncooperative model-based algorithms to estimate the target relative position and attitude with respect to the LIDAR. Target images, acquired by a monocular camera operated simultaneously with the LIDAR, are processed applying standard solutions to the Perspective-n-Points problem to get high-accuracy pose estimates which can be used as a benchmark to evaluate the accuracy attained by the LIDAR-based techniques. To this aim, a precise knowledge of the extrinsic relative calibration between the camera and the LIDAR is essential, and it is obtained by implementing an original calibration approach which does not need ad-hoc homologous targets (e.g., retro-reflectors) easily recognizable by the two sensors. The pose determination techniques investigated by this work are of interest to space applications involving close-proximity maneuvers between non-cooperative platforms, e.g., on-orbit servicing and active debris removal.

  13. A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots

    PubMed Central

    Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il “Dan”

    2016-01-01

    This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%. PMID:26938540
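
    A minimal sketch of the inverse perspective mapping step referred to above: a homography estimated from four floor-plane correspondences re-projects the camera view onto a bird's-eye floor map. The pixel/metric correspondences and map scale are placeholders, not the paper's calibration:

        import numpy as np
        import cv2

        # Four image points on the floor (pixels) and their ground-plane positions (cm), hypothetical
        img_pts = np.float32([[220, 470], [420, 470], [600, 330], [40, 330]])
        grd_pts = np.float32([[-30, 50], [30, 50], [60, 200], [-60, 200]])

        # Target a 400x400 bird's-eye map at 1 px = 1 cm, origin at the bottom centre
        map_pts = np.float32([[x + 200, 400 - y] for x, y in grd_pts])
        H = cv2.getPerspectiveTransform(img_pts, map_pts)

        frame = np.tile((np.arange(640) % 256).astype(np.uint8), (480, 1))  # stand-in camera frame
        ipm = cv2.warpPerspective(frame, H, (400, 400))
        # Floor pixels keep a consistent appearance in the IPM view, whereas obstacles rising
        # above the floor are stretched; the Markov random field segmentation exploits this.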

  14. Early Alcohol Exposure Disrupts Visual Cortex Plasticity in Mice

    PubMed Central

    Lantz, Crystal L.; Wang, Weili; Medina, Alexandre E.

    2012-01-01

    There is growing evidence that deficits in neuronal plasticity underlie the cognitive problems seen in fetal alcohol spectrum disorders (FASD). However, the mechanisms behind these deficits are not clear. Here we test the effects of early alcohol exposure on ocular dominance plasticity (ODP) in mice and the reversibility of these effects by phosphodiesterase (PDE) inhibitors. Mouse pups were exposed to 5 g/kg of 25% ethanol i.p. on postnatal days (P) 5, 7 and 9. This type of alcohol exposure mimics binge drinking during the third trimester equivalent of human gestation. To assess ocular dominance plasticity, animals were monocularly deprived at P21 for 10 days and tested using optical imaging of intrinsic signals. During the period of monocular deprivation, animals were treated with vinpocetine (20 mg/kg; PDE1 inhibitor), rolipram (1.25 mg/kg; PDE4 inhibitor), vardenafil (3 mg/kg; PDE5 inhibitor) or vehicle solution. Monocular deprivation resulted in the expected shift in ocular dominance of the binocular zone in saline controls but not in the ethanol group. While vinpocetine successfully restored ODP in the ethanol group, rolipram and vardenafil did not. However, when rolipram and vardenafil were given simultaneously, ODP was restored. PDE4 and PDE5 are specific to cAMP and cGMP respectively, while PDE1 acts on both of these nucleotides. Our findings suggest that the combined activation of the cAMP and cGMP cascades may be a good approach to improve neuronal plasticity in FASD models. PMID:22617459

  15. A noniterative greedy algorithm for multiframe point correspondence.

    PubMed

    Shafique, Khurram; Shah, Mubarak

    2005-01-01

    This paper presents a framework for finding point correspondences in monocular image sequences over multiple frames. The general problem of multiframe point correspondence is NP-hard for three or more frames. A polynomial time algorithm for a restriction of this problem is presented and is used as the basis of the proposed greedy algorithm for the general problem. The greedy nature of the proposed algorithm allows it to be used in real-time systems for tracking and surveillance, etc. In addition, the proposed algorithm deals with the problems of occlusion, missed detections, and false positives by using a single noniterative greedy optimization scheme and, hence, reduces the complexity of the overall algorithm as compared to most existing approaches where multiple heuristics are used for the same purpose. While most greedy algorithms for point tracking do not allow for entry and exit of the points from the scene, this is not a limitation for the proposed algorithm. Experiments with real and synthetic data over a wide range of scenarios and system parameters are presented to validate the claims about the performance of the proposed algorithm.
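
    A toy illustration of greedy gated matching between two consecutive frames (only the flavour of the approach; the paper's algorithm additionally handles occlusion, missed detections, false positives, and entries/exits over multiple frames):

        import numpy as np

        def greedy_match(prev_pts, curr_pts, gate=20.0):
            """Link detections in consecutive frames by ascending distance, one-to-one,
            rejecting any pair farther apart than the gating threshold."""
            prev_pts, curr_pts = np.asarray(prev_pts, float), np.asarray(curr_pts, float)
            d = np.linalg.norm(prev_pts[:, None, :] - curr_pts[None, :, :], axis=2)
            pairs, used_p, used_c = [], set(), set()
            for i, j in sorted(np.ndindex(d.shape), key=lambda ij: d[ij]):
                if i not in used_p and j not in used_c and d[i, j] <= gate:
                    pairs.append((i, j))
                    used_p.add(i)
                    used_c.add(j)
            return pairs          # unmatched indices model exits, entries, or missed detections

        prev_pts = [(10, 10), (50, 80), (200, 40)]
        curr_pts = [(12, 13), (52, 78)]               # the third point has left the scene
        print(greedy_match(prev_pts, curr_pts))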

  16. Modeling Of A Monocular, Full-Color, Laser-Scanning, Helmet-Mounted Display for Aviator Situational Awareness

    DTIC Science & Technology

    2017-03-27

    USAARL Report No. 2017-10, Modeling of a Monocular, Full-Color, Laser-Scanning, Helmet-Mounted Display for Aviator Situational Awareness, by Thomas... ...was the idea of modeling HMDs by producing computer imagery for an observer to evaluate the quality of symbology. Keywords: HMD, ANVIS, HGU-56P, Virtual

  17. Algorithm for Autonomous Landing

    NASA Technical Reports Server (NTRS)

    Kuwata, Yoshiaki

    2011-01-01

    Because of their small size, high maneuverability, and easy deployment, micro aerial vehicles (MAVs) are used for a wide variety of both civilian and military missions. One of their current drawbacks is the vast array of sensors (such as GPS, altimeter, radar, and the like) required to make a landing. Due to the MAV's small payload size, this is a major concern. Replacing the imaging sensors with a single monocular camera is sufficient to land a MAV. By applying optical flow algorithms to images obtained from the camera, time-to-collision can be measured. This is a measurement of position and velocity (but not of absolute distance), and can avoid obstacles as well as facilitate a landing on a flat surface given a set of initial conditions. The key to this approach is to calculate time-to-collision based on some image on the ground. By holding the angular velocity constant, horizontal speed decreases linearly with the height, resulting in a smooth landing. Mathematical proofs show that even with actuator saturation or modeling/measurement uncertainties, MAVs can land safely. Landings of this nature may have a higher velocity than is desirable, but this can be compensated for by a cushioning or dampening system, or by using a system of legs to grab onto a surface. Such a monocular camera system can increase vehicle payload size (or correspondingly reduce vehicle size), increase speed of descent, and guarantee a safe landing by directly correlating speed to height from the ground.
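
    A rough sketch of the optical-flow idea described above: for a camera approaching a roughly fronto-parallel surface, the divergence of the flow field is inversely related to time-to-collision. The 2/divergence relation, the Farneback parameters, and the synthetic looming frames are assumptions for illustration:

        import numpy as np
        import cv2

        def time_to_collision(prev_gray, curr_gray, dt=1 / 30.0):
            """Estimate time-to-collision (seconds) from dense optical-flow divergence."""
            flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            div = np.median(np.gradient(flow[..., 0], axis=1) + np.gradient(flow[..., 1], axis=0))
            return np.inf if div <= 0 else 2.0 * dt / div   # approximation for a looming flat surface

        # Synthetic looming pair: the second frame is the first scaled up by 5 percent
        rng = np.random.default_rng(1)
        prev = cv2.GaussianBlur((rng.random((240, 320)) * 255).astype(np.uint8), (9, 9), 0)
        big = cv2.resize(prev, None, fx=1.05, fy=1.05, interpolation=cv2.INTER_LINEAR)
        y0, x0 = (big.shape[0] - 240) // 2, (big.shape[1] - 320) // 2
        curr = big[y0:y0 + 240, x0:x0 + 320]
        print("estimated time-to-collision:", time_to_collision(prev, curr), "s")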

  18. Monocular perceptual learning of contrast detection facilitates binocular combination in adults with anisometropic amblyopia.

    PubMed

    Chen, Zidong; Li, Jinrong; Liu, Jing; Cai, Xiaoxiao; Yuan, Junpeng; Deng, Daming; Yu, Minbin

    2016-02-01

    Perceptual learning in contrast detection improves monocular visual function in adults with anisometropic amblyopia; however, its effect on binocular combination remains unknown. Given that the amblyopic visual system suffers from pronounced binocular functional loss, it is important to address how the amblyopic visual system responds to such training strategies under binocular viewing conditions. Anisometropic amblyopes (n = 13) were asked to complete two psychophysical supra-threshold binocular summation tasks: (1) binocular phase combination and (2) dichoptic global motion coherence before and after monocular training to investigate this question. We showed that these participants benefited from monocular training in terms of binocular combination. More importantly, the improvements observed with the area under log CSF (AULCSF) were found to be correlated with the improvements in binocular phase combination.

  19. Monocular perceptual learning of contrast detection facilitates binocular combination in adults with anisometropic amblyopia

    PubMed Central

    Chen, Zidong; Li, Jinrong; Liu, Jing; Cai, Xiaoxiao; Yuan, Junpeng; Deng, Daming; Yu, Minbin

    2016-01-01

    Perceptual learning in contrast detection improves monocular visual function in adults with anisometropic amblyopia; however, its effect on binocular combination remains unknown. Given that the amblyopic visual system suffers from pronounced binocular functional loss, it is important to address how the amblyopic visual system responds to such training strategies under binocular viewing conditions. Anisometropic amblyopes (n = 13) were asked to complete two psychophysical supra-threshold binocular summation tasks: (1) binocular phase combination and (2) dichoptic global motion coherence before and after monocular training to investigate this question. We showed that these participants benefited from monocular training in terms of binocular combination. More importantly, the improvements observed with the area under log CSF (AULCSF) were found to be correlated with the improvements in binocular phase combination. PMID:26829898

  20. Optimization of visual training for full recovery from severe amblyopia in adults

    PubMed Central

    Eaton, Nicolette C.; Sheehan, Hanna Marie

    2016-01-01

    The severe amblyopia induced by chronic monocular deprivation is highly resistant to reversal in adulthood. Here we use a rodent model to show that recovery from deprivation amblyopia can be achieved in adults by a two-step sequence, involving enhancement of synaptic plasticity in the visual cortex by dark exposure followed immediately by visual training. The perceptual learning induced by visual training contributes to the recovery of vision and can be optimized to drive full recovery of visual acuity in severely amblyopic adults. PMID:26787781

  1. Separating monocular and binocular neural mechanisms mediating chromatic contextual interactions.

    PubMed

    D'Antona, Anthony D; Christiansen, Jens H; Shevell, Steven K

    2014-04-17

    When seen in isolation, a light that varies in chromaticity over time is perceived to oscillate in color. Perception of that same time-varying light may be altered by a surrounding light that is also temporally varying in chromaticity. The neural mechanisms that mediate these contextual interactions are the focus of this article. Observers viewed a central test stimulus that varied in chromaticity over time within a larger surround that also varied in chromaticity at the same temporal frequency. Center and surround were presented either to the same eye (monocular condition) or to opposite eyes (dichoptic condition) at the same frequency (3.125, 6.25, or 9.375 Hz). Relative phase between center and surround modulation was varied. In both the monocular and dichoptic conditions, the perceived modulation depth of the central light depended on the relative phase of the surround. A simple model implementing a linear combination of center and surround modulation fit the measurements well. At the lowest temporal frequency (3.125 Hz), the surround's influence was virtually identical for monocular and dichoptic conditions, suggesting that at this frequency, the surround's influence is mediated primarily by a binocular neural mechanism. At higher frequencies, the surround's influence was greater for the monocular condition than for the dichoptic condition, and this difference increased with temporal frequency. Our findings show that two separate neural mechanisms mediate chromatic contextual interactions: one binocular and dominant at lower temporal frequencies and the other monocular and dominant at higher frequencies (6-10 Hz).
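
    One plausible form that such a linear-combination model could take (an assumption for illustration; the abstract does not give the fitted equation) treats the effective center modulation as the vector sum of the center signal and a weighted copy of the surround signal:

        M_{\mathrm{perceived}}(\phi) = \left|\, m_c + w\, m_s\, e^{i\phi} \right|

    where m_c and m_s are the center and surround modulation depths, φ is their relative phase, and w is a surround weight that would be expected to differ across temporal frequencies and between monocular and dichoptic presentation.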

  2. Color constrains depth in da Vinci stereopsis for camouflage but not occlusion.

    PubMed

    Wardle, Susan G; Gillam, Barbara J

    2013-12-01

    Monocular regions that occur with binocular viewing of natural scenes can produce a strong perception of depth--"da Vinci stereopsis." They occur either when part of the background is occluded in one eye, or when a nearer object is camouflaged against a background surface in one eye's view. There has been some controversy over whether da Vinci depth is constrained by geometric or ecological factors. Here we show that the color of the monocular region constrains the depth perceived from camouflage, but not occlusion, as predicted by ecological considerations. Quantitative depth was found in both cases, but for camouflage only when the color of the monocular region matched the binocular background. Unlike previous reports, depth failed even when nonmatching colors satisfied conditions for perceptual transparency. We show that placing a colored line at the boundary between the binocular and monocular regions is sufficient to eliminate depth from camouflage. When both the background and the monocular region contained vertical contours that could be fused, some observers appeared to use fusion, and others da Vinci constraints, supporting the existence of a separate da Vinci mechanism. The results show that da Vinci stereopsis incorporates color constraints and is more complex than previously assumed.

  3. Blood flow velocity in monocular retinoblastoma assessed by color doppler

    PubMed Central

    Bonanomi, Maria Teresa B C; Saito, Osmar C; de Lima, Patricia Picciarelli; Bonanomi, Roberta Chizzotti; Chammas, Maria Cristina

    2015-01-01

    OBJECTIVE: To analyze the flow of retrobulbar vessels in retinoblastoma by color Doppler imaging. METHODS: A prospective study of monocular retinoblastoma treated by enucleation between 2010 and 2014. The examination comprised fundoscopy, magnetic resonance imaging, ultrasonography and color Doppler imaging. The peak blood velocities in the central retinal artery and central retinal vein of tumor-containing eyes (tuCRAv and tuCRVv, respectively) were assessed. The velocities were compared with those for normal eyes (nlCRAv and nlCRVv) and correlated with clinical and pathological findings. Tumor dimensions in the pathological sections were compared with those in magnetic resonance imaging and ultrasonography and were correlated with tuCRAv and tuCRVv. In tumor-containing eyes, the resistivity index in the central retinal artery and the pulse index in the central retinal vein were studied in relation to all variables. RESULTS: Eighteen patients were included. Comparisons between tuCRAv and nlCRAv and between tuCRVv and nlCRVv revealed higher velocities in tumor-containing eyes (p<0.001 for both), with a greater effect in the central retinal artery than in the central retinal vein (p=0.024). Magnetic resonance imaging and ultrasonography measurements were as reliable as pathology assessments (p=0.675 and p=0.375, respectively). A positive relationship was found between tuCRAv and the tumor volume (p=0.027). The pulse index in the central retinal vein was lower in male patients (p=0.017) and in eyes with optic nerve invasion (p=0.0088). CONCLUSIONS: TuCRAv and tuCRVv are higher in tumor-containing eyes than in normal eyes. Magnetic resonance imaging and ultrasonography measurements are reliable. The tumor volume is correlated with a higher tuCRAv and a reduced pulse in the central retinal vein is correlated with male sex and optic nerve invasion. PMID:26735219

  4. Paradoxical monocular stereopsis and perspective vergence

    NASA Technical Reports Server (NTRS)

    Enright, J. T.

    1989-01-01

    The question of how to most effectively convey depth in a picture is a multifaceted problem, both because of potential limitations of the chosen medium (stereopsis, image motion), and because effectiveness can be defined in various ways. Practical applications usually focus on information transfer, i.e., effective techniques for evoking recognition of implied depth relationships, but this issue depends on subjective judgements which are difficult to scale when stimuli are above threshold. Two new approaches to this question are proposed here which are based on alternative criteria for effectiveness. Paradoxical monocular stereopsis is a remarkably compelling impression of depth which is evoked during one-eyed viewing of only certain illustrations; it can be unequivocally recognized because the feeling of depth collapses when one shifts to binocular viewing. An exploration of the stimulus properties which are effective for this phenomenon may contribute useful answers for the more general perceptual problem. Perspective vergence is an eye-movement response associated with changes of fixation point within a picture which implies depth; it also arises only during monocular viewing. The response is directionally appropriate (i.e., apparently nearer objects evoke convergence, and vice versa), but the magnitude of the response can be altered consistently by making relatively minor changes in the illustration. The cross-subject agreement in changes of response magnitude would permit systematic exploration to determine which stimulus configurations are most effective in evoking perspective vergence, with quantitative answers based upon this involuntary reflex. It may well be that the most effective pictures in this context will embody features which would increase effectiveness of pictures in a more general sense.

  5. A robust approach for a filter-based monocular simultaneous localization and mapping (SLAM) system.

    PubMed

    Munguía, Rodrigo; Castillo-Toledo, Bernardino; Grau, Antoni

    2013-07-03

    Simultaneous localization and mapping (SLAM) is an important problem to solve in robotics theory in order to build truly autonomous mobile robots. This work presents a novel method for implementing a SLAM system based on a single camera sensor. The SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants. In this case, a single camera, which is freely moving through its environment, represents the sole sensor input to the system. The sensors have a large impact on the algorithm used for SLAM. Cameras are used more frequently, because they provide a lot of information and are well adapted for embedded systems: they are light, cheap and power-saving. Nevertheless, and unlike range sensors, which provide range and angular information, a camera is a projective sensor providing only angular measurements of image features. Therefore, depth information (range) cannot be obtained in a single step. In this case, special techniques for feature initialization are needed in order to enable the use of angular sensors (such as cameras) in SLAM systems. The main contribution of this work is to present a novel and robust scheme for incorporating and measuring visual features in filtering-based monocular SLAM systems. The proposed method is based on a two-step technique, which is intended to exploit all the information available in angular measurements. Unlike previous schemes, the values of parameters used by the initialization technique are derived directly from the sensor characteristics, thus simplifying the tuning of the system. The experimental results show that the proposed method surpasses the performance of previous schemes.
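
    The core difficulty described above is that a single view fixes only a bearing to each feature, not its range. A minimal sketch of turning a pixel measurement into a unit bearing vector plus an inverse-depth hypothesis, a common parametrization for initializing features in filter-based monocular SLAM (the intrinsics, prior values, and parametrization are illustrative assumptions, not necessarily the authors' exact scheme):

        import numpy as np

        def pixel_to_bearing(u, v, fx, fy, cx, cy):
            """Back-project a pixel into a unit direction vector in the camera frame."""
            d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
            return d / np.linalg.norm(d)

        def init_feature(cam_pos, bearing, rho0=0.1, sigma_rho=0.5):
            """Represent a new landmark as (anchor, bearing, inverse depth) with a broad
            depth prior, since a single angular measurement constrains no range."""
            return {"anchor": np.asarray(cam_pos, float),
                    "bearing": bearing,
                    "inv_depth": rho0,               # prior inverse depth (1/m), assumed
                    "inv_depth_var": sigma_rho ** 2}

        bearing = pixel_to_bearing(400, 260, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
        feat = init_feature(cam_pos=[0.0, 0.0, 0.0], bearing=bearing)
        # Later, parallax-producing views constrain inv_depth through the filter update.
        print(feat["bearing"], feat["inv_depth"])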

  6. The Enright phenomenon. Stereoscopic distortion of perceived driving speed induced by monocular pupil dilation.

    PubMed

    Carkeet, Andrew; Wood, Joanne M; McNeill, Kylie M; McNeill, Hamish J; James, Joanna A; Holder, Leigh S

    The Enright phenomenon describes the distortion in speed perception experienced by an observer looking sideways from a moving vehicle when viewing with interocular differences in retinal image brightness, usually induced by neutral density filters. We investigated whether the Enright phenomenon could be induced with monocular pupil dilation using tropicamide. We tested 17 visually normal young adults on a closed-road driving circuit. Participants were asked to travel at goal speeds of 40 km/h and 60 km/h while looking sideways from the vehicle with: (i) both eyes with undilated pupils; (ii) both eyes with dilated pupils; (iii) only the leading eye dilated; and (iv) only the trailing eye dilated. For each condition we recorded actual driving speed. With the pupil of the leading eye dilated, participants drove significantly faster (by an average of 3.8 km/h) than with both eyes dilated (p=0.02); with the trailing eye dilated, participants drove significantly slower (by an average of 3.2 km/h) than with both eyes dilated (p<0.001). The speed with the leading eye dilated was faster by an average of 7 km/h than with the trailing eye dilated (p<0.001). There was no significant difference between driving speeds when viewing with both eyes either dilated or undilated (p=0.322). Our results are the first to show a measurable change in driving behaviour following monocular pupil dilation and support predictions based on the Enright phenomenon. Copyright © 2016 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.

  7. Correcting intermittent central suppression improves binocular marksmanship.

    PubMed

    Hussey, Eric S

    2007-04-01

    Intermittent central suppression (ICS) is a defect in normal binocular (two-eyed) vision that causes confusion in visual detail. ICS is a repetitive, intermittent loss of visual sensation in the central area of vision. As the central vision of either eye "turns on and off", aiming errors in sight can occur that must be corrected when both eyes are seeing again. Any aiming errors in sight might be expected to interfere with marksmanship during two-eyed seeing. We compared monocular (one-eyed, patched) and binocular (two-eyed) pistol marksmanship in an Army ROTC cadet before and after successful therapy for diagnosed ICS. Pretreatment, monocular marksmanship was significantly better than binocular marksmanship, suggesting that defective binocularity reduced accuracy. After treatment for ICS, binocular and monocular marksmanship were essentially the same. Results confirmed predictions that, with increased visual stability from correcting the suppression, binocular and monocular marksmanship accuracies should merge.

  8. Depth interval estimates from motion parallax and binocular disparity beyond interaction space.

    PubMed

    Gillam, Barbara; Palmisano, Stephen A; Govan, Donovan G

    2011-01-01

    Static and dynamic observers provided binocular and monocular estimates of the depths between real objects lying well beyond interaction space. On each trial, pairs of LEDs were presented inside a dark railway tunnel. The nearest LED was always 40 m from the observer, with the depth separation between LED pairs ranging from 0 up to 248 m. Dynamic binocular viewing was found to produce the greatest (ie most veridical) estimates of depth magnitude, followed next by static binocular viewing, and then by dynamic monocular viewing. (No significant depth was seen with static monocular viewing.) We found evidence that both binocular and monocular dynamic estimates of depth were scaled for the observation distance when the ground plane and walls of the tunnel were visible up to the nearest LED. We conclude that both motion parallax and stereopsis provide useful long-distance depth information and that motion-parallax information can enhance the degree of stereoscopic depth seen.

  9. A projector calibration method for monocular structured light system based on digital image correlation

    NASA Astrophysics Data System (ADS)

    Feng, Zhixin

    2018-02-01

    Projector calibration is crucial for a camera-projector three-dimensional (3-D) structured light measurement system, which has one camera and one projector. In this paper, a novel projector calibration method is proposed based on digital image correlation. In the method, the projector is viewed as an inverse camera, and a planar calibration board with feature points is used to calibrate the projector. During the calibration process, a random speckle pattern is projected onto the calibration board at different orientations to establish the correspondences between projector images and camera images. In this way, a dataset for projector calibration is generated. Then the projector can be calibrated using a well-established camera calibration algorithm. The experimental results confirm that the proposed method is accurate and reliable for projector calibration.
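
    Once the speckle correlation has mapped each board feature point into projector pixel coordinates, the projector can indeed be calibrated like a camera with a standard routine. A minimal sketch with OpenCV, in which the board geometry, synthetic projector model, and poses are placeholders standing in for the correspondences that digital image correlation would provide:

        import numpy as np
        import cv2

        # Feature points on the planar calibration board (Z = 0), a 9x6 grid at an assumed 25 mm pitch
        board = np.zeros((9 * 6, 3), np.float32)
        board[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2) * 25.0

        # Simulate where each board point falls in the projector's pixel grid for 8 poses
        # (in practice these come from speckle-pattern image correlation, not from a model)
        K_true = np.array([[1400.0, 0.0, 512.0], [0.0, 1400.0, 384.0], [0.0, 0.0, 1.0]])
        proj_pts, obj_pts = [], []
        for i in range(1, 9):
            rvec = np.array([0.1 * i, -0.05 * i, 0.02 * i])
            tvec = np.array([-100.0 + 10 * i, -80.0, 600.0 + 20 * i])
            pts, _ = cv2.projectPoints(board, rvec, tvec, K_true, np.zeros(5))
            proj_pts.append(pts.reshape(-1, 2).astype(np.float32))
            obj_pts.append(board)

        # Calibrate the projector as an inverse camera over its 1024x768 pixel grid
        ret, K_proj, dist_proj, rvecs, tvecs = cv2.calibrateCamera(obj_pts, proj_pts, (1024, 768), None, None)
        print("projector intrinsics:\n", K_proj)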

  10. The role of transparency in da Vinci stereopsis.

    PubMed

    Zannoli, Marina; Mamassian, Pascal

    2011-10-15

    The majority of natural scenes contains zones that are visible to one eye only. Past studies have shown that these monocular regions can be seen at a precise depth even though there are no binocular disparities that uniquely constrain their locations in depth. In the so-called da Vinci stereopsis configuration, the monocular region is a vertical line placed next to a binocular rectangular occluder. The opacity of the occluder has been mentioned to be a necessary condition to obtain da Vinci stereopsis. However, this opacity constraint has never been empirically tested. In the present study, we tested whether da Vinci stereopsis and perceptual transparency can interact using a classical da Vinci configuration in which the opacity of the occluder varied. We used two different monocular objects: a line and a disk. We found no effect of the opacity of the occluder on the perceived depth of the monocular object. A careful analysis of the distribution of perceived depth revealed that the monocular object was perceived at a depth that increased with the distance between the object and the occluder. The analysis of the skewness of the distributions was not consistent with a double fusion explanation, favoring an implication of occlusion geometry in da Vinci stereopsis. A simple model that includes the geometry of the scene could account for the results. In summary, the mechanism responsible to locate monocular regions in depth is not sensitive to the material properties of objects, suggesting that da Vinci stereopsis is solved at relatively early stages of disparity processing. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. Efficient Multi-Concept Visual Classifier Adaptation in Changing Environments

    DTIC Science & Technology

    2016-09-01

    yet to be discussed in existing supervised multi-concept visual perception systems used in robotics applications [1,5–7]. Annotation of images is ... Autonomous robot navigation in highly populated pedestrian zones. J Field Robotics. 2015;32(4):565–589. 3. Milella A, Reina G, Underwood J. A self-learning framework for statistical ground classification using RADAR and monocular vision. J Field Robotics. 2015;32(1):20–41. 4. Manjanna S, Dudek G

  12. Generalization of Figure-Ground Segmentation from Binocular to Monocular Vision in an Embodied Biological Brain Model

    DTIC Science & Technology

    2011-08-01

    Intelligence (AGI). For example, it promises to unlock vast sets of training data, such as Google Images, which have previously been inaccessible to ... development of this skill holds great promise for efforts, like Emer, that aim to create an Artificial General Intelligence (AGI). For example, it promises to ...

  13. Chromatic interocular-switch rivalry.

    PubMed

    Christiansen, Jens H; D'Antona, Anthony D; Shevell, Steven K

    2017-05-01

    Interocular-switch rivalry (also known as stimulus rivalry) is a kind of binocular rivalry in which two rivalrous images are swapped between the eyes several times a second. The result is stable periods of one image and then the other, with stable intervals that span many eye swaps (Logothetis, Leopold, & Sheinberg, 1996). Previous work used this close kin of binocular rivalry with rivalrous forms. Experiments here test whether chromatic interocular-switch rivalry, in which the swapped stimuli differ in only chromaticity, results in slow alternation between two colors. Swapping equiluminant rivalrous chromaticities at 3.75 Hz resulted in slow perceptual color alternation, with one or the other color often continuously visible for two seconds or longer (during which there were 15+ eye swaps). A well-known theory for sustained percepts from interocular-switch rivalry with form is inhibitory competition between binocular neurons driven by monocular neurons with matched orientation tuning in each eye; such binocular neurons would produce a stable response when a given orientation is swapped between the eyes. A similar model can account for the percepts here from chromatic interocular-switch rivalry and is underpinned by the neurophysiological finding that color-preferring binocular neurons are driven by monocular neurons from each eye with well-matched chromatic selectivity (Peirce, Solomon, Forte, & Lennie, 2008). In contrast to chromatic interocular-switch rivalry, luminance interocular-switch rivalry with swapped stimuli that differ in only luminance did not result in slowly alternating percepts of different brightnesses.

  14. Graph Structure-Based Simultaneous Localization and Mapping Using a Hybrid Method of 2D Laser Scan and Monocular Camera Image in Environments with Laser Scan Ambiguity

    PubMed Central

    Oh, Taekjun; Lee, Donghwa; Kim, Hyungjin; Myung, Hyun

    2015-01-01

    Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, the conventional SLAM (simultaneous localization and mapping) algorithms exploiting a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera in the framework of a graph structure-based SLAM. 3D coordinates of image feature points are acquired through the hybrid method, with the assumption that the wall is normal to the ground and vertically flat. However, this assumption can be relaxed, because the subsequent feature matching process rejects the outliers on an inclined or non-flat wall. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and the performance of the proposed method is superior to that of the conventional approach. PMID:26151203
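
    A minimal sketch of the hybrid idea described above: an image feature is given 3D coordinates by pairing its horizontal bearing with the 2D laser range at the same bearing, under the vertical-wall assumption (the intrinsics, scan layout, and the co-location of the two sensors are simplifying assumptions for illustration):

        import numpy as np

        def feature_to_3d(u, v, scan_ranges, scan_angles, fx, fy, cx, cy):
            """3D coordinates of an image feature assuming it lies on a vertical wall,
            using the laser range measured along the feature's horizontal bearing."""
            bearing = np.arctan2(u - cx, fx)                  # horizontal angle of the feature
            idx = np.argmin(np.abs(scan_angles - bearing))    # closest laser beam
            r = scan_ranges[idx]                              # horizontal distance to the wall
            x, z = r * np.sin(bearing), r * np.cos(bearing)
            y = (v - cy) / fy * z                             # height from the pinhole model
            return np.array([x, y, z])

        angles = np.deg2rad(np.arange(-120.0, 121.0, 1.0))    # hypothetical scanner field of view
        ranges = np.full_like(angles, 4.0)                    # a wall 4 m away along every beam
        print(feature_to_3d(420, 200, ranges, angles, fx=500.0, fy=500.0, cx=320.0, cy=240.0))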

  15. Magnetic resonance imaging demonstrates compartmental muscle mechanisms of human vertical fusional vergence

    PubMed Central

    Clark, Robert A.

    2015-01-01

    Vertical fusional vergence (VFV) normally compensates for slight vertical heterophorias. We employed magnetic resonance imaging to clarify extraocular muscle contributions to VFV induced by a monocular two-prism-diopter (1.15°) base-up prism in 14 normal adults. Fusion during prism viewing requires monocular infraduction. Scans were repeated without prism, and with the prism shifted contralaterally. Contractility indicated by morphometric indexes was analyzed separately in the medial and lateral putative compartments of the vertical rectus and superior oblique (SO) muscles and in the superior and inferior putative compartments of the horizontal rectus muscles, but in the whole inferior oblique (IO). Images confirmed appropriate VFV that was implemented by the inferior rectus (IR) medial compartment contracting ipsilateral and relaxing contralateral to the prism. There was no significant contractility in the IR lateral compartment. The superior but not the inferior lateral rectus (LR) compartment contracted significantly in the prism-viewing eye, but not contralateral to the prism. The IO contracted ipsilateral but not contralateral to the prism. In the infraducting eye, the SO medial compartment relaxed significantly, while the lateral compartment was unchanged; contralateral to the prism, the SO lateral compartment contracted, while the medial compartment was unchanged. There was no contractility in the superior or medial rectus muscles in either eye. There was no globe retraction. We conclude that the vertical component of VFV is primarily implemented by IR medial compartment contraction. Since appropriate vertical rotation is not directly implemented, or is opposed, by the associated differential LR and SO compartmental activity and IO contraction, these actions probably implement a torsional component of VFV. PMID:25589593

  16. Long-range traveling waves of activity triggered by local dichoptic stimulation in V1 of behaving monkeys

    PubMed Central

    Yang, Zhiyong; Heeger, David J.; Blake, Randolph

    2014-01-01

    Traveling waves of cortical activity, in which local stimulation triggers lateral spread of activity to distal locations, have been hypothesized to play an important role in cortical function. However, there is conflicting physiological evidence for the existence of spreading traveling waves of neural activity triggered locally. Dichoptic stimulation, in which the two eyes view dissimilar monocular patterns, can lead to dynamic wave-like fluctuations in visual perception and therefore, provides a promising means for identifying and studying cortical traveling waves. Here, we used voltage-sensitive dye imaging to test for the existence of traveling waves of activity in the primary visual cortex of awake, fixating monkeys viewing dichoptic stimuli. We find clear traveling waves that are initiated by brief, localized contrast increments in one of the monocular patterns and then, propagate at speeds of ∼30 mm/s. These results demonstrate that under an appropriate visual context, circuitry in visual cortex in alert animals is capable of supporting long-range traveling waves triggered by local stimulation. PMID:25343785

  17. Perceptual Real-Time 2D-to-3D Conversion Using Cue Fusion.

    PubMed

    Leimkuhler, Thomas; Kellnhofer, Petr; Ritschel, Tobias; Myszkowski, Karol; Seidel, Hans-Peter

    2018-06-01

    We propose a system to infer binocular disparity from a monocular video stream in real time. In contrast to classic reconstruction of physical depth in computer vision, we compute a perceptually plausible disparity that is numerically inaccurate but results in a very similar overall depth impression, with a plausible overall layout, sharp edges, fine details, and agreement between luminance and disparity. We use several simple monocular cues to estimate disparity maps and confidence maps of low spatial and temporal resolution in real time. These are complemented by spatially varying, appearance-dependent, and class-specific disparity prior maps learned from example stereo images. Scene classification selects this prior at runtime. Fusion of prior and cues is performed by robust MAP inference on a dense spatio-temporal conditional random field with high spatial and temporal resolution. Using normal distributions allows this to run as constant-time, parallel per-pixel work. We compare our approach to previous 2D-to-3D conversion systems in terms of different metrics as well as a user study, and validate our notion of perceptually plausible disparity.
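
    When every source (a cue or the prior) provides a per-pixel disparity estimate with an associated confidence, modelling each as a normal distribution makes fusion a closed-form, constant-time per-pixel operation. The NumPy sketch below illustrates that general idea under assumed array shapes; it is not the paper's full spatio-temporal CRF inference.

        import numpy as np

        def fuse_gaussian_maps(means, variances):
            # Fuse per-pixel Gaussian disparity estimates (cues and a prior) by
            # multiplying the distributions: inverse-variance weighting, constant
            # time per pixel.
            precisions = [1.0 / v for v in variances]
            fused_precision = sum(precisions)
            fused_mean = sum(m * p for m, p in zip(means, precisions)) / fused_precision
            return fused_mean, 1.0 / fused_precision

        # Two cues and one prior for a 2x2 patch of disparities (made-up numbers):
        cues = [np.full((2, 2), 3.0), np.full((2, 2), 5.0), np.full((2, 2), 4.0)]
        vars_ = [np.full((2, 2), 1.0), np.full((2, 2), 4.0), np.full((2, 2), 2.0)]
        mean, var = fuse_gaussian_maps(cues, vars_)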

  18. Perception of time to contact of slow- and fast-moving objects using monocular and binocular motion information.

    PubMed

    Fath, Aaron J; Lind, Mats; Bingham, Geoffrey P

    2018-04-17

    The role of the monocular-flow-based optical variable τ in the perception of the time to contact of approaching objects has been well-studied. There are additional contributions from binocular sources of information, such as changes in disparity over time (CDOT), but these are less understood. We conducted an experiment to determine whether an object's velocity affects which source is most effective for perceiving time to contact. We presented participants with stimuli that simulated two approaching squares. During approach the squares disappeared, and participants indicated which square would have contacted them first. Approach was specified by (a) only disparity-based information, (b) only monocular flow, or (c) all sources of information in normal viewing conditions. As expected, participants were more accurate at judging fast objects when only monocular flow was available than when only CDOT was. In contrast, participants were more accurate judging slow objects with only CDOT than with only monocular flow. For both ranges of velocity, the condition with both information sources yielded performance equivalent to the better of the single-source conditions. These results show that different sources of motion information are used to perceive time to contact and play different roles in allowing for stable perception across a variety of conditions.
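
    For readers unfamiliar with τ: under a constant approach speed, time to contact can be recovered from purely monocular flow as the ratio of an object's angular size to its rate of angular expansion. A minimal illustration (the numbers are made up):

        def time_to_contact(theta, dtheta_dt):
            # tau: angular size divided by its rate of expansion (constant speed).
            return theta / dtheta_dt

        # An object subtending 2.0 deg and expanding at 0.5 deg/s is ~4 s from contact.
        print(time_to_contact(2.0, 0.5))  # 4.0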

  19. The effects of left and right monocular viewing on hemispheric activation.

    PubMed

    Wang, Chao; Burtis, D Brandon; Ding, Mingzhou; Mo, Jue; Williamson, John B; Heilman, Kenneth M

    2018-03-01

    Prior research has revealed that whereas activation of the left hemisphere primarily increases the activity of the parasympathetic division of the autonomic nervous system, right-hemisphere activation increases the activity of the sympathetic division. In addition, each hemisphere primarily receives retinocollicular projections from the contralateral eye. A prior study reported that pupillary dilation was greater with left- than with right-eye monocular viewing. The goal of this study was to test the alternative hypotheses that this asymmetric pupil dilation with left-eye viewing was induced by an increase in right-hemisphere-mediated sympathetic activity versus a reduction in left-hemisphere-mediated parasympathetic activity. Thus, this study was designed to learn whether there are changes in hemispheric activation, as measured by alteration of spontaneous alpha activity, during right versus left monocular viewing. High-density electroencephalography (EEG) was recorded from healthy participants viewing a crosshair with their right, left, or both eyes. There was significantly less alpha power over the right hemisphere's parietal-occipital area with left-eye and binocular viewing than with right-eye monocular viewing. The greater relative reduction of right-hemisphere alpha activity during left than during right monocular viewing provides further evidence that left-eye viewing induces a greater increase in right-hemisphere activation than does right-eye viewing.

  20. Art in the eye of the beholder: the perception of art during monocular viewing.

    PubMed

    Finney, Glen Raymond; Heilman, Kenneth M

    2008-03-01

    To explore whether monocular viewing affects judgment of art. Each superior colliculus receives optic nerve fibers primarily from the contralateral eye, and visual input to each colliculus activates the ipsilateral hemisphere. In previous studies, monocular viewing influenced performance on visual-spatial and verbal memory tasks. Eight college-educated subjects, 6 men and 2 women, monocularly viewed 10 paintings with the right eye and another 10 with the left. Subjects had not previously seen the paintings. Each time, 5 paintings were abstract expressionist and 5 were impressionist. The orders of eye viewing and painting viewed were pseudorandomized and counterbalanced. Subjects rated 4 qualities of the paintings on a 1-to-10 scale: representation, aesthetics (beauty), novelty, and closure (completeness). Paintings in the abstract expressionist style had a significant difference in the rating of novelty; the paintings were rated more novel when viewed with the left eye than with the right eye. There was a trend toward rating paintings as having more closure when viewed with the right eye than with the left. Impressionist paintings showed no differences. Monocular viewing influences artistic judgments, with novelty rated higher when paintings are viewed with the left eye. Asymmetric projections from each eye and hemispheric specialization are posited to explain these differences.

  1. Maturation of Binocular, Monocular Grating Acuity and of the Visual Interocular Difference in the First 2 Years of Life.

    PubMed

    Costa, Marcelo Fernandes; de Cássia Rodrigues Matos França, Valtenice; Barboni, Mirella Teles Salgueiro; Ventura, Dora Fix

    2018-05-01

    The sweep visual evoked potential method (sVEP) is a powerful tool for measurement of visual acuity in infants. Despite the applicability and reliability of the technique in measuring visual functions, an understanding of sVEP acuity maturation and of how the interocular difference in acuity develops in early infancy, as well as the availability of normality ranges, are rare in the literature. We measured binocular and monocular sVEP acuities in 481 healthy infants aged from birth to 24 months without ophthalmological diseases. Binocular sVEP acuity was significantly higher than monocular visual acuities for almost all ages. Maturation of monocular sVEP acuity showed two longer critical periods, while binocular acuity showed three maturation periods in the same age range. We found a systematic variation of the mean interocular acuity difference (IAD) range with age, from 1.45 cpd at birth to 0.31 cpd at 24 months. An additional contribution was the determination of sVEP acuity norms for the entire age range. We conclude that binocular and monocular sVEP acuities have distinct growth curves reflecting different maturation profiles for each function. The IAD range shortens with age, and this should be considered when using sVEP acuity measurements for clinical diagnoses such as amblyopia.

  2. Capture of visual direction in dynamic vergence is reduced with flashed monocular lines.

    PubMed

    Jaschinski, Wolfgang; Jainta, Stephanie; Schürer, Michael

    2006-08-01

    The visual direction of a continuously presented monocular object is captured by the visual direction of a closely adjacent binocular object, which calls into question the reliability of nonius lines for measuring vergence. This was shown by Erkelens, C. J., and van Ee, R. (1997a,b) [Capture of the visual direction: An unexpected phenomenon in binocular vision. Vision Research, 37, 1193-1196; Capture of the visual direction of monocular objects by adjacent binocular objects. Vision Research, 37, 1735-1745], who stimulated dynamic vergence with a counter-phase oscillation of two square random-dot patterns (one to each eye) that contained a smaller central dot-free gap (of variable width), with a vertical monocular line oscillating in phase with the random-dot pattern of the respective eye; subjects adjusted the motion-amplitude of the line until it was perceived as (nearly) stationary. With a continuously presented monocular line, we replicated capture of visual direction provided the dot-free gap was narrow: the adjusted motion-amplitude of the line was similar to the motion-amplitude of the random-dot pattern, although large vergence errors occurred. However, when we flashed the line for 67 ms at the moments of maximal and minimal disparity of the vergence stimulus, we found that the adjusted motion-amplitude of the line was smaller; thus, the capture effect appeared to be reduced with flashed nonius lines. Accordingly, we found that the objectively measured vergence gain was significantly correlated (r=0.8) with the motion-amplitude of the flashed monocular line when the separation between the line and the fusion contour was at least 32 min arc. In conclusion, if one wishes to estimate the dynamic vergence response with psychophysical methods, effects of capture of visual direction can be reduced by using flashed nonius lines.

  3. Shrinkage of X cells in the lateral geniculate nucleus after monocular deprivation revealed by FoxP2 labeling.

    PubMed

    Duffy, Kevin R; Holman, Kaitlyn D; Mitchell, Donald E

    2014-05-01

    The parallel processing of visual features by distinct neuron populations is a central characteristic of the mammalian visual system. In the A laminae of the cat dorsal lateral geniculate nucleus (dLGN), parallel processing streams originate from two principal neuron types, called X and Y cells. Disruption of visual experience early in life by monocular deprivation has been shown to alter the structure and function of Y cells, but the extent to which deprivation influences X cells remains less clear. A transcription factor, FoxP2, has recently been shown to selectively label X cells in the ferret dLGN and thus provides an opportunity to examine whether monocular deprivation alters the soma size of X cells. In this study, FoxP2 labeling was examined in the dLGN of normal and monocularly deprived cats. The characteristics of neurons labeled for FoxP2 were consistent with FoxP2 being a marker for X cells in the cat dLGN. Monocular deprivation for either a short (7 days) or long (7 weeks) duration did not alter the density of FoxP2-positive neurons between nondeprived and deprived dLGN layers. However, for each deprived animal examined, measurement of the cross-sectional area of FoxP2-positive neurons (X cells) revealed that within deprived layers, X cells were smaller by approximately 20% after 7 days of deprivation, and by approximately 28% after 7 weeks of deprivation. The observed alteration to the cross-sectional area of X cells indicates that perturbation of this major pathway contributes to the functional impairments that develop from monocular deprivation.

  4. Sensor fusion of monocular cameras and laser rangefinders for line-based Simultaneous Localization and Mapping (SLAM) tasks in autonomous mobile robots.

    PubMed

    Zhang, Xinzheng; Rad, Ahmad B; Wong, Yiu-Kwong

    2012-01-01

    This paper presents a sensor fusion strategy for Simultaneous Localization and Mapping (SLAM) in dynamic environments. The approach has two features: (i) a fusion module that synthesizes line segments obtained from the laser rangefinder with line features extracted from the monocular camera; this policy eliminates pseudo-segments that arise from momentary pauses of dynamic objects in the laser data. (ii) A modified multi-sensor point estimation fusion SLAM (MPEF-SLAM) that incorporates two individual Extended Kalman Filter (EKF)-based SLAM algorithms: monocular and laser SLAM. The localization error of the fused SLAM is reduced compared with that of either individual SLAM. Additionally, a new data association technique based on the homography transformation matrix is developed for monocular SLAM; this data association method reduces redundant computation. The experimental results validate the performance of the proposed sensor fusion and data association method.
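
    One common way to use a homography for data association is as a geometric gate: an observed feature is associated with a mapped landmark only if it lands near the homography-mapped prediction from the previous view. The sketch below assumes the homography H has already been estimated from co-planar scene points; the threshold and names are illustrative and not the paper's exact procedure.

        import numpy as np

        def consistent_with_homography(H, pts_prev, pts_curr, thresh=3.0):
            # Accept a candidate association only if the current observation lies
            # within `thresh` pixels of where the homography H maps the previous one.
            prev_h = np.hstack([pts_prev, np.ones((len(pts_prev), 1))])
            proj = (H @ prev_h.T).T
            proj = proj[:, :2] / proj[:, 2:3]
            return np.linalg.norm(proj - pts_curr, axis=1) < thresh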

  5. High Accuracy Monocular SFM and Scale Correction for Autonomous Driving.

    PubMed

    Song, Shiyu; Chandraker, Manmohan; Guest, Clark C

    2016-04-01

    We present a real-time monocular visual odometry system that achieves high accuracy in real-world autonomous driving applications. First, we demonstrate robust monocular SFM that exploits multithreading to handle driving scenes with large motions and rapidly changing imagery. To correct for scale drift, we use the known height of the camera above the ground plane. Our second contribution is a novel data-driven mechanism for cue combination that allows highly accurate ground plane estimation by adapting observation covariances of multiple cues, such as sparse feature matching and dense inter-frame stereo, based on their relative confidences inferred from visual data on a per-frame basis. Finally, we demonstrate extensive benchmark performance and comparisons on the challenging KITTI dataset, achieving accuracy comparable to stereo and exceeding prior monocular systems. Our SFM system is optimized to output pose within 50 ms in the worst case, while average-case operation is over 30 fps. Our framework also significantly boosts the accuracy of applications like object localization that rely on the ground plane.
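
    The scale-drift correction rests on a simple ratio: compare the camera height above the estimated ground plane (in the arbitrary units of monocular SFM) with the known metric mounting height, and rescale the estimated translation accordingly. A minimal sketch of that idea follows; the variable names and the 1.5 m default are placeholders, not values from the paper.

        def correct_scale(t_rel, ground_height_est, camera_height_true=1.5):
            # Rescale a monocular SFM translation using the known camera mounting
            # height: t_rel is in arbitrary SFM units, ground_height_est is the
            # camera height above the estimated ground plane in the same units,
            # and camera_height_true is the metric mounting height (1.5 m is made up).
            scale = camera_height_true / ground_height_est
            return scale * t_rel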

  6. Object localization in handheld thermal images for fireground understanding

    NASA Astrophysics Data System (ADS)

    Vandecasteele, Florian; Merci, Bart; Jalalvand, Azarakhsh; Verstockt, Steven

    2017-05-01

    Despite the broad application of handheld thermal imaging cameras in firefighting, their usage is mostly limited to subjective interpretation by the person carrying the device. As a remedy to this limitation, object localization and classification mechanisms could assist fireground understanding and help with the automated localization, characterization, and spatio-temporal (spreading) analysis of the fire. Automated understanding of thermal images can enrich conventional knowledge-based firefighting techniques by providing information from data- and sensing-driven approaches. In this work, transfer learning is applied to multi-label convolutional neural network architectures for object localization and recognition in monocular visual, infrared, and multispectral dynamic images. Furthermore, the possibility of analyzing fire scene images is studied and the current limitations are discussed. Finally, the understanding of the room configuration (i.e., object locations) for indoor localization in reduced-visibility environments and the linking with Building Information Models (BIM) are investigated.
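
    Transfer learning for a multi-label recognition task typically means keeping a backbone pretrained on natural images and retraining only a sigmoid-output head on the new data. The PyTorch sketch below illustrates that recipe with an arbitrary ResNet-18 backbone; the framework, architecture, and class count are assumptions for illustration, not details from the paper.

        import torch.nn as nn
        from torchvision import models

        def build_multilabel_transfer_model(num_classes):
            # Reuse a CNN pretrained on natural images; retrain only a multi-label
            # head (one sigmoid output per class) on the thermal/multispectral data.
            backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
            for p in backbone.parameters():
                p.requires_grad = False             # freeze the pretrained features
            backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
            return backbone, nn.BCEWithLogitsLoss()  # multi-label loss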

  7. RISK FACTORS FOR FOUR-YEAR INCIDENT VISUAL IMPAIRMENT AND BLINDNESS: THE LOS ANGELES LATINO EYE STUDY

    PubMed Central

    Yonekawa, Yoshihiro; Varma, Rohit; Choudhury, Farzana; Torres, Mina; Azen, Stanley P.

    2016-01-01

    Purpose: To identify independent risk factors for incident visual impairment (VI) and monocular blindness. Design: Population-based prospective cohort study. Participants: 4,658 Latinos aged ≥40 years in the Los Angeles Latino Eye Study (LALES). Methods: A detailed history and comprehensive ophthalmological examination were performed at baseline and at the 4-year follow-up on 4,658 Latinos aged 40 years and older from Los Angeles, California. Incident VI was defined as best-corrected visual acuity (BCVA) of <20/40 and >20/200 in the better-seeing eye at the 4-year follow-up examination in persons who had a BCVA of ≥20/40 in the better-seeing eye at baseline. Incident monocular blindness was defined as BCVA of ≤20/200 in one eye at follow-up in persons who had a BCVA >20/200 in both eyes at baseline. Socio-demographic and clinical risk factors identified at the baseline interview and examination and associated with incident VI and loss of vision were determined using multivariable regression. Odds ratios (OR) were calculated for those variables that were independently associated with visual impairment and monocular blindness. Main Outcome Measures: ORs for various risk factors for incident VI and monocular blindness. Results: Independent risk factors for incident VI were older age (70–79 years, OR=4.8; ≥80 years, OR=17.9), being unemployed (OR=3.5), and having diabetes mellitus (OR=2.2). Independent risk factors for monocular blindness were being retired (OR=3.4) or widowed (OR=3.7) and having diabetes mellitus (OR=2.1) or any ocular disease (OR=5.6) at baseline. Persons with self-reported excellent/good vision were less likely to develop VI or monocular blindness (OR=0.4–0.5). Conclusion: Our data highlight that older Latinos and Latinos with diabetes mellitus or self-reported eye diseases are at high risk of developing vision loss. Furthermore, being unemployed, widowed or retired confers an independent risk of monocular blindness. Interventions that prevent, treat, and focus on the modifiable factors may reduce the burden of vision loss in this fastest growing segment of the United States population. PMID:21788079

  8. Risk factors for four-year incident visual impairment and blindness: the Los Angeles Latino Eye Study.

    PubMed

    Yonekawa, Yoshihiro; Varma, Rohit; Choudhury, Farzana; Torres, Mina; Azen, Stanley P

    2011-09-01

    To identify independent risk factors for incident visual impairment (VI) and monocular blindness. Population-based prospective cohort study. A total of 4658 Latinos aged 40 years in the Los Angeles Latino Eye Study (LALES). A detailed history and comprehensive ophthalmologic examination was performed at baseline and at the 4-year follow-up on 4658 Latinos aged ≥40 years from Los Angeles, California. Incident VI was defined as best-corrected visual acuity (BCVA) of <20/40 and >20/200 in the better-seeing eye at the 4-year follow-up examination in persons who had a BCVA of ≥20/40 in the better-seeing eye at baseline. Incident monocular blindness was defined as BCVA of ≤20/200 in 1 eye at follow-up in persons who had a BCVA >20/200 in both eyes at baseline. Sociodemographic and clinical risk factors identified at the baseline interview and examination and associated with incident VI and loss of vision were determined using multivariable regression. Odds ratios (ORs) were calculated for those variables that were independently associated with VI and monocular blindness. Odds ratios for various risk factors for incident VI and monocular blindness. Independent risk factors for incident VI were older age (70-79 years, OR 4.8; ≥80 years OR 17.9), unemployment (OR 3.5), and diabetes mellitus (OR 2.2). Independent risk factors for monocular blindness were being retired (OR 3.4) or widowed (OR 3.7) and having diabetes mellitus (OR 2.1) or any ocular disease (OR 5.6) at baseline. Persons with self-reported excellent/good vision were less likely to develop VI or monocular blindness (OR 0.4-0.5). Our data highlight that older Latinos and Latinos with diabetes mellitus or self-reported eye diseases are at high risk of developing vision loss. Furthermore, being unemployed, widowed, or retired confers an independent risk of monocular blindness. Interventions that prevent, treat, and focus on the modifiable factors may reduce the burden of vision loss in this fastest growing segment of the US population. Copyright © 2011 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  9. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry

    NASA Astrophysics Data System (ADS)

    Zeller, Niclas; Quint, Franz; Stilla, Uwe

    2016-08-01

    This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth estimated from the light-field image and the metric object distance. These two methods are compared to a well-known curve-fitting approach. Both model-based methods show significant advantages compared to the curve-fitting method. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally in focus, which makes finding stereo correspondences easier. In contrast to monocular visual odometry approaches, due to the calibration of the individual depth maps, the scale of the scene can be observed. Furthermore, due to the light-field information, better tracking capabilities compared to the monocular case can be expected. As a result, the depth information gained by the plenoptic-camera-based visual odometry algorithm proposed in this paper has superior accuracy and reliability compared to the depth estimated from a single light-field image.
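
    The per-pixel depth update described above is a scalar Kalman-style fusion: each new micro-image estimate is merged with the stored hypothesis by inverse-variance weighting. A minimal sketch of such an update (illustrative only, not the authors' exact formulation):

        def update_depth(depth, var, depth_obs, var_obs):
            # Kalman-like fusion of a stored virtual-depth hypothesis (depth, var)
            # with a new estimate from another micro-image (inverse-variance weighting).
            gain = var / (var + var_obs)
            depth_new = depth + gain * (depth_obs - depth)
            var_new = (1.0 - gain) * var
            return depth_new, var_new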

  10. Sudden monocular blindness associated with homozygous β-thalassemia in a young Liberian.

    PubMed

    Njoh, J; York, S

    1990-05-01

    A 16-year-old Liberian female presented with sudden monocular blindness. Physical examination and laboratory investigations were normal except that the patient had homozygous β-thalassemia (HbA 58%, HbF 5% and HbA2 7.0%). Family study revealed that both parents had the β-thalassemia trait. We feel that the association of sudden monocular blindness with homozygous β-thalassemia, which has not been reported before, is not fortuitous but causal. It is therefore suggested that homozygous β-thalassemia be added to the list of haemoglobinopathies (HbAS, SS and SC) that have been reported to cause blindness as a complication.

  11. A Robust Approach for a Filter-Based Monocular Simultaneous Localization and Mapping (SLAM) System

    PubMed Central

    Munguía, Rodrigo; Castillo-Toledo, Bernardino; Grau, Antoni

    2013-01-01

    Simultaneous localization and mapping (SLAM) is an important problem to solve in robotics theory in order to build truly autonomous mobile robots. This work presents a novel method for implementing a SLAM system based on a single camera sensor. SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants. In this case, a single camera, which is freely moving through its environment, represents the sole sensor input to the system. The sensors have a large impact on the algorithm used for SLAM. Cameras are used more frequently because they provide a lot of information and are well adapted for embedded systems: they are light, cheap and power-saving. Nevertheless, and unlike range sensors, which provide range and angular information, a camera is a projective sensor providing only angular measurements of image features. Therefore, depth information (range) cannot be obtained in a single step. In this case, special techniques for feature initialization are needed in order to enable the use of angular sensors (such as cameras) in SLAM systems. The main contribution of this work is to present a novel and robust scheme for incorporating and measuring visual features in filtering-based monocular SLAM systems. The proposed method is based on a two-step technique, which is intended to exploit all the information available in angular measurements. Unlike previous schemes, the values of parameters used by the initialization technique are derived directly from the sensor characteristics, thus simplifying the tuning of the system. The experimental results show that the proposed method surpasses the performance of previous schemes. PMID:23823972
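
    The core difficulty named here, that a single projective measurement gives bearing but no range, is commonly handled by deferring landmark initialization until enough parallax has accumulated. The sketch below shows one generic form of such delayed initialization (midpoint triangulation once two viewing rays diverge sufficiently); it illustrates the problem, not the paper's specific two-step scheme.

        import numpy as np

        def triangulate_when_parallax(ray1, ray2, c1, c2, min_parallax_deg=2.0):
            # Delayed initialization: keep a bearing-only hypothesis until the two
            # observation rays (unit vectors, world frame) subtend enough parallax,
            # then triangulate by the midpoint method. c1, c2 are camera centres.
            parallax = np.degrees(np.arccos(np.clip(ray1 @ ray2, -1.0, 1.0)))
            if parallax < min_parallax_deg:
                return None                         # not enough baseline yet
            A = np.stack([ray1, -ray2], axis=1)     # solve c1 + t1*ray1 = c2 + t2*ray2
            t1, t2 = np.linalg.lstsq(A, c2 - c1, rcond=None)[0]
            return 0.5 * ((c1 + t1 * ray1) + (c2 + t2 * ray2))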

  12. Modified Monovision With Spherical Aberration to Improve Presbyopic Through-Focus Visual Performance

    PubMed Central

    Zheleznyak, Len; Sabesan, Ramkumar; Oh, Je-Sun; MacRae, Scott; Yoon, Geunyoung

    2013-01-01

    Purpose. To investigate the impact on visual performance of modifying monovision with monocularly induced spherical aberration (SA) to increase depth of focus (DoF), thereby enhancing binocular through-focus visual performance. Methods. A binocular adaptive optics (AO) vision simulator was used to correct both eyes' native aberrations and induce traditional (TMV) and modified (MMV) monovision corrections. TMV was simulated with 1.5 diopters (D) of anisometropia (dominant eye at distance, nondominant eye at near). Zernike primary SA was induced in the nondominant eye in MMV. A total of four MMV conditions were tested with various amounts of SA (±0.2 and ±0.4 μm) and fixed anisometropia (1.5 D). Monocular and binocular visual acuity (VA) and contrast sensitivity (CS) at 10 cyc/deg and binocular summation were measured through-focus in three cyclopleged subjects with 4-mm pupils. Results. MMV with positive SA had a larger benefit for intermediate distances (1.5 lines at 1.0 D) than with negative SA, compared with TMV. Negative SA had a stronger benefit in VA at near. DoF of all MMV conditions was 3.5 ± 0.5 D (mean) as compared with TMV (2.7 ± 0.3 D). Through-focus CS at 10 cyc/deg was significantly reduced with MMV as compared to TMV only at intermediate object distances, but was unaffected at distance. Binocular summation was absent at all object distances except 0.5 D, where it improved in MMV by 19% over TMV. Conclusions. Modified monovision with SA improves through-focus VA and DoF as compared with traditional monovision. Binocular summation also increased as interocular similarity of image quality increased due to extended monocular DoF. PMID:23557742

  13. Self-supervised learning as an enabling technology for future space exploration robots: ISS experiments on monocular distance learning

    NASA Astrophysics Data System (ADS)

    van Hecke, Kevin; de Croon, Guido C. H. E.; Hennes, Daniel; Setterfield, Timothy P.; Saenz-Otero, Alvar; Izzo, Dario

    2017-11-01

    Although machine learning holds enormous promise for autonomous space robots, it is currently not employed because of the inherently uncertain outcome of learning processes. In this article we investigate a learning mechanism, Self-Supervised Learning (SSL), which is very reliable and hence an important candidate for real-world deployment even on safety-critical systems such as space robots. To demonstrate this reliability, we introduce a novel SSL setup that allows a stereo-vision-equipped robot to cope with the failure of one of its cameras. The setup learns to estimate average depth using a monocular image, by using the stereo vision depths from the past as trusted ground truth. We present preliminary results from an experiment on the International Space Station (ISS) performed with the MIT/NASA SPHERES VERTIGO satellite. The presented experiments were performed on October 8th, 2015 on board the ISS. The main goals were (1) data gathering, and (2) navigation based on stereo vision. First, the astronaut Kimiya Yui moved the satellite around the Japanese Experiment Module to gather stereo vision data for learning. Subsequently, the satellite freely explored the space in the module based on its (trusted) stereo vision system and a pre-programmed exploration behavior, while simultaneously performing the self-supervised learning of monocular depth estimation on board. The two main goals were successfully achieved, representing the first online learning robotic experiments in space. These results lay the groundwork for a follow-up experiment in which the satellite will use the learned single-camera depth estimation for autonomous exploration in the ISS, and are an advancement towards future space robots that continuously improve their navigation capabilities over time, even in harsh and completely unknown space environments.
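
    The self-supervised recipe sketched in the abstract is essentially supervised regression in which the labels come from the robot's own trusted stereo pipeline rather than from a human. The snippet below illustrates that idea with a plain least-squares regressor on image features; the experiment's actual features and learning machinery are not specified here, and the names are placeholders.

        import numpy as np

        def train_monocular_depth(features, stereo_depths):
            # Fit a regressor that predicts average scene depth from monocular image
            # features, using depths from the trusted stereo system as ground truth
            # (a plain least-squares model stands in for the real learner).
            X = np.hstack([features, np.ones((len(features), 1))])   # add bias term
            w, *_ = np.linalg.lstsq(X, stereo_depths, rcond=None)
            return w

        def predict_depth(w, feature_vec):
            # Estimate average depth from a single camera once stereo is unavailable.
            return np.append(feature_vec, 1.0) @ w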

  14. Association between visual impairment and patient-reported visual disability at different stages of cataract surgery.

    PubMed

    Acosta-Rojas, E Ruthy; Comas, Mercè; Sala, Maria; Castells, Xavier

    2006-10-01

    To evaluate the association between visual impairment (visual acuity, contrast sensitivity, stereopsis) and patient-reported visual disability at different stages of cataract surgery. A cohort of 104 patients aged 60 years and over with bilateral cataract was assessed preoperatively, after first-eye surgery (monocular pseudophakia) and after second-eye surgery (binocular pseudophakia). Partial correlation coefficients (PCC) and linear regression models were calculated. In patients with bilateral cataracts, visual disability was associated with visual acuity (PCC = -0.30) and, to a lesser extent, with contrast sensitivity (PCC = 0.16) and stereopsis (PCC = -0.09). In monocular and binocular pseudophakia, visual disability was more strongly associated with stereopsis (PCC = -0.26 monocular and -0.51 binocular) and contrast sensitivity (PCC = 0.18 monocular and 0.34 binocular) than with visual acuity (PCC = -0.18 monocular and -0.18 binocular). Visual acuity, contrast sensitivity and stereopsis accounted for between 17% and 42% of variance in visual disability. The association of visual impairment with patient-reported visual disability differed at each stage of cataract surgery. Measuring other forms of visual impairment independently from visual acuity, such as contrast sensitivity or stereopsis, could be important in evaluating both needs and outcomes in cataract surgery. More comprehensive assessment of the impact of cataract on patients should include measurement of both visual impairment and visual disability.

  15. Efficient hybrid monocular-stereo approach to on-board video-based traffic sign detection and tracking

    NASA Astrophysics Data System (ADS)

    Marinas, Javier; Salgado, Luis; Arróspide, Jon; Camplani, Massimo

    2012-01-01

    In this paper we propose an innovative method for the automatic detection and tracking of road traffic signs using an onboard stereo camera. It involves a combination of monocular and stereo analysis strategies to increase the reliability of the detections such that it can boost the performance of any traffic sign recognition scheme. Firstly, an adaptive color and appearance based detection is applied at single camera level to generate a set of traffic sign hypotheses. In turn, stereo information allows for sparse 3D reconstruction of potential traffic signs through a SURF-based matching strategy. Namely, the plane that best fits the cloud of 3D points traced back from feature matches is estimated using a RANSAC based approach to improve robustness to outliers. Temporal consistency of the 3D information is ensured through a Kalman-based tracking stage. This also allows for the generation of a predicted 3D traffic sign model, which is in turn used to enhance the previously mentioned color-based detector through a feedback loop, thus improving detection accuracy. The proposed solution has been tested with real sequences under several illumination conditions and in both urban areas and highways, achieving very high detection rates in challenging environments, including rapid motion and significant perspective distortion.
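
    Fitting a plane to the sparse cloud of reconstructed sign points with RANSAC is a standard robust-estimation step. A compact NumPy sketch of that step follows; the threshold and iteration count are illustrative, not values from the paper.

        import numpy as np

        def ransac_plane(points, iters=200, thresh=0.02, seed=0):
            # Robustly fit a plane to an Nx3 point cloud; returns (normal, d)
            # with normal . p = d for inlier points p.
            rng = np.random.default_rng(seed)
            best_count, best_model = 0, None
            for _ in range(iters):
                sample = points[rng.choice(len(points), 3, replace=False)]
                n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
                norm = np.linalg.norm(n)
                if norm < 1e-12:
                    continue                        # degenerate (collinear) sample
                n = n / norm
                d = n @ sample[0]
                inliers = np.abs(points @ n - d) < thresh
                if inliers.sum() > best_count:
                    best_count, best_model = inliers.sum(), (n, d)
            return best_model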

  16. Vertical viewing angle enhancement for the 360-degree integral-floating display using an anamorphic optic system.

    PubMed

    Erdenebat, Munkh-Uchral; Kwon, Ki-Chul; Yoo, Kwan-Hee; Baasantseren, Ganbat; Park, Jae-Hyeung; Kim, Eun-Soo; Kim, Nam

    2014-04-15

    We propose a 360-degree integral-floating display with an enhanced vertical viewing angle. The system projects two-dimensional elemental image arrays via a high-speed digital micromirror device projector and reconstructs them into 3D perspectives with a lens array. Double floating lenses relay the initial 3D perspectives to the center of a vertically curved convex mirror. The anamorphic optic system tailors the initial 3D perspectives horizontally and disperses the light rays more widely in the vertical direction. With the proposed method, the entire 3D image provides both monocular and binocular depth cues, a full-parallax presentation with high angular ray density, and an enhanced vertical viewing angle.

  17. Chromatic interocular-switch rivalry

    PubMed Central

    Christiansen, Jens H.; D'Antona, Anthony D.; Shevell, Steven K.

    2017-01-01

    Interocular-switch rivalry (also known as stimulus rivalry) is a kind of binocular rivalry in which two rivalrous images are swapped between the eyes several times a second. The result is stable periods of one image and then the other, with stable intervals that span many eye swaps (Logothetis, Leopold, & Sheinberg, 1996). Previous work used this close kin of binocular rivalry with rivalrous forms. Experiments here test whether chromatic interocular-switch rivalry, in which the swapped stimuli differ in only chromaticity, results in slow alternation between two colors. Swapping equiluminant rivalrous chromaticities at 3.75 Hz resulted in slow perceptual color alternation, with one or the other color often continuously visible for two seconds or longer (during which there were 15+ eye swaps). A well-known theory for sustained percepts from interocular-switch rivalry with form is inhibitory competition between binocular neurons driven by monocular neurons with matched orientation tuning in each eye; such binocular neurons would produce a stable response when a given orientation is swapped between the eyes. A similar model can account for the percepts here from chromatic interocular-switch rivalry and is underpinned by the neurophysiological finding that color-preferring binocular neurons are driven by monocular neurons from each eye with well-matched chromatic selectivity (Peirce, Solomon, Forte, & Lennie, 2008). In contrast to chromatic interocular-switch rivalry, luminance interocular-switch rivalry with swapped stimuli that differ in only luminance did not result in slowly alternating percepts of different brightnesses. PMID:28510624

  18. Contrast summation across eyes and space is revealed along the entire dipper function by a "Swiss cheese" stimulus.

    PubMed

    Meese, Tim S; Baker, Daniel H

    2011-01-27

    Previous contrast discrimination experiments have shown that luminance contrast is summed across ocular (T. S. Meese, M. A. Georgeson, & D. H. Baker, 2006) and spatial (T. S. Meese & R. J. Summers, 2007) dimensions at threshold and above. However, is this process sufficiently general to operate across the conjunction of eyes and space? Here we used a "Swiss cheese" stimulus where the blurred "holes" in sine-wave carriers were of equal area to the blurred target ("cheese") regions. The locations of the target regions in the monocular image pairs were interdigitated across eyes such that their binocular sum was a uniform grating. When pedestal contrasts were above threshold, the monocular neural images contained strong evidence that the high-contrast regions in the two eyes did not overlap. Nevertheless, sensitivity to dual contrast increments (i.e., to contrast increments in different locations in the two eyes) was a factor of ∼1.7 greater than to single increments (i.e., increments in a single eye), comparable with conventional binocular summation. This provides evidence for a contiguous area summation process that operates at all contrasts and is influenced little, if at all, by eye of origin. A three-stage model of contrast gain control fitted the results and possessed the properties of ocularity invariance and area invariance owing to its cascade of normalization stages. The implications for a population code for pattern size are discussed.

  19. The moon illusion: a test of the vestibular hypothesis under monocular viewing conditions.

    PubMed

    Carter, D S

    1977-12-01

    The results of earlier monocular experiments on the moon illusion have been either negative or confounded. To test the role of vestibular function, 24 subjects made forced-choice distance comparisons between stimuli mounted in translucent tubes. The stimulus tube for standard distance could be positioned in three viewing angles (45 degrees up, horizontal, and 45 degrees down). A comparison tube adjustable for distance was mounted horizontally. There was a greater perception of depth in the downward looking condition. The relatively weak effects are discussed in terms of a two-hypothesis explanation of the real-life moon illusion and the poor cues for depth perception in monocular viewing.

  20. Independent motion detection with a rival penalized adaptive particle filter

    NASA Astrophysics Data System (ADS)

    Becker, Stefan; Hübner, Wolfgang; Arens, Michael

    2014-10-01

    Aggregation of pixel-based motion detection into regions of interest, which include views of single moving objects in a scene, is an essential pre-processing step in many vision systems. Motion events of this type provide significant information about the object type or build the basis for action recognition. Further, motion is an essential saliency measure, which is able to effectively support high-level image analysis. When applied to static cameras, background subtraction methods achieve good results. On the other hand, motion aggregation on freely moving cameras is still a widely unsolved problem. The image flow measured on a freely moving camera results from two major motion types: first, the ego-motion of the camera, and second, object motion that is independent of the camera motion. When capturing a scene with a camera, these two motion types are adversely blended together. In this paper, we propose an approach to detect multiple moving objects from a mobile monocular camera system in an outdoor environment. The overall processing pipeline consists of a fast ego-motion compensation algorithm in the preprocessing stage. Real-time performance is achieved by using a sparse optical flow algorithm as an initial processing stage and a densely applied probabilistic filter in the post-processing stage. Thereby, we follow the idea proposed by Jung and Sukhatme. Normalized intensity differences originating from a sequence of ego-motion-compensated difference images represent the probability of moving objects. Noise and registration artefacts are filtered out using a Bayesian formulation. The resulting a posteriori distribution is concentrated on image regions showing strong amplitudes in the difference image that are in accordance with the motion prediction. In order to effectively estimate the a posteriori distribution, a particle filter is used. In addition to the fast ego-motion compensation, the main contribution of this paper is the design of the probabilistic filter for real-time detection and tracking of independently moving objects. The proposed approach introduces a competition scheme between particles in order to ensure an improved multi-modality. Further, the filter design helps to generate a particle distribution that is homogeneous even in the presence of multiple targets showing non-rigid motion patterns. The effectiveness of the method is shown on exemplary outdoor sequences.
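
    A generic particle filter over image positions, with the likelihood read from an ego-motion-compensated difference image, conveys the core of such a probabilistic post-processing stage. The sketch below is a minimal, standard predict/update/resample cycle under assumed inputs; it does not include the paper's rival-penalized competition scheme between particles.

        import numpy as np

        def particle_filter_step(particles, weights, diff_img, rng, motion_std=3.0):
            # One predict/update/resample cycle. Independent motion is more likely
            # where the normalized, ego-motion-compensated difference image is large.
            h, w = diff_img.shape
            # Predict: diffuse particle positions (x, y) with a random-walk model.
            particles = particles + rng.normal(0.0, motion_std, particles.shape)
            particles[:, 0] = np.clip(particles[:, 0], 0, w - 1)
            particles[:, 1] = np.clip(particles[:, 1], 0, h - 1)
            # Update: weight each particle by the difference-image value at its pixel.
            cols = particles[:, 0].astype(int)
            rows = particles[:, 1].astype(int)
            weights = weights * (diff_img[rows, cols] + 1e-6)
            weights = weights / weights.sum()
            # Resample when the effective sample size collapses.
            if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
                idx = rng.choice(len(particles), len(particles), p=weights)
                particles = particles[idx]
                weights = np.full(len(particles), 1.0 / len(particles))
            return particles, weights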

  1. Binocular visual training to promote recovery from monocular deprivation.

    PubMed

    Murphy, Kathryn M; Roumeliotis, Grayson; Williams, Kate; Beston, Brett R; Jones, David G

    2015-01-08

    Abnormal early visual experience often leads to poor vision, a condition called amblyopia. Two recent approaches to treating amblyopia include binocular therapies and intensive visual training. These reflect the emerging view that amblyopia is a binocular deficit caused by increased neural noise and poor signal-in-noise integration. Most perceptual learning studies have used monocular training; however, a recent study has shown that binocular training is effective for improving acuity in adult human amblyopes. We used an animal model of amblyopia, based on monocular deprivation, to compare the effect of binocular training either during or after the critical period for ocular dominance plasticity (early binocular training vs. late binocular training). We used a high-contrast, orientation-in-noise stimulus to drive the visual cortex because neurophysiological findings suggest that binocular training may allow the nondeprived eye to teach the deprived eye's circuits to function. We found that both early and late binocular training promoted good visual recovery. Surprisingly, we found that monocular deprivation caused a permanent deficit in the vision of both eyes, which became evident only as a sleeper effect following many weeks of visual training. © 2015 ARVO.

  2. Robust and fast pedestrian detection method for far-infrared automotive driving assistance systems

    NASA Astrophysics Data System (ADS)

    Liu, Qiong; Zhuang, Jiajun; Ma, Jun

    2013-09-01

    Although considerable effort has been devoted to night-time pedestrian detection for automotive driving assistance systems in recent years, robust, real-time pedestrian detection is by no means a trivial task and remains an open problem because of moving cameras, uncontrolled outdoor environments, the wide range of possible pedestrian appearances, and the stringent performance criteria of automotive applications. This paper presents an alternative night-time pedestrian detection method using a monocular far-infrared (FIR) camera, which includes two modules (regions-of-interest (ROI) generation and pedestrian recognition) in a cascade fashion. Pixel-gradient-oriented vertical projection is first proposed to estimate the vertical image stripes that might contain pedestrians, and local thresholding image segmentation is then adopted to generate ROIs more accurately within the estimated vertical stripes. A novel descriptor called PEWHOG (pyramid entropy-weighted histograms of oriented gradients) is proposed to represent FIR pedestrians in the recognition module. Specifically, PEWHOG captures both the local object shape, described by the entropy-weighted distribution of oriented gradient histograms, and its pyramid spatial layout. PEWHOG is then fed to a three-branch structured classifier using support vector machines (SVMs) with a histogram intersection kernel (HIK). An off-line training procedure combining bootstrapping and early-stopping strategies is introduced to generate a more robust classifier by exploiting hard negative samples iteratively. Finally, multi-frame validation is used to suppress transient false positives. Experimental results on FIR video sequences from various scenarios demonstrate that the presented method is effective and promising.
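
    The histogram intersection kernel mentioned above compares two histogram-style descriptors by summing the bin-wise minima, and most SVM libraries accept it as a callable or precomputed kernel. A small illustrative sketch (not the paper's three-branch classifier):

        import numpy as np

        def histogram_intersection_kernel(X, Y):
            # Gram matrix of bin-wise minima between two sets of histograms
            # (rows are descriptors, e.g. HOG-style feature vectors).
            return np.array([[np.minimum(x, y).sum() for y in Y] for x in X])

        # e.g. with scikit-learn: sklearn.svm.SVC(kernel=histogram_intersection_kernel)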

  3. Using Stereo Vision to Support the Automated Analysis of Surveillance Videos

    NASA Astrophysics Data System (ADS)

    Menze, M.; Muhle, D.

    2012-07-01

    Video surveillance systems are no longer a collection of independent cameras manually controlled by human operators. Instead, smart sensor networks are being developed that are able to fulfil certain tasks on their own and thus support security personnel through automated analyses. One well-known task is the derivation of people's positions on a given ground plane from monocular video footage. Improved accuracy of the ground position, as well as a more detailed representation of single salient people, can be expected from stereoscopic processing of overlapping views. Related work mostly relies on dedicated stereo devices or camera pairs with a small baseline. While this set-up is helpful for the essential step of image matching, the high accuracy potential of a wide baseline and the correspondingly good intersection geometry is not utilised. In this paper we present a stereoscopic approach working on overlapping views of standard pan-tilt-zoom cameras, which can easily be generated for arbitrary points of interest by an appropriate reconfiguration of parts of a sensor network. Experiments are conducted on realistic surveillance footage to show the potential of the suggested approach and to investigate the influence of different baselines on the quality of the derived surface model. Promising estimates of people's positions and heights are retrieved. Although standard matching approaches show helpful results, future work will incorporate temporal dependencies available from image sequences in order to reduce computational effort and improve the derived level of detail.

  4. Monocular zones in stereoscopic scenes: A useful source of information for human binocular vision?

    NASA Astrophysics Data System (ADS)

    Harris, Julie M.

    2010-02-01

    When an object is closer to an observer than the background, the small differences between right and left eye views are interpreted by the human brain as depth. This basic ability of the human visual system, called stereopsis, lies at the core of all binocular three-dimensional (3-D) perception and related technological display development. To achieve stereopsis, it is traditionally assumed that corresponding locations in the right and left eye's views must first be matched, then the relative differences between right and left eye locations are used to calculate depth. But this is not the whole story. At every object-background boundary, there are regions of the background that only one eye can see because, in the other eye's view, the foreground object occludes that region of background. Such monocular zones do not have a corresponding match in the other eye's view and can thus cause problems for depth extraction algorithms. In this paper I will discuss evidence, from our knowledge of human visual perception, illustrating that monocular zones do not pose problems for our human visual systems, rather, our visual systems can extract depth from such zones. I review the relevant human perception literature in this area, and show some recent data aimed at quantifying the perception of depth from monocular zones. The paper finishes with a discussion of the potential importance of considering monocular zones, for stereo display technology and depth compression algorithms.

  5. Binocular rivalry from invisible patterns

    PubMed Central

    Zou, Jinyou; He, Sheng; Zhang, Peng

    2016-01-01

    Binocular rivalry arises when incompatible images are presented to the two eyes. If the two eyes’ conflicting features are invisible, leading to identical perceptual interpretations, does rivalry competition still occur? Here we investigated whether binocular rivalry can be induced from conflicting but invisible spatial patterns. A chromatic grating counterphase flickering at 30 Hz appeared uniform, but produced significant tilt aftereffect and orientation-selective adaptation. The invisible pattern also generated significant BOLD activities in the early visual cortex, with minimal response in the parietal and frontal cortical areas. Compared with perceptually matched uniform stimuli, a monocularly presented invisible chromatic grating enhanced the rivalry competition with a low-contrast visible grating presented to the other eye. Furthermore, switching from a uniform field to a perceptually matched invisible chromatic grating produced interocular suppression at approximately 200 ms after onset of the invisible grating. Experiments using briefly presented monocular probes revealed evidence for sustained rivalry competition between two invisible gratings during continuous dichoptic presentations. These findings indicate that even without visible interocular conflict, and with minimal engagement of frontoparietal cortex and consciousness related top-down feedback, perceptually identical patterns with invisible conflict features produce rivalry competition in the early visual cortex. PMID:27354535

  6. When two eyes are better than one in prehension: monocular viewing and end-point variance.

    PubMed

    Loftus, Andrea; Servos, Philip; Goodale, Melvyn A; Mendarozqueta, Nicole; Mon-Williams, Mark

    2004-10-01

    Previous research has suggested that binocular vision plays an important role in prehension. It has been shown that removing binocular vision affects (negatively) both the planning and on-line control of prehension. It has been suggested that the adverse impact of removing binocular vision is because monocular viewing results in an underestimation of target distance in visuomotor tasks. This suggestion is based on the observation that the kinematics of prehension are altered when viewing monocularly. We argue that it is not possible to draw unambiguous conclusions regarding the accuracy of distance perception from these data. In experiment 1, we found data that contradict the idea that a consistent visuomotor underestimation of target distance is an inevitable consequence of monocular viewing. Our data did show, however, that positional variance increases under monocular viewing. We provide an alternative explanation for the kinematic changes found when binocular vision is removed. Our account is based on the changes in movement kinematics that occur when end-point variance is altered following the removal of binocular vision. We suggest that the removal of binocular vision leads to greater perceptual uncertainty (e.g. less precise stimulus cues), resulting in changes in the kinematics of the movement (longer duration movements). Our alternative account reconciles some differences within the research literature. We conducted a series of experiments to explore further the issue of when binocular information is advantageous in prehension. Three subsequent experiments were employed which varied binocular/monocular viewing in selectively lit conditions. Experiment 2 explored the differences in prehension measured between monocular and binocular viewing in a full cue environment with a continuous view of the target object. Experiment 3 required participants to reach, under a monocular or binocular view, for a continuously visible self-illuminated target object in an otherwise dark room. In Experiment 3, the participant could neither see the target object nor the reaching hand following initiation of the prehension movement. Our results suggest that binocular vision contributes to prehension by providing additional information (cues) to the nervous system. These cues appear to be weighted differentially according to the particular constellation of stimulus cues available to the participants when reaching to grasp. One constant advantage of a binocular view appears to be the provision of on-line information regarding the position of the hand relative to the target. In reduced cue conditions (i.e. where a view of the target object is lost following initiation of the movement), binocular information regarding target location appears to be particularly useful in the initial programming of reach distance. Our results are a step towards establishing the specific contributions that binocular vision makes to the control of prehension.

  7. Visual response time to colored stimuli in peripheral retina - Evidence for binocular summation

    NASA Technical Reports Server (NTRS)

    Haines, R. F.

    1977-01-01

    Simple onset response time (RT) experiments, previously shown to exhibit binocular summation effects for white stimuli along the horizontal meridian, were performed for red and green stimuli along 5 oblique meridians. Binocular RT was significantly shorter than monocular RT for a 45-min-diameter spot of red, green, or white light within eccentricities of about 50 deg from the fovea. Relatively large meridian differences were noted that appear to be due to the degree to which the images fall on corresponding retinal areas.

  8. Depth of Monocular Elements in a Binocular Scene: The Conditions for da Vinci Stereopsis

    ERIC Educational Resources Information Center

    Cook, Michael; Gillam, Barbara

    2004-01-01

    Quantitative depth based on binocular resolution of visibility constraints is demonstrated in a novel stereogram representing an object, visible to 1 eye only, and seen through an aperture or camouflaged against a background. The monocular region in the display is attached to the binocular region, so that the stereogram represents an object which…

  9. Monocular Deprivation in Adult Mice Alters Visual Acuity and Single-Unit Activity

    ERIC Educational Resources Information Center

    Evans, Scott; Lickey, Marvin E.; Pham, Tony A.; Fischer, Quentin S.; Graves, Aundrea

    2007-01-01

    It has been discovered recently that monocular deprivation in young adult mice induces ocular dominance plasticity (ODP). This contradicts the traditional belief that ODP is restricted to a juvenile critical period. However, questions remain. ODP of young adults has been observed only using methods that are indirectly related to vision, and the…

  10. The effect of monocular target blur on simulated telerobotic manipulation

    NASA Technical Reports Server (NTRS)

    Liu, Andrew; Stark, Lawrence

    1991-01-01

    A simulation involving three types of telerobotic tasks that require information about the spatial position of objects is reported. It is demonstrated that refractive errors in the helmet-mounted stereo display system can affect performance in the three types of telerobotic tasks. The results of two sets of experiments indicate that monocular target blur of two diopters or more degrades stereo display performance to the level of monocular displays. This is similar to the results of psychophysical experiments examining the effect of blur on stereoacuity, and it indicates that moderate levels of visual degradation that affect the operator's stereoacuity may eliminate the performance advantage of stereo displays. It is suggested that other psychophysical experimental results could be used to predict operator performance for other telerobotic tasks.

  11. Measurement of the flux of ultra high energy cosmic rays by the stereo technique

    NASA Astrophysics Data System (ADS)

    High Resolution Fly'S Eye Collaboration; Abbasi, R. U.; Abu-Zayyad, T.; Al-Seady, M.; Allen, M.; Amann, J. F.; Archbold, G.; Belov, K.; Belz, J. W.; Bergman, D. R.; Blake, S. A.; Brusova, O. A.; Burt, G. W.; Cannon, C.; Cao, Z.; Deng, W.; Fedorova, Y.; Findlay, J.; Finley, C. B.; Gray, R. C.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G.; Hüntemeyer, P.; Ivanov, D.; Jones, B. F.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Loh, E. C.; Maestas, M. M.; Manago, N.; Marek, L. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; Moore, S. A.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M. D.; Rodriguez, D.; Sasaki, M.; Schnetzer, S. R.; Scott, L. M.; Sinnis, G.; Smith, J. D.; Snow, R.; Sokolsky, P.; Springer, R. W.; Stokes, B. T.; Stratton, S. R.; Thomas, J. R.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Wiencke, L. R.; Zech, A.; Zhang, B. K.; Zhang, X.; Zhang, Y.; High Resolution Fly's Eye Collaboration

    2009-08-01

    The High Resolution Fly’s Eye (HiRes) experiment has measured the flux of ultrahigh energy cosmic rays using the stereoscopic air fluorescence technique. The HiRes experiment consists of two detectors that observe cosmic ray showers via the fluorescence light they emit. HiRes data can be analyzed in monocular mode, where each detector is treated separately, or in stereoscopic mode where they are considered together. Using the monocular mode the HiRes collaboration measured the cosmic ray spectrum and made the first observation of the Greisen-Zatsepin-Kuzmin cutoff. In this paper we present the cosmic ray spectrum measured by the stereoscopic technique. Good agreement is found with the monocular spectrum in all details.

  12. Evolution of stereoscopic imaging in surgery and recent advances

    PubMed Central

    Schwab, Katie; Smith, Ralph; Brown, Vanessa; Whyte, Martin; Jourdan, Iain

    2017-01-01

    In the late 1980s the first laparoscopic cholecystectomies were performed, prompting a sudden rise in technological innovations as the benefits and feasibility of minimal access surgery became recognised. Monocular laparoscopes provided only two-dimensional (2D) viewing with reduced depth perception and contributed to an extended learning curve. Attention turned to producing a usable three-dimensional (3D) endoscopic view for surgeons, utilising different technologies for image capture and image projection. These evolving visual systems have been assessed in various research environments with conflicting outcomes of success and usability, and no overall consensus on their benefit. This review article aims to explain the different types of technologies, summarise the published literature evaluating 3D vs 2D laparoscopy, explain the conflicting outcomes, and discuss the current consensus view. PMID:28874957

  13. Machine vision guided sensor positioning system for leaf temperature assessment

    NASA Technical Reports Server (NTRS)

    Kim, Y.; Ling, P. P.; Janes, H. W. (Principal Investigator)

    2001-01-01

    A sensor positioning system was developed for monitoring plants' well-being using a non-contact sensor. Image processing algorithms were developed to identify a target region on a plant leaf. A novel algorithm to recover view depth was developed by using a camera equipped with a computer-controlled zoom lens. The methodology has improved depth recovery resolution over a conventional monocular imaging technique. An algorithm was also developed to find a maximum enclosed circle on a leaf surface so the conical field-of-view of an infrared temperature sensor could be filled by the target without peripheral noise. The center of the enclosed circle and the estimated depth were used to define the sensor 3-D location for accurate plant temperature measurement.
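    The maximum enclosed circle described above can be approximated with a Euclidean distance transform of the segmented leaf mask: the mask pixel farthest from any background pixel is the circle centre, and that distance is the radius. A minimal sketch under that assumption (binary mask as a NumPy array; the function and variable names are illustrative, not taken from the paper):

      import numpy as np
      from scipy import ndimage

      def max_inscribed_circle(leaf_mask):
          """Approximate the largest circle that fits inside a binary leaf mask.
          Returns the (row, col) centre and the radius in pixels."""
          # Distance from every leaf pixel to the nearest background pixel.
          dist = ndimage.distance_transform_edt(leaf_mask)
          centre = np.unravel_index(np.argmax(dist), dist.shape)
          return centre, dist[centre]

      # Example: a crude elliptical "leaf" mask.
      yy, xx = np.mgrid[0:200, 0:300]
      mask = ((yy - 100) / 80.0) ** 2 + ((xx - 150) / 120.0) ** 2 < 1.0
      centre, radius = max_inscribed_circle(mask)   # centre near (100, 150), radius near 80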

  14. Contrast masking in strabismic amblyopia: attenuation, noise, interocular suppression and binocular summation.

    PubMed

    Baker, Daniel H; Meese, Tim S; Hess, Robert F

    2008-07-01

    To investigate amblyopic contrast vision at threshold and above we performed pedestal-masking (contrast discrimination) experiments with a group of eight strabismic amblyopes using horizontal sinusoidal gratings (mainly 3c/deg) in monocular, binocular and dichoptic configurations balanced across eye (i.e. five conditions). With some exceptions in some observers, the four main results were as follows. (1) For the monocular and dichoptic conditions, sensitivity was less in the amblyopic eye than in the good eye at all mask contrasts. (2) Binocular and monocular dipper functions superimposed in the good eye. (3) Monocular masking functions had a normal dipper shape in the good eye, but facilitation was diminished in the amblyopic eye. (4) A less consistent result was normal facilitation in dichoptic masking when testing the good eye, but a loss of this when testing the amblyopic eye. This pattern of amblyopic results was replicated in a normal observer by placing a neutral density filter in front of one eye. The two-stage model of binocular contrast gain control [Meese, T.S., Georgeson, M.A. & Baker, D.H. (2006). Binocular contrast vision at and above threshold. Journal of Vision 6, 1224-1243.] was 'lesioned' in several ways to assess the form of the amblyopic deficit. The most successful model involves attenuation of signal and an increase in noise in the amblyopic eye, and intact stages of interocular suppression and binocular summation. This implies a behavioural influence from monocular noise in the amblyopic visual system as well as in normal observers with an ND filter over one eye.
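    For readers unfamiliar with the two-stage model cited above, its general shape can be sketched schematically (symbols only; the exponents and constants published by Meese, Georgeson and Baker are not reproduced here). With left- and right-eye contrasts C_L and C_R, stage 1 applies monocular gain control with interocular suppression, the outputs are summed binocularly, and stage 2 applies a second gain control:

      S_L = \frac{C_L^{m}}{\sigma + C_L + \omega C_R}, \qquad
      S_R = \frac{C_R^{m}}{\sigma + C_R + \omega C_L}

      B = S_L + S_R, \qquad R = \frac{B^{p}}{Z + B^{q}}

    The amblyopic "lesion" favoured in the abstract above corresponds to attenuating the amblyopic eye's input (C_A -> alpha C_A with alpha < 1) and adding noise, while leaving the interocular-suppression and binocular-summation stages intact.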

  15. Brief monocular deprivation as an assay of short-term visual sensory plasticity in schizophrenia - "the binocular effect".

    PubMed

    Foxe, John J; Yeap, Sherlyn; Leavitt, Victoria M

    2013-01-01

    Visual sensory processing deficits are consistently observed in schizophrenia, with clear amplitude reduction of the visual evoked potential (VEP) during the initial 50-150 ms of processing. Similar deficits are seen in unaffected first-degree relatives and drug-naïve first-episode patients, pointing to these deficits as potential endophenotypic markers. Schizophrenia is also associated with deficits in neural plasticity, implicating dysfunction of both glutamatergic and GABAergic systems. Here, we sought to understand the intersection of these two domains, asking whether short-term plasticity during early visual processing is specifically affected in schizophrenia. Brief periods of monocular deprivation (MD) induce relatively rapid changes in the amplitude of the early VEP - i.e., short-term plasticity. Twenty patients and 20 non-psychiatric controls participated. VEPs were recorded during binocular viewing, and were compared to the sum of VEP responses during brief monocular viewing periods (i.e., Left-eye + Right-eye viewing). Under monocular conditions, neurotypical controls exhibited an effect that patients failed to demonstrate. That is, the amplitude of the summed monocular VEPs was robustly greater than the amplitude elicited binocularly during the initial sensory processing period. In patients, this "binocular effect" was absent. Patients were all medicated. Ideally, this study would also include first-episode unmedicated patients. These results suggest that short-term compensatory mechanisms that allow healthy individuals to generate robust VEPs in the context of MD are not effectively activated in patients with schizophrenia. This simple assay may provide a useful biomarker of short-term plasticity in the psychotic disorders and a target endophenotype for therapeutic interventions.

  16. Saliency Detection for Stereoscopic 3D Images in the Quaternion Frequency Domain

    NASA Astrophysics Data System (ADS)

    Cai, Xingyu; Zhou, Wujie; Cen, Gang; Qiu, Weiwei

    2018-06-01

    Recent studies have shown that a remarkable distinction exists between human binocular and monocular viewing behaviors. Compared with two-dimensional (2D) saliency detection models, stereoscopic three-dimensional (S3D) image saliency detection is a more challenging task. In this paper, we propose a saliency detection model for S3D images. The final saliency map of this model is constructed from the local quaternion Fourier transform (QFT) sparse feature and global QFT log-Gabor feature. More specifically, the local QFT feature measures the saliency map of an S3D image by analyzing the location of a similar patch. The similar patch is chosen using a sparse representation method. The global saliency map is generated by passing an edge-enhanced gradient QFT map through a band-pass filter. The results of experiments on two public datasets show that the proposed model outperforms existing computational saliency models for estimating S3D image saliency.
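    The quaternion-domain computation itself is not reproduced here, but the general flavour of frequency-domain saliency can be illustrated with the simpler single-channel spectral-residual approach, which also builds a saliency map from the Fourier amplitude and phase of an image. A minimal sketch (grayscale only; this is a related illustration, not the QFT model proposed in the paper):

      import numpy as np
      from scipy import ndimage

      def spectral_residual_saliency(gray, sigma=3.0):
          """Single-channel frequency-domain saliency (spectral-residual style)."""
          f = np.fft.fft2(gray.astype(float))
          log_amp = np.log(np.abs(f) + 1e-9)
          phase = np.angle(f)
          # Spectral residual: log amplitude minus its local average.
          residual = log_amp - ndimage.uniform_filter(log_amp, size=3)
          sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
          return ndimage.gaussian_filter(sal, sigma)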

  17. An evaluation of the lag of accommodation using photorefraction.

    PubMed

    Seidemann, Anne; Schaeffel, Frank

    2003-02-01

    The lag of accommodation which occurs in most human subjects during reading has been proposed to explain the association between reading and myopia. However, the measured lags are variable among different published studies and current knowledge on its magnitude rests largely on measurements with the Canon R-1 autorefractor. Therefore, we have measured it with another technique, eccentric infrared photorefraction (the PowerRefractor), and studied how it can be modified. Particular care was taken to ensure correct calibration of the instrument. Ten young adult subjects were refracted both in the fixation axis of the right eye and from the midline between both eyes, while they read text both monocularly and binocularly at 1.5, 2, 3, 4 and 5 D distance ("group 1"). A second group of 10 subjects ("group 2"), measured from the midline between both eyes, was studied to analyze the effects of binocular vs monocular vision, addition of +1 or +2 D lenses, and of letter size. Spherical equivalents (SE) were analyzed in all cases. The lag of accommodation was variable among subjects (standard deviations among groups and viewing distances ranging from 0.18 to 1.07 D) but was significant when the measurements were done in the fixation axis (0.35 D at 3 D target distance to 0.60 D at 5 D with binocular vision; p<0.01 or better all cases). Refracting from the midline between both eyes tended to underestimate the lag of accommodation although this was significant only at 5 D (ANOVA: p<0.0001, post hoc t-test: p<0.05). There was a small improvement in accommodation precision with binocular compared to monocular viewing but significance was reached only for the 5 D reading target (group 1--lags for a 3/4/5 D target: 0.35 vs 0.41 D/0.48 vs 0.47 D/0.60 vs 0.66 D, ANOVA: p<0.0001, post hoc t-test: p<0.05; group 2--0.29 vs 0.12 D, 0.33 vs 0.16 D, 0.23 vs -0.31 D, ANOVA: p<0.0001, post hoc t-test: p<0.05). Adjusting the letter height for constant angular subtense (0.2 deg) induced scarcely more accommodation than keeping letter size constantly at 3.5 mm (ANOVA: p<0.0001, post hoc t-test: n.s.). Positive trial lenses reduced the lag of accommodation under monocular viewing conditions and even reversed it with binocular vision. After consideration of possible sources of measurement error, the lag of accommodation measured with photorefraction at 3 D (0.41 D SE monocular and 0.35 D SE binocular) was in the range of published values from the Canon R-1 autorefractor. With the measured lag, simulations of the retinal images for a diffraction limited eye suggest surprisingly poor letter contrast on the retina.

  18. a Variant of Lsd-Slam Capable of Processing High-Speed Low-Framerate Monocular Datasets

    NASA Astrophysics Data System (ADS)

    Schmid, S.; Fritsch, D.

    2017-11-01

    We develop a new variant of LSD-SLAM, called C-LSD-SLAM, which is capable of performing monocular tracking and mapping in high-speed, low-framerate situations such as those of the KITTI datasets. The methods used here are robust against the influence of erroneously triangulated points near the epipolar direction, which otherwise cause tracking divergence.

  19. Effect of Monocular Deprivation on Rabbit Neural Retinal Cell Densities

    PubMed Central

    Mwachaka, Philip Maseghe; Saidi, Hassan; Odula, Paul Ochieng; Mandela, Pamela Idenya

    2015-01-01

    Purpose: To describe the effect of monocular deprivation on densities of neural retinal cells in rabbits. Methods: Thirty rabbits, comprising 18 subject and 12 control animals, were included, and monocular deprivation was achieved through unilateral lid suturing in all subject animals. The rabbits were observed for three weeks. At the end of each week, 6 experimental and 3 control animals were euthanized, and their retinas were harvested and processed for light microscopy. Photomicrographs of the retina were taken and imported into FIJI software for analysis. Results: Neural retinal cell densities of deprived eyes were reduced with increasing period of deprivation. The percentage reductions were 60.9% (P < 0.001), 41.6% (P = 0.003), and 18.9% (P = 0.326) for ganglion, inner nuclear, and outer nuclear cells, respectively. In non-deprived eyes, in contrast, cell densities increased by 116% (P < 0.001), 52% (P < 0.001) and 59.6% (P < 0.001) in ganglion, inner nuclear, and outer nuclear cells, respectively. Conclusion: In this rabbit model, monocular deprivation resulted in activity-dependent changes in cell densities of the neural retina in favour of the non-deprived eye, along with reduced cell densities in the deprived eye. PMID:26425316

  20. Effect of Monocular Deprivation on Rabbit Neural Retinal Cell Densities.

    PubMed

    Mwachaka, Philip Maseghe; Saidi, Hassan; Odula, Paul Ochieng; Mandela, Pamela Idenya

    2015-01-01

    To describe the effect of monocular deprivation on densities of neural retinal cells in rabbits. Thirty rabbits, comprising 18 subject and 12 control animals, were included, and monocular deprivation was achieved through unilateral lid suturing in all subject animals. The rabbits were observed for three weeks. At the end of each week, 6 experimental and 3 control animals were euthanized, and their retinas were harvested and processed for light microscopy. Photomicrographs of the retina were taken and imported into FIJI software for analysis. Neural retinal cell densities of deprived eyes were reduced with increasing period of deprivation. The percentage reductions were 60.9% (P < 0.001), 41.6% (P = 0.003), and 18.9% (P = 0.326) for ganglion, inner nuclear, and outer nuclear cells, respectively. In non-deprived eyes, in contrast, cell densities increased by 116% (P < 0.001), 52% (P < 0.001) and 59.6% (P < 0.001) in ganglion, inner nuclear, and outer nuclear cells, respectively. In this rabbit model, monocular deprivation resulted in activity-dependent changes in cell densities of the neural retina in favour of the non-deprived eye, along with reduced cell densities in the deprived eye.

  1. Psycho-physiological effects of head-mounted displays in ubiquitous use

    NASA Astrophysics Data System (ADS)

    Kawai, Takashi; Häkkinen, Jukka; Oshima, Keisuke; Saito, Hiroko; Yamazoe, Takashi; Morikawa, Hiroyuki; Nyman, Göte

    2011-02-01

    In this study, two experiments were conducted to evaluate the psycho-physiological effects of practical use of a monocular head-mounted display (HMD) in a real-world environment, assuming consumer-level applications such as viewing video content and receiving navigation information while walking. In Experiment 1, the workload was examined for different types of stimulus presentation using an HMD (monocular or binocular, see-through or non-see-through). Experiment 2 focused on the relationship between the real-world environment and the visual information presented using a monocular HMD. The workload was compared between a case where participants walked while viewing video content unrelated to the real-world environment, and a case where participants walked while viewing visual information that augmented the real-world environment, such as navigation.

  2. SLAM-based dense surface reconstruction in monocular Minimally Invasive Surgery and its application to Augmented Reality.

    PubMed

    Chen, Long; Tang, Wen; John, Nigel W; Wan, Tao Ruan; Zhang, Jian Jun

    2018-05-01

    While Minimally Invasive Surgery (MIS) offers considerable benefits to patients, it also imposes big challenges on a surgeon's performance due to well-known issues and restrictions associated with the field of view (FOV), hand-eye misalignment and disorientation, as well as the lack of stereoscopic depth perception in monocular endoscopy. Augmented Reality (AR) technology can help to overcome these limitations by augmenting the real scene with annotations, labels, tumour measurements or even a 3D reconstruction of anatomy structures at the target surgical locations. However, previous research attempts of using AR technology in monocular MIS surgical scenes have mainly focused on information overlay without addressing correct spatial calibrations, which could lead to incorrect localization of annotations and labels, and inaccurate depth cues and tumour measurements. In this paper, we present a novel intra-operative dense surface reconstruction framework that is capable of providing geometry information from only monocular MIS videos for geometry-aware AR applications such as site measurements and depth cues. We address a number of compelling issues in augmenting a scene for a monocular MIS environment, such as drifting and inaccurate planar mapping. A state-of-the-art Simultaneous Localization And Mapping (SLAM) algorithm used in robotics has been extended to deal with monocular MIS surgical scenes for reliable endoscopic camera tracking and salient point mapping. A robust global 3D surface reconstruction framework has been developed for building a dense surface using only unorganized sparse point clouds extracted from the SLAM. The 3D surface reconstruction framework employs the Moving Least Squares (MLS) smoothing algorithm and the Poisson surface reconstruction framework for real-time processing of the point cloud data set. Finally, the 3D geometric information of the surgical scene allows better understanding and accurate placement of AR augmentations based on a robust 3D calibration. We demonstrate the clinical relevance of our proposed system through two examples: (a) measurement of the surface; (b) depth cues in monocular endoscopy. The performance and accuracy evaluations of the proposed framework consist of two steps. First, we created a computer-generated endoscopy simulation video to quantify the accuracy of the camera tracking by comparing the results of the video camera tracking with the recorded ground-truth camera trajectories. The accuracy of the surface reconstruction is assessed by evaluating the Root Mean Square Distance (RMSD) of surface vertices of the reconstructed mesh against that of the ground-truth 3D models. An error of 1.24 mm for the camera trajectories has been obtained and the RMSD for surface reconstruction is 2.54 mm, which compare favourably with previous approaches. Second, in vivo laparoscopic videos are used to examine the quality of accurate AR-based annotation and measurement, and the creation of depth cues. These results show the potential promise of our geometry-aware AR technology to be used in MIS surgical scenes. The results show that the new framework is robust and accurate in dealing with challenging situations such as rapid endoscopy camera movements in monocular MIS scenes. Both camera tracking and surface reconstruction based on a sparse point cloud are effective and operate in real time. This demonstrates the potential of our algorithm for accurate AR localization and depth augmentation with geometric cues and correct surface measurements in MIS with monocular endoscopes. Copyright © 2018 Elsevier B.V. All rights reserved.
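    The surface-accuracy figure quoted above (RMSD between reconstructed and ground-truth surface vertices) can in principle be computed as a nearest-neighbour root-mean-square distance. A minimal sketch of such an evaluation (not the authors' code; both point sets are assumed to be N x 3 arrays expressed in the same coordinate frame and scale):

      import numpy as np
      from scipy.spatial import cKDTree

      def rmsd_to_ground_truth(recon_vertices, gt_vertices):
          """RMS distance from each reconstructed vertex to its nearest ground-truth vertex."""
          tree = cKDTree(gt_vertices)
          d, _ = tree.query(recon_vertices)        # nearest-neighbour distances
          return float(np.sqrt(np.mean(d ** 2)))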

  3. Charles Miller Fisher: the 65th anniversary of the publication of his groundbreaking study "Transient Monocular Blindness Associated with Hemiplegia".

    PubMed

    Araújo, Tiago Fernando Souza de; Lange, Marcos; Zétola, Viviane H; Massaro, Ayrton; Teive, Hélio A G

    2017-10-01

    Charles Miller Fisher is considered the father of modern vascular neurology and one of the giants of neurology in the 20th century. This historical review emphasizes Prof. Fisher's magnificent contribution to vascular neurology and celebrates the 65th anniversary of the publication of his groundbreaking study, "Transient Monocular Blindness Associated with Hemiplegia."

  4. Object-based connectedness facilitates matching.

    PubMed

    Koning, Arno; van Lier, Rob

    2003-10-01

    In two matching tasks, participants had to match two images of object pairs. Image-based (IB) connectedness refers to connectedness between the objects in an image. Object-based (OB) connectedness refers to connectedness between the interpreted objects. In Experiment 1, a monocular depth cue (shadow) was used to distinguish different relation types between object pairs. Three relation types were created: IB/OB-connected objects, IB/OB-disconnected objects, and IB-connected/OB-disconnected objects. It was found that IB/OB-connected objects were matched faster than IB/OB-disconnected objects. Objects that were IB-connected/OB-disconnected were matched at the same speed as IB/OB-disconnected objects. In Experiment 2, stereoscopic presentation was used. With relation types comparable to those in Experiment 1, it was again found that OB connectedness, rather than IB connectedness, determined speed of matching. We conclude that matching of projections of three-dimensional objects depends more on OB connectedness than on IB connectedness.

  5. VISIDEP™: visual image depth enhancement by parallax induction

    NASA Astrophysics Data System (ADS)

    Jones, Edwin R.; McLaurin, A. P.; Cathey, LeConte

    1984-05-01

    The usual descriptions of depth perception have traditionally required the simultaneous presentation of disparate views to separate eyes, with the concomitant demand that the resulting binocular parallax be horizontally aligned. Our work suggests that the visual input information is compared in a short-term memory buffer which permits the brain to compute depth as it is normally perceived. However, the mechanism utilized is also capable of receiving and processing the stereographic information even when it is received monocularly or when identical inputs are simultaneously fed to both eyes. We have also found that the restriction to horizontally displaced images is not a necessary requirement and that improvement in image acceptability is achieved by the use of vertical parallax. Use of these ideas permits the presentation of three-dimensional scenes on flat screens in full color without the encumbrance of glasses or other viewing aids.

  6. Comparing the fixational and functional preferred retinal location in a pointing task

    PubMed Central

    Sullivan, Brian; Walker, Laura

    2016-01-01

    Patients with central vision loss (CVL) typically adopt eccentric viewing strategies using a preferred retinal locus (PRL) in peripheral retina. Clinically, the PRL is defined monocularly as the area of peripheral retina used to fixate small stimuli. It is not clear if this fixational PRL describes the same portion of peripheral retina used during dynamic binocular eye-hand coordination tasks. We studied this question with four participants each with a unique CVL history. Using a scanning laser ophthalmoscope, we measured participants’ monocular visual fields and the location and stability of their fixational PRLs. Participants’ monocular and binocular visual fields were also evaluated using a computer monitor and eye tracker. Lastly, eye-hand coordination was tested over several trials where participants pointed to and touched a small target on a touchscreen monitor. Trials were blocked and carried out monocularly and binocularly, with a target appearing at 5° or 15° from screen center, in one of 8 locations. During pointing, our participants often exhibited long movement durations, an increased number of eye movements and impaired accuracy, especially in monocular conditions. However, these compensatory changes in behavior did not consistently worsen when loci beyond the fixational PRL were used. While fixational PRL size, location and fixation stability provide a necessary description of behavior, they are not sufficient to capture the pointing PRL used in this task. Generally, patients use a larger portion of peripheral retina than one might expect from measures of the fixational PRL alone, when pointing to a salient target without time constraints. While the fixational and pointing PRLs often overlap, the fixational PRL does not predict the large area of peripheral retina that can be used. PMID:26440864

  7. Relationship between contrast sensitivity test and disease severity in multiple sclerosis patients.

    PubMed

    Soler García, A; González Gómez, A; Figueroa-Ortiz, L C; García-Ben, A; García-Campos, J

    2014-09-01

    To assess the importance of the Pelli-Robson contrast sensitivity test in multiple sclerosis patients according to the Expanded Disability Status Scale (EDSS). A total of 62 patients with multiple sclerosis were included in a retrospective study. Patients were referred from the Neurology Department to the Neuro-ophthalmology Unit at Virgen de la Victoria Hospital. Patients were classified into 3 groups according to EDSS: group A) lower than 1.5, group B) between 1.5 and 3.5, and group C) greater than 3.5. Visual acuity and monocular and binocular contrast sensitivity were measured with the Snellen and Pelli-Robson tests, respectively. Twelve disease-free control participants were also recruited. Correlations between parameter changes were analyzed. The mean duration of the disease was 81.54±35.32 months. Monocular and binocular Pelli-Robson mean values in the control group were 1.82±0.10 and 1.93±0.43 respectively, and 1.61±0.29 and 1.83±0.19 in multiple sclerosis patients. There were statistically significant differences in the monocular analysis for a level of significance P<.05. Mean monocular and binocular Pelli-Robson values in relation to severity level were, in group A, 1.66±0.24 and 1.90±0.98; in group B, 1.64±0.21 and 1.82±0.16; and in group C, 1.47±0.45 and 1.73±0.32, respectively. Group differences were statistically significant in both tests: P=.05 and P=.027. Monocular and binocular contrast discrimination analyzed using the Pelli-Robson test was found to be significantly lower as the severity level, according to the EDSS, increases in multiple sclerosis patients. Copyright © 2013 Sociedad Española de Oftalmología. Published by Elsevier Espana. All rights reserved.

  8. mRNAs coding for neurotransmitter receptors and voltage-gated sodium channels in the adult rabbit visual cortex after monocular deafferentiation

    PubMed Central

    Nguyen, Quoc-Thang; Matute, Carlos; Miledi, Ricardo

    1998-01-01

    It has been postulated that, in the adult visual cortex, visual inputs modulate levels of mRNAs coding for neurotransmitter receptors in an activity-dependent manner. To investigate this possibility, we performed a monocular enucleation in adult rabbits and, 15 days later, collected their left and right visual cortices. Levels of mRNAs coding for voltage-activated sodium channels, and for receptors for kainate/α-amino-3-hydroxy-5-methylisoxazole-4-propionic acid (AMPA), N-methyl-d-aspartate (NMDA), γ-aminobutyric acid (GABA), and glycine were semiquantitatively estimated in the visual cortices ipsilateral and contralateral to the lesion by the Xenopus oocyte/voltage-clamp expression system. This technique also allowed us to study some of the pharmacological and physiological properties of the channels and receptors expressed in the oocytes. In cells injected with mRNA from left or right cortices of monocularly enucleated and control animals, the amplitudes of currents elicited by kainate or AMPA, which reflect the abundance of mRNAs coding for kainate and AMPA receptors, were similar. There was no difference in the sensitivity to kainate and in the voltage dependence of the kainate response. Responses mediated by NMDA, GABA, and glycine were unaffected by monocular enucleation. Sodium channel peak currents, activation, steady-state inactivation, and sensitivity to tetrodotoxin also remained unchanged after the enucleation. Our data show that mRNAs for major neurotransmitter receptors and ion channels in the adult rabbit visual cortex are not obviously modified by monocular deafferentiation. Thus, our results do not support the idea of a widespread dynamic modulation of mRNAs coding for receptors and ion channels by visual activity in the rabbit visual system. PMID:9501250

  9. Binocular vision in amblyopia: structure, suppression and plasticity.

    PubMed

    Hess, Robert F; Thompson, Benjamin; Baker, Daniel H

    2014-03-01

    The amblyopic visual system was once considered to be structurally monocular. However, it is now evident that the capacity for binocular vision is present in many observers with amblyopia. This has led to new techniques for quantifying suppression that have provided insights into the relationship between suppression and the monocular and binocular visual deficits experienced by amblyopes. Furthermore, new treatments are emerging that directly target suppressive interactions within the visual cortex and, on the basis of initial data, appear to improve both binocular and monocular visual function, even in adults with amblyopia. The aim of this review is to provide an overview of recent studies that have investigated the structure, measurement and treatment of binocular vision in observers with strabismic, anisometropic and mixed amblyopia. © 2014 The Authors Ophthalmic & Physiological Optics © 2014 The College of Optometrists.

  10. Measurement of the Flux of Ultrahigh Energy Cosmic Rays from Monocular Observations by the High Resolution Fly's Eye Experiment

    NASA Astrophysics Data System (ADS)

    Abbasi, R. U.; Abu-Zayyad, T.; Amann, J. F.; Archbold, G.; Bellido, J. A.; Belov, K.; Belz, J. W.; Bergman, D. R.; Cao, Z.; Clay, R. W.; Cooper, M. D.; Dai, H.; Dawson, B. R.; Everett, A. A.; Fedorova, Yu. A.; Girard, J. H.; Gray, R. C.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hüntemeyer, P.; Jones, B. F.; Jui, C. C.; Kieda, D. B.; Kim, K.; Kirn, M. A.; Loh, E. C.; Manago, N.; Marek, L. J.; Martens, K.; Martin, G.; Matthews, J. A.; Matthews, J. N.; Meyer, J. R.; Moore, S. A.; Morrison, P.; Moosman, A. N.; Mumford, J. R.; Munro, M. W.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M.; Sarracino, J. S.; Sasaki, M.; Schnetzer, S. R.; Shen, P.; Simpson, K. M.; Sinnis, G.; Smith, J. D.; Sokolsky, P.; Song, C.; Springer, R. W.; Stokes, B. T.; Taylor, S. F.; Thomas, S. B.; Thompson, T. N.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; Vanderveen, T. D.; Zech, A.; Zhang, X.

    2004-04-01

    We have measured the cosmic ray spectrum above 10^17.2 eV using the two air-fluorescence detectors of the High Resolution Fly's Eye observatory operating in monocular mode. We describe the detector, phototube, and atmospheric calibrations, as well as the analysis techniques for the two detectors. We fit the spectrum to a model consisting of galactic and extragalactic sources.

  11. Cues for the control of ocular accommodation and vergence during postnatal human development.

    PubMed

    Bharadwaj, Shrikant R; Candy, T Rowan

    2008-12-22

    Accommodation and vergence help maintain single and focused visual experience while an object moves in depth. The relative importance of retinal blur and disparity, the primary sensory cues to accommodation and vergence, is largely unknown during development, a period when growth of the eye and head necessitates continual recalibration of egocentric space. Here we measured the developmental importance of retinal disparity in 192 typically developing subjects (1.9 months to 46 years). Subjects viewed high-contrast cartoon targets with naturalistic spatial frequency spectra while their accommodation and vergence responses were measured from both eyes using a PowerRefractor. Accommodative gain was reduced during monocular viewing relative to full binocular viewing, even though the fixating eye generated comparable tracking eye movements in the two conditions. This result was consistent across three forms of monocular occlusion. The accommodative gain was lowest in infants and only reached adult levels by 7 to 10 years of age. As expected, the gain of vergence was also reduced in monocular conditions. When 4- to 6-year-old children read 20/40-sized letters, their monocular accommodative gain reached adult-like levels. In summary, binocular viewing appears necessary under naturalistic viewing conditions to generate full accommodation and vergence responses in typically developing humans.

  12. Cues for the control of ocular accommodation and vergence during postnatal human development

    PubMed Central

    Bharadwaj, Shrikant R.; Candy, T. Rowan

    2009-01-01

    Accommodation and vergence help maintain single and focused visual experience while an object moves in depth. The relative importance of retinal blur and disparity, the primary sensory cues to accommodation and vergence, is largely unknown during development, a period when growth of the eye and head necessitates continual recalibration of egocentric space. Here we measured the developmental importance of retinal disparity in 192 typically developing subjects (1.9 months to 46 years). Subjects viewed high-contrast cartoon targets with naturalistic spatial frequency spectra while their accommodation and vergence responses were measured from both eyes using a PowerRefractor. Accommodative gain was reduced during monocular viewing relative to full binocular viewing, even though the fixating eye generated comparable tracking eye movements in the two conditions. This result was consistent across three forms of monocular occlusion. The accommodative gain was lowest in infants and only reached adult levels by 7 to 10 years of age. As expected, the gain of vergence was also reduced in monocular conditions. When 4- to 6-year-old children read 20/40-sized letters, their monocular accommodative gain reached adult-like levels. In summary, binocular viewing appears necessary under naturalistic viewing conditions to generate full accommodation and vergence responses in typically developing humans. PMID:19146280

  13. Design of and normative data for a new computer based test of ocular torsion.

    PubMed

    Vaswani, Reena S; Mudgil, Ananth V

    2004-01-01

    To evaluate a new, clinically practical, dynamic test for quantifying torsional binocular eye alignment changes that may occur in the change from monocular to binocular viewing conditions. The test was developed using a computer with Lotus Freelance software, and binoculars with prisms and colored filters. The subject looks through the binoculars at the computer screen two meters away. For monocular vision, six concentric blue circles, a blue horizontal line and a tilted red line were displayed on the screen. For binocular vision, white circles replaced the blue circles. The subject was asked to orient the lines parallel to each other. The difference in tilt (degrees) between the subjective parallel and the fixed horizontal position is the torsional alignment of the eye. The time to administer the test was approximately two minutes. In 70 normal subjects, average age 16 years, the mean cyclodeviation tilt in the right eye was 0.6 degrees for monocular viewing and 0.7 degrees for binocular viewing, with a standard deviation of approximately one degree. There was no statistically significant difference between monocular and binocular viewing. This simple, computerized, non-invasive test has potential for use in the diagnosis of cyclovertical strabismus. Currently, there is no commercially available test for this purpose.

  14. Audiovisual plasticity following early abnormal visual experience: Reduced McGurk effect in people with one eye.

    PubMed

    Moro, Stefania S; Steeves, Jennifer K E

    2018-04-13

    Previously, we have shown that people who have had one eye surgically removed early in life during visual development have enhanced sound localization [1] and lack visual dominance, commonly observed in binocular and monocular (eye-patched) viewing controls [2]. Despite these changes, people with one eye integrate auditory and visual components of multisensory events optimally [3]. The current study investigates how people with one eye perceive the McGurk effect, an audiovisual illusion where a new syllable is perceived when visual lip movements do not match the corresponding sound [4]. We compared individuals with one eye to binocular and monocular viewing controls and found that they have a significantly smaller McGurk effect compared to binocular controls. Additionally, monocular controls tended to perceive the McGurk effect less often than binocular controls suggesting a small transient modulation of the McGurk effect. These results suggest altered weighting of the auditory and visual modalities with both short and long-term monocular viewing. These results indicate the presence of permanent adaptive perceptual accommodations in people who have lost one eye early in life that may serve to mitigate the loss of binocularity during early brain development. Crown Copyright © 2018. Published by Elsevier B.V. All rights reserved.

  15. Reconstruction of the optical system of personalized eye models by using magnetic resonance imaging.

    PubMed

    Sun, Han-Yin; Lee, Chi-Hung; Chuang, Chun-Chao

    2016-11-10

    This study presents a practical method for reconstructing the optical system of personalized eye models by using magnetic resonance imaging (MRI). Monocular images were obtained from a young (20-year-old) healthy subject viewing at a near point (10 cm). Each magnetic resonance image was first analyzed using several commercial software packages to capture the profile of each optical element of the human eye except for the anterior lens surface, which could not be determined because it overlapped the ciliary muscle. The missing profile was substituted with a modified profile from a generic eye model. After the data, including the refractive indices from a generic model, were input into ZEMAX, we obtained a reasonable initial layout. By further considering the resolution of the MRI, the model was optimized to match the optical performance of a healthy eye. The main benefit of having a personalized eye model is the ability to quantitatively identify wide-angle ocular aberrations, which were corrected by the designed free-form spectacle lens.

  16. A Probabilistic Feature Map-Based Localization System Using a Monocular Camera.

    PubMed

    Kim, Hyungjin; Lee, Donghwa; Oh, Taekjun; Choi, Hyun-Taek; Myung, Hyun

    2015-08-31

    Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulation and real environments.
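    The refinement step described above, an initial PnP pose improved by minimising Mahalanobis-weighted reprojection errors against the probabilistic map, can be sketched roughly as follows. This is an illustrative reconstruction rather than the authors' implementation; it assumes OpenCV for PnP and projection, a pinhole camera K without distortion, and a 2x2 covariance per projected map feature:

      import numpy as np
      import cv2
      from scipy.optimize import least_squares

      def refine_pose(pts3d, pts2d, covs2d, K):
          """pts3d: Nx3 map points, pts2d: Nx2 measurements,
          covs2d: Nx2x2 feature covariances, K: 3x3 intrinsics."""
          dist = np.zeros(5)
          # Initial pose from the conventional PnP algorithm.
          _, rvec, tvec = cv2.solvePnP(pts3d.astype(np.float64),
                                       pts2d.astype(np.float64), K, dist)
          # Whitening matrices turn Euclidean residuals into Mahalanobis residuals.
          whiteners = [np.linalg.cholesky(np.linalg.inv(c)) for c in covs2d]

          def residuals(x):
              proj, _ = cv2.projectPoints(pts3d, x[:3], x[3:], K, dist)
              err = proj.reshape(-1, 2) - pts2d
              return np.concatenate([W.T @ e for W, e in zip(whiteners, err)])

          x0 = np.concatenate([rvec.ravel(), tvec.ravel()])
          sol = least_squares(residuals, x0)       # minimise summed Mahalanobis errors
          return sol.x[:3], sol.x[3:]              # refined rvec, tvec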

  17. A Probabilistic Feature Map-Based Localization System Using a Monocular Camera

    PubMed Central

    Kim, Hyungjin; Lee, Donghwa; Oh, Taekjun; Choi, Hyun-Taek; Myung, Hyun

    2015-01-01

    Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulation and real environments. PMID:26404284

  18. Robust image features: concentric contrasting circles and their image extraction

    NASA Astrophysics Data System (ADS)

    Gatrell, Lance B.; Hoff, William A.; Sklair, Cheryl W.

    1992-03-01

    Many computer vision tasks can be simplified if special image features are placed on the objects to be recognized. A review of special image features that have been used in the past is given and then a new image feature, the concentric contrasting circle, is presented. The concentric contrasting circle image feature has the advantages of being easily manufactured, easily extracted from the image, robust extraction (true targets are found, while few false targets are found), it is a passive feature, and its centroid is completely invariant to the three translational and one rotational degrees of freedom and nearly invariant to the remaining two rotational degrees of freedom. There are several examples of existing parallel implementations which perform most of the extraction work. Extraction robustness was measured by recording the probability of correct detection and the false alarm rate in a set of images of scenes containing mockups of satellites, fluid couplings, and electrical components. A typical application of concentric contrasting circle features is to place them on modeled objects for monocular pose estimation or object identification. This feature is demonstrated on a visually challenging background of a specular but wrinkled surface similar to a multilayered insulation spacecraft thermal blanket.
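    One way to see why this feature is easy to extract robustly: threshold the image at both extremes, label the connected components, and accept pairs whose centroids coincide (a dark disc concentric with a bright surround, or vice versa). A rough sketch under those assumptions (grayscale image; the percentile thresholds and tolerance are illustrative, not taken from the paper):

      import numpy as np
      from scipy import ndimage

      def find_concentric_targets(gray, tol=1.5):
          """Return centroids where a dark blob and a bright blob are concentric."""
          bright = gray > np.percentile(gray, 90)
          dark = gray < np.percentile(gray, 10)
          b_lab, nb = ndimage.label(bright)
          d_lab, nd = ndimage.label(dark)
          b_cent = ndimage.center_of_mass(bright, b_lab, range(1, nb + 1))
          d_cent = ndimage.center_of_mass(dark, d_lab, range(1, nd + 1))
          targets = []
          for bc in b_cent:
              for dc in d_cent:
                  # A concentric pair has (nearly) coincident centroids.
                  if np.hypot(bc[0] - dc[0], bc[1] - dc[1]) < tol:
                      targets.append(((bc[0] + dc[0]) / 2.0, (bc[1] + dc[1]) / 2.0))
          return targets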

  19. Measurement of the flux of ultrahigh energy cosmic rays from monocular observations by the High Resolution Fly's Eye experiment.

    PubMed

    Abbasi, R U; Abu-Zayyad, T; Amann, J F; Archbold, G; Bellido, J A; Belov, K; Belz, J W; Bergman, D R; Cao, Z; Clay, R W; Cooper, M D; Dai, H; Dawson, B R; Everett, A A; Fedorova, Yu A; Girard, J H V; Gray, R C; Hanlon, W F; Hoffman, C M; Holzscheiter, M H; Hüntemeyer, P; Jones, B F; Jui, C C H; Kieda, D B; Kim, K; Kirn, M A; Loh, E C; Manago, N; Marek, L J; Martens, K; Martin, G; Matthews, J A J; Matthews, J N; Meyer, J R; Moore, S A; Morrison, P; Moosman, A N; Mumford, J R; Munro, M W; Painter, C A; Perera, L; Reil, K; Riehle, R; Roberts, M; Sarracino, J S; Sasaki, M; Schnetzer, S R; Shen, P; Simpson, K M; Sinnis, G; Smith, J D; Sokolsky, P; Song, C; Springer, R W; Stokes, B T; Taylor, S F; Thomas, S B; Thompson, T N; Thomson, G B; Tupa, D; Westerhoff, S; Wiencke, L R; VanderVeen, T D; Zech, A; Zhang, X

    2004-04-16

    We have measured the cosmic ray spectrum above 10^17.2 eV using the two air-fluorescence detectors of the High Resolution Fly's Eye observatory operating in monocular mode. We describe the detector, phototube, and atmospheric calibrations, as well as the analysis techniques for the two detectors. We fit the spectrum to a model consisting of galactic and extragalactic sources.

  20. Three-dimensional motion aftereffects reveal distinct direction-selective mechanisms for binocular processing of motion through depth.

    PubMed

    Czuba, Thaddeus B; Rokers, Bas; Guillet, Kyle; Huk, Alexander C; Cormack, Lawrence K

    2011-09-26

    Motion aftereffects are historically considered evidence for neuronal populations tuned to specific directions of motion. Despite a wealth of motion aftereffect studies investigating 2D (frontoparallel) motion mechanisms, there is a remarkable dearth of psychophysical evidence for neuronal populations selective for the direction of motion through depth (i.e., tuned to 3D motion). We compared the effects of prolonged viewing of unidirectional motion under dichoptic and monocular conditions and found large 3D motion aftereffects that could not be explained by simple inheritance of 2D monocular aftereffects. These results (1) demonstrate the existence of neurons tuned to 3D motion as distinct from monocular 2D mechanisms, (2) show that distinct 3D direction selectivity arises from both interocular velocity differences and changing disparities over time, and (3) provide a straightforward psychophysical tool for further probing 3D motion mechanisms. © ARVO

  1. Environmental Enrichment Promotes Plasticity and Visual Acuity Recovery in Adult Monocular Amblyopic Rats

    PubMed Central

    Bonaccorsi, Joyce; Cenni, Maria Cristina; Sale, Alessandro; Maffei, Lamberto

    2012-01-01

    Loss of visual acuity caused by abnormal visual experience during development (amblyopia) is an untreatable pathology in adults. On some occasions, amblyopic patients lose vision in their better eye owing to accidents or illnesses. While this condition is relevant both for its clinical importance and because it represents a case in which binocular interactions in the visual cortex are suppressed, it has scarcely been studied in animal models. We investigated whether exposure to environmental enrichment (EE) is effective in triggering recovery of vision in adult amblyopic rats rendered monocular by optic nerve dissection in their normal eye. By employing both electrophysiological and behavioral assessments, we found a full recovery of visual acuity in enriched rats compared to controls reared in standard conditions. Moreover, we report that EE modulates the expression of GAD67 and BDNF. The non-invasive nature of EE renders this paradigm promising for amblyopia therapy in adult monocular people. PMID:22509358

  2. Three-dimensional motion aftereffects reveal distinct direction-selective mechanisms for binocular processing of motion through depth

    PubMed Central

    Czuba, Thaddeus B.; Rokers, Bas; Guillet, Kyle; Huk, Alexander C.; Cormack, Lawrence K.

    2013-01-01

    Motion aftereffects are historically considered evidence for neuronal populations tuned to specific directions of motion. Despite a wealth of motion aftereffect studies investigating 2D (frontoparallel) motion mechanisms, there is a remarkable dearth of psychophysical evidence for neuronal populations selective for the direction of motion through depth (i.e., tuned to 3D motion). We compared the effects of prolonged viewing of unidirectional motion under dichoptic and monocular conditions and found large 3D motion aftereffects that could not be explained by simple inheritance of 2D monocular aftereffects. These results (1) demonstrate the existence of neurons tuned to 3D motion as distinct from monocular 2D mechanisms, (2) show that distinct 3D direction selectivity arises from both interocular velocity differences and changing disparities over time, and (3) provide a straightforward psychophysical tool for further probing 3D motion mechanisms. PMID:21945967

  3. Cooperative Monocular-Based SLAM for Multi-UAV Systems in GPS-Denied Environments †

    PubMed Central

    Guerra, Edmundo

    2018-01-01

    This work presents a cooperative monocular-based SLAM approach for multi-UAV systems that can operate in GPS-denied environments. The main contribution of the work is to show that, using visual information obtained from monocular cameras mounted onboard aerial vehicles flying in formation, the observability properties of the whole system are improved. This fact is especially noticeable when compared with other related visual SLAM configurations. In order to improve the observability properties, some measurements of the relative distance between the UAVs are included in the system. These relative distances are also obtained from visual information. The proposed approach is theoretically validated by means of a nonlinear observability analysis. Furthermore, an extensive set of computer simulations is presented in order to validate the proposed approach. The numerical simulation results show that the proposed system is able to provide a good position and orientation estimation of the aerial vehicles flying in formation. PMID:29701722

  4. Thinking in z-space: flatness and spatial narrativity

    NASA Astrophysics Data System (ADS)

    Zone, Ray

    2012-03-01

    Now that digital technology has accessed the Z-space in cinema, narrative artistry is at a loss. Motion picture professionals no longer can readily resort to familiar tools. A new language and new linguistics for Z-axis storytelling are necessary. After first examining the roots of monocular thinking in painting, prior modes of visual narrative in two-dimensional cinema obviating true binocular stereopsis can be explored, particularly montage, camera motion and depth of field, with historic examples. Special attention is paid to the manner in which monocular cues for depth have been exploited to infer depth on a planar screen. Both the artistic potential and visual limitations of actual stereoscopic depth as a filmmaking language are interrogated. After an examination of the historic basis of monocular thinking in visual culture, a context for artistic exploration of the use of the z-axis as a heightened means of creating dramatic and emotional impact upon the viewer is illustrated.

  5. Cooperative Monocular-Based SLAM for Multi-UAV Systems in GPS-Denied Environments.

    PubMed

    Trujillo, Juan-Carlos; Munguia, Rodrigo; Guerra, Edmundo; Grau, Antoni

    2018-04-26

    This work presents a cooperative monocular-based SLAM approach for multi-UAV systems that can operate in GPS-denied environments. The main contribution of the work is to show that, using visual information obtained from monocular cameras mounted onboard aerial vehicles flying in formation, the observability properties of the whole system are improved. This fact is especially noticeable when compared with other related visual SLAM configurations. In order to improve the observability properties, some measurements of the relative distance between the UAVs are included in the system. These relative distances are also obtained from visual information. The proposed approach is theoretically validated by means of a nonlinear observability analysis. Furthermore, an extensive set of computer simulations is presented in order to validate the proposed approach. The numerical simulation results show that the proposed system is able to provide a good position and orientation estimation of the aerial vehicles flying in formation.
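    The validation mentioned above is a nonlinear (Lie-derivative based) observability analysis, which is not reproduced here. As a much simpler illustration of the underlying idea, the linear rank test below checks whether a linearised system (A, H) is observable: the state is recoverable when the stacked matrix [H; HA; ...; HA^(n-1)] has full column rank, and adding a relative-distance measurement between vehicles amounts to adding rows to H, which can raise that rank. Sketch with hypothetical toy matrices:

      import numpy as np

      def observability_rank(A, H):
          """Rank of the linear observability matrix [H; HA; ...; HA^(n-1)]."""
          n = A.shape[0]
          blocks = [H @ np.linalg.matrix_power(A, k) for k in range(n)]
          return np.linalg.matrix_rank(np.vstack(blocks))

      # Toy example: position/velocity state with a constant-velocity model.
      A = np.array([[1.0, 1.0],
                    [0.0, 1.0]])
      H_pos = np.array([[1.0, 0.0]])   # measuring position
      H_vel = np.array([[0.0, 1.0]])   # measuring velocity only
      print(observability_rank(A, H_pos))  # 2 -> fully observable
      print(observability_rank(A, H_vel))  # 1 -> position remains unobservable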

  6. Adaptation to interocular differences in blur

    PubMed Central

    Kompaniez, Elysse; Sawides, Lucie; Marcos, Susana; Webster, Michael A.

    2013-01-01

    Adaptation to a blurred image causes a physically focused image to appear too sharp, and shifts the point of subjective focus toward the adapting blur, consistent with a renormalization of perceived focus. We examined whether and how this adaptation normalizes to differences in blur between the two eyes, which can routinely arise from differences in refractive errors. Observers adapted to images filtered to simulate optical defocus or different axes of astigmatism, as well as to images that were isotropically blurred or sharpened by varying the slope of the amplitude spectrum. Adaptation to the different types of blur produced strong aftereffects that showed strong transfer across the eyes, as assessed both in a monocular adaptation task and in a contingent adaptation task in which the two eyes were simultaneously exposed to different blur levels. Selectivity for the adapting eye was thus generally weak. When one eye was exposed to a sharper image than the other, the aftereffects also tended to be dominated by the sharper image. Our results suggest that while short-term adaptation can rapidly recalibrate the perception of blur, it cannot do so independently for the two eyes, and that the binocular adaptation of blur is biased by the sharper of the two eyes' retinal images. PMID:23729770
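    The isotropic blur/sharpen manipulation mentioned above, varying the slope of the amplitude spectrum, can be illustrated in a few lines: multiply the Fourier amplitude by f^delta while leaving the phase untouched, so a negative delta steepens the spectrum (blur-like) and a positive delta flattens it (sharpen-like). A minimal sketch, not the stimulus-generation code used in the study:

      import numpy as np

      def change_spectral_slope(gray, delta):
          """Re-weight the amplitude spectrum by f**delta (phase unchanged)."""
          f = np.fft.fft2(gray.astype(float))
          fy = np.fft.fftfreq(gray.shape[0])[:, None]
          fx = np.fft.fftfreq(gray.shape[1])[None, :]
          radius = np.hypot(fy, fx)
          radius[0, 0] = 1.0                 # leave the DC term unscaled
          return np.fft.ifft2(f * radius ** delta).real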

  7. Vergence accommodation and monocular closed loop blur accommodation have similar dynamic characteristics.

    PubMed

    Suryakumar, Rajaraman; Meyers, Jason P; Irving, Elizabeth L; Bobier, William R

    2007-02-01

    Retinal blur and disparity are two different sensory signals known to cause a change in accommodative response. These inputs have differing neurological correlates that feed into a final common pathway. The purpose of this study was to investigate the dynamic properties of monocular blur driven accommodation and binocular disparity driven vergence-accommodation (VA) in human subjects. The results show that when response amplitudes are matched, blur accommodation and VA share similar dynamic properties.

  8. [Acute monocular loss of vision : Differential diagnostic considerations apart from the internistic etiological clarification].

    PubMed

    Rickmann, A; Macek, M A; Szurman, P; Boden, K

    2017-08-03

    We report a case of acute painless monocular loss of vision in a 53-year-old man. An interdisciplinary etiological work-up of the arterial branch occlusion remained without pathological findings. A reevaluation of the patient history led to a possible association with the administration of a phosphodiesterase type 5 (PDE5) inhibitor. A critical review of the literature on PDE5 inhibitor administration with ocular involvement was performed.

  9. Evaluating the speed of visual recovery following thin-flap LASIK with a femtosecond laser.

    PubMed

    Durrie, Daniel S; Brinton, Jason P; Avila, Michele R; Stahl, Erin D

    2012-09-01

    To investigate the speed of visual recovery following myopic thin-flap LASIK with a femtosecond laser. This pilot study prospectively evaluated 20 eyes from 10 patients who underwent bilateral simultaneous LASIK with the Femto LDV Crystal Line femtosecond laser (Ziemer Ophthalmic Systems AG) used to create a circular flap of 9.0-mm diameter and 110-μm thickness followed by photoablation with the Allegretto Wave Eye-Q (WaveLight AG) excimer laser. Binocular and monocular uncorrected distance visual acuity (UDVA), monocular contrast sensitivity, and a patient questionnaire were evaluated during the first hours, 1 day, and 1 month postoperatively. For monocular UDVA, 100% of eyes were 20/40 at 1 hour and 100% were 20/25 at 4 hours. For binocular UDVA, all patients achieved 20/32 by 30 minutes and 20/20 by 4 hours. Low frequency contrast sensitivity returned to preoperative baseline by 1 hour (P=.73), and showed a statistically significant improvement over baseline by 4 hours (P=.01). High frequency monocular contrast sensitivity returned to preoperative baseline by 4 hours (P=.48), and showed a statistically significant improvement by 1 month (P=.04). At 2 and 4 hours, 50% and 100% of patients, respectively, indicated that they would feel comfortable driving. Visual recovery after thin-flap femtosecond LASIK is rapid, occurring within the first few hours after surgery. Copyright 2012, SLACK Incorporated.

  10. Reduction in spontaneous firing of mouse excitatory layer 4 cortical neurons following visual classical conditioning

    NASA Astrophysics Data System (ADS)

    Bekisz, Marek; Shendye, Ninad; Raciborska, Ida; Wróbel, Andrzej; Waleszczyk, Wioletta J.

    2017-08-01

    The process of learning induces plastic changes in neuronal network of the brain. Our earlier studies on mice showed that classical conditioning in which monocular visual stimulation was paired with an electric shock to the tail enhanced GABA immunoreactivity within layer 4 of the monocular part of the primary visual cortex (V1), contralaterally to the stimulated eye. In the present experiment we investigated whether the same classical conditioning paradigm induces changes of neuronal excitability in this cortical area. Two experimental groups were used: mice that underwent 7-day visual classical conditioning and controls. Patch-clamp whole-cell recordings were performed from ex vivo slices of mouse V1. The slices were perfused with the modified artificial cerebrospinal fluid, the composition of which better mimics the brain interstitial fluid in situ and induces spontaneous activity. The neuronal excitability was characterized by measuring the frequency of spontaneous action potentials. We found that layer 4 star pyramidal cells located in the monocular representation of the "trained" eye in V1 had lower frequency of spontaneous activity in comparison with neurons from the same cortical region of control animals. Weaker spontaneous firing indicates decreased general excitability of star pyramidal neurons within layer 4 of the monocular representation of the "trained" eye in V1. Such effect could result from enhanced inhibitory processes accompanying learning in this cortical area.

  11. Autocalibrating vision guided navigation of unmanned air vehicles via tactical monocular cameras in GPS denied environments

    NASA Astrophysics Data System (ADS)

    Celik, Koray

    This thesis presents a novel robotic navigation strategy using a conventional tactical monocular camera, proving the feasibility of a monocular camera as the sole proximity-sensing, obstacle-avoidance, mapping, and path-planning mechanism for flying and navigating small to medium scale unmanned rotary-wing aircraft autonomously. The range-measurement strategy is scalable, self-calibrating, and indoor-outdoor capable; it is biologically inspired by the key adaptive mechanisms for depth perception and pattern recognition found in humans and intelligent animals (particularly bats), and is designed for operation in previously unknown, GPS-denied environments. The thesis proposes novel electronics, aircraft, aircraft systems, procedures, and algorithms that together form airborne systems which measure absolute ranges from a monocular camera via passive photometry, mimicking human-pilot-like judgement. The research is intended to bridge the gap between practical GPS coverage and the precision localization and mapping problem for a small aircraft. In the context of this study, several robotic platforms, airborne and ground alike, have been developed, some of which have been integrated in real-life field trials for experimental validation. Although the emphasis is on miniature robotic aircraft, this research has been tested and found compatible with tactical vests and helmets, and it can be used to augment the reliability of many other types of proximity sensors.

  12. Solving da Vinci stereopsis with depth-edge-selective V2 cells

    PubMed Central

    Assee, Andrew; Qian, Ning

    2007-01-01

    We propose a new model for da Vinci stereopsis based on a coarse-to-fine disparity-energy computation in V1 and disparity-boundary-selective units in V2. Unlike previous work, our model contains only binocular cells, relies on distributed representations of disparity, and has a simple V1-to-V2 feedforward structure. We demonstrate with random dot stereograms that the V2 stage of our model is able to determine the location and the eye-of-origin of monocularly occluded regions and improve disparity map computation. We also examine a few related issues. First, we argue that since monocular regions are binocularly defined, they cannot generally be detected by monocular cells. Second, we show that our coarse-to-fine V1 model for conventional stereopsis explains double matching in Panum’s limiting case. This provides computational support to the notion that the perceived depth of a monocular bar next to a binocular rectangle may not be da Vinci stereopsis per se (Gillam et al., 2003). Third, we demonstrate that some stimuli previously deemed invalid have simple, valid geometric interpretations. Our work suggests that studies of da Vinci stereopsis should focus on stimuli more general than the bar-and-rectangle type and that disparity-boundary-selective V2 cells may provide a simple physiological mechanism for da Vinci stereopsis. PMID:17698163

  13. Frequency-doubling technology perimetry and multifocal visual evoked potential in glaucoma, suspected glaucoma, and control patients

    PubMed Central

    Kanadani, Fabio N; Mello, Paulo AA; Dorairaj, Syril K; Kanadani, Tereza CM

    2014-01-01

    Introduction: The gold standard in functional glaucoma evaluation is standard automated perimetry (SAP). However, SAP depends on the reliability of the patients’ responses and other external factors; therefore, other technologies have been developed for earlier detection of visual field changes in glaucoma patients. Frequency-doubling technology (FDT) perimetry is believed to detect glaucoma earlier than SAP. The multifocal visual evoked potential (mfVEP) is an objective test for functional evaluation. Objective: To evaluate the sensitivity and specificity of FDT and mfVEP tests in normal, suspect, and glaucomatous eyes and to compare the monocular and interocular mfVEP. Methods: Ninety-five eyes from 95 individuals (23 controls, 33 glaucoma suspects, 39 glaucomatous) were enrolled. All participants underwent a full ophthalmic examination, followed by SAP, FDT, and mfVEP tests. Results: The areas under the curve for mean deviation and pattern standard deviation were 0.756 and 0.761, respectively, for FDT; 0.564 and 0.512 for signal and alpha for interocular mfVEP; and 0.568 and 0.538 for signal and alpha for monocular mfVEP. The difference between monocular and interocular mfVEP was not significant. Conclusion: The FDT Matrix was superior to mfVEP in glaucoma detection. The difference between monocular and interocular mfVEP in the diagnosis of glaucoma was not significant. PMID:25075173

  14. Unilateral blindness with third cranial nerve palsy and abnormal enhancement of extraocular muscles on magnetic resonance imaging of orbit after the ingestion of methanol.

    PubMed

    Chung, Tae Nyoung; Kim, Sun Wook; Park, Yoo Seok; Park, Incheol

    2010-05-01

    Methanol is generally known to cause visual impairment and various systemic manifestations. There are a few reported specific findings for methanol intoxication on magnetic resonance imaging (MRI) of the brain. A case is reported of unilateral blindness with third cranial nerve palsy oculus sinister (OS) after the ingestion of methanol. Unilateral damage of the retina and optic nerve was confirmed by fundoscopy, fluorescein angiography, visual evoked potential, and electroretinogram. The optic nerve and extraocular muscles (superior rectus, medial rectus, inferior rectus, and inferior oblique muscle) were enhanced by gadolinium-DTPA on MRI of the orbit. This is the first case report of permanent monocular blindness with confirmed unilateral damage of the retina and optic nerve, combined with third cranial nerve palsy, after methanol ingestion.

  15. The effect of monocular and binocular viewing on the accommodation response to real targets in emmetropia and myopia.

    PubMed

    Seidel, Dirk; Gray, Lyle S; Heron, Gordon

    2005-04-01

    Decreased blur-sensitivity found in myopia has been linked with reduced accommodation responses and myopigenesis. Although the mechanism for myopia progression remains unclear, it is commonly known that myopic patients rarely report near visual symptoms and are generally very sensitive to small changes in their distance prescription. This experiment investigated the effect of monocular and binocular viewing on static and dynamic accommodation in emmetropes and myopes for real targets to monitor whether inaccuracies in the myopic accommodation response are maintained when a full set of visual cues, including size and disparity, is available. Monocular and binocular steady-state accommodation responses were measured with a Canon R1 autorefractor for target vergences ranging from 0-5 D in emmetropes (EMM), late-onset myopes (LOM), and early-onset myopes (EOM). Dynamic closed-loop accommodation responses for a stationary target at 0.25 m and step stimuli of two different magnitudes were recorded for both monocular and binocular viewing. All refractive groups showed similar accommodation stimulus response curves consistent with previously published data. Viewing a stationary near target monocularly, LOMs demonstrated slightly larger accommodation microfluctuations compared with EMMs and EOMs; however, this difference was absent under binocular viewing conditions. Dynamic accommodation step responses revealed significantly (p < 0.05) longer response times for the myopic subject groups for a number of step stimuli. No significant difference in either reaction time or the number of correct responses for a given number of step-vergence changes was found between the myopic groups and EMMs. When viewing real targets with size and disparity cues available, no significant differences in the accuracy of static and dynamic accommodation responses were found among EMM, EOM, and LOM. The results suggest that corrected myopes do not experience dioptric blur levels that are substantially different from emmetropes when they view free space targets.

  16. Disambiguation of Necker cube rotation by monocular and binocular depth cues: Relative effectiveness for establishing long-term bias

    PubMed Central

    Backus, Benjamin T.; Jain, Anshul

    2011-01-01

    The apparent direction of rotation of perceptually bistable wire-frame (Necker) cubes can be conditioned to depend on retinal location by interleaving their presentation with cubes that are disambiguated by depth cues (Haijiang, Saunders, Stone & Backus, 2006; Harrison & Backus, 2010a). The long-term nature of the learned bias is demonstrated by resistance to counter-conditioning on a consecutive day. In previous work, either binocular disparity and occlusion, or a combination of monocular depth cues that included occlusion, internal occlusion, haze, and depth-from-shading, were used to control the rotation direction of disambiguated cubes. Here, we test the relative effectiveness of these two sets of depth cues in establishing the retinal location bias. Both cue sets were highly effective in establishing a perceptual bias on Day 1 as measured by the perceived rotation direction of ambiguous cubes. The effect of counter-conditioning on Day 2, on perceptual outcome for ambiguous cubes, was independent of whether the cue set was the same or different as Day 1. This invariance suggests that a common neural population instantiates the bias for rotation direction, regardless of the cue-set used. However, in a further experiment where only disambiguated cubes were presented on Day 1, perceptual outcome of ambiguous cubes during Day 2 counter-conditioning showed that the monocular-only cue set was in fact more effective than disparity-plus-occlusion for causing long-term learning of the bias. These results can be reconciled if the conditioning effect of Day 1 ambiguous trials in the first experiment is taken into account (Harrison & Backus, 2010b). We suggest that monocular disambiguation leads to stronger bias either because it more strongly activates a single neural population that is necessary for perceiving rotation, or because ambiguous stimuli engage cortical areas that are also engaged by monocularly disambiguated stimuli but not by disparity-disambiguated stimuli. PMID:21335023

  17. Disambiguation of Necker cube rotation by monocular and binocular depth cues: relative effectiveness for establishing long-term bias.

    PubMed

    Harrison, Sarah J; Backus, Benjamin T; Jain, Anshul

    2011-05-11

    The apparent direction of rotation of perceptually bistable wire-frame (Necker) cubes can be conditioned to depend on retinal location by interleaving their presentation with cubes that are disambiguated by depth cues (Haijiang, Saunders, Stone, & Backus, 2006; Harrison & Backus, 2010a). The long-term nature of the learned bias is demonstrated by resistance to counter-conditioning on a consecutive day. In previous work, either binocular disparity and occlusion, or a combination of monocular depth cues that included occlusion, internal occlusion, haze, and depth-from-shading, were used to control the rotation direction of disambiguated cubes. Here, we test the relative effectiveness of these two sets of depth cues in establishing the retinal location bias. Both cue sets were highly effective in establishing a perceptual bias on Day 1 as measured by the perceived rotation direction of ambiguous cubes. The effect of counter-conditioning on Day 2, on perceptual outcome for ambiguous cubes, was independent of whether the cue set was the same or different as Day 1. This invariance suggests that a common neural population instantiates the bias for rotation direction, regardless of the cue set used. However, in a further experiment where only disambiguated cubes were presented on Day 1, perceptual outcome of ambiguous cubes during Day 2 counter-conditioning showed that the monocular-only cue set was in fact more effective than disparity-plus-occlusion for causing long-term learning of the bias. These results can be reconciled if the conditioning effect of Day 1 ambiguous trials in the first experiment is taken into account (Harrison & Backus, 2010b). We suggest that monocular disambiguation leads to stronger bias either because it more strongly activates a single neural population that is necessary for perceiving rotation, or because ambiguous stimuli engage cortical areas that are also engaged by monocularly disambiguated stimuli but not by disparity-disambiguated stimuli. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. Effects of monocular viewing and eye dominance on spatial attention.

    PubMed

    Roth, Heidi L; Lora, Andrea N; Heilman, Kenneth M

    2002-09-01

    Observations in primates and patients with unilateral spatial neglect have suggested that patching of the eye ipsilateral to the injury and contralateral to the neglected space can sometimes improve attention to the neglected space. Investigators have generally attributed the effects of monocular eye patching to activation of subcortical centers that interact with cortical attentional systems. Eye patching is thought to produce preferential activation of attentional systems contralateral to the viewing eye. In this study we examined the effect of monocular eye patching on attentional biases in normal subjects. When normal subjects bisect vertical (radial) lines using both eyes, they demonstrate a far attentional bias, misbisecting lines away from their body. In a monocular viewing experiment, we found that the majority of subjects, who were right eye dominant, had relatively nearer bisections and a diminished far bias when they used their right eye (left eye covered) compared with when they used their left eye (right eye covered). The smaller group of subjects who were left eye dominant had relatively nearer bisections and a diminished far bias when they used their left eye compared with when they used their right eye. In the hemispatial placement experiment, we directly manipulated hemispheric engagement by having subjects perform the same task in right and left hemispace. We found that right eye dominant subjects had a diminished far bias in right hemispace relative to left hemispace. Left eye dominant subjects showed the opposite pattern and had a diminished far bias in left hemispace. For both groups, spatial presentation affected performance more for the non-dominant eye. The results suggest that monocular viewing is associated with preferential activation of attentional systems in the contralateral hemisphere, and that the right hemisphere (at least in right eye dominant subjects) is biased towards far space. Finally, the results suggest that the poorly understood phenomenon of eye dominance may be related to hemispheric specialization for visual attention.

  19. Layered data association using graph-theoretic formulation with applications to tennis ball tracking in monocular sequences.

    PubMed

    Yan, Fei; Christmas, William; Kittler, Josef

    2008-10-01

    In this paper, we propose a multilayered data association scheme with graph-theoretic formulation for tracking multiple objects that undergo switching dynamics in clutter. The proposed scheme takes as input object candidates detected in each frame. At the object candidate level, "tracklets" are "grown" from sets of candidates that have high probabilities of containing only true positives. At the tracklet level, a directed and weighted graph is constructed, where each node is a tracklet, and the edge weight between two nodes is defined according to the "compatibility" of the two tracklets. The association problem is then formulated as an all-pairs shortest path (APSP) problem in this graph. Finally, at the path level, by analyzing the APSPs, all object trajectories are identified, and track initiation and track termination are automatically dealt with. By exploiting a special topological property of the graph, we have also developed a more efficient APSP algorithm than the general-purpose ones. The proposed data association scheme is applied to tennis sequences to track tennis balls. Experiments show that it works well on sequences where other data association methods perform poorly or fail completely.
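
    The tracklet-graph idea lends itself to a compact illustration. The sketch below (Python, not the authors' implementation) builds a directed graph whose nodes are tracklets ordered in time, scores links with a hypothetical compatibility cost, and computes all-pairs shortest paths with one forward sweep per source, which is valid because links only go forward in time (the graph is a DAG); the Tracklet fields and the cost terms are assumptions, not taken from the paper.

      # Illustrative sketch of tracklet-level data association (assumed data model).
      from dataclasses import dataclass
      from math import hypot, inf
      from typing import List

      @dataclass
      class Tracklet:
          t_start: int      # first frame index
          t_end: int        # last frame index
          xy_start: tuple   # (x, y) position at t_start
          xy_end: tuple     # (x, y) position at t_end

      def edge_cost(a: Tracklet, b: Tracklet, max_gap=30, max_speed=50.0):
          """Cost of linking tracklet a -> b; inf when the link is implausible."""
          gap = b.t_start - a.t_end
          if gap <= 0 or gap > max_gap:
              return inf
          dist = hypot(b.xy_start[0] - a.xy_end[0], b.xy_start[1] - a.xy_end[1])
          if dist > max_speed * gap:
              return inf
          return dist + gap          # smaller = more compatible

      def all_pairs_shortest_paths(tracklets: List[Tracklet]):
          """Edges only point forward in time, so relaxing nodes in temporal
          order gives shortest paths in a single sweep per source node."""
          order = sorted(range(len(tracklets)), key=lambda i: tracklets[i].t_start)
          n = len(tracklets)
          dist = [[inf] * n for _ in range(n)]
          pred = [[None] * n for _ in range(n)]
          for s in range(n):
              dist[s][s] = 0.0
              for i in order:
                  if dist[s][i] == inf:
                      continue
                  for j in order:
                      c = edge_cost(tracklets[i], tracklets[j])
                      if dist[s][i] + c < dist[s][j]:
                          dist[s][j] = dist[s][i] + c
                          pred[s][j] = i      # predecessor of j on the path from s
          return dist, pred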

  20. The Effect of a Monocular Helmet-Mounted Display on Aircrew Health: A Cohort Study of Apache AH Mk 1 Pilots Four-Year Review

    DTIC Science & Technology

    2009-12-01

    Abbreviations (partial): forward-looking infrared; FOV, field-of-view; HDU, helmet display unit; HMD, helmet-mounted display; IHADSS, Integrated Helmet and Display Sighting System. The study examines whether the monocular Integrated Helmet and Display Sighting System (IHADSS) helmet-mounted display (HMD) in the British Army's Apache AH Mk 1 attack helicopter has any effect on aircrew health. Keywords: Integrated Helmet and Display Sighting System, IHADSS, helmet-mounted display, HMD, Apache helicopter, visual performance.

  1. Horizontal optokinetic reflex in the opossum Didelphis marsupialis aurita.

    PubMed

    Nasi, J P; Bernardes, R F; Volchan, E; Rocha-Miranda, C E; Tecles, M

    1989-01-01

    Electro-oculographic recordings were performed in 10 opossums. The optokinetic reflex was elicited by projecting a random dot stimulus on a cylindrical screen moving horizontally from left to right or right to left at various constant speeds. Binocular stimulation yielded the same response as the temporal to nasal monocular condition. The nasal to temporal monocular response was always less than that to the opposite direction: 50% at 3 degrees/s and 15% at 18 degrees/s. These results are discussed in a comparative context.

  2. Multispectral embedding-based deep neural network for three-dimensional human pose recovery

    NASA Astrophysics Data System (ADS)

    Yu, Jialin; Sun, Jifeng

    2018-01-01

    Monocular image-based three-dimensional (3-D) human pose recovery aims to retrieve 3-D poses from the corresponding two-dimensional image features. The pose recovery performance therefore depends strongly on the image representations. We propose a multispectral embedding-based deep neural network (MSEDNN) to automatically obtain the most discriminative features from multiple deep convolutional neural networks and then embed their penultimate fully connected layers into a low-dimensional manifold. This compact manifold can exploit not only the optimum output from multiple deep networks but also their complementary properties. Furthermore, the distribution of each hierarchical discriminative manifold is sufficiently smooth that the training process of our MSEDNN can be carried out effectively with only a few labeled data. Our proposed network contains a body joint detector and a human pose regressor that are jointly trained. Extensive experiments conducted on four databases show that our proposed MSEDNN achieves the best recovery performance compared with state-of-the-art methods.
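
    As a purely conceptual sketch of the feature-fusion step described above (not the MSEDNN itself, which learns its embedding jointly with the pose regressor), the snippet below concatenates penultimate-layer features from several hypothetical backbones and projects them onto a low-dimensional subspace; PCA stands in for the learned manifold, and all shapes are made up.

      # Conceptual sketch only; array shapes and the PCA stand-in are assumptions.
      import numpy as np

      def fuse_and_embed(feature_sets, dim=64):
          """feature_sets: list of (n_samples, d_k) arrays, one per backbone CNN.
          Returns an (n_samples, dim) embedding of the concatenated features."""
          X = np.concatenate(feature_sets, axis=1)       # (n, d_1 + ... + d_K)
          X = X - X.mean(axis=0, keepdims=True)          # center the features
          U, S, Vt = np.linalg.svd(X, full_matrices=False)
          return X @ Vt[:dim].T                          # (n, dim)

      # Example with random stand-in features from three hypothetical backbones.
      rng = np.random.default_rng(0)
      feats = [rng.standard_normal((100, d)) for d in (512, 1024, 2048)]
      embedding = fuse_and_embed(feats, dim=64)
      print(embedding.shape)                             # (100, 64)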

  3. Color Constancy in Two-Dimensional and Three-Dimensional Scenes: Effects of Viewing Methods and Surface Texture.

    PubMed

    Morimoto, Takuma; Mizokami, Yoko; Yaguchi, Hirohisa; Buck, Steven L

    2017-01-01

    There has been debate about how and why color constancy may be better in three-dimensional (3-D) scenes than in two-dimensional (2-D) scenes. Although some studies have shown better color constancy for 3-D conditions, the role of specific cues remains unclear. In this study, we compared color constancy for a 3-D miniature room (a real scene consisting of actual objects) and 2-D still images of that room presented on a monitor using three viewing methods: binocular viewing, monocular viewing, and head movement. We found that color constancy was better for the 3-D room; however, color constancy for the 2-D image improved when the viewing method caused the scene to be perceived more like a 3-D scene. Separate measurements of the perceptual 3-D effect of each viewing method also supported these results. An additional experiment comparing a miniature room and its image with and without texture suggested that surface texture of scene objects contributes to color constancy.

  4. A comparative interregional analysis of selected data from LANDSAT-1 and EREP for the inventory and monitoring of natural ecosystems

    NASA Technical Reports Server (NTRS)

    Poulton, C. E.

    1975-01-01

    Comparative statistics were presented on the capability of LANDSAT-1 and three of the Skylab remote sensing systems (S-190A, S-190B, S-192) for the recognition and inventory of analogous natural vegetations and landscape features important in resource allocation and management. Two analogous regions presenting vegetational zonation from salt desert to alpine conditions above the timberline were observed, emphasizing the visual interpretation mode in the investigation. An hierarchical legend system was used as the basic classification of all land surface features. Comparative tests were run on image identifiability with the different sensor systems, and mapping and interpretation tests were made both in monocular and stereo interpretation with all systems except the S-192. Significant advantage was found in the use of stereo from space when image analysis is by visual or visual-machine-aided interactive systems. Some cost factors in mapping from space are identified. The various image types are compared and an operational system is postulated.

  5. Comparative analysis of ROS-based monocular SLAM methods for indoor navigation

    NASA Astrophysics Data System (ADS)

    Buyval, Alexander; Afanasyev, Ilya; Magid, Evgeni

    2017-03-01

    This paper presents a comparison of four of the most recent ROS-based monocular SLAM-related methods: ORB-SLAM, REMODE, LSD-SLAM, and DPPTAM, and analyzes their feasibility for a mobile robot application in an indoor environment. We tested these methods using video data recorded from a conventional wide-angle full HD webcam with a rolling shutter. The camera was mounted on a human-operated prototype of an unmanned ground vehicle, which followed a closed-loop trajectory. Both feature-based methods (ORB-SLAM, REMODE) and direct SLAM-related algorithms (LSD-SLAM, DPPTAM) demonstrated reasonably good results in detecting volumetric objects, corners, obstacles, and other local features. However, we encountered difficulties in reconstructing the homogeneously colored walls typical of offices, since all of these methods left empty spaces in the reconstructed sparse 3D scene. This may cause collisions of an autonomously guided robot with featureless walls and thus limits the applicability of maps obtained by the considered monocular SLAM-related methods for indoor robot navigation.

  6. Visual cues and perceived reachability.

    PubMed

    Gabbard, Carl; Ammar, Diala

    2005-12-01

    A rather consistent finding in studies of perceived (imagined) compared to actual movement in a reaching paradigm is the tendency to overestimate at midline. Explanations of such behavior have focused primarily on perceptions of postural constraints and the notion that individuals calibrate reachability in reference to multiple degrees of freedom, also known as the whole-body explanation. The present study examined the role of visual information in the form of binocular and monocular cues in perceived reachability. Right-handed participants judged the reachability of visual targets at midline with both eyes open, dominant eye occluded, and the non-dominant eye covered. Results indicated that participants were relatively accurate with condition responses not being significantly different in regard to total error. Analysis of the direction of error (mean bias) revealed effective accuracy across conditions with only a marginal distinction between monocular and binocular conditions. Therefore, within the task conditions of this experiment, it appears that binocular and monocular cues provide sufficient visual information for effective judgments of perceived reach at midline.

  7. Stereoscopic advantages for vection induced by radial, circular, and spiral optic flows.

    PubMed

    Palmisano, Stephen; Summersby, Stephanie; Davies, Rodney G; Kim, Juno

    2016-11-01

    Although observer motions project different patterns of optic flow to our left and right eyes, there has been surprisingly little research into potential stereoscopic contributions to self-motion perception. This study investigated whether visually induced illusory self-motion (i.e., vection) is influenced by the addition of consistent stereoscopic information to radial, circular, and spiral (i.e., combined radial + circular) patterns of optic flow. Stereoscopic vection advantages were found for radial and spiral (but not circular) flows when monocular motion signals were strong. Under these conditions, stereoscopic benefits were greater for spiral flow than for radial flow. These effects can be explained by differences in the motion aftereffects generated by these displays, which suggest that the circular motion component in spiral flow selectively reduced adaptation to stereoscopic motion-in-depth. Stereoscopic vection advantages were not observed for circular flow when monocular motion signals were strong, but emerged when monocular motion signals were weakened. These findings show that stereoscopic information can contribute to visual self-motion perception in multiple ways.

  8. The energy spectrum of ultra-high-energy cosmic rays measured by the Telescope Array FADC fluorescence detectors in monocular mode

    NASA Astrophysics Data System (ADS)

    Abu-Zayyad, T.; Aida, R.; Allen, M.; Anderson, R.; Azuma, R.; Barcikowski, E.; Belz, J. W.; Bergman, D. R.; Blake, S. A.; Cady, R.; Cheon, B. G.; Chiba, J.; Chikawa, M.; Cho, E. J.; Cho, W. R.; Fujii, H.; Fujii, T.; Fukuda, T.; Fukushima, M.; Hanlon, W.; Hayashi, K.; Hayashi, Y.; Hayashida, N.; Hibino, K.; Hiyama, K.; Honda, K.; Iguchi, T.; Ikeda, D.; Ikuta, K.; Inoue, N.; Ishii, T.; Ishimori, R.; Ito, H.; Ivanov, D.; Iwamoto, S.; Jui, C. C. H.; Kadota, K.; Kakimoto, F.; Kalashev, O.; Kanbe, T.; Kasahara, K.; Kawai, H.; Kawakami, S.; Kawana, S.; Kido, E.; Kim, H. B.; Kim, H. K.; Kim, J. H.; Kim, J. H.; Kitamoto, K.; Kitamura, S.; Kitamura, Y.; Kobayashi, K.; Kobayashi, Y.; Kondo, Y.; Kuramoto, K.; Kuzmin, V.; Kwon, Y. J.; Lan, J.; Lim, S. I.; Lundquist, J. P.; Machida, S.; Martens, K.; Matsuda, T.; Matsuura, T.; Matsuyama, T.; Matthews, J. N.; Myers, I.; Minamino, M.; Miyata, K.; Murano, Y.; Nagataki, S.; Nakamura, T.; Nam, S. W.; Nonaka, T.; Ogio, S.; Ogura, J.; Ohnishi, M.; Ohoka, H.; Oki, K.; Oku, D.; Okuda, T.; Ono, M.; Oshima, A.; Ozawa, S.; Park, I. H.; Pshirkov, M. S.; Rodriguez, D. C.; Roh, S. Y.; Rubtsov, G.; Ryu, D.; Sagawa, H.; Sakurai, N.; Sampson, A. L.; Scott, L. M.; Shah, P. D.; Shibata, F.; Shibata, T.; Shimodaira, H.; Shin, B. K.; Shin, J. I.; Shirahama, T.; Smith, J. D.; Sokolsky, P.; Sonley, T. J.; Springer, R. W.; Stokes, B. T.; Stratton, S. R.; Stroman, T. A.; Suzuki, S.; Takahashi, Y.; Takeda, M.; Taketa, A.; Takita, M.; Tameda, Y.; Tanaka, H.; Tanaka, K.; Tanaka, M.; Thomas, S. B.; Thomson, G. B.; Tinyakov, P.; Tkachev, I.; Tokuno, H.; Tomida, T.; Troitsky, S.; Tsunesada, Y.; Tsutsumi, K.; Tsuyuguchi, Y.; Uchihori, Y.; Udo, S.; Ukai, H.; Vasiloff, G.; Wada, Y.; Wong, T.; Yamakawa, Y.; Yamane, R.; Yamaoka, H.; Yamazaki, K.; Yang, J.; Yoneda, Y.; Yoshida, S.; Yoshii, H.; Zollinger, R.; Zundel, Z.

    2013-08-01

    We present a measurement of the energy spectrum of ultra-high-energy cosmic rays performed by the Telescope Array experiment using monocular observations from its two new FADC-based fluorescence detectors. After a short description of the experiment, we describe the data analysis and event reconstruction procedures. Since the aperture of the experiment must be calculated by Monte Carlo simulation, we describe this calculation and the comparisons of simulated and real data used to verify the validity of the aperture calculation. Finally, we present the energy spectrum calculated from the merged monocular data sets of the two FADC-based detectors, and also the combination of this merged spectrum with an independent, previously published monocular spectrum measurement performed by Telescope Array's third fluorescence detector [T. Abu-Zayyad et al., The energy spectrum of Telescope Array's middle drum detector and the direct comparison to the high resolution fly's eye experiment, Astroparticle Physics 39 (2012) 109-119, http://dx.doi.org/10.1016/j.astropartphys.2012.05.012]. This combined spectrum corroborates the recently published Telescope Array surface detector spectrum [T. Abu-Zayyad et al., The cosmic-ray energy spectrum observed with the surface detector of the Telescope Array experiment, ApJ 768 (2013) L1, http://dx.doi.org/10.1088/2041-8205/768/1/L1] with independent systematic uncertainties.

  9. Landing performance by low-time private pilots after the sudden loss of binocular vision - Cyclops II

    NASA Technical Reports Server (NTRS)

    Lewis, C. E., Jr.; Swaroop, R.; Mcmurty, T. C.; Blakeley, W. R.; Masters, R. L.

    1973-01-01

    Study of low-time general aviation pilots who, in a series of spot landings, were suddenly deprived of binocular vision by patching either eye on the downwind leg of a standard, closed traffic pattern. Data collected during these landings were compared with control data from landings flown with normal vision during the same flight. The sequence of patching and the mix of control and monocular landings were randomized to minimize the effect of learning. No decrease in performance was observed during landings with vision restricted to one eye; in fact, performance improved. This observation is reported at a high level of confidence (p less than 0.001). These findings confirm the previous work of Lewis and Krier and have important implications with regard to aeromedical certification standards.

  10. Efficient receptive field tiling in primate V1

    PubMed Central

    Nauhaus, Ian; Nielsen, Kristina J.; Callaway, Edward M.

    2017-01-01

    The primary visual cortex (V1) encodes a diverse set of visual features, including orientation, ocular dominance (OD) and spatial frequency (SF), whose joint organization must be precisely structured to optimize coverage within the retinotopic map. Prior experiments have only identified efficient coverage based on orthogonal maps. Here, we used two-photon calcium imaging to reveal an alternative arrangement for OD and SF maps in macaque V1; their gradients run parallel but with unique spatial periods, whereby low SF regions coincide with monocular regions. Next, we mapped receptive fields and find surprisingly precise micro-retinotopy that yields a smaller point-image and requires more efficient inter-map geometry, thus underscoring the significance of map relationships. While smooth retinotopy is constraining, studies suggest that it improves both wiring economy and the V1 population code read downstream. Altogether, these data indicate that connectivity within V1 is finely tuned and precise at the level of individual neurons. PMID:27499086

  11. Retinal projection type super multi-view head-mounted display

    NASA Astrophysics Data System (ADS)

    Takahashi, Hideya; Ito, Yutaka; Nakata, Seigo; Yamada, Kenji

    2014-02-01

    We propose a retinal projection type super multi-view head-mounted display (HMD). The smooth motion parallax provided by the super multi-view technique enables precise superposition of virtual 3D images on the real scene. Moreover, if the viewer focuses his or her eyes on the displayed 3D image, the stimulus for the accommodation of the human eye is produced naturally. Therefore, although the proposed HMD is monocular, it provides observers with natural 3D images. The proposed HMD consists of an image projection optical system and a holographic optical element (HOE). The HOE is used as a combiner and also works as a condenser lens to implement the Maxwellian view. Several parallax images are projected onto the HOE, converged on the pupil, and then projected onto the retina. In order to verify the effectiveness of the proposed HMD, we constructed a prototype. In the prototype, both the number of parallax images and the number of convergent points on the pupil are three, and the distance between adjacent convergent points is 2 mm. We displayed virtual images at distances from 20 cm to 200 cm in front of the pupil and confirmed the accommodation response. This paper describes the principle of the proposed HMD and the experimental results.

  12. Binocular summation and peripheral visual response time

    NASA Technical Reports Server (NTRS)

    Gilliland, K.; Haines, R. F.

    1975-01-01

    Six males were administered a peripheral visual response time test to the onset of brief small stimuli imaged in 10-deg arc separation intervals across the dark adapted horizontal retinal meridian under both binocular and monocular viewing conditions. This was done in an attempt to verify the existence of peripheral binocular summation using a response time measure. The results indicated that from 50-deg arc right to 50-deg arc left of the line of sight binocular summation is a reasonable explanation for the significantly faster binocular data. The stimulus position by viewing eye interaction was also significant. A discussion of these and other analyses is presented along with a review of related literature.

  13. Ground moving target geo-location from monocular camera mounted on a micro air vehicle

    NASA Astrophysics Data System (ADS)

    Guo, Li; Ang, Haisong; Zheng, Xiangming

    2011-08-01

    The usual approaches to unmanned air vehicle (UAV)-to-ground target geo-location impose severe constraints on the system, such as stationary objects, an accurate geo-referenced terrain database, or a ground-plane assumption. Micro air vehicles (MAVs) operate at low altitude with limited payload and low-accuracy onboard sensors. Accordingly, a method is developed to determine the location of a ground moving target imaged from the air by a monocular camera mounted on an MAV. The method eliminates the need for a terrain database (elevation maps) and for altimeters that provide the MAV's and target's altitude; it requires only the MAV flight state provided by the inherent onboard navigation system, which comprises an inertial measurement unit (IMU) and a global positioning system (GPS) receiver. The key is to obtain accurate information on the altitude of the ground moving target. First, an optical-flow method extracts static background feature points. Within a local region around the target in the current image, features lying on the same plane as the target are extracted and retained as aiding features. An inverse-velocity method then calculates the locations of these points by integrating them with the aircraft state. The target's altitude, computed from the positions of these aiding features, is combined with the aircraft state and the target's image coordinates to geo-locate the target. A Bayesian estimation framework is employed to suppress noise from the camera, IMU, and GPS: first, an extended Kalman filter (EKF) provides a simultaneous localization and mapping solution that estimates the aircraft state and the locations of the aiding features defining the moving target's local environment; second, an unscented transformation (UT) determines the estimated mean and covariance of the target location from the aircraft state and aiding-feature locations and passes them to a Kalman filter (KF) for the moving target. Experimental results show that the method can geo-locate a moving target instantaneously after a single operator click and can reach an accuracy of 15 meters for an MAV flying at 200 meters above the ground.
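
    The final geo-location step described above reduces, under a locally flat terrain assumption, to intersecting the viewing ray through the target pixel with a horizontal plane at the target's estimated altitude. The sketch below illustrates only that geometric step and is not the authors' code; the intrinsics K, the world-to-camera rotation R, the camera position C, and the target altitude are assumed inputs (in the paper the altitude is estimated from nearby co-planar static features rather than a terrain database).

      # Minimal geometric sketch under stated assumptions (not the paper's code).
      import numpy as np

      def pixel_to_world(u, v, K, R, C, z_target):
          """Intersect the viewing ray through pixel (u, v) with the horizontal
          plane z = z_target; R maps world -> camera, C is the camera position."""
          ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray in camera frame
          ray_world = R.T @ ray_cam                            # rotate into world frame
          if abs(ray_world[2]) < 1e-9:
              raise ValueError("ray is parallel to the ground plane")
          s = (z_target - C[2]) / ray_world[2]                 # scale along the ray
          return C + s * ray_world

      # Example with a hypothetical downward-looking camera 200 m above flat ground.
      K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
      R = np.array([[1.0, 0, 0], [0, -1.0, 0], [0, 0, -1.0]])  # looking straight down
      C = np.array([0.0, 0.0, 200.0])
      print(pixel_to_world(400, 300, K, R, C, z_target=0.0))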

  14. A fast bilinear structure from motion algorithm using a video sequence and inertial sensors.

    PubMed

    Ramachandran, Mahesh; Veeraraghavan, Ashok; Chellappa, Rama

    2011-01-01

    In this paper, we study the benefits of the availability of a specific form of additional information, namely the vertical direction (gravity) and the height of the camera, both of which can be conveniently measured using inertial sensors, together with a monocular video sequence for 3D urban modeling. We show that in the presence of this information, the SfM equations can be rewritten in a bilinear form. This allows us to derive a fast, robust, and scalable SfM algorithm for large-scale applications. The SfM algorithm developed in this paper is experimentally demonstrated to have favorable properties compared to the sparse bundle adjustment algorithm. We provide experimental evidence indicating that the proposed algorithm converges in many cases to solutions with lower error than state-of-the-art implementations of bundle adjustment. We also demonstrate that for large reconstruction problems, the proposed algorithm takes less time to reach its solution than bundle adjustment. We also present SfM results using our algorithm on the Google StreetView research data set.
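
    One ingredient that makes such a reformulation possible is that a measured gravity direction lets every camera be levelled in advance, leaving only a yaw angle as the rotational unknown per view. The snippet below is a hedged illustration of just that levelling step (Rodrigues' formula aligning the measured gravity vector with the world down-axis); the function name and usage are ours, not the paper's.

      # Hedged illustration of levelling a camera from a measured gravity vector.
      import numpy as np

      def levelling_rotation(gravity_cam):
          """Smallest rotation R such that R @ g_hat = (0, 0, -1)."""
          g = np.asarray(gravity_cam, dtype=float)
          g = g / np.linalg.norm(g)
          down = np.array([0.0, 0.0, -1.0])
          v = np.cross(g, down)                  # rotation axis (unnormalized)
          c = float(np.dot(g, down))             # cosine of the rotation angle
          if np.isclose(c, -1.0):                # 180-degree special case
              return np.diag([1.0, -1.0, -1.0])
          vx = np.array([[0, -v[2], v[1]],
                         [v[2], 0, -v[0]],
                         [-v[1], v[0], 0]])
          return np.eye(3) + vx + vx @ vx / (1.0 + c)   # Rodrigues' formula

      R = levelling_rotation([0.1, -0.05, -0.99])
      g_hat = np.array([0.1, -0.05, -0.99]) / np.linalg.norm([0.1, -0.05, -0.99])
      print(R @ g_hat)                           # approximately (0, 0, -1)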

  15. Early Binocular Input Is Critical for Development of Audiovisual but Not Visuotactile Simultaneity Perception.

    PubMed

    Chen, Yi-Chuan; Lewis, Terri L; Shore, David I; Maurer, Daphne

    2017-02-20

    Temporal simultaneity provides an essential cue for integrating multisensory signals into a unified perception. Early visual deprivation, in both animals and humans, leads to abnormal neural responses to audiovisual signals in subcortical and cortical areas [1-5]. Behavioral deficits in integrating complex audiovisual stimuli in humans are also observed [6, 7]. It remains unclear whether early visual deprivation affects visuotactile perception similarly to audiovisual perception and whether the consequences for either pairing differ after monocular versus binocular deprivation [8-11]. Here, we evaluated the impact of early visual deprivation on the perception of simultaneity for audiovisual and visuotactile stimuli in humans. We tested patients born with dense cataracts in one or both eyes that blocked all patterned visual input until the cataractous lenses were removed and the affected eyes fitted with compensatory contact lenses (mean duration of deprivation = 4.4 months; range = 0.3-28.8 months). Both monocularly and binocularly deprived patients demonstrated lower precision in judging audiovisual simultaneity. However, qualitatively different outcomes were observed for the two patient groups: the performance of monocularly deprived patients matched that of young children at immature stages, whereas that of binocularly deprived patients did not match any stage in typical development. Surprisingly, patients performed normally in judging visuotactile simultaneity after either monocular or binocular deprivation. Therefore, early binocular input is necessary to develop normal neural substrates for simultaneity perception of visual and auditory events but not visual and tactile events. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Perception of scene-relative object movement: Optic flow parsing and the contribution of monocular depth cues.

    PubMed

    Warren, Paul A; Rushton, Simon K

    2009-05-01

    We have recently suggested that the brain uses its sensitivity to optic flow in order to parse retinal motion into components arising due to self and object movement (e.g. Rushton, S. K., & Warren, P. A. (2005). Moving observers, 3D relative motion and the detection of object movement. Current Biology, 15, R542-R543). Here, we explore whether stereo disparity is necessary for flow parsing or whether other sources of depth information, which could theoretically constrain flow-field interpretation, are sufficient. Stationary observers viewed large field of view stimuli containing textured cubes, moving in a manner that was consistent with a complex observer movement through a stationary scene. Observers made speeded responses to report the perceived direction of movement of a probe object presented at different depths in the scene. Across conditions we varied the presence or absence of different binocular and monocular cues to depth order. In line with previous studies, results consistent with flow parsing (in terms of both perceived direction and response time) were found in the condition in which motion parallax and stereoscopic disparity were present. Observers were poorer at judging object movement when depth order was specified by parallax alone. However, as more monocular depth cues were added to the stimulus the results approached those found when the scene contained stereoscopic cues. We conclude that both monocular and binocular static depth information contribute to flow parsing. These findings are discussed in the context of potential architectures for a model of the flow parsing mechanism.

  17. Rapid regulation of brain-derived neurotrophic factor mRNA within eye-specific circuits during ocular dominance column formation.

    PubMed

    Lein, E S; Shatz, C J

    2000-02-15

    The neurotrophin brain-derived neurotrophic factor (BDNF) has emerged as a candidate retrograde signaling molecule for geniculocortical axons during the formation of ocular dominance columns. Here we examined whether neuronal activity can regulate BDNF mRNA in eye-specific circuits in the developing cat visual system. Dark-rearing throughout the critical period for ocular dominance column formation decreases levels of BDNF mRNA within primary visual cortex, whereas short-term (2 d) binocular blockade of retinal activity with tetrodotoxin (TTX) downregulates BDNF mRNA within the lateral geniculate nucleus (LGN) and visual cortical areas. Brief (6 hr to 2 d) monocular TTX blockade during the critical period and also in adulthood causes downregulation in appropriate eye-specific laminae in the LGN and ocular dominance columns within primary visual cortex. Monocular TTX blockade at postnatal day 23 also downregulates BDNF mRNA in a periodic fashion, consistent with recent observations that ocular dominance columns can be detected at these early ages by physiological methods. In contrast, 10 d monocular TTX during the critical period does not cause a lasting decrease in BDNF mRNA expression in columns pertaining to the treated eye, consistent with the nearly complete shift in physiological response properties of cortical neurons in favor of the unmanipulated eye known to result from long-term monocular deprivation. These observations demonstrate that BDNF mRNA levels can provide an accurate "molecular readout" of the activity levels of cortical neurons and are consistent with a highly local action of BDNF in strengthening and maintaining active synapses during ocular dominance column formation.

  18. A complete investigation of monocular and binocular functions in clinically treated amblyopia.

    PubMed

    Zhao, Wuxiao; Jia, Wu-Li; Chen, Ge; Luo, Yan; Lin, Borong; He, Qing; Lu, Zhong-Lin; Li, Min; Huang, Chang-Bing

    2017-09-06

    The gold standard of a successful amblyopia treatment is full recovery of visual acuity (VA) in the amblyopic eye, but there has been no systematic study of both monocular and binocular visual functions. In this research, we aimed to quantify visual quality with a variety of perceptual tasks in subjects with treated amblyopia. We found that near stereoacuity and dominance of the previously amblyopic eye (pAE) in binocular rivalry in "treated" amblyopia were largely comparable to those of normal subjects. The contrast sensitivity function (CSF) of the pAE remained deficient at high spatial frequencies. The binocular contrast summation ratio was significantly lower than the normal standard. The interocular balance point was 34%, indicating that contrast in the pAE is much less effective than the same contrast in the previously fellow eye (pFE) in binocular phase combination. Although VA, stereoacuity, and binocular rivalry at low spatial frequency in treated amblyopes were normal or nearly normal, the pAE remained "lazy" in the high spatial frequency domain, in binocular contrast summation, and in interocular phase combination. Our results suggest that structured monocular and binocular training are necessary to fully recover the deficient functions in amblyopia.

  19. Incorporating a Wheeled Vehicle Model in a New Monocular Visual Odometry Algorithm for Dynamic Outdoor Environments

    PubMed Central

    Jiang, Yanhua; Xiong, Guangming; Chen, Huiyan; Lee, Dah-Jye

    2014-01-01

    This paper presents a monocular visual odometry algorithm that incorporates a wheeled vehicle model for ground vehicles. The main innovation of this algorithm is to use the single-track bicycle model to interpret the relationship between the yaw rate and the side slip angle, which are the two most important parameters describing the motion of a wheeled vehicle. The pitch angle is also considered, since the planar-motion hypothesis often fails due to the dynamic characteristics of wheel suspensions and tires in real-world environments. Linearization is used to calculate a closed-form solution of the motion parameters that works as a hypothesis generator in a RAndom SAmple Consensus (RANSAC) scheme, reducing the complexity of solving equations involving trigonometric functions. All inliers found are used to refine the winning solution by minimizing the reprojection error. Finally, the algorithm is applied to real-time on-board visual localization applications. Its performance is evaluated against state-of-the-art monocular visual odometry methods using both synthetic data and publicly available datasets covering several kilometers in dynamic outdoor environments. PMID:25256109
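
    To make the hypothesize-and-verify structure concrete, the sketch below shows a generic planar-motion RANSAC loop in which a two-point 2D rigid solver on ground-plane points stands in for the paper's bicycle-model, reprojection-based minimal solver; the thresholds, iteration counts, and the stand-in solver itself are illustrative assumptions only.

      # Generic planar-motion RANSAC sketch (stand-in for the paper's solver).
      import numpy as np

      def rigid_from_two_points(p, q):
          """2D rotation angle, matrix, and translation mapping points p -> q."""
          dp, dq = p[1] - p[0], q[1] - q[0]
          theta = np.arctan2(dq[1], dq[0]) - np.arctan2(dp[1], dp[0])
          c, s = np.cos(theta), np.sin(theta)
          R = np.array([[c, -s], [s, c]])
          t = q[0] - R @ p[0]
          return theta, R, t

      def ransac_planar_motion(P, Q, iters=200, thresh=0.2,
                               rng=np.random.default_rng(0)):
          """P, Q: (n, 2) matched ground-plane points in previous/current frame."""
          best = (None, None, -1)
          n = len(P)
          for _ in range(iters):
              i, j = rng.choice(n, size=2, replace=False)
              theta, R, t = rigid_from_two_points(P[[i, j]], Q[[i, j]])
              residuals = np.linalg.norm(Q - (P @ R.T + t), axis=1)
              inliers = int((residuals < thresh).sum())
              if inliers > best[2]:
                  best = (theta, t, inliers)
          theta, t, _ = best
          # theta is the yaw change; the side-slip angle would follow from
          # comparing the translation direction with the vehicle heading.
          return theta, t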

  20. The Sun in STEREO

    NASA Technical Reports Server (NTRS)

    2006-01-01

    Parallax gives depth to life. Simultaneous viewing from slightly different vantage points makes binocular humans superior to monocular cyclopes, and fixes us in the third dimension of the Universe. We've been stunned by 3-d images of Venus and Mars (along with more familiar views of Earth). Now astronomers plan to give us the best view of all: 3-d images of the dynamic Sun. That's one of the prime goals of NASA's Solar Terrestrial Relations Observatories, also known as STEREO. STEREO is a pair of spacecraft observatories, one placed in orbit ahead of Earth and one to be placed in an Earth-trailing orbit. Simultaneous observations of the Sun with the two STEREO spacecraft will provide extraordinary 3-d views of all types of solar activity, especially the dramatic events called coronal mass ejections, which send high-energy particles from the outer solar atmosphere hurtling towards Earth. The image above is the first image of the Sun from the two STEREO spacecraft, an extreme-ultraviolet shot of the Sun's million-degree corona, taken by the Extreme Ultraviolet Imager of the Sun Earth Connection Coronal and Heliospheric Investigation (SECCHI) instrument package. STEREO's first 3-d solar images should be available in April if all goes well. Put on your red and blue glasses!

  1. Estimating 3D positions and velocities of projectiles from monocular views.

    PubMed

    Ribnick, Evan; Atev, Stefan; Papanikolopoulos, Nikolaos P

    2009-05-01

    In this paper, we consider the problem of localizing a projectile in 3D based on its apparent motion in a stationary monocular view. A thorough theoretical analysis is developed, from which we establish the minimum conditions for the existence of a unique solution. The theoretical results obtained have important implications for applications involving projectile motion. A robust, nonlinear optimization-based formulation is proposed, and the use of a local optimization method is justified by detailed examination of the local convexity structure of the cost function. The potential of this approach is validated by experimental results.
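
    The uniqueness result described above rests on the fact that known gravity fixes both the trajectory shape and the metric scale, so the six unknowns (initial position and velocity) can be recovered by minimizing reprojection error. The snippet below is an illustrative nonlinear least-squares formulation of that idea, not the paper's method; the intrinsics, the gravity direction, and the synthetic observations are assumptions.

      # Illustrative least-squares recovery of a projectile's 3D state (assumed setup).
      import numpy as np
      from scipy.optimize import least_squares

      K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
      g = np.array([0.0, -9.81, 0.0])                 # gravity in camera coordinates

      def project(X):
          x = K @ X.T                                  # (3, n) homogeneous pixels
          return (x[:2] / x[2]).T                      # (n, 2) pixel coordinates

      def residuals(params, t, uv):
          x0, v0 = params[:3], params[3:]
          X = x0 + np.outer(t, v0) + 0.5 * np.outer(t**2, g)   # ballistic positions
          return (project(X) - uv).ravel()

      # Synthetic example: generate observations from a known trajectory, then refit.
      t = np.linspace(0.0, 1.0, 15)
      true = np.array([-2.0, 1.0, 20.0, 4.0, 6.0, 5.0])        # x0 (m), v0 (m/s)
      uv = project(true[:3] + np.outer(t, true[3:]) + 0.5 * np.outer(t**2, g))
      fit = least_squares(residuals, x0=np.array([0, 0, 15, 1, 1, 1.0]), args=(t, uv))
      print(np.round(fit.x, 2))                                # should be close to `true`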

  2. Damping of monocular pendular nystagmus with vibration in a patient with multiple sclerosis.

    PubMed

    Beh, Shin C; Tehrani, Ali Saber; Kheradmand, Amir; Zee, David S

    2014-04-15

    Acquired pendular nystagmus (PN) occurs commonly in multiple sclerosis (MS) and results in a highly disabling oscillopsia that impairs vision. It usually consists of pseudo-sinusoidal oscillations at a single frequency (3-5 Hz) that often briefly stop for a few hundred milliseconds after saccades and blinks. The oscillations are thought to arise from instability in the gaze-holding networks ("neural integrator") in the brainstem and cerebellum.(1,2) Here we describe a patient with monocular PN in whom vibration on the skull from a handheld muscle massager strikingly diminished or stopped her nystagmus.

  3. Binocular interactions in random chromatic changes at isoluminance

    NASA Astrophysics Data System (ADS)

    Medina, José M.

    2006-02-01

    To examine the type of chromatic interactions at isoluminance in the phenomenon of binocular vision, I have determined simple visual reaction times (VRT) under three observational conditions (monocular left, monocular right, and binocular) for different chromatic stimuli along random color axes at isoluminance (simultaneous L-, M-, and S-cone variations). Upper and lower boundaries of probability summation as well as the binocular capacity coefficient were estimated with observed distributions of reaction times. The results were not consistent with the notion of independent chromatic channels between eyes, suggesting the existence of excitatory and inhibitory binocular interactions at suprathreshold isoluminance conditions.

  4. Monocular measurement of the spectrum of UHE cosmic rays by the FADC detector of the HiRes experiment

    NASA Astrophysics Data System (ADS)

    Abbasi, R. U.; Abu-Zayyad, T.; Amman, J. F.; Archbold, G. C.; Bellido, J. A.; Belov, K.; Belz, J. W.; Bergman, D. R.; Cao, Z.; Clay, R. W.; Cooper, M. D.; Dai, H.; Dawson, B. R.; Everett, A. A.; Girard, J. H. V.; Gray, R. C.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hüntemeyer, P.; Jones, B. F.; Jui, C. C. H.; Kieda, D. B.; Kim, K.; Kirn, M. A.; Loh, E. C.; Manago, N.; Marek, L. J.; Martens, K.; Martin, G.; Manago, N.; Matthews, J. A. J.; Matthews, J. N.; Meyer, J. R.; Moore, S. A.; Morrison, P.; Moosman, A. N.; Mumford, J. R.; Munro, M. W.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M.; Sarracino, J. S.; Schnetzer, S.; Shen, P.; Simpson, K. M.; Sinnis, G.; Smith, J. D.; Sokolsky, P.; Song, C.; Springer, R. W.; Stokes, B. T.; Thomas, S. B.; Thompson, T. N.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; VanderVeen, T. D.; Zech, A.; Zhang, X.

    2005-03-01

    We have measured the spectrum of UHE cosmic rays using the Flash ADC (FADC) detector (called HiRes-II) of the High Resolution Fly's Eye experiment running in monocular mode. We describe in detail the data analysis, development of the Monte Carlo simulation program, and results. We also describe the results of the HiRes-I detector. We present our measured spectra and compare them with a model incorporating galactic and extragalactic cosmic rays. Our combined spectra provide strong evidence for the existence of the spectral feature known as the "ankle."

  5. Infantile Nystagmus and Abnormalities of Conjugate Eye Movements in Down Syndrome.

    PubMed

    Weiss, Avery H; Kelly, John P; Phillips, James O

    2016-03-01

    Subjects with Down syndrome (DS) have an anatomical defect within the cerebellum that may impact downstream oculomotor areas. This study characterized gaze holding and gains for smooth pursuit, saccades, and optokinetic nystagmus (OKN) in DS children with infantile nystagmus (IN). Clinical data of 18 DS children with IN were reviewed retrospectively. Subjects with constant strabismus were excluded to remove any contribution of latent nystagmus. Gaze-holding, horizontal and vertical saccades to target steps, horizontal smooth pursuit of drifting targets, OKN in response to vertically or horizontally-oriented square wave gratings drifted at 15°/s, 30°/s, and 45°/s were recorded using binocular video-oculography. Seven subjects had additional optical coherence tomography imaging. Infantile nystagmus was associated with one or more gaze-holding instabilities (GHI) in each subject. The majority of subjects had a combination of conjugate horizontal jerk with constant or exponential slow-phase velocity, asymmetric or symmetric, and either monocular or binocular pendular nystagmus. Six of seven subjects had mild (Grade 0-1) persistence of retinal layers overlying the fovea, similar to that reported in DS children without nystagmus. All subjects had abnormal gains across one or more stimulus conditions (horizontal smooth pursuit, saccades, or OKN). Saccade velocities followed the main sequence. Down syndrome subjects with IN show a wide range of GHI and abnormalities of conjugate eye movements. We propose that these ocular motor abnormalities result from functional abnormalities of the cerebellum and/or downstream oculomotor circuits, perhaps due to extensive miswiring.

  6. Localization Using Visual Odometry and a Single Downward-Pointing Camera

    NASA Technical Reports Server (NTRS)

    Swank, Aaron J.

    2012-01-01

    Stereo imaging is a technique commonly employed for vision-based navigation. For such applications, two images are acquired from different vantage points and then compared using transformations to extract depth information. The technique is commonly used in robotics for obstacle avoidance or for Simultaneous Localization and Mapping (SLAM). Yet the process requires a number of image processing steps and therefore tends to be CPU-intensive, which limits the real-time data rate and its use in power-limited applications. Evaluated here is a technique in which a monocular camera is used for vision-based odometry. In this work, an optical flow technique with feature recognition is performed to generate odometry measurements. The visual odometry measurements are intended to be used as control inputs or measurements in a sensor fusion algorithm with low-cost MEMS-based inertial sensors to provide improved localization information. Presented here are visual odometry results that demonstrate the challenges associated with using ground-pointing cameras for visual odometry. The focus is on rover-based robotic applications for localization within GPS-denied environments.
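
    A minimal version of the ground-pointing visual odometry idea can be sketched as follows: track features between consecutive frames, take a robust (median) pixel displacement, and convert it to metres using the known camera height and focal length under a flat-ground assumption. The snippet below uses OpenCV for the feature tracking; the function name, parameters, and thresholds are illustrative and not taken from the report.

      # Minimal downward-camera odometry sketch under a flat-ground assumption.
      import cv2
      import numpy as np

      def planar_displacement(prev_gray, curr_gray, height_m, focal_px):
          """Estimate the camera's in-plane displacement (metres) between frames."""
          pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                        qualityLevel=0.01, minDistance=8)
          if pts is None:
              return np.zeros(2)
          nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
          good = status.ravel() == 1
          flow = (nxt[good] - pts[good]).reshape(-1, 2)   # pixel displacements
          if len(flow) == 0:
              return np.zeros(2)
          median_flow = np.median(flow, axis=0)           # robust to outliers
          metres_per_pixel = height_m / focal_px          # ground sampling distance
          return -median_flow * metres_per_pixel          # camera moves opposite to flow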

  7. Color Constancy in Two-Dimensional and Three-Dimensional Scenes: Effects of Viewing Methods and Surface Texture

    PubMed Central

    Morimoto, Takuma; Mizokami, Yoko; Yaguchi, Hirohisa; Buck, Steven L.

    2017-01-01

    There has been debate about how and why color constancy may be better in three-dimensional (3-D) scenes than in two-dimensional (2-D) scenes. Although some studies have shown better color constancy for 3-D conditions, the role of specific cues remains unclear. In this study, we compared color constancy for a 3-D miniature room (a real scene consisting of actual objects) and 2-D still images of that room presented on a monitor using three viewing methods: binocular viewing, monocular viewing, and head movement. We found that color constancy was better for the 3-D room; however, color constancy for the 2-D image improved when the viewing method caused the scene to be perceived more like a 3-D scene. Separate measurements of the perceptual 3-D effect of each viewing method also supported these results. An additional experiment comparing a miniature room and its image with and without texture suggested that surface texture of scene objects contributes to color constancy. PMID:29238513

  8. Baseline risk factors for incidence of blindness in a South Indian population: the Chennai Eye Disease Incidence Study.

    PubMed

    Vijaya, Lingam; Asokan, Rashima; Panday, Manish; Choudhari, Nikhil S; Ramesh, Sathyamangalam Ve; Velumuri, Lokapavani; Boddupalli, Sachi Devi; Sunil, Govindan T; George, Ronnie

    2014-08-07

    To report the baseline risk factors and causes for incident blindness. Six years after the baseline study, 4419 subjects from the cohort underwent a detailed examination at the base hospital. Incident blindness was defined by World Health Organization criteria as visual acuity of less than 6/120 (3/60) and/or a visual field of less than 10° in the better-seeing eye at the 6-year follow-up, provided that the eye had a visual acuity of better than or equal to 6/120 (3/60) and visual field greater than 10° at baseline. For incident monocular blindness, both eyes had to have visual acuity of better than 6/120 (3/60) at baseline, with visual acuity of less than 6/120 (3/60) developing in one eye at the 6-year follow-up. For incident blindness, 21 participants (0.48%, 95% confidence interval [CI], 0.3-0.7) became blind; significant baseline risk factors were increasing age (P = 0.001), smokeless tobacco use (P < 0.001), and no history of cataract surgery (P = 0.02). Incident monocular blindness was found in 132 participants (3.8%, 95% CI, 3.7-3.8); it was significantly more common (P < 0.001) in the rural population (5.4%, 95% CI, 5.4-5.5) than in the urban population (1.9%, 95% CI, 1.8-1.9). Baseline risk factors (P < 0.001) were increasing age and rural residence, and no history of cataract surgery was a protective factor (P = 0.03). Increasing age was a significant risk factor for blindness and monocular blindness. No history of cataract surgery was a risk factor for blindness and a protective factor for monocular blindness. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.

  9. Unstable Binocular Fixation Affects Reaction Times But Not Implicit Motor Learning in Dyslexia.

    PubMed

    Przekoracka-Krawczyk, Anna; Brenk-Krakowska, Alicja; Nawrot, Pawel; Rusiak, Patrycja; Naskrecki, Ryszard

    2017-12-01

    Individuals with developmental dyslexia suffer not only from reading problems as more general motor deficits can also be observed in this patient group. Both psychometric clinical tests and objective eyetracking methods suggest that unstable binocular fixation may contribute to reading problems. Because binocular instability may cause poor eye-hand coordination and impair motor control, the primary aim of this study was to explore in dyslexic subjects the influence of unstable binocular fixation on reaction times (RTs) and implicit motor learning (IML), which is one of the fundamental cerebellar functions. Fixation disparity (FD) and instability of FD were assessed subjectively using the Wesson card and a modified Mallett test. A modified version of the Serial Reaction Time Task (SRTT) was used to measure the RTs and IML skills. The results for the dyslexic group (DG), which included 29 adult subjects (15 were tested binocularly, DGbin; 14 were tested monocularly, DGmono), were compared with data from the control group (CG), which consisted of 30 age-matched nondyslexic subjects (15 tested binocularly, CGbin; and the other 15 tested monocularly, CGmono). The results indicated that the DG showed poorer binocular stability and longer RTs in the groups tested binocularly (RTs: 534 vs. 411 ms for DGbin and CGbin, respectively; P < 0.001) as compared with the groups examined monocularly (RTs: 431 vs. 424 ms for DGmono and CGmono, respectively; P = 0.996). The DG also exhibited impaired IML when compared with the CG (EFIML: 25 vs. 50 ms for DG and CG, respectively; P = 0.012). Unstable binocularity in dyslexia may affect RTs but was not related to poor IML skills. Impaired IML in dyslexia was independent of the viewing conditions (monocular versus binocular) and may be related to cerebellar deficits.

  10. Long-Term Visual Training Increases Visual Acuity and Long-Term Monocular Deprivation Promotes Ocular Dominance Plasticity in Adult Standard Cage-Raised Mice.

    PubMed

    Hosang, Leon; Yusifov, Rashad; Löwel, Siegrid

    2018-01-01

    For routine behavioral tasks, mice predominantly rely on olfactory cues and tactile information. In contrast, their visual capabilities appear rather restricted, raising the question whether they can improve if vision gets more behaviorally relevant. We therefore performed long-term training using the visual water task (VWT): adult standard cage (SC)-raised mice were trained to swim toward a rewarded grating stimulus so that using visual information avoided excessive swimming toward nonrewarded stimuli. Indeed, and in contrast to old mice raised in a generally enriched environment (Greifzu et al., 2016), long-term VWT training increased visual acuity (VA) on average by more than 30% to 0.82 cycles per degree (cyc/deg). In an individual animal, VA even increased to 1.49 cyc/deg, i.e., beyond the rat range of VAs. Since visual experience enhances the spatial frequency threshold of the optomotor (OPT) reflex of the open eye after monocular deprivation (MD), we also quantified monocular vision after VWT training. Monocular VA did not increase reliably, and eye reopening did not initiate a decline to pre-MD values as observed by optomotry; VA values rather increased by continued VWT training. Thus, optomotry and VWT measure different parameters of mouse spatial vision. Finally, we tested whether long-term MD induced ocular dominance (OD) plasticity in the visual cortex of adult [postnatal day (P)162-P182] SC-raised mice. This was indeed the case: 40-50 days of MD induced OD shifts toward the open eye in both VWT-trained and, surprisingly, also in age-matched mice without VWT training. These data indicate that (1) long-term VWT training increases adult mouse VA, and (2) long-term MD induces OD shifts also in adult SC-raised mice.

  11. Reading strategies in mild to moderate strabismic amblyopia: an eye movement investigation.

    PubMed

    Kanonidou, Evgenia; Proudlock, Frank A; Gottlob, Irene

    2010-07-01

    PURPOSE. To investigate oculomotor strategies in strabismic amblyopia and evaluate abnormalities during monocular and binocular reading. METHODS. Eye movements were recorded with a head-mounted infrared video eye-tracker (250 Hz, <0.01 degrees resolution) in 20 strabismic amblyopes (mean age, 44.9 +/- 10.7 years) and 20 normal control subjects (mean age, 42.8 +/- 10.9 years) while they silently read paragraphs of text. Monocular reading comparisons were made between the amblyopic eye and the nondominant eye of control subjects and the nonamblyopic eye and the dominant eye of the control subjects. Binocular reading between the amblyopic and control subjects was also compared. RESULTS. Mean reading speed, number of progressive and regressive saccades per line, saccadic amplitude (of progressive saccades), and fixation duration were estimated. Inter- and intrasubject statistical comparisons were made. Reading speed was significantly slower in amblyopes than in control subjects during monocular reading with amblyopic (13.094 characters/s vs. 22.188 characters/s; P < 0.0001) and nonamblyopic eyes (16.241 characters/s vs. 22.349 characters/s, P < 0.0001), and binocularly (15.698 characters/s vs. 23.425 characters/s, P < 0.0001). In amblyopes, reading was significantly slower with the amblyopic eye than with the nonamblyopic eye in binocular viewing (P < 0.05). These differences were associated with significantly more regressive saccades and longer fixation durations, but not with changes in saccadic amplitudes. CONCLUSIONS. In strabismic amblyopia, reading is impaired, not only during monocular viewing with the amblyopic eye, but also with the nonamblyopic eye and binocularly, even though normal visual acuity pertains to the latter two conditions. The impaired reading performance is associated with differences in both the saccadic and fixational patterns, most likely as adaptation strategies to abnormal sensory experiences such as crowding and suppression.

  12. Quantitative evaluation of three advanced laparoscopic viewing technologies: a stereo endoscope, an image projection display, and a TFT display.

    PubMed

    Wentink, M; Jakimowicz, J J; Vos, L M; Meijer, D W; Wieringa, P A

    2002-08-01

    Compared to open surgery, minimally invasive surgery (MIS) relies heavily on advanced technology, such as endoscopic viewing systems and innovative instruments. The aim of the study was to objectively compare three technologically advanced laparoscopic viewing systems with the standard viewing system currently used in most Dutch hospitals. We evaluated the following advanced laparoscopic viewing systems: a Thin Film Transistor (TFT) display, a stereo endoscope, and an image projection display. The standard viewing system consisted of a monocular endoscope and a high-resolution monitor. Task completion time served as the measure of performance. Eight surgeons with laparoscopic experience participated in the experiment. The average task time was significantly greater (p < 0.05) with the stereo viewing system than with the standard viewing system. The average task times with the TFT display and the image projection display did not differ significantly from the standard viewing system. Although the stereo viewing system promises improved depth perception and the TFT and image projection displays are supposed to improve hand-eye coordination, none of these systems provided better task performance than the standard viewing system in this pelvi-trainer experiment.

  13. Multifocal visual evoked responses to dichoptic stimulation using virtual reality goggles: Multifocal VER to dichoptic stimulation.

    PubMed

    Arvind, Hemamalini; Klistorner, Alexander; Graham, Stuart L; Grigg, John R

    2006-05-01

    Multifocal visual evoked potentials (mfVEPs) have demonstrated good diagnostic capabilities in glaucoma and optic neuritis. This study aimed at evaluating the possibility of simultaneously recording mfVEP for both eyes with dichoptic stimulation using virtual reality goggles and also to determine the stimulus characteristics that yield maximum amplitude. Ten healthy volunteers were recruited and temporally sparse pattern pulse stimuli were presented dichoptically using virtual reality goggles. Experiment 1 involved recording responses to dichoptically presented checkerboard stimuli and also confirming true topographic representation by switching off specific segments. Experiment 2 involved monocular stimulation and comparison of amplitude with Experiment 1. In Experiment 3, orthogonally oriented gratings were dichoptically presented. Experiment 4 involved dichoptic presentation of checkerboard stimuli at different levels of sparseness (5.0 times/s, 2.5 times/s, 1.66 times/s and 1.25 times/s), where stimulation of corresponding segments of the two eyes was separated by 16.7, 66.7, 116.7, and 166.7 ms, respectively. Experiment 1 demonstrated good traces in all regions and confirmed topographic representation. However, there was suppression of amplitude of responses to dichoptic stimulation by 17.9+/-5.4% compared to monocular stimulation. Experiment 3 demonstrated similar suppression between orthogonal and checkerboard stimuli (p = 0.08). Experiment 4 demonstrated maximum amplitude and least suppression (4.8%) with stimulation at 1.25 times/s with 166.7 ms separation between eyes. It is possible to record mfVEP for both eyes during dichoptic stimulation using virtual reality goggles, which present binocular simultaneous patterns driven by independent sequences. Interocular suppression can be almost eliminated by using a temporally sparse stimulus of 1.25 times/s with a separation of 166.7 ms between stimulation of corresponding segments of the two eyes.

  14. A robust vision-based sensor fusion approach for real-time pose estimation.

    PubMed

    Assa, Akbar; Janabi-Sharifi, Farrokh

    2014-02-01

    Object pose estimation is of great importance to many applications, such as augmented reality, localization and mapping, motion capture, and visual servoing. Although many approaches based on a monocular camera have been proposed, only a few works have concentrated on applying multicamera sensor fusion techniques to pose estimation. Higher accuracy and enhanced robustness toward sensor defects or failures are some of the advantages of these schemes. This paper presents a new Kalman-based sensor fusion approach for pose estimation that offers higher accuracy and precision, and is robust to camera motion and image occlusion, compared to its predecessors. Extensive experiments are conducted to validate the superiority of this fusion method over currently employed vision-based pose estimation algorithms.
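
    The abstract does not give the filter equations; the toy update below shows the generic Kalman measurement step that multicamera fusion schemes of this kind build on, with each camera's pose estimate folded in sequentially. The state layout, noise values, and measurements are placeholder assumptions for illustration, not the paper's filter.

    ```python
    # Toy sequential Kalman fusion of full-pose measurements from two cameras.
    # Placeholder numbers; a real system would add a prediction step driven by a
    # motion model and treat rotations more carefully than as plain Euler angles.
    import numpy as np

    def kalman_update(x, P, z, R):
        """One measurement update with a direct observation model (H = I)."""
        H = np.eye(len(x))
        S = H @ P @ H.T + R                    # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x_new = x + K @ (z - H @ x)
        P_new = (np.eye(len(x)) - K @ H) @ P
        return x_new, P_new

    x = np.zeros(6)          # [x, y, z, roll, pitch, yaw] of the target object
    P = np.eye(6) * 1.0      # broad initial uncertainty

    measurements = [
        (np.array([0.51, 0.02, 1.01, 0.00, 0.00, 0.05]), 0.05),   # camera 1, low noise
        (np.array([0.48, -0.01, 0.98, 0.00, 0.00, 0.04]), 0.08),  # camera 2, higher noise
    ]
    for z_cam, sigma in measurements:
        x, P = kalman_update(x, P, z_cam, np.eye(6) * sigma**2)
    ```

    A noisier or partially occluded camera simply enters with a larger measurement covariance R, which is one way a fusion scheme of this shape stays robust when an individual view degrades.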

  15. Adaptive Monocular Visual-Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices.

    PubMed

    Piao, Jin-Chun; Kim, Shin-Dug

    2017-11-07

    Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual-inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual-inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.
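
    The adaptive execution module is described only functionally in the abstract; the sketch below shows one plausible selection policy of that general kind. The criteria and thresholds are assumptions for illustration, not the authors' policy.

    ```python
    # Illustrative policy for choosing between full visual-inertial odometry
    # (accurate but expensive) and an optical-flow-based fast visual odometry
    # (cheap but drift-prone). Thresholds are made-up values for illustration.

    def choose_tracker(num_tracked_features, gyro_rate_dps, time_since_keyframe_s,
                       min_features=80, max_rate_dps=60.0, max_gap_s=0.5):
        """Return 'vio' to run the full pipeline, or 'flow' for the fast fallback."""
        if num_tracked_features < min_features:   # weak tracking: re-anchor with VIO
            return "vio"
        if gyro_rate_dps > max_rate_dps:          # fast rotation: flow-only drifts
            return "vio"
        if time_since_keyframe_s > max_gap_s:     # bound drift with periodic keyframes
            return "vio"
        return "flow"
    ```

    In a policy of this general shape, the tracking-time savings reported above come from spending most frames in the cheap branch and falling back to the full visual-inertial pipeline only when tracking quality or motion demands it.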

  16. Cortical mechanisms for afterimage formation: evidence from interocular grouping

    PubMed Central

    Dong, Bo; Holm, Linus; Bao, Min

    2017-01-01

    Whether the retinal process alone or retinal and cortical processes jointly determine afterimage (AI) formation has long been debated. Based on the retinal rebound responses, recent work proposes that afterimage signals are exclusively generated in the retina, although later modified by cortical mechanisms. We tested this notion with the method of “indirect proof”. Each eye was presented with a 2-by-2 checkerboard of horizontal and vertical grating patches. Corresponding patches of the two checkerboards were perpendicular to each other, which produces binocular rivalry and can generate percepts ranging from complete interocular grouping to either monocular pattern. The monocular percepts became more frequent with higher contrast. Due to adaptation, the visual system is less sensitive during the AIs than during the inductions with AI-similar contrast. If the retina is the only origin of AIs, comparable contrast appearance would require stronger retinal signals in the AIs than in the inductions, thus leading to more frequent monocular percepts in the AIs than in the inductions. Surprisingly, subjects saw the fully coherent stripes significantly more often in AIs. Our results thus contradict the retinal generation notion, and suggest that in addition to the retina, cortex is directly involved in the generation of AI signals. PMID:28112230

  17. Compact and wide-field-of-view head-mounted display

    NASA Astrophysics Data System (ADS)

    Uchiyama, Shoichi; Kamakura, Hiroshi; Karasawa, Joji; Sakaguchi, Masafumi; Furihata, Takeshi; Itoh, Yoshitaka

    1997-05-01

    A compact and wide-field-of-view HMD having 1.32-in full color VGA poly-Si TFT LCDs and simple eyepieces much like LEEP optics has been developed. The total field of view is 80 deg with a 40 deg overlap in its central area. Each optical unit, which includes an LCD and eyepiece, is 46 mm in diameter and 42 mm in length. The total number of pixels is equivalent to (864 × 3) × 480. This HMD realizes its wide field of view and compact size by having a narrower binocular area (overlap area) than that of commercialized HMDs. For this reason, it is expected that the frequency of monocular vision will be greater than that of commercialized HMDs and of human natural vision. Therefore, we investigated the convergent state of the eyes while observing the monocular areas of this HMD by employing an EOG and considered the suitability of this HMD to human vision. As a result, it was found that the convergent state during monocular vision was nearly equal to that during binocular vision. That is, this HMD has the potential to be well suited to human vision in terms of convergence.
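
    As a check on the partial-overlap geometry described above, and assuming the two eyes' fields are symmetric, the per-eye field F follows directly from the total field T and the binocular overlap O:

    ```latex
    % Per-eye field of view for a partially overlapped binocular HMD
    2F - O = T \quad\Rightarrow\quad F = \frac{T + O}{2} = \frac{80^\circ + 40^\circ}{2} = 60^\circ
    ```

    Each eye therefore sees a 60 deg field, of which the central 40 deg is shared and the outer 20 deg on each side is purely monocular.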

  18. Night myopia is reduced in binocular vision.

    PubMed

    Chirre, Emmanuel; Prieto, Pedro M; Schwarz, Christina; Artal, Pablo

    2016-06-01

    Night myopia, which is a shift in refraction with light level, has been widely studied but still lacks a complete understanding. We used a new infrared open-view binocular Hartmann-Shack wave front sensor to quantify night myopia under monocular and natural binocular viewing conditions. Both eyes' accommodative response, aberrations, pupil diameter, and convergence were simultaneously measured at light levels ranging from photopic to scotopic conditions to total darkness. For monocular vision, reducing the stimulus luminance resulted in a progression of the accommodative state that tends toward the subject's dark focus or tonic accommodation and a change in convergence following the induced accommodative error. Most subjects presented a myopic shift of accommodation that was mitigated in binocular vision. The impact of spherical aberration on the focus shift was relatively small. Our results in monocular conditions support the hypothesis that night myopia has an accommodative origin as the eye progressively changes its accommodation state with decreasing luminance toward its resting state in total darkness. On the other hand, binocularity restrains night myopia, possibly by using fusional convergence as an additional accommodative cue, thus reducing the potential impact of night myopia on vision at low light levels.

  19. Does stereo-endoscopy improve neurosurgical targeting in 3rd ventriculostomy?

    NASA Astrophysics Data System (ADS)

    Abhari, Kamyar; de Ribaupierre, Sandrine; Peters, Terry; Eagleson, Roy

    2011-03-01

    Endoscopic third ventriculostomy is a minimally invasive surgical technique to treat hydrocephalus, a condition where patients suffer from excessive amounts of cerebrospinal fluid (CSF) in the ventricular system of their brain. This technique involves using a monocular endoscope to locate the third ventricle, where a hole can be made to drain excessive fluid. Since a monocular endoscope provides only a 2D view, it is difficult to make this perforation due to the lack of binocular cues and depth perception. In a previous study, we had investigated the use of a stereo-endoscope to allow neurosurgeons to locate and avoid hazardous areas on the surface of the third ventricle. In this paper, we extend our previous study by developing a new methodology to evaluate the targeting performance in piercing the hole in the membrane. We consider the accuracy of this surgical task and derive an index of performance for a task which does not have a well-defined position or width of target. Our performance metric is sensitive and can distinguish between experts and novices. We make use of this metric to demonstrate an objective learning curve on this task for each subject.

  20. Study of a direct visualization display tool for space applications

    NASA Astrophysics Data System (ADS)

    Pereira do Carmo, J.; Gordo, P. R.; Martins, M.; Rodrigues, F.; Teodoro, P.

    2017-11-01

    The study of a Direct Visualization Display Tool (DVDT) for space applications is reported. The review of novel technologies for a compact display tool is described. Several applications for this tool have been identified with the support of ESA astronauts and are presented. A baseline design is proposed. It consists mainly of OLEDs as the image source; a specially designed optical prism as relay optics; a Personal Digital Assistant (PDA), with a data acquisition card, as the control unit; and voice control and a simplified keyboard as interfaces. Optical analysis and the final estimated performance are reported. The system is able to display information (text, pictures, and/or video) with SVGA resolution directly to the astronaut over a Field of View (FOV) of 20 × 14.5 degrees. The image delivery system is a monocular Head Mounted Display (HMD) that weighs less than 100 g. The HMD optical system has an eye pupil of 7 mm and an eye relief distance of 30 mm.

  1. Vision-guided gripping of a cylinder

    NASA Technical Reports Server (NTRS)

    Nicewarner, Keith E.; Kelley, Robert B.

    1991-01-01

    The motivation for vision-guided servoing is taken from tasks in automated or telerobotic space assembly and construction. Vision-guided servoing requires the ability to perform rapid pose estimates and provide predictive feature tracking. Monocular information from a gripper-mounted camera is used to servo the gripper to grasp a cylinder. The procedure is divided into recognition and servo phases. The recognition stage verifies the presence of a cylinder in the camera field of view. Then an initial pose estimate is computed and uncluttered scan regions are selected. The servo phase processes only the selected scan regions of the image. Given the knowledge, from the recognition phase, that there is a cylinder in the image and knowing the radius of the cylinder, 4 of the 6 pose parameters can be estimated with minimal computation. The relative motion of the cylinder is obtained by using the current pose and prior pose estimates. The motion information is then used to generate a predictive feature-based trajectory for the path of the gripper.
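
    The abstract notes that, with the cylinder radius known, several pose parameters can be estimated with little computation. One piece of that, sketched below under a simple pinhole model with the cylinder axis roughly parallel to the image plane and the range much larger than the radius, is recovering range from the silhouette width; the symbols and numbers here are illustrative assumptions, not the paper's estimator.

    ```python
    # Range to a roughly fronto-parallel cylinder of known radius, from the pixel
    # width of its silhouette (pinhole approximation, valid when range >> radius).
    def cylinder_range(f_px, radius_m, silhouette_width_px):
        return f_px * (2.0 * radius_m) / silhouette_width_px

    # e.g. focal length 800 px, radius 5 cm, silhouette 40 px wide -> about 2.0 m
    z_m = cylinder_range(800.0, 0.05, 40.0)
    ```

    Tracking how this range and the silhouette's image position change between frames is one cheap way to obtain the relative-motion estimate that feeds the predictive feature-based trajectory described above.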

  2. Monocular Advantage for Face Perception Implicates Subcortical Mechanisms in Adult Humans

    PubMed Central

    Gabay, Shai; Nestor, Adrian; Dundas, Eva; Behrmann, Marlene

    2014-01-01

    The ability to recognize faces accurately and rapidly is an evolutionarily adaptive process. Most studies examining the neural correlates of face perception in adult humans have focused on a distributed cortical network of face-selective regions. There is, however, robust evidence from phylogenetic and ontogenetic studies that implicates subcortical structures, and recently, some investigations in adult humans indicate subcortical correlates of face perception as well. The questions addressed here are whether low-level subcortical mechanisms for face perception (in the absence of changes in expression) are conserved in human adults, and if so, what is the nature of these subcortical representations. In a series of four experiments, we presented pairs of images to the same or different eyes. Participants’ performance demonstrated that subcortical mechanisms, indexed by monocular portions of the visual system, play a functional role in face perception. These mechanisms are sensitive to face-like configurations and afford a coarse representation of a face, comprised of primarily low spatial frequency information, which suffices for matching faces but not for more complex aspects of face perception such as sex differentiation. Importantly, these subcortical mechanisms are not implicated in the perception of other visual stimuli, such as cars or letter strings. These findings suggest a conservation of phylogenetically and ontogenetically lower-order systems in adult human face perception. The involvement of subcortical structures in face recognition provokes a reconsideration of current theories of face perception, which are reliant on cortical level processing, inasmuch as it bolsters the cross-species continuity of the biological system for face recognition. PMID:24236767

  3. Chromatic and achromatic monocular deprivation produce separable changes of eye dominance in adults.

    PubMed

    Zhou, Jiawei; Reynaud, Alexandre; Kim, Yeon Jin; Mullen, Kathy T; Hess, Robert F

    2017-11-29

    Temporarily depriving one eye of its input, in whole or in part, results in a transient shift in eye dominance in human adults, with the patched eye becoming stronger and the unpatched eye weaker. However, little is known about the role of colour contrast in these behavioural changes. Here, we first show that the changes in eye dominance and contrast sensitivity induced by monocular eye patching affect colour and achromatic contrast sensitivity equally. We next use dichoptic movies, customized and filtered to stimulate the two eyes differentially. We show that a strong imbalance in achromatic contrast between the eyes, with no colour content, also produces similar, unselective shifts in eye dominance for both colour and achromatic contrast sensitivity. Interestingly, if this achromatic imbalance is paired with similar colour contrast in both eyes, the shift in eye dominance is selective, affecting achromatic but not chromatic contrast sensitivity and revealing a dissociation in eye dominance for colour and achromatic image content. On the other hand, a strong imbalance in chromatic contrast between the eyes, with no achromatic content, produces small, unselective changes in eye dominance, but if paired with similar achromatic contrast in both eyes, no changes occur. We conclude that perceptual changes in eye dominance are strongly driven by interocular imbalances in achromatic contrast, with colour contrast having a significant counter balancing effect. In the short term, eyes can have different dominances for achromatic and chromatic contrast, suggesting separate pathways at the site of these neuroplastic changes. © 2017 The Author(s).

  4. Altered spontaneous brain activity pattern in patients with late monocular blindness in middle-age using amplitude of low-frequency fluctuation: a resting-state functional MRI study

    PubMed Central

    Li, Qing; Huang, Xin; Ye, Lei; Wei, Rong; Zhang, Ying; Zhong, Yu-Lin; Jiang, Nan; Shao, Yi

    2016-01-01

    Objective Previous reports have demonstrated significant brain activity changes in bilateral blindness, whereas brain activity changes in late monocular blindness (MB) at rest are not well studied. Our study aimed to investigate spontaneous brain activity in patients with late middle-aged MB using the amplitude of low-frequency fluctuation (ALFF) method and its relationship with clinical features. Methods A total of 32 patients with MB (25 males and 7 females) and 32 healthy control (HC) subjects (25 males and 7 females), similar in age, sex, and education, were recruited for the study. All subjects underwent resting-state functional magnetic resonance imaging scanning. The ALFF method was applied to evaluate spontaneous brain activity. The relationships between the ALFF signal values in different brain regions and clinical features in MB patients were investigated using correlation analysis. Results Compared with HCs, the MB patients had markedly lower ALFF values in the left cerebellum anterior lobe, right parahippocampal gyrus, right cuneus, left precentral gyrus, and left paracentral lobule, but higher ALFF values in the right middle frontal gyrus, left middle frontal gyrus, and left supramarginal gyrus. However, there was no linear correlation between the mean ALFF signal values in brain regions and clinical manifestations in MB patients. Conclusion There were abnormal spontaneous activities in many brain regions, including vision and vision-related regions, which might indicate the neuropathologic mechanisms of vision loss in the MB patients. Meanwhile, these brain activity changes might be used as a useful clinical indicator for MB. PMID:27980398

  5. Altered spontaneous brain activity pattern in patients with late monocular blindness in middle-age using amplitude of low-frequency fluctuation: a resting-state functional MRI study.

    PubMed

    Li, Qing; Huang, Xin; Ye, Lei; Wei, Rong; Zhang, Ying; Zhong, Yu-Lin; Jiang, Nan; Shao, Yi

    2016-01-01

    Previous reports have demonstrated significant brain activity changes in bilateral blindness, whereas brain activity changes in late monocular blindness (MB) at rest are not well studied. Our study aimed to investigate spontaneous brain activity in patients with late middle-aged MB using the amplitude of low-frequency fluctuation (ALFF) method and its relationship with clinical features. A total of 32 patients with MB (25 males and 7 females) and 32 healthy control (HC) subjects (25 males and 7 females), similar in age, sex, and education, were recruited for the study. All subjects underwent resting-state functional magnetic resonance imaging scanning. The ALFF method was applied to evaluate spontaneous brain activity. The relationships between the ALFF signal values in different brain regions and clinical features in MB patients were investigated using correlation analysis. Compared with HCs, the MB patients had markedly lower ALFF values in the left cerebellum anterior lobe, right parahippocampal gyrus, right cuneus, left precentral gyrus, and left paracentral lobule, but higher ALFF values in the right middle frontal gyrus, left middle frontal gyrus, and left supramarginal gyrus. However, there was no linear correlation between the mean ALFF signal values in brain regions and clinical manifestations in MB patients. There were abnormal spontaneous activities in many brain regions, including vision and vision-related regions, which might indicate the neuropathologic mechanisms of vision loss in the MB patients. Meanwhile, these brain activity changes might be used as a useful clinical indicator for MB.

  6. The dependence of binocular contrast sensitivities on binocular single vision in normal and amblyopic human subjects

    PubMed Central

    Hood, A S; Morrison, J D

    2002-01-01

    We have measured monocular and binocular contrast sensitivities in response to medium to high spatial frequencies of vertical sinusoidal grating patterns in normal subjects, anisometropic amblyopes, strabismic amblyopes and non-amblyopic esotropes. On binocular viewing, contrast sensitivities were slightly but significantly increased in normal subjects, markedly increased in anisometropes and esotropes with anomalous binocular single vision (BSV) and significantly reduced in esotropes and exotropes without BSV. Application of a prismatic correction to the strabismic eye in order to achieve bifoveal stimulation resulted in a significant reduction in contrast sensitivity in esotropes with and without anomalous BSV, in exotropes and in non-amblyopic esotropes. Control experiments in normal subjects with monocular viewing showed that degradative effects of the prism occurred only with high prism powers and at high spatial frequencies, thus establishing that the reduced contrast sensitivities were the consequence of bifoveal stimulation rather than optical degradation. Displacement of the image of the grating pattern by 2 deg in normal subjects and anisometropes by a dichoptic method to simulate a small angle esotropia had no effect on the contrast sensitivities recorded through the companion eye. By contrast, esotropes showed similar reductions in contrast sensitivity to those obtained with the prism experiments, confirming a fundamental difference between subjects with normal and abnormal ocular alignments. The results have thus established a suppressive action of the fovea of the amblyopic eye acting on the companion, non-amblyopic eye and indicate that correction of ocular misalignments in adult esotropes may be disadvantageous to binocular visual performance. PMID:11956347

  7. Visual experience sculpts whole-cortex spontaneous infraslow activity patterns through an Arc-dependent mechanism

    PubMed Central

    Kraft, Andrew W.; Mitra, Anish; Bauer, Adam Q.; Raichle, Marcus E.; Culver, Joseph P.; Lee, Jin-Moo

    2017-01-01

    Decades of work in experimental animals has established the importance of visual experience during critical periods for the development of normal sensory-evoked responses in the visual cortex. However, much less is known concerning the impact of early visual experience on the systems-level organization of spontaneous activity. Human resting-state fMRI has revealed that infraslow fluctuations in spontaneous activity are organized into stereotyped spatiotemporal patterns across the entire brain. Furthermore, the organization of spontaneous infraslow activity (ISA) is plastic in that it can be modulated by learning and experience, suggesting heightened sensitivity to change during critical periods. Here we used wide-field optical intrinsic signal imaging in mice to examine whole-cortex spontaneous ISA patterns. Using monocular or binocular visual deprivation, we examined the effects of critical period visual experience on the development of ISA correlation and latency patterns within and across cortical resting-state networks. Visual modification with monocular lid suturing reduced correlation between left and right cortices (homotopic correlation) within the visual network, but had little effect on internetwork correlation. In contrast, visual deprivation with binocular lid suturing resulted in increased visual homotopic correlation and increased anti-correlation between the visual network and several extravisual networks, suggesting cross-modal plasticity. These network-level changes were markedly attenuated in mice with genetic deletion of Arc, a gene known to be critical for activity-dependent synaptic plasticity. Taken together, our results suggest that critical period visual experience induces global changes in spontaneous ISA relationships, both within the visual network and across networks, through an Arc-dependent mechanism. PMID:29087327

  8. Search for point-like sources of cosmic rays with energies above 10^18.5 eV in the HiRes-I monocular data set

    NASA Astrophysics Data System (ADS)

    High-Resolution Fly'S Eye Collaboration; Abbai, R. U.; Abu-Zayyad, T.; Amann, J. F.; Archbold, G.; Belov, K.; Belz, J. W.; Benzvi, S.; Bergman, D. R.; Blake, S. A.; Cao, Z.; Connolly, B. M.; Deng, W.; Fedorova, Y.; Findlay, J.; Finley, C. B.; Gray, R. C.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G. A.; Hüntemeyer, P.; Jones, B. F.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Loh, E. C.; Maestas, M. M.; Manago, N.; Marek, L. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; Moore, S. A.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Rodriguez, D.; Roberts, M. D.; Sasaki, M.; Schnetzer, S. R.; Scott, L. M.; Sinnis, G.; Smith, J. D.; Sokolsky, P.; Song, C.; Springer, R. W.; Stokes, B. T.; Thomas, J. R.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; Zech, A.; Zhang, X.

    2007-07-01

    We report the results of a search for point-like deviations from isotropy in the arrival directions of ultra-high energy cosmic rays in the northern hemisphere. In the monocular data set collected by the High-Resolution Fly’s Eye, consisting of 1525 events with energy exceeding 10^18.5 eV, we find no evidence for point-like excesses. We place a 90% c.l. upper limit of 0.8 hadronic cosmic rays/km^2 yr on the flux from such sources for the northern hemisphere and place tighter limits as a function of position in the sky.
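
    As background on how a limit in these units is typically constructed (the exact HiRes prescription may differ), a 90% confidence-level upper limit s_90 on the number of signal events in a search bin, divided by the direction-dependent exposure E toward that bin, gives the flux limit:

    ```latex
    % Generic point-source flux limit; s_90 is the Poisson 90% CL upper limit on
    % signal counts (e.g. about 2.44 events for zero observed events and no expected
    % background, Feldman-Cousins), and E(delta) is the exposure in km^2 yr toward
    % the source direction.
    \Phi_{90}(\delta) \;=\; \frac{s_{90}}{E(\delta)}
    ```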

  9. Vestibular and Non-vestibular Contributions to Eye Movements that Compensate for Head Rotations during Viewing of Near Targets

    NASA Technical Reports Server (NTRS)

    Han, Yanning H.

    2006-01-01

    We studied horizontal eye movements induced by en-bloc yaw rotation, over a frequency range 0.2 - 2.8 Hz, in 10 normal human subjects as they monocularly viewed a target located at their near point of focus. We measured gain and phase relationships between eye-in-head velocity and head velocity when the near target was either earth-fixed or head-fixed. During viewing of the earth-fixed near target, median gain was 1.49 (range 1.24 - 1.87) at 0.2 Hz for the group of subjects, but declined at higher frequencies, so that at 2.8 Hz median gain was 1.08 (range 0.68 - 1.67). During viewing of the head-fixed near target, median gain was 0.03 (range 0.01 - 0.10) at 0.2 Hz for the group of subjects, but increased at higher frequencies, so that at 2.8 Hz median gain was 0.71 (range 0.28 - 0.94). We estimated the vestibular contribution to these responses (vestibulo-ocular reflex gain, Gvor) by applying transient head perturbations (peak acceleration > 1,000 deg/s^2) during sinusoidal rotation under the two viewing conditions. Median Gvor, estimated < 70 ms after the onset of head perturbation, was 0.98 (range 0.39 - 1.42) while viewing the earth-fixed near target, and 0.97 (range 0.37 - 1.33) while viewing the head-fixed near target. For the group of subjects, 9 out of 10 subjects showed no significant difference of Gvor between the two viewing conditions (p > 0.053) at all test frequencies. Since Gvor accounted for only approximately 73% of the overall response gain during viewing of the earth-fixed target, we investigated the relative contributions of non-vestibular factors. When subjects viewed the earth-fixed target under strobe illumination, to eliminate retinal image slip information, the gain of compensatory eye movements declined compared with viewing in ambient room light. During sum-of-sine head rotations while viewing the earth-fixed target, used to minimize contributions from predictive mechanisms, gain also declined. Nonetheless, simple superposition of smooth-pursuit tracking of sinusoidal target motion could not fully account for the overall response at higher frequencies, suggesting other non-vestibular contributions. During binocular viewing conditions, when vergence angle was significantly greater than in monocular viewing (p < 0.001), the gain of compensatory eye movements did not show a proportional change; indeed, gain could not be correlated with vergence angle during monocular or binocular viewing. We conclude that several separate factors contribute to generating eye rotations during sinusoidal yaw head rotations while viewing a near target; these include the VOR, visual-tracking eye movements that utilize retinal image motion, predictive eye movements and, possibly, other unidentified non-vestibular factors. For these experiments, vergence was not an important determinant of response gain.

  10. Vestibular and Non-vestibular Contributions to Eye Movements that Compensate for Head Rotations during Viewing of Near Targets

    NASA Technical Reports Server (NTRS)

    Han, Yanning H.; Kumar, Arun N.; Reschke, Millard F.; Somers, Jeffrey T.; Dell'Osso, Louis F.; Leigh, R. John

    2004-01-01

    We studied horizontal eye movements induced by en-bloc yaw rotation, over a frequency range 0.2 - 2.8 Hz, in 10 normal human subjects as they monocularly viewed a target located at their near point of focus. We measured gain and phase relationships between eye-in-head velocity and head velocity when the near target was either earth-fixed or head-fixed. During viewing of the earth-fixed near target, median gain was 1.49 (range 1.24 - 1.87) at 0.2 Hz for the group of subjects, but declined at higher frequencies, so that at 2.8 Hz median gain was 1.08 (range 0.68 - 1.67). During viewing of the head-fixed near target, median gain was 0.03 (range 0.01 - 0.10) at 0.2 Hz for the group of subjects, but increased at higher frequencies, so that at 2.8 Hz median gain was 0.71 (range 0.28 - 0.94). We estimated the vestibular contribution to these responses (vestibulo-ocular reflex gain, Gvor) by applying transient head perturbations (peak acceleration > 1,000 deg/s^2) during sinusoidal rotation under the two viewing conditions. Median Gvor, estimated < 70 ms after the onset of head perturbation, was 0.98 (range 0.39 - 1.42) while viewing the earth-fixed near target, and 0.97 (range 0.37 - 1.33) while viewing the head-fixed near target. For the group of subjects, 9 out of 10 subjects showed no significant difference of Gvor between the two viewing conditions (p > 0.053) at all test frequencies. Since Gvor accounted for only approximately 73% of the overall response gain during viewing of the earth-fixed target, we investigated the relative contributions of non-vestibular factors. When subjects viewed the earth-fixed target under strobe illumination, to eliminate retinal image slip information, the gain of compensatory eye movements declined compared with viewing in ambient room light. During sum-of-sine head rotations while viewing the earth-fixed target, used to minimize contributions from predictive mechanisms, gain also declined. Nonetheless, simple superposition of smooth-pursuit tracking of sinusoidal target motion could not fully account for the overall response at higher frequencies, suggesting other non-vestibular contributions. During binocular viewing conditions, when vergence angle was significantly greater than in monocular viewing (p < 0.001), the gain of compensatory eye movements did not show a proportional change; indeed, gain could not be correlated with vergence angle during monocular or binocular viewing. We conclude that several separate factors contribute to generating eye rotations during sinusoidal yaw head rotations while viewing a near target; these include the VOR, visual-tracking eye movements that utilize retinal image motion, predictive eye movements and, possibly, other unidentified non-vestibular factors. For these experiments, vergence was not an important determinant of response gain.

  11. Amblyopia and Binocular Vision

    PubMed Central

    Birch, Eileen E.

    2012-01-01

    Amblyopia is the most common cause of monocular visual loss in children, affecting 1.3% to 3.6% of children. Current treatments are effective in reducing the visual acuity deficit but many amblyopic individuals are left with residual visual acuity deficits, ocular motor abnormalities, deficient fine motor skills, and risk for recurrent amblyopia. Using a combination of psychophysical, electrophysiological, imaging, risk factor analysis, and fine motor skill assessment, the primary role of binocular dysfunction in the genesis of amblyopia and the constellation of visual and motor deficits that accompany the visual acuity deficit has been identified. These findings motivated us to evaluate a new, binocular approach to amblyopia treatment with the goals of reducing or eliminating residual and recurrent amblyopia and of improving the deficient ocular motor function and fine motor skills that accompany amblyopia. PMID:23201436

  12. Threat perception in the chameleon (Chamaeleo chameleon): evidence for lateralized eye use.

    PubMed

    Lustig, Avichai; Keter-Katz, Hadas; Katzir, Gadi

    2012-07-01

    Chameleons are arboreal lizards with highly independent, large-amplitude eye movements. In response to an approaching threat, a chameleon on a vertical pole moves so as to keep itself away from the threat. In so doing, it shifts between monocular and binocular scanning of the threat and of the environment. We analyzed eye movements in the Common chameleon, Chamaeleo chameleon, during the avoidance response for lateralization, that is, asymmetry at the functional/behavioral level. The chameleons were exposed to a threat approaching horizontally from clockwise or anti-clockwise directions that could be viewed monocularly or binocularly. Our results show three broad patterns of eye use, as determined by the durations spent viewing the threat and by the frequency of eye shifts. Under binocular viewing, two of the patterns were found to be both side dependent (that is, lateralized) and role dependent ("leading" or "following"). However, under monocular viewing, no such lateralization was detected. We discuss these findings in light of the situation, not uncommon in vertebrates, of independent eye movements and a high degree of optic nerve decussation, and suggest that lateralization may well occur in organisms that are regularly exposed to critical stimuli from all spatial directions. We point to the need for further investigation of lateralization at fine behavioral levels.

  13. Binocular Interactions Underlying the Classic Optomotor Responses of Flying Flies

    PubMed Central

    Duistermars, Brian J.; Care, Rachel A.; Frye, Mark A.

    2012-01-01

    In response to imposed course deviations, the optomotor reactions of animals reduce motion blur and facilitate the maintenance of stable body posture. In flies, many anatomical and electrophysiological studies suggest that disparate motion cues stimulating the left and right eyes are not processed in isolation but rather are integrated in the brain to produce a cohesive panoramic percept. To investigate the strength of such inter-ocular interactions and their role in compensatory sensory–motor transformations, we utilize a virtual reality flight simulator to record wing and head optomotor reactions by tethered flying flies in response to imposed binocular rotation and monocular front-to-back and back-to-front motion. Within a narrow range of stimulus parameters that generates large contrast insensitive optomotor responses to binocular rotation, we find that responses to monocular front-to-back motion are larger than those to panoramic rotation, but are contrast sensitive. Conversely, responses to monocular back-to-front motion are slower than those to rotation and peak at the lowest tested contrast. Together our results suggest that optomotor responses to binocular rotation result from the influence of non-additive contralateral inhibitory as well as excitatory circuit interactions that serve to confer contrast insensitivity to flight behaviors influenced by rotatory optic flow. PMID:22375108

  14. Adaptive Monocular Visual–Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices

    PubMed Central

    Piao, Jin-Chun; Kim, Shin-Dug

    2017-01-01

    Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual–inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual–inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual–inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual–inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method. PMID:29112143

  15. Binocular function to increase visual outcome in patients implanted with a diffractive trifocal intraocular lens.

    PubMed

    Kretz, Florian T A; Müller, Matthias; Gerl, Matthias; Gerl, Ralf H; Auffarth, Gerd U

    2015-08-21

    To evaluate binocular visual outcomes for near, intermediate and distance vision compared to monocular visual outcomes at the same distances in patients implanted with a diffractive trifocal intraocular lens (IOL). The study comprised 100 eyes of 50 patients that underwent bilateral refractive lens exchange or cataract surgery with implantation of a multifocal diffractive IOL (AT LISA tri 839MP, Carl Zeiss Meditech, Germany). A complete ophthalmological examination was performed preoperatively and 3 months postoperatively. The main outcome measures were monocular and binocular uncorrected distance (UDVA), corrected distance (CDVA), uncorrected intermediate (UIVA), and uncorrected near visual acuities (UNVA), keratometry, and manifest refraction. The mean age was 59.28 ± 9.6 [SD] years (range, 44-79 years). There was significant improvement in UDVA, UIVA, UNVA and CDVA. Comparing the monocular results to the binocular results, there was a statistically significantly better binocular outcome at all distances (UDVA p = 0.036; UIVA p < 0.0001; UNVA p = 0.001). The postoperative manifest refraction was within ±0.50 D in 86% of patients. The trifocal IOL improved near, intermediate, and distance vision compared to preoperative values. In addition, a statistically significant increase in binocular visual function at all distances was found. German Clinical Trials Register (DRKS) DRKS00007837.

  16. Design of a noninvasive face mask for ocular occlusion in rats and assessment in a visual discrimination paradigm.

    PubMed

    Hager, Audrey M; Dringenberg, Hans C

    2012-12-01

    The rat visual system is structured such that the large (>90 %) majority of retinal ganglion axons reach the contralateral lateral geniculate nucleus (LGN) and visual cortex (V1). This anatomical design allows for the relatively selective activation of one cerebral hemisphere under monocular viewing conditions. Here, we describe the design of a harness and face mask allowing simple and noninvasive monocular occlusion in rats. The harness is constructed from synthetic fiber (shoelace-type material) and fits around the girth region and neck, allowing for easy adjustments to fit rats of various weights. The face mask consists of soft rubber material that is attached to the harness by Velcro strips. Eyeholes in the mask can be covered by additional Velcro patches to occlude either one or both eyes. Rats readily adapt to wearing the device, allowing behavioral testing under different types of viewing conditions. We show that rats successfully acquire a water-maze-based visual discrimination task under monocular viewing conditions. Following task acquisition, interocular transfer was assessed. Performance with the previously occluded, "untrained" eye was impaired, suggesting that training effects were partially confined to one cerebral hemisphere. The method described herein provides a simple and noninvasive means to restrict visual input for studies of visual processing and learning in various rodent species.

  17. Thalamocortical dynamics of the McCollough effect: boundary-surface alignment through perceptual learning.

    PubMed

    Grossberg, Stephen; Hwang, Seungwoo; Mingolla, Ennio

    2002-05-01

    This article further develops the FACADE neural model of 3-D vision and figure-ground perception to quantitatively explain properties of the McCollough effect (ME). The model proposes that many ME data result from visual system mechanisms whose primary function is to adaptively align, through learning, boundary and surface representations that are positionally shifted due to the process of binocular fusion. For example, binocular boundary representations are shifted by binocular fusion relative to monocular surface representations, yet the boundaries must become positionally aligned with the surfaces to control binocular surface capture and filling-in. The model also includes perceptual reset mechanisms that use habituative transmitters in opponent processing circuits. Thus the model shows how ME data may arise from a combination of mechanisms that have a clear functional role in biological vision. Simulation results with a single set of parameters quantitatively fit data from 13 experiments that probe the nature of achromatic/chromatic and monocular/binocular interactions during induction of the ME. The model proposes how perceptual learning, opponent processing, and habituation at both monocular and binocular surface representations are involved, including early thalamocortical sites. In particular, it explains the anomalous ME utilizing these multiple processing sites. Alternative models of the ME are also summarized and compared with the present model.

  18. A new binocular approach to the treatment of amblyopia in adults well beyond the critical period of visual development.

    PubMed

    Hess, R F; Mansouri, B; Thompson, B

    2010-01-01

    The present treatments for amblyopia are predominantly monocular, aiming to improve the vision in the amblyopic eye through either patching of the fellow fixing eye or visual training of the amblyopic eye. This approach is problematic, not least because it rarely results in the establishment of binocular function. Recently it has been shown that amblyopes possess binocular cortical mechanisms for both threshold and suprathreshold stimuli. We outline a novel procedure for measuring the extent to which the fixing eye suppresses the fellow amblyopic eye, rendering what is a structurally binocular system functionally monocular. Here we show that prolonged periods of viewing (under the artificial conditions of stimuli of different contrast in each eye) during which information from the two eyes is combined lead to a strengthening of binocular vision in strabismic amblyopes and eventual combination of binocular information under natural viewing conditions (stimuli of the same contrast in each eye). Concomitant improvement in monocular acuity of the amblyopic eye occurs with this reduction in suppression and strengthening of binocular fusion. Furthermore, in a majority of patients tested, stereoscopic function is established. This provides the basis for a new treatment of amblyopia, one that is purely binocular and aimed at reducing suppression as a first step.

  19. Perception of 3D spatial relations for 3D displays

    NASA Astrophysics Data System (ADS)

    Rosen, Paul; Pizlo, Zygmunt; Hoffmann, Christoph; Popescu, Voicu S.

    2004-05-01

    We test perception of 3D spatial relations in 3D images rendered by a 3D display (Perspecta from Actuality Systems) and compare it to that of a high-resolution flat panel display. 3D images provide the observer with such depth cues as motion parallax and binocular disparity. Our 3D display is a device that renders a 3D image by displaying, in rapid succession, radial slices through the scene on a rotating screen. The image is contained in a glass globe and can be viewed from virtually any direction. In the psychophysical experiment several families of 3D objects are used as stimuli: primitive shapes (cylinders and cuboids), and complex objects (multi-story buildings, cars, and pieces of furniture). Each object has at least one plane of symmetry. On each trial an object or its "distorted" version is shown at an arbitrary orientation. The distortion is produced by stretching an object in a random direction by 40%. This distortion must eliminate the symmetry of an object. The subject's task is to decide whether or not the presented object is distorted under several viewing conditions (monocular/binocular, with/without motion parallax, and near/far). The subject's performance is measured by the discriminability d', which is a conventional dependent variable in signal detection experiments.
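
    For reference, under the standard equal-variance signal-detection model the discriminability used here is computed from the hit rate H (distorted objects correctly called distorted) and the false-alarm rate F (symmetric objects incorrectly called distorted):

    ```latex
    % Equal-variance signal detection: z(.) is the inverse of the standard normal CDF
    d' = z(H) - z(F) = \Phi^{-1}(H) - \Phi^{-1}(F)
    ```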

  20. The iPod binocular home-based treatment for amblyopia in adults: efficacy and compliance.

    PubMed

    Hess, Robert F; Babu, Raiju Jacob; Clavagnier, Simon; Black, Joanna; Bobier, William; Thompson, Benjamin

    2014-09-01

    Occlusion therapy for amblyopia is predicated on the idea that amblyopia is primarily a disorder of monocular vision; however, there is growing evidence that patients with amblyopia have a structurally intact binocular visual system that is rendered functionally monocular due to suppression. Furthermore, we have found that a dichoptic treatment intervention designed to directly target suppression can result in clinically significant improvement in both binocular and monocular visual function in adult patients with amblyopia. The fact that monocular improvement occurs in the absence of any fellow eye occlusion suggests that amblyopia is, in part, due to chronic suppression. Previously the treatment has been administered as a psychophysical task and more recently as a video game that can be played on video goggles or an iPod device equipped with a lenticular screen. The aim of this case-series study of 14 amblyopes (six strabismics, six anisometropes and two mixed) ages 13 to 50 years was to investigate: 1. whether the portable video game treatment is suitable for at-home use and 2. whether an anaglyphic version of the iPod-based video game, which is more convenient for at-home use, has comparable effects to the lenticular version. The dichoptic video game treatment was conducted at home and visual functions assessed before and after treatment. We found that at-home use for 10 to 30 hours restored simultaneous binocular perception in 13 of 14 cases along with significant improvements in acuity (0.11 ± 0.08 logMAR) and stereopsis (0.6 ± 0.5 log units). Furthermore, the anaglyph and lenticular platforms were equally effective. In addition, the iPod devices were able to record a complete and accurate picture of treatment compliance. The home-based dichoptic iPod approach represents a viable treatment for adults with amblyopia. © 2014 The Authors. Clinical and Experimental Optometry © 2014 Optometrists Association Australia.
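
    The anaglyphic delivery described above amounts to presenting the game imagery at full contrast to the amblyopic eye and at reduced contrast to the fellow eye through colour-filter glasses; the sketch below illustrates that general idea. The channel assignment and contrast value are assumptions for illustration, not the study's protocol.

    ```python
    # Illustrative dichoptic anaglyph: full-contrast image to one eye (red filter),
    # contrast-reduced copy to the other (cyan filter). Values are placeholders.
    import numpy as np

    def dichoptic_anaglyph(img_gray, fellow_contrast=0.3):
        """img_gray: float array in [0, 1]; returns an H x W x 3 RGB image."""
        mean = float(img_gray.mean())
        fellow = mean + fellow_contrast * (img_gray - mean)   # reduced-contrast copy
        rgb = np.zeros(img_gray.shape + (3,), dtype=float)
        rgb[..., 0] = img_gray    # red channel   -> eye behind the red filter
        rgb[..., 1] = fellow      # green channel -> eye behind the cyan filter
        rgb[..., 2] = fellow      # blue channel  -> eye behind the cyan filter
        return np.clip(rgb, 0.0, 1.0)
    ```

    In treatments of this family the fellow-eye contrast is typically raised over successive sessions as suppression weakens.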

  1. Long-Term Visual Training Increases Visual Acuity and Long-Term Monocular Deprivation Promotes Ocular Dominance Plasticity in Adult Standard Cage-Raised Mice

    PubMed Central

    Yusifov, Rashad

    2018-01-01

    For routine behavioral tasks, mice predominantly rely on olfactory cues and tactile information. In contrast, their visual capabilities appear rather restricted, raising the question whether they can improve if vision gets more behaviorally relevant. We therefore performed long-term training using the visual water task (VWT): adult standard cage (SC)-raised mice were trained to swim toward a rewarded grating stimulus so that using visual information allowed them to avoid excessive swimming toward nonrewarded stimuli. Indeed, and in contrast to old mice raised in a generally enriched environment (Greifzu et al., 2016), long-term VWT training increased visual acuity (VA) on average by more than 30% to 0.82 cycles per degree (cyc/deg). In an individual animal, VA even increased to 1.49 cyc/deg, i.e., beyond the range of rat VAs. Since visual experience enhances the spatial frequency threshold of the optomotor (OPT) reflex of the open eye after monocular deprivation (MD), we also quantified monocular vision after VWT training. Monocular VA did not increase reliably, and eye reopening did not initiate a decline to pre-MD values as observed by optomotry; VA values rather increased with continued VWT training. Thus, optomotry and the VWT measure different parameters of mouse spatial vision. Finally, we tested whether long-term MD induced ocular dominance (OD) plasticity in the visual cortex of adult [postnatal day (P)162–P182] SC-raised mice. This was indeed the case: 40–50 days of MD induced OD shifts toward the open eye in both VWT-trained and, surprisingly, also in age-matched mice without VWT training. These data indicate that (1) long-term VWT training increases adult mouse VA, and (2) long-term MD induces OD shifts also in adult SC-raised mice. PMID:29379877

  2. 3D imaging with a single-aperture 3-mm objective lens: concept, fabrication, and test

    NASA Astrophysics Data System (ADS)

    Korniski, Ronald; Bae, Sam Y.; Shearn, Michael; Manohara, Harish; Shahinian, Hrayr

    2011-10-01

    There are many advantages to minimally invasive surgery (MIS). An endoscope is the optical system of choice by the surgeon for MIS. The smaller the incision or opening made to perform the surgery, the smaller the optical system needed. For minimally invasive neurological and skull base surgeries the openings are typically 10 mm in diameter (dime sized) or less. The largest outside diameter (OD) endoscope used is 4 mm. A significant drawback to endoscopic MIS is that it only provides a monocular view of the surgical site thereby lacking depth information for the surgeon. A stereo view would provide the surgeon instantaneous depth information of the surroundings within the field of view, a significant advantage especially during brain surgery. Providing 3D imaging in an endoscopic objective lens system presents significant challenges because of the tight packaging constraints. This paper presents a promising new technique for endoscopic 3D imaging that uses a single lens system with complementary multi-bandpass filters (CMBFs), and describes the proof-of-concept demonstrations performed to date validating the technique. These demonstrations of the technique have utilized many commercial off-the-shelf (COTS) components including the ones used in the endoscope objective.

  3. Kinder, gentler stereo

    NASA Astrophysics Data System (ADS)

    Siegel, Mel; Tobinaga, Yoshikazu; Akiya, Takeo

    1999-05-01

    Not only binocular perspective disparity, but also many secondary binocular and monocular sensory phenomena, contribute to the human sensation of depth. Binocular perspective disparity is notable as the strongest depth perception factor. However, means for creating it artificially from flat image pairs are notorious for inducing physical and mental stresses, e.g., 'virtual reality sickness'. Aiming to deliver a less stressful 'kinder, gentler stereo' (KGS), we systematically examine the secondary phenomena and their synergistic combination with each other and with binocular perspective disparity. By KGS we mean a stereo capture, rendering, and display paradigm without cue conflicts, without eyewear, without viewing zones, with negligible 'lock-in' time to perceive the image in depth, and with a normal appearance for stereo-deficient viewers. To achieve KGS we employ optical and digital image processing steps that introduce distortions contrary to strict 'geometrical correctness' of binocular perspective but which nevertheless result in increased stereoscopic viewing comfort. We particularly exploit the lower limits of interocular separation, showing that unexpectedly small disparities stimulate accurate and pleasant depth sensations. Under these circumstances crosstalk is perceived as depth-of-focus rather than as ghosting. This suggests the possibility of radically new approaches to stereoview multiplexing that enable zoneless autostereoscopic display.

  4. Three-dimensional deformable-model-based localization and recognition of road vehicles.

    PubMed

    Zhang, Zhaoxiang; Tan, Tieniu; Huang, Kaiqi; Wang, Yunhong

    2012-01-01

    We address the problem of model-based object recognition. Our aim is to localize and recognize road vehicles from monocular images or videos in calibrated traffic scenes. A 3-D deformable vehicle model with 12 shape parameters is set up as prior information, and its pose is determined by three parameters, which are its position on the ground plane and its orientation about the vertical axis under ground-plane constraints. An efficient local gradient-based method is proposed to evaluate the fitness between the projection of the vehicle model and image data, which is combined into a novel evolutionary computing framework to estimate the 12 shape parameters and three pose parameters by iterative evolution. The recovery of pose parameters achieves vehicle localization, whereas the shape parameters are used for vehicle recognition. Numerous experiments are conducted in this paper to demonstrate the performance of our approach. It is shown that the local gradient-based method can evaluate accurately and efficiently the fitness between the projection of the vehicle model and the image data. The evolutionary computing framework is effective for vehicles of different types and poses and is robust to various kinds of occlusion.

  5. 3D Imaging with a Single-Aperture 3-mm Objective Lens: Concept, Fabrication and Test

    NASA Technical Reports Server (NTRS)

    Korniski, Ron; Bae, Sam Y.; Shearn, Mike; Manohara, Harish; Shahinian, Hrayr

    2011-01-01

    There are many advantages to minimally invasive surgery (MIS). An endoscope is the optical system of choice by the surgeon for MIS. The smaller the incision or opening made to perform the surgery, the smaller the optical system needed. For minimally invasive neurological and skull base surgeries the openings are typically 10 mm in diameter (dime sized) or less. The largest outside diameter (OD) endoscope used is 4 mm. A significant drawback to endoscopic MIS is that it only provides a monocular view of the surgical site thereby lacking depth information for the surgeon. A stereo view would provide the surgeon instantaneous depth information of the surroundings within the field of view, a significant advantage especially during brain surgery. Providing 3D imaging in an endoscopic objective lens system presents significant challenges because of the tight packaging constraints. This paper presents a promising new technique for endoscopic 3D imaging that uses a single lens system with complementary multi-bandpass filters (CMBFs), and describes the proof-of-concept demonstrations performed to date validating the technique. These demonstrations of the technique have utilized many commercial off-the-shelf (COTS) components including the ones used in the endoscope objective.

  6. Unusual presentation of a skull base mass lesion in sarcoidosis mimicking malignant neoplasm: a case report.

    PubMed

    Shijo, Katsunori; Moro, Nobuhiro; Sasano, Mari; Watanabe, Mitsuru; Yagasaki, Hiroshi; Takahashi, Shori; Homma, Taku; Yoshino, Atsuo

    2018-05-29

    Sarcoidosis is a multi-organ disease of unknown etiology characterised by the presence of epithelioid granulomas without caseous necrosis. Systemic sarcoidosis is rare among children, while neurosarcoidosis in children is even rarer, whether systemic or not. We describe the case of a 12-year-old boy who presented with monocular vision loss accompanied by unusual MRI features of an extensive meningeal infiltrating mass lesion. The patient underwent surgical resection (biopsy) via a frontotemporal craniotomy to establish a definitive diagnosis based on the histopathology, since neurosarcoidosis remains a very difficult diagnosis to establish from neuroradiological imaging alone. Based on the histopathology of the resected mass lesion, neurosarcoidosis was diagnosed. On follow-up after 3 months of steroid therapy, the patient displayed a good response on the imaging studies: MRI revealed that the preexisting mass lesion had regressed markedly. We also conducted a small literature review of imaging studies, manifestations, appropriate treatments, etc., for neurosarcoidosis, including in children. Although extremely rare, neurosarcoidosis, even in children, should be considered in the differential diagnosis of skull base mass lesions to avoid unnecessarily aggressive surgery and delay in treatment, since surgery may have little role in the treatment of sarcoidosis.

  7. Monocular Elevation Deficiency - Double Elevator Palsy

    MedlinePlus


  8. [The lazy eye - contemporary strategies of amblyopia treatment].

    PubMed

    Sturm, V

    2011-02-16

    Amblyopia is a condition of decreased monocular or binocular visual acuity caused by form deprivation or abnormal binocular interaction. Amblyopia is the most common cause of monocular vision loss in children, with a prevalence of 2 to 5%. During the last decade, several prospective randomized studies have influenced our clinical management. Based on these studies, optimum refractive correction should be prescribed first. However, most patients will need additional occlusion therapy, which is still considered the «gold standard» of amblyopia management; much lower doses than previously used have now been shown to be effective. In moderate amblyopia, penalization with atropine is as effective as patching. New treatment modalities including perceptual learning, pharmacotherapy with levodopa and citicoline, and transcranial magnetic stimulation have not yet been widely accepted.

  9. A method of real-time detection for distant moving obstacles by monocular vision

    NASA Astrophysics Data System (ADS)

    Jia, Bao-zhi; Zhu, Ming

    2013-12-01

    In this paper, we propose an approach for detecting distant moving obstacles, such as cars and bicycles, with a monocular camera that cooperates with ultrasonic sensors in a low-cost configuration. We aim at detecting distant obstacles that move toward our autonomous navigation car in order to raise an alarm and keep away from them. Frame differencing is applied to find obstacles after compensation for the camera's ego-motion. Meanwhile, each obstacle is separated from the others into an independent region and given a confidence level indicating whether it is coming closer. Results on an open dataset and on our own autonomous navigation car show that the method is effective for real-time detection of distant moving obstacles.
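
    The abstract gives no implementation details, but a minimal ego-motion-compensated frame-differencing sketch might look like the following (OpenCV calls are standard; the thresholds and parameters are illustrative assumptions, not the authors' values):

        import cv2
        import numpy as np

        def moving_obstacle_candidates(prev_gray, curr_gray, diff_thresh=25, min_area=100):
            # Estimate the camera's ego-motion as a global homography from tracked features.
            p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500, qualityLevel=0.01, minDistance=8)
            p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
            good = status.ravel() == 1
            H, _ = cv2.findHomography(p0[good], p1[good], cv2.RANSAC, 3.0)

            # Warp the previous frame to compensate ego-motion, then difference the frames.
            h, w = curr_gray.shape
            warped = cv2.warpPerspective(prev_gray, H, (w, h))
            diff = cv2.absdiff(curr_gray, warped)
            _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)

            # Each remaining connected region is an independent moving-obstacle candidate.
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]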

  10. Monocular Visual Odometry Based on Trifocal Tensor Constraint

    NASA Astrophysics Data System (ADS)

    Chen, Y. J.; Yang, G. L.; Jiang, Y. X.; Liu, X. Y.

    2018-02-01

    For the problem of real-time precise localization in urban streets, a monocular visual odometry based on extended Kalman fusion of optical-flow tracking and a trifocal tensor constraint is proposed. To diminish the influence of moving objects, such as pedestrians, we estimate the motion of the camera by extracting features on the ground, which improves the robustness of the system. The observation equation based on the trifocal tensor constraint is derived, which, together with the state transition equation, forms the Kalman filter. An extended Kalman filter is employed to cope with the nonlinear system. Experimental results demonstrate that, compared with Yu's 2-step EKF method, the algorithm is more accurate and meets the needs of real-time accurate localization in cities.
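
    The paper's specific observation model comes from the trifocal tensor constraint; as a hedged illustration only, the generic extended Kalman filter predict/update cycle such a model plugs into can be sketched as follows (the state layout and the motion/measurement functions and Jacobians are placeholders, not the authors' formulation):

        import numpy as np

        def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
            # Predict: propagate the state with the motion model f and its Jacobian F.
            x_pred = f(x, u)
            F = F_jac(x, u)
            P_pred = F @ P @ F.T + Q

            # Update: correct with the measurement model h (here this would be the
            # trifocal-tensor-based observation equation) and its Jacobian H.
            H = H_jac(x_pred)
            y = z - h(x_pred)                      # innovation
            S = H @ P_pred @ H.T + R               # innovation covariance
            K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
            x_new = x_pred + K @ y
            P_new = (np.eye(len(x)) - K @ H) @ P_pred
            return x_new, P_new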

  11. Discrimination of binocular color mixtures in dichromacy: evaluation of the Maxwell-Cornsweet conjecture

    NASA Astrophysics Data System (ADS)

    Knoblauch, Kenneth; McMahon, Matthew J.

    1995-10-01

    We tested the Maxwell-Cornsweet conjecture that differential spectral filtering of the two eyes can increase the dimensionality of a dichromat's color vision. Sex-linked dichromats wore filters that differentially passed long- and middle-wavelength regions of the spectrum to each eye. Monocularly, temporal modulation thresholds (1.5 Hz) for color mixtures from the Rayleigh region of the spectrum were accounted for by a single, univariant mechanism. Binocularly, univariance was rejected because, as in monocular viewing by trichromats, in no color direction could silent substitution of the color mixtures be obtained. Despite the filter-aided increase in dimension, estimated wavelength discrimination was quite poor in this spectral region, suggesting a limit to the effectiveness of this technique.

  12. Diurnal rhythms of visual accommodation and blink responses - Implication for flight-deck visual standards

    NASA Technical Reports Server (NTRS)

    Murphy, M. R.; Randle, R. J.; Williams, B. A.

    1977-01-01

    Possible 24-h variations in accommodation responses were investigated. A recently developed servo-controlled optometer and focus stimulator were used to obtain monocular accommodation response data on four college-age subjects. No 24-h rhythm in accommodation was shown. Heart rate and blink rate also were measured and periodicity analysis showed a mean 24-h rhythm for both; however, blink rate periodograms were significant for only two of the four subjects. Thus, with the qualifications that college students were tested instead of pilots and that they performed monocular laboratory tasks instead of binocular flight-deck tasks, it is concluded that 24-h rhythms in accommodation responses need not be considered in setting visual standards for flight-deck tasks.

  13. Obstacle Detection and Avoidance System Based on Monocular Camera and Size Expansion Algorithm for UAVs

    PubMed Central

    Al-Kaff, Abdulla; García, Fernando; Martín, David; De La Escalera, Arturo; Armingol, José María

    2017-01-01

    One of the most challenging problems in the domain of autonomous aerial vehicles is the design of a robust real-time obstacle detection and avoidance system. The problem is complex, especially for micro and small aerial vehicles, because of Size, Weight and Power (SWaP) constraints. Lightweight sensors (e.g., a digital camera) can therefore be the best choice compared with other sensors such as laser or radar. For real-time applications, different works are based on stereo cameras in order to obtain a 3D model of the obstacles, or to estimate their depth. Instead, in this paper, a method that mimics the human behavior of detecting the collision state of approaching obstacles using a monocular camera is proposed. The key of the proposed algorithm is to analyze the size changes of the detected feature points, combined with the expansion ratios of the convex hull constructed around the detected feature points from consecutive frames. During the Unmanned Aerial Vehicle (UAV) motion, the detection algorithm estimates the changes in the size of the area of the approaching obstacles. First, the method detects the feature points of the obstacles, then extracts the obstacles that have a probability of getting close to the UAV. Secondly, by comparing the area ratio of the obstacle and the position of the UAV, the method decides whether the detected obstacle may cause a collision. Finally, by estimating the obstacle's 2D position in the image and combining it with the tracked waypoints, the UAV performs the avoidance maneuver. The proposed algorithm was evaluated by performing real indoor and outdoor flights, and the obtained results show the accuracy of the proposed algorithm compared with other related works. PMID:28481277
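
    A minimal sketch of the core size-expansion cue (matched feature points and the area ratio of their convex hulls across consecutive frames) is given below. The OpenCV calls are standard, but the detector choice and the expansion threshold are illustrative assumptions rather than the authors' tuned values:

        import cv2
        import numpy as np

        def approaching(prev_gray, curr_gray, expansion_thresh=1.15):
            # Detect and match feature points on the candidate obstacle region.
            orb = cv2.ORB_create(nfeatures=500)
            kp1, des1 = orb.detectAndCompute(prev_gray, None)
            kp2, des2 = orb.detectAndCompute(curr_gray, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = matcher.match(des1, des2)
            if len(matches) < 4:
                return False

            pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
            pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

            # Compare the areas of the convex hulls built around the matched points:
            # an expanding hull suggests the obstacle is getting closer.
            area1 = cv2.contourArea(cv2.convexHull(pts1))
            area2 = cv2.contourArea(cv2.convexHull(pts2))
            return area1 > 0 and (area2 / area1) > expansion_thresh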

  14. Spatial constraints of stereopsis in video displays

    NASA Technical Reports Server (NTRS)

    Schor, Clifton

    1989-01-01

    Recent development in video technology, such as the liquid crystal displays and shutters, have made it feasible to incorporate stereoscopic depth into the 3-D representations on 2-D displays. However, depth has already been vividly portrayed in video displays without stereopsis using the classical artists' depth cues described by Helmholtz (1866) and the dynamic depth cues described in detail by Ittleson (1952). Successful static depth cues include overlap, size, linear perspective, texture gradients, and shading. Effective dynamic cues include looming (Regan and Beverly, 1979) and motion parallax (Rogers and Graham, 1982). Stereoscopic depth is superior to the monocular distance cues under certain circumstances. It is most useful at portraying depth intervals as small as 5 to 10 arc secs. For this reason it is extremely useful in user-video interactions such as telepresence. Objects can be manipulated in 3-D space, for example, while a person who controls the operations views a virtual image of the manipulated object on a remote 2-D video display. Stereopsis also provides structure and form information in camouflaged surfaces such as tree foliage. Motion parallax also reveals form; however, without other monocular cues such as overlap, motion parallax can yield an ambiguous perception. For example, a turning sphere, portrayed as solid by parallax can appear to rotate either leftward or rightward. However, only one direction of rotation is perceived when stereo-depth is included. If the scene is static, then stereopsis is the principal cue for revealing the camouflaged surface structure. Finally, dynamic stereopsis provides information about the direction of motion in depth (Regan and Beverly, 1979). Clearly there are many spatial constraints, including spatial frequency content, retinal eccentricity, exposure duration, target spacing, and disparity gradient, which - when properly adjusted - can greatly enhance stereodepth in video displays.

  15. Bilateral symmetry in vision and influence of ocular surgical procedures on binocular vision: A topical review.

    PubMed

    Arba Mosquera, Samuel; Verma, Shwetabh

    2016-01-01

    We analyze the role of bilateral symmetry in enhancing binocular visual ability in human eyes, and further explore how efficiently bilateral symmetry is preserved in different ocular surgical procedures. The inclusion criterion for this review was strict relevance to the clinical questions under research. Enantiomorphism has been reported in lower order aberrations, higher order aberrations and cone directionality. When contrast differs in the two eyes, binocular acuity is better than monocular acuity of the eye that receives higher contrast. Anisometropia has an uncommon occurrence in large populations. Anisometropia seen in infancy and childhood is transitory and of little consequence for visual acuity. Binocular summation of contrast signals declines with age, independent of inter-ocular differences. The symmetric associations between the right and left eye could be explained by the symmetry in pupil offset and visual axis, which is always nasal in both eyes. Binocular summation mitigates poor visual performance under low luminance conditions, and strong inter-ocular disparity detrimentally affects binocular summation. Considerable symmetry of response exists in fellow eyes of patients undergoing myopic PRK and LASIK; however, the methods used to determine whether or not symmetry is maintained consist of comparing individual terms in a variety of ad hoc ways both before and after the refractive surgery, ignoring the fact that retinal image quality for any individual is based on the sum of all terms. The analysis of bilateral symmetry should be related to the patients' binocular vision status. The role of aberrations in monocular and binocular vision needs further investigation. Copyright © 2016 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.

  16. The heterophoria of 3-5 year old children as a function of viewing distance and target type.

    PubMed

    Troyer, Mary E; Sreenivasan, Vidhyapriya; Peper, T J; Candy, T Rowan

    2017-01-01

    Heterophoria is the misalignment of the eyes in monocular viewing and represents the accuracy of vergence driven by all classical cues except disparity. It is challenging to assess restless children using clinical cover tests, and phoria in early childhood is poorly understood. Here we used eye tracking to assess phoria as a function of viewing distance and target in adults and young children, with comparison to clinical cover tests. Purkinje image tracking (MCS PowerRefractor) was used to record eye alignment in adults (19-28 years, N = 24) and typically developing children (3-5 years, N = 24). Objective unilateral and alternating cover tests were performed using an infrared filter while participants viewed a pseudo-randomised sequence of Lea symbols (0.18 logMAR; Snellen: 20/30 or 6/9) and animated cartoon movies at distances of 40 cm, 1 m, and 6 m. For the unilateral cover test, a 10 s binocular period preceded and followed 30 s of occlusion of the right eye. For the alternating cover test, a 10 s binocular period preceded and followed alternate covering of right and left eyes for 3-s each. Phoria was derived from the difference in weighted average binocular and monocular alignment. A masked prism-neutralised clinical cover test was performed for each of the conditions for comparison. Closer viewing distance resulted in greater exophoria for both children and adults (p < 0.001). Phorias were similar for adults and children for each viewing distance and target, with mean differences of less than 2 prism dioptres (pd). Overall, the average PowerRefractor phorias (pooled across protocols) for adults were 1.3, 2.3 and 3.8 pd exophoria and for children were 0.1 pd esophoria, 0.94 and 3.8 pd exophoria for the 6 m, 1 m and 40 cm distances respectively. The corresponding clinical cover test values were 0.7, 1.9, and 4.1 pd exophoria for adults and 0, 1.5 and 3.3 pd exophoria for the children. Refractive states were also similar (≤0.5 D difference) for viewing the Lea symbols or movie for any protocol tested. Phoria estimation can be challenging for a pre-school child. These data suggest that by 3-5 years of age objective eye-tracking measures in a typically developing group are adult-like at the range of distances tested, and that use of an animated movie produces similar average results to a small optotype (0.18 logMAR; Snellen 20/30 or 6/9). © 2016 The Authors Ophthalmic & Physiological Optics © 2016 The College of Optometrists.

  17. Ways of Viewing Pictorial Plasticity

    PubMed Central

    2017-01-01

    The plastic effect is historically used to denote various forms of stereopsis. The vivid impression of depth often associated with binocular stereopsis can also be achieved in other ways, for example, using a synopter. Accounts of this go back over a hundred years. These ways of viewing all aim to diminish sensorial evidence that the picture is physically flat. Although various viewing modes have been proposed in the literature, their effects have never been compared. In the current study, we compared three viewing modes: monocular blur, synoptic viewing, and free viewing (using a placebo synopter). By designing a physical embodiment that was indistinguishable for the three experimental conditions, we kept observers naïve with respect to the differences between them; 197 observers participated in an experiment where the three viewing modes were compared by performing a rating task. Results indicate that synoptic viewing causes the largest plastic effect. Monocular blur scores lower than synoptic viewing but is still rated significantly higher than the baseline conditions. The results strengthen the idea that synoptic viewing is not due to a placebo effect. Furthermore, monocular blur has been verified for the first time as a way of experiencing the plastic effect, although the effect is smaller than synoptic viewing. We discuss the results with respect to the theoretical basis for the plastic effect. We show that current theories are not described with sufficient details to explain the differences we found. PMID:28491270

  18. Early Cross-modal Plasticity in Adults.

    PubMed

    Lo Verde, Luca; Morrone, Maria Concetta; Lunghi, Claudia

    2017-03-01

    It is known that, after a prolonged period of visual deprivation, the adult visual cortex can be recruited for nonvisual processing, reflecting cross-modal plasticity. Here, we investigated whether cross-modal plasticity can occur at short timescales in the typical adult brain by comparing the interaction between vision and touch during binocular rivalry before and after a brief period of monocular deprivation, which strongly alters ocular balance favoring the deprived eye. While viewing dichoptically two gratings of orthogonal orientation, participants were asked to actively explore a haptic grating congruent in orientation to one of the two rivalrous stimuli. We repeated this procedure before and after 150 min of monocular deprivation. We first confirmed that haptic stimulation interacted with vision during rivalry promoting dominance of the congruent visuo-haptic stimulus and that monocular deprivation increased the deprived eye and decreased the nondeprived eye dominance. Interestingly, after deprivation, we found that the effect of touch did not change for the nondeprived eye, whereas it disappeared for the deprived eye, which was potentiated after deprivation. The absence of visuo-haptic interaction for the deprived eye lasted for over 1 hr and was not attributable to a masking induced by the stronger response of the deprived eye as confirmed by a control experiment. Taken together, our results demonstrate that the adult human visual cortex retains a high degree of cross-modal plasticity, which can occur even at very short timescales.

  19. Six-month-old infants' perception of the hollow face illusion: evidence for a general convexity bias.

    PubMed

    Corrow, Sherryse L; Mathison, Jordan; Granrud, Carl E; Yonas, Albert

    2014-01-01

    Corrow, Granrud, Mathison, and Yonas (2011, Perception, 40, 1376-1383) found evidence that 6-month-old infants perceive the hollow face illusion. In the present study we asked whether 6-month-old infants perceive illusory depth reversal for a nonface object and whether infants' perception of the hollow face illusion is affected by mask orientation inversion. In experiment 1 infants viewed a concave bowl, and their reaches were recorded under monocular and binocular viewing conditions. Infants reached to the bowl as if it were convex significantly more often in the monocular than in the binocular viewing condition. These results suggest that infants perceive illusory depth reversal with a nonface stimulus and that the infant visual system has a bias to perceive objects as convex. Infants in experiment 2 viewed a concave face-like mask in upright and inverted orientations. Infants reached to the display as if it were convex more in the monocular than in the binocular condition; however, mask orientation had no effect on reaching. Previous findings that adults' perception of the hollow face illusion is affected by mask orientation inversion have been interpreted as evidence of stored-knowledge influences on perception. However, we found no evidence of such influences in infants, suggesting that their perception of this illusion may not be affected by stored knowledge, and that perceived depth reversal is not face-specific in infants.

  20. Enhancement of vision by monocular deprivation in adult mice.

    PubMed

    Prusky, Glen T; Alam, Nazia M; Douglas, Robert M

    2006-11-08

    Plasticity of vision mediated through binocular interactions has been reported in mammals only during a "critical" period in juvenile life, wherein monocular deprivation (MD) causes an enduring loss of visual acuity (amblyopia) selectively through the deprived eye. Here, we report a different form of interocular plasticity of vision in adult mice in which MD leads to an enhancement of the optokinetic response (OKR) selectively through the nondeprived eye. Over 5 d of MD, the spatial frequency sensitivity of the OKR increased gradually, reaching a plateau of approximately 36% above pre-deprivation baseline. Eye opening initiated a gradual decline, but sensitivity was maintained above pre-deprivation baseline for 5-6 d. Enhanced function was restricted to the monocular visual field, notwithstanding the dependence of the plasticity on binocular interactions. Activity in visual cortex ipsilateral to the deprived eye was necessary for the characteristic induction of the enhancement, and activity in visual cortex contralateral to the deprived eye was necessary for its maintenance after MD. The plasticity also displayed distinct learning-like properties: Active testing experience was required to attain maximal enhancement and for enhancement to persist after MD, and the duration of enhanced sensitivity after MD was extended by increasing the length of MD, and by repeating MD. These data show that the adult mouse visual system maintains a form of experience-dependent plasticity in which the visual cortex can modulate the normal function of subcortical visual pathways.

  1. Bifocal Stereo for Multipath Person Re-Identification

    NASA Astrophysics Data System (ADS)

    Blott, G.; Heipke, C.

    2017-11-01

    This work presents an approach to the task of person re-identification that exploits bifocal stereo cameras. Existing monocular person re-identification approaches show a decreasing working distance when the image resolution is increased to obtain higher re-identification performance. We propose a novel 3D multipath bifocal approach, containing a rectilinear lens with a larger focal length for long-range distances and a fish-eye lens with a smaller focal length for the near range. The person re-identification performance is at least on par with 2D re-identification approaches, but the working distance of the approach is increased, and on average 10% more re-identification performance can be achieved in the overlapping field of view compared to a single camera. In addition, the 3D information from the overlapping field of view is exploited to resolve potential 2D ambiguities.

  2. Human single neuron activity precedes emergence of conscious perception.

    PubMed

    Gelbard-Sagiv, Hagar; Mudrik, Liad; Hill, Michael R; Koch, Christof; Fried, Itzhak

    2018-05-25

    Identifying the neuronal basis of spontaneous changes in conscious experience in the absence of changes in the external environment is a major challenge. Binocular rivalry, in which two stationary monocular images lead to continuously changing perception, provides a unique opportunity to address this issue. We studied the activity of human single neurons in the medial temporal and frontal lobes while patients were engaged in binocular rivalry. Here we report that internal changes in the content of perception are signaled by very early (~-2000 ms) nonselective medial frontal activity, followed by selective activity of medial temporal lobe neurons that precedes the perceptual change by ~1000 ms. Such early activations are not found for externally driven perceptual changes. These results suggest that a medial fronto-temporal network may be involved in the preconscious internal generation of perceptual transitions.

  3. A novel visual-inertial monocular SLAM

    NASA Astrophysics Data System (ADS)

    Yue, Xiaofeng; Zhang, Wenjuan; Xu, Li; Liu, JiangGuo

    2018-02-01

    With the development of sensors and of the computer vision research community, cameras, which are accurate, compact, well understood and, most importantly, cheap and ubiquitous today, have gradually moved to the center of robot localization. Simultaneous localization and mapping (SLAM) using visual features is a system that obtains motion information from image acquisition equipment and rebuilds the structure of an unknown environment. We provide an analysis of bio-inspired flight in insects, employing a novel technique based on SLAM, and combine visual and inertial measurements to obtain high accuracy and robustness. We present a novel tightly-coupled visual-inertial simultaneous localization and mapping system that makes a new attempt to address two challenges: the initialization problem and the calibration problem. Experimental results and analysis show that the proposed approach gives a more accurate quantitative simulation of insect navigation and can reach positioning accuracy at the centimeter level.

  4. Amblyopia and binocular vision.

    PubMed

    Birch, Eileen E

    2013-03-01

    Amblyopia is the most common cause of monocular visual loss in children, affecting 1.3%-3.6% of children. Current treatments are effective in reducing the visual acuity deficit but many amblyopic individuals are left with residual visual acuity deficits, ocular motor abnormalities, deficient fine motor skills, and risk for recurrent amblyopia. Using a combination of psychophysical, electrophysiological, imaging, risk factor analysis, and fine motor skill assessment, the primary role of binocular dysfunction in the genesis of amblyopia and the constellation of visual and motor deficits that accompany the visual acuity deficit has been identified. These findings motivated us to evaluate a new, binocular approach to amblyopia treatment with the goals of reducing or eliminating residual and recurrent amblyopia and of improving the deficient ocular motor function and fine motor skills that accompany amblyopia. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. Experience of the ARGO autonomous vehicle

    NASA Astrophysics Data System (ADS)

    Bertozzi, Massimo; Broggi, Alberto; Conte, Gianni; Fascioli, Alessandra

    1998-07-01

    This paper presents and discusses the first results obtained with the GOLD (Generic Obstacle and Lane Detection) system as an automatic driver of ARGO. ARGO is a Lancia Thema passenger car equipped with a vision-based system that extracts road and environmental information from the acquired scene. By means of stereo vision, obstacles on the road are detected and localized, while the processing of a single monocular image extracts the road geometry in front of the vehicle. The generality of the underlying approach allows the detection of generic obstacles (without constraints on shape, color, or symmetry) and of lane markings even in dark and strong shadow conditions. The hardware system consists of a 200 MHz Pentium PC with MMX technology and a frame-grabber board able to acquire 3 b/w images simultaneously; the result of the processing (position of obstacles and geometry of the road) is used to drive an actuator on the steering wheel, while debug information is presented to the user on an on-board monitor and an LED-based control panel.

  6. Experience-driven plasticity in binocular vision

    PubMed Central

    Klink, P. Christiaan; Brascamp, Jan W.; Blake, Randolph; van Wezel, Richard J.A.

    2010-01-01

    Summary Experience-driven neuronal plasticity allows the brain to adapt its functional connectivity to recent sensory input. Here we use binocular rivalry [1], an experimental paradigm where conflicting images are presented to the individual eyes, to demonstrate plasticity in the neuronal mechanisms that convert visual information from two separated retinas into single perceptual experiences. Perception during binocular rivalry tended to initially consist of alternations between exclusive representations of monocularly defined images, but upon prolonged exposure, mixture percepts became more prevalent. The completeness of suppression, reflected in the incidence of mixture percepts, plausibly reflects the strength of inhibition that likely plays a role in binocular rivalry [2]. Recovery of exclusivity was possible, but required highly specific binocular stimulation. Documenting the prerequisites for these observed changes in perceptual exclusivity, our experiments suggest experience-driven plasticity at interocular inhibitory synapses, driven by the (lack of) correlated activity of neurons representing the conflicting stimuli. This form of plasticity is consistent with a previously proposed, but largely untested, anti-Hebbian learning mechanism for inhibitory synapses in vision [3, 4]. Our results implicate experience-driven plasticity as one governing principle in the neuronal organization of binocular vision. PMID:20674360

  7. Semi-automatic 2D-to-3D conversion of human-centered videos enhanced by age and gender estimation

    NASA Astrophysics Data System (ADS)

    Fard, Mani B.; Bayazit, Ulug

    2014-01-01

    In this work, we propose a feasible 3D video generation method to enable high-quality visual perception using a monocular uncalibrated camera. Anthropometric distances between standard facial landmarks are approximated based on the person's age and gender. These measurements are used in a 2-stage approach to facilitate the construction of binocular stereo images. Specifically, one view of the background is registered in the initial stage of video shooting. It is followed by an automatically guided displacement of the camera toward its secondary position. At the secondary position the real-time capturing is started and the foreground (viewed person) region is extracted for each frame. After an accurate parallax estimation the extracted foreground is placed in front of the background image that was captured at the initial position. The constructed full view of the initial position, combined with the view of the secondary (current) position, thus forms the complete binocular pair during real-time video shooting. The subjective evaluation results indicate competent depth perception quality with the proposed system.

  8. Aerobic Exercise Effects on Ocular Dominance Plasticity with a Phase Combination Task in Human Adults

    PubMed Central

    Reynaud, Alexandre; Hess, Robert F.

    2017-01-01

    Several studies have shown that short-term monocular patching can induce ocular dominance plasticity in normal adults, in which the patched eye becomes stronger in binocular viewing. There is a recent study showing that exercise enhances this plasticity effect when assessed with binocular rivalry. We address one question, is this enhancement from exercise a general effect such that it is seen for measures of binocular processing other than that revealed using binocular rivalry? Using a binocular phase combination task in which we directly measure each eye's contribution to the binocularly fused percept, we show no additional effect of exercise after short-term monocular occlusion and argue that the enhancement of ocular dominance plasticity from exercise could not be demonstrated with our approach. PMID:28357142

  9. Monocular depth perception using image processing and machine learning

    NASA Astrophysics Data System (ADS)

    Hombali, Apoorv; Gorde, Vaibhav; Deshpande, Abhishek

    2011-10-01

    This paper primarily exploits some of the more obscure but inherent properties of the camera and image to propose a simpler and more efficient way of perceiving depth. The proposed method uses a single stationary camera at an unknown perspective and an unknown height to determine the depth of an object on unknown terrain. To achieve this, a direct correlation between a pixel in an image and the corresponding location in real space has to be formulated. First, a calibration step is undertaken whereby the equation of the plane visible in the field of view is calculated, along with the relative distance between camera and plane, by using a set of derived spatial geometric relations coupled with a few intrinsic properties of the system. The depth of an unknown object is then perceived by first extracting the object under observation using a series of image processing steps and then exploiting the aforementioned mapping between pixel and real-space coordinates. The performance of the algorithm is greatly enhanced by the introduction of reinforcement learning, making the system independent of hardware and environment. Furthermore, the depth calculation function is modified with a supervised learning algorithm, giving consistent improvement in results. Thus, the system uses past experience to optimize the current run successively. Using the above procedure, a series of experiments and trials is carried out to prove the concept and its efficacy.
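
    The pixel-to-real-space mapping described above amounts to back-projecting an image pixel through the camera intrinsics and intersecting the resulting ray with the calibrated ground plane. A minimal numpy sketch under that assumption follows; the intrinsic matrix K and the plane parameters are placeholders standing in for the output of the calibration step:

        import numpy as np

        def pixel_to_ground(u, v, K, n, d):
            # Intersect the viewing ray of pixel (u, v) with the plane n.X + d = 0,
            # both expressed in the camera frame; returns the 3D point on the plane.
            ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # direction of the viewing ray
            t = -d / (n @ ray)                               # scale at which the ray meets the plane
            return t * ray

        # Illustrative calibration values only (not from the paper):
        K = np.array([[800.0, 0.0, 320.0],
                      [0.0, 800.0, 240.0],
                      [0.0, 0.0, 1.0]])
        n = np.array([0.0, -0.94, 0.34])   # ground-plane normal from calibration
        d = 1.5                            # signed distance term of the plane
        print(pixel_to_ground(350, 300, K, n, d))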

  10. Human body motion tracking based on quantum-inspired immune cloning algorithm

    NASA Astrophysics Data System (ADS)

    Han, Hong; Yue, Lichuan; Jiao, Licheng; Wu, Xing

    2009-10-01

    In a static monocular camera system, obtaining a perfect 3D human body posture is a great challenge for current computer vision technology. This paper presents human posture recognition from video sequences using the Quantum-Inspired Immune Cloning Algorithm (QICA). The algorithm includes three parts. First, prior knowledge of human beings is used: the key joint points of the human body are detected automatically from the human contours and from skeletons thinned from those contours. Because of the complexity of human movement, a forecasting mechanism for occluded joint points is introduced to obtain optimal 2D key joint points of the human body. Pose estimation is then recovered by optimizing the fit between the 2D projection of the 3D human key joint points and the 2D detected key joint points using QICA, which recovered the movement of the human body well because this algorithm can acquire not only the global optimal solution but also local optimal solutions.

  11. Study of robot landmark recognition with complex background

    NASA Astrophysics Data System (ADS)

    Huang, Yuqing; Yang, Jia

    2007-12-01

    Perceiving and recognising environmental characteristics is of great importance for assisting a robot in path planning, position navigation and task performance. To solve the problem of monocular-vision-oriented landmark recognition for a mobile intelligent robot moving against a complex background, a nested region-growing algorithm fused with prior color information and based on the current maximum convergence center is proposed, providing localization that is invariant to changes in position, scale, rotation, jitter and weather conditions. Firstly, a novel experimental threshold based on the RGB vision model is used for the first image segmentation, in which some objects and partial scenes with colors similar to the landmarks are detected together with the landmarks. Secondly, with the current maximum convergence center on the segmented image as each growing seed point, the region-growing algorithm establishes several Regions of Interest (ROIs) in order. According to shape characteristics, a quick and effective contour analysis based on primitive elements is applied to decide whether the current ROI should be retained or deleted after each region growing; each ROI is then initially judged and positioned. When this position information is fed back to the gray image, the whole landmark is extracted accurately by a second segmentation on the local image restricted to the landmark area. Finally, landmarks are recognised by a Hopfield neural network. Results from experiments on a large number of images with both campus and urban-district backgrounds show the effectiveness of the proposed algorithm.
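
    As a hedged sketch of the seeded region-growing idea only (grow from the current seed while neighbouring pixels stay within a colour tolerance of the seed colour); the tolerance and 4-connectivity used here are illustrative assumptions, not the paper's parameters:

        import numpy as np

        def region_grow(img, seed, tol=30.0):
            # Grow a region from 'seed' (row, col) over pixels whose colour distance
            # to the seed colour stays below 'tol'; returns a boolean mask.
            h, w = img.shape[:2]
            mask = np.zeros((h, w), dtype=bool)
            seed_color = img[seed].astype(float)
            stack = [seed]
            while stack:
                r, c = stack.pop()
                if r < 0 or r >= h or c < 0 or c >= w or mask[r, c]:
                    continue
                if np.linalg.norm(img[r, c].astype(float) - seed_color) > tol:
                    continue
                mask[r, c] = True
                stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
            return mask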

  12. Multiaccommodative stimuli in VR systems: problems & solutions.

    PubMed

    Marran, L; Schor, C

    1997-09-01

    Virtual reality environments can introduce multiple and sometimes conflicting accommodative stimuli. For instance, with the high-powered lenses commonly used in head-mounted displays, small discrepancies in screen lens placement, caused by manufacturer error or user adjustment focus error, can change the focal depths of the image by a couple of diopters. This can introduce a binocular accommodative stimulus or, if the displacement between the two screens is unequal, an unequal (anisometropic) accommodative stimulus for the two eyes. Systems that allow simultaneous viewing of virtual and real images can also introduce a conflict in accommodative stimuli: When real and virtual images are at different focal planes, both cannot be in focus at the same time, though they may appear to be in similar locations in space. In this paper four unique designs are described that minimize the range of accommodative stimuli and maximize the visual system's ability to cope efficiently with the focus conflicts that remain: pinhole optics, monocular lens addition combined with aniso-accommodation, chromatic bifocal, and bifocal lens system. The advantages and disadvantages of each design are described and recommendation for design choice is given after consideration of the end use of the virtual reality system (e.g., low or high end, entertainment, technical, or medical use). The appropriate design modifications should allow greater user comfort and better performance.

  13. Generic Dynamic Environment Perception Using Smart Mobile Devices.

    PubMed

    Danescu, Radu; Itu, Razvan; Petrovai, Andra

    2016-10-17

    The driving environment is complex and dynamic, and the attention of the driver is continuously challenged; therefore, computer-based assistance achieved by processing image and sensor data may increase traffic safety. While active sensors and stereovision have the advantage of obtaining 3D data directly, monocular vision is easy to set up, and can benefit from the increasing computational power of smart mobile devices and from the fact that almost all of them come with an embedded camera. Several driving assistance applications are available for mobile devices, but they are mostly targeted at simple scenarios and a limited range of obstacle shapes and poses. This paper presents a technique for generic, shape-independent, real-time obstacle detection on mobile devices, based on a dynamic, free-form 3D representation of the environment: the particle-based occupancy grid. Images acquired in real time from the smart mobile device's camera are processed by removing the perspective effect and segmenting the resulting bird's-eye view image to identify candidate obstacle areas, which are then used to update the occupancy grid. The tracked occupancy grid cells are grouped into obstacles depicted as cuboids having position, size, orientation and speed. The easy-to-set-up system is able to reliably detect most obstacles in urban traffic, and its measurement accuracy is comparable to a stereovision system.
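
    The perspective-removal step can be illustrated with a standard inverse perspective mapping: four image points known to lie on the road plane are mapped to a top-down (bird's-eye) grid. This is a generic sketch, not the paper's calibration; the source points and output size below are illustrative placeholders:

        import cv2
        import numpy as np

        def birdseye(frame, src_pts, dst_size=(400, 600)):
            # src_pts: four pixel locations on the road plane, ordered
            # bottom-left, bottom-right, top-right, top-left.
            w, h = dst_size
            dst_pts = np.float32([[0, h - 1], [w - 1, h - 1], [w - 1, 0], [0, 0]])
            H = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
            return cv2.warpPerspective(frame, H, dst_size)

        # Illustrative source points only (would come from the device calibration):
        src = [[100, 480], [540, 480], [400, 300], [240, 300]]
        # top_down = birdseye(frame, src)   # 'frame' would come from the device camera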

  14. Depth Perception and the History of Three-Dimensional Art: Who Produced the First Stereoscopic Images?

    PubMed Central

    2017-01-01

    The history of the expression of three-dimensional structure in art can be traced from the use of occlusion in Palaeolithic cave paintings, through the use of shadow in classical art, to the development of perspective during the Renaissance. However, the history of the use of stereoscopic techniques is controversial. Although the first undisputed stereoscopic images were presented by Wheatstone in 1838, it has been claimed that two sketches by Jacopo Chimenti da Empoli (c. 1600) can be fused to yield an impression of stereoscopic depth, while others suggest that Leonardo da Vinci’s Mona Lisa is the world’s first stereogram. Here, we report the first quantitative study of perceived depth in these works, in addition to more recent works by Salvador Dalí. To control for the contribution of monocular depth cues, ratings of the magnitude and coherence of depth were recorded for both stereoscopic and pseudoscopic presentations, with a genuine contribution of stereoscopic cues revealed by a difference between these scores. Although effects were clear for Wheatstone and Dalí’s images, no such effects could be found for works produced earlier. As such, we have no evidence to reject the conventional view that the first producer of stereoscopic imagery was Sir Charles Wheatstone. PMID:28203349

  15. Depth Perception and the History of Three-Dimensional Art: Who Produced the First Stereoscopic Images?

    PubMed

    Brooks, Kevin R

    2017-01-01

    The history of the expression of three-dimensional structure in art can be traced from the use of occlusion in Palaeolithic cave paintings, through the use of shadow in classical art, to the development of perspective during the Renaissance. However, the history of the use of stereoscopic techniques is controversial. Although the first undisputed stereoscopic images were presented by Wheatstone in 1838, it has been claimed that two sketches by Jacopo Chimenti da Empoli (c. 1600) can be fused to yield an impression of stereoscopic depth, while others suggest that Leonardo da Vinci's Mona Lisa is the world's first stereogram. Here, we report the first quantitative study of perceived depth in these works, in addition to more recent works by Salvador Dalí. To control for the contribution of monocular depth cues, ratings of the magnitude and coherence of depth were recorded for both stereoscopic and pseudoscopic presentations, with a genuine contribution of stereoscopic cues revealed by a difference between these scores. Although effects were clear for Wheatstone and Dalí's images, no such effects could be found for works produced earlier. As such, we have no evidence to reject the conventional view that the first producer of stereoscopic imagery was Sir Charles Wheatstone.

  16. Satellite Articulation Characterization from an Image Trajectory Matrix Using Optimization

    NASA Astrophysics Data System (ADS)

    Curtis, D. H.; Cobb, R. G.

    Autonomous on-orbit satellite servicing and inspection benefits from an inspector satellite that can autonomously gain as much information as possible about the primary satellite. This includes performance of articulated objects such as solar arrays, antennas, and sensors. This paper presents a method of characterizing the articulation of a satellite using resolved monocular imagery. A simulated point cloud representing a nominal satellite with articulating solar panels and a complex articulating appendage is developed and projected to the image coordinates that would be seen from an inspector following a given inspection route. A method is developed to analyze the resulting image trajectory matrix. The developed method takes advantage of the fact that the route of the inspector satellite is known to assist in the segmentation of the points into different rigid bodies, the creation of the 3D point cloud, and the identification of the articulation parameters. Once the point cloud and the articulation parameters are calculated, they can be compared to the known truth. The error in the calculated point cloud is determined as well as the difference between the true workspace of the satellite and the calculated workspace. These metrics can be used to compare the quality of various inspection routes for characterizing the satellite and its articulation.

  17. Design and Analysis of a Single-Camera Omnistereo Sensor for Quadrotor Micro Aerial Vehicles (MAVs).

    PubMed

    Jaramillo, Carlos; Valenti, Roberto G; Guo, Ling; Xiao, Jizhong

    2016-02-06

    We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor's projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis of various system characteristics such as size, catadioptric spatial resolution, and field of view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) resulting from a single image captured in a real-life experiment. We expect the reproducibility of our sensor, as its model parameters can be optimized to satisfy other catadioptric-based omnistereo vision systems under different circumstances.

  18. Effects of Anisometropic Amblyopia on Visuomotor Behavior, Part 2: Visually Guided Reaching

    PubMed Central

    Niechwiej-Szwedo, Ewa; Goltz, Herbert C.; Chandrakumar, Manokaraananthan; Hirji, Zahra; Crawford, J. Douglas; Wong, Agnes M. F.

    2016-01-01

    Purpose The effects of impaired spatiotemporal vision in amblyopia on visuomotor skills have rarely been explored in detail. The goal of this study was to examine the influences of amblyopia on visually guided reaching. Methods Fourteen patients with anisometropic amblyopia and 14 control subjects were recruited. Participants executed reach-to-touch movements toward targets presented randomly 5° or 10° to the left or right of central fixation in three viewing conditions: binocular, monocular amblyopic eye, and monocular fellow eye viewing (left and right monocular viewing for control subjects). Visual feedback of the target was removed on 50% of the trials at the initiation of reaching. Results Reaching accuracy was comparable between patients and control subjects during all three viewing conditions. Patients’ reaching responses were slightly less precise during amblyopic eye viewing, but their precision was normal during binocular or fellow eye viewing. Reaching reaction time was not affected by amblyopia. The duration of the acceleration phase was longer in patients than in control subjects under all viewing conditions, whereas the duration of the deceleration phase was unaffected. Peak acceleration and peak velocity were also reduced in patients. Conclusions Amblyopia affects both the programming and the execution of visually guided reaching. The increased duration of the acceleration phase, as well as the reduced peak acceleration and peak velocity, might reflect a strategy or adaptation of feedforward/feedback control of the visuomotor system to compensate for degraded spatiotemporal vision in amblyopia, allowing patients to optimize their reaching performance. PMID:21051723

  19. Monocular tool control, eye dominance, and laterality in New Caledonian crows.

    PubMed

    Martinho, Antone; Burns, Zackory T; von Bayern, Auguste M P; Kacelnik, Alex

    2014-12-15

    Tool use, though rare, is taxonomically widespread, but morphological adaptations for tool use are virtually unknown. We focus on the New Caledonian crow (NCC, Corvus moneduloides), which displays some of the most innovative tool-related behavior among nonhumans. One of their major food sources is larvae extracted from burrows with sticks held diagonally in the bill, oriented with individual, but not species-wide, laterality. Among possible behavioral and anatomical adaptations for tool use, NCCs possess unusually wide binocular visual fields (up to 60°), suggesting that extreme binocular vision may facilitate tool use. Here, we establish that during natural extractions, tool tips can only be viewed by the contralateral eye. Thus, maintaining binocular view of tool tips is unlikely to have selected for wide binocular fields; the selective factor is more likely to have been to allow each eye to see far enough across the midsagittal line to view the tool's tip monocularly. Consequently, we tested the hypothesis that tool side preference follows eye preference and found that eye dominance does predict tool laterality across individuals. This contrasts with humans' species-wide motor laterality and uncorrelated motor-visual laterality, possibly because bill-held tools are viewed monocularly and move in concert with eyes, whereas hand-held tools are visible to both eyes and allow independent combinations of eye preference and handedness. This difference may affect other models of coordination between vision and mechanical control, not necessarily involving tools. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Novel quantitative assessment of metamorphopsia in maculopathy.

    PubMed

    Wiecek, Emily; Lashkari, Kameran; Dakin, Steven C; Bex, Peter

    2014-11-18

    Patients with macular disease often report experiencing metamorphopsia (visual distortion). Although typically measured with Amsler charts, more quantitative assessments of perceived distortion are desirable to effectively monitor the presence, progression, and remediation of visual impairment. Participants with binocular (n = 33) and monocular (n = 50) maculopathy across seven disease groups, and control participants (n = 10) with no identifiable retinal disease completed a modified Amsler grid assessment (presented on a computer screen with eye tracking to ensure fixation compliance) and two novel assessments to measure metamorphopsia in the central 5° of visual field. A total of 81% (67/83) of participants completed a hyperacuity task where they aligned eight dots in the shape of a square, and 64% (32/50) of participants with monocular distortion completed a spatial alignment task using dichoptic stimuli. Ten controls completed all tasks. Horizontal and vertical distortion magnitudes were calculated for each of the three assessments. Distortion magnitudes were significantly higher in patients than controls in all assessments. There was no significant difference in magnitude of distortion across different macular diseases. There were no significant correlations in overall magnitude of distortion among the three measures, and no significant correlations in localized measures of distortion. Three alternative quantifications of monocular spatial distortion in the central visual field generated uncorrelated estimates of visual distortion. It is therefore unlikely that metamorphopsia is caused solely by retinal displacement, but instead involves additional top-down information, knowledge about the scene, and perhaps cortical reorganization. Copyright 2015 The Association for Research in Vision and Ophthalmology, Inc.

  1. Does partial occlusion promote normal binocular function?

    PubMed

    Li, Jingrong; Thompson, Benjamin; Ding, Zhaofeng; Chan, Lily Y L; Chen, Xiang; Yu, Minbin; Deng, Daming; Hess, Robert F

    2012-10-03

    There is growing evidence that abnormal binocular interactions play a key role in the amblyopia syndrome and represent a viable target for treatment interventions. In this context the use of partial occlusion using optical devices such as Bangerter filters as an alternative to complete occlusion is of particular interest. The aims of this study were to understand why Bangerter filters do not result in improved binocular outcomes compared to complete occlusion, and to compare the effects of Bangerter filters, optical blur and neutral density (ND) filters on normal binocular function. The effects of four strengths of Bangerter filters (0.8, 0.6, 0.4, 0.2) on letter and vernier acuity, contrast sensitivity, stereoacuity, and interocular suppression were measured in 21 observers with normal vision. In a subset of 14 observers, the partial occlusion effects of Bangerter filters, ND filters and plus lenses on stereopsis and interocular suppression were compared. Bangerter filters did not have a graded effect on vision and induced significant disruption to binocular function. This disruption was greater than that of monocular defocus but weaker than that of ND filters. The effect of the Bangerter filters on stereopsis was more pronounced than their effect on monocular acuity, and the induced monocular acuity deficits did not predict the induced deficits in stereopsis. Bangerter filters appear to be particularly disruptive to binocular function. Other interventions, such as optical defocus and those employing computer-generated dichoptic stimulus presentation, may be more appropriate than partial occlusion for targeting binocular function during amblyopia treatment.

  2. A method to detect progression of glaucoma using the multifocal visual evoked potential technique

    PubMed Central

    Wangsupadilok, Boonchai; Kanadani, Fabio N.; Grippo, Tomas M.; Liebmann, Jeffrey M.; Ritch, Robert; Hood, Donald C.

    2010-01-01

    Purpose To describe a method for monitoring progression of glaucoma using the multifocal visual evoked potential (mfVEP) technique. Methods Eighty-seven patients diagnosed with open-angle glaucoma were divided into two groups. Group I comprised 43 patients who had a repeat mfVEP test within 50 days (mean 0.9 ± 0.5 months), and group II comprised 44 patients who had a repeat test after at least 6 months (mean 20.7 ± 9.7 months). Monocular mfVEPs were obtained using a 60-sector pattern reversal dartboard display. Monocular and interocular analyses were performed. Data from the two visits were compared. The total number of abnormal test points with P < 5% within the visual field (total scores) and number of abnormal test points within a cluster (cluster size) were calculated. Data for group I provided a measure of test–retest variability independent of disease progression. Data for group II provided a possible measure of progression. Results The difference in the total scores for group II between visit 1 and visit 2 for the interocular and monocular comparison was significant (P < 0.05) as was the difference in cluster size for the interocular comparison (P < 0.05). Group I did not show a significant change in either total score or cluster size. Conclusion The change in the total score and cluster size over time provides a possible method for assessing progression of glaucoma with the mfVEP technique. PMID:18830654

  3. A special role for binocular visual input during development and as a component of occlusion therapy for treatment of amblyopia.

    PubMed

    Mitchell, Donald E

    2008-01-01

    To review work on animal models of deprivation amblyopia that points to a special role for binocular visual input in the development of spatial vision and as a component of occlusion (patching) therapy for amblyopia. The studies reviewed employ behavioural methods to measure the effects of various early experiential manipulations on the development of the visual acuity of the two eyes. Short periods of concordant binocular input, if continuous, can offset much longer daily periods of monocular deprivation to allow the development of normal visual acuity in both eyes. It appears that the visual system does not weigh all visual input equally in terms of its ability to impact on the development of vision but instead places greater weight on concordant binocular exposure. Experimental models of patching therapy for amblyopia imposed on animals in which amblyopia had been induced by a prior period of early monocular deprivation, indicate that the benefits of patching therapy may be only temporary and decline rapidly after patching is discontinued. However, when combined with critical amounts of binocular visual input each day, the benefits of patching can be both heightened and made permanent. Taken together with demonstrations of retained binocular connections in the visual cortex of monocularly deprived animals, a strong argument is made for inclusion of specific training of stereoscopic vision for part of the daily periods of binocular exposure that should be incorporated as part of any patching protocol for amblyopia.

  4. Object tracking using plenoptic image sequences

    NASA Astrophysics Data System (ADS)

    Kim, Jae Woo; Bae, Seong-Joon; Park, Seongjin; Kim, Do Hyung

    2017-05-01

    Object tracking is an important problem in computer vision research, and partial occlusion is one of its most serious and challenging difficulties. To address the problem, we propose novel approaches to object tracking on plenoptic image sequences that take advantage of the refocusing capability plenoptic images provide. Our approaches take as input sequences of focal stacks constructed from plenoptic image sequences. The proposed image selection algorithms choose, from each focal stack in the sequence, the image that maximizes tracking accuracy. A focus measure approach and a confidence measure approach were proposed for image selection, and both were validated in experiments using thirteen plenoptic image sequences containing heavily occluded target objects. The experimental results showed that the proposed approaches performed satisfactorily compared with conventional 2D object tracking algorithms.
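
    One way to make the image-selection idea concrete is a per-frame focus measure evaluated inside the tracker's current bounding box. The sketch below is a minimal illustration of that general idea using a variance-of-Laplacian sharpness score; it assumes OpenCV and NumPy, the function names are hypothetical, and it does not reproduce the authors' published algorithm.

        # Minimal sketch: choose the best-focused slice of a focal stack by a
        # variance-of-Laplacian focus measure inside the tracked bounding box.
        # Illustrative only; names and parameters are assumptions.
        import cv2
        import numpy as np

        def focus_measure(gray_roi):
            """Variance of the Laplacian: higher values mean a sharper region."""
            return cv2.Laplacian(gray_roi, cv2.CV_64F).var()

        def select_best_slice(focal_stack, bbox):
            """focal_stack: list of BGR images refocused at different depths.
            bbox: (x, y, w, h) of the currently tracked object."""
            x, y, w, h = bbox
            scores = []
            for img in focal_stack:
                roi = cv2.cvtColor(img[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
                scores.append(focus_measure(roi))
            best = int(np.argmax(scores))
            return best, focal_stack[best]

    A tracker would run such a selection once per frame and hand the chosen slice to its ordinary 2D update step, so the target stays in focus even when a foreground occluder dominates the other slices.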

  5. Extracting flat-field images from scene-based image sequences using phase correlation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caron, James N., E-mail: Caron@RSImd.com; Montes, Marcos J.; Obermark, Jerome L.

    Flat-field image processing is an essential step in producing high-quality and radiometrically calibrated images. Flat-fielding corrects for variations in the gain of focal plane array electronics and unequal illumination from the system optics. Typically, a flat-field image is captured by imaging a radiometrically uniform surface. The flat-field image is normalized and removed from the images. There are circumstances, such as with remote sensing, where a flat-field image cannot be acquired in this manner. For these cases, we developed a phase-correlation method that allows the extraction of an effective flat-field image from a sequence of scene-based displaced images. The method uses sub-pixel phase correlation image registration to align the sequence and estimate the static scene. The scene is removed from the sequence, producing a sequence of misaligned flat-field images. An average flat-field image is derived from the realigned flat-field sequence.
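
    The general recipe can be illustrated in a few lines: register the displaced frames with sub-pixel phase correlation, take the per-pixel median of the aligned frames as a static-scene estimate, then divide each raw frame by the scene estimate (shifted back into that frame's geometry) and average the resulting gain patterns. The sketch below follows that reading with scikit-image and SciPy; the function name and parameter choices are assumptions, not the authors' exact method.

        # Approximate flat-field extraction from a drifting scene sequence
        # (hedged sketch only; the published method may differ in detail).
        import numpy as np
        from skimage.registration import phase_cross_correlation
        from scipy.ndimage import shift as nd_shift

        def estimate_flat_field(frames):
            """frames: iterable of 2-D float images of the same slowly displaced scene."""
            frames = [np.asarray(f, dtype=float) for f in frames]
            ref = frames[0]
            shifts, aligned = [(0.0, 0.0)], [ref]
            for f in frames[1:]:
                sh, _, _ = phase_cross_correlation(ref, f, upsample_factor=10)
                shifts.append(tuple(sh))
                aligned.append(nd_shift(f, sh))               # align frame to the reference
            scene = np.median(np.stack(aligned), axis=0)      # static-scene estimate
            gains = []
            for f, sh in zip(frames, shifts):
                scene_here = nd_shift(scene, [-s for s in sh])  # scene in this frame's geometry
                gains.append(f / np.maximum(scene_here, 1e-6))
            flat = np.mean(np.stack(gains), axis=0)
            return flat / flat.mean()                         # normalized flat-field estimate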

  6. Meta-image navigation augmenters for unmanned aircraft systems (MINA for UAS)

    NASA Astrophysics Data System (ADS)

    Çelik, Koray; Somani, Arun K.; Schnaufer, Bernard; Hwang, Patrick Y.; McGraw, Gary A.; Nadke, Jeremy

    2013-05-01

    GPS is a critical sensor for Unmanned Aircraft Systems (UASs) due to its accuracy, global coverage and small hardware footprint, but is subject to denial due to signal blockage or RF interference. When GPS is unavailable, position, velocity and attitude (PVA) performance from other inertial and air data sensors is not sufficient, especially for small UASs. Recently, image-based navigation algorithms have been developed to address GPS outages for UASs, since most of these platforms already include a camera as standard equipage. Performing absolute navigation with real-time aerial images requires georeferenced data, either images or landmarks, as a reference. Georeferenced imagery is readily available today, but requires a large amount of storage, whereas collections of discrete landmarks are compact but must be generated by pre-processing. An alternative, compact source of georeferenced data having large coverage area is open source vector maps from which meta-objects can be extracted for matching against real-time acquired imagery. We have developed a novel, automated approach called MINA (Meta Image Navigation Augmenters), which is a synergy of machine-vision and machine-learning algorithms for map aided navigation. As opposed to existing image map matching algorithms, MINA utilizes publicly available open-source geo-referenced vector map data, such as OpenStreetMap, in conjunction with real-time optical imagery from an on-board, monocular camera to augment the UAS navigation computer when GPS is not available. The MINA approach has been experimentally validated with both actual flight data and flight simulation data and results are presented in the paper.

  7. Automatic PSO-Based Deformable Structures Markerless Tracking in Laparoscopic Cholecystectomy

    NASA Astrophysics Data System (ADS)

    Djaghloul, Haroun; Batouche, Mohammed; Jessel, Jean-Pierre

    An automatic and markerless method for tracking deformable structures (digestive organs) during laparoscopic cholecystectomy is presented; it uses particle swarm optimization (PSO) behaviour together with preoperative a priori knowledge. The shape associated with the global best particles of the population determines a coarse representation of the targeted organ (the gallbladder) in monocular laparoscopic colour images. The swarm behaviour is directed by a new fitness function, optimized to improve detection and tracking performance. The function is defined as a linear combination of two terms, namely the human a priori knowledge term (H) and the particle density term (D). Within the limits of standard PSO characteristics, experimental results on both synthetic and real data show the effectiveness and robustness of our method. Indeed, without the need for explicit initialization, it outperforms existing methods (such as active contours, deformable models and Gradient Vector Flow) in accuracy and convergence rate.
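
    The fitness function is described only as a weighted combination of a prior-knowledge term H and a particle-density term D. As a purely illustrative reading of that structure, the sketch below scores a 2-D particle by whether it falls on an a priori organ mask (H) and by the fraction of neighbouring particles within a fixed radius (D); the weights, radius, and exact term definitions are assumptions, not those of the paper.

        # Toy fitness of the form alpha*H + beta*D; all constants are hypothetical.
        import numpy as np

        def fitness(particle_xy, prior_mask, neighbour_xy, alpha=0.7, beta=0.3, radius=15.0):
            x, y = particle_xy
            h, w = prior_mask.shape
            # H: a priori knowledge term -- does the particle lie on the expected organ region?
            H = float(prior_mask[int(np.clip(y, 0, h - 1)), int(np.clip(x, 0, w - 1))])
            # D: particle-density term -- fraction of swarm neighbours within the radius.
            d = np.linalg.norm(neighbour_xy - np.array([x, y]), axis=1)
            D = float(np.mean(d < radius))
            return alpha * H + beta * D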

  8. Interocular transfer of depth discrimination in intact and in DSO-sectioned pigeons.

    PubMed

    Musumeci, D; Lemeignan, M; Bloch, S

    1991-07-22

    Interocular transfer (IOT) of a depth discrimination task was studied in intact pigeons and with a section of the supraoptic decussation (DSO). Animals were trained to respond to the nearer of two small light emitting diodes placed at different depths in the left and right compartments of a black tunnel. The near stimulus (at 10.5 cm from the eyes) and the far one (at 21 cm) could only be seen one at a time. Though the task was difficult to learn monocularly, intact as well as lesioned animals had good transfer scores with the untrained eye. Success in transfer may be related to the presentation of the discriminanda which assured that their images impinged upon the retinal 'red field'. DSO-transection did not affect IOT possibly because differential oculomotor adjustments needed for focusing near or far targets provide central bilateral and/or binocular information which is not conveyed by the DSO.

  9. Comparison of MR imaging sequences for liver and head and neck interventions: is there a single optimal sequence for all purposes?

    PubMed

    Boll, Daniel T; Lewin, Jonathan S; Duerk, Jeffrey L; Aschoff, Andrik J; Merkle, Elmar M

    2004-05-01

    To compare the appropriate pulse sequences for interventional device guidance during magnetic resonance (MR) imaging at 0.2 T and to evaluate the dependence of sequence selection on the anatomic region of the procedure. Using a C-arm 0.2 T system, four interventional MR sequences were applied in 23 liver cases and during MR-guided neck interventions in 13 patients. The imaging protocol consisted of: multislice turbo spin echo (TSE) T2w, sequential-slice fast imaging with steady precession (FISP), a time-reversed version of FISP (PSIF), and FISP with balanced gradients in all spatial directions (True-FISP) sequences. Vessel conspicuity was rated and contrast-to-noise ratio (CNR) was calculated for each sequence and a differential receiver operating characteristic was performed. Liver findings were detected in 96% using the TSE sequence. PSIF, FISP, and True-FISP imaging showed lesions in 91%, 61%, and 65%, respectively. The TSE sequence offered the best CNR, followed by PSIF imaging. Differential receiver operating characteristic analysis also rated TSE and PSIF to be the superior sequences. Lesions in the head and neck were detected in all cases by TSE and FISP, in 92% using True-FISP, and in 84% using PSIF. True-FISP offered the best CNR, followed by TSE imaging. Vessels appeared bright on FISP and True-FISP imaging and dark on the other sequences. In interventional MR imaging, no single sequence fits all purposes. Image guidance for interventional MR during liver procedures is best achieved by PSIF or TSE, whereas biopsies in the head and neck are best performed using FISP or True-FISP sequences.

  10. Principles of Quantitative MR Imaging with Illustrated Review of Applicable Modular Pulse Diagrams.

    PubMed

    Mills, Andrew F; Sakai, Osamu; Anderson, Stephan W; Jara, Hernan

    2017-01-01

    Continued improvements in diagnostic accuracy using magnetic resonance (MR) imaging will require development of methods for tissue analysis that complement traditional qualitative MR imaging studies. Quantitative MR imaging is based on measurement and interpretation of tissue-specific parameters independent of experimental design, compared with qualitative MR imaging, which relies on interpretation of tissue contrast that results from experimental pulse sequence parameters. Quantitative MR imaging represents a natural next step in the evolution of MR imaging practice, since quantitative MR imaging data can be acquired using currently available qualitative imaging pulse sequences without modifications to imaging equipment. The article presents a review of the basic physical concepts used in MR imaging and how quantitative MR imaging is distinct from qualitative MR imaging. Subsequently, the article reviews the hierarchical organization of major applicable pulse sequences used in this article, with the sequences organized into conventional, hybrid, and multispectral sequences capable of calculating the main tissue parameters of T1, T2, and proton density. While this new concept offers the potential for improved diagnostic accuracy and workflow, awareness of this extension to qualitative imaging is generally low. This article reviews the basic physical concepts in MR imaging, describes commonly measured tissue parameters in quantitative MR imaging, and presents the major available pulse sequences used for quantitative MR imaging, with a focus on the hierarchical organization of these sequences. © RSNA, 2017.

  11. Self calibrating monocular camera measurement of traffic parameters.

    DOT National Transportation Integrated Search

    2009-12-01

    This proposed project will extend the work of previous projects that have developed algorithms and software to measure traffic speed under adverse conditions using un-calibrated cameras. The present implementation uses the WSDOT CCTV cameras moun...

  12. A low cost PSD-based monocular motion capture system

    NASA Astrophysics Data System (ADS)

    Ryu, Young Kee; Oh, Choonsuk

    2007-10-01

    This paper describes a monocular PSD-based motion capture sensor for use with commercial video game systems such as Microsoft's XBOX and Sony's Playstation II. The system is compact, low-cost, and requires only a one-time calibration at the factory. The system includes a PSD (Position Sensitive Detector) and active infrared (IR) LED markers that are placed on the object to be tracked. The PSD sensor is placed in the focal plane of a wide-angle lens. The micro-controller calculates the 3D position of the markers using only the measured intensity and the 2D position on the PSD. A series of experiments was performed to evaluate the performance of our prototype system. The experimental results show that the proposed system offers compact size, low cost, easy installation, and frame rates high enough for high-speed motion tracking in games.
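
    The position computation can be pictured as two independent steps: the IR intensity gives range through an inverse-square falloff (after a one-time factory calibration), and the 2-D spot position on the PSD gives the bearing of the marker through the lens. The sketch below combines the two under a simple pinhole model; the calibration constants and function name are hypothetical and only illustrate the geometry.

        # Toy reconstruction of a marker's 3-D position from PSD data (illustrative only).
        import numpy as np

        F_PX = 400.0      # assumed effective focal length of the wide-angle lens, in pixels
        I_REF = 1.0e4     # assumed intensity reading at the reference range (factory calibration)
        R_REF = 1.0       # reference range in metres

        def marker_position(u, v, intensity):
            """u, v: spot position on the PSD relative to the optical axis (pixels).
            intensity: measured IR intensity of the active marker."""
            r = R_REF * np.sqrt(I_REF / intensity)   # inverse-square law: I ~ 1/r^2
            ray = np.array([u, v, F_PX], dtype=float)
            ray /= np.linalg.norm(ray)               # unit viewing ray through the pinhole model
            return r * ray                           # 3-D marker position in camera coordinates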

  13. Left hemispheric advantage for numerical abilities in the bottlenose dolphin.

    PubMed

    Kilian, Annette; von Fersen, Lorenzo; Güntürkün, Onur

    2005-02-28

    In a two-choice discrimination paradigm, a bottlenose dolphin discriminated relational dimensions between visual numerosity stimuli under monocular viewing conditions. After prior binocular acquisition of the task, two monocular test series with different number stimuli were conducted. In accordance with recent studies on visual lateralization in the bottlenose dolphin, our results revealed an overall advantage of the right visual field. Due to the complete decussation of the optic nerve fibers, this suggests a specialization of the left hemisphere for analysing relational features between stimuli as required in tests for numerical abilities. These processes are typically right hemisphere-based in other mammals (including humans) and birds. The present data provide further evidence for a general right visual field advantage in bottlenose dolphins for visual information processing. It is thus assumed that dolphins possess a unique functional architecture of their cerebral asymmetries. (c) 2004 Elsevier B.V. All rights reserved.

  14. Effect of prescribed prism on monocular interpupillary distances and fitting heights for progressive add lenses.

    PubMed

    Brooks, C W; Riley, H D

    1994-06-01

    Success in fitting progressive addition lenses is dependent upon the accurate placement of the progressive zone. Both eyes must track simultaneously within the boundary of the progressive corridor. Vertical prism will displace the wearer's lines of sight and consequently eye position. Because fitting heights are measured using an empty frame, subjects with vertical phorias usually will fuse, and not show the vertical differences in pupil heights during the measuring process. Therefore, when prescriptions contain vertical prism one must consider the changes in measured fitting heights that will occur once the lenses are placed in the frame. Fitting heights must be altered approximately 0.3 mm for each vertical prism diopter prescribed. The fitting height adjustment is opposite from the base direction of the prescribed prism. An explanation of the effect of prescribed horizontal prism on monocular interpupillary distance (PD) measurements is also included.
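
    As a worked example of the stated rule of thumb, a prescription of 2 vertical prism diopters base-up would call for lowering the measured fitting height by roughly 2 x 0.3 = 0.6 mm. The snippet below simply encodes that arithmetic; the function name and sign convention are illustrative.

        # 0.3 mm of fitting-height change per vertical prism diopter, applied
        # opposite to the base direction (assumed sign convention: positive = raise).
        def fitting_height_adjustment_mm(prism_diopters, base_direction):
            shift = 0.3 * prism_diopters
            return -shift if base_direction == "up" else shift

        print(fitting_height_adjustment_mm(2.0, "up"))   # -0.6 (lower the height ~0.6 mm)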

  15. On improving IED object detection by exploiting scene geometry using stereo processing

    NASA Astrophysics Data System (ADS)

    van de Wouw, Dennis W. J. M.; Dubbelman, Gijs; de With, Peter H. N.

    2015-03-01

    Detecting changes in the environment with respect to an earlier data acquisition is important for several applications, such as finding Improvised Explosive Devices (IEDs). We explore and evaluate the benefit of depth sensing in the context of automatic change detection, where an existing monocular system is extended with a second camera in a fixed stereo setup. We then propose an alternative frame registration that exploits scene geometry, in particular the ground plane. Furthermore, change characterization is applied to localized depth maps to distinguish between 3D physical changes and shadows, which solves one of the main challenges of a monocular system. The proposed system is evaluated on real-world acquisitions, containing geo-tagged test objects of 18 × 18 × 9 cm up to a distance of 60 meters. The proposed extensions lead to a significant reduction of the false-alarm rate by a factor of 3, while simultaneously improving the detection score by 5%.
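
    The shadow/3-D discrimination step can be summarised as: keep a detected 2-D change only where the live depth map also differs from the reference depth map, since a cast shadow changes appearance but not geometry. The sketch below expresses that test over per-pixel depth maps; the tolerance and names are assumptions rather than the system's actual parameters.

        # Keep only changes with physical 3-D extent; discard appearance-only (shadow-like) changes.
        import numpy as np

        def characterize_changes(change_mask, depth_ref, depth_live, tol_m=0.10):
            """change_mask: boolean 2-D change-detection result.
            depth_ref, depth_live: per-pixel depth maps in metres from the stereo rig."""
            valid = np.isfinite(depth_ref) & np.isfinite(depth_live)
            has_depth_change = np.abs(depth_live - depth_ref) > tol_m
            physical = change_mask & valid & has_depth_change      # real 3-D changes
            shadow_like = change_mask & valid & ~has_depth_change  # appearance-only changes
            return physical, shadow_like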

  16. The horizontal optokinetic reflex of the opossum (Didelphis marsupialis aurita): physiological and anatomical studies in normal and early monoenucleated specimens.

    PubMed

    Nasi, J P; Volchan, E; Tecles, M T; Bernardes, R F; Rocha-Miranda, C E

    1997-05-01

    In the opossum the symmetrical binocular horizontal optokinetic nystagmus gives way to an asymmetrical monocular reflex: the nasotemporal (NT) stimulation yielding lower gain than the temporonasal (TN). In adults, monocularly enucleated at postnatal days 21-25 (pnd21-25), the gain of NT responses is markedly increased, approaching that of TN. Severe cell loss was detected in the nucleus of the optic tract (NOT) on the deafferented side in early monoenucleated specimens. In normal animals retinal afferents to the NOT are all crossed, while in animals enucleated at pnd21-25 sparse uncrossed retinal elements were observed. Although this abnormal projection might influence the increased NT response in this subgroup, it is argued that the increased symmetry in monoenucleated opossums may be the result of changes mediated by the commissural connection between both NOTs.

  17. Image quality assessment of silent T2 PROPELLER sequence for brain imaging in infants.

    PubMed

    Kim, Hyun Gi; Choi, Jin Wook; Yoon, Soo Han; Lee, Sieun

    2018-02-01

    Infants are vulnerable to high acoustic noise. Acoustic noise generated by MR scanning can be reduced by a silent sequence. The purpose of this study is to compare the image quality of the conventional and silent T2 PROPELLER sequences for brain imaging in infants. A total of 36 scans were acquired from 24 infants using a 3 T MR scanner. Each patient underwent both conventional and silent T2 PROPELLER sequences. Acoustic noise level was measured. Quantitative and qualitative assessments were performed with the images taken with each sequence. The sound pressure level of the conventional T2 PROPELLER imaging sequence was 92.1 dB and that of the silent T2 PROPELLER imaging sequence was 73.3 dB (reduction of 20%). On quantitative assessment, the two sequences (conventional vs silent T2 PROPELLER) did not show a significant difference in relative contrast (0.069 vs 0.068, p value = 0.536) or signal-to-noise ratio (75.4 vs 114.8, p value = 0.098). Qualitative assessment of overall image quality (p value = 0.572), grey-white differentiation (p value = 0.986), shunt-related artefact (p value > 0.999), motion artefact (p value = 0.801) and myelination degree in different brain regions (p values ≥ 0.092) did not show a significant difference between the two sequences. The silent T2 PROPELLER sequence reduces acoustic noise and generates image quality comparable to that of the conventional sequence. Advances in knowledge: This is the first report to compare silent T2 PROPELLER images with those of conventional T2 PROPELLER images in children.

  18. Retinal Image Quality During Accommodation

    PubMed Central

    López-Gil, N.; Martin, J.; Liu, T.; Bradley, A.; Díaz-Muñoz, D.; Thibos, L.

    2013-01-01

    Purpose We asked if retinal image quality is maximum during accommodation, or sub-optimal due to accommodative error, when subjects perform an acuity task. Methods Subjects viewed a monochromatic (552nm), high-contrast letter target placed at various viewing distances. Wavefront aberrations of the accommodating eye were measured near the endpoint of an acuity staircase paradigm. Refractive state, defined as the optimum target vergence for maximising retinal image quality, was computed by through-focus wavefront analysis to find the power of the virtual correcting lens that maximizes visual Strehl ratio. Results Despite changes in ocular aberrations and pupil size during binocular viewing, retinal image quality and visual acuity typically remain high for all target vergences. When accommodative errors lead to sub-optimal retinal image quality, acuity and measured image quality both decline. However, the effect of accommodation errors on visual acuity is mitigated by pupillary constriction associated with accommodation and binocular convergence, and also by binocular summation of dissimilar retinal image blur. Under monocular viewing conditions some subjects displayed significant accommodative lag that reduced visual performance, an effect that was exacerbated by pharmacological dilation of the pupil. Conclusions Spurious measurement of accommodative error can be avoided when the image quality metric used to determine refractive state is compatible with the focusing criteria used by the visual system to control accommodation. Real focusing errors of the accommodating eye do not necessarily produce a reliably measurable loss of image quality or clinically significant loss of visual performance, probably because of increased depth-of-focus due to pupil constriction. When retinal image quality is close to maximum achievable (given the eye’s higher-order aberrations), acuity is also near maximum. A combination of accommodative lag, reduced image quality, and reduced visual function may be a useful sign for diagnosing functionally-significant accommodative errors indicating the need for therapeutic intervention. PMID:23786386

  19. Retinal image quality during accommodation.

    PubMed

    López-Gil, Norberto; Martin, Jesson; Liu, Tao; Bradley, Arthur; Díaz-Muñoz, David; Thibos, Larry N

    2013-07-01

    We asked if retinal image quality is maximum during accommodation, or sub-optimal due to accommodative error, when subjects perform an acuity task. Subjects viewed a monochromatic (552 nm), high-contrast letter target placed at various viewing distances. Wavefront aberrations of the accommodating eye were measured near the endpoint of an acuity staircase paradigm. Refractive state, defined as the optimum target vergence for maximising retinal image quality, was computed by through-focus wavefront analysis to find the power of the virtual correcting lens that maximizes visual Strehl ratio. Despite changes in ocular aberrations and pupil size during binocular viewing, retinal image quality and visual acuity typically remain high for all target vergences. When accommodative errors lead to sub-optimal retinal image quality, acuity and measured image quality both decline. However, the effect of accommodation errors on visual acuity is mitigated by pupillary constriction associated with accommodation and binocular convergence, and also by binocular summation of dissimilar retinal image blur. Under monocular viewing conditions some subjects displayed significant accommodative lag that reduced visual performance, an effect that was exacerbated by pharmacological dilation of the pupil. Spurious measurement of accommodative error can be avoided when the image quality metric used to determine refractive state is compatible with the focusing criteria used by the visual system to control accommodation. Real focusing errors of the accommodating eye do not necessarily produce a reliably measurable loss of image quality or clinically significant loss of visual performance, probably because of increased depth-of-focus due to pupil constriction. When retinal image quality is close to maximum achievable (given the eye's higher-order aberrations), acuity is also near maximum. A combination of accommodative lag, reduced image quality, and reduced visual function may be a useful sign for diagnosing functionally-significant accommodative errors indicating the need for therapeutic intervention. © 2013 The Authors Ophthalmic & Physiological Optics © 2013 The College of Optometrists.

  20. Automatic building detection based on Purposive FastICA (PFICA) algorithm using monocular high resolution Google Earth images

    NASA Astrophysics Data System (ADS)

    Ghaffarian, Saman; Ghaffarian, Salar

    2014-11-01

    This paper proposes an improved FastICA model named Purposive FastICA (PFICA), initialized by a simple color space transformation and a novel masking approach, to automatically detect buildings from high-resolution Google Earth imagery. ICA and FastICA algorithms are Blind Source Separation (BSS) techniques for unmixing source signals using reference data sets. To overcome the limitations of the ICA and FastICA algorithms and make them purposeful, we developed a novel method involving three main steps: 1) improving the FastICA algorithm using the Moore-Penrose pseudo-inverse matrix model; 2) automated seeding of the PFICA algorithm based on the LUV color space and proposed simple rules to split the image into three regions (shadow + vegetation, bare soil + roads, and buildings, respectively); 3) masking out the final building detection results from the PFICA outputs using the K-means clustering algorithm with two clusters and applying simple morphological operations to remove noise. Evaluation of the results shows that buildings detected from dense and suburban districts with diverse characteristics and color combinations using our proposed method achieve 88.6% and 85.5% overall pixel-based and object-based precision, respectively.
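
    One plausible way to read the pipeline is: unmix the colour channels with FastICA, seed the choice of component with a simple LUV colour rule, binarize the chosen component with 2-cluster K-means, and clean the result morphologically. The sketch below follows that reading with scikit-learn and SciPy; the colour rule, component-selection heuristic, and all names are assumptions and do not reproduce the published PFICA algorithm.

        # Illustrative FastICA + K-means building-mask sketch (not the authors' code).
        import numpy as np
        from sklearn.decomposition import FastICA
        from sklearn.cluster import KMeans
        from scipy import ndimage

        def rough_building_seed(img_luv):
            # Placeholder colour rule standing in for the paper's LUV-based seeding.
            L, u = img_luv[..., 0], img_luv[..., 1]
            return (L > L.mean()) & (np.abs(u - u.mean()) < u.std())

        def detect_buildings(img_rgb, img_luv):
            h, w, _ = img_rgb.shape
            X = img_rgb.reshape(-1, 3).astype(float)
            sources = FastICA(n_components=3, random_state=0).fit_transform(X)
            seed = rough_building_seed(img_luv).reshape(-1).astype(float)
            # Pick the independent component most correlated with the seed mask.
            corrs = [abs(np.corrcoef(sources[:, i], seed)[0, 1]) for i in range(3)]
            comp = sources[:, int(np.argmax(corrs))].reshape(-1, 1)
            labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(comp)
            mask = labels.reshape(h, w)
            if mask[seed.reshape(h, w) > 0].mean() < 0.5:    # make label 1 the building class
                mask = 1 - mask
            return ndimage.binary_opening(mask.astype(bool), iterations=2)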

  1. Trifocal Tensor-Based Adaptive Visual Trajectory Tracking Control of Mobile Robots.

    PubMed

    Chen, Jian; Jia, Bingxi; Zhang, Kaixiang

    2017-11-01

    In this paper, a trifocal tensor-based approach is proposed for the visual trajectory tracking task of a nonholonomic mobile robot equipped with a roughly installed monocular camera. The desired trajectory is expressed by a set of prerecorded images, and the robot is regulated to track the desired trajectory using visual feedback. Trifocal tensor is exploited to obtain the orientation and scaled position information used in the control system, and it works for general scenes owing to the generality of trifocal tensor. In the previous works, the start, current, and final images are required to share enough visual information to estimate the trifocal tensor. However, this requirement can be easily violated for perspective cameras with limited field of view. In this paper, key frame strategy is proposed to loosen this requirement, extending the workspace of the visual servo system. Considering the unknown depth and extrinsic parameters (installing position of the camera), an adaptive controller is developed based on Lyapunov methods. The proposed control strategy works for almost all practical circumstances, including both trajectory tracking and pose regulation tasks. Simulations are made based on the virtual experimentation platform (V-REP) to evaluate the effectiveness of the proposed approach.

  2. PointCom: semi-autonomous UGV control with intuitive interface

    NASA Astrophysics Data System (ADS)

    Rohde, Mitchell M.; Perlin, Victor E.; Iagnemma, Karl D.; Lupa, Robert M.; Rohde, Steven M.; Overholt, James; Fiorani, Graham

    2008-04-01

    Unmanned ground vehicles (UGVs) will play an important role in the nation's next-generation ground force. Advances in sensing, control, and computing have enabled a new generation of technologies that bridge the gap between manual UGV teleoperation and full autonomy. In this paper, we present current research on a unique command and control system for UGVs named PointCom (Point-and-Go Command). PointCom is a semi-autonomous command system for one or multiple UGVs. The system, when complete, will be easy to operate and will enable significant reduction in operator workload by utilizing an intuitive image-based control framework for UGV navigation and allowing a single operator to command multiple UGVs. The project leverages new image processing algorithms for monocular visual servoing and odometry to yield a unique, high-performance fused navigation system. Human Computer Interface (HCI) techniques from the entertainment software industry are being used to develop video-game style interfaces that require little training and build upon the navigation capabilities. By combining an advanced navigation system with an intuitive interface, a semi-autonomous control and navigation system is being created that is robust, user friendly, and less burdensome than many current generation systems.

  3. Plasticity in adult cat visual cortex (area 17) following circumscribed monocular lesions of all retinal layers

    PubMed Central

    Calford, M B; Wang, C; Taglianetti, V; Waleszczyk, W J; Burke, W; Dreher, B

    2000-01-01

    In eight adult cats intense, sharply circumscribed, monocular laser lesions were used to remove all cellular layers of the retina. The extents of the retinal lesions were subsequently confirmed with counts of α-ganglion cells in retinal whole mounts; in some cases these revealed radial segmental degeneration of ganglion cells distal to the lesion.Two to 24 weeks later, area 17 (striate cortex; V1) was studied electrophysiologically in a standard anaesthetized, paralysed (artificially respired) preparation. Recording single- or multineurone activity revealed extensive topographical reorganization within the lesion projection zone (LPZ).Thus, with stimulation of the lesioned eye, about 75 % of single neurones in the LPZ had ‘ectopic’ visual discharge fields which were displaced to normal retina in the immediate vicinity of the lesion.The sizes of the ectopic discharge fields were not significantly different from the sizes of the normal discharge fields. Furthermore, binocular cells recorded from the LPZ, when stimulated via their ectopic receptive fields, exhibited orientation tuning and preferred stimulus velocities which were indistinguishable from those found when the cells were stimulated via the normal eye.However, the responses to stimuli presented via ectopic discharge fields were generally weaker (lower peak discharge rates) than those to presentations via normal discharge fields, and were characterized by a lower-than-normal upper velocity limit.Overall, the properties of the ectopic receptive fields indicate that cortical mechanisms rather than a retinal ‘periphery’ effect underlie the topographic reorganization of area 17 following monocular retinal lesions. PMID:10767137

  4. Public Perception of the Burden of Microtia.

    PubMed

    Byun, Stephanie; Hong, Paul; Bezuhly, Michael

    2016-10-01

    Microtia is associated with psychosocial burden and stigma. The authors' objective was to determine the potential impact of being born with microtia by using validated health state utility assessment measures. An online utility assessment using visual analogue scale, time tradeoff, and standard gamble was used to determine utilities for microtia with or without ipsilateral deafness, monocular blindness, and binocular blindness from a prospective sample of the general population. Utility scores were compared between health states using Wilcoxon and Kruskal-Wallis tests. Univariate regression was performed using sex, age, race, and education as independent predictors of utility scores. Over a 6-month enrollment period, 104 participants were included in the analysis. Visual analogue scale (median 0.80, interquartile range [0.72-0.85]), time tradeoff (0.88 [0.77-0.91]), and standard gamble (0.91 [0.84-0.97]) scores for microtia with ipsilateral deafness were higher (P <0.01) than those of binocular blindness (visual analogue scale, 0.30 [0.20-0.45]; time tradeoff, 0.42 [0.17-0.67]; and standard gamble, 0.52 [0.36-0.78]). Time tradeoff scores for microtia with deafness were not different from those for monocular blindness (0.83 [0.67-0.91]). Higher level of education was associated with higher time tradeoff and standard gamble scores for microtia with or without deafness (P <0.05). Using objective health state utility scores, the current study demonstrates that the perceived burden of microtia with or without deafness is no different from, or less than, that of monocular blindness. Given high utility scores for microtia, delaying autologous reconstruction beyond school entrance age may be justified.

  5. Spatial contrast sensitivity at twilight: luminance, monocularity, and oxygenation.

    PubMed

    Connolly, Desmond M

    2010-05-01

    Visual performance in dim light is compromised by lack of oxygen (hypoxia). The possible influence of altered oxygenation on foveal contrast sensitivity under mesopic (twilight) viewing conditions is relevant to aircrew flying at night, including when using night vision devices, but is poorly documented. Foveal contrast sensitivity was measured binocularly and monocularly in 12 subjects at 7 spatial frequencies, ranging from 0.5 to approximately 16 cycles per degree, using sinusoidal Gabor patch gratings. Hypoxic performance breathing 14.1% oxygen, equivalent to altitude exposure at 3048 m (10,000 ft), was compared with breathing air at sea level (normoxia) at low photopic (28 cd x m(-2)), borderline upper mesopic (approximately 2.1 cd x m(-2)) and midmesopic (approximately 0.26 cd x m(-2)) luminance. Mesopic performance was also assessed breathing 100% oxygen (hyperoxia). Typical 'inverted U' log/log plots of the contrast sensitivity function were obtained, with elevated thresholds (reduced sensitivity) at lower luminance. Binocular viewing enhanced sensitivity by a factor approximating square root of 2 for most conditions, supporting neural summation of the contrast signal, but had greater influence at the lowest light level and highest spatial frequencies (8.26 and 16.51 cpd). Respiratory challenges had no effect. Contrast sensitivity is poorer when viewing monocularly and especially at midmesopic luminance, with relevance to night flying. The foveal contrast sensitivity function is unaffected by respiratory disturbance when twilight conditions favor cone vision, despite known effects on retinal illumination (pupil size). The resilience of the contrast sensitivity function belies the vulnerability of foveal low contrast acuity to mild hypoxia at mesopic luminance.

  6. Perceptual Learning Improves Stereoacuity in Amblyopia

    PubMed Central

    Xi, Jie; Jia, Wu-Li; Feng, Li-Xia; Lu, Zhong-Lin; Huang, Chang-Bing

    2014-01-01

    Purpose. Amblyopia is a developmental disorder that results in both monocular and binocular deficits. Although traditional treatment in clinical practice (i.e., refractive correction, or occlusion by patching and penalization of the fellow eye) is effective in restoring monocular visual acuity, there is little information on how binocular function, especially stereopsis, responds to traditional amblyopia treatment. We aim to evaluate the effects of perceptual learning on stereopsis in observers with amblyopia in the current study. Methods. Eleven observers (21.1 ± 5.1 years, six females) with anisometropic or ametropic amblyopia were trained to judge depth in 10 to 13 sessions. Red–green glasses were used to present three different texture anaglyphs with different disparities but a fixed exposure duration. Stereoacuity was assessed with the Fly Stereo Acuity Test and visual acuity was assessed with the Chinese Tumbling E Chart before and after training. Results. Averaged across observers, training significantly reduced disparity threshold from 776.7″ to 490.4″ (P < 0.01) and improved stereoacuity from 200.3″ to 81.6″ (P < 0.01). Interestingly, visual acuity also significantly improved from 0.44 to 0.35 logMAR (approximately 0.9 lines, P < 0.05) in the amblyopic eye after training. Moreover, the learning effects in two of the three retested observers were largely retained over a 5-month period. Conclusions. Perceptual learning is effective in improving stereo vision in observers with amblyopia. These results, together with previous evidence, suggest that structured monocular and binocular training might be necessary to fully recover degraded visual functions in amblyopia. PMID:24508791

  7. Efficacy and safety of multifocal intraocular lenses following cataract and refractive lens exchange: Metaanalysis of peer-reviewed publications.

    PubMed

    Rosen, Emanuel; Alió, Jorge L; Dick, H Burkhard; Dell, Steven; Slade, Stephen

    2016-02-01

    We performed a metaanalysis of peer-reviewed studies involving implantation of a multifocal intraocular lens (IOL) in presbyopic patients with cataract or having refractive lens exchange (RLE). Previous reviews have considered the use of multifocal IOLs after cataract surgery but not after RLE, whereas greater insight might be gained from examining the full range of studies. Selected studies were examined to collate outcomes with monocular and binocular uncorrected distance, intermediate, and near visual acuity; spectacle independence; contrast sensitivity; visual symptoms; adverse events; and patient satisfaction. In 8797 eyes, the mean postoperative monocular uncorrected distance visual acuity (UDVA) was 0.05 logMAR ± 0.006 (SD) (Snellen equivalent 20/20(-3)). In 6334 patients, the mean binocular UDVA was 0.04 ± 0.00 logMAR (Snellen equivalent 20/20(-2)), with a mean spectacle independence of 80.1%. Monocular mean UDVA did not differ significantly between those who had a cataract procedure and those who had an RLE procedure. Neural adaptation to multifocality may vary among patients. Dr. Alió is a clinical research investigator for Hanita Lenses, Carl Zeiss Meditec AG, Topcon Medical Systems, Inc., Oculentis GmbH, and Akkolens International BV. Dr. Dell is a consultant to Bausch & Lomb and Abbott Medical Optics, Inc. Dr. Slade is a consultant to Alcon Surgical, Inc., Carl Zeiss Meditec AG, and Bausch & Lomb. None of the authors has a financial or proprietary interest in any material or method mentioned. Copyright © 2016 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  8. Effects of brief daily periods of unrestricted vision during early monocular form deprivation on development of visual area 2.

    PubMed

    Zhang, Bin; Tao, Xiaofeng; Wensveen, Janice M; Harwerth, Ronald S; Smith, Earl L; Chino, Yuzo M

    2011-09-14

    Providing brief daily periods of unrestricted vision during early monocular form deprivation reduces the depth of amblyopia. To gain insights into the neural basis of the beneficial effects of this treatment, the binocular and monocular response properties of neurons were quantitatively analyzed in visual area 2 (V2) of form-deprived macaque monkeys. Beginning at 3 weeks of age, infant monkeys were deprived of clear vision in one eye for 12 hours every day until 21 weeks of age. They received daily periods of unrestricted vision for 0, 1, 2, or 4 hours during the form-deprivation period. After behavioral testing to measure the depth of the resulting amblyopia, microelectrode-recording experiments were conducted in V2. The ocular dominance imbalance away from the affected eye was reduced in the experimental monkeys and was generally proportional to the reduction in the depth of amblyopia in individual monkeys. There were no interocular differences in the spatial properties of V2 neurons in any subject group. However, the binocular disparity sensitivity of V2 neurons was significantly higher and binocular suppression was lower in monkeys that had unrestricted vision. The decrease in ocular dominance imbalance in V2 was the neuronal change most closely associated with the observed reduction in the depth of amblyopia. The results suggest that the degree to which extrastriate neurons can maintain functional connections with the deprived eye (i.e., reducing undersampling for the affected eye) is the most significant factor associated with the beneficial effects of brief periods of unrestricted vision.

  9. Effects of Brief Daily Periods of Unrestricted Vision during Early Monocular Form Deprivation on Development of Visual Area 2

    PubMed Central

    Zhang, Bin; Tao, Xiaofeng; Wensveen, Janice M.; Harwerth, Ronald S.; Smith, Earl L.

    2011-01-01

    Purpose. Providing brief daily periods of unrestricted vision during early monocular form deprivation reduces the depth of amblyopia. To gain insights into the neural basis of the beneficial effects of this treatment, the binocular and monocular response properties of neurons were quantitatively analyzed in visual area 2 (V2) of form-deprived macaque monkeys. Methods. Beginning at 3 weeks of age, infant monkeys were deprived of clear vision in one eye for 12 hours every day until 21 weeks of age. They received daily periods of unrestricted vision for 0, 1, 2, or 4 hours during the form-deprivation period. After behavioral testing to measure the depth of the resulting amblyopia, microelectrode-recording experiments were conducted in V2. Results. The ocular dominance imbalance away from the affected eye was reduced in the experimental monkeys and was generally proportional to the reduction in the depth of amblyopia in individual monkeys. There were no interocular differences in the spatial properties of V2 neurons in any subject group. However, the binocular disparity sensitivity of V2 neurons was significantly higher and binocular suppression was lower in monkeys that had unrestricted vision. Conclusions. The decrease in ocular dominance imbalance in V2 was the neuronal change most closely associated with the observed reduction in the depth of amblyopia. The results suggest that the degree to which extrastriate neurons can maintain functional connections with the deprived eye (i.e., reducing undersampling for the affected eye) is the most significant factor associated with the beneficial effects of brief periods of unrestricted vision. PMID:21849427

  10. GPS/Optical/Inertial Integration for 3D Navigation Using Multi-Copter Platforms

    NASA Technical Reports Server (NTRS)

    Dill, Evan T.; Young, Steven D.; Uijt De Haag, Maarten

    2017-01-01

    In concert with the continued advancement of a UAS traffic management system (UTM), the proposed uses of autonomous unmanned aerial systems (UAS) have become more prevalent in both the public and private sectors. To facilitate this anticipated growth, a reliable three-dimensional (3D) positioning, navigation, and mapping (PNM) capability will be required to enable operation of these platforms in challenging environments where global navigation satellite systems (GNSS) may not be available continuously. This is especially the case when the platform's mission requires maneuvering through different and difficult environments, such as outdoor open-sky, outdoor under foliage, outdoor urban, and indoor, and may include transitions between these environments. There may not be a single method to solve the PNM problem for all environments. The research presented in this paper is a subset of a broader research effort, described in [1]. The research is focused on combining data from dissimilar sensor technologies to create an integrated navigation and mapping method that can enable reliable operation in both an outdoor and structured indoor environment. The integrated navigation and mapping design utilizes a Global Positioning System (GPS) receiver, an Inertial Measurement Unit (IMU), a monocular digital camera, and three short to medium range laser scanners. This paper describes specifically the techniques necessary to effectively integrate the monocular camera data within the established mechanization. To evaluate the developed algorithms a hexacopter was built, equipped with the discussed sensors, and both hand-carried and flown through representative environments. This paper highlights the effect that the monocular camera has on the aforementioned sensor integration scheme's reliability, accuracy and availability.
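
    At its simplest, loosely coupled integration of this kind can be thought of as dead-reckoning on the inertial data with corrections from whichever aiding source is currently available (a GPS fix, a monocular visual-odometry displacement, or a laser-scanner update). The toy 1-D complementary filter below conveys only that flavour; the gains, method names, and structure are assumptions and do not represent the paper's estimator.

        # Toy 1-D loosely coupled fusion: IMU propagation plus opportunistic corrections.
        class ToyFusedNavigator:
            def __init__(self, pos=0.0, vel=0.0, k_gps=0.2, k_vo=0.05):
                self.pos, self.vel = pos, vel
                self.k_gps, self.k_vo = k_gps, k_vo   # correction gains (assumed values)

            def propagate(self, accel, dt):
                """Dead-reckon with one IMU acceleration sample."""
                self.vel += accel * dt
                self.pos += self.vel * dt

            def correct_gps(self, gps_pos):
                """Blend in an absolute GPS position when a fix is available."""
                self.pos += self.k_gps * (gps_pos - self.pos)

            def correct_visual(self, measured_displacement, predicted_displacement):
                """Blend the discrepancy between a visual-odometry displacement and the prediction."""
                self.pos += self.k_vo * (measured_displacement - predicted_displacement)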

  11. Evaluation of multifocal visual evoked potentials in patients with Graves' orbitopathy and subclinical optic nerve involvement.

    PubMed

    Pérez-Rico, Consuelo; Rodríguez-González, Natividad; Arévalo-Serrano, Juan; Blanco, Román

    2012-08-01

    Dysthyroid optic neuropathy is the most serious, although infrequent (8-10 %), complication in Graves' orbitopathy (GO). It is known that early stages of compressive optic neuropathy may produce reversible visual field defects, suggesting axoplasmic stasis rather than ganglion cell death. This observational, cross-sectional, case-control study assessed 34 consecutive patients (65 eyes) with Graves' hyperthyroidism and longstanding GO and 31 age-matched control subjects. The patients' multifocal visual evoked potentials (mfVEP) were compared to their clinical and psychophysical (standard automated perimetry [SAP]) and structural (optical coherence tomography [OCT]) diagnostic test data. Abnormal cluster defects were found in 12.3 % and 3.1 % of eyes on the interocular and monocular amplitude analysis mfVEP probability plots, respectively. In addition, mfVEP latency delays were found in 13.8 and 20 % of eyes on the interocular and monocular analysis probability plots, respectively. Interestingly, 19 % of patients with GO had ocular hypertension, and a strong correlation between intraocular pressure measured at upgaze and mfVEP latency was found. MfVEP amplitudes and visual acuity were significantly related to each other (P < 0.05), but not to the latency delays. However, relationships between the interocular or monocular mfVEP amplitude and latency analyses and SAP indices or OCT data were not statistically significant. One-third of our patients with GO showed changes in the mfVEP, indicating significant subclinical optic nerve dysfunction. In this sense, the mfVEP may be a useful diagnostic tool in the clinic for early diagnosis and monitoring of optic nerve function abnormalities in patients with GO.

  12. Perceptual learning improves stereoacuity in amblyopia.

    PubMed

    Xi, Jie; Jia, Wu-Li; Feng, Li-Xia; Lu, Zhong-Lin; Huang, Chang-Bing

    2014-04-15

    Amblyopia is a developmental disorder that results in both monocular and binocular deficits. Although traditional treatment in clinical practice (i.e., refractive correction, or occlusion by patching and penalization of the fellow eye) is effective in restoring monocular visual acuity, there is little information on how binocular function, especially stereopsis, responds to traditional amblyopia treatment. We aim to evaluate the effects of perceptual learning on stereopsis in observers with amblyopia in the current study. Eleven observers (21.1 ± 5.1 years, six females) with anisometropic or ametropic amblyopia were trained to judge depth in 10 to 13 sessions. Red-green glasses were used to present three different texture anaglyphs with different disparities but a fixed exposure duration. Stereoacuity was assessed with the Fly Stereo Acuity Test and visual acuity was assessed with the Chinese Tumbling E Chart before and after training. Averaged across observers, training significantly reduced disparity threshold from 776.7″ to 490.4″ (P < 0.01) and improved stereoacuity from 200.3″ to 81.6″ (P < 0.01). Interestingly, visual acuity also significantly improved from 0.44 to 0.35 logMAR (approximately 0.9 lines, P < 0.05) in the amblyopic eye after training. Moreover, the learning effects in two of the three retested observers were largely retained over a 5-month period. Perceptual learning is effective in improving stereo vision in observers with amblyopia. These results, together with previous evidence, suggest that structured monocular and binocular training might be necessary to fully recover degraded visual functions in amblyopia.

  13. The processing of linear perspective and binocular information for action and perception.

    PubMed

    Bruggeman, Hugo; Yonas, Albert; Konczak, Jürgen

    2007-04-08

    To investigate the processing of linear perspective and binocular information for action and for the perceptual judgment of depth, we presented viewers with an actual Ames trapezoidal window. The display, when presented perpendicular to the line of sight, provided perspective information for a rectangular window slanted in depth, while binocular information specified a planar surface in the fronto-parallel plane. We compared pointing towards the display edges with perceptual judgment of their positions in depth as the display orientation was varied under monocular and binocular view. On monocular trials, pointing and depth judgment were based on the perspective information and failed to respond accurately to changes in display orientation because pictorial information did not vary sufficiently to specify the small differences in orientation. For binocular trials, pointing was based on binocular information and precisely matched the changes in display orientation whereas depth judgment fell short of such adjustment and was based upon both binocular and perspective-specified slant information. The finding that on binocular trials pointing was considerably less responsive to the illusion than perceptual judgment supports an account of two separate processing streams in the human visual system, a ventral pathway involved in object recognition and a dorsal pathway that produces visual information for the control of actions. Previously, similar differences between perception and action were attributed to an alternative explanation, namely that viewers selectively attend to different parts of a display in the two tasks. The finding that under monocular view participants responded to perspective information in both the action and the perception task rules out the attention-based argument.

  14. A Segmentation Method for Lung Parenchyma Image Sequences Based on Superpixels and a Self-Generating Neural Forest

    PubMed Central

    Liao, Xiaolei; Zhao, Juanjuan; Jiao, Cheng; Lei, Lei; Qiang, Yan; Cui, Qiang

    2016-01-01

    Background Lung parenchyma segmentation is often performed as an important pre-processing step in the computer-aided diagnosis of lung nodules based on CT image sequences. However, existing lung parenchyma image segmentation methods cannot fully segment all lung parenchyma images and have a slow processing speed, particularly for images in the top and bottom of the lung and the images that contain lung nodules. Method Our proposed method first uses the position of the lung parenchyma image features to obtain lung parenchyma ROI image sequences. A gradient and sequential linear iterative clustering algorithm (GSLIC) for sequence image segmentation is then proposed to segment the ROI image sequences and obtain superpixel samples. The SGNF, which is optimized by a genetic algorithm (GA), is then utilized for superpixel clustering. Finally, the grey and geometric features of the superpixel samples are used to identify and segment all of the lung parenchyma image sequences. Results Our proposed method achieves higher segmentation precision and greater accuracy in less time. It has an average processing time of 42.21 seconds for each dataset and an average volume pixel overlap ratio of 92.22 ± 4.02% for four types of lung parenchyma image sequences. PMID:27532214
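
    The superpixel stage of such a pipeline can be illustrated with an ordinary SLIC over-segmentation (standing in for the paper's GSLIC variant) followed by simple per-superpixel grey and geometric features, which the downstream clusterer (the SGNF in the paper) would then label as parenchyma or not. The sketch below uses scikit-image; parameter values and names are illustrative assumptions, not the published configuration.

        # Superpixel segmentation and simple per-superpixel features for one CT slice.
        from skimage.segmentation import slic
        from skimage.measure import regionprops

        def superpixel_features(ct_slice):
            """ct_slice: 2-D float image (one slice of the lung ROI sequence)."""
            labels = slic(ct_slice, n_segments=400, compactness=0.1,
                          channel_axis=None, start_label=1)
            feats = []
            for rp in regionprops(labels, intensity_image=ct_slice):
                feats.append({
                    "label": rp.label,
                    "mean_grey": rp.mean_intensity,   # grey-level feature
                    "area": rp.area,                  # geometric features
                    "eccentricity": rp.eccentricity,
                    "centroid": rp.centroid,
                })
            return labels, feats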

  15. Test-retest reproducibility of accommodative facility measures in primary school children.

    PubMed

    Adler, Paul; Scally, Andrew J; Barrett, Brendan T

    2018-05-08

    To determine the test-retest reproducibility of accommodative facility (AF) measures in an unselected sample of UK primary school children. Using ±2.00 DS flippers and a viewing distance of 40 cm, AF was measured in 136 children (range 4-12 years, average 8.1 ± 2.1) by five testers on three occasions (average interval between successive tests: eight days, range 1-21 days). On each occasion, AF was measured monocularly and binocularly, for two minutes. Full datasets were obtained in 111 children (81.6 per cent). Intra-individual variation in AF was large (standard deviation [SD] = 3.8 cycles per minute [cpm]) and there was variation due to the identity of the tester (SD = 1.6 cpm). On average, AF was greater: (i) in monocular compared to binocular testing (by 1.4 cpm, p < 0.001); (ii) in the second minute of testing compared to the first (by 1.3 cpm, p < 0.001); (iii) in older compared to younger children (for example, AF for 4/5-year-olds was 3.3 cpm lower than in children ≥ 10 years old, p = 0.009); and (iv) on subsequent testing occasions (for example, visit-2 AF was 2.0 cpm higher than visit-1 AF, p < 0.001). After the first minute of testing at visit-1, only 36.9 per cent of children exceeded published normative values for AF (≥ 11 cpm monocularly and ≥ 8 cpm binocularly), but this rose to 83.8 per cent after the third test. Using less stringent pass criteria (≥ 6 cpm monocularly and ≥ 3 cpm binocularly), the equivalent figures were 82.9 and 96.4 per cent, respectively. Reduced AF did not co-exist with abnormal near point of accommodation or reduced visual acuity. The results reveal considerable intra-individual variability in raw AF measures in children. When the results are considered as pass/fail, children who initially exhibit normal AF continued to do so on repeat testing. Conversely, the vast majority of children with initially reduced AF exhibit normal performance on repeat testing. Using established pass/fail criteria, the prevalence of persistently reduced AF in this sample is 3.6 per cent. © 2018 Optometry Australia.

  16. Three-dimensional T1rho-weighted MRI at 1.5 Tesla.

    PubMed

    Borthakur, Arijitt; Wheaton, Andrew; Charagundla, Sridhar R; Shapiro, Erik M; Regatte, Ravinder R; Akella, Sarma V S; Kneeland, J Bruce; Reddy, Ravinder

    2003-06-01

    To design and implement a magnetic resonance imaging (MRI) pulse sequence capable of performing three-dimensional T(1rho)-weighted MRI on a 1.5-T clinical scanner, and determine the optimal sequence parameters, both theoretically and experimentally, so that the energy deposition by the radiofrequency pulses in the sequence, measured as the specific absorption rate (SAR), does not exceed safety guidelines for imaging human subjects. A three-pulse cluster was pre-encoded to a three-dimensional gradient-echo imaging sequence to create a three-dimensional, T(1rho)-weighted MRI pulse sequence. Imaging experiments were performed on a GE clinical scanner with a custom-built knee-coil. We validated the performance of this sequence by imaging articular cartilage of a bovine patella and comparing T(1rho) values measured by this sequence to those obtained with a previously tested two-dimensional imaging sequence. Using a previously developed model for SAR calculation, the imaging parameters were adjusted such that the energy deposition by the radiofrequency pulses in the sequence did not exceed safety guidelines for imaging human subjects. The actual temperature increase due to the sequence was measured in a phantom by an MRI-based temperature mapping technique. Following these experiments, the performance of this sequence was demonstrated in vivo by obtaining T(1rho)-weighted images of the knee joint of a healthy individual. Calculated T(1rho) of articular cartilage in the specimen was similar for both the three-dimensional and two-dimensional methods (84 +/- 2 msec and 80 +/- 3 msec, respectively). The temperature increase in the phantom resulting from the sequence was 0.015 degrees C, which is well below the established safety guidelines. Images of the human knee joint in vivo demonstrate a clear delineation of cartilage from surrounding tissues. We developed and implemented a three-dimensional T(1rho)-weighted pulse sequence on a 1.5-T clinical scanner. Copyright 2003 Wiley-Liss, Inc.
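    For context, T(1rho) values such as those quoted above are commonly obtained by fitting the mono-exponential spin-lock decay, a standard relation stated here for orientation rather than taken from this record, where TSL is the spin-lock duration:

      S(\mathrm{TSL}) = S_0 \, e^{-\mathrm{TSL}/T_{1\rho}}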

  17. Optimization of a double inversion recovery sequence for noninvasive synovium imaging of joint effusion in the knee.

    PubMed

    Jahng, Geon-Ho; Jin, Wook; Yang, Dal Mo; Ryu, Kyung Nam

    2011-05-01

    We wanted to optimize a double inversion recovery (DIR) sequence to image joint effusion regions of the knee, especially intracapsular or intrasynovial imaging in the suprapatellar bursa and patellofemoral joint space. Computer simulations were performed to determine the optimum inversion times (TI) for suppressing both fat and water signals, and a DIR sequence was optimized based on the simulations for distinguishing synovitis from fluid. In vivo studies were also performed on individuals who showed joint effusion on routine knee MR images to demonstrate the feasibility of using the DIR sequence with a 3T whole-body MR scanner. To compare intracapsular or intrasynovial signals on the DIR images, intermediate density-weighted images and/or post-enhanced T1-weighted images were acquired. The timings to enhance the synovial contrast from the fluid components were TI1 = 2830 ms and TI2 = 254 ms for suppressing the water and fat signals, respectively. Improved contrast for the intrasynovial area in the knees was observed with the DIR turbo spin-echo pulse sequence compared to the intermediate density-weighted sequence. Imaging contrast obtained noninvasively with the DIR sequence was similar to that of the post-enhanced T1-weighted sequence. The DIR sequence may be useful for delineating synovium without using contrast materials.
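    As background for the reported inversion times, the longitudinal magnetization available at readout after a double inversion recovery preparation is commonly written, under the idealized assumption of full relaxation between repetitions, as

      M_z = M_0\left(1 - 2e^{-TI_2/T_1} + 2e^{-TI_1/T_1}\right)

    where TI_1 is measured from the first inversion to readout and TI_2 from the second inversion to readout. Choosing the pair so that M_z vanishes for both the water and fat T_1 values suppresses both signals, which is the role of the TI_1 = 2830 ms and TI_2 = 254 ms reported above; the finite repetition time used in practice modifies the exact solution.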

  18. Looking at eye dominance from a different angle: is sighting strength related to hand preference?

    PubMed

    Carey, David P; Hutchinson, Claire V

    2013-10-01

    Sighting dominance (the behavioural preference for one eye over the other under monocular viewing conditions) has traditionally been thought of as a robust individual trait. However, Khan and Crawford (2001) have shown that, under certain viewing conditions, eye preference reverses as a function of horizontal gaze angle. Remarkably, the reversal of sighting from one eye to the other depends on which hand is used to reach out and grasp the target. Their procedure provides an ideal way to measure the strength of monocular preference for sighting, which may be related to other indicators of hemispheric specialisation for speech, language and motor function. Therefore, we hypothesised that individuals with consistent side preferences (e.g., right hand, right eye) should have more robust sighting dominance than those with crossed lateral preferences. To test this idea, we compared strength of eye dominance in individuals who are consistently right or left sided for hand and foot preference with those who are not. We also modified their procedure in order to minimise a potential image size confound, suggested by Banks et al. (2004) as an explanation of Khan and Crawford's results. We found that the sighting dominance switch occurred at similar eccentricities when we controlled for effects of hand occlusion and target size differences. We also found that sighting dominance thresholds change predictably with the hand used. However, we found no evidence for relationships between strength of hand preference as assessed by questionnaire or by pegboard performance and strength of sighting dominance. Similarly, participants with consistent hand and foot preferences did not show stronger eye preference as assessed using the Khan and Crawford procedure. These data are discussed in terms of indirect relationships between sighting dominance, hand preference and cerebral specialisation for language and motor control. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Effects of local myopic defocus on refractive development in monkeys.

    PubMed

    Smith, Earl L; Hung, Li-Fang; Huang, Juan; Arumugam, Baskar

    2013-11-01

    Visual signals that produce myopia are mediated by local, regionally selective mechanisms. However, little is known about spatial integration for signals that slow eye growth. The purpose of this study was to determine whether the effects of myopic defocus are integrated in a local manner in primates. Beginning at 24 ± 2 days of age, seven rhesus monkeys were reared with monocular spectacles that produced 3 diopters (D) of relative myopic defocus in the nasal visual field of the treated eye but allowed unrestricted vision in the temporal field (NF monkeys). Seven monkeys were reared with monocular +3 D lenses that produced relative myopic defocus across the entire field of view (FF monkeys). Comparison data from previous studies were available for 11 control monkeys, 8 monkeys that experienced 3 D of hyperopic defocus in the nasal field, and 6 monkeys exposed to 3 D of hyperopic defocus across the entire field. Refractive development, corneal power, and axial dimensions were assessed at 2- to 4-week intervals using retinoscopy, keratometry, and ultrasonography, respectively. Eye shape was assessed using magnetic resonance imaging. In response to full-field myopic defocus, the FF monkeys developed compensating hyperopic anisometropia, the degree of which was relatively constant across the horizontal meridian. In contrast, the NF monkeys exhibited compensating hyperopic changes in refractive error that were greatest in the nasal visual field. The changes in the pattern of peripheral refractions in the NF monkeys reflected interocular differences in vitreous chamber shape. As with form deprivation and hyperopic defocus, the effects of myopic defocus are mediated by mechanisms that integrate visual signals in a local, regionally selective manner in primates. These results are in agreement with the hypothesis that peripheral vision can influence eye shape and potentially central refractive error in a manner that is independent of central visual experience.

  20. Predicting through-focus visual acuity with the eye's natural aberrations.

    PubMed

    Kingston, Amanda C; Cox, Ian G

    2013-10-01

    To develop a predictive optical modeling process that utilizes individual computer eye models along with a novel through-focus image quality metric. Individual eye models were implemented in optical design software (Zemax, Bellevue, WA) based on evaluation of ocular aberrations, pupil diameter, visual acuity, and accommodative response of 90 subjects (180 eyes; 24-63 years of age). Monocular high-contrast minimum angle of resolution (logMAR) acuity was assessed at 6 m, 2 m, 1 m, 67 cm, 50 cm, 40 cm, 33 cm, 28 cm, and 25 cm. While the subject fixated on the lowest readable line of acuity, total ocular aberrations and pupil diameter were measured three times each using the Complete Ophthalmic Analysis System (COAS HD VR) at each distance. A subset of 64 mature presbyopic eyes was used to predict the clinical logMAR acuity performance of five novel multifocal contact lens designs. To validate predictability of the design process, designs were manufactured and tested clinically on a population of 24 mature presbyopes (having at least +1.50 D spectacle add at 40 cm). Seven object distances were used in the validation study (6 m, 2 m, 1 m, 67 cm, 50 cm, 40 cm, and 25 cm) to measure monocular high-contrast logMAR acuity. Baseline clinical through-focus logMAR was shown to correlate highly (R² = 0.85) with predicted logMAR from individual eye models. At all object distances, each of the five multifocal lenses showed less than one line difference, on average, between predicted and clinical normalized logMAR acuity. Correlation showed R² between 0.90 and 0.97 for all multifocal designs. Computer-based models that account for patient's aberrations, pupil diameter changes, and accommodative amplitude can be used to predict the performance of contact lens designs. With this high correlation (R² ≥ 0.90) and high level of predictability, more design options can be explored in the computer to optimize performance before a lens is manufactured and tested clinically.

  1. Accommodation response measurements for integral 3D image

    NASA Astrophysics Data System (ADS)

    Hiura, H.; Mishina, T.; Arai, J.; Iwadate, Y.

    2014-03-01

    We measured accommodation responses under integral photography (IP), binocular stereoscopic, and real object display conditions, under both binocular and monocular viewing. The equipment we used was an optometric device and a 3D display. We developed the 3D display for IP and binocular stereoscopic images that comprises a high-resolution liquid crystal display (LCD) and a high-density lens array. The LCD has a resolution of 468 dpi and a diagonal size of 4.8 inches. The high-density lens array comprises 106 x 69 micro lenses that have a focal length of 3 mm and diameter of 1 mm. The lenses are arranged in a honeycomb pattern. The 3D display was positioned 60 cm from an observer under IP and binocular stereoscopic display conditions. The target was presented at eight depth positions relative to the 3D display: 15, 10, and 5 cm in front of the 3D display, on the 3D display panel, and 5, 10, 15 and 30 cm behind the 3D display under the IP and binocular stereoscopic display conditions. Under the real object display condition, the target was displayed on the 3D display panel, and the 3D display was placed at the eight positions. The results suggest that the IP image induced more natural accommodation responses compared to the binocular stereoscopic image. The accommodation responses of the IP image were weaker than those of a real object; however, they showed a similar tendency to those of the real object under the two viewing conditions. Therefore, IP can induce accommodation to the depth positions of 3D images.

  2. Processing Dynamic Image Sequences from a Moving Sensor.

    DTIC Science & Technology

    1984-02-01

    Only fragments of this report's table of contents were retrieved for this record; they indicate analyses of roadsign and industrial image sequences, including a roadsign sequence with redundant features, roadsign subimages, and selected-feature error and local-search values.

  3. Transformation of light double cones in the human retina: the origin of trichromatism, of 4D-spatiotemporal vision, and of patchwise 4D Fourier transformation in Talbot imaging

    NASA Astrophysics Data System (ADS)

    Lauinger, Norbert

    1997-09-01

    The interpretation of the 'inverted' retina of primates as an 'optoretina' (a diffractive cellular 3D phase grating that transforms light cones) integrates the functional, structural, and oscillatory aspects of a cortical layer. It is therefore relevant to consider prenatal developments as a basis of the macro- and micro-geometry of the inner eye. This geometry becomes relevant for the postnatal trichromatic synchrony organization (TSO) as well as the adaptive levels of human vision. It is shown that the following functional performances all become possible: trichromatism in photopic vision, monocular spatiotemporal 3D- and 4D-motion detection, and Fourier optical image transformation with extraction of invariances. To transform light cones into reciprocal gratings, the spectral phase conditions become relevant first in the eikonal of the geometrical optical imaging before the retinal 3D grating, then in the von Laue (and reciprocal von Laue) equation for 3D grating optics inside the grating, and finally in the periodicity of the Talbot-2/Fresnel planes in the near field behind the grating. It is becoming possible to technically realize, at least in some specific aspects, such a cortical optoretina sensor element with its typical hexagonal-concentric structure, which leads to these visual functions.
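    For reference, the Talbot (self-imaging) planes mentioned here recur behind a periodic structure of period d, illuminated at wavelength \lambda, at multiples of the paraxial Talbot length, a standard diffraction result not specific to this retinal model:

      z_T = \frac{2 d^2}{\lambda}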

  4. Design and Analysis of a Single-Camera Omnistereo Sensor for Quadrotor Micro Aerial Vehicles (MAVs) †

    PubMed Central

    Jaramillo, Carlos; Valenti, Roberto G.; Guo, Ling; Xiao, Jizhong

    2016-01-01

    We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor’s projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis on various system characteristics such as its size, catadioptric spatial resolution, field-of-view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) resulting out of a single image captured from a real-life experiment. We expect the reproducibility of our sensor as its model parameters can be optimized to satisfy other catadioptric-based omnistereo vision under different circumstances. PMID:26861351
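    The triangulation step referred to above can be illustrated with a generic midpoint method for two back-projected rays; the sketch below is a textbook construction and does not reproduce the paper's catadioptric projection model or its uncertainty analysis.

      # Midpoint triangulation of two back-projected rays (generic illustration).
      import numpy as np

      def triangulate_midpoint(o1, d1, o2, d2):
          """Return the 3D point midway between the closest points of two rays.
          o1, o2: ray origins (3,); d1, d2: ray directions (3,)."""
          o1, d1, o2, d2 = map(np.asarray, (o1, d1, o2, d2))
          d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
          b = o2 - o1
          d1d2 = float(d1 @ d2)
          denom = 1.0 - d1d2 ** 2
          if denom < 1e-12:                      # near-parallel rays: no stable solution
              return None
          t1 = (b @ d1 - (b @ d2) * d1d2) / denom
          t2 = (b @ d1 * d1d2 - b @ d2) / denom
          p1, p2 = o1 + t1 * d1, o2 + t2 * d2    # closest points on each ray
          return 0.5 * (p1 + p2)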

  5. Generic Dynamic Environment Perception Using Smart Mobile Devices

    PubMed Central

    Danescu, Radu; Itu, Razvan; Petrovai, Andra

    2016-01-01

    The driving environment is complex and dynamic, and the attention of the driver is continuously challenged; therefore, computer-based assistance achieved by processing image and sensor data may increase traffic safety. While active sensors and stereovision have the advantage of obtaining 3D data directly, monocular vision is easy to set up, and can benefit from the increasing computational power of smart mobile devices, and from the fact that almost all of them come with an embedded camera. Several driving assistance applications are available for mobile devices, but they are mostly targeted at simple scenarios and a limited range of obstacle shapes and poses. This paper presents a technique for generic, shape-independent real-time obstacle detection for mobile devices, based on a dynamic, free-form 3D representation of the environment: the particle-based occupancy grid. Images acquired in real time from the smart mobile device's camera are processed by removing the perspective effect and segmenting the resulting bird's-eye-view image to identify candidate obstacle areas, which are then used to update the occupancy grid. The tracked occupancy grid cells are grouped into obstacles, depicted as cuboids with position, size, orientation, and speed. The easy-to-set-up system is able to reliably detect most obstacles in urban traffic, and its measurement accuracy is comparable to a stereovision system. PMID:27763501
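    The perspective-removal step can be sketched with a plain inverse perspective mapping in OpenCV; the four source points, frame size, and output extent below are placeholder assumptions, since the real mapping comes from the device camera's calibration and mounting geometry.

      # Inverse-perspective-mapping sketch: warp a road image into a bird's-eye view
      # before segmenting candidate obstacle areas.
      import cv2
      import numpy as np

      def birds_eye_view(frame, src_pts, out_size=(400, 600)):
          """Warp the road plane to an overhead view. src_pts: four image points (px)
          bounding a known rectangle on the road, ordered TL, TR, BR, BL."""
          w, h = out_size
          dst_pts = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
          H = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
          return cv2.warpPerspective(frame, H, (w, h))

      # Example use with hypothetical coordinates for a 1280x720 frame:
      # top_view = birds_eye_view(frame, [(560, 450), (720, 450), (1150, 700), (130, 700)])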

  6. The prevalence of visual deficiencies among 1979 general aviation accident airmen.

    DOT National Transportation Integrated Search

    1981-07-01

    Analyses of the accident experience of pilots who were monocular, did not meet (even the liberal) vision standards, had color vision defects and no operational restrictions, or wore contact lenses, have shown higher-than-expected accident experience ...

  7. Diuretic-enhanced gadolinium excretory MR urography: comparison of conventional gradient-echo sequences and echo-planar imaging.

    PubMed

    Nolte-Ernsting, C C; Tacke, J; Adam, G B; Haage, P; Jung, P; Jakse, G; Günther, R W

    2001-01-01

    The aim of this study was to investigate the utility of different gadolinium-enhanced T1-weighted gradient-echo techniques in excretory MR urography. In 74 urologic patients, excretory MR urography was performed using various T1-weighted gradient-echo (GRE) sequences after injection of gadolinium-DTPA and low-dose furosemide. The examinations included conventional GRE sequences and echo-planar imaging (GRE EPI), both obtained with 3D data sets and 2D projection images. Breath-hold acquisition was used primarily. In 20 of 74 examinations, we compared breath-hold imaging with respiratory gating. Breath-hold imaging was significantly superior to respiratory gating for the visualization of pelvicaliceal systems, but not for the ureters. Complete MR urograms were obtained within 14-20 s using 3D GRE EPI sequences and in 20-30 s with conventional 3D GRE sequences. Ghost artefacts caused by ureteral peristalsis often occurred with conventional 3D GRE imaging and were almost completely suppressed in EPI sequences (p < 0.0001). Susceptibility effects were more pronounced on GRE EPI MR urograms and calculi measured 0.8-21.7% greater in diameter compared with conventional GRE sequences. Increased spatial resolution degraded the image quality only in GRE-EPI urograms. In projection MR urography, the entire pelvicaliceal system was imaged by acquisition of a fast single-slice sequence, and the conventional 2D GRE technique provided better morphological accuracy than 2D GRE EPI projection images (p < 0.0003). Fast 3D GRE EPI sequences improve the clinical practicability of excretory MR urography, especially in elderly or critically ill patients unable to suspend breathing for more than 20 s. Conventional GRE sequences are superior to EPI in high-resolution detail MR urograms and in projection imaging.

  8. Image-based computer-assisted diagnosis system for benign paroxysmal positional vertigo

    NASA Astrophysics Data System (ADS)

    Kohigashi, Satoru; Nakamae, Koji; Fujioka, Hiromu

    2005-04-01

    We developed an image-based computer-assisted diagnosis system for benign paroxysmal positional vertigo (BPPV) that consists of the balance control system simulator, the 3D eye movement simulator, and a method for extracting the nystagmus response directly from an eye movement image sequence. In the system, the causes and conditions of BPPV are estimated by searching the database for records matching the nystagmus response for the observed eye image sequence of the patient with BPPV. The database includes the nystagmus responses for simulated eye movement sequences. The eye movement velocity is obtained by using the balance control system simulator that allows us to simulate BPPV under various conditions such as canalithiasis, cupulolithiasis, number of otoconia, otoconium size, and so on. Then the eye movement image sequence is displayed on the CRT by the 3D eye movement simulator. The nystagmus responses are extracted from the image sequence by the proposed method and are stored in the database. In order to enhance the diagnosis accuracy, the nystagmus response for a newly simulated sequence is matched with that for the observed sequence. From the matched simulation conditions, the causes and conditions of BPPV are estimated. We applied our image-based computer-assisted diagnosis system to two real eye movement image sequences for patients with BPPV to show its validity.

  9. Enhanced spatio-temporal alignment of plantar pressure image sequences using B-splines.

    PubMed

    Oliveira, Francisco P M; Tavares, João Manuel R S

    2013-03-01

    This article presents an enhanced methodology to align plantar pressure image sequences simultaneously in time and space. The temporal alignment of the sequences is accomplished using B-splines in the time modeling, and the spatial alignment can be attained using several geometric transformation models. The methodology was tested on a dataset of 156 real plantar pressure image sequences (3 sequences for each foot of the 26 subjects) that was acquired using a common commercial plate during barefoot walking. In the alignment of image sequences that were synthetically deformed both in time and space, an outstanding accuracy was achieved with the cubic B-splines. This accuracy was significantly better (p < 0.001) than the one obtained using the best solution proposed in our previous work. When applied to align real image sequences with unknown transformation involved, the alignment based on cubic B-splines also achieved superior results than our previous methodology (p < 0.001). The consequences of the temporal alignment on the dynamic center of pressure (COP) displacement was also assessed by computing the intraclass correlation coefficients (ICC) before and after the temporal alignment of the three image sequence trials of each foot of the associated subject at six time instants. The results showed that, generally, the ICCs related to the medio-lateral COP displacement were greater when the sequences were temporally aligned than the ICCs of the original sequences. Based on the experimental findings, one can conclude that the cubic B-splines are a remarkable solution for the temporal alignment of plantar pressure image sequences. These findings also show that the temporal alignment can increase the consistency of the COP displacement on related acquired plantar pressure image sequences.
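    A minimal sketch of the cubic B-spline time modeling is given below: a clamped B-spline defined by a few control values maps one sequence's time axis onto another's, and frames are linearly interpolated at the warped times. The knot layout and control values are illustrative assumptions; in the article the warp is estimated by optimizing a similarity measure between sequences.

      # Cubic B-spline temporal warp and frame resampling (illustrative sketch).
      import numpy as np
      from scipy.interpolate import BSpline

      def temporal_warp(n_frames, control_values, degree=3):
          """Build f: [0, n_frames-1] -> [0, n_frames-1] as a clamped cubic B-spline."""
          c = np.asarray(control_values, dtype=float)
          n_inner = len(c) - degree - 1
          knots = np.concatenate([np.zeros(degree + 1),
                                  np.linspace(0, 1, n_inner + 2)[1:-1],
                                  np.ones(degree + 1)])          # clamped knot vector
          spline = BSpline(knots, c, degree)
          t = np.linspace(0, 1, n_frames)
          return np.clip(spline(t), 0, n_frames - 1)

      def resample_sequence(seq, warped_times):
          """Linearly interpolate image frames (T, H, W) at fractional time indices."""
          lo = np.floor(warped_times).astype(int)
          hi = np.minimum(lo + 1, len(seq) - 1)
          w = (warped_times - lo)[:, None, None]
          return (1 - w) * seq[lo] + w * seq[hi]

      # Example: warp a 60-frame sequence with a slow-down at the start.
      # times = temporal_warp(60, [0, 5, 20, 40, 55, 59]); aligned = resample_sequence(seq, times)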

  10. Optimized protocols for cardiac magnetic resonance imaging in patients with thoracic metallic implants.

    PubMed

    Olivieri, Laura J; Cross, Russell R; O'Brien, Kendall E; Ratnayaka, Kanishka; Hansen, Michael S

    2015-09-01

    Cardiac magnetic resonance (MR) imaging is a valuable tool in congenital heart disease; however patients frequently have metal devices in the chest from the treatment of their disease that complicate imaging. Methods are needed to improve imaging around metal implants near the heart. Basic sequence parameter manipulations have the potential to minimize artifact while limiting effects on image resolution and quality. Our objective was to design cine and static cardiac imaging sequences to minimize metal artifact while maintaining image quality. Using systematic variation of standard imaging parameters on a fluid-filled phantom containing commonly used metal cardiac devices, we developed optimized sequences for steady-state free precession (SSFP), gradient recalled echo (GRE) cine imaging, and turbo spin-echo (TSE) black-blood imaging. We imaged 17 consecutive patients undergoing routine cardiac MR with 25 metal implants of various origins using both standard and optimized imaging protocols for a given slice position. We rated images for quality and metal artifact size by measuring metal artifact in two orthogonal planes within the image. All metal artifacts were reduced with optimized imaging. The average metal artifact reduction for the optimized SSFP cine was 1.5+/-1.8 mm, and for the optimized GRE cine the reduction was 4.6+/-4.5 mm (P < 0.05). Quality ratings favored the optimized GRE cine. Similarly, the average metal artifact reduction for the optimized TSE images was 1.6+/-1.7 mm (P < 0.05), and quality ratings favored the optimized TSE imaging. Imaging sequences tailored to minimize metal artifact are easily created by modifying basic sequence parameters, and images are superior to standard imaging sequences in both quality and artifact size. Specifically, for optimized cine imaging a GRE sequence should be used with settings that favor short echo time, i.e. flow compensation off, weak asymmetrical echo and a relatively high receiver bandwidth. For static black-blood imaging, a TSE sequence should be used with fat saturation turned off and high receiver bandwidth.

  11. In vivo Proton Electron Double Resonance Imaging of Mice with Fast Spin Echo Pulse Sequence

    PubMed Central

    Sun, Ziqi; Li, Haihong; Petryakov, Sergey; Samouilov, Alex; Zweier, Jay L.

    2011-01-01

    Purpose To develop and evaluate a 2D fast spin echo (FSE) pulse sequence for enhancing temporal resolution and reducing tissue heating for in vivo proton electron double resonance imaging (PEDRI) of mice. Materials and Methods A four-compartment phantom containing 2 mM TEMPONE was imaged at 20.1 mT using 2D FSE-PEDRI and regular gradient echo (GRE)-PEDRI pulse sequences. Control mice were infused with TEMPONE over ∼1 min followed by time-course imaging using the 2D FSE-PEDRI sequence at intervals of 10 – 30 s between image acquisitions. The average signal intensity from the time-course images was analyzed using a first-order kinetics model. Results Phantom experiments demonstrated that EPR power deposition can be greatly reduced using the FSE-PEDRI pulse sequence compared to the conventional gradient echo pulse sequence. High temporal resolution was achieved at ∼4 s per image acquisition using the FSE-PEDRI sequence with a good image SNR in the range of 233-266 in the phantom study. The TEMPONE half-life measured in vivo was ∼72 s. Conclusion Thus, the FSE-PEDRI pulse sequence enables fast in vivo functional imaging of free radical probes in small animals greatly reducing EPR irradiation time with decreased power deposition and provides increased temporal resolution. PMID:22147559
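    The first-order kinetics analysis mentioned above amounts to fitting an exponential decay to the time-course signal and converting the rate constant to a half-life; the sketch below does this with synthetic placeholder data (the 72 s value is used only to generate the example curve).

      # Fit S(t) = S0 * exp(-k t) and report t_1/2 = ln(2) / k (synthetic data).
      import numpy as np
      from scipy.optimize import curve_fit

      def first_order(t, s0, k):
          return s0 * np.exp(-k * t)

      t = np.arange(0, 300, 15.0)                        # s, imaging interval (placeholder)
      signal = first_order(t, 250.0, np.log(2) / 72.0)   # synthetic decay, 72 s half-life
      signal += np.random.normal(0, 5.0, t.size)         # measurement noise

      (p_s0, p_k), _ = curve_fit(first_order, t, signal, p0=(signal[0], 0.01))
      print(f"half-life = {np.log(2) / p_k:.1f} s")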

  12. Abdominal MR imaging in children: motion compensation, sequence optimization, and protocol organization.

    PubMed

    Chavhan, Govind B; Babyn, Paul S; Vasanawala, Shreyas S

    2013-05-01

    Familiarity with basic sequence properties and their trade-offs is necessary for radiologists performing abdominal magnetic resonance (MR) imaging. Acquiring diagnostic-quality MR images in the pediatric abdomen is challenging due to motion, inability to breath hold, varying patient size, and artifacts. Motion-compensation techniques (eg, respiratory gating, signal averaging, suppression of signal from moving tissue, swapping phase- and frequency-encoding directions, use of faster sequences with breath holding, parallel imaging, and radial k-space filling) can improve image quality. Each of these techniques is more suitable for use with certain sequences and acquisition planes and in specific situations and age groups. Different T1- and T2-weighted sequences work better in different age groups and with differing acquisition planes and have specific advantages and disadvantages. Dynamic imaging should be performed differently in younger children than in older children. In younger children, the sequence and the timing of dynamic phases need to be adjusted. Different sequences work better in smaller children and in older children because of differing breath-holding ability, breathing patterns, field of view, and use of sedation. Hence, specific protocols should be maintained for younger children and older children. Combining longer-higher-resolution sequences and faster-lower-resolution sequences helps acquire diagnostic-quality images in a reasonable time. © RSNA, 2013.

  13. Image Encryption Algorithm Based on Hyperchaotic Maps and Nucleotide Sequences Database

    PubMed Central

    2017-01-01

    Image encryption technology is one of the main means of ensuring the safety of image information. Using the characteristics of chaos, such as randomness, regularity, ergodicity, and sensitivity to initial values, combined with the unique spatial conformation of DNA molecules and their unique information storage and processing ability, an efficient method for image encryption based on chaos theory and a DNA sequence database is proposed. In this scheme, pixel gray values are transformed and pixel locations are scrambled using chaotic sequences, and a hyperchaotic mapping between quaternary sequences and DNA sequences is established and combined with the transformation logic between DNA sequences. Bases are then substituted according to displacement rules using DNA coding over a number of iterations driven by an enhanced quaternary hyperchaotic sequence generated by the Chen chaotic system. The cipher feedback mode and chaos iteration are employed in the encryption process to enhance the confusion and diffusion properties of the algorithm. Theoretical analysis and experimental results show that the proposed scheme not only demonstrates excellent encryption but also effectively resists chosen-plaintext attack, statistical attack, and differential attack. PMID:28392799
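    As a much-simplified illustration of the chaotic-scrambling idea (only one ingredient of the scheme above, which additionally uses Chen hyperchaos, quaternary coding, and DNA-base substitution), the sketch below derives a key-dependent pixel permutation from a logistic map.

      # Key-dependent pixel scrambling from a logistic map (simplified illustration).
      import numpy as np

      def logistic_sequence(x0, n, r=3.99):
          """Iterate x_{k+1} = r * x_k * (1 - x_k); behaves chaotically for r close to 4."""
          xs = np.empty(n)
          x = x0
          for i in range(n):
              x = r * x * (1 - x)
              xs[i] = x
          return xs

      def scramble(img, key=0.3456789):
          flat = img.ravel()
          order = np.argsort(logistic_sequence(key, flat.size))  # key-dependent permutation
          return flat[order].reshape(img.shape), order

      def unscramble(scrambled, order):
          flat = np.empty(scrambled.size, dtype=scrambled.dtype)
          flat[order] = scrambled.ravel()
          return flat.reshape(scrambled.shape)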

  14. Quantitative analysis of image quality for acceptance and commissioning of an MRI simulator with a semiautomatic method.

    PubMed

    Chen, Xinyuan; Dai, Jianrong

    2018-05-01

    Magnetic Resonance Imaging (MRI) simulation differs from diagnostic MRI in purpose, technical requirements, and implementation. We propose a semiautomatic method for image acceptance and commissioning for the scanner, the radiofrequency (RF) coils, and pulse sequences for an MRI simulator. The ACR MRI accreditation large phantom was used for image quality analysis with seven parameters. Standard ACR sequences with a split head coil were adopted to examine the scanner's basic performance. The performance of simulation RF coils were measured and compared using the standard sequence with different clinical diagnostic coils. We used simulation sequences with simulation coils to test the quality of image and advanced performance of the scanner. Codes and procedures were developed for semiautomatic image quality analysis. When using standard ACR sequences with a split head coil, image quality passed all ACR recommended criteria. The image intensity uniformity with a simulation RF coil decreased about 34% compared with the eight-channel diagnostic head coil, while the other six image quality parameters were acceptable. Those two image quality parameters could be improved to more than 85% by built-in intensity calibration methods. In the simulation sequences test, the contrast resolution was sensitive to the FOV and matrix settings. The geometric distortion of simulation sequences such as T1-weighted and T2-weighted images was well-controlled in the isocenter and 10 cm off-center within a range of ±1% (2 mm). We developed a semiautomatic image quality analysis method for quantitative evaluation of images and commissioning of an MRI simulator. The baseline performances of simulation RF coils and pulse sequences have been established for routine QA. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
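    For reference, the image intensity uniformity referred to above is typically reported as the ACR percent integral uniformity over a large ROI, computed from the mean signals of the brightest and darkest small regions inside it (a standard definition, not quoted from this record):

      \mathrm{PIU} = 100\left(1 - \frac{S_{\max} - S_{\min}}{S_{\max} + S_{\min}}\right)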

  15. Comparison of magnetic resonance imaging sequences for depicting the subthalamic nucleus for deep brain stimulation.

    PubMed

    Nagahama, Hiroshi; Suzuki, Kengo; Shonai, Takaharu; Aratani, Kazuki; Sakurai, Yuuki; Nakamura, Manami; Sakata, Motomichi

    2015-01-01

    Electrodes are surgically implanted into the subthalamic nucleus (STN) of Parkinson's disease patients to provide deep brain stimulation. To ensure correct positioning, the anatomic location of the STN must be determined preoperatively. Magnetic resonance imaging has been used for pinpointing the location of the STN. To identify the optimal imaging sequence for identifying the STN, we compared images produced with T2 star-weighted angiography (SWAN), gradient echo T2*-weighted imaging, and fast spin echo T2-weighted imaging in 6 healthy volunteers. Our comparison involved measurement of the contrast-to-noise ratio (CNR) for the STN and substantia nigra and a radiologist's interpretations of the images. Of the sequences examined, the CNR and qualitative scores for STN visualization were significantly higher on SWAN images than on the other images (p < 0.01). The kappa value for SWAN images (0.74) was the highest of the three sequences for visualizing the STN. SWAN is the sequence best suited for identifying the STN at the present time.

  16. Infrared thermal facial image sequence registration analysis and verification

    NASA Astrophysics Data System (ADS)

    Chen, Chieh-Li; Jian, Bo-Lin

    2015-03-01

    To study the emotional responses of subjects to the International Affective Picture System (IAPS), infrared thermal facial image sequence is preprocessed for registration before further analysis such that the variance caused by minor and irregular subject movements is reduced. Without affecting the comfort level and inducing minimal harm, this study proposes an infrared thermal facial image sequence registration process that will reduce the deviations caused by the unconscious head shaking of the subjects. A fixed image for registration is produced through the localization of the centroid of the eye region as well as image translation and rotation processes. Thermal image sequencing will then be automatically registered using the two-stage genetic algorithm proposed. The deviation before and after image registration will be demonstrated by image quality indices. The results show that the infrared thermal image sequence registration process proposed in this study is effective in localizing facial images accurately, which will be beneficial to the correlation analysis of psychological information related to the facial area.

  17. [Cinematography of ocular fundus with a jointed optical system and tv or cine-camera (author's transl)].

    PubMed

    Kampik, A; Rapp, J

    1979-02-01

    A method of cinematography of the ocular fundus is introduced which, by connecting a camera to an indirect ophthalmoscope, allows the monocular picture of the fundus produced by the ophthalmic lens to be recorded.

  18. Enhanced learning of natural visual sequences in newborn chicks.

    PubMed

    Wood, Justin N; Prasad, Aditya; Goldman, Jason G; Wood, Samantha M W

    2016-07-01

    To what extent are newborn brains designed to operate over natural visual input? To address this question, we used a high-throughput controlled-rearing method to examine whether newborn chicks (Gallus gallus) show enhanced learning of natural visual sequences at the onset of vision. We took the same set of images and grouped them into either natural sequences (i.e., sequences showing different viewpoints of the same real-world object) or unnatural sequences (i.e., sequences showing different images of different real-world objects). When raised in virtual worlds containing natural sequences, newborn chicks developed the ability to recognize familiar images of objects. Conversely, when raised in virtual worlds containing unnatural sequences, newborn chicks' object recognition abilities were severely impaired. In fact, the majority of the chicks raised with the unnatural sequences failed to recognize familiar images of objects despite acquiring over 100 h of visual experience with those images. Thus, newborn chicks show enhanced learning of natural visual sequences at the onset of vision. These results indicate that newborn brains are designed to operate over natural visual input.

  19. Comparison of the quality of different magnetic resonance image sequences of multiple myeloma.

    PubMed

    Sun, Zhao-yong; Zhang, Hai-bo; Li, Shuo; Wang, Yun; Xue, Hua-dan; Jin, Zheng-yu

    2015-02-01

    To compare the image quality of the T1WI fat phase, T1WI water phase, short time inversion recovery (STIR), and diffusion weighted imaging (DWI) sequences in the evaluation of multiple myeloma (MM). A total of 20 MM patients were enrolled in this study. All patients underwent scanning with coronal T1WI fat phase, coronal T1WI water phase, coronal STIR, and axial DWI sequences. The image quality of the four different sequences was evaluated. The image was divided into seven sections (head and neck, chest, abdomen, pelvis, thigh, leg, and foot), and the signal-to-noise ratio (SNR) was measured at seven skeletal segments (skull, spine, pelvis, humerus, femur, tibia and fibula, and ribs). In addition, 20 active MM lesions were selected, and the contrast-to-noise ratio (CNR) of each scan sequence was calculated. The average image quality scores of the T1WI fat phase, T1WI water phase, STIR, and DWI sequences were 4.19 ± 0.70, 4.16 ± 0.73, 3.89 ± 0.70, and 3.76 ± 0.68, respectively. The image quality of the T1-fat phase and T1-water phase was significantly higher than that of the STIR (P=0.000 and P=0.001) and DWI sequences (both P=0.000); however, there was no significant difference between the T1-fat and T1-water phases (P=0.723) or between the STIR and DWI sequences (P=0.167). The SNR of the T1WI fat phase was significantly higher than those of the other three sequences (all P=0.000), and there was no significant difference among the other three sequences (all P>0.05). Although the CNR of the DWI sequence was slightly higher than those of the other three sequences, there was no significant difference among them (all P>0.05). Imaging with the T1WI fat phase, T1WI water phase, STIR, and DWI sequences each has certain advantages, and they should be combined in the diagnosis of MM.

  20. MISTICA: Minimum Spanning Tree-based Coarse Image Alignment for Microscopy Image Sequences

    PubMed Central

    Ray, Nilanjan; McArdle, Sara; Ley, Klaus; Acton, Scott T.

    2016-01-01

    Registration of an in vivo microscopy image sequence is necessary in many significant studies, including studies of atherosclerosis in large arteries and the heart. Significant cardiac and respiratory motion of the living subject, occasional spells of focal plane changes, drift in the field of view, and long image sequences are the principal roadblocks. The first step in such a registration process is the removal of translational and rotational motion. Next, a deformable registration can be performed. The focus of our study here is to remove the translation and/or rigid body motion that we refer to here as coarse alignment. The existing techniques for coarse alignment are unable to accommodate long sequences often consisting of periods of poor quality images (as quantified by a suitable perceptual measure). Many existing methods require the user to select an anchor image to which other images are registered. We propose a novel method for coarse image sequence alignment based on minimum weighted spanning trees (MISTICA) that overcomes these difficulties. The principal idea behind MISTICA is to re-order the images in shorter sequences, to demote nonconforming or poor quality images in the registration process, and to mitigate the error propagation. The anchor image is selected automatically making MISTICA completely automated. MISTICA is computationally efficient. It has a single tuning parameter that determines graph width, which can also be eliminated by way of additional computation. MISTICA outperforms existing alignment methods when applied to microscopy image sequences of mouse arteries. PMID:26415193
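    The minimum-spanning-tree idea can be sketched as follows: build a graph whose edge weights are pairwise frame dissimilarities, extract its MST, and traverse it from an automatically chosen root so that each frame is registered to its tree parent and poor-quality frames end up on leaves rather than propagating error. The dissimilarity measure and root heuristic below are placeholders, not the perceptual measure or anchor selection used in MISTICA.

      # Minimum-spanning-tree ordering of a frame sequence (illustrative sketch).
      import numpy as np
      from scipy.sparse.csgraph import minimum_spanning_tree, breadth_first_order

      def frame_dissimilarity(a, b):
          # Placeholder metric: mean squared intensity difference between two frames.
          return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

      def mst_registration_order(frames):
          n = len(frames)
          w = np.zeros((n, n))
          for i in range(n):
              for j in range(i + 1, n):
                  w[i, j] = w[j, i] = frame_dissimilarity(frames[i], frames[j])
          tree = minimum_spanning_tree(w)            # sparse MST of the frame graph
          sym = tree + tree.T                        # undirected version for traversal
          root = int(np.argmin(w.sum(axis=0)))       # placeholder anchor: most "central" frame
          order, parents = breadth_first_order(sym, root, directed=False)
          return order, parents                      # register frame k to frames[parents[k]]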

  1. MISTICA: Minimum Spanning Tree-Based Coarse Image Alignment for Microscopy Image Sequences.

    PubMed

    Ray, Nilanjan; McArdle, Sara; Ley, Klaus; Acton, Scott T

    2016-11-01

    Registration of an in vivo microscopy image sequence is necessary in many significant studies, including studies of atherosclerosis in large arteries and the heart. Significant cardiac and respiratory motion of the living subject, occasional spells of focal plane changes, drift in the field of view, and long image sequences are the principal roadblocks. The first step in such a registration process is the removal of translational and rotational motion. Next, a deformable registration can be performed. The focus of our study here is to remove the translation and/or rigid body motion that we refer to here as coarse alignment. The existing techniques for coarse alignment are unable to accommodate long sequences often consisting of periods of poor quality images (as quantified by a suitable perceptual measure). Many existing methods require the user to select an anchor image to which other images are registered. We propose a novel method for coarse image sequence alignment based on minimum weighted spanning trees (MISTICA) that overcomes these difficulties. The principal idea behind MISTICA is to reorder the images in shorter sequences, to demote nonconforming or poor quality images in the registration process, and to mitigate the error propagation. The anchor image is selected automatically making MISTICA completely automated. MISTICA is computationally efficient. It has a single tuning parameter that determines graph width, which can also be eliminated by the way of additional computation. MISTICA outperforms existing alignment methods when applied to microscopy image sequences of mouse arteries.

  2. Diffusion-weighted imaging of the liver with multiple b values: effect of diffusion gradient polarity and breathing acquisition on image quality and intravoxel incoherent motion parameters--a pilot study.

    PubMed

    Dyvorne, Hadrien A; Galea, Nicola; Nevers, Thomas; Fiel, M Isabel; Carpenter, David; Wong, Edmund; Orton, Matthew; de Oliveira, Andre; Feiweier, Thorsten; Vachon, Marie-Louise; Babb, James S; Taouli, Bachir

    2013-03-01

    To optimize intravoxel incoherent motion (IVIM) diffusion-weighted (DW) imaging by estimating the effects of diffusion gradient polarity and breathing acquisition scheme on image quality, signal-to-noise ratio (SNR), IVIM parameters, and parameter reproducibility, as well as to investigate the potential of IVIM in the detection of hepatic fibrosis. In this institutional review board-approved prospective study, 20 subjects (seven healthy volunteers, 13 patients with hepatitis C virus infection; 14 men, six women; mean age, 46 years) underwent IVIM DW imaging with four sequences: (a) respiratory-triggered (RT) bipolar (BP) sequence, (b) RT monopolar (MP) sequence, (c) free-breathing (FB) BP sequence, and (d) FB MP sequence. Image quality scores were assessed for all sequences. A biexponential analysis with the Bayesian method yielded true diffusion coefficient (D), pseudodiffusion coefficient (D*), and perfusion fraction (PF) in liver parenchyma. Mixed-model analysis of variance was used to compare image quality, SNR, IVIM parameters, and interexamination variability between the four sequences, as well as the ability to differentiate areas of liver fibrosis from normal liver tissue. Image quality with RT sequences was superior to that with FB acquisitions (P = .02) and was not affected by gradient polarity. SNR did not vary significantly between sequences. IVIM parameter reproducibility was moderate to excellent for PF and D, while it was less reproducible for D*. PF and D were both significantly lower in patients with hepatitis C virus than in healthy volunteers with the RT BP sequence (PF = 13.5% ± 5.3 [standard deviation] vs 9.2% ± 2.5, P = .038; D = [1.16 ± 0.07] × 10(-3) mm(2)/sec vs [1.03 ± 0.1] × 10(-3) mm(2)/sec, P = .006). The RT BP DW imaging sequence had the best results in terms of image quality, reproducibility, and ability to discriminate between healthy and fibrotic liver with biexponential fitting.
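    The biexponential IVIM model fitted here is commonly written, with the symbols defined above (PF the perfusion fraction, D the true diffusion coefficient, D* the pseudodiffusion coefficient, and b the diffusion weighting), as

      \frac{S(b)}{S_0} = \mathrm{PF}\, e^{-b D^{*}} + (1 - \mathrm{PF})\, e^{-b D}

    (some formulations write the perfusion term as e^{-b(D + D^{*})}; the simplified form above is the one most often used for fitting).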

  3. The effect of lens-induced anisometropia on accommodation and vergence during human visual development.

    PubMed

    Bharadwaj, Shrikant R; Candy, T Rowan

    2011-06-01

    Clear and single binocular vision, a prerequisite for normal human visual development, is achieved through accommodation and vergence. Anisometropia is associated with abnormal visual development, but its impact on accommodation and vergence, and therefore on the individual's visual experience, is not known. This study determined the impact of transiently induced anisometropia on accommodative and vergence performance of the typically developing human visual system. One hundred eighteen subjects (age range, 2.9 months to 41.1 years) watched a cartoon movie that moved between 80 and 33 cm under six different viewing conditions: binocular and monocular, and with ±2 diopters (D) and ±4 D of lens-induced anisometropia. Twenty-one subjects (age range, 3.1 months to 12.1 years) also watched the movie with 11% induced aniseikonia. Accommodation and vergence were recorded in both eyes using a videoretinoscope (25 Hz). The main effect of viewing condition was statistically significant for both accommodation and vergence (both P < 0.001), with monocular accommodative and vergence gains statistically significantly smaller than the binocular and four induced anisometropia conditions (P < 0.001 for both accommodation and vergence). The main effect of age approached significance for accommodation (P = 0.06) and was not significant for vergence (P = 0.32). Accommodative and vergence gains with induced aniseikonia were not statistically significantly different from the binocular condition (both P > 0.5). Accommodative and vergence gains of the typically developing visual system deteriorated marginally (accommodation more than vergence) with transiently induced anisometropia (up to ±4 D) and did not deteriorate significantly with induced aniseikonia of 11%. Some binocular cues remained with ±4 D of induced anisometropia and 11% induced aniseikonia, as indicated by the accommodative and vergence gains being higher than in monocular viewing.

  4. Tonic accommodation predicts closed-loop accommodation responses.

    PubMed

    Liu, Chunming; Drew, Stefanie A; Borsting, Eric; Escobar, Amy; Stark, Lawrence; Chase, Christopher

    2016-12-01

    The purpose of this study is to examine the potential relationship between tonic accommodation (TA), near work induced TA-adaptation and the steady state closed-loop accommodation response (AR). Forty-two graduate students participated in the study. Various aspects of their accommodation system were objectively measured using an open-field infrared auto-refractor (Grand Seiko WAM-5500). Tonic accommodation was assessed in a completely dark environment. The association between TA and closed-loop AR was assessed using linear regression correlations and t-test comparisons. Initial mean baseline TA was 1.84 diopters (D) (SD ±1.29 D) with a wide distribution range (-0.43 D to 5.14 D). For monocular visual tasks, baseline TA was significantly correlated with the closed-loop AR. The slope of the best fit line indicated that closed-loop AR varied by approximately 0.3 D for every 1 D change in TA. This ratio was consistent across a variety of viewing distances and different near work tasks, including both static targets and continuous reading. Binocular reading conditions weakened the correlation between baseline TA and AR, although results remained statistically significant. The 10-min near reading task with a 3 D demand did not reveal significant near work induced TA-adaptation for either monocular or binocular conditions. Consistently, the TA-adaptation did not show any correlation with AR during reading. This study found a strong association between open-loop TA and closed-loop AR across a variety of viewing distances and different near work tasks. The difference between the correlations under monocular and binocular reading conditions suggests a potential role for vergence compensation during binocular closed-loop AR. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. LogMAR and Stereoacuity in Keratoconus Corrected with Spectacles and Rigid Gas-permeable Contact Lenses.

    PubMed

    Nilagiri, Vinay Kumar; Metlapally, Sangeetha; Kalaiselvan, Parthasarathi; Schor, Clifton M; Bharadwaj, Shrikant R

    2018-04-01

    This study showed an improvement in three-dimensional depth perception of subjects with bilateral and unilateral keratoconus with rigid gas-permeable (RGP) contact lens wear, relative to spectacles. This novel information will aid clinicians to consider RGP contact lenses as a management modality in keratoconic patients complaining of depth-related difficulties with their spectacles. The aim of this study was to systematically compare changes in logMAR acuity and stereoacuity from best-corrected spherocylindrical spectacles to RGP contact lenses in bilateral and unilateral keratoconus vis-à-vis age-matched control subjects. Monocular and binocular logMAR acuity and random-dot stereoacuity were determined in subjects with bilateral (n = 30; 18 to 24 years) and unilateral (n = 10; 18 to 24 years) keratoconus and 20 control subjects using standard psychophysical protocols. Median (25th to 75th interquartile range) monocular (right eye) and binocular logMAR acuity and stereoacuity improved significantly from spectacles to RGP contact lenses in the bilateral keratoconus cohort (P < .001). Only monocular logMAR acuity of the affected eye and stereoacuity improved from spectacles to RGP contact lenses in the unilateral keratoconus cohort (P < .001). There was no significant change in the binocular logMAR acuity from spectacles to RGP contact lenses in the unilateral keratoconus cohort. The magnitude of improvement in binocular logMAR acuity and stereoacuity was also greater for the bilateral compared with the unilateral keratoconus cohort. All outcome measures of cases with RGP contact lenses remained poorer than control subjects (P < .001). Binocular resolution and stereoacuity improve from spectacles to RGP contact lenses in bilateral keratoconus, whereas only stereoacuity improves from spectacles to RGP contact lenses in unilateral keratoconus. The magnitude of improvement in visual performance is greater for the bilateral compared with the unilateral keratoconus cohort.

  6. Resultant vertical prism in toric soft contact lenses.

    PubMed

    Sulley, Anna; Hawke, Ryan; Lorenz, Kathrine Osborn; Toubouti, Youssef; Olivares, Giovanna

    2015-08-01

    Rotational stability of toric soft contact lenses (TSCLs) is achieved using a range of designs. Designs utilising prism or peripheral ballast may result in residual prism in the optic zone. This study quantifies the vertical prism in the central 6mm present in TSCLs with various stabilisation methods. Vertical prism was computed using published refractive index and vertical thickness changes in the central optic zone on a full lens thickness map. Thickness maps were measured using scanning transmission microscopy. Designs tested were reusable, silicone hydrogel and hydrogel TSCLs: SofLens(®) Toric, PureVision(®)2 for Astigmatism, PureVision(®) Toric, Biofinity(®) Toric, Avaira(®) Toric, clariti(®) toric, AIR OPTIX(®) for ASTIGMATISM and ACUVUE OASYS(®) for ASTIGMATISM; with eight parameter combinations for each lens (-6.00DS to +3.00DS, -1.25DC, 90° and 180° axes). All TSCL designs evaluated had vertical prism in the optic zone except one which had virtually none (0.01Δ). Mean prism ranged from 0.52Δ to 1.15Δ, with three designs having prism that varied with sphere power. Vertical prism in ACUVUE OASYS(®) for ASTIGMATISM was significantly lower than all other TSCLs tested. TSCL designs utilising prism-ballast and peri-ballast for stabilisation have vertical prism in the central optic zone. In monocular astigmats fitted with a TSCL or those wearing a mix of toric designs, vertical prism imbalance could create or exacerbate disturbances in binocular vision function. Practitioners should be aware of this potential effect when selecting which TSCL designs to prescribe, particularly for monocular astigmats with pre-existing binocular vision anomalies, and when managing complaints of asthenopia in monocular astigmats. Copyright © 2015 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.
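    The computation of vertical prism from refractive index and thickness differences follows, to first order, the thin-prism relation (stated here as a standard approximation; the study's exact calculation over the full thickness map may differ):

      P \approx 100\,(n - 1)\,\frac{t_{\mathrm{sup}} - t_{\mathrm{inf}}}{d}

    where P is in prism dioptres, t_sup and t_inf are the lens thicknesses at the top and bottom of the central zone, and d is their separation. With hypothetical values of n = 1.42 and a 0.10 mm thickness difference across the 6 mm zone, P is roughly 100 x 0.42 x 0.10/6, or about 0.7 prism dioptres, the same order as the values reported above.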

  7. Longitudinal study of visual function in patients with relapsing-remitting multiple sclerosis with and without a history of optic neuritis.

    PubMed

    González Gómez, A; García-Ben, A; Soler García, A; García-Basterra, I; Padilla Parrado, F; García-Campos, J M

    2017-03-15

    The contrast sensitivity test determines the quality of visual function in patients with multiple sclerosis (MS). The purpose of this study is to analyse changes in visual function in patients with relapsing-remitting MS with and without a history of optic neuritis (ON). We conducted a longitudinal study including 61 patients classified into 3 groups as follows: a) disease-free patients (control group); b) patients with MS and no history of ON; and c) patients with MS and a history of unilateral ON. All patients underwent baseline and 6-year follow-up ophthalmologic examinations, which included visual acuity and monocular and binocular Pelli-Robson contrast sensitivity tests. Monocular contrast sensitivity was significantly lower in MS patients with and without a history of ON than in controls both at baseline (P=.00 and P=.01, respectively) and at 6 years (P=.01 and P=.02). Patients with MS and no history of ON remained stable throughout follow-up whereas those with a history of ON displayed a significant loss of contrast sensitivity (P=.01). Visual acuity and binocular contrast sensitivity at baseline and at 6 years was significantly lower in the group of patients with a history of ON than in the control group (P=.003 and P=.002 vs P=.006 and P=.005) and the group with no history of ON (P=.04 and P=.038 vs P=.008 and P=.01). However, no significant differences were found in follow-up results (P=.1 and P=.5). Monocular Pelli-Robson contrast sensitivity test may be used to detect changes in visual function in patients with ON. Copyright © 2017 The Author(s). Publicado por Elsevier España, S.L.U. All rights reserved.

  8. The Effect of Lens-Induced Anisometropia on Accommodation and Vergence during Human Visual Development

    PubMed Central

    Candy, T. Rowan

    2011-01-01

    Purpose. Clear and single binocular vision, a prerequisite for normal human visual development, is achieved through accommodation and vergence. Anisometropia is associated with abnormal visual development, but its impact on accommodation and vergence, and therefore on the individual's visual experience, is not known. This study determined the impact of transiently induced anisometropia on accommodative and vergence performance of the typically developing human visual system. Methods. One hundred eighteen subjects (age range, 2.9 months to 41.1 years) watched a cartoon movie that moved between 80 and 33 cm under six different viewing conditions: binocular and monocular, and with ±2 diopters (D) and ±4 D of lens-induced anisometropia. Twenty-one subjects (age range, 3.1 months to 12.1 years) also watched the movie with 11% induced aniseikonia. Accommodation and vergence were recorded in both eyes using a videoretinoscope (25 Hz). Results. The main effect of viewing condition was statistically significant for both accommodation and vergence (both P < 0.001), with monocular accommodative and vergence gains statistically significantly smaller than the binocular and four induced anisometropia conditions (P < 0.001 for both accommodation and vergence). The main effect of age approached significance for accommodation (P = 0.06) and was not significant for vergence (P = 0.32). Accommodative and vergence gains with induced aniseikonia were not statistically significantly different from the binocular condition (both P > 0.5). Conclusions. Accommodative and vergence gains of the typically developing visual system deteriorated marginally (accommodation more than vergence) with transiently induced anisometropia (up to ±4 D) and did not deteriorate significantly with induced aniseikonia of 11%. Some binocular cues remained with ±4 D of induced anisometropia and 11% induced aniseikonia, as indicated by the accommodative and vergence gains being higher than in monocular viewing. PMID:21296822

  9. Monocular oral reading after treatment of dense congenital unilateral cataract

    PubMed Central

    Birch, Eileen E.; Cheng, Christina; Christina, V; Stager, David R.

    2010-01-01

    Background Good long-term visual acuity outcomes for children with dense congenital unilateral cataracts have been reported following early surgery and good compliance with postoperative amblyopia therapy. However, treated eyes rarely achieve normal visual acuity and there has been no formal evaluation of the utility of the treated eye for reading. Methods Eighteen children previously treated for dense congenital unilateral cataract were tested monocularly with the Gray Oral Reading Test, 4th edition (GORT-4) at 7 to 13 years of age using two passages for each eye, one at grade level and one at +1 above grade level. In addition, right eyes of 55 normal children age 7 to 13 served as a control group. The GORT-4 assesses reading rate, accuracy, fluency, and comprehension. Results Visual acuity of treated eyes ranged from 0.1 to 2.0 logMAR and of fellow eyes from −0.1 to 0.2 logMAR. Treated eyes scored significantly lower than fellow and normal control eyes on all scales at grade level and at +1 above grade level. Monocular reading rate, accuracy, fluency, and comprehension were correlated with visual acuity of treated eyes (rs = −0.575 to −0.875, p < 0.005). Treated eyes with 0.1-0.3 logMAR visual acuity did not differ from fellow or normal control eyes in rate, accuracy, fluency, or comprehension when reading at grade level or at +1 above grade level. Fellow eyes did not differ from normal controls on any reading scale. Conclusions Excellent visual acuity outcomes following treatment of dense congenital unilateral cataracts are associated with normal reading ability of the treated eye in school-age children. PMID:20603057

  10. Non-Cartesian Balanced SSFP Pulse Sequences for Real-Time Cardiac MRI

    PubMed Central

    Feng, Xue; Salerno, Michael; Kramer, Christopher M.; Meyer, Craig H.

    2015-01-01

    Purpose To develop a new spiral-in/out balanced steady-state free precession (bSSFP) pulse sequence for real-time cardiac MRI and compare it with radial and spiral-out techniques. Methods Non-Cartesian sampling strategies are efficient and robust to motion and thus have important advantages for real-time bSSFP cine imaging. This study describes a new symmetric spiral-in/out sequence with intrinsic gradient moment compensation and SSFP refocusing at TE=TR/2. In-vivo real-time cardiac imaging studies were performed to compare radial, spiral-out, and spiral-in/out bSSFP pulse sequences. Furthermore, phase-based fat-water separation taking advantage of the refocusing mechanism of the spiral-in/out bSSFP sequence was also studied. Results The image quality of the spiral-out and spiral-in/out bSSFP sequences was improved with off-resonance and k-space trajectory correction. The spiral-in/out bSSFP sequence had the highest SNR, CNR, and image quality ratings, with spiral-out bSSFP sequence second in each category and the radial bSSFP sequence third. The spiral-in/out bSSFP sequence provides separated fat and water images with no additional scan time. Conclusions In this work a new spiral-in/out bSSFP sequence was developed and tested. The superiority of spiral bSSFP sequences over the radial bSSFP sequence in terms of SNR and reduced artifacts was demonstrated in real-time MRI of cardiac function without image acceleration. PMID:25960254

  11. Robust temporal alignment of multimodal cardiac sequences

    NASA Astrophysics Data System (ADS)

    Perissinotto, Andrea; Queirós, Sandro; Morais, Pedro; Baptista, Maria J.; Monaghan, Mark; Rodrigues, Nuno F.; D'hooge, Jan; Vilaça, João. L.; Barbosa, Daniel

    2015-03-01

    Given the dynamic nature of cardiac function, correct temporal alignment of pre-operative models and intra-operative images is crucial for augmented reality in cardiac image-guided interventions. The current study therefore focuses on the development of an image-based strategy for the temporal alignment of multimodal cardiac imaging sequences, such as cine Magnetic Resonance Imaging (MRI) or 3D Ultrasound (US). First, we derive a robust, modality-independent signal from the image sequences by computing the normalized cross-correlation between each frame in the temporal sequence and the end-diastolic frame. This signal acts as a surrogate for the left-ventricular (LV) volume curve over time, whose variation indicates different temporal landmarks of the cardiac cycle. We then temporally align the surrogate signals derived from MRI and US sequences of the same patient through Dynamic Time Warping (DTW), which synchronizes the two sequences. The proposed framework was evaluated in 98 patients who had undergone both 3D+t MRI and US scans. The end-systolic frame could be accurately estimated as the minimum of the image-derived surrogate signal, with a relative error of 1.6 ± 1.9% and 4.0 ± 4.2% for the MRI and US sequences, respectively, supporting its association with key temporal instants of the cardiac cycle. The use of DTW reduces the desynchronization of cardiac events between the MRI and US sequences, allowing multimodal cardiac imaging sequences to be temporally aligned. Overall, a generic, fast and accurate method for temporal synchronization of MRI and US sequences of the same patient was introduced. This approach could be used directly for the correct temporal alignment of pre-operative MRI information and intra-operative US images.
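
    A minimal sketch of the two steps described above, assuming grayscale frames stored as NumPy arrays; the helper names and the toy data at the end are illustrative, not taken from the paper:

        import numpy as np

        def ncc(a, b):
            """Normalized cross-correlation between two frames (2D arrays)."""
            a = (a - a.mean()) / (a.std() + 1e-12)
            b = (b - b.mean()) / (b.std() + 1e-12)
            return float((a * b).mean())

        def surrogate_signal(frames, ed_index=0):
            """NCC of every frame against the end-diastolic frame: a stand-in for the LV volume curve."""
            ref = frames[ed_index]
            return np.array([ncc(f, ref) for f in frames])

        def dtw_path(x, y):
            """Basic O(len(x)*len(y)) dynamic time warping between two 1D signals."""
            n, m = len(x), len(y)
            cost = np.full((n + 1, m + 1), np.inf)
            cost[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = abs(x[i - 1] - y[j - 1])
                    cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
            i, j, path = n, m, []          # backtrack to recover the frame-to-frame alignment
            while i > 0 and j > 0:
                path.append((i - 1, j - 1))
                step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
                if step == 0:
                    i, j = i - 1, j - 1
                elif step == 1:
                    i -= 1
                else:
                    j -= 1
            return path[::-1]

        # Toy usage: align surrogate signals from two (random) sequences of different length
        mri_signal = surrogate_signal(np.random.rand(30, 64, 64))
        us_signal = surrogate_signal(np.random.rand(40, 64, 64))
        alignment = dtw_path(mri_signal, us_signal)

    The end-systolic frame of either sequence can then be read off as the index minimizing the corresponding surrogate signal.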

  12. The sequence measurement system of the IR camera

    NASA Astrophysics Data System (ADS)

    Geng, Ai-hui; Han, Hong-xia; Zhang, Hai-bo

    2011-08-01

    IR cameras are widely used in electro-optical tracking, electro-optical measurement, fire control and electro-optical countermeasure systems, but the output timing of most IR cameras used in such projects is complex, and the timing documentation supplied by manufacturers is often incomplete. Because downstream image transmission and image processing systems require a detailed description of the camera timing, a timing measurement system for IR cameras was designed and a detailed measurement procedure for the cameras in use was developed. The system combines FPGA programming with online observation using the SignalTap logic analyzer to capture the precise timing of the IR camera's output signals and to supply detailed timing documentation to the image transmission and image processing systems. The measurement system consists of a Camera Link input interface, an LVDS input interface, an FPGA, and a Camera Link output interface, with the FPGA as the key component. Both Camera Link and LVDS video signals can be accepted; because image processing and image memory cards generally use Camera Link inputs, the output of the measurement system is also Camera Link, so the system performs interface conversion for some cameras in addition to timing measurement. Inside the FPGA, the timing measurement logic, pixel clock adjustment, SignalTap configuration and online observation are integrated to measure the camera precisely. The measurement logic, written in Verilog and combined with SignalTap online observation, counts the number of lines per frame and the number of pixels per line, and determines the line and row offsets of the image. For the complex output timing of the IR camera, the system accurately measures the timing of the cameras used in the project and provides the downstream image processing and transmission systems with concrete values for fval, lval, pixclk, line offset and row offset. Experiments show that the system obtains precise timing measurements and operates stably, laying a foundation for the downstream systems.
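
    The counting described above is implemented in Verilog inside the FPGA; purely to illustrate what is being measured, the following Python sketch recovers the same quantities from software-sampled frame-valid/line-valid waveforms (the signal names fval and lval follow the abstract; the one-sample-per-pixel-clock capture is an assumption):

        import numpy as np

        def camera_timing(fval, lval):
            """Recover basic frame timing from per-pixel-clock samples of the frame-valid
            (fval) and line-valid (lval) signals, assuming the capture starts between frames."""
            fval = np.asarray(fval, dtype=bool)
            lval = np.asarray(lval, dtype=bool)
            lval_rise = np.flatnonzero(~lval[:-1] & lval[1:]) + 1   # start of each active line
            lval_fall = np.flatnonzero(lval[:-1] & ~lval[1:]) + 1   # end of each active line
            fval_rise = np.flatnonzero(~fval[:-1] & fval[1:]) + 1   # start of the frame
            n = min(len(lval_rise), len(lval_fall))
            lines_per_frame = n
            pixels_per_line = int(np.median(lval_fall[:n] - lval_rise[:n]))
            # clock cycles between frame start and the first active line (the "line offset")
            line_offset = int(lval_rise[0] - fval_rise[0]) if len(fval_rise) and n else 0
            return lines_per_frame, pixels_per_line, line_offset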

  13. [The Role of Imaging in Central Nervous System Infections].

    PubMed

    Yokota, Hajime; Tazoe, Jun; Yamada, Kei

    2015-07-01

    Many infections invade the central nervous system. Magnetic resonance imaging (MRI) is the main tool used to evaluate infectious lesions of the central nervous system. The useful MRI sequences depend on the location, such as intra-axial, extra-axial, and spinal cord. For intra-axial lesions, besides the fundamental sequences, including T1-weighted images, T2-weighted images, and fluid-attenuated inversion recovery (FLAIR) images, advanced sequences, such as diffusion-weighted imaging, diffusion tensor imaging, susceptibility-weighted imaging, and MR spectroscopy, can be applied. They are occasionally used as determinants for quick and correct diagnosis. For extra-axial lesions, understanding the differences among 2D conventional T1-weighted images, 2D fat-saturated T1-weighted images, 3D spin echo sequences, and 3D gradient echo sequences after the administration of gadolinium is required to avoid misinterpretation. FLAIR plus gadolinium is a useful tool for revealing abnormal enhancement on the brain surface. For the spinal cord, the available sequences are limited, and evaluating the distribution and time course of spinal cord lesions is essential for correct diagnosis. We summarize the role of imaging in central nervous system infections and highlight the pitfalls, key points, and latest information relevant to clinical practice.

  14. The riches of the cyclopean paradigm

    NASA Astrophysics Data System (ADS)

    Tyler, Christopher W.

    2005-03-01

    The cyclopean paradigm introduced by Bela Julesz remains one of the richest probes into the neural organization of sensory processing, by virtue of both its specificity for purely stereoscopic form and the sophistication of the processing required to retrieve it. The introduction of the sinusoidal stereograting showed that the perceptual limitations of human depth processing are very different from those for monocular form. Their use has also revealed the existence of hypercyclopean form channels selective for specific aspects of the monocularly invisible depth form. The natural extension of stereogratings to patches of stereoGabor ripple has allowed the measurement of the summation properties for depth structure, which is specific for narrow horizontal bars in depth. Consideration of the apparent motion between two cyclopean depth structures reveals the existence of a novel surface correspondence problem operating for cyclopean surfaces over time after the binocular correspondence has been solved. Such concepts imply that much remains to be discovered about cyclopean stereopsis and its relationship to 3D form perception from other depth cues.

  15. PL-VIO: Tightly-Coupled Monocular Visual–Inertial Odometry Using Point and Line Features

    PubMed Central

    Zhao, Ji; Guo, Yue; He, Wenhao; Yuan, Kui

    2018-01-01

    To address the problem of estimating camera trajectory and to build a structural three-dimensional (3D) map based on inertial measurements and visual observations, this paper proposes point–line visual–inertial odometry (PL-VIO), a tightly-coupled monocular visual–inertial odometry system exploiting both point and line features. Compared with point features, lines provide significantly more geometrical structure information on the environment. To obtain both computation simplicity and representational compactness of a 3D spatial line, Plücker coordinates and orthonormal representation for the line are employed. To tightly and efficiently fuse the information from inertial measurement units (IMUs) and visual sensors, we optimize the states by minimizing a cost function which combines the pre-integrated IMU error term together with the point and line re-projection error terms in a sliding window optimization framework. The experiments evaluated on public datasets demonstrate that the PL-VIO method that combines point and line features outperforms several state-of-the-art VIO systems which use point features only. PMID:29642648
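
    As an illustration of the line parameterization mentioned above, the sketch below builds Plücker coordinates from two 3D points, transforms the line into the camera frame, and evaluates a point-to-line reprojection error in normalized image coordinates. This is the generic textbook formulation only; the orthonormal line representation, IMU pre-integration and sliding-window solver of PL-VIO are not shown:

        import numpy as np

        def plucker_from_points(p1, p2):
            """Plücker line (n, v) through points p1 and p2: v is the direction,
            n = p1 x p2 is the moment (normal of the plane spanned by the origin and the line)."""
            p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
            return np.cross(p1, p2), p2 - p1

        def transform_line(n_w, v_w, R_cw, t_cw):
            """Map a Plücker line from the world to the camera frame, where X_c = R_cw @ X_w + t_cw."""
            v_c = R_cw @ v_w
            n_c = R_cw @ n_w + np.cross(t_cw, v_c)
            return n_c, v_c

        def line_reprojection_error(n_c, s, e):
            """Signed distances of the observed segment endpoints s, e (normalized image
            coordinates) to the projected line: on the plane z = 1 the projected line is
            n_c[0]*x + n_c[1]*y + n_c[2] = 0."""
            l = n_c / (np.linalg.norm(n_c[:2]) + 1e-12)
            return np.array([l[0] * s[0] + l[1] * s[1] + l[2],
                             l[0] * e[0] + l[1] * e[1] + l[2]])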

  16. A binocular approach to treating amblyopia: antisuppression therapy.

    PubMed

    Hess, Robert F; Mansouri, Behzad; Thompson, Benjamin

    2010-09-01

    We developed a binocular treatment for amblyopia based on antisuppression therapy. A novel procedure is outlined for measuring the extent to which the fixing eye suppresses the fellow amblyopic eye. We hypothesize that suppression renders a structurally binocular system, functionally monocular. We demonstrate using three strabismic amblyopes that information can be combined normally between their eyes under viewing conditions where suppression is reduced. Also, we show that prolonged periods of viewing (under the artificial conditions of stimuli of different contrast in each eye) during which information from the two eyes is combined leads to a strengthening of binocular vision in such cases and eventual combination of binocular information under natural viewing conditions (stimuli of the same contrast in each eye). Concomitant improvement in monocular acuity of the amblyopic eye occurs with this reduction in suppression and strengthening of binocular fusion. Furthermore, in each of the three cases, stereoscopic function is established. This provides the basis for a new treatment of amblyopia, one that is purely binocular and aimed at reducing suppression as a first step.

  17. The role of binocular viewing in a spacing illusion arising in a darkened surround.

    PubMed

    Suzuki, K

    1998-01-01

    A study is reported of the binocular-oculomotor hypothesis of the moon illusion. In a dark hall, a pair of light points was presented straight ahead horizontally, and another pair was presented at the same distance but 50 degrees upward. Twenty subjects compared the spacings of these two pairs. Half of the subjects viewed the stimuli first monocularly and then binocularly, and the other half viewed them in the reverse order. Eye position was also systematically varied, either level or elevated. A spacing illusion was consistently obtained during binocular viewing (with the upper spacing seen as smaller), but no illusion arose during monocular viewing unless it was preceded by binocular viewing. Furthermore, an enhancement of the illusion due to eye elevation was found only during binocular viewing. These findings replicate the report of Taylor and Boring (1942 American Journal of Psychology 55 189-201), in which the moon was used as the stimulus, and support the binocular-oculomotor hypothesis as a partial explanation for the moon illusion.

  18. A Height Estimation Approach for Terrain Following Flights from Monocular Vision.

    PubMed

    Campos, Igor S G; Nascimento, Erickson R; Freitas, Gustavo M; Chaimowicz, Luiz

    2016-12-06

    In this paper, we present a monocular vision-based height estimation algorithm for terrain following flights. The impressive growth of Unmanned Aerial Vehicle (UAV) usage, notably in mapping applications, will soon require the creation of new technologies to enable these systems to better perceive their surroundings. Specifically, we chose to tackle the terrain following problem, as it is still unresolved for consumer available systems. Virtually every mapping aircraft carries a camera; therefore, we chose to exploit this in order to use presently available hardware to extract the height information toward performing terrain following flights. The proposed methodology consists of using optical flow to track features from videos obtained by the UAV, as well as its motion information to estimate the flying height. To determine if the height estimation is reliable, we trained a decision tree that takes the optical flow information as input and classifies whether the output is trustworthy or not. The classifier achieved accuracies of 80 % for positives and 90 % for negatives, while the height estimation algorithm presented good accuracy.
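
    Under the simplifying assumptions of a nadir-pointing camera over flat terrain and pure horizontal translation, the flow magnitude u (pixels per frame) and height h are related by u = f_px * V * dt / h. A rough sketch of that relation using OpenCV's Farnebäck flow is shown below; the decision-tree reliability gate described in the abstract is omitted, and all parameter values are placeholders:

        import numpy as np
        import cv2

        def estimate_height(prev_gray, curr_gray, ground_speed_mps, dt, focal_px):
            """Height above flat ground for a downward-looking camera translating at a
            known speed: h = focal_px * V * dt / median_flow (pixels)."""
            flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                                0.5, 3, 21, 3, 5, 1.2, 0)
            u = np.median(np.linalg.norm(flow, axis=2))   # robust summary of the flow field
            if u < 1e-3:
                return None                               # flow too small to be trusted
            return focal_px * ground_speed_mps * dt / u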

  19. Bayesian modeling of cue interaction: bistability in stereoscopic slant perception.

    PubMed

    van Ee, Raymond; Adams, Wendy J; Mamassian, Pascal

    2003-07-01

    Our two eyes receive different views of a visual scene, and the resulting binocular disparities enable us to reconstruct its three-dimensional layout. However, the visual environment is also rich in monocular depth cues. We examined the resulting percept when observers view a scene in which there are large conflicts between the surface slant signaled by binocular disparities and the slant signaled by monocular perspective. For a range of disparity-perspective cue conflicts, many observers experience bistability: They are able to perceive two distinct slants and to flip between the two percepts in a controlled way. We present a Bayesian model that describes the quantitative aspects of perceived slant on the basis of the likelihoods of both perspective and disparity slant information combined with prior assumptions about the shape and orientation of objects in the scene. Our Bayesian approach can be regarded as an overarching framework that allows researchers to study all cue integration aspects, including perceptual decisions, in a unified manner.
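
    For intuition only: if each cue (and a prior) were modelled as an independent Gaussian over slant, the standard reliability-weighted fusion would look like the sketch below. The paper's model is richer than this, since a single Gaussian posterior cannot produce the bistability described above:

        import numpy as np

        def fuse_slant(mu_disp, sd_disp, mu_persp, sd_persp, mu_prior=0.0, sd_prior=np.inf):
            """Precision-weighted (MAP) combination of disparity and perspective slant
            estimates under independent Gaussian likelihoods and a Gaussian prior."""
            w = np.array([1 / sd_disp**2, 1 / sd_persp**2,
                          0.0 if np.isinf(sd_prior) else 1 / sd_prior**2])
            mu = np.array([mu_disp, mu_persp, mu_prior])
            post_var = 1.0 / w.sum()
            return (w * mu).sum() * post_var, np.sqrt(post_var)

        # Strongly conflicting cues of similar reliability: unimodal fusion splits the
        # difference, whereas observers may instead alternate between two slant percepts.
        print(fuse_slant(mu_disp=30.0, sd_disp=5.0, mu_persp=-20.0, sd_persp=8.0))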

  20. Apparent Corneal Ectasia After Bilateral Intrastromal Femtosecond Laser Treatment for Presbyopia.

    PubMed

    Dukic, Adrijana; Bohac, Maja; Pasalic, Adi; Koncarevic, Mateja; Anticic, Marija; Patel, Sudi

    2016-11-01

    To report a case of apparent corneal ectasia after intrastromal femtosecond laser treatment for presbyopia (INTRACOR). A healthy 56-year-old male with low hyperopia underwent an unremarkable bilateral INTRACOR procedure in March/April 2011. The patient was discharged after follow-up and returned 5 years later. Before discharge, monocular uncorrected distance visual acuity (UDVA, logarithm of the minimal angle of resolution) was 0.0 in the right eye and 0.10 in the left eye, and uncorrected near visual acuity (UNVA) was 0.0 in both eyes. There were signs of slight posterior central corneal steepening without loss of corneal stability. Five years postoperatively, monocular UDVA and UNVA values were 0.4 and 0.0, respectively. Ectasia was observed in both eyes, and the 5 centrally placed concentric rings created by the INTRACOR procedure were visible under slit-lamp biomicroscopy. There is no clear reason to explain why the patient developed bilateral corneal steepening. It could be that the patient's corneal stromal fibers gradually weakened over this 5-year period.

  1. Diffusion-weighted imaging of the sellar region: a comparison study of BLADE and single-shot echo planar imaging sequences.

    PubMed

    Yiping, Lu; Hui, Liu; Kun, Zhou; Daoying, Geng; Bo, Yin

    2014-07-01

    The purpose of this study is to compare BLADE diffusion-weighted imaging (DWI) with single-shot echo planar imaging (EPI) DWI in terms of the feasibility of imaging the sellar region and image quality. A total of 3 healthy volunteers and 52 patients with suspected lesions in the sellar region were included in this prospective intra-individual study. All examinations were performed at 3.0 T with a BLADE DWI sequence and a standard single-shot EPI DWI sequence. Phantom measurements were performed to measure the objective signal-to-noise ratio (SNR). Two radiologists rated the image quality according to the visualisation of the internal carotid arteries, optic chiasm, pituitary stalk, pituitary gland and lesion, and the overall image quality. One radiologist measured lesion sizes to assess their relationship with the image score. The SNR of the BLADE DWI sequence showed no significant difference from that of the single-shot EPI sequence (P>0.05). All of the assessed regions received higher scores on BLADE DWI images than on single-shot EPI DWI images. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  2. Pose estimation of industrial objects towards robot operation

    NASA Astrophysics Data System (ADS)

    Niu, Jie; Zhou, Fuqiang; Tan, Haishu; Cao, Yu

    2017-10-01

    With the advantages of wide range, non-contact operation and high flexibility, visual estimation of target pose has been widely applied in modern industry, robot guidance and other engineering practice. However, due to complicated industrial environments, outside interference, a lack of distinctive object features, camera limitations and other restrictions, visual pose estimation still faces many challenges. Addressing these problems, a pose estimation method for industrial objects is developed based on 3D models of the targets. By matching the extracted shape characteristics of objects against a prior 3D model database of targets, the method recognises the target; the pose of the object can then be determined using a monocular vision measurement model. The experimental results show that this method can estimate the position of rigid objects from poor image information and provides a guiding basis for the operation of industrial robots.

  3. Stimulus-dependent modulation of spontaneous low-frequency oscillations in the rat visual cortex.

    PubMed

    Huang, Liangming; Liu, Yadong; Gui, Jianjun; Li, Ming; Hu, Dewen

    2014-08-06

    Research on spontaneous low-frequency oscillations is important to reveal underlying regulatory mechanisms in the brain. The mechanism for the stimulus modulation of low-frequency oscillations is not known. Here, we used the intrinsic optical imaging technique to examine stimulus-modulated low-frequency oscillation signals in the rat visual cortex. The stimulation was presented monocularly as a flashing light with different frequencies and intensities. The phases of low-frequency oscillations in different regions tended to be synchronized and the rhythms typically accelerated within a 30-s period after stimulation. These phenomena were confined to visual stimuli with specific flashing frequencies (12.5-17.5 Hz) and intensities (5-10 mA). The acceleration and synchronization induced by the flashing frequency were more marked than those induced by the intensity. These results show that spontaneous low-frequency oscillations can be modulated by parameter-dependent flashing lights and indicate the potential utility of the visual stimulus paradigm in exploring the origin and function of low-frequency oscillations.

  4. Homography-based visual servo regulation of mobile robots.

    PubMed

    Fang, Yongchun; Dixon, Warren E; Dawson, Darren M; Chawda, Prakash

    2005-10-01

    A monocular camera-based vision system attached to a mobile robot (i.e., the camera-in-hand configuration) is considered in this paper. By comparing corresponding target points of an object from two different camera images, geometric relationships are exploited to derive a transformation that relates the actual position and orientation of the mobile robot to a reference position and orientation. This transformation is used to synthesize a rotation and translation error system from the current position and orientation to the fixed reference position and orientation. Lyapunov-based techniques are used to construct an adaptive estimate to compensate for a constant, unmeasurable depth parameter, and to prove asymptotic regulation of the mobile robot. The contribution of this paper is that Lyapunov techniques are exploited to craft an adaptive controller that enables mobile robot position and orientation regulation despite the lack of an object model and the lack of depth information. Experimental results are provided to illustrate the performance of the controller.
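
    A hedged sketch of the geometric core only, estimating and decomposing the homography between the current and reference views with OpenCV; the intrinsic matrix and thresholds are placeholders, and the unknown depth scale of the translation is exactly what the paper's adaptive Lyapunov-based controller compensates for:

        import numpy as np
        import cv2

        K = np.array([[800.0, 0.0, 320.0],      # illustrative intrinsics
                      [0.0, 800.0, 240.0],
                      [0.0, 0.0, 1.0]])

        def candidate_poses(pts_current, pts_reference):
            """Homography between matched points of the (planar) target seen from the current
            and reference poses, decomposed into candidate rotations and scaled translations."""
            H, _ = cv2.findHomography(pts_current, pts_reference, cv2.RANSAC, 3.0)
            if H is None:
                return None
            _, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
            return rotations, translations, normals   # up to four physically plausible candidates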

  5. Compression and reflection of visually evoked cortical waves

    PubMed Central

    Xu, Weifeng; Huang, Xiaoying; Takagaki, Kentaroh; Wu, Jian-young

    2007-01-01

    Summary Neuronal interactions between primary and secondary visual cortical areas are important for visual processing, but the spatiotemporal patterns of the interaction are not well understood. We used voltage-sensitive dye imaging to visualize neuronal activity in rat visual cortex and found novel visually evoked waves propagating from V1 to other visual areas. A primary wave originated in the monocular area of V1 and was “compressed” when propagating to V2. A reflected wave initiated after compression and propagated backward into V1. The compression occurred at the V1/V2 border, and local GABAA inhibition is important for the compression. The compression/reflection pattern provides a two-phase modulation: V1 is first depolarized by the primary wave and then V1 and V2 are simultaneously depolarized by the reflected and primary waves, respectively. The compression/reflection pattern only occurred for evoked but not for spontaneous waves, suggesting that it is organized by an internal mechanism associated with visual processing. PMID:17610821

  6. Functional architecture of an optic flow-responsive area that drives horizontal eye movements in zebrafish.

    PubMed

    Kubo, Fumi; Hablitzel, Bastian; Dal Maschio, Marco; Driever, Wolfgang; Baier, Herwig; Arrenberg, Aristides B

    2014-03-19

    Animals respond to whole-field visual motion with compensatory eye and body movements in order to stabilize both their gaze and position with respect to their surroundings. In zebrafish, rotational stimuli need to be distinguished from translational stimuli to drive the optokinetic and the optomotor responses, respectively. Here, we systematically characterize the neural circuits responsible for these operations using a combination of optogenetic manipulation and in vivo calcium imaging during optic flow stimulation. By recording the activity of thousands of neurons within the area pretectalis (APT), we find four bilateral pairs of clusters that process horizontal whole-field motion and functionally classify eleven prominent neuron types with highly selective response profiles. APT neurons are prevalently direction selective, either monocularly or binocularly driven, and hierarchically organized to distinguish between rotational and translational optic flow. Our data predict a wiring diagram of a neural circuit tailored to drive behavior that compensates for self-motion. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Wavelet Fusion for Concealed Object Detection Using Passive Millimeter Wave Sequence Images

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Pang, L.; Liu, H.; Xu, X.

    2018-04-01

    PMMW imaging systems can produce interpretable imagery of objects concealed under clothing, which is a great advantage for security check systems. This paper addresses wavelet fusion for detecting concealed objects using passive millimeter wave (PMMW) sequence images. First, based on the image characteristics and storage scheme of the real-time PMMW imager, the sum of squared differences (SSD) is used as an image-correlation measure to screen the sequence images. Second, the selected images are fused using a wavelet fusion algorithm. Finally, the concealed objects are detected by mean filtering, threshold segmentation and edge detection. The experimental results show that this method improves the detection of concealed objects by selecting the most relevant images from the PMMW sequence and using wavelet fusion to enhance the information on the concealed objects. The method can be effectively applied to the detection of objects concealed on the human body in millimeter wave video.
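
    A minimal sketch of the SSD screening measure and a single-level wavelet fusion rule (average the approximation band, keep the larger-magnitude detail coefficients), using PyWavelets; the wavelet and the fusion rule are common defaults and are assumptions, not necessarily the authors' settings:

        import numpy as np
        import pywt

        def ssd(a, b):
            """Sum of squared differences used to pick the most related frames."""
            return float(((a.astype(float) - b.astype(float)) ** 2).sum())

        def wavelet_fuse(img1, img2, wavelet="db2"):
            """Fuse two registered PMMW frames in the wavelet domain."""
            cA1, (cH1, cV1, cD1) = pywt.dwt2(img1.astype(float), wavelet)
            cA2, (cH2, cV2, cD2) = pywt.dwt2(img2.astype(float), wavelet)
            pick = lambda d1, d2: np.where(np.abs(d1) >= np.abs(d2), d1, d2)
            fused = (0.5 * (cA1 + cA2), (pick(cH1, cH2), pick(cV1, cV2), pick(cD1, cD2)))
            return pywt.idwt2(fused, wavelet)

    Mean filtering, thresholding and edge detection would then run on the fused image.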

  8. Deblurring sequential ocular images from multi-spectral imaging (MSI) via mutual information.

    PubMed

    Lian, Jian; Zheng, Yuanjie; Jiao, Wanzhen; Yan, Fang; Zhao, Bojun

    2018-06-01

    Multi-spectral imaging (MSI) produces a sequence of spectral images to capture the inner structure of different species, and it was recently introduced into ocular disease diagnosis. However, the quality of MSI images can be significantly degraded by motion blur caused by inevitable saccades and the exposure time required to maintain a sufficiently high signal-to-noise ratio. This degradation may confuse an ophthalmologist, reduce the examination quality, or defeat various image analysis algorithms. We propose one of the first approaches dedicated to deblurring sequential MSI images, distinguished from many current image deblurring techniques by resolving the blur kernel simultaneously for all the images in an MSI sequence. This is accomplished by incorporating several a priori constraints, including the sharpness of the latent clear image, the spatial and temporal smoothness of the blur kernel, and the similarity between temporally neighbouring images in the MSI sequence. Specifically, we model the similarity between MSI images with mutual information, considering the different wavelengths used for capturing different images in the MSI sequence. The optimization of the proposed approach is based on a multi-scale framework and a stepwise optimization strategy. Experimental results from 22 MSI sequences validate that our approach outperforms several state-of-the-art techniques in natural image deblurring.

  9. Medical Surveillance Programs for Aircraft Maintenance Personnel Performing Nondestructive Inspection and Testing

    DTIC Science & Technology

    2005-11-01

    visible and fluorescent inspection techniques, while radiography relies on the individual’s ability to detect subtle differences in contrast either... binocular measurement of visual acuity may better predict a person’s functional capability in the workplace. However, measurement of monocular acuities

  10. The measurement and treatment of suppression in amblyopia.

    PubMed

    Black, Joanna M; Hess, Robert F; Cooperstock, Jeremy R; To, Long; Thompson, Benjamin

    2012-12-14

    Amblyopia, a developmental disorder of the visual cortex, is one of the leading causes of visual dysfunction in the working age population. Current estimates put the prevalence of amblyopia at approximately 1-3%(1-3), the majority of cases being monocular(2). Amblyopia is most frequently caused by ocular misalignment (strabismus), blur induced by unequal refractive error (anisometropia), and in some cases by form deprivation. Although amblyopia is initially caused by abnormal visual input in infancy, once established, the visual deficit often remains when normal visual input has been restored using surgery and/or refractive correction. This is because amblyopia is the result of abnormal visual cortex development rather than a problem with the amblyopic eye itself(4,5). Amblyopia is characterized by both monocular and binocular deficits(6,7), which include impaired visual acuity and poor or absent stereopsis, respectively. The visual dysfunction in amblyopia is often associated with a strong suppression of the inputs from the amblyopic eye under binocular viewing conditions(8). Recent work has indicated that suppression may play a central role in both the monocular and binocular deficits associated with amblyopia(9,10). Current clinical tests for suppression tend to verify the presence or absence of suppression rather than giving a quantitative measurement of the degree of suppression. Here we describe a technique for measuring amblyopic suppression with a compact, portable device(11,12). The device consists of a laptop computer connected to a pair of virtual reality goggles. The novelty of the technique lies in the way we present visual stimuli to measure suppression. Stimuli are shown to the amblyopic eye at high contrast while the contrast of the stimuli shown to the non-amblyopic eye is varied. Patients perform a simple signal/noise task that allows for a precise measurement of the strength of excitatory binocular interactions. The contrast offset at which neither eye has a performance advantage is a measure of the "balance point" and is a direct measure of suppression. This technique has been validated psychophysically both in control(13,14) and patient(6,9,11) populations. In addition to measuring suppression, this technique also forms the basis of a novel form of treatment to decrease suppression over time and improve binocular and often monocular function in adult patients with amblyopia(12,15,16). This new treatment approach can be deployed either on the goggle system described above or on a specially modified iPod touch device(15).

  11. CT Image Sequence Restoration Based on Sparse and Low-Rank Decomposition

    PubMed Central

    Gou, Shuiping; Wang, Yueyue; Wang, Zhilong; Peng, Yong; Zhang, Xiaopeng; Jiao, Licheng; Wu, Jianshe

    2013-01-01

    Blurry organ boundaries and soft tissue structures present a major challenge in biomedical image restoration. In this paper, we propose a low-rank decomposition-based method for computed tomography (CT) image sequence restoration, where the CT image sequence is decomposed into a sparse component and a low-rank component. A new point spread function (PSF) for the Wiener filter is employed to efficiently remove blur in the sparse component, and Wiener filtering with a Gaussian PSF is used to recover the average image of the low-rank component. The recovered CT image sequence is then obtained by combining the recovered low-rank image with the recovered sparse image sequence. Our method achieves restoration results with higher contrast, sharper organ boundaries and richer soft tissue structure information, compared with existing CT image restoration methods. The robustness of our method was assessed with numerical experiments using three different low-rank models: Robust Principal Component Analysis (RPCA), the Linearized Alternating Direction Method with Adaptive Penalty (LADMAP) and Go Decomposition (GoDec). Experimental results demonstrated that the RPCA model was the most suitable for CT images with small noise, whereas the GoDec model was the best for heavily noisy CT images. PMID:24023764
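
    For reference, a compact principal component pursuit (RPCA) sketch of the sparse-plus-low-rank split described above, using the usual singular-value and soft-thresholding updates; the parameter defaults follow common practice rather than the paper:

        import numpy as np

        def rpca(D, lam=None, mu=None, n_iter=200, tol=1e-7):
            """Decompose D into a low-rank part L and a sparse part S (D ~ L + S)."""
            D = np.asarray(D, float)
            m, n = D.shape
            lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
            mu = mu if mu is not None else 0.25 * m * n / (np.abs(D).sum() + 1e-12)
            S = np.zeros_like(D)
            Y = np.zeros_like(D)
            for _ in range(n_iter):
                # low-rank update: singular value thresholding
                U, s, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
                L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
                # sparse update: elementwise soft thresholding
                T = D - L + Y / mu
                S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
                R = D - L - S
                Y += mu * R                      # dual update
                if np.linalg.norm(R) / (np.linalg.norm(D) + 1e-12) < tol:
                    break
            return L, S

        # For a CT sequence, stack each frame as one column of D before decomposing.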

  12. Diffusion-weighted Imaging of the Liver with Multiple b Values: Effect of Diffusion Gradient Polarity and Breathing Acquisition on Image Quality and Intravoxel Incoherent Motion Parameters—A Pilot Study

    PubMed Central

    Dyvorne, Hadrien A.; Galea, Nicola; Nevers, Thomas; Fiel, M. Isabel; Carpenter, David; Wong, Edmund; Orton, Matthew; de Oliveira, Andre; Feiweier, Thorsten; Vachon, Marie-Louise; Babb, James S.

    2013-01-01

    Purpose: To optimize intravoxel incoherent motion (IVIM) diffusion-weighted (DW) imaging by estimating the effects of diffusion gradient polarity and breathing acquisition scheme on image quality, signal-to-noise ratio (SNR), IVIM parameters, and parameter reproducibility, as well as to investigate the potential of IVIM in the detection of hepatic fibrosis. Materials and Methods: In this institutional review board–approved prospective study, 20 subjects (seven healthy volunteers, 13 patients with hepatitis C virus infection; 14 men, six women; mean age, 46 years) underwent IVIM DW imaging with four sequences: (a) respiratory-triggered (RT) bipolar (BP) sequence, (b) RT monopolar (MP) sequence, (c) free-breathing (FB) BP sequence, and (d) FB MP sequence. Image quality scores were assessed for all sequences. A biexponential analysis with the Bayesian method yielded true diffusion coefficient (D), pseudodiffusion coefficient (D*), and perfusion fraction (PF) in liver parenchyma. Mixed-model analysis of variance was used to compare image quality, SNR, IVIM parameters, and interexamination variability between the four sequences, as well as the ability to differentiate areas of liver fibrosis from normal liver tissue. Results: Image quality with RT sequences was superior to that with FB acquisitions (P = .02) and was not affected by gradient polarity. SNR did not vary significantly between sequences. IVIM parameter reproducibility was moderate to excellent for PF and D, while it was less reproducible for D*. PF and D were both significantly lower in patients with hepatitis C virus than in healthy volunteers with the RT BP sequence (PF = 13.5% ± 5.3 [standard deviation] vs 9.2% ± 2.5, P = .038; D = [1.16 ± 0.07] × 10^-3 mm^2/sec vs [1.03 ± 0.1] × 10^-3 mm^2/sec, P = .006). Conclusion: The RT BP DW imaging sequence had the best results in terms of image quality, reproducibility, and ability to discriminate between healthy and fibrotic liver with biexponential fitting. © RSNA, 2012 PMID:23220895
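
    The IVIM model itself is a simple biexponential in the b-value. The study used Bayesian fitting; the least-squares sketch below with SciPy is only an illustration of the model, and the b-values, noise level and starting values are placeholders:

        import numpy as np
        from scipy.optimize import curve_fit

        def ivim(b, S0, f, Dstar, D):
            """Biexponential IVIM signal: perfusion (f, D*) plus true diffusion (D)."""
            return S0 * (f * np.exp(-b * Dstar) + (1.0 - f) * np.exp(-b * D))

        b = np.array([0, 15, 30, 45, 60, 75, 90, 180, 300, 400, 500, 800], float)  # s/mm^2
        signal = ivim(b, 1000.0, 0.12, 0.05, 1.1e-3) * (1 + 0.01 * np.random.randn(b.size))

        popt, _ = curve_fit(ivim, b, signal,
                            p0=[signal[0], 0.1, 0.02, 1e-3],
                            bounds=([0, 0, 1e-3, 1e-4], [np.inf, 0.5, 1.0, 3e-3]))
        S0_fit, PF, Dstar_fit, D_fit = popt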

  13. The Extraction of 3D Shape from Texture and Shading in the Human Brain

    PubMed Central

    Georgieva, Svetlana S.; Todd, James T.; Peeters, Ronald

    2008-01-01

    We used functional magnetic resonance imaging to investigate the human cortical areas involved in processing 3-dimensional (3D) shape from texture (SfT) and shading. The stimuli included monocular images of randomly shaped 3D surfaces and a wide variety of 2-dimensional (2D) controls. The results of both passive and active experiments reveal that the extraction of 3D SfT involves the bilateral caudal inferior temporal gyrus (caudal ITG), lateral occipital sulcus (LOS) and several bilateral sites along the intraparietal sulcus. These areas are largely consistent with those involved in the processing of 3D shape from motion and stereo. The experiments also demonstrate, however, that the analysis of 3D shape from shading is primarily restricted to the caudal ITG areas. Additional results from psychophysical experiments reveal that this difference in neuronal substrate cannot be explained by a difference in strength between the 2 cues. These results underscore the importance of the posterior part of the lateral occipital complex for the extraction of visual 3D shape information from all depth cues, and they suggest strongly that the importance of shading is diminished relative to other cues for the analysis of 3D shape in parietal regions. PMID:18281304

  14. The significance of retinal image contrast and spatial frequency composition for eye growth modulation in young chicks

    PubMed Central

    Tran, Nina; Chiu, Sara; Tian, Yibin; Wildsoet, Christine F.

    2009-01-01

    Purpose This study sought further insight into the stimulus dependence of form deprivation myopia, a common response to retinal image degradation in young animals. Methods Each of 4 Bangerter diffusing filters (0.6, 0.1, <0.1, and LP (light perception only)) combined with clear plano lenses, as well as plano lenses alone, were fitted monocularly to 4-day-old chicks. Axial ocular dimensions and refractive errors were monitored over a 14-day treatment period, using high frequency A-scan ultrasonography and an autorefractor, respectively. Results Only the <0.1 and LP filters induced significant form deprivation myopia; these filters induced similarly large myopic shifts in refractive error (mean interocular differences ±SEM: -9.92 ±1.99, -7.26 ± 1.60 D respectively), coupled to significant increases in both vitreous chamber depths and optical axial lengths (p<0.001). The other 3 groups showed comparable, small changes in their ocular dimensions (p>0.05), and only small myopic shifts in refraction (<3.00 D). The myopia-inducing filters eliminated mid-and-high spatial frequency information. Conclusions Our results are consistent with emmetropization being tuned to mid-spatial frequencies. They also imply that form deprivation is not a graded phenomenon. PMID:18533221

  15. Slower Rate of Binocular Rivalry in Autism

    PubMed Central

    Kravitz, Dwight J.; Freyberg, Jan; Baron-Cohen, Simon; Baker, Chris I.

    2013-01-01

    An imbalance between cortical excitation and inhibition is a central component of many models of autistic neurobiology. We tested a potential behavioral footprint of this proposed imbalance using binocular rivalry, a visual phenomenon in which perceptual experience is thought to mirror the push and pull of excitatory and inhibitory cortical dynamics. In binocular rivalry, two monocularly presented images compete, leading to a percept that alternates between them. In a series of trials, we presented separate images of objects (e.g., a baseball and a broccoli) to each eye using a mirror stereoscope and asked human participants with autism and matched control subjects to continuously report which object they perceived, or whether they perceived a mixed percept. Individuals with autism demonstrated a slower rate of binocular rivalry alternations than matched control subjects, with longer durations of mixed percepts and an increased likelihood to revert to the previously perceived object when exiting a mixed percept. Critically, each of these findings was highly predictive of clinical measures of autistic symptomatology. Control “playback” experiments demonstrated that differences in neither response latencies nor response criteria could account for the atypical dynamics of binocular rivalry we observed in autistic spectrum conditions. Overall, these results may provide an index of atypical cortical dynamics that may underlie both the social and nonsocial symptoms of autism. PMID:24155303

  16. A segmentation method for lung nodule image sequences based on superpixels and density-based spatial clustering of applications with noise

    PubMed Central

    Zhang, Wei; Zhang, Xiaolong; Qiang, Yan; Tian, Qi; Tang, Xiaoxian

    2017-01-01

    The fast and accurate segmentation of lung nodule image sequences is the basis of subsequent processing and diagnostic analyses. However, previously reported nodule segmentation algorithms cannot entirely segment cavitary nodules, and the segmentation of juxta-vascular nodules is inaccurate and inefficient. To solve these problems, we propose a new method for the segmentation of lung nodule image sequences based on superpixels and density-based spatial clustering of applications with noise (DBSCAN). First, our method uses three-dimensional computed tomography image features of the average intensity projection combined with multi-scale dot enhancement for preprocessing. Hexagonal clustering and morphological optimized sequential linear iterative clustering (HMSLIC) for sequence image oversegmentation is then proposed to obtain superpixel blocks. The adaptive weight coefficient is then constructed to calculate the distance required between superpixels to achieve precise lung nodule positioning and to obtain the subsequent clustering starting block. Moreover, by fitting the distance and detecting the change in slope, an accurate clustering threshold is obtained. A fast DBSCAN superpixel sequence clustering algorithm, optimized by the strategy of clustering only the lung nodules and by the adaptive threshold, is then used to obtain lung nodule mask sequences. Finally, the lung nodule image sequences are obtained. The experimental results show that our method rapidly, completely and accurately segments various types of lung nodule image sequences. PMID:28880916
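
    As a rough illustration of the "superpixels, then density-based clustering" idea (not the paper's HMSLIC/adaptive-threshold pipeline), the sketch below oversegments one slice with SLIC and clusters superpixel centroid/intensity features with DBSCAN; every parameter value is a placeholder:

        import numpy as np
        from skimage.segmentation import slic
        from sklearn.cluster import DBSCAN

        def cluster_superpixels(img, n_segments=600, eps=8.0, min_samples=5):
            """Oversegment a grayscale slice and group superpixels by spatial/intensity density."""
            labels = slic(img, n_segments=n_segments, compactness=0.1,
                          channel_axis=None)          # channel_axis=None: grayscale (skimage >= 0.19)
            ids = np.unique(labels)
            feats = []
            for sp in ids:
                mask = labels == sp
                rows, cols = np.nonzero(mask)
                feats.append([rows.mean(), cols.mean(), img[mask].mean()])
            clusters = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(np.array(feats))
            return clusters[np.searchsorted(ids, labels)]   # per-pixel cluster id map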

  17. Comparison of Free-Breathing With Navigator-Triggered Technique in Diffusion Weighted Imaging for Evaluation of Small Hepatocellular Carcinoma: Effect on Image Quality and Intravoxel Incoherent Motion Parameters.

    PubMed

    Shan, Yan; Zeng, Meng-su; Liu, Kai; Miao, Xi-Yin; Lin, Jiang; Fu, Cai xia; Xu, Peng-ju

    2015-01-01

    To evaluate the effect on image quality and intravoxel incoherent motion (IVIM) parameters of small hepatocellular carcinoma (HCC) from the choice of either free-breathing (FB) or navigator-triggered (NT) diffusion-weighted (DW) imaging. Thirty patients with 37 small HCCs underwent IVIM DW imaging using 12 b values (0-800 s/mm^2) with 2 sequences: NT and FB. A biexponential analysis with the Bayesian method yielded true diffusion coefficient (D), pseudodiffusion coefficient (D*), and perfusion fraction (f) in small HCCs and liver parenchyma. The apparent diffusion coefficient (ADC) was also calculated. The acquisition time and image quality scores were assessed for the 2 sequences. The independent sample t test was used to compare image quality, signal intensity ratio, IVIM parameters, and ADC values between the 2 sequences; reproducibility of IVIM parameters and ADC values between the 2 sequences was assessed with the Bland-Altman method (BA-LA). Image quality with the NT sequence was superior to that with the FB acquisition (P = 0.02). The mean acquisition time for the FB scheme was shorter than that of the NT sequence (6 minutes 14 seconds vs 10 minutes 21 seconds ± 10 seconds; P < 0.01). The signal intensity ratio of small HCCs did not vary significantly between the 2 sequences. The ADC and IVIM parameters from the 2 sequences showed no significant difference. Reproducibility of the D* and f parameters in small HCC was poor (BA-LA: 95% confidence interval, -180.8% to 189.2% for D* and -133.8% to 174.9% for f). A moderate reproducibility of the D and ADC parameters was observed (BA-LA: 95% confidence interval, -83.5% to 76.8% for D and -74.4% to 88.2% for ADC) between the 2 sequences. The NT DW imaging technique offers no advantage in IVIM parameter measurements of small HCC except better image quality, whereas the FB technique offers greater confidence in fitted diffusion parameters for matched acquisition periods.
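
    The Bland-Altman comparison used above reduces to the mean difference between paired measurements and its 95% limits of agreement; a minimal sketch (reporting absolute limits, whereas the paper reports them as percentages):

        import numpy as np

        def bland_altman(x, y):
            """Bias and 95% limits of agreement between two paired measurements,
            e.g. an IVIM parameter from the FB and NT acquisitions."""
            diff = np.asarray(x, float) - np.asarray(y, float)
            bias = diff.mean()
            sd = diff.std(ddof=1)
            return bias, (bias - 1.96 * sd, bias + 1.96 * sd)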

  18. Object Rotation Effects on the Timing of a Hitting Action

    ERIC Educational Resources Information Center

    Scott, Mark A.; van der Kamp, John; Savelsbergh, Geert J. P.; Oudejans, Raoul R. D.; Davids, Keith

    2004-01-01

    In this article, the authors investigated how perturbing optical information affects the guidance of an unfolding hitting action. Using monocular and binocular vision, six participants were required to hit a rectangular foam object, released from two different heights, under four different approach conditions, two with object rotation (to perturb…

  19. ELECTROPHYSIOLOGICAL AND MORPHOLOGICAL EVIDENCE FOR A DIAMETER-BASED INNERVATION PATTERN OF THE SUPERIOR COLLICULUS (JOURNAL VERSION)

    EPA Science Inventory

    Neurophysiological and morphological techniques were used to describe changes in the optic tract and superior colliculus (SC) in response to monocular enucleation. Long-Evans, male, (250g) rats were implanted with chronic bipolar stimulating electrodes located in the optic chiasm...

  20. A Monocular SLAM Method to Estimate Relative Pose During Satellite Proximity Operations

    DTIC Science & Technology

    2015-03-26

    localization and mapping with efficient outlier handling. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2013. 5. Herbert Bay...S.H. Spencer. Next generation advanced video guidance sensor. In Aerospace Conference, 2008 IEEE, pages 1–8, March 2008. 12. Michael Calonder, Vincent

  1. Toward autonomous avian-inspired grasping for micro aerial vehicles.

    PubMed

    Thomas, Justin; Loianno, Giuseppe; Polin, Joseph; Sreenath, Koushil; Kumar, Vijay

    2014-06-01

    Micro aerial vehicles, particularly quadrotors, have been used in a wide range of applications. However, the literature on aerial manipulation and grasping is limited and the work is based on quasi-static models. In this paper, we draw inspiration from agile, fast-moving birds such as raptors, which are able to capture moving prey on the ground or in water, and develop similar capabilities for quadrotors. We address dynamic grasping, an approach to prehensile grasping in which the dynamics of the robot and its gripper are significant and must be explicitly modeled and controlled for successful execution. Dynamic grasping is relevant for fast pick-and-place operations, transportation and delivery of objects, and placing or retrieving sensors. We show how this capability can be realized (a) using a motion capture system and (b) without external sensors relying only on onboard sensors. In both cases we describe the dynamic model, and trajectory planning and control algorithms. In particular, we present a methodology for flying and grasping a cylindrical object using feedback from a monocular camera and an inertial measurement unit onboard the aerial robot. This is accomplished by mapping the dynamics of the quadrotor to a level virtual image plane, which in turn enables dynamically-feasible trajectory planning for image features in the image space, and a vision-based controller with guaranteed convergence properties. We also present experimental results obtained with a quadrotor equipped with an articulated gripper to illustrate both approaches.

  2. Discrete Cosine Transform Image Coding With Sliding Block Codes

    NASA Astrophysics Data System (ADS)

    Divakaran, Ajay; Pearlman, William A.

    1989-11-01

    A transform trellis coding scheme for images is presented. A two-dimensional discrete cosine transform is applied to the image, followed by a search on a trellis-structured code. This code is a sliding block code that utilizes a constrained-size reproduction alphabet. The transform coding divides the image into blocks. The non-stationarity of the image is counteracted by grouping these blocks into clusters through a clustering algorithm and then encoding the clusters separately. Mandela ordered sequences are formed from each cluster, i.e., identically indexed coefficients from each block are grouped together to form one-dimensional sequences. A separate search ensues on each of these Mandela ordered sequences. Padding sequences are used to improve the trellis search fidelity; they absorb the error caused by the building up of the trellis to full size. The simulations were carried out on a 256x256 image ('LENA'). The results are comparable to those of existing schemes, and the visual quality of the image is enhanced considerably by the padding and clustering.
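
    A small sketch of the block DCT followed by the "Mandela ordering" step, i.e. regrouping identically indexed coefficients across all blocks into one-dimensional sequences; the 8x8 block size and orthonormal DCT normalization are assumptions:

        import numpy as np
        from scipy.fft import dctn

        def block_dct_mandela(image, block=8):
            """Return a (block*block, n_blocks) array: row k is the Mandela ordered sequence
            of the k-th DCT coefficient taken over every block of the image."""
            h, w = image.shape
            h -= h % block
            w -= w % block
            tiles = (image[:h, :w]
                     .reshape(h // block, block, w // block, block)
                     .transpose(0, 2, 1, 3)
                     .reshape(-1, block, block))
            coeffs = np.stack([dctn(t, norm="ortho") for t in tiles])
            return coeffs.reshape(coeffs.shape[0], -1).T

    Clustering of the blocks and the trellis search would then operate on these per-coefficient sequences.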

  3. Cassini Imaging Science: First Results at Saturn

    NASA Astrophysics Data System (ADS)

    Porco, C. C.

    The Cassini Imaging Science experiment at Saturn will commence in early February, 2004 -- five months before Cassini's arrival at Saturn. Approach observations consist of repeated multi-spectral `movie' sequences of Saturn and its rings, image sequences designed to search for previously unseen satellites between the outer edge of the ring system and the orbit of Hyperion, images of known satellites for orbit refinement, observations of Phoebe during Cassini's closest approach to the satellite, and repeated multi-spectral `movie' sequences of Titan to detect and track clouds (for wind determination) and to sense the surface. During Saturn Orbit Insertion, the highest resolution images (~ 100 m) obtained during the whole orbital tour will be collected of the dark side of the rings. Finally, imaging sequences are planned for Cassini's first Titan flyby, on July 2, from a distance of ~ 350,000 km, yielding an image scale of ~ 2.1 km on the South polar region. The highlights of these observation sequences will be presented.

  4. Two-dimensional PCA-based human gait identification

    NASA Astrophysics Data System (ADS)

    Chen, Jinyan; Wu, Rongteng

    2012-11-01

    Automatically recognizing people through visual surveillance is important for public security. Human gait-based identification focuses on recognizing a person automatically from walking video using computer vision and image processing approaches. As a potential biometric measure, human gait identification has attracted more and more researchers. Current human gait identification methods can be divided into two categories: model-based methods and motion-based methods. In this paper, a human gait identification method based on two-dimensional Principal Component Analysis (2DPCA) and temporal-spatial analysis is proposed. Using background estimation and image subtraction, a binary image sequence is obtained from the surveillance video. By differencing each pair of adjacent images in the gait sequence, a sequence of binary difference images is obtained; each difference image captures the body's motion pattern at that instant of the walk. The temporal-spatial features are extracted from the difference image sequence as follows: projecting a difference image onto the X axis and onto the Y axis yields two vectors, and stacking these projections over all difference images in the sequence yields two matrices, which together characterize the walking style. Two-Dimensional Principal Component Analysis (2DPCA) is then used to transform these two matrices into two vectors while preserving the maximum separability. Finally, the similarity of two human gaits is computed as the Euclidean distance between the two feature vectors. The performance of the method is illustrated using the CASIA Gait Database.
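
    A compact sketch of the 2DPCA step, building the image covariance matrix directly from 2D arrays (no vectorisation) and projecting onto its leading eigenvectors; in the paper this is applied to the two projection matrices described above, while the sketch is written for a generic stack of matrices:

        import numpy as np

        def two_d_pca(matrices, n_components=10):
            """Return the top 2DPCA projection axes and the mean matrix."""
            A = np.asarray(matrices, float)            # shape (N, h, w)
            mean = A.mean(axis=0)
            G = np.zeros((A.shape[2], A.shape[2]))
            for a in A:
                d = a - mean
                G += d.T @ d
            G /= A.shape[0]
            _, evecs = np.linalg.eigh(G)               # eigenvalues in ascending order
            return evecs[:, ::-1][:, :n_components], mean

        def gait_feature(matrix, axes, mean):
            """Feature Y = (A - mean) X; gaits are compared by the Euclidean distance
            between (flattened) features."""
            return (matrix - mean) @ axes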

  5. Analysis of simulated image sequences from sensors for restricted-visibility operations

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar

    1991-01-01

    A real time model of the visible output from a 94 GHz sensor, based on a radiometric simulation of the sensor, was developed. A sequence of images as seen from an aircraft as it approaches for landing was simulated using this model. Thirty frames from this sequence of 200 x 200 pixel images were analyzed to identify and track objects in the image using the Cantata image processing package within the visual programming environment provided by the Khoros software system. The image analysis operations are described.

  6. Research on respiratory motion correction method based on liver contrast-enhanced ultrasound images of single mode

    NASA Astrophysics Data System (ADS)

    Zhang, Ji; Li, Tao; Zheng, Shiqiang; Li, Yiyong

    2015-03-01

    To reduce the effects of respiratory motion in quantitative analysis based on single-mode liver contrast-enhanced ultrasound (CEUS) image sequences, an image gating method and an iterative registration method using a model image were adopted to register the sequences. The feasibility of the proposed respiratory motion correction method was explored preliminarily using 10 hepatocellular carcinoma CEUS cases. The positions of the lesions in the time series of 2D ultrasound images after correction were visually evaluated. The quality of the weighted sum of transit time (WSTT) parametric images before and after correction was also compared, in terms of accuracy and spatial resolution. For the corrected and uncorrected sequences, the mean deviation values (mDVs) of the time-intensity curve (TIC) fitting derived from the CEUS sequences were measured. After correction, the positions of the lesions in the time series of 2D ultrasound images were almost invariant, whereas the lesions in the uncorrected images all shifted noticeably. The quality of the WSTT parametric maps derived from the liver CEUS image sequences was greatly improved. Moreover, the mDVs of the TIC fitting derived from the CEUS sequences decreased after correction by an average of 48.48 ± 42.15. The proposed correction method could improve the accuracy of quantitative analysis based on single-mode liver CEUS image sequences, which would help to enhance the differential diagnosis of liver tumors.

  7. Rapid Gradient-Echo Imaging

    PubMed Central

    Hargreaves, Brian

    2012-01-01

    Gradient echo sequences are widely used in magnetic resonance imaging (MRI) for numerous applications ranging from angiography to perfusion to functional MRI. Compared with spin-echo techniques, the very short repetition times of gradient-echo methods enable very rapid 2D and 3D imaging, but also lead to complicated “steady states.” Signal and contrast behavior can be described graphically and mathematically, and depends strongly on the type of spoiling: fully balanced (no spoiling), gradient spoiling, or RF-spoiling. These spoiling options trade off between high signal and pure T1 contrast while the flip angle also affects image contrast in all cases, both of which can be demonstrated theoretically and in image examples. As with spin-echo sequences, magnetization preparation can be added to gradient-echo sequences to alter image contrast. Gradient echo sequences are widely used for numerous applications such as 3D perfusion imaging, functional MRI, cardiac imaging and MR angiography. PMID:23097185

  8. Using cellular automata to generate image representation for biological sequences.

    PubMed

    Xiao, X; Shao, S; Ding, Y; Huang, Z; Chen, X; Chou, K-C

    2005-02-01

    A novel approach to visualize biological sequences is developed based on cellular automata (Wolfram, S. Nature 1984, 311, 419-424), a class of discrete dynamical systems in which space and time are discrete. By transforming the symbolic sequence codes into digital codes and applying optimal space-time evolution rules of cellular automata, a biological sequence can be represented by a unique image, the so-called cellular automata image. Many important features that are originally hidden in a long and complicated biological sequence can be clearly revealed through its cellular automata image. With the number of biological sequences entering databanks rapidly increasing in the post-genomic era, it is anticipated that the cellular automata image will become a very useful vehicle for investigating their key features, identifying their functions, and revealing their "fingerprints". It is also anticipated that, by using the concept of the pseudo amino acid composition (Chou, K.C. Proteins: Structure, Function, and Genetics, 2001, 43, 246-255), the cellular automata image approach can be used to improve the quality of predicting protein attributes, such as structural class and subcellular location.
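
    As a rough illustration of the construction described above, the sketch below maps a DNA string to a binary row and evolves an elementary cellular automaton to produce a space-time image. The two-bit nucleotide coding and the choice of rule 90 are arbitrary assumptions for illustration; the paper's optimal evolvement rules and coding scheme are not reproduced here:

      import numpy as np

      # Assumed two-bit coding of nucleotides; the first CA row is the encoded sequence.
      CODE = {"A": (0, 0), "C": (0, 1), "G": (1, 0), "T": (1, 1)}

      def ca_image(seq, steps=64, rule=90):
          # Evolve an elementary cellular automaton from the encoded sequence and
          # return the space-time diagram as a 2D uint8 array (steps x 2*len(seq)).
          row = np.array([b for ch in seq.upper() for b in CODE[ch]], dtype=np.uint8)
          rule_bits = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
          image = [row]
          for _ in range(steps - 1):
              left, right = np.roll(row, 1), np.roll(row, -1)   # periodic boundary
              idx = (left << 2) | (row << 1) | right            # neighbourhood index 0..7
              row = rule_bits[idx]
              image.append(row)
          return np.stack(image)

      print(ca_image("ACGTACGTAAGGTTCC").shape)   # (64, 32)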

  9. Assessment of cerebral venous sinus thrombosis using T2*-weighted gradient echo magnetic resonance imaging sequences

    PubMed Central

    Bidar, Fatemeh; Faeghi, Fariborz; Ghorbani, Askar

    2016-01-01

    Background: The purpose of this study is to demonstrate the advantages of gradient echo (GRE) sequences in the detection and characterization of cerebral venous sinus thrombosis compared with conventional magnetic resonance sequences. Methods: A total of 17 patients with cerebral venous thrombosis (CVT) were evaluated using different magnetic resonance imaging (MRI) sequences. The MRI sequences included T1-weighted spin echo (SE) imaging, T2*-weighted turbo SE (TSE), fluid-attenuated inversion recovery (FLAIR), T2*-weighted conventional GRE, and diffusion-weighted imaging (DWI). MR venography (MRV) images were obtained as the gold standard. Results: Venous sinus thrombosis was best detectable on T2*-weighted conventional GRE sequences in all patients except one. Venous thrombosis was undetectable on DWI. T2*-weighted GRE sequences were superior to T2*-weighted TSE, T1-weighted SE, and FLAIR. Enhanced MRV successfully displayed the location of thrombosis. Conclusion: T2*-weighted conventional GRE sequences are probably the best method for the assessment of cerebral venous sinus thrombosis. The method is non-invasive and can therefore be employed in the clinical evaluation of cerebral venous sinus thrombosis. PMID:27326365

  10. Optimising diffusion-weighted imaging in the abdomen and pelvis: comparison of image quality between monopolar and bipolar single-shot spin-echo echo-planar sequences.

    PubMed

    Kyriazi, Stavroula; Blackledge, Matthew; Collins, David J; Desouza, Nandita M

    2010-10-01

    To compare geometric distortion, signal-to-noise ratio (SNR), apparent diffusion coefficient (ADC), efficacy of fat suppression and presence of artefact between monopolar (Stejskal and Tanner) and bipolar (twice-refocused, eddy-current-compensating) diffusion-weighted imaging (DWI) sequences in the abdomen and pelvis. A semiquantitative distortion index (DI) was derived from the subtraction images with b = 0 and 1,000 s/mm² in a phantom and compared between the two sequences. Seven subjects were imaged with both sequences using four b values (0, 600, 900 and 1,050 s/mm²) and SNR, ADC for different organs and fat-to-muscle signal ratio (FMR) were compared. Image quality was evaluated by two radiologists on a 5-point scale. DI was improved in the bipolar sequence, indicating less geometric distortion. SNR was significantly lower for all tissues and b values in the bipolar images compared with the monopolar (p < 0.05), whereas FMR was not statistically different. ADC in liver, kidney and sacrum was higher in the bipolar scheme compared to the monopolar (p < 0.03), whereas in muscle it was lower (p = 0.018). Image quality scores were higher for the bipolar sequence (p ≤ 0.025). Artefact reduction makes the bipolar DWI sequence preferable in abdominopelvic applications, although the trade-off in SNR may compromise ADC measurements in muscle.

  11. Steady-state MR imaging sequences: physics, classification, and clinical applications.

    PubMed

    Chavhan, Govind B; Babyn, Paul S; Jankharia, Bhavin G; Cheng, Hai-Ling M; Shroff, Manohar M

    2008-01-01

    Steady-state sequences are a class of rapid magnetic resonance (MR) imaging techniques based on fast gradient-echo acquisitions in which both longitudinal magnetization (LM) and transverse magnetization (TM) are kept constant. Both LM and TM reach a nonzero steady state through the use of a repetition time that is shorter than the T2 relaxation time of tissue. When TM is maintained as multiple radiofrequency excitation pulses are applied, two types of signal are formed once steady state is reached: preexcitation signal (S-) from echo reformation; and postexcitation signal (S+), which consists of free induction decay. Depending on the signal sampled and used to form an image, steady-state sequences can be classified as (a) postexcitation refocused (only S+ is sampled), (b) preexcitation refocused (only S- is sampled), and (c) fully refocused (both S+ and S- are sampled) sequences. All tissues with a reasonably long T2 relaxation time will show additional signals due to various refocused echo paths. Steady-state sequences have revolutionized cardiac imaging and have become the standard for anatomic functional cardiac imaging and for the assessment of myocardial viability because of their good signal-to-noise ratio and contrast-to-noise ratio and increased speed of acquisition. They are also useful in abdominal and fetal imaging and hold promise for interventional MR imaging. Because steady-state sequences are now commonly used in MR imaging, radiologists will benefit from understanding the underlying physics, classification, and clinical applications of these sequences.

  12. Spatio-temporal alignment of pedobarographic image sequences.

    PubMed

    Oliveira, Francisco P M; Sousa, Andreia; Santos, Rubim; Tavares, João Manuel R S

    2011-07-01

    This article presents a methodology to align plantar pressure image sequences simultaneously in time and space. The spatial position and orientation of a foot in a sequence are changed to match the foot represented in a second sequence. Simultaneously with the spatial alignment, the temporal scale of the first sequence is transformed with the aim of synchronizing the two input footsteps. Consequently, the spatial correspondence of the foot regions along the sequences as well as the temporal synchronizing is automatically attained, making the study easier and more straightforward. In terms of spatial alignment, the methodology can use one of four possible geometric transformation models: rigid, similarity, affine, or projective. In the temporal alignment, a polynomial transformation up to the 4th degree can be adopted in order to model linear and curved time behaviors. Suitable geometric and temporal transformations are found by minimizing the mean squared error (MSE) between the input sequences. The methodology was tested on a set of real image sequences acquired from a common pedobarographic device. When used in experimental cases generated by applying geometric and temporal control transformations, the methodology revealed high accuracy. In addition, the intra-subject alignment tests from real plantar pressure image sequences showed that the curved temporal models produced better MSE results (P < 0.001) than the linear temporal model. This article represents an important step forward in the alignment of pedobarographic image data, since previous methods can only be applied on static images.
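
    A simplified sketch of the temporal part of this optimization in Python, assuming two pressure-image sequences of identical frame size stored as NumPy arrays of shape (frames, rows, cols). The polynomial time warp and the MSE objective follow the description above, while the spatial (rigid/similarity/affine/projective) alignment and the paper's actual optimizer are omitted:

      import numpy as np
      from scipy.optimize import minimize

      def resample(seq, times):
          # Linear interpolation of an image sequence (T, H, W) at fractional frame times.
          t0 = np.clip(np.floor(times).astype(int), 0, len(seq) - 1)
          t1 = np.clip(t0 + 1, 0, len(seq) - 1)
          w = (times - t0)[:, None, None]
          return (1 - w) * seq[t0] + w * seq[t1]

      def temporal_align(seq_a, seq_b, degree=2):
          # Find polynomial coefficients c so that seq_b evaluated at
          # t' = c0 + c1*t + ... best matches seq_a(t) in the MSE sense.
          t = np.linspace(0.0, 1.0, len(seq_a))

          def mse(coeffs):
              warped_t = np.polyval(coeffs[::-1], t) * (len(seq_b) - 1)
              warped = resample(seq_b, np.clip(warped_t, 0, len(seq_b) - 1))
              return float(np.mean((seq_a - warped) ** 2))

          x0 = np.zeros(degree + 1)
          x0[1] = 1.0                      # start from the identity warp t' = t
          result = minimize(mse, x0, method="Nelder-Mead")
          return result.x, result.fun

    In the full method the geometric transformation parameters would be optimized jointly with these temporal coefficients rather than separately.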

  13. Integrated circuit layer image segmentation

    NASA Astrophysics Data System (ADS)

    Masalskis, Giedrius; Petrauskas, Romas

    2010-09-01

    In this paper we present IC layer image segmentation techniques which are specifically created for precise metal layer feature extraction. During our research we used many samples of real-life de-processed IC metal layer images which were obtained using optical light microscope. We have created sequence of various image processing filters which provides segmentation results of good enough precision for our application. Filter sequences were fine tuned to provide best possible results depending on properties of IC manufacturing process and imaging technology. Proposed IC image segmentation filter sequences were experimentally tested and compared with conventional direct segmentation algorithms.

  14. Structure-preserving interpolation of temporal and spatial image sequences using an optical flow-based method.

    PubMed

    Ehrhardt, J; Säring, D; Handels, H

    2007-01-01

    Modern tomographic imaging devices enable the acquisition of spatial and temporal image sequences. But, the spatial and temporal resolution of such devices is limited and therefore image interpolation techniques are needed to represent images at a desired level of discretization. This paper presents a method for structure-preserving interpolation between neighboring slices in temporal or spatial image sequences. In a first step, the spatiotemporal velocity field between image slices is determined using an optical flow-based registration method in order to establish spatial correspondence between adjacent slices. An iterative algorithm is applied using the spatial and temporal image derivatives and a spatiotemporal smoothing step. Afterwards, the calculated velocity field is used to generate an interpolated image at the desired time by averaging intensities between corresponding points. Three quantitative measures are defined to evaluate the performance of the interpolation method. The behavior and capability of the algorithm is demonstrated by synthetic images. A population of 17 temporal and spatial image sequences are utilized to compare the optical flow-based interpolation method to linear and shape-based interpolation. The quantitative results show that the optical flow-based method outperforms the linear and shape-based interpolation statistically significantly. The interpolation method presented is able to generate image sequences with appropriate spatial or temporal resolution needed for image comparison, analysis or visualization tasks. Quantitative and qualitative measures extracted from synthetic phantoms and medical image data show that the new method definitely has advantages over linear and shape-based interpolation.
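
    A hedged sketch of the correspondence-averaging step, using OpenCV's dense Farnebäck optical flow in place of the paper's registration scheme. Two 8-bit grayscale slices of identical size are assumed as input, and the flow-scaling rule below is a standard approximation rather than the authors' exact formulation:

      import cv2
      import numpy as np

      def flow_interpolate(img0, img1, alpha=0.5):
          # Estimate dense flow from img0 to img1, warp both slices toward the
          # intermediate position alpha, and average the warped intensities.
          flow = cv2.calcOpticalFlowFarneback(img0, img1, None, pyr_scale=0.5,
                                              levels=3, winsize=15, iterations=3,
                                              poly_n=5, poly_sigma=1.2, flags=0)
          h, w = img0.shape
          grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))

          # backward maps from the intermediate frame into img0 and img1
          # (flow evaluated at the destination pixel as an approximation)
          map0_x = (grid_x - alpha * flow[..., 0]).astype(np.float32)
          map0_y = (grid_y - alpha * flow[..., 1]).astype(np.float32)
          map1_x = (grid_x + (1 - alpha) * flow[..., 0]).astype(np.float32)
          map1_y = (grid_y + (1 - alpha) * flow[..., 1]).astype(np.float32)

          warped0 = cv2.remap(img0, map0_x, map0_y, cv2.INTER_LINEAR)
          warped1 = cv2.remap(img1, map1_x, map1_y, cv2.INTER_LINEAR)
          return ((1 - alpha) * warped0 + alpha * warped1).astype(img0.dtype)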

  15. On the fallacy of quantitative segmentation for T1-weighted MRI

    NASA Astrophysics Data System (ADS)

    Plassard, Andrew J.; Harrigan, Robert L.; Newton, Allen T.; Rane, Swati; Pallavaram, Srivatsan; D'Haese, Pierre F.; Dawant, Benoit M.; Claassen, Daniel O.; Landman, Bennett A.

    2016-03-01

    T1-weighted magnetic resonance imaging (MRI) generates contrasts with primary sensitivity to local T1 properties (with lesser T2 and PD contributions). The observed signal intensity is determined by these local properties and the sequence parameters of the acquisition. In common practice, a range of acceptable parameters is used to ensure "similar" contrast across scanners used for any particular study (e.g., the ADNI standard MPRAGE). However, different studies may use different ranges of parameters and report the derived data as simply "T1-weighted". Physics and imaging authors pay strong heed to the specifics of the imaging sequences, but image processing authors have historically been more lax. Herein, we consider three T1-weighted sequences acquired with the same underlying protocol (MPRAGE) and vendor (Philips), but with "normal study-to-study variation" in parameters. We show that the gray matter/white matter/cerebrospinal fluid contrast is subtly but systematically different between these images and yields systematically different measurements of brain volume. The problem derives from visually apparent boundary shifts, which would also be seen by a human rater. We present and evaluate two solutions to produce consistent segmentation results across imaging protocols. First, we propose to acquire multiple sequences on a subset of the data and use the multi-modal imaging as atlases to segment target images from any of the available sequences. Second (if additional imaging is not available), we propose to synthesize atlases of the target imaging sequence and use the synthesized atlases in place of atlas imaging data. Both approaches significantly improve the consistency of target labeling.

  16. Real-time UAV trajectory generation using feature points matching between video image sequences

    NASA Astrophysics Data System (ADS)

    Byun, Younggi; Song, Jeongheon; Han, Dongyeob

    2017-09-01

    Unmanned aerial vehicles (UAVs), equipped with navigation systems and video capability, are currently being deployed for intelligence, reconnaissance, and surveillance missions. In this paper, we present a systematic approach for the generation of UAV trajectory using a video image matching system based on SURF (Speeded Up Robust Features) and Preemptive RANSAC (Random Sample Consensus). Video image matching to find matching points is one of the most important steps for the accurate generation of the UAV trajectory (the sequence of poses in 3D space). We used the SURF algorithm to find matching points between video image sequences, and removed mismatches using Preemptive RANSAC, which divides all matching points into outliers and inliers. Only the inliers are used to determine the epipolar geometry for estimating the relative pose (rotation and translation) between image sequences. Experimental results from simulated video image sequences show that our approach has good potential to be applied to automatic geo-localization of UAV systems.
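
    A sketch of the matching and epipolar-geometry step using OpenCV. ORB features with a brute-force Hamming matcher are used here as a freely available stand-in for SURF, and OpenCV's RANSAC essential-matrix estimation replaces the paper's Preemptive RANSAC; the 3x3 intrinsic matrix K and an image pair are assumed to be given:

      import cv2
      import numpy as np

      def relative_pose(img_prev, img_next, K):
          # Detect and match features, reject outliers while estimating the
          # essential matrix with RANSAC, then decompose it into R and t.
          orb = cv2.ORB_create(nfeatures=2000)
          kp1, des1 = orb.detectAndCompute(img_prev, None)
          kp2, des2 = orb.detectAndCompute(img_next, None)

          matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
          matches = matcher.match(des1, des2)
          pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
          pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

          E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                            prob=0.999, threshold=1.0)
          _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
          return R, t   # rotation and unit-scale translation of img_next w.r.t. img_prev

    Chaining the recovered relative poses over consecutive frames (with the translation scale fixed from other sensors or known geometry) then yields the camera trajectory.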

  17. Motion Detection in Ultrasound Image-Sequences Using Tensor Voting

    NASA Astrophysics Data System (ADS)

    Inba, Masafumi; Yanagida, Hirotaka; Tamura, Yasutaka

    2008-05-01

    Motion detection in ultrasound image sequences using tensor voting is described. We have been developing an ultrasound imaging system that combines coded excitation and synthetic aperture focusing techniques. With our method, the frame rate of the system at a distance of 150 mm reaches 5000 frames/s. A sparse array and short-duration coded ultrasound signals are used for high-speed data acquisition. However, many artifacts appear in the reconstructed image sequences because of the incompleteness of the transmitted code. To reduce these artifacts, we examined the application of tensor voting to this imaging method, which adopts both coded excitation and synthetic aperture techniques. In this study, the basis for applying tensor voting and the motion detection method to ultrasound images is derived. It was confirmed that velocity detection and feature enhancement are possible using tensor voting in the time and space of simulated three-dimensional ultrasound image sequences.

  18. Evaluation of Skylab (EREP) data for forest and rangeland surveys

    Treesearch

    Robert C. Aldrich

    1976-01-01

    Data products from the Skylab Earth Resources Experiment Package were examined monocularly or stereoscopically using a variety of magnifying interpretation devices. Land use, forest types, physiographic sites, and plant communities, as well as forest stress, were interpreted and mapped at sites in Georgia, South Dakota, and Colorado. Microdensitometric techniques and...

  19. Visibility of Monocular Symbology in Transparent Head-Mounted Display Applications

    DTIC Science & Technology

    2015-07-08

    [Abstract not recovered; only citation fragments remain.] ... Displays XX, edited by Daniel D. Desjardins, Peter L. Marasco, Kalluri R. Sarma, Paul R. Havig, Michael P. Browne, and James E. Melzer, Proc. of SPIE; related work in Head- and Helmet-Mounted Displays XV: Design and Applications, Proceedings of SPIE Volume 7688 (2010).

  20. Adult Visual Experience Promotes Recovery of Primary Visual Cortex from Long-Term Monocular Deprivation

    ERIC Educational Resources Information Center

    Fischer, Quentin S.; Aleem, Salman; Zhou, Hongyi; Pham, Tony A.

    2007-01-01

    Prolonged visual deprivation from early childhood to maturity is believed to cause permanent visual impairment. However, there have been case reports of substantial improvement of binocular vision in human adults following lifelong visual impairment or deprivation. These observations, together with recent findings of adult ocular dominance…

  1. Higher Brain Functions Served by the Lowly Rodent Primary Visual Cortex

    ERIC Educational Resources Information Center

    Gavornik, Jeffrey P.; Bear, Mark F.

    2014-01-01

    It has been more than 50 years since the first description of ocular dominance plasticity--the profound modification of primary visual cortex (V1) following temporary monocular deprivation. This discovery immediately attracted the intense interest of neurobiologists focused on the general question of how experience and deprivation modify the brain…

  2. Dynamics of the near response under natural viewing conditions with an open-view sensor

    PubMed Central

    Chirre, Emmanuel; Prieto, Pedro; Artal, Pablo

    2015-01-01

    We have studied the temporal dynamics of the near response (accommodation, convergence and pupil constriction) in healthy subjects when accommodation was performed under natural binocular and monocular viewing conditions. A binocular open-view multi-sensor based on an invisible infrared Hartmann-Shack sensor was used for non-invasive measurements of both eyes simultaneously in real time at 25Hz. Response times for each process under different conditions were measured. The accommodative responses for binocular vision were faster than for monocular conditions. When one eye was blocked, accommodation and convergence were triggered simultaneously and synchronized, despite the fact that no retinal disparity was available. We found that upon the onset of the near target, the unblocked eye rapidly changes its line of sight to fix it on the stimulus while the blocked eye moves in the same direction, producing the equivalent to a saccade, but then converges to the (blocked) target in synchrony with accommodation. This open-view instrument could be further used for additional experiments with other tasks and conditions. PMID:26504666

  3. Optimization of dynamic envelope measurement system for high speed train based on monocular vision

    NASA Astrophysics Data System (ADS)

    Wu, Bin; Liu, Changjie; Fu, Luhua; Wang, Zhong

    2018-01-01

    The dynamic envelope curve is defined as the maximum outline swept by a train under various adverse effects during running, and it is an important basis for setting railway clearance boundaries. At present, the dynamic envelope curve of high-speed vehicles is mainly measured by binocular vision, but existing systems suffer from poor portability, a complicated process, and high cost. In this paper, a new measurement system based on monocular vision measurement theory and an analysis of the test environment is designed, and the measurement system parameters, the calibration of a camera with a wide field of view, and the calibration of the laser plane are designed and optimized. Repeated tests and analysis of the experimental data verify an accuracy of up to 2 mm, validating the feasibility and adaptability of the measurement system. Advantages of the system include lower cost, a simpler measurement and data processing procedure, more reliable data, and no need for a matching algorithm.

  4. Indirect carotid cavernous fistula mimicking ocular myasthenia.

    PubMed

    Leishangthem, Lakshmi; Satti, Sudhakar Reddy

    2017-10-19

    A 71-year-old woman presented with progressive left-sided, monocular diplopia and ptosis. Her symptoms mimicked ocular myasthenia, but she had an indirect carotid cavernous fistula (CCF). She was diagnosed with monocular myasthenia gravis (negative acetylcholinesterase antibody) after a positive ice test, started on Mestinon, and underwent a thymectomy complicated by a brachial plexus injury. Months later, she developed left-sided proptosis and an ocular bruit. She was urgently referred to neurointerventional surgery and was diagnosed with an indirect high-flow left CCF, which was treated with Onyx liquid and platinum coil embolisation. Mestinon was discontinued and her ophthalmic symptoms resolved. However, she was left with residual left arm and hand hemiparesis and dysmetria secondary to the brachial plexus injury. An indirect CCF can present with subtle and progressive symptoms, leading to delayed diagnosis or misdiagnosis, so it is important for ophthalmologists to consider this differential in a patient with progressive ocular symptoms. © BMJ Publishing Group Ltd (unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  5. Remote operation: a selective review of research into visual depth perception.

    PubMed

    Reinhardt-Rutland, A H

    1996-07-01

    Some perceptual motor operations are performed remotely; examples include the handling of life-threatening materials and surgical procedures. A camera conveys the site of operation to a TV monitor, so depth perception relies mainly on pictorial information, perhaps with enhancement of the occlusion cue by motion. However, motion information such as motion parallax is not likely to be important. The effectiveness of pictorial information is diminished by monocular and binocular information conveying flatness of the screen and by difficulties in scaling: Only a degree of relative depth can be conveyed. Furthermore, pictorial information can mislead. Depth perception is probably adequate in remote operation, if target objects are well separated, with well-defined edges and familiar shapes. Stereoscopic viewing systems are being developed to introduce binocular information to remote operation. However, stereoscopic viewing is problematic because binocular disparity conflicts with convergence and monocular information. An alternative strategy to improve precision in remote operation may be to rely on individuals who lack binocular function: There is redundancy in depth information, and such individuals seem to compensate for the lack of binocular function.

  6. Head Worn Display System for Equivalent Visual Operations

    NASA Technical Reports Server (NTRS)

    Cupero, Frank; Valimont, Brian; Wise, John; Best, Carl; DeMers, Bob

    2009-01-01

    Head-worn displays, or so-called near-to-eye displays, have potentially significant advantages in terms of cost, overcoming cockpit space constraints, and displaying spatially integrated information. However, many technical issues need to be overcome before these technologies can be successfully introduced into commercial aircraft cockpits. The results of three activities are reported. First, the near-to-eye display design, technological, and human factors issues are described and a literature review is presented. Second, the results of a fixed-base piloted simulation investigating the impact of near-to-eye displays on both operational and visual performance are reported. Straight-in approaches were flown in simulated visual and instrument conditions while using either a biocular or a monocular display placed on either the dominant or non-dominant eye. The pilots' flight performance, visual acuity, and ability to detect unsafe conditions on the runway were tested. The data generally support a monocular design with minimal impact due to eye dominance. Finally, a method for head tracker system latency measurement is developed and used to compare two different devices.

  7. Restoration of binocular vision in amblyopia.

    PubMed

    Hess, R F; Mansouri, B; Thompson, B

    2011-09-01

    To develop a treatment for amblyopia based on re-establishing binocular vision, a novel procedure is outlined for measuring and reducing the extent to which the fixing eye suppresses the fellow amblyopic eye in adults with amblyopia. We hypothesize that suppression renders a structurally binocular system functionally monocular. We demonstrate that strabismic amblyopes can combine information normally between their eyes under viewing conditions in which suppression is reduced by presenting stimuli of different contrast to each eye. Furthermore, we show that prolonged periods of binocular combination lead to a strengthening of binocular vision in strabismic amblyopes and eventually to combination of binocular information under natural viewing conditions (stimuli of the same contrast in each eye). A concomitant improvement in monocular acuity of the amblyopic eye occurs with this reduction in suppression and strengthening of binocular fusion. Additionally, stereoscopic function was established in the majority of patients tested. We have implemented this approach on a head-mounted device as well as on a handheld iPod. This provides the basis for a new treatment of amblyopia, one that is purely binocular and aimed at reducing suppression as a first step.

  8. A Height Estimation Approach for Terrain Following Flights from Monocular Vision

    PubMed Central

    Campos, Igor S. G.; Nascimento, Erickson R.; Freitas, Gustavo M.; Chaimowicz, Luiz

    2016-01-01

    In this paper, we present a monocular vision-based height estimation algorithm for terrain following flights. The impressive growth of Unmanned Aerial Vehicle (UAV) usage, notably in mapping applications, will soon require the creation of new technologies to enable these systems to better perceive their surroundings. Specifically, we chose to tackle the terrain following problem, as it is still unresolved for consumer available systems. Virtually every mapping aircraft carries a camera; therefore, we chose to exploit this in order to use presently available hardware to extract the height information toward performing terrain following flights. The proposed methodology consists of using optical flow to track features from videos obtained by the UAV, as well as its motion information to estimate the flying height. To determine if the height estimation is reliable, we trained a decision tree that takes the optical flow information as input and classifies whether the output is trustworthy or not. The classifier achieved accuracies of 80% for positives and 90% for negatives, while the height estimation algorithm presented good accuracy. PMID:27929424
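
    A rough sketch of the flow-to-height relation for a nadir-pointing camera over locally flat terrain, using OpenCV's pyramidal Lucas-Kanade tracker. The ground speed, frame interval, and focal length in pixels are assumed to come from the UAV's motion information, and the decision-tree reliability classifier described above is not included:

      import cv2
      import numpy as np

      def estimate_height(img_prev, img_next, speed_mps, dt, focal_px):
          # Ground features seen by a downward-looking camera shift by roughly
          # focal_px * speed_mps * dt / height pixels between frames, so
          # height ~ focal_px * speed_mps * dt / median_flow.
          prev_pts = cv2.goodFeaturesToTrack(img_prev, maxCorners=300,
                                             qualityLevel=0.01, minDistance=8)
          next_pts, status, _err = cv2.calcOpticalFlowPyrLK(img_prev, img_next,
                                                            prev_pts, None)
          good = status.ravel() == 1
          disp = np.linalg.norm((next_pts - prev_pts)[good].reshape(-1, 2), axis=1)
          median_flow = np.median(disp)          # pixels per frame interval
          if median_flow < 1e-6:
              return None                        # flow too small to be trusted
          return focal_px * speed_mps * dt / median_flow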

  9. gr-MRI: A software package for magnetic resonance imaging using software defined radios

    NASA Astrophysics Data System (ADS)

    Hasselwander, Christopher J.; Cao, Zhipeng; Grissom, William A.

    2016-09-01

    The goal of this work is to develop software that enables the rapid implementation of custom MRI spectrometers using commercially-available software defined radios (SDRs). The developed gr-MRI software package comprises a set of Python scripts, flowgraphs, and signal generation and recording blocks for GNU Radio, an open-source SDR software package that is widely used in communications research. gr-MRI implements basic event sequencing functionality, and tools for system calibrations, multi-radio synchronization, and MR signal processing and image reconstruction. It includes four pulse sequences: a single-pulse sequence to record free induction signals, a gradient-recalled echo imaging sequence, a spin echo imaging sequence, and an inversion recovery spin echo imaging sequence. The sequences were used to perform phantom imaging scans with a 0.5 Tesla tabletop MRI scanner and two commercially-available SDRs. One SDR was used for RF excitation and reception, and the other for gradient pulse generation. The total SDR hardware cost was approximately 2000. The frequency of radio desynchronization events and the frequency with which the software recovered from those events was also measured, and the SDR's ability to generate frequency-swept RF waveforms was validated and compared to the scanner's commercial spectrometer. The spin echo images geometrically matched those acquired using the commercial spectrometer, with no unexpected distortions. Desynchronization events were more likely to occur at the very beginning of an imaging scan, but were nearly eliminated if the user invoked the sequence for a short period before beginning data recording. The SDR produced a 500 kHz bandwidth frequency-swept pulse with high fidelity, while the commercial spectrometer produced a waveform with large frequency spike errors. In conclusion, the developed gr-MRI software can be used to develop high-fidelity, low-cost custom MRI spectrometers using commercially-available SDRs.

  10. Application safety evaluation of the radio frequency identification tag under magnetic resonance imaging.

    PubMed

    Fei, Xiaolu; Li, Shanshan; Gao, Shan; Wei, Lan; Wang, Lihong

    2014-09-04

    Radio frequency identification (RFID) has been widely used in healthcare facilities, but little attention has been paid to whether RFID applications are safe enough in the healthcare environment. The purpose of this study is to assess the effects of RFID tags on magnetic resonance (MR) imaging in a typical electromagnetic environment in hospitals and to evaluate the safety of their application. A Magphan phantom was used to simulate the imaging object, while active RFID tags were placed at different distances (0, 4, 8, 10 cm) from the phantom border. The phantom was scanned using three typical sequences: a spin-echo (SE) sequence, a gradient-echo (GRE) sequence, and an inversion-recovery (IR) sequence. Image quality was quantitatively evaluated using signal-to-noise ratio (SNR), uniformity, high-contrast resolution, and geometric distortion. The RFID tags were read by an RFID reader to calculate their usable rate. RFID tags could be read properly after being placed in the high magnetic field for up to 30 minutes. SNR: there were no differences between the group with RFID tags and the group without for the SE and IR sequences, but SNR was lower with the GRE sequence. Uniformity: there was a significant difference between the groups for the SE and GRE sequences. Geometric distortion and high-contrast resolution: no obvious differences were found. Active RFID tags can affect MR imaging quality, especially with the GRE sequence. Increasing the distance from the RFID tags to the imaging object reduces this influence; when the distance was greater than 8 cm, MR imaging quality was almost unaffected. However, gradient-echo related sequences are not recommended when patients wear an RFID wristband.

  11. SSh versus TSE sequence protocol in rapid MR examination of pediatric patients with programmable drainage system.

    PubMed

    Brichtová, Eva; Šenkyřík, J

    2017-05-01

    A low radiation burden is essential during diagnostic procedures in pediatric patients due to their high tissue sensitivity. Using MR examination instead of the routinely used CT reduces the radiation exposure and the risk of adverse stochastic effects. Our retrospective study evaluated the possibility of using ultrafast single-shot (SSh) sequences and turbo spin echo (TSE) sequences in rapid MR brain imaging in pediatric patients with hydrocephalus and a programmable ventriculoperitoneal drainage system. SSh sequences seem to be suitable for examining pediatric patients due to the speed of using this technique, but significant susceptibility artifacts due to the programmable drainage valve degrade the image quality. Therefore, a rapid MR examination protocol based on TSE sequences, less sensitive to artifacts due to ferromagnetic components, has been developed. Of 61 pediatric patients who were examined using MR and the SSh sequence protocol, a group of 15 patients with hydrocephalus and a programmable drainage system also underwent TSE sequence MR imaging. The susceptibility artifact volume in both rapid MR protocols was evaluated using a semiautomatic volumetry system. A statistically significant decrease in the susceptibility artifact volume has been demonstrated in TSE sequence imaging in comparison with SSh sequences. Using TSE sequences reduced the influence of artifacts from the programmable valve, and the image quality in all cases was rated as excellent. In all patients, rapid MR examinations were performed without any need for intravenous sedation or general anesthesia. Our study results strongly suggest the superiority of the TSE sequence MR protocol compared to the SSh sequence protocol in pediatric patients with a programmable ventriculoperitoneal drainage system due to a significant reduction of susceptibility artifact volume. Both rapid sequence MR protocols provide quick and satisfactory brain imaging with no ionizing radiation and a reduced need for intravenous or general anesthesia.

  12. The processing of images of biological threats in visual short-term memory.

    PubMed

    Quinlan, Philip T; Yue, Yue; Cohen, Dale J

    2017-08-30

    The idea that there is enhanced memory for negatively, emotionally charged pictures was examined. Performance was measured under rapid, serial visual presentation (RSVP) conditions in which, on every trial, a sequence of six photo-images was presented. Briefly after the offset of the sequence, two alternative images (a target and a foil) were presented and participants attempted to choose which image had occurred in the sequence. Images were of threatening and non-threatening cats and dogs. The target depicted either an animal expressing an emotion distinct from the other images, or the sequences contained only images depicting the same emotional valence. Enhanced memory was found for targets that differed in emotional valence from the other sequence images, compared to targets that expressed the same emotional valence. Further controls in stimulus selection were then introduced and the same emotional distinctiveness effect obtained. In ruling out possible visual and attentional accounts of the data, an informal dual route topic model is discussed. This places emphasis on how visual short-term memory reveals a sensitivity to the emotional content of the input as it unfolds over time. Items that present with a distinctive emotional content stand out in memory. © 2017 The Author(s).

  13. PROPELLER technique to improve image quality of MRI of the shoulder.

    PubMed

    Dietrich, Tobias J; Ulbrich, Erika J; Zanetti, Marco; Fucentese, Sandro F; Pfirrmann, Christian W A

    2011-12-01

    The purpose of this article is to evaluate the use of the periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) technique for artifact reduction and overall image quality improvement for intermediate-weighted and T2-weighted MRI of the shoulder. One hundred eleven patients undergoing MR arthrography of the shoulder were included. A coronal oblique intermediate-weighted turbo spin-echo (TSE) sequence with fat suppression and a sagittal oblique T2-weighted TSE sequence with fat suppression were obtained without (standard) and with the PROPELLER technique. Scanning time increased from 3 minutes 17 seconds to 4 minutes 17 seconds (coronal oblique plane) and from 2 minutes 52 seconds to 4 minutes 10 seconds (sagittal oblique) using PROPELLER. Two radiologists graded image artifacts, overall image quality, and delineation of several anatomic structures on a 5-point scale (5, no artifact, optimal diagnostic quality; and 1, severe artifacts, diagnostically not usable). The Wilcoxon signed rank test was used to compare the data of the standard and PROPELLER images. Motion artifacts were significantly reduced in PROPELLER images (p < 0.001). Observer 1 rated motion artifacts with diagnostic impairment in one patient on coronal oblique PROPELLER images compared with 33 patients on standard images. Ratings for the sequences with PROPELLER were significantly better for overall image quality (p < 0.001). Observer 1 noted an overall image quality with diagnostic impairment in nine patients on sagittal oblique PROPELLER images compared with 23 patients on standard MRI. The PROPELLER technique for MRI of the shoulder reduces the number of sequences with diagnostic impairment as a result of motion artifacts and increases image quality compared with standard TSE sequences. PROPELLER sequences increase the acquisition time.

  14. Tracking prominent points in image sequences

    NASA Astrophysics Data System (ADS)

    Hahn, Michael

    1994-03-01

    Measuring image motion and inferring scene geometry and camera motion are main aspects of image sequence analysis. The determination of image motion and the structure-from-motion problem are tasks that can be addressed independently or in cooperative processes. In this paper we focus on tracking prominent points. High stability, reliability, and accuracy are criteria for the extraction of prominent points. This implies that tracking should work quite well with those features; unfortunately, the reality looks quite different. In the experimental investigations we processed a long sequence of 128 images. This mono sequence is taken in an outdoor environment at the experimental field of Mercedes Benz in Rastatt. Different tracking schemes are explored and the results with respect to stability and quality are reported.

  15. Dedicated phantom to study susceptibility artifacts caused by depth electrode in magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Garcia, J.; Hidalgo, S. S.; Solis, S. E.; Vazquez, D.; Nuñez, J.; Rodriguez, A. O.

    2012-10-01

    Susceptibility artifacts can degrade magnetic resonance image quality, and electrodes are an important source of such artifacts when performing brain imaging. A dedicated phantom was built using a depth electrode to study the susceptibility effects under different pulse sequences. T2-weighted images were acquired with both gradient- and spin-echo sequences. The spin-echo sequences can significantly attenuate the susceptibility artifacts, allowing a straightforward visualization of the regions surrounding the electrode.

  16. Isometric Non-Rigid Shape-from-Motion with Riemannian Geometry Solved in Linear Time.

    PubMed

    Parashar, Shaifali; Pizarro, Daniel; Bartoli, Adrien

    2017-10-06

    We study Isometric Non-Rigid Shape-from-Motion (Iso-NRSfM): given multiple intrinsically calibrated monocular images, we want to reconstruct the time-varying 3D shape of a thin-shell object undergoing isometric deformations. We show that Iso-NRSfM is solvable from local warps, the inter-image geometric transformations. We propose a new theoretical framework based on the Riemannian manifold to represent the unknown 3D surfaces as embeddings of the camera's retinal plane. This allows us to use the manifold's metric tensor and Christoffel Symbol (CS) fields. These are expressed in terms of the first and second order derivatives of the inverse-depth of the 3D surfaces, which are the unknowns for Iso-NRSfM. We prove that the metric tensor and the CS are related across images by simple rules depending only on the warps. This forms a set of important theoretical results. We show that current solvers cannot solve for the first and second order derivatives of the inverse-depth simultaneously. We thus propose an iterative solution in two steps. 1) We solve for the first order derivatives assuming that the second order derivatives are known. We initialise the second order derivatives to zero, which is an infinitesimal planarity assumption. We derive a system of two cubics in two variables for each image pair. The sum-of-squares of these polynomials is independent of the number of images and can be solved globally, forming a well-posed problem for N ≥ 3 images. 2) We solve for the second order derivatives by initialising the first order derivatives from the previous step. We solve a linear system of 4N-4 equations in three variables. We iterate until the first order derivatives converge. The solution for the first order derivatives gives the surfaces' normal fields which we integrate to recover the 3D surfaces. The proposed method outperforms existing work in terms of accuracy and computation cost on synthetic and real datasets.

  17. Restoration of distorted depth maps calculated from stereo sequences

    NASA Technical Reports Server (NTRS)

    Damour, Kevin; Kaufman, Howard

    1991-01-01

    A model-based Kalman estimator is developed for spatial-temporal filtering of noise and other degradations in velocity and depth maps derived from image sequences or cinema. As an illustration of the proposed procedures, edge information from image sequences of rigid objects is used in the processing of the velocity maps by selecting from a series of models for directional adaptive filtering. Adaptive filtering then allows for noise reduction while preserving sharpness in the velocity maps. Results from several synthetic and real image sequences are given.
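
    A minimal per-pixel temporal Kalman filter over a stack of noisy depth maps, as a sketch of the kind of spatio-temporal filtering described above. The constant-depth process model and the noise variances are assumptions, and the paper's edge-driven selection of directional filtering models is not represented:

      import numpy as np

      def kalman_smooth_depth(depth_frames, process_var=1e-3, meas_var=1e-2):
          # depth_frames: array of shape (T, H, W) of noisy depth measurements.
          estimate = depth_frames[0].astype(float)
          variance = np.full_like(estimate, meas_var)
          smoothed = [estimate.copy()]
          for z in depth_frames[1:]:
              variance = variance + process_var          # predict: uncertainty grows
              gain = variance / (variance + meas_var)    # Kalman gain
              estimate = estimate + gain * (z - estimate)
              variance = (1.0 - gain) * variance
              smoothed.append(estimate.copy())
          return np.stack(smoothed)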

  18. Meta-image navigation augmenters for GPS denied mountain navigation of small UAS

    NASA Astrophysics Data System (ADS)

    Wang, Teng; Çelik, Koray; Somani, Arun K.

    2014-06-01

    We present a novel approach to using mountain drainage patterns for GPS-denied navigation of small unmanned aerial systems (UAS), such as the ScanEagle, utilizing a downward-looking, fixed-focus monocular imager. Our proposal allows missions to be extended to GPS-denied mountain areas, with no assumption of human-made geographic objects. We leverage the analogy between mountain drainage patterns, human arteriograms, and human fingerprints to match local drainage patterns to graphics processing unit (GPU) rendered parallax occlusion maps of geo-registered radar returns (GRRR). Details of our actual GPU algorithm are beyond the scope of this paper and are planned for a future paper. The matching occurs in real time, while the GRRR data are loaded on board the aircraft pre-mission, so that a scanning aperture radar is not required during the mission. For recognition purposes, we represent a given mountain area with a set of spatially distributed mountain minutiae, i.e., details found in the drainage patterns, so that conventional minutiae-based fingerprint matching approaches can be used to match the real-time camera image against template images in the training set. We use medical arteriography processing techniques to extract the patterns. The minutiae-based representation of mountains is achieved by first exposing mountain ridges and valleys with a series of filters and then extracting mountain minutiae from these ridges and valleys. Our results are experimentally validated on actual terrain data and show the effectiveness of the minutiae-based mountain representation method. Furthermore, we study how to select landmarks for UAS navigation based on the proposed mountain representation and give a set of examples to show its feasibility. This research was funded in part by Rockwell Collins Inc.
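
    A hedged sketch of a minutiae-style extraction step in the spirit of the pipeline above, using standard ridge filtering and skeletonization from scikit-image; the filters, thresholds, GPU parallax-occlusion rendering, and fingerprint-style matcher of the actual system are not reproduced:

      import numpy as np
      from scipy.ndimage import convolve
      from skimage.filters import sato, threshold_otsu
      from skimage.morphology import skeletonize

      def drainage_minutiae(terrain_image):
          # Enhance curvilinear drainage structures, skeletonize them, and return
          # candidate minutiae: skeleton endpoints and bifurcations.
          ridges = sato(terrain_image.astype(float))
          skeleton = skeletonize(ridges > threshold_otsu(ridges))
          # count 8-connected skeleton neighbours of each skeleton pixel
          neighbours = convolve(skeleton.astype(int), np.ones((3, 3), dtype=int),
                                mode="constant") - skeleton
          endpoints = np.argwhere(skeleton & (neighbours == 1))
          bifurcations = np.argwhere(skeleton & (neighbours >= 3))
          return endpoints, bifurcations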

  19. Bilayer segmentation of webcam videos using tree-based classifiers.

    PubMed

    Yin, Pei; Criminisi, Antonio; Winn, John; Essa, Irfan

    2011-01-01

    This paper presents an automatic segmentation algorithm for video frames captured by a (monocular) webcam that closely approximates depth segmentation from a stereo camera. The frames are segmented into foreground and background layers that comprise a subject (participant) and other objects and individuals. The algorithm produces correct segmentations even in the presence of large background motion with a nearly stationary foreground. This research makes three key contributions: First, we introduce a novel motion representation, referred to as "motons," inspired by research in object recognition. Second, we propose estimating the segmentation likelihood from the spatial context of motion. The estimation is efficiently learned by random forests. Third, we introduce a general taxonomy of tree-based classifiers that facilitates both theoretical and experimental comparisons of several known classification algorithms and generates new ones. In our bilayer segmentation algorithm, diverse visual cues such as motion, motion context, color, contrast, and spatial priors are fused by means of a conditional random field (CRF) model. Segmentation is then achieved by binary min-cut. Experiments on many sequences of our videochat application demonstrate that our algorithm, which requires no initialization, is effective in a variety of scenes, and the segmentation results are comparable to those obtained by stereo systems.

  20. Retrieval of Sentence Sequences for an Image Stream via Coherence Recurrent Convolutional Networks.

    PubMed

    Park, Cesc Chunseong; Kim, Youngjin; Kim, Gunhee

    2018-04-01

    We propose an approach for retrieving a sequence of natural sentences for an image stream. Since general users often take a series of pictures of their experiences, much online visual information exists in the form of image streams, and it is better to take the whole image stream into consideration when producing natural-language descriptions. While almost all previous studies have dealt with the relation between a single image and a single natural sentence, our work extends both the input and output dimensions to a sequence of images and a sequence of sentences. For retrieving a coherent flow of multiple sentences for a photo stream, we propose a multimodal neural architecture called the coherence recurrent convolutional network (CRCN), which consists of convolutional neural networks, bidirectional long short-term memory (LSTM) networks, and an entity-based local coherence model. Our approach learns directly from a vast user-generated resource of blog posts as text-image parallel training data. We collected more than 22 K unique blog posts with 170 K associated images for the travel topics of NYC, Disneyland, Australia, and Hawaii. We demonstrate that our approach outperforms other state-of-the-art image captioning methods for text sequence generation, using both quantitative measures and user studies via Amazon Mechanical Turk.

  1. Free-breathing echo-planar imaging based diffusion-weighted magnetic resonance imaging of the liver with prospective acquisition correction.

    PubMed

    Asbach, Patrick; Hein, Patrick A; Stemmer, Alto; Wagner, Moritz; Huppertz, Alexander; Hamm, Bernd; Taupitz, Matthias; Klessen, Christian

    2008-01-01

    To evaluate soft tissue contrast and image quality of a respiratory-triggered echo-planar imaging based diffusion-weighted sequence (EPI-DWI) with different b values for magnetic resonance imaging (MRI) of the liver. Forty patients were examined. Quantitative and qualitative evaluation of contrast was performed. Severity of artifacts and overall image quality in comparison with a T2w turbo spin-echo (T2-TSE) sequence were scored. The liver-spleen contrast was significantly higher (P < 0.05) for the EPI-DWI compared with the T2-TSE sequence (0.47 +/- 0.11 (b50); 0.48 +/- 0.13 (b300); 0.47 +/- 0.13 (b600) vs 0.38 +/- 0.11). Liver-lesion contrast strongly depends on the b value of the DWI sequence and decreased with higher b values (b50, 0.47 +/- 0.19; b300, 0.40 +/- 0.20; b600, 0.28 +/- 0.23). Severity of artifacts and overall image quality were comparable to the T2-TSE sequence when using a low b value (P > 0.05), artifacts increased and image quality decreased with higher b values (P < 0.05). Respiratory-triggered EPI-DWI of the liver is feasible because good image quality and favorable soft tissue contrast can be achieved.

  2. [Contrastive analysis of artifacts produced by metal dental crowns in 3.0 T magnetic resonance imaging with six sequences].

    PubMed

    Lan, Gao; Yunmin, Lian; Pu, Wang; Haili, Huai

    2016-06-01

    This study aimed to observe and evaluate the metallic artifacts produced by metal dental crowns in six 3.0 T MRI sequences. Dental crowns fabricated from four different materials (Co-Cr, Ni-Cr, Ti alloy, and pure Ti) were evaluated. A mature crossbreed dog was used as the experimental animal, and crowns were fabricated for its upper right second premolar. Each crown was examined through head MRI (3.0 T) with six sequences, namely T₁-weighted spin echo (T₁W/SE), T₂-weighted inversion recovery (T₂W/IR), T₂-star gradient echo (T₂*/GRE), T₂-weighted fast spin echo (T₂W/FSE), T₂-weighted fluid-attenuated inversion recovery (T₂W/FLAIR), and T₂-weighted PROPELLER (T₂W/PROP). The largest artifact area and the number of affected layers were assessed and compared. The artifact in the T₂*/GRE sequence was significantly wider than those in the other sequences (P < 0.01), whose artifact extents were not significantly different (P > 0.05). T₂*/GRE had the strongest influence on the artifact, whereas the five other sequences contributed equally to artifact generation.

  3. Relating Lateralization of Eye Use to Body Motion in the Avoidance Behavior of the Chameleon (Chamaeleo chameleon)

    PubMed Central

    Lustig, Avichai; Ketter-Katz, Hadas; Katzir, Gadi

    2013-01-01

    Lateralization is mostly analyzed for single traits, but seldom for two or more traits while performing a given task (e.g. object manipulation). We examined lateralization in eye use and in body motion that co-occur during avoidance behaviour of the common chameleon, Chamaeleo chameleon. A chameleon facing a moving threat smoothly repositions its body on the side of its perch distal to the threat, to minimize its visual exposure. We previously demonstrated that during the response (i) eye use and body motion were, each, lateralized at the tested group level (N = 26), (ii) in body motion, we observed two similar-sized sub-groups, one exhibiting a greater reduction in body exposure to threat approaching from the left and one – to threat approaching from the right (left- and right-biased subgroups), (iii) the left-biased sub-group exhibited weak lateralization of body exposure under binocular threat viewing and none under monocular viewing while the right-biased sub-group exhibited strong lateralization under both monocular and binocular threat viewing. In avoidance, how is eye use related to body motion at the entire group and at the sub-group levels? We demonstrate that (i) in the left-biased sub-group, eye use is not lateralized, (ii) in the right-biased sub-group, eye use is lateralized under binocular, but not monocular viewing of the threat, (iii) the dominance of the right-biased sub-group determines the lateralization of the entire group tested. We conclude that in chameleons, patterns of lateralization of visual function and body motion are inter-related at a subtle level. Presently, the patterns cannot be compared with humans' or related to the unique visual system of chameleons, with highly independent eye movements, complete optic nerve decussation and relatively few inter-hemispheric commissures. We present a model to explain the possible inter-hemispheric differences in dominance in chameleons' visual control of body motion during avoidance. PMID:23967099

  4. Relating lateralization of eye use to body motion in the avoidance behavior of the chameleon (Chamaeleo chameleon).

    PubMed

    Lustig, Avichai; Ketter-Katz, Hadas; Katzir, Gadi

    2013-01-01

    Lateralization is mostly analyzed for single traits, but seldom for two or more traits while performing a given task (e.g. object manipulation). We examined lateralization in eye use and in body motion that co-occur during avoidance behaviour of the common chameleon, Chamaeleo chameleon. A chameleon facing a moving threat smoothly repositions its body on the side of its perch distal to the threat, to minimize its visual exposure. We previously demonstrated that during the response (i) eye use and body motion were, each, lateralized at the tested group level (N = 26), (ii) in body motion, we observed two similar-sized sub-groups, one exhibiting a greater reduction in body exposure to threat approaching from the left and one--to threat approaching from the right (left- and right-biased subgroups), (iii) the left-biased sub-group exhibited weak lateralization of body exposure under binocular threat viewing and none under monocular viewing while the right-biased sub-group exhibited strong lateralization under both monocular and binocular threat viewing. In avoidance, how is eye use related to body motion at the entire group and at the sub-group levels? We demonstrate that (i) in the left-biased sub-group, eye use is not lateralized, (ii) in the right-biased sub-group, eye use is lateralized under binocular, but not monocular viewing of the threat, (iii) the dominance of the right-biased sub-group determines the lateralization of the entire group tested. We conclude that in chameleons, patterns of lateralization of visual function and body motion are inter-related at a subtle level. Presently, the patterns cannot be compared with humans' or related to the unique visual system of chameleons, with highly independent eye movements, complete optic nerve decussation and relatively few inter-hemispheric commissures. We present a model to explain the possible inter-hemispheric differences in dominance in chameleons' visual control of body motion during avoidance.

  5. Management of a neurotrophic deep corneal ulcer with amniotic membrane transplantation in a patient with functional monocular vision

    PubMed Central

    Röck, Tobias; Bartz-Schmidt, Karl Ulrich; Röck, Daniel

    2017-01-01

    Rationale: Amniotic membrane transplantation (AMT) has been performed therapeutically in humans for over 100 years, and in the past two decades AMT has been used increasingly and successfully to treat various ophthalmic indications. Patient concerns: An 83-year-old man was referred to our eye hospital with a refractory neurotrophic deep corneal ulcer of the left eye. Diagnoses: The best-corrected visual acuity of the left eye was 0.5 (0.3 logMAR) and of the right eye 0.05 (1.3 logMAR), the latter caused by a central retinal vein occlusion 5 years previously. In cases of binocular vision, a large amniotic membrane patch can cover the whole cornea, including the optical axis. However, in cases with functional monocular vision, as in the case reported here, the AMT has to be performed without involving the optical axis to preserve the patient's vision; otherwise the patient would have a massively restricted view, like looking through waxed paper, for at least 2-4 weeks until the overlay dissolved. Interventions: An AMT using a modified sandwich technique was applied without involvement of the optical axis to preserve the patient's vision. This case report illustrates the eye's course of healing over time. Outcomes: A reduction in inflammation and healing of the corneal ulcer were seen, and the corneal vascularization decreased. Six months after the AMT, a slit-lamp examination revealed stable findings, and the best-corrected visual acuity of the left eye had increased to 0.8 (0.1 logMAR). Lessons: To the best of our knowledge, the management of a neurotrophic deep corneal ulcer with AMT in a patient with functional monocular vision has not been reported before. PMID:29390295

  6. Crystalens HD intraocular lens analysis using an adaptive optics visual simulator.

    PubMed

    Pérez-Vives, Cari; Montés-Micó, Robert; López-Gil, Norberto; Ferrer-Blasco, Teresa; García-Lázaro, Santiago

    2013-12-01

    To compare visual and optical quality of the Crystalens HD intraocular lens (IOL) with that of a monofocal IOL. The wavefront aberration patterns of the monofocal Akreos Adapt AO IOL and the single-optic accommodating Crystalens HD IOL were measured in a model eye. The Crystalens IOL was measured in its nonaccommodative state and then, after flexing the haptic to produce 1.4 mm of movement, in its accommodative state. Using an adaptive optics system, subjects' aberrations were removed and replaced with those of pseudophakes viewing with either lens. Monocular distance visual acuity (DVA) at high (100%), medium (50%), and low (10%) contrast and contrast sensitivity (CS) were measured for both IOL optics. Near VA (NVA) and CS were measured for the Crystalens HD IOL in its accommodative state. Depth of focus around the distance and near foci was also evaluated for the Crystalens HD IOL. Modulation transfer function (MTF), point spread function (PSF), and Strehl ratio were also calculated. All measures were taken for 3- and 5-mm pupils. The MTF, PSF, and Strehl ratio showed comparable values between IOLs (p > 0.05). There were no significant differences in DVA and CS between IOLs for any contrast or pupil size (p > 0.05). When spherically focused, mean DVA and NVA with the Crystalens HD IOL were ≥20/20 at 100% and 50% contrast for both pupils. Monocular DVA, NVA, and CS were slightly better with 3-mm than with 5-mm pupils, but without statistically significant differences. The Crystalens HD IOL showed about 0.75 and 0.50 D of depth of focus in its accommodative and nonaccommodative states, respectively. The optical and visual quality of the nonaccommodative Crystalens HD IOL was comparable to that of a monofocal IOL. If this lens can move 1.4 mm in the eye, it will provide high-quality optics for near vision as well.
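
    The MTF, PSF, and Strehl ratio reported above are standard optical-quality metrics derived from a measured wavefront. As an illustration of one of them, the sketch below computes the on-axis Strehl ratio from a wavefront error map over a pupil, i.e. the aberrated peak intensity relative to the diffraction-limited peak; the array inputs and default wavelength are illustrative assumptions, not the study's parameters:

```python
import numpy as np

def strehl_ratio(wavefront_error_um: np.ndarray, pupil_mask: np.ndarray,
                 wavelength_um: float = 0.55) -> float:
    """On-axis Strehl ratio: peak intensity of the aberrated eye's PSF
    divided by the diffraction-limited peak for the same pupil."""
    phase = 2.0 * np.pi * wavefront_error_um / wavelength_um   # wavefront error -> phase (radians)
    aberrated_field = pupil_mask * np.exp(1j * phase)
    ideal_field = pupil_mask.astype(complex)
    return float(np.abs(aberrated_field.sum()) ** 2 / np.abs(ideal_field.sum()) ** 2)
```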

  7. Ocular wavefront aberrations in the common marmoset Callithrix jacchus: effects of age and refractive error

    PubMed Central

    Coletta, Nancy J.; Marcos, Susana; Troilo, David

    2012-01-01

    The common marmoset, Callithrix jacchus, is a primate model for emmetropization studies. The refractive development of the marmoset eye depends on visual experience, so knowledge of the optical quality of the eye is valuable. We report on the wavefront aberrations of the marmoset eye, measured with a clinical Hartmann-Shack aberrometer (COAS, AMO Wavefront Sciences). Aberrations were measured on both eyes of 23 marmosets whose ages ranged from 18 to 452 days. Twenty-one of the subjects were members of studies of emmetropization and accommodation, and two were untreated normal subjects. Eleven of the 21 experimental subjects had worn monocular diffusers or occluders and ten had worn binocular spectacle lenses of equal power. Monocular deprivation or lens rearing began at about 45 days of age and ended at about 108 days of age. All refractions and aberration measures were performed while the eyes were cyclopleged; most aberration measures were made while subjects were awake, but some control measurements were performed under anesthesia. Wavefront error was expressed as a seventh-order Zernike polynomial expansion, using the Optical Society of America’s naming convention. Aberrations in young marmosets decreased up to about 100 days of age, after which the higher-order RMS aberration leveled off to about 0.10 micron over a 3 mm diameter pupil. Higher-order aberrations were 1.8 times greater when the subjects were under general anesthesia than when they were awake. Young marmoset eyes were characterized by negative spherical aberration. Visually deprived eyes of the monocular deprivation animals had greater wavefront aberrations than their fellow untreated eyes, particularly for asymmetric aberrations in the odd-numbered Zernike orders. Both lens-treated and deprived eyes showed similar significant increases in Z3^-3 trefoil aberration, suggesting the increase in trefoil may be related to factors that do not involve visual feedback. PMID:20800078
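
    The higher-order RMS aberration quoted above is obtained by pooling the Zernike coefficients of radial order three and higher; a minimal sketch of that pooling (the coefficient dictionary and its keys are illustrative, not data from the study):

```python
import math

def higher_order_rms(zernike_coeffs_um: dict, min_order: int = 3) -> float:
    """RMS wavefront error over Zernike terms of radial order >= min_order.

    Assumes OSA-convention coefficients in micrometres for a fixed pupil
    diameter, keyed by (radial order n, angular frequency m)."""
    return math.sqrt(sum(c ** 2 for (n, _m), c in zernike_coeffs_um.items()
                         if n >= min_order))

# A single 0.10-micron term of order 3 gives 0.10 micron of higher-order RMS.
print(higher_order_rms({(2, 0): 0.50, (3, -3): 0.10}))  # ~0.10
```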

  8. A geometric method for computing ocular kinematics and classifying gaze events using monocular remote eye tracking in a robotic environment.

    PubMed

    Singh, Tarkeshwar; Perry, Christopher M; Herter, Troy M

    2016-01-26

    Robotic and virtual-reality systems offer tremendous potential for improving assessment and rehabilitation of neurological disorders affecting the upper extremity. A key feature of these systems is that visual stimuli are often presented within the same workspace as the hands (i.e., peripersonal space). Integrating video-based remote eye tracking with robotic and virtual-reality systems can provide an additional tool for investigating how cognitive processes influence visuomotor learning and rehabilitation of the upper extremity. However, remote eye tracking systems typically compute ocular kinematics by assuming eye movements are made in a plane with constant depth (e.g. frontal plane). When visual stimuli are presented at variable depths (e.g. transverse plane), eye movements have a vergence component that may influence reliable detection of gaze events (fixations, smooth pursuits and saccades). To our knowledge, there are no available methods to classify gaze events in the transverse plane for monocular remote eye tracking systems. Here we present a geometrical method to compute ocular kinematics from a monocular remote eye tracking system when visual stimuli are presented in the transverse plane. We then use the obtained kinematics to compute velocity-based thresholds that allow us to accurately identify onsets and offsets of fixations, saccades and smooth pursuits. Finally, we validate our algorithm by comparing the gaze events computed by the algorithm with those obtained from the eye-tracking software and manual digitization. Within the transverse plane, our algorithm reliably differentiates saccades from fixations (static visual stimuli) and smooth pursuits from saccades and fixations when visual stimuli are dynamic. The proposed methods provide advancements for examining eye movements in robotic and virtual-reality systems. Our methods can also be used with other video-based or tablet-based systems in which eye movements are performed in a peripersonal plane with variable depth.
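
    At the core of the classification step is a velocity criterion: angular gaze velocity above a saccade threshold marks saccades, intermediate sustained velocities mark smooth pursuit, and low velocities mark fixations. A minimal sketch of that thresholding on a 1-D gaze-angle trace (the thresholds and function name are illustrative defaults, not the authors' values, and the paper's vergence correction for variable depth is omitted):

```python
import numpy as np

def classify_gaze_samples(gaze_angle_deg: np.ndarray, timestamps_s: np.ndarray,
                          saccade_thresh_deg_s: float = 30.0,
                          pursuit_thresh_deg_s: float = 5.0) -> list:
    """Label each inter-sample interval as 'saccade', 'pursuit', or 'fixation'
    from angular gaze velocity (illustrative thresholds)."""
    velocity = np.abs(np.diff(gaze_angle_deg)) / np.diff(timestamps_s)
    labels = []
    for v in velocity:
        if v >= saccade_thresh_deg_s:
            labels.append("saccade")
        elif v >= pursuit_thresh_deg_s:
            labels.append("pursuit")
        else:
            labels.append("fixation")
    return labels
```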

  9. Binocular Summation and Other Forms of Non-Dominant Eye Contribution in Individuals with Strabismic Amblyopia during Habitual Viewing

    PubMed Central

    Barrett, Brendan T.; Panesar, Gurvinder K.; Scally, Andrew J.; Pacey, Ian E.

    2013-01-01

    Background Adults with amblyopia (‘lazy eye’), long-standing strabismus (ocular misalignment) or both typically do not experience visual symptoms because the signal from the weaker eye is given less weight than the signal from its fellow. Here we examine the contribution of the weaker eye of individuals with strabismus and amblyopia with both eyes open and with the deviating eye in its anomalous motor position. Methodology/Results The task consisted of a blue-on-yellow detection task along a horizontal line across the central 50 degrees of the visual field. We compare the results obtained in ten individuals with strabismic amblyopia with those from ten visual normals. At each field location in each participant, we examined how the sensitivity exhibited under binocular conditions compared with sensitivity from four predictions: (i) a model of binocular summation, (ii) the average of the monocular sensitivities, (iii) dominant-eye sensitivity or (iv) non-dominant-eye sensitivity. The proportion of field locations for which the binocular summation model provided the best description of binocular sensitivity was similar in normals (50.6%) and amblyopes (48.2%). Average monocular sensitivity matched binocular sensitivity in 14.1% of amblyopes’ field locations compared to 8.8% of normals’. Dominant-eye sensitivity explained sensitivity at 27.1% of field locations in amblyopes but 21.2% in normals. Non-dominant-eye sensitivity explained sensitivity at 10.6% of field locations in amblyopes but 19.4% in normals. Binocular summation provided the best description of the sensitivity profile in 6/10 amblyopes compared to 7/10 normals. In three amblyopes, dominant-eye sensitivity most closely reflected binocular sensitivity (compared to two normals), and in the remaining amblyope, binocular sensitivity approximated to an average of the monocular sensitivities. Conclusions Our results suggest a strong positive contribution in habitual viewing from the non-dominant eye in strabismic amblyopes. This is consistent with evidence from other sources that binocular mechanisms are frequently intact in strabismic and amblyopic individuals. PMID:24205005
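
    For readers unfamiliar with the four predictions being compared, the sketch below computes them from a pair of monocular sensitivities at a single field location; quadratic (Pythagorean) summation is used for the binocular-summation prediction as one common model, since the abstract does not state the exact summation rule used:

```python
import math

def binocular_predictions(dominant: float, non_dominant: float) -> dict:
    """Four candidate predictions of binocular sensitivity from monocular values.
    Quadratic summation stands in for the binocular-summation model."""
    return {
        "summation": math.sqrt(dominant ** 2 + non_dominant ** 2),
        "average": (dominant + non_dominant) / 2.0,
        "dominant": dominant,
        "non_dominant": non_dominant,
    }

# At each location, the prediction closest to the measured binocular
# sensitivity would be taken as the best description for that location.
print(binocular_predictions(dominant=20.0, non_dominant=15.0))
```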

  10. Accommodation and the Visual Regulation of Refractive State in Marmosets

    PubMed Central

    Troilo, David; Totonelly, Kristen; Harb, Elise

    2009-01-01

    Purpose To determine the effects of imposed anisometropic retinal defocus on accommodation, ocular growth, and refractive state changes in marmosets. Methods Marmosets were raised with extended-wear soft contact lenses for an average duration of 10 wks beginning at an average age of 76 d. Experimental animals wore either a positive or negative contact lens over one eye and a plano lens or no lens over the other. Another group wore binocular lenses of equal magnitude but opposite sign. Untreated marmosets served as controls, and three wore plano lenses monocularly. Cycloplegic refractive state, corneal curvature, and vitreous chamber depth were measured before, during, and after the period of lens wear. To investigate the accommodative response, the effective refractive state was measured through each anisometropic condition at varying accommodative stimulus positions using an infrared refractometer. Results Eye growth and refractive state were significantly correlated with the sign and power of the contact lens worn. The eyes of marmosets reared with monocular negative power lenses had longer vitreous chambers and were myopic relative to contralateral control eyes (p<0.01). Monocular positive power lenses produced significantly shorter vitreous chambers and relative hyperopia compared with the contralateral control eyes (p<0.05). In marmosets reared binocularly with lenses of opposite sign, we found larger interocular differences in vitreous chamber depth and refractive state (p<0.001). Accommodation influences the defocus experienced through the lenses; however, the mean effective refractive state was still hyperopic in the negative-lens-treated eyes and myopic in the positive-lens-treated eyes. Conclusions Imposed anisometropia effectively alters marmoset eye growth and refractive state to compensate for the imposed defocus. The response to imposed hyperopia is larger and faster than the response to imposed myopia. The pattern of accommodation under imposed anisometropia produces effective refractive states that are consistent with the changes in eye growth and refractive state observed. PMID:19104464

  11. Quantitative measurement of interocular suppression in anisometropic amblyopia: a case-control study.

    PubMed

    Li, Jinrong; Hess, Robert F; Chan, Lily Y L; Deng, Daming; Yang, Xiao; Chen, Xiang; Yu, Minbin; Thompson, Benjamin

    2013-08-01

    The aims of this study were to assess (1) the relationship between interocular suppression and visual function in patients with anisometropic amblyopia, (2) whether suppression can be simulated in matched controls using monocular defocus or neutral density filters, (3) the effects of spectacle or rigid gas-permeable contact lens correction on suppression in patients with anisometropic amblyopia, and (4) the relationship between interocular suppression and outcomes of occlusion therapy. Case-control study (aims 1-3) and cohort study (aim 4). Forty-five participants with anisometropic amblyopia and 45 matched controls (mean age, 8.8 years for both groups). Interocular suppression was assessed using Bagolini striated lenses, neutral density filters, and an objective psychophysical technique that measures the amount of contrast imbalance between the 2 eyes that is required to overcome suppression (dichoptic motion coherence thresholds). Visual acuity was assessed using a logarithm minimum angle of resolution tumbling E chart and stereopsis using the Randot preschool test. Interocular suppression assessed using dichoptic motion coherence thresholds. Patients exhibited significantly stronger suppression than controls, and stronger suppression was correlated significantly with poorer visual acuity in amblyopic eyes. Reducing monocular acuity in controls to match that of cases using neutral density filters (luminance reduction) resulted in levels of interocular suppression comparable with that in patients. This was not the case for monocular defocus (optical blur). Rigid gas-permeable contact lens correction resulted in less suppression than spectacle correction, and stronger suppression was associated with poorer outcomes after occlusion therapy. Interocular suppression plays a key role in the visual deficits associated with anisometropic amblyopia and can be simulated in controls by inducing a luminance difference between the eyes. Accurate quantification of suppression using the dichoptic motion coherence threshold technique may provide useful information for the management and treatment of anisometropic amblyopia.

  12. A comparison of visuomotor cue integration strategies for object placement and prehension.

    PubMed

    Greenwald, Hal S; Knill, David C

    2009-01-01

    Visual cue integration strategies are known to depend on cue reliability and how rapidly the visual system processes incoming information. We investigated whether these strategies also depend on differences in the information demands for different natural tasks. Using two common goal-oriented tasks, prehension and object placement, we determined whether monocular and binocular information influence estimates of three-dimensional (3D) orientation differently depending on task demands. Both tasks rely on accurate 3D orientation estimates, but 3D position is potentially more important for grasping. Subjects placed an object on or picked up a disc in a virtual environment. On some trials, the monocular cues (aspect ratio and texture compression) and binocular cues (e.g., binocular disparity) suggested slightly different 3D orientations for the disc; these conflicts either were present upon initial stimulus presentation or were introduced after movement initiation, which allowed us to quantify how information from the cues accumulated over time. We analyzed the time-varying orientations of subjects' fingers in the grasping task and those of the object in the object placement task to quantify how different visual cues influenced motor control. In the first experiment, different subjects performed each task, and those performing the grasping task relied on binocular information more when orienting their hands than those performing the object placement task. When subjects in the second experiment performed both tasks in interleaved sessions, binocular cues were still more influential during grasping than object placement, and the different cue integration strategies observed for each task in isolation were maintained. In both experiments, the temporal analyses showed that subjects processed binocular information faster than monocular information, but task demands did not affect the time course of cue processing. How one uses visual cues for motor control depends on the task being performed, although how quickly the information is processed appears to be task invariant.
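
    Analyses of this kind are usually framed against the standard reliability-weighted (maximum-likelihood) cue-combination rule, in which each cue's orientation estimate is weighted by its inverse variance. The sketch below shows that textbook rule for reference; it is not the authors' specific model:

```python
def combine_cues(estimates_deg, variances):
    """Reliability-weighted combination of 3D-orientation estimates:
    each cue is weighted by the inverse of its variance."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, estimates_deg)) / sum(weights)

# A binocular cue with half the variance of the monocular cue pulls the
# combined estimate toward the binocular value.
print(combine_cues([30.0, 36.0], [2.0, 1.0]))  # 34.0
```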

  13. Visual pattern image sequence coding

    NASA Technical Reports Server (NTRS)

    Silsbee, Peter; Bovik, Alan C.; Chen, Dapang

    1990-01-01

    The visual pattern image coding (VPIC) configurable digital image-coding process is capable of coding with visual fidelity comparable to the best available techniques, at compressions which (at 30-40:1) exceed all other technologies. These capabilities are associated with unprecedented coding efficiencies; coding and decoding operations are entirely linear with respect to image size and entail a complexity that is 1-2 orders of magnitude faster than any previous high-compression technique. The visual pattern image sequence coding to which attention is presently given exploits all the advantages of the static VPIC in the reduction of information from an additional, temporal dimension, to achieve unprecedented image sequence coding performance.

  14. Contrast-enhanced T1-weighted fluid-attenuated inversion-recovery BLADE magnetic resonance imaging of the brain: an alternative to spin-echo technique for detection of brain lesions in the unsedated pediatric patient?

    PubMed

    Alibek, Sedat; Adamietz, Boris; Cavallaro, Alexander; Stemmer, Alto; Anders, Katharina; Kramer, Manuel; Bautz, Werner; Staatz, Gundula

    2008-08-01

    We compared contrast-enhanced T1-weighted magnetic resonance (MR) imaging of the brain using different types of data acquisition techniques: periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER, BLADE) imaging versus standard k-space sampling (conventional spin-echo pulse sequence) in the unsedated pediatric patient with focus on artifact reduction, overall image quality, and lesion detectability. Forty-eight pediatric patients (aged 3 months to 18 years) were scanned with a clinical 1.5-T whole body MR scanner. Cross-sectional contrast-enhanced T1-weighted spin-echo sequence was compared to a T1-weighted dark-fluid fluid-attenuated inversion-recovery (FLAIR) BLADE sequence for qualitative and quantitative criteria (image artifacts, image quality, lesion detectability) by two experienced radiologists. Imaging protocols were matched for imaging parameters. Reader agreement was assessed using the exact Bowker test. BLADE images showed significantly less pulsation and motion artifacts than the standard T1-weighted spin-echo sequence scan. BLADE images showed statistically significant lower signal-to-noise ratio but higher contrast-to-noise ratios with superior gray-white matter contrast. All lesions were demonstrated on FLAIR BLADE imaging, and one false-positive lesion was visible in spin-echo sequence images. BLADE MR imaging at 1.5 T is applicable for central nervous system imaging of the unsedated pediatric patient, reduces motion and pulsation artifacts, and minimizes the need for sedation or general anesthesia without loss of relevant diagnostic information.
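
    The signal-to-noise and contrast-to-noise comparisons above rest on region-of-interest statistics; the sketch below shows the common ROI-based definitions (mean signal over background noise standard deviation), which may differ in detail from the formulas used in the study:

```python
import numpy as np

def snr(tissue_roi: np.ndarray, noise_roi: np.ndarray) -> float:
    """Signal-to-noise ratio: mean tissue signal over the standard deviation
    of a background (air) region."""
    return float(tissue_roi.mean() / noise_roi.std())

def cnr(gray_roi: np.ndarray, white_roi: np.ndarray, noise_roi: np.ndarray) -> float:
    """Contrast-to-noise ratio between gray- and white-matter ROIs."""
    return float(abs(gray_roi.mean() - white_roi.mean()) / noise_roi.std())
```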

  15. Tracking tumor boundary in MV-EPID images without implanted markers: A feasibility study.

    PubMed

    Zhang, Xiaoyong; Homma, Noriyasu; Ichiji, Kei; Takai, Yoshihiro; Yoshizawa, Makoto

    2015-05-01

    To develop a markerless tracking algorithm to track the tumor boundary in megavoltage (MV)-electronic portal imaging device (EPID) images for image-guided radiation therapy. A level set method (LSM)-based algorithm is developed to track tumor boundary in EPID image sequences. Given an EPID image sequence, an initial curve is manually specified in the first frame. Driven by a region-scalable energy fitting function, the initial curve automatically evolves toward the tumor boundary and stops on the desired boundary while the energy function reaches its minimum. For the subsequent frames, the tracking algorithm updates the initial curve by using the tracking result in the previous frame and reuses the LSM to detect the tumor boundary in the subsequent frame so that the tracking processing can be continued without user intervention. The tracking algorithm is tested on three image datasets, including a 4-D phantom EPID image sequence, four digitally deformable phantom image sequences with different noise levels, and four clinical EPID image sequences acquired in lung cancer treatment. The tracking accuracy is evaluated based on two metrics: centroid localization error (CLE) and volume overlap index (VOI) between the tracking result and the ground truth. For the 4-D phantom image sequence, the CLE is 0.23 ± 0.20 mm, and VOI is 95.6% ± 0.2%. For the digital phantom image sequences, the total CLE and VOI are 0.11 ± 0.08 mm and 96.7% ± 0.7%, respectively. In addition, for the clinical EPID image sequences, the proposed algorithm achieves 0.32 ± 0.77 mm in the CLE and 72.1% ± 5.5% in the VOI. These results demonstrate the effectiveness of the authors' proposed method both in tumor localization and boundary tracking in EPID images. In addition, compared with two existing tracking algorithms, the proposed method achieves a higher accuracy in tumor localization. In this paper, the authors presented a feasibility study of tracking tumor boundary in EPID images by using a LSM-based algorithm. Experimental results conducted on phantom and clinical EPID images demonstrated the effectiveness of the tracking algorithm for visible tumor target. Compared with previous tracking methods, the authors' algorithm has the potential to improve the tracking accuracy in radiation therapy. In addition, real-time tumor boundary information within the irradiation field will be potentially useful for further applications, such as adaptive beam delivery, dose evaluation.
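
    The two accuracy metrics, centroid localization error (CLE) and volume overlap index (VOI), can both be computed from binary masks of the tracked region and the ground truth. A minimal sketch follows, with the Dice coefficient standing in for the overlap index since the abstract does not give the exact formula:

```python
import numpy as np

def centroid_localization_error(tracked: np.ndarray, truth: np.ndarray,
                                pixel_spacing_mm: float = 1.0) -> float:
    """Euclidean distance between the centroids of two binary masks, in mm."""
    c_tracked = np.array(np.nonzero(tracked)).mean(axis=1)
    c_truth = np.array(np.nonzero(truth)).mean(axis=1)
    return float(np.linalg.norm(c_tracked - c_truth) * pixel_spacing_mm)

def volume_overlap_index(tracked: np.ndarray, truth: np.ndarray) -> float:
    """Overlap between two binary masks; the Dice coefficient is used here
    as one common definition of a volume overlap index."""
    intersection = np.logical_and(tracked, truth).sum()
    return float(2.0 * intersection / (tracked.sum() + truth.sum()))
```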

  16. Tracking tumor boundary in MV-EPID images without implanted markers: A feasibility study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaoyong, E-mail: xiaoyong@ieee.org; Homma, Noriyasu, E-mail: homma@ieee.org; Ichiji, Kei, E-mail: ichiji@yoshizawa.ecei.tohoku.ac.jp

    2015-05-15

    Purpose: To develop a markerless tracking algorithm to track the tumor boundary in megavoltage (MV)-electronic portal imaging device (EPID) images for image-guided radiation therapy. Methods: A level set method (LSM)-based algorithm is developed to track tumor boundary in EPID image sequences. Given an EPID image sequence, an initial curve is manually specified in the first frame. Driven by a region-scalable energy fitting function, the initial curve automatically evolves toward the tumor boundary and stops on the desired boundary while the energy function reaches its minimum. For the subsequent frames, the tracking algorithm updates the initial curve by using the tracking result in the previous frame and reuses the LSM to detect the tumor boundary in the subsequent frame so that the tracking processing can be continued without user intervention. The tracking algorithm is tested on three image datasets, including a 4-D phantom EPID image sequence, four digitally deformable phantom image sequences with different noise levels, and four clinical EPID image sequences acquired in lung cancer treatment. The tracking accuracy is evaluated based on two metrics: centroid localization error (CLE) and volume overlap index (VOI) between the tracking result and the ground truth. Results: For the 4-D phantom image sequence, the CLE is 0.23 ± 0.20 mm, and VOI is 95.6% ± 0.2%. For the digital phantom image sequences, the total CLE and VOI are 0.11 ± 0.08 mm and 96.7% ± 0.7%, respectively. In addition, for the clinical EPID image sequences, the proposed algorithm achieves 0.32 ± 0.77 mm in the CLE and 72.1% ± 5.5% in the VOI. These results demonstrate the effectiveness of the authors’ proposed method both in tumor localization and boundary tracking in EPID images. In addition, compared with two existing tracking algorithms, the proposed method achieves a higher accuracy in tumor localization. Conclusions: In this paper, the authors presented a feasibility study of tracking tumor boundary in EPID images by using a LSM-based algorithm. Experimental results conducted on phantom and clinical EPID images demonstrated the effectiveness of the tracking algorithm for visible tumor target. Compared with previous tracking methods, the authors’ algorithm has the potential to improve the tracking accuracy in radiation therapy. In addition, real-time tumor boundary information within the irradiation field will be potentially useful for further applications, such as adaptive beam delivery, dose evaluation.

  17. Image encryption using random sequence generated from generalized information domain

    NASA Astrophysics Data System (ADS)

    Xia-Yan, Zhang; Guo-Ji, Zhang; Xuan, Li; Ya-Zhou, Ren; Jie-Hua, Wu

    2016-05-01

    A novel image encryption method based on the random sequence generated from the generalized information domain and permutation-diffusion architecture is proposed. The random sequence is generated by reconstruction from the generalized information file and discrete trajectory extraction from the data stream. The trajectory address sequence is used to generate a P-box to shuffle the plain image while random sequences are treated as keystreams. A new factor called drift factor is employed to accelerate and enhance the performance of the random sequence generator. An initial value is introduced to make the encryption method an approximately one-time pad. Experimental results show that the random sequences pass the NIST statistical test with a high ratio and extensive analysis demonstrates that the new encryption scheme has superior security.
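
    The permutation-diffusion architecture named above first shuffles pixel positions with a P-box built from the random sequence and then diffuses the shuffled values with a keystream and chaining. The sketch below shows that generic architecture; a seeded NumPy generator stands in for the paper's sequence derived from the generalized information domain, so this is a structural illustration rather than the proposed cipher:

```python
import numpy as np

def permutation_diffusion_encrypt(image: np.ndarray, key: int) -> np.ndarray:
    """Generic permutation-diffusion encryption of an 8-bit grayscale image.
    A seeded NumPy generator replaces the paper's random-sequence source."""
    rng = np.random.default_rng(key)
    flat = image.astype(np.uint8).ravel()
    pbox = rng.permutation(flat.size)                 # permutation stage (P-box shuffle)
    shuffled = flat[pbox]
    keystream = rng.integers(0, 256, flat.size, dtype=np.uint8)
    cipher = np.empty_like(shuffled)
    prev = np.uint8(key & 0xFF)                       # initial value for chaining
    for i, px in enumerate(shuffled):                 # diffusion stage: each output
        prev = np.uint8(px ^ keystream[i] ^ prev)     # pixel depends on the previous one
        cipher[i] = prev
    return cipher.reshape(image.shape)
```

    Decryption would regenerate the same P-box and keystream from the key and reverse the two stages in order.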

  18. (Pea)nuts and bolts of visual narrative: Structure and meaning in sequential image comprehension

    PubMed Central

    Cohn, Neil; Paczynski, Martin; Jackendoff, Ray; Holcomb, Phillip J.; Kuperberg, Gina R.

    2012-01-01

    Just as syntax differentiates coherent sentences from scrambled word strings, the comprehension of sequential images must also use a cognitive system to distinguish coherent narrative sequences from random strings of images. We conducted experiments analogous to two classic studies of language processing to examine the contributions of narrative structure and semantic relatedness to processing sequential images. We compared four types of comic strips: 1) Normal sequences with both structure and meaning, 2) Semantic Only sequences (in which the panels were related to a common semantic theme, but had no narrative structure), 3) Structural Only sequences (narrative structure but no semantic relatedness), and 4) Scrambled sequences of randomly-ordered panels. In Experiment 1, participants monitored for target panels in sequences presented panel-by-panel. Reaction times were slowest to panels in Scrambled sequences, intermediate in both Structural Only and Semantic Only sequences, and fastest in Normal sequences. This suggests that both semantic relatedness and narrative structure offer advantages to processing. Experiment 2 measured ERPs to all panels across the whole sequence. The N300/N400 was largest to panels in both the Scrambled and Structural Only sequences, intermediate in Semantic Only sequences and smallest in the Normal sequences. This implies that a combination of narrative structure and semantic relatedness can facilitate semantic processing of upcoming panels (as reflected by the N300/N400). Also, panels in the Scrambled sequences evoked a larger left-lateralized anterior negativity than panels in the Structural Only sequences. This localized effect was distinct from the N300/N400, and appeared despite the fact that these two sequence types were matched on local semantic relatedness between individual panels. These findings suggest that sequential image comprehension uses a narrative structure that may be independent of semantic relatedness. Altogether, we argue that the comprehension of visual narrative is guided by an interaction between structure and meaning. PMID:22387723

  19. MRI of the hip at 7T: feasibility of bone microarchitecture, high-resolution cartilage, and clinical imaging.

    PubMed

    Chang, Gregory; Deniz, Cem M; Honig, Stephen; Egol, Kenneth; Regatte, Ravinder R; Zhu, Yudong; Sodickson, Daniel K; Brown, Ryan

    2014-06-01

    To demonstrate the feasibility of performing bone microarchitecture, high-resolution cartilage, and clinical imaging of the hip at 7T. This study had Institutional Review Board approval. Using an 8-channel coil constructed in-house, we imaged the hips of 15 subjects on a 7T magnetic resonance imaging (MRI) scanner. We applied: 1) a T1-weighted 3D fast low angle shot (3D FLASH) sequence (0.23 × 0.23 × 1-1.5 mm³) for bone microarchitecture imaging; 2) T1-weighted 3D FLASH (water excitation) and volumetric interpolated breath-hold examination (VIBE) sequences (0.23 × 0.23 × 1.5 mm³) with saturation or inversion recovery-based fat suppression for cartilage imaging; 3) 2D intermediate-weighted fast spin-echo (FSE) sequences without and with fat saturation (0.27 × 0.27 × 2 mm) for clinical imaging. Bone microarchitecture images allowed visualization of individual trabeculae within the proximal femur. Cartilage was well visualized and fat was well suppressed on FLASH and VIBE sequences. FSE sequences allowed visualization of cartilage, the labrum (including cartilage and labral pathology), joint capsule, and tendons. This is the first study to demonstrate the feasibility of performing a clinically comprehensive hip MRI protocol at 7T, including high-resolution imaging of bone microarchitecture and cartilage, as well as clinical imaging.

  20. Rapid acquisition of magnetic resonance imaging of the shoulder using three-dimensional fast spin echo sequence with compressed sensing.

    PubMed

    Lee, Seung Hyun; Lee, Young Han; Song, Ho-Taek; Suh, Jin-Suck

    2017-10-01

    To evaluate the feasibility of 3D fast spin-echo (FSE) imaging with compressed sensing (CS) for the assessment of the shoulder. Twenty-nine patients who underwent shoulder MRI including image sets of an axial 3D-FSE sequence without CS and with CS, using an acceleration factor of 1.5, were included. Quantitative assessment was performed by calculating the root mean square error (RMSE) and structural similarity index (SSIM). Two musculoskeletal radiologists compared image quality of 3D-FSE sequences without CS and with CS, and scored the qualitative agreement between sequences, using a five-point scale. Diagnostic agreement for pathologic shoulder lesions between the two sequences was evaluated. The acquisition time of 3D-FSE MRI was reduced using CS (3 min 23 s vs. 2 min 22 s). Quantitative evaluations showed a significant correlation between the two sequences (r=0.872-0.993, p<0.05) and SSIM was in an acceptable range (0.940-0.993; mean±standard deviation, 0.968±0.018). Qualitative image quality showed good to excellent agreement between 3D-FSE images without CS and with CS. Diagnostic agreement for pathologic shoulder lesions between the two sequences was very good (κ=0.915-1). The 3D-FSE sequence with CS is feasible for evaluating the shoulder joint with reduced scan time compared to 3D-FSE without CS.
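
    The RMSE and SSIM comparisons reported above can be reproduced for any matched pair of images; a minimal sketch using scikit-image for SSIM (a slice-wise 2-D comparison is shown for simplicity; the study's exact computation is not specified in the abstract):

```python
import numpy as np
from skimage.metrics import structural_similarity

def rmse(reference: np.ndarray, accelerated: np.ndarray) -> float:
    """Root mean square error between two images of identical shape."""
    diff = reference.astype(float) - accelerated.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

def ssim(reference: np.ndarray, accelerated: np.ndarray) -> float:
    """Structural similarity index between two 2-D images."""
    ref = reference.astype(float)
    acc = accelerated.astype(float)
    return float(structural_similarity(ref, acc, data_range=ref.max() - ref.min()))
```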
