3D vision system for intelligent milking robot automation
NASA Astrophysics Data System (ADS)
Akhloufi, M. A.
2013-12-01
In a milking robot, the correct localization and positioning of the milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for estimating approximate teat positions. This technology has reached its limit and does not allow optimal positioning of the milking cups. Also, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers and affect animal health (e.g., the development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision that is capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot was built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-Of-Flight (TOF) and RGBD cameras. The proposed algorithms permit online segmentation of the teats by combining 2D and 3D visual information. The obtained results permit computation of the 3D teat positions, which are then sent to the milking robot for teat cup positioning. The vision system achieves real-time performance and monitors the optimal positioning of the cups even in the presence of motion. The results obtained with both TOF and RGBD cameras show the good performance of the proposed system; the best performance was obtained with RGBD cameras. This latter technology will be used in future real-life experimental tests.
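As an illustration of the kind of 2D/3D fusion described above, the following Python sketch gates a 2D image segmentation with a depth band and back-projects each teat candidate to a 3D point through the pinhole model. It is a minimal sketch only: the intrinsics, depth range, and blob-size limits are illustrative assumptions, not values from the paper.

```python
import numpy as np
import cv2

# Illustrative intrinsics for an RGBD camera (assumed, not from the paper).
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5
DEPTH_NEAR, DEPTH_FAR = 0.4, 0.9   # metres; assumed working range of the cups

def segment_teats(rgb, depth):
    """Combine a 2D intensity mask with a 3D depth gate; return teat 3D positions."""
    # 3D cue: keep only pixels inside the expected depth band.
    depth_mask = ((depth > DEPTH_NEAR) & (depth < DEPTH_FAR)).astype(np.uint8)
    # 2D cue: simple intensity segmentation of the udder surface.
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    _, bright = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.bitwise_and(bright, bright, mask=depth_mask)
    # Teat candidates: small blobs in the fused mask.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    teats = []
    for c in contours:
        if not (50 < cv2.contourArea(c) < 2000):   # size gate (assumed)
            continue
        (u, v), _r = cv2.minEnclosingCircle(c)
        z = float(depth[int(v), int(u)])
        if z <= 0:
            continue
        # Back-project the pixel to a 3D point with the pinhole model.
        x, y = (u - CX) * z / FX, (v - CY) * z / FY
        teats.append((x, y, z))
    return teats
```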
Chiang, Mao-Hsiung; Lin, Hao-Ting; Hou, Chien-Lun
2011-01-01
In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The stereo vision 3D position measurement system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector of the robot arm, the circle detection algorithm is used to detect the desired target and the SAD algorithm is used to track the moving target and to search the corresponding target location along the conjugate epipolar line in the stereo pair. After camera calibration, both intrinsic and extrinsic parameters of the stereo rig can be obtained, so images can be rectified according to the camera parameters. Thus, through the epipolar rectification, the stereo matching process is reduced to a horizontal search along the conjugate epipolar line. Finally, 3D trajectories of the end-effector are computed by stereo triangulation. The experimental results show that the stereo vision 3D position measurement system proposed in this paper can successfully track and measure the fifth-order polynomial trajectory and sinusoidal trajectory of the end-effector of the three-axial pneumatic parallel mechanism robot arm. PMID:22319408
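The matching step described here, a horizontal SAD search along the conjugate epipolar line of a rectified stereo pair, can be sketched in a few lines of Python with OpenCV/NumPy. The patch size and disparity range are assumptions; the rectified projection matrices would come from cv2.stereoRectify.

```python
import numpy as np
import cv2

def sad_match_along_scanline(left, right, pt, patch=15, max_disp=128):
    """After rectification the match lies on the same image row, so slide a
    SAD window horizontally and keep the disparity with the lowest cost."""
    x, y, h = int(pt[0]), int(pt[1]), patch // 2
    tmpl = left[y-h:y+h+1, x-h:x+h+1].astype(np.int32)
    best, best_d = None, 0
    for d in range(max_disp):
        xr = x - d
        if xr - h < 0:
            break
        cand = right[y-h:y+h+1, xr-h:xr+h+1].astype(np.int32)
        cost = np.abs(tmpl - cand).sum()          # sum of absolute differences
        if best is None or cost < best:
            best, best_d = cost, d
    return best_d

# With rectified projection matrices P1, P2 (e.g. from cv2.stereoRectify),
# the 3D point then follows from stereo triangulation:
#   X = cv2.triangulatePoints(P1, P2, xy_left, xy_right); X /= X[3]
```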
Three-dimensional vision enhances task performance independently of the surgical method.
Wagner, O J; Hagen, M; Kurmann, A; Horgan, S; Candinas, D; Vorburger, S A
2012-10-01
Within the next few years, the medical industry will launch increasingly affordable three-dimensional (3D) vision systems for the operating room (OR). This study aimed to evaluate the effect of two-dimensional (2D) and 3D visualization on surgical skills and task performance. In this study, 34 individuals with varying laparoscopic experience (18 inexperienced individuals) performed three tasks to test spatial relationships, grasping and positioning, dexterity, precision, and hand-eye and hand-hand coordination. Each task was performed in 3D using binocular vision for open performance, the Viking 3Di Vision System for laparoscopic performance, and the DaVinci robotic system. The same tasks were repeated in 2D using an eye patch for monocular vision, conventional laparoscopy, and the DaVinci robotic system. Loss of 3D vision significantly increased the perceived difficulty of a task and the time required to perform it, independently of the approach (P < 0.0001-0.02). Simple tasks took 25% to 30% longer to complete, and more complex tasks took 75% longer, with 2D than with 3D vision. Only the difficult task was performed faster with the robot than with laparoscopy (P = 0.005). In every case, 3D robotic performance was superior to conventional laparoscopy (2D) (P < 0.001-0.015). The more complex the task, the more 3D vision accelerates task completion compared with 2D vision. The gain in task performance is independent of the surgical method.
Nose, Atsushi; Yamazaki, Tomohiro; Katayama, Hironobu; Uehara, Shuji; Kobayashi, Masatsugu; Shida, Sayaka; Odahara, Masaki; Takamiya, Kenichi; Matsumoto, Shizunori; Miyashita, Leo; Watanabe, Yoshihiro; Izawa, Takashi; Muramatsu, Yoshinori; Nitta, Yoshikazu; Ishikawa, Masatoshi
2018-04-24
We have developed a high-speed vision chip using 3D stacking technology to address the increasing demand for high-speed vision chips in diverse applications. The chip is a 1/3.2-inch, 1.27 Mpixel, 500 fps (0.31 Mpixel, 1000 fps with 2 × 2 binning) vision chip with 3D-stacked column-parallel Analog-to-Digital Converters (ADCs) and 140 Giga Operations per Second (GOPS) programmable Single Instruction Multiple Data (SIMD) column-parallel processing elements (PEs) for new sensing applications. The 3D-stacked structure and column-parallel processing architecture achieve high sensitivity, high resolution, and high-accuracy object positioning.
Chiang, Mao-Hsiung; Lin, Hao-Ting
2011-01-01
This study aimed to develop a novel 3D parallel mechanism robot driven by three vertical-axial pneumatic actuators with a stereo vision system for path tracking control. The mechanical system and the control system are the primary novel parts of the 3D parallel mechanism robot. In the mechanical system, the 3D parallel mechanism robot contains three serial chains, a fixed base, a movable platform, and a pneumatic servo system. The parallel mechanism is designed and analyzed first to realize 3D motion of the robot's end-effector in the X-Y-Z coordinate system. The inverse kinematics and the forward kinematics of the parallel mechanism robot are investigated using the Denavit-Hartenberg (D-H) notation coordinate system. The pneumatic actuators in the three vertical motion axes are modeled. In the control system, a Fourier series-based adaptive sliding-mode controller with H∞ tracking performance is used to design the path tracking controllers of the three vertical servo pneumatic actuators, realizing 3D path tracking control of the end-effector. Three optical linear scales are used to measure the positions of the three pneumatic actuators. The 3D position of the end-effector is then calculated from the measured positions of the three pneumatic actuators by means of the kinematics. However, the calculated 3D position of the end-effector cannot account for the manufacturing and assembly tolerances of the joints and the parallel mechanism, so errors exist between the actual and the calculated 3D positions of the end-effector. In order to improve this situation, sensor collaboration is developed in this paper. A stereo vision system is used to collaborate with the three position sensors of the pneumatic actuators. The stereo vision system, combining two CCD cameras, serves to measure the actual 3D position of the end-effector and to calibrate the error between the actual and the calculated 3D positions. Furthermore, to verify the feasibility of the proposed parallel mechanism robot driven by three vertical pneumatic servo actuators, a full-scale test rig of the proposed parallel mechanism pneumatic robot is set up. Thus, simulations and experiments for different complex 3D motion profiles of the robot end-effector can be successfully achieved, and the desired, actual, and calculated 3D positions of the end-effector can be compared in complex 3D motion control.
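The forward-kinematics step based on D-H notation can be illustrated with a short NumPy sketch: each joint contributes one homogeneous transform, and chaining them gives the base-to-end-effector pose. The D-H table below is hypothetical, not the parameters of the actual mechanism.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform of one joint in standard Denavit-Hartenberg form."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st*ca,  st*sa, a*ct],
                     [st,  ct*ca, -ct*sa, a*st],
                     [0,      sa,     ca,    d],
                     [0,       0,      0,    1]])

def forward_kinematics(dh_rows):
    """Chain the per-joint transforms; returns the base-to-end-effector pose."""
    T = np.eye(4)
    for row in dh_rows:
        T = T @ dh_transform(*row)
    return T

# Hypothetical D-H table (theta, d, a, alpha) for one serial chain;
# the printed vector is the end-effector position in base coordinates.
print(forward_kinematics([(0.0, 0.30, 0.05, np.pi/2),
                          (np.pi/4, 0.0, 0.25, 0.0)])[:3, 3])
```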
Student performance and appreciation using 3D vs. 2D vision in a virtual learning environment.
de Boer, I R; Wesselink, P R; Vervoorn, J M
2016-08-01
The aim of this study was to investigate the differences in the performance and appreciation of students working in a virtual learning environment with two (2D)- or three (3D)-dimensional vision. One hundred and twenty-four randomly divided first-year dental students performed a manual dexterity exercise on the Simodont dental trainer with an automatic assessment. Group 1 practised in 2D vision and Group 2 in 3D. All of the students practised five times for 45 min and then took a test using the vision they had practised in. After test 1, all of the students switched the type of vision to control for the learning curve: Group 1 practised in 3D and took a test in 3D, whilst Group 2 practised in 2D and took the test in 2D. To pass, three of five exercises had to be successfully completed within a time limit. The students filled out a questionnaire after completing test 2. The results show that students working with 3D vision achieved significantly better results than students who worked in 2D. Ninety-five per cent of the students filled out the questionnaire, and over 90 per cent preferred 3D vision. The use of 3D vision in a virtual learning environment has a significant positive effect on the performance of the students as well as on their appreciation of the environment. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
A Vision-Aided 3D Path Teaching Method before Narrow Butt Joint Welding
Zeng, Jinle; Chang, Baohua; Du, Dong; Peng, Guodong; Chang, Shuhe; Hong, Yuxiang; Wang, Li; Shan, Jiguo
2017-01-01
For better welding quality, accurate path teaching for actuators must be achieved before welding. Due to machining errors, assembly errors, deformations, etc., the actual groove position may differ from the predetermined path. Therefore, it is important to recognize the actual groove position using machine vision methods and perform an accurate path teaching process. However, during the teaching process of a narrow butt joint, existing machine vision methods may fail because of poor adaptability, low resolution, and lack of 3D information. This paper proposes a 3D path teaching method for narrow butt joint welding. The method obtains two kinds of visual information nearly at the same time, namely the 2D pixel coordinates of the groove under uniform lighting conditions and the 3D point cloud data of the workpiece surface under cross-line laser lighting. The 3D position and pose between the welding torch and groove can be calculated after information fusion. The image resolution can reach 12.5 μm. Experiments are carried out at an actuator speed of 2300 mm/min and a groove width of less than 0.1 mm. The results show that this method is suitable for groove recognition before narrow butt joint welding and can be applied to path teaching for complex 3D components. PMID:28492481
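One plausible building block of the information-fusion step, estimating the local pose of the workpiece surface from the cross-line laser point cloud so the torch can be oriented relative to it, is a least-squares plane fit. The sketch below is an assumption about one such sub-step, not the paper's full algorithm.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a 3D point cloud patch via SVD.
    Returns (centroid, unit normal); the normal gives the surface-pose
    component needed to orient the torch relative to the workpiece."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    n = vt[-1]
    return c, n / np.linalg.norm(n)

# Example: noisy patch of a nominally flat workpiece surface.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
z = 0.02 * rng.standard_normal(200)            # roughness / sensor noise
centroid, normal = fit_plane(np.column_stack([xy, z]))
print(centroid, normal)                        # normal is close to (0, 0, +/-1)
```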
A multimodal 3D framework for fire characteristics estimation
NASA Astrophysics Data System (ADS)
Toulouse, T.; Rossi, L.; Akhloufi, M. A.; Pieri, A.; Maldague, X.
2018-02-01
In the last decade we have witnessed an increasing interest in using computer vision and image processing in forest fire research. Image processing techniques have been successfully used in different fire analysis areas such as early detection, monitoring, modeling, and fire front characteristics estimation. While the majority of the work deals with the use of 2D visible spectrum images, recent work has introduced the use of 3D vision in this field. This work proposes a new multimodal vision framework permitting the extraction of the three-dimensional geometrical characteristics of fires captured by multiple 3D vision systems. The 3D system is a multispectral stereo system operating in both the visible and near-infrared (NIR) spectral bands. The framework supports the use of multiple stereo pairs positioned so as to capture complementary views of the fire front during its propagation. Multimodal registration is conducted using the captured views in order to build a complete 3D model of the fire front. The registration process is achieved using multisensory fusion based on visual data (2D and NIR images), GPS positions, and IMU inertial data. Experiments were conducted outdoors in order to show the performance of the proposed framework. The obtained results are promising and show the potential of using the proposed framework in operational scenarios for wildland fire research and as a decision management system in fire fighting.
NASA Astrophysics Data System (ADS)
Li, Peng; Chong, Wenyan; Ma, Yongjun
2017-10-01
In order to avoid the shortcomings of low efficiency and restricted measuring range that exist in traditional 3D on-line contact measurement methods for workpiece size, the development of a novel 3D contact measurement system is introduced, designed for intelligent manufacturing based on stereo vision. The developed contact measurement system is characterized by the integrated use of a handy probe, a binocular stereo vision system, and advanced measurement software. The handy probe consists of six track markers, a touch probe, and the associated electronics. In the process of contact measurement, the handy probe is located by means of the stereo vision system and the track markers, and the 3D coordinates of a point on the workpiece are measured by calculating the tip position of the touch probe. Owing to the flexibility of the handy probe, the orientation, range, and density of the 3D contact measurement can be adapted to different needs. Applications of the developed contact measurement system to high-precision measurement and rapid surface digitization are experimentally demonstrated.
3D gaze tracking system for NVidia 3D Vision®.
Wibirama, Sunu; Hamamoto, Kazuhiko
2013-01-01
Inappropriate parallax settings in stereoscopic content generally cause visual fatigue and visual discomfort. To optimize three-dimensional (3D) effects in stereoscopic content while taking this health issue into account, understanding how users gaze in the 3D direction in virtual space is currently an important research topic. In this paper, we report the development of a novel 3D gaze tracking system for NVidia 3D Vision® for use with desktop stereoscopic displays. We propose an optimized geometric method to accurately measure the position of a virtual 3D object. Our experimental results show that the proposed system achieved better accuracy than the conventional geometric method, with average errors of 0.83 cm, 0.87 cm, and 1.06 cm in the X, Y, and Z dimensions, respectively.
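A common geometric primitive behind such systems is intersecting the two eye gaze rays in virtual space; since the rays rarely meet exactly, the midpoint of their shortest connecting segment serves as the 3D gaze point. The sketch below shows that primitive only; it is not the paper's optimized method, and the eye geometry is illustrative.

```python
import numpy as np

def closest_point_between_rays(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two gaze rays.
    o* are eye positions, d* are gaze directions; the left/right rays
    rarely intersect exactly, so the midpoint serves as the 3D gaze point."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    b = d1 @ d2
    w = o1 - o2
    denom = 1.0 - b * b
    if abs(denom) < 1e-9:                 # parallel rays: no depth information
        return None
    t1 = (b * (d2 @ w) - (d1 @ w)) / denom
    t2 = ((d2 @ w) - b * (d1 @ w)) / denom
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))

# Eyes 6.5 cm apart, both converging on a virtual point 40 cm away (metres).
p = closest_point_between_rays(
    np.array([-0.0325, 0.0, 0.0]), np.array([0.0325, 0.0, 0.4]),
    np.array([ 0.0325, 0.0, 0.0]), np.array([-0.0325, 0.0, 0.4]))
print(p)   # close to [0, 0, 0.4], the point where the two rays cross
```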
Wide field-of-view bifocal eyeglasses
NASA Astrophysics Data System (ADS)
Barbero, Sergio; Rubinstein, Jacob
2015-09-01
When vision is affected simultaneously by presbyopia and myopia or hyperopia, an eyeglass-based solution implies a surface with either segmented focal regions (e.g., bifocal lenses) or a progressive addition profile (PALs). However, both options have the drawback of reducing the field-of-view for each power position, which restricts the natural eye-head movements of the wearer. To avoid this serious limitation we propose a new solution, essentially a bifocal power-adjustable optical design, ensuring a wide field-of-view for every viewing distance. The optical system is based on the Alvarez principle. Spherical refraction correction is considered for different eccentric gaze directions covering a field-of-view range of up to 45 degrees. Eye movements during convergence for near objects are included. We designed three bifocal systems. The first provides -3 D for far vision (myopic eye) and -1 D for near vision (+2 D addition). The second provides a +3 D addition with -3 D for far vision. Finally, the last system is an example of reading glasses with a +1 D power addition.
Audible vision for the blind and visually impaired in indoor open spaces.
Yu, Xunyi; Ganz, Aura
2012-01-01
In this paper we introduce Audible Vision, a system that can help blind and visually impaired users navigate in large indoor open spaces. The system uses computer vision to estimate the location and orientation of the user, and enables the user to perceive his/her relative position to a landmark through 3D audio. Testing shows that Audible Vision can work reliably in real-life ever-changing environment crowded with people.
Systems Analysis of Remote Piloting/Robotics Technology Applicable to Assault Rafts.
1982-01-01
Driver Position: The driver is the only member of the crew seated under armor - seated in the front left of the hull - 3 M17 periscopes - single-piece hatch cover. Vision Data Summary: D - 3P; H - 82.5° to 165°; V - 11° to 22°; SC - is not under armor, therefore has freedom of vision.
Clinically Normal Stereopsis Does Not Ensure Performance Benefit from Stereoscopic 3D Depth Cues
2014-10-28
Keywords: Stereopsis, Binocular Vision, Optometry, Depth Perception, 3D vision, 3D human factors, Stereoscopic displays, S3D, Virtual environment. Distribution A: Approved for public release.
Identification and location of catenary insulator in complex background based on machine vision
NASA Astrophysics Data System (ADS)
Yao, Xiaotong; Pan, Yingli; Liu, Li; Cheng, Xiao
2018-04-01
Precise localization of the insulator is an important prerequisite for fault detection. Because current insulator-location algorithms for catenary inspection images are not accurate, a target recognition and localization method based on binocular vision combined with SURF features is proposed. First, because the insulator lies in a complex environment, SURF features are used to achieve coarse positioning of the target; then the binocular vision principle is used to calculate the 3D coordinates of the coarsely located object, achieving target recognition and fine localization. Finally, the 3D coordinates of the object's center of mass are preserved and transferred to the inspection robot to control its detection position. Experimental results demonstrate that the proposed method has better recognition efficiency and accuracy, can successfully identify the target, and has definite application value.
A vision-based method for planar position measurement
NASA Astrophysics Data System (ADS)
Chen, Zong-Hao; Huang, Peisen S.
2016-12-01
In this paper, a vision-based method is proposed for three-degree-of-freedom (3-DOF) planar position (XYθZ) measurement. This method uses a single camera to capture the image of a 2D periodic pattern and then uses the 2D discrete Fourier transform (2D DFT) method to estimate the phase of its fundamental frequency component for position measurement. To improve position measurement accuracy, the phase estimation error of the 2D DFT is analyzed and a phase estimation method is proposed. Different simulations are done to verify the feasibility of this method and study the factors that influence the accuracy and precision of phase estimation. To demonstrate the performance of the proposed method for position measurement, a prototype encoder consisting of a black-and-white industrial camera with VGA resolution (480 × 640 pixels) and an iPhone 4s has been developed. Experimental results show the peak-to-peak resolutions to be 3.5 nm in the X axis, 8 nm in the Y axis, and 4 μrad in the θZ axis. The corresponding RMS resolutions are 0.52 nm, 1.06 nm, and 0.60 μrad, respectively.
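The core idea, that a lateral shift of a periodic pattern appears as a proportional phase change at the fundamental frequency of its DFT, can be demonstrated in one dimension with NumPy. The grating period and shift below are made-up values used only to verify the sub-pixel estimate.

```python
import numpy as np

def phase_at_bin(signal, k):
    """Phase of the k-th DFT bin; a lateral shift of a periodic pattern
    shows up as a phase change at its fundamental frequency."""
    return np.angle(np.fft.fft(signal)[k])

# Synthetic grating: period 32 px, shifted by 5.25 px between two frames.
N, period, shift = 512, 32, 5.25
x = np.arange(N)
f0 = np.cos(2 * np.pi * x / period)
f1 = np.cos(2 * np.pi * (x - shift) / period)
k = N // period                                # fundamental-frequency bin
dphi = phase_at_bin(f1, k) - phase_at_bin(f0, k)
dphi = (dphi + np.pi) % (2 * np.pi) - np.pi    # wrap to (-pi, pi]
print(-dphi / (2 * np.pi) * period)            # about 5.25 px, sub-pixel estimate
```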
Detecting High Hyperopia: The Plus Lens Test and the Spot Vision Screener.
Feldman, Samuel; Peterseim, Mae Millicent W; Trivedi, Rupal H; Edward Wilson, M; Cheeseman, Edward W; Papa, Carrie E
2017-05-01
To evaluate the usefulness of the Plus Lens (Goodlite Company, Elgin, IL) test and the Spot Vision Screener (Welch Allyn, Skaneateles Falls, NY) in detecting high hyperopia in a pediatric population. Between June and August 2015, patients were screened with the Spot Vision Screener and the Plus Lens test prior to a scheduled pediatric ophthalmology visit. The following data were analyzed: demographic data, Plus Lens result, Spot Vision Screener result, cycloplegic refraction, and examination findings. Sensitivity/specificity and positive/negative predictive values were calculated for the Plus Lens test and Spot Vision Screener in detecting hyperopia as determined by the "gold-standard" cycloplegic refraction. A total of 109 children (average age: 82 months) were included. Compared to the ophthalmologist's cycloplegic refraction, the Spot Vision Screener sensitivity for +3.50 diopters (D) hyperopia was 31.25% and the specificity was 100%. The Plus Lens sensitivity for +3.50 D hyperopia was 43.75% and the specificity was 89.25%. Spot Vision Screener sensitivity increased with higher degrees of hyperopia. In this preliminary study, the Plus Lens test and the Spot Vision Screener demonstrated moderate sensitivity with good specificity in detecting high hyperopia. [J Pediatr Ophthalmol Strabismus. 2017;54(3):163-167.]. Copyright 2017, SLACK Incorporated.
Progress in building a cognitive vision system
NASA Astrophysics Data System (ADS)
Benjamin, D. Paul; Lyons, Damian; Yue, Hong
2016-05-01
We are building a cognitive vision system for mobile robots that works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion to create a local dynamic spatial model. These local 3D models are composed to create an overall 3D model of the robot and its environment. This approach turns the computer vision problem into a search problem whose goal is the acquisition of sufficient spatial understanding for the robot to succeed at its tasks. The research hypothesis of this work is that the movements of the robot's cameras are only those that are necessary to build a sufficiently accurate world model for the robot's current goals. For example, if the goal is to navigate through a room, the model needs to contain any obstacles that would be encountered, giving their approximate positions and sizes. Other information does not need to be rendered into the virtual world, so this approach trades model accuracy for speed.
Railway clearance intrusion detection method with binocular stereo vision
NASA Astrophysics Data System (ADS)
Zhou, Xingfang; Guo, Baoqing; Wei, Wei
2018-03-01
In the stages of railway construction and operation, objects intruding into the railway clearance greatly threaten the safety of railway operation, so real-time intrusion detection is of great importance. To address the shortcomings of single-image methods, which are insensitive to depth and subject to shadow interference, an intrusion detection method with binocular stereo vision is proposed that reconstructs the 3D scene to locate objects and judge clearance intrusion. The binocular cameras are calibrated with Zhang Zhengyou's method. In order to improve the 3D reconstruction speed, a suspicious region is first determined by a background-difference method on a single camera's image sequence. The image rectification, stereo matching, and 3D reconstruction processes are executed only when there is a suspicious region. A transformation matrix from the Camera Coordinate System (CCS) to the Track Coordinate System (TCS) is computed using the gauge constant and used to transfer the 3D point clouds into the TCS; the point clouds are then used to calculate object position and intrusion in the TCS. Experiments in a railway scene show that the position precision is better than 10 mm. It is an effective way to detect clearance intrusion and can satisfy the requirements of railway applications.
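A minimal sketch of the CCS-to-TCS step follows, assuming the rotation and translation have already been obtained from the gauge-based calibration; the extrinsics used here are placeholders, not values from the paper.

```python
import numpy as np

def make_ccs_to_tcs(R, t):
    """Build the 4x4 homogeneous transform from camera coordinates (CCS)
    to track coordinates (TCS); R and t would come from the gauge-based
    extrinsic calibration described in the abstract."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def to_tcs(points_ccs, T):
    """Apply the transform to an (N, 3) reconstructed point cloud."""
    pts_h = np.hstack([points_ccs, np.ones((len(points_ccs), 1))])
    return (T @ pts_h.T).T[:, :3]

# Placeholder extrinsics: camera 1.8 m above the track plane, tilted 20 degrees.
a = np.deg2rad(20)
R = np.array([[1, 0, 0],
              [0, np.cos(a), -np.sin(a)],
              [0, np.sin(a),  np.cos(a)]])
cloud_tcs = to_tcs(np.array([[0.1, 0.2, 5.0]]), make_ccs_to_tcs(R, [0, -1.8, 0]))
print(cloud_tcs)   # position in track coordinates, ready for clearance checking
```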
NASA Astrophysics Data System (ADS)
Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Bryan; Chenault, David B.; Kingston, David; Geulen, Vanilynmae; Newell, Scott; Pettijohn, Brad
2009-02-01
In this paper, we report on the development of a 3D vision system consisting of a flat panel stereoscopic display and auto-converging stereo camera and an assessment of the system's use for robotic driving, manipulation, and surveillance operations. The 3D vision system was integrated onto a Talon Robot and Operator Control Unit (OCU) such that direct comparisons of the performance of a number of test subjects using 2D and 3D vision systems were possible. A number of representative scenarios were developed to determine which tasks benefited most from the added depth perception and to understand when the 3D vision system hindered understanding of the scene. Two tests were conducted at Fort Leonard Wood, MO with noncommissioned officers ranked Staff Sergeant and Sergeant First Class. The scenarios; the test planning, approach and protocols; the data analysis; and the resulting performance assessment of the 3D vision system are reported.
Synthetic vision in the cockpit: 3D systems for general aviation
NASA Astrophysics Data System (ADS)
Hansen, Andrew J.; Rybacki, Richard M.; Smith, W. Garth
2001-08-01
Synthetic vision has the potential to improve safety in aviation through better pilot situational awareness and enhanced navigational guidance. The technological advances enabling synthetic vision are GPS-based navigation (position and attitude) systems and efficient graphical systems for rendering 3D displays in the cockpit. A benefit for military, commercial, and general aviation platforms alike is the relentless drive to miniaturize computer subsystems. Processors, data storage, graphical and digital signal processing chips, RF circuitry, and bus architectures are keeping pace with or outpacing Moore's Law in the transition to mobile computing and embedded systems. The tandem of fundamental GPS navigation services, such as the US FAA's Wide Area and Local Area Augmentation Systems (WAAS), and commercially viable mobile rendering systems puts synthetic vision well within the technological reach of general aviation. Given the appropriate navigational inputs, low-cost and power-efficient graphics solutions are capable of rendering a pilot's out-the-window view into visual databases with photo-specific imagery and geo-specific elevation and feature content. Looking beyond the single airframe, proposed aviation technologies such as ADS-B would provide a communication channel for bringing traffic information on-board and into the cockpit visually via the 3D display for additional pilot awareness. This paper gives a view of current 3D graphics system capability suitable for general aviation and presents a potential road map following the current trends.
NASA Astrophysics Data System (ADS)
Hannachi, Ammar; Kohler, Sophie; Lallement, Alex; Hirsch, Ernest
2015-04-01
3D modeling of scene contents is of increasing importance for many computer vision based applications. In particular, industrial applications of computer vision require efficient tools for the computation of this 3D information. Routinely, stereo-vision is a powerful technique to obtain the 3D outline of imaged objects from the corresponding 2D images; as a consequence, this approach provides only a poor and partial description of the scene contents. On the other hand, for structured light based reconstruction techniques, 3D surfaces of imaged objects can often be computed with high accuracy, but the resulting active range data fails to characterize the object edges. Thus, in order to benefit from the strengths of the various acquisition techniques, we introduce in this paper promising approaches enabling complete 3D reconstruction based on the cooperation of two complementary acquisition and processing techniques, in our case stereoscopic and structured light based methods, providing two 3D data sets describing respectively the outlines and the surfaces of the imaged objects. We present, accordingly, the principles of three fusion techniques and their comparison based on evaluation criteria related to the nature of the workpiece and the type of the tackled application. The proposed fusion methods rely on geometric characteristics of the workpiece, which favour the quality of the registration. Further, the results obtained demonstrate that the developed approaches are well adapted for 3D modeling of manufactured parts including free-form surfaces and, consequently, for quality control applications using these 3D reconstructions.
Handheld pose tracking using vision-inertial sensors with occlusion handling
NASA Astrophysics Data System (ADS)
Li, Juan; Slembrouck, Maarten; Deboeverie, Francis; Bernardos, Ana M.; Besada, Juan A.; Veelaert, Peter; Aghajan, Hamid; Casar, José R.; Philips, Wilfried
2016-07-01
Tracking of a handheld device's three-dimensional (3-D) position and orientation is fundamental to various application domains, including augmented reality (AR), virtual reality, and interaction in smart spaces. Existing systems still offer limited performance in terms of accuracy, robustness, computational cost, and ease of deployment. We present a low-cost, accurate, and robust system for handheld pose tracking using fused vision and inertial data. The integration of measurements from embedded accelerometers reduces the number of unknown parameters in the six-degree-of-freedom pose calculation. The proposed system requires two light-emitting diode (LED) markers to be attached to the device, which are tracked by external cameras through an algorithm robust against illumination changes. Three data fusion methods have been proposed, including a triangulation-based stereo-vision system, a constraint-based stereo-vision system with occlusion handling, and a triangulation-based multivision system. Real-time demonstrations of the proposed system applied to AR and 3-D gaming are also included. The accuracy assessment of the proposed system is carried out by comparison with data generated by the state-of-the-art commercial motion tracking system OptiTrack. Experimental results show that the proposed system has achieved high accuracy of a few centimeters in position estimation and a few degrees in orientation estimation.
Using the auxiliary camera for system calibration of 3D measurement by digital speckle
NASA Astrophysics Data System (ADS)
Xue, Junpeng; Su, Xianyu; Zhang, Qican
2014-06-01
The study of 3D shape measurement by digital speckle temporal sequence correlation has drawn much attention because of its advantages; however, the measurement mainly yields the depth z coordinate, while the horizontal physical coordinates (x, y) are usually expressed as image pixel coordinates. In this paper, a new approach for system calibration is proposed. With an auxiliary camera, we set up a temporary binocular vision system, which is used to calibrate the horizontal coordinates (in mm) while the temporal sequence reference speckle sets are calibrated. First, the binocular vision system is calibrated using the traditional method. Then, digital speckles are projected onto the reference plane, which is moved by equal distances in the depth direction, and temporal sequence speckle images are acquired with the camera as reference sets. When the reference plane is in the first and final positions, a crossed fringe pattern is projected onto the plane. The pixel coordinates of the control points are extracted from the images by Fourier analysis, and the corresponding physical coordinates are calculated by binocular vision. The physical coordinates corresponding to each pixel of the images are then calculated by an interpolation algorithm. Finally, the x and y corresponding to an arbitrary depth value z are obtained from a geometric formula. Experiments prove that our method can quickly and flexibly measure the 3D shape of an object as a point cloud.
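A sketch of the interpolation step, using hypothetical control points: the stereo rig gives physical (x, y) at a sparse set of pixels, which is interpolated to the full sensor, and (x, y) at an arbitrary measured depth z follows linearly between the two calibrated planes. Values and grid sizes are assumptions.

```python
import numpy as np
from scipy.interpolate import griddata

# Control points extracted from the crossed-fringe image (hypothetical):
# pixel coordinates and their physical (mm) coordinates from the stereo rig.
pix = np.array([[100, 100], [500, 100], [100, 400], [500, 400]], float)
phys = np.array([[0.0, 0.0], [80.0, 0.0], [0.0, 60.0], [80.0, 60.0]])

# Interpolate a physical (x, y) for every pixel of the sensor.
u, v = np.meshgrid(np.arange(640), np.arange(480))
x_mm = griddata(pix, phys[:, 0], (u, v), method='linear')
y_mm = griddata(pix, phys[:, 1], (u, v), method='linear')

# With the plane calibrated at depths z0 and z1, the coordinate at an
# arbitrary measured depth z follows from similar triangles along the
# pinhole viewing ray (linear in z):
def coord_at_depth(c0, c1, z0, z1, z):
    w = (z - z0) / (z1 - z0)
    return c0 + w * (c1 - c0)
```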
Fast vision-based catheter 3D reconstruction
NASA Astrophysics Data System (ADS)
Moradi Dalvand, Mohsen; Nahavandi, Saeid; Howe, Robert D.
2016-07-01
Continuum robots offer better maneuverability and inherent compliance and are well-suited for surgical applications as catheters, where gentle interaction with the environment is desired. However, sensing their shape and tip position is a challenge, as traditional sensors cannot be employed in the way they are in rigid robotic manipulators. In this paper, a high-speed vision-based shape sensing algorithm for real-time 3D reconstruction of continuum robots based on the views of two arbitrarily positioned cameras is presented. The algorithm is based on the closed-form analytical solution of the reconstruction of quadratic curves in 3D space from two arbitrary perspective projections. High-speed image processing algorithms are developed for segmentation and feature extraction from the images. The proposed algorithms are experimentally validated for accuracy by measuring the tip position, length, and bending and orientation angles for known circular and elliptical catheter-shaped tubes. A sensitivity analysis is also carried out to evaluate the robustness of the algorithm. Experimental results demonstrate good accuracy (maximum errors of ±0.6 mm and ±0.5 deg), performance (200 Hz), and robustness (maximum absolute errors of 1.74 mm and 3.64 deg for the added noise) of the proposed high-speed algorithms.
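One step of such a pipeline, fitting a quadratic curve to the segmented catheter pixels in a single view, can be sketched with OpenCV's ellipse fit. The closed-form two-view reconstruction itself is not reproduced here, and the synthetic image below stands in for a real segmentation.

```python
import numpy as np
import cv2

# Synthetic "segmented catheter": an elliptical arc drawn into a binary image.
img = np.zeros((480, 640), np.uint8)
cv2.ellipse(img, (320, 240), (150, 60), 30, 0, 360, 255, 2)

# Fit a quadratic curve (ellipse) to the segmented pixels of this view;
# the per-view conics would then feed the two-view closed-form solution.
pts = cv2.findNonZero(img)
(cx, cy), (w, h), angle = cv2.fitEllipse(pts)
print((cx, cy), (w, h), angle)   # centre, axis lengths, orientation in this view
```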
Gong, Yuanzheng; Seibel, Eric J.
2017-01-01
Rapid development in the performance of sophisticated optical components, digital image sensors, and computing capability, along with decreasing costs, has enabled three-dimensional (3-D) optical measurement to replace more traditional methods in manufacturing and quality control. The advantages of 3-D optical measurement, such as noncontact operation, high accuracy, rapid operation, and the ability for automation, are extremely valuable for inline manufacturing. However, most current optical approaches are suitable for exterior rather than internal surfaces of machined parts. A 3-D optical measurement approach is proposed based on machine vision for the 3-D profile measurement of tiny complex internal surfaces, such as internally threaded holes. To capture the full topographic extent (peak to valley) of threads, a side-view commercial rigid scope is used to collect images at known camera positions and orientations. A 3-D point cloud is generated with multiview stereo vision using linear motion of the test piece, which is repeated with a rotation to form additional point clouds. Registration of these point clouds into a complete reconstruction uses a proposed automated feature-based 3-D registration algorithm. The resulting 3-D reconstruction is compared with x-ray computed tomography to validate the feasibility of our proposed method for future robotically driven industrial 3-D inspection. PMID:28286351
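Below is a hedged sketch of feature-based registration of two partial point clouds, using Open3D's FPFH features with RANSAC for coarse alignment followed by ICP refinement. It mirrors the described pipeline only in spirit; the paper's own registration algorithm and all parameters here are assumptions.

```python
import open3d as o3d

def register(source, target, voxel=0.5):
    """Coarse-to-fine registration of two partial scans: FPFH feature
    matching with RANSAC, then point-to-plane ICP refinement."""
    def preprocess(pcd):
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
        return down, fpfh

    s_down, s_f = preprocess(source)
    t_down, t_f = preprocess(target)
    # Coarse: RANSAC over FPFH correspondences.
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        s_down, t_down, s_f, t_f, True, voxel * 1.5,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
        [], o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    # Fine: ICP starting from the coarse transform.
    fine = o3d.pipelines.registration.registration_icp(
        s_down, t_down, voxel * 0.5, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return fine.transformation
```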
Pedrotti, Emilio; Carones, Francesco; Aiello, Francesco; Mastropasqua, Rodolfo; Bruni, Enrico; Bonacci, Erika; Talli, Pietro; Nucci, Carlo; Mariotti, Cesare; Marchini, Giorgio
2018-02-01
To compare the visual acuity, refractive outcomes, and quality of vision in patients with bilateral implantation of 4 intraocular lenses (IOLs). Department of Neurosciences, Biomedicine and Movement Sciences, Eye Clinic, University of Verona, Verona, and Carones Ophthalmology Center, Milano, Italy. Prospective case series. The study included patients who had bilateral cataract surgery with the implantation of 1 of 4 IOLs as follows: Tecnis 1-piece monofocal (monofocal IOL), Tecnis Symfony extended range of vision (extended-range-of-vision IOL), Restor +2.5 diopter (D) (+2.5 D multifocal IOL), and Restor +3.0 D (+3.0 D multifocal IOL). Visual acuity, refractive outcome, defocus curve, objective optical quality, contrast sensitivity, spectacle independence, and glare perception were evaluated 6 months after surgery. The study comprised 185 patients. The extended-range-of-vision IOL (55 patients) showed better distance visual outcomes than the monofocal IOL (30 patients) and high-addition apodized diffractive-refractive multifocal IOLs (P ≤ .002). The +3.0 D multifocal IOL (50 patients) showed the best near visual outcomes (P < .001). The +2.5 D multifocal IOL (50 patients) and extended-range-of-vision IOL provided significantly better intermediate visual outcomes than the other 2 IOLs, with significantly better vision for a defocus level of -1.5 D (P < .001). Better spectacle independence was shown for the +2.5 D multifocal IOL and extended-range-of-vision IOL (P < .001). The extended-range-of-vision IOL and +2.5 D multifocal IOL provided significantly better intermediate visual restoration after cataract surgery than the monofocal IOL and +3.0 D multifocal IOL, with significantly better quality of vision with the extended-range-of-vision IOL. Copyright © 2018 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Four-dimensional (4D) tracking of high-temperature microparticles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Zhehui, E-mail: zwang@lanl.gov; Liu, Q.; Waganaar, W.
High-speed tracking of hot and molten microparticles in motion provides rich information about burning plasmas in magnetic fusion. An exploding-wire apparatus is used to produce moving high-temperature metallic microparticles and to develop four-dimensional (4D) or time-resolved 3D particle tracking techniques. The pinhole camera model and algorithms developed for computer vision are used for scene calibration and 4D reconstructions. 3D positions and velocities are then derived for different microparticles. Velocity resolution approaches 0.1 m/s by using the local constant velocity approximation.
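The local constant-velocity approximation amounts to a sliding-window linear fit of position against time. A NumPy sketch with synthetic data follows; the frame rate, window size, and motion are made-up values for illustration.

```python
import numpy as np

def local_velocity(track, fps, window=7):
    """Velocity of a 3D particle track under the local constant-velocity
    approximation: a least-squares line fit of position vs. time inside a
    sliding window, which averages out per-frame reconstruction noise."""
    t = np.arange(len(track)) / fps
    half = window // 2
    v = np.full_like(track, np.nan, dtype=float)
    for i in range(half, len(track) - half):
        sl = slice(i - half, i + half + 1)
        # degree-1 polyfit: the slope of each coordinate is a velocity component
        v[i] = [np.polyfit(t[sl], track[sl, k], 1)[0] for k in range(3)]
    return v

# Synthetic particle moving at (0.3, -0.1, 0.05) m/s, observed at 5000 fps.
rng = np.random.default_rng(1)
t = np.arange(200) / 5000.0
track = np.outer(t, [0.3, -0.1, 0.05]) + 1e-5 * rng.standard_normal((200, 3))
print(np.nanmean(local_velocity(track, 5000), axis=0))  # about [0.3, -0.1, 0.05]
```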
3D morphology reconstruction using linear array CCD binocular stereo vision imaging system
NASA Astrophysics Data System (ADS)
Pan, Yu; Wang, Jinjiang
2018-01-01
A conventional binocular vision imaging system, which has a small field of view, cannot reconstruct the 3D shape of a dynamic object. We developed a linear array CCD binocular vision imaging system that uses different calibration and reconstruction methods. Building on the binocular vision imaging system, the linear array CCD binocular vision imaging system, which has a wider field of view, can reconstruct the 3D morphology of objects in continuous motion with accurate results. This research mainly introduces the composition and principle of the linear array CCD binocular vision imaging system, including the calibration, capture, matching, and reconstruction stages. The system consists of two linear array cameras placed in a special arrangement and a horizontally moving platform that carries the objects. The internal and external parameters of the cameras are obtained by calibration in advance. The cameras are then used to capture images of the moving objects, and the results are matched and 3D-reconstructed. The linear array CCD binocular vision imaging system can accurately measure the 3D appearance of moving objects; this work is of great significance for measuring the 3D morphology of moving objects.
Stereo 3-D Vision in Teaching Physics
ERIC Educational Resources Information Center
Zabunov, Svetoslav
2012-01-01
Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The…
Impact of 3D vision on mental workload and laparoscopic performance in inexperienced subjects.
Gómez-Gómez, E; Carrasco-Valiente, J; Valero-Rosa, J; Campos-Hernández, J P; Anglada-Curado, F J; Carazo-Carazo, J L; Font-Ugalde, P; Requena-Tapia, M J
2015-05-01
To assess the effect of vision in three dimensions (3D) versus two dimensions (2D) on mental workload and laparoscopic performance during simulation-based training. A prospective, randomized crossover study on students inexperienced in operative laparoscopy was conducted. Forty-six candidates executed five standardized exercises on a pelvitrainer with both vision systems (3D and 2D). Laparoscopic performance was assessed using the total time (in seconds) and the number of failed attempts. For workload assessment, the validated NASA-TLX questionnaire was administered. 3D vision improved performance, reducing the time (3D = 1006.08 ± 315.94 vs. 2D = 1309.17 ± 300.28; P < .001) and the total number of failed attempts (3D = .84 ± 1.26 vs. 2D = 1.86 ± 1.60; P < .001). For each exercise, 3D vision also showed better performance times: "transfer objects" (P = .001), "single knot" (P < .001), "clip and cut" (P < .05), and "needle guidance" (P < .001). In addition, according to the NASA-TLX results, less mental workload was experienced with the use of 3D (P < .001). However, 3D vision was associated with greater visual impairment (P < .01) and headaches (P < .05). The incorporation of 3D systems in laparoscopic training programs would facilitate the acquisition of laparoscopic skills, because they reduce mental workload and improve the performance of inexperienced surgeons. However, some undesirable effects such as visual discomfort or headache are identified initially. Copyright © 2014 AEU. Published by Elsevier España, S.L.U. All rights reserved.
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, J. J., III; Bailey, Randall E.; Sweeters, Jason L.
2005-01-01
NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that will help to eliminate low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. The paper describes experimental evaluation of a multi-mode 3-D exocentric synthetic vision navigation display concept for commercial aircraft. Experimental results evinced the situation awareness benefits of 2-D and 3-D exocentric synthetic vision displays over traditional 2-D co-planar navigation and vertical situation displays. Conclusions and future research directions are discussed.
[Accommodation to monochromatic targets in people with different color vision statuses].
Qian, Yishan; Huang, Jia; Chu, Renyuan
2015-01-01
To compare the accommodation response (AR) to monochromatic targets in subjects with different color vision statuses, and to investigate the role of color vision in the control of accommodation and emmetropization. It was a case-control study. Accommodation was measured with a dynamic infrared optometer while subjects [17 protans, 47 deutans, and 23 normals; mean age: (20.0 ± 4.4) years] viewed (1) red-on-black or (2) green-on-black vertical square-wave gratings of iso-luminance (3 cycles/deg; 0.9 contrast) in a Badal optic system. The grating stepped 1.00 D towards the eye from an initial position of 0 D up to 5.00 D. With red-black targets, the AR in the protans (AR = 1.98 D) was worse than that in the normals (AR = 2.55 D) when the accommodation stimulus (AS) was 4.00 D (LSD, P = 0.031). The AR in the deutans was worse than that in the normals when the AS was 3.00, 4.00, and 5.00 D (3.00 D: 1.23 D vs. 1.69 D, P = 0.002; 4.00 D: 1.89 D vs. 2.55 D, P = 0.002; 5.00 D: 2.40 D vs. 3.17 D, P = 0.003). With green-black targets, the AR in the protans was worse than that in the normals when the AS was 3.00 and 4.00 D (3.00 D: 1.13 D vs. 1.61 D, P = 0.004; 4.00 D: 1.80 D vs. 2.34 D, P = 0.021). In the deutans, the AR was worse with stimuli of 3.00, 4.00, and 5.00 D (3.00 D: 1.21 D vs. 1.61 D, P = 0.003; 4.00 D: 1.65 D vs. 2.34 D, P < 0.001; 5.00 D: 2.36 D vs. 2.93 D, P = 0.007). No significant differences between the protans and deutans were found for any of the stimulus conditions. In the protans, accommodation to red-black targets was better than that to green-black targets when the stimulus was 2.00, 3.00, and 5.00 D (2.00 D: t = -2.81, P = 0.013; 3.00 D: t = -4.55, P < 0.001; 5.00 D: t = -3.15, P = 0.006). In the deutans, accommodation to red-black targets was better than that to green-black targets when the stimulus was 4.00 D (t = -2.19, P = 0.034). In the normals, accommodation to red-black targets was better than that to green-black targets when the stimulus was 2.00, 4.00, and 5.00 D (2.00 D: t = -2.57, P = 0.017; 4.00 D: t = -2.67, P = 0.014; 5.00 D: t = -2.15, P = 0.043). Individuals with a color vision deficiency tend to have a larger accommodative lag than normals. Red targets tend to induce a better accommodation response than green ones. Color vision may play a role in the control of accommodation and emmetropization.
Ibarra Zannatha, Juan Manuel; Tamayo, Alejandro Justo Malo; Sánchez, Angel David Gómez; Delgado, Jorge Enrique Lavín; Cheu, Luis Eduardo Rodríguez; Arévalo, Wilson Alexander Sierra
2013-11-01
This paper presents a stroke rehabilitation (SR) system for the upper limbs, developed as an interactive virtual environment (IVE) based on a commercial 3D vision system (a Microsoft Kinect), a humanoid robot (an Aldebaran Nao), and devices producing ergonometric signals. In one environment, the rehabilitation routines, developed by specialists, are presented to the patient simultaneously by the humanoid and an avatar inside the IVE. The patient follows the rehabilitation task while his avatar copies his gestures, which are captured by the Kinect 3D vision system. The information on the patient's movements, together with the signals obtained from the ergonometric measurement devices, is also used to supervise and evaluate the rehabilitation progress. The IVE can also present an RGB image of the patient. In another environment that uses the same base elements, four game routines--Touch the balls 1 and 2, Simon says, and Follow the point--are used for rehabilitation. These environments are designed to create a positive influence on the rehabilitation process, reduce costs, and engage the patient. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Skrzat, Janusz; Sioma, Andrzej; Kozerska, Magdalena
2013-01-01
In this paper we present the potential usage of a 3D vision system for registering features of macerated cranial bones. The applied 3D vision system collects height profiles of the object surface and builds a three-dimensional image of the surface from these data. This method appeared to be accurate enough to capture anatomical details of the macerated bones. With the aid of the 3D vision system we generated images of the surface of a human calvaria, which was used for testing the system. The reconstruction visualized the imprints of the dural vascular system, the cranial sutures, and the three-layer structure of the cranial bones observed in cross-section. We conclude that the 3D vision system may deliver data that can enhance the estimation of sex from osteological material.
Optimization of Stereo Matching in 3D Reconstruction Based on Binocular Vision
NASA Astrophysics Data System (ADS)
Gai, Qiyang
2018-01-01
Stereo matching is one of the key steps of 3D reconstruction based on binocular vision. In order to improve the convergence speed and accuracy of 3D reconstruction based on binocular vision, this paper adopts a method combining the epipolar constraint with an ant colony algorithm. The epipolar line constraint is used to reduce the search range, and an ant colony algorithm is then used to optimize the stereo matching feature search function within the reduced range. Through the establishment of an analysis model of the ant colony algorithm's stereo matching optimization process, a globally optimized solution of stereo matching in 3D reconstruction based on binocular vision is realized. The simulation results show that by combining the advantages of the epipolar constraint and the ant colony algorithm, the stereo matching range of 3D reconstruction based on binocular vision is simplified, and the convergence speed and accuracy of the stereo matching process are improved.
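To make the idea concrete, here is a toy ant-colony optimization of a disparity assignment along one epipolar scanline: pheromone and an inverse-cost heuristic steer the ants, and a smoothness penalty between neighbouring pixels enters the score. This is a didactic sketch only, not the paper's algorithm or parameters.

```python
import numpy as np

def aco_scanline_disparity(costs, n_ants=30, n_iter=50, rho=0.1, alpha=1.0, beta=2.0):
    """Toy ant-colony optimisation of a disparity assignment for one scanline.
    costs[x, d] is the matching cost (e.g. SAD) of pixel x at disparity d,
    already restricted to the epipolar line by rectification."""
    n_px, n_disp = costs.shape
    eta = 1.0 / (costs + 1e-6)                # heuristic desirability
    tau = np.ones_like(costs)                 # pheromone trails
    best_assign, best_score = None, np.inf
    rng = np.random.default_rng(0)
    for _ in range(n_iter):
        p = (tau ** alpha) * (eta ** beta)    # ant transition probabilities
        p /= p.sum(axis=1, keepdims=True)
        for _ant in range(n_ants):
            assign = np.array([rng.choice(n_disp, p=p[x]) for x in range(n_px)])
            # score = data cost + smoothness penalty between neighbours
            score = costs[np.arange(n_px), assign].sum() \
                    + 5.0 * np.abs(np.diff(assign)).sum()
            if score < best_score:
                best_score, best_assign = score, assign
        tau *= (1 - rho)                      # pheromone evaporation
        tau[np.arange(n_px), best_assign] += 1.0 / (1 + best_score)
    return best_assign

print(aco_scanline_disparity(np.random.rand(20, 8)))  # random demo cost matrix
```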
Position Accuracy Analysis of a Robust Vision-Based Navigation
NASA Astrophysics Data System (ADS)
Gaglione, S.; Del Pizzo, S.; Troisi, S.; Angrisano, A.
2018-05-01
Using images to determine camera position and attitude is a consolidated method, widespread for applications like UAV navigation. In harsh environments, where GNSS could be degraded or denied, image-based positioning is a possible candidate for an integrated or alternative system. In this paper, such a method is investigated using a system based on a single camera and 3D maps. A robust estimation method is proposed in order to limit the effect of blunders or noisy measurements on the position solution. The proposed approach is tested using images collected in an urban canyon, where GNSS positioning is very inaccurate. A photogrammetric survey was previously performed to build the 3D model of the test area. A position accuracy analysis is performed, and the effect of the proposed robust method is validated.
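A standard way to realize such robust image-based positioning is RANSAC-based PnP against the 3D model; the sketch below (synthetic data, OpenCV) shows how blundered 2D-3D correspondences are rejected rather than biasing the position solution. It illustrates the general technique, not the paper's specific estimator.

```python
import numpy as np
import cv2

# Synthetic scene standing in for the photogrammetric 3D model.
rng = np.random.default_rng(2)
pts3d = rng.uniform([-5, -5, 8], [5, 5, 20], size=(60, 3)).astype(np.float32)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)
rvec_true = np.array([0.05, -0.02, 0.01])
tvec_true = np.array([0.3, -0.1, 1.5])
pts2d, _ = cv2.projectPoints(pts3d, rvec_true, tvec_true, K, None)
pts2d = pts2d.reshape(-1, 2)
pts2d[:8] += rng.uniform(40, 80, size=(8, 2))   # blunders: bad feature matches

# RANSAC-based PnP: the pose comes from consensus subsets, so the eight
# corrupted correspondences are rejected instead of corrupting the solution.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    pts3d, pts2d, K, None, reprojectionError=3.0, confidence=0.99)
R, _ = cv2.Rodrigues(rvec)
print("camera position:", (-R.T @ tvec).ravel(), "inliers:", len(inliers))
```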
Associations between hyperopia and other vision and refractive error characteristics.
Kulp, Marjean Taylor; Ying, Gui-Shuang; Huang, Jiayan; Maguire, Maureen; Quinn, Graham; Ciner, Elise B; Cyert, Lynn A; Orel-Bixler, Deborah A; Moore, Bruce D
2014-04-01
To investigate the association of hyperopia greater than +3.25 diopters (D) with amblyopia, strabismus, anisometropia, astigmatism, and reduced stereoacuity in preschoolers. Three- to five-year-old Head Start preschoolers (N = 4040) underwent vision examination including monocular visual acuity (VA), cover testing, and cycloplegic refraction during the Vision in Preschoolers Study. Visual acuity was tested with habitual correction and was retested with full cycloplegic correction when VA was reduced below age norms in the presence of significant refractive error. Stereoacuity testing (Stereo Smile II) was performed on 2898 children during study years 2 and 3. Hyperopia was classified into three levels of severity (based on the most positive meridian on cycloplegic refraction): group 1: greater than or equal to +5.00 D, group 2: greater than +3.25 D to less than +5.00 D with interocular difference in spherical equivalent greater than or equal to 0.50 D, and group 3: greater than +3.25 D to less than +5.00 D with interocular difference in spherical equivalent less than 0.50 D. "Without" hyperopia was defined as refractive error of +3.25 D or less in the most positive meridian in both eyes. Standard definitions were applied for amblyopia, strabismus, anisometropia, and astigmatism. Relative to children without hyperopia, children with hyperopia greater than +3.25 D (n = 472, groups 1, 2, and 3) had a higher proportion of amblyopia (34.5 vs. 2.8%, p < 0.0001) and strabismus (17.0 vs. 2.2%, p < 0.0001). More severe levels of hyperopia were associated with higher proportions of amblyopia (51.5% in group 1 vs. 13.2% in group 3) and strabismus (32.9% in group 1 vs. 8.4% in group 3; trend p < 0.0001 for both). The presence of hyperopia greater than +3.25 D was also associated with a higher proportion of anisometropia (26.9 vs. 5.1%, p < 0.0001) and astigmatism (29.4 vs. 10.3%, p < 0.0001). Median stereoacuity of nonstrabismic, nonamblyopic children with hyperopia (n = 206) (120 arcsec) was worse than that of children without hyperopia (60 arcsec) (p < 0.0001), and more severe levels of hyperopia were associated with worse stereoacuity (480 arcsec for group 1 and 120 arcsec for groups 2 and 3, p < 0.0001). The presence and magnitude of hyperopia among preschoolers were associated with higher proportions of amblyopia, strabismus, anisometropia, and astigmatism and with worse stereoacuity even among nonstrabismic, nonamblyopic children.
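The severity grouping above is a simple decision rule; a hypothetical helper encoding it (the function and variable names are illustrative, not from the study's software) might look like:

```python
def hyperopia_group(most_plus_od, most_plus_os, se_diff):
    """Classify hyperopia per the study's cycloplegic criteria.
    most_plus_od/os: most positive meridian (D) in each eye;
    se_diff: interocular difference in spherical equivalent (D).
    Returns 1-3 for the severity groups, 0 for 'without' hyperopia."""
    worst = max(most_plus_od, most_plus_os)
    if worst >= 5.00:                     # group 1: >= +5.00 D
        return 1
    if worst > 3.25:                      # +3.25 D < hyperopia < +5.00 D
        return 2 if se_diff >= 0.50 else 3
    return 0                              # <= +3.25 D in both eyes
```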
CT Image Sequence Analysis for Object Recognition - A Rule-Based 3-D Computer Vision System
Dongping Zhu; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman
1991-01-01
Research is now underway to create a vision system for hardwood log inspection using a knowledge-based approach. In this paper, we present a rule-based, 3-D vision system for locating and identifying wood defects using topological, geometric, and statistical attributes. A number of different features can be derived from the 3-D input scenes. These features and evidence...
Amigó, Alfredo; Martinez-Sorribes, Paula; Recuerda, Margarita
2017-07-01
To study the effect on vision of induced negative and positive spherical aberration within the range of laser vision correction procedures. In 10 eyes (mean age: 35.8 years) under cycloplegic conditions, spherical aberration values from -0.75 to +0.75 µm in 0.25-µm steps were induced by an adaptive optics system. Astigmatism and spherical refraction were corrected, whereas the other natural aberrations remained untouched. Visual acuity, depth of focus defined as the interval of vision for which the target was still perceived as acceptable, contrast sensitivity, and the change in spherical refraction associated with the variation in pupil diameter from 6 to 2.5 mm were measured. A refractive change of 1.60 D/µm of induced spherical aberration was obtained. Emmetropic eyes became myopic when positive spherical aberration was induced and hyperopic when negative spherical aberration was induced (R² = 81%). There were weak correlations between spherical aberration and visual acuity or depth of focus (R² = 2% and 3%, respectively). Contrast sensitivity worsened with increasing spherical aberration (R² = 59%). When pupil size decreased, emmetropic eyes became hyperopic when preexisting spherical aberration was positive and myopic when spherical aberration was negative, with an average refractive change of 0.60 D/µm of spherical aberration (R² = 54%). An inverse linear correlation exists between the refractive state of the eye and spherical aberration induced within the range of laser vision correction. Small values of spherical aberration do not worsen visual acuity or depth of focus, but positive spherical aberration may induce night myopia. In addition, the changes in spherical refraction when the pupil constricts may worsen near vision when positive spherical aberration is induced or improve it when spherical aberration is negative. [J Refract Surg. 2017;33(7):470-474.]. Copyright 2017, SLACK Incorporated.
A laser-based vision system for weld quality inspection.
Huang, Wei; Kovacevic, Radovan
2011-01-01
Welding is a very complex process in which the final weld quality can be affected by many process parameters. In order to inspect the weld quality and detect the presence of various weld defects, different methods and systems have been studied and developed. In this paper, a laser-based vision system is developed for non-destructive weld quality inspection. The vision sensor is designed based on the principle of laser triangulation. By processing the images acquired from the vision sensor, the geometrical features of the weld can be obtained. Through visual analysis of the acquired 3D profiles of the weld, the presence as well as the positions and sizes of the weld defects can be accurately identified and, therefore, non-destructive weld quality inspection can be achieved.
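In the simplest laser-triangulation geometry, a laser beam offset from the camera's optical axis by a baseline b images a surface point at range Z at pixel offset u = f·b/Z, so each column of the imaged stripe yields one height sample of the weld profile. A minimal sketch under that assumed geometry (not the paper's sensor model):

```python
import numpy as np

def laser_profile_height(u_pixels, f_px, baseline_m):
    """Triangulate range from imaged laser-spot offsets. Assumed
    geometry: the laser beam runs parallel to the camera optical axis
    at lateral offset `baseline_m`, so a spot at range Z images at
    u = f*b/Z and therefore Z = f*b/u (u and f in pixels, u > 0)."""
    u = np.asarray(u_pixels, dtype=float)
    return f_px * baseline_m / u

# e.g. laser_profile_height([120.0, 80.0], f_px=1400.0, baseline_m=0.05)
# -> ranges in metres for two stripe columns
```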
Abdelrahman, M; Belramman, A; Salem, R; Patel, B
2018-05-01
To compare the performance of novices in laparoscopic peg transfer and intra-corporeal suturing tasks using two-dimensional (2D), three-dimensional (3D) and ultra-high-definition (4K) vision systems. Twenty-four novices were randomly assigned to 2D, 3D and 4K groups, eight in each group. All participants performed the two tasks on a box trainer until reaching proficiency. Their performance was assessed based on completion time, number of errors and number of repetitions using the validated FLS proficiency criteria. Eight candidates in each group completed the training curriculum. The mean performance time for the 2D group was 558.3 min, longer than that of the 3D and 4K groups (316.7 and 310.4 min, respectively; P < 0.0001). The mean number of repetitions was lower for the 3D and 4K groups versus the 2D group: 125.9 and 127.4, respectively, versus 152.1 (P < 0.0001). The mean number of errors was lower for the 4K group versus the 3D and 2D groups: 1.2 versus 26.1 and 50.2, respectively (P < 0.0001). The 4K vision system improved accuracy in acquiring laparoscopic skills for novices in complex tasks, shown by a significant reduction in the number of errors compared with the 3D and 2D vision systems. The 3D and 4K vision systems significantly improved speed and accuracy compared with the 2D vision system, based on shorter performance times, fewer errors and fewer repetitions. Copyright © 2018 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.
3D vision upgrade kit for the TALON robot system
NASA Astrophysics Data System (ADS)
Bodenhamer, Andrew; Pettijohn, Bradley; Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Chenault, David; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Kingston, David; Newell, Scott
2010-02-01
In September 2009 the Fort Leonard Wood Field Element of the US Army Research Laboratory - Human Research and Engineering Directorate, in conjunction with Polaris Sensor Technologies and Concurrent Technologies Corporation, evaluated the objective performance benefits of Polaris' 3D vision upgrade kit for the TALON small unmanned ground vehicle (SUGV). This upgrade kit is a field-upgradable set of two stereo cameras and a flat panel display, using only the standard hardware, data and electrical connections existing on the TALON robot. Using both the 3D vision system and a standard 2D camera and display, ten active-duty Army Soldiers completed seven scenarios designed to be representative of missions performed by military SUGV operators. Mission time savings (6.5% to 32%) were found for six of the seven scenarios when using the 3D vision system. Operators were not only able to complete tasks more quickly but, for six of seven scenarios, made fewer mistakes in their task execution. Subjective Soldier feedback was overwhelmingly in support of pursuing 3D vision systems, such as the one evaluated, for fielding to combat units.
Feature-based three-dimensional registration for repetitive geometry in machine vision
Gong, Yuanzheng; Seibel, Eric J.
2016-01-01
As an important step in three-dimensional (3D) machine vision, 3D registration is the process of aligning two or more 3D point clouds, collected from different perspectives, into a complete whole. The most popular approach to registering point clouds is to minimize the difference between them iteratively with the Iterative Closest Point (ICP) algorithm. However, ICP does not work well for repetitive geometries. To solve this problem, a feature-based 3D registration algorithm is proposed to align point clouds generated by vision-based 3D reconstruction. By utilizing texture information of the object and the robustness of image features, 3D correspondences can be retrieved, so the 3D registration of two point clouds reduces to solving for a rigid transformation. The comparison of our method with different ICP algorithms demonstrates that the proposed algorithm is more accurate, efficient and robust for repetitive geometry registration. Moreover, this method can also be used to address the high depth uncertainty caused by a small camera baseline in vision-based 3D reconstruction. PMID:28286703
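Once image-feature correspondences give matched 3D point pairs, the rigid transformation follows in closed form from the SVD (the Kabsch solution). A minimal sketch of that final step, independent of the paper's specific feature pipeline:

```python
import numpy as np

def rigid_transform_from_matches(P, Q):
    """Least-squares rigid transform (R, t) aligning P to Q, both Nx3
    arrays of matched 3D points, via the SVD (Kabsch) solution."""
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)          # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                 # guard against reflections
    t = cQ - R @ cP
    return R, t
```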
Estimation of 3D reconstruction errors in a stereo-vision system
NASA Astrophysics Data System (ADS)
Belhaoua, A.; Kohler, S.; Hirsch, E.
2009-06-01
The paper presents an approach for error estimation for the various steps of an automated 3D vision-based reconstruction procedure for manufactured workpieces. The process is based on a priori planning of the task and built around a cognitive intelligent sensory system using so-called Situation Graph Trees (SGT) as a planning tool. Such an automated quality control system requires the coordination of a set of complex processes performing sequentially data acquisition, its quantitative evaluation and the comparison with a reference model (e.g., a CAD object model) in order to evaluate the object quantitatively. To ensure efficient quality control, the aim is to be able to state whether reconstruction results fulfill tolerance rules or not. Thus, the goal is to evaluate independently the error for each step of the stereo-vision-based 3D reconstruction (e.g., calibration, contour segmentation, matching and reconstruction) and then to estimate the error for the whole system. In this contribution, we analyze particularly the segmentation error due to localization errors for extracted edge points supposed to belong to the lines and curves composing the outline of the workpiece under evaluation. The fitting parameters describing these geometric features are used as a quality measure to determine confidence intervals and finally to estimate the segmentation errors. These errors are then propagated through the whole reconstruction procedure, making it possible to evaluate their effect on the final 3D reconstruction result, specifically on position uncertainties. Lastly, analysis of these error estimates enables evaluation of the quality of the 3D reconstruction, as illustrated by the experimental results shown.
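When an analytic propagation of the per-step errors is awkward, a Monte Carlo pass gives the same answer numerically: perturb the edge localizations with their estimated noise and observe the spread of the reconstructed point. A minimal sketch (the `triangulate` callable and the Gaussian noise model are assumptions, not the paper's formulation):

```python
import numpy as np

def propagate_segmentation_error(triangulate, uL, uR, sigma_px, n=2000):
    """Monte Carlo propagation of edge-localization noise through a
    stereo triangulation function. `triangulate(uL, uR)` maps matched
    2D image points to a 3D point (assumed available). Returns the
    mean and 3x3 covariance of the reconstructed point."""
    rng = np.random.default_rng(0)
    pts = np.array([triangulate(uL + rng.normal(0, sigma_px, 2),
                                uR + rng.normal(0, sigma_px, 2))
                    for _ in range(n)])
    return pts.mean(0), np.cov(pts.T)
```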
Constructing a Database from Multiple 2D Images for Camera Pose Estimation and Robot Localization
NASA Technical Reports Server (NTRS)
Wolf, Michael; Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.; Padgett, Curtis W.
2012-01-01
The LMDB (Landmark Database) Builder software identifies persistent image features (landmarks) in a scene viewed multiple times and precisely estimates the landmarks' 3D world positions. The software receives as input multiple 2D images of approximately the same scene, along with an initial guess of the camera pose for each image, and a table of features matched pair-wise in each frame. LMDB Builder aggregates landmarks across an arbitrarily large collection of frames with matched features. Range data from stereo vision processing can also be passed in to improve the initial guess of the 3D point estimates. The LMDB Builder aggregates feature lists across all frames, manages the process of promoting selected features to landmarks, iteratively calculates the 3D landmark positions using the current camera pose estimates (via an optimal ray projection method), and then improves the camera pose estimates using the 3D landmark positions. Finally, it extracts image patches for each landmark from auto-selected key frames and constructs the landmark database. The landmark database can then be used to estimate future camera poses (and therefore localize a robotic vehicle that may be carrying the cameras) by matching current imagery to landmark database image patches and using the known 3D landmark positions to estimate the current pose.
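The ray-projection step above amounts to finding the 3D point closest, in a least-squares sense, to the viewing rays from every frame that observed the landmark. A minimal sketch of that step, offered as a generic stand-in for the software's "optimal ray projection" method:

```python
import numpy as np

def landmark_from_rays(origins, dirs):
    """Least-squares 3D point closest to a bundle of viewing rays, one
    ray per frame observing the landmark (origins, dirs: Nx3 arrays;
    the rays must not all be parallel)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, dirs):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)
```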
Krause, Matthias; Anschütz, Wilma; Vettorazzi, Eik; Breer, Stefan; Amling, Michael; Barvencik, Florian
2014-01-01
Due to inconsistent findings, the influence of vitamin D on postural body sway (PBS) is currently under debate. This study evaluated the impact of vitamin D on PBS with regard to different foot positions and eye-opening states in community-dwelling older individuals. In a cross-sectional study, we assessed PBS in 342 older individuals (264 females [average age (± SD): 68.3 ± 9.0 years], 78 males [65.7 ± 9.6 years]). A detailed medical history and the vitamin D level were obtained for each individual. Fall risk was evaluated using the New York-Presbyterian Fall Risk Assessment Tool (NY PFRA). PBS parameters (area, distance, velocity, frequency) were evaluated on a pressure plate with feet in closed stance (CS) or hip-width stance (HWS), with open eyes and closed eyes. Statistical analysis included logarithmic mixed models for repeated measures with the MIXED model procedure to test the influence of vitamin D (categorized as <10 μg/l, 10-20 μg/l, 21-30 μg/l, >30 μg/l), foot position, eye-opening state, age, sex and frequency of physical activity on PBS. Vitamin D was not an independent risk factor for falls experienced in the last 12 months. Nonetheless, PBS was higher in patients with vitamin D deficiency (<10 μg/l) in HWS (A/P p=0.028 and area p=0.037). Additionally, vitamin D deficiency intensified the deleterious effects of male sex (distance p=0.002) and absence of vision (area p<0.001) on PBS. The effects of independent risk factors for increased PBS, such as male sex and absence of vision, are thus additionally aggravated by vitamin D deficiency. Copyright © 2013 Elsevier B.V. All rights reserved.
Integrity Determination for Image Rendering Vision Navigation
2016-03-01
identifying an object within a scene, tracking a SIFT feature between frames or matching images and/or features for stereo vision applications. This... object level, either in 2-D or 3-D, versus individual features. There is a breadth of information, largely from the machine vision community...matching or image rendering image correspondence approach is based upon using either 2-D or 3-D object models or templates to perform object detection or
A dental vision system for accurate 3D tooth modeling.
Zhang, Li; Alemzadeh, K
2006-01-01
This paper describes an active-vision-based reverse engineering approach to extract three-dimensional (3D) geometric information from dental teeth and transfer this information into Computer-Aided Design/Computer-Aided Manufacture (CAD/CAM) systems, to improve the accuracy of 3D teeth models and at the same time improve the quality of the construction units to help patient care. The vision system involves the development of a dental vision rig, edge detection, boundary tracing, and fast and accurate 3D modeling from a sequence of sliced silhouettes of physical models. The rig is designed using engineering design methods such as a concept selection matrix and a weighted objectives evaluation chart. Reconstruction results and an accuracy evaluation are presented for the digitization of different teeth models.
Autonomous Aerial Refueling Ground Test Demonstration—A Sensor-in-the-Loop, Non-Tracking Method
Chen, Chao-I; Koseluk, Robert; Buchanan, Chase; Duerner, Andrew; Jeppesen, Brian; Laux, Hunter
2015-01-01
An essential capability for an unmanned aerial vehicle (UAV) to extend its airborne duration without increasing the size of the aircraft is autonomous aerial refueling (AAR). This paper proposes a sensor-in-the-loop, non-tracking method for probe-and-drogue style autonomous aerial refueling tasks by combining sensitivity adjustments of a 3D Flash LIDAR camera with computer-vision-based image-processing techniques. The method overcomes the inherent ambiguity of reconstructing 3D information from traditional 2D images by taking advantage of ready-to-use 3D point cloud data from the camera, followed by well-established computer vision techniques. These techniques include curve fitting algorithms and outlier removal with the random sample consensus (RANSAC) algorithm to reliably estimate the drogue center in 3D space, as well as to establish the relative position between the probe and the drogue. To demonstrate the feasibility of the proposed method on a real system, a ground navigation robot was designed and fabricated. Results presented in the paper show that, using images acquired from a 3D Flash LIDAR camera as real-time visual feedback, the ground robot is able to track a moving simulated drogue and continuously narrow the gap between the robot and the target autonomously. PMID:25970254
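The drogue-center estimate described above is essentially a robust circle fit. A minimal RANSAC sketch, assuming the LIDAR cloud has already been projected onto the drogue's best-fit plane (the tolerance and iteration count are illustrative, and this is not the paper's exact pipeline):

```python
import numpy as np

def ransac_circle_2d(pts, iters=500, tol=0.01, seed=1):
    """RANSAC circle fit to Nx2 points (e.g. the drogue rim after
    projection onto its best-fit plane). Returns the centre and radius
    of the circle with the largest consensus set."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, 0
    for _ in range(iters):
        a, b, c = pts[rng.choice(len(pts), 3, replace=False)]
        A = 2.0 * np.array([b - a, c - a])      # circle through 3 points
        if abs(np.linalg.det(A)) < 1e-9:        # degenerate (collinear) sample
            continue
        y = np.array([b @ b - a @ a, c @ c - a @ a])
        centre = np.linalg.solve(A, y)
        r = np.linalg.norm(a - centre)
        resid = np.abs(np.linalg.norm(pts - centre, axis=1) - r)
        inliers = int(np.sum(resid < tol))
        if inliers > best_inliers:
            best, best_inliers = (centre, r), inliers
    return best
```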
Comparative visual performance with monofocal and multifocal intraocular lenses
Gundersen, Kjell Gunnar; Potvin, Richard
2013-01-01
Background: To compare near, intermediate, and distance vision, and quality of vision using appropriate subjective questionnaires, when monofocal or apodized diffractive multifocal intraocular lenses (IOLs) are binocularly implanted. Methods: Patients with different binocular IOLs implanted were recruited after surgery and had their visual acuity tested, and quality of vision evaluated, at a single diagnostic visit between 3 and 8 months after second-eye surgery. Lenses tested included an aspheric monofocal and two apodized diffractive multifocal IOLs with slightly different design parameters. A total of 94 patients were evaluated. Results: Subjects with the ReSTOR® +2.5 D IOL had better near and intermediate vision than those subjects with a monofocal IOL. Intermediate vision was similar to, and near vision slightly lower than, that of subjects with a ReSTOR® +3.0 D IOL implanted. The preferred reading distance was slightly farther out for the +2.5 D relative to the +3.0 D lens, and farthest for the monofocal. Visual acuity at the preferred reading distance was equal with the two multifocal IOLs and significantly worse with the monofocal IOL. Quality of vision measures were highest with the monofocal IOL and similar between the two multifocal IOLs. Conclusion: The data indicate that the ReSTOR +2.5 D IOL provided good intermediate and functional near vision for patients who did not want to accept a higher potential for visual disturbances associated with the ReSTOR +3.0 D IOL, but wanted more near vision than a monofocal IOL generally provides. Quality of vision was not significantly different between the multifocal IOLs, but patient self-selection for each lens type may have been a factor. PMID:24143064
Vision System Measures Motions of Robot and External Objects
NASA Technical Reports Server (NTRS)
Talukder, Ashit; Matthies, Larry
2008-01-01
A prototype of an advanced robotic vision system both (1) measures its own motion with respect to a stationary background and (2) detects other moving objects and estimates their motions, all by use of visual cues. Like some prior robotic and other optoelectronic vision systems, this system is based partly on concepts of optical flow and visual odometry. Whereas prior optoelectronic visual-odometry systems have been limited to frame rates of no more than 1 Hz, a visual-odometry subsystem that is part of this system operates at a frame rate of 60 to 200 Hz, given optical-flow estimates. The overall system operates at an effective frame rate of 12 Hz. Moreover, unlike prior machine-vision systems for detecting motions of external objects, this system need not remain stationary: it can detect such motions while it is moving (even vibrating). The system includes a stereoscopic pair of cameras mounted on a moving robot. The outputs of the cameras are digitized, then processed to extract positions and velocities. The initial image-data-processing functions of this system are the same as those of some prior systems: Stereoscopy is used to compute three-dimensional (3D) positions for all pixels in the camera images. For each pixel of each image, optical flow between successive image frames is used to compute the two-dimensional (2D) apparent relative translational motion of the point transverse to the line of sight of the camera. The challenge in designing this system was to provide for utilization of the 3D information from stereoscopy in conjunction with the 2D information from optical flow to distinguish between motion of the camera pair and motions of external objects, compute the motion of the camera pair in all six degrees of translational and rotational freedom, and robustly estimate the motions of external objects, all in real time. To meet this challenge, the system is designed to perform the following image-data-processing functions: The visual-odometry subsystem (the subsystem that estimates the motion of the camera pair relative to the stationary background) utilizes the 3D information from stereoscopy and the 2D information from optical flow. It computes the relationship between the 3D and 2D motions and uses a least-mean-squares technique to estimate motion parameters. The least-mean-squares technique is suitable for real-time implementation when the number of external-moving-object pixels is smaller than the number of stationary-background pixels.
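The least-mean-squares step described above can be made concrete: for a static point at 3D position X, camera motion (v, w) induces the point velocity Xdot = -v + skew(X) @ w, and projecting through a pinhole makes the measured optical flow linear in the six motion parameters. A minimal sketch assuming normalized camera coordinates (this is an illustration of the technique, not the flight code):

```python
import numpy as np

def skew(p):
    """Cross-product matrix, so skew(p) @ q == np.cross(p, q)."""
    x, y, z = p
    return np.array([[0, -z, y], [z, 0, -x], [-y, x, 0]])

def egomotion_lsq(X, flow, f=1.0):
    """Least-squares camera translation v and rotation rate w from
    stereo-derived 3D points X (Nx3) and their measured image flow
    (Nx2), assuming a static scene: Xdot = -v + skew(X) @ w."""
    A, b = [], []
    for P, (u, vflow) in zip(X, flow):
        x, y, z = P
        Jx = np.array([f / z, 0, -f * x / z**2])  # d(image x)/d(Xdot)
        Jy = np.array([0, f / z, -f * y / z**2])  # d(image y)/d(Xdot)
        A.append(np.concatenate([-Jx, Jx @ skew(P)]))
        A.append(np.concatenate([-Jy, Jy @ skew(P)]))
        b.extend([u, vflow])
    sol, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return sol[:3], sol[3:]                       # v, w
```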
Laser-Camera Vision Sensing for Spacecraft Mobile Robot Navigation
NASA Technical Reports Server (NTRS)
Maluf, David A.; Khalil, Ahmad S.; Dorais, Gregory A.; Gawdiak, Yuri
2002-01-01
The advent of spacecraft mobile robots - free-flying sensor platforms and communications devices intended to accompany astronauts or operate remotely on space missions, both inside and outside of a spacecraft - has demanded the development of a simple and effective navigation schema. One such system under exploration involves the use of a laser-camera arrangement to predict the relative positioning of the mobile robot. By projecting laser beams from the robot, a 3D reference frame can be introduced. Thus, as the robot shifts in position, the position reference frame produced by the laser images is correspondingly altered. Using the normalization and camera registration techniques presented in this paper, the relative translation and rotation of the robot in 3D are determined from these reference frame transformations.
A position and attitude vision measurement system for wind tunnel slender model
NASA Astrophysics Data System (ADS)
Cheng, Lei; Yang, Yinong; Xue, Bindang; Zhou, Fugen; Bai, Xiangzhi
2014-11-01
A position and attitude vision measurement system for a drop-test slender model in a wind tunnel is designed and developed. The system uses two high-speed cameras: one placed to the side of the model and the other placed where it can look up at the model. Simple symbols are set on the model. The main idea of the system is to match projection images of the 3D digital model against the images captured by the cameras. First, the pitch angle, roll angle, and centroid position of the model are evaluated by recognizing the symbols in the images captured by the side camera. Then, based on the evaluated attitude and a series of candidate yaw angles, a series of projection images of the 3D digital model is generated. Finally, these projection images are matched against the image captured by the looking-up camera, and the yaw angle corresponding to the best-matching projection image is taken as the yaw angle of the model. Simulation experiments are conducted and the results show that the maximal error of attitude measurement is less than 0.05°, which can meet the demands of wind tunnel tests.
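The yaw search reduces to scoring each candidate projection against the looking-up camera image and keeping the best one. A minimal sketch using normalized cross-correlation as the similarity measure (the `render` callable, standing in for the 3D-model projection step at the already-estimated pitch/roll, is an assumption):

```python
import numpy as np

def estimate_yaw(render, observed, yaw_grid):
    """Pick the yaw whose rendered projection best matches the observed
    image, scored by normalized cross-correlation over the full frame."""
    def ncc(a, b):
        a = (a - a.mean()) / (a.std() + 1e-9)
        b = (b - b.mean()) / (b.std() + 1e-9)
        return (a * b).mean()
    scores = [ncc(render(yaw).astype(float), observed.astype(float))
              for yaw in yaw_grid]
    return yaw_grid[int(np.argmax(scores))]
```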
Impact of 2D and 3D vision on performance of novice subjects using da Vinci robotic system.
Blavier, A; Gaudissart, Q; Cadière, G B; Nyssen, A S
2006-01-01
The aim of this study was to evaluate the impact of 3D and 2D vision on the performance of novice subjects using the da Vinci robotic system. 224 nurses without any surgical experience were divided into two groups and executed a motor task with the robotic system, in 2D for one group and in 3D for the other group. Time to perform the task was recorded. Our data showed significantly better time performance in 3D view (24.67 +/- 11.2) than in 2D view (40.26 +/- 17.49, P < 0.001). Our findings emphasize the advantage of 3D over 2D vision in performing a surgical task, encouraging the development of efficient and less expensive 3D systems in order to improve the accuracy of surgical gestures, resident training, and operating time.
NASA Astrophysics Data System (ADS)
Qin, M.; Wan, X.; Shao, Y. Y.; Li, S. Y.
2018-04-01
Vision-based navigation has become an attractive solution for autonomous navigation in planetary exploration. This paper presents our work on designing and building an autonomous vision-based, GPS-denied unmanned vehicle and on developing an ARFM (Adaptive Robust Feature Matching) based VO (Visual Odometry) software package for its autonomous navigation. The hardware system is mainly composed of a binocular stereo camera, a pan-and-tilt unit, a master machine, and a tracked chassis. The ARFM-based VO software contains four modules: camera calibration, ARFM-based 3D reconstruction, position and attitude calculation, and BA (Bundle Adjustment). Two VO experiments were carried out using both outdoor images from an open dataset and indoor images captured by our vehicle; the results demonstrate that our vision-based unmanned vehicle is able to achieve autonomous localization and has potential for future planetary exploration.
Use of 3D vision for fine robot motion
NASA Technical Reports Server (NTRS)
Lokshin, Anatole; Litwin, Todd
1989-01-01
An integration of 3-D vision systems with robot manipulators will allow robots to operate in a poorly structured environment by visually locating targets and obstacles. However, using computer vision for object acquisition makes the problem of overall system calibration even more difficult. Indeed, in CAD-based manipulation a control architecture has to find an accurate mapping between the 3-D Euclidean work space and the robot configuration space (joint angles). If stereo vision is involved, then one needs to map a pair of 2-D video images directly into the robot configuration space. Neural network approaches aside, a common solution to this problem is to calibrate vision and manipulator independently, and then tie them via a common mapping into the task space. In other words, both vision and robot refer to some common absolute Euclidean coordinate frame via their individual mappings. This approach has two major difficulties. First, the vision system has to be calibrated over the total work space. Second, the absolute frame, which is usually quite arbitrary, has to be the same with a high degree of precision for both the robot and vision subsystem calibrations. The use of computer vision to allow robust fine-motion manipulation in a poorly structured world, work currently in progress, is described along with preliminary results and the problems encountered.
The viewpoint-specific failure of modern 3D displays in laparoscopic surgery.
Sakata, Shinichiro; Grove, Philip M; Hill, Andrew; Watson, Marcus O; Stevenson, Andrew R L
2016-11-01
Surgeons conventionally assume the optimal viewing position during 3D laparoscopic surgery and may not be aware of the potential hazards to team members positioned at different suboptimal viewing positions. The first aim of this study was to map the viewing positions within a standard operating theatre where individuals may experience visual ghosting (i.e., double images) from crosstalk. The second aim was to characterize the standard viewing positions adopted by instrument nurses and surgical assistants during laparoscopic pelvic surgery and report the associated levels of visual ghosting and discomfort. In experiment 1, 15 participants viewed a laparoscopic 3D display from 176 different viewing positions around the screen. In experiment 2, 12 participants (randomly assigned to four clinically relevant viewing positions) viewed laparoscopic suturing in a simulation laboratory. In both experiments, we measured the intensity of visual ghosting. In experiment 2, participants also completed the Simulator Sickness Questionnaire. We mapped locations within the dimensions of a standard operating theatre at which visual ghosting may result during 3D laparoscopy. Head height relative to the bottom of the image and large horizontal eccentricities away from the surface normal were important contributors to high levels of visual ghosting. Conventional viewing positions adopted by instrument nurses yielded high levels of visual ghosting and severe discomfort. The conventional viewing positions adopted by surgical team members during laparoscopic pelvic operations are suboptimal for viewing 3D laparoscopic displays, and even short periods of viewing can yield high levels of discomfort.
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, Jarvis J.; Bailey, Randall E.
2006-01-01
NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that will help to eliminate low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. The paper describes experimental evaluation of a multi-mode 3-D exocentric synthetic vision navigation display concept for commercial aircraft. Experimental results showed the situation awareness benefits of 2-D and 3-D exocentric synthetic vision displays over traditional 2-D co-planar navigation and vertical situation displays. Conclusions and future research directions are discussed.
Zhan, Dong; Yu, Long; Xiao, Jian; Chen, Tanglong
2015-04-14
Railway tunnel 3D clearance inspection is critical to guaranteeing railway operation safety. However, it is a challenge to inspect railway tunnel 3D clearance using a vision system, because both the spatial range and field of view (FOV) of such measurements are quite large. This paper summarizes our work on dynamic railway tunnel 3D clearance inspection based on a multi-camera and structured-light vision system (MSVS). First, the configuration of the MSVS is described. Then, the global calibration for the MSVS is discussed in detail. The onboard vision system is mounted on a dedicated vehicle and is expected to suffer from multiple degrees of freedom vibrations brought about by the running vehicle. Any small vibration can result in substantial measurement errors. In order to overcome this problem, a vehicle motion deviation rectifying method is investigated. Experiments using the vision inspection system are conducted with satisfactory online measurement results.
Brosed, Francisco Javier; Aguilar, Juan José; Guillomía, David; Santolaria, Jorge
2011-01-01
This article discusses different non-contact 3D measuring strategies and presents a model for measuring complex-geometry parts, manipulated by a robot arm, using a novel vision system consisting of a laser triangulation sensor and a motorized linear stage. First, the geometric model, incorporating an automatic simple module for long-term stability improvement, is outlined. The new method used in the automatic module allows the sensor set-up, including the motorized linear stage, to be prepared for scanning without external measurement devices. In the measurement model the robot is just a positioning device for parts, relied on only for its high repeatability. Its position and orientation data are not used for the measurement and therefore it is not directly “coupled” as an active component in the model. The function of the robot is to present the various surfaces of the workpiece within the measurement range of the vision system, which is responsible for the measurement. Thus, the whole system is not affected by the robot's own trajectory-following errors, except those due to a lack of static repeatability. For the indirect link between the vision system and the robot, the original model developed needs only the measurement of one first piece as a “zero” or master piece, known accurately from, for example, a Coordinate Measuring Machine. The proposed strategy presents a different approach from traditional laser triangulation systems on board the robot in order to improve measurement accuracy, and several important cues for self-recalibration are explored using only the master piece. Experimental results are also presented to demonstrate the technique and the final 3D measurement accuracy. PMID:22346569
Can the Farnsworth D15 Color Vision Test Be Defeated through Practice?
Ng, Jason S; Liem, Sophia C
2018-05-01
This study suggests that it is possible for some patients with severe red-green color vision deficiency to score perfectly on the Farnsworth D15 test after practicing it. The Farnsworth D15 is a commonly used test to qualify people for certain occupations. For patients with color vision deficiency, there may be high motivation to try to pass the test through practice to gain entry into a particular occupation. There is no evidence in the literature on whether it is possible for patients to learn to pass the D15 test through practice. Ten subjects with inherited red-green color vision deficiency and 15 color-normal subjects enrolled in the study. All subjects had anomaloscope testing, color vision book tests, and a Farnsworth D15 at an initial visit. For the D15, the number of major crossovers was determined for each subject. Failure on the D15 was defined as more than one major crossover. Subjects with color vision deficiency practiced the D15 as long as desired to achieve a perfect score and then returned for a second visit for D15 testing. A paired t test was used to analyze the number of major crossovers at visit 1 versus visit 2. Color-normal subjects did not have any major crossovers. Subjects with color vision deficiency had significantly (P < .001) fewer major crossovers on the D15 test at visit 2 (mean/SD = 2.5/3.0), including five subjects with dichromacy who achieved perfect D15 performance, compared with visit 1 (mean/SD = 8.7/1.3). Practice of the Farnsworth D15 test can lead to perfect performance for some patients with color vision deficiency, and this should be considered in certain cases where occupational entry depends on D15 testing.
Recent results in visual servoing
NASA Astrophysics Data System (ADS)
Chaumette, François
2008-06-01
Visual servoing techniques consist in using the data provided by a vision sensor in order to control the motions of a dynamic system. Such systems are usually robot arms, mobile robots, aerial robots,… but can also be virtual robots for applications in computer animation, or even a virtual camera for applications in computer vision and augmented reality. A large variety of positioning or mobile target tracking tasks can be implemented by controlling from one to all of the degrees of freedom of the system. Whatever the sensor configuration, which can vary from one on-board camera on the robot end-effector to several free-standing cameras, a set of visual features has to be selected at best from the available image measurements, allowing the desired degrees of freedom to be controlled. A control law also has to be designed so that these visual features reach a desired value, defining a correct realization of the task. With a vision sensor providing 2D measurements, potential visual features are numerous, since both 2D data (coordinates of feature points in the image, moments, …) and 3D data provided by a localization algorithm exploiting the extracted 2D measurements can be considered. It is also possible to combine 2D and 3D visual features to take the advantages of each approach while avoiding their respective drawbacks. Depending on the selected visual features, the behavior of the system will have particular properties with respect to stability, robustness to noise or to calibration errors, the robot's 3D trajectory, etc. The talk will present the main basic aspects of visual servoing, as well as technical advances obtained recently in the field within the Lagadic group at INRIA/IRISA Rennes. Several application results will also be described.
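The standard control law alluded to here drives the feature error e = s - s* to zero with an exponential decay by commanding the camera twist v = -λ L⁺ e, where L is the interaction matrix of the chosen features. A textbook sketch for point features (the gain and depth values would come from the application; this is the classic formulation, not any specific system's code):

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Interaction matrix of a normalized image point (x, y) at depth Z,
    relating the point's image velocity to the 6-DOF camera twist."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x * x), y],
        [0, -1 / Z, y / Z, 1 + y * y, -x * y, -x]])

def ibvs_velocity(s, s_star, L, lam=0.5):
    """Classic IBVS law: command the camera twist that drives the
    feature error e = s - s* to zero exponentially, v = -lam * L^+ e."""
    return -lam * np.linalg.pinv(L) @ (s - s_star)
```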
Feature detection on 3D images of dental imprints
NASA Astrophysics Data System (ADS)
Mokhtari, Marielle; Laurendeau, Denis
1994-09-01
A computer vision approach for the extraction of feature points on 3D images of dental imprints is presented. The positions of feature points are needed for the measurement of a set of parameters for automatic diagnosis of malocclusion problems in orthodontics. The system for the acquisition of the 3D profile of the imprint, the procedure for the detection of the interstices between teeth, and the approach for the identification of the type of tooth are described, as well as the algorithm for the reconstruction of the surface of each type of tooth. A new approach for the detection of feature points, called the watershed algorithm, is described in detail. The algorithm is a two-stage procedure which tracks the positions of local minima at four different scales and produces a final map of the positions of the minima. Experimental results of the application of the watershed algorithm to actual 3D images of dental imprints are presented for molars, premolars and canines. The segmentation approach for the analysis of the shape of incisors is also described in detail.
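The multi-scale minima tracking can be pictured as follows: smooth the depth map at several scales, find local minima at each, and keep only the minima that persist across all scales. A rough sketch of that idea (the depth-map representation, the parameters, and this simplified persistence test are assumptions, not the paper's exact watershed procedure):

```python
import numpy as np
from scipy import ndimage

def persistent_minima(depth, scales=(1, 2, 4, 8)):
    """Detect local minima of a 2D depth map at four Gaussian scales
    and keep positions that (approximately) reappear at every scale."""
    maps = []
    for s in scales:
        sm = ndimage.gaussian_filter(depth, s)
        mins = sm == ndimage.minimum_filter(sm, size=5)
        # dilate so minima that drift slightly across scales still overlap
        maps.append(ndimage.binary_dilation(mins, iterations=2))
    final = maps[0]
    for m in maps[1:]:
        final &= m
    return np.argwhere(final)            # (row, col) of persistent minima
```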
NASA Astrophysics Data System (ADS)
Jones, Christopher W.; O’Connor, Daniel
2018-07-01
Dimensional surface metrology is required to enable advanced manufacturing process control for products such as large-area electronics, microfluidic structures, and light management films, where performance is determined by micrometre-scale geometry or roughness formed over metre-scale substrates. While able to perform 100% inspection at a low cost, commonly used 2D machine vision systems are insufficient to assess all of the functionally relevant critical dimensions in such 3D products on their own. While current high-resolution 3D metrology systems are able to assess these critical dimensions, they have a relatively small field of view and are thus much too slow to keep up with full production speeds. A hybrid 2D/3D inspection concept is demonstrated, combining a small field of view, high-performance 3D topography-measuring instrument with a large field of view, high-throughput 2D machine vision system. In this concept, the location of critical dimensions and defects are first registered using the 2D system, then smart routing algorithms and high dynamic range (HDR) measurement strategies are used to efficiently acquire local topography using the 3D sensor. A motion control platform with a traceable position referencing system is used to recreate various sheet-to-sheet and roll-to-roll inline metrology scenarios. We present the artefacts and procedures used to calibrate this hybrid sensor system for traceable dimensional measurement, as well as exemplar measurement of optically challenging industrial test structures.
An Imaging Sensor-Aided Vision Navigation Approach that Uses a Geo-Referenced Image Database.
Li, Yan; Hu, Qingwu; Wu, Meng; Gao, Yang
2016-01-28
Vision navigation determines position and attitude via real-time image processing of data collected from imaging sensors, and can proceed without a high-performance global positioning system (GPS) or an inertial measurement unit (IMU). Vision navigation is widely used in indoor navigation, far-space navigation, and multiple-sensor-integrated mobile mapping. This paper proposes a novel vision navigation approach aided by imaging sensors that uses a high-accuracy geo-referenced image database (GRID) for high-precision navigation of multiple sensor platforms in environments with poor GPS. First, the framework of GRID-aided vision navigation is developed with sequence images from land-based mobile mapping systems that integrate multiple sensors. Second, a highly efficient GRID storage management model is established based on the linear index of a road segment for fast image search and retrieval. Third, a robust image matching algorithm is presented to search for and match a real-time image against the GRID. Subsequently, the image matched with the real-time scene is used to calculate the 3D navigation parameters of the multiple sensor platforms. Experimental results show that the proposed approach retrieves images efficiently and achieves navigation accuracies of 1.2 m in plane and 1.8 m in height during GPS outages of up to 5 min and within 1500 m.
Recent advances in the development and transfer of machine vision technologies for space
NASA Technical Reports Server (NTRS)
Defigueiredo, Rui J. P.; Pendleton, Thomas
1991-01-01
Recent work concerned with real-time machine vision is briefly reviewed. This work includes methodologies and techniques for optimal illumination, shape-from-shading of general (non-Lambertian) 3D surfaces, laser vision devices and technology, high level vision, sensor fusion, real-time computing, artificial neural network design and use, and motion estimation. Two new methods that are currently being developed for object recognition in clutter and for 3D attitude tracking based on line correspondence are discussed.
Fractal tomography and its application in 3D vision
NASA Astrophysics Data System (ADS)
Trubochkina, N.
2018-01-01
A three-dimensional artistic fractal tomography method that implements glasses-free 3D visualization of fractal worlds in layered media is proposed. It is designed for the glasses-free 3D viewing of digital art objects and films containing fractal content. Prospects for the development of this method in art galleries and the film industry are considered.
Application of Stereo Vision to the Reconnection Scaling Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klarenbeek, Johnny; Sears, Jason A.; Gao, Kevin W.
The measurement and simulation of the three-dimensional structure of magnetic reconnection in astrophysical and lab plasmas is a challenging problem. At Los Alamos National Laboratory we use the Reconnection Scaling Experiment (RSX) to model 3D magnetohydrodynamic (MHD) relaxation of plasma-filled tubes. These magnetic flux tubes are called flux ropes. In RSX, the 3D structure of the flux ropes is explored with insertable probes. Stereo triangulation can be used to compute the 3D position of a probe from point correspondences in images from two calibrated cameras. While common applications of stereo triangulation include 3D scene reconstruction and robotics navigation, we will investigate the novel application of stereo triangulation in plasma physics to aid reconstruction of 3D data for RSX plasmas. Several challenges will be explored and addressed, such as minimizing 3D reconstruction errors in stereo camera systems and dealing with point correspondence problems.
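Given the two calibrated cameras, the probe position follows from standard linear (DLT) triangulation of the corresponding image points; a minimal sketch of that computation (a generic method, not necessarily the RSX implementation):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) stereo triangulation: recover a 3D point from its
    pixel coordinates x1, x2 in two calibrated cameras with 3x4
    projection matrices P1, P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)     # null vector of A is the solution
    X = Vt[-1]
    return X[:3] / X[3]             # dehomogenize
```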
Integration of a 3D perspective view in the navigation display: featuring pilot's mental model
NASA Astrophysics Data System (ADS)
Ebrecht, L.; Schmerwitz, S.
2015-05-01
Synthetic vision systems (SVS) are a spreading technology in the avionics domain. Several studies prove enhanced situational awareness when using synthetic vision. Since the introduction of synthetic vision, a steady evolution of the primary flight display (PFD) and the navigation display (ND) has been under way. The main improvements of the ND comprise the representation of colored enhanced ground proximity warning system (EGPWS), weather radar, and TCAS information. Synthetic vision seems to offer high potential to further enhance cockpit display systems. In particular, given the current trend of having a 3D perspective view in an SVS-PFD while leaving the navigational content as well as the methods of interaction unchanged, the question arises whether and how the gap between the two displays might grow into a serious problem. This issue becomes important in relation to the transition and combination of strategic and tactical flight guidance. Hence, the pros and cons of 2D and 3D views in general, as well as the gap between the egocentric perspective 3D view of the PFD and the exocentric 2D top and side views of the ND, will be discussed. Further, a concept for the integration of a 3D perspective view, i.e., a bird's-eye view, into the synthetic vision ND will be presented. The combination of 2D and 3D views in the ND enables a better correlation of the ND and the PFD. Additionally, this supports the building of the pilot's mental model. The authors believe it will improve situational and spatial awareness. It might prove to further raise the safety margin when operating in mountainous areas.
Panoramic stereo sphere vision
NASA Astrophysics Data System (ADS)
Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian
2013-01-01
Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panoramic vision is able to "see" in all directions of the observation space, scene depth information is missed because of the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system built from a special combined fish-eye lens module, capable of producing 3D coordinate information for the whole observation space while simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single vision device and a single static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose the geometric model, the mathematical model, and the parameter calibration method in this paper. Specifically, video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple maneuvering target tracking, automatic mapping of environments, and attitude estimation are some of the applications that will benefit from PSSV.
Photogrammetric 3d Building Reconstruction from Thermal Images
NASA Astrophysics Data System (ADS)
Maset, E.; Fusiello, A.; Crosilla, F.; Toldo, R.; Zorzetto, D.
2017-08-01
This paper addresses the problem of 3D building reconstruction from thermal infrared (TIR) images. We show that commercial computer vision software can be used to automatically orient sequences of TIR images taken from an Unmanned Aerial Vehicle (UAV) and to generate 3D point clouds, without requiring any GNSS/INS data about the position and attitude of the images or camera calibration parameters. Moreover, we propose a procedure based on the Iterative Closest Point (ICP) algorithm to create a model that combines the high resolution and geometric accuracy of RGB images with the thermal information deriving from TIR images. The process can be carried out entirely by the aforesaid software in a simple and efficient way.
3-D Vision Techniques for Autonomous Vehicles
1988-08-01
Hebert, Martial; Kanade, Takeo; Kweon, Inso. CMU-RI-TR-88-12, The Robotics Institute, Carnegie Mellon University, Pittsburgh.
[Evaluation of Motion Sickness Induced by 3D Video Clips].
Matsuura, Yasuyuki; Takada, Hiroki
2016-01-01
The use of stereoscopic images has been spreading rapidly. Nowadays, stereoscopic movies are nothing new to people. The study of stereoscopic vision dates back to around 280 B.C., when Euclid first recognized the concept of depth perception by humans. Despite the increase in the production of three-dimensional (3D) display products and many studies on stereoscopic vision, the effect of stereoscopic vision on the human body is insufficiently understood. Symptoms such as eye fatigue and 3D sickness are concerns when viewing 3D films for a prolonged period of time; therefore, it is important to consider the safety of viewing virtual 3D content as a contribution to society. It is generally explained to the public that accommodation and convergence are mismatched during stereoscopic vision and that this is the main reason for visual fatigue and visually induced motion sickness (VIMS) during 3D viewing. We have devised a method to measure lens accommodation and convergence simultaneously. We used this simultaneous measurement device to characterize 3D vision. Fixation distance was compared between accommodation and convergence during the viewing of 3D films with repeated measurements. The time courses of these fixation distances and their distributions were compared in subjects who viewed 2D and 3D video clips. The results indicated that after 90 s of continuously viewing 3D images, the accommodative power does not correspond to the distance of convergence. In this paper, remarks on methods to measure the severity of motion sickness induced by viewing 3D films are also given. From an epidemiological viewpoint, it is useful to obtain novel knowledge for the reduction and/or prevention of VIMS. We should accumulate empirical data on motion sickness, which may contribute to the development of relevant fields in science and technology.
Simulated and Real Sheet-of-Light 3D Object Scanning Using a-Si:H Thin Film PSD Arrays.
Contreras, Javier; Tornero, Josep; Ferreira, Isabel; Martins, Rodrigo; Gomes, Luis; Fortunato, Elvira
2015-11-30
A MATLAB/SIMULINK software simulation model (structure and component blocks) has been constructed in order to view and analyze the potential of the PSD (Position Sensitive Detector) array concept technology before it is further expanded or developed. This simulation allows most of its parameters to be changed, such as the number of elements in the PSD array, the direction of vision, the viewing/scanning angle, the object rotation, translation, sample/scan/simulation time, etc. In addition, the results show for the first time the possibility of scanning an object in 3D using a 128-element a-Si:H thin-film PSD array sensor and a hardware/software system. Moreover, this sensor technology is able to perform these scans and render 3D objects at high speed and high resolution when using a sheet-of-light laser within a triangulation platform. As shown by the simulation, a substantial enhancement in 3D object profile image quality and realism can be achieved by increasing the number of elements of the PSD array sensor, as well as by achieving an optimal position response from the sensor, since the definition of the 3D object profile clearly depends on the correct and accurate position response of each detector as well as on the size of the PSD array.
3-D rigid body tracking using vision and depth sensors.
Gedik, O. Serdar; Alatan, A. Aydın
2013-10-01
In robotics and augmented reality applications, model-based 3-D tracking of rigid objects is generally required, and accurate pose estimates are needed to increase reliability and decrease jitter overall. Among the many pose estimation solutions in the literature, pure vision-based 3-D trackers require either manual initialization or offline training stages, whereas trackers relying on pure depth sensors are not suitable for AR applications. An automated 3-D tracking algorithm based on the fusion of vision and depth sensors via an extended Kalman filter is proposed in this paper. A novel measurement-tracking scheme, based on estimating optical flow using the intensity and shape index map data of the 3-D point cloud, increases 2-D as well as 3-D tracking performance significantly. The proposed method requires neither manual initialization of the pose nor offline training, while enabling highly accurate 3-D tracking. The accuracy of the proposed method is tested against a number of conventional techniques, and superior performance is clearly observed, both objectively via error metrics and subjectively on the rendered scenes.
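At its core, the sensor fusion loop is a standard EKF predict/update cycle over the object pose, with the vision and depth measurements entering through the measurement model. A generic sketch of one cycle (the model callables and Jacobians are placeholders, not the paper's parametrization):

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One extended-Kalman-filter predict/update cycle. f/h are the
    process and measurement models, F/H their Jacobians at the current
    estimate, Q/R the process and measurement noise covariances."""
    # predict
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # update with the fused vision + depth measurement z
    y = z - h(x_pred)                       # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```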
A 3D terrain reconstruction method of stereo vision based quadruped robot navigation system
NASA Astrophysics Data System (ADS)
Ge, Zhuo; Zhu, Ying; Liang, Guanhao
2017-01-01
To provide 3D environment information for a quadruped robot's autonomous navigation system while walking through rough terrain, a novel stereo-vision-based 3D terrain reconstruction method is presented. In order to address the problems that images collected by stereo sensors have large regions of similar grayscale and that image matching has poor real-time performance, the watershed algorithm and the fuzzy c-means clustering algorithm are combined for contour extraction. To address mismatching, dual constraints combining region matching and pixel matching are established for matching optimization. From the stereo-matched edge pixel pairs, the 3D coordinates are estimated according to the binocular stereo vision imaging model. Experimental results show that the proposed method yields a high stereo matching ratio and reconstructs the 3D scene quickly and efficiently.
X-ray and optical stereo-based 3D sensor fusion system for image-guided neurosurgery.
Kim, Duk Nyeon; Chae, You Seong; Kim, Min Young
2016-04-01
In neurosurgery, an image-guided operation is performed to confirm that the surgical instruments reach the exact lesion position. Among the multiple imaging modalities, an X-ray fluoroscope mounted on a C- or O-arm is widely used for monitoring the position of surgical instruments and the target position of the patient. However, frequent fluoroscopy can result in relatively high radiation doses, particularly for complex interventional procedures. The proposed system can reduce radiation exposure and provide accurate three-dimensional (3D) position information for the surgical instruments and the target position. X-ray and optical stereo vision systems have been proposed for the C- or O-arm. The two subsystems share the same optical axis and are calibrated simultaneously. This enables easy augmentation of the camera image with the X-ray image. Further, the 3D measurements of both systems can be defined in a common coordinate space. The proposed dual stereoscopic imaging system is designed and implemented for mounting on an O-arm. The calibration error of the 3D coordinates of the optical stereo and X-ray stereo is within 0.1 mm in terms of the mean and the standard deviation. Further, image augmentation with the camera image and the X-ray image using an artificial skull phantom is achieved. As the developed dual stereoscopic imaging system provides 3D coordinates of the point of interest in both optical images and fluoroscopic images, it can be used by surgeons to confirm the position of surgical instruments in 3D space with minimal radiation exposure and to verify whether the instruments have reached the surgical target observed in fluoroscopic images.
NASA Astrophysics Data System (ADS)
Rahman, Hameedur; Arshad, Haslina; Mahmud, Rozi; Mahayuddin, Zainal Rasyid
2017-10-01
The number of breast cancer patients requiring breast biopsy has increased over the past years. Augmented Reality guided core biopsy of the breast has become the method of choice for researchers. However, this cancer visualization has been limited to superimposing the 3D imaging data only. In this paper, we introduce an Augmented Reality visualization framework that enables breast cancer biopsy image guidance by using an X-ray vision technique on a mobile display. The framework consists of four phases: it initially acquires images from CT/MRI and processes the medical images into 3D slices; secondly, it refines these 3D grayscale slices into a 3D breast tumor model using a 3D model reconstruction technique. In the visualization processing, this virtual 3D breast tumor model is enhanced using the X-ray vision technique to see through the skin of the phantom, and the final composition is displayed on a handheld device to optimize the accuracy of the visualization in six degrees of freedom. The framework is perceived as an improved visualization experience because the Augmented Reality X-ray vision allows direct understanding of the breast tumor beyond the visible surface and direct guidance towards accurate biopsy targets.
Evaluation of vision training using 3D play game
NASA Astrophysics Data System (ADS)
Kim, Jung-Ho; Kwon, Soon-Chul; Son, Kwang-Chul; Lee, Seung-Hyun
2015-03-01
The present study aimed to examine the effect of vision training, as a benefit of watching 3D video images (a 3D video shooting game in this study), focusing on accommodative facility and vergence facility. Both facilities, which are scales used to measure human visual performance, are very important factors for leading a comfortable and easy life. The study was conducted on 30 participants in their 20s and 30s (19 males and 11 females, 24.53 ± 2.94 years) who could watch 3D video images and play the 3D game. Their accommodative and vergence facilities were measured before and after they played the 2D and 3D games. Accommodative facility improved after playing both the 2D and 3D games, and improved more immediately after the 3D game than the 2D game. Likewise, vergence facility improved after both games, and more so immediately after the 3D game than the 2D game. In addition, accommodative facility improved to a greater extent than vergence facility. While studies have so far focused, from the human factors perspective, on the adverse effects of 3D content on the imbalance between visual accommodation and convergence, the present study is expected to broaden the applicable scope of 3D content by utilizing the visual benefit of 3D content for vision training.
Visual discomfort while watching stereoscopic three-dimensional movies at the cinema.
Zeri, Fabrizio; Livi, Stefano
2015-05-01
This study investigates discomfort symptoms while watching Stereoscopic three-dimensional (S3D) movies in the 'real' condition of a cinema. In particular, it had two main objectives: to evaluate the presence and nature of visual discomfort while watching S3D movies, and to compare visual symptoms during S3D and 2D viewing. Cinema spectators of S3D or 2D films were interviewed by questionnaire at the theatre exit of different multiplex cinemas immediately after viewing a movie. A total of 854 subjects were interviewed (mean age 23.7 ± 10.9 years; range 8-81 years; 392 females and 462 males). Five hundred and ninety-nine of them viewed different S3D movies, and 255 subjects viewed a 2D version of a film seen in S3D by 251 subjects from the S3D group for a between-subjects design for that comparison. Exploratory factor analysis revealed two factors underlying symptoms: External Symptoms Factors (ESF) with a mean ± S.D. symptom score of 1.51 ± 0.58 comprised of eye burning, eye ache, eye strain, eye irritation and tearing; and Internal Symptoms Factors (ISF) with a mean ± S.D. symptom score of 1.38 ± 0.51 comprised of blur, double vision, headache, dizziness and nausea. ISF and ESF were significantly correlated (Spearman r = 0.55; p = 0.001) but with external symptoms significantly higher than internal ones (Wilcoxon Signed-ranks test; p = 0.001). The age of participants did not significantly affect symptoms. However, females had higher scores than males for both ESF and ISF, and myopes had higher ISF scores than hyperopes. Newly released movies provided lower ESF scores than older movies, while the seat position of spectators had minimal effect. Symptoms while viewing S3D movies were significantly and negatively correlated to the duration of wearing S3D glasses. Kruskal-Wallis results showed that symptoms were significantly greater for S3D compared to those of 2D movies, both for ISF (p = 0.001) and for ESF (p = 0.001). In short, the analysis of the symptoms experienced by S3D movie spectators based on retrospective visual comfort assessments, showed a higher level of external symptoms (eye burning, eye ache, tearing, etc.) when compared to the internal ones that are typically more perceptual (blurred vision, double vision, headache, etc.). Furthermore, spectators of S3D movies reported statistically higher symptoms when compared to 2D spectators. © 2015 The Authors Ophthalmic & Physiological Optics © 2015 The College of Optometrists.
Power profiles of single vision and multifocal soft contact lenses.
Wagner, Sandra; Conrad, Fabian; Bakaraju, Ravi C; Fedtke, Cathleen; Ehrmann, Klaus; Holden, Brien A
2015-02-01
The purpose of this study was to investigate the optical zone power profiles of the most commonly prescribed soft contact lenses to assess their potential impact on peripheral refractive error and hence myopia progression. The optical power profiles of six single vision and ten multifocal contact lenses from five manufacturers, in the powers -1.00 D, -3.00 D, and -6.00 D, were measured using the SHSOphthalmic (Optocraft GmbH, Erlangen, Germany). Instrument repeatability was also investigated and found to depend on the distance from the optical centre, yielding unreliable data within the central 1 mm of the optic zone. Single vision contact lens measurements of -6.00 D lenses revealed omafilcon A having the most negative spherical aberration and lotrafilcon A the least. Somofilcon A had the highest minus power and lotrafilcon A the largest deviation in the positive direction, relative to their respective labelled powers. Negative spherical aberration occurred for almost all of the multifocal contact lenses, including the centre-distance designs etafilcon A bifocal and omafilcon A multifocal. Lotrafilcon B and balafilcon A seem to rely predominantly on the spherical aberration component to provide multifocality. Power profiles of single vision soft contact lenses varied greatly, many having a negative spherical aberration profile that would exacerbate myopia. Some lens types and powers are affected by large intra-batch variability or power offsets of more than 0.25 dioptres. The evaluation of multifocal lens power profiles provides helpful information for prescribing lenses for presbyopes and progressing myopes. Copyright © 2014 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.
Detailed 3D representations for object recognition and modeling.
Zia, M Zeeshan; Stark, Michael; Schiele, Bernt; Schindler, Konrad
2013-11-01
Geometric 3D reasoning at the level of objects has received renewed attention recently in the context of visual scene understanding. The level of geometric detail, however, is typically limited to qualitative representations or coarse boxes. This is linked to the fact that today's object class detectors are tuned toward robust 2D matching rather than accurate 3D geometry, encouraged by bounding-box-based benchmarks such as Pascal VOC. In this paper, we revisit ideas from the early days of computer vision, namely, detailed, 3D geometric object class representations for recognition. These representations can recover geometrically far more accurate object hypotheses than just bounding boxes, including continuous estimates of object pose and 3D wireframes with relative 3D positions of object parts. In combination with robust techniques for shape description and inference, we outperform state-of-the-art results in monocular 3D pose estimation. In a series of experiments, we analyze our approach in detail and demonstrate novel applications enabled by such an object class representation, such as fine-grained categorization of cars and bicycles, according to their 3D geometry, and ultrawide baseline matching.
Initial Efforts to Coordinate Appreciative Inquiry: Facilitators' Experiences and Perceptions
ERIC Educational Resources Information Center
Breslow, Ken; Crowell, Lyn; Francis, Lee; Gordon, Stephen P.
2015-01-01
Appreciative inquiry (AI) is an alternative approach to action research that moves participants beyond problem solving and builds on existing strengths as the participants co-construct a positive vision of the future and move toward that vision through collaborative inquiry. Ph.D. students enrolled in a doctoral seminar on AI (who also are…
Patino, Cecilia M.; Varma, Rohit; Azen, Stanley P.; Conti, David V.; Nichol, Michael B.; McKean-Cowdin, Roberta
2010-01-01
Purpose: To assess the impact of change in visual field (VF) on change in health-related quality of life (HRQoL) at the population level. Design: Prospective cohort study. Participants: 3,175 Los Angeles Latino Eye Study (LALES) participants. Methods: Objective measures of VF and visual acuity and self-reported HRQoL were collected at baseline and 4-year follow-up. Analysis of covariance was used to evaluate mean differences in change of HRQoL across severity levels of change in VF and to test for effect modification by covariates. Main outcome measures: General and vision-specific HRQoL. Results: Of 3,175 participants, 1,430 (46%) showed a change in VF (≥1 decibel [dB]) and 1,715 (54%) reported a clinically important change (≥5 points) in vision-specific HRQoL. Progressive worsening and improvement in the VF were associated with increasing losses and gains in vision-specific HRQoL for the composite score and 10 of its 11 subscales (all Ptrends<0.05). Losses in VF > 5 dB and gains > 3 dB were associated with clinically meaningful losses and gains in vision-specific HRQoL, respectively. Areas of vision-specific HRQoL most affected by greater losses in VF were driving, dependency, role-functioning, and mental health. The effect of change in VF (loss or gain) on mean change in vision-specific HRQoL varied by level of baseline vision loss (in visual field and/or visual acuity) and by change in visual acuity (all P-interactions<0.05). Those with moderate/severe VF loss at baseline and with a > 5 dB loss in visual field during the study period had a mean loss of vision-specific HRQoL of 11.3 points, while those with no VF loss at baseline had a mean loss of 0.97 points. Similarly, with a > 5 dB loss in VF and baseline visual acuity impairment (mild/severe) there was a loss in vision-specific HRQoL of 10.5 points, whereas with no visual acuity impairment at baseline there was a loss of vision-specific HRQoL of 3.7 points. Conclusion: Both losses and gains in VF produce clinically meaningful changes in vision-specific HRQoL. In the presence of pre-existing vision loss (VF and visual acuity), similar levels of visual field change produce greater losses in quality of life. PMID:21458074
Rapid matching of stereo vision based on fringe projection profilometry
NASA Astrophysics Data System (ADS)
Zhang, Ruihua; Xiao, Yi; Cao, Jian; Guo, Hongwei
2016-09-01
Stereo matching is the most important core part of stereo vision, and many of its problems remain unsolved. For smooth surfaces, on which feature points are not easy to extract, this paper adds a projector to the stereo vision measurement system and applies fringe projection techniques: because corresponding points extracted from the left and right camera images share the same phase, rapid stereo matching can be achieved. The mathematical model of the measurement system is established and the three-dimensional (3D) surface of the measured object is reconstructed. This measurement method not only broadens the application fields of optical 3D measurement technology and enriches knowledge in the field, but also opens the possibility of commercialized measurement systems for practical projects, giving it significant scientific and economic value.
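The matching rule described above can be sketched in a few lines: for a rectified pair with unwrapped phase maps, a left pixel's correspondence is the right pixel on the same row with the closest phase. The synthetic phase maps below are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Minimal sketch of phase-based stereo matching for fringe projection,
# assuming rectified images and already-unwrapped phase maps: for each
# left-image pixel, the match on the same row of the right image is the
# pixel with (nearly) equal absolute phase.
H, W = 4, 640
x = np.linspace(0, 20 * np.pi, W)
phase_L = np.tile(x, (H, 1))
phase_R = np.tile(x + 0.35, (H, 1))   # horizontal shift encodes disparity

def match_row(phiL_row, phiR_row, uL):
    """Return the right-image column whose phase is closest to phiL_row[uL]."""
    return int(np.argmin(np.abs(phiR_row - phiL_row[uL])))

uL = 300
uR = match_row(phase_L[0], phase_R[0], uL)
disparity = uL - uR
print(uR, disparity)
```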
Quantum vision in three dimensions
NASA Astrophysics Data System (ADS)
Roth, Yehuda
We present four models for describing 3-D vision. Similar to the mirror scenario, our models allow 3-D vision with no need for additional accessories such as stereoscopic glasses or a hologram film. These four models are based on brain interpretation rather than purely objective encryption. We consider the observer's "subjective" selection of a measuring device, and the corresponding quantum collapse into one of the selected states, as a tool for interpreting reality in accordance with the observer's concepts. This is the basic concept of our study, and it is introduced in the first model. The other models suggest "softened" versions that might be much easier to implement. Our quantum interpretation approach contributes to the following fields. Technology: the proposed models can be implemented in real devices, allowing 3-D vision without additional accessories. Artificial intelligence: in the effort to create a machine that exchanges information using human terminologies, our interpretation approach seems appropriate.
Enhanced operator perception through 3D vision and haptic feedback
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Light, Kenneth; Bodenhamer, Andrew; Bosscher, Paul; Wilkinson, Loren
2012-06-01
Polaris Sensor Technologies (PST) has developed a stereo vision upgrade kit for TALON® robot systems comprised of a replacement gripper camera and a replacement mast zoom camera on the robot, and a replacement display in the Operator Control Unit (OCU). Harris Corporation has developed a haptic manipulation upgrade for TALON® robot systems comprised of a replacement arm and gripper and an OCU that provides haptic (force) feedback. PST and Harris have recently collaborated to integrate the 3D vision system with the haptic manipulation system. In multiple studies done at Fort Leonard Wood, Missouri, it has been shown that 3D vision and haptics provide more intuitive perception of complicated scenery and improved robot arm control, allowing for improved mission performance and the potential for reduced time on target. This paper discusses the potential benefits of these enhancements to robotic systems used for the domestic homeland security mission.
Oregon Elks Children's Eye Clinic vision screening results for astigmatism.
Vaughan, Joannah; Dale, Talitha; Herrera, Daniel; Karr, Daniel
2018-04-19
In the Elks Preschool Vision Screening program, which uses the plusoptiX S12 to screen children 36-60 months of age, the most common reason for over-referral using the 1.50 D referral criterion was found to be astigmatism. The goal of this study was to compare the accuracy of a 2.25 D referral criterion for astigmatism to the 1.50 D referral criterion using screening data from 2013-2014. Vision screenings were conducted on Head Start children 36-72 months of age by Head Start teachers and Elks Preschool Vision Screening staff using the plusoptiX S12. Data on 4,194 vision screenings in 2014 and 4,077 in 2013 were analyzed. Area under the curve (AUC) and receiver operating characteristic (ROC) curve analyses were performed to determine the optimal referral criteria. A t test and scatterplot analysis were performed to compare how many children required treatment under the different criteria. The medical records of 136 (2.25 D) and 117 children (1.50 D) who were referred by the plusoptiX screening for potential astigmatism and received dilated eye examinations from their local eye doctors were reviewed retrospectively. Mean subject age was 4 years. Treatment for astigmatism was prescribed to 116 of 136 children using the 2.25 D setting, compared to 60 of 117 using the 1.50 D setting. In 2013 the program used the 1.50 D setting for astigmatism. After changing the astigmatism setting to 2.25 D, 85% of referrals required treatment, reducing false positives by 34%. Copyright © 2018. Published by Elsevier Inc.
3-D Signal Processing in a Computer Vision System
Dongping Zhu; Richard W. Conners; Philip A. Araman
1991-01-01
This paper discusses the problem of 3-dimensional image filtering in a computer vision system that would locate and identify internal structural failure. In particular, a 2-dimensional adaptive filter proposed by Unser has been extended to 3 dimensions. In conjunction with segmentation and labeling, the new filter has been used in the computer vision system to...
Laser cutting of irregular shape object based on stereo vision laser galvanometric scanning system
NASA Astrophysics Data System (ADS)
Qi, Li; Zhang, Yixin; Wang, Shun; Tang, Zhiqiang; Yang, Huan; Zhang, Xuping
2015-05-01
Irregular-shape objects with different 3-dimensional (3D) appearances are difficult to shape into a customized uniform pattern using current laser machining approaches. A laser galvanometric scanning (LGS) system could be a potential candidate, since it can easily achieve path-adjustable laser shaping. However, without knowing the actual 3D topography of the object, the processing result may still suffer from 3D shape distortion. It is desirable to have a versatile auxiliary tool capable of generating a 3D-adjusted laser processing path by measuring the 3D geometry of irregular-shape objects. This paper proposes the stereo vision laser galvanometric scanning system (SLGS), which takes advantage of both the stereo vision solution and the conventional LGS system. The 3D geometry of the object obtained by the stereo cameras is used to guide the scanning galvanometers for 3D-shape-adjusted laser processing. To achieve precise visual-servoed laser fabrication, the two independent components are integrated through a system calibration method using a plastic thin-film target. The flexibility of SLGS has been demonstrated experimentally by cutting duck feathers for badminton shuttle manufacture.
A Sensitive Dynamic and Active Pixel Vision Sensor for Color or Neural Imaging Applications.
Moeys, Diederik Paul; Corradi, Federico; Li, Chenghan; Bamford, Simeon A; Longinotti, Luca; Voigt, Fabian F; Berry, Stewart; Taverni, Gemma; Helmchen, Fritjof; Delbruck, Tobi
2018-02-01
Applications requiring detection of small visual contrast require high sensitivity. Event cameras can provide higher dynamic range (DR) and reduce data rate and latency, but most existing event cameras have limited sensitivity. This paper presents the results of a 180-nm Towerjazz CIS process vision sensor called SDAVIS192. It outputs temporal contrast dynamic vision sensor (DVS) events and conventional active pixel sensor frames. The SDAVIS192 improves on previous DAVIS sensors with higher sensitivity for temporal contrast. The temporal contrast thresholds can be set down to 1% for negative changes in logarithmic intensity (OFF events) and down to 3.5% for positive changes (ON events). This is achieved through the adoption of an in-pixel preamplification stage. The preamplifier reduces the effective intrascene DR of the sensor (70 dB for OFF and 50 dB for ON), but an automated operating region control allows up to at least 110-dB DR for OFF events. A second contribution of this paper is the development of a characterization methodology for measuring DVS event detection thresholds by incorporating a measure of signal-to-noise ratio (SNR). At an average SNR of 30 dB, the DVS temporal contrast threshold fixed pattern noise is measured to be 0.3%-0.8% temporal contrast. Results comparing monochrome and RGBW color filter array DVS events are presented. The higher sensitivity of SDAVIS192 makes this sensor potentially useful for calcium imaging, as shown in a recording from cultured neurons expressing the calcium-sensitive green fluorescent protein GCaMP6f.
Faster acquisition of laparoscopic skills in virtual reality with haptic feedback and 3D vision.
Hagelsteen, Kristine; Langegård, Anders; Lantz, Adam; Ekelund, Mikael; Anderberg, Magnus; Bergenfelz, Anders
2017-10-01
The study investigated whether 3D vision and haptic feedback in combination in a virtual reality environment lead to more efficient learning of laparoscopic skills in novices. Twenty novices were allocated to two groups. All completed a training course in the LapSim® virtual reality trainer consisting of four tasks: 'instrument navigation', 'grasping', 'fine dissection' and 'suturing'. The study group performed with haptic feedback and 3D vision and the control group without. Before and after the LapSim® course, the participants' metrics were recorded when tying a laparoscopic knot in the 2D video box trainer Simball® Box. The study group completed the training course in 146 (100-291) minutes compared to 215 (175-489) minutes in the control group (p = .002). The number of attempts to reach proficiency was significantly lower. The study group had significantly faster learning of skills in three out of four individual tasks: instrument navigation, grasping and suturing. Using the Simball® Box, no difference in laparoscopic knot tying after the LapSim® course was noted when comparing the groups. Laparoscopic training in virtual reality with 3D vision and haptic feedback made training more time efficient and did not negatively affect later video box performance in 2D.
2D/3D Synthetic Vision Navigation Display
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, J. J., III; Bailey, Randall E.; Sweeters, Jason L.
2008-01-01
Flight-deck display software was designed and developed at NASA Langley Research Center to provide two-dimensional (2D) and three-dimensional (3D) terrain, obstacle, and flight-path perspectives on a single navigation display. The objective was to optimize the presentation of synthetic vision (SV) system technology that permits pilots to view multiple perspectives of flight-deck display symbology and 3D terrain information. Research was conducted to evaluate the efficacy of the concept. The concept has numerous unique implementation features that would permit enhanced operational concepts and efficiencies in both current and future aircraft.
NASA Astrophysics Data System (ADS)
Du, Jia-Wei; Wang, Xuan-Yin; Zhu, Shi-Qiang
2017-10-01
Based on the process by which the spatial depth cue is obtained by a single eye, a monocular stereo vision method to measure the depth information of spatial objects is proposed in this paper, and a humanoid monocular stereo measuring system with two degrees of freedom is demonstrated. The proposed system can effectively obtain the three-dimensional (3-D) structure of spatial objects at different distances without changing the position of the system, and has the advantages of being exquisite, smart, and flexible. The bionic optical imaging system proposed in our previous paper, named ZJU SY-I, was employed; its vision characteristic mimics the eye's resolution decay from center to periphery. We simplified the eye's rotation in the eye socket and the coordinated rotation of other organs of the body into two rotations in orthogonal directions, and employed a rotating platform with two rotational degrees of freedom to drive ZJU SY-I. The structure of the proposed system is described in detail. The depth of a single feature point on a spatial object is derived, as well as its spatial coordinates. With focal length adjustment of ZJU SY-I and rotation control of the platform, the spatial coordinates of all feature points on the spatial object can be obtained, and the 3-D structure of the object can then be reconstructed. 3-D structure measurement experiments on two spatial objects with different distances and sizes were conducted. The main factors affecting the measurement accuracy of the proposed system are analyzed and discussed.
3D-model building of the jaw impression
NASA Astrophysics Data System (ADS)
Ahmed, Moumen T.; Yamany, Sameh M.; Hemayed, Elsayed E.; Farag, Aly A.
1997-03-01
A novel approach is proposed to obtain a record of the patient's occlusion using computer vision. Data acquisition is obtained using intra-oral video cameras. The technique utilizes shape from shading to extract 3D information from 2D views of the jaw, and a novel technique for 3D data registration using genetic algorithms. The resulting 3D model can be used for diagnosis, treatment planning, and implant purposes. The overall purpose of this research is to develop a model-based vision system for orthodontics to replace traditional approaches. This system will be flexible, accurate, and will reduce the cost of orthodontic treatments.
Research on three-dimensional reconstruction method based on binocular vision
NASA Astrophysics Data System (ADS)
Li, Jinlin; Wang, Zhihui; Wang, Minjun
2018-03-01
Binocular stereo vision is an important and challenging topic in computer vision, with broad application prospects in fields such as aerial mapping, vision navigation, motion analysis, and industrial inspection. This paper investigates binocular stereo camera calibration, image feature extraction, and stereo matching. In the camera calibration module, the intrinsic parameters of each camera are obtained using Zhang Zhengyou's checkerboard method. For image feature extraction and stereo matching, the SURF operator (a local feature operator) and the SGBM algorithm (a semi-global matching algorithm) are adopted respectively, and their performance is compared. After feature point matching is completed, the correspondence between matched image points and 3D object points can be established using the calibrated camera parameters, yielding the 3D information.
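For orientation, the pipeline described above maps closely onto standard OpenCV calls. The sketch below shows Zhang-style checkerboard calibration followed by SGBM matching; the file names, board size, and SGBM parameters are assumptions, not values from the paper.

```python
import cv2
import numpy as np

# Minimal OpenCV sketch: checkerboard calibration (Zhang's method) followed
# by semi-global block matching (SGBM) on a rectified pair.
pattern = (9, 6)                                  # inner corners of the board
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for fname in ["calib_00.png", "calib_01.png"]:    # calibration views (assumed)
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, pattern)
    if ok:
        obj_pts.append(objp)
        img_pts.append(corners)

_, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, gray.shape[::-1], None, None)

# Dense disparity on a rectified stereo pair.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0  # fixed-point to px
```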
Machine Vision for Relative Spacecraft Navigation During Approach to Docking
NASA Technical Reports Server (NTRS)
Chien, Chiun-Hong; Baker, Kenneth
2011-01-01
This paper describes a machine vision system for relative spacecraft navigation during the terminal phase of approach to docking that: 1) matches high contrast image features of the target vehicle, as seen by a camera that is bore-sighted to the docking adapter on the chase vehicle, to the corresponding features in a 3d model of the docking adapter on the target vehicle and 2) is robust to on-orbit lighting. An implementation is provided for the case of the Space Shuttle Orbiter docking to the International Space Station (ISS), with quantitative test results using a full scale, medium fidelity mock-up of the ISS docking adapter mounted on a 6-DOF motion platform at the NASA Marshall Spaceflight Center Flight Robotics Laboratory, and qualitative test results using recorded video from the Orbiter Docking System Camera (ODSC) during multiple Orbiter-to-ISS docking missions. The Natural Feature Image Registration (NFIR) system consists of two modules: 1) Tracking, which tracks the target object from image to image and estimates the position and orientation (pose) of the docking camera relative to the target object, and 2) Acquisition, which recognizes the target object if it is in the docking camera field of view and provides an approximate pose that is used to initialize tracking. Detected image edges are matched to the 3d model edges whose predicted location, based on the pose estimate and its first time derivative from the previous frame, is closest to the detected edge. Mismatches are eliminated using a rigid motion constraint. The remaining 2d image to 3d model matches are used to make a least squares estimate of the change in relative pose from the previous image to the current image. The changes in position and in attitude are used as data for two Kalman filters whose outputs are smoothed estimates of position and velocity plus attitude and attitude rate, which are then used to predict the location of the 3d model features in the next image.
High dynamic range vision sensor for automotive applications
NASA Astrophysics Data System (ADS)
Grenet, Eric; Gyger, Steve; Heim, Pascal; Heitger, Friedrich; Kaess, Francois; Nussbaum, Pascal; Ruedi, Pierre-Francois
2005-02-01
A 128 x 128 pixel, 120 dB vision sensor extracting, at the pixel level, the contrast magnitude and direction of local image features is used to implement a lane tracking system. The contrast representation (relative change of illumination) delivered by the sensor is independent of the illumination level. Together with the high dynamic range of the sensor, this ensures a very stable image feature representation even under strong spatial and temporal inhomogeneities of the illumination. Image features are dispatched off chip according to their contrast magnitude, prioritizing features with high contrast. This drastically reduces the amount of data transmitted out of the chip, and hence the processing power required for subsequent processing stages. To compensate for the low fill factor (9%) of the sensor, micro-lenses have been deposited, which increase the sensitivity by a factor of 5, corresponding to an equivalent of 2000 ASA. An algorithm exploiting the contrast representation output by the vision sensor has been developed to estimate the position of a vehicle relative to the road markings. The algorithm first detects the road markings based on the contrast direction map. Then it performs quadratic fits on selected 3 x 3 pixel kernels to achieve sub-pixel accuracy in the estimation of the lane marking positions. The resulting precision in the estimation of the vehicle's lateral position is 1 cm. The algorithm performs efficiently under a wide variety of environmental conditions, including night and rainy conditions.
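The sub-pixel step can be illustrated with a parabolic (quadratic) fit around the integer-pixel maximum; the paper fits over 3 x 3 kernels, while this simplified sketch fits along one axis with made-up contrast values.

```python
import numpy as np

# Minimal sketch of sub-pixel localization by quadratic fitting, in the
# spirit of the lane-marking step above (reduced to a 1-D parabolic fit).
def subpixel_peak(f_m1: float, f_0: float, f_p1: float) -> float:
    """Offset of the parabola vertex through samples at x = -1, 0, +1."""
    denom = f_m1 - 2.0 * f_0 + f_p1
    return 0.5 * (f_m1 - f_p1) / denom if denom != 0 else 0.0

row = np.array([0.1, 0.4, 0.9, 1.0, 0.7, 0.2])   # contrast magnitudes (assumed)
i = int(np.argmax(row))                           # integer-pixel maximum
x_sub = i + subpixel_peak(row[i - 1], row[i], row[i + 1])
print(x_sub)                                      # sub-pixel marking position
```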
Changes in accommodation and ocular aberration with simultaneous vision multifocal contact lenses.
Ruiz-Alcocer, Javier; Madrid-Costa, David; Radhakrishnan, Hema; Ferrer-Blasco, Teresa; Montés-Micó, Robert
2012-09-01
The aim of this study was to evaluate ocular aberration changes through different simultaneous vision multifocal contact lenses (CLs). Eighteen young-adult subjects with a mean age of 29.8±2.11 years took part. Changes in accommodative response, spherical aberration (C₄⁰), horizontal coma (C₃¹), vertical coma (C₃⁻¹), and root mean square (RMS) of higher-order aberrations (HOAs, third to sixth orders) were evaluated. Measurements were obtained with a distance-single vision CL and 2 aspheric multifocal CLs of simultaneous focus center-near design (PureVision Low Add and PureVision High Add) for 2 accommodative stimuli (-2.50 and -4.00 D). All measurements were performed monocularly with a Hartmann-Shack aberrometer (IRX-3; Imagine Eyes, Orsay, France). No statistically significant differences were found in accommodative responses to -2.50- and -4.00-D stimuli between the single vision CL and the 2 multifocal CLs. Spherical aberration was found to decrease and become more negative with accommodation for both stimuli with all three CLs. Horizontal coma decreased significantly with accommodation (-2.5- and -4.00-D stimuli) for the distance-single vision CLs (P=0.002 and P=0.003). No differences were found in vertical coma Zernike coefficients. The RMS of HOAs was found to decrease only with the single vision CLs for both stimuli (P<0.01). Data obtained in this study suggest that in young subjects, the multifocal CLs studied do not induce large changes in accommodative response compared with the distance-single vision CLs. Spherical aberration reduced significantly with accommodation.
Three-dimensional object recognition based on planar images
NASA Astrophysics Data System (ADS)
Mital, Dinesh P.; Teoh, Eam-Khwang; Au, K. C.; Chng, E. K.
1993-01-01
This paper presents the development and realization of a robotic vision system for the recognition of 3-dimensional (3-D) objects. The system can recognize a single object from among a group of known regular convex polyhedral objects constrained to lie on a calibrated flat platform. The approach adopted comprises a series of image processing operations on a single 2-dimensional (2-D) intensity image to derive an image line drawing. Subsequently, a feature matching technique is employed to determine 2-D spatial correspondences between the image line drawing and the models in the database. Besides its identification ability, the system can also provide important position and orientation information about the recognized object. The system was implemented on an IBM-PC AT machine executing at 8 MHz without the 80287 Maths Co-processor. In an overall performance evaluation based on a test of 600 recognition cycles, the system demonstrated an accuracy above 80%, with recognition times well within 10 seconds. The recognition time is, however, indirectly dependent on the number of models in the database. The reliability of the system is also affected by illumination conditions, which must be carefully controlled, as in any industrial robotic vision system.
Kang, Pauline; McAlinden, Colm; Wildsoet, Christine F
2017-02-01
To assess the effects of multifocal soft contact lenses (MF SCLs) used for myopia control on visual acuity (VA) and subjective quality of vision. Twenty-four young adult myopes had baseline high and low-contrast VAs and refractions measured and quality of vision assessed by the Quality of Vision (QoV) questionnaire with single vision SCLs. Additional VA and QoV questionnaire data were collected immediately after subjects were fitted with Proclear MF SCLs and again after a 2-week adaptation period of daily lens wear. Data were collected for two MF SCL designs, incorporating +1.50 and +3.00 D peripheral near additions, with a week washout period allowed between the two lens trials. High- and low-contrast VAs were initially reduced with both MF SCL designs, but subsequently improved to be not significantly reduced in the case of high-contrast VA by the end of the 2-week adaptation period. The quality of vision was also reduced, more so with the +3.00 D MF SCL. Quality of Vision (QoV) scores describing frequency, severity and bothersome nature of visual symptoms indicated symptoms worsening rather than resolving over the 2-week period, particularly so with the +3.00 D MF SCL. Low and high add MF SCLs adversely affected vision on initial insertion, with sustained effects on low-contrast VA and QoV scores but not high-contrast VA. Thus, high-contrast VA is not a suitable surrogate for quality of vision. In prescribing MF SCLs for myopia control, clinicians should educate patients about these effects on vision. © 2016 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
NASA Technical Reports Server (NTRS)
Alexander, Amy L.; Prinzel, Lawrence J., III; Wickens, Christopher D.; Kramer, Lynda J.; Arthur, Jarvis J.; Bailey, Randall E.
2007-01-01
Synthetic vision systems provide an in-cockpit view of terrain and other hazards via a computer-generated display representation. Two experiments examined several display concepts for synthetic vision and evaluated how such displays modulate pilot performance. Experiment 1 (24 general aviation pilots) compared three navigational display (ND) concepts: 2D coplanar, 3D, and split-screen. Experiment 2 (12 commercial airline pilots) evaluated baseline 'blue sky/brown ground' or synthetic vision-enabled primary flight displays (PFDs) and three ND concepts: 2D coplanar with and without synthetic vision and a dynamic multi-mode rotatable exocentric format. In general, the results pointed to an overall advantage for a split-screen format, whether it be stand-alone (Experiment 1) or available via rotatable viewpoints (Experiment 2). Furthermore, Experiment 2 revealed benefits associated with utilizing synthetic vision in both the PFD and ND representations and the value of combined ego- and exocentric presentations.
A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems
Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo
2017-01-01
Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems. PMID:28079187
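A drastically simplified, non-spiking sketch of the underlying stereo-coincidence idea: two events are candidate matches if they share the epipolar row and polarity and occur within a short time window. The events and window below are illustrative assumptions; the paper's model implements this with spiking neurons.

```python
import numpy as np

# Minimal sketch of event-based stereo correspondence by temporal
# coincidence, a classical simplification of the spiking-network model.
T_WIN = 1e-3   # coincidence window in seconds (assumed)

# events: (timestamp, x, y, polarity) from left and right sensors
left_events  = [(0.0100, 40, 12, +1), (0.0102, 41, 12, +1)]
right_events = [(0.0101, 33, 12, +1), (0.0400, 90, 30, -1)]

def match(ev_l, right):
    t, x, y, p = ev_l
    cands = [e for e in right
             if e[2] == y and e[3] == p and abs(e[0] - t) < T_WIN]
    if not cands:
        return None
    best = min(cands, key=lambda e: abs(e[0] - t))   # closest in time
    return x - best[1]                               # disparity in pixels

for ev in left_events:
    print(match(ev, right_events))                   # e.g. 7, 8
```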
Real-time synthetic vision cockpit display for general aviation
NASA Astrophysics Data System (ADS)
Hansen, Andrew J.; Smith, W. Garth; Rybacki, Richard M.
1999-07-01
Low cost, high performance graphics solutions based on PC hardware platforms are now capable of rendering synthetic vision of a pilot's out-the-window view during all phases of flight. When coupled to a GPS navigation payload, the virtual image can be fully correlated with the physical world. In particular, differential GPS services such as the Wide Area Augmentation System (WAAS) will provide all aviation users with highly accurate 3D navigation. Short-baseline GPS attitude systems are also becoming a viable and inexpensive solution. A glass cockpit display rendering terrain draped with geographically specific imagery in real time can be coupled with high-accuracy (7 m 95% positioning, sub-degree pointing), high-integrity (99.99999% position error bound) differential GPS navigation/attitude solutions to provide both situational awareness and 3D guidance to (auto)pilots throughout the en route, terminal area, and precision approach phases of flight. This paper describes the technical issues addressed when coupling GPS and glass cockpit displays, including the navigation/display interface, real-time 60 Hz rendering of terrain with multiple levels of detail under demand paging, and construction of verified terrain databases draped with geographically specific satellite imagery. Further, on-board recordings of the navigation solution and the cockpit display provide a replay facility for post-flight simulation based on live landings, as well as synchronized multiple display channels with different views from the same flight. PC-based solutions which integrate GPS navigation and attitude determination with 3D visualization provide the aviation community, and general aviation in particular, with low cost, high performance guidance and situational awareness in all phases of flight.
Comparative Geometrical Accuracy Investigations of Hand-Held 3d Scanning Systems - AN Update
NASA Astrophysics Data System (ADS)
Kersten, T. P.; Lindstaedt, M.; Starosta, D.
2018-05-01
Hand-held 3D scanning systems are increasingly available on the market from several system manufacturers. These systems are deployed for 3D recording of objects of different sizes in diverse applications, such as industrial reverse engineering and documentation of museum exhibits. Typical measurement distances range from 0.5 m to 4.5 m. Although they are often easy to use, the geometric performance of these systems, especially their precision and accuracy, is not well known to many users. First geometrical investigations of a variety of hand-held 3D scanning systems were carried out by the Photogrammetry & Laser Scanning Lab of the HafenCity University Hamburg (HCU Hamburg), in cooperation with two other universities, in 2016. To obtain more information about the accuracy behaviour of the latest generation of hand-held 3D scanning systems, HCU Hamburg conducted further comparative geometrical investigations using structured-light systems with speckle pattern (Artec Spider, Mantis Vision PocketScan 3D, Mantis Vision F5-SR, Mantis Vision F5-B, and Mantis Vision F6) and photogrammetric systems (Creaform HandySCAN 700 and Shining FreeScan X7). In the framework of these comparative investigations, geometrically stable reference bodies were used. The appropriate reference data were acquired by measurements with two structured-light projection systems (AICON smartSCAN and GOM ATOS I 2M). The comprehensive results of the different test scenarios are presented and critically discussed in this contribution.
Glass Vision 3D: Digital Discovery for the Deaf
ERIC Educational Resources Information Center
Parton, Becky Sue
2017-01-01
Glass Vision 3D was a grant-funded project focused on developing and researching a Google Glass app that would allow young Deaf children to look at the QR code of an object in the classroom and see an augmented reality projection that displays an American Sign Language (ASL) related video. Twenty-five objects and videos were prepared and tested…
Three-dimensional particle tracking velocimetry using dynamic vision sensors
NASA Astrophysics Data System (ADS)
Borer, D.; Delbruck, T.; Rösgen, T.
2017-12-01
A fast-flow visualization method is presented based on tracking neutrally buoyant soap bubbles with a set of neuromorphic cameras. The "dynamic vision sensors" register only the changes in brightness with very low latency, capturing fast processes at a low data rate. The data consist of a stream of asynchronous events, each encoding the corresponding pixel position, the time instant of the event and the sign of the change in logarithmic intensity. The work uses three such synchronized cameras to perform 3D particle tracking in a medium sized wind tunnel. The data analysis relies on Kalman filters to associate the asynchronous events with individual tracers and to reconstruct the three-dimensional path and velocity based on calibrated sensor information.
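A minimal sketch of the event-driven tracking idea, reduced to one dimension and one tracer: a constant-velocity Kalman filter predicted to each asynchronous event time, with a gate deciding whether the event belongs to the track. Noise values, gate size, and data are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Minimal sketch of asynchronous Kalman tracking of one bubble tracer from
# dynamic-vision-sensor events (1-D for brevity; the paper tracks full 3-D
# paths from three synchronized sensors).
q, r, gate = 50.0, 1.0, 5.0          # process/measurement noise, gate in px
x = np.array([100.0, 0.0])           # state: [position, velocity]
P = np.eye(2) * 10.0
t_prev = 0.0

def step(x, P, t_prev, t, z):
    dt = t - t_prev                  # events arrive at irregular times
    F = np.array([[1.0, dt], [0.0, 1.0]])
    x, P = F @ x, F @ P @ F.T + q * dt * np.eye(2)
    if abs(z - x[0]) < gate:         # associate event only if inside the gate
        H = np.array([[1.0, 0.0]])
        K = P @ H.T / (H @ P @ H.T + r)
        x = x + (K * (z - x[0])).ravel()
        P = (np.eye(2) - K @ H) @ P
    return x, P, t

for t, z in [(0.0010, 100.4), (0.0013, 100.7), (0.0020, 101.2)]:
    x, P, t_prev = step(x, P, t_prev, t, z)
print(x)   # filtered position and velocity estimate
```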
Three-Dimensional Motion Estimation Using Shading Information in Multiple Frames
1989-09-01
Schott, Jean-Pierre; MIT Artificial Intelligence Laboratory
Keywords: vision, 3-D structure, 3-D vision, shape from shading, multiple frames
Motion and shading have been treated as two disjoint problems. On the one hand, researchers studying motion or structure from motion often assume...
Development of a volumetric projection technique for the digital evaluation of field of view.
Marshall, Russell; Summerskill, Stephen; Cook, Sharon
2013-01-01
Current regulations for field of view requirements in road vehicles are defined by 2D areas projected on the ground plane. This paper discusses the development of a new software-based volumetric field of view projection tool and its implementation within an existing digital human modelling system. In addition, the exploitation of this new tool is highlighted through its use in a UK Department for Transport funded research project exploring the current concerns with driver vision. Focusing specifically on rearwards visibility in small and medium passenger vehicles, the volumetric approach is shown to provide a number of distinct advantages. The ability to explore multiple projections of both direct vision (through windows) and indirect vision (through mirrors) provides a greater understanding of the field of view environment afforded to the driver whilst still maintaining compatibility with the 2D projections of the regulatory standards. Field of view requirements for drivers of road vehicles are defined by simplified 2D areas projected onto the ground plane. However, driver vision is a complex 3D problem. This paper presents the development of a new software-based 3D volumetric projection technique and its implementation in the evaluation of driver vision in small- and medium-sized passenger vehicles.
An Integrated Calibration Technique for Stereo Vision Systems (PREPRINT)
2010-03-01
An integrated calibration technique for stereo vision systems has been developed. To demonstrate and evaluate this calibration technique, multiple Wii Remotes (Wiimotes) from Nintendo were used to form stereo vision systems performing 3D motion capture in real time. The integrated technique is a two-step process. Many researchers have successfully dealt with the problem of camera calibration by taking images of a 2D...
Study on portable optical 3D coordinate measuring system
NASA Astrophysics Data System (ADS)
Ren, Tongqun; Zhu, Jigui; Guo, Yinbiao
2009-05-01
A portable optical 3D coordinate measuring system based on digital Close Range Photogrammetry (CRP) technology and binocular stereo vision theory is researched. Three infrared LEDs with high stability are set on a hand-held target to provide measurement features and establish the target coordinate system. Field calibration of the intersecting binocular measurement system, composed of two cameras, is performed by ray intersection using a reference ruler. The hand-held target, controlled via Bluetooth wireless communication, is moved freely to implement contact measurement. The position of the ceramic contact ball is pre-calibrated accurately. The coordinates of the target feature points are obtained by the binocular stereo vision model from the stereo image pair taken by the cameras. Combining radius compensation for the contact ball with residual error correction, the object point can be resolved by transfer of axes, using the target coordinate system as an intermediary. This system is suitable for on-site large-scale measurement because of its excellent portability, high precision, wide measuring volume, great adaptability, and high degree of automation. Tests show that the measurement precision is close to ±0.1 mm/m.
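The "transfer of axes" step is essentially a rigid-transform estimation from the three LED points. A minimal sketch using the standard Kabsch/SVD method follows; the point coordinates and probe offset are illustrative assumptions, not the system's calibration data.

```python
import numpy as np

# Minimal sketch: recover the rigid transform between the target frame and
# the camera frame from the three LED points, then map the pre-calibrated
# contact-ball center into the camera frame (Kabsch/SVD method).
target_pts = np.array([[0.0, 0.0, 0.0], [80.0, 0.0, 0.0], [0.0, 60.0, 0.0]])
camera_pts = np.array([[512.1, 33.0, 900.2],     # same LEDs as measured by
                       [590.8, 41.5, 905.9],     # the stereo rig (assumed)
                       [505.6, 92.7, 897.4]])

def rigid_transform(A, B):
    """Least-squares R, t with B ~ R @ A + t (rows are points)."""
    cA, cB = A.mean(0), B.mean(0)
    H = (A - cA).T @ (B - cB)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cB - R @ cA

R, t = rigid_transform(target_pts, camera_pts)
ball_in_target = np.array([40.0, 20.0, -120.0])   # pre-calibrated probe center
print(R @ ball_in_target + t)                     # probe center in camera frame
```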
Frick, Kevin D; Drye, Lea T; Kempen, John H; Dunn, James P; Holland, Gary N; Latkany, Paul; Rao, Narsing A; Sen, H Nida; Sugar, Elizabeth A; Thorne, Jennifer E; Wang, Robert C; Holbrook, Janet T
2012-03-01
To evaluate the associations between visual acuity and self-reported visual function; visual acuity and health-related quality of life (QoL) metrics; a summary measure of self-reported visual function and health-related QoL; and individual domains of self-reported visual function and health-related QoL in patients with uveitis. Best-corrected visual acuity, vision-related functioning as assessed by the NEI VFQ-25, and health-related QoL as assessed by the SF-36 and EuroQoL EQ-5D questionnaires were obtained at enrollment in a clinical trial of uveitis treatments. Multivariate regression and Spearman correlations were used to evaluate associations between visual acuity, vision-related function, and health-related QoL. Among the 255 patients, median visual acuity in the better-seeing eyes was 20/25, the vision-related function score indicated impairment (median, 60), and health-related QoL scores were within the normal population range. Better visual acuity was predictive of higher visual function scores (P ≤ 0.001), a higher SF-36 physical component score, and a higher EQ-5D health utility score (P < 0.001). The vision-specific function score was predictive of all general health-related QoL measures (P < 0.001). The correlations between visual function score and general quality of life measures were moderate (ρ = 0.29-0.52). The vision-related function score correlated positively with visual acuity and moderately positively with general QoL measures. Cost-utility analyses relying on changes in generic health utility measures will be more likely to detect changes when there are clinically meaningful changes in vision-related function, rather than when there are only changes in visual acuity. (ClinicalTrials.gov number, NCT00132691.)
Data-Fusion for a Vision-Aided Radiological Detection System: Sensor dependence and Source Tracking
NASA Astrophysics Data System (ADS)
Stadnikia, Kelsey; Martin, Allan; Henderson, Kristofer; Koppal, Sanjeev; Enqvist, Andreas
2018-01-01
The University of Florida is taking a multidisciplinary approach to fusing data from 3D vision sensors and radiological sensors, in hopes of creating a system capable of not only detecting the presence of a radiological threat but also tracking it. The key to developing such a vision-aided radiological detection system lies in the count rate being inversely dependent on the square of the distance. Presented in this paper are the results of the calibration algorithm used to predict the location of the radiological detectors based on the 3D distance from the source to the detector (vision data) and the detector count rate (radiological data). Also presented are the results of two correlation methods used to explore source tracking.
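The inverse-square dependence that links the two data streams can be sketched as a simple least-squares fit of count rate against vision-measured distance; the numbers below are illustrative, not measurements from the paper.

```python
import numpy as np

# Minimal sketch of the inverse-square relationship: with vision-measured
# source-detector distances d_i and measured count rates r_i, fit
# r = A/d^2 + b by linear least squares. Sample values are assumptions.
d = np.array([1.0, 1.5, 2.0, 3.0])          # meters, from the 3D vision system
r = np.array([412.0, 188.0, 109.0, 52.0])   # counts/s, from the detector

X = np.column_stack([1.0 / d**2, np.ones_like(d)])
(A, b), *_ = np.linalg.lstsq(X, r, rcond=None)
print(A, b)                                  # source strength term, background

d_new = 2.5
print(A / d_new**2 + b)                      # predicted count rate at a new range
```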
NASA Astrophysics Data System (ADS)
Santagati, C.; Inzerillo, L.; Di Paola, F.
2013-07-01
3D reconstruction from images has undergone a revolution in the last few years. Computer vision techniques use photograph data sets to rapidly build detailed 3D models. The simultaneous application of different algorithms (MVS) and different techniques for image matching, feature extraction, and mesh optimization form an active field of research in computer vision. The results are promising: the obtained models are beginning to challenge the precision of laser-based reconstructions. Among the available options we can mainly distinguish desktop and web-based packages. The latter offer the opportunity to exploit the power of cloud computing to carry out semi-automatic data processing, allowing the user to perform other tasks on the computer, whereas desktop systems demand long processing times and heavyweight workflows. Computer vision researchers have explored many ways to verify the visual accuracy of 3D models, but approaches to verify metric accuracy are few, and none has examined Autodesk 123D Catch as applied to architectural heritage documentation. Our approach to this challenging problem is to compare 3D models produced by Autodesk 123D Catch with 3D models from terrestrial LIDAR, considering different object sizes, from details (capitals, moldings, bases) to large-scale buildings, for practitioner purposes.
NASA Astrophysics Data System (ADS)
Lauinger, Norbert
1994-10-01
In photopic vision, two physical variables (luminance and wavelength) are transformed into three psychological variables (brightness, hue, and saturation). Following on from 3D grating optical explanations of aperture effects (Stiles-Crawford effects SCE I and II), all three variables can be explained via a single 3D chip effect. The 3D grating optical calculations are carried out using the classical von Laue equation and demonstrated using the example of two experimentally confirmed observations in human vision: saturation effects for monochromatic test lights between 485 and 510 nm in the SCE II and the fact that many test lights reverse their hue shift in the SCE II when changing from moderate to high luminances compared with that on changing from low to medium luminances. At the same time, information is obtained on the transition from the trichromatic color system in the retina to the opponent color system.
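For reference, the classical von Laue diffraction conditions invoked above take the following textbook form (a standard statement, not reproduced from the paper): for grating vectors $\mathbf{a}$, $\mathbf{b}$, $\mathbf{c}$, incident and diffracted unit vectors $\mathbf{s}_0$ and $\mathbf{s}$, wavelength $\lambda$, and integers $h$, $k$, $l$,

```latex
\begin{aligned}
\mathbf{a}\cdot(\mathbf{s}-\mathbf{s}_0) &= h\lambda,\\
\mathbf{b}\cdot(\mathbf{s}-\mathbf{s}_0) &= k\lambda,\\
\mathbf{c}\cdot(\mathbf{s}-\mathbf{s}_0) &= l\lambda .
\end{aligned}
```

A diffraction maximum occurs only where all three conditions hold simultaneously.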
Stereo vision tracking of multiple objects in complex indoor environments.
Marrón-Romera, Marta; García, Juan C; Sotelo, Miguel A; Pizarro, Daniel; Mazo, Manuel; Cañas, José M; Losada, Cristina; Marcos, Alvaro
2010-01-01
This paper presents a novel system capable of solving the problem of tracking multiple targets in a crowded, complex and dynamic indoor environment, like those typical of mobile robot applications. The proposed solution is based on a stereo vision set in the acquisition step and a probabilistic algorithm in the obstacle position estimation process. The system obtains 3D position and speed information related to each object in the robot's environment; it then achieves a classification between building elements (ceiling, walls, columns and so on) and the rest of the items in the robot's surroundings. All objects in the robot's surroundings, both dynamic and static, are considered obstacles, except the structure of the environment itself. A combination of a Bayesian algorithm and a deterministic clustering process is used in order to obtain a multimodal representation of the speed and position of detected obstacles. Performance of the final system has been tested against state-of-the-art proposals; test results validate the authors' proposal. The designed algorithms and procedures provide a solution for those applications where similar multimodal data structures are found.
The study of stereo vision technique for the autonomous vehicle
NASA Astrophysics Data System (ADS)
Li, Pei; Wang, Xi; Wang, Jiang-feng
2015-08-01
Stereo vision technology using two or more cameras can recover 3D information within the field of view. This technology can effectively help the autonomous navigation system of an unmanned vehicle to judge the pavement conditions within the field of view and to measure the obstacles on the road. In this paper, stereo vision technology for obstacle-avoidance measurement on an autonomous vehicle is studied, and the key techniques are analyzed and discussed. The system hardware is built and the software debugged, and the measurement performance is then illustrated with measured data. Experiments show that the 3D scene within the field of view can be reconstructed effectively by stereo vision technology, providing the basis for judging pavement conditions. Compared with the navigation radar used in unmanned-vehicle measuring systems, the stereo vision system has the advantages of low cost, working distance and so on, and it has good application prospects.
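The depth-recovery relation underlying such a system is standard rectified-stereo triangulation: depth equals focal length times baseline over disparity. A minimal sketch; the numbers are illustrative, not the paper's calibration.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth from rectified stereo: Z = f * B / d.

    disparity_px : horizontal pixel offset between left/right views
    focal_px     : focal length expressed in pixels
    baseline_m   : distance between camera centres in metres
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        return focal_px * baseline_m / disparity_px

# Example: f = 800 px, B = 0.30 m, 24 px disparity -> 10 m range
print(depth_from_disparity(24.0, 800.0, 0.30))  # 10.0
```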
Implementation of a robotic flexible assembly system
NASA Technical Reports Server (NTRS)
Benton, Ronald C.
1987-01-01
As part of the Intelligent Task Automation program, a team developed enabling technologies for programmable, sensory controlled manipulation in unstructured environments. These technologies include 2-D/3-D vision sensing and understanding, force sensing and high speed force control, 2.5-D vision alignment and control, and multiple processor architectures. The subsequent design of a flexible, programmable, sensor controlled robotic assembly system for small electromechanical devices using these technologies is described, along with ongoing implementation and integration efforts. Using vision, the system picks parts dumped randomly in a tray. Using vision and force control, it performs high speed part mating, in-process monitoring/verification of expected results and autonomous recovery from some errors. It is programmed off-line with semiautomatic action planning.
A Hybrid Synthetic Vision System for the Tele-operation of Unmanned Vehicles
NASA Technical Reports Server (NTRS)
Delgado, Frank; Abernathy, Mike
2004-01-01
A system called SmartCam3D (SC3D) has been developed to provide enhanced situational awareness for operators of a remotely piloted vehicle. SC3D is a Hybrid Synthetic Vision System (HSVS) that combines live sensor data with information from a Synthetic Vision System (SVS). By combining the dual information sources, the operators are afforded the advantages of each approach. The live sensor system provides real-time information for the region of interest. The SVS provides information-rich visuals that will function under all weather and visibility conditions. Additionally, the combination of technologies allows the system to circumvent some of the limitations of each approach. Video sensor systems are not very useful when visibility conditions are hampered by rain, snow, sand, fog, and smoke, while a SVS can suffer from data freshness problems. Typically, an aircraft or satellite flying overhead collects the data used to create the SVS visuals. The SVS data could have been collected weeks, months, or even years ago. As a result, the information in an SVS visual could be outdated and possibly inaccurate. SC3D was used in the remote cockpit during flight tests of the X-38 132 and 131R vehicles at the NASA Dryden Flight Research Center. SC3D was also used during the operation of military Unmanned Aerial Vehicles. This presentation will provide an overview of the system, its evolution, the results of flight tests, and future plans. Furthermore, the safety benefits of the SC3D over traditional and pure synthetic vision systems will be discussed.
Designing stereoscopic information visualization for 3D-TV: What can we learn from S3D gaming?
NASA Astrophysics Data System (ADS)
Schild, Jonas; Masuch, Maic
2012-03-01
This paper explores graphical design and spatial alignment of visual information and graphical elements into stereoscopically filmed content, e.g. captions, subtitles, and especially more complex elements in 3D-TV productions. The method used is a descriptive analysis of existing computer- and video games that have been adapted for stereoscopic display using semi-automatic rendering techniques (e.g. Nvidia 3D Vision) or games which have been specifically designed for stereoscopic vision. Digital games often feature compelling visual interfaces that combine high usability with creative visual design. We explore selected examples of game interfaces in stereoscopic vision regarding their stereoscopic characteristics, how they draw attention, how we judge effect and comfort and where the interfaces fail. As a result, we propose a list of five aspects which should be considered when designing stereoscopic visual information: explicit information, implicit information, spatial reference, drawing attention, and vertical alignment. We discuss possible consequences, opportunities and challenges for integrating visual information elements into 3D-TV content. This work shall further help to improve current editing systems and identifies a need for future editing systems for 3DTV, e.g., live editing and real-time alignment of visual information into 3D footage.
Treacy, M P; Treacy, M G; Dimitrov, B D; Seager, F E; Stamp, M A; Murphy, C C
2013-01-01
Purpose Globally, 153 million people are visually impaired from uncorrected refractive error. The aim of this research was to verify a method whereby autorefractors could be used by non-specialist health-workers to prescribe spectacles, which used a small stock of preformed lenses that fit frames with standardised apertures. These spectacles were named S-Glasses (Smart Glasses). Patients and methods This prospective, single-cohort exploratory study enrolled 53 patients with 94 eligible eyes having uncorrected vision of 6/18 or worse. Eyes with best-corrected vision worse than 6/12 were excluded. An autorefractor was used to obtain refractions, which were adjusted so that eyes with astigmatism less than 2.00 dioptres (D) received spherical equivalent lenses, and eyes with more astigmatism received toric lenses with a 2.50 D cylindrical element set at one of four meridians. The primary outcome was to compare S-Glasses vision with the WHO definition of visual impairment (6/18). Where astigmatism was 2.00 D or greater, comparison with spherical equivalent was made. Mixed-model analysis with repeated effect was used to account for possible correlation between the vision of fellow eyes of the same individual. Results S-Glasses corrected 100% of eyes with astigmatism less than 3.00 D and 69% of eyes with astigmatism of 3.00 D or greater. Spherical equivalent lenses corrected 25% of eyes with astigmatism of 2.00−2.99 D and 11% with astigmatism of at least 3.00 D. Discussion S-Glasses could be beneficial to resource-poor populations without trained refractionists. This novel approach, using approximate toric lenses, results in superior vision for astigmatic patients compared with the practice of providing spherical equivalent alone. PMID:23306732
3D vision upgrade kit for TALON robot
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Pezzaniti, J. Larry; Chenault, David B.; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Pettijohn, Brad
2010-04-01
In this paper, we report on the development of a 3D vision field upgrade kit for the TALON robot, consisting of a replacement flat-panel stereoscopic display and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. The 3D vision system was integrated onto a TALON IV Robot and Operator Control Unit (OCU) such that stock components could be electrically disconnected and removed, and upgrade components coupled directly to the mounting and electrical connections. A replacement display, a replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom comprise the upgrade kit. The stereo mast camera allows for improved driving and situational awareness as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.
Assessing color vision loss among solvent-exposed workers.
Mergler, D; Blain, L
1987-01-01
Acquired color vision loss has been associated with exposure to organic solvents in the workplace. However, not all tests of chromatic discrimination loss are designed to detect acquired, as opposed to congenital, loss. The Lanthony D-15 desaturated panel (D-15-d), a simple 15-cap color arrangement test designed to identify mild acquired dyschromatopsia, can be administered rapidly in the field, under standard conditions. The objective of the present study was to evaluate the D-15-d among 23 solvent-exposed workers of a paint manufacturing plant, by comparing the results obtained with the D-15-d to those obtained with the Farnsworth-Munsell 100 Hue (FM-100), a highly sensitive measure of color vision loss. The D-15-d revealed a significantly higher prevalence of dyschromatopsia among the ten highly exposed workers (80%) as compared to the 13 moderately exposed workers (30.8%); FM-100 results revealed one false positive. All dyschromatopic workers presented blue-yellow loss; the FM-100 detected eight complex patterns, while the D-15-d identified five. D-15-d and FM-100 scores were highly correlated (correlation coefficient 0.87; p < 0.001). Multiple regression analyses showed both scores to be significantly related to age and exposure level. The findings of this study indicate that the D-15-d is an adequate instrument for field study batteries. However, the FM-100 should be used for more detailed assessment.
Velocity and Structure Estimation of a Moving Object Using a Moving Monocular Camera
2006-01-01
[Only citation fragments of this abstract were recovered. The work concerns estimating the velocity and structure of a moving object from a moving monocular camera, building on techniques that map the Euclidean position of static landmarks or visual features in the environment; the recovered citations include work on structure from motion in a piecewise planar environment (International Journal of Pattern Recognition and Artificial Intelligence, Vol. 2, No. 3, pp. 485-508, 1988) and on visual surveillance for moving vehicles (Ferryman, Maybank, and Worrall, International Journal of Computer Vision, Vol. 37).]
Detection of color vision defects in chloroquine retinopathy.
Vu, B L; Easterbrook, M; Hovis, J K
1999-09-01
The effect of chloroquine toxicity on color vision is unclear. The authors identified the color defects seen in chloroquine retinopathy and determined the sensitivity and specificity of clinical color vision tests for detecting the presence of previously diagnosed chloroquine retinopathy. Case-control study. Chloroquine retinopathy was defined using previously published criteria. Data from 30 patients with retinopathy and 25 patients using chloroquine but with no evidence of retinal toxicity were collected. All patients were tested with the following six clinical color vision tests: Ishihara, Farnsworth D-15, and Adams Desaturated-15 (Dsat-15), City University 2nd Edition (CU), Standard Pseudoisochromatic Plates Part 2 (SPP-2), and American Optical Hardy Rand Rittler (AO HRR). The number of failures was determined for each test. The types of color vision defects were classified as blue-yellow (BY), red-green (RG), or mixed RG and BY (mixed). Of the 30 patients with retinopathy, 28 (93.3%) failed at least 1 color vision test, demonstrating predominantly mixed defects. Five (20%) of 25 control subjects failed at least 1 test, and these defects were predominantly BY. The sensitivity and specificity of the tests are as follows: SPP-2 (93.3%, 88%), AO HRR (76.7%, 88%), Ishihara (43.3%, 96%), Dsat-15 (33.3%, 84%), D-15 (16.7%, 96%), and CU (20%, 92%). Color vision can be affected by chloroquine and should be tested routinely with a color vision test designed to detect both mild BY and protan RG defects to maximize sensitivity for toxicity. The SPP-2 and AO HRR are two tests that meet these criteria. The Ishihara has a low sensitivity, as do the D-15 tests and CU. All of the tests have similar specificity for chloroquine toxicity. If color vision defects are detected in patients at risk of developing chloroquine retinopathy, additional testing is indicated to rule out toxicity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, M; Feigenberg, S
Purpose To evaluate the effectiveness of using 3D-surface imaging to guide breath-holding (BH) left-side breast treatment. Methods Two 3D-surface-image guided BH procedures were implemented and evaluated: normal-BH, taking BH at a comfortable level, and deep-inspiration breath-holding (DIBH). A total of 20 patients (10 normal-BH and 10 DIBH) were recruited. Patients received a BH evaluation using a commercial 3D-surface-tracking system (VisionRT, London, UK) to quantify the reproducibility of BH positions prior to CT scan. Tangential 3D/IMRT plans were generated. Patients were initially set up under free-breathing (FB) conditions using the FB surface obtained from the untagged CT to ensure a correct patient position. Patients were then guided to reach the planned BH position using the BH surface obtained from the BH CT. Action levels were set at each phase of the treatment process, based on the information provided by the 3D-surface-tracking system, for proper interventions (eliminate/re-setup/re-coaching). We reviewed the frequency of interventions to evaluate its effectiveness. The FB-CBCT and port film were utilized to evaluate the accuracy of 3D-surface-guided setups. Results 25% of BH candidates with BH positioning uncertainty > 2 mm were eliminated prior to CT scan. For > 90% of fractions, based on the setup deltas from the 3D-surface-tracking system, adjustments of the patient setup were needed after the initial laser-based setup. 3D-surface-guided setup accuracy is comparable to CBCT. For the BH guidance, the frequency of interventions (re-coaching/re-setup) is 40% (normal-BH) / 91% (DIBH) of treatments for the first 5 fractions, and then drops to 16% (normal-BH) / 46% (DIBH). The necessity of re-setup is highly patient-specific for normal-BH but highly random among patients for DIBH. Overall, a −0.8 ± 2.4 mm accuracy of the anterior pericardial shadow position was achieved. Conclusion 3D-surface-image technology provides effective intervention to the treatment process and ensures favorable day-to-day setup accuracy. DIBH setup appears to be more uncertain, and this is the patient group that will benefit most from the extra information of 3D-surface setup.
The Survey of Vision-based 3D Modeling Techniques
NASA Astrophysics Data System (ADS)
Ruan, Mingzhe
2017-10-01
This paper reviews vision-based localization and map-construction methods from the perspectives of VSLAM, SFM, 3DMax and Unity3D. It focuses on the key technologies and the latest research progress in each area, analyzes the advantages and disadvantages of each method, illustrates their implementation processes and system frameworks, and further discusses how to combine them so as to exploit their complementary strengths. Finally, future opportunities for combining the four techniques are discussed.
Bruce, Alison; Santorelli, Gillian; Wright, John; Bradbury, John; Barrett, Brendan T; Bloj, Marina; Sheldon, Trevor A
2018-06-13
To determine presenting visual acuity levels and explore the factors associated with failing vision screening in a multi-ethnic population of UK children aged 4-5 years. Visual acuity (VA) using the logMAR Crowded Test was measured in 16,541 children in a population-based vision screening programme. Referral for cycloplegic examination was based on national recommendations (>0.20 logMAR in one or both eyes). Presenting visual impairment (PVI) was defined as VA >0.3 logMAR in the better eye. Multivariable logistic regression was used to assess the association of ethnicity, maternal, and early-life factors with failing vision screening and PVI in participants of the Born in Bradford birth cohort. In total, 2467/16,541 (15%) failed vision screening and 732 (4.4%) had PVI. Children of Pakistani (OR: 2.49; 95% CI: 1.74-3.60) and other ethnicities (OR: 2.00; 95% CI: 1.28-3.12) showed increased odds of PVI compared to white children. Children born to older mothers (OR: 1.63; 95% CI: 1.19-2.24) and of low birth weight (OR: 1.52; 95% CI: 1.00-2.34) also showed increased odds. Follow-up results were available for 1068 (43.3%) children; 993 (93%) were true positives, and 932 (94%) of these had significant refractive error. Astigmatism (>1DC) (44%) was more common in children of Pakistani ethnicity and hypermetropia (>3.0DS) (27%) in white children (Fisher's exact, p < 0.001). A high prevalence of PVI is reported. Failing vision screening and PVI were highly associated with ethnicity. The positive predictive value of the vision screening programme was good, with only 7% of children followed up confirmed as false positives.
Vision System for Coarsely Estimating Motion Parameters for Unknown Fast Moving Objects in Space
Chen, Min; Hashimoto, Koichi
2017-01-01
Motivated by biological interest in analyzing the navigation behaviors of flying animals, we attempt to build a system that measures their motion states. To do this, in this paper, we build a vision system to detect unknown fast moving objects within a given space and calculate their motion parameters, represented by positions and poses. We propose a novel method to detect reliable interest points in images of moving objects, points that can hardly be detected by general-purpose interest point detectors. 3D points reconstructed using these interest points are then grouped and maintained for detected objects according to a carefully designed schedule that accounts for appearance and perspective changes. In the estimation step, a method is introduced to adapt the robust estimation procedure used for dense point sets to the case of sparse sets, reducing the potential risk of greatly biased estimation. Experiments conducted on real scenes show the capability of the system to detect multiple unknown moving objects and estimate their positions and poses. PMID:29206189
Person and gesture tracking with smart stereo cameras
NASA Astrophysics Data System (ADS)
Gordon, Gaile; Chen, Xiangrong; Buck, Ron
2008-02-01
Physical security increasingly involves sophisticated, real-time visual tracking of a person's location inside a given environment, often in conjunction with biometrics and other security-related technologies. However, demanding real-world conditions like crowded rooms, changes in lighting and physical obstructions have proved incredibly challenging for 2D computer vision technology. In contrast, 3D imaging technology is not affected by constant changes in lighting and apparent color, and thus allows tracking accuracy to be maintained in dynamically lit environments. In addition, person tracking with a 3D stereo camera can provide the location and movement of each individual very precisely, even in a very crowded environment. 3D vision only requires that the subject be partially visible to a single stereo camera to be correctly tracked; multiple cameras are used to extend the system's operational footprint, and to contend with heavy occlusion. A successful person tracking system must not only perform visual analysis robustly, but also be small, cheap and consume relatively little power. The TYZX Embedded 3D Vision systems are perfectly suited to provide the low power, small footprint, and low cost points required by these types of volume applications. Several security-focused organizations, including the U.S. Government, have deployed TYZX 3D stereo vision systems in security applications. 3D image data is also advantageous in the related application area of gesture tracking. Visual (uninstrumented) tracking of natural hand gestures and movement provides new opportunities for interactive control including: video gaming, location based entertainment, and interactive displays. 2D images have been used to extract the location of hands within a plane, but 3D hand location enables a much broader range of interactive applications. In this paper, we provide some background on the TYZX smart stereo camera platform, describe the person tracking and gesture tracking systems implemented on this platform, and discuss some deployed applications.
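To illustrate the kind of processing such a person tracker performs, here is a deliberately simplified sketch: stereo-derived 3D points are projected onto the ground plane, clustered into person hypotheses, and associated with existing tracks by nearest neighbour. The height band, grid clustering, and gating threshold are illustrative assumptions; TYZX's actual algorithms are not reproduced here.

```python
import numpy as np

def track_people(points_3d, tracks, gate=0.5):
    """One update of a toy ground-plane person tracker.

    points_3d : (N, 3) world points from the stereo camera, metres (z up)
    tracks    : dict id -> last known (x, y) ground position
    """
    # Keep points in a plausible body-height band, project to the floor.
    body = points_3d[(points_3d[:, 2] > 0.3) & (points_3d[:, 2] < 2.0)]
    ground = body[:, :2]
    # Naive clustering: snap to a coarse grid and average each cell.
    cells = np.round(ground / gate).astype(int)
    detections = [ground[(cells == c).all(axis=1)].mean(axis=0)
                  for c in np.unique(cells, axis=0)]
    # Nearest-neighbour association with existing tracks.
    next_id = max(tracks, default=-1) + 1
    for det in detections:
        if tracks:
            tid = min(tracks, key=lambda t: np.linalg.norm(tracks[t] - det))
            if np.linalg.norm(tracks[tid] - det) < gate:
                tracks[tid] = det
                continue
        tracks[next_id] = det
        next_id += 1
    return tracks
```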
Capsule endoscope localization based on computer vision technique.
Liu, Li; Hu, Chao; Cai, Wentao; Meng, Max Q H
2009-01-01
To build a new type of wireless capsule endoscope with interactive gastrointestinal tract examination, a localization and orientation system is needed for tracking the 3D location and 3D orientation of the capsule movement. The magnetic localization and orientation method produces only 5 DOF, missing the rotation angle about the capsule's main axis. In this paper, we present a complementary orientation approach for the capsule endoscope, in which the 3D rotation can be determined by applying computer vision techniques to the captured endoscopic images. The experimental results show that the complementary orientation method has good accuracy and high feasibility.
Clinical color vision testing and correlation with visual function.
Zhao, Jiawei; Davé, Sarita B; Wang, Jiangxia; Subramanian, Prem S
2015-09-01
To determine if Hardy-Rand-Rittler (H-R-R) and Ishihara testing are accurate estimates of color vision in subjects with acquired visual dysfunction. Assessment of diagnostic tools. Twenty-two subjects with optic neuropathy (aged 18-65) and 18 control subjects were recruited prospectively from an outpatient clinic. Individuals with visual acuity (VA) <20/200 or with congenital color blindness were excluded. All subjects underwent a comprehensive eye examination including VA, color vision, and contrast sensitivity testing. Color vision was assessed using H-R-R and Ishihara plates and Farnsworth D-15 (D-15) discs. D-15 is the accepted standard for detecting and classifying color vision deficits. Contrast sensitivity was measured using Pelli-Robson contrast sensitivity charts. No relationship was found between H-R-R and D-15 scores (P = .477). H-R-R score and contrast sensitivity were positively correlated (P = .003). On multivariate analysis, contrast sensitivity (β = 8.61, P < .001) and VA (β = 2.01, P = .022) both showed association with H-R-R scores. Similar to H-R-R, Ishihara score did not correlate with D-15 score (P = .973), but on multivariate analysis was related to contrast sensitivity (β = 8.69, P < .001). H-R-R and Ishihara scores had an equivalent relationship with contrast sensitivity (P = .069). Neither H-R-R nor Ishihara testing appears to assess color identification in patients with optic neuropathy. Both H-R-R and Ishihara testing are correlated with contrast sensitivity, and these tests may be useful clinical surrogates for contrast sensitivity testing.
Ramaswamy, Shankaran; Hovis, Jeffery K
2004-01-01
Color codes in VDT displays often contain sets of colors that are confusing to individuals with color-vision deficiencies. The purpose of this study is to determine whether individuals with color-vision deficiencies (color defectives) can perform as well as individuals without color-vision deficiencies (color normals) on a colored VDT display used in the railway industry and to determine whether clinical color-vision tests can predict their performance. Of the 52 color defectives, 58% failed the VDT test. The kappa coefficients of agreement for the Farnsworth D-15, Adams desaturated D-15, and Richmond 3rd Edition HRR PIC diagnostic plates were significantly greater than chance. In particular, the D-15 tests have a high probability of predicting who fails the practical test. However, all three tests had an unacceptably high false-negative rate (9.5-35%), so a practical test is still needed.
Learning to perceive differences in solid shape through vision and touch.
Norman, J Farley; Clayton, Anna Marie; Norman, Hideko F; Crabtree, Charles E
2008-01-01
A single experiment was designed to investigate perceptual learning and the discrimination of 3-D object shape. Ninety-six observers were presented with naturally shaped solid objects either visually, haptically, or across the modalities of vision and touch. The observers' task was to judge whether the two sequentially presented objects on any given trial possessed the same or different 3-D shapes. The results of the experiment revealed that significant perceptual learning occurred in all modality conditions, both unimodal and cross-modal. The amount of the observers' perceptual learning, as indexed by increases in hit rate and d', was similar for all of the modality conditions. The observers' hit rates were highest for the unimodal conditions and lowest in the cross-modal conditions. Lengthening the inter-stimulus interval from 3 to 15 s led to increases in hit rates and decreases in response bias. The results also revealed the existence of an asymmetry between two otherwise equivalent cross-modal conditions: in particular, the observers' perceptual sensitivity was higher for the vision-haptic condition and lower for the haptic-vision condition. In general, the results indicate that effective cross-modal shape comparisons can be made between the modalities of vision and active touch, but that complete information transfer does not occur.
Error analysis in stereo vision for location measurement of 3D point
NASA Astrophysics Data System (ADS)
Li, Yunting; Zhang, Jun; Tian, Jinwen
2015-12-01
Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current methods of error analysis are based on an ideal intersection model that calculates the uncertainty region of the point location by intersecting the two pixel fields of view, which may produce loose bounds. Moreover, only a few sources of error, such as pixel error or camera position, are taken into account in the analysis. In this paper we present a straightforward and practical method to estimate the location error that takes most sources of error into account. We consolidate and simplify all the input errors into five parameters by a rotation transformation. Then we use a fast midpoint-method algorithm to derive the mathematical relationships between the target point and these parameters. Thus, the expectation and covariance matrix of the 3D point location are obtained, which constitute the uncertainty region of the point location. Afterwards, we trace the propagation of the primitive input errors through the stereo system, covering the whole analysis chain from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify the performance of our method.
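The midpoint method at the core of the derivation has a compact closed form: the estimated point is the midpoint of the common perpendicular between the two viewing rays. A minimal sketch of that geometric step only (the paper's five-parameter error model and covariance propagation are not reproduced):

```python
import numpy as np

def midpoint_triangulation(c1, d1, c2, d2):
    """Midpoint of the common perpendicular between two viewing rays.

    c1, c2 : camera centres; d1, d2 : unit ray directions.
    """
    w0 = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b              # -> 0 when the rays are parallel
    s = (b * e - c * d) / denom        # parameter on ray 1
    t = (a * e - b * d) / denom        # parameter on ray 2
    return 0.5 * ((c1 + s * d1) + (c2 + t * d2))

# Two rays converging near (0, 0, 5):
c1, c2 = np.array([-0.1, 0.0, 0.0]), np.array([0.1, 0.0, 0.0])
d1 = np.array([0.1, 0.0, 5.0]); d1 /= np.linalg.norm(d1)
d2 = np.array([-0.1, 0.0, 5.0]); d2 /= np.linalg.norm(d2)
print(midpoint_triangulation(c1, d1, c2, d2))   # approx. [0, 0, 5]
```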
Integration of Defocus by Dual Power Fresnel Lenses Inhibits Myopia in the Mammalian Eye
McFadden, Sally A.; Tse, Dennis Y.; Bowrey, Hannah E.; Leotta, Amelia J.; Lam, Carly S.; Wildsoet, Christine F.; To, Chi-Ho
2014-01-01
Purpose. Eye growth compensates in opposite directions to single vision (SV) negative and positive lenses. We evaluated the response of the guinea pig eye to Fresnel-type lenses incorporating two different powers. Methods. A total of 114 guinea pigs (10 groups with 9–14 in each) wore a lens over one eye and interocular differences in refractive error and ocular dimensions were measured in each of three experiments. First, the effects of three Fresnel designs with various diopter (D) combinations (−5D/0D; +5D/0D or −5D/+5D dual power) were compared to three SV lenses (−5D, +5D, or 0D). Second, the ratio of −5D and +5D power in a Fresnel lens was varied (50:50 compared with 60:40). Third, myopia was induced by 4 days of exposure to a SV −5D lens, which was then exchanged for a Fresnel lens (−5D/+5D) or one of two SV lenses (+5D or −5D) and ocular parameters tracked for a further 3 weeks. Results. Dual power lenses induced an intermediate response between that to the two constituent powers (lenses +5D, +5D/0D, 0D, −5D/+5D, −5D/0D and −5D induced +2.1 D, +0.7 D, +0.1 D, −0.3 D, −1.6 D and −5.1 D in mean interocular differences in refractive error, respectively), and changing the ratio of powers induced responses equal to their weighted average. In already myopic animals, continued treatment with SV negative lenses increased their myopia (from −3.3 D to −4.2 D), while switching to SV positive lenses or −5D/+5D Fresnel lenses reduced their myopia (by 2.9 D and 2.3 D, respectively). Conclusions. The mammalian eye integrates competing defocus to guide its refractive development and eye growth. Fresnel lenses, incorporating positive or plano power with negative power, can slow ocular growth, suggesting that such designs may control myopia progression in humans. PMID:24398103
NASA Astrophysics Data System (ADS)
Chuthai, T.; Cole, M. O. T.; Wongratanaphisan, T.; Puangmali, P.
2018-01-01
This paper describes a high-precision motion control implementation for a flexure-jointed micromanipulator. A desktop experimental motion platform has been created based on a 3RUU parallel kinematic mechanism, driven by rotary voice coil actuators. The three arms supporting the platform have rigid links with compact flexure joints as integrated parts and are made by single-process 3D printing. The mechanism overall size is approximately 250x250x100 mm. The workspace is relatively large for a flexure-jointed mechanism, being approximately 20x20x6 mm. A servo-control implementation based on pseudo-rigid-body models (PRBM) of kinematic behavior combined with nonlinear-PID control has been developed. This is shown to achieve fast response with good noise-rejection and platform stability. However, large errors in absolute positioning occur due to deficiencies in the PRBM kinematics, which cannot accurately capture flexure compliance behavior. To overcome this problem, visual servoing is employed, where a digital microscopy system is used to directly measure the platform position by image processing. By adopting nonlinear PID feedback of measured angles for the actuated joints as inner control loops, combined with auxiliary feedback of vision-based measurements, the absolute positioning error can be eliminated. With controller gain tuning, fast dynamic response and low residual vibration of the end platform can be achieved with absolute positioning accuracy within ±1 micron.
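A minimal sketch of the described loop structure: per-joint nonlinear PID controllers form the inner loops, while the vision-measured platform position corrects the joint references through a (PRBM-derived) Jacobian pseudo-inverse. The nonlinear gain law, gains, and interfaces are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class NonlinearPID:
    """PID whose proportional gain grows with error magnitude
    (one common 'nonlinear PID' variant; the paper's law may differ)."""
    def __init__(self, kp, ki, kd, k_nl, dt):
        self.kp, self.ki, self.kd, self.k_nl, self.dt = kp, ki, kd, k_nl, dt
        self.integ, self.prev = 0.0, 0.0

    def update(self, err):
        self.integ += err * self.dt
        deriv = (err - self.prev) / self.dt
        self.prev = err
        kp_eff = self.kp * (1.0 + self.k_nl * abs(err))  # nonlinear term
        return kp_eff * err + self.ki * self.integ + self.kd * deriv

def control_step(q_ref, q_meas, target_xy, xy_vision, pids, jac_pinv):
    """One servo cycle: nominal joint references q_ref are corrected by
    the vision-measured task-space error, then tracked per joint."""
    task_err = np.asarray(target_xy) - np.asarray(xy_vision)
    q_cmd = np.asarray(q_ref) + jac_pinv @ task_err   # outer (vision) loop
    return np.array([pid.update(qc - qm)              # inner joint loops
                     for pid, qc, qm in zip(pids, q_cmd, q_meas)])
```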
NASA Astrophysics Data System (ADS)
Shao, Z.; Li, C.; Zhong, S.; Liu, B.; Jiang, H.; Wen, X.
2015-05-01
Building fine 3D models from outdoor to indoor is becoming a necessity for protecting cultural tourism resources. However, existing 3D modelling technologies mainly focus on outdoor areas. Actually, a 3D model should contain detailed descriptions of both its appearance and its internal structure, including architectural components. In this paper, a portable four-camera stereo photographic measurement system is developed, which can provide a professional solution for fast 3D data acquisition, processing, integration, reconstruction and visualization. Given a specific scene or object, it can directly collect physical geometric information such as the positions, sizes and shapes of an object or a scene, as well as physical property information such as materials and textures. On the basis of this information, a 3D model can be automatically constructed. The system has been applied to the indoor-outdoor seamless modelling of distinctive architecture in two typical cultural tourism zones, that is, Tibetan and Qiang ethnic minority villages in the Sichuan Jiuzhaigou Scenic Area and Tujia ethnic minority villages in the Hubei Shennongjia Nature Reserve, providing a new method and platform for the protection of minority cultural characteristics, 3D reconstruction and cultural tourism.
Evaluation of novel technologies for the miniaturization of flash imaging lidar
NASA Astrophysics Data System (ADS)
Mitev, V.; Pollini, A.; Haesler, J.; Perenzoni, D.; Stoppa, D.; Kolleck, Christian; Chapuy, M.; Kervendal, E.; Pereira do Carmo, João.
2017-11-01
Planetary exploration constitutes one of the main components of European space activities. Missions to Mars, the Moon and asteroids are foreseen, where it is assumed that human missions will be preceded by robotic exploration flights. 3D vision is recognised as a key enabling technology for the relative proximity navigation of spacecraft, where imaging LiDAR is one of the best candidates for such a 3D vision sensor.
NASA Astrophysics Data System (ADS)
MacDougall, Jean; McLeod, Roger
2006-03-01
MacDougall was advised against having a single crystalline lens with a slight cataract surgically removed; it would impact her ability to reengage vision's self-correcting feedback mechanisms. Her Florida ophthalmologist removed both lenses. A Massachusetts ophthalmologist was recently de-licensed for improperly performing just those services. An optometrist says reputed vision repair can easily be tracked and evaluated; we posit that Naturoptics' effects on cataracts can be similarly assessed. ``Cures'' are detectable. Naturoptics users may show glaucoma reversal. EDWARD R. ELLIS, Jr., N.D. (The Chelmsford Clinic, Massachusetts), stabilizes RP, preventing blindness.
Full-field 3D shape measurement of specular object having discontinuous surfaces
NASA Astrophysics Data System (ADS)
Zhang, Zonghua; Huang, Shujun; Gao, Nan; Gao, Feng; Jiang, Xiangqian
2017-06-01
This paper presents a novel Phase Measuring Deflectometry (PMD) method to measure specular objects having discontinuous surfaces. A mathematical model is established to directly relate the absolute phase and depth, instead of the phase and gradient. Based on the model, a hardware measuring system has been set up, which consists of a precise translating stage, a projector, a diffuser and a camera. The stage positions the projector and the diffuser together at a known location during measurement. By using model-based and machine vision methods, system calibration is accomplished to provide the required parameters and conditions. Verification tests are given to evaluate the effectiveness of the developed system. 3D (three-dimensional) shapes of a concave mirror and a monolithic multi-mirror array having multiple specular surfaces have been measured. Experimental results show that the proposed method can obtain the 3D shape of specular objects having discontinuous surfaces effectively.
Improving automated 3D reconstruction methods via vision metrology
NASA Astrophysics Data System (ADS)
Toschi, Isabella; Nocerino, Erica; Hess, Mona; Menna, Fabio; Sargeant, Ben; MacDonald, Lindsay; Remondino, Fabio; Robson, Stuart
2015-05-01
This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive 3D coordinates of few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.
NASA Astrophysics Data System (ADS)
Choi, Jae Hyung; Kuk, Jung Gap; Kim, Young Il; Cho, Nam Ik
2012-01-01
This paper proposes an algorithm for the detection of pillars or posts in video captured by a single camera mounted on the forward side of the rear-view mirror in a car. The main purpose of this algorithm is to compensate for the weakness of current ultrasonic parking assist systems, which cannot accurately locate pillars and often fail to recognize narrow posts. The proposed algorithm consists of three steps: straight-line detection, line tracking, and estimation of the 3D position of pillars. In the first step, strong lines are found by the Hough transform. The second step combines detection and tracking, and the third calculates the 3D position of each line by analyzing the trajectory of relative positions together with the camera parameters. Experiments on synthetic and real images show that the proposed method successfully locates and tracks the position of pillars, which helps the ultrasonic system to correctly locate the edges of pillars. It is believed that the proposed algorithm can also be employed as a basic element for vision-based autonomous driving systems.
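The first step, finding strong lines with the Hough transform, can be sketched with OpenCV; the edge thresholds, vote count, and near-vertical filter below are illustrative assumptions rather than the paper's tuned values.

```python
import cv2
import numpy as np

def detect_strong_lines(frame_bgr, canny_lo=50, canny_hi=150, votes=120):
    """Return (rho, theta) lines from the standard Hough transform,
    keeping near-vertical ones as pillar/post candidates."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, canny_lo, canny_hi)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, votes)
    if lines is None:
        return []
    # theta is the angle of the line normal; ~0 or ~pi means a vertical line.
    return [(r, t) for r, t in lines[:, 0]
            if t < 0.2 or t > np.pi - 0.2]
```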
Robust object tracking techniques for vision-based 3D motion analysis applications
NASA Astrophysics Data System (ADS)
Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.
2016-04-01
Automated and accurate spatial motion capture of an object is necessary for a wide variety of applications including industry and science, virtual reality and film, medicine and sports. For most applications, the reliability and accuracy of the obtained data, as well as convenience for the user, are the main characteristics defining the quality of a motion capture system. Among the existing systems for 3D data acquisition, based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems have a set of advantages such as high acquisition speed, potential for high accuracy, and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capturing process. To provide high accuracy of the obtained spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes two to four machine vision cameras for capturing video sequences of object motion. The original camera calibration and external orientation procedures provide the basis for high accuracy of 3D measurements. A set of algorithms, both for detecting, identifying and tracking similar targets and for marker-less object motion capture, has been developed and tested. The evaluation results show high robustness and reliability for various motion analysis tasks in technical and biomechanical applications.
Bakaraju, Ravi C; Fedtke, Cathleen; Ehrmann, Klaus; Ho, Arthur
2015-01-01
To compare the contributions of single vision (SVCL) and multifocal contact lenses (MFCL) to the relative peripheral refraction (RPR) profiles obtained via an autorefractor and an aberrometer in a pilot study. Two instruments, Shin-Nippon NVision K5001 (SN) and COAS-HD, were modified to permit open field PR measurements. Two myopic adults (CF, RB) were refracted (cycloplegia) under eight conditions: baseline (no CL); three SVCLs: Focus Dailies(®) (Alcon, USA), PureVision(®) (Bausch & Lomb, USA) and AirOptix(®) (Alcon, USA); and four MFCLs: AirOptix(®) (Alcon, USA), Proclear(®) Distant and Near (Cooper Vision, USA), and PureVision(®) (Bausch & Lomb, USA). CLs had a distance prescription of -2.00D and for MFCLs, a +2.50D Add was selected. Five independent measurements were performed at field angles from -40° to +40° in 10° increments with both instruments. The COAS-HD measures were analyzed at 3 mm pupil diameter. Results are reported as a change in the relative PR profile, as refractive power vector components: M, J180, and J45. Overall, at baseline, M, J180 and J45 measures obtained with SN and COAS-HD were considerably different only for field angles ≥±30°, which agreed well with previous studies. With respect to M, this observation held true for most SVCLs with a few exceptions. The J180 measures obtained with COAS-HD were considerably greater in magnitude than those acquired with SN. For SVCLs, the greatest difference was found at -40° for AirOptix SV (ΔCF=3.20D, ΔRB=1.56D) and for MFCLs it was for Proclear Distance at -40° (ΔCF=2.58D, ΔRB=1.39D). The J45 measures obtained with SN were noticeably different from the respective measures with COAS-HD, both in magnitude and sign. The greatest difference was found with AirOptix Multifocal in subject RB at -40°, where the COAS-HD measurement was 1.50D more positive. In some cases, the difference in the RPR profiles observed between subjects appeared to be associated with CL decentration. For most test conditions, distinct differences were observed between the RPR measures obtained with the two modified instruments. The differences varied with CL design and centration. Although the pilot study supports the interchangeable use of the two instruments for on- and off-axis refraction in unaided eyes or eyes corrected with low/no spherical aberration, we advocate the use of the COAS-HD over the SN for special purposes such as refracting through multifocal CLs.
2015-08-21
The software was developed using the Open Computer Vision (OpenCV) libraries [6] for computer vision and the Qt library [7] for the user interface. The application calibrates the cameras using the plane-based calibration model from the OpenCV calib3d module. [Only fragments of this report were recovered; the cited references are the OpenCV project (opencv.org) and the Qt Project.]
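For context, the plane-based model in calib3d is the standard chessboard calibration procedure; a minimal single-camera sketch (board geometry and image paths are illustrative, and at least one detected board is assumed):

```python
import glob
import cv2
import numpy as np

board = (9, 6)                                # inner chessboard corners
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

obj_pts, img_pts, size = [], [], None
for path in glob.glob("calib/*.png"):         # illustrative image set
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]               # (width, height)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, size, None, None)
print("reprojection RMS:", rms)
print("intrinsic matrix:\n", K)
```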
NASA Astrophysics Data System (ADS)
Alpatov, Boris; Babayan, Pavel; Ershov, Maksim; Strotov, Valery
2016-10-01
This paper describes the implementation of an orientation estimation algorithm in an FPGA-based vision system. An approach to estimating the orientation of objects lacking axial symmetry is proposed. The suggested algorithm is intended to estimate the orientation of a specific known 3D object based on the object's 3D model. The proposed orientation estimation algorithm consists of two stages: learning and estimation. The learning stage is devoted to exploring the studied object. Using the 3D model, we gather a set of training images by capturing the 3D model from viewpoints evenly distributed on a sphere; the viewpoint distribution follows the geosphere principle. The gathered training image set is used for calculating descriptors, which are then used in the estimation stage of the algorithm. The estimation stage focuses on the matching process between an observed image descriptor and the training image descriptors. The experimental research was performed using a set of images of an Airbus A380. The proposed orientation estimation algorithm showed good accuracy in all case studies. The real-time performance of the algorithm in the FPGA-based vision system was demonstrated.
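The geosphere viewpoint distribution mentioned above is typically built by subdividing an icosahedron; as a lightweight stand-in with a similarly even spread, a Fibonacci sphere can be used. The sketch below is illustrative, not the authors' implementation.

```python
import numpy as np

def fibonacci_sphere(n):
    """n roughly evenly spaced unit vectors on the sphere."""
    i = np.arange(n)
    golden = (1 + 5 ** 0.5) / 2
    z = 1 - 2 * (i + 0.5) / n            # uniform in z
    theta = 2 * np.pi * i / golden       # golden-angle spiral
    r = np.sqrt(1 - z * z)
    return np.stack([r * np.cos(theta), r * np.sin(theta), z], axis=1)

# 642 viewpoints (the vertex count of a thrice-subdivided icosahedron);
# rendering the 3D model from each viewpoint yields the training set.
viewpoints = fibonacci_sphere(642)
```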
Characteristics of visual fatigue under the effect of 3D animation.
Chang, Yu-Shuo; Hsueh, Ya-Hsin; Tung, Kwong-Chung; Jhou, Fong-Yi; Lin, David Pei-Cheng
2015-01-01
Visual fatigue is commonly encountered in modern life. The clinical visual fatigue characteristics caused by 2-D and 3-D animations may be different, but they have not been characterized in detail. This study tried to distinguish the differential effects on visual fatigue caused by 2-D and 3-D animations. A total of 23 volunteers were subjected to accommodation and vergence assessments, followed by a 40-min video game program designed to aggravate their asthenopic symptoms. The volunteers were then assessed for accommodation and vergence parameters again, directed to watch a 5-min 3-D video program, and then assessed again for the same parameters. The results support that 3-D animations caused visual fatigue characteristics similar in some specific aspects to those caused by 2-D animations. Furthermore, 3-D animations may lead to more exhaustion in both the ciliary and extra-ocular muscles, and such differential effects were more evident under high near-vision demand. The current results indicate that a suitable set of indexes should be promoted in the design of 3-D displays and equipment.
Optoelectronic instrumentation enhancement using data mining feedback for a 3D measurement system
NASA Astrophysics Data System (ADS)
Flores-Fuentes, Wendy; Sergiyenko, Oleg; Gonzalez-Navarro, Félix F.; Rivas-López, Moisés; Hernandez-Balbuena, Daniel; Rodríguez-Quiñonez, Julio C.; Tyrsa, Vera; Lindner, Lars
2016-12-01
3D measurement by a cyber-physical system based on optoelectronic scanning instrumentation has been enhanced by outlier-detection and regression data-mining feedback. The prototype has applications in (1) industrial manufacturing systems that include robotic machinery, embedded vision, and motion control, (2) health care systems for measurement scanning, and (3) infrastructure, by providing structural health monitoring. This paper presents new research on the data processing of a 3D measurement vision sensing database. Outliers in the multivariate data have been detected and removed to improve the results of artificial-intelligence regression algorithms. Regression on physical measurement error data has been used to correct 3D measurement errors. We conclude that joining physical phenomena, measurement, and computation is an effective approach for feedback loops in the control of industrial, medical, and civil tasks.
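A hedged sketch of the outlier-removal-plus-regression feedback on synthetic data: a MAD-based filter flags outliers, then a linear model fitted to the remaining measurement errors supplies the correction. The paper's actual database and regression algorithms are not reproduced.

```python
import numpy as np

def mad_inlier_mask(x, k=3.5):
    """True where the modified z-score (MAD-based) is below k."""
    med = np.median(x)
    mad = np.median(np.abs(x - med)) + 1e-12
    return np.abs(0.6745 * (x - med) / mad) < k

rng = np.random.default_rng(1)
ref = np.linspace(0.0, 100.0, 200)               # reference positions
meas = ref + 0.02 * ref + rng.normal(0, 0.05, ref.size)
meas[::37] += 3.0                                # injected outliers

err = meas - ref
keep = mad_inlier_mask(err)
coef = np.polyfit(ref[keep], err[keep], deg=1)   # linear error model
corrected = meas - np.polyval(coef, ref)

print("error std before:", err[keep].std())
print("error std after :", (corrected - ref)[keep].std())
```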
Retrospective analysis of refractive errors in children with vision impairment.
Du, Jojo W; Schmid, Katrina L; Bevan, Jennifer D; Frater, Karen M; Ollett, Rhondelle; Hein, Bronwyn
2005-09-01
Emmetropization is the reduction in neonatal refractive errors that occurs after birth. Ocular disease may affect this process. We aimed to determine the relative frequency of ocular conditions causing vision impairment in the pediatric population and characterize the refractive anomalies present. We also compared the causes of vision impairment in children today to those between 1974 and 1981. Causes of vision impairment and refractive data of 872 children attending a pediatric low-vision clinic from 1985 to 2002 were retrospectively collated. As a result of associated impairments, refractive data were not available for 59 children. An analysis was made of the causes of vision impairment, the distribution of refractive errors in children with vision impairment, and the average type of refractive error for the most commonly seen conditions. We found that cortical or cerebral vision impairment (CVI) was the most common condition causing vision impairment, accounting for 27.6% of cases. This was followed by albinism (10.6%), retinopathy of prematurity (ROP; 7.0%), optic atrophy (6.2%), and optic nerve hypoplasia (5.3%). Vision impairment was associated with ametropia; fewer than 25% of the children had refractive errors ≤ ±1 D. The refractive error frequency plots (for 0 to 2-, 6 to 8-, and 12 to 14-year age bands) had a Gaussian distribution indicating that the emmetropization process was abnormal. The mean spherical equivalent refractive error of the children (n = 813) was +0.78 ± 6.00 D with 0.94 ± 1.24 D of astigmatism and 0.92 ± 2.15 D of anisometropia. Most conditions causing vision impairment such as albinism were associated with low amounts of hyperopia. Moderate myopia was observed in children with ROP. The relative frequency of ocular conditions causing vision impairment in children has changed since the 1970s. Children with vision impairment often have an associated ametropia suggesting that the emmetropization system is also impaired.
Fusion of Multiple Sensing Modalities for Machine Vision
1994-05-31
[Only report fragments were recovered. The project concerns the fusion of multiple sensing modalities, notably thermal and visual imagery, for machine vision; the recovered citations include work on modeling non-homogeneous 3-D objects for thermal and visual image synthesis (Pattern Recognition, in press) and object recognition work by Nair and Aggarwal presented at the 20th AIPR Workshop: Computer Vision--Meeting the Challenges, McLean, Virginia, October 1991, along with a list of participating graduate students.]
46 CFR 92.03-1 - Navigation bridge visibility.
Code of Federal Regulations, 2010 CFR
2010-10-01
... after September 7, 1990, must meet the following requirements: (a) The field of vision from the... obstruction must not exceed 5 degrees. (2) From the conning position, the horizontal field of vision extends... paragraph (a)(1) of this section. (3) From each bridge wing, the field of vision extends over an arc from at...
46 CFR 190.02-1 - Navigation bridge visibility.
Code of Federal Regulations, 2010 CFR
2010-10-01
... September 7, 1990, must meet the following requirements: (a) The field of vision from the navigation bridge... not exceed 5 degrees. (2) From the conning position, the horizontal field of vision extends over an...)(1) of this section. (3) From each bridge wing, the field of vision extends over an arc from at least...
46 CFR 108.801 - Navigation bridge visibility.
Code of Federal Regulations, 2010 CFR
2010-10-01
... September 7, 1990, must meet the following requirements: (a) The field of vision from the navigation bridge... not exceed 5 degrees. (2) From the conning position, the horizontal field of vision extends over an...)(1) of this section. (3) From each bridge wing, the field of vision extends over an arc from at least...
Reduced vision in highly myopic eyes without ocular pathology: the ZOC-BHVI high myopia study.
Jong, Monica; Sankaridurg, Padmaja; Li, Wayne; Resnikoff, Serge; Naidoo, Kovin; He, Mingguang
2018-01-01
The aim was to investigate the relationship of the magnitude of myopia with visual acuity in highly myopic eyes without ocular pathology. Twelve hundred and ninety-two highly myopic eyes (up to -6.00 DS in both eyes, no astigmatic cut-off) with no ocular pathology from the ZOC-BHVI high myopia study in China had cycloplegic refraction, followed by subjective refraction, visual acuity and axial length measurements. Two logistic regression models were undertaken to test the association of age, gender, refractive error, axial length and parental myopia with reduced vision. The mean group age was 19.0 ± 8.6 years; subjective spherical equivalent refractive error was -9.03 ± 2.73 D; objective spherical equivalent refractive error was -8.90 ± 2.60 D; and axial length was 27.0 ± 1.3 mm. Using visual acuity, 82.4 per cent had normal vision, 16.0 per cent had mildly reduced vision, 1.2 per cent had moderately reduced vision, 0.3 per cent had severely reduced vision and no subjects were blind. The percentage with reduced vision increased with spherical equivalent, to 74.5 per cent from -15.00 to -39.99 D; with axial length, to 67.7 per cent of eyes from 30.01 to 32.00 mm; and with age, to 22.9 per cent of those 41 years and over. Spherical equivalent and axial length were significantly associated with reduced vision (p < 0.0001). Age and parental myopia were not significantly associated with reduced vision. Gender was significant in one model (p = 0.04). Mildly reduced vision is common in high myopia without ocular pathology and is strongly correlated with greater magnitudes of refractive error and axial length. Better understanding is required to minimise reduced vision in high myopes.
Rethinking GIS Towards The Vision Of Smart Cities Through CityGML
NASA Astrophysics Data System (ADS)
Guney, C.
2016-10-01
Smart cities present a substantial growth opportunity in the coming years. The role of GIS in the smart city ecosystem is to integrate different data acquired by sensors in real time and to provide better decisions, more efficiency and improved collaboration. A semantically enriched vision of GIS will help evolve smart cities into tomorrow's much smarter cities, since geospatial/location data and applications may be recognized as a key ingredient of the smart city vision. However, the geospatial information communities need to debate the question "Is 3D web and mobile GIS technology ready for smart cities?" This research places an emphasis on the challenges of virtual 3D city models on the road to smarter cities.
Calibration of stereo rigs based on the backward projection process
NASA Astrophysics Data System (ADS)
Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui; Zhao, Zixin
2016-08-01
High-accuracy 3D measurement based on binocular vision system is heavily dependent on the accurate calibration of two rigidly-fixed cameras. In most traditional calibration methods, stereo parameters are iteratively optimized through the forward imaging process (FIP). However, the results can only guarantee the minimal 2D pixel errors, but not the minimal 3D reconstruction errors. To address this problem, a simple method to calibrate a stereo rig based on the backward projection process (BPP) is proposed. The position of a spatial point can be determined separately from each camera by planar constraints provided by the planar pattern target. Then combined with pre-defined spatial points, intrinsic and extrinsic parameters of the stereo-rig can be optimized by minimizing the total 3D errors of both left and right cameras. An extensive performance study for the method in the presence of image noise and lens distortions is implemented. Experiments conducted on synthetic and real data demonstrate the accuracy and robustness of the proposed method.
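The backward-projection idea above lends itself to a compact sketch. The following is a minimal illustration, not the authors' code: intrinsics K1 and K2 are assumed known, image correspondences of the pre-defined target points are given, and the stereo extrinsics are refined by minimizing 3D reconstruction errors rather than 2D pixel errors. The DLT triangulation and all names are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen at pixels x1 and x2."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def cost_3d(params, K1, K2, pts1, pts2, X_true):
    """Residuals are 3D errors against known target points, not pixel errors."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:6].reshape(3, 1)
    P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])  # left camera frame
    P2 = K2 @ np.hstack([R, t])                          # right w.r.t. left
    X_hat = np.array([triangulate(P1, P2, a, b) for a, b in zip(pts1, pts2)])
    return (X_hat - X_true).ravel()

# res = least_squares(cost_3d, x0, args=(K1, K2, pts1, pts2, X_true))
```

Minimizing this cost with a standard least-squares solver yields extrinsics tuned for small 3D errors, which is the distinction the abstract draws against forward-imaging (reprojection-error) calibration.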
Chen, Chun-Fu; Huang, Kuo-Chen
2016-04-01
This study investigated the effects of target distance (30, 35, and 40 cm) and the color of background lighting (red, green, blue, and yellow) on the duration of movements made by participants with low vision, myopia, and normal vision while performing a reaching task; 48 students (21 women, 27 men; M age = 21.8 years, SD = 2.4) participated in the study. Participants reached for a target (a white LED light) whose vertical position varied randomly across trials, ranging in distance from 30 to 40 cm. Movement time was analyzed using a 3 (participant group) × [4 (color of background lighting) × 3 (movement distance)] mixed-design ANOVA model. Results indicated longer times for completing a reaching movement when participants belonged to the low vision group, when the distance between the starting position and the target position was longer (40 cm), and when the reaching movement occurred in the red-background lighting condition. These results are particularly relevant for situations in which a user is required to respond to a signal by reaching toward a button or an icon. © The Author(s) 2016.
Pérez-San-Gregorio, M Á; Martín-Rodríguez, A; Borda-Mas, M; Avargues-Navarro, M L; Pérez-Bernal, J; Gómez-Bravo, M Á
2018-03-01
To analyze the influence of two variables (post-traumatic growth and time since liver transplantation) on coping strategies used by the transplant recipient's family members, 218 family members who were the main caregivers of liver transplant recipients were selected. They were evaluated using the Posttraumatic Growth Inventory and the Brief COPE. A 3 × 3 factorial analysis of variance was used to analyze the influence that post-traumatic growth level (low, medium, and high) and time since transplantation (≤3.5 years, >3.5 to ≤9 years, and >9 years) exerted on caregiver coping strategies. No interactive effects between the two factors were found. The only significant main effect was the influence of the post-traumatic growth factor on the following variables: instrumental support (P = .007), emotional support (P = .005), self-distraction (P = .006), positive reframing (P < .001), acceptance (P = .013), and religion (P < .001). According to the most relevant effect sizes, low post-traumatic growth compared with medium growth was associated with less use of self-distraction (P = .006, d = -0.52, medium effect size), positive reframing (P = .001, d = -0.62, medium effect size), and religion (P < .001, d = -0.66, medium effect size), and in comparison with high growth, it was associated with less use of positive reframing (P = .002, d = -0.56, medium effect size) and religion (P < .001, d = 0.87, large effect size). Regardless of the time elapsed since the stressful life event (liver transplantation), family members with low post-traumatic growth usually use fewer coping strategies involving a positive, transcendent vision to deal with transplantation. Copyright © 2017 Elsevier Inc. All rights reserved.
Dawidek, Mark T; Roach, Victoria A; Ott, Michael C; Wilson, Timothy D
A major challenge in laparoscopic surgery is the lack of depth perception. With the development and continued improvement of 3D video technology, the potential benefit of restoring 3D vision to laparoscopy has received substantial attention from the surgical community. Despite this, procedures conducted under 2D vision remain the standard of care, and trainees must become proficient in 2D laparoscopy. This study aims to determine whether incorporating 3D vision into a 2D laparoscopic simulation curriculum accelerates skill acquisition in novices. Postgraduate year-1 surgical specialty residents (n = 15) at the Schulich School of Medicine and Dentistry at Western University were randomized into 1 of 2 groups. The control group practiced the Fundamentals of Laparoscopic Surgery peg-transfer task to proficiency exclusively under standard 2D laparoscopy conditions. The experimental group first practiced peg transfer in 3D, with direct visualization of the working field. Upon reaching proficiency, this group underwent a perceptual switch, changing to standard 2D laparoscopy conditions, and once again trained to proficiency. Incorporating 3D direct visualization before training under standard 2D conditions significantly (p < 0.05) reduced the total training time to proficiency, by 10.9 minutes or 32.4%. There was no difference in the total number of repetitions to proficiency. Data were also used to generate learning curves for each respective training protocol. An adaptive learning approach, which incorporates 3D direct visualization into a 2D laparoscopic simulation curriculum, accelerates skill acquisition. This is in contrast to previous work, possibly owing to the proficiency-based methodology employed, and has implications for resource savings in surgical training. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Chengqi; Ren, Zhigang; Yang, Bo; An, Qinghao; Yu, Xiangru; Li, Jinping
2017-12-01
In the process of dismounting and assembling the drop switch for the high-voltage electric power live line working (EPL2W) robot, one of the key problems is the precision of positioning for manipulators, gripper and the bolts used to fix drop switch. To solve it, we study the binocular vision system theory of the robot and the characteristic of dismounting and assembling drop switch. We propose a coarse-to-fine image registration algorithm based on image correlation, which can improve the positioning precision of manipulators and bolt significantly. The algorithm performs the following three steps: firstly, the target points are marked respectively in the right and left visions, and then the system judges whether the target point in right vision can satisfy the lowest registration accuracy by using the similarity of target points' backgrounds in right and left visions, this is a typical coarse-to-fine strategy; secondly, the system calculates the epipolar line, and then the regional sequence existing matching points is generated according to neighborhood of epipolar line, the optimal matching image is confirmed by calculating the similarity between template image in left vision and the region in regional sequence according to correlation matching; finally, the precise coordinates of target points in right and left visions are calculated according to the optimal matching image. The experiment results indicate that the positioning accuracy of image coordinate is within 2 pixels, the positioning accuracy in the world coordinate system is within 3 mm, the positioning accuracy of binocular vision satisfies the requirement dismounting and assembling the drop switch.
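A hedged sketch of the matching step described above: given a target point marked in the left image and a known fundamental matrix F, candidates in the right image are scored by normalized cross-correlation only within a band around the epipolar line. Function names, the band width, and the NCC scoring are illustrative stand-ins for the authors' correlation matching.

```python
import cv2
import numpy as np

def match_along_epipolar(left, right, pt_left, F, half=15, band=5):
    """Search the right image for pt_left's match near its epipolar line.
    left/right: grayscale uint8 images; assumes a non-vertical epipolar line."""
    x, y = int(pt_left[0]), int(pt_left[1])
    tmpl = left[y - half:y + half + 1, x - half:x + half + 1]
    a, b, c = (F @ np.array([x, y, 1.0])).ravel()   # epipolar line a*u + b*v + c = 0
    best_score, best_pt = -1.0, None
    for u in range(half, right.shape[1] - half):
        v = int(round(-(a * u + c) / b))            # line point at column u
        if v < half + band or v >= right.shape[0] - half - band:
            continue
        strip = right[v - half - band:v + half + band + 1, u - half:u + half + 1]
        score = cv2.matchTemplate(strip, tmpl, cv2.TM_CCOEFF_NORMED).max()
        if score > best_score:
            best_score, best_pt = score, (u, v)
    return best_pt, best_score
```

Restricting the correlation search to the epipolar neighborhood reduces a 2D search to a near-1D one, mirroring the coarse-to-fine strategy in the abstract.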
3D laser imaging for ODOT interstate network at true 1-mm resolution.
DOT National Transportation Integrated Search
2014-12-01
With the development of 3D laser imaging technology, the latest iteration of PaveVision3D Ultra can obtain true 1 mm resolution 3D data at full-lane coverage in all three directions at highway speeds up to 60 MPH. This project provides rapid survey ...
[Strabismus surgery in Grave's disease--dose-effect relationships and functional results].
Schittkowski, M; Fichter, N; Guthoff, R
2004-11-01
Strabismus in thyroid ophthalmopathy is based on a loss of the contractility and distensibility of the external ocular muscles. Different therapeutic approaches are available, such as recession after pre- or intraoperative measurement, adjustable sutures, antagonist resection, or contralateral synergist faden operation. 26 patients with strabismus in thyroid ophthalmopathy were operated on between 2000 and 2003. All patients were examined preoperatively, then 1 day and 3-6 months (maximum 36 months) postoperatively. Before proceeding with surgery, we waited at least 6 months after stabilization of ocular alignment and normalization of thyroid chemistry. Preoperative vertical deviation was 10-44 PD (mean 22); 3 months postoperatively it was 2-10 PD (mean 1.5). Recession of the fibrotic muscle leads to reproducible results: 3.98 +/- 0.52 PD vertical deviation/mm for the inferior rectus. In the case of a large preoperative deviation, the correction might not be sufficient in the first few days or weeks; a second operation should not be carried out before 3 months. 7 patients were operated on twice, and 1 patient needed three operations. 4 patients (preop. 0) achieved no double vision at all; 15 patients (preop. 1) had no double vision in the primary and reading positions; 3 patients (preop. 0) had no double vision with a maximum of 5 PD; 1 patient (preop. 7) had double vision in the primary or reading position even with prisms; and 2 patients (preop. 17) had double vision in every position. We advocate that recession of the restricted inferior or internal rectus muscle is precise, safe and effective in patients with thyroid ophthalmopathy. The recessed muscle should be fixed directly at the sclera to avoid late over-correction through a slipped muscle. The success rate in terms of binocular single vision was 76% and 88% with prisms added.
An evaluation of the lag of accommodation using photorefraction.
Seidemann, Anne; Schaeffel, Frank
2003-02-01
The lag of accommodation which occurs in most human subjects during reading has been proposed to explain the association between reading and myopia. However, the measured lags are variable among different published studies and current knowledge on its magnitude rests largely on measurements with the Canon R-1 autorefractor. Therefore, we have measured it with another technique, eccentric infrared photorefraction (the PowerRefractor), and studied how it can be modified. Particular care was taken to ensure correct calibration of the instrument. Ten young adult subjects were refracted both in the fixation axis of the right eye and from the midline between both eyes, while they read text both monocularly and binocularly at 1.5, 2, 3, 4 and 5 D distance ("group 1"). A second group of 10 subjects ("group 2"), measured from the midline between both eyes, was studied to analyze the effects of binocular vs monocular vision, addition of +1 or +2 D lenses, and of letter size. Spherical equivalents (SE) were analyzed in all cases. The lag of accommodation was variable among subjects (standard deviations among groups and viewing distances ranging from 0.18 to 1.07 D) but was significant when the measurements were done in the fixation axis (0.35 D at 3 D target distance to 0.60 D at 5 D with binocular vision; p<0.01 or better all cases). Refracting from the midline between both eyes tended to underestimate the lag of accommodation although this was significant only at 5 D (ANOVA: p<0.0001, post hoc t-test: p<0.05). There was a small improvement in accommodation precision with binocular compared to monocular viewing but significance was reached only for the 5 D reading target (group 1--lags for a 3/4/5 D target: 0.35 vs 0.41 D/0.48 vs 0.47 D/0.60 vs 0.66 D, ANOVA: p<0.0001, post hoc t-test: p<0.05; group 2--0.29 vs 0.12 D, 0.33 vs 0.16 D, 0.23 vs -0.31 D, ANOVA: p<0.0001, post hoc t-test: p<0.05). Adjusting the letter height for constant angular subtense (0.2 deg) induced scarcely more accommodation than keeping letter size constantly at 3.5 mm (ANOVA: p<0.0001, post hoc t-test: n.s.). Positive trial lenses reduced the lag of accommodation under monocular viewing conditions and even reversed it with binocular vision. After consideration of possible sources of measurement error, the lag of accommodation measured with photorefraction at 3 D (0.41 D SE monocular and 0.35 D SE binocular) was in the range of published values from the Canon R-1 autorefractor. With the measured lag, simulations of the retinal images for a diffraction limited eye suggest surprisingly poor letter contrast on the retina.
Blake, Randolph; Wilson, Hugh
2010-01-01
This essay reviews major developments, empirical and theoretical, in the field of binocular vision during the last 25 years. We limit our survey primarily to work on human stereopsis, binocular rivalry and binocular contrast summation, with discussion where relevant of single-unit neurophysiology and human brain imaging. We identify several key controversies that have stimulated important work on these problems. In the case of stereopsis those controversies include position versus phase encoding of disparity, dependence of disparity limits on spatial scale, role of occlusion in binocular depth and surface perception, and motion in 3D. In the case of binocular rivalry, controversies include eye versus stimulus rivalry, role of “top-down” influences on rivalry dynamics, and the interaction of binocular rivalry and stereopsis. Concerning binocular contrast summation, the essay focuses on two representative models that highlight the evolving complexity in this field of study. PMID:20951722
Computer Vision Assisted Virtual Reality Calibration
NASA Technical Reports Server (NTRS)
Kim, W.
1999-01-01
A computer vision assisted semi-automatic virtual reality (VR) calibration technology has been developed that can accurately match a virtual environment of graphically simulated three-dimensional (3-D) models to the video images of the real task environment.
Wang, Ming-Shan; Zhang, Rong-wei; Su, Ling-Yan; Li, Yan; Peng, Min-Sheng; Liu, He-Qun; Zeng, Lin; Irwin, David M; Du, Jiu-Lin; Yao, Yong-Gang; Wu, Dong-Dong; Zhang, Ya-Ping
2016-01-01
As noted by Darwin, chickens have the greatest phenotypic diversity of all birds, but an interesting evolutionary difference between domestic chickens and their wild ancestor, the Red Junglefowl, is their comparatively weaker vision. Existing theories suggest that the diminished visual prowess of domestic chickens reflects changes driven by the relaxation of functional constraints on vision, but the evidence identifying the underlying genetic mechanisms responsible for this change has not been definitively characterized. Here, a genome-wide analysis of the domestic chicken and Red Junglefowl genomes showed significant enrichment for positively selected genes involved in the development of vision. There were significant differences between domestic chickens and their wild ancestors in the level of mRNA expression of these genes in the retina. Numerous additional genes involved in the development of vision also showed significant differences in mRNA expression between domestic chickens and their wild ancestors, particularly genes associated with phototransduction and photoreceptor development, such as RHO (rhodopsin), GUCA1A, PDE6B and NR2E3. Finally, we characterized the potential role of the VIT gene in vision, which experienced positive selection and downregulated expression in the retina of the village chicken. Overall, our results suggest that positive selection, rather than relaxation of purifying selection, contributed to the evolution of vision in domestic chickens. The progenitors of domestic chickens harboring weaker vision may have shown a reduced fear response and vigilance, making them easier to unconsciously select and/or domesticate. PMID:27033669
The 3D laser radar vision processor system
NASA Astrophysics Data System (ADS)
Sebok, T. M.
1990-10-01
Loral Defense Systems (LDS) developed a 3D Laser Radar Vision Processor system capable of detecting, classifying, and identifying small mobile targets as well as larger fixed targets, using three-dimensional laser radar imagery, for use with a robotic-type system. This processor system is designed to interface with the NASA Johnson Space Center in-house Extra Vehicular Activity (EVA) Retriever robot program and provide it with the information needed to fetch and grasp targets in a space-type scenario.
Benavente-Perez, Alexandra; Nour, Ann; Troilo, David
2012-09-21
We evaluated the effect of imposing negative and positive defocus simultaneously on the eye growth and refractive state of the common marmoset, a New World primate that compensates for either negative or positive defocus when they are imposed individually. Ten marmosets were reared with multizone contact lenses of alternating powers (-5 diopters [D]/+5 D), in a 50:50 ratio for an average pupil of 2.80 mm, over the right eye (experimental) and plano over the fellow eye (control) from 10 to 12 weeks. The effects on refraction (mean spherical equivalent [MSE]) and vitreous chamber depth (VC) were measured and compared to untreated, and -5 D and +5 D single vision contact lens-reared marmosets. Over the course of the treatment, pupil diameters ranged from 2.26 to 2.76 mm, leading to 1.5 times greater exposure to negative than positive power zones. Despite this, at different intervals during treatment, treated eyes were on average relatively more hyperopic and smaller than controls (experimental-control [exp-con] mean MSE ± SE +1.44 ± 0.45 D, mean VC ± SE -0.05 ± 0.02 mm) and the effects were similar to those in marmosets raised with +5 D single vision contact lenses (exp-con mean MSE ± SE +1.62 ± 0.44 D, mean VC ± SE -0.06 ± 0.03 mm). Six weeks into treatment, the interocular growth rates in multizone animals were already lower than in -5 D-treated animals (multizone -1.0 ± 0.1 μm/day, -5 D +2.1 ± 0.9 μm/day) and did not change significantly throughout treatment. Imposing hyperopic and myopic defocus simultaneously using concentric contact lenses resulted in relatively smaller and less myopic eyes, despite treated eyes being exposed to a greater percentage of negative defocus. Exposing the retina to combined dioptric powers with multifocal lenses that include positive defocus might be an effective treatment to control myopia development or progression.
NASA Astrophysics Data System (ADS)
Di, Si; Lin, Hui; Du, Ruxu
2011-05-01
Displacement measurement of moving objects is one of the most important issues in the field of computer vision. This paper introduces a new binocular vision system (BVS) based on micro-electro-mechanical system (MEMS) technology. The eyes of the system are two microlenses fabricated on a substrate by MEMS technology. The imaging results of two microlenses are collected by one complementary metal-oxide-semiconductor (CMOS) array. An algorithm is developed for computing the displacement. Experimental results show that as long as the object is moving in two-dimensional (2D) space, the system can effectively estimate the 2D displacement without camera calibration. It is also shown that the average error of the displacement measurement is about 3.5% at different object distances ranging from 10 cm to 35 cm. Because of its low cost, small size and simple setting, this new method is particularly suitable for 2D displacement measurement applications such as vision-based electronics assembly and biomedical cell culture.
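As a point of comparison, calibration-free 2D displacement estimation between two frames from a fixed imaging system can be sketched with phase correlation; this is a stand-in illustration in the same spirit, not the paper's MEMS algorithm, and converting the pixel shift to physical units would rely on an assumed microlens pitch.

```python
import cv2
import numpy as np

def displacement_px(frame_a, frame_b):
    """Sub-pixel 2D shift between two same-size grayscale frames."""
    a = np.float32(frame_a)
    b = np.float32(frame_b)
    (dx, dy), _response = cv2.phaseCorrelate(a, b)
    return dx, dy
```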
NASA Astrophysics Data System (ADS)
Lauinger, Norbert
1997-09-01
The interpretation of the 'inverted' retina of primates as an 'optoretina' (a diffractive cellular 3D phase grating that transforms light cones) integrates the functional, structural, and oscillatory aspects of a cortical layer. It is therefore relevant to consider prenatal development as a basis of the macro- and micro-geometry of the inner eye. This geometry becomes relevant for the postnatal trichromatic synchrony organization (TSO) as well as for the adaptive levels of human vision. It is shown that the functional performances all become possible: trichromatism in photopic vision, monocular spatiotemporal 3D and 4D motion detection, and Fourier-optical image transformation with extraction of invariances. To transform light cones into reciprocal gratings, the spectral phase conditions become relevant first in the eikonal of the geometrical-optical imaging before the retinal 3D grating, then in the von Laue (resp. reciprocal von Laue) equation for 3D grating optics inside the grating, and finally in the periodicity of the Talbot-2/Fresnel planes in the near field behind the grating. It is becoming possible to technically realize, at least in some specific aspects, such a cortical optoretina sensor element with its typical hexagonal-concentric structure, which leads to these visual functions.
Compensation for positioning error of industrial robot for flexible vision measuring system
NASA Astrophysics Data System (ADS)
Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui
2013-01-01
Positioning error of the robot is a main factor limiting the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a visual sensor. Existing compensation methods for positioning error based on the kinematic model of the robot have a significant limitation: they are not effective over the whole measuring space. A new compensation method for the positioning error of the robot based on vision measuring techniques is presented. One approach is to set global control points in the measured field and attach an orientation camera to the vision sensor. The global control points are then measured by the orientation camera to calculate the transformation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach is to set control points on the vision sensor and two large-field cameras behind the sensor. The three-dimensional coordinates of the control points are then measured, and the pose and position of the sensor are calculated in real time. Experimental results show that the RMS of the spatial positioning error is 3.422 mm with the single-camera method and 0.031 mm with the dual-camera method. It is concluded that the algorithm of the single-camera method needs improvement for higher accuracy, whereas the accuracy of the dual-camera method is already applicable.
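The first approach can be illustrated with a small sketch, assuming the orientation camera yields matched 3D control points in the sensor frame (A) and their surveyed global coordinates (B); the transform between the frames is then a standard rigid-body fit (Kabsch/Umeyama), which is one conventional way, not necessarily the authors' exact formulation, to compute the compensating transformation.

```python
import numpy as np

def rigid_transform(A, B):
    """Least-squares R, t such that B[i] ≈ R @ A[i] + t; A, B are (N, 3) point sets."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t
```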
Color vision deficiencies and the child's willingness for visual activity: preliminary research
NASA Astrophysics Data System (ADS)
Geniusz, Malwina; Szmigiel, Marta; Geniusz, Maciej
2017-09-01
After a few weeks a newborn baby can recognize high contrasts in colors, like black and white. Full color vision is reached at the age of circa six months. Matching colors is the next milestone; most children can do it at the age of two. Good color vision is one of the factors which indicate proper development of a child. The presented research shows the correlation between color vision and visual activity. The color vision of a group of children aged 3-8 was examined with the saturated Farnsworth D-15 test. The Farnsworth test was performed twice: in a standard version and in a magnetic version. The time to complete the standard and magnetic tests was measured. Furthermore, parents of the subjects answered questions assessing the children's visual activity on a 1-10 scale. Parents stated whether the child willingly watched books, colored coloring books, did puzzles, liked to play with blocks, etc. The Farnsworth D-15 test designed for color vision testing can be used to test younger children from the age of 3 years. These are preliminary studies which may provide a useful tool for further, more accurate examination of a larger group of subjects.
Arumugam, Baskar; Hung, Li-Fang; To, Chi-Ho; Sankaridurg, Padmaja; Smith, Earl L., III
2016-01-01
Purpose: We investigated how the relative surface area devoted to the more positive-powered component in dual-focus lenses influences emmetropization in rhesus monkeys. Methods: From 3 to 21 weeks of age, macaques were reared with binocular dual-focus spectacles. The treatment lenses had central 2-mm zones of zero-power and concentric annular zones that had alternating powers of either +3.0 diopters (D) and 0 D (+3 D/pL) or −3.0 D and 0 D (−3 D/pL). The relative widths of the powered and plano zones varied from 50:50 to 18:82 between treatment groups. Refractive status, corneal curvature, and axial dimensions were assessed biweekly throughout the lens-rearing period. Comparison data were obtained from monkeys reared with binocular full-field single-vision lenses (FF+3D, n = 6; FF−3D, n = 10) and from 35 normal controls. Results: The median refractive errors for all of the +3 D/pL lens groups were similar to that for the FF+3D group (+4.63 D versus +4.31 D to +5.25 D; P = 0.18–0.96), but significantly more hyperopic than that for controls (+2.44 D; P = 0.0002–0.003). In the −3 D/pL monkeys, refractive development was dominated by the zero-powered portions of the treatment lenses; the −3 D/pL animals (+2.94 D to +3.13 D) were more hyperopic than the FF−3D monkeys (−0.78 D; P = 0.004–0.006), but similar to controls (+2.44 D; P = 0.14–0.22). Conclusions: The results demonstrate that even when the more positive-powered zones make up only one-fifth of a dual-focus lens' surface area, refractive development is still dominated by relative myopic defocus. Overall, the results emphasize that myopic defocus distributed across the visual field evokes strong signals to slow eye growth in primates. PMID:27479812
The 3-D vision system integrated dexterous hand
NASA Technical Reports Server (NTRS)
Luo, Ren C.; Han, Youn-Sik
1989-01-01
Most multifingered hands use a tendon mechanism to minimize the size and weight of the hand. Such tendon mechanisms suffer from the problems of stiction and friction of the tendons, resulting in a reduction of control accuracy. A design for a 3-D vision system integrated dexterous hand with motor control is described which overcomes these problems. The proposed hand is composed of three three-jointed grasping fingers with tactile sensors on their tips, and a two-jointed eye finger with a cross-shaped laser beam emitting diode in its distal part. The two non-grasping fingers allow 3-D vision capability and can rotate around the hand to see and measure the sides of grasped objects and the task environment. An algorithm that determines the range and local orientation of the contact surface using a cross-shaped laser beam is introduced along with some potential applications. An efficient method for finger force calculation is presented which uses the measured contact surface normals of an object.
Boyatzis, Richard E.; Rochford, Kylie; Taylor, Scott N.
2015-01-01
Personal and shared vision have a long history in management and organizational practices yet only recently have we begun to build a systematic body of empirical knowledge about the role of personal and shared vision in organizations. As the introductory paper for this special topic in Frontiers in Psychology, we present a theoretical argument as to the existence and critical role of two states in which a person, dyad, team, or organization may find themselves when engaging in the creation of a personal or shared vision: the positive emotional attractor (PEA) and the negative emotional attractor (NEA). These two primary states are strange attractors, each characterized by three dimensions: (1) positive versus negative emotional arousal; (2) endocrine arousal of the parasympathetic nervous system versus sympathetic nervous system; and (3) neurological activation of the default mode network versus the task positive network. We argue that arousing the PEA is critical when creating or affirming a personal vision (i.e., sense of one’s purpose and ideal self). We begin our paper by reviewing the underpinnings of our PEA–NEA theory, briefly review each of the papers in this special issue, and conclude by discussing the practical implications of the theory. PMID:26052300
Hand-Eye Calibration of Robonaut
NASA Technical Reports Server (NTRS)
Nickels, Kevin; Huber, Eric
2004-01-01
NASA's Human Space Flight program depends heavily on Extra-Vehicular Activities (EVA's) performed by human astronauts. EVA is a high risk environment that requires extensive training and ground support. In collaboration with the Defense Advanced Research Projects Agency (DARPA), NASA is conducting a ground development project to produce a robotic astronaut's assistant, called Robonaut, that could help reduce human EVA time and workload. The project described in this paper designed and implemented a hand-eye calibration scheme for Robonaut, Unit A. The intent of this calibration scheme is to improve hand-eye coordination of the robot. The basic approach is to use kinematic and stereo vision measurements, namely the joint angles self-reported by the right arm and 3-D positions of a calibration fixture as measured by vision, to estimate the transformation from Robonaut's base coordinate system to its hand coordinate system and to its vision coordinate system. Two methods of gathering data sets have been developed, along with software to support each. In the first, the system observes the robotic arm and neck angles as the robot is operated under external control, measures the 3-D position of a calibration fixture using Robonaut's stereo cameras, and logs these data. In the second, the system drives the arm and neck through a set of pre-recorded configurations, and data are again logged. Two variants of the calibration scheme have been developed. The full calibration scheme is a batch procedure that estimates all relevant kinematic parameters of the arm and neck of the robot. The daily calibration scheme estimates only joint offsets for each rotational joint on the arm and neck, which are assumed to change from day to day. The schemes have been designed to be automatic and easy to use so that the robot can be fully recalibrated when needed, such as after repair or upgrade, and can be partially recalibrated after each power cycle. The scheme has been implemented on Robonaut Unit A and has been shown to reduce the mismatch between kinematically derived positions and visually derived positions from a mean of 13.75 cm using the previous calibration to means of 1.85 cm using a full calibration and 2.02 cm using a suboptimal but faster daily calibration. This improved calibration has already enabled the robot to more accurately reach for and grasp objects that it sees within its workspace. The system has been used to support an autonomous wrench-grasping experiment and significantly improved the workspace positioning of the hand based on visually derived wrench position estimates.
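Robonaut's calibration is a custom batch estimator, but the underlying problem, relating kinematically reported hand poses to visually measured fixture poses, is the classic hand-eye (AX = XB) formulation. As a modern stand-in rather than the project's code, OpenCV (4.1+) exposes a solver; the pose lists below are assumed inputs, one entry per logged robot configuration.

```python
import cv2

# R_g2b, t_g2b: hand (gripper) pose in the robot base frame, from joint angles.
# R_t2c, t_t2c: calibration-fixture pose in the camera frame, from stereo vision.
# Each is a list with one rotation matrix / translation vector per logged pose.
R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
    R_gripper2base=R_g2b, t_gripper2base=t_g2b,
    R_target2cam=R_t2c, t_target2cam=t_t2c,
    method=cv2.CALIB_HAND_EYE_TSAI)
```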
On the use of orientation filters for 3D reconstruction in event-driven stereo vision
Camuñas-Mesa, Luis A.; Serrano-Gotarredona, Teresa; Ieng, Sio H.; Benosman, Ryad B.; Linares-Barranco, Bernabe
2014-01-01
The recently developed Dynamic Vision Sensors (DVS) sense visual information asynchronously and code it into trains of events with sub-microsecond temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, therefore increasing the number of constraints applied to the matching algorithm. This strategy provides more reliably matched pairs of events, improving the final 3D reconstruction. PMID:24744694
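The orientation cue can be sketched with a small Gabor bank: each event's local patch is scored against a few orientations, and only event pairs with compatible dominant orientations are allowed to match across the two sensors. Parameters and names are illustrative, not the paper's filter design.

```python
import cv2
import numpy as np

THETAS = np.deg2rad([0, 45, 90, 135])
BANK = [cv2.getGaborKernel((21, 21), sigma=4.0, theta=t,
                           lambd=10.0, gamma=0.5, psi=0) for t in THETAS]

def dominant_orientation(patch):
    """Index of the Gabor filter responding most strongly to a 21x21 patch."""
    responses = [abs(float((patch * k).sum())) for k in BANK]
    return int(np.argmax(responses))

def orientation_compatible(patch_left, patch_right):
    """Extra matching constraint: accept a candidate event pair only if the
    dominant edge orientation agrees in both retinas."""
    return dominant_orientation(patch_left) == dominant_orientation(patch_right)
```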
Modeling Images of Natural 3D Surfaces: Overview and Potential Applications
NASA Technical Reports Server (NTRS)
Jalobeanu, Andre; Kuehnel, Frank; Stutz, John
2004-01-01
Generative models of natural images have long been used in computer vision. However, since they only describe the appearance of 2D scenes, they fail to capture all the properties of the underlying 3D world. Even though such models are sufficient for many vision tasks, a 3D scene model is needed when it comes to inferring a 3D object or its characteristics. In this paper, we present such a generative model, incorporating both a multiscale surface prior model for surface geometry and reflectance, and an image formation process model based on realistic rendering, with attention to the computation of the posterior model parameter densities and to the critical aspects of the rendering. We also show how to efficiently invert the model within a Bayesian framework. We present a few potential applications, such as asteroid modeling and planetary topography recovery, illustrated by promising results on real images.
Computer vision research with new imaging technology
NASA Astrophysics Data System (ADS)
Hou, Guangqi; Liu, Fei; Sun, Zhenan
2015-12-01
Light field imaging is capable of capturing dense multi-view 2D images in one snapshot, recording both intensity values and directions of rays simultaneously. As an emerging 3D device, the light field camera has been widely used in digital refocusing, depth estimation, stereoscopic display, etc. Traditional multi-view stereo (MVS) methods only perform well on strongly textured surfaces, while the depth map contains numerous holes and large ambiguities in textureless or low-textured regions. In this paper, we exploit light field imaging technology for 3D face modeling in computer vision. Based on a 3D morphable model, we estimate the pose parameters from facial feature points. Then the depth map is estimated through the epipolar plane images (EPIs) method. At last, a high-quality 3D face model is recovered via a fusion strategy. We evaluate the effectiveness and robustness on face images captured by a light field camera in different poses.
Skeleton-based human action recognition using multiple sequence alignment
NASA Astrophysics Data System (ADS)
Ding, Wenwen; Liu, Kai; Cheng, Fei; Zhang, Jin; Li, YunSong
2015-05-01
Human action recognition and analysis has been an active research topic in computer vision for many years. This paper presents a method to represent human actions based on trajectories consisting of 3D joint positions. The method first decomposes an action into a sequence of meaningful atomic actions (actionlets) and then labels the actionlets with letters of the English alphabet according to their Davies-Bouldin index values. An action can therefore be represented as a sequence of actionlet symbols, which preserves the temporal order of occurrence of the actionlets. Finally, we employ sequence comparison to classify multiple actions using a string-matching algorithm (Needleman-Wunsch), as sketched below. The effectiveness of the proposed method is evaluated on datasets captured by commodity depth cameras. Experiments of the proposed method on three challenging 3D action datasets show promising results.
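A compact version of the alignment scoring, under the usual match/mismatch/gap scheme with illustrative scores, might look like this; the global alignment score between two actionlet strings then feeds a nearest-neighbor classifier.

```python
def needleman_wunsch(s, t, match=1, mismatch=-1, gap=-1):
    """Global alignment score between actionlet strings s and t."""
    n, m = len(s), len(t)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i * gap
    for j in range(1, m + 1):
        D[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if s[i - 1] == t[j - 1] else mismatch
            D[i][j] = max(D[i - 1][j - 1] + sub,  # substitute
                          D[i - 1][j] + gap,      # gap in t
                          D[i][j - 1] + gap)      # gap in s
    return D[n][m]

# e.g. needleman_wunsch("ABBC", "ABC") == 2 (three matches, one gap)
```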
Nithyanandam, S; Joseph, M; Stephen, J
2013-02-01
The aim of the work is to describe the occurrence of ocular complications and loss of vision due to herpes zoster ophthalmicus (HZO) in HIV-positive patients who received early antiviral therapy for HZO. This is a post hoc analysis of prospectively collected data. Twenty-four HIV-positive patients with HZO were included in this report; the male to female ratio was 3.8:1; mean age was 33.5 (±14.9) years. The visual outcome was good, with 14/24 patients having 6/6 vision; severe vision loss (≤6/60) occurred in only 2/24. There was no statistical difference in visual outcome between the HIV-positive and -negative patients (P = 0.69), although severe vision loss was more likely in HIV-infected patients. The ocular complications of HZO in HIV-infected patients were: reduced corneal sensation (17/24), corneal epithelial lesions (14/24), uveitis (12/24), elevated intraocular pressure (10/24) and extra-ocular muscle palsy (3/24). The severity of rash was similar in the two groups, but multidermatomal rash occurred only in HIV-infected patients (4/24). There was no difference in the occurrence of ocular complications of HZO between HIV-positive and HIV-negative patients. HZO-associated ocular complications and visual loss are low in HIV-infected patients if treated with HZO antiviral therapy and were comparable with HIV-negative patients. Early institution of HZO antiviral therapy is recommended to reduce ocular complications and vision loss.
NASA Astrophysics Data System (ADS)
Nair, Binu M.; Diskin, Yakov; Asari, Vijayan K.
2012-10-01
We present an autonomous system capable of performing security check routines. The surveillance machine, the Clearpath Husky robotic platform, is equipped with three IP cameras with different orientations for the surveillance tasks of face recognition, human activity recognition, autonomous navigation and 3D reconstruction of its environment. Combining the computer vision algorithms onto a robotic machine has given birth to the Robust Artificial Intelligence-based Defense Electro-Robot (RAIDER). The end purpose of the RAIDER is to conduct a patrolling routine on a single floor of a building several times a day. As the RAIDER travels down the corridors, off-line algorithms use two of the RAIDER's side-mounted cameras to perform 3D reconstruction from monocular vision, updating a 3D model to the most current state of the indoor environment. Using frames from the front-mounted camera, positioned at human eye level, the system performs face recognition with real-time training of unknown subjects. A human activity recognition algorithm will also be implemented, in which each detected person is assigned to a set of action classes picked to classify ordinary and harmful student activities in a hallway setting. The system is designed to detect changes and irregularities within an environment as well as to familiarize itself with regular faces and actions to distinguish potentially dangerous behavior. In this paper, we present the various algorithms, and their modifications, which when implemented on the RAIDER serve the purpose of indoor surveillance.
Bloch, Edward; Uddin, Nabil; Gannon, Laura; Rantell, Khadija; Jain, Saurabh
2015-01-01
Background Stereopsis is believed to be advantageous for surgical tasks that require precise hand-eye coordination. We investigated the effects of short-term and long-term absence of stereopsis on motor task performance in three-dimensional (3D) and two-dimensional (2D) viewing conditions. Methods 30 participants with normal stereopsis and 15 participants with absent stereopsis performed a simulated surgical task both in free space under direct vision (3D) and via a monitor (2D), with both eyes open and one eye covered in each condition. Results The stereo-normal group scored higher, on average, than the stereo-absent group with both eyes open under direct vision (p<0.001). Both groups performed comparably in monocular and binocular monitor viewing conditions (p=0.579). Conclusions High-grade stereopsis confers an advantage when performing a fine motor task under direct vision. However, stereopsis does not appear advantageous to task performance under 2D viewing conditions, such as in video-assisted surgery. PMID:25185439
Smart Camera System for Aircraft and Spacecraft
NASA Technical Reports Server (NTRS)
Delgado, Frank; White, Janis; Abernathy, Michael F.
2003-01-01
This paper describes a new approach to situation awareness that combines video sensor technology and synthetic vision technology in a unique fashion to create a hybrid vision system. Our implementation of the technology, called "SmartCam3D" (SC3D), has been flight tested by both NASA and the Department of Defense with excellent results. This paper details its development and flight test results. Windshields and windows add considerable weight and risk to vehicle design, and because of this, many future vehicles will employ a windowless cockpit design. This windowless cockpit design philosophy prompted us to look at what would be required to develop a system that provides crewmembers and operations personnel an appropriate level of situation awareness. The system created to date provides a real-time 3D perspective display that can be used during all weather and visibility conditions. While the advantages of a synthetic vision only system are considerable, the major disadvantage of such a system is that it displays a synthetic scene created using "static" data acquired by an aircraft or satellite at some point in the past. The SC3D system we are presenting in this paper is a hybrid synthetic vision system that fuses live video stream information with a computer generated synthetic scene. This hybrid system can display a dynamic, real-time scene of a region of interest, enriched by information from a synthetic environment system, see figure 1. The SC3D system has been flight tested on several X-38 flight tests performed over the last several years and on an Army Unmanned Aerial Vehicle (UAV) ground control station earlier this year. Additional testing using an assortment of UAV ground control stations and UAV simulators from the Army and Air Force will be conducted later this year.
ERIC Educational Resources Information Center
Henriksen, Peter N.; Payerle, Paul
Sunlight (solar radiation) provides many beneficial contributions to mankind, including warmth and energy, vision and photoresponses, photosynthesis, and vitamin D synthesis. Along with these positive benefits attributed to solar radiation, there are also adverse effects. A particular adverse effect of current interest and concern which is…
Computer Vision Tracking Using Particle Filters for 3D Position Estimation
2014-03-27
In the report's notation, \(\pi\) denotes the proposal distribution, \(\omega\) the importance weights, and \(\delta\) the Dirac delta function. The target density \(p(x)\) is approximated by \(N\) weighted samples drawn from the proposal [2, p. 178]:

\[ p(x) \approx \sum_{n=1}^{N} \omega_n \, \delta(x - x_n) \qquad (2.14) \]

\[ \omega_n \propto \frac{p(x_n)}{\pi(x_n)} \qquad (2.15) \]
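A minimal numerical sketch of these two equations, with stand-in Gaussian densities rather than any tracker-specific model:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
particles = rng.normal(0.0, 2.0, size=(N, 3))        # x_n drawn from proposal pi

def p(x):    # unnormalized target density (stand-in: Gaussian at (1, 1, 1))
    return np.exp(-0.5 * np.sum((x - 1.0) ** 2, axis=-1))

def pi(x):   # proposal density matching the sampler above (std = 2)
    return np.exp(-0.5 * np.sum(x ** 2, axis=-1) / 4.0)

w = p(particles) / pi(particles)                     # eq. (2.15): w_n ∝ p/pi
w /= w.sum()                                         # normalize the weights
position_estimate = (w[:, None] * particles).sum(0)  # mean of the eq. (2.14) mixture
```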
Drogue tracking using 3D flash lidar for autonomous aerial refueling
NASA Astrophysics Data System (ADS)
Chen, Chao-I.; Stettner, Roger
2011-06-01
Autonomous aerial refueling (AAR) is an important capability for an unmanned aerial vehicle (UAV) to increase its flying range and endurance without increasing its size. This paper presents a novel tracking method that utilizes both 2D intensity and 3D point-cloud data acquired with a 3D Flash LIDAR sensor to establish relative position and orientation between the receiver vehicle and drogue during an aerial refueling process. Unlike classic, vision-based sensors, a 3D Flash LIDAR sensor can provide 3D point-cloud data in real time without motion blur, in the day or night, and is capable of imaging through fog and clouds. The proposed method segments out the drogue through 2D analysis and estimates the center of the drogue from 3D point-cloud data for flight trajectory determination. A level-set front propagation routine is first employed to identify the target of interest and establish its silhouette information. Sufficient domain knowledge, such as the size of the drogue and the expected operable distance, is integrated into our approach to quickly eliminate unlikely target candidates. A statistical analysis along with a random sample consensus (RANSAC) is performed on the target to reduce noise and estimate the center of the drogue after all 3D points on the drogue are identified. The estimated center and drogue silhouette serve as the seed points to efficiently locate the target in the next frame.
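The RANSAC step might look like the following sketch: fit the dominant plane of the segmented drogue points, discard stray returns, and take the inlier centroid as the center estimate. Thresholds and the plane model are illustrative assumptions, not the paper's exact statistical analysis.

```python
import numpy as np

def ransac_plane_inliers(pts, n_iter=200, thresh=0.05, seed=0):
    """Boolean mask of points within `thresh` (same units as pts) of the
    best plane found from random 3-point samples."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_iter):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        nrm = np.linalg.norm(normal)
        if nrm < 1e-9:
            continue                      # degenerate (collinear) sample
        normal /= nrm
        inliers = np.abs((pts - p0) @ normal) < thresh
        if best is None or inliers.sum() > best.sum():
            best = inliers
    return best

# inliers = ransac_plane_inliers(drogue_pts)
# center = drogue_pts[inliers].mean(axis=0)   # robust drogue-center estimate
```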
Higher-Order Neural Networks Applied to 2D and 3D Object Recognition
NASA Technical Reports Server (NTRS)
Spirkovska, Lilly; Reid, Max B.
1994-01-01
A Higher-Order Neural Network (HONN) can be designed to be invariant to geometric transformations such as scale, translation, and in-plane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Thus, for 2D object recognition, the network needs to be trained on just one view of each object class, not numerous scaled, translated, and rotated views. Because the 2D object recognition task is a component of the 3D object recognition task, built-in 2D invariance also decreases the size of the training set required for 3D object recognition. We present results for 2D object recognition both in simulation and within a robotic vision experiment and for 3D object recognition in simulation. We also compare our method to other approaches and show that HONNs have distinct advantages for position, scale, and rotation-invariant object recognition. The major drawback of HONNs is that the size of the input field is limited due to the memory required for the large number of interconnections in a fully connected network. We present partial connectivity strategies and a coarse-coding technique for overcoming this limitation and increasing the input field to that required by practical object recognition problems.
Object Tracking Vision System for Mapping the UCN τ Apparatus Volume
NASA Astrophysics Data System (ADS)
Lumb, Rowan; UCNtau Collaboration
2016-09-01
The UCN τ collaboration has an immediate goal to measure the lifetime of the free neutron to within 0.1%, i.e. about 1 s. The UCN τ apparatus is a magneto-gravitational "bottle" system. This system holds low energy, or ultracold, neutrons in the apparatus with the constraint of gravity, and keeps these low energy neutrons from interacting with the bottle via a strong 1 T surface magnetic field created by a bowl-shaped array of permanent magnets. The apparatus is wrapped with energized coils to supply a magnetic field throughout the "bottle" volume to prevent depolarization of the neutrons. An object-tracking stereo-vision system is presented that precisely tracks a Hall probe and allows a mapping of the magnetic field throughout the volume of the UCN τ bottle. The stereo-vision system utilizes two cameras and open-source OpenCV software to track an object's 3D position in space in real time, as sketched below. The desired resolution is +/-1 mm along each axis. The vision system is being used as part of an even larger system to map the magnetic field of the UCN τ apparatus and expose any possible systematic effects due to field cancellation or low-field points, which could allow neutrons to depolarize and possibly escape from the apparatus undetected.
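The triangulation at the core of such a two-camera tracker reduces to a few lines with OpenCV, assuming the 3x4 projection matrices P1 and P2 from a prior stereo calibration and the probe's pixel coordinates in each view; this is a generic sketch, not the collaboration's code.

```python
import cv2
import numpy as np

def probe_position(P1, P2, pt1, pt2):
    """3D point from one matched detection in two calibrated views."""
    x1 = np.asarray(pt1, dtype=np.float32).reshape(2, 1)
    x2 = np.asarray(pt2, dtype=np.float32).reshape(2, 1)
    X = cv2.triangulatePoints(P1, P2, x1, x2)   # 4x1 homogeneous coordinates
    return (X[:3] / X[3]).ravel()               # metric (x, y, z)
```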
Hong, Deokhwa; Lee, Hyunki; Kim, Min Young; Cho, Hyungsuck; Moon, Jeon Il
2009-07-20
Automatic optical inspection (AOI) for printed circuit board (PCB) assembly plays a very important role in modern electronics manufacturing industries. Well-developed inspection machines in each assembly process are required to ensure the manufacturing quality of the electronics products. However, almost all AOI machines are based on 2D image-analysis technology. In this paper, a 3D-measurement-based AOI system is proposed, consisting of a phase shifting profilometer and a stereo vision system, for inspecting electronic components on a PCB after component mounting and the reflow process. In this system, information from the two visual subsystems is fused to extend the shape measurement range limited by the 2π phase ambiguity of the phase shifting profilometer, while maintaining the profilometer's fine measurement resolution and high accuracy over the measurement range extended by the stereo vision (see the sketch below). The main purpose is to overcome the low inspection reliability of 2D-based inspection machines by using 3D information about the components. 3D shape measurement results on PCB-mounted electronic components are shown and compared with results from contact and noncontact 3D measuring machines. Based on a series of experiments, the usefulness of the proposed sensor system and its fusion technique are discussed and analyzed in detail.
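The fusion can be sketched as follows, under a simplifying assumption that phase maps linearly to height over one fringe: the four-step formula gives the wrapped phase, and the coarse stereo depth picks the fringe order that the 2π ambiguity leaves undetermined. Names and the linear phase-height model are illustrative.

```python
import numpy as np

def wrapped_phase(I0, I1, I2, I3):
    """Four-step phase shifting, I_k = A + B*cos(phi + k*pi/2):
    returns phi wrapped to (-pi, pi]."""
    return np.arctan2(I3 - I1, I0 - I2)

def unwrap_with_stereo(phi, z_stereo, z_per_fringe):
    """Choose the integer fringe order k so the profilometer height agrees
    with the coarse stereo depth, then unwrap the phase."""
    k = np.round(z_stereo / z_per_fringe - phi / (2 * np.pi))
    return phi + 2 * np.pi * k   # height = z_per_fringe * phase / (2*pi)
```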
Multi-Robot FastSLAM for Large Domains
2007-03-01
Chromotomography for a rotating-prism instrument using backprojection, then filtering.
Deming, Ross W
2006-08-01
A simple closed-form solution is derived for reconstructing a 3D spatial-chromatic image cube from a set of chromatically dispersed 2D image frames. The algorithm is tailored for a particular instrument in which the dispersion element is a matching set of mechanically rotated direct vision prisms positioned between a lens and a focal plane array. By using a linear operator formalism to derive the Tikhonov-regularized pseudoinverse operator, it is found that the unique minimum-norm solution is obtained by applying the adjoint operator, followed by 1D filtering with respect to the chromatic variable. Thus the filtering and backprojection (adjoint) steps are applied in reverse order relative to an existing method. Computational efficiency is provided by use of the fast Fourier transform in the filtering step.
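In standard operator notation (a hedged restatement, with H the forward projection from the image cube to the dispersed frames, H* its adjoint, and g the stacked frames), the Tikhonov-regularized solution referred to above is

```latex
\hat{f} \;=\; \arg\min_{f}\ \|Hf - g\|^{2} + \lambda\|f\|^{2}
        \;=\; \left(H^{*}H + \lambda I\right)^{-1} H^{*} g .
```

Because for this rotating-prism geometry the factor \((H^{*}H + \lambda I)^{-1}\) acts along the chromatic variable, it reduces to a 1D filter applied after the backprojection \(H^{*}g\), which is the "backprojection, then filtering" order of the title.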
NASA Astrophysics Data System (ADS)
Lauinger, Norbert
2004-10-01
The human eye is a good model for the engineering of optical correlators. Three prominent intelligent functionalities in human vision could in the near future be realized by a new diffractive-optical hardware design of optical imaging sensors: (1) illuminant-adaptive RGB-based color vision, (2) monocular 3D vision based on RGB data processing, and (3) patchwise Fourier-optical object classification and identification. The hardware design of the human eye has specific diffractive optical elements (DOEs) in aperture and in image space and seems to execute the three jobs at, or not far behind, the loci of the images of objects.
Code of Federal Regulations, 2010 CFR
2010-10-01
... of Vision Care Professional(s) D Appendix D to Part 5 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT... Pt. 5, App. D Appendix D to Part 5—Criteria for Designation of Areas Having Shortages of Vision Care... of vision care professional(s) if the following three criteria are met: 1. The area is a rational...
Code of Federal Regulations, 2011 CFR
2011-10-01
... of Vision Care Professional(s) D Appendix D to Part 5 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT... Pt. 5, App. D Appendix D to Part 5—Criteria for Designation of Areas Having Shortages of Vision Care... of vision care professional(s) if the following three criteria are met: 1. The area is a rational...
A high resolution and high speed 3D imaging system and its application on ATR
NASA Astrophysics Data System (ADS)
Lu, Thomas T.; Chao, Tien-Hsin
2006-04-01
The paper presents an advanced 3D imaging system based on a combination of stereo vision and light projection methods. A single digital camera is used to take only one shot of the object and reconstruct its 3D model. The stereo vision is achieved by employing a prism-and-mirror setup to split the views and combine them side by side in the camera. The advantages of this setup are its simple system architecture, easy synchronization, fast 3D imaging speed and high accuracy. The 3D imaging algorithms and potential applications are discussed. For ATR applications, it is critically important to extract maximum information about the potential targets and to separate the targets from the background and clutter noise. The added dimension of a 3D model provides additional features: the surface profile and range information of the target. It is capable of removing the false shadow from camouflage and revealing the 3D profile of the object. It also provides arbitrary viewing angles and distances for training the filter bank for invariant ATR. The system architecture can be scaled to accommodate large objects and to perform area 3D modeling onboard a UAV.
A novel camera localization system for extending three-dimensional digital image correlation
NASA Astrophysics Data System (ADS)
Sabato, Alessandro; Reddy, Narasimha; Khan, Sameer; Niezrecki, Christopher
2018-03-01
The monitoring of civil, mechanical, and aerospace structures is important, especially as these systems approach or surpass their design life. Often, Structural Health Monitoring (SHM) relies on sensing techniques for condition assessment. Advancements achieved in camera technology and optical sensors have made three-dimensional (3D) Digital Image Correlation (DIC) a valid technique for extracting structural deformations and geometry profiles. Prior to making stereophotogrammetry measurements, a calibration has to be performed to obtain the vision system's extrinsic and intrinsic parameters; that is, the position of the cameras relative to each other (separation distance, camera angle, etc.) must be determined. Typically, cameras are placed on a rigid bar to prevent any relative motion between them. This constraint limits the utility of the 3D-DIC technique, especially as it is applied to monitor large structures and from various fields of view. In this preliminary study, the design of a multi-sensor system is proposed to extend 3D-DIC's capability and allow for easier calibration and measurement. The suggested system relies on a MEMS-based Inertial Measurement Unit (IMU) and a 77 GHz radar sensor for measuring the orientation and relative distance of the stereo cameras. The feasibility of the proposed combined IMU-radar system is evaluated through laboratory tests, demonstrating its ability to determine the camera positions in space for accurate 3D-DIC calibration and measurement.
Bionic Vision-Based Intelligent Power Line Inspection System
Ma, Yunpeng; He, Feijia; Xu, Jinxin
2017-01-01
Detecting threats posed by external obstacles to power lines can help ensure the stability of the power system. Inspired by the attention mechanism and binocular vision of the human visual system, an intelligent power line inspection system is presented in this paper. The human visual attention mechanism in this intelligent inspection system is used to detect and track power lines in image sequences according to the shape information of power lines, and the binocular visual model is used to calculate the 3D coordinates of obstacles and power lines. In order to improve the real-time performance and accuracy of the system, we propose a new matching strategy based on the traditional SURF algorithm. The experimental results show that the system is able to accurately and automatically locate obstacles around power lines, and that the designed power line inspection system is effective in complex backgrounds, with no missed detections under the conditions tested. PMID:28203269
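The paper's improved matching strategy is not detailed in the abstract, so the sketch below shows only the conventional SURF baseline it builds on: keypoint matching with Lowe's ratio test plus a scanline (epipolar) consistency filter for a rectified pair. SURF lives in the opencv-contrib xfeatures2d module; the image names are placeholders.

```python
import cv2

# Conventional SURF stereo matching baseline (requires opencv-contrib for
# cv2.xfeatures2d). Inputs are assumed to be a rectified stereo pair.
left_img = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder file
right_img = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # placeholder file

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp_l, des_l = surf.detectAndCompute(left_img, None)
kp_r, des_r = surf.detectAndCompute(right_img, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
good = []
for m, n in matcher.knnMatch(des_l, des_r, k=2):
    if m.distance < 0.7 * n.distance:  # Lowe's ratio test rejects ambiguous matches
        # On a rectified pair, true matches share a scanline; a large vertical
        # offset therefore flags a false match.
        if abs(kp_l[m.queryIdx].pt[1] - kp_r[m.trainIdx].pt[1]) < 2.0:
            good.append(m)
# The surviving disparities feed the binocular model that recovers 3D coordinates.
```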
How the venetian blind percept emerges from the laminar cortical dynamics of 3D vision
Cao, Yongqiang; Grossberg, Stephen
2014-01-01
The 3D LAMINART model of 3D vision and figure-ground perception is used to explain and simulate a key example of the Venetian blind effect and to show how it is related to other well-known perceptual phenomena such as Panum's limiting case. The model proposes how lateral geniculate nucleus (LGN) and hierarchically organized laminar circuits in cortical areas V1, V2, and V4 interact to control processes of 3D boundary formation and surface filling-in that simulate many properties of 3D vision percepts, notably consciously seen surface percepts, which are predicted to arise when filled-in surface representations are integrated into surface-shroud resonances between visual and parietal cortex. Interactions between layers 4, 3B, and 2/3 in V1 and V2 carry out stereopsis and 3D boundary formation. Both binocular and monocular information combine to form 3D boundary and surface representations. Surface contour surface-to-boundary feedback from V2 thin stripes to V2 pale stripes combines computationally complementary boundary and surface formation properties, leading to a single consistent percept, while also eliminating redundant 3D boundaries, and triggering figure-ground perception. False binocular boundary matches are eliminated by Gestalt grouping properties during boundary formation. In particular, a disparity filter, which helps to solve the Correspondence Problem by eliminating false matches, is predicted to be realized as part of the boundary grouping process in layer 2/3 of cortical area V2. The model has been used to simulate the consciously seen 3D surface percepts in 18 psychophysical experiments. These percepts include the Venetian blind effect, Panum's limiting case, contrast variations of dichoptic masking and the correspondence problem, the effect of interocular contrast differences on stereoacuity, stereopsis with polarity-reversed stereograms, da Vinci stereopsis, and perceptual closure. These model mechanisms have also simulated properties of 3D neon color spreading, binocular rivalry, 3D Necker cube, and many examples of 3D figure-ground separation. PMID:25309467
Accuracy of four commonly used color vision tests in the identification of cone disorders.
Thiadens, Alberta A H J; Hoyng, Carel B; Polling, Jan Roelof; Bernaerts-Biskop, Riet; van den Born, L Ingeborgh; Klaver, Caroline C W
2013-04-01
To determine which color vision test is most appropriate for the identification of cone disorders. In a clinic-based study, four commonly used color vision tests were compared between patients with cone dystrophy (n = 37), controls with normal visual acuity (n = 35), and controls with low vision (n = 39) and legal blindness (n = 11). Main outcome measures were specificity, sensitivity, positive predictive value and discriminative accuracy of the Ishihara test, the Hardy-Rand-Rittler (HRR) test, and the Lanthony and Farnsworth Panel D-15 tests. In the comparison between cone dystrophy and all controls, sensitivity, specificity and predictive value were highest for the HRR and Ishihara tests. When patients were compared to controls with normal vision, discriminative accuracy was highest for the HRR test (c-statistic for PD-axes 1, for T-axis 0.851). When compared to controls with poor vision, discriminative accuracy was again highest for the HRR test (c-statistic for PD-axes 0.900, for T-axis 0.766), followed by the Lanthony Panel D-15 test (c-statistic for PD-axes 0.880, for T-axis 0.500) and the Ishihara test (c-statistic 0.886). Discriminative accuracies of all tests did not decrease further when patients were compared to controls who were legally blind. The HRR, Lanthony Panel D-15 and Ishihara tests all have a high discriminative accuracy for identifying cone disorders, but the highest scores were for the HRR test. Poor visual acuity slightly decreased the accuracy of all tests. Our advice is to use the HRR test, since it also allows for evaluation of all three color axes and quantification of color defects.
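For readers less familiar with the reported statistics, here is a small worked example, with made-up numbers rather than the study's data, of how sensitivity, specificity, positive predictive value, and a c-statistic are computed:

```python
import numpy as np

# Hypothetical 2x2 table: cone dystrophy patients vs. controls on one test.
tp, fn, tn, fp = 34, 3, 70, 4
sensitivity = tp / (tp + fn)   # fraction of patients the test flags
specificity = tn / (tn + fp)   # fraction of controls the test clears
ppv = tp / (tp + fp)           # fraction of positive results that are patients

# c-statistic: probability that a randomly drawn patient scores worse (higher)
# than a randomly drawn control, counting ties as one half.
patients = np.array([8, 9, 7, 10, 6])  # hypothetical error scores
controls = np.array([2, 3, 5, 1, 6])
wins = (patients[:, None] > controls[None, :]).sum()
ties = (patients[:, None] == controls[None, :]).sum()
c_stat = (wins + 0.5 * ties) / (patients.size * controls.size)
print(sensitivity, specificity, ppv, c_stat)
```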
Color vision impairment in type 2 diabetes assessed by the D-15d test and the Cambridge Colour Test.
Feitosa-Santana, Claudia; Paramei, Galina V; Nishi, Mauro; Gualtieri, Mirella; Costa, Marcelo F; Ventura, Dora F
2010-09-01
Color vision impairment emerges at early stages of diabetes mellitus type 2 (DM2) and may precede diabetic retinopathy or the appearance of vascular alterations in the retina. The aim of the present study was to compare color vision evaluation with two different tests - the Lanthony desaturated D-15d test (a traditional color arrangement test) and the Cambridge Colour Test (CCT) (a computerized color discrimination test) - in patients diagnosed with DM2 without clinical signs of diabetic retinopathy (DR), and in sex- and age-matched control groups. Both color tests revealed statistically significant differences between the controls and the worst eyes of the DM2 patients. In addition, the degree of color vision impairment diagnosed by both tests correlated with the disease duration. The D-15d outcomes indicated solely tritan losses. In comparison, CCT outcomes revealed diffuse losses in color discrimination: 13.3% for best eyes and 29% for worst eyes. In addition, elevation of tritan thresholds in the DM2 patients, as detected by the Trivector subtest of the CCT, was found to correlate with the level of glycated hemoglobin. Outcomes of both tests confirm that subclinical losses of color vision are present in DM2 patients at an early stage of the disease, prior to signs of retinopathy. Considering the advantages of the CCT compared to the D-15d test, further studies should attempt to verify and/or improve the efficiency of the CCT. © 2010 The Authors, Ophthalmic and Physiological Optics © 2010 The College of Optometrists.
Reliability and accuracy of four dental shade-matching devices.
Kim-Pusateri, Seungyee; Brewer, Jane D; Davis, Elaine L; Wee, Alvin G
2009-03-01
There are several electronic shade-matching instruments available for clinical use, but the reliability and accuracy of these instruments have not been thoroughly investigated. The purpose of this in vitro study was to evaluate the reliability and accuracy of 4 dental shade-matching instruments in a standardized environment. Four shade-matching devices were tested: SpectroShade, ShadeVision, VITA Easyshade, and ShadeScan. Color measurements were made of 3 commercial shade guides (Vitapan Classical, Vitapan 3D-Master, and Chromascop). Shade tabs were placed in the middle of a gingival matrix (Shofu GUMY) with shade tabs of the same nominal shade from additional shade guides placed on both sides. Measurements were made of the central region of the shade tab positioned inside a black box. For the reliability assessment, each shade tab from each of the 3 shade guide types was measured 10 times. For the accuracy assessment, each shade tab from 10 guides of each of the 3 types evaluated was measured once. Differences in reliability and accuracy were evaluated using the Standard Normal z test (2 sided) (alpha=.05) with Bonferroni correction. Reliability of devices was as follows: ShadeVision, 99.0%; SpectroShade, 96.9%; VITA Easyshade, 96.4%; and ShadeScan, 87.4%. A significant difference in reliability was found between ShadeVision and ShadeScan (P=.008). All other comparisons showed similar reliability. Accuracy of devices was as follows: VITA Easyshade, 92.6%; ShadeVision, 84.8%; SpectroShade, 80.2%; and ShadeScan, 66.8%. Significant differences in accuracy were found between all device pairs (P<.001) for all comparisons except for SpectroShade versus ShadeVision (P=.033). Most devices had similar high reliability (over 96%), indicating predictable shade values from repeated measurements. However, there was more variability in accuracy among devices (67-93%), and differences in accuracy were seen with most device comparisons.
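The abstract's comparisons can be reproduced in form (not in exact numbers, since the raw counts are not given) with a two-sided two-proportion z-test and a Bonferroni-adjusted alpha; the counts below are illustrative only:

```python
from math import sqrt
from scipy.stats import norm

def two_prop_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return 2 * norm.sf(abs((p1 - p2) / se))

n = 460                                                   # illustrative trials per device
p_val = two_prop_z(int(0.990 * n), n, int(0.874 * n), n)  # 99.0% vs 87.4% reliability
alpha_bonferroni = 0.05 / 6                               # e.g. six pairwise comparisons
print(p_val, p_val < alpha_bonferroni)
```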
Wang, Xiuqin; Yi, Hongmei; Lu, Lina; Zhang, Linxiu; Ma, Xiaochen; Jin, Ling; Zhang, Haiqing; Naidoo, Kovin S; Minto, Hasan; Zou, Haidong; Rozelle, Scott; Congdon, Nathan
2015-12-01
The number of urban migrants in China is 300 million and is increasing rapidly in response to government policies. Urban migrants have poor access to health care, but little is known about rates of correction of refractive error among migrant children. This is of particular significance in light of recent evidence demonstrating the educational impact of providing children with spectacles. To measure the prevalence of spectacle need and ownership among Chinese migrant children, a population-based, cross-sectional study was conducted among children who failed vision testing (uncorrected visual acuity ≤6/12 in either eye) between September 15 and 30, 2013, at 94 randomly selected primary schools in predominantly migrant communities in Shanghai, Suzhou, and Wuxi, China. Main outcome measures were refractive error by cycloplegic refraction; spectacle ownership, defined as producing glasses at school after having been told to bring them; and needing glasses, defined as uncorrected visual acuity of 6/12 or less correctable to greater than 6/12 in either eye, with myopia of -0.5 diopters (D) or less, hyperopia of +2.0 D or greater, or astigmatism of 0.75 D or greater in both eyes. Among 4409 children, 4376 (99.3%) completed vision screening (mean [SD] age, 11.0 [0.81] years; 55.3% boys; 4225 [96.5%] migrant and 151 [3.5%] local). Among 1204 children failing vision testing (total, 27.5%; 1147 migrant children [27.1%] vs 57 local children [37.7%]; P = .003), 850 (70.6%) completed refraction. Spectacle ownership among migrant children needing glasses (147 of 640 children [23.0%]) was lower than among local children (12 of 34 children [35.3%]) (odds ratio = 0.55; 95% CI, 0.32-0.95; P = .03). Having uncorrected visual acuity less than 6/18 in both eyes was associated positively with baseline spectacle ownership (odds ratio = 5.73; 95% CI, 3.81-8.62; P < .001), but parental education and family wealth were not. Among urban migrant children, there was a high prevalence of need for spectacles and a very low rate of spectacle ownership. Spectacle distribution programs are needed specifically targeting migrant children.
3D Data Acquisition Platform for Human Activity Understanding
2016-03-02
In this project, we incorporated motion capture devices, 3D vision sensors, and EMG sensors to cross-validate multimodality data acquisition and to address fundamental research problems of representation and invariant description of 3D data, human motion modeling, and activity understanding. The support for the acquisition of such research instrumentation has significantly facilitated our current and future research and education.
A cognitive approach to vision for a mobile robot
NASA Astrophysics Data System (ADS)
Benjamin, D. Paul; Funk, Christopher; Lyons, Damian
2013-05-01
We describe a cognitive vision system for a mobile robot. This system works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion. These 3D models are embedded within an overall 3D model of the robot's environment. This approach turns the computer vision problem into a search problem, with the goal of constructing a physically realistic model of the entire environment. At each step, the vision system selects a point in the visual input to focus on. The distance, shape, texture and motion information are computed in a small region and used to build a mesh in a 3D virtual world. Background knowledge is used to extend this structure as appropriate, e.g. if a patch of wall is seen, it is hypothesized to be part of a large wall and the entire wall is created in the virtual world, or if part of an object is recognized, the whole object's mesh is retrieved from the library of objects and placed into the virtual world. The inputs from the real camera and from the virtual camera are compared using local Gaussians, creating an error mask that indicates the main differences between them. This mask is then used to select the next points to focus on. This approach permits us to use very expensive algorithms on small localities, thus generating very accurate models. It is also task-oriented, permitting the robot to use its knowledge about its task and goals to decide which parts of the environment need to be examined. The software components of this architecture include PhysX for the 3D virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture, which controls the perceptual processing and robot planning. The hardware is a custom-built pan-tilt stereo color camera. We describe experiments using both static and moving objects.
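A compact sketch of the compare-and-refixate loop described above, assuming the real frame and the virtual-camera render are available as grayscale images (the file names are placeholders; the actual system presumably works on live streams):

```python
import cv2
import numpy as np

# Blur real and rendered views with local Gaussians, take their absolute
# difference as an error mask, and fixate next where they disagree most.
real = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
virtual = cv2.imread("virtual_render.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

error = np.abs(cv2.GaussianBlur(real, (21, 21), 5.0)
               - cv2.GaussianBlur(virtual, (21, 21), 5.0))
_, _, _, next_fixation = cv2.minMaxLoc(error)  # (x, y) of the peak error
```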
Geometric Variational Methods for Controlled Active Vision
2006-08-01
Evaluation of stereoscopic display with visual function and interview
NASA Astrophysics Data System (ADS)
Okuyama, Fumio
1999-05-01
The influence of a binocular stereoscopic (3D) television display on the human eye was compared with that of a 2D display, using visual function testing and interviews. A 40-inch double lenticular display was used for the 2D/3D comparison experiments. Subjects observed the display for 30 minutes at a distance of 1.0 m, viewing a combination of 2D and 3D material. The participants were twelve young adults. The main visual functions measured by optometric testing were visual acuity, refraction, phoria, near vision point, accommodation, etc. The interview consisted of 17 questions. Testing procedures were performed just before watching, just after watching, and forty-five minutes after watching. Changes in visual function are characterized as prolongation of the near vision point, decrease of accommodation, and increase in phoria. The 3D viewing interview results show much more visual fatigue in comparison with the 2D results. The conclusions are: 1) changes in visual function are larger and visual fatigue is more intense when viewing 3D images; 2) the evaluation method combining visual function testing and interviews proved very satisfactory for analyzing the influence of stereoscopic displays on the human eye.
3-D Imaging Systems for Agricultural Applications—A Review
Vázquez-Arellano, Manuel; Griepentrog, Hans W.; Reiser, David; Paraforos, Dimitris S.
2016-01-01
Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have in providing information about environmental structures, based on the recent progress in optical 3-D sensors. The structure of this review consists of an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560
When the display matters: A multifaceted perspective on 3D geovisualizations
NASA Astrophysics Data System (ADS)
Juřík, Vojtěch; Herman, Lukáš; Šašinka, Čeněk; Stachoň, Zdeněk; Chmelík, Jiří
2017-04-01
This study explores the influence of stereoscopic (real) 3D and monoscopic (pseudo) 3D visualization on the human ability to reckon altitude information in non-interactive and interactive 3D geovisualizations. A two-phased experiment was carried out to compare the performance of two groups of participants, one of them using the real 3D and the other the pseudo 3D visualization of geographical data. A homogeneous group of 61 psychology students, inexperienced in the processing of geographical data, was tested with respect to their efficiency at identifying altitudes of the displayed landscape. The first phase of the experiment was designed as non-interactive, where static 3D visual displays were presented; the second phase was designed as interactive, and the participants were allowed to explore the scene by adjusting the position of the virtual camera. The investigated variables included accuracy at altitude identification, time demands, and the amount of the participant's motor activity performed during interaction with the geovisualization. The interface was created using a Motion Capture system, a Wii Remote Controller, widescreen projection, and passive Dolby 3D technology (for real 3D vision). The real 3D visual display was shown to significantly increase the accuracy of landscape altitude identification in non-interactive tasks. As expected, in the interactive phase the differences in accuracy between groups flattened out due to the possibility of interaction, with no other statistically significant differences in completion times or motor activity. The increased number of omitted objects in the real 3D condition was further subjected to an exploratory analysis.
Refractive outcomes after multifocal intraocular lens exchange.
Kim, Eric J; Sajjad, Ahmar; Montes de Oca, Ildamaris; Koch, Douglas D; Wang, Li; Weikert, Mitchell P; Al-Mohtaseb, Zaina N
2017-06-01
To evaluate the refractive outcomes after multifocal intraocular lens (IOL) exchange. Cullen Eye Institute, Baylor College of Medicine, Houston, Texas, USA. Retrospective case series. Patients had multifocal IOL explantation followed by IOL implantation. Outcome measures included type of IOL, surgical indication, corrected distance visual acuity (CDVA), and refractive prediction error. The study comprised 29 patients (35 eyes). The types of IOLs implanted after multifocal IOL explantation included in-the-bag IOLs (74%), iris-sutured IOLs (6%), sulcus-fixated IOLs with optic capture (9%), sulcus-fixated IOLs without optic capture (9%), and anterior chamber IOLs (3%). The surgical indication for exchange included blurred vision (60%), photic phenomena (57%), photophobia (9%), loss of contrast sensitivity (3%), and multiple complaints (29%). The CDVA was 20/40 or better in 94% of eyes before the exchange and 100% of eyes after the exchange (P = .12). The mean refractive prediction error significantly decreased from 0.22 ± 0.81 diopter (D) before the exchange to -0.09 ± 0.53 D after the exchange (P < .05). The median absolute refractive prediction error significantly decreased from 0.43 D before the exchange to 0.23 D after the exchange (P < .05). Multifocal IOL exchange can be performed safely with good visual outcomes using different types of IOLs. A lower refractive prediction error and a higher likelihood of 20/40 or better vision can be achieved with the implantation of the second IOL compared with the original multifocal IOL, regardless of the final IOL position. Copyright © 2017 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Humans and Deep Networks Largely Agree on Which Kinds of Variation Make Object Recognition Harder.
Kheradpisheh, Saeed R; Ghodrati, Masoud; Ganjtabesh, Mohammad; Masquelier, Timothée
2016-01-01
View-invariant object recognition is a challenging problem that has attracted much attention among the psychology, neuroscience, and computer vision communities. Humans are notoriously good at it, even if some variations are presumably more difficult to handle than others (e.g., 3D rotations). Humans are thought to solve the problem through hierarchical processing along the ventral stream, which progressively extracts more and more invariant visual features. This feed-forward architecture has inspired a new generation of bio-inspired computer vision systems called deep convolutional neural networks (DCNNs), which are currently the best models for object recognition in natural images. Here, for the first time, we systematically compared human feed-forward vision and DCNNs on a view-invariant object recognition task using the same set of images, controlling the kinds of transformation (position, scale, rotation in plane, and rotation in depth) as well as their magnitude, which we call the "variation level." We used four object categories: car, ship, motorcycle, and animal. In total, 89 human subjects participated in 10 experiments in which they had to discriminate between two or four categories after rapid presentation with backward masking. We also tested two recent DCNNs (proposed respectively by Hinton's group and Zisserman's group) on the same tasks. We found that humans and DCNNs largely agreed on the relative difficulties of each kind of variation: rotation in depth is by far the hardest transformation to handle, followed by scale, then rotation in plane, and finally position (much easier). This suggests that DCNNs would be reasonable models of human feed-forward vision. In addition, our results show that the variation levels in rotation in depth and scale strongly modulate both humans' and DCNNs' recognition performances. We thus argue that these variations should be controlled in the image datasets used in vision research.
Low computation vision-based navigation for a Martian rover
NASA Technical Reports Server (NTRS)
Gavin, Andrew S.; Brooks, Rodney A.
1994-01-01
Construction and design details of the Mobot Vision System, a small, self-contained, mobile vision system, are presented. This system uses the view from the top of a small, roving, robotic vehicle to supply data that is processed in real-time to safely navigate the surface of Mars. A simple, low-computation algorithm for constructing a 3-D navigational map of the Martian environment to be used by the rover is discussed.
Machine vision guided sensor positioning system for leaf temperature assessment
NASA Technical Reports Server (NTRS)
Kim, Y.; Ling, P. P.; Janes, H. W. (Principal Investigator)
2001-01-01
A sensor positioning system was developed for monitoring plants' well-being using a non-contact sensor. Image processing algorithms were developed to identify a target region on a plant leaf. A novel algorithm to recover view depth was developed by using a camera equipped with a computer-controlled zoom lens. The methodology has improved depth recovery resolution over a conventional monocular imaging technique. An algorithm was also developed to find a maximum enclosed circle on a leaf surface so the conical field-of-view of an infrared temperature sensor could be filled by the target without peripheral noise. The center of the enclosed circle and the estimated depth were used to define the sensor 3-D location for accurate plant temperature measurement.
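The "maximum enclosed circle" step has a standard computer-vision realization that may approximate what the authors did (the abstract does not give their algorithm): on a binary leaf mask, the distance transform peaks at the center of the largest inscribed circle, which can then fill the infrared sensor's conical field of view without background noise. The file name is a placeholder.

```python
import cv2

# Binary mask of the segmented leaf (non-zero = leaf), hypothetical input.
mask = cv2.imread("leaf_mask.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

# The distance transform gives, for each leaf pixel, the distance to the
# nearest background pixel; its maximum is the largest inscribed circle.
dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
_, radius, _, center = cv2.minMaxLoc(dist)  # max distance = radius, at the centre
print(f"aim sensor at {center}, target radius {radius:.1f} px")
```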
Parametric dense stereovision implementation on a system-on chip (SoC).
Gardel, Alfredo; Montejo, Pablo; García, Jorge; Bravo, Ignacio; Lázaro, José L
2012-01-01
This paper proposes a novel hardware implementation of dense recovery of stereovision 3D measurements. Traditionally, 3D stereo systems have imposed limits on the maximum number of stereo correspondences, introducing a large restriction on artificial vision algorithms. The proposed system-on-chip (SoC) provides great performance and efficiency, with a scalable architecture available for many different situations, addressing real-time processing of the stereo image flow. Using double-buffering techniques properly combined with pipelined processing, the use of reconfigurable hardware achieves a parametrisable SoC which gives the designer the opportunity to decide its right dimensions and features. The proposed architecture does not need any external memory because the processing is done as the image flow arrives. Our SoC provides 3D data directly, without the storage of whole stereo images. Our goal is to obtain high processing speed while maintaining the accuracy of 3D data using minimum resources. Configurable parameters may be controlled by later/parallel stages of the vision algorithm executed on an embedded processor. With an FPGA hardware clock of 100 MHz, image flows of up to 50 frames per second (fps) of dense stereo maps with more than 30,000 depth points can be obtained for 2 Mpix images, with minimal initial latency. The implementation of computer vision algorithms on reconfigurable hardware, especially low-level processing, opens up the prospect of its use in autonomous systems, where it can act as a coprocessor to reconstruct 3D images with high-density information in real time.
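The quoted figures are internally consistent with a one-pixel-per-clock pipeline, as a quick back-of-envelope check shows:

```python
# A pipeline accepting one pixel per clock at 100 MHz sustains exactly
# 2 Mpix frames at 50 fps, matching the abstract's numbers.
clock_hz = 100e6
pixels_per_frame = 2e6
fps = clock_hz / pixels_per_frame        # = 50.0 frames per second
depth_points_per_s = 30_000 * fps        # = 1.5 million depth points per second
print(fps, depth_points_per_s)
```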
Wide-angle vision for road views
NASA Astrophysics Data System (ADS)
Huang, F.; Fehrs, K.-K.; Hartmann, G.; Klette, R.
2013-03-01
The field-of-view of a wide-angle image is greater than (say) 90 degrees, and so contains more information than available in a standard image. A wide field-of-view is more advantageous than standard input for understanding the geometry of 3D scenes, and for estimating the poses of panoramic sensors within such scenes. Thus, wide-angle imaging sensors and methodologies are commonly used in various road-safety, street surveillance, street virtual touring, or street 3D modelling applications. The paper reviews related wide-angle vision technologies by focusing on mathematical issues rather than on hardware.
Vemuri, Anant S; Wu, Jungle Chi-Hsiang; Liu, Kai-Che; Wu, Hurng-Sheng
2012-12-01
Surgical procedures have undergone considerable advancement during the last few decades. More recently, the intraoperative availability of some imaging methods has added a new dimension to minimally invasive techniques. Augmented reality in surgery has been a topic of intense interest and research. Augmented reality involves the use of computer vision algorithms on video from endoscopic cameras or cameras mounted in the operating room to provide the surgeon additional information that he or she otherwise would have to recognize intuitively. One of the techniques combines a virtual preoperative model of the patient with the endoscope camera using natural or artificial landmarks to provide an augmented reality view in the operating room. The authors' approach is to provide this with the fewest possible changes to the operating room. A software architecture is presented to provide interactive adjustment in the registration of a three-dimensional (3D) model and endoscope video. Augmented reality was used to perform 12 surgeries, including adrenalectomy, ureteropelvic junction obstruction, retrocaval ureter, and pancreas procedures. The general feedback from the surgeons has been very positive, not only in terms of deciding the insertion point positions but also in knowing the least change in anatomy. The approach involves providing a deformable 3D model architecture and its application to the operating room. A 3D model with a deformable structure is needed to show the shape change of soft tissue during the surgery. The software architecture to provide interactive adjustment in the registration of the 3D model and endoscope video, with adjustability of every 3D model, is presented.
3D Medical Collaboration Technology to Enhance Emergency Healthcare
Welch, Greg; Sonnenwald, Diane H; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Söderholm, Hanna M.; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Ampalam, Manoj; Krishnan, Srinivas; Noel, Vincent; Noland, Michael; Manning, James E.
2009-01-01
Two-dimensional (2D) videoconferencing has been explored widely in the past 15–20 years to support collaboration in healthcare. Two issues that arise in most evaluations of 2D videoconferencing in telemedicine are the difficulty obtaining optimal camera views and poor depth perception. To address these problems, we are exploring the use of a small array of cameras to reconstruct dynamic three-dimensional (3D) views of a remote environment and of events taking place within. The 3D views could be sent across wired or wireless networks to remote healthcare professionals equipped with fixed displays or with mobile devices such as personal digital assistants (PDAs). The remote professionals’ viewpoints could be specified manually or automatically (continuously) via user head or PDA tracking, giving the remote viewers head-slaved or hand-slaved virtual cameras for monoscopic or stereoscopic viewing of the dynamic reconstructions. We call this idea remote 3D medical collaboration. In this article we motivate and explain the vision for 3D medical collaboration technology; we describe the relevant computer vision, computer graphics, display, and networking research; we present a proof-of-concept prototype system; and we present evaluation results supporting the general hypothesis that 3D remote medical collaboration technology could offer benefits over conventional 2D videoconferencing in emergency healthcare. PMID:19521951
A combined vision-inertial fusion approach for 6-DoF object pose estimation
NASA Astrophysics Data System (ADS)
Li, Juan; Bernardos, Ana M.; Tarrío, Paula; Casar, José R.
2015-02-01
The estimation of the 3D position and orientation of moving objects ('pose' estimation) is a critical process for many applications in robotics, computer vision or mobile services. Although major research efforts have been carried out to design accurate, fast and robust indoor pose estimation systems, it remains an open challenge to provide a low-cost, easy-to-deploy and reliable solution. Addressing this issue, this paper describes a hybrid approach for 6 degrees of freedom (6-DoF) pose estimation that fuses acceleration data and stereo vision to overcome the respective weaknesses of single-technology approaches. The system relies on COTS technologies (standard webcams, accelerometers) and printable colored markers. It uses a set of infrastructure cameras, located so as to keep the tracked object visible during most of the operation time; the target object has to include an embedded accelerometer and be tagged with a fiducial marker. This simple marker has been designed for easy detection and segmentation, and it may be adapted to different service scenarios (in shape and colors). Experimental results show that the proposed system provides high accuracy while satisfactorily dealing with the real-time constraints.
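A deliberately simplified sketch of the fusion idea, not the paper's actual filter: integrate acceleration between camera frames for a fast pose prediction, then pull the estimate toward the slower but drift-free vision fix when one arrives. The rates and blend gain are placeholders.

```python
import numpy as np

class VisionInertialFuser:
    """Dead-reckon position from acceleration; correct with vision fixes."""

    def __init__(self, dt=0.01, alpha=0.98):
        self.dt = dt          # IMU period (100 Hz placeholder)
        self.alpha = alpha    # weight kept on the inertial prediction
        self.pos = np.zeros(3)
        self.vel = np.zeros(3)

    def step(self, accel, vision_pos=None):
        """accel: gravity-compensated 3-vector (m/s^2); vision_pos: optional
        3D fix from the stereo/marker pipeline when a frame is available."""
        self.vel += accel * self.dt
        self.pos += self.vel * self.dt                 # inertial prediction
        if vision_pos is not None:                     # drift-free correction
            self.pos = self.alpha * self.pos + (1 - self.alpha) * vision_pos
        return self.pos
```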
Technological innovation in video-assisted thoracic surgery.
Özyurtkan, Mehmet Oğuzhan; Kaba, Erkan; Toker, Alper
2017-01-01
The popularity of video-assisted thoracic surgery (VATS) has increased worldwide due to recent innovations in thoracic surgical techniques, equipment, electronic devices that carry light and vision, and high-definition monitors. Uniportal VATS (UVATS) has been disseminated widely, creating a drive to develop new techniques and instruments, including new graspers and special staplers with greater angulation capacities. During the history of VATS, the classical 10 mm 0° or 30° rigid rod lens system has been replaced by new thoracoscopes providing variable-angle technology and allowing a 0° to 120° range of vision. Besides, the tip of these novel thoracoscopes can be positioned away from the operating field to minimize fencing with other thoracoscopic instruments. Curved-tip stapler technology and better-designed endostaplers enable better dissection, more precise control, and more secure staple lines. UVATS also contributed to the development of embryonic natural orifice transluminal endoscopic surgery. Three-dimensional VATS systems facilitate faster and more accurate grasping, suturing, and dissection of the tissues by restoring natural 3D vision and the perception of depth. Another innovation in VATS is the energy-based coagulative and tissue fusion technology, which may be an alternative to endostaplers.
Solimini, Angelo G.
2013-01-01
Background: The increasing popularity of commercial movies showing three-dimensional (3D) images has raised concern about possible adverse side effects on viewers. Methods and Findings: A prospective carryover observational study was designed to assess the effect of exposure (3D vs. 2D movie views) on self-reported symptoms of visually induced motion sickness. The standardized Simulator Sickness Questionnaire (SSQ) was self-administered to a convenience sample of 497 healthy adult volunteers before and after the viewing of 2D and 3D movies. Viewers reporting some sickness (SSQ total score >15) were 54.8% of the total sample after the 3D movie, compared to 14.1% after the 2D movie. Symptom intensity was 8.8 times higher than baseline after exposure to the 3D movie (compared to an increase of 2 times baseline after the 2D movie). Multivariate modeling of visually induced motion sickness as the response variable pointed out the significant effects of exposure to the 3D movie, history of car sickness and headache, after adjusting for gender, age, self-reported anxiety level, attention to the movie, and show time. Conclusions: Seeing 3D movies can increase ratings of nausea, oculomotor, and disorientation symptoms, especially in women with a susceptible visual-vestibular system. Confirmatory studies which include examination of clinical signs in viewers are needed to reach conclusive evidence on the effects of 3D vision on spectators. PMID:23418530
Visual-conformal display format for helicopter guidance
NASA Astrophysics Data System (ADS)
Doehler, H.-U.; Schmerwitz, Sven; Lueken, Thomas
2014-06-01
Helicopter guidance in situations where natural vision is reduced is still a challenging task. Besides newly available sensors, which are able to "see" through darkness, fog and dust, display technology remains one of the key issues of pilot assistance systems. As long as we have pilots within aircraft cockpits, we have to keep them informed about the outside situation. "Situational awareness" of humans is mainly powered by their visual channel. Therefore, display systems which are able to cross-fade seamlessly from natural vision to artificial computer vision and vice versa are of greatest interest within this context. Helmet-mounted displays (HMD) have this property when they apply a head tracker for measuring the pilot's head orientation relative to the aircraft reference frame. Together with the aircraft's position and orientation relative to the world's reference frame, the on-board graphics computer can generate images which are perfectly aligned with the outside world. We call image elements which match the outside world "visual-conformal". Published display formats for helicopter guidance in degraded visual environments mostly apply 2D symbologies, which fall far short of what is possible. We propose a perspective 3D symbology for a head-tracked HMD which shows as many visual-conformal elements as possible. We implemented and tested our proposal within our fixed-base cockpit simulator as well as in our flying helicopter simulator (FHS). Recently conducted simulation trials with experienced helicopter pilots give some first evaluation results of our proposal.
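A minimal sketch of the visual-conformal projection chain, under assumed transform conventions (each 4x4 matrix maps its child frame into the parent frame; all matrices and the intrinsic model are placeholders, not the authors' implementation):

```python
import numpy as np

def project_conformal(p_world, T_world_aircraft, T_aircraft_head, K):
    """Project a world point onto the HMD raster.

    p_world: 3-vector in the world frame; T_world_aircraft maps aircraft
    coordinates into the world frame (from GPS/INS); T_aircraft_head maps
    head coordinates into the aircraft frame (from the head tracker);
    K is a 3x3 pinhole intrinsic matrix standing in for the HMD optics.
    """
    p_h = np.append(p_world, 1.0)
    # Chain the inverses to express the point in the head (display) frame.
    p_head = np.linalg.inv(T_aircraft_head) @ np.linalg.inv(T_world_aircraft) @ p_h
    uvw = K @ p_head[:3]
    return uvw[:2] / uvw[2]  # pixel where the conformal symbol must be drawn
```

Re-evaluating this chain every display frame with fresh head-tracker and navigation data is what keeps the drawn symbol locked onto the real-world object as the pilot's head and the aircraft move.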
Vision-Based Pose Estimation for Robot-Mediated Hand Telerehabilitation
Airò Farulla, Giuseppe; Pianu, Daniele; Cempini, Marco; Cortese, Mario; Russo, Ludovico O.; Indaco, Marco; Nerino, Roberto; Chimienti, Antonio; Oddo, Calogero M.; Vitiello, Nicola
2016-01-01
Vision-based Pose Estimation (VPE) represents a non-invasive solution for allowing smooth and natural interaction between a human user and a robotic system, without requiring complex calibration procedures. Moreover, VPE interfaces are gaining momentum as they are highly intuitive, such that they can be used by untrained personnel (e.g., a generic caregiver) even in delicate tasks such as rehabilitation exercises. In this paper, we present a novel master-slave setup for hand telerehabilitation with an intuitive and simple interface for remote control of a wearable hand exoskeleton, named HX. While performing rehabilitative exercises, the master unit evaluates the 3D position of a human operator's hand joints in real time using only an RGB-D camera, and remotely commands the slave exoskeleton. Within the slave unit, the exoskeleton replicates hand movements, and an external grip sensor records interaction forces that are fed back to the operator-therapist, allowing a direct real-time assessment of the rehabilitative task. Experimental data collected with an operator and six volunteers are provided to show the feasibility of the proposed system and its performance. The results demonstrate that, leveraging our system, the operator was able to directly control the volunteers' hand movements. PMID:26861333
Active-Vision Control Systems for Complex Adversarial 3-D Environments
2009-03-01
Accommodation and Phoria in Children Wearing Multifocal Contact Lenses
Gong, Celia R; Troilo, David; Richdale, Kathryn
2017-01-01
Purpose: To determine the effect of multifocal contact lenses on accommodation and phoria in children. Methods: This was a prospective, non-dispensing, randomized, crossover, single-visit study. Myopic children with normal accommodation and binocularity and no history of myopia control treatment were enrolled and fitted with CooperVision Biofinity single vision (SV) and multifocal (MF, +2.50 D center-distance add) contact lenses. Accommodative responses (photorefraction) and phorias (Modified Thorington) were measured at 4 distances (>3 m, 100 cm, 40 cm, 25 cm). Secondary measures included high- and low-contrast logMAR acuity, accommodative amplitude and facility. Differences between contact lens designs were analyzed using repeated measures regression and paired t-tests. Results: A total of 16 subjects, aged 10-15 years, completed the study. There was a small decrease in high-illumination (SV: -0.08, MF: +0.01) and low-illumination (SV: -0.03, MF: +0.08) visual acuity (both p<0.01), and in contrast sensitivity (SV: 2.0, MF: 1.9 log units, p=0.015) with multifocals. Subjects were more exophoric at 40 cm (SV: -0.41, MF: -2.06 Δ) and 25 cm (SV: -0.83, MF: -4.30 Δ) (both p<0.01). With multifocals, subjects had decreased accommodative responses at distance (SV: -0.04; MF: -0.37 D, p=0.02), 100 cm (SV: +0.37; MF: -0.35 D, p<0.01), 40 cm (SV: +1.82; MF: +0.62 D, p<0.01), and 25 cm (SV: +3.38; MF: +1.75 D, p<0.01). There were no significant differences in accommodative amplitude (p=0.66) or facility (p=0.54). Conclusions: Children wearing multifocal contact lenses exhibited reduced accommodative responses and more exophoria at increasingly higher accommodative demands than with single vision contact lenses. This suggests that children may be relaxing their accommodation and using the positive addition or the increased depth of focus from the added spherical aberration of the multifocals. Further studies are needed to evaluate other lens designs, different amounts of positive addition and aberration, and long-term adaptation to the lenses. PMID:28027276
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Z; Baker, J; Hsia, A
Purpose: The commercially available Leipzig-style cone for High Dose Rate (HDR) brachytherapy has a steep depth dose curve and a non-uniform dose distribution. This work shows the performance of a Ring Surface Applicator created using a 3D printer that can generate a better dose distribution; calculated doses were verified with film measurements. Methods: Water-equivalent red ABS plastic was used to print the Ring Surface Applicator, which hosts three catheters: a center piece with a straight catheter and two concentric rings with diameters of 3.5 and 5.5 cm. Gafchromic EBT2 film, an Epson Expression 10000 flatbed scanner, and the online software at radiochromic.com were used to analyze the measured data. A 10 cm × 10 cm piece of film was sandwiched between two 15 × 10 × 5 cm³ polystyrene phantoms. The applicator was positioned directly on top of the phantom. Measurement was done using dwell times and positions calculated by the Eclipse BrachyVision treatment planning system (TPS). Results: Depth dose curves were generated from the plan and from measurement. The results show that the measured and calculated depth doses were in agreement (<3%) from the surface to 4 mm depth. A discrepancy of 6% was observed at 5 mm depth, where the dose is typically prescribed. For depths greater than 5 mm, the measured doses were lower than those calculated by Eclipse BrachyVision. This can be attributed to a combination of the simple TG-43 calculation algorithm and the lack of inhomogeneity correction. Dose profiles at 5 mm depth were also generated from the TPS calculation and measured with film. The measured and calculated profiles are similar; consistent with the depth dose curve, the measured dose is lower than the calculated dose. Conclusion: Our results showed that the Ring Surface Applicator, printed using a 3D printer, can generate a more uniform dose distribution within the target volume and can be safely used in the clinic.
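The reported agreement can be expressed as a simple percent-difference calculation between film-measured and TPS-calculated depth doses; the values below are illustrative placeholders chosen only to mimic the reported trend, not the study's data:

```python
import numpy as np

# Illustrative depth dose values as a percentage of surface dose (placeholders).
depth_mm = np.array([0, 1, 2, 3, 4, 5])
measured = np.array([100.0, 88.0, 77.0, 68.0, 60.0, 50.0])    # film
calculated = np.array([100.0, 89.0, 78.5, 69.5, 61.5, 53.2])  # TPS (TG-43)

# Percent difference of measurement relative to calculation at each depth.
pct_diff = 100.0 * (measured - calculated) / calculated
print(np.round(pct_diff, 1))  # small near the surface, larger (negative) at 5 mm
```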
Automatic image database generation from CAD for 3D object recognition
NASA Astrophysics Data System (ADS)
Sardana, Harish K.; Daemi, Mohammad F.; Ibrahim, Mohammad K.
1993-06-01
The development and evaluation of multiple-view 3-D object recognition systems is based on a large set of model images. Due to the various advantages of using CAD, it is becoming more and more practical to use existing CAD data in computer vision systems. Current PC-level CAD systems are capable of providing physical image modelling and rendering involving positional variations in cameras, light sources, etc. We have formulated a modular scheme for automatic generation of various aspects (views) of the objects in a model-based 3-D object recognition system. These views are generated at desired orientations on the unit Gaussian sphere. With a suitable network file sharing system (NFS), the images can be stored directly in a database located on a file server. This paper presents the image modelling solutions using CAD in relation to the multiple-view approach. Our modular scheme for data conversion and automatic image database storage for such a system is discussed. We have used this approach in 3-D polyhedron recognition. An overview of the results, advantages and limitations of using CAD data, and conclusions on using such a scheme are also presented.
Design of an NF-kB Activation-Coupled Apoptotic Molecule for Prostate Cancer Therapy
2008-07-31
Caspase activity was assayed using the p65-LS hetero-dimer immunocomplex with a colorimetric caspase activity assay kit (BioVision). Caspase-3 activity was measured by a Caspase-3 colorimetric assay kit (BioVision), with purified caspase-3 (10 ng) used as a positive control; caspase-3 activity is reported in arbitrary units.
Defense Additive Manufacturing: DOD Needs to Systematically Track Department-wide 3D Printing Efforts
2015-10-01
The Navy installed a 3D printer aboard the USS Essex to demonstrate the ability to additively develop and produce parts. Officials stated that this reflected the desired result and vision of having the capability in the fleet, and that the Navy plans to install 3D printers on two additional ships.
Influence of Socially Used Drugs on Vision and Vision Performance
1974-07-31
Report prepared by the Optical Sciences Group for the U.S. Army Medical Research and Development Command, Washington, D.C., July 1974. Keywords: vision, vision performance, alcohol, marijuana, tetrahydrocannabinol.
Li, Zhongwei; Liu, Xingjian; Wen, Shifeng; He, Piyao; Zhong, Kai; Wei, Qingsong; Shi, Yusheng; Liu, Sheng
2018-01-01
Lack of monitoring of in situ process signatures is one of the challenges restricting the improvement of Powder-Bed-Fusion Additive Manufacturing (PBF AM). Among the various process signatures, the monitoring of geometric signatures is of high importance. This paper presents the use of vision sensing methods as a non-destructive in situ 3D measurement technique to monitor two main categories of geometric signatures: the 3D surface topography and the 3D contour data of the fusion area. To increase efficiency and accuracy, an enhanced phase measuring profilometry (EPMP) is proposed to monitor the 3D surface topography of the powder bed and the fusion area reliably and rapidly. A slice-model-assisted contour detection method is developed to extract the contours of the fusion area. The performance of the techniques is demonstrated with selected measurements. Experimental results indicate that the proposed method can reveal irregularities caused by various defects and inspect contour accuracy and surface quality. It holds the potential to be a powerful in situ 3D monitoring tool for manufacturing process optimization, closed-loop control, and data visualization. PMID:29649171
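The abstract does not spell out the EPMP algorithm, but standard phase measuring profilometry gives the flavor: with four fringe images shifted by pi/2, the wrapped phase of the surface follows from a single arctangent. A minimal sketch:

```python
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """Four fringe images at phase shifts 0, pi/2, pi, 3*pi/2 (float arrays).

    For I_k = A + B*cos(phi + delta_k), the shifts cancel the offset A and
    modulation B, leaving the wrapped surface phase in (-pi, pi].
    """
    return np.arctan2(i4 - i2, i1 - i3)

# The wrapped phase is then unwrapped and mapped to height through the
# calibrated phase-to-height relation of the projector/camera rig.
```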
Myopia and radial keratotomy: a survey among Norwegian ophthalmologists.
Midelfart, A
1990-10-01
One hundred and eighty-nine of 200 ophthalmologists in Norway responded to a survey requesting them to report their age, sex, refractive state, use of corrective lenses, and if myopic, their view on radial keratotomy as a possible method to correct their own myopia. According to the answers, 32 (17%) females and 154 (82%) males, with mean age of 49 years, were registered. The reported refractive state was 26.5% emmetropy and 72.0% ametropy. The prevalence of myopia was 45%. The mean refractive status (equivalent sphere) in the right eye was -1.02 +/- 2.28 D with a range from -8.5 D to +7.25 D (n = 184). Of the ametropes, 64.8% used spectacles, 15.3% used both spectacles and contact lenses, whilst 3.6% used only contact lenses for distance vision. With the exception of one, all myopes used corrective lenses. Only 2 myopic ophthalmologists responded positively to the question of whether they would consider having radial keratotomy to correct their own myopia.
Bornhoft, J M; Strabala, K W; Wortman, T D; Lehman, A C; Oleynikov, D; Farritor, S M
2011-01-01
The objective of this research is to study the effectiveness of using a stereoscopic visualization system for performing remote surgery. The use of stereoscopic vision has become common with the advent of the da Vinci® system (Intuitive, Sunnyvale, CA). This system creates a virtual environment that consists of a 3-D display for visual feedback and haptic tactile feedback, together providing an intuitive environment for remote surgical applications. This study uses simple in vivo robotic surgical devices and compares the performance of surgeons using the stereoscopic interfacing system to the performance of surgeons using two-dimensional monitors. The stereoscopic viewing system consists of two cameras, two monitors, and four mirrors. The cameras are mounted on a multi-functional miniature in vivo robot and mimic the depth perception of the human eyes; this is done by placing the cameras at a calculated angle and distance apart. Live video streams from the left and right cameras are displayed on the left and right monitors, respectively. A system of angled mirrors allows the left and right eyes to see the video stream from the left and right monitor, respectively, creating the illusion of depth. The haptic interface consists of two PHANTOM Omni® (SensAble, Woburn, MA) controllers. These controllers measure the position and orientation of a pen-like end effector with three degrees of freedom. As the surgeon uses this interface, they see a 3-D image and feel force feedback for collisions and workspace limits. The stereoscopic viewing system has been used in several surgical training tests and shows a potential improvement in depth perception and 3-D vision. The haptic system accurately gives force feedback that aids in surgery. Both have been used in non-survival animal surgeries, and have successfully been used in suturing and gallbladder removal. Bench-top experiments using the interfacing system have also been conducted. A group of participants completed two different surgical training tasks using both a two-dimensional visual system and the stereoscopic visual system. Results suggest that the stereoscopic visual system decreased the amount of time taken to complete the tasks. All participants also reported that the stereoscopic system was easier to use than the two-dimensional system. Haptic controllers combined with stereoscopic vision provide a more intuitive virtual environment. This system provides the surgeon with 3-D vision, depth perception, and the ability to receive feedback through forces applied in the haptic controller while performing surgery. These capabilities potentially enable the performance of more complex surgeries with a higher level of precision.
NASA Technical Reports Server (NTRS)
Blackmon, Theodore
1998-01-01
Virtual reality (VR) technology has played an integral role in Mars Pathfinder mission operations. Using an automated machine vision algorithm, the 3D topography of the Martian surface was rapidly recovered from the stereo images captured by the lander camera to produce photo-realistic 3D models. An advanced interface was developed for visualization of, and interaction with, the virtual environment of the Pathfinder landing site for mission scientists at the Space Flight Operations Facility of the Jet Propulsion Laboratory. The VR aspect of the display allowed mission scientists to navigate on Mars in 3D while remaining here on Earth, thus improving their spatial awareness of the rock field that surrounds the lander. Measurements of positions, distances, and angles could be easily extracted from the topographic models, providing valuable information for science analysis and mission planning. Moreover, the VR map of Mars has also been used to assist with the archiving and planning of activities for the Sojourner rover.
Formalizing the potential of stereoscopic 3D user experience in interactive entertainment
NASA Astrophysics Data System (ADS)
Schild, Jonas; Masuch, Maic
2015-03-01
The use of stereoscopic 3D (S3D) vision affects how interactive entertainment is developed as well as how it is experienced by the audience. The large number of potentially influential factors, together with the variety and subtlety of the measured effects on user experience, makes it difficult to grasp the overall potential of S3D vision. In a comprehensive approach, we (a) present a development framework that summarizes possible variables in display technology, content creation, and human factors, and (b) list a scheme of S3D user experience effects covering initial fascination, emotions, performance, and behavior, as well as negative feelings of discomfort and complexity. As a major contribution we propose a qualitative formalization that derives dependencies between development factors and user effects. The argumentation is based on several previously published user studies. We further show how to apply this formalization to identify possible opportunities and threats in content creation, and how to pursue future steps toward a possible quantification.
NASA Astrophysics Data System (ADS)
Lauinger, N.
2007-09-01
A better understanding of the color constancy mechanism in human color vision [7] can be reached through analysis of the photometric data of all illuminants and patches (Mondrians or other visible objects) involved in visual experiments. In Part I [3] and in [4-6], the integration in the human eye of the geometrical-optical imaging hardware and the diffractive-optical hardware has been described and illustrated (Fig. 1). This combined hardware is the main topic of the NAMIROS research project (nano- and micro-3D gratings for optical sensors) [8], promoted and coordinated by Corrsys 3D Sensors AG. The hardware relevant to (photopic) human color vision can be described as a diffractive or interference-optical correlator that transforms incident light into diffractive-optical RGB data and relates local RGB data to global RGB data in the near field behind the 'inverted' human retina. The relative differences in local/global RGB interference-optical contrasts become available to the photoreceptors (cones and rods) only after this optical pre-processing.
Bamashmus, Mahfouth A.; Hubaish, Khammash; Alawad, Mohammed; Alakhlee, Hisham
2015-01-01
Purpose: The purpose was to evaluate subjective quality of vision and patient satisfaction after laser in situ keratomileusis (LASIK) for myopia and myopic astigmatism. Patients and Methods: A self-administered patient questionnaire consisting of 29 items was prospectively administered to LASIK patients at the Yemen Magrabi Hospital. Seven scales covering specific aspects of the quality of vision were formulated, including global satisfaction, quality of uncorrected and corrected vision, quality of night vision, glare, daytime driving, and night driving. Main outcome measures were responses to individual questions, scale scores, and correlations with clinical parameters. The scoring scale ranged from 1 (dissatisfied) to 3 (very satisfied) and was stratified as follows: 1-1.65 = dissatisfied; 1.66-2.33 = satisfied; and 2.33-3 = very satisfied. Data at 6 months postoperatively are reported. Results: The study sample comprised 200 patients (122 females, 78 males) ranging in age from 18 to 46 years. The preoperative myopic sphere was −3.50 ± 1.70 D and myopic astigmatism was 0.90 ± 0.82 D. There were 96% of eyes within ± 1.00 D of the targeted correction. Postoperatively, the uncorrected visual acuity was 20/40 or better in 99% of eyes. The mean score for overall satisfaction was 2.64 ± 0.8. A total of 98.5% of patients were satisfied or very satisfied with their surgery, and 98.5% considered that their main goal for surgery was achieved. Satisfaction with uncorrected vision was 2.5 ± 0.50. The mean score for glare at night was 1.98 ± 0.7. Night driving was rated more difficult preoperatively by 6.2%, whereas 79% had less difficulty driving at night. Conclusion: Patient satisfaction with uncorrected vision after LASIK for myopia and myopic astigmatism appears to be excellent and is related to the residual postoperative refractive error. PMID:25624684
Semi-automatic registration of 3D orthodontics models from photographs
NASA Astrophysics Data System (ADS)
Destrez, Raphaël.; Treuillet, Sylvie; Lucas, Yves; Albouy-Kissi, Benjamin
2013-03-01
In orthodontics, a common practice used to diagnose and plan treatment is the dental cast. After digitization by a CT scan or a laser scanner, the obtained 3D surface models can feed orthodontic numerical tools for computer-aided diagnosis and treatment planning. One of the critical pre-processing steps is the 3D registration of the dental arches to obtain the occlusion of these numerical models. For this task, we propose a vision-based method to compute the registration automatically from photos of the patient's mouth. From a set of matched singular points between two photos and the dental 3D models, the rigid transformation to apply to the mandible to bring it into contact with the maxilla can be computed by minimizing the reprojection errors. In a previous study, we established the feasibility of this visual registration approach with a manual selection of singular points. This paper addresses the issue of automatic point detection. Based on a priori knowledge, histogram thresholding and edge detection are used to extract specific points in the 2D images. Concurrently, curvature information is used to detect the corresponding 3D points. To improve the quality of the final registration, we also introduce a combined optimization of the projection matrix with the 2D/3D point positions. These new developments are evaluated on real data by considering the reprojection errors and the deviation angles after registration with respect to a manual reference occlusion realized by a specialist.
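Minimizing reprojection error over a rigid transform, the core of the registration step above, can be sketched in a few lines of Python with SciPy. The parameterization (a Rodrigues rotation vector plus translation) and all variable names are assumptions for illustration, not the authors' implementation:

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reproj_residuals(params, pts3d, pts2d, P):
    # params = [rx, ry, rz, tx, ty, tz]: rotation vector and translation.
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    X = pts3d @ R.T + t                        # rigidly transform model points
    Xh = np.hstack([X, np.ones((len(X), 1))])  # homogeneous coordinates
    x = Xh @ P.T
    x = x[:, :2] / x[:, 2:3]                   # perspective divide
    return (x - pts2d).ravel()                 # 2D reprojection residuals

# pts3d: Nx3 mandible model points; pts2d: Nx2 matched image points;
# P: 3x4 camera projection matrix (all hypothetical inputs).
# res = least_squares(reproj_residuals, np.zeros(6), args=(pts3d, pts2d, P))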
NASA Technical Reports Server (NTRS)
Hung, Stephen H. Y.
1989-01-01
A fast 3-D object recognition algorithm that can be used as a quick-look subsystem to the vision system for the Special-Purpose Dexterous Manipulator (SPDM) is described. Global features that can be easily computed from range data are used to characterize the images of a viewer-centered model of an object. The algorithm speeds up processing by eliminating low-level processing whenever possible: it may identify the object, reject a set of bad data at an early stage, or create a better environment for a more powerful algorithm to carry the work further.
NASA Astrophysics Data System (ADS)
Babayan, Pavel; Smirnov, Sergey; Strotov, Valery
2017-10-01
This paper describes an aerial object recognition algorithm for on-board and stationary vision systems. The suggested algorithm is intended to recognize objects of a specific kind using a set of reference objects defined by 3D models, and is based on building an outer-contour descriptor. The algorithm consists of two stages: learning and recognition. The learning stage explores the reference objects: using the 3D models, we build a database of training images by rendering each model from viewpoints evenly distributed on a sphere, with the distribution of sphere points made according to the geosphere principle. The gathered training image set is used for calculating the descriptors that are then used in the recognition stage of the algorithm. The recognition stage focuses on estimating the similarity of the captured object and the reference objects by matching an observed image descriptor against the reference object descriptors. The experimental research was performed using a set of models of aircraft of different types (airplanes, helicopters, UAVs). The proposed orientation estimation algorithm showed good accuracy in all case studies, and the real-time performance of the algorithm in an FPGA-based vision system was demonstrated.
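The viewpoint sampling above needs points spread evenly over a sphere. The paper names the geosphere principle (subdividing an icosahedron); as a simple stand-in with comparably even coverage, a Fibonacci lattice can be sketched in a few lines of Python:

import numpy as np

def fibonacci_sphere(n):
    # Near-uniform viewpoints on the unit sphere via golden-angle
    # spacing in longitude and even spacing in z.
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i
    z = 1.0 - 2.0 * (i + 0.5) / n
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

viewpoints = fibonacci_sphere(162)  # render the 3D model from each viewpoint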
Parkhurst, Gregory D
2016-01-01
Purpose: The aim of this study was to evaluate and compare night vision and low-luminance contrast sensitivity (CS) in patients undergoing implantation of phakic collamer lenses or wavefront-optimized laser-assisted in situ keratomileusis (LASIK). Patients and methods: This is a nonrandomized, prospective study in which 48 military personnel were recruited. The Rabin Super Vision Test was used to compare the visual acuity and CS of the Visian implantable collamer lens (ICL) and LASIK groups under normal and low light conditions, using a filter to simulate vision through night vision goggles. Results: Preoperative mean spherical equivalent was −6.10 D in the ICL group and −6.04 D in the LASIK group (P=0.863). Three months postoperatively, super vision acuity (SVa), super vision acuity with (low-luminance) goggles (SVaG), super vision contrast (SVc), and super vision contrast with (low-luminance) goggles (SVcG) significantly improved in the ICL and LASIK groups (P<0.001). Mean improvement in SVaG at 3 months postoperatively was statistically significantly greater in the ICL group than in the LASIK group (mean change [logarithm of the minimum angle of resolution, LogMAR]: ICL =−0.134, LASIK =−0.085; P=0.032). Mean improvements in SVc and SVcG were also statistically significantly greater in the ICL group than in the LASIK group (SVc mean change [logarithm of the CS, LogCS]: ICL =0.356, LASIK =0.209; P=0.018 and SVcG mean change [LogCS]: ICL =0.390, LASIK =0.259; P=0.024). Mean improvement in SVa at 3 months was comparable in both groups (P=0.154). Conclusion: Simulated night vision improved with both ICL implantation and wavefront-optimized LASIK, but improvements were significantly greater with ICLs. These differences may be important in a military setting and may also affect satisfaction with civilian vision correction. PMID:27418804
Llorca, David F.; Sotelo, Miguel A.; Parra, Ignacio; Ocaña, Manuel; Bergasa, Luis M.
2010-01-01
This paper presents an analytical study of the depth estimation error of a stereo vision-based pedestrian detection sensor for automotive applications such as pedestrian collision avoidance and/or mitigation. The sensor comprises two synchronized and calibrated low-cost cameras. Pedestrians are detected by combining a 3D clustering method with Support Vector Machine (SVM) classification. The influence of the sensor parameters on the stereo quantization errors is analyzed in detail, providing a point of reference for choosing the sensor setup according to the application requirements. The sensor is then validated in real experiments. Collision avoidance maneuvers by steering are carried out by manual driving. A real-time kinematic differential global positioning system (RTK-DGPS) is used to provide ground truth data corresponding to both the pedestrian and the host vehicle locations. The performed field tests provided encouraging results and proved the validity of the proposed sensor for use in automotive applications such as autonomous pedestrian collision avoidance. PMID:22319323
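The stereo quantization error analyzed above follows, to first order, from the textbook pinhole relation Z = f·B/d between depth Z, focal length f (in pixels), baseline B, and disparity d. A minimal Python sketch with hypothetical values that are not the paper's actual setup:

def depth_quantization_error(Z_m, focal_px, baseline_m, disp_step_px=1.0):
    # dZ ~= Z^2 / (f * B) * dd: depth error grows quadratically with range.
    return (Z_m ** 2) * disp_step_px / (focal_px * baseline_m)

# Hypothetical: 800 px focal length, 0.30 m baseline, pedestrian at 20 m.
print(depth_quantization_error(20.0, 800.0, 0.30))  # ~1.67 m per pixel of disparity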
Changes in stimulus and response AC/A ratio with vision therapy in Convergence Insufficiency.
Singh, Neeraj Kumar; Mani, Revathy; Hussaindeen, Jameel Rizwana
To evaluate the changes in the stimulus and response Accommodative Convergence to Accommodation (AC/A) ratio following vision therapy (VT) in Convergence Insufficiency (CI). Stimulus and response AC/A ratios were measured in twenty-five CI participants before and after 10 sessions of VT. The stimulus AC/A ratio was measured using the gradient method, and the response AC/A ratio was calculated using the modified Thorington technique, with accommodative responses measured using a WAM-5500 open-field autorefractor. The gradient stimulus and response AC/A cross-link ratios were compared with those of thirty age-matched controls. The mean ages of the CI and control participants were 23.3±5.2 years and 22.7±4.2 years, respectively. The mean stimulus and response AC/A ratios for CI before therapy were 2.2±0.72 and 6.3±2.0 PD/D, changing to 4.2±0.9 and 8.28±3.31 PD/D, respectively, after vision therapy; these changes were statistically significant (paired t-test; p<0.001). The mean stimulus and response AC/A ratios for controls were 3.1±0.81 and 8.95±2.5 PD/D, respectively. Stimulus and response AC/A ratios increased following VT, accompanied by clinically significant changes in vergence and accommodation parameters in subjects with convergence insufficiency. This represents the plasticity of the AC/A cross-link ratios that can be achieved with vision therapy in CI.
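The gradient AC/A ratio named above is defined by simple arithmetic: the change in heterophoria divided by the change in accommodative stimulus introduced by a trial lens. A minimal Python sketch (clinical sign conventions vary, so treat this as illustrative only):

def gradient_aca(phoria_no_lens_pd, phoria_with_lens_pd, stimulus_change_d):
    # Prism dioptres of phoria change per dioptre of accommodative
    # stimulus change (e.g. +1.00 D of stimulus for a -1.00 D lens).
    return (phoria_with_lens_pd - phoria_no_lens_pd) / stimulus_change_d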
Vanlandingham, Phillip A.; Nuno, Didier J.; Quiambao, Alexander B.; Phelps, Eric; Wassel, Ronald A.; Ma, Jian-Xing; Farjo, Krysten M.; Farjo, Rafal A.
2017-01-01
Purpose: Diabetic retinopathy is a leading cause of vision loss. Previous studies have shown that signaling pathways mediated by Stat3 (signal transducer and activator of transcription 3) play a primary role in diabetic retinopathy progression. This study tested CLT-005, a small-molecule inhibitor of Stat3, for its dose-dependent therapeutic effects on vision loss in a rat model of diabetic retinopathy. Methods: Brown Norway rats were administered streptozotocin (STZ) to induce diabetes. CLT-005 was administered daily by oral gavage for 16 weeks at doses of 125, 250, or 500 mg/kg, respectively, beginning 4 days after streptozotocin administration. Systemic and ocular drug concentrations were quantified with mass spectrometry. Visual function was monitored at 2-week intervals from 6 to 16 weeks using optokinetic tracking to measure visual acuity and contrast sensitivity. The presence and severity of cataracts were visually monitored and correlated with visual acuity. The transcription and translation of multiple angiogenic factors and inflammatory cytokines were measured by real-time polymerase chain reaction and Multiplex immunoassay. Results: Streptozotocin-diabetic rats sustain progressive vision loss over 16 weeks, and this loss in visual function is rescued in a dose-dependent manner by CLT-005. This positive therapeutic effect correlates with the positive effects of CLT-005 on vascular leakage and the presence of inflammatory cytokines in the retina. Conclusions: The present study indicates that Stat3 inhibition has strong therapeutic potential for the treatment of vision loss in diabetic retinopathy. PMID:28395025
Acute air pollution-related symptoms among residents in Chiang Mai, Thailand.
Wiwatanadate, Phongtape
2014-01-01
Open burning (forest fires, agricultural burning, and garbage burning) is the major source of air pollution in Chiang Mai, Thailand. A prospective time-series study was conducted in which 3025 participants were interviewed about 19 acute symptoms, with daily records of the ambient air pollutants: particulate matter less than 10 μm in size (PM10), carbon monoxide (CO), nitrogen dioxide (NO2), sulfur dioxide (SO2), and ozone (O3). PM10 was positively associated with blurred vision, with an adjusted odds ratio (OR) of 1.009. CO was positively associated with lower-lung and heart symptoms, with adjusted ORs of 1.137 and 1.117. NO2 was positively associated with nosebleed, larynx symptoms, dry cough, lower-lung symptoms, heart symptoms, and eye irritation, with a range of adjusted ORs (ROAORs) of 1.024 to 1.229. SO2 was positively associated with swelling feet, skin symptoms, eye irritation, red eyes, and blurred vision, with ROAORs of 1.205 to 2.948. Conversely, O3 was negatively related to runny nose, burning nose, dry cough, body rash, red eyes, and blurred vision, with ROAORs of 0.891 to 0.979.
Incremental Multi-view 3D Reconstruction Starting from Two Images Taken by a Stereo Pair of Cameras
NASA Astrophysics Data System (ADS)
El hazzat, Soulaiman; Saaidi, Abderrahim; Karam, Antoine; Satori, Khalid
2015-03-01
In this paper, we present a new method for multi-view 3D reconstruction based on the use of a binocular stereo vision system, constituted of two unattached cameras, to initialize the reconstruction process. Afterwards, the second camera of the stereo vision system (characterized by varying parameters) moves to capture more images at different times, which are used to obtain an almost complete 3D reconstruction. The first two projection matrices are estimated using a 3D pattern with known properties. After that, 3D scene points are recovered by triangulation of the matched interest points between these two images. The proposed approach is incremental: at each insertion of a new image, the camera projection matrix is estimated using the 3D information already calculated, and new 3D points are recovered by triangulation from the matches of interest points between the inserted image and the previous one. To refine the new projection matrix and the new 3D points, a local bundle adjustment is performed. At first, all projection matrices are estimated, the matches between consecutive images are detected, and a sparse Euclidean 3D reconstruction is obtained. Then, to increase the number of matches and obtain a denser reconstruction, the match propagation algorithm, which is well suited to this kind of camera movement, is applied to the pairs of consecutive images. The experimental results show the power and robustness of the proposed approach.
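Triangulation from two known projection matrices, the recurring step above, is commonly done with the linear (DLT) construction; a minimal NumPy sketch of the generic method, not the authors' code:

import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    # Each view contributes two rows of the homogeneous system A X = 0;
    # the 3D point is the null vector of A (last right singular vector).
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # inhomogeneous 3D coordinates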
Vision-Based Haptic Feedback for Remote Micromanipulation in-SEM Environment
NASA Astrophysics Data System (ADS)
Bolopion, Aude; Dahmen, Christian; Stolle, Christian; Haliyo, Sinan; Régnier, Stéphane; Fatikow, Sergej
2012-07-01
This article presents an intuitive environment for remote micromanipulation composed of both haptic feedback and a virtual reconstruction of the scene. To enable non-expert users to perform complex teleoperated micromanipulation tasks, it is of utmost importance to provide them with information about the 3-D relative positions of the objects and the tools. Haptic feedback is an intuitive way to transmit such information. Since position sensors are not available at this scale, visual feedback is used to derive information about the scene. In this work, three different techniques are implemented, evaluated, and compared to derive object positions from scanning electron microscope images. The modified correlation matching with generated template algorithm is accurate and provides reliable detection of objects. To track the tool, a marker-based approach is chosen, since fast detection is required for stable haptic feedback. Information derived from these algorithms is used to propose an intuitive remote manipulation system that enables users situated in geographically distant sites to benefit from specific equipment, such as SEMs. Stability of the haptic feedback is ensured by the minimization of delays, the computational efficiency of the vision algorithms, and the proper tuning of the haptic coupling. Virtual guides are proposed to avoid any involuntary collisions between the tool and the objects. This approach is validated by a teleoperation task involving melamine microspheres with a diameter of less than 2 μm, performed between Paris, France and Oldenburg, Germany.
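Virtual guides of the kind mentioned above are often realized as repulsive virtual fixtures. The following Python sketch is a generic spring-like version; the gain and geometry are assumptions, not the authors' tuned coupling:

import numpy as np

def virtual_guide_force(tool_pos, obstacle_pos, safety_radius, stiffness):
    # Push the haptic handle away when the tool enters a safety sphere
    # around an object; zero force outside the sphere.
    d = np.asarray(tool_pos, float) - np.asarray(obstacle_pos, float)
    dist = np.linalg.norm(d)
    if dist >= safety_radius or dist == 0.0:
        return np.zeros(3)
    return stiffness * (safety_radius - dist) * d / dist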
Demonstration of a 3D vision algorithm for space applications
NASA Technical Reports Server (NTRS)
Defigueiredo, Rui J. P. (Editor)
1987-01-01
This paper reports an extension of the MIAG algorithm for recognition and motion-parameter determination of general 3-D polyhedral objects, based on model-matching techniques and using moment invariants as features of object representation. Results of tests conducted on the algorithm under simulated space conditions are presented.
HOSVD-Based 3D Active Appearance Model: Segmentation of Lung Fields in CT Images.
Wang, Qingzhu; Kang, Wanjun; Hu, Haihui; Wang, Bin
2016-07-01
An Active Appearance Model (AAM) is a computer vision model that can be used to segment lung fields in CT images effectively. However, the fitting result is often inadequate when the lungs are affected by high-density pathologies. To overcome this problem, we propose a Higher-order Singular Value Decomposition (HOSVD)-based three-dimensional (3D) AAM. An evaluation was performed on 310 diseased lungs from the Lung Image Database Consortium Image Collection. Other contemporary AAMs operate directly on patterns represented by vectors, i.e., before applying the AAM to a 3D lung volume, it first has to be vectorized into a vector pattern by some technique such as concatenation; however, implicit structural or local contextual information may be lost in this transformation. In keeping with the nature of the 3D lung volume, HOSVD is introduced to represent and process the lung in tensor space. Our method can not only operate directly on the original 3D tensor patterns but also efficiently reduce computer memory usage. The evaluation resulted in an average Dice coefficient of 97.0% ± 0.59%, a mean absolute surface distance error of 1.0403 ± 0.5716 mm, a mean border positioning error of 0.9187 ± 0.5381 pixels, and a Hausdorff distance of 20.4064 ± 4.3855. Experimental results showed that our method delivered significantly better segmentation results compared with three other model-based lung segmentation approaches, namely 3D Snake, 3D ASM, and 3D AAM.
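The HOSVD at the heart of the method factors a 3D volume through the SVDs of its mode unfoldings. A compact NumPy sketch of the generic decomposition (the paper's truncation strategy and AAM integration are not reproduced here):

import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: move axis n to the front and flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T):
    # Factor matrices are the left singular vectors of each unfolding;
    # the core tensor is T contracted with U_n^T along every mode.
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0] for n in range(3)]
    core = T
    for n in range(3):
        core = np.moveaxis(np.tensordot(U[n].T, np.moveaxis(core, n, 0), axes=1), 0, n)
    return core, U  # reconstruct by contracting the core with each U_n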
Design of a surgical robot with dynamic vision field control for Single Port Endoscopic Surgery.
Kobayashi, Yo; Sekiguchi, Yuta; Tomono, Yu; Watanabe, Hiroki; Toyoda, Kazutaka; Konishi, Kozo; Tomikawa, Morimasa; Ieiri, Satoshi; Tanoue, Kazuo; Hashizume, Makoto; Fujie, Masaktsu G
2010-01-01
Recently, a robotic system was developed to assist Single Port Endoscopic Surgery (SPS). However, the existing system required a manual change of the vision field, hindering the surgical task and increasing the degrees of freedom (DOFs) of the manipulator. We propose a surgical robot for SPS with dynamic vision field control, in which the endoscope view is manipulated by a master controller. The prototype robot consists of a positioning and sheath manipulator (6 DOFs) for vision field control and dual tool tissue manipulators (gripping: 5 DOFs; cautery: 3 DOFs). Feasibility of the robot was demonstrated in vitro. The "cut and vision field control" mode (using the tool manipulators) is suitable for precise cutting tasks in risky areas, while the "cut by vision field control" mode (using the vision field control manipulator) is effective for rapid macro cutting of tissues. A resection task was accomplished using a combination of both methods.
Contextualising and Analysing Planetary Rover Image Products through the Web-Based PRoGIS
NASA Astrophysics Data System (ADS)
Morley, Jeremy; Sprinks, James; Muller, Jan-Peter; Tao, Yu; Paar, Gerhard; Huber, Ben; Bauer, Arnold; Willner, Konrad; Traxler, Christoph; Garov, Andrey; Karachevtseva, Irina
2014-05-01
The international planetary science community has launched, landed, and operated dozens of human and robotic missions to the planets and the Moon. These missions have collected a large body of surface imagery that has only been partially utilized for further scientific purposes. The FP7 project PRoViDE (Planetary Robotics Vision Data Exploitation) is assembling a major portion of the imaging data gathered so far from planetary surface missions into a unique database, bringing them into a spatial context and providing access to a complete set of 3D vision products. Processing is complemented by a multi-resolution visualization engine that combines various levels of detail for seamless and immersive real-time access to dynamically rendered 3D scenes. PRoViDE aims to (1) complete relevant 3D vision processing of planetary surface missions such as Surveyor, Viking, Pathfinder, MER, MSL, Phoenix, and Huygens, as well as lunar ground-level imagery from Apollo, the Russian Lunokhod rovers, and selected Luna missions; (2) provide highest-resolution and highest-accuracy remote sensing (orbital) vision data processing results for these sites to embed the robotic imagery and its products into their spatial planetary context; (3) collect 3D vision processing and remote sensing products within a single coherent spatial database; (4) realise seamless fusion between orbital and ground vision data; (5) demonstrate the potential of planetary surface vision data by maximising image quality visualisation in a 3D publishing platform; (6) collect and formulate use cases for novel scientific application scenarios exploiting the newly introduced spatial relationships and presentation; (7) demonstrate the concepts for MSL; and (8) realise on-line dissemination of key data and its presentation through a web-based GIS and rendering tool named PRoGIS (Planetary Robotics GIS). PRoGIS is designed to give access to rover image archives in geographical context, using projected image view cones (obtained from existing metadata and updated according to processing results) as a means to interact with and explore the archive. However, PRoGIS is more than a source-data explorer: it is linked to the PRoVIP (Planetary Robotics Vision Image Processing) system, which includes photogrammetric processing tools to extract terrain models, compose panoramas, and explore and exploit multi-view stereo (where features on the surface have been imaged from different rover stops). We have started with the Opportunity MER rover as our test mission, but the system is being designed to be multi-mission, taking advantage in particular of UCL MSSL's PDS mirror, and we intend to deal with at least both MER rovers and MSL. For the period of PRoViDE, until the end of 2015, the further intent is to handle lunar and other Martian rover and descent camera data. The presentation discusses the challenges of integrating rover- and orbital-derived data into a single geographical framework, especially reconstructing view cones; our human-computer interaction intentions in creating an interface to the rover data that is accessible to planetary scientists; how we handle multi-mission data in the database; and a demonstration of the resulting system and its processing capabilities. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 312377 PRoViDE.
Real-Time Mapping Using Stereoscopic Vision Optimization
2005-03-01
[Front-matter figure list: pinhole geometry; artificially textured scenes; Bilbo the robot.] The fundamental matrix (F) describes the relationship between a pair of 2D pictures of a 3D scene. ... eight CCD cameras to compute a mesh model of the environment from a large number of overlapped 3D images. In [1,17], a range scanner is combined with a ...
[Halos and multifocal intraocular lenses: origin and interpretation].
Alba-Bueno, F; Vega, F; Millán, M S
2014-10-01
To present the theoretical and experimental characterization of the halo in multifocal intraocular lenses (MIOLs). The origin of the halo in a MIOL is the overlaying of 2 or more images. Using geometrical optics, it can be shown that the diameter of each halo depends on the addition of the lens (ΔP), the base power (P(d)), and the diameter of the IOL zone that contributes to the 'non-focused' focus. In the image plane that corresponds to the distance focus, the halo diameter (δH(d)) is given by δH(d) = d(pn)·ΔP/P(d), where d(pn) is the diameter of the IOL zone that contributes to the near focus. Analogously, in the near image plane the halo diameter (δH(n)) is δH(n) = d(pd)·ΔP/P(d), where d(pd) is the diameter of the IOL zone that contributes to the distance focus. Patients perceive halos when they see bright objects over a relatively dark background. In vitro, the halo can be characterized by analyzing the intensity profile of the image of a pinhole that is focused by each of the foci of a MIOL. A comparison was made between the halos induced by different MIOLs of the same base power (20 D) on an optical bench. As predicted by theory, the larger the addition of the MIOL, the larger the halo diameter. For large pupils and MIOLs with similar aspheric designs and addition (SN6AD3 vs ZMA00), the apodized MIOL has a smaller halo diameter than the non-apodized one in distance vision, while in near vision the sizes are very similar but the relative intensity is higher for the apodized MIOL. When comparing lenses with the same diffractive design but different spherical/aspheric base designs (SN60D3 vs SN6AD3), the halo in distance vision of the spherical MIOL is larger, while in near vision the spherical IOL induces a smaller halo, but with higher intensity, due to the spherical aberration of the distance focus in the near image. In the case of a trifocal diffractive IOL (AT LISA 839MP), the most noticeable characteristic is the formation of a double halo due to the 2 non-focused powers.
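The halo-diameter relation quoted above is direct arithmetic; a small Python sketch, with illustrative values that are not measurements from the paper:

def halo_diameter_mm(zone_diameter_mm, add_power_d, base_power_d):
    # deltaH = d * (ΔP / P(d)), per the geometrical-optics relation above.
    return zone_diameter_mm * add_power_d / base_power_d

# Illustrative: a 3.0 mm zone feeding the near focus, +3.0 D add, and a
# 20 D base power give the distance-vision halo diameter:
print(halo_diameter_mm(3.0, 3.0, 20.0))  # 0.45 mm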
Vijaya, Lingam; George, Ronnie; Asokan, Rashima; Velumuri, Lokapavani; Ramesh, Sathyamangalam Ve
2014-04-01
To evaluate the prevalence and causes of low vision and blindness in an urban south Indian population. Population-based cross-sectional study. Subjects aged 40 years and above from Chennai city were examined at a dedicated facility in the base hospital. All subjects had a complete ophthalmic examination that included best-corrected visual acuity. Low vision and blindness were defined using World Health Organization (WHO) criteria. The influence of age, gender, literacy, and occupation was assessed using multiple logistic regression; the chi-square test, t-test, and multivariate analysis were used. Of the 4800 enumerated subjects, 3850 (1710 males, 2140 females) were examined (response rate, 80.2%). The prevalence of blindness was 0.85% (95% CI 0.6-1.1%) and was positively associated with age and illiteracy. Cataract was the leading cause of blindness (57.6%), with glaucoma second (16.7%). The prevalence of low vision was 2.9% (95% CI 2.4-3.4%) and of visual impairment (blindness + low vision) was 3.8% (95% CI 3.2-4.4%). The primary causes of low vision were refractive errors (68%) and cataract (22%). In this urban population-based study, cataract was the leading cause of blindness and refractive error was the main cause of low vision.
Expedient range enhanced 3-D robot colour vision
NASA Astrophysics Data System (ADS)
Jarvis, R. A.
1983-01-01
Computer vision has been chosen, in many cases, as offering the richest form of sensory information available for guiding robotic manipulation. The present investigation is concerned with the problem of three-dimensional (3D) visual interpretation of colored objects in support of robotic manipulation of those objects with a minimum of semantic guidance. The scene 'interpretations' are aimed at providing basic parameters to guide robotic manipulation rather than at providing humans with a detailed description of what the scene 'means'. Attention is given to the overall system configuration, hue transforms, connectivity analysis, plan/elevation segmentations, range scanners, elevation/range segmentation, higher-level structure, eye-in-hand research, and aspects of array and video stream processing.
Vision based flight procedure stereo display system
NASA Astrophysics Data System (ADS)
Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng
2008-03-01
A virtual reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on a Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approach area database is established through a computer 3D modeling system and GIS, with the area texture generated from remote sensing photos and aerial photographs at various levels of detail. According to the flight approach procedure, the flight navigation information is linked to the database, and the view of the flight approach area can be displayed dynamically according to the designed flight procedure. The flight approach area images are rendered in two channels, one for left-eye images and the other for right-eye images. Through the polarized stereoscopic projection system, pilots and aircrew can get a vivid 3D view of the approach area of the flight destination. Using this system in the pilots' preflight preparation, the aircrew can obtain more vivid information about the approach area, which improves the aviator's confidence before carrying out the flight mission and, accordingly, improves flight safety. The system is also useful for validating visual flight procedure designs, and it assists in flight procedure design.
Generating Contextual Descriptions of Virtual Reality (VR) Spaces
NASA Astrophysics Data System (ADS)
Olson, D. M.; Zaman, C. H.; Sutherland, A.
2017-12-01
Virtual reality holds great potential for science communication, education, and research. However, interfaces for manipulating data and environments in virtual worlds are limited and idiosyncratic. Furthermore, speech and vision are the primary modalities by which humans collect information about the world, but the linking of visual and natural language domains is a relatively new pursuit in computer vision. Machine learning techniques have been shown to be effective at image and speech classification, as well as at describing images with language (Karpathy 2016), but have not yet been used to describe potential actions. We propose a technique for creating a library of possible context-specific actions associated with 3D objects in immersive virtual worlds based on a novel dataset generated natively in virtual reality containing speech, image, gaze, and acceleration data. We will discuss the design and execution of a user study in virtual reality that enabled the collection and the development of this dataset. We will also discuss the development of a hybrid machine learning algorithm linking vision data with environmental affordances in natural language. Our findings demonstrate that it is possible to develop a model which can generate interpretable verbal descriptions of possible actions associated with recognized 3D objects within immersive VR environments. This suggests promising applications for more intuitive user interfaces through voice interaction within 3D environments. It also demonstrates the potential to apply vast bodies of embodied and semantic knowledge to enrich user interaction within VR environments. This technology would allow for applications such as expert knowledge annotation of 3D environments, complex verbal data querying and object manipulation in virtual spaces, and computer-generated, dynamic 3D object affordances and functionality during simulations.
Heads-up 3D Microscopy: An Ergonomic and Educational Approach to Microsurgery
Mendez, Bernardino M.; Chiodo, Michael V.; Vandevender, Darl
2016-01-01
Summary: Traditional microsurgery can lead surgeons to use postures that cause musculoskeletal fatigue, leaving them more prone to work-related injuries. A new technology from TrueVision transmits the microscopic image onto a 3-dimensional (3D) monitor, allowing surgeons to operate while sitting/standing in a heads-up position. The purpose of this study was to evaluate the feasibility of performing heads-up 3D microscopy as a more ergonomic alternative to traditional microsurgery. A feasibility study was conducted comparing heads-up 3D microscopy and traditional microscopy by performing femoral artery anastomoses on 8 Sprague-Dawley rats. Operative times and patency rates for each technology were compared. The 8 microsurgeons completed a questionnaire comparing image quality, comfort, technical feasibility, and educational value of the 2 technologies. Rat femoral artery anastomoses were successfully carried out by all 8 microsurgeons with each technology. There was no significant difference in anastomosis time between heads-up 3D and traditional microscopy (average times, 34.5 and 33.8 minutes, respectively; P = 0.66). Heads-up 3D microscopy was rated superior in neck and back comfort by 75% of participants. Image resolution, field of view, and technical feasibility were found to be superior or equivalent in 75% of participants, whereas 63% evaluated depth perception to be superior or equivalent. Heads-up 3D microscopy is a new technology that improves comfort for the microsurgeon without compromising image quality or technical feasibility. Its use has become prevalent in the field of ophthalmology and may also have utility in plastic and reconstructive surgery. PMID:27579241
Potato Operation: automatic detection of potato diseases
NASA Astrophysics Data System (ADS)
Lefebvre, Marc; Zimmerman, Thierry; Baur, Charles; Guegerli, Paul; Pun, Thierry
1995-01-01
The Potato Operation is a collaborative, multidisciplinary project in the domain of destructive testing of agricultural products. It aims at automating the pulp sampling of potatoes in order to detect possible viral diseases; such viruses can decrease field productivity by a factor of up to ten. A machine composed of three conveyor belts, a vision system, and a robotic arm, controlled by a PC, has been built. Potatoes are brought one by one from a bulk supply to the vision system, where they are seized by a rotating holding device. The sprouts, where viral activity is at its maximum, are then detected by an active vision process operating on multiple views, and the 3D coordinates of the sampling point are communicated to the robot arm holding a drill. Some flesh is then sampled by the drill and deposited into an ELISA plate. After sampling, the robot arm washes the drill in order to prevent any contamination. The PC simultaneously controls these processes: the conveying of the potatoes, the vision algorithms, and the sampling procedure. The master process, the vision procedure, makes use of three methods to achieve sprout detection: a profile analysis first locates the sprouts as protuberances, and two frontal analyses, based respectively on fluorescence and local variance, confirm the previous detection and provide the 3D coordinates of the sampling zone. The other two processes work by interruption of the master process.
Review of 3d GIS Data Fusion Methods and Progress
NASA Astrophysics Data System (ADS)
Hua, Wei; Hou, Miaole; Hu, Yungang
2018-04-01
3D data fusion is a research hotspot in the fields of computer vision and fine mapping, and plays an important role in fine measurement, risk monitoring, data display, and other processes. At present, research on 3D data fusion in the field of surveying and mapping focuses on the fusion of 3D models of terrain and ground objects. This paper summarizes the basic methods of 3D data fusion for terrain and ground objects in recent years, classifies the data structures and the methods used to establish 3D models, and analyses and comments on some of the most widely used fusion methods.
Predicting Vision-Related Disability in Glaucoma.
Abe, Ricardo Y; Diniz-Filho, Alberto; Costa, Vital P; Wu, Zhichao; Medeiros, Felipe A
2018-01-01
To present a new methodology for investigating predictive factors associated with the development of vision-related disability in glaucoma. Prospective, observational cohort study. Two hundred thirty-six patients with glaucoma were followed up for an average of 4.3±1.5 years. Vision-related disability was assessed by the 25-item National Eye Institute Visual Function Questionnaire (NEI VFQ-25) at baseline and at the end of follow-up. A latent transition analysis model was used to categorize NEI VFQ-25 results and to estimate the probability of developing vision-related disability during follow-up. Patients were tested with standard automated perimetry (SAP) at 6-month intervals, and rates of visual field change were evaluated using the mean sensitivity (MS) of the integrated binocular visual field. Baseline disease severity, rate of visual field loss, and duration of follow-up were investigated as predictive factors for the development of disability during follow-up. The main outcome measure was the relationship between baseline values and rates of visual field deterioration and the probability of vision-related disability developing during follow-up. At baseline, 67 of 236 (28%) glaucoma patients were classified as disabled based on NEI VFQ-25 results, whereas 169 (72%) were classified as nondisabled. Patients classified as nondisabled at baseline had a 14.2% probability of disability developing during follow-up. Rates of visual field loss as estimated by integrated binocular MS were almost 4 times faster for those in whom disability developed versus those in whom it did not (-0.78±1.00 dB/year vs. -0.20±0.47 dB/year, respectively; P < 0.001). In the multivariate model, each 1-dB lower baseline binocular MS was associated with 34% higher odds of disability developing over time (odds ratio [OR], 1.34; 95% confidence interval [CI], 1.06-1.70; P = 0.013). In addition, each 0.5-dB/year faster rate of loss of binocular MS during follow-up was associated with a more than 3.5 times increase in the risk of disability developing (OR, 3.58; 95% CI, 1.56-8.23; P = 0.003). A new methodology for classification and analysis of change in patient-reported quality-of-life outcomes allowed the construction of models for predicting vision-related disability in glaucoma.
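As a quick illustration of how the reported odds ratios compound (purely illustrative arithmetic, not an analysis from the study), a logistic-regression OR per unit scales multiplicatively with the number of units:

def odds_multiplier(or_per_unit, delta_units):
    # Odds scale as OR**delta under a logistic model.
    return or_per_unit ** delta_units

# E.g., OR 1.34 per 1 dB lower baseline binocular MS implies that a
# 3 dB lower baseline carries about 1.34**3 ~= 2.4 times the odds.
print(odds_multiplier(1.34, 3))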
A novel method of robot location using RFID and stereo vision
NASA Astrophysics Data System (ADS)
Chen, Diansheng; Zhang, Guanxin; Li, Zhen
2012-04-01
This paper proposes a new global localization method for mobile robots based on RFID (Radio Frequency Identification Devices) and stereo vision, which enables a robot to obtain global coordinates with good accuracy while quickly adapting to an unfamiliar, new environment. The method uses RFID tags as artificial landmarks: the 3D coordinates of each tag in the global coordinate system are written in its IC memory, and the robot reads them through an RFID reader. Meanwhile, using stereo vision, the 3D coordinates of the tag in the robot coordinate system are measured. Combined with the robot attitude (the coordinate-system transformation matrix from the pose-measuring system), the translation from the robot coordinate system to the global coordinate system is obtained, which is also the coordinate of the robot's current location in the global coordinate system. The average error of our method was 0.11 m in an experiment conducted in a 7 m × 7 m lobby; this result is much more accurate than those of other localization methods.
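The final step above is a single rigid-frame identity. A minimal Python sketch of that bookkeeping (the names are mine; the paper's notation is not given): given the attitude matrix R mapping robot-frame vectors into the world frame, the tag's known world coordinates, and the stereo-measured tag position in the robot frame,

import numpy as np

def robot_global_position(R_world_robot, tag_world, tag_robot):
    # tag_world = R @ tag_robot + t, so the robot origin in world
    # coordinates is t = tag_world - R @ tag_robot.
    return np.asarray(tag_world) - np.asarray(R_world_robot) @ np.asarray(tag_robot)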
PRoViScout: a planetary scouting rover demonstrator
NASA Astrophysics Data System (ADS)
Paar, Gerhard; Woods, Mark; Gimkiewicz, Christiane; Labrosse, Frédéric; Medina, Alberto; Tyler, Laurence; Barnes, David P.; Fritz, Gerald; Kapellos, Konstantinos
2012-01-01
Mobile systems exploring planetary surfaces in the future will require more autonomy than today's. The EU FP7-SPACE project PRoViScout (2010-2012) establishes the building blocks of such autonomous exploration systems in terms of robotic vision, through a decision-based combination of navigation and scientific target selection, and integrates them into a framework ready for, and exposed to, field demonstration. The PRoViScout on-board system consists of mission management components, such as an Executive, a Mars Mission On-Board Planner and Scheduler, a Science Assessment Module, and Navigation & Vision Processing modules. The platform hardware consists of the rover with its sensors and pointing devices. We report on the major building blocks and their functions and interfaces, with emphasis on the computer vision parts, such as image acquisition (using a novel zoomed 3D time-of-flight & RGB camera), mapping from 3D-TOF data, panoramic image & stereo reconstruction, hazard and slope maps, visual odometry, and the recognition of potential scientifically interesting targets.
Three-camera stereo vision for intelligent transportation systems
NASA Astrophysics Data System (ADS)
Bergendahl, Jason; Masaki, Ichiro; Horn, Berthold K. P.
1997-02-01
A major obstacle in the application of stereo vision to intelligent transportation systems is high computational cost. In this paper, a PC-based three-camera stereo vision system constructed with off-the-shelf components is described. The system serves as a tool for developing and testing robust algorithms that approach real-time performance. We present an edge-based, subpixel stereo algorithm that permits accurate distance measurements to objects in the field of view using a compact camera assembly. Once computed, the 3D scene information may be applied directly to a number of in-vehicle applications, such as adaptive cruise control, obstacle detection, and lane tracking. Moreover, since the largest computational cost is incurred in generating the 3D scene information, multiple applications that leverage this information can be implemented in a single system with minimal additional cost. On-road applications, such as vehicle counting and incident detection, are also possible. Preliminary in-vehicle road trial results are presented.
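Subpixel disparity, as used in the algorithm above, is commonly obtained by fitting a parabola to the matching cost at the best integer disparity and its two neighbours. The abstract does not spell out its exact scheme, so the following Python sketch is a generic version:

def subpixel_offset(cost_minus, cost_best, cost_plus):
    # Vertex of the parabola through the costs at disparities d-1, d, d+1;
    # returns a fractional correction in (-0.5, 0.5) to add to d.
    denom = cost_minus - 2.0 * cost_best + cost_plus
    return 0.0 if denom == 0.0 else 0.5 * (cost_minus - cost_plus) / denom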
Mahon, Edward G.; Taylor, Scott N.; Boyatzis, Richard E.
2014-01-01
As organizational leaders worry about the appallingly low percentage of people who feel engaged in their work, academics are trying to understand what causes an increase in engagement. We collected survey data from 231 team members from two organizations. We examined the impact of team members' emotional intelligence (EI) and their perception of shared personal vision, shared positive mood, and perceived organizational support (POS) on the members' degree of organizational engagement. We found that shared vision, shared mood, and POS have a direct, positive association with engagement. In addition, shared vision and POS interact with EI to positively influence engagement. Besides highlighting the importance of shared personal vision, positive mood, and POS, our study contributes to the emergent understanding of EI by revealing EI's amplifying effect on shared vision and POS in relation to engagement. We conclude by discussing the research and practical implications of this study. PMID:25477845
Data fusion for a vision-aided radiological detection system: Calibration algorithm performance
NASA Astrophysics Data System (ADS)
Stadnikia, Kelsey; Henderson, Kristofer; Martin, Allan; Riley, Phillip; Koppal, Sanjeev; Enqvist, Andreas
2018-05-01
In order to improve the ability to detect, locate, track, and identify nuclear/radiological threats, the University of Florida nuclear detection community has teamed up with the 3D vision community to collaborate on a low-cost data fusion system. The key is to develop an algorithm to fuse the data from multiple radiological and 3D vision sensors as one system. The system under development at the University of Florida is being assessed with various types of radiological detectors and widely available vision sensors. A series of experiments was devised utilizing two EJ-309 liquid organic scintillation detectors (one primary and one secondary), a Microsoft Kinect for Windows v2 sensor, and a Velodyne HDL-32E high-definition LiDAR sensor, a highly sensitive vision sensor primarily used to generate data for self-driving cars. Each experiment consisted of 27 static measurements of a source arranged in a cube, with three different distances in each dimension. The source used was Cf-252. The calibration algorithm developed is used to calibrate the relative 3D location of the two different types of sensors without the need to measure it by hand, thus preventing operator manipulation and human error. The algorithm can also account for the facility-dependent deviation from ideal data fusion correlation. Using the vision sensor to determine the location of a detector would also limit the possible locations, and it does not allow for the room dependence (facility-dependent deviation) needed to generate a detector pseudo-location for later data analysis. Using manually measured source location data, our algorithm predicted the offset detector location to within an average calibration-difference of 20 cm of its actual location, where calibration-difference is the Euclidean distance from the algorithm-predicted detector location to the measured detector location. The Kinect vision sensor data produced an average calibration-difference of 35 cm and the HDL-32E produced an average calibration-difference of 22 cm. Using NaI and He-3 detectors in place of the EJ-309, the calibration-difference was 52 cm for NaI and 75 cm for He-3. The algorithm is not detector dependent; however, from these results it was determined that detector-dependent adjustments are required.
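The calibration-difference metric reported above is the plain Euclidean distance between the algorithm-predicted and hand-measured detector positions. For concreteness, a one-function Python sketch (the coordinates below are made up):

import numpy as np

def calibration_difference(predicted_xyz, measured_xyz):
    # Euclidean distance (m) between the algorithm-predicted detector
    # location and the hand-measured one, as defined in the abstract.
    return np.linalg.norm(np.asarray(predicted_xyz, float) - np.asarray(measured_xyz, float))

print(calibration_difference([1.00, 0.50, 0.30], [1.12, 0.62, 0.25]))  # ~0.18 m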
Covarrubias, Mario; Bordegoni, Monica; Cugini, Umberto
2013-01-01
In this article, we present an approach that uses two force-sensitive handles (FSHs) and a flexible capacitive touch sensor (FCTS) to drive a haptic-based immersive system. The immersive system has been developed as part of a multimodal interface for product design. The haptic interface consists of a strip that product designers can use to evaluate the quality of a 3D virtual shape by touch, vision, and hearing, and also to interactively change the shape of the virtual object. Specifically, the user interacts with the FSHs to move the virtual object and to position the haptic interface appropriately, providing the six degrees of freedom required for both the manipulation and modification modalities. The FCTS allows the system to track the movement and position of the user's fingers on the strip, which is used for rendering visual and sound feedback. Two evaluation experiments are described, involving both the evaluation and the modification of a 3D shape. Results show that the use of the haptic strip for the evaluation of aesthetic shapes is effective and supports product designers in the appreciation of the aesthetic qualities of the shape. PMID:24113680
Pose and motion recovery from feature correspondences and a digital terrain map.
Lerner, Ronen; Rivlin, Ehud; Rotstein, Héctor P
2006-09-01
A novel algorithm for pose and motion estimation using corresponding features and a Digital Terrain Map is proposed. Using a Digital Terrain (or Digital Elevation) Map (DTM/DEM) as a global reference enables the elimination of the ambiguity present in vision-based algorithms for motion recovery. As a consequence, the absolute position and orientation of a camera can be recovered with respect to the external reference frame. To do this, the DTM is used to formulate a constraint between corresponding features in two consecutive frames; explicit reconstruction of the 3D world is not required. When a number of feature points are considered, the resulting constraints can be solved using nonlinear optimization in terms of position, orientation, and motion. Such a procedure requires an initial guess of these parameters, which can be obtained from dead reckoning or any other source. The feasibility of the algorithm is established through extensive experimentation. Performance is compared with a state-of-the-art alternative algorithm, which intermediately reconstructs the 3D structure and then registers it to the DTM; a clear advantage for the novel algorithm is demonstrated in a variety of scenarios.
Grossberg, Stephen; Hwang, Seungwoo; Mingolla, Ennio
2002-05-01
This article further develops the FACADE neural model of 3-D vision and figure-ground perception to quantitatively explain properties of the McCollough effect (ME). The model proposes that many ME data result from visual system mechanisms whose primary function is to adaptively align, through learning, boundary and surface representations that are positionally shifted due to the process of binocular fusion. For example, binocular boundary representations are shifted by binocular fusion relative to monocular surface representations, yet the boundaries must become positionally aligned with the surfaces to control binocular surface capture and filling-in. The model also includes perceptual reset mechanisms that use habituative transmitters in opponent processing circuits. Thus the model shows how ME data may arise from a combination of mechanisms that have a clear functional role in biological vision. Simulation results with a single set of parameters quantitatively fit data from 13 experiments that probe the nature of achromatic/chromatic and monocular/binocular interactions during induction of the ME. The model proposes how perceptual learning, opponent processing, and habituation at both monocular and binocular surface representations are involved, including early thalamocortical sites. In particular, it explains the anomalous ME utilizing these multiple processing sites. Alternative models of the ME are also summarized and compared with the present model.
Evaluation of reliability and validity of three dental color-matching devices.
Tsiliagkou, Aikaterini; Diamantopoulou, Sofia; Papazoglou, Efstratios; Kakaboura, Afrodite
2016-01-01
To assess the repeatability and accuracy of three dental color-matching devices under standardized and freehand measurement conditions. Two shade guides (Vita Classical A1-D4, Vita; and Vita Toothguide 3D-Master, Vita) and three color-matching devices (Easyshade, Vita; SpectroShade, MHT Optic Research; and ShadeVision, X-Rite) were used. Five shade tabs were selected from the Vita Classical A1-D4 (A2, A3.5, B1, C4, D3), and five from the Vita Toothguide 3D-Master (1M1, 2R1.5, 3M2, 4L2.5, 5M3) shade guides. Each shade tab was recorded 15 consecutive times with each device under two different measurement conditions (standardized and freehand). Both qualitative (color shade) and quantitative (L, a, and b) color characteristics were recorded. The color difference (ΔE) between each recorded value and the known values of the shade tab was calculated. The repeatability of each device was evaluated by the coefficient of variation. The accuracy of each device was determined by comparing the recorded values with the known values of the reference shade tab (one-sample t test; α = 0.05). The agreement between the recorded shade and the reference shade tab was calculated. The influence of the parameters (devices and conditions) on ΔE was investigated (two-way ANOVA). Comparison of the devices was performed with Bonferroni pairwise post-hoc analysis. Under standardized conditions, repeatability of all three devices was very good, except for ShadeVision with Vita Classical A1-D4. Accuracy ranged from good to fair, depending on the device and the shade guide. Under freehand conditions, repeatability and accuracy for Easyshade and ShadeVision were negatively influenced, but not for SpectroShade, regardless of the shade guide. Based on the total of the color parameters assessed per device, SpectroShade was the most reliable of the three color-matching devices studied.
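The abstract does not state which ΔE formula was used; assuming the simple CIE76 definition (the Euclidean distance in L*a*b* space), the calculation looks like this in Python, with hypothetical readings:

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 colour difference between two (L*, a*, b*) triples."""
    return math.dist(lab1, lab2)

# Hypothetical device reading vs. reference shade-tab values.
recorded = (78.2, 1.4, 18.9)
reference = (79.0, 0.9, 17.5)
print(f"dE = {delta_e_ab(recorded, reference):.2f}")
```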
Cheng, Hui-Chen; Guo, Chao-Yu; Chen, Mei-Ju; Ko, Yu-Chieh; Huang, Nicole; Liu, Catherine Jui-ling
2015-03-01
Previous studies have found that glaucoma is associated with impaired patient-reported vision-related quality of life (pVRQOL) but few, to our knowledge, have assessed how the visual field (VF) defect location impacts the pVRQOL. To investigate the associations of VF defects in the superior vs inferior hemifields with pVRQOL outcomes in patients with primary open-angle glaucoma. Prospective cross-sectional study at a tertiary referral center from March 1, 2012, to January 1, 2013, including patients with primary open-angle glaucoma who had a best-corrected visual acuity in the better eye equal to or better than 20/60 and reliable VF tests. The pVRQOL was assessed by a validated Taiwanese version of the 25-item National Eye Institute Visual Function Questionnaire. Reliable VF tests obtained within 3 months of enrollment were transformed to binocular integrated VF (IVF). The IVF was further stratified by VF location (superior vs inferior hemifield). The association between each domain of the 25-item National Eye Institute Visual Function Questionnaire and superior or inferior hemifield IVF was determined using multivariable linear regression analysis. The analysis included 186 patients with primary open-angle glaucoma with a mean age of 59.1 years (range, 19-86 years) and IVF mean deviation (MD) of -4.84 dB (range, -27.56 to 2.17 dB). In the multivariable linear regression analysis, the MD of the full-field IVF showed positive associations with near activities (β = 0.05; R2 = 0.20; P < .001), vision-specific role difficulties (β = 0.04; R2 = 0.19; P = .01), vision-specific dependency (β = 0.04; R2 = 0.20; P < .001), driving (β = 0.05; R2 = 0.24; P < .001), peripheral vision (β = 0.03; R2 = 0.18; P = .02), and composite scores (β = 0.04; R2 = 0.27; P = .005). Subsequent analysis showed that the MD of the superior hemifield IVF was associated only with near activities (β = 0.04; R2 = 0.21; P < .001) while the MD of the inferior hemifield IVF was associated with general vision (β = 0.04; R2 = 0.12; P = .01), vision-specific role difficulties (β = 0.04; R2 = 0.20; P = .01), and peripheral vision (β = 0.03; R2 = 0.17; P = .03). Superior hemifield IVF was strongly associated with difficulty with near activities. Inferior hemifield IVF impacted vision-specific role difficulties and general and peripheral vision. The impact of a VF defect on a patient's pVRQOL may depend not only on its severity, but also on its hemifield location.
Remote sensing of vegetation structure using computer vision
NASA Astrophysics Data System (ADS)
Dandois, Jonathan P.
High spatial resolution measurements of vegetation structure are needed for improving understanding of ecosystem carbon, water and nutrient dynamics, the response of ecosystems to a changing climate, and for biodiversity mapping and conservation, among many research areas. Our ability to make such measurements has been greatly enhanced by continuing developments in remote sensing technology, which allow researchers to measure numerous forest traits at varying spatial and temporal scales and over large spatial extents with minimal to no field work, which is costly over large spatial areas or logistically difficult in some locations. Despite these advances, there remain several research challenges related to the methods by which three-dimensional (3D) and spectral datasets are joined (remote sensing fusion) and the availability and portability of systems for frequent data collections at small-scale sampling locations. Recent advances in the areas of computer vision structure from motion (SFM) and consumer unmanned aerial systems (UAS) offer the potential to address these challenges by enabling repeatable measurements of vegetation structural and spectral traits at the scale of individual trees. However, the potential advances offered by computer vision remote sensing also present unique challenges and questions that need to be addressed before this approach can be used to improve understanding of forest ecosystems. For computer vision remote sensing to be a valuable tool for studying forests, bounding information about the characteristics of the data produced by the system will help researchers understand and interpret results in the context of the forest being studied and of other remote sensing techniques. This research advances understanding of how forest canopy and tree 3D structure and color are accurately measured by a relatively low-cost and portable computer vision personal remote sensing system: 'Ecosynth'. Recommendations are made for optimal conditions under which forest structure measurements should be obtained with UAS-SFM remote sensing. Ultimately, remote sensing of vegetation by computer vision offers the potential to provide an 'ecologist's eye view', capturing not only canopy 3D and spectral properties, but also seeing the trees in the forest and the leaves on the trees.
3D laparoscopic surgery: a prospective clinical trial.
Agrusa, Antonino; Di Buono, Giuseppe; Buscemi, Salvatore; Cucinella, Gaspare; Romano, Giorgio; Gulotta, Gaspare
2018-04-03
Since its introduction, laparoscopic surgery has represented a real revolution in clinical practice. The use of a new-generation three-dimensional (3D) HD laparoscopic system can be considered a favorable "hybrid" combining two different elements: the feasibility and diffusion of laparoscopy and an improved quality of vision. In this study we report our clinical experience with the use of a three-dimensional (3D) HD vision system for laparoscopic surgery. Between 2013 and 2017 a prospective cohort study was conducted at the University Hospital of Palermo. We considered 163 patients who underwent laparoscopic three-dimensional (3D) HD surgery for various indications. This 3D group was compared to a retrospective-prospective control group of patients who underwent the same surgical procedures. Considering specific surgical procedures, there is no significant difference in terms of age and gender. The analysis of all the groups of diseases shows that laparoscopic procedures performed with 3D technology have a shorter mean operative time than comparable 2D procedures when we consider surgery that requires complex tasks. The use of 3D laparoscopic technology is an extraordinary innovation in clinical practice, but the instrumentation is still not widespread. Precisely for this reason the studies in the literature are few and mainly limited to the evaluation of surgical skills on a simulator. This study aims to evaluate the actual benefits of the 3D laparoscopic system by integrating it into clinical practice. The three-dimensional view allows advanced performance in particular conditions, such as small and deep spaces, and facilitates performing complex laparoscopic surgical procedures.
Bener, Abdulbari; Al-Mahdi, Huda S; Vachhani, Pankit J; Al-Nufal, Mohammed; Ali, Awab I
2010-12-01
The aim of this study is to determine whether excessive internet use, television viewing and the ensuing poor lifestyle habits are associated with low vision in school children in a rapidly developing country. In this cross-sectional study, 3000 school students aged between six and 18 years were approached and 2467 (82.2%) participated. Of the studied school children, 12.6% had low vision. Most of the low-vision school children were in the 6-10 years age group and came from middle-income backgrounds (41.8%; p = 0.008). A large proportion of the children with low vision spent ≥ 3 hours per day on the internet (48.2%; p < 0.001) and ≥ 3 hours reclining (62.4%; p < 0.001). A significantly smaller proportion of the studied children with low vision participated in each of the reviewed forms of physical activity (p < 0.001), yet a larger proportion consumed fast food (86.8%; p < 0.001). Highly significant positive correlations were found between low vision and BMI, hours spent reclining, and hours spent on the internet. Blurred vision was the most commonly reported symptom among the studied children (p < 0.001). The current study suggests a strong association between prolonged hours spent on the computer or TV, fast food consumption, poor lifestyle habits, and low vision.
Analysis on the 3D crosstalk in stereoscopic display
NASA Astrophysics Data System (ADS)
Choi, Hee-Jin
2010-11-01
Nowadays, with the rapid progress in flat panel display (FPD) technologies, three-dimensional (3D) display is becoming the next mainstream of the display market. Among the various 3D display techniques, the stereoscopic 3D display shows different left/right images to each eye of the observer using special glasses and is the most popular 3D technique, with the advantages of low price and high 3D resolution. However, current stereoscopic 3D displays suffer from 3D crosstalk, the interference between the left-eye and right-eye images, which severely degrades the quality of the 3D image. In this paper, the meaning and causes of 3D crosstalk in stereoscopic 3D display are introduced and previously proposed methods for measuring 3D crosstalk in vision science are reviewed. Based on these, the threshold of 3D crosstalk required to realize a 3D display with no perceptible degradation is analyzed.
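The paper's exact measurement formula is not given in the abstract; one commonly used definition expresses the luminance leaking into the unintended eye as a percentage of the intended signal, both corrected for the display's black level. A minimal Python sketch with hypothetical photometer readings:

```python
def crosstalk_percent(l_leak, l_black, l_signal):
    """Crosstalk seen by one eye: luminance leaking from the other eye's
    image, relative to the intended full signal, black-level corrected."""
    return 100.0 * (l_leak - l_black) / (l_signal - l_black)

# Hypothetical luminance readings (cd/m^2) measured at the left eye:
# display black level, leakage with only the right image white, full white.
print(f"{crosstalk_percent(l_leak=2.1, l_black=0.4, l_signal=96.0):.2f} %")
```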
Robotic Quantification of Position Sense in Children With Perinatal Stroke.
Kuczynski, Andrea M; Dukelow, Sean P; Semrau, Jennifer A; Kirton, Adam
2016-09-01
Background: Perinatal stroke is the leading cause of hemiparetic cerebral palsy. Motor deficits and their treatment are commonly emphasized in the literature. Sensory dysfunction may be an important contributor to disability, but it is difficult to measure accurately clinically. Objective: Use robotics to quantify position sense deficits in hemiparetic children with perinatal stroke and determine their association with common clinical measures. Methods: Case-control study. Participants were children aged 6 to 19 years with magnetic resonance imaging-confirmed unilateral perinatal arterial ischemic stroke or periventricular venous infarction and symptomatic hemiparetic cerebral palsy. Participants completed a position matching task using an exoskeleton robotic device (KINARM). Position matching variability, shift, and expansion/contraction area were measured with and without vision. Robotic outcomes were compared across stroke groups and controls and to clinical measures of disability (Assisting Hand Assessment) and sensory function. Results: Forty stroke participants (22 arterial, 18 venous, median age 12 years, 43% female) were compared with 60 healthy controls. Position sense variability was impaired in arterial (6.01 ± 1.8 cm) and venous (5.42 ± 1.8 cm) stroke compared to controls (3.54 ± 0.9 cm, P < .001) with vision occluded. Impairment remained when vision was restored. Robotic measures correlated with functional disability. Sensitivity and specificity of clinical sensory tests were modest. Conclusions: Robotic assessment of position sense is feasible in children with perinatal stroke. Impairment is common and worse in arterial lesions. Limited correction with vision suggests cortical sensory network dysfunction. Disordered position sense may represent a therapeutic target in hemiparetic cerebral palsy. © The Author(s) 2016.
Dopamine antagonists and brief vision distinguish lens-induced- and form-deprivation-induced myopia
Nickla, Debora L.; Totonelly, Kristen
2011-01-01
In eyes wearing negative lenses, the D2 dopamine antagonist spiperone was only partly effective in preventing the ameliorative effects of brief periods of vision (Nickla et al., 2010), in contrast to reports from studies using form deprivation. The present study was done to directly compare the effects of spiperone, and the D1 antagonist SCH-23390, on the two different myopiagenic paradigms. 12-day old chickens wore monocular diffusers (form deprivation) or − 10 D lenses attached to the feathers with matching rings of Velcro. Each day for 4 days, 10 µl intravitreal injections of the dopamine D2/D4 antagonist spiperone (5 nmoles) or the D1 antagonist SCH-23390, were given under isoflurane anesthesia, and the diffusers (n=16; n=5, respectively) or lenses (n=20; n=6) were removed for 2 hours immediately after. Saline injections prior to vision were done as controls (form deprivation: n=11; lenses: n=10). Two other saline-injected groups wore the lenses (n=12) or diffusers (n=4) continuously. Axial dimensions were measured by high frequency A-scan ultrasonography at the start, and on the last day immediately prior to, and 3 hours after the injection. Refractive errors were measured at the end of the experiment using a Hartinger’s refractometer. In form-deprived eyes, spiperone, but not SCH-23390, prevented the ocular growth inhibition normally effected by the brief periods of vision (change in vitreous chamber depth, spiperone vs saline: 322 vs 211 µm; p=0.01). By contrast, neither had any effect on negative lens-wearing eyes given similar unrestricted vision (210 and 234 µm respectively, vs 264 µm). The increased elongation in the spiperone-injected form deprived eyes did not, however, result in a myopic shift, probably due to the inhibitory effect of the drug on anterior chamber growth (drug vs saline: 96 vs 160 µm; p<0.01). Finally, spiperone inhibited the vision-induced transient choroidal thickening in form deprived eyes, while SCH-23390 did not. These results indicate that the dopaminergic mechanisms mediating the protective effects of brief periods of unrestricted vision differ for form deprivation versus negative lens-wear, which may imply different growth control mechanisms between the two. PMID:21872586
Loarie, Thomas M; Applegate, David; Kuenne, Christopher B; Choi, Lawrence J; Horowitz, Diane P
2003-01-01
Market segmentation analysis identifies discrete segments of the population whose beliefs are consistent with exhibited behaviors such as purchase choice. This study applies market segmentation analysis to low myopes (-1 to -3 D with less than 1 D cylinder) in their consideration and choice of a refractive surgery procedure to discover opportunities within the market. A quantitative survey based on focus group research was sent to a demographically balanced sample of myopes using contact lenses and/or glasses. A variable reduction process followed by a clustering analysis was used to discover discrete belief-based segments. The resulting segments were validated both analytically and through in-market testing. Discontented individuals who wear contact lenses are the primary target for vision correction surgery. However, 81% of the target group is apprehensive about laser in situ keratomileusis (LASIK). They are nervous about the procedure and strongly desire reversibility and exchangeability. There exists a large untapped opportunity for vision correction surgery within the low myope population. Market segmentation analysis helped determine how to best meet this opportunity through repositioning existing procedures or developing new vision correction technology, and could also be applied to identify opportunities in other vision correction populations.
Panoramic 3d Vision on the ExoMars Rover
NASA Astrophysics Data System (ADS)
Paar, G.; Griffiths, A. D.; Barnes, D. P.; Coates, A. J.; Jaumann, R.; Oberst, J.; Gao, Y.; Ellery, A.; Li, R.
The Pasteur payload on the ESA ExoMars Rover 2011/2013 is designed to search for evidence of extant or extinct life either on or up to ~2 m below the surface of Mars. The rover will be equipped with a panoramic imaging system, to be developed by a UK, German, Austrian, Swiss, Italian and French team, for visual characterization of the rover's surroundings and (in conjunction with an infrared imaging spectrometer) remote detection of potential sample sites. The Panoramic Camera system consists of a wide-angle multispectral stereo pair with 65° field-of-view (WAC; 1.1 mrad/pixel) and a high-resolution monoscopic camera (HRC; current design having 59.7 µrad/pixel with 3.5° field-of-view). Its scientific goals and operational requirements can be summarized as follows:
• Determination of objects to be investigated in situ by other instruments, for operations planning.
• Backup and support for the rover visual navigation system (path planning, determination of subsequent rover positions and orientation/tilt within the 3D environment), and localization of the landing site (by stellar navigation or by combination of orbiter and ground panoramic images).
• Geological characterization (using narrow-band geology filters) and cartography of the local environments (local Digital Terrain Model or DTM).
• Study of atmospheric properties and variable phenomena near the Martian surface (e.g. aerosol opacity, water vapour column density, clouds, dust devils, meteors, surface frosts).
• Geodetic studies (observations of Sun, bright stars, Phobos/Deimos).
The performance of 3D data processing is a key element of mission planning and scientific data analysis. The 3D Vision Team within the Panoramic Camera development consortium reports on the current status of development, consisting of the following items:
• Hardware Layout & Engineering: The geometric setup of the system (location on the mast & viewing angles, mutual mounting between WAC and HRC) needs to be optimized with respect to fields of view, ranging capability (distance measurement capability), data rate, necessity of calibration targets, hardware & data interfaces to other subsystems (e.g. navigation), as well as accuracy impacts of sensor design and compression ratio.
• Geometric Calibration: The geometric properties of the individual cameras including various spectral filters, their mutual relations, and the dynamic geometrical relation between rover frame and cameras - with the mast in between - are precisely described by a calibration process. During surface operations these relations will be continuously checked and updated by photogrammetric means; environmental influences such as temperature, pressure and the Mars gravity will be taken into account.
• Surface Mapping: Stereo imaging using the WAC stereo pair is used for the 3D reconstruction of the rover vicinity to identify, locate and characterize potentially interesting spots (3-10 for an experimental cycle to be performed within approx. 10-30 sols). The HRC is used for high-resolution imagery of these regions of interest, to be overlaid on the 3D reconstruction and potentially refined by shape-from-shading techniques. A quick processing result is crucial for time-critical operations planning; therefore emphasis is laid on automatic behaviour and intrinsic error detection mechanisms. The mapping results will be continuously fused, updated and synchronized with the map used by the navigation system. The surface representation needs to take into account the different resolutions of HRC and WAC, as well as uncommon or even unexpected image acquisition modes such as long-range, wide-baseline stereo from different rover positions, or escape strategies in the case of loss of one of the stereo camera heads.
• Panorama Mosaicking: The production of a high-resolution stereoscopic panorama is nowadays state-of-the-art in computer vision. However, certain challenges, such as the need for access to accurate spherical coordinates, maintenance of radiometric & spectral response in various spectral bands, fusion between HRC and WAC, super-resolution, and again the requirement of quick yet robust processing, will add some complexity to the ground processing system.
• Visualization for Operations Planning: Efficient operations planning is directly related to an ergonomic and well-performing visualization. It is intended to adapt existing tools to an integrated visualization solution for the purpose of scientific site characterization, view planning and reachability mapping/instrument placement of pointing sensors (including the panoramic imaging system itself), and selection of regions of interest.
The main interfaces between the individual components as well as the first version of a user requirement document are currently under definition. Besides the support for sensor layout and calibration, the 3D vision system will consist of 2-3 main modules to be used during ground processing & utilization of the ExoMars Rover panoramic imaging system.
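The ranging capability discussed above ultimately rests on stereo triangulation from the WAC pair. A minimal Python sketch of the rectified pinhole relation Z = f·B/d, with illustrative (not flight) parameters, shows how range resolution degrades as disparity shrinks:

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth of a matched point from a rectified stereo pair:
    Z = f * B / d (pinhole model, parallel cameras)."""
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 1000 px focal length, 0.5 m stereo base.
for d in (50.0, 10.0, 2.0):
    print(f"disparity {d:5.1f} px -> range {stereo_depth(1000.0, 0.5, d):7.1f} m")
```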
Grossberg, Stephen; Srinivasan, Karthik; Yazdanbakhsh, Arash
2015-01-01
How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations. PMID:25642198
The effects of simultaneous dual focus lenses on refractive development in infant monkeys.
Arumugam, Baskar; Hung, Li-Fang; To, Chi-Ho; Holden, Brien; Smith, Earl L
2014-10-16
We investigated the effects of two simultaneously imposed, competing focal planes on refractive development in monkeys. Starting at 3 weeks of age and continuing until 150 ± 4 days of age, rhesus monkeys were reared with binocular dual-focus spectacle lenses. The treatment lenses had central 2-mm zones of zero power and concentric annular zones with alternating powers of +3.0 diopters (D) and plano (pL or 0 D) (n = 7; +3D/pL) or -3.0 D and plano (n = 7; -3D/pL). Retinoscopy, keratometry, and A-scan ultrasonography were performed every 2 weeks throughout the treatment period. For comparison purposes, data were obtained from monkeys reared with full-field (FF) +3.0 D (n = 4) or -3.0 D (n = 5) lenses over both eyes and from 33 control animals reared with unrestricted vision. The +3 D/pL lenses slowed eye growth, resulting in hyperopic refractive errors that were similar to those produced by FF +3 D lenses (+3 D/pL = +5.25 D, FF +3 D = +4.63 D; P = 0.32), but significantly more hyperopic than those observed in control monkeys (+2.50 D, P = 0.0001). One -3 D/pL monkey developed compensating axial myopia; however, in the other -3 D/pL monkeys refractive development was dominated by the zero-powered portions of the treatment lenses. The refractive errors for the -3 D/pL monkeys were more hyperopic than those in the FF -3 D monkeys (-3 D/pL = +3.13 D, FF -3 D = -1.69 D; P = 0.01), but similar to those in control animals (P = 0.15). In the monkeys treated with dual-focus lenses, refractive development was dominated by the more anterior (i.e., relatively myopic) image plane. The results indicate that imposing relative myopic defocus over a large proportion of the retina is an effective means for slowing ocular growth. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.
Prevalence and causes of low vision and blindness in an urban population: The Chennai Glaucoma Study
Vijaya, Lingam; George, Ronnie; Asokan, Rashima; Velumuri, Lokapavani; Ramesh, Sathyamangalam Ve
2014-01-01
Aim: To evaluate the prevalence and causes of low vision and blindness in an urban south Indian population. Settings and Design: Population-based cross-sectional study. A total of 3850 subjects aged 40 years and above from Chennai city were examined at a dedicated facility in the base hospital. Materials and Methods: All subjects had a complete ophthalmic examination that included best-corrected visual acuity. Low vision and blindness were defined using World Health Organization (WHO) criteria. The influence of age, gender, literacy, and occupation was assessed using multiple logistic regression. Statistical Analysis: Chi-square test, t-test, and multivariate analysis were used. Results: Of the 4800 enumerated subjects, 3850 (1710 males, 2140 females) were examined (response rate, 80.2%). The prevalence of blindness was 0.85% (95% CI 0.6–1.1%) and was positively associated with age and illiteracy. Cataract was the leading cause (57.6%) and glaucoma the second cause (16.7%) of blindness. The prevalence of low vision was 2.9% (95% CI 2.4–3.4%) and of visual impairment (blindness + low vision) 3.8% (95% CI 3.2–4.4%). The primary causes of low vision were refractive errors (68%) and cataract (22%). Conclusions: In this urban population-based study, cataract was the leading cause of blindness and refractive error was the main reason for low vision. PMID:23619490
3D laptop for defense applications
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Chenault, David
2012-06-01
Polaris Sensor Technologies has developed numerous 3D display systems using a US Army patented approach. These displays have been developed as prototypes for handheld controllers for robotic systems and closed hatch driving, and as part of a TALON robot upgrade for 3D vision, providing depth perception for the operator for improved manipulation and hazard avoidance. In this paper we discuss the prototype rugged 3D laptop computer and its applications to defense missions. The prototype 3D laptop combines full temporal and spatial resolution display with the rugged Amrel laptop computer. The display is viewed through protective passive polarized eyewear, and allows combined 2D and 3D content. Uses include robot tele-operation with live 3D video or synthetically rendered scenery, mission planning and rehearsal, enhanced 3D data interpretation, and simulation.
NASA Astrophysics Data System (ADS)
Chen, Li
1999-09-01
Following a general definition of discrete curves, surfaces, and manifolds (Li Chen, 'Generalized discrete object tracking algorithms and implementations,' in Melter, Wu, and Latecki, eds., Vision Geometry VI, SPIE Vol. 3168, pp. 184-195, 1997), this paper focuses on the Jordan curve theorem in 2D discrete spaces. The Jordan curve theorem says that a (simply) closed curve separates a simply connected surface into two components. Based on the definition of discrete surfaces, we give three reasonable definitions of simply connected spaces. Theoretically, these three definitions should be equivalent. We have proved the Jordan curve theorem under the third definition of simply connected spaces. The Jordan theorem shows the relationship among an object, its boundary, and its outside area. In continuous space, the boundary of an mD manifold is an (m - 1)D manifold. A similar result applies to regular discrete manifolds. The concept of a new regular nD-cell is developed based on the regular surface point in 2D and the well-composed objects in 2D and 3D given by Latecki (L. Latecki, '3D well-composed pictures,' in Melter, Wu, and Latecki, eds., Vision Geometry IV, SPIE Vol. 2573, pp. 196-203, 1995).
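As a toy illustration of the separation property (not the paper's formalism), the following Python sketch builds a simple closed curve of pixels on a grid and verifies by flood fill that its 4-connected complement has exactly two components, an inside and an outside:

```python
from collections import deque

N = 11  # grid size (illustrative)
# A simple closed curve: the one-pixel-wide boundary of a 5x5 square block.
curve = {(x, y) for x in range(3, 8) for y in range(3, 8)
         if x in (3, 7) or y in (3, 7)}

def components(grid_size, blocked):
    """Count 4-connected components of the complement of `blocked`."""
    seen, comps = set(), 0
    for start in ((x, y) for x in range(grid_size) for y in range(grid_size)):
        if start in blocked or start in seen:
            continue
        comps += 1
        queue = deque([start])
        seen.add(start)
        while queue:
            x, y = queue.popleft()
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if (0 <= nx < grid_size and 0 <= ny < grid_size
                        and (nx, ny) not in blocked and (nx, ny) not in seen):
                    seen.add((nx, ny))
                    queue.append((nx, ny))
    return comps

print(components(N, curve))  # -> 2: an inside and an outside
```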
The perception of geometrical structure from congruence
NASA Technical Reports Server (NTRS)
Lappin, Joseph S.; Wason, Thomas D.
1989-01-01
The principal function of vision is to measure the environment. As demonstrated by the coordination of motor actions with the positions and trajectories of moving objects in cluttered environments, and by rapid recognition of solid objects in varying contexts from changing perspectives, vision provides real-time information about the geometrical structure and location of environmental objects and events. The geometric information provided by 2-D spatial displays is examined. It is proposed that the geometry of this information is best understood not within the traditional framework of perspective trigonometry, but in terms of the structure of qualitative relations defined by congruences among intrinsic geometric relations in images of surfaces. The basic concepts of this geometrical theory are outlined.
Harada, Hitoshi; Kanaji, Shingo; Hasegawa, Hiroshi; Yamamoto, Masashi; Matsuda, Yoshiko; Yamashita, Kimihiro; Matsuda, Takeru; Oshikiri, Taro; Sumi, Yasuo; Nakamura, Tetsu; Suzuki, Satoshi; Kakeji, Yoshihiro
2018-03-30
Recently, several new imaging technologies, such as three-dimensional (3D)/high-definition (HD) stereovision and high-resolution two-dimensional (2D)/4K monitors, have been introduced in laparoscopic surgery. However, it is still unclear whether these technologies actually improve surgical performance. Participants were 11 expert laparoscopic surgeons. We designed three laparoscopic suturing tasks (task 1: simple suturing; task 2: knotting thread in a small box; task 3: suturing in a narrow space) in training boxes. Performances were recorded by an optical position tracker. All participants first performed each task five times consecutively using a conventional 2D/HD monitor. Then they were randomly divided into two groups: six participants performed the tasks using 3D/HD before 2D/4K; the other five performed the tasks using the 2D/4K monitor before the 3D/HD monitor. After the trials, we evaluated the performance scores (operative time, path length of forceps, and technical errors) and compared them across all monitors. Using the total scores for each task, surgical performances ranked in decreasing order 3D/HD, 2D/4K, and 2D/HD. In task 1 (simple suturing), some surgical performances using 3D/HD were significantly better than those using 2D/4K (P = 0.017, P = 0.033, and P = 0.492 for operative time, path length, and technical errors, respectively). On the other hand, for operation in narrow spaces such as in tasks 2 and 3, performances using 2D/4K were not inferior to 3D/HD performances. The high-resolution images from the 2D/4K monitor may enhance depth perception in narrow spaces and may complement stereoscopic vision almost as well as 3D/HD. Compared to a 2D/HD monitor, a 3D/HD monitor improved the laparoscopic surgical technique of expert surgeons more than a 2D/4K monitor. However, the advantage of 2D/4K high-resolution images may be comparable to a 3D/HD monitor, especially in narrow spaces.
Design and Analysis of a Single-Camera Omnistereo Sensor for Quadrotor Micro Aerial Vehicles (MAVs).
Jaramillo, Carlos; Valenti, Roberto G; Guo, Ling; Xiao, Jizhong
2016-02-06
We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor's projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis of various system characteristics, such as size, catadioptric spatial resolution, and field of view. In addition, we propose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) resulting from a single image captured in a real-life experiment. We expect our sensor to be reproducible, as its model parameters can be optimized to satisfy other catadioptric-based omnistereo vision applications under different circumstances.
Visually guided grasping to study teleprogrammation within the BAROCO testbed
NASA Technical Reports Server (NTRS)
Devy, M.; Garric, V.; Delpech, M.; Proy, C.
1994-01-01
This paper describes vision functionalities required in future orbital laboratories; in such systems, robots will be needed in order to execute on-board scientific experiments or servicing and maintenance tasks under the remote control of ground operators. To this end, ESA has proposed a robotic configuration called EMATS; a testbed has been developed by ESTEC in order to evaluate the potential of an EMATS-like robot to execute scientific tasks in automatic mode. In the same context, CNES is developing the BAROCO testbed to investigate remote control and teleprogrammation, in which high-level primitives like 'Pick Object A' are provided as basic primitives. In nominal situations, the system has a priori knowledge about the positions of all objects. These positions are not very accurate, but this knowledge is sufficient to predict the position of the object to be grasped with respect to the manipulator frame. Vision is required in order to ensure correct grasping and to guarantee good accuracy for the subsequent operations. We describe our results on visually guided grasping of static objects. It seems to be a very classical problem, and a lot of results are available; however, in many cases they lack a realistic evaluation of accuracy, because such an evaluation requires tedious experiments. We present several results on calibration of the experimental testbed, on the recognition algorithms required to locate a 3D polyhedral object, and on the grasping itself.
Pose estimation of industrial objects towards robot operation
NASA Astrophysics Data System (ADS)
Niu, Jie; Zhou, Fuqiang; Tan, Haishu; Cao, Yu
2017-10-01
With the advantages of wide range, non-contact operation and high flexibility, visual estimation of target pose has been widely applied in modern industry, robot guidance and other engineering practice. However, due to the influence of complicated industrial environments, outside interference factors, a lack of object characteristics, restrictions of the camera and other limitations, visual estimation of target pose still faces many challenges. Focusing on the above problems, a pose estimation method for industrial objects is developed based on 3D models of the targets. By matching the extracted shape characteristics of objects against an a priori 3D model database of targets, the method recognizes the target. A pose estimate of the object can then be determined from the monocular vision measurement model. The experimental results show that this method can estimate the pose of rigid objects even from poor image information, and it provides a guiding basis for the operation of industrial robots.
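Once 2D image features have been matched to points on the recognized 3D model, the monocular pose computation is the classic Perspective-n-Point problem. A minimal sketch using OpenCV's solvePnP, with hypothetical model points, detections, and intrinsics (the paper does not specify its solver):

```python
import numpy as np
import cv2

# Hypothetical: four coplanar 3D points from the matched model (metres)...
object_pts = np.array([[0.00, 0.00, 0.0],
                       [0.10, 0.00, 0.0],
                       [0.10, 0.05, 0.0],
                       [0.00, 0.05, 0.0]], dtype=np.float64)
# ...and their detected pixel coordinates in the monocular image.
image_pts = np.array([[320.5, 240.2],
                      [402.1, 238.9],
                      [401.7, 281.4],
                      [319.8, 284.0]], dtype=np.float64)

K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])   # intrinsics from prior calibration
dist = np.zeros(5)                      # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
if ok:
    R, _ = cv2.Rodrigues(rvec)          # rotation matrix of the object pose
    print("object position in camera frame (m):", tvec.ravel())
```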
Prism-based single-camera system for stereo display
NASA Astrophysics Data System (ADS)
Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa
2016-06-01
This paper combines a prism with a single camera and puts forward a low-cost method of stereo imaging. First, according to the principles of geometrical optics, we deduce the relationship between the prism single-camera system and a dual-camera system, and according to the principles of binocular vision we deduce the relationship between binocular viewing and the dual-camera system. Thus we can establish the relationship between the prism single-camera system and binocular vision and obtain the positional relation of prism, camera, and object that gives the best stereo display effect. Finally, using active shutter stereo glasses from NVIDIA, we realize a three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can use the prism single-camera system to simulate the various observation manners of the eyes. The stereo imaging system designed by the proposed method can faithfully recover the 3-D shape of the photographed object.
Real-Time Vision-Based Stiffness Mapping †.
Faragasso, Angela; Bimbo, João; Stilli, Agostino; Wurdemann, Helge Arne; Althoefer, Kaspar; Asama, Hajime
2018-04-26
This paper presents new findings concerning a hand-held stiffness probe for the medical diagnosis of abnormalities during palpation of soft tissue. Palpation is recognized by the medical community as an essential and low-cost method to detect and diagnose disease in soft tissue. However, differences are often subtle, and clinicians need to train for many years before they can conduct a reliable diagnosis. The probe presented here fills this gap by providing a means to easily obtain stiffness values of soft tissue during a palpation procedure. Our stiffness sensor is equipped with a multi-degree-of-freedom (DoF) Aurora magnetic tracker, allowing us to track and record the 3D position of the probe while examining a tissue area and to generate a 3D stiffness map in real time. The stiffness probe was integrated in a robotic arm and tested in an artificial environment representing a good model of soft-tissue organs; the results show that the sensor can accurately measure and map the stiffness of a silicone phantom embedded with areas of varying stiffness.
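The mapping step can be pictured as binning local stiffness estimates by the tracked probe position. A minimal Python sketch under a linear-spring assumption (k = F/x); the voxel size, sample values, and data layout are all hypothetical:

```python
from collections import defaultdict

VOXEL = 0.005  # 5 mm grid over the palpated surface (illustrative choice)

def voxel_key(pos):
    return tuple(round(c / VOXEL) for c in pos)

stiffness_map = defaultdict(list)

def add_sample(pos_m, force_n, indentation_m):
    """Store a local stiffness estimate k = F / x at the probe's
    tracked 3D position (linear-spring approximation)."""
    stiffness_map[voxel_key(pos_m)].append(force_n / indentation_m)

# Hypothetical samples: (tracker position [m], force [N], indentation [m]).
for pos, f, x in [((0.10, 0.02, 0.05), 1.2, 0.004),
                  ((0.10, 0.02, 0.05), 1.3, 0.004),
                  ((0.12, 0.02, 0.05), 2.6, 0.003)]:
    add_sample(pos, f, x)

for key, ks in stiffness_map.items():
    print(key, f"mean stiffness {sum(ks) / len(ks):.0f} N/m")
```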
Single Lens Dual-Aperture 3D Imaging System: Color Modeling
NASA Technical Reports Server (NTRS)
Bae, Sam Y.; Korniski, Ronald; Ream, Allen; Fritz, Eric; Shearn, Michael
2012-01-01
In an effort to miniaturize a 3D imaging system, we created two viewpoints in a single-objective-lens camera. This was accomplished by placing a pair of Complementary Multi-band Bandpass Filters (CMBFs) in the aperture area. Two key characteristics of the CMBFs are that the passbands are staggered, so only one viewpoint is open at a time when a light band matched to that passband illuminates the scene, and that the passbands are positioned throughout the visible spectrum, so each viewpoint can render color by taking RGB spectral images. Each viewpoint takes a different spectral image from the other viewpoint, hence yielding a different color image relative to the other. This color mismatch in the two viewpoints could lead to color rivalry, where the human vision system fails to resolve two different colors. The difference shrinks as the number of passbands in a CMBF increases. (However, the number of passbands is constrained by cost and fabrication technique.) In this paper, a simulation predicting the color mismatch is reported.
Bendali, Amel; Rousseau, Lionel; Lissorgues, Gaëlle; Scorsone, Emmanuel; Djilas, Milan; Dégardin, Julie; Dubus, Elisabeth; Fouquet, Stéphane; Benosman, Ryad; Bergonzo, Philippe; Sahel, José-Alain; Picaud, Serge
2015-10-01
Two retinal implants have recently received the CE mark and one has obtained FDA approval for the restoration of useful vision in blind patients. Since the spatial resolution of current vision prostheses is not sufficient for most patients to detect faces or perform activities of daily living, more electrodes with less crosstalk are needed to transfer complex images to the retina. In this study, we modelled planar and three-dimensional (3D) implants with a distant ground or a ground grid, demonstrating greater spatial resolution with 3D structures. Using such flexible 3D implant prototypes, we showed that the degenerated retina could mould itself to the inside of the wells, thereby isolating bipolar neurons for specific, independent stimulation. To investigate the in vivo biocompatibility of diamond as an electrode or an insulating material, we developed a procedure for depositing diamond onto flexible 3D retinal implants. Taking polyimide 3D implants as a reference, we compared the number of neurons integrating the 3D diamond structures and their ratio to the numbers of all cells, including glial cells. The number of bipolar neurons increased, whereas the total cell number showed no increase and even a decrease. SEM examinations of implants confirmed the stability of the diamond after its implantation in vivo. This study further demonstrates the potential of 3D designs for increasing the resolution of retinal implants and validates the safety of diamond materials for retinal implants and neuroprostheses in general. Copyright © 2015. Published by Elsevier Ltd.
Team behavioral norms: a shared vision for a healthy patient care workplace.
Parsons, Mickey L; Clark, Paul; Marshall, Michelle; Cornett, Patricia A
2007-01-01
Leaders are bombarded with healthy workplace articles and advice. This article outlines a strategy for laying the foundation for healthy patient care workplaces at the pivotal unit level. This process facilitates the nursing unit staff to create and implement a shared vision for staff working relationships. Fourteen acute care hospital units, all participants in a healthy workplace intervention, were selected for this analysis because they chose team behavioral norms as a top priority to begin to implement their vision for a desired future for their units, a healthy workplace. These units developed specific team behavioral norms for their expectations of each other. The findings revealed 3 major norm themes and attributes: norms for effective communication, positive attitude, and accountability. Attributes of each norm are described to assist nurses to positively influence their core unit work culture.
Vision-based mapping with cooperative robots
NASA Astrophysics Data System (ADS)
Little, James J.; Jennings, Cullen; Murray, Don
1998-10-01
Two stereo-vision-based mobile robots navigate and autonomously explore their environment safely while building occupancy grid maps of the environment. The robots maintain position estimates within a global coordinate frame using landmark recognition. This allows them to build a common map by sharing position information and stereo data. Stereo vision processing and map updates are done at 3 Hz and the robots move at speeds of 200 cm/s. Cooperative mapping is achieved through autonomous exploration of unstructured and dynamic environments. The map is constructed conservatively, so as to be useful for collision-free path planning. Each robot maintains a separate copy of a shared map, and then posts updates to the common map when it returns to observe a landmark at home base. Issues include synchronization, mutual localization, navigation, exploration, registration of maps, merging repeated views (fusion), centralized vs decentralized maps.
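The occupancy grid maintained by each robot is typically updated per stereo range reading; the paper does not give its exact update rule, but a standard log-odds formulation looks like this (grid size, increments, and the example ray are illustrative):

```python
import numpy as np

L_OCC, L_FREE = 0.85, -0.4   # log-odds increments (illustrative tuning)

grid = np.zeros((200, 200))  # log-odds map, 0 = unknown (p = 0.5)

def integrate_ray(grid, cells_free, cell_hit):
    """Log-odds occupancy update for one stereo range reading:
    decrement cells the ray passed through, increment the hit cell."""
    for (i, j) in cells_free:
        grid[i, j] += L_FREE
    if cell_hit is not None:
        i, j = cell_hit
        grid[i, j] += L_OCC

def probability(grid):
    # Convert log-odds back to occupancy probability.
    return 1.0 - 1.0 / (1.0 + np.exp(grid))

# Hypothetical reading: the ray traverses three cells, then hits an obstacle.
integrate_ray(grid, [(100, 100), (100, 101), (100, 102)], (100, 103))
print(probability(grid)[100, 100:104].round(2))
```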
Clausner, Tommy; Dalal, Sarang S.; Crespo-García, Maité
2017-01-01
The performance of EEG source reconstruction has benefited from the increasing use of advanced head modeling techniques that take advantage of MRI together with the precise positions of the recording electrodes. The prevailing technique for registering EEG electrode coordinates involves electromagnetic digitization. However, the procedure adds several minutes to experiment preparation and typical digitizers may not be accurate enough for optimal source reconstruction performance (Dalal et al., 2014). Here, we present a rapid, accurate, and cost-effective alternative method to register EEG electrode positions, using a single digital SLR camera, photogrammetry software, and computer vision techniques implemented in our open-source toolbox, janus3D. Our approach uses photogrammetry to construct 3D models from multiple photographs of the participant's head wearing the EEG electrode cap. Electrodes are detected automatically or semi-automatically using a template. The rigid facial features from these photo-based models are then surface-matched to MRI-based head reconstructions to facilitate coregistration to MRI space. This method yields a final electrode coregistration error of 0.8 mm, while a standard technique using an electromagnetic digitizer yielded an error of 6.1 mm. The technique furthermore reduces preparation time, and could be extended to a multi-camera array, which would make the procedure virtually instantaneous. In addition to EEG, the technique could likewise capture the position of the fiducial markers used in magnetoencephalography systems to register head position. PMID:28559791
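The surface-matching step, aligning the photo-based head model to the MRI-based reconstruction, can be sketched with a generic iterative-closest-point loop built on the Kabsch fit; this illustrates the idea, not the janus3D implementation, and the point clouds below are synthetic:

```python
import numpy as np
from scipy.spatial import cKDTree

def kabsch(P, Q):
    """Best-fit rotation R and translation t mapping points P onto Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

def icp(source, target, iters=20):
    """Iterative closest point: alternate nearest-neighbour matching
    and rigid Kabsch fits until the surfaces align."""
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iters):
        _, idx = tree.query(src)
        R, t = kabsch(src, target[idx])
        src = src @ R.T + t
    return src

# Synthetic stand-ins for the two head surfaces: the "photo" cloud is a
# rotated and shifted copy of the "MRI" cloud.
rng = np.random.default_rng(0)
target = rng.normal(size=(500, 3))
theta = 0.2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
source = target @ Rz.T + np.array([0.05, -0.02, 0.10])
aligned = icp(source, target)
print("mean residual:", np.linalg.norm(aligned - target, axis=1).mean())
```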
A vision-based end-point control for a two-link flexible manipulator. M.S. Thesis
NASA Technical Reports Server (NTRS)
Obergfell, Klaus
1991-01-01
The measurement and control of the end-effector position of a large two-link flexible manipulator are investigated. The system implementation is described and an initial algorithm for static end-point positioning is discussed. Most existing robots are controlled through independent joint controllers, while the end-effector position is estimated from the joint positions using a kinematic relation. End-point position feedback can be used to compensate for uncertainty and structural deflections. Such feedback is especially important for flexible robots. Computer vision is utilized to obtain end-point position measurements. A look-and-move control structure alleviates the disadvantages of the slow and variable computer vision sampling frequency. This control structure consists of an inner joint-based loop and an outer vision-based loop. A static positioning algorithm was implemented and experimentally verified. This algorithm utilizes the manipulator Jacobian to transform a tip position error into a joint error. The joint error is then used to give a new reference input to the joint controller. The convergence of the algorithm is demonstrated experimentally under payload variation. A Landmark Tracking System (Dickerson et al., 1990) is used for vision-based end-point measurements. This system was modified and tested. A real-time control system was implemented on a PC and interfaced with the vision system and the robot.
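The Jacobian-based correction step can be sketched for a rigid planar two-link arm (flexibility ignored); here the forward kinematics stands in for the vision measurement, and link lengths and the target are illustrative:

```python
import numpy as np

L1, L2 = 1.0, 0.8  # link lengths (illustrative)

def fk(q):
    """Planar two-link forward kinematics: joint angles -> tip position."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

# Iterate: measure the tip error (fk stands in for the vision system),
# map it through the Jacobian, and re-command the joint controller.
q = np.array([0.3, 0.9])
target = np.array([1.2, 0.9])
for _ in range(10):
    err = target - fk(q)            # vision-based end-point error
    if np.linalg.norm(err) < 1e-6:
        break
    q += np.linalg.solve(jacobian(q), err)
print("final tip position:", fk(q).round(4))
```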
Symbolic Model of Perception in Dynamic 3D Environments
2006-11-01
can retrieve memories, work on goals, recognize visual or aural percepts, and perform actions. ACT-R has been selected for the current... types of memory. Procedural memory is the store of condition-action productions that are selected and executed by the core production system... a declarative memory chunk that is made available to the core production system through the vision module. The vision module has been...
The 3D model control of image processing
NASA Technical Reports Server (NTRS)
Nguyen, An H.; Stark, Lawrence
1989-01-01
Telerobotics studies remote control of distant robots by a human operator using supervisory or direct control. Even if the robot manipulator has vision or other senses, problems arise involving control, communications, and delay. The communication delays that may be expected with telerobots working in space stations while being controlled from an Earth lab have led to a number of experiments attempting to circumvent the problem. This delay in communication is a main motivating factor in moving from well understood instantaneous hands-on manual control to less well understood supervisory control; the ultimate step would be the realization of a fully autonomous robot. The 3-D model control plays a crucial role in resolving many conflicting image processing problems that are inherent in the bottom-up approach of most current machine vision processes. The 3-D model control approach is also capable of providing the necessary visual feedback information both for the control algorithms and for the human operator.
Saad, Leonide; Washington, Ilyas
2016-01-01
We discuss how an imperfect visual cycle results in the formation of vitamin A dimers, thought to be involved in the pathogenesis of various retinal diseases, and summarize how slowing vitamin A dimerization has been a therapeutic target of interest to prevent blindness. To elucidate the molecular mechanism of vitamin A dimerization, an alternative form of vitamin A, one that forms dimers more slowly yet maneuvers effortlessly through the visual cycle, was developed. Such a vitamin A, reinforced with deuterium (C20-D3-vitamin A), can be used as a non-disruptive tool to understand the contribution of vitamin A dimers to vision loss. Eventually, C20-D3-vitamin A could become a disease-modifying therapy to slow or stop vision loss associated with dry age-related macular degeneration (AMD), Stargardt disease and retinal diseases marked by such vitamin A dimers. Human clinical trials of C20-D3-vitamin A (ALK-001) are underway.
Impaired colour vision in workers exposed to organic solvents: A systematic review.
Betancur-Sánchez, A M; Vásquez-Trespalacios, E M; Sardi-Correa, C
2017-01-01
To evaluate recent evidence concerning the relationship between the exposure to organic solvents and the impairment of colour vision. A bibliographic search was conducted for scientific papers published in the last 15 years, in the LILACS, PubMed, Science Direct, EBSCO, and Cochrane databases that included observational studies assessing the relationship between impairment in colour vision and exposure to organic solvents. Eleven studies were selected that were performed on an economically active population and used the Lanthony D-15 desaturated test (D-15d), measured the exposure to organic solvents, and included unexposed controls. It was found that there is a statistically significant relationship between the exposure to organic solvents and the presence of an impairment in colour vision. The results support the hypothesis that exposure to organic solvents could induce acquired dyschromatopsia. The evaluation of colour vision with the D-15d test is simple and sensitive for diagnosis. More studies need to be conducted on this subject in order to better understand the relationship between impaired colour vision and more severe side effects caused by this exposure.
Monocular Stereo Measurement Using High-Speed Catadioptric Tracking
Hu, Shaopeng; Matsumoto, Yuji; Takaki, Takeshi; Ishii, Idaku
2017-01-01
This paper presents a novel concept of real-time catadioptric stereo tracking using a single ultrafast mirror-drive pan-tilt active vision system that can simultaneously switch between hundreds of different views in a second. By accelerating video-shooting, computation, and actuation at the millisecond-granularity level for time-division multithreaded processing in ultrafast gaze control, the active vision system can function virtually as two or more tracking cameras with different views. It enables a single active vision system to act as virtual left and right pan-tilt cameras that can simultaneously shoot a pair of stereo images for the same object to be observed at arbitrary viewpoints by switching the direction of the mirrors of the active vision system frame by frame. We developed a monocular galvano-mirror-based stereo tracking system that can switch between 500 different views in a second, and it functions as a catadioptric active stereo with left and right pan-tilt tracking cameras that can virtually capture 8-bit color 512×512 images each operating at 250 fps to mechanically track a fast-moving object with a sufficient parallax for accurate 3D measurement. Several tracking experiments for moving objects in 3D space are described to demonstrate the performance of our monocular stereo tracking system. PMID:28792483
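Whichever way the left and right views are produced — two physical cameras or one mirror-multiplexed camera — the 3D measurement itself reduces to two-view triangulation. A linear (DLT) sketch, assuming the projection matrices of the two virtual views are known from calibration:

```python
import numpy as np

def triangulate(P_left, P_right, x_left, x_right):
    """Linear (DLT) triangulation of one tracked point from a stereo pair.

    P_left, P_right: 3x4 projection matrices of the two (virtual) views.
    x_left, x_right: (u, v) pixel coordinates of the tracked object.
    """
    def rows(P, uv):
        u, v = uv
        return [u * P[2] - P[0], v * P[2] - P[1]]
    A = np.array(rows(P_left, x_left) + rows(P_right, x_right))
    _, _, Vt = np.linalg.svd(A)              # null vector of the 4x4 system
    X = Vt[-1]
    return X[:3] / X[3]

# Synthetic check: two views 0.2 m apart recover a point at 1 m depth.
K = np.array([[800.0, 0, 256], [0, 800.0, 256], [0, 0, 1]])
P_L = K @ np.hstack([np.eye(3), [[0.1], [0.0], [0.0]]])
P_R = K @ np.hstack([np.eye(3), [[-0.1], [0.0], [0.0]]])
X = np.array([0.05, 0.02, 1.0, 1.0])
xl, xr = P_L @ X, P_R @ X
print(triangulate(P_L, P_R, xl[:2] / xl[2], xr[:2] / xr[2]))  # ~[0.05 0.02 1.]
```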
Wu, Defeng; Chen, Tianfei; Li, Aiguo
2016-08-30
A robot-based three-dimensional (3D) measurement system is presented. In the presented system, a structured light vision sensor is mounted on the arm of an industrial robot. Measurement accuracy is one of the most important aspects of any 3D measurement system. To improve the measuring accuracy of the structured light vision sensor, a novel calibration approach is proposed. The approach is based on a number of fixed concentric circles manufactured in a calibration target. The concentric circles are employed to determine the real projected centres of the circles. Then, a calibration point generation procedure is used with the help of the calibrated robot. When enough calibration points are ready, the radial alignment constraint (RAC) method is adopted to calibrate the camera model. A multilayer perceptron neural network (MLPNN) is then employed to identify the calibration residuals after the application of the RAC method. Therefore, a hybrid of the pinhole model and the MLPNN is used to represent the real camera model. Using a standard ball to validate the effectiveness of the presented technique, the experimental results demonstrate that the proposed calibration approach achieves a highly accurate model of the structured light vision sensor.
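The hybrid camera model can be pictured as a pinhole predictor plus a learned residual correction. A sketch using scikit-learn's MLPRegressor as a stand-in for the paper's MLPNN; the function names and network size are illustrative assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_residual_net(world_pts, observed_px, pinhole_predict):
    """Learn the systematic error left over by the RAC-calibrated pinhole
    model, then return a hybrid predictor (pinhole + neural residual).

    world_pts:       (N, 3) calibration points.
    observed_px:     (N, 2) measured image coordinates.
    pinhole_predict: function mapping (N, 3) points to (N, 2) pixels.
    """
    residuals = observed_px - pinhole_predict(world_pts)
    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000)
    net.fit(world_pts, residuals)            # learn the model's residual error

    def hybrid_predict(pts):
        return pinhole_predict(pts) + net.predict(pts)
    return hybrid_predict
```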
Chai, Xun; Gao, Feng; Pan, Yang; Qi, Chenkun; Xu, Yilin
2015-04-22
Coordinate identification between vision systems and robots is quite a challenging issue in the field of intelligent robotic applications, involving steps such as perceiving the immediate environment, building the terrain map and planning the locomotion automatically. It is now well established that current identification methods have non-negligible limitations such as difficult feature matching, the requirement of external tools and the intervention of multiple people. In this paper, we propose a novel methodology to identify the geometric parameters of 3D vision systems mounted on robots without involving other people or additional equipment. In particular, our method focuses on legged robots, which have complex body structures and excellent locomotion ability compared to their wheeled/tracked counterparts. The parameters can be identified only by moving the robot on relatively flat ground. Concretely, an estimation approach is provided to calculate the ground plane. In addition, the relationship between the robot and the ground is modeled. The parameters are obtained by formulating the identification problem as an optimization problem. The methodology is integrated on a legged robot called "Octopus", which can traverse rough terrain with high stability after obtaining the identification parameters of its mounted vision system using the proposed method. Diverse experiments in different environments demonstrate that our novel method is accurate and robust.
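The ground-plane estimation step lends itself to a short sketch: a total-least-squares plane fit over 3D points assumed to lie on the floor (e.g. foot-contact positions derived from leg kinematics); the paper's exact optimization formulation may differ:

```python
import numpy as np

def fit_ground_plane(points):
    """Fit a plane n . x = d to (N, 3) points by total least squares.

    The normal is the singular vector with the smallest singular value,
    i.e. the direction of least variance of the centred point cloud.
    """
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    n = Vt[-1]
    if n[2] < 0:                  # orient the normal upward by convention
        n = -n
    return n, float(n @ centroid)
```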
Three-dimensional simulation, surgical navigation and thoracoscopic lung resection
Kanzaki, Masato; Kikkawa, Takuma; Sakamoto, Kei; Maeda, Hideyuki; Wachi, Naoko; Komine, Hiroshi; Oyama, Kunihiro; Murasugi, Masahide; Onuki, Takamasa
2013-01-01
This report describes a 3-dimensional (3-D) video-assisted thoracoscopic lung resection guided by a 3-D video navigation system with a patient-specific 3-D reconstructed pulmonary model obtained by preoperative simulation. A 78-year-old man was found to have a small solitary pulmonary nodule in the left upper lobe on chest computed tomography. Using a virtual 3-D pulmonary model, the tumor was found to involve two subsegments (S1 + 2c and S3a). Complete video-assisted thoracoscopic bi-subsegmentectomy was selected in simulation and was performed with lymph node dissection. A 3-D digital vision system was used for 3-D thoracoscopic performance. Wearing 3-D glasses, the surgeons observed the patient's reconstructed 3-D model on 3-D liquid-crystal displays and compared the 3-D intraoperative field with the reconstructed pulmonary model. PMID:24964426
Weidling, Patrick; Jaschinski, Wolfgang
2015-01-01
When presbyopic employees are wearing general-purpose progressive lenses, they have clear vision only with a lower gaze inclination to the computer monitor, given the head assumes a comfortable inclination. Therefore, in the present intervention field study the monitor position was lowered, also with the aim to reduce musculoskeletal symptoms. A comparison group comprised users of lenses that do not restrict the field of clear vision. The lower monitor positions led the participants to lower their head inclination, which was linearly associated with a significant reduction in musculoskeletal symptoms. However, for progressive lenses a lower head inclination means a lower zone of clear vision, so that clear vision of the complete monitor was not achieved, rather the monitor should have been placed even lower. The procedures of this study may be useful for optimising the individual monitor position depending on the comfortable head and gaze inclination and the vertical zone of clear vision of progressive lenses. For users of general-purpose progressive lenses, it is suggested that low monitor positions allow for clear vision at the monitor and for a physiologically favourable head inclination. Employees may improve their workplace using a flyer providing ergonomic-optometric information.
Prevalence of color vision deficiency among arc welders.
Heydarian, Samira; Mahjoob, Monireh; Gholami, Ahmad; Veysi, Sajjad; Mohammadi, Morteza
This study was performed to investigate whether occupationally related color vision deficiency can occur from welding. A total of 50 male welders, who had been working as welders for at least 4 years, were randomly selected as the case group, and 50 age-matched non-welder men, who lived in the same area, were regarded as the control group. Color vision was assessed using the Lanthony desaturated D-15 panel test. The test was performed under a daylight fluorescent lamp with a color temperature of 6500 K and a color rendering index of 94 that provided 1000 lx on the work plane. The test was carried out monocularly and no time limit was imposed. All data analyses were performed using SPSS, version 22. The prevalence of dyschromatopsia among welders was 15%, which was statistically higher than that of the non-welder group (2%) (p=0.001). Among welders with dyschromatopsia, color vision deficiency in 72.7% of cases was monocular. There was a positive relationship between the employment length and color vision loss (p=0.04). Similarly, a significant correlation was found between the prevalence of color vision deficiency and the average daily hours of welding (p=0.025). Chronic exposure to welding light may cause color vision deficiency. The damage depends on the exposure duration and the length of employment as a welder.
An Omnidirectional Vision Sensor Based on a Spherical Mirror Catadioptric System.
Barone, Sandro; Carulli, Marina; Neri, Paolo; Paoli, Alessandro; Razionale, Armando Viviano
2018-01-31
The combination of mirrors and lenses, which defines a catadioptric sensor, is widely used in the computer vision field. The definition of a catadioptric sensor is based on three main features: hardware setup, projection modelling and calibration process. In this paper, a complete description of these aspects is given for an omnidirectional sensor based on a spherical mirror. The projection model of a catadioptric system can be described by the forward projection task (FP, from 3D scene point to 2D pixel coordinates) and the backward projection task (BP, from 2D coordinates to 3D direction of the incident light). The forward projection of non-central catadioptric vision systems, typically obtained by using curved mirrors, is usually modelled by using a central approximation and/or by adopting iterative approaches. In this paper, an analytical closed-form solution to compute both forward and backward projection for a non-central catadioptric system with a spherical mirror is presented. In particular, the forward projection is reduced to a 4th order polynomial by determining the reflection point on the mirror surface through the intersection between a sphere and an ellipse. A matrix format of the implemented models, suitable for fast point clouds handling, is also described. A robust calibration procedure is also proposed and applied to calibrate a catadioptric sensor by determining the mirror radius and center with respect to the camera. PMID:29385051
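While the paper derives a closed-form quartic for the forward projection, the backward projection reduces to ray-sphere intersection followed by mirror reflection, sketched below. This covers the geometry only, not the authors' closed-form matrix formulation:

```python
import numpy as np

def backward_project(pixel, K, center, radius):
    """BP for a camera viewing a spherical mirror: pixel -> incident ray.

    K: 3x3 intrinsics; center, radius: mirror sphere in the camera frame.
    Returns the reflection point on the mirror and the direction of the
    incident light that reflected into the given pixel.
    """
    d = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    d /= np.linalg.norm(d)                    # viewing ray from the origin
    b = d @ center                            # solve |t*d - center| = radius
    disc = b * b - (center @ center - radius * radius)
    if disc < 0:
        raise ValueError("pixel does not see the mirror")
    t = b - np.sqrt(disc)                     # nearest intersection
    p = t * d                                 # reflection point
    n = (p - center) / radius                 # outward unit normal
    incident = d - 2.0 * (d @ n) * n          # reflect the viewing ray
    return p, incident
```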
Lee, Young-Sook; Chung, Wan-Young
2012-01-01
Vision-based abnormal event detection for home healthcare systems can be greatly improved using visual sensor-based techniques able to detect, track and recognize objects in the scene. However, in moving object detection and tracking processes, moving cast shadows can be misclassified as part of objects or as moving objects. Shadow removal is an essential step in developing video surveillance systems. The primary goal is to design novel computer vision techniques that can extract objects more accurately and discriminate between abnormal and normal activities. To improve the accuracy of object detection and tracking, our proposed shadow removal algorithm is employed. Abnormal event detection based on visual sensing, using shape-feature variation and 3-D trajectory, is presented to overcome low fall detection rates. The experimental results showed that the success rate of detecting abnormal events was 97% with a false positive rate of 2%. Our proposed algorithm can distinguish diverse fall activities, such as forward falls, backward falls, and sideways falls, from normal activities. PMID:22368486
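A generic shadow-aware foreground extraction along these lines can be sketched with OpenCV's MOG2 subtractor, which labels shadow pixels separately so they can be thresholded away. This is a stand-in for the paper's own shadow-removal algorithm, and the video file name is illustrative:

```python
import cv2

# MOG2 marks detected shadows as gray (127) in the foreground mask;
# thresholding above that keeps only true moving-object pixels.
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

cap = cv2.VideoCapture("ward_camera.avi")     # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    _, objects = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(objects, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Shape features of each contour (aspect ratio, centroid trajectory)
    # would then feed the abnormal-event / fall-detection stage.
cap.release()
```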
Transepithelial Photorefractive Keratectomy with Crosslinking for Keratoconus
Mukherjee, Achyut N; Selimis, Vasilis; Aslanides, Ioannis
2013-01-01
Purpose: To analyse visual, refractive and topographic outcomes of combining transepithelial photorefractive keratectomy (tPRK) with simultaneous corneal crosslinking for the visual rehabilitation of contact lens intolerant keratoconus patients. Methods: Patients with topographically significant keratoconus, limited corrected vision and intolerance of contact lenses were prospectively recruited, subject to ethical approval and consent. All patients underwent single-step aspheric tPRK and sequential crosslinking. Preoperative vision, refraction, corneal topography and wavefront were assessed, with postoperative assessment at 1, 3, 6, and 12 months. Results: 22 eyes of 14 patients were included in the pilot study. Mean age was 32 years (SD 6.8, range 24 to 43). Mean preoperative unaided vision was 1.39 LogMAR (SD 0.5), best corrected 0.31 LogMAR (SD 0.2). Mean preoperative spherical equivalent was -2.74 Diopters (D) (SD 4.1, range -12.25 to +7.75), and mean cylinder -2.9 D (SD 1.2, range 0 to -5.5). Mean central corneal thickness was 461 µm (SD 29, range 411 to 516). Vision improved postoperatively; unaided 0.32 LogMAR (SD 0.4), best corrected 0.11 (SD 0.13) (P<0.005). Mean postoperative cylinder was -1.4 D (SD 1.2), significantly reduced (p<0.005). Maximum keratometry (Kmax) was stable throughout postoperative follow-up (p<0.05). Conclusions: Non-topographic transepithelial PRK with simultaneous crosslinking improves vision, and may offer an alternative to keratoplasty in contact lens intolerant keratoconus. Further comparative studies with topographic PRK techniques are indicated. PMID:24222809
Sabattini, E; Bisgaard, K; Ascani, S; Poggi, S; Piccioli, M; Ceccarelli, C; Pieri, F; Fraternali-Orcioni, G; Pileri, S A
1998-01-01
AIM: To assess a newly developed immunohistochemical detection system, the EnVision++. METHODS: A large series of differently processed normal and pathological samples and 53 relevant monoclonal antibodies were chosen. A chessboard titration assay was used to compare the results provided by the EnVision++ system with those of the APAAP, CSA, LSAB, SABC, and ChemMate methods, when applied either manually or in a TechMate 500 immunostainer. RESULTS: With the vast majority of the antibodies, EnVision++ allowed two- to fivefold higher dilutions than the APAAP, LSAB, SABC, and ChemMate techniques, the staining intensity and percentage of expected positive cells being the same. With some critical antibodies (such as the anti-CD5), it turned out to be superior in that it achieved consistently reproducible results with differently fixed or overfixed samples. Only the CSA method, which includes tyramide based enhancement, allowed the same dilutions as the EnVision++ system, and in one instance (with the anti-cyclin D1 antibody) represented the gold standard. CONCLUSIONS: The EnVision++ is an easy to use system, which avoids the possibility of disturbing endogenous biotin and lowers the cost per test by increasing the dilutions of the primary antibodies. Being a two step procedure, it reduces both the assay time and the workload. PMID:9797726
Critical success factors in awareness of and choice towards low vision rehabilitation.
Fraser, Sarah A; Johnson, Aaron P; Wittich, Walter; Overbury, Olga
2015-01-01
The goal of the current study was to examine the critical factors indicative of an individual's choice to access low vision rehabilitation services. Seven hundred and forty-nine visually impaired individuals, from the Montreal Barriers Study, completed a structured interview and questionnaires (on visual function, coping, depression, satisfaction with life). Seventy-five factors from the interview and questionnaires were entered into a data-driven Classification and Regression Tree Analysis in order to determine the best predictors of awareness group: positive personal choice (I knew and I went), negative personal choice (I knew and did not go), and lack of information (Nobody told me, and I did not know). Having a response of moderate to no difficulty on item 6 (reading signs) of the Visual Function Index 14 (VF-14) indicated that the person had made a positive personal choice to seek rehabilitation, whereas reporting a great deal of difficulty on this item was associated with a lack of information on low vision rehabilitation. In addition to this factor, symptom duration of under nine years, moderate difficulty or less on item 5 (seeing steps or curbs) of the VF-14, and an indication of little difficulty or less on item 3 (reading large print) of the VF-14 further identified those who were more likely to have made a positive personal choice. Individuals in the lack of information group also reported greater difficulty on items 3 and 5 of the VF-14 and were more likely to be male. The duration-of-symptoms factor suggests that, even in the positive choice group, it may be best to offer rehabilitation services early. Being male and responding moderate difficulty or greater to the VF-14 questions about far, medium-distance and near situations involving vision was associated with individuals that lack information. Consequently, these individuals may need additional education about the benefits of low vision services in order to make a positive personal choice.
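For readers unfamiliar with the method, a Classification and Regression Tree of the kind used here can be sketched with scikit-learn; the features, scores, and group labels below are hypothetical stand-ins for the study's 75 interview and questionnaire variables:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=(200, 3))         # hypothetical item scores 0-4
y = rng.integers(0, 3, size=200)              # 3 awareness groups

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20)
tree.fit(X, y)                                # data-driven splits, as in CART
print(export_text(tree, feature_names=["vf14_item6_signs",
                                       "vf14_item5_steps",
                                       "symptom_duration"]))
```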
Pursuit of X-ray Vision for Augmented Reality
2012-01-01
Recognizing 3D Objects from 2D Images Using Structural Knowledge Base of Genetic Views
1988-08-31
2011-10-11
developed a method for determining the structure (component logs and their 3D placement) of a LINCOLN LOG assembly from a single image from an uncalibrated... small a class of components. Moreover, we focus on determining the precise pose and structure of an assembly, including the 3D pose of each... medial axes are parallel to the work surface. Thus valid structures have logs on...
Sauer, Igor M; Queisner, Moritz; Tang, Peter; Moosburner, Simon; Hoepfner, Ole; Horner, Rosa; Lohmann, Rudiger; Pratschke, Johann
2017-11-01
The paper evaluates the application of a mixed reality (MR) head-mounted display (HMD) for the visualization of anatomical structures in complex visceral-surgical interventions. A workflow was developed and technical feasibility was evaluated. Medical images are still not seamlessly integrated into surgical interventions and, thus, remain separated from the surgical procedure. Surgeons need to cognitively relate 2-dimensional sectional images to the 3-dimensional (3D) anatomy during the actual intervention. MR applications simulate 3D images and reduce the offset between working space and visualization, allowing for improved spatial-visual approximation of patient and image. The surgeon's field of vision was superimposed with a 3D model of the patient's relevant liver structures displayed on an MR-HMD. This set-up was evaluated during open hepatic surgery. A suitable workflow for segmenting image masks and texture mapping of tumors, the hepatic artery, the portal vein, and the hepatic veins was developed. The 3D model was positioned above the surgical site. Anatomical reassurance was possible simply by looking up. Positioning in the room was stable, without drift and with minimal jittering. Users reported satisfactory comfort wearing the device, without significant impairment of movement. MR technology has a high potential to improve the surgeon's action and perception in open visceral surgery by displaying 3D anatomical models close to the surgical site. Superimposing anatomical structures directly onto the organs within the surgical site remains challenging, as the abdominal organs undergo major deformations due to manipulation, respiratory motion, and the interaction with the surgical instruments during the intervention. A further application scenario would be intraoperative ultrasound examination, displaying the image directly next to the transducer. Displays and sensor technologies as well as biomechanical modeling and object-recognition algorithms will facilitate the application of MR-HMDs in surgery in the near future.
Wang, Feifei; Tidei, Joseph J; Polich, Eric D; Gao, Yu; Zhao, Huashan; Perrone-Bizzozero, Nora I; Guo, Weixiang; Zhao, Xinyu
2015-09-08
The mammalian embryonic lethal abnormal vision (ELAV)-like protein HuD is a neuronal RNA-binding protein implicated in neuronal development, plasticity, and diseases. Although HuD has long been associated with neuronal development, the functions of HuD in neural stem cell differentiation and the underlying mechanisms have gone largely unexplored. Here we show that HuD promotes neuronal differentiation of neural stem/progenitor cells (NSCs) in the adult subventricular zone by stabilizing the mRNA of special adenine-thymine (AT)-rich DNA-binding protein 1 (SATB1), a critical transcriptional regulator in neurodevelopment. We find that SATB1 deficiency impairs the neuronal differentiation of NSCs, whereas SATB1 overexpression rescues the neuronal differentiation phenotypes resulting from HuD deficiency. Interestingly, we also discover that SATB1 is a transcriptional activator of HuD during NSC neuronal differentiation. In addition, we demonstrate that NeuroD1, a neuronal master regulator, is a direct downstream target of SATB1. Therefore, HuD and SATB1 form a positive regulatory loop that enhances NeuroD1 transcription and subsequent neuronal differentiation. Our results here reveal a novel positive feedback network between an RNA-binding protein and a transcription factor that plays critical regulatory roles in neurogenesis.
Kim, Eon; Bakaraju, Ravi C; Ehrmann, Klaus
2016-01-01
To evaluate the repeatability of power profiles measured on the NIMO TR1504 (Lambda-X, Belgium) and investigate the effects of lens decentration on the power profiles of single vision (SV), bifocal (BF) and multifocal (MF) contact lenses. Accuracy of the sphere power was evaluated using single vision BK-7 calibration glass lenses of six minus and six plus powers. Three SV and four BF/MF contact lens types (three lenses each) were measured five times to calculate the coefficients of repeatability (COR) of the instrument. The COR was computed for each chord position, lens design, prescription power and operator. One lens of each type was measured with a deliberate decentration of up to ±0.5 mm in 0.1 mm steps. For all lenses, the COR varied across different regions of the half-chord position. In general, SV lenses showed lower COR compared to the BF/MF group lenses. There were no noticeable trends of COR between prescription powers for SV and BF/MF lenses. The shape of the power profiles was not affected by deliberate decentration for all SV and PureVision MF lenses. However, for Acuvue BF lenses, the peak-to-trough amplitude of the power profiles flattened by up to 1.00 D. The COR across the half-chord of the optic zone diameter was mostly within clinical relevance, except for the central 0.5 mm half-chord position. COR was dependent on the lens type, whereby the BF/MF group produced higher COR than SV lenses. The effects of deliberate decentration on the shape of power profiles were pronounced for lenses whose profiles had sharp transitions of power.
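A coefficient of repeatability can be computed from the five repeated profiles in a few lines; the 1.96·√2·Sw definition used below is one common convention and an assumption here, as is the synthetic data:

```python
import numpy as np

def coefficient_of_repeatability(measurements):
    """COR per half-chord position from repeated power-profile scans.

    measurements: (n_repeats, n_positions) power readings in diopters.
    Uses 1.96 * sqrt(2) * within-subject SD: the limit below which 95%
    of repeat-vs-repeat differences are expected to fall.
    """
    sw = measurements.std(axis=0, ddof=1)
    return 1.96 * np.sqrt(2.0) * sw

# Example: five repeats of one lens profile sampled at 40 chord positions.
profiles = np.random.default_rng(1).normal(-3.0, 0.05, size=(5, 40))
cor = coefficient_of_repeatability(profiles)   # one COR value per position
```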
1988-06-08
develop a working experimental system which could demonstrate dexterous manipulation in a robotic assembly task. This type of work can generally be divided into... D. Raviv discusses the development, implementation, and experimental evaluation of a new method for the reconstruction of 3D images from 2D vision data... Research supervision by K. Loparo. A. "Moving Shadows Methods for Inferring Three Dimensional Surfaces," D. Raviv, Ph.D. Thesis; B. "Robotic Adaptive...
3D Printing: Print the future of ophthalmology.
Huang, Wenbin; Zhang, Xiulan
2014-08-26
The three-dimensional (3D) printer is a new technology that creates physical objects from digital files. Recent technological advances in 3D printing have resulted in increased use of this technology in the medical field, where it is beginning to revolutionize medical and surgical possibilities. It is already providing medicine with powerful tools that facilitate education, surgical planning, and organ transplantation research. A good understanding of this technology will be beneficial to ophthalmologists. The potential applications of 3D printing in ophthalmology, both current and future, are explored in this article.
Nakashima, Ryoichi; Iwai, Ritsuko; Ueda, Sayako; Kumada, Takatsune
2015-01-01
When observers perceive several objects in a space, they should, at the same time, effectively perceive their own position as a viewpoint. However, little is known about observers' percepts of their own spatial location based on the visual scene information viewed from that position. Previous studies indicate that two distinct visual spatial processes exist during locomotion: egocentric position perception and egocentric direction perception. Those studies examined such perceptions in information-rich visual environments where much dynamic and static visual information was available. This study examined these two perceptions in information-impoverished environments containing only static lane-edge information (i.e., limited information). We investigated the visual factors associated with static lane-edge information that may affect these perceptions. In particular, we examined the effects of two factors on egocentric direction and position perceptions. One is the "uprightness factor": "far" visual information is seen at a higher location than "near" visual information. The other is the "central vision factor": observers usually look at "far" visual information using central (i.e., foveal) vision, whereas "near" visual information is viewed using peripheral vision. Experiment 1 examined the effect of the uprightness factor using normal and inverted road images. Experiment 2 examined the effect of the central vision factor using normal and transposed road images where the upper half of the normal image was presented under the lower half. Experiment 3 aimed to replicate the results of Experiments 1 and 2. Results showed that egocentric direction perception is interfered with by image inversion or image transposition, whereas egocentric position perception is robust against these image transformations. That is, both the uprightness and central vision factors are important for egocentric direction perception, but not for egocentric position perception. Therefore, the two visual spatial perceptions of observers' own viewpoints are fundamentally dissociable. PMID:26648895
Vergence-accommodation conflicts hinder visual performance and cause visual fatigue.
Hoffman, David M; Girshick, Ahna R; Akeley, Kurt; Banks, Martin S
2008-03-28
Three-dimensional (3D) displays have become important for many applications including vision research, operation of remote devices, medical imaging, surgical training, scientific visualization, virtual prototyping, and more. In many of these applications, it is important for the graphic image to create a faithful impression of the 3D structure of the portrayed object or scene. Unfortunately, 3D displays often yield distortions in perceived 3D structure compared with the percepts of the real scenes the displays depict. A likely cause of such distortions is the fact that computer displays present images on one surface. Thus, focus cues-accommodation and blur in the retinal image-specify the depth of the display rather than the depths in the depicted scene. Additionally, the uncoupling of vergence and accommodation required by 3D displays frequently reduces one's ability to fuse the binocular stimulus and causes discomfort and fatigue for the viewer. We have developed a novel 3D display that presents focus cues that are correct or nearly correct for the depicted scene. We used this display to evaluate the influence of focus cues on perceptual distortions, fusion failures, and fatigue. We show that when focus cues are correct or nearly correct, (1) the time required to identify a stereoscopic stimulus is reduced, (2) stereoacuity in a time-limited task is increased, (3) distortions in perceived depth are reduced, and (4) viewer fatigue and discomfort are reduced. We discuss the implications of this work for vision research and the design and use of displays.
Reaching with cerebral tunnel vision.
Rizzo, M; Darling, W
1997-01-01
We studied reaching movements in a 48-year-old man with bilateral lesions of the calcarine cortex which spared the foveal representation and caused severe tunnel vision. Three-dimensional (3D) reconstruction of brain MR images showed no evidence of damage beyond area 18. The patient could not see his hand during reaching movements, providing a unique opportunity to test the role of peripheral visual cues in limb control. Optoelectronic recordings of upper limb movements showed normal hand paths and trajectories to fixated extrinsic targets. There was no slowing, tremor, or ataxia. Self-bound movements were also preserved. Analyses of limb orientation at the endpoints of reaches showed that the patient could transform an extrinsic target's visual coordinates to an appropriate upper limb configuration for target acquisition. There was no disadvantage created by blocking the view of the reaching arm. Moreover, the patient could not locate targets presented in the hemianopic fields by pointing. Thus, residual nonconscious vision or 'blindsight' in the aberrant fields was not a factor in our patient's reaching performance. The findings in this study show that peripheral visual cues on the position and velocity of the moving limb are not critical to the control of goal directed reaches, at least not until the hand is close to target. Other cues such as kinesthetic feedback can suffice. It also appears that the visuomotor transformations for reaching do not take place before area 19 in humans.
Virtual environment assessment for laser-based vision surface profiling
NASA Astrophysics Data System (ADS)
ElSoussi, Adnane; Al Alami, Abed ElRahman; Abu-Nabah, Bassam A.
2015-03-01
Oil and gas businesses have been raising the demand on original equipment manufacturers (OEMs) to implement a reliable metrology method for assessing surface profiles of welds before and after grinding. This mandates a departure from the commonly used surface measurement gauges, which are not only operator dependent, but also limited to discrete measurements along the weld. Due to their potential accuracy and speed, laser-based vision surface profiling systems have been progressively gaining ground as part of manufacturing quality control. This effort presents a virtual environment that lends itself to developing and evaluating existing laser vision sensor (LVS) calibration and measurement techniques. A combination of two known calibration techniques is implemented to deliver a calibrated LVS system. System calibration is implemented virtually and experimentally to scan simulated and 3D-printed features of known profiles, respectively. Scanned data are inverted and compared with the input profiles to validate the virtual environment's capability for LVS surface profiling and to preliminarily assess the measurement technique for weld profiling applications. Moreover, this effort brings 3D scanning capability a step closer towards robust quality control applications in a manufacturing environment.
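The geometric core of a single-stripe LVS is the intersection of each pixel's viewing ray with the calibrated laser plane. A sketch under that standard model (the paper's combined calibration recovers the intrinsics and light plane through its own procedure):

```python
import numpy as np

def stripe_points(pixels, K, plane_n, plane_d):
    """Reconstruct 3D points on a laser stripe by ray-plane intersection.

    pixels:           (N, 2) stripe detections (u, v) in the image.
    K:                3x3 camera intrinsic matrix from calibration.
    plane_n, plane_d: calibrated light plane n . x = d in the camera frame.
    """
    uv1 = np.column_stack([pixels, np.ones(len(pixels))])
    rays = uv1 @ np.linalg.inv(K).T           # ray directions, camera at origin
    t = plane_d / (rays @ plane_n)            # ray parameter at the plane
    return rays * t[:, None]                  # (N, 3) weld-surface points
```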
McGrath, Colleen; Laliberte Rudman, Debbie; Polgar, Jan; Spafford, Marlee M; Trentham, Barry
2016-12-01
While previous research has explored the meaning of positive aging discourses from the perspective of older adults, the perspective of older adults aging with a disability has not been studied. In fact, the intersection of aging and disability has been largely underexplored in both social gerontology and disability studies. This critical ethnography engaged ten older adults aging with vision loss in narrative interviews, participant observation sessions, and semi-structured in-depth interviews. The overarching objective was to understand those attributes that older adults with age-related vision loss perceive as being the markers of a 'good old age.' The authors critically examined how these markers, and their disabling effects, are situated in ageist and disablist social assumptions regarding what it means to 'age well'. The participants' descriptions of the markers of a 'good old age' were organized into five main themes: 1) maintaining independence while negotiating help; 2) responding positively to vision loss; 3) remaining active while managing risk; 4) managing expectations to be compliant, complicit, and cooperative; and 5) striving to maintain efficiency. The study findings have provided helpful insights into how the ideas and assumptions that operate in relation to disability and impairment in late life are re-produced among older adults with age-related vision loss, and how older adults take on an identity that is consistent with socially embedded norms regarding what it means to 'age well'.
Hwang, Alex D.; Peli, Eli
2014-01-01
Watching 3D content using a stereoscopic display may cause various discomforting symptoms, including eye strain, blurred vision, double vision, and motion sickness. Numerous studies have reported motion-sickness-like symptoms during stereoscopic viewing, but no causal linkage between specific aspects of the presentation and the induced discomfort has been explicitly proposed. Here, we describe several causes, in which stereoscopic capture, display, and viewing differ from natural viewing resulting in static and, importantly, dynamic distortions that conflict with the expected stability and rigidity of the real world. This analysis provides a basis for suggested changes to display systems that may alleviate the symptoms, and suggestions for future studies to determine the relative contribution of the various effects to the unpleasant symptoms. PMID:26034562
2013-10-18
low cost robot testbed. SUBJECT TERMS: bio-inspired trajectory generation, in-situ obstacle avoidance, low-cost LEGO robots, vision-based... will not affect the solution optimality and thus will be regarded as zero. Following the LP motion strategy Eq. (1), the position vector of the LEGO robot... Using the Legendre-Gauss-Lobatto (LGL) method [14], the position of the LEGO robot can be further represented in collocated form (Eq. (6))...
Measuring visual discomfort associated with 3D displays
NASA Astrophysics Data System (ADS)
Lambooij, M.; Fortuin, M.; Ijsselsteijn, W. A.; Heynderickx, I.
2009-02-01
Some people report visual discomfort when watching 3D displays. For both the objective measurement of visual fatigue and the subjective measurement of visual discomfort, we would like to arrive at general indicators that are easy to apply in perception experiments. Previous research yielded contradictory results concerning such indicators. We hypothesize two potential causes for this: 1) not all clinical tests are equally appropriate to evaluate the effect of stereoscopic viewing on visual fatigue, and 2) there is a natural variation in susceptibility to visual fatigue amongst people with normal vision. To verify these hypotheses, we designed an experiment, consisting of two parts. Firstly, an optometric screening was used to differentiate participants in susceptibility to visual fatigue. Secondly, in a 2×2 within-subjects design (2D vs 3D and two-view vs nine-view display), a questionnaire and eight optometric tests (i.e. binocular acuity, fixation disparity with and without fusion lock, heterophoria, convergent and divergent fusion, vergence facility and accommodation response) were administered before and immediately after a reading task. Results revealed that participants found to be more susceptible to visual fatigue during screening showed a clinically meaningful increase in fusion amplitude after having viewed 3D stimuli. Two questionnaire items (i.e., pain and irritation) were significantly affected by the participants' susceptibility, while two other items (i.e., double vision and sharpness) were scored differently between 2D and 3D for all participants. Our results suggest that a combination of fusion range measurements and self-report is appropriate for evaluating visual fatigue related to 3D displays.
Gundersen, Kjell G; Potvin, Rick
2017-01-01
To compare two different diffractive trifocal intraocular lens (IOL) designs, evaluating longer-term refractive outcomes, visual acuity (VA) at various distances, low contrast VA and quality of vision. Patients with binocularly implanted trifocal IOLs of two different designs (FineVision [FV] and Panoptix [PX]) were evaluated 6 months to 2 years after surgery. Best distance-corrected and uncorrected VA were tested at distance (4 m), intermediate (80 and 60 cm) and near (40 cm). A binocular defocus curve was collected with the subject's best distance correction in place. The preferred reading distance was determined along with the VA at that distance. Low contrast VA at distance was also measured. Quality of vision was measured with the National Eye Institute Visual Function Questionnaire near subset and the Quality of Vision questionnaire. Thirty subjects in each group were successfully recruited. The binocular defocus curves differed only at vergences of -1.0 D (FV better, P=0.02), and -1.5 and -2.0 D (PX better, P<0.01 for both). Best distance-corrected and uncorrected binocular vision were significantly better for the PX lens at 60 cm (P<0.01), with no significant differences at other distances. The preferred reading distance was between 42 and 43 cm for both lenses, with the VA at the preferred reading distance slightly better with the PX lens (P=0.04). There were no statistically significant differences by lens for low contrast VA (P=0.1) or for quality of vision measures (P>0.3). Both trifocal lenses provided excellent distance, intermediate and near vision, but several measures indicated that the PX lens provided better intermediate vision at 60 cm. This may be important to users of tablets and other handheld devices. Quality of vision appeared similar between the two lens designs.
Visualization of anthropometric measures of workers in computer 3D modeling of work place.
Mijović, B; Ujević, D; Baksa, S
2001-12-01
In this work, a work place was visualized in 3D by means of a computer-built 3D machine model and a computer-animated worker. By visualizing 3D characters in inverse kinematic and dynamic relation with the operating part of a machine, the biomechanical characteristics of the worker's body were determined. The dimensions of the machine were determined by inspection of technical documentation as well as by direct measurements and camera recordings of the machine. On the basis of the measured body heights of workers, all relevant anthropometric measures were determined by a computer program developed by the authors. Knowing the anthropometric measures, the fields of vision and the reach zones, the exact postures of workers while performing technological procedures were determined when laying out work places. The minimal and maximal rotation angles and translations of the upper and lower arm, which are the basis for the analysis of worker loading, were analyzed. The dimensions of the space occupied by the body are obtained by computer anthropometric analysis of movement, e.g. the range of the arms and the positions of the legs, head, and back. The influence of work place layout on correct worker posture during work was examined, so that energy consumption and fatigue can be reduced to a minimum.
Mester, U; Heinen, S; Kaymak, H
2010-09-01
Aspheric intraocular lenses (IOLs) aim to improve visual function and particularly contrast vision by neutralizing spherical aberration. One drawback of such IOLs is the enhanced sensitivity to decentration and tilt, which can deteriorate image quality. A total of 30 patients who received bilateral phacoemulsification before implantation of the aspheric lens FY-60AD (Hoya) were included in a prospective study. In 25 of the patients (50 eyes) the following parameters could be assessed 3 months after surgery: visual acuity, refraction, contrast sensitivity, pupil size, wavefront errors and decentration and tilt using a newly developed device. The functional results were very satisfying and comparable to results gained with other aspheric IOLs. The mean refraction was sph + 0.1 D (±0.7 D) and cyl 0.6 D (±0.8 D). The spherical equivalent was −0.2 D (±0.6 D). Wavefront measurements revealed a good compensation of the corneal spherical aberration but vertical and horizontal coma also showed opposing values in the cornea and IOL. The assessment of the lens position using the Purkinje meter demonstrated uncritical amounts of decentration and tilt. The mean amount of decentration was 0.2 mm±0.2 mm in the horizontal and vertical directions. The mean amount of tilt was 4.0±2.1° in horizontal and 3.0±2.5° in vertical directions. In a normal dioptric power range the aspheric IOL FY-60AD compensates the corneal spherical aberration very well with only minimal decentration. The slight tilt is symmetrical in both eyes and corresponds to the position of the crystalline lens in young eyes. This may contribute to our findings of compensated corneal coma.
Bolger, P G; Stewart-Brown, S L; Newcombe, E; Starbuck, A
1991-01-01
OBJECTIVE--To see if there were differences in referral rates and abnormalities detected from two areas that were operating different preschool vision screening programmes. DESIGN--Cohort study using case notes of referrals. SETTING--Community based secondary referral centres in the county of Avon. PATIENTS--263 referrals from a child population of 7105 in Southmead district, an area that used orthoptists as primary vision screeners; 111 referrals from a child population of 2977 in Weston-super-Mare, an area that used clinical medical officers for screening. MAIN OUTCOME MEASURES--Amblyopia and squint detection rates, together with false positive referral rates. RESULTS--The amblyopia detection rate in Southmead district was significantly higher than in Weston-super-Mare (11/1000 children v 5/1000), as was the detection rate of squint (11/1000 v 3/1000). However, the false positive referral rate from Southmead was significantly lower than that from Weston-super-Mare (9/1000 v 23/1000). CONCLUSION--Preschool vision screening using orthoptists as primary screeners offers a more effective method of detecting visual abnormalities than using clinical medical officers. PMID:1747671
Stereo chromatic contrast sensitivity model to blue-yellow gratings.
Yang, Jiachen; Lin, Yancong; Liu, Yun
2016-03-07
As a fundamental metric of the human visual system (HVS), the contrast sensitivity function (CSF) is typically measured with sinusoidal gratings at detection threshold for the psychophysically defined cardinal channels: luminance, red-green, and blue-yellow. Chromatic CSF, which is a quick and valid index of human visual performance and various retinal diseases in two-dimensional (2D) space, cannot be directly applied to the measurement of human stereo visual performance. No existing perception model considers the influence of the chromatic CSF of inclined planes on depth perception in three-dimensional (3D) space. The main aim of this research is to extend traditional chromatic contrast sensitivity characteristics to 3D space and build a model applicable in 3D space, for example, to strengthen the stereo quality of 3D images. This research also attempts to build a vision model or method to assess stereo blindness. In this paper, a CRT screen was rotated clockwise and anti-clockwise to form inclined planes. Four inclined planes were selected to investigate human chromatic vision in 3D space, and the contrast threshold of each inclined plane was measured with 18 observers. Stimuli were isoluminant blue-yellow sinusoidal gratings. Horizontal spatial frequencies ranged from 0.05 to 5 c/d. Contrast sensitivity was calculated as the inverse of the pooled cone contrast threshold. According to the relationship between the spatial frequency of an inclined plane and horizontal spatial frequency, the chromatic contrast sensitivity characteristics in 3D space have been modeled based on the experimental data. The results show that the proposed model predicts human chromatic contrast sensitivity characteristics in 3D space well.
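Fitting a descriptive model to such pooled sensitivities might look like the sketch below; the log-parabola form, the bounds, and the numbers are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np
from scipy.optimize import curve_fit

def log_parabola_csf(f, s_max, f_peak, bw):
    """A common descriptive CSF form: a parabola in log-frequency."""
    return s_max * 10.0 ** (-((np.log10(f) - np.log10(f_peak)) / bw) ** 2)

# Hypothetical pooled sensitivities (1 / cone-contrast threshold) at the
# horizontal spatial frequencies used in the study (0.05-5 c/deg).
freqs = np.array([0.05, 0.1, 0.5, 1.0, 2.0, 5.0])
sens = np.array([40.0, 55.0, 60.0, 45.0, 20.0, 5.0])

params, _ = curve_fit(log_parabola_csf, freqs, sens, p0=(60.0, 0.5, 1.0),
                      bounds=([1.0, 0.01, 0.1], [200.0, 10.0, 5.0]))
s_max, f_peak, bw = params       # peak sensitivity, peak frequency, bandwidth
```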
Clipping polygon faces through a polyhedron of vision
NASA Technical Reports Server (NTRS)
Florence, Judit K. (Inventor); Rohner, Michel A. (Inventor)
1980-01-01
A flight simulator combines flight data and polygon face terrain data to provide a CRT display at each window of the simulated aircraft. The data base specifies the relative position of each vertex of each polygon face therein. Only those terrain faces currently appearing within the pyramid of vision defined by the pilot's eye and the edges of the pilot's window need be displayed at any given time. As the orientation of the pyramid of vision changes in response to flight data, the displayed faces are correspondingly displaced, eventually moving out of the pyramid of vision. Faces which are currently not visible (outside the pyramid of vision) are clipped from the data flow. In addition, faces which are only partially outside the pyramid of vision are reconstructed to eliminate the outside portion. Window coordinates are generated defining the distance between each vertex and each of the boundary planes forming the pyramid of vision. The sign bit of each window coordinate indicates whether the vertex is on the pyramid-of-vision side of the associated boundary plane (positive), or on the other side thereof (negative). The set of sign bits accompanying each vertex constitutes the outcode of that vertex. The outcodes (O.C.) are systematically processed and examined to determine which faces are completely inside the pyramid of vision (Case A--all signs positive), which faces are completely outside (Case C--all signs negative) and which faces must be reconstructed (Case B--both positive and negative signs).
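The outcode bookkeeping can be sketched compactly. The trivial-accept test below matches Case A; for trivial rejection it uses the standard conservative AND-of-outcodes rule (all vertices outside one common plane), a stand-in for the patent's exact Case C logic:

```python
import numpy as np

def outcodes(vertices, planes):
    """Sign-bit outcodes of polygon vertices against the pyramid of vision.

    vertices: (N, 3) points; planes: (M, 4) boundary planes (a, b, c, d)
    with the inside defined by a*x + b*y + c*z + d >= 0. Bit m of a code
    is 0 when the vertex is inside plane m, 1 when outside.
    """
    h = np.hstack([vertices, np.ones((len(vertices), 1))])
    outside = (h @ planes.T) < 0
    return (outside * (1 << np.arange(planes.shape[0]))).sum(axis=1)

def classify_face(codes):
    """Case A: keep whole; Case C: drop whole; Case B: clip/reconstruct."""
    if not codes.any():
        return "A"                    # every vertex inside every plane
    if np.bitwise_and.reduce(codes) != 0:
        return "C"                    # all vertices outside one common plane
    return "B"                        # straddles: reconstruct the face
```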
2012-01-01
Background Economic viability of treatments for primary open-angle glaucoma (POAG) should be assessed objectively to prioritise health care interventions. This study aims to identify the methods for eliciting utility values (UVs) most sensitive to differences in visual field and visual functioning in patients with POAG. As a secondary objective, the dimensions of generic health-related and vision-related quality of life most affected by progressive vision loss will be identified. Methods A total of 132 POAG patients were recruited. Three sets of utility values (EuroQoL EQ-5D, Short Form SF-6D, Time Trade Off) and a measure of perceived visual functioning from the National Eye Institute Visual Function Questionnaire (VFQ-25) were elicited during face-to-face interviews. The sensitivity of UVs to differences in the binocular visual field, visual acuity and visual functioning measures was analysed using non-parametric statistical methods. Results Median utilities were similar across Integrated Visual Field score quartiles for EQ-5D (P = 0.08) whereas SF-6D and Time-Trade-Off UVs significantly decreased (p = 0.01 and p = 0.001, respectively). The VFQ-25 score varied across Integrated Visual Field and binocular visual acuity groups and was associated with all three UVs (P ≤ 0.001); most of its vision-specific sub-scales were associated with the vision markers. The most affected dimension was driving. A relationship with vision markers was found for the physical component of SF-36 and not for any dimension of EQ-5D. Conclusions The Time-Trade-Off was more sensitive than EQ-5D and SF-6D to changes in vision and visual functioning associated with glaucoma progression but could not measure quality of life changes in the mildest disease stages. PMID:22909264
3-D vision and figure-ground separation by visual cortex.
Grossberg, S
1994-01-01
A neural network theory of three-dimensional (3-D) vision, called FACADE theory, is described. The theory proposes a solution of the classical figure-ground problem for biological vision. It does so by suggesting how boundary representations and surface representations are formed within a boundary contour system (BCS) and a feature contour system (FCS). The BCS and FCS interact reciprocally to form 3-D boundary and surface representations that are mutually consistent. Their interactions generate 3-D percepts wherein occluding and occluded object parts are separated, completed, and grouped. The theory clarifies how preattentive processes of 3-D perception and figure-ground separation interact reciprocally with attentive processes of spatial localization, object recognition, and visual search. A new theory of stereopsis is proposed that predicts how cells sensitive to multiple spatial frequencies, disparities, and orientations are combined by context-sensitive filtering, competition, and cooperation to form coherent BCS boundary segmentations. Several factors contribute to figure-ground pop-out, including: boundary contrast between spatially contiguous boundaries, whether due to scenic differences in luminance, color, spatial frequency, or disparity; partially ordered interactions from larger spatial scales and disparities to smaller scales and disparities; and surface filling-in restricted to regions surrounded by a connected boundary. Phenomena such as 3-D pop-out from a 2-D picture, Da Vinci stereopsis, 3-D neon color spreading, completion of partially occluded objects, and figure-ground reversals are analyzed. The BCS and FCS subsystems model aspects of how the two parvocellular cortical processing streams that join the lateral geniculate nucleus to prestriate cortical area V4 interact to generate a multiplexed representation of Form-And-Color-And-DEpth, or FACADE, within area V4. Area V4 is suggested to support figure-ground separation and to interact with cortical mechanisms of spatial attention, attentive object learning, and visual search. Adaptive resonance theory (ART) mechanisms model aspects of how prestriate visual cortex interacts reciprocally with a visual object recognition system in inferotemporal (IT) cortex for purposes of attentive object learning and categorization. Object attention mechanisms of the What cortical processing stream through IT cortex are distinguished from spatial attention mechanisms of the Where cortical processing stream through parietal cortex. Parvocellular BCS and FCS signals interact with the model What stream. Parvocellular FCS and magnocellular motion BCS signals interact with the model Where stream.(ABSTRACT TRUNCATED AT 400 WORDS)
Currie, Maria E; Trejos, Ana Luisa; Rayman, Reiza; Chu, Michael W A; Patel, Rajni; Peters, Terry; Kiaii, Bob B
2013-01-01
The purpose of this study was to determine the effect of three-dimensional (3D) binocular, stereoscopic, and two-dimensional (2D) monocular visualization on robotics-assisted mitral valve annuloplasty versus conventional techniques in an ex vivo animal model. In addition, we sought to determine whether these effects were consistent between novices and experts in robotics-assisted cardiac surgery. A cardiac surgery test-bed was constructed to measure forces applied during mitral valve annuloplasty. Sutures were passed through the porcine mitral valve annulus by the participants with different levels of experience in robotics-assisted surgery and tied in place using both robotics-assisted and conventional surgery techniques. The mean time for both the experts and the novices using 3D visualization was significantly less than that required using 2D vision (P < 0.001). However, there was no significant difference in the maximum force applied by the novices to the mitral valve during suturing (P = 0.7) and suture tying (P = 0.6) using either 2D or 3D visualization. The mean time required and forces applied by both the experts and the novices were significantly less using the conventional surgical technique than when using the robotic system with either 2D or 3D vision (P < 0.001). Despite high-quality binocular images, both the experts and the novices applied significantly more force to the cardiac tissue during 3D robotics-assisted mitral valve annuloplasty than during conventional open mitral valve annuloplasty. This finding suggests that 3D visualization does not fully compensate for the absence of haptic feedback in robotics-assisted cardiac surgery.
GPS Usage in a Population of Low-Vision Drivers.
Cucuras, Maria; Chun, Robert; Lee, Patrick; Jay, Walter M; Pusateri, Gregg
2017-01-01
We surveyed bioptic and non-bioptic low-vision drivers in Illinois, USA, to determine their usage of global positioning system (GPS) devices. Low-vision patients completed an IRB-approved phone survey regarding driving demographics and usage of GPS while driving. Participants were required to be active drivers with an Illinois driver's license and to meet one of the following criteria: best-corrected visual acuity (BCVA) less than or equal to 20/40, central or significant peripheral visual field defects, or a combination of both. Of 27 low-vision drivers, 10 (37%) used GPS while driving. The average age of GPS users was 54.3 years and of non-users 77.6 years. All 10 drivers who used GPS while driving reported an increased comfort or safety level. Since non-GPS users were significantly older than GPS users, it is likely that older participants would benefit from GPS technology training from their low-vision eye care professionals.
Broiler weight estimation based on machine vision and artificial neural network.
Amraei, S; Abdanan Mehdizadeh, S; Salari, S
2017-04-01
1. Machine vision and artificial neural network (ANN) procedures were used to estimate the live body weight of 30 broiler chickens reared from 1 d old for 42 d. 2. Imaging was performed twice daily. To localise chickens within the pen, an ellipse fitting algorithm was used and the chickens' head and tail were removed using the Chan-Vese method. 3. The correlations between body weight and 6 extracted physical features indicated strong correlations between body weight and 5 features: area, perimeter, convex area, and major and minor axis length. 4. According to statistical analysis, there was no significant difference between morning and afternoon data over the 42 d. 5. In an attempt to improve the accuracy of live weight approximation, different ANN training techniques, including Bayesian regulation, Levenberg-Marquardt, scaled conjugate gradient and gradient descent, were used. Bayesian regulation, with an R² value of 0.98, was the best network for prediction of broiler weight. 6. The accuracy of the machine vision technique was examined and most errors were less than 50 g.
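A minimal sketch of the feature-extraction step, assuming a binary mask of the bird has already been segmented (after ellipse fitting and Chan-Vese head/tail removal). It uses scikit-image's regionprops to pull the five correlated features named above; the mask is a toy stand-in.

```python
import numpy as np
from skimage import measure

def extract_shape_features(binary_mask):
    labeled = measure.label(binary_mask)
    props = measure.regionprops(labeled)
    largest = max(props, key=lambda p: p.area)  # assume the bird is the biggest blob
    return {
        "area": largest.area,
        "perimeter": largest.perimeter,
        "convex_area": largest.convex_area,  # 'area_convex' in newer scikit-image
        "major_axis_length": largest.major_axis_length,
        "minor_axis_length": largest.minor_axis_length,
    }

mask = np.zeros((100, 100), dtype=bool)
mask[30:70, 20:80] = True  # toy stand-in for a segmented chicken
print(extract_shape_features(mask))
```

These feature vectors would then be fed to the ANN for weight prediction.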
Kasten, Erich; Bunzenthal, Ulrike; Sabel, Bernhard A
2006-11-25
It has been argued that patients with visual field defects compensate for their deficit by making more frequent eye movements toward the hemianopic field and that visual field enlargements found after vision restoration therapy (VRT) may be an artefact of such eye movements. In order to determine if this was correct, we recorded eye movements in hemianopic subjects before and after VRT. Visual fields were measured in subjects with homonymous visual field defects (n=15) caused by trauma, cerebral ischemia or haemorrhage (lesion age >6 months). Visual field charts were plotted using both high-resolution perimetry (HRP) and conventional perimetry before and after a 3-month period of VRT, with eye movements being recorded with a 2D eye tracker. This permitted quantification of eye positions and measurements of deviation from fixation. VRT led to significant visual field enlargements as indicated by an increase of stimulus detection of 3.8% when tested using HRP and about 2.2% (OD) and 3.5% (OS) fewer misses with conventional perimetry. Eye movements were expressed as the standard deviations (S.D.) of the eye position recordings from fixation. Before VRT, the S.D. was +/-0.82 degrees horizontally and +/-1.16 degrees vertically; after VRT, it was +/-0.68 degrees and +/-1.39 degrees, respectively. A cluster analysis of the horizontal eye movements before VRT showed three types of subjects with (i) small (n=7), (ii) medium (n=7) or (iii) large fixation instability (n=1). Saccades were directed equally to the right or the left side, i.e., with no preference toward the blind hemifield. After VRT, many subjects showed a smaller variability of horizontal eye movements. Before VRT, 81.6% of the recorded eye positions were found within a range of 1 degree horizontally from fixation, whereas after VRT, 88.3% were within that range. In the 2 degree range, we found 94.8% before and 98.9% after VRT. Subjects moved their eyes 5 degrees or more 0.3% of the time before VRT versus 0.1% after VRT. Thus, in this study, subjects with homonymous visual field defects who were attempting to fixate a central target while their fields were being plotted typically showed brief horizontal shifts with no preference toward or away from the blind hemifield. These eye movements were usually less than 1 degree from fixation. Large saccades toward the blind field after VRT were very rare. VRT has no effect on either the direction or the amplitude of horizontal eye movements during visual field testing. These results argue against the theory that the visual field enlargements are artefacts induced by eye movements.
3D geometric phase analysis and its application in 3D microscopic morphology measurement
NASA Astrophysics Data System (ADS)
Zhu, Ronghua; Shi, Wenxiong; Cao, Quankun; Liu, Zhanwei; Guo, Baoqiao; Xie, Huimin
2018-04-01
Although three-dimensional (3D) morphology measurement has been widely applied at the macro-scale, 3D measurement technology at the microscopic scale is still lacking. In this paper, a microscopic 3D measurement technique based on the 3D geometric phase analysis (GPA) method is proposed. In this method, with machine vision and phase matching, the traditional GPA method is extended to three dimensions. Using this method, 3D deformation measurement at the micro-scale can be realized with a light microscope. Simulation experiments were conducted in this study, and the results demonstrate that the proposed method is robust to noise. In addition, the 3D morphology of the necking zone of a tensile specimen was measured, and the results demonstrate that the method is feasible.
Holmberg, M; Johansson, J; Forsgren, L; Heijbel, J; Sandgren, O; Holmgren, G
1995-08-01
We present linkage analysis of a large Swedish five-generation family with 15 individuals affected by autosomal dominant cerebellar ataxia (ADCA) associated with retinal degeneration and anticipation. Common clinical signs in this family include ataxia, dysarthria and severely impaired vision, consistent with the ADCA type II phenotype. Different subtypes of ADCA have proven difficult to classify clinically due to extensive phenotypic variability within and between families. Genetic analysis of a number of ADCA type I families shows that heterogeneity also exists at the genetic level. During the last few years several types of ADCA type I have been localized, and to date six genetically distinct forms have been identified, including SCA1 (6p), SCA2 (12q), SCA3 and Machado-Joseph disease (MJD) (14q), SCA4 (16q), and SCA5 (chromosome 11). We performed a genome-wide search of the Swedish ADCA type II family using a total of 270 microsatellite markers. Positive lod scores were obtained with a number of microsatellite markers located on chromosome 3p12-p21.1. Three markers gave lod scores over 3, with a maximum lod score of 4.53 achieved with the marker D3S1600. The ADCA type II gene could be restricted to a region of 32 cM flanked by the markers D3S1547 and D3S1274.
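For orientation, a two-point lod score is the log10 likelihood ratio of the marker data at a recombination fraction θ against free recombination (θ = 0.5); scores above 3, as with the 4.53 reported here, are the conventional evidence threshold for linkage. A minimal phase-known sketch (illustrative counts; real pedigree analysis computes full likelihoods over the pedigree):

```python
import numpy as np

def two_point_lod(recombinants, non_recombinants, theta):
    """Phase-known two-point lod score: log10 likelihood ratio of the observed
    meioses at recombination fraction theta versus theta = 0.5."""
    r, n = recombinants, non_recombinants
    l_theta = theta**r * (1.0 - theta)**n
    l_null = 0.5**(r + n)
    return np.log10(l_theta / l_null)

# Example: 1 recombinant out of 16 informative meioses, evaluated at theta = 0.05
print(round(two_point_lod(1, 15, 0.05), 2))   # ~3.18, above the threshold of 3
```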
Passive Sensor Integration for Vehicle Self-Localization in Urban Traffic Environment †
Gu, Yanlei; Hsu, Li-Ta; Kamijo, Shunsuke
2015-01-01
This research proposes an accurate vehicular positioning system which can achieve lane-level performance in urban canyons. Multiple passive sensors, including Global Navigation Satellite System (GNSS) receivers, onboard cameras and inertial sensors, are integrated in the proposed system. As the main source for localization, the GNSS technique suffers from Non-Line-Of-Sight (NLOS) propagation and multipath effects in urban canyons. This paper proposes to employ a novel GNSS positioning technique in the integration, which reduces multipath and NLOS effects by using a 3D building map. In addition, the inertial sensor can describe the vehicle motion but drifts over time. This paper develops vision-based lane detection, which is first used to bound the drift of the inertial sensor. Moreover, lane keeping and lane changing behaviors are extracted from the lane detection output and further reduce the lateral positioning error in the proposed localization system. We evaluate the integrated localization system in a challenging urban scenario. The experiments demonstrate that the proposed method achieves sub-meter mean positioning error. PMID:26633420
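A minimal sketch of the drift-control idea: inertial dead reckoning propagates the lateral position, and a lane-detection measurement corrects it. This generic 1-D Kalman filter is an illustration, not the paper's integration scheme; all noise values are assumptions.

```python
import numpy as np

class LateralKF:
    """1-D Kalman filter: IMU propagates the lateral offset, lane detection corrects it."""
    def __init__(self, q=0.05, r=0.2):
        self.x = 0.0   # lateral offset from lane centre (m)
        self.p = 1.0   # variance
        self.q = q     # process noise (IMU drift per step), assumed
        self.r = r     # lane-detection measurement noise, assumed

    def predict(self, lateral_velocity, dt):
        self.x += lateral_velocity * dt
        self.p += self.q

    def update(self, lane_measurement):
        k = self.p / (self.p + self.r)            # Kalman gain
        self.x += k * (lane_measurement - self.x)
        self.p *= (1.0 - k)

kf = LateralKF()
for v, z in [(0.1, 0.12), (0.05, 0.18), (-0.02, 0.15)]:  # (IMU lateral vel, lane meas.)
    kf.predict(v, dt=0.1)
    kf.update(z)
print(round(kf.x, 3))
```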
Recognition Of Complex Three Dimensional Objects Using Three Dimensional Moment Invariants
NASA Astrophysics Data System (ADS)
Sadjadi, Firooz A.
1985-01-01
A technique for the recognition of complex three dimensional objects is presented. The complex 3-D objects are represented in terms of their 3-D moment invariants, algebraic expressions that remain unchanged under changes in the objects' orientation and location in the field of view. The technique of 3-D moment invariants has been used successfully for simple 3-D object recognition in the past. In this work we have extended the method to the representation of more complex objects. Two complex objects were represented digitally, their 3-D moment invariants were calculated, and the invariance of these expressions was then verified by changing the orientation and the location of the objects in the field of view. The results of this study have a significant impact on 3-D robotic vision, 3-D target recognition, scene analysis and artificial intelligence.
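A minimal sketch of the idea behind moment invariants, using the simplest second-order 3-D invariant (the trace of the central second-moment matrix, often called J1), which is unchanged by rotation and translation of the object. This illustrates the invariance property only, not the specific higher-order expressions used in the paper.

```python
import numpy as np

def second_order_invariant(points):
    """J1 for an N x 3 point cloud: trace of the central second-moment matrix."""
    centered = points - points.mean(axis=0)        # translation invariance
    mu = centered.T @ centered / len(points)       # central second moments
    return np.trace(mu)                            # invariant under rotation

rng = np.random.default_rng(0)
obj = rng.normal(size=(500, 3))

# Rotate and translate the object; the invariant should be unchanged.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
moved = obj @ R.T + np.array([5.0, -2.0, 3.0])

print(np.isclose(second_order_invariant(obj), second_order_invariant(moved)))  # True
```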
YAP is essential for tissue tension to ensure vertebrate 3D body shape.
Porazinski, Sean; Wang, Huijia; Asaoka, Yoichi; Behrndt, Martin; Miyamoto, Tatsuo; Morita, Hitoshi; Hata, Shoji; Sasaki, Takashi; Krens, S F Gabriel; Osada, Yumi; Asaka, Satoshi; Momoi, Akihiro; Linton, Sarah; Miesfeld, Joel B; Link, Brian A; Senga, Takeshi; Shimizu, Nobuyoshi; Nagase, Hideaki; Matsuura, Shinya; Bagby, Stefan; Kondoh, Hisato; Nishina, Hiroshi; Heisenberg, Carl-Philipp; Furutani-Seiki, Makoto
2015-05-14
Vertebrates have a unique 3D body shape in which correct tissue and organ shape and alignment are essential for function. For example, vision requires the lens to be centred in the eye cup which must in turn be correctly positioned in the head. Tissue morphogenesis depends on force generation, force transmission through the tissue, and response of tissues and extracellular matrix to force. Although a century ago D'Arcy Thompson postulated that terrestrial animal body shapes are conditioned by gravity, there has been no animal model directly demonstrating how the aforementioned mechano-morphogenetic processes are coordinated to generate a body shape that withstands gravity. Here we report a unique medaka fish (Oryzias latipes) mutant, hirame (hir), which is sensitive to deformation by gravity. hir embryos display a markedly flattened body caused by mutation of YAP, a nuclear executor of Hippo signalling that regulates organ size. We show that actomyosin-mediated tissue tension is reduced in hir embryos, leading to tissue flattening and tissue misalignment, both of which contribute to body flattening. By analysing YAP function in 3D spheroids of human cells, we identify the Rho GTPase activating protein ARHGAP18 as an effector of YAP in controlling tissue tension. Together, these findings reveal a previously unrecognised function of YAP in regulating tissue shape and alignment required for proper 3D body shape. Understanding this morphogenetic function of YAP could facilitate the use of embryonic stem cells to generate complex organs requiring correct alignment of multiple tissues.
Scalable Photogrammetric Motion Capture System "mosca": Development and Application
NASA Astrophysics Data System (ADS)
Knyaz, V. A.
2015-05-01
A wide variety of applications (from industrial to entertainment) needs reliable and accurate 3D information about the motion of an object and its parts. Very often the motion is rather fast, as in vehicle movement, sports biomechanics, and the animation of cartoon characters. Motion capture systems based on different physical principles are used for these purposes. Vision-based systems have great potential for high accuracy and a high degree of automation due to progress in image processing and analysis. A scalable, inexpensive motion capture system was developed as a convenient and flexible tool for solving various tasks requiring 3D motion analysis. It is based on photogrammetric techniques of 3D measurement and provides high-speed image acquisition, high accuracy of 3D measurements and highly automated processing of captured data. Depending on the application, the system can easily be modified for different working areas from 100 mm to 10 m. The developed motion capture system uses from two to four machine vision cameras to acquire video sequences of object motion. All cameras work synchronously at frame rates up to 100 frames per second under the control of a personal computer, providing the possibility of accurate calculation of the 3D coordinates of points of interest. The system was used in a range of application fields and demonstrated high accuracy and a high level of automation.
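A minimal sketch of the core photogrammetric step in such a system: linear (DLT) triangulation of a matched point from two synchronized, calibrated cameras. The projection matrices and point are illustrative.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear triangulation of one point from two views (homogeneous DLT)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # camera 1 at the origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # camera 2, 1 unit baseline
X_true = np.array([0.2, 0.1, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]       # synthetic image points
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(np.allclose(triangulate(P1, P2, x1, x2), X_true))     # True
```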
Jaramillo, Carlos; Valenti, Roberto G.; Guo, Ling; Xiao, Jizhong
2016-01-01
We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor’s projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis of various system characteristics such as its size, catadioptric spatial resolution, and field of view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) resulting from a single image captured in a real-life experiment. We expect our sensor to be reproducible, as its model parameters can be optimized to suit other catadioptric-based omnistereo configurations under different circumstances. PMID:26861351
Potential hazards of viewing 3-D stereoscopic television, cinema and computer games: a review.
Howarth, Peter A
2011-03-01
The visual stimulus provided by a 3-D stereoscopic display differs from that of the real world because the image provided to each eye is produced on a flat surface. The distance from the screen to the eye remains fixed, providing a single focal distance, but the introduction of disparity between the images allows objects to be located geometrically in front of, or behind, the screen. Unlike in the real world, the stimulus to accommodation and the stimulus to convergence do not match. Although this mismatch is used positively in some forms of orthoptic treatment, a number of authors have suggested that it could instead lead to the development of asthenopic symptoms. From knowledge of the zone of clear, comfortable, single binocular vision one can predict that, for people with normal binocular vision, adverse symptoms will not be present if the discrepancy is small but are likely if it is large, and that what constitutes 'large' and 'small' is idiosyncratic to the individual. The accommodation-convergence mismatch is not, however, the only difference between the natural and the artificial stimuli. In the former case, an object located in front of, or behind, a fixated object will not only be perceived as double if the images fall outside Panum's fusional areas, but it will also be defocused and blurred. In the latter case, however, it is usual for the producers of cinema, TV or computer game content to provide an image that is in focus over the whole of the display, and as a consequence diplopic images will be sharply in focus. The size of Panum's fusional area is spatial frequency-dependent, and because of this the high spatial frequencies present in the diplopic 3-D image will provide a different stimulus to the fusion system from that found naturally. © 2011 The College of Optometrists.
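The mismatch described above can be quantified in diopters: the accommodation demand stays at the screen distance while the vergence demand follows the disparity-defined object distance. A small illustrative computation (the viewing distances are assumptions):

```python
def vergence_accommodation_mismatch(screen_dist_m, object_dist_m):
    """Both demands expressed in diopters (1/m); their difference is the conflict."""
    accommodation_demand = 1.0 / screen_dist_m   # focus stays on the screen
    vergence_demand = 1.0 / object_dist_m        # eyes converge on the virtual object
    return vergence_demand - accommodation_demand

# Object placed 0.5 m in front of a 2 m screen, versus on the screen itself
print(round(vergence_accommodation_mismatch(2.0, 1.5), 3))   # ~0.167 D conflict
print(round(vergence_accommodation_mismatch(2.0, 2.0), 3))   # 0.0 D, no conflict
```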
Mayro, Eileen L; Hark, Lisa A; Shiuey, Eric; Pond, Michael; Siam, Linda; Hill-Bennett, Tamara; Tran, Judie; Khanna, Nitasha; Silverstein, Marlee; Donaghy, James; Zhan, Tingting; Murchison, Ann P; Levin, Alex V
2018-06-01
To determine the prevalence and severity of uncorrected refractive errors in school-age children attending Philadelphia public schools. The Wills Eye Vision Screening Program for Children is a community-based pediatric vision screening program designed to detect and correct refractive errors and refer those with nonrefractive eye diseases for examination by a pediatric ophthalmologist. Between January 2014 and June 2016 the program screened 18,974 children in grades K-5 in Philadelphia public schools. Children who failed the vision screening were further examined by an on-site ophthalmologist or optometrist; children whose decreased visual acuity was not amenable to spectacle correction were referred to a pediatric ophthalmologist. Of the 18,974 children screened, 2,492 (13.1%) exhibited uncorrected refractive errors: 1,776 (9.4%) children had myopia, 459 (2.4%) had hyperopia, 1,484 (7.8%) had astigmatism, and 846 (4.5%) had anisometropia. Of the 2,492 with uncorrected refractive error, 368 children (14.8%) had more than one refractive error diagnosis. In stratifying refractive error diagnoses by severity, mild myopia (spherical equivalent of -0.50 D to < -3.00 D) was the most common diagnosis, present in 1,573 (8.3%) children. In this urban population 13.1% of school-age children exhibited uncorrected refractive errors. Blurred vision may create challenges for students in the classroom; school-based vision screening programs can provide an avenue to identify and correct refractive errors. Copyright © 2018 American Association for Pediatric Ophthalmology and Strabismus. Published by Elsevier Inc. All rights reserved.
Nuijts, Rudy M M A; Jonker, Soraya M R; Kaufer, Robert A; Lapid-Gortzak, Ruth; Mendicute, Javier; Martinez, Cristina Peris; Schmickler, Stefanie; Kohnen, Thomas
2016-02-01
To assess the clinical visual outcomes of bilateral implantation of Restor +2.5 diopter (D) multifocal intraocular lenses (IOLs) versus contralateral implantation of a Restor +2.5 D multifocal IOL in the dominant eye and a Restor +3.0 D multifocal IOL in the fellow eye. Multicenter study at 8 investigative sites. Prospective randomized parallel-group patient-masked 2-arm study. This study comprised adults requiring bilateral cataract extraction followed by multifocal IOL implantation. The primary endpoint was corrected intermediate visual acuity (CIVA) at 60 cm, and the secondary endpoint was corrected near visual acuity (CNVA) at 40 cm. Both endpoints were measured 3 months after implantation with a noninferiority margin of Δ = 0.1 logMAR. In total, 103 patients completed the study (53 bilateral, 50 contralateral). At 3 months, the mean CIVA at 60 cm was 0.13 logMAR and 0.10 logMAR in the bilateral group and contralateral group, respectively (difference 0.04 logMAR), achieving noninferiority. Noninferiority was not attained for CNVA at 40 cm; mean values at 3 months for bilateral and contralateral implantation were 0.26 logMAR and 0.11 logMAR, respectively (difference 0.15 logMAR). Binocular defocus curves suggested similar performance in distance vision between the 2 groups. Treatment-emergent ocular adverse event rates were similar between the groups. Bilateral implantation of the +2.5 D multifocal IOL provided similar distance vision and similar intermediate vision (60 cm) to contralateral implantation of the +2.5 D and +3.0 D multifocal IOLs, while noninferiority was not achieved for near vision (40 cm). Copyright © 2016 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Vision-based sensing for autonomous in-flight refueling
NASA Astrophysics Data System (ADS)
Scott, D.; Toal, M.; Dale, J.
2007-04-01
A significant capability of unmanned airborne vehicles (UAVs) is that they can operate tirelessly and at maximum efficiency in comparison to their human pilot counterparts. However, a major factor preventing ultra-long endurance missions is that they must land to refuel. Development effort has been directed at allowing UAVs to refuel automatically in the air using current refueling systems and procedures. The 'hose & drogue' refueling system was targeted as it is considered the more difficult case. Recent flight trials resulted in the first-ever fully autonomous airborne refueling operation. Development has gone into precision GPS-based navigation sensors to maneuver the aircraft into the station-keeping position and onwards to dock with the refueling drogue. However, in the terminal phases of docking the GPS is operating at its accuracy limit, and disturbances acting on the flexible hose and basket are not predictable with an open-loop model. Hence there is significant uncertainty in the position of the refueling drogue relative to the aircraft, which in practical operation is too great to achieve a successful and safe docking. A solution is to augment the GPS-based system with a vision-based sensor component through the terminal phase to visually acquire and track the drogue in 3D space. The higher bandwidth and resolution of camera sensors give significantly better estimates of the drogue position. Disturbances in the actual drogue position caused by subtle aircraft maneuvers and wind gusting can be visually tracked and compensated for, providing an accurate estimate. This paper discusses the issues involved in visually detecting a refueling drogue, selecting an optimum camera viewpoint, and acquiring and tracking the drogue throughout widely varying operating ranges and conditions.
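As one hedged illustration of the visual acquisition step, the roughly circular drogue basket can be picked up with a Hough circle transform and its image position handed to a tracker. This generic OpenCV sketch is not the system described in the paper; all parameters are assumptions.

```python
import cv2
import numpy as np

def detect_drogue(gray_frame):
    """Return (x, y, radius) of the strongest circular candidate, or None."""
    blurred = cv2.GaussianBlur(gray_frame, (9, 9), 2)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=100, param1=100, param2=40,
                               minRadius=10, maxRadius=120)
    if circles is None:
        return None
    # Prefer the largest candidate; a real system would gate on the tracker state.
    return max(np.round(circles[0]).astype(int), key=lambda c: c[2])

frame = np.zeros((480, 640), dtype=np.uint8)
cv2.circle(frame, (320, 240), 60, 255, thickness=4)  # synthetic drogue rim
print(detect_drogue(frame))
```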
Molecular patterns of X chromosome-linked color vision genes among 134 men of European ancestry.
Drummond-Borg, M; Deeb, S S; Motulsky, A G
1989-01-01
We used Southern blot hybridization to study X chromosome-linked color vision genes encoding the apoproteins of red and green visual pigments in 134 unselected Caucasian men. One hundred and thirteen individuals (84.3%) had a normal arrangement of their color vision pigment genes. All had one red pigment gene; the number of green pigment genes ranged from one to five with a mode of two. The frequency of molecular genotypes indicative of normal color vision (84.3%) was significantly lower than had been observed in previous studies of color vision phenotypes. Color vision defects can be due to deletions of red or green pigment genes or due to formation of hybrid genes comprising portions of both red and green pigment genes [Nathans, J., Piantanida, T.P., Eddy, R.L., Shows, T.B., Jr., & Hogness, D.S. (1986) Science 232, 203-210]. Characteristic anomalous patterns were seen in 15 (11.2%) individuals: 7 (5.2%) had patterns characteristic of deuteranomaly (mild defect in green color perception), 2 (1.5%) had patterns characteristic of deuteranopia (severe defect in green color perception), and 6 (4.5%) had protan patterns (the red perception defects protanomaly and protanopia cannot be differentiated by current molecular methods). Previously undescribed hybrid gene patterns consisting of both green and red pigment gene fragments in addition to normal red and green genes were observed in another 6 individuals (4.5%). Only 2 of these patterns were considered as deuteranomalous. Thus, DNA testing detected anomalous color vision pigment genes at a higher frequency than expected from phenotypic color vision tests. Some color vision gene arrays associated with hybrid genes are likely to mediate normal color vision. PMID:2915991
Effect of Number of Zones on Subjective Vision in Concentric Bifocal Optics.
Legras, Richard; Rio, David
2015-11-01
To evaluate the influence of the number of concentric zones of center-near bifocal optics on the subjective quality of vision. Twenty-two subjects used a five-item continuous grading scale to score the quality of vision of calculated images (i.e., three high-contrast 20/50 letters) viewed through their best sphero-cylindrical correction and a 3-mm pupil to limit the impact of their aberrations. Through-focus images were calculated from -4 to +2 diopters (D), every 0.25 D, in the presence of center-near bifocal optics (Add 2.5 D) varying in their number of concentric zones (from 2 to 20). To compare the results obtained with these profiles, we calculated the area under the (through-focus) curve (AUC) above a score of 2 out of 5 (i.e., the limit between a poor and a fair image quality, considered the limit of acceptability). This value was normalized by the naked-eye condition and divided into distance, intermediate, and near AUC. The results showed large interindividual variations. Distance AUC remained quite similar whatever the profile, near AUC decreased with the number of concentric zones, and intermediate AUC rose with the number of concentric zones. With 10 and 20 concentric zones, diffraction induced constructive interference at intermediate proximities and destructive interference at distance and near proximities. To balance distance, intermediate, and near quality of vision, a number of zones between 8 and 10 should be chosen. If the subject does not need intermediate quality of vision, then a profile with two to five zones should be favored.
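A minimal sketch of the reported metric: the area under the through-focus quality curve restricted to scores above the acceptability limit of 2 out of 5, computed with the trapezoidal rule. The vergence grid matches the abstract; the scores are invented.

```python
import numpy as np

def acceptable_auc(vergences_d, scores, threshold=2.0):
    """Area under the through-focus curve where quality exceeds the threshold."""
    excess = np.clip(np.asarray(scores, dtype=float) - threshold, 0.0, None)
    v = np.asarray(vergences_d, dtype=float)
    return float(np.sum(0.5 * (excess[1:] + excess[:-1]) * np.diff(v)))

vergences = np.arange(-4.0, 2.25, 0.25)            # -4 D to +2 D, every 0.25 D
scores = 2.0 + np.exp(-((vergences + 1.0) ** 2))   # toy through-focus curve
print(round(acceptable_auc(vergences, scores), 2))
```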
Ueyama, Hisao; Li, Yao-Hua; Fu, Gui-Lian; Lertrit, Patcharee; Atchaneeyasakul, La-ongsri; Oda, Sanae; Tanabe, Shoko; Nishida, Yasuhiro; Yamade, Shinichi; Ohkubo, Iwao
2003-01-01
We studied 247 Japanese males with congenital deutan color-vision deficiency and found that 37 subjects (15.0%) had a normal genotype of a single red gene followed by a green gene(s). Two of them had missense mutations in the green gene(s), but the other 35 subjects had no mutations in either the exons or their flanking introns. However, 32 of the 35 subjects, including all 8 subjects with pigment-color defect, a special category of deuteranomaly, had a nucleotide substitution, A−71C, in the promoter of a green gene at the second position in the red/green visual-pigment gene array. Although the −71C substitution was also present in color-normal Japanese males at a frequency of 24.3%, it was never at the second position but always found further downstream. The substitution was found in 19.4% of Chinese males and 7.7% of Thai males but rarely in Caucasians or African Americans. These results suggest that the A−71C substitution in the green gene at the second position is closely associated with deutan color-vision deficiency. In Japanese and presumably other Asian populations further downstream genes with −71C comprise a reservoir of the visual-pigment genes that cause deutan color-vision deficiency by unequal crossing over between the intergenic regions. PMID:12626747
Edler, Dennis; Bestgen, Anne-Kathrin; Kuchinke, Lars; Dickmann, Frank
2015-01-01
Cognitive representations of learned map information are subject to systematic distortion errors. Map elements that divide a map surface into regions, such as content-related linear symbols (e.g. streets, rivers, railway systems) or additional artificial layers (coordinate grids), provide an orientation pattern that can help users to reduce distortions in their mental representations. In recent years, the television industry has started to establish True-3D (autostereoscopic) displays as mass media. These modern displays make it possible to watch dynamic and static images including depth illusions without additional devices, such as 3D glasses. In these images, visual details can be distributed over different positions along the depth axis. Some empirical studies in vision research have provided first evidence that 3D stereoscopic content attracts more attention and is processed faster. So far, the impact of True-3D accentuation has not been explored for spatial memory tasks and cartography. This paper reports the results of two empirical studies that investigate whether True-3D accentuation of artificial, regular overlaying line features (i.e. grids) and content-related, irregular line features (i.e. highways and main streets) in official urban topographic maps (scale 1/10,000) further improves human object location memory performance. Memory performance is measured as both the percentage of correctly recalled object locations (hit rate) and the mean distance of correctly recalled objects (spatial accuracy). It is shown that True-3D accentuation of grids (depth offset: 5 cm) significantly enhances the spatial accuracy of recalled map object locations, whereas the True-3D emphasis of streets significantly improves the hit rate of recalled map object locations. These results show the potential of True-3D displays for improving the cognitive representation of learned cartographic information. PMID:25679208
Sakata, H; Taira, M; Kusunoki, M; Murata, A; Tanaka, Y
1997-08-01
Recent neurophysiological studies in alert monkeys have revealed that the parietal association cortex plays a crucial role in depth perception and visually guided hand movement. The following five classes of parietal neurons covering various aspects of these functions have been identified: (1) depth-selective visual-fixation (VF) neurons of the inferior parietal lobule (IPL), representing egocentric distance; (2) depth-movement sensitive (DMS) neurons of V5A and the ventral intraparietal (VIP) area representing direction of linear movement in 3-D space; (3) depth-rotation-sensitive (RS) neurons of V5A and the posterior parietal (PP) area representing direction of rotary movement in space; (4) visually responsive manipulation-related neurons (visual-dominant or visual-and-motor type) of the anterior intraparietal (AIP) area, representing 3-D shape or orientation (or both) of objects for manipulation; and (5) axis-orientation-selective (AOS) and surface-orientation-selective (SOS) neurons in the caudal intraparietal sulcus (cIPS) sensitive to binocular disparity and representing the 3-D orientation of the longitudinal axes and flat surfaces, respectively. Some AOS and SOS neurons are selective in both orientation and shape. Thus the dorsal visual pathway is divided into at least two subsystems, V5A, PP and VIP areas for motion vision and V6, LIP and cIPS areas for coding position and 3-D features. The cIPS sends the signals of 3-D features of objects to the AIP area, which is reciprocally connected to the ventral premotor (F5) area and plays an essential role in matching hand orientation and shaping with 3-D objects for manipulation.
Chai, Xun; Gao, Feng; Pan, Yang; Qi, Chenkun; Xu, Yilin
2015-01-01
Coordinate identification between vision systems and robots is a challenging issue in the field of intelligent robotic applications, involving steps such as perceiving the immediate environment, building the terrain map and planning the locomotion automatically. It is now well established that current identification methods have non-negligible limitations such as difficult feature matching, the requirement of external tools and the intervention of multiple people. In this paper, we propose a novel methodology to identify the geometric parameters of 3D vision systems mounted on robots without involving other people or additional equipment. In particular, our method focuses on legged robots, which have complex body structures and excellent locomotion ability compared to their wheeled/tracked counterparts. The parameters can be identified simply by moving the robot over relatively flat ground. Concretely, an estimation approach is provided to calculate the ground plane, and the relationship between the robot and the ground is modeled. The parameters are obtained by formulating the identification problem as an optimization problem. The methodology is integrated on a legged robot called “Octopus”, which can traverse rough terrain with high stability after obtaining the identification parameters of its mounted vision system using the proposed method. Diverse experiments in different environments demonstrate that our novel method is accurate and robust. PMID:25912350
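A minimal sketch of the ground-plane estimation step: fit a plane to 3D points of the ground (e.g. depth samples or foot-contact points) by singular value decomposition. This is a generic least-squares fit under the paper's flat-ground assumption, not its exact formulation.

```python
import numpy as np

def fit_plane(points):
    """Return (unit normal, centroid) of the best-fit plane through N x 3 points."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                      # direction of least variance
    if normal[2] < 0:                    # orient the normal upward
        normal = -normal
    return normal, centroid

rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, size=(200, 2))
z = 0.02 * xy[:, 0] + 0.01 * xy[:, 1] + rng.normal(0, 0.002, 200)  # near-flat ground
normal, centroid = fit_plane(np.column_stack([xy, z]))
print(np.round(normal, 3))   # close to (-0.02, -0.01, 1) normalized
```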
Bagci, Enise; Heijlen, Marjolein; Vergauwen, Lucia; Hagenaars, An; Houbrechts, Anne M.; Esguerra, Camila V.; Blust, Ronny; Darras, Veerle M.; Knapen, Dries
2015-01-01
Thyroid hormone (TH) balance is essential for vertebrate development. Deiodinase type 1 (D1) and type 2 (D2) increase and deiodinase type 3 (D3) decreases local intracellular levels of T3, the most important active TH. The role of deiodinase-mediated TH effects in early vertebrate development is only partially understood. Therefore, we investigated the role of deiodinases during early development of zebrafish until 96 hours post fertilization at the level of the transcriptome (microarray), biochemistry, morphology and physiology using morpholino (MO) knockdown. Knockdown of D1+D2 (D1D2MO) and knockdown of D3 (D3MO) both resulted in transcriptional regulation of energy metabolism and (muscle) development in abdomen and tail, together with reduced growth, impaired swim bladder inflation, reduced protein content and reduced motility. The reduced growth and impaired swim bladder inflation in D1D2MO could be due to lower levels of T3 which is known to drive growth and development. The pronounced upregulation of a large number of transcripts coding for key proteins in ATP-producing pathways in D1D2MO could reflect a compensatory response to a decreased metabolic rate, also typically linked to hypothyroidism. Compared to D1D2MO, the effects were more pronounced or more frequent in D3MO, in which hyperthyroidism is expected. More specifically, increased heart rate, delayed hatching and increased carbohydrate content were observed only in D3MO. An increase of the metabolic rate, a decrease of the metabolic efficiency and a stimulation of gluconeogenesis using amino acids as substrates may have been involved in the observed reduced protein content, growth and motility in D3MO larvae. Furthermore, expression of transcripts involved in purine metabolism coupled to vision was decreased in both knockdown conditions, suggesting that both may impair vision. This study provides new insights, not only into the role of deiodinases, but also into the importance of a correct TH balance during vertebrate embryonic development. PMID:25855985
3D display considerations for rugged airborne environments
NASA Astrophysics Data System (ADS)
Barnidge, Tracy J.; Tchon, Joseph L.
2015-05-01
The KC-46 is the next generation, multi-role, aerial refueling tanker aircraft being developed by Boeing for the United States Air Force. Rockwell Collins has developed the Remote Vision System (RVS) that supports aerial refueling operations under a variety of conditions. The system utilizes large-area, high-resolution 3D displays linked with remote sensors to enhance the operator's visual acuity for precise aerial refueling control. This paper reviews the design considerations, trade-offs, and other factors related to the selection and ruggedization of the 3D display technology for this military application.
Axial-Stereo 3-D Optical Metrology for Inner Profile of Pipes Using a Scanning Laser Endoscope
NASA Astrophysics Data System (ADS)
Gong, Yuanzheng; Johnston, Richard S.; Melville, C. David; Seibel, Eric J.
2015-07-01
With the rapid progress in optoelectronic components and computational power, 3-D optical metrology is becoming more and more popular in manufacturing and quality control due to its flexibility and high speed. However, most optical metrology methods are limited to external surfaces. This article proposes a new approach to measuring tiny internal 3-D surfaces with a scanning fiber endoscope and an axial-stereo vision algorithm. A dense, accurate point cloud of internally machined threads was generated and compared with corresponding X-ray 3-D data as ground truth, with the quantification analyzed by the Iterative Closest Point (ICP) algorithm.
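A hedged sketch of the quantification step: once the endoscopic point cloud has been aligned to the X-ray ground truth (e.g. by ICP), the residual error is the distribution of nearest-neighbour distances between the clouds. The clouds below are synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_deviation(measured, reference):
    """Mean and max nearest-neighbour distance from measured to reference points."""
    distances, _ = cKDTree(reference).query(measured)
    return distances.mean(), distances.max()

rng = np.random.default_rng(2)
reference = rng.uniform(0, 10, size=(2000, 3))                       # "X-ray" cloud
measured = reference[:500] + rng.normal(0, 0.01, size=(500, 3))      # noisy subset
print(cloud_deviation(measured, reference))
```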
Toward an Interpersonal Paradigm for Superior-Subordinate Communication.
1983-11-01
Bertamini, Marco; Wagemans, Johan
2013-04-01
Interest in convexity has a long history in vision science. For smooth contours in an image, it is possible to code regions of positive (convex) and negative (concave) curvature, and this provides useful information about solid shape. We review a large body of evidence on the role of this information in perception of shape and in attention. This includes evidence from behavioral, neurophysiological, imaging, and developmental studies. A review is necessary to analyze the evidence on how convexity affects (1) separation between figure and ground, (2) part structure, and (3) attention allocation. Despite some broad agreement on the importance of convexity in these areas, there is a lack of consensus on the interpretation of specific claims--for example, on the contribution of convexity to metric depth and on the automatic directing of attention to convexities or to concavities. The focus is on convexity and concavity along a 2-D contour, not convexity and concavity in 3-D, but the important link between the two is discussed. We conclude that there is good evidence for the role of convexity information in figure-ground organization and in parsing, but other, more specific claims are not (yet) well supported.
The precision measurement and assembly for miniature parts based on double machine vision systems
NASA Astrophysics Data System (ADS)
Wang, X. D.; Zhang, L. F.; Xin, M. Z.; Qu, Y. Q.; Luo, Y.; Ma, T. M.; Chen, L.
2015-02-01
In the assembly of miniature parts, structural features on the bottom or side of the parts often need to be aligned and positioned. General assembly equipment integrating a single vertical, downward-looking machine vision system cannot satisfy this requirement. Precision automatic assembly equipment was therefore developed with two integrated machine vision systems. In this system, a horizontal vision system is employed to measure the position of feature structures in the parts' side view, which cannot be seen by the vertical system. The position measured by the horizontal camera is converted into the vertical vision system's frame using calibration information. With careful calibration, the parts' alignment and positioning during the assembly process can be guaranteed. The developed assembly equipment is easy to implement, modular, and cost-effective. The handling of the miniature parts and the assembly procedure are briefly introduced, the calibration procedure is given, and the assembly error is analyzed for compensation.
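A minimal sketch of the cross-camera hand-off: a feature position measured in the horizontal camera's frame is mapped into the vertical system's frame with a calibrated homogeneous transform. The transform values are placeholders standing in for the result of the calibration procedure.

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Assumed calibration result: horizontal frame rotated 90 degrees about x,
# then offset, relative to the vertical frame (values are illustrative).
R = np.array([[1.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])
T_vert_from_horiz = make_transform(R, t=np.array([0.0, 35.0, 120.0]))  # mm

p_horiz = np.array([1.25, -0.40, 58.0, 1.0])   # feature seen in the side view (mm)
p_vert = T_vert_from_horiz @ p_horiz           # same feature in the vertical frame
print(np.round(p_vert[:3], 2))
```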
The research of edge extraction and target recognition based on inherent feature of objects
NASA Astrophysics Data System (ADS)
Xie, Yu-chan; Lin, Yu-chi; Huang, Yin-guo
2008-03-01
Current research on computer vision often needs specific techniques for particular problems. Little use has been made of high-level aspects of computer vision, such as three-dimensional (3D) object recognition, that are appropriate for large classes of problems and situations. In particular, high-level vision often focuses mainly on the extraction of symbolic descriptions and pays little attention to the speed of processing. In order to extract and recognize targets intelligently and rapidly, we developed a new 3D target recognition method based on the inherent features of objects, taking a cuboid as the model. On the basis of analyzing the cuboid's natural contour and grey-level distribution characteristics, an overall fuzzy evaluation technique was used to recognize and segment the target. The Hough transform was then used to extract and match the model's main edges, and the target edges were finally reconstructed by stereo techniques. There are three major contributions in this paper. First, the correspondences between the parameters of the cuboid model's straight edges in the image domain and in the transform domain were summarized; these greatly reduce needless computation and searching in the Hough transform processing and improve efficiency. Second, since prior knowledge of the cuboid contour's geometry is available, the intersections of the extracted component edges are computed, and the geometry of candidate edge matches is assessed from these intersections rather than from the raw edges; the outlines are thereby enhanced and the noise is suppressed. Finally, a 3-D target recognition method is proposed. Compared with other recognition methods, this new method has a quick response time and can be realized with high-level computer vision. The method presented here can be widely used in vision-guided techniques to strengthen their intelligence and generalization, and can also play an important role in object tracking, port AGVs, and robotics. Simulation experiments and theoretical analysis demonstrate that the proposed method suppresses noise effectively, extracts target edges robustly, and meets real-time requirements.
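A generic sketch of the edge-extraction stage: Canny edges followed by a probabilistic Hough transform to recover straight segments, as one might do for a cuboid's main edges. This uses OpenCV for illustration; the thresholds are assumptions, and the paper's parameter-space pruning is not reproduced here.

```python
import cv2
import numpy as np

img = np.zeros((200, 300), dtype=np.uint8)
cv2.rectangle(img, (60, 40), (240, 160), 255, thickness=2)  # synthetic cuboid face

edges = cv2.Canny(img, 50, 150)
segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                           minLineLength=40, maxLineGap=5)
if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        print((x1, y1), "->", (x2, y2))   # candidate straight edges for matching
```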
NASA Astrophysics Data System (ADS)
Maguen, Ezra I.; Salz, James J.; Nesburn, Anthony B.
1997-05-01
Preliminary results of the correction of myopia up to -7.00 D by tracked photorefractive keratectomy (T-PRK) with a scanning and tracking excimer laser by Autonomous Technologies are discussed. Forty-one eyes (20 males) participated; 28 eyes were evaluated one month postop. On the day of epithelialization, mean uncorrected vision was 20/45.3. At one month postop, 92.8% of eyes were 20/40 and 46.4% were 20/20. No eye was worse than 20/50. 75% of eyes were within +/- 0.5 D of emmetropia and 82% were within +/- 1.00 D of emmetropia. Eyes corrected for monovision were included. One eye lost 3 lines of best corrected vision and had more than 1.00 D of induced astigmatism due to a central corneal ulcer. Additional complications included symptomatic recurrent corneal erosions, which were controlled with topical hypertonic saline. T-PRK appears to allow effective correction of low to moderate myopia. Further study will establish the safety and efficacy of the procedure.
3D environment modeling and location tracking using off-the-shelf components
NASA Astrophysics Data System (ADS)
Luke, Robert H.
2016-05-01
The remarkable popularity of smartphones over the past decade has led to a technological race for dominance in market share. This has resulted in a flood of new processors and sensors that are inexpensive, low power and high performance. These sensors include accelerometers, gyroscopes, barometers and, most importantly, cameras. This sensor suite, coupled with multicore processors, allows a new community of researchers to build small, high-performance platforms at low cost. This paper describes a system using off-the-shelf components to perform position tracking as well as environment modeling. The system relies on stereo vision and inertial navigation to determine the movement of the system as well as to create a model of the environment it senses.
Parallel asynchronous systems and image processing algorithms
NASA Technical Reports Server (NTRS)
Coon, D. D.; Perera, A. G. U.
1989-01-01
A new hardware approach to the implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse-coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently, as in natural vision systems. The research is aimed at the implementation of algorithms, such as the intensity-dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision-system-type architecture. Besides providing a neural network framework for the implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after the raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.
Missing Optomotor Head-Turning Reflex in the DBA/2J Mouse
Huang, Wei; Chen, Hui; Koehler, Christopher L.; Howell, Gareth; John, Simon W. M.; Tian, Ning; Rentería, René C.; Križaj, David
2011-01-01
Purpose. The optomotor reflex of DBA/2J (D2), DBA/2J-Gpnmb+ (D2-Gpnmb+), and C57BL/6J (B6) mouse strains was assayed, and the retinal ganglion cell (RGC) firing patterns, direction selectivity, vestibulomotor function and central vision were compared between the D2 and B6 mouse lines. Methods. Intraocular pressure (IOP) measurements, real-time PCR, and immunohistochemical analysis were used to assess the time course of glaucomatous changes in D2 retinas. Behavioral analyses of the optomotor head-turning reflex, visible-platform Morris water maze and Rotarod measurements were conducted to test vision and vestibulomotor function. Electroretinogram (ERG) measurements were used to assay outer retinal function. The multielectrode array (MEA) technique was used to characterize RGC spiking and direction selectivity in D2 and B6 retinas. Results. A progressive increase in IOP and loss of Brn3a signals in D2 animals were consistent with glaucoma progression starting after 6 months of age. D2 mice showed no response to visual stimulation that evoked robust optomotor responses in B6 mice at any age after eye opening. The spatial frequency threshold was also not measurable in the D2-Gpnmb+ strain control. ERG a- and b-waves, central vision, vestibulomotor function, and the spiking properties of ON, OFF, ON-OFF, and direction-selective RGCs were normal in young D2 mice. Conclusions. The D2 strain is characterized by a lack of optomotor reflex before IOP elevation and RGC degeneration are observed. This behavioral deficit is D2 strain–specific, but is independent of retinal function and glaucoma. Caution is advised when using the optomotor reflex to follow glaucoma progression in D2 mice. PMID:21757588
Temporal multiplexing with adaptive optics for simultaneous vision
Papadatou, Eleni; Del Águila-Carrasco, Antonio J.; Marín-Franch, Iván; López-Gil, Norberto
2016-01-01
We present and test a methodology for generating simultaneous vision with a deformable mirror that changes shape at 50 Hz between two vergences: 0 D (far vision) and −2.5 D (near vision). Different bifocal designs, including toric ones and combinations of spherical aberration, were simulated and assessed objectively. We found that the typical corneal aberrations of a 60-year-old subject change the shape of the objective through-focus curves of a perfect bifocal lens. This methodology can be used to investigate subjective visual performance for different multifocal contact or intraocular lens designs. PMID:27867718
Vergence–accommodation conflicts hinder visual performance and cause visual fatigue
Hoffman, David M.; Girshick, Ahna R.; Akeley, Kurt; Banks, Martin S.
2010-01-01
Three-dimensional (3D) displays have become important for many applications including vision research, operation of remote devices, medical imaging, surgical training, scientific visualization, virtual prototyping, and more. In many of these applications, it is important for the graphic image to create a faithful impression of the 3D structure of the portrayed object or scene. Unfortunately, 3D displays often yield distortions in perceived 3D structure compared with the percepts of the real scenes the displays depict. A likely cause of such distortions is the fact that computer displays present images on one surface. Thus, focus cues—accommodation and blur in the retinal image—specify the depth of the display rather than the depths in the depicted scene. Additionally, the uncoupling of vergence and accommodation required by 3D displays frequently reduces one’s ability to fuse the binocular stimulus and causes discomfort and fatigue for the viewer. We have developed a novel 3D display that presents focus cues that are correct or nearly correct for the depicted scene. We used this display to evaluate the influence of focus cues on perceptual distortions, fusion failures, and fatigue. We show that when focus cues are correct or nearly correct, (1) the time required to identify a stereoscopic stimulus is reduced, (2) stereoacuity in a time-limited task is increased, (3) distortions in perceived depth are reduced, and (4) viewer fatigue and discomfort are reduced. We discuss the implications of this work for vision research and the design and use of displays. PMID:18484839
NASA Astrophysics Data System (ADS)
Tellaeche, A.; Arana, R.; Ibarguren, A.; Martínez-Otzeta, J. M.
Exhaustive quality control is becoming increasingly important in the world's globalized market. One example where quality control becomes critical is the mass production of percussion caps. These elements must be fabricated within a minimum tolerance deviation. This paper outlines a machine vision development using a 3D camera for the inspection of the whole production of percussion caps. This application presents multiple challenges, such as metallic reflections on the percussion caps, high-speed movement of the system, and mechanical errors and irregularities in percussion cap placement. Because of these challenges, traditional image processing methods cannot solve the problem, and hence machine learning algorithms have been tested to provide a feasible classification of the possible errors present in the percussion caps.
Computer Vision Research and its Applications to Automated Cartography
1985-09-01
[Only fragments of this report survive extraction. Recoverable contents: a chapter on 3-D scene geometry by Thomas M. Strat and Martin A. Fischler; Appendix D, "A New Sense for Depth of Field," by Alex P. Pentland; a section describing a baseline stereo system as a framework for integrating and evaluating research in modeling 3-D scene geometry; and a section on new methods for stereo compilation, addressing the conventional approach to recovering scene geometry from a stereo pair.]
Information-Driven Autonomous Exploration for a Vision-Based MAV
NASA Astrophysics Data System (ADS)
Palazzolo, E.; Stachniss, C.
2017-08-01
Most micro aerial vehicles (MAVs) are flown manually by a pilot. When it comes to autonomous exploration for MAVs equipped with cameras, we need a good exploration strategy for covering an unknown 3D environment in order to build an accurate map of the scene. In particular, the robot must select appropriate viewpoints to acquire informative measurements. In this paper, we present an approach that computes, in real time, a smooth flight path for the exploration of a 3D environment using a vision-based MAV. We assume a known bounding box of the object or building to explore, and our approach iteratively computes the next best viewpoints using a utility function that considers the expected information gain of new measurements, the distance between viewpoints, and the smoothness of the flight trajectories. In addition, the algorithm takes into account the elapsed time of the exploration run so as to safely land the MAV at its starting point after a user-specified time. We implemented our algorithm, and our experiments suggest that it allows for a precise reconstruction of the 3D environment while guiding the robot smoothly through the scene.
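The next-best-view selection lends itself to a compact sketch. The scoring below is one plausible reading of the described utility (information gain, inter-viewpoint distance, trajectory smoothness); the weights and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def utility(candidate, current_pose, prev_direction, info_gain,
            w_gain=1.0, w_dist=0.3, w_smooth=0.2):
    """Score a candidate viewpoint: reward expected information gain,
    penalize travel distance, reward smooth changes of flight direction."""
    offset = candidate - current_pose
    dist = np.linalg.norm(offset)
    direction = offset / (dist + 1e-9)
    smooth = float(np.dot(prev_direction, direction))  # cosine similarity
    return w_gain * info_gain - w_dist * dist + w_smooth * smooth

def next_best_view(candidates, gains, current_pose, prev_direction):
    scores = [utility(c, current_pose, prev_direction, g)
              for c, g in zip(candidates, gains)]
    return candidates[int(np.argmax(scores))]

cands = [np.array([1.0, 0.0, 2.0]), np.array([0.0, 2.0, 2.0])]
print(next_best_view(cands, gains=[0.8, 0.5],
                     current_pose=np.zeros(3),
                     prev_direction=np.array([1.0, 0.0, 0.0])))
```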
Hughes, Simon; McClelland, James; Tarte, Segolene; Lawrence, David; Ahmad, Shahreen; Hawkes, David; Landau, David
2009-06-01
In selected patients with NSCLC, the therapeutic index of radical radiotherapy can be improved with gating/tracking technology. Both techniques require real-time information on target location. This is often derived from a surrogate ventilatory signal. We assessed the correlation of two novel surrogate ventilatory signals with a spirometer-derived signal. The novel signals were obtained using the VisionRT stereoscopic camera system. The VisionRT-Tracked-Point (VRT-TP) signal was derived from tracking a point located midway between the umbilicus and xiphisternum. The VisionRT-Surface-Derived-Volume (VRT-SDV) signal was derived from 3D body surface imaging of the torso. Both have potential advantages over the current surrogate signals. Eleven subjects with NSCLC were recruited. Each was positioned as for radiotherapy treatment and then instructed to breathe in five different modes: normal, abdominal, thoracic, deep and shallow breathing. Synchronous ventilatory signals were recorded for later analysis. The signals were analysed for correlation across all modes of breathing and for phase shifts. The VRT-SDV was also assessed for its ability to determine the mode of breathing. Both novel respiratory signals showed good correlation (r>0.80) with spirometry in 9 of 11 subjects. For all subjects the correlation with spirometry was better for the VRT-SDV signal than for the VRT-TP signal. Only one subject displayed a phase shift between the VisionRT-derived signals and spirometry. The VRT-SDV signal could also differentiate between different modes of breathing. Unlike the spirometer-derived signal, neither VisionRT-derived signal was subject to drift. Both the VRT-TP and VRT-SDV signals have potential applications in ventilatory-gated and tracked radiotherapy. They can also be used as a signal for sorting 4DCT images, and to drive 4DCT single- and multiple-parameter motion models.
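The reported r > 0.80 values presumably refer to the standard Pearson correlation between the synchronously sampled signals. For reference (a textbook definition, not stated in the abstract), for paired samples $x_i$ and $y_i$:

\[
r = \frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i-\bar{x})^{2}}\;\sqrt{\sum_{i=1}^{n}(y_i-\bar{y})^{2}}}
\]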
Wu, Yonghua; Hadly, Elizabeth A; Teng, Wenjia; Hao, Yuyang; Liang, Wei; Liu, Yu; Wang, Haitao
2016-09-20
Owls (Strigiformes) represent a fascinating group of birds that are the ecological night-time counterparts to diurnal raptors (Accipitriformes). The nocturnality of owls, unusual within birds, has favored an exceptional visual system that is highly tuned for hunting at night, yet the molecular basis for this adaptation is lacking. Here, using a comparative evolutionary analysis of 120 vision genes obtained by retinal transcriptome sequencing, we found strong positive selection for low-light vision genes in owls, which contributes to their remarkable nocturnal vision. Not surprisingly, we detected gene loss of the violet/ultraviolet-sensitive opsin (SWS1) in all owls we studied, but two other color vision genes, the red-sensitive LWS and the blue-sensitive SWS2, were found to be under strong positive selection, which may be linked to the spectral tunings of these genes toward maximizing photon absorption in crepuscular conditions. We also detected positively selected genes associated with motion detection in falcons, and positively selected genes associated with bright-light vision and eye protection in other diurnal raptors (Accipitriformes). Our results suggest that the adaptive evolution of vision genes reflects differentiated activity time and distinct hunting behaviors. PMID:27645106
Vertically integrated photonic multichip module architecture for vision applications
NASA Astrophysics Data System (ADS)
Tanguay, Armand R., Jr.; Jenkins, B. Keith; von der Malsburg, Christoph; Mel, Bartlett; Holt, Gary; O'Brien, John D.; Biederman, Irving; Madhukar, Anupam; Nasiatka, Patrick; Huang, Yunsong
2000-05-01
The development of a truly smart camera, with inherent capability for low latency semi-autonomous object recognition, tracking, and optimal image capture, has remained an elusive goal notwithstanding tremendous advances in the processing power afforded by VLSI technologies. These features are essential for a number of emerging multimedia-based applications, including enhanced augmented reality systems. Recent advances in understanding of the mechanisms of biological vision systems, together with similar advances in hybrid electronic/photonic packaging technology, offer the possibility of artificial biologically-inspired vision systems with significantly different, yet complementary, strengths and weaknesses. We describe herein several system implementation architectures based on spatial and temporal integration techniques within a multilayered structure, as well as the corresponding hardware implementation of these architectures based on the hybrid vertical integration of multiple silicon VLSI vision chips by means of dense 3D photonic interconnections.
Figueroa Velez, Dario X.; Ellefsen, Kyle L.; Hathaway, Ethan R.; Carathedathu, Mathew C.
2017-01-01
The maturation of cortical parvalbumin-positive (PV) interneurons depends on the interaction of innate and experience-dependent factors. Dark-rearing experiments suggest that visual experience determines when broad orientation selectivity emerges in visual cortical PV interneurons. Here, using neural transplantation and in vivo calcium imaging of mouse visual cortex, we investigated whether innate mechanisms contribute to the maturation of orientation selectivity in PV interneurons. First, we confirmed earlier findings showing that broad orientation selectivity emerges in PV interneurons by 2 weeks after vision onset, ∼35 d after these cells are born. Next, we assessed the functional development of transplanted PV (tPV) interneurons. Surprisingly, 25 d after transplantation (DAT) and >2 weeks after vision onset, we found that tPV interneurons have not developed broad orientation selectivity. By 35 DAT, however, broad orientation selectivity emerges in tPV interneurons. Transplantation does not alter orientation selectivity in host interneurons, suggesting that the maturation of tPV interneurons occurs independently from their endogenous counterparts. Together, these results challenge the notion that the onset of vision solely determines when PV interneurons become broadly tuned. Our results reveal that an innate cortical mechanism contributes to the emergence of broad orientation selectivity in PV interneurons. SIGNIFICANCE STATEMENT: Early visual experience and innate developmental programs interact to shape cortical circuits. Visual-deprivation experiments have suggested that the onset of visual experience determines when interneurons mature in the visual cortex. Here we used neuronal transplantation and cellular imaging of visual responses to investigate the maturation of parvalbumin-positive (PV) interneurons. Our results suggest that the emergence of broad orientation selectivity in PV interneurons is innately timed. PMID:28123018
Savini, Giacomo; Næser, Kristian
2015-01-13
To investigate the influence of posterior corneal astigmatism, surgically-induced corneal astigmatism (SICA), intraocular lens (IOL) orientation, and effective lens position on the refractive outcome of toric IOLs. Five models were prospectively investigated. Keratometric astigmatism and an intended SICA of 0.2 diopters (D) were entered into model 1. Total corneal astigmatism, measured by a rotating Scheimpflug camera, was used instead of keratometric astigmatism in model 2. The mean postoperative SICA, the actual postoperative IOL orientation, and the influence of the effective lens position were added, respectively, into models 3, 4, and 5. Astigmatic data were vectorially described by meridional and torsional powers. A set of equations was developed to describe the error in refractive astigmatism (ERA) as the difference between the postoperative refractive astigmatism and the target refractive astigmatism. We enrolled 40 consecutive eyes. In model 1, ERA calculations revealed significant cylinder overcorrection in with-the-rule (WTR) eyes (meridional power = -0.59 ± 0.34 D, P < 0.0001) and undercorrection in against-the-rule (ATR) eyes (0.32 ± 0.42 D, P = 0.01). When total corneal astigmatism was used instead of keratometric astigmatism (model 2), the ERA meridional power decreased in WTR (-0.13 ± 0.42 D) and ATR (0.07 ± 0.59 D) eyes, both values being not statistically significant. Models 3 to 5 did not lead to significant improvement. Posterior corneal astigmatism exerts the highest influence on the ERA after toric IOL implantation. Basing calculations on total corneal astigmatism rather than keratometric astigmatism improves the prediction of the residual refractive astigmatism. Copyright 2015 The Association for Research in Vision and Ophthalmology, Inc.
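The meridional/torsional description corresponds to a double-angle vector treatment of astigmatism. In one common convention (my notation; a generic sketch rather than the paper's exact equations), a cylinder of magnitude $C$ at axis $\alpha$ maps to a two-component vector, and the error in refractive astigmatism is the vector difference between achieved and target values:

\[
\vec{A} = \bigl(C\cos 2\alpha,\; C\sin 2\alpha\bigr), \qquad
\overrightarrow{\mathrm{ERA}} = \vec{A}_{\mathrm{postop}} - \vec{A}_{\mathrm{target}},
\]

with the two components read as the meridional and torsional powers relative to the reference meridian.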
Fusion of laser and image sensory data for 3-D modeling of the free navigation space
NASA Technical Reports Server (NTRS)
Mass, M.; Moghaddamzadeh, A.; Bourbakis, N.
1994-01-01
A fusion technique which combines two different types of sensory data for 3-D modeling of a navigation space is presented. The sensory data is generated by a vision camera and a laser scanner. The problem of the different resolutions of these sensory data was solved by reducing the image resolution, fusing the different data, and using a fuzzy image segmentation technique.
Grossman, David C; Curry, Susan J; Owens, Douglas K; Barry, Michael J; Davidson, Karina W; Doubeni, Chyke A; Epling, John W; Kemper, Alex R; Krist, Alex H; Kurth, Ann E; Landefeld, C Seth; Mangione, Carol M; Phipps, Maureen G; Silverstein, Michael; Simon, Melissa A; Tseng, Chien-Wen
2017-09-05
One of the most important causes of vision abnormalities in children is amblyopia (also known as "lazy eye"). Amblyopia is an alteration in the visual neural pathway in a child's developing brain that can lead to permanent vision loss in the affected eye. Among children younger than 6 years, 1% to 6% have amblyopia or its risk factors (strabismus, anisometropia, or both). Early identification of vision abnormalities could prevent the development of amblyopia. Studies show that screening rates among children vary by race/ethnicity and family income. Data based on parent reports from 2009-2010 indicated identical screening rates among black non-Hispanic children and white non-Hispanic children (80.7%); however, Hispanic children were less likely than non-Hispanic children to report vision screening (69.8%). Children whose families earned 200% or more above the federal poverty level were more likely to report vision screening than families with lower incomes. To update the 2011 US Preventive Services Task Force (USPSTF) recommendation on screening for amblyopia and its risk factors in children. The USPSTF reviewed the evidence on the accuracy of vision screening tests and the benefits and harms of vision screening and treatment. Surgical interventions were considered to be out of scope for this review. Treatment of amblyopia is associated with moderate improvements in visual acuity in children aged 3 to 5 years, which are likely to result in permanent improvements in vision throughout life. The USPSTF concluded that the benefits are moderate because untreated amblyopia results in permanent, uncorrectable vision loss, and the benefits of screening and treatment potentially can be experienced over a child's lifetime. The USPSTF found adequate evidence to bound the potential harms of treatment (ie, higher false-positive rates in low-prevalence populations) as small. Therefore, the USPSTF concluded with moderate certainty that the overall net benefit is moderate for children aged 3 to 5 years. The USPSTF recommends vision screening at least once in all children aged 3 to 5 years to detect amblyopia or its risk factors. (B recommendation) The USPSTF concludes that the current evidence is insufficient to assess the balance of benefits and harms of vision screening in children younger than 3 years. (I statement).
3D Photo Mosaicing of Tagiri Shallow Vent Field by an Autonomous Underwater Vehicle
NASA Astrophysics Data System (ADS)
Maki, Toshihiro; Kondo, Hayato; Ura, Tamaki; Sakamaki, Takashi; Mizushima, Hayato; Yanagisawa, Masao
Although underwater visual observation is an ideal method for detailed survey of seafloors, it is currently a costly process that requires the use of Remotely Operated Vehicles (ROVs) or Human Occupied Vehicles (HOVs), and can cover only a limited area. This paper proposes an innovative method to navigate an autonomous underwater vehicle (AUV) to create both 2D and 3D photo mosaics of seafloors with high positioning accuracy without using any vision-based matching. Using a profiling sonar and a SLAM (Simultaneous Localization and Mapping) technique, the vehicle finds vertical pole-like acoustic reflectors to use as positioning landmarks. These reflectors can be either artificial or natural objects, and so the method can be applied to shallow vent fields where conventional acoustic positioning is difficult, since bubble plumes can be used as landmarks as well as artificial reflectors. Path-planning is performed in real time based on the positions and types of landmarks so as to navigate safely and stably using landmarks of different types (artificial reflector or bubble plume) found at arbitrary times and locations. The terrain tracker switches the control reference between depth and altitude above the seafloor based on a local map of hazardous areas created in real time using onboard perceptual sensors, in order to follow rugged terrain at an altitude of 1 to 2 meters, a range ideal for visual observation. The method was implemented in the AUV Tri-Dog 1 and experiments were carried out at the Tagiri vent field, Kagoshima Bay in Japan. The AUV succeeded in fully autonomous observation for more than 160 minutes to create a photo mosaic with an area larger than 600 square meters, which revealed the spatial distribution of detailed features such as tube-worm colonies, bubble plumes and bacteria mats. A fine bathymetry of the same area was also created using a light-section ranging system mounted on the vehicle. Finally, a 3D representation of the environment was created by merging the visual and bathymetry data.
2007-07-01
[Only fragments of this report survive extraction. Recoverable contents: references to the System Analysis and Studies (SAS), Systems Concepts and Integration (SCI), and Sensors and Electronics Technology (SET) panels; contents entries on daylight readability, night-time readability, NVIS radiance, human factors analysis, and flight tests; and a body fragment on shadowing, noting that moonlight creates shadows during night-time just as sunlight does during the day.]
The Impact of a Sports Vision Training Program in Youth Field Hockey Players
Schwab, Sebastian; Memmert, Daniel
2012-01-01
The aim of this study was to investigate whether a sports vision training program improves the visual performance of youth male field hockey players, ages 12 to 16 years, after a six-week intervention compared to a control group with no specific sports vision training. The choice reaction time task at the D2 board (Learning Task I), the functional field of view task (Learning Task II) and the multiple object tracking (MOT) task (Transfer Task) were assessed before and after the intervention and again six weeks after the second test. Analyses showed significant differences between the two groups for the choice reaction time task at the D2 board and the functional field of view task, with significant improvements for the intervention group and none for the control group. For the transfer task, we could not find statistically significant improvements for either group. The results of this study are discussed in terms of theoretical and practical implications. Key points: Perceptual training with youth field hockey players. Can a sports vision training program improve the visual performance of youth male field hockey players, ages 12 to 16 years, after a six-week intervention compared to a control group with no specific sports vision training? The intervention was performed in the “VisuLab” as DynamicEye® SportsVision Training at the German Sport University Cologne. We ran a series of 3 two-factor univariate analyses of variance (ANOVA) with repeated measures on both within-subject independent variables (group; measuring point) to examine the effects on central perception, peripheral perception and choice reaction time. The present study shows an improvement of certain visual abilities with the help of the sports vision training program. PMID:24150071
Paudel, Prakash; Ramson, Prasidh; Naduvilath, Thomas; Wilson, David; Phuong, Ha Thanh; Ho, Suit M; Giap, Nguyen V
2014-01-01
Background: To assess the prevalence of vision impairment and refractive error in school children 12–15 years of age in Ba Ria – Vung Tau province, Vietnam. Design: Prospective, cross-sectional study. Participants: 2238 secondary school children. Methods: Subjects were selected based on stratified multistage cluster sampling of 13 secondary schools from urban, rural and semi-urban areas. The examination included visual acuity measurements, ocular motility evaluation, cycloplegic autorefraction, and examination of the external eye, anterior segment, media and fundus. Main Outcome Measures: Visual acuity and principal cause of vision impairment. Results: The prevalence of uncorrected and presenting visual acuity ≤6/12 in the better eye were 19.4% (95% confidence interval, 12.5–26.3) and 12.2% (95% confidence interval, 8.8–15.6), respectively. Refractive error was the cause of vision impairment in 92.7%, amblyopia in 2.2%, cataract in 0.7%, retinal disorders in 0.4%, other causes in 1.5% and unexplained causes in the remaining 2.6%. The prevalence of vision impairment due to myopia in either eye (–0.50 diopter or greater) was 20.4% (95% confidence interval, 12.8–28.0), hyperopia (≥2.00 D) was 0.4% (95% confidence interval, 0.0–0.7) and emmetropia with astigmatism (≥0.75 D) was 0.7% (95% confidence interval, 0.2–1.2). Vision impairment due to myopia was associated with higher school grade and increased time spent reading and working on a computer. Conclusions: Uncorrected refractive error, particularly myopia, among secondary school children in Vietnam is a major public health problem. School-based eye health initiatives such as refractive error screening are warranted to reduce vision impairment. PMID:24299145
Usta, Taner A; Ozkaynak, Aysel; Kovalak, Ebru; Ergul, Erdinc; Naki, M Murat; Kaya, Erdal
2015-08-01
Two-dimensional (2D) view is known to cause practical difficulties for surgeons in conventional laparoscopy. Our goal was to evaluate whether the new-generation Three-Dimensional Laparoscopic Vision System (3D LVS) provides greater benefit in terms of execution time and error number during the performance of surgical tasks. This study tests the hypothesis that the use of the new-generation 3D LVS can significantly improve technical ability on complex laparoscopic tasks in an experimental model. Twenty-four participants (8 experienced, 8 minimally experienced, and 8 inexperienced) were evaluated on 10 different tasks in terms of total execution time and error number. A 4-point Likert scale was used for subjective assessment of the two imaging modalities. All tasks were completed by all participants. A statistically significant difference between the 3D and 2D systems was found for the tasks of bead transfer and drop, suturing, and pick-and-place in the inexperienced group; for the task of passing through two circles with the needle in the minimally experienced group; and for the tasks of bead transfer and drop, suturing, and passing through two circles with the needle in the experienced group. Three-dimensional imaging was preferred over 2D in 6 of the 10 subjective criteria questions on the 4-point Likert scale. The majority of the tasks were completed in a shorter time using 3D LVS compared to 2D LVS. The subjective Likert-scale ratings from each group also demonstrated a clear preference for 3D LVS. The new 3D LVS has the potential to improve the learning curve and reduce the operating time and error rate of laparoscopic surgeons. Our results suggest that the new-generation 3D HD LVS will be helpful for surgeons in laparoscopy (Clinical Trial ID: NCT01799577, Protocol ID: BEHGynobs-4).
Enhanced computer vision with Microsoft Kinect sensor: a review.
Han, Jungong; Shao, Ling; Xu, Dong; Shotton, Jamie
2013-10-01
With the invention of the low-cost Microsoft Kinect sensor, high-resolution depth and visual (RGB) sensing has become available for widespread use. The complementary nature of the depth and visual information provided by the Kinect sensor opens up new opportunities to solve fundamental problems in computer vision. This paper presents a comprehensive review of recent Kinect-based computer vision algorithms and applications. The reviewed approaches are classified according to the type of vision problems that can be addressed or enhanced by means of the Kinect sensor. The covered topics include preprocessing, object tracking and recognition, human activity analysis, hand gesture analysis, and indoor 3-D mapping. For each category of methods, we outline their main algorithmic contributions and summarize their advantages/differences compared to their RGB counterparts. Finally, we give an overview of the challenges in this field and future research trends. This paper is expected to serve as a tutorial and source of references for Kinect-based computer vision researchers.
Successful treatment of syphilitic uveitis in HIV-positive patients.
Nurfahzura, Mohd-Jamil; Hanizasurana, Hashim; Zunaina, Embong; Adil, Hussein
2013-01-01
We report the successful treatment of syphilitic uveitis in a case series of three human immunodeficiency virus (HIV)-positive patients at Malaysia's Selayang Hospital eye clinic. All three patients with syphilitic uveitis were male, aged 23 to 35 years, with a history of high-risk behaviors. Two patients presented with blurring of vision and only one presented with floaters in the affected eye. Ocular examination revealed intermediate uveitis (cases 1 and 3) and panuveitis (case 2). Each patient showed a high Venereal Disease Research Laboratory (VDRL) titer at presentation, and each was also newly diagnosed as HIV positive with variable CD4 counts. All three patients responded well to a neurosyphilis regimen of intravenous penicillin G. At 3 months posttreatment, there was a reduction in VDRL titer with improvement of vision in the affected eye. Syphilis needs to be ruled out in all cases of uveitis. All syphilitic uveitis cases should have HIV screening and vice versa, as syphilis is one of the most common infectious diseases associated with HIV-positive patients. Early detection and treatment are important for a good visual outcome.
Ganesh, Sri; Brar, Sheetal; Patel, Utsav
2018-06-01
To compare the objective and subjective quality of vision after femtosecond laser-assisted small incision lenticule extraction (SMILE) and photorefractive keratectomy (PRK) for low myopia. One hundred and twenty eyes of 60 patients (34 females, 26 males) undergoing bilateral correction of low myopia (≤-4 D SE) with either ReLEx SMILE or PRK were included. Visual acuity, contrast sensitivity and higher-order aberrations were recorded preoperatively and compared postoperatively. A quality-of-vision questionnaire was scored and analyzed 3 months postoperatively. At 3 months, the SMILE group had significantly better uncorrected and corrected distance visual acuity (CDVA) compared to the PRK group (p = 0.01). Postoperative spherical equivalent (SE) was comparable in both groups (SMILE = -0.15 ± 0.19 D, PRK = -0.14 ± 0.23 D, p = 0.72). However, SE predictability was better in the SMILE group, with 97% of eyes within ±0.50 D compared to 93% of eyes in the PRK group. Total higher-order aberrations (HOAs) were significantly higher in the PRK group compared to the SMILE group (p = 0.022). The SMILE group demonstrated slightly better contrast sensitivity, which was significant at a spatial frequency of 12 cpd (p = 0.03). Four eyes in the PRK group lost one line of CDVA due to mild haze. Both SMILE and PRK were effective procedures for correction of low myopia. However, SMILE offered superior quality of vision and patient satisfaction due to better postoperative comfort and lower induction of aberrations at 3 months.
Experimental results in autonomous landing approaches by dynamic machine vision
NASA Astrophysics Data System (ADS)
Dickmanns, Ernst D.; Werner, Stefan; Kraus, S.; Schell, R.
1994-07-01
The 4-D approach to dynamic machine vision, exploiting full spatio-temporal models of the process to be controlled, has been applied to onboard autonomous landing approaches of aircraft. Aside from image sequence processing, for which it was developed initially, it is also used for data fusion from a range of sensors. By prediction error feedback, an internal representation of the aircraft state relative to the runway in 3-D space and time is servo-maintained in the interpretation process, from which the required control actions are derived. The validity and efficiency of the approach have been proven both in hardware-in-the-loop simulations and in flight experiments with a twin turboprop aircraft Do128 under perturbations from cross winds and wind gusts. The software package has been ported to 'C' and onto a new transputer image processing platform; the system has been expanded for bifocal vision with two cameras of different focal length mounted fixed relative to each other on a two-axis platform for viewing direction control.
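The prediction error feedback described here is recursive state estimation in the spirit of an extended Kalman filter. In generic form (my summary, not the authors' exact equations):

\[
\hat{x}_{k+1|k} = f(\hat{x}_{k|k}, u_k), \qquad
\hat{x}_{k+1|k+1} = \hat{x}_{k+1|k} + K_{k+1}\,\bigl(z_{k+1} - h(\hat{x}_{k+1|k})\bigr),
\]

where $f$ propagates the aircraft state relative to the runway in 3-D space and time, $h$ predicts the measured image features, and the innovation $z - h(\hat{x})$ (the prediction error) corrects the internal representation.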
Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi
1989-01-01
Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues in parallel architectures and parallel algorithms for integrated vision systems are addressed.
Ideas for Teaching Vision and Visioning
ERIC Educational Resources Information Center
Quijada, Maria Alejandra
2017-01-01
In teaching leadership, a key element to include should be a discussion about vision: what it is, how to communicate it, and how to ensure that it is effective and shared. This article describes a series of exercises that rely on videos to illustrate different aspects of vision and visioning, both in the positive and in the negative. The article…
Colour helps to solve the binocular matching problem
den Ouden, HEM; van Ee, R; de Haan, EHF
2005-01-01
The spatial differences between the two retinal images, called binocular disparities, can be used to recover the three-dimensional (3D) aspects of a scene. The computation of disparity depends upon the correct identification of corresponding features in the two images. Understanding what image features are used by the brain to solve this binocular matching problem is an important issue in research on stereoscopic vision. The role of colour in binocular vision is controversial and it has been argued that colour is ineffective in achieving binocular vision. In the current experiment subjects were required to indicate the amount of perceived depth. The stimulus consisted of an array of fronto-parallel bars uniformly distributed in a constant sized volume. We studied the perceived depth in those 3D stimuli by manipulating both colour (monochrome, trichrome) and luminance (congruent, incongruent). Our results demonstrate that the amount of perceived depth was influenced by colour, indicating that the visual system uses colour to achieve binocular matching. Physiological data have revealed cortical cells in macaque V2 that are tuned both to binocular disparity and to colour. We suggest that one of the functional roles of these cells may be to help solve the binocular matching problem. PMID:15975983
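For context, a textbook relation not given in the abstract: in a rectified stereo pair with focal length $f$ and baseline $B$, a correctly matched feature with disparity $d$ lies at depth

\[
Z = \frac{f\,B}{d},
\]

so errors in the matching step translate directly into errors in the recovered 3D structure.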
Impairments of colour vision induced by organic solvents: a meta-analysis study.
Paramei, Galina V; Meyer-Baron, Monika; Seeber, Andreas
2004-09-01
The impairment of colour discrimination induced by occupational exposure to toluene, styrene and mixtures of organic solvents is reviewed and analysed using a meta-analytical approach. Thirty-nine studies were surveyed, covering a wide range of exposure conditions. Those studies using the Lanthony Panel D-15 desaturated test (D-15d) were further considered. Of these, 15 samples provided the data on colour discrimination ability (Colour Confusion Index, CCI) and exposure levels required for the meta-analysis. In accordance with previously reported higher CCI values for the exposed groups, the computations yielded positive effect sizes for 13 of the 15 samples, indicating that in the great majority of the studies the exposed groups showed inferior colour discrimination. However, the meta-analysis showed great variation in effect sizes across the studies. Possible reasons for inconsistency among the reported findings are discussed. These pertain to exposure-related parameters, as well as to confounders such as conditions of test administration and characteristics of subject samples. These factors vary considerably among the studies and might have greatly contributed to the divergence in measured colour vision capacity, thereby obscuring consistent effects of organic solvents on colour discrimination.
Autonomous Robotic Inspection in Tunnels
NASA Astrophysics Data System (ADS)
Protopapadakis, E.; Stentoumis, C.; Doulamis, N.; Doulamis, A.; Loupos, K.; Makantasis, K.; Kopsiaftis, G.; Amditis, A.
2016-06-01
In this paper, an automatic robotic inspector for tunnel assessment is presented. The proposed platform is able to autonomously navigate within civil infrastructures, grab stereo images and process/analyse them in order to identify defect types. First, cracks are detected via deep learning approaches. Then, a detailed 3D model of the cracked area is created utilizing photogrammetric methods. Finally, a laser profiling of the tunnel's lining is performed for a narrow region close to the detected crack, allowing potential deformations to be deduced. The robotic platform consists of an autonomous mobile vehicle and a crane arm, guided by the computer-vision-based crack detector, carrying ultrasound sensors, the stereo cameras and the laser scanner. Visual inspection is based on convolutional neural networks, which support the creation of high-level discriminative features for complex non-linear pattern classification. Real-time 3D information is then accurately calculated, and the crack position and orientation are passed to the robotic platform. The entire system has been evaluated in railway and road tunnels, i.e., in the Egnatia Highway and London Underground infrastructure.
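As an illustration of the kind of convolutional classifier the visual inspection step alludes to (architecture, patch size and names are illustrative assumptions, not the authors' network), a minimal patch-level crack/no-crack model might look like this:

```python
import torch
import torch.nn as nn

class CrackPatchNet(nn.Module):
    """Tiny CNN labelling 64x64 RGB patches as crack / no-crack."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# One forward pass on a batch of eight 64x64 RGB patches.
logits = CrackPatchNet()(torch.randn(8, 3, 64, 64))
print(logits.shape)  # torch.Size([8, 2])
```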
Real-Time Motion Tracking for Indoor Moving Sphere Objects with a LiDAR Sensor.
Huang, Lvwen; Chen, Siyuan; Zhang, Jianfeng; Cheng, Bang; Liu, Mingqing
2017-08-23
Object tracking is a crucial research subfield in computer vision, with wide applications in navigation, robotics, military systems and other areas. In this paper, real-time visualization of 3D point cloud data from the VLP-16 3D Light Detection and Ranging (LiDAR) sensor is achieved. Building on preprocessing, fast ground segmentation, Euclidean clustering segmentation of outliers, Viewpoint Feature Histogram (VFH) feature extraction, object model construction, and search-and-match against a moving spherical target, a Kalman filter and an adaptive particle filter are used to estimate the position of the moving spherical target in real time. The experimental results show that the Kalman filter offers high efficiency, while the adaptive particle filter offers high robustness and high precision, when tested and validated on three kinds of scenes under conditions of partial target occlusion and interference, different moving speeds and different trajectories. The research can be applied to fruit identification and tracking in natural environments, robot navigation and control, and other fields.
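To make the estimation step concrete, here is a minimal constant-velocity Kalman filter for the sphere's 3D centroid (frame period, noise covariances and structure are illustrative assumptions, not the paper's tuning):

```python
import numpy as np

dt = 0.1                                      # LiDAR frame period (s), assumed
F = np.eye(6); F[:3, 3:] = dt * np.eye(3)     # constant-velocity state transition
H = np.hstack([np.eye(3), np.zeros((3, 3))])  # only position is measured
Q = 1e-3 * np.eye(6)                          # process noise (assumed)
R = 1e-2 * np.eye(3)                          # measurement noise (assumed)

x = np.zeros(6)                               # state: [px, py, pz, vx, vy, vz]
P = np.eye(6)

def kf_step(x, P, z):
    """One predict/update cycle given a measured sphere centroid z."""
    x, P = F @ x, F @ P @ F.T + Q             # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ (z - H @ x)                   # update with the innovation
    P = (np.eye(6) - K @ H) @ P
    return x, P

x, P = kf_step(x, P, np.array([1.0, 0.5, 0.2]))
print(x[:3])  # filtered sphere position
```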
Real-Time Vision-Based Stiffness Mapping
Althoefer, Kaspar; Asama, Hajime
2018-01-01
This paper presents new findings concerning a hand-held stiffness probe for the medical diagnosis of abnormalities during palpation of soft tissue. Palpation is recognized by the medical community as an essential and low-cost method to detect and diagnose disease in soft tissue. However, differences are often subtle, and clinicians need to train for many years before they can conduct a reliable diagnosis. The probe presented here fills this gap, providing a means to easily obtain stiffness values of soft tissue during a palpation procedure. Our stiffness sensor is equipped with a multi-degree-of-freedom (DoF) Aurora magnetic tracker, allowing us to track and record the 3D position of the probe whilst examining a tissue area and to generate a 3D stiffness map in real time. The stiffness probe was integrated in a robotic arm and tested in an artificial environment representing a good model of soft-tissue organs; the results show that the sensor can accurately measure and map the stiffness of a silicon phantom embedded with areas of varying stiffness. PMID:29701704
A vision detailing enhancements made to the Clean Water Act 303(d) Program informed by the experience gained over the past two decades in assessing and reporting on water quality and in developing approximately 65,000 TMDLs.
Visual and linguistic determinants of the eyes' initial fixation position in reading development.
Ducrot, Stéphanie; Pynte, Joël; Ghio, Alain; Lété, Bernard
2013-03-01
Two eye-movement experiments with one hundred and seven first- through fifth-grade children were conducted to examine the effects of visuomotor and linguistic factors on the recognition of words and pseudowords presented in central vision (using a variable-viewing-position technique) and in parafoveal vision (shifted to the left or right of a central fixation point). For all groups of children, we found a strong effect of stimulus location, in both central and parafoveal vision. This effect corresponds to the children's apparent tendency, for peripherally located targets, to reach a position located halfway between the middle and the left edge of the stimulus (preferred viewing location, PVL), whether saccading to the right or left. For centrally presented targets, refixation probability and lexical-decision time were the lowest near the word's center, suggesting an optimal viewing position (OVP). The viewing-position effects found here were modulated (1) by print exposure, both in central and parafoveal vision; and (2) by the intrinsic qualities of the stimulus (lexicality and word frequency) for targets in central vision but not for parafoveally presented targets. Copyright © 2013 Elsevier B.V. All rights reserved.
System and method for controlling a vision guided robot assembly
Lin, Yhu-Tin; Daro, Timothy; Abell, Jeffrey A.; Turner, III, Raymond D.; Casoli, Daniel J.
2017-03-07
A method includes the following steps: actuating a robotic arm to perform an action at a start position; moving the robotic arm from the start position toward a first position; determining, via a vision process method, whether a first part at the first position will be ready to be subjected to a first action by the robotic arm once the robotic arm reaches the first position; commencing execution of the vision process method to determine the position deviation of a second part from a second position and the readiness of the second part to be subjected to a second action by the robotic arm once the robotic arm reaches the second position; and performing the first action on the first part using the robotic arm, with the position deviation of the first part from the first position predetermined by the vision process method.
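A hedged sketch of the look-ahead pattern the claim describes: while the arm travels to a station, the vision process already checks that part's readiness and pose deviation, and pre-checks the next one (class and function names are illustrative, not from the patent):

```python
from dataclasses import dataclass

@dataclass
class Station:
    name: str
    nominal_pose: tuple

class Arm:
    def move_toward(self, pose): print("moving toward", pose)
    def act_at(self, pose, offset): print("acting at", pose, "offset", offset)

def vision_check(station):
    """Placeholder vision process: returns (ready?, measured pose deviation)."""
    return True, (0.001, -0.002, 0.0)   # assumed output format

def run_cycle(arm, stations):
    for i, station in enumerate(stations):
        arm.move_toward(station.nominal_pose)      # motion begins
        ready, dev = vision_check(station)         # vision overlaps with motion
        if i + 1 < len(stations):
            vision_check(stations[i + 1])          # pre-check the next part
        if ready:
            arm.act_at(station.nominal_pose, offset=dev)

run_cycle(Arm(), [Station("A", (0, 0, 0)), Station("B", (1, 0, 0))])
```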
In-Space Inspection Technologies Vision
NASA Technical Reports Server (NTRS)
Studor, George
2012-01-01
Purpose: Assess in-space NDE technologies and needs for current and future spacecraft. Discover and build on needs, R&D, and NDE products in other industries and agencies. Stimulate partnerships inside and outside NASA to move technologies forward cooperatively. Facilitate group discussion on challenges and opportunities of mutual benefit. Focus Areas: miniaturized 3D penetrating imagers; controllable snake-arm inspection systems; miniature free-flying micro-satellite inspectors.
Huang, David; Tang, Maolong; Wang, Li; Zhang, Xinbo; Armour, Rebecca L.; Gattey, Devin M.; Lombardi, Lorinna H.; Koch, Douglas D.
2013-01-01
Purpose: To use optical coherence tomography (OCT) to measure corneal power and improve the selection of intraocular lens (IOL) power in cataract surgeries after laser vision correction. Methods: Patients with previous myopic laser vision corrections were enrolled in this prospective study from two eye centers. Corneal thickness and power were measured by Fourier-domain OCT. Axial length, anterior chamber depth, and automated keratometry were measured by a partial coherence interferometer. An OCT-based IOL formula was developed. The mean absolute error of the OCT-based formula in predicting postoperative refraction was compared to two regression-based IOL formulae for eyes with previous laser vision correction. Results: Forty-six eyes of 46 patients all had uncomplicated cataract surgery with monofocal IOL implantation. The mean arithmetic prediction error of postoperative refraction was 0.05 ± 0.65 diopter (D) for the OCT formula, 0.14 ± 0.83 D for the Haigis-L formula, and 0.24 ± 0.82 D for the no-history Shammas-PL formula. The mean absolute error was 0.50 D for OCT compared to a mean absolute error of 0.67 D for Haigis-L and 0.67 D for Shammas-PL. The adjusted mean absolute error (average prediction error removed) was 0.49 D for OCT, 0.65 D for Haigis-L (P=.031), and 0.62 D for Shammas-PL (P=.044). For OCT, 61% of the eyes were within 0.5 D of prediction error, whereas 46% were within 0.5 D for both Haigis-L and Shammas-PL (P=.034). Conclusions: The predictive accuracy of OCT-based IOL power calculation was better than Haigis-L and Shammas-PL formulas in eyes after laser vision correction. PMID:24167323
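The "adjusted mean absolute error (average prediction error removed)" presumably means re-centering each formula's prediction errors before averaging their magnitudes. In symbols (my reading of the abstract, not the authors' stated definition):

\[
\mathrm{PE}_i = \text{achieved refraction} - \text{predicted refraction}, \qquad
\mathrm{MAE}_{\mathrm{adj}} = \frac{1}{n}\sum_{i=1}^{n}\bigl|\mathrm{PE}_i - \overline{\mathrm{PE}}\bigr|.
\]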
Model identification and vision-based H∞ position control of 6-DoF cable-driven parallel robots
NASA Astrophysics Data System (ADS)
Chellal, R.; Cuvillon, L.; Laroche, E.
2017-04-01
This paper presents methodologies for the identification and control of 6-degrees-of-freedom (6-DoF) cable-driven parallel robots (CDPRs). First, a two-step identification methodology is proposed to accurately estimate the kinematic parameters independently of, and prior to, the dynamic parameters of a physics-based model of CDPRs. Second, an original control scheme is developed, including a vision-based position controller tuned with the H∞ methodology and a cable tension distribution algorithm. The position is controlled in the operational space, making use of the end-effector pose measured by a motion-tracking system. A four-block H∞ design scheme with adjusted weighting filters ensures good trajectory tracking and disturbance rejection properties for the CDPR system, which is a nonlinear coupled MIMO system with constrained states. The tension management algorithm generates control signals that maintain the cables under feasible tensions. The paper provides an extensive review of the available methods and presents an extension of one of them. The presented methodologies are evaluated in simulation and experimentally on a redundant 6-DoF INCA 6D CDPR with eight cables, equipped with a motion-tracking system.
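A common way to state the cable tension distribution problem for a redundant CDPR (a generic formulation; the paper's exact algorithm may differ): given the wrench matrix $W(q)$ and the wrench $w$ commanded by the position controller, choose tensions

\[
\min_{t \in \mathbb{R}^{8}} \ \lVert t - t_{\mathrm{ref}} \rVert^{2}
\quad \text{s.t.} \quad W(q)\,t = w, \qquad t_{\min} \le t \le t_{\max},
\]

which keeps all eight cables under feasible tension while realizing the commanded wrench.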
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lowry, Thomas Stephen; Finger, John T.; Carrigan, Charles R.
This report documents the key findings from the Reservoir Maintenance and Development (RM&D) Task of the U.S. Department of Energy's (DOE) Geothermal Technologies Office (GTO) Geothermal Vision Study (GeoVision Study). The GeoVision Study had the objective of conducting analyses of future geothermal growth based on sets of current and future geothermal technology developments. The RM&D Task is one of seven tasks within the GeoVision Study, the others being Exploration and Confirmation, Potential to Penetration, Institutional Market Barriers, Environmental and Social Impacts, Thermal Applications, and Hybrid Systems. The full set of findings and the details of the GeoVision Study can be found in the final GeoVision Study report on the DOE-GTO website. As applied here, RM&D refers to the activities associated with developing, exploiting, and maintaining a known geothermal resource. It assumes that the site has already been vetted and that the resource has been evaluated to be of sufficient quality to move towards full-scale development. It also assumes that the resource is to be developed for power generation, as opposed to low-temperature or direct-use applications. This document presents the key factors influencing RM&D from both a technological and an operational standpoint and provides a baseline of its current state. It also looks forward to describe areas of research and development that must be pursued if geothermal energy development is to reach its full potential.
Kaur, Gurvinder; Koshy, Jacob; Thomas, Satish; Kapoor, Harpreet; Zachariah, Jiju George; Bedi, Sahiba
2016-04-01
Early detection and treatment of vision problems in children is imperative to meet the challenges of childhood blindness. Considering the problems of inequitable distribution of trained manpower and the limited access of the majority of our population to quality eye care services, innovative community-based strategies like 'teacher training in vision screening' need to be developed for effective utilization of the available human resources. To evaluate the effectiveness of introducing teachers as first-level vision screeners, teacher training programs were conducted for school teachers to educate them about childhood ocular disorders and the importance of their early detection. Teachers from government and semi-government schools located in Ludhiana were given training in vision screening. These teachers then conducted vision screening of children in their schools. Subsequently, an ophthalmology team visited these schools for re-evaluation of children identified with low vision. Refraction was performed for all children identified with refractive errors, and spectacles were prescribed. Children requiring further evaluation were referred to the base hospital. The project was done in two phases. True positives, false positives, true negatives and false negatives were calculated for evaluation. In phase 1, teachers from 166 schools underwent training in vision screening. The teachers screened 30,205 children and reported eye problems in 4523 (14.97%) children. Subsequently, the ophthalmology team examined 4150 children and confirmed eye problems in 2137 children. Thus, the teachers were able to correctly identify eye problems (true positives) in 47.25% of children. Also, only 13.69% of children had to be examined by the ophthalmology team, reducing its workload. Similarly, in phase 2, 46.22% of children were correctly identified by the teachers as having eye problems (true positives). By random sampling, 95.65% of children were correctly identified as normal (true negatives) by the teachers. Considering the high true-negative rates, the reasonably good true-positive rates and the wider coverage provided by the program, vision screening in schools by teachers is an effective method of identifying children with low vision. This strategy is also valuable in reducing the workload of the eye care staff.
Creating a vision for your medical call center.
Barr, J L; Laufenberg, S; Sieckman, B L
1998-01-01
Medical call center (MCC) technologies and applications that can have a positive impact on managed care delivery are almost limitless. As you determine your vision, be sure to have in mind the following questions: (1) Do you simply want an efficient front end for receiving calls? (2) Do you want to offer triage services? (3) Is your organization ready for a fully functional "electronic physician's office?" Understand your organization's strategy. Where are you going, not only today but five years from now? That information is essential to determine your vision. Once established, your vision will help determine what you need and whether you should build or outsource. Vendors will assist in cost/benefit analysis of their equipment, but do not lose sight of internal factors such as "prior inclination" costs in the case of a nurse triage program. The technology is available to take your vision to its outer reaches. With the projected increase in utilization of call center services, don't let your organization be left behind!
NASA Astrophysics Data System (ADS)
Perez-Bayas, Luis
2001-06-01
In stereoscopic perception of a three-dimensional world, binocular disparity might be thought of as the most important cue to 3D depth perception. Nevertheless, in reality there are many other factors involved before the 'final' conscious and subconscious stereoscopic perception, such as luminance, contrast, orientation, color, motion, and figure-ground extraction (the pop-out phenomenon). In addition, more complex perceptual factors exist, such as attention and its duration (an equivalent of 'brain zooming') in relation to physiological central vision as opposed to attention to peripheral vision, and the brain's 'top-down' information in relation to psychological factors like memory of previous experiences and present emotions. The brain's internal mapping of a pure perceptual world might be different from the internal mapping of a visual-motor space, which represents an 'action-directed perceptual world.' In addition, psychological factors (emotions and fine adjustments) are much more involved in a stereoscopic world than in a flat 2D world, and in a world using peripheral vision (like VR, which uses curved perspective representation and displays, as natural vision does) as opposed to one presenting only central vision (bi-macular stereoscopic vision), as in the majority of typical stereoscopic displays. Here is presented the most recent and precise information available about the psycho-neuro-physiological factors involved in the perception of a stereoscopic three-dimensional world, with an attempt to give practical, functional, and pertinent guidelines for building more 'natural' stereoscopic displays.
NASA Astrophysics Data System (ADS)
Bouma, Henri; van der Mark, Wannes; Eendebak, Pieter T.; Landsmeer, Sander H.; van Eekeren, Adam W. M.; ter Haar, Frank B.; Wieringa, F. Pieter; van Basten, Jean-Paul
2012-06-01
Compared to open surgery, minimally invasive surgery offers reduced trauma and faster recovery. However, the lack of direct view limits space perception. Stereo-endoscopy improves depth perception, but is still restricted to the direct endoscopic field of view. We describe a novel technology that reconstructs 3D panoramas from endoscopic video streams, providing a much wider cumulative overview. The method is compatible with any endoscope. We demonstrate that it is possible to generate photorealistic 3D environments from mono- and stereoscopic endoscopy. The resulting 3D reconstructions can be directly applied in simulators and e-learning. Extended to real-time processing, the method looks promising for telesurgery or other remote vision-guided tasks.
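Offline mosaicing of recorded endoscope frames can be prototyped with OpenCV's high-level stitcher. This sketch covers only 2D panorama stitching, not the paper's 3D reconstruction, and the file names are placeholders:

```python
import cv2

# Load a few overlapping frames grabbed from the endoscopic video stream.
frames = [cv2.imread(f"frame_{i:03d}.png") for i in range(5)]
frames = [f for f in frames if f is not None]   # drop missing files

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, pano = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.png", pano)
else:
    print("stitching failed, status:", status)
```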
Optical performance of multifocal soft contact lenses via a single-pass method.
Bakaraju, Ravi C; Ehrmann, Klaus; Falk, Darrin; Ho, Arthur; Papas, Eric
2012-08-01
A physical model eye capable of carrying soft contact lenses (CLs) was used as a platform to evaluate optical performance of several commercial multifocals (MFCLs) with high- and low-add powers and a single-vision control. Optical performance was evaluated at three pupil sizes, six target vergences, and five CL-correcting positions using a spatially filtered monochromatic (632.8 nm) light source. The various target vergences were achieved by using negative trial lenses. A photosensor in the retinal plane recorded the image point-spread that enabled the computation of visual Strehl ratios. The centration of CLs was monitored by an additional integrated en face camera. Hydration of the correcting lens was maintained using a humidity chamber and repeated instillations of rewetting saline drops. All the MFCLs reduced performance for distance but considerably improved performance along the range of distance to near target vergences, relative to the single-vision CL. Performance was dependent on add power, design, pupil, and centration of the correcting CLs. Proclear (D) design produced good performance for intermediate vision, whereas Proclear (N) design performed well at near vision (p < 0.05). AirOptix design exhibited good performance for distance and intermediate vision. PureVision design showed improved performance across the test vergences, but only for pupils ≥4 mm in diameter. Performance of Acuvue bifocal was comparable with other MFCLs, but only for pupils >4 mm in diameter. Acuvue Oasys bifocal produced performance comparable with single-vision CL for most vergences. Direct measurement of single-pass images at the retinal plane of a physical model eye used in conjunction with various MFCLs is demonstrated. This method may have utility in evaluating the relative effectiveness of commercial and prototype designs.
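For readers unfamiliar with the metric, a plain Strehl ratio can be computed from two recorded point-spread images as sketched below; this is a simplified stand-in of ours, since the study reports visual Strehl ratios, which additionally weight the optics by a neural contrast-sensitivity function.

```python
import numpy as np

def strehl_ratio(psf_measured, psf_diffraction_limited):
    """Peak of the energy-normalized measured PSF relative to the peak of
    the energy-normalized diffraction-limited PSF (1.0 = perfect optics)."""
    m = psf_measured / psf_measured.sum()
    d = psf_diffraction_limited / psf_diffraction_limited.sum()
    return float(m.max() / d.max())
```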
Deemer, Ashley D; Massof, Robert W; Rovner, Barry W; Casten, Robin J; Piersol, Catherine V
2017-03-01
To compare the efficacy of behavioral activation (BA) plus low vision rehabilitation with an occupational therapist (OT-LVR) with supportive therapy (ST) on visual function in patients with age-related macular degeneration (AMD). Single-masked, attention-controlled, randomized clinical trial of AMD patients with subsyndromal depressive symptoms (n = 188). All subjects had two outpatient low vision rehabilitation optometry visits, then were randomized to in-home BA + OT-LVR or ST. Behavioral activation is a structured behavioral treatment aiming to increase adaptive behaviors and achieve valued goals. Supportive therapy is a nondirective psychological treatment that provides emotional support and controls for attention. Functional vision was assessed with the activity inventory (AI), in which participants rate the difficulty level of goals and corresponding tasks. Participants were assessed at baseline and 4 months. Improvements in functional vision measures were seen in both the BA + OT-LVR and ST groups at the goal level (d = 0.71 and d = 0.56, respectively). At the task level, BA + OT-LVR patients showed more improvement in reading, inside-the-home tasks, and outside-the-home tasks than ST patients. The greatest effects were seen in the BA + OT-LVR group in subjects with a visual acuity ≥20/70 (d = 0.360 reading; d = 0.500 inside the home; d = 0.468 outside the home). Based on the trends of the AI data, we suggest that BA + OT-LVR services, provided by an OT in the patient's home following conventional low vision optometry services, are more effective than conventional optometric low vision services alone for those with mild visual impairment. (ClinicalTrials.gov number: NCT00769015).
2012-01-01
Background: The increasing popularity of commercial movies showing three-dimensional (3D) computer-generated images has raised concern about image safety and possible side effects on population health. This study aims to (1) quantify the occurrence of visually induced symptoms suffered by spectators during and after viewing a commercial 3D movie and (2) assess individual and environmental factors associated with those symptoms. Methods: A cross-sectional survey was carried out using a paper-based, self-administered questionnaire covering individual and movie characteristics and selected visually induced symptoms (tired eyes, double vision, headache, dizziness, nausea, and palpitations). Symptoms were queried at three different times: during the movie, right after it, and 2 hours after it. Results: We collected 953 questionnaires. In our sample, 539 (60.4%) individuals reported one or more symptoms during the movie, 392 (43.2%) right after, and 139 (15.3%) 2 hours after the movie. The most frequently reported symptoms were tired eyes (during the movie by 34.8%, right after by 24.0%, and after 2 hours by 5.7% of individuals) and headache (during the movie by 13.7%, right after by 16.8%, and after 2 hours by 8.3% of individuals). An individual history of frequent headache was associated with tired eyes (OR = 1.34, 95%CI = 1.01-1.79), double vision (OR = 1.96; 95%CI = 1.13-3.41), and headache (OR = 2.09; 95%CI = 1.41-3.10) during the movie, and with headache after the movie (OR = 1.64; 95%CI = 1.16-2.32). Individual susceptibility to car sickness, dizziness, anxiety level, movie show time, and viewing an animated 3D movie were also associated with several other symptoms. Conclusions: The high occurrence of visually induced symptoms found in this survey suggests the need to raise public awareness of the discomfort that susceptible individuals may suffer during and after viewing 3D movies. PMID:22974235
Dave, Pujan; Villarreal, Guadalupe; Friedman, David S; Kahook, Malik Y; Ramulu, Pradeep Y
2015-12-01
To determine the accuracy of patient-physician communication regarding topical ophthalmic medication use based on bottle cap color, particularly among individuals who may have acquired color vision deficiency from glaucoma. Cross-sectional, clinical study. Patients aged ≥18 years with primary open-angle, primary angle-closure, pseudoexfoliation, or pigment dispersion glaucoma, bilateral visual acuity of ≥20/400, and no concurrent conditions that may affect color vision. A total of 100 patients provided color descriptions of 11 distinct medication bottle caps. Color descriptors were then presented to 3 physicians. Physicians matched each color descriptor to the medication they thought the descriptor was describing. Frequency of patient-physician agreement, occurring when all 3 physicians accurately matched the color descriptor to the correct medication. Multivariate regression models evaluated whether patient-physician agreement decreased with degree of better-eye visual field (VF) damage, color descriptor heterogeneity, or color vision deficiency, as determined by the Hardy-Rand-Rittler (HRR) score and Lanthony D15 color confusion index (D15 CCI). Subjects had a mean age of 69 (±11) years, with VF mean deviation of -4.7 (±6.0) and -10.9 (±8.4) decibels (dB) in the better- and worse-seeing eyes, respectively. Patients produced 102 unique color descriptors to describe the colors of the 11 bottle caps. Among individual patients, the mean number of medications demonstrating agreement was 6.1/11 (55.5%). Agreement was less than 15% for 4 medications (prednisolone acetate [generic], betaxolol HCl [Betoptic; Alcon Laboratories Inc., Fort Worth, TX], brinzolamide/brimonidine [Simbrinza; Alcon Laboratories Inc.], and latanoprost [Xalatan; Pfizer, Inc., New York, NY]). Lower HRR scores and higher D15 CCI (both indicating worse color vision) were associated with greater VF damage (P < 0.001). Extent of color vision deficiency and color descriptor heterogeneity significantly predicted agreement in multivariate models (odds of agreement = 0.90 per 1 point decrement in HRR score, P < 0.001; odds of agreement = 0.30 for medications exhibiting high heterogeneity [≥11 descriptors], P = 0.007). Physician understanding of patient medication use based solely on bottle cap color is frequently incorrect, particularly in patients with glaucoma who may have color vision deficiency. Errors based on communication using bottle cap color alone may be common and could lead to confusion and harm. Copyright © 2015 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
Bubble behavior characteristics based on virtual binocular stereo vision
NASA Astrophysics Data System (ADS)
Xue, Ting; Xu, Ling-shuang; Zhang, Shang-zhen
2018-01-01
The three-dimensional (3D) behavior characteristics of bubbles rising in gas-liquid two-phase flow are of great importance for studying bubbly flow mechanisms and guiding engineering practice. Based on the dual-perspective imaging of virtual binocular stereo vision, the 3D behavior characteristics of bubbles in gas-liquid two-phase flow are studied in detail, which effectively increases the projection information available for each bubble and yields more accurate behavior features. In this paper, the variations of bubble equivalent diameter, volume, velocity, and trajectory during the rising process are estimated, and the factors affecting bubble behavior characteristics are analyzed. It is shown that the method is real-time and valid, that the equivalent diameter of a bubble rising in stagnant water changes periodically, and that crests and troughs in the equivalent-diameter curve appear alternately. The bubble behavior characteristics, as well as the spiral amplitude, are affected by the orifice diameter and the gas volume flow.
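The equivalent diameter tracked here is, in the usual convention, the diameter of a sphere having the bubble's reconstructed volume; a small Python illustration (our formulation, not code from the paper):

```python
import numpy as np

def equivalent_diameter(volume):
    """Diameter of the sphere whose volume equals the reconstructed
    bubble volume (e.g. mm from mm^3): d = (6V/pi)^(1/3)."""
    return (6.0 * volume / np.pi) ** (1.0 / 3.0)

def rise_velocities(centroids, dt):
    """Per-frame 3D velocity vectors from reconstructed bubble centroids
    sampled every dt seconds."""
    c = np.asarray(centroids, dtype=float)
    return np.diff(c, axis=0) / dt
```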
NASA Astrophysics Data System (ADS)
Isik-Ercan, Zeynep; Kim, Beomjin; Nowak, Jeffrey
This research-in-progress hypothesizes that urban second graders can gain an early understanding of the shapes of the Sun, Moon, and Earth, how day and night occur, and how the Moon appears to change its shape, by using three-dimensional stereoscopic vision. A 3D stereoscopic vision system might be an effective way to teach subjects like astronomy that explain relationships among objects in space. Currently, Indiana state standards for science teaching do not suggest teaching these astronomical concepts explicitly before fourth grade. Yet we expect our findings to indicate that students can learn these concepts earlier in their educational lives with the implementation of such technologies. We also project that these technologies could revolutionize when these concepts are taught to children and expand the ways we think about children's cognitive capacities for understanding scientific concepts.
Congdon, Nathan G; Patel, Nita; Esteso, Paul; Chikwembani, Florence; Webber, Fiona; Msithini, Robert Bongi; Ratcliffe, Amy
2008-01-01
To evaluate different refractive cutoffs for spectacle provision with regard to their impact on visual improvement and spectacle compliance. Prospective study of visual improvement and spectacle compliance. South African school children aged 6-19 years receiving free spectacles in a programme supported by Helen Keller International. Refractive error, age, gender, urban versus rural residence, and presenting and best-corrected vision were recorded for participants. Spectacle wear was observed directly at an unannounced follow-up examination 4-11 months after initial provision of spectacles. The associations between five proposed refractive cutoff protocols and visual improvement and spectacle compliance were examined in separate multivariate models. The outcomes sought were refractive cutoffs for spectacle distribution which would effectively identify children with improved vision, and those more likely to comply with spectacle wear. Among 8520 children screened, 810 (9.5%) received spectacles, of whom 636 (79%) were aged 10-14 years, 530 (65%) were girls, 324 (40%) had vision improvement ≥3 lines, and 483 (60%) were examined 6.4 ± 1.5 (range 4.6 to 10.9) months after spectacle dispensing. Among examined children, 149 (31%) were wearing or carrying their glasses. Children meeting cutoffs of ≤-0.75 D of myopia, ≥+1.00 D of hyperopia, and ≥+0.75 D of astigmatism had significantly greater improvement in vision than children failing to meet these criteria, when adjusting for age, gender, and urban versus rural residence. None of the proposed refractive protocols discriminated between children wearing and not wearing spectacles. Presenting vision and improvement in vision were unassociated with subsequent spectacle wear, but girls (p ≤ 0.0006 for all models) were more likely to be wearing glasses than were boys. To the best of our knowledge, this is the first suggested refractive cutoff for glasses dispensing validated with respect to key programme outcomes. The lack of association between spectacle retention and either refractive error or vision may have been due to the relatively modest degree of refractive error in this African population.
Project Photofly: New 3d Modeling Online Web Service (case Studies and Assessments)
NASA Astrophysics Data System (ADS)
Abate, D.; Furini, G.; Migliori, S.; Pierattini, S.
2011-09-01
During summer 2010, Autodesk released a still-ongoing project called Project Photofly, freely downloadable from the Autodesk Labs web site until August 1, 2011. Project Photofly is a web service, based on computer-vision and photogrammetric principles and exploiting the power of cloud computing, that converts collections of photographs into 3D models. The aim of our research was to evaluate Project Photofly, through different case studies, for 3D modeling of cultural heritage monuments and objects, mostly to identify the goals and objects for which it is suitable. The automatic approach is the main subject of the analysis.
Camera calibration method of binocular stereo vision based on OpenCV
NASA Astrophysics Data System (ADS)
Zhong, Wanzhen; Dong, Xiaona
2015-10-01
Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the process, yielding higher precision and efficiency. First, the camera model in OpenCV and an algorithm for camera calibration are presented, with particular attention to the influence of camera lens radial distortion and decentering distortion. Then, a camera calibration procedure is designed to compute the camera parameters and calculate the calibration errors. A high-accuracy profile extraction algorithm and a checkerboard with 48 corners are also used in this part. Finally, the results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results can meet the requirements of robot binocular stereo vision.
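A minimal version of the described procedure, using OpenCV's Python bindings, might look as follows. The 8 x 6 pattern reproduces the paper's 48-corner board, but the square size, file paths, and refinement settings are placeholders of ours.

```python
import glob
import cv2
import numpy as np

pattern = (8, 6)  # 8 x 6 = 48 inner corners, matching the 48-corner board
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # unit squares

obj_pts, img_pts, size = [], [], None
for path in glob.glob("calib/*.png"):  # hypothetical folder of board images
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        # Sub-pixel corner refinement improves calibration accuracy.
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

# dist holds radial (k1, k2, k3) and tangential/decentering (p1, p2) terms,
# the two distortion models the paper accounts for.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
print("RMS reprojection error (px):", rms)
```

For the binocular rig, cv2.stereoCalibrate would then recover the rotation and translation between the two calibrated cameras.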
16 CFR 1203.14 - Peripheral vision test.
Code of Federal Regulations, 2010 CFR
2010-01-01
Safety Standard for Bicycle Helmets, § 1203.14 Peripheral vision test. Position the helmet on ... the helmet to set the comfort or fit padding. (Note: Peripheral vision clearance may be determined ...)
Risk factors for astigmatism in the Vision in Preschoolers Study.
Huang, Jiayan; Maguire, Maureen G; Ciner, Elise; Kulp, Marjean Taylor; Cyert, Lynn A; Quinn, Graham E; Orel-Bixler, Deborah; Moore, Bruce; Ying, Gui-Shuang
2014-05-01
To determine demographic and refractive risk factors for astigmatism in the Vision in Preschoolers Study. Three- to 5-year-old Head Start preschoolers (N = 4040) from five clinical centers underwent comprehensive eye examinations by study-certified optometrists and ophthalmologists, including monocular visual acuity testing, cover testing, and cycloplegic retinoscopy. Astigmatism was defined as the presence of greater than or equal to +1.5 diopters (D) cylinder in either eye, measured with cycloplegic refraction. The associations of risk factors with astigmatism were evaluated using the odds ratio (OR) and its 95% confidence interval (CI) from logistic regression models. Among 4040 Vision in Preschoolers Study participants overrepresenting children with vision disorders, 687 (17%) had astigmatism, and most (83.8%) had with-the-rule astigmatism. In multivariate analyses, African American (OR, 1.65; 95% CI, 1.22 to 2.24), Hispanic (OR, 2.25; 95% CI, 1.62 to 3.12), and Asian (OR, 1.76; 95% CI, 1.06 to 2.93) children were more likely to have astigmatism than non-Hispanic white children, whereas American Indian children were less likely to have astigmatism than Hispanic, African American, and Asian children (p < 0.0001). Refractive error was associated with astigmatism in a nonlinear manner, with an OR of 4.50 (95% CI, 3.00 to 6.76) for myopia (≤-1.0 D in spherical equivalent) and 1.55 (95% CI, 1.29 to 1.86) for hyperopia (≥+2.0 D) when compared with children without refractive error (>-1.0 D, <+2.0 D). There was a trend of an increasing percentage of astigmatism among older children (linear trend p = 0.06). The analysis for risk factors of with-the-rule astigmatism provided similar results. Among Head Start preschoolers, Hispanic, African American, and Asian race as well as myopic and hyperopic refractive error were associated with an increased risk of astigmatism, consistent with findings from the population-based Multi-ethnic Pediatric Eye Disease Study and the Baltimore Pediatric Eye Disease Study. American Indian children had lower risk of astigmatism.
Iida, Hiroyuki; Nakamura, Yuko; Matsumoto, Hitoshi; Kawahata, Keiko; Koga, Jinichiro; Katsumi, Osamu
2013-01-01
To compare the inhibitory effects of 4 different types of black currant anthocyanins (BCAs) on ocular elongation in 2 different chick myopia models. In the first model, diffusers were used to induce form-vision deprivation. In the second model, negative (-8 D) spherical lenses were used to create a defocused retinal image. Either the diffusers or the -8 D lenses were placed on the right eyes of 8-day-old chicks for 4 days. Ocular biometric components were measured using an A-scan ultrasound instrument on the third day after application of either the diffusers or -8 D lenses. Interocular differences (globe component dimensions of the right eyes covered with diffusers or -8 D lenses minus those of the open left eyes) were used to evaluate the effect of BCAs. The BCAs used were cyanidin-3-glucoside (C3G), cyanidin-3-rutinoside (C3R), delphinidin-3-rutinoside (D3R), and delphinidin-3-glucoside (D3G). Each anthocyanin was administered intravenously at a dose of 0.027 μmol/kg once a day for 3 days. Compared to the vehicle treatment, C3G and C3R treatments significantly reduced the differential increases (positive values of interocular differences) in ocular axial length induced by both diffusers and -8 D lenses (diffusers; C3G, C3R, and control: 0.32±0.051 mm, P<0.05; 0.25±0.034 mm, P<0.01; and 0.52±0.047 mm; -8 D lenses; C3G, C3R, and control: 0.25±0.049 mm, P<0.01; 0.17±0.049 mm, P<0.001; and 0.50±0.056 mm). In contrast, compared to the vehicle treatment, D3R treatment significantly decreased the differential increase in ocular axial length only in chicks with myopia induced by -8 D lenses (D3R and control: 0.17±0.049 mm and 0.50±0.056 mm, P<0.001). D3G did not inhibit the differential increase in ocular axial length induced by either diffusers or -8 D lenses. This study showed that the 4 tested BCAs had different effects in the 2 experimental models of myopia.
Effects of V4c-ICL Implantation on Myopic Patients' Vision-Related Daily Activities
Linghu, Shaorong; Pan, Le; Shi, Rong
2016-01-01
The new implantable Collamer lens with a central hole (V4c-ICL) is widely used to treat myopia; however, halos occur in some patients after surgery. The aim of this study was to evaluate the effect of V4c-ICL implantation on vision-related daily activities. This retrospective study included 42 patients. Uncorrected visual acuity (UCVA), best corrected visual acuity (BCVA), intraocular pressure (IOP), endothelial cell density (ECD), and vault were recorded, and vision-related daily activities were evaluated, at 3 months after the operation. The average spherical equivalent was −0.12 ± 0.33 D at 3 months after the operation. UCVA equal to or better than preoperative BCVA was achieved in 98% of eyes. The average BCVA at 3 months after the operation was −0.03 ± 0.07 LogMAR, significantly better than the preoperative BCVA (0.08 ± 0.10 LogMAR) (P = 0.029). Apart from one patient (2.4%) who had difficulty reading computer screens, all patients had satisfactory or very satisfactory results. In the early postoperative period, halos occurred in 23 patients (54.8%); however, there were no significant differences in visual function scores between patients with and without halos (P > 0.05). Patients were very satisfied with their vision-related daily activities at 3 months after the operation. The central hole of the V4c-ICL does not affect patients' vision-related daily activities. PMID:27965890
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shwetha, Bondel; Ravikumar, Manickam; Supe, Sanjay S.
2012-04-01
Various treatment planning systems are used to design plans for the treatment of cervical cancer using high-dose-rate brachytherapy. The purpose of this study was to make a dosimetric comparison of 2 treatment planning systems from Varian Medical Systems, namely ABACUS and BrachyVision. The dose distribution of an Ir-192 source generated with a single dwell position was compared using the ABACUS (version 3.1) and BrachyVision (version 6.5) planning systems. Ten patients with intracavitary applications were planned on both systems using orthogonal radiographs. Doses were calculated at the prescription points (point A, right and left) and at the reference points RU, LU, RM, LM, bladder, and rectum. For a single dwell position, little difference was observed in the doses to points along the perpendicular bisector. The mean difference between ABACUS and BrachyVision for these points was 1.88%. The mean difference in the dose calculated toward the distal end of the cable by ABACUS and BrachyVision was 3.78%, whereas along the proximal end the difference was 19.82%. For the patient cases, there was approximately a 2% difference between ABACUS and BrachyVision planning for dose to the prescription points. The dose difference for the reference points ranged from 0.4-1.5%. For the bladder and rectum, the differences were 5.2% and 13.5%, respectively. The dose difference between the rectum points was statistically significant. There is considerable difference between the dose calculations performed by the 2 treatment planning systems. These discrepancies are caused by differences in the calculation methodology adopted by the 2 systems.
NASA Astrophysics Data System (ADS)
Zhang, Shuo; Liu, Shaochuang; Ma, Youqing; Qi, Chen; Ma, Hao; Yang, Huan
2017-06-01
The Chang'e-3 was the first lunar soft-landing probe of China. It was composed of a lander and a lunar rover. The Chang'e-3 successfully landed in the northwest of the Mare Imbrium on December 14, 2013. The lunar rover carried out movement, imaging, and geological survey tasks after landing. The lunar rover was equipped with a stereo vision system made up of the Navcam system, the mast mechanism, and an inertial measurement unit (IMU). The Navcam system was composed of two cameras with fixed focal lengths. The mast mechanism was a robot with three revolute joints. The stereo vision system was used to determine the position of the lunar rover, generate digital elevation models (DEMs) of the surrounding region, and plan the moving paths of the lunar rover. The stereo vision system must be calibrated before use. A control field can be built to calibrate the stereo vision system in a laboratory on Earth. However, the parameters of the stereo vision system would change after the launch, the orbital changes, the braking, and the landing. Therefore, the stereo vision system should be self-calibrated on the Moon. An integrated self-calibration method based on bundle block adjustment is proposed in this paper. The bundle block adjustment uses each bundle of rays as the basic adjustment unit, and the adjustment is implemented over the whole photogrammetric region. With the proposed method, the stereo vision system can be self-calibrated in the unknown lunar environment and all parameters can be estimated simultaneously. An experiment was conducted in a ground lunar-simulation field. The proposed method was compared with other methods such as the CAHVOR method, the vanishing point method, the Denavit-Hartenberg method, the factorization method, and the weighted least-squares method. The analyzed results proved that the accuracy of the proposed method was superior to those of the other methods. Finally, the proposed method was used in practice to self-calibrate the stereo vision system of the Chang'e-3 lunar rover on the Moon.
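At its core, a bundle block adjustment is a nonlinear least-squares problem over camera poses and 3D points. The SciPy sketch below shows only that core, under simplifications of ours: fixed, known intrinsics K and plain pinhole cameras, whereas the paper's self-calibration also estimates the Navcam and mast-mechanism parameters.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(params, n_cams, n_pts, cam_idx, pt_idx, obs_uv, K):
    """Stacked reprojection residuals; params packs 6-DOF poses, then points."""
    poses = params[:n_cams * 6].reshape(n_cams, 6)   # [rotation vector | translation]
    pts = params[n_cams * 6:].reshape(n_pts, 3)
    R = Rotation.from_rotvec(poses[cam_idx, :3]).as_matrix()
    Xc = np.einsum('nij,nj->ni', R, pts[pt_idx]) + poses[cam_idx, 3:]
    uv = (K @ Xc.T).T
    uv = uv[:, :2] / uv[:, 2:3]                      # perspective division
    return (uv - obs_uv).ravel()

# cam_idx[i] and pt_idx[i] say which camera and point observation i involves;
# x0 stacks the initial pose and point estimates.
# fit = least_squares(reprojection_residuals, x0, method='trf',
#                     args=(n_cams, n_pts, cam_idx, pt_idx, obs_uv, K))
```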
Massie, Isobel; Dale, Sarah B; Daniels, Julie T
2015-06-01
Limbal epithelial stem cell deficiency can cause blindness, but transplantation of these cells on a carrier such as human amniotic membrane can restore vision. Unfortunately, clinical graft manufacture using amnion can be inconsistent. Therefore, we have developed an alternative substrate, Real Architecture for 3D Tissue (RAFT), which supports human limbal epithelial cells (hLE) expansion. Epithelial organization is improved when human limbal fibroblasts (hLF) are incorporated into RAFT tissue equivalent (TE). However, hLF have the potential to transdifferentiate into a pro-scarring cell type, which would be incompatible with therapeutic transplantation. The aim of this work was to assess the scarring phenotype of hLF in RAFT TEs in hLE+ and hLE- RAFT TEs and in nonairlifted and airlifted RAFT TEs. Diseased fibroblasts (dFib) isolated from the fibrotic conjunctivae of ocular mucous membrane pemphigoid (Oc-MMP) patients were used as a pro-scarring positive control against which hLF were compared using surrogate scarring parameters: matrix metalloproteinase (MMP) activity, de novo collagen synthesis, α-smooth muscle actin (α-SMA) expression, and transforming growth factor-β (TGF-β) secretion. Normal hLF and dFib maintained different phenotypes in RAFT TE. MMP-2 and -9 activity, de novo collagen synthesis, and α-SMA expression were all increased in dFib cf. normal hLF RAFT TEs, although TGF-β1 secretion did not differ between normal hLF and dFib RAFT TEs. Normal hLF do not progress toward a scarring-like phenotype during culture in RAFT TEs and, therefore, may be safe to include in therapeutic RAFT TE, where they can support hLE, although in vivo work is required to confirm this. dFib RAFT TEs (used in this study as a positive control) may be useful toward the development of an ex vivo disease model of Oc-MMP.
NASA Technical Reports Server (NTRS)
2008-01-01
We can determine distances between objects and points of interest in 3-D space to a useful degree of accuracy from a set of camera images by using multiple camera views and reference targets in the camera's field of view (FOV). The core of the software processing is based on the previously developed foreign-object debris vision trajectory software (see KSC Research and Technology 2004 Annual Report, pp. 2-5). The current version of this photogrammetry software includes the ability to calculate distances between any specified point pairs, the ability to process any number of reference targets and any number of camera images, user-friendly editing features (including zoom in/out, translate, and load/unload), routines to help mark reference points with a Find function while comparing them with the reference point database file, and a comprehensive output report in HTML format. In this system, scene reference targets are replaced by a photogrammetry cube whose exterior surface contains multiple predetermined precision 2-D targets. Precise measurement of the cube's 2-D targets during the fabrication phase eliminates the need for measuring 3-D coordinates of reference target positions in the camera's FOV using, for example, a survey theodolite or a FaroArm. Placing the 2-D targets on the cube's surface required the development of precise machining methods. In response, 2-D targets were embedded into the surface of the cube and then painted black for high contrast. A 12-inch collapsible cube was developed for room-size scenes. A 3-inch, solid, stainless-steel photogrammetry cube was also fabricated for photogrammetry analysis of small objects.
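Once the cameras have been calibrated against the cube's known targets, a point-pair distance reduces to two-view triangulation. A minimal sketch with OpenCV, assuming the 3x4 projection matrices P1 and P2 and the marked pixel coordinates come from the calibration and marking steps described above:

```python
import cv2
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """One 3D point from its pixel coordinates in two calibrated views."""
    X = cv2.triangulatePoints(P1, P2,
                              np.float32(uv1).reshape(2, 1),
                              np.float32(uv2).reshape(2, 1))
    return (X[:3] / X[3]).ravel()  # homogeneous -> Euclidean

def pair_distance(P1, P2, point_a, point_b):
    """Distance between two marked points; each point is a (uv1, uv2) pair."""
    A = triangulate(P1, P2, *point_a)
    B = triangulate(P1, P2, *point_b)
    return float(np.linalg.norm(A - B))
```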
Nickla, Debora L; Sharda, Vandhana; Troilo, David
2005-04-01
In chicks, the temporal response characteristics to form deprivation and to spectacle lens wear (myopic and hyperopic defocus) show essential differences, suggesting that the emmetropization system "weights" the visual signals differently. To further explore how the eye integrates opposing visual signals, we examined the responses to myopic defocus induced by prior form deprivation vs. that induced by positive spectacle lenses, in both cases alternating with form deprivation. Three experimental paradigms were used: 1) Form deprivation was induced by monocular occluders for 7 days. Over the subsequent 7 days, the occluders were removed daily for 12 hours (n = 13), 4 hours (n = 7), 2 hours (n = 7), or 0 hours (n = 6). 2) Birds were form-deprived on day 12. Over the subsequent 7 days, occluders were replaced with a +10 D lens for 2 hours per day (n = 13). 3) Starting at day 11, a +10 D lens was placed over one eye for 2 hours (n = 13), 3 hours (n = 5), or 6 hours (n = 10) per day, and the eyes were otherwise untreated. Ocular dimensions were measured with high-frequency A-scan ultrasonography; refractive errors were measured by streak retinoscopy at various intervals. In recovering eyes, 2 hours per day of myopic defocus was as effective as 12 hours at inducing refractive and axial recovery (change in refractive error: +10 D vs. +13 D, respectively). By contrast, 2 hours of lens-induced defocus (alternating with form deprivation) was not sufficient to induce refractive or axial compensation (change in refractive error: -1.7 D). When myopic defocus alternated with unrestricted vision, 6 hours per day were sufficient to induce nearly full compensation (2 hours vs. 6 hours: 4.4 D vs. 8.2 D; p < 0.0005). Choroids showed rapid increases in thickness to the daily episodes of myopic defocus; these resulted in "long-term" thickness changes in recovering eyes and eyes wearing lenses for 3 or 6 hours per day. The response to myopic defocus induced by prior form deprivation is more robust than the response induced by positive lenses, suggesting that the underlying mechanisms differ. Presumably, this difference is related to the size of the eye at the onset. Compensatory decreases in growth rate occur without full compensatory choroidal thickening.
Perception of object motion in three-dimensional space induced by cast shadows.
Katsuyama, Narumi; Usui, Nobuo; Nose, Izuru; Taira, Masato
2011-01-01
Cast shadows can be salient depth cues in three-dimensional (3D) vision. Using a motion illusion in which a ball is perceived either to roll in depth along the bottom or to float in the frontal plane depending on the slope of the trajectory of its cast shadow, we investigated the cortical mechanisms underlying 3D vision based on cast shadows using fMRI techniques. When modified versions of the original illusion, in which the slope of the shadow trajectory (shadow slope) was changed in 5 steps from the same slope as the ball trajectory to horizontal, were presented to participants, the perceived ball trajectory shifted gradually from rolling along the bottom to floating in the frontal plane as the shadow slope changed. This observation suggests that the perception of the ball trajectory in this illusion is strongly affected by the motion of the cast shadow. In the fMRI study, cortical activity during observation of movies of the illusion was investigated. We found that the bilateral posterior-occipital sulcus (POS) and right ventral precuneus showed activation related to the perception of the ball trajectory induced by the cast shadows in the illusion. Of these areas, the right POS may be involved in inferring the ball trajectory from the given spatial relation between the ball and the shadow. Our present results suggest that the posterior portion of the medial parietal cortex may be involved in 3D vision based on cast shadows. Copyright © 2010 Elsevier Inc. All rights reserved.
Huang, Juan; Hung, Li-Fang; Smith, Earl L.
2012-01-01
This study aimed to investigate the changes in ocular shape and relative peripheral refraction during recovery from myopia produced by form deprivation (FD) and hyperopic defocus. FD was imposed in 6 monkeys by securing a diffuser lens over one eye; hyperopic defocus was produced in another 6 monkeys by fitting one eye with a -3 D spectacle lens. When unrestricted vision was re-established, the treated eyes recovered from the vision-induced central and peripheral refractive errors. The recovery of peripheral refractive errors was associated with corresponding changes in the shape of the posterior globe. The results suggest that vision can actively regulate ocular shape and the development of central and peripheral refraction in infant primates. PMID:23026012
Manche, Edward E; Haw, Weldon W
2011-12-01
To compare the safety and efficacy of wavefront-guided laser in situ keratomileusis (LASIK) vs photorefractive keratectomy (PRK) in a prospective randomized clinical trial. A cohort of 68 eyes of 34 patients with -0.75 to -8.13 diopters (D) of myopia (spherical equivalent) were randomized to receive either wavefront-guided PRK or LASIK in the fellow eye using the VISX CustomVue laser. Patients were evaluated at 1 day, 1 week, and months 1, 3, 6, and 12. At 1 month, uncorrected visual acuity (UCVA), best spectacle-corrected visual acuity (BSCVA), 5% and 25% contrast sensitivity, induction of higher-order aberrations (HOAs), and subjective symptoms of vision clarity, vision fluctuation, ghosting, and overall self-assessment of vision were worse (P<0.05) in the PRK group. By 3 months, these differences had resolved (P>0.05). At 1 year, mean spherical equivalent was reduced 94% to -0.27 ± 0.31 D in the LASIK group and reduced 96% to -0.17 ± 0.41 D in the PRK group. At 1 year, 91% of eyes were within ±0.50 D and 97 % were within ±1.0 D in the PRK group. At 1 year, 88% of eyes were within ±0.50 D and 97% were within ±1.0 D in the LASIK group. At 1 year, 97% of eyes in the PRK group and 94% of eyes in the LASIK group achieved an UCVA of 20/20 or better (P=0.72). Refractive stability was achieved in both PRK and LASIK groups after 1 month. There were no intraoperative or postoperative flap complications in the LASIK group. There were no instances of corneal haze in the PRK group. Wavefront-guided LASIK and PRK are safe and effective at reducing myopia. At 1 month postoperatively, LASIK demonstrates an advantage over PRK in UCVA, BSCVA, low-contrast acuity, induction of total HOAs, and several subjective symptoms. At postoperative month 3, these differences between PRK and LASIK results had resolved.
An Automatic Registration Algorithm for 3D Maxillofacial Model
NASA Astrophysics Data System (ADS)
Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng
2016-09-01
3D image registration aims at aligning two 3D data sets in a common coordinate system and has been widely used in computer vision, pattern recognition, and computer-assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for registering 3D maxillofacial models, including the facial surface model and the skull model. Our proposed registration algorithm can achieve a good alignment between partial and whole maxillofacial models in spite of ambiguous matching, which has a potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT feature extraction and FPFH descriptor construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible-skull models demonstrate the efficiency and robustness of our algorithm.
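A sketch of the same coarse-to-fine idea using the Open3D registration API as a stand-in for the authors' implementation; note that Open3D's RANSAC-based feature matching replaces the paper's SAC-IA step, and the 3D-SIFT keypoint stage is omitted in favor of plain voxel downsampling.

```python
import open3d as o3d

def register(source, target, voxel=2.0):
    """Coarse FPFH + RANSAC alignment refined by point-to-plane ICP."""
    def preprocess(pcd):
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
        return down, fpfh

    src, src_fpfh = preprocess(source)
    tgt, tgt_fpfh = preprocess(target)
    # Coarse alignment from feature correspondences (analogue of SAC-IA).
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src, tgt, src_fpfh, tgt_fpfh, True, voxel * 1.5,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        3, [], o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    # Refinement by ICP, seeded with the coarse transform.
    fine = o3d.pipelines.registration.registration_icp(
        src, tgt, voxel * 0.8, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return fine.transformation
```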
Cell sorting using efficient light shaping approaches
NASA Astrophysics Data System (ADS)
Bañas, Andrew; Palima, Darwin; Villangca, Mark; Glückstad, Jesper
2016-03-01
Early detection of diseases can save lives. Hence, there is emphasis on sorting rare disease-indicating cells within small dilute quantities, such as in the confines of lab-on-a-chip devices. In our work, we use optical forces to isolate red blood cells detected by machine vision. This approach is gentler, less invasive, and more economical compared to conventional FACS systems. Because cells respond more weakly to optical forces than the plastic or glass beads commonly used in the optical manipulation literature, and since laser safety would be an issue in clinical use, we develop efficient approaches to utilizing lasers and light modulation devices. The Generalized Phase Contrast (GPC) method, which can be used for efficiently illuminating spatial light modulators or creating well-defined contiguous optical traps, is supplemented by diffractive techniques capable of integrating the available light and creating 2D or 3D beam distributions aimed at the positions of the detected cells. Furthermore, the beam-shaping freedom provided by GPC allows optimization of the beam's propagation and its interaction with the catapulted cells.
NASA Astrophysics Data System (ADS)
Astafiev, A.; Orlov, A.; Privezencev, D.
2018-01-01
The article is devoted to the development of technology and software for positioning and control systems in industrial plants, based on aggregating computer vision and radio-frequency identification to determine the current storage area of a product. It describes the hardware design of a positioning system for industrial products on the plant territory based on a radio-frequency grid, the hardware design of a positioning system based on computer vision methods, and the aggregation method that combines the two to determine the current storage area. Experimental studies in laboratory and production conditions were conducted and are described in the article.
Howard, Anita R.
2015-01-01
Drawing on intentional change theory (ICT; Boyatzis, 2006), this study examined the differential impact of inducing coaching recipients’ vision/positive emotion versus improvement needs/negative emotion during real time executive coaching sessions. A core aim of the study was to empirically test two central ICT propositions on the effects of using the coached person’s Positive Emotional Attractor (vision/PEA) versus Negative Emotional Attractor (improvement needs/NEA) as the anchoring framework of a onetime, one-on-one coaching session on appraisal of 360° feedback and discussion of possible change goals. Eighteen coaching recipients were randomly assigned to two coaching conditions, the coaching to vision/PEA condition and the coaching to improvement needs/NEA condition. Two main hypotheses were tested. Hypothesis1 predicted that participants in the vision/PEA condition would show higher levels of expressed positive emotion during appraisal of 360° feedback results and discussion of change goals than recipients in the improvement needs/NEA condition. Hypothesis2 predicted that vision/PEA participants would show lower levels of stress immediately after the coaching session than improvement needs/NEA participants. Findings showed that coaching to vision/the PEA fostered significantly lower levels of expressed negative emotion and anger during appraisal of 360° feedback results as compared to coaching to improvements needs/the NEA. Vision-focused coaching also fostered significantly greater exploration of personal passions and future desires, and more positive engagement during 360° feedback appraisal. No significant differences between the two conditions were found in emotional processing during discussion of change goals or levels of stress immediately after the coaching session. Current findings suggest that vision/PEA arousal versus improvement needs/NEA arousal impact the coaching process in quite different ways; that the coach’s initial framing of the session predominantly in the PEA (or, alternatively, predominantly in the NEA) fosters emotional processing that is driven by this initial framing; and that both the PEA (and associated positive emotions) and NEA (and associated negative emotions) play an important and recurrent role in shaping the change process. Further study on these outcomes will enable researchers to shed more light on the differential impact of the PEA versus NEA on intentional change, and how to leverage the benefits of both emotional attractors. Findings also suggest that coaches can benefit from better understanding the importance of tapping intrinsic motivation and personal passions through coaching to vision/the PEA. Coaches additionally may benefit from better understanding how to leverage the long-term advantages, and restorative benefits, of positive emotions during coaching engagements. The findings also highlight coaches’ need to appreciate the impact of timing effects on coaching intentional change, and how coaches can play a critical role in calibrating the pace and focus of work on intentional change. Early arousal of the coachee’s PEA, accompanied by recurrent PEA–NEA induction, may help coachees be/become more creative, optimistic, and resilient during a given change process. Overall, primary focus on vision/PEA and secondary focus on improvement needs/NEA may better equip coaches and coaching recipients to work together on building robust learning, development, and change. 
Keywords: executive coaching, vision, improvement needs, positive emotion, negative emotion, emotional appraisal, intentional change, positive psychology. PMID:25964768
Implementation of a stereofluoroscopic system
NASA Technical Reports Server (NTRS)
Rivers, D. B.
1976-01-01
Clinical applications of a 3-D video imaging technique developed by NASA for observation and control of remote manipulators are discussed. Incorporation of this technique in a stereo fluoroscopic system provides reduced radiation dosage and greater vision and mobility for the user.
A novel upper limb rehabilitation system with self-driven virtual arm illusion.
Aung, Yee Mon; Al-Jumaily, Adel; Anam, Khairul
2014-01-01
This paper proposes a novel upper-extremity rehabilitation system with a virtual arm illusion. It aims to provide paralyzed patients with a novel rehabilitation system supporting fast recovery of upper-limb function lost as a result of stroke. The system integrates a number of technologies: Augmented Reality (AR) to develop game-like exercises, computer vision to create the illusion scene, 3D modeling and model simulation, and signal processing to detect user intention via EMG signals. The effectiveness of the developed system was evaluated via a usability study and questionnaires, with results presented graphically and analytically. The evaluation produced positive results, indicating that the developed system has potential as an effective rehabilitation system for upper-limb impairment.
Model-based Estimation for Pose, Velocity of Projectile from Stereo Linear Array Image
NASA Astrophysics Data System (ADS)
Zhao, Zhuxin; Wen, Gongjian; Zhang, Xing; Li, Deren
2012-01-01
The pose (position and attitude) and velocity of in-flight projectiles have a major influence on performance and accuracy. A cost-effective method for measuring gun-boosted projectiles is proposed. The method uses only one linear-array image collected by a stereo vision system combining a digital line-scan camera and a mirror near the muzzle. From the projectile's stereo image, the motion parameters (pose and velocity) are acquired using a model-based optimization algorithm. The algorithm achieves optimal estimation of the parameters by matching the stereo projection of the projectile with that of a same-size 3D model. The speed and the AOA (angle of attack) can also be determined subsequently. Experiments were made to test the proposed method.
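Given two fitted poses a short interval apart, the derived quantities mentioned at the end, speed and AOA, follow directly. A small illustration under our own convention that the fitted attitude supplies the projectile's axis direction:

```python
import numpy as np

def speed_and_aoa(pos0, pos1, dt, axis_dir):
    """Speed from two position samples dt seconds apart, and angle of
    attack (degrees) between the velocity vector and the projectile axis."""
    v = (np.asarray(pos1, float) - np.asarray(pos0, float)) / dt
    speed = np.linalg.norm(v)
    cos_a = np.dot(v, axis_dir) / (speed * np.linalg.norm(axis_dir))
    return speed, np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
```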
NASA Astrophysics Data System (ADS)
Radkowski, Rafael; Holland, Stephen; Grandin, Robert
2018-04-01
This research addresses inspection-location tracking in the field of nondestructive evaluation (NDE), using a computer vision technique to determine the position and orientation of typical NDE equipment in a test setup. The objective is to determine the tracking accuracy achievable for typical NDE equipment, in order to facilitate automatic NDE data integration. Since the employed tracking technique relies on the surface curvatures of the object of interest, the accuracy can only be determined experimentally. We work with flash thermography and conducted an experiment in which we tracked a specimen and a thermography flash hood, measured the spatial relation between the two, and used this relation to map thermography data onto a 3D model of the specimen. The results indicate adequate accuracy but also revealed calibration challenges.
Neuropharmacology of vision in goldfish: a review.
Mora-Ferrer, Carlos; Neumeyer, Christa
2009-05-01
The goldfish is one of the few animals exceptionally well analyzed in behavioral experiments and also in electrophysiological and neuroanatomical investigations of the retina. To get insight into the functional organization of the retina we studied color vision, motion detection and temporal resolution before and after intra-ocular injection of neuropharmaca with known effects on retinal neurons. Bicuculline, strychnine, curare, atropine, and dopamine D1- and D2-receptor antagonists were used. The results reviewed here indicate separate and parallel processing of L-cone contribution to different visual functions, and the influence of several neurotransmitters (dopamine, acetylcholine, glycine, and GABA) on motion vision, color vision, and temporal resolution.
Optical devices in highly myopic eyes with low vision: a prospective study.
Scassa, C; Cupo, G; Bruno, M; Iervolino, R; Capozzi, S; Tempesta, C; Giusti, C
2012-01-01
To compare, in relation to the cause of visual impairment, the possibilities for rehabilitation, the corrective systems already in use, and the optical devices finally prescribed in highly myopic patients with low vision. Some considerations about the rehabilitation of these subjects, especially in relation to their different pathologies, are also made. Twenty-five highly myopic subjects were enrolled. We evaluated both visual acuity and retinal sensitivity by Scanning Laser Ophthalmoscope (SLO) microperimetry. Twenty patients (80%) were rehabilitated by means of monocular optical devices, while five patients (20%) were rehabilitated binocularly. We found a good correlation between visual acuity and retinal sensitivity only when the macular pathology did not induce large areas of chorioretinal atrophy, which cause a lack of stabilization of the preferential retinal locus. In fact, the best results in reading and performing daily visual tasks were obtained by maximizing the residual vision in patients with retinal sensitivity greater than 10 dB. A well-circumscribed area of absolute scotoma with a defined new retinal fixation locus can be considered a positive predictive factor for the final rehabilitation process. A more careful evaluation of visual acuity, retinal sensitivity, and the preferential fixation locus is necessary in order to prescribe the best optical devices to patients with low vision, thus reducing the impact of the disability on their daily life.
2016-10-01
Additive Manufacturing Partnership. Jennifer Fielding, Ph.D.; Ed Morris; Rob Gorham; Emily Fehrman Cory, Ph.D.; Scott Leonard. Fielding is the ... government partners for America Makes and other Manufacturing Innovation Institutes. America Makes is the National Additive Manufacturing Innovation Institute ... The vision for America Makes is to accelerate additive manufacturing (AM) innovation to enable widespread adoption by bridging the gap between basic...
Gokce, Hasan Suat; Piskin, Bulent; Ceyhan, Dogan; Gokce, Sila Mermut; Arisan, Volkan
2010-03-01
The lighting conditions of the environment and visual deficiencies such as red-green color vision deficiency affect the clinical shade matching performance of dental professionals. The purpose of this study was to evaluate the shade matching performance of normal and color vision-deficient dental professionals under standard daylight and tungsten illuminants. Two sets of porcelain disc replicas of 16 shade guide tabs (VITA Lumin) were manufactured to exact L*a*b* values by using a colorimeter. These twin porcelain discs (13 mm x 2.4 mm) were then mixed up and placed into a color-matching cabinet that standardized the lighting conditions for the observation tests. Normal and red-green color vision-deficient dental professionals were asked to match the 32 porcelain discs using standard artificial daylight D65 (high color temperature) and tungsten filament lamp light (T) (low color temperature) illuminants. The results were analyzed by repeated-measures ANOVA and paired and independent-samples t tests for the differences between dental professionals and between the illuminants (alpha=.05). Regarding the sum of the correct shade match scores of all observations with both illuminants, the difference between the normal vision and red-green color vision-deficient dental professional groups was not statistically significant (F=4.132; P=.054). However, the correct shade match scores of each group were significantly different for each illuminant (P<.005). The correct shade match scores of normal color vision dental professionals were significantly higher with the D65 illuminant (t=7.004; P<.001). The shade match scores of red-green color vision-deficient dental professionals were significantly higher with the T illuminant (approximately 5.7 more correct pairs than with D65) (t=5.977; P<.001). Conclusions: Within the limitations of this study, the shade matching performance of dental professionals was affected by color vision deficiency and the color temperature of the illuminant. The color vision-deficient group was notably unsuccessful with the D65 illuminant in shade matching. In contrast, there was a significant increase in the shade matching performance of the color vision-deficient group with the T illuminant. The lower color temperature illuminant dramatically decreased the normal color vision group's correct shade matching score. (c) 2010 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
The reliability of data collection periods of personal costs associated with vision impairment.
Lamoureux, E L; Chou, S L; Larizza, M F; Keeffe, J E
2006-04-01
To determine the reliability of vision-related personal costs collected over 1, 3 and 6 months (extrapolated to 12 months) compared to one-year data. Participants of any age, with a presenting visual acuity of < 20/40 in the better eye and an ability to converse in English, were recruited. Monthly cost diaries, in large print and electronic copies with instructions available in audio and Braille, were used prospectively to collect personal costs. The personal expenses were grouped under four categories, namely: (a) medicines, products and equipment, (b) health and community services, (c) informal care and support and (d) other expenses. Sociodemographic and clinical data were also collected. 104 participants (59 females) with a mean age of 64 years completed the 12-month diaries. Almost 40% of the participants had severe visual impairment (< 20/200) in the better eye, and the most common cause of vision loss was AMD (n=40; 38%). The mean total personal costs from the 12-month diaries were AUD 3,330 +/- 2,887. There were no significant differences between the 12-month data and the extrapolated 1-, 3- and 6-month diaries (t-tests; p=0.17, 0.89 and 0.73, respectively). However, the 1-month variation in total costs was substantially larger (SD +/- 5,860) than the 3-month and 6-month variation (SD +/- 3,037 and 3,030, respectively). Also, compared to the 12-month diaries, the 1-month data consistently showed the weakest correlation coefficients for all cost categories relative to the other time intervals. Given that diary completion can be particularly challenging for individuals with impaired vision, a minimum 3-month data collection period can provide reliable estimates of annual costs associated with vision impairment.
Leadership: Vision & Structure. Resource Paper No. 36.
ERIC Educational Resources Information Center
Groff, Warren H.
Research indicates that leaders tend to be remarkably well-balanced people who embody four areas of competency: vision, the ability to communicate that vision, positive self-regard, and the ability to build trust with associates. The process of creating a vision of the future requires an understanding of the opportunities and threats present in…
High accuracy position method based on computer vision and error analysis
NASA Astrophysics Data System (ADS)
Chen, Shihao; Shi, Zhongke
2003-09-01
High-accuracy positioning is becoming a research hotspot in the field of automatic control, and positioning is one of the most studied tasks in vision systems, so we address object localization using image-processing methods. This paper describes a new high-accuracy positioning method based on a vision system. In the proposed method, an edge-detection filter is designed for a given operating condition. The filter comprises two main parts: an image-processing module that implements edge detection, consisting of multi-level self-adaptive threshold segmentation, edge detection and edge filtering; and an object-locating module that determines the location of each object with high accuracy, made up of median filtering and curve fitting. The paper presents an error analysis of the method to establish the feasibility of vision-based position detection. Finally, to verify the practicality of the method, an example of positioning a worktable using the proposed method is given at the end of the paper. Results show that the method can accurately detect the position of the measured object and identify object attitude.
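The abstract pairs edge detection and median filtering with curve fitting to push localization below one pixel. A minimal sketch of the curve-fitting step, assuming a three-point parabolic fit around the strongest edge response (the paper does not specify its fitting model, and the profile below is invented):

    import numpy as np

    def subpixel_peak(profile):
        """Fit a parabola through the maximum sample and its two
        neighbours to estimate the peak position with subpixel accuracy."""
        i = int(np.argmax(profile))
        if i == 0 or i == len(profile) - 1:
            return float(i)
        y0, y1, y2 = profile[i - 1], profile[i], profile[i + 1]
        denom = y0 - 2 * y1 + y2
        offset = 0.0 if denom == 0 else 0.5 * (y0 - y2) / denom
        return i + offset

    # Hypothetical 1-D edge-response profile (e.g. one row of gradient
    # magnitude after median filtering): the true edge lies near sample 4.
    profile = np.array([0.0, 0.1, 0.3, 0.9, 2.0, 1.8, 0.6, 0.2])
    print(f"edge at x = {subpixel_peak(profile):.3f} px")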
Tan, N C; Yip, W F; Kallakuri, S; Sankari, U; Koh, Y L E
2017-06-02
Patients with type 2 diabetes mellitus (T2DM) may develop color vision impairment. This study aimed to determine the prevalence of and factors associated with impaired color vision in patients with T2DM but without diabetic retinopathy. Enrolment criteria included multi-ethnic Asian participants, aged 21 to 80 years, with known T2DM for a minimum of 2 years. Their diagnoses were affirmed from oral glucose tolerance test results, and they were screened for impaired color vision using the Farnsworth D-15 instrument. Demographic characteristics were described, and clinical data for the preceding 2 years were analyzed using logistic regression. Twenty-two percent of 849 eligible participants had impaired color vision, with higher involvement of the right eye. Impaired blue-yellow color vision (tritanomaly) was the most common form. Impaired color vision was significantly associated with older age and lower education; longer duration of T2DM (median 6 years vs 4 years); higher HbA1c level and HDL-cholesterol in the 2nd year; and lower mean total cholesterol, mean LDL-cholesterol and mean triglycerides in the 2nd year. Affected participants also had poorer visual acuity (worse than 6/12) in the affected eye. Logistic regression showed that impaired color vision was associated with older age (OR=1.04), longer duration of T2DM (OR=1.07), prescription of tolbutamide (OR=3.79) and lower mean systolic blood pressure (OR=0.98). Almost one in four participants with T2DM had impaired color vision, largely tritanomaly. Color vision screening may be considered for participants who have had T2DM for 6 years or longer, but this requires further cost-effectiveness evaluation.
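For readers less used to logistic-regression output: each reported odds ratio is the exponential of a model coefficient, so an OR above 1 raises the odds of impaired color vision per unit of the predictor and an OR below 1 lowers it. A small illustration using only the ORs quoted above:

    import math

    # Odds ratios reported in the abstract, converted back to
    # logistic-regression coefficients (OR = exp(beta), so beta = ln(OR)).
    reported = {
        "age (per year)": 1.04,
        "T2DM duration (per year)": 1.07,
        "tolbutamide prescribed": 3.79,
        "mean systolic BP (per mmHg)": 0.98,
    }
    for name, oratio in reported.items():
        beta = math.log(oratio)
        print(f"{name}: OR={oratio:.2f} -> beta={beta:+.3f}")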
ERIC Educational Resources Information Center
Suss, Gavin
2010-01-01
The question is, "What can vision do?" (Fritz, 1989) rather than "What is vision?" Keter's Chairman, Mr. Sami Sagol's vision is to establish an internship program that will strengthen the competitive edge of the Israeli industry, within the international arena. The program will set new standards of excellence for product…
Effectiveness of Stereoscopic Displays for Indirect-Vision Driving and Robot Teleoperation
2010-08-01
…course of varying negative and positive terrain features (i.e., holes in the ground, drop-offs, and hills; figure 4c). Participants were instructed…
Identification of geometric faces in hand-sketched 3D objects containing curved lines
NASA Astrophysics Data System (ADS)
El-Sayed, Ahmed M.; Wahdan, A. A.; Youssif, Aliaa A. A.
2017-07-01
The reconstruction of 3D objects from 2D line drawings is regarded as one of the key topics in the field of computer vision. Ongoing research focuses mainly on the reconstruction of 3D objects that are mapped only from 2D straight lines and that are symmetric in nature. Commonly, this approach produces only basic and simple shapes that are mostly flat or rather polygonized, which is normally attributed to the inability to handle curves. To overcome the above-mentioned limitations, a technique capable of handling non-symmetric drawings that encompass curves is considered. This paper discusses a novel technique that can be used to reconstruct 3D objects containing curved lines. In addition, it highlights an application, developed in accordance with the suggested technique, that can convert a freehand sketch to a 3D shape on a mobile phone.
On Alternative Approaches to 3D Image Perception: Monoscopic 3D Techniques
NASA Astrophysics Data System (ADS)
Blundell, Barry G.
2015-06-01
In the eighteenth century, techniques that enabled a strong sense of 3D perception to be experienced without recourse to binocular disparities (arising from the spatial separation of the eyes) underpinned the first significant commercial sales of 3D viewing devices and associated content. However, following the advent of stereoscopic techniques in the nineteenth century, 3D image depiction has become inextricably linked to binocular parallax, and outside the vision science and arts communities relatively little attention has been directed towards earlier approaches. Here we introduce relevant concepts and terminology and consider a number of techniques and optical devices that enable 3D perception to be experienced on the basis of planar images rendered from a single vantage point. Subsequently we allude to possible mechanisms for 3D perception not based on binocular parallax. Particular attention is given to reviewing areas likely to be thought-provoking to those involved in 3D display development, spatial visualization, HCI, and other related areas of interdisciplinary research.
NASA Astrophysics Data System (ADS)
Barnes, Robert; Gupta, Sanjeev; Gunn, Matt; Paar, Gerhard; Balme, Matt; Huber, Ben; Bauer, Arnold; Furya, Komyo; Caballo-Perucha, Maria del Pilar; Traxler, Chris; Hesina, Gerd; Ortner, Thomas; Banham, Steven; Harris, Jennifer; Muller, Jan-Peter; Tao, Yu
2017-04-01
A key focus of planetary rover missions is to use panoramic camera systems to image outcrops along rover traverses, in order to characterise their geology in the search for ancient life. These data can be processed to create 3D point clouds of rock outcrops for quantitative analysis. The Mars Utah Rover Field Investigation (MURFI 2016) is a Mars rover field-analogue mission run by the UK Space Agency (UKSA) in collaboration with the Canadian Space Agency (CSA). It took place between 22nd October and 13th November 2016 and consisted of a science team based in Harwell, UK, and a field team including an instrumented rover platform at the field site near Hanksville (Utah, USA). The Aberystwyth University PanCam Emulator 3 (AUPE3) camera system was used to collect stereo panoramas of the terrain the rover encountered during the field trials. Stereo imagery processed in PRoViP is rendered as Ordered Point Clouds (OPCs) in PRo3D, enabling the user to zoom, rotate and translate the 3D outcrop model. Interpretations can be digitised directly onto the 3D surface, and simple measurements can be taken of the dimensions of the outcrop and sedimentary features, including grain size. Dip and strike of bedding planes, stratigraphic and sedimentological boundaries and fractures are calculated within PRo3D from mapped bedding contacts and fracture traces. Rover-derived imagery is merged with UAV and orbital datasets to build semi-regional multi-resolution 3D models of the area of operations for immersive analysis and contextual understanding. In-simulation, AUPE3 was mounted on the rover mast, collecting 16 stereo panoramas over 9 'sols'; five out-of-simulation datasets were collected in the Hanksville-Burpee Quarry. Stereo panoramas were processed using an automated pipeline, with data transfer through an FTP server. PRo3D has been used for visualisation and analysis of this stereo data. Features of interest in the area could be annotated, and their distances to the rover position could be measured to aid prioritisation of science targeting. Where grains or rocks are present and visible, their dimensions can be measured. Interpretation of the sedimentological features of the outcrops has also been carried out. OPCs created from stereo imagery collected in the Hanksville-Burpee Quarry showed a general coarsening-up succession, with a red, well-layered mudstone overlain by stacked, medium-coarse to pebbly sandstone layers of irregular thickness. Cross beds/laminations and lenses of finer sandstone were common. These features provide valuable information on the depositional environment. Development of PRo3D in preparation for the ExoMars 2020 and NASA Mars 2020 missions will be centred on validation of the data and measurements. Collection of in-situ field data by a human geologist allows for direct comparison of viewer-derived measurements with those taken in the field. The research leading to these results has received funding from the UK Space Agency Aurora programme and the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 312377 PRoViDE, ESA PRODEX Contracts 4000105568 "ExoMars PanCam 3D Vision" and 4000116566 "Mars 2020 Mastcam-Z 3D Vision".
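The dip-and-strike computation described above reduces to fitting a plane to the digitised contact points and converting the plane normal to geological angles. A minimal sketch of that step, assuming east/north/up coordinates and the right-hand-rule strike convention (PRo3D's internal implementation is not shown here, and the points are invented):

    import numpy as np

    def strike_and_dip(points):
        """Fit a plane to mapped contact points (N x 3, east/north/up) and
        return (strike, dip) in degrees, right-hand-rule convention."""
        centered = points - points.mean(axis=0)
        # The plane normal is the singular vector of least variance.
        _, _, vt = np.linalg.svd(centered)
        n = vt[-1]
        if n[2] < 0:                      # force the normal to point upward
            n = -n
        dip = np.degrees(np.arccos(n[2] / np.linalg.norm(n)))
        dip_direction = np.degrees(np.arctan2(n[0], n[1])) % 360.0
        strike = (dip_direction - 90.0) % 360.0
        return strike, dip

    # Hypothetical bedding-contact points digitised on a 3D outcrop model.
    pts = np.array([[0.0, 0.0, 0.0],
                    [4.0, 0.5, 0.3],
                    [1.0, 3.0, -0.9],
                    [5.0, 3.5, -0.5]])
    strike, dip = strike_and_dip(pts)
    print(f"strike = {strike:.1f} deg, dip = {dip:.1f} deg")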
Computer-based System for the Virtual-Endoscopic Guidance of Bronchoscopy.
Helferty, J P; Sherbondy, A J; Kiraly, A P; Higgins, W E
2007-11-01
The standard procedure for diagnosing lung cancer involves two stages: three-dimensional (3D) computed-tomography (CT) image assessment, followed by interventional bronchoscopy. In general, the physician has no link between the 3D CT image assessment results and the follow-on bronchoscopy. Thus, the physician essentially performs bronchoscopic biopsy of suspect cancer sites blindly. We have devised a computer-based system that greatly augments the physician's vision during bronchoscopy. The system uses techniques from computer graphics and computer vision to enable detailed 3D CT procedure planning and follow-on image-guided bronchoscopy. The procedure plan is directly linked to the bronchoscope procedure, through a live registration and fusion of the 3D CT data and bronchoscopic video. During a procedure, the system provides many visual tools, fused CT-video data, and quantitative distance measures; this gives the physician considerable visual feedback on how to maneuver the bronchoscope and where to insert the biopsy needle. Central to the system is a CT-video registration technique, based on normalized mutual information. Several sets of results verify the efficacy of the registration technique. In addition, we present a series of test results for the complete system for phantoms, animals, and human lung-cancer patients. The results indicate that not only is the variation in skill level between different physicians greatly reduced by the system over the standard procedure, but that biopsy effectiveness increases.
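The registration criterion named in this abstract, normalized mutual information, can be estimated from a joint grey-level histogram of the video frame and the CT-derived rendering. A minimal NumPy sketch of that score (the system's renderer and pose optimizer are not shown, and the image data here is synthetic):

    import numpy as np

    def normalized_mutual_information(a, b, bins=32):
        """NMI = (H(A) + H(B)) / H(A,B) between two equally sized images,
        estimated from a joint grey-level histogram."""
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        def entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log(p))
        return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

    # Hypothetical use: score a candidate bronchoscope pose by comparing
    # the video frame against the virtual rendering at that pose.
    rng = np.random.default_rng(0)
    frame = rng.random((64, 64))
    rendering = frame + 0.05 * rng.random((64, 64))  # nearly aligned
    print(f"NMI = {normalized_mutual_information(frame, rendering):.3f}")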
Takeda, Noriaki; Uno, Atsuhiko; Inohara, Hidenori; Shimada, Shoichi
2016-01-01
Background: The mouse is the most commonly used animal model in biomedical research because of recent advances in molecular genetic techniques. Studies related to eye movement in mice are common in fields such as ophthalmology relating to vision, neuro-otology relating to the vestibulo-ocular reflex (VOR), neurology relating to the cerebellum's role in movement, and psychology relating to attention. Recording eye movements in mice, however, is technically difficult. Methods: We developed a new algorithm for analyzing the three-dimensional (3D) rotation vector of eye movement in mice using high-speed video-oculography (VOG). The algorithm made it possible to analyze the gain and phase of the VOR using the eye's angular velocity around the axis of eye rotation. Results: When mice were rotated at 0.5 Hz and 2.5 Hz around the earth-vertical axis with their heads in a 30° nose-down position, the vertical components of their left eye movements were in phase with the horizontal components. The VOR gain was 0.42 at 0.5 Hz and 0.74 at 2.5 Hz, and the phase lead of the eye movement relative to the turntable was 16.1° at 0.5 Hz and 4.88° at 2.5 Hz. Conclusions: To the best of our knowledge, this is the first report of this algorithm being used to calculate a 3D rotation vector of eye movement in mice using high-speed VOG. We developed a technique for analyzing the 3D rotation vector of eye movements in mice with a high-speed infrared CCD camera and conclude that the technique is suitable for analyzing eye movements in mice. We also include with this article C++ source code that calculates the 3D rotation vectors of the eye position from the two-dimensional coordinates of the pupil and the iris freckle in the image. PMID:27023859
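The gain and phase figures quoted above come from comparing eye and turntable angular velocities at the stimulus frequency. One common way to extract them is a least-squares sinusoid fit; the sketch below uses that approach with synthetic traces, since the paper's exact estimator is not given here:

    import numpy as np

    def gain_and_phase(head_vel, eye_vel, freq, t):
        """Return VOR gain and phase lead (degrees) of eye velocity
        relative to head velocity at the stimulus frequency."""
        def fit(sig):
            # Least-squares fit of a*sin + b*cos at the known frequency.
            design = np.column_stack([np.sin(2 * np.pi * freq * t),
                                      np.cos(2 * np.pi * freq * t)])
            (a, b), *_ = np.linalg.lstsq(design, sig, rcond=None)
            return np.hypot(a, b), np.degrees(np.arctan2(b, a))
        amp_h, ph_h = fit(head_vel)
        amp_e, ph_e = fit(eye_vel)
        # A compensatory eye movement is ~180 deg out of phase with the head.
        return amp_e / amp_h, (ph_e - ph_h - 180.0) % 360.0

    # Hypothetical angular-velocity traces for a 0.5 Hz rotation.
    t = np.linspace(0.0, 10.0, 1000)
    head = 40.0 * np.sin(2 * np.pi * 0.5 * t)
    eye = -0.42 * 40.0 * np.sin(2 * np.pi * 0.5 * (t + 16.1 / 360.0 / 0.5))
    gain, lead = gain_and_phase(head, eye, 0.5, t)
    print(f"gain = {gain:.2f}, phase lead = {lead:.1f} deg")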
Ma, Li; Kaufman, Yardana; Zhang, Junhua; Washington, Ilyas
2011-01-01
Stargardt disease, also known as juvenile macular degeneration, occurs in approximately one in 10,000 people and results from genetic defects in the ABCA4 gene. The disease is characterized by premature accumulation of lipofuscin in the retinal pigment epithelium (RPE) of the eye and by vision loss. No cure or treatment is available. Although lipofuscin is considered a hallmark of Stargardt disease, its mechanism of formation and its role in disease pathogenesis are poorly understood. In this work we investigated the effects of long-term administration of deuterium-enriched vitamin A, C20-D3-vitamin A, on RPE lipofuscin deposition and eye function in a mouse model of Stargardt disease. The results support the notion that lipofuscin forms partly as a result of the aberrant reactivity of vitamin A through the formation of vitamin A dimers, provide evidence that preventing vitamin A dimerization may slow disease-related retinal physiological changes and perhaps vision loss, and suggest that administration of C20-D3-vitamin A may be a potential clinical strategy for ameliorating clinical symptoms resulting from ABCA4 genetic defects. PMID:21156790
Horowitz, Seth S; Cheney, Cheryl A; Simmons, James A
2004-01-01
The big brown bat (Eptesicus fuscus) is an aerial-feeding insectivorous species that relies on echolocation to avoid obstacles and to detect flying insects. Spatial perception in the dark using echolocation challenges the vestibular system to function without substantial visual input for orientation. IR thermal video recordings show the complexity of bat flights in the field and suggest a highly dynamic role for the vestibular system in orientation and flight control. To examine this role, we carried out laboratory studies of flight behavior under illuminated and dark conditions in both static and rotating obstacle tests while administering heavy water (D2O) to impair vestibular inputs. Eptesicus carried out complex maneuvers through both fixed arrays of wires and a rotating obstacle array using both vision and echolocation, or when guided by echolocation alone. When treated with D2O in combination with lack of visual cues, bats showed considerable decrements in performance. These data indicate that big brown bats use both vision and echolocation to provide spatial registration for head position information generated by the vestibular system.
FORGE Newberry 3D Gravity Density Model for Newberry Volcano
Alain Bonneville
2016-03-11
These data are Pacific Northwest National Laboratory inversions of an amalgamation of two surface gravity datasets: Davenport-Newberry gravity collected prior to the 2012 stimulations, and Zonge International gravity collected in 2012 for the project "Novel use of 4D Monitoring Techniques to Improve Reservoir Longevity and Productivity in Enhanced Geothermal Systems". Inversions of surface gravity recover a 3D distribution of density contrast from which intrusive igneous bodies are identified. Each record contains a body name, body type, point type, UTM X and Y coordinates, Z specified as meters below sea level (negative values therefore indicate elevations above sea level), thickness of the body in meters, susceptibility ("suscept"), density anomaly in g/cc, background density in g/cc, and density in g/cc. The model was created using a commercial gravity inversion software package, ModelVision 12.0 (http://www.tensor-research.com.au/Geophysical-Products/ModelVision). The initial model is based on the seismic tomography interpretation (Beachly et al., 2012). All the gravity data used to constrain this model are on the GDR: https://gdr.openei.org/submissions/760.
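Given the field list above, each record maps naturally onto a small data structure for downstream use. The sketch below assumes a comma-separated layout in the order listed; the released files define the actual headers and delimiter, and the sample row is invented:

    from dataclasses import dataclass

    @dataclass
    class GravityModelPoint:
        """One record of the density model, following the field list above.
        Field names are illustrative, not taken from the released files."""
        body_name: str
        body_type: str
        point_type: str
        utm_x: float               # meters
        utm_y: float               # meters
        z: float                   # meters below sea level (negative = above)
        thickness: float           # meters
        susceptibility: float
        density_anomaly: float     # g/cc
        background_density: float  # g/cc
        density: float             # g/cc

    def parse_row(row: str) -> GravityModelPoint:
        name, btype, ptype, *nums = row.split(",")
        return GravityModelPoint(name, btype, ptype, *map(float, nums))

    pt = parse_row("intrusive1,igneous,center,596000,4840000,2500,400,0.01,0.15,2.45,2.60")
    print(pt.density_anomaly)  # 0.15 g/cc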
Impact of Gamification of Vision Tests on the User Experience.
Bodduluri, Lakshmi; Boon, Mei Ying; Ryan, Malcolm; Dain, Stephen J
2017-08-01
Gamification has been incorporated into vision tests and vision therapies in the expectation that it may improve the user experience and engagement with the task. The current study aimed to understand how gamification affects the user experience, specifically during psychophysical tasks designed to estimate vision thresholds (chromatic and achromatic contrast sensitivity). Three tablet-computer-based games were developed with three levels of gaming elements. Game 1 was designed as a simple clinical test (no gaming elements), game 2 was similar to game 1 but with added gaming elements (i.e., feedback, scores, and sounds), and game 3 was a complete game. Participants (N = 144, age: 9.9-42 years) played the three games in random order. The user experience for each game was assessed using a Short Feedback Questionnaire. The median (interquartile range) fun level for the three games was 2.5 (1.6), 3.9 (1.7), and 2.5 (2.8), respectively. Overall, participants reported a greater fun level and higher preparedness to play the game again for game 2 than for games 1 and 3 (P < 0.05). There were significant positive correlations between fun level and preparedness to play the game again for all the games (P < 0.05). Engagement (assessed as completion rates) did not differ between the games. The gamified version (game 2) was preferred to the other two versions. Over the short term, the careful application of gaming elements to vision tests was found to increase the fun level of users without affecting engagement with the vision test.
3D noise-resistant segmentation and tracking of unknown and occluded objects using integral imaging
NASA Astrophysics Data System (ADS)
Aloni, Doron; Jung, Jae-Hyun; Yitzhaky, Yitzhak
2017-10-01
Three-dimensional (3D) object segmentation and tracking can be useful in various computer vision applications, such as object surveillance for security uses and robot navigation. We present a method for 3D multiple-object tracking using computational integral imaging, based on accurate 3D object segmentation. The method does not employ object detection by motion analysis in a video, as conventionally performed (e.g., background subtraction or block matching); this means that movement properties do not significantly affect the detection quality. Object detection is performed by analyzing static 3D image data obtained through computational integral imaging. With regard to previous works that used integral imaging data in such a scenario, the proposed method performs 3D tracking of objects without prior information about the objects in the scene, and it is found to be efficient under severe noise conditions.
Volumetric segmentation of range images for printed circuit board inspection
NASA Astrophysics Data System (ADS)
Van Dop, Erik R.; Regtien, Paul P. L.
1996-10-01
Conventional computer vision approaches to object recognition and pose estimation employ 2D grey-value or color imaging. As a consequence these images contain information only about projections of a 3D scene. The subsequent image processing is then difficult, because object coordinates are represented with just image coordinates. Only complicated low-level vision modules, such as depth from stereo or depth from shading, can recover some of the surface geometry of the scene. Recent advances in fast range imaging have, however, paved the way towards 3D computer vision, since range data of the scene can now be obtained with sufficient accuracy and speed for object recognition and pose estimation purposes. This article proposes the coded-light range-imaging method together with superquadric segmentation to approach this task. Superquadric segments are volumetric primitives that describe global object properties with five parameters, which provide the main features for object recognition. Besides, the principal axes of a superquadric segment determine the pose of an object in the scene. The volumetric segmentation of a range image can be used to detect missing, false or badly placed components on assembled printed circuit boards. Furthermore, this approach will be useful for recognizing and extracting valuable or toxic electronic components from printed circuit board scrap that currently burdens the environment during electronic waste processing. Results on synthetic range images, with errors constructed according to a verified noise model, illustrate the capabilities of this approach.
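A superquadric's five parameters are conventionally three semi-axis lengths and two shape exponents, combined in an inside-outside function that classifies points relative to the surface. A minimal sketch of that standard formulation (not code from the paper):

    import numpy as np

    def superquadric_inside_outside(p, a1, a2, a3, e1, e2):
        """Inside-outside function of a superquadric in its canonical frame:
        F < 1 inside, F = 1 on the surface, F > 1 outside. The five
        parameters are the semi-axes a1..a3 and the shape exponents e1, e2."""
        x, y, z = np.abs(p)  # symmetry lets us work in the first octant
        term_xy = (x / a1) ** (2.0 / e2) + (y / a2) ** (2.0 / e2)
        return term_xy ** (e2 / e1) + (z / a3) ** (2.0 / e1)

    # A box-like superquadric (small exponents) versus an ellipsoid (e1=e2=1):
    print(superquadric_inside_outside((0.5, 0.5, 0.5), 1, 1, 1, 0.1, 0.1))  # < 1: inside
    print(superquadric_inside_outside((1.5, 0.0, 0.0), 1, 1, 1, 1.0, 1.0))  # > 1: outside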
NASA Technical Reports Server (NTRS)
UijtdeHaag, Maarten; Thomas, Robert; Rankin, James R.
2004-01-01
The report discusses the architecture and the flight test results of a 3-Dimensional Cockpit Display of Traffic and terrain Information (3D-CDTI). The presented 3D-CDTI is a perspective display format that combines existing Synthetic Vision System (SVS) research and Automatic Dependent Surveillance-Broadcast (ADS-B) technology to improve the pilot's situational awareness. The goal of the 3D-CDTI is to contribute to the development of new display concepts for NASA's Small Aircraft Transportation System research program. Papers were presented at the PLANS 2002 meeting and the ION-GPS 2002 meeting. The contents of this report are derived from the results discussed in those papers.
Sensorimotor recovery following spaceflight may be due to frequent square-wave saccadic intrusions
NASA Technical Reports Server (NTRS)
Reschke, Millard; Somers, Jeffrey T.; Leigh, R. John; Krnavek, Jody M.; Kornilova, Ludmila; Kozlovskaya, Inessa; Bloomberg, Jacob J.; Paloski, William H.
2004-01-01
Square-wave jerks (SWJs) are small, involuntary saccades that disrupt steady fixation. We report the case of an astronaut (approximately 140 d on orbit) who showed frequent SWJs, especially postflight, but who showed no impairment of vision or decrement of postflight performance. These data support the view that SWJs do not impair vision because they are paired movements, consisting of a small saccade away from the fixation position followed, within 200 ms, by a corrective saccade that brings the eye back on target. Since many returning astronauts show a decrement of dynamic visual function during postflight locomotion, it seems possible that frequent SWJs improved this astronaut's visual function by providing postsaccadic enhancement of visual fixation, which aided postflight performance. Certainly, frequent SWJs did not impair performance in this astronaut, who had no other neurological disorder.
Usability of stereoscopic view in teleoperation
NASA Astrophysics Data System (ADS)
Boonsuk, Wutthigrai
2015-03-01
Recently there has been tremendous growth in the area of 3D stereoscopic visualization. The 3D stereoscopic visualization technology has been used in a growing number of consumer products such as 3D televisions and 3D glasses for gaming systems. This technology builds on the fact that the human brain develops depth perception by retrieving information from the two eyes: the brain combines the left and right images on the retinas and extracts depth information. Therefore, viewing two video images taken a slight distance apart, as shown in Figure 1, can create an illusion of depth [8]. Proponents of this technology argue that the stereo view of 3D visualization increases user immersion and performance, as more information is gained through 3D vision compared to the 2D view. However, it is still uncertain whether the additional information gained from 3D stereoscopic visualization can actually improve user performance in real-world situations such as teleoperation.
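The depth cue exploited here is binocular disparity: a scene point projects to slightly different horizontal positions in the two views, and for a rectified camera pair the distance follows Z = f * B / d. A toy sketch with hypothetical numbers (focal length in pixels, baseline in meters):

    def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
        """Distance to a point seen by both cameras of a rectified stereo rig:
        Z = f * B / d. Raises on a zero or negative disparity (no valid match)."""
        if disparity_px <= 0:
            raise ValueError("point at infinity or invalid match")
        return f_px * baseline_m / disparity_px

    # Hypothetical rig: 800 px focal length, 65 mm baseline, 13 px disparity.
    print(depth_from_disparity(800.0, 0.065, 13.0))  # -> 4.0 m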
Random-Profiles-Based 3D Face Recognition System
Kim, Joongrock; Yu, Sunjin; Lee, Sangyoun
2014-01-01
In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition depends strongly on the precision of the acquired 3D face data, and it requires more computational power and storage capacity than 2D face recognition systems. In this paper, we present a nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consist of more than 50k 3D points and that the method achieves a reliable recognition rate under pose variation. PMID:24691101