Sample records for laser-based vision system

  1. Microscope self-calibration based on micro laser line imaging and soft computing algorithms

    NASA Astrophysics Data System (ADS)

    Apolinar Muñoz Rodríguez, J.

    2018-06-01

    A technique to perform microscope self-calibration via a micro laser line and soft computing algorithms is presented. In this technique, the microscope vision parameters are computed by soft computing algorithms based on laser line projection. To implement the self-calibration, a microscope vision system is constructed from a CCD camera and a 38 μm laser line. From this arrangement, the microscope vision parameters are represented via Bezier approximation networks, which are built from the laser line position. In this procedure, a genetic algorithm determines the microscope vision parameters from laser line imaging, and the approximation networks compute three-dimensional vision from the laser line position. Additionally, the soft computing algorithms re-calibrate the vision parameters when the microscope vision system is modified during the vision task. The proposed self-calibration improves on the accuracy of traditional microscope calibration, which relies on references external to the microscope system. The capability of the self-calibration based on soft computing algorithms is determined by the calibration accuracy and the micro-scale measurement error, and the contribution is corroborated by an evaluation against the accuracy of traditional microscope calibration.
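The Bezier approximation networks in the record above build on ordinary Bezier curve evaluation. As a minimal illustrative sketch (the function name and the one-dimensional setting are ours, not the paper's), De Casteljau's algorithm evaluates such a curve from its control points:

```python
def bezier(control_points, t):
    """Evaluate a 1-D Bezier curve at parameter t in [0, 1] with
    De Casteljau's algorithm: repeatedly interpolate adjacent control
    points until a single value remains."""
    pts = list(control_points)
    while len(pts) > 1:
        pts = [(1.0 - t) * a + t * b for a, b in zip(pts, pts[1:])]
    return pts[0]
```

In a self-calibration scheme along these lines, the control points would be fitted, e.g. by the genetic algorithm the abstract mentions, to observed laser line positions; the fitting step is omitted here.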

  2. A vision-based system for fast and accurate laser scanning in robot-assisted phonomicrosurgery.

    PubMed

    Dagnino, Giulio; Mattos, Leonardo S; Caldwell, Darwin G

    2015-02-01

    Surgical quality in phonomicrosurgery can be improved by combining fast open-loop laser control (e.g., high-speed scanning capabilities) with a robust and accurate closed-loop visual servoing system. A new vision-based system for laser scanning control during robot-assisted phonomicrosurgery was developed and tested. Laser scanning was accomplished with a dual control strategy, which adds a vision-based trajectory correction phase to a fast open-loop laser controller. The system is designed to eliminate open-loop aiming errors caused by system calibration limitations and by the unpredictable topology of real targets. Evaluation of the new system was performed using CO2 laser cutting trials on artificial targets and ex-vivo tissue. The system produced accuracy values corresponding to pixel resolution even when smoke created by the laser-target interaction cluttered the camera view. In realistic test scenarios, trajectory-following RMS errors were reduced by almost 80% with respect to open-loop system performance, reaching mean error values around 30 μm and maximum observed errors on the order of 60 μm. The new vision-based laser microsurgical control system was shown to be effective and promising, with significant potential to improve the safety and quality of laser microsurgeries.

  3. A laser-based vision system for weld quality inspection.

    PubMed

    Huang, Wei; Kovacevic, Radovan

    2011-01-01

    Welding is a very complex process in which the final weld quality can be affected by many process parameters. In order to inspect the weld quality and detect various weld defects, different methods and systems have been studied and developed. In this paper, a laser-based vision system is developed for non-destructive weld quality inspection. The vision sensor is designed based on the principle of laser triangulation. By processing the images acquired from the vision sensor, the geometrical features of the weld can be obtained. Through visual analysis of the acquired 3D profiles of the weld, the presence as well as the positions and sizes of weld defects can be accurately identified, and non-destructive weld quality inspection can thus be achieved.
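Laser triangulation as described above reduces to a one-line range equation once a geometry is fixed. The convention below (pinhole camera at the origin looking along +Z, laser source offset by a baseline b along X and tilted by an angle θ toward the optical axis) is an assumption for illustration, not the paper's exact sensor layout:

```python
import math

def triangulation_depth(u_px, f_px, baseline_m, laser_angle_rad):
    """Depth Z of a laser spot seen at horizontal pixel offset u_px.

    Assumed geometry: camera at the origin with focal length f_px
    (pixels); laser source at (b, 0, 0), beam tilted by theta toward
    the optical axis, so the beam satisfies x = b - z * tan(theta).
    Substituting the projection x = u * z / f and solving for z gives

        Z = b * f / (u + f * tan(theta))
    """
    return baseline_m * f_px / (u_px + f_px * math.tan(laser_angle_rad))
```

With θ = 0 (beam parallel to the optical axis) this collapses to the familiar Z = b·f/u disparity form.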

  4. A Laser-Based Vision System for Weld Quality Inspection

    PubMed Central

    Huang, Wei; Kovacevic, Radovan

    2011-01-01

    Welding is a very complex process in which the final weld quality can be affected by many process parameters. In order to inspect the weld quality and detect various weld defects, different methods and systems have been studied and developed. In this paper, a laser-based vision system is developed for non-destructive weld quality inspection. The vision sensor is designed based on the principle of laser triangulation. By processing the images acquired from the vision sensor, the geometrical features of the weld can be obtained. Through visual analysis of the acquired 3D profiles of the weld, the presence as well as the positions and sizes of weld defects can be accurately identified, and non-destructive weld quality inspection can thus be achieved. PMID:22344308

  5. Data acquisition and analysis of range-finding systems for space construction

    NASA Technical Reports Server (NTRS)

    Shen, C. N.

    1981-01-01

    For future space missions, completely autonomous robotic machines will be required to free astronauts from routine chores such as equipment maintenance and the servicing of faulty systems, and to extend human capabilities in hazardous environments full of cosmic and other harmful radiation. In places with high radiation and uncontrollable ambient illumination, TV-camera-based vision systems cannot work effectively. However, a vision system utilizing range information measured directly with a time-of-flight laser rangefinder can operate successfully in these environments. Such a system is independent of illumination conditions, and the interfering effects of intense radiation of all kinds are eliminated by the tuned input of the laser instrument. By processing the range data according to certain decision, stochastic-estimation, and heuristic schemes, the laser-based vision system can recognize known objects and thus provide sufficient information to the robot's control system, which can develop strategies for various objectives.
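The time-of-flight principle the abstract relies on converts a round-trip pulse time directly into range; a minimal sketch:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range_m(round_trip_s):
    """Range from a time-of-flight laser rangefinder: the pulse
    travels out and back, so the one-way distance is c * t / 2."""
    return C * round_trip_s / 2.0
```

For example, a 2 µs round trip corresponds to roughly 300 m of range, which illustrates the nanosecond-level timing such instruments need for centimeter accuracy.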

  6. Laser cutting of irregular shape object based on stereo vision laser galvanometric scanning system

    NASA Astrophysics Data System (ADS)

    Qi, Li; Zhang, Yixin; Wang, Shun; Tang, Zhiqiang; Yang, Huan; Zhang, Xuping

    2015-05-01

    Irregular shape objects with different 3-dimensional (3D) appearances are difficult to be shaped into customized uniform pattern by current laser machining approaches. A laser galvanometric scanning system (LGS) could be a potential candidate since it can easily achieve path-adjustable laser shaping. However, without knowing the actual 3D topography of the object, the processing result may still suffer from 3D shape distortion. It is desirable to have a versatile auxiliary tool that is capable of generating 3D-adjusted laser processing path by measuring the 3D geometry of those irregular shape objects. This paper proposed the stereo vision laser galvanometric scanning system (SLGS), which takes the advantages of both the stereo vision solution and conventional LGS system. The 3D geometry of the object obtained by the stereo cameras is used to guide the scanning galvanometers for 3D-shape-adjusted laser processing. In order to achieve precise visual-servoed laser fabrication, these two independent components are integrated through a system calibration method using plastic thin film target. The flexibility of SLGS has been experimentally demonstrated by cutting duck feathers for badminton shuttle manufacture.

  7. Color image processing and vision system for an automated laser paint-stripping system

    NASA Astrophysics Data System (ADS)

    Hickey, John M., III; Hise, Lawson

    1994-10-01

    Color image processing in machine vision systems has not gained general acceptance; most machine vision systems use images that are shades of gray. The Laser Automated Decoating System (LADS) required a vision system that could discriminate between substrates of various colors and textures and paints ranging from semi-gloss grays to high-gloss red, white, and blue (Air Force Thunderbirds). The changing lighting levels produced by the pulsed CO2 laser mandated a vision system that did not require constant color-temperature lighting for reliable image analysis.

  8. Update on laser vision correction using wavefront analysis with the CustomCornea system and LADARVision 193-nm excimer laser

    NASA Astrophysics Data System (ADS)

    Maguen, Ezra I.; Salz, James J.; McDonald, Marguerite B.; Pettit, George H.; Papaioannou, Thanassis; Grundfest, Warren S.

    2002-06-01

    A study was undertaken to assess whether the results of laser vision correction with the LADARVision 193-nm excimer laser (Alcon/Autonomous Technologies) can be improved with the use of wavefront analysis generated by a proprietary system that includes a Hartmann-Shack sensor and is expressed using Zernike polynomials. A total of 82 eyes underwent LASIK at several centers with an improved algorithm, using the CustomCornea system. A subgroup of 48 eyes of 24 patients was randomized so that one eye underwent conventional treatment and the other underwent treatment based on wavefront analysis. Treatment parameters were equal for each type of refractive error. Overall, 83% of all eyes had uncorrected vision of 20/20 or better, and 95% were 20/25 or better. In all groups, uncorrected visual acuities did not improve significantly in eyes treated with wavefront analysis compared to conventional treatments. Higher-order aberrations were consistently better corrected at 6 months postop in eyes undergoing LASIK based on wavefront analysis. In addition, the number of eyes with reduced RMS was significantly higher in the subset of eyes treated with the wavefront algorithm (38% vs. 5%). Wavefront technology may improve the outcomes of laser vision correction with the LADARVision excimer laser. Further refinements of the technology and clinical trials will contribute to this goal.
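The RMS figures quoted above come from Zernike coefficients. Assuming an orthonormal (Noll-ordered) basis, in which the total RMS wavefront error is the root-sum-square of the coefficients, the higher-order RMS is simply that sum restricted to terms beyond defocus and astigmatism; the index cutoff below is an illustrative assumption:

```python
import math

def higher_order_rms(zernike_coeffs_um, first_higher_order_index=6):
    """RMS wavefront error contributed by higher-order aberrations.

    Assumes orthonormal, Noll-ordered coefficients (in micrometers),
    so the RMS is the root-sum-square of the coefficients.  Indices
    below `first_higher_order_index` (piston, tilt, defocus,
    astigmatism in this illustrative convention) are excluded.
    """
    hoa = zernike_coeffs_um[first_higher_order_index:]
    return math.sqrt(sum(c * c for c in hoa))
```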

  9. Function-based design process for an intelligent ground vehicle vision system

    NASA Astrophysics Data System (ADS)

    Nagel, Robert L.; Perry, Kenneth L.; Stone, Robert B.; McAdams, Daniel A.

    2010-10-01

    An engineering design framework for an autonomous ground vehicle vision system is discussed. We present both the conceptual and physical design by following the design process, development and testing of an intelligent ground vehicle vision system constructed for the 2008 Intelligent Ground Vehicle Competition. During conceptual design, the requirements for the vision system are explored via functional and process analysis considering the flows into the vehicle and the transformations of those flows. The conceptual design phase concludes with a vision system design that is modular in both hardware and software and is based on a laser range finder and camera for visual perception. During physical design, prototypes are developed and tested independently, following the modular interfaces identified during conceptual design. Prototype models, once functional, are implemented into the final design. The final vision system design uses a ray-casting algorithm to process camera and laser range finder data and identify potential paths. The ray-casting algorithm is a single thread of the robot's multithreaded application. Other threads control motion, provide feedback, and process sensory data. Once integrated, both hardware and software testing are performed on the robot. We discuss the robot's performance and the lessons learned.
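The ray-casting path search described above can be sketched on a 2-D occupancy grid built from the fused camera and laser range finder data (the grid representation, step size, and function names here are our assumptions, not the team's implementation):

```python
import math

def ray_clearance(grid, x0, y0, heading_rad, step=1.0, max_steps=50):
    """Step along a ray through a 2-D occupancy grid (1 = obstacle,
    0 = free) and return the distance travelled before the first
    occupied or out-of-map cell."""
    x, y = float(x0), float(y0)
    for i in range(max_steps):
        x += step * math.cos(heading_rad)
        y += step * math.sin(heading_rad)
        ix, iy = int(round(x)), int(round(y))
        if not (0 <= iy < len(grid) and 0 <= ix < len(grid[0])):
            return i * step          # left the known map
        if grid[iy][ix]:
            return i * step          # hit an obstacle
    return max_steps * step

def best_heading(grid, x0, y0, candidates_rad):
    """Pick the candidate heading with the largest clearance."""
    return max(candidates_rad, key=lambda h: ray_clearance(grid, x0, y0, h))
```

A real robot would cast many rays per frame (here it would be one ray per candidate heading) and feed the winner to the motion-control thread.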

  10. Control electronics for a multi-laser/multi-detector scanning system

    NASA Technical Reports Server (NTRS)

    Kennedy, W.

    1980-01-01

    The Mars Rover laser scanning system uses a precision laser pointing mechanism, a photodetector array, and the concept of triangulation to perform three-dimensional scene analysis. The system is used for real-time terrain sensing and vision. The Multi-Laser/Multi-Detector laser scanning system is controlled by a digital device called the ML/MD controller. A next-generation laser scanning system, based on the Level 2 controller, is microprocessor-based; the new controller's capabilities far exceed those of the ML/MD device. First-draft circuit details and the general software structure are presented.

  11. Laser electro-optic system for rapid three-dimensional /3-D/ topographic mapping of surfaces

    NASA Technical Reports Server (NTRS)

    Altschuler, M. D.; Altschuler, B. R.; Taboada, J.

    1981-01-01

    It is pointed out that the generic utility of a robot in a factory/assembly environment could be substantially enhanced by providing the robot with a vision capability. A standard video camera for robot vision provides a two-dimensional image, which contains insufficient information for a detailed three-dimensional reconstruction of an object. Approaches which supply the additional information needed for the three-dimensional mapping of objects with complex surface shapes are briefly considered, and a description is presented of a laser-based system which can provide three-dimensional vision to a robot. The system consists of a laser beam array generator, an optical image recorder, and software for controlling the required operations. The projection of a laser beam array onto a surface produces a dot pattern image which is viewed from one or more suitable perspectives. Attention is given to the mathematical method employed, the space coding technique, the approaches used for obtaining the transformation parameters, the optics for laser beam array generation, the hardware for beam array coding, and aspects of image acquisition.

  12. Laser technologies in ophthalmic surgery

    NASA Astrophysics Data System (ADS)

    Atezhev, V. V.; Barchunov, B. V.; Vartapetov, S. K.; Zav'yalov, A. S.; Lapshin, K. E.; Movshev, V. G.; Shcherbakov, I. A.

    2016-08-01

    Excimer and femtosecond lasers are widely used in ophthalmology to correct refraction. Laser systems for vision correction are based on versatile technical solutions and include multiple hard- and software components. Laser characteristics, properties of laser beam delivery system, algorithms for cornea treatment, and methods of pre-surgical diagnostics determine the surgical outcome. Here we describe the scientific and technological basis for laser systems for refractive surgery developed at the Physics Instrumentation Center (PIC) at the Prokhorov General Physics Institute (GPI), Russian Academy of Sciences.

  13. Welding technology transfer task/laser based weld joint tracking system for compressor girth welds

    NASA Technical Reports Server (NTRS)

    Looney, Alan

    1991-01-01

    Sensors to control and monitor welding operations are currently being developed at Marshall Space Flight Center. The laser-based weld bead profiler/torch rotation sensor was modified to provide a weld joint tracking system for compressor girth welds. The tracking system features a precision laser-based vision sensor, automated two-axis machine motion, and an industrial PC controller. The system's benefits are the elimination of weld repairs caused by joint-tracking errors, which reduces manufacturing costs and increases production output; simplified tooling; and the freeing of costly manufacturing floor space.

  14. The Recovery of Optical Quality after Laser Vision Correction

    PubMed Central

    Jung, Hyeong-Gi

    2013-01-01

    Purpose: To evaluate the optical quality after laser in situ keratomileusis (LASIK) or photorefractive keratectomy (PRK) using a double-pass system and to follow the recovery of optical quality after laser vision correction. Methods: This study measured the visual acuity, manifest refraction and optical quality before and one day, one week, one month, and three months after laser vision correction. Optical quality parameters, including the modulation transfer function, Strehl ratio and intraocular scattering, were evaluated with a double-pass system. Results: This study included 51 eyes that underwent LASIK and 57 that underwent PRK. The optical quality three months post-surgery did not differ significantly between these laser vision correction techniques. Furthermore, the preoperative and postoperative optical quality did not differ significantly in either group. Optical quality recovered within one week after LASIK but took between one and three months to recover after PRK. The optical quality of patients in the PRK group seemed to recover slightly more slowly than their uncorrected distance visual acuity. Conclusions: Optical quality recovers to the preoperative level after laser vision correction, so laser vision correction is efficacious for correcting myopia. The double-pass system is a useful tool for clinical assessment of optical quality. PMID:23908570

  15. Virtual environment assessment for laser-based vision surface profiling

    NASA Astrophysics Data System (ADS)

    ElSoussi, Adnane; Al Alami, Abed ElRahman; Abu-Nabah, Bassam A.

    2015-03-01

    Oil and gas businesses have been raising the demand on original equipment manufacturers (OEMs) to implement a reliable metrology method for assessing surface profiles of welds before and after grinding. This mandates a departure from the commonly used surface measurement gauges, which are not only operator-dependent but also limited to discrete measurements along the weld. Due to their potential accuracy and speed, laser-based vision surface profiling systems have been progressively adopted as part of manufacturing quality control. This effort presents a virtual environment that lends itself to developing and evaluating existing laser vision sensor (LVS) calibration and measurement techniques. A combination of two known calibration techniques is implemented to deliver a calibrated LVS system. System calibration is implemented virtually and experimentally to scan simulated and 3D-printed features of known profiles, respectively. Scanned data is inverted and compared with the input profiles to validate the virtual environment's capability for LVS surface profiling and to preliminarily assess the measurement technique for weld profiling applications. Moreover, this effort brings 3D scanning capability a step closer to robust quality control applications in a manufacturing environment.

  16. Vision and spectroscopic sensing for joint tracing in narrow gap laser butt welding

    NASA Astrophysics Data System (ADS)

    Nilsen, Morgan; Sikström, Fredrik; Christiansson, Anna-Karin; Ancona, Antonio

    2017-11-01

    The automated laser beam butt welding process is sensitive to the positioning of the laser beam with respect to the joint, because a small offset may result in a detrimental lack of sidewall fusion. This problem is even more pronounced in the case of narrow-gap butt welding, where most commercial automatic joint tracing systems fail to detect the exact position and size of the gap. In this work, a dual vision and spectroscopic sensing approach is proposed to trace narrow-gap butt joints during laser welding. The system consists of a camera with suitable illumination and matched optical filters, and a fast miniature spectrometer. An image processing algorithm for the camera recordings has been developed to estimate the laser spot position relative to the joint position. The spectral emissions from the laser-induced plasma plume have been acquired by the spectrometer, and based on the measured intensities of selected lines of the spectrum, the electron temperature signal has been calculated and correlated to variations in process conditions. The individual performances of these two systems have been experimentally investigated and evaluated offline using data from several welding experiments, in which artificial abrupt as well as gradual deviations of the laser beam out of the joint were produced. Results indicate that combining the information provided by the vision and spectroscopic systems is beneficial for the development of a hybrid sensing system for joint tracing.
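The electron-temperature signal described above is computed from selected line intensities. One standard approach, shown here as an illustrative sketch rather than the authors' exact method, is the two-line Boltzmann ratio for two lines of the same species:

```python
import math

K_B_EV = 8.617333262e-5  # Boltzmann constant, eV/K

def two_line_temperature(I1, I2, line1, line2):
    """Electron temperature (K) from the intensity ratio of two
    emission lines of the same species (two-line Boltzmann method).

    Each line is (wavelength_nm, A, g, E_upper_eV).  Assuming a
    Boltzmann population of the upper levels, I ~ (A*g/lam)*exp(-E/kT):

        I1/I2 = (A1*g1*lam2)/(A2*g2*lam1) * exp(-(E1 - E2)/(kT))

    so  T = (E2 - E1) / (k * ln(I1*A2*g2*lam1 / (I2*A1*g1*lam2)))
    """
    lam1, A1, g1, E1 = line1
    lam2, A2, g2, E2 = line2
    ratio = (I1 * A2 * g2 * lam1) / (I2 * A1 * g1 * lam2)
    return (E2 - E1) / (K_B_EV * math.log(ratio))
```

The transition constants (A, g, E) would come from tabulated atomic data for the lines the spectrometer resolves.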

  17. An imaging system based on laser optical feedback for fog vision applications

    NASA Astrophysics Data System (ADS)

    Belin, E.; Boucher, V.

    2008-08-01

    The Laboratoire Régional des Ponts et Chaussées (LRPC) of Angers is currently studying the feasibility of applying an optical technique based on the principle of laser optical feedback to long-distance fog vision. The optical feedback setup allows the creation of images of road signs. To create artificial fog conditions, we used a vibrating cell that produces a micro-spray of water according to the principle of acoustic cavitation. To scale the sensitivity of the system under reproducible conditions, we also used optical densities linked to first-sight visibility distances. The current system produces, in a few seconds, 200 × 200 pixel images of a road sign seen through dense artificial fog.

  18. A modeled economic analysis of a digital tele-ophthalmology system as used by three federal health care agencies for detecting proliferative diabetic retinopathy.

    PubMed

    Whited, John D; Datta, Santanu K; Aiello, Lloyd M; Aiello, Lloyd P; Cavallerano, Jerry D; Conlin, Paul R; Horton, Mark B; Vigersky, Robert A; Poropatich, Ronald K; Challa, Pratap; Darkins, Adam W; Bursell, Sven-Erik

    2005-12-01

    The objective of this study was to compare, using a 12-month time frame, the cost-effectiveness of a non-mydriatic digital tele-ophthalmology system (Joslin Vision Network) versus traditional clinic-based ophthalmoscopy examinations with pupil dilation to detect proliferative diabetic retinopathy and its consequences. Decision analysis techniques, including Monte Carlo simulation, were used to model the use of the Joslin Vision Network versus conventional clinic-based ophthalmoscopy among the entire diabetic populations served by the Indian Health Service, the Department of Veterans Affairs, and the active duty Department of Defense. The economic perspective analyzed was that of each federal agency. Data sources for costs and outcomes included the published literature, epidemiologic data, administrative data, market prices, and expert opinion. Outcome measures included the number of true positive cases of proliferative diabetic retinopathy detected, the number of patients treated with panretinal laser photocoagulation, and the number of cases of severe vision loss averted. In the base-case analyses, the Joslin Vision Network was the dominant strategy in all but two of the nine modeled scenarios, meaning that it was both less costly and more effective. In the active duty Department of Defense population, the Joslin Vision Network would be more effective but cost an extra $1,618 per additional patient treated with panretinal laser photocoagulation and an additional $13,748 per severe vision loss event averted. Based on our economic model, the Joslin Vision Network has the potential to be more effective than clinic-based ophthalmoscopy for detecting proliferative diabetic retinopathy and averting cases of severe vision loss, and may do so at lower cost.

  19. Simultaneous Intrinsic and Extrinsic Parameter Identification of a Hand-Mounted Laser-Vision Sensor

    PubMed Central

    Lee, Jong Kwang; Kim, Kiho; Lee, Yongseok; Jeong, Taikyeong

    2011-01-01

    In this paper, we propose a simultaneous intrinsic and extrinsic parameter identification of a hand-mounted laser-vision sensor (HMLVS). A laser-vision sensor (LVS), consisting of a camera and a laser stripe projector, is used as a sensor component of the robotic measurement system, and it measures the range data with respect to the robot base frame using the robot forward kinematics and the optical triangulation principle. For the optimal estimation of the model parameters, we applied two optimization techniques: a nonlinear least-squares optimizer and a particle swarm optimizer. Best-fit parameters, including both the intrinsic and extrinsic parameters of the HMLVS, are simultaneously obtained based on the least-squares criterion. The simulation and experimental results show that the parameter identification problem considered is characterized by a highly multimodal landscape; thus, a global optimization technique such as particle swarm optimization can be a promising tool for identifying the model parameters of an HMLVS, while the nonlinear least-squares optimizer often failed to find an optimal solution even when the initial candidate solutions were selected close to the true optimum. The proposed optimization method does not require good initial guesses of the system parameters to converge to a very stable solution, and it could be applied to a kinematically dissimilar robot system without loss of generality. PMID:22164104
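A minimal particle swarm optimizer of the kind the abstract favours over nonlinear least squares can be sketched as follows (the inertia/acceleration values and the absence of bound clamping after initialization are simplifications, not the paper's settings):

```python
import random

def pso(cost, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: each particle tracks its own
    best position (pbest) and is pulled toward it and toward the
    swarm-wide best (gbest).  Returns (best_position, best_cost)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost
```

For the HMLVS problem, `cost` would be the least-squares reprojection residual over all calibration measurements, with one dimension per intrinsic or extrinsic parameter.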

  20. Human Detection from a Mobile Robot Using Fusion of Laser and Vision Information

    PubMed Central

    Fotiadis, Efstathios P.; Garzón, Mario; Barrientos, Antonio

    2013-01-01

    This paper presents a human detection system that can be employed on board a mobile platform for use in autonomous surveillance of large outdoor infrastructures. The prediction is based on the fusion of two detection modules, one for the laser and another for the vision data. In the laser module, a novel feature set that better encapsulates variations due to noise, distance and human pose is proposed. This enhances the generalization of the system, while at the same time, increasing the outdoor performance in comparison with current methods. The vision module uses the combination of the histogram of oriented gradients descriptor and the linear support vector machine classifier. Current approaches use a fixed-size projection to define regions of interest on the image data using the range information from the laser range finder. When applied to small size unmanned ground vehicles, these techniques suffer from misalignment, due to platform vibrations and terrain irregularities. This is effectively addressed in this work by using a novel adaptive projection technique, which is based on a probabilistic formulation of the classifier performance. Finally, a probability calibration step is introduced in order to optimally fuse the information from both modules. Experiments in real world environments demonstrate the robustness of the proposed method. PMID:24008280
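The probability calibration and fusion step is not spelled out in the abstract; a common pattern, offered here as an assumption-laden sketch rather than the authors' exact pipeline, is Platt scaling of each module's raw classifier score followed by naive-Bayes fusion of the calibrated probabilities:

```python
import math

def platt(score, a, b):
    """Platt scaling: map a raw classifier score to a probability via
    p = 1 / (1 + exp(a*s + b)); a and b are fit on held-out data
    (a is typically negative so larger scores give larger p)."""
    return 1.0 / (1.0 + math.exp(a * score + b))

def fuse(p_laser, p_vision, prior=0.5):
    """Naive-Bayes fusion of two calibrated detector probabilities,
    assuming the modules are conditionally independent: multiply the
    posterior odds and divide out the double-counted prior odds."""
    odds = ((p_laser / (1.0 - p_laser)) * (p_vision / (1.0 - p_vision))
            / (prior / (1.0 - prior)))
    return odds / (1.0 + odds)
```

With a flat prior, two independent 0.8 detections fuse to about 0.94, which is the behaviour one wants from complementary laser and vision evidence.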

  1. Human detection from a mobile robot using fusion of laser and vision information.

    PubMed

    Fotiadis, Efstathios P; Garzón, Mario; Barrientos, Antonio

    2013-09-04

    This paper presents a human detection system that can be employed on board a mobile platform for use in autonomous surveillance of large outdoor infrastructures. The prediction is based on the fusion of two detection modules, one for the laser and another for the vision data. In the laser module, a novel feature set that better encapsulates variations due to noise, distance and human pose is proposed. This enhances the generalization of the system, while at the same time, increasing the outdoor performance in comparison with current methods. The vision module uses the combination of the histogram of oriented gradients descriptor and the linear support vector machine classifier. Current approaches use a fixed-size projection to define regions of interest on the image data using the range information from the laser range finder. When applied to small size unmanned ground vehicles, these techniques suffer from misalignment, due to platform vibrations and terrain irregularities. This is effectively addressed in this work by using a novel adaptive projection technique, which is based on a probabilistic formulation of the classifier performance. Finally, a probability calibration step is introduced in order to optimally fuse the information from both modules. Experiments in real world environments demonstrate the robustness of the proposed method.

  2. The 3D laser radar vision processor system

    NASA Astrophysics Data System (ADS)

    Sebok, T. M.

    1990-10-01

    Loral Defense Systems (LDS) developed a 3D Laser Radar Vision Processor system capable of detecting, classifying, and identifying small mobile targets as well as larger fixed targets, using three-dimensional laser radar imagery, for use with a robotic-type system. This processor system is designed to interface with the NASA Johnson Space Center in-house Extra Vehicular Activity (EVA) Retriever robot program and to provide it with the information needed to fetch and grasp targets in a space-type scenario.

  3. The 3D laser radar vision processor system

    NASA Technical Reports Server (NTRS)

    Sebok, T. M.

    1990-01-01

    Loral Defense Systems (LDS) developed a 3D Laser Radar Vision Processor system capable of detecting, classifying, and identifying small mobile targets as well as larger fixed targets, using three-dimensional laser radar imagery, for use with a robotic-type system. This processor system is designed to interface with the NASA Johnson Space Center in-house Extra Vehicular Activity (EVA) Retriever robot program and to provide it with the information needed to fetch and grasp targets in a space-type scenario.

  4. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision.

    PubMed

    Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck

    2008-04-10

    One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor: an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems, which require a large number of images with variations of projected patterns for dense range map acquisition, or conventional passive vision systems, which work well only in specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts: one in which the passive stereo vision helps the active vision, and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique based on dynamic programming, in which image regions between laser patterns are matched pixel-by-pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms work in real applications, the sensor system is implemented on a robotic system and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.

  5. Simple laser vision sensor calibration for surface profiling applications

    NASA Astrophysics Data System (ADS)

    Abu-Nabah, Bassam A.; ElSoussi, Adnane O.; Al Alami, Abed ElRahman K.

    2016-09-01

    Due to the relatively large structures in the Oil and Gas industry, original equipment manufacturers (OEMs) have been implementing custom-designed laser vision sensor (LVS) surface profiling systems as part of quality control in their manufacturing processes. The rough manufacturing environment and the continuous movement and misalignment of these custom-designed tools adversely affect the accuracy of laser-based vision surface profiling applications. Accordingly, Oil and Gas businesses have been raising the demand from the OEMs to implement practical and robust LVS calibration techniques prior to running any visual inspections. This effort introduces an LVS calibration technique representing a simplified version of two known calibration techniques, which are commonly implemented to obtain a calibrated LVS system for surface profiling applications. Both calibration techniques are implemented virtually and experimentally to scan simulated and three-dimensional (3D) printed features of known profiles, respectively. Scanned data is transformed from the camera frame to points in the world coordinate system and compared with the input profiles to validate the introduced calibration technique's capability against the more complex approach and to preliminarily assess the measurement technique for weld profiling applications. Moreover, the sensitivity to stand-off distances is analyzed to illustrate the practicality of the presented technique.
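The camera-frame-to-world transformation described above, combined with the laser-plane intersection at the heart of any LVS, can be sketched as follows (the pinhole intrinsics tuple and the plane parameterization n·p = d are modeling assumptions, not the paper's notation):

```python
def lvs_point(u, v, intr, plane_n, plane_d, R, t):
    """World-frame 3-D point of a laser-stripe pixel for a calibrated LVS.

    intr = (fx, fy, cx, cy): assumed pinhole intrinsics.  The laser
    sheet is the plane n . p = d in the camera frame.  The pixel's
    viewing ray is intersected with that plane, then the camera-frame
    point is moved to the world frame with the extrinsics (R, t).
    """
    fx, fy, cx, cy = intr
    r = ((u - cx) / fx, (v - cy) / fy, 1.0)           # viewing ray direction
    s = plane_d / sum(n * ri for n, ri in zip(plane_n, r))
    p_cam = tuple(s * ri for ri in r)                 # ray/plane intersection
    # p_world = R @ p_cam + t
    return [sum(R[i][j] * p_cam[j] for j in range(3)) + t[i] for i in range(3)]
```

Calibration, in these terms, is exactly the job of estimating `plane_n`, `plane_d`, `R`, and `t` (and optionally `intr`) before any profile is scanned.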

  6. A Virtual Blind Cane Using a Line Laser-Based Vision System and an Inertial Measurement Unit

    PubMed Central

    Dang, Quoc Khanh; Chee, Youngjoon; Pham, Duy Duong; Suh, Young Soo

    2016-01-01

    A virtual blind cane system for indoor application, including a camera, a line laser and an inertial measurement unit (IMU), is proposed in this paper. Working as a blind cane, the proposed system helps a blind person find the type of obstacle and the distance to it. The distance from the user to the obstacle is estimated by extracting the laser coordinate points on the obstacle, as well as tracking the system pointing angle. The paper provides a simple method to classify the obstacle’s type by analyzing the laser intersection histogram. Real experimental results are presented to show the validity and accuracy of the proposed system. PMID:26771618
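
    The distance estimate from the laser coordinates and the tracked pointing angle reduces, in the simplest floor-plane case, to a single trigonometric relation. A hypothetical sketch (the actual system fuses IMU angles with the extracted laser points; the flat-floor geometry here is an assumption for illustration):

```python
# Horizontal distance to the floor point hit by the laser, given the device
# height and the IMU-tracked pitch angle below the horizontal.
# Flat-floor geometry is an illustrative assumption.

import math

def obstacle_distance(device_height_m, pitch_below_horizontal_rad):
    """Distance along the floor from the user to the laser spot."""
    return device_height_m / math.tan(pitch_below_horizontal_rad)
```

    Tilting the device further down (a larger pitch angle) brings the laser spot closer, so the reported distance shrinks, which matches the intuition of sweeping a cane.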

  7. SU-F-T-525: Monitoring Deep-Inspiratory Breathhold with a Laser Sensor for Radiation Therapy of Left Breast Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tai, A; Currey, A; Li, X Allen

    2016-06-15

    Purpose: Radiation therapy (RT) of left-sided breast cancers with deep-inspiratory breathhold (DIBH) can reduce the dose to the heart. The purpose of this study is to develop and test a new laser-based tool to improve the ease of RT delivery using DIBH. Methods: A laser sensor together with a breathing monitoring device (Anzai Inc., Japan) was used to record the surface breathing motion of a phantom/volunteers. The device projects a laser beam onto the chest wall, and the reflected light creates a focal spot on a light-detecting element. The position change of the focal spot correlates with the patient's breathing motion and is measured through the change of current in the light-detecting element. The signal is amplified and displayed on a computer screen, which is used to trigger radiation gating. The laser sensor can be easily mounted to the simulation/treatment couch with a fixing plate and a magnet base, and has a sensitivity range of 10 to 40 cm from the patient. The correlation between the breathing signals detected by the laser sensor and by VisionRT is also investigated. Results: The measured breathing signal from the laser sensor is stable and reproducible and has no noticeable delay. It correlates well with the VisionRT surface imaging system. The DIBH reference level does not change with movement of the couch because the laser sensor and couch move together. Conclusion: The Anzai laser sensor provides a cost-effective way to improve beam gating with DIBH for treating left breast cancer. It can be used alone or together with VisionRT to determine the correct DIBH level during radiation treatment of left breast cancer with DIBH.

  8. A Detailed Evaluation of a Laser Triangulation Ranging System for Mobile Robots

    DTIC Science & Technology

    1983-08-01

    System Accuracy Factors … Detector "Cone of Vision" Problem … Laser Triangulation Justification … product of these advances. Since 1968, when the effort began under a NASA grant, the project has undergone many changes both in the design goals and in … Vision System Accuracy Factors: The accuracy of the data obtained by a triangulation system depends on essentially three independent factors. They …

  9. Laser Opto-Electronic Correlator for Robotic Vision Automated Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Marzwell, Neville

    1995-01-01

    A compact laser opto-electronic correlator for pattern recognition has been designed, fabricated, and tested. Specifically, it is a translation sensitivity adjustable compact optical correlator (TSACOC) utilizing convergent laser beams for the holographic filter. Its properties and performance, including the location of the correlation peak and the effects of lateral and longitudinal displacements for both filters and input images, are systematically analyzed based on the nonparaxial approximation for the reference beam. The theoretical analyses have been verified in experiments. In applying the TSACOC to important practical problems, including fingerprint identification, we have found that the tolerance of the system to lateral displacement of the input can be conveniently increased by changing a geometric factor of the system. The system can be compactly packaged using miniature laser diode sources and can be used both in space by the National Aeronautics and Space Administration (NASA) and in ground commercial applications, which include robotic vision and industrial inspection for automated quality control operations. The personnel of Standard International will work closely with the Jet Propulsion Laboratory (JPL) to transfer the technology to the commercial market. Prototype systems will be fabricated to test the market and perfect the product. Large-scale production will follow after successful results are achieved.

  10. Design of noise barrier inspection system for high-speed railway

    NASA Astrophysics Data System (ADS)

    Liu, Bingqian; Shao, Shuangyun; Feng, Qibo; Ma, Le; Cholryong, Kim

    2016-10-01

    The damage of noise barriers can severely reduce the transportation safety of high-speed railways. In this paper, an online inspection system for noise barriers based on laser vision is proposed for the safety of high-speed railways. The inspection system, consisting mainly of a fast camera and a line laser, is installed in the first carriage of the high-speed CIT (Composited Inspection Train). A laser line is projected onto the surface of the noise barriers, and images of the light line are captured by the camera while the train runs at high speed. The distance between the inspection system and the noise barrier is obtained from the laser triangulation principle. The results of field tests show that the proposed system meets the requirements of high speed and high accuracy for measuring the contour distortion of the noise barriers.
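
    The laser triangulation principle invoked above gives range from the camera focal length, the camera-laser baseline, and the imaged offset of the laser line. A minimal sketch with illustrative numbers (not the system's actual parameters):

```python
# Classic laser-triangulation range relation z = f * b / d:
# focal length in pixels, camera-laser baseline in metres, and the offset of
# the imaged laser line from the reference column in pixels.
# All values below are illustrative assumptions.

def triangulation_range(focal_px, baseline_m, spot_offset_px):
    """Range to the surface lit by the laser line, in metres."""
    return focal_px * baseline_m / spot_offset_px
```

    Because the offset appears in the denominator, a fixed pixel-measurement error costs more range accuracy at long distances, which is why such systems are characterized over a specified stand-off range.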

  11. A vision-based approach for the direct measurement of displacements in vibrating systems

    NASA Astrophysics Data System (ADS)

    Mazen Wahbeh, A.; Caffrey, John P.; Masri, Sami F.

    2003-10-01

    This paper reports the results of an analytical and experimental study to develop, calibrate, implement, and evaluate the feasibility of a novel vision-based approach for obtaining direct measurements of the absolute displacement time history at selectable locations of dispersed civil infrastructure systems such as long-span bridges. The measurements were obtained using a highly accurate camera in conjunction with a laser tracking reference. Calibration of the vision system was conducted in the lab to establish performance envelopes and data-processing algorithms to extract the needed information from the captured vision scene. Subsequently, the monitoring apparatus was installed in the vicinity of the Vincent Thomas Bridge in the metropolitan Los Angeles region. This allowed deployment of the instrumentation system under realistic conditions so as to determine the field implementation issues that need to be addressed. It is shown that the proposed approach has the potential of leading to an economical and robust system for obtaining direct, simultaneous measurements at several locations of the displacement time histories of realistic infrastructure systems undergoing complex three-dimensional deformations.

  12. Multiparameter thermo-mechanical OCT-based characterization of laser-induced cornea reshaping

    NASA Astrophysics Data System (ADS)

    Zaitsev, Vladimir Yu.; Matveyev, Alexandr L.; Matveev, Lev A.; Gelikonov, Grigory V.; Vitkin, Alex; Omelchenko, Alexander I.; Baum, Olga I.; Shabanov, Dmitry V.; Sovetsky, Alexander A.; Sobol, Emil N.

    2017-02-01

    Phase-sensitive optical coherence tomography (OCT) is used for visualizing dynamic and cumulative strains and cornea-shape changes during laser-produced tissue heating. Such non-destructive (non-ablative) cornea reshaping can serve as the basis of emerging technologies for laser vision correction. In experiments with cartilaginous samples, polyacrylamide phantoms, and excised rabbit eyes, we demonstrate the ability of the developed OCT system to simultaneously characterize transient and cumulative strain distributions, surface displacements, and scattering tissue properties, as well as the possibility of temperature estimation via thermal-expansion measurements. The proposed approach can be implemented in prospective real-time OCT systems to ensure the safety of new methods of laser cornea reshaping.

  13. Hi-Vision telecine system using pickup tube

    NASA Astrophysics Data System (ADS)

    Iijima, Goro

    1992-08-01

    Hi-Vision broadcasting, offering far more lifelike pictures than those produced by existing television broadcasting systems, has enormous potential in both industrial and commercial fields. The dissemination of the Hi-Vision system will enable vivid, movie theater quality pictures to be readily enjoyed in homes in the near future. To convert motion film pictures into Hi-Vision signals, a telecine system is needed. The Hi-Vision telecine systems currently under development are the "laser telecine," "flying-spot telecine," and "Saticon telecine" systems. This paper provides an overview of the pickup tube type Hi-Vision telecine system (referred to herein as the Saticon telecine system) developed and marketed by Ikegami Tsushinki Co., Ltd.

  14. Design of a Multi-Sensor Cooperation Travel Environment Perception System for Autonomous Vehicle

    PubMed Central

    Chen, Long; Li, Qingquan; Li, Ming; Zhang, Liang; Mao, Qingzhou

    2012-01-01

    This paper describes the environment perception system designed for the intelligent vehicle SmartV-II, which won the 2010 Future Challenge. This system utilizes the cooperation of multiple lasers and cameras to realize several functions necessary for autonomous navigation: road curb detection, lane detection, and traffic sign recognition. Multiple single-scan lasers are integrated to detect the road curb based on a Z-variance method. Vision-based lane detection is realized by a two-scan method combined with an image model. A Haar-like feature-based method is applied for traffic sign detection, and a SURF matching method is used for sign classification. The results of experiments validate the effectiveness of the proposed algorithms and of the whole system.
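
    The Z-variance idea for curb detection can be sketched as thresholding the height variance in a sliding window along a laser scan: flat road gives near-zero variance, while a curb step produces a spike. The window size and threshold below are illustrative assumptions, not the paper's tuned values:

```python
# Flag curb candidates where the variance of laser-measured heights within a
# sliding window exceeds a threshold. Window length and threshold are
# illustrative assumptions.

import statistics

def curb_indices(z_profile, window=3, var_thresh=0.001):
    """Indices of windows whose height variance suggests a curb edge."""
    hits = []
    for i in range(len(z_profile) - window + 1):
        w = z_profile[i:i + window]
        if statistics.pvariance(w) > var_thresh:
            hits.append(i)
    return hits
```

    A flat profile yields no detections, while a 15 cm step is flagged exactly at the windows straddling the discontinuity.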

  15. Wavefront Measurement in Ophthalmology

    NASA Astrophysics Data System (ADS)

    Molebny, Vasyl

    Wavefront sensing or aberration measurement in the eye is a key problem in refractive surgery and vision correction with laser. The accuracy of these measurements is critical for the outcome of the surgery. Practically all clinical methods use laser as a source of light. To better understand the background, we analyze the pre-laser techniques developed over centuries. They allowed new discoveries of the nature of the optical system of the eye, and many served as prototypes for laser-based wavefront sensing technologies. Hartmann's test was strengthened by Platt's lenslet matrix and the CCD two-dimensional photodetector acquired a new life as a Hartmann-Shack sensor in Heidelberg. Tscherning's aberroscope, invented in France, was transformed into a laser device known as a Dresden aberrometer, having seen its reincarnation in Germany with Seiler's help. The clinical ray tracing technique was brought to life by Molebny in Ukraine, and skiascopy was created by Fujieda in Japan. With the maturation of these technologies, new demands now arise for their wider implementation in optometry and vision correction with customized contact and intraocular lenses.

  16. Intelligent surgical laser system configuration and software implementation

    NASA Astrophysics Data System (ADS)

    Hsueh, Chi-Fu T.; Bille, Josef F.

    1992-06-01

    An intelligent surgical laser system, which can help the ophthalmologist achieve higher precision and control during procedures, has been developed by ISL as model CLS 4001. In addition to the laser and laser delivery system, the system is equipped with a vision system (IPU), robotics motion control (MCU), and a closed-loop tracking system (ETS) that tracks the eye in three dimensions (X, Y, and Z). The initial patient setup is computer controlled with guidance from the vision system. The tracking system is automatically engaged when the target is in position. A multi-level tracking system, developed by integrating the vision and tracking systems, has been able to keep the laser beam precisely on target. The capabilities of automatic eye setup and tracking in three dimensions provide improved accuracy and measurement repeatability. The system is operated through the Surgical Control Unit (SCU). The SCU communicates with the IPU and the MCU through both Ethernet and RS232. Various scanning patterns (e.g., line, curve, circle, spiral) can be selected with given parameters. When a warning is activated, a voice message is played that normally requires a panel-touch acknowledgement. The reliability of the system is ensured on three levels: (1) hardware, (2) software real-time monitoring, and (3) user. The system is currently under clinical validation.

  17. Optimization of dynamic envelope measurement system for high speed train based on monocular vision

    NASA Astrophysics Data System (ADS)

    Wu, Bin; Liu, Changjie; Fu, Luhua; Wang, Zhong

    2018-01-01

    The dynamic envelope curve is defined as the maximum limit outline caused by various adverse effects during the running of a train, and it is an important basis for setting railway boundaries. At present, the dynamic envelope curve of high-speed vehicles is mainly measured by binocular vision, and present measuring systems suffer from poor portability, complicated processing, and high cost. In this paper, a new measurement system based on monocular vision measurement theory and an analysis of the test environment is designed; the measurement system parameters, the calibration of a camera with a wide field of view, and the calibration of the laser plane are designed and optimized. The accuracy has been verified to be within 2 mm by repeated tests and analysis of the experimental data, validating the feasibility and adaptability of the measurement system. The system offers lower cost, a simpler measurement and data-processing workflow, and more reliable data, and it needs no matching algorithm.
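
    Calibrating the laser plane, as mentioned above, amounts to fitting a plane to 3D points known to lie on it. A common least-squares sketch via SVD of the centered point cloud (an assumed generic approach, not the paper's optimized calibration procedure):

```python
# Fit a plane n.x + d = 0 to 3D points by total least squares: the normal is
# the singular vector of the centered points with the smallest singular value.

import numpy as np

def fit_plane(points):
    """Return (unit normal n, offset d) of the least-squares plane."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)  # rows of vt: principal directions
    n = vt[-1]                                # direction of least spread
    return n, -n @ centroid
```

    Feeding in laser-line points observed on a calibration target at several known poses would recover the plane that the measurement model then intersects with each camera ray.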

  18. High-Speed Laser Image Analysis of Plume Angles for Pressurised Metered Dose Inhalers: The Effect of Nozzle Geometry.

    PubMed

    Chen, Yang; Young, Paul M; Murphy, Seamus; Fletcher, David F; Long, Edward; Lewis, David; Church, Tanya; Traini, Daniela

    2017-04-01

    The aim of this study is to investigate aerosol plume geometries of pressurised metered dose inhalers (pMDIs) using a high-speed laser image system with different actuator nozzle materials and designs. Actuators made from aluminium, PET and PTFE were manufactured with four different nozzle designs: cone, flat, curved cone and curved flat. Plume angles and spans generated using the designed actuator nozzles with four solution-based pMDI formulations were imaged using Oxford Lasers EnVision system and analysed using EnVision Patternate software. Reduced plume angles for all actuator materials and nozzle designs were observed with pMDI formulations containing drug with high co-solvent concentration (ethanol) due to the reduced vapour pressure. Significantly higher plume angles were observed with the PTFE flat nozzle across all formulations, which could be a result of the nozzle geometry and material's hydrophobicity. The plume geometry of pMDI aerosols can be influenced by the vapour pressure of the formulation, nozzle geometries and actuator material physiochemical properties.

  19. Femtosecond all-solid-state laser for refractive surgery

    NASA Astrophysics Data System (ADS)

    Zickler, Leander; Han, Meng; Giese, Günter; Loesel, Frieder H.; Bille, Josef F.

    2003-06-01

    Refractive surgery in the pursuit of perfect vision (e.g., 20/10) requires first an exact measurement of the aberrations induced by the eye and then a sophisticated surgical approach. A recent extension of wavefront measurement techniques and adaptive optics to ophthalmology has quantitatively characterized the quality of the human eye. The next milestone towards perfect vision is developing a more efficient and precise laser scalpel and evaluating minimally invasive laser surgery strategies. Femtosecond all-solid-state MOPA lasers based on passive mode locking and chirped pulse amplification are excellent candidates for eye surgery due to their stability, ultra-high intensity, and compact tabletop size. Furthermore, given their peak emission in the near IR and diffraction-limited focusing abilities, surgical laser systems performing precise intrastromal incisions for corneal flap resection and intrastromal corneal reshaping promise significant improvement over today's Photorefractive Keratectomy (PRK) and Laser-Assisted In Situ Keratomileusis (LASIK) techniques, which utilize UV excimer lasers. Through dispersion control and optimized regenerative amplification, a compact femtosecond all-solid-state laser with pulse energy well above the LIOB threshold and a kHz repetition rate is constructed. After applying a pulse sequence to the eye, the modified corneal morphology is investigated by high-resolution microscopy (Multi-Photon/SHG Confocal Microscope).

  20. A Structured Light Sensor System for Tree Inventory

    NASA Technical Reports Server (NTRS)

    Chien, Chiun-Hong; Zemek, Michael C.

    2000-01-01

    Tree inventory refers to the measurement and estimation of marketable wood volume in a piece of land or forest for purposes such as investment or loan applications. Existing techniques rely on trained surveyors conducting measurements manually using simple optical or mechanical devices, and hence are time-consuming, subjective, and error-prone. The advance of computer vision techniques makes it possible to conduct automatic measurements that are more efficient, objective, and reliable. This paper describes 3D measurement of tree diameters using a uniquely designed ensemble of two line-laser emitters rigidly mounted on a video camera. The proposed laser-camera system relies on the fixed distance between two parallel laser planes and the projections of the laser lines to calculate tree diameters. Performance of the laser-camera system is further enhanced by fusing information derived from the structured lighting with that contained in the video images. A comparison is made between the laser-camera sensor system and a stereo vision system previously developed for measuring tree diameters.

  1. Development of collision avoidance system for useful UAV applications using image sensors with laser transmitter

    NASA Astrophysics Data System (ADS)

    Cheong, M. K.; Bahiki, M. R.; Azrad, S.

    2016-10-01

    The main goal of this study is to demonstrate an approach to collision avoidance on a Quadrotor Unmanned Aerial Vehicle (QUAV) using image sensors with a colour-based tracking method. A pair of high-definition (HD) stereo cameras was chosen as the stereo vision sensor to obtain depth data from flat object surfaces. A laser transmitter was utilized to project a high-contrast tracking spot for depth calculation by common triangulation. A stereo vision algorithm was developed to acquire the distance from the tracked point to the QUAV, and a control algorithm was designed to manipulate the QUAV's response based on the calculated depth. Attitude and position controllers were designed using the non-linear model with the help of an OptiTrack motion tracking system. A number of collision avoidance flight tests were carried out to validate the performance of the stereo vision and control algorithms. In the results, the UAV was able to hover with fairly good accuracy during both static and dynamic short-range collision avoidance. Collision avoidance performance was better with obstacles of dull surfaces than with shiny surfaces. The minimum achievable collision avoidance distance was 0.4 m. The approach is suitable for short-range collision avoidance.
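
    Depth from the projected high-contrast spot follows the standard stereo triangulation relation Z = f·B/d, with the disparity d taken from the spot's image position in the two cameras. A hedged sketch (brightest-pixel spot detection and all numbers are assumptions, not the study's calibration):

```python
# Locate the high-contrast laser spot as the brightest pixel in corresponding
# scanlines, then convert the column disparity to depth via Z = f * B / d.
# Detection method and parameter values are illustrative assumptions.

def spot_disparity(left_row, right_row):
    """Column difference of the brightest pixel in corresponding scanlines."""
    return left_row.index(max(left_row)) - right_row.index(max(right_row))

def stereo_depth(focal_px, baseline_m, disparity_px):
    """Pinhole-stereo depth in metres from disparity in pixels."""
    return focal_px * baseline_m / disparity_px
```

    Projecting a laser spot side-steps the weakness the abstract notes on feature-poor flat surfaces: the correspondence problem collapses to finding one bright, unambiguous point in each image.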

  2. Optical Coherence Tomography–Based Corneal Power Measurement and Intraocular Lens Power Calculation Following Laser Vision Correction (An American Ophthalmological Society Thesis)

    PubMed Central

    Huang, David; Tang, Maolong; Wang, Li; Zhang, Xinbo; Armour, Rebecca L.; Gattey, Devin M.; Lombardi, Lorinna H.; Koch, Douglas D.

    2013-01-01

    Purpose: To use optical coherence tomography (OCT) to measure corneal power and improve the selection of intraocular lens (IOL) power in cataract surgeries after laser vision correction. Methods: Patients with previous myopic laser vision corrections were enrolled in this prospective study from two eye centers. Corneal thickness and power were measured by Fourier-domain OCT. Axial length, anterior chamber depth, and automated keratometry were measured by a partial coherence interferometer. An OCT-based IOL formula was developed. The mean absolute error of the OCT-based formula in predicting postoperative refraction was compared to two regression-based IOL formulae for eyes with previous laser vision correction. Results: Forty-six eyes of 46 patients all had uncomplicated cataract surgery with monofocal IOL implantation. The mean arithmetic prediction error of postoperative refraction was 0.05 ± 0.65 diopter (D) for the OCT formula, 0.14 ± 0.83 D for the Haigis-L formula, and 0.24 ± 0.82 D for the no-history Shammas-PL formula. The mean absolute error was 0.50 D for OCT compared to a mean absolute error of 0.67 D for Haigis-L and 0.67 D for Shammas-PL. The adjusted mean absolute error (average prediction error removed) was 0.49 D for OCT, 0.65 D for Haigis-L (P=.031), and 0.62 D for Shammas-PL (P=.044). For OCT, 61% of the eyes were within 0.5 D of prediction error, whereas 46% were within 0.5 D for both Haigis-L and Shammas-PL (P=.034). Conclusions: The predictive accuracy of OCT-based IOL power calculation was better than Haigis-L and Shammas-PL formulas in eyes after laser vision correction. PMID:24167323
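
    The three error statistics compared above (mean arithmetic prediction error, mean absolute error, and adjusted mean absolute error with the average prediction error removed) can be computed from a list of per-eye refraction prediction errors as follows (a generic sketch of the standard definitions, not the study's analysis code):

```python
# Summary statistics for IOL-formula prediction errors (in diopters):
# mean error, mean absolute error (MAE), and adjusted MAE, i.e. the MAE after
# subtracting the systematic offset (mean error) from every eye.

import statistics

def prediction_stats(errors):
    """Return (mean error, MAE, adjusted MAE) for a list of errors."""
    me = statistics.mean(errors)
    mae = statistics.mean(abs(e) for e in errors)
    adj = statistics.mean(abs(e - me) for e in errors)
    return me, mae, adj
```

    The adjusted MAE is the fairer comparison across formulas, since a constant offset can be absorbed into a surgeon's personalized lens constant while the residual spread cannot.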

  3. Laser projection positioning of spatial contour curves via a galvanometric scanner

    NASA Astrophysics Data System (ADS)

    Tu, Junchao; Zhang, Liyan

    2018-04-01

    The technology of laser projection positioning is widely applied in advanced manufacturing fields (e.g., composite plying, part location, and installation). To exploit it better, a laser projection positioning (LPP) system is designed and implemented. First, the LPP system is built from a laser galvanometric scanning (LGS) system and a binocular vision system, and the system model is constructed by applying a single-hidden-layer feed-forward neural network (SLFN). Second, the LGS system and the binocular system, which are mutually independent, are integrated through a data-driven calibration method based on the extreme learning machine (ELM) algorithm. Finally, a projection positioning method is proposed within the framework of the calibrated SLFN system model. A well-designed experiment is conducted to verify the viability and effectiveness of the proposed system. In addition, the accuracy of projection positioning is evaluated to show that the LPP system achieves a good localization effect.

  4. Airborne laser-diode-array illuminator assessment for the night vision's airborne mine-detection arid test

    NASA Astrophysics Data System (ADS)

    Stetson, Suzanne; Weber, Hadley; Crosby, Frank J.; Tinsley, Kenneth; Kloess, Edmund; Nevis, Andrew J.; Holloway, John H., Jr.; Witherspoon, Ned H.

    2004-09-01

    The Airborne Littoral Reconnaissance Technologies (ALRT) project has developed and tested a nighttime operational minefield detection capability using commercial off-the-shelf high-power Laser Diode Arrays (LDAs). The Coastal Systems Station's ALRT project, under funding from the Office of Naval Research (ONR), has been designing, developing, integrating, and testing commercial arrays using a Cessna airborne platform over the last several years. This has led to the development of the Airborne Laser Diode Array Illuminator wide field-of-view (ALDAI-W) imaging test bed system. The ALRT project tested the ALDAI-W at the Army Night Vision Lab's Airborne Mine Detection Arid Test. By participating in Night Vision's test, ALRT was able to collect initial prototype nighttime operational data using the ALDAI-W, showing impressive results and pioneering the way for the final test bed demonstration conducted in September 2003. This paper describes the ALDAI-W Arid Test and results, along with the processing steps used to generate imagery.

  5. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.

    1991-01-01

    Research activity has shifted from computer graphics and vision systems to the broader scope of applying concepts of artificial intelligence to robotics. Specifically, the research is directed toward developing Artificial Neural Networks, Expert Systems, and Laser Imaging Techniques for Autonomous Space Robots.

  6. Identifying and Tracking Pedestrians Based on Sensor Fusion and Motion Stability Predictions

    PubMed Central

    Musleh, Basam; García, Fernando; Otamendi, Javier; Armingol, José Mª; de la Escalera, Arturo

    2010-01-01

    The lack of trustworthy sensors makes development of Advanced Driver Assistance System (ADAS) applications a tough task. It is necessary to develop intelligent systems by combining reliable sensors and real-time algorithms to send the proper, accurate messages to the drivers. In this article, an application to detect and predict the movement of pedestrians in order to prevent an imminent collision has been developed and tested under real conditions. The proposed application, first, accurately measures the position of obstacles using a two-sensor hybrid fusion approach: a stereo camera vision system and a laser scanner. Second, it correctly identifies pedestrians using intelligent algorithms based on polylines and pattern recognition related to leg positions (laser subsystem) and dense disparity maps and u-v disparity (vision subsystem). Third, it uses statistical validation gates and confidence regions to track the pedestrian within the detection zones of the sensors and predict their position in the upcoming frames. The intelligent sensor application has been experimentally tested with success while tracking pedestrians that cross and move in zigzag fashion in front of a vehicle. PMID:22163639

  7. Identifying and tracking pedestrians based on sensor fusion and motion stability predictions.

    PubMed

    Musleh, Basam; García, Fernando; Otamendi, Javier; Armingol, José Maria; de la Escalera, Arturo

    2010-01-01

    The lack of trustworthy sensors makes development of Advanced Driver Assistance System (ADAS) applications a tough task. It is necessary to develop intelligent systems by combining reliable sensors and real-time algorithms to send the proper, accurate messages to the drivers. In this article, an application to detect and predict the movement of pedestrians in order to prevent an imminent collision has been developed and tested under real conditions. The proposed application, first, accurately measures the position of obstacles using a two-sensor hybrid fusion approach: a stereo camera vision system and a laser scanner. Second, it correctly identifies pedestrians using intelligent algorithms based on polylines and pattern recognition related to leg positions (laser subsystem) and dense disparity maps and u-v disparity (vision subsystem). Third, it uses statistical validation gates and confidence regions to track the pedestrian within the detection zones of the sensors and predict their position in the upcoming frames. The intelligent sensor application has been experimentally tested with success while tracking pedestrians that cross and move in zigzag fashion in front of a vehicle.

  8. Multi-channel automotive night vision system

    NASA Astrophysics Data System (ADS)

    Lu, Gang; Wang, Li-jun; Zhang, Yi

    2013-09-01

    A four-channel automotive night vision system is designed and developed. It consists of four active near-infrared cameras and a multi-channel image-processing and display unit; the cameras are placed at the front, left, right, and rear of the automobile. The system uses a near-infrared laser light source whose beam is collimated; the source contains a thermoelectric cooler (TEC), can be synchronized with the camera focusing, and has automatic light-intensity adjustment, thereby ensuring image quality. The composition principle of the system is described in detail; on this basis, beam collimation, the LD driving and LD temperature control of the near-infrared laser light source, and the four-channel image processing and display are discussed. The system can be used for driver assistance, car BLIS, parking assistance, and car alarm systems, day and night.

  9. Implementation of a Multi-Robot Coverage Algorithm on a Two-Dimensional, Grid-Based Environment

    DTIC Science & Technology

    2017-06-01

    two planar laser range finders with a 180-degree field of view, color camera, vision beacons, and wireless communicator. In their system, the robots … (Master's thesis) … path-planning coverage algorithm for a multi-robot system in a two-dimensional, grid-based environment. We assess the applicability of a topology …

  10. The Right Track for Vision Correction

    NASA Technical Reports Server (NTRS)

    2003-01-01

    More and more people are putting away their eyeglasses and contact lenses as a result of laser vision correction surgery. LASIK, the most widely performed version of this surgical procedure, improves vision by reshaping the cornea, the clear front surface of the eye, using an excimer laser. One excimer laser system, Alcon's LADARVision 4000, utilizes a laser radar (LADAR) eye-tracking device that gives it unmatched precision. During LASIK surgery, laser pulses must be accurately placed to reshape the cornea. A challenge to this procedure is the patient's constant eye movement. A person's eyes make small, involuntary movements known as saccadic movements about 100 times per second. Since the saccadic movements do not stop during LASIK surgery, most excimer laser systems use an eye-tracking device that measures the movements and guides the placement of the laser beam. LADARVision's eye-tracking device stems from the LADAR technology originally developed through several Small Business Innovation Research (SBIR) contracts with NASA's Johnson Space Center and the U.S. Department of Defense's Ballistic Missile Defense Office (BMDO). In the 1980s, Johnson awarded Autonomous Technologies Corporation a Phase I SBIR contract to develop technology for autonomous rendezvous and docking of space vehicles to service satellites. During Phase II of the Johnson SBIR contract, Autonomous Technologies developed a prototype range and velocity imaging LADAR to demonstrate technology that could be used for this purpose.

  11. Review of technological advancements in calibration systems for laser vision correction

    NASA Astrophysics Data System (ADS)

    Arba-Mosquera, Samuel; Vinciguerra, Paolo; Verma, Shwetabh

    2018-02-01

    Using PubMed and our internal database, we extensively reviewed the literature on the technological advancements in calibration systems, with a motive to present an account of the development history, and latest developments in calibration systems used in refractive surgery laser systems. As a second motive, we explored the clinical impact of the error introduced due to the roughness in ablation and its corresponding effect on system calibration. The inclusion criterion for this review was strict relevance to the clinical questions under research. The existing calibration methods, including various plastic models, are highly affected by various factors involved in refractive surgery, such as temperature, airflow, and hydration. Surface roughness plays an important role in accurate measurement of ablation performance on calibration materials. The ratio of ablation efficiency between the human cornea and calibration material is very critical and highly dependent on the laser beam characteristics and test conditions. Objective evaluation of the calibration data and corresponding adjustment of the laser systems at regular intervals are essential for the continuing success and further improvements in outcomes of laser vision correction procedures.

  12. Development of a model of machine hand eye coordination and program specifications for a topological machine vision system

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote, supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions and an aerospace multiprocessor implementation is described.

  13. An Accurate Non-Cooperative Method for Measuring Textureless Spherical Target Based on Calibrated Lasers.

    PubMed

    Wang, Fei; Dong, Hang; Chen, Yanan; Zheng, Nanning

    2016-12-09

    Strong demands for accurate non-cooperative target measurement have been arising recently for the tasks of assembling and capturing. Spherical objects are one of the most common targets in these applications. However, the performance of the traditional vision-based reconstruction method was limited for practical use when handling poorly-textured targets. In this paper, we propose a novel multi-sensor fusion system for measuring and reconstructing textureless non-cooperative spherical targets. Our system consists of four simple lasers and a visual camera. This paper presents a complete framework of estimating the geometric parameters of textureless spherical targets: (1) an approach to calibrate the extrinsic parameters between a camera and simple lasers; and (2) a method to reconstruct the 3D position of the laser spots on the target surface and achieve the refined results via an optimized scheme. The experiment results show that our proposed calibration method can obtain a fine calibration result, which is comparable to the state-of-the-art LRF-based methods, and our calibrated system can estimate the geometric parameters with high accuracy in real time.
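    The geometric-parameter estimation for a spherical target can be illustrated with a minimal linear least-squares sphere fit: given 3D positions of laser spots on the target surface, the center and radius follow from one linear solve. This is a hedged sketch of the general technique, not the authors' optimized refinement scheme:

    ```python
    import numpy as np

    def fit_sphere(points):
        """Linear least-squares sphere fit: returns (center, radius).

        Expands |p - c|^2 = r^2 into the linear system
        2*c.p + (r^2 - |c|^2) = |p|^2 and solves for c and r.
        """
        p = np.asarray(points, dtype=float)
        A = np.hstack([2.0 * p, np.ones((len(p), 1))])
        b = (p ** 2).sum(axis=1)
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        center = x[:3]
        radius = np.sqrt(x[3] + center @ center)
        return center, radius
    ```

    In practice the reconstructed laser-spot positions are noisy, so the linear fit would typically seed a nonlinear refinement such as the optimization scheme the paper describes.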

  14. An Accurate Non-Cooperative Method for Measuring Textureless Spherical Target Based on Calibrated Lasers

    PubMed Central

    Wang, Fei; Dong, Hang; Chen, Yanan; Zheng, Nanning

    2016-01-01

    Strong demands for accurate non-cooperative target measurement have been arising recently for the tasks of assembling and capturing. Spherical objects are one of the most common targets in these applications. However, the performance of the traditional vision-based reconstruction method was limited for practical use when handling poorly-textured targets. In this paper, we propose a novel multi-sensor fusion system for measuring and reconstructing textureless non-cooperative spherical targets. Our system consists of four simple lasers and a visual camera. This paper presents a complete framework of estimating the geometric parameters of textureless spherical targets: (1) an approach to calibrate the extrinsic parameters between a camera and simple lasers; and (2) a method to reconstruct the 3D position of the laser spots on the target surface and achieve the refined results via an optimized scheme. The experiment results show that our proposed calibration method can obtain a fine calibration result, which is comparable to the state-of-the-art LRF-based methods, and our calibrated system can estimate the geometric parameters with high accuracy in real time. PMID:27941705

  15. Nanosatellite Maneuver Planning for Point Cloud Generation With a Rangefinder

    DTIC Science & Technology

    2015-06-05

    aided active vision systems [11], dense stereo [12], and TriDAR [13]. However, these systems are unsuitable for a nanosatellite system from power, size...command profiles as well as improving the fidelity of gap detection with better filtering methods for background objects. For example, attitude...application of a single beam laser rangefinder (LRF) to point cloud generation, shape detection, and shape reconstruction for a space-based space

  16. Design of a dynamic test platform for autonomous robot vision systems

    NASA Technical Reports Server (NTRS)

    Rich, G. C.

    1980-01-01

    The concept and design of a dynamic test platform for development and evaluation of a robot vision system is discussed. The platform is to serve as a diagnostic and developmental tool for future work with the RPI Mars Rover's multi laser/multi detector vision system. The platform allows testing of the vision system while its attitude is varied, statically or periodically. The vision system is mounted on the test platform, where it can be subjected to a wide variety of simulated motions and thus examined in a controlled, quantitative fashion. Defining and modeling Rover motions and designing the platform to emulate these motions are also discussed. Individual aspects of the design process, such as the structure, driving linkages, and motors and transmissions, are treated separately.

  17. Quality inspection guided laser processing of irregular shape objects by stereo vision measurement: application in badminton shuttle manufacturing

    NASA Astrophysics Data System (ADS)

    Qi, Li; Wang, Shun; Zhang, Yixin; Sun, Yingying; Zhang, Xuping

    2015-11-01

    The quality inspection process is usually carried out after the first processing of the raw materials, such as cutting and milling, because the parts of the materials to be used are unidentified until they have been trimmed. If the quality of the material is assessed before the laser process, the energy and effort wasted on defective materials can be saved. We propose a new production scheme that achieves quantitative quality inspection prior to the initial laser cutting by means of three-dimensional (3-D) vision measurement. First, the 3-D model of the object is reconstructed by the stereo cameras, from which the spatial cutting path is derived. Second, collaborating with another rear camera, the 3-D cutting path is reprojected to both the frontal and rear views of the object, thus generating the regions-of-interest (ROIs) for surface defect analysis. Accurate vision-guided laser processing and reprojection-based ROI segmentation are enabled by a global-optimization-based trinocular calibration method. The prototype system was built and tested with the processing of raw duck feathers for high-quality badminton shuttle manufacture. Incorporating a two-dimensional wavelet-decomposition-based defect analysis algorithm, both the geometrical and appearance features of the raw feathers are quantified before they are cut into small patches, resulting in fully automatic feather cutting and sorting.

  18. Vision-based weld pool boundary extraction and width measurement during keyhole fiber laser welding

    NASA Astrophysics Data System (ADS)

    Luo, Masiyang; Shin, Yung C.

    2015-01-01

    In keyhole fiber laser welding processes, the weld pool behavior is essential to determining welding quality. To better observe and control the welding process, accurate extraction of the weld pool boundary as well as its width is required. This work presents a weld pool edge detection technique based on an off-axis green illumination laser and a coaxial image capturing system that consists of a CMOS camera and optical filters. Accounting for differences in image quality, a complete edge detection algorithm is proposed based on a local-maximum-gradient-of-greyness search approach and linear interpolation. The extracted weld pool geometry and width are validated against actual weld width measurements and predictions by a numerical multi-phase model.
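    A minimal sketch of a greyness-gradient edge search with sub-pixel refinement along a 1-D intensity profile is shown below. The parabolic interpolation of the gradient peak is one common reading of the interpolation step, not necessarily the authors' exact formulation:

    ```python
    import numpy as np

    def edge_position(profile):
        """Locate an edge along a 1-D grey-level profile.

        Finds the sample with the largest absolute grey-level gradient,
        then refines it to sub-pixel precision by fitting a parabola to
        the three neighbouring gradient magnitudes.
        """
        g = np.abs(np.gradient(np.asarray(profile, dtype=float)))
        i = int(np.argmax(g[1:-1])) + 1          # ignore the borders
        denom = g[i - 1] - 2.0 * g[i] + g[i + 1]
        offset = 0.0 if denom == 0 else 0.5 * (g[i - 1] - g[i + 1]) / denom
        return i + offset
    ```

    Running this along each image row of the weld pool would yield one boundary point per row; the pool width is then the distance between the left and right boundary positions.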

  19. The guidance methodology of a new automatic guided laser theodolite system

    NASA Astrophysics Data System (ADS)

    Zhang, Zili; Zhu, Jigui; Zhou, Hu; Ye, Shenghua

    2008-12-01

    Spatial coordinate measurement systems such as theodolites, laser trackers, and total stations have wide application in manufacturing and certification processes. The traditional operation of theodolites is manual and time-consuming, which does not meet the need for online industrial measurement, while laser trackers and total stations need reflective targets and therefore cannot realize noncontact, automatic measurement. A new automatic guided laser theodolite system is presented to achieve automatic, noncontact measurement with high precision and efficiency. It is comprised of two sub-systems: the basic measurement system and the control and guidance system. The former is formed by two laser motorized theodolites that accomplish the fundamental measurement tasks, while the latter consists of a camera and vision system unit mounted on a mechanical displacement unit to provide azimuth information of the measured points. The mechanical displacement unit can rotate horizontally and vertically to direct the camera to the desired orientation so that the camera can scan every measured point in the measuring field; the azimuth of the corresponding point is then calculated for the laser motorized theodolites to move accordingly and aim at it. In this paper the whole system composition and measuring principle are analyzed, and then the emphasis is laid on the guidance methodology for the laser points from the theodolites to move towards the measured points. The guidance process is implemented based on the coordinate transformation between the basic measurement system and the control and guidance system. With the view field angle of the vision system unit and the world coordinates of the control and guidance system obtained through coordinate transformation, the azimuth information of the measurement area that the camera points at can be attained.
The momentary horizontal and vertical changes of the mechanical displacement movement are also considered and calculated to provide real-time azimuth information of the pointed measurement area, by which the motorized theodolite moves accordingly. This methodology realizes the predetermined location of the laser points within the camera-pointed scope, so that it accelerates the measuring process and replaces manual operation with approximate guidance. The simulation results show that the proposed method of automatic guidance is effective and feasible, providing good tracking performance for the predetermined location of the laser points.

  20. Ranging Apparatus and Method Implementing Stereo Vision System

    NASA Technical Reports Server (NTRS)

    Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a left and right video camera mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video camera and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
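    The ranging principle can be sketched end-to-end: difference pre- and post-illumination images to isolate the laser spot in each camera (the common-pixel elimination step), then convert the horizontal disparity to range with the standard rectified-stereo relation Z = f·B/d. A hypothetical illustration, assuming rectified cameras and a single bright spot:

    ```python
    import numpy as np

    def laser_spot_range(left_pre, left_lit, right_pre, right_lit,
                         focal_px, baseline_m):
        """Estimate range to a laser spot with a rectified stereo pair.

        Each camera contributes an image taken before the laser is on
        (`*_pre`) and one with the spot visible (`*_lit`); differencing
        the two suppresses the static background.  Range follows from
        the rectified-stereo relation Z = f * B / disparity.
        """
        def spot_x(pre, lit):
            diff = np.abs(lit.astype(float) - pre.astype(float))
            ys, xs = np.nonzero(diff > diff.max() * 0.5)
            return xs.mean()                     # spot centroid column

        disparity = spot_x(left_pre, left_lit) - spot_x(right_pre, right_lit)
        return focal_px * baseline_m / disparity
    ```

    The threshold at half the maximum difference is an illustrative choice; any segmentation that isolates the laser spot in both views would serve.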

  1. Method of recognizing the high-speed railway noise barriers based on the distance image

    NASA Astrophysics Data System (ADS)

    Ma, Le; Shao, Shuangyun; Feng, Qibo; Liu, Bingqian; Kim, Chol Ryong

    2016-10-01

    The damage or loss of noise barriers is one of the important hidden dangers to the safety of high-speed railways. In order to obtain the vibration information of the noise barriers, online detection systems based on laser vision were proposed. The systems capture images of the laser stripe on the noise barriers and export data files containing distance information between the detection systems on the train and the noise barriers. The vibration status or damage of the noise barriers can be estimated from the distance information. In this paper, we focus on the method of separating the area of the noise barrier from the background automatically. The test results showed that the proposed method performs with good efficiency and accuracy.

  2. A flexible 3D laser scanning system using a robotic arm

    NASA Astrophysics Data System (ADS)

    Fei, Zixuan; Zhou, Xiang; Gao, Xiaofei; Zhang, Guanliang

    2017-06-01

    In this paper, we present a flexible 3D scanning system based on a MEMS scanner mounted on an industrial arm with a turntable. This system has 7 degrees of freedom and is able to conduct a full-field scan from any angle, making it suitable for scanning objects with complex shapes. Existing non-contact 3D scanning systems usually use a laser scanner that projects a fixed stripe, mounted on a Coordinate Measuring Machine (CMM) or an industrial robot; these systems cannot perform path planning without CAD models. The 3D scanning system presented in this paper can scan objects without CAD models, and we introduce this path planning method in the paper. We also propose a practical approach to calibrating the hand-in-eye system based on binocular stereo vision and analyze the errors of the hand-eye calibration.

  3. Gas flow parameters in laser cutting of wood- nozzle design

    Treesearch

    Kali Mukherjee; Tom Grendzwell; Parwaiz A.A. Khan; Charles McMillin

    1990-01-01

    The Automated Lumber Processing System (ALPS) is an ongoing team research effort to optimize the yield of parts in a furniture rough mill. The process is designed to couple aspects of computer vision, computer optimization of yield, and laser cutting. This research is focused on optimizing laser wood cutting. Laser machining of lumber has the advantage over...

  4. Laser Spot Tracking Based on Modified Circular Hough Transform and Motion Pattern Analysis

    PubMed Central

    Krstinić, Damir; Skelin, Ana Kuzmanić; Milatić, Ivan

    2014-01-01

    Laser pointers are one of the most widely used interactive and pointing devices in different human-computer interaction systems. Existing approaches to vision-based laser spot tracking are designed for controlled indoor environments with the main assumption that the laser spot is very bright, if not the brightest, spot in images. In this work, we are interested in developing a method for an outdoor, open-space environment, which could be implemented on embedded devices with limited computational resources. Under these circumstances, none of the assumptions of existing methods for laser spot tracking can be applied, yet a novel and fast method with robust performance is required. Throughout the paper, we will propose and evaluate an efficient method based on modified circular Hough transform and Lucas–Kanade motion analysis. Encouraging results on a representative dataset demonstrate the potential of our method in an uncontrolled outdoor environment, while achieving maximal accuracy indoors. Our dataset and ground truth data are made publicly available for further development. PMID:25350502

  5. Laser spot tracking based on modified circular Hough transform and motion pattern analysis.

    PubMed

    Krstinić, Damir; Skelin, Ana Kuzmanić; Milatić, Ivan

    2014-10-27

    Laser pointers are one of the most widely used interactive and pointing devices in different human-computer interaction systems. Existing approaches to vision-based laser spot tracking are designed for controlled indoor environments with the main assumption that the laser spot is very bright, if not the brightest, spot in images. In this work, we are interested in developing a method for an outdoor, open-space environment, which could be implemented on embedded devices with limited computational resources. Under these circumstances, none of the assumptions of existing methods for laser spot tracking can be applied, yet a novel and fast method with robust performance is required. Throughout the paper, we will propose and evaluate an efficient method based on modified circular Hough transform and Lucas-Kanade motion analysis. Encouraging results on a representative dataset demonstrate the potential of our method in an uncontrolled outdoor environment, while achieving maximal accuracy indoors. Our dataset and ground truth data are made publicly available for further development.
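    The circular Hough voting stage (for a known spot radius) can be sketched as below; this illustrates the classical transform only, not the authors' modified variant or the Lucas-Kanade motion analysis:

    ```python
    import numpy as np

    def hough_circle_center(edge_points, radius, shape):
        """Circular Hough transform with a known radius.

        Every edge point votes for all candidate centres lying `radius`
        away from it; the accumulator maximum is the best centre.
        """
        acc = np.zeros(shape, dtype=int)
        thetas = np.linspace(0.0, 2.0 * np.pi, 90, endpoint=False)
        for (y, x) in edge_points:
            cy = np.round(y - radius * np.sin(thetas)).astype(int)
            cx = np.round(x - radius * np.cos(thetas)).astype(int)
            ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
            np.add.at(acc, (cy[ok], cx[ok]), 1)
        return np.unravel_index(acc.argmax(), shape)
    ```

    For an embedded implementation, restricting the vote to a small radius range around the expected spot size keeps the accumulator and the per-frame cost small.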

  6. Integration of a synthetic vision system with airborne laser range scanner-based terrain referenced navigation for precision approach guidance

    NASA Astrophysics Data System (ADS)

    Uijt de Haag, Maarten; Campbell, Jacob; van Graas, Frank

    2005-05-01

    Synthetic Vision Systems (SVS) provide pilots with a virtual visual depiction of the external environment. When using SVS for aircraft precision approach guidance systems accurate positioning relative to the runway with a high level of integrity is required. Precision approach guidance systems in use today require ground-based electronic navigation components with at least one installation at each airport, and in many cases multiple installations to service approaches to all qualifying runways. A terrain-referenced approach guidance system is envisioned to provide precision guidance to an aircraft without the use of ground-based electronic navigation components installed at the airport. This autonomy makes it a good candidate for integration with an SVS. At the Ohio University Avionics Engineering Center (AEC), work has been underway in the development of such a terrain referenced navigation system. When used in conjunction with an Inertial Measurement Unit (IMU) and a high accuracy/resolution terrain database, this terrain referenced navigation system can provide navigation and guidance information to the pilot on a SVS or conventional instruments. The terrain referenced navigation system, under development at AEC, operates on similar principles as other terrain navigation systems: a ground sensing sensor (in this case an airborne laser scanner) gathers range measurements to the terrain; this data is then matched in some fashion with an onboard terrain database to find the most likely position solution and used to update an inertial sensor-based navigator. AEC's system design differs from today's common terrain navigators in its use of a high resolution terrain database (~1 meter post spacing) in conjunction with an airborne laser scanner which is capable of providing tens of thousands independent terrain elevation measurements per second with centimeter-level accuracies. 
When combined with data from an inertial navigator, the high resolution terrain database and laser scanner system is capable of providing near meter-level horizontal and vertical position estimates. Furthermore, the system under development capitalizes on (1) the position and integrity benefits provided by the Wide Area Augmentation System (WAAS) to reduce the initial search space size, and (2) the availability of high accuracy/resolution databases. This paper presents results from flight tests where the terrain-referenced navigator is used to provide guidance cues for a precision approach.
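    The core matching idea, finding the horizontal offset at which laser-measured terrain elevations best agree with the onboard database, can be sketched as an exhaustive correlation search. Real systems use an inertial prior and far more efficient search strategies, so this is only illustrative:

    ```python
    import numpy as np

    def match_position(terrain, samples, search=5):
        """Terrain-referenced position fix by exhaustive correlation.

        `terrain` is a gridded elevation database; `samples` is a list
        of ((row, col), elevation) laser measurements expressed at the
        grid cells predicted by the inertial position estimate.  The
        horizontal offset whose database elevations best match the
        measurements (smallest squared residual) is returned as the
        position correction.
        """
        best, best_err = (0, 0), np.inf
        for dr in range(-search, search + 1):
            for dc in range(-search, search + 1):
                err = 0.0
                for (r, c), z in samples:
                    err += (terrain[r + dr, c + dc] - z) ** 2
                if err < best_err:
                    best, best_err = (dr, dc), err
        return best
    ```

    The WAAS-bounded initial position shrinks the `search` window, which is exactly the benefit point (1) above describes.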

  7. A Novel Laser Refractive Surgical Treatment for Presbyopia: Optics-Based Customization for Improved Clinical Outcome.

    PubMed

    Pajic, Bojan; Pajic-Eggspuehler, Brigitte; Mueller, Joerg; Cvejic, Zeljka; Studer, Harald

    2017-06-13

    Laser Assisted in Situ Keratomileusis (LASIK) is a proven treatment method for corneal refractive surgery. Surgically induced higher order optical aberrations were a major reason why the method was only rarely used to treat presbyopia, an age-related near-vision loss. In this study, a novel customization algorithm for designing multifocal ablation patterns, thereby minimizing induced optical aberrations, was used to treat 36 presbyopic subjects. Results showed that most candidates went from poor visual acuity to uncorrected 20/20 vision or better for near (78%), intermediate (92%), and for distance (86%) vision, six months after surgery. All subjects were at 20/25 or better for distance and intermediate vision, and a majority (94%) were also better for near vision. Even though further studies are necessary, our results suggest that the employed methodology is a safe, reliable, and predictable refractive surgical treatment for presbyopia.

  8. A Novel Laser Refractive Surgical Treatment for Presbyopia: Optics-Based Customization for Improved Clinical Outcome

    PubMed Central

    Pajic, Bojan; Pajic-Eggspuehler, Brigitte; Mueller, Joerg; Cvejic, Zeljka; Studer, Harald

    2017-01-01

    Laser Assisted in Situ Keratomileusis (LASIK) is a proven treatment method for corneal refractive surgery. Surgically induced higher order optical aberrations were a major reason why the method was only rarely used to treat presbyopia, an age-related near-vision loss. In this study, a novel customization algorithm for designing multifocal ablation patterns, thereby minimizing induced optical aberrations, was used to treat 36 presbyopic subjects. Results showed that most candidates went from poor visual acuity to uncorrected 20/20 vision or better for near (78%), intermediate (92%), and for distance (86%) vision, six months after surgery. All subjects were at 20/25 or better for distance and intermediate vision, and a majority (94%) were also better for near vision. Even though further studies are necessary, our results suggest that the employed methodology is a safe, reliable, and predictable refractive surgical treatment for presbyopia. PMID:28608800

  9. To See Anew: New Technologies Are Moving Rapidly Toward Restoring or Enabling Vision in the Blind.

    PubMed

    Grifantini, Kristina

    2017-01-01

    Humans have been using technology to improve their vision for many decades. Eyeglasses, contact lenses, and, more recently, laser-based surgeries are commonly employed to remedy vision problems, both minor and major. But options are far fewer for those who have not seen since birth or who have reached stages of blindness in later life.

  10. Active solution of homography for pavement crack recovery with four laser lines.

    PubMed

    Xu, Guan; Chen, Fang; Wu, Guangwei; Li, Xiaotao

    2018-05-08

    An active solution method of the homography, which is derived from four laser lines, is proposed to recover the pavement cracks captured by the camera to the real-dimension cracks in the pavement plane. The measurement system, including a camera and four laser projectors, captures the projection laser points on the 2D reference in different positions. The projection laser points are reconstructed in the camera coordinate system. Then, the laser lines are initialized and optimized by the projection laser points. Moreover, the plane-indicated Plücker matrices of the optimized laser lines are employed to model the laser projection points of the laser lines on the pavement. The image-pavement homography is actively determined by the solutions of the perpendicular feet of the projection laser points. The pavement cracks are recovered by the active solution of homography in the experiments. The recovery accuracy of the active solution method is verified by the 2D dimension-known reference. The test case with the measurement distance of 700 mm and the relative angle of 8° achieves the smallest recovery error of 0.78 mm in the experimental investigations, which indicates the application potentials in the vision-based pavement inspection.
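    The image-to-pavement mapping is a planar homography; a minimal direct-linear-transform (DLT) solution from point correspondences is sketched below. The paper's active solution via Plücker matrices and perpendicular feet is different; this only illustrates the underlying homography estimation:

    ```python
    import numpy as np

    def homography_dlt(src, dst):
        """Direct linear transform: homography H with dst ~ H @ src,
        estimated from >= 4 point correspondences (e.g. laser projection
        points and their known pavement-plane coordinates)."""
        A = []
        for (x, y), (u, v) in zip(src, dst):
            A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
        H = Vt[-1].reshape(3, 3)          # null vector of A, row-major
        return H / H[2, 2]

    def apply_h(H, pt):
        """Map a 2-D point through a homography (homogeneous divide)."""
        p = H @ np.array([pt[0], pt[1], 1.0])
        return p[:2] / p[2]
    ```

    Once H is known, every crack pixel can be mapped through `apply_h` to obtain its real-dimension position in the pavement plane.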

  11. Spectral ophthalmoscopy based on supercontinuum

    NASA Astrophysics Data System (ADS)

    Cheng, Yueh-Hung; Yu, Jiun-Yann; Wu, Han-Hsuan; Huang, Bo-Jyun; Chu, Shi-Wei

    2010-02-01

    Confocal scanning laser ophthalmoscope (CSLO) has been established as an important diagnostic tool for retinopathies such as age-related macular degeneration, glaucoma, and diabetes. Compared to a confocal laser scanning microscope, a CSLO is also capable of providing optical sectioning of the retina with the aid of a pinhole, but the microscope objective is replaced by the optics of the eye. Since the optical spectrum is the fingerprint of local chemical composition, it is attractive to incorporate spectral acquisition into CSLO. However, due to the limitations of laser bandwidth and chromatic/geometric aberration, the scanning systems in current CSLOs are not compatible with spectral imaging. Here we demonstrate a spectral CSLO by combining a diffraction-limited broadband scanning system and a supercontinuum laser source. Both optical sectioning capability and sub-cellular resolution are demonstrated on zebrafish retina. To our knowledge, this is also the first time that a CSLO has been applied to the study of fish vision. The versatile spectral CSLO system will be useful for retinopathy diagnosis and neuroscience research.

  12. 3D vision system for intelligent milking robot automation

    NASA Astrophysics Data System (ADS)

    Akhloufi, M. A.

    2013-12-01

    In a milking robot, the correct localization and positioning of milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for approximate teat position estimation. This technology has reached its limit and does not allow optimal positioning of the milking cups. Also, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers and for animal health (e.g., the development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-Of-Flight (TOF) and RGBD cameras. The proposed algorithms permit the online segmentation of teats by combining 2D and 3D visual information, from which the 3D teat position is computed. This information is then sent to the milking robot for teat cup positioning. The vision system has real-time performance and monitors the optimal positioning of the cups even in the presence of motion. The results obtained with both TOF and RGBD cameras show the good performance of the proposed system, with the best performance obtained with RGBD cameras. This latter technology will be used in future real-life experimental tests.

  13. Reviews Equipment: Chameleon Nano Flakes Book: Requiem for a Species Equipment: Laser Sound System Equipment: EasySense VISION Equipment: UV Flash Kit Book: The Demon-Haunted World Book: Nonsense on Stilts Book: How to Think about Weird Things Web Watch

    NASA Astrophysics Data System (ADS)

    2011-03-01

    WE RECOMMEND Requiem for a Species This book delivers a sober message about climate change Laser Sound System Sound kit is useful for laser demonstrations EasySense VISION Data Harvest produces another easy-to-use data logger UV Flash Kit Useful equipment captures shadows on film The Demon-Haunted World World-famous astronomer attacks pseudoscience in this book Nonsense on Stilts A thought-provoking analysis of hard and soft sciences How to Think about Weird Things This book explores the credibility of astrologers and their ilk WORTH A LOOK Chameleon Nano Flakes Product lacks good instructions and guidelines WEB WATCH Amateur scientists help out researchers with a variety of online projects

  14. Recent advances in the development and transfer of machine vision technologies for space

    NASA Technical Reports Server (NTRS)

    Defigueiredo, Rui J. P.; Pendleton, Thomas

    1991-01-01

    Recent work concerned with real-time machine vision is briefly reviewed. This work includes methodologies and techniques for optimal illumination, shape-from-shading of general (non-Lambertian) 3D surfaces, laser vision devices and technology, high level vision, sensor fusion, real-time computing, artificial neural network design and use, and motion estimation. Two new methods that are currently being developed for object recognition in clutter and for 3D attitude tracking based on line correspondence are discussed.

  15. Automatic brightness control of laser spot vision inspection system

    NASA Astrophysics Data System (ADS)

    Han, Yang; Zhang, Zhaoxia; Chen, Xiaodong; Yu, Daoyin

    2009-10-01

    The laser spot detection system aims to locate the center of the laser spot after long-distance transmission. The accuracy of positioning the laser spot center depends very much on the system's ability to control brightness. In this paper, a high-performance automatic brightness control system is designed using an FPGA. The brightness is controlled by a combination of an auto aperture (video driver) and an adaptive exposure algorithm, and clear images with proper exposure are obtained under different illumination conditions. The automatic brightness control system creates favorable conditions for the subsequent positioning of the laser spot center, and experimental results show that the measurement accuracy of the system is effectively guaranteed: the average error of the spot center is within 0.5 mm.
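    A hedged sketch of the two ingredients, an adaptive-exposure update and a grey-weighted centroid for the spot center, is given below; the target level and damping gain are illustrative values, not parameters from the paper:

    ```python
    import numpy as np

    def next_exposure(image, exposure, target=120.0, gain=0.8):
        """One step of a simple adaptive-exposure loop: scale the
        exposure time toward the value that would bring the mean grey
        level to `target`, damped by `gain` to avoid oscillation."""
        mean = float(np.mean(image))
        return exposure * (1.0 + gain * (target / max(mean, 1.0) - 1.0))

    def spot_center(image, thresh=None):
        """Grey-level-weighted centroid of the laser spot."""
        img = np.asarray(image, dtype=float)
        if thresh is None:
            thresh = 0.5 * img.max()
        m = np.where(img >= thresh, img, 0.0)
        ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
        total = m.sum()
        return (ys * m).sum() / total, (xs * m).sum() / total
    ```

    Running `next_exposure` each frame keeps the spot neither saturated nor underexposed, which is what makes the sub-pixel centroid in `spot_center` reliable.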

  16. Enhanced Monocular Visual Odometry Integrated with Laser Distance Meter for Astronaut Navigation

    PubMed Central

    Wu, Kai; Di, Kaichang; Sun, Xun; Wan, Wenhui; Liu, Zhaoqin

    2014-01-01

    Visual odometry provides astronauts with accurate knowledge of their position and orientation. Wearable astronaut navigation systems should be simple and compact. Therefore, monocular vision methods are preferred over stereo vision systems, commonly used in mobile robots. However, the projective nature of monocular visual odometry causes a scale ambiguity problem. In this paper, we focus on the integration of a monocular camera with a laser distance meter to solve this problem. The most remarkable advantage of the system is its ability to recover a global trajectory for monocular image sequences by incorporating direct distance measurements. First, we propose a robust and easy-to-use extrinsic calibration method between camera and laser distance meter. Second, we present a navigation scheme that fuses distance measurements with monocular sequences to correct the scale drift. In particular, we explain in detail how to match the projection of the invisible laser pointer on other frames. Our proposed integration architecture is examined using a live dataset collected in a simulated lunar surface environment. The experimental results demonstrate the feasibility and effectiveness of the proposed method. PMID:24618780
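    The scale-ambiguity fix reduces to a single ratio: the metric distance reported by the laser distance meter divided by the distance of the same point in the unit-scale visual-odometry frame. A minimal sketch (the paper's full scheme also corrects scale drift over the sequence):

    ```python
    import numpy as np

    def metric_scale(vo_points, laser_range, laser_point_idx):
        """Resolve the monocular scale ambiguity with one laser range.

        `vo_points` are 3-D points in the (unit-scale) visual-odometry
        frame with the camera at the origin; the laser distance meter
        reports the true metric range to the point at `laser_point_idx`.
        """
        est = np.linalg.norm(vo_points[laser_point_idx])
        return laser_range / est

    def rescale_trajectory(translations, scale):
        """Apply the recovered metric scale to the camera translations."""
        return [scale * np.asarray(t, dtype=float) for t in translations]
    ```

    Repeating this whenever a new laser measurement arrives, rather than once at start-up, is what allows the drifted scale to be corrected along the trajectory.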

  17. Interferometric optical online dosimetry for selective retina treatment (SRT)

    NASA Astrophysics Data System (ADS)

    Stoehr, Hardo; Ptaszynski, Lars; Fritz, Andreas; Brinkmann, Ralf

    2007-07-01

    Selective retina treatment (SRT) is a new laser-based method to treat retinal diseases associated with disorders of the retinal pigment epithelium (RPE). By applying microsecond laser pulses, tissue damage spatially confined to the RPE is achieved. The RPE cell damage is caused by transient microbubbles emerging at the strongly absorbing melanin granules inside the RPE cells. Due to the spatial confinement to the RPE, the photoreceptors can be spared and vision can be maintained in the treated retinal areas. A drawback for effective clinical SRT is that the laser-induced lesions are ophthalmoscopically invisible. Therefore, a real-time feedback system for dosimetry is necessary in order to avoid undertreatment or unwanted collateral damage to the adjacent tissue. We are developing a dosimetry system which uses optical interferometry for the detection of the transient microbubbles. The system is based on an optical fiber interferometer operated with a laser diode at 830 nm. We present current results obtained with a laser slit lamp using porcine RPE explants in vitro and complete porcine eye globes ex vivo. The RPE cell damage is determined by Calcein fluorescence viability assays. With a threshold criterion for RPE cell death derived from the measured interferometric signal transients, good agreement with the results of the viability assays is achieved.

  18. Comparison of 2 wavefront-guided excimer lasers for myopic laser in situ keratomileusis: one-year results.

    PubMed

    Yu, Charles Q; Manche, Edward E

    2014-03-01

    To compare laser in situ keratomileusis (LASIK) outcomes between 2 wavefront-guided excimer laser systems in the treatment of myopia. University eye clinic, Palo Alto, California, USA. Prospective comparative case series. One eye of patients was treated with the Allegretto Wave Eye-Q system (small-spot scanning laser) and the fellow eye with the Visx Star Customvue S4 IR system (variable-spot scanning laser). Evaluations included measurement of uncorrected visual acuity, corrected visual acuity, and wavefront aberrometry. One hundred eyes (50 patients) were treated. The mean preoperative spherical equivalent (SE) refraction was -3.89 diopters (D) ± 1.67 (SD) and -4.18 ± 1.73 D in the small-spot scanning laser group and variable-spot scanning laser group, respectively. There were no significant differences in preoperative higher-order aberrations (HOAs) between the groups. Twelve months postoperatively, all eyes in the small-spot scanning laser group and 92% in the variable-spot scanning laser group were within ±0.50 D of the intended correction (P = .04). At that time, the small-spot scanning laser group had significantly less spherical aberration (0.12 versus 0.15) (P = .04) and significantly less mean total higher-order root mean square (0.33 μm versus 0.40 μm) (P = .01). Subjectively, patients reported that the clarity of night and day vision was significantly better in the eye treated with the small-spot scanning laser. The predictability and self-reported clarity of vision of wavefront-guided LASIK were better with the small-spot scanning laser. Eyes treated with the small-spot scanning laser had significantly fewer HOAs. Copyright © 2014 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  19. Ultrashort-pulse lasers treating the crystalline lens: will they cause vision-threatening cataract? (An American Ophthalmological Society thesis).

    PubMed

    Krueger, Ronald R; Uy, Harvey; McDonald, Jared; Edwards, Keith

    2012-12-01

    To demonstrate that ultrashort-pulse laser treatment in the crystalline lens does not form a focal, progressive, or vision-threatening cataract. An Nd:vanadate picosecond laser (10 ps) with prototype delivery system was used. Primates: 11 rhesus monkey eyes were prospectively treated at the University of Wisconsin (energy 25-45 μJ/pulse and 2.0-11.3M pulses per lens). Analysis of lens clarity and fundus imaging was assessed postoperatively for up to 4½ years (5 eyes). Humans: 80 presbyopic patients were prospectively treated in one eye at the Asian Eye Institute in the Philippines (energy 10 μJ/pulse and 0.45-1.45M pulses per lens). Analysis of lens clarity, best-corrected visual acuity, and subjective symptoms was performed at 1 month, prior to elective lens extraction. Bubbles were immediately seen, with resolution within the first 24 to 48 hours. Afterwards, the laser pattern could be seen with faint, noncoalescing, pinpoint micro-opacities in both primate and human eyes. In primates, long-term follow-up at 4½ years showed no focal or progressive cataract, except in 2 eyes with preexisting cataract. In humans, <25% of patients with central sparing (0.75 and 1.0 mm radius) lost 2 or more lines of best spectacle-corrected visual acuity at 1 month, and >70% reported acceptable or better distance vision and no or mild symptoms. Meanwhile, >70% without sparing (0 and 0.5 mm radius) lost 2 or more lines, and most reported poor or severe vision and symptoms. Focal, progressive, and vision-threatening cataracts can be avoided by lowering the laser energy, avoiding prior cataract, and sparing the center of the lens.

  20. Ultrashort-Pulse Lasers Treating the Crystalline Lens: Will They Cause Vision-Threatening Cataract? (An American Ophthalmological Society Thesis)

    PubMed Central

    Krueger, Ronald R.; Uy, Harvey; McDonald, Jared; Edwards, Keith

    2012-01-01

    Purpose: To demonstrate that ultrashort-pulse laser treatment in the crystalline lens does not form a focal, progressive, or vision-threatening cataract. Methods: An Nd:vanadate picosecond laser (10 ps) with prototype delivery system was used. Primates: 11 rhesus monkey eyes were prospectively treated at the University of Wisconsin (energy 25–45 μJ/pulse and 2.0–11.3M pulses per lens). Analysis of lens clarity and fundus imaging was assessed postoperatively for up to 4½ years (5 eyes). Humans: 80 presbyopic patients were prospectively treated in one eye at the Asian Eye Institute in the Philippines (energy 10 μJ/pulse and 0.45–1.45M pulses per lens). Analysis of lens clarity, best-corrected visual acuity, and subjective symptoms was performed at 1 month, prior to elective lens extraction. Results: Bubbles were immediately seen, with resolution within the first 24 to 48 hours. Afterwards, the laser pattern could be seen with faint, noncoalescing, pinpoint micro-opacities in both primate and human eyes. In primates, long-term follow-up at 4½ years showed no focal or progressive cataract, except in 2 eyes with preexisting cataract. In humans, <25% of patients with central sparing (0.75 and 1.0 mm radius) lost 2 or more lines of best spectacle-corrected visual acuity at 1 month, and >70% reported acceptable or better distance vision and no or mild symptoms. Meanwhile, >70% without sparing (0 and 0.5 mm radius) lost 2 or more lines, and most reported poor or severe vision and symptoms. Conclusions: Focal, progressive, and vision-threatening cataracts can be avoided by lowering the laser energy, avoiding prior cataract, and sparing the center of the lens. PMID:23818739

  1. The Light Plane Calibration Method of the Laser Welding Vision Monitoring System

    NASA Astrophysics Data System (ADS)

    Wang, B. G.; Wu, M. H.; Jia, W. P.

    2018-03-01

Sheet steel parts are important components in the aerospace and automobile industries. In recent years, laser welding has been used to join sheet steel parts, where the seam width between the two parts is usually less than 0.1 mm. Because fixturing errors cannot be eliminated, the quality of the welded parts can be greatly affected. To improve welding quality, line-structured light is employed in the vision monitoring system to plan the welding path before welding. To improve weld precision, the vision system is mounted on the Z axis of a computer numerical control (CNC) machine tool. A planar calibration pattern is placed on the X-Y plane of the CNC tool, and the structured light is projected onto it. The vision system stops at three different positions along the Z axis, and the camera captures an image of the planar pattern at each position. From the computed sub-pixel center line of the structured light, the world coordinates of the center light line can be calculated, and the structured-light plane is then obtained by fitting these light lines. Experimental results show the effectiveness of the proposed method.
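As a rough illustration of the final fitting step described above, a least-squares plane can be recovered from the world coordinates of the laser center lines via SVD. The function and the synthetic line points below are hypothetical, not the paper's data.

```python
import numpy as np

def fit_light_plane(points):
    """Least-squares plane fit: returns unit normal n and offset d with
    n.p + d = 0, via SVD of the mean-centered points. Illustrative sketch."""
    P = np.asarray(points, float)
    c = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - c)
    n = Vt[-1]                # direction of least variance = plane normal
    return n, -float(n @ c)

# synthetic laser-line points lying on the plane z = x (three scan heights)
pts = [(x, y, x) for x in (0.0, 1.0, 2.0) for y in (0.0, 1.0, 2.0)]
n, d = fit_light_plane(pts)
```

With real data, the same call is applied to the sub-pixel center-line points reconstructed at the three Z positions.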

  2. The next generation of LASIK patients.

    PubMed

    Freeman, J Christopher; Chuck, Roy S

    2009-07-01

    With baby boomers aging, and despite a growing global population, there is a decreasing number of potential laser vision correction patients. Some believe that the worldwide economic downturn of these times will limit the number of potential patients as well. This article highlights looking to an alternative segment of the population to identify potential laser vision correction patients and the limitations of reaching this group. The group known as generation Y contains a large number of individuals who may be candidates for laser vision correction. Traditional marketing efforts present challenges in reaching this particular population segment. Many individuals in this group are already patients of eye doctors for contact lenses and glasses and can be reached by these eye doctors to address candidacy and education of laser vision correction. Generation Y represents a large population segment that contains technology-embracing individuals who, although hard to reach with traditional marketing efforts, may be reached by fellow eye doctors already managing these patients. There are many in this age group who would be good laser vision correction candidates.

  3. Combined measurement system for double shield tunnel boring machine guidance based on optical and visual methods.

    PubMed

    Lin, Jiarui; Gao, Kai; Gao, Yang; Wang, Zheng

    2017-10-01

    In order to detect the position of the cutting shield at the head of a double shield tunnel boring machine (TBM) during the excavation, this paper develops a combined measurement system which is mainly composed of several optical feature points, a monocular vision sensor, a laser target sensor, and a total station. The different elements of the combined system are mounted on the TBM in suitable sequence, and the position of the cutting shield in the reference total station frame is determined by coordinate transformations. Subsequently, the structure of the feature points and matching technique for them are expounded, the position measurement method based on monocular vision is presented, and the calibration methods for the unknown relationships among different parts of the system are proposed. Finally, a set of experimental platforms to simulate the double shield TBM is established, and accuracy verification experiments are conducted. Experimental results show that the mean deviation of the system is 6.8 mm, which satisfies the requirements of double shield TBM guidance.
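The chained coordinate transformations mentioned in this record can be sketched with homogeneous 4x4 matrices; the frame names and offsets below are invented for illustration and are not the paper's calibration values.

```python
import numpy as np

def hom(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# hypothetical chain: cutting shield -> vision sensor -> laser target -> station
T_sensor_shield = hom(np.eye(3), [0.0, 0.0, 2.0])   # shield pose in sensor frame
T_target_sensor = hom(np.eye(3), [0.0, 1.0, 0.0])   # sensor pose in target frame
T_station_target = hom(np.eye(3), [5.0, 0.0, 0.0])  # target pose in station frame
T_station_shield = T_station_target @ T_target_sensor @ T_sensor_shield

p_station = T_station_shield @ np.array([0.0, 0.0, 0.0, 1.0])  # shield origin
```

In the real system the individual transforms come from the monocular vision sensor, the laser target sensor, and the calibration procedures the paper proposes.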

  4. The 3-D vision system integrated dexterous hand

    NASA Technical Reports Server (NTRS)

    Luo, Ren C.; Han, Youn-Sik

    1989-01-01

Most multifingered hands use a tendon mechanism to minimize the size and weight of the hand. Such tendon mechanisms suffer from stiction and friction of the tendons, resulting in reduced control accuracy. A design for a 3-D vision system integrated dexterous hand with motor control is described which overcomes these problems. The proposed hand is composed of three three-jointed grasping fingers with tactile sensors on their tips and a two-jointed eye finger with a cross-shaped laser beam emitting diode in its distal part. The non-grasping fingers provide 3-D vision capability and can rotate around the hand to see and measure the sides of grasped objects and the task environment. An algorithm that determines the range and local orientation of the contact surface using a cross-shaped laser beam is introduced along with some potential applications. An efficient method for finger force calculation is presented which uses the measured contact surface normals of an object.

  5. Computer vision based method and system for online measurement of geometric parameters of train wheel sets.

    PubMed

    Zhang, Zhi-Feng; Gao, Zhan; Liu, Yuan-Yuan; Jiang, Feng-Chun; Yang, Yan-Li; Ren, Yu-Fen; Yang, Hong-Jun; Yang, Kun; Zhang, Xiao-Dong

    2012-01-01

Train wheel sets must be periodically inspected for possible or actual premature failures, and it is valuable to record the wear history over the full service life of the wheel sets. This means that an online measuring system could be of great benefit to overall process control. An online non-contact method for measuring a wheel set's geometric parameters based on opto-electronic measuring techniques is presented in this paper. A charge-coupled device (CCD) camera with a selected optical lens and a frame grabber was used to capture the image of the light profile of the wheel set illuminated by a linear laser. The analogue signals of the image were transformed into corresponding digital grey-level values. The 'mapping function method' is used to transform image pixel coordinates into space coordinates. The images of wheel sets were captured when the train passed through the measuring system. The rim inside thickness and flange thickness were measured and analyzed. The spatial resolution of the whole image capturing system is about 0.33 mm. Theoretical and experimental results show that the online measurement system based on computer vision can meet wheel set measurement requirements.
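A minimal sketch of a pixel-to-space mapping function, assuming a per-axis polynomial fitted from calibration pairs — one common realization of such a method, not necessarily the paper's exact form. The calibration grid and the 0.33 mm/pixel scale in the example are hypothetical.

```python
import numpy as np

def fit_mapping(pixels, world, degree=2):
    """Fit per-axis 2D polynomials mapping pixel (u, v) to world (x, y);
    a generic stand-in for the abstract's 'mapping function method'."""
    px, w = np.asarray(pixels, float), np.asarray(world, float)
    u, v = px[:, 0], px[:, 1]
    terms = [(i, j) for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.stack([u**i * v**j for i, j in terms], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, w, rcond=None)
    return lambda uu, vv: np.array([uu**i * vv**j for i, j in terms]) @ coeffs

# hypothetical calibration grid: 0.33 mm/pixel scale plus a fixed offset
pix = [(u, v) for u in range(0, 50, 10) for v in range(0, 50, 10)]
wld = [(0.33 * u + 1.0, 0.33 * v - 2.0) for u, v in pix]
to_world = fit_mapping(pix, wld)
x, y = to_world(20.0, 30.0)
```

A degree-2 fit also absorbs mild lens distortion that a purely affine model would miss.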

  6. Laser assisted robotic surgery in cornea transplantation

    NASA Astrophysics Data System (ADS)

    Rossi, Francesca; Micheletti, Filippo; Magni, Giada; Pini, Roberto; Menabuoni, Luca; Leoni, Fabio; Magnani, Bernardo

    2017-03-01

Robotic surgery is a reality in several surgical fields, such as gastrointestinal surgery. In ophthalmic surgery, the required high spatial precision has limited the application of robotic systems; although several systems have been designed in the last 10 years, only a few applications in retinal surgery have been tested in animal models. The combination of photonics and robotics can open new frontiers in minimally invasive surgery, improving precision, reducing tremor, amplifying the scale of motion, and automating the procedure. In this manuscript we present preliminary results in developing a vision-guided robotic platform for laser-assisted anterior eye surgery. The robotic console is composed of a robotic arm equipped with an "end effector" designed to deliver laser light to the anterior corneal surface. The main intended application is laser welding of corneal tissue in laser-assisted penetrating keratoplasty and endothelial keratoplasty. The console is equipped with an integrated vision system. The experiment originates from a clear medical demand to improve the efficacy of different surgical procedures: once the prototype is optimized, its application will be extended to other surgical areas, such as neurosurgery, urology, and spinal surgery.

  7. A robotic platform for laser welding of corneal tissue

    NASA Astrophysics Data System (ADS)

    Rossi, Francesca; Micheletti, Filippo; Magni, Giada; Pini, Roberto; Menabuoni, Luca; Leoni, Fabio; Magnani, Bernardo

    2017-07-01

Robotic surgery is a reality in several surgical fields, such as gastrointestinal surgery. In ophthalmic surgery, the required high spatial precision has limited the application of robotic systems; although several systems have been designed in the last 10 years, only a few applications in retinal surgery have been tested in animal models. The combination of photonics and robotics can open new frontiers in minimally invasive surgery, improving precision, reducing tremor, amplifying the scale of motion, and automating the procedure. In this manuscript we present preliminary results in developing a vision-guided robotic platform for laser-assisted anterior eye surgery. The robotic console is composed of a robotic arm equipped with an "end effector" designed to deliver laser light to the anterior corneal surface. The main intended application is laser welding of corneal tissue in laser-assisted penetrating keratoplasty and endothelial keratoplasty. The console is equipped with an integrated vision system. The experiment originates from a clear medical demand to improve the efficacy of different surgical procedures: once the prototype is optimized, its application will be extended to other surgical areas, such as neurosurgery, urology, and spinal surgery.

  8. Laser vision seam tracking system based on image processing and continuous convolution operator tracker

    NASA Astrophysics Data System (ADS)

    Zou, Yanbiao; Chen, Tao

    2018-06-01

To address the problem of low welding precision caused by the poor real-time tracking performance of common welding robots, a novel seam tracking system with excellent real-time tracking performance and high accuracy is designed based on morphological image processing and the continuous convolution operator tracker (CCOT) object tracking algorithm. The system consists of a six-axis welding robot, a line laser sensor, and an industrial computer. This work also studies the measurement principle of the designed system. Through the CCOT algorithm, the weld feature points are determined in real time from the noisy image during the welding process, and the 3D coordinate values of these points are obtained according to the measurement principle to control the movement of the robot and the torch in real time. Experimental results show that the sensor has a frequency of 50 Hz. The welding torch runs smoothly even under strong arc light and spatter interference. Tracking error can reach ±0.2 mm, and the minimal distance between the laser stripe and the welding molten pool can reach 15 mm, which fully satisfies actual welding requirements.
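The laser-stripe feature extraction underlying such seam trackers is often a per-column gray-centroid computation. The sketch below illustrates that step only; it omits the paper's morphological filtering and CCOT-based feature tracking, and the toy image is invented.

```python
import numpy as np

def stripe_centerline(img):
    """Per-column gray-centroid estimate of the laser stripe's sub-pixel
    row position; a simplified stand-in for the full pipeline."""
    img = np.asarray(img, float)
    rows = np.arange(img.shape[0])[:, None]
    weight = img.sum(axis=0)
    centers = (rows * img).sum(axis=0) / np.where(weight == 0, 1.0, weight)
    return np.where(weight == 0, np.nan, centers)  # NaN where no stripe

# toy 5x3 image with the stripe at rows 1, 2, 3 across the columns
img = np.zeros((5, 3))
img[1, 0] = img[2, 1] = img[3, 2] = 10.0
c = stripe_centerline(img)
```

On real welding images, the centroid is taken only inside a tracked region of interest so that arc light outside it cannot bias the estimate.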

  9. Three-dimensional dynamic deformation monitoring using a laser-scanning system

    NASA Astrophysics Data System (ADS)

    Al-Hanbali, Nedal N.; Teskey, William F.

    1994-10-01

Non-contact dynamic deformation monitoring (e.g., with a laser scanning system) is very useful in monitoring changes in alignment and changes in the size and shape of coupled operating machines. If relative movements between coupled operating machines are large, excessive wear in the machines or unplanned shutdowns due to machinery failure will occur. The purpose of non-contact dynamic deformation monitoring is to identify the causes of large movements and point to remedial action that can be taken to prevent them. The laser scanning system is a laser-based 3D vision system whose technique is based on an auto-synchronized triangulation scanning scheme. The system provides accurate, fast, and reliable 3D measurements and can measure objects at distances from 0.5 m to 100 m with a field of view of 40° × 50°. The system is flexible in terms of providing control over the scanned area and depth, and it provides the user with an intensity image in addition to the depth-coded image. This paper reports on preliminary testing of this system for monitoring surface movements and target (point) movements. The monitoring resolution achieved for an operating motorized alignment test rig in the lab was 1 mm for surface movements and 0.50 m for target movements. Raw data manipulation, local calibration, and the method of relating measurements to control points are discussed. Possibilities for improving the resolution and recommendations for future development are also presented.

  10. High-accuracy microassembly by intelligent vision systems and smart sensor integration

    NASA Astrophysics Data System (ADS)

    Schilp, Johannes; Harfensteller, Mark; Jacob, Dirk; Schilp, Michael

    2003-10-01

Innovative production processes and strategies, from batch production to high-volume scale, play a decisive role in producing microsystems economically. Assembly processes in particular are crucial operations during the production of microsystems. Given large batch sizes, many microsystems can be produced economically by conventional assembly techniques using specialized and highly automated assembly systems; at the laboratory stage, microsystems are mostly assembled by hand. Between these extremes lies a wide field of small- and medium-sized batch production for which common automated solutions are rarely profitable. For assembly processes at these batch sizes, a flexible automated assembly system has been developed at the iwb. It is based on a modular design: actuators such as grippers, dispensers, or other process tools can easily be attached via a special tool changing system, so new joining techniques can easily be implemented. A force sensor and a vision system are integrated into the tool head. The automated assembly processes are based on different optical sensors and smart actuators such as high-accuracy robots or linear motors. A fiber optic sensor is integrated in the dispensing module to measure, without contact, the clearance between the dispense needle and the substrate. Robot vision modules using optical pattern recognition are also implemented. In combination with relative positioning strategies, an assembly accuracy of less than 3 μm can be realized. A laser system is used for manufacturing processes such as soldering.

  11. Optimum Laser Beam Characteristics for Achieving Smoother Ablations in Laser Vision Correction.

    PubMed

    Verma, Shwetabh; Hesser, Juergen; Arba-Mosquera, Samuel

    2017-04-01

Controversial opinions exist regarding the optimum laser beam characteristics for achieving smoother ablations in laser-based vision correction. The purpose of the study was to outline a rigorous simulation model for the shot-by-shot ablation process. The impact of laser beam characteristics such as super-Gaussian order, truncation radius, spot geometry, spot overlap, and lattice geometry was tested on ablation smoothness. Given the super-Gaussian order, the theoretical beam profile was determined following the Lambert-Beer model. The intensity beam profile originating from an excimer laser was measured with a beam profiler camera. For both the measured and theoretical beam profiles, two spot geometries (round and square spots) were considered, and two types of lattices (reticular and triangular) were simulated with varying spot overlaps and ablated material (cornea or polymethylmethacrylate [PMMA]). The roughness in ablation was quantified by the root mean square per square root of layer depth. Truncating the beam profile increases the roughness in ablation; Gaussian profiles theoretically result in smoother ablations; round spot geometries produce lower roughness than square geometries; triangular lattices theoretically produce lower roughness than reticular lattices; theoretically modeled beam profiles show lower roughness than the measured beam profile; and the simulated roughness on PMMA tends to be lower than on human cornea. For the given input parameters, optimum parameters for minimizing the roughness were found. Theoretically, the proposed model can be used for achieving smoothness with laser systems used for ablation processes at relatively low cost. This model may improve the quality of results and could be applied directly to improving postoperative surface quality.
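The roughness metric described (RMS per square root of layer depth) can be reproduced in a toy shot-by-shot simulation. The Gaussian spot model, the periodic lattice, and all parameters below are illustrative assumptions, not the study's settings.

```python
import numpy as np

def simulate_ablation(grid=64, sigma=3.0, spacing=4, depth=1.0):
    """Toy shot-by-shot ablation: Gaussian spots on a reticular lattice
    (periodic wrap to avoid edge artifacts). Returns RMS roughness per
    sqrt(mean depth), the smoothness metric named in the abstract."""
    y, x = np.mgrid[0:grid, 0:grid]
    surf = np.zeros((grid, grid))
    for cy in range(0, grid, spacing):
        for cx in range(0, grid, spacing):
            dx = np.minimum(np.abs(x - cx), grid - np.abs(x - cx))
            dy = np.minimum(np.abs(y - cy), grid - np.abs(y - cy))
            surf += depth * np.exp(-(dx**2 + dy**2) / (2 * sigma**2))
    return surf.std() / np.sqrt(surf.mean())

# denser spot overlap (smaller lattice spacing) yields a smoother ablation
rough_sparse = simulate_ablation(spacing=8)
rough_dense = simulate_ablation(spacing=4)
```

Sweeping `sigma`, `spacing`, and the lattice geometry in such a model reproduces the kind of parameter study the abstract reports.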

  12. Development of an Advanced Aidman Vision Screener (AVS) for selective assessment of outer and inner laser induced retinal injury

    NASA Astrophysics Data System (ADS)

    Boye, Michael W.; Zwick, Harry; Stuck, Bruce E.; Edsall, Peter R.; Akers, Andre

    2007-02-01

Tools that can assist in evaluating visual function are an essential and growing requirement as lasers on the modern battlefield mature and proliferate. The requirement for rapid and sensitive vision assessment under field conditions produced the USAMRD Aidman Vision Screener (AVS), designed to be used as a field diagnostic tool for assessing laser-induced retinal damage. In this paper, we describe additions to the AVS designed to provide a more sensitive assessment of laser-induced retinal dysfunction. The AVS incorporates spectral LogMAR acuity targets with and without neurally opponent chromatic backgrounds, providing the capability to detect selective photoreceptor damage and its functional consequences at the level of both the outer and inner retina. Modifications to the original achromatic AVS have been implemented to detect selective cone system dysfunction by providing LogMAR acuity Landolt rings associated with the peak spectral absorption regions of the S (short), M (middle), and L (long) wavelength cone photoreceptor systems. Evaluation of inner retinal dysfunction associated with selective outer cone damage employs LogMAR spectral acuity charts with neurally opponent backgrounds; the AVS thus provides the capability to assess the effect of selective cone dysfunction on the normal neural balance at the level of inner retinal interactions. Test and opponent background spectra have been optimized using color-space metrics. A minimum of three AVS evaluations will be used to provide an estimate of the false alarm level.

  13. Enhanced optical discrimination system based on switchable retroreflective films

    NASA Astrophysics Data System (ADS)

    Schultz, Phillip; Heikenfeld, Jason

    2016-04-01

    Reported herein is the design, characterization, and demonstration of a laser interrogation and response optical discrimination system based on large-area corner-cube retroreflective films. The switchable retroreflective films use light-scattering liquid crystal to modulate retroreflected intensity. The system can operate with multiple wavelengths (visible to infrared) and includes variable divergence optics for irradiance adjustments and ease of system alignment. The electronic receiver and switchable retroreflector offer low-power operation (<4 mW standby) on coin cell batteries with rapid interrogation to retroreflected signal reception response times (<15 ms). The entire switchable retroreflector film is <1 mm thick and is flexible for optimal placement and increased angular response. The system was demonstrated in high ambient lighting conditions (daylight, 18k lux) with a visible 10-mW output 635-nm source out to a distance of 400 m (naked eye detection). Nighttime demonstrations were performed using a 1.5-mW, 850-nm infrared laser diode out to a distance of 400 m using a night vision camera. This system could have tagging and conspicuity applications in commercial or military settings.

  14. Agent-based station for on-line diagnostics by self-adaptive laser Doppler vibrometry

    NASA Astrophysics Data System (ADS)

    Serafini, S.; Paone, N.; Castellini, P.

    2013-12-01

A self-adaptive diagnostic system based on laser vibrometry is proposed for quality control of mechanical defects by vibration testing; it is developed for appliances at the end of an assembly line, but its characteristics are generally suited for testing most types of electromechanical products. It consists of a laser Doppler vibrometer, equipped with scanning mirrors and a camera, which implements self-adaptive behaviour to optimize the measurement. The system is conceived as a Quality Control Agent (QCA) and is part of a Multi Agent System that supervises the whole production line. The QCA behaviour is defined so as to minimize measurement uncertainty during on-line tests and to compensate for target mis-positioning under the guidance of a vision system. The best measurement conditions are reached by maximizing the amplitude of the optical Doppler beat signal (signal quality), thereby minimizing uncertainty. In this paper, the optimization strategy for measurement enhancement achieved by the down-hill simplex (Nelder-Mead) algorithm and its effect on signal quality improvement is discussed. Tests on a washing machine under controlled operating conditions allow the efficacy of the method to be evaluated; a significant reduction of noise on vibration velocity spectra is observed. Results from on-line tests are presented, which demonstrate the potential of the system for industrial quality control.
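The down-hill simplex strategy can be sketched compactly: a minimal Nelder-Mead minimizer applied to the negated signal quality. The objective below is a toy stand-in for the Doppler beat amplitude as a function of mirror angles; none of it is the QCA's actual code.

```python
import numpy as np

def nelder_mead(f, x0, step=0.5, iters=200):
    """Minimal down-hill simplex (Nelder-Mead) minimizer: reflection,
    expansion, contraction, shrink. Illustrative sketch only."""
    simplex = [np.array(x0, float)]
    for i in range(len(x0)):
        p = np.array(x0, float)
        p[i] += step
        simplex.append(p)
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = np.mean(simplex[:-1], axis=0)
        refl = centroid + (centroid - worst)            # reflection
        if f(refl) < f(best):
            expd = centroid + 2.0 * (centroid - worst)  # expansion
            simplex[-1] = expd if f(expd) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            contr = centroid + 0.5 * (worst - centroid)  # contraction
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:                                        # shrink toward best
                simplex = [best] + [best + 0.5 * (p - best)
                                    for p in simplex[1:]]
    return min(simplex, key=f)

# toy "signal quality" peaking at mirror angles (0.3, -0.2); maximize it
quality = lambda a: np.exp(-((a[0] - 0.3)**2 + (a[1] + 0.2)**2))
angles = nelder_mead(lambda a: -quality(a), [0.0, 0.0])
```

Because the method is derivative-free, it suits noisy objectives like a measured beat-signal amplitude.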

  15. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.

    1991-01-01

Artificial intelligence concepts are applied to robotics: artificial neural networks, expert systems, and laser imaging techniques for autonomous space robots are being studied. A computer graphics laser range finder simulator developed by Wu has been used by Weiland and Norwood to study the use of artificial neural networks for path planning and obstacle avoidance. Applications of CLIPS, NETS, and fuzzy control to robot navigation are also of interest.

  16. Real-time image processing of TOF range images using a reconfigurable processor system

    NASA Astrophysics Data System (ADS)

    Hussmann, S.; Knoll, F.; Edeler, T.

    2011-07-01

In recent years, Time-of-Flight (TOF) sensors have had a significant impact on research in machine vision. Compared with stereo vision systems and laser range scanners, they combine the advantages of active sensors, which provide accurate distance measurements, and camera-based systems, which record a 2D matrix at a high frame rate. Moreover, low-cost 3D imaging has the potential to open a wide field of additional applications and solutions in markets like consumer electronics, multimedia, digital photography, robotics, and medical technologies. This paper focuses on the 4-phase-shift algorithm currently implemented in this type of sensor. The most time-critical operation of the phase-shift algorithm is the arctangent function. In this paper, a novel hardware implementation of the arctangent function using a reconfigurable processor system is presented and benchmarked against the state-of-the-art CORDIC arctangent algorithm. Experimental results show that the proposed algorithm is well suited for real-time processing of the range images of TOF cameras.
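The 4-phase-shift distance computation, including the time-critical arctangent, fits in a few lines. The 20 MHz modulation frequency and the sampling convention below (Ak proportional to cos(phi + k·pi/2)) are assumptions for illustration; conventions differ between sensors.

```python
import math

def tof_distance(a0, a1, a2, a3, f_mod=20e6, c=299792458.0):
    """4-phase-shift range: Ak are samples of the correlation signal at
    k*90 degree offsets (assumed Ak ~ cos(phi + k*pi/2)); f_mod is an
    assumed 20 MHz modulation frequency."""
    phase = math.atan2(a3 - a1, a0 - a2)   # the time-critical arctangent
    if phase < 0.0:
        phase += 2.0 * math.pi             # wrap into [0, 2*pi)
    return c * phase / (4.0 * math.pi * f_mod)

# phi = pi/2 gives samples (0, -1, 0, 1): a quarter of the ambiguity range
d = tof_distance(0.0, -1.0, 0.0, 1.0)
```

It is exactly this `atan2` that hardware implementations replace, e.g. with CORDIC iterations or, as in this paper, a reconfigurable-processor routine.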

  17. Enabling Autonomous Navigation for Affordable Scooters.

    PubMed

    Liu, Kaikai; Mulky, Rajathswaroop

    2018-06-05

Despite the technical success of existing assistive technologies, for example, electric wheelchairs and scooters, they are still far from effective enough in helping those in need navigate to their destinations in a hassle-free manner. In this paper, we propose to improve the safety and autonomy of navigation by designing a cutting-edge autonomous scooter, thus allowing people with mobility challenges to ambulate independently and safely in possibly unfamiliar surroundings. We focus on indoor navigation scenarios for the autonomous scooter where the current location, maps, and nearby obstacles are unknown. To achieve semi-LiDAR functionality, we leverage gyro-based pose data to compensate for the laser motion in real time and create a synthetic mapping of simple environments with regular shapes and deep hallways. Laser range finders are suitable for long ranges but offer limited resolution; stereo vision, on the other hand, provides 3D structural data of nearby complex objects. To achieve both fine-grained resolution and long-range coverage in the mapping of cluttered and complex environments, we dynamically fuse the measurements from the stereo vision camera system, the synthetic laser scanner, and the LiDAR. We propose solutions to self-correct errors in data fusion and create a hybrid map to assist the scooter in achieving collision-free navigation in an indoor environment.

  18. Structured-Light Based 3d Laser Scanning of Semi-Submerged Structures

    NASA Astrophysics Data System (ADS)

    van der Lucht, J.; Bleier, M.; Leutert, F.; Schilling, K.; Nüchter, A.

    2018-05-01

In this work we look at the 3D acquisition of semi-submerged structures with a triangulation-based underwater laser scanning system. The motivation is to simultaneously capture data above and below water to create a consistent model without any gaps. The employed structured-light scanner consists of a machine vision camera and a green line laser. In order to reconstruct precise surface models of the object, it is necessary to model and correct for the refraction of the laser line and camera rays at the water-air boundary. We derive a geometric model for the refraction at the air-water interface and propose a method for correcting the scans. Furthermore, we show how the water surface is estimated directly from sensor data. The approach is verified using scans captured with an industrial manipulator to achieve reproducible scanner trajectories with different incident angles. We show that the proposed method is effective for refractive correction and that it can be applied directly to the raw sensor data without requiring any external markers or targets.
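The refraction correction at the air-water interface follows Snell's law, which in vector form can be sketched as below. This is a generic formulation under the usual air/water indices, not necessarily the paper's parameterization.

```python
import numpy as np

def refract(d, n, n1=1.0, n2=1.33):
    """Refract unit ray direction d at a surface with unit normal n
    (pointing toward medium 1), using the vector form of Snell's law;
    n1/n2 default to air/water. Illustrative sketch."""
    d, n = np.asarray(d, float), np.asarray(n, float)
    r = n1 / n2
    cos_i = -d @ n
    sin_t2 = r * r * (1.0 - cos_i * cos_i)
    if sin_t2 > 1.0:
        return None                        # total internal reflection
    t = r * d + (r * cos_i - np.sqrt(1.0 - sin_t2)) * n
    return t / np.linalg.norm(t)

# camera ray hitting a flat water surface at 30 degrees from the vertical
ray = np.array([np.sin(np.radians(30)), 0.0, -np.cos(np.radians(30))])
t = refract(ray, np.array([0.0, 0.0, 1.0]))
```

Applying this to both the camera rays and the laser plane, given an estimated water surface, yields the corrected triangulation below the waterline.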

  19. The design and realization of a three-dimensional video system by means of a CCD array

    NASA Astrophysics Data System (ADS)

    Boizard, J. L.

    1985-12-01

    Design features, operating principles, and initial tests of a prototype three-dimensional robot vision system based on a laser source and a CCD detector array are described. The use of a laser as a coherent illumination source permits determination of relief with a single emitter, since the location of the source is known and distortion is low. The CCD detector array furnishes an acceptable signal-to-noise ratio and, when wired to an appropriate signal processing system, furnishes real-time data on the return signals, i.e., the characteristic points of an object being scanned. Signal processing involves integration of 29 kB of data per 100 samples, with sampling occurring at a rate of 5 MHz (the CCDs) and yielding an image every 12 msec. Algorithms for filtering errors from the data stream are discussed.
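    The single-emitter relief measurement relies on classic laser triangulation geometry: a pinhole camera at the origin, the emitter offset by a baseline, and the beam tilted toward the optical axis. A minimal sketch of the range equation z = f·b / (x + f·tan θ) (the parameter names and geometry are our assumptions, not the paper's notation):

    ```python
    import math

    def depth_from_spot(x_img_mm, f_mm, baseline_mm, laser_angle_rad):
        """Range of the illuminated point from the image-plane offset of the
        laser spot. The emitter sits baseline_mm to the side of a pinhole
        camera (focal length f_mm) and its beam is tilted toward the optical
        axis by laser_angle_rad; the laser ray x(z) = b - z*tan(theta)
        projects to x_img = f*x/z, which solves to the formula below."""
        return f_mm * baseline_mm / (x_img_mm + f_mm * math.tan(laser_angle_rad))
    ```

    With the beam parallel to the optical axis (θ = 0) this reduces to the familiar z = f·b / x: a 3.2 mm spot offset with f = 16 mm and b = 100 mm gives a 500 mm range.
    
    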

  20. A Vision-Based Self-Calibration Method for Robotic Visual Inspection Systems

    PubMed Central

    Yin, Shibin; Ren, Yongjie; Zhu, Jigui; Yang, Shourui; Ye, Shenghua

    2013-01-01

    A vision-based robot self-calibration method is proposed in this paper to evaluate the kinematic parameter errors of a robot using a visual sensor mounted on its end-effector. This approach can be performed in an industrial setting without external, expensive apparatus or an elaborate setup. A robot Tool Center Point (TCP) is defined in the structural model of a line-structured laser sensor and aligned to a reference point fixed in the robot workspace. A mathematical model is established to relate the misalignment errors to the kinematic parameter errors and TCP position errors. Based on the fixed-point constraints, the kinematic parameter errors and TCP position errors are identified with an iterative algorithm. Compared to conventional methods, the proposed method eliminates the need for robot base-frame and hand-eye calibrations, shortens the error propagation chain, and makes the calibration process more accurate and convenient. A validation experiment is performed on an ABB IRB2400 robot. An optimal configuration for the number and distribution of fixed points in the robot workspace is obtained from the experimental results. Comparative experiments reveal a significant improvement in the measuring accuracy of the robotic visual inspection system. PMID:24300597
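    The fixed-point constraint can be illustrated on a toy planar 2-link arm: several joint configurations that physically place the true TCP on one fixed point over-determine the link-length errors, which a Gauss-Newton iteration then identifies. A minimal sketch (the arm model and numbers are ours and far simpler than the ABB IRB2400 kinematics in the paper):

    ```python
    import math

    def fk(t1, t2, l1, l2):
        """Planar 2-link forward kinematics: TCP position for joint angles t1, t2."""
        return (l1 * math.cos(t1) + l2 * math.cos(t1 + t2),
                l1 * math.sin(t1) + l2 * math.sin(t1 + t2))

    def ik(px, py, l1, l2, elbow):
        """Closed-form inverse kinematics; elbow = +1 or -1 selects the branch."""
        c2 = (px * px + py * py - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
        t2 = elbow * math.acos(c2)
        t1 = math.atan2(py, px) - math.atan2(l2 * math.sin(t2), l1 + l2 * math.cos(t2))
        return t1, t2

    L_true = (0.50, 0.30)   # actual link lengths (unknown to the calibration)
    P = (0.55, 0.25)        # fixed reference point in the workspace
    # Two configurations that physically place the TCP on P (both IK branches).
    configs = [ik(P[0], P[1], L_true[0], L_true[1], s) for s in (1.0, -1.0)]

    L = [0.52, 0.28]        # nominal (erroneous) kinematic parameters
    for _ in range(5):      # Gauss-Newton on the fixed-point residuals
        JTJ = [[0.0, 0.0], [0.0, 0.0]]
        JTr = [0.0, 0.0]
        eps = 1e-6
        for t1, t2 in configs:
            p = fk(t1, t2, L[0], L[1])
            r = (p[0] - P[0], p[1] - P[1])
            cols = []
            for j in range(2):          # numeric Jacobian d(TCP)/d(L_j)
                Lp = list(L)
                Lp[j] += eps
                pp = fk(t1, t2, Lp[0], Lp[1])
                cols.append(((pp[0] - p[0]) / eps, (pp[1] - p[1]) / eps))
            for a in range(2):
                for b in range(2):
                    JTJ[a][b] += cols[a][0] * cols[b][0] + cols[a][1] * cols[b][1]
                JTr[a] += cols[a][0] * r[0] + cols[a][1] * r[1]
        det = JTJ[0][0] * JTJ[1][1] - JTJ[0][1] * JTJ[1][0]
        L[0] -= (JTJ[1][1] * JTr[0] - JTJ[0][1] * JTr[1]) / det
        L[1] -= (JTJ[0][0] * JTr[1] - JTJ[1][0] * JTr[0]) / det
    # L now recovers L_true
    ```

    Because the TCP position of this toy arm is linear in the link lengths, the iteration converges essentially in one step; real kinematic models are nonlinear and need the full iterative treatment described in the abstract.
    
    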

  1. Robotic vision. [process control applications

    NASA Technical Reports Server (NTRS)

    Williams, D. S.; Wilf, J. M.; Cunningham, R. T.; Eskenazi, R.

    1979-01-01

    Robotic vision, involving the use of a vision system to control a process, is discussed. Design and selection of active sensors employing radiation of radio waves, sound waves, and laser light, respectively, to light up unobservable features in the scene are considered, as are design and selection of passive sensors, which rely on external sources of illumination. The segmentation technique, by which an image is separated into different collections of contiguous picture elements having such common characteristics as color, brightness, or texture, is examined, with emphasis on the edge detection technique. The IMFEX (image feature extractor) system, performing edge detection and thresholding at 30 frames/sec television frame rates, is described. The template matching and discrimination approaches to recognizing objects are noted. Applications of robotic vision in industry for tasks too monotonous or too dangerous for human workers are mentioned.

  2. Alternative Refractive Surgery Procedures

    MedlinePlus

    ... alternative refractive surgery procedures to LASIK . Wavefront-Guided LASIK Before surgery, the excimer laser is programmed with ... precise "sculpting" of each unique cornea. In conventional LASIK , this programming is based on the patient's vision ...

  3. Rationale for the Diabetic Retinopathy Clinical Research Network Treatment Protocol for Center-involved Diabetic Macular Edema

    PubMed Central

    Aiello, Lloyd Paul; Beck, Roy W; Bressler, Neil M.; Browning, David J.; Chalam, KV; Davis, Matthew; Ferris, Frederick L; Glassman, Adam; Maturi, Raj; Stockdale, Cynthia R.; Topping, Trexler

    2011-01-01

    Objective Describe the underlying principles used to develop a web-based algorithm that incorporated intravitreal anti-vascular endothelial growth factor (anti-VEGF) treatment for diabetic macular edema (DME) in a Diabetic Retinopathy Clinical Research Network (DRCR.net) randomized clinical trial. Design Discussion of treatment protocol for DME. Participants Subjects with vision loss from DME involving the center of the macula. Methods The DRCR.net created an algorithm incorporating anti-VEGF injections in a comparative effectiveness randomized clinical trial evaluating intravitreal ranibizumab with prompt or deferred (≥24 weeks) focal/grid laser in eyes with vision loss from center-involved DME. Results confirmed that intravitreal ranibizumab with prompt or deferred laser provides superior visual acuity outcomes, compared with prompt laser alone through at least 2 years. Duplication of this algorithm may not be practical for clinical practice. In order to share their opinion on how ophthalmologists might emulate the study protocol, participating DRCR.net investigators developed guidelines based on the algorithm's underlying rationale. Main Outcome Measures Clinical guidelines based on a DRCR.net protocol. Results The treatment protocol required real time feedback from a web-based data entry system for intravitreal injections, focal/grid laser, and follow-up intervals. Guidance from this system indicated whether treatment was required or given at investigator discretion and when follow-up should be scheduled. Clinical treatment guidelines, based on the underlying clinical rationale of the DRCR.net protocol, include repeating treatment monthly as long as there is improvement in edema compared with the previous month, or until the retina is no longer thickened. If thickening recurs or worsens after discontinuing treatment, treatment is resumed. 
Conclusions Duplication of the approach used in the DRCR.net randomized clinical trial to treat DME involving the center of the macula with intravitreal ranibizumab may not be practical in clinical practice, but likely can be emulated based on an understanding of the underlying rationale for the study protocol. Inherent differences between a web-based treatment algorithm and a clinical approach may lead to differences in outcomes that are impossible to predict. The closer the clinical approach is to the algorithm used in the study, the more likely the outcomes will be similar to those published. PMID:22136692

  4. Rationale for the diabetic retinopathy clinical research network treatment protocol for center-involved diabetic macular edema.

    PubMed

    Aiello, Lloyd Paul; Beck, Roy W; Bressler, Neil M; Browning, David J; Chalam, K V; Davis, Matthew; Ferris, Frederick L; Glassman, Adam R; Maturi, Raj K; Stockdale, Cynthia R; Topping, Trexler M

    2011-12-01

    To describe the underlying principles used to develop a web-based algorithm that incorporated intravitreal anti-vascular endothelial growth factor (anti-VEGF) treatment for diabetic macular edema (DME) in a Diabetic Retinopathy Clinical Research Network (DRCR.net) randomized clinical trial. Discussion of treatment protocol for DME. Subjects with vision loss resulting from DME involving the center of the macula. The DRCR.net created an algorithm incorporating anti-VEGF injections in a comparative effectiveness randomized clinical trial evaluating intravitreal ranibizumab with prompt or deferred (≥24 weeks) focal/grid laser treatment in eyes with vision loss resulting from center-involved DME. Results confirmed that intravitreal ranibizumab with prompt or deferred laser provides superior visual acuity outcomes compared with prompt laser alone through at least 2 years. Duplication of this algorithm may not be practical for clinical practice. To share their opinion on how ophthalmologists might emulate the study protocol, participating DRCR.net investigators developed guidelines based on the algorithm's underlying rationale. Clinical guidelines based on a DRCR.net protocol. The treatment protocol required real-time feedback from a web-based data entry system for intravitreal injections, focal/grid laser treatment, and follow-up intervals. Guidance from this system indicated whether treatment was required or given at investigator discretion and when follow-up should be scheduled. Clinical treatment guidelines, based on the underlying clinical rationale of the DRCR.net protocol, include repeating treatment monthly as long as there is improvement in edema compared with the previous month or until the retina is no longer thickened. If thickening recurs or worsens after discontinuing treatment, treatment is resumed. 
Duplication of the approach used in the DRCR.net randomized clinical trial to treat DME involving the center of the macula with intravitreal ranibizumab may not be practical in clinical practice, but likely can be emulated based on an understanding of the underlying rationale for the study protocol. Inherent differences between a web-based treatment algorithm and a clinical approach may lead to differences in outcomes that are impossible to predict. The closer the clinical approach is to the algorithm used in the study, the more likely the outcomes will be similar to those published. Proprietary or commercial disclosure may be found after the references. Copyright © 2011 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  5. Precision targeting with a tracking adaptive optics scanning laser ophthalmoscope

    NASA Astrophysics Data System (ADS)

    Hammer, Daniel X.; Ferguson, R. Daniel; Bigelow, Chad E.; Iftimia, Nicusor V.; Ustun, Teoman E.; Noojin, Gary D.; Stolarski, David J.; Hodnett, Harvey M.; Imholte, Michelle L.; Kumru, Semih S.; McCall, Michelle N.; Toth, Cynthia A.; Rockwell, Benjamin A.

    2006-02-01

    Precise targeting of retinal structures including retinal pigment epithelial cells, feeder vessels, ganglion cells, photoreceptors, and other cells important for light transduction may enable earlier disease intervention with laser therapies and advanced methods for vision studies. A novel imaging system based upon scanning laser ophthalmoscopy (SLO) with adaptive optics (AO) and active image stabilization was designed, developed, and tested in humans and animals. An additional port allows delivery of aberration-corrected therapeutic/stimulus laser sources. The system design includes simultaneous presentation of non-AO, wide-field (~40 deg) and AO, high-magnification (1-2 deg) retinal scans easily positioned anywhere on the retina in a drag-and-drop manner. The AO optical design achieves an error of <0.45 waves (at 800 nm) over +/-6 deg on the retina. A MEMS-based deformable mirror (Boston Micromachines Inc.) is used for wavefront correction. The third-generation retinal tracking system achieves a bandwidth of greater than 1 kHz, allowing acquisition of stabilized AO images with an accuracy of ~10 μm. Normal adult human volunteers and animals with previously placed lesions (cynomolgus monkeys) were tested to optimize the tracking instrumentation and to characterize AO imaging performance. Ultrafast laser pulses were delivered to monkeys to characterize the ability to precisely place lesions and stimulus beams. Other advanced features such as real-time image averaging, automatic high-resolution mosaic generation, and automatic blink detection and tracking re-lock were also tested. The system has the potential to become an important tool for clinicians and researchers in the early detection and treatment of retinal diseases.

  6. The research on calibration methods of dual-CCD laser three-dimensional human face scanning system

    NASA Astrophysics Data System (ADS)

    Wang, Jinjiang; Chang, Tianyu; Ge, Baozhen; Tian, Qingguo; Yang, Fengting; Shi, Shendong

    2013-09-01

    In this paper, building on the performance advantages of the two-step method, we combine the stereo matching of binocular stereo vision with active laser scanning to calibrate the system. First, we select a reference camera coordinate system as the world coordinate system and unify the coordinates of the two CCD cameras. We then obtain the new perspective projection matrix (PPM) of each camera after epipolar rectification, which defines the corresponding epipolar equation of the two cameras. Using the trigonometric parallax method, we can measure the spatial position of a point after distortion correction and achieve stereo matching calibration between the two image points. Experiments verify that this method improves accuracy while guaranteeing system stability. The stereo matching calibration has a simple, low-cost process and simplifies regular maintenance work. It can acquire 3D coordinates with only a planar checkerboard calibration, without designing a specific standard target or using an electronic theodolite. During the experiments it was found that two-step calibration error and lens distortion lead to stratification of the point cloud data. The proposed calibration method, combining active line laser scanning with binocular stereo vision, retains the advantages of both and applies more flexibly. Theoretical analysis and experiment show the method is sound.
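    Once the images are epipolar-rectified, the trigonometric parallax step reduces to the standard depth-from-disparity relation Z = f·B / d. A minimal sketch (the parameter values in the example are hypothetical, not from the paper):

    ```python
    def depth_from_disparity(f_px, baseline_mm, disparity_px):
        """After epipolar rectification, corresponding points share an image
        row, and range follows from the disparity d = xL - xR: Z = f * B / d.
        f_px is the focal length in pixels, baseline_mm the camera separation."""
        return f_px * baseline_mm / disparity_px

    # e.g. f = 800 px, B = 120 mm, d = 16 px  ->  Z = 6000 mm
    z = depth_from_disparity(800.0, 120.0, 16.0)
    ```

    The rectification is what makes this one-line formula valid: it aligns the epipolar lines with image rows so the search for the matching point, and hence the disparity, is one-dimensional.
    
    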

  7. Scale Estimation and Correction of the Monocular Simultaneous Localization and Mapping (SLAM) Based on Fusion of 1D Laser Range Finder and Vision Data.

    PubMed

    Zhang, Zhuang; Zhao, Rujin; Liu, Enhai; Yan, Kun; Ma, Yuebo

    2018-06-15

    This article presents a new sensor fusion method for visual simultaneous localization and mapping (SLAM) through integration of a monocular camera and a 1D laser range finder. Such a fusion method provides scale estimation and drift correction, is not limited by volume (e.g., a stereo camera is constrained by its baseline), and overcomes the limited depth range associated with SLAM for RGB-D cameras. We first present the analytical feasibility of estimating the absolute scale through the fusion of 1D distance information and image information. Next, the analytical derivation of the laser-vision fusion is described in detail, based on local dense reconstruction of the image sequences. We also correct the scale drift of the monocular SLAM using the laser distance information, which is independent of the drift error. Finally, application of this approach to both indoor and outdoor scenes is verified on the Technical University of Munich (TUM) RGB-D dataset and self-collected data. We compare the scale estimation and drift correction of the proposed method with SLAM for a monocular camera and an RGB-D camera.
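    The core idea of absolute scale recovery can be illustrated as a one-parameter least-squares fit between the (scale-ambiguous) SLAM depths and the metric laser ranges at corresponding points. A minimal sketch under that simplifying assumption (the fusion in the paper is more elaborate):

    ```python
    def estimate_scale(slam_depths, laser_depths):
        """Least-squares scale s minimising sum (d_laser - s * d_slam)^2
        over corresponding measurements, i.e. s = sum(d_l*d_s) / sum(d_s^2).
        Multiplying the monocular map by s puts it in metric units."""
        num = sum(l * s for s, l in zip(slam_depths, laser_depths))
        den = sum(s * s for s in slam_depths)
        return num / den

    # Monocular depths in arbitrary units vs. laser ranges in metres:
    s = estimate_scale([1.0, 2.0, 3.0], [2.5, 5.0, 7.5])  # -> 2.5
    ```

    Re-estimating s over a sliding window of recent laser readings is one simple way to also counteract scale drift, since the laser ranges are independent of the accumulated SLAM error.
    
    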

  8. 3D Geometrical Inspection of Complex Geometry Parts Using a Novel Laser Triangulation Sensor and a Robot

    PubMed Central

    Brosed, Francisco Javier; Aguilar, Juan José; Guillomía, David; Santolaria, Jorge

    2011-01-01

    This article discusses different non-contact 3D measuring strategies and presents a model for measuring complex geometry parts, manipulated by a robot arm, using a novel vision system consisting of a laser triangulation sensor and a motorized linear stage. First, the geometric model, incorporating a simple automatic module for long-term stability improvement, is outlined. The new method used in the automatic module allows the sensor set-up, including the motorized linear stage, to be prepared for scanning without external measurement devices. In the measurement model the robot is merely a part positioner with high repeatability. Its position and orientation data are not used for the measurement, and therefore it is not directly “coupled” as an active component in the model. The function of the robot is to present the various surfaces of the workpiece within the measurement range of the vision system, which is responsible for the measurement. Thus, the whole system is unaffected by the robot's own trajectory-following errors, except those due to the lack of static repeatability. For the indirect link between the vision system and the robot, the model developed needs only a first piece measured as a “zero” or master piece, known through an accurate measurement using, for example, a Coordinate Measuring Machine. The proposed strategy takes a different approach from traditional laser triangulation systems on board the robot in order to improve measurement accuracy, and several important cues for self-recalibration are explored using only a master piece. Experimental results are also presented to demonstrate the technique and the final 3D measurement accuracy. PMID:22346569

  9. Image-Based Modeling Techniques for Architectural Heritage 3d Digitalization: Limits and Potentialities

    NASA Astrophysics Data System (ADS)

    Santagati, C.; Inzerillo, L.; Di Paola, F.

    2013-07-01

    3D reconstruction from images has undergone a revolution in the last few years. Computer vision techniques use photographs from data set collections to rapidly build detailed 3D models. The simultaneous application of different multi-view stereo (MVS) algorithms and different techniques for image matching, feature extraction and mesh optimization is an active field of research in computer vision. The results are promising: the obtained models are beginning to challenge the precision of laser-based reconstructions. Among all the possibilities we can mainly distinguish desktop and web-based packages. The latter offer the opportunity to exploit the power of cloud computing in order to carry out semi-automatic data processing, allowing users to fulfill other tasks on their computers, whereas desktop systems demand long processing times and heavyweight local computation. Computer vision researchers have explored many applications to verify the visual accuracy of 3D models, but approaches to verify metric accuracy are few, and none addresses Autodesk 123D Catch applied to Architectural Heritage documentation. Our approach to this challenging problem is to compare 3D models from Autodesk 123D Catch with 3D models from terrestrial LiDAR, considering different object sizes, from details (capitals, moldings, bases) to large-scale buildings, for practitioner purposes.

  10. Comparison of solar and laser macula retinal injury using scanning laser ophthalmoscopy spectral imaging

    NASA Astrophysics Data System (ADS)

    Zwick, Harry; Gagliano, Donald A.; Stuck, Bruce E.; Lund, David J.

    1994-07-01

    Both solar and laser sources may induce punctate foveal retinal damage. Unprotected viewing of the sun or bright blue sky represents a potential solar radiation cause of photic maculopathy that may induce punctate foveal damage. Laser-induced macular retinal damage is another, more recent kind of photic maculopathy. Most documented cases of laser photic maculopathy have involved acute laser exposure, generally from Q-switched visible or non-visible near-IR laser systems. In our comparison of these types of photic maculopathy, we have employed conventional as well as spectral and confocal scanning laser ophthalmoscopy to evaluate the depth of the photic maculopathy. Functionally, we have observed a tritan color vision loss in nearly all photic maculopathies.

  11. Stereoscopic Machine-Vision System Using Projected Circles

    NASA Technical Reports Server (NTRS)

    Mackey, Jeffrey R.

    2010-01-01

    A machine-vision system capable of detecting obstacles large enough to damage or trap a robotic vehicle is undergoing development. The system includes (1) a pattern generator that projects concentric circles of laser light forward onto the terrain, (2) a stereoscopic pair of cameras that are aimed forward to acquire images of the circles, (3) a frame grabber and digitizer for acquiring image data from the cameras, and (4) a single-board computer that processes the data. The system is being developed as a prototype of machine-vision systems to enable robotic vehicles (rovers) on remote planets to avoid craters, large rocks, and other terrain features that could capture or damage the vehicles. Potential terrestrial applications of systems like this one could include terrain mapping, collision avoidance, navigation of robotic vehicles, mining, and robotic rescue. This system is based partly on the same principles as those of a prior stereoscopic machine-vision system in which the cameras acquire images of a single stripe of laser light that is swept forward across the terrain. However, this system is designed to afford improvements over some of the undesirable features of the prior system, including the need for a pan-and-tilt mechanism to aim the laser to generate the swept stripe, ambiguities in interpretation of the single-stripe image, the time needed to sweep the stripe across the terrain and process the data from many images acquired during that time, and difficulty of calibration because of the narrowness of the stripe. In this system, the pattern generator does not contain any moving parts and need not be mounted on a pan-and-tilt mechanism: the pattern of concentric circles is projected steadily in the forward direction. The system calibrates itself by use of data acquired during projection of the concentric-circle pattern onto a known target representing flat ground. 
The calibration-target image data are stored in the computer memory for use as a template in processing terrain images. During operation on terrain, the images acquired by the left and right cameras are analyzed. The analysis includes (1) computation of the horizontal and vertical dimensions and the aspect ratios of rectangles that bound the circle images and (2) comparison of these aspect ratios with those of the template. Coordinates of distortions of the circles are used to identify and locate objects. If the analysis leads to identification of an object of significant size, then stereoscopic-vision algorithms are used to estimate the distance to the object. The time taken in performing this analysis on a single pair of images acquired by the left and right cameras in this system is a fraction of the time taken in processing the many pairs of images acquired in a sweep of the laser stripe across the field of view in the prior system. The results of the analysis include data on sizes and shapes of, and distances and directions to, objects. Coordinates of objects are updated as the vehicle moves so that intelligent decisions regarding speed and direction can be made. The results of the analysis are utilized in a computational decision-making process that generates obstacle-avoidance data and feeds those data to the control system of the robotic vehicle.
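    The bounding-rectangle comparison in the analysis step can be sketched as follows (an illustration of the idea only; the function names and tolerance value are our assumptions, not the system's actual parameters):

    ```python
    def bounding_aspect(points):
        """Width, height, and aspect ratio of the axis-aligned rectangle
        bounding a detected circle's pixel coordinates."""
        xs = [p[0] for p in points]
        ys = [p[1] for p in points]
        w = max(xs) - min(xs)
        h = max(ys) - min(ys)
        return w, h, w / h

    def deviates(aspect, template_aspect, tol=0.15):
        """Flag a projected circle whose bounding-box aspect ratio differs
        from the flat-ground template by more than a fractional tolerance,
        indicating a terrain feature that distorts the projected pattern."""
        return abs(aspect - template_aspect) / template_aspect > tol
    ```

    A circle projected onto flat ground matches the stored template; an obstacle stretches or compresses the projected circle, shifting its bounding-box aspect ratio past the tolerance and triggering the stereoscopic range estimate.
    
    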

  12. Real-Time Measurement of Width and Height of Weld Beads in GMAW Processes.

    PubMed

    Pinto-Lopera, Jesús Emilio; S T Motta, José Mauricio; Absi Alfaro, Sadek Crisostomo

    2016-09-15

    Associated with weld quality, the weld bead geometry is one of the most important parameters in welding processes. It is a significant requirement in a welding project, especially in automatic welding systems where a specific width, height, or penetration of the weld bead is needed. This paper presents a novel technique for real-time measurement of the width and height of weld beads in gas metal arc welding (GMAW) using a single high-speed camera and a long-pass optical filter in a passive vision system. The measuring method is based on digital image processing techniques, and the image calibration process is based on projective transformations. The measurement process takes less than 3 milliseconds per image, which allows a transfer rate of more than 300 frames per second. The proposed methodology can be used in any metal transfer mode of a gas metal arc welding process and does not have occlusion problems. The responses of the measurement system presented here are in good agreement with off-line data collected by a common laser-based 3D scanner. Each pair of measurements is compared using Welch's t-test; in no case does the test reject the null hypothesis at significance level α = 0.01, validating the results and the performance of the proposed vision system.
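    The validation step uses Welch's unequal-variance t-test, whose statistic and Welch-Satterthwaite degrees of freedom can be computed directly. A minimal pure-Python sketch (the sample values in the test are illustrative; the paper's actual measurement data are not reproduced here):

    ```python
    def welch_t(a, b):
        """Welch's t statistic and Welch-Satterthwaite degrees of freedom
        for two independent samples with possibly unequal variances."""
        na, nb = len(a), len(b)
        ma, mb = sum(a) / na, sum(b) / nb
        va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
        vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
        se2 = va / na + vb / nb                          # squared std. error
        t = (ma - mb) / se2 ** 0.5
        df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
        return t, df
    ```

    The resulting |t| is then compared against the critical value of the t distribution with df degrees of freedom at the chosen significance level (α = 0.01 in the paper).
    
    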

  13. VLSI chips for vision-based vehicle guidance

    NASA Astrophysics Data System (ADS)

    Masaki, Ichiro

    1994-02-01

    Sensor-based vehicle guidance systems are gathering rapidly increasing interest because of their potential for increasing safety, convenience, environmental friendliness, and traffic efficiency. Examples of applications include intelligent cruise control, lane following, collision warning, and collision avoidance. This paper reviews the research trends in vision-based vehicle guidance with an emphasis on VLSI chip implementations of the vision systems. As an example of VLSI chips for vision-based vehicle guidance, a stereo vision system is described in detail.

  14. Three-Dimensional Images For Robot Vision

    NASA Astrophysics Data System (ADS)

    McFarland, William D.

    1983-12-01

    Robots are attracting increased attention in the industrial productivity crisis. As one significant approach for this nation to maintain technological leadership, the need for robot vision has become critical. The "blind" robot, while occupying an economical niche at present, is severely limited and job-specific, being only one step up from numerically controlled machines. To satisfy robot vision requirements, a three-dimensional representation of a real scene must be provided. Several image acquisition techniques are discussed, with emphasis on laser-radar-type instruments. The autonomous vehicle is also discussed as a robot form, and the requirements for these applications are considered. The total computer vision system requirement is reviewed, with some discussion of the major techniques in the literature for three-dimensional scene analysis.

  15. Review on short-wavelength infrared laser gated-viewing at Fraunhofer IOSB

    NASA Astrophysics Data System (ADS)

    Göhler, Benjamin; Lutzmann, Peter

    2017-03-01

    This paper reviews the work that has been done at Fraunhofer IOSB (and its predecessor institutes) in the past ten years in the area of laser gated-viewing (GV) in the short-wavelength infrared (SWIR) band. Experimental system demonstrators in various configurations have been built to show the potential for different applications and to investigate specific topics. The wavelength of the pulsed illumination laser is 1.57 μm and lies in the invisible, retina-safe region, allowing much higher eye-safe pulse energies than wavelengths in the visible or near-infrared band. All systems built consist of gated Intevac LIVAR® cameras based on EBCCD/EBCMOS detectors sensitive in the SWIR band. This review covers military and civilian applications in the maritime and land domains, in particular vision enhancement in bad visibility, long-range applications, silhouette imaging, 3D imaging by sliding gates and the slope method, bistatic GV imaging, and looking through windows. In addition, theoretical studies that were conducted, e.g., estimating 3D accuracy or modeling range performance, are presented. Finally, an outlook on future work in the area of SWIR laser GV at Fraunhofer IOSB is given.

  16. Vision-based mobile robot navigation through deep convolutional neural networks and end-to-end learning

    NASA Astrophysics Data System (ADS)

    Zhang, Yachu; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Kong, Lingqin; Liu, Lingling

    2017-09-01

    In contrast to humans, who use only visual information for navigation, many mobile robots use laser scanners and ultrasonic sensors along with vision cameras to navigate. This work proposes a vision-based robot control algorithm based on deep convolutional neural networks. We create a large 15-layer convolutional neural network learning system and achieve advanced recognition performance. Our system is trained end to end to map raw input images to steering directions in a supervised mode. The images in the data sets are collected under a wide variety of weather and lighting conditions. In addition, the data sets are augmented by adding Gaussian noise and salt-and-pepper noise to avoid overfitting. The algorithm is verified by two experiments: line tracking and obstacle avoidance. The line tracking experiment is conducted to track a desired path composed of straight and curved lines. The goal of the obstacle avoidance experiment is to avoid obstacles indoors. Finally, we obtain a 3.29% error rate on the training set and a 5.1% error rate on the test set in the line tracking experiment, and a 1.8% error rate on the training set and less than a 5% error rate on the test set in the obstacle avoidance experiment. During the actual test, the robot can follow the runway centerline outdoors and avoid obstacles in the room accurately. The results confirm the effectiveness of the algorithm and of our improvements to the network structure and training parameters.
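    The salt-and-pepper augmentation mentioned above can be sketched in a few lines (an illustration with an assumed list-of-rows grayscale representation, not the authors' pipeline):

    ```python
    import random

    def salt_and_pepper(img, prob, seed=0):
        """Return a copy of a grayscale image (list of rows, values 0-255)
        with a fraction `prob` of pixels forced to 0 or 255 with equal chance.
        The original image is left untouched."""
        rng = random.Random(seed)
        out = [row[:] for row in img]
        for r in range(len(out)):
            for c in range(len(out[r])):
                u = rng.random()
                if u < prob / 2.0:
                    out[r][c] = 0        # "pepper"
                elif u < prob:
                    out[r][c] = 255      # "salt"
        return out
    ```

    Applying such corruption to copies of the training images forces the network to rely on robust features rather than individual pixel values, which is the overfitting countermeasure the abstract describes.
    
    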

  17. Creating photorealistic virtual model with polarization-based vision system

    NASA Astrophysics Data System (ADS)

    Shibata, Takushi; Takahashi, Toru; Miyazaki, Daisuke; Sato, Yoichi; Ikeuchi, Katsushi

    2005-08-01

    Recently, 3D models have come into use in many fields such as education, medical services, entertainment, art and digital archiving, thanks to advances in computational power, and demand for photorealistic virtual models is increasing. In the computer vision field, a number of techniques have been developed for creating virtual models by observing real objects. In this paper, we propose a method for creating a photorealistic virtual model by using a laser range sensor and a polarization-based image capture system. We capture range and color images of the object, which is rotated on a rotary table. Using the reconstructed object shape and the sequence of color images, the parameters of a reflection model are estimated in a robust manner. As a result, we can build a photorealistic 3D model that accounts for surface reflection. The key point of the proposed method is that the diffuse and specular reflection components are first separated from the color image sequence, and then the reflectance parameters of each component are estimated separately. To separate the reflection components, we use a polarization filter. This approach enables estimation of the reflectance properties of real objects whose surfaces show specularity as well as diffusely reflected light. The recovered object shape and reflectance properties are then used for synthesizing object images with realistic shading effects under arbitrary illumination conditions.
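    The polarization-based separation can be illustrated with the common three-angle scheme: assuming an unpolarized diffuse term and a fully polarized specular term, intensities observed through a polarizer at 0°, 45° and 90° determine both components (our simplified illustration; the paper's robust estimation is more elaborate):

    ```python
    import math

    def separate(i0, i45, i90):
        """Recover (diffuse, specular) intensities from three polarizer
        angles, assuming I(phi) = Id/2 + Is * cos^2(phi - phi0): the
        diffuse term is unpolarized, the specular term fully polarized."""
        a = (i0 + i90) / 2.0          # mean of the sinusoid over the rotation
        c = (i0 - i90) / 2.0          # cos component of the modulation
        s = i45 - a                   # sin component of the modulation
        b = math.hypot(c, s)          # modulation amplitude
        i_min, i_max = a - b, a + b
        return 2.0 * i_min, i_max - i_min   # (diffuse, specular)
    ```

    The minimum over the polarizer rotation passes half the unpolarized diffuse light and none of the specular highlight, which is why Id = 2·Imin and Is = Imax − Imin under this model.
    
    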

  18. Different lasers and techniques for proliferative diabetic retinopathy.

    PubMed

    Moutray, Tanya; Evans, Jennifer R; Lois, Noemi; Armstrong, David J; Peto, Tunde; Azuara-Blanco, Augusto

    2018-03-15

    Diabetic retinopathy (DR) is a chronic progressive disease of the retinal microvasculature associated with prolonged hyperglycaemia. Proliferative DR (PDR) is a sight-threatening complication of DR and is characterised by the development of abnormal new vessels in the retina, optic nerve head or anterior segment of the eye. Argon laser photocoagulation has been the gold standard for the treatment of PDR for many years, using regimens evaluated by the Early Treatment of Diabetic Retinopathy Study (ETDRS). Over the years, there have been modifications of the technique and introduction of new laser technologies. To assess the effects of different types of laser, other than argon laser, and different laser protocols, other than those established by the ETDRS, for the treatment of PDR. We compared different wavelengths; power and pulse duration; pattern, number and location of burns versus standard argon laser undertaken as specified by the ETDRS. We searched the Cochrane Central Register of Controlled Trials (CENTRAL) (which contains the Cochrane Eyes and Vision Trials Register) (2017, Issue 5); Ovid MEDLINE; Ovid Embase; LILACS; the ISRCTN registry; ClinicalTrials.gov and the ICTRP. The date of the search was 8 June 2017. We included randomised controlled trials (RCTs) of pan-retinal photocoagulation (PRP) using standard argon laser for treatment of PDR compared with any other laser modality. We excluded studies of lasers that are not in common use, such as the xenon arc, ruby or Krypton laser. We followed Cochrane guidelines and graded the certainty of evidence using the GRADE approach. We identified 11 studies from Europe (6), the USA (2), the Middle East (1) and Asia (2). Five studies compared different types of laser to argon: Nd:YAG (2 studies) or diode (3 studies). Other studies compared modifications to the standard argon laser PRP technique. The studies were poorly reported and we judged all to be at high risk of bias in at least one domain. 
The sample size varied from 20 to 270 eyes but the majority included 50 participants or fewer.

    Nd:YAG versus argon laser (2 studies): very low-certainty evidence on vision loss, vision gain, progression and regression of PDR, pain during laser treatment and adverse effects.

    Diode versus argon laser (3 studies): very low-certainty evidence on vision loss, vision gain, progression and regression of PDR and adverse effects; moderate-certainty evidence that diode laser was more painful, with a risk ratio (RR) for troublesome pain during laser treatment of 3.12 (95% CI 2.16 to 4.51; eyes = 202; studies = 3; I² = 0%).

    0.5 second versus 0.1 second exposure (1 study): low-certainty evidence of a lower chance of vision loss with 0.5 second compared with 0.1 second exposure, but estimates were imprecise and compatible with no difference or an increased chance of vision loss (RR 0.42, 95% CI 0.08 to 2.04, 44 eyes, 1 RCT); low-certainty evidence that people treated with 0.5 second exposure were more likely to gain vision (RR 2.22, 95% CI 0.68 to 7.28, 44 eyes, 1 RCT), but again the estimates were imprecise. People given 0.5 second exposure were more likely to have regression of PDR compared with 0.1 second laser PRP, again with an imprecise estimate (RR 1.17, 95% CI 0.92 to 1.48, 32 eyes, 1 RCT). There was very low-certainty evidence on progression of PDR and adverse effects.

    'Light intensity' PRP versus classic PRP (1 study): vision loss or gain was not reported, but the mean difference in logMAR acuity at 1 year was -0.09 logMAR (95% CI -0.22 to 0.04, 65 eyes, 1 RCT); low-certainty evidence that fewer patients had pain during light PRP compared with classic PRP, with an imprecise estimate compatible with increased or decreased pain (RR 0.23, 95% CI 0.03 to 1.93, 65 eyes, 1 RCT).

    'Mild scatter' PRP (laser pattern limited to 400 to 600 laser burns in one sitting) versus standard 'full scatter' PRP (1 study): very low-certainty evidence on vision and visual field loss. No information on adverse effects.

    'Central' PRP (a more central PRP in addition to mid-peripheral PRP) versus 'peripheral' standard PRP (1 study): low-certainty evidence that people treated with central PRP were more likely to lose 15 or more letters of BCVA compared with peripheral laser PRP (RR 3.00, 95% CI 0.67 to 13.46, 50 eyes, 1 RCT) and less likely to gain 15 or more letters (RR 0.25, 95% CI 0.03 to 2.08), with imprecise estimates compatible with increased or decreased risk.

    'Centre sparing' PRP (argon laser distribution limited to 3 disc diameters from the upper temporal and lower margin of the fovea) versus standard 'full scatter' PRP (1 study): low-certainty evidence that people treated with 'centre sparing' PRP were less likely to lose 15 or more ETDRS letters of BCVA compared with 'full scatter' PRP (RR 0.67, 95% CI 0.30 to 1.50, 53 eyes); low-certainty evidence of similar risk of regression of PDR between groups (RR 0.96, 95% CI 0.73 to 1.27, 53 eyes). Adverse events were not reported.

    'Extended targeted' PRP (extended to include the equator and any capillary non-perfusion areas between the vascular arcades) versus standard PRP (1 study): low-certainty evidence that people in the extended group had a similar or slightly reduced chance of losing 15 or more letters of BCVA compared with the standard PRP group (RR 0.94, 95% CI 0.70 to 1.28, 270 eyes); low-certainty evidence that people in the extended group had a similar or slightly increased chance of regression of PDR compared with the standard PRP group (RR 1.11, 95% CI 0.95 to 1.31, 270 eyes). Very low-certainty information on adverse effects.

    Modern laser techniques and modalities have been developed to treat PDR. However, there is limited evidence available with respect to the efficacy and safety of alternative laser systems or strategies compared with the standard argon laser as described in the ETDRS.
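    As an aid to reading the effect sizes quoted above, a risk ratio and its 95% confidence interval (log method) can be computed from a 2×2 table of event counts. The sketch below is illustrative only; the counts in the usage example are hypothetical and not taken from any of the included trials:

```python
import math

def risk_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio of group A versus group B with a 95% CI (log method)."""
    rr = (events_a / n_a) / (events_b / n_b)
    # Standard error of log(RR) from the 2x2 counts
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, lower, upper

# Hypothetical example: 10/100 events versus 5/100 events
rr, lo, hi = risk_ratio_ci(10, 100, 5, 100)
```

A CI that spans 1.0, as in several of the comparisons above, is what the review calls an imprecise estimate compatible with either benefit or harm.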

  19. Multi-Sensor Person Following in Low-Visibility Scenarios

    PubMed Central

    Sales, Jorge; Marín, Raúl; Cervera, Enric; Rodríguez, Sergio; Pérez, Javier

    2010-01-01

    Person following with mobile robots has traditionally been an important research topic. It has been solved, in most cases, by the use of machine vision or laser rangefinders. In some special circumstances, such as a smoky environment, the use of optical sensors is not a good solution. This paper proposes and compares alternative sensors and methods to perform person following in low-visibility conditions, such as smoky environments in firefighting scenarios. The use of laser rangefinder and sonar sensors is proposed in combination with a vision system that can determine the amount of smoke in the environment. The smoke detection algorithm provides the robot with the ability to use a different combination of sensors to perform robot navigation and person following depending on the visibility in the environment. PMID:22163506
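    The sensor-switching idea described above can be sketched as a simple selection rule driven by the vision-based smoke estimate. The thresholds and sensor names below are hypothetical, not taken from the paper:

```python
def choose_sensors(smoke_level):
    """Pick a sensor combination for person following given an estimated
    smoke density in [0, 1] from the vision-based smoke detector.
    Thresholds are illustrative only."""
    if smoke_level < 0.2:
        return ["camera", "laser"]   # good visibility: vision plus laser
    if smoke_level < 0.6:
        return ["laser", "sonar"]    # degraded visibility: drop the camera
    return ["sonar"]                 # dense smoke: sonar only
```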

  20. Vision requirements for Space Station applications

    NASA Technical Reports Server (NTRS)

    Crouse, K. R.

    1985-01-01

    Problems which will be encountered by computer vision systems in Space Station operations are discussed, along with solutions being examined at NASA's Johnson Space Center. Lighting cannot be controlled in space, nor can the random presence of reflective surfaces. Task-oriented capabilities are to include docking to moving objects, identification of unexpected objects during autonomous flights to different orbits, and diagnoses of damage and repair requirements for autonomous Space Station inspection robots. The approaches being examined to provide these and other capabilities are television IR sensors, advanced pattern recognition programs feeding on data from laser probes, laser radar for robot eyesight, and arrays of SMART sensors for automated location and tracking of target objects. Attention is also being given to liquid crystal light valves for optical processing of images for comparisons with on-board electronic libraries of images.

  1. Multi-sensor person following in low-visibility scenarios.

    PubMed

    Sales, Jorge; Marín, Raúl; Cervera, Enric; Rodríguez, Sergio; Pérez, Javier

    2010-01-01

    Person following with mobile robots has traditionally been an important research topic. It has been solved, in most cases, by the use of machine vision or laser rangefinders. In some special circumstances, such as a smoky environment, the use of optical sensors is not a good solution. This paper proposes and compares alternative sensors and methods to perform person following in low-visibility conditions, such as smoky environments in firefighting scenarios. The use of laser rangefinder and sonar sensors is proposed in combination with a vision system that can determine the amount of smoke in the environment. The smoke detection algorithm provides the robot with the ability to use a different combination of sensors to perform robot navigation and person following depending on the visibility in the environment.

  2. Welding of Thin Steel Plates by Hybrid Welding Process Combined TIG Arc with YAG Laser

    NASA Astrophysics Data System (ADS)

    Kim, Taewon; Suga, Yasuo; Koike, Takashi

    TIG arc welding and laser welding are used widely in the world. However, each of these welding processes has its own advantages and limitations. In order to address these limitations while making use of the advantages of both arc and laser welding, a hybrid welding process combining the TIG arc with the YAG laser was studied. In particular, suitable welding conditions for thin steel plates were investigated to obtain sound welds with clean surface and back beads and without weld defects. As a result, it was confirmed that the shot position of the laser beam is very important for obtaining sound welds in hybrid welding. Therefore, a new intelligent system that monitors the welding area using a vision sensor was constructed, together with a control system that shoots the laser beam at a selected position in the molten pool formed by the TIG arc. Welding experiments using these systems confirmed that the hybrid welding process and the control system are effective for the stable welding of thin stainless steel plates.

  3. The Application of Lidar to Synthetic Vision System Integrity

    NASA Technical Reports Server (NTRS)

    Campbell, Jacob L.; UijtdeHaag, Maarten; Vadlamani, Ananth; Young, Steve

    2003-01-01

    One goal in the development of a Synthetic Vision System (SVS) is to create a system that can be certified by the Federal Aviation Administration (FAA) for use at various flight criticality levels. As part of NASA's Aviation Safety Program, Ohio University and NASA Langley have been involved in the research and development of real-time terrain database integrity monitors for SVS. Integrity monitors based on a consistency check with onboard sensors may be required if the inherent terrain database integrity is not sufficient for a particular operation. Sensors such as the radar altimeter and weather radar, which are available on most commercial aircraft, are currently being investigated for use in a real-time terrain database integrity monitor. This paper introduces the concept of using a Light Detection And Ranging (LiDAR) sensor as part of a real-time terrain database integrity monitor. A LiDAR system consists of a scanning laser ranger, an inertial measurement unit (IMU), and a Global Positioning System (GPS) receiver. Information from these three sensors can be combined to generate synthesized terrain models (profiles), which can then be compared to the stored SVS terrain model. This paper discusses an initial performance evaluation of the LiDAR-based terrain database integrity monitor using LiDAR data collected over Reno, Nevada. The paper will address the consistency checking mechanism and test statistic, sensitivity to position errors, and a comparison of the LiDAR-based integrity monitor to a radar altimeter-based integrity monitor.
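    The consistency check described above reduces to a disparity statistic between the LiDAR-synthesized profile and the stored terrain model. The statistic and threshold below are an illustrative sketch, not the authors' actual test:

```python
def integrity_statistic(measured, stored):
    """Mean absolute disparity (metres) between a LiDAR-synthesized terrain
    profile and the stored database profile at the same sample points."""
    assert len(measured) == len(stored)
    return sum(abs(m - s) for m, s in zip(measured, stored)) / len(measured)

def database_consistent(measured, stored, threshold_m=10.0):
    """Flag the terrain database as consistent when the statistic stays
    below a (hypothetical) alert threshold in metres."""
    return integrity_statistic(measured, stored) <= threshold_m
```

A real monitor would also account for navigation (position) error before thresholding, as the paper's sensitivity analysis discusses.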

  4. A Vision-Based Driver Nighttime Assistance and Surveillance System Based on Intelligent Image Sensing Techniques and a Heterogamous Dual-Core Embedded System Architecture

    PubMed Central

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogamous dual-core platform. Therefore, this study develops and implements computer vision and sensing techniques of nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured from CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogamous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle-embedded vision-based nighttime driver assistance and surveillance system. PMID:22736956

  5. A vision-based driver nighttime assistance and surveillance system based on intelligent image sensing techniques and a heterogamous dual-core embedded system architecture.

    PubMed

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogamous dual-core platform. Therefore, this study develops and implements computer vision and sensing techniques of nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured from CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogamous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle-embedded vision-based nighttime driver assistance and surveillance system.

  6. Real-Time Measurement of Width and Height of Weld Beads in GMAW Processes

    PubMed Central

    Pinto-Lopera, Jesús Emilio; S. T. Motta, José Mauricio; Absi Alfaro, Sadek Crisostomo

    2016-01-01

    Associated with weld quality, the weld bead geometry is one of the most important parameters in welding processes. It is a significant requirement in a welding project, especially in automatic welding systems where a specific width, height, or penetration of the weld bead is needed. This paper presents a novel technique for real-time measurement of the width and height of weld beads in gas metal arc welding (GMAW) using a single high-speed camera and a long-pass optical filter in a passive vision system. The measuring method is based on digital image processing techniques and the image calibration process is based on projective transformations. The measurement process takes less than 3 milliseconds per image, which allows a transfer rate of more than 300 frames per second. The proposed methodology can be used in any metal transfer mode of a gas metal arc welding process and does not have occlusion problems. The responses of the measurement system presented here are in good agreement with off-line data collected by a common laser-based 3D scanner. Each measurement is compared using a statistical Welch's t-test of the null hypothesis; in no case is the null hypothesis rejected at the significance level α = 0.01, validating the results and the performance of the proposed vision system. PMID:27649198
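    Welch's t-test, used above to compare the vision measurements against the laser-scanner reference, can be computed directly from two samples. A minimal sketch giving the statistic and the Welch–Satterthwaite degrees of freedom (the critical-value lookup is omitted):

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's t statistic and degrees of freedom for two samples with
    possibly unequal variances."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)  # sample variance A
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)  # sample variance B
    t = (ma - mb) / math.sqrt(va / na + vb / nb)
    # Welch-Satterthwaite approximation for the degrees of freedom
    dof = (va / na + vb / nb) ** 2 / (
        (va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, dof
```

With equal sample sizes and equal variances the degrees of freedom reduce to 2(n - 1), matching the classic two-sample t-test.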

  7. Real-time seam tracking control system based on line laser visions

    NASA Astrophysics Data System (ADS)

    Zou, Yanbiao; Wang, Yanbo; Zhou, Weilin; Chen, Xiangzhi

    2018-07-01

    A six-degree-of-freedom robotic welding platform for automatic tracking was designed in this study to realize real-time tracking of weld seams. Moreover, the feature point tracking method and the adaptive fuzzy control algorithm used in the welding process were studied and analyzed. A laser vision sensor and its measuring principle were designed and studied. Before welding, the initial coordinate values of the feature points were obtained using morphological methods. During welding, a target tracking method based on a Gaussian kernel was used to extract the real-time feature points of the weld. An adaptive fuzzy controller was designed that takes the deviation of the feature points and the rate of change of the deviation as inputs. The quantization factors, scale factor, and weight function were adjusted in real time, and the input and output domains, fuzzy rules, and membership functions were constantly updated to generate a series of smooth robot bias voltages. Three groups of experiments were conducted on different types of curved welds in a strong arc and spatter noise environment, using a welding current of 120 A in short-circuit Metal Active Gas (MAG) arc welding. The tracking error was less than 0.32 mm and the sensor's measurement frequency reached up to 20 Hz. The torch end moved smoothly during welding. The weld trajectory can be tracked accurately, thereby satisfying the requirements of welding applications.
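    The fuzzy controller maps the feature-point deviation and its rate of change to a torch correction. A minimal, non-adaptive sketch with a tiny Mamdani-style rule base and centroid defuzzification; all membership ranges and rule outputs are illustrative, not the paper's tuned values:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b over [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Fuzzy labels over {negative, zero, positive}, ranges in mm (illustrative)
LABELS = {
    "neg": lambda x: tri(x, -2.0, -1.0, 0.0),
    "zero": lambda x: tri(x, -1.0, 0.0, 1.0),
    "pos": lambda x: tri(x, 0.0, 1.0, 2.0),
}
CENTRES = {"neg": -0.5, "zero": 0.0, "pos": 0.5}  # output centres (mm)

def fuzzy_correction(deviation, deviation_rate):
    """Map seam-tracking deviation (mm) and its rate (mm/s) to a torch
    correction (mm): correct opposite to the combined input sign."""
    num = den = 0.0
    for d_lab, d_mu in LABELS.items():
        for r_lab, r_mu in LABELS.items():
            w = min(d_mu(deviation), r_mu(deviation_rate))  # firing strength
            out = -(CENTRES[d_lab] + CENTRES[r_lab])        # rule output
            num += w * out
            den += w
    return num / den if den else 0.0  # inputs outside range -> no correction
```

The paper's adaptive scheme additionally retunes the quantization and scale factors online; that layer is omitted here.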

  8. Short- and medium-range 3D sensing for space applications

    NASA Astrophysics Data System (ADS)

    Beraldin, J. A.; Blais, Francois; Rioux, Marc; Cournoyer, Luc; Laurin, Denis G.; MacLean, Steve G.

    1997-07-01

    This paper focuses on the characteristics and performance of a laser range scanner (LARS) with short and medium range 3D sensing capabilities for space applications. This versatile laser range scanner is a precision measurement tool intended to complement the current Canadian Space Vision System (CSVS). Together, these vision systems are intended to be used during the construction of the International Space Station (ISS). Integration of the LARS to the CSVS will allow 3D surveying of a robotic work-site, identification of known objects from registered range and intensity images, and object detection and tracking relative to the orbiter and ISS. The data supplied by the improved CSVS will be invaluable in Orbiter rendez-vous and in assisting the Orbiter/ISS Remote Manipulator System operators. The major advantages of the LARS over conventional video-based imaging are its ability to operate with sunlight shining directly into the scanner and its immunity to spurious reflections and shadows which occur frequently in space. Because the LARS is equipped with two high-speed galvanometers to steer the laser beam, any spatial location within the field of view of the camera can be addressed. This level of versatility enables the LARS to operate in two basic scan pattern modes: (1) variable scan resolution mode and (2) raster scan mode. In the variable resolution mode, the LARS can search and track targets and geometrical features on objects located within a field of view of 30 degrees X 30 degrees and with corresponding range from about 0.5 m to 2000 m. This flexibility allows implementations of practical search and track strategies based on the use of Lissajous patterns for multiple targets. The tracking mode can reach a refresh rate of up to 137 Hz. The raster mode is used primarily for the measurement of registered range and intensity information of large stationary objects. 
It allows, among other things, target-based measurements, feature-based measurements, and image-based measurements such as differential inspection in 3D space and surface reflectance monitoring. The digitizing and modeling of human subjects, cargo payloads, and environments are also possible with the LARS. A number of examples illustrating the many capabilities of the LARS are presented in this paper.

  9. LASIK eye surgery

    MedlinePlus

    ... Assisted In Situ Keratomileusis; Laser vision correction; Nearsightedness - Lasik; Myopia - Lasik ... cornea (curvature) and the length of the eye. LASIK uses an excimer laser (an ultraviolet laser) to ...

  10. Automated vision occlusion-timing instrument for perception-action research.

    PubMed

    Brenton, John; Müller, Sean; Rhodes, Robbie; Finch, Brad

    2018-02-01

    Vision occlusion spectacles are a highly valuable instrument for visual-perception-action research in a variety of disciplines. In sports, occlusion spectacles have enabled invaluable knowledge to be obtained about the superior capability of experts to use visual information to guide actions within in-situ settings. Triggering the spectacles to occlude a performer's vision at a precise time in an opponent's action or object flight has been problematic, due to experimenter error in using a manual button-press approach. This article describes a new laser-curtain wireless trigger for vision occlusion spectacles that is portable and fast in terms of its transmission time. The laser curtain can be positioned in a variety of orientations to accept a motion trigger, such as a cricket bowler's arm that distorts the lasers, which then activates a wireless signal for the occlusion spectacles to change from transparent to opaque, which occurs in only 8 ms. Results are reported from calculations done in an electronics laboratory, as well as from tests in a performance laboratory with a cricket bowler and a baseball pitcher, which verified this short time delay before vision occlusion. In addition, our results show that occlusion consistently occurred when it was intended, that is, near ball release and during mid-ball-flight. Only 8% of the collected data trials were unusable. The laser curtain improves upon the limitations of existing vision occlusion spectacle triggers, indicating that it is a valuable instrument for perception-action research in a variety of disciplines.

  11. Automatic vision system for analysis of microscopic behavior of flow and transport in porous media

    NASA Astrophysics Data System (ADS)

    Rashidi, Mehdi; Dehmeshki, Jamshid; Dickenson, Eric; Daemi, M. Farhang

    1997-10-01

    This paper describes the development of a novel automated and efficient vision system to obtain velocity and concentration measurements within a porous medium. An aqueous fluid, laced with a fluorescent dye or fluorescent microspheres, flows through a transparent, refractive-index-matched column packed with transparent crystals. For illumination, a planar laser sheet passes through the column while a CCD camera records the laser-illuminated planes. Detailed microscopic velocity and concentration fields have been computed within a 3D volume of the column. For measuring velocities, while the aqueous fluid laced with fluorescent microspheres flows through the transparent medium, a CCD camera records the motions of the fluorescing particles on a video cassette recorder. The recorded images are acquired automatically frame by frame and transferred to the computer for processing, using a frame grabber and purpose-written algorithms, through an RS-232 interface. Since the grabbed images are of poor quality at this stage, preprocessing is applied to enhance the particles within the images. Finally, the enhanced particles are tracked to calculate velocity vectors in the plane of the beam. For concentration measurements, while the aqueous fluid, laced with a fluorescent organic dye, flows through the transparent medium, a CCD camera sweeps back and forth across the column and records concentration slices on the planes illuminated by the laser beam traveling simultaneously with the camera. Subsequently, these recorded images are transferred to the computer for processing in a similar fashion to the velocity measurement. In order to have a fully automatic vision system, several detailed image processing techniques were developed to match images that have different intensity values but the same topological characteristics. This results in normalized interstitial chemical concentrations as a function of time within the porous column.
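    The particle-based velocity computation can be sketched as nearest-neighbour matching of particle centroids between consecutive frames; centroid extraction is assumed to have been done by the enhancement stage described above. The function name and the displacement gate are illustrative:

```python
def particle_velocities(frame1, frame2, dt, max_disp=5.0):
    """Estimate in-plane velocity vectors by matching each particle
    centroid in frame1 to its nearest neighbour in frame2.
    Frames are lists of (x, y) centroids; dt is the frame interval."""
    velocities = []
    for x1, y1 in frame1:
        # Nearest centroid in the next frame
        best = min(frame2, default=None,
                   key=lambda p: (p[0] - x1) ** 2 + (p[1] - y1) ** 2)
        if best is None:
            continue
        dx, dy = best[0] - x1, best[1] - y1
        # Reject implausibly large displacements (mismatches)
        if (dx * dx + dy * dy) ** 0.5 <= max_disp:
            velocities.append((dx / dt, dy / dt))
    return velocities
```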

  12. Knowledge-based vision and simple visual machines.

    PubMed Central

    Cliff, D; Noble, J

    1997-01-01

    The vast majority of work in machine vision emphasizes the representation of perceived objects and events: it is these internal representations that incorporate the 'knowledge' in knowledge-based vision or form the 'models' in model-based vision. In this paper, we discuss simple machine vision systems developed by artificial evolution rather than traditional engineering design techniques, and note that the task of identifying internal representations within such systems is made difficult by the lack of an operational definition of representation at the causal mechanistic level. Consequently, we question the nature and indeed the existence of representations posited to be used within natural vision systems (i.e. animals). We conclude that representations argued for on a priori grounds by external observers of a particular vision system may well be illusory, and are at best place-holders for yet-to-be-identified causal mechanistic interactions. That is, applying the knowledge-based vision approach in the understanding of evolved systems (machines or animals) may well lead to theories and models that are internally consistent, computationally plausible, and entirely wrong. PMID:9304684

  13. Advances in image compression and automatic target recognition; Proceedings of the Meeting, Orlando, FL, Mar. 30, 31, 1989

    NASA Technical Reports Server (NTRS)

    Tescher, Andrew G. (Editor)

    1989-01-01

    Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.

  14. Active imaging system performance model for target acquisition

    NASA Astrophysics Data System (ADS)

    Espinola, Richard L.; Teaney, Brian; Nguyen, Quang; Jacobs, Eddie L.; Halford, Carl E.; Tofsted, David H.

    2007-04-01

    The U.S. Army RDECOM CERDEC Night Vision & Electronic Sensors Directorate has developed a laser-range-gated imaging system performance model for the detection, recognition, and identification of vehicle targets. The model is based on the established US Army RDECOM CERDEC NVESD sensor performance models of the human system response through an imaging system. The Java-based model, called NVLRG, accounts for the effect of active illumination, atmospheric attenuation, and turbulence effects relevant to LRG imagers, such as speckle and scintillation, and for the critical sensor and display components. This model can be used to assess the performance of recently proposed active SWIR systems through various trade studies. This paper will describe the NVLRG model in detail, discuss the validation of recent model components, present initial trade study results, and outline plans to validate and calibrate the end-to-end model with field data through human perception testing.

  15. The contribution of stereo vision to the control of braking.

    PubMed

    Tijtgat, Pieter; Mazyn, Liesbeth; De Laey, Christophe; Lenoir, Matthieu

    2008-03-01

    In this study the contribution of stereo vision to the control of braking in front of a stationary target vehicle was investigated. Participants with normal (StereoN) and weak (StereoW) stereo vision drove a go-cart along a linear track towards a stationary vehicle. They could start braking from a distance of 4, 7, or 10m from the vehicle. Deceleration patterns were measured by means of a laser. A lack of stereo vision was associated with an earlier onset of braking, but the duration of the braking manoeuvre was similar. During the deceleration, the time of peak deceleration occurred earlier in drivers with weak stereo vision. Stopping distance was greater in those lacking in stereo vision. A lack of stereo vision was associated with a more prudent brake behaviour, in which the driver took into account a larger safety margin. This compensation might be caused either by an unconscious adaptation of the human perceptuo-motor system, or by a systematic underestimation of distance remaining due to the lack of stereo vision. In general, a lack of stereo vision did not seem to increase the risk of rear-end collisions.

  16. Changes in vision related quality of life in patients with diabetic macular edema: ranibizumab or laser treatment?

    PubMed

    Turkoglu, Elif Betul; Celık, Erkan; Aksoy, Nilgun; Bursalı, Ozlem; Ucak, Turgay; Alagoz, Gursoy

    2015-01-01

    To compare the changes in vision related quality of life (VR-QoL) in patients with diabetic macular edema (DME) undergoing intravitreal ranibizumab (IVR) injection or focal/grid laser. In this prospective study, 70 patients with clinically significant macular edema (CSME) were randomized to undergo IVR injection (n=35) or focal/grid laser (n=35). If necessary, the laser or ranibizumab injections were repeated. Distance and near visual acuities, central retinal thickness (CRT) and the 25-item Visual Function Questionnaire (VFQ-25) were used to measure the effectiveness of the treatments and VR-QoL at baseline and 6 months after IVR or laser treatment. The demographic and clinical findings before treatment were similar in both groups. The improvements in distance and near visual acuities were higher in the IVR group than in the laser group (p<0.01). The reduction in CRT in the IVR group was higher than that in the laser group (p<0.01). In both groups, the VFQ-25 composite score tended to improve from baseline to 6 months, and at 6 months the change in composite score was significantly higher in the IVR group than in the laser group (p<0.05). The improvement in overall composite score was 6.3 points for the IVR group compared with 3.0 points in the laser group. Patients treated with IVR and laser had large improvements in composite scores, general vision, and near and distance visual acuities in VFQ-25 at 6 months in comparison with baseline scores, and also in the mental health subscale in the IVR group. Our study revealed that IVR improved not only visual acuity and CRT, but also vision related quality of life, more than laser treatment in DME. These patient-reported outcomes may play an important role in the treatment choice in DME for clinicians.

  17. Knowledge-based machine vision systems for space station automation

    NASA Technical Reports Server (NTRS)

    Ranganath, Heggere S.; Chipman, Laure J.

    1989-01-01

    Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.

  18. Close coupling of pre- and post-processing vision stations using inexact algorithms

    NASA Astrophysics Data System (ADS)

    Shih, Chi-Hsien V.; Sherkat, Nasser; Thomas, Peter D.

    1996-02-01

    Work has been reported using lasers to cut deformable materials. Although the use of a laser reduces material deformation, distortion due to mechanical feed misalignment persists. Changes in the lace pattern are also caused by the release of tension in the lace structure as it is cut. To tackle the problem of distortion due to material flexibility, the 2VMethod, together with the Piecewise Error Compensation Algorithm incorporating inexact algorithms (fuzzy logic, neural networks and neural fuzzy techniques), was developed. A spring-mounted pen is used to emulate the distortion of the lace pattern caused by tactile cutting and feed misalignment. Using pre- and post-processing vision systems, it is possible to monitor the scalloping process and generate on-line information for the artificial intelligence engines. This overcomes the problems of lace distortion due to the trimming process. Applying the algorithms developed, the system produces excellent results, much better than a human operator.

  19. Vision Systems with the Human in the Loop

    NASA Astrophysics Data System (ADS)

    Bauckhage, Christian; Hanheide, Marc; Wrede, Sebastian; Käster, Thomas; Pfeiffer, Michael; Sagerer, Gerhard

    2005-12-01

    The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, they can adapt to their environment, and thus they exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval; the others are cognitive vision systems that constitute prototypes of visual active memories which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After discussing adaptive content-based image retrieval and object and action recognition in an office environment, the issue of assessing cognitive systems is raised. Experiences from psychologically evaluated human-machine interactions are reported, and the promising potential of psychologically-based usability experiments is stressed.

  20. Garment Counting in a Textile Warehouse by Means of a Laser Imaging System

    PubMed Central

    Martínez-Sala, Alejandro Santos; Sánchez-Aartnoutse, Juan Carlos; Egea-López, Esteban

    2013-01-01

    Textile logistics warehouses are highly automated, mechanized places where control points are needed to count and validate the number of garments in each batch. This paper proposes and describes a low-cost, small-size automated system designed to count the number of garments by processing an image of the corresponding hanger hooks, generated using an array of phototransistor sensors and a linear laser beam. The generated image is processed using computer vision techniques to infer the number of garment units. The system has been tested in two logistic warehouses with a mean error in the estimated number of hangers of 0.13%. PMID:23628760
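    The counting stage can be caricatured in one dimension: the phototransistor array yields a profile of laser intensity along the rail, and every contiguous run of blocked samples is one hanger hook. The signal values and threshold below are invented for illustration.

```python
def count_hooks(profile, threshold=0.5):
    """Count contiguous runs where the laser beam is blocked (value < threshold)."""
    count, inside = 0, False
    for v in profile:
        blocked = v < threshold
        if blocked and not inside:
            count += 1                 # a new run of blocked samples starts
        inside = blocked
    return count

profile = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1]
hooks = count_hooks(profile)           # three blocked runs -> 3
```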

  1. Garment counting in a textile warehouse by means of a laser imaging system.

    PubMed

    Martínez-Sala, Alejandro Santos; Sánchez-Aartnoutse, Juan Carlos; Egea-López, Esteban

    2013-04-29

    Textile logistics warehouses are highly automated, mechanized places where control points are needed to count and validate the number of garments in each batch. This paper proposes and describes a low-cost, small-size automated system designed to count the number of garments by processing an image of the corresponding hanger hooks, generated using an array of phototransistor sensors and a linear laser beam. The generated image is processed using computer vision techniques to infer the number of garment units. The system has been tested in two logistic warehouses with a mean error in the estimated number of hangers of 0.13%.

  2. Comparing wavefront-optimized, wavefront-guided and topography-guided laser vision correction: clinical outcomes using an objective decision tree.

    PubMed

    Stonecipher, Karl; Parrish, Joseph; Stonecipher, Megan

    2018-05-18

    This review is intended to update and educate the reader on the currently available options for laser vision correction, more specifically, laser-assisted in-situ keratomileusis (LASIK). In addition, some related clinical outcomes data from over 1000 cases performed over a 1-year period are presented to highlight some differences between the various treatment profiles currently available, including the rapidity of visual recovery. The cases in question were performed on the basis of a decision tree to segregate patients on the basis of anatomical, topographic and aberrometry findings; the decision tree was formulated based on the data available in some of the reviewed articles. Numerous recent studies reported in the literature provide data related to the risks and benefits of LASIK; alternatives to a laser refractive procedure are also discussed. The results from these studies have been used to prepare a decision tree to assist the surgeon in choosing the best option for the patient based on the data from several standard preoperative diagnostic tests. The data presented here should aid surgeons in understanding the effects of currently available LASIK treatment profiles. Surgeons should also be able to appreciate how the findings were used to create a decision tree to help choose the most appropriate treatment profile for patients. Finally, the retrospective evaluation of clinical outcomes based on the decision tree should provide surgeons with a realistic expectation for their own outcomes should they adopt such a decision tree in their own practice.
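    The paper's decision tree is not reproduced in the abstract, so the sketch below only illustrates the general shape of such a triage: nested checks on topography, aberrometry and anatomy. Every threshold, branch and variable name here is invented for illustration and is not the paper's tree, nor clinical guidance.

```python
def choose_profile(topo_irregular, hoa_rms_um, corneal_thickness_um):
    """Pick a treatment profile from hypothetical diagnostic findings.

    All cutoffs are invented placeholders, not clinical values.
    """
    if topo_irregular:                 # irregular topography -> topo-guided
        return "topography-guided"
    if hoa_rms_um > 0.35:              # elevated higher-order aberrations
        return "wavefront-guided"
    if corneal_thickness_um < 500:     # thin cornea: reconsider LASIK at all
        return "review alternatives to LASIK"
    return "wavefront-optimized"

profile = choose_profile(False, 0.2, 550)   # regular topo, low HOA, normal thickness
```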

  3. 3D shape measurement with thermal pattern projection

    NASA Astrophysics Data System (ADS)

    Brahm, Anika; Reetz, Edgar; Schindwolf, Simon; Correns, Martin; Kühmstedt, Peter; Notni, Gunther

    2016-12-01

    Structured light projection techniques are well-established optical methods for contactless and nondestructive three-dimensional (3D) measurements. Most systems operate in the visible wavelength range (VIS) due to commercially available projection and detection technology. For example, the 3D reconstruction can be done with a stereo-vision setup by finding corresponding pixels in both cameras, followed by triangulation. Problems occur if the properties of the object materials disturb the measurements, which are based on diffuse light reflections. For example, some materials are too transparent, translucent, highly absorbent, or reflective in the VIS range and cannot be recorded properly. To overcome these challenges, we present an alternative thermal approach that operates in the infrared (IR) region of the electromagnetic spectrum. For this purpose, we used two cooled mid-wave infrared (MWIR) cameras (3-5 μm) to detect emitted heat patterns, which were introduced by a CO2 laser. We present a thermal 3D system based on a GOBO (GOes Before Optics) wheel projection unit and first 3D analyses for different system parameters and samples. We also show a second alternative approach based on an incoherent (heat) source, to overcome typical disadvantages of high-power laser-based systems, such as industrial health and safety considerations as well as high investment costs. Thus, materials like glass or fiber-reinforced composites can be measured contactlessly and without the need for additional coatings.
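    For a rectified stereo pair, the triangulation step mentioned above reduces to the pinhole relation Z = f · B / d. The focal length, baseline and disparity values below are illustrative, not the system's parameters.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a matched pixel pair: focal length (pixels) times baseline (m)
    divided by the disparity (pixels) between the two camera views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

z = depth_from_disparity(focal_px=1200.0, baseline_m=0.10, disparity_px=24.0)
```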

  4. Laser speckle imaging for lesion detection on tooth

    NASA Astrophysics Data System (ADS)

    Gavinho, Luciano G.; Silva, João. V. P.; Damazio, João. H.; Sfalcin, Ravana A.; Araujo, Sidnei A.; Pinto, Marcelo M.; Olivan, Silvia R. G.; Prates, Renato A.; Bussadori, Sandra K.; Deana, Alessandro M.

    2018-02-01

    Computer vision technologies for diagnostic imaging of oral lesions, specifically carious lesions of the teeth, are in their early years of development. Dental caries is a public health problem that worries countries around the world, as it affects almost the entire population at least once in each individual's life. The present work demonstrates current techniques for obtaining information about lesions on teeth by segmenting laser speckle images (LSI). A laser speckle image results from the reflection of laser light on a rough surface; it was long considered noise, but it has important features that carry information about the illuminated surface. Even though these are basic images, only a few works have analyzed them with computer vision methods. In this article, we present the latest results of our group, in which computer vision techniques were adapted to segment laser speckle images for diagnostic purposes. These methods are applied to the segmentation of images into healthy and lesioned regions of the tooth, and they have proven effective in the diagnosis of early-stage lesions, often imperceptible to traditional diagnostic methods in clinical practice. The first method uses first-order statistical models, segmenting the image by comparing the mean and standard deviation of pixel intensities. The second method is based on the chi-square (χ2) distance between image histograms, bringing a significant improvement in diagnostic precision, while a third method introduces fractal geometry, exposing through the fractal dimension, more precisely than the other segmentation methods, the difference between lesioned and healthy areas of a tooth. So far, we can observe efficiency in the segmentation of the carious regions. Software was developed to execute and demonstrate the applicability of the models.
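    The second method's core comparison can be written down directly: the chi-square distance between two intensity histograms. The bin values below are invented for illustration.

```python
def chi_square_distance(h1, h2, eps=1e-12):
    """0.5 * sum((a - b)^2 / (a + b)) over corresponding histogram bins;
    eps guards against empty bins."""
    return 0.5 * sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))

same = chi_square_distance([4, 8, 2], [4, 8, 2])   # identical histograms -> 0.0
far  = chi_square_distance([2, 0], [0, 2])         # fully disjoint mass -> 2.0
```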

  5. A review of infrared laser energy absorption and subsequent healing in the cornea

    NASA Astrophysics Data System (ADS)

    Saunders, Latica L.; Johnson, Thomas E.; Neal, Thomas A.

    2004-07-01

    The purpose of this review is to compile information on the optical and healing properties of the cornea when exposed to infrared lasers. Our long-term goal is to optimize the treatment parameters for corneal injuries after exposure to infrared laser systems. The majority of the information currently available in the literature focuses on corneal healing after therapeutic vision correction surgery with LASIK or PRK; only a limited amount of information is available on corneal healing after injury with an infrared laser system. In this review we speculate on infrared photon energy absorption in corneal injury and healing, including the role of the tear layer. The aim is to gain a better understanding of infrared energy absorption in the cornea and how it might impact healing.

  6. Corneal reshaping using a pulsed UV solid-state laser

    NASA Astrophysics Data System (ADS)

    Ren, Qiushi; Simon, Gabriel; Parel, Jean-Marie A.; Shen, Jin-Hui; Takesue, Yoshiko

    1993-06-01

    Replacing the gas ArF (193 nm) excimer laser with a solid state laser source in the far-UV spectrum region would eliminate the hazards of a gas laser and would reduce its size which is desirable for photo-refractive keratectomy (PRK). In this study, we investigated corneal reshaping using a frequency-quintupled (213 nm) pulsed (10 ns) Nd:YAG laser coupled to a computer-controlled optical scanning delivery system. Corneal topographic measurements showed myopic corrections ranging from 2.3 to 6.1 diopters. Post-operative examination with the slit-lamp and operating microscope demonstrated a smoothly ablated surface without corneal haze. Histological results showed a smoothly sloping surface without recognizable steps. The surface quality and cellular effects were similar to that of previously described excimer PRK. Our study demonstrated that a UV solid state laser coupled to an optical scanning delivery system is capable of reshaping the corneal surface with the advantage of producing customized, aspheric corrections without corneal haze which may improve the quality of vision following PRK.

  7. Handheld laser scanner automatic registration based on random coding

    NASA Astrophysics Data System (ADS)

    He, Lei; Yu, Chun-ping; Wang, Li

    2011-06-01

    Current research on laser scanners focuses mainly on static measurement; little use has been made of dynamic measurement, which is appropriate for a wider range of problems and situations. In particular, a traditional laser scanner must be kept stable while scanning, and coordinate transformation parameters must be measured between stations. To make scanning measurement intelligent and rapid, in this paper we develop a new registration algorithm for a handheld laser scanner based on the positions of targets, which realizes dynamic measurement without additional complex work. The two cameras on the laser scanner photograph artificial target points, designed with random coding, to obtain their three-dimensional coordinates. A set of matched points is then found among the control points to realize the orientation of the scanner by least-squares common-point transformation. After that, the two cameras can directly measure the laser point cloud on the surface of the object and obtain point cloud data in a unified coordinate system. There are three major contributions in this paper. First, a laser scanner based on binocular vision is designed with two cameras and one laser head; with these, real-time orientation of the laser scanner is realized and efficiency is improved. Second, a coding marker is introduced to solve data matching, and a random coding method is proposed; compared with other coding methods, the marker is simple to match and avoids shadowing the object. Finally, a recognition method for the coding marker is proposed; using distance recognition, it is more efficient. The method presented here can be used widely in measurements of objects from small to huge, such as vehicles and airplanes, strengthening intelligence and efficiency. The results of experiments and theoretical analysis demonstrate that the proposed method realizes dynamic measurement with a handheld laser scanner and is reasonable and efficient.
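    The least-squares common-point transformation step can be sketched as a rigid Procrustes fit of matched control points. For brevity the sketch below is 2-D (rotation angle plus translation), whereas the scanner's problem is 3-D; all point values are invented.

```python
import math

def fit_rigid_2d(src, dst):
    """Least-squares rotation angle and translation mapping src points onto dst."""
    n = len(src)
    csx = sum(p[0] for p in src) / n;  csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n;  cdy = sum(p[1] for p in dst) / n
    # sums of dot and cross products of the centered point pairs
    dot = sum((p[0]-csx)*(q[0]-cdx) + (p[1]-csy)*(q[1]-cdy) for p, q in zip(src, dst))
    cross = sum((p[0]-csx)*(q[1]-cdy) - (p[1]-csy)*(q[0]-cdx) for p, q in zip(src, dst))
    theta = math.atan2(cross, dot)
    c, s = math.cos(theta), math.sin(theta)
    return theta, (cdx - (c*csx - s*csy), cdy - (s*csx + c*csy))

# recover a known 30-degree rotation plus (2, 3) translation
src = [(0.0, 0.0), (2.0, 0.0), (0.0, 1.0)]
a = math.pi / 6
dst = [(math.cos(a)*x - math.sin(a)*y + 2.0,
        math.sin(a)*x + math.cos(a)*y + 3.0) for x, y in src]
theta, (tx, ty) = fit_rigid_2d(src, dst)
```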

  8. Real-Time Detection and Tracking of Multiple People in Laser Scan Frames

    NASA Astrophysics Data System (ADS)

    Cui, J.; Song, X.; Zhao, H.; Zha, H.; Shibasaki, R.

    This chapter presents an approach to detect and track multiple people robustly in real time using laser scan frames. The detection and tracking of people in real time is a problem that arises in a variety of contexts, including intelligent surveillance for security, scene analysis for service robots, and crowd behavior analysis for human behavior study. Over the last several years, an increasing number of laser-based people-tracking systems have been developed on both mobile robotics platforms and fixed platforms using one or multiple laser scanners. It has been shown that processing laser scanner data makes the tracker much faster and more robust than a vision-only tracker in complex situations. In this chapter, we present a novel robust tracker to detect and track multiple people in a crowded, open area in real time. First, raw data measuring the two legs of each person at a height of 16 cm above the ground are obtained with multiple registered laser scanners. A stable feature is extracted using the accumulated distribution of successive laser frames; in this way, the noise that generates split and merged measurements is smoothed well, and the pattern of rhythmically swinging legs is utilized to extract each leg. Second, a probabilistic tracking model is presented, and a sequential inference process using Bayes' rule is described. The sequential inference process is difficult to compute analytically, so two strategies are presented to simplify the computation. In the case of independent tracking, the Kalman filter is used with a more efficient measurement likelihood model based on a region-coherency property. Finally, to deal with trajectory fragments, we present a concise approach that fuses a small amount of visual information from a synchronized video camera into the laser data. Evaluation with real data shows that the proposed method is robust and effective, achieving a significant improvement over existing laser-based trackers.
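    The Kalman-filter branch of the tracker can be illustrated with a minimal 1-D constant-velocity filter per tracked leg. The process and measurement noise values below are invented, and the chapter's region-coherency likelihood model is not reproduced here.

```python
def kalman_1d(zs, dt=0.1, q=0.01, r=0.25):
    """Filter scalar position measurements zs; returns the filtered positions."""
    x, v = zs[0], 0.0              # state: position and velocity
    P = [[1.0, 0.0], [0.0, 1.0]]   # state covariance
    out = []
    for z in zs[1:]:
        # predict with the constant-velocity model x' = x + dt*v
        x, v = x + dt * v, v
        P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # update with the position measurement z
        S = P[0][0] + r                    # innovation covariance
        k0, k1 = P[0][0] / S, P[1][0] / S  # Kalman gain
        y = z - x                          # innovation
        x, v = x + k0 * y, v + k1 * y
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        out.append(x)
    return out

# a leg standing still at position 1.0 stays estimated at 1.0
estimates = kalman_1d([1.0] * 20)
```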

  9. Nd:YAG laser for preretinal hemorrhage in diabetic retinopathy.

    PubMed

    Karagiannis, Dimitrios; Kontadakis, Georgios A; Flanagan, Declan

    2018-06-01

    To present fundus images of a case of severe preretinal hemorrhage in diabetic retinopathy that was treated with posterior hyaloidotomy using an Nd:YAG laser. A 35-year-old diabetic patient presented with sudden painless loss of vision due to severe preretinal hemorrhage over the macular area and high-risk proliferative diabetic retinopathy. Her visual acuity was counting fingers. The posterior hyaloid face was treated with an Nd:YAG laser (posterior hyaloidotomy). Panretinal laser photocoagulation was first performed to control the proliferative diabetic retinopathy. Blood drained inferiorly into the vitreous cavity with clearance of the premacular area. Prompt treatment with panretinal laser photocoagulation followed by posterior hyaloidotomy with the YAG laser is a viable option to avoid further proliferative diabetic retinopathy complications and vision loss. The current image clearly depicts treatment efficacy.

  10. Vehicle-based vision sensors for intelligent highway systems

    NASA Astrophysics Data System (ADS)

    Masaki, Ichiro

    1989-09-01

    This paper describes a vision system, based on ASIC (Application Specific Integrated Circuit) approach, for vehicle guidance on highways. After reviewing related work in the fields of intelligent vehicles, stereo vision, and ASIC-based approaches, the paper focuses on a stereo vision system for intelligent cruise control. The system measures the distance to the vehicle in front using trinocular triangulation. An application specific processor architecture was developed to offer low mass-production cost, real-time operation, low power consumption, and small physical size. The system was installed in the trunk of a car and evaluated successfully on highways.

  11. A Low Cost Sensors Approach for Accurate Vehicle Localization and Autonomous Driving Application.

    PubMed

    Vivacqua, Rafael; Vassallo, Raquel; Martins, Felipe

    2017-10-16

    Autonomous driving on public roads requires precise localization, within a few centimeters. Even the best current precise localization systems based on the Global Navigation Satellite System (GNSS) cannot always reach this level of precision, especially in urban environments, where the signal is disturbed by surrounding buildings and artifacts. Laser range finders and stereo vision have been successfully used for obstacle detection, mapping and localization to solve the autonomous driving problem. Unfortunately, Light Detection and Ranging (LIDAR) sensors are very expensive, and stereo vision requires powerful dedicated hardware to process the camera information. In this context, this article presents a low-cost sensor architecture and data fusion algorithm capable of autonomous driving on narrow two-way roads. Our approach exploits a combination of a short-range visual lane marking detector and a dead reckoning system to build a long and precise perception of the lane markings behind the vehicle. This information is used to localize the vehicle in a map that also contains the reference trajectory for autonomous driving. Experimental results show the successful application of the proposed system in a real autonomous driving situation.
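    The dead-reckoning half of the approach can be sketched as integrating incremental (distance, heading-change) odometry readings into a 2-D pose. The step values below are invented for illustration.

```python
import math

def dead_reckon(pose, steps):
    """Integrate (d, dtheta) increments starting from pose = (x, y, theta)."""
    x, y, th = pose
    for d, dth in steps:
        th += dth                  # apply the heading change first
        x += d * math.cos(th)      # then advance along the new heading
        y += d * math.sin(th)
    return x, y, th

# drive 2 m straight, turn 90 degrees left, drive 1 m
x, y, th = dead_reckon((0.0, 0.0, 0.0),
                       [(1.0, 0.0), (1.0, 0.0), (1.0, math.pi / 2)])
```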

  12. A Low Cost Sensors Approach for Accurate Vehicle Localization and Autonomous Driving Application

    PubMed Central

    Vassallo, Raquel

    2017-01-01

    Autonomous driving on public roads requires precise localization, within a few centimeters. Even the best current precise localization systems based on the Global Navigation Satellite System (GNSS) cannot always reach this level of precision, especially in urban environments, where the signal is disturbed by surrounding buildings and artifacts. Laser range finders and stereo vision have been successfully used for obstacle detection, mapping and localization to solve the autonomous driving problem. Unfortunately, Light Detection and Ranging (LIDAR) sensors are very expensive, and stereo vision requires powerful dedicated hardware to process the camera information. In this context, this article presents a low-cost sensor architecture and data fusion algorithm capable of autonomous driving on narrow two-way roads. Our approach exploits a combination of a short-range visual lane marking detector and a dead reckoning system to build a long and precise perception of the lane markings behind the vehicle. This information is used to localize the vehicle in a map that also contains the reference trajectory for autonomous driving. Experimental results show the successful application of the proposed system in a real autonomous driving situation. PMID:29035334

  13. Three-dimensional tracking and imaging laser scanner for space operations

    NASA Astrophysics Data System (ADS)

    Laurin, Denis G.; Beraldin, J. A.; Blais, Francois; Rioux, Marc; Cournoyer, Luc

    1999-05-01

    This paper presents the development of a laser range scanner (LARS) as a three-dimensional sensor for space applications. The scanner is a versatile system capable of surface imaging, target ranging and tracking. It is capable of short-range (0.5 m to 20 m) and long-range (20 m to 10 km) sensing using triangulation and time-of-flight (TOF) methods, respectively. At short range (1 m), the resolution is sub-millimeter and drops gradually with distance (2 cm at 10 m). For long range, TOF provides a constant resolution of ±3 cm, independent of range. The LARS could complement the existing Canadian Space Vision System (CSVS) for robotic manipulation. As an active vision system, the LARS is immune to sunlight and adverse lighting; this is a major advantage over the CSVS, as outlined in this paper. The LARS could also replace existing radar systems used for rendezvous and docking; there are clear advantages of an optical system over a microwave radar in terms of size, mass, power and precision. Equipped with two high-speed galvanometers, the laser can be steered to address any point in a 30° × 30° field of view. The scanning can be continuous (raster scan, Lissajous) or direct (random). This gives the scanner the ability to register high-resolution 3D images of range and intensity (up to 4000 × 4000 pixels) and to perform point target tracking as well as object recognition and geometrical tracking. The imaging capability of the scanner using an eye-safe laser is demonstrated. An efficient fiber laser delivers 60 mW CW or 3 μJ pulses at 20 kHz for TOF operation. Implementation of search and track of multiple targets is also demonstrated; for a single target, refresh rates up to 137 Hz are possible. Considerations for space qualification of the scanner are discussed. Typical space operations, such as docking, object attitude tracking, and inspections are described.
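    The TOF ranging mode rests on one relation: a pulse travels to the target and back, so range R = c · t / 2. The round-trip time below is illustrative, not a measurement from the paper.

```python
def tof_range_m(round_trip_s, c=299_792_458.0):
    """Range in meters from a measured round-trip pulse time in seconds."""
    return 0.5 * c * round_trip_s

r = tof_range_m(66.7e-9)   # a ~66.7 ns round trip corresponds to roughly 10 m
```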

  14. Effectiveness of an Eyelid Thermal Pulsation Procedure to Treat Recalcitrant Dry Eye Symptoms After Laser Vision Correction.

    PubMed

    Schallhorn, Craig S; Schallhorn, Julie M; Hannan, Stephen; Schallhorn, Steven C

    2017-01-01

    To provide an initial retrospective evaluation of the effectiveness of a thermal pulsation system to treat intractable patient-reported dry eye symptoms following laser vision correction. A total of 109 eyes of 57 patients underwent thermal pulsation therapy (LipiFlow; TearScience, Morrisville, NC) for the treatment of dry eye symptoms following laser vision correction. A standardized dry eye questionnaire, the Standard Patient Evaluation of Eye Dryness (SPEED II), was administered to all patients before and after thermal pulsation therapy. The primary outcome was patient-reported dry eye symptoms as measured by this questionnaire. The mean patient age was 49 years (interquartile range [IQR]: 38 to 60), 70% were female, and the primary refractive procedure was LASIK (n = 91, 83%) or photorefractive keratectomy (PRK) (n = 18, 17%). Patients underwent thermal pulsation therapy at a mean of 40.5 months (IQR: 27.6 to 55.0) after the primary procedure. The mean pre-therapy SPEED II questionnaire score was 17.5 (IQR: 14 to 21), with a reduced mean post-therapy score of 10.2 (IQR: 6 to 14; 95% confidence interval [CI]: 8.8 to 11.5, P < .001). Patients with PRK tended to report more improvement. At the follow-up clinical evaluation, objective improvements were noted in tear break-up time (+1.9 sec; 95% CI: 1.3 to 2.5), reduction in grade of meibomian gland dysfunction (-0.69; 95% CI: -0.54 to -0.84), and corneal staining (-0.74; 95% CI: -0.57 to -0.91). In this initial retrospective evaluation, a significant improvement in patient-reported dry eye symptoms was observed following thermal pulsation therapy. This treatment modality may have utility in the management of dry eye symptoms following laser vision correction, but further study is needed to define its role. [J Refract Surg. 2017;33(1):30-36.]. Copyright 2017, SLACK Incorporated.

  15. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    PubMed Central

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems. PMID:28079187

  16. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems.

    PubMed

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-12

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.

  17. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    NASA Astrophysics Data System (ADS)

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.
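    The spiking model above is far richer, but its core cue can be caricatured: two events that occur at nearly the same time on the two sensors likely come from the same scene point, so their column offset is a disparity estimate. The greedy matcher, event tuples and time window below are invented for illustration; they are not the paper's network.

```python
def match_events(left, right, dt_max=1e-3):
    """Pair (timestamp, column) events from two sensors when their timestamps
    differ by at most dt_max; return the column disparities of the pairs."""
    disparities, j = [], 0
    for t, x in left:
        while j < len(right) and right[j][0] < t - dt_max:
            j += 1                      # discard right events too old to match
        if j < len(right) and abs(right[j][0] - t) <= dt_max:
            disparities.append(x - right[j][1])
            j += 1                      # each right event is used at most once
    return disparities

left  = [(0.0010, 10), (0.0050, 12)]
right = [(0.0012,  7), (0.0051,  9)]
disp = match_events(left, right)        # both pairs coincide in time
```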

  18. Economics of cutting hardwood dimension parts with an automated system

    Treesearch

    Henry A. Huber; Steve Ruddell; Kalinath Mukherjee; Charles W. McMillin

    1989-01-01

    A financial analysis using discounted cash-flow decision methods was completed to determine the economic feasibility of replacing a conventional roughmill crosscut and rip operation with a proposed automated computer vision and laser cutting system. Red oak and soft maple lumber were cut at production levels of 30 thousand board feet (MBF)/day and 5 MBF/day to produce...
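    The discounted-cash-flow decision in the study reduces to a net-present-value comparison: invest if the discounted savings exceed the equipment cost. The cash flows and discount rate below are invented, not the study's figures.

```python
def npv(rate, cash_flows):
    """Net present value; cash_flows[0] occurs now, one flow per later period."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# hypothetical: pay 100k for the automated system, save 30k/yr for 5 years, 10% rate
value = npv(0.10, [-100_000.0] + [30_000.0] * 5)   # positive NPV favors investing
```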

  19. Detecting Motion from a Moving Platform; Phase 3: Unification of Control and Sensing for More Advanced Situational Awareness

    DTIC Science & Technology

    2011-11-01

    The report (RX-TY-TR-2011-0096-01) summarizes the development of a novel computer vision sensor based upon the biological vision system of the common housefly, Musca domestica...

  20. Lane Marking Detection and Reconstruction with Line-Scan Imaging Data.

    PubMed

    Li, Lin; Luo, Wenting; Wang, Kelvin C P

    2018-05-20

    Lane marking detection and localization are crucial for autonomous driving and lane-based pavement surveys. Numerous studies have been done to detect and locate lane markings for advanced driver assistance systems, in which image data are usually captured by vision-based cameras. However, a limited number of studies have identified lane markings using high-resolution laser images for road condition evaluation. In this study, the laser images are acquired with a digital highway data vehicle (DHDV). Subsequently, a novel methodology is presented for automated lane marking identification and reconstruction, implemented in four phases: (1) binarization of the laser images with a new threshold method (multi-box segmentation based threshold method); (2) determination of candidate lane markings with closing operations and a marching squares algorithm; (3) identification of true lane markings by eliminating false positives (FPs) using a linear support vector machine; and (4) reconstruction of damaged and dashed lane marking segments to form a continuous lane marking based on geometric features such as adjacent lane marking location and lane width. Finally, a case study is given to validate the methodology. The findings indicate the new strategy is robust in image binarization and lane marking localization. This study would be beneficial in road lane-based pavement condition evaluation, such as lane-based rutting measurement and crack classification.
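    The motivation behind a multi-box threshold in phase (1) is that a single global threshold fails under uneven laser illumination; thresholding each image strip against its own local statistic keeps dim markings visible in dark strips. The sketch below uses a per-strip mean threshold as a stand-in; the paper's actual statistic is an assumption here, and the pixel values are invented.

```python
def binarize_multibox(image, box_w):
    """Binarize each box_w-wide vertical strip with its own mean threshold."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for x0 in range(0, w, box_w):
        x1 = min(x0 + box_w, w)
        strip = [image[y][x] for y in range(h) for x in range(x0, x1)]
        thr = sum(strip) / len(strip)   # local statistic for this box
        for y in range(h):
            for x in range(x0, x1):
                out[y][x] = 1 if image[y][x] > thr else 0
    return out

# a dim marking (100) still stands out within its own darker strip
image = [[50, 200, 10, 100],
         [50, 200, 10, 100]]
binary = binarize_multibox(image, box_w=2)
```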

  1. Selection of the thermal imaging approach for the XM29 combat rifle fire control system

    NASA Astrophysics Data System (ADS)

    Brindley, Eric; Lillie, Jack; Plocki, Peter; Volz, Robert T.

    2003-09-01

    The paper briefly describes the XM29 (formerly OICW) weapon, its fire control system and the requirements for thermal imaging. System level constraints on the in-hand weight dictate the need for a high degree of integration with other elements of the system such as the laser rangefinder, direct view optics and daylight video, all operating at different wavelengths. The available Focal Plane Array technology choices are outlined and the evaluation process is described, including characterization at the US Army Night Vision and Electronic Sensors Directorate (NVESD) and recent field-testing at Quantico USMC base, Virginia. This paper addresses the trade study, technology assessment and test-bed effort. The relationship between field and lab testing performance is compared and path forward recommended.

  2. Ultraviolet-Blocking Lenses Protect, Enhance Vision

    NASA Technical Reports Server (NTRS)

    2010-01-01

    To combat the harmful properties of light in space, as well as the artificial radiation produced during laser and welding work, Jet Propulsion Laboratory (JPL) scientists developed a lens capable of absorbing, filtering, and scattering the dangerous light without obstructing vision. SunTiger Inc. (now Eagle Eyes Optics of Calabasas, California) was formed to market a full line of sunglasses based on the JPL discovery, which promised 100-percent elimination of harmful wavelengths and enhanced visual clarity. The technology was recently inducted into the Space Technology Hall of Fame.

  3. Adaptive optics; Proceedings of the Meeting, Arlington, VA, April 10, 11, 1985

    NASA Astrophysics Data System (ADS)

    Ludman, J. E.

    Papers are presented on the directed energy program for ballistic missile defense, a self-referencing wavefront interferometer for laser sources, the effects of mirror grating distortions on diffraction spots at wavefront sensors, and the optical design of an all-reflecting, high-resolution camera for active optics on ground-based telescopes. Also considered are transverse coherence length observations, time-dependent statistics of upper atmosphere optical turbulence, high altitude acoustic soundings, and the Cramer-Rao lower bound on wavefront sensor error. Other topics include wavefront reconstruction from noisy slope or difference data using the discrete Fourier transform, acoustooptic adaptive signal processing, the recording of phase deformations on a PLZT wafer for holographic and spatial light modulator applications, and an optical phase reconstructor using a multiplier-accumulator approach. Papers are also presented on an integrated optics wavefront measurement sensor, a new optical preprocessor for automatic vision systems, a model for predicting infrared atmospheric emission fluctuations, and optical logic gates and flip-flops based on polarization-bistable semiconductor lasers.

  4. Refractive Changes Induced by Spherical Aberration in Laser Correction Procedures: An Adaptive Optics Study.

    PubMed

    Amigó, Alfredo; Martinez-Sorribes, Paula; Recuerda, Margarita

    2017-07-01

    To study the effect on vision of induced negative and positive spherical aberration within the range of laser vision correction procedures. In 10 eyes (mean age: 35.8 years) under cycloplegic conditions, spherical aberration values from -0.75 to +0.75 µm in 0.25-µm steps were induced by an adaptive optics system. Astigmatism and spherical refraction were corrected, whereas the other natural aberrations remained untouched. Visual acuity, depth of focus (defined as the interval of vision for which the target was still perceived as acceptable), contrast sensitivity, and the change in spherical refraction associated with the variation in pupil diameter from 6 to 2.5 mm were measured. A refractive change of 1.60 D/µm of induced spherical aberration was obtained. Emmetropic eyes became myopic when positive spherical aberration was induced and hyperopic when negative spherical aberration was induced (R² = 81%). There were weak correlations between spherical aberration and visual acuity or depth of focus (R² = 2% and 3%, respectively). Contrast sensitivity worsened with the increment of spherical aberration (R² = 59%). When pupil size decreased, emmetropic eyes became hyperopic when preexisting spherical aberration was positive and myopic when spherical aberration was negative, with an average refractive change of 0.60 D/µm of spherical aberration (R² = 54%). An inverse linear correlation exists between the refractive state of the eye and the spherical aberration induced within the range of laser vision correction. Small values of spherical aberration do not worsen visual acuity or depth of focus, but positive spherical aberration may induce night myopia. In addition, the changes in spherical refraction when the pupil constricts may worsen near vision when positive spherical aberration is induced or improve it when spherical aberration is negative. [J Refract Surg. 2017;33(7):470-474.]. Copyright 2017, SLACK Incorporated.
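
    The two reported linear relationships can be expressed as a small worked example. The slopes (1.60 D/µm at the large pupil, 0.60 D/µm on constriction) are from the abstract; the sign conventions (positive spherical aberration gives a myopic, i.e. negative, shift at the large pupil and a hyperopic shift on constriction) follow its wording, and the 0.25-µm sample value is an arbitrary choice.

```python
def sa_refraction_shift(sa_um):
    # Large-pupil shift: 1.60 D per µm of spherical aberration;
    # positive SA drives the eye myopic, so the shift is negative.
    return -1.60 * sa_um

def constriction_shift(sa_um):
    # Shift when the pupil constricts from 6 to 2.5 mm:
    # 0.60 D per µm, opposite in sign (positive SA -> hyperopic shift).
    return +0.60 * sa_um

print(sa_refraction_shift(0.25))  # -0.40 D: a plausible night-myopia shift
print(constriction_shift(0.25))   # +0.15 D hyperopic when the pupil narrows
```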

  5. Robot Guidance Using Machine Vision Techniques in Industrial Environments: A Comparative Review.

    PubMed

    Pérez, Luis; Rodríguez, Íñigo; Rodríguez, Nuria; Usamentiaga, Rubén; García, Daniel F

    2016-03-05

    In the factory of the future, most operations will be done by autonomous robots that need visual feedback to move around the working space avoiding obstacles, to work collaboratively with humans, to identify and locate the working parts, to complete the information provided by other sensors to improve their positioning accuracy, etc. Different vision techniques, such as photogrammetry, stereo vision, structured light, time of flight and laser triangulation, among others, are widely used for inspection and quality control processes in industry and are now also used for robot guidance. Choosing which type of vision system to use depends strongly on the parts that need to be located or measured. Thus, this paper presents a comparative review of different machine vision techniques for robot guidance. The work analyzes the accuracy, range and weight of the sensors, safety, processing time and environmental influences. Researchers and developers can take it as background information for their future work.
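
    Among the techniques the review compares, laser triangulation recovers range from the geometry of the laser, the camera, and the detected spot. A minimal sketch of that geometry (the baseline and angles below are assumed values, not figures from the review):

```python
import math

def triangulate_range(baseline_m, laser_angle_deg, camera_angle_deg):
    """Range from the laser source to the spot by triangulation.

    Illustrative geometry: laser and camera are separated by a known
    baseline; each angle is measured between the baseline and the
    respective line of sight. Law of sines in that triangle.
    """
    a = math.radians(laser_angle_deg)
    c = math.radians(camera_angle_deg)
    return baseline_m * math.sin(c) / math.sin(a + c)

# Hypothetical sensor head: 10 cm baseline, both angles at 45 degrees.
r = triangulate_range(0.1, 45, 45)
print(r)  # about 0.0707 m
```

    The same relation explains why triangulation accuracy degrades with range: at long distances the triangle becomes thin and small angle errors translate into large range errors.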

  6. Robot Guidance Using Machine Vision Techniques in Industrial Environments: A Comparative Review

    PubMed Central

    Pérez, Luis; Rodríguez, Íñigo; Rodríguez, Nuria; Usamentiaga, Rubén; García, Daniel F.

    2016-01-01

    In the factory of the future, most operations will be done by autonomous robots that need visual feedback to move around the working space avoiding obstacles, to work collaboratively with humans, to identify and locate the working parts, to complete the information provided by other sensors to improve their positioning accuracy, etc. Different vision techniques, such as photogrammetry, stereo vision, structured light, time of flight and laser triangulation, among others, are widely used for inspection and quality control processes in industry and are now also used for robot guidance. Choosing which type of vision system to use depends strongly on the parts that need to be located or measured. Thus, this paper presents a comparative review of different machine vision techniques for robot guidance. The work analyzes the accuracy, range and weight of the sensors, safety, processing time and environmental influences. Researchers and developers can take it as background information for their future work. PMID:26959030

  7. Vision-based obstacle recognition system for automated lawn mower robot development

    NASA Astrophysics Data System (ADS)

    Mohd Zin, Zalhan; Ibrahim, Ratnawati

    2011-06-01

    Digital image processing (DIP) techniques have recently been widely used in various types of applications. Classification and recognition of a specific object using a vision system require some challenging tasks in the fields of image processing and artificial intelligence. The ability and efficiency of a vision system to capture and process images are very important for any intelligent system, such as an autonomous robot. This paper focuses on the development of a vision system that could contribute to the development of an automated vision-based lawn mower robot. The work involves the implementation of DIP techniques to detect and recognize three different types of obstacles that usually exist on a football field. The focus was on the study of different types and sizes of obstacles, the development of a vision-based obstacle recognition system and the evaluation of the system's performance. Image processing techniques such as image filtering, segmentation, enhancement and edge detection have been applied in the system. The results show that the developed system is able to detect and recognize various types of obstacles on a football field with a recognition rate of more than 80%.
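
    One stage of the filtering/segmentation/edge-detection chain named above can be sketched with a Sobel gradient-magnitude edge map. This is a generic textbook stage, not the authors' exact pipeline; the synthetic "obstacle" image and the threshold are assumptions.

```python
import numpy as np

def sobel_edges(img, thresh=1.0):
    """Boolean edge map from Sobel gradient magnitude (naive loops for
    clarity; a convolution would normally be used)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx, gy = np.zeros((h, w)), np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i-1:i+2, j-1:j+2]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy) > thresh

img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0          # bright square "obstacle" on dark grass
edges = sobel_edges(img)
print(edges.sum())             # nonzero: only the obstacle boundary fires
```

    Edge maps like this feed the later recognition stage, since obstacle shape lives in the boundary rather than in the flat interior.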

  8. CT Image Sequence Analysis for Object Recognition - A Rule-Based 3-D Computer Vision System

    Treesearch

    Dongping Zhu; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman

    1991-01-01

    Research is now underway to create a vision system for hardwood log inspection using a knowledge-based approach. In this paper, we present a rule-based, 3-D vision system for locating and identifying wood defects using topological, geometric, and statistical attributes. A number of different features can be derived from the 3-D input scenes. These features and evidence...

  9. Survey on Ranging Sensors and Cooperative Techniques for Relative Positioning of Vehicles

    PubMed Central

    de Ponte Müller, Fabian

    2017-01-01

    Future driver assistance systems will rely on accurate, reliable and continuous knowledge of the position of other road participants, including pedestrians, bicycles and other vehicles. The usual approach to tackle this requirement is to use on-board ranging sensors inside the vehicle. Radar, laser scanners or vision-based systems are able to detect objects in their line-of-sight. In contrast to these non-cooperative ranging sensors, cooperative approaches follow a strategy in which other road participants actively support the estimation of the relative position. The limitations of on-board ranging sensors regarding their detection range and angle of view, and their susceptibility to blockage, can be addressed by a cooperative approach based on vehicle-to-vehicle communication. The fusion of cooperative and non-cooperative strategies seems to offer the largest benefits regarding accuracy, availability and robustness. This survey offers the reader a comprehensive review of different techniques for vehicle relative positioning. The reader will learn the important performance indicators for relative positioning of vehicles, the different technologies that are both commercially available and currently under research, their expected performance and their intrinsic limitations. Moreover, the latest research in the area of vision-based systems for vehicle detection, as well as the latest work on GNSS-based vehicle localization and vehicular communication for relative positioning of vehicles, are reviewed. The survey also includes research on the fusion of cooperative and non-cooperative approaches to increase reliability and availability. PMID:28146129
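
    The benefit of fusing cooperative and non-cooperative estimates can be illustrated with textbook inverse-variance weighting of two independent range measurements. The sensor values and variances below are made-up numbers, not figures from the survey.

```python
def fuse(r1, var1, r2, var2):
    """Inverse-variance fusion of two independent range estimates,
    e.g. a radar range and a V2V/GNSS-derived range. Returns the fused
    estimate and its (smaller) variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * r1 + w2 * r2) / (w1 + w2)
    return fused, 1.0 / (w1 + w2)

# Hypothetical readings: radar 25 m (var 0.04 m^2), V2V 26 m (var 0.36 m^2).
r, v = fuse(25.0, 0.04, 26.0, 0.36)
print(r, v)  # fused estimate leans toward the lower-variance radar
```

    The fused variance is always below the smaller input variance, which is the formal sense in which fusion improves accuracy and robustness.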

  10. Treatment of moderate-to-high hyperopia with the WaveLight Allegretto 400 and EX500 excimer laser systems

    PubMed Central

    Motwani, Manoj; Pei, Ronald

    2017-01-01

    Purpose To evaluate the efficacy of treating patients with +3.00 diopters (D) to +6.00 D of hyperopia via laser-assisted in situ keratomileusis (LASIK) with the WaveLight Allegretto 400 and EX500 excimer laser systems. Setting Private clinical ophthalmology practice. Patients and methods This was a retrospective study of patients undergoing LASIK treatments of +3.00 to +6.00 D on two different WaveLight laser systems: 163 eyes on the 400 Hertz (Hz) system and 54 eyes on the 500 Hz system. The duration of follow-up was 6 months postoperation. Data were evaluated for uncorrected distance visual acuity, corrected distance visual acuity (CDVA), spherical equivalents (SEQs), and changes in these parameters (eg, loss of vision, regression over time). Results Treatment with both lasers was safe and effective, with loss of one line of CDVA in four of 162 eyes using the 400 Hz laser system, and none of the 54 eyes with the 500 Hz laser system. Overall, regression ≥0.75 D from goal at 6 months was observed in 11.7% (19/163) of eyes in the 400 Hz laser group and 9.26% (5/54) of eyes in the 500 Hz laser group (regression ≥0.50 D =77.9% [127/163] and 77.8% [42/54], respectively). The mean SEQ regression was 0.10 D for eyes with moderate hyperopia and 0.18 D for eyes with high hyperopia. Conclusions Both the 400 and 500 Hz excimer laser systems were safe and effective for the LASIK treatment of moderate-to-high hyperopia. The overall rate of regression was low and the amount of regression was relatively small with both systems. PMID:28579751

  11. Laser-Camera Vision Sensing for Spacecraft Mobile Robot Navigation

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Khalil, Ahmad S.; Dorais, Gregory A.; Gawdiak, Yuri

    2002-01-01

    The advent of spacecraft mobile robots (free-flying sensor platforms and communications devices intended to accompany astronauts or operate remotely on space missions, both inside and outside of a spacecraft) has demanded the development of a simple and effective navigation scheme. One such system under exploration involves the use of a laser-camera arrangement to predict the relative positioning of the mobile robot. By projecting laser beams from the robot, a 3D reference frame can be introduced. Thus, as the robot shifts in position, the position reference frame produced by the laser images is correspondingly altered. Using the normalization and camera registration techniques presented in this paper, the relative translation and rotation of the robot in 3D are determined from these reference frame transformations.
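
    The abstract does not spell out its normalization and registration math; a standard stand-in for recovering the relative rotation and translation from matched reference-frame points is the Kabsch/SVD method. The reference points and the robot motion below are hypothetical.

```python
import numpy as np

def rigid_transform(P, Q):
    """Recover R, t with Q = R @ P + t from matched 3D points
    (columns of P and Q), via the Kabsch/SVD construction."""
    cp, cq = P.mean(1, keepdims=True), Q.mean(1, keepdims=True)
    U, _, Vt = np.linalg.svd((Q - cq) @ (P - cp).T)
    # Reflection guard: force det(R) = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt
    return R, cq - R @ cp

# Hypothetical laser reference points before/after a known robot motion:
# a 30-degree yaw plus a small translation.
P = np.array([[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]], float)
ang = np.pi / 6
Rz = np.array([[np.cos(ang), -np.sin(ang), 0],
               [np.sin(ang),  np.cos(ang), 0],
               [0, 0, 1]])
t = np.array([[0.2], [0.1], [0.0]])
R_est, t_est = rigid_transform(P, Rz @ P + t)
print(np.allclose(R_est, Rz), np.allclose(t_est, t))
```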

  12. 3D reconstruction of laser projective point with projection invariant generated from five points on 2D target.

    PubMed

    Xu, Guan; Yuan, Jing; Li, Xiaotao; Su, Jian

    2017-08-01

    Vision measurement based on structured light plays a significant role in optical inspection research. A 2D target fixed with a line laser projector is designed to realize the transformations among the world coordinate system, the camera coordinate system and the image coordinate system. The laser projective point and five non-collinear points randomly selected from the target are adopted to construct a projection invariant. The closed-form solutions of the 3D laser points are obtained from the homogeneous linear equations generated from the projection invariants. An optimization function is created from the parameterized re-projection errors of the laser points and the target points in the image coordinate system. Furthermore, the nonlinear optimization solutions of the world coordinates of the projection points, the camera parameters and the lens distortion coefficients are obtained by minimizing the optimization function. The accuracy of the 3D reconstruction is evaluated by comparing the displacements of the reconstructed laser points with the actual displacements. The effects of image quantity, lens distortion and noise are investigated in the experiments, which demonstrate that the reconstruction approach enables accurate testing in the measurement system.
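
    The quantity the paper's nonlinear optimization minimizes, re-projection error, can be sketched with a bare pinhole model (no lens distortion). The intrinsic matrix and the 3D points below are made-up values, not the paper's calibration.

```python
import numpy as np

def reproj_rmse(K, pts3d, pts2d):
    """RMS re-projection error: project camera-frame 3D points
    (columns of pts3d) through intrinsics K and compare with the
    observed pixel coordinates (columns of pts2d)."""
    proj = K @ pts3d                  # homogeneous pixel coordinates
    proj = proj[:2] / proj[2]         # perspective division
    return float(np.sqrt(((proj - pts2d) ** 2).sum(0).mean()))

# Assumed intrinsics: 800 px focal length, principal point (320, 240).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
pts3d = np.array([[0.1, -0.1, 0.0],
                  [0.05, 0.0, -0.05],
                  [1.0, 1.0, 1.0]])
pts2d = (K @ pts3d)[:2] / (K @ pts3d)[2]   # perfect observations
print(reproj_rmse(K, pts3d, pts2d))        # 0.0 for noise-free data
```

    In the full method, this error is parameterized by the unknown world coordinates, camera parameters, and distortion coefficients, and all are adjusted jointly to drive it down.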

  13. Biomimetic machine vision system.

    PubMed

    Harman, William M; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael

    2005-01-01

    Real-time application of digital imaging for use in machine vision systems has proven to be prohibitive when used within control systems that employ low-power single processors without compromising the scope of vision or the resolution of captured images. Development of a real-time analog machine vision system is the focus of research taking place at the University of Wyoming. This new vision system is based upon the biological vision system of the common house fly. A single sensor is developed, representing a single facet of the fly's eye. This new sensor is then incorporated into an array of sensors capable of detecting objects and tracking motion in 2D space. This system "preprocesses" incoming image data, so that minimal data processing is needed to determine the location of a target object. Due to the nature of the sensors in the array, hyperacuity is achieved, thereby eliminating resolution issues found in digital vision systems. In this paper, we discuss the biological traits of the fly eye and the specific traits that led to the development of this machine vision system. We also discuss the process of developing an analog-based sensor that mimics the characteristics of interest in the biological vision system. The paper concludes with a discussion of how an array of these sensors can be applied toward solving real-world machine vision issues.
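
    The hyperacuity idea (localizing a target more finely than the sensor spacing) can be illustrated with an array of overlapping Gaussian-response sensors and a response-weighted centroid. This is an illustrative model, not the Wyoming hardware; the sensor spacing, response width, and target position are assumptions.

```python
import numpy as np

def fly_eye_position(target_x, centers, sigma=1.0):
    """Estimate a target position from an array of photosensors whose
    Gaussian response profiles overlap, as in the fly-eye design: the
    response-weighted centroid interpolates between sensor centers."""
    responses = np.exp(-(centers - target_x) ** 2 / (2 * sigma ** 2))
    return (responses * centers).sum() / responses.sum()

centers = np.arange(0.0, 10.0)        # 10 sensors, 1 unit apart
est = fly_eye_position(4.3, centers)
print(est)  # close to 4.3 despite the 1-unit sensor spacing
```

    The overlap is essential: with narrow, non-overlapping responses only one sensor would fire and the estimate would snap to the nearest center.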

  14. Adaptive Optics for the Human Eye

    NASA Astrophysics Data System (ADS)

    Williams, D. R.

    2000-05-01

    Adaptive optics can extend not only the resolution of ground-based telescopes, but also that of the human eye. Both static and dynamic aberrations in the cornea and lens of the normal eye limit its optical quality. Though it is possible to correct defocus and astigmatism with spectacle lenses, higher-order aberrations remain. These aberrations blur vision and prevent us from seeing at the fundamental limits set by the retina and brain. They also limit the resolution of cameras used to image the living retina, cameras that are critical for the diagnosis and treatment of retinal disease. I will describe an adaptive optics system that measures the wave aberration of the eye in real time and compensates for it with a deformable mirror, endowing the human eye with unprecedented optical quality. This instrument provides fresh insight into the ultimate limits on human visual acuity, reveals for the first time images of the retinal cone mosaic responsible for color vision, and points the way to contact lenses and laser surgical methods that could enhance vision beyond what is possible today. Supported by the NSF Science and Technology Center for Adaptive Optics, the National Eye Institute, and Bausch and Lomb, Inc.

  15. Use of market segmentation to identify untapped consumer needs in vision correction surgery for future growth.

    PubMed

    Loarie, Thomas M; Applegate, David; Kuenne, Christopher B; Choi, Lawrence J; Horowitz, Diane P

    2003-01-01

    Market segmentation analysis identifies discrete segments of the population whose beliefs are consistent with exhibited behaviors such as purchase choice. This study applies market segmentation analysis to low myopes (-1 to -3 D with less than 1 D cylinder) in their consideration and choice of a refractive surgery procedure to discover opportunities within the market. A quantitative survey based on focus group research was sent to a demographically balanced sample of myopes using contact lenses and/or glasses. A variable reduction process followed by a clustering analysis was used to discover discrete belief-based segments. The resulting segments were validated both analytically and through in-market testing. Discontented individuals who wear contact lenses are the primary target for vision correction surgery. However, 81% of the target group is apprehensive about laser in situ keratomileusis (LASIK). They are nervous about the procedure and strongly desire reversibility and exchangeability. There exists a large untapped opportunity for vision correction surgery within the low myope population. Market segmentation analysis helped determine how to best meet this opportunity through repositioning existing procedures or developing new vision correction technology, and could also be applied to identify opportunities in other vision correction populations.
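
    The abstract does not name the clustering algorithm used after variable reduction; a minimal k-means over toy two-attribute survey scores illustrates how belief-based segments fall out of such an analysis. The attributes, scores, and segment structure below are entirely synthetic.

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Minimal k-means with deterministic initialization from the data
    (a generic stand-in for the study's unspecified clustering step)."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels, centers

# Toy belief scores per respondent: (apprehension, desire-for-reversibility),
# built from two well-separated synthetic segments.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([1, 1], 0.2, (20, 2)),
               rng.normal([4, 4], 0.2, (20, 2))])
labels, centers = kmeans(X, 2)
print(np.round(centers, 1))  # two distinct segment centroids
```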

  16. Dual Use of Image Based Tracking Techniques: Laser Eye Surgery and Low Vision Prosthesis

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.; Barton, R. Shane

    1994-01-01

    With a concentration on Fourier optics pattern recognition, we have developed several methods of tracking objects in dynamic imagery to automate certain space applications such as orbital rendezvous and spacecraft capture, or planetary landing. We are developing two of these techniques for Earth applications in real-time medical image processing. The first is warping of a video image, developed to evoke shift invariance to scale and rotation in correlation pattern recognition. The technology is being applied to compensation for certain field defects in low vision humans. The second is using the optical joint Fourier transform to track the translation of unmodeled scenes. Developed as an image fixation tool to assist in calculating shape from motion, it is being applied to tracking motions of the eyeball quickly enough to keep a laser photocoagulation spot fixed on the retina, thus avoiding collateral damage.

  17. Dual use of image based tracking techniques: Laser eye surgery and low vision prosthesis

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1994-01-01

    With a concentration on Fourier optics pattern recognition, we have developed several methods of tracking objects in dynamic imagery to automate certain space applications such as orbital rendezvous and spacecraft capture, or planetary landing. We are developing two of these techniques for Earth applications in real-time medical image processing. The first is warping of a video image, developed to evoke shift invariance to scale and rotation in correlation pattern recognition. The technology is being applied to compensation for certain field defects in low vision humans. The second is using the optical joint Fourier transform to track the translation of unmodeled scenes. Developed as an image fixation tool to assist in calculating shape from motion, it is being applied to tracking motions of the eyeball quickly enough to keep a laser photocoagulation spot fixed on the retina, thus avoiding collateral damage.

  18. Neural Networks for Computer Vision: A Framework for Specifications of a General Purpose Vision System

    NASA Astrophysics Data System (ADS)

    Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.

    1989-03-01

    The development of autonomous land vehicles (ALVs) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long-term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields, including the neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system, and, using a neurally based computing substrate, it can complete all necessary visual tasks in real time.

  19. Intravitreal Aflibercept Injection in Eyes With Substantial Vision Loss After Laser Photocoagulation for Diabetic Macular Edema: Subanalysis of the VISTA and VIVID Randomized Clinical Trials.

    PubMed

    Wykoff, Charles C; Marcus, Dennis M; Midena, Edoardo; Korobelnik, Jean-François; Saroj, Namrata; Gibson, Andrea; Vitti, Robert; Berliner, Alyson J; Williams Liu, Zinaria; Zeitz, Oliver; Metzig, Carola; Schmelter, Thomas; Heier, Jeffrey S

    2016-12-22

    Information on the effect of anti-vascular endothelial growth factor therapy in eyes with diabetic macular edema (DME) with vision loss after macular laser photocoagulation is clinically valuable. To evaluate visual and anatomic outcomes in a subgroup of macular laser photocoagulation treatment control (hereafter laser control) eyes with substantial vision loss receiving treatment with intravitreal aflibercept injection. This investigation was a post hoc analysis of a subgroup of laser control eyes in 2 phase 3 trials-VISTA (Study of Intravitreal Aflibercept Injection in Patients With Diabetic Macular Edema) and VIVID (Intravitreal Aflibercept Injection in Vision Impairment Due to DME)-in a multicenter setting. One hundred nine laser control eyes with center-involving DME were included. Treatment with intravitreal aflibercept injection (2 mg) every 8 weeks after 5 monthly doses with sham injections on nontreatment visits starting at week 24 was initiated on meeting prespecified criteria of at least a 10-letter visual acuity loss at 2 consecutive visits or at least a 15-letter visual acuity loss from the best previous measurement at 1 visit and vision not better than at baseline. Visual and anatomic outcomes in a subgroup of laser control eyes receiving treatment with intravitreal aflibercept injection. Through week 100, a total of 63 of 154 eyes (40.9%) in VISTA and 46 of 133 eyes (34.6%) in VIVID initially randomized to laser control received treatment with intravitreal aflibercept injection. The median time from week 24 to the first intravitreal aflibercept injection treatment was 34.0 (VISTA) and 83.5 (VIVID) days. In this subgroup, the mean (SD) visual gain from baseline to week 100 was 2.2 (12.5) (VISTA) and 3.8 (10.1) (VIVID) letters. 
At the time of intravitreal aflibercept injection initiation, these eyes had a mean (SD) loss of 11.0 (10.1) (VISTA) and 10.0 (6.5) (VIVID) letters from baseline, and they subsequently gained a mean (SD) of 17.4 (9.7) (VISTA) and 13.6 (8.6) (VIVID) letters from the initiation of treatment with intravitreal aflibercept injection through week 100. There was a minimal mean change in central subfield thickness from baseline in these eyes at the time of intravitreal aflibercept injection initiation (an increase of 3.9 μm in VISTA and a decrease of 3.0 μm in VIVID), after which further mean (SD) reductions of 285.6 (202.6) μm (VISTA) and 313.4 (181.9) μm (VIVID) occurred through week 100. Intravitreal aflibercept injection improves visual and anatomic outcomes in eyes experiencing substantial vision loss after macular laser photocoagulation treatment for DME. clinicaltrials.gov Identifiers: NCT01363440 and NCT01331681.

  20. A Medical Manipulator System with Lasers in Photodynamic Therapy of Port Wine Stains

    PubMed Central

    Wang, Xingtao; Tian, Chunlai; Duan, Xingguang; Gu, Ying; Huang, Naiyan

    2014-01-01

    Port wine stains (PWS) are a congenital malformation and dilation of the superficial dermal capillaries. Photodynamic therapy (PDT) with lasers is an effective treatment of PWS with good results. However, because the laser density is uneven and nonuniform, the treatment is carried out manually by a doctor, providing little accuracy. Additionally, since the treatment of a single lesion can take between 30 and 60 minutes, the doctor can become fatigued after only a few applications. To assist the medical staff with this treatment method, a medical manipulator system (MMS) was built to operate the lasers. The manipulator holds the laser fiber and, using a combination of active and passive joints, the fiber can be operated automatically. In addition to the control input from the doctor over a human-computer interface, information from a binocular vision system is used to guide and supervise the operation. Clinical results of treatments with and without the MMS are compared using nonparametric statistics. The MMS, which can significantly reduce the workload of doctors and improve the uniformity of laser irradiation, was applied safely and effectively in PDT treatment of PWS, with good therapeutic results. PMID:25302297

  1. Image formation simulation for computer-aided inspection planning of machine vision systems

    NASA Astrophysics Data System (ADS)

    Irgenfried, Stephan; Bergmann, Stephan; Mohammadikaji, Mahsa; Beyerer, Jürgen; Dachsbacher, Carsten; Wörn, Heinz

    2017-06-01

    In this work, a simulation toolset for computer-aided inspection planning (CAIP) of systems for automated optical inspection (AOI) is presented, along with a versatile two-robot setup for verification of simulation and system planning results. The toolset helps to narrow down the large design space of optical inspection systems in interaction with a system expert. The image formation taking place in optical inspection systems is simulated using GPU-based real-time graphics and high-quality off-line rendering. The simulation pipeline allows a stepwise optimization of the system, from fast evaluation of surface patch visibility based on real-time graphics up to evaluation of image processing results based on off-line global illumination calculation. A focus of this work is the dependency of simulation quality on measuring, modeling and parameterizing the optical surface properties of the object to be inspected. The applicability to real-world problems is demonstrated by taking the example of planning a 3D laser scanner application. Qualitative and quantitative comparison results of synthetic and real images are presented.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kitai, M S; Semchishen, A V; Semchishen, V A

    The optical quality of the eye cornea surface after laser vision correction essentially depends on the characteristic roughness scale (CRS) of the ablated surface, which is mainly determined by the absorption coefficient of the cornea at the laser wavelength. Thus, when an excimer ArF laser (λ = 193 nm) is used, the absorption coefficient is 39000 cm⁻¹, darkening by the dissociation products takes place, and the depth of the roughness relief can be as large as 0.23 μm. Under irradiation with the Er : YAG laser (λ = 2940 nm), clearing is observed due to the rupture of hydrogen bonds in water, and the relief depth exceeds 1 μm. It is shown that the process of reepithelization that occurs after laser vision correction leads to the improvement of the optical quality of the cornea surface. (Interaction of laser radiation with matter)
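
    The connection between absorption coefficient and roughness scale follows from the Beer-Lambert 1/e penetration depth, which can be checked numerically. The ArF value is the abstract's 39000 cm⁻¹; the lower "cleared" coefficient used for comparison is an assumed round number, not a figure from the record.

```python
def penetration_depth_um(alpha_per_cm):
    """1/e optical penetration depth (Beer-Lambert) in micrometres,
    which sets the scale of the ablation roughness discussed above."""
    return 1.0 / alpha_per_cm * 1e4   # convert cm to micrometres

print(penetration_depth_um(39000))   # ArF at 193 nm: ~0.26 um
print(penetration_depth_um(10000))   # assumed lower (cleared) coefficient: 1.0 um
```

    A strongly absorbed wavelength deposits its energy in a thin layer, giving sub-micrometre relief; clearing lowers the effective absorption and lets the relief grow past a micrometre, consistent with the two regimes the abstract contrasts.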

  3. Comparative Geometrical Accuracy Investigations of Hand-Held 3d Scanning Systems - AN Update

    NASA Astrophysics Data System (ADS)

    Kersten, T. P.; Lindstaedt, M.; Starosta, D.

    2018-05-01

    Hand-held 3D scanning systems are increasingly available on the market from several system manufacturers. These systems are deployed for the 3D recording of objects of different sizes in diverse applications, such as industrial reverse engineering and the documentation of museum exhibits. Typical measurement distances range from 0.5 m to 4.5 m. Although they are often easy to use, the geometric performance of these systems, especially their precision and accuracy, is not well known to many users. First geometrical investigations of a variety of hand-held 3D scanning systems were carried out by the Photogrammetry & Laser Scanning Lab of the HafenCity University Hamburg (HCU Hamburg), in cooperation with two other universities, in 2016. To obtain more information about the accuracy behaviour of the latest generation of hand-held 3D scanning systems, HCU Hamburg conducted further comparative geometrical investigations using structured light systems with speckle pattern (Artec Spider, Mantis Vision PocketScan 3D, Mantis Vision F5-SR, Mantis Vision F5-B, and Mantis Vision F6) and photogrammetric systems (Creaform HandySCAN 700 and Shining FreeScan X7). In the framework of these comparative investigations, geometrically stable reference bodies were used. The appropriate reference data were acquired by measurements with two structured light projection systems (AICON smartSCAN and GOM ATOS I 2M). The comprehensive test results of the different test scenarios are presented and critically discussed in this contribution.
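
    A simple accuracy figure in the spirit of such comparisons is the RMS of nearest-neighbour distances from a hand-held scan to the reference measurement. The sketch below uses brute-force search and synthetic data with an assumed 0.5 mm noise level, not the HCU results.

```python
import numpy as np

def cloud_rms(scan, ref):
    """RMS nearest-neighbour distance from each scan point to the
    reference point cloud (brute-force O(n*m); real pipelines would
    use a KD-tree and an aligned mesh)."""
    d = np.linalg.norm(scan[:, None, :] - ref[None, :, :], axis=-1)
    return float(np.sqrt((d.min(axis=1) ** 2).mean()))

rng = np.random.default_rng(0)
ref = rng.uniform(0, 1, (200, 3))               # reference body points (m)
scan = ref + rng.normal(0, 0.0005, ref.shape)   # scan with 0.5 mm noise
print(round(cloud_rms(scan, ref), 5))
```

    With isotropic Gaussian noise of standard deviation σ per axis, the expected RMS is roughly σ·√3, which is a useful sanity check on such evaluations.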

  4. Improving Vision-Based Motor Rehabilitation Interactive Systems for Users with Disabilities Using Mirror Feedback

    PubMed Central

    Martínez-Bueso, Pau; Moyà-Alcover, Biel

    2014-01-01

    Observation is recommended in motor rehabilitation. For this reason, the aim of this study was to experimentally test the feasibility and benefit of including mirror feedback in vision-based rehabilitation systems: we projected the user on the screen. We conducted a user study using a previously evaluated system that improved the balance and postural control of adults with cerebral palsy. We used a within-subjects design with the two defined feedback conditions (mirror and no-mirror) and two different groups of users (8 with disabilities and 32 without disabilities), using usability measures (time-to-start (Ts) and time-to-complete (Tc)). A two-tailed paired-samples t-test confirmed that, in the case of disabilities, mirror feedback facilitated the interaction in vision-based systems for rehabilitation. The measured times were significantly worse in the absence of the user's own visual feedback (Ts: t = 7.09, P < 0.001; Tc: t = 4.48, P < 0.005). In vision-based interaction systems, the input device is the user's own body; therefore, it makes sense that feedback should be related to the user's body. In the case of disabilities, mirror feedback mechanisms facilitated the interaction in vision-based systems for rehabilitation. The results recommend that developers and researchers use this improvement in vision-based motor rehabilitation interactive systems. PMID:25295310
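
    The two-tailed paired-samples t-test used in the study can be sketched in a few lines of plain Python; the time samples below are hypothetical, not the study's data:

```python
import math

def paired_t(before, after):
    """Paired-samples t statistic: mean difference over its standard error."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical time-to-start samples in seconds (NOT the study's data)
no_mirror = [12.1, 9.8, 14.3, 11.0, 10.5, 13.2, 9.9, 12.7]
mirror    = [ 8.0, 7.5, 10.1,  8.9,  8.2,  9.5, 7.8,  9.0]
t = paired_t(no_mirror, mirror)
print(f"t = {t:.2f}, df = {len(mirror) - 1}")  # t = 9.20, df = 7
```

    The resulting t is compared against the t-distribution with n - 1 degrees of freedom to obtain the two-tailed P value.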

  5. High-power fused assemblies enabled by advances in fiber-processing technologies

    NASA Astrophysics Data System (ADS)

    Wiley, Robert; Clark, Brett

    2011-02-01

    The power handling capabilities of fiber lasers are limited by the technologies available to fabricate and assemble the key optical system components. Previous tools for the assembly, tapering, and fusion of fiber laser elements have had drawbacks with regard to temperature range, alignment capability, assembly flexibility and surface contamination. To provide expanded capabilities for fiber laser assembly, a wide-area electrical plasma heat source was used in conjunction with an optimized image analysis method and a flexible alignment system, integrated according to mechatronic principles. High-resolution imaging and vision-based measurement provided feedback to adjust assembly, fusion, and tapering process parameters. The system was used to perform assembly steps including dissimilar-fiber splicing, tapering, bundling, capillary bundling, and fusion of fibers to bulk optic devices up to several mm in diameter. A wide range of fiber types and diameters were tested, including extremely large diameters and photonic crystal fibers. The assemblies were evaluated for conformation to optical and mechanical design criteria, such as taper geometry and splice loss. The completed assemblies met the performance targets and exhibited reduced surface contamination compared to assemblies prepared on previously existing equipment. The imaging system and image analysis algorithms provided in situ fiber geometry measurement data that agreed well with external measurement. The ability to adjust operating parameters dynamically based on imaging was shown to provide substantial performance benefits, particularly in the tapering of fibers and bundles. The integrated design approach was shown to provide sufficient flexibility to perform all required operations with a minimum of reconfiguration.

  6. Tracking by Identification Using Computer Vision and Radio

    PubMed Central

    Mandeljc, Rok; Kovačič, Stanislav; Kristan, Matej; Perš, Janez

    2013-01-01

    We present a novel system for the detection, localization and tracking of multiple people, which fuses a multi-view computer vision approach with a radio-based localization system. The proposed fusion combines the best of both worlds, excellent computer-vision-based localization and strong identity information provided by the radio system, and is therefore able to perform tracking by identification, which makes it impervious to propagated identity switches. We present a comprehensive methodology for the evaluation of systems that perform person localization in a world coordinate system and use it to evaluate the proposed system as well as its components. Experimental results on a challenging indoor dataset, which involves multiple people walking around a realistically cluttered room, confirm that the proposed fusion of both systems significantly outperforms its individual components. Compared to the radio-based system, it achieves better localization results, while at the same time it successfully prevents the propagation of identity switches that occur in pure computer-vision-based tracking. PMID:23262485

  7. Expanding Dimensionality in Cinema Color: Impacting Observer Metamerism through Multiprimary Display

    NASA Astrophysics Data System (ADS)

    Long, David L.

    Television and cinema display are both trending towards greater ranges and saturation of reproduced colors, made possible by near-monochromatic RGB illumination technologies. Through current broadcast and digital cinema standards work, system designs employing laser light sources, narrow-band LEDs, quantum dots and others are being actively endorsed in promotion of Wide Color Gamut (WCG). Despite the artistic benefits brought to creative content producers, spectrally selective excitation of naturally different human color response functions exacerbates variability of observer experience. An exaggerated variation in color sensing is explicitly counter to the exhaustive controls and calibrations employed in modern motion picture pipelines. Further, singular standard-observer summaries of human color vision, such as the CIE's 1931 and 1964 color matching functions used extensively in motion picture color management, are deficient in recognizing expected human vision variability. Many researchers have confirmed the magnitude of observer metamerism in color matching in both uniform colors and imagery, but few have shown explicit color management with an aim of minimizing differences in observer perception. This research shows not only that observer metamerism influences can be quantitatively predicted and confirmed psychophysically, but that intentionally engineered multiprimary displays employing more than three primaries can offer increased color gamut with drastically improved consistency of experience. To this end, a seven-channel prototype display has been constructed based on observer metamerism models and color difference indices derived from the latest color vision demographic research. This display has been further proven in forced-choice paired-comparison tests to deliver superior color matching to reference stimuli versus both contemporary standard RGB cinema projection and the recently ratified standard laser projection, across a large population of color-normal observers.

  8. Optical system for object detection and delineation in space

    NASA Astrophysics Data System (ADS)

    Handelman, Amir; Shwartz, Shoam; Donitza, Liad; Chaplanov, Loran

    2018-01-01

    Object recognition and delineation is an important task in many environments, such as crime scenes and operating rooms. Marking evidence or surgical tools and attracting the attention of the surrounding staff to the marked objects can affect people's lives. We present an optical system comprising a camera, a computer, and a small laser projector that can detect and delineate objects in the environment. To prove the optical system's concept, we show that it can operate in a hypothetical crime scene in which a pistol is present, automatically recognizing and segmenting it using various computer-vision algorithms. Based on this segmentation, the laser projector illuminates the actual boundaries of the pistol and thus allows the persons in the scene to comfortably locate and measure the pistol without holding any intermediary device, such as an augmented reality handheld device, glasses, or screens. Using additional optical devices, such as a diffraction grating and a cylinder lens, the pistol's size can be estimated. The marked location of the pistol in space remains static, even after its removal. Our optical system can be fixed or dynamically moved, making it suitable for various applications that require marking of objects in space.

  9. Data Fusion for a Vision-Radiological System for Source Tracking and Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enqvist, Andreas; Koppal, Sanjeev

    2015-07-01

    A multidisciplinary approach to tracking the movement of radioactive sources by fusing data from multiple radiological and visual sensors is under development. The goal is to improve the ability to detect, locate, track and identify nuclear/radiological threats. The key concept is that widely available visual and depth sensors can impact radiological detection, since the intensity fall-off in the count rate can be correlated to movement in three dimensions. To enable this, we pose an important question: what is the right combination of sensing modalities and vision algorithms that can best complement a radiological sensor for the purpose of detection and tracking of radioactive material? Similarly, what are the best radiation detection methods and unfolding algorithms suited for data fusion with tracking data? Data fusion of multi-sensor data for radiation detection has seen some interesting developments lately. Significant examples include intelligent radiation sensor systems (IRSS), which are based on larger numbers of distributed similar or identical radiation sensors coupled with position data, forming a network capable of detecting and locating a radiation source. Other developments are gamma-ray imaging systems based on Compton scatter in segmented detector arrays. Similar developments using coded apertures or scatter cameras for neutrons have recently occurred. The main limitation of such systems is not so much their capability but rather their complexity and cost, which are prohibitive for large-scale deployment. Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of two separate calibration algorithms for characterizing the fused sensor system. The deviation from a simple inverse-square fall-off of radiation intensity is explored and accounted for. In particular, the computer vision system enables a map of the distance dependence of the sources being tracked. Infrared, laser or stereoscopic vision sensors are all options for computer-vision implementation, depending on interior vs. exterior deployment, desired resolution and other factors. Similarly, the radiation sensors will be focused on gamma-ray or neutron detection because of the long travel length and the ability to penetrate even moderate shielding. There is a significant difference between vision sensors and radiation sensors in the way the 'source' or signal is generated. A vision sensor needs an external light source to illuminate the object and then detects the re-emitted illumination (or the lack thereof). For a radiation detector, however, the radioactive material is the source itself. The only exception is the field of active interrogation, where radiation is beamed into a material to entice new/additional radiation emission beyond what the material would emit spontaneously. Because the nuclear material is itself the source, all other objects in the environment are 'illuminated' or irradiated by it. Most radiation will readily penetrate regular material, scatter in new directions or be absorbed. Thus, if a radiation source is located near a larger object, that object will in turn scatter some radiation that was initially emitted in a direction other than that of the radiation detector, which can add to the observed count rate. The effect of this scatter is a deviation from the traditional distance dependence of the radiation signal and is a key challenge that needs a combined system-calibration solution and algorithms. Thus both an algebraic approach and a statistical approach have been developed and independently evaluated to investigate the sensitivity to this deviation from the simplified radiation fall-off as a function of distance. The resulting calibrated system algorithms are used and demonstrated in various laboratory scenarios, and later in realistic tracking scenarios. The selection and testing of radiological and computer-vision sensors for additional specific scenarios will be the subject of ongoing and future work. (authors)
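
    The calibration problem described above, accounting for the deviation from a pure inverse-square fall-off, can be sketched as a linear least-squares fit of I(r) = A/r² + B, where the constant B stands in for a scatter/background term. This is a simplification of the authors' algebraic approach; the function name and data are illustrative:

```python
def fit_inverse_square_with_scatter(rs, intensities):
    """Least-squares fit of I(r) = A / r**2 + B; the constant B models a
    room-scatter/background term that perturbs the pure 1/r**2 fall-off."""
    xs = [1.0 / r ** 2 for r in rs]
    n = len(xs)
    sx, sy = sum(xs), sum(intensities)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, intensities))
    det = n * sxx - sx * sx
    A = (n * sxy - sx * sy) / det    # source-strength term
    B = (sxx * sy - sx * sxy) / det  # scatter/background term
    return A, B

# Noise-free synthetic data generated with A = 100, B = 2 (illustrative units)
rs = [1.0, 1.5, 2.0, 3.0, 4.0, 5.0]
A, B = fit_inverse_square_with_scatter(rs, [100.0 / r ** 2 + 2.0 for r in rs])
print(round(A, 6), round(B, 6))  # recovers 100.0 and 2.0
```

    With the vision system supplying r for each track, such a fit separates the direct source term from the scatter contribution.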

  10. Method and system for assembling miniaturized devices

    DOEpatents

    Montesanti, Richard C.; Klingmann, Jeffrey L.; Seugling, Richard M.

    2013-03-12

    An apparatus for assembling a miniaturized device includes a manipulator system including six manipulators operable to position and orient components of the miniaturized device with submicron precision and micron-level accuracy. The manipulator system includes a first plurality of motorized axes, a second plurality of manual axes, and force and torque sensors. Each of the six manipulators includes at least one translation stage, at least one rotation stage, tooling attached to the at least one translation stage or the at least one rotation stage, and an attachment mechanism disposed at a distal end of the tooling and operable to attach at least a portion of the miniaturized device to the tooling. The apparatus also includes an optical coordinate-measuring machine (OCMM) including a machine-vision system, a laser-based distance-measuring probe, and a touch probe. The apparatus also includes an operator control system coupled to the manipulator system and the OCMM.

  11. Ocular injuries from laser accidents

    NASA Astrophysics Data System (ADS)

    Sliney, David H.

    1996-04-01

    Ocular injuries resulting from exposure to laser beams are relatively uncommon, since there is normally a low probability of a relatively small-diameter laser beam entering the pupil of an eye. This has been the accident experience to date with lasers used in the research laboratory and in industry. A review of the accident data suggests that at least one type of laser is responsible for the majority of accidental injuries that result in a visual loss in the exposed eye: the Q-switched neodymium:YAG laser. Although a continuous-wave laser causes a thermal coagulation of tissue, a Q-switched laser having a pulse of only nanoseconds' duration disrupts tissue. A visible or near-infrared laser can be focused on the retina, resulting in a vitreous hemorrhage. Examples of laser ocular injuries will be presented. Despite macular injuries and an initially serious visual loss, the vision of many patients recovers surprisingly well. Others may have severe vision loss. Corneal injuries resulting from exposure to reflected laser energy in the far-infrared account for surprisingly few reported laser accidents. The explanation for this accident statistic is not really clear. However, with the increasing use of lasers operating at many new wavelengths in the ultraviolet, visible and infrared, the ophthalmologist may see more accidental injuries from lasers.

  12. Laser Vision Correction with Q Factor Modification for Keratoconus Management.

    PubMed

    Pahuja, Natasha Kishore; Shetty, Rohit; Sinha Roy, Abhijit; Thakkar, Maithil Mukesh; Jayadev, Chaitra; Nuijts, Rudy Mma; Nagaraja, Harsha

    2017-04-01

    To evaluate the outcomes of corneal laser ablation with Q factor modification for vision correction in patients with progressive keratoconus. In this prospective study, 50 eyes of 50 patients were divided into two groups based on the Q factor (>-1 in Group I and ≤-1 in Group II). All patients underwent a detailed ophthalmic examination including uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), subjective acceptance, and corneal topography using the Pentacam. The Topolyzer was used to measure the corneal asphericity (Q). Ablation was performed based on the preoperative Q values and thinnest pachymetry to obtain a target of near-normal Q. This was followed by corneal collagen crosslinking to stabilize progression. Statistically significant improvement (p ≤ 0.05) was observed in refractive, topographic, and Q values post-treatment in both groups. The improvements in higher-order aberrations and total aberrations were statistically significant in both groups; however, spherical aberration showed statistically significant improvement only in Group II. Ablation based on the preoperative Q and pachymetry for a near-normal postoperative Q value appears to be an effective method to improve visual acuity and quality in patients with keratoconus.
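
    The asphericity Q enters the standard conic model of the corneal surface. A minimal sketch of the sagittal-height formula (the apical radius and Q values below are illustrative defaults, not values from the study):

```python
import math

def corneal_sag(r_mm, R_mm=7.8, Q=-0.26):
    """Sagittal height (mm) of a conic corneal surface with apical radius
    R_mm and asphericity Q; more negative Q means a more prolate cornea."""
    c = 1.0 / R_mm  # apical curvature
    return c * r_mm ** 2 / (1.0 + math.sqrt(1.0 - (1.0 + Q) * c ** 2 * r_mm ** 2))

# Near-normal (Q = -0.26) vs. a more prolate, keratoconus-like Q = -1.2
print(round(corneal_sag(3.0, Q=-0.26), 3), round(corneal_sag(3.0, Q=-1.2), 3))  # 0.594 0.573
```

    Flattening the periphery (more negative Q) lowers the sag away from the apex, which is the geometric handle the Q-guided ablation exploits.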

  13. Combat Ocular Problems. Supplement,

    DTIC Science & Technology

    1982-04-01

    the macula, the area of central vision, and in the center of the macula lies the fovea, the area of most acute vision. Around the periphery of the... macula in this photograph are a series of small white spots. These are alterations to the tissue caused by laser exposure. The criterion for laser-induced...distributed in the retina: cones are more common in the macula and rods are more common in the extramacular retina. Figure 6 shows a lesion near the

  14. Mid-infrared photodetectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guyot-Sionnest, Philippe; Keuleyan, Sean E.; Lhuillier, Emmanuel

    2016-04-19

    Nanoparticles, methods of their manufacture, devices comprising the nanoparticles, and methods of their use are provided herein. The nanoparticles and devices have photoabsorption in the range of 1.7 μm to 12 μm and can be used as photoconductors, photodiodes, phototransistors, charge-coupled devices (CCD), luminescent probes, lasers, thermal imagers, night-vision systems, and/or photodetectors.

  15. Lasers in clinical ophthalmology

    NASA Astrophysics Data System (ADS)

    Ribeiro, Paulo A.

    1992-03-01

    The clinical application of lasers in ophthalmology is outlined, showing for each anatomic eye structure the pathologies that may be treated with this procedure. In the cornea, the unusual laser practice of suture removal and the promising possibility of the excimer laser in refractive surgery are discussed. In the iris, the camerular angle and the ciliary body, laser application is essentially used to treat glaucoma and other, less frequent conditions. Capsulotomy with the YAG laser is used in the treatment of structures related to the crystalline lens and, finally, the treatment of retinal and choroidal pathology is expanding. The author explains the primary interest and importance of the laser in diabetic retinopathy treatment, and some results in patients followed for more than 5 years are: 55% of the patients with proliferative diabetic retinopathy (PDR) treated for more than 5 years noticed their vision improved or stabilized; 5 years after treatment of patients with PDR, 49.3% had their vision stabilized or even improved when the diabetes had been diagnosed more than 20 years earlier, versus 61.7% when the diabetes had been diagnosed less than 20 years earlier; finally, 53.8% of the patients under 40 years old when the diabetes was diagnosed had their vision improved or at least stabilized 5 years after the beginning of the treatment. For patients over 40 years old at diagnosis, the percentage increased to 55.9%. This study was established in the follow-up of 149 cases over 10 years.

  16. Reconfigurable vision system for real-time applications

    NASA Astrophysics Data System (ADS)

    Torres-Huitzil, Cesar; Arias-Estrada, Miguel

    2002-03-01

    Recently, a growing community of researchers has used reconfigurable systems to solve computationally intensive problems. Reconfigurability provides optimized processors for system-on-chip designs and makes it easy to import technology to a new system through reusable modules. The main objective of this work is the investigation of a reconfigurable computer system targeted at computer vision and real-time applications. The system is intended to circumvent the inherent computational load of most window-based computer vision algorithms. It aims to build a system for such tasks by providing an FPGA-based hardware architecture for task-specific vision applications with enough processing power, using as few hardware resources as possible, and a mechanism for building systems using this architecture. Regarding the software part of the system, a library of pre-designed, general-purpose modules that implement common window-based computer vision operations is being investigated. A common generic interface is established for these modules in order to define hardware/software components. These components can be interconnected to develop more complex applications, providing an efficient mechanism for transferring image and result data among modules. Some preliminary results are presented and discussed.

  17. Profitability analysis of a femtosecond laser system for cataract surgery using a fuzzy logic approach.

    PubMed

    Trigueros, José Antonio; Piñero, David P; Ismail, Mahmoud M

    2016-01-01

    To define the financial and management conditions required to introduce a femtosecond laser system for cataract surgery in a clinic using a fuzzy logic approach. In the simulation performed in the current study, the costs associated with the acquisition and use of a commercially available femtosecond laser platform for cataract surgery (VICTUS, TECHNOLAS Perfect Vision GmbH, Bausch & Lomb, Munich, Germany) during a period of 5 years were considered. A sensitivity analysis was performed considering such costs and the accounting amortization of the system during this 5-year period. Furthermore, a fuzzy logic analysis was used to obtain an estimation of the income associated with each femtosecond laser-assisted cataract surgery (G). According to the sensitivity analysis, the femtosecond laser system under evaluation can be profitable if 1400 cataract surgeries are performed per year and each surgery can be invoiced at more than $500. In contrast, the fuzzy logic analysis indicated that the patient would have to pay more, between $661.80 and $667.40 per surgery, without considering the cost of the intraocular lens (IOL). Profitability of femtosecond laser systems for cataract surgery can be achieved after a detailed financial analysis, especially in centers with large volumes of patients. The cost of the surgery for patients should be adapted to the real flow of patients able to pay within a reasonable range of cost.
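
    The core of such a sensitivity analysis reduces to a break-even computation: amortize the platform over the study's 5-year horizon and divide by case volume. A sketch with hypothetical numbers (the platform price and consumable cost below are assumptions, not figures from the paper):

```python
def breakeven_price(system_cost, annual_surgeries, years=5, per_case_cost=0.0):
    """Minimum fee per femtosecond-assisted case needed to amortize the
    platform over `years` of use, plus any per-case consumable cost."""
    return system_cost / (annual_surgeries * years) + per_case_cost

# Hypothetical figures: a $500k platform, 1400 cases/year, $100 consumables/case
print(f"${breakeven_price(500_000, 1400, per_case_cost=100.0):.2f}")  # $171.43
```

    The fuzzy-logic step in the paper then layers uncertainty in case volume and invoicing onto this deterministic skeleton.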

  18. Improvement of Hungarian Joint Terminal Attack Program

    DTIC Science & Technology

    2013-06-13

    LST Laser Spot Tracker NVG Night Vision Goggle ROMAD Radio Operator Maintainer and Driver ROVER Remotely Operated Video Enhanced Receiver TACP...visual target designation. The other component consists of a laser spot tracker (LST), which identifies targets by tracking laser energy reflecting...capability for every type of night time missions, laser spot tracker for laser spot search missions, remotely operated video enhanced receiver

  19. Three-dimensional sensor system using multistripe laser and stereo camera for environment recognition of mobile robots

    NASA Astrophysics Data System (ADS)

    Kim, Min Young; Cho, Hyung Suck; Kim, Jae H.

    2002-10-01

    In recent years, intelligent autonomous mobile robots have drawn tremendous interest as service robots serving humans or industrial robots replacing humans. To carry out their tasks, robots must be able to sense and recognize the 3D space in which they live or work. In this paper, we deal with a 3D sensing system for the environment recognition of mobile robots. For this purpose, structured lighting is utilized for the 3D visual sensor system because of its robustness to the nature of the navigation environment and the easy extraction of the feature information of interest. The proposed sensing system is a trinocular vision system composed of a flexible multi-stripe laser projector and two cameras. The principle of extracting the 3D information is based on the optical triangulation method. By modeling the projector as another camera and using the epipolar constraints among all the cameras, the point-to-point correspondence between the line feature points in each image is established. In this work, the principle of this sensor is described in detail, and a series of experimental tests is performed to show the simplicity, efficiency, and accuracy of this sensor system for 3D environment sensing and recognition.
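
    In the rectified case, the optical triangulation principle underlying such a sensor reduces to z = f·b/d. A minimal sketch (the focal length, baseline and disparity values are hypothetical):

```python
def triangulation_depth(focal_px, baseline_m, disparity_px):
    """Rectified-triangulation depth: z = f * b / d (f and d in pixels)."""
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 800 px focal length, 10 cm baseline, 40 px disparity
z = triangulation_depth(800.0, 0.10, 40.0)
print(f"depth = {z:.2f} m")  # depth = 2.00 m
```

    Treating the projector as a second "camera", as the paper does, lets the same relation and the epipolar constraints resolve stripe correspondences.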

  20. Non-optically combined multispectral source for IR, visible, and laser testing

    NASA Astrophysics Data System (ADS)

    Laveigne, Joe; Rich, Brian; McHugh, Steve; Chua, Peter

    2010-04-01

    Electro-optical technology continues to advance, incorporating developments in infrared and laser technology into smaller, more tightly integrated systems that can see and discriminate military targets at ever-increasing distances. New systems incorporate laser illumination and ranging with gated sensors that allow unparalleled vision at a distance. These new capabilities augment existing all-weather performance in the mid-wave infrared (MWIR) and long-wave infrared (LWIR), as well as low-light-level visible and near infrared (VNIR), giving the user multiple means of looking at targets of interest. There is a need in the test industry to generate imagery in the relevant spectral bands and to provide temporal stimulus for testing range-gated systems. Santa Barbara Infrared (SBIR) has developed a new means of combining a uniform infrared source with uniform laser and visible sources for electro-optics (EO) testing. The source has been designed to allow laboratory testing of surveillance systems incorporating an infrared imager and a range-gated camera, and field testing of emerging multi-spectral/fused sensor systems. A description of the source is presented along with performance data relating to EO testing, including output in the pertinent spectral bands, stability and resolution.

  1. Flexible coordinate measurement system based on robot for industries

    NASA Astrophysics Data System (ADS)

    Guo, Yin; Yang, Xue-you; Liu, Chang-jie; Ye, Sheng-hua

    2010-10-01

    A flexible coordinate measurement system based on a robot, applicable to multiple vehicle models, is designed to meet the needs of online measurement for current mainstream mixed body-in-white (BIW) production lines. Moderate precision, good flexibility and the absence of blind angles are the benefits of this measurement system. For the measurement system, a monocular structured-light vision sensor has been designed, which can measure not only edges but also planes, apertures and other features. An effective way to perform fast on-site calibration of the whole system using a laser tracker has also been proposed, which achieves the unification of the various coordinate systems in industrial fields. The experimental results show a satisfactory precision of ±0.30 mm for this measurement system, which is sufficient for the needs of online measurement of the body-in-white (BIW) in the auto production line. The system achieves real-time detection and monitoring of the whole process of the car body's manufacture, and provides complete data support for the purpose of correcting manufacturing errors immediately and accurately and improving manufacturing precision.
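
    Unifying sensor and tracker coordinate systems amounts to estimating a rigid transform between corresponding point sets. A minimal sketch using the generic Kabsch/SVD method (illustrative, not the authors' specific calibration procedure):

```python
import numpy as np

def rigid_transform(P, Q):
    """Best-fit rotation R and translation t with Q ~= R @ P + t
    (Kabsch/SVD method), e.g. to map a sensor frame into a tracker frame."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # guard vs. reflection
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

# Recover a known 30-degree rotation about z plus a translation
th = np.pi / 6
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
t_true = np.array([1.0, 2.0, 3.0])
P = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
R_est, t_est = rigid_transform(P, P @ R_true.T + t_true)
```

    With three or more non-collinear reference targets visible to both the tracker and the vision sensor, the same fit ties all measurements to one common frame.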

  2. A novel weld seam detection method for space weld seam of narrow butt joint in laser welding

    NASA Astrophysics Data System (ADS)

    Shao, Wen Jun; Huang, Yu; Zhang, Yong

    2018-02-01

    Structured-light measurement is widely used for weld seam detection owing to its high measurement precision and robustness. However, there is nearly no geometrical deformation of a stripe projected onto the weld face when the seam width is less than 0.1 mm and there is no misalignment, so it is very difficult to ensure an exact retrieval of the seam feature. This issue has become prominent as laser welding of butt joints of thin metal plates is widely applied. Moreover, measuring the seam width, seam center and the normal vector of the weld face at the same time during the welding process is of great importance to the welding quality but rarely reported. Consequently, a seam measurement method based on a vision sensor for the space weld seam of a narrow butt joint is proposed in this article. Three laser stripes with different wavelengths are projected onto the weldment: two red laser stripes are designed and used to measure the three-dimensional profile of the weld face by the principle of optical triangulation, and the third, green laser stripe is used as a light source to measure the edge and the centerline of the seam by the principle of passive vision sensing. A corresponding image processing algorithm is proposed to extract the centerlines of the red laser stripes as well as the seam feature. All three laser stripes are captured and processed in a single image so that the three-dimensional position of the space weld seam can be obtained simultaneously. Finally, experimental results reveal that the proposed method can meet the precision demands of space narrow butt joints.
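
    Stripe-centerline extraction of the kind described is commonly done with the grey-gravity (intensity-weighted centroid) method, column by column. A minimal sketch on a synthetic profile (the abstract does not specify the authors' exact extraction algorithm):

```python
def stripe_center(column):
    """Sub-pixel stripe centre of one image column via the grey-gravity
    (intensity-weighted centroid) method; None if the column is dark."""
    total = sum(column)
    if total == 0:
        return None
    return sum(i * v for i, v in enumerate(column)) / total

# Synthetic intensity profile with the stripe centred between rows 4 and 5
col = [0, 0, 10, 40, 90, 90, 40, 10, 0, 0]
print(stripe_center(col))  # 4.5
```

    Separating the red and green stripes by color channel lets all three centerlines be extracted from the single captured image.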

  3. Subthreshold diode laser micropulse photocoagulation for the treatment of diabetic macular edema.

    PubMed

    Sivaprasad, Sobha; Dorin, Giorgio

    2012-03-01

    Diabetic macular edema (DME) is a sight-threatening complication of diabetic retinopathy, the leading cause of visual loss in the working-age population in the industrialized and emerging world. The standard of care for DME is focal/grid laser photocoagulation, which is proven effective in reducing the risk of vision loss, but inherently destructive and associated with tissue damage and collateral effects. Subthreshold diode laser micropulse photocoagulation is a nondestructive tissue-sparing laser procedure, which, in randomized controlled trials for the treatment of DME, has been found equally effective as conventional photocoagulation. Functional and anatomical outcomes from four independent randomized controlled trials provide level one evidence that vision stabilization/improvement and edema resolution/reduction can be elicited with less or no retinal damage, and with fewer or no complications. This review describes the principles of subthreshold diode laser micropulse photocoagulation, its treatment modalities and clinical outcomes in the context of standard laser treatments and of emerging nonlaser therapies for DME.

  4. A robust rotation-invariance displacement measurement method for a micro-/nano-positioning system

    NASA Astrophysics Data System (ADS)

    Zhang, Xiang; Zhang, Xianmin; Wu, Heng; Li, Hai; Gan, Jinqiang

    2018-05-01

    A robust and high-precision displacement measurement method for a compliant-mechanism-based micro-/nano-positioning system is proposed. The method is composed of an integer-pixel and a sub-pixel matching procedure. In the proposed algorithm (Pro-A), an improved ring projection transform (IRPT) and gradient information are used as features for approximating the coarse candidates and the fine locations, respectively. Simulations are conducted, and the results show that the Pro-A is rotation-invariant and strongly robust, with a theoretical accuracy of 0.01 pixel. To validate the practical performance, a series of experiments is carried out using a computer micro-vision and laser interferometer system (LIMS). The results demonstrate that both the LIMS and the Pro-A can achieve high precision, while the Pro-A has better stability and adaptability.
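
    A common way to go from an integer-pixel match to sub-pixel precision, in the spirit of the two-stage procedure above, is a parabolic fit through the matching score and its two neighbours (a generic refinement, not the IRPT-based Pro-A itself):

```python
def subpixel_peak(scores, i):
    """Refine integer peak index i by fitting a parabola through
    scores[i-1..i+1]; returns the sub-pixel peak location."""
    y0, y1, y2 = scores[i - 1], scores[i], scores[i + 1]
    denom = y0 - 2.0 * y1 + y2
    if denom == 0:
        return float(i)
    return i + 0.5 * (y0 - y2) / denom

# Synthetic correlation scores with a true peak at x = 5.25
scores = [-(x - 5.25) ** 2 for x in range(10)]
best = max(range(len(scores)), key=lambda x: scores[x])
print(subpixel_peak(scores, best))  # 5.25
```

    For a truly parabolic score profile, the refinement is exact; on real correlation surfaces it typically recovers a few hundredths of a pixel.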

  5. Camera-based micro interferometer for distance sensing

    NASA Astrophysics Data System (ADS)

    Will, Matthias; Schädel, Martin; Ortlepp, Thomas

    2017-12-01

Interference of light provides a high-precision, non-contact and fast method for distance measurement, which is why the technology dominates in high-precision systems. In the field of compact sensors, however, capacitive, resistive or inductive methods prevail, because an interferometric system must be precisely adjusted and needs high mechanical stability. The result is usually a high-priced, complex system not suitable for compact sensing. To overcome this, we developed a new concept for a very small interferometric sensing setup. We combine a miniaturized laser unit, a low-cost pixel detector and machine vision routines to realize a demonstrator of a Michelson-type micro interferometer. We demonstrate a low-cost sensor smaller than 1 cm³, including all electronics, with distance sensing up to 30 cm and resolution in the nanometer range.
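The nanometer resolution claimed for a Michelson-type interferometer follows from fringe arithmetic: each full fringe corresponds to half a wavelength of mirror travel, and the residual fringe phase resolves fractions of that. A minimal illustration (the demonstrator's actual camera-based signal processing is more involved; the function name is hypothetical):

```python
import math

def displacement_nm(fringes, phase_rad, wavelength_nm):
    """Michelson interferometer: mirror displacement from the number of
    full fringes plus the residual fringe phase. One full fringe equals
    half a wavelength of mirror travel."""
    return (fringes + phase_rad / (2 * math.pi)) * wavelength_nm / 2.0
```

For a 650 nm laser, 1000 counted fringes correspond to 0.325 mm of travel, while half a fringe of residual phase resolves 162.5 nm; interpolating the phase more finely is what pushes the resolution into the nanometer range.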

  6. Random-Profiles-Based 3D Face Recognition System

    PubMed Central

    Joongrock, Kim; Sunjin, Yu; Sangyoun, Lee

    2014-01-01

In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition highly depends on the precision of the acquired 3D face data, and it requires more computational power and storage capacity than 2D face recognition systems. In this paper, we present a nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consist of more than 50 k 3D points and that the method achieves a reliable recognition rate under pose variation. PMID:24691101

  7. Autonomous Robotic Inspection in Tunnels

    NASA Astrophysics Data System (ADS)

    Protopapadakis, E.; Stentoumis, C.; Doulamis, N.; Doulamis, A.; Loupos, K.; Makantasis, K.; Kopsiaftis, G.; Amditis, A.

    2016-06-01

In this paper, an automatic robotic inspector for tunnel assessment is presented. The proposed platform is able to autonomously navigate within civil infrastructures, grab stereo images and process/analyse them in order to identify defect types. First, cracks are detected via deep learning approaches. Then, a detailed 3D model of the cracked area is created using photogrammetric methods. Finally, laser profiling of the tunnel's lining is performed for a narrow region close to the detected crack, allowing the deduction of potential deformations. The robotic platform consists of an autonomous mobile vehicle and a crane arm, guided by the computer-vision-based crack detector, carrying ultrasound sensors, the stereo cameras and the laser scanner. Visual inspection is based on convolutional neural networks, which support the creation of high-level discriminative features for complex non-linear pattern classification. Real-time 3D information is then accurately calculated, and the crack position and orientation are passed to the robotic platform. The entire system has been evaluated in railway and road tunnels, i.e. in the Egnatia Highway and London Underground infrastructures.

  8. An improved triangulation laser rangefinder using a custom CMOS HDR linear image sensor

    NASA Astrophysics Data System (ADS)

    Liscombe, Michael

3-D triangulation laser rangefinders are used in many modern applications, from terrain mapping to biometric identification. Although a wide variety of designs have been proposed, laser speckle noise still imposes a fundamental limit on range accuracy. This work proposes a new triangulation laser rangefinder designed specifically to mitigate the effects of laser speckle noise. The proposed rangefinder uses a precision linear translator to laterally reposition the imaging system (e.g., image sensor and imaging lens). For a given spatial location of the laser spot, capturing N spatially uncorrelated laser spot profiles is shown to improve range accuracy by a factor of √N. This technique has many advantages over past speckle-reduction technologies, such as a fixed system cost and form factor, and the ability to virtually eliminate laser speckle noise. These advantages are made possible through spatial diversity and come at the cost of increased acquisition time. The rangefinder makes use of the ICFYKWG1 linear image sensor, a custom CMOS sensor developed at the Vision Sensor Laboratory (York University). Tests were performed on the image sensor's innovative high-dynamic-range technology to determine its effects on range accuracy. As expected, experimental results show that the sensor provides a trade-off between dynamic range and range accuracy.
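The √N accuracy gain from averaging uncorrelated speckle realizations can be checked with a toy Monte Carlo model. The sketch below stands in i.i.d. Gaussian noise for the speckle-induced spot-position error (an assumption; real speckle statistics are more complicated) and compares single-shot scatter with the scatter of 16-profile averages:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, trials = 1.0, 20000   # per-profile error and number of trials (illustrative)

def mean_of_n(n):
    """Per-trial mean of n uncorrelated spot-position measurements;
    speckle error is modelled here as i.i.d. Gaussian noise."""
    return (sigma * rng.standard_normal((trials, n))).mean(axis=1)

# scatter of single shots vs. scatter of 16-profile averages
ratio = mean_of_n(1).std() / mean_of_n(16).std()   # expect about sqrt(16) = 4
```

The ratio comes out close to 4, matching the √N improvement the record reports, and it makes the stated trade-off explicit: each factor-of-two accuracy gain costs four times the acquisition time.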

  9. Simulator scene display evaluation device

    NASA Technical Reports Server (NTRS)

    Haines, R. F. (Inventor)

    1986-01-01

An apparatus for aligning and calibrating scene displays in an aircraft simulator has a base on which all of the instruments for the aligning and calibrating are mounted. A laser directs a beam at a double right prism attached to a pivoting support on the base. The pivot point of the prism is located at the design eye point (DEP) of the simulator during the aligning and calibrating. The objective lens in the base is movable on a track to follow the laser beam at different angles within the field of vision at the DEP. An eyepiece and a precision diopter are movable into a position behind the prism during the scene evaluation. A photometer or illuminometer is pivotable about the pivot into and out of position behind the eyepiece.

  10. Cataract Vision Simulator

    MedlinePlus


  11. Adaptive ophthalmologic system

    DOEpatents

    Olivier, Scot S.; Thompson, Charles A.; Bauman, Brian J.; Jones, Steve M.; Gavel, Don T.; Awwal, Abdul A.; Eisenbies, Stephen K.; Haney, Steven J.

    2007-03-27

    A system for improving vision that can diagnose monochromatic aberrations within a subject's eyes, apply the wavefront correction, and then enable the patient to view the results of the correction. The system utilizes a laser for producing a beam of light; a corrector; a wavefront sensor; a testing unit; an optic device for directing the beam of light to the corrector, to the retina, from the retina to the wavefront sensor, and to the testing unit; and a computer operatively connected to the wavefront sensor and the corrector.

  12. Integrated scanning laser ophthalmoscopy and optical coherence tomography for quantitative multimodal imaging of retinal degeneration and autofluorescence

    NASA Astrophysics Data System (ADS)

    Issaei, Ali; Szczygiel, Lukasz; Hossein-Javaheri, Nima; Young, Mei; Molday, L. L.; Molday, R. S.; Sarunic, M. V.

    2011-03-01

Scanning Laser Ophthalmoscopy (SLO) and Optical Coherence Tomography (OCT) are complementary retinal imaging modalities. Integration of SLO and OCT allows both fluorescent detection and depth-resolved structural imaging of the retinal cell layers to be performed in vivo. System customization is required to image the rodents used in medical research by vision scientists. We are investigating multimodal SLO/OCT imaging of a rodent model of Stargardt's Macular Dystrophy, which is characterized by retinal degeneration and accumulation of toxic autofluorescent lipofuscin deposits. Our new findings demonstrate the ability to track fundus autofluorescence and retinal degeneration concurrently.

  13. Real Time Target Tracking Using Dedicated Vision Hardware

    NASA Astrophysics Data System (ADS)

    Kambies, Keith; Walsh, Peter

    1988-03-01

    This paper describes a real-time vision target tracking system developed by Adaptive Automation, Inc. and delivered to NASA's Launch Equipment Test Facility, Kennedy Space Center, Florida. The target tracking system is part of the Robotic Application Development Laboratory (RADL) which was designed to provide NASA with a general purpose robotic research and development test bed for the integration of robot and sensor systems. One of the first RADL system applications is the closing of a position control loop around a six-axis articulated arm industrial robot using a camera and dedicated vision processor as the input sensor so that the robot can locate and track a moving target. The vision system is inside of the loop closure of the robot tracking system, therefore, tight throughput and latency constraints are imposed on the vision system that can only be met with specialized hardware and a concurrent approach to the processing algorithms. State of the art VME based vision boards capable of processing the image at frame rates were used with a real-time, multi-tasking operating system to achieve the performance required. This paper describes the high speed vision based tracking task, the system throughput requirements, the use of dedicated vision hardware architecture, and the implementation design details. Important to the overall philosophy of the complete system was the hierarchical and modular approach applied to all aspects of the system, hardware and software alike, so there is special emphasis placed on this topic in the paper.

  14. Laser scatter feature of surface defect on apples

    NASA Astrophysics Data System (ADS)

    Rao, Xiuqin; Ying, Yibin; Cen, YiKe; Huang, Haibo

    2006-10-01

A machine vision system for real-time fruit quality inspection was developed. The system consists of a chamber, a laser projector, a TMS-7DSP CCD camera (PULNIX Inc.), and a computer. A Meteor-II/MC frame grabber (Matrox Graphics Inc.) was inserted into a slot of the computer to grab fruit images. The laser projector and the camera were mounted at the ceiling of the chamber. An apple was placed in the chamber, the laser spot was projected onto the surface of the fruit, and an image was grabbed. Two breeds of apples were tested, and each apple was imaged twice: once for the normal surface and once for the defect. The red component of the images was used to extract features of the defect and of the sound surface of the fruit. The mean, standard deviation and entropy of the red component of the laser scatter image were analyzed. The standard deviation of the red component was the most suitable feature for separating defect from sound surface for the ShuijinFuji apples; for the Bintang apples, more work is needed to separate the two surface types with laser scatter images.
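The three statistics analyzed in this record (mean, standard deviation and entropy of the red channel) are straightforward to compute; the sketch below is a generic illustration, not the authors' code, and assumes 8-bit red-channel values:

```python
import numpy as np

def scatter_features(red, bins=256):
    """Mean, standard deviation and entropy of the red channel of a
    laser-scatter image: the three defect features analysed in the record."""
    mean = red.mean()
    std = red.std()
    hist, _ = np.histogram(red, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                         # drop empty bins before the log
    entropy = -(p * np.log2(p)).sum()    # Shannon entropy in bits
    return mean, std, entropy
```

A perfectly uniform patch has zero standard deviation and zero entropy, while a speckled defect region spreads intensity across many bins; thresholding such features per breed is one simple route to the defect/sound classification described above.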

  15. Retinal Detachment Vision Simulator

    MedlinePlus


  16. Eyeglasses for Vision Correction

    MedlinePlus


  17. Fusion of 3D laser scanner and depth images for obstacle recognition in mobile applications

    NASA Astrophysics Data System (ADS)

    Budzan, Sebastian; Kasprzyk, Jerzy

    2016-02-01

    The problem of obstacle detection and recognition or, generally, scene mapping is one of the most investigated problems in computer vision, especially in mobile applications. In this paper a fused optical system using depth information with color images gathered from the Microsoft Kinect sensor and 3D laser range scanner data is proposed for obstacle detection and ground estimation in real-time mobile systems. The algorithm consists of feature extraction in the laser range images, processing of the depth information from the Kinect sensor, fusion of the sensor information, and classification of the data into two separate categories: road and obstacle. Exemplary results are presented and it is shown that fusion of information gathered from different sources increases the effectiveness of the obstacle detection in different scenarios, and it can be used successfully for road surface mapping.

  18. Laser hazards and safety in the military environment. Lecture series

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

The Lecture Series is intended to provide an understanding of the safety problems associated with the military use of lasers. The most important hazard is the inadvertent irradiation of the eye, and so the series includes contributions from the physical and biological sciences, as well as from ophthalmologists. Those involved with laser safety come from many backgrounds, from physics to engineering and from vision physiology to clinical ophthalmology, and it is essential that each understands the contribution of the others. The lectures include an introductory part from which the more advanced aspects of each subject are covered, leading to the issues involved in the design of safety codes and the control of laser hazards. The final session deals with medical surveillance of laser personnel. The Series is of value to both military and civilian personnel involved with safety, whether they are concerned with land, sea or airborne laser systems. (GRA)

  19. Human infrared vision is triggered by two-photon chromophore isomerization

    PubMed Central

    Palczewska, Grazyna; Vinberg, Frans; Stremplewski, Patrycjusz; Bircher, Martin P.; Salom, David; Komar, Katarzyna; Zhang, Jianye; Cascella, Michele; Wojtkowski, Maciej; Kefalov, Vladimir J.; Palczewski, Krzysztof

    2014-01-01

    Vision relies on photoactivation of visual pigments in rod and cone photoreceptor cells of the retina. The human eye structure and the absorption spectra of pigments limit our visual perception of light. Our visual perception is most responsive to stimulating light in the 400- to 720-nm (visible) range. First, we demonstrate by psychophysical experiments that humans can perceive infrared laser emission as visible light. Moreover, we show that mammalian photoreceptors can be directly activated by near infrared light with a sensitivity that paradoxically increases at wavelengths above 900 nm, and display quadratic dependence on laser power, indicating a nonlinear optical process. Biochemical experiments with rhodopsin, cone visual pigments, and a chromophore model compound 11-cis-retinyl-propylamine Schiff base demonstrate the direct isomerization of visual chromophore by a two-photon chromophore isomerization. Indeed, quantum mechanics modeling indicates the feasibility of this mechanism. Together, these findings clearly show that human visual perception of near infrared light occurs by two-photon isomerization of visual pigments. PMID:25453064

  20. A High-Speed Vision-Based Sensor for Dynamic Vibration Analysis Using Fast Motion Extraction Algorithms.

    PubMed

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan

    2016-04-22

The development of image sensors and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measuring and does not add any mass to the measured object, in contrast with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel-level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features.
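The record does not spell out the modified Taylor approximation algorithm, but the underlying idea of first-order Taylor (optical-flow style) subpixel shift estimation can be sketched generically. The code below is a simplified stand-in, not the authors' method: it linearizes f1 ≈ f0 + dx·∂f/∂x + dy·∂f/∂y and solves for the global shift by least squares:

```python
import numpy as np

def taylor_shift(f0, f1):
    """Estimate a small global translation between two frames from a
    first-order Taylor expansion, solved in the least-squares sense."""
    gy, gx = np.gradient(f0)                      # image gradients (rows, cols)
    A = np.stack([gx.ravel(), gy.ravel()], axis=1)
    b = (f1 - f0).ravel()                         # frame difference
    (dx, dy), *_ = np.linalg.lstsq(A, b, rcond=None)
    return dx, dy
```

Because the whole frame contributes to one 2x2 least-squares problem, the estimate reaches well below a pixel and is cheap enough for the 1000 Hz frame rates mentioned above, which is the practical appeal of Taylor-type refinement over upsampled cross-correlation.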

  1. Hybrid integration of VCSELs onto a silicon photonic platform for biosensing application

    NASA Astrophysics Data System (ADS)

    Lu, Huihui; Lee, Jun Su; Zhao, Yan; Cardile, Paolo; Daly, Aidan; Carroll, Lee; O'Brien, Peter

    2017-02-01

This paper presents a technology for hybrid integration of vertical-cavity surface-emitting lasers (VCSELs) directly on a silicon photonics chip. By controlling the reflow of the solder balls used for electrical and mechanical bonding, the VCSELs were bonded at a 10-degree tilt, achieving the optimum angle of incidence on the planar grating coupler through vision-based flip-chip techniques. The 1 dB discrepancy between the optical loss values of flip-chip passive assembly and active alignment confirmed that the general purpose of the flip-chip design concept is achieved. This hybrid approach of integrating a miniaturized light source on chip opens the possibility of a highly compact sensor system, enabling future portable and wearable diagnostic devices.

  2. A PIC microcontroller-based system for real-life interfacing of external peripherals with a mobile robot

    NASA Astrophysics Data System (ADS)

    Singh, N. Nirmal; Chatterjee, Amitava; Rakshit, Anjan

    2010-02-01

    The present article describes the development of a peripheral interface controller (PIC) microcontroller-based system for interfacing external add-on peripherals with a real mobile robot, for real life applications. This system serves as an important building block of a complete integrated vision-based mobile robot system, integrated indigenously in our laboratory. The system is composed of the KOALA mobile robot in conjunction with a personal computer (PC) and a two-camera-based vision system where the PIC microcontroller is used to drive servo motors, in interrupt-driven mode, to control additional degrees of freedom of the vision system. The performance of the developed system is tested by checking it under the control of several user-specified commands, issued from the PC end.

  3. Contact Lenses for Vision Correction

    MedlinePlus


  4. Non-Proliferative Diabetic Retinopathy Vision Simulator

    MedlinePlus


  5. Scorpion Hybrid Optical-based Inertial Tracker (HObIT) test results

    NASA Astrophysics Data System (ADS)

    Atac, Robert; Spink, Scott; Calloway, Tom; Foxlin, Eric

    2014-06-01

High fidelity night-vision training has become important for many of the simulation systems being procured today. The end-users of these simulation-training systems prefer using their actual night-vision goggle (NVG) headsets. This requires that the visual display system stimulate the NVGs in a realistic way. Historically, NVG stimulation was done with cathode-ray tube (CRT) projectors. However, this technology became obsolete, and in recent years training simulators have stimulated NVGs with laser, LCoS and DLP projectors. The LCoS and DLP projection technologies have emerged as the preferred approach for the stimulation of NVGs. Both LCoS and DLP technologies have advantages and disadvantages for stimulating NVGs. LCoS projectors can have more than 5-10 times the contrast capability of DLP projectors. The larger the difference between the projected black level and the brightest object in a scene, the better the NVG stimulation effects can be. This is an advantage of LCoS technology, especially when the proper NVG wavelengths are used. Single-chip DLP projectors, even though they have much reduced contrast compared to LCoS projectors, can use LED illuminators in a sequential red-green-blue fashion to create a projected image. It is straightforward to add an extra infrared (NVG wavelength) LED into this sequential chain of LED illumination. The content of this NVG channel can be independent of the visible scene, which allows effects to be added that can compensate for the lack of contrast inherent in a DLP device. This paper will expand on the differences between LCoS and DLP projectors for stimulating NVGs and summarize the benefits of both in night-vision simulation training systems.

  6. Differential GNSS and Vision-Based Tracking to Improve Navigation Performance in Cooperative Multi-UAV Systems.

    PubMed

    Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio

    2016-12-17

    Autonomous navigation of micro-UAVs is typically based on the integration of low cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post processing phase) exploiting formation flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter ones, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information.
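The DGPS/Vision measurements in this record feed an Extended Kalman Filter; the core predict/update cycle can be illustrated in its linear form. The sketch below is a generic, illustrative Kalman step (not the paper's filter: the EKF replaces the fixed F and H with Jacobians of its nonlinear motion and measurement models, and all matrices here are assumed toy values):

```python
import numpy as np

def kf_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    x = F @ x                        # predict state
    P = F @ P @ F.T + Q              # predict covariance
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ (z - H @ x)          # update with measurement z
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# toy demo: track position and velocity from position-only measurements
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity model, dt = 1
H = np.array([[1.0, 0.0]])               # only position is observed
Q, R = 1e-4 * np.eye(2), np.array([[0.01]])
x, P = np.zeros(2), np.eye(2)
for t in range(1, 51):                   # true trajectory: position = t
    x, P = kf_step(x, P, np.array([float(t)]), F, H, Q, R)
```

Even though only position is measured, the filter recovers the velocity through the cross-covariance terms of P, which is the same mechanism that lets the cooperative filter above infer attitude from differential-GPS and vision observations.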

  7. A Machine Vision System for Automatically Grading Hardwood Lumber - (Proceedings)

    Treesearch

    Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas H. Drayer; Joe G. Tront; Philip A. Araman; Robert L. Brisbon

    1990-01-01

    Any automatic system for grading hardwood lumber can conceptually be divided into two components. One of these is a machine vision system for locating and identifying grading defects. The other is an automatic grading program that accepts as input the output of the machine vision system and, based on these data, determines the grade of a board. The progress that has...

  8. Mobile Autonomous Humanoid Assistant

    NASA Technical Reports Server (NTRS)

    Diftler, M. A.; Ambrose, R. O.; Tyree, K. S.; Goza, S. M.; Huber, E. L.

    2004-01-01

    A mobile autonomous humanoid robot is assisting human co-workers at the Johnson Space Center with tool handling tasks. This robot combines the upper body of the National Aeronautics and Space Administration (NASA)/Defense Advanced Research Projects Agency (DARPA) Robonaut system with a Segway(TradeMark) Robotic Mobility Platform yielding a dexterous, maneuverable humanoid perfect for aiding human co-workers in a range of environments. This system uses stereo vision to locate human team mates and tools and a navigation system that uses laser range and vision data to follow humans while avoiding obstacles. Tactile sensors provide information to grasping algorithms for efficient tool exchanges. The autonomous architecture utilizes these pre-programmed skills to form human assistant behaviors. The initial behavior demonstrates a robust capability to assist a human by acquiring a tool from a remotely located individual and then following the human in a cluttered environment with the tool for future use.

  9. Applications of lasers to production metrology, control, and machine 'Vision'

    NASA Astrophysics Data System (ADS)

    Pryor, T. R.; Erf, R. K.; Gara, A. D.

    1982-06-01

    General areas of laser application to production measurement and inspection are reviewed together with the associated laser measurement techniques. The topics discussed include dimensional gauging of part profiles using laser imaging or scanning techniques, laser triangulation for surface contour measurement, surface finish measurement and defect inspection, holography and speckle techniques, and strain measurement. The emerging field of robot guidance utilizing lasers and other sensing means is examined, and, finally, the use of laser marking and reading equipment is briefly discussed.

  10. Profitability analysis of a femtosecond laser system for cataract surgery using a fuzzy logic approach

    PubMed Central

    Trigueros, José Antonio; Piñero, David P; Ismail, Mahmoud M

    2016-01-01

AIM To define the financial and management conditions required to introduce a femtosecond laser system for cataract surgery in a clinic using a fuzzy logic approach. METHODS In the simulation performed in the current study, the costs associated with the acquisition and use of a commercially available femtosecond laser platform for cataract surgery (VICTUS, TECHNOLAS Perfect Vision GmbH, Bausch & Lomb, Munich, Germany) over a period of 5 years were considered. A sensitivity analysis was performed considering these costs and the countable amortization of the system during this 5-year period. Furthermore, a fuzzy logic analysis was used to estimate the income associated with each femtosecond laser-assisted cataract surgery (G). RESULTS According to the sensitivity analysis, the femtosecond laser system under evaluation can be profitable if 1400 cataract surgeries are performed per year and if each surgery can be invoiced at more than $500. In contrast, the fuzzy logic analysis indicated that the patient would have to pay more per surgery, between $661.8 and $667.4, without considering the cost of the intraocular lens (IOL). CONCLUSION Femtosecond laser systems for cataract surgery can be profitable after a detailed financial analysis, especially in centers with large volumes of patients. The cost of the surgery for patients should be adapted to the real flow of patients able to pay a reasonable range of cost. PMID:27500115

  11. Effective Data-Driven Calibration for a Galvanometric Laser Scanning System Using Binocular Stereo Vision.

    PubMed

    Tu, Junchao; Zhang, Liyan

    2018-01-12

A new solution to the problem of galvanometric laser scanning (GLS) system calibration is presented. Under the machine learning framework, we build a single-hidden-layer feedforward neural network (SLFN) to represent the GLS system, which takes the digital control signal at the drives of the GLS system as input and the space vector of the corresponding outgoing laser beam as output. The training data set is obtained with the aid of a moving mechanism and a binocular stereo system. The parameters of the SLFN are efficiently solved in closed form by using an extreme learning machine (ELM). By quantitatively analyzing the regression precision with respect to the number of hidden neurons in the SLFN, we demonstrate that the proper number of hidden neurons can be safely chosen from a broad interval to guarantee good generalization performance. Compared with traditional model-driven calibration, the proposed calibration method does not need a complex modeling process and is more accurate and stable. As the output of the network is the space vectors of the outgoing laser beams, it requires much less training time and provides a uniform solution to both laser projection and 3D reconstruction, in contrast with the existing data-driven calibration method, which only works for the laser triangulation problem. A calibration experiment, a projection experiment and a 3D reconstruction experiment were conducted to test the proposed method, and good results were obtained.
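The closed-form ELM training used in this record can be sketched generically: hidden-layer weights are drawn at random and frozen, so only the output weights need solving, via a pseudoinverse. This is a textbook ELM on a toy 1-D regression problem, not the authors' network or training data:

```python
import numpy as np

def elm_train(X, Y, n_hidden=50, rng=None):
    """Extreme learning machine: random frozen hidden layer, output
    weights solved in closed form with the Moore-Penrose pseudoinverse."""
    if rng is None:
        rng = np.random.default_rng(0)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden activations
    beta = np.linalg.pinv(H) @ Y                     # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Because training reduces to one linear solve, retraining with a different hidden-layer width is nearly free, which is what makes the paper's sweep over the number of hidden neurons practical.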

  12. The Adaptive Optics Summer School Laboratory Activities

    NASA Astrophysics Data System (ADS)

    Ammons, S. M.; Severson, S.; Armstrong, J. D.; Crossfield, I.; Do, T.; Fitzgerald, M.; Harrington, D.; Hickenbotham, A.; Hunter, J.; Johnson, J.; Johnson, L.; Li, K.; Lu, J.; Maness, H.; Morzinski, K.; Norton, A.; Putnam, N.; Roorda, A.; Rossi, E.; Yelda, S.

    2010-12-01

    Adaptive Optics (AO) is a new and rapidly expanding field of instrumentation, yet astronomers, vision scientists, and general AO practitioners are largely unfamiliar with the root technologies crucial to AO systems. The AO Summer School (AOSS), sponsored by the Center for Adaptive Optics, is a week-long course for training graduate students and postdoctoral researchers in the underlying theory, design, and use of AO systems. AOSS participants include astronomers who expect to utilize AO data, vision scientists who will use AO instruments to conduct research, opticians and engineers who design AO systems, and users of high-bandwidth laser communication systems. In this article we describe new AOSS laboratory sessions implemented in 2006-2009 for nearly 250 students. The activity goals include boosting familiarity with AO technologies, reinforcing knowledge of optical alignment techniques and the design of optical systems, and encouraging inquiry into critical scientific questions in vision science using AO systems as a research tool. The activities are divided into three stations: Vision Science, Fourier Optics, and the AO Demonstrator. We briefly overview these activities, which are described fully in other articles in these conference proceedings (Putnam et al., Do et al., and Harrington et al., respectively). We devote attention to the unique challenges encountered in the design of these activities, including the marriage of inquiry-like investigation techniques with complex content and the need to tune depth to a graduate- and PhD-level audience. According to before-after surveys conducted in 2008, the vast majority of participants found that all activities were valuable to their careers, although direct experience with integrated, functional AO systems was particularly beneficial.

  13. Precision of FLEET Velocimetry Using High-speed CMOS Camera Systems

    NASA Technical Reports Server (NTRS)

    Peters, Christopher J.; Danehy, Paul M.; Bathel, Brett F.; Jiang, Naibo; Calvert, Nathan D.; Miles, Richard B.

    2015-01-01

Femtosecond laser electronic excitation tagging (FLEET) is an optical measurement technique that permits quantitative velocimetry of unseeded air or nitrogen using a single laser and a single camera. In this paper, we seek to determine the fundamental precision of the FLEET technique using high-speed complementary metal-oxide semiconductor (CMOS) cameras. We also compare the performance of several high-speed CMOS camera systems for acquiring FLEET velocimetry data in air and nitrogen free-jet flows. Precision was defined as the standard deviation of a set of several hundred single-shot velocity measurements. Methods of enhancing measurement precision were explored, such as row-wise digital binning of the signal in adjacent pixels (similar in concept to on-sensor binning, but performed in post-processing) and increasing the time delay between successive exposures. These techniques generally improved precision; binning provided the greatest improvement for the un-intensified camera systems, which had low signal-to-noise ratios. When binning row-wise by 8 pixels (about the thickness of the tagged region) and using an inter-frame delay of 65 μs, precisions of 0.5 m/s in air and 0.2 m/s in nitrogen were achieved. The camera comparison included a pco.dimax HD, a LaVision Imager scientific CMOS (sCMOS) and a Photron FASTCAM SA-X2, along with a two-stage LaVision High Speed IRO intensifier. Excluding the LaVision Imager sCMOS, the cameras were tested with and without intensification and with both short and long inter-frame delays. Use of intensification and longer inter-frame delays generally improved precision. Overall, the Photron FASTCAM SA-X2 exhibited the best performance, with the greatest precision and highest signal-to-noise ratio, primarily because it had the largest pixels.
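The row-wise digital binning described above trades spatial resolution for signal-to-noise ratio: uncorrelated noise grows only as the square root of the number of summed pixels, so binning by 8 should improve SNR by roughly √8. A minimal numpy sketch on synthetic data (all numbers illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated FLEET frame: a faint tagged line (signal) on a noisy sensor.
rows, cols = 64, 256
signal = np.zeros((rows, cols))
signal[28:36, :] = 50.0                                    # ~8-pixel-thick tagged region
frame = signal + rng.normal(0.0, 20.0, size=(rows, cols))  # read noise

def bin_rows(img, factor):
    """Sum groups of `factor` adjacent rows (digital binning in post-processing)."""
    r = img.shape[0] // factor * factor
    return img[:r].reshape(-1, factor, img.shape[1]).sum(axis=1)

binned = bin_rows(frame, 8)

# SNR of the tagged region before and after binning: summing 8 rows grows
# the signal 8x but the uncorrelated noise only sqrt(8)x.
snr_raw = signal[28:36].mean() / frame[:16].std()         # noise from background rows
snr_bin = (8 * signal[28:36].mean()) / binned[:2].std()
print(f"SNR raw: {snr_raw:.1f}, SNR binned: {snr_bin:.1f}")
```

The same idea applies to on-sensor binning, except that digital binning preserves the option of re-analyzing the unbinned data.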

  14. Serious adverse events and visual outcomes of rescue therapy using adjunct bevacizumab to laser and surgery for retinopathy of prematurity. The Indian Twin Cities Retinopathy of Prematurity Screening database Report number 5.

    PubMed

    Jalali, Subhadra; Balakrishnan, Divya; Zeynalova, Zarifa; Padhi, Tapas Ranjan; Rani, Padmaja Kumari

    2013-07-01

To report serious adverse events and long-term outcomes of initial experience with intraocular bevacizumab in retinopathy of prematurity (ROP). Consecutive vascularly active ROP cases treated with bevacizumab, in addition to laser and surgery, were analysed retrospectively from a prospective computerised ROP database. The primary efficacy outcome was regression of new vessels. Secondary outcomes included anatomic and visual status. Serious systemic and ocular adverse events were documented. 24 ROP eyes in 13 babies received a single intraocular bevacizumab injection for severe stage 3 plus after failed laser (seven eyes), stage 4A plus (eight eyes), and stage 4B/5 plus (nine eyes). The drug was injected intravitreally in 23 eyes and intracamerally in one eye. New vessels regressed in all eyes. Vision was salvaged in 14 of 24 eyes, and no serious neurodevelopmental abnormalities were noted over up to 60 months (mean 30.7 months) of follow-up. Complications included a macular hole and retinal breaks causing rhegmatogenous retinal detachment (one eye); bilateral progressive vascular attenuation, perivascular exudation and optic atrophy in one baby; and bilateral progression of detachment to stage 5 in one baby with missed follow-up. One baby who received an intracameral injection developed hepatic dysfunction; one eye of this baby also showed a large choroidal rupture. Though intraocular bevacizumab, along with laser and surgery, salvaged vision in many otherwise progressive cases of ROP, vigilance and reporting of serious adverse events are essential for future rationalised use of the drug. We report one systemic and four ocular adverse events that require consideration in future use of the drug.

  15. A high contrast 400-2500 nm hyperspectral checkerboard consisting of Acktar material cut with a femtosecond laser

    NASA Astrophysics Data System (ADS)

    Keresztes, Janos C.; Henrottin, Anne; Goodarzi, Mohammad; Wouters, Niels; van Roy, Jeroen; Saeys, Wouter

    2015-09-01

    Visible-near infrared (Vis-NIR) and short wave infrared (SWIR) hyperspectral imaging (HSI) are gaining interest in the food sorting industry. As in traditional machine vision (MV), spectral image registration is an important step which affects the quality of the sorting system. Unfortunately, accurately registering the images acquired with the different imagers remains challenging, as it requires a reference with good contrast over the full spectral range. Therefore, the objective of this work was to develop an accurate high-contrast checkerboard covering the full spectral range. Among the investigated white and dark materials, Teflon and Acktar were found to present very good contrast over the full spectral range from 400 to 2500 nm, with a minimal contrast ratio of 60% in the Vis-NIR and 98% in the SWIR. The Metal Velvet self-adhesive coating from Acktar was selected as it also provides low specular reflectance. This coating was taped onto a near-Lambertian polished Teflon plate, and every other square was removed after laser cutting the dark coating with an accuracy below 0.1 mm. As standard technologies such as nano-second pulsed lasers caused unwanted damage to both materials, a pulsed femto-second laser setup from Lasea with 60 μm accuracy was used to manufacture the checkerboard. This pattern was imaged with an Imec Vis-NIR and a Headwall SWIR pushbroom hyperspectral camera. Good contrast was obtained over the full range of both HSI systems, and the effective focal length for the Vis-NIR HSI, estimated with computer vision, was 0.5 mm, close to the lens model value at high contrast.

  16. Numerical model and analysis of an energy-based system using microwaves for vision correction

    NASA Astrophysics Data System (ADS)

    Pertaub, Radha; Ryan, Thomas P.

    2009-02-01

    A treatment system was developed utilizing a microwave-based procedure capable of treating myopia and offering a less invasive alternative to laser vision correction without cutting the eye. Microwave thermal treatment elevates the temperature of the paracentral stroma of the cornea to create a predictable refractive change while preserving the epithelium and deeper structures of the eye. A pattern of shrinkage outside of the optical zone may be sufficient to flatten the central cornea. A numerical model was set up to investigate both the electromagnetic field and the resultant transient temperature distribution. A finite element model of the eye was created and the axisymmetric distribution of temperature calculated to characterize the combination of controlled power deposition combined with surface cooling to spare the epithelium, yet shrink the cornea, in a circularly symmetric fashion. The model variables included microwave power levels and pulse width, cooling timing, dielectric material and thickness, and electrode configuration and gap. Results showed that power is totally contained within the cornea and no significant temperature rise was found outside the anterior cornea, due to the near-field design of the applicator and limited thermal conduction with the short on-time. Target isothermal regions were plotted as a result of common energy parameters along with a variety of electrode shapes and sizes, which were compared. Dose plots showed the relationship between energy and target isothermic regions.
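As a toy illustration of the thermal side of such a model, the sketch below integrates a 1-D transient conduction equation with a volumetric heat source and a chilled anterior surface, showing how surface cooling can spare the outer layer while an interior region heats. All material and source parameters are rough assumptions for illustration only; the paper's actual model is an axisymmetric finite-element electromagnetic-thermal simulation.

```python
import numpy as np

# 1-D explicit finite-difference sketch of transient tissue heating with a
# cooled surface boundary (illustrative parameters, not the paper's model).
alpha = 1.4e-7             # thermal diffusivity, m^2/s (soft-tissue ballpark)
rho_c = 4.2e6              # volumetric heat capacity, J/(m^3 K) (approx.)
L, n = 600e-6, 61          # 600 um deep domain, grid points
dx = L / (n - 1)
dt = 0.2 * dx**2 / alpha   # stable for explicit scheme: dt <= dx^2/(2*alpha)

T = np.full(n, 37.0)       # start at body temperature, deg C
q = np.zeros(n)
q[5:25] = 2.0e9            # assumed deposited microwave power, W/m^3, 50-240 um deep

t_on = 0.5                 # heating pulse duration, s
for _ in range(int(t_on / dt)):
    lap = np.zeros(n)
    lap[1:-1] = (T[2:] - 2*T[1:-1] + T[:-2]) / dx**2
    T[1:-1] += dt * (alpha * lap[1:-1] + q[1:-1] / rho_c)
    T[0] = 20.0            # chilled anterior surface (epithelium spared)
    T[-1] = 37.0           # deep boundary held at body temperature

print(f"peak temperature {T.max():.1f} C at depth {T.argmax()*dx*1e6:.0f} um")
```

The qualitative behavior matches the abstract: the surface stays cool while the temperature peak forms inside the heated band.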

  17. Vertically integrated photonic multichip module architecture for vision applications

    NASA Astrophysics Data System (ADS)

    Tanguay, Armand R., Jr.; Jenkins, B. Keith; von der Malsburg, Christoph; Mel, Bartlett; Holt, Gary; O'Brien, John D.; Biederman, Irving; Madhukar, Anupam; Nasiatka, Patrick; Huang, Yunsong

    2000-05-01

    The development of a truly smart camera, with inherent capability for low latency semi-autonomous object recognition, tracking, and optimal image capture, has remained an elusive goal notwithstanding tremendous advances in the processing power afforded by VLSI technologies. These features are essential for a number of emerging multimedia-based applications, including enhanced augmented reality systems. Recent advances in understanding of the mechanisms of biological vision systems, together with similar advances in hybrid electronic/photonic packaging technology, offer the possibility of artificial biologically-inspired vision systems with significantly different, yet complementary, strengths and weaknesses. We describe herein several system implementation architectures based on spatial and temporal integration techniques within a multilayered structure, as well as the corresponding hardware implementation of these architectures based on the hybrid vertical integration of multiple silicon VLSI vision chips by means of dense 3D photonic interconnections.

  18. Multi-Purpose Avionic Architecture for Vision Based Navigation Systems for EDL and Surface Mobility Scenarios

    NASA Astrophysics Data System (ADS)

    Tramutola, A.; Paltro, D.; Cabalo Perucha, M. P.; Paar, G.; Steiner, J.; Barrio, A. M.

    2015-09-01

    Vision Based Navigation (VBNAV) has been identified as a valid technology to support space exploration because it can improve the autonomy and safety of space missions. Several mission scenarios can benefit from VBNAV: Rendezvous & Docking, Fly-Bys, Interplanetary cruise, Entry Descent and Landing (EDL) and Planetary Surface exploration. For some of them, VBNAV can improve the accuracy of state estimation as an additional relative navigation sensor or as an absolute navigation sensor. For others, such as surface mobility and terrain exploration for path identification and planning, VBNAV is mandatory. This paper presents the general avionic architecture of a Vision Based System as defined in the frame of the ESA R&T study “Multi-purpose Vision-based Navigation System Engineering Model - part 1 (VisNav-EM-1)”, with special focus on the surface mobility application.

  19. Vision correction via multi-layer pattern corneal surgery

    NASA Astrophysics Data System (ADS)

    Sun, Han-Yin; Wang, Hsiang-Chen; Yang, Shun-Fa

    2013-07-01

    With the rapid development of vision correction techniques, increasing numbers of people have undergone laser vision corrective surgery in recent years. The use of a laser scalpel instead of a traditional surgical knife reduces the size of the wound and quickens recovery after surgery. The primary objective of this article is to examine corneal surgery for vision correction via multi-layer swim-ring-shaped wave circles, through optical simulations using the Monte Carlo ray-tracing method. Presbyopia stems from the loss of flexibility of the crystalline lens due to aging of the eyeball. Diopter adjustment of a normal crystalline lens can reach 5 D; in presbyopia, the adjustment is approximately 1 D, which leaves patients unable to see nearby objects clearly. Corneal laser surgery with multi-layer swim-ring-shaped wave circles was simulated, ablating multiple circles on the cornea to improve the flexibility of the crystalline lens. Simulation results showed that the adjustment ability of the crystalline lens increased from 1 D to 4 D. The method was also used to compare the images displayed on the retina before and after treatment. The results clearly indicated a significant improvement in presbyopia symptoms with the use of this technique.
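The accommodation figures quoted above map directly to near-point distances through the thin-lens relation: an eye focused at infinity that can add A diopters can focus as close as 1/A metres. A quick sanity check:

```python
# Near point implied by accommodation amplitude (thin-lens approximation).
def near_point_m(accommodation_diopters: float) -> float:
    """Closest focus distance in metres for an amplitude of A diopters."""
    return 1.0 / accommodation_diopters

# presbyopic (1 D), post-treatment per the simulation (4 D), normal (5 D)
for amp in (1.0, 4.0, 5.0):
    print(f"{amp:.0f} D accommodation -> near point {near_point_m(amp)*100:.0f} cm")
```

So raising the adjustment from 1 D to 4 D moves the near point from about 1 m to about 25 cm, i.e. comfortable reading distance.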

  20. Microwave vision for robots

    NASA Technical Reports Server (NTRS)

    Lewandowski, Leon; Struckman, Keith

    1994-01-01

    Microwave Vision (MV), a concept originally developed in 1985, could play a significant role in the solution to robotic vision problems. Originally our Microwave Vision concept was based on a pattern matching approach employing computer based stored replica correlation processing. Artificial Neural Network (ANN) processor technology offers an attractive alternative to the correlation processing approach, namely the ability to learn and to adapt to changing environments. This paper describes the Microwave Vision concept, some initial ANN-MV experiments, and the design of an ANN-MV system that has led to a second patent disclosure in the robotic vision field.

  1. A vision-based end-point control for a two-link flexible manipulator. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Obergfell, Klaus

    1991-01-01

    The measurement and control of the end-effector position of a large two-link flexible manipulator are investigated. The system implementation is described and an initial algorithm for static end-point positioning is discussed. Most existing robots are controlled through independent joint controllers, while the end-effector position is estimated from the joint positions using a kinematic relation. End-point position feedback can be used to compensate for uncertainty and structural deflections. Such feedback is especially important for flexible robots. Computer vision is utilized to obtain end-point position measurements. A look-and-move control structure alleviates the disadvantages of the slow and variable computer vision sampling frequency. This control structure consists of an inner joint-based loop and an outer vision-based loop. A static positioning algorithm was implemented and experimentally verified. This algorithm utilizes the manipulator Jacobian to transform a tip position error into a joint error. The joint error is then used to give a new reference input to the joint controller. The convergence of the algorithm is demonstrated experimentally under payload variation. A Landmark Tracking System (Dickerson et al., 1990) is used for vision-based end-point measurements. This system was modified and tested. A real-time control system was implemented on a PC and interfaced with the vision system and the robot.
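The Jacobian-based static positioning loop described above can be sketched for a generic rigid 2-link planar arm (link lengths, poses, and iteration count are illustrative assumptions, not values from the thesis): the vision-measured tip error is mapped through the inverse Jacobian to a joint correction, which becomes the new joint-controller reference.

```python
import numpy as np

L1, L2 = 1.0, 0.8  # link lengths (illustrative)

def tip(q):
    """Forward kinematics of a 2-link planar arm."""
    return np.array([L1*np.cos(q[0]) + L2*np.cos(q[0]+q[1]),
                     L1*np.sin(q[0]) + L2*np.sin(q[0]+q[1])])

def jacobian(q):
    s1, s12 = np.sin(q[0]), np.sin(q[0]+q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0]+q[1])
    return np.array([[-L1*s1 - L2*s12, -L2*s12],
                     [ L1*c1 + L2*c12,  L2*c12]])

def position_to_target(q, target, iters=20):
    """Look-and-move loop: map the (vision-measured) tip error to a joint
    correction through the inverse Jacobian, then re-command the joints."""
    for _ in range(iters):
        err = target - tip(q)                     # would come from the camera
        q = q + np.linalg.solve(jacobian(q), err)  # new joint reference
    return q

q = position_to_target(np.array([0.3, 0.5]), np.array([1.2, 0.7]))
print(tip(q))  # converges to the commanded tip position
```

On the real flexible arm the kinematic tip estimate is biased by deflection, which is exactly why the outer vision loop, rather than joint encoders alone, supplies `err`.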

  2. Wearable Improved Vision System for Color Vision Deficiency Correction

    PubMed Central

    Riccio, Daniel; Di Perna, Luigi; Sanniti Di Baja, Gabriella; De Nino, Maurizio; Rossi, Settimio; Testa, Francesco; Simonelli, Francesca; Frucci, Maria

    2017-01-01

    Color vision deficiency (CVD) is an extremely frequent vision impairment that compromises the ability to recognize colors. In order to improve color vision in subjects with CVD, we designed and developed a wearable improved vision system based on an augmented reality device. The system was validated in a clinical pilot study on 24 subjects with CVD (18 males and 6 females, aged 37.4 ± 14.2 years). The primary outcome was the improvement in the Ishihara Vision Test score with the correction proposed by our system. The Ishihara test score significantly improved (p = 0.03) from 5.8 ± 3.0 without correction to 14.8 ± 5.0 with correction. Almost all patients showed an improvement in color vision, as shown by the increased test scores. Moreover, with our system, 12 subjects (50%) passed the color vision test as normal vision subjects. The development and preliminary validation of the proposed platform confirm that a wearable augmented-reality device could be an effective aid to improve color vision in subjects with CVD. PMID:28507827

  3. Vision-based real-time position control of a semi-automated system for robot-assisted joint fracture surgery.

    PubMed

    Dagnino, Giulio; Georgilas, Ioannis; Tarassoli, Payam; Atkins, Roger; Dogramadzi, Sanja

    2016-03-01

    Joint fracture surgery quality can be improved by a robotic system with high-accuracy, high-repeatability fracture fragment manipulation. A new real-time vision-based system for fragment manipulation during robot-assisted fracture surgery was developed and tested. The control strategy merges fast open-loop control with vision-based control. This two-phase process is designed to eliminate open-loop positioning errors by closing the control loop using visual feedback provided by an optical tracking system. Control system accuracy was evaluated in robot positioning trials, and fracture reduction accuracy was tested in trials on an ex vivo porcine model. The system achieved highly reliable fracture reduction, with a reduction accuracy of 0.09 mm (translations) and of [Formula: see text] (rotations), maximum observed errors in the order of 0.12 mm (translations) and of [Formula: see text] (rotations), and a reduction repeatability of 0.02 mm and [Formula: see text]. The proposed vision-based system was shown to be effective and suitable for real joint fracture surgical procedures, potentially improving their quality.

  4. The Use of Computer Vision Algorithms for Automatic Orientation of Terrestrial Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Markiewicz, Jakub Stefan

    2016-06-01

    The paper presents an analysis of the orientation of terrestrial laser scanning (TLS) data. In the proposed data processing methodology, point clouds are treated as panoramic images enriched by a depth map. Computer vision (CV) algorithms are used for orientation; they are evaluated in terms of the correctness of tie-point detection, computation time, and the difficulty of their implementation. The BRISK, FAST, MSER, SIFT, SURF, ASIFT and CenSurE algorithms are used to search for key-points. The source data are point clouds acquired using a Z+F 5006h terrestrial laser scanner on the ruins of Iłża Castle, Poland. Algorithms allowing combination of the photogrammetric and CV approaches are also presented.
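A core step in tie-point detection with any of these detectors is descriptor matching with an ambiguity check, commonly Lowe's ratio test: a candidate match is kept only when its nearest neighbour is clearly better than the second nearest. A numpy sketch on synthetic descriptors (an illustration of the idea, not the paper's implementation):

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching with Lowe's ratio test."""
    # Pairwise Euclidean distances between the two descriptor sets.
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    matches = []
    for i, row in enumerate(d):
        j1, j2 = np.argsort(row)[:2]
        if row[j1] < ratio * row[j2]:   # unambiguous nearest neighbour only
            matches.append((i, j1))
    return matches

# Synthetic 8-D descriptors: desc_b is desc_a shuffled plus mild noise,
# mimicking the same key-points seen from a second scanner station.
rng = np.random.default_rng(1)
desc_a = rng.normal(size=(30, 8))
perm = rng.permutation(30)
desc_b = desc_a[perm] + rng.normal(0.0, 0.01, size=(30, 8))

matches = match_descriptors(desc_a, desc_b)
correct = sum(perm[j] == i for i, j in matches)
print(f"{correct}/{len(matches)} matches correct")
```

Real pipelines follow this step with a geometric consistency check (e.g. RANSAC) before the matched tie points are fed to the orientation adjustment.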

  5. Basic design principles of colorimetric vision systems

    NASA Astrophysics Data System (ADS)

    Mumzhiu, Alex M.

    1998-10-01

    Color measurement is an important part of overall production quality control in the textile, coating, plastics, food, paper and other industries. The color measurement instruments used for production quality control, such as colorimeters and spectrophotometers, have many limitations. In many applications they cannot be used for a variety of reasons and have to be replaced with human operators. Machine vision has great potential for color measurement. The components for color machine vision systems, such as broadcast-quality 3-CCD cameras, fast and inexpensive PCI frame grabbers, and sophisticated image processing software packages, are readily available. However, the machine vision industry has only started to approach the color domain. The few color machine vision systems on the market, produced by the largest machine vision manufacturers, have very limited capabilities. A lack of understanding that a vision-based color measurement system can fail if it ignores the basic principles of colorimetry is the main reason for the slow progress of color vision systems. The purpose of this paper is to clarify how color measurement principles have to be applied to vision systems, and how the electro-optical design features of colorimeters have to be modified to implement them in vision systems. Since the subject far exceeds the limits of a journal paper, only the most important aspects are discussed. An overview of the major areas of application for colorimetric vision systems is given. Finally, the reasons why some customers are happy with their vision systems and some are not are analyzed.

  6. An Autonomous Gps-Denied Unmanned Vehicle Platform Based on Binocular Vision for Planetary Exploration

    NASA Astrophysics Data System (ADS)

    Qin, M.; Wan, X.; Shao, Y. Y.; Li, S. Y.

    2018-04-01

    Vision-based navigation has become an attractive solution for autonomous navigation in planetary exploration. This paper presents our work on designing and building an autonomous, vision-based, GPS-denied unmanned vehicle and developing an ARFM (Adaptive Robust Feature Matching) based VO (Visual Odometry) software package for its autonomous navigation. The hardware system is mainly composed of a binocular stereo camera, a pan-and-tilt unit, a master machine, and a tracked chassis. The ARFM-based VO software system contains four modules: camera calibration, ARFM-based 3D reconstruction, position and attitude calculation, and BA (Bundle Adjustment). Two VO experiments were carried out, using outdoor images from an open dataset and indoor images captured by our vehicle; the results demonstrate that our vision-based unmanned vehicle is able to achieve autonomous localization and has potential for future planetary exploration.
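The position-and-attitude-calculation module of a stereo VO pipeline typically estimates the rigid motion between the 3D point sets triangulated in successive frames. One standard way to do this (a sketch of the general technique, not necessarily the exact ARFM method) is the SVD-based Kabsch/Horn alignment:

```python
import numpy as np

def rigid_transform(p, q):
    """Least-squares rotation R and translation t with q ≈ R @ p + t
    (Kabsch/Horn method via SVD), as used when aligning triangulated
    3D points between successive stereo frames."""
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    H = (p - cp).T @ (q - cq)                 # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

# Synthetic check: rotate/translate a random point cloud, recover the motion.
rng = np.random.default_rng(2)
p = rng.normal(size=(40, 3))
angle = 0.3
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 1.0])
q = p @ Rz.T + t_true

R, t = rigid_transform(p, q)
print(np.allclose(R, Rz), np.allclose(t, t_true))
```

In a full pipeline this estimate is computed inside a RANSAC loop over the feature matches and then refined, together with the 3D points, by the bundle adjustment module.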

  7. A Machine Vision System for Automatically Grading Hardwood Lumber - (Industrial Metrology)

    Treesearch

    Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas T. Drayer; Philip A. Araman; Robert L. Brisbon

    1992-01-01

    Any automatic system for grading hardwood lumber can conceptually be divided into two components. One of these is a machine vision system for locating and identifying grading defects. The other is an automatic grading program that accepts as input the output of the machine vision system and, based on these data, determines the grade of a board. The progress that has...

  8. Autonomous Kinematic Calibration of the Robot Manipulator with a Linear Laser-Vision Sensor

    NASA Astrophysics Data System (ADS)

    Kang, Hee-Jun; Jeong, Jeong-Woo; Shin, Sung-Weon; Suh, Young-Soo; Ro, Young-Schick

    This paper presents a new autonomous kinematic calibration technique using a laser-vision sensor called "Perceptron TriCam Contour". Because the sensor measures by capturing the image of a projected laser line on the surface of an object, we set up a long, straight line of very fine string inside the robot workspace, and then let the sensor, mounted on the robot, measure the intersection point of the string and the projected laser line. The data collected by changing the robot configuration and measuring the intersection points are constrained to lie on a single straight line, so the closed-loop calibration method can be applied. The resulting calibration method is simple and accurate, and also suitable for on-site calibration in an industrial environment. The method is implemented on a Hyundai VORG-35 robot to demonstrate its effectiveness.
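The straightness constraint can be exploited, for example, by fitting a 3D line to the measured intersection points and taking the perpendicular residuals as the error that the kinematic parameters are tuned to minimise. A numpy sketch of that residual computation on assumed synthetic data (not the paper's implementation):

```python
import numpy as np

def line_fit_residuals(points):
    """Fit a 3D line to measured intersection points (principal direction via
    SVD of the centred points) and return each point's perpendicular distance
    to the fitted line."""
    c = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - c)
    direction = Vt[0]                        # best-fit line direction
    rel = points - c
    proj = np.outer(rel @ direction, direction)
    return np.linalg.norm(rel - proj, axis=1)

# Points nominally on the straight string, with small measurement noise.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 15)
line = np.array([0.1, 0.2, 0.0]) + t[:, None] * np.array([1.0, 2.0, 0.5])
noisy = line + rng.normal(0.0, 1e-3, size=line.shape)

res = line_fit_residuals(noisy)
print(f"RMS residual: {np.sqrt(np.mean(res**2)):.2e} m")
```

With an inaccurate kinematic model, the points reconstructed from joint angles bow away from a straight line, so these residuals grow; driving them to the noise floor closes the calibration loop without any external reference artifact.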

  9. Differential GNSS and Vision-Based Tracking to Improve Navigation Performance in Cooperative Multi-UAV Systems

    PubMed Central

    Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio

    2016-01-01

    Autonomous navigation of micro-UAVs is typically based on the integration of low cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post processing phase) exploiting formation flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter ones, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information. PMID:27999318
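The sensor-fusion step folds the DGPS/Vision-derived pseudo-measurement into the filter state through a standard Kalman-style measurement update. A minimal 1-D sketch of that update (illustrative state and noise values, not the paper's full EKF):

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """Standard Kalman/EKF measurement update: fold measurement z (e.g. a
    DGPS/Vision-derived attitude pseudo-measurement) into state x."""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - H @ x)                # corrected state
    P = (np.eye(len(x)) - K @ H) @ P       # reduced uncertainty
    return x, P

# Toy 1-D heading example: uncertain prior from inertial/magnetic sensors,
# tighter DGPS/Vision pseudo-measurement.
x = np.array([10.0]); P = np.array([[4.0]])   # prior heading (deg), variance
z = np.array([12.0]); R = np.array([[1.0]])   # measurement, variance
H = np.eye(1)
x, P = kf_update(x, P, z, H, R)
print(x, P)  # posterior pulled toward the measurement, variance reduced
```

The benefit reported in the abstract comes precisely from this term: the DGPS/Vision attitude has a much smaller R than magnetometer-based heading, so the gain K weights it heavily.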

  10. Enhanced Vision for All-Weather Operations Under NextGen

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Kramer, Lynda J.; Williams, Steven P.

    2010-01-01

    Recent research in Synthetic/Enhanced Vision technology is analyzed with respect to existing Category II/III performance and certification guidance. The goal is to start the development of performance-based vision systems technology requirements to support future all-weather operations and the NextGen goal of Equivalent Visual Operations. This work shows that existing criteria to operate in Category III weather and visibility are not directly applicable since, unlike today, the primary reference for maneuvering the airplane is based on what the pilot sees visually through the "vision system." New criteria are consequently needed. Several possible criteria are discussed, but more importantly, the factors associated with landing system performance using automatic and manual landings are delineated.

  11. Image-guided feedback for ophthalmic microsurgery using multimodal intraoperative swept-source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Li, Jianwei D.; Malone, Joseph D.; El-Haddad, Mohamed T.; Arquitola, Amber M.; Joos, Karen M.; Patel, Shriji N.; Tao, Yuankai K.

    2017-02-01

    Surgical interventions for ocular diseases involve manipulations of semi-transparent structures in the eye, but limited visualization of these tissue layers remains a critical barrier to developing novel surgical techniques and improving clinical outcomes. We addressed limitations in image-guided ophthalmic microsurgery by using microscope-integrated multimodal intraoperative swept-source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography (iSS-SESLO-OCT). We previously demonstrated in vivo human ophthalmic imaging using SS-SESLO-OCT, which enabled simultaneous acquisition of en face SESLO images with every OCT cross-section. Here, we integrated our new 400 kHz iSS-SESLO-OCT, which used a buffered Axsun 1060 nm swept source, with a surgical microscope and a TrueVision stereoscopic viewing system to provide image-based feedback. In vivo human imaging performance was demonstrated on a healthy volunteer, and simulated surgical maneuvers were performed in ex vivo porcine eyes. Densely sampled static volumes and volumes subsampled at 10 volumes-per-second were used to visualize tissue deformations and surgical dynamics during corneal sweeps, compressions, and dissections, and retinal sweeps, compressions, and elevations. En face SESLO images enabled orientation and co-registration with the widefield surgical microscope view, while OCT imaging enabled depth-resolved visualization of surgical instrument positions relative to anatomic structures of interest. The TrueVision heads-up display allowed side-by-side viewing of the surgical field with SESLO and OCT previews for real-time feedback, and we demonstrated novel integrated segmentation overlays for augmented-reality surgical guidance. Integration of these complementary imaging modalities may benefit surgical outcomes by enabling real-time intraoperative visualization of surgical plans, instrument positions, tissue deformations, and image-based surrogate biomarkers correlated with completion of surgical goals.

  12. Application Of The Excimer Laser To Area Recontouring Of The Cornea

    NASA Astrophysics Data System (ADS)

    Yoder, Paul R.; Telfair, William B.; Warner, John W.; Martin, Clifford A.; L'Esperance, Francis A.

    1989-04-01

    Excimer lasers operating at 193 nm are being used experimentally in a special type of materials processing wherein the central portion of the anterior surface of the human cornea is selectively ablated so as to change its refractive power and, hopefully, improve impaired vision. Research to date has demonstrated recontouring as a potential means for reducing myopia and hyperopia of cadaver eyes while studies of ablations on the corneas of living monkeys and of blind human volunteers show promise of prompt and successful healing. The procedure has also shown merit in removing superficial scars from the corneal surface. In this paper, we describe the electro-optical system used to deliver the UV laser beam in these experiments and report some preliminary results of the ablation studies.

  13. STS-52 Mission Insignia

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The STS-52 insignia, designed by the mission's crew members, features a large gold star to symbolize the crew's mission on the frontiers of space. A gold star is often used to symbolize the frontier period of the American West. The red star in the shape of the Greek letter lambda represents both the laser measurements taken from the Laser Geodynamic Satellite (LAGEOS II) and the Lambda Point Experiment, which was part of the United States Microgravity Payload (USMP-1). The remote manipulator and maple leaf are emblematic of the Canadian payload specialist who conducted a series of Canadian flight experiments (CANEX-2), including the Space Vision System test.

  14. Space Shuttle Projects

    NASA Image and Video Library

    1992-10-20

    The STS-52 insignia, designed by the mission’s crew members, features a large gold star to symbolize the crew's mission on the frontiers of space. A gold star is often used to symbolize the frontier period of the American West. The red star in the shape of the Greek letter lambda represents both the laser measurements taken from the Laser Geodynamic Satellite (LAGEOS II) and the Lambda Point Experiment, which was part of the United States Microgravity Payload (USMP-1). The remote manipulator and maple leaf are emblematic of the Canadian payload specialist who conducted a series of Canadian flight experiments (CANEX-2), including the Space Vision System test.

  15. Design And Implementation Of Integrated Vision-Based Robotic Workcells

    NASA Astrophysics Data System (ADS)

    Chen, Michael J.

    1985-01-01

    Reports have been sparse on the large-scale, intelligent integration of complete robotic systems for automating the microelectronics industry. This paper describes the application of state-of-the-art computer-vision technology to the manufacturing of miniaturized electronic components. The concepts of FMS (Flexible Manufacturing Systems), work cells, and work stations, and their control hierarchy, are illustrated. Several computer-controlled work cells used in the production of thin-film magnetic heads are described. These cells use vision for in-process control of head-fixture alignment and real-time inspection of production parameters. The vision sensor and other optoelectronic sensors, coupled with transport mechanisms such as steppers, x-y-z tables, and robots, have created complete sensorimotor systems. These systems greatly increase manufacturing throughput as well as the quality of the final product. This paper uses these automated work cells as examples to illustrate the underlying design philosophy and principles in the fabrication of vision-based robotic systems.

  16. Automatic human body modeling for vision-based motion capture system using B-spline parameterization of the silhouette

    NASA Astrophysics Data System (ADS)

    Jaume-i-Capó, Antoni; Varona, Javier; González-Hidalgo, Manuel; Mas, Ramon; Perales, Francisco J.

    2012-02-01

    Human motion capture has a wide variety of applications, and in vision-based motion capture systems a major issue is the human body model and its initialization. We present a computer vision algorithm for building a human body model skeleton in an automatic way. The algorithm is based on the analysis of the human shape. We decompose the body into its main parts by computing the curvature of a B-spline parameterization of the human contour. This algorithm has been applied in a context where the user is standing in front of a camera stereo pair. The process is completed after the user assumes a predefined initial posture so as to identify the main joints and construct the human model. Using this model, the initialization problem of a vision-based markerless motion capture system of the human body is solved.
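The curvature analysis described above can be sketched numerically. The snippet below is a minimal illustration, not the authors' implementation: finite differences on a densely sampled closed contour stand in for the analytic derivatives of a B-spline fit, and the test curve (a circle of radius 2, whose true curvature is 1/r = 0.5) is an assumption of this sketch.

```python
import numpy as np

# Sketch of the contour-curvature step, assuming a closed contour sampled
# at parameter values t; finite differences approximate the derivatives
# that a B-spline parameterization would supply analytically.
def contour_curvature(x: np.ndarray, y: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Signed curvature k = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2)."""
    dx, dy = np.gradient(x, t), np.gradient(y, t)
    ddx, ddy = np.gradient(dx, t), np.gradient(dy, t)
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

# Sanity check on a circle of radius 2: curvature should be 0.5 everywhere.
t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
k = contour_curvature(2.0 * np.cos(t), 2.0 * np.sin(t), t)
```

On a real silhouette, extrema of |k| along the contour would mark candidate cut points between the main body parts; a spline fit would give smoother derivatives than the finite differences used here.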

  17. Properties of light reflected from road signs in active imaging.

    PubMed

    Halstuch, Aviran; Yitzhaky, Yitzhak

    2008-08-01

    Night vision systems in vehicles are a new emerging technology. A crucial problem in active (laser-based) systems is distortion of images by saturation and blooming due to strong retroreflections from road signs. We quantify this phenomenon. We measure the Mueller matrices and the polarization state of the reflected light from three different types of road sign commonly used. Measurements of the reflected intensity are also taken with respect to the angle of reflection. We find that different types of sign have different reflection properties. It is concluded that the optimal solution for attenuating the retroreflected intensity is using a linear polarized light source and a linear polarizer with perpendicular orientation (with regard to the source) at the detector. Unfortunately, while this solution performs well for two types of road sign, it is less efficient for the third sign type.

  18. Towards computer-assisted TTTS: Laser ablation detection for workflow segmentation from fetoscopic video.

    PubMed

    Vasconcelos, Francisco; Brandão, Patrick; Vercauteren, Tom; Ourselin, Sebastien; Deprest, Jan; Peebles, Donald; Stoyanov, Danail

    2018-06-27

Intrauterine foetal surgery is the treatment option for several congenital malformations. For twin-to-twin transfusion syndrome (TTTS), interventions involve the use of a laser fibre to ablate vessels in a shared placenta. The procedure presents a number of challenges for the surgeon, and computer-assisted technologies can potentially be a significant support. Vision-based sensing is the primary source of information from the intrauterine environment, and hence vision-based approaches are appealing for extracting higher-level information from the surgical site. In this paper, we propose a framework to detect one of the key steps during TTTS interventions: ablation. We adopt a deep learning approach, specifically the ResNet101 architecture, for classification of the different surgical actions performed during laser ablation therapy. We perform a two-fold cross-validation using almost 50,000 frames from five different TTTS ablation procedures. Our results show that deep learning methods are a promising approach for ablation detection. To our knowledge, this is the first attempt at automating photocoagulation detection using video, and our technique can be an important component of a larger assistive framework for enhanced foetal therapies. The current implementation does not include semantic segmentation or localisation of the ablation site; this would be a natural extension in future work.

  19. Feasibility Study of a Vision-Based Landing System for Unmanned Fixed-Wing Aircraft

    DTIC Science & Technology

    2017-06-01

International Journal of Computer Science and Network Security 7, no. 3: 112–117. Accessed April 7, 2017. http://www.sciencedirect.com/science/article/pii… This thesis examines the feasibility of applying computer vision techniques and visual feedback in the control loop for an autonomous system, and their integration into an autonomous aircraft control system. Subject terms: autonomous systems, auto-land, computer vision, image processing.

  20. Exercise and Drinking May Play a Role in Vision Impairment Risk

    MedlinePlus


  1. Hyphema

    MedlinePlus


  2. Experimental results of a new system using microwaves for vision correction

    NASA Astrophysics Data System (ADS)

    Ryan, Thomas P.; Pertaub, Radha; Meyers, Steven R.; Dresher, Russell P.; Scharf, Ronald

    2009-02-01

    Technology is in development to correct vision without the use of lasers or cutting of the eye. Many current technologies used to reshape the cornea are invasive, in that either RF needles are placed into the cornea or a flap is cut and then a laser used to ablate the cornea in the optical zone. Keraflex, a therapeutic microwave treatment, is a noninvasive, non-incisional refractive surgery procedure capable of treating myopia (nearsightedness). The goal is to create a predictable refractive change in the optical zone, while preserving the epithelium and deeper structures of the eye. A further goal is to avoid incisions and damage to the epithelium which both require a post-treatment healing period. Experimental work with fresh porcine eyes examined the following variables: duration of the RF pulse, RF power level, coolant amount and timing, electrode spacing, applanation force against the eye, initial eye temperature, and age of eye. We measured curvature changes of the eye with topography, Scheimpflug, Wavefront aberrometry or other means to characterize diopter change as an important endpoint. Other assessment includes evaluation of a fine white ring seen in the cornea following treatment. Dose studies have been done to correlate the treated region with energy delivered. The timing and dosing of energy and cooling were investigated to achieve the target diopter change in vision.

  3. Latency in Visionic Systems: Test Methods and Requirements

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Arthur, J. J., III; Williams, Steven P.; Kramer, Lynda J.

    2005-01-01

    A visionics device creates a pictorial representation of the external scene for the pilot. The ultimate objective of these systems may be to electronically generate a form of Visual Meteorological Conditions (VMC) to eliminate weather or time-of-day as an operational constraint and provide enhancement over actual visual conditions where eye-limiting resolution may be a limiting factor. Empirical evidence has shown that the total system delays or latencies including the imaging sensors and display systems, can critically degrade their utility, usability, and acceptability. Definitions and measurement techniques are offered herein as common test and evaluation methods for latency testing in visionics device applications. Based upon available data, very different latency requirements are indicated based upon the piloting task, the role in which the visionics device is used in this task, and the characteristics of the visionics cockpit display device including its resolution, field-of-regard, and field-of-view. The least stringent latency requirements will involve Head-Up Display (HUD) applications, where the visionics imagery provides situational information as a supplement to symbology guidance and command information. Conversely, the visionics system latency requirement for a large field-of-view Head-Worn Display application, providing a Virtual-VMC capability from which the pilot will derive visual guidance, will be the most stringent, having a value as low as 20 msec.

  4. Differential Gene Expression in Explanted Human Retinal Pigment Epithelial Cells 24-Hours Post-Exposure to 532 nm, 3.0 ns Pulsed Laser Light and 1064 nm, 170 ps Pulsed Laser Light 12-Hours Post-Exposure: Results Compendium

    DTIC Science & Technology

    2004-06-01

Additionally, we offer three conceptual cartoons outlining our vision for the future progress of laser bioeffects research, metabonomic risk assessment modeling, and knowledge building from laser bioeffects data. BACKGROUND: In the … our concepts of future laser bioeffects research directions (Figure 5), a metabonomic risk assessment model of laser tissue interaction (Figure 6 …

  5. An Automatic Image-Based Modelling Method Applied to Forensic Infography

    PubMed Central

    Zancajo-Blazquez, Sandra; Gonzalez-Aguilera, Diego; Gonzalez-Jorge, Higinio; Hernandez-Lopez, David

    2015-01-01

This paper presents a new method based on 3D reconstruction from images that demonstrates the utility and integration of close-range photogrammetry and computer vision as an efficient alternative to modelling complex objects and scenarios of forensic infography. The results obtained confirm the validity of the method compared to other existing alternatives as it guarantees the following: (i) flexibility, permitting work with any type of camera (calibrated and non-calibrated, smartphone or tablet) and image (visible, infrared, thermal, etc.); (ii) automation, allowing the reconstruction of three-dimensional scenarios in the absence of manual intervention; and (iii) high quality results, sometimes providing higher resolution than modern laser scanning systems. As a result, each ocular inspection of a crime scene with any camera performed by the scientific police can be transformed into a scaled 3D model. PMID:25793628

  6. The visually impaired patient.

    PubMed

    Rosenberg, Eric A; Sperazza, Laura C

    2008-05-15

    Blindness or low vision affects more than 3 million Americans 40 years and older, and this number is projected to reach 5.5 million by 2020. In addition to treating a patient's vision loss and comorbid medical issues, physicians must be aware of the physical limitations and social issues associated with vision loss to optimize health and independent living for the visually impaired patient. In the United States, the four most prevalent etiologies of vision loss in persons 40 years and older are age-related macular degeneration, cataracts, glaucoma, and diabetic retinopathy. Exudative macular degeneration is treated with laser therapy, and progression of nonexudative macular degeneration in its advanced stages may be slowed with high-dose antioxidant and zinc regimens. The value of screening for glaucoma is uncertain; management of this condition relies on topical ocular medications. Cataract symptoms include decreased visual acuity, decreased color perception, decreased contrast sensitivity, and glare disability. Lifestyle and environmental interventions can improve function in patients with cataracts, but surgery is commonly performed if the condition worsens. Diabetic retinopathy responds to tight glucose control, and severe cases marked by macular edema are treated with laser photocoagulation. Vision-enhancing devices can help magnify objects, and nonoptical interventions include special filters and enhanced lighting.

  7. Machine vision for digital microfluidics

    NASA Astrophysics Data System (ADS)

    Shin, Yong-Jun; Lee, Jeong-Bong

    2010-01-01

    Machine vision is widely used in an industrial environment today. It can perform various tasks, such as inspecting and controlling production processes, that may require humanlike intelligence. The importance of imaging technology for biological research or medical diagnosis is greater than ever. For example, fluorescent reporter imaging enables scientists to study the dynamics of gene networks with high spatial and temporal resolution. Such high-throughput imaging is increasingly demanding the use of machine vision for real-time analysis and control. Digital microfluidics is a relatively new technology with expectations of becoming a true lab-on-a-chip platform. Utilizing digital microfluidics, only small amounts of biological samples are required and the experimental procedures can be automatically controlled. There is a strong need for the development of a digital microfluidics system integrated with machine vision for innovative biological research today. In this paper, we show how machine vision can be applied to digital microfluidics by demonstrating two applications: machine vision-based measurement of the kinetics of biomolecular interactions and machine vision-based droplet motion control. It is expected that digital microfluidics-based machine vision system will add intelligence and automation to high-throughput biological imaging in the future.

  8. Robot path planning using expert systems and machine vision

    NASA Astrophysics Data System (ADS)

    Malone, Denis E.; Friedrich, Werner E.

    1992-02-01

    This paper describes a system developed for the robotic processing of naturally variable products. In order to plan the robot motion path it was necessary to use a sensor system, in this case a machine vision system, to observe the variations occurring in workpieces and interpret this with a knowledge based expert system. The knowledge base was acquired by carrying out an in-depth study of the product using examination procedures not available in the robotic workplace and relates the nature of the required path to the information obtainable from the machine vision system. The practical application of this system to the processing of fish fillets is described and used to illustrate the techniques.

  9. The Development of a Robot-Based Learning Companion: A User-Centered Design Approach

    ERIC Educational Resources Information Center

    Hsieh, Yi-Zeng; Su, Mu-Chun; Chen, Sherry Y.; Chen, Gow-Dong

    2015-01-01

    A computer-vision-based method is widely employed to support the development of a variety of applications. In this vein, this study uses a computer-vision-based method to develop a playful learning system, which is a robot-based learning companion named RobotTell. Unlike existing playful learning systems, a user-centered design (UCD) approach is…

  10. Using Vision System Technologies for Offset Approaches in Low Visibility Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K.

    2015-01-01

    Flight deck-based vision systems, such as Synthetic Vision Systems (SVS) and Enhanced Flight Vision Systems (EFVS), have the potential to provide additional margins of safety for aircrew performance and enable the implementation of operational improvements for low visibility surface, arrival, and departure operations in the terminal environment with equivalent efficiency to visual operations. Twelve air transport-rated crews participated in a motion-base simulation experiment to evaluate the use of SVS/EFVS in Next Generation Air Transportation System low visibility approach and landing operations at Chicago O'Hare airport. Three monochromatic, collimated head-up display (HUD) concepts (conventional HUD, SVS HUD, and EFVS HUD) and three instrument approach types (straight-in, 3-degree offset, 15-degree offset) were experimentally varied to test the efficacy of the SVS/EFVS HUD concepts for offset approach operations. The findings suggest making offset approaches in low visibility conditions with an EFVS HUD or SVS HUD appear feasible. Regardless of offset approach angle or HUD concept being flown, all approaches had comparable ILS tracking during the instrument segment and were within the lateral confines of the runway with acceptable sink rates during the visual segment of the approach. Keywords: Enhanced Flight Vision Systems; Synthetic Vision Systems; Head-up Display; NextGen

  11. Integrated cockpit design for the Army helicopter improvement program

    NASA Technical Reports Server (NTRS)

    Drennen, T.; Bowen, B.

    1984-01-01

The primary mission of the Army Helicopter Improvement Program (AHIP) is to navigate precisely, locate targets accurately, communicate their positions to other battlefield elements, and designate them for laser-guided weapons. The onboard navigation and mast-mounted sight (MMS) avionics enable accurate tracking of current aircraft position and subsequent target location. The AHIP crewstation development was based on extensive mission/task analysis, function allocation, total system design, and test and verification. The avionics requirements to meet the mission were limited by the existing aircraft structural and performance characteristics and the resultant space, weight, and power restrictions. These limitations, together with the night-operations requirement, led to the use of night vision goggles. The combination of these requirements and limitations dictated an integrated control/display approach using multifunction displays and controls.

  12. The research of binocular vision ranging system based on LabVIEW

    NASA Astrophysics Data System (ADS)

    Li, Shikuan; Yang, Xu

    2017-10-01

Based on the principle of binocular parallax ranging, a binocular vision ranging system is designed and built. The stereo matching algorithm is implemented in LabVIEW. Camera calibration and distance measurement are completed. Error analysis shows that the system is fast and effective and can be used in corresponding industrial settings.
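The binocular parallax principle behind such a system reduces to triangulation on a rectified camera pair. A minimal sketch (the focal length, baseline, and disparity values below are illustrative, not from the record):

```python
# Depth from stereo disparity for a rectified camera pair: Z = f * B / d,
# with focal length f in pixels, baseline B in metres, disparity d in pixels.
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulated depth of a matched feature point."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px

# A feature shifted 16 px between views, with f = 800 px and B = 0.12 m
z = depth_from_disparity(800.0, 0.12, 16.0)
```

Because depth is inversely proportional to disparity, a fixed matching error of a fraction of a pixel costs far more range accuracy for distant targets than for near ones.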

  13. Robotic vision techniques for space operations

    NASA Technical Reports Server (NTRS)

    Krishen, Kumar

    1994-01-01

    Automation and robotics for space applications are being pursued for increased productivity, enhanced reliability, increased flexibility, higher safety, and for the automation of time-consuming tasks and those activities which are beyond the capacity of the crew. One of the key functional elements of an automated robotic system is sensing and perception. As the robotics era dawns in space, vision systems will be required to provide the key sensory data needed for multifaceted intelligent operations. In general, the three-dimensional scene/object description, along with location, orientation, and motion parameters will be needed. In space, the absence of diffused lighting due to a lack of atmosphere gives rise to: (a) high dynamic range (10(exp 8)) of scattered sunlight intensities, resulting in very high contrast between shadowed and specular portions of the scene; (b) intense specular reflections causing target/scene bloom; and (c) loss of portions of the image due to shadowing and presence of stars, Earth, Moon, and other space objects in the scene. In this work, developments for combating the adverse effects described earlier and for enhancing scene definition are discussed. Both active and passive sensors are used. The algorithm for selecting appropriate wavelength, polarization, look angle of vision sensors is based on environmental factors as well as the properties of the target/scene which are to be perceived. The environment is characterized on the basis of sunlight and other illumination incident on the target/scene and the temperature profiles estimated on the basis of the incident illumination. The unknown geometrical and physical parameters are then derived from the fusion of the active and passive microwave, infrared, laser, and optical data.

  14. A smart telerobotic system driven by monocular vision

    NASA Technical Reports Server (NTRS)

    Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.

    1994-01-01

    A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.

  15. Pyramidal neurovision architecture for vision machines

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.; Knopf, George K.

    1993-08-01

    The vision system employed by an intelligent robot must be active; active in the sense that it must be capable of selectively acquiring the minimal amount of relevant information for a given task. An efficient active vision system architecture that is based loosely upon the parallel-hierarchical (pyramidal) structure of the biological visual pathway is presented in this paper. Although the computational architecture of the proposed pyramidal neuro-vision system is far less sophisticated than the architecture of the biological visual pathway, it does retain some essential features such as the converging multilayered structure of its biological counterpart. In terms of visual information processing, the neuro-vision system is constructed from a hierarchy of several interactive computational levels, whereupon each level contains one or more nonlinear parallel processors. Computationally efficient vision machines can be developed by utilizing both the parallel and serial information processing techniques within the pyramidal computing architecture. A computer simulation of a pyramidal vision system for active scene surveillance is presented.

  16. Deep hierarchies in the primate visual cortex: what can we learn for computer vision?

    PubMed

    Krüger, Norbert; Janssen, Peter; Kalkan, Sinan; Lappe, Markus; Leonardis, Ales; Piater, Justus; Rodríguez-Sánchez, Antonio J; Wiskott, Laurenz

    2013-08-01

    Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. Organized for a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system considering recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.

  17. A Multi-Sensorial Simultaneous Localization and Mapping (SLAM) System for Low-Cost Micro Aerial Vehicles in GPS-Denied Environments

    PubMed Central

    López, Elena; García, Sergio; Barea, Rafael; Bergasa, Luis M.; Molinos, Eduardo J.; Arroyo, Roberto; Romera, Eduardo; Pardo, Samuel

    2017-01-01

    One of the main challenges of aerial robots navigation in indoor or GPS-denied environments is position estimation using only the available onboard sensors. This paper presents a Simultaneous Localization and Mapping (SLAM) system that remotely calculates the pose and environment map of different low-cost commercial aerial platforms, whose onboard computing capacity is usually limited. The proposed system adapts to the sensory configuration of the aerial robot, by integrating different state-of-the art SLAM methods based on vision, laser and/or inertial measurements using an Extended Kalman Filter (EKF). To do this, a minimum onboard sensory configuration is supposed, consisting of a monocular camera, an Inertial Measurement Unit (IMU) and an altimeter. It allows to improve the results of well-known monocular visual SLAM methods (LSD-SLAM and ORB-SLAM are tested and compared in this work) by solving scale ambiguity and providing additional information to the EKF. When payload and computational capabilities permit, a 2D laser sensor can be easily incorporated to the SLAM system, obtaining a local 2.5D map and a footprint estimation of the robot position that improves the 6D pose estimation through the EKF. We present some experimental results with two different commercial platforms, and validate the system by applying it to their position control. PMID:28397758
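The scale-ambiguity point is the easiest part of the pipeline to illustrate. Below is a toy one-state Kalman filter, not the paper's full 6D-pose EKF: it estimates the unknown metric scale s of a monocular SLAM height track from altimeter readings, under the assumed measurement model z_alt ≈ s · z_slam. All numbers and names are illustrative.

```python
# Toy 1-state Kalman filter (hypothetical, for illustration only):
# state s is the metric scale of a monocular SLAM trajectory, observed
# indirectly through altimeter heights z_alt = s * z_slam + noise.
def estimate_scale(pairs, s0=1.0, p0=1.0, meas_var=0.01):
    s, p = s0, p0                            # scale estimate and its variance
    for z_slam, z_alt in pairs:
        h = z_slam                           # measurement Jacobian of z_alt = s * z_slam
        gain = p * h / (h * h * p + meas_var)
        s += gain * (z_alt - h * s)          # innovation update
        p *= (1.0 - gain * h)                # covariance update
    return s

# Altimeter reads exactly twice the (unitless) SLAM height -> true scale 2.0
readings = [(1.0, 2.0), (1.5, 3.0), (0.8, 1.6), (1.2, 2.4)]
scale = estimate_scale(readings)
```

In the real system the same idea is embedded in the EKF state vector, so the scale estimate is refined jointly with the 6D pose rather than in isolation.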

  18. Design and control of active vision based mechanisms for intelligent robots

    NASA Technical Reports Server (NTRS)

    Wu, Liwei; Marefat, Michael M.

    1994-01-01

In this paper, we propose a design for an active vision system for intelligent robot applications. The system has degrees of freedom for pan, tilt, vergence, camera height adjustment, and baseline adjustment, with a hierarchical control system structure. Based on this vision system, we discuss two problems involved in the binocular gaze stabilization process: fixation point selection and vergence disparity extraction. A hierarchical approach to determining the point of fixation from potential gaze targets, using an evaluation function representing human visual behavior in response to outside stimuli, is suggested. We also characterize the different visual tasks of the two cameras for vergence control, and present a phase-based method operating on binarized images to extract the vergence disparity. A control algorithm for vergence is discussed.
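The phase-based disparity idea can be sketched in one dimension. This is an assumed simplification, not the paper's method: phase correlation between two scanlines recovers the horizontal shift between them, which is the quantity a vergence controller would drive toward zero.

```python
import numpy as np

# Sketch of phase-based disparity extraction: the cross-power spectrum of
# two shifted signals keeps only phase, so its inverse FFT peaks at the shift.
def phase_disparity(left: np.ndarray, right: np.ndarray) -> int:
    f_l, f_r = np.fft.fft(left), np.fft.fft(right)
    cross = f_r * np.conj(f_l)
    cross /= np.abs(cross) + 1e-12            # normalize: keep phase only
    corr = np.fft.ifft(cross).real            # delta-like peak at the shift
    shift = int(np.argmax(corr))
    n = len(left)
    return shift if shift <= n // 2 else shift - n  # wrap to a signed shift

rng = np.random.default_rng(0)
scanline = rng.standard_normal(128)           # broadband stand-in for a scanline
d = phase_disparity(scanline, np.roll(scanline, 7))
```

On binarized stereo images the same computation would run per row (or on 2-D patches), with the recovered disparity fed back to the vergence axis.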

  19. Vision-based vehicle detection and tracking algorithm design

    NASA Astrophysics Data System (ADS)

    Hwang, Junyeon; Huh, Kunsoo; Lee, Donghwi

    2009-12-01

    The vision-based vehicle detection in front of an ego-vehicle is regarded as promising for driver assistance as well as for autonomous vehicle guidance. The feasibility of vehicle detection in a passenger car requires accurate and robust sensing performance. A multivehicle detection system based on stereo vision has been developed for better accuracy and robustness. This system utilizes morphological filter, feature detector, template matching, and epipolar constraint techniques in order to detect the corresponding pairs of vehicles. After the initial detection, the system executes the tracking algorithm for the vehicles. The proposed system can detect front vehicles such as the leading vehicle and side-lane vehicles. The position parameters of the vehicles located in front are obtained based on the detection information. The proposed vehicle detection system is implemented on a passenger car, and its performance is verified experimentally.
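Of the techniques listed, template matching by normalized cross-correlation (NCC) is straightforward to sketch. The following 1-D, pure-Python illustration is an assumption of this sketch, not the paper's implementation; a real system runs 2-D NCC over image patches, with the epipolar constraint restricting the search to a single scanline.

```python
# Hypothetical 1-D illustration of template matching by NCC.
def ncc(window, template):
    """Normalized cross-correlation of two equal-length sequences."""
    n = len(template)
    mw, mt = sum(window) / n, sum(template) / n
    num = sum((w - mw) * (t - mt) for w, t in zip(window, template))
    dw = sum((w - mw) ** 2 for w in window) ** 0.5
    dt = sum((t - mt) ** 2 for t in template) ** 0.5
    return num / (dw * dt) if dw > 0 and dt > 0 else 0.0

def match_template(signal, template):
    """Index where the template best matches the signal (highest NCC)."""
    m = len(template)
    scores = [ncc(signal[i:i + m], template) for i in range(len(signal) - m + 1)]
    return max(range(len(scores)), key=lambda i: scores[i])

signal = [0.0] * 10 + [1.0, 3.0, 2.0] + [0.0] * 10
best = match_template(signal, [1.0, 3.0, 2.0])
```

NCC's mean-and-variance normalization is what makes the match robust to the brightness and contrast differences between the two cameras of a stereo rig.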

  20. Color vision deficits and laser eyewear protection for soft tissue laser applications.

    PubMed

    Teichman, J M; Vassar, G J; Yates, J T; Angle, B N; Johnson, A J; Dirks, M S; Thompson, I M

    1999-03-01

    Laser safety considerations require urologists to wear laser eye protection. Laser eye protection devices block transmittance of specific light wavelengths and may distort color perception. We tested whether urologists risk color confusion when wearing laser eye protection devices for laser soft tissue applications. Subjects were tested with the Farnsworth-Munsell 100-Hue Test without (controls) and with laser eye protection devices for carbon dioxide, potassium titanyl phosphate (KTP), neodymium (Nd):YAG and holmium:YAG lasers. Color deficits were characterized by error scores, polar graphs, confusion angles, confusion index, scatter index and color axes. Laser eye protection device spectral transmittance was tested with spectrophotometry. Mean total error scores plus or minus standard deviation were 13+/-5 for controls, and 44+/-31 for carbon dioxide, 273+/-26 for KTP, 22+/-6 for Nd:YAG and 14+/-8 for holmium:YAG devices (p <0.001). The KTP laser eye protection polar graphs, and confusion and scatter indexes revealed moderate blue-yellow and red-green color confusion. Color axes indicated no significant deficits for controls, or carbon dioxide, Nd:YAG or holmium:YAG laser eye protection in any subject compared to blue-yellow color vision deficits in 8 of 8 tested with KTP laser eye protection (p <0.001). Spectrophotometry demonstrated that light was blocked with laser eye protection devices for carbon dioxide less than 380, holmium:YAG greater than 850, Nd:YAG less than 350 and greater than 950, and KTP less than 550 and greater than 750 nm. The laser eye protection device for KTP causes significant blue-yellow and red-green color confusion. Laser eye protection devices for carbon dioxide, holmium:YAG and Nd:YAG cause no significant color confusion compared to controls. The differences are explained by laser eye protection spectrophotometry characteristics and visual physiology.

  1. Flash LIDAR Systems for Planetary Exploration

    NASA Astrophysics Data System (ADS)

    Dissly, Richard; Weinberg, J.; Weimer, C.; Craig, R.; Earhart, P.; Miller, K.

    2009-01-01

Ball Aerospace offers a mature, highly capable 3D flash-imaging LIDAR system for planetary exploration. Multi-mission applications include orbital, standoff, and surface terrain mapping; long-distance and rapid close-in ranging; descent and surface navigation; and rendezvous and docking. Our flash LIDAR is an optical, time-of-flight, topographic imaging system, leveraging innovations in focal plane arrays, real-time readout integrated circuit processing, and compact, efficient pulsed laser sources. Due to its modular design, it can be easily tailored to satisfy a wide range of mission requirements. Flash LIDAR offers several distinct advantages over traditional scanning systems. The entire scene within the sensor's field of view is imaged with a single laser flash. This directly produces an image with each pixel already correlated in time, making the sensor resistant to the relative motion of a target subject. Additionally, images may be produced at rates much faster than are possible with a scanning system. And because the system captures a new complete image with each flash, optical glint and clutter are easily filtered and discarded. This allows for imaging under any lighting condition and makes the system virtually insensitive to stray light. Finally, because there are no moving parts, our flash LIDAR system is highly reliable and has a long life expectancy. As an industry leader in laser active sensor system development, Ball Aerospace has been working for more than four years to mature flash LIDAR systems for space applications, and is now under contract to provide the Vision Navigation System for NASA's Orion spacecraft. Our system uses heritage optics and electronics from our star tracker products, and space-qualified lasers similar to those used in our CALIPSO LIDAR, which has been in continuous operation since 2006, providing more than 1.3 billion laser pulses to date.
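Each pixel of a flash imager performs the same time-of-flight measurement. A back-of-envelope sketch of that ranging principle (the echo time below is illustrative):

```python
# Time-of-flight ranging, the principle behind each flash-LIDAR pixel:
# range = c * round_trip_time / 2 (the pulse travels out and back).
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range_m(round_trip_s: float) -> float:
    """Target distance for a measured laser-pulse round-trip time."""
    return C * round_trip_s / 2.0

r = tof_range_m(1.0e-6)  # a 1 microsecond echo corresponds to ~150 m
```

The factor of two is the whole trick: centimetre-level ranging requires timing the return to within roughly 67 picoseconds per centimetre, which is why the readout electronics matter as much as the laser.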

  2. [Maculopathy caused by Nd:YAG laser accident].

    PubMed

    Blümel, C; Brosig, J

    1999-02-01

Since the construction of the first laser in the 1960s and its expanded use in medicine, technology, and hobby applications, the number of laser accidents has increased. Established therapy concepts are currently lacking. A 19-year-old man was struck in the right eye by the pulse of a military hand-held rangefinder (Nd:YAG, wavelength 1064 nm). Immediately after the accident, visual acuity dropped to 1/35 and a central scotoma with metamorphopsia occurred. The ophthalmological findings showed a distinct submacular hemorrhage. Therapy with intravenous and daily parabulbar prednisolone, vitamin C, and systemic and local indomethacin resulted in an increase of visual acuity to 0.4 and a reduction of the central scotoma from 8 degrees to 2 degrees. Systemic and local use of antiphlogistic and anti-inflammatory agents may partially reduce vision-limiting scar formation. Antioxidants to neutralize the toxic radicals arising from tissue decay should be given in addition to the cycloplegic medication. Special attention should be paid to the prevention of such laser accidents.

  3. Clinical Outcome of Retinal Vasculitis and Predictors for Prognosis of Ischemic Retinal Vasculitis.

    PubMed

    Sharief, Lazha; Lightman, Sue; Blum-Hareuveni, Tamar; Bar, Asaf; Tomkins-Netzer, Oren

    2017-05-01

    To determine factors affecting the visual outcome in eyes with retinal vasculitis and the rate of neovascularization relapse in ischemic vasculitis. Retrospective cohort study. We reviewed 1169 uveitis patients from Moorfields Eye Hospital, London, UK. Retinal vasculitis was observed in 236 eyes (121 ischemic, 115 nonischemic), which were compared with a control group (1022 eyes) with no retinal vasculitis. Ultra-widefield fluorescein angiography images were obtained in 63 eyes with ischemic vasculitis to quantify the area of nonperfusion, measured as the ischemic index. The risk of vision loss was significantly greater in the retinal vasculitis group than in the non-vasculitis group (hazard ratio [HR] 1.67, 95% confidence interval [CI] 1.24-2.25, P = .001). Retinal vasculitis carried twice the risk of macular edema compared with the non-vasculitis group. Macular ischemia increased the risk of vision loss in vasculitis eyes by 4.4 times. The use of systemic prednisolone in eyes with vasculitis was associated with a reduced risk of vision loss (HR 0.36, 95% CI 0.15-0.82, P = .01). Laser photocoagulation was administered in 75 eyes (62.0%), of which 29 (38.1%) had new vessel relapse and required additional laser treatment. The median ischemic index was 25.8% (interquartile range 10.2%-46%). Ischemia involving ≥2 quadrants was associated with an increased risk of new vessel formation (HR 2.7, 95% CI 1.3-5.5, P = .003). Retinal vasculitis is associated with an increased risk of vision loss, mainly secondary to macular ischemia, and carries a higher risk of macular edema compared with eyes with no vasculitis. Ischemia involving ≥2 quadrants is a risk factor for new vessel formation. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. New laser protective eyewear

    NASA Astrophysics Data System (ADS)

    McLear, Mark

    1996-04-01

    Laser technology has significantly impacted our everyday life. Lasers are now used to correct your vision, clear your arteries, and are used in the manufacturing of such diverse products as automobiles, cigarettes, and computers. Lasers are no longer a research tool looking for an application. They are now an integral part of manufacturing. In the case of Class IV lasers, this explosion in laser applications has exposed thousands of individuals to potential safety hazards including eye damage. Specific protective eyewear designed to attenuate the energy of the laser beam below the maximum permissible exposure is required for Class 3B and Class IV lasers according to laser safety standards.

  5. 3D Blade Vibration Measurements on an 80 m Diameter Wind Turbine by Using Non-contact Remote Measurement Systems

    NASA Astrophysics Data System (ADS)

    Ozbek, Muammer; Rixen, Daniel J.

    Non-contact optical measurement systems, photogrammetry and laser interferometry, are introduced as cost-efficient alternatives to the conventional wind turbine/farm monitoring systems currently in use. The proposed techniques are shown to provide accurate measurements of the dynamic behavior of a 2.5 MW wind turbine with an 80 m rotor diameter. Several measurements are taken on the test turbine using 4 CCD cameras and 1 laser vibrometer, and the response of the turbine is monitored from a distance of 220 m. The results of the in-field tests and the corresponding analyses show that photogrammetry (also called videogrammetry or computer vision) enables the 3D deformations of the rotor to be measured at 33 different points simultaneously, with an average accuracy of ±25 mm, while the turbine is rotating. Several important turbine modes can also be extracted from the recorded data. Similarly, laser interferometry (used for the parked turbine only) provides very valuable information on the dynamic properties of the turbine structure. Twelve different turbine modes can be identified from the obtained response data.
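
    For intuition on how multi-camera photogrammetry recovers 3D displacement, the standard rectified-stereo relation converts image disparity into depth. A minimal sketch under idealized assumptions (rectified cameras, focal length known in pixels); the function name and the numbers are illustrative, not values from the test campaign:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Rectified two-camera depth: Z = f * B / d.

    focal_px     -- focal length expressed in pixels
    baseline_m   -- distance between the two camera centres, metres
    disparity_px -- horizontal pixel shift of the same point between views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# illustrative numbers: a 5000 px focal length and 20 m camera baseline
# give a point at 200 m depth a disparity of 500 px
z = depth_from_disparity(5000.0, 20.0, 500.0)
```

    The relation also shows why long baselines and long focal lengths are needed at 220 m stand-off: depth sensitivity dZ/dd shrinks as f·B grows, which is what pushes the accuracy toward the ±25 mm reported above.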

  6. Ground-Based and Space-Based Laser Beam Power Applications

    NASA Technical Reports Server (NTRS)

    Bozek, John M.

    1995-01-01

    A space power system based on laser beam power is sized to reduce mass, increase operational capabilities, and reduce complexity. The advantages of laser systems over solar-based systems are compared as a function of application. Power produced from the conversion of a laser beam that has been generated on the Earth's surface and beamed into cislunar space resulted in decreased round-trip time for Earth satellite electric propulsion tugs and a substantial landed-mass savings for a lunar surface mission. The mass of a space-based laser system (generator in space and receiver near the user) that beams down to an extraterrestrial airplane, orbiting spacecraft, surface outpost, or rover is calculated and compared to that of a solar-based system. In general, the low-mass advantage of these space-based laser systems is limited to high solar eclipse time missions at distances inside Jupiter. The power system mass is less in a continuously moving Mars rover or surface outpost using space-based laser technology than in a comparable solar-based power system, but only during dust storm conditions. Even at large distances from the Sun, the user-site portion of a space-based laser power system (e.g., the laser receiver component) is substantially less massive than a solar-based system with the requisite on-board electrochemical energy storage.

  7. Research on moving object detection based on frog's eyes

    NASA Astrophysics Data System (ADS)

    Fu, Hongwei; Li, Dongguang; Zhang, Xinyuan

    2008-12-01

    Based on the information-processing mechanism of the frog's eye, this paper discusses a bionic detection technology suited to processing object information in frog-like vision. First, a bionic detection theory imitating frog vision is established: a parallel processing mechanism comprising capture and pre-treatment of object information, parallel separation of the digital image, parallel processing, and information synthesis. A computer vision detection system is described that detects moving objects of a particular color or shape; experiments indicate that such objects can be detected against a cluttered background. A moving-object detection electronic model imitating biological vision based on the frog's eye is established. In this system the analog video signal is first digitized, and the digital signal is then separated in parallel by an FPGA. During parallel processing, video information is captured, processed, and displayed simultaneously, and information fusion is carried out over the DSP HPI ports in order to transmit the data processed by the DSP. This system covers a wider visual field and achieves higher image resolution than ordinary monitoring systems. In summary, simulation experiments on edge detection of moving objects with the Canny algorithm indicate that the system can detect the edges of moving objects in real time; the feasibility of the bionic model is demonstrated in an engineering system, laying a solid foundation for future study of detection technology that imitates biological vision.
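
    The moving-object-plus-edge-detection pipeline described in this record can be sketched compactly. The toy below isolates motion by frame differencing and outlines the moving region with a Sobel gradient, a simplified stand-in for the Canny detector used in the paper; all names, thresholds, and the test pattern are invented:

```python
import numpy as np

def moving_edges(prev_frame, cur_frame, motion_thresh=25, edge_thresh=50):
    """Toy pipeline: frame differencing isolates moving pixels, then a
    Sobel gradient magnitude (simplified substitute for Canny) keeps
    only the edges that belong to the moving object."""
    prev = np.asarray(prev_frame, dtype=float)
    cur = np.asarray(cur_frame, dtype=float)
    motion = np.abs(cur - prev) > motion_thresh  # pixels that changed
    # Sobel gradients of the current frame
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(cur, 1, mode="edge")
    gx = np.zeros_like(cur)
    gy = np.zeros_like(cur)
    h, w = cur.shape
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    edges = np.hypot(gx, gy) > edge_thresh
    return edges & motion  # edge pixels of the moving object only

# a bright square appears between two otherwise identical frames
prev = np.zeros((8, 8))
cur = np.zeros((8, 8))
cur[2:5, 2:5] = 200.0
mask = moving_edges(prev, cur)
```

    A real FPGA/DSP implementation would compute the two branches in parallel per the paper's architecture; here they are sequential for clarity.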

  8. Visions of tomorrow: A focus on national space transportation issues; Proceedings of the Twenty-fifth Goddard Memorial Symposium, Greenbelt, MD, Mar. 18-20, 1987

    NASA Technical Reports Server (NTRS)

    Soffen, Gerald A. (Editor)

    1987-01-01

    The present conference on U.S. space transportation systems development discusses opportunities for aerospace students in prospective military, civil, industrial, and scientific programs, current strategic conceptualization and program planning for future U.S. space transportation, the DOD space transportation plan, NASA space transportation plans, medium launch vehicle and commercial space launch services, the capabilities and availability of foreign launch vehicles, and the role of commercial space launch systems. Also discussed are available upper stage systems, future space transportation needs for space science and applications, the trajectory analysis of a low lift/drag-aeroassisted orbit transfer vehicle, possible replacements for the Space Shuttle, LEO to GEO with combined electric/beamed-microwave power from earth, the National Aerospace Plane, laser propulsion to earth orbit, and a performance analysis for a laser-powered SSTO vehicle.

  9. Comparisons of selected laser beam power missions to conventionally powered missions

    NASA Technical Reports Server (NTRS)

    Bozek, John M.; Oleson, Steven R.; Landis, Geoffrey A.; Stavnes, Mark W.

    1993-01-01

    Earth-based laser sites beaming laser power to space assets have shown benefits over competing power system concepts for specific missions. Missions analyzed in this report that show benefits of laser beam power are low Earth orbit (LEO) to geosynchronous Earth orbit (GEO) transfer, LEO to low lunar orbit (LLO) cargo missions, and lunar-base power. Both laser- and solar-powered orbit-transfer vehicles (OTV's) make a 'tug' concept viable, which substantially reduces cumulative initial mass to LEO in comparison to chemical propulsion concepts. Lunar cargo missions utilizing laser electric propulsion from Earth-orbit to LLO show substantial mass saving to LEO over chemical propulsion systems. Lunar-base power system options were compared on a landed-mass basis. Photovoltaics with regenerative fuel cells, reactor-based systems, and laser-based systems were sized to meet a generic lunar-base power profile. A laser-based system begins to show landed mass benefits over reactor-based systems when proposed production facilities on the Moon require power levels greater than approximately 300 kWe. Benefit/cost ratios of laser power systems for an OTV, both to GEO and LLO, and for a lunar base were calculated to be greater than 1.

  10. Real-Time Implementation of an Asynchronous Vision-Based Target Tracking System for an Unmanned Aerial Vehicle

    DTIC Science & Technology

    2007-06-01

    Chin Khoon Quek. “Vision Based Control and Target Range Estimation for Small Unmanned Aerial Vehicle.” Master’s Thesis, Naval Postgraduate School...December 2005. [6] Kwee Chye Yap. “Incorporating Target Mensuration System for Target Motion Estimation Along a Road Using Asynchronous Filter

  11. 78 FR 68475 - Certain Vision-Based Driver Assistance System Cameras and Components Thereof; Institution of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-14

    ... INTERNATIONAL TRADE COMMISSION [Inv. No. 337-TA-899] Certain Vision-Based Driver Assistance System.... International Trade Commission. ACTION: Notice. SUMMARY: Notice is hereby given that a complaint was filed with the U.S. International Trade Commission on September 20, 2013, under section 337 of the Tariff Act of...

  12. A real-time camera calibration system based on OpenCV

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Wang, Hua; Guo, Huinan; Ren, Long; Zhou, Zuofeng

    2015-07-01

    Camera calibration is one of the essential steps in computer vision research. This paper describes a real-time camera calibration system based on OpenCV, developed and implemented in the VS2008 environment. Experimental results show that the system achieves simple and fast camera calibration with higher precision than MATLAB, requires no manual intervention, and can be widely used in various computer vision systems.

  13. Next-Generation Terrestrial Laser Scanning to Measure Forest Canopy Structure

    NASA Astrophysics Data System (ADS)

    Danson, M.

    2015-12-01

    Terrestrial laser scanners (TLS) are now capable of semi-automatic reconstruction of the structure of complete trees or forest stands and have the potential to provide detailed information on tree architecture and foliage biophysical properties. The trends for the next generation of TLS are towards higher resolution, faster scanning and full-waveform data recording, with mobile, multispectral laser devices. The convergence of these technological advances in the next generation of TLS will allow the production of information for forest and woodland mapping and monitoring that is far more detailed, more accurate, and more comprehensive than any available today. This paper describes recent scientific advances in the application of TLS for characterising forest and woodland areas, drawing on the authors' development of the Salford Advanced Laser Canopy Analyser (SALCA), the activities of the Terrestrial Laser Scanner International Interest Group (TLSIIG), and recent advances in laser scanner technology around the world. The key findings illustrated in the paper are that (i) a complete understanding of system measurement characteristics is required for quantitative analysis of TLS data, (ii) full-waveform data recording is required for extraction of forest biophysical variables and, (iii) multi-wavelength systems provide additional spectral information that is essential for classifying different vegetation components. The paper uses a range of recent experimental TLS measurements to support these findings, and sets out a vision for new research to develop an information-rich future-forest information system, populated by mobile autonomous multispectral TLS devices.
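
    The role of full-waveform recording can be illustrated with the simplest possible return extraction: treating each local maximum of the digitized waveform above a noise threshold as one discrete return. Operational systems fit Gaussian pulses rather than picking raw peaks, and the sample waveform below is invented:

```python
def waveform_returns(samples, thresh):
    """Indices of local maxima above `thresh` in a digitized LIDAR
    waveform; each surviving peak is read as one discrete return
    (e.g. upper canopy, understory, ground)."""
    peaks = []
    for i in range(1, len(samples) - 1):
        if (samples[i] >= thresh
                and samples[i] > samples[i - 1]
                and samples[i] >= samples[i + 1]):
            peaks.append(i)
    return peaks

# toy waveform: a canopy return at sample 2 and a ground return at sample 6
wave = [0, 1, 5, 2, 0, 3, 8, 3, 0]
returns = waveform_returns(wave, thresh=4)
```

    Keeping the whole waveform, rather than only these peaks, is what lets full-waveform TLS recover pulse width and energy per return, the quantities the paper argues are needed for forest biophysical variables.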

  14. Vision sensor and dual MEMS gyroscope integrated system for attitude determination on moving base

    NASA Astrophysics Data System (ADS)

    Guo, Xiaoting; Sun, Changku; Wang, Peng; Huang, Lu

    2018-01-01

    To determine, with a MEMS (Micro-Electro-Mechanical Systems) gyroscope, the relative attitude between an object on a moving base and the base reference system, the motion of the base itself is redundant information that must be removed from the gyroscope measurement. Our strategy is to add an auxiliary gyroscope attached to the reference system: the master gyroscope senses the total motion, and the auxiliary gyroscope senses the motion of the moving base. By a generalized difference method, the relative attitude in a non-inertial frame can be determined from the dual gyroscopes. With the vision sensor suppressing the accumulative drift of the MEMS gyroscope, a vision and dual-MEMS-gyroscope integrated system is formed. Coordinate system definitions and spatial transforms are established in order to fuse inertial and visual data from different coordinate systems. A nonlinear filter algorithm, the Cubature Kalman filter, is used to fuse slow visual data and fast inertial data. A practical experimental setup is built and used to validate the feasibility and effectiveness of the proposed attitude determination system in the non-inertial frame on the moving base.
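
    The generalized-difference idea can be sketched in a few lines. The toy below is deliberately simplified to a single axis, assuming aligned gyro axes and small angles, with a complementary filter standing in for the paper's Cubature Kalman filter; all names and rates are invented:

```python
def fuse_relative_attitude(w_master, w_base, theta_vision, dt=0.01, alpha=0.98):
    """Single-axis sketch of dual-gyro relative attitude with visual fixes.

    The auxiliary gyro's rate (base motion) is differenced out of the
    master gyro's rate, the remainder is integrated, and an occasional
    slow vision measurement pulls the estimate back to bound drift.
    """
    theta = 0.0
    fused = []
    for wm, wb, tv in zip(w_master, w_base, theta_vision):
        w_rel = wm - wb          # remove base motion (difference method)
        theta += w_rel * dt      # fast inertial integration
        if tv is not None:       # slow visual fix, when one is available
            theta = alpha * theta + (1 - alpha) * tv
        fused.append(theta)
    return fused

# constant rates: master senses 1.5 rad/s total, base contributes 0.5 rad/s,
# so the true relative rate is 1.0 rad/s; no vision fix in this run
w_master = [1.5] * 100
w_base = [0.5] * 100
vision = [None] * 100
theta = fuse_relative_attitude(w_master, w_base, vision)
```

    After 100 steps of 10 ms the integrated relative angle is 1.0 rad, as expected; in the real system the vision channel is what keeps this integration from drifting over long runs.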

  15. Low computation vision-based navigation for a Martian rover

    NASA Technical Reports Server (NTRS)

    Gavin, Andrew S.; Brooks, Rodney A.

    1994-01-01

    Construction and design details of the Mobot Vision System, a small, self-contained, mobile vision system, are presented. This system uses the view from the top of a small, roving, robotic vehicle to supply data that is processed in real-time to safely navigate the surface of Mars. A simple, low-computation algorithm for constructing a 3-D navigational map of the Martian environment to be used by the rover is discussed.

  16. MMW radar enhanced vision systems: the Helicopter Autonomous Landing System (HALS) and Radar-Enhanced Vision System (REVS) are rotary and fixed wing enhanced flight vision systems that enable safe flight operations in degraded visual environments

    NASA Astrophysics Data System (ADS)

    Cross, Jack; Schneider, John; Cariani, Pete

    2013-05-01

    Sierra Nevada Corporation (SNC) has developed rotary- and fixed-wing millimeter wave radar enhanced vision systems. The Helicopter Autonomous Landing System (HALS) is a rotary-wing enhanced vision system that enables multi-ship landing, takeoff, and enroute flight in Degraded Visual Environments (DVE). HALS has been successfully flight tested in a variety of scenarios, from brown-out DVE landings, to enroute flight over mountainous terrain, to wire/cable detection during low-level flight. The Radar-Enhanced Vision System (REVS) is a fixed-wing Enhanced Flight Vision System (EFVS) undergoing prototype development testing. Both systems are based on a fast-scanning, three-dimensional 94 GHz radar that produces real-time terrain and obstacle imagery. The radar imagery is fused with synthetic imagery of the surrounding terrain to form a long-range, wide field-of-view display. A symbology overlay is added to provide aircraft state information and, for HALS, approach and landing command guidance cuing. The combination of see-through imagery and symbology provides the key information a pilot needs to perform safe flight operations in DVE conditions. This paper discusses the HALS and REVS systems and technology, presents imagery, and summarizes the recent flight test results.

  17. Advanced helmet vision system (AHVS) integrated night vision helmet mounted display (HMD)

    NASA Astrophysics Data System (ADS)

    Ashcraft, Todd W.; Atac, Robert

    2012-06-01

    Gentex Corporation, under contract to Naval Air Systems Command (AIR 4.0T), designed the Advanced Helmet Vision System to provide aircrew with 24-hour, visor-projected binocular night vision and HMD capability. AHVS integrates numerous key technologies, including high brightness Light Emitting Diode (LED)-based digital light engines, advanced lightweight optical materials and manufacturing processes, and innovations in graphics processing software. This paper reviews the current status of miniaturization and integration with the latest two-part Gentex modular helmet, highlights the lessons learned from previous AHVS phases, and discusses plans for qualification and flight testing.

  18. Flexible Wing Base Micro Aerial Vehicles: Vision-Guided Flight Stability and Autonomy for Micro Air Vehicles

    NASA Technical Reports Server (NTRS)

    Ettinger, Scott M.; Nechyba, Michael C.; Ifju, Peter G.; Wazak, Martin

    2002-01-01

    Substantial progress has been made recently toward designing, building, and test-flying remotely piloted Micro Air Vehicles (MAVs). We seek to complement this progress in overcoming the aerodynamic obstacles to flight at very small scales with a vision-based stability and autonomy system. The developed system is based on a robust horizon detection algorithm, which we discuss in greater detail in a companion paper. In this paper, we first motivate the use of computer vision for MAV autonomy, arguing that given current sensor technology, vision may be the only practical approach to the problem. We then briefly review our statistical vision-based horizon detection algorithm, which has been demonstrated at 30 Hz with over 99.9% correct horizon identification. Next, we develop robust schemes for the detection of extreme MAV attitudes, where no horizon is visible, and for the detection of horizon estimation errors due to external factors such as video transmission noise. Finally, we discuss our feedback controller for self-stabilized flight, and report results on vision-guided autonomous flights exceeding ten minutes in duration.
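
    Statistical horizon detection of this kind can be sketched as an exhaustive search over candidate lines, keeping the line whose two half-planes are most internally uniform. The paper optimizes a covariance-based criterion over full-color video; plain intensity variance stands in below, and the grid resolution, function name, and test image are invented:

```python
import numpy as np

def detect_horizon(img, n_angles=16, n_offsets=16):
    """Score every candidate line y = m*x + b on a coarse grid and return
    the (m, b) whose sky/ground split minimizes the summed within-class
    intensity variance (a simplified statistical horizon criterion)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    best, best_line = np.inf, None
    for ang in np.linspace(-0.4 * np.pi, 0.4 * np.pi, n_angles):
        m = np.tan(ang)
        for b in np.linspace(0, h - 1, n_offsets):
            above = ys < m * xs + b  # candidate "sky" half-plane
            if above.sum() < 10 or (~above).sum() < 10:
                continue  # degenerate split, skip
            score = img[above].var() + img[~above].var()
            if score < best:
                best, best_line = score, (m, b)
    return best_line

# toy frame: bright sky over dark ground, split at row 12
img = np.vstack([np.full((12, 32), 200.0), np.full((20, 32), 30.0)])
m, b = detect_horizon(img)
```

    The exhaustive grid keeps the sketch simple; the real-time 30 Hz system would use a coarse-to-fine search and the richer color statistics described in the companion paper.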

  19. Research into the Architecture of CAD Based Robot Vision Systems

    DTIC Science & Technology

    1988-02-09

    Vision '86 and "Automatic Generation of Recognition Features for Computer Vision," Mudge, Turney and Volz, published in Robotica (1987). All of the...Occluded Parts," (T.N. Mudge, J.L. Turney, and R.A. Volz), Robotica, vol. 5, 1987, pp. 117-127. 5. "Vision Algorithms for Hypercube Machines," (T.N. Mudge

  20. Using advanced computer vision algorithms on small mobile robots

    NASA Astrophysics Data System (ADS)

    Kogut, G.; Birchmore, F.; Biagtan Pacis, E.; Everett, H. R.

    2006-05-01

    The Technology Transfer project employs a spiral development process to enhance the functionality and autonomy of mobile robot systems in the Joint Robotics Program (JRP) Robotic Systems Pool by converging existing component technologies onto a transition platform for optimization. An example of this approach is the implementation of advanced computer vision algorithms on small mobile robots. We demonstrate the implementation and testing of the following two algorithms useful on mobile robots: 1) object classification using a boosted Cascade of classifiers trained with the Adaboost training algorithm, and 2) human presence detection from a moving platform. Object classification is performed with an Adaboost training system developed at the University of California, San Diego (UCSD) Computer Vision Lab. This classification algorithm has been used to successfully detect the license plates of automobiles in motion in real-time. While working towards a solution to increase the robustness of this system to perform generic object recognition, this paper demonstrates an extension to this application by detecting soda cans in a cluttered indoor environment. The human presence detection from a moving platform system uses a data fusion algorithm which combines results from a scanning laser and a thermal imager. The system is able to detect the presence of humans while both the humans and the robot are moving simultaneously. In both systems, the two aforementioned algorithms were implemented on embedded hardware and optimized for use in real-time. Test results are shown for a variety of environments.
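
    The laser/thermal data fusion step can be illustrated at the decision level: declare a person only where a leg-like cluster in the laser scan and a warm blob in the thermal image agree in bearing. The gating rule, names, and numbers below are illustrative assumptions; the abstract does not specify the actual fusion algorithm:

```python
def fuse_detections(laser_bearings, thermal_bearings, tol_deg=5.0):
    """Decision-level fusion sketch: pair each laser detection with the
    first thermal detection whose bearing agrees within `tol_deg`, and
    report the averaged bearing of each confirmed pair."""
    matches = []
    for lb in laser_bearings:
        for tb in thermal_bearings:
            if abs(lb - tb) <= tol_deg:
                matches.append((lb + tb) / 2.0)  # confirmed person
                break
    return matches

laser_hits = [10.0, 80.0]     # bearings (deg) of leg-like scan clusters
thermal_hits = [12.0, 200.0]  # bearings (deg) of warm blobs
people = fuse_detections(laser_hits, thermal_hits)
```

    Requiring agreement between two independent modalities is what lets the system keep detecting humans while both the humans and the robot are moving, since neither sensor's false alarms tend to coincide in bearing.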

  1. A prospective comparison of phakic collamer lenses and wavefront-optimized laser-assisted in situ keratomileusis for correction of myopia

    PubMed Central

    Parkhurst, Gregory D

    2016-01-01

    Purpose The aim of this study was to evaluate and compare night vision and low-luminance contrast sensitivity (CS) in patients undergoing implantation of phakic collamer lenses or wavefront-optimized laser-assisted in situ keratomileusis (LASIK). Patients and methods This is a nonrandomized, prospective study, in which 48 military personnel were recruited. Rabin Super Vision Test was used to compare the visual acuity and CS of Visian implantable collamer lens (ICL) and LASIK groups under normal and low light conditions, using a filter for simulated vision through night vision goggles. Results Preoperative mean spherical equivalent was −6.10 D in the ICL group and −6.04 D in the LASIK group (P=0.863). Three months postoperatively, super vision acuity (SVa), super vision acuity with (low-luminance) goggles (SVaG), super vision contrast (SVc), and super vision contrast with (low luminance) goggles (SVcG) significantly improved in the ICL and LASIK groups (P<0.001). Mean improvement in SVaG at 3 months postoperatively was statistically significantly greater in the ICL group than in the LASIK group (mean change [logarithm of the minimum angle of resolution, LogMAR]: ICL =−0.134, LASIK =−0.085; P=0.032). Mean improvements in SVc and SVcG were also statistically significantly greater in the ICL group than in the LASIK group (SVc mean change [logarithm of the CS, LogCS]: ICL =0.356, LASIK =0.209; P=0.018 and SVcG mean change [LogCS]: ICL =0.390, LASIK =0.259; P=0.024). Mean improvement in SVa at 3 months was comparable in both groups (P=0.154). Conclusion Simulated night vision improved with both ICL implantation and wavefront-optimized LASIK, but improvements were significantly greater with ICLs. These differences may be important in a military setting and may also affect satisfaction with civilian vision correction. PMID:27418804

  2. A prospective comparison of phakic collamer lenses and wavefront-optimized laser-assisted in situ keratomileusis for correction of myopia.

    PubMed

    Parkhurst, Gregory D

    2016-01-01

    The aim of this study was to evaluate and compare night vision and low-luminance contrast sensitivity (CS) in patients undergoing implantation of phakic collamer lenses or wavefront-optimized laser-assisted in situ keratomileusis (LASIK). This is a nonrandomized, prospective study, in which 48 military personnel were recruited. Rabin Super Vision Test was used to compare the visual acuity and CS of Visian implantable collamer lens (ICL) and LASIK groups under normal and low light conditions, using a filter for simulated vision through night vision goggles. Preoperative mean spherical equivalent was -6.10 D in the ICL group and -6.04 D in the LASIK group (P=0.863). Three months postoperatively, super vision acuity (SVa), super vision acuity with (low-luminance) goggles (SVaG), super vision contrast (SVc), and super vision contrast with (low luminance) goggles (SVcG) significantly improved in the ICL and LASIK groups (P<0.001). Mean improvement in SVaG at 3 months postoperatively was statistically significantly greater in the ICL group than in the LASIK group (mean change [logarithm of the minimum angle of resolution, LogMAR]: ICL =-0.134, LASIK =-0.085; P=0.032). Mean improvements in SVc and SVcG were also statistically significantly greater in the ICL group than in the LASIK group (SVc mean change [logarithm of the CS, LogCS]: ICL =0.356, LASIK =0.209; P=0.018 and SVcG mean change [LogCS]: ICL =0.390, LASIK =0.259; P=0.024). Mean improvement in SVa at 3 months was comparable in both groups (P=0.154). Simulated night vision improved with both ICL implantation and wavefront-optimized LASIK, but improvements were significantly greater with ICLs. These differences may be important in a military setting and may also affect satisfaction with civilian vision correction.

  3. Vision Algorithms to Determine Shape and Distance for Manipulation of Unmodeled Objects

    NASA Technical Reports Server (NTRS)

    Montes, Leticia; Bowers, David; Lumia, Ron

    1998-01-01

    This paper discusses the development of a robotic system for general use in an unstructured environment. This is illustrated through pick and place of randomly positioned, un-modeled objects. There are many applications for this project, including rock collection for the Mars Surveyor Program. This system is demonstrated with a Puma560 robot, Barrett hand, Cognex vision system, and Cimetrix simulation and control, all running on a PC. The demonstration consists of two processes: vision system and robotics. The vision system determines the size and location of the unknown objects. The robotics part consists of moving the robot to the object, configuring the hand based on the information from the vision system, then performing the pick/place operation. This work enhances and is a part of the Low Cost Virtual Collaborative Environment which provides remote simulation and control of equipment.

  4. In Vivo Fluorescence Imaging and Tracking of Circulating Cells and Therapeutic Nanoparticles

    NASA Astrophysics Data System (ADS)

    Markovic, Stacey

    Noninvasive enumeration of rare circulating cells in small animals is of great importance in many areas of biomedical research, but most existing enumeration techniques involve drawing and enriching blood which is known to be problematic. Recently, small animal "in vivo flow cytometry" (IVFC) techniques have been developed, where cells flowing through small arterioles are counted continuously and noninvasively in vivo. However, higher sensitivity IVFC techniques are needed for studying low-abundance (<100/mL) circulating cells. To this end, we developed a macroscopic fluorescence imaging system and automated computer vision algorithm that allows in vivo detection, enumeration and tracking of circulating fluorescently labeled cells from multiple large blood vessels in the ear of a mouse. This technique ---"computer vision IVFC" (CV-IVFC) --- allows cell detection and enumeration at concentrations of 20 cells/mL. Performance of CV-IVFC was also characterized for low-contrast imaging scenarios, representing conditions of weak cell fluorescent labeling or high background tissue autofluorescence, and showed efficient tracking and enumeration of circulating cells with 50% sensitivity in contrast conditions degraded 2 orders of magnitude compared to in vivo testing supporting the potential utility of CV-IVFC in a range of biological models. Refinement of prior work in our lab of a separate rare-cell detection platform - "diffuse fluorescence flow cytometry" (DFFC) --- implemented a "frequency encoding" scheme by modulating two excitation lasers. Fluorescent light from both lasers can be simultaneously detected and split by frequency allowing for better discrimination of noise, sensitivity, and cell localization. The system design is described in detail and preliminary data is shown. Last, we developed a broad-field transmission fluorescence imaging system to observe nanoparticle (NP) diffusion in bulk biological tissue. 
Novel, implantable NP spacers allow controlled, long-term release of drugs. However, kinetics of NP (drug) diffusion over time is still poorly understood. Our imaging system allowed us to quantify diffusion of free dye and NPs of different sizes in vitro and in vivo. Subsequent analysis verified that there was continuous diffusion which could be controlled based on particle size. Continued use of this imaging system will aid optimization of NP spacers.

  5. Integrated Electro-optical Laser-Beam Scanners

    NASA Technical Reports Server (NTRS)

    Boord, Warren T.

    1990-01-01

    Scanners using solid-state devices are compact, consume little power, and have no moving parts. The integrated electro-optical laser scanner, in conjunction with an external lens, points the outgoing beam of light in any of a number of different directions, depending on the number of upper electrodes. It offers beam-deflection angles larger than those of acousto-optic scanners. It is proposed for such diverse applications as nonimpact laser printing, color imaging, ranging, barcode reading, and robotic vision.

  6. Evidence-based review of diabetic macular edema management: Consensus statement on Indian treatment guidelines

    PubMed Central

    Das, Taraprasad; Aurora, Ajay; Chhablani, Jay; Giridhar, Anantharaman; Kumar, Atul; Raman, Rajiv; Nagpal, Manish; Narayanan, Raja; Natarajan, Sundaram; Ramasamay, Kim; Tyagi, Mudit; Verma, Lalit

    2016-01-01

    The purpose of the study was to review the current evidence and design a diabetic macular edema (DME) management guideline specific to India. The published DME guidelines from different organizations and publications were weighed against practice trends in India, including the recently approved drugs. DME management consists of control of diabetes and other associated systemic conditions, such as hypertension and hyperlipidemia, and specific therapy to reduce macular edema. Macular edema is quantified precisely with optical coherence tomography, and treatment options include retinal laser, intravitreal anti-vascular endothelial growth factor (VEGF) agents, and the implantable dexamethasone device. The specific use of these modalities depends on the presenting vision and the extent of macular involvement. Invariably, eyes with center-involving macular edema benefit from intravitreal anti-VEGF or dexamethasone implant therapy, and eyes with macular edema not involving the macular center benefit from retinal laser. The results are illustrated with adequate case studies and frequently asked questions. This guideline, prepared from the current published evidence, is meant as a guide for treating physicians. PMID:26953019

  7. Using Vision System Technologies to Enable Operational Improvements for Low Visibility Approach and Landing Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Ellis, Kyle K. E.; Bailey, Randall E.; Williams, Steven P.; Severance, Kurt; Le Vie, Lisa R.; Comstock, James R.

    2014-01-01

    Flight deck-based vision systems, such as Synthetic and Enhanced Vision System (SEVS) technologies, have the potential to provide additional margins of safety for aircrew performance and enable the implementation of operational improvements for low visibility surface, arrival, and departure operations in the terminal environment with equivalent efficiency to visual operations. To achieve this potential, research is required for effective technology development and implementation based upon human factors design and regulatory guidance. This research supports the introduction and use of Synthetic Vision Systems and Enhanced Flight Vision Systems (SVS/EFVS) as advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. Twelve air transport-rated crews participated in a motion-base simulation experiment to evaluate the use of SVS/EFVS in NextGen low visibility approach and landing operations. Three monochromatic, collimated head-up display (HUD) concepts (conventional HUD, SVS HUD, and EFVS HUD) and two color head-down primary flight display (PFD) concepts (conventional PFD, SVS PFD) were evaluated in a simulated NextGen Chicago O'Hare terminal environment. Additionally, the instrument approach type (no offset, 3 degree offset, 15 degree offset) was experimentally varied to test the efficacy of the HUD concepts for offset approach operations. The data showed that touchdown landing performance was excellent regardless of SEVS concept or type of offset instrument approach being flown. Subjective assessments of mental workload and situation awareness indicated that making offset approaches in low visibility conditions with an EFVS HUD or SVS HUD may be feasible.

  8. Using parallel evolutionary development for a biologically-inspired computer vision system for mobile robots.

    PubMed

    Wright, Cameron H G; Barrett, Steven F; Pack, Daniel J

    2005-01-01

    We describe a new approach to attacking the problem of robust computer vision for mobile robots. The overall strategy is to mimic the biological evolution of animal vision systems. Our basic imaging sensor is based upon the eye of the common house fly, Musca domestica. The computational algorithms are a mix of traditional image processing, subspace techniques, and multilayer neural networks.

  9. Self calibration of the stereo vision system of the Chang'e-3 lunar rover based on the bundle block adjustment

    NASA Astrophysics Data System (ADS)

    Zhang, Shuo; Liu, Shaochuang; Ma, Youqing; Qi, Chen; Ma, Hao; Yang, Huan

    2017-06-01

The Chang'e-3 was the first lunar soft landing probe of China. It was composed of the lander and the lunar rover. The Chang'e-3 successfully landed in the northwest of the Mare Imbrium on December 14, 2013. After landing, the lunar rover carried out movement, imaging and geological survey tasks. The lunar rover was equipped with a stereo vision system made up of the Navcam system, the mast mechanism and an inertial measurement unit (IMU). The Navcam system was composed of two fixed-focal-length cameras. The mast mechanism was a robot with three revolute joints. The stereo vision system was used to determine the position of the lunar rover, generate digital elevation models (DEM) of the surrounding region and plan the moving paths of the lunar rover. The stereo vision system must be calibrated before use. A control field can be built to calibrate the stereo vision system in a laboratory on the earth. However, the parameters of the stereo vision system change after the launch, orbital maneuvers, braking and landing. Therefore, the stereo vision system should be self-calibrated on the moon. An integrated self-calibration method based on the bundle block adjustment is proposed in this paper. The bundle block adjustment uses each bundle of rays as the basic adjustment unit, and the adjustment is implemented over the whole photogrammetric region. With the proposed method, the stereo vision system can be self-calibrated in the unknown lunar environment and all parameters can be estimated simultaneously. An experiment was conducted in a ground lunar simulation field. The proposed method was compared with other methods such as the CAHVOR method, the vanishing point method, the Denavit-Hartenberg method, the factorization method and the weighted least-squares method. The results showed that the accuracy of the proposed method was superior to those of the other methods. Finally, the proposed method was applied in practice to self-calibrate the stereo vision system of the Chang'e-3 lunar rover on the moon.
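The core idea of bundle-adjustment-based self-calibration, refining camera parameters by minimizing the reprojection error of observed rays, can be sketched in a deliberately reduced form. The sketch below refines only a single parameter (the focal length) by Gauss-Newton; the point coordinates and values are illustrative assumptions, not the Chang'e-3 system's parameters or full parameter set.

```python
# Minimal sketch of the bundle-adjustment idea behind stereo self-calibration:
# refine a camera parameter (here just the focal length f, in pixels) by
# minimizing the squared reprojection error over all observed rays.
# Point coordinates and values are illustrative, not from the Chang'e-3 system.

def project(f, X, Y, Z):
    """Pinhole projection of a camera-frame 3-D point (principal point at 0,0)."""
    return (f * X / Z, f * Y / Z)

def refine_focal(points_3d, points_2d, f0, iters=10):
    """1-D Gauss-Newton: each bundle of rays contributes one residual pair."""
    f = f0
    for _ in range(iters):
        num = den = 0.0
        for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
            ru, rv = u - f * X / Z, v - f * Y / Z   # reprojection residuals
            ju, jv = X / Z, Y / Z                   # d(projection)/df
            num += ju * ru + jv * rv
            den += ju * ju + jv * jv
        f += num / den                              # Gauss-Newton update
    return f

pts3 = [(0.5, 0.2, 2.0), (-0.3, 0.4, 1.5), (0.1, -0.6, 3.0)]
true_f = 800.0
pts2 = [project(true_f, *p) for p in pts3]
print(round(refine_focal(pts3, pts2, f0=600.0), 1))  # recovers 800.0
```

In the full method, the same least-squares machinery runs over all intrinsic and extrinsic parameters simultaneously rather than a single scalar.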

  10. Sustainable and Autonomic Space Exploration Missions

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Sterritt, Roy; Rouff, Christopher; Rash, James L.; Truszkowski, Walter

    2006-01-01

    Visions for future space exploration have long term science missions in sight, resulting in the need for sustainable missions. Survivability is a critical property of sustainable systems and may be addressed through autonomicity, an emerging paradigm for self-management of future computer-based systems based on inspiration from the human autonomic nervous system. This paper examines some of the ongoing research efforts to realize these survivable systems visions, with specific emphasis on developments in Autonomic Policies.

  11. Excimer laser ablation of the cornea

    NASA Astrophysics Data System (ADS)

    Pettit, George H.; Ediger, Marwood N.; Weiblinger, Richard P.

    1995-03-01

Pulsed ultraviolet laser ablation is being extensively investigated clinically to reshape the optical surface of the eye and correct vision defects. Current knowledge of the laser/tissue interaction and the present state of the clinical evaluation are reviewed. In addition, the principal findings of internal Food and Drug Administration research are described in some detail, including a risk assessment of laser-induced fluorescence and measurement of the nonlinear optical properties of the cornea during intense UV irradiation. Finally, a survey is presented of the alternative laser technologies being explored for this ophthalmic application.

  12. Scanning electron microscopy investigation of PMMA removal by laser irradiation (Er:YAG) in comparison with an ultrasonic system and curettage in hip joint revision arthroplasty.

    PubMed

    Birnbaum, Klaus; Gutknecht, Norbert

    2010-07-01

The cement often left in the femur socket during hip joint revision arthroplasty is usually removed by curettage. Another method for removing the cement is to use an ultrasonic system, and yet another alternative may be to use a laser system. The aim of these investigations was to determine the pulse rate and pulse energy of the Er:YAG laser needed for sufficient cement ablation. We also compared the results obtained using the laser with those obtained using an ultrasonic device or curettage by histological and scanning electron microscopy (SEM) investigation of the border zone between the polymethyl methacrylate (PMMA) and unfixed specimens of femoral bone. Therefore we prepared 30 unfixed human femur stems after hip joint replacement and cut ten sagittal sections from each femur stem (300 sections in total). Of these 300 specimens, 180 were treated with the Er:YAG laser, 60 with the ultrasonic system and 60 by curettage. A high pulse energy of 500 mJ and a pulse rate of 4 Hz provided the highest PMMA ablation rate, although the boundary surface between PMMA and femoral bone was not as fine-grained as found in samples treated at 15 Hz and 250 mJ. However, the treatment time for the same cement ablation rate with the latter settings was twice that at 4 Hz and 500 mJ. Compared to the boundary surfaces treated with the ultrasonic device or curettage, the laser-treated samples had a more distinct undifferentiated boundary surface between PMMA and femoral bone. After further development of the Er:YAG laser to provide higher pulse energies, it may in the future be an additional efficient method for the removal of PMMA in revision arthroplasty. The Er:YAG laser should be combined with an endoscopic and a rinsing suction system so that PMMA can be removed from the femoral shaft under direct vision.

  13. Image gathering, coding, and processing: End-to-end optimization for efficient and robust acquisition of visual information

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.

    1990-01-01

    Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.
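The Laplacian pyramid mentioned above is the one concrete coding construct in this record, and its key property, lossless reconstruction from a coarse residue plus detail levels, can be shown in a minimal 1-D sketch. The pair-averaging filter and signal values are illustrative simplifications, not the focal-plane implementation described here.

```python
# A minimal 1-D Laplacian pyramid: each level stores the detail lost by
# smoothing and downsampling, so the signal is reconstructed exactly from
# the low-pass residue plus the detail levels. Filters are illustrative.

def down(x):                      # smooth-and-decimate (pair averaging)
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]

def up(x):                        # nearest-neighbour expand back
    out = []
    for v in x:
        out += [v, v]
    return out

def build(x, levels):
    pyr = []
    for _ in range(levels):
        coarse = down(x)
        pyr.append([a - b for a, b in zip(x, up(coarse))])  # detail (Laplacian)
        x = coarse
    pyr.append(x)                 # low-pass residue
    return pyr

def reconstruct(pyr):
    x = pyr[-1]
    for detail in reversed(pyr[:-1]):
        x = [a + b for a, b in zip(up(x), detail)]
    return x

sig = [3.0, 5.0, 2.0, 8.0, 7.0, 1.0, 4.0, 6.0]
assert reconstruct(build(sig, 2)) == sig   # lossless by construction
```

The detail levels are what a coder would quantize and transmit; the identity detail = x - up(down(x)) guarantees exact recovery when nothing is quantized.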

  14. Evaluation and design of non-lethal laser dazzlers utilizing microcontrollers

    NASA Astrophysics Data System (ADS)

    Richardson, Keith Jack

Current non-lethal weapons suffer from an inability to meet requirements across many fields and purposes; their safety and effectiveness are inadequate. New concepts have produced a weapon that uses lasers to flash-blind a target's visual system. Minimal research and testing have been conducted to investigate the efficiency and safety of these weapons, called laser dazzlers. Essentially, a laser dazzler is comprised of a laser beam that has been diverged with a lens to expand the beam, creating an intensely bright flashlight. All laser dazzlers to date are incapable of adjusting to external conditions automatically. This is important because the power of these weapons needs to change according to distance and light conditions. At long distances the weapon is rendered useless because the laser beam has become too diffuse; at near distances the weapon is too powerful and causes permanent eye damage because the beam is condensed. Similarly, the eye adapts to brightness by adjusting the pupil size, which effectively limits the amount of light entering the eye. Laser eye damage is determined by the level of irradiance entering the eye. Therefore, a laser dazzler needs the ability to adjust its output irradiance to compensate for the distance to the target and the ambient light conditions. It was postulated that if an innovative laser dazzler design could adjust the laser beam divergence, then the irradiance at the eye could be optimized for maximum vision disruption with minimal risk of permanent damage. The young nature of these weapons has led to rushed assumptions about the laser wavelengths (colors) and pulsing frequencies that cause maximum disorientation. Research provided key values of irradiance, wavelength and pulsing frequency, and functions for the optical lens system. In order for the laser dazzler to continuously evaluate the external conditions, luminosity and distance sensors were incorporated into the design.
A control system was devised to operate the mechanical components to meet the calculated values. Testing the conceptual laser dazzlers illustrated the complexities of the system. A set irradiance value could be met at any distance and light condition, although this was accomplished by less-than-ideal methods. The final design included two lasers and only one optical system. The optical system was only capable of providing constant irradiance for one laser or the other, allowing only single-laser operation. For dual-laser operation, the optical system was calibrated to offset the losses of each laser as the distance changed. Ultimately, this provided a constant combined irradiance, with the green irradiance decreasing and the red irradiance increasing as distance increased. Future work should include enhancements to the mechanical components of the laser dazzler to further refine accuracy. This research was intended to provide a proof of concept and did so successfully.
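The distance-compensation argument above can be made concrete with a simplified top-hat beam model: the beam radius grows linearly with divergence and distance, so holding irradiance at the eye constant means solving for the divergence at each measured distance. All numbers below are illustrative assumptions, not values from the tested prototype.

```python
import math

# Simplified top-hat beam model of the divergence-adjustment idea: hold the
# irradiance at the eye constant by choosing the beam divergence for each
# measured distance. Numbers are illustrative, not from the tested prototype.

def irradiance(power_w, r0_m, div_rad, dist_m):
    """Irradiance (W/m^2) of a diverging top-hat beam at distance dist_m."""
    radius = r0_m + div_rad * dist_m          # beam radius grows linearly
    return power_w / (math.pi * radius ** 2)

def divergence_for(power_w, r0_m, dist_m, target_e):
    """Divergence (rad) that puts the target irradiance at the given distance."""
    radius = math.sqrt(power_w / (math.pi * target_e))
    return (radius - r0_m) / dist_m

P, r0, E_target = 0.1, 0.005, 10.0            # 100 mW, 5 mm exit radius
for d in (5.0, 20.0, 50.0):
    theta = divergence_for(P, r0, d, E_target)
    print(d, round(irradiance(P, r0, theta, d), 6))   # 10.0 at every distance
```

A real controller would also fold in the ambient-light reading to pick `E_target`, and clamp the divergence to the range the lens mechanism can reach.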

  15. Handheld pose tracking using vision-inertial sensors with occlusion handling

    NASA Astrophysics Data System (ADS)

    Li, Juan; Slembrouck, Maarten; Deboeverie, Francis; Bernardos, Ana M.; Besada, Juan A.; Veelaert, Peter; Aghajan, Hamid; Casar, José R.; Philips, Wilfried

    2016-07-01

Tracking of a handheld device's three-dimensional (3-D) position and orientation is fundamental to various application domains, including augmented reality (AR), virtual reality, and interaction in smart spaces. Existing systems still offer limited performance in terms of accuracy, robustness, computational cost, and ease of deployment. We present a low-cost, accurate, and robust system for handheld pose tracking using fused vision and inertial data. The integration of measurements from embedded accelerometers reduces the number of unknown parameters in the six-degree-of-freedom pose calculation. The proposed system requires two light-emitting diode (LED) markers to be attached to the device, which are tracked by external cameras through an algorithm robust against illumination changes. Three data fusion methods have been proposed: a triangulation-based stereo-vision system, a constraint-based stereo-vision system with occlusion handling, and a triangulation-based multivision system. Real-time demonstrations of the proposed system applied to AR and 3-D gaming are also included. The accuracy assessment of the proposed system is carried out by comparing with the data generated by the state-of-the-art commercial motion tracking system OptiTrack. Experimental results show that the proposed system achieves high accuracy of a few centimeters in position estimation and a few degrees in orientation estimation.
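The triangulation step underlying the stereo-vision fusion methods can be sketched for the simplest rectified-camera case: a marker seen at horizontal pixel positions uL and uR in two cameras a baseline B apart has depth Z = f·B/(uL − uR). The camera parameters below are illustrative assumptions, not those of the described system.

```python
# A minimal sketch of stereo triangulation for an LED marker: for rectified
# cameras a baseline B apart, depth follows from the pixel disparity.
# Camera parameters are illustrative, not from the described system.

def triangulate(f_px, baseline_m, uL, uR, vL):
    """Back-project a matched pixel pair to a 3-D point in the left camera frame."""
    disparity = uL - uR
    Z = f_px * baseline_m / disparity
    X = uL * Z / f_px
    Y = vL * Z / f_px
    return X, Y, Z

# An LED marker 2 m away, 0.5 m to the right, seen by a 700 px focal-length
# stereo pair with a 0.2 m baseline:
f, B = 700.0, 0.2
X, Y, Z = 0.5, 0.1, 2.0
uL, vL = f * X / Z, f * Y / Z
uR = f * (X - B) / Z
print(triangulate(f, B, uL, uR, vL))   # recovers (0.5, 0.1, 2.0)
```

The constraint-based and multivision variants in the record extend this same geometry with inertial constraints and additional cameras to survive occlusion of one view.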

  16. An assembly system based on industrial robot with binocular stereo vision

    NASA Astrophysics Data System (ADS)

    Tang, Hong; Xiao, Nanfeng

    2017-01-01

This paper proposes an electronic part and component assembly system based on an industrial robot with binocular stereo vision. Firstly, binocular stereo vision with a visual attention mechanism model is used to quickly find the image regions which contain the electronic parts and components. Secondly, a deep neural network is adopted to recognize the features of the electronic parts and components. Thirdly, in order to control the end-effector of the industrial robot to grasp the electronic parts and components, a genetic algorithm (GA) is proposed to compute the transition matrix and the inverse kinematics of the industrial robot (end-effector), which plays a key role in bridging the binocular stereo vision and the industrial robot. Finally, the proposed assembly system is tested in LED component assembly experiments, and the results show that it has high efficiency and good applicability.

  17. Science and Technology Review October/November 2009

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bearinger, J P

    2009-08-21

This month's issue has the following articles: (1) Award-Winning Collaborations Provide Solutions--Commentary by Steven D. Liedle; (2) Light-Speed Spectral Analysis of a Laser Pulse--An optical device inspects and stops potentially damaging laser pulses; (3) Capturing Waveforms in a Quadrillionth of a Second--The femtoscope, a time microscope, improves the temporal resolution and dynamic range of conventional recording instruments; (4) Gamma-Ray Spectroscopy in the Palm of Your Hand--A miniature gamma-ray spectrometer provides increased resolution at a reduced cost; (5) Building Fusion Targets with Precision Robotics--A robotic system assembles tiny fusion targets with nanometer precision; (6) ROSE: Making Compiler Technology More Accessible--An open-source software infrastructure makes powerful compiler techniques available to all programmers; (7) Restoring Sight to the Blind with an Artificial Retina--A retinal prosthesis could restore vision to people suffering from eye diseases; (8) Eradicating the Aftermath of War--A remotely operated system precisely locates buried land mines; (9) Compact Alignment for Diagnostic Laser Beams--A smaller, less expensive device aligns diagnostic laser beams onto targets; and (10) Securing Radiological Sources in Africa--Livermore and other national laboratories are helping African countries secure their nuclear materials.

  18. Endoscopic low-coherence topography measurement for upper airways and hollow samples

    NASA Astrophysics Data System (ADS)

    Delacrétaz, Yves; Shaffer, Etienne; Pavillon, Nicolas; Kühn, Jonas; Lang, Florian; Depeursinge, Christian

    2010-11-01

To evaluate the severity of airway pathologies, quantitative dimensioning of airways is of utmost importance. Endoscopic vision gives a projective image, and thus no true scaling information can be directly deduced from it. In this article, an approach based on an interferometric setup, a low-coherence laser source and a standard rigid endoscope is presented and applied to measurements of hollow samples. More generally, the use of the low-coherence interferometric setup detailed here could be extended to any other endoscopy-related field of interest, e.g., gastroscopy, arthroscopy and other medical or industrial applications where tri-dimensional topology is required. The setup design with a multiple-fiber illumination system is presented. The method's ability to operate on biological samples is demonstrated through measurements on ex vivo pig bronchi.

  19. Precision of FLEET Velocimetry Using High-Speed CMOS Camera Systems

    NASA Technical Reports Server (NTRS)

    Peters, Christopher J.; Danehy, Paul M.; Bathel, Brett F.; Jiang, Naibo; Calvert, Nathan D.; Miles, Richard B.

    2015-01-01

Femtosecond laser electronic excitation tagging (FLEET) is an optical measurement technique that permits quantitative velocimetry of unseeded air or nitrogen using a single laser and a single camera. In this paper, we seek to determine the fundamental precision of the FLEET technique using high-speed complementary metal-oxide semiconductor (CMOS) cameras. Also, we compare the performance of several different high-speed CMOS camera systems for acquiring FLEET velocimetry data in air and nitrogen free-jet flows. The precision was defined as the standard deviation of a set of several hundred single-shot velocity measurements. Methods of enhancing the precision of the measurement were explored, such as digital binning of the signal in adjacent pixels (similar in concept to on-sensor binning, but done in post-processing), in particular row-wise binning, and increasing the time delay between successive exposures. These techniques generally improved precision; binning provided the greatest improvement to the un-intensified camera systems, which had low signal-to-noise ratios. When binning row-wise by 8 pixels (about the thickness of the tagged region) and using an inter-frame delay of 65 microseconds, precisions of 0.5 meters per second in air and 0.2 meters per second in nitrogen were achieved. The camera comparison included a pco.dimax HD, a LaVision Imager scientific CMOS (sCMOS) and a Photron FASTCAM SA-X2, along with a two-stage LaVision HighSpeed IRO intensifier. Excluding the LaVision Imager sCMOS, the cameras were tested with and without intensification and with both short and long inter-frame delays. Use of intensification and a longer inter-frame delay generally improved precision. Overall, the Photron FASTCAM SA-X2 exhibited the best performance in terms of greatest precision and highest signal-to-noise ratio, primarily because it had the largest pixels.
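The reason row-wise binning tightens the precision can be illustrated with a toy Monte Carlo: averaging N adjacent rows with uncorrelated noise cuts the spread of a single-shot estimate by roughly sqrt(N). The noise model and numbers below are illustrative assumptions, not the paper's data.

```python
import random
import statistics

# A sketch of why row-wise binning sharpens FLEET precision: averaging the
# signal over N adjacent rows cuts uncorrelated noise by roughly sqrt(N),
# tightening the single-shot velocity spread. Values are illustrative.

random.seed(1)

def single_shot(noise, nbins):
    """One displacement estimate: mean of nbins noisy row-wise centroids."""
    rows = [10.0 + random.gauss(0.0, noise) for _ in range(nbins)]
    return sum(rows) / nbins

def precision(noise, nbins, shots=2000):
    """Precision, as defined in the record: std. dev. of repeated estimates."""
    return statistics.stdev(single_shot(noise, nbins) for _ in range(shots))

p1 = precision(noise=0.8, nbins=1)
p8 = precision(noise=0.8, nbins=8)   # binning 8 rows, as in the paper
print(round(p1 / p8, 1))             # close to sqrt(8) ~ 2.8
```

In the real data the gain is smaller than sqrt(N) once read noise is no longer the dominant error source, which matches the observation that binning helped the un-intensified (low-SNR) cameras most.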

  20. Laser thermal shock and fatigue testing system

    NASA Astrophysics Data System (ADS)

    Fantini, Vincenzo; Serri, Laura; Bianchi, P.

    1997-08-01

Thermal fatigue testing consists in repeatedly cycling the temperature of a specimen under test, without any other constraint, and stopping the test when predefined damage appears. The result is a lifetime in terms of number of cycles. The parameters of the thermal cycle are the following: minimum and maximum temperature, heating time, cooling time and dwell time at high or low temperature. When the temperature jump is very large and fast, thermal shock phenomena can be induced. Among the numerous techniques used to perform these tests, laser thermal fatigue cycling is very effective when fast heating of small, localized zones is required. That is the case for tests performed to compare new and repaired blades of turbogas machines or components of combustion chambers of power plants. In order to perform these tests, a thermal fatigue system based on a 1 kW Nd:YAG laser as the heating source has been developed. The diameter of the heated zone of the specimen irradiated by the laser is in the range 0.5-20 mm. The temperature can be chosen between 200 °C and 1500 °C, and the piece can be held at high and/or low temperature for 0 s to 300 s. Temperatures are measured by two sensors: a pyrometer for the high range (550-1500 °C) and a contactless thermocouple for the low range (200-550 °C). Two different gases can be blown onto the specimen, either into the irradiated spot or onto the sample backside, to speed up the cooling phase. A PC-based control unit with specially developed software performs PID control of the temperature cycle by fast laser power modulation. A high-resolution vision system of suitable magnification is connected to the control unit to detect surface damage on the specimen, allowing real-time monitoring of the tested zone as well as recording and reviewing of images of the sample during the test. Preliminary thermal fatigue tests on flat specimens of INCONEL 738 and HAYNES 230 are presented. 
IN738 samples, laser cladded with powder of the same material to simulate the refurbishing of a damaged turbine blade after long-term operation, are compared to the parent material. Lifetimes decrease as the high temperature of the cycle is increased, and shorter lifetimes were found for the repaired pieces. Laser and TIG welds on HY230 specimens are compared to the parent material. Parent and repaired samples show no evidence of cracks after 1500 thermal cycles between 650 and 1000 °C.
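The PID temperature loop described above can be sketched schematically: laser power is modulated each control step to track the programmed temperature cycle. The first-order thermal model, gains, and cycle below are illustrative assumptions, not parameters of the actual Nd:YAG test rig.

```python
# A schematic sketch of PID control of a laser-heated specimen's temperature.
# The first-order thermal plant and all gains are illustrative assumptions,
# not parameters of the actual Nd:YAG thermal fatigue system.

def run_cycle(setpoints, kp=4.0, ki=0.5, kd=0.2, dt=0.1):
    temp, integ, prev_err = 20.0, 0.0, 0.0   # start at room temperature
    history = []
    for sp in setpoints:
        err = sp - temp
        deriv = (err - prev_err) / dt
        power = kp * err + ki * integ + kd * deriv
        if 0.0 <= power <= 1000.0:
            integ += err * dt                # anti-windup: only when unsaturated
        power = max(0.0, min(1000.0, power)) # laser power clipped to 0-1000 W
        # first-order plant: heating from absorbed power, Newtonian cooling
        temp += dt * (0.5 * power - 0.2 * (temp - 20.0))
        prev_err = err
        history.append(temp)
    return history

# hold 1000 C, then drop to 650 C (cf. the 650-1000 C cycles in the record)
cycle = [1000.0] * 400 + [650.0] * 400
out = run_cycle(cycle)
print(round(out[399]), round(out[799]))      # settles near 1000, then near 650
```

The real controller additionally switches between the pyrometer and the thermocouple depending on the temperature range, and uses gas cooling to accelerate the downward ramp.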

  1. New Trends in Robotics for Agriculture: Integration and Assessment of a Real Fleet of Robots

    PubMed Central

    Gonzalez-de-Soto, Mariano; Pajares, Gonzalo

    2014-01-01

    Computer-based sensors and actuators such as global positioning systems, machine vision, and laser-based sensors have progressively been incorporated into mobile robots with the aim of configuring autonomous systems capable of shifting operator activities in agricultural tasks. However, the incorporation of many electronic systems into a robot impairs its reliability and increases its cost. Hardware minimization, as well as software minimization and ease of integration, is essential to obtain feasible robotic systems. A step forward in the application of automatic equipment in agriculture is the use of fleets of robots, in which a number of specialized robots collaborate to accomplish one or several agricultural tasks. This paper strives to develop a system architecture for both individual robots and robots working in fleets to improve reliability, decrease complexity and costs, and permit the integration of software from different developers. Several solutions are studied, from a fully distributed to a whole integrated architecture in which a central computer runs all processes. This work also studies diverse topologies for controlling fleets of robots and advances other prospective topologies. The architecture presented in this paper is being successfully applied in the RHEA fleet, which comprises three ground mobile units based on a commercial tractor chassis. PMID:25143976

  2. New trends in robotics for agriculture: integration and assessment of a real fleet of robots.

    PubMed

    Emmi, Luis; Gonzalez-de-Soto, Mariano; Pajares, Gonzalo; Gonzalez-de-Santos, Pablo

    2014-01-01

    Computer-based sensors and actuators such as global positioning systems, machine vision, and laser-based sensors have progressively been incorporated into mobile robots with the aim of configuring autonomous systems capable of shifting operator activities in agricultural tasks. However, the incorporation of many electronic systems into a robot impairs its reliability and increases its cost. Hardware minimization, as well as software minimization and ease of integration, is essential to obtain feasible robotic systems. A step forward in the application of automatic equipment in agriculture is the use of fleets of robots, in which a number of specialized robots collaborate to accomplish one or several agricultural tasks. This paper strives to develop a system architecture for both individual robots and robots working in fleets to improve reliability, decrease complexity and costs, and permit the integration of software from different developers. Several solutions are studied, from a fully distributed to a whole integrated architecture in which a central computer runs all processes. This work also studies diverse topologies for controlling fleets of robots and advances other prospective topologies. The architecture presented in this paper is being successfully applied in the RHEA fleet, which comprises three ground mobile units based on a commercial tractor chassis.

  3. Monitoring system of multiple fire fighting based on computer vision

    NASA Astrophysics Data System (ADS)

    Li, Jinlong; Wang, Li; Gao, Xiaorong; Wang, Zeyong; Zhao, Quanke

    2010-10-01

With the high demand for fire control in spacious buildings, computer vision is playing a more and more important role. This paper presents a new monitoring system for multiple fire fighting based on computer vision and color detection. This system can locate the fire position and then extinguish the fire by itself. In this paper, the system structure, working principle, fire orientation, hydrant angle adjustment and system calibration are described in detail; the design of the relevant hardware and software is also introduced. At the same time, the principle and process of color detection and image processing are given as well. The system ran well in tests, and it has high reliability, low cost, and easy node expansion, which gives it a bright prospect for application and popularization.
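The color-detection step this record relies on can be sketched as a hue/saturation/value threshold followed by a centroid, which gives the fire position used to aim the hydrant. The thresholds and the tiny test frame are illustrative assumptions, not the paper's calibrated values.

```python
import colorsys

# A minimal sketch of fire detection by color: flag pixels whose hue falls in
# the red-orange band with high saturation and brightness, then take the
# centroid of flagged pixels as the fire position. Thresholds are illustrative.

def is_fire_pixel(r, g, b):
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return (h < 0.17 or h > 0.95) and s > 0.5 and v > 0.6   # red-orange, bright

def fire_centroid(image):
    """image: 2-D list of (r, g, b) tuples; returns (row, col) or None."""
    hits = [(y, x) for y, row in enumerate(image)
            for x, px in enumerate(row) if is_fire_pixel(*px)]
    if not hits:
        return None
    return (sum(y for y, _ in hits) / len(hits),
            sum(x for _, x in hits) / len(hits))

grey, flame = (90, 90, 95), (250, 120, 20)
frame = [[grey] * 5 for _ in range(5)]
frame[2][3] = flame                 # one flame-colored pixel
print(fire_centroid(frame))         # -> (2.0, 3.0)
```

With the camera calibrated against the building geometry, the pixel centroid maps to the pan/tilt angles of the hydrant.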

  4. Corneal perforation during Nd:YAG laser capsulotomy: a case report.

    PubMed

    Türkcü, Fatih Mehmet; Yüksel, Harun; Cingü, Kürşat; Cınar, Yasin; Murat, Mehmet; Caça, Ihsan

    2013-02-01

    We report a case where corneal perforation developed during Nd:YAG laser capsulotomy. We present a 20-year-old male with the complaint of impaired vision in the right eye. Leukoma consistent with the incision line in the cornea and opacity in the posterior capsule were observed.

  5. High power visible diode laser for the treatment of eye diseases by laser coagulation

    NASA Astrophysics Data System (ADS)

    Heinrich, Arne; Hagen, Clemens; Harlander, Maximilian; Nussbaumer, Bernhard

    2015-03-01

We present a high power visible diode laser enabling low-cost treatment of eye diseases by laser coagulation, including the two leading causes of blindness worldwide (diabetic retinopathy, age-related macular degeneration) as well as retinopathy of prematurely born children, intraocular tumors and retinal detachment. Laser coagulation requires the exposure of the eye to visible laser light and relies on the high absorption of the retina. The need for treatment is constantly increasing, due to the demographic trend, the increasing average life expectancy and medical care demand in developing countries. The World Health Organization reacts to this demand with global programs like VISION 2020 "The right to sight" and the subsequent Universal Eye Health Global Action Plan (2014-2019). One major point is to motivate companies and research institutes to make eye treatment cheaper and more easily accessible. It is therefore essential to provide the ophthalmology market with cost-competitive, simple and reliable technologies. Our laser is based on direct second harmonic generation of the light emitted from a tapered laser diode and has already shown reliable optical performance. All components are produced in wafer-scale processes, and the resulting strong economy of scale results in a price-competitive laser. In a broader perspective, the technology behind our laser has huge potential in non-medical applications like welding, cutting, marking and laser-illuminated projection.

  6. Robust object tracking techniques for vision-based 3D motion analysis applications

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.

    2016-04-01

Automated and accurate capture of the spatial motion of an object is necessary for a wide variety of applications in industry and science, virtual reality and film, medicine and sports. For most applications, the reliability and accuracy of the obtained data, as well as convenience for the user, are the main characteristics defining the quality of a motion capture system. Among the existing systems for 3D data acquisition, based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems have a set of advantages such as high acquisition speed, potential for high accuracy, and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capturing process. To provide high accuracy of the obtained spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes from two to four technical vision cameras for capturing video sequences of object motion. Original camera calibration and external orientation procedures provide the basis for highly accurate 3D measurements. A set of algorithms, both for detecting, identifying and tracking similar targets and for marker-less object motion capture, has been developed and tested. The evaluation results show high robustness and high reliability for various motion analysis tasks in technical and biomechanics applications.

  7. Two-dimensional (2D) displacement measurement of moving objects using a new MEMS binocular vision system

    NASA Astrophysics Data System (ADS)

    Di, Si; Lin, Hui; Du, Ruxu

    2011-05-01

Displacement measurement of moving objects is one of the most important issues in the field of computer vision. This paper introduces a new binocular vision system (BVS) based on micro-electro-mechanical system (MEMS) technology. The eyes of the system are two microlenses fabricated on a substrate by MEMS technology. The imaging results of the two microlenses are collected by one complementary metal-oxide-semiconductor (CMOS) array. An algorithm is developed for computing the displacement. Experimental results show that as long as the object is moving in two-dimensional (2D) space, the system can effectively estimate the 2D displacement without camera calibration. It is also shown that the average error of the displacement measurement is about 3.5% at object distances ranging from 10 cm to 35 cm. Because of its low cost, small size and simple setup, this new method is particularly suitable for 2D displacement measurement applications such as vision-based electronics assembly and biomedical cell culture.

  8. Gait disorder rehabilitation using vision and non-vision based sensors: A systematic review

    PubMed Central

    Ali, Asraf; Sundaraj, Kenneth; Ahmad, Badlishah; Ahamed, Nizam; Islam, Anamul

    2012-01-01

Even though the number of rehabilitation guidelines has never been greater, uncertainty continues to arise regarding the efficiency and effectiveness of the rehabilitation of gait disorders. This question has been hindered by the lack of accurate measurements of gait disorders. Thus, this article reviews rehabilitation systems for gait disorders using vision-based and non-vision-based sensor technologies, as well as combinations of these. All papers published in English between 1990 and June 2012 that had the phrases “gait disorder”, “rehabilitation”, “vision sensor”, or “non vision sensor” in the title, abstract, or keywords were identified from the SpringerLink, ELSEVIER, PubMed, and IEEE databases. Some synonyms of these phrases and the logical words “and”, “or”, and “not” were also used in the article search procedure. Of the 91 published articles found, this review identified 84 that described the rehabilitation of gait disorders using different types of sensor technologies. This literature set presented strong evidence for the development of rehabilitation systems using markerless vision-based sensor technology. We therefore believe that the information contained in this review will assist the progress of the development of rehabilitation systems for human gait disorders. PMID:22938548

  9. Optical Extinction Measurements of Dust Density in the GMRO Regolith Test Bin

    NASA Technical Reports Server (NTRS)

    Lane, J.; Mantovani, J.; Mueller, R.; Nugent, M.; Nick, A.; Schuler, J.; Townsend, I.

    2016-01-01

    A regolith simulant test bin was constructed and completed in the Granular Mechanics and Regolith Operations (GMRO) Lab in 2013. This Planetary Regolith Test Bed (PRTB) is a 64 sq m x 1 m deep test bin, housed in a climate-controlled facility, and contains 120 MT of lunar-regolith simulant, called Black Point-1 or BP-1, from Black Point, AZ. One of the current uses of the test bin is to study the effects of difficult lighting and dust conditions on telerobotic perception systems to better assess and refine regolith operations for asteroid, Mars and polar lunar missions. Low illumination and low angle of incidence lighting pose significant problems to computer vision and human perception. Levitated dust on asteroids interferes with imaging and degrades depth perception, and dust storms on Mars pose a significant problem. Due to these factors, the likely performance of telerobotics is poorly understood for future missions; current space telerobotic systems are only operated in bright lighting and dust-free conditions. This technology development testing will identify: (1) the impact of degraded lighting and environmental dust on computer vision and operator perception, (2) potential methods and procedures for mitigating these impacts, and (3) requirements for telerobotic perception systems for asteroid capture, Mars dust storms and lunar regolith ISRU missions. In order to solve some of these telerobotic perception problems, a plume erosion sensor (PES) was developed in the Lunar Regolith Simulant Bin (LRSB), containing 2 MT of JSC-1a lunar simulant. The PES is simply a laser and digital camera with a white target. Two modes of operation have been investigated: (1) single laser spot, in which the brightness of the spot depends on the optical extinction due to dust and is thus an indirect measure of particle number density, and (2) side-scatter, in which the camera images the laser from the side, showing the beam's entrance into the dust cloud and the boundary between dust and void. Both methods must assume a mean particle size in order to extract a number density. The optical extinction measurement yields the product of the 2nd moment of the particle size distribution and the extinction efficiency Qe. For particle sizes in the range of interest (greater than 1 micrometer), Qe is approximately equal to 2. Scaling up of the PES single-laser-and-camera system is underway in the PRTB, where an array of lasers penetrates a controlled dust cloud, illuminating multiple targets. Using high-speed HD GoPro video cameras, the evolution of the dust cloud and particle size density can be studied in detail.
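
    The single-spot extinction mode reduces to the Beer-Lambert relation: the transmitted fraction I/I0 = exp(-n Qe pi r^2 L) can be inverted for the number density n once a mean particle radius is assumed. A minimal sketch, with the Qe = 2 approximation quoted above:

```python
import math

def number_density(i_ratio, path_m, radius_m, q_ext=2.0):
    """Infer dust particle number density (particles/m^3) from the measured
    laser spot attenuation I/I0, assuming a mean particle radius and an
    extinction efficiency Qe ~ 2 (particles larger than ~1 micrometer)."""
    sigma_ext = q_ext * math.pi * radius_m ** 2   # extinction cross-section per particle
    return -math.log(i_ratio) / (sigma_ext * path_m)
```

    The assumed mean radius dominates the uncertainty, since the cross-section scales with r squared, which is why the abstract stresses that a mean particle size must be assumed.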

  10. Non-Contact Measurement Using A Laser Scanning Probe

    NASA Astrophysics Data System (ADS)

    Modjarrad, Amir

    1989-03-01

    Traditional high accuracy touch-trigger probing can now be complemented by high speed, non-contact, profile scanning to give another "dimension" to the three-dimensional Co-ordinate Measuring Machines (CMMs). Some of the features of a specially developed laser scanning probe together with the trade-offs involved in the design of inspection systems that use triangulation are examined. Applications of such a laser probe on CMMs are numerous since high speed scanning allows inspection of many different components and surfaces. For example, car body panels, tyre moulds, aircraft wing skins, turbine blades, wax and clay models, plastics, etc. Other applications include in-process surveillance in manufacturing and food processing, robotics vision and many others. Some of these applications are discussed and practical examples, case studies and experimental results are given with particular reference to use on CMMs. In conclusion, future developments and market trends in high speed non-contact measurement are discussed.

  11. Appendix B: Rapid development approaches for system engineering and design

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Conventional processes often produce systems which are obsolete before they are fielded. This paper explores some of the reasons for this, and provides a vision of how we can do better. This vision is based on our explorations in improved processes and system/software engineering tools.

  12. A trunk ranging system based on binocular stereo vision

    NASA Astrophysics Data System (ADS)

    Zhao, Xixuan; Kan, Jiangming

    2017-07-01

    Trunk ranging is an essential function for autonomous forestry robots. Traditional trunk ranging systems based on personal computers are not convenient in practical application. This paper examines the implementation of a trunk ranging system based on binocular vision theory via TI's DaVinci DM37x system. The system is smaller and more reliable than that implemented using a personal computer. It calculates the three-dimensional information from the images acquired by binocular cameras, producing the targeting and ranging results. The experimental results show that the measurement error is small and the system design is feasible for autonomous forestry robots.
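
    The core of any binocular ranging step is triangulation on a rectified pair, Z = fB/d. A minimal sketch (the parameter values in the test are illustrative, not from the paper):

```python
def trunk_distance(focal_px, baseline_m, disparity_px):
    """Triangulated distance Z = f * B / d for a rectified stereo pair.
    focal_px: focal length in pixels; baseline_m: camera separation in
    meters; disparity_px: horizontal pixel offset of the trunk between
    the left and right views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite range")
    return focal_px * baseline_m / disparity_px
```

    Note the inverse relation: range resolution degrades quadratically with distance, which is why small measurement error at close range is the realistic claim.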

  13. Machine vision system for inspecting characteristics of hybrid rice seed

    NASA Astrophysics Data System (ADS)

    Cheng, Fang; Ying, Yibin

    2004-03-01

    Obtaining clear images, which helps improve classification accuracy, involves many factors; light source, lens extender and background were discussed in this paper. The analysis of rice seed reflectance curves showed that the wavelength of the light source for discriminating diseased seeds from normal rice seeds in the monochromic image recognition mode was about 815 nm for jinyou402 and shanyou10. To determine optimal conditions for acquiring digital images of rice seed using a computer vision system, an adjustable color machine vision system was developed. The machine vision system with a 20 mm to 25 mm lens extender produced close-up images, which made it easy to recognize characteristics of hybrid rice seeds. A white background proved better than a black background for inspecting rice seeds infected by disease when using shape-based algorithms. Experimental results indicated good classification for most of the characteristics with the machine vision system. The same algorithm yielded better results under the optimized conditions for quality inspection of rice seed. Specifically, the image processing can resolve details such as fine fissures with the machine vision system.

  14. Two-micron (Thulium) Laser Prostatectomy: An Effective Method for BPH Treatment.

    PubMed

    Jiang, Qi; Xia, Shujie

    2014-01-01

    The two-micron (thulium) laser is the newest laser technique for treatment of bladder outlet obstruction resulting from benign prostatic hyperplasia (BPH). It takes less operative time than standard techniques and provides clear vision, lower blood loss, and shorter catheterization and hospitalization times. It has been shown to be a safe and efficient method for BPH treatment regardless of prostate size.

  15. Remote media vision-based computer input device

    NASA Astrophysics Data System (ADS)

    Arabnia, Hamid R.; Chen, Ching-Yi

    1991-11-01

    In this paper, we introduce a vision-based computer input device which has been built at the University of Georgia. The user of this system gives commands to the computer without touching any physical device. The system receives input through a CCD camera; it is PC-based and is built on top of the DOS operating system. The major components of the input device are: a monitor, an image capturing board, a CCD camera, and some software (developed by us). These are interfaced with a standard PC running under the DOS operating system.

  16. An architecture for real-time vision processing

    NASA Technical Reports Server (NTRS)

    Chien, Chiun-Hong

    1994-01-01

    To study the feasibility of developing an architecture for real time vision processing, a task queue server and parallel algorithms for two vision operations were designed and implemented on an i860-based Mercury Computing System 860VS array processor. The proposed architecture treats each vision function as a task or set of tasks which may be recursively divided into subtasks and processed by multiple processors coordinated by a task queue server accessible by all processors. Each idle processor subsequently fetches a task and associated data from the task queue server for processing and posts the result to shared memory for later use. Load balancing can be carried out within the processing system without the requirement for a centralized controller. The author concludes that real time vision processing cannot be achieved without both sequential and parallel vision algorithms and a good parallel vision architecture.
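
    The task-queue-server idea described above can be sketched with a shared queue and worker threads; this is an illustrative model of the load-balancing scheme, not the 860VS implementation:

```python
import queue
import threading

def run_task_queue(tasks, worker_fn, n_workers=4):
    """Idle workers pull (task_id, data) items from a shared queue and
    post results to shared storage, so load balancing happens naturally
    with no centralized controller."""
    task_q = queue.Queue()
    results = {}
    lock = threading.Lock()

    for item in tasks:
        task_q.put(item)

    def worker():
        while True:
            try:
                task_id, data = task_q.get_nowait()
            except queue.Empty:
                return            # no tasks left: worker exits
            out = worker_fn(data)
            with lock:
                results[task_id] = out   # post result to shared memory
            task_q.task_done()

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

    A vision function would enqueue its subtasks (e.g., image tiles) and collect the per-tile results from the shared dictionary, mirroring the recursive task decomposition described in the abstract.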

  17. Updated laser safety & hazard analysis for the ARES laser system based on the 2007 ANSI Z136.1 standard.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Augustoni, Arnold L.

    A laser safety and hazard analysis was performed for the temperature stabilized Big Sky Laser Technology (BSLT) laser central to the ARES system based on the 2007 version of the American National Standards Institute (ANSI) Standard Z136.1, Safe Use of Lasers, and the 2005 version of the ANSI Standard Z136.6, Safe Use of Lasers Outdoors. The ARES laser system is a van/truck-based mobile platform, which is used to perform laser interaction experiments and tests at various national test sites.

  18. Optimization of Stereo Matching in 3D Reconstruction Based on Binocular Vision

    NASA Astrophysics Data System (ADS)

    Gai, Qiyang

    2018-01-01

    Stereo matching is one of the key steps of 3D reconstruction based on binocular vision. In order to improve the convergence speed and accuracy of 3D reconstruction based on binocular vision, this paper adopts a combination of the epipolar constraint and an ant colony algorithm. The epipolar line constraint is used to reduce the search range, and an ant colony algorithm then optimizes the stereo matching feature search function within that reduced range. Through the establishment of a stereo matching optimization process analysis model for the ant colony algorithm, a globally optimized solution of stereo matching in 3D reconstruction based on a binocular vision system is realized. The simulation results show that by combining the advantages of the epipolar constraint and the ant colony algorithm, the stereo matching range of 3D reconstruction based on binocular vision is simplified, and the convergence speed and accuracy of the stereo matching process are improved.
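
    The epipolar constraint that shrinks the search range can be written as l' = Fx: a candidate match in the right image is only considered if it lies near this line. A minimal sketch (the fundamental matrix F used in the test corresponds to an ideal rectified pair, purely for illustration):

```python
import numpy as np

def epipolar_line(F, x_left):
    """Epipolar line l' = F x in the right image for a left-image point
    x_left = (u, v). Normalized so that |a*u + b*v + c| is the pixel
    distance from a point to the line."""
    l = F @ np.append(x_left, 1.0)
    return l / np.linalg.norm(l[:2])

def distance_to_line(l, x_right):
    """Pixel distance of a right-image point from a normalized line."""
    return abs(l @ np.append(x_right, 1.0))
```

    Restricting the ant-colony search to points with a small distance_to_line value is what turns a 2D search over the whole image into a 1D search along a band.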

  19. Image understanding systems based on the unifying representation of perceptual and conceptual information and the solution of mid-level and high-level vision problems

    NASA Astrophysics Data System (ADS)

    Kuvychko, Igor

    2001-10-01

    Vision is a part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. A computer vision system based on such principles requires a unifying representation of perceptual and conceptual information. Computer simulation models are built on the basis of graphs/networks. The human brain is found to be able to emulate similar graph/network models. That implies a very important paradigm shift in our knowledge about the brain, from neural networks to cortical software. Starting from the primary visual areas, the brain analyzes an image as a graph-type spatial structure. Primary areas provide active fusion of image features on a spatial grid-like structure, where nodes are cortical columns. The spatial combination of different neighboring features cannot be described as a statistical/integral characteristic of the analyzed region, but uniquely characterizes the region itself. Spatial logic and topology are naturally present in such structures. Mid-level vision processes such as clustering, perceptual grouping, multilevel hierarchical compression, and separation of figure from ground are special kinds of graph/network transformations. They convert the low-level image structure into a set of more abstract ones, which represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena such as shape from shading and occlusion are results of such analysis. This approach gives the opportunity not only to explain frequently unexplainable results in cognitive science, but also to create intelligent computer vision systems that simulate perceptual processes in both the 'what' and 'where' visual pathways. Such systems can open new horizons for the robotic and computer vision industries.

  20. Study on Dynamic Development of Three-dimensional Weld Pool Surface in Stationary GTAW

    NASA Astrophysics Data System (ADS)

    Huang, Jiankang; He, Jing; He, Xiaoying; Shi, Yu; Fan, Ding

    2018-04-01

    The weld pool contains abundant information about the welding process. In particular, the type of the weld pool surface shape, i.e., convex or concave, is determined by the weld penetration. To detect it, an innovative laser-vision-based sensing method is employed to observe the weld pool surface in gas tungsten arc welding (GTAW). A low-power laser dot pattern is projected onto the entire weld pool surface; its reflection is intercepted by a screen and captured by a camera. The dynamic development of the weld pool surface can then be detected. By observation and analysis, the change of the reflected laser dot pattern was found to correlate closely with the shape of the weld pool surface and thus with the weld penetration during the welding process. A mathematical model was proposed to relate the incident ray, reflected ray, screen and weld pool surface based on structured-laser specular reflection. The dynamic variation of the weld pool surface and its corresponding laser dot pattern were simulated and analyzed. By combining the experimental data and the mathematical analysis, the results show that the reflected laser dot pattern is closely correlated with the development of the weld pool, such as the weld penetration. The concavity of the pool surface was found to increase rapidly after the surface shape changed from convex to concave during the stationary GTAW process.

  1. Research on an autonomous vision-guided helicopter

    NASA Technical Reports Server (NTRS)

    Amidi, Omead; Mesaki, Yuji; Kanade, Takeo

    1994-01-01

    Integration of computer vision with on-board sensors to autonomously fly helicopters was researched. The key components developed were custom designed vision processing hardware and an indoor testbed. The custom designed hardware provided flexible integration of on-board sensors with real-time image processing resulting in a significant improvement in vision-based state estimation. The indoor testbed provided convenient calibrated experimentation in constructing real autonomous systems.

  2. Image registration algorithm for high-voltage electric power live line working robot based on binocular vision

    NASA Astrophysics Data System (ADS)

    Li, Chengqi; Ren, Zhigang; Yang, Bo; An, Qinghao; Yu, Xiangru; Li, Jinping

    2017-12-01

    In the process of dismounting and assembling the drop switch with the high-voltage electric power live line working (EPL2W) robot, one of the key problems is the positioning precision of the manipulators, gripper and the bolts used to fix the drop switch. To solve it, we study the theory of the robot's binocular vision system and the characteristics of dismounting and assembling the drop switch. We propose a coarse-to-fine image registration algorithm based on image correlation, which can significantly improve the positioning precision of the manipulators and bolts. The algorithm performs the following three steps. Firstly, the target points are marked in the right and left views, and the system judges whether the target point in the right view satisfies the lowest registration accuracy by using the similarity of the target points' backgrounds in the right and left views; this is a typical coarse-to-fine strategy. Secondly, the system calculates the epipolar line, and a sequence of regions containing candidate matching points is generated from the neighborhood of the epipolar line; the optimal matching image is confirmed by calculating the similarity between the template image in the left view and each region in the sequence using correlation matching. Finally, the precise coordinates of the target points in the right and left views are calculated from the optimal matching image. The experimental results indicate that the positioning accuracy in image coordinates is within 2 pixels and the positioning accuracy in the world coordinate system is within 3 mm; the positioning accuracy of the binocular vision thus satisfies the requirements for dismounting and assembling the drop switch.
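
    The correlation matching used to pick the optimal region can be sketched with normalized cross-correlation over a horizontal candidate strip; this is an illustrative stand-in for the paper's regional-sequence search, not its exact procedure:

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation of two equal-size patches: the
    similarity score used to rank candidate matches (1.0 = identical
    up to brightness and contrast)."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def best_match(template, strip):
    """Slide the template across a horizontal strip (e.g. a band around
    the epipolar line) and return the column of the highest-NCC window."""
    h, w = template.shape
    scores = [ncc(template, strip[:, c:c + w])
              for c in range(strip.shape[1] - w + 1)]
    return int(np.argmax(scores))
```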

  3. Night vision goggle stimulation using LCoS and DLP projection technology, which is better?

    NASA Astrophysics Data System (ADS)

    Ali, Masoud H.; Lyon, Paul; De Meerleer, Peter

    2014-06-01

    High fidelity night-vision training has become important for many of the simulation systems being procured today. The end-users of these simulation-training systems prefer using their actual night-vision goggle (NVG) headsets. This requires that the visual display system stimulate the NVGs in a realistic way. Historically NVG stimulation was done with cathode-ray tube (CRT) projectors. However, this technology became obsolete and in recent years training simulators do NVG stimulation with laser, LCoS and DLP projectors. The LCoS and DLP projection technologies have emerged as the preferred approach for the stimulation of NVGs. Both LCoS and DLP technologies have advantages and disadvantages for stimulating NVGs. LCoS projectors can have more than 5-10 times the contrast capability of DLP projectors. The larger the difference between the projected black level and the brightest object in a scene, the better the NVG stimulation effects can be. This is an advantage of LCoS technology, especially when the proper NVG wavelengths are used. Single-chip DLP projectors, even though they have much reduced contrast compared to LCoS projectors, can use LED illuminators in a sequential red-green-blue fashion to create a projected image. It is straightforward to add an extra infrared (NVG wavelength) LED into this sequential chain of LED illumination. The content of this NVG channel can be independent of the visible scene, which allows effects to be added that can compensate for the lack of contrast inherent in a DLP device. This paper will expand on the differences between LCoS and DLP projectors for stimulating NVGs and summarize the benefits of both in night-vision simulation training systems.

  4. Automated Grading of Rough Hardwood Lumber

    Treesearch

    Richard W. Conners; Tai-Hoon Cho; Philip A. Araman

    1989-01-01

    Any automatic hardwood grading system must have two components. The first of these is a computer vision system for locating and identifying defects on rough lumber. The second is a system for automatically grading boards based on the output of the computer vision system. This paper presents research results aimed at developing the first of these components. The...

  5. Homework system development with the intention of supporting Saudi Arabia's vision 2030

    NASA Astrophysics Data System (ADS)

    Elgimari, Atifa; Alshahrani, Shafya; Al-shehri, Amal

    2017-10-01

    This paper suggests a web-based homework system. The suggested homework system can serve students aged 7-11 years. By using the suggested homework system, hard copies of homework are replaced by soft copies, and parents are involved in the education process electronically. It is expected to contribute to realizing Saudi Arabia's Vision 2030, especially in the education sector, which considers primary education its foundation stone, as the success of the Vision depends to a large extent on reforms in the education system that generate a better basis for the employment of young Saudis.

  6. Rapid matching of stereo vision based on fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Zhang, Ruihua; Xiao, Yi; Cao, Jian; Guo, Hongwei

    2016-09-01

    As the most important core part of stereo vision, stereo matching technology still has many problems to solve. For smooth surfaces from which feature points are not easy to extract, this paper adds a projector to the stereo vision measurement system and uses fringe projection techniques: corresponding points are identified by equal phase values extracted from the left and right camera images, realizing rapid stereo matching. The mathematical model of the measurement system is established and the three-dimensional (3D) surface of the measured object is reconstructed. This measurement method can not only broaden the application fields of optical 3D measurement technology and enrich knowledge in the field, but also makes a commercialized measurement system feasible in practical projects, which has important scientific and economic value.
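
    Matching by equal phase reduces, for one image row, to looking up where the right camera's unwrapped phase equals the left pixel's phase. A minimal sketch, assuming the unwrapped phase increases monotonically along the row:

```python
import numpy as np

def match_by_phase(phase_left, phase_right, col_left):
    """For a pixel at column col_left in a left-image row, return the
    subpixel column in the right-image row whose unwrapped fringe phase
    is equal. Assumes phase_right is monotonically increasing."""
    target = phase_left[col_left]
    cols = np.arange(phase_right.size, dtype=float)
    return float(np.interp(target, phase_right, cols))
```

    Because the phase varies continuously across a smooth surface, this lookup succeeds precisely where feature-based matching fails, which is the motivation stated in the abstract.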

  7. Vision and dual IMU integrated attitude measurement system

    NASA Astrophysics Data System (ADS)

    Guo, Xiaoting; Sun, Changku; Wang, Peng; Lu, Huang

    2018-01-01

    To determine the relative attitude between two space objects on a rocking base, an integrated system based on vision and dual IMUs (inertial measurement units) is built. The system fuses the attitude information from vision with the angular measurements of the dual IMUs by an extended Kalman filter (EKF) to obtain the relative attitude. One IMU (master) is attached to the measured moving object and the other (slave) to the rocking base. As the output of an inertial sensor is relative to the inertial frame, the angular rate of the master IMU includes not only the motion of the measured object relative to the inertial frame but also that of the rocking base relative to the inertial frame, where the latter can be seen as redundant, harmful movement information for relative attitude measurement between the measured object and the rocking base. The slave IMU assists in removing the motion of the rocking base relative to the inertial frame from the master IMU's measurement. The proposed integrated attitude measurement system is tested on a practical experimental platform, and experimental results with superior precision and reliability show the feasibility and effectiveness of the proposed attitude measurement system.

  8. Exploring Techniques for Vision Based Human Activity Recognition: Methods, Systems, and Evaluation

    PubMed Central

    Xu, Xin; Tang, Jinshan; Zhang, Xiaolong; Liu, Xiaoming; Zhang, Hong; Qiu, Yimin

    2013-01-01

    With the wide applications of vision based intelligent systems, image and video analysis technologies have attracted the attention of researchers in the computer vision field. In image and video analysis, human activity recognition is an important research direction. By interpreting and understanding human activities, we can recognize and predict the occurrence of crimes and help the police or other agencies react immediately. In the past, a large number of papers have been published on human activity recognition in video and image sequences. In this paper, we provide a comprehensive survey of the recent development of the techniques, including methods, systems, and quantitative evaluation of the performance of human activity recognition. PMID:23353144

  9. Incremental inverse kinematics based vision servo for autonomous robotic capture of non-cooperative space debris

    NASA Astrophysics Data System (ADS)

    Dong, Gangqi; Zhu, Z. H.

    2016-04-01

    This paper proposes a new incremental inverse kinematics based vision servo approach for robotic manipulators to capture a non-cooperative target autonomously. The target's pose and motion are estimated by a vision system using an integrated photogrammetry and extended Kalman filter (EKF) algorithm. Based on the estimated pose and motion of the target, the instantaneous desired position of the end-effector is predicted by inverse kinematics and the robotic manipulator is moved incrementally from its current configuration subject to the joint speed limits. This approach effectively eliminates the multiple solutions in the inverse kinematics and increases the robustness of the control algorithm. The proposed approach is validated by a hardware-in-the-loop simulation, where the pose and motion of the non-cooperative target are estimated by a real vision system. The simulation results demonstrate the effectiveness and robustness of the proposed estimation approach for the target and the incremental control strategy for the robotic manipulator.
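
    One incremental step of such an approach can be sketched as a Jacobian pseudo-inverse update with the joint speeds clipped to their limits. This is a generic sketch, not the paper's controller; the test uses a hypothetical arm whose Jacobian is the identity:

```python
import numpy as np

def incremental_ik_step(q, jacobian, x_current, x_desired, dt, qdot_max):
    """One incremental inverse-kinematics update: drive the end-effector
    toward the predicted target position, clipping each joint speed to
    its limit so the step stays feasible. Repeating small steps from the
    current configuration avoids the multiple-solution ambiguity of a
    full closed-form IK solve."""
    dx = x_desired - x_current                      # task-space error
    qdot = np.linalg.pinv(jacobian) @ (dx / dt)     # least-squares joint rates
    qdot = np.clip(qdot, -qdot_max, qdot_max)       # enforce joint speed limits
    return q + qdot * dt
```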

  10. Design and Development of a High Speed Sorting System Based on Machine Vision Guiding

    NASA Astrophysics Data System (ADS)

    Zhang, Wenchang; Mei, Jiangping; Ding, Yabin

    In this paper, a vision-based control strategy to perform high speed pick-and-place tasks on an automated product line is proposed, and the relevant control software is developed. A Delta robot controls a suction gripper that grasps disordered objects from one moving conveyor and places them on another in order. A CCD camera captures one image every time the conveyor moves a distance ds; object positions and shapes are obtained after image processing. A target tracking method based on "servo motor + synchronous conveyor" is used to perform the high-speed transfer operation in real time. Experiments conducted on the Delta robot sorting system demonstrate the efficiency and validity of the proposed vision-based control strategy.
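
    The "servo motor + synchronous conveyor" tracking idea amounts to predicting an object's current position from belt-encoder counts rather than from a clock, so the prediction stays correct even if the belt speed varies. A minimal sketch with hypothetical units:

```python
def predict_position(x_at_capture, encoder_at_capture, encoder_now, mm_per_count):
    """Predict where an object detected at image-capture time has moved
    to along the conveyor. The encoder, not elapsed time, measures how
    far the belt has travelled, so belt-speed changes cannot cause the
    gripper to miss."""
    travel_mm = (encoder_now - encoder_at_capture) * mm_per_count
    return x_at_capture + travel_mm
```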

  11. A study on low-cost, high-accuracy, and real-time stereo vision algorithms for UAV power line inspection

    NASA Astrophysics Data System (ADS)

    Wang, Hongyu; Zhang, Baomin; Zhao, Xun; Li, Cong; Lu, Cunyue

    2018-04-01

    Conventional stereo vision algorithms suffer from high levels of hardware resource utilization due to algorithm complexity, or poor levels of accuracy caused by inadequacies in the matching algorithm. To address these issues, we have proposed a stereo range-finding technique that produces an excellent balance between cost, matching accuracy and real-time performance, for power line inspection using a UAV. This was achieved through the introduction of a special image preprocessing algorithm and a weighted local stereo matching algorithm, as well as the design of a corresponding hardware architecture. Stereo vision systems based on this technique have a lower level of resource usage and also a higher level of matching accuracy following hardware acceleration. To validate the effectiveness of our technique, a stereo vision system based on our improved algorithms was implemented using the Spartan 6 FPGA. In comparative experiments, it was shown that the system using the improved algorithms outperformed the system based on the unimproved algorithms, in terms of resource utilization and matching accuracy. In particular, Block RAM usage was reduced by 19%, and the improved system was also able to output range-finding data in real time.

  12. The precision measurement and assembly for miniature parts based on double machine vision systems

    NASA Astrophysics Data System (ADS)

    Wang, X. D.; Zhang, L. F.; Xin, M. Z.; Qu, Y. Q.; Luo, Y.; Ma, T. M.; Chen, L.

    2015-02-01

    In the process of assembling miniature parts, the structural features on the bottom or side of the parts often need to be aligned and positioned. General assembly equipment integrated with a single vertical, downward-looking machine vision system cannot satisfy this requirement. Precision automatic assembly equipment was therefore developed with double machine vision systems integrated. In this system, a horizontal vision system measures the position of feature structures in the parts' side view, which cannot be seen by the vertical one. The position measured by the horizontal camera is converted to the vertical vision system using the calibration information. With careful calibration, the parts' alignment and positioning in the assembly process can be guaranteed. The developed assembly equipment has the characteristics of easy implementation, modularization and high cost performance. The handling of the miniature parts and the assembly procedure are briefly introduced, the calibration procedure is given, and the assembly error is analyzed for compensation.

  13. On a Fundamental Evaluation of a Uav Equipped with a Multichannel Laser Scanner

    NASA Astrophysics Data System (ADS)

    Nakano, K.; Suzuki, H.; Omori, K.; Hayakawa, K.; Kurodai, M.

    2018-05-01

    Unmanned aerial vehicles (UAVs), which have been widely used in various fields such as archaeology, agriculture, mining, and construction, can acquire high-resolution images at the millimetre scale. It is possible to obtain realistic 3D models using high-overlap images and 3D reconstruction software based on computer vision technologies such as Structure from Motion and Multi-view Stereo. However, it remains difficult to obtain key points from surfaces with limited texture such as new asphalt or concrete, or from areas like forests that may be concealed by vegetation. A promising method for conducting aerial surveys is through the use of UAVs equipped with laser scanners. We conducted a fundamental performance evaluation of the Velodyne VLP-16 multi-channel laser scanner equipped to a DJI Matrice 600 Pro UAV at a construction site. Here, we present our findings with respect to both the geometric and radiometric aspects of the acquired data.

  14. Software model of a machine vision system based on the common house fly.

    PubMed

    Madsen, Robert; Barrett, Steven; Wilcox, Michael

    2005-01-01

    The vision system of the common house fly has many properties, such as hyperacuity and parallel structure, which would be advantageous in a machine vision system. A software model has been developed which is ultimately intended to be a tool to guide the design of an analog real-time vision system. The model starts by laying out cartridges over an image. The cartridges are analogous to the ommatidia of the fly's eye and contain seven photoreceptors, each with a Gaussian profile. The spacing between photoreceptors is variable, providing for more or less detail as needed. The cartridges provide information on what type of features they see, and neighboring cartridges share information to construct a feature map.
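
    The cartridge layout described above (seven Gaussian-profile photoreceptors per cartridge, one central and six surrounding) can be sketched as follows; the hexagonal arrangement, spacing and width parameters are assumptions for illustration, not details from the paper:

```python
import numpy as np

def gaussian_photoreceptor(image, cy, cx, sigma=2.0):
    """Response of one photoreceptor: a Gaussian-weighted average of
    image intensity centred at (cy, cx), mimicking the receptor's
    Gaussian spatial profile."""
    ys, xs = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    w = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return float((w * image).sum() / w.sum())

def cartridge_responses(image, cy, cx, spacing=3.0, sigma=2.0):
    """Seven photoreceptor responses for one cartridge: a centre sample
    plus six samples on a hexagon around it (assumed layout)."""
    angles = np.arange(6) * np.pi / 3
    points = [(cy, cx)] + [(cy + spacing * np.sin(a), cx + spacing * np.cos(a))
                           for a in angles]
    return [gaussian_photoreceptor(image, py, px, sigma) for py, px in points]
```

    Because the Gaussian profiles of neighbouring receptors overlap, sub-pixel feature positions shift the relative responses smoothly, which is one way such models account for hyperacuity.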

  15. A dental vision system for accurate 3D tooth modeling.

    PubMed

    Zhang, Li; Alemzadeh, K

    2006-01-01

    This paper describes an active vision system based reverse engineering approach to extract the three-dimensional (3D) geometric information from dental teeth and transfer this information into Computer-Aided Design/Computer-Aided Manufacture (CAD/CAM) systems to improve the accuracy of 3D teeth models and, at the same time, improve the quality of the construction units to help patient care. The vision system involves the development of a dental vision rig, edge detection, boundary tracing and fast and accurate 3D modeling from a sequence of sliced silhouettes of physical models. The rig is designed using engineering design methods such as a concept selection matrix and weighted objectives evaluation chart. Reconstruction results and accuracy evaluation are presented on digitizing different teeth models.

  16. [Clinical observation on the relation between laser in situ keratomileusis treating myopic anisometropia and binocular vision].

    PubMed

    Huang, Jing; Lu, Wei

    2009-09-29

    To analyze the effect of LASIK on the visual quality of patients with anisometropia, and to evaluate its clinical value in terms of visual quality. Prospective observational case series. The uncorrected vision, spectacle-corrected vision and binocular vision of 45 cases with anisometropia ≥ 2.25 D were assessed before and after LASIK. After the operation, 91.57% of eyes reached a vision of ≥ 0.8, a significant improvement in binocular vision (P < 0.05). There was a significant difference in diopter between pre-operation and post-operation (P < 0.05). For anisometropia, there was no significant difference in simultaneous binocular vision (P = 0.431), but there were highly significant differences among combined, short-distance and long-distance stereopsis (P = 0.000). Binocular vision deteriorated as anisometropia increased (P < 0.05). Short-distance stereopsis in LASIK-treated myopic anisometropia was better than in spectacle-corrected patients (P < 0.05). LASIK can improve visual quality and restore binocular vision. LASIK can correct anisometropia, and its therapeutic efficacy merits confirmation.

  17. Improving Federal Education Programs through an Integrated Performance and Benchmarking System.

    ERIC Educational Resources Information Center

    Department of Education, Washington, DC. Office of the Under Secretary.

    This document highlights the problems with current federal education program data collection activities and lists several factors that make movement toward a possible solution, then discusses the vision for the Integrated Performance and Benchmarking System (IPBS), a vision of an Internet-based system for harvesting information from states about…

  18. Astigmatism

    MedlinePlus

    ... change the shape of the cornea surface to eliminate astigmatism, along with nearsightedness or farsightedness. ... contact lenses. Laser vision correction can most often eliminate, or greatly reduce astigmatism.

  19. Development of 3D online contact measurement system for intelligent manufacturing based on stereo vision

    NASA Astrophysics Data System (ADS)

    Li, Peng; Chong, Wenyan; Ma, Yongjun

    2017-10-01

    In order to avoid the shortcomings of low efficiency and restricted measuring range that exist in traditional 3D on-line contact measurement of workpiece size, the development of a novel 3D contact measurement system is introduced, designed for intelligent manufacturing based on stereo vision. The developed contact measurement system is characterized by the integrated use of a handy probe, a binocular stereo vision system, and advanced measurement software. The handy probe consists of six tracking markers, a touch probe and the associated electronics. In the process of contact measurement, the handy probe is located by the stereo vision system through its tracking markers, and the 3D coordinates of a point on the workpiece are measured by calculating the tip position of the touch probe. With the flexibility of the handy probe, the orientation, range and density of the 3D contact measurement can be adapted to different needs. Applications of the developed contact measurement system to high-precision measurement and rapid surface digitization are experimentally demonstrated.
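The tip calculation above can be illustrated with a simple sketch: assuming the stereo system has already triangulated two markers lying on the stylus axis, the tip position follows from a known tip offset along that axis. The two-marker geometry and names here are assumptions for illustration, not the paper's six-marker design:

```python
import math

def probe_tip(marker_rear, marker_front, tip_offset):
    """Extrapolate the stylus tip: it lies on the line through the two
    triangulated markers, tip_offset units beyond the front marker."""
    d = [f - r for f, r in zip(marker_front, marker_rear)]
    length = math.sqrt(sum(c * c for c in d))
    unit = [c / length for c in d]
    return [f + tip_offset * u for f, u in zip(marker_front, unit)]
```

With more markers, the same idea generalizes to fitting a rigid transform to the marker cloud and mapping a pre-calibrated tip offset through it.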

  20. MS Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koppenaal, David W.; Barinaga, Charles J.; Denton, M Bonner B.

    2005-11-01

    Good eyesight is often taken for granted, a situation that everyone appreciates once vision begins to fade with age. New eyeglasses or contact lenses are traditional ways to improve vision, but recent new technology, i.e. LASIK laser eye surgery, provides a new and exciting means for marked vision restoration and improvement. In mass spectrometry, detectors are the 'eyes' of the MS instrument. These 'eyes' have also been taken for granted. New detectors and new technologies are likewise needed to correct, improve, and extend ion detection and hence, our 'chemical vision'. The purpose of this report is to review and assess current MS detector technology and to provide a glimpse towards future detector technologies. It is hoped that the report will also serve to motivate interest, prompt ideas, and inspire new visions for ion detection research.

  1. 3D laser imaging for ODOT interstate network at true 1-mm resolution.

    DOT National Transportation Integrated Search

    2014-12-01

    With the development of 3D laser imaging technology, the latest iteration of PaveVision3D Ultra can obtain true 1 mm resolution 3D data at full-lane coverage in all three directions at highway speeds up to 60 MPH. This project provides rapid survey ...

  2. Overview of refractive surgery.

    PubMed

    Bower, K S; Weichel, E D; Kim, T J

    2001-10-01

    Patients with myopia, hyperopia and astigmatism can now reduce or eliminate their dependence on contact lenses and eyeglasses through refractive surgery that includes radial keratotomy (RK), photorefractive keratectomy (PRK), laser-assisted in situ keratomileusis (LASIK), laser thermal keratoplasty (LTK) and intrastromal corneal rings (ICR). Since the approval of the excimer laser in 1995, the popularity of RK has declined because of the superior outcomes from PRK and LASIK. In patients with low-to-moderate myopia, PRK produces stable and predictable results with an excellent safety profile. LASIK is also efficacious, predictable and safe, with the additional advantages of rapid vision recovery and minimal pain. LASIK has rapidly become the most widely performed refractive surgery, with high patient and surgeon satisfaction. Noncontact Holmium:YAG LTK provides satisfactory correction in patients with low hyperopia. ICR offers patients with low myopia the potential advantage of removal if the vision outcome is unsatisfactory. Despite the current widespread advertising and media attention about laser refractive surgery, not all patients are good candidates for this surgery. Family physicians should be familiar with the different refractive surgeries and their potential complications.

  3. Parametric study of sensor placement for vision-based relative navigation system of multiple spacecraft

    NASA Astrophysics Data System (ADS)

    Jeong, Junho; Kim, Seungkeun; Suk, Jinyoung

    2017-12-01

    In order to overcome the limited range of GPS-based techniques, vision-based relative navigation methods have recently emerged as alternative approaches for high Earth orbit (HEO) or deep space missions. Various vision-based relative navigation systems are therefore used for proximity operations between two spacecraft. In implementing these systems, a sensor placement problem can occur on the exterior of the spacecraft due to its limited space. To deal with the sensor placement, this paper proposes a novel methodology for vision-based relative navigation based on multiple position sensitive diode (PSD) sensors and multiple infrared beacon modules. The proposed method uses an iterated parametric study based on farthest point optimization (FPO) and a constrained extended Kalman filter (CEKF): the former sets the locations of the sensors, and the latter estimates relative positions and attitudes for each combination of PSDs and beacons. Scores for the sensor placement are then calculated with respect to three parameters: the number of PSDs, the number of beacons, and the accuracy of the relative estimates. The best-scoring candidate is selected as the sensor placement. Moreover, the results of the iterated estimation show that the accuracy improves dramatically as the number of PSDs increases from one to three.
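The farthest point optimization (FPO) used above for spreading sensors over the available surface can be sketched as a greedy farthest-point selection over candidate mounting locations. This is a generic illustration under assumed planar candidate coordinates, not the authors' exact formulation:

```python
def farthest_point_selection(candidates, k):
    """Greedy farthest-point selection: keep the first candidate, then
    repeatedly add the candidate whose nearest already-chosen site is
    farthest away, spreading the k chosen sites apart."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    chosen = [candidates[0]]
    while len(chosen) < k:
        remaining = [c for c in candidates if c not in chosen]
        chosen.append(max(remaining, key=lambda c: min(dist2(c, s) for s in chosen)))
    return chosen
```

Running the selection for each candidate count (one to three PSDs, say) and scoring the resulting estimation accuracy mirrors the iterated parametric study the abstract describes.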

  4. Vision-Based UAV Flight Control and Obstacle Avoidance

    DTIC Science & Technology

    2006-01-01

    denoted it by Vb = (Vb1, Vb2, Vb3). Fig. 2 shows the block diagram of the proposed vision-based motion analysis and obstacle avoidance system. We denote...structure analysis often involve computation-intensive computer vision tasks, such as feature extraction and geometric modeling. Computation-intensive...First, we extract a set of features from each block. 2) Second, we compute the distance between these two sets of features. In conventional motion

  5. NASA Tech Briefs, November 2008

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Topics covered include: Digital Phase Meter for a Laser Heterodyne Interferometer; Vision System Measures Motions of Robot and External Objects; Advanced Precipitation Radar Antenna to Measure Rainfall From Space; Wide-Band Radar for Measuring Thickness of Sea Ice; Vertical Isolation for Photodiodes in CMOS Imagers; Wide-Band Microwave Receivers Using Photonic Processing; L-Band Transmit/Receive Module for Phase-Stable Array Antennas; Microwave Power Combiner/Switch Utilizing a Faraday Rotator; Compact Low-Loss Planar Magic-T; Using Pipelined XNOR Logic to Reduce SEU Risks in State Machines; Quasi-Optical Transmission Line for 94-GHz Radar; Next Generation Flight Controller Trainer System; Converting from DDOR SASF to APF; Converting from CVF to AAF; Documenting AUTOGEN and APGEN Model Files; Sequence History Update Tool; Extraction and Analysis of Display Data; MRO DKF Post-Processing Tool; Rig Diagnostic Tools; MRO Sequence Checking Tool; Science Activity Planner for the MER Mission; UAVSAR Flight-Planning System; Templates for Deposition of Microscopic Pointed Structures; Adjustable Membrane Mirrors Incorporating G-Elastomers; Hall-Effect Thruster Utilizing Bismuth as Propellant; High-Temperature Crystal-Growth Cartridge Tubes Made by VPS; Quench Crucibles Reinforced with Metal; Deep-Sea Hydrothermal-Vent Sampler; Mars Rocket Propulsion System; Two-Stage Passive Vibration Isolator; Improved Thermal Design of a Compression Mold; Enhanced Pseudo-Waypoint Guidance for Spacecraft Maneuvers; Altimetry Using GPS-Reflection/Occultation Interferometry; Thermally Driven Josephson Effect; Perturbation Effects on a Supercritical C7H16/N2 Mixing Layer; Gold Nanoparticle Labels Amplify Ellipsometric Signals; Phase Matching of Diverse Modes in a WGM Resonator; WGM Resonators for Terahertz-to-Optical Frequency Conversion; Determining Concentration of Nanoparticles from Ellipsometry; Microwave-to-Optical Conversion in WGM Resonators; Four-Pass Coupler for Laser-Diode-Pumped 
Solid-State Laser; Low-Resolution Raman-Spectroscopy Combustion Thermometry; Temperature Sensors Based on WGM Optical Resonators; Varying the Divergence of Multiple Parallel Laser Beams; Efficient Algorithm for Rectangular Spiral Search; Algorithm-Based Fault Tolerance Integrated with Replication; Targeting and Localization for Mars Rover Operations; Terrain-Adaptive Navigation Architecture; Self-Adjusting Hash Tables for Embedded Flight Applications; Schema for Spacecraft-Command Dictionary; Combined GMSK Communications and PN Ranging; System-Level Integration of Mass Memory; Network-Attached Solid-State Recorder Architecture; Method of Cross-Linking Aerogels Using a One-Pot Reaction Scheme; An Efficient Reachability Analysis Algorithm.

  6. ROBOSIGHT: Robotic Vision System For Inspection And Manipulation

    NASA Astrophysics Data System (ADS)

    Trivedi, Mohan M.; Chen, ChuXin; Marapane, Suresh

    1989-02-01

    Vision is an important sensory modality that can be used for deriving information critical to the proper, efficient, flexible, and safe operation of an intelligent robot. Vision systems are utilized for developing higher level interpretation of the nature of a robotic workspace using images acquired by cameras mounted on a robot. Such information can be useful for tasks such as object recognition, object location, object inspection, obstacle avoidance and navigation. In this paper we describe efforts directed towards developing a vision system useful for performing various robotic inspection and manipulation tasks. The system utilizes gray scale images and can be viewed as a model-based system. It includes general purpose image analysis modules as well as special purpose, task dependent object status recognition modules. Experiments are described to verify the robust performance of the integrated system using a robotic testbed.

  7. Advancing adverse outcome pathways for integrated toxicology and regulatory applications

    EPA Science Inventory

    Recent regulatory efforts in many countries have focused on a toxicological pathway-based vision for human health assessments relying on in vitro systems and predictive models to generate the toxicological data needed to evaluate chemical hazard. A pathway-based vision is equally...

  8. Passive in-home measurement of stride-to-stride gait variability comparing vision and Kinect sensing.

    PubMed

    Stone, Erik E; Skubic, Marjorie

    2011-01-01

    We present an analysis of measuring stride-to-stride gait variability passively in a home setting using two vision-based monitoring techniques: anonymized video data from a system of two web-cameras, and depth imagery from a single Microsoft Kinect. Millions of older adults fall every year. The ability to assess the fall risk of elderly individuals is essential to allowing them to continue living safely in independent settings as they age. Studies have shown that measures of stride-to-stride gait variability are predictive of falls in older adults. For this analysis, a set of participants was asked to perform a number of short walks while being monitored by the two vision-based systems, along with a marker-based Vicon motion capture system for ground truth. Measures of stride-to-stride gait variability were computed using each of the systems and compared against those obtained from the Vicon.
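Once stride times have been extracted by any of the systems, stride-to-stride variability reduces to simple statistics over the stride-time series. A minimal sketch (the specific variability measures used in the study may differ):

```python
from statistics import mean, stdev

def stride_variability(stride_times):
    """Stride-to-stride variability of a stride-time series (seconds):
    sample standard deviation and coefficient of variation (%)."""
    sd = stdev(stride_times)
    cv = 100.0 * sd / mean(stride_times)
    return sd, cv
```

Comparing these numbers across the web-camera, Kinect, and Vicon series is then a direct agreement check between the systems.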

  9. Improving Car Navigation with a Vision-Based System

    NASA Astrophysics Data System (ADS)

    Kim, H.; Choi, K.; Lee, I.

    2015-08-01

    The real-time acquisition of accurate positions is very important for the proper operation of driver assistance systems and autonomous vehicles. Since current systems mostly depend on GPS and a map-matching technique, they show poor and unreliable performance where GPS signals are blocked or weak. In this study, we propose a vision-oriented car navigation method based on sensor fusion of a GPS and in-vehicle sensors. We employ a single photo resection process to derive the position and attitude of the camera, and thus those of the car. These image georeferencing results are combined with the other sensory data in a sensor fusion framework for more accurate position estimation using an extended Kalman filter. The proposed system estimated positions with an accuracy of 15 m even though GPS signals were unavailable during the entire 15-minute test drive. The proposed vision-based system can be effectively utilized for the low-cost but highly accurate and reliable navigation systems required for intelligent or autonomous vehicles.
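The sensor fusion step can be illustrated with a deliberately simplified one-dimensional Kalman filter that dead-reckons on wheel speed and corrects with intermittent GPS fixes. The paper's extended Kalman filter over full position and attitude is considerably richer; all names and noise values here are assumptions for the sketch:

```python
def fuse_positions(gps_fixes, wheel_speeds, dt=1.0, q=0.5, r_gps=25.0):
    """1-D position filter: predict by integrating wheel speed (dead
    reckoning), correct whenever a GPS fix is available.
    gps_fixes entries may be None during signal blockage."""
    x = gps_fixes[0] if gps_fixes[0] is not None else 0.0  # initial position
    p = 1.0                                                # initial variance
    estimates = []
    for z, u in zip(gps_fixes, wheel_speeds):
        x, p = x + u * dt, p + q          # predict: motion model + process noise
        if z is not None:                 # update: only when GPS is usable
            k = p / (p + r_gps)           # Kalman gain
            x, p = x + k * (z - x), (1.0 - k) * p
        estimates.append(x)
    return estimates
```

During a long GPS outage the filter simply keeps integrating the in-vehicle measurements, which is the regime where the paper's camera-derived position fixes would take over the correction role.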

  11. Laser safety and hazard analysis for the temperature stabilized BSLT ARES laser system.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Augustoni, Arnold L.

    A laser safety and hazard analysis was performed for the temperature stabilized Big Sky Laser Technology (BSLT) laser central to the ARES system based on the 2000 version of the American National Standards Institute's (ANSI) Standard Z136.1, for Safe Use of Lasers, and the 2000 version of the ANSI Standard Z136.6, for Safe Use of Lasers Outdoors. As a result of temperature stabilization of the BSLT laser, the operating parameters of the laser had changed, requiring a hazard analysis based on the new operating conditions. The ARES laser system is a Van/Truck based mobile platform, which is used to perform laser interaction experiments and tests at various national test sites.

  12. Vision based flight procedure stereo display system

    NASA Astrophysics Data System (ADS)

    Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng

    2008-03-01

    A virtual reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on a Geographic Information System (GIS) and high-definition satellite remote sensing photos, and the flight approach area database through a computer 3D modeling system and GIS. The area texture is generated from remote sensing photos and aerial photographs at various levels of detail. Following the flight approach procedure, flight navigation information is linked to the database, and the approach area can be displayed dynamically according to the designed flight procedure. The approach area images are rendered in two channels, one for left-eye images and the other for right-eye images, so that through the polarized stereoscopic projection system pilots and aircrew get a vivid 3D view of the destination approach area. Used in pilots' preflight preparation, the system gives aircrew more vivid information about the destination approach area. It can improve an aviator's confidence before a flight mission and thereby improve flight safety. The system is also useful for validating visual flight procedure designs and assists flight procedure design.

  13. Optical Telescope System-Level Design Considerations for a Space-Based Gravitational Wave Mission

    NASA Technical Reports Server (NTRS)

    Livas, Jeffrey C.; Sankar, Shannon R.

    2016-01-01

    The study of the Universe through gravitational waves will yield a revolutionary new perspective on a Universe that has been intensely studied using electromagnetic signals in many wavelength bands. A space-based gravitational wave observatory will enable access to a rich array of astrophysical sources in the measurement band from 0.1 to 100 mHz, and nicely complement observations from ground-based detectors as well as pulsar timing arrays by sampling a different range of compact object masses and astrophysical processes. The observatory measures gravitational radiation by precisely monitoring the tiny change in the proper distance between pairs of freely falling proof masses. These masses are separated by millions of kilometers and, using a laser heterodyne interferometric technique, the change in their proper separation is detected to approx. 10 pm over timescales of 1000 seconds, a fractional precision of better than one part in 10^19. Optical telescopes are essential for the implementation of this precision displacement measurement. In this paper we describe some of the key system-level design considerations for the telescope subsystem in a mission context. The reference mission for this purpose is taken to be the enhanced Laser Interferometer Space Antenna mission (eLISA), a strong candidate for the European Space Agency's Cosmic Vision L3 launch opportunity in 2034. We will review the flow-down of observatory-level requirements to the telescope subsystem, particularly pertaining to the effects of telescope dimensional stability and scattered light suppression, two performance specifications which are somewhat different from the usual requirements for an image-forming telescope.

  14. Real-life experience of ranibizumab for diabetic macular edema in Taiwan.

    PubMed

    Tsai, Meng-Ju; Hsieh, Yi-Ting; Peng, Yi-Jie

    2018-06-20

    To evaluate the visual and anatomical outcomes of intravitreal ranibizumab for diabetic macular edema (DME) in the healthcare system of Taiwan. A total of 39 eyes from 39 patients were retrospectively enrolled in the study. All eyes that fulfilled the key criteria, including a baseline vision between 20 and 70 ETDRS letters and a minimum central macular thickness (CMT) of 300 µm, had at least 3 monthly loading injections of ranibizumab in a year. Macular laser or posterior subtenon injections of triamcinolone acetonide (PSTA) could be performed as supplementary treatments following loading injections. Primary outcomes include best-corrected visual acuity and CMT. Patients' vision improved from 46.5 ± 15.3 letters at baseline to 51.4 ± 16.6 letters at 12 months (p = 0.031). Mean CMT at baseline was 406 ± 105 µm, which decreased to 329 ± 108 µm (p = 0.002). At 12 months, 44.4% of eyes with total injection number < 5 and 42.9% with injection number ≥ 5 achieved a gain in vision that was 10 letters or more. A total of 5 injections or more did not lead to a better visual gain in comparison with only 3-4 injections (p = 0.71), and both had similar number of supplementary treatments (p = 0.43). Monthly reinjections of ranibizumab resulted in a lower likelihood of visual loss of 10 or 15 letters (p = 0.019 and 0.015, respectively, adjusted for age, baseline vision, severity of diabetic retinopathy and the presence of previous treatments); however, supplementary macular lasers, PSTA or ranibizumab without monthly reinjections did not (all p > 0.05). The average number of injections was 4.3 ± 1.0. Treatment for DME with at least three monthly ranibizumab loading injections, with or without other supplementary treatments, is effective at 12 months thereafter. Two monthly reinjections of ranibizumab, while not significantly increasing vision, may have a role in preventing visual loss.

  15. Optical Quality, Threshold Target Identification, and Military Target Task Performance After Advanced Keratorefractive Surgery

    DTIC Science & Technology

    2012-05-01

    undergo wavefront-guided (WFG) photorefractive keratectomy (PRK), WFG laser in situ keratomileusis (LASIK), wavefront optimized (WFO) PRK or WFO...Military, Refractive Surgery, PRK, LASIK, Night Vision, Wavefront Optimized, Wavefront Guided, Visual Performance, Quality of Vision, Outcomes...military. In a prospective, randomized treatment trial we will enroll 224 nearsighted soldiers to WFG photorefractive keratectomy (PRK), WFG LASIK, WFO PRK

  16. Optical Quality and Threshold Target Identification and Military Target Task Performance after Advanced Keratorefractive Surgery

    DTIC Science & Technology

    2013-05-01

    and Sensors Directorate. • Study participants and physicians select treatment: PRK or LASIK. WFG vs. WFO treatment modality is randomized. The...to undergo wavefront-guided (WFG) photorefractive keratectomy (PRK), WFG laser in situ keratomileusis (LASIK), wavefront optimized (WFO) PRK or WFO...TERMS Military, Refractive Surgery, PRK, LASIK, Night Vision, Wavefront Optimized, Wavefront Guided, Visual Performance, Quality of Vision, Outcomes

  17. Laser Safety and Hazardous Analysis for the ARES (Big Sky) Laser System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    AUGUSTONI, ARNOLD L.

    A laser safety and hazard analysis was performed for the ARES laser system based on the 2000 version of the American National Standards Institute's (ANSI) Standard Z136.1, for Safe Use of Lasers, and the 2000 version of the ANSI Standard Z136.6, for Safe Use of Lasers Outdoors. The ARES laser system is a Van/Truck based mobile platform, which is used to perform laser interaction experiments and tests at various national test sites.

  18. Multi-camera and structured-light vision system (MSVS) for dynamic high-accuracy 3D measurements of railway tunnels.

    PubMed

    Zhan, Dong; Yu, Long; Xiao, Jian; Chen, Tanglong

    2015-04-14

    Railway tunnel 3D clearance inspection is critical to guaranteeing railway operation safety. However, it is a challenge to inspect railway tunnel 3D clearance using a vision system, because both the spatial range and field of view (FOV) of such measurements are quite large. This paper summarizes our work on dynamic railway tunnel 3D clearance inspection based on a multi-camera and structured-light vision system (MSVS). First, the configuration of the MSVS is described. Then, the global calibration for the MSVS is discussed in detail. The onboard vision system is mounted on a dedicated vehicle and is expected to suffer from multiple degrees of freedom vibrations brought about by the running vehicle. Any small vibration can result in substantial measurement errors. In order to overcome this problem, a vehicle motion deviation rectifying method is investigated. Experiments using the vision inspection system are conducted with satisfactory online measurement results.

  19. Using laser induced breakdown spectroscopy and acoustic radiation force elasticity microscope to measure the spatial distribution of corneal elasticity

    NASA Astrophysics Data System (ADS)

    Sun, Hui; Li, Xin; Fan, Zhongwei; Kurtz, Ron; Juhasz, Tibor

    2017-02-01

    Corneal biomechanics plays an important role in determining the eye's structural integrity, optical power and the overall quality of vision. It also plays an increasingly recognized role in corneal transplant and refractive surgery, affecting the predictability, quality and stability of the final visual outcome [1]. A critical limitation to increasing our understanding of how corneal biomechanics controls corneal stability and refraction is the lack of non-invasive technologies that microscopically measure local biomechanical properties, such as corneal elasticity, in 3D space. Bubble-based acoustic radiation force elasticity microscopy (ARFEM) offers the opportunity to measure the inhomogeneous elastic properties of the cornea from the movement of a micron-sized cavitation bubble generated by a low-energy femtosecond laser pulse [2, 3]. Laser induced breakdown spectroscopy (LIBS), also known as laser induced plasma spectroscopy (LIPS) or laser spark spectrometry (LSS), is a form of atomic emission spectroscopy [4]. The LIBS principle of operation is quite simple, although the physical processes involved in laser-matter interaction are complex and still not completely understood: laser pulses are focused onto a target to generate a plasma that vaporizes a small amount of material, and the emitted spectrum is measured to analyze the elemental composition of the target.

  20. Accidental human laser retinal injuries from military laser systems

    NASA Astrophysics Data System (ADS)

    Stuck, Bruce E.; Zwick, Harry; Molchany, Jerome W.; Lund, David J.; Gagliano, Donald A.

    1996-04-01

    The time course of the ophthalmoscopic and functional consequences of eight human laser accident cases from military laser systems is described. All patients reported subjective vision loss, with ophthalmoscopic evidence of retinal alteration ranging from vitreous hemorrhage to retinal burn. Five of the cases involved single or multiple exposures to Q-switched neodymium radiation at close range, whereas the other three incidents occurred at longer ranges. Most exposures were within 5 degrees of the foveola, yet none directly in the foveola. High-contrast visual acuity improved with time except in the cases with progressive retinal fibrosis between lesion sites or retinal hole formation encroaching on the fovea. In one patient the visual acuity recovered from 20/60 at one week to 20/25 in four months with minimal central visual field loss. Most cases showed suppression of high and low spatial frequency contrast sensitivity. Visual field defects were enlarged relative to the ophthalmoscopically observed lesion size. Deep retinal scar formation and retinal traction were evident in two of the three cases with vitreous hemorrhage. In one patient, nerve fiber layer damage to the papillo-macular bundle was clearly evident. Visual performance measured with a pursuit tracking task revealed significant performance loss relative to normal tracking observers, even in cases where acuity returned to near-normal levels. These functional and performance deficits may reflect secondary effects of parafoveal laser injury.

  1. Vision based assistive technology for people with dementia performing activities of daily living (ADLs): an overview

    NASA Astrophysics Data System (ADS)

    As'ari, M. A.; Sheikh, U. U.

    2012-04-01

    The rapid development of intelligent assistive technology for replacing a human caregiver in assisting people with dementia performing activities of daily living (ADLs) promises a reduction in care costs, especially in training and hiring human caregivers. The main problem, however, is that the sensing agents used in such systems vary with the intent (the type of ADL) and the environment where the activity is performed. In this paper we present an overview of the potential of computer-vision-based sensing agents in assistive systems, and of how they can be generalized to be invariant to various kinds of ADLs and environments. We find that there exists a gap between existing vision-based human action recognition methods and the design of such systems, due to the cognitive and physical impairments of people with dementia.

  2. Vision-based system identification technique for building structures using a motion capture system

    NASA Astrophysics Data System (ADS)

    Oh, Byung Kwan; Hwang, Jin Woo; Kim, Yousok; Cho, Tongjun; Park, Hyo Seon

    2015-11-01

    This paper presents a new vision-based system identification (SI) technique for building structures using a motion capture system (MCS). The MCS, with outstanding capabilities for dynamic response measurement, can provide gage-free measurement of vibrations through the convenient installation of multiple markers. In this technique, the dynamic characteristics (natural frequency, mode shape, and damping ratio) of building structures are extracted from the dynamic displacement responses measured by the MCS, after converting the displacements to accelerations and conducting SI by frequency domain decomposition (FDD). A free vibration experiment on a three-story shear frame was conducted to validate the proposed technique. The SI results from the conventional accelerometer-based method were compared with those from the proposed technique and showed good agreement, which confirms the validity and applicability of the proposed vision-based SI technique for building structures. Furthermore, SI employing the MCS-measured displacements directly in FDD was performed and showed results identical to those of the conventional SI method.
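The displacement-to-acceleration conversion mentioned above can be sketched with second-order central differences over the sampled displacement series (a generic numerical scheme, not necessarily the authors' exact processing):

```python
def accel_from_displacement(x, dt):
    """Second-order central differences on a uniformly sampled displacement
    series: a[i] ~ (x[i-1] - 2*x[i] + x[i+1]) / dt**2.
    Returns the len(x) - 2 interior acceleration samples."""
    return [(x[i - 1] - 2.0 * x[i] + x[i + 1]) / (dt * dt)
            for i in range(1, len(x) - 1)]
```

The resulting acceleration series can then be fed to a standard FDD pipeline (cross-spectral density matrix plus singular value decomposition) exactly as accelerometer data would be.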

  3. Real-time unconstrained object recognition: a processing pipeline based on the mammalian visual system.

    PubMed

    Aguilar, Mario; Peot, Mark A; Zhou, Jiangying; Simons, Stephen; Liao, Yuwei; Metwalli, Nader; Anderson, Mark B

    2012-03-01

    The mammalian visual system is still the gold standard for recognition accuracy, flexibility, efficiency, and speed. Ongoing advances in our understanding of function and mechanisms in the visual system can now be leveraged to pursue the design of computer vision architectures that will revolutionize the state of the art in computer vision.

  4. Electronic health records (EHRs): supporting ASCO's vision of cancer care.

    PubMed

    Yu, Peter; Artz, David; Warner, Jeremy

    2014-01-01

    ASCO's vision for cancer care in 2030 is built on the expanding importance of panomics and big data, and envisions enabling better health for patients with cancer by the rapid transformation of systems biology knowledge into cancer care advances. This vision will be heavily dependent on the use of health information technology for computational biology and clinical decision support systems (CDSS). Computational biology will allow us to construct models of cancer biology that encompass the complexity of cancer panomics data and provide us with better understanding of the mechanisms governing cancer behavior. The Agency for Healthcare Research and Quality promotes CDSS based on clinical practice guidelines, which are knowledge bases that grow too slowly to match the rate of panomic-derived knowledge. CDSS that are based on systems biology models will be more easily adaptable to rapid advancements and translational medicine. We describe the characteristics of health data representation, a model for representing molecular data that supports data extraction and use for panomic-based clinical research, and argue for CDSS that are based on systems biology and are algorithm-based.

  5. Novel compact panomorph lens based vision system for monitoring around a vehicle

    NASA Astrophysics Data System (ADS)

    Thibault, Simon

    2008-04-01

    Automotive applications are one of the largest vision-sensor market segments and one of the fastest growing ones. The trend to use increasingly more sensors in cars is driven both by legislation and by consumer demands for higher safety and better driving experiences. Awareness of what directly surrounds a vehicle affects safe driving and manoeuvring. Consequently, panoramic 360° field-of-view imaging can contribute more to the perception of the world around the driver than any other sensor. However, to obtain complete vision around the car, several sensor systems are usually necessary. To solve this issue, a customized imaging system based on a panomorph lens can provide the maximum information for the driver with a reduced number of sensors. A panomorph lens is a hemispheric wide-angle anamorphic lens with enhanced resolution in predefined zones of interest. Because panomorph lenses are optimized to a custom angle-to-pixel relationship, vision systems provide ideal image coverage that reduces and optimizes the processing. We present various scenarios which may benefit from the use of a custom panoramic sensor. We also discuss the technical requirements of such a vision system. Finally, we demonstrate why the panomorph-based visual sensor is probably one of the most promising ways to fuse many sensors into one. For example, a single panoramic sensor on the front of a vehicle could provide all the necessary information for assistance in crash avoidance, lane tracking, early warning, park aids, road sign detection, and various video monitoring views.
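    The custom angle-to-pixel relationship can be illustrated numerically. The sketch below is not a real lens prescription: the zone boundaries (30-60 degrees), the 2x pixel-density boost, and the 500-pixel image radius are all assumed values, chosen only to show how a cumulative pixel-density law yields enhanced resolution in a zone of interest.

```python
import numpy as np

# Illustrative angle-to-pixel mapping for a panomorph-style lens: pixel density
# is boosted in a 30-60 degree zone of interest relative to a uniform mapping.
theta = np.linspace(0.0, 90.0, 901)                          # field angle, degrees
density = np.where((theta >= 30) & (theta <= 60), 2.0, 1.0)  # relative pixels/degree

# Radial pixel position = normalized cumulative density (the angle-to-pixel law).
r = np.cumsum(density)
r = r / r[-1] * 500.0                          # map onto a 500-pixel image radius

pixels_per_degree = np.gradient(r, theta)
boost = pixels_per_degree[400] / pixels_per_degree[100]   # ~40 deg vs ~10 deg
print(round(boost, 1))
```

    Inside the zone of interest each degree of field angle gets twice as many image pixels, which is the sense in which the processing is "reduced and optimized": resolution goes where the application needs it.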

  6. Computer vision camera with embedded FPGA processing

    NASA Astrophysics Data System (ADS)

    Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel

    2000-03-01

    Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a Computer Vision Camera based on an open architecture implemented in an FPGA. The system is targeted to real-time computer vision tasks where low-level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA is a medium-size device equivalent to 25,000 logic gates. The device is connected to two high-speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a Hardware Description Language (such as VHDL), simulated, and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.

  7. Remote-controlled vision-guided mobile robot system

    NASA Astrophysics Data System (ADS)

    Ande, Raymond; Samu, Tayib; Hall, Ernest L.

    1997-09-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of the remote-controlled emergency stop and vision systems for an autonomous mobile robot. The remote control provides human supervision and emergency stop capabilities for the autonomous vehicle. The vision guidance provides automatic operation. A mobile robot test-bed has been constructed using a golf cart base. The mobile robot (Bearcat) was built for the Association for Unmanned Vehicle Systems (AUVS) 1997 competition. The mobile robot has full speed control, with guidance provided by a vision system and an obstacle avoidance system using ultrasonic sensors. Vision guidance is accomplished using two CCD cameras with zoom lenses. The vision data are processed by a high-speed tracking device that communicates to the computer the X, Y coordinates of blobs along the lane markers. The system also has three emergency stop switches and a remote-controlled emergency stop switch that can disable the traction motor and set the brake. Testing of these systems has been done in the lab as well as on an outside test track, with positive results showing that at five mph the vehicle can follow a line and at the same time avoid obstacles.

  8. Vision Based Localization in Urban Environments

    NASA Technical Reports Server (NTRS)

    McHenry, Michael; Cheng, Yang; Matthies, Larry

    2005-01-01

    As part of DARPA's MARS2020 program, the Jet Propulsion Laboratory developed a vision-based system for localization in urban environments that requires neither GPS nor active sensors. System hardware consists of a pair of small FireWire cameras and a standard Pentium-based computer. The inputs to the software system consist of: 1) a crude grid-based map describing the positions of buildings, 2) an initial estimate of robot location and 3) the video streams produced by each camera. At each step during the traverse the system: captures new image data, finds image features hypothesized to lie on the outside of a building, computes the range to those features, determines an estimate of the robot's motion since the previous step and combines that data with the map to update a probabilistic representation of the robot's location. This probabilistic representation allows the system to simultaneously represent multiple possible locations. For our testing, we have derived the a priori map manually using non-orthorectified overhead imagery, although this process could be automated. The software system consists of two primary components. The first is the vision system, which uses binocular stereo ranging together with a set of heuristics to identify features likely to be part of building exteriors and to compute an estimate of the robot's motion since the previous step. The resulting visual features and the associated range measurements are then fed to the second primary software component, a particle-filter based localization system. This system uses the map and the most recent results from the vision system to update the estimate of the robot's location. This report summarizes the design of both the hardware and software and includes the results of applying the system to the global localization of a robot over an approximately half-kilometer traverse across JPL's Pasadena campus.
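    The particle-filter component can be illustrated with a toy example. The sketch below is a deliberately simplified 1-D analogue (a known wall position standing in for the building map, with assumed motion and range-noise levels), not JPL's implementation: particles carry multiple location hypotheses, range measurements reweight them, and resampling concentrates the set on likely locations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal particle-filter localization sketch (assumed 1-D world): the robot's
# x position is estimated from noisy ranges to a wall at a known map position.
WALL_X = 10.0
true_x = 4.0

particles = rng.uniform(0.0, 10.0, size=2000)   # multiple location hypotheses
for _ in range(10):
    # Motion update: robot moves +0.5 m; particles follow with motion noise.
    true_x += 0.5
    particles += 0.5 + rng.normal(0.0, 0.05, particles.size)

    # Measurement update: range to the wall, sigma 0.2 m (assumed stereo noise).
    z = (WALL_X - true_x) + rng.normal(0.0, 0.2)
    expected = WALL_X - particles
    weights = np.exp(-0.5 * ((z - expected) / 0.2) ** 2) + 1e-300
    weights /= weights.sum()

    # Systematic resampling keeps the set focused on probable locations.
    cs = np.cumsum(weights)
    cs[-1] = 1.0                                 # guard against float round-off
    idx = np.searchsorted(
        cs, (rng.random() + np.arange(particles.size)) / particles.size)
    particles = particles[idx]

estimate = particles.mean()
print(round(estimate, 1))
```

    The posterior mean converges on the true position; in the full system the state is the 2-D pose and the "wall" ranges come from the stereo building features.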

  9. Colour, vision and coevolution in avian brood parasitism.

    PubMed

    Stoddard, Mary Caswell; Hauber, Mark E

    2017-07-05

    The coevolutionary interactions between avian brood parasites and their hosts provide a powerful system for investigating the diversity of animal coloration. Specifically, reciprocal selection pressure applied by hosts and brood parasites can give rise to novel forms and functions of animal coloration, which largely differ from those that arise when selection is imposed by predators or mates. In the study of animal colours, avian brood parasite-host dynamics therefore invite special consideration. Rapid advances across disciplines have paved the way for an integrative study of colour and vision in brood parasite-host systems. We now know that visually driven host defences and host life history have selected for a suite of phenotypic adaptations in parasites, including mimicry, crypsis and supernormal stimuli. This sometimes leads to vision-based host counter-adaptations and increased parasite trickery. Here, we review vision-based adaptations that arise in parasite-host interactions, emphasizing that these adaptations can be visual/sensory, cognitive or phenotypic in nature. We highlight recent breakthroughs in chemistry, genomics, neuroscience and computer vision, and we conclude by identifying important future directions. Moving forward, it will be essential to identify the genetic and neural bases of adaptation and to compare vision-based adaptations to those arising in other sensory modalities. This article is part of the themed issue 'Animal coloration: production, perception, function and application'. © 2017 The Author(s).

  10. Optical devices in highly myopic eyes with low vision: a prospective study.

    PubMed

    Scassa, C; Cupo, G; Bruno, M; Iervolino, R; Capozzi, S; Tempesta, C; Giusti, C

    2012-01-01

    To compare, in relation to the cause of visual impairment, the possibility of rehabilitation, the corrective systems already in use and the finally prescribed optical devices in highly myopic patients with low vision. Some considerations about the rehabilitation of these subjects, especially in relation to their different pathologies, have also been made. 25 highly myopic subjects were enrolled. We evaluated both visual acuity and retinal sensitivity by Scanning Laser Ophthalmoscope (SLO) microperimetry. 20 patients (80%) were rehabilitated by means of monocular optical devices while five patients (20%) were rehabilitated binocularly. We found a good correlation between visual acuity and retinal sensitivity only when the macular pathology did not induce large areas of chorioretinal atrophy that cause lack of stabilization of the preferential retinal locus. In fact, the best results in reading and performing daily visual tasks were obtained by maximizing the residual vision in patients with retinal sensitivity greater than 10 dB. A well circumscribed area of absolute scotoma with a defined new retinal fixation locus could be considered as a positive predictive factor for the final rehabilitation process. A more careful evaluation of visual acuity, retinal sensitivity and preferential fixation locus is necessary in order to prescribe the best optical devices to patients with low vision, thus reducing the impact of the disability on their daily life.

  11. HRV based health&sport markers using video from the face.

    PubMed

    Capdevila, Lluis; Moreno, Jordi; Movellan, Javier; Parrado, Eva; Ramos-Castro, Juan

    2012-01-01

    Heart Rate Variability (HRV) is an indicator of health status in the general population and of adaptation to stress in athletes. In this paper we compare the performance of two systems to measure HRV: (1) a commercial system based on recording the physiological cardiac signal and (2) a computer vision system that uses standard video images of the face to estimate RR intervals from changes in skin color of the face. We show that the computer vision system performs surprisingly well. It estimates individual RR intervals in a non-invasive manner and with error levels comparable to those achieved by the physiology-based system.
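    The video-based idea can be sketched with synthetic data (no real video involved): the mean green-channel value of the face region oscillates slightly with the pulse, and the peak-to-peak spacing yields RR intervals. The frame rate, heart rate, and signal amplitude below are assumed values.

```python
import numpy as np

# Synthetic stand-in for the per-frame mean green value of the face region.
fps = 30.0
t = np.arange(0, 30, 1 / fps)
heart_rate_hz = 1.2                              # 72 bpm, assumed
green_mean = 0.5 + 0.01 * np.sin(2 * np.pi * heart_rate_hz * t)  # tiny color change

# Detect local maxima (one per pulse peak).
g = green_mean
peaks = np.where((g[1:-1] > g[:-2]) & (g[1:-1] > g[2:]))[0] + 1

rr_intervals = np.diff(peaks) / fps              # seconds between beats
mean_rr = rr_intervals.mean()
print(round(mean_rr, 3))
```

    Real face video would need region tracking, detrending and band-pass filtering before peak detection; the sketch only shows the RR-extraction step on a clean signal.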

  12. Vision Based Autonomous Robotic Control for Advanced Inspection and Repair

    NASA Technical Reports Server (NTRS)

    Wehner, Walter S.

    2014-01-01

    The advanced inspection system is an autonomous control and analysis system that improves inspection and remediation operations for ground and surface systems. It uses optical imaging technology with intelligent computer vision algorithms to analyze physical features of the real-world environment, make decisions, and learn from experience. The advanced inspection system is designed to control a robotic manipulator arm, an unmanned ground vehicle, and cameras remotely, automatically, and autonomously. There are many computer vision, image processing, and machine learning techniques available as open source for using vision as sensory feedback in decision-making and autonomous robotic movement. My responsibilities for the advanced inspection system are to create a software architecture that integrates and provides a framework for all the different subsystem components; identify open-source algorithms and techniques; and integrate the robot hardware.

  13. Magician Simulator: A Realistic Simulator for Heterogenous Teams of Autonomous Robots. MAGIC 2010 Challenge

    DTIC Science & Technology

    2011-02-07

    Sensor UGVs (SUGV) or Disruptor UGVs, depending on their payload. The SUGVs included vision, GPS/IMU, and LIDAR systems for identifying and tracking...employed by all the MAGICian research groups. Objects of interest were tracked using standard LIDAR and Computer Vision template-based feature...tracking approaches. Mapping was solved through Multi-Agent particle-filter based Simultaneous Localization and Mapping (SLAM). Our system contains

  14. Vision communications based on LED array and imaging sensor

    NASA Astrophysics Data System (ADS)

    Yoo, Jong-Ho; Jung, Sung-Yoon

    2012-11-01

    In this paper, we propose a brand new communication concept, called "vision communication", based on an LED array and an image sensor. This system consists of an LED array as the transmitter and a digital device that includes an image sensor, such as a CCD or CMOS sensor, as the receiver. To transmit data, the proposed communication scheme simultaneously uses digital image processing and optical wireless communication techniques. Therefore, a cognitive communication scheme is possible with the help of the recognition techniques used in vision systems. To increase the data rate, our scheme can use an LED array consisting of several multi-spectral LEDs. Because each arranged LED can emit a multi-spectral optical signal such as visible, infrared or ultraviolet light, an increase in data rate is possible, similar to the WDM and MIMO techniques used in traditional optical and wireless communications. In addition, this multi-spectral capability also makes it possible to avoid optical noise in the communication environment. In our vision communication scheme, the data packet is composed of Sync. data and information data. The Sync. data are used to detect the transmitter area and calibrate the distorted image snapshots obtained by the image sensor. By matching the optical rate of the LED array to the frame rate (frames per second) of the image sensor, we can decode the information data included in each image snapshot based on image processing and optical wireless communication techniques. Through experiments on a practical test-bed system, we confirm the feasibility of the proposed vision communication based on an LED array and an image sensor.
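    A toy decode step might look as follows, assuming the Sync. data has already located and rectified the transmitter region into a known grid (the array size, cell size, intensity levels, and noise are all assumed values, not the paper's parameters):

```python
import numpy as np

# Toy decode of one camera snapshot of a 4x4 LED array.
rng = np.random.default_rng(0)
bits_tx = np.array([1, 0, 1, 1,
                    0, 1, 0, 0,
                    1, 1, 0, 1,
                    0, 0, 1, 0]).reshape(4, 4)

# Render each LED as an 8x8 pixel cell: ON ~ bright, OFF ~ dark, plus noise.
cell = np.ones((8, 8))
snapshot = np.kron(bits_tx, cell) * 0.8 + 0.1 + rng.normal(0, 0.05, (32, 32))

# Receiver: average each 8x8 cell and threshold to recover the bits.
cells = snapshot.reshape(4, 8, 4, 8).mean(axis=(1, 3))
bits_rx = (cells > 0.5).astype(int)

print(int((bits_rx == bits_tx).all()))
```

    One snapshot carries one bit per LED per frame; multi-spectral LEDs would add further parallel channels on top of this spatial grid, in the WDM-like sense described above.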

  15. Development of Fiber-Based Laser Systems for LISA

    NASA Technical Reports Server (NTRS)

    Numata, Kenji; Camp, Jordan

    2010-01-01

    We present efforts on fiber-based laser systems for the LISA mission at the NASA Goddard Space Flight Center. A fiber-based system has the advantage of higher robustness against external disturbances and easier implementation of redundancies. For a master oscillator, we are developing a ring fiber laser and evaluating two commercial products, a DBR linear fiber laser and a planar-waveguide external cavity diode laser. They all have comparable performance to a traditional NPRO at LISA band. We are also performing reliability tests of a 2-W Yb fiber amplifier and radiation tests of fiber laser/amplifier components. We describe our progress to date and discuss the path to a working LISA laser system design.

  16. History of infrared optronics in France

    NASA Astrophysics Data System (ADS)

    Fouilloy, J. P.; Siriex, Michel B.

    1995-09-01

    In France, the real start of work on the applications of infrared radiations occurred around 1947 - 1948. During many years, technological research was performed in the field of detectors, optical material, modulation techniques, and a lot of measurements were made in order to acquire a better knowledge of the propagation medium and radiation of IR sources, namely those of jet engines. The birth of industrial infrared activities in France started with the Franco-German missile guidance programs: Milan, HOT, Roland and the French air to air missile seeker programs: R530, MAGIC. At these early stages of IR technologies development, it was a great technical adventure for both the governmental agencies and industry to develop: detector technology with PbS and InSb, detector cooling for 3 - 5 micrometer wavelength range, optical material transparent in the infrared, opto mechanical design, signal processing and related electronic technologies. Etablissement Jean Turck and SAT were the pioneers associated with Aerospatiale, Matra and under contracts from the French Ministry of Defence (DGA). In the 60s, the need arose to enhance night vision capability of equipment in service with the French Army. TRT was chosen by DGA to develop the first thermal imagers: LUTHER 1, 2, and 3 with an increasing number of detectors and image frequency rate. This period was also the era in which the SAT detector made rapid advance. After basic work done in the CNRS and with the support of DGA, SAT became the world leader of MCT photovoltaic detector working in the 8 to 12 micron waveband. From 1979, TRT and SAT were given the responsibility for the joint development and production of the first generation French thermal imaging modular system so-called SMT. Now, THOMSON TTD Optronique takes over the opto-electronics activities of TRT. 
Laser-based systems were also studied for military applications using YAG-type and CO2 lasers: Laboratoire de Marcoussis, CILAS, THOMSON CSF and SAT developed prototypes during the 70s for laser range finders, lidar, laser weapons, and target designators. The constant need to develop increasingly efficient infrared equipment led to a significant increase in the number of detector elements, implying the integration of the detector and multiplexer electronics. After tests on several possible technologies at SAT, THOMSON CSF, and LETI, the work performed by these teams in 1980 was concentrated on the development of an MCT-type IRCCD detector. The selection of this detector technology for the TRIGAT program led to the creation in 1986 of SOFRADIR, pooling the different existing expertise. Much other first-generation equipment was created during the 80s and is now in production: IRST for naval and airborne applications; IR line scanners for airborne reconnaissance; light thermal imagers for man-portable weapons; infrared seekers for ground-to-air and air-to-air missiles; thermal sights for submarine, tank, and missile launch systems; night vision systems for helicopters and aircraft; air-to-ground attack pods for night and day operations.

  17. Autostereoscopic three-dimensional display by combining a single spatial light modulator and a zero-order nulled grating

    NASA Astrophysics Data System (ADS)

    Su, Yanfeng; Cai, Zhijian; Liu, Quan; Lu, Yifan; Guo, Peiliang; Shi, Lingyan; Wu, Jianhong

    2018-04-01

    In this paper, an autostereoscopic three-dimensional (3D) display system based on synthetic hologram reconstruction is proposed and implemented. The system uses a single phase-only spatial light modulator to load the synthetic hologram of the left and right stereo images, and the parallax angle between the two reconstructed stereo images is enlarged by a grating to meet the split angle requirement of normal stereoscopic vision. To realize a crosstalk-free autostereoscopic 3D display with high light utilization efficiency, the groove parameters of the grating are specifically designed by rigorous coupled-wave theory to suppress the zero-order diffraction, and the zero-order nulled grating is then fabricated by holographic lithography and ion beam etching. Furthermore, the diffraction efficiency of the fabricated grating is measured under the illumination of a laser beam with a wavelength of 532 nm. Finally, the experimental verification system for the proposed autostereoscopic 3D display is presented. The experimental results prove that the proposed system is able to generate stereoscopic 3D images with good performance.
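    The angular split delivered by such a grating follows from the standard grating equation, sin θ_m = mλ/d. The sketch below uses the paper's 532 nm wavelength but an assumed 2 µm grating period, since the actual groove parameters are not given here.

```python
import numpy as np

# Grating equation sin(theta_m) = m * wavelength / period.
wavelength = 532e-9                  # design wavelength from the paper, m
period = 2.0e-6                      # assumed grating period, m (illustrative)

orders = np.array([-1, 0, 1])
sin_theta = orders * wavelength / period
theta_deg = np.degrees(np.arcsin(sin_theta))

# With the zero order nulled by the groove design, the +/-1 orders carry the
# left/right stereo images, separated by twice the first-order angle.
split_angle = 2 * theta_deg[2]
print(round(split_angle, 2))
```

    Shrinking the period widens the split angle, which is how the grating geometry is matched to the split angle required for normal stereoscopic vision.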

  18. Use of 3D vision for fine robot motion

    NASA Technical Reports Server (NTRS)

    Lokshin, Anatole; Litwin, Todd

    1989-01-01

    An integration of 3-D vision systems with robot manipulators will allow robots to operate in a poorly structured environment by visually locating targets and obstacles. However, using computer vision for object acquisition makes the problem of overall system calibration even more difficult. Indeed, in CAD-based manipulation, a control architecture has to find an accurate mapping between the 3-D Euclidean work space and the robot configuration space (joint angles). If stereo vision is involved, then one needs to map a pair of 2-D video images directly into the robot configuration space. Neural network approaches aside, a common solution to this problem is to calibrate the vision system and the manipulator independently, and then tie them together via a common mapping into the task space. In other words, both the vision system and the robot refer to some common absolute Euclidean coordinate frame via their individual mappings. This approach has two major difficulties. First, the vision system has to be calibrated over the total work space. Second, the absolute frame, which is usually quite arbitrary, has to be the same, to a high degree of precision, for both the robot and vision subsystem calibrations. The use of computer vision to allow robust fine-motion manipulation in a poorly structured world, which is currently in progress, is described along with preliminary results and the problems encountered.
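    The common-absolute-frame idea can be written down directly with homogeneous transforms. The sketch below uses assumed camera and robot poses (planar rotations, made-up translations) purely to show the chain: a camera-frame point goes through the shared world frame into the robot base frame, so any error in either calibration's world frame propagates into the final mapping.

```python
import numpy as np

def transform(rz_deg, t):
    """Homogeneous transform: rotation about z (degrees) plus a translation."""
    a = np.radians(rz_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
    T[:3, 3] = t
    return T

# Assumed calibration results: each subsystem's pose in the common world frame.
T_world_cam = transform(90.0, [1.0, 0.0, 2.0])    # camera pose in world frame
T_world_robot = transform(0.0, [3.0, 1.0, 0.0])   # robot base pose in world frame

p_cam = np.array([0.5, 0.0, 1.0, 1.0])            # target point seen by stereo

# Map camera frame -> world frame -> robot base frame.
p_robot = np.linalg.inv(T_world_robot) @ T_world_cam @ p_cam
print(np.round(p_robot[:3], 3))
```

    The robot-frame coordinates would then be passed to inverse kinematics; the two difficulties in the abstract correspond to errors in `T_world_cam` (vision calibrated over only part of the workspace) and to the two transforms not sharing exactly the same world frame.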

  19. Relative Proportion Of Different Types Of Refractive Errors In Subjects Seeking Laser Vision Correction.

    PubMed

    Althomali, Talal A

    2018-01-01

    Refractive errors are a form of optical defect affecting more than 2.3 billion people worldwide. As refractive errors are a major contributor to mild to moderate vision impairment, assessment of their relative proportion would be helpful in the strategic planning of health programs. To determine the pattern of the relative proportion of types of refractive errors among adult candidates seeking laser-assisted refractive correction in a private clinic setting in Saudi Arabia. The clinical charts of 687 patients (1374 eyes) with mean age 27.6 ± 7.5 years who desired laser vision correction and underwent a pre-LASIK work-up were reviewed retrospectively. Refractive errors were classified as myopia, hyperopia and astigmatism. Manifest refraction spherical equivalent (MRSE) was applied to define refractive errors. Distribution percentage of different types of refractive errors: myopia, hyperopia and astigmatism. The mean spherical equivalent for the 1374 eyes was -3.11 ± 2.88 D. Of the total 1374 eyes, 91.8% (n = 1262) had myopia, 4.7% (n = 65) had hyperopia and 3.4% (n = 47) had emmetropia with astigmatism. The distribution percentage of astigmatism (cylinder error of ≥ 0.50 D) was 78.5% (1078/1374 eyes), of which 69.1% (994/1374) had low to moderate astigmatism and 9.4% (129/1374) had high astigmatism. Of the adult candidates seeking laser refractive correction in a private setting in Saudi Arabia, myopia represented the greatest burden, with more than 90% of eyes myopic, compared to hyperopia in nearly 5% of eyes. Astigmatism was present in more than 78% of eyes.

  20. 77 FR 16890 - Eighteenth Meeting: RTCA Special Committee 213, Enhanced Flight Visions Systems/Synthetic Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-22

    ... Committee 213, Enhanced Flight Visions Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Visions Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing... Flight Visions Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held April 17-19...

  1. 78 FR 5557 - Twenty-First Meeting: RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-25

    ... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing..., Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held...

  2. 77 FR 56254 - Twentieth Meeting: RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-12

    ... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing... Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held October 2-4...

  3. Vision sensing techniques in aeronautics and astronautics

    NASA Technical Reports Server (NTRS)

    Hall, E. L.

    1988-01-01

    The close relationship between sensing and other tasks in orbital space, and the integral role of vision sensing in practical aerospace applications, are illustrated. Typical space mission-vision tasks encompass the docking of space vehicles, the detection of unexpected objects, the diagnosis of spacecraft damage, and the inspection of critical spacecraft components. Attention is presently given to image functions, the 'windowing' of a view, the number of cameras required for inspection tasks, the choice of incoherent or coherent (laser) illumination, three-dimensional-to-two-dimensional model-matching, edge- and region-segmentation techniques, and motion analysis for tracking.

  4. Data-Fusion for a Vision-Aided Radiological Detection System: Sensor dependence and Source Tracking

    NASA Astrophysics Data System (ADS)

    Stadnikia, Kelsey; Martin, Allan; Henderson, Kristofer; Koppal, Sanjeev; Enqvist, Andreas

    2018-01-01

    The University of Florida is taking a multidisciplinary approach to fuse the data between 3D vision sensors and radiological sensors in hopes of creating a system capable of not only detecting the presence of a radiological threat, but also tracking it. The key to developing such a vision-aided radiological detection system lies in the count rate being inversely dependent on the square of the distance. Presented in this paper are the results of the calibration algorithm used to predict the location of the radiological detectors based on the 3D distance from the source to the detector (vision data) and the detector's count rate (radiological data). Also presented are the results of two correlation methods used to explore source tracking.
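    The inverse-square relationship the calibration exploits can be sketched in a few lines. The source term and background rate below are assumed values for illustration, not the paper's calibration results.

```python
# Inverse-square model: expected count rate at distance d (from the 3D vision
# data) is S / d**2 + B, for source term S and background rate B (both assumed).
S = 400.0          # assumed source term, counts * m^2 / s
B = 5.0            # assumed background rate, counts / s

def expected_count_rate(d):
    """Count rate predicted from the vision-measured source-detector distance."""
    return S / d**2 + B

def distance_from_rate(rate):
    """Inverting the model: distance implied by a measured count rate."""
    return (S / (rate - B)) ** 0.5

rate = expected_count_rate(2.0)      # predicted rate at 2 m
print(round(distance_from_rate(rate), 3))
```

    It is this invertibility that lets radiological counts be fused with the vision-measured geometry: distances predict rates, and rates constrain distances.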

  5. Line width determination using a biomimetic fly eye vision system.

    PubMed

    Benson, John B; Wright, Cameron H G; Barrett, Steven F

    2007-01-01

    Developing a new vision system based on the vision of the common house fly, Musca domestica, has created many interesting design challenges. One of those problems is line width determination, which is the topic of this paper. It has been discovered that line width can be determined with a single sensor as long as either the sensor, or the object in question, has a constant, known velocity. This is an important first step for determining the width of any arbitrary object, with unknown velocity.
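    The single-sensor width measurement reduces to time-over-threshold times velocity. The sketch below uses assumed sample values (sensor rate, velocity, threshold, and a simulated trace), not the biomimetic sensor's actual signals.

```python
# Line width from a single sensor with known constant velocity:
# width = (time the sensor output stays above threshold) * velocity.
fs = 1000.0                 # sensor sample rate, Hz (assumed)
velocity = 0.05             # relative velocity, m/s (assumed, constant and known)
threshold = 0.5             # detection threshold (assumed)

# Simulated sensor trace: the line is "seen" for 40 samples.
trace = [0.1] * 100 + [0.9] * 40 + [0.1] * 100

samples_on = sum(1 for v in trace if v > threshold)
line_width = samples_on / fs * velocity   # seconds above threshold * m/s
print(line_width)
```

    With 40 samples above threshold at 1 kHz and 0.05 m/s, the recovered width is 2 mm; the paper's point is that this only works because the velocity is constant and known.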

  6. Anti-vascular endothelial growth factor for diabetic macular oedema.

    PubMed

    Virgili, Gianni; Parravano, Mariacristina; Menchini, Francesca; Evans, Jennifer R

    2014-10-24

    Diabetic macular oedema (DMO) is a common complication of diabetic retinopathy. Although grid or focal laser photocoagulation has been shown to reduce the risk of visual loss in DMO, or clinically significant macular oedema (CSMO), vision is rarely improved. Antiangiogenic therapy with anti-vascular endothelial growth factor (anti-VEGF) modalities is used to try to improve vision in people with DMO. To investigate the effects in preserving and improving vision and acceptability, including the safety, compliance with therapy and quality of life, of antiangiogenic therapy with anti-VEGF modalities for the treatment of DMO. We searched CENTRAL (which contains the Cochrane Eyes and Vision Group Trials Register) (2014, Issue 3), Ovid MEDLINE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Ovid MEDLINE Daily, Ovid OLDMEDLINE (January 1946 to April 2014), EMBASE (January 1980 to April 2014), Latin American and Caribbean Health Sciences Literature Database (LILACS) (January 1982 to April 2014), the metaRegister of Controlled Trials (mRCT) (www.controlled-trials.com), ClinicalTrials.gov (www.clinicaltrials.gov) and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) (www.who.int/ictrp/search/en). We did not use any date or language restrictions in the electronic searches for trials. We last searched the electronic databases on 28 April 2014. We included randomised controlled trials (RCTs) comparing any antiangiogenic drugs with an anti-VEGF mechanism of action versus another treatment, sham treatment or no treatment in people with DMO. We used standard methodological procedures expected by The Cochrane Collaboration. The risk ratios (RR) for visual loss and visual gain of three or more lines of logMAR visual acuity were estimated at one year of follow-up (plus or minus six months) after treatment initiation. Eighteen studies provided data on four comparisons of interest in this review. 
Participants in the trials had central DMO and moderate vision loss. Compared with grid laser photocoagulation, people treated with antiangiogenic therapy were more likely to gain 3 or more lines of vision at one year (RR 3.6, 95% confidence interval (CI) 2.7 to 4.8, 10 studies, 1333 cases, high quality evidence) and less likely to lose 3 or more lines of vision (RR 0.11, 95% CI 0.05 to 0.24, 7 studies, 1086 cases, high quality evidence). In meta-analyses, no significant subgroup difference was demonstrated between bevacizumab, ranibizumab and aflibercept for the two primary outcomes, but there was little power to detect a difference. The quality of the evidence was judged to be high, because the effect was large, precisely measured and did not vary across studies, although some studies were at high or unclear risk of bias for one or more domains. Regarding absolute benefit, we estimated that 8 out of 100 participants with DMO may gain 3 or more lines of visual acuity using photocoagulation whereas 28 would do so with antiangiogenic therapy, meaning that 100 participants need to be treated with antiangiogenic therapy to allow 20 more people (95% CI 13 to 29) to markedly improve their vision after one year. People treated with anti-VEGF on average had 1.6 lines better vision (95% CI 1.4 to 1.8) after one year compared to laser photocoagulation (9 studies, 1292 cases, high quality evidence). To achieve this result, seven to nine injections were delivered in the first year and three or four in the second, in larger studies adopting either as-needed regimens with monthly monitoring or fixed regimens. In other analyses antiangiogenic therapy was more effective than sham (3 studies on 497 analysed participants, high quality evidence) and ranibizumab associated with laser was more effective than laser alone (4 studies on 919 participants, high quality evidence). Ocular severe adverse events, such as endophthalmitis, were rare in the included studies.
Meta-analyses conducted for all antiangiogenic drugs compared with either sham or photocoagulation did not show a significant difference regarding serious systemic adverse events (15 studies, 441 events in 2985 participants, RR 0.98, 95% CI 0.83 to 1.17), arterial thromboembolic events (14 studies, 129 events in 3034 participants, RR 0.89, 95% CI 0.63 to 1.25) and overall mortality (63 events in 3562 participants, RR 0.88, 95% CI 0.52 to 1.47). We judged the quality of the evidence on adverse effects as moderate due to partial reporting of safety data and the exclusion of participants with previous cardiovascular events in some studies. There is high quality evidence that antiangiogenic drugs provide a benefit compared to the current therapeutic option for DMO, that is, grid laser photocoagulation, in clinical trial populations at one or two years. Future research should investigate differences between drugs, effectiveness under real-world monitoring and treatment conditions, and safety in high-risk populations, particularly regarding cardiovascular risk.
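The absolute-benefit arithmetic in the review above can be reproduced from the reported rates; a minimal sketch (the 8/100 and 28/100 rates are taken from the review text, the variable names are ours):

```python
# Rates reported in the review: 8 of 100 participants gain 3+ logMAR lines
# with laser photocoagulation, versus 28 of 100 with anti-VEGF therapy.
laser_gain = 8 / 100
anti_vegf_gain = 28 / 100

risk_ratio = anti_vegf_gain / laser_gain        # ~3.5 (pooled review RR: 3.6)
risk_difference = anti_vegf_gain - laser_gain   # 0.20 -> 20 more gainers per 100 treated
number_needed_to_treat = 1 / risk_difference    # ~5 patients per additional gainer
```

The number needed to treat is simply the reciprocal of the absolute risk difference, which is why treating 100 patients yields about 20 additional people with a marked vision gain.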

  7. Parallel asynchronous systems and image processing algorithms

    NASA Technical Reports Server (NTRS)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision system type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.

  8. Precision Timing and Measurement for Inference with Laser and Vision

    DTIC Science & Technology

    2010-01-01

    Robotics 24, 8-9 (2007), 699–722. [84] Nüchter, A., Surmann, H., Lingemann, K., and Hertzberg, J. Semantic scene analysis of scanned 3D indoor...Delaunay based triangulation can be obtained. Popping the points back to their 3D locations yields a feature sensitive surface mesh. Sequeira et al... texture maps acquired from cameras, the eye can be fooled into believing that the mesh quality is better than it really is. Range images are used to

  9. Noninvasive forward-scattering system for rapid detection, characterization, and identification of Listeria colonies: image processing and data analysis

    NASA Astrophysics Data System (ADS)

    Rajwa, Bartek; Bayraktar, Bulent; Banada, Padmapriya P.; Huff, Karleigh; Bae, Euiwon; Hirleman, E. Daniel; Bhunia, Arun K.; Robinson, J. Paul

    2006-10-01

    Bacterial contamination by Listeria monocytogenes puts the public at risk and is also costly for the food-processing industry. Traditional methods for pathogen identification require complicated sample preparation for reliable results. Previously, we have reported development of a noninvasive optical forward-scattering system for rapid identification of Listeria colonies grown on solid surfaces. The presented system included application of computer-vision and pattern-recognition techniques to classify scatter patterns formed by bacterial colonies irradiated with laser light. This report shows an extension of the proposed method. A new scatterometer equipped with a high-resolution CCD chip and application of two additional sets of image features for classification allow for higher accuracy and lower error rates. Features based on Zernike moments are supplemented by Tchebichef moments and Haralick texture descriptors in the new version of the algorithm. Fisher's criterion has been used for feature selection to decrease the training time of machine learning systems. An algorithm based on support vector machines was used for classification of patterns. Low error rates determined by cross-validation, reproducibility of the measurements, and robustness of the system prove that the proposed technology can be implemented in automated devices for detection and classification of pathogenic bacteria.
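Fisher's criterion, mentioned above for feature selection, ranks each feature by its between-class separation relative to its within-class scatter. A small illustrative sketch (the function and the toy data are ours, not from the paper):

```python
import numpy as np

def fisher_score(X, y):
    """Fisher's criterion per feature: between-class scatter / within-class scatter."""
    classes = np.unique(y)
    scores = []
    for j in range(X.shape[1]):
        mean_all = X[:, j].mean()
        between = sum(np.sum(y == c) * (X[y == c, j].mean() - mean_all) ** 2
                      for c in classes)
        within = sum(np.sum(y == c) * X[y == c, j].var() for c in classes)
        scores.append(between / within)
    return np.array(scores)

# Toy data: feature 0 separates the two classes, feature 1 is essentially noise.
X = np.array([[0.0, 1.0], [0.1, 0.9], [1.0, 1.1], [1.1, 1.0]])
y = np.array([0, 0, 1, 1])
scores = fisher_score(X, y)   # feature 0 scores far higher than feature 1
```

Keeping only the top-scoring features before training the SVM reduces training time, which is the role the criterion plays in the reported system.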

  10. A stereo vision-based obstacle detection system in vehicles

    NASA Astrophysics Data System (ADS)

    Huh, Kunsoo; Park, Jaehak; Hwang, Junyeon; Hong, Daegun

    2008-02-01

    Obstacle detection is a crucial issue for driver assistance systems as well as for autonomous vehicle guidance functions, and it has to be performed with high reliability to avoid any potential collision with the front vehicle. Vision-based obstacle detection systems are regarded as promising for this purpose because they require little infrastructure on a highway. However, the feasibility of these systems in passenger cars requires accurate and robust sensing performance. In this paper, an obstacle detection system using stereo vision sensors is developed. This system utilizes feature matching, the epipolar constraint and feature aggregation in order to robustly detect the initial corresponding pairs. After the initial detection, the system executes a tracking algorithm for the obstacles. The proposed system can detect a front obstacle, a leading vehicle and a vehicle cutting into the lane. Then, the position parameters of the obstacles and leading vehicles can be obtained. The proposed obstacle detection system is implemented on a passenger car and its performance is verified experimentally.
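Once corresponding pairs are established in a rectified stereo setup, range follows from the standard triangulation relation Z = fB/d. A minimal sketch (the parameter values are illustrative, not taken from the paper):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Rectified stereo: depth Z = f * B / d, with f in pixels, baseline B in metres."""
    if disparity_px <= 0:
        raise ValueError("non-positive disparity has no finite depth")
    return focal_px * baseline_m / disparity_px

# e.g. f = 800 px, 0.5 m baseline, 20 px disparity -> 20 m to the obstacle
range_m = depth_from_disparity(800.0, 0.5, 20.0)
```

The inverse dependence on disparity is why stereo range accuracy degrades quadratically with distance, motivating the robust matching the paper emphasizes.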

  11. Networks for image acquisition, processing and display

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.

    1990-01-01

    The human visual system comprises layers of networks which sample, process, and code images. Understanding these networks is a valuable means of understanding human vision and of designing autonomous vision systems based on network processing. Ames Research Center has an ongoing program to develop computational models of such networks. The models predict human performance in detection of targets and in discrimination of displayed information. In addition, the models are artificial vision systems sharing properties with biological vision that has been tuned by evolution for high performance. Properties include variable density sampling, noise immunity, multi-resolution coding, and fault-tolerance. The research stresses analysis of noise in visual networks, including sampling, photon, and processing unit noises. Specific accomplishments include: models of sampling array growth with variable density and irregularity comparable to that of the retinal cone mosaic; noise models of networks with signal-dependent and independent noise; models of network connection development for preserving spatial registration and interpolation; multi-resolution encoding models based on hexagonal arrays (HOP transform); and mathematical procedures for simplifying analysis of large networks.

  12. Closed cycle electric discharge laser design investigation

    NASA Technical Reports Server (NTRS)

    Baily, P. K.; Smith, R. C.

    1978-01-01

    Closed cycle CO2 and CO electric discharge lasers were studied. An analytical investigation assessed scale-up parameters and design features for CO2, closed cycle, continuous wave, unstable resonator, electric discharge lasing systems operating in space and airborne environments. A space based CO system was also examined. The program objectives were the conceptual designs of six CO2 systems and one CO system. Three airborne CO2 designs, with one, five, and ten megawatt outputs, were produced. These designs were based upon five minute run times. Three space based CO2 designs, with the same output levels, were also produced, but based upon one year run times. In addition, a conceptual design for a one megawatt space based CO laser system was also produced. These designs include the flow loop, compressor, and heat exchanger, as well as the laser cavity itself. The designs resulted in a laser loop weight for the space based five megawatt system that is within the space shuttle capacity. For the one megawatt systems, the estimated weight of the entire system including laser loop, solar power generator, and heat radiator is less than the shuttle capacity.

  13. Computer-assisted surgery of the paranasal sinuses: technical and clinical experience with 368 patients, using the Vector Vision Compact system.

    PubMed

    Stelter, K; Andratschke, M; Leunig, A; Hagedorn, H

    2006-12-01

    This paper presents our experience with a navigation system for functional endoscopic sinus surgery. In this study, we took particular note of the surgical indications and risks and the measurement precision and preparation time required, and we present one brief case report as an example. Between 2000 and 2004, we performed functional endoscopic sinus surgery on 368 patients at the Ludwig Maximilians University, Munich, Germany. We used the Vector Vision Compact system (BrainLAB) with laser registration. The indications for surgery ranged from severe nasal polyps and chronic sinusitis to malignant tumours of the paranasal sinuses and skull base. The time needed for data preparation was less than five minutes. The time required for preparation and patient registration depended on the method used and the experience of the user. In the later cases, it took 11 minutes on average, using Z-Touch registration. The clinical plausibility test produced an average deviation of 1.3 mm. The complications of system use comprised one intra-operative re-registration (18 per cent) and one complete failure (5 per cent). Despite the assistance of an accurate working computer, the anterior ethmoidal artery was incised in one case. However, in all 368 cases, we experienced no cerebrospinal fluid leaks, optic nerve lesions, retrobulbar haematomas or intracerebral bleeding. There were no deaths. From our experience with computer-guided surgical procedures, we conclude that computer-guided navigational systems are so accurate that the risk of misleading the surgeon is minimal. In the future, their use in certain specialized procedures will be not only sensible but mandatory. We recommend their use not only in difficult surgical situations but also in routine procedures and for surgical training.

  14. Machine Vision-Based Measurement Systems for Fruit and Vegetable Quality Control in Postharvest.

    PubMed

    Blasco, José; Munera, Sandra; Aleixos, Nuria; Cubero, Sergio; Molto, Enrique

    Individual items of any agricultural commodity are different from each other in terms of colour, shape or size. Furthermore, as they are living things, they change their quality attributes over time, thereby making the development of accurate automatic inspection machines a challenging task. Machine vision-based systems and new optical technologies make it feasible to create non-destructive control and monitoring tools for quality assessment to ensure adequate accomplishment of food standards. Such systems are much faster than any manual non-destructive examination of fruit and vegetable quality, thus allowing the whole production to be inspected with objective and repeatable criteria. Moreover, current technology makes it possible to inspect the fruit in spectral ranges beyond the sensitivity of the human eye, for instance in the ultraviolet and near-infrared regions. Machine vision-based applications require the use of multiple technologies and knowledge, ranging from those related to image acquisition (illumination, cameras, etc.) to the development of algorithms for spectral image analysis. Machine vision-based systems for inspecting fruit and vegetables are targeted towards different purposes, from in-line sorting into commercial categories to the detection of contaminants or the distribution of specific chemical compounds on the product's surface. This chapter summarises the current state of the art in these techniques, starting with systems based on colour images for the inspection of conventional colour, shape or external defects, and then goes on to consider recent developments in spectral image analysis for internal quality assessment or contaminant detection.

  15. Laser applications in ophthalmology: overview

    NASA Astrophysics Data System (ADS)

    Soederberg, Per G.

    1992-03-01

    In 1961, one year after its invention, the laser was used for experimental photocoagulation in animals. In 1963 it was tried for treatment of human eyes. Because the optical media of the eye transmit light, the laser offers the unique possibility of measuring and manipulating within a very strict localization without opening the eye. The properties of laser light are increasingly exploited for diagnostics in ophthalmic disease. The introduction of the laser as a tool in ophthalmology has revolutionized ophthalmic treatment. Unfortunately, it has been pointed out in international peace meetings that the biological effect evoked by lasers can also be used for intentional destruction of the vision of enemy soldiers. To prevent such an abuse of lasers against eyes, a strong formal international anti-laser weapon movement has been initiated.

  16. Laser radar: historical prospective-from the East to the West

    NASA Astrophysics Data System (ADS)

    Molebny, Vasyl; McManamon, Paul; Steinvall, Ove; Kobayashi, Takao; Chen, Weibiao

    2017-03-01

    This article discusses the history of laser radar development in America, Europe, and Asia. Direct detection laser radar is discussed for range finding, designation, and topographic mapping of Earth and of extraterrestrial objects. Coherent laser radar is discussed for environmental applications, such as wind sensing and for synthetic aperture laser radar development. Gated imaging is discussed through scattering layers for military, medical, and security applications. Laser microradars have found applications in intravascular studies and in ophthalmology for vision correction. Ghost laser radar has emerged as a new technology in theoretical and simulation applications. Laser radar is now emerging as an important technology for applications such as self-driving cars and unmanned aerial vehicles. It is also used by police to measure speed, and in gaming, such as the Microsoft Kinect.

  17. Guidelines on Diabetic Eye Care: The International Council of Ophthalmology Recommendations for Screening, Follow-up, Referral, and Treatment Based on Resource Settings.

    PubMed

    Wong, Tien Y; Sun, Jennifer; Kawasaki, Ryo; Ruamviboonsuk, Paisan; Gupta, Neeru; Lansingh, Van Charles; Maia, Mauricio; Mathenge, Wanjiku; Moreker, Sunil; Muqit, Mahi M K; Resnikoff, Serge; Verdaguer, Juan; Zhao, Peiquan; Ferris, Frederick; Aiello, Lloyd P; Taylor, Hugh R

    2018-05-24

    Diabetes mellitus (DM) is a global epidemic and affects populations in both developing and developed countries, with differing health care and resource levels. Diabetic retinopathy (DR) is a major complication of DM and a leading cause of vision loss in working middle-aged adults. Vision loss from DR can be prevented with broad-level public health strategies, but these need to be tailored to a country's and population's resource setting. Designing DR screening programs, with appropriate and timely referral to facilities with trained eye care professionals, and using cost-effective treatment for vision-threatening levels of DR can prevent vision loss. The International Council of Ophthalmology Guidelines for Diabetic Eye Care 2017 summarize and offer a comprehensive guide for DR screening, referral and follow-up schedules for DR, and appropriate management of vision-threatening DR, including diabetic macular edema (DME) and proliferative DR, for countries with high- and low- or intermediate-resource settings. The guidelines include updated evidence on screening and referral criteria, the minimum requirements for a screening vision and retinal examination, follow-up care, and management of DR and DME, including laser photocoagulation and appropriate use of intravitreal anti-vascular endothelial growth factor inhibitors and, in specific situations, intravitreal corticosteroids. Recommendations for management of DR in patients during pregnancy and with concomitant cataract also are included. The guidelines offer suggestions for monitoring outcomes and indicators of success at a population level.

  18. Vision-based navigation in a dynamic environment for virtual human

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Sun, Ji-Zhou; Zhang, Jia-Wan; Li, Ming-Chu

    2004-06-01

    Intelligent virtual humans are widely required in computer games, ergonomics software, virtual environments and so on. We present a vision-based behavior modeling method to realize smart navigation in a dynamic environment. This behavior model can be divided into three modules: vision, global planning and local planning. Vision is the only channel through which the smart virtual actor gets information from the outside world. Then, the global and local planning modules use the A* and D* algorithms to find a way for the virtual human in a dynamic environment. Finally, the experiments on our test platform (Smart Human System) verify the feasibility of this behavior model.
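The global planner described above relies on A* (with D* handling replanning in the dynamic case). A self-contained grid A* sketch, assuming a 4-connected occupancy grid with unit step costs (the paper's actual world representation is not specified):

```python
import heapq
import itertools

def astar(grid, start, goal):
    """A* on a 4-connected grid (0 = free, 1 = blocked); returns a cell path or None."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    tie = itertools.count()                                  # heap tie-breaker
    frontier = [(h(start), next(tie), start)]
    came = {start: None}
    g = {start: 0}
    closed = set()
    while frontier:
        _, _, node = heapq.heappop(frontier)
        if node in closed:
            continue
        closed.add(node)
        if node == goal:                     # reconstruct path back to start
            path = []
            while node is not None:
                path.append(node)
                node = came[node]
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dx, node[1] + dy)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0):
                ng = g[node] + 1
                if ng < g.get(nxt, float("inf")):
                    g[nxt] = ng
                    came[nxt] = node
                    heapq.heappush(frontier, (ng + h(nxt), next(tie), nxt))
    return None

# A wall across the middle row forces a detour through the right column.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```

D* differs mainly in that it repairs this search incrementally when cell costs change, rather than replanning from scratch, which is what makes it attractive for a dynamic environment.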

  19. A feasibility study of damage detection in beams using high-speed camera (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Wan, Chao; Yuan, Fuh-Gwo

    2017-04-01

    In this paper a method for damage detection in beam structures using a high-speed camera is presented. Traditional methods of damage detection in structures typically involve contact sensors (e.g., piezoelectric sensors or accelerometers) or non-contact sensors (e.g., laser vibrometers), which can be costly and time consuming when inspecting an entire structure. With the popularity of the digital camera and the development of computer vision technology, video cameras offer a viable measurement capability combining higher spatial resolution, remote sensing and low cost. In this study, a damage detection method based on a high-speed camera is proposed. The system setup comprises a high-speed camera and a line laser, which together capture the out-of-plane displacement of a cantilever beam. The cantilever beam, containing an artificial crack, was excited and the vibration process was recorded by the camera. A methodology called motion magnification, which amplifies subtle motions in a video, is used for modal identification of the beam. A finite element model was used for validation of the proposed method. Suggestions for applications of this methodology and challenges in future work are discussed.
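The core idea of motion magnification, amplifying a temporally band-passed component of each pixel's time series, can be illustrated in one dimension (an Eulerian-style sketch with made-up numbers, not the authors' implementation):

```python
import numpy as np

fs = 100.0                                # sample rate in Hz (illustrative)
t = np.arange(200) / fs                   # 2 s of one pixel's intensity over time
frame = 1.0 + 0.01 * np.sin(2 * np.pi * 5.0 * t)   # subtle 5 Hz vibration

# Temporal bandpass around the vibration frequency via an FFT mask.
spectrum = np.fft.rfft(frame)
freqs = np.fft.rfftfreq(frame.size, d=1 / fs)
band = np.fft.irfft(spectrum * ((freqs > 3.0) & (freqs < 8.0)), n=frame.size)

alpha = 20.0                              # magnification factor
magnified = frame + alpha * band          # oscillation now ~21x larger
```

Applied per pixel to a video, this makes sub-pixel structural vibrations visible, which is what enables modal identification from the camera footage.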

  20. Image/video understanding systems based on network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2004-03-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks, and the human brain has been found to emulate similar graph/network models. Symbols, predicates and grammars naturally emerge in such networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure, where nodes are cortical columns. Spatial logic and topology are naturally present in such structures. Mid-level vision processes, such as perceptual grouping and figure-ground separation, are special kinds of network transformations. They convert the primary image structure into a set of more abstract ones, which represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. Composition of network-symbolic models combines learning, classification and analogy with higher-level model-based reasoning in a single framework, and it works similarly to frames and agents. Computational intelligence methods transform images into model-based knowledge representations. Based on such principles, an image/video understanding system can convert images into knowledge models and resolve uncertainty and ambiguity. This allows the creation of intelligent computer vision systems for design and manufacturing.

  1. Defect inspection in hot slab surface: multi-source CCD imaging based fuzzy-rough sets method

    NASA Astrophysics Data System (ADS)

    Zhao, Liming; Zhang, Yi; Xu, Xiaodong; Xiao, Hong; Huang, Chao

    2016-09-01

    To provide an accurate surface defect inspection method and make automated, robust delineation of image regions of interest (ROI) a reality on the production line, a multi-source CCD imaging based fuzzy-rough sets method is proposed for hot slab surface quality assessment. The applicability of the presented method and the devised system is mainly tied to surface quality inspection for strip, billet, slab and similar surfaces. In this work we exploit the complementary advantages of two common machine vision (MV) systems: line array CCD traditional scanning imaging (LS-imaging) and area array CCD laser three-dimensional (3D) scanning imaging (AL-imaging). By establishing a fuzzy-rough sets model in the detection system, the seeds for relative fuzzy connectedness (RFC) delineation of the ROI can be placed adaptively; the model introduces upper and lower approximation sets for the ROI definition, from which the boundary region can be delineated by an RFC region competitive classification mechanism. For the first time, a multi-source CCD imaging based fuzzy-rough sets strategy is attempted for CC-slab surface defect inspection, allowing AI algorithms and powerful ROI delineation strategies to be applied automatically in the MV inspection field.
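The upper/lower approximation idea from rough-set theory, used above to bound the ROI before RFC delineation, can be stated compactly. A generic sketch of the standard crisp definitions (not the paper's fuzzy extension):

```python
def rough_approximations(partition, target):
    """Lower/upper approximations of `target` w.r.t. a partition of the universe
    into indiscernibility blocks; their difference is the boundary region."""
    lower, upper = set(), set()
    for block in partition:
        if block <= target:        # block certainly inside the target set
            lower |= block
        if block & target:         # block possibly inside the target set
            upper |= block
    return lower, upper

# Toy universe {1..5} partitioned into three indiscernibility blocks.
blocks = [{1, 2}, {3, 4}, {5}]
lower, upper = rough_approximations(blocks, {1, 2, 3})
boundary = upper - lower           # elements needing further delineation (e.g. by RFC)
```

In the paper's pipeline, pixels in the lower approximation can seed the ROI with certainty, while the boundary region is resolved by the RFC competitive classification step.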

  2. Robust range estimation with a monocular camera for vision-based forward collision warning system.

    PubMed

    Park, Ki-Yeong; Hwang, Sun-Young

    2014-01-01

    We propose a range estimation method for vision-based forward collision warning systems with a monocular camera. To solve the problem of variation of camera pitch angle due to vehicle motion and road inclination, the proposed method estimates virtual horizon from size and position of vehicles in captured image at run-time. The proposed method provides robust results even when road inclination varies continuously on hilly roads or lane markings are not seen on crowded roads. For experiments, a vision-based forward collision warning system has been implemented and the proposed method is evaluated with video clips recorded in highway and urban traffic environments. Virtual horizons estimated by the proposed method are compared with horizons manually identified, and estimated ranges are compared with measured ranges. Experimental results confirm that the proposed method provides robust results both in highway and in urban traffic environments.
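Under a flat-road pinhole model, the range to a lead vehicle follows from where its base projects relative to the (virtual) horizon. A sketch of that geometry (the parameter values are illustrative; the paper's run-time horizon estimator is not reproduced here):

```python
def range_from_horizon(focal_px, cam_height_m, y_bottom_px, y_horizon_px):
    """Flat road + pinhole camera: range = f * h / (y_bottom - y_horizon)."""
    dy = y_bottom_px - y_horizon_px
    if dy <= 0:
        raise ValueError("vehicle base must project below the horizon")
    return focal_px * cam_height_m / dy

# f = 1000 px, camera 1.2 m above the road, vehicle base 40 px below the horizon
range_m = range_from_horizon(1000.0, 1.2, 540.0, 500.0)   # 30 m
```

Because the estimate hinges on the horizon row, an error in the assumed pitch shifts every range, which is why the paper estimates the virtual horizon at run-time from observed vehicle sizes and positions instead of using a fixed calibration.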

  3. Robust Range Estimation with a Monocular Camera for Vision-Based Forward Collision Warning System

    PubMed Central

    2014-01-01

    We propose a range estimation method for vision-based forward collision warning systems with a monocular camera. To solve the problem of variation of camera pitch angle due to vehicle motion and road inclination, the proposed method estimates virtual horizon from size and position of vehicles in captured image at run-time. The proposed method provides robust results even when road inclination varies continuously on hilly roads or lane markings are not seen on crowded roads. For experiments, a vision-based forward collision warning system has been implemented and the proposed method is evaluated with video clips recorded in highway and urban traffic environments. Virtual horizons estimated by the proposed method are compared with horizons manually identified, and estimated ranges are compared with measured ranges. Experimental results confirm that the proposed method provides robust results both in highway and in urban traffic environments. PMID:24558344

  4. Fusing Quantitative Requirements Analysis with Model-based Systems Engineering

    NASA Technical Reports Server (NTRS)

    Cornford, Steven L.; Feather, Martin S.; Heron, Vance A.; Jenkins, J. Steven

    2006-01-01

    A vision is presented for fusing quantitative requirements analysis with model-based systems engineering. This vision draws upon and combines emergent themes in the engineering milieu. "Requirements engineering" provides means to explicitly represent requirements (both functional and non-functional) as constraints and preferences on acceptable solutions, and emphasizes early-lifecycle review, analysis and verification of design and development plans. "Design by shopping" emphasizes revealing the space of options available from which to choose (without presuming that all selection criteria have previously been elicited), and provides means to make understandable the range of choices and their ramifications. "Model-based engineering" emphasizes the goal of utilizing a formal representation of all aspects of system design, from development through operations, and provides powerful tool suites that support the practical application of these principles. A first step prototype towards this vision is described, embodying the key capabilities. Illustrations, implications, further challenges and opportunities are outlined.

  5. Surface modeling method for aircraft engine blades by using speckle patterns based on the virtual stereo vision system

    NASA Astrophysics Data System (ADS)

    Yu, Zhijing; Ma, Kai; Wang, Zhijun; Wu, Jun; Wang, Tao; Zhuge, Jingchang

    2018-03-01

    A blade is one of the most important components of an aircraft engine. Due to its high manufacturing costs, it is indispensable to come up with methods for repairing damaged blades. In order to obtain a surface model of the blades, this paper proposes a modeling method using speckle patterns based on a virtual stereo vision system. Firstly, blades are sprayed evenly to create random speckle patterns, and point clouds of the blade surfaces are calculated from the speckle patterns using the virtual stereo vision system. Secondly, boundary points are obtained with step lengths varied according to curvature and are fitted with a cubic B-spline curve to obtain a blade surface envelope. Finally, the surface model of the blade is established from the envelope curves and the point clouds. Experimental results show that the surface model of aircraft engine blades is fair and accurate.
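A uniform cubic B-spline segment, as used above for the boundary envelope, blends four successive control points with fixed cubic basis functions. A minimal evaluator (this is the standard uniform formulation, not the authors' code):

```python
def cubic_bspline_point(p0, p1, p2, p3, t):
    """Point on a uniform cubic B-spline segment for t in [0, 1]."""
    b0 = (1 - t) ** 3 / 6
    b1 = (3 * t ** 3 - 6 * t ** 2 + 4) / 6
    b2 = (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6
    b3 = t ** 3 / 6
    # Basis weights sum to 1, so the point is an affine blend of the controls.
    return tuple(b0 * a + b1 * b + b2 * c + b3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

# Collinear, evenly spaced control points: the segment midpoint lands at x = 1.5.
mid = cubic_bspline_point((0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0), 0.5)
```

Sliding a window of four control points along the boundary chain yields the piecewise curve; because the blend is affine and local, moving one boundary point only perturbs the nearby segments of the envelope.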

  6. Grid Integration Webinars | Energy Systems Integration Facility | NREL

    Science.gov Websites

    Vision Future. The study used detailed nodal simulations of the Western Interconnection system with greater than 35% wind energy, based on scenarios from the DOE Wind Vision study, to assess the operability. Renewable Energy Integration in California, April 14, 2016: Greg Brinkman discussed the Low Carbon Grid Study.

  7. Safety and efficacy of a hydrogel inlay with laser in situ keratomileusis to improve vision in myopic presbyopic patients: one-year results.

    PubMed

    Garza, Enrique Barragan; Chayet, Arturo

    2015-02-01

    To study the safety and efficacy of implanting a hydrogel corneal inlay (Raindrop Near Vision Inlay) concurrently with performing laser in situ keratomileusis (LASIK) to treat myopic presbyopia and to compare the results with results of the same treatment in emmetropic and hyperopic patients. Two private clinics, Tijuana and Monterrey, Mexico. Prospective nonrandomized clinical trial. Bilateral myopic LASIK was performed and a corneal inlay was concurrently implanted in the nondominant eye under a flap created using a femtosecond laser. Primary safety outcomes were the retention of corrected distance (CDVA) and near (CNVA) visual acuities. Efficacy was evaluated on the basis of uncorrected near (UNVA), intermediate (UIVA), and distance (UDVA) visual acuities. A patient questionnaire was used to assess the preoperative and postoperative incidence of visual symptoms, the ability to perform common tasks with no correction, and patient satisfaction with vision. Thirty eyes were enrolled. At each postoperative visit, the mean CDVA and CNVA were within one half line of preoperative measurements and no eye lost 2 or more lines of CDVA. The mean binocular UDVA, UIVA, and UNVA were better than 20/25 Snellen at all postoperative visits. By 6 months, 93% of patients had a binocular Snellen acuity of 20/25 or better across all visual ranges. According to patient questionnaires, 1 year after surgery, visual symptoms were at preoperative levels, 98% of all visual tasks could be easily performed without correction, and 90% of patients were satisfied or very satisfied with their overall vision. A hydrogel corneal inlay with concurrent LASIK was safe and effective for treating myopic presbyopia. Drs. Garza and Chayet are consultants to and investigators for Revision Optics, Inc.

  8. Quality Control by Artificial Vision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lam, Edmond Y.; Gleason, Shaun Scott; Niel, Kurt S.

    2010-01-01

    Computational technology has fundamentally changed many aspects of our lives. One clear evidence is the development of artificial-vision systems, which have effectively automated many manual tasks ranging from quality inspection to quantitative assessment. In many cases, these machine-vision systems are even preferred over manual ones due to their repeatability and high precision. Such advantages come from significant research efforts in advancing sensor technology, illumination, computational hardware, and image-processing algorithms. Similar to the Special Section on Quality Control by Artificial Vision published two years ago in Volume 17, Issue 3 of the Journal of Electronic Imaging, the present one invited papers relevant to fundamental technology improvements that foster quality control by artificial vision, as well as papers that fine-tune the technology for specific applications. We aim to balance both theoretical and applied work pertinent to this special section theme. Consequently, we have seven high-quality papers resulting from the stringent peer-reviewing process in place at the Journal of Electronic Imaging. Some of the papers contain extended treatment of the authors' work presented at the SPIE Image Processing: Machine Vision Applications conference and the International Conference on Quality Control by Artificial Vision. On the broad application side, Liu et al. propose an unsupervised texture image segmentation scheme. Using a multilayer data condensation spectral clustering algorithm together with wavelet transform, they demonstrate the effectiveness of their approach on both texture and synthetic aperture radar images. A problem related to image segmentation is image extraction. For this, O'Leary et al. investigate the theory of polynomial moments and show how these moments can be compared to classical filters. 
They also show how to use the discrete polynomial-basis functions for the extraction of 3-D embossed digits, demonstrating superiority over Fourier-basis functions for this task. Image registration is another important task for machine vision. Bingham and Arrowood investigate the implementation and results of applying Fourier phase matching for projection registration, with a particular focus on nondestructive testing using computed tomography. Readers interested in enriching their arsenal of image-processing algorithms for machine-vision tasks should find these papers valuable. Meanwhile, we have four papers dealing with more specific machine-vision tasks. The first one, by Yahiaoui et al., is quantitative in nature, using machine vision for real-time passenger counting. Occlusion is a common problem in counting objects and people, and they circumvent this issue with a dense stereovision system, achieving 97 to 99% accuracy in their tests. The second paper, by Oswald-Tranta et al., focuses on thermographic crack detection. An infrared camera is used to detect inhomogeneities, which may indicate surface cracks. They describe the various steps in developing fully automated testing equipment aimed at high throughput. Another paper describing an inspection system is by Molleda et al., which handles flatness inspection of rolled products. They employ optical-laser triangulation and 3-D surface reconstruction for this task, showing how these can be achieved in real time. Last but not least, Presles et al. propose a way to monitor the particle-size distribution of batch crystallization processes. This is achieved through a new in situ imaging probe and image-analysis methods. While it is unlikely that any reader is working on all four of these specific problems at once, we are confident that readers will find these papers inspiring and potentially helpful to their own machine-vision system developments.

  9. Kinect2 - respiratory movement detection study.

    PubMed

    Rihana, Sandy; Younes, Elie; Visvikis, Dimitris; Fayad, Hadi

    2016-08-01

    Radiotherapy is one of the main cancer treatments. It consists of irradiating tumor cells to destroy them while sparing healthy tissue. The treatment is planned based on Computed Tomography (CT) and is delivered in fractions over several days. One of the main challenges is repositioning the patient identically every day so that the tumor volume is irradiated while healthy tissues are spared. Many patient positioning techniques are available, but they are either invasive or inaccurate: they rely on tattooed markers on the patient's skin aligned with a laser system calibrated in the treatment room, or on X-ray imaging. Current systems such as Vision RT use two Time-of-Flight cameras. Time-of-Flight cameras have the advantage of a very fast acquisition rate, which allows real-time monitoring of patient movement and patient repositioning. The purpose of this work is to test the Microsoft Kinect2 camera for potential use in patient positioning and respiratory triggering. This type of Time-of-Flight camera is non-invasive and inexpensive, which facilitates its transfer to clinical practice.

  10. Application of polarization in high speed, high contrast inspection

    NASA Astrophysics Data System (ADS)

    Novak, Matthew J.

    2017-08-01

    Industrial optical inspection often requires high speed and high throughput of materials. Engineers use a variety of techniques to handle these inspection needs. Some examples include line scan cameras, high speed multi-spectral and laser-based systems. High-volume manufacturing presents different challenges for inspection engineers. For example, manufacturers produce some components in quantities of millions per month, per week or even per day. Quality control of so many parts requires creativity to achieve the measurement needs. At times, traditional vision systems lack the contrast to provide the data required. In this paper, we show how dynamic polarization imaging captures high contrast images. These images are useful for engineers to perform inspection tasks in some cases where optical contrast is low. We will cover basic theory of polarization. We show how to exploit polarization as a contrast enhancement technique. We also show results of modeling for a polarization inspection application. Specifically, we explore polarization techniques for inspection of adhesives on glass.
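
    The contrast gain from crossed polarizers can be illustrated with Malus's law. The scene values below are invented for illustration only, chosen to show how a specular (polarization-preserving) glass background is extinguished while light scattered by a depolarizing adhesive is not.

```python
import math

def malus(i0, theta_deg):
    """Malus's law: intensity after an ideal analyzer at angle theta."""
    return i0 * math.cos(math.radians(theta_deg)) ** 2

def michelson(a, b):
    """Michelson contrast between two intensities."""
    return (a - b) / (a + b)

# Hypothetical scene: polarization-preserving glass vs. a depolarizing
# adhesive patch, illuminated with polarized light.
glass_direct, adhesive_direct = 100.0, 90.0   # nearly no contrast
glass_crossed = malus(100.0, 90.0)            # specular term extinguished
adhesive_crossed = 0.5 * 90.0                 # depolarized: half transmitted

print(michelson(adhesive_direct, glass_direct))   # low contrast
print(michelson(adhesive_crossed, glass_crossed)) # contrast near 1.0
```

    The point is not the specific numbers but the mechanism: the analyzer suppresses the polarization-preserving background far more than the depolarizing feature, which is the contrast-enhancement effect the paper exploits.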

  11. Drift-Free Indoor Navigation Using Simultaneous Localization and Mapping of the Ambient Heterogeneous Magnetic Field

    NASA Astrophysics Data System (ADS)

    Chow, J. C. K.

    2017-09-01

    In the absence of external reference position information (e.g. surveyed targets or Global Navigation Satellite Systems) Simultaneous Localization and Mapping (SLAM) has proven to be an effective method for indoor navigation. The positioning drift can be reduced with regular loop-closures and global relaxation as the backend, thus achieving a good balance between exploration and exploitation. Although vision-based systems like laser scanners are typically deployed for SLAM, these sensors are heavy, energy inefficient, and expensive, making them unattractive for wearables or smartphone applications. However, the concept of SLAM can be extended to non-optical systems such as magnetometers. Instead of matching features such as walls and furniture using some variation of the Iterative Closest Point algorithm, the local magnetic field can be matched to provide loop-closure and global trajectory updates in a Gaussian Process (GP) SLAM framework. With a MEMS-based inertial measurement unit providing a continuous trajectory, and the matching of locally distinct magnetic field maps, experimental results in this paper show that a drift-free navigation solution in an indoor environment with millimetre-level accuracy can be achieved. The GP-SLAM approach presented can be formulated as a maximum a posteriori estimation problem and it can naturally perform loop-detection, feature-to-feature distance minimization, global trajectory optimization, and magnetic field map estimation simultaneously. Spatially continuous features (i.e. smooth magnetic field signatures) are used instead of discrete feature correspondences (e.g. point-to-point) as in conventional vision-based SLAM. These position updates from the ambient magnetic field also provide enough information for calibrating the accelerometer bias and gyroscope bias in-use. 
The only restriction for this method is the need for magnetic disturbances (which is typically not an issue for indoor environments); however, no assumptions are required for the general motion of the sensor (e.g. static periods).
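
    The signature-matching step can be illustrated with a one-dimensional toy example: normalized cross-correlation of a short magnetic-magnitude query against a stored map, with the best-scoring offset serving as a loop-closure candidate. The field values below are invented for illustration and are far simpler than the Gaussian Process field model the paper uses.

```python
import math

def ncc(a, b):
    """Normalized cross-correlation between two equal-length signatures."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def best_match(mapped, query):
    """Slide the query signature over the mapped corridor; return the
    offset with the highest correlation (a loop-closure candidate)."""
    scores = [ncc(mapped[i:i + len(query)], query)
              for i in range(len(mapped) - len(query) + 1)]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores[best]

# Hypothetical magnetic-magnitude map (uT) along a corridor, with a
# locally distinct disturbance near index 6.
corridor = [48, 49, 47, 50, 55, 62, 71, 60, 52, 49, 48, 47]
revisit = [55, 62, 71, 60]          # re-observation of indices 4..7
offset, score = best_match(corridor, revisit)
print(offset, round(score, 3))      # → 4 1.0
```

    This also shows why the method needs magnetic disturbances: in a perfectly uniform field every offset would score equally and no locally distinct match would exist.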

  12. Development and Implementation of an Objective, Non-invasive, Behaviorally Relevant Metric for Laser Eye Injury

    DTIC Science & Technology

    2005-09-01

    "Assessment of local retinal function in patients with retinitis pigmentosa using the multi-focal ERG technique," Vision Research, vol. 38, pp. 163-179... pigmentosa, and retinal detachment. mfERG response characteristics have been shown to vary depending on the part of the retina that is affected by a... expanding. Thus, the potential for laser eye injury and retinal damage is increasing. Sensitive and accurate methods to evaluate and follow laser retinal

  13. Technical Challenges in the Development of a NASA Synthetic Vision System Concept

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Parrish, Russell V.; Kramer, Lynda J.; Harrah, Steve; Arthur, J. J., III

    2002-01-01

    Within NASA's Aviation Safety Program, the Synthetic Vision Systems Project is developing display system concepts to improve pilot terrain/situation awareness by providing a perspective synthetic view of the outside world through an on-board database driven by precise aircraft positioning information updated via Global Positioning System-based data. This work is aimed at eliminating visibility-induced errors and low visibility conditions as a causal factor in civil aircraft accidents, as well as replicating the operational benefits of clear-day flight operations regardless of the actual outside visibility condition. Synthetic vision research and development activities at NASA Langley Research Center are focused on a series of ground simulation and flight test experiments designed to evaluate, investigate, and assess the technology that can lead to operational and certified synthetic vision systems. The technical challenges that have been encountered and that are anticipated in this research and development activity are summarized.

  14. Quantifying biological samples using Linear Poisson Independent Component Analysis for MALDI-ToF mass spectra

    PubMed Central

    Deepaisarn, S; Tar, P D; Thacker, N A; Seepujak, A; McMahon, A W

    2018-01-01

    Motivation: Matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI) facilitates the analysis of large organic molecules. However, the complexity of biological samples and MALDI data acquisition leads to high levels of variation, making reliable quantification of samples difficult. We present a new analysis approach that we believe is well suited to the properties of MALDI mass spectra, based upon an Independent Component Analysis derived for Poisson-sampled data. Simple analyses have been limited to studying small numbers of mass peaks, via peak ratios, which is known to be inefficient. Conventional PCA and ICA methods have also been applied, which extract correlations between any number of peaks, but which, we argue, make inappropriate assumptions regarding data noise, i.e., that it is uniform and Gaussian. Results: We provide evidence that the Gaussian assumption is incorrect, motivating the need for our Poisson approach. The method is demonstrated by making proportion measurements from lipid-rich binary mixtures of lamb brain and liver, and also goat and cow milk. These allow our measurements and error predictions to be compared to ground truth. Availability and implementation: Software is available via the open-source image analysis system TINA Vision, www.tina-vision.net. Contact: paul.tar@manchester.ac.uk. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:29091994
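
    The Gaussian-vs-Poisson point can be checked with a quick simulation: for Poisson counts the variance tracks the mean, so a single shared Gaussian noise width cannot describe peaks of different intensity. The peak means below are hypothetical, not values from the paper.

```python
import math
import random
import statistics

random.seed(0)

def poisson(lam):
    """Poisson sample via Knuth's multiplication method."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# Two hypothetical mass peaks with very different mean ion counts.
low = [poisson(20) for _ in range(5000)]
high = [poisson(400) for _ in range(5000)]

for name, xs in (("low", low), ("high", high)):
    print(f"{name}: mean={statistics.fmean(xs):.1f}"
          f" variance={statistics.variance(xs):.1f}")
# For Poisson counts the variance tracks the mean, so no single shared
# Gaussian noise width (the uniform assumption) can describe both peaks.
```

    Running this shows the low-intensity peak with a variance near 20 and the high-intensity peak with a variance near 400, which is the heteroscedasticity a uniform-Gaussian model ignores.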

  15. Biofeedback for Better Vision

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Biofeedtrac, Inc.'s Accommotrac Vision Trainer, invented by Dr. Joseph Trachtman, is based on vision research performed by Ames Research Center and a special optometer developed for the Ames program by Stanford Research Institute. In the United States, about 150 million people are myopes (nearsighted), who tend to overfocus when they look at distant objects, causing blurry distant vision, or hyperopes (farsighted), whose vision blurs when they look at close objects because they tend to underfocus. The Accommotrac system is an optical/electronic system used by a doctor as an aid in teaching a patient how to contract and relax the ciliary body, the focusing muscle. The key is biofeedback, wherein the patient learns to control a bodily process or function he is not normally aware of. Trachtman claims a 90 percent success rate for correcting, improving or stopping focusing problems. The Vision Trainer has also proved effective in treating other eye problems such as eye oscillation, cross eyes, and lazy eye, and in professional sports to improve athletes' peripheral vision and reaction time.

  16. The Laser Interferometer Space Antenna: A space-based Gravitational Wave Observatory

    NASA Astrophysics Data System (ADS)

    Thorpe, James Ira; McNamara, Paul

    2018-01-01

    After decades of persistence, scientists have recently developed facilities which can measure the vibrations of spacetime caused by astrophysical cataclysms such as the mergers of black holes and neutron stars. The first few detections have presented some interesting astrophysical questions, and it is clear that with an increase in the number and capability of ground-based facilities, gravitational waves will become an important tool for astronomy. A space-based observatory will complement these efforts by providing access to the milliHertz gravitational wave band, which is expected to be rich in both number and variety of sources. The European Space Agency (ESA) has recently selected the Laser Interferometer Space Antenna (LISA) as a Large-Class mission in its Cosmic Vision Programme. The modern LISA retains the basic design features of previous incarnations and, like its predecessors, is expected to be a collaboration between ESA, NASA, and a number of European states. In this poster, we present an overview of the current LISA design, its scientific capabilities, and the timeline to launch.

  17. Survey of computer vision-based natural disaster warning systems

    NASA Astrophysics Data System (ADS)

    Ko, ByoungChul; Kwak, Sooyeong

    2012-07-01

    With the rapid development of information technology, natural disaster prevention is growing as a new research field dealing with surveillance systems. To forecast and prevent the damage caused by natural disasters, the development of systems to analyze natural disasters using remote sensing, geographic information systems (GIS), and vision sensors has been receiving widespread interest over the last decade. This paper provides an up-to-date review of five different types of natural disasters and their corresponding warning systems using computer vision and pattern recognition techniques, such as wildfire smoke and flame detection, water level detection for flood prevention, coastal zone monitoring, and landslide detection. Finally, we conclude with some thoughts about future research directions.

  18. Rationale, Design and Implementation of a Computer Vision-Based Interactive E-Learning System

    ERIC Educational Resources Information Center

    Xu, Richard Y. D.; Jin, Jesse S.

    2007-01-01

    This article presents a schematic application of computer vision technologies to e-learning that is synchronous, peer-to-peer-based, and supports an instructor's interaction with non-computer teaching equipments. The article first discusses the importance of these focused e-learning areas, where the properties include accurate bidirectional…

  19. Active Voodoo Dolls: A Vision Based Input Device for Nonrigid Control.

    DTIC Science & Technology

    1998-08-01

    A vision based technique for nonrigid control is presented that can be used for animation and video game applications. The user grasps a soft...allowing the user to control it interactively. Our use of texture mapping hardware in tracking makes the system responsive enough for interactive animation and video game character control.

  20. Applications of Adaptive Optics Scanning Laser Ophthalmoscopy

    PubMed Central

    Roorda, Austin

    2010-01-01

    Adaptive optics (AO) describes a set of tools to correct or control aberrations in any optical system. In the eye, AO allows for precise control of the ocular aberrations. If used to correct aberrations over a large pupil, for example, cellular-level resolution in retinal images can be achieved. AO systems have been demonstrated for advanced ophthalmoscopy as well as for testing and/or improving vision. In fact, AO can be integrated into any ophthalmic instrument where the optics of the eye are involved, with a scope of applications ranging from phoropters to optical coherence tomography systems. In this paper, I discuss the applications and advantages of using AO in a specific system, the adaptive optics scanning laser ophthalmoscope, or AOSLO. Since the Borish award was, in part, given to me because of this effort, I felt it appropriate to select this as the topic for this paper. Furthermore, users of AOSLO continue to appreciate the benefits of the technology, some of which were not anticipated at the time of development, and so it is time to revisit this topic and summarize them in a single paper. PMID:20160657

  1. 78 FR 16756 - Twenty-Second Meeting: RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-18

    ... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing..., Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held April...

  2. 78 FR 55774 - Twenty Fourth Meeting: RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-11

    ... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing..., Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held October...

  3. 75 FR 17202 - Eighth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-05

    ... Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY...-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing...: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held April...

  4. 75 FR 44306 - Eleventh Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-28

    ... Committee 213: EUROCAE WG- 79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY... Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS... 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The...

  5. 75 FR 71183 - Twelfth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-22

    ... Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY... Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY...: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will...

  6. Part Marking and Identification Materials for MISSE

    NASA Technical Reports Server (NTRS)

    Roxby, Donald; Finckenor, Miria M.

    2008-01-01

    The Materials on International Space Station Experiment (MISSE) is being conducted with funding from NASA and the U.S. Department of Defense in order to evaluate candidate materials and processes for flight hardware. MISSE modules include test specimens used to validate NASA technical standards for part markings exposed to harsh environments in low-Earth orbit and space, including atomic oxygen, ultraviolet radiation, thermal vacuum cycling, and meteoroid and orbital debris impact. Marked test specimens are evaluated and then mounted in a passive experiment container (PEC) that is affixed to an exterior surface on the International Space Station (ISS). They are exposed to atomic oxygen and/or ultraviolet radiation for a year or more before being retrieved and reevaluated. Criteria include percent contrast, axial uniformity, print growth, error correction, and overall grade. MISSE 1 and 2 (2001-2005), MISSE 3 and 4 (2006-2007), and MISSE 5 (2005-2006) have been completed to date. Acceptable results were found for test specimens marked with Data Matrix™ symbols by Intermec Inc. and Robotic Vision Systems Inc. using laser bonding, vacuum arc vapor deposition, gas-assisted laser etch, chemical etch, mechanical dot peening, laser shot peening, laser etching, and laser-induced surface improvement. MISSE 6 (2008-2009) is exposing specimens marked by DataLase®, Chemico Technologies Inc., Intermec Inc., and tesa with laser-markable paint, nanocode tags, DataLase and tesa laser markings, and anodized metal labels.

  7. [Issues of provision of the safety of laserwares in the use of nanotechnologies].

    PubMed

    Pal'tsev, Iu P; Levina, A V; Kravchenko, O K

    2008-01-01

    Current technologies, including nanotechnologies, cannot be introduced without laser equipment. To ensure safety in the manufacture and use of up-to-date laser equipment characterized by new, previously unused wavelengths and exposure levels, it is necessary to develop hygienic regulations for the arrangement and maintenance of lasers, to improve approaches to their sanitary-and-epidemiological examination and manufacturing inspection, to design measuring instruments that permit an objective assessment of new types of laser exposures, and to develop up-to-date means of protecting the organ of vision in service personnel.

  8. Operational Assessment of Color Vision

    DTIC Science & Technology

    2016-06-20

    evaluated in this study. 15. SUBJECT TERMS Color vision, aviation, cone contrast test, Colour Assessment & Diagnosis, color Dx, OBVA 16. SECURITY...symbologies are frequently used to aid or direct critical activities such as aircraft landing approaches or railroad right-of-way designations...computer-generated display systems have facilitated the development of computer-based, automated tests of color vision [14,15]. The United Kingdom’s

  9. Diabetic retinopathy screening: global and local perspective.

    PubMed

    Gangwani, R A; Lian, J X; McGhee, S M; Wong, D; Li, K Kw

    2016-10-01

    Diabetes mellitus has become a global epidemic. It causes significant macrovascular complications such as coronary artery disease, peripheral artery disease, and stroke; as well as microvascular complications such as retinopathy, nephropathy, and neuropathy. Diabetic retinopathy is known to be the leading cause of blindness in the working-age population and may be asymptomatic until vision loss occurs. Screening for diabetic retinopathy has been shown to reduce blindness by timely detection and effective laser treatment. Diabetic retinopathy screening is being done worldwide either as a national screening programme or hospital-based project or as a community-based screening programme. In this article, we review different methods of screening including grading used to detect the severity of sight-threatening retinopathy and the newer screening methods. This review also includes the method of systematic screening being carried out in Hong Kong, a system that has helped to identify diabetic retinopathy among all attendees in public primary care clinics using a Hong Kong-wide public patients' database.

  10. Fiber Lasers and Amplifiers for Space-based Science and Exploration

    NASA Technical Reports Server (NTRS)

    Yu, Anthony W.; Krainak, Michael A.; Stephen, Mark A.; Chen, Jeffrey R.; Coyle, Barry; Numata, Kenji; Camp, Jordan; Abshire, James B.; Allan, Graham R.; Li, Steven X.

    2012-01-01

    We present current and near-term uses of high-power fiber lasers and amplifiers for NASA science and spacecraft applications. Fiber lasers and amplifiers offer numerous advantages for the deployment of instruments on exploration and science remote sensing satellites. Ground-based and airborne systems provide an evolutionary path to space and a means for calibration and verification of space-borne systems. NASA fiber-laser-based instruments include laser sounders and lidars for measuring atmospheric carbon dioxide, oxygen, water vapor and methane and a pulsed or pseudo-noise (PN) code laser ranging system in the near infrared (NIR) wavelength band. The associated fiber transmitters include high-power erbium, ytterbium, and neodymium systems and a fiber laser pumped optical parametric oscillator. We discuss recent experimental progress on these systems and instrument prototypes for ongoing development efforts.

  11. Container-code recognition system based on computer vision and deep neural networks

    NASA Astrophysics Data System (ADS)

    Liu, Yi; Li, Tianjian; Jiang, Li; Liang, Xiaoyao

    2018-04-01

    Automatic container-code recognition has become a crucial requirement for the ship transportation industry in recent years. In this paper, an automatic container-code recognition system based on computer vision and deep neural networks is proposed. The system consists of two modules: a detection module and a recognition module. The detection module applies both computer-vision algorithms and neural networks, and combines their outputs into a better detection result that avoids the drawbacks of either method alone. The combined detection results are also collected for online training of the neural networks. The recognition module exploits both character segmentation and end-to-end recognition, and outputs the recognition result that passes verification. When the recognition module generates a false recognition, the result is corrected and collected for online training of the end-to-end recognition sub-module. By combining several algorithms, the system can deal with more situations, and the online training mechanism improves the performance of the neural networks at runtime. The proposed system achieves 93% overall recognition accuracy.
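
    One minimal way to realize the described combination of two detectors is agreement voting on bounding boxes via intersection-over-union. The boxes, threshold, and merging policy below are illustrative assumptions, not the system's actual implementation.

```python
def iou(a, b):
    """Intersection-over-union of two (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def combine(cv_boxes, nn_boxes, thresh=0.5):
    """Keep a classical-CV detection only when a neural detection agrees,
    then add any neural detections the CV stage missed entirely."""
    agreed = [a for a in cv_boxes
              if any(iou(a, b) >= thresh for b in nn_boxes)]
    extra = [b for b in nn_boxes
             if all(iou(a, b) < thresh for a in cv_boxes)]
    return agreed + extra

cv_boxes = [(10, 10, 110, 40), (200, 50, 260, 70)]   # second is a false alarm
nn_boxes = [(12, 11, 108, 42), (300, 80, 380, 100)]  # second missed by CV
print(combine(cv_boxes, nn_boxes))
# → [(10, 10, 110, 40), (300, 80, 380, 100)]
```

    The unconfirmed boxes dropped here are exactly the cases one would log for the online-training loop the paper describes.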

  12. CAOS-CMOS camera.

    PubMed

    Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid

    2016-06-13

    Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point photodetector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White light imaging of three simultaneously viewed targets of different brightness, which is not possible with the CMOS sensor alone, is achieved by the CAOS-CMOS camera, demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems.
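
    For context on the quoted dB figures, image-sensor dynamic range is conventionally expressed as 20·log10 of the brightest-to-dimmest intensity ratio; assuming that convention, the numbers convert as follows.

```python
import math

def dynamic_range_db(i_max, i_min):
    """Imager dynamic range as 20*log10 of the intensity ratio."""
    return 20 * math.log10(i_max / i_min)

# Under the 20*log10 convention, the quoted figures correspond to
# brightest-to-dimmest ratios of roughly 367:1 and 12,700:1.
print(round(10 ** (51.3 / 20)))    # ratio for 51.3 dB, ≈ 367:1
print(round(10 ** (82.06 / 20)))   # ratio for 82.06 dB, ≈ 12,700:1
```

    So the demonstrated camera extends the usable intensity ratio by a factor of about 35 over the bare CMOS sensor.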

  13. Development of the method of aggregation to determine the current storage area using computer vision and radiofrequency identification

    NASA Astrophysics Data System (ADS)

    Astafiev, A.; Orlov, A.; Privezencev, D.

    2018-01-01

    The article is devoted to the development of technology and software for positioning and control systems in industrial plants, based on aggregation to determine the current storage area using computer vision and radio-frequency identification. It describes the hardware design of a positioning system for industrial products on plant premises based on a radio-frequency grid, a corresponding design based on computer vision methods, and the aggregation method that combines the two to determine the current storage area. Experimental studies in laboratory and production conditions have been conducted and are described in the article.

  14. Image-based tracking system for vibration measurement of a rotating object using a laser scanning vibrometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Dongkyu, E-mail: akein@gist.ac.kr; Khalil, Hossam; Jo, Youngjoon

    2016-06-28

    An image-based tracking system using a laser scanning vibrometer is developed for vibration measurement of a rotating object. Unlike a conventional system, the proposed one can be used where a position or velocity sensor such as an encoder cannot be attached to the object. An image processing algorithm is introduced to detect a landmark and the laser beam based on their colors. Then, through a feedback control system, the laser beam can track the rotating object.
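
    A toy version of the two ingredients described, detecting a color landmark (reduced here to the centroid of a boolean mask) and steering the beam with proportional feedback, can be sketched as follows; the frame size, landmark position, and gain are hypothetical.

```python
def centroid(mask):
    """Centroid of 'landmark-colored' pixels in a boolean image mask."""
    pts = [(r, c) for r, row in enumerate(mask)
           for c, v in enumerate(row) if v]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def track(beam, target, kp=0.5, steps=20):
    """Proportional controller: nudge the beam toward the target each frame."""
    for _ in range(steps):
        beam = (beam[0] + kp * (target[0] - beam[0]),
                beam[1] + kp * (target[1] - beam[1]))
    return beam

# Hypothetical 5x5 frame with a 2x2 landmark in the lower-right corner.
mask = [[False] * 5 for _ in range(5)]
for r, c in [(3, 3), (3, 4), (4, 3), (4, 4)]:
    mask[r][c] = True
target = centroid(mask)          # (3.5, 3.5)
beam = track((0.0, 0.0), target)
print(target, [round(x, 3) for x in beam])   # → (3.5, 3.5) [3.5, 3.5]
```

    In the real system the "beam" coordinates would drive the scanner mirrors and the mask would come from color segmentation of camera frames; the proportional law stands in for whatever controller the authors actually implemented.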

  15. Influence of control parameters on the joint tracking performance of a coaxial weld vision system

    NASA Technical Reports Server (NTRS)

    Gangl, K. J.; Weeks, J. L.

    1985-01-01

    The first phase of a series of evaluations of a vision-based welding control sensor for the Space Shuttle Main Engine Robotic Welding System is described. The robotic welding system is presently under development at the Marshall Space Flight Center. This evaluation determines the standard control response parameters necessary for proper trajectory of the welding torch along the joint.

  16. A Multi-Component Automated Laser-Origami System for Cyber-Manufacturing

    NASA Astrophysics Data System (ADS)

    Ko, Woo-Hyun; Srinivasa, Arun; Kumar, P. R.

    2017-12-01

    Cyber-manufacturing systems can be enhanced by an integrated network architecture that is easily configurable, reliable, and scalable. We consider a cyber-physical system for an origami-type laser-based custom manufacturing machine that employs folding and cutting of sheet material to manufacture 3D objects, and we have developed such a system for a laser-based autonomous custom manufacturing machine equipped with real-time sensing and control. The basic elements in the architecture are built around the laser processing machine. They include a sensing system to estimate the state of the workpiece, a control system determining control inputs for a laser system based on the estimated data and the user's job requests, a robotic arm manipulating the workpiece in the work space, and middleware, named Etherware, supporting communication among the systems. We demonstrate automated 3D laser cutting and bending to fabricate a 3D product as an experimental result.

  17. 76 FR 11847 - Thirteenth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-03

    ... Special Committee 213: EUROCAE WG- 79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS... Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems... Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS...

  18. 76 FR 20437 - Fourteenth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-12

    ... Special Committee 213: EUROCAE WG- 79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS... Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems... Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS...

  19. Laser safety eyewear.

    PubMed

    1993-04-01

    In spite of repeated warnings about laser safety practices, as well as the availability of laser safety eyewear (LSE), eye injuries continue to occur during use of surgical lasers, as discussed in the Clinical Perspective, "Laser Energy and Its Dangers to Eyes," preceding this Evaluation. We evaluated 48 models of LSE, including goggles, spectacles, and wraps, from 11 manufacturers. The evaluated models are designed with absorptive lenses that provide protection from CO2 (carbon dioxide), Nd:YAG (neodymium:yttrium-aluminum-garnet), and 532 (frequency-doubled Nd:YAG) surgical laser wavelengths; several models provide multiwavelength protection. (Refer to ECRI's Product Comparison System report on LSE for specifications of other models.) Although most of the evaluated models can adequately protect users from laser energy--provided that the eyewear is used--many models of LSE, especially goggles, are designed with little regard for the needs of actual use (e.g., adequate labeling, no alteration of color perception, sufficient field of vision [FOV], comfort). Because these factors can discourage people from using LSE, we encourage manufacturers to develop new and improved models that will be worn. We based our ratings primarily on the laser protection provided by the optical density (OD) of the lenses; we acknowledge the contribution of Montana Laser Optics Inc., of Bozeman, Montana, in performing our OD testing. We also considered actual-use factors, such as those mentioned above, to be significant. Among the models rated Acceptable is one whose labeled OD is lower than the level we determined to be adequate for use during most laser surgery; however, this model offers protection under specific conditions of use (e.g., for use by spectators some distance from the surgical site, for use during endoscopic procedures) that should be determined by the laser safety officer (LSO). 
LSE that would put the wearer at risk are rated Unacceptable (e.g., some models are not properly or clearly labeled, have measured ODs that are not adequate for protection, or significantly restrict the wearer's FOV); also, LSE with side shields that do not offer adequate protection from diffuse laser energy are rated Unacceptable. Those models that offer adequate protection for surgical applications, but whose measured OD is less than their labeled OD, are rated Acceptable--Not Recommended; if the discrepancy is great, they are rated Unacceptable. Those models whose labels were removed during cleaning are rated Conditionally Acceptable.(ABSTRACT TRUNCATED AT 400 WORDS)

  20. Active vision and image/video understanding with decision structures based on the network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2003-08-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, i.e., an interpretation of visual information in terms of such knowledge models. The human brain has been found to emulate knowledge structures in the form of network-symbolic models, which implies an important paradigm shift in our understanding of the brain: from neural networks to "cortical software". Symbols, predicates, and grammars emerge naturally in such active multilevel hierarchical networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type decision structure created via multilevel hierarchical compression of visual information. Mid-level vision processes, such as clustering, perceptual grouping, and separation of figure from ground, are special kinds of graph/network transformations. They convert the low-level image structure into a set of more abstract structures that represent objects and the visual scene, making them easier to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. Composition of network-symbolic models works similarly to frames and agents, combining learning, classification, and analogy with higher-level model-based reasoning in a single framework. Such models do not require supercomputers. Based on these principles, and using methods of computational intelligence, an image understanding system can convert images into network-symbolic knowledge models and effectively resolve uncertainty and ambiguity, providing a unifying representation for perception and cognition. This allows the creation of new intelligent computer vision systems for the robotics and defense industries.

  1. Hemorrhagic Ischemic Retinal Vasculitis and Alopecia Areata as a Manifestation of HLA-B27.

    PubMed

    Sharma, Ravi; Randhawa, Sandeep

    2018-01-01

    A 12-year-old Indian boy presented with acute and severe vision loss in his right eye. He was being treated for scalp alopecia areata and rashes behind the ears and above the brow. The eye examination revealed unilateral hemorrhagic retinal vasculitis. The lab work was normal except for a positive HLA-B27 result. The patient was treated with intravitreal bevacizumab (Avastin; Genentech, South San Francisco, CA) and systemic immunosuppression. The retinal vasculitis improved with treatment, but visual acuity only mildly improved. The alopecia areata also improved with systemic immunosuppression. [Ophthalmic Surg Lasers Imaging Retina. 2018;49:60-63.]. Copyright 2018, SLACK Incorporated.

  2. Computer vision barrel inspection

    NASA Astrophysics Data System (ADS)

    Wolfe, William J.; Gunderson, James; Walworth, Matthew E.

    1994-02-01

    One of the Department of Energy's (DOE) ongoing tasks is the storage and inspection of a large number of waste barrels containing a variety of hazardous substances. Martin Marietta is currently contracted to develop a robotic system -- the Intelligent Mobile Sensor System (IMSS) -- for the automatic monitoring and inspection of these barrels. The IMSS is a mobile robot with multiple sensors: video cameras, illuminators, laser ranging and barcode reader. We assisted Martin Marietta in this task, specifically in the development of image processing algorithms that recognize and classify the barrel labels. Our subsystem uses video images to detect and locate the barcode, so that the barcode reader can be pointed at the barcode.

  3. Low vision goggles: optical design studies

    NASA Astrophysics Data System (ADS)

    Levy, Ofer; Apter, Boris; Efron, Uzi

    2006-08-01

    Low Vision (LV) due to Age-Related Macular Degeneration (AMD), Glaucoma, or Retinitis Pigmentosa (RP) is a growing problem, expected to affect more than 15 million people in the U.S. alone in 2010. Low Vision Aid Goggles (LVG) have been under development at Ben-Gurion University and the Holon Institute of Technology. The device is based on a unique Image Transceiver Device (ITD), combining the functions of imaging and display in a single chip. Using the ITD-based goggles, specifically designed for the visually impaired, our aim is to develop a head-mounted device that captures the ambient scenery, performs the necessary image enhancement and processing, and redirects the result to the healthy part of the patient's retina. This design methodology allows the Goggles to be mobile, multi-tasking, and environment-adaptive. In this paper we present the optical design considerations of the Goggles, including a preliminary performance analysis. Common vision deficiencies of LV patients are usually divided into two main categories: peripheral vision loss (PVL) and central vision loss (CVL), each requiring a different Goggles design. A set of design principles has been defined for each category. Four main optical designs are presented and compared according to these design principles. Each of the designs is presented in two main optical configurations: a see-through system and a video imaging system. The use of full-color ITD-based Goggles is also discussed.

  4. A High Precision Approach to Calibrate a Structured Light Vision Sensor in a Robot-Based Three-Dimensional Measurement System.

    PubMed

    Wu, Defeng; Chen, Tianfei; Li, Aiguo

    2016-08-30

    A robot-based three-dimensional (3D) measurement system is presented, in which a structured light vision sensor is mounted on the arm of an industrial robot. Measurement accuracy is one of the most important aspects of any 3D measurement system, so a novel calibration approach is proposed to improve the accuracy of the structured light vision sensor. The approach is based on a number of fixed concentric circles manufactured into a calibration target; the concentric circles are employed to determine the real projected centres of the circles. A calibration point generation procedure is then used with the help of the calibrated robot. When enough calibration points are available, the radial alignment constraint (RAC) method is adopted to calibrate the camera model. A multilayer perceptron neural network (MLPNN) is then employed to identify the calibration residuals left after the application of the RAC method, so the hybrid of the pinhole model and the MLPNN represents the real camera model. Using a standard ball to validate the effectiveness of the presented technique, the experimental results demonstrate that the proposed calibration approach achieves a highly accurate model of the structured light vision sensor.
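    The hybrid-model idea above (a pinhole-style fit plus a learned corrector for its residuals) can be sketched in a few lines on synthetic data. The focal length, principal point, and distortion used to generate the data are assumptions, and a least-squares polynomial corrector stands in for the paper's MLPNN:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "true" sensor: pinhole projection plus mild radial distortion.
# Focal length 800 px, principal point (320, 240), distortion 0.05 are
# illustrative assumptions, not values from the paper.
def true_project(n):                        # n: normalized coords (x/z, y/z)
    r2 = (n ** 2).sum(axis=1, keepdims=True)
    return n * (1.0 + 0.05 * r2) * 800.0 + np.array([320.0, 240.0])

n = rng.uniform(-0.3, 0.3, size=(200, 2))   # calibration points (circle centres)
uv = true_project(n)

# Step 1: plain least-squares linear (pinhole-style) model.
X = np.column_stack([n, np.ones(len(n))])
M, *_ = np.linalg.lstsq(X, uv, rcond=None)
resid = uv - X @ M                          # what the linear model cannot explain

# Step 2: train a corrector on the residuals.  The paper uses an MLPNN;
# a polynomial corrector with cubic radial terms is a lightweight stand-in.
r2 = (n ** 2).sum(axis=1, keepdims=True)
Q = np.column_stack([X, n * r2])
C, *_ = np.linalg.lstsq(Q, resid, rcond=None)

rms_before = float(np.sqrt((resid ** 2).mean()))
rms_after = float(np.sqrt(((resid - Q @ C) ** 2).mean()))
print(f"residual RMS: pinhole only {rms_before:.3f} px -> hybrid {rms_after:.6f} px")
```

    The residual corrector absorbs exactly the systematic error the linear model cannot represent, which is the role the MLPNN plays in the paper.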

  5. A vision fusion treatment system based on ATtiny26L

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoqing; Zhang, Chunxi; Wang, Jiqiang

    2006-11-01

    Vision fusion treatment is an important and effective therapy for children with strabismus. A vision fusion treatment system based on the principle of having the eyeballs follow a moving visual survey pole is put forward. In this system, the visual survey pole starts about 35 centimeters from the patient's face before moving toward the midpoint between the two eyes. The patient's eyes follow the movement of the pole; when they can no longer follow, one or both eyes turn away from the pole, and this displacement is recorded each time. A popular single-chip microcomputer, the ATtiny26L, is used in this system; its PWM output signal controls the visual survey pole to move with continuously variable speed. The movement of the visual survey pole follows the modulating law of the eyes tracking it.

  6. Comparison of Artificial Immune System and Particle Swarm Optimization Techniques for Error Optimization of Machine Vision Based Tool Movements

    NASA Astrophysics Data System (ADS)

    Mahapatra, Prasant Kumar; Sethi, Spardha; Kumar, Amod

    2015-10-01

    In the conventional tool positioning technique, sensors embedded in the motion stages provide accurate tool position information. In this paper, a machine vision-based system and an image processing technique for measuring the motion of a lathe tool from two-dimensional sequential images, captured using a charge coupled device camera with a resolution of 250 microns, are described. An algorithm was developed to calculate the observed distance travelled by the tool from the captured images. As expected, error was observed in the distance traversed by the tool as calculated from these images. Optimization of the errors in lathe tool movement due to the machine vision system, calibration, environmental factors, etc., was carried out using two soft computing techniques, namely artificial immune system (AIS) and particle swarm optimization (PSO). The results show a better capability of AIS over PSO.
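    The PSO technique compared in this record can be illustrated with a minimal global-best swarm minimizing a toy error surface; the surface, swarm size, and coefficients below are assumptions for illustration, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "tool-position error" surface.  The real error comes from the vision
# measurements; this quadratic-with-ripple function is only a stand-in.
def error(p):
    return ((p - 1.5) ** 2).sum(axis=-1) + 0.1 * np.sin(5 * p).sum(axis=-1) + 0.2

# Minimal global-best particle swarm optimization.
n_particles, dim, iters = 30, 2, 200
w, c1, c2 = 0.7, 1.5, 1.5                    # inertia and acceleration weights
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), error(pos)
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = error(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("optimized error:", float(error(gbest)))
```

    An AIS optimizer would replace the velocity update with clonal selection and hypermutation, but would consume the same error function.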

  7. Draper Laboratory small autonomous aerial vehicle

    NASA Astrophysics Data System (ADS)

    DeBitetto, Paul A.; Johnson, Eric N.; Bosse, Michael C.; Trott, Christian A.

    1997-06-01

    The Charles Stark Draper Laboratory, Inc. and students from the Massachusetts Institute of Technology and Boston University cooperated to develop an autonomous aerial vehicle that won the 1996 International Aerial Robotics Competition. This paper describes the approach, system architecture, and subsystem designs for the entry. The entry represents a combination of many technology areas: navigation, guidance, control, vision processing, human factors, packaging, power, real-time software, and others. The aerial vehicle, an autonomous helicopter, performs navigation and control functions using multiple sensors: differential GPS, an inertial measurement unit, a sonar altimeter, and a flux compass. The aerial vehicle transmits video imagery to the ground, where a ground-based vision processor converts the image data into target position and classification estimates. The system was designed, built, and flown in less than one year and has provided many lessons about autonomous vehicle systems, several of which are discussed. In an appendix, our current research in augmenting the navigation system with vision-based estimates is presented.

  8. Topography-modified refraction (TMR): adjustment of treated cylinder amount and axis to the topography versus standard clinical refraction in myopic topography-guided LASIK.

    PubMed

    Kanellopoulos, Anastasios John

    2016-01-01

    To evaluate the safety, efficacy, and contralateral eye comparison of topography-guided myopic LASIK with two different refraction treatment strategies. Private clinical ophthalmology practice. A total of 100 eyes (50 patients) in consecutive cases of myopic topography-guided LASIK procedures with the same refractive platform (FS200 femtosecond and EX500 excimer lasers) were randomized for treatment as follows: one eye with the standard clinical refraction (group A) and the contralateral eye with the topographic astigmatic power and axis (topography-modified treatment refraction; group B). All cases were evaluated pre- and post-operatively for the following parameters: refractive error, best corrected distance visual acuity (CDVA), uncorrected distance visual acuity (UDVA), topography (Placido-disk based) and tomography (Scheimpflug-image based), wavefront analysis, pupillometry, and contrast sensitivity. Follow-up visits were conducted for at least 12 months. Mean refractive error was -5.5 D of myopia and -1.75 D of astigmatism. In group A versus group B, respectively, the average UDVA improved from 20/200 to 20/20 versus 20/16; post-operative CDVA was 20/20 and 20/13.5; 1 line of vision gained was 27.8% and 55.6%; and 2 lines of vision gained was 5.6% and 11.1%. In group A, 27.8% of eyes had over -0.50 diopters of residual refractive astigmatism, in comparison to 11.7% in group B (P < 0.01). The residual percentages in both groups were measured with refractive astigmatism of more than -0.5 diopters. Topography-modified refraction (TMR): topographic adjustment of the amount and axis of astigmatism treated, when different from the clinical refraction, may offer superior outcomes in topography-guided myopic LASIK. These findings may change the current clinical paradigm of the optimal subjective refraction utilized in laser vision correction.

  9. Topography-modified refraction (TMR): adjustment of treated cylinder amount and axis to the topography versus standard clinical refraction in myopic topography-guided LASIK

    PubMed Central

    Kanellopoulos, Anastasios John

    2016-01-01

    Purpose To evaluate the safety, efficacy, and contralateral eye comparison of topography-guided myopic LASIK with two different refraction treatment strategies. Setting Private clinical ophthalmology practice. Patients and methods A total of 100 eyes (50 patients) in consecutive cases of myopic topography-guided LASIK procedures with the same refractive platform (FS200 femtosecond and EX500 excimer lasers) were randomized for treatment as follows: one eye with the standard clinical refraction (group A) and the contralateral eye with the topographic astigmatic power and axis (topography-modified treatment refraction; group B). All cases were evaluated pre- and post-operatively for the following parameters: refractive error, best corrected distance visual acuity (CDVA), uncorrected distance visual acuity (UDVA), topography (Placido-disk based) and tomography (Scheimpflug-image based), wavefront analysis, pupillometry, and contrast sensitivity. Follow-up visits were conducted for at least 12 months. Results Mean refractive error was −5.5 D of myopia and −1.75 D of astigmatism. In group A versus group B, respectively, the average UDVA improved from 20/200 to 20/20 versus 20/16; post-operative CDVA was 20/20 and 20/13.5; 1 line of vision gained was 27.8% and 55.6%; and 2 lines of vision gained was 5.6% and 11.1%. In group A, 27.8% of eyes had over −0.50 diopters of residual refractive astigmatism, in comparison to 11.7% in group B (P<0.01). The residual percentages in both groups were measured with refractive astigmatism of more than −0.5 diopters. Conclusion Topography-modified refraction (TMR): topographic adjustment of the amount and axis of astigmatism treated, when different from the clinical refraction, may offer superior outcomes in topography-guided myopic LASIK. These findings may change the current clinical paradigm of the optimal subjective refraction utilized in laser vision correction. PMID:27843292

  10. Rubber hose surface defect detection system based on machine vision

    NASA Astrophysics Data System (ADS)

    Meng, Fanwu; Ren, Jingrui; Wang, Qi; Zhang, Teng

    2018-01-01

    As an important component connecting the engine, air filter, cooling system, and automobile air-conditioning system, automotive hose is widely used in automobiles. Therefore, determination of the surface quality of the hose is particularly important. This research is based on machine vision technology, using HALCON algorithms to process hose images and identify surface defects. In order to improve the detection accuracy of the vision system, this paper proposes a method of classifying the defects to reduce misjudgment. The experimental results show that the method can detect surface defects accurately.
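    The detect-then-classify idea can be sketched on a synthetic line-scan: deviations from a defect-free reference profile are thresholded, grouped into regions, and classified by simple geometric features. All data, thresholds, and defect classes below are illustrative assumptions; the paper itself works on images with HALCON operators:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic line-scan of a hose surface: a defect-free reference profile,
# plus two injected defects.  Purely illustrative data.
x = np.linspace(0, 1, 500)
reference = 100 + 5 * np.sin(2 * np.pi * x)          # learned defect-free template
profile = reference + rng.normal(0, 0.5, x.size)
profile[120:125] -= 20                               # narrow dark "scratch"
profile[300:340] += 15                               # wide bright "blister"

# Detect: deviation from the reference beyond a noise threshold (assumed value).
dev = profile - reference
mask = np.abs(dev) > 8.0

# Group consecutive flagged pixels into regions and classify each by simple
# geometric features, mirroring the classify-to-reduce-misjudgment idea.
idx = np.flatnonzero(mask)
runs = np.split(idx, np.flatnonzero(np.diff(idx) > 1) + 1)
defects = []
for run in runs:
    kind = "scratch" if float(dev[run].mean()) < 0 else "blister"
    defects.append({"start": int(run[0]), "width": int(run.size), "type": kind})
print(defects)
```

    Classifying each region (rather than reporting every flagged pixel) is what suppresses misjudgments from isolated noise.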

  11. An egocentric vision based assistive co-robot.

    PubMed

    Zhang, Jingzhe; Zhuang, Lishuo; Wang, Yang; Zhou, Yameng; Meng, Yan; Hua, Gang

    2013-06-01

    We present the prototype of an egocentric vision based assistive co-robot system. In this co-robot system, the user is wearing a pair of glasses with a forward looking camera, and is actively engaged in the control loop of the robot in navigational tasks. The egocentric vision glasses serve for two purposes. First, it serves as a source of visual input to request the robot to find a certain object in the environment. Second, the motion patterns computed from the egocentric video associated with a specific set of head movements are exploited to guide the robot to find the object. These are especially helpful for quadriplegic individuals who do not have needed hand functionality for interaction and control with other modalities (e.g., joystick). In our co-robot system, when the robot does not fulfill the object finding task in a pre-specified time window, it would actively solicit user controls for guidance. Then the users can use the egocentric vision based gesture interface to orient the robot towards the direction of the object. After that the robot will automatically navigate towards the object until it finds it. Our experiments validated the efficacy of the closed-loop design to engage the human in the loop.

  12. Complete Vision-Based Traffic Sign Recognition Supported by an I2V Communication System

    PubMed Central

    García-Garrido, Miguel A.; Ocaña, Manuel; Llorca, David F.; Arroyo, Estefanía; Pozuelo, Jorge; Gavilán, Miguel

    2012-01-01

    This paper presents a complete traffic sign recognition system based on a vision sensor onboard a moving vehicle, which detects and recognizes up to one hundred of the most important road signs, including circular and triangular signs. A restricted Hough transform is used as the detection method, drawing on the information extracted from contour images, while the proposed recognition system is based on Support Vector Machines (SVM). A novel solution to the problem of discarding detected signs that do not pertain to the host road is proposed. For that purpose infrastructure-to-vehicle (I2V) communication and a stereo vision sensor are used. Furthermore, the outputs provided by the vision sensor and the data supplied by the CAN Bus and a GPS sensor are combined to obtain the global position of the detected traffic signs, which is used to identify a traffic sign in the I2V communication. This paper presents extensive tests in real driving conditions, both day and night, in which an average detection rate over 95% and an average recognition rate around 93% were obtained with an average runtime of 35 ms that allows real-time performance. PMID:22438704

  13. Complete vision-based traffic sign recognition supported by an I2V communication system.

    PubMed

    García-Garrido, Miguel A; Ocaña, Manuel; Llorca, David F; Arroyo, Estefanía; Pozuelo, Jorge; Gavilán, Miguel

    2012-01-01

    This paper presents a complete traffic sign recognition system based on a vision sensor onboard a moving vehicle, which detects and recognizes up to one hundred of the most important road signs, including circular and triangular signs. A restricted Hough transform is used as the detection method, drawing on the information extracted from contour images, while the proposed recognition system is based on Support Vector Machines (SVM). A novel solution to the problem of discarding detected signs that do not pertain to the host road is proposed. For that purpose infrastructure-to-vehicle (I2V) communication and a stereo vision sensor are used. Furthermore, the outputs provided by the vision sensor and the data supplied by the CAN Bus and a GPS sensor are combined to obtain the global position of the detected traffic signs, which is used to identify a traffic sign in the I2V communication. This paper presents extensive tests in real driving conditions, both day and night, in which an average detection rate over 95% and an average recognition rate around 93% were obtained with an average runtime of 35 ms that allows real-time performance.

  14. SAD-Based Stereo Vision Machine on a System-on-Programmable-Chip (SoPC)

    PubMed Central

    Zhang, Xiang; Chen, Zhangwei

    2013-01-01

    This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices, such as DDRII, SSRAM, and Flash, by IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, a configurable soft IP core in charge of managing the image buffer and users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled with the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process stereo pair images of many different sizes; the maximum image size is 512 K pixels. This machine is designed for real-time stereo vision applications, and it offers good performance and high efficiency in real time. With a hardware FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained per second with a 5 × 5 matching window and a maximum of 64 disparity pixels. PMID:23459385
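    The SAD disparity computation at the core of this machine can be sketched in software. This is a scaled-down illustration: a small random stereo pair with a known shift, the paper's 5 × 5 window, but only 8 disparity levels instead of 64 and a tiny image instead of 640 × 480:

```python
import numpy as np

rng = np.random.default_rng(7)

H, W, D, B = 32, 64, 8, 2            # image size, disparity levels, half-window (5x5)
true_disp = 5
left = rng.random((H, W)).astype(np.float32)
right = np.zeros_like(left)
right[:, :W - true_disp] = left[:, true_disp:]   # right view = left shifted by true_disp

def box_sum(img, r):
    """Sum over a (2r+1)x(2r+1) window around each pixel (edge-replicated)."""
    p = np.pad(img, r, mode="edge")
    s = p.cumsum(0).cumsum(1)
    s = np.pad(s, ((1, 0), (1, 0)))  # zero row/column for the integral-image subtraction
    k = 2 * r + 1
    return s[k:, k:] - s[:-k, k:] - s[k:, :-k] + s[:-k, :-k]

# SAD cost volume: for each candidate disparity d, aggregate |left - shifted right|
# over the matching window; the winning disparity is the per-pixel argmin.
costs = np.full((D, H, W), np.float32(1e9))
for d in range(D):
    diff = np.abs(left[:, d:] - right[:, :W - d])
    costs[d, :, d:] = box_sum(diff, B)
disp = costs.argmin(axis=0)

print("most common recovered disparity:", int(np.bincount(disp.ravel()).argmax()))
```

    The per-disparity loop is embarrassingly parallel, which is exactly what the FPGA implementation exploits to reach 23 fps at 90 MHz.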

  15. Flight Deck-Based Delegated Separation: Evaluation of an On-Board Interval Management System with Synthetic and Enhanced Vision Technology

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Shelton, Kevin J.; Kramer, Lynda J.; Arthur, Jarvis J.; Bailey, Randall E.; Norman, Rober M.; Ellis, Kyle K. E.; Barmore, Bryan E.

    2011-01-01

    An emerging Next Generation Air Transportation System concept - Equivalent Visual Operations (EVO) - can be achieved using an electronic means to provide sufficient visibility of the external world and other required flight references on flight deck displays that enable the safety, operational tempos, and visual flight rules (VFR)-like procedures for all weather conditions. Synthetic and enhanced flight vision system technologies are critical enabling technologies to EVO. Current research evaluated concepts for flight deck-based interval management (FIM) operations, integrated with Synthetic Vision and Enhanced Vision flight-deck displays and technologies. One concept involves delegated flight deck-based separation, in which the flight crews were paired with another aircraft and responsible for spacing and maintaining separation from the paired aircraft, termed, "equivalent visual separation." The operation required the flight crews to acquire and maintain an "equivalent visual contact" as well as to conduct manual landings in low-visibility conditions. The paper describes results that evaluated the concept of EVO delegated separation, including an off-nominal scenario in which the lead aircraft was not able to conform to the assigned spacing resulting in a loss of separation.

  16. Scattering properties of ultrafast laser-induced refractive index shaping lenticular structures in hydrogels

    NASA Astrophysics Data System (ADS)

    Wozniak, Kaitlin T.; Germer, Thomas A.; Butler, Sam C.; Brooks, Daniel R.; Huxlin, Krystel R.; Ellis, Jonathan D.

    2018-02-01

    We present measurements of light scatter induced by a new ultrafast laser technique being developed for laser refractive correction in transparent ophthalmic materials such as cornea, contact lenses, and/or intraocular lenses. In this new technique, called intra-tissue refractive index shaping (IRIS), a 405 nm femtosecond laser is focused and scanned below the corneal surface, inducing a spatially-varying refractive index change that corrects vision errors. In contrast with traditional laser correction techniques, such as laser in-situ keratomileusis (LASIK) or photorefractive keratectomy (PRK), IRIS does not operate via photoablation, but rather changes the refractive index of transparent materials such as cornea and hydrogels. A concern with any laser eye correction technique is additional scatter induced by the process, which can adversely affect vision, especially at night. The goal of this investigation is to identify sources of scatter induced by IRIS and to mitigate possible effects on visual performance in ophthalmic applications. Preliminary light scattering measurements on patterns written into hydrogel showed four sources of scatter, differentiated by distinct behaviors: (1) scattering from scanned lines; (2) scattering from stitching errors, resulting from adjacent scanning fields not being aligned to one another; (3) diffraction from Fresnel zone discontinuities; and (4) long-period variations in the scans that created distinct diffraction peaks, likely due to inconsistent line spacing in the writing instrument. By knowing the nature of these different scattering errors, it will now be possible to modify and optimize the design of IRIS structures to mitigate potential deficits in visual performance in human clinical trials.

  17. Laser rocket system analysis

    NASA Technical Reports Server (NTRS)

    Jones, W. S.; Forsyth, J. B.; Skratt, J. P.

    1979-01-01

    The laser rocket systems investigated in this study were for orbital transportation using space-based, ground-based and airborne laser transmitters. The propulsion unit of these systems utilizes a continuous wave (CW) laser beam focused into a thrust chamber which initiates a plasma in the hydrogen propellant, thus heating the propellant and providing thrust through a suitably designed nozzle and expansion skirt. The specific impulse is limited only by the ability to adequately cool the thruster and the amount of laser energy entering the engine. The results of the study showed that, with advanced technology, laser rocket systems with either a space- or ground-based laser transmitter could reduce the national budget allocated to space transportation by 10 to 345 billion dollars over a 10-year life cycle when compared to advanced chemical propulsion systems (LO2-LH2) of equal capability. The variation in savings depends upon the projected mission model.

  18. Vision Outcomes Following Anti-Vascular Endothelial Growth Factor Treatment of Diabetic Macular Edema in Clinical Practice.

    PubMed

    Holekamp, Nancy M; Campbell, Joanna; Almony, Arghavan; Ingraham, Herbert; Marks, Steven; Chandwani, Hitesh; Cole, Ashley L; Kiss, Szilárd

    2018-04-20

    To determine monitoring and treatment patterns, and vision outcomes in real-world patients initiating anti-vascular endothelial growth factor (anti-VEGF) therapy for diabetic macular edema (DME). Retrospective interventional cohort study. SETTING: Electronic medical record analysis of Geisinger Health System data. 110 patients (121 study eyes) initiating intravitreal ranibizumab or bevacizumab for DME during January 2007‒May 2012, with baseline corrected visual acuity of 20/40‒20/320, and ≥1 ophthalmologist visit during follow-up. Intravitreal injections per study eye during the first 12 months; corrected visual acuity, change in corrected visual acuity from baseline, proportions of eyes with ≥10 or ≥15 approximate Early Treatment Diabetic Retinopathy Study (ETDRS) letters gained/lost at 12 months; number of ophthalmologist visits. Over 12 months, mean number of ophthalmologist visits was 9.2; mean number of intravitreal injections was 3.1 (range, 1-12), with most eyes (68.6%) receiving ≤3 injections. At 12 months, mean corrected visual acuity change was +4.7 letters (mean 56.9 letters at baseline); proportions of eyes gaining ≥10 or ≥15 letters were 31.4% and 24.0%, respectively; proportions of eyes losing ≥10 or ≥15 letters were 10.8% and 8.3%, respectively. Eyes receiving adjunctive laser during the first 6 months (n = 33) showed similar change in corrected visual acuity to non-laser-treated eyes (n = 88) (+3.1 vs +5.3 letters at 12 months). DME patients receiving anti-VEGF therapy in clinical practice undergo less frequent monitoring and intravitreal injections, and achieve inferior vision outcomes to patients in landmark clinical trials. Copyright © 2018. Published by Elsevier Inc.

  19. Femtosecond Lasers in Ophthalmology: Surgery and Imaging

    NASA Astrophysics Data System (ADS)

    Bille, J. F.

    Ophthalmology has traditionally been the field with prevalent laser applications in medicine. The human eye is one of the most accessible human organs and its transparency for visible and near-infrared light allows optical techniques for diagnosis and treatment of almost any ocular structure. Laser vision correction (LVC) was introduced in the late 1980s. Today, the procedural ease, success rate, and lack of disturbing side-effects in laser assisted in-situ keratomileusis (LASIK) have made it the most frequently performed refractive surgical procedure (keratomileusis (Greek): cornea-flap-cutting). Recently, it has been demonstrated that specific aspects of LVC can take advantage of unique light-matter interaction processes that occur with femtosecond laser pulses.

  20. Research on laser detonation pulse circuit with low-power based on super capacitor

    NASA Astrophysics Data System (ADS)

    Wang, Hao-yu; Hong, Jin; He, Aifeng; Jing, Bo; Cao, Chun-qiang; Ma, Yue; Chu, En-yi; Hu, Ya-dong

    2018-03-01

    To meet the demand for miniaturized, low-power laser initiating devices in weapon systems, a low-power pulsed laser detonation circuit based on a super capacitor was investigated. A dynamic model of the laser output was established in terms of the super capacitor's storage capacity, discharge voltage, and programmable output pulse width. The output performance of the super capacitor under different storage capacities and discharge voltages was obtained by simulation. An experimental test system was set up; the laser diode of the low-power pulsed laser detonation circuit was tested, and its output waveforms were collected at different storage capacities and discharge voltages. The experiments show that the super-capacitor-based discharge circuit offers high efficiency and good transient performance at low power consumption, providing a reference for the engineering practice of lightweight, miniaturized, low-power laser detonation systems.
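    The sizing relations behind such a circuit, stored energy E = CV^2/2 and exponential voltage droop during a pulse, can be sketched numerically. All component values below are illustrative assumptions, not figures from the paper:

```python
import math

# Back-of-the-envelope sizing for a super-capacitor-driven laser-diode pulse.
# All component values are illustrative assumptions, not figures from the paper.
C = 1.0            # super capacitor capacitance, farads
V0 = 5.0           # discharge (initial) voltage, volts
R = 10.0           # effective resistance of driver plus laser diode, ohms
tau = 1e-3         # programmable pulse width, seconds (1 ms)

energy_stored = 0.5 * C * V0 ** 2                 # E = C*V^2/2, joules
V_end = V0 * math.exp(-tau / (R * C))             # RC discharge: V(t) = V0*exp(-t/RC)
droop_pct = 100.0 * (1.0 - V_end / V0)

print(f"stored energy: {energy_stored:.1f} J")
print(f"voltage droop over one {tau * 1e3:.0f} ms pulse: {droop_pct:.4f} %")
```

    The near-zero droop over a single pulse is what makes the large capacitance attractive: storage capacity and discharge voltage can be traded off while the per-pulse laser output stays stable.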

  1. Shedding (Incoherent) Light on Quantum Effects in Light-Induced Biological Processes.

    PubMed

    Brumer, Paul

    2018-05-18

    Light-induced processes that occur in nature, such as photosynthesis and photoisomerization in the first steps of vision, are often studied in the laboratory using coherent pulsed laser sources, which induce time-dependent coherent molecular wavepacket dynamics. Nature, however, uses stationary incoherent thermal radiation, such as sunlight, leading to a totally different molecular response: the time-independent steady state. It is vital to appreciate this difference in order to assess the role of quantum coherence effects in biological systems. Developments in this area are discussed in detail.

  2. Relative Proportion Of Different Types Of Refractive Errors In Subjects Seeking Laser Vision Correction

    PubMed Central

    Althomali, Talal A.

    2018-01-01

    Background: Refractive errors are a form of optical defect affecting more than 2.3 billion people worldwide. As refractive errors are a major contributor of mild to moderate vision impairment, assessment of their relative proportion would be helpful in the strategic planning of health programs. Purpose: To determine the pattern of the relative proportion of types of refractive errors among the adult candidates seeking laser assisted refractive correction in a private clinic setting in Saudi Arabia. Methods: The clinical charts of 687 patients (1374 eyes) with mean age 27.6 ± 7.5 years who desired laser vision correction and underwent a pre-LASIK work-up were reviewed retrospectively. Refractive errors were classified as myopia, hyperopia and astigmatism. Manifest refraction spherical equivalent (MRSE) was applied to define refractive errors. Outcome Measures: Distribution percentage of different types of refractive errors; myopia, hyperopia and astigmatism. Results: The mean spherical equivalent for 1374 eyes was -3.11 ± 2.88 D. Of the total 1374 eyes, 91.8% (n = 1262) eyes had myopia, 4.7% (n = 65) eyes had hyperopia and 3.4% (n = 47) had emmetropia with astigmatism. Distribution percentage of astigmatism (cylinder error of ≥ 0.50 D) was 78.5% (1078/1374 eyes); of which 69.1% (994/1374) had low to moderate astigmatism and 9.4% (129/1374) had high astigmatism. Conclusion and Relevance: Of the adult candidates seeking laser refractive correction in a private setting in Saudi Arabia, myopia represented the greatest burden with more than 90% myopic eyes, compared to hyperopia in nearly 5% of eyes. Astigmatism was present in more than 78% of eyes. PMID:29872484
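    The MRSE-based classification used in the study (MRSE = sphere + cylinder/2, with astigmatism counted separately for cylinder ≥ 0.50 D) can be sketched as a small function. The ±0.50 D emmetropia cut-off and the 2.00 D low/high astigmatism boundary are assumed for illustration, since the abstract only specifies the ≥0.50 D astigmatism criterion:

```python
# Classify an eye's refraction by manifest refraction spherical equivalent
# (MRSE = sphere + cylinder/2), with astigmatism flagged separately for any
# cylinder of 0.50 D or more.  Cut-offs marked below are assumptions.
def classify(sphere_d, cylinder_d):
    mrse = sphere_d + cylinder_d / 2.0
    if mrse < -0.5:                       # emmetropia band of +/-0.50 D (assumed)
        refractive = "myopia"
    elif mrse > 0.5:
        refractive = "hyperopia"
    else:
        refractive = "emmetropia"
    astigmatism = None
    if abs(cylinder_d) >= 0.5:            # the study's >= 0.50 D criterion
        # 2.00 D boundary between low/moderate and high astigmatism (assumed)
        astigmatism = "high" if abs(cylinder_d) > 2.0 else "low/moderate"
    return mrse, refractive, astigmatism

print(classify(-3.0, -1.0))   # a typical myopic astigmat in this cohort
```

    Applying such a function per eye is all that is needed to reproduce the distribution percentages reported in the Results.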

  3. The Geoscience Laser Altimeter System Laser Transmitter

    NASA Technical Reports Server (NTRS)

    Afzal, R. S.; Dallas, J. L.; Yu, A. W.; Mamakos, W. A.; Lukemire, A.; Schroeder, B.; Malak, A.

    2000-01-01

    The Geoscience Laser Altimeter System (GLAS), scheduled to launch in 2001, is a laser altimeter and lidar for the Earth Observing System's (EOS) ICESat mission. The laser transmitter requirements, design and qualification test results for this space-based remote sensing instrument are presented.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Apollonov, V V

    We discuss the application of ground-based repetitively pulsed, high-frequency DF laser systems and space-based Nd:YAG laser systems for elimination of space debris and objects of natural origin. We have estimated the average power level of such systems ensuring destruction of space debris and similar objects. (laser applications)

  5. Low Cost Night Vision System for Intruder Detection

    NASA Astrophysics Data System (ADS)

    Ng, Liang S.; Yusoff, Wan Azhar Wan; R, Dhinesh; Sak, J. S.

    2016-02-01

    The growth in production of Android devices has resulted in greater functionalities as well as lower costs. This has made previously more expensive systems such as night vision affordable for more businesses and end users. We designed and implemented robust and low cost night vision systems based on red-green-blue (RGB) colour histogram for a static camera as well as a camera on an unmanned aerial vehicle (UAV), using OpenCV library on Intel compatible notebook computers, running Ubuntu Linux operating system, with less than 8GB of RAM. They were tested against human intruders under low light conditions (indoor, outdoor, night time) and were shown to have successfully detected the intruders.
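
    The histogram-based detection idea described above can be sketched with NumPy alone (the actual system uses the OpenCV library; the L1 histogram distance and the threshold below are assumptions for illustration).

```python
import numpy as np

# Minimal numpy-only sketch of RGB colour-histogram intruder detection:
# compare each frame's per-channel histogram against a background reference
# and flag an intruder when the histogram distance exceeds a threshold.

def rgb_histogram(frame, bins=16):
    """Concatenated per-channel histogram, normalised to sum to 1."""
    hists = [np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def is_intruder(frame, background_hist, threshold=0.2, bins=16):
    """L1 distance between frame and background histograms vs. a threshold."""
    d = np.abs(rgb_histogram(frame, bins) - background_hist).sum()
    return bool(d > threshold)

# Usage: a dark background frame vs. a frame with a bright region.
bg = np.zeros((64, 64, 3), dtype=np.uint8)
bg_hist = rgb_histogram(bg)
frame = bg.copy()
frame[16:48, 16:48] = 200           # a bright synthetic "intruder" blob
print(is_intruder(frame, bg_hist))  # True for this synthetic example
```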

  6. Environmental Assessment for Employment of a Mobile Laser Evaluator System (LES-M) for the 20th Fighter Wing at Shaw Air Force Base, South Carolina

    DTIC Science & Technology

    2004-05-01

    Environmental Assessment for Employment of a Mobile Laser Evaluator System (LES-M) for the 20th Fighter Wing at Shaw Air Force Base, South Carolina.

  7. An improved three-dimensional non-scanning laser imaging system based on digital micromirror device

    NASA Astrophysics Data System (ADS)

    Xia, Wenze; Han, Shaokun; Lei, Jieyu; Zhai, Yu; Timofeev, Alexander N.

    2018-01-01

    Currently, there are two main methods for three-dimensional non-scanning laser imaging detection: detection based on an APD and detection based on a streak tube. APD-based detection suffers from a small pixel count, a large pixel pitch and complex supporting circuitry, while streak-tube-based detection suffers from large volume, poor reliability and high cost. To address these problems, this paper proposes an improved three-dimensional non-scanning laser imaging system based on a Digital Micromirror Device (DMD). In this imaging system, accurate control of the laser beams and a compact imaging structure are achieved with several quarter-wave plates and a polarizing beam splitter. Remapping fiber optics sample the image plane of the receiving optical lens and transform the image into a line light source, realizing the non-scanning imaging principle. The DMD converts the laser pulses from the temporal domain to the spatial domain, and a highly sensitive CCD detects the final reflected laser pulses. An algorithm for simulating the improved laser imaging system is also presented. Finally, a simulated imaging experiment demonstrates that the improved system can realize three-dimensional non-scanning laser imaging detection.
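
    At the bottom of any such system, range is recovered from pulse time of flight. The sketch below shows the basic relation R = c·t/2 together with a hypothetical linear mapping from a DMD mirror-column index to arrival time; the t0 and dt parameters are illustrative assumptions, not values from the actual device.

```python
# Time-of-flight range recovery, with a hypothetical linear
# column-index -> arrival-time mapping for the DMD's spatial axis.

C = 299_792_458.0  # speed of light in vacuum, m/s

def time_to_range(t_seconds):
    """Round-trip time of flight to one-way range: R = c * t / 2."""
    return C * t_seconds / 2.0

def column_to_range(column, t0=0.0, dt=1e-9):
    """Map a DMD column index to range, assuming column k samples t0 + k*dt."""
    return time_to_range(t0 + column * dt)
```

    With a 1 ns column spacing, each column step corresponds to roughly 15 cm of range, which illustrates why the temporal-to-spatial conversion sets the range resolution of such a system.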

  8. Eye-safe digital 3-D sensing for space applications

    NASA Astrophysics Data System (ADS)

    Beraldin, J.-Angelo; Blais, Francois; Rioux, Marc; Cournoyer, Luc; Laurin, Denis G.; MacLean, Steve G.

    2000-01-01

    This paper focuses on the characteristics and performance of an eye-safe laser range scanner (LARS) with short- and medium-range 3D sensing capabilities for space applications. This versatile LARS is a precision measurement tool that will complement the current Canadian Space Vision System. The major advantages of the LARS over conventional video-based imaging are its ability to operate with sunlight shining directly into the scanner and its immunity to spurious reflections and shadows, which occur frequently in space. Because the LARS is equipped with two high-speed galvanometers to steer the laser beam, any spatial location within the field of view of the camera can be addressed. This versatility enables the LARS to operate in two basic scan-pattern modes: (1) variable-scan-resolution mode and (2) raster-scan mode. In the variable-resolution mode, the LARS can search and track targets and geometrical features on objects located within a field of view of 30 by 30 deg and at ranges from about 0.5 to 2000 m. The tracking mode can reach a refresh rate of up to 130 Hz. The raster mode is used primarily for the measurement of registered range and intensity information on large stationary objects. It allows, among other things, target-based measurements, feature-based measurements, and surface-reflectance monitoring. The digitizing and modeling of human subjects, cargo payloads, and environments are also possible with the LARS. Examples illustrating its capabilities are presented.

  9. On-Chip Laser-Power Delivery System for Dielectric Laser Accelerators

    NASA Astrophysics Data System (ADS)

    Hughes, Tyler W.; Tan, Si; Zhao, Zhexin; Sapra, Neil V.; Leedle, Kenneth J.; Deng, Huiyang; Miao, Yu; Black, Dylan S.; Solgaard, Olav; Harris, James S.; Vuckovic, Jelena; Byer, Robert L.; Fan, Shanhui; England, R. Joel; Lee, Yun Jo; Qi, Minghao

    2018-05-01

    We propose an on-chip optical-power delivery system for dielectric laser accelerators based on a fractal "tree-network" dielectric waveguide geometry. This system replaces experimentally demanding free-space manipulations of the driving laser beam with chip-integrated techniques based on precise nanofabrication, enabling access to orders-of-magnitude increases in the interaction length and total energy gain for these miniature accelerators. Based on computational modeling, in the relativistic regime, our laser delivery system is estimated to provide 21 keV of energy gain over an acceleration length of 192 μm with a single laser input, corresponding to a 108-MV/m acceleration gradient. The system may achieve 1 MeV of energy gain over a distance of less than 1 cm by sequentially illuminating 49 identical structures. These findings are verified by detailed numerical simulation and modeling of the subcomponents, and we provide a discussion of the main constraints, challenges, and relevant parameters with regard to on-chip laser coupling for dielectric laser accelerators.
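
    The quoted figures are mutually consistent, as a quick back-of-the-envelope check shows: a 108 MV/m gradient over 192 μm gives about 21 keV per structure, and 49 such structures give about 1 MeV over 49 × 192 μm ≈ 9.4 mm, i.e. under 1 cm.

```python
# Consistency check of the energy-gain figures quoted in the abstract.

gradient_v_per_m = 108e6   # 108 MV/m acceleration gradient
length_m = 192e-6          # 192 um interaction length per structure
n_structures = 49

gain_ev = gradient_v_per_m * length_m     # eV gained per structure
total_ev = n_structures * gain_ev         # eV over 49 structures
total_len_m = n_structures * length_m     # total acceleration length

print(gain_ev / 1e3)    # ~20.7 keV, consistent with the quoted 21 keV
print(total_ev / 1e6)   # ~1.02 MeV, consistent with the quoted 1 MeV
print(total_len_m)      # ~9.4 mm, i.e. less than 1 cm
```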

  10. Impact of fiber ring laser configuration on detection capabilities in FBG based sensor systems

    NASA Astrophysics Data System (ADS)

    Osuch, Tomasz; Kossek, Tomasz; Markowski, Konrad

    2014-11-01

    In this paper, fiber ring lasers (FRLs) are studied as interrogation units for distributed fiber Bragg grating (FBG) based sensor networks. In particular, two fiber laser configurations, with an erbium-doped fiber amplifier (EDFA) and a semiconductor optical amplifier (SOA) as the gain medium, were analyzed. For the EDFA-based interrogation system, both CW and active mode-locking operation were taken into account. The influence of spectral overlap between the FBG spectra on the detection capabilities of the examined FRLs is presented. Experimental results show that the SOA-based fiber laser interrogation unit can operate as a multi-parametric sensing system; in turn, an actively mode-locked fiber ring laser with an EDFA can realize an electronically switchable FBG-based sensing system.
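
    The FBG sensors interrogated in such systems reflect at the Bragg wavelength λ_B = 2·n_eff·Λ, where n_eff is the effective refractive index of the fiber mode and Λ the grating period; this standard relation is sketched below with illustrative telecom-band values (not values from the paper).

```python
# Standard fiber Bragg grating resonance condition: lambda_B = 2 * n_eff * period.
# The numeric inputs are typical telecom-band values, chosen for illustration.

def bragg_wavelength(n_eff, period_m):
    """Bragg wavelength of a fiber grating with effective index n_eff."""
    return 2.0 * n_eff * period_m

# n_eff ~ 1.447 and a ~535.6 nm period put the resonance near 1550 nm.
lam = bragg_wavelength(1.447, 535.6e-9)
print(lam * 1e9)  # ~1550 nm
```

    A strain or temperature change shifts n_eff and Λ, and hence λ_B, which is what the interrogation unit must resolve; overlapping λ_B shifts between neighbouring gratings are exactly the detection problem the paper examines.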

  11. Data Fusion for a Vision-Radiological System: a Statistical Calibration Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enqvist, Andreas; Koppal, Sanjeev; Riley, Phillip

    2015-07-01

    Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of calibration algorithms for characterizing the fused sensor system as a single entity. When evaluating system calibration algorithms, there is an apparent need to correct for scene-induced deviations from the basic inverse distance-squared law governing the detection rates. In particular, the computer vision system provides a map of the distance dependence of the sources being tracked, into which the time-dependent radiological data can be incorporated by means of data fusion of the two sensors' output data. (authors)
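
    The inverse-square calibration idea above can be sketched as a least-squares fit: the vision system supplies the source distance r, and the detector count rate is modelled as rate = A/r² + b (with b a background term). The linear formulation below is an assumption about how such a fit might be implemented, not the authors' algorithm.

```python
import numpy as np

# Sketch: fit detector count rates to rate = A / r**2 + b by linear least
# squares, using distances r provided by the computer vision system.

def fit_inverse_square(r, rates):
    """Fit rate = A/r^2 + b; returns (A, b)."""
    r = np.asarray(r, dtype=float)
    X = np.column_stack([1.0 / r**2, np.ones(len(r))])
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(rates, dtype=float), rcond=None)
    return coeffs[0], coeffs[1]

# Synthetic check with A = 400 counts*m^2/s and background b = 5 counts/s.
r = np.array([1.0, 2.0, 4.0, 8.0])
rates = 400.0 / r**2 + 5.0
A, b = fit_inverse_square(r, rates)
print(round(A, 1), round(b, 1))  # 400.0 5.0
```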

  12. Laser photocoagulation at birth prevents blindness in Norrie's disease diagnosed using amniocentesis.

    PubMed

    Chow, Clement C; Kiernan, Daniel F; Chau, Felix Y; Blair, Michael P; Ticho, Benjamin H; Galasso, John M; Shapiro, Michael J

    2010-12-01

    To report the first case of prophylactic laser treatment to prevent blindness in a patient who was diagnosed with Norrie's disease by genetic testing with amniocentesis. Case report. A 2-year-old white boy with Norrie's disease. A 37-week gestational age male with a family history of Norrie's disease was born via Cesarean section after the mother had undergone prenatal amniocentesis fetal-genetic testing at 23 weeks of gestation. A C520T (nonsense) mutation was found in the Norrie's disease gene. After examination under anesthesia confirmed the diagnosis on the first day of life, laser photocoagulation was applied to the avascular retina bilaterally. The patient was followed closely by ophthalmology, pediatrics, and occupational therapy departments. Functional outcome, as documented by Teller visual acuity and formal occupational therapy testing, and anatomic outcome, as documented by Retcam photography and fluorescein angiography. Complete regression of extraretinal fibrovascular proliferation was observed 1 month after laser treatment. No retinal detachment had occurred to date at 24 months. Teller visual acuity at 23 months of life was 20/100 in both eyes. The patient's vision and developmental milestones were age appropriate. Pre-term genetic diagnosis with immediate laser treatment after birth may preserve vision in individuals affected with Norrie's disease. Copyright © 2010 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  13. The implementation of contour-based object orientation estimation algorithm in FPGA-based on-board vision system

    NASA Astrophysics Data System (ADS)

    Alpatov, Boris; Babayan, Pavel; Ershov, Maksim; Strotov, Valery

    2016-10-01

    This paper describes the implementation of an orientation estimation algorithm in an FPGA-based vision system. An approach to estimating the orientation of objects lacking axial symmetry is proposed. The suggested algorithm estimates the orientation of a specific, known 3D object from its 3D model and consists of two stages: learning and estimation. The learning stage explores the studied object: using the 3D model, a set of training images is gathered by capturing the model from viewpoints evenly distributed on a sphere, with the point distribution following the geosphere principle. Descriptors calculated from the gathered training images are then used in the estimation stage, which matches the observed image descriptor against the training image descriptors. The experimental research was performed using a set of images of an Airbus A380. The proposed orientation estimation algorithm showed good accuracy in all case studies, and its real-time performance in the FPGA-based vision system was demonstrated.
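
    The estimation stage can be sketched as a nearest-neighbour search: the observed descriptor is compared against the per-viewpoint training descriptors and the best match's stored orientation is returned. The descriptor contents and the L2 matching metric below are placeholders, not the paper's actual contour-based descriptor.

```python
import numpy as np

# Minimal sketch of the estimation stage: nearest-neighbour matching of an
# observed descriptor against training descriptors (one per geosphere viewpoint).

def estimate_orientation(observed, train_descriptors, train_orientations):
    """Return the orientation of the closest training descriptor (L2 metric)."""
    d = np.linalg.norm(train_descriptors - observed, axis=1)
    return train_orientations[int(np.argmin(d))]

# Usage with toy 4-D descriptors for three viewpoints.
train = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]], dtype=float)
orients = [(0.0, 0.0), (90.0, 0.0), (0.0, 90.0)]  # (azimuth, elevation), deg
obs = np.array([0.1, 0.9, 0.0, 0.0])
print(estimate_orientation(obs, train, orients))  # (90.0, 0.0)
```

    In the real system the training set is dense enough (viewpoints evenly distributed on the geosphere) that the nearest viewpoint is an acceptable orientation estimate.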

  14. Integrated navigation, flight guidance, and synthetic vision system for low-level flight

    NASA Astrophysics Data System (ADS)

    Mehler, Felix E.

    2000-06-01

    Future military transport aircraft will require a new approach to the avionics suite to fulfill an ever-changing variety of missions. The most demanding phases of these missions are typically the low-level flight segments, including tactical terrain following/avoidance, payload drop, and/or on-board autonomous landing at forward operating strips without ground-based infrastructure. As a consequence, individual components and systems must become more integrated to offer a higher degree of reliability, integrity, flexibility and autonomy over existing systems while reducing crew workload. The integration of digital terrain data not only introduces synthetic vision into the cockpit, but also enhances navigation and guidance capabilities. At DaimlerChrysler Aerospace AG Military Aircraft Division (Dasa-M), an integrated navigation, flight guidance and synthetic vision system based on digital terrain data has been developed to fulfill the requirements of the Future Transport Aircraft (FTA). The fusion of three independent navigation sensors provides a more reliable and precise solution to both the 4D flight guidance and the display components, which comprise a Head-Up and a Head-Down Display with synthetic vision. This paper presents the system, its integration into the DLR's VFW 614 Advanced Technology Testing Aircraft System (ATTAS) and the results of the flight-test campaign.

  15. Safety and Hazard Analysis for the Coherent/Acculite Laser Based Sandia Remote Sensing System (Trailer B70).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Augustoni, Arnold L.

    A laser safety and hazard analysis is presented for the Coherent(r)-driven Acculite(r) laser central to the Sandia Remote Sensing System (SRSS). The analysis is based on the 2000 version of the American National Standards Institute's (ANSI) Standard Z136.1, Safe Use of Lasers, and the 2000 version of the ANSI Standard Z136.6, Safe Use of Lasers Outdoors. The trailer (B70) based SRSS laser system is a mobile platform used to perform laser interaction experiments and tests at various national test sites. It is generally operated on the United States Air Force Starfire Optical Range (SOR) at Kirtland Air Force Base (KAFB), New Mexico. The laser is used to perform laser interaction testing inside the laser trailer as well as outside the trailer at target sites located at various distances. In order to protect personnel who work inside the Nominal Hazard Zone (NHZ) from hazardous laser exposures, it was necessary to determine the Maximum Permissible Exposure (MPE) for each laser wavelength (wavelength band) and calculate the appropriate minimum Optical Density (ODmin) necessary for the laser safety eyewear used by authorized personnel. Also, the Nominal Ocular Hazard Distance (NOHD) and the Extended Ocular Hazard Distance (EOHD) are calculated in order to protect unauthorized personnel who may have violated the boundaries of the control area and entered the laser's NHZ during testing outside the trailer.
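
    The two quantities computed in such an analysis follow standard ANSI Z136.1 relations: the minimum eyewear optical density ODmin = log10(H0/MPE), where H0 is the worst-case exposure, and (for a simple point-source model with beam divergence φ and power Φ, neglecting exit-aperture diameter and atmospheric losses) NOHD = (1/φ)·sqrt(4Φ/(π·MPE)). The numeric inputs below are illustrative, not values from the SRSS analysis.

```python
import math

# Standard ANSI Z136.1 hazard-analysis relations (simplified point-source
# model; the example inputs are illustrative only).

def od_min(exposure, mpe):
    """Minimum optical density so the transmitted exposure is <= MPE."""
    return math.log10(exposure / mpe)

def nohd(power_w, divergence_rad, mpe_w_per_m2):
    """Nominal ocular hazard distance for a diverging beam (point source)."""
    return math.sqrt(4.0 * power_w / (math.pi * mpe_w_per_m2)) / divergence_rad

# Example: an exposure 1000x the MPE requires eyewear of at least OD 3.
print(od_min(1000.0, 1.0))          # 3.0
print(round(nohd(1.0, 1e-3, 25.5))) # hazard distance in metres
```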

  16. Vision-Based People Detection System for Heavy Machine Applications

    PubMed Central

    Fremont, Vincent; Bui, Manh Tuan; Boukerroui, Djamal; Letort, Pierrick

    2016-01-01

    This paper presents a vision-based people detection system for improving safety in heavy machines. We propose a perception system composed of a monocular fisheye camera and a LiDAR. Fisheye cameras have the advantage of a wide field-of-view, but the strong distortions that they create must be handled at the detection stage. Since people detection in fisheye images has not been well studied, we focus on investigating and quantifying the impact that strong radial distortions have on the appearance of people, and we propose approaches for handling this specificity, adapted from state-of-the-art people detection approaches. These adaptive approaches nevertheless have the drawback of high computational cost and complexity. Consequently, we also present a framework for harnessing the LiDAR modality in order to enhance the detection algorithm for different camera positions. A sequential LiDAR-based fusion architecture is used, which addresses directly the problem of reducing false detections and computational cost in an exclusively vision-based system. A heavy machine dataset was built, and different experiments were carried out to evaluate the performance of the system. The results are promising, in terms of both processing speed and performance. PMID:26805838
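
    The sequential LiDAR-based fusion architecture described above can be sketched as a two-stage pipeline: LiDAR returns first gate the candidate image regions to those at a plausible person range, and only the survivors are passed to the (expensive) vision classifier, reducing both false detections and computational cost. The region representation and range thresholds below are assumptions for illustration.

```python
# Sketch of sequential LiDAR-vision fusion: gate candidate regions by
# LiDAR range first, then run the vision classifier only on survivors.

def lidar_gate(regions, lidar_ranges, min_r=0.5, max_r=15.0):
    """Keep regions whose associated LiDAR range is plausible for a person."""
    return [reg for reg, r in zip(regions, lidar_ranges) if min_r <= r <= max_r]

def detect_people(regions, lidar_ranges, classifier):
    """Run the vision classifier only on LiDAR-gated candidate regions."""
    return [reg for reg in lidar_gate(regions, lidar_ranges) if classifier(reg)]

# Usage with a stub classifier that accepts every gated region.
regions = ["roi_a", "roi_b", "roi_c"]
ranges_m = [2.0, 40.0, 7.5]          # roi_b lies far beyond the gate
print(detect_people(regions, ranges_m, lambda r: True))  # ['roi_a', 'roi_c']
```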

  17. Vision-Based People Detection System for Heavy Machine Applications.

    PubMed

    Fremont, Vincent; Bui, Manh Tuan; Boukerroui, Djamal; Letort, Pierrick

    2016-01-20

    This paper presents a vision-based people detection system for improving safety in heavy machines. We propose a perception system composed of a monocular fisheye camera and a LiDAR. Fisheye cameras have the advantage of a wide field-of-view, but the strong distortions that they create must be handled at the detection stage. Since people detection in fisheye images has not been well studied, we focus on investigating and quantifying the impact that strong radial distortions have on the appearance of people, and we propose approaches for handling this specificity, adapted from state-of-the-art people detection approaches. These adaptive approaches nevertheless have the drawback of high computational cost and complexity. Consequently, we also present a framework for harnessing the LiDAR modality in order to enhance the detection algorithm for different camera positions. A sequential LiDAR-based fusion architecture is used, which addresses directly the problem of reducing false detections and computational cost in an exclusively vision-based system. A heavy machine dataset was built, and different experiments were carried out to evaluate the performance of the system. The results are promising, in terms of both processing speed and performance.

  18. Eye injuries from laser exposure: a review.

    PubMed

    Hudson, S J

    1998-05-01

    Lasers pose a significant threat to vision in modern military operations. Anti-personnel lasers have been designed that can cause intentional blindness in large numbers of personnel. Although the use of blinding laser weapons during combat has been prohibited by international legislation, research and development of these weapons have not been prohibited, and significant controversy remains. Unintentional blinding can also result from other types of lasers used on the battlefield, such as range-finders and anti-material lasers. Lasers that are capable of producing blindness operate within specific wavelength parameters and include visible and near-infrared lasers. Patients who suffer laser eye injuries usually complain of flash blindness, followed by transient or permanent visual loss. Laser retinal damage should be suspected in any patient with visual complaints in an operational setting. The treatment for laser retinal injuries is extremely limited, and prevention is essential. Improved protective eyewear and other countermeasures to laser eye injury are necessary as long as the threat remains.

  19. Database Integrity Monitoring for Synthetic Vision Systems Using Machine Vision and SHADE

    NASA Technical Reports Server (NTRS)

    Cooper, Eric G.; Young, Steven D.

    2005-01-01

    In an effort to increase situational awareness, the aviation industry is investigating technologies that allow pilots to visualize what is outside of the aircraft during periods of low-visibility. One of these technologies, referred to as Synthetic Vision Systems (SVS), provides the pilot with real-time computer-generated images of obstacles, terrain features, runways, and other aircraft regardless of weather conditions. To help ensure the integrity of such systems, methods of verifying the accuracy of synthetically-derived display elements using onboard remote sensing technologies are under investigation. One such method is based on a shadow detection and extraction (SHADE) algorithm that transforms computer-generated digital elevation data into a reference domain that enables direct comparison with radar measurements. This paper describes machine vision techniques for making this comparison and discusses preliminary results from application to actual flight data.

  20. Assessing Dual Sensor Enhanced Flight Vision Systems to Enable Equivalent Visual Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Etherington, Timothy J.; Severance, Kurt; Bailey, Randall E.; Williams, Steven P.; Harrison, Stephanie J.

    2016-01-01

    Flight deck-based vision system technologies, such as Synthetic Vision (SV) and Enhanced Flight Vision Systems (EFVS), may serve as revolutionary crew/vehicle interface technologies to meet the challenges of the Next Generation Air Transportation System Equivalent Visual Operations (EVO) concept - that is, the ability to achieve the safety of current-day Visual Flight Rules (VFR) operations and maintain the operational tempos of VFR irrespective of the weather and visibility conditions. One significant challenge lies in defining the required equipage on the aircraft and on the airport to enable the EVO concept objective. A motion-base simulator experiment was conducted to evaluate the operational feasibility, pilot workload and pilot acceptability of conducting straight-in instrument approaches with published vertical guidance to landing, touchdown, and rollout to a safe taxi speed in visibility as low as 300 ft runway visual range by use of onboard vision system technologies on a Head-Up Display (HUD), without need of or reliance on natural vision. Twelve crews evaluated two methods of combining dual-sensor (millimeter-wave radar and forward-looking infrared) EFVS imagery on pilot-flying and pilot-monitoring HUDs as they made approaches to runways with and without touchdown zone and centerline lights. In addition, the impact of adding SV to the dual-sensor EFVS imagery on crew flight performance, workload, and situation awareness during extremely low visibility approach and landing operations was assessed. Results indicate that all EFVS concepts flown resulted in excellent approach path tracking and touchdown performance without any workload penalty. Adding SV imagery to the EFVS concepts improved situation awareness but produced no discernible improvements in flight path maintenance.
