Science.gov

Sample records for active vision systems

  1. ROVER: A prototype active vision system

    NASA Astrophysics Data System (ADS)

    Coombs, David J.; Marsh, Brian D.

    1987-08-01

    The Roving Eyes project is an experiment in active vision. We present the design and implementation of a prototype that tracks colored balls in images from an on-line charge coupled device (CCD) camera. Rover is designed to keep up with its rapidly changing environment by handling best and average case conditions and ignoring the worst case. This allows Rover's techniques to be less sophisticated and consequently faster. Each of Rover's major functional units is relatively isolated from the others, and an executive which knows all the functional units directs the computation by deciding which jobs would be most effective to run. This organization is realized with a priority queue of jobs and their arguments. Rover's structure not only allows it to adapt its strategy to the environment, but also makes the system extensible. A capability can be added to the system by adding a functional module with a well defined interface and by modifying the executive to make use of the new module. The current implementation is discussed in the appendices.
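
    The executive-plus-priority-queue organization lends itself to a compact sketch. The Python below is a minimal illustration of the idea under our own assumptions (invented class, job names, and priorities), not the original ROVER code:

    ```python
    # Hypothetical sketch of a ROVER-style executive: jobs sit in a priority
    # queue and the executive repeatedly runs whichever job it judges most
    # effective. Jobs may post follow-up jobs back onto the queue.
    import heapq
    import itertools

    class Executive:
        def __init__(self):
            self._queue = []                  # min-heap of (priority, seq, job, args)
            self._seq = itertools.count()     # tie-breaker so heapq never compares jobs

        def post(self, priority, job, *args):
            """Lower numbers run first; the executive decides the priority."""
            heapq.heappush(self._queue, (priority, next(self._seq), job, args))

        def run(self):
            while self._queue:
                _, _, job, args = heapq.heappop(self._queue)
                job(self, *args)              # a job may post follow-up jobs

    def grab_frame(execu):
        print("grab frame from CCD")
        execu.post(1, track_ball, "red")      # follow-up work at higher priority

    def track_ball(execu, color):
        print(f"track {color} ball")

    if __name__ == "__main__":
        ex = Executive()
        ex.post(2, grab_frame)
        ex.run()
    ```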

  2. Vector disparity sensor with vergence control for active vision systems.

    PubMed

    Barranco, Francisco; Diaz, Javier; Gibaldi, Agostino; Sabatini, Silvio P; Ros, Eduardo

    2012-01-01

    This paper presents an architecture for computing vector disparity for active vision systems as used in robotic applications. The control of the vergence angle of a binocular system allows us to efficiently explore dynamic environments, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy of the disparity computation around the fixation point and fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engines are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity at up to 32 fps on VGA resolution images with very good accuracy, as shown using benchmark sequences with known ground truth. The performance of the presented approaches in terms of frame rate, resource utilization, and accuracy is discussed. On the basis of these results, our study indicates that the gradient-based approach offers the best trade-off for integration with the active vision system.
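
    A minimal sketch of the gradient-based (luminance) alternative, formulated as a Lucas-Kanade-style least-squares problem for a single window; this illustrates the general technique under a small-displacement assumption, not the paper's FPGA pipeline, and all names are ours:

    ```python
    # Gradient-based (Lucas-Kanade-style) 2-D vector disparity estimate for
    # one window of a stereo pair, solving the linearized brightness-constancy
    # system A d = -It in least squares.
    import numpy as np

    def vector_disparity(left, right, y, x, win=7):
        """Estimate (dx, dy) mapping the window in `left` onto `right`."""
        h = win // 2
        L = left[y-h:y+h+1, x-h:x+h+1].astype(float)
        R = right[y-h:y+h+1, x-h:x+h+1].astype(float)
        Iy, Ix = np.gradient(L)                        # spatial derivatives (row, col)
        It = R - L                                     # inter-ocular difference
        A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
        d, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
        return d                                       # [dx, dy] at (y, x)

    yy, xx = np.mgrid[0:64, 0:64]
    left = np.sin(xx / 6.0) + np.cos(yy / 8.0)         # smooth synthetic image
    right = np.roll(left, 1, axis=1)                   # ground-truth disparity ~ (1, 0)
    print(vector_disparity(left, right, 32, 32))       # approx [1, 0]
    ```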

  3. Vector Disparity Sensor with Vergence Control for Active Vision Systems

    PubMed Central

    Barranco, Francisco; Diaz, Javier; Gibaldi, Agostino; Sabatini, Silvio P.; Ros, Eduardo

    2012-01-01

    This paper presents an architecture for computing vector disparity for active vision systems as used in robotic applications. The control of the vergence angle of a binocular system allows us to efficiently explore dynamic environments, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy of the disparity computation around the fixation point and fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engines are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity at up to 32 fps on VGA resolution images with very good accuracy, as shown using benchmark sequences with known ground truth. The performance of the presented approaches in terms of frame rate, resource utilization, and accuracy is discussed. On the basis of these results, our study indicates that the gradient-based approach offers the best trade-off for integration with the active vision system. PMID:22438737

  4. Range gated active night vision system for automobiles.

    PubMed

    David, Ofer; Kopeika, Norman S; Weizer, Boaz

    2006-10-01

    Night vision is an emerging safety feature being introduced for automobiles. We develop what we believe is an innovative night vision system based on gated imaging principles. The concept of gated imaging is described along with its basic advantages, including the backscatter-reduction mechanism that improves vision through fog, rain, and snow. Performance is evaluated by analyzing bar-pattern modulation and comparing the results with Johnson chart predictions.
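
    The backscatter-reduction principle rests on simple timing: light returned from range R arrives 2R/c after the illumination pulse, so the camera gate can be opened only for the interval corresponding to the ranges of interest. A back-of-envelope sketch with illustrative numbers, not the paper's parameters:

    ```python
    # Gate timing for a range-gated imager: opening the camera only in
    # [2*R_min/c, 2*R_max/c] rejects early backscatter from fog or rain
    # close to the vehicle.
    C = 299_792_458.0  # speed of light, m/s

    def gate_window(r_min_m, r_max_m):
        return 2 * r_min_m / C, 2 * r_max_m / C

    t_open, t_close = gate_window(50.0, 200.0)
    print(f"open {t_open*1e9:.0f} ns, close {t_close*1e9:.0f} ns after the pulse")
    # -> roughly 334 ns and 1334 ns
    ```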

  5. Scene interpretation module for an active vision system

    NASA Astrophysics Data System (ADS)

    Remagnino, P.; Matas, J.; Illingworth, John; Kittler, Josef

    1993-08-01

    In this paper an implementation of a high-level symbolic scene interpreter for an active vision system is considered. The scene interpretation module uses low-level image processing and feature extraction results to achieve object recognition and to build up a 3D environment map. The module is structured to exploit the spatio-temporal context provided by existing partial world interpretations, and it uses spatial reasoning to direct gaze control, thereby achieving efficient and robust processing through spatial focus of attention. The system builds and maintains an awareness of an environment far larger than a single camera view. Experiments on image sequences have shown that the system can: establish its position and orientation in a partially known environment; track simple moving objects such as cups and boxes; temporally integrate recognition results to establish or forget object presence; and utilize spatial focus of attention to achieve efficient and robust object recognition. The system has been extensively tested using images from a single steerable camera viewing a simple table-top scene containing box- and cylinder-like objects. Work is currently progressing to further develop its competences and to interface it with the Surrey active stereo vision head, GETAFIX.

  6. An active vision system for multitarget surveillance in dynamic environments.

    PubMed

    Bakhtari, Ardevan; Benhabib, Beno

    2007-02-01

    This paper presents a novel agent-based method for the dynamic, coordinated selection and positioning of active-vision cameras for the simultaneous surveillance of multiple objects of interest as they travel through a cluttered environment with a priori unknown trajectories. The proposed system dynamically adjusts not only the orientation but also the position of the cameras in order to maximize the system's performance by avoiding occlusions and acquiring images with preferred viewing angles. Sensor selection and positioning are accomplished through an agent-based approach. The proposed sensing-system reconfiguration strategy has been verified via simulations and implemented on an experimental prototype setup for automated facial recognition. Both simulations and experimental analyses have shown that the use of dynamic sensors along with an effective online dispatching strategy can tangibly improve the surveillance performance of a sensing system.

  7. Active Vision in Marmosets: A Model System for Visual Neuroscience

    PubMed Central

    Reynolds, John H.; Miller, Cory T.

    2014-01-01

    The common marmoset (Callithrix jacchus), a small-bodied New World primate, offers several advantages to complement vision research in larger primates. Studies in the anesthetized marmoset have detailed the anatomy and physiology of their visual system (Rosa et al., 2009) while studies of auditory and vocal processing have established their utility for awake and behaving neurophysiological investigations (Lu et al., 2001a,b; Eliades and Wang, 2008a,b; Osmanski and Wang, 2011; Remington et al., 2012). However, a critical unknown is whether marmosets can perform visual tasks under head restraint. This has been essential for studies in macaques, enabling both accurate eye tracking and head stabilization for neurophysiology. In one set of experiments we compared the free viewing behavior of head-fixed marmosets to that of macaques, and found that their saccadic behavior is comparable across a number of saccade metrics and that saccades target similar regions of interest including faces. In a second set of experiments we applied behavioral conditioning techniques to determine whether the marmoset could control fixation for liquid reward. Two marmosets could fixate a central point and ignore peripheral flashing stimuli, as needed for receptive field mapping. Both marmosets also performed an orientation discrimination task, exhibiting a saturating psychometric function with reliable performance and shorter reaction times for easier discriminations. These data suggest that the marmoset is a viable model for studies of active vision and its underlying neural mechanisms. PMID:24453311

  8. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision

    NASA Astrophysics Data System (ADS)

    Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck

    2008-04-01

    One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. We have developed a visual sensor system for robotic applications that is inherently equipped with two types of sensor: an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems, which require a large number of images with varied projected patterns for dense range map acquisition, or conventional passive vision systems, which work well only in environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts: one in which the passive stereo vision helps the active vision, and one in which the active trinocular vision helps the passive vision. The first part matches the laser patterns in the stereo laser images with the help of intensity images; the second part uses an information fusion technique based on dynamic programming, in which image regions between laser patterns are matched pixel by pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms work in real applications, the sensor system is implemented on a robotic system and the proposed algorithms are applied. A series of experimental tests is performed for a variety of robot and environment configurations. The performance of the sensor system is discussed in detail.
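
    A toy version of dynamic-programming scanline matching, in the spirit of the second fusion stage (pixel-by-pixel matching between anchor points); this is the generic DP alignment technique with an assumed occlusion cost, not the authors' algorithm:

    ```python
    # Classic DP alignment of two scanlines: each pixel either matches
    # (absolute intensity difference) or is skipped at an occlusion cost.
    # Backtracking through D would yield per-pixel disparities.
    import numpy as np

    def dp_scanline_match(left_row, right_row, occlusion_cost=8.0):
        n, m = len(left_row), len(right_row)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, :] = occlusion_cost * np.arange(m + 1)
        D[:, 0] = occlusion_cost * np.arange(n + 1)
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                match = D[i-1, j-1] + abs(float(left_row[i-1]) - float(right_row[j-1]))
                D[i, j] = min(match, D[i-1, j] + occlusion_cost, D[i, j-1] + occlusion_cost)
        return D[n, m]   # total matching cost for the scanline pair

    print(dp_scanline_match([10, 50, 90], [12, 52, 88]))   # -> 6.0
    ```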

  9. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision.

    PubMed

    Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck

    2008-04-10

    One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. We have developed a visual sensor system for robotic applications that is inherently equipped with two types of sensor: an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems, which require a large number of images with varied projected patterns for dense range map acquisition, or conventional passive vision systems, which work well only in environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts: one in which the passive stereo vision helps the active vision, and one in which the active trinocular vision helps the passive vision. The first part matches the laser patterns in the stereo laser images with the help of intensity images; the second part uses an information fusion technique based on dynamic programming, in which image regions between laser patterns are matched pixel by pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms work in real applications, the sensor system is implemented on a robotic system and the proposed algorithms are applied. A series of experimental tests is performed for a variety of robot and environment configurations. The performance of the sensor system is discussed in detail.

  10. Exploring Techniques for Vision Based Human Activity Recognition: Methods, Systems, and Evaluation

    PubMed Central

    Xu, Xin; Tang, Jinshan; Zhang, Xiaolong; Liu, Xiaoming; Zhang, Hong; Qiu, Yimin

    2013-01-01

    With the wide application of vision-based intelligent systems, image and video analysis technologies have attracted the attention of researchers in the computer vision field. In image and video analysis, human activity recognition is an important research direction. By interpreting and understanding human activities, we can recognize and predict the occurrence of crimes and help the police or other agencies react immediately. In the past, a large number of papers have been published on human activity recognition in video and image sequences. In this paper, we provide a comprehensive survey of the recent development of the techniques, including methods, systems, and quantitative evaluation of the performance of human activity recognition. PMID:23353144

  11. Exploring techniques for vision based human activity recognition: methods, systems, and evaluation.

    PubMed

    Xu, Xin; Tang, Jinshan; Zhang, Xiaolong; Liu, Xiaoming; Zhang, Hong; Qiu, Yimin

    2013-01-25

    With the wide application of vision-based intelligent systems, image and video analysis technologies have attracted the attention of researchers in the computer vision field. In image and video analysis, human activity recognition is an important research direction. By interpreting and understanding human activity, we can recognize and predict the occurrence of crimes and help the police or other agencies react immediately. In the past, a large number of papers have been published on human activity recognition in video and image sequences. In this paper, we provide a comprehensive survey of the recent development of the techniques, including methods, systems, and quantitative evaluation of the performance of human activity recognition.

  12. Self-adaptive Vision System

    NASA Astrophysics Data System (ADS)

    Stipancic, Tomislav; Jerbic, Bojan

    Light conditions are an important part of every vision application. This paper describes an active behavioral scheme for a particular active vision system. The scheme enables the system to adapt to current environmental conditions by constantly validating the amount of reflected light with a luminance meter and dynamically changing significant vision parameters. The purpose of the experiment was to determine the connections between light conditions and internal vision parameters. As part of the experiment, Response Surface Methodology (RSM) was used to predict values of the vision parameters from luminance input values; RSM approximates an unknown function for which only a few values have been computed. The main output validation parameter, called Match Score, indicates how well a found object matches the learned model. All obtained data are stored in a local database. By applying new parameters predicted by the RSM in a timely manner, the vision application works in a stable and robust manner.
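
    A sketch of the RSM idea for a one-factor case, assuming a second-order fit of a vision parameter against (log) luminance; the parameter, values, and functional form are illustrative assumptions, not the paper's data:

    ```python
    # Fit a second-order response surface mapping measured luminance to a
    # vision parameter, then use it to predict settings for new conditions.
    import numpy as np

    luminance = np.array([50, 100, 200, 400, 800], dtype=float)   # lx, illustrative
    exposure  = np.array([40, 22, 12, 7, 4], dtype=float)         # ms, illustrative

    coeffs = np.polyfit(np.log(luminance), exposure, deg=2)  # quadratic in log-luminance
    predict = np.poly1d(coeffs)

    print(f"suggested exposure at 300 lx: {predict(np.log(300.0)):.1f} ms")
    ```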

  13. Biomimetic machine vision system.

    PubMed

    Harman, William M; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael

    2005-01-01

    Real-time application of digital imaging in machine vision systems has proven prohibitive within control systems that employ low-power single processors, without compromising the scope of vision or the resolution of captured images. Development of a real-time analog machine vision system is the focus of research taking place at the University of Wyoming. This new vision system is based upon the biological vision system of the common house fly. A single sensor, representing a single facet of the fly's eye, has been developed and incorporated into an array of sensors capable of detecting objects and tracking motion in 2-D space. The system "preprocesses" incoming image data, so that minimal data processing is required to determine the location of a target object. Due to the nature of the sensors in the array, hyperacuity is achieved, thereby eliminating the resolution issues found in digital vision systems. In this paper, we discuss the biological traits of the fly eye and the specific traits that led to the development of this machine vision system. We also discuss the process of developing an analog sensor that mimics the characteristics of interest in the biological vision system. The paper concludes with a discussion of how an array of these sensors can be applied toward solving real-world machine vision problems.

  14. Calculation of utmost parameters of active vision system based on nonscanning thermal imager

    NASA Astrophysics Data System (ADS)

    Sviridov, A. N.

    2003-09-01

    An active vision system (AVS) based on a nonscanning thermal imager (TI) and a CO2 quantum image amplifier is proposed. A mathematical model of the AVS is developed for investigating the utmost signal-to-noise values and other system parameters as functions of the distance to the scene (the area of observation, AO), the illumination pulse energy (W), the amplification factor (K) of the quantum amplifier, the objective lens characteristics, the spectral bandwidth of the cooled filter of the thermal imager, and the object and scene characteristics. Calculations were carried out for the following possible operating modes of the vision system: an active mode of the thermal imager with a cooled wideband filter; an active mode of the thermal imager with a cooled narrowband filter; and a passive mode (W = 0, K = 1) of the thermal imager with a cooled wideband filter. The results demonstrate the feasibility and expediency of designing an AVS comprising a nonscanning thermal imager, a pulsed CO2 quantum image amplifier, and a pulsed CO2 illumination laser. The AVS has advantages over thermal imaging when observing objects whose temperature and reflection factors differ only slightly from those of the scene. Depending on the product of W and K, the AVS can detect practically any local change of interest in the reflection factor at distances up to 3000-5000 m. Without replacing thermal imaging, the AVS provides additional information about observed objects; the images obtained with the AVS are more natural and more easily identified than thermal images formed by the objects' own radiation. For quantitative determination of the utmost sensitivity of the AVS, a new parameter is proposed: NERD, the 'noise equivalent reflection factors difference'. IR active vision systems, like human vision and near-IR vision systems based on image intensifiers

  15. Design of a high-performance telepresence system incorporating an active vision system for enhanced visual perception of remote environments

    NASA Astrophysics Data System (ADS)

    Pretlove, John R. G.; Asbery, Richard

    1995-12-01

    This paper describes the design, development, and implementation of a telepresence system for hazardous-environment applications. Its primary feature is a high-performance active stereo vision system slaved to the motion of the operator's head. To simulate the presence of an operator in a remote, hazardous environment, it is necessary to provide sufficient visual information about the remote environment, and the operator must be able to interact with the environment to carry out manipulative tasks. To achieve an enhanced sense of visual perception we have developed a tightly integrated pan-and-tilt stereo vision system with a head-mounted display. The motion of the operator's head is monitored by a six-DOF sensor that provides the demand signals to servocontrol the active vision system. The result is a compact yet high-performance system, built on mechatronic principles, that can be mounted on a small mobile platform. We have also developed an open-architecture controller for the dynamic active vision system that exhibits the dynamic performance characteristics of the human head-eye system, so as to form a natural and intuitive interface. A series of tests has been conducted to establish the system latency and to explore the effectiveness of remote 3D human perception, particularly with regard to manipulation tasks and navigation. The results of these tests are presented.
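
    One way to picture the head-slaving loop is as a rate-limited servo demand. The sketch below is our own generic illustration (gains, rates, and sample data invented), not the paper's controller:

    ```python
    # Slave a pan/tilt head to a head tracker: pass yaw/pitch demands
    # through a rate limiter so the servos track the operator's head
    # smoothly within their slew-rate limits.
    def rate_limit(demand_deg, current_deg, max_rate_dps, dt):
        step = max(-max_rate_dps * dt, min(max_rate_dps * dt, demand_deg - current_deg))
        return current_deg + step

    pan, tilt = 0.0, 0.0
    for head_yaw, head_pitch in [(10.0, -5.0), (25.0, -8.0), (25.0, -8.0)]:  # tracker samples
        pan  = rate_limit(head_yaw,   pan,  max_rate_dps=180.0, dt=0.02)     # 50 Hz loop
        tilt = rate_limit(head_pitch, tilt, max_rate_dps=120.0, dt=0.02)
        print(f"pan {pan:.1f} deg, tilt {tilt:.1f} deg")
    ```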

  16. Coherent laser vision system

    SciTech Connect

    Sebastion, R.L.

    1995-10-01

    The Coherent Laser Vision System (CLVS) is being developed to provide precision real-time 3D world views to support site characterization and robotic operations during facilities Decontamination and Decommissioning. Autonomous or semiautonomous robotic operations require an accurate, up-to-date 3D world view. Existing technologies for real-time 3D imaging, such as AM laser radar, have limited accuracy at significant ranges and exhibit variability in range estimates caused by lighting or surface shading. Recent advances in fiber-optic components and digital processing components have enabled the development of a new 3D vision system based upon a fiber-optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface-shading conditions that is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.
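
    FMCW ranging relies on the textbook relation between beat frequency and range: a chirp of bandwidth B swept over time T, mixed with its echo, yields a beat frequency f_b with R = c*f_b*T/(2B). A quick numerical check with generic values, not CLVS parameters:

    ```python
    # Standard FMCW ranging relation (textbook, not specific to CLVS).
    C = 299_792_458.0  # m/s

    def fmcw_range(f_beat_hz, sweep_bw_hz, sweep_time_s):
        return C * f_beat_hz * sweep_time_s / (2.0 * sweep_bw_hz)

    # e.g. 100 GHz sweep in 1 ms, 6.67 MHz beat -> ~10 m
    print(f"{fmcw_range(6.67e6, 100e9, 1e-3):.2f} m")
    ```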

  17. Synthetic Vision Systems

    NASA Technical Reports Server (NTRS)

    Prinzel, L.J.; Kramer, L.J.

    2009-01-01

    A synthetic vision system is an aircraft cockpit display technology that presents the visual environment external to the aircraft using computer-generated imagery in a manner analogous to how it would appear to the pilot if forward visibility were not restricted. The purpose of this chapter is to review the state of synthetic vision systems, and discuss selected human factors issues that should be considered when designing such displays.

  18. A tactile vision substitution system for the study of active sensing.

    PubMed

    Hsu, Brian; Hsieh, Cheng-Han; Yu, Sung-Nien; Ahissar, Ehud; Arieli, Amos; Zilbershtain-Kra, Yael

    2013-01-01

    This paper presents a tactile vision substitution system (TVSS) for the study of active sensing. Two algorithms, for image processing and trajectory tracking, were developed to enhance the capability of conventional TVSS. Image processing techniques were applied to reduce artifacts and extract important features from the active camera, effectively converting the information into tactile stimuli of much lower resolution. A fixed camera was used to record the movement of the active camera, and a trajectory tracking algorithm was developed to analyze the active sensing strategies that TVSS users apply to explore the environment. The image processing subsystem showed clear improvement in extracting object features for recognition. The trajectory tracking subsystem, in turn, made it possible to accurately locate the portion of the scene pointed at by the active camera, providing rich information for the study of the active sensing strategies applied by TVSS users.

  19. [Quality system Vision 2000].

    PubMed

    Pasini, Evasio; Pitocchi, Oreste; de Luca, Italo; Ferrari, Roberto

    2002-12-01

    A recent document of the Italian Ministry of Health points out that all structures providing services to the National Health System should implement a Quality System according to the ISO 9000 standards. Vision 2000 is the new version of the ISO standard and is less bureaucratic than the old version. The specific requirements of Vision 2000 are: a) to identify, monitor, and analyze the processes of the structure; b) to measure the results of the processes so as to ensure that they are effective; c) to implement the actions necessary to achieve the planned results and the continual improvement of these processes; and d) to identify customer requests and measure customer satisfaction. Specific attention should also be dedicated to the competence and training of the personnel involved in the processes. The principles of Vision 2000 agree with the principles of total quality management. The present article illustrates the Vision 2000 standard and provides practical examples of its implementation in cardiology departments.

  20. CONDOR Advanced Visionics System

    NASA Astrophysics Data System (ADS)

    Kanahele, David L.; Buckanin, Robert M.

    1996-06-01

    The Covert Night/Day Operations for Rotorcraft (CONDOR) program is a collaborative research and development program between the governments of the United States and the United Kingdom of Great Britain and Northern Ireland to develop and demonstrate an advanced visionics concept, coupled with an advanced flight control system, to improve rotorcraft mission effectiveness during day, night, and adverse weather conditions in the Nap-of-the-Earth environment. The Advanced Visionics System for CONDOR is a flight-ruggedized head-mounted display and computer graphics generator intended for exploring, developing, and evaluating proposed visionic concepts for rotorcraft, including: the application of color displays, wide field of view, enhanced imagery, virtual displays, mission symbology, stereo imagery, and other graphical interfaces.

  1. Computer Vision Systems

    NASA Astrophysics Data System (ADS)

    Gunasekaran, Sundaram

    Food quality is of paramount consideration for all consumers, and its importance is perhaps second only to food safety. By some definitions, food safety is also incorporated into the broad categorization of food quality. Hence, the need for careful and accurate evaluation of food quality is at the forefront of research and development both in academia and in industry. Among the many available methods for food quality evaluation, computer vision has proven to be the most powerful, especially for nondestructively extracting and quantifying many features that have direct relevance to food quality assessment and control. Furthermore, computer vision systems serve to rapidly evaluate the most readily observable food quality attributes - the external characteristics such as color, shape, size, and surface texture. In addition, it is now possible, using advanced computer vision technologies, to “see” inside a food product and/or package to examine important quality attributes ordinarily unavailable to human evaluators. With rapid advances in electronic hardware and other associated imaging technologies, the cost-effectiveness and speed of computer vision systems have greatly improved, and many practical systems are already in place in the food industry.

  2. Bird Vision System

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The Bird Vision system is a multicamera photogrammetry software application that runs on a Microsoft Windows XP platform and was developed at Kennedy Space Center by ASRC Aerospace. This software system collects data about the locations of birds within a volume centered on the Space Shuttle and transmits the data in real time to the laptop computer of a test director in the Launch Control Center (LCC) Firing Room.

  3. A vision system planner for increasing the autonomy of the Extravehicular Activity Helper/Retriever

    NASA Technical Reports Server (NTRS)

    Magee, Michael

    1993-01-01

    The Extravehicular Activity Retriever (EVAR) is a robotic device currently being developed by the Automation and Robotics Division at the NASA Johnson Space Center to support activities in the neighborhood of the Space Shuttle or Space Station Freedom. As the name implies, the Retriever's primary function will be to provide the capability to retrieve tools and equipment or other objects which have become detached from the spacecraft, but it will also be able to rescue a crew member who may have become inadvertently de-tethered. Later goals will include cooperative operations between a crew member and the Retriever such as fetching a tool that is required for servicing or maintenance operations. This paper documents a preliminary design for a Vision System Planner (VSP) for the EVAR that is capable of achieving visual objectives provided to it by a high level task planner. Typical commands which the task planner might issue to the VSP relate to object recognition, object location determination, and obstacle detection. Upon receiving a command from the task planner, the VSP then plans a sequence of actions to achieve the specified objective using a model-based reasoning approach. This sequence may involve choosing an appropriate sensor, selecting an algorithm to process the data, reorienting the sensor, adjusting the effective resolution of the image using lens zooming capability, and/or requesting the task planner to reposition the EVAR to obtain a different view of the object. An initial version of the Vision System Planner which realizes the above capabilities using simulated images has been implemented and tested. The remaining sections describe the architecture and capabilities of the VSP and its relationship to the high level task planner. In addition, typical plans that are generated to achieve visual goals for various scenarios are discussed. Specific topics to be addressed will include object search strategies, repositioning of the EVAR to improve the

  4. Industrial robot's vision systems

    NASA Astrophysics Data System (ADS)

    Iureva, Radda A.; Raskin, Evgeni O.; Komarov, Igor I.; Maltseva, Nadezhda K.; Fedosovsky, Michael E.

    2016-03-01

    Due to the improved economic situation in the high-technology sectors, work on the creation of industrial robots and special mobile robotic systems has resumed. Despite this, robotic control systems have mostly remained unchanged, with all the advantages and disadvantages of such systems; this is due to a lack of means that could greatly facilitate the work of the operator and, in some cases, completely replace it. The paper is concerned with the complex machine vision system of a robot for monitoring underground pipelines, which collects and analyzes up to 90% of the necessary information. The vision system is used to identify obstacles along the trajectory of movement and to determine their origin, dimensions, and character. The object is illuminated with structured light, and a TV camera records the projected structure; distortions of the structure uniquely determine the shape of the object in the camera's view. The reference illumination is synchronized with the camera. The main parameters of the system are the baseline distance between the light generator and the camera and the parallax angle (the angle between the optical axes of the projection unit and the camera).
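
    The structured-light geometry reduces to classic triangulation. Assuming a projector and camera separated by baseline b, with rays at angles alpha and beta from the baseline, the intersection depth is z = b*tan(alpha)*tan(beta)/(tan(alpha)+tan(beta)); a small check of that relation (our illustration, not this system's calibration):

    ```python
    # Classic structured-light / stereo triangulation from baseline and
    # two ray angles measured from the baseline.
    import math

    def triangulate_depth(baseline_m, alpha_deg, beta_deg):
        ta = math.tan(math.radians(alpha_deg))
        tb = math.tan(math.radians(beta_deg))
        return baseline_m * ta * tb / (ta + tb)

    print(f"{triangulate_depth(0.5, 70.0, 65.0):.3f} m")   # -> ~0.602 m
    ```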

  5. Evaluating the performance of close-range 3D active vision systems for industrial design applications

    NASA Astrophysics Data System (ADS)

    Beraldin, J.-Angelo; Gaiani, Marco

    2004-12-01

    In recent years, active three-dimensional (3D) vision systems, or range cameras for short, have come out of research laboratories to find niche markets in application fields as diverse as industrial design, automotive manufacturing, geomatics, space exploration, and cultural heritage, to name a few. Many publications address different issues linked to 3D sensing and processing, but these technologies still pose a number of challenges to new users, i.e., "what are they, how good are they, and how do they compare?". The need to understand, test, and integrate these range cameras with other technologies, e.g., photogrammetry, CAD, etc., is driven by the quest for optimal resolution, accuracy, speed, and cost. Before investing, users want to be certain that a given range camera satisfies their operational requirements. Understanding the basic theory and best practices associated with these cameras is in fact fundamental to fulfilling the requirements listed above in an optimal way. This paper addresses the evaluation of active 3D range cameras as part of a study to better understand and select one or several of them to fulfill the needs of industrial design applications. In particular, the effects of object material and surface features, calibration, and performance evaluation are discussed. Results are given for six different range cameras for close-range applications.

  6. Evaluating the performance of close-range 3D active vision systems for industrial design applications

    NASA Astrophysics Data System (ADS)

    Beraldin, J.-Angelo; Gaiani, Marco

    2005-01-01

    In recent years, active three-dimensional (3D) vision systems, or range cameras for short, have come out of research laboratories to find niche markets in application fields as diverse as industrial design, automotive manufacturing, geomatics, space exploration, and cultural heritage, to name a few. Many publications address different issues linked to 3D sensing and processing, but these technologies still pose a number of challenges to new users, i.e., "what are they, how good are they, and how do they compare?". The need to understand, test, and integrate these range cameras with other technologies, e.g., photogrammetry, CAD, etc., is driven by the quest for optimal resolution, accuracy, speed, and cost. Before investing, users want to be certain that a given range camera satisfies their operational requirements. Understanding the basic theory and best practices associated with these cameras is in fact fundamental to fulfilling the requirements listed above in an optimal way. This paper addresses the evaluation of active 3D range cameras as part of a study to better understand and select one or several of them to fulfill the needs of industrial design applications. In particular, the effects of object material and surface features, calibration, and performance evaluation are discussed. Results are given for six different range cameras for close-range applications.

  7. Visions image operating system

    SciTech Connect

    Kohler, R.R.; Hanson, A.R.

    1982-01-01

    The image operating system (IOS) is a complete software environment specifically designed for dynamic experimentation in scene analysis. The IOS consists of a high-level interpretive control language (LISP) with efficient image operators written in a noninterpretive language. The image operators are viewed as local operators to be applied in parallel at all pixels of a set of input images. In order to carry out complex image analysis experiments, an environment conducive to such experimentation was needed. This environment is provided by the VISIONS image operating system, which is based on a computational structure known as a processing cone, proposed by Hanson and Riseman (1974, 1980), and implemented on a VAX-11/780 running VMS.

  8. Dynamically re-configurable CMOS imagers for an active vision system

    NASA Technical Reports Server (NTRS)

    Yang, Guang (Inventor); Pain, Bedabrata (Inventor)

    2005-01-01

    A vision system is disclosed. The system includes a pixel array, at least one multi-resolution window operation circuit, and a pixel averaging circuit. The pixel array has an array of pixels configured to receive light signals from an image having at least one tracking target. The multi-resolution window operation circuits are configured to process the image. Each of the multi-resolution window operation circuits processes each tracking target within a particular multi-resolution window. The pixel averaging circuit is configured to sample and average pixels within the particular multi-resolution window.
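
    A software analogy to the claimed pixel-averaging circuit may make the idea concrete: average k x k blocks inside a window so a tracking target is represented at reduced resolution. The function and parameter names below are our own; the patent describes an on-chip circuit, not this code:

    ```python
    # Block-average the pixels inside a multi-resolution window of a frame.
    import numpy as np

    def window_average(frame, top, left, height, width, k):
        win = frame[top:top+height, left:left+width].astype(float)
        h, w = (height // k) * k, (width // k) * k          # trim to multiples of k
        blocks = win[:h, :w].reshape(h // k, k, w // k, k)
        return blocks.mean(axis=(1, 3))                     # one value per k x k block

    frame = np.random.randint(0, 256, size=(480, 640))
    lowres = window_average(frame, top=100, left=200, height=64, width=64, k=4)
    print(lowres.shape)   # (16, 16)
    ```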

  9. Active vision system for planning and programming of industrial robots in one-of-a-kind manufacturing

    NASA Astrophysics Data System (ADS)

    Berger, Ulrich; Schmidt, Achim

    1995-10-01

    The aspects of automation technology in industrial one-of-a-kind manufacturing are discussed. An approach to improving the quality-cost relation is developed, and an overview of a 3D-vision-supported automation system is given. This system is based on an active vision sensor for 3D geometry feedback; its measurement principle, the coded light approach, is explained. The experimental environment for the technical validation of the automation approach is demonstrated, in which robot-based processes (assembly, arc welding, and flame cutting) are graphically simulated and off-line programmed. A typical process sequence for automated one-of-a-kind manufacturing is described. The results of this research are applied to a project on the automated disassembly of car parts for recycling using industrial robots.

  10. Coevolution of active vision and feature selection.

    PubMed

    Floreano, Dario; Kato, Toshifumi; Marocco, Davide; Sauser, Eric

    2004-03-01

    We show that complex visual tasks, such as position- and size-invariant shape recognition and navigation in the environment, can be tackled with simple architectures generated by a coevolutionary process of active vision and feature selection. Behavioral machines equipped with primitive vision systems and direct pathways between visual and motor neurons are evolved while they freely interact with their environments. We describe the application of this methodology in three sets of experiments, namely, shape discrimination, car driving, and robot navigation. We show that these systems develop sensitivity to a number of oriented, retinotopic visual features, such as edges, corners, and height, and a behavioral repertoire to locate, bring, and keep these features in sensitive regions of the vision system, resembling strategies observed in simple insects. PMID:15052484

  11. Coevolution of active vision and feature selection.

    PubMed

    Floreano, Dario; Kato, Toshifumi; Marocco, Davide; Sauser, Eric

    2004-03-01

    We show that complex visual tasks, such as position- and size-invariant shape recognition and navigation in the environment, can be tackled with simple architectures generated by a coevolutionary process of active vision and feature selection. Behavioral machines equipped with primitive vision systems and direct pathways between visual and motor neurons are evolved while they freely interact with their environments. We describe the application of this methodology in three sets of experiments, namely, shape discrimination, car driving, and robot navigation. We show that these systems develop sensitivity to a number of oriented, retinotopic visual features, such as edges, corners, and height, and a behavioral repertoire to locate, bring, and keep these features in sensitive regions of the vision system, resembling strategies observed in simple insects.

  12. Coherent laser vision system (CLVS)

    SciTech Connect

    1997-02-13

    The purpose of the CLVS research project is to develop a prototype fiber-optic-based Coherent Laser Vision System suitable for DOE's EM Robotics program. The system provides three-dimensional (3D) vision for monitoring situations in which it is necessary to update geometric data on the order of once per second. The CLVS project plan required implementation in two phases, a Base Contract and a continuance option. This is the Base Program Interim Phase Topical Report, presenting the results of Phase 1 of the CLVS research project. Test and demonstration results provide a proof of concept for a system delivering three-dimensional (3D) vision with the performance capability required to update geometric data on the order of once per second.

  13. Real-time vision systems

    SciTech Connect

    Johnson, R.; Hernandez, J.E.; Lu, Shin-yee

    1994-11-15

    Many industrial and defense applications require an ability to make instantaneous decisions based on sensor input from a time-varying process. Such systems are referred to as "real-time systems" because they process and act on data as it occurs in time. When a vision sensor is used in a real-time system, the processing demands can be quite substantial, with typical data rates of 10-20 million samples per second. A real-time Machine Vision Laboratory (MVL) was established in FY94 to extend our years of experience in developing computer vision algorithms to the development and implementation of real-time vision systems. The laboratory is equipped with a variety of hardware components, including Datacube image acquisition and processing boards, a Sun workstation, and several different types of CCD cameras, including monochrome and color area cameras and analog and digital line-scan cameras. The equipment is reconfigurable for prototyping different applications. This facility has been used to support several programs at LLNL, including O Division's Peacemaker and Deadeye Projects as well as the CRADA with the U.S. Textile Industry, CAFE (Computer Aided Fabric Inspection). To date, we have successfully demonstrated several real-time applications: bullet tracking, stereo tracking and ranging, and web inspection. This work has been documented in the ongoing development of a real-time software library.

  14. Stereoscopic vision system

    NASA Astrophysics Data System (ADS)

    Király, Zsolt; Springer, George S.; Van Dam, Jacques

    2006-04-01

    In this investigation, an optical system is introduced for inspecting the interiors of confined spaces, such as the walls of containers, cavities, reservoirs, fuel tanks, pipelines, and the gastrointestinal tract. The optical system wirelessly transmits stereoscopic video to a computer that displays the video in real time on the screen, where it is viewed with shutter glasses. To minimize space requirements, the videos from the two cameras (required to produce stereoscopic images) are multiplexed into a single stream for transmission. The video is demultiplexed inside the computer, corrected for fisheye distortion and lens misalignment, and cropped to the proper size. Algorithms are developed that enable the system to perform these tasks. A proof-of-concept device was constructed to demonstrate the operation and practicality of the optical system; using this device, tests were performed assessing the validity of the concepts and the algorithms.
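
    A toy demultiplexer for a time-multiplexed stereo stream, assuming simple frame alternation (the device's actual multiplexing scheme is not specified in the abstract):

    ```python
    # Split an alternating left/right frame stream back into stereo pairs.
    def demultiplex(frames):
        left, right = frames[0::2], frames[1::2]   # even frames left, odd frames right
        return list(zip(left, right))              # paired stereo frames

    stream = ["L0", "R0", "L1", "R1", "L2", "R2"]
    for l, r in demultiplex(stream):
        print(l, r)
    ```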

  15. Active vision in satellite scene analysis

    NASA Technical Reports Server (NTRS)

    Naillon, Martine

    1994-01-01

    In earth observation or planetary exploration it is necessary to have more and more autonomous systems, able to adapt to unpredictable situations. This imposes the use, in artificial systems, of new concepts in cognition, based on the fact that perception should not be separated from the recognition and decision-making levels. This means that low-level signal processing (the perception level) should interact with symbolic, high-level processing (the decision level). This paper describes the concept of active vision, implemented in Distributed Artificial Intelligence by Dassault Aviation following a 'structuralist' principle. An application to spatial image interpretation is given, oriented toward flexible robotics.

  16. Vision system for telerobotics operation

    NASA Astrophysics Data System (ADS)

    Wong, Andrew K. C.; Li, Li-Wei; Liu, Wei-Cheng

    1992-10-01

    This paper presents a knowledge-based vision system for a telerobotics guidance project. The system is capable of recognizing and locating 3-D objects from unrestricted viewpoints in a simulated unconstrained space environment. It constructs object representations for vision tasks from wireframe models; recognizes and locates objects in a 3-D scene; and provides world-modeling capability to establish, maintain, and update a 3-D environment description for telerobotic manipulation. An object model is represented by an attributed hypergraph that contains direct structural (relational) information, with features grouped according to their multiple views, so that the interpretation of the 3-D object and its 2-D projections is coupled. With this representation, object recognition is directed by a knowledge-directed hypothesis-refinement strategy. The strategy starts with the identification of 2-D local feature characteristics to initiate feature and relation matching; it then refines the matching by adding 2-D features from the image according to viewpoint and geometric consistency; finally, it links the successful matchings back to the 3-D model to recover the feature, relation, and location information of the recognized object. The paper also presents the implementation and experimentation of the vision prototype.

  17. Vision Loss With Sexual Activity.

    PubMed

    Lee, Michele D; Odel, Jeffrey G; Rudich, Danielle S; Ritch, Robert

    2016-01-01

    A 51-year-old white man presented with multiple episodes of transient painless unilateral vision loss precipitated by sexual intercourse. Examination was significant for closed angles bilaterally. His visual symptoms completely resolved following treatment with laser peripheral iridotomies.

  18. Basic design principles of colorimetric vision systems

    NASA Astrophysics Data System (ADS)

    Mumzhiu, Alex M.

    1998-10-01

    Color measurement is an important part of overall production quality control in the textile, coating, plastics, food, paper, and other industries. The color measurement instruments used for production quality control, such as colorimeters and spectrophotometers, have many limitations: in many applications they cannot be used for a variety of reasons and have to be replaced with human operators. Machine vision has great potential for color measurement. The components for color machine vision systems, such as broadcast-quality 3-CCD cameras, fast and inexpensive PCI frame grabbers, and sophisticated image processing software packages, are available. However, the machine vision industry has only started to approach the color domain, and the few color machine vision systems on the market, produced by the largest machine vision manufacturers, have very limited capabilities. A lack of understanding that a vision-based color measurement system can fail if it ignores the basic principles of colorimetry is the main reason for the slow progress of color vision systems. The purpose of this paper is to clarify how color measurement principles have to be applied to vision systems and how the electro-optical design features of colorimeters have to be modified in order to implement them in vision systems. The subject far exceeds the limits of a journal paper, so only the most important aspects are discussed. An overview of the major areas of application for colorimetric vision systems is given, and finally the reasons why some customers are happy with their vision systems and some are not are analyzed.
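
    One such colorimetric principle is that camera RGB must be transformed into a device-independent color space before "measurement" is meaningful. As a concrete example, the standard sRGB (D65) to CIE XYZ conversion:

    ```python
    # Standard sRGB (D65) -> CIE XYZ conversion: undo the sRGB gamma,
    # then apply the linear-RGB-to-XYZ matrix.
    import numpy as np

    M = np.array([[0.4124, 0.3576, 0.1805],     # sRGB -> XYZ matrix (D65)
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])

    def srgb_to_xyz(rgb8):
        c = np.asarray(rgb8, dtype=float) / 255.0
        lin = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
        return M @ lin

    print(srgb_to_xyz([200, 120, 60]))
    ```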

  19. Vision inspection system and method

    NASA Technical Reports Server (NTRS)

    Huber, Edward D. (Inventor); Williams, Rick A. (Inventor)

    1997-01-01

    An optical vision inspection system (4) and method for multiplexed illuminating, viewing, analyzing and recording a range of characteristically different kinds of defects, depressions, and ridges in a selected material surface (7) with first and second alternating optical subsystems (20, 21) illuminating and sensing successive frames of the same material surface patch. To detect the different kinds of surface features including abrupt as well as gradual surface variations, correspondingly different kinds of lighting are applied in time-multiplexed fashion to the common surface area patches under observation.

  20. VISION 21 SYSTEMS ANALYSIS METHODOLOGIES

    SciTech Connect

    G.S. Samuelsen; A. Rao; F. Robson; B. Washom

    2003-08-11

    Under the sponsorship of the U.S. Department of Energy/National Energy Technology Laboratory, a multi-disciplinary team led by the Advanced Power and Energy Program of the University of California at Irvine is defining the system engineering issues associated with the integration of key components and subsystems into power plant systems that meet performance and emission goals of the Vision 21 program. The study efforts have narrowed down the myriad of fuel processing, power generation, and emission control technologies to selected scenarios that identify those combinations having the potential to achieve the Vision 21 program goals of high efficiency and minimized environmental impact while using fossil fuels. The technology levels considered are based on projected technical and manufacturing advances being made in industry and on advances identified in current and future government supported research. Included in these advanced systems are solid oxide fuel cells and advanced cycle gas turbines. The results of this investigation will serve as a guide for the U. S. Department of Energy in identifying the research areas and technologies that warrant further support.

  1. Advanced integrated enhanced vision systems

    NASA Astrophysics Data System (ADS)

    Kerr, J. R.; Luk, Chiu H.; Hammerstrom, Dan; Pavel, Misha

    2003-09-01

    In anticipation of its ultimate role in transport, business, and rotary-wing aircraft, we clarify the role of Enhanced Vision Systems (EVS): how the output data will be utilized, appropriate architecture for total avionics integration, pilot and control interfaces, and operational utilization. Ground-map (database) correlation is critical, and we suggest that "synthetic vision" is simply a subset of the monitor/guidance interface issue. The core of integrated EVS is its sensor processor. In order to approximate optimal, Bayesian multi-sensor fusion and ground-correlation functionality in real time, we are developing a neural-net approach utilizing human visual pathway and self-organizing, associative-engine processing. In addition to EVS/SVS imagery, outputs will include sensor-based navigation and attitude signals as well as hazard detection. A system architecture is described, encompassing an all-weather sensor suite; advanced processing technology; inertial, GPS, and other avionics inputs; and pilot and machine interfaces. Issues of total-system accuracy and integrity are addressed, as well as flight-operational aspects relating to both civil certification and military applications in IMC.

  2. Multitask neural network for vision machine systems

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.; Knopf, George K.

    1991-02-01

    A multi-task dynamic neural network that can be programmed for storing, processing, and encoding spatio-temporal visual information is presented in this paper. This dynamic neural network, called the PN-network, is composed of numerous densely interconnected neural subpopulations residing in one of two coupled sublayers, P or N. The subpopulations in the P-sublayer transmit an excitatory (positive) influence onto all interconnected units, whereas the subpopulations in the N-sublayer transmit an inhibitory (negative) influence. The dynamical activity generated by each subpopulation is governed by a nonlinear first-order system. By varying the coupling strength between these different subpopulations it is possible to generate three distinct modes of dynamical behavior useful for performing vision-related tasks. It is postulated that the PN-network can function as a basic programmable processor for novel vision machine systems.
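
    A minimal coupled excitatory/inhibitory pair with first-order nonlinear dynamics conveys the flavor of the P/N sublayers; the equations, weights, and time constant below are generic illustrative choices, not the paper's model:

    ```python
    # Euler-integrate one excitatory (P) and one inhibitory (N) population,
    # each a nonlinear first-order system driven by the other's activity.
    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def simulate(w_pp=1.6, w_pn=-1.2, w_np=1.5, w_nn=-0.3, inp=0.6,
                 tau=10e-3, dt=1e-3, steps=200):
        p = n = 0.0
        for _ in range(steps):
            dp = (-p + sigmoid(w_pp * p + w_pn * n + inp)) / tau   # excitatory sublayer
            dn = (-n + sigmoid(w_np * p + w_nn * n)) / tau         # inhibitory sublayer
            p, n = p + dt * dp, n + dt * dn
        return p, n

    print(simulate())   # steady-state activities of the coupled pair
    ```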

  3. Compact Autonomous Hemispheric Vision System

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Cunningham, Thomas J.; Werne, Thomas A.; Eastwood, Michael L.; Walch, Marc J.; Staehle, Robert L.

    2012-01-01

    Solar System Exploration camera implementations to date have involved either single cameras with wide field of view (FOV) and consequently coarser spatial resolution, cameras on a movable mast, or single cameras necessitating rotation of the host vehicle to afford visibility outside a relatively narrow FOV. These cameras require detailed commanding from the ground or separate onboard computers to operate properly, and are incapable of making decisions based on image content that control pointing and downlink strategy. For color, a filter wheel having selectable positions was often added, which added moving parts, size, mass, and power, and reduced reliability. A system was developed based on a general-purpose miniature visible-light camera using advanced CMOS (complementary metal oxide semiconductor) imager technology. The baseline camera has a 92° FOV, and six cameras are arranged in an angled-up carousel fashion, with FOV overlaps such that the system has a 360° FOV in azimuth. A seventh camera, also with a 92° FOV, is installed normal to the plane of the other six cameras, giving the system a >90° FOV in elevation and completing the hemispheric vision system. A central unit houses the common electronics box (CEB) controlling the system (power conversion, data processing, memory, and control software). Stereo is achieved by adding a second system on a baseline, and color is achieved by stacking two more systems (for a total of three, each system equipped with its own filter). Two connectors on the bottom of the CEB provide a connection to a carrier (rover, spacecraft, balloon, etc.) for telemetry, commands, and power. This system has no moving parts. The system's onboard software (SW) supports autonomous operations such as pattern recognition and tracking.
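
    A quick arithmetic check of the stated carousel geometry: six 92° cameras covering 360° leave 6 x 92 - 360 = 192° of total overlap, i.e., 32° shared between each adjacent pair (our calculation from the stated figures):

    ```python
    # Overlap arithmetic for the stated hemispheric coverage (illustrative check).
    n_cameras, fov_deg = 6, 92
    total_overlap = n_cameras * fov_deg - 360         # degrees of double coverage
    print(total_overlap, total_overlap / n_cameras)   # 192 total, 32.0 per adjacent pair
    ```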

  4. 77 FR 2342 - Seventeenth Meeting: RTCA Special Committee 213, Enhanced Flight Vision/Synthetic Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-17

    ... Federal Aviation Administration Seventeenth Meeting: RTCA Special Committee 213, Enhanced Flight Vision... Transportation (DOT). ACTION: Notice of RTCA Special Committee 213, Enhanced Flight Vision/ Synthetic Vision... meeting of RTCA Special Committee 213, Enhanced Flight Vision/Synthetic Vision Systems (EFVS/SVS)....

  5. Flexible task-specific control using active vision

    NASA Astrophysics Data System (ADS)

    Firby, Robert J.; Swain, Michael J.

    1992-04-01

    This paper is about the interface between continuous and discrete robot control. We advocate encapsulating continuous actions and their related sensing strategies into behaviors called situation-specific activities, which can be constructed by a symbolic reactive planner. Task-specific, real-time perception is a fundamental part of these activities. While researchers have successfully used primitive touch and sonar sensors in such situations, it is more problematic to achieve reasonable performance with complex signals such as those from a video camera. Active vision routines are suggested as a means of incorporating visual data into real-time control and as one mechanism for designating aspects of the world in an indexical-functional manner. Active vision routines are a particularly flexible sensing methodology because different routines extract different functional attributes from the world using the same sensor. In fact, there will often be different active vision routines for extracting the same functional attribute using different processing techniques. This allows an agent substantial leeway to instantiate its activities in different ways under different circumstances using different active vision routines. We demonstrate the utility of this architecture with an object tracking example. A control system is presented that can be reconfigured by a reactive planner to achieve different tasks. We show how this system allows us to build interchangeable tracking activities that use either color-histogram or motion-based active vision routines.
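
    A sketch of a color-histogram tracking routine in the spirit of histogram-intersection color indexing; this generic window search is our own illustration (function names, bin counts, and window sizes invented), not the paper's implementation:

    ```python
    # Score each window of a frame by histogram intersection with a target
    # model histogram; the best-scoring window localizes the target.
    import numpy as np

    def hue_hist(patch, bins=16):
        h, _ = np.histogram(patch, bins=bins, range=(0, 256))
        return h / max(h.sum(), 1)

    def find_target(frame, model_hist, win=32, step=16):
        best, best_xy = -1.0, (0, 0)
        for y in range(0, frame.shape[0] - win, step):
            for x in range(0, frame.shape[1] - win, step):
                score = np.minimum(hue_hist(frame[y:y+win, x:x+win]), model_hist).sum()
                if score > best:
                    best, best_xy = score, (x, y)
        return best_xy, best   # location with highest histogram intersection

    frame = np.random.randint(0, 256, size=(120, 160)).astype(np.uint8)
    model = hue_hist(frame[40:72, 60:92])
    print(find_target(frame, model))
    ```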

  6. Active stereo vision routines using PRISM-3

    NASA Astrophysics Data System (ADS)

    Antonisse, Hendrick J.

    1992-11-01

    This paper describes work in progress on a set of visual routines and supporting capabilities implemented on the PRISM-3 real-time vision system. The routines are used in an outdoor robot retrieval task. The task requires the robot to locate a donor agent -- a Hero2000 -- which holds the object to be retrieved, to navigate to the donor, to accept the object from the donor, and return to its original location. The routines described here will form an integral part of the navigation and wide-area search tasks. Active perception is exploited to locate the donor using real-time stereo ranging directed by a pan/tilt/verge mechanism. A framework for orchestrating visual search has been implemented and is briefly described.

  7. A production peripheral vision display system

    NASA Technical Reports Server (NTRS)

    Heinmiller, B.

    1984-01-01

    A small number of peripheral vision display systems in three significantly different configurations were evaluated in various aircraft and simulator situations. The use of these development systems enabled the gathering of much subjective and quantitative data regarding this concept of flight deck instrumentation. However, much was also learned about the limitations of this equipment, which need to be addressed prior to widespread use. A program at Garrett Manufacturing Limited in which the peripheral vision display system is redesigned and transformed into a viable production avionics system is discussed. Modular design, interchangeable units, optical attenuators, and system fault detection are considered with respect to peripheral vision display systems.

  8. Vision restoration after brain and retina damage: the "residual vision activation theory".

    PubMed

    Sabel, Bernhard A; Henrich-Noack, Petra; Fedorov, Anton; Gall, Carolin

    2011-01-01

    Vision loss after retinal or cerebral visual injury (CVI) was long considered to be irreversible. However, there is considerable potential for vision restoration and recovery even in adulthood. Here, we propose the "residual vision activation theory" of how visual functions can be reactivated and restored. CVI is usually not complete, but some structures are typically spared by the damage. They include (i) areas of partial damage at the visual field border, (ii) "islands" of surviving tissue inside the blind field, (iii) extrastriate pathways unaffected by the damage, and (iv) downstream, higher-level neuronal networks. However, residual structures have a triple handicap to be fully functional: (i) fewer neurons, (ii) lack of sufficient attentional resources because of the dominant intact hemisphere caused by excitation/inhibition dysbalance, and (iii) disturbance in their temporal processing. Because of this resulting activation loss, residual structures are unable to contribute much to everyday vision, and their "non-use" further impairs synaptic strength. However, residual structures can be reactivated by engaging them in repetitive stimulation by different means: (i) visual experience, (ii) visual training, or (iii) noninvasive electrical brain current stimulation. These methods lead to strengthening of synaptic transmission and synchronization of partially damaged structures (within-systems plasticity) and downstream neuronal networks (network plasticity). Just as in normal perceptual learning, synaptic plasticity can improve vision and lead to vision restoration. This can be induced at any time after the lesion, at all ages and in all types of visual field impairments after retinal or brain damage (stroke, neurotrauma, glaucoma, amblyopia, age-related macular degeneration). If and to what extent vision restoration can be achieved is a function of the amount of residual tissue and its activation state. However, sustained improvements require repetitive

  10. Space environment robot vision system

    NASA Technical Reports Server (NTRS)

    Wood, H. John; Eichhorn, William L.

    1990-01-01

    A prototype twin-camera stereo vision system for autonomous robots has been developed at Goddard Space Flight Center. Standard charge coupled device (CCD) imagers are interfaced with commercial frame buffers and direct memory access to a computer. The overlapping portions of the images are analyzed using photogrammetric techniques to obtain information about the position and orientation of objects in the scene. The camera head consists of two 510 x 492 x 8-bit CCD cameras mounted on individually adjustable mounts. The 16 mm efl lenses are designed for minimum geometric distortion. The cameras can be rotated in the pitch, roll, and yaw (pan angle) directions with respect to their optical axes. Calibration routines have been developed which automatically determine the lens focal lengths and pan angle between the two cameras. The calibration utilizes observations of a calibration structure with known geometry. Test results show the precision attainable is plus or minus 0.8 mm in range at 2 m distance using a camera separation of 171 mm. To demonstrate a task needed on Space Station Freedom, a target structure with a movable I beam was built. The camera head can autonomously direct actuators to dock the I-beam to another one so that they could be bolted together.
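    The quoted precision is consistent with standard stereo error propagation, δR ≈ R²·δd/(fB). A short sketch follows (Python; the disparity error is back-solved here purely for illustration and is not a figure reported in the abstract):

        # Stereo range-error propagation: dR ~= R^2 * dd / (f * B),
        # where f is focal length, B the baseline, dd the disparity error.
        f = 16e-3     # focal length, m (from the abstract)
        B = 171e-3    # camera separation (baseline), m (from the abstract)
        R = 2.0       # range, m (from the abstract)
        dR = 0.8e-3   # reported range precision, m

        dd = dR * f * B / R**2                 # implied disparity measurement error
        print(f"implied disparity error: {dd * 1e6:.2f} um")   # ~0.55 um, sub-pixel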

  11. COHERENT LASER VISION SYSTEM (CLVS) OPTION PHASE

    SciTech Connect

    Robert Clark

    1999-11-18

    The purpose of this research project was to develop a prototype fiber-optic based Coherent Laser Vision System (CLVS) suitable for DOE's EM Robotic program. The system provides three-dimensional (3D) vision for monitoring situations in which it is necessary to update the dimensional spatial data on the order of once per second. The system has total immunity to ambient lighting conditions.

  12. Artificial vision support system (AVS(2)) for improved prosthetic vision.

    PubMed

    Fink, Wolfgang; Tarbell, Mark A

    2014-11-01

    State-of-the-art and upcoming camera-driven, implanted artificial vision systems provide only tens to hundreds of electrodes, affording only limited visual perception for blind subjects. Therefore, real time image processing is crucial to enhance and optimize this limited perception. Since tens or hundreds of pixels/electrodes allow only for a very crude approximation of the typically megapixel optical resolution of the external camera image feed, the preservation and enhancement of contrast differences and transitions, such as edges, are especially important compared to picture details such as object texture. An Artificial Vision Support System (AVS(2)) is devised that displays the captured video stream in a pixelation conforming to the dimension of the epi-retinal implant electrode array. AVS(2), using efficient image processing modules, modifies the captured video stream in real time, enhancing 'present but hidden' objects to overcome inadequacies or extremes in the camera imagery. As a result, visual prosthesis carriers may now be able to discern such objects in their 'field-of-view', thus enabling mobility in environments that would otherwise be too hazardous to navigate. The image processing modules can be engaged repeatedly in a user-defined order, which is a unique capability. AVS(2) is directly applicable to any artificial vision system that is based on an imaging modality (video, infrared, sound, ultrasound, microwave, radar, etc.) as the first step in the stimulation/processing cascade, such as: retinal implants (i.e. epi-retinal, sub-retinal, suprachoroidal), optic nerve implants, cortical implants, electric tongue stimulators, or tactile stimulators.
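    A minimal sketch of the pixelation stage alone is given below (Python with OpenCV; the electrode grid size and edge-enhancement weight are hypothetical, and the actual AVS(2) pipeline chains several user-ordered processing modules):

        import cv2
        import numpy as np

        def pixelate_for_implant(frame_bgr, grid=(10, 6)):
            """Reduce a camera frame to an electrode-array-sized stimulation pattern.
            Edges are emphasized before downsampling because, at tens-of-electrodes
            resolution, contrast transitions matter more than texture."""
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            edges = np.abs(cv2.Laplacian(gray, cv2.CV_32F, ksize=5))
            enhanced = np.clip(gray.astype(np.float32) + 0.5 * edges, 0, 255)
            return cv2.resize(enhanced.astype(np.uint8), grid,   # grid = (width, height)
                              interpolation=cv2.INTER_AREA)

        # pattern = pixelate_for_implant(cv2.imread("frame.png"))  # hypothetical input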

  13. Using perturbations to identify the brain circuits underlying active vision.

    PubMed

    Wurtz, Robert H

    2015-09-19

    The visual and oculomotor systems in the brain have been studied extensively in the primate. Together, they can be regarded as a single brain system that underlies active vision--the normal vision that begins with visual processing in the retina and extends through the brain to the generation of eye movement by the brainstem. The system is probably one of the most thoroughly studied brain systems in the primate, and it offers an ideal opportunity to evaluate the advantages and disadvantages of the series of perturbation techniques that have been used to study it. The perturbations have been critical in moving from correlations between neuronal activity and behaviour closer to a causal relation between neuronal activity and behaviour. The same perturbation techniques have also been used to tease out neuronal circuits that are related to active vision that in turn are driving behaviour. The evolution of perturbation techniques includes ablation of both cortical and subcortical targets, punctate chemical lesions, reversible inactivations, electrical stimulation, and finally the expanding optogenetic techniques. The evolution of perturbation techniques has supported progressively stronger conclusions about what neuronal circuits in the brain underlie active vision and how the circuits themselves might be organized.

  14. Compact Through-The-Torch Vision System

    NASA Technical Reports Server (NTRS)

    Gilbert, Jeffrey L.; Gutow, David A.

    1992-01-01

    Changes in gas/tungsten-arc welding torch equipped with through-the-torch vision system make it smaller and more resistant to welding environment. Vision subsystem produces image of higher quality, flow of gas enhanced, and parts replaced more quickly and easily. Coaxial series of lenses and optical components provides overhead view of joint and weld puddle for real-time control. Designed around miniature high-resolution video camera. Smaller size enables torch to weld joints formerly inaccessible.

  15. Volumetric imaging system for the ionosphere (VISION)

    NASA Astrophysics Data System (ADS)

    Dymond, Kenneth F.; Budzien, Scott A.; Nicholas, Andrew C.; Thonnard, Stefan E.; Fortna, Clyde B.

    2002-01-01

    The Volumetric Imaging System for the Ionosphere (VISION) is designed to use limb and nadir images to reconstruct the three-dimensional distribution of electrons over a 1000 km wide by 500 km high slab beneath the satellite with 10 km x 10 km x 10 km voxels. The primary goal of the VISION is to map and monitor global and mesoscale (> 10 km) electron density structures, such as the Appleton anomalies and field-aligned irregularity structures. The VISION consists of three UV limb imagers, two UV nadir imagers, a dual frequency Global Positioning System (GPS) receiver, and a coherently emitting three frequency radio beacon. The limb imagers will observe the O II 83.4 nm line (daytime electron density), O I 135.6 nm line (nighttime electron density and daytime O density), and the N2 Lyman-Birge-Hopfield (LBH) bands near 143.0 nm (daytime N2 density). The nadir imagers will observe the O I 135.6 nm line (nighttime electron density and daytime O density) and the N2 LBH bands near 143.0 nm (daytime N2 density). The GPS receiver will monitor the total electron content between the satellite containing the VISION and the GPS constellation. The three frequency radio beacon will be used with ground-based receiver chains to perform computerized radio tomography below the satellite containing the VISION. The measurements made using the two radio frequency instruments will be used to validate the VISION UV measurements.

  16. Information capacity of electronic vision systems

    NASA Astrophysics Data System (ADS)

    Taubkin, Igor I.; Trishenkov, Mikhail A.

    1996-10-01

    The comparison of various electronic-optical vision systems has been conducted based on the criterion of ultimate information capacity, C, limited by fluctuations of the flux of quanta. The information capacity of daylight, night, and thermal vision systems is determined first of all by the number of picture elements, M, in the optical system. Each element, under a sufficient level of irradiation, can transfer about one byte of information during the standard frame time, so C ≈ M bytes per frame. The proportionality factor of one byte per picture element applies to daylight and thermal vision systems, in which the photocharge in a unit cell of the imager is limited by storage capacity; in general it varies over a small interval, from 0.5 byte per element per frame for night vision systems to 2 bytes per element per frame for ideal thermal imagers. The ultimate specific information capacity, C*, of electronic vision systems under low irradiation levels rises with increasing density of optical channels until the number of irradiance gradations that can be distinguished becomes less than two in each channel. In this case, the maximum value of C* turns out to be proportional to the flux of quanta coming from the object under observation. Under a high level of irradiation, C* is limited by diffraction effects and amounts to 1/λ² bytes per cm² per frame.
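    The two relations are easy to evaluate numerically. The sketch below (Python; the pixel count and wavelengths are example values, not the paper's) illustrates C ≈ M bytes per frame and the diffraction-limited C* ≈ 1/λ² bytes per cm² per frame:

        # Numerical illustration of the two capacity relations (example values).
        M = 640 * 480                    # picture elements in the optical system
        C = M                            # ~1 byte per element per frame
        print(f"C ~ {C:,} bytes per frame")

        for name, lam_cm in [("visible, 0.55 um", 0.55e-4), ("thermal, 10 um", 10e-4)]:
            c_star = 1.0 / lam_cm**2     # diffraction-limited bytes / (cm^2 * frame)
            print(f"C* ({name}): {c_star:.2e} bytes per cm^2 per frame")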

  17. Flight Testing an Integrated Synthetic Vision System

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Arthur, Jarvis J., III; Bailey, Randall E.; Prinzel, Lawrence J., III

    2005-01-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications to eliminate low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance for transport aircraft. The SVS concept being developed at NASA encompasses the integration of tactical and strategic Synthetic Vision Display Concepts (SVDC) with Runway Incursion Prevention System (RIPS) alerting and display concepts, real-time terrain database integrity monitoring equipment (DIME), and Enhanced Vision Systems (EVS) and/or improved Weather Radar for real-time object detection and database integrity monitoring. A flight test evaluation was jointly conducted (in July and August 2004) by NASA Langley Research Center and an industry partner team under NASA's Aviation Safety and Security, Synthetic Vision System project. A Gulfstream GV aircraft was flown over a 3-week period in the Reno/Tahoe International Airport (NV) local area and an additional 3-week period in the Wallops Flight Facility (VA) local area to evaluate integrated Synthetic Vision System concepts. The enabling technologies (RIPS, EVS and DIME) were integrated into the larger SVS concept design. This paper presents experimental methods and the high level results of this flight test.

  18. Flight testing an integrated synthetic vision system

    NASA Astrophysics Data System (ADS)

    Kramer, Lynda J.; Arthur, Jarvis J., III; Bailey, Randall E.; Prinzel, Lawrence J., III

    2005-05-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications to eliminate low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance for transport aircraft. The SVS concept being developed at NASA encompasses the integration of tactical and strategic Synthetic Vision Display Concepts (SVDC) with Runway Incursion Prevention System (RIPS) alerting and display concepts, real-time terrain database integrity monitoring equipment (DIME), and Enhanced Vision Systems (EVS) and/or improved Weather Radar for real-time object detection and database integrity monitoring. A flight test evaluation was jointly conducted (in July and August 2004) by NASA Langley Research Center and an industry partner team under NASA's Aviation Safety and Security, Synthetic Vision System project. A Gulfstream G-V aircraft was flown over a 3-week period in the Reno/Tahoe International Airport (NV) local area and an additional 3-week period in the Wallops Flight Facility (VA) local area to evaluate integrated Synthetic Vision System concepts. The enabling technologies (RIPS, EVS and DIME) were integrated into the larger SVS concept design. This paper presents experimental methods and the high level results of this flight test.

  19. Three-Dimensional Robotic Vision System

    NASA Technical Reports Server (NTRS)

    Nguyen, Thinh V.

    1989-01-01

    Stereoscopy and motion provide clues to outlines of objects. Digital image-processing system acts as "intelligent" automatic machine-vision system by processing views from stereoscopic television cameras into three-dimensional coordinates of moving object in view. Epipolar-line technique used to find corresponding points in stereoscopic views. Robotic vision system analyzes views from two television cameras to detect rigid three-dimensional objects and reconstruct numerically in terms of coordinates of corner points. Stereoscopy and effects of motion on two images complement each other in providing image-analyzing subsystem with clues to natures and locations of principal features.

  20. Near real-time stereo vision system

    NASA Technical Reports Server (NTRS)

    Anderson, Charles H. (Inventor); Matthies, Larry H. (Inventor)

    1993-01-01

    The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system is comprised of two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.
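    A minimal sketch of the pipeline described here, bandpass (Laplacian) pyramid levels followed by least-squares (SSD) correlation matching, is given below (Python with OpenCV; the window size and disparity range are hypothetical, and the Bayesian confidence estimation mentioned above is omitted):

        import cv2
        import numpy as np

        def bandpass(img):
            """One Laplacian-pyramid level: image minus its blurred, re-expanded self."""
            h, w = img.shape
            down = cv2.pyrDown(img)
            up = cv2.pyrUp(down, dstsize=(w, h))
            return img.astype(np.float32) - up.astype(np.float32)

        def ssd_disparity(left, right, max_disp=16, win=7):
            """Least-squares (SSD) correlation matching along epipolar scanlines."""
            BIG = 1e12                               # cost where no overlap exists
            h, w = left.shape
            kernel = np.ones((win, win), np.float32)
            cost = np.empty((max_disp + 1, h, w), np.float32)
            for d in range(max_disp + 1):
                diff = np.full((h, w), BIG, np.float32)
                diff[:, d:] = (left[:, d:] - right[:, :w - d]) ** 2
                cost[d] = cv2.filter2D(diff, -1, kernel)   # windowed sum of squares
            return cost.argmin(axis=0).astype(np.float32)  # best shift per pixel

        # Usage with hypothetical rectified images:
        # L = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
        # R = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
        # disparity = ssd_disparity(bandpass(L), bandpass(R))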

  1. Near real-time stereo vision system

    NASA Astrophysics Data System (ADS)

    Matthies, Larry H.; Anderson, Charles H.

    1991-12-01

    The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system is comprised of two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.

  2. Identifying the computational requirements of an integrated top-down-bottom-up model for overt visual attention within an active vision system.

    PubMed

    McBride, Sebastian; Huelse, Martin; Lee, Mark

    2013-01-01

    Computational visual attention systems have been constructed in order for robots and other devices to detect and locate regions of interest in their visual world. Such systems often attempt to take account of what is known of the human visual system and employ concepts, such as 'active vision', to gain various perceived advantages. However, despite the potential for gaining insights from such experiments, the computational requirements for visual attention processing are often not clearly presented from a biological perspective. This was the primary objective of this study, attained through two specific phases of investigation: 1) conceptual modeling of a top-down-bottom-up framework through critical analysis of the psychophysical and neurophysiological literature, 2) implementation and validation of the model into robotic hardware (as a representative of an active vision system). Seven computational requirements were identified: 1) transformation of retinotopic to egocentric mappings, 2) spatial memory for the purposes of medium-term inhibition of return, 3) synchronization of 'where' and 'what' information from the two visual streams, 4) convergence of top-down and bottom-up information to a centralized point of information processing, 5) a threshold function to elicit saccade action, 6) a function to represent task relevance as a ratio of excitation and inhibition, and 7) derivation of excitation and inhibition values from object-associated feature classes. The model provides further insight into the nature of data representation and transfer between brain regions associated with the vertebrate 'active' visual attention system. In particular, the model lends strong support to the functional role of the lateral intraparietal region of the brain as a primary area of information consolidation that directs putative action through the use of a 'priority map'.
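    As a rough illustration of requirements 2 and 4 through 6, the numpy sketch below (all maps, values, and the threshold are hypothetical) converges bottom-up salience and a top-down excitation/inhibition ratio onto a single priority map, thresholds it to elicit a saccade, and applies inhibition of return:

        import numpy as np

        rng = np.random.default_rng(0)
        H, W = 32, 32

        bottom_up = rng.random((H, W))           # stimulus-driven salience map
        excitation = rng.random((H, W))          # task-relevant feature evidence
        inhibition = rng.random((H, W)) + 0.1    # task-irrelevant feature evidence
        top_down = excitation / inhibition       # requirement 6: relevance as a ratio

        ior = np.ones((H, W))                    # requirement 2: inhibition of return
        priority = bottom_up * top_down * ior    # requirement 4: one centralized map

        THRESH = 1.5                             # requirement 5: saccade threshold
        if priority.max() > THRESH:
            y, x = np.unravel_index(priority.argmax(), priority.shape)
            ior[max(0, y - 2):y + 3, max(0, x - 2):x + 3] = 0.2  # suppress revisits
            print(f"saccade to ({x}, {y})")
        else:
            print("no location exceeds threshold; maintain fixation")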

  3. Study of a dual mode SWIR active imaging system for direct imaging and non-line-of-sight vision

    NASA Astrophysics Data System (ADS)

    Laurenzis, Martin; Christnacher, Frank; Velten, Andreas

    2015-05-01

    The application of non-line-of-sight vision, i.e. seeing around a corner, has been demonstrated in the recent past at laboratory level with round-trip path lengths on the scale of 1 m as well as 10 m. This method uses a computational imaging approach to analyze the scattered light from objects which are hidden from the sensor's direct field of view. Recent demonstrator systems were driven at laser wavelengths (800 nm and 532 nm) which are far from the eye-safe shortwave infrared (SWIR) band, i.e. between 1.4 μm and 2 μm. Therefore, application in public or inhabited areas is difficult with respect to international laser safety conventions. In the present work, the authors evaluate the application of recent eye-safe laser sources and sensor devices for non-line-of-sight sensing and give predictions on range and resolution. Further, the realization of a dual-mode concept is studied, enabling both the direct view of a scene and the indirect view of a hidden scene. While recent laser gated-viewing sensors have high spatial resolution, their application to non-line-of-sight imaging suffers from too low a temporal resolution, due to a minimal sensor gate width of around 150 ns. On the other hand, Geiger-mode single-photon counting devices have high temporal resolution, but their spatial resolution is (until now) limited to array sizes of a few thousand sensor elements. In this publication the authors present detailed theoretical and experimental evaluations.
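    The gate-width limitation translates directly into path resolution through Δr = cΔt/2. The sketch below compares the paper's 150 ns figure with a picosecond-class single-photon timer (the 50 ps value is a hypothetical example, not taken from the paper):

        # Temporal resolution -> round-trip path resolution for time-of-flight sensing.
        C = 299_792_458.0                      # speed of light, m/s

        def range_resolution(dt_s):
            """Range resolution Delta_r = c * Delta_t / 2 for timing window dt_s."""
            return C * dt_s / 2.0

        print(f"150 ns gated viewing: {range_resolution(150e-9):.1f} m")        # ~22.5 m
        print(f"50 ps SPAD timing   : {range_resolution(50e-12) * 100:.2f} cm") # ~0.75 cm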

  4. Using perturbations to identify the brain circuits underlying active vision

    PubMed Central

    Wurtz, Robert H.

    2015-01-01

    The visual and oculomotor systems in the brain have been studied extensively in the primate. Together, they can be regarded as a single brain system that underlies active vision—the normal vision that begins with visual processing in the retina and extends through the brain to the generation of eye movement by the brainstem. The system is probably one of the most thoroughly studied brain systems in the primate, and it offers an ideal opportunity to evaluate the advantages and disadvantages of the series of perturbation techniques that have been used to study it. The perturbations have been critical in moving from correlations between neuronal activity and behaviour closer to a causal relation between neuronal activity and behaviour. The same perturbation techniques have also been used to tease out neuronal circuits that are related to active vision that in turn are driving behaviour. The evolution of perturbation techniques includes ablation of both cortical and subcortical targets, punctate chemical lesions, reversible inactivations, electrical stimulation, and finally the expanding optogenetic techniques. The evolution of perturbation techniques has supported progressively stronger conclusions about what neuronal circuits in the brain underlie active vision and how the circuits themselves might be organized. PMID:26240420

  5. AN INVESTIGATION OF VISION PROBLEMS AND THE VISION CARE SYSTEM IN RURAL CHINA.

    PubMed

    Bai, Yunli; Yi, Hongmei; Zhang, Linxiu; Shi, Yaojiang; Ma, Xiaochen; Congdon, Nathan; Zhou, Zhongqiang; Boswell, Matthew; Rozelle, Scott

    2014-11-01

    This paper examines the prevalence of vision problems and the accessibility to and quality of vision care in rural China. We obtained data from 4 sources: 1) the National Rural Vision Care Survey; 2) the Private Optometrists Survey; 3) the County Hospital Eye Care Survey; and 4) the Rural School Vision Care Survey. The data from each of the surveys were collected by the authors during 2012. Thirty-three percent of the rural population surveyed self-reported vision problems. Twenty-two percent of subjects surveyed had ever had a vision exam. Among those who self-reported having vision problems, 34% did not wear eyeglasses. Fifty-four percent of those with vision problems who had eyeglasses did not have a vision exam prior to receiving glasses. However, having a vision exam did not always guarantee access to quality vision care. Four channels of vision care service were assessed. The school vision examination program did not increase the usage rate of eyeglasses. Each county hospital was staffed with three eye doctors having one year of education beyond high school, serving more than 400,000 residents. Private optometrists often had low levels of education and professional certification. In conclusion, our findings show that the vision care system in rural China is inadequate and ineffective in meeting the needs of the rural population sampled.

  6. Processing system for an enhanced vision system

    NASA Astrophysics Data System (ADS)

    Yelton, Dennis J.; Bernier, Ken L.; Sanders-Reed, John N.

    2004-08-01

    An Enhanced Vision System (EVS) combines imagery from multiple sensors, possibly running at different frame rates and pixel counts, onto a single display. In the case of a Helmet Mounted Display (HMD), the user's line of sight is continuously changing, with the result that the sensor pixels rendered on the display change in real time. In an EVS, the various sensors provide overlapping fields of view, which requires stitching imagery together to provide a seamless mosaic to the user. Further, different modality sensors may be present, requiring the fusion of imagery from the sensors. All of this takes place in a dynamic flight environment where the aircraft (with fixed-mounted sensors) is changing position and orientation while the users independently change their lines of sight. In order to provide well-registered, seamless imagery, very low throughput latencies are required while dealing with huge volumes of data. This poses both algorithmic and processing challenges which must be overcome to provide a suitable system. This paper discusses system architecture, efficient stitching and fusing algorithms, and hardware implementation issues.

  7. Approach to constructing reconfigurable computer vision system

    NASA Astrophysics Data System (ADS)

    Xue, Jianru; Zheng, Nanning; Wang, Xiaoling; Zhang, Yongping

    2000-10-01

    In this paper, we propose an approach to constructing a reconfigurable vision system. We found that timely and efficient execution of early vision tasks can significantly enhance the performance of whole computer vision tasks, so we abstract out a set of basic, computationally intensive stream operations that may be performed in parallel and embody them in a series of specific front-end processors. These processors, based on FPGAs (field-programmable gate arrays), can be re-programmed to produce a range of different types of feature maps, such as edge detection and linking, or image filtering. The front-end processors and a powerful DSP constitute a computing platform which can perform many CV tasks. Additionally, we adopt focus-of-attention technologies to reduce the I/O and computational demands by performing early vision processing only within a particular region of interest. We then implement a multi-page, dual-ported image memory interface between the image input and the computing platform (front-end processors and DSP). Early vision features are loaded into banks of dual-ported image memory arrays, which are continually raster-scan updated at high speed from the input image or video data stream. Moreover, the computing platform has completely asynchronous, random access to the image data or any other early vision feature maps through the dual-ported memory banks. In this way, the computing platform's resources can be properly allocated to a region of interest and decoupled from the task of dealing with a high-speed serial raster-scan input. Finally, we choose the PCI bus as the main channel between the PC and the computing platform. Consequently, the front-end processors' control registers and the DSP's program memory are mapped into the PC's memory space, which lets the user reconfigure the system at any time. We also present test results of a computer vision application based on the system.
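    The focus-of-attention step can be illustrated independently of the hardware: compute an early vision feature map only inside the region of interest and skip the rest of the frame. A sketch follows (Python with numpy as a software stand-in for the FPGA front end; the feature and ROI are hypothetical):

        import numpy as np

        def gradient_map(patch):
            """Toy early-vision feature (stand-in for an FPGA front-end operation)."""
            g = patch.astype(np.float32)
            return (np.abs(np.diff(g, axis=0, append=0))
                    + np.abs(np.diff(g, axis=1, append=0)))

        def process(frame, roi=None):
            """Focus-of-attention: run early vision only inside the ROI."""
            if roi is None:
                return gradient_map(frame)             # full-frame fallback
            y0, y1, x0, x1 = roi
            out = np.zeros(frame.shape, np.float32)
            out[y0:y1, x0:x1] = gradient_map(frame[y0:y1, x0:x1])
            return out

        frame = np.random.default_rng(2).integers(0, 256, (480, 640), dtype=np.uint8)
        roi = (100, 200, 150, 300)                     # hypothetical attention window
        full, focused = frame.size, (roi[1] - roi[0]) * (roi[3] - roi[2])
        print(f"pixels processed: {focused:,} of {full:,} ({100 * focused / full:.1f}%)")
        edges = process(frame, roi)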

  8. Enhanced/Synthetic Vision Systems for Advanced Flight Decks

    NASA Technical Reports Server (NTRS)

    Kaiser, Mary K.; Jenkins, James; Statler, Irving C. (Technical Monitor)

    1994-01-01

    One of the most challenging arenas for enhanced and synthetic vision systems is the flight deck. Here, pilots must perform active and supervisory control behaviors based on imagery generated in real time or transduced from imaging sensors. Although enhanced and synthetic vision technologies have been used in military vehicles for more than two decades, they have only recently been considered for civilian transport aircraft. In this paper we discuss the human performance issues still to be resolved for these systems, and consider the special constraints that must be considered for their use in the transport domain.

  9. Leisure Activity Participation of Elderly Individuals with Low Vision.

    ERIC Educational Resources Information Center

    Heinemann, Allen W.

    1988-01-01

    Studied low vision elderly clinic patients (N=63) who reported participation in six categories of leisure activities currently and at onset of vision loss. Found subjects reported significant declines in five of six activity categories. Found prior activity participation was related to current participation only for active crafts, participatory…

  10. The 3D laser radar vision processor system

    NASA Technical Reports Server (NTRS)

    Sebok, T. M.

    1990-01-01

    Loral Defense Systems (LDS) developed a 3D Laser Radar Vision Processor system capable of detecting, classifying, and identifying small mobile targets as well as larger fixed targets, using three-dimensional laser radar imagery, for use with a robotic-type system. This processor system is designed to interface with the NASA Johnson Space Center in-house Extra Vehicular Activity (EVA) Retriever robot program and provide it with the information needed to fetch and grasp targets in a space-type scenario.

  11. Mobile robot on-board vision system

    SciTech Connect

    McClure, V.W.; Nai-Yung Chen.

    1993-06-15

    An automatic robot system is described comprising: an AGV for transporting and transferring work pieces; a control computer on board the AGV; a process machine for working on work pieces; a flexible robot arm with a gripper comprising two gripper fingers at one end of the arm, wherein the robot arm and gripper are controllable by the control computer for engaging a work piece, picking it up, and setting it down and releasing it at a commanded location; locating beacon means mounted on the process machine, wherein the locating beacon means are for locating on the process machine a place to pick up and set down work pieces; and vision means, including a camera fixed in the coordinate system of the gripper means, attached to the robot arm near the gripper, such that the space between said gripper fingers lies within the vision field of said vision means, for detecting the locating beacon means, wherein the vision means provides the control computer visual information relating to the location of the locating beacon means, from which information the computer is able to calculate the pick-up and set-down place on the process machine. Said place for picking up and setting down work pieces on the process machine is a nest means and further serves the function of holding a work piece in place while it is worked on; the robot system further comprises nest beacon means located in the nest means, detectable by the vision means, for providing information to the control computer as to whether or not a work piece is present in the nest means.

  12. Vision based flight procedure stereo display system

    NASA Astrophysics Data System (ADS)

    Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng

    2008-03-01

    A virtual reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on a Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approach area database is established through a computer 3D modeling system and GIS. The area texture is generated from the remote sensing photos and aerial photographs at various levels of detail. According to the flight approach procedure, the flight navigation information is linked to the database. The flight approach area view can be displayed dynamically according to the designed flight procedure. The flight approach area images are rendered in two channels, one for left-eye images and the other for right-eye images. Through the polarized stereoscopic projection system, pilots and aircrew can get a vivid 3D view of the flight destination approach area. Used in the pilots' preflight preparation, the system gives the aircrew more vivid information about the flight destination approach area. This system can improve an aviator's self-confidence before carrying out a flight mission; accordingly, flight safety is improved. The system is also useful for validating visual flight procedure designs, and it aids flight procedure design.

  13. Prototype Optical Correlator For Robotic Vision System

    NASA Technical Reports Server (NTRS)

    Scholl, Marija S.

    1993-01-01

    Known and unknown images fed in electronically at high speed. Optical correlator and associated electronic circuitry developed for vision system of robotic vehicle. System recognizes features of landscape by optical correlation between input image of scene viewed by video camera on robot and stored reference image. Optical configuration is Vander Lugt correlator, in which Fourier transform of scene formed in coherent light and spatially modulated by hologram of reference image to obtain correlation.

  14. Zoom Vision System For Robotic Welding

    NASA Technical Reports Server (NTRS)

    Gilbert, Jeffrey L.; Hudyma, Russell M.

    1990-01-01

    Rugged zoom lens subsystem proposed for use in along-the-torch vision system of robotic welder. Enables system to adapt, via simple mechanical adjustments, to gas cups of different lengths, electrodes of different protrusions, and/or different distances between end of electrode and workpiece. Unnecessary to change optical components to accommodate changes in geometry. Easy to calibrate with respect to object in view. Provides variable focus and variable magnification.

  15. Vision enhanced navigation for unmanned systems

    NASA Astrophysics Data System (ADS)

    Wampler, Brandon Loy

    A vision-based simultaneous localization and mapping (SLAM) algorithm is evaluated for use on unmanned systems. SLAM is a technique used by a vehicle to build a map of an environment while concurrently keeping track of its location within the map, without a priori knowledge. The work in this thesis is focused on using SLAM as a navigation solution when global positioning system (GPS) service is degraded or temporarily unavailable. Previous work on unmanned systems that led to the determination that a better navigation solution than GPS alone is needed is first presented. This previous work includes control of unmanned systems, simulation, and unmanned vehicle hardware testing. The proposed SLAM algorithm follows the work originally developed by Davison et al., in which they dub their algorithm MonoSLAM [1--4]. A new approach using the Pyramidal Lucas-Kanade feature tracking algorithm from Intel's OpenCV (open computer vision) library is presented as a means of keeping correct landmark correspondences as the vehicle moves through the scene. Though this landmark tracking method is unusable for long-term SLAM due to its inability to recognize revisited landmarks, as opposed to the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), its computational efficiency makes it a good candidate for short-term navigation between GPS position updates. Additional sensor information is then considered by fusing INS and GPS information into the SLAM filter. The SLAM system, in its vision-only and vision/IMU forms, is tested on a table top, in an open room, and finally in an outdoor environment. For the outdoor environment, a form of the SLAM algorithm that fuses vision, IMU, and GPS information is tested. The proposed SLAM algorithm, and its several forms, are implemented in C++ using an Extended Kalman Filter (EKF). Experiments utilizing a live video feed from a webcam are performed. The different forms of the filter are compared and conclusions are made on
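    The landmark-tracking step borrowed from OpenCV can be reproduced in a few calls. The sketch below uses Python rather than the thesis's C++, and the camera index, corner detector settings, and pyramid depth are hypothetical:

        import cv2

        cap = cv2.VideoCapture(0)                  # hypothetical webcam index
        ok, prev = cap.read()
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=7)

        lk = dict(winSize=(21, 21), maxLevel=3,    # 3-level image pyramid
                  criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

        for _ in range(100):                       # track over a short clip
            ok, frame = cap.read()
            if not ok or pts is None or len(pts) == 0:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Pyramidal Lucas-Kanade: follow each corner into the new frame.
            new_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                             pts, None, **lk)
            good = new_pts[status.ravel() == 1]    # successfully tracked landmarks
            # ...the tracked correspondences would update the SLAM filter here...
            prev_gray, pts = gray, good.reshape(-1, 1, 2)

        cap.release()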

  16. Bioinspired minimal machine multiaperture apposition vision system.

    PubMed

    Davis, John D; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael

    2008-01-01

    Traditional machine vision systems have an inherent data bottleneck that arises because data collected in parallel must be serialized for transfer from the sensor to the processor. Furthermore, much of this data is not useful for information extraction. This project takes inspiration from the visual system of the house fly, Musca domestica, to reduce this bottleneck by employing early (up front) analog preprocessing to limit the data transfer. This is a first step toward an all analog, parallel vision system. While the current implementation has serial stages, nothing would prevent it from being fully parallel. A one-dimensional photo sensor array with analog pre-processing is used as the sole sensory input to a mobile robot. The robot's task is to chase a target car while avoiding obstacles in a constrained environment. Key advantages of this approach include passivity and the potential for very high effective "frame rates."

  17. Missileborne Artificial Vision System (MAVIS)

    NASA Technical Reports Server (NTRS)

    Andes, David K.; Witham, James C.; Miles, Michael D.

    1994-01-01

    Several years ago when INTEL and China Lake designed the ETANN chip, analog VLSI appeared to be the only way to do high density neural computing. In the last five years, however, digital parallel processing chips capable of performing neural computation functions have evolved to the point of rough equality with analog chips in system level computational density. The Naval Air Warfare Center, China Lake, has developed a real time, hardware and software system designed to implement and evaluate biologically inspired retinal and cortical models. The hardware is based on the Adaptive Solutions Inc. massively parallel CNAPS system COHO boards. Each COHO board is a standard size 6U VME card featuring 256 fixed point, RISC processors running at 20 MHz in a SIMD configuration. Each COHO board has a companion board built to support a real time VSB interface to an imaging seeker, a NTSC camera, and to other COHO boards. The system is designed to have multiple SIMD machines each performing different corticomorphic functions. The system level software has been developed which allows a high level description of corticomorphic structures to be translated into the native microcode of the CNAPS chips. Corticomorphic structures are those neural structures with a form similar to that of the retina, the lateral geniculate nucleus, or the visual cortex. This real time hardware system is designed to be shrunk into a volume compatible with air launched tactical missiles. Initial versions of the software and hardware have been completed and are in the early stages of integration with a missile seeker.

  18. Visual Turing test for computer vision systems

    PubMed Central

    Geman, Donald; Geman, Stuart; Hallonquist, Neil; Younes, Laurent

    2015-01-01

    Today, computer vision systems are tested by their accuracy in detecting and localizing instances of objects. As an alternative, and motivated by the ability of humans to provide far richer descriptions and even tell a story about an image, we construct a “visual Turing test”: an operator-assisted device that produces a stochastic sequence of binary questions from a given test image. The query engine proposes a question; the operator either provides the correct answer or rejects the question as ambiguous; the engine proposes the next question (“just-in-time truthing”). The test is then administered to the computer-vision system, one question at a time. After the system’s answer is recorded, the system is provided the correct answer and the next question. Parsing is trivial and deterministic; the system being tested requires no natural language processing. The query engine employs statistical constraints, learned from a training set, to produce questions with essentially unpredictable answers—the answer to a question, given the history of questions and their correct answers, is nearly equally likely to be positive or negative. In this sense, the test is only about vision. The system is designed to produce streams of questions that follow natural story lines, from the instantiation of a unique object, through an exploration of its properties, and on to its relationships with other uniquely instantiated objects. PMID:25755262
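    The unpredictability criterion can be stated compactly: given the history, ask the candidate question whose estimated probability of a positive answer lies closest to 1/2. A toy sketch follows (Python; the candidate questions and probability estimates are hypothetical stand-ins for the learned statistical model):

        # Toy question selector: ask whichever candidate question has an
        # estimated P(positive answer | history) closest to 1/2.
        candidates = {
            "is there a person in the designated region?": 0.92,
            "is the person to the left of the vehicle?": 0.48,
            "is the vehicle dark-colored?": 0.71,
        }

        def next_question(cands):
            return min(cands, key=lambda q: abs(cands[q] - 0.5))

        print(next_question(candidates))   # picks the nearly 50/50 question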

  19. Applications of Augmented Vision Head-Mounted Systems in Vision Rehabilitation

    PubMed Central

    Peli, Eli; Luo, Gang; Bowers, Alex; Rensing, Noa

    2007-01-01

    Vision loss typically affects either the wide peripheral vision (important for mobility), or central vision (important for seeing details). Traditional optical visual aids usually recover the lost visual function, but at a high cost for the remaining visual function. We have developed a novel concept of vision-multiplexing using augmented vision head-mounted display systems to address vision loss. Two applications are discussed in this paper. In the first, minified edge images from a head-mounted video camera are presented on a see-through display providing visual field expansion for people with peripheral vision loss, while still enabling the full resolution of the residual central vision to be maintained. The concept has been applied in daytime and nighttime devices. A series of studies suggested that the system could help with visual search, obstacle avoidance, and nighttime mobility. Subjects were positive in their ratings of device cosmetics and ergonomics. The second application is for people with central vision loss. Using an on-axis aligned camera and display system, central visibility is enhanced with 1:1 scale edge images, while still enabling the wide field of the unimpaired peripheral vision to be maintained. The registration error of the system was found to be low in laboratory testing. PMID:18172511

  20. Stereoscopic Vision System For Robotic Vehicle

    NASA Technical Reports Server (NTRS)

    Matthies, Larry H.; Anderson, Charles H.

    1993-01-01

    Distances estimated from images by cross-correlation. Two-camera stereoscopic vision system with onboard processing of image data developed for use in guiding robotic vehicle semiautonomously. Combination of semiautonomous guidance and teleoperation useful in remote and/or hazardous operations, including clean-up of toxic wastes, exploration of dangerous terrain on Earth and other planets, and delivery of materials in factories where unexpected hazards or obstacles can arise.

  1. Progress in building a cognitive vision system

    NASA Astrophysics Data System (ADS)

    Benjamin, D. Paul; Lyons, Damian; Yue, Hong

    2016-05-01

    We are building a cognitive vision system for mobile robots that works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion to create a local dynamic spatial model. These local 3D models are composed to create an overall 3D model of the robot and its environment. This approach turns the computer vision problem into a search problem whose goal is the acquisition of sufficient spatial understanding for the robot to succeed at its tasks. The research hypothesis of this work is that the movements of the robot's cameras are only those that are necessary to build a sufficiently accurate world model for the robot's current goals. For example, if the goal is to navigate through a room, the model needs to contain any obstacles that would be encountered, giving their approximate positions and sizes. Other information does not need to be rendered into the virtual world, so this approach trades model accuracy for speed.

  2. Low-power smart vision system-on-a-chip design for ultrafast machine vision applications

    NASA Astrophysics Data System (ADS)

    Fang, Wai-Chi

    1998-03-01

    In this paper, an ultra-fast smart vision system-on-a-chip design is proposed to provide effective solutions for real-time machine vision applications by taking advantage of recent advances in integrated sensing/processing designs, electronic neural networks, advanced microprocessors, and sub-micron VLSI technology. The smart vision system mimics what is inherent in biological vision systems. It is programmable to perform vision processing at all levels, such as image acquisition, image fusion, image analysis, and scene interpretation. A system-on-a-chip implementation of this smart vision system is shown to be feasible by integrating the whole system into a 3-cm by 3-cm chip design in a 0.18-micrometer CMOS technology. The system achieves one tera-operation-per-second computing power, a two-order-of-magnitude increase over state-of-the-art microcomputer and DSP chips. Its high performance is due to massively parallel computing structures, high data throughput rates, fast learning capabilities, and advanced VLSI system-on-a-chip implementation. This highly integrated smart vision system can be used for various NASA scientific missions and other military, industrial, or commercial vision applications.

  3. Robust active stereo vision using Kullback-Leibler divergence.

    PubMed

    Wang, Yongchang; Liu, Kai; Hao, Qi; Wang, Xianwang; Lau, Daniel L; Hassebrook, Laurence G

    2012-03-01

    Active stereo vision is a method of 3D surface scanning involving the projecting and capturing of a series of light patterns where depth is derived from correspondences between the observed and projected patterns. In contrast, passive stereo vision reveals depth through correspondences between textured images from two or more cameras. By employing a projector, active stereo vision systems find correspondences between two or more cameras, without ambiguity, independent of object texture. In this paper, we present a hybrid 3D reconstruction framework that supplements projected pattern correspondence matching with texture information. The proposed scheme consists of using projected pattern data to derive initial correspondences across cameras and then using texture data to eliminate ambiguities. Pattern modulation data are then used to estimate error models from which Kullback-Leibler divergence refinement is applied to reduce misregistration errors. Using only a small number of patterns, the presented approach reduces measurement errors versus traditional structured light and phase matching methodologies while being insensitive to gamma distortion, projector flickering, and secondary reflections. Experimental results demonstrate these advantages in terms of enhanced 3D reconstruction performance in the presence of noise, deterministic distortions, and conditions of texture and depth contrast.
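    The refinement step rests on the standard discrete Kullback-Leibler divergence, D(P‖Q) = Σᵢ pᵢ log(pᵢ/qᵢ). A minimal numpy version is sketched below (the error histograms are hypothetical; the paper derives its error models from pattern modulation data):

        import numpy as np

        def kl_divergence(p, q, eps=1e-12):
            """Discrete KL divergence D(P || Q) = sum p * log(p / q), in nats."""
            p = np.asarray(p, float) + eps
            q = np.asarray(q, float) + eps
            p, q = p / p.sum(), q / q.sum()    # normalize to valid distributions
            return float(np.sum(p * np.log(p / q)))

        # Hypothetical error histograms for two candidate correspondences: the
        # one whose error model better matches the observation is preferred.
        observed = [0.05, 0.20, 0.50, 0.20, 0.05]
        model_a  = [0.10, 0.20, 0.40, 0.20, 0.10]
        model_b  = [0.40, 0.30, 0.15, 0.10, 0.05]
        print(kl_divergence(observed, model_a))   # smaller -> better match
        print(kl_divergence(observed, model_b))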

  4. Three-dimensional motion estimation using genetic algorithms from image sequence in an active stereo vision system

    NASA Astrophysics Data System (ADS)

    Dipanda, Albert; Ajot, Jerome; Woo, Sanghyuk

    2003-06-01

    This paper proposes a method for estimating 3D rigid motion parameters from an image sequence of a moving object. The 3D surface measurement is achieved using an active stereovision system composed of a camera and a light projector, which illuminates the objects to be analyzed with a pyramid-shaped laser beam. By associating the laser rays with the spots in the 2D image, the 3D points corresponding to these spots are reconstructed. Each image of the sequence provides a set of 3D points, which is modeled by a B-spline surface. Therefore, estimating the motion between two images of the sequence boils down to matching two B-spline surfaces. We formulate the matching as an optimization problem and find the optimal solution using Genetic Algorithms. A chromosome is encoded by concatenating six binary-coded parameters: the three angles of rotation and the x-axis, y-axis, and z-axis translations. We have defined an original fitness function to calculate the similarity measure between two surfaces. The matching process is performed iteratively: the number of points to be matched grows as the process advances, and results are refined until convergence. Experimental results with a real image sequence are presented to show the effectiveness of the method.
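    The chromosome layout described here, six concatenated binary-coded fields for the three rotations and three translations, decodes as in the sketch below (Python; the bit width per parameter and the parameter ranges are hypothetical):

        import numpy as np

        BITS = 10                                   # hypothetical bits per field
        RANGES = [(-np.pi, np.pi)] * 3 + [(-100.0, 100.0)] * 3  # rx, ry, rz, tx, ty, tz

        def decode(chromosome):
            """Map a 60-bit string onto the six rigid-motion parameters."""
            assert len(chromosome) == 6 * BITS
            params = []
            for i, (lo, hi) in enumerate(RANGES):
                field = chromosome[i * BITS:(i + 1) * BITS]
                frac = int(field, 2) / (2 ** BITS - 1)   # normalize to [0, 1]
                params.append(lo + frac * (hi - lo))
            return params                                # [rx, ry, rz, tx, ty, tz]

        rng = np.random.default_rng(1)
        chromosome = "".join(rng.choice(list("01"), size=6 * BITS))
        print([f"{v:.2f}" for v in decode(chromosome)])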

  5. Vision system testing for teleoperated vehicles

    SciTech Connect

    McGovern, D.E.; Miller, D.P.

    1989-03-01

    This study compared three forward-looking vision systems: a fixed-mount black-and-white video camera system, a fixed-mount color video camera system, and a steering-slaved color video camera system. Subjects were exposed to a variety of objects and obstacles over a marked, off-road course while either viewing videotape or performing actual teleoperation of the vehicle. The subjects were required to detect and identify those objects which might require action while driving, such as slowing down or maneuvering around the object. Subjects also estimated distances using the same video systems as in the driving task. Two modes of driver interaction were tested: (1) actual remote driving, and (2) noninteractive video simulation. Remote driving has the advantage of realism, but is subject to variability in driving strategies and can be hazardous to equipment. Video simulation provides a more controlled environment in which to compare vision-system parameters, but at the expense of some realism. Results demonstrated that relative differences in performance among the vision systems are generally consistent between the two test modes of remote driving and simulation. A detection-range metric was found to be sensitive enough to demonstrate performance differences when viewing large objects. It was also found that subjects typically overestimated distances and, when in error judging clearance, tended to overestimate the gap between objects. 11 refs., 26 figs., 4 tabs.

  6. Identifying the Computational Requirements of an Integrated Top-Down-Bottom-Up Model for Overt Visual Attention within an Active Vision System

    PubMed Central

    McBride, Sebastian; Huelse, Martin; Lee, Mark

    2013-01-01

    Computational visual attention systems have been constructed in order for robots and other devices to detect and locate regions of interest in their visual world. Such systems often attempt to take account of what is known of the human visual system and employ concepts, such as ‘active vision’, to gain various perceived advantages. However, despite the potential for gaining insights from such experiments, the computational requirements for visual attention processing are often not clearly presented from a biological perspective. This was the primary objective of this study, attained through two specific phases of investigation: 1) conceptual modeling of a top-down-bottom-up framework through critical analysis of the psychophysical and neurophysiological literature, 2) implementation and validation of the model into robotic hardware (as a representative of an active vision system). Seven computational requirements were identified: 1) transformation of retinotopic to egocentric mappings, 2) spatial memory for the purposes of medium-term inhibition of return, 3) synchronization of ‘where’ and ‘what’ information from the two visual streams, 4) convergence of top-down and bottom-up information to a centralized point of information processing, 5) a threshold function to elicit saccade action, 6) a function to represent task relevance as a ratio of excitation and inhibition, and 7) derivation of excitation and inhibition values from object-associated feature classes. The model provides further insight into the nature of data representation and transfer between brain regions associated with the vertebrate ‘active’ visual attention system. In particular, the model lends strong support to the functional role of the lateral intraparietal region of the brain as a primary area of information consolidation that directs putative action through the use of a ‘priority map’. PMID:23437044

  7. Kiwi Forego Vision in the Guidance of Their Nocturnal Activities

    PubMed Central

    Martin, Graham R.; Wilson, Kerry-Jayne; Martin Wild, J.; Parsons, Stuart; Fabiana Kubke, M.; Corfield, Jeremy

    2007-01-01

    Background In vision, there is a trade-off between sensitivity and resolution, and any eye which maximises information gain at low light levels needs to be large. This imposes exacting constraints upon vision in nocturnal flying birds. Eyes are essentially heavy, fluid-filled chambers, and in flying birds their increased size is countered by selection for both reduced body mass and the distribution of mass towards the body core. Freed from these mass constraints, it would be predicted that in flightless birds nocturnality should favour the evolution of large eyes and reliance upon visual cues for the guidance of activity. Methodology/Principal Findings We show that in Kiwi (Apterygidae), flightlessness and nocturnality have, in fact, resulted in the opposite outcome. Kiwi show minimal reliance upon vision indicated by eye structure, visual field topography, and brain structures, and increased reliance upon tactile and olfactory information. Conclusions/Significance This lack of reliance upon vision and increased reliance upon tactile and olfactory information in Kiwi is markedly similar to the situation in nocturnal mammals that exploit the forest floor. That Kiwi and mammals evolved to exploit these habitats quite independently provides evidence for convergent evolution in their sensory capacities that are tuned to a common set of perceptual challenges found in forest floor habitats at night and which cannot be met by the vertebrate visual system. We propose that the Kiwi visual system has undergone adaptive regressive evolution driven by the trade-off between the relatively low rate of gain of visual information that is possible at low light levels, and the metabolic costs of extracting that information. PMID:17332846

  8. Synthetic Vision Systems - Operational Considerations Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.

    2007-01-01

    Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents/accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.

  9. Synthetic vision systems: operational considerations simulation experiment

    NASA Astrophysics Data System (ADS)

    Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.

    2007-04-01

    Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents / accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.

  10. Real-time enhanced vision system

    NASA Astrophysics Data System (ADS)

    Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.; Harrah, Steven D.

    2005-05-01

    Flying in poor visibility conditions, such as rain, snow, fog or haze, is inherently dangerous. However these conditions can occur at nearly any location, so inevitably pilots must successfully navigate through them. At NASA Langley Research Center (LaRC), under support of the Aviation Safety and Security Program Office and the Systems Engineering Directorate, we are developing an Enhanced Vision System (EVS) that combines image enhancement and synthetic vision elements to assist pilots flying through adverse weather conditions. This system uses a combination of forward-looking infrared and visible sensors for data acquisition. A core function of the system is to enhance and fuse the sensor data in order to increase the information content and quality of the captured imagery. These operations must be performed in real-time for the pilot to use while flying. For image enhancement, we are using the LaRC patented Retinex algorithm since it performs exceptionally well for improving low-contrast range imagery typically seen during poor visibility conditions. In general, real-time operation of the Retinex requires specialized hardware. To date, we have successfully implemented a single-sensor real-time version of the Retinex on several different Digital Signal Processor (DSP) platforms. In this paper we give an overview of the EVS and its performance requirements for real-time enhancement and fusion and we discuss our current real-time Retinex implementations on DSPs.

  11. Real-time Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Rahman, Zia-Ur; Jobson, Daniel J.; Woodell, Glenn A.; Harrah, Steven D.

    2005-01-01

    Flying in poor visibility conditions, such as rain, snow, fog or haze, is inherently dangerous. However these conditions can occur at nearly any location, so inevitably pilots must successfully navigate through them. At NASA Langley Research Center (LaRC), under support of the Aviation Safety and Security Program Office and the Systems Engineering Directorate, we are developing an Enhanced Vision System (EVS) that combines image enhancement and synthetic vision elements to assist pilots flying through adverse weather conditions. This system uses a combination of forward-looking infrared and visible sensors for data acquisition. A core function of the system is to enhance and fuse the sensor data in order to increase the information content and quality of the captured imagery. These operations must be performed in real-time for the pilot to use while flying. For image enhancement, we are using the LaRC patented Retinex algorithm since it performs exceptionally well for improving low-contrast range imagery typically seen during poor visibility conditions. In general, real-time operation of the Retinex requires specialized hardware. To date, we have successfully implemented a single-sensor real-time version of the Retinex on several different Digital Signal Processor (DSP) platforms. In this paper we give an overview of the EVS and its performance requirements for real-time enhancement and fusion and we discuss our current real-time Retinex implementations on DSPs.
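
    For reference, the textbook single-scale Retinex takes the form R(x, y) = log I(x, y) - log[G_sigma * I](x, y), i.e., the log image minus the log of a Gaussian-blurred surround. The sketch below implements this generic form, not NASA's patented multiscale variant; the choice of sigma and the output normalization are assumptions.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def single_scale_retinex(image, sigma=80.0):
          """Enhance low-contrast imagery: log image minus log of its
          Gaussian-blurred surround, stretched to an 8-bit range."""
          img = image.astype(np.float64) + 1.0        # offset avoids log(0)
          surround = gaussian_filter(img, sigma)      # local illumination estimate
          r = np.log(img) - np.log(surround)          # reflectance-like component
          r -= r.min()                                # shift to non-negative values
          return (255.0 * r / max(r.max(), 1e-12)).astype(np.uint8)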

  12. Knowledge-based machine vision systems for space station automation

    NASA Technical Reports Server (NTRS)

    Ranganath, Heggere S.; Chipman, Laure J.

    1989-01-01

    Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.

  13. Enhanced vision system for laparoscopic surgery.

    PubMed

    Tamadazte, Brahim; Fiard, Gaelle; Long, Jean-Alexandre; Cinquin, Philippe; Voros, Sandrine

    2013-01-01

    Laparoscopic surgery offers benefits to the patients but poses new challenges to the surgeons, including a limited field of view. In this paper, we present an innovative vision system that can be combined with a traditional laparoscope and provides the surgeon with a global view of the abdominal cavity, bringing him or her closer to open surgery conditions. We present our first experiments performed on a testbench mimicking a laparoscopic setup: they demonstrate a substantial time gain in performing a complex task consisting of bringing a thread into the field of view of the laparoscope.

  14. Part identification in robotic assembly using vision system

    NASA Astrophysics Data System (ADS)

    Balabantaray, Bunil Kumar; Biswal, Bibhuti Bhusan

    2013-12-01

    The machine vision system plays an important role in making a robotic assembly system autonomous. Identification of the correct part is an important task which needs to be done carefully by the vision system to feed the robot with correct information for further processing. This process consists of many sub-processes wherein image capturing, digitizing, and enhancing, etc. account for reconstructing the part for subsequent operations. Interest point detection in the grabbed image therefore plays an important role in the entire image processing activity, so the correct tool must be chosen with respect to the given environment. In this paper an analysis of three major corner detection algorithms is performed on the basis of their accuracy, speed and robustness to noise. The work is performed in Matlab R2012a. An attempt has been made to find the best algorithm for the problem.
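
    The abstract does not name the three corner detectors compared, so the sketch below runs the classic Harris detector, one plausible candidate, through OpenCV as a Python stand-in for the authors' Matlab R2012a experiments; the threshold and test image name are illustrative assumptions.

      import cv2
      import numpy as np

      def harris_corners(gray, block_size=2, ksize=3, k=0.04, rel_thresh=0.01):
          """Return (row, col) coordinates whose Harris response exceeds a
          fraction of the maximum response in the image."""
          response = cv2.cornerHarris(np.float32(gray), block_size, ksize, k)
          ys, xs = np.where(response > rel_thresh * response.max())
          return list(zip(ys, xs))

      gray = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)  # hypothetical part image
      if gray is not None:
          print(len(harris_corners(gray)), "candidate interest points")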

  15. DLP™-based dichoptic vision test system

    NASA Astrophysics Data System (ADS)

    Woods, Russell L.; Apfelbaum, Henry L.; Peli, Eli

    2010-01-01

    It can be useful to present a different image to each of the two eyes while they cooperatively view the world. Such dichoptic presentation can occur in investigations of stereoscopic and binocular vision (e.g., strabismus, amblyopia) and vision rehabilitation in clinical and research settings. Various techniques have been used to construct dichoptic displays. The most common and most flexible modern technique uses liquid-crystal (LC) shutters. When used in combination with cathode ray tube (CRT) displays, there is often leakage of light from the image intended for one eye into the view of the other eye. Such interocular crosstalk is 14% even in our state-of-the-art CRT-based dichoptic system. While such crosstalk may have minimal impact on stereo movie or video game experiences, it can defeat clinical and research investigations. We use micromirror digital light processing (DLP™) technology to create a novel dichoptic visual display system with substantially lower interocular crosstalk (the 0.3% remaining crosstalk comes from the LC shutters). The DLP system normally uses a color wheel to display color images. Our approach is to disable the color wheel, synchronize the display directly to the computer's sync signal, allocate each of the three (former) color presentations to one or both eyes, and open and close the LC shutters in synchrony with those color events.

  16. Online updating of synthetic vision system databases

    NASA Astrophysics Data System (ADS)

    Simard, Philippe

    In aviation, synthetic vision systems render artificial views of the world (using a database of the world and pose information) to support navigation and situational awareness in low visibility conditions. The database needs to be periodically updated to ensure its consistency with reality, since it reflects at best a nominal state of the environment. This thesis presents an approach for automatically updating the geometry of synthetic vision system databases and 3D models in general. The approach is novel in that it profits from all of the available prior information: intrinsic/extrinsic camera parameters and geometry of the world. Geometric inconsistencies (or anomalies) between the model and reality are quickly localized; this localization serves to significantly reduce the complexity of the updating problem. Given a geometric model of the world, a sample image and known camera motion, a predicted image can be generated based on a differential approach. Model locations where predictions do not match observations are assumed to be incorrect. The updating is then cast as an optimization problem where differences between observations and predictions are minimized. To cope with system uncertainties, a mechanism that automatically infers their impact on prediction validity is derived. This method not only renders the anomaly detection process robust but also prevents the overfitting of the data. The updating framework is examined at first using synthetic data and further tested in both a laboratory environment and using a helicopter in flight. Experimental results show that the algorithm is effective and robust across different operating conditions.
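
    The anomaly-localization step can be pictured as a per-pixel test between the observed frame and the frame predicted from the model and the known camera motion. The sketch below abstracts the rendering away and assumes a per-pixel uncertainty map; the threshold tau is an illustrative choice, not the thesis's exact mechanism.

      import numpy as np

      def localize_anomalies(observed, predicted, sigma, tau=3.0):
          """Flag pixels whose observation-minus-prediction residual exceeds
          tau standard deviations of the prediction uncertainty."""
          residual = observed.astype(np.float64) - predicted.astype(np.float64)
          return np.abs(residual) > tau * sigma  # True where the model looks wrong

      # Only the flagged regions would be passed to the geometry optimizer,
      # which is what keeps the updating problem tractable.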

  17. Forward Obstacle Detection System by Stereo Vision

    NASA Astrophysics Data System (ADS)

    Iwata, Hiroaki; Saneyoshi, Keiji

    Forward obstacle detection is needed to prevent car accidents. We have developed a forward obstacle detection system that achieves good detectability and distance accuracy using stereo vision alone. The system runs in real time using a stereo processing system based on a Field-Programmable Gate Array (FPGA). Road surfaces are detected so that the space to be searched for driving can be limited. A smoothing filter is also used. Owing to these, the accuracy of distance is improved. In the experiments, this system could detect forward obstacles 100 m away. Its distance error up to 80 m was less than 1.5 m. It could immediately detect cutting-in objects.
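
    The range computation behind such figures is standard stereo triangulation, Z = fB/d, for focal length f in pixels, baseline B in metres and disparity d in pixels. The parameters below are hypothetical, not the paper's FPGA rig.

      def stereo_range(disparity_px, focal_px=1200.0, baseline_m=0.35):
          """Range in metres from a positive disparity in pixels (Z = f*B/d)."""
          if disparity_px <= 0:
              raise ValueError("disparity must be positive")
          return focal_px * baseline_m / disparity_px

      # Range error grows roughly as Z**2 / (f * B) per pixel of disparity
      # error, which is why sub-pixel disparity (aided by the smoothing
      # filter mentioned above) matters most at long range.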

  18. Robot vision system programmed in Prolog

    NASA Astrophysics Data System (ADS)

    Batchelor, Bruce G.; Hack, Ralf

    1995-10-01

    This is the latest in a series of publications which develop the theme of programming a machine vision system using the artificial intelligence language Prolog. The article states the long-term objective of the research program of which this work forms part. Many but not yet all of the goals laid out in this plan have already been achieved in an integrated system, which uses a multi-layer control hierarchy. The purpose of the present paper is to demonstrate that a system based upon a Prolog controller is capable of making complex decisions and operating a standard robot. The authors chose, as a vehicle for this exercise, the task of playing dominoes against a human opponent. This game was selected for this demonstration since it models a range of industrial assembly tasks, where parts are to be mated together. (For example, a 'daisy chain' of electronic equipment and the interconnecting cables/adapters may be likened to a chain of dominoes.)

  19. HMD digital night vision system for fixed wing fighters

    NASA Astrophysics Data System (ADS)

    Foote, Bobby D.

    2013-05-01

    Digital night sensor technology offers both advantages and disadvantages over standard analog systems. As digital night sensor technology matures and its disadvantages are overcome, the transition away from analog-type sensors will accelerate with new programs. In response to this growing need, RCEVS is actively investing in digital night vision systems that will provide the performance needed for the future. Rockwell Collins and Elbit Systems of America continue to invest in digital night technology and have completed laboratory, ground, and preliminary flight testing to evaluate the key factors important for night vision. These evaluations have led to a summary of the maturity of digital night capability and of the status of the key performance gap between analog and digital systems. Introduction of digital night vision systems can be found in the roadmap of future fixed-wing and rotorcraft programs beginning in 2015. This will bring a new set of capabilities that will enhance the pilot's ability to perform night operations with no loss of performance.

  20. 78 FR 5557 - Twenty-First Meeting: RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-25

    ... Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation Administration (FAA), U.S. Department... Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing this notice to advise the public.../Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held February 5-7, 2013 from 9:00...

  1. Next generation enhanced vision system processing

    NASA Astrophysics Data System (ADS)

    Bernhardt, M.; Cowell, C.; Riley, T.

    2008-04-01

    The use of multiple, high sensitivity sensors can be usefully exploited within military airborne enhanced vision systems (EVS) to provide enhanced situational awareness. To realise such benefits, the imagery from the discrete sensors must be accurately combined and enhanced prior to image presentation to the aircrew. Furthermore, great care must be taken to not introduce artefacts or false information through the image processing routines. This paper outlines developments made to a specific system that uses three collocated low light level cameras. As well as seamlessly merging the individual images, sophisticated processing techniques are used to enhance image quality as well as to remove optical and sensor artefacts such as vignetting and CCD charge smear. The techniques have been designed and tested to be robust across a wide range of scenarios and lighting conditions, and the results presented here highlight the increased performance of the new algorithms over standard EVS image processing techniques.
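
    Vignetting removal, one of the artefact corrections named above, is commonly done by dividing each frame by a smooth gain map measured from a flat-field exposure. The sketch below shows that generic approach; it is an assumption for illustration, not the authors' specific routine.

      import numpy as np

      def devignette(frame, flat_field, eps=1e-6):
          """Correct radial intensity falloff using a flat-field calibration
          image; both arguments are float arrays of identical shape."""
          gain = flat_field / max(float(flat_field.mean()), eps)  # normalized falloff map
          return np.clip(frame / np.maximum(gain, eps), 0.0, 255.0)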

  2. Fiber optic coherent laser radar 3d vision system

    SciTech Connect

    Sebastian, R.L.; Clark, R.B.; Simonson, D.L.

    1994-12-31

    Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.
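
    In an FMCW system of this kind, range follows from the beat frequency between the transmitted and returned chirps: for a linear chirp of bandwidth B swept over period T, R = c * f_b * T / (2B). The numbers below are illustrative, not the instrument's actual parameters.

      C = 2.99792458e8  # speed of light, m/s

      def fmcw_range(beat_hz, chirp_bandwidth_hz, chirp_period_s):
          """Range in metres from the measured beat frequency."""
          return C * beat_hz * chirp_period_s / (2.0 * chirp_bandwidth_hz)

      # e.g. a 100 GHz chirp swept over 1 ms with a 10 kHz beat frequency:
      print(fmcw_range(10e3, 100e9, 1e-3))  # ~0.015 m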

  3. Application of aircraft navigation sensors to enhanced vision systems

    NASA Technical Reports Server (NTRS)

    Sweet, Barbara T.

    1993-01-01

    In this presentation, the applicability of various aircraft navigation sensors to enhanced vision system design is discussed. First, the accuracy requirements of the FAA for precision landing systems are presented, followed by the current navigation systems and their characteristics. These systems include Instrument Landing System (ILS), Microwave Landing System (MLS), Inertial Navigation, Altimetry, and Global Positioning System (GPS). Finally, the use of navigation system data to improve enhanced vision systems is discussed. These applications include radar image rectification, motion compensation, and image registration.

  4. Conducting IPN actuators for biomimetic vision system

    NASA Astrophysics Data System (ADS)

    Festin, Nicolas; Plesse, Cedric; Chevrot, Claude; Teyssié, Dominique; Pirim, Patrick; Vidal, Frederic

    2011-04-01

    In recent years, many studies on electroactive polymer (EAP) actuators have been reported. One promising technology is the elaboration of electronic conducting polymer based actuators with an Interpenetrating Polymer Network (IPN) architecture. Their many advantageous properties, such as low working voltage, light weight and long lifetime (several million cycles), make them very attractive for various applications including robotics. Our laboratory recently synthesized new conducting IPN actuators based on high-molecular-weight nitrile butadiene rubber, a poly(ethylene oxide) derivative and poly(3,4-ethylenedioxythiophene). The presence of the elastomer greatly improves actuator performance, such as mechanical resistance and output force. In this article we present the IPN and actuator synthesis, characterizations and design allowing their integration in a biomimetic vision system.

  5. Overview of NETL In-House Vision 21 Activities

    SciTech Connect

    Wildman, David J.

    2001-11-06

    The Office of Science and Technology at the National Energy Technology Laboratory conducts research in support of the Department of Energy's Fossil Energy Program. The research is funded through a variety of programs, with each program focusing on a particular aspect of fossil energy. Since the Vision 21 Concept is based on the Advanced Power System Programs (Integrated Gasification Combined Cycle, Pressurized Fluid Bed, HIPPS, Advanced Turbine Systems, and Fuel Cells), it is not surprising that much of the research supports the Vision 21 Concept. The research is classified and presented according to "enabling technologies" and "supporting technologies" as defined by the Vision 21 Program. Enabling technologies include fuel-flexible gasification, fuel-flexible combustion, hydrogen separation from fuel gas, advanced combustion systems, circulating fluid bed technology, and fuel cells. Supporting technologies include development of advanced materials, computer simulations, computational fluid dynamics modeling, and advanced environmental control. An overview of Vision 21 related research is described, emphasizing recent accomplishments and capabilities.

  6. Active vision and receptive field development in evolutionary robots.

    PubMed

    Floreano, Dario; Suzuki, Mototaka; Mattiussi, Dario

    2005-01-01

    In this paper, we describe the artificial evolution of adaptive neural controllers for an outdoor mobile robot equipped with a mobile camera. The robot can dynamically select the gazing direction by moving the body and/or the camera. The neural control system, which maps visual information to motor commands, is evolved online by means of a genetic algorithm, but the synaptic connections (receptive fields) from visual photoreceptors to internal neurons can also be modified by Hebbian plasticity while the robot moves in the environment. We show that robots evolved in physics-based simulations with Hebbian visual plasticity display more robust adaptive behavior when transferred to real outdoor environments as compared to robots evolved without visual plasticity. We also show that the formation of visual receptive fields is significantly and consistently affected by active vision as compared to the formation of receptive fields with grid sample images in the environment of the robot. Finally, we show that the interplay between active vision and receptive field formation amounts to the selection and exploitation of a small and constant subset of visual features available to the robot.
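
    The Hebbian update on the photoreceptor-to-neuron weights can be sketched as below; the Oja-style normalization and the learning rate are generic assumptions, since the abstract specifies only that the receptive fields adapt by Hebbian plasticity.

      import numpy as np

      def hebbian_update(weights, pre, eta=0.01):
          """One Oja-normalized Hebbian step: dw = eta * y * (x - y * w),
          where y is the postsynaptic response to presynaptic activity x."""
          post = float(weights @ pre)  # postsynaptic activation
          return weights + eta * post * (pre - post * weights)

      # Repeated over the images the robot actually fixates, such updates
      # shape receptive fields around the statistics of the gazed-at scene.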

  7. Hi-Vision telecine system using pickup tube

    NASA Astrophysics Data System (ADS)

    Iijima, Goro

    1992-08-01

    Hi-Vision broadcasting, offering far more lifelike pictures than those produced by existing television broadcasting systems, has enormous potential in both industrial and commercial fields. The dissemination of the Hi-Vision system will enable vivid, movie theater quality pictures to be readily enjoyed in homes in the near future. To convert motion film pictures into Hi-Vision signals, a telecine system is needed. The Hi-Vision telecine systems currently under development are the "laser telecine," "flying-spot telecine," and "Saticon telecine" systems. This paper provides an overview of the pickup tube type Hi-Vision telecine system (referred to herein as the Saticon telecine system) developed and marketed by Ikegami Tsushinki Co., Ltd.

  8. Neural Networks for Computer Vision: A Framework for Specifications of a General Purpose Vision System

    NASA Astrophysics Data System (ADS)

    Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.

    1989-03-01

    The development of autonomous land vehicles (ALV) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real-time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long-term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields including neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system and, using a neurally based computing substrate, it can complete all necessary visual tasks in real-time.

  9. Flight Test Comparison Between Enhanced Vision (FLIR) and Synthetic Vision Systems

    NASA Technical Reports Server (NTRS)

    Arthur, Jarvis J., III; Kramer, Lynda J.; Bailey, Randall E.

    2005-01-01

    Limited visibility and reduced situational awareness have been cited as predominant causal factors for both Controlled Flight Into Terrain (CFIT) and runway incursion accidents. NASA's Synthetic Vision Systems (SVS) project is developing practical application technologies with the goal of eliminating low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance. A flight test evaluation was conducted in the summer of 2004 by NASA Langley Research Center under NASA's Aviation Safety and Security, Synthetic Vision System - Commercial and Business program. A Gulfstream G-V aircraft, modified and operated under NASA contract by the Gulfstream Aerospace Corporation, was flown over a 3-week period at the Reno/Tahoe International Airport and an additional 3-week period at the NASA Wallops Flight Facility to evaluate integrated Synthetic Vision System concepts. Flight testing was conducted to evaluate the performance, usability, and acceptance of an integrated synthetic vision concept which included advanced Synthetic Vision display concepts for a transport aircraft flight deck, a Runway Incursion Prevention System, an Enhanced Vision System (EVS), and real-time Database Integrity Monitoring Equipment. This paper focuses on comparing qualitative and subjective results between EVS and SVS display concepts.

  10. Technological process supervising using vision systems cooperating with the LabVIEW vision builder

    NASA Astrophysics Data System (ADS)

    Hryniewicz, P.; Banaś, W.; Gwiazda, A.; Foit, K.; Sękala, A.; Kost, G.

    2015-11-01

    One of the most important tasks in the production process is to supervise its proper functioning. Lack of the required supervision over the production process can lead to incorrect manufacturing of the final element, to production line downtime, and hence to financial losses. The worst outcome is damage to the equipment involved in the manufacturing process. Engineers supervising the correctness of the production flow use a great range of sensors to support the supervision of a manufactured element. Vision systems are one such family of sensors. In recent years, thanks to the accelerated development of electronics as well as easier access to electronic products and attractive prices, they have become a cheap and universal type of sensor. These sensors detect practically all objects, regardless of their shape or even their state of matter; the only problems concern transparent or mirrored objects detected from the wrong angle. By integrating the vision system with LabVIEW Vision and the LabVIEW Vision Builder, it is possible to determine not only the position of a given element but also its orientation relative to any point in the analyzed space. The paper presents an example of automated inspection of the manufacturing process in a production workcell using the vision supervising system. The aim of the work is to elaborate a vision system that could integrate different applications and devices used in different production systems to control the manufacturing process.

  11. Flight test comparison between enhanced vision (FLIR) and synthetic vision systems

    NASA Astrophysics Data System (ADS)

    Arthur, Jarvis J., III; Kramer, Lynda J.; Bailey, Randall E.

    2005-05-01

    Limited visibility and reduced situational awareness have been cited as predominant causal factors for both Controlled Flight Into Terrain (CFIT) and runway incursion accidents. NASA's Synthetic Vision Systems (SVS) project is developing practical application technologies with the goal of eliminating low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance. A flight test evaluation was conducted in the summer of 2004 by NASA Langley Research Center under NASA's Aviation Safety and Security, Synthetic Vision System - Commercial and Business program. A Gulfstream G-V aircraft, modified and operated under NASA contract by the Gulfstream Aerospace Corporation, was flown over a 3-week period at the Reno/Tahoe International Airport and an additional 3-week period at the NASA Wallops Flight Facility to evaluate integrated Synthetic Vision System concepts. Flight testing was conducted to evaluate the performance, usability, and acceptance of an integrated synthetic vision concept which included advanced Synthetic Vision display concepts for a transport aircraft flight deck, a Runway Incursion Prevention System, an Enhanced Vision System (EVS), and real-time Database Integrity Monitoring Equipment. This paper focuses on comparing qualitative and subjective results between EVS and SVS display concepts.

  12. Nuclear bimodal new vision solar system missions

    SciTech Connect

    Mondt, J.F.; Zubrin, R.M.

    1996-03-01

    This paper presents an analysis of the potential mission capability using space reactor bimodal systems for planetary missions. Missions of interest include the main-belt asteroids, Jupiter, Saturn, Neptune, and Pluto. The space reactor bimodal system, defined by an Air Force study for Earth orbital missions, provides 10 kWe power, 1000 N thrust, and 850 s Isp, with a 1500 kg system mass. Trajectories to the planetary destinations were examined and optimal direct and gravity-assisted trajectories were selected. A conceptual design for a spacecraft using the space reactor bimodal system for propulsion and power, capable of performing the missions of interest, is defined. End-to-end mission conceptual designs for bimodal orbiter missions to Jupiter and Saturn are described. All missions considered use the Delta 3 class or Atlas 2AS launch vehicles. The space reactor bimodal power and propulsion system offers both new-vision "constellation"-type missions, in which the space reactor bimodal spacecraft acts as a carrier and communication spacecraft for a fleet of microspacecraft deployed at different scientific targets, and conventional missions with only a space reactor bimodal spacecraft and its science payload. © 1996 American Institute of Physics.

  13. Intelligent Computer Vision System for Automated Classification

    NASA Astrophysics Data System (ADS)

    Jordanov, Ivan; Georgieva, Antoniya

    2010-05-01

    In this paper we investigate an Intelligent Computer Vision System applied for recognition and classification of commercially available cork tiles. The system is capable of acquiring and processing gray images using several feature generation and analysis techniques. Its functionality includes image acquisition, feature extraction and preprocessing, and feature classification with neural networks (NN). We also discuss system test and validation results from the recognition and classification tasks. The system investigation also includes statistical feature processing (features number and dimensionality reduction techniques) and classifier design (NN architecture, target coding, learning complexity and performance, and training with our own metaheuristic optimization method). The NNs trained with our genetic low-discrepancy search method (GLPτS) for global optimisation demonstrated very good generalisation abilities. In our view, the reported testing success rate of up to 95% is due to several factors: combination of feature generation techniques; application of Analysis of Variance (ANOVA) and Principal Component Analysis (PCA), which appeared to be very efficient for preprocessing the data; and use of suitable NN design and learning method.

  14. Intelligent Computer Vision System for Automated Classification

    SciTech Connect

    Jordanov, Ivan; Georgieva, Antoniya

    2010-05-21

    In this paper we investigate an Intelligent Computer Vision System applied for recognition and classification of commercially available cork tiles. The system is capable of acquiring and processing gray images using several feature generation and analysis techniques. Its functionality includes image acquisition, feature extraction and preprocessing, and feature classification with neural networks (NN). We also discuss system test and validation results from the recognition and classification tasks. The system investigation also includes statistical feature processing (features number and dimensionality reduction techniques) and classifier design (NN architecture, target coding, learning complexity and performance, and training with our own metaheuristic optimization method). The NNs trained with our genetic low-discrepancy search method (GLPτS) for global optimisation demonstrated very good generalisation abilities. In our view, the reported testing success rate of up to 95% is due to several factors: combination of feature generation techniques; application of Analysis of Variance (ANOVA) and Principal Component Analysis (PCA), which appeared to be very efficient for preprocessing the data; and use of suitable NN design and learning method.
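
    A rough scikit-learn analogue of the pipeline described in both records above (dimensionality reduction followed by a neural-network classifier) is sketched here; PCA and MLPClassifier are stand-ins for the authors' ANOVA/PCA preprocessing and GLPτS-trained networks, and the data arrays are placeholders.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neural_network import MLPClassifier
      from sklearn.pipeline import make_pipeline

      X = np.random.rand(200, 64)        # placeholder cork-tile texture features
      y = np.random.randint(0, 4, 200)   # placeholder class labels

      model = make_pipeline(PCA(n_components=16),
                            MLPClassifier(hidden_layer_sizes=(32,), max_iter=500))
      model.fit(X, y)
      print("training accuracy:", model.score(X, y))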

  15. Enhanced Flight Vision Systems and Synthetic Vision Systems for NextGen Approach and Landing Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K. E.; Williams, Steven P.; Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Shelton, Kevin J.

    2013-01-01

    Synthetic Vision Systems and Enhanced Flight Vision System (SVS/EFVS) technologies have the potential to provide additional margins of safety for aircrew performance and enable operational improvements for low visibility operations in the terminal area environment with efficiency equivalent to that of visual operations. To meet this potential, research is needed for effective technology development and implementation of regulatory standards and design guidance to support introduction and use of SVS/EFVS advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. A fixed-base pilot-in-the-loop simulation test was conducted at NASA Langley Research Center that evaluated the use of SVS/EFVS in NextGen low visibility approach and landing operations. Twelve crews flew approach and landing operations in a simulated NextGen Chicago O'Hare environment. Various scenarios tested the potential for using EFVS to conduct approach, landing, and roll-out operations in visibility as low as 1000 feet runway visual range (RVR). Also, SVS was tested to evaluate the potential for lowering decision heights (DH) on certain instrument approach procedures below what can be flown today. Expanding the portion of the visual segment in which EFVS can be used in lieu of natural vision from 100 feet above the touchdown zone elevation to touchdown and rollout in visibilities as low as 1000 feet RVR appears to be viable, as touchdown performance was acceptable without any apparent workload penalties. A lower DH of 150 feet and/or possibly reduced visibility minima using SVS appears to be viable when implemented on a Head-Up Display, but the landing data suggests further study for head-down implementations.

  16. Computer vision for driver assistance systems

    NASA Astrophysics Data System (ADS)

    Handmann, Uwe; Kalinke, Thomas; Tzomakas, Christos; Werner, Martin; von Seelen, Werner

    1998-07-01

    Systems for automated image analysis are useful for a variety of tasks and their importance is still increasing due to technological advances and an increase of social acceptance. Especially in the field of driver assistance systems the progress in science has reached a level of high performance. Fully or partly autonomously guided vehicles, particularly for road-based traffic, pose high demands on the development of reliable algorithms due to the conditions imposed by natural environments. At the Institut für Neuroinformatik, methods for analyzing driving relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We introduce a system which extracts the important information from an image taken by a CCD camera installed at the rear view mirror in a car. The approach consists of a sequential and a parallel sensor and information processing. Three main tasks, namely the initial segmentation (object detection), object tracking and object classification, are realized by integration in the sequential branch and by fusion in the parallel branch. The main gain of this approach is given by the integrative coupling of different algorithms providing partly redundant information.

  17. GSFC Information Systems Technology Developments Supporting the Vision for Space Exploration

    NASA Technical Reports Server (NTRS)

    Hughes, Peter; Dennehy, Cornelius; Mosier, Gary; Smith, Dan; Rykowski, Lisa

    2004-01-01

    The Vision for Space Exploration will guide NASA's future human and robotic space activities. The broad range of human and robotic missions now being planned will require the development of new system-level capabilities enabled by emerging new technologies. Goddard Space Flight Center is actively supporting the Vision for Space Exploration in a number of program management, engineering and technology areas. This paper provides a brief background on the Vision for Space Exploration and a general overview of potential key Goddard contributions. In particular, this paper focuses on describing relevant GSFC information systems capabilities in architecture development; interoperable command, control and communications; and other applied information systems technology/research activities that are applicable to support the Vision for Space Exploration goals. Current GSFC development efforts and task activities are presented together with future plans.

  18. Active vision task and postural control in healthy, young adults: Synergy and probably not duality.

    PubMed

    Bonnet, Cédrick T; Baudry, Stéphane

    2016-07-01

    In upright stance, individuals sway continuously, and the sway pattern in dual tasks (e.g., a cognitive task performed in upright stance) differs significantly from that observed during the control quiet-stance task. The cognitive approach has generated models (limited attentional resources, U-shaped nonlinear interaction) to explain such patterns based on competitive sharing of attentional resources. The objective of the current manuscript was to review these cognitive models in the specific context of visual tasks involving gaze shifts toward precise targets (here called active vision tasks). The selection excluded the effects of early and late stages of life or disease, external perturbations, active vision tasks requiring head and body motions, and the combination of two tasks performed together (e.g., a visual task in addition to a computation in one's head). The selection included studies performed by healthy, young adults with control and active (difficult) vision tasks. Of over 174 studies found in the PubMed and Mendeley databases, nine were selected. In these studies, young adults exhibited significantly lower amplitude of body displacement (center of pressure and/or body marker) under active vision tasks than under the control task. Furthermore, the more difficult the active vision tasks were, the better the postural control was. This underscores that postural control during active vision tasks may rely on synergistic relations between the postural and visual systems rather than on competitive or dual relations. In contrast, in the control task, there would not be any synergistic or competitive relations.

  19. A vision architecture for the extravehicular activity retriever

    NASA Technical Reports Server (NTRS)

    Magee, Michael

    1992-01-01

    The Extravehicular Activity Retriever (EVAR) is a robotic device currently being developed by the Automation and Robotics Division at the NASA Johnson Space Center to support activities in the neighborhood of the Space Shuttle or Space Station Freedom. As the name implies, the Retriever's primary function will be to provide the capability to retrieve tools, equipment or other objects which have become detached from the spacecraft, but it will also be able to rescue a crew member who may have become inadvertently de-tethered. Later goals will include cooperative operations between a crew member and the Retriever such as fetching a tool that is required for servicing or maintenance operations. This report documents a preliminary design for a Vision System Planner (VSP) for the EVAR that is capable of achieving visual objectives provided to it by a high level task planner. Typical commands which the task planner might issue to the VSP relate to object recognition, object location determination, and obstacle detection. Upon receiving a command from the task planner, the VSP then plans a sequence of actions to achieve the specified objective using a model-based reasoning approach. This sequence may involve choosing an appropriate sensor, selecting an algorithm to process the data, reorienting the sensor, adjusting the effective resolution of the image using lens zooming capability, and/or requesting the task planner to reposition the EVAR to obtain a different view of the object. An initial version of the Vision System Planner which realizes the above capabilities using simulated images has been implemented and tested. The remaining sections describe the architecture and capabilities of the VSP and its relationship to the high level task planner. In addition, typical plans that are generated to achieve visual goals for various scenarios will be discussed. Specific topics to be addressed will include object search strategies and repositioning of the EVAR to improve the view of an object.

  20. High Speed Research - External Vision System (EVS)

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Imagine flying a supersonic passenger jet (like the Concorde) at 1500 mph with no front windows in the cockpit - it may one day be a reality, as seen in this animation still. NASA engineers are working to develop technology that would replace the forward cockpit windows in future supersonic passenger jets with large sensor displays. These displays would use video images, enhanced by computer-generated graphics, to take the place of the view out the front windows. The envisioned eXternal Visibility System (XVS) would guide pilots to an airport, warn them of other aircraft near their path, and provide additional visual aids for airport approaches, landings and takeoffs. Currently, supersonic transports like the Anglo-French Concorde droop the front of the jet (the 'nose') downward to allow the pilots to see forward during takeoffs and landings. By enhancing the pilots' vision with high-resolution video displays, future supersonic transport designers could eliminate the heavy and expensive, mechanically-drooped nose. A future U.S. supersonic passenger jet, as envisioned by NASA's High-Speed Research (HSR) program, would carry 300 passengers more than 5000 nautical miles at more than 1500 miles per hour (more than twice the speed of sound). Traveling from Los Angeles to Tokyo would take only four hours, with an anticipated fare increase of only 20 percent over current ticket prices for substantially slower subsonic flights. Animation by Joey Ponthieux, Computer Sciences Corporation, Inc.

  1. Advanced helmet vision system (AHVS) integrated night vision helmet mounted display (HMD)

    NASA Astrophysics Data System (ADS)

    Ashcraft, Todd W.; Atac, Robert

    2012-06-01

    Gentex Corporation, under contract to Naval Air Systems Command (AIR 4.0T), designed the Advanced Helmet Vision System to provide aircrew with 24-hour, visor-projected binocular night vision and HMD capability. AHVS integrates numerous key technologies, including high brightness Light Emitting Diode (LED)-based digital light engines, advanced lightweight optical materials and manufacturing processes, and innovations in graphics processing software. This paper reviews the current status of miniaturization and integration with the latest two-part Gentex modular helmet, highlights the lessons learned from previous AHVS phases, and discusses plans for qualification and flight testing.

  2. A smart telerobotic system driven by monocular vision

    NASA Technical Reports Server (NTRS)

    Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.

    1994-01-01

    A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.
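
    Recovering the pose of a known solid from a single camera view is commonly cast as a Perspective-n-Point problem; OpenCV's solvePnP is used below as an illustrative stand-in for the system's own model-based method, with hypothetical point correspondences and camera intrinsics.

      import numpy as np
      import cv2

      object_pts = np.array([[0, 0, 0], [0.1, 0, 0], [0.1, 0.1, 0],
                             [0, 0.1, 0], [0, 0, 0.1], [0.1, 0, 0.1]],
                            dtype=np.float64)        # known target geometry (m)
      image_pts = np.array([[320, 240], [400, 238], [402, 170],
                            [322, 168], [318, 300], [398, 298]],
                           dtype=np.float64)         # hypothetical detections (px)
      K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # intrinsics

      ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
      if ok:
          print("target pose relative to the camera:", tvec.ravel())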

  3. Robust active binocular vision through intrinsically motivated learning.

    PubMed

    Lonini, Luca; Forestier, Sébastien; Teulière, Céline; Zhao, Yu; Shi, Bertram E; Triesch, Jochen

    2013-01-01

    The efficient coding hypothesis posits that sensory systems of animals strive to encode sensory signals efficiently by taking into account the redundancies in them. This principle has been very successful in explaining response properties of visual sensory neurons as adaptations to the statistics of natural images. Recently, we have begun to extend the efficient coding hypothesis to active perception through a form of intrinsically motivated learning: a sensory model learns an efficient code for the sensory signals while a reinforcement learner generates movements of the sense organs to improve the encoding of the signals. To this end, it receives an intrinsically generated reinforcement signal indicating how well the sensory model encodes the data. This approach has been tested in the context of binocular vision, leading to the autonomous development of disparity tuning and vergence control. Here we systematically investigate the robustness of the new approach in the context of a binocular vision system implemented on a robot. Robustness is an important aspect that reflects the ability of the system to deal with unmodeled disturbances or events, such as insults to the system that displace the stereo cameras. To demonstrate the robustness of our method and its ability to self-calibrate, we introduce various perturbations and test if and how the system recovers from them. We find that (1) the system can fully recover from a perturbation that can be compensated through the system's motor degrees of freedom, (2) performance degrades gracefully if the system cannot use its motor degrees of freedom to compensate for the perturbation, and (3) recovery from a perturbation is improved if both the sensory encoding and the behavior policy can adapt to the perturbation. Overall, this work demonstrates that our intrinsically motivated learning approach for efficient coding in active perception gives rise to a self-calibrating perceptual system of high robustness. PMID:24223552

  4. Robust active binocular vision through intrinsically motivated learning.

    PubMed

    Lonini, Luca; Forestier, Sébastien; Teulière, Céline; Zhao, Yu; Shi, Bertram E; Triesch, Jochen

    2013-01-01

    The efficient coding hypothesis posits that sensory systems of animals strive to encode sensory signals efficiently by taking into account the redundancies in them. This principle has been very successful in explaining response properties of visual sensory neurons as adaptations to the statistics of natural images. Recently, we have begun to extend the efficient coding hypothesis to active perception through a form of intrinsically motivated learning: a sensory model learns an efficient code for the sensory signals while a reinforcement learner generates movements of the sense organs to improve the encoding of the signals. To this end, it receives an intrinsically generated reinforcement signal indicating how well the sensory model encodes the data. This approach has been tested in the context of binocular vision, leading to the autonomous development of disparity tuning and vergence control. Here we systematically investigate the robustness of the new approach in the context of a binocular vision system implemented on a robot. Robustness is an important aspect that reflects the ability of the system to deal with unmodeled disturbances or events, such as insults to the system that displace the stereo cameras. To demonstrate the robustness of our method and its ability to self-calibrate, we introduce various perturbations and test if and how the system recovers from them. We find that (1) the system can fully recover from a perturbation that can be compensated through the system's motor degrees of freedom, (2) performance degrades gracefully if the system cannot use its motor degrees of freedom to compensate for the perturbation, and (3) recovery from a perturbation is improved if both the sensory encoding and the behavior policy can adapt to the perturbation. Overall, this work demonstrates that our intrinsically motivated learning approach for efficient coding in active perception gives rise to a self-calibrating perceptual system of high robustness.

  5. Synthetic vision as an integrated element of an enhanced vision system

    NASA Astrophysics Data System (ADS)

    Jennings, Chad W.; Alter, Keith W.; Barrows, Andrew K.; Bernier, Ken L.; Guell, Jeff J.

    2002-07-01

    Enhanced Vision Systems (EVS) and Synthetic Vision Systems (SVS) have the potential to allow vehicle operators to benefit from the best that various image sources have to offer. The ability to see in all directions, even in reduced visibility conditions, offers considerable benefits for operational effectiveness and safety. Nav3D and The Boeing Company are conducting development work on an Enhanced Vision System with an integrated Synthetic Vision System. The EVS consists of several imaging sensors that are digitally fused together to give a pilot a better view of the outside world even in challenging visual conditions. The EVS is limited, however, to providing imagery within the viewing frustum of the imaging sensors. The SVS can provide a rendered image of an a priori database in any direction that the pilot chooses to look, and thus can provide information on terrain and flight path that is outside the purview of the EVS. Design concepts of the system will be discussed. In addition, the ground and flight testing of the system will be described.

  6. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing for a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues are addressed in parallel architectures and parallel algorithms for integrated vision systems.

  7. The Tactile Vision Substitution System: Applications in Education and Employment

    ERIC Educational Resources Information Center

    Scadden, Lawrence A.

    1974-01-01

    The Tactile Vision Substitution System converts the visual image from a narrow-angle television camera to a tactual image on a 5-inch square, 100-point display of vibrators placed against the abdomen of the blind person. (Author)

  8. Building Artificial Vision Systems with Machine Learning

    SciTech Connect

    LeCun, Yann

    2011-02-23

    Three questions pose the next challenge for Artificial Intelligence (AI), robotics, and neuroscience. How do we learn perception (e.g. vision)? How do we learn representations of the perceptual world? How do we learn visual categories from just a few examples?

  9. Human Factors And Safety Considerations Of Night Vision Systems Flight

    NASA Astrophysics Data System (ADS)

    Verona, Robert W.; Rash, Clarence E.

    1989-03-01

    Military aviation night vision systems greatly enhance the capability to operate during periods of low illumination. After flying with night vision devices, most aviators are apprehensive about returning to unaided night flight. Current night vision imaging devices allow aviators to fly during ambient light conditions which would be extremely dangerous, if not impossible, with unaided vision. However, the visual input afforded by these devices does not approach that experienced using the unencumbered, unaided eye during periods of daylight illumination. Many visual parameters, e.g., acuity, field-of-view, depth perception, etc., are compromised when night vision devices are used. The inherent characteristics of image-intensification-based sensors introduce new problems associated with the interpretation of visual information based on different spatial and spectral content from that of unaided vision. In addition, the mounting of these devices onto the helmet is accompanied by concerns of fatigue resulting from increased head-supported weight and a shift in center-of-gravity. All of these concerns have produced numerous human factors and safety issues relating to the use of night vision systems. These issues are identified and discussed in terms of their possible effects on user performance and safety.

  10. Three-dimensional imaging system combining vision and ultrasonics

    NASA Astrophysics Data System (ADS)

    Wykes, Catherine; Chou, Tsung N.

    1994-11-01

    Vision systems are being applied to a wide range of inspection problems in manufacturing. In 2D systems, a single video camera captures an image of the object, and application of suitable image processing techniques enables information about dimension, shape and the presence of features and flaws to be extracted from the image. This can be used to recognize, inspect and/or measure the part. 3D measurement is also possible with vision systems but requires the use of either two or more cameras, or structured lighting (i.e. stripes or grids), and the processing of such images is necessarily considerably more complex, and therefore slower and more expensive, than 2D imaging. Ultrasonic imaging is widely used in medical and NDT applications to give 3D images; in these systems, the ultrasound is propagated into a liquid or a solid. Imaging using air-borne ultrasound is much less advanced, mainly due to the limited availability of suitable sensors. Unique 2D ultrasonic ranging systems using in-house built phased arrays have been developed in Nottingham which enable both the range and bearing of targets to be measured. The ultrasonic/vision system will combine the excellent lateral resolution of a vision system with the straightforward range acquisition of the ultrasonic system. The system is expected to extend the use of vision systems in automation, particularly in the area of automated assembly where it can eliminate the need for expensive jigs and orienting part-feeders.
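
    The complementary geometry can be illustrated in a few lines (a hypothetical fusion rule for exposition, not the Nottingham system's algorithm): the ultrasonic sensor supplies depth from range and bearing, while the camera's fine angular resolution refines the lateral position.

        import numpy as np

        def fuse_ultrasound_vision(range_m, bearing_rad, pixel_x, focal_px, cx):
            # Depth from the coarse ultrasonic range/bearing measurement.
            z = range_m * np.cos(bearing_rad)
            # Fine lateral angle from the camera (pinhole model assumed).
            cam_angle = np.arctan((pixel_x - cx) / focal_px)
            # Lateral position combines ultrasonic depth with visual angle.
            x = z * np.tan(cam_angle)
            return x, z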

  11. Active vision and image/video understanding systems built upon network-symbolic models for perception-based navigation of mobile robots in real-world environments

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2004-12-01

    To be completely successful, robots need to have reliable perceptual systems that are similar to human vision. It is hard to use geometric operations for processing of natural images. Instead, the brain builds a relational network-symbolic structure of the visual scene, using different clues to set up the relational order of surfaces and objects with respect to the observer and to each other. Feature, symbol, and predicate are equivalent in the biologically inspired Network-Symbolic systems. A linking mechanism binds these features/symbols into coherent structures, and the image is converted from a "raster" into a "vector" representation. View-based object recognition is a hard problem for traditional algorithms that directly match a primary view of an object to a model. In Network-Symbolic Models, the derived structure, not the primary view, is the subject for recognition. Such recognition is not affected by local changes and appearances of the object as seen from a set of similar views. Once built, the model of the visual scene changes more slowly than the local information in the visual buffer. This allows for disambiguating visual information and effective control of actions and navigation via incremental relational changes in the visual buffer. Network-Symbolic models can be seamlessly integrated into the NIST 4D/RCS architecture and better interpret images/video for situation awareness, target recognition, navigation and actions.

  12. Latency in Visionic Systems: Test Methods and Requirements

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Arthur, J. J., III; Williams, Steven P.; Kramer, Lynda J.

    2005-01-01

    A visionics device creates a pictorial representation of the external scene for the pilot. The ultimate objective of these systems may be to electronically generate a form of Visual Meteorological Conditions (VMC) to eliminate weather or time-of-day as an operational constraint and provide enhancement over actual visual conditions where eye-limiting resolution may be a limiting factor. Empirical evidence has shown that the total system delays or latencies, including those of the imaging sensors and display systems, can critically degrade their utility, usability, and acceptability. Definitions and measurement techniques are offered herein as common test and evaluation methods for latency testing in visionics device applications. Based upon available data, very different latency requirements are indicated based upon the piloting task, the role in which the visionics device is used in this task, and the characteristics of the visionics cockpit display device including its resolution, field-of-regard, and field-of-view. The least stringent latency requirements will involve Head-Up Display (HUD) applications, where the visionics imagery provides situational information as a supplement to symbology guidance and command information. Conversely, the visionics system latency requirement for a large field-of-view Head-Worn Display application, providing a Virtual-VMC capability from which the pilot will derive visual guidance, will be the most stringent, having a value as low as 20 msec.
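
    One common end-to-end test can be sketched as follows (a hypothetical harness, not the measurement techniques defined in the paper; the stimulus, frame tap, and detector are placeholders). A physical stimulus is fired in the sensor's field of view and the elapsed time until it appears on the display path is recorded.

        import time

        def measure_latency(fire_stimulus, frames, detect, timeout=1.0):
            # fire_stimulus(): triggers, e.g., an LED in front of the sensor.
            # frames: iterator over frames tapped from the display path.
            # detect(frame): returns True once the stimulus is visible.
            t0 = time.monotonic()
            fire_stimulus()
            for frame in frames:
                if detect(frame):
                    return time.monotonic() - t0  # end-to-end delay in seconds
                if time.monotonic() - t0 > timeout:
                    return None                   # stimulus never detected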

  13. Machine vision systems using machine learning for industrial product inspection

    NASA Astrophysics Data System (ADS)

    Lu, Yi; Chen, Tie Q.; Chen, Jie; Zhang, Jian; Tisler, Anthony

    2002-02-01

    Machine vision inspection requires efficient processing time and accurate results. In this paper, we present a machine vision inspection architecture, SMV (Smart Machine Vision). SMV decomposes a machine vision inspection problem into two stages: Learning Inspection Features (LIF) and On-Line Inspection (OLI). The LIF is designed to learn visual inspection features from design data and/or from inspection products. During the OLI stage, the inspection system uses the knowledge learnt by the LIF component to inspect the visual features of products. In this paper we present two machine vision inspection systems developed under the SMV architecture for two different types of products: Printed Circuit Board (PCB) and Vacuum Fluorescent Display (VFD) boards. In the VFD board inspection system, the LIF component learns inspection features from a VFD board and its display patterns. In the PCB board inspection system, the LIF learns the inspection features from the CAD file of a PCB board. In both systems, the LIF component also incorporates interactive learning to make the inspection system more powerful and efficient. The VFD system has been deployed successfully in three different manufacturing companies, and the PCB inspection system is in the process of being deployed in a manufacturing plant.

  14. A Vision for Systems Engineering Applied to Wind Energy (Presentation)

    SciTech Connect

    Felker, F.; Dykes, K.

    2015-01-01

    This presentation was given at the Third Wind Energy Systems Engineering Workshop on January 14, 2015. Topics covered include the importance of systems engineering, a vision for systems engineering as applied to wind energy, and application of systems engineering approaches to wind energy research and development.

  15. Challenges of Embedded Computer Vision in Automotive Safety Systems

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Dhua, Arnab S.; Kiselewich, Stephen J.; Bauson, William A.

    Vision-based automotive safety systems have received considerable attention over the past decade. Such systems have advantages compared to those based on other types of sensors such as radar, because of the availability of low-cost and high-resolution cameras and the abundant information contained in video images. However, various technical challenges exist in such systems. One of the most prominent challenges lies in running sophisticated computer vision algorithms on low-cost embedded systems at frame rate. This chapter discusses these challenges through vehicle detection and classification in a collision warning system.

  16. Remote-controlled vision-guided mobile robot system

    NASA Astrophysics Data System (ADS)

    Ande, Raymond; Samu, Tayib; Hall, Ernest L.

    1997-09-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of the remote-controlled emergency stop and vision systems for an autonomous mobile robot. The remote control provides human supervision and emergency stop capabilities for the autonomous vehicle. The vision guidance provides automatic operation. A mobile robot test-bed has been constructed using a golf cart base. The mobile robot (Bearcat) was built for the Association for Unmanned Vehicle Systems (AUVS) 1997 competition. The mobile robot has full speed control with guidance provided by a vision system and an obstacle avoidance system using ultrasonic sensor systems. Vision guidance is accomplished using two CCD cameras with zoom lenses. The vision data is processed by a high-speed tracking device, which communicates the X, Y coordinates of blobs along the lane markers to the computer. The system also has three emergency stop switches and a remote-controlled emergency stop switch that can disable the traction motor and set the brake. Testing of these systems has been done in the lab as well as on an outside test track, with positive results showing that at five mph the vehicle can follow a line and at the same time avoid obstacles.

  17. TVS: An Environment For Building Knowledge-Based Vision Systems

    NASA Astrophysics Data System (ADS)

    Weymouth, Terry E.; Amini, Amir A.; Tehrani, Saeid

    1989-03-01

    Advances in the field of knowledge-guided computer vision require the development of large scale projects and experimentation with them. One factor which impedes such development is the lack of software environments which combine standard image processing and graphics abilities with the ability to perform symbolic processing. In this paper, we describe a software environment that assists in the development of knowledge-based computer vision projects. We have built, upon Common LISP and C, a software development environment which combines standard image processing tools and a standard blackboard-based system, with the flexibility of the LISP programming environment. This environment has been used to develop research projects in knowledge-based computer vision and dynamic vision for robot navigation.

  18. Interactive MRI Segmentation with Controlled Active Vision

    PubMed Central

    Karasev, Peter; Kolesov, Ivan; Chudy, Karol; Muller, Grant; Xerogeanes, John; Tannenbaum, Allen

    2013-01-01

    Partitioning Magnetic-Resonance-Imaging (MRI) data into salient anatomic structures is a problem in medical imaging that has continued to elude fully automated solutions. Implicit functions are a common way to model the boundaries between structures and are amenable to control-theoretic methods. In this paper, the goal of enabling a human to obtain accurate segmentations in a short amount of time and with little effort is transformed into a control synthesis problem. Perturbing the state and dynamics of an implicit function’s driving partial differential equation via the accumulated user inputs and an observer-like system leads to desirable closed-loop behavior. Using a Lyapunov control design, a balance is established between the influence of a data-driven gradient flow and the human’s input over time. Automatic segmentation is thus smoothly coupled with interactivity. An application of the mathematical methods to orthopedic segmentation is shown, demonstrating the expected transient and steady state behavior of the implicit segmentation function and auxiliary observer. PMID:24584213

  19. Multiple-channel Streaming Delivery for Omnidirectional Vision System

    NASA Astrophysics Data System (ADS)

    Iwai, Yoshio; Nagahara, Hajime; Yachida, Masahiko

    An omnidirectional vision system is an imaging system that can capture a surrounding image in all directions by using a hyperbolic mirror and a conventional CCD camera. This paper proposes a streaming server that can efficiently transfer movies captured by an omnidirectional vision system through the Internet. The proposed system uses multiple channels to deliver multiple movies synchronously. Through this method, the system enables clients to view different directions of the omnidirectional movies and also supports changing the viewing area during playback. Our evaluation experiments show that our proposed streaming server can effectively deliver multiple movies via multiple channels.
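
    The unwrapping such a server performs before delivery can be sketched as a polar lookup (simplified geometry; a real system would use the calibrated hyperbolic mirror model rather than plain polar coordinates):

        import numpy as np

        def unwrap_panorama(omni, cx, cy, r_in, r_out, width, height):
            # Sample the donut-shaped mirror image along rays from the
            # center; each panorama column is one viewing direction.
            theta = np.linspace(0.0, 2.0 * np.pi, width, endpoint=False)
            radius = np.linspace(r_in, r_out, height)
            rr, tt = np.meshgrid(radius, theta, indexing="ij")
            xs = (cx + rr * np.cos(tt)).astype(int).clip(0, omni.shape[1] - 1)
            ys = (cy + rr * np.sin(tt)).astype(int).clip(0, omni.shape[0] - 1)
            return omni[ys, xs]   # (height, width) panorama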

  20. The influence of active vision on the exoskeleton of intelligent agents

    NASA Astrophysics Data System (ADS)

    Smith, Patrice; Terry, Theodore B.

    2016-04-01

    Chameleonization occurs when a self-learning autonomous mobile system's (SLAMR) active vision scans the surface on which it is perched, causing the exoskeleton to change colors and exhibit a chameleon effect. Intelligent agents having the ability to adapt to their environment and exhibit key survivability characteristics of their environments would largely be due in part to the use of active vision. Active vision would allow the intelligent agent to scan its environment and adapt as needed in order to avoid detection. The SLAMR system would have an exoskeleton that would change based on the surface it was perched on; this is known as the "chameleon effect," not in the common sense of the term, but in the techno-bio-inspired meaning addressed in our previous paper. Active vision, utilizing stereoscopic color sensing functionality, would enable the intelligent agent to scan an object within its close proximity, determine the color scheme, and match it, allowing the agent to blend with its environment. Through the use of its optical capabilities, the SLAMR system would be able to further determine its position, taking into account spatial and temporal correlation and the spatial frequency content of neighboring structures, further ensuring successful background blending. The complex visual tasks of identifying objects, using edge detection, image filtering, and feature extraction are essential for an intelligent agent to gain additional knowledge about its environmental surroundings.

  1. Musca domestica inspired machine vision system with hyperacuity

    NASA Astrophysics Data System (ADS)

    Riley, Dylan T.; Harman, William M.; Tomberlin, Eric; Barrett, Steven F.; Wilcox, Michael; Wright, Cameron H. G.

    2005-05-01

    Musca domestica, the common house fly, has a simple yet powerful and accessible vision system. Cajal indicated in 1885 that the fly's vision system is the same as in the human retina. The house fly has some intriguing vision system features such as fast, analog, parallel operation. Furthermore, it has the ability to detect movement and objects at far better resolution than predicted by photoreceptor spacing, termed hyperacuity. We are investigating the mechanisms behind these features and incorporating them into next-generation vision systems. We have developed a prototype sensor that employs a fly-inspired arrangement of photodetectors sharing a common lens. The Gaussian-shaped acceptance profile of each sensor, coupled with overlapped sensor fields of view, provides the necessary configuration for obtaining hyperacuity data. The sensor is able to detect object movement with far greater resolution than that predicted by photoreceptor spacing. We have exhaustively tested and characterized the sensor to determine its practical resolution limit. Our tests, coupled with theory from Bucklew and Saleh (1985), indicate that the limit to the hyperacuity response may only be related to target contrast. We have also implemented an array of these prototype sensors which will allow for two-dimensional position location. These high-resolution, low-contrast-capable sensors are being developed for use as a vision system for an autonomous robot and the next generation of smart wheelchairs. However, they are easily adapted for biological endoscopy, downhole monitoring in oil wells, and other applications.
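
    The hyperacuity principle reduces to a one-line estimator in the idealized case (a sketch assuming two noiseless receptors with identical Gaussian acceptance profiles; the actual sensor processing is more involved):

        import numpy as np

        def subpixel_position(r1, r2, p1, p2, sigma):
            # Two overlapping Gaussian receptors at p1 and p2 respond with
            # r_i = A * exp(-(x - p_i)**2 / (2 * sigma**2)); the log-ratio
            # varies continuously with x, so the target position is
            # recovered far more finely than the receptor spacing.
            return 0.5 * (p1 + p2 + 2.0 * sigma**2 * np.log(r1 / r2) / (p1 - p2))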

  2. A modular real-time vision system for humanoid robots

    NASA Astrophysics Data System (ADS)

    Trifan, Alina L.; Neves, António J. R.; Lau, Nuno; Cunha, Bernardo

    2012-01-01

    Robotic vision is nowadays one of the most challenging branches of robotics. In the case of a humanoid robot, a robust vision system has to provide an accurate representation of the surrounding world and to cope with all the constraints imposed by the hardware architecture and the locomotion of the robot. Usually humanoid robots have low computational capabilities that limit the complexity of the developed algorithms. Moreover, their vision system should perform in real time, therefore a compromise between complexity and processing times has to be found. This paper presents a reliable implementation of a modular vision system for a humanoid robot to be used in color-coded environments. From image acquisition, to camera calibration and object detection, the system that we propose integrates all the functionalities needed for a humanoid robot to accurately perform given tasks in color-coded environments. The main contributions of this paper are the implementation details that allow the use of the vision system in real time, even with low processing capabilities; the innovative self-calibration algorithm for the most important parameters of the camera; and its modularity, which allows its use with different robotic platforms. Experimental results have been obtained with a NAO robot produced by Aldebaran, which is currently the robotic platform used in the RoboCup Standard Platform League, as well as with a humanoid built using the Bioloid Expert Kit from Robotis. As practical examples, our vision system can be efficiently used in real time for the detection of the objects of interest for a soccer-playing robot (ball, field lines and goals) as well as for navigating through a maze with the help of color-coded clues. In the worst-case scenario, all the objects of interest in a soccer game, using a NAO robot with a single-core 500 MHz processor, are detected in less than 30 ms. Our vision system also includes an algorithm for self-calibration of the camera parameters as well.

  3. Database integrity monitoring for synthetic vision systems using machine vision and SHADE

    NASA Astrophysics Data System (ADS)

    Cooper, Eric G.; Young, Steven D.

    2005-05-01

    In an effort to increase situational awareness, the aviation industry is investigating technologies that allow pilots to visualize what is outside of the aircraft during periods of low-visibility. One of these technologies, referred to as Synthetic Vision Systems (SVS), provides the pilot with real-time computer-generated images of obstacles, terrain features, runways, and other aircraft regardless of weather conditions. To help ensure the integrity of such systems, methods of verifying the accuracy of synthetically-derived display elements using onboard remote sensing technologies are under investigation. One such method is based on a shadow detection and extraction (SHADE) algorithm that transforms computer-generated digital elevation data into a reference domain that enables direct comparison with radar measurements. This paper describes machine vision techniques for making this comparison and discusses preliminary results from application to actual flight data.

  4. Database Integrity Monitoring for Synthetic Vision Systems Using Machine Vision and SHADE

    NASA Technical Reports Server (NTRS)

    Cooper, Eric G.; Young, Steven D.

    2005-01-01

    In an effort to increase situational awareness, the aviation industry is investigating technologies that allow pilots to visualize what is outside of the aircraft during periods of low-visibility. One of these technologies, referred to as Synthetic Vision Systems (SVS), provides the pilot with real-time computer-generated images of obstacles, terrain features, runways, and other aircraft regardless of weather conditions. To help ensure the integrity of such systems, methods of verifying the accuracy of synthetically-derived display elements using onboard remote sensing technologies are under investigation. One such method is based on a shadow detection and extraction (SHADE) algorithm that transforms computer-generated digital elevation data into a reference domain that enables direct comparison with radar measurements. This paper describes machine vision techniques for making this comparison and discusses preliminary results from application to actual flight data.

  5. Technical Challenges in the Development of a NASA Synthetic Vision System Concept

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Parrish, Russell V.; Kramer, Lynda J.; Harrah, Steve; Arthur, J. J., III

    2002-01-01

    Within NASA's Aviation Safety Program, the Synthetic Vision Systems Project is developing display system concepts to improve pilot terrain/situation awareness by providing a perspective synthetic view of the outside world through an on-board database driven by precise aircraft positioning information updated via Global Positioning System-based data. This work is aimed at eliminating visibility-induced errors and low-visibility conditions as a causal factor in civil aircraft accidents, as well as replicating the operational benefits of clear-day flight operations regardless of the actual outside visibility condition. Synthetic vision research and development activities at NASA Langley Research Center are focused around a series of ground simulation and flight test experiments designed to evaluate, investigate, and assess the technology which can lead to operational and certified synthetic vision systems. The technical challenges that have been encountered and that are anticipated in this research and development activity are summarized.

  6. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.

    1991-01-01

    Research activity has shifted from computer graphics and vision systems to the broader scope of applying concepts of artificial intelligence to robotics. Specifically, the research is directed toward developing Artificial Neural Networks, Expert Systems, and Laser Imaging Techniques for Autonomous Space Robots.

  7. Human vision simulation for evaluation of enhanced and synthetic vision systems

    NASA Astrophysics Data System (ADS)

    Doll, Theodore J.; Home, Richard; Cooke, Kevin J.; Wasilewski, Anthony A.; Sheerin, David T.; Hetzler, Morris C.

    2003-09-01

    One of the key problems in developing Enhanced and Synthetic Vision Systems is evaluating their effectiveness in enhancing human visual performance. A validated simulation of human vision would provide a means of avoiding costly and time-consuming testing of human observers. We describe an image-based simulation of human visual search, detection, and identification, and efforts to further validate and refine this simulation. One of the advantages of an image-based simulation is that it can predict performance for exactly the same visual stimuli seen by human operators. This makes it possible to assess aspects of the imagery, such as particular types and amounts of background clutter and sensor distortions, that are not usually considered in non-image based models. We present two validation studies - one showing that the simulation accurately predicts human color discrimination, and a second showing that it produces probabilities of detection (Pd's) that closely match Blackwell-type human threshold data.

  8. 2020 Vision for Tank Waste Cleanup (One System Integration) - 12506

    SciTech Connect

    Harp, Benton; Charboneau, Stacy; Olds, Erik

    2012-07-01

    The mission of the Department of Energy's Office of River Protection (ORP) is to safely retrieve and treat the 56 million gallons of Hanford's tank waste and close the Tank Farms to protect the Columbia River. The millions of gallons of waste are a by-product of decades of plutonium production. After irradiated fuel rods were taken from the nuclear reactors to the processing facilities at Hanford they were exposed to a series of chemicals designed to dissolve away the rod, which enabled workers to retrieve the plutonium. Once those chemicals were exposed to the fuel rods they became radioactive and extremely hot. They also couldn't be used in this process more than once. Because the chemicals are caustic and extremely hazardous to humans and the environment, underground storage tanks were built to hold these chemicals until a more permanent solution could be found. The Cleanup of Hanford's 56 million gallons of radioactive and chemical waste stored in 177 large underground tanks represents the Department's largest and most complex environmental remediation project. Sixty percent by volume of the nation's high-level radioactive waste is stored in the underground tanks grouped into 18 'tank farms' on Hanford's central plateau. Hanford's mission to safely remove, treat and dispose of this waste includes the construction of a first-of-its-kind Waste Treatment Plant (WTP), ongoing retrieval of waste from single-shell tanks, and building or upgrading the waste feed delivery infrastructure that will deliver the waste to and support operations of the WTP beginning in 2019. Our discussion of the 2020 Vision for Hanford tank waste cleanup will address the significant progress made to date and ongoing activities to manage the operations of the tank farms and WTP as a single system capable of retrieving, delivering, treating and disposing Hanford's tank waste. The initiation of hot operations and subsequent full operations of the WTP are not only dependent upon the successful

  9. Synthetic vision system flight test results and lessons learned

    NASA Technical Reports Server (NTRS)

    Radke, Jeffrey

    1993-01-01

    Honeywell Systems and Research Center developed and demonstrated an active 35 GHz radar imaging system as part of the FAA/USAF/Industry sponsored Synthetic Vision System Technology Demonstration (SVSTD) Program. The objectives of this presentation are to provide a general overview of flight test results, a system-level perspective that encompasses the efforts of the SVSTD and Augmented Visual Display (AVID) programs, and, more importantly, to provide the AVID workshop participants with Honeywell's perspective on the lessons that were learned from the SVS flight tests. One objective of the SVSTD program was to explore several known system issues concerning radar imaging technology. The program ultimately resolved some of these issues, left others open, and in fact created several new concerns. In some instances, the interested community has drawn improper conclusions from the program by globally attributing implementation-specific issues to radar imaging technology in general. The motivation for this presentation is therefore to provide AVID researchers with a better understanding of the issues that truly remain open, and to identify the perceived issues that are either resolved or were specific to Honeywell's implementation.

  10. A Laser-Based Vision System for Weld Quality Inspection

    PubMed Central

    Huang, Wei; Kovacevic, Radovan

    2011-01-01

    Welding is a very complex process in which the final weld quality can be affected by many process parameters. In order to inspect the weld quality and detect the presence of various weld defects, different methods and systems are studied and developed. In this paper, a laser-based vision system is developed for non-destructive weld quality inspection. The vision sensor is designed based on the principle of laser triangulation. By processing the images acquired from the vision sensor, the geometrical features of the weld can be obtained. Through the visual analysis of the acquired 3D profiles of the weld, the presence as well as the positions and sizes of the weld defects can be accurately identified and, therefore, non-destructive weld quality inspection can be achieved. PMID:22344308

  11. Research on vision control system for inverted pendulum

    NASA Astrophysics Data System (ADS)

    Jin, Xiaolin; Bian, Yongming; Jiang, Jia; Li, Anhu; Jiang, Xuchun; Zhao, Fangwei

    2010-10-01

    This paper focuses on the study of and experiments with a vision control system for an inverted pendulum. To solve some key technical problems, the hardware platform and the software flow of the control system have been designed. The whole control system is composed of a vision module and a motion control module. The vision module is based on a CCD camera; the motion control module is based on a motion control card, servo driver and servo motor; and the software is based on LabVIEW. The main research contents and contributions of this paper are summarized as follows: (1) Analyze the functional requirements of the vision control system for the inverted pendulum, developing the hardware platform and planning the overall arrangement of the system; (2) Design the image processing flow and the recognition and tracking process for the moving objects. The accurate position of the pendulum can be obtained from the image through this flow, which includes image pretreatment, image segmentation and image post-processing; (3) Design the software structure of the control system and write the program code. It is convenient to update and maintain the control software due to the modularity of the system. Some key technical problems in the software have been solved, so the flexibility and reliability of the control system are improved; (4) Build the experimental platform and set the key parameters of the vision control system through experiments. It is proved that the chosen scheme of this paper is feasible. The experiment provides the basis for the development and application of the whole control system.
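
    The pendulum-locating step in such a flow can be sketched as threshold-and-centroid (a hypothetical bright marker and fixed threshold are assumed; the paper's actual segmentation is not specified here):

        import numpy as np

        def pendulum_position(gray, threshold=200):
            # Segmentation: keep the bright marker pixels.
            ys, xs = np.nonzero(gray > threshold)
            if xs.size == 0:
                return None               # marker not found in this frame
            # Post-processing: the centroid gives a sub-pixel position.
            return xs.mean(), ys.mean()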

  12. Development of a machine vision guidance system for automated assembly of space structures

    NASA Technical Reports Server (NTRS)

    Cooper, Eric G.; Sydow, P. Daniel

    1992-01-01

    The topics are presented in viewgraph form and include: automated structural assembly robot vision; machine vision requirements; vision targets and hardware; reflective efficiency; target identification; pose estimation algorithms; triangle constraints; truss node with joint receptacle targets; end-effector mounted camera and light assembly; vision system results from optical bench tests; and future work.

  13. Vision system for combustion analysis and diagnosis in gas turbines

    NASA Astrophysics Data System (ADS)

    Sassi, Giancarlo; Corbani, Franco; Graziadio, Mario; Novelli, Giuliano

    1995-09-01

    This paper describes the flame vision system developed by CISE, on behalf of the Thermal Research Division of ENEL, allowing a non-intrusive analysis and a probabilistic classification of the combustion process inside gas turbines. The system is composed of a vision probe, designed for working in hostile environments and installed inside the combustion chamber; an optical element housing a video camera; and a personal computer equipped with a frame grabber board. The main goal of the system is flame classification, in order to evaluate the occurrence of deviations from optimal combustion conditions and to generate warning messages for power plant personnel. This is obtained by comparing some geometrical features (barycenter, inertia axes, area, orientation, etc.) extracted from the flame area of images with templates determined during the training stage and classifying them in a probabilistic way using a Bayesian algorithm. The vision system, now at the test stage, is intended to be a useful tool for combustion monitoring, gas turbine set-up, and periodic surveys, and for collecting information concerning burner efficiency and reliability; moreover, the vision probe's flexibility allows other applications such as particle image velocimetry and spectral and thermal analysis.
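
    The geometrical features listed above follow directly from image moments; a compact sketch (a binary flame mask is assumed, and the Bayesian classification stage is omitted):

        import numpy as np

        def flame_features(mask):
            # Area, barycenter, and orientation of the principal inertia
            # axis, computed from zeroth-, first-, and second-order moments.
            ys, xs = np.nonzero(mask)
            area = xs.size
            cx, cy = xs.mean(), ys.mean()
            mu20 = np.mean((xs - cx) ** 2)
            mu02 = np.mean((ys - cy) ** 2)
            mu11 = np.mean((xs - cx) * (ys - cy))
            orientation = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
            return area, (cx, cy), orientation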

  14. 75 FR 17202 - Eighth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-05

    ... Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation Administration (FAA.../Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing this notice to advise the public of a... Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems...

  15. Vision based assistive technology for people with dementia performing activities of daily living (ADLs): an overview

    NASA Astrophysics Data System (ADS)

    As'ari, M. A.; Sheikh, U. U.

    2012-04-01

    The rapid development of intelligent assistive technology for replacing a human caregiver in assisting people with dementia performing activities of daily living (ADLs) promises in the reduction of care cost especially in training and hiring human caregiver. The main problem however, is the various kinds of sensing agents used in such system and is dependent on the intent (types of ADLs) and environment where the activity is performed. In this paper on overview of the potential of computer vision based sensing agent in assistive system and how it can be generalized and be invariant to various kind of ADLs and environment. We find that there exists a gap from the existing vision based human action recognition method in designing such system due to cognitive and physical impairment of people with dementia.

  16. Characterization of a multi-user indoor positioning system based on low cost depth vision (Kinect) for monitoring human activity in a smart home.

    PubMed

    Sevrin, Loïc; Noury, Norbert; Abouchi, Nacer; Jumel, Fabrice; Massot, Bertrand; Saraydaryan, Jacques

    2015-01-01

    An increasing number of systems use indoor positioning for many scenarios such as asset tracking, health care, games, manufacturing, logistics, shopping, and security. Many technologies are available and the use of depth cameras is becoming more and more attractive as this kind of device becomes affordable and easy to handle. This paper contributes to the effort of creating an indoor positioning system based on low cost depth cameras (Kinect). A method is proposed to optimize the calibration of the depth cameras, to describe the multi-camera data fusion and to specify a global positioning projection to maintain the compatibility with outdoor positioning systems. The monitoring of the people trajectories at home is intended for the early detection of a shift in daily activities which highlights disabilities and loss of autonomy. This system is meant to improve homecare health management at home for a better end of life at a sustainable cost for the community. PMID:26737415

  18. Object tracking with stereo vision

    NASA Technical Reports Server (NTRS)

    Huber, Eric

    1994-01-01

    A real-time active stereo vision system incorporating gaze control and task directed vision is described. Emphasis is placed on object tracking and object size and shape determination. Techniques include motion-centroid tracking, depth tracking, and contour tracking.

  19. Configuration assistant for versatile vision-based inspection systems

    NASA Astrophysics Data System (ADS)

    Huesser, Olivier; Huegli, Heinz

    2001-01-01

    Nowadays, vision-based inspection systems are present in many stages of the industrial manufacturing process. Their versatility, which permits us to accommodate a broad range of inspection requirements, is, however, limited by the time consuming system setup performed at each production change. This work aims at providing a configuration assistant that helps to speed up this system setup, considering the peculiarities of industrial vision systems. The pursued principle, which is to maximize the discriminating power of the features involved in the inspection decision, leads to an optimization problem based on a high-dimensional objective function. Several objective functions based on various metrics are proposed, their optimization being performed with the help of various search heuristics such as genetic methods and simulated annealing methods. The experimental results obtained with an industrial inspection system are presented. They show the effectiveness of the presented approach, and validate the configuration assistant as well.

  20. Development and testing of the EVS 2000 enhanced vision system

    NASA Astrophysics Data System (ADS)

    Way, Scott P.; Kerr, Richard; Imamura, Joe J.; Arnoldy, Dan; Zeylmaker, Richard; Zuro, Greg

    2003-09-01

    An effective enhanced vision system must operate over a broad spectral range in order to offer a pilot an optimized scene that includes runway background as well as airport lighting and aircraft operations. The large dynamic range of intensities of these images is best handled with separate imaging sensors. The EVS 2000 is a patented dual-band Infrared Enhanced Vision System (EVS) utilizing image fusion concepts to provide a single image from uncooled infrared imagers in both the LWIR and SWIR. The system is designed to provide commercial and corporate airline pilots with improved situational awareness at night and in degraded weather conditions. A prototype of this system was recently fabricated and flown on the Boeing Advanced Technology Demonstrator 737-900 aircraft. This paper will discuss the current EVS 2000 concept, show results taken from the Boeing Advanced Technology Demonstrator program, and discuss future plans for EVS systems.

  1. Head-aimed vision system improves tele-operated mobility

    NASA Astrophysics Data System (ADS)

    Massey, Kent

    2004-12-01

    A head-aimed vision system greatly improves the situational awareness and decision speed for tele-operation of mobile robots. With head-aimed vision, the tele-operator wears a head-mounted display and a small three-axis head-position measuring device. Wherever the operator looks, the remote sensing system "looks". When the system is properly designed, the operator's occipital lobes are "fooled" into believing that the operator is actually on the remote robot. The result is at least a doubling of situational awareness, threat identification speed, and target tracking ability. Proper system design must take into account precisely matched fields of view, optical gain, and latency below 100 milliseconds. When properly designed, a head-aimed system does not cause nausea, even with prolonged use.

  2. Enhanced vision systems: results of simulation and operational tests

    NASA Astrophysics Data System (ADS)

    Hecker, Peter; Doehler, Hans-Ullrich

    1998-07-01

    Today's aircrews have to handle more and more complex situations. The most critical tasks in the field of civil aviation are landing approaches and taxiing. Especially under bad weather conditions, the crew has to handle a tremendous workload. Therefore DLR's Institute of Flight Guidance has developed a concept for an enhanced vision system (EVS), which increases the performance and safety of the aircrew and provides comprehensive situational awareness. In previous contributions some elements of this concept have been presented, e.g., the 'Simulation of Imaging Radar for Obstacle Detection and Enhanced Vision' by Doehler and Bollmeyer, 1996. The present paper gives an overview of DLR's enhanced vision concept and research approach, which consists of two main components: simulation and experimental evaluation. In a first step, the simulation environment for enhanced vision research with a pilot-in-the-loop is introduced. An existing fixed-base flight simulator is supplemented by real-time simulations of imaging sensors, i.e. imaging radar and infrared. By applying methods of data fusion, an enhanced vision display is generated combining different levels of information, such as terrain model data, processed images acquired by sensors, aircraft state vectors and data transmitted via datalink. The second part of this contribution presents some experimental results. In cooperation with Daimler Benz Aerospace Sensorsystems Ulm, a test van and a test aircraft were equipped with a prototype of an imaging millimeter-wave radar. This sophisticated HiVision radar is up to now one of the most promising sensors for all-weather operations. Images acquired by this sensor are shown, as well as results of data fusion processes based on digital terrain models. The contribution is concluded by a short video presentation.

  3. 77 FR 16890 - Eighteenth Meeting: RTCA Special Committee 213, Enhanced Flight Visions Systems/Synthetic Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-22

    ... Federal Aviation Administration Eighteenth Meeting: RTCA Special Committee 213, Enhanced Flight Visions... of Transportation (DOT). ACTION: Notice of meeting RTCA Special Committee 213, Enhanced Flight... public of the eighteenth meeting of RTCA Special Committee 213, Enhanced Flight Visions...

  4. Design and control of active vision based mechanisms for intelligent robots

    NASA Technical Reports Server (NTRS)

    Wu, Liwei; Marefat, Michael M.

    1994-01-01

    In this paper, we propose a design for an active vision system for intelligent robot applications. The system has the degrees of freedom of pan, tilt, vergence, camera height adjustment, and baseline adjustment, with a hierarchical control system structure. Based on this vision system, we discuss two problems involved in the binocular gaze stabilization process: fixation point selection and vergence disparity extraction. A hierarchical approach to determining the point of fixation from potential gaze targets, using an evaluation function representing human visual responses to outside stimuli, is suggested. We also characterize different visual tasks in the two cameras for vergence control purposes, and a phase-based method based on binarized images to extract vergence disparity for vergence control is presented. A control algorithm for vergence is discussed.
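
    A standard phase-correlation estimate conveys the phase-based idea (a generic stand-in; the paper's binarized-image variant differs in detail): the phase of the cross-power spectrum encodes the horizontal shift between the two views.

        import numpy as np

        def vergence_disparity(left_row, right_row):
            # Normalized cross-power spectrum of two scanlines; its inverse
            # transform peaks at the horizontal disparity.
            n = len(left_row)
            cross = np.fft.fft(left_row) * np.conj(np.fft.fft(right_row))
            corr = np.fft.ifft(cross / (np.abs(cross) + 1e-9)).real
            shift = int(np.argmax(corr))
            return shift - n if shift > n // 2 else shift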

  5. Crew and Display Concepts Evaluation for Synthetic / Enhanced Vision Systems

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Kramer, Lynda J.; Prinzel, Lawrence J., III

    2006-01-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that strive to eliminate low-visibility conditions as a causal factor in civil aircraft accidents and replicate the operational benefits of clear-day flight operations, regardless of the actual outside visibility condition. Enhanced Vision System (EVS) technologies are analogous and complementary in many respects to SVS, with the principal difference being that EVS is an imaging sensor presentation, as opposed to a database-derived image. The use of EVS in civil aircraft is projected to increase rapidly, as the Federal Aviation Administration recently changed the aircraft operating rules under Part 91, revising the flight visibility requirements for conducting operations to civil airports. Operators conducting straight-in instrument approach procedures may now operate below the published approach minimums when using an approved EVS that shows the required visual references on the pilot's Head-Up Display. An experiment was conducted to evaluate the complementary use of SVS and EVS technologies, specifically focusing on new techniques for integration and/or fusion of synthetic and enhanced vision technologies and crew resource management while operating under the newly adopted FAA rules, which provide operating credit for EVS. Overall, the experimental data showed that significant improvements in situation awareness, without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying.

  6. A machine vision system for the calibration of digital thermometers

    NASA Astrophysics Data System (ADS)

    Vázquez-Fernández, Esteban; Dacal-Nieto, Angel; González-Jorge, Higinio; Martín, Fernando; Formella, Arno; Alvarez-Valado, Victor

    2009-06-01

    Automation is a key point in many industrial tasks such as calibration and metrology. In this context, machine vision has shown to be a useful tool for automation support, especially when there is no other option available. A system for the calibration of portable measurement devices has been developed. The system uses machine vision to obtain the numerical values shown by displays. A new approach based on human perception of digits, which works in parallel with other more classical classifiers, has been created. The results show the benefits of the system in terms of its usability and robustness, obtaining a success rate higher than 99% in display recognition. The system saves time and effort, and offers the possibility of scheduling calibration tasks without excessive attention by the laboratory technicians.
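
    Reading a digit from a display is often reduced to a segment-state lookup; a minimal sketch under that assumption (segment extraction from the image is omitted, and the segment ordering is hypothetical):

        # Segment order assumed: top, top-left, top-right, middle,
        # bottom-left, bottom-right, bottom.
        SEGMENT_CODES = {
            (1, 1, 1, 0, 1, 1, 1): 0, (0, 0, 1, 0, 0, 1, 0): 1,
            (1, 0, 1, 1, 1, 0, 1): 2, (1, 0, 1, 1, 0, 1, 1): 3,
            (0, 1, 1, 1, 0, 1, 0): 4, (1, 1, 0, 1, 0, 1, 1): 5,
            (1, 1, 0, 1, 1, 1, 1): 6, (1, 0, 1, 0, 0, 1, 0): 7,
            (1, 1, 1, 1, 1, 1, 1): 8, (1, 1, 1, 1, 0, 1, 1): 9,
        }

        def read_digit(segments_on):
            # Map the measured on/off state of the seven segments to a
            # digit; returns None for an unrecognized pattern.
            return SEGMENT_CODES.get(tuple(int(s) for s in segments_on))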

  7. Fiber optic coherent laser radar 3D vision system

    SciTech Connect

    Clark, R.B.; Gallman, P.G.; Slotwinski, A.R.; Wagner, K.; Weaver, S.; Xu, Jieping

    1996-12-31

    This CLVS will provide a substantial advance in high-speed computer vision performance to support robotic Environmental Management (EM) operations. This 3D system employs a compact fiber-optic based scanner and operates at a 128 x 128 pixel frame at one frame per second with a range resolution of 1 mm over its 1.5 meter working range. Using acousto-optic deflectors, the scanner is completely randomly addressable. This can provide live 3D monitoring for situations where it is necessary to update once per second. This can be used for decontamination and decommissioning operations in which robotic systems are altering the scene, such as in waste removal, surface scarifying, or equipment disassembly and removal. The fiber-optic coherent laser radar based system is immune to variations in lighting, color, or surface shading, which have plagued the reliability of existing 3D vision systems, while providing substantially superior range resolution.

  8. Stereo vision based hand-held laser scanning system design

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Xu, Jun; Wang, Jinming

    2011-11-01

    Although 3D scanning systems are used more and more broadly in many fields, such as computer animation, computer-aided design, and digital museums, a convenient scanning device is too expensive for most people to afford. On the other hand, imaging devices are becoming cheaper, and a stereo vision system with two video cameras costs little. In this paper, a hand-held laser scanning system is designed based on the stereo vision principle. The two video cameras are fixed together and are both calibrated in advance. The scanned object, with coded markers attached, is placed in front of the stereo system, and its position and orientation can be changed freely as the scan requires. When scanning, the operator sweeps a line laser source, projecting it onto the object. At the same time, the stereo vision system captures the projected lines and reconstructs their 3D shapes. The coded markers are used to transform the coordinate systems of points scanned from different views. Two methods are used to obtain more accurate results. One is to use NURBS curves to interpolate the sections of the laser lines to obtain accurate central points, with a thin plate spline used to approximate the central points; this yields an exact laser central line, which guarantees an accurate correspondence between the two cameras. The other is to incorporate the constraint of the laser sweep plane on the reconstructed 3D curves by a PCA (Principal Component Analysis) algorithm, which yields more accurate results. Some examples are given to verify the system.
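
    The PCA plane constraint can be sketched in a few lines (the general technique, assuming an (N, 3) array of reconstructed laser-line points; not the authors' exact code):

        import numpy as np

        def enforce_laser_plane(points):
            # The singular vector with the smallest singular value of the
            # centered points is the plane normal of the laser sweep.
            centroid = points.mean(axis=0)
            _, _, vt = np.linalg.svd(points - centroid)
            normal = vt[-1]
            # Project the points onto the fitted plane to suppress noise
            # perpendicular to the sweep plane.
            offsets = (points - centroid) @ normal
            return points - np.outer(offsets, normal)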

  9. Enhanced and synthetic vision system (ESVS) flight demonstration

    NASA Astrophysics Data System (ADS)

    Sanders-Reed, John N.; Bernier, Ken; Güell, Jeff

    2008-04-01

    Boeing has developed and flight demonstrated a distributed aperture enhanced and synthetic vision system for integrated situational awareness. The system includes 10 sensors, 2 simultaneous users with head mounted displays (one via a wireless remote link), and intelligent agents for hostile fire detection, ground moving target detection and tracking, and stationary personnel and vehicle detection. Flight demonstrations were performed in 2006 and 2007 on a MD-530 "Little Bird" helicopter.

  10. Intelligent vision system for autonomous vehicle operations

    NASA Technical Reports Server (NTRS)

    Scholl, Marija S.

    1991-01-01

    A complex optical system consisting of a 4f optical correlator with programmatic filters under the control of a digital on-board computer that operates at video rates for filter generation, storage, and management is described.

  11. Adaptive fuzzy system for 3-D vision

    NASA Technical Reports Server (NTRS)

    Mitra, Sunanda

    1993-01-01

    An adaptive fuzzy system using the concept of the Adaptive Resonance Theory (ART) type neural network architecture and incorporating fuzzy c-means (FCM) system equations for reclassification of cluster centers was developed. The Adaptive Fuzzy Leader Clustering (AFLC) architecture is a hybrid neural-fuzzy system which learns on-line in a stable and efficient manner. The system uses a control structure similar to that found in the Adaptive Resonance Theory (ART-1) network to identify the cluster centers initially. The initial classification of an input takes place in a two stage process; a simple competitive stage and a distance metric comparison stage. The cluster prototypes are then incrementally updated by relocating the centroid positions from Fuzzy c-Means (FCM) system equations for the centroids and the membership values. The operational characteristics of AFLC and the critical parameters involved in its operation are discussed. The performance of the AFLC algorithm is presented through application of the algorithm to the Anderson Iris data, and laser-luminescent fingerprint image data. The AFLC algorithm successfully classifies features extracted from real data, discrete or continuous, indicating the potential strength of this new clustering algorithm in analyzing complex data sets. The hybrid neuro-fuzzy AFLC algorithm will enhance analysis of a number of difficult recognition and control problems involved with Tethered Satellite Systems and on-orbit space shuttle attitude controller.
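
    For reference, one fuzzy c-means iteration of the kind AFLC uses to relocate centroids can be written as follows (a generic sketch with fuzzifier m; the ART-style competitive stage is omitted):

        import numpy as np

        def fcm_step(X, centers, m=2.0):
            # Membership update: u is inversely related to distance,
            # raised to 2/(m-1), then normalized to sum to 1 per sample.
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
            u = d ** (-2.0 / (m - 1.0))
            u /= u.sum(axis=1, keepdims=True)
            # Centroid relocation: weighted mean with weights u**m.
            w = u ** m
            new_centers = (w.T @ X) / w.sum(axis=0)[:, None]
            return u, new_centers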

  12. Development of a machine vision system for automated structural assembly

    NASA Technical Reports Server (NTRS)

    Sydow, P. Daniel; Cooper, Eric G.

    1992-01-01

    Research is being conducted at the LaRC to develop a telerobotic assembly system designed to construct large space truss structures. This research program was initiated within the past several years, and a ground-based test-bed was developed to evaluate and expand the state of the art. Test-bed operations currently use predetermined ('taught') points for truss structural assembly. Total dependence on the use of taught points for joint receptacle capture and strut installation is neither robust nor reliable enough for space operations. Therefore, a machine vision sensor guidance system is being developed to locate and guide the robot to a passive target mounted on the truss joint receptacle. The vision system hardware includes a miniature video camera, passive targets mounted on the joint receptacles, target illumination hardware, and an image processing system. Discrimination of the target from background clutter is accomplished through standard digital processing techniques. Once the target is identified, a pose estimation algorithm is invoked to determine the location, in three-dimensional space, of the target relative to the robot's end-effector. Preliminary test results of the vision system in the Automated Structural Assembly Laboratory with a range of lighting and background conditions indicate that it is fully capable of successfully identifying joint receptacle targets throughout the required operational range. Controlled optical bench test results indicate that the system can also provide the pose estimation accuracy to define the target position.

  13. Novel Corrosion Sensor for Vision 21 Systems

    SciTech Connect

    Heng Ban

    2005-12-01

    Advanced sensor technology is identified as a key component for advanced power systems for future energy plants that would have virtually no environmental impact. This project intends to develop a novel high-temperature corrosion sensor and subsequent measurement system for advanced power systems. Fireside corrosion is the metal loss caused by chemical reactions on surfaces exposed to the combustion environment. Such corrosion is the leading mechanism for boiler tube failures and has emerged as a significant concern for current and future energy plants due to the introduction of technologies targeting emissions reduction, efficiency improvement, or fuel/oxidant flexibility. Corrosion damage can lead to catastrophic equipment failure, explosions, and forced outages. Proper management of corrosion requires real-time indication of corrosion rate. However, short-term, on-line corrosion monitoring systems for fireside corrosion remain a technical challenge to date due to the extremely harsh combustion environment. The overall objective of this project is to develop a technology for on-line corrosion monitoring based on a new concept. This objective is to be achieved by laboratory development of the sensor and instrumentation, testing of the measurement system in a laboratory muffle furnace, and eventually testing the system in a coal-fired furnace. The initial plan for testing at the coal-fired pilot-scale furnace was replaced by testing in a power plant, because the operating conditions at the power plant are continuous and more stable. The first two-year effort was completed with the successful development of the sensor and measurement system, and successful testing in a muffle furnace. Because of the potentially high cost of sensor fabrication, a different type of sensor was used and tested in a power plant burning eastern bituminous coals. This report summarizes the experiences and results of the first two years of the three-year project, which include laboratory

  14. Novel Corrosion Sensor for Vision 21 Systems

    SciTech Connect

    Heng Ban; Bharat Soni

    2007-03-31

    Advanced sensor technology is identified as a key component for advanced power systems for future energy plants that would have virtually no environmental impact. This project intends to develop a novel high temperature corrosion sensor and subsequent measurement system for advanced power systems. Fireside corrosion is the leading mechanism for boiler tube failures and has emerged as a significant concern for current and future energy plants due to the introduction of technologies targeting emissions reduction, efficiency improvement, or fuel/oxidant flexibility. Corrosion damage can lead to catastrophic equipment failure, explosions, and forced outages. Proper management of corrosion requires real-time indication of corrosion rate. However, short-term, on-line corrosion monitoring systems for fireside corrosion remain a technical challenge to date due to the extremely harsh combustion environment. The overall goal of this project is to develop a technology for on-line fireside corrosion monitoring. This objective is achieved by the laboratory development of sensors and instrumentation, testing them in a laboratory muffle furnace, and eventually testing the system in a coal-fired furnace. This project successfully developed two types of sensors and measurement systems and successfully tested them in a laboratory muffle furnace. The capacitance sensor had a high fabrication cost and might be more appropriate in other applications. The low-cost resistance sensor was tested in a power plant burning eastern bituminous coals. The results show that the fireside corrosion measurement system can be used to determine the corrosion rate at waterwall and superheater locations. Electron microscope analysis of the corroded sensor surface provided a detailed picture of the corrosion process.

  15. Development of a distributed vision system for industrial conditions

    NASA Astrophysics Data System (ADS)

    Weiss, Michael; Schiller, Arnulf; O'Leary, Paul; Fauster, Ewald; Schalk, Peter

    2003-04-01

    This paper presents a prototype system for monitoring the quality-relevant aspects of a hot glowing wire during the rolling process. To this end, a measurement system based on machine vision and a communication framework integrating distributed measurement nodes is introduced. Machine vision is used to evaluate the wire quality parameters; an image processing algorithm, based on dual Grassmannian coordinates, is formulated that fits parallel lines by singular value decomposition. Furthermore, a communication framework is presented that implements anonymous tuplespace communication, a private TCP/IP-based network, and a consistent Java implementation of all components used. Additionally, industrial requirements such as real-time communication with IEC-61131-conformant digital I/Os (Modbus TCP/IP protocol), the implementation of a watchdog pattern, and the integration of multiple operating systems (Linux, QNX, and Windows) are outlined. The deployment of this framework to the real-world problem of the wire rolling mill is presented.
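
    To make the line-fitting step concrete, the sketch below fits a total-least-squares line through noisy edge points via singular value decomposition and estimates the wire width from two near-parallel edges. It is a minimal sketch in the spirit of the paper's approach; the dual Grassmannian formulation itself and all data values are not from the source.

      # Minimal sketch: total-least-squares line fit via SVD; the paper's
      # dual Grassmannian formulation is not reproduced here.
      import numpy as np

      def fit_line_svd(points):
          """Fit a 2D line a*x + b*y + c = 0 to an (N, 2) point array."""
          centroid = points.mean(axis=0)
          # The line normal is the right singular vector with the smallest
          # singular value of the centred point cloud.
          _, _, vt = np.linalg.svd(points - centroid)
          a, b = vt[-1]
          c = -(vt[-1] @ centroid)   # line passes through the centroid
          return a, b, c

      # Toy data: noisy samples of two roughly parallel wire edges.
      rng = np.random.default_rng(0)
      x = np.linspace(0, 100, 50)
      edge_top = np.column_stack([x, 0.02 * x + 10 + rng.normal(0, 0.1, 50)])
      edge_bot = np.column_stack([x, 0.02 * x + 20 + rng.normal(0, 0.1, 50)])

      a1, b1, c1 = fit_line_svd(edge_top)
      # Perpendicular distance from the bottom edge's centroid to the top
      # line approximates the wire width in pixels.
      cx, cy = edge_bot.mean(axis=0)
      width = abs(a1 * cx + b1 * cy + c1) / np.hypot(a1, b1)
      print("approx. wire width (px):", round(float(width), 2))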

  16. Image processing in an enhanced and synthetic vision system

    NASA Astrophysics Data System (ADS)

    Mueller, Rupert M.; Palubinskas, Gintautas; Gemperlein, Hans

    2002-07-01

    'Synthetic Vision' and 'Sensor Vision' complement each other in an ideal system for the pilot's situational awareness. To fuse these two data sets, the sensor images are first segmented by a k-means algorithm, and features are then extracted by blob analysis. These image features are compared with the features of the projected airport data using fuzzy logic in order to identify the runway in the sensor image and to improve the aircraft navigation data. This process is necessary because the input data, i.e., the position and attitude of the aircraft, are inaccurate. After the runway is identified, obstacles can be detected using the sensor image. The extracted information is presented to the pilot's display system and combined with the appropriate information from the MMW radar sensor in a subsequent fusion processor. A real-time image-processing procedure is discussed and demonstrated with IR measurements from a FLIR system during landing approaches.
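
    The sketch below illustrates the segmentation-plus-blob-analysis front end described above: k-means clustering of pixel intensities followed by connected-component feature extraction. It is a hedged sketch only; the input file, cluster count, and the assumption that the runway forms the darkest intensity class are hypothetical, and the fuzzy-logic matching stage is omitted.

      # Hedged sketch: k-means segmentation of a sensor image, then blob
      # analysis on one intensity class (fuzzy-logic matching omitted).
      import numpy as np
      import cv2

      sensor = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
      samples = sensor.reshape(-1, 1).astype(np.float32)

      # Cluster pixel intensities into k classes.
      k = 3
      criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
      _, labels, centers = cv2.kmeans(samples, k, None, criteria, 5,
                                      cv2.KMEANS_PP_CENTERS)
      segmented = labels.reshape(sensor.shape)

      # Blob analysis on the darkest cluster (assumed to contain the runway).
      runway_class = int(np.argmin(centers))
      mask = np.uint8(segmented == runway_class) * 255
      n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
      for i in range(1, n):  # label 0 is the background
          area = stats[i, cv2.CC_STAT_AREA]
          w, h = stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT]
          # Area, aspect ratio, and centroid are the kinds of blob features
          # that could be compared against projected airport data.
          print(f"blob {i}: area={area}, aspect={w / max(h, 1):.2f}, "
                f"centroid={centroids[i]}")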

  17. Low Cost Night Vision System for Intruder Detection

    NASA Astrophysics Data System (ADS)

    Ng, Liang S.; Yusoff, Wan Azhar Wan; R, Dhinesh; Sak, J. S.

    2016-02-01

    The growth in production of Android devices has resulted in greater functionality as well as lower costs. This has made previously expensive systems such as night vision affordable for more businesses and end users. We designed and implemented robust, low-cost night vision systems based on red-green-blue (RGB) colour histograms for a static camera as well as a camera on an unmanned aerial vehicle (UAV), using the OpenCV library on Intel-compatible notebook computers running the Ubuntu Linux operating system with less than 8 GB of RAM. They were tested against human intruders under low-light conditions (indoor, outdoor, night time) and were shown to successfully detect the intruders.
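
    A minimal sketch of the kind of RGB-histogram intruder test described above is shown next, using OpenCV; the bin count, alarm threshold, and camera index are assumptions, not values from the paper.

      # Hedged sketch: flag frames whose RGB histogram diverges from a
      # background reference (thresholds and bin counts are assumed).
      import cv2

      def rgb_histogram(frame, bins=32):
          hist = cv2.calcHist([frame], [0, 1, 2], None,
                              [bins, bins, bins], [0, 256] * 3)
          return cv2.normalize(hist, hist).flatten()

      cap = cv2.VideoCapture(0)            # static camera (assumed index)
      ok, background = cap.read()
      ref_hist = rgb_histogram(background)

      while True:
          ok, frame = cap.read()
          if not ok:
              break
          # Low correlation with the background histogram suggests that
          # something (an intruder) has entered the scene.
          score = cv2.compareHist(ref_hist, rgb_histogram(frame),
                                  cv2.HISTCMP_CORREL)
          if score < 0.7:                  # hypothetical alarm threshold
              print("possible intruder, correlation =", round(float(score), 3))
      cap.release()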

  18. Early light vision isomorphic singular (ELVIS) system

    NASA Astrophysics Data System (ADS)

    Jannson, Tomasz P.; Ternovskiy, Igor V.; DeBacker, Theodore A.; Caulfield, H. John

    2000-07-01

    In shallow-water military scenarios, UUVs (Unmanned Underwater Vehicles) are required to protect assets against mines, swimmers, and other underwater military objects. It would be desirable if such UUVs could see autonomously in a way similar to humans, at least at the level of the primary visual cortex. In this paper, an approach to developing such a UUV system is proposed.

  19. The Systemic Vision of the Educational Learning

    ERIC Educational Resources Information Center

    Lima, Nilton Cesar; Penedo, Antonio Sergio Torres; de Oliveira, Marcio Mattos Borges; de Oliveira, Sonia Valle Walter Borges; Queiroz, Jamerson Viegas

    2012-01-01

    As the sophistication of technology has increased, so has the demand for quality in education. The expectation of quality has promoted a broad range of products and systems, including in education. These factors include the increased diversity in the student body, which requires greater emphasis that allows a simple and dynamic model in the…

  20. NOVEL CORROSION SENSOR FOR VISION 21 SYSTEMS

    SciTech Connect

    Heng Ban

    2004-12-01

    Advanced sensor technology is identified as a key component for advanced power systems for future energy plants that would have virtually no environmental impact. This project intends to develop a novel high temperature corrosion sensor and subsequent measurement system for advanced power systems. Fireside corrosion is the metal loss caused by chemical reactions on surfaces exposed to the combustion environment. Such corrosion is the leading mechanism for boiler tube failures and has emerged as a significant concern for current and future energy plants due to the introduction of technologies targeting emissions reduction, efficiency improvement, or fuel/oxidant flexibility. Corrosion damage can lead to catastrophic equipment failure, explosions, and forced outages. Proper management of corrosion requires real-time indication of corrosion rate. However, short-term, on-line corrosion monitoring systems for fireside corrosion remain a technical challenge to date due to the extremely harsh combustion environment. The overall objective of this proposed project is to develop a technology for on-line corrosion monitoring based on a new concept. This report describes the initial results from the first-year effort of the three-year study, which include laboratory development and experiments, and pilot-combustor testing.

  1. Development of a vision system for an intelligent ground vehicle

    NASA Astrophysics Data System (ADS)

    Nagel, Robert L.; Perry, Kenneth; Stone, Robert B.; McAdams, Daniel A.

    2009-01-01

    The development of a vision system for an autonomous ground vehicle designed and constructed for the Intelligent Ground Vehicle Competition (IGVC) is discussed. The requirements for the vision system of the autonomous vehicle are explored via functional analysis, considering the flows (materials, energies, and signals) into the vehicle and the changes required of each flow within the vehicle system. Functional analysis leads to a vision system based on a laser range finder (LIDAR) and a camera. Input from the vision system is processed via a ray-casting algorithm whereby the camera data and the LIDAR data are analyzed as a single array of points representing obstacle locations, which, for the IGVC, consist of white lines on the horizontal plane and construction markers on the vertical plane. Functional analysis also leads to a multithreaded application where the ray-casting algorithm is a single thread of the vehicle's software, which consists of multiple threads controlling motion, providing feedback, and processing the data from the camera and LIDAR. LIDAR data is collected as distances and angles from the front of the vehicle to obstacles. Camera data is processed using an adaptive threshold algorithm to identify color changes within the collected image; the image is also corrected for camera-angle distortion, adjusted to the global coordinate system, and processed using a least-squares method to identify white boundary lines. Our IGVC robot, MAX, serves as the running example for all methods discussed in the paper, and all testing and results reported are based on it.
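
    The sketch below illustrates the white-line step in isolation: adaptive thresholding to survive uneven outdoor lighting, followed by a least-squares line fit over the detected pixels. Kernel size, offset, and the input frame are assumptions, not the paper's parameters.

      # Hedged sketch of the white-line step: adaptive threshold, then an
      # ordinary least-squares fit of a boundary line over line pixels.
      import numpy as np
      import cv2

      img = cv2.imread("course.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
      # A negative offset keeps only pixels notably brighter than their
      # neighbourhood, so painted white lines survive uneven lighting.
      binary = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                     cv2.THRESH_BINARY, blockSize=31, C=-10)

      ys, xs = np.nonzero(binary)
      # Least-squares fit of y = m*x + b over the detected line pixels.
      m, b = np.polyfit(xs, ys, 1)
      print(f"boundary line: y = {m:.3f} x + {b:.1f}")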

  2. Displacement measurement system for inverters using computer micro-vision

    NASA Astrophysics Data System (ADS)

    Wu, Heng; Zhang, Xianmin; Gan, Jinqiang; Li, Hai; Ge, Peng

    2016-06-01

    We propose a practical system for noncontact displacement measurement of inverters at the sub-micron scale using computer micro-vision. The measuring method of the proposed system is based on a fast template-matching algorithm with an optical microscope. A laser interferometer measurement (LIM) system is built for comparison. Experimental results demonstrate that the proposed system achieves the same performance as the LIM system but offers higher operability and stability. The measuring accuracy is 0.283 μm.
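
    A minimal sketch of displacement measurement by template matching follows; the frame files, patch location, and pixel-to-micrometre scale are hypothetical, and the sub-pixel refinement such a system would need for 0.283 μm accuracy is omitted.

      # Hedged sketch: track a patch between two microscope frames with
      # normalised cross-correlation template matching.
      import cv2

      ref = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)   # hypothetical
      cur = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
      template = ref[200:264, 300:364]      # patch on the inverter (assumed)

      res = cv2.matchTemplate(cur, template, cv2.TM_CCOEFF_NORMED)
      _, _, _, max_loc = cv2.minMaxLoc(res)
      dx, dy = max_loc[0] - 300, max_loc[1] - 200

      # Convert pixels to micrometres with the calibrated magnification.
      UM_PER_PX = 0.05                      # assumed scale factor
      print("displacement (um):", dx * UM_PER_PX, dy * UM_PER_PX)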

  3. Healthcare Information Systems - Requirements and Vision

    NASA Astrophysics Data System (ADS)

    Williams, John G.

    The introduction of sophisticated information, communications and technology into health care is not a simple task, as demonstrated by the difficulties encountered by the Department of Health's multi-billion programme for the NHS. This programme has successfully implemented much of the infrastructure needed to support the activities of the NHS, but has made less progress with electronic patient records. The case for health records that are focused on the individual patient will be outlined, and the need for these to be underpinned by professionally agreed standards for structure and content. Some of the challenges will be discussed, and the benefits to health care and clinical research will be explored.

  4. Bionic vision: system architectures: a review.

    PubMed

    Guenther, Thomas; Lovell, Nigel H; Suaning, Gregg J

    2012-01-01

    The concept of an electronic visual prosthesis has been investigated since the early 20th century. While the first generation of long-term implantable devices was defined by the turn of the millennium, the greatest progress has been achieved in the past decade. This review describes the current state of the art of visual prostheses investigated by more than two dozen active groups in this field of research. The focus is on technological solutions in regard to the long-term safety of materials, electrode-tissue interfaces, and encapsulation technologies. Furthermore, we critically assess the maximum number of stimulating electrodes each technological approach is likely to provide.

  5. Telerobotic rendezvous and docking vision system architecture

    NASA Technical Reports Server (NTRS)

    Gravely, Ben; Myers, Donald; Moody, David

    1992-01-01

    This research program has successfully demonstrated a new target label architecture that allows a microcomputer to determine the position, orientation, and identity of an object. It contains a CAD-like database with specific geometric information about the object for approach, grasping, and docking maneuvers. Successful demonstrations were performed selecting and docking an ORU box with either of two ORU receptacles. Small but significant differences were seen between the two camera types used in the program, and camera-sensitive program elements have been identified. The software has been formatted into a new co-autonomy system which provides various levels of operator interaction and promises to allow effective application of telerobotic systems while code improvements are continuing.

  6. Passive millimeter wave camera for enhanced vision systems

    NASA Astrophysics Data System (ADS)

    Shoucri, Merit; Dow, G. Samuel; Fornaca, Steven W.; Hauss, Bruce I.; Yujiri, Larry; Shannon, James; Summers, Leland

    1996-05-01

    Passive millimeter wave (PMMW) sensors have been proposed as forward vision sensors for enhanced vision systems used in low-visibility aircraft landing. This work reports on progress achieved to date in the development and manufacturing of a demonstration PMMW camera. The unit is designed to be ground and flight tested starting in 1996. The camera displays on a head-up or head-down display unit a real-time true image of the forward scene. With appropriate head-up symbology and accurate navigation guidance provided by global positioning satellite receivers on board the aircraft, pilots can autonomously (without ground assist) execute category 3 low-visibility take-offs and landings on non-equipped runways. We shall discuss the utility of fielding these systems for airlines and other users.

  7. A VISION of Advanced Nuclear System Cost Uncertainty

    SciTech Connect

    J'Tia Taylor; David E. Shropshire; Jacob J. Jacobson

    2008-08-01

    VISION (VerifIable fuel cycle SImulatiON) is the Advanced Fuel Cycle Initiative's and Global Nuclear Energy Partnership Program's nuclear fuel cycle systems code designed to simulate the US commercial reactor fleet. The code is a dynamic stock-and-flow model that tracks the mass of materials at the isotopic level through the entire nuclear fuel cycle. As VISION runs, it calculates the decay of 70 isotopes including uranium, plutonium, minor actinides, and fission products. VISION.ECON is a sub-model of VISION that was developed to estimate fuel cycle and reactor costs. The sub-model uses the mass flows generated by VISION for each of the fuel cycle functions (referred to as modules) and calculates the annual cost based on cost distributions provided by the Advanced Fuel Cycle Cost Basis Report. Costs are aggregated for each fuel cycle module, and the modules are aggregated into front-end, back-end, recycling, reactor, and total fuel cycle costs. The software also has the capability to perform system sensitivity analysis, which may be used to analyze the impacts on costs due to system uncertainty. This paper provides a preliminary evaluation of the cost uncertainty effects attributable to (1) key reactor and fuel cycle system parameters and (2) scheduling variations. The evaluation focuses on the uncertainty in the total cost of electricity and in fuel cycle costs. First, a single light water reactor (LWR) using mixed oxide fuel is examined to ascertain the effects of simple parameter changes. Three system parameters (burnup, capacity factor, and reactor power) are varied from nominal cost values, and the effect on the total cost of electricity is measured. These parameter changes are then examined in more complex, 2-tier scenarios that include LWRs with mixed fuel and fast recycling reactors using transuranic fuel. Other system parameters are evaluated and the results are presented in the paper. Secondly, the uncertainty due to

  8. International Border Management Systems (IBMS) Program : visions and strategies.

    SciTech Connect

    McDaniel, Michael; Mohagheghi, Amir Hossein

    2011-02-01

    Sandia National Laboratories (SNL), International Border Management Systems (IBMS) Program is working to establish a long-term border security strategy with United States Central Command (CENTCOM). Efforts are being made to synthesize border security capabilities and technologies maintained at the Laboratories, and coordinate with subject matter expertise from both the New Mexico and California offices. The vision for SNL is to provide science and technology support for international projects and engagements on border security.

  9. Artificial intelligence, expert systems, computer vision, and natural language processing

    NASA Technical Reports Server (NTRS)

    Gevarter, W. B.

    1984-01-01

    An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.

  10. The advantages of stereo vision in a face recognition system

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2014-06-01

    Humans can recognize a face with binocular vision, while computers typically use a single face image. It is known that the performance of face recognition (by a computer) can be improved using the score fusion of multimodal images and multiple algorithms. A question is: Can we apply stereo vision to a face recognition system? We know that human binocular vision has many advantages such as stereopsis (3D vision), binocular summation, and singleness of vision including fusion of binocular images (cyclopean image). For face recognition, a 3D face or 3D facial features are typically computed from a pair of stereo images. In human visual processes, binocular summation and singleness of vision are similar to image fusion processes. In this paper, we propose an advanced face recognition system with stereo imaging capability, which is comprised of two 2-in-1 multispectral (visible and thermal) cameras and three recognition algorithms (circular Gaussian filter, face pattern byte, and linear discriminant analysis [LDA]). Specifically, we present and compare stereo fusion at three levels (images, features, and scores) by using stereo images (from the left camera and right camera). Image fusion is achieved with three methods (Laplacian pyramid, wavelet transform, average); feature fusion is done with three logical operations (AND, OR, XOR); and score fusion is implemented with four classifiers (LDA, k-nearest neighbor, support vector machine, binomial logistic regression). The system performance is measured by probability of correct classification (PCC) rate (reported as accuracy rate in this paper) and false accept rate (FAR). The proposed approaches were validated with a multispectral stereo face dataset from 105 subjects. Experimental results show that any type of stereo fusion can improve the PCC while reducing the FAR. It seems that stereo image/feature fusion is superior to stereo score fusion in terms of recognition performance. Further score fusion after image
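
    For intuition, here is a hedged sketch of score-level stereo fusion in its simplest form: match scores from the left and right cameras are min-max normalised and averaged before the best match is chosen. The paper's classifier-based fusion is more elaborate; the scores and weights below are toy values.

      # Minimal sketch: score-level fusion of left/right camera match
      # scores by min-max normalisation and averaging (toy values).
      import numpy as np

      def minmax(scores):
          s = np.asarray(scores, dtype=float)
          return (s - s.min()) / (s.max() - s.min() + 1e-12)

      left_scores = [0.62, 0.18, 0.35, 0.91]   # similarity to each gallery face
      right_scores = [0.58, 0.22, 0.41, 0.88]  # from the right camera

      fused = 0.5 * minmax(left_scores) + 0.5 * minmax(right_scores)
      best = int(np.argmax(fused))
      print("best match: gallery subject", best,
            "fused score", round(float(fused[best]), 3))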

  11. Machine vision system for automated detection of stained pistachio nuts

    NASA Astrophysics Data System (ADS)

    Pearson, Tom C.

    1995-01-01

    A machine vision system was developed to separate stained pistachio nuts, which comprise about 5% of the California crop, from unstained nuts. The system may be used to reduce the labor involved in manual grading or to remove aflatoxin-contaminated product from low-grade process streams. The system was tested on two different pistachio process streams: the bi-chromatic color sorter reject stream and the small nut shelling stock stream. The system had a minimum overall error rate of 14% for the bi-chromatic sorter reject stream and 15% for the small shelling stock stream.

  12. Smart LED light source driver for machine vision system

    NASA Astrophysics Data System (ADS)

    Bürmen, Miran; Pernuš, Franjo; Likar, Boštjan

    2008-02-01

    The unique properties of LEDs offer significant advantages in terms of lifetime, intensity and color control, response time, and efficiency, all of which are very important for illumination in machine vision applications. However, LEDs have some drawbacks, such as the strong thermal dependency and temporal degradation of their intensity and color. Dealing with these drawbacks requires complex LED drivers that can compensate for the abovementioned changes in intensity and color, thereby maintaining higher stability over a wide range of ambient temperatures throughout the lifetime of an LED light source. Moreover, state-of-the-art machine vision systems usually consist of a large number of independent LED light sources that enable real-time switching between different illumination setups at frequencies of up to 100 kHz. In this paper, we discuss the concepts of smart LED drivers with emphasis on flexibility and applicability. All of the most important characteristics are considered and discussed in detail: the accurate generation of high-frequency waveforms, the efficiency of the current driver, thermal and temporal stabilization of the LED intensity and color, communication with a camera and a personal computer or embedded system, and the practicalities of implementing a large number of independent drive channels. Finally, a practical solution addressing all of the abovementioned issues is proposed, with the aim of providing a flexible and highly stable smart LED light source driver for state-of-the-art machine vision systems.

  13. [A Meridian Visualization System Based on Impedance and Binocular Vision].

    PubMed

    Su, Qiyan; Chen, Xin

    2015-03-01

    To ensure that meridians can be measured and displayed correctly on the human body surface, a visualization method based on impedance and binocular vision is proposed. First, an alternating constant-current source injects a current signal into the human skin surface; then, exploiting the low-impedance characteristics of meridians, a multi-channel detecting instrument measures the voltage across each pair of electrodes, thereby locating the channel of the meridian, and the data are transmitted to the host computer through serial-port communication. Second, the intrinsic and extrinsic parameters of the cameras are obtained by Zhang's camera calibration method, 3D information on the meridian location is obtained by corner selection and matching of the optical target, and the coordinates of this 3D information are transformed according to the binocular vision principle. Finally, curve fitting and image fusion are used to realize the meridian visualization. The test results show that the system can achieve real-time detection and accurate display of meridians. PMID:26524777

  14. Scene segmentation in a machine vision system for histopathology

    NASA Astrophysics Data System (ADS)

    Thompson, Deborah B.; Bartels, H. G.; Haddad, J. W.; Bartels, Peter H.

    1990-07-01

    Algorithms and procedures employed to attain reliable and exhaustive segmentation in histopathologic imagery of colon and prostate sections are detailed. The algorithms are controlled and selectively called by a scene segmentation expert system as part of a machine vision system for the diagnostic interpretation of histopathologic sections. At this time, effective segmentation of scenes of glandular tissues is produced, with the system being conservative in the identification of glands; for the segmentation of overlapping glandular nuclei an overall success rate of approximately 90% has been achieved.

  15. A stereo vision-based obstacle detection system in vehicles

    NASA Astrophysics Data System (ADS)

    Huh, Kunsoo; Park, Jaehak; Hwang, Junyeon; Hong, Daegun

    2008-02-01

    Obstacle detection is a crucial issue for driver assistance systems as well as for autonomous vehicle guidance, and it has to be performed with high reliability to avoid any potential collision with the front vehicle. Vision-based obstacle detection systems are regarded as promising for this purpose because they require little infrastructure on a highway. However, their feasibility in passenger cars requires accurate and robust sensing performance. In this paper, an obstacle detection system using stereo vision sensors is developed. This system utilizes feature matching, the epipolar constraint, and feature aggregation in order to robustly detect the initial corresponding pairs. After the initial detection, the system executes a tracking algorithm for the obstacles. The proposed system can detect a front obstacle, a leading vehicle, and a vehicle cutting into the lane. Then, the position parameters of the obstacles and leading vehicles can be obtained. The proposed obstacle detection system is implemented on a passenger car and its performance is verified experimentally.
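
    The underlying stereo cue can be illustrated with OpenCV's block matcher, which computes disparity on a rectified pair and converts it to depth. This is a hedged sketch of the low-level geometry only, not the paper's feature-based pipeline; the image files and calibration numbers are assumptions.

      # Hedged sketch: block-matching disparity on a rectified stereo pair,
      # then depth from Z = f * B / d (not the paper's feature-based method).
      import numpy as np
      import cv2

      left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical
      right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

      stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
      disparity = stereo.compute(left, right).astype(np.float32) / 16.0

      f_px, baseline_m = 700.0, 0.12       # assumed stereo calibration
      valid = disparity > 0
      depth = np.zeros_like(disparity)
      depth[valid] = f_px * baseline_m / disparity[valid]
      print("nearest obstacle (m):", float(depth[valid].min()))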

  16. 75 FR 38863 - Tenth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-06

    …Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). AGENCY: Federal Aviation Administration (FAA). SUMMARY: The FAA is issuing this notice to advise the public of a Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) meeting. The agenda will include: Tuesday, 27 July…

  17. Low Cost Vision Based Personal Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Amami, M. M.; Smith, M. J.; Kokkas, N.

    2014-03-01

    Mobile mapping systems (MMS) can be used for several purposes, such as transportation, highway infrastructure mapping, and GIS data collection. However, the acceptance of these systems is not widespread and their use is still limited due to their high cost and dependency on the Global Navigation Satellite System (GNSS). A low-cost, vision-based personal MMS has been produced with the aim of overcoming these limitations. The system has been designed to depend mainly on cameras, with low-cost GNSS and inertial sensors used to provide initial values for a bundle adjustment solution. The system has the potential to be used both indoors and outdoors. It has been tested indoors and outdoors with different levels of GPS coverage, different surrounding features, and narrow and curvy paths. Tests show that the system is able to work in such environments, providing 3D coordinates with better than 10 cm accuracy.

  18. 3D vision system for intelligent milking robot automation

    NASA Astrophysics Data System (ADS)

    Akhloufi, M. A.

    2013-12-01

    In a milking robot, the correct localization and positioning of the milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for estimating approximate teat positions. This technology has reached its limit and does not allow optimal positioning of the milking cups. Also, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers as well as for animal health (e.g., the development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-Of-Flight (TOF) and RGBD cameras. The proposed algorithms permit the online segmentation of teats by combining 2D and 3D visual information, from which the 3D teat position is computed. This information is then sent to the milking robot for teat cup positioning. The vision system has real-time performance and monitors the optimal positioning of the cups even in the presence of motion. The results obtained with both TOF and RGBD cameras show the good performance of the proposed system, with the best performance obtained with RGBD cameras. This latter technology will be used in future real-life experimental tests.

  19. Configuration assistant for versatile vision-based inspection systems

    NASA Astrophysics Data System (ADS)

    Huesser, Olivier; Hugli, Heinz

    2000-03-01

    Nowadays, vision-based inspection systems are present at many stages of the industrial manufacturing process. Their versatility, which allows them to accommodate a broad range of inspection requirements, is however limited by the time-consuming system setup performed at each production change. This work aims at providing a configuration assistant that helps speed up this system setup, considering the peculiarities of industrial vision systems. The pursued principle, which is to maximize the discriminating power of the features involved in the inspection decision, leads to an optimization problem based on a high-dimensional objective function. Several objective functions based on various metrics are proposed, their optimization being performed with the help of search heuristics such as genetic methods and simulated annealing. The experimental results obtained with an industrial inspection system are presented, considering the particular case of the visual inspection of markings found on top of molded integrated circuits. These results show the effectiveness of the presented objective functions and search methods, and validate the configuration assistant as well.
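
    The sketch below shows the optimisation idea in miniature: simulated annealing over a binary feature-selection vector, maximising a discriminability score. The Fisher-like objective, toy data, and cooling schedule are assumptions; the paper's actual objective functions are metric-specific.

      # Hedged sketch: simulated annealing over a binary feature mask,
      # maximising a Fisher-like separation of good vs. defective samples.
      import math
      import random

      import numpy as np

      def discriminability(mask, good, bad):
          """Separation of the two classes on the selected features."""
          if mask.sum() == 0:
              return 0.0
          g, b = good[:, mask], bad[:, mask]
          gap = np.abs(g.mean(0) - b.mean(0)).sum()
          spread = g.std(0).sum() + b.std(0).sum() + 1e-9
          return gap / spread

      rng = np.random.default_rng(1)
      good = rng.normal(0.0, 1.0, (60, 12))   # toy inspection features
      bad = rng.normal(0.8, 1.0, (60, 12))

      mask = rng.random(12) < 0.5             # random starting selection
      score = discriminability(mask, good, bad)
      T = 1.0
      for step in range(2000):
          cand = mask.copy()
          cand[rng.integers(12)] ^= True      # flip one feature in/out
          s = discriminability(cand, good, bad)
          # Always accept improvements; accept worse moves with
          # Boltzmann probability so the search can escape local optima.
          if s > score or random.random() < math.exp((s - score) / T):
              mask, score = cand, s
          T *= 0.995                          # geometric cooling schedule
      print("selected features:", np.nonzero(mask)[0],
            "score:", round(float(score), 3))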

  20. Improving CAR Navigation with a Vision-Based System

    NASA Astrophysics Data System (ADS)

    Kim, H.; Choi, K.; Lee, I.

    2015-08-01

    The real-time acquisition of accurate positions is very important for the proper operation of driver assistance systems and autonomous vehicles. Since current systems mostly depend on GPS and map-matching techniques, they show poor and unreliable performance in areas where GPS signals are blocked or weak. In this study, we propose a vision-oriented car navigation method based on sensor fusion of a GPS and in-vehicle sensors. We employ a single-photo resection process to derive the position and attitude of the camera, and thus those of the car. These image-georeferencing results are combined with the other sensory data under a sensor fusion framework for more accurate position estimation using an extended Kalman filter. The proposed system estimated positions with an accuracy of 15 m even though GPS signals were unavailable during the entire 15-minute test drive. The proposed vision-based system can be effectively utilized for the low-cost but highly accurate and reliable navigation systems required for intelligent or autonomous vehicles.
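
    To make the fusion idea concrete, here is a deliberately simplified (linear) Kalman filter that predicts position from odometry-style in-vehicle sensing and corrects it with a camera-derived position fix. The paper's extended Kalman filter over full pose is considerably richer; all noise values and rates below are assumptions.

      # Hedged sketch: 1D Kalman filter fusing a motion model with an
      # intermittent vision-derived position fix (toy noise values).
      import numpy as np

      dt = 0.1
      F = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
      H = np.array([[1.0, 0.0]])              # vision measures position only
      Q = np.diag([0.01, 0.1])                # process noise (odometry drift)
      R = np.array([[4.0]])                   # vision fix noise, ~2 m std dev

      x = np.array([[0.0], [10.0]])           # start at 0 m, 10 m/s
      P = np.eye(2)

      for k in range(50):
          # Predict with the motion model (in-vehicle sensors).
          x = F @ x
          P = F @ P @ F.T + Q
          if k % 10 == 0:                     # vision fix every 10th step
              z = np.array([[x[0, 0] + np.random.normal(0, 2.0)]])
              y = z - H @ x                   # innovation
              S = H @ P @ H.T + R
              K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
              x = x + K @ y
              P = (np.eye(2) - K @ H) @ P
      print("estimated position (m):", round(float(x[0, 0]), 2))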

  1. Research on machine vision system of monitoring injection molding processing

    NASA Astrophysics Data System (ADS)

    Bai, Fan; Zheng, Huifeng; Wang, Yuebing; Wang, Cheng; Liao, Si'an

    2016-01-01

    With the wide adoption of the injection molding process, an embedded monitoring system based on machine vision has been developed to automatically monitor abnormalities in injection molding processing. First, the hardware system and the embedded software system were designed. Then camera calibration was carried out to establish an accurate model of the camera and correct distortion. Next, a segmentation algorithm was applied to extract the monitored objects of the injection molding process. The operating procedure of the system includes initialization, process monitoring, and product detail detection. Finally, the experimental results were analyzed, including the detection rates for various kinds of abnormality. The system realizes multi-zone monitoring and product detail detection for the injection molding process with high accuracy and good stability.
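
    The calibration step mentioned above is commonly done with a chessboard target; the sketch below shows the standard OpenCV procedure. The board size and image filenames are hypothetical, and the source does not state which calibration method the system uses.

      # Hedged sketch: chessboard camera calibration; the resulting K and
      # dist can be used with cv2.undistort to correct monitored frames.
      import glob

      import numpy as np
      import cv2

      pattern = (9, 6)                        # inner corners (assumed board)
      objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
      objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

      obj_pts, img_pts = [], []
      for path in glob.glob("calib_*.png"):   # hypothetical image set
          gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
          found, corners = cv2.findChessboardCorners(gray, pattern)
          if found:
              obj_pts.append(objp)
              img_pts.append(corners)

      # K is the intrinsic matrix; dist holds distortion coefficients.
      rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts,
                                               gray.shape[::-1], None, None)
      print("reprojection RMS (px):", rms)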

  2. Vision System Measures Motions of Robot and External Objects

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2008-01-01

    A prototype of an advanced robotic vision system both (1) measures its own motion with respect to a stationary background and (2) detects other moving objects and estimates their motions, all by use of visual cues. Like some prior robotic and other optoelectronic vision systems, this system is based partly on concepts of optical flow and visual odometry. Whereas prior optoelectronic visual-odometry systems have been limited to frame rates of no more than 1 Hz, a visual-odometry subsystem that is part of this system operates at a frame rate of 60 to 200 Hz, given optical-flow estimates. The overall system operates at an effective frame rate of 12 Hz. Moreover, unlike prior machine-vision systems for detecting motions of external objects, this system need not remain stationary: it can detect such motions while it is moving (even vibrating). The system includes a stereoscopic pair of cameras mounted on a moving robot. The outputs of the cameras are digitized, then processed to extract positions and velocities. The initial image-data-processing functions of this system are the same as those of some prior systems: Stereoscopy is used to compute three-dimensional (3D) positions for all pixels in the camera images. For each pixel of each image, optical flow between successive image frames is used to compute the two-dimensional (2D) apparent relative translational motion of the point transverse to the line of sight of the camera. The challenge in designing this system was to provide for utilization of the 3D information from stereoscopy in conjunction with the 2D information from optical flow to distinguish between motion of the camera pair and motions of external objects, compute the motion of the camera pair in all six degrees of translational and rotational freedom, and robustly estimate the motions of external objects, all in real time. To meet this challenge, the system is designed to perform the following image-data-processing functions: The visual-odometry subsystem
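
    The optical-flow front end described above can be illustrated with dense Farneback flow between successive frames, which yields the per-pixel 2D apparent motion that, combined with stereo depth, lets egomotion be separated from object motion. This is a hedged sketch of that first stage only; the frame files and parameters are assumptions, and the egomotion-separation step is not reproduced.

      # Hedged sketch: dense optical flow between two frames gives the 2D
      # apparent motion per pixel used by the odometry/motion-detection stages.
      import numpy as np
      import cv2

      prev = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)   # hypothetical
      curr = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

      flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                          pyr_scale=0.5, levels=3, winsize=15,
                                          iterations=3, poly_n=5,
                                          poly_sigma=1.2, flags=0)
      # flow[..., 0] and flow[..., 1] are per-pixel x and y displacements.
      mag = np.hypot(flow[..., 0], flow[..., 1])
      print("median apparent motion (px/frame):", float(np.median(mag)))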

  3. Retinal Structures and Visual Cortex Activity are Impaired Prior to Clinical Vision Loss in Glaucoma

    PubMed Central

    Murphy, Matthew C.; Conner, Ian P.; Teng, Cindy Y.; Lawrence, Jesse D.; Safiullah, Zaid; Wang, Bo; Bilonick, Richard A.; Kim, Seong-Gi; Wollstein, Gadi; Schuman, Joel S.; Chan, Kevin C.

    2016-01-01

    Glaucoma is the second leading cause of blindness worldwide and its pathogenesis remains unclear. In this study, we measured the structure, metabolism and function of the visual system by optical coherence tomography and multi-modal magnetic resonance imaging in healthy subjects and glaucoma patients with different degrees of vision loss. We found that inner retinal layer thinning, optic nerve cupping and reduced visual cortex activity occurred before patients showed visual field impairment. The primary visual cortex also exhibited more severe functional deficits than higher-order visual brain areas in glaucoma. Within the visual cortex, choline metabolism was perturbed along with increasing disease severity in the eye, optic radiation and visual field. In summary, this study showed evidence that glaucoma deterioration is already present in the eye and the brain before substantial vision loss can be detected clinically using current testing methods. In addition, cortical cholinergic abnormalities are involved during trans-neuronal degeneration and can be detected non-invasively in glaucoma. The current results can be of impact for identifying early glaucoma mechanisms, detecting and monitoring pathophysiological events and eye-brain-behavior relationships, and guiding vision preservation strategies in the visual system, which may help reduce the burden of this irreversible but preventable neurodegenerative disease. PMID:27510406

  4. Retinal Structures and Visual Cortex Activity are Impaired Prior to Clinical Vision Loss in Glaucoma.

    PubMed

    Murphy, Matthew C; Conner, Ian P; Teng, Cindy Y; Lawrence, Jesse D; Safiullah, Zaid; Wang, Bo; Bilonick, Richard A; Kim, Seong-Gi; Wollstein, Gadi; Schuman, Joel S; Chan, Kevin C

    2016-01-01

    Glaucoma is the second leading cause of blindness worldwide and its pathogenesis remains unclear. In this study, we measured the structure, metabolism and function of the visual system by optical coherence tomography and multi-modal magnetic resonance imaging in healthy subjects and glaucoma patients with different degrees of vision loss. We found that inner retinal layer thinning, optic nerve cupping and reduced visual cortex activity occurred before patients showed visual field impairment. The primary visual cortex also exhibited more severe functional deficits than higher-order visual brain areas in glaucoma. Within the visual cortex, choline metabolism was perturbed along with increasing disease severity in the eye, optic radiation and visual field. In summary, this study showed evidence that glaucoma deterioration is already present in the eye and the brain before substantial vision loss can be detected clinically using current testing methods. In addition, cortical cholinergic abnormalities are involved during trans-neuronal degeneration and can be detected non-invasively in glaucoma. The current results can be of impact for identifying early glaucoma mechanisms, detecting and monitoring pathophysiological events and eye-brain-behavior relationships, and guiding vision preservation strategies in the visual system, which may help reduce the burden of this irreversible but preventable neurodegenerative disease. PMID:27510406

  5. 75 FR 71183 - Twelfth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-22

    …Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). AGENCY: Federal Aviation Administration (FAA). SUMMARY: The FAA is issuing this notice to advise the public of a meeting of Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS)…

  6. Design of optimal correlation filters for hybrid vision systems

    NASA Technical Reports Server (NTRS)

    Rajan, Periasamy K.

    1990-01-01

    Research is underway at the NASA Johnson Space Center on the development of vision systems that recognize objects and estimate their position by processing their images. This is a crucial task in many space applications such as autonomous landing on Mars sites, satellite inspection and repair, and docking of the space shuttle and space station. Currently available algorithms and hardware are too slow to be suitable for these tasks. Electronic digital hardware exhibits superior performance in computing and control; however, it takes too much time to carry out important signal processing operations such as Fourier transformation of image data and calculation of the correlation between two images. Fortunately, because of their inherent parallelism, optical devices can carry out these operations very fast, although they are not well suited to computation and control type operations. Hence, investigations are currently being conducted on the development of hybrid vision systems that utilize both optical techniques and digital processing jointly to carry out object recognition tasks in real time. Algorithms for the design of optimal filters for use in hybrid vision systems were developed. Specifically, an algorithm was developed for the design of real-valued frequency-plane correlation filters. Furthermore, research was also conducted on designing correlation filters that are optimal in the sense of providing maximum signal-to-noise ratio when noise is present in the detectors in the correlation plane. Algorithms were developed for the design of different types of optimal filters: complex filters, real-valued filters, phase-only filters, ternary-valued filters, and coupled filters. This report presents some of these algorithms in detail along with their derivations.
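
    One of the filter types named above, the phase-only filter, is easy to demonstrate numerically: the reference spectrum's magnitude is discarded and correlation is performed in the frequency plane. The sketch below is a hedged digital simulation of that idea, not the report's optical implementation or its optimality derivations; the scene and noise are synthetic.

      # Hedged sketch: phase-only correlation filtering in the frequency
      # plane; the correlation peak marks the object's location.
      import numpy as np

      def phase_only_correlation(scene, reference):
          """Correlate scene with a phase-only filter of the reference."""
          S = np.fft.fft2(scene)
          R = np.fft.fft2(reference, s=scene.shape)
          # Phase-only filter: keep the phase of the reference spectrum,
          # discard its magnitude.
          H = np.conj(R) / (np.abs(R) + 1e-12)
          return np.real(np.fft.ifft2(S * H))

      rng = np.random.default_rng(0)
      reference = rng.random((16, 16))
      scene = np.zeros((128, 128))
      scene[40:56, 70:86] = reference          # embed the object at (40, 70)
      scene += rng.normal(0, 0.05, scene.shape)  # detector noise

      corr = phase_only_correlation(scene, reference)
      peak = np.unravel_index(np.argmax(corr), corr.shape)
      print("correlation peak at:", peak)      # expect (40, 70)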

  7. Vision-Based People Detection System for Heavy Machine Applications

    PubMed Central

    Fremont, Vincent; Bui, Manh Tuan; Boukerroui, Djamal; Letort, Pierrick

    2016-01-01

    This paper presents a vision-based people detection system for improving safety in heavy machines. We propose a perception system composed of a monocular fisheye camera and a LiDAR. Fisheye cameras have the advantage of a wide field-of-view, but the strong distortions that they create must be handled at the detection stage. Since people detection in fisheye images has not been well studied, we focus on investigating and quantifying the impact that strong radial distortions have on the appearance of people, and we propose approaches for handling this specificity, adapted from state-of-the-art people detection approaches. These adaptive approaches nevertheless have the drawback of high computational cost and complexity. Consequently, we also present a framework for harnessing the LiDAR modality in order to enhance the detection algorithm for different camera positions. A sequential LiDAR-based fusion architecture is used, which addresses directly the problem of reducing false detections and computational cost in an exclusively vision-based system. A heavy machine dataset was built, and different experiments were carried out to evaluate the performance of the system. The results are promising, in terms of both processing speed and performance. PMID:26805838

  8. Vision-Based People Detection System for Heavy Machine Applications.

    PubMed

    Fremont, Vincent; Bui, Manh Tuan; Boukerroui, Djamal; Letort, Pierrick

    2016-01-01

    This paper presents a vision-based people detection system for improving safety in heavy machines. We propose a perception system composed of a monocular fisheye camera and a LiDAR. Fisheye cameras have the advantage of a wide field-of-view, but the strong distortions that they create must be handled at the detection stage. Since people detection in fisheye images has not been well studied, we focus on investigating and quantifying the impact that strong radial distortions have on the appearance of people, and we propose approaches for handling this specificity, adapted from state-of-the-art people detection approaches. These adaptive approaches nevertheless have the drawback of high computational cost and complexity. Consequently, we also present a framework for harnessing the LiDAR modality in order to enhance the detection algorithm for different camera positions. A sequential LiDAR-based fusion architecture is used, which addresses directly the problem of reducing false detections and computational cost in an exclusively vision-based system. A heavy machine dataset was built, and different experiments were carried out to evaluate the performance of the system. The results are promising, in terms of both processing speed and performance. PMID:26805838

  9. Beam Splitter For Welding-Torch Vision System

    NASA Technical Reports Server (NTRS)

    Gilbert, Jeffrey L.

    1991-01-01

    Compact welding torch equipped with along-the-torch vision system includes cubic beam splitter to direct preview light on weldment and to reflect light coming from welding scene for imaging. Beam splitter integral with torch; requires no external mounting brackets. Rugged and withstands vibrations and wide range of temperatures. Commercially available, reasonably priced, comes in variety of sizes and optical qualities with antireflection and interference-filter coatings on desired faces. Can provide 50 percent transmission and 50 percent reflection of incident light to exhibit minimal ghosting of image.

  10. Enhanced/Synthetic Vision Systems - Human factors research and implications for future systems

    NASA Technical Reports Server (NTRS)

    Foyle, David C.; Ahumada, Albert J.; Larimer, James; Sweet, Barbara T.

    1992-01-01

    This paper reviews recent human factors research studies conducted in the Aerospace Human Factors Research Division at NASA Ames Research Center related to the development and usage of Enhanced or Synthetic Vision Systems. Research discussed includes studies of field of view (FOV), representational differences of infrared (IR) imagery, head-up display (HUD) symbology, HUD advanced concept designs, sensor fusion, and sensor/database fusion and evaluation. Implications for the design and usage of Enhanced or Synthetic Vision Systems are discussed.

  11. Computer-aided 3D display system and its application in 3D vision test

    NASA Astrophysics Data System (ADS)

    Shen, XiaoYun; Ma, Lan; Hou, Chunping; Wang, Jiening; Tang, Da; Li, Chang

    1998-08-01

    The computer-aided 3D display system, a flicker-free field-sequential stereoscopic image display system, has been newly developed. This system is composed of a personal computer, a liquid crystal glasses driving card, stereoscopic display software, and liquid crystal glasses. It can display field-sequential stereoscopic images at refresh rates of 70 Hz to 120 Hz. A typical application of this system, a 3D vision test system, is the main topic of this paper. This stereoscopic vision test system can quantitatively test stereoscopic acuity, crossed disparity, uncrossed disparity, and dynamic stereoscopic vision. We used random-dot stereograms as stereoscopic vision test charts. From a practical comparison between anaglyph stereoscopic vision test charts and this stereoscopic vision test system, statistical figures and test results are presented.

  12. 75 FR 28852 - Ninth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-24

    …Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). AGENCY: Federal Aviation Administration (FAA). SUMMARY: The FAA is issuing this notice to advise the public of a meeting of Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS)…

  13. Vision-Based SLAM System for Unmanned Aerial Vehicles

    PubMed Central

    Munguía, Rodrigo; Urzua, Sarquis; Bolea, Yolanda; Grau, Antoni

    2016-01-01

    The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the locations of the landmarks observed by the camera. The position sensor is used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks is used to perform fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of including camera measurements in the system. In this sense, the estimation of the trajectory of the vehicle is considerably improved compared with the estimates obtained using only the measurements from the position sensor, which are typically low-rate and highly noisy. PMID:26999131

  14. Vision-Based SLAM System for Unmanned Aerial Vehicles.

    PubMed

    Munguía, Rodrigo; Urzua, Sarquis; Bolea, Yolanda; Grau, Antoni

    2016-01-01

    The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the locations of the landmarks observed by the camera. The position sensor is used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks is used to perform fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of including camera measurements in the system. In this sense, the estimation of the trajectory of the vehicle is considerably improved compared with the estimates obtained using only the measurements from the position sensor, which are typically low-rate and highly noisy. PMID:26999131

  15. Analysis of Risk Compensation Behavior on Night Vision Enhancement System

    NASA Astrophysics Data System (ADS)

    Hiraoka, Toshihiro; Masui, Junya; Nishikawa, Seimei

    Advanced driver assistance systems (ADAS) such as forward obstacle collision warning systems (FOCWS) and night vision enhancement systems (NVES) aim to decrease the driver's mental workload and enhance vehicle safety by providing useful information that supports the driver's perception and judgment processes. On the other hand, the risk homeostasis theory (RHT) cautions that enhanced safety and reduced risk can induce risk compensation behavior, such as increasing vehicle velocity. The present paper therefore reports driving-simulator experiments conducted to examine dependence on the NVES and the emergence of risk compensation behavior. Moreover, we examined how the spontaneous behavioral adaptation induced by presenting a fuel-consumption meter affected the risk compensation behavior.

  16. Development of Non-contact Respiratory Monitoring System for Newborn Using a FG Vision Sensor

    NASA Astrophysics Data System (ADS)

    Kurami, Yoshiyuki; Itoh, Yushi; Natori, Michiya; Ohzeki, Kazuo; Aoki, Yoshimitsu

    In recent years, advances in neonatal care have been strongly desired as the birth rate of low-birth-weight babies increases. The respiration of a low-birth-weight baby is especially unstable because the central nervous system and respiratory function are immature; consequently, low-birth-weight babies often develop respiratory disorders. In a NICU (Neonatal Intensive Care Unit), neonatal respiration is monitored at all times using a cardio-respiratory monitor and a pulse oximeter. These contact-type sensors can measure respiratory rate and SpO2 (Saturation of Peripheral Oxygen). However, because a contact-type sensor can damage the newborn's skin, monitoring neonatal respiration this way is a real burden. Therefore, we developed a respiratory monitoring system for newborns using a FG (Fiber Grating) vision sensor. The FG vision sensor is an active stereo vision sensor capable of non-contact 3D measurement. A respiratory waveform is calculated by detecting the vertical motion of the thoracic and abdominal regions with respiration. We conducted a clinical experiment in the NICU and confirmed that the accuracy of the obtained respiratory waveform was high. Non-contact respiratory monitoring of newborns using a FG vision sensor enables a minimally invasive procedure.
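
    Once a motion waveform is available, the respiratory rate can be read off as a spectral peak. The sketch below shows this with a simulated displacement signal; the sampling rate, breathing frequency, and physiological band are assumptions standing in for the FG sensor's actual measurements.

      # Hedged sketch: respiratory rate from a chest-motion waveform via an
      # FFT peak (simulated signal, assumed 30 fps sampling).
      import numpy as np

      fs = 30.0                               # frames per second (assumed)
      t = np.arange(0, 60, 1 / fs)
      # Simulated displacement: ~45 breaths/min (0.75 Hz) plus sensor noise.
      z = 0.5 * np.sin(2 * np.pi * 0.75 * t) + 0.05 * np.random.randn(t.size)

      spectrum = np.abs(np.fft.rfft(z - z.mean()))
      freqs = np.fft.rfftfreq(t.size, 1 / fs)
      # Restrict the search to a plausible newborn band (~0.3-2 Hz, assumed).
      band = (freqs > 0.3) & (freqs < 2.0)
      f_resp = freqs[band][np.argmax(spectrum[band])]
      print("respiratory rate: %.1f breaths/min" % (60 * f_resp))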

  17. Recognition of Activities of Daily Living with Egocentric Vision: A Review

    PubMed Central

    Nguyen, Thi-Hoa-Cuc; Nebel, Jean-Christophe; Florez-Revuelta, Francisco

    2016-01-01

    Video-based recognition of activities of daily living (ADLs) is being used in ambient assisted living systems in order to support the independent living of older people. However, current systems based on cameras located in the environment present a number of problems, such as occlusions and a limited field of view. Recently, wearable cameras have begun to be exploited. This paper presents a review of the state of the art of egocentric vision systems for the recognition of ADLs following a hierarchical structure: motion, action and activity levels, where each level provides higher semantic information and involves a longer time frame. The current egocentric vision literature suggests that ADLs recognition is mainly driven by the objects present in the scene, especially those associated with specific tasks. However, although object-based approaches have proven popular, object recognition remains a challenge due to the intra-class variations found in unconstrained scenarios. As a consequence, the performance of current systems is far from satisfactory. PMID:26751452

  18. Vision-aided inertial navigation system for robotic mobile mapping

    NASA Astrophysics Data System (ADS)

    Bayoud, Fadi; Skaloud, Jan

    2008-04-01

    A mapping system by vision-aided inertial navigation was developed for areas where GNSS signals are unreachable. In this framework, a methodology for the integration of vision and inertial sensors is presented, analysed, and tested. The system employs the method of “SLAM: Simultaneous Localisation And Mapping”, where the only external input available to the system at the beginning of the mapping mission is a number of features with known coordinates. SLAM is a term used in the robotics community to describe the problem of mapping the environment and at the same time using this map to determine the location of the mapping device. Differing from the robotics approach, the presented development stems from the frameworks of photogrammetry and kinematic geodesy, which are merged in two filters that run in parallel: the Least-Squares Adjustment (LSA) for feature-coordinate determination and the Kalman filter (KF) for navigation correction. To test this approach, a mapping system prototype comprising two CCD cameras and one Inertial Measurement Unit (IMU) is introduced. Conceptually, the outputs of the LSA photogrammetric resection are used as the external measurements for the KF that corrects the inertial navigation. The filtered position and orientation are subsequently employed in the photogrammetric intersection to map the surrounding features, which are used as control points for the resection in the next epoch. We confirm empirically the dependency of navigation performance on the quality of the images and the number of tracked features, as well as on the geometry of the stereo-pair. Due to its autonomous nature, the SLAM's performance is further affected by the quality of IMU initialisation and the a priori assumptions on error distribution. Using the example of the presented system, we show that centimetre accuracy can be achieved in both navigation and mapping when the image geometry is optimal.

  19. MARVEL: A system that recognizes world locations with stereo vision

    SciTech Connect

    Braunegg, D.J. . Artificial Intelligence Lab.)

    1993-06-01

    MARVEL is a system that supports autonomous navigation by building and maintaining its own models of world locations and using these models and stereo vision input to recognize its location in the world and its position and orientation within that location. The system emphasizes the use of simple, easily derivable features for recognition, whose aggregate identifies a location, instead of complex features that also require recognition. MARVEL is designed to be robust with respect to input errors and to respond to a gradually changing world by updating its world location models. In over 1,000 recognition tests using real-world data, MARVEL yielded a false negative rate under 10% with zero false positives.

  20. Occlusion-free monocular three-dimensional vision system

    NASA Astrophysics Data System (ADS)

    Theodoracatos, Vassilios E.

    1994-10-01

    This paper describes a new, occlusion-free, monocular three-dimensional vision system. A matrix of light beams (lasers, fiber optics, etc.), substantially parallel to the optic axis of the lens of a video camera, is projected onto a scene. The corresponding coordinates of the perspective image generated on the video-camera sensor, the focal length of the camera lens, and the lateral position of the projected beams of light are used to determine the 'perspective depth' z* of the three-dimensional real image in the space between the lens and the image plane. Direct inverse perspective transformations are used to reconstruct the three- dimensional real-world scene. This system can lead to the development of three-dimensional real-image sensing devices for manufacturing, medical, and defense-related applications. If combined with existing technology, it has high potential for the development of three- dimensional television.
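
    For intuition, a minimal worked sketch of the recoverable algebra under a standard pinhole model with focal length f follows; this is textbook perspective geometry, not necessarily the paper's exact 'perspective depth' formulation. A beam parallel to the optic axis at known lateral offset X_0 illuminates the scene at world point (X_0, Y_0, Z), which projects to image coordinate x = f X_0 / Z, so inverting gives the depth directly:

      \[
        x = \frac{f\,X_0}{Z}
        \quad\Longrightarrow\quad
        Z = \frac{f\,X_0}{x},
        \qquad
        Y_0 = \frac{y\,Z}{f}.
      \]

    Each beam's known offset and measured image coordinate thus determine the depth of the illuminated point without a correspondence search or a second camera, which is why the approach avoids occlusion between separate viewpoints.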

  1. A database/knowledge structure for a robotics vision system

    NASA Technical Reports Server (NTRS)

    Dearholt, D. W.; Gonzales, N. N.

    1987-01-01

    Desirable properties of robotics vision database systems are given, and structures which possess properties appropriate for some aspects of such database systems are examined. Included in the structures discussed is a family of networks in which link membership is determined by measures of proximity between pairs of the entities stored in the database. This type of network is shown to have properties which guarantee that the search for a matching feature vector is monotonic. That is, the database can be searched with no backtracking, if there is a feature vector in the database which matches the feature vector of the external entity which is to be identified. The construction of the database is discussed, and the search procedure is presented. A section on the support provided by the database for description of the decision-making processes and the search path is also included.
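
    A toy sketch of the monotonic, backtrack-free search property claimed above, on a hypothetical proximity network stored as adjacency lists; the data layout and greedy descent rule are assumptions for illustration:

```python
import numpy as np

def monotonic_search(query, vectors, neighbours, start=0):
    """Greedy descent on a proximity network: from any start node, repeatedly
    move to the neighbour closest to the query. The distance to the query
    never increases, so no backtracking is needed.

    vectors    : node_id -> feature vector (e.g., dict or array)
    neighbours : node_id -> list of adjacent node ids
    """
    current = start
    d = np.linalg.norm(vectors[current] - query)
    while True:
        best, best_d = current, d
        for n in neighbours[current]:
            dn = np.linalg.norm(vectors[n] - query)
            if dn < best_d:
                best, best_d = n, dn
        if best == current:     # no neighbour improves: matching vector found
            return current
        current, d = best, best_d
```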

  2. Wearable design issues for electronic vision enhancement systems

    NASA Astrophysics Data System (ADS)

    Dvorak, Joe

    2006-09-01

    As the baby boomer generation ages, visual impairment will overtake a significant portion of the US population. At the same time, more and more of our world is becoming digital. These two trends, coupled with the continuing advances in digital electronics, argue for a rethinking in the design of aids for the visually impaired. This paper discusses design issues for electronic vision enhancement systems (EVES) [R.C. Peterson, J.S. Wolffsohn, M. Rubinstein, et al., Am. J. Ophthalmol. 136, 1129 (2003)] that will facilitate their wearability and continuous use. We briefly discuss the factors affecting a person's acceptance of wearable devices. We define the concept of operational inertia which plays an important role in our design of wearable devices and systems. We then discuss how design principles based upon operational inertia can be applied to the design of EVES.

  3. Cryogenics Vision Workshop for High-Temperature Superconducting Electric Power Systems Proceedings

    SciTech Connect

    Energetics, Inc.

    2000-01-01

    The US Department of Energy's Superconductivity Program for Electric Systems sponsored the Cryogenics Vision Workshop, which was held on July 27, 1999 in Washington, D.C. This workshop was held in conjunction with the Program's Annual Peer Review meeting. Of the 175 people attending the peer review meeting, 31 were selected in advance to participate in the Cryogenics Vision Workshop discussions. The participants represented cryogenic equipment manufacturers, industrial gas manufacturers and distributors, component suppliers, electric power equipment manufacturers (Superconductivity Partnership Initiative participants), electric utilities, federal agencies, national laboratories, and consulting firms. Critical factors that need to be considered in describing the successful future commercialization of cryogenic systems were discussed. Such systems will enable the widespread deployment of high-temperature superconducting (HTS) electric power equipment. Potential research, development, and demonstration (RD&D) activities and partnership opportunities for advancing suitable cryogenic systems were also discussed. The workshop agenda can be found in the following section of this report. Facilitated sessions were held to discuss the following specific focus topics: identifying critical factors that need to be included in a cryogenics vision for HTS electric power systems (from the HTS equipment end-user perspective); and identifying R&D needs and partnership roles (from the cryogenic industry perspective). The findings of the facilitated Cryogenics Vision Workshop were then presented in a plenary session of the Annual Peer Review Meeting. Approximately 120 attendees participated in the afternoon plenary session. This large group heard summary reports from the workshop session leaders and then held a wrap-up session to discuss the findings, cross-cutting themes, and next steps. These summary reports are presented in this document. The ideas and suggestions raised during

  4. neu-VISION: an explosives detection system for transportation security

    NASA Astrophysics Data System (ADS)

    Warman, Kieffer; Penn, David

    2008-04-01

    Terrorists were targeting commercial airliners long before the 9/11 attacks on the World Trade Center and the Pentagon. Despite heightened security measures, commercial airliners remain an attractive target for terrorists, as evidenced by the August 2006 terrorist plot to destroy as many as ten aircraft in mid-flight from the United Kingdom to the United States. As a response to the security threat, air carriers are now required to screen 100 percent of all checked baggage for explosives. The scale of this task is enormous, and the Transportation Security Administration has deployed thousands of detection systems. Although this has resulted in improved security, the performance of the installed systems is not ideal. Further improvements are needed and can only be made with new technologies that ensure a flexible concept of operations and provide superior detection along with low false alarm rates and excellent dependability. To address these security needs, Applied Signal Technology, Inc. is developing an innovative and practical solution to meet the performance demands of aviation security. The neu-VISION(TM) system is expected to provide explosives detection performance for checked baggage that both complements and surpasses that of currently deployed systems. The neu-VISION(TM) system leverages a 5-year R&D program developing the Associated Particle Imaging (API) technique, a neutron-based, non-intrusive material identification and imaging technique. The superior performance afforded by this neutron interrogation technique delivers false alarm rates much lower than deployed technologies and "sees through" dense, heavy materials. Small quantities of explosive material are identified even in cluttered environments.

  5. Stereoscopic Machine-Vision System Using Projected Circles

    NASA Technical Reports Server (NTRS)

    Mackey, Jeffrey R.

    2010-01-01

    A machine-vision system capable of detecting obstacles large enough to damage or trap a robotic vehicle is undergoing development. The system includes (1) a pattern generator that projects concentric circles of laser light forward onto the terrain, (2) a stereoscopic pair of cameras that are aimed forward to acquire images of the circles, (3) a frame grabber and digitizer for acquiring image data from the cameras, and (4) a single-board computer that processes the data. The system is being developed as a prototype of machine-vision systems to enable robotic vehicles ("rovers") on remote planets to avoid craters, large rocks, and other terrain features that could capture or damage the vehicles. Potential terrestrial applications of systems like this one could include terrain mapping, collision avoidance, navigation of robotic vehicles, mining, and robotic rescue. This system is based partly on the same principles as those of a prior stereoscopic machine-vision system in which the cameras acquire images of a single stripe of laser light that is swept forward across the terrain. However, this system is designed to afford improvements over some of the undesirable features of the prior system, including the need for a pan-and-tilt mechanism to aim the laser to generate the swept stripe, ambiguities in interpretation of the single-stripe image, the time needed to sweep the stripe across the terrain and process the data from many images acquired during that time, and difficulty of calibration because of the narrowness of the stripe. In this system, the pattern generator does not contain any moving parts and need not be mounted on a pan-and-tilt mechanism: the pattern of concentric circles is projected steadily in the forward direction. The system calibrates itself by use of data acquired during projection of the concentric-circle pattern onto a known target representing flat ground. The calibration-target image data are stored in the computer memory for use as a

  6. HiVision millimeter-wave radar for enhanced vision systems in civil and military transport aircraft

    NASA Astrophysics Data System (ADS)

    Pirkl, Martin; Tospann, Franz-Jose

    1997-06-01

    This paper presents a guideline for meeting the requirements of forward-looking sensors for an enhanced vision system for both military and civil transport aircraft. It gives an update of a previous publication with special attention to airborne applications. For civil transport aircraft, an imaging mm-wave radar is proposed as the vision sensor for an enhanced vision system. For military air transport, an additional high-performance weather radar should be combined with the mm-wave radar to enable advanced situation awareness, e.g. spot-SAR or air-to-air operation. For tactical navigation the mm-wave radar is useful due to its ranging capabilities. To meet these requirements the HiVision radar was developed and tested. It uses a robust concept of electronic beam steering and will meet the strict price constraints of transport aircraft. Advanced image processing and high-frequency techniques are currently being developed to enhance the performance of both the radar image and the integration techniques. The advantages of the FMCW waveform even enable a sensor with a low probability of intercept and high resistance against jammers. The 1997 highlight will be the optimization of the sensor and flight trials with an enhanced radar demonstrator.

  7. MMW radar enhanced vision systems: the Helicopter Autonomous Landing System (HALS) and Radar-Enhanced Vision System (REVS) are rotary and fixed wing enhanced flight vision systems that enable safe flight operations in degraded visual environments

    NASA Astrophysics Data System (ADS)

    Cross, Jack; Schneider, John; Cariani, Pete

    2013-05-01

    Sierra Nevada Corporation (SNC) has developed rotary- and fixed-wing millimeter wave radar enhanced vision systems. The Helicopter Autonomous Landing System (HALS) is a rotary-wing enhanced vision system that enables multi-ship landing, takeoff, and enroute flight in Degraded Visual Environments (DVE). HALS has been successfully flight tested in a variety of scenarios, from brown-out DVE landings, to enroute flight over mountainous terrain, to wire/cable detection during low-level flight. The Radar-Enhanced Vision System (REVS) is a fixed-wing Enhanced Flight Vision System (EFVS) undergoing prototype development testing. Both systems are based on a fast-scanning, three-dimensional 94 GHz radar that produces real-time terrain and obstacle imagery. The radar imagery is fused with synthetic imagery of the surrounding terrain to form a long-range, wide field-of-view display. A symbology overlay is added to provide aircraft state information and, for HALS, approach and landing command guidance cuing. The combination of see-through imagery and symbology provides the key information a pilot needs to perform safe flight operations in DVE conditions. This paper discusses the HALS and REVS systems and technology, presents imagery, and summarizes the recent flight test results.

  8. Model-based vision system for mobile robot position estimation

    NASA Astrophysics Data System (ADS)

    D'Orazio, Tiziana; Capozzo, Liborio; Ianigro, Massimo; Distante, Arcangelo

    1994-02-01

    The development of an autonomous mobile robot is a central problem in artificial intelligence and robotics. A vision system can be used to recognize naturally occurring landmarks located in known positions. The problem considered here is that of finding the location and orientation of a mobile robot using a 3-D image taken by a CCD camera located on the robot. The naturally occurring landmarks that we use are the corners of the room extracted by an edge detection algorithm from a 2-D image of the indoor scene. Then, the location and orientation of the vehicle are calculated by perspective information of the landmarks in the scene of the room where the robot moves.

  9. Visual tracking in stereo. [by computer vision system

    NASA Technical Reports Server (NTRS)

    Saund, E.

    1981-01-01

    A method is described for visual object tracking by a computer vision system using TV cameras and special low-level image processing hardware. The tracker maintains an internal model of the location, orientation, and velocity of the object in three-dimensional space. This model is used to predict where features of the object will lie on the two-dimensional images produced by stereo TV cameras. The differences in the locations of features in the two-dimensional images as predicted by the internal model and as actually seen create an error signal in the two-dimensional representation. This is multiplied by a generalized inverse Jacobian matrix to deduce the error in the internal model. The procedure repeats to update the internal model of the object's location, orientation and velocity continuously.
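
    A compact sketch of the update rule described above, assuming the stacked 2-D feature errors and the state-to-feature Jacobian are already available (the names and the gain parameter are hypothetical):

```python
import numpy as np

def correct_model(state, J, predicted_2d, observed_2d, gain=1.0):
    """Map the image-plane prediction error back to a model correction.

    state        : internal model vector (location, orientation, velocity)
    J            : Jacobian d(image features)/d(state), shape (2N, len(state))
    predicted_2d : stacked feature positions predicted by the internal model
    observed_2d  : stacked feature positions actually seen in the stereo images
    """
    error_2d = observed_2d - predicted_2d      # 2-D error signal
    delta = np.linalg.pinv(J) @ error_2d       # generalized inverse Jacobian step
    return state + gain * delta                # updated internal model
```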

  10. Expert System Architecture for Rocket Engine Numerical Simulators: A Vision

    NASA Technical Reports Server (NTRS)

    Mitra, D.; Babu, U.; Earla, A. K.; Hemminger, Joseph A.

    1998-01-01

    Simulation of any complex physical system, such as a rocket engine, involves modeling the behavior of its different components, mostly using numerical equations. Typically a simulation package contains a set of subroutines for these modeling purposes and others for supporting jobs. A user creates an input file configuring a system (part or whole of a rocket engine to be simulated) in an appropriate format understandable by the package and runs it to create an executable module corresponding to the simulated system. This module is then run on a given set of input parameters in another file. Simulation jobs are mostly done for performance measurement of a designed system, but they can also be utilized for failure analysis or for design jobs such as inverse problems. In order to use any such package, the user needs to understand and learn a great deal about the software architecture of the package, apart from being knowledgeable in the target domain. We are currently involved in a project to design an intelligent executive module for rocket engine simulation packages, which would free users from this burden of acquiring knowledge of a particular software system. The extended abstract presented here describes the vision, methodology, and problems encountered in the project. We are employing object-oriented technology in designing the executive module. The problem is connected to areas such as the reverse engineering of simulation software and intelligent systems for simulation.

  11. A Novel Vision Sensing System for Tomato Quality Detection

    PubMed Central

    Srivastava, Satyam; Boyat, Sachin; Sadistap, Shashikant

    2014-01-01

    Producing tomatoes is a daunting task, as the crop is exposed to attacks from various microorganisms. The symptoms of these attacks are usually changes in color, bacterial spots, special kinds of specks, and sunken areas with concentric rings of different colors on the tomato's outer surface. This paper addresses a vision-sensing-based system for tomato quality inspection. A novel approach has been developed for tomato fruit detection and disease detection. The developed system consists of a 12.0-megapixel USB camera module interfaced with an ARM-9 processor. A ZigBee module has been interfaced with the developed system for wireless transmission from the host system to a PC-based server for further processing. Algorithm development consists of three major steps: preprocessing (noise rejection, segmentation, and scaling), classification and recognition, and automatic disease detection and classification. Tomato samples were collected from a local market, and data acquisition was performed for database preparation and the various processing steps. The developed system can detect as well as classify the various diseases in tomato samples. Various pattern recognition and soft computing techniques have been implemented for data analysis as well as prediction of different parameters, such as the shelf life of the tomato, a quality index based on disease detection and classification, freshness detection, maturity index detection, and suggestions for the detected diseases. Results are validated against an aroma sensing technique using the commercial Alpha MOS 3000 system. The accuracy calculated from the extracted results is around 92%. PMID:26904620

  12. A Novel Vision Sensing System for Tomato Quality Detection.

    PubMed

    Srivastava, Satyam; Boyat, Sachin; Sadistap, Shashikant

    2014-01-01

    Producing tomatoes is a daunting task, as the crop is exposed to attacks from various microorganisms. The symptoms of these attacks are usually changes in color, bacterial spots, special kinds of specks, and sunken areas with concentric rings of different colors on the tomato's outer surface. This paper addresses a vision-sensing-based system for tomato quality inspection. A novel approach has been developed for tomato fruit detection and disease detection. The developed system consists of a 12.0-megapixel USB camera module interfaced with an ARM-9 processor. A ZigBee module has been interfaced with the developed system for wireless transmission from the host system to a PC-based server for further processing. Algorithm development consists of three major steps: preprocessing (noise rejection, segmentation, and scaling), classification and recognition, and automatic disease detection and classification. Tomato samples were collected from a local market, and data acquisition was performed for database preparation and the various processing steps. The developed system can detect as well as classify the various diseases in tomato samples. Various pattern recognition and soft computing techniques have been implemented for data analysis as well as prediction of different parameters, such as the shelf life of the tomato, a quality index based on disease detection and classification, freshness detection, maturity index detection, and suggestions for the detected diseases. Results are validated against an aroma sensing technique using the commercial Alpha MOS 3000 system. The accuracy calculated from the extracted results is around 92%. PMID:26904620

  13. X-Eye: a novel wearable vision system

    NASA Astrophysics Data System (ADS)

    Wang, Yuan-Kai; Fan, Ching-Tang; Chen, Shao-Ang; Chen, Hou-Ye

    2011-03-01

    This paper proposes a smart portable device, named the X-Eye, which provides a gesture interface with a small size but a large display for the application of photo capture and management. The wearable vision system is implemented on embedded hardware and achieves real-time performance. The hardware of the system includes an asymmetric dual-core processor with an ARM core and a DSP core. The display device is a pico projector, which has a small volume but can project a large screen. A triple-buffering mechanism is designed for efficient memory management. Software functions are partitioned and pipelined for effective parallel execution. Gesture recognition is achieved first by color classification, which is based on the expectation-maximization algorithm and a Gaussian mixture model (GMM). To improve the performance of the GMM, we devise a look-up table (LUT) technique. Fingertips are then extracted, and geometrical features of the fingertip's shape are matched to recognize the user's gesture commands. In order to verify the accuracy of the gesture recognition module, experiments were conducted in eight scenes with 400 test videos, including the challenges of colorful backgrounds, low illumination, and flickering. The whole system, including gesture recognition, runs at a frame rate of 22.9 fps. Experimental results give a 99% recognition rate. These results demonstrate that this small-size, large-screen wearable system provides an effective gesture interface with real-time performance.
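
    A rough sketch of the GMM-plus-LUT idea, using scikit-learn's EM-based GaussianMixture in place of the authors' embedded implementation; the training data, LUT resolution, and likelihood threshold below are placeholders, not values from the paper:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Fit a GMM (via EM) to colour samples of the target class, then bake its
# log-likelihood into a coarse RGB look-up table so per-pixel classification
# becomes a single table read at run time.
hand_pixels = np.random.randint(0, 256, size=(5000, 3))   # placeholder training set
gmm = GaussianMixture(n_components=3).fit(hand_pixels)

bins = 32                                                  # 32^3-entry LUT (assumed)
grid = np.stack(np.meshgrid(*[np.arange(bins)] * 3, indexing="ij"), -1)
centers = (grid.reshape(-1, 3) + 0.5) * (256 / bins)       # bin-centre colours
lut = gmm.score_samples(centers).reshape(bins, bins, bins) > -15.0  # threshold assumed

def is_target_colour(pixel):
    """Classify one (r, g, b) pixel with a single LUT lookup."""
    r, g, b = (np.asarray(pixel) * bins // 256)
    return lut[r, g, b]
```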

  14. Evolution of activity patterns and chromatic vision in primates: morphometrics, genetics and cladistics.

    PubMed

    Heesy, C P; Ross, C F

    2001-02-01

    Hypotheses for the adaptive origin of primates have reconstructed nocturnality as the primitive activity pattern for the entire order based on functional/adaptive interpretations of the relative size and orientation of the orbits, body size and dietary reconstruction. Based on comparative data from extant taxa, this reconstruction implies that basal primates were also solitary, faunivorous, and arboreal. Recently, primates have been hypothesized to be primitively diurnal, based in part on the distribution of color-sensitive photoreceptor opsin genes and active trichromatic color vision in several extant strepsirrhines, as well as anthropoid primates (Tan & Li, 1999, Nature 402, 36; Li, 2000, Am. J. Phys. Anthropol. Suppl. 30, 318). If diurnality is primitive for all primates, then the functional and adaptive significance of aspects of strepsirrhine retinal morphology and other adaptations of the primate visual system, such as high-acuity stereopsis, have been misinterpreted for decades. This hypothesis also implies that nocturnality evolved numerous times in primates. However, the hypothesis that primates are primitively diurnal has not been analyzed in a phylogenetic context, nor have the activity patterns of several fossil primates been considered. This study investigated the evolution of activity patterns and trichromacy in primates using a new method for reconstructing activity patterns in fragmentary fossils and by reconstructing visual system character evolution at key ancestral nodes of primate higher taxa. Results support previous studies that reconstruct omomyiform primates as nocturnal. The larger body sizes of adapiform primates confound inferences regarding activity pattern evolution in this group. The hypothesis of diurnality and trichromacy as primitive for primates is not supported by the phylogenetic data. On the contrary, nocturnality and dichromatic vision are not only primitive for all primates, but also for extant strepsirrhines. Diurnality, and

  15. Information Systems in the University of Saskatchewan Libraries: A Vision for the 1990s.

    ERIC Educational Resources Information Center

    Saskatchewan Univ., Saskatoon. Libraries.

    This report describes the vision of the Information Systems Advisory Committee (ISAC) of an Information Systems Model for the 1990s. It includes an evaluation of the present automation environment at the university, a vision of library automation at the University of Saskatchewan between 1994 and 1999, and specific recommendations on such issues…

  16. 76 FR 8278 - Special Conditions: Gulfstream Model GVI Airplane; Enhanced Flight Vision System

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-14

    ... Flight Vision System AGENCY: Federal Aviation Administration (FAA), DOT. ACTION: Final special conditions..., Airplane and Flight Crew Interface Branch, ANM-111, Transport Standards Staff, Transport Airplane... Design Features The enhanced flight vision system (EFVS) is a novel or unusual design feature because...

  17. 78 FR 32078 - Special Conditions: Gulfstream Model G280 Airplane, Enhanced Flight Vision System (EFVS) With...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-29

    ... Privacy Act Statement can be found in the Federal Register published on April 11, 2000 (65 FR 19477-19478..., Enhanced Flight Vision System (EFVS) With Head-Up Display (HUD) AGENCY: Federal Aviation Administration... Aerospace Corporation, will have an advanced, enhanced-flight-vision system (EFVS). The EFVS is a novel...

  18. Precision calibration method for binocular vision measurement systems based on arbitrary translations and 3D-connection information

    NASA Astrophysics Data System (ADS)

    Yang, Jinghao; Jia, Zhenyuan; Liu, Wei; Fan, Chaonan; Xu, Pengtao; Wang, Fuji; Liu, Yang

    2016-10-01

    Binocular vision systems play an important role in computer vision, and high-precision system calibration is a necessary and indispensable process. In this paper, an improved calibration method for binocular stereo vision measurement systems based on arbitrary translations and 3D-connection information is proposed. First, a new method for calibrating the intrinsic parameters of a binocular vision system based on two translations with an arbitrary angle difference is presented, which reduces the effect of motion-actuator deviation on calibration accuracy. This method is simpler and more accurate than existing active-vision calibration methods and can provide a better initial value for the determination of the extrinsic parameters. Second, a 3D-connection calibration and optimization method is developed that links the information of the calibration target in different positions, further improving the accuracy of the system calibration. Calibration experiments show that the calibration error can be reduced to 0.09%, outperforming traditional methods in the experiments of this study.

  19. The Application of Lidar to Synthetic Vision System Integrity

    NASA Technical Reports Server (NTRS)

    Campbell, Jacob L.; UijtdeHaag, Maarten; Vadlamani, Ananth; Young, Steve

    2003-01-01

    One goal in the development of a Synthetic Vision System (SVS) is to create a system that can be certified by the Federal Aviation Administration (FAA) for use at various flight criticality levels. As part of NASA's Aviation Safety Program, Ohio University and NASA Langley have been involved in the research and development of real-time terrain database integrity monitors for SVS. Integrity monitors based on a consistency check with onboard sensors may be required if the inherent terrain database integrity is not sufficient for a particular operation. Sensors such as the radar altimeter and weather radar, which are available on most commercial aircraft, are currently being investigated for use in a real-time terrain database integrity monitor. This paper introduces the concept of using a Light Detection And Ranging (LiDAR) sensor as part of a real-time terrain database integrity monitor. A LiDAR system consists of a scanning laser ranger, an inertial measurement unit (IMU), and a Global Positioning System (GPS) receiver. Information from these three sensors can be combined to generate synthesized terrain models (profiles), which can then be compared to the stored SVS terrain model. This paper discusses an initial performance evaluation of the LiDAR-based terrain database integrity monitor using LiDAR data collected over Reno, Nevada. The paper will address the consistency checking mechanism and test statistic, sensitivity to position errors, and a comparison of the LiDAR-based integrity monitor to a radar altimeter-based integrity monitor.
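
    The paper's test statistic is only summarized in the record, so the following sketch substitutes a simple mean-absolute-disparity check between the LiDAR-synthesized and stored terrain profiles; the threshold and windowing are assumptions:

```python
import numpy as np

def integrity_statistic(lidar_profile, database_profile):
    """Mean absolute elevation disparity over a window of terrain samples."""
    disparity = np.asarray(lidar_profile) - np.asarray(database_profile)
    return np.mean(np.abs(disparity))

THRESHOLD_M = 15.0                     # alert limit in metres (assumed)

def database_consistent(lidar_profile, database_profile):
    """Flag the terrain database when the disparity statistic exceeds the limit."""
    return integrity_statistic(lidar_profile, database_profile) < THRESHOLD_M
```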

  20. Helmet-mounted pilot night vision systems: Human factors issues

    NASA Technical Reports Server (NTRS)

    Hart, Sandra G.; Brickner, Michael S.

    1989-01-01

    Helmet-mounted displays of infrared imagery (forward-looking infrared (FLIR)) allow helicopter pilots to perform low-level missions at night and in low visibility. However, pilots experience high visual and cognitive workload during these missions, and their performance capabilities may be reduced. Human factors problems inherent in existing systems stem from three primary sources: the nature of thermal imagery; the characteristics of specific FLIR systems; and the difficulty of using the FLIR system for flying and/or visually acquiring and tracking objects in the environment. The pilot night vision system (PNVS) in the Apache AH-64 provides a monochrome, 30 by 40 deg helmet-mounted display of infrared imagery. Thermal imagery is inferior to television imagery in both resolution and contrast ratio. Gray shades represent temperature differences rather than brightness variability, and images undergo significant changes over time. The limited field of view, displacement of the sensor from the pilot's eye position, and monocular presentation of a bright FLIR image (while the other eye remains dark-adapted) are all potential sources of disorientation, limitations in depth and distance estimation, sensations of apparent motion, and difficulties in target and obstacle detection. Insufficient information about human perceptual and performance limitations restrains the ability of human factors specialists to provide significantly improved specifications, training programs, or alternative designs. Additional research is required to determine the most critical problem areas and to propose solutions that consider the human as well as the development of technology.

  1. Robot vision system for pedestrian-flow detection

    NASA Astrophysics Data System (ADS)

    Tang, Yuan Y.; Lu, Yean J.; Suen, Ching Y.

    1992-04-01

    Traffic and transportation engineers continually require more accurate and larger amounts of pedestrian-flow data for numerous purposes. For example, the increasing use of pedestrian facilities such as building complexes, shopping malls, and airports in densely populated cities demands pedestrian-flow data for the planning, design, operation, and monitoring of these facilities. Currently, measurement of pedestrian-flow data is often performed manually. This paper proposes a robot vision system to measure the number and walking direction of pedestrians using difference-image and shape-reconstruction techniques. The system consists of eight steps: (1) conversion of video images, (2) digitization of frozen frames, (3) conversion of 256-grey-level images into bilevel images, (4) extraction of a rough sketch of each pedestrian using difference images, (5) removal of line noise, (6) reconstruction of the shape of the pedestrian, (7) measurement of the number of pedestrians, and (8) determination of the direction of pedestrian movement. In this system, the operations in each step depend only on local information. Thus, they can be performed independently in parallel. A very-large-scale-integration architecture can be implemented in this system to speed up the computation. The accuracy in measuring the number of pedestrians and their direction of travel is about 93% and 92%, respectively.
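
    Steps (3) and (4) lend themselves to a few lines of code; the sketch below assumes plain grey-level frames and a fixed binarization threshold, which the record does not specify:

```python
import numpy as np

def rough_pedestrian_sketch(frame_prev, frame_curr, level=128):
    """Binarize two consecutive grey frames and difference them.

    frame_prev, frame_curr : 2-D uint8 grey-level images of the same size
    level                  : binarization threshold (assumed value)
    """
    b_prev = frame_prev >= level        # step 3: 256 grey levels -> bilevel
    b_curr = frame_curr >= level
    return b_prev ^ b_curr              # step 4: difference image of moving regions
```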

  2. Vision aided inertial navigation system augmented with a coded aperture

    NASA Astrophysics Data System (ADS)

    Morrison, Jamie R.

    A Fresnel zone plate aperture produces diffraction patterns that change the shape of the focal blur pattern. When used as an aperture, the Fresnel zone plate produces multiple focal planes in the scene. The interference between the multiple focal planes produces changes in the blur that can be observed both between the focal planes and beyond the most distant focal plane. The Fresnel zone plate aperture and lens may be designed to change the focal blur pattern at greater depths, thereby improving the measurement performance of the coded aperture system. This research provides an in-depth study of the Fresnel zone plate used as a coded aperture, and of the performance improvement obtained by augmenting a single-camera vision-aided inertial navigation system with a Fresnel zone plate coded aperture. Design and analysis of a generalized coded aperture are presented and demonstrated, and special considerations for the Fresnel zone plate are given. Techniques to determine a continuous depth measurement from a coded image are also presented and evaluated through measurement. Finally, the measurement results from different aperture configurations are statistically modeled and compared with a simulated vision-aided navigation environment to predict the change in performance of a vision-aided inertial navigation system when augmented with a coded aperture.

  3. Machine vision inspection system for automobile gauge panel

    NASA Astrophysics Data System (ADS)

    Liu, Ming-Yuan; Wang, Dong-Wen; Shi, Hao

    1995-03-01

    A machine vision inspection system is designed and built for automatic inspection at the end of an automobile gauge-panel production line. The inspection items on the gauge panel are pointing errors on all scales of the five indicators, and possible damaged or missing warning lights and indicator light bulbs. The image acquisition camera is set to have a small field of view, and a CNC system is established to drive the camera to focus on any target on the gauge panel. The position of the camera is closed-loop controlled by an image-characteristic feedback control strategy. Automatic calibration is performed using a stochastic adaptive control scheme. A two-CPU computer system is established to ensure that real-time image processing, CNC control, and test-signal-source management work in parallel. Precision test signal sources for the speedometer, petrol gauge, oil-pressure indicator, water thermometer, and rheometer are designed and integrated under computer management and control. Each scale and pointer on the gauge panel has its own set of image processing parameters; therefore, a learning-sequence method is designed to reduce the programming load and increase flexibility, allowing quick adaptation to the inspection of various products.

  4. New vision solar system mission study. Final report

    SciTech Connect

    Mondt, J.F.; Zubrin, R.M.

    1996-03-01

    The vision for the future of the planetary exploration program includes the capability to deliver "constellations" or "fleets" of microspacecraft to a planetary destination. These fleets will act in a coordinated manner to gather science data from a variety of locations on or around the target body, thus providing detailed, global coverage without requiring development of a single large, complex and costly spacecraft. Such constellations of spacecraft, coupled with advanced information processing and visualization techniques and high-rate communications, could provide the basis for development of a "virtual presence" in the solar system. A goal could be the near real-time delivery of planetary images and video to a wide variety of users in the general public and the science community. This will be a major step in making the solar system accessible to the public and will help make solar system exploration a part of the human experience on Earth.

  5. Neural Summation in the Hawkmoth Visual System Extends the Limits of Vision in Dim Light.

    PubMed

    Stöckl, Anna Lisa; O'Carroll, David Charles; Warrant, Eric James

    2016-03-21

    Most of the world's animals are active in dim light and depend on good vision for the tasks of daily life. Many have evolved visual adaptations that permit a performance superior to that of manmade imaging devices [1]. In insects, a major model visual system, nocturnal species show impressive visual abilities ranging from flight control [2, 3], to color discrimination [4, 5], to navigation using visual landmarks [6-8] or dim celestial compass cues [9, 10]. In addition to optical adaptations that improve their sensitivity in dim light [11], neural summation of light in space and time-which enhances the coarser and slower features of the scene at the expense of noisier finer and faster features-has been suggested to improve sensitivity in theoretical [12-14], anatomical [15-17], and behavioral [18-20] studies. How these summation strategies function neurally is, however, presently unknown. Here, we quantified spatial and temporal summation in the motion vision pathway of a nocturnal hawkmoth. We show that spatial and temporal summation combine supralinearly to substantially increase contrast sensitivity and visual information rate over four decades of light intensity, enabling hawkmoths to see at light levels 100 times dimmer than without summation. Our results reveal how visual motion is calculated neurally in dim light and how spatial and temporal summation improve sensitivity while simultaneously maximizing spatial and temporal resolution, thus extending models of insect motion vision derived predominantly from diurnal flies. Moreover, the summation strategies we have revealed may benefit manmade vision systems optimized for variable light levels [21]. PMID:26948877
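
    As a purely illustrative aside (not the moth's neural computation), box averaging over space and time shows the trade the authors describe, since averaging N photon-noise-limited samples improves the signal-to-noise ratio by roughly sqrt(N) while coarsening spatial and temporal resolution:

```python
import numpy as np

def summed_frame(frames, spatial=3, temporal=5):
    """Pool a (T, H, W) image stack spatially and temporally.

    Returns one frame averaged over the last `temporal` samples and over
    non-overlapping `spatial` x `spatial` pixel blocks (pool sizes assumed).
    """
    t_pool = frames[-temporal:].mean(axis=0)            # temporal summation
    H, W = t_pool.shape
    H2, W2 = H - H % spatial, W - W % spatial           # crop to block multiples
    blocks = t_pool[:H2, :W2].reshape(H2 // spatial, spatial,
                                      W2 // spatial, spatial)
    return blocks.mean(axis=(1, 3))                     # spatial summation
```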

  6. A reconfigurable real-time morphological system for augmented vision

    NASA Astrophysics Data System (ADS)

    Gibson, Ryan M.; Ahmadinia, Ali; McMeekin, Scott G.; Strang, Niall C.; Morison, Gordon

    2013-12-01

    There is a significant number of visually impaired individuals who suffer sensitivity loss to high spatial frequencies, for whom current optical devices are limited in the degree of visual aid and practical application. Digital image and video processing offers a variety of effective visual enhancement methods that can be utilised in a practical augmented-vision head-mounted display device. The high spatial frequencies of an image can be extracted by edge detection techniques and overlaid on top of the original image to improve visual perception among the visually impaired. Augmented visual aid devices require highly user-customisable algorithm designs for subjective configuration per task, whereas current digital image processing visual aids offer very few user-configurable options. This paper presents a highly user-reconfigurable morphological edge enhancement system on a field-programmable gate array, where the morphological, internal and external edge gradients can be selected from the presented architecture with specified edge thickness and magnitude. In addition, the morphology architecture supports reconfigurable structuring-element shapes and configurable morphological operations. The proposed morphology-based visual enhancement system introduces a high degree of user flexibility in addition to meeting real-time constraints, achieving 93 fps at high-definition image resolution.
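
    In software, the three selectable gradients reduce to differences of grey-scale dilations and erosions; the sketch below mirrors that structure with scipy.ndimage, with the structuring-element size and overlay magnitude as assumed user parameters (the FPGA datapath itself is not reproduced):

```python
import numpy as np
from scipy import ndimage

def edge_gradients(image, size=3):
    """Return the three morphological edge gradients of a grey image."""
    dil = ndimage.grey_dilation(image, size=(size, size))
    ero = ndimage.grey_erosion(image, size=(size, size))
    return {
        "morphological": dil - ero,     # full morphological gradient
        "internal": image - ero,        # inner edge
        "external": dil - image,        # outer edge
    }

def enhance(image, mode="morphological", magnitude=1.0, size=3):
    """Overlay the selected edge gradient on the original image."""
    edges = edge_gradients(image.astype(float), size)[mode]
    return np.clip(image + magnitude * edges, 0, 255)
```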

  7. Natural language understanding and speech recognition for industrial vision systems

    NASA Astrophysics Data System (ADS)

    Batchelor, Bruce G.

    1992-11-01

    The accepted method of programming machine vision systems for a new application is to incorporate sub-routines from a standard library into code, written specially for the given task. Typical programming languages that might be used here are Pascal, C, and assembly code, although other `conventional' (i.e., imperative) languages are often used instead. The representation of an algorithm to recognize a certain object, in the form of, say, a C language program is clumsy and unnatural, compared to the alternative process of describing the object itself and leaving the software to search for it. The latter method, known as declarative programming, is used extensively both when programming in Prolog and when people talk to one another in English, or other natural languages. Programs to understand a limited sub-set of a natural language can also be written conveniently in Prolog. The article considers the prospects for talking to an image processing system, using only slightly constrained English. Moderately priced speech recognition devices, which interface to a standard desk-top computer and provide a limited repertoire (200 words) as well as the ability to identify isolated words, are already available commercially. At the moment, the goal of talking in English to a computer is incompletely fulfilled. Yet, sufficient progress has been made to encourage greater effort in this direction.

  8. Combination of a vision system and a coordinate measuring machine for rapid coordinate metrology

    NASA Astrophysics Data System (ADS)

    Qu, Yufu; Pu, Zhaobang; Liu, Guodong

    2002-09-01

    This paper presents a novel methodology that integrates a vision system and a coordinate measuring machine for rapid coordinate metrology. Rapid acquisition of coordinate data from parts with tiny dimensions, complex geometry, and soft or fragile material has many applications. Typical examples include large-scale integrated circuits, glass or plastic part measurement, and reverse engineering in rapid product design and realization. In this paper, a novel measuring methodology for a vision-integrated coordinate measuring system is developed and demonstrated. The vision coordinate measuring system is characterized by the integrated use of a high-precision coordinate measuring machine (CMM), a vision system, advanced computational software, and the associated electronics. The vision system includes a charge-coupled device (CCD) camera, a self-adapting brightness power supply, and a graphics workstation with an image processing board. The vision system, together with intelligent feature recognition and auto-focus algorithms, provides the spatial coordinates of feature points on the global part profile after the system has been calibrated. The measured data may be fitted to geometric elements of the part profile. The obtained results are subsequently used to compute parameters such as curvature radius, distance, and shape error, and to perform surface reconstruction. By integrating the vision system with the CMM, a highly automated, high-speed, 3D coordinate acquisition system is developed. It has potential applications in a whole spectrum of manufacturing problems, with a major impact on metrology, inspection, and reverse engineering.

  9. Using parallel evolutionary development for a biologically-inspired computer vision system for mobile robots.

    PubMed

    Wright, Cameron H G; Barrett, Steven F; Pack, Daniel J

    2005-01-01

    We describe a new approach to attacking the problem of robust computer vision for mobile robots. The overall strategy is to mimic the biological evolution of animal vision systems. Our basic imaging sensor is based upon the eye of the common house fly, Musca domestica. The computational algorithms are a mix of traditional image processing, subspace techniques, and multilayer neural networks.

  10. Development of image processing LSI "SuperVchip" for real-time vision systems

    NASA Astrophysics Data System (ADS)

    Muramatsu, Shoji; Kobayashi, Yoshiki; Otsuka, Yasuo; Shojima, Hiroshi; Tsutsumi, Takayuki; Imai, Toshihiko; Yamada, Shigeyoshi

    2002-03-01

    A new image processing LSI, the SuperVchip, with high-performance computing power has been developed. The SuperVchip has powerful capabilities for vision systems, as follows: 1. General image processing with 3x3, 5x5, and 7x7 kernels for high-speed filtering. 2. 16 parallel gray search engine units for robust template matching. 3. 49 block-matching PEs (processing elements) that calculate the summation of absolute differences in parallel for stereo vision. 4. A color extraction unit for color object recognition. The SuperVchip also integrates peripheral functions of vision systems, such as a video interface, an extended PCI interface, a RISC engine interface, and an image memory controller, on a single chip. Therefore, small, high-performance vision systems can be realized with the SuperVchip. In this paper, the above circuits are presented, and the architecture of a vision device equipped with the SuperVchip and its performance are also described.
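
    The sum-of-absolute-differences (SAD) computation performed by the block-matching PEs is easy to state in code; the following serial sketch (block size and search range are assumed values) evaluates what the chip's 49 PEs would compute in parallel:

```python
import numpy as np

def sad_disparity(left, right, row, col, block=7, max_d=32):
    """Best horizontal disparity for the block centred at (row, col).

    left, right : rectified grey stereo images as 2-D arrays
    """
    h = block // 2
    ref = left[row - h:row + h + 1, col - h:col + h + 1].astype(int)
    best_d, best_sad = 0, np.inf
    for d in range(min(max_d, col - h) + 1):      # candidate disparities
        cand = right[row - h:row + h + 1,
                     col - d - h:col - d + h + 1].astype(int)
        sad = np.abs(ref - cand).sum()            # one PE's job per candidate
        if sad < best_sad:
            best_d, best_sad = d, sad
    return best_d
```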

  11. Fostering a regional vision on river systems by remote sensing

    NASA Astrophysics Data System (ADS)

    Bizzi, S.; Piegay, H.; Demarchi, L.

    2015-12-01

    River classification and the derived knowledge about river systems have until recently relied on discontinuous field campaigns and visual interpretation of aerial images. For this reason, building a regional vision of river systems based on a systematic and coherent set of hydromorphological indicators was, and still is, a research challenge. Remote sensing data have for some years offered notable opportunities to shift this paradigm, providing an unprecedented amount of spatially distributed data over large (e.g., regional) scales. Here, we have implemented a river characterization framework based on color infrared orthophotos at 40 cm and a LIDAR-derived DTM at 5 m, acquired simultaneously in 2009-2010 for the entire Piedmont Region, Italy (25,400 km2). 1500 km of river systems have been characterized in terms of the typology, geometry, and topography of hydromorphological features. The framework delineates the valley bottom of each river course and maps, by a semi-automated procedure, water channels, unvegetated and vegetated sediment bars, islands, and riparian corridors. Using a range of statistical techniques, the river systems have been segmented and classified with an objective, quantitative, and therefore repeatable approach. Such a regional database enhances our ability to address a number of research and management challenges, such as: i) quantifying the shape and topography of channel forms for different river functional types, and investigating their relationships with potential drivers like hydrology, geology, land use, and historical contingency; ii) localizing the most degraded and best-functioning river stretches so as to prioritize finer-scale monitoring and set quantifiable restoration targets; iii) providing indications for future RS acquisition campaigns so as to start monitoring river processes at the regional scale. The Piedmont Region in Italy is used here as a laboratory of concrete examples and analyses to discuss our current ability to answer these challenges in river science.

  12. Machine Vision Giving Eyes to Robots. Resources in Technology.

    ERIC Educational Resources Information Center

    Technology Teacher, 1990

    1990-01-01

    This module introduces machine vision, which can be used for inspection, robot guidance and part sorting. The future for machine vision will include new technology and will bring vision systems closer to the ultimate vision processor, the human eye. Includes a student quiz, outcomes, and activities. (JOW)

  13. Vision System for Remote Strain/Deformation Measurement

    SciTech Connect

    Hovis, G.L.

    1999-01-26

    Machine vision metrology is ideally suited to the task of non-contact/non-intrusive deformation and strain measurement in a remote system. The objective of this work-in-progress is to develop a compact instrument for strain measurement consisting of a camera, image capture card, PC, software, and light source. The instrument is portable and useful in a variety of applications and environments. A digital camera with a microscopic lens is connected to an image capture card in a PC. Commercially available image processing software is used to control the image capture and image processing steps leading up to displacement/strain measurement. Image processing steps include filtering and edge/feature enhancement. Custom software is required to control/automate certain elements of the acquisition and processing. Images of a region on the surface of a specimen are acquired at hold points (during static tests) or at regular time intervals (during transients). Salient features in the image scene (microstructure, oxide deposits, etc.) are observed in subsequent images. The strain measurement algorithm characterizes relative motion of the salient features with individual displacement vectors yielding 2-D deformation equations. The set of deformation equations is solved simultaneously to yield unknown deformation gradient terms that are used to express 2-D strain. The overall concept, theory, and test results to date are presented herein.
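
    One plausible reading of the displacement-to-strain step is a least-squares fit of a 2-D deformation gradient to the tracked feature motions; the sketch below assumes that formulation and a Green-Lagrange strain measure, which the abstract does not spell out:

```python
import numpy as np

def strain_from_features(X0, X1):
    """Fit a 2-D deformation gradient to tracked features and form the strain.

    X0, X1 : (N, 2) feature positions before/after deformation, N >= 3
    Returns (F, E): deformation gradient and Green-Lagrange strain tensor.
    """
    X0h = np.hstack([X0, np.ones((len(X0), 1))])     # homogeneous: allow an offset
    A, *_ = np.linalg.lstsq(X0h, X1, rcond=None)     # solve X1 ~ X0h @ A
    F = A[:2].T                                      # deformation gradient (2 x 2)
    E = 0.5 * (F.T @ F - np.eye(2))                  # Green-Lagrange strain
    return F, E
```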

  14. ARM-based visual processing system for prosthetic vision.

    PubMed

    Matteucci, Paul B; Byrnes-Preston, Philip; Chen, Spencer C; Lovell, Nigel H; Suaning, Gregg J

    2011-01-01

    A growing number of prosthetic devices have been shown to provide visual perception to the profoundly blind through electrical neural stimulation. These first-generation devices offer promising outcomes to those affected by degenerative disorders such as retinitis pigmentosa. Although prosthetic approaches vary in their placement of the stimulating array (visual cortex, optic-nerve, epi-retinal surface, sub-retinal surface, supra-choroidal space, etc.), most of the solutions incorporate an externally-worn device to acquire and process video to provide the implant with instructions on how to deliver electrical stimulation to the patient, in order to elicit phosphenized vision. With the significant increase in availability and performance of low power-consumption smart phone and personal device processors, the authors investigated the use of a commercially available ARM (Advanced RISC Machine) device as an externally-worn processing unit for a prosthetic neural stimulator for the retina. A 400 MHz Samsung S3C2440A ARM920T single-board computer was programmed to extract 98 values from a 1.3 megapixel OV9650 CMOS camera using impulse, regional averaging and Gaussian sampling algorithms. Power consumption and speed of video processing were compared with results reported for similar devices. The results show that by using code optimization, the system is capable of driving a 98-channel implantable device for the restoration of visual percepts to the blind.
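
    Of the three sampling algorithms mentioned, regional averaging is the simplest to sketch; the electrode grid shape below is a hypothetical 7 x 14 layout chosen only to yield 98 values, not the device's actual arrangement:

```python
import numpy as np

def phosphene_values(frame, rows=7, cols=14):
    """Pool a 2-D grey frame into rows*cols regional means, row-major.

    Each mean drives one stimulation channel (98 for the assumed 7 x 14 grid).
    """
    H, W = frame.shape
    values = np.empty(rows * cols)
    for i in range(rows):
        for j in range(cols):
            region = frame[i * H // rows:(i + 1) * H // rows,
                           j * W // cols:(j + 1) * W // cols]
            values[i * cols + j] = region.mean()
    return values
```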

  15. The forms of knowledge mobilized in some machine vision systems.

    PubMed Central

    Brady, M

    1997-01-01

    This paper describes a number of computer vision systems that we have constructed, and which are firmly based on knowledge of diverse sorts. However, that knowledge is often represented in a way that is only accessible to a limited set of processes, that make limited use of it, and though the knowledge is amenable to change, in practice it can only be changed in rather simple ways. The rest of the paper addresses the questions: (i) what knowledge is mobilized in the furtherance of a perceptual task?; (ii) how is that knowledge represented?; and (iii) how is that knowledge mobilized? First we review some cases of early visual processing where the mobilization of knowledge seems to be a key contributor to success yet where the knowledge is deliberately represented in a quite inflexible way. After considering the knowledge that is involved in overcoming the projective nature of images, we move the discussion to the knowledge that was required in programs to match, register, and recognize shapes in a range of applications. Finally, we discuss the current state of process architectures for knowledge mobilization. PMID:9304690

  16. Multi-image registration for an enhanced vision system

    NASA Astrophysics Data System (ADS)

    Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.

    2003-08-01

    An Enhanced Vision System (EVS) utilizing multi-sensor image fusion is currently under development at the NASA Langley Research Center. The EVS will provide enhanced images of the flight environment to assist pilots in poor visibility conditions. Multi-spectral images obtained from a short wave infrared (SWIR), a long wave infrared (LWIR), and a color visible band CCD camera, are enhanced and fused using the Retinex algorithm. The images from the different sensors do not have a uniform data structure: the three sensors not only operate at different wavelengths, but they also have different spatial resolutions, optical fields of view (FOV), and bore-sighting inaccuracies. Thus, in order to perform image fusion, the images must first be co-registered. Image registration is the task of aligning images taken at different times, from different sensors, or from different viewpoints, so that all corresponding points in the images match. In this paper, we present two methods for registering multiple multi-spectral images. The first method performs registration using sensor specifications to match the FOVs and resolutions directly through image resampling. In the second method, registration is obtained through geometric correction based on a spatial transformation defined by user selected control points and regression analysis.
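
    The second (control-point) method amounts to a regression for a spatial transformation; the sketch below assumes an affine model fitted by least squares, though the record does not state which transformation family the authors used:

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Regress an affine mapping from matched control points.

    src_pts, dst_pts : (N, 2) user-selected control points, N >= 3
    Returns M, a (3, 2) matrix such that [x, y, 1] @ M ~ [x', y'].
    """
    src_h = np.hstack([src_pts, np.ones((len(src_pts), 1))])
    M, *_ = np.linalg.lstsq(src_h, dst_pts, rcond=None)
    return M

def apply_affine(M, pts):
    """Map (N, 2) coordinates from one sensor's frame into the other's."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    return pts_h @ M
```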

  17. ARM-based visual processing system for prosthetic vision.

    PubMed

    Matteucci, Paul B; Byrnes-Preston, Philip; Chen, Spencer C; Lovell, Nigel H; Suaning, Gregg J

    2011-01-01

    A growing number of prosthetic devices have been shown to provide visual perception to the profoundly blind through electrical neural stimulation. These first-generation devices offer promising outcomes to those affected by degenerative disorders such as retinitis pigmentosa. Although prosthetic approaches vary in their placement of the stimulating array (visual cortex, optic-nerve, epi-retinal surface, sub-retinal surface, supra-choroidal space, etc.), most of the solutions incorporate an externally-worn device to acquire and process video to provide the implant with instructions on how to deliver electrical stimulation to the patient, in order to elicit phosphenized vision. With the significant increase in availability and performance of low power-consumption smart phone and personal device processors, the authors investigated the use of a commercially available ARM (Advanced RISC Machine) device as an externally-worn processing unit for a prosthetic neural stimulator for the retina. A 400 MHz Samsung S3C2440A ARM920T single-board computer was programmed to extract 98 values from a 1.3 megapixel OV9650 CMOS camera using impulse, regional averaging and Gaussian sampling algorithms. Power consumption and speed of video processing were compared with results reported for similar devices. The results show that by using code optimization, the system is capable of driving a 98-channel implantable device for the restoration of visual percepts to the blind. PMID:22255197

  18. Multi-Image Registration for an Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn

    2002-01-01

    An Enhanced Vision System (EVS) utilizing multi-sensor image fusion is currently under development at the NASA Langley Research Center. The EVS will provide enhanced images of the flight environment to assist pilots in poor visibility conditions. Multi-spectral images obtained from a short wave infrared (SWIR), a long wave infrared (LWIR), and a color visible band CCD camera, are enhanced and fused using the Retinex algorithm. The images from the different sensors do not have a uniform data structure: the three sensors not only operate at different wavelengths, but they also have different spatial resolutions, optical fields of view (FOV), and bore-sighting inaccuracies. Thus, in order to perform image fusion, the images must first be co-registered. Image registration is the task of aligning images taken at different times, from different sensors, or from different viewpoints, so that all corresponding points in the images match. In this paper, we present two methods for registering multiple multi-spectral images. The first method performs registration using sensor specifications to match the FOVs and resolutions directly through image resampling. In the second method, registration is obtained through geometric correction based on a spatial transformation defined by user selected control points and regression analysis.

  19. Neural correlates of active vision: An fMRI comparison of natural reading and scene viewing.

    PubMed

    Choi, Wonil; Henderson, John M

    2015-08-01

    Theories of eye movement control during active vision tasks such as reading and scene viewing have primarily been developed and tested using data from eye tracking and computational modeling, and little is currently known about the neurocognition of active vision. The current fMRI study was conducted to examine the nature of the cortical networks that are associated with active vision. Subjects were asked to read passages for meaning and view photographs of scenes for a later memory test. The eye movement control network comprising frontal eye field (FEF), supplementary eye fields (SEF), and intraparietal sulcus (IPS), commonly activated during single-saccade eye movement tasks, were also involved in reading and scene viewing, suggesting that a common control network is engaged when eye movements are executed. However, the activated locus of the FEF varied across the two tasks, with medial FEF more activated in scene viewing relative to passage reading and lateral FEF more activated in reading than scene viewing. The results suggest that eye movements during active vision are associated with both domain-general and domain-specific components of the eye movement control network.

  1. Application of edge detection algorithm for vision guided robotics assembly system

    NASA Astrophysics Data System (ADS)

    Balabantaray, Bunil Kumar; Jha, Panchanand; Biswal, Bibhuti Bhusan

    2013-12-01

    Machine vision systems play a major role in making robotic assembly systems autonomous. Detection and identification of the correct part are important tasks that must be handled carefully by the vision system to initiate the process. The process consists of many sub-processes, such as image capture, digitization, and enhancement, that reconstruct the part for subsequent operations. Edge detection of the grabbed image therefore plays an important role in the entire image-processing activity, and one needs to choose the correct tool for the process with respect to the given environment. In this paper, a comparative study of edge detection algorithms for object grasping in a robotic assembly system is presented. The work was performed in Matlab R2010a Simulink. Four algorithms are compared: the Canny, Roberts, Prewitt, and Sobel edge detectors. An attempt has been made to find the best algorithm for the problem. It is found that the Canny edge detector gives better results and minimum error for the intended task.
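
    For readers who want to reproduce such a comparison outside Matlab, the following sketch runs the same four detectors with scikit-image; the test image, Canny sigma, and Otsu thresholding are illustrative choices rather than the paper's setup.

      import numpy as np
      from skimage import data, filters, feature

      image = data.camera()  # stand-in for the grabbed part image

      edges = {
          "roberts": filters.roberts(image),
          "prewitt": filters.prewitt(image),
          "sobel":   filters.sobel(image),
          "canny":   feature.canny(image, sigma=2.0),  # boolean edge map
      }

      # Crude comparison: fraction of pixels flagged as edges after thresholding.
      for name, e in edges.items():
          binary = e if e.dtype == bool else e > filters.threshold_otsu(e)
          print(f"{name}: {binary.mean():.3%} edge pixels")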

  2. Evaluating the Effects of Dimensionality in Advanced Avionic Display Concepts for Synthetic Vision Systems

    NASA Technical Reports Server (NTRS)

    Alexander, Amy L.; Prinzel, Lawrence J., III; Wickens, Christopher D.; Kramer, Lynda J.; Arthur, Jarvis J.; Bailey, Randall E.

    2007-01-01

    Synthetic vision systems provide an in-cockpit view of terrain and other hazards via a computer-generated display representation. Two experiments examined several display concepts for synthetic vision and evaluated how such displays modulate pilot performance. Experiment 1 (24 general aviation pilots) compared three navigational display (ND) concepts: 2D coplanar, 3D, and split-screen. Experiment 2 (12 commercial airline pilots) evaluated baseline 'blue sky/brown ground' or synthetic vision-enabled primary flight displays (PFDs) and three ND concepts: 2D coplanar with and without synthetic vision and a dynamic multi-mode rotatable exocentric format. In general, the results pointed to an overall advantage for a split-screen format, whether it be stand-alone (Experiment 1) or available via rotatable viewpoints (Experiment 2). Furthermore, Experiment 2 revealed benefits associated with utilizing synthetic vision in both the PFD and ND representations and the value of combined ego- and exocentric presentations.

  3. Design of a perspective flight guidance display for a synthetic vision system

    NASA Astrophysics Data System (ADS)

    Gross, Martin; Mayer, Udo; Kaufhold, Rainer

    1998-07-01

    Adverse weather conditions affect flight safety as well as the productivity of the air traffic industry. The problem is most evident in the airport area (taxiing, takeoff, approach, and landing). Productivity drops because airport resources cannot be used optimally, and canceled and delayed flights lead directly to additional costs for the airlines. Against the background of problems aggravated by a predicted increase in air traffic, the European Union launched the project AWARD (All Weather ARrival and Departure) in June 1996. Eleven European aerospace companies and research institutions are participating, and the project will be finished by the end of 1999. The subject of AWARD is the development of a Synthetic Vision System (based on databases and navigation) and an Enhanced Vision System (based on sensors such as FLIR and MMWR). Darmstadt University of Technology is responsible for the development of the SVS prototype. The SVS application depends on precise navigation, databases for terrain and flight-relevant information, and a flight guidance display. The objective is to allow landings under CAT III a/b conditions independently of CAT III ILS airport installations. One goal of SVS is to enhance the situation awareness of pilots during all airport-area operations by designing an appropriate man-machine interface for the display. This paper describes the current state of the research and development of the Synthetic Vision System being developed in AWARD, the methodology used to identify the information that should be displayed, the human factors that influenced the basic design of the SVS, and some of the planned activities for the flight simulation tests.

  4. Using Vision Metrology System for Quality Control in Automotive Industries

    NASA Astrophysics Data System (ADS)

    Mostofi, N.; Samadzadegan, F.; Roohy, Sh.; Nozari, M.

    2012-07-01

    The need for more accurate measurements at different stages of industrial applications, such as design, production, and installation, is the main reason industry has been encouraged to adopt industrial photogrammetry (vision metrology systems). Given the main advantages of photogrammetric methods, such as greater economy, a high level of automation, non-contact measurement, more flexibility, and high accuracy, the method competes well with traditional industrial measurement methods. For industries that make objects from a main reference model without any mathematical model of it, the producer's main problem is evaluation of the production line. The problem is complicated by the fact that both the reference and the product exist only as physical objects, so they can be compared only by direct measurement. In such cases, producers build fixtures that fit the reference with limited accuracy; in practical reports, the available precision is sometimes no better than millimeters. We used a non-metric, high-resolution digital camera for this investigation, and the case study examined in this paper is an automobile chassis. In this research, a stable photogrammetric network was designed for measuring the industrial object (both reference and product), and the differences between the reference and product objects were then obtained using bundle adjustment and self-calibration methods. These differences help the producer improve the production workflow and deliver more accurate products. The results of this research demonstrate the high potential of the proposed method in industrial fields and prove its efficiency and reliability using the RMSE criterion. The RMSE achieved for this case study is smaller than 200 microns, demonstrating the high capability of the implemented approach.
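
    A minimal sketch of the comparison step, assuming corresponding 3-D points have already been measured on the reference and the product: align the two point sets with a best-fit rigid transform (Kabsch) and report per-point deviations and RMSE. This is a generic formulation, not the paper's bundle-adjustment pipeline.

      import numpy as np

      def rigid_align_rmse(reference, product):
          """Align product points to reference with a best-fit rotation and
          translation (Kabsch), then report per-point deviations and RMSE.
          reference, product: (N, 3) arrays of corresponding 3-D points.
          """
          ref_c = reference - reference.mean(axis=0)
          prod_c = product - product.mean(axis=0)
          H = prod_c.T @ ref_c
          U, _, Vt = np.linalg.svd(H)
          d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection
          R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
          aligned = prod_c @ R.T + reference.mean(axis=0)
          residuals = np.linalg.norm(aligned - reference, axis=1)
          return residuals, np.sqrt(np.mean(residuals ** 2))

    The residuals localize where the product deviates from the reference, and the RMSE can be compared against tolerances such as the 200-micron figure reported above.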

  5. Measurement of meat color using a computer vision system.

    PubMed

    Girolami, Antonio; Napolitano, Fabio; Faraone, Daniela; Braghieri, Ada

    2013-01-01

    The limits of the colorimeter and of an image analysis technique in evaluating the color of beef, pork, and chicken were investigated. The Minolta CR-400 colorimeter and a computer vision system (CVS) were employed to measure colorimetric characteristics. To evaluate the chromatic fidelity of the image of the sample displayed on the monitor, similarity tests were carried out using a trained panel. During the first similarity test, the panelists observed the actual meat sample and the sample image on the monitor at the same time in order to evaluate the similarity between them (test A); they found the digital images very similar to the actual samples (P<0.001). The panelists were also asked to evaluate the similarity between two colors, both generated by the software Adobe Photoshop CS3, one using the L, a, and b values read by the colorimeter and the other obtained using the CVS (test B); which of the two colors was more similar to the sample visualized on the monitor was also assessed (test C). In test B the panelists found significant differences between the CVS- and colorimeter-based colors (P<0.001). Test C showed that the color of the sample on the monitor was more similar to the CVS-generated color than to the colorimeter-generated color. The differences between the values of L, a, b, hue angle, and chroma obtained with the CVS and the colorimeter were statistically significant (P<0.05-0.001). These results showed that the colorimeter did not generate coordinates corresponding to the true color of meat, whereas the CVS method seemed to give valid measurements that reproduced a color very similar to the real one.
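
    A common way to quantify such instrument disagreement is the CIE76 color difference, the Euclidean distance in Lab space. The readings below are invented for illustration, not data from the study.

      import numpy as np

      def delta_e_cie76(lab1, lab2):
          """Euclidean distance between two CIELAB colors (CIE76 Delta E)."""
          return np.linalg.norm(np.asarray(lab1) - np.asarray(lab2))

      # Illustrative readings for the same sample (not data from the study).
      colorimeter_lab = (38.2, 18.5, 9.7)
      cvs_lab         = (41.0, 22.1, 11.3)
      print(delta_e_cie76(colorimeter_lab, cvs_lab))  # ~4.8; >2-3 is visible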

  6. Design of a dynamic test platform for autonomous robot vision systems

    NASA Technical Reports Server (NTRS)

    Rich, G. C.

    1980-01-01

    The concept and design of a dynamic test platform for the development and evaluation of a robot vision system are discussed. The platform is to serve as a diagnostic and developmental tool for future work with the RPI Mars Rover's multi-laser/multi-detector vision system. The platform allows testing of the vision system while its attitude is varied, statically or periodically. The vision system is mounted on the test platform, where it can be subjected to a wide variety of simulated motions and thus examined in a controlled, quantitative fashion. Defining and modeling Rover motions and designing the platform to emulate these motions are also discussed. Individual aspects of the design process, such as the structure, drive linkages, and motors and transmissions, are treated separately.

  7. Lidar multi-range integrated Dewar assembly (IDA) for active-optical vision navigation sensor

    NASA Astrophysics Data System (ADS)

    Mayner, Philip; Clemet, Ed; Asbrock, Jim; Chen, Isabel; Getty, Jonathan; Malone, Neil; De Loo, John; Giroux, Mark

    2013-09-01

    A multi-range focal plane was developed and delivered by Raytheon Vision Systems for a docking system that was demonstrated on STS-134. This required state-of-the-art focal-plane and electronics synchronization to capture nanosecond-length laser pulses and determine ranges with an accuracy of less than 1 inch.
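
    A back-of-the-envelope check shows why such tight synchronization is required: for a time-of-flight range of c·t/2, sub-inch accuracy implies round-trip timing resolution on the order of 170 ps.

      c = 299_792_458.0  # speed of light, m/s

      def timing_resolution_for(range_accuracy_m):
          """Round-trip timing resolution needed for a given range accuracy:
          range = c * t / 2, so dt = 2 * dr / c."""
          return 2.0 * range_accuracy_m / c

      print(timing_resolution_for(0.0254))  # 1 inch -> ~1.7e-10 s (~170 ps)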

  8. An Inquiry-Based Vision Science Activity for Graduate Students and Postdoctoral Research Scientists

    NASA Astrophysics Data System (ADS)

    Putnam, N. M.; Maness, H. L.; Rossi, E. A.; Hunter, J. J.

    2010-12-01

    The vision science activity was originally designed for the 2007 Center for Adaptive Optics (CfAO) Summer School. Participants were graduate students, postdoctoral researchers, and professionals studying the basics of adaptive optics. The majority were working in fields outside vision science, mainly astronomy and engineering. The primary goal of the activity was to give participants first-hand experience with the use of a wavefront sensor designed for clinical measurement of the aberrations of the human eye and to demonstrate how the resulting wavefront data generated from these measurements can be used to assess optical quality. A secondary goal was to examine the role wavefront measurements play in the investigation of vision-related scientific questions. In 2008, the activity was expanded to include a new section emphasizing defocus and astigmatism and vision testing/correction in a broad sense. As many of the participants were future post-secondary educators, a final goal of the activity was to highlight the inquiry-based approach as a distinct and effective alternative to traditional laboratory exercises. Participants worked in groups throughout the activity and formative assessment by a facilitator (instructor) was used to ensure that participants made progress toward the content goals. At the close of the activity, participants gave short presentations about their work to the whole group, the major points of which were referenced in a facilitator-led synthesis lecture. We discuss highlights and limitations of the vision science activity in its current format (2008 and 2009 summer schools) and make recommendations for its improvement and adaptation to different audiences.

  9. Poor Vision, Functioning, and Depressive Symptoms: A Test of the Activity Restriction Model

    ERIC Educational Resources Information Center

    Bookwala, Jamila; Lawson, Brendan

    2011-01-01

    Purpose: This study tested the applicability of the activity restriction model of depressed affect to the context of poor vision in late life. This model hypothesizes that late-life stressors contribute to poorer mental health not only directly but also indirectly by restricting routine everyday functioning. Method: We used data from a national…

  10. The study of calibration and epipolar geometry for the stereo vision system built by fisheye lenses

    NASA Astrophysics Data System (ADS)

    Zhang, Baofeng; Lu, Chunfang; Röning, Juha; Feng, Weijia

    2015-01-01

    A fish-eye lens is a camera lens with a short focal length (f = 6~16 mm) whose field of view (FOV) approaches or even exceeds 180×180 degrees. Many studies show that a multiple-view geometry system built with fish-eye lenses yields a larger stereo field than a traditional stereo vision system based on a pair of perspective projection images. Since a fish-eye camera usually has a wider-than-hemispherical FOV, most image processing approaches based on the pinhole camera model for conventional stereo vision are not suited to this category of stereo vision built with fish-eye lenses. This paper focuses on the calibration and epipolar rectification method for a novel machine vision system set up with four fish-eye lenses, called the Special Stereo Vision System (SSVS). The characteristic of SSVS is that it can produce 3D coordinate information from the whole global observation space while simultaneously acquiring a blind-area-free 360º×360º panoramic image, using a single vision device and one static shot. Parameter calibration and epipolar rectification are the basis for SSVS to realize 3D reconstruction and panoramic image generation.
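
    Why pinhole-based processing breaks down can be seen from the projection models themselves. The sketch below contrasts a common fish-eye model (equidistant, r = f·θ) with the pinhole model (r = f·tan θ); the abstract does not state which fish-eye model SSVS uses, so equidistance is an assumption.

      import numpy as np

      def project(point_3d, f, model="equidistant"):
          """Radial image distance of a 3-D point (camera frame, z forward).
          Equidistant fish-eye: r = f * theta; pinhole: r = f * tan(theta),
          which diverges at 90 degrees and is undefined beyond it."""
          x, y, z = point_3d
          theta = np.arctan2(np.hypot(x, y), z)  # angle off the optical axis
          return f * theta if model == "equidistant" else f * np.tan(theta)

      p = (1.0, 0.0, 0.05)                     # nearly 90 degrees off-axis
      print(project(p, 8.0))                   # equidistant: ~12.2 (finite)
      print(project(p, 8.0, model="pinhole"))  # pinhole: blows up (~160)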

  11. Synthetic and Enhanced Vision System for Altair Lunar Lander

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Norman, Robert M.; Arthur, Jarvis J., III; Williams, Steven P.; Shelton, Kevin J.; Bailey, Randall E.

    2009-01-01

    Past research has demonstrated the substantial potential of synthetic and enhanced vision (SV, EV) for aviation (e.g., Prinzel & Wickens, 2009). These augmented visual-based technologies have been shown to significantly enhance situation awareness, reduce workload, enhance aviation safety (e.g., reduced propensity for controlled-flight-into-terrain accidents/incidents), and promote flight path control precision. The issues that drove the design and development of synthetic and enhanced vision have commonalities with other application domains, most notably entry, descent, and landing on the moon and other planetary surfaces. NASA has extended SV/EV technology for use in planetary exploration vehicles, such as the Altair Lunar Lander. This paper describes an Altair Lunar Lander SV/EV concept and associated research demonstrating the safety benefits of these technologies.

  12. Attention in Active Vision: A Perspective on Perceptual Continuity Across Saccades.

    PubMed

    Rolfs, Martin

    2015-01-01

    Alfred L. Yarbus was among the first to demonstrate that eye movements actively serve our perceptual and cognitive goals, a crucial recognition that is at the heart of today's research on active vision. He realized that what sticks in memory is not the changes in fixation themselves but the accompanying shifts of attention. Indeed, oculomotor control is tightly coupled to functions as fundamental as attention and memory. This tight relationship offers an intriguing perspective on transsaccadic perceptual continuity, which we experience despite the fact that saccades cause rapid shifts of the image across the retina. Here, I elaborate this perspective based on a series of psychophysical findings. First, saccade preparation shapes the visual system's priorities; it enhances visual performance and perceived stimulus intensity at the targets of the eye movement. Second, before saccades, the deployment of visual attention is updated, predictively facilitating perception at those retinal locations that will be relevant once the eyes land. Third, saccadic eye movements strongly affect the contents of visual memory, highlighting their crucial role for which parts of a scene we remember or forget. Together, these results provide insights on how attentional processes enable the visual system to cope with the retinal consequences of saccades.

  13. Machine vision system for quality control assessment of bareroot pine seedlings

    NASA Astrophysics Data System (ADS)

    Wilhoit, John H.; Kutz, L. J.; Vandiver, W. A.

    1995-01-01

    A PC-based machine vision system was used at a forest nursery for two months to make quality control measurements of bareroot pine seedlings. In tests conducted during the lifting season, there was close agreement between machine vision and manual measurement distribution results for seedling samples for both root collar diameter and tap root length. During a second set of tests conducted after adding a bud tip height measurement routine, measurement distribution results for seedling samples were in close agreement for root collar diameter, tap root length, and bud tip height. Machine vision measurements of root collar diameter and tap root length also correlated well with manual measurements on a seedling-to-seedling basis for the second test. With the machine vision system, seedling samples could be measured by one person in approximately the same amount of time that it took two people to measure them manually.

  14. Simulation assessment of synthetic vision system concepts for UAV operations

    NASA Astrophysics Data System (ADS)

    Calhoun, Gloria L.; Draper, Mark H.; Ruff, Heath A.; Nelson, Jeremy T.; Lefebvre, Austen T.

    2006-05-01

    The Air Force Research Laboratory's Human Effectiveness Directorate supports research addressing human factors associated with Unmanned Aerial Vehicle (UAV) operator control stations. One research thrust explores the value of combining synthetic vision data with live camera video presented on a UAV control station display. Information is constructed from databases (e.g., terrain), as well as numerous information updates via networked communication with other sources. This information is overlaid conformally, in real time, onto the dynamic camera video image display presented to operators. Synthetic vision overlay technology is expected to improve operator situation awareness by highlighting elements of interest within the video image. It can also assist the operator in maintaining situation awareness of an environment if the video datalink is temporarily degraded. Synthetic vision overlays can also serve to facilitate intuitive communication of spatial information between geographically separated users. This paper discusses results from a high-fidelity UAV simulation evaluation of synthetic symbology overlaid on a (simulated) live camera display. Specifically, the effects of different telemetry data update rates for synthetic visual data were examined for a representative sensor operator task. Participants controlled the zoom and orientation of the camera to find and designate targets. The results from both performance and subjective data demonstrated the potential benefit of an overlay of synthetic symbology for improving situation awareness, reducing workload, and decreasing the time required to designate points of interest. Implications of symbology update rate are discussed, as well as other human factors issues.

  15. Focal-Plane Change Triggered Video Compression for Low-Power Vision Sensor Systems

    PubMed Central

    Chi, Yu M.; Etienne-Cummings, Ralph; Cauwenberghs, Gert

    2009-01-01

    Video sensors with embedded compression offer significant energy savings in transmission but incur energy losses in the complexity of the encoder. Energy-efficient video compression architectures for CMOS image sensors with focal-plane change detection are presented and analyzed. The compression architectures use pixel-level computational circuits to minimize energy usage by selectively processing only pixels which generate significant temporal intensity changes. Using the temporal intensity change detection to gate the operation of a differential DCT-based encoder achieves nearly identical image quality to traditional systems (4dB decrease in PSNR) while reducing the amount of data that is processed by 67% and reducing overall power consumption by 51%. These typical energy savings, resulting from the sparsity of motion activity in the visual scene, demonstrate the utility of focal-plane change triggered compression for surveillance vision systems. PMID:19629187
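
    A software analogue of the gating idea, written as a sketch rather than the authors' circuit-level design: compute per-block temporal change and run the DCT only on blocks that exceed a threshold. Block size and threshold are illustrative.

      import numpy as np
      from scipy.fft import dctn

      BLOCK, THRESHOLD = 8, 12.0  # 8x8 blocks; mean-abs-change gate

      def encode_changed_blocks(prev, curr):
          """Apply a 2-D DCT only to blocks whose temporal intensity change
          exceeds a threshold; unchanged blocks are skipped entirely, which
          is the source of the energy savings described above."""
          h, w = curr.shape
          coeffs, processed = {}, 0
          for r in range(0, h - BLOCK + 1, BLOCK):
              for c in range(0, w - BLOCK + 1, BLOCK):
                  block = curr[r:r+BLOCK, c:c+BLOCK].astype(float)
                  change = np.abs(block - prev[r:r+BLOCK, c:c+BLOCK]).mean()
                  if change > THRESHOLD:
                      coeffs[(r, c)] = dctn(block, norm="ortho")
                      processed += 1
          return coeffs, processed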

  16. Robust and efficient vision system for group of cooperating mobile robots with application to soccer robots.

    PubMed

    Klancar, Gregor; Kristan, Matej; Kovacic, Stanislav; Orqueda, Omar

    2004-07-01

    In this paper a global vision scheme for estimating the positions and orientations of mobile robots is presented. It is applied to robot soccer, a fast dynamic game that therefore needs an efficient and robust vision system. The vision system is generally applicable to other robot applications such as mobile transport robots in production and warehouses, attendant robots, fast visual tracking of targets of interest, and entertainment robotics. Basic operation of the vision system is divided into two steps. In the first, the incoming image is scanned and pixels are classified into a finite number of classes; at the same time, a segmentation algorithm finds the corresponding regions belonging to each class. In the second step, all regions are examined, and those that are part of an observed object are selected by means of simple logic procedures. The novelty lies in optimizing the processing time needed to estimate possible object positions. Better results are achieved by implementing camera calibration and a shading-correction algorithm; the former corrects camera lens distortion, while the latter increases robustness to irregular illumination conditions.
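
    The two-step scheme (pixel classification, then region segmentation) might look roughly like the following, assuming an HSV input image; the hue/saturation thresholds and minimum area are illustrative, not the authors' values.

      import numpy as np
      from scipy import ndimage

      def find_color_regions(hsv, hue_lo, hue_hi, min_area=20):
          """Step 1: classify pixels into a color class by thresholding in
          HSV (channels scaled to [0, 1], e.g. skimage.color.rgb2hsv output).
          Step 2: segment connected regions and return their centroids as
          candidate marker positions."""
          mask = ((hsv[..., 0] >= hue_lo) & (hsv[..., 0] <= hue_hi)
                  & (hsv[..., 1] > 0.4) & (hsv[..., 2] > 0.3))
          labels, n = ndimage.label(mask)
          centroids = ndimage.center_of_mass(mask, labels, range(1, n + 1))
          sizes = ndimage.sum(mask, labels, range(1, n + 1))
          return [c for c, s in zip(centroids, sizes) if s >= min_area]

    The "simple logic procedures" of the second step would then operate on these centroids, for example pairing a team-color patch with an adjacent identity patch.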

  17. Human Factors Engineering as a System in the Vision for Exploration

    NASA Technical Reports Server (NTRS)

    Whitmore, Mihriban; Smith, Danielle; Holden, Kritina

    2006-01-01

    In order to accomplish NASA's Vision for Exploration, while assuring crew safety and productivity, human performance issues must be well integrated into system design from mission conception. To that end, a two-year Technology Development Project (TDP) was funded by NASA Headquarters to develop a systematic method for including the human as a system in NASA's Vision for Exploration. The specific goals of this project are to review current Human Systems Integration (HSI) standards (i.e., industry, military, NASA) and tailor them to selected NASA Exploration activities. Once the methods are proven in the selected domains, a plan will be developed to expand the effort to a wider scope of Exploration activities. The methods will be documented for inclusion in NASA-specific documents (such as the Human Systems Integration Standards, NASA-STD-3000) to be used in future space systems. The current project builds on a previous TDP dealing with Human Factors Engineering processes. That project identified the key phases of the current NASA design lifecycle, and outlined the recommended HFE activities that should be incorporated at each phase. The project also resulted in a prototype of a web-based HFE process tool that could be used to support an ideal HFE development process at NASA. This will help to augment the limited human factors resources available by providing a web-based tool that explains the importance of human factors, teaches a recommended process, and then provides the instructions, templates and examples to carry out the process steps. The HFE activities identified by the previous TDP are being tested in situ for the current effort through support to a specific NASA Exploration activity. Currently, HFE personnel are working with systems engineering personnel to identify HSI impacts for lunar exploration by facilitating the generation of system-level Concepts of Operations (ConOps). For example, medical operations scenarios have been generated for lunar habitation

  18. A Future Vision of Nuclear Material Information Systems

    SciTech Connect

    Wimple, C.; Suski, N.; Kreek, S.; Buckley, W.; Romine, B.

    1999-09-17

    Modern nuclear materials accounting and safeguards measurement systems are becoming increasingly advanced as they embrace emerging technologies. However, many facilities still rely on human intervention to update materials accounting records. The demand for nuclear materials safeguards information continues to increase while general industry and government down-sizing has resulted in less availability of qualified staff. Future safeguards requirements will necessitate access to information through unattended and/or remote monitoring systems requiring minimal human intervention. Under the auspices of the Department of Energy (DOE), LLNL is providing assistance in the development of standards for minimum raw data file contents, methodology for comparing shipper-receiver values and generation of total propagated measurement uncertainties, as well as the implementation of modern information technology to improve reliability of and accessibility to nuclear materials information. An integrated safeguards and accounting system is described, along with data and methodology standards that ultimately speed access to this information. This system will semi-automate activities such as material balancing, reconciliation of shipper/receiver differences, and report generation. In addition, this system will implement emerging standards that utilize secure direct electronic linkages throughout several phases of safeguards accounting and reporting activities. These linkages will demonstrate integration of equipment in the facility that measures material quantities, a site-level computerized Materials Control and Accounting (MC&A) inventory system, and a country-level state system of accounting and control.

  19. A vision-aided alignment datum system for coordinate measuring machines

    NASA Astrophysics Data System (ADS)

    Wang, L.; Lin, G. C. I.

    1997-07-01

    This paper presents the development of a CAD-based and vision-aided precision measurement system. A new coordinate system alignment technique for coordinate measuring machines (CMMs) is described. This alignment technique involves a machine vision system with CAD-based planning and execution of inspection. The determination method for measuring datums for the coordinate measuring technique, using the AutoCAD development system, is described in more detail. To improve image quality in the machine vision system, a contrast enhancement technique is used on the image background to reduce image noise, and an on-line calibration technique is applied. Some systematic errors may be caused by imperfect geometric features in components during coordinate system alignment. This measurement system approach, with its new measuring coordinate alignment method, can be used for high-precision measurement to overcome such errors.

  20. Vision-based obstacle recognition system for automated lawn mower robot development

    NASA Astrophysics Data System (ADS)

    Mohd Zin, Zalhan; Ibrahim, Ratnawati

    2011-06-01

    Digital image processing (DIP) techniques have recently been widely used in various types of applications. Classification and recognition of a specific object using a vision system require some challenging tasks in the fields of image processing and artificial intelligence. The ability and efficiency of a vision system to capture and process images is very important for any intelligent system such as an autonomous robot. This paper focuses on the development of a vision system that could contribute to the development of an automated vision-based lawn mower robot. The work involves the implementation of DIP techniques to detect and recognize three different types of obstacles that usually exist on a football field. The focus was given to the study of different types and sizes of obstacles, the development of a vision-based obstacle recognition system, and the evaluation of the system's performance. Image processing techniques such as image filtering, segmentation, enhancement, and edge detection have been applied in the system. The results show that the developed system is able to detect and recognize various types of obstacles on a football field with a recognition rate of more than 80%.
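
    One plausible form of the segmentation step, assuming obstacles are non-green objects on turf: mask grass with an excess-green index and label the remaining blobs. The index and thresholds are assumptions for illustration, not the paper's method.

      import numpy as np
      from scipy import ndimage

      def detect_obstacles(rgb, min_pixels=150):
          """Mask out grass with an excess-green index (ExG = 2G - R - B)
          and label what remains as candidate obstacles. Returns bounding
          slices for blobs larger than min_pixels."""
          r = rgb[..., 0].astype(float)
          g = rgb[..., 1].astype(float)
          b = rgb[..., 2].astype(float)
          exg = 2 * g - r - b           # large on green turf
          non_grass = exg < 20          # everything not green enough
          labels, n = ndimage.label(non_grass)
          slices = ndimage.find_objects(labels)
          return [s for i, s in enumerate(slices)
                  if (labels[s] == i + 1).sum() >= min_pixels]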

  1. Computational vision

    NASA Technical Reports Server (NTRS)

    Barrow, H. G.; Tenenbaum, J. M.

    1981-01-01

    The range of fundamental computational principles underlying human vision that equally apply to artificial and natural systems is surveyed. There emerges from research a view of the structuring of vision systems as a sequence of levels of representation, with the initial levels being primarily iconic (edges, regions, gradients) and the highest symbolic (surfaces, objects, scenes). Intermediate levels are constrained by information made available by preceding levels and information required by subsequent levels. In particular, it appears that physical and three-dimensional surface characteristics provide a critical transition from iconic to symbolic representations. A plausible vision system design incorporating these principles is outlined, and its key computational processes are elaborated.

  2. Visions of the Future. Social Science Activities Text. Teacher's Edition.

    ERIC Educational Resources Information Center

    Melnick, Rob; Ronan, Bernard

    Intended to put both national and global issues into perspective and help students make decisions about their futures, this teacher's edition provides instructional objectives, ideas for discussion and inquiries, test blanks for each section, and answer keys for the 22 activities provided in the accompanying student text. Designed to provide high…

  3. Approximate world models: Incorporating qualitative and linguistic information into vision systems

    SciTech Connect

    Pinhanez, C.S.; Bobick, A.F.

    1996-12-31

    Approximate world models are coarse descriptions of the elements of a scene, and are intended to be used in the selection and control of vision routines in a vision system. In this paper we present a control architecture in which the approximate models represent the complex relationships among the objects in the world, allowing the vision routines to be situation or context specific. Moreover, because of their reduced accuracy requirements, approximate world models can employ qualitative information such as that provided by linguistic descriptions of the scene. The concept is demonstrated in the development of automatic cameras for a TV studio: SmartCams. Results are shown where SmartCams use vision processing of real imagery and information written in the script of a TV show to achieve TV-quality framing.

  4. Real time image processing with an analog vision chip system.

    PubMed

    Kameda, S; Honda, A; Yagi, T

    1999-10-01

    A linear analog network model is proposed to characterize the function of the outer retinal circuit in terms of standard regularization theory. Inspired by the function and architecture of the model, a vision chip has been designed using analog CMOS Very Large Scale Integrated (VLSI) circuit technology. In the chip, sample/hold amplifier circuits are incorporated to compensate for statistical transistor mismatches; accordingly, extremely low-noise outputs were obtained from the chip. Using the chip and a zero-crossing detector, edges of given images were effectively extracted under indoor illumination.
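
    In software, the chip-plus-detector pipeline corresponds roughly to a Laplacian-of-Gaussian filter followed by zero-crossing detection, as sketched below; the sigma and the 3x3 neighborhood test are illustrative choices, not the chip's parameters.

      import numpy as np
      from scipy import ndimage

      def zero_crossing_edges(image, sigma=2.0):
          """Smooth and second-differentiate with a Laplacian of Gaussian
          (the linear-network stage), then mark sign changes (the
          zero-crossing detector stage)."""
          log = ndimage.gaussian_laplace(image.astype(float), sigma)
          # A zero crossing: the local min and max around a pixel straddle 0.
          mins = ndimage.minimum_filter(log, size=3)
          maxs = ndimage.maximum_filter(log, size=3)
          return (mins < 0) & (maxs > 0)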

  5. The Glenn A. Fry Award Lecture 2012: Plasticity of the visual system following central vision loss.

    PubMed

    Chung, Susana T L

    2013-06-01

    Following the onset of central vision loss, most patients develop an eccentric retinal location outside the affected macular region, the preferred retinal locus (PRL), as their new reference for visual tasks. The first goal of this article is to present behavioral evidence showing the presence of experience-dependent plasticity in people with central vision loss. The evidence includes the presence of oculomotor re-referencing of fixational saccades to the PRL; the characteristics of the shape of the crowding zone (spatial region within which the presence of other objects affects the recognition of a target) at the PRL are more "foveal-like" instead of resembling those of the normal periphery; and the change in the shape of the crowding zone at a para-PRL location that includes a component referenced to the PRL. These findings suggest that there is a shift in the referencing locus of the oculomotor and the sensory visual system from the fovea to the PRL for people with central vision loss, implying that the visual system for these individuals is still plastic and can be modified through experiences. The second goal of the article is to demonstrate the feasibility of applying perceptual learning, which capitalizes on the presence of plasticity, as a tool to improve functional vision for people with central vision loss. Our finding that visual function could improve with perceptual learning presents an exciting possibility for the development of an alternative rehabilitative strategy for people with central vision loss.

  6. Using Vision System Technologies for Offset Approaches in Low Visibility Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K.

    2015-01-01

    Flight deck-based vision systems, such as Synthetic Vision Systems (SVS) and Enhanced Flight Vision Systems (EFVS), have the potential to provide additional margins of safety for aircrew performance and enable the implementation of operational improvements for low visibility surface, arrival, and departure operations in the terminal environment with equivalent efficiency to visual operations. Twelve air transport-rated crews participated in a motion-base simulation experiment to evaluate the use of SVS/EFVS in Next Generation Air Transportation System low visibility approach and landing operations at Chicago O'Hare airport. Three monochromatic, collimated head-up display (HUD) concepts (conventional HUD, SVS HUD, and EFVS HUD) and three instrument approach types (straight-in, 3-degree offset, 15-degree offset) were experimentally varied to test the efficacy of the SVS/EFVS HUD concepts for offset approach operations. The findings suggest that making offset approaches in low visibility conditions with an EFVS HUD or SVS HUD appears feasible. Regardless of offset approach angle or HUD concept being flown, all approaches had comparable ILS tracking during the instrument segment and were within the lateral confines of the runway with acceptable sink rates during the visual segment of the approach. Keywords: Enhanced Flight Vision Systems; Synthetic Vision Systems; Head-up Display; NextGen

  7. Improving vision-based motor rehabilitation interactive systems for users with disabilities using mirror feedback.

    PubMed

    Jaume-i-Capó, Antoni; Martínez-Bueso, Pau; Moyà-Alcover, Biel; Varona, Javier

    2014-01-01

    Observation is recommended in motor rehabilitation. For this reason, the aim of this study was to experimentally test the feasibility and benefit of including mirror feedback in vision-based rehabilitation systems: we projected the user on the screen. We conducted a user study by using a previously evaluated system that improved the balance and postural control of adults with cerebral palsy. We used a within-subjects design with the two defined feedback conditions (mirror and no-mirror) with two different groups of users (8 with disabilities and 32 without disabilities) using usability measures (time-to-start (T(s)) and time-to-complete (T(c))). A two-tailed paired samples t-test confirmed that in the case of disabilities the mirror feedback facilitated the interaction in vision-based systems for rehabilitation. The measured times were significantly worse in the absence of the user's own visual feedback (T(s) = 7.09 (P < 0.001) and T(c) = 4.48 (P < 0.005)). In vision-based interaction systems, the input device is the user's own body; therefore, it makes sense that feedback should be related to the body of the user. In the case of disabilities the mirror feedback mechanisms facilitated the interaction in vision-based systems for rehabilitation. These results suggest that developers and researchers use this improvement in vision-based motor rehabilitation interactive systems.

  8. Improving Vision-Based Motor Rehabilitation Interactive Systems for Users with Disabilities Using Mirror Feedback

    PubMed Central

    Martínez-Bueso, Pau; Moyà-Alcover, Biel

    2014-01-01

    Observation is recommended in motor rehabilitation. For this reason, the aim of this study was to experimentally test the feasibility and benefit of including mirror feedback in vision-based rehabilitation systems: we projected the user on the screen. We conducted a user study by using a previously evaluated system that improved the balance and postural control of adults with cerebral palsy. We used a within-subjects design with the two defined feedback conditions (mirror and no-mirror) with two different groups of users (8 with disabilities and 32 without disabilities) using usability measures (time-to-start (Ts) and time-to-complete (Tc)). A two-tailed paired samples t-test confirmed that in the case of disabilities the mirror feedback facilitated the interaction in vision-based systems for rehabilitation. The measured times were significantly worse in the absence of the user's own visual feedback (Ts = 7.09 (P < 0.001) and Tc = 4.48 (P < 0.005)). In vision-based interaction systems, the input device is the user's own body; therefore, it makes sense that feedback should be related to the body of the user. In the case of disabilities the mirror feedback mechanisms facilitated the interaction in vision-based systems for rehabilitation. These results suggest that developers and researchers use this improvement in vision-based motor rehabilitation interactive systems. PMID:25295310

  9. Eye vision system using programmable micro-optics and micro-electronics

    NASA Astrophysics Data System (ADS)

    Riza, Nabeel A.; Amin, M. Junaid; Riza, Mehdi N.

    2014-02-01

    Proposed is a novel eye vision system that combines advanced micro-optic and micro-electronic technologies, including programmable micro-optic devices, pico-projectors, radio frequency (RF) and optical wireless communication and control links, energy harvesting and storage devices, and remote wireless energy transfer capabilities. This portable, lightweight system can measure eye refractive powers, optimize light conditions for the eye under test, conduct color-blindness tests, and implement eye-strain relief and eye-muscle exercises via time-sequenced imaging. Described are the basic design of the proposed system and its first-stage experimental results for spherical-lens refractive error correction.

  10. Development of a machine vision system for a real-time precision sprayer

    NASA Astrophysics Data System (ADS)

    Bossu, Jérémie; Gée, Christelle; Truchetet, Frédéric

    2007-01-01

    In the context of precision agriculture, we have developed a machine vision system for a real-time precision sprayer. Using a monochrome CCD camera located in front of the tractor, discrimination between crop and weeds is obtained with image processing based on spatial information using a Gabor filter. This method separates periodic signals from non-periodic ones, enhancing the crop rows, whereas weeds have a patchy distribution. Weed patches are thus clearly identified by a blob-coloring method. Finally, we use a pinhole model to transform the weed patch coordinates from image coordinates to world coordinates in order to activate the right electro-pneumatic valve of the sprayer at the right moment.
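
    A sketch of the Gabor stage, assuming the row spacing and direction are known from camera geometry; the scikit-image call is a stand-in for the authors' implementation, and the parameters are illustrative.

      import numpy as np
      from skimage.filters import gabor

      def crop_row_response(gray, row_period_px, row_angle_rad=0.0):
          """Respond strongly to the periodic crop-row pattern and weakly to
          patchy weeds: a Gabor filter tuned to the row spacing and
          direction. Returns the filter response magnitude."""
          real, imag = gabor(gray, frequency=1.0 / row_period_px,
                             theta=row_angle_rad)
          # High magnitude ~ crop rows; low-response vegetation ~ weed patches.
          return np.hypot(real, imag)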

  11. A Customized Vision System for Tracking Humans Wearing Reflective Safety Clothing from Industrial Vehicles and Machinery

    PubMed Central

    Mosberger, Rafael; Andreasson, Henrik; Lilienthal, Achim J.

    2014-01-01

    This article presents a novel approach for vision-based detection and tracking of humans wearing high-visibility clothing with retro-reflective markers. Addressing industrial applications where heavy vehicles operate in the vicinity of humans, we deploy a customized stereo camera setup with active illumination that allows for efficient detection of the reflective patterns created by the worker's safety garments. After segmenting reflective objects from the image background, the interest regions are described with local image feature descriptors and classified in order to discriminate safety garments from other reflective objects in the scene. In a final step, the trajectories of the detected humans are estimated in 3D space relative to the camera. We evaluate our tracking system in two industrial real-world work environments on several challenging video sequences. The experimental results indicate accurate tracking performance and good robustness towards partial occlusions, body pose variation, and a wide range of different illumination conditions. PMID:25264956

  12. Computer vision

    NASA Technical Reports Server (NTRS)

    Gennery, D.; Cunningham, R.; Saund, E.; High, J.; Ruoff, C.

    1981-01-01

    The field of computer vision is surveyed and assessed, key research issues are identified, and possibilities for a future vision system are discussed. The problems of descriptions of two and three dimensional worlds are discussed. The representation of such features as texture, edges, curves, and corners are detailed. Recognition methods are described in which cross correlation coefficients are maximized or numerical values for a set of features are measured. Object tracking is discussed in terms of the robust matching algorithms that must be devised. Stereo vision, camera control and calibration, and the hardware and systems architecture are discussed.

  13. Development and modeling of a stereo vision focusing system for a field programmable gate array robot

    NASA Astrophysics Data System (ADS)

    Tickle, Andrew J.; Buckle, James; Grindley, Josef E.; Smith, Jeremy S.

    2010-10-01

    Stereo vision is a situation where an imaging system has two or more cameras in order to make it more robust by mimicking the human vision system. By using two inputs, knowledge of their relative geometry can be exploited to derive depth information from the two views they receive: the 3D coordinates of an object in an observed scene can be computed from the intersection of the two sets of rays. Presented here is the development of a stereo vision system to focus on an object at the centre of a baseline between two cameras at varying distances. This has been developed primarily for use on a Field Programmable Gate Array (FPGA), but an adaptation of the methodology is also presented for use with a PUMA 560 robotic manipulator with a single camera attachment. The two main vision systems considered here are a fixed baseline with an object moving at varying distances from the baseline, and a system with a fixed distance and a varying baseline. These two situations provide enough data that the coefficients determining system operation can be calibrated automatically; only the baseline value needs to be entered, and the system performs all the required calculations for a baseline of any distance. The limits of the system with regard to focusing accuracy are also presented, along with how the PUMA 560 controls its joints for stereo vision and moves from one position to another to achieve stereo vision, compared to the two-camera system on the FPGA. The benefits of such a system for range finding in mobile robotics are discussed, as well as why this approach is more advantageous than laser range finders or echolocation using ultrasonics.
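
    For a rectified two-camera setup, the depth computation behind such focusing is the standard triangulation formula Z = f·B/d; the numbers below are illustrative.

      def depth_from_disparity(disparity_px, focal_px, baseline_m):
          """Triangulated depth for a rectified stereo pair: Z = f * B / d.
          Depth resolution degrades quadratically with distance, which sets
          practical limits on focusing accuracy."""
          return focal_px * baseline_m / disparity_px

      # Example: 700 px focal length, 0.12 m baseline, 20 px disparity -> 4.2 m
      print(depth_from_disparity(20.0, 700.0, 0.12))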

  14. Night vision imaging systems design, integration, and verification in military fighter aircraft

    NASA Astrophysics Data System (ADS)

    Sabatini, Roberto; Richardson, Mark A.; Cantiello, Maurizio; Toscano, Mario; Fiorini, Pietro; Jia, Huamin; Zammit-Mangion, David

    2012-04-01

    This paper describes the developmental and testing activities conducted by the Italian Air Force Official Test Centre (RSV) in collaboration with Alenia Aerospace, Litton Precision Products and Cranfield University, in order to confer the Night Vision Imaging Systems (NVIS) capability to the Italian TORNADO IDS (Interdiction and Strike) and ECR (Electronic Combat and Reconnaissance) aircraft. The activities consisted of various Design, Development, Test and Evaluation (DDT&E) activities, including Night Vision Goggles (NVG) integration, cockpit instruments and external lighting modifications, as well as various ground test sessions and a total of eighteen flight test sorties. RSV and Litton Precision Products were responsible for coordinating and conducting the installation activities of the internal and external lights. In particular, an iterative process was established, allowing rapid on-site correction of the major deficiencies encountered during the ground and flight test sessions. Both single-ship (day/night) and formation (night) flights were performed, shared between the Test Crews involved in the activities, allowing for a redundant examination of the various test items by all participants. An innovative test matrix was developed and implemented by RSV for assessing the operational suitability and effectiveness of the various modifications implemented. Also important was the definition of test criteria for Pilot and Weapon Systems Officer (WSO) workload assessment during the accomplishment of various operational tasks during NVG missions. Furthermore, the specific technical and operational elements required for evaluating the modified helmets were identified, allowing an exhaustive comparative evaluation of the two proposed solutions (i.e., HGU-55P and HGU-55G modified helmets). The results of the activities were very satisfactory. The initial compatibility problems encountered were progressively mitigated by incorporating modifications both in the front and

  15. 78 FR 68475 - Certain Vision-Based Driver Assistance System Cameras and Components Thereof; Institution of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-14

    ... COMMISSION Certain Vision-Based Driver Assistance System Cameras and Components Thereof; Institution of...-based driver assistance system cameras and components thereof by reason of infringement of certain... assistance system cameras and components thereof by reason of infringement of one or more of claims 1, 2,...

  16. Simple and inexpensive stereo vision system for 3D data acquisition

    NASA Astrophysics Data System (ADS)

    Mermall, Samuel E.; Lindner, John F.

    2014-10-01

    We describe a simple stereo-vision system for tracking motion in three dimensions using a single ordinary camera. A simple mirror system divides the camera's field of view into left and right stereo pairs. We calibrate the system by tracking a point on a spinning wheel and demonstrate its use by tracking the corner of a flapping flag.

  17. Function-based design process for an intelligent ground vehicle vision system

    NASA Astrophysics Data System (ADS)

    Nagel, Robert L.; Perry, Kenneth L.; Stone, Robert B.; McAdams, Daniel A.

    2010-10-01

    An engineering design framework for an autonomous ground vehicle vision system is discussed. We present both the conceptual and physical design by following the design process, development and testing of an intelligent ground vehicle vision system constructed for the 2008 Intelligent Ground Vehicle Competition. During conceptual design, the requirements for the vision system are explored via functional and process analysis considering the flows into the vehicle and the transformations of those flows. The conceptual design phase concludes with a vision system design that is modular in both hardware and software and is based on a laser range finder and camera for visual perception. During physical design, prototypes are developed and tested independently, following the modular interfaces identified during conceptual design. Prototype models, once functional, are implemented into the final design. The final vision system design uses a ray-casting algorithm to process camera and laser range finder data and identify potential paths. The ray-casting algorithm is a single thread of the robot's multithreaded application. Other threads control motion, provide feedback, and process sensory data. Once integrated, both hardware and software testing are performed on the robot. We discuss the robot's performance and the lessons learned.
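
    A simplified take on the ray-casting step, cast as a search over an occupancy grid rather than raw laser and camera data; the grid representation, ray count, and range cap are assumptions for illustration.

      import numpy as np

      def cast_rays(grid, origin, n_rays=32, max_range=100):
          """Cast rays over an occupancy grid (True = obstacle) and return
          the free distance along each heading: a simplified analogue of
          the ray-casting path identification described above."""
          r0, c0 = origin
          distances = []
          for angle in np.linspace(0, 2 * np.pi, n_rays, endpoint=False):
              dr, dc = np.sin(angle), np.cos(angle)
              for step in range(1, max_range):
                  r, c = int(r0 + dr * step), int(c0 + dc * step)
                  out = not (0 <= r < grid.shape[0] and 0 <= c < grid.shape[1])
                  if out or grid[r, c]:
                      break
              distances.append(step)
          return distances  # pick the heading with the longest free run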

  18. Vision Underwater.

    ERIC Educational Resources Information Center

    Levine, Joseph S.

    1980-01-01

    Provides information regarding underwater vision. Includes a discussion of optically important interfaces, increased eye size of organisms at greater depths, visual peculiarities regarding the habitat of the coastal environment, and various pigment visual systems. (CS)

  19. Vision System To Identify Car Body Types For Spray Painting Robot

    NASA Astrophysics Data System (ADS)

    Uartlam, Peter; Neilson, Geoff

    1984-02-01

    The automation of car body spray booth operations employing paint spraying robots generally requires the robots to execute one of a number of defined routines according to the car body type. A vision system is described which identifies a car body type by its shape and provides an identity code to the robot controller, thus enabling the correct routine to be executed. The vision system consists of a low-cost linescan camera, a fluorescent light source and a microprocessor image analyser, and is an example of a cost-effective, reliable, industrially engineered robot vision system for a demanding production environment. Extension of the system with additional cameras will broaden its application to other automatic operations on a car assembly line where it becomes essential to reliably differentiate between up to 40 variations of body types.

  20. Long-Term Instrumentation, Information, and Control Systems (II&C) Modernization Future Vision and Strategy

    SciTech Connect

    Kenneth Thomas

    2012-02-01

    digital environment to enhance nuclear safety, increase productivity, and improve overall plant performance. The long-term goal is to transform the operating model of the nuclear power plants (NPP)s from one that is highly reliant on a large staff performing mostly manual activities to an operating model based on highly integrated technology with a smaller staff. This digital transformation is critical to addressing an array of issues facing the plants, including aging of legacy analog systems, potential shortage of technical workers, ever-increasing expectations for nuclear safety improvement, and relentless pressure to reduce cost. The Future Vision is based on research being conducted in the following major areas of plant function: (1) Highly integrated control rooms; (2) Highly automated plant; (3) Integrated operations; (4) Human performance improvement for field workers; and (5) Outage safety and efficiency. Pilot projects will be conducted in each of these areas as the means for industry to collectively integrate these new technologies into nuclear plant work activities. The pilot projects introduce new digital technologies into the nuclear plant operating environment at host operating plants to demonstrate and validate them for production usage. In turn, the pilot project technologies serve as the stepping stones to the eventual seamless digital environment as described in the Future Vision.

  1. Long-Term Instrumentation, Information, and Control Systems (II&C) Modernization Future Vision and Strategy

    SciTech Connect

    Kenneth Thomas; Bruce Hallbert

    2013-02-01

    seamless digital environment to enhance nuclear safety, increase productivity, and improve overall plant performance. The long-term goal is to transform the operating model of the nuclear power plants (NPP)s from one that is highly reliant on a large staff performing mostly manual activities to an operating model based on highly integrated technology with a smaller staff. This digital transformation is critical to addressing an array of issues facing the plants, including aging of legacy analog systems, potential shortage of technical workers, ever-increasing expectations for nuclear safety improvement, and relentless pressure to reduce cost. The Future Vision is based on research being conducted in the following major areas of plant function: 1. Highly integrated control rooms 2. Highly automated plant 3. Integrated operations 4. Human performance improvement for field workers 5. Outage safety and efficiency. Pilot projects will be conducted in each of these areas as the means for industry to collectively integrate these new technologies into nuclear plant work activities. The pilot projects introduce new digital technologies into the nuclear plant operating environment at host operating plants to demonstrate and validate them for production usage. In turn, the pilot project technologies serve as the stepping stones to the eventual seamless digital environment as described in the Future Vision.

  2. Design of a system for vision screening and follow-up eye care for children in Milwaukee Public Schools.

    PubMed

    Murphy, Kathleen; Wu, Min; Steber, Dale; Cisler, Ron A

    2005-01-01

    Vision problems affect many school-age children, yet only a few children are adequately screened for them. The design of an information system supporting vision screening and follow-up eye care for Milwaukee Public Schools is discussed in this paper; it includes wireless data collection and web-based data management. Implementation of the system is ongoing. The information system will provide service to approximately 5,000 students annually in 30 urban elementary schools.

  3. A computer vision system for the recognition of trees in aerial photographs

    NASA Technical Reports Server (NTRS)

    Pinz, Axel J.

    1991-01-01

    Increasing problems of forest damage in Central Europe set the demand for an appropriate forest damage assessment tool. The Vision Expert System (VES) is presented which is capable of finding trees in color infrared aerial photographs. Concept and architecture of VES are discussed briefly. The system is applied to a multisource test data set. The processing of this multisource data set leads to a multiple interpretation result for one scene. An integration of these results will provide a better scene description by the vision system. This is achieved by an implementation of Steven's correlation algorithm.

  4. Computer Vision System For Locating And Identifying Defects In Hardwood Lumber

    NASA Astrophysics Data System (ADS)

    Conners, Richard W.; Ng, Chong T.; Cho, Tai-Hoon; McMillin, Charles W.

    1989-03-01

    This paper describes research aimed at developing an automatic cutup system for use in the rough mills of the hardwood furniture and fixture industry. In particular, it describes attempts to create the vision system that will power this automatic cutup system. A number of factors make the development of such a vision system a challenge. First, there is the innate variability of the wood material itself. No two species look exactly the same; indeed, appearance can differ significantly among species. Yet a truly robust vision system must be able to handle a variety of such species, preferably with no operator intervention required when changing from one species to another. Second, there is a good deal of variability in the definition of what constitutes a removable defect. The hardwood furniture and fixture industry is diverse in the nature of the products that it makes, ranging from hardwood flooring to fancy hardwood furniture, from simple millwork to kitchen cabinets. Thus, depending on the manufacturer, the product, and the quality of the product, what constitutes a removable defect can and does vary. The vision system must be tailorable to each of these unique needs, preferably without additional program modifications. This paper describes the vision system that has been developed, assesses its current capabilities, and discusses directions for future research. It is argued that artificial intelligence methods provide a natural mechanism for attacking this computer vision application.

  5. GARGOYLE: An environment for real-time, context-sensitive active vision

    SciTech Connect

    Prokopowicz, P.N.; Swain, M.J.; Firby, R.J.; Kahn, R.E.

    1996-12-31

    Researchers in robot vision have access to several excellent image processing packages (e.g., Khoros, Vista, Susan, MIL, and X Vision, to name only a few) as a base for any new vision software needed in most navigation and recognition tasks. Our work in autonomous robot control and human-robot interaction, however, has demanded a new level of run-time flexibility and performance: on-the-fly configuration of visual routines that exploit up-to-the-second context from the task, image, and environment. The result is Gargoyle: an extensible, on-board, real-time vision software package that allows a robot to configure, parameterize, and execute image-processing pipelines at run-time. Each operator in a pipeline works at a level of resolution and over regions of interest that are computed by upstream operators or set by the robot according to task constraints. Pipeline configurations and operator parameters can be stored as a library of visual methods appropriate for different sensing tasks and environmental conditions. Beyond this, a robot may reason about the current task and environmental constraints to construct novel visual routines that are too specialized to work under general conditions, but that are well-suited to the immediate environment and task. We use the RAP reactive plan-execution system to select and configure pre-compiled processing pipelines, and to modify them for specific constraints determined at run-time.
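
    To make the run-time pipeline idea concrete, the following minimal Python sketch shows a configurable chain of image operators backed by a small library of stored visual methods. All names and structure here are illustrative assumptions, not Gargoyle's actual API.

        # Minimal sketch of a run-time-configurable vision pipeline in the
        # spirit described above; names and structure are assumptions.
        from dataclasses import dataclass, field
        from typing import Callable
        import numpy as np

        @dataclass
        class Operator:
            name: str
            fn: Callable[[np.ndarray, dict], np.ndarray]
            params: dict = field(default_factory=dict)

        class Pipeline:
            def __init__(self, operators):
                self.operators = list(operators)

            def run(self, image):
                for op in self.operators:
                    image = op.fn(image, op.params)
                return image

        def downsample(img, p):
            step = p.get("step", 2)           # coarser resolution upstream
            return img[::step, ::step]

        def threshold(img, p):
            return img > p.get("level", 128)  # crude segmentation operator

        # Library of pre-configured "visual methods", selected at run time.
        METHODS = {
            "find_bright_target": Pipeline([
                Operator("down", downsample, {"step": 4}),
                Operator("thresh", threshold, {"level": 200}),
            ]),
        }

        frame = np.zeros((480, 640), dtype=np.uint8)   # stand-in camera frame
        mask = METHODS["find_bright_target"].run(frame)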

  6. Comparison of a multispectral vision system and a colorimeter for the assessment of meat color.

    PubMed

    Trinderup, Camilla H; Dahl, Anders; Jensen, Kirsten; Carstensen, Jens Michael; Conradsen, Knut

    2015-04-01

    The color assessment ability of a multispectral vision system is investigated through a comparison with measurements from a traditional colorimeter. The experiment involves fresh and processed meat samples. Meat is a complex, heterogeneous material with varying scattering and reflectance properties, so several factors can influence the instrumental assessment of meat color. In order to assess whether the two methods are equivalent, the variation due to these factors must be taken into account. A statistical analysis showed that on a calibration sheet the two instruments are equally capable of measuring color. Moreover, the vision system provides a richer color assessment of fresh meat samples, which have a glossier surface, than the colorimeter does. Careful study of the different sources of variation makes it possible to gauge the magnitude of the between-method variability while accounting for the other sources of variation, leading to the conclusion that color assessment using a multispectral vision system is superior to traditional colorimeter assessments. PMID:25498302
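
    The paper's analysis is a variance-component study; as a simpler stand-in, the sketch below shows one common way to quantify agreement between two instruments, a Bland-Altman-style bias and limits of agreement, computed on synthetic L* readings. The data and noise levels are invented for illustration.

        # Bland-Altman-style agreement check between two color instruments.
        # Simplified stand-in for a variance-component analysis; synthetic data.
        import numpy as np

        rng = np.random.default_rng(0)
        true_L = rng.uniform(35, 55, size=40)           # hypothetical L* of 40 samples
        colorimeter = true_L + rng.normal(0, 0.8, 40)   # instrument 1 readings
        vision_sys = true_L + rng.normal(0, 0.5, 40)    # instrument 2 readings

        diff = vision_sys - colorimeter
        bias = diff.mean()
        loa = 1.96 * diff.std(ddof=1)                   # 95% limits of agreement
        print(f"bias = {bias:.2f}, limits of agreement = "
              f"[{bias - loa:.2f}, {bias + loa:.2f}]")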

  7. Posterior Parietal Cortex Drives Inferotemporal Activations During Three-Dimensional Object Vision

    PubMed Central

    Van Dromme, Ilse C.; Premereur, Elsie; Verhoef, Bram-Ernst; Vanduffel, Wim; Janssen, Peter

    2016-01-01

    The primate visual system consists of a ventral stream, specialized for object recognition, and a dorsal visual stream, which is crucial for spatial vision and actions. However, little is known about the interactions and information flow between these two streams. We investigated these interactions within the network processing three-dimensional (3D) object information, comprising both the dorsal and ventral stream. Reversible inactivation of the macaque caudal intraparietal area (CIP) during functional magnetic resonance imaging (fMRI) reduced fMRI activations in posterior parietal cortex in the dorsal stream and, surprisingly, also in the inferotemporal cortex (ITC) in the ventral visual stream. Moreover, CIP inactivation caused a perceptual deficit in a depth-structure categorization task. CIP-microstimulation during fMRI further suggests that CIP projects via posterior parietal areas to the ITC in the ventral stream. To our knowledge, these results provide the first causal evidence for the flow of visual 3D information from the dorsal stream to the ventral stream, and identify CIP as a key area for depth-structure processing. Thus, combining reversible inactivation and electrical microstimulation during fMRI provides a detailed view of the functional interactions between the two visual processing streams. PMID:27082854

  8. Human factors and safety considerations of night-vision systems flight using thermal imaging systems

    NASA Astrophysics Data System (ADS)

    Rash, Clarence E.; Verona, Robert W.; Crowley, John S.

    1990-10-01

    Helmet Mounted Systems (HMS) must be lightweight, balanced, and compatible with life support and head protection assemblies. This paper discusses the design of one particular HMS, the GEC Ferranti NITE-OP/NIGHTBIRD aviator's Night Vision Goggle (NVG), developed under contracts to the Ministry of Defence for all three services in the United Kingdom (UK) for rotary-wing and fast-jet aircraft. The existing equipment constraints and the safety, human factors, and optical performance requirements are discussed; the design solution, arrived at after consideration of the material and manufacturing options, is then presented.

  9. Stereo vision and CMM-integrated intelligent inspection system in reverse engineering

    NASA Astrophysics Data System (ADS)

    Fang, Yong; Chen, Kangning; Lin, Zhihang

    1998-10-01

    Acquisition of 3D coordinates and generation of 3D models for existing parts or prototypes are critical techniques in reverse engineering. This paper presents an integrated intelligent inspection system combining stereo vision with a coordinate measuring machine that is fast, flexible, and accurate for reverse engineering. The principle, structure, and key techniques of the system are discussed in detail.

  10. Night vision: requirements and possible roadmap for FIR and NIR systems

    NASA Astrophysics Data System (ADS)

    Källhammer, Jan-Erik

    2006-04-01

    A night vision system must increase visibility in situations where only low-beam headlights can be used today. As pedestrians and animals face the highest increase in risk in night-time traffic due to darkness, the ability to detect those objects should be the main performance criterion, and the system must remain effective when facing the headlights of oncoming vehicles. Far-infrared (FIR) systems have been shown to be superior to near-infrared (NIR) systems in terms of pedestrian detection distance. Near-infrared images were rated as having significantly higher visual clutter than far-infrared images, and visual clutter has been shown to correlate with reduced pedestrian detection distance. Far-infrared images are perceived as more unusual and therefore more difficult to interpret, although this image appearance is likely related to the lower visual clutter. However, the main issue in comparing the two technologies should be how well they solve the driver's problem of insufficient visibility under low-beam conditions, especially with respect to pedestrians and other vulnerable road users. With the addition of an automatic detection aid, a main issue will be whether the advantage of FIR systems vanishes given NIR systems with well-performing automatic pedestrian detection. The first night vision introductions did not generate the sales volumes initially expected; renewed interest in night vision systems is, however, to be expected after the release of night vision systems by BMW, Mercedes, and Honda, the latter with automatic pedestrian detection.

  11. 75 FR 38391 - Special Conditions: Boeing 757-200 With Enhanced Flight Vision System

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-02

    ... Administration 14 CFR Part 25 Special Conditions: Boeing 757-200 With Enhanced Flight Vision System AGENCY... airplanes, as modified by the Federal Express Corporation, will have an advanced, enhanced-flight-visibility... symbolic flight information. However, the term has also been commonly used in reference to systems...

  12. New vision system and navigation algorithm for an autonomous ground vehicle

    NASA Astrophysics Data System (ADS)

    Tann, Hokchhay; Shakya, Bicky; Merchen, Alex C.; Williams, Benjamin C.; Khanal, Abhishek; Zhao, Jiajia; Ahlgren, David J.

    2013-12-01

    Improvements were made to the intelligence algorithms of an autonomously operating ground vehicle, Q, which competed in the 2013 Intelligent Ground Vehicle Competition (IGVC). The IGVC required the vehicle first to navigate between two white lines on a grassy obstacle course, then to pass through eight GPS waypoints, and finally to traverse an obstacle field. Modifications to Q included a new vision system with a more effective image-processing algorithm for white-line extraction. The path-planning algorithm was adapted to the new vision system, producing smoother, more reliable navigation. With these improvements, Q successfully completed the basic autonomous navigation challenge, finishing tenth out of more than 50 teams.
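
    The abstract does not spell out the team's extraction method; as a hedged illustration, white-line extraction of this kind is often done with an HSV color threshold followed by a probabilistic Hough transform, as in the OpenCV-based sketch below. The threshold bounds and Hough parameters are placeholders, not the team's tuned settings.

        # Sketch of white-line extraction: HSV color threshold followed by a
        # probabilistic Hough transform. Parameter values are placeholders.
        import cv2
        import numpy as np

        def extract_white_lines(bgr_frame):
            hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
            # White: low saturation, high value; bounds would be tuned on
            # actual course imagery.
            mask = cv2.inRange(hsv, (0, 0, 200), (180, 40, 255))
            lines = cv2.HoughLinesP(mask, rho=1, theta=np.pi / 180,
                                    threshold=50, minLineLength=40,
                                    maxLineGap=10)
            return lines  # array of [x1, y1, x2, y2] segments, or None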

  13. Flight Test Evaluation of Situation Awareness Benefits of Integrated Synthetic Vision System Technology for Commercial Aircraft

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, Jarvis J., III

    2005-01-01

    Research was conducted onboard a Gulfstream G-V aircraft to evaluate integrated Synthetic Vision System concepts during flight tests over a 6-week period at the Wallops Flight Facility and Reno/Tahoe International Airport. The NASA Synthetic Vision System incorporates database integrity monitoring, runway incursion prevention alerting, surface maps, enhanced vision sensors, and advanced pathway guidance and synthetic terrain presentation. The paper details the goals and objectives of the flight test with a focus on the situation awareness benefits of integrating synthetic vision system enabling technologies for commercial aircraft.

  14. [Development of a new position-recognition system for robotic radiosurgery systems using machine vision].

    PubMed

    Mohri, Issai; Umezu, Yoshiyuki; Fukunaga, Junnichi; Tane, Hiroyuki; Nagata, Hironori; Hirashima, Hideaki; Nakamura, Katsumasa; Hirata, Hideki

    2014-08-01

    CyberKnife® provides continuous guidance through radiography, allowing instantaneous X-ray images to be obtained; it is also equipped with 6D adjustment for patient setup. Its disadvantage is that registration is carried out just before irradiation, making it impossible to perform stereo-radiography during irradiation; in addition, patient movement cannot be detected during irradiation. In this study, we describe a new registration system, termed "Machine Vision," which subjects the patient to no additional radiation exposure for registration purposes, can be set up promptly, and allows real-time registration during irradiation. Our technique offers distinct advantages over CyberKnife by enabling a safer and more precise mode of treatment. "Machine Vision," which we have designed and fabricated, is an automatic registration system that employs three charge-coupled device (CCD) cameras oriented in different directions, allowing us to obtain a characteristic depiction of the shape of both sides of the fetal fissure and external ears in a human head phantom. We examined the precision of this registration system and concluded it to be suitable as an alternative registration method without radiation exposure when displacement is less than 1.0 mm in radiotherapy. It has potential for application to CyberKnife in clinical treatment. PMID:25142385
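
    The abstract does not give the alignment math; a standard building block for camera-based registration of this kind is rigid alignment of matched 3-D feature points (the Kabsch/Procrustes method), sketched below. This is an assumption about how such a system could compute pose, not the authors' published algorithm.

        # Rigid (rotation + translation) alignment of matched 3-D feature
        # points, a standard building block for camera-based patient
        # registration. Illustrative, not the authors' exact pipeline.
        import numpy as np

        def rigid_align(P, Q):
            """Find R, t minimizing ||R @ P_i + t - Q_i||^2 over matched points."""
            Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
            U, _, Vt = np.linalg.svd(Pc.T @ Qc)
            d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = Q.mean(axis=0) - R @ P.mean(axis=0)
            return R, t

        # e.g. P = reference landmark positions, Q = positions seen by the
        # cameras; displacement can then be judged from R, t.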

  15. Compact, self-contained enhanced-vision system (EVS) sensor simulator

    NASA Astrophysics Data System (ADS)

    Tiana, Carlo

    2007-04-01

    We describe the model SIM-100 PC-based simulator for imaging sensors used, or planned for use, in Enhanced Vision System (EVS) applications. Typically housed in a small-form-factor PC, it can be easily integrated into existing out-the-window visual simulators for fixed-wing aircraft or rotorcraft to add realistic sensor imagery to the simulator cockpit. Multiple bands of infrared (short-wave, midwave, extended-midwave, and longwave) as well as active millimeter-wave RADAR systems can all be simulated in real time. Various aspects of physical and electronic image formation and processing in the sensor are accurately (and optionally) simulated, including sensor random and fixed-pattern noise, dead pixels, blooming, and B-C scope transformation (MMWR). The effects of various obscurants (fog, rain, etc.) on the sensor imagery are faithfully represented and can be selected by an operator remotely and in real time. The images generated by the system are ideally suited for many applications, ranging from sensor development engineering tradeoffs (field of view, resolution, etc.) to pilot familiarization, operational training, and certification support. The realistic appearance of the simulated images goes well beyond that of currently deployed systems, and beyond that required by certification authorities; this level of realism will become necessary as operational experience with EVS systems grows.
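
    As a hedged illustration of the artifact classes named above (random noise, fixed-pattern noise, dead pixels), the sketch below degrades a clean rendered frame. The parameter values are invented, and a real simulator would generate the fixed-pattern map once per simulated sensor rather than per frame.

        # Sketch of common sensor artifacts applied to a clean rendered frame:
        # random read noise, fixed-pattern noise, and dead pixels.
        import numpy as np

        rng = np.random.default_rng(42)

        def degrade(frame, read_noise=2.0, fpn_sigma=1.0, dead_frac=1e-4):
            h, w = frame.shape
            # Per-pixel fixed offset; in a real simulator this map would be
            # generated once and reused across frames.
            fpn = rng.normal(0, fpn_sigma, (h, w))
            noisy = frame + fpn + rng.normal(0, read_noise, (h, w))
            dead = rng.random((h, w)) < dead_frac    # stuck-at-zero pixels
            noisy[dead] = 0.0
            return np.clip(noisy, 0, 255)

        clean = np.full((480, 640), 128.0)           # stand-in rendered image
        sensor_image = degrade(clean)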

  16. Novel approach to characterize and compare the performance of night vision systems in representative illumination conditions

    NASA Astrophysics Data System (ADS)

    Roy, Nathalie; Vallières, Alexandre; St-Germain, Daniel; Potvin, Simon; Dupuis, Michel; Bouchard, Jean-Claude; Villemaire, André; Bérubé, Martin; Breton, Mélanie; Gagné, Guillaume

    2016-05-01

    A novel approach is used to characterize and compare the performance of night vision systems under conditions more representative of night operation in terms of spectral content. Its main advantage over standard testing methodologies is that it provides a fast and efficient way for untrained observers to compare night vision system performance under realistic illumination spectra. The testing methodology relies on a custom tumbling-E target and on a new LED-based illumination source that better emulates night-sky spectral irradiances from deep overcast starlight to quarter-moon conditions. In this paper, we describe the setup and demonstrate that the novel approach can efficiently characterize, among other devices, night vision goggle (NVG) performance, with a small error in photogenerated electrons compared with the STANAG 4351 procedure.

  17. A bio-inspired apposition compound eye machine vision sensor system.

    PubMed

    Davis, J D; Barrett, S F; Wright, C H G; Wilcox, M

    2009-12-01

    The Wyoming Information, Signal Processing, and Robotics Laboratory is developing a wide variety of bio-inspired vision sensors. We are interested in exploring the vision systems of various insects and adapting some of their features toward the development of specialized vision sensors. We do not attempt to supplant traditional digital imaging techniques but rather develop sensor systems tailor-made for the application at hand. We envision that many applications may require a hybrid approach using conventional digital imaging techniques enhanced with bio-inspired analogue sensors. In this specific project, we investigated the apposition compound eye and its characteristics, commonly found in diurnal insects and certain species of arthropods. We developed and characterized an array of apposition compound eye-type sensors and tested them on an autonomous robotic vehicle. The robot exhibits the ability to follow a pre-defined target and avoid specified obstacles using a simple control algorithm. PMID:19901450

  1. Multi-Purpose Avionic Architecture for Vision Based Navigation Systems for EDL and Surface Mobility Scenarios

    NASA Astrophysics Data System (ADS)

    Tramutola, A.; Paltro, D.; Cabalo Perucha, M. P.; Paar, G.; Steiner, J.; Barrio, A. M.

    2015-09-01

    Vision Based Navigation (VBNAV) has been identified as a valid technology to support space exploration because it can improve the autonomy and safety of space missions. Several mission scenarios can benefit from VBNAV: Rendezvous & Docking, Fly-Bys, Interplanetary cruise, Entry Descent and Landing (EDL), and Planetary Surface exploration. For some of them, VBNAV can improve state-estimation accuracy as an additional relative navigation sensor or as an absolute navigation sensor. For others, such as surface mobility and terrain exploration for path identification and planning, VBNAV is mandatory. This paper presents the general avionic architecture of a Vision Based System as defined in the frame of the ESA R&T study “Multi-purpose Vision-based Navigation System Engineering Model - part 1 (VisNav-EM-1)”, with special focus on the surface mobility application.

  2. G-MAP: a novel night vision system for satellites

    NASA Astrophysics Data System (ADS)

    Miletti, Thomas; Maresi, Luca; Zuccaro Marchi, Alessandro; Pontetti, Giorgia

    2015-10-01

    The recent development of single-photon-counting array detectors opens the door to a novel type of system that could be used on satellites in low Earth orbit. One possible application is the detection of non-cooperative vessels or illegal fishing activities; currently only surveillance operations conducted by navies or coast guards address this topic, and such operations are by nature costly and of limited coverage. This paper describes the architectural design of a system based on a novel single-photon-counting detector, which works mainly in the visible and features fast readout, low noise, and a 256×256 matrix of 64 μm pixels. This detector is positioned in the focal plane of a fully aspheric reflective f/6 telescope to guarantee state-of-the-art performance. The combination of the two grants a ground sampling distance compatible with the average dimension of a vessel, as well as good overall performance. A radiative analysis of the light transmitted from emission to detection is presented, starting from models of the lamps used for attracting fish and illuminating the decks of the boats; a radiative transfer model is used to estimate the number of photons emitted by such vessels that reach the detector. Since the novel detector features a high frame rate and low noise, the system as envisaged is able to serve the proposed goal. The paper shows the results of a trade-off between instrument parameters and spacecraft operations to maximize the detection probability and the covered sea surface. The status of development of both detector and telescope is also described.

  3. Values and value--a vision for the Australian health care system.

    PubMed

    Bessler, J S; Ellies, M

    1995-01-01

    The Australian health care system is at a crossroads. Status quo is not a sustainable option for the future. Rising consumption, spiralling costs, the decline of private health insurance and a public sector 'bursting at the seams' threaten our traditional values of a universal, affordable, accessible, equitable, high-quality system. As a result, we believe that major reform of the health care system is both necessary and inevitable in order to ensure that the values of the system are maintained and to extract maximum value from limited health resources. In this article we lay out our vision for the Australian health care system. It is a vision characterised by transformational change--shifting of risk from patients and taxpayers to providers, downsizing of acute care capacity, integration of services across the system, rationalisation of State and Federal responsibilities and a 'shakeout' of providers and insurers resulting from intensified, but bounded, competition. We believe that the direction for health care players needs to be clarified so that, as a country, we can continue to have a best practice model of health care delivery. We present this vision as a 'stake in the ground' to set parameters around which this debate can emerge. It may be provocative and challenging, but it is our vision into the future. PMID:10152275

  4. Semi-autonomous wheelchair developed using a unique camera system configuration biologically inspired by equine vision.

    PubMed

    Nguyen, Jordan S; Tran, Yvonne; Su, Steven W; Nguyen, Hung T

    2011-01-01

    This paper is concerned with the design and development of a semi-autonomous wheelchair system using cameras in a configuration modeled on the vision system of a horse. This new camera configuration combines stereoscopic vision, for 3-Dimensional (3D) depth perception and mapping ahead of the wheelchair, with a spherical camera system providing 360 degrees of monocular vision. This combination allows static components of an unknown environment to be mapped and surrounding dynamic obstacles to be detected during real-time autonomous navigation, minimizing blind spots and preventing accidental collisions with people or obstacles. This novel vision system, combined with shared control strategies, provides intelligent assistive guidance during wheelchair navigation and can accompany any hands-free wheelchair control technology. Leading up to experimental trials with patients at the Royal Rehabilitation Centre (RRC) in Ryde, results have demonstrated the effectiveness of this system in helping the user navigate safely within the RRC whilst avoiding potential collisions. PMID:22255649

  5. Insect vision based collision avoidance system for Remotely Piloted Aircraft

    NASA Astrophysics Data System (ADS)

    Jaenisch, Holger; Handley, James; Bevilacqua, Andrew

    2012-06-01

    Remotely Piloted Aircraft (RPA) are designed to operate in many of the same areas as manned aircraft; however, the limited instantaneous field of regard (FOR) available to RPA pilots restricts their ability to react quickly to nearby objects. This increases the danger of mid-air collisions and limits the ability of RPAs to operate in terminals or other high-traffic environments. We present an approach based on insect vision that increases awareness while keeping size, weight, and power consumption to a minimum. Insect eyes are not designed to gather the same level of information that human eyes do. We present a novel Data Model and a dynamically updated look-up-table approach to interpret non-imaging, direction-sensing-only detectors observing a higher-resolution video image of the aerial field of regard. Our technique is a composite hybrid method combining a small cluster of low-resolution cameras, multiplexed into a single composite air picture, which is re-imaged by an insect eye to provide real-time scene understanding and collision avoidance cues. We provide smart-camera application examples from parachute deployment testing and micro unmanned aerial vehicle (UAV) full motion video (FMV).

  6. Retinal stimulation strategies to restore vision: Fundamentals and systems.

    PubMed

    Yue, Lan; Weiland, James D; Roska, Botond; Humayun, Mark S

    2016-07-01

    Retinal degeneration, a leading cause of blindness worldwide, is primarily characterized by dysfunctional or degenerated photoreceptors that impair the ability of the retina to detect light. Our group and others have shown that bioelectronic retinal implants restore useful visual input to those who have been blind for decades. This unprecedented approach to restoring sight demonstrates that patients can adapt to new visual input, and thereby opens up opportunities not only to improve this technology but also to develop alternative retinal stimulation approaches. These future improvements or new technologies could have the potential to selectively stimulate specific cell classes in the inner retina, leading to improved visual resolution and color vision. In this review we detail the progress of bioelectronic retinal implants and future devices in this genre, and discuss other technologies such as optogenetics, chemical photoswitches, and ultrasound stimulation. We discuss the principles, biological aspects, technology development, current status, clinical outcomes/prospects, and challenges for each approach. The review also covers cortical responses to retinal stimulation in blind patients, as documented by functional imaging. PMID:27238218

  7. Prediction of pork color attributes using computer vision system.

    PubMed

    Sun, Xin; Young, Jennifer; Liu, Jeng Hung; Bachmeier, Laura; Somers, Rose Marie; Chen, Kun Jie; Newman, David

    2016-03-01

    Color image processing and regression methods were used to evaluate the color score of pork center-cut loin samples. One hundred loin samples with subjective color scores 1 to 5 (NPB, 2011; n=20 for each color score) were selected to determine correlations between Minolta colorimeter measurements and image-processing features. Eighteen image color features were extracted from three color spaces: RGB (red, green, blue), HSI (hue, saturation, intensity), and L*a*b*. When comparing Minolta colorimeter values with those obtained from image processing, correlations were significant (P<0.0001) for L* (0.91), a* (0.80), and b* (0.66). Two regression models (linear and stepwise) were compared for predicting pork color attributes. The proposed linear regression model had a coefficient of determination (R(2)) of 0.83, compared with 0.70 for the stepwise regression. These results indicate that computer vision methods have potential as a tool for predicting pork color attributes. PMID:26619035
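
    As an illustration of the linear-regression step, the sketch below fits colorimeter L* values to image color features by ordinary least squares and reports R^2. The feature values and coefficients are synthetic; the paper's actual features and data are not reproduced here.

        # Predict colorimeter L* from image color features with ordinary
        # least squares. Feature values here are synthetic.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 100
        features = rng.uniform(0, 1, (n, 3))          # e.g., mean R, G, B per sample
        true_w = np.array([30.0, 10.0, 5.0])          # invented ground truth
        L_star = features @ true_w + 20 + rng.normal(0, 1, n)

        X = np.column_stack([np.ones(n), features])   # add intercept column
        coef, *_ = np.linalg.lstsq(X, L_star, rcond=None)
        pred = X @ coef
        r2 = 1 - np.sum((L_star - pred) ** 2) / np.sum((L_star - L_star.mean()) ** 2)
        print(f"R^2 = {r2:.2f}")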

  8. Vision-based system identification technique for building structures using a motion capture system

    NASA Astrophysics Data System (ADS)

    Oh, Byung Kwan; Hwang, Jin Woo; Kim, Yousok; Cho, Tongjun; Park, Hyo Seon

    2015-11-01

    This paper presents a new vision-based system identification (SI) technique for building structures using a motion capture system (MCS). The MCS, with its outstanding capabilities for dynamic response measurement, can provide gage-free measurement of vibrations through the convenient installation of multiple markers. In this technique, the dynamic characteristics (natural frequencies, mode shapes, and damping ratios) of building structures are extracted from the dynamic displacement responses measured by the MCS, after converting the displacements to accelerations and conducting SI by frequency domain decomposition (FDD). A free-vibration experiment on a three-story shear frame was conducted to validate the proposed technique. The SI results from the conventional accelerometer-based method were compared with those from the proposed technique and showed good agreement, confirming the validity and applicability of the proposed vision-based SI technique for building structures. Furthermore, SI applying the MCS-measured displacements directly to FDD was performed and gave results identical to those of the conventional SI method.
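
    For readers unfamiliar with frequency domain decomposition, the sketch below shows the core computation on synthetic two-channel data: form the cross-spectral density matrix and track its first singular value across frequency, whose peaks mark natural frequencies. The signal model and parameters are invented for illustration.

        # Minimal frequency-domain decomposition (FDD) sketch on synthetic
        # two-channel response data dominated by a 1.5 Hz "mode".
        import numpy as np
        from scipy.signal import csd

        fs = 100.0                                    # sampling rate, Hz
        t = np.arange(0, 60.0, 1 / fs)
        rng = np.random.default_rng(2)
        mode = np.sin(2 * np.pi * 1.5 * t)
        x = np.vstack([mode + 0.5 * rng.standard_normal(t.size),
                       0.8 * mode + 0.5 * rng.standard_normal(t.size)])

        nch = x.shape[0]
        f, _ = csd(x[0], x[0], fs=fs, nperseg=1024)   # frequency grid
        G = np.empty((f.size, nch, nch), dtype=complex)
        for i in range(nch):
            for j in range(nch):
                _, G[:, i, j] = csd(x[i], x[j], fs=fs, nperseg=1024)

        # First singular value of the CSD matrix at each frequency line;
        # its peaks indicate natural frequencies.
        s1 = np.linalg.svd(G, compute_uv=False)[:, 0]
        print(f"dominant peak near {f[np.argmax(s1)]:.2f} Hz")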

  9. Human performance evaluation of enhanced vision systems for approach and landing

    NASA Astrophysics Data System (ADS)

    Yang, Lee C.; Hansman, R. John, Jr.

    1994-07-01

    A study was conducted to compare three types of enhanced vision systems (EVS) from the human pilot's perspective. The EVS images were generated on a Silicon Graphics workstation to represent an active radar-mapping imaging system, an idealized forward-looking infrared (FLIR) sensor system, and a synthetic wireframe airport database system. The study involved six commercial airline pilots. The task was to make manual landings using a simulated head-up display superimposed on the EVS images. In addition to the image type, the sensor range was varied to examine the effect of atmospheric attenuation on landing performance; a third factor examined the effect of runway touchdown and centerline markings. The low azimuthal resolution of the radar images (0.3°) appeared to affect the lateral precision of the landings. Subjectively, the pilots were split between the idealized FLIR and wireframe images, while the radar image was judged significantly inferior. Runway markings provided better lateral accuracy in landing and better vertical accuracy during the approach, and were unanimously preferred by the six pilots.

  10. Navigation integrity monitoring and obstacle detection for enhanced-vision systems

    NASA Astrophysics Data System (ADS)

    Korn, Bernd; Doehler, Hans-Ullrich; Hecker, Peter

    2001-08-01

    Typically, Enhanced Vision (EV) systems consist of two main parts, sensor vision and synthetic vision. Synthetic vision usually generates a virtual out-the-window view using databases and accurate navigation data, e.g., provided by differential GPS (DGPS). The reliability of the synthetic vision depends highly on both the accuracy of the database used and the integrity of the navigation data. But especially in GPS-based systems, the integrity of the navigation cannot be guaranteed. Furthermore, only objects that are stored in the database can be displayed to the pilot; unexpected obstacles are invisible, and this might cause severe problems. Therefore, additional information has to be extracted from sensor data to overcome these problems. In particular, the sensor data analysis has to identify obstacles and has to monitor the integrity of databases and navigation. Furthermore, if a lack of integrity arises, navigation data, e.g., the relative position of runway and aircraft, has to be extracted directly from the sensor data. The main contribution of this paper concerns the realization of these three sensor-data analysis tasks within our EV system, which uses the HiVision 35 GHz MMW radar of EADS, Ulm, as the primary EV sensor. For the integrity monitoring, objects extracted from radar images are registered with both database objects and objects (e.g., other aircraft) transmitted via data link. This results in a classification into known and unknown radar image objects and, consequently, in a validation of the integrity of database and navigation. Furthermore, special runway structures are searched for in the radar image where they should appear; the outcome of this runway check contributes to the integrity analysis, too. Concurrently with this investigation, a radar-image-based navigation is performed, using neither precision navigation nor detailed database information, to determine the aircraft's position relative to the runway. The performance of our

  11. Reducing field distortion for galvanometer scanning system using a vision system

    NASA Astrophysics Data System (ADS)

    Ortega Delgado, Moises Alberto; Lasagni, Andrés Fabián

    2016-11-01

    Laser galvanometer scanning systems are well-established devices for material processing, medical imaging, and laser projection. Despite all the advantages of these devices, such as high resolution, repeatability, and processing velocity, they are always affected by field distortions. Different pre-compensation techniques using iterative marking and measuring methods are applied to reduce such field distortions and increase, to some extent, the accuracy of the scanning systems. High-end devices, temperature control systems, and self-adjusting galvanometers are some expensive options for reducing these deviations. This contribution presents a method for reducing field distortions using a coaxially coupled vision device and a self-designed calibration plate; this avoids, among other things, the need for repetitive marking and measuring phases.
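
    As a hedged sketch of vision-based pre-compensation, the code below fits a polynomial map from camera-measured spot positions back to commanded galvo coordinates on a calibration grid, then pre-distorts future commands. The distortion model and polynomial order are assumptions, not the paper's exact method.

        # Vision-based field-distortion calibration sketch: fit the inverse
        # map (measured -> commanded) on a grid, then pre-distort commands.
        import numpy as np

        def poly_features(x, y):
            # 2-D quadratic basis; higher orders suit stronger distortion.
            return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

        # Commanded grid vs. (synthetic) measured positions with a
        # pincushion-like error standing in for the real field distortion.
        gx, gy = np.meshgrid(np.linspace(-1, 1, 11), np.linspace(-1, 1, 11))
        cx, cy = gx.ravel(), gy.ravel()
        mx = cx + 0.03 * cx * (cx**2 + cy**2)
        my = cy + 0.03 * cy * (cx**2 + cy**2)

        A = poly_features(mx, my)
        wx, *_ = np.linalg.lstsq(A, cx, rcond=None)
        wy, *_ = np.linalg.lstsq(A, cy, rcond=None)

        def precompensate(x_target, y_target):
            # Commands that should land the beam on the target position.
            a = poly_features(np.atleast_1d(x_target), np.atleast_1d(y_target))
            return a @ wx, a @ wy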

  12. External Vision Systems (XVS) Proof-of-Concept Flight Test Evaluation

    NASA Technical Reports Server (NTRS)

    Shelton, Kevin J.; Williams, Steven P.; Kramer, Lynda J.; Arthur, Jarvis J.; Prinzel, Lawrence, III; Bailey, Randall E.

    2014-01-01

    NASA's Fundamental Aeronautics Program, High Speed Project, is performing research, development, test, and evaluation of flight deck and related technologies to support future low-boom, supersonic configurations (without forward-facing windows) by use of an eXternal Vision System (XVS). The challenge of XVS is to determine a combination of sensor and display technologies which can provide an equivalent level of safety and performance to that provided by forward-facing windows in today's aircraft. This flight test was conducted with the goal of obtaining performance data on see-and-avoid and see-to-follow traffic using a proof-of-concept (POC) XVS design in actual flight conditions. Six data-collection flights were flown in four traffic scenarios against two different-sized participating traffic aircraft. The test utilized a 3x1 array of High Definition (HD) cameras, with a fixed forward field of view, mounted on NASA Langley's UC-12 test aircraft. Test scenarios, with participating NASA aircraft serving as traffic, were presented to two evaluation pilots per flight - one using the POC XVS and the other looking out the forward windows. The camera images were presented on the XVS display in the aft cabin with Head-Up Display (HUD)-like flight symbology overlaying the real-time imagery. The test generated XVS performance data, including comparisons to natural vision; post-run subjective acceptability data were also collected. This paper discusses the flight test activities and operational challenges, and summarizes the findings to date.

  13. Integration of a Legacy System with Night Vision Training System (NVTS)

    NASA Astrophysics Data System (ADS)

    Anderson, Gretchen M.; Vrana, Craig A.; Riegler, Joseph T.; Martin, Elizabeth L.

    2002-08-01

    The increase in tactical night operations resulted in the requirement for improved night vision goggle (NVG) training and simulation. The Night Vision Training System (NVTS), developed at the Air Force Research Laboratory's Warfighter Training Research Division (AFRL/HEA), provides the high-fidelity NVG imagery required to support effective NVG training and mission rehearsal. Acquisition of a multichannel NVTS, to drive both an out-the-window (OTW) view and a helmet-mounted display (HMD), may exceed the resources of some training units. An alternative could be to add one channel of NVG imagery to the existing OTW imagery provided by a legacy system. This evaluation addressed engineering and training issues associated with integrating a single NVTS HMD channel with an existing legacy system. Pilots rated the degree of disparity between the HMD and OTW scenes for various scene attributes and its effect on flight performance. Findings demonstrated the potential for integration of an NVTS channel with an existing legacy system. Latency and terrain elevation differences between the two databases were measured and did not significantly impact system integration or pilot ratings. When integrating other legacy systems with NVTS, significant disparities may exist between the two databases. Pilot ratings and comments indicate that (a) display brightness and contrast levels of the OTW scene should be set to correspond to real-world, unaided luminance values for a given illumination condition; (b) disparity in moon phase and position between the two sky models should be minimized; and (c) star quantity and brightness in the OTW scene and the NVG scene, as rendered on the HMD, should be as consistent with real-world conditions as possible.

  14. Design and implementation of high power LED machine vision lighting system

    NASA Astrophysics Data System (ADS)

    Xiao, Hua-peng; Li, Ming-dong; Gao, Xing-yu; Chen, Peng-bo

    2014-11-01

    Machine vision systems have become opto-mechanical-electronic integrated products, or components of such products, in the modern equipment manufacturing industry. New LEDs are superior to tungsten-halogen lamps, lasers, and other traditional light sources, and they are used in machine vision systems more and more. Analyzing the functional characteristics, this article points out the differences between machine-vision LED lighting systems and traditional optical-instrument lighting systems. Using an interactive method that integrates design, analysis, and simulation, this paper imports field-flattening theory into traditional lighting design, producing a new flat-field lighting system; applied to a high-power LED lighting system, the effect is good. With the new design concept, the interactive design method, and a new image-quality evaluation system, we carried out a comparative experiment on a single-lamp LED lighting system. The results show that the flat-field lighting system is superior to the traditional one. The most distinctive feature of the new lighting system is that it can remedy a known weakness of critical illumination systems -- poor illumination uniformity. This new lighting optical structure and the new lighting-quality evaluation system have broad prospects.
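
    The abstract implies a quantitative comparison of illumination uniformity; the sketch below computes two common flat-field figures of merit (min/max ratio and coefficient of variation) on synthetic captured images. The metrics and data are illustrative assumptions, not the paper's evaluation system.

        # Flat-field uniformity figures of merit on synthetic intensity images.
        import numpy as np

        def uniformity(img):
            img = img.astype(float)
            return img.min() / img.max(), img.std() / img.mean()

        rng = np.random.default_rng(3)
        flat = 200 + rng.normal(0, 2, (480, 640))              # near-uniform field
        vignetted = flat * (1 - 0.2 * np.linspace(0, 1, 640))  # falls off to one edge

        for name, im in [("flat-field", flat), ("vignetted", vignetted)]:
            mn_mx, cv = uniformity(im)
            print(f"{name}: min/max = {mn_mx:.2f}, CV = {cv:.3f}")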

  15. Novel compact panomorph lens based vision system for monitoring around a vehicle

    NASA Astrophysics Data System (ADS)

    Thibault, Simon

    2008-04-01

    Automotive applications are one of the largest vision-sensor market segments and one of the fastest growing. The trend toward using ever more sensors in cars is driven both by legislation and by consumer demand for higher safety and a better driving experience. Awareness of what directly surrounds a vehicle affects safe driving and manoeuvring. Consequently, panoramic 360° field-of-view imaging can contribute more to the driver's perception of the surrounding world than any other sensor. However, several sensor systems are normally necessary to obtain a complete view around the car. To address this issue, a customized imaging system based on a panomorph lens provides maximum information to the driver with a reduced number of sensors. A panomorph lens is a hemispheric wide-angle anamorphic lens with enhanced resolution in predefined zones of interest. Because panomorph lenses are optimized to a custom angle-to-pixel relationship, vision systems provide ideal image coverage that reduces and optimizes the processing. We present various scenarios which may benefit from the use of a custom panoramic sensor and discuss the technical requirements of such a vision system. Finally, we argue that the panomorph-based visual sensor is one of the most promising ways to fuse many sensors into one: for example, a single panoramic sensor on the front of a vehicle could provide all the information necessary for crash-avoidance assistance, lane tracking, early warning, parking aids, road sign detection, and various video monitoring views.
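
    The angle-to-pixel relationship can be made concrete with a toy mapping: relative to an equidistant fisheye, where image radius grows linearly with field angle, a panomorph-style design budgets extra pixels to a zone of interest. The zone and densities below are made-up values, not the lens's actual design curve.

        # Toy panomorph-style angle-to-pixel mapping: extra pixel density in
        # a 20-50 degree zone of interest, versus r proportional to theta
        # for an equidistant fisheye. Values are illustrative only.
        import numpy as np

        theta = np.linspace(0, 90, 91)                 # field angle, degrees
        density = np.where((theta > 20) & (theta < 50), 3.0, 1.0)
        r = np.cumsum(density)
        r = r / r[-1] * 400                            # normalize to 400 px radius
        # r[i] approximates the image-plane radius (pixels) for angle theta[i];
        # the enhanced zone occupies proportionally more of the sensor.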

  16. Using an Interactive Information System to Expand Preservice Teachers' Visions of Effective Mathematics Teaching.

    ERIC Educational Resources Information Center

    Lambdin, Diana V.; Duffy, Thomas M.; Moore, Julie A.

    1997-01-01

    Describes research that investigated how use of an interactive videodisk information system helped preservice elementary school teachers expand their visions of teaching, learning, and assessment in mathematics. Teachers and lessons in the videos served as models for the preservice teachers and offered a springboard for student reflection and…

  17. 77 FR 21861 - Special Conditions: Boeing, Model 777F; Enhanced Flight Vision System

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-12

    ... April 11, 2000 (65 FR 19477-19478), as well as at http://DocketsInfo.dot.gov/ . Docket: Background... Federal Aviation Administration 14 CFR Part 25 Special Conditions: Boeing, Model 777F; Enhanced Flight... feature associated with an advanced, enhanced flight vision system (EFVS). The EFVS consists of a...

  18. Enhanced Flight Vision Systems Operational Feasibility Study Using Radar and Infrared Sensors

    NASA Technical Reports Server (NTRS)

    Etherington, Timothy J.; Kramer, Lynda J.; Severance, Kurt; Bailey, Randall E.; Williams, Steven P.; Harrison, Stephanie J.

    2015-01-01

    Approach and landing operations during periods of reduced visibility have plagued aircraft pilots since the beginning of aviation. Although techniques are currently available to mitigate some of the visibility conditions, these operations are still ultimately limited by the pilot's ability to "see" required visual landing references (e.g., markings and/or lights of threshold and touchdown zone) and require significant and costly ground infrastructure. Certified Enhanced Flight Vision Systems (EFVS) have shown promise to lift the obscuration veil. They allow the pilot to operate with enhanced vision, in lieu of natural vision, in the visual segment to enable equivalent visual operations (EVO). An aviation standards document was developed with industry and government consensus for using an EFVS for approach, landing, and rollout to a safe taxi speed in visibilities as low as 300 feet runway visual range (RVR). These new standards establish performance, integrity, availability, and safety requirements to operate in this regime without reliance on a pilot's or flight crew's natural vision by use of a fail-operational EFVS. A pilot-in-the-loop high-fidelity motion simulation study was conducted at NASA Langley Research Center to evaluate the operational feasibility, pilot workload, and pilot acceptability of conducting straight-in instrument approaches with published vertical guidance to landing, touchdown, and rollout to a safe taxi speed in visibility as low as 300 feet RVR by use of vision system technologies on a head-up display (HUD) without need or reliance on natural vision. Twelve crews flew various landing and departure scenarios in 1800, 1000, 700, and 300 RVR. This paper details the non-normal results of the study including objective and subjective measures of performance and acceptability. The study validated the operational feasibility of approach and departure operations and success was independent of visibility conditions. Failures were handled within the

  19. Development of a body motion interactive system with a weight voting mechanism and computer vision technology

    NASA Astrophysics Data System (ADS)

    Lin, Chern-Sheng; Chen, Chia-Tse; Shei, Hung-Jung; Lay, Yun-Long; Chiu, Chuang-Chien

    2012-09-01

    This study develops a body-motion interactive system with computer vision technology. The application combines interactive games, performance art, and an exercise training system. Multiple image processing and computer vision technologies are used. The system can calculate the color characteristics of an object and then perform color segmentation. When an action judgment is wrong, the system avoids the error with a weight voting mechanism, which sets a condition score and weight value for each action judgment and chooses the best judgment by weighted vote. Finally, the study estimated the reliability of the system in order to make improvements. The results showed that the method achieves good accuracy and stability during operation of the human-machine interface of the sports training system.
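
    As a minimal sketch of a weight voting mechanism of the kind described, the code below lets each visual cue vote for an action with a confidence score multiplied by a fixed weight and picks the highest-scoring action. The cue names, weights, and scores are illustrative assumptions.

        # Weight voting among action judgments from several visual cues.
        from collections import defaultdict

        def weighted_vote(judgments, weights):
            """judgments: list of (cue_name, action, confidence in [0, 1])."""
            totals = defaultdict(float)
            for cue, action, score in judgments:
                totals[action] += weights.get(cue, 1.0) * score
            return max(totals, key=totals.get)

        weights = {"color_segmentation": 2.0, "motion_history": 1.5, "shape": 1.0}
        judgments = [("color_segmentation", "raise_left_arm", 0.7),
                     ("motion_history", "raise_right_arm", 0.9),
                     ("shape", "raise_left_arm", 0.6)]
        # left arm: 2.0*0.7 + 1.0*0.6 = 2.0 beats right arm: 1.5*0.9 = 1.35
        print(weighted_vote(judgments, weights))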

  1. Using Vision System Technologies to Enable Operational Improvements for Low Visibility Approach and Landing Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Ellis, Kyle K. E.; Bailey, Randall E.; Williams, Steven P.; Severance, Kurt; Le Vie, Lisa R.; Comstock, James R.

    2014-01-01

    Flight deck-based vision systems, such as Synthetic and Enhanced Vision System (SEVS) technologies, have the potential to provide additional margins of safety for aircrew performance and to enable operational improvements for low-visibility surface, arrival, and departure operations in the terminal environment with efficiency equivalent to visual operations. To achieve this potential, research is required for effective technology development and implementation based upon human factors design and regulatory guidance. This research supports the introduction and use of Synthetic Vision Systems and Enhanced Flight Vision Systems (SVS/EFVS) as advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. Twelve air transport-rated crews participated in a motion-base simulation experiment to evaluate the use of SVS/EFVS in NextGen low-visibility approach and landing operations. Three monochromatic, collimated head-up display (HUD) concepts (conventional HUD, SVS HUD, and EFVS HUD) and two color head-down primary flight display (PFD) concepts (conventional PFD, SVS PFD) were evaluated in a simulated NextGen Chicago O'Hare terminal environment. Additionally, the instrument approach type (no offset, 3-degree offset, 15-degree offset) was experimentally varied to test the efficacy of the HUD concepts for offset approach operations. The data showed that touchdown landing performance was excellent regardless of SEVS concept or type of offset instrument approach being flown. Subjective assessments of mental workload and situation awareness indicated that making offset approaches in low-visibility conditions with an EFVS HUD or SVS HUD may be feasible.

  2. Modeling the L4 neuron of the fly (Musca domestica) vision system.

    PubMed

    Olson, T E; Wilcox, M J; Barrett, S F

    2001-01-01

    Vision systems based on digital image processing techniques are limited in a variety of areas, particularly speed and memory: contrast enhancement, image segmentation, object recognition, and object tracking require extensive processing. Biological vision systems drastically outperform computer-based digital vision systems in these areas. The animal retina is composed of processing layers with specialized neural cells designed to enhance contrast, segment images, and even produce temporal information. In the vision system of the fly, Musca domestica, the L1, L2, and L4 monopolar cells are of particular interest. The photoreceptor terminals R1 through R6, together with L1 and L2, form a cartridge with current-shunting inhibition that enhances contrast at the first synaptic contact. L1 and L2 cells are thought to exaggerate contrast while also providing a data-reduction encoding scheme to increase communication efficiency with L4 cells and the inner plexiform layer. Research conducted by the authors attempts to simulate the encoding scheme of L1, L2, and L4 and the interactions of these three monopolar cells. This paper proposes that L1 and L2 encode edge information and orientation related to a single cartridge via a sinusoidal modulation scheme, that L4 mediates information processing between cartridges via bi-directional dendritic communication with three adjacent L4 cells, and that L4 also synthesizes and forwards the edge orientation and image movement information to the medulla. A single-cartridge simulation was conducted using Matlab, and simulation results will be compared with actual signals taken from the fly eye. Because the fly eye is modular, the goal of this research is to implement the L1, L2, and L4 cell function in analog hardware -- the result being a real-time parallel analog vision system. PMID:11347424
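
    As a heavily simplified toy rendering of the proposed encoding (not the authors' model equations), the sketch below carries a cartridge's edge contrast and orientation as the amplitude and phase of a sinusoidal signal.

        # Toy sinusoidal-modulation encoding: amplitude = edge contrast,
        # phase = edge orientation. Purely illustrative of the idea.
        import numpy as np

        def encode(contrast, orientation_rad, t, carrier_hz=40.0):
            return contrast * np.sin(2 * np.pi * carrier_hz * t + orientation_rad)

        t = np.linspace(0, 0.1, 1000)
        signal = encode(contrast=0.8, orientation_rad=np.pi / 4, t=t)
        # A downstream (L4-like) stage could recover orientation by comparing
        # the phase of adjacent cartridges' signals.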

  3. Assessing Impact of Dual Sensor Enhanced Flight Vision Systems on Departure Performance

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Etherington, Timothy J.; Severance, Kurt; Bailey, Randall E.

    2016-01-01

    Synthetic Vision (SV) and Enhanced Flight Vision Systems (EFVS) may serve as game-changing technologies to meet the challenges of the Next Generation Air Transportation System and the envisioned Equivalent Visual Operations (EVO) concept - that is, the ability to achieve the safety and operational tempos of current-day Visual Flight Rules operations irrespective of the weather and visibility conditions. One significant obstacle lies in the definition of required equipage on the aircraft and on the airport to enable the EVO concept objective. A motion-base simulator experiment was conducted to evaluate the operational feasibility and pilot workload of conducting departures and approaches on runways without centerline lighting in visibility as low as 300 feet runway visual range (RVR) by use of onboard vision system technologies on a Head-Up Display (HUD) without need or reliance on natural vision. Twelve crews evaluated two methods of combining dual-sensor (millimeter-wave radar and forward-looking infrared) EFVS imagery on pilot-flying and pilot-monitoring HUDs. In addition, the impact of adding SV to the dual-sensor EFVS imagery on crew flight performance and workload was assessed. Using EFVS concepts during 300 RVR terminal operations on runways without centerline lighting appears feasible, as all EFVS concepts yielded departure and landing rollout performance equivalent to, or better than, that of operations flown with a conventional HUD to runways with centerline lighting, with no workload penalty. Adding SV imagery to the EFVS concepts improved situation awareness but produced no discernible improvement in flight path maintenance.

  4. Assessing impact of dual sensor enhanced flight vision systems on departure performance

    NASA Astrophysics Data System (ADS)

    Kramer, Lynda J.; Etherington, Timothy J.; Severance, Kurt; Bailey, Randall E.

    2016-05-01

    Synthetic Vision (SV) and Enhanced Flight Vision Systems (EFVS) may serve as game-changing technologies to meet the challenges of the Next Generation Air Transportation System and the envisioned Equivalent Visual Operations (EVO) concept - that is, the ability to achieve the safety and operational tempos of current-day Visual Flight Rules operations irrespective of the weather and visibility conditions. One significant obstacle lies in the definition of required equipage on the aircraft and on the airport to enable the EVO concept objective. A motion-base simulator experiment was conducted to evaluate the operational feasibility and pilot workload of conducting departures and approaches on runways without centerline lighting in visibility as low as 300 feet runway visual range (RVR) by use of onboard vision system technologies on a Head-Up Display (HUD) without need or reliance on natural vision. Twelve crews evaluated two methods of combining dual sensor (millimeter wave radar and forward looking infrared) EFVS imagery on pilot-flying and pilot-monitoring HUDs. In addition, the impact of adding SV to the dual sensor EFVS imagery on crew flight performance and workload was assessed. Using EFVS concepts during 300 RVR terminal operations on runways without centerline lighting appears feasible, as all EFVS concepts yielded departure and landing rollout performance equivalent to (or better than) that of operations flown with a conventional HUD to runways having centerline lighting, without any workload penalty. Adding SV imagery to EFVS concepts provided situation awareness improvements but no discernible improvements in flight path maintenance.

  5. The experiment study of image acquisition system based on 3D machine vision

    NASA Astrophysics Data System (ADS)

    Zhou, Haiying; Xiao, Zexin; Zhang, Xuefei; Wei, Zhe

    2011-11-01

    Binocular vision is one of the key technologies for three-dimensional scene reconstruction in 3D machine vision, and important three-dimensional information can be acquired with it. In use, two or more pictures are first captured by cameras, and the three-dimensional information contained in these pictures is then recovered through geometric and other relationships. In order to improve the measurement accuracy of the image acquisition system, an image acquisition system for binocular-vision three-dimensional scene reconstruction is studied in this article. Based on the parallax principle and binocular imaging in the human eye, an image acquisition scheme using a double optical path and double CCDs is proposed. Experiments determine the best angle between the optical axes of the two paths and the best operating distance of the double optical path; from these, the center distance between the two CCDs can be fixed. Two images of the same scene from different viewpoints are then captured by the double CCDs, establishing a sound foundation for the later three-dimensional reconstruction in image processing. The experimental data show the rationality of this method.
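
    For reference, the parallax principle invoked above reduces, for a rectified parallel-axis camera pair, to the classic relation Z = fB/d, where f is the focal length in pixels, B is the distance between the two CCD centers, and d is the disparity. A minimal sketch with illustrative numbers (none taken from the paper):

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Depth from the parallax relation Z = f * B / d (rectified cameras).

    disparity_px : horizontal pixel offset between the two views
    focal_px     : focal length expressed in pixels
    baseline_mm  : distance between the two CCD centers
    """
    d = np.asarray(disparity_px, dtype=float)
    return np.where(d > 0, focal_px * baseline_mm / d, np.inf)

# Illustrative numbers: 8 mm lens on 5 um pixels -> f = 1600 px,
# CCD center distance 120 mm, disparity of 40 px -> Z = 4800 mm.
print(depth_from_disparity(40, focal_px=1600, baseline_mm=120))
```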

  6. The effect of gender and level of vision on the physical activity level of children and adolescents with visual impairment.

    PubMed

    Aslan, Ummuhan Bas; Calik, Bilge Basakcı; Kitiş, Ali

    2012-01-01

    This study was planned in order to determine the physical activity levels of visually impaired children and adolescents and to investigate the effect of gender and level of vision on physical activity level in this population. A total of 30 visually impaired children and adolescents (16 with low vision and 14 blind) aged between 8 and 16 years participated in the study. The physical activity level of the participants was evaluated with a physical activity diary (PAD) and a one-mile run/walk test (OMR-WT). No difference was found between the PAD and OMR-WT results of low vision and blind children and adolescents. The visually impaired children and adolescents were found not to participate in vigorous physical activity. A difference was found in favor of low vision boys in terms of mild and moderate activities and OMR-WT durations. However, no difference was found between the physical activity levels of blind girls and boys. The results of our study suggest that the physical activity level of visually impaired children and adolescents was low, and that gender affected physical activity in low vision children and adolescents.

  7. Angle extended linear MEMS scanning system for 3D laser vision sensor

    NASA Astrophysics Data System (ADS)

    Pang, Yajun; Zhang, Yinxin; Yang, Huaidong; Zhu, Pan; Gai, Ye; Zhao, Jian; Huang, Zhanhua

    2016-09-01

    The scanning system is often considered the most important part of a 3D laser vision sensor. In this paper, we propose a method for the optical design of an angle-extended linear MEMS scanning system, which features a large scanning angle, a small beam divergence angle, and a small spot size for a 3D laser vision sensor. The design principle and theoretical formulas are derived rigorously. With the help of the ZEMAX software, a linear scanning optical system based on MEMS has been designed. Results show that the designed system can extend the scanning angle from ±8° to ±26.5° with a divergence angle smaller than 3.5 mrad, while the spot size is reduced by a factor of 4.545.

  8. The Use of a Tactile-Vision Sensory Substitution System as an Augmentative Tool for Individuals with Visual Impairments

    ERIC Educational Resources Information Center

    Williams, Michael D.; Ray, Christopher T.; Griffith, Jennifer; De l'Aune, William

    2011-01-01

    The promise of novel technological strategies and solutions to assist persons with visual impairments (that is, those who are blind or have low vision) is frequently discussed and held to be widely beneficial in countless applications and daily activities. One such approach involving a tactile-vision sensory substitution modality as a mechanism to…

  9. Assessing Dual Sensor Enhanced Flight Vision Systems to Enable Equivalent Visual Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Etherington, Timothy J.; Severance, Kurt; Bailey, Randall E.; Williams, Steven P.; Harrison, Stephanie J.

    2016-01-01

    Flight deck-based vision system technologies, such as Synthetic Vision (SV) and Enhanced Flight Vision Systems (EFVS), may serve as a revolutionary crew/vehicle interface enabling technologies to meet the challenges of the Next Generation Air Transportation System Equivalent Visual Operations (EVO) concept - that is, the ability to achieve the safety of current-day Visual Flight Rules (VFR) operations and maintain the operational tempos of VFR irrespective of the weather and visibility conditions. One significant challenge lies in the definition of required equipage on the aircraft and on the airport to enable the EVO concept objective. A motion-base simulator experiment was conducted to evaluate the operational feasibility, pilot workload and pilot acceptability of conducting straight-in instrument approaches with published vertical guidance to landing, touchdown, and rollout to a safe taxi speed in visibility as low as 300 ft runway visual range by use of onboard vision system technologies on a Head-Up Display (HUD) without need or reliance on natural vision. Twelve crews evaluated two methods of combining dual sensor (millimeter wave radar and forward looking infrared) EFVS imagery on pilot-flying and pilot-monitoring HUDs as they made approaches to runways with and without touchdown zone and centerline lights. In addition, the impact of adding SV to the dual sensor EFVS imagery on crew flight performance, workload, and situation awareness during extremely low visibility approach and landing operations was assessed. Results indicate that all EFVS concepts flown resulted in excellent approach path tracking and touchdown performance without any workload penalty. Adding SV imagery to EFVS concepts provided situation awareness improvements but no discernible improvements in flight path maintenance.

  10. Adaptive gain control for spike-based map communication in a neuromorphic vision system.

    PubMed

    Meng, Yicong; Shi, Bertram E

    2008-06-01

    To support large numbers of model neurons, neuromorphic vision systems are increasingly adopting a distributed architecture, where different arrays of neurons are located on different chips or processors. Spike-based protocols are used to communicate activity between processors. The spike activity in the arrays depends on the input statistics as well as internal parameters such as time constants and gains. In this paper, we investigate strategies for automatically adapting these parameters to maintain a constant firing rate in response to changes in the input statistics. We find that under the constraint of maintaining a fixed firing rate, a strategy based upon updating the gain alone performs as well as an optimal strategy where both the gain and the time constant are allowed to vary. We discuss how to choose the time constant and propose an adaptive gain control mechanism whose operation is robust to changes in the input statistics. Our experimental results on a mobile robotic platform validate the analysis and efficacy of the proposed strategy.
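
    The gain-only strategy found to perform well above can be sketched as a multiplicative update that nudges the gain whenever the measured population firing rate departs from the target. The following is a schematic toy model (a leaky integrate-and-fire array with made-up constants), not the authors' neuromorphic implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

target = 20.0        # desired firing rate (spikes/s per neuron)
gain = 1.0           # the only adapted parameter
tau = 0.02           # membrane time constant (s), fixed a priori
dt, v_th, eta = 1e-3, 1.0, 3e-4

v = np.zeros(256)    # membrane potentials of the neuron array
rate_ema = target
for step in range(20000):
    # Input statistics change abruptly halfway through the run.
    mean_drive = 50.0 if step < 10000 else 200.0
    drive = rng.exponential(mean_drive, size=v.shape)
    v += dt * (-v / tau + gain * drive)   # leaky integration
    spikes = v >= v_th
    v[spikes] = 0.0                       # reset spiking neurons
    rate = spikes.mean() / dt             # population firing rate
    rate_ema += 0.001 * (rate - rate_ema)
    # Multiplicative gain-only update toward the target rate.
    gain *= 1.0 + eta * (target - rate) / target
    if step % 5000 == 4999:
        print(f"step {step+1}: gain={gain:.3f} rate~{rate_ema:.1f} Hz")
```

    Despite the abrupt change in input statistics at the midpoint, the gain settles to a new value that restores the target rate, which is the behavior the paper's analysis predicts for the gain-only strategy.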

  11. Development of a model of machine hand eye coordination and program specifications for a topological machine vision system

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote, supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions and an aerospace multiprocessor implementation is described.

  12. Crew and display concepts evaluation for synthetic/enhanced vision systems

    NASA Astrophysics Data System (ADS)

    Bailey, Randall E.; Kramer, Lynda J.; Prinzel, Lawrence J., III

    2006-05-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that strive to eliminate low-visibility conditions as a causal factor to civil aircraft accidents and replicate the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. Enhanced Vision System (EVS) technologies are analogous and complementary in many respects to SVS, with the principal difference being that EVS is an imaging sensor presentation, as opposed to a database-derived image. The use of EVS in civil aircraft is projected to increase rapidly as the Federal Aviation Administration recently changed the aircraft operating rules under Part 91, revising the flight visibility requirements for conducting operations to civil airports. Operators conducting straight-in instrument approach procedures may now operate below the published approach minimums when using an approved EVS that shows the required visual references on the pilot's Head-Up Display. An experiment was conducted to evaluate the complementary use of SVS and EVS technologies, specifically focusing on new techniques for integration and/or fusion of synthetic and enhanced vision technologies and crew resource management while operating under the newly adopted FAA rules which provide operating credit for EVS. Overall, the experimental data showed that significant improvements in situation awareness (SA), without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying.

  13. Vision system using linear CCD cameras in fluorescent magnetic particle inspection of axles of railway wheelsets

    NASA Astrophysics Data System (ADS)

    Hao, Hongwei; Li, Luming; Deng, Yuanhui

    2005-05-01

    Automatic magnetic particle inspection based on a vision system using CCD cameras is a new development in magnetic particle inspection. A vision system using linear CCD cameras for semiautomatic fluorescent magnetic particle inspection of axles of railway wheelsets is presented in this paper. The system includes four linear CCD cameras, a PCI data acquisition and logic control card, and an industrial computer. The characteristic striations induced by UV-light flicker in the scanned images acquired by the linear CCD cameras are investigated, and digital image processing methods for images of magnetic particle indications are designed to identify the cracks, including image pre-processing using wavelets and connected-region edge detection based on the Canny operator with double thresholds. The experimental results show that the system can detect cracks effectively, and it can substantially improve inspection quality and increase productivity.
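
    The edge-detection stage described above (Canny operator with double thresholds, followed by connected-region analysis) can be approximated with standard tools. The sketch below uses OpenCV with illustrative thresholds, a hypothetical file name, an assumed minimum-area limit and elongation test rather than the authors' tuned parameters, and a simple Gaussian blur in place of the wavelet pre-processing:

```python
import cv2

def find_crack_indications(gray, t_low=40, t_high=120, min_area=25):
    """Locate elongated bright indications in a fluorescent MPI image.

    t_low/t_high : Canny double thresholds (illustrative values)
    min_area     : reject connected regions smaller than this (pixels)
    """
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)    # suppress striation noise
    edges = cv2.Canny(blurred, t_low, t_high)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(edges)
    cracks = []
    for i in range(1, n):                          # label 0 is background
        x, y, w, h, area = stats[i]
        if area >= min_area and max(w, h) > 3 * min(w, h):
            cracks.append((x, y, w, h))            # keep elongated regions
    return cracks

img = cv2.imread("axle_scan.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
if img is not None:
    print(find_crack_indications(img))
```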

  14. Adaptive optic vision correction system using the Z-View wavefront sensor

    NASA Astrophysics Data System (ADS)

    Liu, Yueai; Warden, Laurence; Sandler, David; Dreher, Andreas

    2005-12-01

    High-order aberrations in the human eye can degrade visual acuity and contrast sensitivity. Such aberrations cannot be corrected with traditional low-order (defocus and astigmatism) spectacles or contact lenses. A state-of-the-art adaptive optics vision correction system was developed using Ophthonix's Z-View diffractive wavefront sensor and a commercial miniature deformable mirror. While being measured and corrected by this system, the patient can also view a Snellen chart or a contrast sensitivity chart through the system in order to experience the vision benefits in both visual acuity and contrast sensitivity. A preliminary study has shown the potential for this system to be used in a doctor's office to give patients a subjective feel for the objective high-order prescription measured by Z-View.

  15. An Active Vision Approach to Understanding and Improving Visual Training in the Geosciences

    NASA Astrophysics Data System (ADS)

    Voronov, J.; Tarduno, J. A.; Jacobs, R. A.; Pelz, J. B.; Rosen, M. R.

    2009-12-01

    Experience in the field is a fundamental aspect of geologic training, and its effectiveness is largely unchallenged because of anecdotal evidence of its success among expert geologists. However, there have been only a few quantitative studies based on large data collection efforts to investigate how Earth scientists learn in the field. In a recent collaboration among Earth scientists, cognitive scientists, and imaging scientists at the University of Rochester and the Rochester Institute of Technology, we are conducting such a study. Within cognitive science, one school of thought, referred to as the Active Vision approach, emphasizes that visual perception is an active process requiring us to move our eyes to acquire new information about our environment. The Active Vision approach indicates the perceptual skills which experts possess and which novices need to acquire to achieve expert performance. We describe data collection efforts using portable eye-trackers to assess how novice and expert geologists acquire visual knowledge in the field. We also discuss our efforts to collect images for use in a semi-immersive classroom environment, useful for further testing of novices and experts using eye-tracking technologies.

  16. Defining filled and empty space: reassessing the filled space illusion for active touch and vision.

    PubMed

    Collier, Elizabeth S; Lawson, Rebecca

    2016-09-01

    In the filled space illusion, an extent filled with gratings is estimated as longer than an equivalent extent that is apparently empty. However, researchers do not seem to have carefully considered the terms filled and empty when describing this illusion. Specifically, for active touch, smooth, solid surfaces have typically been used to represent empty space. Thus, it is not known whether comparing gratings to truly empty space (air) during active exploration by touch elicits the same illusionary effect. In Experiments 1 and 2, gratings were estimated as longer if they were compared to smooth, solid surfaces rather than being compared to truly empty space. Consistent with this, Experiment 3 showed that empty space was perceived as longer than solid surfaces when the two were compared directly. Together these results are consistent with the hypothesis that, for touch, the standard filled space illusion only occurs if gratings are compared to smooth, solid surfaces and that it may reverse if gratings are compared to empty space. Finally, Experiment 4 showed that gratings were estimated as longer than both solid and empty extents in vision, so the direction of the filled space illusion in vision was not affected by the nature of the comparator. These results are discussed in relation to the dual nature of active touch. PMID:27233286

  17. Street Viewer: An Autonomous Vision Based Traffic Tracking System.

    PubMed

    Bottino, Andrea; Garbo, Alessandro; Loiacono, Carmelo; Quer, Stefano

    2016-06-03

    The development of intelligent transportation systems requires the availability of both accurate traffic information in real time and a cost-effective solution. In this paper, we describe Street Viewer, a system capable of analyzing the traffic behavior in different scenarios from images taken with an off-the-shelf optical camera. Street Viewer operates in real time on embedded hardware architectures with limited computational resources. The system features a pipelined architecture that, on one side, allows one to exploit multi-threading intensively and, on the other side, allows one to improve the overall accuracy and robustness of the system, since each layer is aimed at refining for the following layers the information it receives as input. Another relevant feature of our approach is that it is self-adaptive. During an initial setup, the application runs in learning mode to build a model of the flow patterns in the observed area. Once the model is stable, the system switches to the on-line mode where the flow model is used to count vehicles traveling on each lane and to produce a traffic information summary. If changes in the flow model are detected, the system switches back autonomously to the learning mode. The accuracy and the robustness of the system are analyzed in the paper through experimental results obtained on several different scenarios and running the system for long periods of time.
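
    The self-adaptive behavior described above, learning mode until the flow model is stable and on-line mode until drift is detected, amounts to a small state machine. The following is a schematic sketch under assumed stability and drift criteria (a running-mean model with fixed thresholds), not the published implementation:

```python
class FlowModel:
    """Toy per-lane flow model: running mean of observed motion values."""
    def __init__(self, alpha=0.05):
        self.alpha, self.mean, self.n = alpha, None, 0

    def update(self, obs):
        self.n += 1
        if self.mean is None:
            self.mean = obs
        else:
            self.mean = (1 - self.alpha) * self.mean + self.alpha * obs
        return abs(obs - self.mean)     # deviation from the model

class StreetViewerLike:
    """Learning <-> on-line mode switching driven by model stability."""
    def __init__(self, stable_after=50, drift_tol=2.0):
        self.model = FlowModel()
        self.mode = "learning"
        self.stable_after, self.drift_tol = stable_after, drift_tol
        self.vehicle_count = 0

    def process(self, obs):
        dev = self.model.update(obs)
        if self.mode == "learning" and self.model.n >= self.stable_after:
            self.mode = "online"        # model considered stable
        elif self.mode == "online" and dev > self.drift_tol:
            self.mode = "learning"      # flow pattern changed: relearn
            self.model = FlowModel()
        if self.mode == "online":
            self.vehicle_count += 1     # e.g. count vehicles on the lane
        return self.mode
```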

  18. Street Viewer: An Autonomous Vision Based Traffic Tracking System

    PubMed Central

    Bottino, Andrea; Garbo, Alessandro; Loiacono, Carmelo; Quer, Stefano

    2016-01-01

    The development of intelligent transportation systems requires the availability of both accurate traffic information in real time and a cost-effective solution. In this paper, we describe Street Viewer, a system capable of analyzing the traffic behavior in different scenarios from images taken with an off-the-shelf optical camera. Street Viewer operates in real time on embedded hardware architectures with limited computational resources. The system features a pipelined architecture that, on one side, allows one to exploit multi-threading intensively and, on the other side, allows one to improve the overall accuracy and robustness of the system, since each layer is aimed at refining for the following layers the information it receives as input. Another relevant feature of our approach is that it is self-adaptive. During an initial setup, the application runs in learning mode to build a model of the flow patterns in the observed area. Once the model is stable, the system switches to the on-line mode where the flow model is used to count vehicles traveling on each lane and to produce a traffic information summary. If changes in the flow model are detected, the system switches back autonomously to the learning mode. The accuracy and the robustness of the system are analyzed in the paper through experimental results obtained on several different scenarios and running the system for long periods of time. PMID:27271627

  1. Research of vision measurement system of the instruction sheet caliper rack

    NASA Astrophysics Data System (ADS)

    Liu, Yu; Kong, Ming; Dong, Ying-Jun

    2010-12-01

    This article proposes a rack measurement method based on computer vision and establishes a computer vision measurement system consisting of a precision linear guide, a camera, a computer, and several other parts. The system can be divided into two parts: a displacement platform and an image acquisition system. In the displacement platform, the linear guide is moved by a driver controlled by the computer, extending the measurement range so that the whole toothed length can be measured. The image acquisition system uses computer vision technology to analyze and identify the captured images: a light source illuminates the caliper rack, the camera acquires the images, and the images are input to the computer through a USB interface for analysis such as edge detection and feature extraction. The detection accuracy reaches sub-pixel level. An experiment was carried out on the rack of an instruction sheet caliper with module 0.19894, using image processing to perform edge detection and extract the rack edge. From this, the basic rack parameters such as p and s were obtained, and the individual circular pitch deviation fpt, the total cumulative pitch deviation Fp, and the tooth thickness deviation fsn were calculated. The measurement results were then compared with those of the Accretech S1910DX3. It turned out that the accuracy of this method meets the requirements for the measurement of such racks, and the measurement method is simple and practical, providing technical support for on-line rack testing.
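
    Sub-pixel accuracy of the kind claimed above is commonly obtained by interpolating the intensity-gradient peak around the strongest integer-pixel edge response. The paper does not state its interpolation method; the following 1-D parabolic-fit sketch is one standard way to do it:

```python
import numpy as np

def subpixel_edge(profile):
    """Locate an edge in a 1-D intensity profile to sub-pixel accuracy.

    Fits a parabola through the gradient magnitude at the strongest
    pixel and its two neighbours; the parabola vertex is the edge.
    """
    g = np.abs(np.gradient(np.asarray(profile, dtype=float)))
    i = int(np.argmax(g[1:-1])) + 1          # strongest interior response
    gm, g0, gp = g[i - 1], g[i], g[i + 1]
    denom = gm - 2 * g0 + gp
    offset = 0.0 if denom == 0 else 0.5 * (gm - gp) / denom
    return i + offset                        # edge position in pixels

# A smooth step edge whose true centre lies at x = 10.3 pixels.
x = np.arange(30)
profile = 1.0 / (1.0 + np.exp(-(x - 10.3) / 0.8))
print(subpixel_edge(profile))   # close to 10.3
```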

  2. Research of vision measurement system of the instruction sheet caliper rack

    NASA Astrophysics Data System (ADS)

    Liu, Yu; Kong, Ming; Dong, Ying-jun

    2011-05-01

    This article proposes a rack measurement method based on computer vision and establishes a computer vision measurement system consisting of a precision linear guide, a camera, a computer, and several other parts. The system can be divided into two parts: a displacement platform and an image acquisition system. In the displacement platform, the linear guide is moved by a driver controlled by the computer, extending the measurement range so that the whole toothed length can be measured. The image acquisition system uses computer vision technology to analyze and identify the captured images: a light source illuminates the caliper rack, the camera acquires the images, and the images are input to the computer through a USB interface for analysis such as edge detection and feature extraction. The detection accuracy reaches sub-pixel level. An experiment was carried out on the rack of an instruction sheet caliper with module 0.19894, using image processing to perform edge detection and extract the rack edge. From this, the basic rack parameters such as p and s were obtained, and the individual circular pitch deviation fpt, the total cumulative pitch deviation Fp, and the tooth thickness deviation fsn were calculated. The measurement results were then compared with those of the Accretech S1910DX3. It turned out that the accuracy of this method meets the requirements for the measurement of such racks, and the measurement method is simple and practical, providing technical support for on-line rack testing.

  3. Utilization of the Space Vision System as an Augmented Reality System For Mission Operations

    NASA Technical Reports Server (NTRS)

    Maida, James C.; Bowen, Charles

    2003-01-01

    Augmented reality is a technique whereby computer generated images are superimposed on live images for visual enhancement. Augmented reality can also be characterized as dynamic overlays when computer generated images are registered with moving objects in a live image. This technique has been successfully implemented, with low to medium levels of registration precision, in an NRA funded project entitled, "Improving Human Task Performance with Luminance Images and Dynamic Overlays". Future research is already being planned to also utilize a laboratory-based system where more extensive subject testing can be performed. However successful this might be, the problem will still be whether such a technology can be used with flight hardware. To answer this question, the Canadian Space Vision System (SVS) will be tested as an augmented reality system capable of improving human performance where the operation requires indirect viewing. This system has already been certified for flight and is currently flown on each shuttle mission for station assembly. Successful development and utilization of this system in a ground-based experiment will expand its utilization for on-orbit mission operations. Current research and development regarding the use of augmented reality technology is being simulated using ground-based equipment. This is an appropriate approach for development of symbology (graphics and annotation) optimal for human performance and for development of optimal image registration techniques. It is anticipated that this technology will become more pervasive as it matures. Because we know what and where almost everything is on ISS, this reduces the registration problem and improves the computer model of that reality, making augmented reality an attractive tool, provided we know how to use it. This is the basis for current research in this area. However, there is a missing element to this process. It is the link from this research to the current ISS video system and to

  4. [Analysis of key vision position technologies in robot assisted surgical system for total knee replacement].

    PubMed

    Zhao, Zijian; Liu, Yuncai; Wu, Xiaojuan; Liu, Hongjian

    2008-02-01

    Robot assisted surgery is becoming a widely popular technology and is now entering total knee replacement. The development of total knee replacement and the structure of the operation system are introduced in this paper. The vision positioning technology and the related calibration technology, which are critically important, are also analyzed. Error analysis experiments in our WATO system demonstrate that the positioning and related calibration technologies achieve high precision and can satisfy surgical requirements.

  5. Virtual vision system with actual flavor by olfactory display

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Kanazawa, Fumihiro

    2010-11-01

    The authors have researched multimedia systems and support systems for nursing studies on, and practices of, reminiscence therapy and life review therapy. The concept of the life review was presented by Butler in 1963: the process of thinking back on one's life and communicating about one's life to another person is called life review. There is a famous episode concerning memory, known as the Proust effect. It is mentioned in Proust's novel as an episode in which the storyteller recalls an old memory when he dips a madeleine in tea. Many scientists have investigated why smells trigger memory. The authors pay attention to the relation between smells and memory, although the reason is not yet evident. We have therefore tried to add an olfactory display to the multimedia system so that smells become a trigger for recalling buried memories. An olfactory display is a device that delivers smells to the nose. It provides special effects, for example emitting a smell as if you were there, or giving a trigger for reminding us of memories. The authors have developed a tabletop display system connected with the olfactory display. To deliver a flavor to the user's nose, the system needs to recognize and measure the positions of the user's face and nose. In this paper, the authors describe an olfactory display that can detect the nose position for effective delivery.

  6. A vision for an ultra-high resolution integrated water cycle observation and prediction system

    NASA Astrophysics Data System (ADS)

    Houser, P. R.

    2013-05-01

    Society's welfare, progress, and sustainable economic growth—and life itself—depend on the abundance and vigorous cycling and replenishing of water throughout the global environment. The water cycle operates on a continuum of time and space scales and exchanges large amounts of energy as water undergoes phase changes and is moved from one part of the Earth system to another. We must move toward an integrated observation and prediction paradigm that addresses broad local-to-global science and application issues by realizing synergies associated with multiple, coordinated observations and prediction systems. A central challenge of a future water and energy cycle observation strategy is to progress from single variable water-cycle instruments to multivariable integrated instruments in electromagnetic-band families. The microwave range in the electromagnetic spectrum is ideally suited for sensing the state and abundance of water because of water's dielectric properties. Eventually, a dedicated high-resolution water-cycle microwave-based satellite mission may be possible based on large-aperture antenna technology that can harvest the synergy that would be afforded by simultaneous multichannel active and passive microwave measurements. A partial demonstration of these ideas can even be realized with existing microwave satellite observations to support advanced multivariate retrieval methods that can exploit the totality of the microwave spectral information. The simultaneous multichannel active and passive microwave retrieval would allow improved-accuracy retrievals that are not possible with isolated measurements. Furthermore, the simultaneous monitoring of several of the land, atmospheric, oceanic, and cryospheric states brings synergies that will substantially enhance understanding of the global water and energy cycle as a system. The multichannel approach also affords advantages to some constituent retrievals—for instance, simultaneous retrieval of vegetation

  7. Awareness and Detection of Traffic and Obstacles Using Synthetic and Enhanced Vision Systems

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.

    2012-01-01

    The research literature is reviewed and summarized to evaluate the awareness and detection of traffic and obstacles when using Synthetic Vision Systems (SVS) and Enhanced Vision Systems (EVS). The study identifies the critical issues influencing the time required, accuracy, and pilot workload associated with recognizing and reacting to potential collisions or conflicts with other aircraft, vehicles, and obstructions during approach, landing, and surface operations. This work considers the effect of head-down display and head-up display implementations of SVS and EVS as well as the influence of single and dual pilot operations. The influences and strategies of adding traffic information and cockpit alerting with SVS and EVS are also included. Based on this review, a knowledge gap assessment was made with recommendations for ground and flight testing to fill these gaps and hence promote the safe and effective implementation of SVS/EVS technologies for the Next Generation Air Transportation System.

  8. Occlusion detection for highway traffic analysis with vision-based surveillance systems

    NASA Astrophysics Data System (ADS)

    Yoneyama, Akio; Yeh, Chia-Hung; Kuo, C.-C. Jay J.

    2003-08-01

    The vision-based traffic monitoring system provides an attractive solution for extracting various traffic parameters such as count, speed, flow, and concentration from video data captured by a camera system. The detection accuracy is, however, affected by various environmental factors such as shadow, occlusion, and lighting. Among these, occlusion is one of the major problems. In this work, a new scheme is proposed to detect occlusion and determine the exact location of each vehicle. The proposed algorithm is based on the matching of images from multiple cameras. In the proposed scheme, we do not need edge detection, region segmentation, or camera calibration operations, which often suffer from variations in environmental conditions. Experimental results verify that the proposed technique is effective for vision-based highway surveillance systems.

  9. Automatic Welding System of Aluminum Pipe by Monitoring Backside Image of Molten Pool Using Vision Sensor

    NASA Astrophysics Data System (ADS)

    Baskoro, Ario Sunar; Kabutomori, Masashi; Suga, Yasuo

    An automatic welding system for aluminum pipe, using Tungsten Inert Gas (TIG) welding with a vision sensor, was constructed. This research studies the intelligent welding process of aluminum alloy pipe 6063S-T5 with the pipe in a fixed position and a moving welding torch, using an AC welding machine. The monitoring system consists of a vision sensor using a charge-coupled device (CCD) camera to monitor the backside image of the molten pool. The captured image was processed to recognize the edge of the molten pool by an image processing algorithm. A neural network model for welding speed control was constructed to perform the process automatically. The experimental results confirm the effectiveness of the control system, demonstrated by reliable detection of the molten pool and sound welds.
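
    The neural controller itself is not specified in the abstract; purely as a structural illustration, the sketch below shows a single forward pass of a small feedforward network mapping molten-pool geometry to a welding speed command, with hypothetical input features and untrained, made-up weights:

```python
import numpy as np

# Hypothetical, untrained weights -- purely illustrative of the
# structure: pool-geometry features in, torch speed command out.
W1 = np.array([[0.8, -0.4], [0.3, 0.9], [-0.5, 0.2]])
b1 = np.array([0.1, -0.2, 0.05])
W2 = np.array([[0.6, -0.7, 0.4]])
b2 = np.array([2.0])   # nominal speed offset (mm/s), made up

def speed_command(pool_width_mm, pool_length_mm):
    """One forward pass: molten-pool geometry -> welding speed."""
    x = np.array([pool_width_mm, pool_length_mm])
    h = np.tanh(W1 @ x + b1)      # hidden layer
    return (W2 @ h + b2).item()   # scalar speed command

print(speed_command(4.2, 6.0))
```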

  10. The design and realization of a sort of robot vision measure system

    NASA Astrophysics Data System (ADS)

    Ren, Yong-jie; Zhu, Ji-gui; Yang, Xue-you; Ye, Sheng-hua

    2006-06-01

    The robot vision measurement system based on stereovision is a meaningful research area in engineering applications. In this system, an industrial robot is the movable carrier of the stereovision sensor, which not only extends the work space of the sensor but also preserves the characteristics of vision measurement technology such as non-contact operation and speed. By controlling the pose of the robot in space, the stereovision sensor can move to each given point in turn to collect image data, from which the 3D coordinate data are obtained after computation. A calibration method based on the binocular stereovision sensor, which uses two transit instruments and one precision drone to carry out the whole calibration, is presented. At the same time, the measurement programs for the robot and the computer were written in different programming languages. In the end, the system was tested carefully, and its feasibility was verified.

  11. Snapshot hyperspectral fovea vision system (HyperVideo)

    NASA Astrophysics Data System (ADS)

    Kriesel, Jason; Scriven, Gordon; Gat, Nahum; Nagaraj, Sheela; Willson, Paul; Swaminathan, V.

    2012-06-01

    The development and demonstration of a new snapshot hyperspectral sensor is described. The system is a significant extension of the four dimensional imaging spectrometer (4DIS) concept, which resolves all four dimensions of hyperspectral imaging data (2D spatial, spectral, and temporal) in real-time. The new sensor, dubbed "4×4DIS" uses a single fiber optic reformatter that feeds into four separate, miniature visible to near-infrared (VNIR) imaging spectrometers, providing significantly better spatial resolution than previous systems. Full data cubes are captured in each frame period without scanning, i.e., "HyperVideo". The current system operates up to 30 Hz (i.e., 30 cubes/s), has 300 spectral bands from 400 to 1100 nm (~2.4 nm resolution), and a spatial resolution of 44×40 pixels. An additional 1.4 Megapixel video camera provides scene context and effectively sharpens the spatial resolution of the hyperspectral data. Essentially, the 4×4DIS provides a 2D spatially resolved grid of 44×40 = 1760 separate spectral measurements every 33 ms, which is overlaid on the detailed spatial information provided by the context camera. The system can use a wide range of off-the-shelf lenses and can either be operated so that the fields of view match, or in a "spectral fovea" mode, in which the 4×4DIS system uses narrow field of view optics, and is cued by a wider field of view context camera. Unlike other hyperspectral snapshot schemes, which require intensive computations to deconvolve the data (e.g., Computed Tomographic Imaging Spectrometer), the 4×4DIS requires only a linear remapping, enabling real-time display and analysis. The system concept has a range of applications including biomedical imaging, missile defense, infrared counter measure (IRCM) threat characterization, and ground based remote sensing.
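
    The "linear remapping" mentioned above means that each detector pixel corresponds to a fixed (x, y, λ) location in the cube, so a full cube is assembled with a single gather operation, with no deconvolution. A schematic numpy sketch with a made-up lookup table and an assumed detector format, sized to the stated 44×40×300 cube:

```python
import numpy as np

H, W, BANDS = 44, 40, 300            # spatial grid and spectral bands
N = H * W * BANDS                    # voxels per data cube

# Fixed lookup table from cube voxel -> detector pixel, built once
# during instrument calibration (here: a made-up random permutation).
rng = np.random.default_rng(1)
frame_shape = (1200, 1600)           # assumed detector format
lut = rng.choice(frame_shape[0] * frame_shape[1], size=N, replace=False)

def frame_to_cube(frame):
    """Assemble one hyperspectral cube from one detector frame.

    A single gather operation -- which is why the scheme supports
    real-time display at video rates.
    """
    return frame.ravel()[lut].reshape(H, W, BANDS)

cube = frame_to_cube(rng.random(frame_shape))   # one 33 ms cube
print(cube.shape)                               # (44, 40, 300)
```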

  12. Altered Vision-Related Resting-State Activity in Pituitary Adenoma Patients with Visual Damage

    PubMed Central

    Qian, Haiyan; Wang, Xingchao; Wang, Zhongyan; Wang, Zhenmin; Liu, Pinan

    2016-01-01

    Objective To investigate changes of vision-related resting-state activity in pituitary adenoma (PA) patients with visual damage through comparison to healthy controls (HCs). Methods 25 PA patients with visual damage and 25 age- and sex-matched corrected-to-normal-vision HCs underwent a complete neuro-ophthalmologic evaluation, including automated perimetry, fundus examinations, and a magnetic resonance imaging (MRI) protocol, including structural and resting-state fMRI (RS-fMRI) sequences. The regional homogeneity (ReHo) of the vision-related cortex and the functional connectivity (FC) of 6 seeds within the visual cortex (the primary visual cortex (V1), the secondary visual cortex (V2), and the middle temporal visual cortex (MT+)) were evaluated. Two-sample t-tests were conducted to identify the differences between the two groups. Results Compared with the HCs, the PA group exhibited reduced ReHo in the bilateral V1, V2, V3, fusiform, MT+, BA37, thalamus, postcentral gyrus and left precentral gyrus and increased ReHo in the precuneus, prefrontal cortex, posterior cingulate cortex (PCC), anterior cingulate cortex (ACC), insula, supramarginal gyrus (SMG), and putamen. Compared with the HCs, V1, V2, and MT+ in the PAs exhibited decreased FC with the V1, V2, MT+, fusiform, BA37, and increased FC primarily in the bilateral temporal lobe (especially BA20,21,22), prefrontal cortex, PCC, insular, angular gyrus, ACC, pre-SMA, SMG, hippocampal formation, caudate and putamen. It is worth mentioning that compared with HCs, V1 in PAs exhibited decreased or similar FC with the thalamus, whereas V2 and MT+ exhibited increased FCs with the thalamus, especially pulvinar. Conclusions In our study, we identified significant neural reorganization in the vision-related cortex of PA patients with visual damage compared with HCs. Most subareas within the visual cortex exhibited remarkable neural dysfunction. Some subareas, including the MT+ and V2, exhibited enhanced FC with the thalamic

  13. Finger mouse system based on computer vision in complex backgrounds

    NASA Astrophysics Data System (ADS)

    Xu, Jun; Zhang, Xiong

    2013-12-01

    This paper presents a human-computer interaction system that realizes a real-time virtual mouse. Our system emulates the dragging and selecting functions of a mouse by recognizing bare hands, so the control style is simple and intuitive. A single camera is used to capture hand images, and a DSP chip is embedded as the image processing platform. To deal with complex backgrounds, particularly where skin-like or moving objects appear, we develop novel hand recognition algorithms. Hand segmentation is achieved by a skin-color cue and background differencing. Each input image is corrected according to luminance, and skin color is then extracted with a Gaussian model. We employ a Camshift tracking algorithm which receives feedback from the recognition module. For fingertip recognition, a method combining template matching and circle drawing is proposed. Our system has the advantages of good real-time performance, easy integration, and energy conservation. Experiments show that the system is robust to the scaling and rotation of hands.
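
    The hue-histogram back-projection and Camshift stages described above map closely onto standard OpenCV calls. A minimal sketch assuming the hand initially occupies a known box and using illustrative skin-color bounds; the paper's luminance correction, Gaussian skin model, background differencing, and fingertip stage are omitted:

```python
import cv2

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
assert ok, "no camera frame"

# Assume the hand initially occupies a known box (illustrative values).
x, y, w, h = 200, 150, 120, 120
hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
# Hue histogram of skin pixels, masked to plausible saturation/value.
mask = cv2.inRange(hsv_roi, (0, 40, 60), (25, 255, 255))
hist = cv2.calcHist([hsv_roi], [0], mask, [32], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

window = (x, y, w, h)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    rot_box, window = cv2.CamShift(back, window, criteria)
    pts = cv2.boxPoints(rot_box).astype("int32")
    cv2.polylines(frame, [pts], True, (0, 255, 0), 2)  # tracked hand
    cv2.imshow("hand", frame)
    if cv2.waitKey(30) == 27:          # Esc quits
        break
```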

  14. Multispectral image-fused head-tracked vision system (HTVS) for driving applications

    NASA Astrophysics Data System (ADS)

    Reese, Colin E.; Bender, Edward J.

    2001-08-01

    Current military thermal driver vision systems consist of a single Long Wave Infrared (LWIR) sensor mounted on a manually operated gimbal, which is normally locked forward during driving. The sensor video imagery is presented on a large area flat panel display for direct view. The Night Vision and Electronics Sensors Directorate and Kaiser Electronics are cooperatively working to develop a driver's Head Tracked Vision System (HTVS) which directs dual waveband sensors in a more natural head-slewed imaging mode. The HTVS consists of LWIR and image intensified sensors, a high-speed gimbal, a head mounted display, and a head tracker. The first prototype systems have been delivered and have undergone preliminary field trials to characterize the operational benefits of a head tracked sensor system for tactical military ground applications. This investigation will address the advantages of head tracked vs. fixed sensor systems regarding peripheral sightings of threats, road hazards, and nearby vehicles. An additional thrust will investigate the degree to which additive (A+B) fusion of LWIR and image intensified sensors enhances overall driving performance. Typically, LWIR sensors are better for detecting threats, while image intensified sensors provide more natural scene cues, such as shadows and texture. This investigation will examine the degree to which the fusion of these two sensors enhances the driver's overall situational awareness.
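
    Additive (A+B) fusion, as referenced above, is a weighted pixel-wise sum of the co-registered sensor frames. A minimal sketch assuming the LWIR and image-intensified images are already aligned and scaled to [0, 1]:

```python
import numpy as np

def additive_fusion(lwir, i2, w_lwir=0.5):
    """Pixel-wise additive (A+B) fusion of two co-registered images.

    lwir, i2 : float arrays scaled to [0, 1], identical shape
    w_lwir   : weight on the thermal band (illustrative default)
    """
    fused = w_lwir * lwir + (1.0 - w_lwir) * i2
    return np.clip(fused, 0.0, 1.0)

rng = np.random.default_rng(0)
a, b = rng.random((480, 640)), rng.random((480, 640))
print(additive_fusion(a, b).shape)   # (480, 640)
```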

  15. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.; Wu, Chris K.; Lin, Y. H.

    1991-01-01

    A system was developed for displaying computer graphics images of space objects, and its use was demonstrated as a testbed for evaluating vision systems for space applications. In order to evaluate vision systems, it is desirable to be able to control all factors involved in creating the images used for processing by the vision system. Considerable time and expense are involved in building accurate physical models of space objects. Also, precise location of the model relative to the viewer and accurate location of the light source require additional effort. As part of this project, graphics models of space objects such as the Solarmax satellite are created in which the user can control the light direction and the relative position of the object and the viewer. The work is also aimed at providing control of hue, shading, noise, and shadows for use in demonstrating and testing image processing techniques. The simulated camera data can provide XYZ coordinates, pitch, yaw, and roll for the models. A physical model is also being used to allow comparison of camera images with the graphics images.

  16. Calibration for stereo vision system based on phase matching and bundle adjustment algorithm

    NASA Astrophysics Data System (ADS)

    Zhao, Huijie; Wang, Zhen; Jiang, Hongzhi; Xu, Yang; Dong, Chao

    2015-05-01

    Calibration of a stereo vision system plays an important role in machine vision applications. Existing accurate calibration methods are usually carried out by capturing a high-accuracy calibration target of the same size as the measurement view. In in-situ 3D measurement and in large field of view measurement, the extrinsic parameters of the system usually need to be calibrated in real time, and manufacturing a large high-accuracy calibration target for the field is a big challenge. Therefore, an accurate and rapid calibration method for in-situ measurement is needed. In this paper, a novel calibration method for stereo vision systems is proposed based on a phase-based matching method and the bundle adjustment algorithm. As the cameras are usually mechanically locked once adjusted appropriately after being calibrated in the lab, the intrinsic parameters are usually stable; we therefore emphasize extrinsic parameter calibration in the measurement field. Firstly, a matching method based on the heterodyne multi-frequency phase-shifting technique is applied to find thousands of pairs of corresponding points between the images of the two cameras. The large number of corresponding point pairs helps improve the accuracy of the calibration. Then the bundle adjustment method from photogrammetry is used to optimize the extrinsic parameters and the 3D coordinates of the measured objects. Finally, quantity traceability is carried out to transform the optimized extrinsic parameters from the 3D metric coordinate system into the Euclidean coordinate system, obtaining the ultimate optimal extrinsic parameters. Experimental results show that the calibration procedure takes less than 3 s and that, with the stereo vision system calibrated by the proposed method, the measurement RMS (root mean square) error reaches 0.025 mm when measuring a calibrated gauge with a nominal length of 999.576 mm.
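
    The bundle-adjustment step can be outlined with scipy: pack the extrinsics and the 3-D points into one parameter vector and minimize the total reprojection error over the phase-matched correspondences. A schematic sketch with a pinhole model, intrinsics assumed known and fixed as in the paper; the function names and initialization are hypothetical:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(K, rvec, t, pts3d):
    """Pinhole projection of world points into one camera."""
    cam = Rotation.from_rotvec(rvec).apply(pts3d) + t
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]

def residuals(params, K1, K2, uv1, uv2, n_pts):
    """Reprojection residuals; camera 1 defines the world frame."""
    rvec, t = params[:3], params[3:6]
    pts3d = params[6:].reshape(n_pts, 3)
    r1 = project(K1, np.zeros(3), np.zeros(3), pts3d) - uv1
    r2 = project(K2, rvec, t, pts3d) - uv2
    return np.concatenate([r1.ravel(), r2.ravel()])

def refine_extrinsics(K1, K2, uv1, uv2, rvec0, t0, pts3d0):
    """Jointly refine extrinsics and 3-D points (bundle adjustment)."""
    x0 = np.concatenate([rvec0, t0, pts3d0.ravel()])
    sol = least_squares(residuals, x0,
                        args=(K1, K2, uv1, uv2, len(pts3d0)))
    return sol.x[:3], sol.x[3:6], sol.x[6:].reshape(-1, 3)
```

    In practice the optimization would be followed by the traceability step described above, expressing the refined extrinsics in a metric Euclidean frame.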

  17. Evaluation of surface mount component misalignment using an automatic machine vision system. Final report

    SciTech Connect

    Yerganian, S.S.

    1997-01-01

    A system manufactured by Synthetic Vision Systems Inc. was evaluated for its ability to automatically inspect surface mount components on a densely populated printed wiring board assembly for component presence and proper alignment before and after soldering. The system was evaluated for its use as a process verification tool in the presoldered mode and as a supplement to visual inspection in the postsoldered mode. To test the ability of the three-dimensional imaging system to locate the component edges in both the presoldered and postsoldered cases, data was gathered by inspecting four printed wiring board assemblies with the system.

  18. Visual Advantage of Enhanced Flight Vision System During NextGen Flight Test Evaluation

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Harrison, Stephanie J.; Bailey, Randall E.; Shelton, Kevin J.; Ellis, Kyle K.

    2014-01-01

    Synthetic Vision Systems and Enhanced Flight Vision System (SVS/EFVS) technologies have the potential to provide additional margins of safety for aircrew performance and enable operational improvements for low visibility operations in the terminal area environment. Simulation and flight tests were jointly sponsored by NASA's Aviation Safety Program, Vehicle Systems Safety Technology project and the Federal Aviation Administration (FAA) to evaluate potential safety and operational benefits of SVS/EFVS technologies in low visibility Next Generation Air Transportation System (NextGen) operations. The flight tests were conducted by a team of Honeywell, Gulfstream Aerospace Corporation and NASA personnel with the goal of obtaining pilot-in-the-loop test data for flight validation, verification, and demonstration of selected SVS/EFVS operational and system-level performance capabilities. Nine test flights were flown in Gulfstream's G450 flight test aircraft outfitted with the SVS/EFVS technologies under low visibility instrument meteorological conditions. Evaluation pilots flew 108 approaches in low visibility weather conditions (600 feet to 3600 feet reported visibility) under different obscurants (mist, fog, drizzle fog, frozen fog) and sky cover (broken, overcast). Flight test videos were evaluated at three different altitudes (decision altitude, 100 feet radar altitude, and touchdown) to determine the visual advantage afforded to the pilot using the EFVS/Forward-Looking InfraRed (FLIR) imagery compared to natural vision. Results indicate the EFVS provided a visual advantage of two to three times over that of the out-the-window (OTW) view. The EFVS allowed pilots to view the runway environment, specifically runway lights, before they would be able to OTW with natural vision.

  19. Visual advantage of enhanced flight vision system during NextGen flight test evaluation

    NASA Astrophysics Data System (ADS)

    Kramer, Lynda J.; Harrison, Stephanie J.; Bailey, Randall E.; Shelton, Kevin J.; Ellis, Kyle K. E.

    2014-06-01

    Synthetic Vision Systems and Enhanced Flight Vision System (SVS/EFVS) technologies have the potential to provide additional margins of safety for aircrew performance and enable operational improvements for low visibility operations in the terminal area environment. Simulation and flight tests were jointly sponsored by NASA's Aviation Safety Program, Vehicle Systems Safety Technology project and the Federal Aviation Administration (FAA) to evaluate potential safety and operational benefits of SVS/EFVS technologies in low visibility Next Generation Air Transportation System (NextGen) operations. The flight tests were conducted by a team of Honeywell, Gulfstream Aerospace Corporation and NASA personnel with the goal of obtaining pilot-in-the-loop test data for flight validation, verification, and demonstration of selected SVS/EFVS operational and system-level performance capabilities. Nine test flights were flown in Gulfstream's G450 flight test aircraft outfitted with the SVS/EFVS technologies under low visibility instrument meteorological conditions. Evaluation pilots flew 108 approaches in low visibility weather conditions (600 feet to 3600 feet reported visibility) under different obscurants (mist, fog, drizzle fog, frozen fog) and sky cover (broken, overcast). Flight test videos were evaluated at three different altitudes (decision altitude, 100 feet radar altitude, and touchdown) to determine the visual advantage afforded to the pilot using the EFVS/Forward-Looking InfraRed (FLIR) imagery compared to natural vision. Results indicate the EFVS provided a visual advantage of two to three times over that of the out-the-window (OTW) view. The EFVS allowed pilots to view the runway environment, specifically runway lights, before they would be able to OTW with natural vision.

  20. LAPLACE: A mission to Europa and the Jupiter System for ESA's Cosmic Vision Programme

    NASA Astrophysics Data System (ADS)

    Blanc, Michel; Alibert, Yann; André, Nicolas; Atreya, Sushil; Beebe, Reta; Benz, Willy; Bolton, Scott J.; Coradini, Angioletta; Coustenis, Athena; Dehant, Véronique; Dougherty, Michele; Drossart, Pierre; Fujimoto, Masaki; Grasset, Olivier; Gurvits, Leonid; Hartogh, Paul; Hussmann, Hauke; Kasaba, Yasumasa; Kivelson, Margaret; Khurana, Krishan; Krupp, Norbert; Louarn, Philippe; Lunine, Jonathan; McGrath, Melissa; Mimoun, David; Mousis, Olivier; Oberst, Juergen; Okada, Tatsuaki; Pappalardo, Robert; Prieto-Ballesteros, Olga; Prieur, Daniel; Regnier, Pascal; Roos-Serote, Maarten; Sasaki, Sho; Schubert, Gerald; Sotin, Christophe; Spilker, Tom; Takahashi, Yukihiro; Takashima, Takeshi; Tosi, Federico; Turrini, Diego; van Hoolst, Tim; Zelenyi, Lev

    2009-03-01

    The exploration of the Jovian System and its fascinating satellite Europa is one of the priorities presented in ESA’s “Cosmic Vision” strategic document. The Jovian System indeed displays many facets. It is a small planetary system in its own right, built-up out of the mixture of gas and icy material that was present in the external region of the solar nebula. Through a complex history of accretion, internal differentiation and dynamic interaction, a very unique satellite system formed, in which three of the four Galilean satellites are locked in the so-called Laplace resonance. The energy and angular momentum they exchange among themselves and with Jupiter contribute to various degrees to the internal heating sources of the satellites. Unique among these satellites, Europa is believed to shelter an ocean between its geodynamically active icy crust and its silicate mantle, one where the main conditions for habitability may be fulfilled. For this very reason, Europa is one of the best candidates for the search for life in our Solar System. So, is Europa really habitable, representing a “habitable zone” in the Jupiter system? To answer this specific question, we need a dedicated mission to Europa. But to understand in a more generic way the habitability conditions around giant planets, we need to go beyond Europa itself and address two more general questions at the scale of the Jupiter system: to what extent is its possible habitability related to the initial conditions and formation scenario of the Jovian satellites? To what extent is it due to the way the Jupiter system works? ESA’s Cosmic Vision programme offers an ideal and timely framework to address these three key questions. Building on the in-depth reconnaissance of the Jupiter System by Galileo (and the Voyager, Ulysses, Cassini and New Horizons fly-by’s) and on the anticipated accomplishments of NASA’s JUNO mission, it is now time to design and fly a new mission which will focus on these

  1. Commercial machine vision system for traffic monitoring and control

    NASA Astrophysics Data System (ADS)

    D'Agostino, Salvatore A.

    1992-03-01

    Traffic imaging covers a range of current and potential applications. These include traffic control and analysis, license plate finding, reading and storage, violation detection and archiving, vehicle sensors, and toll collection/enforcement. Experience from commercial installations and knowledge of the system requirements have been gained over the past 10 years. Recent improvements in system component cost and performance now allow products to be applied that provide cost effective solutions to the requirements for truly intelligent vehicle/highway systems (IVHS). The United States is a country that loves to drive. The infrastructure built in the 1950s and 1960s, along with the low price of gasoline, created an environment where the automobile became an accessible and integral part of American life. The United States has spent $103 billion to build 40,000 highway miles since 1956, the start of the interstate program, which is now nearly complete. Unfortunately, a situation has arisen where the options for dramatically improving the ability of our roadways to absorb the increasing amount of traffic are limited. This is true in other countries as well as in the United States. The number of vehicles in the world increases by over 10,000,000 each year. In the United States there are about 180 million cars, trucks, and buses, and this number is estimated to double in the next 30 years. Urban development, and development in general, pushes out from the edge of our roadways. This leaves little room to increase the physical amount of roadway. Americans now spend more than 1.6 billion hours a year waiting in traffic jams. It is estimated that this congestion wastes 3 billion gallons of oil, or 4% of the nation's annual gas consumption. The way out of the dilemma is to increase road-use efficiency as well as improve mass transportation alternatives.

  2. Accuracy improvement in a calibration test bench for accelerometers by a vision system

    NASA Astrophysics Data System (ADS)

    D'Emilia, Giulio; Di Gasbarro, David; Gaspari, Antonella; Natale, Emanuela

    2016-06-01

    A procedure is described in this paper for improving the calibration accuracy of low-cost accelerometers in a prototype rotary test bench, driven by a brushless servo-motor and operating in a low frequency range of vibrations (0 to 5 Hz). Vibration measurements by a vision system based on a low frequency camera have been carried out in order to reduce the uncertainty of the real acceleration evaluation at the installation point of the sensor to be calibrated. A preliminary test device has been realized and operated in order to evaluate the metrological performance of the vision system, showing satisfactory behavior once the measurement uncertainty is taken into account. A combination of suitable settings of the control parameters of the motion control system and of the information gained by the vision system made it possible to match the information about the reference acceleration at the installation point to the needs of the procedure for static and dynamic calibration of three-axis accelerometers.

  3. Estimation of Theaflavins (TF) and Thearubigins (TR) Ratio in Black Tea Liquor Using Electronic Vision System

    NASA Astrophysics Data System (ADS)

    Akuli, Amitava; Pal, Abhra; Ghosh, Arunangshu; Bhattacharyya, Nabarun; Bandhopadhyya, Rajib; Tamuly, Pradip; Gogoi, Nagen

    2011-09-01

    Quality of black tea is generally assessed using organoleptic tests by professional tea tasters. They determine the quality of black tea based on its appearance (in dry condition and during liquor formation), aroma and taste. Variation in the above parameters is actually caused by a number of chemical compounds such as Theaflavins (TF), Thearubigins (TR), caffeine, linalool, geraniol, etc. Among these, TF and TR are the most important chemical compounds, which actually contribute to the formation of taste, colour and brightness in tea liquor. Estimation of TF and TR in black tea is generally done using a spectrophotometer. However, the analysis requires rigorous and time-consuming sample preparation, and the operation of a costly spectrophotometer requires expert manpower. To overcome the above problems, an Electronic Vision System based on digital image processing techniques has been developed. The system is fast, low cost, and repeatable, and can estimate the TF/TR ratio of black tea liquor accurately. The data analysis is done using Principal Component Analysis (PCA), Multiple Linear Regression (MLR) and Multiple Discriminant Analysis (MDA). A correlation has been established between the colour of tea liquor images and the TF/TR ratio. This paper describes the newly developed E-Vision system, the experimental methods, the data analysis algorithms and, finally, the performance of the E-Vision System as compared to the results of a traditional spectrophotometer.
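
    As a concrete illustration of the kind of analysis pipeline the abstract describes, the hedged sketch below regresses laboratory TF/TR values against PCA-reduced colour features of liquor images. The feature choice (per-channel mean and standard deviation), the component count, and the use of scikit-learn are illustrative assumptions, not the authors' implementation.

```python
# Illustrative PCA + multiple-linear-regression pipeline for predicting the
# TF/TR ratio from tea-liquor image colour. All settings are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def liquor_features(images):
    """Mean and std of each RGB channel per HxWx3 image -> 6 features."""
    feats = [np.concatenate([img.reshape(-1, 3).mean(axis=0),
                             img.reshape(-1, 3).std(axis=0)])
             for img in images]
    return np.asarray(feats)

def fit_tf_tr_model(images, tf_tr, n_components=3):
    """images: list of arrays; tf_tr: spectrophotometer reference values."""
    X = liquor_features(images)
    pca = PCA(n_components=n_components).fit(X)
    reg = LinearRegression().fit(pca.transform(X), tf_tr)
    return pca, reg

def predict_tf_tr(pca, reg, images):
    return reg.predict(pca.transform(liquor_features(images)))
```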

  4. Concept of operations for the use of Synthetic Vision System (SVS) display during precision instrument approach

    NASA Astrophysics Data System (ADS)

    Domino, David A.

    2007-04-01

    Synthetic Vision Systems (SVS) create images for display in the cockpit from the information contained in databases of terrain, obstacles and cultural features like runways and taxiways, and the known own-ship position in space. Displays are rendered egocentrically, from the point of view of the pilot. Certified synthetic vision systems, however, do not yet qualify for operational credit in any domain, other than to provide enhanced situation awareness. It is not known at this time whether the information provided by the system is sufficiently robust to substitute for natural vision in a specific application. In this paper an operations concept is described for the use of SVS information during a precision instrument approach in lieu of visual contact with a runway approach light system. It proposes an operation within the existing framework of regulations, and identifies specific areas that may require additional research data to support certification of the proposed operational credit. The larger purpose is to set out an example application and intended function which will require the elaboration and resolution of operational and human performance concerns. To this end, issues in several categories are identified.

  5. Optical calculation of correlation filters for a robotic vision system

    NASA Technical Reports Server (NTRS)

    Knopp, Jerome

    1989-01-01

    A method is presented for designing optical correlation filters based on measuring three intensity patterns: the Fourier transform of the filter object, a reference wave, and the interference pattern produced by the sum of the object transform and the reference. The method can produce a filter that is well matched to the object, its transforming optical system, and the spatial light modulator used in the correlator input plane. A computer simulation is presented to demonstrate the approach for the special case of a conventional binary phase-only filter. The simulation produced a workable filter with a sharp correlation peak.
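
    The measurement idea lends itself to a short numerical sketch: from the three recorded intensities |F|^2, |R|^2 and |F+R|^2, the cross term Re(F·R*) can be isolated and thresholded into a binary phase-only filter. The simulation below stands in for the optical capture; the test object, the reference tilt, and the threshold at zero are assumptions.

```python
# Hedged sketch of the interferometric filter-design idea: since
# |F+R|^2 = |F|^2 + |R|^2 + 2*Re(F*conj(R)), three intensity measurements
# suffice to recover the cross term, whose sign gives a binary phase filter.
import numpy as np

def binary_phase_only_filter(obj, ref_field):
    F = np.fft.fftshift(np.fft.fft2(obj))   # simulated object transform
    I_obj = np.abs(F) ** 2                   # measurement 1: |F|^2
    I_ref = np.abs(ref_field) ** 2           # measurement 2: |R|^2
    I_sum = np.abs(F + ref_field) ** 2       # measurement 3: |F+R|^2
    cross = (I_sum - I_obj - I_ref) / 2.0    # = Re(F * conj(R))
    return np.where(cross >= 0, 1.0, -1.0)   # +1/-1 binary phase filter

# Example: square test object and a plane-wave reference tilted in x
# (the tilt frequency 0.1 cycles/pixel is an arbitrary assumption).
n = 128
obj = np.zeros((n, n)); obj[48:80, 48:80] = 1.0
ref = np.exp(2j * np.pi * 0.1 * np.arange(n))[None, :] * np.ones((n, 1))
H = binary_phase_only_filter(obj, ref)
```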

  6. Vision-based position measurement system for indoor mobile robots

    SciTech Connect

    Schreiber, M.J.; Dickerson, S.

    1994-12-31

    This paper discusses a stand-alone position measurement system for mobile nuclear waste management robots traveling in warehouses. The task is to provide two-dimensional position information to help the automated guided vehicle (AGV) guide itself along the aisle's centerline and mark the location of defective barrels containing low-level radiation. The AGV is 0.91 m wide and must travel along straight aisles 1.12 m wide and up to 36 m long. Radioactive testing limits the AGV's speed to 25 mm/s. The design objectives focus on cost, power consumption, accuracy, and robustness.

  7. Mosad and Stream Vision For A Telerobotic, Flying Camera System

    NASA Technical Reports Server (NTRS)

    Mandl, William

    2002-01-01

    Two full custom camera systems using the Multiplexed OverSample Analog to Digital (MOSAD) conversion technology for visible light sensing were built and demonstrated. They include a photo gate sensor and a photo diode sensor. The system includes the camera assembly, a driver interface assembly, and a frame grabber board with an integrated decimator, together with Windows 2000 compatible software for real-time image display. An array size of 320 x 240 with 16 micron pixel pitch was developed for compatibility with 0.3 inch CCTV optics. With 1.2 micron technology, a 73% fill factor was achieved. Noise measurements indicated 9 to 11 bits in operation, with 13.7 bits best case. Power measured under 10 milliwatts at 400 samples per second. Nonuniformity variation was below the noise floor. Pictures were taken with the different cameras during the characterization study to demonstrate the operable range. The successful conclusion of this program demonstrates the utility of the MOSAD for NASA missions, providing superior performance over CMOS and lower cost and power consumption than CCD. The MOSAD approach also provides a path to radiation hardening for space-based applications.

  8. Color night vision system for ground vehicle navigation

    NASA Astrophysics Data System (ADS)

    Ali, E. A.; Qadir, H.; Kozaitis, S. P.

    2014-06-01

    Operating in a degraded visual environment due to darkness can pose a threat to navigation safety. Systems have been developed to navigate in darkness that depend upon differences between objects such as temperature or reflectivity at various wavelengths. However, adding sensors for these systems increases the complexity by adding multiple components that may create problems with alignment and calibration. An approach is needed that is passive and simple for widespread acceptance. Our approach uses a type of augmented display to show fused images from visible and thermal sensors that are continuously updated. Because the raw fused image gave an unnatural color appearance, we used a color transfer process based on a look-up table to replace the false colors with a colormap derived from a daytime reference image obtained from a public database using the GPS coordinates of the vehicle. Although the database image was not perfectly registered, we were able to produce imagery acquired at night that appeared with daylight colors. Such an approach could improve the safety of nighttime navigation.
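
    A minimal sketch of a look-up-table colour transfer of the sort described, assuming the daytime reference image is roughly registered to the fused night image (e.g., retrieved by GPS): each quantised false colour is mapped to the mean colour of the co-located daytime pixels. The bin count and uint8 RGB inputs are assumptions, not the paper's exact procedure.

```python
# Build a false-colour -> daytime-colour LUT from a (roughly registered)
# image pair, then apply it to recolour fused night imagery.
import numpy as np

def _keys(img, bins):
    idx = (img // (256 // bins)).astype(int)          # per-channel bin index
    return idx[..., 0] * bins * bins + idx[..., 1] * bins + idx[..., 2]

def build_colour_lut(fused, daytime, bins=16):
    keys = _keys(fused, bins).ravel()
    counts = np.bincount(keys, minlength=bins ** 3)
    lut = np.zeros((bins ** 3, 3))
    for c in range(3):                                 # mean daytime colour
        sums = np.bincount(keys, weights=daytime[..., c].ravel(),
                           minlength=bins ** 3)
        lut[:, c] = sums / np.maximum(counts, 1)
    return lut

def apply_colour_lut(fused, lut, bins=16):
    return lut[_keys(fused, bins)].astype(np.uint8)    # recoloured image
```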

  9. Ranging Apparatus and Method Implementing Stereo Vision System

    NASA Technical Reports Server (NTRS)

    Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a left and right video camera mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video camera and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
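
    The patent's processing chain reduces to a few lines when sketched in software: difference each camera's frame against its pre-laser frame to isolate the spot, take the spot centroids, and convert their horizontal disparity to range with the standard stereo relation Z = f·B/d. The focal length, baseline, threshold, and grayscale inputs below are placeholder assumptions.

```python
# Laser-spot stereo ranging sketch: background subtraction isolates the
# spot, centroid disparity gives range via Z = f * B / d.
import numpy as np

def spot_centroid(before, after, thresh=30):
    """Grayscale frames before/after laser illumination -> spot centroid."""
    diff = np.abs(after.astype(int) - before.astype(int))
    ys, xs = np.nonzero(diff > thresh)    # pixels changed by the laser spot
    return xs.mean(), ys.mean()

def laser_range(left_before, left_after, right_before, right_after,
                f_px=800.0, baseline_m=0.12):   # placeholder calibration
    xl, _ = spot_centroid(left_before, left_after)
    xr, _ = spot_centroid(right_before, right_after)
    disparity = xl - xr                   # horizontal disparity in pixels
    return f_px * baseline_m / disparity  # stereometric range in metres
```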

  10. Multispectral uncooled infrared enhanced-vision system for flight test

    NASA Astrophysics Data System (ADS)

    Tiana, Carlo L.; Kerr, Richard; Harrah, Steven D.

    2001-08-01

    The 1997 Final Report of the 'White House Commission on Aviation Safety and Security' challenged industrial and government concerns to reduce aviation accident rates by a factor of five within 10 years. In the report, the commission encourages NASA, FAA and others 'to expand their cooperative efforts in aviation safety research and development'. As a result of this publication, NASA has since undertaken a number of initiatives aimed at meeting the stated goal. Among these, the NASA Aviation Safety Program was initiated to encourage and assist in the development of technologies for the improvement of aviation safety. Among the technologies being considered are certain sensor technologies that may enable commercial and general aviation pilots to 'see to land' at night or in poor visibility conditions. Infrared sensors have potential applicability in this field, and this paper describes a system, based on such sensors, that is being deployed on the NASA Langley Research Center B757 ARIES research aircraft. The system includes two infrared sensors operating in different spectral bands, and a visible-band color CCD camera for documentation purposes. The sensors are mounted in an aerodynamic package in a forward position on the underside of the aircraft. Support equipment in the aircraft cabin collects and processes all relevant sensor data. Display of sensor images is achieved in real time on the aircraft's Head Up Display (HUD), or other display devices.

  11. The VirtualwindoW: A Reconfigurable, Modular, Stereo Vision System

    SciTech Connect

    Kinoshita, Robert Arthur; Anderson, Matthew Oley; Mckay, Mark D; Willis, Walter David

    1999-04-01

    An important need while using unmanned vehicles is the ability for the remote operator or observer to easily and accurately perceive the operating environment. A classic problem in providing a complete representation of the remote work area is sensory overload or excessive complexity in the human-machine interface. In addition, remote operations often benefit from depth perception capability while viewing or manipulating objects. Thus, there is an ongoing effort within the remote and teleoperated robotic field to develop better human-machine interfaces. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has been researching methods to simplify the human-machine interface using atypical operator techniques. Initial telepresence research conducted at the INEEL developed and implemented a concept called the VirtualwindoW. This system minimized the complexity of remote stereo viewing controls and provided the operator the "feel" of viewing the environment, including depth perception, in a natural setting. The VirtualwindoW has shown that the human-machine interface can be simplified while increasing operator performance. This paper deals with the continuing research and development of the VirtualwindoW to provide a reconfigurable, modular system that easily utilizes commercially available off the shelf components. This adaptability is well suited to several aspects of unmanned vehicle applications, most notably environmental perception and vehicle control.

  12. The VirtualwindoW: A Reconfigurable, Modular, Stereo Vision System

    SciTech Connect

    M. D. McKay; M. O. Anderson; R. A. Kinoshita; W. D. Willis

    1999-02-01

    An important need while using unmanned vehicles is the ability for the remote operator or observer to easily and accurately perceive the operating environment. A classic problem in providing a complete representation of the remote work area is sensory overload or excessive complexity in the human-machine interface. In addition, remote operations often benefit from depth perception capability while viewing or manipulating objects. Thus, there is an ongoing effort within the remote and teleoperated robotic field to develop better human-machine interfaces. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has been researching methods to simplify the human-machine interface using atypical operator techniques. Initial telepresence research conducted at the INEEL developed and implemented a concept called the VirtualwindoW. This system minimized the complexity of remote stereo viewing controls and provided the operator the "feel" of viewing the environment, including depth perception, in a natural setting. The VirtualwindoW has shown that the human-machine interface can be simplified while increasing operator performance. This paper deals with the continuing research and development of the VirtualwindoW to provide a reconfigurable, modular system that easily utilizes commercially available off the shelf components. This adaptability is well suited to several aspects of unmanned vehicle applications, most notably environmental perception and vehicle control.

  13. Insect-inspired high-speed motion vision system for robot control.

    PubMed

    Wu, Haiyan; Zou, Ke; Zhang, Tianguang; Borst, Alexander; Kühnlenz, Kolja

    2012-10-01

    The mechanism for motion detection in a fly's vision system, known as the Reichardt correlator, suffers from a main shortcoming as a velocity estimator: low accuracy. To enable accurate velocity estimation, responses of the Reichardt correlator to image sequences are analyzed in this paper. An elaborated model with additional preprocessing modules is proposed. The relative error of velocity estimation is significantly reduced by establishing a real-time response-velocity lookup table based on the power spectrum analysis of the input signal. By exploiting the improved velocity estimation accuracy and the simple structure of the Reichardt correlator, a high-speed vision system of 1 kHz is designed and applied for robot yaw-angle control in real-time experiments. The experimental results demonstrate the potential and feasibility of applying insect-inspired motion detection to robot control.
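
    For reference, a minimal 1-D Reichardt elementary-motion-detector can be sketched as below: each half-detector multiplies one photoreceptor's low-pass-delayed signal with its neighbour's instantaneous signal, and the mirrored pair is subtracted. The time constant and receptor spacing are assumptions; the paper's contribution, the power-spectrum-based response-to-velocity lookup table, is not reproduced here.

```python
# Opponent Reichardt correlator on a space-time luminance array.
import numpy as np

def reichardt_response(signal, tau=5.0, dt=1.0, dx=1):
    """signal: 2-D array (time, space). Returns the opponent EMD output."""
    alpha = dt / (tau + dt)                 # first-order low-pass coefficient
    lp = np.zeros_like(signal)
    for t in range(1, signal.shape[0]):     # temporal low-pass = delay stage
        lp[t] = lp[t - 1] + alpha * (signal[t] - lp[t - 1])
    a, b = signal[:, :-dx], signal[:, dx:]  # two neighbouring photoreceptors
    da, db = lp[:, :-dx], lp[:, dx:]        # their delayed counterparts
    return da * b - db * a                  # mirrored half-detector pair

# A rightward-drifting grating yields a positive mean response:
t, x = np.meshgrid(np.arange(200), np.arange(64), indexing="ij")
drift = np.sin(2 * np.pi * (x - 0.5 * t) / 16.0)
print(reichardt_response(drift).mean() > 0)   # True
```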

  14. Shadow and feature recognition aids for rapid image geo-registration in UAV vision system architectures

    NASA Astrophysics Data System (ADS)

    Baer, Wolfgang; Kölsch, Mathias

    2009-05-01

    The problem of real-time image geo-referencing is encountered in all vision-based cognitive systems. In this paper we present a model-image feedback approach to this problem and show how it can be applied to image exploitation from Unmanned Aerial Vehicle (UAV) vision systems. By calculating reference images from a known terrain database, using a novel ray trace algorithm, we are able to eliminate foreshortening, elevation, and lighting distortions, introduce registration aids and reduce the geo-referencing problem to a linear transformation search over the two-dimensional image space. A method for shadow calculation that maintains real-time performance is also presented. The paper then discusses the implementation of our model-image feedback approach in the Perspective View Nascent Technology (PVNT) software package and provides sample results from UAV mission control and target mensuration experiments conducted at China Lake and Camp Roberts, California.

  15. Outstanding Science in the Neptune System from an Aerocaptured NASA "Vision Mission"

    NASA Technical Reports Server (NTRS)

    Spilker, T. R.; Spilker, L. J.; Ingersoll, A. P.

    2005-01-01

    In 2003 NASA released its Vision Mission Studies NRA (NRA-03-OSS-01-VM) soliciting proposals to study any one of 17 Vision Missions described in the NRA. The authors, along with a team of scientists and engineers, successfully proposed a study of the Neptune Orbiter With Probes (NOP) option, a mission that performs Cassini-level science in the Neptune system without fission-based electric power or propulsion. The Study Team includes a Science Team composed of experienced planetary scientists, many of whom helped draft the Neptune discussions in the 2003 Solar System Exploration Decadal Survey (SSEDS), and an Implementation Team with experienced engineers and technologists from multiple NASA Centers and JPL.

  16. Image processing for a tactile/vision substitution system using digital CNN.

    PubMed

    Lin, Chien-Nan; Yu, Sung-Nien; Hu, Jin-Cheng

    2006-01-01

    In view of the parallel processing and easy implementation properties of CNN, we propose to use a digital CNN as the image processor of a tactile/vision substitution system (TVSS). The digital CNN processor is used to execute the wavelet down-sampling filtering and the half-toning operations, aiming to extract important features from the images. A template combination method is used to embed the two image processing functions into a single CNN processor. The digital CNN processor is implemented as an intellectual property (IP) core on a XILINX VIRTEX II 2000 FPGA board. Experiments are designed to test the capability of the CNN processor in the recognition of characters and human subjects in different environments. The experiments demonstrate impressive results, which prove the proposed digital CNN processor to be a powerful component in the design of efficient tactile/vision substitution systems for visually impaired people.
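
    The CNN template operations are hardware-specific and not reproduced here; as a rough functional stand-in for the half-toning stage, classic Floyd-Steinberg error diffusion (below) produces the kind of binary, feature-preserving image a tactile display can render. This is a named substitute, not the authors' CNN implementation.

```python
# Floyd-Steinberg error-diffusion half-toning as a software stand-in for
# the paper's CNN-template half-toning stage.
import numpy as np

def floyd_steinberg_halftone(gray):
    """gray: 2-D float array in [0, 1]; returns a binary 0/1 image."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new                  # diffuse quantisation error
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return out
```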

  17. Real-time and low-cost embedded platform for car's surrounding vision system

    NASA Astrophysics Data System (ADS)

    Saponara, Sergio; Franchi, Emilio

    2016-04-01

    The design and implementation of a flexible and low-cost embedded system for real-time car-surround vision is presented. The target of the proposed multi-camera vision system is to provide the driver with a better view of the objects that surround the vehicle. Fish-eye lenses are used to achieve a larger Field of View (FOV) but, on the other hand, introduce radial distortion of the images projected on the sensors. With low-cost cameras there can also be alignment issues. Since these complications are noticeable and dangerous, a real-time algorithm for their correction is presented. Then another real-time algorithm, used for merging the 4 camera video streams together in a single view, is described. Real-time image processing is achieved through a combined hardware-software platform.
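
    Radial distortion correction of the kind the paper applies to its fish-eye inputs can be sketched with a single-coefficient polynomial model and an inverse remap, as below. The coefficient k1, the principal point at the image centre, and the use of OpenCV are assumptions; a real system would calibrate each camera.

```python
# Undistortion by inverse mapping: for each undistorted output pixel,
# compute where it lands under the distortion model r_d = r_u*(1 + k1*r_u^2)
# and sample the distorted input there with cv2.remap.
import cv2
import numpy as np

def undistort_radial(img, k1=-0.25):        # k1 < 0 corrects barrel distortion
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0               # assumed principal point
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    xn, yn = (xs - cx) / cx, (ys - cy) / cy  # normalised output coordinates
    r2 = xn ** 2 + yn ** 2
    scale = 1.0 + k1 * r2                    # polynomial distortion factor
    map_x = (xn * scale * cx + cx).astype(np.float32)
    map_y = (yn * scale * cy + cy).astype(np.float32)
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```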

  18. Robust Range Estimation with a Monocular Camera for Vision-Based Forward Collision Warning System

    PubMed Central

    2014-01-01

    We propose a range estimation method for vision-based forward collision warning systems with a monocular camera. To solve the problem of variation of the camera pitch angle due to vehicle motion and road inclination, the proposed method estimates the virtual horizon from the size and position of vehicles in the captured image at run time. The proposed method provides robust results even when road inclination varies continuously on hilly roads or lane markings are not seen on crowded roads. For the experiments, a vision-based forward collision warning system has been implemented and the proposed method is evaluated with video clips recorded in highway and urban traffic environments. Virtual horizons estimated by the proposed method are compared with manually identified horizons, and estimated ranges are compared with measured ranges. Experimental results confirm that the proposed method provides robust results both in highway and in urban traffic environments. PMID:24558344

  19. Robust range estimation with a monocular camera for vision-based forward collision warning system.

    PubMed

    Park, Ki-Yeong; Hwang, Sun-Young

    2014-01-01

    We propose a range estimation method for vision-based forward collision warning systems with a monocular camera. To solve the problem of variation of the camera pitch angle due to vehicle motion and road inclination, the proposed method estimates the virtual horizon from the size and position of vehicles in the captured image at run time. The proposed method provides robust results even when road inclination varies continuously on hilly roads or lane markings are not seen on crowded roads. For the experiments, a vision-based forward collision warning system has been implemented and the proposed method is evaluated with video clips recorded in highway and urban traffic environments. Virtual horizons estimated by the proposed method are compared with manually identified horizons, and estimated ranges are compared with measured ranges. Experimental results confirm that the proposed method provides robust results both in highway and in urban traffic environments.
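
    The geometric core shared by both versions of the paper reduces to the flat-ground perspective relation Z = f·H/(y_b − y_h): once the virtual horizon row y_h is known, range follows from the image row y_b of a vehicle's bottom edge. The sketch below uses placeholder values for focal length and camera height; the papers' actual contribution, estimating y_h from detected vehicles at run time, is not reproduced.

```python
# Monocular flat-ground range from the virtual horizon.
def monocular_range(y_bottom, y_horizon, focal_px=1000.0, cam_height_m=1.3):
    """y_bottom: image row of the target's bottom edge;
    y_horizon: estimated virtual horizon row (both in pixels)."""
    dy = y_bottom - y_horizon            # pixels below the virtual horizon
    if dy <= 0:
        raise ValueError("target must lie below the horizon")
    return focal_px * cam_height_m / dy  # range in metres

print(monocular_range(y_bottom=600, y_horizon=360))  # ~5.4 m
```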

  20. Extending enhanced-vision capabilities by integration of advanced surface movement guidance and control systems (A-SMGCS)

    NASA Astrophysics Data System (ADS)

    Hecker, Peter; Doehler, Hans-Ullrich; Korn, Bernd; Ludwig, T.

    2001-08-01

    DLR has set up a number of projects to increase flight safety and the economics of aviation. Within these activities, one field of interest is the development and validation of systems for pilot assistance in order to increase the situation awareness of the aircrew. All flight phases ('gate-to-gate') are taken into account, but since approach, landing and taxiing are the most critical tasks in civil aviation, special emphasis is given to these operations. As presented in previous contributions within SPIE's Enhanced and Synthetic Vision Conferences, DLR's Institute of Flight Guidance has developed an Enhanced Vision System (EVS) as a tool assisting especially approach and landing by improving the aircrew's situational awareness. The combination of forward looking imaging sensors (such as EADS's HiVision millimeter wave radar), terrain data stored in on-board databases, plus information transmitted from the ground or other aircraft via data link is used to help pilots handle these phases of flight, especially under adverse weather conditions. A second pilot assistance module being developed at DLR is the Taxi And Ramp Management And Control - Airborne System (TARMAC-AS), which is part of an Advanced Surface Movement Guidance and Control System (A-SMGCS). By means of on-board terrain databases and navigation data, a map display is generated which helps the pilot perform taxi operations. In addition to the pure map function, taxi instructions and other traffic can be displayed, as the aircraft is connected via data-link to the TARMAC planning, communication, navigation and surveillance modules on the ground. Recent experiments with airline pilots have shown that the capabilities of taxi assistance can be extended significantly by integrating EVS and TARMAC-AS functionalities. In particular, the extended obstacle detection and warning provided by the Enhanced Vision System increases the safety of ground operations. The presented paper gives an overview of these developments.

  1. Morphological features of the macerated cranial bones registered by the 3D vision system for potential use in forensic anthropology.

    PubMed

    Skrzat, Janusz; Sioma, Andrzej; Kozerska, Magdalena

    2013-01-01

    In this paper we present the potential usage of a 3D vision system for registering features of macerated cranial bones. The applied 3D vision system collects height profiles of the object surface and from these data builds a three-dimensional image of the surface. This method appeared to be accurate enough to capture anatomical details of the macerated bones. With the aid of the 3D vision system we generated images of the surface of a human calvaria, which was used for testing the system. The reconstruction visualized the imprints of the dural vascular system, the cranial sutures, and the three-layer structure of the cranial bones observed in cross-section. We conclude that the 3D vision system may deliver data which can enhance the estimation of sex from osteological material.

  2. Machine vision guided sensor positioning system for leaf temperature assessment

    NASA Technical Reports Server (NTRS)

    Kim, Y.; Ling, P. P.; Janes, H. W. (Principal Investigator)

    2001-01-01

    A sensor positioning system was developed for monitoring plants' well-being using a non-contact sensor. Image processing algorithms were developed to identify a target region on a plant leaf. A novel algorithm to recover view depth was developed by using a camera equipped with a computer-controlled zoom lens. The methodology has improved depth recovery resolution over a conventional monocular imaging technique. An algorithm was also developed to find a maximum enclosed circle on a leaf surface so the conical field-of-view of an infrared temperature sensor could be filled by the target without peripheral noise. The center of the enclosed circle and the estimated depth were used to define the sensor 3-D location for accurate plant temperature measurement.
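
    The "maximum enclosed circle" step has a compact classical solution that may serve as a sketch: on a binary leaf mask, the largest inscribed circle is centred at the maximum of the distance transform, whose value is the radius. The use of OpenCV and a pre-segmented mask are assumptions; the paper's own segmentation and depth-recovery stages are not reproduced.

```python
# Largest inscribed circle of a leaf mask via the distance transform.
import cv2
import numpy as np

def max_enclosed_circle(leaf_mask):
    """leaf_mask: uint8 binary image, 255 inside the leaf, 0 outside."""
    dist = cv2.distanceTransform(leaf_mask, cv2.DIST_L2, 5)
    _, radius, _, centre = cv2.minMaxLoc(dist)  # max distance and location
    return centre, radius                       # (x, y) pixels, radius px

# The circle centre would then be used to aim the conical field of view
# of the infrared temperature sensor so it is filled by leaf only.
```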

  3. Computational approaches to vision

    NASA Technical Reports Server (NTRS)

    Barrow, H. G.; Tenenbaum, J. M.

    1986-01-01

    Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.

  4. Automatic micropropagation of plants--the vision-system: graph rewriting as pattern recognition

    NASA Astrophysics Data System (ADS)

    Schwanke, Joerg; Megnet, Roland; Jensch, Peter F.

    1993-03-01

    The automation of plant micropropagation is necessary to produce large amounts of biomass. Plants have to be dissected at particular cutting-points. A vision system is needed for the recognition of the cutting-points on the plants. Against this background, this contribution addresses the underlying formalism used to determine cutting-points on abstract plant models. We show the usefulness of pattern recognition by graph-rewriting along with some examples in this context.

  5. Computer-based neuro-vision system for color classification of french fries

    NASA Astrophysics Data System (ADS)

    Panigrahi, Suranjan; Wiesenborn, Dennis

    1995-01-01

    French fries are one of the frozen foods with rising demand in domestic and international markets. Color is one of the critical attributes for quality evaluation of french fries. This study discusses the development of a color computer vision system and the integration of neural network technology for objective color evaluation and classification of french fries. The classification accuracy of a prototype back-propagation network developed for this purpose was found to be 96%.

  6. WELDSMART: A vision-based expert system for quality control

    NASA Technical Reports Server (NTRS)

    Andersen, Kristinn; Barnett, Robert Joel; Springfield, James F.; Cook, George E.

    1992-01-01

    This work was aimed at exploring means for utilizing computer technology in quality inspection and evaluation. Inspection of metallic welds was selected as the main application for this development and primary emphasis was placed on visual inspection, as opposed to other inspection methods, such as radiographic techniques. Emphasis was placed on methodologies with the potential for use in real-time quality control systems. Because quality evaluation is somewhat subjective, despite various efforts to classify discontinuities and standardize inspection methods, the task of using a computer for both inspection and evaluation was not trivial. The work started out with a review of the various inspection techniques that are used for quality control in welding. Among other observations from this review was the finding that most weld defects result in abnormalities that may be seen by visual inspection. This supports the approach of emphasizing visual inspection for this work. Quality control consists of two phases: (1) identification of weld discontinuities (some of which may be severe enough to be classified as defects), and (2) assessment or evaluation of the weld based on the observed discontinuities. Usually the latter phase results in a pass/fail judgement for the inspected piece. It is the conclusion of this work that the first of the above tasks, identification of discontinuities, is the most challenging one. It calls for sophisticated image processing and image analysis techniques, and frequently ad hoc methods have to be developed to identify specific features in the weld image. The difficulty of this task is generally not due to limited computing power. In most cases it was found that a modest personal computer or workstation could carry out most computations in a reasonably short time period. Rather, the algorithms and methods necessary for identifying weld discontinuities were in some cases limited. The fact that specific techniques were finally developed and

  7. Neural-based nonimaging vision system for robotic sensing

    NASA Astrophysics Data System (ADS)

    Edwards, Timothy C.; Brown, Joe R.

    1994-03-01

    A multispectral, multiaperture, nonimaging sensor was simulated and constructed to show that the relative location of a robot arm and a specified target can be determined through neural network processing when the arm and target produce different spectral signatures. Data acquired from both computer simulation and actual hardware implementation were used to train an artificial neural network to yield the relative position in two dimensions of a robot arm and a target. The arm and target contained optical sources of different spectral characteristics, which allows the sensor to discriminate between them. Simulation of the sensor gave an error distribution with a mean of zero and a standard deviation of 0.3 inches in each dimension across a work area of 6 by 10 inches. The actual sensor produced a standard deviation of approximately 0.8 inches using a limited number of training and test sets. No significant differences were found in system performance whether 9 or 18 apertures were used, indicating that the minimum number of apertures required is nine or fewer.

  8. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    PubMed Central

    Ehsan, Shoaib; Clark, Adrian F.; ur Rehman, Naveed; McDonald-Maier, Klaus D.

    2015-01-01

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems. PMID:26184211
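
    The recursive equations the paper decomposes are, in scalar software form, the familiar single-pass recursion ii(x, y) = i(x, y) + ii(x-1, y) + ii(x, y-1) - ii(x-1, y-1), after which any rectangle sum costs four lookups. The plain-Python baseline below shows only this serial form; the row-parallel hardware decomposition is the paper's contribution and is not reproduced.

```python
# Serial integral-image construction and O(1) rectangle sums.
import numpy as np

def integral_image(img):
    """img: 2-D array. Returns a zero-padded integral image ii."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            ii[y + 1, x + 1] = (int(img[y, x]) + ii[y, x + 1]
                                + ii[y + 1, x] - ii[y, x])
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of img[y0:y1, x0:x1] via four integral-image lookups."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
```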

  9. A Flexible Fringe Projection Vision System with Extended Mathematical Model for Accurate Three-Dimensional Measurement

    PubMed Central

    Xiao, Suzhi; Tao, Wei; Zhao, Hui

    2016-01-01

    In order to acquire an accurate three-dimensional (3D) measurement, the traditional fringe projection technique applies complex and laborious procedures to compensate for the errors that exist in the vision system. However, the error sources in the vision system are very complex, such as lens distortion, lens defocus, and fringe pattern nonsinusoidality. Some errors cannot even be explained or rendered with clear expressions and are difficult to compensate directly as a result. In this paper, an approach is proposed that avoids the complex and laborious compensation procedure for error sources but still promises accurate 3D measurement. It is realized by the mathematical model extension technique. The parameters of the extended mathematical model for the 'phase to 3D coordinates transformation' are derived using the least-squares parameter estimation algorithm. In addition, a phase-coding method based on a frequency analysis is proposed for the absolute phase map retrieval to spatially isolated objects. The results demonstrate the validity and the accuracy of the proposed flexible fringe projection vision system on spatially continuous and discontinuous objects for 3D measurement. PMID:27136553

  10. Science requirements for PRoViScout, a robotics vision system for planetary exploration

    NASA Astrophysics Data System (ADS)

    Hauber, E.; Pullan, D.; Griffiths, A.; Paar, G.

    2011-10-01

    The robotic exploration of planetary surfaces, including missions of interest for geobiology (e.g., ExoMars), will be the precursor of human missions within the next few decades. Such exploration will require platforms which are much more self-reliant and capable of exploring long distances with limited ground support in order to advance planetary science objectives in a timely manner. The key to this objective is the development of planetary robotic onboard vision processing systems, which will enable the autonomous on-site selection of scientific and mission-strategic targets, and the access thereto. The EU-funded research project PRoViScout (Planetary Robotics Vision Scout) is designed to develop a unified and generic approach for robotic vision onboard processing, namely the combination of navigation and scientific target selection. Any such system needs to be "trained", i.e. it needs (a) scientific requirements which the system needs to address, and (b) a data base of scientifically representative target scenarios which can be analysed. We present our preliminary list of science requirements, based on previous experience from landed Mars missions.

  11. A Flexible Fringe Projection Vision System with Extended Mathematical Model for Accurate Three-Dimensional Measurement.

    PubMed

    Xiao, Suzhi; Tao, Wei; Zhao, Hui

    2016-01-01

    In order to acquire an accurate three-dimensional (3D) measurement, the traditional fringe projection technique applies complex and laborious procedures to compensate for the errors that exist in the vision system. However, the error sources in the vision system are very complex, such as lens distortion, lens defocus, and fringe pattern nonsinusoidality. Some errors cannot even be explained or rendered with clear expressions and are difficult to compensate directly as a result. In this paper, an approach is proposed that avoids the complex and laborious compensation procedure for error sources but still promises accurate 3D measurement. It is realized by the mathematical model extension technique. The parameters of the extended mathematical model for the 'phase to 3D coordinates transformation' are derived using the least-squares parameter estimation algorithm. In addition, a phase-coding method based on a frequency analysis is proposed for the absolute phase map retrieval to spatially isolated objects. The results demonstrate the validity and the accuracy of the proposed flexible fringe projection vision system on spatially continuous and discontinuous objects for 3D measurement. PMID:27136553

  12. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    PubMed

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-01-01

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems. PMID:26184211

  13. NETRA: A parallel architecture for integrated vision systems. 1: Architecture and organization

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok N.; Patel, Janak H.; Ahuja, Narendra

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is considered to be a system that uses vision algorithms from all levels of processing for a high level application (such as object recognition). A model of computation is presented for parallel processing for an IVS. Using the model, desired features and capabilities of a parallel architecture suitable for IVSs are derived. Then a multiprocessor architecture (called NETRA) is presented. This architecture is highly flexible without the use of complex interconnection schemes. The topology of NETRA is recursively defined and hence is easily scalable from small to large systems. Homogeneity of NETRA permits fault tolerance and graceful degradation under faults. It is a recursively defined tree-type hierarchical architecture where each of the leaf nodes consists of a cluster of processors connected with a programmable crossbar with selective broadcast capability to provide for desired flexibility. A qualitative evaluation of NETRA is presented. Then general schemes are described to map parallel algorithms onto NETRA. Algorithms are classified according to their communication requirements for parallel processing. An extensive analysis of inter-cluster communication strategies in NETRA is presented, and parameters affecting performance of parallel algorithms when mapped on NETRA are discussed. Finally, a methodology to evaluate performance of algorithms on NETRA is described.

  14. The study of dual camera 3D coordinate vision measurement system using a special probe

    NASA Astrophysics Data System (ADS)

    Liu, Shugui; Peng, Kai; Zhang, Xuefei; Zhang, Haifeng; Huang, Fengshan

    2006-11-01

    Thanks to its high precision and convenient operation, the vision coordinate measurement machine with a single probe has become a research focus in the machine vision industry. In general, such a visual system can be set up conveniently with just one CCD camera and a probe. However, the price of the system surges too high to be acceptable when top-performance hardware, such as the CCD camera and the image capture card, has to be used to obtain high axis-oriented measurement precision. In this paper, a new dual CCD camera vision coordinate measurement system based on the redundancy principle is proposed to achieve high precision at a moderate price. Two CCD cameras are placed with their optical axes at approximately 90 degrees, so that each camera together with the probe forms a sub-system. With the help of the probe, the intrinsic and extrinsic parameters of the cameras are first calibrated, and the system is then set up using the redundancy technique. Because the axis-oriented error, which is large and always exists in a single-camera system, is eliminated between the two sub-systems, high measurement precision is obtained. Comparison of the results with those from a CMM shows that the proposed system, built from two common CCD cameras, offers excellent stability and precision, with an uncertainty within +/-0.1 mm in the x, y and z directions over a distance of 2 m.

  15. ADVANCED SOLID STATE SENSORS FOR VISION 21 SYSTEMS

    SciTech Connect

    C.D. Stinespring

    2005-04-28

    Silicon carbide (SiC) is a high temperature semiconductor with the potential to meet the gas and temperature sensor needs in both present and future power generation systems. These devices have been and are currently being investigated for a variety of high temperature sensing applications. These include leak detection, fire detection, environmental control, and emissions monitoring. Electronically these sensors can be very simple Schottky diode structures that rely on gas-induced changes in electrical characteristics at the metal-semiconductor interface. In these devices, thermal stability of the interfaces has been shown to be an essential requirement for improving and maintaining sensor sensitivity and lifetime. In this report, we describe device fabrication and characterization studies relevant to the development of SiC based gas and temperature sensors. Specifically, we have investigated the use of periodically stepped surfaces to improve the thermal stability of the metal-semiconductor interface for simple Pd-SiC Schottky diodes. These periodically stepped surfaces have atomically flat terraces on the order of 200 nm wide separated by steps of 1.5 nm height. It should be noted that 1.5 nm is the unit cell height for the 6H-SiC (0001) substrates used in these studies. These surfaces contrast markedly with the "standard" SiC surfaces normally used in device fabrication. Obvious scratches and pits as well as subsurface defects characterize these standard surfaces. This research involved ultrahigh vacuum deposition and characterization studies to investigate the thermal stability of Pd-SiC Schottky diodes on both the stepped and standard surfaces, high temperature electrical characterization of these device structures, and high temperature electrical characterization of diodes under wet and dry oxidizing conditions. To our knowledge, these studies have yielded the first electrical characterization of actual sensor device structures fabricated under ultrahigh vacuum conditions.

  16. A Hybrid Synthetic Vision System for the Tele-operation of Unmanned Vehicles

    NASA Technical Reports Server (NTRS)

    Delgado, Frank; Abernathy, Mike

    2004-01-01

    A system called SmartCam3D (SC3D) has been developed to provide enhanced situational awareness for operators of a remotely piloted vehicle. SC3D is a Hybrid Synthetic Vision System (HSVS) that combines live sensor data with information from a Synthetic Vision System (SVS). By combining the dual information sources, the operators are afforded the advantages of each approach. The live sensor system provides real-time information for the region of interest. The SVS provides information rich visuals that will function under all weather and visibility conditions. Additionally, the combination of technologies allows the system to circumvent some of the limitations from each approach. Video sensor systems are not very useful when visibility conditions are hampered by rain, snow, sand, fog, and smoke, while a SVS can suffer from data freshness problems. Typically, an aircraft or satellite flying overhead collects the data used to create the SVS visuals. The SVS data could have been collected weeks, months, or even years ago. To that extent, the information from an SVS visual could be outdated and possibly inaccurate. SC3D was used in the remote cockpit during flight tests of the X-38 132 and 131R vehicles at the NASA Dryden Flight Research Center. SC3D was also used during the operation of military Unmanned Aerial Vehicles. This presentation will provide an overview of the system, the evolution of the system, the results of flight tests, and future plans. Furthermore, the safety benefits of the SC3D over traditional and pure synthetic vision systems will be discussed.

  17. Autonomous Hovering and Landing of a Quad-rotor Micro Aerial Vehicle by Means of on Ground Stereo Vision System

    NASA Astrophysics Data System (ADS)

    Pebrianti, Dwi; Kendoul, Farid; Azrad, Syaril; Wang, Wei; Nonami, Kenzo

    An on-ground stereo vision system is used for autonomous hovering and landing of a quad-rotor Micro Aerial Vehicle (MAV). This kind of system has an advantage over an embedded on-board vision system for autonomous hovering and landing, since an embedded vision system occasionally gives inaccurate distance calculations due to vibration or the unknown geometry of the landing target. Color-based object tracking using the Continuously Adaptive Mean Shift (CAMSHIFT) algorithm was examined. A nonlinear model of the quad-rotor MAV and a PID controller were used for autonomous hovering and landing. The results show that the CAMSHIFT-based object tracking algorithm performs well. Additionally, a comparison between stereo-vision-based and GPS-based autonomous hovering of the quad-rotor MAV shows that the stereo vision system has better performance. The accuracy of the stereo vision system is about 1 meter in the longitudinal and lateral directions when the quad-rotor flies at an altitude of 6 meters. Under the same experimental conditions, the accuracy of the GPS-based system is about 3 meters. Additionally, the autonomous landing experiment gives reliable results.
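
    A generic single-axis PID position-hold loop of the kind the abstract mentions can be sketched as below, driven by the on-ground stereo system's position estimate. The gains, the 30 Hz update rate, and the command convention are illustrative assumptions, and the authors' nonlinear quad-rotor model is not reproduced.

```python
# Minimal PID position-hold loop for one horizontal axis of a quad-rotor.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt            # accumulated error
        deriv = (err - self.prev_err) / self.dt   # error rate
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# One control step at an assumed 30 Hz vision update rate: the stereo
# system reports a 0.35 m lateral offset from the hover setpoint.
pid_x = PID(kp=0.8, ki=0.05, kd=0.4, dt=1 / 30)
cmd_pitch = pid_x.update(setpoint=0.0, measurement=0.35)
```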

  18. Radiation impacts on star-tracker performance and vision systems in space

    NASA Astrophysics Data System (ADS)

    Jørgensen, John L.; Thuesen, Gøsta G.; Betto, Maurizio; Riis, Troels

    2000-03-01

    CCD chips are widely used in spacecraft applications due to their inherently high resolution, linearity, sensitivity, and low size and power consumption, despite their rather poor tolerance of ionizing radiation. One of the experiments onboard the Teamsat satellite, the payload of the prototype Ariane 502, was the Autonomous Vision System (AVS), a fully autonomous star-tracker with several advanced vision features. The main objective of the AVS was to study autonomous operations during severe radiation flux and after appreciable total dose. The AVS experiment and the radiation experienced onboard Teamsat are described. Examples of various radiation impacts on the AVS instrument are given and compared to ground-based radiation tests.

  19. FPGA-based multimodal embedded sensor system integrating low- and mid-level vision.

    PubMed

    Botella, Guillermo; Martín H, José Antonio; Santos, Matilde; Meyer-Baese, Uwe

    2011-01-01

    Motion estimation is a low-level vision task that is especially relevant due to its wide range of applications in the real world. Many of the best motion estimation algorithms include some of the features that are found in mammals, which would demand huge computational resources and therefore are not usually available in real time. In this paper we present a novel bioinspired sensor based on the synergy between optical flow and orthogonal variant moments. The bioinspired sensor has been designed for Very Large Scale Integration (VLSI) using properties of the mammalian cortical motion pathway. This sensor combines low-level primitives (optical flow and image moments) in order to produce a mid-level vision abstraction layer. The results are described through experiments showing the validity of the proposed system and an analysis of the computational resources and performance of the applied algorithms.

  20. Hubble Space Telescope Telemetry Access using the Vision 2000 Control Center System (CCS)

    NASA Astrophysics Data System (ADS)

    Miebach, M.; Dolensky, M.

    Major changes to the Space Telescope Ground Systems are presently in progress. The main objectives of the re-engineering effort, Vision 2000 (http://vision.hst.nasa.gov/), are to reduce development and operation costs for the remaining years of Space Telescope's lifetime. Costs are reduced by the use of commercial off the shelf (COTS) products wherever possible. Part of CCS is a Space Telescope Engineering Data Store, the design of which is based on modern Data Warehouse technology. The purpose of this data store is to provide a common data source for telemetry. Some of the capabilities of CCS will be illustrated: samples of real-time data pages and plots of selected historical telemetry points.

  1. Complete vision-based traffic sign recognition supported by an I2V communication system.

    PubMed

    García-Garrido, Miguel A; Ocaña, Manuel; Llorca, David F; Arroyo, Estefanía; Pozuelo, Jorge; Gavilán, Miguel

    2012-01-01

    This paper presents a complete traffic sign recognition system based on a vision sensor onboard a moving vehicle which detects and recognizes up to one hundred of the most important road signs, including circular and triangular signs. A restricted Hough transform is used as the detection method on information extracted from contour images, while the proposed recognition system is based on Support Vector Machines (SVM). A novel solution to the problem of discarding detected signs that do not pertain to the host road is proposed. For that purpose infrastructure-to-vehicle (I2V) communication and a stereo vision sensor are used. Furthermore, the outputs provided by the vision sensor and the data supplied by the CAN Bus and a GPS sensor are combined to obtain the global position of the detected traffic signs, which is used to identify a traffic sign in the I2V communication. This paper presents extensive tests in real driving conditions, both day and night, in which an average detection rate over 95% and an average recognition rate around 93% were obtained with an average runtime of 35 ms that allows real-time performance.

  2. SAD-Based Stereo Vision Machine on a System-on-Programmable-Chip (SoPC)

    PubMed Central

    Zhang, Xiang; Chen, Zhangwei

    2013-01-01

    This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices such as DDRII, SSRAM, Flash, etc., by IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, which is a configurable soft IP core in charge of managing the image buffer and users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled with the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process many different sizes of stereo pair images; the maximum image size is up to 512 K pixels. The machine is designed to focus on real-time stereo vision applications, and offers good performance and high efficiency in real time. With a hardware FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained per second with a 5 × 5 matching window and a maximum of 64 disparity pixels. PMID:23459385

  3. SAD-based stereo vision machine on a System-on-Programmable-Chip (SoPC).

    PubMed

    Zhang, Xiang; Chen, Zhangwei

    2013-03-04

    This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices such as DDRII, SSRAM, Flash, etc., by IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, which is a configurable soft IP core in charge of managing the image buffer and users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled with the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process many different sizes of stereo pair images; the maximum image size is up to 512 K pixels. The machine is designed to focus on real-time stereo vision applications, and offers good performance and high efficiency in real time. With a hardware FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained per second with a 5 × 5 matching window and a maximum of 64 disparity pixels.
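
    As a software reference for the matching that the machine implements in logic, the sketch below computes a dense disparity map by sliding a 5 x 5 window over a 64-pixel search range and keeping the offset with the smallest sum of absolute differences, following the parameters in the abstract. This plain CPU restatement is what the FPGA pipeline parallelises; grayscale uint8 inputs are assumed.

```python
# Brute-force SAD block matching on a rectified grayscale stereo pair.
import numpy as np

def sad_disparity(left, right, max_disp=64, win=5):
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.uint8)
    left = left.astype(np.int32)
    right = right.astype(np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best, best_d = None, 0
            for d in range(max_disp):        # search along the epipolar line
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                cost = np.abs(patch - cand).sum()   # sum of absolute diffs
                if best is None or cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp
```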

  3. REVS: a radar-based enhanced vision system for degraded visual environments

    NASA Astrophysics Data System (ADS)

    Brailovsky, Alexander; Bode, Justin; Cariani, Pete; Cross, Jack; Gleason, Josh; Khodos, Victor; Macias, Gary; Merrill, Rahn; Randall, Chuck; Rudy, Dean

    2014-06-01

    Sierra Nevada Corporation (SNC) has developed an enhanced vision system utilizing fast-scanning 94 GHz radar technology to provide three-dimensional measurements of an aircraft's forward external scene topography. This three-dimensional data is rendered as terrain imagery, from the pilot's perspective, on a Head-Up Display (HUD). The image provides the requisite "enhanced vision" to continue a safe approach along the flight path below the Decision Height (DH) in Instrument Meteorological Conditions (IMC) that would otherwise be cause for a missed approach. Terrain imagery is optionally fused with digital elevation model (DEM) data of terrain outside the radar field of view, giving the pilot additional situational awareness. Flight tests conducted in 2013 show that REVS™ has sufficient resolution and sensitivity to allow identification of the requisite visual references well above decision height in dense fog. This paper provides an overview of the Enhanced Flight Vision System (EFVS) concept and the technology underlying REVS, followed by a detailed discussion of the flight test results.

  4. Present and future of vision systems technologies in commercial flight operations

    NASA Astrophysics Data System (ADS)

    Ward, Jim

    2016-05-01

    The development of systems to enable pilots of all types of aircraft to see through fog, clouds, and sandstorms and land in low visibility has been widely discussed and researched across aviation. For military applications, the goal has been to operate in a Degraded Visual Environment (DVE), using sensors to enable flight crews to see and operate without concern for weather that limits human visibility. These military DVE goals are mainly oriented to the off-field landing environment. For commercial aviation, the Federal Aviation Administration (FAA) implemented operational regulations in 2004 that allow the flight crew to see the runway environment using an Enhanced Flight Vision System (EFVS) and continue the approach below the normal landing decision height. The FAA is expanding the current use and economic benefit of EFVS technology and will soon permit landing without any natural vision using real-time weather-penetrating sensors. The operational goals of both of these efforts, DVE and EFVS, have been the stimulus for development of new sensors and vision displays to create the modern flight deck.

  5. Infrared machine vision system for the automatic detection of olive fruit quality.

    PubMed

    Guzmán, Elena; Baeten, Vincent; Pierna, Juan Antonio Fernández; García-Mesa, José A

    2013-11-15

    External quality is an important factor in the extraction of olive oil and the marketing of olive fruits. The appearance and presence of external damage are factors that influence the quality of the oil extracted and the perception of consumers, determining the level of acceptance prior to purchase in the case of table olives. The aim of this paper is to report on artificial vision techniques developed for the online estimation of olive quality and to assess the effectiveness of these techniques in evaluating quality based on detecting external defects. The method classifies olives according to the presence of defects using an infrared (IR) vision system. Images of defects were acquired using a digital monochrome camera with band-pass filters in the near-infrared (NIR). The original images were processed using segmentation algorithms, edge detection, and pixel intensity values to classify the whole fruit. Defect detection involved a pixel classification procedure based on nonparametric models of the healthy and defective areas of olives. Classification tests were performed on olives to assess the effectiveness of the proposed method. This research showed that the IR vision system is a useful technology for the automatic assessment of olives, with potential for use in offline inspection and online sorting for defects and surface damage, easily distinguishing those that do not meet minimum quality requirements. PMID:24148491
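
    A minimal sketch of the kind of nonparametric pixel classification the abstract mentions: per-class intensity histograms built from labeled healthy and defective regions serve as likelihoods, and each NIR pixel is labeled by the larger one. This is an illustration of the general idea, not the authors' exact models.

```python
# Histogram-based (nonparametric) pixel classifier for NIR images.
import numpy as np

def fit_histogram(pixels, bins=256):
    """Normalized intensity histogram acting as a class likelihood."""
    hist, _ = np.histogram(pixels, bins=bins, range=(0, 256), density=True)
    return hist + 1e-9                      # avoid zero likelihoods

def defect_mask(nir, healthy_hist, defect_hist):
    """Boolean mask marking pixels more likely defective than healthy."""
    idx = np.clip(nir.astype(np.int32), 0, 255)
    return defect_hist[idx] > healthy_hist[idx]

# Usage, with pixel samples taken from manually labeled training regions:
# mask = defect_mask(img, fit_histogram(healthy_px), fit_histogram(defect_px))
```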

  6. Context-specific energy strategies: coupling energy system visions with feasible implementation scenarios.

    PubMed

    Trutnevyte, Evelina; Stauffacher, Michael; Schlegel, Matthias; Scholz, Roland W

    2012-09-01

    A conventional energy strategy defines an energy system vision (the goal), energy scenarios with technical choices, and an implementation mechanism (such as economic incentives). Because a generic vision takes the lead, such a strategy, when applied in a specific regional context, can deviate from the optimal one, for instance, the one with the lowest environmental impacts. This paper proposes an approach for developing energy strategies by simultaneously, rather than sequentially, combining multiple energy system visions with technically feasible, cost-effective energy scenarios that meet environmental constraints at a given place. The approach is illustrated by developing a residential heat supply strategy for a Swiss region. In the analyzed case, urban municipalities should focus on reducing heat demand, and rural municipalities should focus on harvesting local energy sources, primarily wood. Solar thermal units are cost-competitive in all municipalities, and their deployment should be fostered by information campaigns. Heat pumps and building refurbishment are not competitive; thus, economic incentives are essential, especially for urban municipalities. In rural municipalities, wood is cost-competitive, and community-based initiatives are likely to be most successful. The paper thus shows that energy strategies should be spatially differentiated. The suggested approach can be transferred to other regions and spatial scales.

  7. Principles of image processing in machine vision systems for the color analysis of minerals

    NASA Astrophysics Data System (ADS)

    Petukhova, Daria B.; Gorbunova, Elena V.; Chertov, Aleksandr N.; Korotaev, Valery V.

    2014-09-01

    At present, color sorting is one of the most promising methods of mineral raw material enrichment. The method is based on registering color differences between images of the analyzed objects. A well-known problem of the color sorting method is the delimitation of close color tints when sorting low-contrast minerals. This can be related to a wrong choice of color model and incomplete image processing in the machine vision system realizing the color sorting algorithm. Another problem is the need to reconfigure the image processing parameters when the type of analyzed mineral changes, because the optical properties of mineral samples vary from one deposit to another. Searching for suitable image processing parameter values is therefore a non-trivial task that does not always have an acceptable solution. In principle, this reconfiguration should be performed by machine learning, but in practice it is carried out by adjusting operating parameters until they are satisfactory for one specific enrichment task. This approach usually means that the machine vision system is unable to rapidly estimate the concentration of the analyzed mineral ore by color sorting. This paper presents the results of research aimed at addressing these shortcomings in the organization of image processing for machine vision systems used for color sorting of mineral samples. The principles of color analysis for low-contrast minerals using machine vision systems are also studied. In addition, a special processing algorithm for color images of mineral samples is developed, which automatically determines the criteria for separating mineral samples based on an analysis of representative samples. Experimental studies of the proposed algorithm
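
    One simple way to derive a separation criterion automatically from representative samples, in the spirit of the algorithm described above, is to pick the color channel with the greatest class separability and threshold between the class means. The sketch below is an assumption-laden illustration (Fisher ratio, CIELAB input), not the paper's algorithm.

```python
# Deriving a separation criterion from representative samples of two
# mineral classes: choose the channel with the largest Fisher ratio and
# cut midway between the class means.
import numpy as np

def separation_criterion(class_a, class_b):
    """class_a, class_b: (N, 3) pixel colors, e.g. in CIELAB."""
    mu_a, mu_b = class_a.mean(axis=0), class_b.mean(axis=0)
    spread = class_a.var(axis=0) + class_b.var(axis=0) + 1e-9
    fisher = (mu_a - mu_b) ** 2 / spread    # per-channel separability
    ch = int(np.argmax(fisher))
    threshold = 0.5 * (mu_a[ch] + mu_b[ch])
    a_is_below = bool(mu_a[ch] < mu_b[ch])  # polarity of the cut
    return ch, threshold, a_is_below
```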

  8. The ART of representation: Memory reduction and noise tolerance in a neural network vision system

    NASA Astrophysics Data System (ADS)

    Langley, Christopher S.

    The Feature Cerebellar Model Arithmetic Computer (FCMAC) is a multiple-input-single-output neural network that can provide three-degree-of-freedom (3-DOF) pose estimation for a robotic vision system. The FCMAC provides sufficient accuracy to enable a manipulator to grasp an object from an arbitrary pose within its workspace. The network learns an appearance-based representation of an object by storing coarsely quantized feature patterns. As all unique patterns are encoded, the network size grows uncontrollably. A new architecture is introduced herein, which combines the FCMAC with an Adaptive Resonance Theory (ART) network. The ART module categorizes patterns observed during training into a set of prototypes that are used to build the FCMAC. As a result, the network no longer grows without bound, but constrains itself to a user-specified size. Pose estimates remain accurate since the ART layer tends to discard the least relevant information first. The smaller network performs recall faster, and in some cases is better for generalization, resulting in a reduction of error at recall time. The ART-Under-Constraint (ART-C) algorithm is extended to include initial filling with randomly selected patterns (referred to as ART-F). In experiments using a real-world data set, the new network performed equally well using less than one tenth the number of coarse patterns as a regular FCMAC. The FCMAC is also extended to include real-valued input activations. As a result, the network can be tuned to reject a variety of types of noise in the image feature detection. A quantitative analysis of noise tolerance was performed using four synthetic noise algorithms, and a qualitative investigation was made using noisy real-world image data. In validation experiments, the FCMAC system outperformed Radial Basis Function (RBF) networks for the 3-DOF problem, and had accuracy comparable to that of Principal Component Analysis (PCA) and superior to that of Shape Context Matching (SCM), both
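
    The central mechanism, an ART-style categorization layer that caps memory growth by merging sufficiently similar patterns into prototypes, can be sketched as follows. The vigilance test, merge rule, and size cap are illustrative stand-ins for the ART-C details.

```python
# ART-style bounded prototype memory: similar patterns merge into a
# prototype; new prototypes are created only while under the size cap.
import numpy as np

class BoundedPrototypeMemory:
    def __init__(self, max_size, vigilance=0.9):
        self.max_size = max_size            # user-specified capacity
        self.vigilance = vigilance          # similarity needed to merge
        self.protos, self.counts = [], []

    def observe(self, pattern):
        p = np.asarray(pattern, dtype=np.float32)
        if not self.protos:
            self.protos.append(p.copy()); self.counts.append(1)
            return 0
        sims = [float(np.dot(q, p) /
                (np.linalg.norm(q) * np.linalg.norm(p) + 1e-9))
                for q in self.protos]
        k = int(np.argmax(sims))
        if sims[k] >= self.vigilance or len(self.protos) >= self.max_size:
            # merge with the winner (forced when memory is full)
            self.counts[k] += 1
            self.protos[k] += (p - self.protos[k]) / self.counts[k]
            return k
        self.protos.append(p.copy()); self.counts.append(1)
        return len(self.protos) - 1
```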

  9. Measured system component development for the night vision integrated performance model (NV-IPM)

    NASA Astrophysics Data System (ADS)

    Teaney, Brian P.; Haefner, David P.

    2016-05-01

    The Night Vision Integrated Performance Model (NV-IPM) introduced a variety of measured system components in version 1.6 of the model. These measured system components enable the characterization of systems based on lab measurements which treat the system as a "black box." This encapsulation of individual component terms into higher-level measurable quantities circumvents the need to develop costly, time-consuming measurement techniques for each individual input term. Each of the "black-box" system components was developed based upon the minimum required system-level measurements for a particular type of imaging system. The measured system hierarchy also includes components for cases where only a very limited number of measurements are possible. We discuss the development of the measured system components, the transition of lab measurements into model inputs, and the assumptions inherent to this process.

  10. System for synthetic vision and augmented reality in future flight decks

    NASA Astrophysics Data System (ADS)

    Behringer, Reinhold; Tam, Clement K.; McGee, Joshua H.; Sundareswaran, Venkataraman; Vassiliou, Marius S.

    2000-06-01

    Rockwell Science Center is investigating novel human-computer interface techniques for enhancing situational awareness in future flight decks. One aspect is to provide intuitive displays which convey vital information and spatial awareness by augmenting the real world with an overlay of relevant information registered to the real world. Such Augmented Reality (AR) techniques can be employed in bad weather scenarios to permit flying under Visual Flight Rules (VFR) in conditions which would normally require Instrument Flight Rules (IFR). These systems could easily be implemented on head-up displays (HUD). The advantage of AR systems over purely synthetic vision (SV) systems is that the pilot can relate the information overlay to real objects in the world, whereas SV systems provide a constant virtual view in which inconsistencies can hardly be detected. The development of components for such a system led to a demonstrator implemented on a PC. A camera grabs video images which are overlaid with registered information. Orientation of the camera is obtained from an inclinometer and a magnetometer; position is acquired from GPS. In a possible implementation in an airplane, the on-board attitude information can be used for obtaining correct registration. If visibility is sufficient, computer vision modules can be used to fine-tune the registration by matching visual cues with database features. Such technology would be especially useful for landing approaches. The current demonstrator provides a frame rate of 15 fps, using a live video feed as background and an overlay of avionics symbology in the foreground. In addition, terrain rendering from a 1 arc sec digital elevation model database can be overlaid to provide synthetic vision in case of limited visibility. For true outdoor testing (at ground level), the system has been implemented on a wearable computer.

  11. Structure light telecentric stereoscopic vision 3D measurement system based on Scheimpflug condition

    NASA Astrophysics Data System (ADS)

    Mei, Qing; Gao, Jian; Lin, Hui; Chen, Yun; Yunbo, He; Wang, Wei; Zhang, Guanjin; Chen, Xin

    2016-11-01

    We designed a new three-dimensional (3D) measurement system for micro components: a structured light telecentric stereoscopic vision 3D measurement system based on the Scheimpflug condition. This system creatively combines the telecentric imaging model and the Scheimpflug condition on the basis of structured light stereoscopic vision, offering a wide measurement range, high accuracy, fast speed, and low price. The system measurement range is 20 mm × 13 mm × 6 mm, the lateral resolution is 20 μm, and the practical vertical resolution reaches 2.6 μm, which is close to the theoretical value of 2 μm and well satisfies the 3D measurement needs of micro components such as semiconductor devices, photoelectronic elements, and micro-electromechanical systems. In this paper, we first introduce the principle and structure of the system and then present the system calibration and 3D reconstruction. We then present an experiment performed for the 3D reconstruction of the surface topography of a wafer, followed by a discussion. Finally, the conclusions are presented.
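
    For reference, the Scheimpflug condition the system is built on can be stated compactly. The following is the standard textbook form for a thin lens, not a derivation from the paper: the object plane, lens plane, and image plane must intersect in a common line, which ties the image-plane tilt to the object-plane tilt through the magnification.

```latex
% Thin-lens imaging (object distance u, image distance v, focal length f);
% the Scheimpflug condition forces object, lens, and image planes to share
% one line, relating the image-plane tilt \theta' to the object-plane tilt
% \theta through the magnification m:
\[
  \frac{1}{u} + \frac{1}{v} = \frac{1}{f}, \qquad
  \tan\theta' = \frac{v}{u}\,\tan\theta = m\,\tan\theta .
\]
```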

  12. Synthetic vision system for improving unmanned aerial vehicle operator situation awareness

    NASA Astrophysics Data System (ADS)

    Calhoun, Gloria L.; Draper, Mark H.; Abernathy, Michael F.; Patzek, Michael; Delgado, Francisco

    2005-05-01

    The Air Force Research Laboratory's Human Effectiveness Directorate (AFRL/HE) supports research addressing human factors associated with Unmanned Aerial Vehicle (UAV) operator control stations. Recent research, in collaboration with Rapid Imaging Software, Inc., has focused on determining the value of combining synthetic vision data with live camera video presented on a UAV control station display. Information is constructed from databases (e.g., terrain, cultural features, pre-mission plan, etc.), as well as numerous information updates via networked communication with other sources (e.g., weather, intel). This information is overlaid conformally, in real time, onto the dynamic camera video image presented to operators. Synthetic vision overlay technology is expected to improve operator situation awareness by highlighting key spatial information elements of interest directly on the video image, such as threat locations, expected locations of targets, landmarks, emergency airfields, etc. It may also help maintain an operator's situation awareness during periods of video datalink degradation or dropout and when operating in conditions of poor visibility. Additionally, this technology may serve as an intuitive means of distributed communication between geographically separated users. This paper discusses the tailoring of synthetic overlay technology for several UAV applications. Pertinent human factors issues are detailed, as well as the usability, simulation, and flight test evaluations required to determine how best to combine synthetic visual data with live camera video on a ground control station display and to validate that a synthetic vision system is beneficial for UAV applications.

  13. An Automated Mouse Tail Vascular Access System by Vision and Pressure Feedback

    PubMed Central

    Chang, Yen-Chi; Berry-Pusey, Brittany; Yasin, Rashid; Vu, Nam; Maraglia, Brandon; Chatziioannou, Arion X.; Tsao, Tsu-Chin

    2015-01-01

    This paper develops an automated vascular access system (A-VAS) with novel vision-based vein and needle detection methods and real-time pressure feedback for murine drug delivery. Mouse tail vein injection is a routine but critical step for preclinical imaging applications. Due to the small vein diameter and external disturbances such as tail hair, pigmentation, and scales, identifying the vein location is difficult, and manual injections usually result in poor repeatability. To improve injection accuracy, consistency, safety, and processing time, A-VAS was developed to overcome difficulties in noise rejection for vein detection, robustness of needle tracking, and the integration of visual servoing with the mechatronics system. PMID:26478693

  14. Pilot performance and eye movement activity with varying levels of display integration in a synthetic vision cockpit

    NASA Astrophysics Data System (ADS)

    Stark, Julie Michele

    The primary goal of the present study was to investigate the effects of display integration in a simulated commercial aircraft cockpit equipped with a synthetic vision display. Combinations of display integration level (low/high), display view (synthetic vision view/traditional display), and workload (low/high) were presented to each participant. Sixteen commercial pilots flew multiple approaches under IMC conditions in a moderate fidelity fixed-base part-task simulator. Pilot performance data, visual activity, mental workload, and self-report situation awareness were measured. Congruent with the Proximity Compatibility Principle, the more integrated display facilitated superior performance on integrative tasks (lateral and vertical path maintenance), whereas a less integrated display elicited better focus task performance (airspeed maintenance). The synthetic vision displays facilitated superior path maintenance performance under low workload, but these performance gains were not as evident during high workload. The majority of the eye movement findings identified differences in visual acquisition of the airspeed indicator, the glideslope indicator, the localizer, and the altimeter as a function of display integration level or display view. There were more fixations on the airspeed indicator with the more integrated display layout and during high workload trials. There were also more fixations on the glideslope indicator with the more integrated display layout. However, there were more fixations on the localizer with the less integrated display layout. There were more fixations on the altimeter with the more integrated display and with the traditional view. Only a few eye movement differences were produced by the synthetic vision displays; pilots looked at the glideslope indicator and the altimeter less with the synthetic vision view. This supports the notion that utilizing a synthetic vision display should not adversely impact visual acquisition of data. Self

  15. Air and Water System (AWS) Design and Technology Selection for the Vision for Space Exploration

    NASA Technical Reports Server (NTRS)

    Jones, Harry; Kliss, Mark

    2005-01-01

    This paper considers technology selection for the crew air and water recycling systems to be used in long-duration human space exploration. The specific objectives are to identify the most probable air and water technologies for the vision for space exploration and to identify the alternate technologies that might be developed. The approach is to conduct a preliminary, first-cut systems engineering analysis, beginning with the Air and Water System (AWS) requirements and the system mass balance, and then to define the functional architecture, review the International Space Station (ISS) technologies, and discuss alternate technologies. The life support requirements for air and water are well known. The results of the mass flow and mass balance analysis help define the system architectural concept. The AWS includes five subsystems: Oxygen Supply, Condensate Purification, Urine Purification, Hygiene Water Purification, and Clothes Wash Purification. AWS technologies have been evaluated in the life support design for ISS Node 3, in earlier space station design studies, in proposals for the upgrade or evolution of the space station, and in studies of potential lunar or Mars missions. The leading candidate technologies for the vision for space exploration are those planned for Node 3 of the ISS. The ISS life support was designed to utilize Space Station Freedom (SSF) hardware to the maximum extent possible. The SSF final technology selection process, criteria, and results are discussed. Would it be cost-effective for the vision for space exploration to develop alternate technology? This paper examines this and other questions associated with AWS design and technology selection.

  16. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications.

    PubMed

    Musleh, Basam; Martín, David; Armingol, José María; de la Escalera, Arturo

    2016-09-14

    Intelligent systems applied to vehicles have grown very rapidly in recent years; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on computer vision in order to perceive the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system, because the matching of measurements between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution with respect to the state of the art is the estimation of the pitch angle without it being affected by the roll angle. The self-calibration method is validated by comparing it with relevant camera pose estimation methods, using a synthetic sequence to measure the continuous error against a ground truth. This validation is enriched by experimental results of the method in real traffic environments.
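
    The abstract does not spell out the estimation itself; one common way to obtain stereo-camera pitch that is insensitive to roll is the v-disparity construction, in which the ground plane projects to a line whose zero-disparity row marks the horizon. The sketch below illustrates that approach under its own assumptions (dense disparity available, ground dominating the lower image); it is not necessarily the authors' method.

```python
# v-disparity pitch estimation: the ground plane maps to a line
# d = a*v + b over image rows v; the row where d reaches zero is the
# horizon, and its offset from the principal point gives the pitch.
import numpy as np

def pitch_from_v_disparity(disparity, fy, cy, d_max=64):
    """Pitch in radians from a dense disparity map (sign convention
    depends on the camera setup)."""
    h, w = disparity.shape
    rows, row_disp = [], []
    for v in range(h):
        d = disparity[v]
        d = d[(d > 0) & (d < d_max)]
        if d.size > w // 8:                 # row has enough support
            rows.append(v)
            row_disp.append(np.median(d))   # robust row disparity
    a, b = np.polyfit(rows, row_disp, 1)    # ground line d = a*v + b
    v_horizon = -b / a                      # disparity -> 0 at horizon
    return np.arctan2(v_horizon - cy, fy)
```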

  1. A Neural Network Architecture For Rapid Model Indexing In Computer Vision Systems

    NASA Astrophysics Data System (ADS)

    Pawlicki, Ted

    1988-03-01

    Models of objects stored in memory have been shown to be useful for guiding the processing of computer vision systems. A major consideration in such systems, however, is how stored models are initially accessed and indexed by the system. As the number of stored models increases, the time required to search memory for the correct model becomes high. Parallel, distributed, connectionist neural networks have been shown to have appealing content-addressable memory properties. This paper discusses an architecture for efficient storage and reference of model memories stored as stable patterns of activity in a parallel, distributed, connectionist neural network. The emergent properties of content addressability and resistance to noise are exploited to index the appropriate object-centered model from image-centered primitives. The system consists of three network modules, each of which represents information relative to a different frame of reference. The model memory network is a large state-space vector where fields in the vector correspond to ordered component objects and relative, object-based spatial relationships between the component objects. The component assertion network represents evidence about the existence of object primitives in the input image. It establishes local frames of reference for object primitives relative to the image-based frame of reference. The spatial relationship constraint network is an intermediate representation which enables the association between the object-based and the image-based frames of reference. This intermediate level represents information about possible object orderings and establishes relative spatial relationships from the image-based information in the component assertion network below. It is also constrained by the lawful object orderings in the model memory network above. The system design is consistent with current psychological theories of recognition-by-components. It also seems to support Marr's notions

  2. Broad Band Antireflection Coating on Zinc Sulphide Window for Shortwave infrared cum Night Vision System

    NASA Astrophysics Data System (ADS)

    Upadhyaya, A. S.; Bandyopadhyay, P. K.

    2012-11-01

    In state-of-the-art technology, integrated devices are widely used for their potential advantages. A common system reduces weight as well as the total space occupied by its various parts. In state-of-the-art surveillance systems, an integrated SWIR and night vision system is used for more accurate identification of objects. In such a system a common optical window is used, which passes the radiation of both regions; the two spectral bands are then separated into two channels. ZnS is a good choice for a common window, as it transmits both regions of interest: night vision (650 - 850 nm) as well as SWIR (0.9 - 1.7 μm). In this work a broadband antireflection coating is developed on a ZnS window to enhance the transmission. This seven-layer coating is designed using the flip-flop design method. After obtaining the final design, some minor refinement is done using the simplex method. A combination of SiO2 and TiO2 coating materials is used for this work. The coating is fabricated by a physical vapour deposition process, with the materials evaporated by an electron beam gun. The average transmission of the substrate coated on both sides is 95% from 660 to 1700 nm. The coating also acts as a contrast enhancement filter for night vision devices, as it reflects the 590 - 660 nm region. Several trials have been conducted to check the coating repeatability, and the transmission variation between trials is small and within the tolerance limit. The coating also passes environmental tests for stability.
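
    Design methods such as flip-flop search and simplex refinement repeatedly evaluate the spectral performance of a candidate layer stack. The evaluation core is the standard characteristic-matrix calculation, sketched below at normal incidence; the indices and thicknesses used are illustrative, not the paper's seven-layer design.

```python
# Characteristic-matrix evaluation of a multilayer coating at normal
# incidence (the merit function a flip-flop/simplex search would call).
# Indices and thicknesses are illustrative, not the paper's design.
import numpy as np

def reflectance(stack, n_inc, n_sub, wavelength_nm):
    """stack: [(refractive_index, thickness_nm), ...] from the air side."""
    m = np.eye(2, dtype=complex)
    for n, d in stack:
        delta = 2.0 * np.pi * n * d / wavelength_nm   # phase thickness
        m = m @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    b, c = m @ np.array([1.0, n_sub])
    r = (n_inc * b - c) / (n_inc * b + c)
    return float(abs(r) ** 2)

# Example: alternating TiO2/SiO2 layers on ZnS (indices approximate).
stack = [(2.3, 60.0), (1.46, 100.0)] * 3 + [(2.3, 60.0)]   # 7 layers
band = [reflectance(stack, 1.0, 2.2, wl) for wl in range(660, 1701, 20)]
```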

  3. A synthetic vision system using directionally selective motion detectors to recognize collision.

    PubMed

    Yue, Shigang; Rind, F Claire

    2007-01-01

    Reliably recognizing objects approaching on a collision course is extremely important. A synthetic vision system is proposed to tackle the problem of collision recognition in dynamic environments. The system combines the outputs of four whole-field motion-detecting neurons, each receiving inputs from a network of neurons employing asymmetric lateral inhibition to suppress their responses to one direction of motion. An evolutionary algorithm is then used to adjust the weights between the four motion-detecting neurons to tune the system to detect collisions in two test environments. To do this, a population of agents, each representing a proposed synthetic visual system, either were shown images generated by a mobile Khepera robot navigating in a simplified laboratory environment or were shown images videoed outdoors from a moving vehicle. The agents had to cope with the local environment correctly in order to survive. After 400 generations, the best agent recognized imminent collisions reliably in the familiar environment where it had evolved. However, when the environment was swapped, only the agent evolved to cope in the robotic environment still signaled collision reliably. This study suggests that whole-field direction-selective neurons, with selectivity based on asymmetric lateral inhibition, can be organized into a synthetic vision system, which can then be adapted to play an important role in collision detection in complex dynamic scenes.
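
    The combination stage described above can be sketched compactly: the four whole-field detector outputs are mixed through a weight vector, thresholded into a collision alarm, and the weights are tuned by an evolutionary search. The loop below is a simplified stand-in for the paper's evolutionary algorithm, and the detector outputs are assumed to be precomputed.

```python
# Weighted combination of four direction-selective detector outputs,
# tuned by a simplified evolutionary search (a stand-in for the paper's
# full algorithm). Detector outputs are assumed to be given.
import numpy as np

rng = np.random.default_rng(0)

def collision_signal(detectors, weights, threshold=1.0):
    """detectors: (4,) whole-field motion detector outputs."""
    return float(np.dot(weights, detectors)) > threshold

def evolve_weights(episodes, labels, generations=400, pop=50):
    """episodes: (N, 4) detector outputs; labels: (N,) collision truth."""
    population = rng.normal(size=(pop, 4))
    for _ in range(generations):
        scores = [np.mean([collision_signal(e, w) == y
                           for e, y in zip(episodes, labels)])
                  for w in population]
        order = np.argsort(scores)[::-1]          # best first
        elite = population[order[:pop // 5]]
        picks = rng.integers(len(elite), size=pop - len(elite))
        children = elite[picks] + rng.normal(scale=0.1,
                                             size=(pop - len(elite), 4))
        population = np.vstack([elite, children])
    return population[0]                          # best evolved weights
```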

  4. Experimental study on a smart wheelchair system using a combination of stereoscopic and spherical vision.

    PubMed

    Nguyen, Jordan S; Su, Steven W; Nguyen, Hung T

    2013-01-01

    This paper is concerned with the experimental performance of a smart wheelchair system named TIM (Thought-controlled Intelligent Machine), which uses a unique camera configuration for vision. Included in this configuration are stereoscopic cameras for 3-dimensional (3D) depth perception and mapping ahead of the wheelchair, and a spherical camera system for 360 degrees of monocular vision. The camera combination provides obstacle detection and mapping in unknown environments during real-time autonomous navigation of the wheelchair. With the integration of hands-free wheelchair control technology, designed as control methods for people with severe physical disability, the smart wheelchair system can assist the user with automated guidance during navigation. An experimental study of this system was conducted with a total of 10 participants, consisting of 8 able-bodied subjects and 2 tetraplegic (C-6 to C-7) subjects. The hands-free control technologies utilized for this testing were a head-movement controller (HMC) and a brain-computer interface (BCI). The results showed that the assistance of TIM's automated guidance system had a statistically significant reduction effect (p-value = 0.000533) on the completion times of the obstacle course presented in the experimental study, as compared to the test runs conducted without the assistance of TIM.

  5. Aspects of Synthetic Vision Display Systems and the Best Practices of the NASA's SVS Project

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Kramer, Lynda J.; Jones, Denise R.; Young, Steven D.; Arthur, Jarvis J.; Prinzel, Lawrence J.; Glaab, Louis J.; Harrah, Steven D.; Parrish, Russell V.

    2008-01-01

    NASA's Synthetic Vision Systems (SVS) Project conducted research aimed at eliminating visibility-induced errors and low visibility conditions as causal factors in civil aircraft accidents while enabling the operational benefits of clear day flight operations regardless of actual outside visibility. SVS takes advantage of many enabling technologies to achieve this capability including, for example, the Global Positioning System (GPS), data links, radar, imaging sensors, geospatial databases, advanced display media and three dimensional video graphics processors. Integration of these technologies to achieve the SVS concept provides pilots with high-integrity information that improves situational awareness with respect to terrain, obstacles, traffic, and flight path. This paper attempts to emphasize the system aspects of SVS - true systems, rather than just terrain on a flight display - and to document from an historical viewpoint many of the best practices that evolved during the SVS Project from the perspective of some of the NASA researchers most heavily involved in its execution. The Integrated SVS Concepts are envisagements of what production-grade Synthetic Vision systems might, or perhaps should, be in order to provide the desired functional capabilities that eliminate low visibility as a causal factor to accidents and enable clear-day operational benefits regardless of visibility conditions.

  6. Design and testing of a dual-band enhanced vision system

    NASA Astrophysics Data System (ADS)

    Way, Scott P.; Kerr, Richard; Imamura, Joseph J.; Arnoldy, Dan; Zeylmaker, Dick; Zuro, Greg

    2003-09-01

    An effective enhanced vision system must operate over a broad spectral range in order to offer a pilot an optimized scene that includes runway background as well as airport lighting and aircraft operations. The large dynamic range of intensities of these images is best handled with separate imaging sensors. The EVS 2000 is a patented dual-band Infrared Enhanced Vision System (EVS) utilizing image fusion concepts. It has the ability to provide a single image from uncooled infrared imagers combined with SWIR, NIR or LLLTV sensors. The system is designed to provide commercial and corporate airline pilots with improved situational awareness at night and in degraded weather conditions but can also be used in a variety of applications where the fusion of dual band or multiband imagery is required. A prototype of this system was recently fabricated and flown on the Boeing Advanced Technology Demonstrator 737-900 aircraft. This paper will discuss the current EVS 2000 concept, show results taken from the Boeing Advanced Technology Demonstrator program, and discuss future plans for the fusion system.

  7. A vision-based dynamic rotational angle measurement system for large civil structures.

    PubMed

    Lee, Jong-Jae; Ho, Hoai-Nam; Lee, Jong-Han

    2012-01-01

    In this paper, we propose a vision-based rotational angle measurement system for large-scale civil structures. Although several rotation angle measurement systems were introduced during the last decade, they often required complex and expensive equipment. Effective alternative solutions with high resolution are therefore in great demand. The proposed system consists of commercial PCs, commercial camcorders, low-cost frame grabbers, and a wireless LAN router. The rotation angle is calculated using image processing techniques with pre-measured calibration parameters. Several laboratory tests were conducted to verify the performance of the proposed system. Compared with a commercial rotation angle measurement system, the results showed very good agreement, with an error of less than 1.0% in all test cases. Furthermore, several tests were conducted on a five-story modal testing tower with a hybrid mass damper to experimentally verify the feasibility of the proposed system.
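
    A minimal sketch of the measurement principle: track two high-contrast targets fixed to the structure and convert the orientation of the line joining them into a rotation angle. The template-matching tracker and the reference-angle handling below are illustrative stand-ins for the paper's image processing and pre-measured calibration parameters.

```python
# Rotation from two tracked targets: the angle of the line joining the
# targets, relative to its angle in a reference frame. Template matching
# is an illustrative tracker; calibration handling is simplified.
import cv2
import numpy as np

def locate(frame, template):
    """Center of the frame region best matching the target template."""
    res = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, top_left = cv2.minMaxLoc(res)
    h, w = template.shape[:2]
    return np.array([top_left[0] + w / 2.0, top_left[1] + h / 2.0])

def rotation_deg(frame, tmpl_a, tmpl_b, ref_angle_deg):
    """Structure rotation (degrees) relative to the reference frame."""
    pa, pb = locate(frame, tmpl_a), locate(frame, tmpl_b)
    angle = np.degrees(np.arctan2(pb[1] - pa[1], pb[0] - pa[0]))
    return angle - ref_angle_deg
```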

  8. Night vision imaging system design, integration and verification in spacecraft vacuum thermal test

    NASA Astrophysics Data System (ADS)

    Shang, Yonghong; Wang, Jing; Gong, Zhe; Li, Xiyuan; Pei, Yifei; Bai, Tingzhu; Zhen, Haijing

    2015-08-01

    The purposes of a spacecraft vacuum thermal test are to characterize the thermal control systems of the spacecraft and its components in the cruise configuration and to allow for early retirement of risks associated with mission-specific and novel thermal designs. The orbital heat flux is simulated by infrared lamps, an infrared cage, or electric heaters. Because the infrared cage and electric heaters do not emit visible light, and infrared lamps emit only limited visible light, an ordinary camera cannot operate under the low luminous density of the test. Moreover, some special instruments such as satellite-borne infrared sensors are sensitive to visible light, so the scene cannot be illuminated during the test. To improve the ability to finely monitor the spacecraft and to document test progress under ultra-low luminous density, a night vision imaging system was designed and integrated by BISEE. The system consists of a high-gain image-intensifier ICCD camera, an assistant luminance system, a glare protection system, a thermal control system, and a computer control system. Multi-frame accumulation is adopted for high-quality target detection and image recognition during the test. The optical, mechanical, and electrical systems are designed and integrated to be highly adaptable to the vacuum environment. A molybdenum/polyimide thin-film electric heater controls the temperature of the ICCD camera. Performance validation tests showed that the system can operate in a vacuum thermal environment of 1.33×10⁻³ Pa and a 100 K shroud temperature in the space environment simulator, and its working temperature was maintained at 5° during the two-day test. The night vision imaging system achieves a resolving power of 60 lp/mm.
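
    The multi-frame accumulation mentioned above is, at its core, temporal averaging: accumulating N frames of a static scene raises the signal-to-noise ratio by roughly sqrt(N) at the cost of temporal resolution. The sketch below shows straight N-frame accumulation plus a running exponential variant; both are generic illustrations rather than the system's exact processing.

```python
# Temporal accumulation for low-light video: plain N-frame averaging of
# a static scene, plus a running exponential variant.
import numpy as np

def accumulate(frames):
    """Average N aligned frames; SNR grows roughly with sqrt(N)."""
    stack = np.stack([f.astype(np.float32) for f in frames])
    return stack.mean(axis=0)

class RunningAccumulator:
    """Exponential moving average, useful when frames arrive as a stream."""
    def __init__(self, alpha=0.1):
        self.alpha, self.state = alpha, None

    def update(self, frame):
        f = frame.astype(np.float32)
        self.state = f if self.state is None else (
            self.alpha * f + (1.0 - self.alpha) * self.state)
        return self.state
```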

  9. A binocular machine vision system for non-melanoma skin cancer 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Gorpas, Dimitris S.; Politopoulos, Kostas; Alexandratou, Eleni; Yova, Dido

    2006-02-01

    Computer vision advancements have until now not achieved accurate 3D reconstruction of objects smaller than 1 cm in diameter. Although this problem is of great importance in dermatology for non-melanoma skin cancer (NMSC) diagnosis and therapy, it has not yet been solved. This paper describes the development of a novel volumetric method for NMSC animal model tumors, using a binocular vision system. Monitoring NMSC tumor volume changes during PDT provides important information for assessing therapeutic progress and the efficiency of the applied drug. The vision system was designed taking into account the target size and the required flexibility. By using high-resolution cameras with telecentric lenses, most distortion factors were reduced significantly. Furthermore, z-axis movement was possible without requiring recalibration, in contrast to wide-angle lenses. Calibration was achieved by means of an adapted photogrammetric technique; the required time for calibrating both cameras was less than a minute. To further improve accuracy, a structured light projector was used. The captured stereo-pair images were processed with modified morphological filters to improve background contrast and minimize noise. Conjugate points were determined via maximum correlation values and region properties, thus decreasing the computational cost significantly. The 3D reconstruction algorithm has been assessed with objects of known volume and applied to animal model tumors of less than 0.6 cm diameter. The achieved precision was very high, with a standard deviation of 0.0313 mm. The robustness of our system is based on the overall approach and on the size of the targets.
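
    The conjugate-point determination via maximum correlation can be sketched as a windowed normalized cross-correlation search along the epipolar band. Window and search-band sizes below are illustrative; the paper additionally constrains matches by region properties.

```python
# Sketch of conjugate-point search: for a feature patch in the left
# image, take the right-image position maximizing normalized
# cross-correlation within a search band (rectified images assumed).
import cv2
import numpy as np

def conjugate_point(left, right, x, y, win=15, band=60):
    """Best right-image match for left-image point (x, y)."""
    h = win // 2
    patch = left[y - h:y + h + 1, x - h:x + h + 1]
    # search along the same row, up to `band` pixels to the left
    x0 = max(x - band, h)
    strip = right[y - h:y + h + 1, x0 - h:x + h + 1]
    scores = cv2.matchTemplate(strip, patch, cv2.TM_CCORR_NORMED)
    best = int(np.argmax(scores))
    return x0 + best, y       # conjugate point estimate
```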

  10. Visual system plasticity in mammals: the story of monocular enucleation-induced vision loss

    PubMed Central

    Nys, Julie; Scheyltjens, Isabelle; Arckens, Lutgarde

    2015-01-01

    The groundbreaking work of Hubel and Wiesel in the 1960s on ocular dominance plasticity instigated many studies of the visual system of mammals, enriching our understanding of how the development of its structure and function depends on high-quality visual input through both eyes. These studies have mainly employed lid suturing, dark rearing and eye patching applied to different species to reduce or impair visual input, and have created extensive knowledge on binocular vision. However, not all aspects and types of plasticity in the visual cortex have been covered in full detail. In that regard, a more drastic deprivation method like enucleation, leading to complete vision loss, appears useful, as it has more widespread effects on the afferent visual pathway and even on non-visual brain regions. One-eyed vision due to monocular enucleation (ME) profoundly affects the contralateral retinorecipient subcortical and cortical structures, thereby creating a powerful means to investigate cortical plasticity phenomena in which binocular competition has no vote. In this review, we will present current knowledge about the specific application of ME as an experimental tool to study visual and cross-modal brain plasticity and compare early postnatal stages up into adulthood. The structural and physiological consequences of this type of extensive sensory loss as documented and studied in several animal species and human patients will be discussed. We will summarize how ME studies have been instrumental to our current understanding of the differentiation of sensory systems and how the structure and function of cortical circuits in mammals are shaped in response to such an extensive alteration in experience. In conclusion, we will highlight future perspectives and the clinical relevance of adding ME to the list of more longstanding deprivation models in visual system research. PMID:25972788

  11. Gesture therapy: a vision-based system for upper extremity stroke rehabilitation.

    PubMed

    Sucar, L; Luis, Roger; Leder, Ron; Hernandez, Jorge; Sanchez, Israel

    2010-01-01

    Stroke is the world's main cause of motor and cognitive disabilities requiring therapy. It is therefore important to develop rehabilitation technology that allows individuals who have suffered a stroke to practice intensive movement training without the expense of an always-present therapist. We have developed a low-cost vision-based system that allows stroke survivors to practice arm movement exercises at home or at the clinic, with periodic interactions with a therapist. The system integrates a virtual environment for facilitating repetitive movement training with computer vision algorithms that track the hand of a patient, using an inexpensive camera and a personal computer. This system, called Gesture Therapy, includes a gripper with a pressure sensor to incorporate hand and finger rehabilitation, and it tracks the head of the patient to detect and avoid trunk compensation. It has been evaluated in a controlled clinical trial at the National Institute for Neurology and Neurosurgery in Mexico City, in comparison with conventional occupational therapy. In this paper we describe the latest version of the Gesture Therapy system and summarize the results of the clinical trial.

  12. Laser cutting of irregular shape object based on stereo vision laser galvanometric scanning system

    NASA Astrophysics Data System (ADS)

    Qi, Li; Zhang, Yixin; Wang, Shun; Tang, Zhiqiang; Yang, Huan; Zhang, Xuping

    2015-05-01

    Irregularly shaped objects with different three-dimensional (3D) appearances are difficult to shape into a customized uniform pattern with current laser machining approaches. A laser galvanometric scanning system (LGS) could be a potential candidate, since it can easily achieve path-adjustable laser shaping. However, without knowing the actual 3D topography of the object, the processing result may still suffer from 3D shape distortion. It is desirable to have a versatile auxiliary tool capable of generating 3D-adjusted laser processing paths by measuring the 3D geometry of irregularly shaped objects. This paper proposes the stereo vision laser galvanometric scanning system (SLGS), which takes advantage of both the stereo vision solution and the conventional LGS system. The 3D geometry of the object obtained by the stereo cameras is used to guide the scanning galvanometers for 3D-shape-adjusted laser processing. In order to achieve precise visually servoed laser fabrication, these two independent components are integrated through a system calibration method using a plastic thin-film target. The flexibility of SLGS has been experimentally demonstrated by cutting duck feathers for badminton shuttle manufacture.

  13. An Integrated Vision-Based System for Spacecraft Attitude and Topology Determination for Formation Flight Missions

    NASA Technical Reports Server (NTRS)

    Rogers, Aaron; Anderson, Kalle; Mracek, Anna; Zenick, Ray

    2004-01-01

    With the space industry's increasing focus upon multi-spacecraft formation flight missions, the ability to precisely determine system topology and the orientation of member spacecraft relative to both inertial space and each other is becoming a critical design requirement. Topology determination in satellite systems has traditionally made use of GPS or ground uplink position data for low Earth orbits, or, alternatively, inter-satellite ranging between all formation pairs. While these techniques work, they are not ideal for extension to interplanetary missions or to large fleets of decentralized, mixed-function spacecraft. The Vision-Based Attitude and Formation Determination System (VBAFDS) represents a novel solution to both the navigation and topology determination problems with an integrated approach that combines a miniature star tracker with a suite of robust processing algorithms. By combining a single range measurement with vision data to resolve complete system topology, the VBAFDS design represents a simple, resource-efficient solution that is not constrained to certain Earth orbits or formation geometries. In this paper, analysis and design of the VBAFDS integrated guidance, navigation and control (GN&C) technology will be discussed, including hardware requirements, algorithm development, and simulation results in the context of potential mission applications.

  14. Gesture Therapy: A Vision-Based System for Arm Rehabilitation after Stroke

    NASA Astrophysics Data System (ADS)

    Sucar, L. Enrique; Azcárate, Gildardo; Leder, Ron S.; Reinkensmeyer, David; Hernández, Jorge; Sanchez, Israel; Saucedo, Pedro

    Each year millions of people in the world survive a stroke; in the U.S. alone the figure is over 600,000 people per year. Movement impairments after stroke are typically treated with intensive, hands-on physical and occupational therapy for several weeks after the initial injury. However, due to economic pressures, stroke patients are receiving less therapy and going home sooner, so the potential benefit of the therapy is not completely realized. Thus, it is important to develop rehabilitation technology that allows individuals who have suffered a stroke to practice intensive movement training without the expense of an always-present therapist. Current solutions are too expensive, as they require a robotic system for rehabilitation. We have developed a low-cost computer vision system that allows individuals with stroke to practice arm movement exercises at home or at the clinic, with periodic interactions with a therapist. The system integrates a web-based virtual environment for facilitating repetitive movement training with state-of-the-art computer vision algorithms that track the hand of a patient and obtain its 3-D coordinates, using two inexpensive cameras and a conventional personal computer. An initial prototype of the system has been evaluated in a pilot clinical study with promising results.
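
    Once the hand is tracked in both views, its 3-D coordinates follow from standard two-view triangulation. The sketch below assumes the projection matrices P1 and P2 are known from a prior stereo calibration; it illustrates the geometry rather than the system's specific tracker.

```python
# Two-view triangulation of the tracked hand; P1 and P2 are the 3x4
# projection matrices from a prior stereo calibration (assumed given).
import cv2
import numpy as np

def hand_3d(p_left, p_right, P1, P2):
    """p_left, p_right: (2,) pixel positions of the hand in each view."""
    pts4 = cv2.triangulatePoints(P1, P2,
                                 np.float32(p_left).reshape(2, 1),
                                 np.float32(p_right).reshape(2, 1))
    return (pts4[:3] / pts4[3]).ravel()     # homogeneous -> Euclidean
```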

  15. Down-to-the-runway enhanced flight vision system (EFVS) approach test results

    NASA Astrophysics Data System (ADS)

    McKinley, John B.; Heidhausen, Eric; Cramer, James A.; Krone, Norris J., Jr.

    2008-04-01

    Flight tests were conducted at Cambridge-Dorchester Airport (KCGE) and Easton Municipal Airport / Newnam Field (KESN) in a Cessna 402B aircraft using a head-up display (HUD) and a Kollsman Enhanced Vision System (EVS-I) infrared camera. These tests were sponsored by the MITRE Corporation's Center for Advanced Aviation System Development (CAASD) and the Federal Aviation Administration. Imagery from the EVS-I infrared camera, HUD guidance cues, and out-the-window video were each separately recorded at an engineering workstation for each approach, roll-out, and taxi operation. The EVS-I imagery was displayed on the HUD with guidance cues generated by the mission computer. Inertial flight path data were also recorded separately. Enhanced Flight Vision System (EFVS) approaches were conducted from the final approach fix through runway flare, touchdown, roll-out, and taxi using the HUD and EVS-I sensor as the only visual reference. Flight conditions included a two-pilot crew, day, night, non-precision course-offset approaches, an ILS approach, crosswind approaches, and missed approaches. Results confirmed the feasibility of safely conducting down-to-the-runway precision approaches in low visibility to runways with and without precision approach systems, when consideration is given to proper aircraft instrumentation, pilot training, and acceptable procedures. Operational benefits include improved runway occupancy rates and reduced delays and diversions.

  16. Advanced electro-mechanical micro-shutters for thermal infrared night vision imaging and targeting systems

    NASA Astrophysics Data System (ADS)

    Durfee, David; Johnson, Walter; McLeod, Scott

    2007-04-01

    Uncooled microbolometer sensors used in modern infrared night vision systems such as driver vehicle enhancement (DVE) or thermal weapon sights (TWS) require a mechanical shutter. Although much consideration is given to the performance requirements of the sensor, supporting electronic components, and imaging optics, the shutter technology required to survive in combat is typically the last consideration in the system design. Electro-mechanical shutters used in military IR applications must be reliable at temperature extremes from -40°C to +70°C. They must be extremely lightweight while withstanding the high vibration and shock forces associated with systems mounted in military combat vehicles, weapon telescopic sights, or downed unmanned aerial vehicles (UAVs). Electro-mechanical shutters must have minimal power consumption and contain circuitry integrated into the shutter to manage battery power while simultaneously adapting to changes in electrical component operating parameters caused by extreme temperature variations. The technology required to produce a miniature electro-mechanical shutter with these capabilities that fits into a rifle scope requires innovations in mechanical design, materials science, and electronics. This paper describes a new miniature electro-mechanical shutter technology with integrated power management electronics, designed for extreme-service infrared night vision systems.

  19. Enhanced and synthetic vision system for autonomous all weather approach and landing

    NASA Astrophysics Data System (ADS)

    Korn, Bernd R.

    2007-04-01

    Within its research project ADVISE-PRO (Advanced visual system for situation awareness enhancement - prototype, 2003-2006), presented in this contribution, DLR has combined elements of Enhanced Vision and Synthetic Vision into one integrated system to allow low-visibility operations independently of the ground infrastructure. The core element of this system is the adequate fusion of all information that is available on board. This fusion process is organized in a hierarchical manner. The most important subsystems are a) sensor-based navigation, which determines the aircraft's position relative to the runway by automatically analyzing sensor data (MMW, IR, radar altimeter) without using either (D)GPS or precise knowledge of the airport geometry; b) integrity monitoring of navigation data and terrain data, which verifies on-board navigation data ((D)GPS + INS) against sensor data (MMW radar, IR sensor, radar altimeter) and airport/terrain databases; c) an obstacle detection system; and finally d) a consistent description of the situation and a corresponding HMI for the pilot.

  20. Real-time Enhancement, Registration, and Fusion for a Multi-Sensor Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.

    2006-01-01

    Over the last few years NASA Langley Research Center (LaRC) has been developing an Enhanced Vision System (EVS) to aid pilots while flying in poor visibility conditions. The EVS captures imagery using two infrared video cameras. The cameras are placed in an enclosure that is mounted and flown forward-looking underneath the NASA LaRC ARIES 757 aircraft. The data streams from the cameras are processed in real time and displayed on monitors on board the aircraft. With proper processing the camera system can provide better-than-human-observed imagery, particularly during poor visibility conditions. However, achieving this goal requires several different stages of processing, including enhancement, registration, and fusion, and specialized processing hardware for real-time performance. We are using a real-time implementation of the Retinex algorithm for image enhancement, affine transformations for registration, and weighted sums to perform fusion. All of the algorithms are executed on a single TI DM642 digital signal processor (DSP) clocked at 720 MHz. The image processing components were added to the EVS system, tested, and demonstrated during flight tests in August and September of 2005. In this paper we briefly discuss the EVS image processing hardware and algorithms. We then discuss implementation issues and show examples of the results obtained during flight tests. Keywords: enhanced vision system, image enhancement, retinex, digital signal processing, sensor fusion
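
    The three stages named above (enhancement, registration, fusion) map onto a few lines of image code. The sketch below uses a single-scale retinex, an affine warp, and a weighted sum; it illustrates the structure of the pipeline under assumed parameters, not the flight system's DSP implementation, and the transform A is assumed known from calibration.

      import numpy as np
      import cv2

      def retinex(img, sigma=30.0):
          # Single-scale retinex: log(image) - log(blurred image).
          f = img.astype(np.float32) + 1.0
          out = np.log(f) - np.log(cv2.GaussianBlur(f, (0, 0), sigma))
          return cv2.normalize(out, None, 0, 255,
                               cv2.NORM_MINMAX).astype(np.uint8)

      def fuse(ir_a, ir_b, A, w=0.5):
          # Register ir_b onto ir_a with a known 2x3 affine transform A,
          # then combine the two enhanced grayscale images by weighted sum.
          h, wpx = ir_a.shape
          b_reg = cv2.warpAffine(ir_b, A, (wpx, h))
          return cv2.addWeighted(retinex(ir_a), w, retinex(b_reg), 1.0 - w, 0)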

  1. Microwave vision for robots

    NASA Technical Reports Server (NTRS)

    Lewandowski, Leon; Struckman, Keith

    1994-01-01

    Microwave Vision (MV), a concept originally developed in 1985, could play a significant role in the solution to robotic vision problems. Originally our Microwave Vision concept was based on a pattern matching approach employing computer based stored replica correlation processing. Artificial Neural Network (ANN) processor technology offers an attractive alternative to the correlation processing approach, namely the ability to learn and to adapt to changing environments. This paper describes the Microwave Vision concept, some initial ANN-MV experiments, and the design of an ANN-MV system that has led to a second patent disclosure in the robotic vision field.

  2. A synchronized multipoint vision-based system for displacement measurement of civil infrastructures.

    PubMed

    Ho, Hoai-Nam; Lee, Jong-Han; Park, Young-Soo; Lee, Jong-Jae

    2012-01-01

    This study presents an advanced multipoint vision-based system for dynamic displacement measurement of civil infrastructures. The proposed system consists of commercial camcorders, frame grabbers, low-cost PCs, and a wireless LAN access point. The images of target panels attached to a structure are captured by camcorders and streamed into the PCs via frame grabbers. Then the displacements of the targets are calculated using image processing techniques with premeasured calibration parameters. The system can simultaneously support two camcorders at the subsystem level for dynamic real-time displacement measurement. The data of each subsystem, including system time, are wirelessly transferred from the subsystem PCs to the master PC and vice versa. Furthermore, a synchronization process is implemented to ensure time synchronization between the master PC and the subsystem PCs. Several shaking table tests were conducted to verify the effectiveness of the proposed system, and the results showed very good agreement with those from a conventional sensor, with an error of less than 2%.
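
    A sketch of the per-target computation: locate the target panel in each frame and convert its pixel offset into physical displacement with a premeasured calibration factor. The template-matching step and the scale factor are illustrative assumptions; the abstract does not spell out the exact image-processing technique.

      import cv2

      def target_displacement(frame, template, ref_xy, mm_per_px):
          # Find the target panel in the current frame.
          res = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
          _, _, _, best = cv2.minMaxLoc(res)        # (x, y) of best match
          # Pixel offset from the reference position -> millimetres.
          dx = (best[0] - ref_xy[0]) * mm_per_px
          dy = (best[1] - ref_xy[1]) * mm_per_px
          return dx, dy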

  3. A real-time surface inspection system for precision steel balls based on machine vision

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Ji; Tsai, Jhy-Cherng; Hsu, Ya-Chen

    2016-07-01

    Precision steel balls are one of the most fundamental components of motion and power transmission parts, and they are widely used in industrial machinery and the automotive industry. As precision balls are crucial for the quality of these products, there is an urgent need to develop a fast and robust system for inspecting defects of precision steel balls. In this paper, a real-time system for inspecting surface defects of precision steel balls is developed based on machine vision. The developed system integrates a dual-lighting system, an unfolding mechanism and inspection algorithms for real-time signal processing and defect detection. The developed system is tested at a feeding speed of 4 pcs/s with a detection rate of 99.94% and an error rate of 0.10%. The minimum detectable surface flaw area is 0.01 mm², which meets the requirement for inspecting ISO grade 100 precision steel balls.

  4. A real-time surface inspection system for precision steel balls based on machine vision

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Ji; Tsai, Jhy-Cherng; Hsu, Ya-Chen

    2016-07-01

    Precision steel balls are one of the most fundamental components of motion and power transmission parts, and they are widely used in industrial machinery and the automotive industry. As precision balls are crucial for the quality of these products, there is an urgent need to develop a fast and robust system for inspecting defects of precision steel balls. In this paper, a real-time system for inspecting surface defects of precision steel balls is developed based on machine vision. The developed system integrates a dual-lighting system, an unfolding mechanism and inspection algorithms for real-time signal processing and defect detection. The developed system is tested at a feeding speed of 4 pcs/s with a detection rate of 99.94% and an error rate of 0.10%. The minimum detectable surface flaw area is 0.01 mm², which meets the requirement for inspecting ISO grade 100 precision steel balls.

  5. Development of an aviator's helmet-mounted night-vision goggle system

    NASA Astrophysics Data System (ADS)

    Wilson, Gerry H.; McFarlane, Robert J.

    1990-10-01

    Helmet Mounted Systems (HMS) must be lightweight, balanced, and compatible with life support and head protection assemblies. This paper discusses the design of one particular HMS, the GEC Ferranti NITE-OP/NIGHTBIRD aviator's Night Vision Goggle (NVG), developed under contracts to the Ministry of Defence for all three services in the United Kingdom (UK) for rotary-wing and fast-jet aircraft. The existing equipment constraints and the safety, human-factors, and optical performance requirements are discussed before the design solution, arrived at after consideration of the material and manufacturing options, is presented.

  6. Automatic inspection of analog and digital meters in a robot vision system

    NASA Technical Reports Server (NTRS)

    Trivedi, Mohan M.; Marapane, Suresh; Chen, Chuxin

    1988-01-01

    A critical limitation of most of the robots utilized in industrial environments arises from their inability to utilize sensory feedback. This forces robot operation into totally preprogrammed or teleoperation modes. In order to endow the new generation of robots with higher levels of autonomy, techniques for sensing their work environments and for accurate, efficient analysis of the sensory data must be developed. In this paper, the detailed development of vision system modules for inspecting various types of meters, both analog and digital, encountered in robotic inspection and manipulation tasks is described. These modules are tested using industrial robots having multisensory input capability.

  7. IR measurements and image processing for enhanced-vision systems in civil aviation

    NASA Astrophysics Data System (ADS)

    Beier, Kurt R.; Fries, Jochen; Mueller, Rupert M.; Palubinskas, Gintautas

    2001-08-01

    A series of IR measurements with a FLIR (Forward Looking Infrared) system during landing approaches to various airports has been performed. A real-time image processing procedure to detect and identify the runway and possible obstacles is discussed and demonstrated. It is based on IR image segmentation and information derived from synthetic vision data. The information extracted from the IR images will be combined with the appropriate information from a MMW (millimeter wave) radar sensor in a subsequent fusion processor. This fused information aims to increase the pilot's situational awareness.

  8. A Vision-Based System for Object Identification and Information Retrieval in a Smart Home

    NASA Astrophysics Data System (ADS)

    Grech, Raphael; Monekosso, Dorothy; de Jager, Deon; Remagnino, Paolo

    This paper describes a handheld device developed to assist people in locating and retrieving information about objects in a home. The system developed is a standalone device to assist persons with memory impairments, such as people suffering from Alzheimer's disease. A second application is object detection and localization for a mobile robot operating in an ambient assisted living environment. The device relies on computer vision techniques to locate a tagged object situated in the environment. The tag is a 2D color printed pattern with a detection range and a field of view such that the user may point from a distance of over 1 meter.

  9. Integration of a Multi-Camera Vision System and Strapdown Inertial Navigation System (SDINS) with a Modified Kalman Filter

    PubMed Central

    Parnian, Neda; Golnaraghi, Farid

    2010-01-01

    This paper describes the development of a modified Kalman filter to integrate a multi-camera vision system and a strapdown inertial navigation system (SDINS) for tracking a hand-held moving device in slow or nearly static applications over extended periods of time. In this algorithm, the magnitudes of the changes in position and velocity are estimated and then added to the previous estimates of the position and velocity, respectively. The experimental results of the hybrid vision/SDINS design show that the position error of the tool tip in all directions is about one millimeter RMS. The proposed Kalman filter removes the effect of the gravitational force in the state-space model. As a result, the error is eliminated and the estimated position is smoother and ripple-free. PMID:22219667

  10. Integration of a multi-camera vision system and strapdown inertial navigation system (SDINS) with a modified Kalman filter.

    PubMed

    Parnian, Neda; Golnaraghi, Farid

    2010-01-01

    This paper describes the development of a modified Kalman filter to integrate a multi-camera vision system and a strapdown inertial navigation system (SDINS) for tracking a hand-held moving device in slow or nearly static applications over extended periods of time. In this algorithm, the magnitudes of the changes in position and velocity are estimated and then added to the previous estimates of the position and velocity, respectively. The experimental results of the hybrid vision/SDINS design show that the position error of the tool tip in all directions is about one millimeter RMS. The proposed Kalman filter removes the effect of the gravitational force in the state-space model. As a result, the error is eliminated and the estimated position is smoother and ripple-free.
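
    A minimal sketch of the incremental idea both records describe: the filter estimates the change in position and velocity each step and adds it to the previous estimate. The matrices and the zero-increment prediction below are assumptions for illustration; the paper's actual state-space model differs in detail.

      import numpy as np

      def delta_kf_step(x_prev, P, dz, F, H, Q, R):
          # Predict: the expected increment is zero for a slow/static tool.
          dx_pred = np.zeros_like(x_prev)
          P = F @ P @ F.T + Q
          # Update with the measured change dz (vision minus SDINS).
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          dx = dx_pred + K @ (dz - H @ dx_pred)
          P = (np.eye(len(x_prev)) - K @ H) @ P
          # Accumulate: add the estimated increment to the old estimate.
          return x_prev + dx, P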

  11. Computer vision

    SciTech Connect

    Not Available

    1982-01-01

    This paper discusses material from areas such as artificial intelligence, psychology, computer graphics, and image processing. The intent is to assemble a selection of this material in a form that will serve both as a senior/graduate-level academic text and as a useful reference to those building vision systems. This book has a strong artificial intelligence flavour, emphasising the belief that both the intrinsic image information and the internal model of the world are important in successful vision systems. The book is organised into four parts, based on descriptions of objects at four different levels of abstraction. These are: generalised images (images and image-like entities); segmented images (images organised into subimages that are likely to correspond to interesting objects); geometric structures (quantitative models of image and world structures); and relational structures (complex symbolic descriptions of image and world structures). The book contains author and subject indexes.

  12. Towards an Autonomous Vision-Based Unmanned Aerial System against Wildlife Poachers

    PubMed Central

    Olivares-Mendez, Miguel A.; Fu, Changhong; Ludivig, Philippe; Bissyandé, Tegawendé F.; Kannan, Somasundar; Zurad, Maciej; Annaiyan, Arun; Voos, Holger; Campoy, Pascual

    2015-01-01

    Poaching is an illegal activity that remains out of control in many countries. Based on the 2014 report of the United Nations and Interpol, the illegal trade in global wildlife and natural resources amounts to nearly $213 billion every year, which is even helping to fund armed conflicts. Poaching activities around the world are further pushing many animal species to the brink of extinction. Unfortunately, the traditional methods of fighting poachers are not enough, hence the demand for more efficient approaches. In this context, the use of new technologies in sensors and algorithms, as well as aerial platforms, is crucial to counter the sharp increase in poaching activity in recent years. Our work is focused on the use of vision sensors on UAVs for the detection and tracking of animals and poachers, as well as the use of such sensors to control quadrotors during autonomous vehicle following and autonomous landing. PMID:26703597

  13. Towards an Autonomous Vision-Based Unmanned Aerial System against Wildlife Poachers.

    PubMed

    Olivares-Mendez, Miguel A; Fu, Changhong; Ludivig, Philippe; Bissyandé, Tegawendé F; Kannan, Somasundar; Zurad, Maciej; Annaiyan, Arun; Voos, Holger; Campoy, Pascual

    2015-01-01

    Poaching is an illegal activity that remains out of control in many countries. Based on the 2014 report of the United Nations and Interpol, the illegal trade in global wildlife and natural resources amounts to nearly $213 billion every year, which is even helping to fund armed conflicts. Poaching activities around the world are further pushing many animal species to the brink of extinction. Unfortunately, the traditional methods of fighting poachers are not enough, hence the demand for more efficient approaches. In this context, the use of new technologies in sensors and algorithms, as well as aerial platforms, is crucial to counter the sharp increase in poaching activity in recent years. Our work is focused on the use of vision sensors on UAVs for the detection and tracking of animals and poachers, as well as the use of such sensors to control quadrotors during autonomous vehicle following and autonomous landing. PMID:26703597

  14. Towards an Autonomous Vision-Based Unmanned Aerial System against Wildlife Poachers.

    PubMed

    Olivares-Mendez, Miguel A; Fu, Changhong; Ludivig, Philippe; Bissyandé, Tegawendé F; Kannan, Somasundar; Zurad, Maciej; Annaiyan, Arun; Voos, Holger; Campoy, Pascual

    2015-12-12

    Poaching is an illegal activity that remains out of control in many countries. Based on the 2014 report of the United Nations and Interpol, the illegal trade in global wildlife and natural resources amounts to nearly $213 billion every year, which is even helping to fund armed conflicts. Poaching activities around the world are further pushing many animal species to the brink of extinction. Unfortunately, the traditional methods of fighting poachers are not enough, hence the demand for more efficient approaches. In this context, the use of new technologies in sensors and algorithms, as well as aerial platforms, is crucial to counter the sharp increase in poaching activity in recent years. Our work is focused on the use of vision sensors on UAVs for the detection and tracking of animals and poachers, as well as the use of such sensors to control quadrotors during autonomous vehicle following and autonomous landing.

  15. Real-time machine vision system using FPGA and soft-core processor

    NASA Astrophysics Data System (ADS)

    Malik, Abdul Waheed; Thörnberg, Benny; Meng, Xiaozhou; Imran, Muhammad

    2012-06-01

    This paper presents a machine vision system for real-time computation of the distance and angle of a camera from reference points in the environment. Image pre-processing, component labeling and feature extraction modules were modeled at Register Transfer (RT) level and synthesized for implementation on field programmable gate arrays (FPGA). The extracted image component features were sent from the hardware modules to a soft-core processor, MicroBlaze, for computation of distance and angle. A CMOS imaging sensor operating at a clock frequency of 27 MHz was used in our experiments to produce a video stream at the rate of 75 frames per second. The image component labeling and feature extraction modules run in parallel with a total latency of 13 ms. The MicroBlaze was interfaced with the component labeling and feature extraction modules through a Fast Simplex Link (FSL). The latency for computing the distance and angle of the camera from the reference points was measured to be 2 ms on the MicroBlaze, running at a 100 MHz clock frequency. In this paper, we present the performance analysis, device utilization and power consumption for the designed system. The proposed FPGA-based machine vision system has a high frame rate, low latency and much lower power consumption than commercially available smart camera solutions.
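
    The abstract does not detail what the MicroBlaze computes beyond "distance and angle", so the sketch below shows one plausible reading: a pinhole-model estimate from the pixel width and centroid of a reference marker of known size. The focal length, marker size, and image centre are invented placeholders.

      import math

      F_PX = 900.0       # focal length in pixels (placeholder)
      MARKER_MM = 50.0   # true marker width (placeholder)

      def distance_and_angle(cx, width_px, img_cx=320.0):
          # Similar triangles: distance = f * true_width / image_width.
          distance_mm = F_PX * MARKER_MM / width_px
          # Horizontal bearing of the marker relative to the optical axis.
          angle_deg = math.degrees(math.atan2(cx - img_cx, F_PX))
          return distance_mm, angle_deg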

  16. FLILO (flying infrared for low-level operations): an enhanced vision system

    NASA Astrophysics Data System (ADS)

    Guell, Jeff J.

    2000-06-01

    FLILO is an Enhanced Vision System (EVS) that enhances situational awareness for safe low-level/night-time and moderate-weather flight operations (including take-off/landing, taxiing, approaches, drop zone identification, Short Austere Air Field operations, etc.) by providing electronic, real-time vision to the pilots. It consists of a series of imaging sensors, an Image Processor and a wide field-of-view (FOV) see-through Helmet Mounted Display (HMD) integrated with a Head Tracker. The current solution for safe night-time/low-level military flight operations is the use of the Turret-FLIR (Forward-Looking InfraRed). This system requires an additional operator/crew member (navigator) who controls the turret's movement and relays the information to the pilots; the image is presented on a Head-Down Display. FLILO presents the information directly to the pilots on an HMD, so each pilot has an independent view controlled by their head position, while utilizing the same sensors, which are static and fixed to the aircraft structure. Since there are no moving parts, the system provides high reliability while remaining more affordable than the Turret-FLIR solution. FLILO does not require a ball turret, so there is no extra drag or range impact on the aircraft's performance. Furthermore, with future use of real-time multi-band/multi-sensor image fusion, FLILO is the right step towards obtaining safe autonomous landing guidance/0-0 flight operations capability.

  17. Enhancement of vision systems based on runway detection by image processing techniques

    NASA Astrophysics Data System (ADS)

    Gulec, N.; Sen Koktas, N.

    2012-06-01

    An explicit way of facilitating approach and landing operations of fixed-wing aircraft in degraded visual environments is to present a coherent image of the designated runway via vision systems, thereby increasing the situational awareness of the flight crew. Combined vision systems, in general, aim to provide a clear view of the aircraft exterior to the pilots using information from databases and imaging sensors. This study presents a novel method consisting of image-processing and tracking algorithms, which utilize information from navigation systems and databases along with images from daylight and infrared cameras, for the recognition and tracking of the designated runway throughout the approach and landing operation. Video data simulating the straight-in approach of an aircraft from an altitude of 5000 ft down to 100 ft is synthetically generated by a COTS tool. A diverse set of atmospheric conditions such as fog and low light levels is simulated in these videos. Detection rate (DR) and false alarm rate (FAR) are used as the primary performance metrics. The results are presented in a format where the performance metrics are compared against the altitude of the aircraft. Depending on the visual environment and the source of the video, the performance metrics reach up to 98% for DR and down to 5% for FAR.

  18. Solid state active/passive night vision imager using continuous-wave laser diodes and silicon focal plane arrays

    NASA Astrophysics Data System (ADS)

    Vollmerhausen, Richard H.

    2013-04-01

    Passive imaging offers covertness and low power, while active imaging provides longer range target acquisition without the need for natural or external illumination. This paper describes a focal plane array (FPA) concept that has the low noise needed for state-of-the-art passive imaging and the high-speed gating needed for active imaging. The FPA is used with highly efficient but low-peak-power laser diodes to create a night vision imager that has the size, weight, and power attributes suitable for man-portable applications. Video output is provided in both the active and passive modes. In addition, the active mode is Class 1 eye safe and is not visible to the naked eye or to night vision goggles.

  19. Developing Crew Health Care and Habitability Systems for the Exploration Vision

    NASA Technical Reports Server (NTRS)

    Laurini, Kathy; Sawin, Charles F.

    2006-01-01

    This paper will discuss the specific mission architectures associated with the NASA Exploration Vision and review the challenges and drivers associated with developing crew health care and habitability systems to manage human system risks. Crew health care systems must be provided to manage crew health within acceptable limits, as well as to respond to medical contingencies that may occur during exploration missions. Habitability systems must enable crew performance for the tasks necessary to support the missions. During the summer of 2005, NASA defined its exploration architecture, including blueprints for missions to the Moon and to Mars. These mission architectures require research and technology development to focus on the operational risks associated with each mission, as well as the risks to long-term astronaut health. This paper will review the highest priority risks associated with the various missions and discuss NASA's strategies and plans for performing the research and technology development necessary to manage the risks to acceptable levels.

  20. Alaskan flight trials of a synthetic vision system for instrument landings of a piston twin aircraft

    NASA Astrophysics Data System (ADS)

    Barrows, Andrew K.; Alter, Keith W.; Jennings, Chad W.; Powell, J. D.

    1999-07-01

    Stanford University has developed a low-cost prototype synthetic vision system and flight tested it onboard general aviation aircraft. The display aids pilots by providing an 'out the window' view, making visualization of the desired flight path a simple task. Predictor symbology provides guidance on straight and curved paths presented in a 'tunnel-in-the-sky' format. Based on commodity PC hardware to achieve low cost, the Tunnel Display system uses differential GPS (typically from Stanford prototype Wide Area Augmentation System hardware) for positioning and GPS-aided inertial sensors for attitude determination. The display has been flown onboard Piper Dakota and Beechcraft Queen Air aircraft at several different locations. This paper describes the system, its development, and flight trials culminating in tests in Alaska during the summer of 1998. Operational experience demonstrated the Tunnel Display's ability to increase flight-path following accuracy and situational awareness while easing the task of instrument flying.

  1. A simple machine vision-driven system for measuring optokinetic reflex in small animals.

    PubMed

    Shirai, Yoshihiro; Asano, Kenta; Takegoshi, Yoshihiro; Uchiyama, Shu; Nonobe, Yuki; Tabata, Toshihide

    2013-09-01

    The optokinetic reflex (OKR) is useful for monitoring the function of the visual and motor nervous systems. However, OKR measurement is not open to all because dedicated commercial equipment or detailed instructions for building in-house equipment are rarely offered. Here we describe the design of an easy-to-install/use yet reliable OKR measuring system, including a computer program to visually locate the pupil and a mathematical procedure to estimate the pupil azimuth from the location data. The pupil-locating program was created on a low-cost machine vision development platform, whose graphical user interface allows one to compose and operate the program without programming expertise. Our system located mouse pupils at a high success rate (~90%), estimated their azimuth precisely (~94%), and detected changes in OKR gain due to the pharmacological modulation of the cerebellar flocculi. The system would promote behavioral assessment in physiology, pharmacology, and genetics.
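
    The azimuth estimate mentioned above reduces to simple geometry if the eye is modelled as a sphere viewed head-on: a pupil displaced x from the eye centre sits at azimuth arcsin(x/r). The spherical-eye model is our simplifying assumption, and the radius and pixel scale below are placeholders, not values from the paper.

      import math

      def pupil_azimuth_deg(x_px, eye_center_px, px_per_mm, r_eye_mm=1.7):
          # Horizontal pupil offset from the eye centre, in millimetres.
          x_mm = (x_px - eye_center_px) / px_per_mm
          # Clamp to the valid arcsin domain before converting.
          s = max(-1.0, min(1.0, x_mm / r_eye_mm))
          return math.degrees(math.asin(s))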

  2. A vehicle photoelectric detection system based on guidance of machine vision

    NASA Astrophysics Data System (ADS)

    Wang, Yawei; Liu, Yu; Chen, Wei; Chen, Jing; Guo, Jia; Zhou, Lijun; Zheng, Haotian; Zhang, Xuantao

    2015-04-01

    A vehicle-mounted photoelectric detection system guided by machine vision is described in detail; it is composed of an electro-optic turret, a distributed perception module, a position and orientation system, and a data processing terminal. A target detection method used in the system, based on visual guidance, is also discussed in this paper. Based on initial alignment of the camera positions and precise alignment of the target location, the method achieves target acquisition and measurement by using the high-definition cameras of the distributed perception module installed around the vehicle, much like human eyes, to quickly guide the line of sight of the optoelectronic devices on the turret into the field of view of one camera and then carry out fine target alignment. Simulation results show that the method achieves intelligent dynamic guidance of the photoelectric detection system and improves detection efficiency and accuracy.

  3. Triangle orientation discrimination performance model for a multiband IR imaging system with human vision

    NASA Astrophysics Data System (ADS)

    Liu, Xin; Wang, Xiaorui; Zhang, Jianqi; Bai, Honggang

    2011-08-01

    In support of multiband imaging system performance forecasting, an equation-based triangle orientation discrimination (TOD) model is developed. Specifically, using the spectral characteristics of the test pattern, mathematical equations for predicting the TOD threshold of a system with a distributed fusion architecture in the IR spectral band are derived based on human vision with the "k/N" fusion rule, with emphasis on the impact of fusion on the threshold. Furthermore, a figure of merit Q related to the TOD calculation results is introduced to analyze the relation of the discrimination performance of the multiband imaging system to the size and spectral difference of the test pattern. Preliminary validation against experimental results suggests that the proposed model can provide a reasonable prediction of the performance of a multiband imaging system.
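
    The "k/N" fusion rule has a compact closed form when the N bands are treated as independent with equal per-band detection probability p (a simplifying assumption, not a claim from the paper): the fused detection probability is the binomial tail, sketched below.

      from math import comb

      def fused_detection_prob(p, k, n):
          # P(at least k of n independent bands detect the triangle).
          return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                     for i in range(k, n + 1))

      # Example: 2-of-3 fusion with 70% per-band probability -> ~0.784
      print(fused_detection_prob(0.7, 2, 3))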

  4. Commercial Flight Crew Decision-Making during Low-Visibility Approach Operations Using Fused Synthetic/Enhanced Vision Systems

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Bailey, Randall E.; Prinzel, Lawrence J., III

    2007-01-01

    NASA is investigating revolutionary crew-vehicle interface technologies that strive to proactively overcome aircraft safety barriers that would otherwise constrain the full realization of the next-generation air transportation system. A fixed-base piloted simulation experiment was conducted to evaluate the complementary use of Synthetic and Enhanced Vision technologies. Specific focus was placed on new techniques for integration and/or fusion of Enhanced and Synthetic Vision and its impact within a two-crew flight deck on the crew's decision-making process during low-visibility approach and landing operations. Overall, the experimental data showed that significant improvements in situation awareness, without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying. During non-normal operations, the ability of the crew to handle substantial navigational errors and runway incursions was neither improved nor adversely impacted by the display concepts. The addition of Enhanced Vision may not, unto itself, provide an improvement in runway incursion detection without being specifically tailored for this application. Existing enhanced vision system procedures were effectively used in the crew decision-making process during approach and missed approach operations, but having to forcibly transition from an excellent FLIR image to natural vision by 100 ft above field level was awkward for the pilot-flying.

  5. Calibration method for a vision guiding-based laser-tracking measurement system

    NASA Astrophysics Data System (ADS)

    Shao, Mingwei; Wei, Zhenzhong; Hu, Mengjie; Zhang, Guangjun

    2015-08-01

    Laser-tracking measurement systems (laser trackers) based on a vision-guiding device are widely used in industrial fields, and their calibration is important. As conventional methods typically have many disadvantages, such as difficult machining of the target and overdependence on the retroreflector, a novel calibration method is presented in this paper. The retroreflector, which is necessary in the normal calibration method, is unnecessary in our approach. As the laser beam is linear, points on the beam can be obtained with the help of a normal planar target. In this way, we can determine the function of a laser beam in the camera coordinate system, while its corresponding function in the laser-tracker coordinate system can be obtained from the encoder of the laser tracker. Clearly, when several groups of functions are confirmed, the rotation matrix can be solved from the direction vectors of the laser beams in the different coordinate systems. As the intersection of the laser beams is the origin of the laser-tracker coordinate system, the translation matrix can also be determined. Our proposed method not only achieves the calibration of a single laser-tracking measurement system but also provides a reference for the calibration of a multistation system. Simulations to evaluate the effects of some critical factors were conducted. These simulations show the robustness and accuracy of our method. In real experiments, the root mean square error of the calibration result reached 1.46 mm within a range of 10 m, even though the vision-guiding device focuses on a point approximately 5 m away from the origin of its coordinate system, with a field of view of approximately 200 mm × 200 mm.
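
    The key estimation step described above, recovering the rotation from pairs of beam direction vectors, has a standard closed-form solution via the SVD (the Kabsch construction). The sketch below is our rendering of that step, not the paper's published algorithm; it assumes at least two non-parallel beams.

      import numpy as np

      def rotation_from_directions(d_cam, d_trk):
          # d_cam, d_trk: N x 3 unit direction vectors of the same beams
          # in the camera frame and the laser-tracker frame (N >= 2,
          # not all parallel). Returns R with R @ d_trk[i] ~ d_cam[i].
          H = d_trk.T @ d_cam
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
          return Vt.T @ D @ U.T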

  6. Human interface and transmit frequency control for the through-air acoustic real-time high resolution vision substitute system.

    PubMed

    Taki, Hirofumi; Sato, Toru

    2005-01-01

    Existing vision substitute systems are not useful as navigation systems due to limitations in spatial and temporal resolution. In this study we propose a transmit control method free from range aliasing for the high-resolution acoustic vision substitute system that we previously proposed. We also examine a human-machine information transfer method with a vibrotactile stimulator array consisting of 13 × 21 elements. It presents a target area of 30° × 60° with a sampling interval of 1° at the center. The system presents the range, direction, and surface topography of targets to the subject.

  7. Acquired color vision deficiency.

    PubMed

    Simunovic, Matthew P

    2016-01-01

    Acquired color vision deficiency occurs as the result of ocular, neurologic, or systemic disease. A wide array of conditions may affect color vision, ranging from diseases of the ocular media through to pathology of the visual cortex. Traditionally, acquired color vision deficiency is considered a separate entity from congenital color vision deficiency, although emerging clinical and molecular genetic data would suggest a degree of overlap. We review the pathophysiology of acquired color vision deficiency, the data on its prevalence, and theories for the preponderance of acquired S-mechanism (or tritan) deficiency, and we discuss tests of color vision. We also briefly review the types of color vision deficiencies encountered in ocular disease, with an emphasis placed on larger or more detailed clinical investigations.

  8. Monocular Vision System for Fixed Altitude Flight of Unmanned Aerial Vehicles.

    PubMed

    Huang, Kuo-Lung; Chiu, Chung-Cheng; Chiu, Sheng-Yi; Teng, Yao-Jen; Hao, Shu-Sheng

    2015-07-13

    The fastest and most economical method of acquiring terrain images is aerial photography. The use of unmanned aerial vehicles (UAVs) has been investigated for this task. However, UAVs present a range of challenges such as flight altitude maintenance. This paper reports a method that combines skyline detection with a stereo vision algorithm to enable the flight altitude of UAVs to be maintained. A monocular camera is mounted on the downside of the aircraft's nose to collect continuous ground images, and the relative altitude is obtained via a stereo vision algorithm from the velocity of the UAV. Image detection is used to obtain terrain images, and to measure the relative altitude from the ground to the UAV. The UAV flight system can be set to fly at a fixed and relatively low altitude to obtain the same resolution of ground images. A forward-looking camera is mounted on the upside of the aircraft's nose. In combination with the skyline detection algorithm, this helps the aircraft to maintain a stable flight pattern. Experimental results show that the proposed system enables UAVs to obtain terrain images at constant resolution, and to detect the relative altitude along the flight path.
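
    The motion-stereo idea above admits a one-line estimate: two frames taken dt apart act as a stereo pair whose baseline is the distance flown, so altitude follows from the usual depth-from-disparity relation. The focal length and example numbers are placeholders, not values from the paper.

      F_PX = 1000.0   # focal length in pixels (placeholder)

      def relative_altitude_m(disparity_px, speed_mps, dt_s):
          baseline_m = speed_mps * dt_s      # distance flown between frames
          return F_PX * baseline_m / disparity_px

      # Example: 20 m/s ground speed, 0.1 s frame gap, 25 px disparity -> 80 m
      print(relative_altitude_m(25.0, 20.0, 0.1))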

  9. Monocular Vision System for Fixed Altitude Flight of Unmanned Aerial Vehicles

    PubMed Central

    Huang, Kuo-Lung; Chiu, Chung-Cheng; Chiu, Sheng-Yi; Teng, Yao-Jen; Hao, Shu-Sheng

    2015-01-01

    The fastest and most economical method of acquiring terrain images is aerial photography. The use of unmanned aerial vehicles (UAVs) has been investigated for this task. However, UAVs present a range of challenges such as flight altitude maintenance. This paper reports a method that combines skyline detection with a stereo vision algorithm to enable the flight altitude of UAVs to be maintained. A monocular camera is mounted on the downside of the aircraft’s nose to collect continuous ground images, and the relative altitude is obtained via a stereo vision algorithm from the velocity of the UAV. Image detection is used to obtain terrain images, and to measure the relative altitude from the ground to the UAV. The UAV flight system can be set to fly at a fixed and relatively low altitude to obtain the same resolution of ground images. A forward-looking camera is mounted on the upside of the aircraft’s nose. In combination with the skyline detection algorithm, this helps the aircraft to maintain a stable flight pattern. Experimental results show that the proposed system enables UAVs to obtain terrain images at constant resolution, and to detect the relative altitude along the flight path. PMID:26184213

  10. A vision-based system for intelligent monitoring: human behaviour analysis and privacy by context.

    PubMed

    Chaaraoui, Alexandros Andre; Padilla-López, José Ramón; Ferrández-Pastor, Francisco Javier; Nieto-Hidalgo, Mario; Flórez-Revuelta, Francisco

    2014-05-20

    Due to progress and demographic change, society is facing a crucial challenge related to increased life expectancy and a higher number of people in situations of dependency. As a consequence, there exists a significant demand for support systems for personal autonomy. This article outlines the vision@home project, whose goal is to extend independent living at home for elderly and impaired people, providing care and safety services by means of vision-based monitoring. Different kinds of ambient-assisted living services are supported, from the detection of home accidents, to telecare services. In this contribution, the specification of the system is presented, and novel contributions are made regarding human behaviour analysis and privacy protection. By means of a multi-view setup of cameras, people's behaviour is recognised based on human action recognition. For this purpose, a weighted feature fusion scheme is proposed to learn from multiple views. In order to protect the right to privacy of the inhabitants when a remote connection occurs, a privacy-by-context method is proposed. The experimental results of the behaviour recognition method show an outstanding performance, as well as support for multi-view scenarios and real-time execution, which are required in order to provide the proposed services.
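
    A minimal sketch of the weighted feature fusion across views: each camera contributes a feature vector, and a normalised per-view weight sets its influence. In the paper the weighting is learned; supplying the weights directly, as below, is our simplification.

      import numpy as np

      def fuse_views(features, weights):
          # features: list of per-view feature vectors of equal length.
          # weights:  per-view reliabilities (need not sum to one).
          w = np.asarray(weights, dtype=float)
          w /= w.sum()
          return sum(wi * np.asarray(f, dtype=float)
                     for wi, f in zip(w, features))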

  11. Terrain Portrayal for Synthetic Vision Systems Head-Down Displays Evaluation Results: Compilation of Pilot Transcripts

    NASA Technical Reports Server (NTRS)

    Hughes, Monica F.; Glaab, Louis J.

    2007-01-01

    The Terrain Portrayal for Head-Down Displays (TP-HDD) simulation experiment addressed multiple objectives involving twelve display concepts (two baseline concepts without terrain and ten synthetic vision system (SVS) variations), four evaluation maneuvers (two en route and one approach maneuver, plus a rare-event scenario), and three pilot group classifications. The TP-HDD SVS simulation was conducted in the NASA Langley Research Center's (LaRC's) General Aviation WorkStation (GAWS) facility. The results from this simulation establish the relationship between terrain portrayal fidelity and pilot situation awareness, workload, stress, and performance and are published in the NASA TP entitled Terrain Portrayal for Synthetic Vision Systems Head-Down Displays Evaluation Results. This is a collection of pilot comments during each run of the TP-HDD simulation experiment. These comments are not the full transcripts, but a condensed version where only the salient remarks that applied to the scenario, the maneuver, or the actual research itself were compiled.

  12. A Vision-Based System for Intelligent Monitoring: Human Behaviour Analysis and Privacy by Context

    PubMed Central

    Chaaraoui, Alexandros Andre; Padilla-López, José Ramón; Ferrández-Pastor, Francisco Javier; Nieto-Hidalgo, Mario; Flórez-Revuelta, Francisco

    2014-01-01

    Due to progress and demographic change, society is facing a crucial challenge related to increased life expectancy and a higher number of people in situations of dependency. As a consequence, there exists a significant demand for support systems for personal autonomy. This article outlines the vision@home project, whose goal is to extend independent living at home for elderly and impaired people, providing care and safety services by means of vision-based monitoring. Different kinds of ambient-assisted living services are supported, from the detection of home accidents, to telecare services. In this contribution, the specification of the system is presented, and novel contributions are made regarding human behaviour analysis and privacy protection. By means of a multi-view setup of cameras, people's behaviour is recognised based on human action recognition. For this purpose, a weighted feature fusion scheme is proposed to learn from multiple views. In order to protect the right to privacy of the inhabitants when a remote connection occurs, a privacy-by-context method is proposed. The experimental results of the behaviour recognition method show an outstanding performance, as well as support for multi-view scenarios and real-time execution, which are required in order to provide the proposed services. PMID:24854209

  13. Optimization of spatial light distribution through genetic algorithms for vision systems applied to quality control

    NASA Astrophysics Data System (ADS)

    Castellini, P.; Cecchini, S.; Stroppa, L.; Paone, N.

    2015-02-01

    The paper presents an adaptive illumination system for image quality enhancement in vision-based quality control systems. In particular, a spatial modulation of illumination intensity is proposed in order to improve image quality, thus compensating for different target scattering properties, local reflections and fluctuations of ambient light. The desired spatial modulation of illumination is obtained by a digital light projector, used to illuminate the scene with an arbitrary spatial distribution of light intensity, designed to improve feature extraction in the region of interest. The spatial distribution of illumination is optimized by running a genetic algorithm. An image quality estimator is used to close the feedback loop and to stop iterations once the desired image quality is reached. The technique proves particularly valuable for optimizing the spatial illumination distribution in the region of interest, with the remarkable capability of the genetic algorithm to adapt the light distribution to very different target reflectivities and ambient conditions. The final objective of the proposed technique is to improve the matching score in the recognition of parts through matching algorithms, and hence the accuracy of the diagnosis delivered by machine vision-based quality inspections. The procedure has been validated both by a numerical model and by an experimental test, referring to a significant problem of quality control in the washing machine manufacturing industry: the recognition of a metallic clamp. Its applicability to other domains is also presented, specifically the visual inspection of shoes with retro-reflective tape and T-shirts with paillettes.
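
    The optimisation loop described above can be sketched compactly: a genetic algorithm evolves a coarse grid of projector intensities, scored by an image-quality estimator. Here render_and_score is a stand-in fitness (the real system projects the pattern, grabs a camera frame, and scores it), and the population sizes and rates are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)

      def render_and_score(pattern):
          # Placeholder fitness; the real loop scores a camera image.
          return (-np.mean((pattern - 0.5) ** 2)
                  - 0.1 * np.mean(np.abs(np.diff(pattern))))

      def ga_illumination(shape=(8, 8), pop=20, gens=50, mut=0.05):
          population = rng.random((pop,) + shape)
          for _ in range(gens):
              scores = np.array([render_and_score(p) for p in population])
              elite = population[np.argsort(scores)[-pop // 2:]]  # selection
              children = []
              for _ in range(pop - len(elite)):
                  a, b = elite[rng.integers(len(elite), size=2)]
                  mask = rng.random(shape) < 0.5                  # crossover
                  child = np.where(mask, a, b)
                  child += mut * rng.standard_normal(shape)       # mutation
                  children.append(np.clip(child, 0.0, 1.0))
              population = np.concatenate([elite, np.array(children)])
          return max(population, key=render_and_score)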

  14. Requirements analysis for an air traffic control tower surface surveillance enhanced vision system

    NASA Astrophysics Data System (ADS)

    Ruffner, John W.; Deaver, Dawne M.; Henry, Daniel J.

    2003-09-01

    Tower controllers are responsible for maintaining separation between aircraft and expediting the flow of traffic in the air. On the airport surface, they also are responsible for maintaining safe separation between aircraft, ground equipment, and personnel. They do this by sequencing departing and arriving aircraft, and controlling the location and movement of aircraft, vehicles, equipment, and personnel on the airport surface. The local controller and ground controller are responsible for determining aircraft location and intent, and for ensuring that aircraft, vehicles, and other surface objects maintain a safe separation distance. During nighttime or poor visibility conditions, controllers' situation awareness is significantly degraded, resulting in lower safety margins and increased errors. Safety and throughput can be increased by using an Enhanced Vision System, based upon state-of-the-art infrared sensor technology, to restore critical visual cues. We discuss the results of an analysis of tower controller critical visual tasks and information requirements. The analysis identified: representative classes of ground obstacles/targets (e.g., aircraft, vehicles, wildlife); sample airport layouts and tower-to-runway distances; and obstacle subtended visual angles. We performed NVTherm modeling of candidate sensors and field data collections. This resulted in the identification of design factors for an airport surface surveillance Enhanced Vision System.

  15. Monocular Vision System for Fixed Altitude Flight of Unmanned Aerial Vehicles.

    PubMed

    Huang, Kuo-Lung; Chiu, Chung-Cheng; Chiu, Sheng-Yi; Teng, Yao-Jen; Hao, Shu-Sheng

    2015-01-01

    The fastest and most economical method of acquiring terrain images is aerial photography. The use of unmanned aerial vehicles (UAVs) has been investigated for this task. However, UAVs present a range of challenges such as flight altitude maintenance. This paper reports a method that combines skyline detection with a stereo vision algorithm to enable the flight altitude of UAVs to be maintained. A monocular camera is mounted on the downside of the aircraft's nose to collect continuous ground images, and the relative altitude is obtained via a stereo vision algorithm from the velocity of the UAV. Image detection is used to obtain terrain images, and to measure the relative altitude from the ground to the UAV. The UAV flight system can be set to fly at a fixed and relatively low altitude to obtain the same resolution of ground images. A forward-looking camera is mounted on the upside of the aircraft's nose. In combination with the skyline detection algorithm, this helps the aircraft to maintain a stable flight pattern. Experimental results show that the proposed system enables UAVs to obtain terrain images at constant resolution, and to detect the relative altitude along the flight path. PMID:26184213

  16. Optoelectronic vision

    NASA Astrophysics Data System (ADS)

    Ren, Chunye; Parel, Jean-Marie A.

    1993-06-01

    Scientists have searched every discipline to find effective methods of treating blindness, such as aids based on conversion of the optical image to auditory or tactile stimuli. However, the limited performance of such equipment and difficulties in training patients have seriously hampered practical applications. Great insight came from the discoveries of Foerster (1929) and Krause & Schum (1931), who found that electrical stimulation of the visual cortex evokes the perception of a small spot of light, called a 'phosphene', in both blind and sighted subjects. According to this principle, it is possible to elicit artificial vision by stimulating the visual nervous system with electrodes, thereby developing a prosthesis for the blind that might be of value in reading and mobility. In fact, a number of investigators have already exploited this phenomenon to produce a functional visual prosthesis, bringing about great advances in this area.

  17. Automated vision system for fabric defect inspection using Gabor filters and PCNN.

    PubMed

    Li, Yundong; Zhang, Cheng

    2016-01-01

    In this study, an embedded machine vision system using Gabor filters and a Pulse Coupled Neural Network (PCNN) is developed to identify defects in warp-knitted fabrics automatically. The system consists of smart cameras and a Human Machine Interface (HMI) controller. A hybrid detection algorithm combining Gabor filters and a PCNN runs on the SOC processor of the smart camera. First, Gabor filters are employed to enhance the contrast of images captured by a CMOS sensor. Second, defect areas are segmented by the PCNN with adaptive parameter setting. Third, the smart cameras notify the controller to stop the warp-knitting machine once defects are found. Experimental results demonstrate that the hybrid method is superior to Gabor and wavelet methods in detection accuracy. Actual operation in a textile factory verifies the effectiveness of the inspection system. PMID:27386251
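
    The Gabor stage translates directly into code: apply a small bank of oriented kernels and keep the strongest response, so defects that break the fabric's regular texture stand out. The kernel parameters are illustrative, and the PCNN segmentation stage is replaced here by a plain Otsu threshold purely for brevity, not as the paper's method.

      import numpy as np
      import cv2

      def gabor_enhance(gray):
          responses = []
          for theta in np.arange(0, np.pi, np.pi / 4):   # 4 orientations
              kern = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5,
                                        0, ktype=cv2.CV_32F)
              kern /= kern.sum() + 1e-8                   # normalise energy
              responses.append(cv2.filter2D(gray, cv2.CV_32F, kern))
          return np.max(responses, axis=0)

      def defect_mask(gray):
          e = cv2.normalize(gabor_enhance(gray), None, 0, 255,
                            cv2.NORM_MINMAX).astype(np.uint8)
          _, mask = cv2.threshold(e, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
          return mask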

  18. A comparative study of three vision systems for metal surface defect detection

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; Simionescu, Petru-Aurelian; Robinson, Shawn; McLauchlan, Lifford

    2015-09-01

    In this paper we present a comparative analysis of three vision systems for nondestructively predicting defects on the surfaces of aluminum castings. A hyperspectral imaging system, a thermal imager, and a digital color camera were used to inspect aluminum metal cast surfaces. Hyperspectral imaging provides both spectral and spatial information, as each material produces specific spectral signatures that are also affected by surface texture. The thermal imager detects infrared radiation, whereby hotspots can be investigated to identify possible trapped inclusions close to the surface, or other superficial defects. Finally, digital color images show apparent surface defects that can also be viewed with the naked eye, but their analysis can be automated for speed and efficiency. The surface defect locations predicted using the three systems were then verified by breaking the castings using a tensile tester. Of the three nondestructive methods, the thermal imaging camera was found to produce the most accurate predictions of the defect locations that caused breakage.

  19. Vision-based on-board collision avoidance system for aircraft navigation

    NASA Astrophysics Data System (ADS)

    Candamo, Joshua; Kasturi, Rangachar; Goldgof, Dmitry; Sarkar, Sudeep

    2006-05-01

    This paper presents an automated classification system for images based on their visual complexity. The image complexity is approximated using a clutter measure, and parameters for processing are chosen dynamically. The classification method is part of a vision-based collision avoidance system for low-altitude aerial vehicles, intended for use during search and rescue operations in urban settings. The collision avoidance system focuses on detecting thin obstacles such as wires and power lines. Automatic parameter selection for edge detection shows a 5% and 12% performance improvement for medium and heavily cluttered images, respectively. The automatic classification enabled the algorithm to identify nearly invisible power lines, without any manual intervention, in 60 frames of video footage from an SUAV helicopter that crashed during a search and rescue mission after Hurricane Katrina.
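
    One way to realise the clutter-driven parameter choice described above: measure clutter as the fraction of edge pixels, then bin the image into a complexity class that selects the edge-detector thresholds. The cut-points and threshold pairs are illustrative, not the paper's values.

      import cv2

      def clutter(gray):
          # Fraction of pixels marked as edges at a fixed baseline setting.
          return cv2.Canny(gray, 50, 150).mean() / 255.0

      def detect_thin_obstacles(gray):
          c = clutter(gray)
          if c < 0.02:                 # clean sky: be sensitive
              lo, hi = 20, 60
          elif c < 0.08:               # medium clutter
              lo, hi = 40, 120
          else:                        # heavy clutter: be strict
              lo, hi = 80, 200
          return cv2.Canny(gray, lo, hi)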

  20. A Respiratory Movement Monitoring System Using Fiber-Grating Vision Sensor for Diagnosing Sleep Apnea Syndrome

    NASA Astrophysics Data System (ADS)

    Takemura, Yasuhiro; Sato, Jun-Ya; Nakajima, Masato

    2005-01-01

    A non-restrictive, non-contact respiratory movement monitoring system that finds the boundary between chest and abdomen automatically and detects the vertical movement of each part of the body separately is proposed. The system uses a fiber-grating vision sensor, and the boundary position is found by calculating the centers of gravity of upward-moving and downward-moving sampling points, respectively. In an experiment evaluating the ability to detect the respiratory movement signals of each part and to discriminate between obstructive and central apneas, the detected signals of the two parts and their total clearly showed the peculiarities of obstructive and central apnea. The cross talk between the two categories, classified automatically according to several rules that reflect these peculiarities, was ≤15%. This result is sufficient for discriminating central sleep apnea syndrome from obstructive sleep apnea syndrome and indicates that the system is promising as screening equipment.
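
    The boundary rule described above is simple to state in code: split the sampled body points into those currently moving up and those moving down, take the centre of gravity of each group along the body axis, and place the chest/abdomen boundary between them. The array names and the midpoint choice are our illustration of the rule, not the paper's exact formulation.

      import numpy as np

      def chest_abdomen_boundary(y_pos, v_z):
          # y_pos: position of each sampling point along the body axis.
          # v_z:   vertical velocity of each point (+ up, - down).
          y = np.asarray(y_pos, dtype=float)
          v = np.asarray(v_z, dtype=float)
          up, down = y[v > 0], y[v < 0]
          if up.size == 0 or down.size == 0:
              return None           # in-phase motion: no paradoxical split
          return 0.5 * (up.mean() + down.mean())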