Science.gov

Sample records for active vision system

  1. Global vision systems regulatory and standard setting activities

    NASA Astrophysics Data System (ADS)

    Tiana, Carlo; Münsterer, Thomas

    2016-05-01

    A number of committees globally, and the regulatory agencies they support, are actively delivering and updating performance standards for vision systems: Enhanced, Synthetic, and Combined, as they apply to both fixed-wing and, more recently, rotorcraft operations in low visibility. We provide an overview of each committee's present and past work, as well as an update on recent activities and future goals.

  2. Vector disparity sensor with vergence control for active vision systems.

    PubMed

    Barranco, Francisco; Diaz, Javier; Gibaldi, Agostino; Sabatini, Silvio P; Ros, Eduardo

    2012-01-01

    This paper presents an architecture for computing vector disparity for active vision systems as used in robotics applications. Controlling the vergence angle of a binocular system allows us to efficiently explore dynamic environments, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy of the disparity computation around the fixation point and fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engines are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity at up to 32 fps on VGA-resolution images with very good accuracy, as shown using benchmark sequences with known ground truth. The performance of the presented approaches in terms of frame rate, resource utilization, and accuracy is discussed. On the basis of these results, our study indicates that the gradient-based approach offers the best trade-off for integration with the active vision system.
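
    As a concrete picture of the gradient-based engine, the sketch below estimates a 2-D (vector) disparity for a single window by solving the linearized brightness-constancy system in the Lucas-Kanade style. It is a minimal, single-scale Python sketch under stated assumptions, not the paper's multiscale FPGA implementation; all names are illustrative.

      import numpy as np

      def vector_disparity(left, right, y, x, win=7):
          """Least-squares 2-D disparity of the window centred at (y, x)."""
          h = win // 2
          L = left[y-h:y+h+1, x-h:x+h+1].astype(float)
          R = right[y-h:y+h+1, x-h:x+h+1].astype(float)
          gy, gx = np.gradient(L)               # spatial image gradients
          dt = R - L                            # left/right intensity difference
          A = np.array([[np.sum(gx*gx), np.sum(gx*gy)],
                        [np.sum(gx*gy), np.sum(gy*gy)]])
          b = -np.array([np.sum(gx*dt), np.sum(gy*dt)])
          if np.linalg.cond(A) > 1e6:           # reject textureless windows
              return None
          return np.linalg.solve(A, b)          # (dx, dy) vector disparity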

  3. Range gated active night vision system for automobiles.

    PubMed

    David, Ofer; Kopeika, Norman S; Weizer, Boaz

    2006-10-01

    Night vision for automobiles is an emerging automotive safety feature. We develop what we believe is an innovative night vision system based on gated imaging principles. The concept of gated imaging is described, along with its basic advantages, including the backscatter reduction mechanism for improved vision through fog, rain, and snow. An evaluation of performance is presented by analyzing bar pattern modulation and comparing the results against Johnson chart predictions.
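
    The backscatter rejection rests on simple time-of-flight arithmetic: the camera gate opens only for photons returning from a chosen depth slice. A hedged sketch of that timing follows, with illustrative numbers rather than the authors' parameters.

      C = 3.0e8  # speed of light, m/s

      def gate_timing(r_min_m, r_max_m):
          """Gate delay and width (seconds) for a depth slice [r_min, r_max]."""
          delay = 2.0 * r_min_m / C                # round trip to the near edge
          width = 2.0 * (r_max_m - r_min_m) / C    # hold the gate open across it
          return delay, width

      # e.g. a 50-200 m slice: ~333 ns delay, ~1 us gate; light scattered by
      # fog or rain closer than 50 m arrives while the gate is still closed.
      print(gate_timing(50.0, 200.0))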

  4. Scene interpretation module for an active vision system

    NASA Astrophysics Data System (ADS)

    Remagnino, P.; Matas, J.; Illingworth, John; Kittler, Josef

    1993-08-01

    In this paper an implementation of a high level symbolic scene interpreter for an active vision system is considered. The scene interpretation module uses low level image processing and feature extraction results to achieve object recognition and to build up a 3D environment map. The module is structured to exploit spatio-temporal context provided by existing partial world interpretations and has spatial reasoning to direct gaze control and thereby achieve efficient and robust processing using spatial focus of attention. The system builds and maintains an awareness of an environment which is far larger than a single camera view. Experiments on image sequences have shown that the system can: establish its position and orientation in a partially known environment, track simple moving objects such as cups and boxes, temporally integrate recognition results to establish or forget object presence, and utilize spatial focus of attention to achieve efficient and robust object recognition. The system has been extensively tested using images from a single steerable camera viewing a simple table top scene containing box and cylinder-like objects. Work is currently progressing to further develop its competences and interface it with the Surrey active stereo vision head, GETAFIX.

  5. Active vision in marmosets: a model system for visual neuroscience.

    PubMed

    Mitchell, Jude F; Reynolds, John H; Miller, Cory T

    2014-01-22

    The common marmoset (Callithrix jacchus), a small-bodied New World primate, offers several advantages to complement vision research in larger primates. Studies in the anesthetized marmoset have detailed the anatomy and physiology of their visual system (Rosa et al., 2009) while studies of auditory and vocal processing have established their utility for awake and behaving neurophysiological investigations (Lu et al., 2001a,b; Eliades and Wang, 2008a,b; Osmanski and Wang, 2011; Remington et al., 2012). However, a critical unknown is whether marmosets can perform visual tasks under head restraint. This has been essential for studies in macaques, enabling both accurate eye tracking and head stabilization for neurophysiology. In one set of experiments we compared the free viewing behavior of head-fixed marmosets to that of macaques, and found that their saccadic behavior is comparable across a number of saccade metrics and that saccades target similar regions of interest including faces. In a second set of experiments we applied behavioral conditioning techniques to determine whether the marmoset could control fixation for liquid reward. Two marmosets could fixate a central point and ignore peripheral flashing stimuli, as needed for receptive field mapping. Both marmosets also performed an orientation discrimination task, exhibiting a saturating psychometric function with reliable performance and shorter reaction times for easier discriminations. These data suggest that the marmoset is a viable model for studies of active vision and its underlying neural mechanisms.

  6. Active vision system integrating fast and slow processes

    NASA Astrophysics Data System (ADS)

    Castrillon-Santana, Modesto; Guerra-Artal, C.; Hernandez-Sosa, J.; Dominguez-Brito, A.; Isern-Gonzalez, J.; Cabrera-Gamez, Jorge; Hernandez-Tejera, F. M.

    1998-10-01

    This paper describes an active vision system whose design assumes a distinction between fast (reactive) and slow (background) processes. Fast processes must operate in cycles with critical timeouts that may affect system stability, while slow processes, though necessary, do not compromise system stability if their execution is delayed. Based on this simple taxonomy, a control architecture has been proposed, and a prototype has been implemented that is able to track people in real time with a robotic head while trying to identify the target. In this system, tracking the moving target is considered the reactive part, while person identification is treated as a background task. This demonstrator has been developed using a new-generation DSP (TMS320C80) as a specialized coprocessor for the fast processes, and a commercial robotic head with a dedicated DSP-based motor controller. These subsystems are hosted by a standard Pentium Pro PC running Windows NT, where the slow processes are executed. The flexibility achieved in the design phase and the preliminary results obtained so far seem to validate the approach followed to integrate time-critical and slow tasks on a heterogeneous hardware platform.
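
    The fast/slow taxonomy maps naturally onto a two-loop architecture: a deadline-driven tracking loop and a delay-tolerant identification task. The Python sketch below is schematic only; grab_frame, track, identify, and move_head are hypothetical stubs, not the authors' API.

      import threading, queue, time

      def grab_frame(): return None            # placeholder sensor read
      def track(frame): return (0.0, 0.0)      # placeholder pan/tilt command
      def identify(frame): return "unknown"    # placeholder recognizer
      def move_head(cmd): pass                 # placeholder motor command

      frames = queue.Queue(maxsize=1)   # the slow task always sees a recent frame

      def reactive_loop(cycle_s=0.04):  # 25 Hz cycle: timeout-critical
          while True:
              t0 = time.monotonic()
              frame = grab_frame()
              move_head(track(frame))   # keep the target centred
              if not frames.full():
                  frames.put_nowait(frame)   # hand the frame to the slow task
              time.sleep(max(0.0, cycle_s - (time.monotonic() - t0)))

      def background_loop():            # slow: a delay here is harmless
          while True:
              print("identity:", identify(frames.get()))

      threading.Thread(target=background_loop, daemon=True).start()
      reactive_loop()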

  7. Categorisation through evidence accumulation in an active vision system

    NASA Astrophysics Data System (ADS)

    Mirolli, Marco; Ferrauto, Tomassino; Nolfi, Stefano

    2010-12-01

    In this paper, we present an artificial vision system that is trained with a genetic algorithm for categorising five different kinds of images (letters) of different sizes. The system, which has a limited field of view, can move its eye so as to explore the images visually. The analysis of the system at the end of the training process indicates that correct categorisation is achieved by (1) exploiting sensory-motor coordination so as to experience stimuli that facilitate discrimination, and (2) integrating perceptual and/or motor information over time through a process of accumulation of partially conflicting evidence. We discuss our results with respect to the possible different strategies for categorisation and to the possible roles that action can play in perception.
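
    The accumulation mechanism can be reduced to a few lines: partial, possibly conflicting evidence from successive fixations is summed until one category crosses a decision threshold. A toy Python sketch follows; the class count matches the five letter categories, and everything else is illustrative.

      import numpy as np

      def categorise(fixation_votes, n_classes=5, threshold=3.0):
          """fixation_votes yields one evidence vector per eye movement."""
          evidence = np.zeros(n_classes)
          for vote in fixation_votes:     # one partial observation per fixation
              evidence += vote            # votes may partially conflict
              if evidence.max() >= threshold:
                  break                   # enough evidence: commit early
          return int(evidence.argmax())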

  8. Exploring techniques for vision based human activity recognition: methods, systems, and evaluation.

    PubMed

    Xu, Xin; Tang, Jinshan; Zhang, Xiaolong; Liu, Xiaoming; Zhang, Hong; Qiu, Yimin

    2013-01-25

    With the wide application of vision-based intelligent systems, image and video analysis technologies have attracted the attention of researchers in the computer vision field. In image and video analysis, human activity recognition is an important research direction. By interpreting and understanding human activity, we can recognize and predict the occurrence of crimes and help the police or other agencies react immediately. In the past, a large number of papers have been published on human activity recognition in video and image sequences. In this paper, we provide a comprehensive survey of recent developments in the techniques, including methods, systems, and quantitative evaluation of the performance of human activity recognition.

  9. A Robot Vision System.

    DTIC Science & Technology

    1985-12-01

    This project includes the design and implementation of a vision-based goal achievement system. ... Final conclusions: stereo vision is useless beyond about 15 feet for a camera separation of 0.75 feet. ... Such monocular vision and modelling, duplicated for two cameras, would give a second source of model data for resolving ambiguities.

  10. Reduction of computational complexity in the image/video understanding systems with active vision

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2003-10-01

    The vision system evolved not only as a recognition system, but also as a sensory system for reaching, grasping, and other motor activities. In advanced creatures, it became a component of the prediction function, allowing the creation of environmental models and activity planning. Fast information processing and decision making is vital for any living creature and requires reduction of informational and computational complexity. The brain achieves this goal using symbolic coding, hierarchical compression, and selective processing of visual information. Network-Symbolic representation, in which systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, is the most feasible basis for such models. It converts visual information into relational Network-Symbolic structures, instead of precise computation of 3-dimensional models. Narrow foveal vision provides separation of figure from ground, object identification, semantic analysis, and precise control of actions. Rough, wide peripheral vision identifies and tracks salient motion, guiding the foveal system to salient objects; it also provides scene context. Objects with rigid bodies and other stable systems have coherent relational structures. Hierarchical compression and Network-Symbolic transformations derive more abstract structures that allow a particular structure to be invariantly recognized as an exemplar of a class. Robotic systems equipped with such smart vision will be able to navigate effectively in any environment, understand the situation, and act accordingly.

  11. Biomimetic machine vision system.

    PubMed

    Harman, William M; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael

    2005-01-01

    Real-time application of digital imaging for use in machine vision systems has proven to be prohibitive when used within control systems that employ low-power single processors without compromising the scope of vision or the resolution of captured images. Development of a real-time analog machine vision system is the focus of research taking place at the University of Wyoming. This new vision system is based upon the biological vision system of the common house fly. Development of a single sensor is accomplished, representing a single facet of the fly's eye. This new sensor is then incorporated into an array of sensors capable of detecting objects and tracking motion in 2-D space. This system "preprocesses" incoming image data, so that minimal data processing is needed to determine the location of a target object. Due to the nature of the sensors in the array, hyperacuity is achieved, thereby eliminating resolution issues found in digital vision systems. In this paper, we discuss the biological traits of the fly eye and the specific traits that led to the development of this machine vision system. We also discuss the process of developing an analog sensor that mimics the characteristics of interest in the biological vision system. The paper concludes with a discussion of how an array of these sensors can be applied to solving real-world machine vision problems.

  12. Three-dimensional data-acquiring system fusing active projection and stereo vision

    NASA Astrophysics Data System (ADS)

    Wu, Jianbo; Zhao, Hong; Tan, Yushan

    2001-09-01

    Combining an active digitizing technique with passive stereo vision, a novel method is proposed to acquire 3D data from two 2D images. Based on the principle of stereo vision, and assisted by the projection of dense structured light, the system overcomes the problem of matching data points between two stereo images, which is the most important difficulty in stereo vision. An algorithm based on the wavelet transform is proposed to automatically determine the threshold for image segmentation and to extract the grid points. The system described here is mainly intended for rapid digitization of 3D objects. Compared with general digitizers, it performs the complete translation from 2D images to 3D data and overcomes shortcomings such as slow image acquisition and data processing, dependence on mechanical motion, and the need to paint the object before digitizing. The system is suited to non-contact, fast measurement and modeling of 3D objects with free-form surfaces, and can be employed widely in the fields of reverse engineering and CAD/CAM. Experiments demonstrate the effectiveness of this new use of shape from stereo vision (SFSV) in engineering.
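
    The abstract does not detail the wavelet thresholding step, so the sketch below only illustrates the flavour: a one-level 2-D Haar decomposition separates the smooth background (approximation) from the projected grid lines (detail bands), and a global threshold is derived from detail-energy statistics. This is entirely illustrative, not the authors' algorithm.

      import numpy as np

      def haar_grid_mask(img, k=2.0):
          """One-level Haar detail energy; assumes even image dimensions."""
          f = img.astype(float)
          a, b = f[0::2, 0::2], f[0::2, 1::2]
          c, d = f[1::2, 0::2], f[1::2, 1::2]
          lh = (a - b + c - d) / 4.0             # horizontal detail band
          hl = (a + b - c - d) / 4.0             # vertical detail band
          hh = (a - b - c + d) / 4.0             # diagonal detail band
          energy = np.sqrt(lh**2 + hl**2 + hh**2)
          t = energy.mean() + k * energy.std()   # auto-derived threshold
          return energy > t                      # coarse mask of grid-line pixels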

  13. Coherent laser vision system

    SciTech Connect

    Sebastion, R.L.

    1995-10-01

    The Coherent Laser Vision System (CLVS) is being developed to provide precision real-time 3D world views to support site characterization and robotic operations during facility decontamination and decommissioning. Autonomous or semiautonomous robotic operations require an accurate, up-to-date 3D world view. Existing technologies for real-time 3D imaging, such as AM laser radar, have limited accuracy at significant ranges and exhibit variability in range estimates caused by lighting or surface shading. Recent advances in fiber-optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber-optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions that is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.
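
    The ranging principle of an FMCW coherent laser radar reduces to one relation: a linear optical frequency chirp of slope S, mixed with its own delayed return, yields a beat frequency f_b = 2RS/c. A hedged sketch with illustrative numbers, not CLVS design values:

      C = 3.0e8  # speed of light, m/s

      def fmcw_range(f_beat_hz, chirp_bw_hz, chirp_period_s):
          slope = chirp_bw_hz / chirp_period_s   # chirp slope S, Hz/s
          return C * f_beat_hz / (2.0 * slope)   # R = c * f_b / (2 S)

      # e.g. a 100 GHz chirp over 1 ms with a 10 MHz beat -> 15 m range
      print(fmcw_range(10e6, 100e9, 1e-3))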

  14. Synthetic Vision Systems

    NASA Technical Reports Server (NTRS)

    Prinzel, L.J.; Kramer, L.J.

    2009-01-01

    A synthetic vision system is an aircraft cockpit display technology that presents the visual environment external to the aircraft using computer-generated imagery in a manner analogous to how it would appear to the pilot if forward visibility were not restricted. The purpose of this chapter is to review the state of synthetic vision systems, and discuss selected human factors issues that should be considered when designing such displays.

  15. A tactile vision substitution system for the study of active sensing.

    PubMed

    Hsu, Brian; Hsieh, Cheng-Han; Yu, Sung-Nien; Ahissar, Ehud; Arieli, Amos; Zilbershtain-Kra, Yael

    2013-01-01

    This paper presents a tactile vision substitution system (TVSS) for the study of active sensing. Two algorithms, for image processing and trajectory tracking, were developed to enhance the capability of conventional TVSS. Image processing techniques were applied to reduce artifacts and extract important features from the active camera, effectively converting the information into tactile stimuli of much lower resolution. A fixed camera was used to record the movement of the active camera, and a trajectory tracking algorithm was developed to analyze the active sensing strategy used by TVSS users to explore the environment. The image processing subsystem showed clear improvement in extracting object features for recognition. The trajectory tracking subsystem, in turn, made it possible to accurately locate the portion of the scene pointed at by the active camera, providing rich information for the study of the active sensing strategies applied by TVSS users.

  16. Image/video understanding systems based on network-symbolic models and active vision

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2004-07-01

    Vision is the part of an information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, which is an interpretation of visual information in terms of these knowledge models. It is hard to split the entire system apart, and vision mechanisms cannot be completely understood separately from the informational processes related to knowledge and intelligence. The brain reduces informational and computational complexity using implicit symbolic coding of features, hierarchical compression, and selective processing of visual information. Vision is a component of the situation awareness, motion, and planning systems. Foveal vision provides semantic analysis, recognizing objects in the scene; peripheral vision guides the fovea to salient objects and provides scene context. Biologically inspired Network-Symbolic representation, in which both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, converts visual information into relational Network-Symbolic structures, avoiding precise artificial computation of 3-D models. Network-Symbolic transformations derive more abstract structures that allow invariant recognition of an object as an exemplar of a class and reliable identification even if the object is occluded. Systems with such smart vision will be able to navigate in real environments and understand real-world situations.

  17. Computer Vision Systems

    NASA Astrophysics Data System (ADS)

    Gunasekaran, Sundaram

    Food quality is of paramount consideration for all consumers, and its importance is perhaps second only to food safety. By some definitions, food safety is also incorporated into the broad categorization of food quality. Hence, the need for careful and accurate evaluation of food quality is at the forefront of research and development both in academia and in industry. Among the many available methods for food quality evaluation, computer vision has proven to be the most powerful, especially for nondestructively extracting and quantifying many features that have direct relevance to food quality assessment and control. Furthermore, computer vision systems serve to rapidly evaluate the most readily observable food quality attributes - the external characteristics such as color, shape, size, and surface texture. In addition, it is now possible, using advanced computer vision technologies, to “see” inside a food product and/or package to examine important quality attributes ordinarily unavailable to human evaluators. With rapid advances in electronic hardware and other associated imaging technologies, the cost-effectiveness and speed of computer vision systems have greatly improved, and many practical systems are already in place in the food industry.

  18. Bird Vision System

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The Bird Vision system is a multi-camera photogrammetry software application that runs on a Microsoft Windows XP platform and was developed at Kennedy Space Center by ASRC Aerospace. This software system collects data about the locations of birds within a volume centered on the Space Shuttle and transmits it in real time to the laptop computer of a test director in the Launch Control Center (LCC) Firing Room.

  19. A vision system planner for increasing the autonomy of the Extravehicular Activity Helper/Retriever

    NASA Technical Reports Server (NTRS)

    Magee, Michael

    1993-01-01

    The Extravehicular Activity Retriever (EVAR) is a robotic device currently being developed by the Automation and Robotics Division at the NASA Johnson Space Center to support activities in the neighborhood of the Space Shuttle or Space Station Freedom. As the name implies, the Retriever's primary function will be to provide the capability to retrieve tools and equipment or other objects which have become detached from the spacecraft, but it will also be able to rescue a crew member who may have become inadvertently de-tethered. Later goals will include cooperative operations between a crew member and the Retriever such as fetching a tool that is required for servicing or maintenance operations. This paper documents a preliminary design for a Vision System Planner (VSP) for the EVAR that is capable of achieving visual objectives provided to it by a high level task planner. Typical commands which the task planner might issue to the VSP relate to object recognition, object location determination, and obstacle detection. Upon receiving a command from the task planner, the VSP then plans a sequence of actions to achieve the specified objective using a model-based reasoning approach. This sequence may involve choosing an appropriate sensor, selecting an algorithm to process the data, reorienting the sensor, adjusting the effective resolution of the image using lens zooming capability, and/or requesting the task planner to reposition the EVAR to obtain a different view of the object. An initial version of the Vision System Planner which realizes the above capabilities using simulated images has been implemented and tested. The remaining sections describe the architecture and capabilities of the VSP and its relationship to the high level task planner. In addition, typical plans that are generated to achieve visual goals for various scenarios are discussed. Specific topics to be addressed will include object search strategies, repositioning of the EVAR to improve the

  20. Industrial robot's vision systems

    NASA Astrophysics Data System (ADS)

    Iureva, Radda A.; Raskin, Evgeni O.; Komarov, Igor I.; Maltseva, Nadezhda K.; Fedosovsky, Michael E.

    2016-03-01

    Due to the improved economic situation in the high-technology sectors, work on the creation of industrial robots and special mobile robotic systems has resumed. Despite this, robotic control systems have mostly remained unchanged, with all the advantages and disadvantages of those systems; this is due to a lack of means that could greatly facilitate the work of the operator and, in some cases, replace the operator completely. The paper is concerned with the machine vision complex of a robotic system for monitoring underground pipelines, which collects and analyzes up to 90% of the necessary information. Vision systems are used to identify obstacles along the motion trajectory and to determine their origin, dimensions, and character. The object is illuminated with structured light, and a TV camera records the projected pattern; distortions of the pattern uniquely determine the shape of the object in the camera's view. The reference illumination is synchronized with the camera. The main parameters of the system are the baseline distance between the light projector and the camera, and the parallax angle (the angle between the optical axes of the projection unit and the camera).
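
    With the baseline and ray angles known, depth recovery is plain triangulation. The sketch below shows the standard relation for a projector-camera pair, with angles measured from the baseline normal; it illustrates the geometry named in the abstract, not the authors' code.

      import math

      def depth_m(baseline_m, proj_angle_rad, cam_angle_rad):
          """Z = B / (tan(theta_projector) + tan(theta_camera))."""
          return baseline_m / (math.tan(proj_angle_rad) + math.tan(cam_angle_rad))

      # e.g. a 0.5 m baseline with both rays at 20 degrees -> ~0.69 m depth
      print(depth_m(0.5, math.radians(20), math.radians(20)))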

  1. Evaluating the performance of close-range 3D active vision systems for industrial design applications

    NASA Astrophysics Data System (ADS)

    Beraldin, J.-Angelo; Gaiani, Marco

    2004-12-01

    In recent years, active three-dimensional (3D) vision systems, or range cameras for short, have come out of research laboratories to find niche markets in application fields as diverse as industrial design, automotive manufacturing, geomatics, space exploration, and cultural heritage, to name a few. Many publications address different issues linked to 3D sensing and processing, but these technologies still pose a number of challenges to many new users, i.e., "what are they, how good are they, and how do they compare?". The need to understand, test, and integrate these range cameras with other technologies, e.g. photogrammetry, CAD, etc., is driven by the quest for optimal resolution, accuracy, speed, and cost. Before investing, users want to be certain that a given range camera satisfies their operational requirements. An understanding of the basic theory and best practices associated with these cameras is in fact fundamental to fulfilling the requirements listed above in an optimal way. This paper addresses the evaluation of active 3D range cameras as part of a study to better understand and select one or more of them to fulfill the needs of industrial design applications. In particular, the effects of object material and surface features, calibration, and performance evaluation are discussed. Results are given for six different range cameras for close-range applications.

  2. 3D vision system assessment

    NASA Astrophysics Data System (ADS)

    Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Bryan; Chenault, David B.; Kingston, David; Geulen, Vanilynmae; Newell, Scott; Pettijohn, Brad

    2009-02-01

    In this paper, we report on the development of a 3D vision system consisting of a flat panel stereoscopic display and auto-converging stereo camera and an assessment of the system's use for robotic driving, manipulation, and surveillance operations. The 3D vision system was integrated onto a Talon Robot and Operator Control Unit (OCU) such that direct comparisons of the performance of a number of test subjects using 2D and 3D vision systems were possible. A number of representative scenarios were developed to determine which tasks benefited most from the added depth perception and to understand when the 3D vision system hindered understanding of the scene. Two tests were conducted at Fort Leonard Wood, MO with noncommissioned officers ranked Staff Sergeant and Sergeant First Class. The scenarios; the test planning, approach and protocols; the data analysis; and the resulting performance assessment of the 3D vision system are reported.

  3. Dynamically re-configurable CMOS imagers for an active vision system

    NASA Technical Reports Server (NTRS)

    Yang, Guang (Inventor); Pain, Bedabrata (Inventor)

    2005-01-01

    A vision system is disclosed. The system includes a pixel array, at least one multi-resolution window operation circuit, and a pixel averaging circuit. The pixel array has an array of pixels configured to receive light signals from an image having at least one tracking target. The multi-resolution window operation circuits are configured to process the image. Each of the multi-resolution window operation circuits processes each tracking target within a particular multi-resolution window. The pixel averaging circuit is configured to sample and average pixels within the particular multi-resolution window.

  4. Coevolution of active vision and feature selection.

    PubMed

    Floreano, Dario; Kato, Toshifumi; Marocco, Davide; Sauser, Eric

    2004-03-01

    We show that complex visual tasks, such as position- and size-invariant shape recognition and navigation in the environment, can be tackled with simple architectures generated by a coevolutionary process of active vision and feature selection. Behavioral machines equipped with primitive vision systems and direct pathways between visual and motor neurons are evolved while they freely interact with their environments. We describe the application of this methodology in three sets of experiments, namely, shape discrimination, car driving, and robot navigation. We show that these systems develop sensitivity to a number of retinotopic visual features (oriented edges, corners, height) and a behavioral repertoire to locate, bring, and keep these features in sensitive regions of the vision system, resembling strategies observed in simple insects.

  5. Real-time vision systems

    SciTech Connect

    Johnson, R.; Hernandez, J.E.; Lu, Shin-yee

    1994-11-15

    Many industrial and defense applications require an ability to make instantaneous decisions based on sensor input of a time-varying process. Such systems are referred to as 'real-time systems' because they process and act on data as it occurs in time. When a vision sensor is used in a real-time system, the processing demands can be quite substantial, with typical data rates of 10-20 million samples per second. A real-time Machine Vision Laboratory (MVL) was established in FY94 to extend our years of experience in developing computer vision algorithms to include the development and implementation of real-time vision systems. The laboratory is equipped with a variety of hardware components, including Datacube image acquisition and processing boards, a Sun workstation, and several different types of CCD cameras, including monochrome and color area cameras and analog and digital line-scan cameras. The equipment is reconfigurable for prototyping different applications. This facility has been used to support several programs at LLNL, including O Division's Peacemaker and Deadeye Projects as well as the CRADA with the U.S. Textile Industry, CAFE (Computer Aided Fabric Inspection). To date, we have successfully demonstrated several real-time applications: bullet tracking, stereo tracking and ranging, and web inspection. This work has been documented in the ongoing development of a real-time software library.

  6. Dynamical Systems and Motion Vision.

    DTIC Science & Technology

    1988-04-01

    Massachusetts Institute of Technology, Artificial Intelligence Laboratory, A.I. Memo No. 1037, April 1988: "Dynamical Systems and Motion Vision," by Joachim Heel. Abstract: In this ... Support for the Laboratory's Artificial Intelligence Research is ...

  7. Active vision in satellite scene analysis

    NASA Technical Reports Server (NTRS)

    Naillon, Martine

    1994-01-01

    In earth observation or planetary exploration it is necessary to have more and more autonomous systems, able to adapt to unpredictable situations. This imposes the use, in artificial systems, of new concepts in cognition, based on the fact that perception should not be separated from the recognition and decision-making levels. This means that low-level signal processing (the perception level) should interact with symbolic and high-level processing (the decision level). This paper describes the concept of active vision, implemented in Distributed Artificial Intelligence by Dassault Aviation following a 'structuralist' principle. An application to spatial image interpretation is given, oriented toward flexible robotics.

  8. Basic design principles of colorimetric vision systems

    NASA Astrophysics Data System (ADS)

    Mumzhiu, Alex M.

    1998-10-01

    Color measurement is an important part of overall production quality control in the textile, coating, plastics, food, paper, and other industries. The color measurement instruments used for production quality control, such as colorimeters and spectrophotometers, have many limitations; in many applications they cannot be used for a variety of reasons and have to be replaced with human operators. Machine vision has great potential for color measurement. The components for color machine vision systems, such as broadcast-quality 3-CCD cameras, fast and inexpensive PCI frame grabbers, and sophisticated image processing software packages, are available. However, the machine vision industry has only started to approach the color domain. The few color machine vision systems on the market, produced by the largest machine vision manufacturers, have very limited capabilities. A lack of understanding that a vision-based color measurement system can fail if it ignores the basic principles of colorimetry is the main reason for the slow progress of color vision systems. The purpose of this paper is to clarify how color measurement principles have to be applied to vision systems and how the electro-optical design features of colorimeters have to be modified in order to implement them in vision systems. The subject far exceeds the limits of a journal paper, so only the most important aspects are discussed. An overview of the major areas of application for colorimetric vision systems is given. Finally, the reasons why some customers are happy with their vision systems and some are not are analyzed.

  9. Vision inspection system and method

    NASA Technical Reports Server (NTRS)

    Huber, Edward D. (Inventor); Williams, Rick A. (Inventor)

    1997-01-01

    An optical vision inspection system (4) and method for multiplexed illuminating, viewing, analyzing and recording a range of characteristically different kinds of defects, depressions, and ridges in a selected material surface (7) with first and second alternating optical subsystems (20, 21) illuminating and sensing successive frames of the same material surface patch. To detect the different kinds of surface features including abrupt as well as gradual surface variations, correspondingly different kinds of lighting are applied in time-multiplexed fashion to the common surface area patches under observation.

  10. VISION 21 SYSTEMS ANALYSIS METHODOLOGIES

    SciTech Connect

    G.S. Samuelsen; A. Rao; F. Robson; B. Washom

    2003-08-11

    Under the sponsorship of the U.S. Department of Energy/National Energy Technology Laboratory, a multi-disciplinary team led by the Advanced Power and Energy Program of the University of California at Irvine is defining the system engineering issues associated with the integration of key components and subsystems into power plant systems that meet performance and emission goals of the Vision 21 program. The study efforts have narrowed down the myriad of fuel processing, power generation, and emission control technologies to selected scenarios that identify those combinations having the potential to achieve the Vision 21 program goals of high efficiency and minimized environmental impact while using fossil fuels. The technology levels considered are based on projected technical and manufacturing advances being made in industry and on advances identified in current and future government supported research. Included in these advanced systems are solid oxide fuel cells and advanced cycle gas turbines. The results of this investigation will serve as a guide for the U. S. Department of Energy in identifying the research areas and technologies that warrant further support.

  11. Vision Systems with the Human in the Loop

    NASA Astrophysics Data System (ADS)

    Bauckhage, Christian; Hanheide, Marc; Wrede, Sebastian; Käster, Thomas; Pfeiffer, Michael; Sagerer, Gerhard

    2005-12-01

    The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, they can adapt to their environment, and thus they exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval; the others are cognitive vision systems that constitute prototypes of visual active memories which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After discussing adaptive content-based image retrieval and object and action recognition in an office environment, we raise the issue of assessing cognitive systems. Experiences from psychologically evaluated human-machine interactions are reported, and the promising potential of psychologically based usability experiments is stressed.

  12. Active-Vision Control Systems for Complex Adversarial 3-D Environments

    DTIC Science & Technology

    2009-03-01

  13. Pattern recognition and active vision in chickens.

    PubMed

    Dawkins, M S; Woodington, A

    2000-02-10

    Recognition of objects or environmental landmarks is problematic because appearance can vary widely depending on illumination, viewing distance, angle of view and so on. Storing a separate image or 'template' for every possible view requires vast numbers to be stored and scanned, has a high probability of recognition error and appears not to be the solution adopted by primates. However, some invertebrate template matching systems can achieve recognition by 'active vision' in which the animal's own behaviour is used to achieve a fit between template and object, for example by repeatedly following a set path. Recognition is thus limited to views from the set path but achieved with a minimal number of templates. Here we report the first evidence of similar active vision in a bird, in the form of locomotion and individually distinct head movements that give the eyes a similar series of views on different occasions. The hens' ability to recognize objects is also found to decrease when their normal paths are altered.

  14. Inertial Navigation System Aiding Using Vision

    DTIC Science & Technology

    2013-03-01

    Thesis AFIT-ENG-13-M-40, "Inertial Navigation System Aiding Using Vision," by James O. Quarmyne, Second Lieutenant, USAF; Air Force Institute of Technology, Department of the Air Force. Committee: Meir Pachter, PhD (Chairman); John F. Raquet, PhD (Committee Member). ...

  15. Analysis of the development and prospects of vehicular infrared night vision systems

    NASA Astrophysics Data System (ADS)

    Li, Jing; Fan, Hua-ping; Xie, Zu-yun; Zhou, Xiao-hong; Yu, Hong-qiang; Huang, Hui

    2013-08-01

    Through a classification of vehicular infrared night vision systems and a comparison of the mainstream vehicular infrared night vision products, we summarize the functions of vehicular infrared night vision systems, which include night vision, defogging, strong-light resistance, and biological recognition. The markets for vehicular infrared night vision systems in luxury cars and the fire protection industry are also analyzed. Finally, we conclude that the vehicular infrared night vision system will be adopted as an essential active safety feature, promoting both the night vision photoelectric industry and the automobile industry.

  16. Compact Autonomous Hemispheric Vision System

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Cunningham, Thomas J.; Werne, Thomas A.; Eastwood, Michael L.; Walch, Marc J.; Staehle, Robert L.

    2012-01-01

    Solar System Exploration camera implementations to date have involved either single cameras with a wide field of view (FOV) and consequently coarser spatial resolution, cameras on a movable mast, or single cameras necessitating rotation of the host vehicle to afford visibility outside a relatively narrow FOV. These cameras require detailed commanding from the ground or separate onboard computers to operate properly, and are incapable of making decisions based on image content that control pointing and downlink strategy. For color, a filter wheel having selectable positions was often added, which added moving parts, size, mass, and power, and reduced reliability. A system was developed based on a general-purpose miniature visible-light camera using advanced CMOS (complementary metal oxide semiconductor) imager technology. The baseline camera has a 92° FOV, and six cameras are arranged in an angled-up carousel fashion, with FOV overlaps such that the system has a 360° FOV in azimuth. A seventh camera, also with a 92° FOV, is installed normal to the plane of the other six cameras, giving the system a >90° FOV in elevation and completing the hemispheric vision system. A central unit houses the common electronics box (CEB) controlling the system (power conversion, data processing, memory, and control software). Stereo is achieved by adding a second system on a baseline, and color is achieved by stacking two more systems (for a total of three, each equipped with its own filter). Two connectors on the bottom of the CEB provide a connection to a carrier (rover, spacecraft, balloon, etc.) for telemetry, commands, and power. This system has no moving parts. The system's onboard software (SW) supports autonomous operations such as pattern recognition and tracking.

  17. Multi-channel automotive night vision system

    NASA Astrophysics Data System (ADS)

    Lu, Gang; Wang, Li-jun; Zhang, Yi

    2013-09-01

    A four-channel automotive night vision system is designed and developed. It consists of four active near-infrared cameras and a multi-channel image processing and display unit; the cameras are placed at the front, left, right, and rear of the automobile. The system uses a near-infrared laser light source whose beam is collimated. The source contains a thermoelectric cooler (TEC), can be synchronized with the camera focusing, and has automatic light intensity adjustment, which together ensure image quality. The principle and composition of the system are described in detail; on this basis, beam collimation, the LD driving and LD temperature control of the near-infrared laser light source, and the four-channel image processing and display are discussed. The system can be used in driver assistance, car BLIS, car parking assist, and car alarm systems, day and night.

  18. Vision restoration after brain and retina damage: the "residual vision activation theory".

    PubMed

    Sabel, Bernhard A; Henrich-Noack, Petra; Fedorov, Anton; Gall, Carolin

    2011-01-01

    Vision loss after retinal or cerebral visual injury (CVI) was long considered to be irreversible. However, there is considerable potential for vision restoration and recovery even in adulthood. Here, we propose the "residual vision activation theory" of how visual functions can be reactivated and restored. CVI is usually not complete, but some structures are typically spared by the damage. They include (i) areas of partial damage at the visual field border, (ii) "islands" of surviving tissue inside the blind field, (iii) extrastriate pathways unaffected by the damage, and (iv) downstream, higher-level neuronal networks. However, residual structures have a triple handicap to be fully functional: (i) fewer neurons, (ii) lack of sufficient attentional resources because of the dominant intact hemisphere caused by excitation/inhibition dysbalance, and (iii) disturbance in their temporal processing. Because of this resulting activation loss, residual structures are unable to contribute much to everyday vision, and their "non-use" further impairs synaptic strength. However, residual structures can be reactivated by engaging them in repetitive stimulation by different means: (i) visual experience, (ii) visual training, or (iii) noninvasive electrical brain current stimulation. These methods lead to strengthening of synaptic transmission and synchronization of partially damaged structures (within-systems plasticity) and downstream neuronal networks (network plasticity). Just as in normal perceptual learning, synaptic plasticity can improve vision and lead to vision restoration. This can be induced at any time after the lesion, at all ages and in all types of visual field impairments after retinal or brain damage (stroke, neurotrauma, glaucoma, amblyopia, age-related macular degeneration). If and to what extent vision restoration can be achieved is a function of the amount of residual tissue and its activation state. However, sustained improvements require repetitive

  19. Space environment robot vision system

    NASA Technical Reports Server (NTRS)

    Wood, H. John; Eichhorn, William L.

    1990-01-01

    A prototype twin-camera stereo vision system for autonomous robots has been developed at Goddard Space Flight Center. Standard charge coupled device (CCD) imagers are interfaced with commercial frame buffers and direct memory access to a computer. The overlapping portions of the images are analyzed using photogrammetric techniques to obtain information about the position and orientation of objects in the scene. The camera head consists of two 510 x 492 x 8-bit CCD cameras mounted on individually adjustable mounts. The 16 mm efl lenses are designed for minimum geometric distortion. The cameras can be rotated in the pitch, roll, and yaw (pan angle) directions with respect to their optical axes. Calibration routines have been developed which automatically determine the lens focal lengths and pan angle between the two cameras. The calibration utilizes observations of a calibration structure with known geometry. Test results show the precision attainable is plus or minus 0.8 mm in range at 2 m distance using a camera separation of 171 mm. To demonstrate a task needed on Space Station Freedom, a target structure with a movable I beam was built. The camera head can autonomously direct actuators to dock the I-beam to another one so that they could be bolted together.
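
    The quoted precision can be sanity-checked against the standard stereo range-error relation dZ = Z² · dd / (f · B), where dd is the disparity measurement error on the sensor. The implied sub-micrometre disparity precision (a small fraction of a CCD pixel, i.e. sub-pixel target localisation) is an inference from the published numbers, not a figure stated in the record.

      def disparity_precision_mm(dZ_mm, Z_mm, f_mm, B_mm):
          """Invert dZ = Z^2 * dd / (f * B) for the disparity error dd."""
          return dZ_mm * f_mm * B_mm / Z_mm ** 2

      dd = disparity_precision_mm(0.8, 2000.0, 16.0, 171.0)
      print(f"{dd * 1000.0:.2f} micrometres on the sensor")  # ~0.55 um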

  1. COHERENT LASER VISION SYSTEM (CLVS) OPTION PHASE

    SciTech Connect

    Robert Clark

    1999-11-18

    The purpose of this research project was to develop a prototype fiber-optic based Coherent Laser Vision System (CLVS) suitable for DOE's EM Robotic program. The system provides three-dimensional (3D) vision for monitoring situations in which it is necessary to update the dimensional spatial data on the order of once per second. The system has total immunity to ambient lighting conditions.

  2. Artificial vision support system (AVS(2)) for improved prosthetic vision.

    PubMed

    Fink, Wolfgang; Tarbell, Mark A

    2014-11-01

    State-of-the-art and upcoming camera-driven, implanted artificial vision systems provide only tens to hundreds of electrodes, affording only limited visual perception for blind subjects. Therefore, real-time image processing is crucial to enhance and optimize this limited perception. Since tens or hundreds of pixels/electrodes allow only for a very crude approximation of the typically megapixel optical resolution of the external camera image feed, the preservation and enhancement of contrast differences and transitions, such as edges, are especially important compared to picture details such as object texture. An Artificial Vision Support System (AVS(2)) is devised that displays the captured video stream in a pixelation conforming to the dimensions of the epi-retinal implant electrode array. AVS(2), using efficient image processing modules, modifies the captured video stream in real time, enhancing 'present but hidden' objects to overcome inadequacies or extremes in the camera imagery. As a result, visual prosthesis carriers may now be able to discern such objects in their 'field of view', thus enabling mobility in environments that would otherwise be too hazardous to navigate. The image processing modules can be engaged repeatedly in a user-defined order, which is a unique capability. AVS(2) is directly applicable to any artificial vision system that is based on an imaging modality (video, infrared, sound, ultrasound, microwave, radar, etc.) as the first step in the stimulation/processing cascade, such as: retinal implants (i.e. epi-retinal, sub-retinal, suprachoroidal), optic nerve implants, cortical implants, electric tongue stimulators, or tactile stimulators.
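
    The pixelation stage is easy to picture: the megapixel camera frame is reduced to the electrode-array resolution, optionally after an enhancement pass, and modules can be chained in a user-defined order. A minimal sketch assuming a hypothetical 6 x 10 electrode grid; none of this is the AVS(2) implementation.

      import numpy as np

      def enhance_edges(frame):
          gy, gx = np.gradient(frame.astype(float))
          return frame + np.hypot(gx, gy)       # crude edge/contrast boost

      def pixelate(frame, rows=6, cols=10):
          """Block-average a grayscale frame down to rows x cols electrodes."""
          h, w = frame.shape
          out = np.empty((rows, cols))
          for r in range(rows):
              for c in range(cols):
                  out[r, c] = frame[r*h//rows:(r+1)*h//rows,
                                    c*w//cols:(c+1)*w//cols].mean()
          return out

      # modules engaged in a user-chosen order, e.g. enhance then pixelate:
      stimulus = pixelate(enhance_edges(np.random.rand(480, 640) * 255.0))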

  3. Using perturbations to identify the brain circuits underlying active vision.

    PubMed

    Wurtz, Robert H

    2015-09-19

    The visual and oculomotor systems in the brain have been studied extensively in the primate. Together, they can be regarded as a single brain system that underlies active vision--the normal vision that begins with visual processing in the retina and extends through the brain to the generation of eye movement by the brainstem. The system is probably one of the most thoroughly studied brain systems in the primate, and it offers an ideal opportunity to evaluate the advantages and disadvantages of the series of perturbation techniques that have been used to study it. The perturbations have been critical in moving from correlations between neuronal activity and behaviour closer to a causal relation between neuronal activity and behaviour. The same perturbation techniques have also been used to tease out neuronal circuits that are related to active vision that in turn are driving behaviour. The evolution of perturbation techniques includes ablation of both cortical and subcortical targets, punctate chemical lesions, reversible inactivations, electrical stimulation, and finally the expanding optogenetic techniques. The evolution of perturbation techniques has supported progressively stronger conclusions about what neuronal circuits in the brain underlie active vision and how the circuits themselves might be organized.

  4. Flight Testing an Integrated Synthetic Vision System

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Arthur, Jarvis J., III; Bailey, Randall E.; Prinzel, Lawrence J., III

    2005-01-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications to eliminate low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance for transport aircraft. The SVS concept being developed at NASA encompasses the integration of tactical and strategic Synthetic Vision Display Concepts (SVDC) with Runway Incursion Prevention System (RIPS) alerting and display concepts, real-time terrain database integrity monitoring equipment (DIME), and Enhanced Vision Systems (EVS) and/or improved Weather Radar for real-time object detection and database integrity monitoring. A flight test evaluation was jointly conducted (in July and August 2004) by NASA Langley Research Center and an industry partner team under NASA's Aviation Safety and Security, Synthetic Vision System project. A Gulfstream GV aircraft was flown over a 3-week period in the Reno/Tahoe International Airport (NV) local area and an additional 3-week period in the Wallops Flight Facility (VA) local area to evaluate integrated Synthetic Vision System concepts. The enabling technologies (RIPS, EVS and DIME) were integrated into the larger SVS concept design. This paper presents experimental methods and the high level results of this flight test.

  5. Real and virtual robot head for active vision research

    NASA Astrophysics Data System (ADS)

    Marapane, Suresh B.; Lassiter, Nils T.; Trivedi, Mohan M.

    1992-11-01

    In the emerging paradigm of animate vision, visual processes are not thought of as being independent of cognitive or motor processing, but as an integrated system within the context of visual behavior. Intimate coupling of sensory and motor systems has been found to improve significantly the performance of behavior-based vision systems. In order to conduct research in animate vision, one requires an active image acquisition platform. This platform should possess the capability to change the geometric and optical parameters of the vision sensors under the control of a computer. This has led to the development of several robotic sensory-motor systems with multiple degrees of freedom (DOF). In this paper we describe the status of ongoing work in developing a sensory-motor robotic system, R2H, with ten degrees of freedom for research in active vision. A Graphical Simulation and Animation (GSA) environment is also presented. The objective of building the GSA system is to create an environment that aids researchers in developing high-performance and reliable software and hardware in the most effective manner. The GSA includes a complete kinematic simulation of the R2H system, its sensors, and its workspace. The GSA environment is not meant to be a substitute for performing real experiments but to complement them; thus, it will be an integral part of the total research effort. With the aid of the GSA environment, Depth from Defocus (DFD), Depth from Vergence, and Depth from Stereo modules have been implemented and tested. The power and usefulness of the GSA system as a research tool is demonstrated by acquiring and analyzing stereo images in the virtual world.

  6. Near real-time stereo vision system

    NASA Technical Reports Server (NTRS)

    Anderson, Charles H. (Inventor); Matthies, Larry H. (Inventor)

    1993-01-01

    The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system is comprised of two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.
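
    The abstract names each stage, so a compact sketch can show the pipeline's shape: reduce the images into coarse pyramid levels (the REDUCE step underlying both Gaussian and Laplacian pyramids), then match a patch along the scanline by sum-of-squared differences, the discrete form of least-squares correlation. Simplified, illustrative Python, not the flight code; the Bayesian confidence step is omitted.

      import numpy as np

      K = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0   # binomial blur kernel

      def reduce_level(img):
          """Blur and subsample: one pyramid REDUCE step."""
          img = np.apply_along_axis(np.convolve, 0, img.astype(float), K, 'same')
          img = np.apply_along_axis(np.convolve, 1, img, K, 'same')
          return img[::2, ::2]

      def disparity_at(left, right, y, x, win=3, d_max=16):
          """Best disparity for the window at (y, x) by least-squares (SSD)."""
          h = win // 2
          patch = left[y-h:y+h+1, x-h:x+h+1].astype(float)
          costs = [np.sum((patch - right[y-h:y+h+1, x-d-h:x-d+h+1])**2)
                   for d in range(d_max)]      # assumes x - d_max - h >= 0
          return int(np.argmin(costs))

      # disparities found on a reduced (coarse) level can seed the search
      # at finer levels, keeping the per-pixel search range small.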

  7. Identifying the computational requirements of an integrated top-down-bottom-up model for overt visual attention within an active vision system.

    PubMed

    McBride, Sebastian; Huelse, Martin; Lee, Mark

    2013-01-01

    Computational visual attention systems have been constructed in order for robots and other devices to detect and locate regions of interest in their visual world. Such systems often attempt to take account of what is known of the human visual system and employ concepts, such as 'active vision', to gain various perceived advantages. However, despite the potential for gaining insights from such experiments, the computational requirements for visual attention processing are often not clearly presented from a biological perspective. This was the primary objective of this study, attained through two specific phases of investigation: 1) conceptual modeling of a top-down-bottom-up framework through critical analysis of the psychophysical and neurophysiological literature, 2) implementation and validation of the model into robotic hardware (as a representative of an active vision system). Seven computational requirements were identified: 1) transformation of retinotopic to egocentric mappings, 2) spatial memory for the purposes of medium-term inhibition of return, 3) synchronization of 'where' and 'what' information from the two visual streams, 4) convergence of top-down and bottom-up information to a centralized point of information processing, 5) a threshold function to elicit saccade action, 6) a function to represent task relevance as a ratio of excitation and inhibition, and 7) derivation of excitation and inhibition values from object-associated feature classes. The model provides further insight into the nature of data representation and transfer between brain regions associated with the vertebrate 'active' visual attention system. In particular, the model lends strong support to the functional role of the lateral intraparietal region of the brain as a primary area of information consolidation that directs putative action through the use of a 'priority map'.
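
    Requirements 4 through 6 in the list above (convergence of the top-down and bottom-up maps, a saccade threshold, and task relevance as an excitation/inhibition ratio) can be made concrete in a toy sketch; the array shapes, values, and threshold below are illustrative assumptions, not the authors' parameters.

        # Toy priority-map combination; values and shapes are illustrative
        # assumptions, not parameters from the study.
        import numpy as np

        def priority_map(bottom_up, excitation, inhibition, eps=1e-6):
            """Combine bottom-up saliency with task relevance expressed as
            a ratio of excitation to inhibition (requirement 6)."""
            relevance = excitation / (inhibition + eps)
            return bottom_up * relevance

        def select_saccade(priority, threshold=0.5):
            """Requirement 5: elicit a saccade only above threshold."""
            y, x = np.unravel_index(np.argmax(priority), priority.shape)
            return (y, x) if priority[y, x] > threshold else None

        rng = np.random.default_rng(0)
        p = priority_map(rng.random((8, 8)), rng.random((8, 8)), rng.random((8, 8)))
        print(select_saccade(p, threshold=0.5))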

  8. [A biotechnical system for diagnosis and treatment of binocular vision impairments].

    PubMed

    Korzhuk, N L; Shcheglova, M V

    2008-01-01

    Automation of binocular vision biorhythm diagnosis and improvement of the efficacy of treatment of vision impairments are important medical problems. In the authors' opinion, solving these problems requires taking into account the correlation between binocular vision and the electrical activity of the brain. A biotechnical system for diagnosis and treatment of binocular vision impairments was developed to implement diagnostic and treatment procedures based on the detection of this correlation.

  9. A Design Methodology For Industrial Vision Systems

    NASA Astrophysics Data System (ADS)

    Batchelor, B. G.; Waltz, F. M.; Snyder, M. A.

    1988-11-01

    The cost of design, rather than that of target system hardware, represents the principal factor inhibiting the adoption of machine vision systems by manufacturing industry. To reduce design costs to a minimum, a number of software and hardware aids have been developed or are currently being built by the authors. These design aids are as follows: a. An expert system for giving advice about which image acquisition techniques (i.e. lighting/viewing techniques) might be appropriate in a given situation. b. A program to assist in the selection and setup of camera lenses. c. A rich repertoire of image processing procedures, integrated with the AI language Prolog. This combination (called ProVision) provides a facility for experimenting with intelligent image processing techniques and is intended to allow rapid prototyping of algorithms and/or heuristics. d. Fast image processing hardware, capable of implementing commands in the ProVision language. The speed of operation of this equipment is sufficiently high for it to be used, without modification, in many industrial applications. Where this is not possible, even higher execution speed may be achieved by adding extra modules to the processing hardware. In this way, it is possible to trade speed against the cost of the target system hardware. New and faster implementations of a given algorithm/heuristic can usually be achieved with the expenditure of only a small effort. Throughout this article, the emphasis is on designing an industrial vision system in a smooth and effortless manner. In order to illustrate our main thesis that the design of industrial vision systems can be made very much easier through the use of suitable utilities, the article concludes with a discussion of a case study: the dissection of tiny plants using a visually controlled robot.

  10. Leisure Activity Participation of Elderly Individuals with Low Vision.

    ERIC Educational Resources Information Center

    Heinemann, Allen W.

    1988-01-01

    Studied low vision elderly clinic patients (N=63) who reported participation in six categories of leisure activities currently and at onset of vision loss. Found subjects reported significant declines in five of six activity categories. Found prior activity participation was related to current participation only for active crafts, participatory…

  11. The 3D laser radar vision processor system

    NASA Technical Reports Server (NTRS)

    Sebok, T. M.

    1990-01-01

    Loral Defense Systems (LDS) developed a 3D Laser Radar Vision Processor system capable of detecting, classifying, and identifying small mobile targets as well as larger fixed targets, using three-dimensional laser radar imagery, for use with a robotic-type system. This processor system is designed to interface with the NASA Johnson Space Center in-house Extra Vehicular Activity (EVA) Retriever robot program and provide it with the information needed to fetch and grasp targets in a space-type scenario.

  12. Mobile robot on-board vision system

    SciTech Connect

    McClure, V.W.; Nai-Yung Chen.

    1993-06-15

    An automatic robot system is described, comprising: an AGV for transporting and transferring work pieces; a control computer on board the AGV; a process machine for working on work pieces; and a flexible robot arm ending in a gripper with two fingers, where the arm and gripper are controllable by the control computer for engaging a work piece, picking it up, and setting it down and releasing it at a commanded location. Locating beacon means mounted on the process machine mark the place where work pieces are picked up and set down. Vision means, comprising a camera fixed in the coordinate system of the gripper and attached to the robot arm near the gripper so that the space between the gripper fingers lies within its field of view, detect the locating beacon means and provide the control computer with visual information on their location, from which the computer calculates the pick-up and set-down place on the process machine. That place is a nest means, which further serves to hold a work piece in place while it is worked on. The robot system also comprises nest beacon means, located in the nest means and detectable by the vision means, that inform the control computer whether or not a work piece is present in the nest means.

  13. Zoom Vision System For Robotic Welding

    NASA Technical Reports Server (NTRS)

    Gilbert, Jeffrey L.; Hudyma, Russell M.

    1990-01-01

    Rugged zoom lens subsystem proposed for use in along-the-torch vision system of robotic welder. Enables system to adapt, via simple mechanical adjustments, to gas cups of different lengths, electrodes of different protrusions, and/or different distances between end of electrode and workpiece. Unnecessary to change optical components to accommodate changes in geometry. Easy to calibrate with respect to object in view. Provides variable focus and variable magnification.

  14. Vision enhanced navigation for unmanned systems

    NASA Astrophysics Data System (ADS)

    Wampler, Brandon Loy

    A vision-based simultaneous localization and mapping (SLAM) algorithm is evaluated for use on unmanned systems. SLAM is a technique used by a vehicle to build a map of an environment while concurrently keeping track of its location within the map, without a priori knowledge. The work in this thesis is focused on using SLAM as a navigation solution when global positioning system (GPS) service is degraded or temporarily unavailable. Previous work on unmanned systems that led to the determination that a better navigation solution than GPS alone is needed is first presented. This previous work includes control of unmanned systems, simulation, and unmanned vehicle hardware testing. The proposed SLAM algorithm follows the work originally developed by Davison et al., who dub their algorithm MonoSLAM [1-4]. A new approach using the Pyramidal Lucas-Kanade feature tracking algorithm from Intel's OpenCV (open computer vision) library is presented as a means of keeping correct landmark correspondences as the vehicle moves through the scene. Though this landmark tracking method is unusable for long-term SLAM due to its inability to recognize revisited landmarks, unlike the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), its computational efficiency makes it a good candidate for short-term navigation between GPS position updates. Additional sensor information is then considered by fusing INS and GPS information into the SLAM filter. The SLAM system, in its vision-only and vision/IMU forms, is tested on a table top, in an open room, and finally in an outdoor environment. For the outdoor environment, a form of the SLAM algorithm that fuses vision, IMU, and GPS information is tested. The proposed SLAM algorithm, in its several forms, is implemented in C++ using an Extended Kalman Filter (EKF). Experiments utilizing a live video feed from a webcam are performed. The different forms of the filter are compared and conclusions are drawn on the performance of each.
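
    Since the thesis names the Pyramidal Lucas-Kanade tracker from Intel's OpenCV library, a minimal frame-to-frame tracking loop with that tracker looks roughly like the sketch below. The camera index and tracker parameters are assumptions, and the EKF-SLAM measurement update (and re-detection of lost tracks) is omitted.

        # Minimal pyramidal Lucas-Kanade tracking loop with OpenCV; a
        # sketch of the landmark-correspondence step only, not the
        # thesis's EKF-SLAM code.
        import cv2

        cap = cv2.VideoCapture(0)            # webcam index is an assumption
        ok, frame = cap.read()
        prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                     qualityLevel=0.01, minDistance=7)
        while ok:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Track features from the previous frame into the current one.
            p1, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
            good = p1[status.ravel() == 1]   # keep successfully tracked points
            prev_gray, p0 = gray, good.reshape(-1, 1, 2)
        cap.release()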

  15. Missileborne artificial vision system (MAVIS)

    NASA Astrophysics Data System (ADS)

    Andes, David K.; Witham, James C.; Miles, Michael D.

    1994-03-01

    The Naval Air Warfare Center, China Lake has developed a real time, hardware and software system designed to implement and evaluate biologically inspired retinal and cortical models. The hardware is based on the Adaptive Solutions Inc. massively parallel CNAPS system COHO boards. Each COHO board is a standard size 6U VME card featuring 256 fixed point, RISC processors running at 20 MHz in a SIMD configuration. Each COHO board has a Companion board built to support a real time VSB interface to an imaging seeker, a NTSC camera and to other COHO boards. The system is designed to have multiple SIMD machines each performing different Corticomorphic functions. The system level software has been developed which allows a high level description of Corticomorphic structures to be translated into the native microcode of the CNAPS chips. Corticomorphic structures are those neural structures with a form similar to that of the retina, the lateral geniculate nucleus or the visual cortex. This real time hardware system is designed to be shrunk into a volume compatible with air launched tactical missiles. Initial versions of the software and hardware have been completed and are in the early stages of integration with a missile seeker.

  16. Missileborne Artificial Vision System (MAVIS)

    NASA Technical Reports Server (NTRS)

    Andes, David K.; Witham, James C.; Miles, Michael D.

    1994-01-01

    Several years ago when INTEL and China Lake designed the ETANN chip, analog VLSI appeared to be the only way to do high density neural computing. In the last five years, however, digital parallel processing chips capable of performing neural computation functions have evolved to the point of rough equality with analog chips in system level computational density. The Naval Air Warfare Center, China Lake, has developed a real time, hardware and software system designed to implement and evaluate biologically inspired retinal and cortical models. The hardware is based on the Adaptive Solutions Inc. massively parallel CNAPS system COHO boards. Each COHO board is a standard size 6U VME card featuring 256 fixed point, RISC processors running at 20 MHz in a SIMD configuration. Each COHO board has a companion board built to support a real time VSB interface to an imaging seeker, a NTSC camera, and to other COHO boards. The system is designed to have multiple SIMD machines each performing different corticomorphic functions. The system level software has been developed which allows a high level description of corticomorphic structures to be translated into the native microcode of the CNAPS chips. Corticomorphic structures are those neural structures with a form similar to that of the retina, the lateral geniculate nucleus, or the visual cortex. This real time hardware system is designed to be shrunk into a volume compatible with air launched tactical missiles. Initial versions of the software and hardware have been completed and are in the early stages of integration with a missile seeker.

  17. Visual Turing test for computer vision systems

    PubMed Central

    Geman, Donald; Geman, Stuart; Hallonquist, Neil; Younes, Laurent

    2015-01-01

    Today, computer vision systems are tested by their accuracy in detecting and localizing instances of objects. As an alternative, and motivated by the ability of humans to provide far richer descriptions and even tell a story about an image, we construct a “visual Turing test”: an operator-assisted device that produces a stochastic sequence of binary questions from a given test image. The query engine proposes a question; the operator either provides the correct answer or rejects the question as ambiguous; the engine proposes the next question (“just-in-time truthing”). The test is then administered to the computer-vision system, one question at a time. After the system’s answer is recorded, the system is provided the correct answer and the next question. Parsing is trivial and deterministic; the system being tested requires no natural language processing. The query engine employs statistical constraints, learned from a training set, to produce questions with essentially unpredictable answers—the answer to a question, given the history of questions and their correct answers, is nearly equally likely to be positive or negative. In this sense, the test is only about vision. The system is designed to produce streams of questions that follow natural story lines, from the instantiation of a unique object, through an exploration of its properties, and on to its relationships with other uniquely instantiated objects. PMID:25755262

  18. Applications of Augmented Vision Head-Mounted Systems in Vision Rehabilitation

    PubMed Central

    Peli, Eli; Luo, Gang; Bowers, Alex; Rensing, Noa

    2007-01-01

    Vision loss typically affects either the wide peripheral vision (important for mobility), or central vision (important for seeing details). Traditional optical visual aids usually recover the lost visual function, but at a high cost for the remaining visual function. We have developed a novel concept of vision-multiplexing using augmented vision head-mounted display systems to address vision loss. Two applications are discussed in this paper. In the first, minified edge images from a head-mounted video camera are presented on a see-through display providing visual field expansion for people with peripheral vision loss, while still enabling the full resolution of the residual central vision to be maintained. The concept has been applied in daytime and nighttime devices. A series of studies suggested that the system could help with visual search, obstacle avoidance, and nighttime mobility. Subjects were positive in their ratings of device cosmetics and ergonomics. The second application is for people with central vision loss. Using an on-axis aligned camera and display system, central visibility is enhanced with 1:1 scale edge images, while still enabling the wide field of the unimpaired peripheral vision to be maintained. The registration error of the system was found to be low in laboratory testing. PMID:18172511

  19. Progress in building a cognitive vision system

    NASA Astrophysics Data System (ADS)

    Benjamin, D. Paul; Lyons, Damian; Yue, Hong

    2016-05-01

    We are building a cognitive vision system for mobile robots that works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion to create a local dynamic spatial model. These local 3D models are composed to create an overall 3D model of the robot and its environment. This approach turns the computer vision problem into a search problem whose goal is the acquisition of sufficient spatial understanding for the robot to succeed at its tasks. The research hypothesis of this work is that the movements of the robot's cameras are only those that are necessary to build a sufficiently accurate world model for the robot's current goals. For example, if the goal is to navigate through a room, the model needs to contain any obstacles that would be encountered, giving their approximate positions and sizes. Other information does not need to be rendered into the virtual world, so this approach trades model accuracy for speed.

  20. Robust active stereo vision using Kullback-Leibler divergence.

    PubMed

    Wang, Yongchang; Liu, Kai; Hao, Qi; Wang, Xianwang; Lau, Daniel L; Hassebrook, Laurence G

    2012-03-01

    Active stereo vision is a method of 3D surface scanning involving the projecting and capturing of a series of light patterns where depth is derived from correspondences between the observed and projected patterns. In contrast, passive stereo vision reveals depth through correspondences between textured images from two or more cameras. By employing a projector, active stereo vision systems find correspondences between two or more cameras, without ambiguity, independent of object texture. In this paper, we present a hybrid 3D reconstruction framework that supplements projected pattern correspondence matching with texture information. The proposed scheme consists of using projected pattern data to derive initial correspondences across cameras and then using texture data to eliminate ambiguities. Pattern modulation data are then used to estimate error models from which Kullback-Leibler divergence refinement is applied to reduce misregistration errors. Using only a small number of patterns, the presented approach reduces measurement errors versus traditional structured light and phase matching methodologies while being insensitive to gamma distortion, projector flickering, and secondary reflections. Experimental results demonstrate these advantages in terms of enhanced 3D reconstruction performance in the presence of noise, deterministic distortions, and conditions of texture and depth contrast.
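
    The refinement step rests on the standard discrete Kullback-Leibler divergence, D_KL(P||Q) = sum_i p_i log(p_i / q_i). The numpy sketch below is a generic rendering of that quantity, not the authors' pattern-modulation error models.

        # Discrete Kullback-Leibler divergence, the quantity underlying
        # the refinement step; a generic sketch, not the authors' code.
        import numpy as np

        def kl_divergence(p, q, eps=1e-12):
            """D_KL(P || Q) for discrete distributions given as arrays."""
            p = np.asarray(p, dtype=np.float64) + eps
            q = np.asarray(q, dtype=np.float64) + eps
            p /= p.sum()   # renormalize after smoothing
            q /= q.sum()
            return float(np.sum(p * np.log(p / q)))

        print(kl_divergence([0.7, 0.2, 0.1], [0.5, 0.3, 0.2]))  # ~0.085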

  1. Three-dimensional motion estimation using genetic algorithms from image sequence in an active stereo vision system

    NASA Astrophysics Data System (ADS)

    Dipanda, Albert; Ajot, Jerome; Woo, Sanghyuk

    2003-06-01

    This paper proposes a method for estimating 3D rigid motion parameters from an image sequence of a moving object. The 3D surface measurement is achieved using an active stereo vision system composed of a camera and a light projector, which illuminates objects to be analyzed by a pyramid-shaped laser beam. By associating the laser rays and the spots in the 2D image, the 3D points corresponding to these spots are reconstructed. Each image of the sequence provides a set of 3D points, which is modeled by a B-spline surface. Therefore, estimating the motion between two images of the sequence boils down to matching two B-spline surfaces. We cast the matching problem as an optimization problem and find the optimal solution using Genetic Algorithms. A chromosome is encoded by concatenating six binary-coded parameters: the three angles of rotation and the x-axis, y-axis, and z-axis translations. We have defined an original fitness function to calculate the similarity measure between two surfaces. The matching process is performed iteratively: the number of points to be matched grows as the process advances and results are refined until convergence. Experimental results with a real image sequence are presented to show the effectiveness of the method.
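
    To make the chromosome layout above concrete, the six binary-coded fields (three rotation angles and three translations) can be packed into one bit string and decoded as in the following sketch; the bit width per parameter and the parameter ranges are illustrative assumptions, not values from the paper.

        # Hypothetical encoding/decoding of a 6-parameter motion chromosome
        # (3 rotation angles + 3 translations); bit width and ranges are
        # illustrative assumptions, not values from the paper.
        import random

        BITS = 10                      # bits per parameter (assumed)
        RANGES = [(-3.14, 3.14)] * 3 + [(-100.0, 100.0)] * 3  # rot (rad), trans (mm)

        def decode(chromosome):
            """Map a 60-bit string to six real-valued motion parameters."""
            params = []
            for i, (lo, hi) in enumerate(RANGES):
                gene = chromosome[i * BITS:(i + 1) * BITS]
                value = int(gene, 2) / (2 ** BITS - 1)   # normalize to [0, 1]
                params.append(lo + value * (hi - lo))
            return params

        chrom = ''.join(random.choice('01') for _ in range(BITS * len(RANGES)))
        print(decode(chrom))   # [rx, ry, rz, tx, ty, tz]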

  2. Ball stud inspection system using machine vision.

    PubMed

    Shin, Dongik; Han, Changsoo; Moon, Young Shik

    2002-01-01

    In this paper, a vision-based inspection system that measures the dimensions of a ball stud is designed and implemented. The system acquires silhouetted images by backlighting and extracts the outlines of the nearly dichotomized images to subpixel accuracy. The sets of boundary data are modeled with reasonable geometric primitives, and the parameters of the models are estimated in a manner that minimizes error. Jig-fixtures and servo systems for the inspection are also contrived. The system rotates the inspected object so as to recognize objects in space, not only on a plane, and moves the object vertically so that it may take several pictures of different parts of the object, improving measurement resolution. The performance of the system is evaluated by measuring the dimensions of a standard ball, a standard cylinder, and a ball stud.
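
    Estimating primitive parameters in a manner that minimizes error, as described above, often reduces to linear least squares. For the circle case (e.g., the silhouette of the standard ball) a common formulation is the Kasa fit sketched below, offered as an assumed illustration rather than the paper's algorithm.

        # Linear least-squares circle fit (Kasa method) to boundary
        # points; an illustrative stand-in for the error-minimizing
        # primitive fitting described in the abstract.
        import numpy as np

        def fit_circle(x, y):
            """Fit x^2 + y^2 + a*x + b*y + c = 0; return center, radius."""
            A = np.column_stack([x, y, np.ones_like(x)])
            rhs = -(x ** 2 + y ** 2)
            (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
            cx, cy = -a / 2.0, -b / 2.0
            r = np.sqrt(cx ** 2 + cy ** 2 - c)
            return cx, cy, r

        # Synthetic check: points on a circle of radius 2 at (3, -1).
        theta = np.linspace(0, 2 * np.pi, 50)
        x, y = 3 + 2 * np.cos(theta), -1 + 2 * np.sin(theta)
        print(fit_circle(x, y))   # ~(3.0, -1.0, 2.0)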

  3. 360 degree vision system: opportunities in transportation

    NASA Astrophysics Data System (ADS)

    Thibault, Simon

    2007-09-01

    Panoramic technologies are experiencing new and exciting opportunities in the transportation industries. The advantages of panoramic imagers are numerous: increased area coverage with fewer cameras, imaging of multiple targets simultaneously, instantaneous full-horizon detection, easier integration of various applications on the same imager, and others. This paper reports our work on panomorph optics and potential usage in transportation applications. The novel panomorph lens is a new type of high-resolution panoramic imager well suited to the transportation industries. The panomorph lens uses optimization techniques to improve the performance of a customized optical system for specific applications. By adding a custom angle-to-pixel relation at the optical design stage, the optical system provides ideal image coverage designed to reduce and optimize the processing. The optics can be customized for the visible, near-infrared (NIR) or infrared (IR) wavebands. The panomorph lens is designed to optimize the cost per pixel, which is particularly important in the IR. We discuss the use of the 360-degree vision system, which can enhance on-board collision avoidance systems, intelligent cruise controls, and parking assistance. 360-degree panoramic vision systems might enable safer highways and a significant reduction in casualties.

  4. 75 FR 60478 - In the Matter of Certain Machine Vision Software, Machine Vision Systems, and Products Containing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-30

    ... COMMISSION In the Matter of Certain Machine Vision Software, Machine Vision Systems, and Products Containing... importation of certain machine vision software, machine vision systems, or products containing same by reason... Soft'') of Japan; Fuji Machine Manufacturing Co., Ltd. of Japan and Fuji America Corporation of...

  5. Synthetic Vision Systems - Operational Considerations Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.

    2007-01-01

    Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents/accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness, workload, and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.

  6. Knowledge-based machine vision systems for space station automation

    NASA Technical Reports Server (NTRS)

    Ranganath, Heggere S.; Chipman, Laure J.

    1989-01-01

    Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.

  7. Real-time Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Rahman, Zia-Ur; Jobson, Daniel J.; Woodell, Glenn A.; Harrah, Steven D.

    2005-01-01

    Flying in poor visibility conditions, such as rain, snow, fog or haze, is inherently dangerous. However these conditions can occur at nearly any location, so inevitably pilots must successfully navigate through them. At NASA Langley Research Center (LaRC), under support of the Aviation Safety and Security Program Office and the Systems Engineering Directorate, we are developing an Enhanced Vision System (EVS) that combines image enhancement and synthetic vision elements to assist pilots flying through adverse weather conditions. This system uses a combination of forward-looking infrared and visible sensors for data acquisition. A core function of the system is to enhance and fuse the sensor data in order to increase the information content and quality of the captured imagery. These operations must be performed in real-time for the pilot to use while flying. For image enhancement, we are using the LaRC patented Retinex algorithm since it performs exceptionally well for improving low-contrast range imagery typically seen during poor visibility conditions. In general, real-time operation of the Retinex requires specialized hardware. To date, we have successfully implemented a single-sensor real-time version of the Retinex on several different Digital Signal Processor (DSP) platforms. In this paper we give an overview of the EVS and its performance requirements for real-time enhancement and fusion and we discuss our current real-time Retinex implementations on DSPs.

  8. DLP™-based dichoptic vision test system

    PubMed Central

    Woods, Russell L.; Apfelbaum, Henry L.; Peli, Eli

    2010-01-01

    It can be useful to present a different image to each of the two eyes while they cooperatively view the world. Such dichoptic presentation can occur in investigations of stereoscopic and binocular vision (e.g., strabismus, amblyopia) and vision rehabilitation in clinical and research settings. Various techniques have been used to construct dichoptic displays. The most common and most flexible modern technique uses liquid-crystal (LC) shutters. When used in combination with cathode ray tube (CRT) displays, there is often leakage of light from the image intended for one eye into the view of the other eye. Such interocular crosstalk is 14% even in our state of the art CRT-based dichoptic system. While such crosstalk may have minimal impact on stereo movie or video game experiences, it can defeat clinical and research investigations. We use micromirror digital light processing (DLP™) technology to create a novel dichoptic visual display system with substantially lower interocular crosstalk (0.3%; remaining crosstalk comes from the LC shutters). The DLP system normally uses a color wheel to display color images. Our approach is to disable the color wheel, synchronize the display directly to the computer’s sync signal, allocate each of the three (former) color presentations to one or both eyes, and open and close the LC shutters in synchrony with those color events. PMID:20210457

  9. DLP™-based dichoptic vision test system

    NASA Astrophysics Data System (ADS)

    Woods, Russell L.; Apfelbaum, Henry L.; Peli, Eli

    2010-01-01

    It can be useful to present a different image to each of the two eyes while they cooperatively view the world. Such dichoptic presentation can occur in investigations of stereoscopic and binocular vision (e.g., strabismus, amblyopia) and vision rehabilitation in clinical and research settings. Various techniques have been used to construct dichoptic displays. The most common and most flexible modern technique uses liquid-crystal (LC) shutters. When used in combination with cathode ray tube (CRT) displays, there is often leakage of light from the image intended for one eye into the view of the other eye. Such interocular crosstalk is 14% even in our state of the art CRT-based dichoptic system. While such crosstalk may have minimal impact on stereo movie or video game experiences, it can defeat clinical and research investigations. We use micromirror digital light processing (DLP™) technology to create a novel dichoptic visual display system with substantially lower interocular crosstalk (0.3%; remaining crosstalk comes from the LC shutters). The DLP system normally uses a color wheel to display color images. Our approach is to disable the color wheel, synchronize the display directly to the computer's sync signal, allocate each of the three (former) color presentations to one or both eyes, and open and close the LC shutters in synchrony with those color events.

  10. Forward Obstacle Detection System by Stereo Vision

    NASA Astrophysics Data System (ADS)

    Iwata, Hiroaki; Saneyoshi, Keiji

    Forward obstacle detection is needed to prevent car accidents. We have developed a forward obstacle detection system that achieves good detectability and distance accuracy using stereo vision alone. The system runs in real time using a stereo processing system based on a Field-Programmable Gate Array (FPGA). Road surfaces are detected so that the search space can be limited to the drivable area, and a smoothing filter is also used; owing to these measures, distance accuracy is improved. In the experiments, this system could detect forward obstacles 100 m away. Its distance error up to 80 m was less than 1.5 m, and it could immediately detect cutting-in objects.
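
    The reported figures can be read against the standard stereo relation Z = f·B/d, which also shows why range error grows with distance and why the road-surface constraint and smoothing filter matter. The focal length and baseline in the sketch below are assumed values, not those of the actual FPGA system.

        # Depth from disparity, Z = f * B / d, plus the error of a
        # half-pixel disparity change; camera parameters are assumed.
        FOCAL_PX = 1200.0    # focal length in pixels (assumed)
        BASELINE_M = 0.35    # stereo baseline in meters (assumed)

        def depth_m(disparity_px):
            return FOCAL_PX * BASELINE_M / disparity_px

        for d in (42, 10, 5):   # nearer to farther
            z = depth_m(d)
            quant_err = depth_m(d - 0.5) - z   # half-pixel disparity error
            print(f"d={d:>2}px  Z={z:6.1f} m  half-pixel error={quant_err:5.2f} m")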

  11. Robot vision system programmed in Prolog

    NASA Astrophysics Data System (ADS)

    Batchelor, Bruce G.; Hack, Ralf

    1995-10-01

    This is the latest in a series of publications which develop the theme of programming a machine vision system using the artificial intelligence language Prolog. The article states the long-term objective of the research program of which this work forms part. Many but not yet all of the goals laid out in this plan have already been achieved in an integrated system, which uses a multi-layer control hierarchy. The purpose of the present paper is to demonstrate that a system based upon a Prolog controller is capable of making complex decisions and operating a standard robot. The authors chose, as a vehicle for this exercise, the task of playing dominoes against a human opponent. This game was selected for this demonstration since it models a range of industrial assembly tasks, where parts are to be mated together. (For example, a 'daisy chain' of electronic equipment and the interconnecting cables/adapters may be likened to a chain of dominoes.)

  12. Utilizing Robot Operating System (ROS) in Robot Vision and Control

    DTIC Science & Technology

    2015-09-01

    Master's thesis by Captain Joshua S. Lum, September 2015. Thesis Advisor: Xiaoping Yun; Co-Advisor: Zac Staples. Approved for public release; distribution is unlimited.

  13. HMD digital night vision system for fixed wing fighters

    NASA Astrophysics Data System (ADS)

    Foote, Bobby D.

    2013-05-01

    Digital night sensor technology offers both advantages and disadvantages over standard analog systems. As digital night sensor technology matures and its disadvantages are overcome, the transition away from analog-type sensors will accelerate with new programs. In response to this growing need, RCEVS is actively investing in digital night vision systems that will provide the performance needed for the future. Rockwell Collins and Elbit Systems of America continue to invest in digital night technology and have completed laboratory, ground, and preliminary flight testing to evaluate the key factors for night vision. These evaluations have led to a summary of the maturity of digital night capability and of the status of the key performance gap between analog and digital systems. The introduction of Digital Night Vision Systems can be found in the roadmap of future fixed-wing and rotorcraft programs beginning in 2015. This will bring a new set of capabilities to the pilot that will enhance the ability to perform night operations with no loss of performance.

  14. Fiber optic coherent laser radar 3d vision system

    SciTech Connect

    Sebastian, R.L.; Clark, R.B.; Simonson, D.L.

    1994-12-31

    Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.

  15. Geometric Variational Methods for Controlled Active Vision

    DTIC Science & Technology

    2006-08-01

    Related project publications: S. Haker, L. Zhu, and A. Tannenbaum, "Optimal mass transport for registration and warping," Int. Journal of Computer Vision, vol. 60, 2004, pp. 225-240; S. Angenent, S. Haker, and A. Tannenbaum, "Minimizing flows for the Monge-Kantorovich problem," SIAM J. Math. Analysis, vol. 35; "Shape analysis of structures using spherical wavelets" (with S. Haker and D. Nain), Proceedings of MICCAI, 2005.

  16. Vision-based augmented reality system

    NASA Astrophysics Data System (ADS)

    Chen, Jing; Wang, Yongtian; Shi, Qi; Yan, Dayuan

    2003-04-01

    The most promising aspect of augmented reality lies in its ability to integrate the virtual world of the computer with the real world of the user; namely, users can interact with real-world subjects and objects directly. This paper presents an experimental augmented reality system with a video see-through head-mounted device that displays virtual objects as if they were lying on the table together with real objects. In order to overlay virtual objects on the real world at the right position and orientation, accurate calibration and registration are essential. A vision-based method is used to estimate the camera's external parameters by tracking four known points with different colors. It achieves sufficient accuracy for non-critical applications such as gaming, annotation and so on.
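
    Recovering the camera's external parameters from four known points is a perspective-n-point (PnP) problem; with OpenCV it can be sketched as follows, where the marker coordinates, detected pixel locations, and camera intrinsics are placeholder assumptions rather than values from the paper.

        # Pose (external parameters) from four known points with OpenCV's
        # PnP solver; point layout and intrinsics are placeholder values.
        import numpy as np
        import cv2

        # 3D positions of the four colored markers on the table (meters).
        object_pts = np.array([[0, 0, 0], [0.2, 0, 0],
                               [0.2, 0.2, 0], [0, 0.2, 0]], dtype=np.float64)
        # Their detected 2D image locations (pixels), e.g. from color blobs.
        image_pts = np.array([[320, 240], [420, 238],
                              [424, 340], [318, 344]], dtype=np.float64)
        K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # intrinsics

        ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
        print(ok, rvec.ravel(), tvec.ravel())  # rotation (Rodrigues), translation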

  17. Hi-Vision telecine system using pickup tube

    NASA Astrophysics Data System (ADS)

    Iijima, Goro

    1992-08-01

    Hi-Vision broadcasting, offering far more lifelike pictures than those produced by existing television broadcasting systems, has enormous potential in both industrial and commercial fields. The dissemination of the Hi-Vision system will enable vivid, movie theater quality pictures to be readily enjoyed in homes in the near future. To convert motion film pictures into Hi-Vision signals, a telecine system is needed. The Hi-Vision telecine systems currently under development are the "laser telecine," "flying-spot telecine," and "Saticon telecine" systems. This paper provides an overview of the pickup tube type Hi-Vision telecine system (referred to herein as the Saticon telecine system) developed and marketed by Ikegami Tsushinki Co., Ltd.

  18. Neural Networks for Computer Vision: A Framework for Specifications of a General Purpose Vision System

    NASA Astrophysics Data System (ADS)

    Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.

    1989-03-01

    The development of autonomous land vehicles (ALV) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence, the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real-time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields including neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system and, using a neurally based computing substrate, it can complete all necessary visual tasks in real-time.

  19. Flight test comparison between enhanced vision (FLIR) and synthetic vision systems

    NASA Astrophysics Data System (ADS)

    Arthur, Jarvis J., III; Kramer, Lynda J.; Bailey, Randall E.

    2005-05-01

    Limited visibility and reduced situational awareness have been cited as predominant causal factors for both Controlled Flight Into Terrain (CFIT) and runway incursion accidents. NASA's Synthetic Vision Systems (SVS) project is developing practical application technologies with the goal of eliminating low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance. A flight test evaluation was conducted in the summer of 2004 by NASA Langley Research Center under NASA's Aviation Safety and Security, Synthetic Vision System - Commercial and Business program. A Gulfstream G-V aircraft, modified and operated under NASA contract by the Gulfstream Aerospace Corporation, was flown over a 3-week period at the Reno/Tahoe International Airport and an additional 3-week period at the NASA Wallops Flight Facility to evaluate integrated Synthetic Vision System concepts. Flight testing was conducted to evaluate the performance, usability, and acceptance of an integrated synthetic vision concept which included advanced Synthetic Vision display concepts for a transport aircraft flight deck, a Runway Incursion Prevention System, an Enhanced Vision System (EVS), and real-time Database Integrity Monitoring Equipment. This paper focuses on comparing qualitative and subjective results between EVS and SVS display concepts.

  20. Technological process supervising using vision systems cooperating with the LabVIEW vision builder

    NASA Astrophysics Data System (ADS)

    Hryniewicz, P.; Banaś, W.; Gwiazda, A.; Foit, K.; Sękala, A.; Kost, G.

    2015-11-01

    One of the most important tasks in the production process is to supervise its proper functioning. Lack of required supervision over the production process can lead to incorrect manufacturing of the final element, to production line downtime, and hence to financial losses; the worst outcome is damage to the equipment involved in the manufacturing process. Engineers supervising the correctness of the production flow use a great range of sensors to support the monitoring of a manufactured element. Vision systems are one such family of sensors. In recent years, thanks to the accelerated development of electronics, easier access to electronic products, and attractive prices, they have become a cheap and universal type of sensor. These sensors detect practically all objects, regardless of their shape or even state of matter; the only problems concern transparent or mirrored objects detected from the wrong angle. By integrating the vision system with LabVIEW Vision and the LabVIEW Vision Builder, it is possible to determine not only the position of a given element but also its orientation relative to any point in the analyzed space. The paper presents an example of automated inspection of the manufacturing process in a production workcell using a vision supervising system. The aim of the work is to elaborate a vision system that could integrate different applications and devices used in different production systems to control the manufacturing process.

  1. Flight Test Comparison Between Enhanced Vision (FLIR) and Synthetic Vision Systems

    NASA Technical Reports Server (NTRS)

    Arthur, Jarvis J., III; Kramer, Lynda J.; Bailey, Randall E.

    2005-01-01

    Limited visibility and reduced situational awareness have been cited as predominant causal factors for both Controlled Flight Into Terrain (CFIT) and runway incursion accidents. NASA's Synthetic Vision Systems (SVS) project is developing practical application technologies with the goal of eliminating low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance. A flight test evaluation was conducted in the summer of 2004 by NASA Langley Research Center under NASA's Aviation Safety and Security, Synthetic Vision System - Commercial and Business program. A Gulfstream G-V aircraft, modified and operated under NASA contract by the Gulfstream Aerospace Corporation, was flown over a 3-week period at the Reno/Tahoe International Airport and an additional 3-week period at the NASA Wallops Flight Facility to evaluate integrated Synthetic Vision System concepts. Flight testing was conducted to evaluate the performance, usability, and acceptance of an integrated synthetic vision concept which included advanced Synthetic Vision display concepts for a transport aircraft flight deck, a Runway Incursion Prevention System, an Enhanced Vision System (EVS), and real-time Database Integrity Monitoring Equipment. This paper focuses on comparing qualitative and subjective results between EVS and SVS display concepts.

  2. Nuclear bimodal new vision solar system missions

    SciTech Connect

    Mondt, J.F.; Zubrin, R.M.

    1996-03-01

    This paper presents an analysis of the potential mission capability using space reactor bimodal systems for planetary missions. Missions of interest include the main belt asteroids, Jupiter, Saturn, Neptune, and Pluto. The space reactor bimodal system, defined by an Air Force study for Earth orbital missions, provides 10 kWe power, 1000 N thrust, 850 s Isp, with a 1500 kg system mass. Trajectories to the planetary destinations were examined and optimal direct and gravity-assisted trajectories were selected. A conceptual design for a spacecraft using the space reactor bimodal system for propulsion and power, capable of performing the missions of interest, is defined. End-to-end mission conceptual designs for bimodal orbiter missions to Jupiter and Saturn are described. All missions considered use the Delta 3 class or Atlas 2AS launch vehicles. The space reactor bimodal power and propulsion system offers both new-vision "constellation" type missions, in which the space reactor bimodal spacecraft acts as a carrier and communication spacecraft for a fleet of microspacecraft deployed at different scientific targets, and conventional missions with only a space reactor bimodal spacecraft and its science payload. © 1996 American Institute of Physics.

  3. Intelligent Computer Vision System for Automated Classification

    NASA Astrophysics Data System (ADS)

    Jordanov, Ivan; Georgieva, Antoniya

    2010-05-01

    In this paper we investigate an Intelligent Computer Vision System applied for recognition and classification of commercially available cork tiles. The system is capable of acquiring and processing gray images using several feature generation and analysis techniques. Its functionality includes image acquisition, feature extraction and preprocessing, and feature classification with neural networks (NN). We also discuss system test and validation results from the recognition and classification tasks. The system investigation also includes statistical feature processing (features number and dimensionality reduction techniques) and classifier design (NN architecture, target coding, learning complexity and performance, and training with our own metaheuristic optimization method). The NNs trained with our genetic low-discrepancy search method (GLPτS) for global optimisation demonstrated very good generalisation abilities. In our view, the reported testing success rate of up to 95% is due to several factors: combination of feature generation techniques; application of Analysis of Variance (ANOVA) and Principal Component Analysis (PCA), which appeared to be very efficient for preprocessing the data; and use of suitable NN design and learning method.
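
    The pipeline described above (feature extraction, ANOVA/PCA reduction, NN classification) maps onto a few lines of standard tooling. The sketch below uses scikit-learn's PCA and a stock multilayer perceptron on a stand-in dataset, as generic substitutes for the authors' cork-tile features and GLPτS-trained networks.

        # Generic PCA + neural-network classification pipeline;
        # scikit-learn stand-ins for the paper's feature reduction and
        # custom-trained NNs, on a placeholder dataset.
        from sklearn.datasets import load_digits
        from sklearn.decomposition import PCA
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline

        X, y = load_digits(return_X_y=True)   # placeholder for tile features
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        clf = make_pipeline(PCA(n_components=20),
                            MLPClassifier(hidden_layer_sizes=(32,),
                                          max_iter=500, random_state=0))
        clf.fit(X_tr, y_tr)
        print(f"test accuracy: {clf.score(X_te, y_te):.2f}")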

  4. Intelligent Computer Vision System for Automated Classification

    SciTech Connect

    Jordanov, Ivan; Georgieva, Antoniya

    2010-05-21

    In this paper we investigate an Intelligent Computer Vision System applied for recognition and classification of commercially available cork tiles. The system is capable of acquiring and processing gray images using several feature generation and analysis techniques. Its functionality includes image acquisition, feature extraction and preprocessing, and feature classification with neural networks (NN). We also discuss system test and validation results from the recognition and classification tasks. The system investigation also includes statistical feature processing (features number and dimensionality reduction techniques) and classifier design (NN architecture, target coding, learning complexity and performance, and training with our own metaheuristic optimization method). The NNs trained with our genetic low-discrepancy search method (GLPτS) for global optimisation demonstrated very good generalisation abilities. In our view, the reported testing success rate of up to 95% is due to several factors: combination of feature generation techniques; application of Analysis of Variance (ANOVA) and Principal Component Analysis (PCA), which appeared to be very efficient for preprocessing the data; and use of suitable NN design and learning method.

  5. Enhanced Flight Vision Systems and Synthetic Vision Systems for NextGen Approach and Landing Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K. E.; Williams, Steven P.; Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Shelton, Kevin J.

    2013-01-01

    Synthetic Vision Systems and Enhanced Flight Vision System (SVS/EFVS) technologies have the potential to provide additional margins of safety for aircrew performance and enable operational improvements for low visibility operations in the terminal area environment with efficiency equivalent to visual operations. To meet this potential, research is needed for effective technology development and implementation of regulatory standards and design guidance to support introduction and use of SVS/EFVS advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. A fixed-base pilot-in-the-loop simulation test was conducted at NASA Langley Research Center that evaluated the use of SVS/EFVS in NextGen low visibility approach and landing operations. Twelve crews flew approach and landing operations in a simulated NextGen Chicago O'Hare environment. Various scenarios tested the potential for using EFVS to conduct approach, landing, and roll-out operations in visibility as low as 1000 feet runway visual range (RVR). Also, SVS was tested to evaluate the potential for lowering decision heights (DH) on certain instrument approach procedures below what can be flown today. Expanding the portion of the visual segment in which EFVS can be used in lieu of natural vision from 100 feet above the touchdown zone elevation to touchdown and rollout in visibilities as low as 1000 feet RVR appears to be viable, as touchdown performance was acceptable without any apparent workload penalties. A lower DH of 150 feet and/or possibly reduced visibility minima using SVS appears to be viable when implemented on a Head-Up Display, but the landing data suggest further study for head-down implementations.

  6. Networked vision system using a Prolog controller

    NASA Astrophysics Data System (ADS)

    Batchelor, B. G.; Caton, S. J.; Chatburn, L. T.; Crowther, R. A.; Miller, J. W. V.

    2005-11-01

    Prolog offers a very different style of programming compared to conventional languages; it can define object properties and abstract relationships in a way that Java, C, C++, etc. find awkward. In an accompanying paper, the authors describe how distributed web-based vision systems can be built from elements that may even be located on different continents. One particular system of this general type is described here. The top-level controller is a Prolog program, which operates one or more image processing engines. This type of function is natural to Prolog, since it is able to reason logically using symbolic (non-numeric) data. Although Prolog is not suitable for programming image processing functions directly, it is ideal for analysing the results derived by an image processor. This article describes the implementation of two systems in which a Prolog program controls several image processing engines, a simple robot, a pneumatic pick-and-place arm, LED illumination modules, and various mains-powered devices.

  7. Computer vision for driver assistance systems

    NASA Astrophysics Data System (ADS)

    Handmann, Uwe; Kalinke, Thomas; Tzomakas, Christos; Werner, Martin; von Seelen, Werner

    1998-07-01

    Systems for automated image analysis are useful for a variety of tasks, and their importance is still increasing due to technological advances and increasing social acceptance. Especially in the field of driver assistance systems, scientific progress has reached a level of high performance. Fully or partly autonomously guided vehicles, particularly for road-based traffic, pose high demands on the development of reliable algorithms due to the conditions imposed by natural environments. At the Institut für Neuroinformatik, methods for analyzing driving-relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We introduce a system which extracts the important information from an image taken by a CCD camera installed at the rear-view mirror in a car. The approach consists of sequential and parallel sensor and information processing. Three main tasks, namely initial segmentation (object detection), object tracking, and object classification, are realized by integration in the sequential branch and by fusion in the parallel branch. The main gain of this approach comes from the integrative coupling of different algorithms providing partly redundant information.

  8. GSFC Information Systems Technology Developments Supporting the Vision for Space Exploration

    NASA Technical Reports Server (NTRS)

    Hughes, Peter; Dennehy, Cornelius; Mosier, Gary; Smith, Dan; Rykowski, Lisa

    2004-01-01

    The Vision for Space Exploration will guide NASA's future human and robotic space activities. The broad range of human and robotic missions now being planned will require the development of new system-level capabilities enabled by emerging new technologies. Goddard Space Flight Center is actively supporting the Vision for Space Exploration in a number of program management, engineering and technology areas. This paper provides a brief background on the Vision for Space Exploration and a general overview of potential key Goddard contributions. In particular, this paper focuses on describing relevant GSFC information systems capabilities in architecture development; interoperable command, control and communications; and other applied information systems technology/research activities that are applicable to support the Vision for Space Exploration goals. Current GSFC development efforts and task activities are presented together with future plans.

  9. Active vision task and postural control in healthy, young adults: Synergy and probably not duality.

    PubMed

    Bonnet, Cédrick T; Baudry, Stéphane

    2016-07-01

    In upright stance, individuals sway continuously, and the sway pattern in dual tasks (e.g., a cognitive task performed in upright stance) differs significantly from that observed during the control quiet stance task. The cognitive approach has generated models (limited attentional resources, U-shaped nonlinear interaction) to explain such patterns based on competitive sharing of attentional resources. The objective of the current manuscript was to review these cognitive models in the specific context of visual tasks involving gaze shifts toward precise targets (here called active vision tasks). The selection excluded the effects of early and late stages of life or disease, external perturbations, active vision tasks requiring head and body motions, and the combination of two tasks performed together (e.g., a visual task in addition to computation in one's head). The selection included studies performed by healthy, young adults with control and active (difficult) vision tasks. Of the 174 studies found in the PubMed and Mendeley databases, nine were selected. In these studies, young adults exhibited significantly lower amplitude of body displacement (center of pressure and/or body marker) under active vision tasks than under the control task. Furthermore, the more difficult the active vision tasks were, the better the postural control was. This underscores that postural control during active vision tasks may rely on synergistic relations between the postural and visual systems rather than on competitive or dual relations. In contrast, in the control task, there would not be any synergistic or competitive relations.

  10. Vision system for dial gage torque wrench calibration

    NASA Astrophysics Data System (ADS)

    Aggarwal, Neelam; Doiron, Theodore D.; Sanghera, Paramjeet S.

    1993-11-01

    In this paper, we present the development of a fast and robust vision system which, in conjunction with the Dial Gage Calibration system developed by AKO Inc., will be used by the U.S. Army in calibrating dial gage torque wrenches. The vision system detects the change in the angular position of the dial pointer in a dial gage. The angular change is proportional to the applied torque. The input to the system is a sequence of images of the torque wrench dial gage taken at different dial pointer positions. The system then reports the angular difference between the different positions. The primary components of this vision system include modules for image acquisition, linear feature extraction, and angle measurement. For each of these modules, several techniques were evaluated and the most applicable one was selected. This system has numerous other applications, such as reading and calibrating other analog instruments.
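
    As an illustration of the linear-feature-extraction and angle-measurement modules such a system needs, the hedged sketch below estimates a dial pointer's angle with a probabilistic Hough transform on synthetic dial images; this is one plausible realization, not the techniques the paper actually selected.

```python
# Estimate the pointer angle in two synthetic dial images and report the
# angular difference (proportional to the applied torque in the paper).
import cv2
import numpy as np

def pointer_angle(gray):
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=5)
    x1, y1, x2, y2 = lines[0][0]              # dominant line segment
    return np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180.0

def dial(angle_deg):
    # Synthetic dial: a pointer drawn from the center at the given angle.
    img = np.zeros((200, 200), np.uint8)
    a = np.radians(angle_deg)
    tip = (int(100 + 80 * np.cos(a)), int(100 + 80 * np.sin(a)))
    cv2.line(img, (100, 100), tip, 255, 2)
    return img

print("angular change:", pointer_angle(dial(65)) - pointer_angle(dial(20)))
```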

  11. Night vision imaging system lighting evaluation methodology

    NASA Astrophysics Data System (ADS)

    Task, H. Lee; Pinkus, Alan R.; Barbato, Maryann H.; Hausmann, Martha A.

    2005-05-01

    In order for night vision goggles (NVGs) to be effective in aircraft operations, it is necessary for the cockpit lighting and displays to be NVG compatible. It has been assumed that the cockpit lighting is compatible with NVGs if the radiance values are compliant with the limits listed in Mil-L-85762A and Mil-Std-3009. However, these documents also describe a NVG-lighting compatibility field test procedure that is based on visual acuity. The objective of the study described in this paper was to determine how reliable and precise the visual acuity-based (VAB) field evaluation method is and compare it to a VAB method that employs less expensive equipment. In addition, an alternative, objective method of evaluating compatibility of the cockpit lighting was investigated. An inexpensive cockpit lighting simulator was devised to investigate two different interference conditions and six different radiance levels per condition. This paper describes the results, which indicate the objective method, based on light output of the NVGs, is more precise and reliable than the visual acuity-based method. Precision and reliability were assessed based on a probability of rejection (of the lighting system) function approach that was developed specifically for this study.

  12. Citizens' visions on active assisted living.

    PubMed

    Gudowsky, Niklas; Sotoudeh, Mahshid

    2015-01-01

    People aged 65 years and older are the fastest growing section of the population in many countries. Great hopes are projected on technology to support solutions for many of the challenges arising from this trend, thus making our lives more independent, more efficient and safer with a higher quality of life. But, as research and innovation ventures are often closely linked to the market, their focus may lead to biased planning in research and development as well as in policy-making with severe social and economic consequences. Thus the main research question concerned desirable settings of ageing in the future from different perspectives. The participatory foresight study CIVISTI-AAL cross-linked knowledge of lay persons, experts and stakeholders to include a wide variety of perspectives and values into productive long-term planning of research and development. Results include citizens' visions for autonomous living in 2050, implicitly and explicitly containing basic needs towards technological, social and organizational development as well as recommendations for implementation. Conclusions suggest that personalized health and living environments play an important part in the lay persons' view of aging in the future, but only if technologies support social and organizational innovations and yet do not neglect the importance of social affiliation and inclusion.

  13. 75 FR 71146 - In the Matter of Certain Machine Vision Software, Machine Vision Systems, and Products Containing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-22

    ... COMMISSION In the Matter of Certain Machine Vision Software, Machine Vision Systems, and Products Containing..., and the sale within the United States after importation of certain machine vision software, machine..., California; Techno Soft Systemnics, Inc. (``Techno Soft'') of Japan; Fuji Machine Manufacturing Co., Ltd....

  14. A smart telerobotic system driven by monocular vision

    NASA Technical Reports Server (NTRS)

    Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.

    1994-01-01

    A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.
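
    A common modern analogue of this single-image, known-geometry idea is the Perspective-n-Point (PnP) problem. The sketch below, with an assumed camera matrix and hypothetical point correspondences, recovers the camera-relative pose of a known object from one image; it only illustrates the principle, not the paper's algorithm.

```python
# Pose of a known object from a single image via PnP (OpenCV).
import cv2
import numpy as np

# Known 3D model points of the target (e.g., corners of a box face).
obj = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], np.float32)
# Their measured projections in the single camera image (hypothetical).
img = np.array([[320, 240], [420, 242], [418, 338], [322, 336]], np.float32)
# Assumed pinhole camera matrix (focal length 800 px, center 320x240).
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], np.float32)

ok, rvec, tvec = cv2.solvePnP(obj, img, K, None)
print("rotation:", rvec.ravel(), "translation:", tvec.ravel())
```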

  15. Robust active binocular vision through intrinsically motivated learning.

    PubMed

    Lonini, Luca; Forestier, Sébastien; Teulière, Céline; Zhao, Yu; Shi, Bertram E; Triesch, Jochen

    2013-01-01

    The efficient coding hypothesis posits that sensory systems of animals strive to encode sensory signals efficiently by taking into account the redundancies in them. This principle has been very successful in explaining response properties of visual sensory neurons as adaptations to the statistics of natural images. Recently, we have begun to extend the efficient coding hypothesis to active perception through a form of intrinsically motivated learning: a sensory model learns an efficient code for the sensory signals while a reinforcement learner generates movements of the sense organs to improve the encoding of the signals. To this end, it receives an intrinsically generated reinforcement signal indicating how well the sensory model encodes the data. This approach has been tested in the context of binocular vision, leading to the autonomous development of disparity tuning and vergence control. Here we systematically investigate the robustness of the new approach in the context of a binocular vision system implemented on a robot. Robustness is an important aspect that reflects the ability of the system to deal with unmodeled disturbances or events, such as insults to the system that displace the stereo cameras. To demonstrate the robustness of our method and its ability to self-calibrate, we introduce various perturbations and test if and how the system recovers from them. We find that (1) the system can fully recover from a perturbation that can be compensated through the system's motor degrees of freedom, (2) performance degrades gracefully if the system cannot use its motor degrees of freedom to compensate for the perturbation, and (3) recovery from a perturbation is improved if both the sensory encoding and the behavior policy can adapt to the perturbation. Overall, this work demonstrates that our intrinsically motivated learning approach for efficient coding in active perception gives rise to a self-calibrating perceptual system of high robustness.
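
    The loop below is a heavily simplified sketch of this idea under stated assumptions: a fixed random dictionary stands in for the learned sensory model, the negative reconstruction error is the intrinsic reward, and a greedy search over discrete commands stands in for the reinforcement learner.

```python
# Intrinsic reward = how well the sensory model encodes the binocular input;
# the vergence command that minimizes residual disparity scores best.
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 16))            # stand-in "learned" dictionary

def recon_error(patch):
    code, *_ = np.linalg.lstsq(D, patch, rcond=None)
    return np.sum((patch - D @ code) ** 2)   # efficient-coding criterion

def binocular_patch(vergence_cmd, true_disparity=3):
    # Toy imaging model: residual disparity inflates the signal's variance.
    residual = abs(true_disparity - vergence_cmd)
    return rng.standard_normal(64) * (1.0 + residual)

rewards = {v: -recon_error(binocular_patch(v)) for v in range(8)}
print("chosen vergence command:", max(rewards, key=rewards.get))
```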

  16. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues are addressed in parallel architectures and parallel algorithms for integrated vision systems.

  17. The Tactile Vision Substitution System: Applications in Education and Employment

    ERIC Educational Resources Information Center

    Scadden, Lawrence A.

    1974-01-01

    The Tactile Vision Substitution System converts the visual image from a narrow-angle television camera to a tactual image on a 5-inch square, 100-point display of vibrators placed against the abdomen of the blind person. (Author)

  18. Teacher Activism: Enacting a Vision for Social Justice

    ERIC Educational Resources Information Center

    Picower, Bree

    2012-01-01

    This qualitative study focused on educators who participated in grassroots social justice groups to explore the role teacher activism can play in the struggle for educational justice. Findings show teacher activists made three overarching commitments: to reconcile their vision for justice with the realities of injustice around them; to work within…

  19. Building Artificial Vision Systems with Machine Learning

    SciTech Connect

    LeCun, Yann

    2011-02-23

    Three questions pose the next challenge for Artificial Intelligence (AI), robotics, and neuroscience. How do we learn perception (e.g. vision)? How do we learn representations of the perceptual world? How do we learn visual categories from just a few examples?

  20. Vision/INS Integrated Navigation System for Poor Vision Navigation Environments.

    PubMed

    Kim, Youngsun; Hwang, Dong-Hwan

    2016-10-12

    In order to improve the performance of an inertial navigation system, many aiding sensors can be used. Among these aiding sensors, a vision sensor is of particular note due to its benefits in terms of weight, cost, and power consumption. This paper proposes an inertial and vision integrated navigation method for poor vision navigation environments. The proposed method uses focal plane measurements of landmarks in order to provide position, velocity and attitude outputs even when the number of landmarks on the focal plane is not enough for navigation. In order to verify the proposed method, computer simulations and van tests are carried out. The results show that the proposed method gives accurate and reliable position, velocity and attitude outputs when the number of landmarks is insufficient.
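
    The sketch below illustrates, under simple assumptions (camera axes aligned with the navigation frame, an assumed focal length and landmark), the kind of focal-plane measurement such a filter uses: the innovation between the measured and predicted landmark projections is what corrects the inertial estimate. The full filter states and the paper's exact formulation are omitted.

```python
# Focal-plane measurement model for a single known landmark.
import numpy as np

f = 0.05                                     # focal length [m] (assumed)
landmark = np.array([10.0, 2.0, 30.0])       # known landmark, nav frame [m]

def h(cam_pos):
    # Pinhole projection of the landmark into the focal plane.
    d = landmark - cam_pos
    return f * d[:2] / d[2]

x_ins = np.array([0.5, -0.3, 0.0])           # INS-predicted camera position
z = h(np.zeros(3)) + 1e-5                    # measurement (true pos + noise)
print("innovation fed to the filter:", z - h(x_ins))
```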

  2. Transport Device Driver's Assistance Vision Systems

    NASA Astrophysics Data System (ADS)

    Szpytko, Janusz; Gbyl, Michał

    2011-03-01

    The purpose of this paper is to review solutions that actively support the driver's decision-making processes using information obtained from the surroundings, and to present a tool that makes it possible to react to changes in the driver's psychophysical condition. The system is implemented in the Matlab application environment and operates on images acquired by a webcam.

  3. Area scanning vision inspection system by using mirror control

    NASA Astrophysics Data System (ADS)

    Jeong, Sang Y.; Min, Sungwook; Yang, Wonyoung

    2001-02-01

    As the pressure increases to deliver vision products with faster speed and higher inspection resolution at lower cost, the area scanning vision inspection system can be one of the good solutions. To inspect a large area with high resolution, a conventional vision system requires moving either the camera or the target; the system therefore suffers from low speed and high cost due to the requirement of a mechanical moving system or a higher-resolution camera. Because only tiny mirror-angle movements are required to change the field of view, the XY mirror controlled area scanning vision system is able to capture random area images at high speed. Elimination of an external precise moving mechanism is another benefit of the mirror control. The image distortion due to the lens and the mirror system is automatically compensated right after each image is captured, so that absolute coordinates can be calculated in real time. A motorized focusing system, synchronized with the mirror scanning system, maintains proper focus over the variable working distance between the lens and targets for large area inspection. By using the XY mirror controlled area scanning vision inspection system, a fast and economical system can be integrated that induces no vibration and requires less space. This paper describes the principle of the area scanning method, optical effects of the scanning, the position calibration method, inspection flows, and some implementation results.

  4. Latency in Visionic Systems: Test Methods and Requirements

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Arthur, J. J., III; Williams, Steven P.; Kramer, Lynda J.

    2005-01-01

    A visionics device creates a pictorial representation of the external scene for the pilot. The ultimate objective of these systems may be to electronically generate a form of Visual Meteorological Conditions (VMC) to eliminate weather or time-of-day as an operational constraint and provide enhancement over actual visual conditions where eye-limiting resolution may be a limiting factor. Empirical evidence has shown that the total system delays or latencies including the imaging sensors and display systems, can critically degrade their utility, usability, and acceptability. Definitions and measurement techniques are offered herein as common test and evaluation methods for latency testing in visionics device applications. Based upon available data, very different latency requirements are indicated based upon the piloting task, the role in which the visionics device is used in this task, and the characteristics of the visionics cockpit display device including its resolution, field-of-regard, and field-of-view. The least stringent latency requirements will involve Head-Up Display (HUD) applications, where the visionics imagery provides situational information as a supplement to symbology guidance and command information. Conversely, the visionics system latency requirement for a large field-of-view Head-Worn Display application, providing a Virtual-VMC capability from which the pilot will derive visual guidance, will be the most stringent, having a value as low as 20 msec.

  5. The research on projective visual system of night vision goggles

    NASA Astrophysics Data System (ADS)

    Zhao, Shun-long

    2009-07-01

    Driven by the need for lightweight night vision goggles with good performance, we apply a projection lens in night vision goggles to act as the visual system. A 40-deg FOV projection lens is provided. The useful diameter of the image intensifier is 16mm, and the resolution at both center and edge is 60 lp/mm. The projection lens has a 28mm diameter and 20g weight. The maximum distortion of the system is less than 0.15%. The MTF remains above 0.6 at 60 lp/mm across the FOV, so the lens meets the requirements of the visual system. In addition, two types of projective visual system for night vision goggles are presented: the direct-view projective visual system and the see-through projective visual system. The see-through projective visual system enables us to observe the object directly with our eyes, without further action, when the environment suddenly becomes bright. Finally we reach a conclusion: the projective system has advantages over a traditional eyepiece in night vision goggles. It is very useful for reducing the volume, lightening the load on the neck supports, and improving the imaging quality. It provides a new idea and concept for visual system design in night vision goggles.

  6. A Vision System For Robotic Inspection And Manipulation

    NASA Astrophysics Data System (ADS)

    Trivedi, Mohan M.; Chen, Chu X.; Marapane, Suresh

    1988-03-01

    A new generation of robotic systems will operate in complex, unstructured environments of industrial plants utilizing sophisticated sensory mechanisms. In this paper we consider the development of autonomous robotic systems for various inspection and manipulation tasks associated with advanced nuclear power plants. Our approach in developing the robotic system is to utilize an array of sensors capable of sensing the robot's environment in several sensory modalities. One of the most important sensor modalities utilized is vision. We describe the development of a model-based vision system for performing a number of inspection and manipulation tasks. The system is designed and tested using a laboratory-based test panel. A number of analog and digital meters and a variety of switches, valves and controls are mounted on the panel. The paper presents details of system design and development and a series of experiments performed to evaluate the capabilities of the vision system.

  7. A Vision for Systems Engineering Applied to Wind Energy (Presentation)

    SciTech Connect

    Felker, F.; Dykes, K.

    2015-01-01

    This presentation was given at the Third Wind Energy Systems Engineering Workshop on January 14, 2015. Topics covered include the importance of systems engineering, a vision for systems engineering as applied to wind energy, and application of systems engineering approaches to wind energy research and development.

  8. Purposeful gazing in active vision through phase-based disparity and dynamic vergence control

    NASA Astrophysics Data System (ADS)

    Wu, Liwei; Marefat, Michael M.

    1994-10-01

    In this research we propose solutions to the problems involved in gaze stabilization of a binocular active vision system, i.e., vergence error extraction and vergence servo control. Gazing is realized by decreasing the disparity, which represents the vergence error. A Fourier-transform-based approach that robustly and efficiently estimates vergence disparity is developed for holding gaze on a selected visual target. It is shown that this method has certain advantages over existing approaches. Our work also points out that a vision-sensor-based vergence control system is a dual-sampling-rate system. Feedback information prediction and a dynamic vision-based self-tuning control strategy are investigated to implement vergence control. Experiments on gaze stabilization using the techniques developed in this paper are performed.
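
    In the same spirit, the fragment below estimates a 1-D disparity from the phase of the normalized cross-power spectrum (standard phase correlation); it is a generic textbook version, not the authors' estimator or their control law.

```python
# Phase-correlation disparity estimate between left and right signals.
import numpy as np

def phase_disparity(left, right):
    L, R = np.fft.fft(left), np.fft.fft(right)
    cross = np.conj(L) * R
    corr = np.fft.ifft(cross / (np.abs(cross) + 1e-12)).real
    shift = int(np.argmax(corr))
    n = len(left)
    return shift if shift <= n // 2 else shift - n   # signed disparity

x = np.random.rand(256)
print("estimated vergence disparity:", phase_disparity(x, np.roll(x, 7)))
```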

  9. Local spatio-temporal analysis in vision systems

    NASA Astrophysics Data System (ADS)

    Geisler, Wilson S.; Bovik, Alan; Cormack, Lawrence; Ghosh, Joydeep; Gildeen, David

    1994-07-01

    The aims of this project are the following: (1) develop a physiologically and psychophysically based model of low-level human visual processing (key components of which are local frequency coding mechanisms); (2) develop image models and image-processing methods based upon local frequency coding; (3) develop algorithms for performing certain complex visual tasks based upon local frequency representations; (4) develop models of human performance in certain complex tasks based upon our understanding of low-level processing; and (5) develop a computational testbed for implementing, evaluating and visualizing the proposed models and algorithms, using a massively parallel computer. Progress has been substantial on all aims. The highlights include the following: (1) completion of a number of psychophysical and physiological experiments revealing new, systematic and exciting properties of the primate (human and monkey) visual system; (2) further development of image models that can accurately represent the local frequency structure in complex images; (3) near completion of the construction of the Texas Active Vision Testbed; (4) development and testing of several new computer vision algorithms dealing with shape-from-texture, shape-from-stereo, and depth-from-focus; (5) implementation and evaluation of several new models of human visual performance; and (6) evaluation, purchase and installation of a MasPar parallel computer.

  10. Multiple-channel Streaming Delivery for Omnidirectional Vision System

    NASA Astrophysics Data System (ADS)

    Iwai, Yoshio; Nagahara, Hajime; Yachida, Masahiko

    An omnidirectional vision system is an imaging system that can capture a surrounding image in all directions by using a hyperbolic mirror and a conventional CCD camera. This paper proposes a streaming server that can efficiently transfer movies captured by an omnidirectional vision system through the Internet. The proposed system uses multiple channels to deliver multiple movies synchronously. Through this method, the system enables clients to view different directions of the omnidirectional movies and also supports changing the viewing area during playback. Our evaluation experiments show that the proposed streaming server can effectively deliver multiple movies via multiple channels.

  11. Machine vision system for online inspection of freshly slaughtered chickens

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A machine vision system was developed and evaluated for the automation of online inspection to differentiate freshly slaughtered wholesome chickens from systemically diseased chickens. The system consisted of an electron-multiplying charge-coupled-device camera used with an imaging spectrograph and ...

  12. Musca domestica inspired machine vision system with hyperacuity

    NASA Astrophysics Data System (ADS)

    Riley, Dylan T.; Harman, William M.; Tomberlin, Eric; Barrett, Steven F.; Wilcox, Michael; Wright, Cameron H. G.

    2005-05-01

    Musca domestica, the common house fly, has a simple yet powerful and accessible vision system. Cajal indicated in 1885 that the fly's vision system is the same as in the human retina. The house fly has some intriguing vision system features such as fast, analog, parallel operation. Furthermore, it has the ability to detect movement and objects at far better resolution than predicted by photoreceptor spacing, termed hyperacuity. We are investigating the mechanisms behind these features and incorporating them into next generation vision systems. We have developed a prototype sensor that employs a fly-inspired arrangement of photodetectors sharing a common lens. The Gaussian-shaped acceptance profile of each sensor coupled with overlapped sensor fields of view provides the necessary configuration for obtaining hyperacuity data. The sensor is able to detect object movement with far greater resolution than that predicted by photoreceptor spacing. We have exhaustively tested and characterized the sensor to determine its practical resolution limit. Our tests, coupled with theory from Bucklew and Saleh (1985), indicate that the limit to the hyperacuity response may only be related to target contrast. We have also implemented an array of these prototype sensors which will allow for two-dimensional position location. These high-resolution, low-contrast-capable sensors are being developed for use as a vision system for an autonomous robot and the next generation of smart wheelchairs. However, they are easily adapted for biological endoscopy, downhole monitoring in oil wells, and other applications.
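
    The toy computation below shows why overlapping Gaussian acceptance profiles permit hyperacuity: the log-ratio of two neighboring detector responses is linear in the target position, so a point source can be localized to a small fraction of the detector pitch. Parameter values are illustrative, not the prototype's.

```python
# Localizing a point target from two overlapping Gaussian receptors.
import numpy as np

sigma, spacing = 2.0, 1.0            # acceptance width and detector pitch

def responses(x):
    # Gaussian profiles centered at -spacing/2 and +spacing/2.
    return (np.exp(-(x + spacing / 2) ** 2 / (2 * sigma ** 2)),
            np.exp(-(x - spacing / 2) ** 2 / (2 * sigma ** 2)))

def estimate(r1, r2):
    # ln(r2/r1) = x * spacing / sigma^2, so invert for x.
    return (sigma ** 2 / spacing) * np.log(r2 / r1)

true_x = 0.137                       # a small fraction of the pitch
print("estimated position:", estimate(*responses(true_x)))
```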

  13. Machine vision system for inspecting characteristics of hybrid rice seed

    NASA Astrophysics Data System (ADS)

    Cheng, Fang; Ying, Yibin

    2004-03-01

    Obtaining clear images, which helps improve classification accuracy, involves many factors; light source, lens extender, and background are discussed in this paper. The analysis of rice seed reflectance curves showed that the wavelength of the light source for discriminating diseased seeds from normal rice seeds in the monochromic image recognition mode was about 815nm for jinyou402 and shanyou10. To determine optimal conditions for acquiring digital images of rice seed using a computer vision system, an adjustable color machine vision system was developed. The machine vision system with a 20mm to 25mm lens extender produces close-up images which make it easy to recognize the characteristics of hybrid rice seeds. A white background proved better than a black background for inspecting rice seeds infected by disease when using shape-based algorithms. Experimental results indicated good classification for most of the characteristics with the machine vision system. The same algorithm yielded better results under the optimized conditions for quality inspection of rice seed. Specifically, the image processing can resolve details such as fine fissures with the machine vision system.

  14. A modular real-time vision system for humanoid robots

    NASA Astrophysics Data System (ADS)

    Trifan, Alina L.; Neves, António J. R.; Lau, Nuno; Cunha, Bernardo

    2012-01-01

    Robotic vision is nowadays one of the most challenging branches of robotics. In the case of a humanoid robot, a robust vision system has to provide an accurate representation of the surrounding world and to cope with all the constraints imposed by the hardware architecture and the locomotion of the robot. Usually humanoid robots have low computational capabilities that limit the complexity of the developed algorithms. Moreover, their vision system should perform in real time; therefore, a compromise between complexity and processing times has to be found. This paper presents a reliable implementation of a modular vision system for a humanoid robot to be used in color-coded environments. From image acquisition to camera calibration and object detection, the system that we propose integrates all the functionalities needed for a humanoid robot to accurately perform given tasks in color-coded environments. The main contributions of this paper are the implementation details that allow the use of the vision system in real time, even with low processing capabilities, the innovative self-calibration algorithm for the most important parameters of the camera, and its modularity, which allows its use with different robotic platforms. Experimental results have been obtained with a NAO robot produced by Aldebaran, which is currently the robotic platform used in the RoboCup Standard Platform League, as well as with a humanoid built using the Bioloid Expert Kit from Robotis. As practical examples, our vision system can be efficiently used in real time for the detection of the objects of interest for a soccer playing robot (ball, field lines and goals) as well as for navigating through a maze with the help of color-coded clues. In the worst case scenario, all the objects of interest in a soccer game, using a NAO robot with a single-core 500 MHz processor, are detected in less than 30ms. Our vision system also includes an algorithm for self-calibration of the camera parameters as well

  15. The influence of active vision on the exoskeleton of intelligent agents

    NASA Astrophysics Data System (ADS)

    Smith, Patrice; Terry, Theodore B.

    2016-04-01

    Chameleonization occurs when a self-learning autonomous mobile system's (SLAMR) active vision scans the surface on which it is perched, causing the exoskeleton to change colors and exhibit a chameleon effect. The ability of intelligent agents to adapt to their environment and exhibit key survivability characteristics would be due in large part to the use of active vision. Active vision would allow the intelligent agent to scan its environment and adapt as needed in order to avoid detection. The SLAMR system would have an exoskeleton which would change based on the surface it was perched on; this is known as the "chameleon effect." Not in the common sense of the term, but in the techno-bio-inspired meaning as addressed in our previous paper. Active vision, utilizing stereoscopic color sensing functionality, would enable the intelligent agent to scan an object within its close proximity, determine the color scheme, and match it, allowing the agent to blend with its environment. Through the use of its optical capabilities, the SLAMR system would be able to further determine its position, taking into account spatial and temporal correlation and the spatial frequency content of neighboring structures, further ensuring successful background blending. The complex visual tasks of identifying objects, using edge detection, image filtering, and feature extraction are essential for an intelligent agent to gain additional knowledge about its environmental surroundings.

  16. Database Integrity Monitoring for Synthetic Vision Systems Using Machine Vision and SHADE

    NASA Technical Reports Server (NTRS)

    Cooper, Eric G.; Young, Steven D.

    2005-01-01

    In an effort to increase situational awareness, the aviation industry is investigating technologies that allow pilots to visualize what is outside of the aircraft during periods of low-visibility. One of these technologies, referred to as Synthetic Vision Systems (SVS), provides the pilot with real-time computer-generated images of obstacles, terrain features, runways, and other aircraft regardless of weather conditions. To help ensure the integrity of such systems, methods of verifying the accuracy of synthetically-derived display elements using onboard remote sensing technologies are under investigation. One such method is based on a shadow detection and extraction (SHADE) algorithm that transforms computer-generated digital elevation data into a reference domain that enables direct comparison with radar measurements. This paper describes machine vision techniques for making this comparison and discusses preliminary results from application to actual flight data.

  17. Technical Challenges in the Development of a NASA Synthetic Vision System Concept

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Parrish, Russell V.; Kramer, Lynda J.; Harrah, Steve; Arthur, J. J., III

    2002-01-01

    Within NASA's Aviation Safety Program, the Synthetic Vision Systems Project is developing display system concepts to improve pilot terrain/situation awareness by providing a perspective synthetic view of the outside world through an on-board database driven by precise aircraft positioning information updating via Global Positioning System-based data. This work is aimed at eliminating visibility-induced errors and low visibility conditions as a causal factor to civil aircraft accidents, as well as replicating the operational benefits of clear day flight operations regardless of the actual outside visibility condition. Synthetic vision research and development activities at NASA Langley Research Center are focused around a series of ground simulation and flight test experiments designed to evaluate, investigate, and assess the technology which can lead to operational and certified synthetic vision systems. The technical challenges that have been encountered and that are anticipated in this research and development activity are summarized.

  18. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.

    1991-01-01

    Research activity has shifted from computer graphics and vision systems to the broader scope of applying concepts of artificial intelligence to robotics. Specifically, the research is directed toward developing Artificial Neural Networks, Expert Systems, and Laser Imaging Techniques for Autonomous Space Robots.

  19. 2020 Vision for Tank Waste Cleanup (One System Integration) - 12506

    SciTech Connect

    Harp, Benton; Charboneau, Stacy; Olds, Erik

    2012-07-01

    The mission of the Department of Energy's Office of River Protection (ORP) is to safely retrieve and treat the 56 million gallons of Hanford's tank waste and close the Tank Farms to protect the Columbia River. The millions of gallons of waste are a by-product of decades of plutonium production. After irradiated fuel rods were taken from the nuclear reactors to the processing facilities at Hanford they were exposed to a series of chemicals designed to dissolve away the rod, which enabled workers to retrieve the plutonium. Once those chemicals were exposed to the fuel rods they became radioactive and extremely hot. They also couldn't be used in this process more than once. Because the chemicals are caustic and extremely hazardous to humans and the environment, underground storage tanks were built to hold these chemicals until a more permanent solution could be found. The Cleanup of Hanford's 56 million gallons of radioactive and chemical waste stored in 177 large underground tanks represents the Department's largest and most complex environmental remediation project. Sixty percent by volume of the nation's high-level radioactive waste is stored in the underground tanks grouped into 18 'tank farms' on Hanford's central plateau. Hanford's mission to safely remove, treat and dispose of this waste includes the construction of a first-of-its-kind Waste Treatment Plant (WTP), ongoing retrieval of waste from single-shell tanks, and building or upgrading the waste feed delivery infrastructure that will deliver the waste to and support operations of the WTP beginning in 2019. Our discussion of the 2020 Vision for Hanford tank waste cleanup will address the significant progress made to date and ongoing activities to manage the operations of the tank farms and WTP as a single system capable of retrieving, delivering, treating and disposing Hanford's tank waste. The initiation of hot operations and subsequent full operations of the WTP are not only dependent upon the successful

  20. Concurrent algorithms for a mobile robot vision system

    SciTech Connect

    Jones, J.P.; Mann, R.C.

    1988-01-01

    The application of computer vision to mobile robots has generally been hampered by insufficient on-board computing power. The advent of VLSI-based general purpose concurrent multiprocessor systems promises to give mobile robots an increasing amount of on-board computing capability, and to allow computation intensive data analysis to be performed without high-bandwidth communication with a remote system. This paper describes the integration of robot vision algorithms on a 3-dimensional hypercube system on-board a mobile robot developed at Oak Ridge National Laboratory. The vision system is interfaced to navigation and robot control software, enabling the robot to maneuver in a laboratory environment, to find a known object of interest and to recognize the object's status based on visual sensing. We first present the robot system architecture and the principles followed in the vision system implementation. We then provide some benchmark timings for low-level image processing routines, describe a concurrent algorithm with load balancing for the Hough transform, a new algorithm for binary component labeling, and an algorithm for the concurrent extraction of region features from labeled images. This system analyzes a scene in less than 5 seconds and has proven to be a valuable experimental tool for research in mobile autonomous robots. 9 refs., 1 fig., 3 tabs.
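
    As a rough illustration of the data parallelism involved (not the hypercube implementation or its load-balancing scheme), the sketch below splits a Hough accumulator across worker processes by angle range and concatenates the partial results.

```python
# Angle-partitioned Hough transform using a process pool.
import numpy as np
from multiprocessing import Pool

H, W, NTHETA = 128, 128, 180
np.random.seed(0)
edges = np.argwhere(np.random.rand(H, W) > 0.99)   # toy edge points (y, x)
DIAG = int(np.hypot(H, W))

def partial_hough(theta_deg):
    thetas = np.deg2rad(theta_deg)
    acc = np.zeros((2 * DIAG, len(theta_deg)), np.int32)
    cols = np.arange(len(theta_deg))
    for y, x in edges:
        r = (x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + DIAG
        acc[r, cols] += 1                          # one vote per angle
    return acc

if __name__ == "__main__":
    slices = np.array_split(np.arange(NTHETA), 4)  # 4 workers, 45 angles each
    with Pool(4) as pool:
        acc = np.hstack(pool.map(partial_hough, slices))
    print("accumulator shape:", acc.shape, "peak votes:", acc.max())
```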

  1. Ceramic substrate's detection system based on machine vision

    NASA Astrophysics Data System (ADS)

    Yang, Li-na; Zhou, Zhen-feng; Zhu, Li-jun

    2009-05-01

    Machine vision detection technology is an integrated modern inspection technology including optoelectronics, computer imaging, information processing, and computer vision. It regards the image as the means and carrier of information, extracting useful information and acquiring the necessary parameters by processing images. In conjunction with a key project of the Zhejiang Province Office of Education (research on high-accuracy, large-size machine vision automatic detection and separation technology), this paper describes the primary factors influencing system precision and develops an automatic detection system for ceramic substrates. The system gathers the image of the ceramic substrate with a CMOS (Complementary Metal-Oxide Semiconductor) sensor. The quality of the image is improved by the optical imaging and lighting system. The precision of edge detection is improved by image preprocessing and sub-pixel methods. In the image enhancement stage, image filtering and geometric distortion correction are used. Edges are obtained through a sub-pixel edge detection method: determining the probable edge position with an improved Sobel operator and then interpolating the gray-level edge image with a third-order spline function. A mathematical model of the dimensional and geometric error of the visual inspection system is developed. The ceramic substrate's length and width are acquired. Experimental results show that the presented method increases the precision of the vision detection system and that its measurements are satisfactory.
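
    The fragment below sketches the sub-pixel idea on a 1-D intensity profile: a gradient operator locates the integer-pixel edge and a local fit refines it (a parabolic fit of the gradient peak stands in here for the paper's third-order spline interpolation).

```python
# Sub-pixel edge localization on a synthetic blurred step edge.
import numpy as np

def subpixel_edge(profile):
    g = np.abs(np.convolve(profile, [-1, 0, 1], mode="same"))  # 1-D gradient
    i = int(np.argmax(g[1:-1])) + 1
    denom = g[i - 1] - 2 * g[i] + g[i + 1]
    offset = 0.5 * (g[i - 1] - g[i + 1]) / denom if denom else 0.0
    return i + offset                    # edge position, sub-pixel units

x = np.arange(64)
profile = 1.0 / (1.0 + np.exp(-(x - 31.4)))   # true edge at 31.4
print("sub-pixel edge estimate:", subpixel_edge(profile))
```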

  2. Synthetic vision system flight test results and lessons learned

    NASA Technical Reports Server (NTRS)

    Radke, Jeffrey

    1993-01-01

    Honeywell Systems and Research Center developed and demonstrated an active 35 GHz Radar Imaging system as part of the FAA/USAF/Industry sponsored Synthetic Vision System Technology Demonstration (SVSTD) Program. The objectives of this presentation are to provide a general overview of flight test results, a system-level perspective that encompasses the efforts of the SVSTD and Augmented Visual Display (AVID) programs, and, more importantly, to provide the AVID workshop participants with Honeywell's perspective on the lessons that were learned from the SVS flight tests. One objective of the SVSTD program was to explore several known system issues concerning radar imaging technology. The program ultimately resolved some of these issues, left others open, and in fact created several new concerns. In some instances, the interested community has drawn improper conclusions from the program by globally attributing implementation-specific issues to radar imaging technology in general. The motivation for this presentation is therefore to provide AVID researchers with a better understanding of the issues that truly remain open, and to identify the perceived issues that are either resolved or were specific to Honeywell's implementation.

  3. A laser-based vision system for weld quality inspection.

    PubMed

    Huang, Wei; Kovacevic, Radovan

    2011-01-01

    Welding is a very complex process in which the final weld quality can be affected by many process parameters. In order to inspect the weld quality and detect the presence of various weld defects, different methods and systems are studied and developed. In this paper, a laser-based vision system is developed for non-destructive weld quality inspection. The vision sensor is designed based on the principle of laser triangulation. By processing the images acquired from the vision sensor, the geometrical features of the weld can be obtained. Through visual analysis of the acquired 3D profiles of the weld, the presence as well as the positions and sizes of weld defects can be accurately identified, and non-destructive weld quality inspection can therefore be achieved.
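
    For context, laser triangulation maps the stripe's image displacement to range by intersecting the camera's viewing ray with the laser sheet. The sketch below assumes one simple geometry and made-up parameters, not the paper's calibrated sensor model.

```python
# Range from laser-stripe pixel offset under an assumed triangulation
# geometry: camera at the origin looking along z, laser at lateral offset b,
# sheet tilted by theta so it crosses the optical axis.
import numpy as np

f_pix = 1200.0            # focal length in pixels (assumed)
b = 0.08                  # camera-laser baseline [m] (assumed)
theta = np.deg2rad(30)    # laser sheet angle to the optical axis (assumed)

def stripe_range(u_pix):
    # Ray x = (u/f) z meets the laser plane x = b - z tan(theta).
    return b / (np.tan(theta) + u_pix / f_pix)

for u in (0.0, 25.0, 50.0):   # stripe displacement in the weld image
    print(f"stripe offset {u:5.1f} px -> range {stripe_range(u):.4f} m")
```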

  4. The impact of changing night vision goggle spectral response on night vision imaging system lighting compatibility

    NASA Astrophysics Data System (ADS)

    Task, Harry L.; Marasco, Peter L.

    2004-09-01

    The defining document outlining night-vision imaging system (NVIS) compatible lighting, MIL-L-85762A, was written in the mid 1980's, based on what was then the state of the art in night vision and image intensification. Since that time there have been changes in the photocathode sensitivity and the minus-blue coatings applied to the objective lenses. Specifically, many aviation night-vision goggles (NVGs) in the Air Force are equipped with so-called "leaky green" or Class C type objective lens coatings that provide a small amount of transmission around 545 nanometers so that the displays that use a P-43 phosphor can be seen through the NVGs. However, current NVIS compatibility requirements documents have not been updated to include these changes. Documents that followed and replaced MIL-L-85762A (ASC/ENFC-96-01 and MIL-STD-3009) addressed aspects of then current NVIS technology, but did little to change the actual content or NVIS radiance requirements set forth in the original MIL-L-85762A. This paper examines the impact of spectral response changes, introduced by changes in image tube parameters and objective lens minus-blue filters, on NVIS compatibility and NVIS radiance calculations. Possible impact on NVIS lighting requirements is also discussed. In addition, arguments are presented for revisiting NVIS radiometric unit conventions.
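
    The compatibility question above ultimately reduces to a spectral overlap integral: the source's spectral radiance weighted by the goggles' relative spectral response. The sketch below integrates placeholder Gaussian curves numerically; the actual response definitions and scaling constants live in MIL-STD-3009, not here.

```python
# Relative NVIS radiance as a spectral overlap integral (placeholder curves).
import numpy as np

lam = np.arange(450, 931, 1.0)          # wavelength grid [nm]
# Placeholder "Class C" response: broad 600-900 nm sensitivity plus a small
# leaky-green notch near 545 nm (illustrative shapes only).
G = (np.exp(-0.5 * ((lam - 750) / 80) ** 2)
     + 0.05 * np.exp(-0.5 * ((lam - 545) / 5) ** 2))
# Placeholder cockpit source: green emission near 545 nm (P-43-like).
N = np.exp(-0.5 * ((lam - 545) / 10) ** 2)

print("relative NVIS radiance:", np.trapz(G * N, lam))
```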

  5. Digital vision system for three-dimensional model acquisition

    NASA Astrophysics Data System (ADS)

    Yuan, Ta; Lin, Huei-Yung; Qin, Xiangdong; Subbarao, Murali

    2000-10-01

    A digital vision system and the computational algorithms used by the system for three-dimensional (3D) model acquisition are described. The system is named Stonybrook VIsion System (SVIS). The system can acquire the 3D model (which includes the 3D shape and the corresponding image texture) of a simple object within a 300 mm X 300 mm X 300 mm volume placed about 600 mm from the system. SVIS integrates Image Focus Analysis (IFA) and Stereo Image Analysis (SIA) techniques for 3D shape and image texture recovery. First, 4 to 8 partial 3D models of the object are obtained from 4 to 8 views of the object. The partial models are then integrated to obtain a complete model of the object. The complete model is displayed using a 3D graphics rendering software (Apple's QuickDraw). Experimental results on several objects are presented.

  6. Object tracking with stereo vision

    NASA Technical Reports Server (NTRS)

    Huber, Eric

    1994-01-01

    A real-time active stereo vision system incorporating gaze control and task directed vision is described. Emphasis is placed on object tracking and object size and shape determination. Techniques include motion-centroid tracking, depth tracking, and contour tracking.

  7. Vision based assistive technology for people with dementia performing activities of daily living (ADLs): an overview

    NASA Astrophysics Data System (ADS)

    As'ari, M. A.; Sheikh, U. U.

    2012-04-01

    The rapid development of intelligent assistive technology for replacing a human caregiver in assisting people with dementia to perform activities of daily living (ADLs) promises a reduction in care costs, especially in training and hiring human caregivers. The main problem, however, is that the kind of sensing agent used in such systems depends on the intent (types of ADLs) and the environment where the activity is performed. In this paper we give an overview of the potential of computer-vision-based sensing agents in assistive systems and how they can be generalized and made invariant to various kinds of ADLs and environments. We find that a gap exists between existing vision-based human action recognition methods and the design of such systems, owing to the cognitive and physical impairments of people with dementia.

  8. Building a 3D scanner system based on monocular vision.

    PubMed

    Zhang, Zhiyi; Yuan, Lin

    2012-04-10

    This paper proposes a three-dimensional scanner system, which is built by using an ingenious geometric construction method based on monocular vision. The system is simple, low cost, and easy to use, and the measurement results are very precise. To build it, one web camera, one handheld linear laser, and one background calibration board are required. The experimental results show that the system is robust and effective, and the scanning precision can be satisfied for normal users.

  9. Characterization of a multi-user indoor positioning system based on low cost depth vision (Kinect) for monitoring human activity in a smart home.

    PubMed

    Sevrin, Loïc; Noury, Norbert; Abouchi, Nacer; Jumel, Fabrice; Massot, Bertrand; Saraydaryan, Jacques

    2015-01-01

    An increasing number of systems use indoor positioning for many scenarios such as asset tracking, health care, games, manufacturing, logistics, shopping, and security. Many technologies are available and the use of depth cameras is becoming more and more attractive as this kind of device becomes affordable and easy to handle. This paper contributes to the effort of creating an indoor positioning system based on low cost depth cameras (Kinect). A method is proposed to optimize the calibration of the depth cameras, to describe the multi-camera data fusion and to specify a global positioning projection to maintain the compatibility with outdoor positioning systems. The monitoring of the people trajectories at home is intended for the early detection of a shift in daily activities which highlights disabilities and loss of autonomy. This system is meant to improve homecare health management at home for a better end of life at a sustainable cost for the community.

  10. Design and control of active vision based mechanisms for intelligent robots

    NASA Technical Reports Server (NTRS)

    Wu, Liwei; Marefat, Michael M.

    1994-01-01

    In this paper, we propose a design of an active vision system for intelligent robot applications. The system has the degrees of freedom of pan, tilt, vergence, camera height adjustment, and baseline adjustment, with a hierarchical control system structure. Based on this vision system, we discuss two problems involved in the binocular gaze stabilization process: fixation point selection and vergence disparity extraction. A hierarchical approach to determining the point of fixation from potential gaze targets, using an evaluation function representing human visual behavior in response to outside stimuli, is suggested. We also characterize different visual tasks in the two cameras for vergence control purposes, and a phase-based method based on binarized images to extract vergence disparity for vergence control is presented. A control algorithm for vergence control is discussed.

  11. Crew and Display Concepts Evaluation for Synthetic / Enhanced Vision Systems

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Kramer, Lynda J.; Prinzel, Lawrence J., III

    2006-01-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that strive to eliminate low-visibility conditions as a causal factor to civil aircraft accidents and replicate the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. Enhanced Vision System (EVS) technologies are analogous and complementary in many respects to SVS, with the principal difference being that EVS is an imaging sensor presentation, as opposed to a database-derived image. The use of EVS in civil aircraft is projected to increase rapidly as the Federal Aviation Administration recently changed the aircraft operating rules under Part 91, revising the flight visibility requirements for conducting operations to civil airports. Operators conducting straight-in instrument approach procedures may now operate below the published approach minimums when using an approved EVS that shows the required visual references on the pilot's Head-Up Display. An experiment was conducted to evaluate the complementary use of SVS and EVS technologies, specifically focusing on new techniques for integration and/or fusion of synthetic and enhanced vision technologies and crew resource management while operating under the newly adopted FAA rules which provide operating credit for EVS. Overall, the experimental data showed that significant improvements in situation awareness (SA), without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying.

  12. A machine vision system for the calibration of digital thermometers

    NASA Astrophysics Data System (ADS)

    Vázquez-Fernández, Esteban; Dacal-Nieto, Angel; González-Jorge, Higinio; Martín, Fernando; Formella, Arno; Alvarez-Valado, Victor

    2009-06-01

    Automation is a key point in many industrial tasks such as calibration and metrology. In this context, machine vision has been shown to be a useful tool for automation support, especially when there is no other option available. A system for the calibration of portable measurement devices has been developed. The system uses machine vision to obtain the numerical values shown by displays. A new approach based on human perception of digits, which works in parallel with other more classical classifiers, has been created. The results show the benefits of the system in terms of its usability and robustness, obtaining a success rate higher than 99% in display recognition. The system saves time and effort, and offers the possibility of scheduling calibration tasks without excessive attention by the laboratory technicians.

  13. Fiber optic coherent laser radar 3D vision system

    SciTech Connect

    Clark, R.B.; Gallman, P.G.; Slotwinski, A.R.; Wagner, K.; Weaver, S.; Xu, Jieping

    1996-12-31

    This CLVS will provide a substantial advance in high-speed computer vision performance to support robotic Environmental Management (EM) operations. This 3D system employs a compact fiber-optic-based scanner and operates at a 128 x 128 pixel frame rate of one frame per second with a range resolution of 1 mm over its 1.5 meter working range. Using acousto-optic deflectors, the scanner is completely randomly addressable. This can provide live 3D monitoring for situations where it is necessary to update once per second. It can be used for decontamination and decommissioning operations in which robotic systems are altering the scene, such as waste removal, surface scarifying, or equipment disassembly and removal. The fiber-optic coherent laser radar based system is immune to variations in lighting, color, or surface shading, which have plagued the reliability of existing 3D vision systems, while providing substantially superior range resolution.

  14. The organizing vision of integrated health information systems.

    PubMed

    Ellingsen, Gunnar; Monteiro, Eric

    2008-09-01

    The notion of 'integration' in the context of health information systems is ill-defined yet in widespread use. We identify a variety of meanings ranging from the purely technical integration of information systems to the integration of services. This ambiguity (or interpretive flexibility), we argue, is inherent rather than accidental: it is a necessary prerequisite for mobilizing political and ideological support among stakeholders for integrated health information systems. Building on this, our aim is to trace out the career dynamics of the vision of 'integration/integrated'. The career dynamics are the transformation of both the imaginary and the material (technological) realizations of the unfolding implementation of the vision of integrated care. Empirically, we draw on a large, ongoing project at the University Hospital of North Norway (UNN) to establish an integrated health information system.

  15. Stereo vision based hand-held laser scanning system design

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Xu, Jun; Wang, Jinming

    2011-11-01

    Although 3D scanning systems are used more and more broadly in many fields, such as computer animation, computer-aided design, digital museums, and so on, a convenient scanning device is too expensive for most people to afford. On the other hand, imaging devices are becoming cheaper, and a stereo vision system with two video cameras costs little. In this paper, a hand-held laser scanning system is designed based on stereo vision principles. The two video cameras are fixed together and both calibrated in advance. The scanned object, with coded markers attached, is placed in front of the stereo system, and its position and orientation can be changed freely as scanning requires. During scanning, the operator sweeps a line laser source, projecting it onto the object. At the same time, the stereo vision system captures the projected lines and reconstructs their 3D shapes. The coded markers are used to transform the coordinates of points scanned under different views into a common coordinate system. Two methods are used to obtain more accurate results. One is to use NURBS curves to interpolate the sections of the laser lines to obtain accurate central points; a thin plate spline is then used to approximate the central points, so that an accurate laser center line is obtained, which guarantees an accurate correspondence between the two cameras. The other is to impose the constraint of the laser sweep plane on the reconstructed 3D curves using a PCA (Principal Component Analysis) algorithm, which yields more accurate results. Some examples are given to verify the system.
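
    The PCA step mentioned above can be sketched as follows: the 3D points reconstructed from one laser sweep should be coplanar, so the direction of least variance gives the laser-plane normal, and the points can then be snapped onto that plane. The data below are synthetic stand-ins, not the authors' implementation.

```python
# Fit a laser plane to noisy 3D stripe points by PCA and project onto it.
import numpy as np

rng = np.random.default_rng(1)
uv = rng.uniform(-1, 1, (200, 2))
# Points spanning two in-plane directions, plus measurement noise.
pts = (uv[:, :1] * np.array([1.0, 0.0, 0.0])
       + uv[:, 1:] * np.array([0.0, 1.0, 0.5])
       + rng.normal(0.0, 0.01, (200, 3)))

c = pts.mean(axis=0)
_, _, Vt = np.linalg.svd(pts - c)            # PCA via SVD
normal = Vt[-1]                              # least-variance direction
snapped = pts - np.outer((pts - c) @ normal, normal)
print("plane normal:", normal)
print("max residual after snapping:", np.abs((snapped - c) @ normal).max())
```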

  16. A novel container truck locating system based on vision technology

    NASA Astrophysics Data System (ADS)

    He, Junji; Shi, Li; Mi, Weijian

    2008-10-01

    On a container dock, the container truck must be parked right under the trolley of the container crane before loading (unloading) a container to (from) it. But parking the truck at the right position often takes nearly one minute because of the difficulty of aligning the truck with the trolley. A monocular machine vision system is designed to locate the moving container truck, provide information about how far the truck needs to move forward or backward, and thereby help the driver park the truck quickly and correctly. With this system, time is saved and the efficiency of loading and unloading is increased. The mathematical model of the system is presented in detail. Then the calibration method is described. Finally, experimental results verify the validity and precision of the locating system. The prominent characteristics of this system are that it is simple, easy to implement, low cost, and effective. Furthermore, this research verifies that a monocular vision system can recover 3D position provided that the length and width of a container are known, which greatly extends the function and application of monocular vision systems.
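
    The monocular 3D claim follows from the pinhole model: a known physical width W imaged at w pixels by a camera with focal length f (in pixels) lies at range Z = f * W / w. The sketch below uses the standard ISO container width and an assumed focal length; the deployed system's calibrated model is more elaborate.

```python
# Range to a container from its known width and measured pixel width.
F_PIX = 1500.0    # focal length in pixels (assumed calibration value)
W = 2.438         # ISO container width [m]

def truck_range(w_pix):
    return F_PIX * W / w_pix

for w in (180.0, 200.0, 220.0):   # pixel widths as the truck approaches
    print(f"pixel width {w:.0f} -> range {truck_range(w):.2f} m")
```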

  17. Development Of An Aviator's Night Vision Imaging System (ANVIS)

    NASA Astrophysics Data System (ADS)

    Efkernan, Albert; Jenkins, Donald

    1981-04-01

    Historical background is presented of the U. S. Army's requirement for a high performance, lightweight, night vision goggle for use by helicopter pilots. System requirements are outlined and a current program for development of a third generation image intensification device is described. Primary emphasis is on the use of lightweight, precision molded, aspheric plastic optical elements and molded plastic mechanical components. System concept, design, and manufacturing considerations are presented.

  18. Development Of An Aviator's Night Vision Imaging System (ANVIS)

    NASA Astrophysics Data System (ADS)

    Jenkins, Donald; Efkeman, Albert

    1980-10-01

    Historical background is presented of the U.S. Army's requirement for a high performance, lightweight, night vision goggle for use by helicopter pilots. System requirements are outlined and a current program for development of a third generation image intensification device is described. Primary emphasis is on the use of lightweight, precision molded, aspheric plastic optical elements and molded plastic mechanical components. System concept, design, and manufacturing considerations are presented.

  19. Vision development test bed: The cradle of the MSS artificial vision system

    NASA Astrophysics Data System (ADS)

    Zucherman, Leon; Stovman, John

    This paper presents the concept of the Vision Development Test-Bed (VDTB) developed at Spar Aerospace Ltd. to assist development work on the Artificial Vision System (AVS) for the Mobile Servicing System (MSS) of Space Station Freedom, which must provide reliable and robust target auto-acquisition and robotic auto-tracking capabilities when operating under the extremely high-contrast illumination of the space environment. The paper illustrates how the VDTB will be used to understand the problems and to evaluate methods of solving them. The VDTB is based on the use of conventional but high-speed image processing hardware and software. Auxiliary equipment, such as TV cameras, illumination sources, and monitors, will be added to provide completeness and flexibility. A special feature will be the use of solar simulation so that the impact of the harsh illumination conditions in space on image quality can be evaluated. The VDTB will be used to assess the required techniques, algorithms, and hardware and software characteristics, and to utilize this information in overcoming the target-recognition and false-target rejection problems. The problems associated with NTSC video processing and the use of color will also be investigated. The paper concludes with a review of applications for the VDTB work, such as AVS real-time simulations, application software development, evaluations, and trade-off studies.

  20. Intelligent vision system for autonomous vehicle operations

    NASA Technical Reports Server (NTRS)

    Scholl, Marija S.

    1991-01-01

    A complex optical system consisting of a 4f optical correlator with programmable filters under the control of a digital on-board computer that operates at video rates for filter generation, storage, and management is described.

  1. Practical vision based degraded text recognition system

    NASA Astrophysics Data System (ADS)

    Mohammad, Khader; Agaian, Sos; Saleh, Hani

    2011-02-01

    Rapid growth and progress in the medical, industrial, security and technology fields means more and more consideration for the use of camera-based optical character recognition (OCR). Applying OCR to scanned documents is quite mature, and many commercial and research products are available on this topic. These products achieve acceptable recognition accuracy and reasonable processing times, especially with trained software and constrained text characteristics. Even though the application space for OCR is huge, it is quite challenging to design a single system capable of performing automatic OCR for text embedded in an image irrespective of the application. Challenges for OCR systems include images taken under natural real-world conditions, surface curvature, text orientation, font, size, lighting conditions, and noise. These and many other conditions make it extremely difficult to achieve reasonable character recognition. Performance of conventional OCR systems drops dramatically as the degradation level of the text image quality increases. In this paper, a new recognition method is proposed to recognize solid or dotted-line degraded characters. The degraded text string is localized and segmented using a new algorithm. The new method was implemented and tested using a development framework system capable of performing OCR on camera-captured images. The framework allows parameter tuning of the image-processing algorithm based on a training set of camera-captured text images. Novel methods were used for enhancement, text localization and segmentation, which enables building a custom system capable of performing automatic OCR for different applications. The developed framework system includes new image enhancement, filtering, and segmentation techniques which enabled higher recognition accuracies, faster processing times, and lower energy consumption, compared with the best state of the art published
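
    The paper's own localization and segmentation algorithms are not reproduced in this record. As a generic stand-in for that step, a camera-captured text image is often binarized adaptively and candidate character blobs extracted as contours, e.g. with OpenCV:

```python
import cv2

def localize_text_regions(gray, min_area=50):
    """A minimal text-localization sketch: binarize a grayscale camera
    image and return bounding boxes of candidate character blobs.
    This is a generic stand-in, not the authors' algorithm."""
    binary = cv2.adaptiveThreshold(
        gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
        cv2.THRESH_BINARY_INV, blockSize=31, C=15)
    contours, _ = cv2.findContours(
        binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```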

  2. Development of a machine vision system for automated structural assembly

    NASA Technical Reports Server (NTRS)

    Sydow, P. Daniel; Cooper, Eric G.

    1992-01-01

    Research is being conducted at the LaRC to develop a telerobotic assembly system designed to construct large space truss structures. This research program was initiated within the past several years, and a ground-based test-bed was developed to evaluate and expand the state of the art. Test-bed operations currently use predetermined ('taught') points for truss structural assembly. Total dependence on the use of taught points for joint receptacle capture and strut installation is neither robust nor reliable enough for space operations. Therefore, a machine vision sensor guidance system is being developed to locate and guide the robot to a passive target mounted on the truss joint receptacle. The vision system hardware includes a miniature video camera, passive targets mounted on the joint receptacles, target illumination hardware, and an image processing system. Discrimination of the target from background clutter is accomplished through standard digital processing techniques. Once the target is identified, a pose estimation algorithm is invoked to determine the location, in three-dimensional space, of the target relative to the robot's end-effector. Preliminary test results of the vision system in the Automated Structural Assembly Laboratory with a range of lighting and background conditions indicate that it is fully capable of successfully identifying joint receptacle targets throughout the required operational range. Controlled optical bench test results indicate that the system can also provide the pose estimation accuracy needed to define the target position.
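
    The record does not specify the pose-estimation algorithm. A common formulation of the same problem, recovering target pose from a known 3D feature-point layout and its image projections, is the Perspective-n-Point solution; the sketch below uses OpenCV's solver with illustrative target geometry and calibration values, none of which come from the paper:

```python
import cv2
import numpy as np

# Known 3D layout of the passive target's feature points (mm) and their
# detected image locations (px). All values here are illustrative only.
target_points = np.array([[0, 0, 0], [40, 0, 0],
                          [40, 40, 0], [0, 40, 0]], dtype=np.float64)
image_points = np.array([[322.1, 241.7], [401.5, 244.0],
                         [398.8, 322.3], [320.4, 319.9]], dtype=np.float64)
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)  # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(target_points, image_points,
                              camera_matrix, dist_coeffs)
# tvec is the target position in the camera frame (hence relative to the
# end-effector after a fixed hand-eye transform); rvec is its orientation.
```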

  3. A vision system for an unmanned nonlethal weapon

    NASA Astrophysics Data System (ADS)

    Kogut, Greg; Drymon, Larry

    2004-10-01

    Unmanned weapons remove humans from deadly situations. However, some systems, such as unmanned guns, are difficult to control remotely. It is difficult for a soldier to perform the complex tasks of identifying and aiming at specific points on targets from a remote location. This paper describes a computer vision and control system for providing autonomous control of unmanned guns, developed at Space and Naval Warfare Systems Center, San Diego (SSC San Diego). The test platform, consisting of a non-lethal gun mounted on a pan-tilt mechanism, can be used as an unattended device or mounted on a robot for mobility. The system operates with a degree of autonomy determined by a remote user that ranges from teleoperated to fully autonomous. The teleoperated mode consists of remote joystick control over all aspects of the weapon, including aiming, arming, and firing. Visual feedback is provided by near-real-time video feeds from bore-sight and wide-angle cameras. The semi-autonomous mode provides the user with tracking information overlaid on the real-time video, showing all detected targets being tracked by the vision system. The user selects a target with a mouse, and the system automatically aims the gun at it. Arming and firing are still performed by teleoperation. In fully autonomous mode, all aspects of gun control are performed by the vision system.

  4. Adaptive fuzzy system for 3-D vision

    NASA Technical Reports Server (NTRS)

    Mitra, Sunanda

    1993-01-01

    An adaptive fuzzy system using the concept of the Adaptive Resonance Theory (ART) type neural network architecture and incorporating fuzzy c-means (FCM) system equations for reclassification of cluster centers was developed. The Adaptive Fuzzy Leader Clustering (AFLC) architecture is a hybrid neural-fuzzy system which learns on-line in a stable and efficient manner. The system uses a control structure similar to that found in the Adaptive Resonance Theory (ART-1) network to identify the cluster centers initially. The initial classification of an input takes place in a two-stage process: a simple competitive stage and a distance-metric comparison stage. The cluster prototypes are then incrementally updated by relocating the centroid positions using the Fuzzy c-Means (FCM) system equations for the centroids and the membership values, as sketched below. The operational characteristics of AFLC and the critical parameters involved in its operation are discussed. The performance of the AFLC algorithm is presented through application of the algorithm to the Anderson Iris data and laser-luminescent fingerprint image data. The AFLC algorithm successfully classifies features extracted from real data, discrete or continuous, indicating the potential strength of this new clustering algorithm in analyzing complex data sets. The hybrid neuro-fuzzy AFLC algorithm will enhance analysis of a number of difficult recognition and control problems involved with Tethered Satellite Systems and on-orbit space shuttle attitude controllers.
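
    A minimal sketch of the FCM update referred to above, in its standard textbook form (not the authors' code); AFLC's ART-style vigilance control around it is omitted:

```python
import numpy as np

def fcm_update(X, centers, m=2.0):
    """One fuzzy c-means iteration: recompute memberships and centroids.

    X: (N, d) data, centers: (c, d) current cluster centers, m: fuzzifier.
    """
    # Distances from every point to every center, shape (N, c).
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    # Membership u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)).
    u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1)),
                     axis=2)
    # Centroids are membership-weighted means of the data.
    w = u ** m
    new_centers = (w.T @ X) / w.sum(axis=0)[:, None]
    return u, new_centers
```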

  5. Novel Corrosion Sensor for Vision 21 Systems

    SciTech Connect

    Heng Ban; Bharat Soni

    2007-03-31

    Advanced sensor technology is identified as a key component for advanced power systems for future energy plants that would have virtually no environmental impact. This project intends to develop a novel high temperature corrosion sensor and subsequent measurement system for advanced power systems. Fireside corrosion is the leading mechanism for boiler tube failures and has emerged as a significant concern for current and future energy plants due to the introduction of technologies targeting emissions reduction, efficiency improvement, or fuel/oxidant flexibility. Corrosion damage can lead to catastrophic equipment failure, explosions, and forced outages. Proper management of corrosion requires real-time indication of the corrosion rate. However, short-term, on-line corrosion monitoring systems for fireside corrosion remain a technical challenge to date due to the extremely harsh combustion environment. The overall goal of this project is to develop a technology for on-line fireside corrosion monitoring. This objective is achieved by the laboratory development of sensors and instrumentation, testing them in a laboratory muffle furnace, and eventually testing the system in a coal-fired furnace. This project successfully developed two types of sensors and measurement systems and successfully tested them in a muffle furnace in the laboratory. The capacitance sensor had a high fabrication cost and might be more appropriate in other applications. The low-cost resistance sensor was tested in a power plant burning eastern bituminous coals. The results show that the fireside corrosion measurement system can be used to determine the corrosion rate at waterwall and superheater locations. Electron microscope analysis of the corroded sensor surface provided a detailed picture of the corrosion process.

  6. Image processing in an enhanced and synthetic vision system

    NASA Astrophysics Data System (ADS)

    Mueller, Rupert M.; Palubinskas, Gintautas; Gemperlein, Hans

    2002-07-01

    'Synthetic Vision' and 'Sensor Vision' complement each other to form an ideal system for the pilot's situational awareness. To fuse these two data sets, the sensor images are first segmented by a k-means algorithm, and then features are extracted by blob analysis. These image features are compared with the features of the projected airport data using fuzzy logic in order to identify the runway in the sensor image and to improve the aircraft navigation data. This process is necessary due to inaccurate input data, i.e., the position and attitude of the aircraft. After the runway is identified, obstacles can be detected using the sensor image. The extracted information is presented to the pilot's display system and combined with the appropriate information from the MMW radar sensor in a subsequent fusion processor. A real-time image processing procedure is discussed and demonstrated with IR measurements from a FLIR system during landing approaches.
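
    As an illustration of the first fusion step, a plain intensity-based k-means segmentation can be sketched in a few lines; this is a generic version, not the authors' implementation:

```python
import numpy as np

def kmeans_segment(image, k=3, iters=20, seed=0):
    """Segment a grayscale sensor image by k-means on pixel intensity.
    Blob analysis on the resulting label map would follow this step."""
    rng = np.random.default_rng(seed)
    pixels = image.reshape(-1, 1).astype(np.float64)
    centers = rng.choice(pixels[:, 0], size=k, replace=False)[:, None]
    for _ in range(iters):
        # Assign each pixel to the nearest intensity center.
        labels = np.argmin(np.abs(pixels - centers.T), axis=1)
        # Recompute each center as the mean of its members.
        for j in range(k):
            members = pixels[labels == j]
            if len(members):
                centers[j] = members.mean()
    return labels.reshape(image.shape)
```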

  7. Low Cost Night Vision System for Intruder Detection

    NASA Astrophysics Data System (ADS)

    Ng, Liang S.; Yusoff, Wan Azhar Wan; R, Dhinesh; Sak, J. S.

    2016-02-01

    The growth in production of Android devices has resulted in greater functionalities as well as lower costs. This has made previously more expensive systems, such as night vision, affordable for more businesses and end users. We designed and implemented robust and low-cost night vision systems based on red-green-blue (RGB) colour histograms for a static camera as well as a camera on an unmanned aerial vehicle (UAV), using the OpenCV library on Intel-compatible notebook computers running the Ubuntu Linux operating system with less than 8 GB of RAM. They were tested against human intruders under low-light conditions (indoor, outdoor, night time) and were shown to have successfully detected the intruders.
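
    A histogram-based detector of this kind can be sketched by comparing the current frame's RGB histograms against a background reference; the OpenCV calls below are standard, but the scoring and thresholding policy is an assumption of this sketch, not the authors' design:

```python
import cv2

def histogram_change_score(frame, background):
    """Compare per-channel RGB histograms of the current frame against a
    background reference; a large distance suggests an intruder."""
    score = 0.0
    for ch in range(3):
        h1 = cv2.calcHist([frame], [ch], None, [64], [0, 256])
        h2 = cv2.calcHist([background], [ch], None, [64], [0, 256])
        cv2.normalize(h1, h1)
        cv2.normalize(h2, h2)
        # Bhattacharyya distance: 0 = identical, 1 = no overlap.
        score += cv2.compareHist(h1, h2, cv2.HISTCMP_BHATTACHARYYA)
    return score / 3.0

# An intruder would be flagged when the averaged distance exceeds a
# threshold tuned on background-only footage (hypothetical policy).
```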

  8. Development of a distributed vision system for industrial conditions

    NASA Astrophysics Data System (ADS)

    Weiss, Michael; Schiller, Arnulf; O'Leary, Paul; Fauster, Ewald; Schalk, Peter

    2003-04-01

    This paper presents a prototype system to monitor a hot glowing wire during the rolling process with respect to quality-relevant aspects. To this end, a measurement system based on machine vision and a communication framework integrating distributed measurement nodes are introduced. Machine vision is used to evaluate the wire quality parameters; for this purpose, an image processing algorithm, based on dual Grassmannian coordinates and fitting parallel lines by singular value decomposition, is formulated. Furthermore, a communication framework is presented which implements anonymous tuplespace communication, a private network based on TCP/IP, and a consistent Java implementation of all components used. Additionally, industrial requirements such as real-time communication to IEC-61131-conformant digital IOs (Modbus TCP/IP protocol), the implementation of a watchdog pattern, and the integration of multiple operating systems (LINUX, QNX and WINDOWS) are outlined. The deployment of this framework to the real-world problem of the wire rolling mill is presented.
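
    The core line-fitting step can be illustrated with a total-least-squares fit via SVD; the sketch below fits a single line to edge points, whereas the paper's dual Grassmannian formulation handles parallel line pairs jointly:

```python
import numpy as np

def fit_line_svd(points):
    """Fit a 2D line to edge points by SVD (total least squares).

    points: (N, 2) pixel coordinates along one wire edge.
    Returns a point on the line (the centroid) and the unit direction
    (the principal component of greatest variance).
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    direction = vt[0]
    return centroid, direction
```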

  9. NOVEL CORROSION SENSOR FOR VISION 21 SYSTEMS

    SciTech Connect

    Heng Ban

    2004-12-01

    Advanced sensor technology is identified as a key component for advanced power systems for future energy plants that would have virtually no environmental impact. This project intends to develop a novel high temperature corrosion sensor and subsequent measurement system for advanced power systems. Fireside corrosion is the metal loss caused by chemical reactions on surfaces exposed to the combustion environment. Such corrosion is the leading mechanism for boiler tube failures and has emerged to be a significant concern for current and future energy plants due to the introduction of technologies targeting emissions reduction, efficiency improvement, or fuel/oxidant flexibility. Corrosion damage can lead to catastrophic equipment failure, explosions, and forced outages. Proper management of corrosion requires real-time indication of corrosion rate. However, short-term, on-line corrosion monitoring systems for fireside corrosion remain a technical challenge to date due to the extremely harsh combustion environment. The overall objective of this proposed project is to develop a technology for on-line corrosion monitoring based on a new concept. This report describes the initial results from the first-year effort of the three-year study that include laboratory development and experiment, and pilot combustor testing.

  10. The Systemic Vision of the Educational Learning

    ERIC Educational Resources Information Center

    Lima, Nilton Cesar; Penedo, Antonio Sergio Torres; de Oliveira, Marcio Mattos Borges; de Oliveira, Sonia Valle Walter Borges; Queiroz, Jamerson Viegas

    2012-01-01

    As the sophistication of technology increases, so does the demand for quality in education. The expectation of quality has promoted a broad range of products and systems, including in education. These factors include the increased diversity in the student body, which requires greater emphasis that allows a simple and dynamic model in the…

  11. Displacement measurement system for inverters using computer micro-vision

    NASA Astrophysics Data System (ADS)

    Wu, Heng; Zhang, Xianmin; Gan, Jinqiang; Li, Hai; Ge, Peng

    2016-06-01

    We propose a practical system for noncontact displacement measurement of inverters using computer micro-vision at the sub-micron scale. The measuring method of the proposed system is based on a fast template matching algorithm combined with optical microscopy. A laser interferometer measurement (LIM) system is built for comparison. Experimental results demonstrate that the proposed system achieves the same performance as the LIM system while offering higher operability and stability. The measuring accuracy is 0.283 μm.
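
    A plausible sketch of such a measurement core is normalized cross-correlation followed by parabolic sub-pixel peak refinement; this is a generic technique, not necessarily the authors' exact algorithm:

```python
import cv2

def match_subpixel(image, template):
    """Locate a template with normalized cross-correlation, then refine
    the peak to sub-pixel precision with a parabolic fit."""
    r = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(r)
    dx = dy = 0.0
    if 0 < x < r.shape[1] - 1:  # parabola through 3 horizontal samples
        l, c, rr = r[y, x - 1], r[y, x], r[y, x + 1]
        denom = l - 2 * c + rr
        if denom != 0:
            dx = 0.5 * (l - rr) / denom
    if 0 < y < r.shape[0] - 1:  # and through 3 vertical samples
        t, c, b = r[y - 1, x], r[y, x], r[y + 1, x]
        denom = t - 2 * c + b
        if denom != 0:
            dy = 0.5 * (t - b) / denom
    return x + dx, y + dy  # top-left corner of best match, sub-pixel
```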

  12. Honey characterization using computer vision system and artificial neural networks.

    PubMed

    Shafiee, Sahameh; Minaei, Saeid; Moghaddam-Charkari, Nasrollah; Barzegar, Mohsen

    2014-09-15

    This paper reports the development of a computer vision system (CVS) for non-destructive characterization of honey based on colour and its correlated chemical attributes including ash content (AC), antioxidant activity (AA), and total phenolic content (TPC). Artificial neural network (ANN) models were applied to transform RGB values of images to CIE L*a*b* colourimetric measurements and to predict AC, TPC and AA from colour features of images. The developed ANN models were able to convert RGB values to CIE L*a*b* colourimetric parameters with a low generalization error of 1.01±0.99. In addition, the developed models for prediction of AC, TPC and AA showed high performance based on colour parameters of honey images, as the R² values for prediction were 0.99, 0.98, and 0.87 for AC, AA and TPC, respectively. The experimental results show the effectiveness and possibility of applying CVS for non-destructive honey characterization by the industry.

  13. Bionic vision: system architectures: a review.

    PubMed

    Guenther, Thomas; Lovell, Nigel H; Suaning, Gregg J

    2012-01-01

    The concept of an electronic visual prosthesis has been investigated since the early 20th century. While the first generation of long-term implantable devices were defined by the turn of the millennium, the greatest progress has been achieved in the past decade. This review describes the current state of the art of visual prosthesis investigated by more than two dozen active groups in this field of research. The focus is on technological solutions in regard to long-term safety of materials, electrode-tissue interfaces and encapsulation technologies. Furthermore, we critically assess the maximum number of stimulating electrodes each technological approach is likely to provide.

  14. Healthcare Information Systems - Requirements and Vision

    NASA Astrophysics Data System (ADS)

    Williams, John G.

    The introduction of sophisticated information, communications and technology into health care is not a simple task, as demonstrated by the difficulties encountered by the Department of Health's multi-billion programme for the NHS. This programme has successfully implemented much of the infrastructure needed to support the activities of the NHS, but has made less progress with electronic patient records. The case for health records that are focused on the individual patient will be outlined, and the need for these to be underpinned by professionally agreed standards for structure and content. Some of the challenges will be discussed, and the benefits to health care and clinical research will be explored.

  15. Telerobotic rendezvous and docking vision system architecture

    NASA Technical Reports Server (NTRS)

    Gravely, Ben; Myers, Donald; Moody, David

    1992-01-01

    This research program has successfully demonstrated a new target label architecture that allows a microcomputer to determine the position, orientation, and identity of an object. It contains a CAD-like database with specific geometric information about the object for approach, grasping, and docking maneuvers. Successful demonstrations were performed selecting and docking an ORU box with either of two ORU receptacles. Small but significant differences were seen between the two camera types used in the program, and camera-sensitive program elements have been identified. The software has been formatted into a new co-autonomy system which provides various levels of operator interaction and promises to allow effective application of telerobotic systems while code improvements continue.

  16. Extracting depth by binocular stereo in a robot vision system

    SciTech Connect

    Marapane, S.B.; Trivedi, M.M.

    1988-01-01

    A new generation of robotic systems will operate in complex, unstructured environments utilizing sophisticated sensory mechanisms. Vision and range will be two of the most important sensory modalities such systems will utilize to sense their operating environment. Measurement of depth is critical for the success of many robotic tasks, such as object recognition and location, obstacle avoidance and navigation, and object inspection. In this paper we consider the development of a binocular stereo technique for extracting depth information in a robot vision system for inspection and manipulation tasks. The ability to produce precise depth measurements over a wide range of distances and the passivity of the approach make binocular stereo techniques attractive and appropriate for range finding in a robotic environment. This paper describes work in progress towards the development of a region-based binocular stereo technique for a robot vision system designed for inspection and manipulation, and presents preliminary experiments designed to evaluate performance of the approach. Results of these studies show promise for the region-based stereo matching approach. 16 refs., 1 fig.
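
    The underlying geometry is the classic rectified-stereo triangulation relation; a minimal sketch (the symbols are the conventional ones, not taken from the report):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Classic rectified-stereo triangulation: Z = f * B / d.

    focal_px:     focal length in pixels
    baseline_m:   distance between the two camera centres in metres
    disparity_px: horizontal shift of a matched region between images
    Note that depth precision degrades with range: a fixed disparity
    error translates into a depth error proportional to Z**2.
    """
    return focal_px * baseline_m / disparity_px
```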

  17. A vision system for a Mars rover

    NASA Technical Reports Server (NTRS)

    Wilcox, Brian H.; Gennery, Donald B.; Mishkin, Andrew H.; Cooper, Brian K.; Lawton, Teri B.; Lay, N. Keith; Katzmann, Steven P.

    1988-01-01

    A Mars rover must be able to sense its local environment with sufficient resolution and accuracy to avoid local obstacles and hazards while moving a significant distance each day. Power efficiency and reliability are extremely important considerations, making stereo correlation an attractive method of range sensing compared to laser scanning, if the computational load and correspondence errors can be handled. Techniques for treatment of these problems, including the use of more than two cameras to reduce correspondence errors and possibly to limit the computational burden of stereo processing, have been tested at JPL. Once a reliable range map is obtained, it must be transformed to a plan view and compared to a stored terrain database, in order to refine the estimated position of the rover and to improve the database. The slope and roughness of each terrain region are computed, which form the basis for a traversability map allowing local path planning. Ongoing research and field testing of such a system is described.

  18. A VISION of Advanced Nuclear System Cost Uncertainty

    SciTech Connect

    J'Tia Taylor; David E. Shropshire; Jacob J. Jacobson

    2008-08-01

    VISION (VerifIable fuel cycle SImulatiON) is the Advanced Fuel Cycle Initiative’s and Global Nuclear Energy Partnership Program’s nuclear fuel cycle systems code designed to simulate the US commercial reactor fleet. The code is a dynamic stock and flow model that tracks the mass of materials at the isotopic level through the entire nuclear fuel cycle. As VISION runs, it calculates the decay of 70 isotopes, including uranium, plutonium, minor actinides, and fission products. VISION.ECON is a sub-model of VISION that was developed to estimate fuel cycle and reactor costs. The sub-model uses the mass flows generated by VISION for each of the fuel cycle functions (referred to as modules) and calculates the annual cost based on cost distributions provided by the Advanced Fuel Cycle Cost Basis Report [1]. Costs are aggregated for each fuel cycle module, and the modules are aggregated into front-end, back-end, recycling, reactor, and total fuel cycle costs. The software also has the capability to perform system sensitivity analysis, which may be used to analyze the impacts of system uncertainty on costs. This paper provides a preliminary evaluation of the cost uncertainty effects attributable to 1) key reactor and fuel cycle system parameters and 2) scheduling variations. The evaluation focuses on the uncertainty in the total cost of electricity and fuel cycle costs. First, a single light water reactor (LWR) using mixed oxide fuel is examined to ascertain the effects of simple parameter changes. Three system parameters (burnup, capacity factor, and reactor power) are varied from nominal cost values, and the effect on the total cost of electricity is measured. These simple parameter changes are then measured in more complex scenarios: 2-tier systems including LWRs with mixed fuel and fast recycling reactors using transuranic fuel. Other system parameters are evaluated and results will be presented in the paper. Secondly, the uncertainty due to

  19. Establishing an evoked-potential vision-tracking system

    NASA Technical Reports Server (NTRS)

    Skidmore, Trent A.

    1991-01-01

    This paper presents experimental evidence to support the feasibility of an evoked-potential vision-tracking system. The topics discussed are stimulator construction, verification of the photic driving response in the electroencephalogram, a method for performing frequency separation, and a transient-analysis example. The final issue considered is that of object multiplicity (concurrent visual stimuli with different flashing rates). The paper concludes by discussing several applications currently under investigation.

  20. Artificial intelligence, expert systems, computer vision, and natural language processing

    NASA Technical Reports Server (NTRS)

    Gevarter, W. B.

    1984-01-01

    An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.

  1. International Border Management Systems (IBMS) Program : visions and strategies.

    SciTech Connect

    McDaniel, Michael; Mohagheghi, Amir Hossein

    2011-02-01

    Sandia National Laboratories (SNL), International Border Management Systems (IBMS) Program is working to establish a long-term border security strategy with United States Central Command (CENTCOM). Efforts are being made to synthesize border security capabilities and technologies maintained at the Laboratories, and coordinate with subject matter expertise from both the New Mexico and California offices. The vision for SNL is to provide science and technology support for international projects and engagements on border security.

  2. Machine vision system for automated detection of stained pistachio nuts

    NASA Astrophysics Data System (ADS)

    Pearson, Tom C.

    1995-01-01

    A machine vision system was developed to separate stained pistachio nuts, which comprise about 5% of the California crop, from unstained nuts. The system may be used to reduce the labor involved with manual grading or to remove aflatoxin-contaminated product from low-grade process streams. The system was tested on two different pistachio process streams: the bi-chromatic color sorter reject stream and the small nut shelling stock stream. The system had a minimum overall error rate of 14% for the bi-chromatic sorter reject stream and 15% for the small shelling stock stream.

  3. Computer vision in roadway transportation systems: a survey

    NASA Astrophysics Data System (ADS)

    Loce, Robert P.; Bernal, Edgar A.; Wu, Wencheng; Bala, Raja

    2013-10-01

    There is a worldwide effort to apply 21st century intelligence to evolving our transportation networks. The goals of smart transportation networks are quite noble and manifold, including safety, efficiency, law enforcement, energy conservation, and emission reduction. Computer vision is playing a key role in this transportation evolution. Video imaging scientists are providing intelligent sensing and processing technologies for a wide variety of applications and services. There are many interesting technical challenges including imaging under a variety of environmental and illumination conditions, data overload, recognition and tracking of objects at high speed, distributed network sensing and processing, energy sources, as well as legal concerns. This paper presents a survey of computer vision techniques related to three key problems in the transportation domain: safety, efficiency, and security and law enforcement. A broad review of the literature is complemented by detailed treatment of a few selected algorithms and systems that the authors believe represent the state-of-the-art.

  4. Computer-vision-based inspecting system for needle roller bearing

    NASA Astrophysics Data System (ADS)

    Li, Wei; He, Tao; Zhong, Fei; Wu, Qinhua; Zhong, Yuning; Shi, Teiling

    2006-11-01

    A Computer Vision based Inspecting System for Needle Roller Bearing (CVISNRB) is proposed in this paper. The key technical characteristics, main functions, and principle of CVISNRB are also introduced. CVISNRB is composed of a mechanical transmission and automatic feeding system, an imaging system, software algorithms, an automatic sorting system for inspected bearings, a human-computer interface, a pneumatic control system, an electric control system, and so on. Computer vision techniques are introduced into the needle roller bearing inspection system, which solves the problem of inspecting small needle roller bearings in bearing production enterprises, increases inspection speed, and realizes automatic, non-contact, on-line inspection. The CVISNRB can effectively detect missing needles and give an accurate count. The accuracy reaches 99.5%, and the inspection speed reaches 15 needle roller bearings per minute. The CVISNRB has operated without malfunction over the past half year of actual use and can meet practical needs.

  5. Users' subjective evaluation of electronic vision enhancement systems.

    PubMed

    Culham, Louise E; Chabra, Anthony; Rubin, Gary S

    2009-03-01

    The aims of this study were (1) to elicit the users' responses to four electronic head-mounted devices (Jordy, Flipperport, Maxport and NuVision) and (2) to correlate users' opinions with performance. Ten patients with early onset macular disease (EOMD) and 10 with age-related macular disease (AMD) used these electronic vision enhancement systems (EVESs) for a variety of visual tasks. A questionnaire designed in-house and a modified VF-14 were used to evaluate the responses. Following initial experience of the devices in the laboratory, every patient took home two of the four devices for 1 week each. Responses were re-evaluated after this period of home loan. No single EVES stood out as the strong preference for all aspects evaluated. In the laboratory-based appraisal, Flipperport typically received the best overall ratings and highest score for image quality and ability to magnify, but after home loan there was no significant difference between devices. Comfort of device, although important, was not predictive of rating once magnification had been taken into account. For actual performance, a threshold effect was seen whereby ratings increased as reading speed improved up to 60 words per minute. Newly diagnosed patients responded most positively to EVESs, but otherwise users' opinions could not be predicted by age, gender, diagnosis or previous CCTV experience. User feedback is essential in our quest to understand the benefits and shortcomings of EVESs. Such information should help guide both prescribing and future development of low vision devices.

  6. Low Cost Vision Based Personal Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Amami, M. M.; Smith, M. J.; Kokkas, N.

    2014-03-01

    Mobile mapping systems (MMS) can be used for several purposes, such as transportation, highway infrastructure mapping and GIS data collection. However, the acceptance of these systems is not widespread, and their use is still limited due to the high cost and the dependency on the Global Navigation Satellite System (GNSS). A low-cost vision-based personal MMS has been produced with the aim of overcoming these limitations. The system has been designed to depend mainly on cameras, using low-cost GNSS and inertial sensors to provide the bundle adjustment solution with initial values. The system has the potential to be used indoors and outdoors. It has been tested indoors and outdoors with different GPS coverage, surrounding features, and narrow and curving paths. Tests show that the system is able to work in such environments, providing 3D coordinates with better than 10 cm accuracy.

  7. Bionic Vision-Based Intelligent Power Line Inspection System

    PubMed Central

    Ma, Yunpeng; He, Feijia; Xu, Jinxin

    2017-01-01

    Detecting the threats posed by external obstacles to power lines can ensure the stability of the power system. Inspired by the attention mechanism and binocular vision of the human visual system, an intelligent power line inspection system is presented in this paper. The human visual attention mechanism is used to detect and track power lines in image sequences according to the shape information of the power lines, and the binocular visual model is used to calculate the 3D coordinates of obstacles and power lines. To improve the real-time performance and accuracy of the system, we propose a new matching strategy based on the traditional SURF algorithm. The experimental results show that the system is able to accurately locate the position of obstacles around power lines automatically, that the designed power line inspection system is effective in complex backgrounds, and that there were no missed detections under different conditions. PMID:28203269

  8. Bionic Vision-Based Intelligent Power Line Inspection System.

    PubMed

    Li, Qingwu; Ma, Yunpeng; He, Feijia; Xi, Shuya; Xu, Jinxin

    2017-01-01

    Detecting the threats posed by external obstacles to power lines can ensure the stability of the power system. Inspired by the attention mechanism and binocular vision of the human visual system, an intelligent power line inspection system is presented in this paper. The human visual attention mechanism is used to detect and track power lines in image sequences according to the shape information of the power lines, and the binocular visual model is used to calculate the 3D coordinates of obstacles and power lines. To improve the real-time performance and accuracy of the system, we propose a new matching strategy based on the traditional SURF algorithm. The experimental results show that the system is able to accurately locate the position of obstacles around power lines automatically, that the designed power line inspection system is effective in complex backgrounds, and that there were no missed detections under different conditions.

  9. Calibration of a catadioptric omnidirectional vision system with conic mirror

    NASA Astrophysics Data System (ADS)

    Marcato Junior, J.; Tommaselli, A. M. G.; Moraes, M. V. A.

    2016-03-01

    Omnidirectional vision systems that enable 360° imaging have been widely used in several research areas, including close-range photogrammetry, which allows the accurate 3D measurement of objects. To achieve accurate results in photogrammetric applications, it is necessary to model and calibrate these systems. The major contribution of this paper relates to the rigorous geometric modeling and calibration of a catadioptric omnidirectional vision system that is composed of a wide-angle lens camera and a conic mirror. The indirect orientation of the omnidirectional images can also be estimated using this rigorous mathematical model. When calibrating the system, misalignment of the conical mirror axis with respect to the camera's optical axis is a critical problem that must be considered in the mathematical model. The interior calibration technique developed in this paper encompasses the following steps: wide-angle camera calibration; conic mirror modeling; and estimation of the transformation parameters between the camera and conic mirror reference systems. The main advantage of the developed technique is that it does not require accurate physical alignment between the camera and conic mirror axes. The exterior orientation is based on the properties of the conic mirror reflection. Experiments were conducted with images collected from a calibration field, and the results verified that the catadioptric omnidirectional system allows for the generation of ground coordinates with high geometric quality, provided that rigorous photogrammetric processes are applied.

  10. 3D vision system for intelligent milking robot automation

    NASA Astrophysics Data System (ADS)

    Akhloufi, M. A.

    2013-12-01

    In a milking robot, the correct localization and positioning of milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for approximate teat position estimation. This technology has reached its limit and does not allow optimal positioning of the milking cups. Also, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers and for animal health (e.g., the development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-Of-Flight (TOF) and RGBD cameras. The proposed algorithms permit the online segmentation of teats by combining 2D and 3D visual information. The obtained results permit the computation of the 3D teat position, which is then sent to the milking robot for teat cup positioning. The vision system has real-time performance and monitors the optimal positioning of the cups even in the presence of motion. The obtained results, with both TOF and RGBD cameras, show the good performance of the proposed system. The best performance was obtained with RGBD cameras, and this latter technology will be used in future real-life experimental tests.

  11. Retinal Structures and Visual Cortex Activity are Impaired Prior to Clinical Vision Loss in Glaucoma

    PubMed Central

    Murphy, Matthew C.; Conner, Ian P.; Teng, Cindy Y.; Lawrence, Jesse D.; Safiullah, Zaid; Wang, Bo; Bilonick, Richard A.; Kim, Seong-Gi; Wollstein, Gadi; Schuman, Joel S.; Chan, Kevin C.

    2016-01-01

    Glaucoma is the second leading cause of blindness worldwide and its pathogenesis remains unclear. In this study, we measured the structure, metabolism and function of the visual system by optical coherence tomography and multi-modal magnetic resonance imaging in healthy subjects and glaucoma patients with different degrees of vision loss. We found that inner retinal layer thinning, optic nerve cupping and reduced visual cortex activity occurred before patients showed visual field impairment. The primary visual cortex also exhibited more severe functional deficits than higher-order visual brain areas in glaucoma. Within the visual cortex, choline metabolism was perturbed along with increasing disease severity in the eye, optic radiation and visual field. In summary, this study showed evidence that glaucoma deterioration is already present in the eye and the brain before substantial vision loss can be detected clinically using current testing methods. In addition, cortical cholinergic abnormalities are involved during trans-neuronal degeneration and can be detected non-invasively in glaucoma. The current results can be of impact for identifying early glaucoma mechanisms, detecting and monitoring pathophysiological events and eye-brain-behavior relationships, and guiding vision preservation strategies in the visual system, which may help reduce the burden of this irreversible but preventable neurodegenerative disease. PMID:27510406

  12. Design of optimal correlation filters for hybrid vision systems

    NASA Technical Reports Server (NTRS)

    Rajan, Periasamy K.

    1990-01-01

    Research is underway at the NASA Johnson Space Center on the development of vision systems that recognize objects and estimate their position by processing their images. This is a crucial task in many space applications such as autonomous landing on Mars sites, satellite inspection and repair, and docking of the space shuttle and space station. Currently available algorithms and hardware are too slow to be suitable for these tasks. Electronic digital hardware exhibits superior performance in computing and control; however, it takes too much time to carry out important signal processing operations such as Fourier transformation of image data and calculation of the correlation between two images. Fortunately, because of their inherent parallelism, optical devices can carry out these operations very fast, although they are not well suited for computation and control type operations. Hence, investigations are currently being conducted on the development of hybrid vision systems that utilize both optical techniques and digital processing jointly to carry out object recognition tasks in real time. Algorithms for the design of optimal filters for use in hybrid vision systems were developed. Specifically, an algorithm was developed for the design of real-valued frequency-plane correlation filters. Furthermore, research was also conducted on designing correlation filters optimal in the sense of providing maximum signal-to-noise ratio when noise is present in the detectors in the correlation plane. Algorithms were developed for the design of different types of optimal filters: complex filters, real-valued filters, phase-only filters, ternary-valued filters, and coupled filters. This report presents some of these algorithms in detail along with their derivations.
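
    As an illustration of frequency-plane correlation filtering, the sketch below applies a phase-only filter, one of the filter types listed above, using NumPy FFTs in place of the optical 4f stage; it is a schematic digital analogue, not the report's algorithm:

```python
import numpy as np

def correlate_frequency_plane(scene, reference):
    """Correlate a scene with a reference image in the frequency plane,
    mimicking what a 4f optical correlator computes optically."""
    S = np.fft.fft2(scene)
    R = np.fft.fft2(reference, s=scene.shape)
    H = np.conj(R) / (np.abs(R) + 1e-12)   # phase-only matched filter
    correlation = np.fft.ifft2(S * H)
    peak = np.unravel_index(np.argmax(np.abs(correlation)),
                            correlation.shape)
    return peak  # location of the correlation peak = object position
```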

  13. Vision-Based People Detection System for Heavy Machine Applications

    PubMed Central

    Fremont, Vincent; Bui, Manh Tuan; Boukerroui, Djamal; Letort, Pierrick

    2016-01-01

    This paper presents a vision-based people detection system for improving safety in heavy machines. We propose a perception system composed of a monocular fisheye camera and a LiDAR. Fisheye cameras have the advantage of a wide field-of-view, but the strong distortions that they create must be handled at the detection stage. Since people detection in fisheye images has not been well studied, we focus on investigating and quantifying the impact that strong radial distortions have on the appearance of people, and we propose approaches for handling this specificity, adapted from state-of-the-art people detection approaches. These adaptive approaches nevertheless have the drawback of high computational cost and complexity. Consequently, we also present a framework for harnessing the LiDAR modality in order to enhance the detection algorithm for different camera positions. A sequential LiDAR-based fusion architecture is used, which addresses directly the problem of reducing false detections and computational cost in an exclusively vision-based system. A heavy machine dataset was built, and different experiments were carried out to evaluate the performance of the system. The results are promising, in terms of both processing speed and performance. PMID:26805838

  14. Codesign Environment for Computer Vision Hw/Sw Systems

    NASA Astrophysics Data System (ADS)

    Toledo, Ana; Cuenca, Sergio; Suardíaz, Juan

    2006-10-01

    In this paper we present a novel codesign environment conceived especially for hybrid computer vision systems. The environment is based on Mathworks Simulink and Xilinx System Generator tools and comprises the following: an incremental codesign flow, diverse libraries of virtual components with three levels of description (high level, hardware and software), semi-automatic tools to help partition the system, and a methodology for building new library components. The use of high-level libraries allows systems to be developed without exhaustive knowledge of the underlying architecture or special skills in hardware description languages. This enables a smooth incorporation of reconfigurable technologies into image processing systems, which are generally developed by engineers who are not closely familiar with hardware design disciplines.

  15. The robot's eyes - Stereo vision system for automated scene analysis

    NASA Technical Reports Server (NTRS)

    Williams, D. S.

    1977-01-01

    Attention is given to the robot stereo vision system, which maintains the image produced by solid-state detector television cameras in a dynamic random access memory called RAPID. The imaging hardware consists of sensors (two solid-state image arrays using a charge injection technique), a video-rate analog-to-digital converter, the RAPID memory, various types of computer-controlled displays, and preprocessing equipment (for reflexive actions, processing aids, and object detection). The software is aimed at locating objects and determining traversability. An object-tracking algorithm is discussed, and it is noted that tracking speed is in the 50-75 pixels/s range.

  16. Micro-vision servo control of a multi-axis alignment system for optical fiber assembly

    NASA Astrophysics Data System (ADS)

    Chen, Weihai; Yu, Fei; Qu, Jianliang; Chen, Wenjie; Zhang, Jianbin

    2017-04-01

    This paper describes a novel optical fiber assembly system featuring a multi-axis alignment function based on micro-vision feedback control. It consists of an active parallel alignment mechanism, a passive compensation mechanism, a micro-gripper and a micro-vision servo control system. The active parallel alignment part is a parallelogram-based design with a remote-center-of-motion (RCM) function to achieve precise rotation without fatal lateral motion. The passive mechanism, with five degrees of freedom (5-DOF), is used to implement passive compensation for multi-axis errors. A specially designed 1-DOF micro-gripper mounted onto the active parallel alignment platform is adopted to grasp and rotate the optical fiber. A micro-vision system equipped with two charge-coupled device (CCD) cameras is introduced to observe the small field of view and obtain multi-axis errors for servo feedback control. The two CCD cameras are installed in an orthogonal arrangement; thus the errors can be easily measured from the captured images. Meanwhile, a series of tracking and measurement algorithms based on specific features of the target objects are developed. Details of the force and displacement sensor information acquisition in the assembly experiment are also provided. An experiment demonstrates the validity of the proposed visual algorithm by achieving the task of eliminating errors and inserting an optical fiber into the U-groove accurately.

  17. Research into the Architecture of CAD Based Robot Vision Systems

    DTIC Science & Technology

    1988-02-09

    This DTIC record excerpt lists related publications, including "Automatic Generation of Recognition Features for Computer Vision" (Mudge, Turney and Volz), published in Robotica (1987); "... Occluded Parts" (T.N. Mudge, J.L. Turney, and R.A. Volz), Robotica, vol. 5, 1987, pp. 117-127; and "Vision Algorithms for Hypercube Machines" (T.N. Mudge

  18. Computer vision system for three-dimensional inspection

    NASA Astrophysics Data System (ADS)

    Penafiel, Francisco; Fernandez, Luis; Campoy, Pascual; Aracil, Rafael

    1994-11-01

    In the manufacturing process, certain workpieces are inspected for dimensional measurement using sophisticated quality control techniques. During the operation phase, these parts are deformed due to the high temperatures involved in the process. The evolution of the workpiece structure shows up as dimensional modification, which can be measured with a set of dimensional parameters. In this paper, a three-dimensional automatic inspection of these parts is proposed. The aim is to measure workpiece features through 3D control methods using directional lighting and a computer artificial vision system. The results of this measurement are compared with the parameters obtained after the manufacturing process in order to determine the degree of deformation of the workpiece and decide whether it is still usable. Workpieces outside a predetermined specification range must be discarded and replaced by new ones. The advantage of artificial vision methods is that there is no need for physical contact with the object under inspection, which makes their use feasible in hazardous environments unsuitable for human beings. A system has been developed and applied to the inspection of fuel assemblies in nuclear power plants. Such a system has been implemented in a very high-radiation environment and operates underwater. The physical dimensions of a nuclear fuel assembly after its operation in a nuclear power plant are modified relative to the original dimensions after manufacturing. The whole system (camera, mechanical and illumination systems, and the radioactive fuel assembly) is submerged in water to minimize radiation effects and is remotely controlled by human intervention. The developed system has to inspect accurately a set of measures on the fuel assembly surface such as length, twist, arching, etc. The present project called SICOM (nuclear fuel assembly inspection system) is included into the R

  19. Vision-Based SLAM System for Unmanned Aerial Vehicles

    PubMed Central

    Munguía, Rodrigo; Urzua, Sarquis; Bolea, Yolanda; Grau, Antoni

    2016-01-01

    The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor will be used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks will be used to perform a fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of the inclusion of camera measurements into the system. In this sense the estimation of the trajectory of the vehicle is considerably improved, compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rated and highly noisy. PMID:26999131
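
    The estimator's core loop is the standard EKF predict/update cycle. A minimal linear-model sketch is shown below; the actual system would replace F and H with the Jacobians of its nonlinear motion and AHRS/GPS/camera measurement models:

```python
import numpy as np

class SimpleEKF:
    """A minimal linear-model sketch of the Kalman filter loop used by
    such estimators; class and parameter names are illustrative only."""

    def __init__(self, x0, P0, Q, R):
        self.x, self.P, self.Q, self.R = x0, P0, Q, R

    def predict(self, F):
        # Propagate state and covariance through the motion model.
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z, H):
        y = z - H @ self.x                       # innovation
        S = H @ self.P @ H.T + self.R            # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P
```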

  20. Vision-Based SLAM System for Unmanned Aerial Vehicles.

    PubMed

    Munguía, Rodrigo; Urzua, Sarquis; Bolea, Yolanda; Grau, Antoni

    2016-03-15

    The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor will be used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks will be used to perform a fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of the inclusion of camera measurements into the system. In this sense the estimation of the trajectory of the vehicle is considerably improved, compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rated and highly noisy.

  1. Machine vision system for the control of tunnel boring machines

    NASA Astrophysics Data System (ADS)

    Habacher, Michael; O'Leary, Paul; Harker, Matthew; Golser, Johannes

    2013-03-01

    This paper presents a machine vision system for the control of dual-shield Tunnel Boring Machines. The system consists of a camera with ultra bright LED illumination and a target system consisting of multiple retro-reflectors. The camera mounted on the gripper shield measures the relative position and orientation of the target which is mounted on the cutting shield. In this manner the position of the cutting shield relative to the gripper shield is determined. Morphological operators are used to detect the retro-reflectors in the image and a covariance optimized circle fit is used to determine the center point of each reflector. A graph matching algorithm is used to ensure a robust matching of the constellation of the observed target with the ideal target geometry.
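
    The reflector-center step can be illustrated with an algebraic least-squares circle fit (the Kasa method); the paper's covariance-optimized fit is more sophisticated, so this is only a sketch of the idea:

```python
import numpy as np

def fit_circle(points):
    """Algebraic least-squares circle fit (Kasa method) to the edge
    pixels of one retro-reflector.

    Solves x^2 + y^2 + a*x + b*y + c = 0 in the least-squares sense,
    then converts to center and radius.
    """
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    radius = np.sqrt(cx**2 + cy**2 - c)
    return cx, cy, radius
```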

  2. A Portable Stereo Vision System for Whole Body Surface Imaging.

    PubMed

    Yu, Wurong; Xu, Bugao

    2010-04-01

    This paper presents a whole body surface imaging system based on stereo vision technology. We have adopted a compact and economical configuration which involves only four stereo units to image the frontal and rear sides of the body. The success of the system depends on a stereo matching process that can effectively segment the body from the background in addition to recovering sufficient geometric details. For this purpose, we have developed a novel sub-pixel, dense stereo matching algorithm which includes two major phases. In the first phase, the foreground is accurately segmented with the help of a predefined virtual interface in the disparity space image, and a coarse disparity map is generated with block matching. In the second phase, local least squares matching is performed in combination with global optimization within a regularization framework, so as to ensure both accuracy and reliability. Our experimental results show that the system can realistically capture smooth and natural whole body shapes with high accuracy.

  3. Development of Non-contact Respiratory Monitoring System for Newborn Using a FG Vision Sensor

    NASA Astrophysics Data System (ADS)

    Kurami, Yoshiyuki; Itoh, Yushi; Natori, Michiya; Ohzeki, Kazuo; Aoki, Yoshimitsu

    In recent years, advances in neonatal care have been strongly hoped for, as the birth rate of low-birth-weight babies increases. The respiration of low-birth-weight babies is especially unstable because their central nervous and respiratory functions are immature; consequently, a low-birth-weight baby often suffers from respiratory disorders. In a NICU (Neonatal Intensive Care Unit), neonatal respiration is monitored at all times using a cardio-respiratory monitor and a pulse oximeter. These contact-type sensors can measure the respiratory rate and SpO2 (Saturation of Peripheral Oxygen). However, because a contact-type sensor might damage the newborn's skin, monitoring neonatal respiration this way is a real burden. Therefore, we developed a respiratory monitoring system for newborns using an FG (Fiber Grating) vision sensor. The FG vision sensor is an active stereo vision sensor that enables non-contact 3D measurement. A respiratory waveform is calculated by detecting the vertical motion of the thoracic and abdominal regions with respiration. We conducted a clinical experiment in the NICU and confirmed that the accuracy of the obtained respiratory waveform was high. Non-contact respiratory monitoring of newborns using an FG vision sensor enables a minimally invasive procedure.

  4. Recognition of Activities of Daily Living with Egocentric Vision: A Review

    PubMed Central

    Nguyen, Thi-Hoa-Cuc; Nebel, Jean-Christophe; Florez-Revuelta, Francisco

    2016-01-01

    Video-based recognition of activities of daily living (ADLs) is being used in ambient assisted living systems in order to support the independent living of older people. However, current systems based on cameras located in the environment present a number of problems, such as occlusions and a limited field of view. Recently, wearable cameras have begun to be exploited. This paper presents a review of the state of the art of egocentric vision systems for the recognition of ADLs following a hierarchical structure: motion, action and activity levels, where each level provides higher semantic information and involves a longer time frame. The current egocentric vision literature suggests that ADLs recognition is mainly driven by the objects present in the scene, especially those associated with specific tasks. However, although object-based approaches have proven popular, object recognition remains a challenge due to the intra-class variations found in unconstrained scenarios. As a consequence, the performance of current systems is far from satisfactory. PMID:26751452

  5. The 3-D vision system integrated dexterous hand

    NASA Technical Reports Server (NTRS)

    Luo, Ren C.; Han, Youn-Sik

    1989-01-01

    Most multifingered hands use a tendon mechanism to minimize the size and weight of the hand. Such tendon mechanisms suffer from the problems of stiction and friction of the tendons, resulting in a reduction of control accuracy. A design for a 3-D vision system integrated dexterous hand with motor control is described which overcomes these problems. The proposed hand is composed of three three-jointed grasping fingers with tactile sensors on their tips and a two-jointed eye finger with a cross-shaped laser-beam-emitting diode in its distal part. The non-grasping fingers allow 3-D vision capability and can rotate around the hand to see and measure the sides of grasped objects and the task environment. An algorithm that determines the range and local orientation of the contact surface using a cross-shaped laser beam is introduced along with some potential applications. An efficient method for finger force calculation is presented which uses the measured contact surface normals of an object.

  6. A database/knowledge structure for a robotics vision system

    NASA Technical Reports Server (NTRS)

    Dearholt, D. W.; Gonzales, N. N.

    1987-01-01

    Desirable properties of robotics vision database systems are given, and structures which possess properties appropriate for some aspects of such database systems are examined. Included in the structures discussed is a family of networks in which link membership is determined by measures of proximity between pairs of the entities stored in the database. This type of network is shown to have properties which guarantee that the search for a matching feature vector is monotonic. That is, the database can be searched with no backtracking, if there is a feature vector in the database which matches the feature vector of the external entity which is to be identified. The construction of the database is discussed, and the search procedure is presented. A section on the support provided by the database for description of the decision-making processes and the search path is also included.
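
    The backtrack-free search can be illustrated with a small sketch. The following hypothetical Python fragment (the data layout and names are assumptions) performs greedy descent along proximity links: from any start node it moves to whichever neighbor is closer to the query feature vector and stops when no neighbor improves, which, given the monotonicity property described above, ends at the matching vector whenever one exists.

    ```python
    import numpy as np

    def greedy_search(nodes, links, query, start=0):
        # nodes: list of feature vectors; links: dict node -> neighbor ids.
        current = start
        dist = np.linalg.norm(nodes[current] - query)
        while True:
            best, best_d = current, dist
            for nb in links[current]:
                d = np.linalg.norm(nodes[nb] - query)
                if d < best_d:
                    best, best_d = nb, d
            if best == current:        # no neighbor is closer: done
                return current, dist
            current, dist = best, best_d
    ```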

  7. MARVEL: A system that recognizes world locations with stereo vision

    SciTech Connect

    Braunegg, D.J. (Artificial Intelligence Lab.)

    1993-06-01

    MARVEL is a system that supports autonomous navigation by building and maintaining its own models of world locations and using these models and stereo vision input to recognize its location in the world and its position and orientation within that location. The system emphasizes the use of simple, easily derivable features for recognition, whose aggregate identifies a location, instead of complex features that also require recognition. MARVEL is designed to be robust with respect to input errors and to respond to a gradually changing world by updating its world location models. In over 1,000 recognition tests using real-world data, MARVEL yielded a false negative rate under 10% with zero false positives.

  8. Wearable design issues for electronic vision enhancement systems

    NASA Astrophysics Data System (ADS)

    Dvorak, Joe

    2006-09-01

    As the baby boomer generation ages, visual impairment will overtake a significant portion of the US population. At the same time, more and more of our world is becoming digital. These two trends, coupled with the continuing advances in digital electronics, argue for a rethinking in the design of aids for the visually impaired. This paper discusses design issues for electronic vision enhancement systems (EVES) [R.C. Peterson, J.S. Wolffsohn, M. Rubinstein, et al., Am. J. Ophthalmol. 136 1129 (2003)] that will facilitate their wearability and continuous use. We briefly discuss the factors affecting a person's acceptance of wearable devices. We define the concept of operational inertia which plays an important role in our design of wearable devices and systems. We then discuss how design principles based upon operational inertia can be applied to the design of EVES.

  9. [Formal care systems: consequences of their vision of informal caretakers].

    PubMed

    Escuredo Rodríguez, Bibiana

    2006-10-01

    Care for dependent persons falls, fundamentally, on their family members who usually perceive this situation as a problem due to its repercussions on the family group in general and on the health and quality of life for the informal caretaker in particular. The burden which an informal caretaker assumes depends on diverse variables among which the most important are considered to be social assistance and the forms of help which the caretaker has to rely on. At the same time, the resources and help available are determined by the vision which the formal system has for informal caretakers; therefore, it is important that nurses, as caretakers in the formal system, have a clear idea about the situations that are created and that nurses reflect on the alternatives which allow a dependent person to be cared for without forgetting the needs and rights of the caretakers.

  10. Cryogenics Vision Workshop for High-Temperature Superconducting Electric Power Systems Proceedings

    SciTech Connect

    Energetics, Inc.

    2000-01-01

    The US Department of Energy's Superconductivity Program for Electric Systems sponsored the Cryogenics Vision Workshop, which was held on July 27, 1999 in Washington, D.C. This workshop was held in conjunction with the Program's Annual Peer Review meeting. Of the 175 people attending the peer review meeting, 31 were selected in advance to participate in the Cryogenics Vision Workshop discussions. The participants represented cryogenic equipment manufacturers, industrial gas manufacturers and distributors, component suppliers, electric power equipment manufacturers (Superconductivity Partnership Initiative participants), electric utilities, federal agencies, national laboratories, and consulting firms. Critical factors were discussed that need to be considered in describing the successful future commercialization of cryogenic systems. Such systems will enable the widespread deployment of high-temperature superconducting (HTS) electric power equipment. Potential research, development, and demonstration (RD&D) activities and partnership opportunities for advancing suitable cryogenic systems were also discussed. The workshop agenda can be found in the following section of this report. Facilitated sessions were held to discuss two specific focus topics: identifying critical factors that need to be included in a cryogenics vision for HTS electric power systems (from the HTS equipment end-user perspective), and identifying R&D needs and partnership roles (from the cryogenic industry perspective). The findings of the facilitated Cryogenics Vision Workshop were then presented in a plenary session of the Annual Peer Review Meeting. Approximately 120 attendees participated in the afternoon plenary session. This large group heard summary reports from the workshop session leaders and then held a wrap-up session to discuss the findings, cross-cutting themes, and next steps. These summary reports are presented in this document. The ideas and suggestions raised during

  11. Stereoscopic Machine-Vision System Using Projected Circles

    NASA Technical Reports Server (NTRS)

    Mackey, Jeffrey R.

    2010-01-01

    A machine-vision system capable of detecting obstacles large enough to damage or trap a robotic vehicle is undergoing development. The system includes (1) a pattern generator that projects concentric circles of laser light forward onto the terrain, (2) a stereoscopic pair of cameras that are aimed forward to acquire images of the circles, (3) a frame grabber and digitizer for acquiring image data from the cameras, and (4) a single-board computer that processes the data. The system is being developed as a prototype of machine-vision systems to enable robotic vehicles ("rovers") on remote planets to avoid craters, large rocks, and other terrain features that could capture or damage the vehicles. Potential terrestrial applications of systems like this one could include terrain mapping, collision avoidance, navigation of robotic vehicles, mining, and robotic rescue. This system is based partly on the same principles as those of a prior stereoscopic machine-vision system in which the cameras acquire images of a single stripe of laser light that is swept forward across the terrain. However, this system is designed to afford improvements over some of the undesirable features of the prior system, including the need for a pan-and-tilt mechanism to aim the laser to generate the swept stripe, ambiguities in interpretation of the single-stripe image, the time needed to sweep the stripe across the terrain and process the data from many images acquired during that time, and difficulty of calibration because of the narrowness of the stripe. In this system, the pattern generator does not contain any moving parts and need not be mounted on a pan-and-tilt mechanism: the pattern of concentric circles is projected steadily in the forward direction. The system calibrates itself by use of data acquired during projection of the concentric-circle pattern onto a known target representing flat ground. The calibration-target image data are stored in the computer memory for use as a

  12. MMW radar enhanced vision systems: the Helicopter Autonomous Landing System (HALS) and Radar-Enhanced Vision System (REVS) are rotary and fixed wing enhanced flight vision systems that enable safe flight operations in degraded visual environments

    NASA Astrophysics Data System (ADS)

    Cross, Jack; Schneider, John; Cariani, Pete

    2013-05-01

    Sierra Nevada Corporation (SNC) has developed rotary and fixed wing millimeter wave radar enhanced vision systems. The Helicopter Autonomous Landing System (HALS) is a rotary-wing enhanced vision system that enables multi-ship landing, takeoff, and enroute flight in Degraded Visual Environments (DVE). HALS has been successfully flight tested in a variety of scenarios, from brown-out DVE landings, to enroute flight over mountainous terrain, to wire/cable detection during low-level flight. The Radar-Enhanced Vision System (REVS) is a fixed-wing Enhanced Flight Vision System (EFVS) undergoing prototype development testing. Both systems are based on a fast-scanning, three-dimensional 94 GHz radar that produces real-time terrain and obstacle imagery. The radar imagery is fused with synthetic imagery of the surrounding terrain to form a long-range, wide field-of-view display. A symbology overlay is added to provide aircraft state information and, for HALS, approach and landing command guidance cuing. The combination of see-through imagery and symbology provides the key information a pilot needs to perform safe flight operations in DVE conditions. This paper discusses the HALS and REVS systems and technology, presents imagery, and summarizes the recent flight test results.

  13. Visual tracking in stereo. [by computer vision system

    NASA Technical Reports Server (NTRS)

    Saund, E.

    1981-01-01

    A method is described for visual object tracking by a computer vision system using TV cameras and special low-level image processing hardware. The tracker maintains an internal model of the location, orientation, and velocity of the object in three-dimensional space. This model is used to predict where features of the object will lie on the two-dimensional images produced by stereo TV cameras. The differences in the locations of features in the two-dimensional images as predicted by the internal model and as actually seen create an error signal in the two-dimensional representation. This is multiplied by a generalized inverse Jacobian matrix to deduce the error in the internal model. The procedure repeats to update the internal model of the object's location, orientation and velocity continuously.
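
    The update step admits a compact sketch. The fragment below is an illustrative Python/NumPy rendering, not the original implementation: the stacked 2-D feature errors from both camera images are mapped back to a correction of the internal 3-D state through the Moore-Penrose pseudoinverse of an assumed Jacobian J that relates small state changes to image-plane feature motion.

    ```python
    import numpy as np

    def update_state(state, J, predicted_feats, observed_feats, gain=1.0):
        # predicted/observed_feats: (m, 2) image-plane feature positions
        # stacked over both cameras; J: (2m, n) Jacobian (assumed given).
        error_2d = (observed_feats - predicted_feats).ravel()  # image error
        delta = np.linalg.pinv(J) @ error_2d   # back-projected 3-D correction
        return state + gain * delta            # updated location/orientation/velocity
    ```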

  14. Survey of computer vision in roadway transportation systems

    NASA Astrophysics Data System (ADS)

    Manikoth, Natesh; Loce, Robert; Bernal, Edgar; Wu, Wencheng

    2012-01-01

    There is a world-wide effort to apply 21st century intelligence to evolving our transportation networks. The goals of smart transportation networks are quite noble and manifold, including safety, efficiency, law enforcement, energy conservation, and emission reduction. Computer vision is playing a key role in this transportation evolution. Video imaging scientists are providing intelligent sensing and processing technologies for a wide variety of applications and services. There are many interesting technical challenges including imaging under a variety of environmental and illumination conditions, data overload, recognition and tracking of objects at high speed, distributed network sensing and processing, energy sources, as well as legal concerns. This conference presentation and publication is a brief introduction to the field and will be followed by an in-depth journal paper that provides more details on the imaging systems and algorithms.

  15. Improved camera calibration method based on perpendicularity compensation for binocular stereo vision measurement system.

    PubMed

    Jia, Zhenyuan; Yang, Jinghao; Liu, Wei; Wang, Fuji; Liu, Yang; Wang, Lingli; Fan, Chaonan; Zhao, Kai

    2015-06-15

    High-precision calibration of binocular vision systems plays an important role in accurate dimensional measurements. In this paper, an improved camera calibration method is proposed. First, an accurate intrinsic parameters calibration method based on active vision with perpendicularity compensation is developed. Compared to the previous work, this method eliminates the effect of non-perpendicularity of the camera motion on calibration accuracy. The principal point, scale factors, and distortion factors are calculated independently in this method, thereby allowing the strong coupling of these parameters to be eliminated. Second, an accurate global optimization method with only 5 images is presented. The results of calibration experiments show that the accuracy of the calibration method can reach 99.91%.

  16. A Novel Vision Sensing System for Tomato Quality Detection.

    PubMed

    Srivastava, Satyam; Boyat, Sachin; Sadistap, Shashikant

    2014-01-01

    Producing tomatoes is a daunting task, as the crop is exposed to attack by various microorganisms. The symptoms of these attacks usually include color changes, bacterial spots, specks, and sunken areas with concentric rings of different colors on the tomato's outer surface. This paper addresses a vision-sensing-based system for tomato quality inspection. A novel approach has been developed for tomato fruit detection and disease detection. The developed system consists of a 12.0-megapixel USB camera module interfaced with an ARM-9 processor. A ZigBee module has been interfaced with the system for wireless transmission from the host system to a PC-based server for further processing. Algorithm development consists of three major steps: preprocessing (noise rejection, segmentation, and scaling), classification and recognition, and automatic disease detection and classification. Tomato samples were collected from a local market, and data acquisition was performed for database preparation and the various processing steps. The developed system can detect as well as classify various diseases in tomato samples. Various pattern recognition and soft computing techniques have been implemented for data analysis and for predicting parameters such as the shelf life of the tomato, a quality index based on disease detection and classification, freshness, maturity index, and suggestions for the detected diseases. Results were validated against aroma sensing using a commercial Alpha Mos 3000 system. The accuracy calculated from the extracted results is around 92%.
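
    The segmentation stage can be sketched briefly. The following hypothetical Python/OpenCV fragment (all HSV thresholds are illustrative assumptions, not the paper's values) separates the red tomato body in HSV space and then flags brownish regions inside the fruit mask as candidate disease spots for the later classification stages.

    ```python
    import cv2
    import numpy as np

    def tomato_masks(image_bgr):
        hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
        # Red hue wraps around 0 in OpenCV's 0-179 hue range.
        red1 = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255))
        red2 = cv2.inRange(hsv, (170, 80, 60), (180, 255, 255))
        # Close the red mask so disease spots inside the fruit stay covered.
        kernel = np.ones((15, 15), np.uint8)
        fruit = cv2.morphologyEx(cv2.bitwise_or(red1, red2),
                                 cv2.MORPH_CLOSE, kernel)
        # Brownish/dark pixels inside the fruit are candidate disease spots.
        dark = cv2.inRange(hsv, (10, 40, 20), (30, 255, 140))
        spots = cv2.bitwise_and(dark, fruit)
        return fruit, spots
    ```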

  17. Vision for an Open, Global Greenhouse Gas Information System (GHGIS)

    NASA Astrophysics Data System (ADS)

    Duren, R. M.; Butler, J. H.; Rotman, D.; Ciais, P.; Greenhouse Gas Information System Team

    2010-12-01

    Over the next few years, an increasing number of entities ranging from international, national, and regional governments, to businesses and private land-owners, are likely to become more involved in efforts to limit atmospheric concentrations of greenhouse gases. In such a world, geospatially resolved information about the location, amount, and rate of greenhouse gas (GHG) emissions will be needed, as well as the stocks and flows of all forms of carbon through the earth system. The ability to implement policies that limit GHG concentrations would be enhanced by a global, open, and transparent greenhouse gas information system (GHGIS). An operational and scientifically robust GHGIS would combine ground-based and space-based observations, carbon-cycle modeling, GHG inventories, synthesis analysis, and an extensive data integration and distribution system, to provide information about anthropogenic and natural sources, sinks, and fluxes of greenhouse gases at temporal and spatial scales relevant to decision making. The GHGIS effort was initiated in 2008 as a grassroots inter-agency collaboration intended to identify the needs for such a system, assess the capabilities of current assets, and suggest priorities for future research and development. We will present a vision for an open, global GHGIS including latest analysis of system requirements, critical gaps, and relationship to related efforts at various agencies, the Group on Earth Observations, and the Intergovernmental Panel on Climate Change.

  18. 78 FR 34935 - Revisions to Operational Requirements for the Use of Enhanced Flight Vision Systems (EFVS) and to...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-11

    ... operators to use an Enhanced Flight Vision System (EFVS) in lieu of natural vision to continue descending... proficiency would be required for operators who use EFVS in lieu of natural vision to descend below decision... zone elevation. Natural vision must be used below 100 feet. Sections 121.651(c), 125.325,...

  19. Evolution of activity patterns and chromatic vision in primates: morphometrics, genetics and cladistics.

    PubMed

    Heesy, C P; Ross, C F

    2001-02-01

    Hypotheses for the adaptive origin of primates have reconstructed nocturnality as the primitive activity pattern for the entire order based on functional/adaptive interpretations of the relative size and orientation of the orbits, body size and dietary reconstruction. Based on comparative data from extant taxa this reconstruction implies that basal primates were also solitary, faunivorous, and arboreal. Recently, primates have been hypothesized to be primitively diurnal, based in part on the distribution of color-sensitive photoreceptor opsin genes and active trichromatic color vision in several extant strepsirrhines, as well as anthropoid primates (Tan & Li, 1999, Nature 402, 36; Li, 2000, Am. J. Phys. Anthrop. Suppl. 30, 318). If diurnality is primitive for all primates then the functional and adaptive significance of aspects of strepsirrhine retinal morphology and other adaptations of the primate visual system such as high acuity stereopsis, have been misinterpreted for decades. This hypothesis also implies that nocturnality evolved numerous times in primates. However, the hypothesis that primates are primitively diurnal has not been analyzed in a phylogenetic context, nor have the activity patterns of several fossil primates been considered. This study investigated the evolution of activity patterns and trichromacy in primates using a new method for reconstructing activity patterns in fragmentary fossils and by reconstructing visual system character evolution at key ancestral nodes of primate higher taxa. Results support previous studies that reconstruct omomyiform primates as nocturnal. The larger body sizes of adapiform primates confound inferences regarding activity pattern evolution in this group. The hypothesis of diurnality and trichromacy as primitive for primates is not supported by the phylogenetic data. On the contrary, nocturnality and dichromatic vision are not only primitive for all primates, but also for extant strepsirrhines. Diurnality, and

  20. Precision calibration method for binocular vision measurement systems based on arbitrary translations and 3D-connection information

    NASA Astrophysics Data System (ADS)

    Yang, Jinghao; Jia, Zhenyuan; Liu, Wei; Fan, Chaonan; Xu, Pengtao; Wang, Fuji; Liu, Yang

    2016-10-01

    Binocular vision systems play an important role in computer vision, and high-precision system calibration is a necessary and indispensable process. In this paper, an improved calibration method for binocular stereo vision measurement systems based on arbitrary translations and 3D-connection information is proposed. First, a new method for calibrating the intrinsic parameters of a binocular vision system based on two translations with an arbitrary angle difference is presented, which reduces the effect of the deviation of the motion actuator on calibration accuracy. This method is simpler and more accurate than existing active-vision calibration methods and can provide a better initial value for the determination of extrinsic parameters. Second, a 3D-connection calibration and optimization method is developed that links the information of the calibration target in different positions, further improving the accuracy of the system calibration. Calibration experiments show that the calibration error can be reduced to 0.09%, outperforming traditional methods in the experiments of this study.

  1. Holographic optogenetic stimulation of patterned neuronal activity for vision restoration.

    PubMed

    Reutsky-Gefen, Inna; Golan, Lior; Farah, Nairouz; Schejter, Adi; Tsur, Limor; Brosh, Inbar; Shoham, Shy

    2013-01-01

    When natural photoreception is disrupted, as in outer-retinal degenerative diseases, artificial stimulation of surviving nerve cells offers a potential strategy for bypassing compromised neural circuits. Recently, light-sensitive proteins that photosensitize quiescent neurons have generated unprecedented opportunities for optogenetic neuronal control, inspiring early development of optical retinal prostheses. Selectively exciting large neural populations is essential for eliciting meaningful perceptions in the brain. Here we provide the first demonstration of holographic photo-stimulation strategies for bionic vision restoration. In blind retinas, we demonstrate reliable holographically patterned optogenetic stimulation of retinal ganglion cells with millisecond temporal precision and cellular resolution. Holographic excitation strategies could enable flexible control over distributed neuronal circuits, potentially paving the way towards high-acuity vision restoration devices and additional medical and scientific neuro-photonics applications.

  2. The Application of Lidar to Synthetic Vision System Integrity

    NASA Technical Reports Server (NTRS)

    Campbell, Jacob L.; UijtdeHaag, Maarten; Vadlamani, Ananth; Young, Steve

    2003-01-01

    One goal in the development of a Synthetic Vision System (SVS) is to create a system that can be certified by the Federal Aviation Administration (FAA) for use at various flight criticality levels. As part of NASA's Aviation Safety Program, Ohio University and NASA Langley have been involved in the research and development of real-time terrain database integrity monitors for SVS. Integrity monitors based on a consistency check with onboard sensors may be required if the inherent terrain database integrity is not sufficient for a particular operation. Sensors such as the radar altimeter and weather radar, which are available on most commercial aircraft, are currently being investigated for use in a real-time terrain database integrity monitor. This paper introduces the concept of using a Light Detection And Ranging (LiDAR) sensor as part of a real-time terrain database integrity monitor. A LiDAR system consists of a scanning laser ranger, an inertial measurement unit (IMU), and a Global Positioning System (GPS) receiver. Information from these three sensors can be combined to generate synthesized terrain models (profiles), which can then be compared to the stored SVS terrain model. This paper discusses an initial performance evaluation of the LiDAR-based terrain database integrity monitor using LiDAR data collected over Reno, Nevada. The paper will address the consistency checking mechanism and test statistic, sensitivity to position errors, and a comparison of the LiDAR-based integrity monitor to a radar altimeter-based integrity monitor.
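
    The consistency-check idea can be illustrated with a toy sketch. The fragment below is a simplified stand-in, not the paper's test statistic: it compares LiDAR-synthesized terrain elevations against the stored database elevations at the same points and raises an integrity alert when the mean absolute disagreement exceeds a threshold (the threshold value is an assumption).

    ```python
    import numpy as np

    def integrity_alert(lidar_elev, database_elev, threshold_m=15.0):
        # lidar_elev, database_elev: elevations (m) sampled at the same
        # ground points from the LiDAR profile and the stored SVS terrain.
        residual = lidar_elev - database_elev     # per-point disagreement
        statistic = np.mean(np.abs(residual))     # simple test statistic
        return statistic > threshold_m, statistic
    ```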

  3. An Expert Vision System for Medical Image Segmentation

    NASA Astrophysics Data System (ADS)

    Chen, Shiuh-Yung J.; Lin, Wei-Chung; Chen, Chin-Tu

    1989-05-01

    In this paper, an expert vision system is proposed which integrates knowledge from diverse sources for tomographic image segmentation. The system mimics the reasoning process of an expert to divide a tomographic brain image into semantically meaningful entities. These entities can then be related to the fundamental biomedical processes, both in health and in disease, that are of interest or of importance to health care research. The images under study include those acquired from x-ray CT (Computed Tomography), MRI (Magnetic Resonance Imaging), and PET (Positron Emission Tomography). Given a set of three (correlated) images acquired from these three different modalities at the same slicing level and angle of a human brain, the proposed system performs image segmentation based on (1) knowledge about the characteristics of the three different sensors, (2) knowledge about the anatomic structures of human brains, (3) knowledge about brain diseases, and (4) knowledge about image processing and analysis tools. Since the problem domain is characterized by incomplete and uncertain information, the blackboard architecture, which is an opportunistic reasoning model, is adopted as the framework of the proposed system.

  4. Helmet-mounted pilot night vision systems: Human factors issues

    NASA Technical Reports Server (NTRS)

    Hart, Sandra G.; Brickner, Michael S.

    1989-01-01

    Helmet-mounted displays of infrared imagery (forward-looking infrared (FLIR)) allow helicopter pilots to perform low level missions at night and in low visibility. However, pilots experience high visual and cognitive workload during these missions, and their performance capabilities may be reduced. Human factors problems inherent in existing systems stem from three primary sources: the nature of thermal imagery; the characteristics of specific FLIR systems; and the difficulty of using FLIR systems for flying and/or visually acquiring and tracking objects in the environment. The pilot night vision system (PNVS) in the Apache AH-64 provides a monochrome, 30 by 40 deg helmet-mounted display of infrared imagery. Thermal imagery is inferior to television imagery in both resolution and contrast ratio. Gray shades represent temperature differences rather than brightness variability, and images undergo significant changes over time. The limited field of view, displacement of the sensor from the pilot's eye position, and monocular presentation of a bright FLIR image (while the other eye remains dark-adapted) are all potential sources of disorientation, limitations in depth and distance estimation, sensations of apparent motion, and difficulties in target and obstacle detection. Insufficient information about human perceptual and performance limitations restrains the ability of human factors specialists to provide significantly improved specifications, training programs, or alternative designs. Additional research is required to determine the most critical problem areas and to propose solutions that consider the human as well as the development of technology.

  5. The analysis of measurement accuracy of the parallel binocular stereo vision system

    NASA Astrophysics Data System (ADS)

    Yu, Huan; Xing, Tingwen; Jia, Xin

    2016-09-01

    A parallel binocular stereo vision system is a special form of binocular vision system. In order to simulate the human eyes' observation state, the two cameras used to obtain images of the target scene are placed parallel to each other. This paper builds a triangular geometric model, analyzes the structure parameters of the parallel binocular stereo vision system and the correlations between them, and discusses the influence of the baseline distance B between the two cameras, the focal length f, the angle of view ω, and other structural parameters on measurement accuracy. Matlab software was used to test the error function of the parallel binocular stereo vision system under different structure parameters, and the simulation results show the range of structure parameters for which errors are small, thereby improving the accuracy of the parallel binocular stereo vision system.
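
    The error behavior under study follows from the triangulation geometry. For a parallel rig, depth is Z = fB/d, where d is the disparity, so a disparity error δd propagates to a depth error of roughly (Z²/fB)·δd. A small worked example in Python (all numbers are illustrative, not from the paper):

    ```python
    # Depth from a parallel stereo rig: Z = f*B/d, so a disparity error
    # delta_d gives a depth error of approximately Z**2 / (f*B) * delta_d.
    f_px = 1200.0    # focal length in pixels (assumed)
    B_m = 0.12       # baseline in meters (assumed)
    Z_m = 3.0        # target depth in meters
    delta_d = 0.25   # disparity uncertainty in pixels

    d_px = f_px * B_m / Z_m                        # disparity at this depth
    delta_Z = (Z_m ** 2) / (f_px * B_m) * delta_d  # first-order depth error
    print(f"disparity = {d_px:.1f} px, depth error = {delta_Z * 1000:.1f} mm")
    # -> disparity = 48.0 px, depth error = 15.6 mm; the error grows with Z**2.
    ```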

  6. New vision solar system mission study. Final report

    SciTech Connect

    Mondt, J.F.; Zubrin, R.M.

    1996-03-01

    The vision for the future of the planetary exploration program includes the capability to deliver "constellations" or "fleets" of microspacecraft to a planetary destination. These fleets will act in a coordinated manner to gather science data from a variety of locations on or around the target body, thus providing detailed, global coverage without requiring development of a single large, complex and costly spacecraft. Such constellations of spacecraft, coupled with advanced information processing and visualization techniques and high-rate communications, could provide the basis for development of a "virtual presence" in the solar system. A goal could be the near real-time delivery of planetary images and video to a wide variety of users in the general public and the science community. This will be a major step in making the solar system accessible to the public and will help make solar system exploration a part of the human experience on Earth.

  7. Neural Summation in the Hawkmoth Visual System Extends the Limits of Vision in Dim Light.

    PubMed

    Stöckl, Anna Lisa; O'Carroll, David Charles; Warrant, Eric James

    2016-03-21

    Most of the world's animals are active in dim light and depend on good vision for the tasks of daily life. Many have evolved visual adaptations that permit a performance superior to that of manmade imaging devices [1]. In insects, a major model visual system, nocturnal species show impressive visual abilities ranging from flight control [2, 3], to color discrimination [4, 5], to navigation using visual landmarks [6-8] or dim celestial compass cues [9, 10]. In addition to optical adaptations that improve their sensitivity in dim light [11], neural summation of light in space and time-which enhances the coarser and slower features of the scene at the expense of noisier finer and faster features-has been suggested to improve sensitivity in theoretical [12-14], anatomical [15-17], and behavioral [18-20] studies. How these summation strategies function neurally is, however, presently unknown. Here, we quantified spatial and temporal summation in the motion vision pathway of a nocturnal hawkmoth. We show that spatial and temporal summation combine supralinearly to substantially increase contrast sensitivity and visual information rate over four decades of light intensity, enabling hawkmoths to see at light levels 100 times dimmer than without summation. Our results reveal how visual motion is calculated neurally in dim light and how spatial and temporal summation improve sensitivity while simultaneously maximizing spatial and temporal resolution, thus extending models of insect motion vision derived predominantly from diurnal flies. Moreover, the summation strategies we have revealed may benefit manmade vision systems optimized for variable light levels [21].

  8. Vision system for gauging and automatic straightening of steel bars

    NASA Astrophysics Data System (ADS)

    Sidla, Oliver; Wilding, Ernst; Niel, Albert; Barg, Herbert

    2001-02-01

    A machine vision application for the fully automatic straightening of steel bars is presented. The bars with lengths of up to 6000 mm are quite bent on exit of the rolling mill and need to be straightened prior to delivery to a customer. The shape of the steel bar is extracted and measured by two video resolution cameras which are calibrated in position and viewing angle relative to a coordinate system located in the center of the roller table. Its contour is tracked and located with a dynamic programming method utilizing several constraints to make the algorithm as robust as possible. 3D camera calibration allows the transformation of image coordinates to real-world coordinates. After smoothing and spline fitting the curvature of the bar is computed. A deformation model of the effect of force applied to the steel allows the system to generate press commands which state where and with what specific pressure the bar has to be processed. The model can be used to predict the straightening of the bar over some consecutive pressing events helping to optimize the operation. The process of measurement and pressing is repeated until the straightness of the bar reaches a predefined limit.
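
    The curvature step can be sketched compactly. The following is an assumed Python/SciPy illustration, not the production code: a smoothing spline is fitted to the measured bar contour y(x), and the curvature κ = y'' / (1 + y'²)^(3/2) is evaluated along the bar; the location of peak |κ| indicates where a press command is most needed.

    ```python
    from scipy.interpolate import UnivariateSpline

    def bar_curvature(x_mm, y_mm, smoothing=1.0):
        # Fit a smoothing spline to the measured contour; x_mm must be
        # strictly increasing along the bar axis.
        spline = UnivariateSpline(x_mm, y_mm, s=smoothing)
        dy = spline.derivative(1)(x_mm)
        d2y = spline.derivative(2)(x_mm)
        # Curvature of y(x): kappa = y'' / (1 + y'^2)^(3/2)
        kappa = d2y / (1.0 + dy ** 2) ** 1.5
        return kappa  # the peak of |kappa| marks the strongest bend
    ```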

  9. Natural language understanding and speech recognition for industrial vision systems

    NASA Astrophysics Data System (ADS)

    Batchelor, Bruce G.

    1992-11-01

    The accepted method of programming machine vision systems for a new application is to incorporate sub-routines from a standard library into code, written specially for the given task. Typical programming languages that might be used here are Pascal, C, and assembly code, although other `conventional' (i.e., imperative) languages are often used instead. The representation of an algorithm to recognize a certain object, in the form of, say, a C language program is clumsy and unnatural, compared to the alternative process of describing the object itself and leaving the software to search for it. The latter method, known as declarative programming, is used extensively both when programming in Prolog and when people talk to one another in English, or other natural languages. Programs to understand a limited sub-set of a natural language can also be written conveniently in Prolog. The article considers the prospects for talking to an image processing system, using only slightly constrained English. Moderately priced speech recognition devices, which interface to a standard desk-top computer and provide a limited repertoire (200 words) as well as the ability to identify isolated words, are already available commercially. At the moment, the goal of talking in English to a computer is incompletely fulfilled. Yet, sufficient progress has been made to encourage greater effort in this direction.

  10. Combination of a vision system and a coordinate measuring machine for rapid coordinate metrology

    NASA Astrophysics Data System (ADS)

    Qu, Yufu; Pu, Zhaobang; Liu, Guodong

    2002-09-01

    This paper presents a novel methodology that integrates a vision system and a coordinate measuring machine for rapid coordinate metrology. Rapid acquisition of coordinate data from parts having tiny dimensions, complex geometry, and soft or fragile material has many applications. Typical examples include Large Scale Integrated circuit, glass, or plastic part measurement, and reverse engineering in rapid product design and realization. In this paper, a novel measuring methodology for a vision-integrated coordinate measuring system is developed and demonstrated. The vision coordinate measuring system is characterized by an integrated use of a high-precision coordinate measuring machine (CMM), a vision system, advanced computational software, and the associated electronics. The vision system includes a charge-coupled device (CCD) camera, a self-adapting brightness power supply, and a graphics workstation with an image processing board. The vision system, along with intelligent feature recognition and auto-focus algorithms, provides the spatial coordinates of feature points on the global part profile after the system has been calibrated. The measured data may be fitted to geometric elements of the part profile. The obtained results are subsequently used to compute parameters such as curvature radius, distance, and shape error, and to perform surface reconstruction. By integrating the vision system with the CMM, a highly automated, high-speed, 3D coordinate acquisition system is developed. It has potential applications in a whole spectrum of manufacturing problems, with a major impact on metrology, inspection, and reverse engineering.

  11. An embedded vision system for an unmanned four-rotor helicopter

    NASA Astrophysics Data System (ADS)

    Lillywhite, Kirt; Lee, Dah-Jye; Tippetts, Beau; Fowers, Spencer; Dennis, Aaron; Nelson, Brent; Archibald, James

    2006-10-01

    In this paper an embedded vision system and control module is introduced that is capable of controlling an unmanned four-rotor helicopter and processing live video for various law enforcement, security, military, and civilian applications. The vision system is implemented on a newly designed compact FPGA board (Helios). The Helios board contains a Xilinx Virtex-4 FPGA chip and memory, making it capable of implementing real-time vision algorithms. A Smooth Automated Intelligent Leveling (SAIL) daughter board, attached to the Helios board, collects attitude and heading information to be processed in order to control the unmanned helicopter. The SAIL board uses an electrolytic tilt sensor, compass, voltage level converters, and analog-to-digital converters to perform its operations. While level flight can be maintained, problems stemming from the characteristics of the tilt sensor limit the maneuverability of the helicopter. The embedded vision system has proven to give very good results in its performance of a number of real-time robotic vision algorithms.

  12. Compatibility of the aviation night vision imaging systems and the aging aviator.

    PubMed

    Farr, W D

    1989-10-01

    With the advent of the night vision goggle (NVG) mission requirements in the United States Army, the reserve components began training with the second generation (AN/PVS-5 & AN/PVS-5A) systems. These systems prohibit the wear of spectacles by the aviator. Certain modifications on some systems allowed for spectacle wear. However, there still exists a 5-h day filter training minimum in which the full NVG with facemask and cushion must be worn without spectacles. The NVG system corrects up to +2.00 diopters of hyperopia and up to -6.00 diopters of myopia, but only +/- 1.00 diopter of astigmatism. A survey of the reserve component (USAR and NG) aviators in the Southwest was conducted to establish the relative incompatibility of the NVG system among an aviator population older than the active component aviators. All medical record custodians received questionnaires and the flight surgeon followed up replies by telephone or on-site visits. We screened a total of 127 aviator records. The aviators' average age was 39.5 years; 65.3% had 20/20 vision and were emmetropes. Of those that wore spectacles, 82.4% had hyperopia or myopia correctable by the built-in optical adjustments contained in the NVG. The other 17.6%, who had vision that exceeded the correction factors built into the NVG, consisted of astigmats with greater than 2.00 diopters of cylinder. Nearly 20% of the aviators who wore corrective lenses exceeded the corrective limits of the goggles that they used. Further, pilots had no specific prescreening instruction. With the development of more sophisticated aviation optics, three options exist: modify visual standards, allow contact lens wear, or design future systems to be compatible with spectacles.

  13. Fostering a regional vision on river systems by remote sensing

    NASA Astrophysics Data System (ADS)

    Bizzi, S.; Piegay, H.; Demarchi, L.

    2015-12-01

    River classification and the derived knowledge about river systems have until recently relied on discontinuous field campaigns and visual interpretation of aerial images. For this reason, building a regional vision of river systems based on a systematic and coherent set of hydromorphological indicators was, and still is, a research challenge. Remote sensing data have for some years offered notable opportunities to shift this paradigm, providing an unprecedented amount of spatially distributed data over large scales, such as regional ones. Here, we have implemented a river characterization framework based on color infrared orthophotos at 40 cm and a LIDAR-derived DTM at 5 m, acquired simultaneously in 2009-2010 for the entire Piedmont Region, Italy (25,400 km²). 1500 km of river systems have been characterized in terms of the typology, geometry, and topography of hydromorphological features. The framework delineates the valley bottom of each river course and maps, by a semi-automated procedure, water channels, unvegetated and vegetated sediment bars, islands, and riparian corridors. Using a range of statistical techniques, the river systems have been segmented and classified with an objective, quantitative, and therefore repeatable approach. Such a regional database enhances our ability to address a number of research and management challenges, such as: i) quantifying the shape and topography of channel forms for different river functional types, and investigating their relationships with potential drivers like hydrology, geology, land use, and historical contingency; ii) localizing the most degraded and better-functioning river stretches so as to prioritize finer-scale monitoring and set quantifiable restoration targets; iii) providing indications for future RS acquisition campaigns so as to start monitoring river processes at the regional scale. The Piedmont Region in Italy is used here as a laboratory of concrete examples and analyses to discuss our current ability to respond to these challenges in river science.

  14. Associations between platelet monoamine oxidase-B activity and acquired colour vision loss in a fish-eating population.

    PubMed

    Stamler, Christopher John; Mergler, Donna; Abdelouahab, Nadia; Vanier, Claire; Chan, Hing Man

    2006-01-01

    Platelet monoamine oxidase-B (MAO-B) has been considered a surrogate biochemical marker of neurotoxicity, as it may reflect changes in the monoaminergic system in the brain. Colour vision discrimination, in part a dopamine dependent process, has been used to identify early neurological effects of some environmental and industrial neurotoxicants. The objective of this cross-sectional study was to explore the relationship between platelet MAO-B activity and acquired colour discrimination capacity in fish-consumers from the St. Lawrence River region of Canada. Assessment of acquired dyschromatopsia was determined using the Lanthony D-15 desaturated panel test. Participants classified with dyschromatopsia (n=81) had significantly lower MAO-B activity when compared to those with normal colour vision (n=32) (26.5+/-9.6 versus 31.0+/-9.9 nmol/min/20 microg, P=0.030). Similarly, Bowman's Colour Confusion Index (CCI) was inversely correlated with MAO-B activity when the vision test was performed with the worst eye only (r=-0.245, P=0.009), the best eye only (r=-0.188, P=0.048) and with both eyes together (r=-0.309, P=0.001). Associations remained significant after adjustment for age and gender when both eyes (P=0.003) and the worst eye (P=0.045) were tested. Adjustment for heavy smoking weakened the association between MAO-B and CCI in the worst eye (P=0.140), but did not alter this association for both eyes (P=0.006). Adjustment for blood-mercury concentrations did not change the association. This study suggests a relationship between reduced MAO-B activity and acquired colour vision loss, and both are associated with tobacco smoking. Therefore, results show that platelet MAO-B may be used as a surrogate biochemical marker of acquired colour vision loss.

  15. A novel image fusion algorithm based on human vision system

    NASA Astrophysics Data System (ADS)

    Miao, Qiguang; Wang, Baoshu

    2006-04-01

    The proposed new fusion algorithm is based on the improved pulse coupled neural network (PCNN) model, the fundamental characteristics of images, and the properties of the human vision system. Compared with the traditional algorithm, where the linking strength of each neuron is the same and its value is chosen through experimentation, this algorithm uses the contrast of each pixel as its linking strength, so that the linking strength of each pixel can be chosen adaptively. After the processing of PCNN with the adaptive linking strength, new fire mapping images are obtained for each image taking part in the fusion. The clear objects of each original image are decided by the compare-selection operator applied to the fire mapping images pixel by pixel, and then all of them are merged into a new clear image. Furthermore, with this algorithm other parameters, for example Δ, the threshold adjusting constant, have only a slight effect on the new fused image. It therefore overcomes the difficulty of adjusting parameters in PCNN. Experiments show that the proposed algorithm preserves edge and texture information better than the wavelet transform and Laplacian pyramid fusion methods do.

  16. Multi-Image Registration for an Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn

    2002-01-01

    An Enhanced Vision System (EVS) utilizing multi-sensor image fusion is currently under development at the NASA Langley Research Center. The EVS will provide enhanced images of the flight environment to assist pilots in poor visibility conditions. Multi-spectral images obtained from a short wave infrared (SWIR), a long wave infrared (LWIR), and a color visible band CCD camera, are enhanced and fused using the Retinex algorithm. The images from the different sensors do not have a uniform data structure: the three sensors not only operate at different wavelengths, but they also have different spatial resolutions, optical fields of view (FOV), and bore-sighting inaccuracies. Thus, in order to perform image fusion, the images must first be co-registered. Image registration is the task of aligning images taken at different times, from different sensors, or from different viewpoints, so that all corresponding points in the images match. In this paper, we present two methods for registering multiple multi-spectral images. The first method performs registration using sensor specifications to match the FOVs and resolutions directly through image resampling. In the second method, registration is obtained through geometric correction based on a spatial transformation defined by user selected control points and regression analysis.
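
    The second registration method can be illustrated with a short sketch. The hypothetical Python/NumPy fragment below estimates an affine spatial transformation from user-selected control point pairs by linear least squares, standing in for the regression analysis mentioned above; the resampling of one band into the other's coordinate frame is omitted.

    ```python
    import numpy as np

    def fit_affine(src_pts, dst_pts):
        # src_pts, dst_pts: (N, 2) arrays of matched control points, N >= 3.
        n = src_pts.shape[0]
        A = np.hstack([src_pts, np.ones((n, 1))])   # rows are [x, y, 1]
        # Least-squares solve A @ M = dst_pts for the 3x2 affine matrix M.
        M, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)
        return M

    def apply_affine(M, pts):
        # Map points from the source image into the destination frame.
        return np.hstack([pts, np.ones((pts.shape[0], 1))]) @ M
    ```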

  17. ARM-based visual processing system for prosthetic vision.

    PubMed

    Matteucci, Paul B; Byrnes-Preston, Philip; Chen, Spencer C; Lovell, Nigel H; Suaning, Gregg J

    2011-01-01

    A growing number of prosthetic devices have been shown to provide visual perception to the profoundly blind through electrical neural stimulation. These first-generation devices offer promising outcomes to those affected by degenerative disorders such as retinitis pigmentosa. Although prosthetic approaches vary in their placement of the stimulating array (visual cortex, optic nerve, epi-retinal surface, sub-retinal surface, supra-choroidal space, etc.), most of the solutions incorporate an externally-worn device to acquire and process video to provide the implant with instructions on how to deliver electrical stimulation to the patient, in order to elicit phosphenized vision. With the significant increase in availability and performance of low power-consumption smart phone and personal device processors, the authors investigated the use of a commercially available ARM (Advanced RISC Machine) device as an externally-worn processing unit for a prosthetic neural stimulator for the retina. A 400 MHz Samsung S3C2440A ARM920T single-board computer was programmed to extract 98 values from a 1.3 Megapixel OV9650 CMOS camera using impulse, regional averaging and Gaussian sampling algorithms. Power consumption and speed of video processing were compared to results reported for similar devices. The results show that by using code optimization, the system is capable of driving a 98 channel implantable device for the restoration of visual percepts to the blind.
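
    The regional-averaging variant of the sampling step admits a simple sketch. In the hypothetical Python fragment below, a grid of 14 x 7 cells (one per stimulation channel, 98 in total; the grid layout is an assumption, not the paper's) is placed over the frame and the mean intensity of each cell becomes one channel value.

    ```python
    import numpy as np

    def regional_average_98(frame_gray, rows=14, cols=7):
        # frame_gray: 2-D grayscale frame; rows*cols = 98 channel values.
        h, w = frame_gray.shape
        values = np.empty(rows * cols)
        for r in range(rows):
            for c in range(cols):
                cell = frame_gray[r * h // rows:(r + 1) * h // rows,
                                  c * w // cols:(c + 1) * w // cols]
                values[r * cols + c] = cell.mean()
        return values  # one drive level per implant electrode
    ```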

  18. Binocular stereo vision system based on phase matching

    NASA Astrophysics Data System (ADS)

    Liu, Huixian; Huang, Shujun; Gao, Nan; Zhang, Zonghua

    2016-11-01

    Binocular stereo vision is an efficient way to perform three dimensional (3D) profile measurement and has broad applications. Image acquisition, camera calibration, stereo matching, and 3D reconstruction are the four main steps. Among them, stereo matching is the most important step and has a significant impact on the final result. In this paper, a new stereo matching technique is proposed that combines the absolute fringe order and the unwrapped phase of every pixel. Different from the traditional phase matching method, sinusoidal fringes in two perpendicular directions are projected. The technique is realized through the following three steps. Firstly, colored sinusoidal fringes in both the horizontal (red fringe) and vertical (blue fringe) directions are projected on the object to be measured and captured by two cameras synchronously. The absolute fringe order and the unwrapped phase of each pixel along the two directions are calculated based on the optimum three-fringe-numbers selection method. Then, a stereo matching method based on the absolute fringe orders of the left and right phase maps is applied. In this process, the same absolute fringe orders in both the horizontal and vertical directions are searched to find the corresponding point. Based on this technique, as many pairs of homologous points between the two cameras as possible are found to improve the precision of the measurement result. Finally, a 3D measuring system is set up and the 3D reconstruction results are shown. The experimental results show that the proposed method can meet the requirements of high precision for industrial measurements.

  19. Target detect system in 3D using vision apply on plant reproduction by tissue culture

    NASA Astrophysics Data System (ADS)

    Vazquez Rueda, Martin G.; Hahn, Federico

    2001-03-01

    This paper presents preliminary results for a three-dimensional system that uses machine vision to manipulate plants in a tissue culture process. The system is able to estimate the position of the plant in the work area: it first calculates the position and sends the information to the mechanical system, then recalculates the position and, if necessary, repositions the mechanical system, using a neural system to improve the localization of the plant. The system uses only vision to sense the position, closing the control loop with a neural system that detects the target and positions the mechanical system; the results are compared with those of an open-loop system.

  20. Three-dimensional microscope vision system based on micro laser line scanning and adaptive genetic algorithms

    NASA Astrophysics Data System (ADS)

    Muñoz Rodríguez, J. Apolinar

    2017-02-01

    A microscope vision system to retrieve the topography of small metallic surfaces via micro laser line scanning and genetic algorithms is presented. In this technique, a 36 μm laser line is projected on the metallic surface through a laser diode head, which is placed a small distance away from the target. The micro laser line is captured by a CCD camera, which is attached to the microscope. The surface topography is computed by triangulation from the line position and the microscope vision parameters. The calibration of the microscope vision system is carried out by an adaptive genetic algorithm based on the line position. In this algorithm, an objective function is constructed from the microscope geometry to determine the microscope vision parameters. The genetic algorithm also provides the search space to calculate the microscope vision parameters with high accuracy and speed. This procedure avoids the errors produced by missing references and the physical measurements employed by traditional microscope vision systems. The contribution of the proposed system is corroborated by an evaluation of accuracy and speed against traditional microscope vision systems that retrieve micro-scale surface topography.

  1. Active vision and image/video understanding with decision structures based on the network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2003-08-01

    Vision is a part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding that is an interpretation of visual information in terms of such knowledge models. The human brain is found to emulate knowledge structures in the form of network-symbolic models, which marks an important paradigm shift in our knowledge about the brain: from neural networks to "cortical software". Symbols, predicates, and grammars naturally emerge in such active multilevel hierarchical networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type decision structure created via multilevel hierarchical compression of visual information. Mid-level vision processes like clustering, perceptual grouping, and separation of figure from ground are special kinds of graph/network transformations. They convert low-level image structure into a set of more abstract structures, which represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. Composition of network-symbolic models works similarly to frames and agents, combining learning, classification, and analogy with higher-level model-based reasoning into a single framework. Such models do not require supercomputers. Based on such principles, and using methods of computational intelligence, an image understanding system can convert images into network-symbolic knowledge models and effectively resolve uncertainty and ambiguity, providing a unifying representation for perception and cognition. This allows the creation of new intelligent computer vision systems for the robotics and defense industries.

  2. Application of edge detection algorithm for vision guided robotics assembly system

    NASA Astrophysics Data System (ADS)

    Balabantaray, Bunil Kumar; Jha, Panchanand; Biswal, Bibhuti Bhusan

    2013-12-01

    A machine vision system has a major role in making a robotic assembly system autonomous. Part detection and identification of the correct part are important tasks which need to be carefully done by a vision system to initiate the process. This process consists of many sub-processes wherein image capture, digitization, enhancement, etc. serve to reconstruct the part for subsequent operations. Edge detection of the grabbed image therefore plays an important role in the entire image processing activity, and one needs to choose the correct tool for the process with respect to the given environment. In this paper, a comparative study of edge detection algorithms for grasping objects in a robotic assembly system is presented. The proposed work is performed in Matlab R2010a Simulink. Four edge detection algorithms are compared: Canny, Roberts, Prewitt, and Sobel. An attempt has been made to find the best algorithm for the problem. It is found that the Canny edge detection algorithm gives the best result and minimum error for the intended task.
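
    An equivalent comparison can be sketched outside Matlab. The following Python/OpenCV fragment (an illustration, not the authors' Simulink models) produces edge maps for the four detectors; Canny and Sobel are built into OpenCV, while Prewitt and Roberts are applied with explicit kernels via filter2D.

    ```python
    import cv2
    import numpy as np

    def edge_maps(gray):
        # gray: 8-bit grayscale image (Canny requires 8-bit input).
        f = gray.astype(np.float32)
        prewitt_x = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], np.float32)
        roberts_x = np.array([[1, 0], [0, -1]], np.float32)
        roberts_y = np.array([[0, 1], [-1, 0]], np.float32)
        return {
            "canny": cv2.Canny(gray, 100, 200),
            "sobel": cv2.magnitude(cv2.Sobel(f, cv2.CV_32F, 1, 0),
                                   cv2.Sobel(f, cv2.CV_32F, 0, 1)),
            # Prewitt's y kernel is the transpose of its x kernel.
            "prewitt": cv2.magnitude(cv2.filter2D(f, -1, prewitt_x),
                                     cv2.filter2D(f, -1, prewitt_x.T)),
            "roberts": cv2.magnitude(cv2.filter2D(f, -1, roberts_x),
                                     cv2.filter2D(f, -1, roberts_y)),
        }
    ```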

  3. 3-D Object Recognition Using Combined Overhead And Robot Eye-In-Hand Vision System

    NASA Astrophysics Data System (ADS)

    Luc, Ren C.; Lin, Min-Hsiung

    1987-10-01

    A new approach for recognizing 3-D objects using a combined overhead and eye-in-hand vision system is presented. A novel eye-in-hand vision system using a fiber-optic image array is described. The significance of this approach is the fast and accurate recognition of 3-D object information compared to traditional stereo image processing. For the recognition of 3-D objects, the overhead vision system takes a 2-D top-view image and the eye-in-hand vision system takes side-view images orthogonal to the top-view image plane. We have developed and demonstrated a unique approach, called "3-D Volumetric Description from 2-D Orthogonal Projections", to integrate this 2-D information into a 3-D representation. The Unimate PUMA 560 and a TRAPIX 5500 real-time image processor have been used to test the success of the entire system.
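
    The volumetric-description-from-orthogonal-projections idea can be illustrated with a toy voxel intersection. The sketch below (synthetic silhouettes, arbitrary grid size, not the authors' algorithm) keeps a voxel only if both the top-view and side-view silhouettes contain it.

      import numpy as np

      N = 64                                 # voxel grid resolution (arbitrary)
      top = np.zeros((N, N), dtype=bool)     # top-view silhouette over (x, y)
      side = np.zeros((N, N), dtype=bool)    # side-view silhouette over (x, z)
      top[16:48, 16:48] = True               # toy square footprint
      side[16:48, 8:32] = True               # toy shorter side profile

      # volume[x, y, z] is occupied iff both projections contain the voxel.
      volume = top[:, :, None] & side[:, None, :]
      print("occupied voxels:", volume.sum())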

  4. Evaluating the Effects of Dimensionality in Advanced Avionic Display Concepts for Synthetic Vision Systems

    NASA Technical Reports Server (NTRS)

    Alexander, Amy L.; Prinzel, Lawrence J., III; Wickens, Christopher D.; Kramer, Lynda J.; Arthur, Jarvis J.; Bailey, Randall E.

    2007-01-01

    Synthetic vision systems provide an in-cockpit view of terrain and other hazards via a computer-generated display representation. Two experiments examined several display concepts for synthetic vision and evaluated how such displays modulate pilot performance. Experiment 1 (24 general aviation pilots) compared three navigational display (ND) concepts: 2D coplanar, 3D, and split-screen. Experiment 2 (12 commercial airline pilots) evaluated baseline 'blue sky/brown ground' or synthetic vision-enabled primary flight displays (PFDs) and three ND concepts: 2D coplanar with and without synthetic vision and a dynamic multi-mode rotatable exocentric format. In general, the results pointed to an overall advantage for a split-screen format, whether it be stand-alone (Experiment 1) or available via rotatable viewpoints (Experiment 2). Furthermore, Experiment 2 revealed benefits associated with utilizing synthetic vision in both the PFD and ND representations and the value of combined ego- and exocentric presentations.

  5. Calibration procedures for the space vision system experiment

    NASA Astrophysics Data System (ADS)

    MacLean, Steve G.; Pinkney, Heidi

    1991-09-01

    In 1986, a space-qualified version of the real-time photogrammetry system invented by Pinkney and Perratt in 1978 was developed under contract to the Canadian Astronaut Program by Spar Aerospace and Leigh Instruments Ltd. as a space-flight experiment called the Space Vision System (SVS). Originally scheduled to fly in March 1987, the SVS is now slated to fly on the shuttle in September 1992 as part of a series of experiments called Canex-2. Over a period of three days, the functionality of the SVS will be verified through a series of proximity operations with a test satellite called the Canadian Target Assembly (CTA). This hardware and the flight experiment are briefly described in a previous paper by Pinkney et al. One aspect of flight preparation that is crucial to the success of the experiment is the calibration procedure utilized by the SVS. On-orbit conditions present many difficulties that are not typical of the laboratory. Extreme temperatures cause the cargo bay, which is the reference coordinate system for the photogrammetry platform, to deform thermally every 45 minutes. The pan/tilt mechanism for the current shuttle closed-circuit television (CCTV) cameras was never intended to be used for photogrammetry. Experience gained in 1984 on the Canex-1 mission showed that the pan/tilt mechanisms could be stalled by the mechanical stiffness of their own power wires, and because their angles are only command encoded, the pan/tilt information available to the operator in the aft flight deck was generally suspect. This paper deals with the SVS calibration techniques and the procedures associated with the calibration of the current shuttle cameras and the photogrammetry platform, both in preparation for flight and on orbit. It has been shown in recent simulations that this self-consistent approach contributes to the position and orientation accuracies that would allow an operator who uses SVS displays to control the shuttle's remote manipulator

  6. Design of a dynamic test platform for autonomous robot vision systems

    NASA Technical Reports Server (NTRS)

    Rich, G. C.

    1980-01-01

    The concept and design of a dynamic test platform for the development and evaluation of a robot vision system are discussed. The platform is to serve as a diagnostic and developmental tool for future work with the RPI Mars Rover's multi-laser/multi-detector vision system. The platform allows testing of the vision system while its attitude is varied, statically or periodically. The vision system is mounted on the test platform, where it can be subjected to a wide variety of simulated motions and thus examined in a controlled, quantitative fashion. Defining and modeling Rover motions and designing the platform to emulate these motions are also discussed. Individual aspects of the design process, such as the structure, driving linkages, and motors and transmissions, are treated separately.

  7. SeaVipers - Computer Vision and Inertial Position/Reference Sensor System (CVIPRSS)

    DTIC Science & Technology

    2015-08-01

    This work describes the design and development of an optical, Computer Vision (CV) based sensor for use as a Position Reference System (PRS) in Dynamic Positioning (DP). Using a combination of robotics and CV techniques, the sensor ...

  8. Poor Vision, Functioning, and Depressive Symptoms: A Test of the Activity Restriction Model

    ERIC Educational Resources Information Center

    Bookwala, Jamila; Lawson, Brendan

    2011-01-01

    Purpose: This study tested the applicability of the activity restriction model of depressed affect to the context of poor vision in late life. This model hypothesizes that late-life stressors contribute to poorer mental health not only directly but also indirectly by restricting routine everyday functioning. Method: We used data from a national…

  9. The Effect of Gender and Level of Vision on the Physical Activity Level of Children and Adolescents with Visual Impairment

    ERIC Educational Resources Information Center

    Aslan, Ummuhan Bas; Calik, Bilge Basakci; Kitis, Ali

    2012-01-01

    This study was planned in order to determine physical activity levels of visually impaired children and adolescents and to investigate the effect of gender and level of vision on physical activity level in visually impaired children and adolescents. A total of 30 visually impaired children and adolescents (16 low vision and 14 blind) aged between…

  10. Spatial contrast sensitivity through aviator's night vision imaging system.

    PubMed

    Rabin, J

    1993-08-01

    Visual acuity is often used to assess vision through image-intensifying devices such as night vision goggles (NVGs). Fewer attempts have been made to measure contrast sensitivity through NVGs, although such information would be useful for better understanding contrast processing through NVGs under various stimulus conditions. In this study, computer-generated letter charts were used to measure contrast sensitivity through third-generation NVGs for a range of letter sizes. The red phosphor of a standard color monitor proved to be an effective stimulus for third-generation devices. Different night sky conditions were simulated over a 3 log-unit range. The results illustrate the profile of contrast sensitivity through third-generation NVGs over a range of night sky conditions. Comparing measurements through NVGs with measurements obtained without the device, but at the same luminance and color, distinguishes between the effects of luminance and noise on contrast sensitivity.

  11. Synthetic and Enhanced Vision System for Altair Lunar Lander

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Norman, Robert M.; Arthur, Jarvis J., III; Williams, Steven P.; Shelton, Kevin J.; Bailey, Randall E.

    2009-01-01

    Past research has demonstrated the substantial potential of synthetic and enhanced vision (SV, EV) for aviation (e.g., Prinzel & Wickens, 2009). These augmented visual-based technologies have been shown to significantly enhance situation awareness, reduce workload, enhance aviation safety (e.g., a reduced propensity for controlled-flight-into-terrain accidents/incidents), and promote flight path control precision. The issues that drove the design and development of synthetic and enhanced vision have commonalities with other application domains, most notably entry, descent, and landing on the Moon and other planetary surfaces. NASA has extended SV/EV technology for use in planetary exploration vehicles, such as the Altair Lunar Lander. This paper describes an Altair Lunar Lander SV/EV concept and associated research demonstrating the safety benefits of these technologies.

  12. System and method for controlling a vision guided robot assembly

    DOEpatents

    Lin, Yhu-Tin; Daro, Timothy; Abell, Jeffrey A.; Turner, III, Raymond D.; Casoli, Daniel J.

    2017-03-07

    A method includes the following steps: actuating a robotic arm to perform an action at a start position; moving the robotic arm from the start position toward a first position; determining from a vision process method whether a first part at the first position will be ready to be subjected to a first action by the robotic arm once the robotic arm reaches the first position; commencing execution of the vision process method to determine the position deviation of a second part from a second position and the readiness of the second part to be subjected to a second action by the robotic arm once the robotic arm reaches the second position; and performing the first action on the first part using the robotic arm, with the position deviation of the first part from the first position predetermined by the vision process method.
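
    The claimed overlap of arm motion and vision processing can be paraphrased as a control loop. The sketch below uses entirely hypothetical interface names (robot.start_move, vision.locate_part, etc.) and is a reading of the claim, not the patented implementation.

      def run_cycle(robot, vision, stations, actions):
          # robot and vision are hypothetical interfaces, not the patented API.
          deviation = {stations[0]: vision.locate_part(stations[0])}
          for i, station in enumerate(stations):
              robot.start_move(station)          # non-blocking move command
              if i + 1 < len(stations):
                  # While the arm travels, run the vision process for the
                  # next station so its deviation is ready on arrival.
                  deviation[stations[i + 1]] = vision.locate_part(stations[i + 1])
              robot.wait_until_arrived()
              # Act using the deviation predetermined by the vision process.
              actions[i](robot, offset=deviation[station])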

  13. Robust and efficient vision system for group of cooperating mobile robots with application to soccer robots.

    PubMed

    Klancar, Gregor; Kristan, Matej; Kovacic, Stanislav; Orqueda, Omar

    2004-07-01

    In this paper, a global vision scheme for estimating the positions and orientations of mobile robots is presented. It is applied to robot soccer, a fast dynamic game that therefore requires an efficient and robust vision system. The vision system is also generally applicable to other robot applications, such as mobile transport robots in production and warehouses, attendant robots, fast vision tracking of targets of interest, and entertainment robotics. Basic operation of the vision system is divided into two steps. In the first, the incoming image is scanned and pixels are classified into a finite number of classes; at the same time, a segmentation algorithm finds the corresponding regions belonging to each class. In the second step, all the regions are examined, and those that are part of an observed object are selected by means of simple logic procedures. The novelty is focused on optimizing the processing time needed to estimate possible object positions. Better results are achieved by implementing camera calibration and a shading correction algorithm: the former corrects camera lens distortion, while the latter increases robustness to irregular illumination conditions.
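
    The two-step operation (per-pixel color classification, then region grouping and simple logic) can be sketched in a few lines of Python. The class thresholds and area gates below are arbitrary stand-ins, not the paper's values.

      import numpy as np
      from scipy import ndimage

      frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in

      # Step 1: per-pixel classification (here, one crude "blue marker" class).
      b = frame[..., 0].astype(int)
      g = frame[..., 1].astype(int)
      r = frame[..., 2].astype(int)
      class_map = (b > 150) & (b > g + 40) & (b > r + 40)

      # Step 2: connected-component labelling, then simple area logic to keep
      # regions plausibly belonging to a robot marker.
      labels, n = ndimage.label(class_map)
      areas = ndimage.sum(class_map, labels, index=range(1, n + 1))
      keep = [i + 1 for i, a in enumerate(areas) if 20 <= a <= 500]
      centroids = ndimage.center_of_mass(class_map, labels, keep)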

  14. Blindsight and Unconscious Vision: What They Teach Us about the Human Visual System.

    PubMed

    Ajina, Sara; Bridge, Holly

    2016-10-23

    Damage to the primary visual cortex removes the major input from the eyes to the brain, causing significant visual loss as patients are unable to perceive the side of the world contralateral to the damage. Some patients, however, retain the ability to detect visual information within this blind region; this is known as blindsight. By studying the visual pathways that underlie this residual vision in patients, we can uncover additional aspects of the human visual system that likely contribute to normal visual function but cannot be revealed under physiological conditions. In this review, we discuss the residual abilities and neural activity that have been described in blindsight and the implications of these findings for understanding the intact system.

  15. Human Factors Engineering as a System in the Vision for Exploration

    NASA Technical Reports Server (NTRS)

    Whitmore, Mihriban; Smith, Danielle; Holden, Kritina

    2006-01-01

    In order to accomplish NASA's Vision for Exploration while assuring crew safety and productivity, human performance issues must be well integrated into system design from mission conception. To that end, a two-year Technology Development Project (TDP) was funded by NASA Headquarters to develop a systematic method for including the human as a system in NASA's Vision for Exploration. The specific goals of this project are to review current Human Systems Integration (HSI) standards (i.e., industry, military, NASA) and tailor them to selected NASA Exploration activities. Once the methods are proven in the selected domains, a plan will be developed to expand the effort to a wider scope of Exploration activities. The methods will be documented for inclusion in NASA-specific documents (such as the Human Systems Integration Standards, NASA-STD-3000) to be used in future space systems. The current project builds on a previous TDP dealing with Human Factors Engineering (HFE) processes. That project identified the key phases of the current NASA design lifecycle and outlined the recommended HFE activities that should be incorporated at each phase. The project also resulted in a prototype of a web-based HFE process tool that could be used to support an ideal HFE development process at NASA. This will help to augment the limited human factors resources available by providing a web-based tool that explains the importance of human factors, teaches a recommended process, and then provides the instructions, templates, and examples to carry out the process steps. The HFE activities identified by the previous TDP are being tested in situ for the current effort through support to a specific NASA Exploration activity. Currently, HFE personnel are working with systems engineering personnel to identify HSI impacts for lunar exploration by facilitating the generation of system-level Concepts of Operations (ConOps). For example, medical operations scenarios have been generated for lunar habitation

  16. Smartphones as image processing systems for prosthetic vision.

    PubMed

    Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Suaning, Gregg J

    2013-01-01

    The feasibility of implants for prosthetic vision has been demonstrated by research and commercial organizations. In most devices, an essential forerunner to the internal stimulation circuit is an external electronics solution for capturing, processing, and relaying image information, as well as extracting useful features from the scene surrounding the patient. The capabilities and the multitude of image processing algorithms that the device can perform in real time play a major part in the final quality of the prosthetic vision. It is therefore optimal to use powerful hardware while avoiding bulky, straining solutions. Recent publications have reported on portable single-board computers fast enough for computationally intensive image processing. Following the rapid evolution of commercial, ultra-portable ARM (Advanced RISC Machine) mobile devices, the authors investigated the feasibility of modern smartphones running complex face detection as external processing devices for vision implants. The role of dedicated graphics processors in speeding up computation was evaluated while performing a demanding noise reduction algorithm (image denoising). The time required for face detection was found to decrease by 95% from 2.5-year-old to recent devices. In denoising, graphics acceleration played a major role, speeding up denoising by a factor of 18. These results demonstrate that the technology has matured sufficiently to be considered a valid external electronics platform for visual prosthetic research.
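
    A rough benchmark in the spirit of the study can be written with OpenCV's stock Haar-cascade detector. The image file is a placeholder, and this is not the authors' face detection pipeline or smartphone platform.

      import time
      import cv2

      cascade = cv2.CascadeClassifier(
          cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
      frame = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder image

      t0 = time.perf_counter()
      faces = cascade.detectMultiScale(frame, scaleFactor=1.1, minNeighbors=5)
      dt_ms = (time.perf_counter() - t0) * 1000
      print(f"{len(faces)} face(s) detected in {dt_ms:.1f} ms")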

  17. NCRP Vision for the Future and Program Area Committee Activities.

    PubMed

    Boice, John D

    2017-02-01

    The National Council on Radiation Protection and Measurements (NCRP) believes that the most critical need for the nation in radiation protection is to train, engage, and retain radiation professionals for the future. Not only is the pipeline shrinking, but for some areas there is no longer a pipe! When the call comes to respond, there may be no one to answer the phone! The NCRP "Where are the Radiation Professionals?" initiative, Council Committee (CC) 2, and this year's annual meeting are meant to focus our efforts on finding solutions, not just reiterating the problems. Our next major initiative is CC 1, in which the NCRP is making recommendations for the United States on all matters of radiation protection. Our last such publication was NCRP Report No. 116, Limitation of Exposure to Ionizing Radiation, in 1993; it is time for an update. NCRP has seven active Program Area Committees on biology and epidemiology, operational concerns, emergency response and preparedness, medicine, environmental issues and waste management, dosimetry, and communications. A major scientific research initiative is the Million Person Study of Low Dose Radiation Health Effects. It includes workers from the Manhattan Project, nuclear weapons test participants (atomic veterans), industrial radiographers, and early medical workers such as radiologists and technologists. This research will address the one major gap in radiation risk evaluation: what are the health effects when the exposure occurs gradually over time? Other cutting-edge initiatives include a re-evaluation of the science behind recommendations for lens-of-the-eye dose limits, recommendations for emergency responders on dosimetry after a major radiological incident, guidance to the National Aeronautics and Space Administration with regard to possible central nervous system effects from galactic cosmic rays (the high-energy, high-mass particles bounding through space), and a re-evaluation of the population exposure to medical radiation (NCRP Report No

  18. Vision-based obstacle recognition system for automated lawn mower robot development

    NASA Astrophysics Data System (ADS)

    Mohd Zin, Zalhan; Ibrahim, Ratnawati

    2011-06-01

    Digital image processing (DIP) techniques have recently been widely used in various types of applications. Classification and recognition of a specific object using a vision system require some challenging tasks in the fields of image processing and artificial intelligence. The ability and efficiency of the vision system to capture and process images is very important for any intelligent system, such as an autonomous robot. This paper gives attention to the development of a vision system that could contribute to the development of an automated vision-based lawn mower robot. The work involves the implementation of DIP techniques to detect and recognize three different types of obstacles that usually exist on a football field. The focus is on the study of different types and sizes of obstacles, the development of a vision-based obstacle recognition system, and the evaluation of the system's performance. Image processing techniques such as image filtering, segmentation, enhancement, and edge detection have been applied in the system. The results show that the developed system is able to detect and recognize various types of obstacles on a football field with a recognition rate of more than 80%.
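
    The DIP chain the abstract lists (filtering, enhancement, segmentation, edge detection) maps directly onto standard OpenCV calls. The sketch below uses arbitrary thresholds and a placeholder image, not the paper's data or parameters.

      import cv2

      img = cv2.imread("field.jpg")                       # placeholder image
      gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
      smooth = cv2.GaussianBlur(gray, (5, 5), 0)          # filtering
      eq = cv2.equalizeHist(smooth)                       # enhancement
      _, mask = cv2.threshold(eq, 0, 255,                 # segmentation (Otsu)
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
      edges = cv2.Canny(mask, 50, 150)                    # edge detection
      contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                     cv2.CHAIN_APPROX_SIMPLE)
      obstacles = [c for c in contours if cv2.contourArea(c) > 200]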

  19. Seam tracking performance of a Coaxial Weld Vision System and pulsed welding

    NASA Technical Reports Server (NTRS)

    Gangl, K. J.; Weeks, J. L.; Todd, D.

    1986-01-01

    This report describes a continuation of a series of tests on the Coaxial Weld Vision System at MSFC. The ability of the system to compensate for transients associated with pulsed current welding is analyzed. Using the standard image processing approach for root pass seam tracking, the system is also tested for the ability to track the toe of a previous weld bead, for tracking multiple pass weld joints. This Coaxial Weld Vision System was developed by the Ohio State University (OSU) Center for Welding Research and is a part of the Space Shuttle Main Engine Robotic Welding Development System at MSFC.

  20. Approximate world models: Incorporating qualitative and linguistic information into vision systems

    SciTech Connect

    Pinhanez, C.S.; Bobick, A.F.

    1996-12-31

    Approximate world models are coarse descriptions of the elements of a scene, intended to be used in the selection and control of vision routines in a vision system. In this paper we present a control architecture in which the approximate models represent the complex relationships among the objects in the world, allowing the vision routines to be situation- or context-specific. Moreover, because of their reduced accuracy requirements, approximate world models can employ qualitative information such as that provided by linguistic descriptions of the scene. The concept is demonstrated in the development of automatic cameras for a TV studio, called SmartCams. Results are shown in which SmartCams use vision processing of real imagery and information written in the script of a TV show to achieve TV-quality framing.

  1. Visions of the Future. Social Science Activities Text. Teacher's Edition.

    ERIC Educational Resources Information Center

    Melnick, Rob; Ronan, Bernard

    Intended to put both national and global issues into perspective and help students make decisions about their futures, this teacher's edition provides instructional objectives, ideas for discussion and inquiries, test blanks for each section, and answer keys for the 22 activities provided in the accompanying student text. Designed to provide high…

  2. Using Vision System Technologies for Offset Approaches in Low Visibility Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K.

    2015-01-01

    Flight deck-based vision systems, such as Synthetic Vision Systems (SVS) and Enhanced Flight Vision Systems (EFVS), have the potential to provide additional margins of safety for aircrew performance and to enable operational improvements for low visibility surface, arrival, and departure operations in the terminal environment with efficiency equivalent to visual operations. Twelve air transport-rated crews participated in a motion-base simulation experiment to evaluate the use of SVS/EFVS in Next Generation Air Transportation System low visibility approach and landing operations at Chicago O'Hare airport. Three monochromatic, collimated head-up display (HUD) concepts (conventional HUD, SVS HUD, and EFVS HUD) and three instrument approach types (straight-in, 3-degree offset, 15-degree offset) were experimentally varied to test the efficacy of the SVS/EFVS HUD concepts for offset approach operations. The findings suggest that making offset approaches in low visibility conditions with an EFVS HUD or SVS HUD is feasible. Regardless of offset approach angle or HUD concept being flown, all approaches had comparable ILS tracking during the instrument segment and were within the lateral confines of the runway with acceptable sink rates during the visual segment of the approach. Keywords: Enhanced Flight Vision Systems; Synthetic Vision Systems; Head-up Display; NextGen

  3. Improving Vision-Based Motor Rehabilitation Interactive Systems for Users with Disabilities Using Mirror Feedback

    PubMed Central

    Martínez-Bueso, Pau; Moyà-Alcover, Biel

    2014-01-01

    Observation is recommended in motor rehabilitation. For this reason, the aim of this study was to experimentally test the feasibility and benefit of including mirror feedback in vision-based rehabilitation systems: we projected the user on the screen. We conducted a user study using a previously evaluated system that improved the balance and postural control of adults with cerebral palsy. We used a within-subjects design with the two defined feedback conditions (mirror and no-mirror) and two different groups of users (8 with disabilities and 32 without disabilities), using the usability measures time-to-start (Ts) and time-to-complete (Tc). A two-tailed paired samples t-test confirmed that, in the case of disabilities, the mirror feedback facilitated the interaction in vision-based systems for rehabilitation. The measured times were significantly worse in the absence of the user's own visual feedback (Ts = 7.09 (P < 0.001) and Tc = 4.48 (P < 0.005)). In vision-based interaction systems, the input device is the user's own body; therefore, it makes sense that feedback should be related to the body of the user. In the case of disabilities, the mirror feedback mechanisms facilitated the interaction in vision-based systems for rehabilitation. These results recommend that developers and researchers use this improvement in vision-based motor rehabilitation interactive systems. PMID:25295310

  4. Improving vision-based motor rehabilitation interactive systems for users with disabilities using mirror feedback.

    PubMed

    Jaume-i-Capó, Antoni; Martínez-Bueso, Pau; Moyà-Alcover, Biel; Varona, Javier

    2014-01-01

    Observation is recommended in motor rehabilitation. For this reason, the aim of this study was to experimentally test the feasibility and benefit of including mirror feedback in vision-based rehabilitation systems: we projected the user on the screen. We conducted a user study using a previously evaluated system that improved the balance and postural control of adults with cerebral palsy. We used a within-subjects design with the two defined feedback conditions (mirror and no-mirror) and two different groups of users (8 with disabilities and 32 without disabilities), using the usability measures time-to-start (T(s)) and time-to-complete (T(c)). A two-tailed paired samples t-test confirmed that, in the case of disabilities, the mirror feedback facilitated the interaction in vision-based systems for rehabilitation. The measured times were significantly worse in the absence of the user's own visual feedback (T(s) = 7.09 (P < 0.001) and T(c) = 4.48 (P < 0.005)). In vision-based interaction systems, the input device is the user's own body; therefore, it makes sense that feedback should be related to the body of the user. In the case of disabilities, the mirror feedback mechanisms facilitated the interaction in vision-based systems for rehabilitation. These results recommend that developers and researchers use this improvement in vision-based motor rehabilitation interactive systems.
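
    The reported analysis, a two-tailed paired-samples t-test on the usability times, is easy to reproduce in outline. The sketch below uses synthetic stand-in data, not the study's measurements.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      ts_mirror = rng.normal(5.0, 1.0, size=8)                  # synthetic Ts, mirror
      ts_no_mirror = ts_mirror + rng.normal(2.0, 1.0, size=8)   # synthetic Ts, no mirror

      t, p = stats.ttest_rel(ts_no_mirror, ts_mirror)   # two-tailed by default
      print(f"t = {t:.2f}, p = {p:.4f}")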

  5. Data Fusion for a Vision-Radiological System for Source Tracking and Discovery

    SciTech Connect

    Enqvist, Andreas; Koppal, Sanjeev

    2015-07-01

    accounted for. In particular, the computer vision system enables a map of the distance dependence of the sources being tracked. Infrared, laser, and stereoscopic vision sensors are all options for the computer vision implementation, depending on interior versus exterior deployment, the desired resolution, and other factors. Similarly, the radiation sensors are focused on gamma-ray or neutron detection because of their long travel length and ability to penetrate even moderate shielding. There is a significant difference between vision sensors and radiation sensors in the way the 'source' or signal is generated. A vision sensor needs an external light source to illuminate the object and then detects the re-emitted illumination (or lack thereof). For a radiation detector, however, the radioactive material is the source itself. The only exception is the field of active interrogation, where radiation is beamed into a material to entice new or additional radiation emission beyond what the material would emit spontaneously. Because the nuclear material is itself the source, all other objects in the environment are 'illuminated', or irradiated, by it. Most radiation will readily penetrate regular material, scatter in new directions, or be absorbed. Thus, if a radiation source is located near a larger object, that object will in turn scatter some radiation that was initially emitted in a direction other than that of the radiation detector, which can add to the observed count rate. The effect of this scattering is a deviation from the traditional distance dependence of the radiation signal and is a key challenge that needs a combined system calibration solution and algorithms. Both an algebraic approach and a statistical approach have therefore been developed and independently evaluated to investigate the sensitivity to this deviation from the simplified radiation fall-off as a function of distance. The resulting calibrated system algorithms are used and
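
    The deviation from a pure inverse-square fall-off that the abstract highlights can be illustrated by fitting count-rate data with an added constant scatter term. The model form C(r) = S/r^2 + b and all numbers below are illustrative assumptions, not the project's calibration.

      import numpy as np
      from scipy.optimize import curve_fit

      def model(r, S, b):
          # Inverse-square source term plus a constant scatter ("room return") floor.
          return S / r**2 + b

      r = np.linspace(1.0, 8.0, 15)                 # detector distances (m), assumed
      rng = np.random.default_rng(1)
      counts = model(r, 400.0, 12.0) + rng.normal(0, 3, r.size)  # synthetic data

      (S_hat, b_hat), _ = curve_fit(model, r, counts, p0=(100.0, 1.0))
      print(f"fitted source term {S_hat:.0f}, scatter floor {b_hat:.1f}")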

  6. A Machine Vision Quality Control System for Industrial Acrylic Fibre Production

    NASA Astrophysics Data System (ADS)

    Heleno, Paulo; Davies, Roger; Correia, Bento A. Brázio; Dinis, João

    2002-12-01

    This paper describes the implementation of INFIBRA, a machine vision system used in the quality control of acrylic fibre production. The system was developed by INETI under a contract with a leading industrial manufacturer of acrylic fibres. It monitors several parameters of the acrylic production process. This paper presents, after a brief overview of the system, a detailed description of the machine vision algorithms developed to perform the inspection tasks unique to this system. Some of the results of online operation are also presented.

  7. Eye vision system using programmable micro-optics and micro-electronics

    NASA Astrophysics Data System (ADS)

    Riza, Nabeel A.; Amin, M. Junaid; Riza, Mehdi N.

    2014-02-01

    Proposed is a novel eye vision system that combines advanced micro-optic and micro-electronic technologies, including programmable micro-optic devices, pico-projectors, radio frequency (RF) and optical wireless communication and control links, energy harvesting and storage devices, and remote wireless energy transfer capabilities. This portable, lightweight system can measure eye refractive powers, optimize light conditions for the eye under test, conduct color-blindness tests, and implement eye strain relief and eye muscle exercises via time-sequenced imaging. Described are the basic design of the proposed system and its first-stage experimental results for spherical-lens refractive error correction.

  8. Computer vision

    NASA Technical Reports Server (NTRS)

    Gennery, D.; Cunningham, R.; Saund, E.; High, J.; Ruoff, C.

    1981-01-01

    The field of computer vision is surveyed and assessed, key research issues are identified, and possibilities for a future vision system are discussed. The problems of describing two- and three-dimensional worlds are discussed, and the representation of features such as texture, edges, curves, and corners is detailed. Recognition methods are described in which cross-correlation coefficients are maximized or numerical values for a set of features are measured. Object tracking is discussed in terms of the robust matching algorithms that must be devised. Stereo vision, camera control and calibration, and the hardware and systems architecture are also discussed.
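
    The survey's recognition-by-correlation idea corresponds to classical normalized cross-correlation template matching. A brute-force Python sketch follows; it is illustrative, not the surveyed systems' code.

      import numpy as np

      def ncc(patch, template):
          # Normalized cross-correlation coefficient of two same-size arrays.
          p = patch - patch.mean()
          t = template - template.mean()
          denom = np.sqrt((p**2).sum() * (t**2).sum())
          return (p * t).sum() / denom if denom else 0.0

      def match(image, template):
          # Exhaustive search for the location maximizing the coefficient.
          th, tw = template.shape
          best, best_xy = -1.0, (0, 0)
          for y in range(image.shape[0] - th + 1):
              for x in range(image.shape[1] - tw + 1):
                  score = ncc(image[y:y+th, x:x+tw], template)
                  if score > best:
                      best, best_xy = score, (x, y)
          return best_xy, best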

  9. Development and modeling of a stereo vision focusing system for a field programmable gate array robot

    NASA Astrophysics Data System (ADS)

    Tickle, Andrew J.; Buckle, James; Grindley, Josef E.; Smith, Jeremy S.

    2010-10-01

    Stereo vision makes an imaging system more robust by mimicking the human visual system: two or more cameras view the same scene, and knowledge of their relative geometry is exploited to derive depth information from the two views. The 3D coordinates of an object in an observed scene can be computed from the intersection of the two sets of rays. Presented here is the development of a stereo vision system that focuses on an object at the centre of a baseline between two cameras at varying distances. It was developed primarily for use on a Field Programmable Gate Array (FPGA), but an adaptation of the methodology is also presented for a PUMA 560 robotic manipulator with a single camera attachment. The two main configurations considered are a fixed baseline with an object moving at varying distances from it, and a fixed distance with a varying baseline. These two situations provide enough data for the coefficients that determine the system's operation to be calibrated automatically: only the baseline value needs to be entered, and the system performs all the required calculations, so a baseline of any distance can be used. The limits of the system with regard to focusing accuracy are also presented, along with how the PUMA 560 controls its joints and moves from one position to another to achieve stereo vision, compared with the two-camera FPGA system. The benefits of such a system for range finding in mobile robotics are discussed, together with the advantages of this approach over laser range finders or ultrasonic echolocation.
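
    The range relation underlying such a system is the standard stereo formula Z = f * B / d. The sketch below, with assumed focal length and baseline, also shows the vergence (toe-in) angle needed to fixate a point at a given depth.

      import math

      F_PIX = 800.0        # focal length in pixels (assumed)
      BASELINE_M = 0.12    # camera separation in metres (assumed)

      def depth_from_disparity(d_pixels):
          # Depth of a point seen with disparity d: Z = f * B / d.
          return F_PIX * BASELINE_M / d_pixels

      def vergence_angle(depth_m):
          # Toe-in angle per camera to fixate a point at the given depth.
          return math.atan2(BASELINE_M / 2.0, depth_m)

      print(depth_from_disparity(24.0), math.degrees(vergence_angle(4.0)))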

  10. A Customized Vision System for Tracking Humans Wearing Reflective Safety Clothing from Industrial Vehicles and Machinery

    PubMed Central

    Mosberger, Rafael; Andreasson, Henrik; Lilienthal, Achim J.

    2014-01-01

    This article presents a novel approach for vision-based detection and tracking of humans wearing high-visibility clothing with retro-reflective markers. Addressing industrial applications where heavy vehicles operate in the vicinity of humans, we deploy a customized stereo camera setup with active illumination that allows for efficient detection of the reflective patterns created by the worker's safety garments. After segmenting reflective objects from the image background, the interest regions are described with local image feature descriptors and classified in order to discriminate safety garments from other reflective objects in the scene. In a final step, the trajectories of the detected humans are estimated in 3D space relative to the camera. We evaluate our tracking system in two industrial real-world work environments on several challenging video sequences. The experimental results indicate accurate tracking performance and good robustness towards partial occlusions, body pose variation, and a wide range of different illumination conditions. PMID:25264956

  11. Simultaneous perimeter measurement for 3D object with a binocular stereo vision measurement system

    NASA Astrophysics Data System (ADS)

    Peng, Zhao; Guo-Qiang, Ni

    2010-04-01

    A scheme for the simultaneous measurement of the surface boundary perimeters of multiple three-dimensional (3D) objects is proposed. This scheme consists of three steps. First, a binocular stereo vision measurement system with two CCD cameras is devised to obtain two images of the detected objects' 3D surface boundaries. Second, two geodesic active contours are made to converge simultaneously to the objects' contour edges in the two CCD images to perform the stereo matching. Finally, the multiple spatial contours are reconstructed using cubic B-spline curve interpolation, and the true length of every spatial contour is computed as the boundary perimeter of the corresponding 3D object. An experiment on the perimeter measurement of the bent surfaces of four 3D objects indicates that the scheme's measurement repetition error is as low as 0.7 mm.
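
    The final reconstruction step, cubic B-spline interpolation followed by arc-length integration, can be sketched with SciPy on a synthetic closed contour; this is an illustration, not the paper's data.

      import numpy as np
      from scipy.interpolate import splprep, splev

      # Synthetic closed 3D contour standing in for matched boundary points.
      t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
      pts = np.vstack([np.cos(t), np.sin(t), 0.1 * np.sin(3 * t)])

      tck, _ = splprep(pts, k=3, s=0, per=True)   # closed cubic B-spline
      u = np.linspace(0, 1, 2000)
      x, y, z = splev(u, tck)
      perimeter = np.sqrt(np.diff(x)**2 + np.diff(y)**2 + np.diff(z)**2).sum()
      print("perimeter ~", perimeter)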

  12. 78 FR 68475 - Certain Vision-Based Driver Assistance System Cameras and Components Thereof; Institution of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-14

    ... COMMISSION Certain Vision-Based Driver Assistance System Cameras and Components Thereof; Institution of...-based driver assistance system cameras and components thereof by reason of infringement of certain... assistance system cameras and components thereof by reason of infringement of one or more of claims 1, 2,...

  13. Night vision imaging systems design, integration, and verification in military fighter aircraft

    NASA Astrophysics Data System (ADS)

    Sabatini, Roberto; Richardson, Mark A.; Cantiello, Maurizio; Toscano, Mario; Fiorini, Pietro; Jia, Huamin; Zammit-Mangion, David

    2012-04-01

    This paper describes the development and testing activities conducted by the Italian Air Force Official Test Centre (RSV) in collaboration with Alenia Aerospace, Litton Precision Products, and Cranfield University in order to confer Night Vision Imaging System (NVIS) capability on the Italian TORNADO IDS (Interdiction and Strike) and ECR (Electronic Combat and Reconnaissance) aircraft. The activities consisted of various Design, Development, Test and Evaluation (DDT&E) activities, including Night Vision Goggle (NVG) integration, cockpit instrument and external lighting modifications, as well as various ground test sessions and a total of eighteen flight test sorties. RSV and Litton Precision Products were responsible for coordinating and conducting the installation of the internal and external lights. In particular, an iterative process was established, allowing rapid in-site correction of the major deficiencies encountered during the ground and flight test sessions. Both single-ship (day/night) and formation (night) flights were performed, shared between the test crews involved in the activities, allowing a redundant examination of the various test items by all participants. An innovative test matrix was developed and implemented by RSV for assessing the operational suitability and effectiveness of the various modifications. Also important was the definition of test criteria for Pilot and Weapon Systems Officer (WSO) workload assessment during the accomplishment of various operational tasks in NVG missions. Furthermore, the specific technical and operational elements required for evaluating the modified helmets were identified, allowing an exhaustive comparative evaluation of the two proposed solutions (i.e., the HGU-55P and HGU-55G modified helmets). The results of the activities were very satisfactory. The initial compatibility problems encountered were progressively mitigated by incorporating modifications both in the front and

  14. Vision Underwater.

    ERIC Educational Resources Information Center

    Levine, Joseph S.

    1980-01-01

    Provides information regarding underwater vision. Includes a discussion of optically important interfaces, increased eye size of organisms at greater depths, visual peculiarities regarding the habitat of the coastal environment, and various pigment visual systems. (CS)

  15. Vision System To Identify Car Body Types For Spray Painting Robot

    NASA Astrophysics Data System (ADS)

    Uartlam, Peter; Neilson, Geoff

    1984-02-01

    The automation of car body spray booth operations employing paint-spraying robots generally requires the robots to execute one of a number of defined routines according to the car body type. A vision system is described which identifies a car body type by its shape and provides an identity code to the robot controller, thus enabling the correct routine to be executed. The vision system consists of a low-cost linescan camera, a fluorescent light source, and a microprocessor image analyser, and is an example of a cost-effective, reliable, industrially engineered robot vision system for a demanding production environment. Extending the system with additional cameras will broaden its application to other automatic operations on a car assembly line, where it becomes essential to reliably differentiate between up to 40 variations of body types.

  16. 2020 vision for a high-quality, high-value maternity care system.

    PubMed

    Carter, Martha Cook; Corry, Maureen; Delbanco, Suzanne; Foster, Tina Clark-Samazan; Friedland, Robert; Gabel, Robyn; Gipson, Teresa; Jolivet, R Rima; Main, Elliott; Sakala, Carol; Simkin, Penny; Simpson, Kathleen Rice

    2010-01-01

    A concrete and useful way to create an action plan for improving the quality of maternity care in the United States is to start with a view of the desired result, a common definition and a shared vision for a high-quality, high-value maternity care system. In this paper, we present a long-term vision for the future of maternity care in the United States. We present overarching values and principles and specific attributes of a high-performing maternity care system. We put forth the "2020 Vision for a High-Quality, High-Value Maternity Care System" to serve as a positive starting place for a fruitful collaborative process to develop specific action steps for broad-based maternity care system improvement.

  17. Long-Term Instrumentation, Information, and Control Systems (II&C) Modernization Future Vision and Strategy

    SciTech Connect

    Kenneth Thomas; Bruce Hallbert

    2013-02-01

    seamless digital environment to enhance nuclear safety, increase productivity, and improve overall plant performance. The long-term goal is to transform the operating model of nuclear power plants (NPPs) from one that is highly reliant on a large staff performing mostly manual activities to an operating model based on highly integrated technology with a smaller staff. This digital transformation is critical to addressing an array of issues facing the plants, including the aging of legacy analog systems, a potential shortage of technical workers, ever-increasing expectations for nuclear safety improvement, and relentless pressure to reduce cost. The Future Vision is based on research being conducted in the following major areas of plant function: 1. Highly integrated control rooms 2. Highly automated plant 3. Integrated operations 4. Human performance improvement for field workers 5. Outage safety and efficiency. Pilot projects will be conducted in each of these areas as the means for industry to collectively integrate these new technologies into nuclear plant work activities. The pilot projects introduce new digital technologies into the nuclear plant operating environment at host operating plants to demonstrate and validate them for production usage. In turn, the pilot project technologies serve as the stepping stones to the eventual seamless digital environment described in the Future Vision.

  18. Long-Term Instrumentation, Information, and Control Systems (II&C) Modernization Future Vision and Strategy

    SciTech Connect

    Kenneth Thomas

    2012-02-01

    digital environment to enhance nuclear safety, increase productivity, and improve overall plant performance. The long-term goal is to transform the operating model of nuclear power plants (NPPs) from one that is highly reliant on a large staff performing mostly manual activities to an operating model based on highly integrated technology with a smaller staff. This digital transformation is critical to addressing an array of issues facing the plants, including the aging of legacy analog systems, a potential shortage of technical workers, ever-increasing expectations for nuclear safety improvement, and relentless pressure to reduce cost. The Future Vision is based on research being conducted in the following major areas of plant function: (1) Highly integrated control rooms; (2) Highly automated plant; (3) Integrated operations; (4) Human performance improvement for field workers; and (5) Outage safety and efficiency. Pilot projects will be conducted in each of these areas as the means for industry to collectively integrate these new technologies into nuclear plant work activities. The pilot projects introduce new digital technologies into the nuclear plant operating environment at host operating plants to demonstrate and validate them for production usage. In turn, the pilot project technologies serve as the stepping stones to the eventual seamless digital environment described in the Future Vision.

  19. A computer vision system for the recognition of trees in aerial photographs

    NASA Technical Reports Server (NTRS)

    Pinz, Axel J.

    1991-01-01

    Increasing problems of forest damage in Central Europe have created demand for an appropriate forest damage assessment tool. The Vision Expert System (VES) is presented, which is capable of finding trees in color infrared aerial photographs. The concept and architecture of VES are discussed briefly, and the system is applied to a multisource test data set. The processing of this multisource data set leads to multiple interpretation results for one scene; an integration of these results, achieved by an implementation of Steven's correlation algorithm, provides a better scene description by the vision system.

  20. Computer Vision System For Locating And Identifying Defects In Hardwood Lumber

    NASA Astrophysics Data System (ADS)

    Conners, Richard W.; Ng, Chong T.; Cho, Tai-Hoon; McMillin, Charles W.

    1989-03-01

    This paper describes research aimed at developing an automatic cutup system for use in the rough mills of the hardwood furniture and fixture industry. In particular, it describes attempts to create the vision system that will power this automatic cutup system. A number of factors make the development of such a vision system a challenge. First, there is the innate variability of the wood material itself. No two species look exactly the same; in fact, there can be significant differences in visual appearance among species. Yet a truly robust vision system must be able to handle a variety of such species, preferably with no operator intervention required when changing from one species to another. Second, there is a good deal of variability in the definition of what constitutes a removable defect. The hardwood furniture and fixture industry is diverse in the nature of the products it makes, which range from hardwood flooring to fancy hardwood furniture, from simple millwork to kitchen cabinets. Thus, depending on the manufacturer, the product, and the quality of the product, the nature of what constitutes a removable defect can and does vary. The vision system must be such that it can be tailored to meet each of these unique needs, preferably without any additional program modifications. This paper describes the vision system that has been developed, assesses its current capabilities, and discusses directions for future research. It is argued that artificial intelligence methods provide a natural mechanism for attacking this computer vision application.

  1. A machine vision assisted system for fluorescent magnetic particle inspection of railway wheelsets

    NASA Astrophysics Data System (ADS)

    Ma, Tao; Sun, Zhenguo; Zhang, Wenzeng; Chen, Qiang

    2016-02-01

    Fluorescent magnetic particle inspection is a conventional non-destructive evaluation process for detecting surface and slightly subsurface cracks in wheelsets. Using machine vision instead of workers' direct observation can remarkably improve the working conditions and the repeatability of the inspection. This paper presents a machine-vision-assisted automatic fluorescent magnetic particle inspection system for surface defect inspection of railway wheelsets. The system is composed of a semiautomatic fluorescent magnetic particle inspection machine, a vision system, and an industrial computer. The detection of magnetic particle indications of quantitative quality indicators and cracks is studied: the detection of quantitative quality indicators is achieved by mathematical morphology, Otsu's thresholding, and a RANSAC-based ellipse fitting algorithm; the crack detection algorithm is a multiscale algorithm using Gaussian blur, mathematical morphology, and several shape and color descriptors. Tests show that the algorithms are able to detect the indications of the quantitative quality indicators and the cracks precisely.
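
    The quality-indicator branch (morphology, Otsu thresholding, ellipse fitting) can be sketched with OpenCV. The RANSAC refinement and the multiscale crack stage are omitted here, and all parameters and the file name are placeholders.

      import cv2

      img = cv2.imread("wheelset_uv.png", cv2.IMREAD_GRAYSCALE)  # placeholder
      kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
      tophat = cv2.morphologyEx(img, cv2.MORPH_TOPHAT, kernel)   # bright indications
      _, bw = cv2.threshold(tophat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

      contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
      for c in contours:
          if len(c) >= 5 and cv2.contourArea(c) > 30:   # fitEllipse needs >= 5 points
              (cx, cy), (major, minor), angle = cv2.fitEllipse(c)
              print(f"indication at ({cx:.0f}, {cy:.0f}), axes {major:.0f}x{minor:.0f}")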

  2. Comparison of a multispectral vision system and a colorimeter for the assessment of meat color.

    PubMed

    Trinderup, Camilla H; Dahl, Anders; Jensen, Kirsten; Carstensen, Jens Michael; Conradsen, Knut

    2015-04-01

    The color assessment ability of a multispectral vision system is investigated in a comparison study with color measurements from a traditional colorimeter. The experiment involves fresh and processed meat samples. Meat is a complex material, heterogeneous with varying scattering and reflectance properties, so several factors can influence the instrumental assessment of meat color. In order to assess whether the two methods are equivalent, the variation due to these factors must be taken into account. A statistical analysis showed that on a calibration sheet the two instruments are equally capable of measuring color. Moreover, the vision system provides a richer color assessment of fresh meat samples, which have a glossier surface, than the colorimeter does. Careful studies of the different sources of variation enable an assessment of the order of magnitude of the variability between methods, accounting for the other sources of variation, and lead to the conclusion that color assessment using a multispectral vision system is superior to traditional colorimeter assessment.

  3. A Computational Model of Active Vision for Visual Search in Human-Computer Interaction

    DTIC Science & Technology

    2010-08-01

    This work answers the four questions of active vision (e.g., when do the eyes move?) by modeling fixations ... from two experiments: a mixed density search task and a CVC (consonant-vowel-consonant) search task. The mixed density experiment (Halverson & Hornof, 2004b) investigated the effects of varying the visual density of elements in a structured layout. The CVC search experiment (Hornof, 2004

  4. Influence of control parameters on the joint tracking performance of a coaxial weld vision system

    NASA Technical Reports Server (NTRS)

    Gangl, K. J.; Weeks, J. L.

    1985-01-01

    The first phase of a series of evaluations of a vision-based welding control sensor for the Space Shuttle Main Engine Robotic Welding System is described. The robotic welding system is presently under development at the Marshall Space Flight Center. This evaluation determines the standard control response parameters necessary for proper trajectory of the welding torch along the joint.

  5. Night vision: requirements and possible roadmap for FIR and NIR systems

    NASA Astrophysics Data System (ADS)

    Källhammer, Jan-Erik

    2006-04-01

    A night vision system must increase visibility in situations where only low-beam headlights can be used today. As pedestrians and animals face the highest risk increase in night-time traffic due to darkness, the ability to detect these objects should be the main performance criterion, and the system must remain effective when facing the headlights of oncoming vehicles. Far infrared (FIR) systems have been shown to be superior to near infrared (NIR) systems in terms of pedestrian detection distance. Near infrared images were rated as having significantly higher visual clutter than far infrared images, and visual clutter has been shown to correlate with reduced pedestrian detection distance. Far infrared images are perceived as more unusual and therefore more difficult to interpret, although this image appearance is likely related to the lower visual clutter. The main issue in comparing the two technologies, however, should be how well they solve the driver's problem of insufficient visibility under low-beam conditions, especially of pedestrians and other vulnerable road users. With the addition of an automatic detection aid, a main issue will be whether the advantage of FIR systems will vanish given NIR systems with well-performing automatic pedestrian detection. The first night vision introductions did not generate the sales volumes initially expected; renewed interest in night vision systems is, however, to be expected after the release of night vision systems by BMW, Mercedes, and Honda, the latter with automatic pedestrian detection.

  6. Human factors and safety considerations of night-vision systems flight using thermal imaging systems

    NASA Astrophysics Data System (ADS)

    Rash, Clarence E.; Verona, Robert W.; Crowley, John S.

    1990-10-01

    Helmet Mounted Systems (HMS) must be lightweight, balanced, and compatible with life support and head protection assemblies. This paper discusses the design of one particular HMS, the GEC Ferranti NITE-OP/NIGHTBIRD aviator's Night Vision Goggle (NVG), developed under contracts to the Ministry of Defence for all three services in the United Kingdom (UK) for rotary-wing and fast-jet aircraft. The existing equipment constraints and the safety, human factors, and optical performance requirements are discussed before the design solution, arrived at after consideration of the material and manufacturing options, is presented.

  7. New vision system and navigation algorithm for an autonomous ground vehicle

    NASA Astrophysics Data System (ADS)

    Tann, Hokchhay; Shakya, Bicky; Merchen, Alex C.; Williams, Benjamin C.; Khanal, Abhishek; Zhao, Jiajia; Ahlgren, David J.

    2013-12-01

    Improvements were made to the intelligence algorithms of an autonomously operating ground vehicle, Q, which competed in the 2013 Intelligent Ground Vehicle Competition (IGVC). The IGVC required the vehicle to first navigate between two white lines on a grassy obstacle course, then pass through eight GPS waypoints, and finally pass through an obstacle field. Modifications to Q included a new vision system with a more effective image processing algorithm for white line extraction. The path-planning algorithm incorporated the new vision system's output, creating smoother, more reliable navigation. With these improvements, Q successfully completed the basic autonomous navigation challenge, finishing tenth out of over 50 teams.
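
    A typical white-line extraction step of the kind described can be sketched with an HSV threshold plus Hough line detection; the bounds and file name below are placeholders, not Q's actual parameters.

      import cv2
      import numpy as np

      frame = cv2.imread("course.jpg")                  # placeholder image
      hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
      # White paint: any hue, low saturation, high value (bounds assumed).
      mask = cv2.inRange(hsv, (0, 0, 200), (180, 60, 255))
      mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
      lines = cv2.HoughLinesP(mask, 1, np.pi / 180, threshold=80,
                              minLineLength=60, maxLineGap=20)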

  8. Flight Test Evaluation of Situation Awareness Benefits of Integrated Synthetic Vision System Technology for Commercial Aircraft

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, Jarvis J., III

    2005-01-01

    Research was conducted onboard a Gulfstream G-V aircraft to evaluate integrated Synthetic Vision System concepts during flight tests over a 6-week period at the Wallops Flight Facility and Reno/Tahoe International Airport. The NASA Synthetic Vision System incorporates database integrity monitoring, runway incursion prevention alerting, surface maps, enhanced vision sensors, and advanced pathway guidance and synthetic terrain presentation. The paper details the goals and objectives of the flight test with a focus on the situation awareness benefits of integrating synthetic vision system enabling technologies for commercial aircraft.

  9. An assembly system based on industrial robot with binocular stereo vision

    NASA Astrophysics Data System (ADS)

    Tang, Hong; Xiao, Nanfeng

    2017-01-01

    This paper proposes an electronic part and component assembly system based on an industrial robot with binocular stereo vision. Firstly, binocular stereo vision with a visual attention mechanism model is used to quickly find the image regions that contain the electronic parts and components. Secondly, a deep neural network is adopted to recognize the features of the electronic parts and components. Thirdly, in order to control the end-effector of the industrial robot to grasp the parts and components, a genetic algorithm (GA) is proposed to compute the transition matrix and the inverse kinematics of the industrial robot (end-effector), which plays a key role in bridging the binocular stereo vision and the industrial robot. Finally, the proposed assembly system is tested in LED component assembly experiments, and the results show that it has high efficiency and good applicability.
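
    The GA-for-inverse-kinematics idea can be illustrated on a toy planar two-link arm. The paper's robot, transition matrix, and GA settings are not reproduced; the link lengths, target, and GA parameters below are all assumed.

      import numpy as np

      L1, L2 = 0.4, 0.3                        # link lengths in metres (assumed)
      TARGET = np.array([0.5, 0.2])            # desired end-effector position

      def fk(q):                               # planar 2-link forward kinematics
          return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                           L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

      rng = np.random.default_rng(0)
      pop = rng.uniform(-np.pi, np.pi, (60, 2))            # random joint-angle genomes
      for _ in range(200):
          err = np.array([np.linalg.norm(fk(q) - TARGET) for q in pop])
          parents = pop[np.argsort(err)[:20]]              # truncation selection
          i, j = rng.integers(0, 20, 40), rng.integers(0, 20, 40)
          children = (parents[i] + parents[j]) / 2         # arithmetic crossover
          children += rng.normal(0, 0.05, children.shape)  # Gaussian mutation
          pop = np.vstack([parents, children])
      best = min(pop, key=lambda q: np.linalg.norm(fk(q) - TARGET))
      print("joint angles:", best, "reaches:", fk(best))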

  10. Novel approach to characterize and compare the performance of night vision systems in representative illumination conditions

    NASA Astrophysics Data System (ADS)

    Roy, Nathalie; Vallières, Alexandre; St-Germain, Daniel; Potvin, Simon; Dupuis, Michel; Bouchard, Jean-Claude; Villemaire, André; Bérubé, Martin; Breton, Mélanie; Gagné, Guillaume

    2016-05-01

    A novel approach is used to characterize and compare the performance of night vision systems in conditions more representative of night operations in terms of spectral content. Its main advantage over standard testing methodologies is that it provides a fast and efficient way for untrained observers to compare night vision system performance under realistic illumination spectra. The testing methodology relies on a custom tumbling-E target and on a new LED-based illumination source that better emulates night sky spectral irradiances from deep overcast starlight to quarter-moon conditions. In this paper, we describe the setup and demonstrate that the novel approach can be an efficient method to characterize, among other devices, night vision goggle (NVG) performance, with a small error in the number of photogenerated electrons compared to the STANAG 4351 procedure.

  11. A bio-inspired apposition compound eye machine vision sensor system.

    PubMed

    Davis, J D; Barrett, S F; Wright, C H G; Wilcox, M

    2009-12-01

    The Wyoming Information, Signal Processing, and Robotics Laboratory is developing a wide variety of bio-inspired vision sensors. We are interested in exploring the vision systems of various insects and adapting some of their features toward the development of specialized vision sensors. We do not attempt to supplant traditional digital imaging techniques but rather develop sensor systems tailor-made for the application at hand. We envision that many applications may require a hybrid approach using conventional digital imaging techniques enhanced with bio-inspired analogue sensors. In this specific project, we investigated the apposition compound eye and its characteristics commonly found in diurnal insects and certain species of arthropods. We developed and characterized an array of apposition compound eye-type sensors and tested them on an autonomous robotic vehicle. The robot exhibits the ability to follow a pre-defined target and avoid specified obstacles using a simple control algorithm.
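
    The "simple control algorithm" is not detailed above; the Braitenberg-style steering rule below is one plausible sketch of how paired compound-eye responses could drive differential wheel speeds, with the sensor wiring and gains as assumptions.

        # Hypothetical steering rule driven by left/right photoreceptor responses.
        def steer(target_left, target_right, obstacle_left, obstacle_right,
                  k_attract=1.0, k_avoid=2.0):
            """Return (left_wheel, right_wheel) speed commands."""
            # Attraction uses crossed wiring (turn toward the target);
            # avoidance uses direct wiring (turn away from the obstacle).
            left_wheel = k_attract * target_right - k_avoid * obstacle_left
            right_wheel = k_attract * target_left - k_avoid * obstacle_right
            return left_wheel, right_wheel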

  12. G-MAP: a novel night vision system for satellites

    NASA Astrophysics Data System (ADS)

    Miletti, Thomas; Maresi, Luca; Zuccaro Marchi, Alessandro; Pontetti, Giorgia

    2015-10-01

    The recent development of single-photon counting array detectors opens the door to a novel type of system that could be used on satellites in low Earth orbit. One possible application is the detection of non-cooperative vessels or illegal fishing activities. Currently only surveillance operations conducted by navies or coast guards address this topic, operations by nature costly and with limited coverage. This paper describes the architectural design of a system based on a novel single-photon counting detector, which works mainly in the visible and features fast readout, low noise, and a 256×256 matrix of 64 μm pixels. This detector is positioned in the focal plane of a fully aspheric reflective f/6 telescope to guarantee state-of-the-art performance. The combination of the two grants optimal ground sampling distance, compatible with the average dimensions of a vessel, and good overall performance. A radiative analysis of the light transmitted from emission to detection is presented, starting from models of lamps used for attracting fish and illuminating the decks of the boats. A radiative transfer model is used to estimate the number of photons emitted by such vessels that reach the detector. Since the novel detector features a high frame rate and low noise, the system as envisaged is able to serve the proposed goal properly. The paper shows the results of a trade-off between instrument parameters and spacecraft operations to maximize the detection probability and the covered sea surface. The status of development of both detector and telescope is also described.
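
    To make the radiative reasoning concrete, the back-of-the-envelope sketch below estimates photoelectrons per frame from a deck lamp seen from orbit; every numerical input (lamp power, altitude, aperture, efficiencies) is an assumed placeholder, not a figure from the paper.

        # Photon-budget estimate under stated assumptions.
        import math

        def photoelectrons_per_frame(lamp_watts=1000.0, altitude_m=500e3,
                                     aperture_m=0.2, frame_s=1e-3,
                                     optical_eff=0.5, quantum_eff=0.3,
                                     wavelength_m=550e-9, atm_transmission=0.7):
            h, c = 6.626e-34, 3.0e8
            photon_energy = h * c / wavelength_m          # J per photon
            emitted_rate = lamp_watts / photon_energy     # photons/s
            # Assume isotropic emission into the upper hemisphere (2*pi sr).
            rate_per_sr = emitted_rate / (2.0 * math.pi)
            aperture_sr = math.pi * (aperture_m / 2) ** 2 / altitude_m ** 2
            collected = rate_per_sr * aperture_sr * atm_transmission
            return collected * optical_eff * quantum_eff * frame_s

        print(f"~{photoelectrons_per_frame():.0f} photoelectrons per 1 ms frame")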

  13. Low Vision

    MedlinePlus


  14. Prediction of pork color attributes using computer vision system.

    PubMed

    Sun, Xin; Young, Jennifer; Liu, Jeng Hung; Bachmeier, Laura; Somers, Rose Marie; Chen, Kun Jie; Newman, David

    2016-03-01

    Color image processing and regression methods were utilized to evaluate the color score of pork center cut loin samples. One hundred loin samples with subjective color scores 1 to 5 (NPB, 2011; n=20 for each color score) were selected to determine correlation values between Minolta colorimeter measurements and image processing features. Eighteen image color features were extracted from three color spaces: RGB (red, green, blue), HSI (hue, saturation, intensity), and L*a*b*. When comparing Minolta colorimeter values with those obtained from image processing, correlations were significant (P<0.0001) for L* (0.91), a* (0.80), and b* (0.66). Two comparable regression models (linear and stepwise) were used to evaluate prediction results of pork color attributes. The proposed linear regression model had a coefficient of determination (R(2)) of 0.83, compared to the stepwise regression result (R(2)=0.70). These results indicate that computer vision methods have potential to be used as a tool in predicting pork color attributes.
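
    A hedged sketch of the regression step follows: an ordinary least-squares fit predicting a colorimeter value from image color features, using synthetic stand-in data in place of the study's 100 samples and 18 features.

        # Illustrative linear-regression step with placeholder data.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import r2_score

        rng = np.random.default_rng(42)
        X = rng.uniform(0, 255, (100, 18))           # 100 samples x 18 features
        y = X @ rng.normal(0, 0.05, 18) + rng.normal(0, 1.0, 100)  # stand-in L*

        model = LinearRegression().fit(X, y)
        print("R^2 =", round(r2_score(y, model.predict(X)), 3))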

  15. Insect vision based collision avoidance system for Remotely Piloted Aircraft

    NASA Astrophysics Data System (ADS)

    Jaenisch, Holger; Handley, James; Bevilacqua, Andrew

    2012-06-01

    Remotely Piloted Aircraft (RPA) are designed to operate in many of the same areas as manned aircraft; however, the limited instantaneous field of regard (FOR) available to RPA pilots restricts their ability to react quickly to nearby objects. This increases the danger of mid-air collisions and limits the ability of RPAs to operate in environments such as terminals or other high-traffic airspace. We present an approach based on insect vision that increases awareness while keeping size, weight, and power consumption at a minimum. Insect eyes are not designed to gather the same level of information that human eyes do. We present a novel Data Model and dynamically updated look-up-table approach to interpret non-imaging, direction-sensing-only detectors observing a higher-resolution video image of the aerial field of regard. Our technique is a composite hybrid method combining a small cluster of low-resolution cameras multiplexed into a single composite air picture, which is re-imaged by an insect eye to provide real-time scene understanding and collision avoidance cues. We provide smart camera application examples from parachute deployment testing and micro unmanned aerial vehicle (UAV) full motion video (FMV).

  16. Vision and Eye Health in Children 36 to <72 Months: Proposed Data System

    PubMed Central

    Hartmann, E. Eugenie; Block, Sandra S.; Wallace, David K.

    2015-01-01

    Purpose: This article provides a rationale for developing an integrated data system for recording vision screening and eye care follow-up outcomes in preschool-aged children. The recommendations were developed by the National Expert Panel to the National Center for Children's Vision and Eye Health at Prevent Blindness and funded by the Maternal and Child Health Bureau of the Health Resources and Services Administration, US Department of Health and Human Services. Guidance is provided regarding specific elements to be included, as well as the characteristics and architecture of such a data system. Vision screening for preschool-aged children is endorsed by many organizations concerned with children's health issues. Currently, there is a lack of data on the proportion of children screened and no effective system to ensure that children who fail screenings access appropriate comprehensive eye examinations and follow-up care. Results: The expansion of currently existing, or developing, integrated health information systems, which would include child-level vision screening data as well as referral records and follow-up diagnosis and treatment, is consistent with the proposed national approach to an integrated health information system (National Health Information Infrastructure). Development of an integrated vision data system will enhance eye health for young children at three different levels: (1) the child level, (2) the health care provider level, and (3) an epidemiological level. Conclusions: It is critical that the end users, the professionals who screen children and the professionals who provide eye care, be involved in the development and implementation of the proposed integrated data systems. As essential stakeholders invested in ensuring quality eye care for children, this community of professionals should find increasing need and opportunities at local, state, and national levels to contribute to cooperative guidance for data system development.

  17. Navigation integrity monitoring and obstacle detection for enhanced-vision systems

    NASA Astrophysics Data System (ADS)

    Korn, Bernd; Doehler, Hans-Ullrich; Hecker, Peter

    2001-08-01

    Typically, Enhanced Vision (EV) systems consist of two main parts, sensor vision and synthetic vision. Synthetic vision usually generates a virtual out-the-window view using databases and accurate navigation data, e.g. provided by differential GPS (DGPS). The reliability of the synthetic vision depends highly on both the accuracy of the database used and the integrity of the navigation data. But especially in GPS-based systems, the integrity of the navigation cannot be guaranteed. Furthermore, only objects that are stored in the database can be displayed to the pilot. Consequently, unexpected obstacles are invisible, and this might cause severe problems. Therefore, additional information has to be extracted from sensor data to overcome these problems. In particular, the sensor data analysis has to identify obstacles and has to monitor the integrity of databases and navigation. Furthermore, if a lack of integrity arises, navigation data, e.g. the relative position of runway and aircraft, has to be extracted directly from the sensor data. The main contribution of this paper concerns the realization of these three sensor data analysis tasks within our EV system, which uses the HiVision 35 GHz MMW radar of EADS, Ulm, as the primary EV sensor. For the integrity monitoring, objects extracted from radar images are registered with both database objects and objects (e.g. other aircraft) transmitted via data link. This results in a classification into known and unknown radar image objects and, consequently, in a validation of the integrity of database and navigation. Furthermore, special runway structures are searched for in the radar image where they should appear. The outcome of this runway check contributes to the integrity analysis, too. Concurrently with this investigation, a radar-image-based navigation is performed, using neither precision navigation nor detailed database information, to determine the aircraft's position relative to the runway. The performance of our
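
    The registration step can be sketched as nearest-neighbour gating: radar detections matching a database or data-link object within a gate are classified as known, and the rest flag potential obstacles or an integrity problem. The coordinates and gate size below are illustrative assumptions.

        # Hypothetical known/unknown classification of radar detections.
        import numpy as np

        def classify_detections(radar_xy, known_xy, gate_m=30.0):
            """Split radar detections into known and unknown objects."""
            known, unknown = [], []
            for det in radar_xy:
                dists = np.linalg.norm(known_xy - det, axis=1)
                if dists.size and dists.min() <= gate_m:
                    known.append(det)
                else:
                    unknown.append(det)   # candidate obstacle / integrity alert
            return known, unknown

        radar = np.array([[120.0, 40.0], [300.0, -15.0]])
        database = np.array([[118.0, 42.0]])
        matched, alerts = classify_detections(radar, database)
        print(len(matched), "known,", len(alerts), "unknown")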

  18. Reducing field distortion for galvanometer scanning system using a vision system

    NASA Astrophysics Data System (ADS)

    Ortega Delgado, Moises Alberto; Lasagni, Andrés Fabián

    2016-11-01

    Laser galvanometer scanning systems are well-established devices for material processing, medical imaging, and laser projection. Besides all the advantages of these devices, such as high resolution, repeatability, and processing velocity, they are always affected by field distortions. Different pre-compensation techniques using iterative marking and measuring methods are applied in order to reduce such field distortions and increase, to some extent, the accuracy of the scanning systems. High-tech devices, temperature control systems, and self-adjusting galvanometers are some expensive possibilities for reducing these deviations. This contribution presents a method for reducing field distortions using a coaxially coupled vision device and a self-designed calibration plate; among other things, this avoids the need for repetitive marking and measuring phases.
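
    One way to realize such a vision-based pre-compensation is to fit a bivariate polynomial from the observed spot positions (measured by the camera on the calibration plate) back to the commanded positions, then apply it when marking; the polynomial order and least-squares formulation below are assumptions.

        # Fit a per-axis polynomial correction map: observed -> commanded.
        import numpy as np

        def fit_correction(commanded, observed, order=3):
            """Least-squares coefficients mapping observed (x, y) to commanded (x, y)."""
            x, y = observed[:, 0], observed[:, 1]
            # Bivariate terms x**i * y**j with i + j <= order.
            terms = [x**i * y**j for i in range(order + 1)
                     for j in range(order + 1 - i)]
            A = np.column_stack(terms)
            coeffs, *_ = np.linalg.lstsq(A, commanded, rcond=None)
            return coeffs  # shape (n_terms, 2); evaluate new points the same way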

  19. External Vision Systems (XVS) Proof-of-Concept Flight Test Evaluation

    NASA Technical Reports Server (NTRS)

    Shelton, Kevin J.; Williams, Steven P.; Kramer, Lynda J.; Arthur, Jarvis J.; Prinzel, Lawrence, III; Bailey, Randall E.

    2014-01-01

    NASA's Fundamental Aeronautics Program, High Speed Project, is performing research, development, test, and evaluation of flight deck and related technologies to support future low-boom, supersonic configurations (without forward-facing windows) by use of an eXternal Vision System (XVS). The challenge of XVS is to determine a combination of sensor and display technologies which can provide an equivalent level of safety and performance to that provided by forward-facing windows in today's aircraft. This flight test was conducted with the goal of obtaining performance data on see-and-avoid and see-to-follow traffic using a proof-of-concept XVS design in actual flight conditions. Six data collection flights were flown in four traffic scenarios against two different-sized participating traffic aircraft. This test utilized a 3x1 array of High Definition (HD) cameras, with a fixed forward field-of-view, mounted on NASA Langley's UC-12 test aircraft. Test scenarios, with participating NASA aircraft serving as traffic, were presented to two evaluation pilots per flight - one using the proof-of-concept (POC) XVS and the other looking out the forward windows. The camera images were presented on the XVS display in the aft cabin with Head-Up Display (HUD)-like flight symbology overlaying the real-time imagery. The test generated XVS performance data, including comparisons to natural vision; post-run subjective acceptability data were also collected. This paper discusses the flight test activities and their operational challenges, and summarizes the findings to date.

  20. Integration of a Legacy System with Night Vision Training System (NVTS)

    NASA Astrophysics Data System (ADS)

    Anderson, Gretchen M.; Vrana, Craig A.; Riegler, Joseph T.; Martin, Elizabeth L.

    2002-08-01

    The increase in tactical night operations resulted in the requirement for improved night vision goggle (NVG) training and simulation. The Night Vision Training System (NVTS), developed at the Air Force Research Laboratory's Warfighter Training Research Division (AFRL/HEA), provides the high-fidelity NVG imagery required to support effective NVG training and mission rehearsal. Acquisition of a multichannel NVTS, to drive both an out-the-window (OTW) view and a helmet-mounted display (HMD), may exceed the resources of some training units. An alternative could be to add one channel of NVG imagery to the existing OTW imagery provided by the legacy system. This evaluation addressed engineering and training issues associated with integrating a single NVTS HMD channel with an existing legacy system. Pilots rated the degree of disparity between the HMD and OTW scenes for various scene attributes and the effect on flight performance. Findings demonstrated the potential for integration of an NVTS channel with an existing legacy system. Latency and terrain elevation differences between the two databases were measured and did not significantly impact system integration or pilot ratings. When integrating other legacy systems with NVTS, significant disparities may exist between the two databases. Pilot ratings and comments indicate that (a) display brightness and contrast levels of the OTW scene should be set to correspond to real-world, unaided luminance values for a given illumination condition; (b) disparity in moon phase and position between the two sky models should be minimized; and (c) star quantity and brightness in the OTW scene and the NVG scene, as rendered on the HMD, should be as consistent with real-world conditions as possible.

  1. Cyborg systems as platforms for computer-vision algorithm-development for astrobiology

    NASA Astrophysics Data System (ADS)

    McGuire, Patrick Charles; Rodríguez Manfredi, José Antonio; Martínez, Eduardo Sebastián; Gómez Elvira, Javier; Díaz Martínez, Enrique; Ormö, Jens; Neuffer, Kai; Giaquinta, Antonino; Camps Martínez, Fernando; Lepinette Malvitte, Alain; Pérez Mercader, Juan; Ritter, Helge; Oesker, Markus; Ontrup, Jörg; Walter, Jörg

    2004-03-01

    Employing the allegorical imagery from the film "The Matrix", we motivate and discuss our "Cyborg Astrobiologist" research program. In this research program, we are using a wearable computer and video camcorder in order to test and train a computer-vision system to be a field-geologist and field-astrobiologist.

  2. Novel compact panomorph lens based vision system for monitoring around a vehicle

    NASA Astrophysics Data System (ADS)

    Thibault, Simon

    2008-04-01

    Automotive applications are one of the largest vision-sensor market segments and one of the fastest growing. The trend toward using ever more sensors in cars is driven both by legislation and by consumer demand for higher safety and a better driving experience. Awareness of what directly surrounds a vehicle affects safe driving and manoeuvring. Consequently, panoramic 360° field-of-view imaging can contribute more to the perception of the world around the driver than any other sensor. However, to obtain complete vision around the car, several sensor systems are ordinarily necessary. To solve this issue, a customized imaging system based on a panomorph lens can provide maximum information to the driver with a reduced number of sensors. A panomorph lens is a hemispheric wide-angle anamorphic lens with enhanced resolution in a predefined zone of interest. Because panomorph lenses are optimized to a custom angle-to-pixel relationship, vision systems provide ideal image coverage that reduces and optimizes the processing. We present various scenarios which may benefit from the use of a custom panoramic sensor and discuss the technical requirements of such a vision system. Finally, we demonstrate how the panomorph-based visual sensor is probably one of the most promising ways to fuse many sensors into one. For example, a single panoramic sensor on the front of a vehicle could provide all the information necessary for assistance in crash avoidance, lane tracking, early warning, park aids, road sign detection, and various video monitoring views.
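
    The custom angle-to-pixel relationship can be illustrated with a toy mapping in which a zone of interest receives a boosted share of the image radius; the profile below is an assumed example, not an actual panomorph prescription.

        # Toy angle-to-radius profile with enhanced resolution inside a zone.
        import numpy as np

        def angle_to_radius(theta_deg, r_max=1.0, boost=2.0, zone_deg=30.0):
            """Map field angle (0-90 deg) to normalized image radius."""
            grid = np.linspace(0.0, 90.0, 901)
            weight = np.where(grid <= zone_deg, boost, 1.0)  # extra pixels in zone
            cum = np.cumsum(weight)
            cum /= cum[-1]                                   # monotonic, 0..1
            return r_max * np.interp(theta_deg, grid, cum)

        print(angle_to_radius(np.array([10, 30, 60, 90])))   # denser below 30 deg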

  3. Enhanced Flight Vision Systems Operational Feasibility Study Using Radar and Infrared Sensors

    NASA Technical Reports Server (NTRS)

    Etherington, Timothy J.; Kramer, Lynda J.; Severance, Kurt; Bailey, Randall E.; Williams, Steven P.; Harrison, Stephanie J.

    2015-01-01

    Approach and landing operations during periods of reduced visibility have plagued aircraft pilots since the beginning of aviation. Although techniques are currently available to mitigate some of the visibility conditions, these operations are still ultimately limited by the pilot's ability to "see" required visual landing references (e.g., markings and/or lights of threshold and touchdown zone) and require significant and costly ground infrastructure. Certified Enhanced Flight Vision Systems (EFVS) have shown promise to lift the obscuration veil. They allow the pilot to operate with enhanced vision, in lieu of natural vision, in the visual segment to enable equivalent visual operations (EVO). An aviation standards document was developed with industry and government consensus for using an EFVS for approach, landing, and rollout to a safe taxi speed in visibilities as low as 300 feet runway visual range (RVR). These new standards establish performance, integrity, availability, and safety requirements to operate in this regime without reliance on a pilot's or flight crew's natural vision by use of a fail-operational EFVS. A pilot-in-the-loop high-fidelity motion simulation study was conducted at NASA Langley Research Center to evaluate the operational feasibility, pilot workload, and pilot acceptability of conducting straight-in instrument approaches with published vertical guidance to landing, touchdown, and rollout to a safe taxi speed in visibility as low as 300 feet RVR by use of vision system technologies on a head-up display (HUD) without need or reliance on natural vision. Twelve crews flew various landing and departure scenarios in 1800, 1000, 700, and 300 RVR. This paper details the non-normal results of the study including objective and subjective measures of performance and acceptability. The study validated the operational feasibility of approach and departure operations and success was independent of visibility conditions. Failures were handled within the

  4. Development of a body motion interactive system with a weight voting mechanism and computer vision technology

    NASA Astrophysics Data System (ADS)

    Lin, Chern-Sheng; Chen, Chia-Tse; Shei, Hung-Jung; Lay, Yun-Long; Chiu, Chuang-Chien

    2012-09-01

    This study develops a body motion interactive system with computer vision technology, an application combining interactive games, art performance, and an exercise training system. Multiple image processing and computer vision technologies are used in this study. The system can calculate the color characteristics of an object and then perform color segmentation. When an action judgment is at risk of being wrong, the system avoids the error with a weight voting mechanism, which sets a condition score and weight value for each candidate action judgment and chooses the best judgment by weighted vote. Finally, this study estimated the reliability of the system in order to make improvements. The results showed that this method gives good accuracy and stability during operation of the human-machine interface of the sports training system.
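
    A minimal sketch of such a weight voting mechanism is shown below; the cue names, condition scores, and weights are invented for illustration.

        # Weighted voting across candidate action judgments.
        from collections import defaultdict

        def weighted_vote(judgments):
            """judgments: iterable of (action, condition_score, weight) tuples."""
            totals = defaultdict(float)
            for action, score, weight in judgments:
                totals[action] += score * weight
            return max(totals, key=totals.get)

        cues = [("raise_arm", 0.8, 1.0),   # e.g. color-segmentation cue
                ("raise_arm", 0.6, 0.5),   # e.g. motion cue
                ("jump", 0.9, 0.4)]        # e.g. shape cue
        print(weighted_vote(cues))         # -> raise_arm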

  5. Forward-looking activities: incorporating citizens' visions: A critical analysis of the CIVISTI method.

    PubMed

    Gudowsky, Niklas; Peissl, Walter; Sotoudeh, Mahshid; Bechtold, Ulrike

    2012-11-01

    Looking back on the many prophets who tried to predict the future as if it were predetermined, at first sight any forward-looking activity is reminiscent of making predictions with a crystal ball. In contrast to fortune tellers, today's exercises do not predict, but try to show different paths that an open future could take. A key motivation to undertake forward-looking activities is broadening the information basis for decision-makers to help them actively shape the future in a desired way. Experts, laypeople, or stakeholders may have different sets of values and priorities with regard to pending decisions on any issue related to the future. Therefore, considering and incorporating their views can, in the best case scenario, lead to more robust decisions and strategies. However, transferring this plurality into a form that decision-makers can consider is a challenge in terms of both design and facilitation of participatory processes. In this paper, we will introduce and critically assess a new qualitative method for forward-looking activities, namely CIVISTI (Citizen Visions on Science, Technology and Innovation; www.civisti.org), which was developed during an EU project of the same name. Focussing strongly on participation, with clear roles for citizens and experts, the method combines expert, stakeholder and lay knowledge to elaborate recommendations for decision-making in issues related to today's and tomorrow's science, technology and innovation. Consisting of three steps, the process starts with citizens' visions of a future 30-40 years from now. Experts then translate these visions into practical recommendations which the same citizens then validate and prioritise to produce a final product. The following paper will highlight the added value as well as limits of the CIVISTI method and will illustrate potential for the improvement of future processes.

  6. Using Vision System Technologies to Enable Operational Improvements for Low Visibility Approach and Landing Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Ellis, Kyle K. E.; Bailey, Randall E.; Williams, Steven P.; Severance, Kurt; Le Vie, Lisa R.; Comstock, James R.

    2014-01-01

    Flight deck-based vision systems, such as Synthetic and Enhanced Vision System (SEVS) technologies, have the potential to provide additional margins of safety for aircrew performance and enable the implementation of operational improvements for low visibility surface, arrival, and departure operations in the terminal environment with equivalent efficiency to visual operations. To achieve this potential, research is required for effective technology development and implementation based upon human factors design and regulatory guidance. This research supports the introduction and use of Synthetic Vision Systems and Enhanced Flight Vision Systems (SVS/EFVS) as advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. Twelve air transport-rated crews participated in a motion-base simulation experiment to evaluate the use of SVS/EFVS in NextGen low visibility approach and landing operations. Three monochromatic, collimated head-up display (HUD) concepts (conventional HUD, SVS HUD, and EFVS HUD) and two color head-down primary flight display (PFD) concepts (conventional PFD, SVS PFD) were evaluated in a simulated NextGen Chicago O'Hare terminal environment. Additionally, the instrument approach type (no offset, 3-degree offset, 15-degree offset) was experimentally varied to test the efficacy of the HUD concepts for offset approach operations. The data showed that touchdown landing performance was excellent regardless of SEVS concept or type of offset instrument approach being flown. Subjective assessments of mental workload and situation awareness indicated that making offset approaches in low visibility conditions with an EFVS HUD or SVS HUD may be feasible.

  7. Assessing impact of dual sensor enhanced flight vision systems on departure performance

    NASA Astrophysics Data System (ADS)

    Kramer, Lynda J.; Etherington, Timothy J.; Severance, Kurt; Bailey, Randall E.

    2016-05-01

    Synthetic Vision (SV) and Enhanced Flight Vision Systems (EFVS) may serve as game-changing technologies to meet the challenges of the Next Generation Air Transportation System and the envisioned Equivalent Visual Operations (EVO) concept - that is, the ability to achieve the safety and operational tempos of current-day Visual Flight Rules operations irrespective of the weather and visibility conditions. One significant obstacle lies in the definition of required equipage on the aircraft and on the airport to enable the EVO concept objective. A motion-base simulator experiment was conducted to evaluate the operational feasibility and pilot workload of conducting departures and approaches on runways without centerline lighting in visibility as low as 300 feet runway visual range (RVR) by use of onboard vision system technologies on a Head-Up Display (HUD) without need or reliance on natural vision. Twelve crews evaluated two methods of combining dual sensor (millimeter wave radar and forward looking infrared) EFVS imagery on pilot-flying and pilot-monitoring HUDs. In addition, the impact of adding SV to the dual sensor EFVS imagery on crew flight performance and workload was assessed. Using EFVS concepts during 300 RVR terminal operations on runways without centerline lighting appears feasible as all EFVS concepts had equivalent (or better) departure performance and landing rollout performance, without any workload penalty, than those flown with a conventional HUD to runways having centerline lighting. Adding SV imagery to EFVS concepts provided situation awareness improvements but no discernible improvements in flight path maintenance.

  8. Assessing Impact of Dual Sensor Enhanced Flight Vision Systems on Departure Performance

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Etherington, Timothy J.; Severance, Kurt; Bailey, Randall E.

    2016-01-01

    Synthetic Vision (SV) and Enhanced Flight Vision Systems (EFVS) may serve as game-changing technologies to meet the challenges of the Next Generation Air Transportation System and the envisioned Equivalent Visual Operations (EVO) concept - that is, the ability to achieve the safety and operational tempos of current-day Visual Flight Rules operations irrespective of the weather and visibility conditions. One significant obstacle lies in the definition of required equipage on the aircraft and on the airport to enable the EVO concept objective. A motion-base simulator experiment was conducted to evaluate the operational feasibility and pilot workload of conducting departures and approaches on runways without centerline lighting in visibility as low as 300 feet runway visual range (RVR) by use of onboard vision system technologies on a Head-Up Display (HUD) without need or reliance on natural vision. Twelve crews evaluated two methods of combining dual sensor (millimeter wave radar and forward looking infrared) EFVS imagery on pilot-flying and pilot-monitoring HUDs. In addition, the impact of adding SV to the dual sensor EFVS imagery on crew flight performance and workload was assessed. Using EFVS concepts during 300 RVR terminal operations on runways without centerline lighting appears feasible as all EFVS concepts had equivalent (or better) departure performance and landing rollout performance, without any workload penalty, than those flown with a conventional HUD to runways having centerline lighting. Adding SV imagery to EFVS concepts provided situation awareness improvements but no discernible improvements in flight path maintenance.

  9. Knowledge-based program to assist in the design of machine vision systems

    NASA Astrophysics Data System (ADS)

    Batchelor, Bruce G.

    1998-10-01

    There exists a serious bottleneck in the process of designing Machine Vision Systems. This is so severe that the long-claimed flexibility of this technology will never be realized unless there is a significant increase in the capacity of present-day vision system design teams. One possible way to improve matters is to provide appropriate design tools that will amplify the efforts of engineers who lack the necessary educational background. This article describes a major extension to an existing program, called the Lighting Advisor, which is able to search a pictorial database, looking for key-words chosen by the user. The revised program bases its advice on a description of the object to be inspected and the working environment. The objective of this research is to reduce the skill level needed to operate the program, so that an industrial engineer, with little or no special training in Machine Vision, can receive appropriate and relevant advice relating to a range of tasks in the design of industrial vision systems.

  10. Angle extended linear MEMS scanning system for 3D laser vision sensor

    NASA Astrophysics Data System (ADS)

    Pang, Yajun; Zhang, Yinxin; Yang, Huaidong; Zhu, Pan; Gai, Ye; Zhao, Jian; Huang, Zhanhua

    2016-09-01

    The scanning system is often considered the most important part of a 3D laser vision sensor. In this paper, we propose a method for the optical design of an angle-extended linear MEMS scanning system, which features a large scanning angle, small beam divergence angle, and small spot size for a 3D laser vision sensor. The design principle and theoretical formulas are derived rigorously. With the help of the software ZEMAX, a linear scanning optical system based on MEMS has been designed. Results show that the designed system can extend the scanning angle from ±8° to ±26.5°, with a divergence angle smaller than 3.5 mrad, while the spot size is reduced by a factor of 4.545.

  11. The Use of a Tactile-Vision Sensory Substitution System as an Augmentative Tool for Individuals with Visual Impairments

    ERIC Educational Resources Information Center

    Williams, Michael D.; Ray, Christopher T.; Griffith, Jennifer; De l'Aune, William

    2011-01-01

    The promise of novel technological strategies and solutions to assist persons with visual impairments (that is, those who are blind or have low vision) is frequently discussed and held to be widely beneficial in countless applications and daily activities. One such approach involving a tactile-vision sensory substitution modality as a mechanism to…

  12. Development of a model of machine hand eye coordination and program specifications for a topological machine vision system

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions, and an aerospace multiprocessor implementation is described.

  13. Assessing Dual Sensor Enhanced Flight Vision Systems to Enable Equivalent Visual Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Etherington, Timothy J.; Severance, Kurt; Bailey, Randall E.; Williams, Steven P.; Harrison, Stephanie J.

    2016-01-01

    Flight deck-based vision system technologies, such as Synthetic Vision (SV) and Enhanced Flight Vision Systems (EFVS), may serve as a revolutionary crew/vehicle interface enabling technologies to meet the challenges of the Next Generation Air Transportation System Equivalent Visual Operations (EVO) concept - that is, the ability to achieve the safety of current-day Visual Flight Rules (VFR) operations and maintain the operational tempos of VFR irrespective of the weather and visibility conditions. One significant challenge lies in the definition of required equipage on the aircraft and on the airport to enable the EVO concept objective. A motion-base simulator experiment was conducted to evaluate the operational feasibility, pilot workload and pilot acceptability of conducting straight-in instrument approaches with published vertical guidance to landing, touchdown, and rollout to a safe taxi speed in visibility as low as 300 ft runway visual range by use of onboard vision system technologies on a Head-Up Display (HUD) without need or reliance on natural vision. Twelve crews evaluated two methods of combining dual sensor (millimeter wave radar and forward looking infrared) EFVS imagery on pilot-flying and pilot-monitoring HUDs as they made approaches to runways with and without touchdown zone and centerline lights. In addition, the impact of adding SV to the dual sensor EFVS imagery on crew flight performance, workload, and situation awareness during extremely low visibility approach and landing operations was assessed. Results indicate that all EFVS concepts flown resulted in excellent approach path tracking and touchdown performance without any workload penalty. Adding SV imagery to EFVS concepts provided situation awareness improvements but no discernible improvements in flight path maintenance.

  14. A review of RGB-LED based mixed-color illumination system for machine vision and microscopy

    NASA Astrophysics Data System (ADS)

    Hou, Lexin; Wang, Hexin; Xu, Min

    2016-09-01

    The theory and application of RGB-LED-based mixed-color illumination systems for use in machine vision and optical microscopy are presented. For machine vision systems, the relationship between various color sources and output image sharpness is discussed. From the viewpoint of gray-scale images, methods for evaluating and optimizing illumination for machine vision are summarized. The image quality under monochromatic and mixed-color illumination is compared. For optical microscopy systems, the requirements on the light source are introduced and design considerations for RGB-LED-based mixed-color illumination systems are summarized. The problems that remain to be solved in this field are pointed out.

  15. Data Fusion for a Vision-Radiological System: a Statistical Calibration Algorithm

    SciTech Connect

    Enqvist, Andreas; Koppal, Sanjeev; Riley, Phillip

    2015-07-01

    Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of calibration algorithms for characterizing the fused sensor system as a single entity. When evaluating system calibration algorithms, there is an apparent need to correct for scene deviations from the basic inverse distance-squared law governing the detection rates. In particular, the computer vision system provides a map of the distance dependence of the sources being tracked, into which the time-dependent radiological data can be incorporated by means of data fusion of the two sensors' output data. (authors)
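
    In its simplest form the calibration reduces to fitting a rate model R(d) = A/d^2 + B, with B a background term, to vision-derived distances; the sketch below uses synthetic data and is an assumption-laden illustration rather than the authors' algorithm.

        # Least-squares fit of counts/s against inverse-square distance.
        import numpy as np

        d = np.array([1.0, 1.5, 2.0, 3.0, 4.0])              # distances (m)
        rates = np.array([412.0, 190.0, 110.0, 52.0, 33.0])  # counts/s (synthetic)

        X = np.column_stack([1.0 / d**2, np.ones_like(d)])   # R = A*(1/d^2) + B
        (A, B), *_ = np.linalg.lstsq(X, rates, rcond=None)
        print(f"A = {A:.1f} counts*m^2/s, background B = {B:.1f} counts/s")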

  16. Health systems analysis of eye care services in Zambia: evaluating progress towards VISION 2020 goals

    PubMed Central

    2014-01-01

    Background VISION 2020 is a global initiative launched in 1999 to eliminate avoidable blindness by 2020. The objective of this study was to undertake a situation analysis of the Zambian eye health system and assess VISION 2020 process indicators on human resources, equipment and infrastructure. Methods All eye health care providers were surveyed to determine location, financing sources, human resources and equipment. Key informants were interviewed regarding levels of service provision, management and leadership in the sector. Policy papers were reviewed. A health system dynamics framework was used to analyse findings. Results During 2011, 74 facilities provided eye care in Zambia; 39% were public, 37% private for-profit and 24% owned by Non-Governmental Organizations. Private facilities were solely located in major cities. A total of 191 people worked in eye care; 18 of these were ophthalmologists and eight cataract surgeons, equivalent to 0.34 and 0.15 per 250,000 population, respectively. VISION 2020 targets for inpatient beds and surgical theatres were met in six out of nine provinces, but human resources and spectacles manufacturing workshops were below target in every province. Inequalities in service provision between urban and rural areas were substantial. Conclusion Shortage and maldistribution of human resources, lack of routine monitoring and inadequate financing mechanisms are the root causes of underperformance in the Zambian eye health system, which hinder the ability to achieve the VISION 2020 goals. We recommend that all VISION 2020 process indicators are evaluated simultaneously as these are not individually useful for monitoring progress. PMID:24575919

  17. Error Characterization of Vision-Aided Navigation Systems

    DTIC Science & Technology

    2013-03-01

    Global Navigation Satellite Systems (GNSS), of which GPS is an example, suffer from availability restrictions when satellite signals are physically blocked.

  18. Street Viewer: An Autonomous Vision Based Traffic Tracking System.

    PubMed

    Bottino, Andrea; Garbo, Alessandro; Loiacono, Carmelo; Quer, Stefano

    2016-06-03

    The development of intelligent transportation systems requires the availability of both accurate traffic information in real time and a cost-effective solution. In this paper, we describe Street Viewer, a system capable of analyzing traffic behavior in different scenarios from images taken with an off-the-shelf optical camera. Street Viewer operates in real time on embedded hardware architectures with limited computational resources. The system features a pipelined architecture that, on one side, allows one to exploit multi-threading intensively and, on the other side, improves the overall accuracy and robustness of the system, since each layer refines the information it receives as input for the layers that follow. Another relevant feature of our approach is that it is self-adaptive. During an initial setup, the application runs in learning mode to build a model of the flow patterns in the observed area. Once the model is stable, the system switches to the on-line mode, where the flow model is used to count vehicles traveling on each lane and to produce a traffic information summary. If changes in the flow model are detected, the system switches back autonomously to the learning mode. The accuracy and the robustness of the system are analyzed in the paper through experimental results obtained on several different scenarios and by running the system for long periods of time.
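
    A hedged sketch of the self-adaptive mode switching follows: stay in learning mode until a running flow model stabilizes, count on-line, and fall back to learning when the observed flow drifts; the drift metric and thresholds are assumptions.

        # Mode switching driven by per-lane flow statistics.
        import numpy as np

        class FlowMonitor:
            def __init__(self, stable_eps=0.05, drift_eps=0.25, alpha=0.1):
                self.model = None          # running mean of per-lane flows
                self.mode = "learning"
                self.stable_eps, self.drift_eps, self.alpha = stable_eps, drift_eps, alpha

            def update(self, lane_flows):
                flows = np.asarray(lane_flows, dtype=float)
                if self.model is None:
                    self.model = flows.copy()
                    return self.mode
                change = (np.linalg.norm(flows - self.model)
                          / (np.linalg.norm(self.model) + 1e-9))
                self.model = (1 - self.alpha) * self.model + self.alpha * flows
                if self.mode == "learning" and change < self.stable_eps:
                    self.mode = "online"      # model stable: start counting
                elif self.mode == "online" and change > self.drift_eps:
                    self.mode = "learning"    # flow changed: re-learn
                return self.mode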

  19. Street Viewer: An Autonomous Vision Based Traffic Tracking System

    PubMed Central

    Bottino, Andrea; Garbo, Alessandro; Loiacono, Carmelo; Quer, Stefano

    2016-01-01

    The development of intelligent transportation systems requires the availability of both accurate traffic information in real time and a cost-effective solution. In this paper, we describe Street Viewer, a system capable of analyzing traffic behavior in different scenarios from images taken with an off-the-shelf optical camera. Street Viewer operates in real time on embedded hardware architectures with limited computational resources. The system features a pipelined architecture that, on one side, allows one to exploit multi-threading intensively and, on the other side, improves the overall accuracy and robustness of the system, since each layer refines the information it receives as input for the layers that follow. Another relevant feature of our approach is that it is self-adaptive. During an initial setup, the application runs in learning mode to build a model of the flow patterns in the observed area. Once the model is stable, the system switches to the on-line mode, where the flow model is used to count vehicles traveling on each lane and to produce a traffic information summary. If changes in the flow model are detected, the system switches back autonomously to the learning mode. The accuracy and the robustness of the system are analyzed in the paper through experimental results obtained on several different scenarios and by running the system for long periods of time. PMID:27271627

  20. Industrial applications of a vision system for undersea robots

    NASA Astrophysics Data System (ADS)

    Turner, John

    1993-12-01

    The offshore oil and gas industry in the North Sea has many requirements for 3D measurements in air and underwater. A market audit found that the use of film-based photogrammetry was being rejected for many applications because the information was not available fast enough. A development project was set up to replace the photographic cameras with a choice of video or high-resolution digital electronic cameras, and the analysis system with a personal-computer-based image processing system. This product has been in operation with Remotely Controlled Underwater Vehicles since September 1992. The paper deals with the ongoing development of the system, including the automation of the measurement process. It introduces the application of the system as a closed-loop control system for underwater manipulators.

  1. Utilization of the Space Vision System as an Augmented Reality System For Mission Operations

    NASA Technical Reports Server (NTRS)

    Maida, James C.; Bowen, Charles

    2003-01-01

    Augmented reality is a technique whereby computer generated images are superimposed on live images for visual enhancement. Augmented reality can also be characterized as dynamic overlays when computer generated images are registered with moving objects in a live image. This technique has been successfully implemented, with low to medium levels of registration precision, in an NRA-funded project entitled "Improving Human Task Performance with Luminance Images and Dynamic Overlays". Future research is already being planned to also utilize a laboratory-based system where more extensive subject testing can be performed. However successful this might be, the problem will still be whether such a technology can be used with flight hardware. To answer this question, the Canadian Space Vision System (SVS) will be tested as an augmented reality system capable of improving human performance where the operation requires indirect viewing. This system has already been certified for flight and is currently flown on each shuttle mission for station assembly. Successful development and utilization of this system in a ground-based experiment will expand its utilization for on-orbit mission operations. Current research and development regarding the use of augmented reality technology is being simulated using ground-based equipment. This is an appropriate approach for development of symbology (graphics and annotation) optimal for human performance and for development of optimal image registration techniques. It is anticipated that this technology will become more pervasive as it matures. Because we know what and where almost everything is on ISS, this reduces the registration problem and improves the computer model of that reality, making augmented reality an attractive tool, provided we know how to use it. This is the basis for current research in this area. However, there is a missing element to this process. It is the link from this research to the current ISS video system and to

  2. Defining filled and empty space: reassessing the filled space illusion for active touch and vision.

    PubMed

    Collier, Elizabeth S; Lawson, Rebecca

    2016-09-01

    In the filled space illusion, an extent filled with gratings is estimated as longer than an equivalent extent that is apparently empty. However, researchers do not seem to have carefully considered the terms filled and empty when describing this illusion. Specifically, for active touch, smooth, solid surfaces have typically been used to represent empty space. Thus, it is not known whether comparing gratings to truly empty space (air) during active exploration by touch elicits the same illusory effect. In Experiments 1 and 2, gratings were estimated as longer if they were compared to smooth, solid surfaces rather than being compared to truly empty space. Consistent with this, Experiment 3 showed that empty space was perceived as longer than solid surfaces when the two were compared directly. Together these results are consistent with the hypothesis that, for touch, the standard filled space illusion only occurs if gratings are compared to smooth, solid surfaces and that it may reverse if gratings are compared to empty space. Finally, Experiment 4 showed that gratings were estimated as longer than both solid and empty extents in vision, so the direction of the filled space illusion in vision was not affected by the nature of the comparator.

  3. An Active Vision Approach to Understanding and Improving Visual Training in the Geosciences

    NASA Astrophysics Data System (ADS)

    Voronov, J.; Tarduno, J. A.; Jacobs, R. A.; Pelz, J. B.; Rosen, M. R.

    2009-12-01

    Experience in the field is a fundamental aspect of geologic training, and its effectiveness is largely unchallenged because of anecdotal evidence of its success among expert geologists. However, there have been only a few quantitative studies, based on large data collection efforts, of how Earth scientists learn in the field. In a recent collaboration between Earth scientists, cognitive scientists, and imaging science experts at the University of Rochester and the Rochester Institute of Technology, we are conducting such a study. Within cognitive science, one school of thought, referred to as the Active Vision approach, emphasizes that visual perception is an active process requiring us to move our eyes to acquire new information about our environment. The Active Vision approach indicates the perceptual skills which experts possess and which novices will need to acquire to achieve expert performance. We describe data collection efforts using portable eye-trackers to assess how novice and expert geologists acquire visual knowledge in the field. We also discuss our efforts to collect images for use in a semi-immersive classroom environment, useful for further testing of novices and experts using eye-tracking technologies.

  4. Exploration Medical Capability System Engineering Introduction and Vision

    NASA Technical Reports Server (NTRS)

    Mindock, J.; Reilly, J.

    2017-01-01

    Human exploration missions to beyond low Earth orbit destinations such as Mars will require more autonomous capability compared to current low Earth orbit operations. For the medical system, lack of consumable resupply, evacuation opportunities, and real-time ground support are key drivers toward greater autonomy. Recognition of the limited mission and vehicle resources available to carry out exploration missions motivates the Exploration Medical Capability (ExMC) Element's approach to enabling the necessary autonomy. The Element's work must integrate with the overall exploration mission and vehicle design efforts to successfully provide exploration medical capabilities. ExMC is applying systems engineering principles and practices to accomplish its integrative goals. This talk will briefly introduce the discipline of systems engineering and key points in its application to exploration medical capability development. It will elucidate technical medical system needs to be met by the systems engineering work, and the structured and integrative science and engineering approach to satisfying those needs, including the development of shared mental and qualitative models within and external to the human health and performance community. These efforts are underway to ensure relevancy to exploration system maturation and to establish medical system development that is collaborative with vehicle and mission design and engineering efforts.

  5. A Vision-Based Emergency Response System with a Paramedic Mobile Robot

    NASA Astrophysics Data System (ADS)

    Jeong, Il-Woong; Choi, Jin; Cho, Kyusung; Seo, Yong-Ho; Yang, Hyun Seung

    Detecting emergency situations is very important for a surveillance system for people such as the elderly who live alone. A vision-based emergency response system with a paramedic mobile robot is presented in this paper. The proposed system consists of a vision-based emergency detection system and a mobile robot acting as a paramedic. The vision-based emergency detection system detects emergencies by tracking people and detecting their actions from image sequences acquired by a single surveillance camera. In order to recognize human actions, interest regions are segmented from the background using a blob extraction method and tracked continuously using a generic model. Then an MHI (Motion History Image) for a tracked person is constructed from the silhouette information of the region blobs to model actions. An emergency situation is finally detected by applying this information to a neural network. When an emergency is detected, the mobile robot can help to diagnose the status of the person in the situation. To send the mobile robot to the proper position, we implement a mobile robot navigation algorithm based on the distance between the person and the mobile robot. We validate our system by reporting the emergency detection rate and demonstrating an emergency response using the mobile robot.
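
    A motion history image can be sketched as a timestamp buffer that the current silhouette stamps and that decays over a fixed window; the fragment below uses simplified frame-differencing silhouettes and assumed parameters to illustrate the representation fed to the classifier.

        # Motion history image (MHI) update from frame-difference silhouettes.
        import cv2
        import numpy as np

        TAU = 0.5        # seconds of motion history to retain (assumed)

        def update_mhi(mhi, prev_gray, gray, timestamp, diff_thresh=30):
            """Stamp current motion with the timestamp; forget motion older than TAU."""
            silhouette = cv2.absdiff(gray, prev_gray) >= diff_thresh
            mhi[silhouette] = timestamp
            mhi[mhi < timestamp - TAU] = 0.0
            return mhi

        cap = cv2.VideoCapture(0)            # assumed single surveillance camera
        ok, frame = cap.read()
        if ok:
            prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            mhi = np.zeros(prev.shape, np.float32)
            for i in range(1, 300):
                ok, frame = cap.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                mhi = update_mhi(mhi, prev, gray, timestamp=i / 30.0)
                prev = gray
        cap.release()
        # Features of the MHI (and the tracked blobs) would feed the neural network.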

  6. Object Tracking Vision System for Mapping the UCN τ Apparatus Volume

    NASA Astrophysics Data System (ADS)

    Lumb, Rowan; UCNtau Collaboration

    2016-09-01

    The UCN τ collaboration has an immediate goal to measure the lifetime of the free neutron to within 0.1%, i.e. about 1 s. The UCN τ apparatus is a magneto-gravitational "bottle" system. This system holds low-energy, or ultracold, neutrons in the apparatus under the constraint of gravity and keeps these low-energy neutrons from interacting with the bottle via a strong 1 T surface magnetic field created by a bowl-shaped array of permanent magnets. The apparatus is wrapped with energized coils to supply a magnetic field throughout the "bottle" volume to prevent depolarization of the neutrons. An object-tracking stereo-vision system is presented that precisely tracks a Hall probe and allows a mapping of the magnetic field throughout the volume of the UCN τ bottle. The stereo-vision system utilizes two cameras and open-source OpenCV software to track an object's 3-D position in space in real time. The desired resolution is ±1 mm along each axis. The vision system is being used as part of an even larger system to map the magnetic field of the UCN τ apparatus and expose any possible systematic effects due to field cancellation or low-field points, which could allow neutrons to depolarize and possibly escape from the apparatus undetected.
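
    At its core, the tracker's 3-D measurement is stereo triangulation; the sketch below shows the OpenCV call with placeholder projection matrices and pixel coordinates standing in for the calibrated cameras and the detected Hall probe.

        # Two-view triangulation of a tracked point with OpenCV.
        import cv2
        import numpy as np

        # Assumed 3x4 projection matrices from a prior stereo calibration.
        P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])

        pt1 = np.array([[320.0], [240.0]])   # probe centroid in camera 1 (pixels)
        pt2 = np.array([[300.0], [240.0]])   # probe centroid in camera 2 (pixels)

        Xh = cv2.triangulatePoints(P1, P2, pt1, pt2)   # homogeneous 4x1 result
        print("probe position:", (Xh[:3] / Xh[3]).ravel())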

  7. A vision for an ultra-high resolution integrated water cycle observation and prediction system

    NASA Astrophysics Data System (ADS)

    Houser, P. R.

    2013-05-01

    Society's welfare, progress, and sustainable economic growth—and life itself—depend on the abundance and vigorous cycling and replenishing of water throughout the global environment. The water cycle operates on a continuum of time and space scales and exchanges large amounts of energy as water undergoes phase changes and is moved from one part of the Earth system to another. We must move toward an integrated observation and prediction paradigm that addresses broad local-to-global science and application issues by realizing synergies associated with multiple, coordinated observations and prediction systems. A central challenge of a future water and energy cycle observation strategy is to progress from single variable water-cycle instruments to multivariable integrated instruments in electromagnetic-band families. The microwave range in the electromagnetic spectrum is ideally suited for sensing the state and abundance of water because of water's dielectric properties. Eventually, a dedicated high-resolution water-cycle microwave-based satellite mission may be possible based on large-aperture antenna technology that can harvest the synergy that would be afforded by simultaneous multichannel active and passive microwave measurements. A partial demonstration of these ideas can even be realized with existing microwave satellite observations to support advanced multivariate retrieval methods that can exploit the totality of the microwave spectral information. The simultaneous multichannel active and passive microwave retrieval would allow improved-accuracy retrievals that are not possible with isolated measurements. Furthermore, the simultaneous monitoring of several of the land, atmospheric, oceanic, and cryospheric states brings synergies that will substantially enhance understanding of the global water and energy cycle as a system. The multichannel approach also affords advantages to some constituent retrievals—for instance, simultaneous retrieval of vegetation

  8. Automatic Welding System of Aluminum Pipe by Monitoring Backside Image of Molten Pool Using Vision Sensor

    NASA Astrophysics Data System (ADS)

    Baskoro, Ario Sunar; Kabutomori, Masashi; Suga, Yasuo

    An automatic welding system using Tungsten Inert Gas (TIG) welding with a vision sensor for welding aluminum pipe was constructed. This research studies the intelligent welding process of aluminum alloy pipe 6063S-T5 in a fixed position, with a moving welding torch and an AC welding machine. The monitoring system consists of a vision sensor using a charge-coupled device (CCD) camera to monitor the backside image of the molten pool. The captured image is processed to recognize the edge of the molten pool with an image processing algorithm. A neural network model for welding speed control was constructed to perform the process automatically. The experimental results show the effectiveness of the control system, confirmed by good detection of the molten pool and sound welds.

  9. Awareness and Detection of Traffic and Obstacles Using Synthetic and Enhanced Vision Systems

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.

    2012-01-01

    Research literature is reviewed and summarized to evaluate the awareness and detection of traffic and obstacles when using Synthetic Vision Systems (SVS) and Enhanced Vision Systems (EVS). The study identifies the critical issues influencing the time required, accuracy, and pilot workload associated with recognizing and reacting to potential collisions or conflicts with other aircraft, vehicles, and obstructions during approach, landing, and surface operations. This work considers the effect of head-down display and head-up display implementations of SVS and EVS as well as the influence of single and dual pilot operations. The influences and strategies of adding traffic information and cockpit alerting with SVS and EVS were also included. Based on this review, a knowledge gap assessment was made with recommendations for ground and flight testing to fill these gaps and hence promote the safe and effective implementation of SVS/EVS technologies for the Next Generation Air Transportation System.

  10. Measurement of crosstalk in stereoscopic display systems used for vision research

    PubMed Central

    Baker, Daniel H.; Kaestner, Milena; Gouws, André D.

    2016-01-01

    Studying binocular vision requires precise control over the stimuli presented to the left and right eyes. A popular technique is to segregate signals either temporally (frame interleaving), spectrally (using colored filters), or through light polarization. None of these segregation methods achieves perfect isolation, and so a degree of crosstalk is usually apparent, in which signals intended for one eye are faintly visible to the other eye. Previous studies have reported crosstalk values mostly for consumer-grade systems. Here we measure crosstalk for eight systems, many of which are intended for use in vision research. We provide benchmark crosstalk values, report a negative crosstalk effect in some LCD-based systems, and give guidelines for dealing with crosstalk in different experimental paradigms. PMID:27978549
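
    For reference, crosstalk is commonly quantified from luminance measurements of the intended signal, the leakage, and the display black level; the sketch below uses this standard definition with illustrative values, not the paper's exact protocol. A leakage reading below the black level produces a negative value, consistent with the negative crosstalk the authors report for some LCD-based systems.

        def crosstalk_percent(l_leak, l_black, l_signal):
            """Luminance leaking into the unintended eye, as a percentage of
            the intended signal, both corrected for the display black level
            (all values in cd/m^2)."""
            return 100.0 * (l_leak - l_black) / (l_signal - l_black)

        # Illustrative readings: black level 0.1, leakage 1.2, full signal 80 cd/m^2
        print(crosstalk_percent(1.2, 0.1, 80.0))   # ~1.38 %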

  11. Virtual vision system with actual flavor by olfactory display

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Kanazawa, Fumihiro

    2010-11-01

    The authors have researched multimedia and support systems for nursing studies on, and practices of, reminiscence therapy and life review therapy. The concept of the life review was presented by Butler in 1963: the process of thinking back on one's life and communicating about one's life to another person is called life review. There is a famous episode concerning memory, known as the Proustian effect. It is named after an episode in Proust's novel in which the storyteller is reminded of an old memory when he dips a madeleine in tea. Many scientists have investigated why smells trigger memories. The authors pay attention to the relation between smells and memory, although the reason is not yet evident. We have therefore added an olfactory display to the multimedia system so that smells become a trigger for recalling buried memories. An olfactory display is a device that delivers smells to the nose. It provides special effects, for example emitting a smell as if you were there, or giving a trigger for reminding us of memories. The authors have developed a tabletop display system connected with the olfactory display. For delivering a flavor to the user's nose, the system needs to recognize and measure the positions of the user's face and nose. In this paper, the authors describe an olfactory display which can detect the nose position for effective delivery.

  12. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.; Wu, Chris K.; Lin, Y. H.

    1991-01-01

    A system was developed for displaying computer graphics images of space objects, and its use was demonstrated as a testbed for evaluating vision systems for space applications. In order to evaluate vision systems, it is desirable to be able to control all factors involved in creating the images used for processing by the vision system. Considerable time and expense are involved in building accurate physical models of space objects. Also, precise location of the model relative to the viewer and accurate location of the light source require additional effort. As part of this project, graphics models of space objects such as the Solarmax satellite were created in which the user can control the light direction and the relative position of the object and the viewer. The work is also aimed at providing control of hue, shading, noise, and shadows for use in demonstrating and testing image processing techniques. The simulated camera data can provide XYZ coordinates, pitch, yaw, and roll for the models. A physical model is also being used to provide comparison of camera images with the graphics images.

  13. MARVEL: A System for Recognizing World Locations with Stereo Vision

    DTIC Science & Technology

    1990-05-01


  14. Finger mouse system based on computer vision in complex backgrounds

    NASA Astrophysics Data System (ADS)

    Xu, Jun; Zhang, Xiong

    2013-12-01

    This paper presents a human-computer interaction system that realizes a real-time virtual mouse. Our system emulates the dragging and selecting functions of a mouse by recognizing bare hands, hence the control style is simple and intuitive. A single camera is used to capture hand images, and a DSP chip is embedded as the image processing platform. To deal with complex backgrounds, particularly where skin-like or moving objects appear, we develop novel hand recognition algorithms. Hand segmentation is achieved by a skin color cue and background differencing. Each input image is corrected according to the luminance, and then skin color is extracted by a Gaussian model. We employ a CAMShift tracking algorithm which receives feedback from the recognition module. In fingertip recognition, a method combining template matching and circle drawing is proposed. Our system has the advantages of good real-time performance, easy integration, and energy conservation. Experiments show that the system is robust to the scaling and rotation of hands.
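
    A minimal OpenCV sketch of the CAMShift tracking stage described above, assuming a webcam and a hand-initialized search window; the window coordinates and skin-colour thresholds are placeholders, and the DSP port, background differencing, and fingertip recognition are not reproduced.

        import cv2
        import numpy as np

        cap = cv2.VideoCapture(0)
        ok, frame = cap.read()
        track_window = (200, 150, 80, 80)   # assumed initial hand region (x, y, w, h)

        # Hue histogram of the initial hand region serves as the tracking model.
        roi = frame[150:230, 200:280]
        hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv_roi, (0, 60, 32), (180, 255, 255))  # rough skin gate
        hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
        cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

        term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
            # CAMShift adapts the window size and orientation to the skin blob.
            rot_rect, track_window = cv2.CamShift(backproj, track_window, term)
            cv2.polylines(frame, [np.int32(cv2.boxPoints(rot_rect))], True, (0, 255, 0), 2)
            cv2.imshow('hand', frame)
            if cv2.waitKey(30) == 27:   # Esc to quit
                break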

  15. CATEGORIZATION OF EXTRANEOUS MATTER IN COTTON USING MACHINE VISION SYSTEMS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Cotton Trash Identification System (CTIS) was developed at the Southwestern Cotton Ginning Research Laboratory to identify and categorize extraneous matter in cotton. The CTIS bark/grass categorization was evaluated with USDA-Agricultural Marketing Service (AMS) extraneous matter calls assigned ...

  16. Categorization of extraneous matter in cotton using machine vision systems

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Cotton Trash Identification System (CTIS) developed at the Southwestern Cotton Ginning Research Laboratory was evaluated for identification and categorization of extraneous matter in cotton. The CTIS bark/grass categorization was evaluated with USDA-Agricultural Marketing Service (AMS) extraneou...

  17. Altered Vision-Related Resting-State Activity in Pituitary Adenoma Patients with Visual Damage

    PubMed Central

    Qian, Haiyan; Wang, Xingchao; Wang, Zhongyan; Wang, Zhenmin; Liu, Pinan

    2016-01-01

    Objective To investigate changes of vision-related resting-state activity in pituitary adenoma (PA) patients with visual damage through comparison to healthy controls (HCs). Methods 25 PA patients with visual damage and 25 age- and sex-matched corrected-to-normal-vision HCs underwent a complete neuro-ophthalmologic evaluation, including automated perimetry, fundus examinations, and a magnetic resonance imaging (MRI) protocol, including structural and resting-state fMRI (RS-fMRI) sequences. The regional homogeneity (ReHo) of the vision-related cortex and the functional connectivity (FC) of 6 seeds within the visual cortex (the primary visual cortex (V1), the secondary visual cortex (V2), and the middle temporal visual cortex (MT+)) were evaluated. Two-sample t-tests were conducted to identify the differences between the two groups. Results Compared with the HCs, the PA group exhibited reduced ReHo in the bilateral V1, V2, V3, fusiform, MT+, BA37, thalamus, postcentral gyrus and left precentral gyrus and increased ReHo in the precuneus, prefrontal cortex, posterior cingulate cortex (PCC), anterior cingulate cortex (ACC), insula, supramarginal gyrus (SMG), and putamen. Compared with the HCs, V1, V2, and MT+ in the PAs exhibited decreased FC with the V1, V2, MT+, fusiform, BA37, and increased FC primarily in the bilateral temporal lobe (especially BA20,21,22), prefrontal cortex, PCC, insular, angular gyrus, ACC, pre-SMA, SMG, hippocampal formation, caudate and putamen. It is worth mentioning that compared with HCs, V1 in PAs exhibited decreased or similar FC with the thalamus, whereas V2 and MT+ exhibited increased FCs with the thalamus, especially pulvinar. Conclusions In our study, we identified significant neural reorganization in the vision-related cortex of PA patients with visual damage compared with HCs. Most subareas within the visual cortex exhibited remarkable neural dysfunction. Some subareas, including the MT+ and V2, exhibited enhanced FC with the thalamic

  18. A Vision of the Future Air Traffic Control System

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz

    2000-01-01

    The air transportation system is on the verge of gridlock, with delays and cancelled flights this summer reaching all-time highs. As demand for air transportation continues to increase, the capacity needed to accommodate the growth in traffic is falling farther and farther behind. Moreover, it has become increasingly apparent that the present system cannot be scaled up to provide the capacity increases needed to meet demand over the next 25 years. NASA, working with the Federal Aviation Administration and industry, is pursuing a major research program to develop air traffic management technologies that have the ultimate goal of doubling capacity while increasing safety and efficiency. This seminar will describe how the current system operates, what its limitations are, and why a revolutionary "shift in paradigm" is needed to overcome fundamental limitations in capacity and safety. For the near term, NASA has developed a portfolio of software tools for air traffic controllers, called the Center-TRACON Automation System (CTAS), that provides modest gains in capacity and efficiency while staying within the current paradigm. The outline of a concept for the long term, with a deployment date of 2015 at the earliest, has recently been formulated and presented by NASA to a select group of industry and government stakeholders. Automated decision-making software, combined with an Internet in the sky that enables sharing of information and distributes control between the cockpit and the ground, is key to this concept. However, its most revolutionary feature is a fundamental change in the roles and responsibilities assigned to air traffic controllers.

  19. An Evaluation of the VISION Execution System Demonstration Prototypes

    DTIC Science & Technology

    1991-01-01

    An Evaluation of the VISION Execution System Demonstration Prototypes. Patricia M. Boren, Karen E. Isaacson, Judith E. Payne, Marc L. Robbins, Robert S. Tripp. Prepared for the United States Army. RAND. Approved for public release. ... Jeffrey Crisci and Cecilia Butler, formerly of the Army Materiel Command (AMC) and currently with the Strategic Logistics Agency (SLA), were ...

  20. Learning to See: Research in Training a Robot Vision System

    DTIC Science & Technology

    2008-12-01

    ... on barren extraterrestrial terrain conditions, with the complexities of vegetation, man-made structures, and water. Earlier work by Karlsen and ... in trafficability. For inherently unpredictable segments it did not. An important part of a practical robot intelligence system is the ability to ...

  1. Visual Advantage of Enhanced Flight Vision System During NextGen Flight Test Evaluation

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Harrison, Stephanie J.; Bailey, Randall E.; Shelton, Kevin J.; Ellis, Kyle K.

    2014-01-01

    Synthetic Vision Systems and Enhanced Flight Vision System (SVS/EFVS) technologies have the potential to provide additional margins of safety for aircrew performance and enable operational improvements for low visibility operations in the terminal area environment. Simulation and flight tests were jointly sponsored by NASA's Aviation Safety Program, Vehicle Systems Safety Technology project and the Federal Aviation Administration (FAA) to evaluate potential safety and operational benefits of SVS/EFVS technologies in low visibility Next Generation Air Transportation System (NextGen) operations. The flight tests were conducted by a team of Honeywell, Gulfstream Aerospace Corporation and NASA personnel with the goal of obtaining pilot-in-the-loop test data for flight validation, verification, and demonstration of selected SVS/EFVS operational and system-level performance capabilities. Nine test flights were flown in Gulfstream's G450 flight test aircraft outfitted with the SVS/EFVS technologies under low visibility instrument meteorological conditions. Evaluation pilots flew 108 approaches in low visibility weather conditions (600 feet to 3600 feet reported visibility) under different obscurants (mist, fog, drizzle fog, frozen fog) and sky cover (broken, overcast). Flight test videos were evaluated at three different altitudes (decision altitude, 100 feet radar altitude, and touchdown) to determine the visual advantage afforded to the pilot using the EFVS/Forward-Looking InfraRed (FLIR) imagery compared to natural vision. Results indicate the EFVS provided a visual advantage of two to three times over that of the out-the-window (OTW) view. The EFVS allowed pilots to view the runway environment, specifically the runway lights, before they could see them OTW with natural vision.

  2. LAPLACE: A mission to Europa and the Jupiter System for ESA's Cosmic Vision Programme

    NASA Astrophysics Data System (ADS)

    Blanc, Michel; Alibert, Yann; André, Nicolas; Atreya, Sushil; Beebe, Reta; Benz, Willy; Bolton, Scott J.; Coradini, Angioletta; Coustenis, Athena; Dehant, Véronique; Dougherty, Michele; Drossart, Pierre; Fujimoto, Masaki; Grasset, Olivier; Gurvits, Leonid; Hartogh, Paul; Hussmann, Hauke; Kasaba, Yasumasa; Kivelson, Margaret; Khurana, Krishan; Krupp, Norbert; Louarn, Philippe; Lunine, Jonathan; McGrath, Melissa; Mimoun, David; Mousis, Olivier; Oberst, Juergen; Okada, Tatsuaki; Pappalardo, Robert; Prieto-Ballesteros, Olga; Prieur, Daniel; Regnier, Pascal; Roos-Serote, Maarten; Sasaki, Sho; Schubert, Gerald; Sotin, Christophe; Spilker, Tom; Takahashi, Yukihiro; Takashima, Takeshi; Tosi, Federico; Turrini, Diego; van Hoolst, Tim; Zelenyi, Lev

    2009-03-01

    The exploration of the Jovian System and its fascinating satellite Europa is one of the priorities presented in ESA’s “Cosmic Vision” strategic document. The Jovian System indeed displays many facets. It is a small planetary system in its own right, built up out of the mixture of gas and icy material that was present in the external region of the solar nebula. Through a complex history of accretion, internal differentiation and dynamic interaction, a unique satellite system formed, in which three of the four Galilean satellites are locked in the so-called Laplace resonance. The energy and angular momentum they exchange among themselves and with Jupiter contribute to various degrees to the internal heating sources of the satellites. Unique among these satellites, Europa is believed to shelter an ocean between its geodynamically active icy crust and its silicate mantle, one where the main conditions for habitability may be fulfilled. For this very reason, Europa is one of the best candidates for the search for life in our Solar System. So, is Europa really habitable, representing a “habitable zone” in the Jupiter system? To answer this specific question, we need a dedicated mission to Europa. But to understand in a more generic way the habitability conditions around giant planets, we need to go beyond Europa itself and address two more general questions at the scale of the Jupiter system: to what extent is its possible habitability related to the initial conditions and formation scenario of the Jovian satellites? To what extent is it due to the way the Jupiter system works? ESA’s Cosmic Vision programme offers an ideal and timely framework to address these three key questions. Building on the in-depth reconnaissance of the Jupiter System by Galileo (and the Voyager, Ulysses, Cassini and New Horizons flybys) and on the anticipated accomplishments of NASA’s JUNO mission, it is now time to design and fly a new mission which will focus on these

  3. Machine Vision-Based Measurement Systems for Fruit and Vegetable Quality Control in Postharvest.

    PubMed

    Blasco, José; Munera, Sandra; Aleixos, Nuria; Cubero, Sergio; Molto, Enrique

    2017-03-14

    Individual items of any agricultural commodity differ from each other in terms of colour, shape or size. Furthermore, as they are living things, they change their quality attributes over time, thereby making the development of accurate automatic inspection machines a challenging task. Machine vision-based systems and new optical technologies make it feasible to create non-destructive control and monitoring tools for quality assessment to ensure adequate accomplishment of food standards. Such systems are much faster than any manual non-destructive examination of fruit and vegetable quality, thus allowing the whole production to be inspected with objective and repeatable criteria. Moreover, current technology makes it possible to inspect the fruit in spectral ranges beyond the sensitivity of the human eye, for instance in the ultraviolet and near-infrared regions. Machine vision-based applications require the use of multiple technologies and knowledge, ranging from those related to image acquisition (illumination, cameras, etc.) to the development of algorithms for spectral image analysis. Machine vision-based systems for inspecting fruit and vegetables are targeted towards different purposes, from in-line sorting into commercial categories to the detection of contaminants or the distribution of specific chemical compounds on the product's surface. This chapter summarises the current state of the art in these techniques, starting with systems based on colour images for the inspection of conventional colour, shape or external defects, and then goes on to consider recent developments in spectral image analysis for internal quality assessment or contaminant detection.

  4. Estimation of Theaflavins (TF) and Thearubigins (TR) Ratio in Black Tea Liquor Using Electronic Vision System

    NASA Astrophysics Data System (ADS)

    Akuli, Amitava; Pal, Abhra; Ghosh, Arunangshu; Bhattacharyya, Nabarun; Bandhopadhyya, Rajib; Tamuly, Pradip; Gogoi, Nagen

    2011-09-01

    Quality of black tea is generally assessed through organoleptic tests by professional tea tasters. They determine the quality of black tea based on its appearance (in dry condition and during liquor formation), aroma and taste. Variation in the above parameters is actually contributed by a number of chemical compounds like Theaflavins (TF), Thearubigins (TR), caffeine, linalool, geraniol, etc. Among these, TF and TR are the most important chemical compounds, which actually contribute to the formation of taste, colour and brightness in tea liquor. Estimation of TF and TR in black tea is generally done using a spectrophotometer, but the analysis requires rigorous and time-consuming sample preparation, and the operation of a costly spectrophotometer requires expert manpower. To overcome these problems, an Electronic Vision System based on digital image processing techniques has been developed. The system is fast, low cost, repeatable, and can estimate the TF/TR ratio of black tea liquor with accuracy. The data analysis is done using Principal Component Analysis (PCA), Multiple Linear Regression (MLR) and Multiple Discriminant Analysis (MDA). A correlation has been established between the colour of tea liquor images and the TF/TR ratio. This paper describes the newly developed E-Vision system, the experimental methods, the data analysis algorithms and, finally, the performance of the E-Vision System as compared with the results of a traditional spectrophotometer.
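
    A toy sketch of the PCA-plus-MLR pipeline the abstract names, with synthetic placeholder data standing in for the colour features and spectrophotometer references (the feature set and component count are assumptions):

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        X = rng.random((60, 6))   # placeholder colour features per liquor image
        y = rng.random(60)        # placeholder TF/TR reference ratios

        X_pc = PCA(n_components=3).fit_transform(X)   # decorrelate colour features
        model = LinearRegression().fit(X_pc, y)       # multiple linear regression
        print('training R^2:', model.score(X_pc, y))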

  5. Accuracy improvement in a calibration test bench for accelerometers by a vision system

    NASA Astrophysics Data System (ADS)

    D'Emilia, Giulio; Di Gasbarro, David; Gaspari, Antonella; Natale, Emanuela

    2016-06-01

    A procedure is described in this paper for improving the calibration accuracy of low-cost accelerometers in a prototype rotary test bench, driven by a brushless servo-motor and operating in a low frequency range of vibrations (0 to 5 Hz). Vibration measurements by a vision system based on a low frequency camera have been carried out in order to reduce the uncertainty in evaluating the real acceleration at the installation point of the sensor to be calibrated. A preliminary test device has been realized and operated in order to evaluate the metrological performance of the vision system, showing satisfactory behavior once the measurement uncertainty is taken into account. A combination of suitable settings of the control parameters of the motion control system and of the information gained by the vision system made it possible to match the reference acceleration at the installation point to the needs of the procedure for static and dynamic calibration of three-axis accelerometers.

  6. Implementation of the Canny Edge Detection algorithm for a stereo vision system

    SciTech Connect

    Wang, J.R.; Davis, T.A.; Lee, G.K.

    1996-12-31

    There exist many applications in which three-dimensional information is necessary. For example, in manufacturing systems, parts inspection may require the extraction of three-dimensional information from two-dimensional images through the use of a stereo vision system. In medical applications, one may wish to reconstruct a three-dimensional image of a human organ from two or more transducer images. An important component of three-dimensional reconstruction is edge detection, whereby an image boundary is separated from the background for further processing. In this paper, a modification of the Canny Edge Detection approach is suggested to extract an image from a cluttered background. The resulting cleaned image can then be sent to the image matching, interpolation, and inverse perspective transformation blocks to reconstruct the 3-D scene. A brief discussion of the stereo vision system that has been developed at the Mars Mission Research Center (MMRC) is also presented. Results of a version of the Canny Edge Detection algorithm show promise as an accurate edge extractor which may be used in the edge-pixel-based binocular stereo vision system.
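
    For orientation, the unmodified Canny pipeline takes a few lines in OpenCV (filename and thresholds are placeholder values; the paper's clutter-suppressing modification is not reproduced here):

        import cv2

        img = cv2.imread('left_view.png', cv2.IMREAD_GRAYSCALE)   # hypothetical input
        blurred = cv2.GaussianBlur(img, (5, 5), 1.4)    # smooth before gradients
        # Hysteresis thresholds: weak edges survive only if linked to strong ones.
        edges = cv2.Canny(blurred, threshold1=50, threshold2=150)
        cv2.imwrite('edges.png', edges)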

  7. Commercial machine vision system for traffic monitoring and control

    NASA Astrophysics Data System (ADS)

    D Agostino, Salvatore A.

    1992-03-01

    Traffic imaging covers a range of current and potential applications. These include traffic control and analysis, license plate finding, reading and storage, violation detection and archiving, vehicle sensors, and toll collection/enforcement. Experience from commercial installations and knowledge of the system requirements have been gained over the past 10 years. Recent improvements in system component cost and performance now allow products to be applied that provide cost-effective solutions to the requirements for truly intelligent vehicle/highway systems (IVHS). The United States is a country that loves to drive. The infrastructure built in the 1950s and 1960s, along with the low price of gasoline, created an environment where the automobile became an accessible and integral part of American life. The United States has spent $103 billion to build 40,000 highway miles since 1956, the start of the interstate program, which is nearly complete. Unfortunately, a situation has arisen where the options for dramatically improving the ability of our roadways to absorb the increasing amount of traffic are limited. This is true in other countries as well as in the United States. The number of vehicles in the world increases by over 10,000,000 each year. In the United States there are about 180 million cars, trucks, and buses, and this is estimated to double in the next 30 years. Urban development, and development in general, pushes from the edge of our roadways out. This leaves little room to increase the physical amount of roadway. Americans now spend more than 1.6 billion hours a year waiting in traffic jams. It is estimated that this congestion wastes 3 billion gallons of oil, or 4% of the nation's annual gas consumption. The way out of the dilemma is to increase road use efficiency as well as improve mass transportation alternatives.

  8. Optical calculation of correlation filters for a robotic vision system

    NASA Technical Reports Server (NTRS)

    Knopp, Jerome

    1989-01-01

    A method is presented for designing optical correlation filters based on measuring three intensity patterns: the Fourier transform of a filter object, a reference wave, and the interference pattern produced by the sum of the object transform and the reference. The method can produce a filter that is well matched to the object, its transforming optical system, and the spatial light modulator used in the correlator input plane. A computer simulation is presented to demonstrate the approach for the special case of a conventional binary phase-only filter. The simulation produced a workable filter with a sharp correlation peak.

  9. Morphological features of the macerated cranial bones registered by the 3D vision system for potential use in forensic anthropology.

    PubMed

    Skrzat, Janusz; Sioma, Andrzej; Kozerska, Magdalena

    2013-01-01

    In this paper we present the potential usage of a 3D vision system for registering features of macerated cranial bones. The applied 3D vision system collects height profiles of the object surface and from these data builds a three-dimensional image of the surface. This method appeared to be accurate enough to capture anatomical details of the macerated bones. With the aid of the 3D vision system we generated images of the surface of a human calvaria, which was used for testing the system. The reconstruction visualized the imprints of the dural vascular system, the cranial sutures, and the three-layer structure of the cranial bones observed in cross-section. We conclude that the 3D vision system may deliver data which can enhance the estimation of sex from osteological material.

  10. Retinal stem cells and regeneration of vision system.

    PubMed

    Yip, Henry K

    2014-01-01

    The vertebrate retina is a well-characterized model for studying neurogenesis. Retinal neurons and glia are generated in a conserved order from a pool of multipotent progenitor cells. During retinal development, retinal stem/progenitor cells (RPCs) change their competency over time under the influence of intrinsic factors (such as transcription factors) and extrinsic factors (such as growth factors). In this review, we summarize the roles of these factors, together with the understanding of the signaling pathways that regulate eye development. The information about the interactions between intrinsic and extrinsic factors for retinal cell fate specification is useful for regenerating specific retinal neurons from RPCs. Recent studies have identified RPCs in the retina, which may have important implications in health and disease. Despite the recent advances in stem cell biology, our understanding of many aspects of RPCs in the eye remains limited. RPCs are present in the developing eye of all vertebrates and remain active in lower vertebrates throughout life. In mammals, however, RPCs are quiescent and exhibit very little activity and thus have a low capacity for retinal regeneration. A number of different cellular sources of RPCs have been identified in the vertebrate retina. These include RPCs at the retinal margin, pigmented cells in the ciliary body, iris, and retinal pigment epithelium, and Müller cells within the retina. Because RPCs can be isolated and expanded from immature and mature eyes, it is now possible to study these cells in culture and after transplantation into degenerated retinal tissue. We also examine current knowledge of intrinsic RPCs, and human embryonic stem and induced pluripotent stem cells as potential sources for cell transplant therapy to regenerate the diseased retina.

  11. Endoscopic machine vision system for blood-supply estimation of the nasal mucosa

    NASA Astrophysics Data System (ADS)

    Balas, Constantin J.; Christodoulou, P. N.; Prokopakis, E. P.; Helidonis, Emmanuel S.

    1996-12-01

    We have developed a machine vision system which combines imaging and absolute color measurement techniques for remote, objective, 2D color and color difference measurements. This imaging colorimeter, adapted to an endoscope, was used to evaluate nasal mucosa color changes induced by the administration of a sympathomimetic agent with vasoconstrictive properties. The reproducible and reliable measurements demonstrate the efficacy of the described system for assessing the vasoconstrictive potency of different pharmacotherapeutic agents, and suggest that it can also be useful for evaluating individuals with allergic rhinitis, vasomotor rhinitis, and inflammatory disorders of the paranasal sinuses. Machine vision techniques in endoscopy, by providing objective indices for optical tissue characterization and analysis, can serve in understanding the pathophysiology of tissue lesions and in the objective evaluation of their response to different therapeutic schemes in several medical fields.

  12. A novel registration method for image-guided neurosurgery system based on stereo vision.

    PubMed

    An, Yong; Wang, Manning; Song, Zhijian

    2015-01-01

    This study presents a novel spatial registration method for image-guided neurosurgery systems (IGNS) based on stereo vision. Images of the patient's head are captured by a video camera, which is calibrated and tracked by an optical tracking system. A set of sparse facial data points is then reconstructed from them by stereo vision in the patient space. A surface matching method is utilized to register the reconstructed sparse points to the facial surface reconstructed from preoperative images of the patient. Simulation experiments verified the feasibility of the proposed method. The proposed method is a new low-cost and easy-to-use spatial registration method for IGNS, with good prospects for clinical application.
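
    The rigid-fit step at the core of such surface matching has a closed-form SVD (Kabsch) solution; a minimal sketch, assuming point correspondences are already available (ICP-style matching alternates this fit with a nearest-neighbour correspondence search):

        import numpy as np

        def rigid_fit(src, dst):
            """Least-squares R, t with dst ~ R @ src + t for corresponding
            Nx3 point sets (the inner step of ICP-style surface matching)."""
            c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
            H = (src - c_src).T @ (dst - c_dst)     # 3x3 cross-covariance
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T                      # proper rotation, no reflection
            t = c_dst - R @ c_src
            return R, t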

  13. Outstanding Science in the Neptune System from an Aerocaptured NASA "Vision Mission"

    NASA Technical Reports Server (NTRS)

    Spilker, T. R.; Spilker, L. J.; Ingersoll, A. P.

    2005-01-01

    In 2003 NASA released its Vision Mission Studies NRA (NRA-03-OSS-01-VM) soliciting proposals to study any one of 17 Vision Missions described in the NRA. The authors, along with a team of scientists and engineers, successfully proposed a study of the Neptune Orbiter With Probes (NOP) option, a mission that performs Cassini-level science in the Neptune system without fission-based electric power or propulsion. The Study Team includes a Science Team composed of experienced planetary scientists, many of whom helped draft the Neptune discussions in the 2003 Solar System Exploration Decadal Survey (SSEDS), and an Implementation Team with experienced engineers and technologists from multiple NASA Centers and JPL.

  14. Image processing for a tactile/vision substitution system using digital CNN.

    PubMed

    Lin, Chien-Nan; Yu, Sung-Nien; Hu, Jin-Cheng

    2006-01-01

    In view of the parallel processing and easy implementation properties of CNNs, we propose to use a digital CNN as the image processor of a tactile/vision substitution system (TVSS). The digital CNN processor is used to execute the wavelet down-sampling filtering and the half-toning operations, aiming to extract important features from the images. A template combination method is used to embed the two image processing functions into a single CNN processor. The digital CNN processor is realized as an intellectual property (IP) core and implemented on a XILINX VIRTEX II 2000 FPGA board. Experiments are designed to test the capability of the CNN processor in the recognition of characters and human subjects in different environments. The experiments demonstrate impressive results, which prove the proposed digital CNN processor to be a powerful component in the design of efficient tactile/vision substitution systems for visually impaired people.

  15. Real-time and low-cost embedded platform for car's surrounding vision system

    NASA Astrophysics Data System (ADS)

    Saponara, Sergio; Franchi, Emilio

    2016-04-01

    The design and implementation of a flexible and low-cost embedded system for real-time car surround vision is presented. The target of the proposed multi-camera vision system is to provide the driver a better view of the objects that surround the vehicle. Fish-eye lenses are used to achieve a larger field of view (FOV) but, on the other hand, introduce radial distortion of the images projected on the sensors. With low-cost cameras there can also be alignment issues. Since these complications are noticeable and dangerous, a real-time algorithm for their correction is presented. Another real-time algorithm, used for merging the four camera video streams into a single view, is then described. Real-time image processing is achieved through a hardware-software platform
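
    A sketch of the radial-distortion correction stage using OpenCV's fisheye camera model (intrinsics, distortion coefficients, and filename are placeholder values; the authors' own correction and stitching algorithms are not reproduced):

        import cv2
        import numpy as np

        # Intrinsic matrix K and fisheye distortion coefficients D would come
        # from an offline calibration; these are placeholder values.
        K = np.array([[420.0, 0.0, 640.0],
                      [0.0, 420.0, 360.0],
                      [0.0, 0.0, 1.0]])
        D = np.array([-0.05, 0.01, 0.0, 0.0])

        img = cv2.imread('fisheye_cam0.png')        # hypothetical frame
        h, w = img.shape[:2]
        map1, map2 = cv2.fisheye.initUndistortRectifyMap(
            K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
        undistorted = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)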

  16. Computational approaches to vision

    NASA Technical Reports Server (NTRS)

    Barrow, H. G.; Tenenbaum, J. M.

    1986-01-01

    Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.

  17. Ranging Apparatus and Method Implementing Stereo Vision System

    NASA Technical Reports Server (NTRS)

    Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a left and right video camera mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video camera and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
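
    A simplified sketch of the two steps described: isolating the laser spot by differencing frames captured before and after laser illumination, and converting the horizontal disparity of the spot to range with the standard stereo relation Z = f·B/d (the focal length and baseline below are assumed values):

        import cv2

        def laser_spot(before, after, thresh=40):
            """Centroid of the pixels that brightened when the laser switched on."""
            diff = cv2.absdiff(after, before)           # grayscale uint8 frames
            _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
            m = cv2.moments(mask)
            return (m['m10'] / m['m00'], m['m01'] / m['m00'])

        def range_from_disparity(x_left, x_right, f_px=800.0, baseline_m=0.12):
            """Z = f * B / d, with focal length in pixels and baseline in metres."""
            return f_px * baseline_m / (x_left - x_right)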

  18. Research on imaging system of vision measurement for the shaft

    NASA Astrophysics Data System (ADS)

    Yang, Zhao; Wang, Xingdong; Liu, Yuanjiong; Liu, Zhao; Gao, Qing

    2015-12-01

    An imaging system is researched for shaft size measurement, to replace the on-line manual method used for measuring diametric and axial sizes of the shaft. Through research on the illumination characteristics, a backlight was designed to improve image quality. Because one CCD camera cannot easily cover a large shaft, two CCD cameras were used to image the two ends of the shaft, reducing the field of view to improve accuracy. At the same time, a drive device adjusts the relative position of the two cameras so that shafts of various specifications can be measured, improving compatibility. Because shaft parts have curved surfaces and the features to be extracted do not lie in the same plane, a telecentric lens with large depth of field was selected to ensure the accuracy of the image information. The image processing is based on HALCON. The measurement results show that the shaft size measurement system achieves high accuracy.

  19. Color night vision system for ground vehicle navigation

    NASA Astrophysics Data System (ADS)

    Ali, E. A.; Qadir, H.; Kozaitis, S. P.

    2014-06-01

    Operating in a degraded visual environment due to darkness can pose a threat to navigation safety. Systems have been developed to navigate in darkness that depend upon differences between objects such as temperature or reflectivity at various wavelengths. However, adding sensors for these systems increases the complexity by adding multiple components that may create problems with alignment and calibration. An approach is needed that is passive and simple for widespread acceptance. Our approach uses a type of augmented display to show fused images from visible and thermal sensors that are continuously updated. Because the raw fused image gave an unnatural color appearance, we used a color transfer process based on a look-up table to replace the false colors with a colormap derived from a daytime reference image obtained from a public database using the GPS coordinates of the vehicle. Although the database image was not perfectly registered, we were able to produce imagery acquired at night that appeared with daylight colors. Such an approach could improve the safety of nighttime navigation.

  20. Mosad and Stream Vision For A Telerobotic, Flying Camera System

    NASA Technical Reports Server (NTRS)

    Mandl, William

    2002-01-01

    Two full-custom camera systems using the Multiplexed OverSample Analog to Digital (MOSAD) conversion technology for visible light sensing were built and demonstrated. They include a photo gate sensor and a photo diode sensor. The system includes the camera assembly, a driver interface assembly, a frame grabber board with an integrated decimator, and Windows 2000 compatible software for real-time image display. An array size of 320x240 with 16 micron pixel pitch was developed for compatibility with 0.3 inch CCTV optics. With 1.2 micron technology, a 73% fill factor was achieved. Noise measurements indicated 9 to 11 bits in operation, with 13.7 bits in the best case. Power measured under 10 milliwatts at 400 samples per second. Nonuniformity variation was below the noise floor. Pictures were taken with the different cameras during the characterization study to demonstrate the operable range. The successful conclusion of this program demonstrates the utility of MOSAD for NASA missions, providing superior performance over CMOS and lower cost and power consumption than CCD. The MOSAD approach also provides a path to radiation hardening for space-based applications.

  1. The VirtualwindoW: A Reconfigurable, Modular, Stereo Vision System

    SciTech Connect

    M. D. McKay; M. O. Anderson; R. A. Kinoshita; W. D. Willis

    1999-02-01

    An important need while using unmanned vehicles is the ability for the remote operator or observer to easily and accurately perceive the operating environment. A classic problem in providing a complete representation of the remote work area is sensory overload or excessive complexity in the human-machine interface. In addition, remote operations often benefit from depth perception capability while viewing or manipulating objects. Thus, there is an ongoing effort within the remote and teleoperated robotic field to develop better human-machine interfaces. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has been researching methods to simplify the human-machine interface using atypical operator techniques. Initial telepresence research conducted at the INEEL developed and implemented a concept called the VirtualwindoW. This system minimized the complexity of remote stereo viewing controls and provided the operator the ''feel'' of viewing the environment, including depth perception, in a natural setting. The VirtualwindoW has shown that the human-machine interface can be simplified while increasing operator performance. This paper deals with the continuing research and development of the VirtualwindoW to provide a reconfigurable, modular system that easily utilizes commercially available off the shelf components. This adaptability is well suited to several aspects of unmanned vehicle applications, most notably environmental perception and vehicle control.

  2. The VirtualwindoW: A Reconfigurable, Modular, Stereo Vision System

    SciTech Connect

    Kinoshita, Robert Arthur; Anderson, Matthew Oley; Mckay, Mark D; Willis, Walter David

    1999-04-01

    An important need while using unmanned vehicles is the ability for the remote operator or observer to easily and accurately perceive the operating environment. A classic problem in providing a complete representation of the remote work area is sensory overload or excessive complexity in the human-machine interface. In addition, remote operations often benefit from depth perception capability while viewing or manipulating objects. Thus, there is an ongoing effort within the remote and teleoperated robotic field to develop better human-machine interfaces. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has been researching methods to simplify the human-machine interface using atypical operator techniques. Initial telepresence research conducted at the INEEL developed and implemented a concept called the VirtualwindoW. This system minimized the complexity of remote stereo viewing controls and provided the operator the "feel" of viewing the environment, including depth perception, in a natural setting. The VirtualwindoW has shown that the human-machine interface can be simplified while increasing operator performance. This paper deals with the continuing research and development of the VirtualwindoW to provide a reconfigurable, modular system that easily utilizes commercially available off the shelf components. This adaptability is well suited to several aspects of unmanned vehicle applications, most notably environmental perception and vehicle control.

  3. Multispectral uncooled infrared enhanced-vision system for flight test

    NASA Astrophysics Data System (ADS)

    Tiana, Carlo L.; Kerr, Richard; Harrah, Steven D.

    2001-08-01

    The 1997 Final Report of the 'White House Commission on Aviation Safety and Security' challenged industrial and government concerns to reduce aviation accident rates by a factor of five within 10 years. In the report, the commission encourages NASA, FAA and others 'to expand their cooperative efforts in aviation safety research and development'. As a result of this publication, NASA has since undertaken a number of initiatives aimed at meeting the stated goal. Among these, the NASA Aviation Safety Program was initiated to encourage and assist in the development of technologies for the improvement of aviation safety. Among the technologies being considered are certain sensor technologies that may enable commercial and general aviation pilots to 'see to land' at night or in poor visibility conditions. Infrared sensors have potential applicability in this field, and this paper describes a system, based on such sensors, that is being deployed on the NASA Langley Research Center B757 ARIES research aircraft. The system includes two infrared sensors operating in different spectral bands, and a visible-band color CCD camera for documentation purposes. The sensors are mounted in an aerodynamic package in a forward position on the underside of the aircraft. Support equipment in the aircraft cabin collects and processes all relevant sensor data. Display of sensor images is achieved in real time on the aircraft's Head Up Display (HUD), or other display devices.

  4. Machine vision guided sensor positioning system for leaf temperature assessment

    NASA Technical Reports Server (NTRS)

    Kim, Y.; Ling, P. P.; Janes, H. W. (Principal Investigator)

    2001-01-01

    A sensor positioning system was developed for monitoring plants' well-being using a non-contact sensor. Image processing algorithms were developed to identify a target region on a plant leaf. A novel algorithm to recover view depth was developed by using a camera equipped with a computer-controlled zoom lens. The methodology has improved depth recovery resolution over a conventional monocular imaging technique. An algorithm was also developed to find a maximum enclosed circle on a leaf surface so the conical field-of-view of an infrared temperature sensor could be filled by the target without peripheral noise. The center of the enclosed circle and the estimated depth were used to define the sensor 3-D location for accurate plant temperature measurement.
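
    The maximum-enclosed-circle step can be sketched with a distance transform: within a binary leaf mask, the pixel farthest from the background is the centre of the largest inscribed circle (the mask filename is a placeholder, and this is not necessarily the authors' own algorithm):

        import cv2

        # leaf_mask: 8-bit image, non-zero inside the segmented leaf (assumed input)
        leaf_mask = cv2.imread('leaf_mask.png', cv2.IMREAD_GRAYSCALE)
        # Distance of every leaf pixel to the nearest background pixel; its
        # maximum is the radius of the largest circle that fits inside the leaf.
        dist = cv2.distanceTransform(leaf_mask, cv2.DIST_L2, 5)
        _, radius, _, center = cv2.minMaxLoc(dist)
        print('aim IR sensor at', center, '; usable radius', radius, 'px')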

  5. Night Vision Laboratory Static Performance Model for Thermal Viewing Systems

    DTIC Science & Technology

    1975-04-01

    Electro-Science Laboratory, Columbus, Ohio, May 1968, AD 831666. [OCR residue of a chart of atmospheric conditions (light fog; clear to light haze; very clear) and of an imaging-system equation defining ηo(λ) as the optical efficiency of the viewer and F as the f/number; the remainder of the fragment is unrecoverable.]

  6. Computer-based neuro-vision system for color classification of french fries

    NASA Astrophysics Data System (ADS)

    Panigrahi, Suranjan; Wiesenborn, Dennis

    1995-01-01

    French fries are one of the frozen foods with rising demand in domestic and international markets. Color is one of the critical attributes for quality evaluation of french fries. This study discusses the development of a color computer vision system and the integration of neural network technology for objective color evaluation and classification of french fries. The classification accuracy of a prototype back-propagation network developed for this purpose was found to be 96%.

  7. Increasing the object recognition distance of compact open air on board vision system

    NASA Astrophysics Data System (ADS)

    Kirillov, Sergey; Kostkin, Ivan; Strotov, Valery; Dmitriev, Vladimir; Berdnikov, Vadim; Akopov, Eduard; Elyutin, Aleksey

    2016-10-01

    The aim of this work was to develop an algorithm that eliminates atmospheric distortion and improves image quality. The proposed algorithm is entirely software-based, requiring no additional photographic hardware, and does not require preliminary calibration. It works equally effectively with images obtained at distances from 1 to 500 meters. An algorithm for improving open-air images, designed for Raspberry Pi model B on-board vision systems, is proposed. The results of an experimental examination are given.

  8. HDR video synthesis for vision systems in dynamic scenes

    NASA Astrophysics Data System (ADS)

    Shopovska, Ivana; Jovanov, Ljubomir; Goossens, Bart; Philips, Wilfried

    2016-09-01

    High dynamic range (HDR) image generation from a number of differently exposed low dynamic range (LDR) images has been extensively explored in the past few decades, and as a result of these efforts a large number of HDR synthesis methods have been proposed. Since HDR images are synthesized by combining well-exposed regions of the input images, one of the main challenges is dealing with camera or object motion. In this paper we propose a method for the synthesis of HDR video from a single camera using multiple, differently exposed video frames, with circularly alternating exposure times. One of the potential applications of the system is in driver assistance systems and autonomous vehicles, involving significant camera and object movement, non-uniform and temporally varying illumination, and the requirement of real-time performance. To achieve these goals simultaneously, we propose an HDR synthesis approach based on weighted averaging of aligned radiance maps. The computational complexity of high-quality optical flow methods for motion compensation is still prohibitively high for real-time applications. Instead, we rely on more efficient global projective transformations to solve camera movement, while moving objects are detected by thresholding the differences between the transformed and brightness-adapted images in the set. To attain temporal consistency of the camera motion in the consecutive HDR frames, the parameters of the perspective transformation are stabilized over time by means of computationally efficient temporal filtering. We evaluated our results on several reference HDR videos, on synthetic scenes, and using 14-bit raw images taken with a standard camera.
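
    A minimal numpy sketch of the weighted radiance averaging at the core of this kind of HDR synthesis, assuming the frames are already aligned and an inverse camera response function is available as a 256-entry lookup table (alignment, motion masking, and temporal filtering are omitted; names are illustrative):

        import numpy as np

        def merge_radiance(frames, exposures, inv_crf):
            """Weighted average of radiance estimates from aligned uint8 LDR
            frames with alternating exposure times; the hat weight trusts
            mid-range pixels and discounts under- and over-exposed ones."""
            acc = np.zeros(frames[0].shape, dtype=np.float64)
            wsum = np.zeros_like(acc)
            for img, t in zip(frames, exposures):
                w = 1.0 - 2.0 * np.abs(img.astype(np.float64) / 255.0 - 0.5)
                acc += w * inv_crf[img] / t     # per-frame radiance estimate
                wsum += w
            return acc / np.maximum(wsum, 1e-6)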

  9. WELDSMART: A vision-based expert system for quality control

    NASA Technical Reports Server (NTRS)

    Andersen, Kristinn; Barnett, Robert Joel; Springfield, James F.; Cook, George E.

    1992-01-01

    This work was aimed at exploring means for utilizing computer technology in quality inspection and evaluation. Inspection of metallic welds was selected as the main application for this development and primary emphasis was placed on visual inspection, as opposed to other inspection methods, such as radiographic techniques. Emphasis was placed on methodologies with the potential for use in real-time quality control systems. Because quality evaluation is somewhat subjective, despite various efforts to classify discontinuities and standardize inspection methods, the task of using a computer for both inspection and evaluation was not trivial. The work started out with a review of the various inspection techniques that are used for quality control in welding. Among other observations from this review was the finding that most weld defects result in abnormalities that may be seen by visual inspection. This supports the approach of emphasizing visual inspection for this work. Quality control consists of two phases: (1) identification of weld discontinuities (some of which may be severe enough to be classified as defects), and (2) assessment or evaluation of the weld based on the observed discontinuities. Usually the latter phase results in a pass/fail judgement for the inspected piece. It is the conclusion of this work that the first of the above tasks, identification of discontinuities, is the most challenging one. It calls for sophisticated image processing and image analysis techniques, and frequently ad hoc methods have to be developed to identify specific features in the weld image. The difficulty of this task is generally not due to limited computing power. In most cases it was found that a modest personal computer or workstation could carry out most computations in a reasonably short time period. Rather, the algorithms and methods necessary for identifying weld discontinuities were in some cases limited. The fact that specific techniques were finally developed and

  10. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    PubMed Central

    Ehsan, Shoaib; Clark, Adrian F.; ur Rehman, Naveed; McDonald-Maier, Klaus D.

    2015-01-01

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems. PMID:26184211
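
    For reference, the serial recursion that these hardware algorithms decompose, together with the constant-time box sum it enables (a plain Python sketch of the textbook equations, not the row-parallel hardware design):

        import numpy as np

        def integral_image(img):
            """Serial computation via the recursive equation
            ii(x, y) = i(x, y) + ii(x-1, y) + ii(x, y-1) - ii(x-1, y-1)."""
            h, w = img.shape
            ii = np.zeros((h + 1, w + 1), dtype=np.int64)   # zero-padded border
            for y in range(1, h + 1):
                for x in range(1, w + 1):
                    ii[y, x] = (img[y - 1, x - 1] + ii[y - 1, x]
                                + ii[y, x - 1] - ii[y - 1, x - 1])
            return ii

        def box_sum(ii, x0, y0, x1, y1):
            """Sum over any rectangle in four lookups, independent of its size."""
            return ii[y1 + 1, x1 + 1] - ii[y0, x1 + 1] - ii[y1 + 1, x0] + ii[y0, x0]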

  11. NETRA: A parallel architecture for integrated vision systems. 1: Architecture and organization

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok N.; Patel, Janak H.; Ahuja, Narendra

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is considered to be a system that uses vision algorithms from all levels of processing for a high level application (such as object recognition). A model of computation is presented for parallel processing for an IVS. Using the model, desired features and capabilities of a parallel architecture suitable for IVSs are derived. Then a multiprocessor architecture (called NETRA) is presented. This architecture is highly flexible without the use of complex interconnection schemes. The topology of NETRA is recursively defined and hence is easily scalable from small to large systems. Homogeneity of NETRA permits fault tolerance and graceful degradation under faults. It is a recursively defined tree-type hierarchical architecture where each of the leaf nodes consists of a cluster of processors connected with a programmable crossbar with selective broadcast capability to provide for desired flexibility. A qualitative evaluation of NETRA is presented. Then general schemes are described to map parallel algorithms onto NETRA. Algorithms are classified according to their communication requirements for parallel processing. An extensive analysis of inter-cluster communication strategies in NETRA is presented, and parameters affecting performance of parallel algorithms when mapped on NETRA are discussed. Finally, a methodology to evaluate performance of algorithms on NETRA is described.

  12. Novel method of calibration with restrictive constraints for stereo-vision system

    NASA Astrophysics Data System (ADS)

    Cui, Jiashan; Huo, Ju; Yang, Ming

    2016-05-01

    Regarding the calibration of a stereo vision measurement system, this paper puts forward a new bundle adjustment algorithm for stereo camera calibration. Multiple-view geometric constraints and a bundle adjustment algorithm are used to optimize the interior and exterior parameters of the cameras accurately. A fixed relative constraint between the cameras is introduced. We have improved the normal-equation construction of the traditional bundle adjustment method so that each iteration jointly optimizes the exterior parameters of the two images taken by the camera pair, better integrating the two rigidly coupled cameras as a single camera. The fixed relative constraint effectively increases the number of redundant observations of the adjustment system and achieves higher accuracy while reducing the dimension of the normal matrix, meaning each iteration requires less time. Simulation and actual experimental results show the superior performance of the proposed approach in terms of robustness and accuracy, and our approach can also be extended to stereo vision systems with more than two cameras.
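
    A hedged sketch of the central idea: if the relative pose between the rigidly coupled cameras is held fixed, only one camera pose is free per view pair, and the second view's reprojection residuals follow from the constraint (placeholder intrinsics and relative pose; this is not the authors' normal-equation code):

        import cv2
        import numpy as np
        from scipy.optimize import least_squares

        # Fixed relative pose between the coupled cameras, e.g. from a prior
        # stereo calibration (placeholder values), plus shared intrinsics K.
        R_rel, t_rel = np.eye(3), np.array([0.2, 0.0, 0.0])
        K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])

        def residuals(pose, pts3d, uv_left, uv_right):
            """Reprojection errors of both views; only the left pose (rvec, tvec)
            is free, the right camera follows from the fixed constraint."""
            R1, _ = cv2.Rodrigues(pose[:3])
            t1 = pose[3:]
            R2, t2 = R_rel @ R1, R_rel @ t1 + t_rel
            res = []
            for uv, (Rc, tc) in ((uv_left, (R1, t1)), (uv_right, (R2, t2))):
                cam = (K @ (Rc @ pts3d.T + tc.reshape(3, 1))).T
                res.append((cam[:, :2] / cam[:, 2:3] - uv).ravel())
            return np.concatenate(res)

        # least_squares(residuals, x0, args=(pts3d, uv_l, uv_r)) then refines
        # the six free pose parameters from matched image observations.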

  13. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    PubMed

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-07-10

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.

  14. Edge detection algorithms implemented on Bi-i cellular vision system

    NASA Astrophysics Data System (ADS)

    Karabiber, Fethullah; Arik, Sabri

    2009-02-01

    The Bi-i (Bio-inspired) Cellular Vision system is built mainly on Cellular Neural/Nonlinear Network (CNN) type (ACE16k) and Digital Signal Processing (DSP) type microprocessors. CNN theory, proposed by Chua, has advanced properties for image processing applications. In this study, edge detection algorithms are implemented on the Bi-i Cellular Vision System. Extracting the edges of an image correctly and quickly is of crucial importance for image processing applications. A threshold-gradient-based edge detection algorithm is implemented using the ACE16k microprocessor. In addition, a pre-processing operation is realized using an image enhancement technique based on the Laplacian operator. Finally, morphological operations are performed as post-processing. The Sobel edge detection algorithm is performed by convolving Sobel operators with the image in the DSP. The performances of the edge detection algorithms are compared using visual inspection and timing analysis. Experimental results show that the ACE16k has great computational power and that the Bi-i Cellular Vision System is well qualified for applying image processing algorithms in real time.
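
    A minimal CPU sketch of the stages named above (Laplacian-based enhancement, threshold-gradient edge detection, morphological post-processing), written with OpenCV rather than the ACE16k/DSP hardware; the file name and threshold value are placeholders:

        import cv2
        import numpy as np

        gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # placeholder input

        # Pre-processing: Laplacian-based sharpening to enhance edges.
        lap = cv2.Laplacian(gray, cv2.CV_16S, ksize=3)
        sharp = cv2.convertScaleAbs(gray.astype(np.int16) - lap)

        # Threshold-gradient edge detection: Sobel gradients, magnitude, fixed threshold.
        gx = cv2.Sobel(sharp, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(sharp, cv2.CV_32F, 0, 1, ksize=3)
        edges = (cv2.magnitude(gx, gy) > 100).astype(np.uint8) * 255

        # Post-processing: morphological closing to join broken edge segments.
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
        edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)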

  15. Comparative system identification of flower tracking performance in three hawkmoth species reveals adaptations for dim light vision.

    PubMed

    Stöckl, Anna L; Kihlström, Klara; Chandler, Steven; Sponberg, Simon

    2017-04-05

    Flight control in insects is heavily dependent on vision. Thus, in dim light, the decreased reliability of visual signal detection also has consequences for insect flight. We have an emerging understanding of the neural mechanisms that different species employ to adapt the visual system to low light. However, much less explored are comparative analyses of how low light affects the flight behaviour of insect species, and the corresponding links between physiological adaptations and behaviour. We investigated whether the flower tracking behaviour of three hawkmoth species with different diel activity patterns revealed luminance-dependent adaptations, using a system identification approach. We found clear luminance-dependent differences in flower tracking in all three species, which were explained by a simple luminance-dependent delay model that generalized across species. We discuss physiological and anatomical explanations for the variance in tracking responses that could not be explained by such simple models. Differences between species could not be explained by the simple delay model alone; however, in several cases they could be explained through the addition of a second model parameter, a simple scaling term, that captures the responsiveness of each species to flower movements. Thus, we demonstrate here that much of the variance in the luminance-dependent flower tracking responses of hawkmoths with different diel activity patterns can be captured by simple models of neural processing. This article is part of the themed issue 'Vision in dim light'.
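
    The two-parameter description referred to above (a luminance-dependent delay plus a responsiveness scaling term) can be written as a toy model; the delay and gain would be fitted per species and luminance condition, and are placeholders here:

        import numpy as np

        def tracking_response(t, flower_pos, delay, gain=1.0):
            # Predicted moth position: the flower trajectory scaled by a
            # responsiveness gain and shifted by a delay that grows as
            # luminance falls.
            return gain * np.interp(t - delay, t, flower_pos)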

  16. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems.

    PubMed

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-12

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.

  17. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    PubMed Central

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems. PMID:28079187

  18. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    NASA Astrophysics Data System (ADS)

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.

  19. Machine Vision Monitoring System of Lettuce Growth in a State-of-the-Art Greenhouse

    NASA Astrophysics Data System (ADS)

    Lee, Jong Whan

    Farmers want a monitoring system to support decisions for plant cultivation and to obtain information on plant health conditions. This study established a remote monitoring system in a greenhouse and provided a machine vision system with a remote-controlled camera to extract plant images, analyze the trend of lettuce growth, and predict the fresh weights of lettuce plants. Calibration bars with color patches were used for geometric calibration and for the extraction of lettuce regions from images captured under sunlight conditions. The fresh weight prediction model was developed by image analysis of lettuces growing in a greenhouse.
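
    A toy sketch of how image analysis might drive such a fresh-weight prediction, assuming a green-pixel area proxy and an offline-fitted linear model; the segmentation rule and coefficients below are hypothetical, not the study's:

        import numpy as np

        def projected_leaf_area(rgb):
            # Count green-dominant pixels as a crude proxy for lettuce area; a
            # real system would first correct geometry and colour using the
            # calibration bars described above.
            r = rgb[..., 0].astype(int)
            g = rgb[..., 1].astype(int)
            b = rgb[..., 2].astype(int)
            return int(((g > r) & (g > b)).sum())

        def fresh_weight_g(area_px, a=2.1e-4, b=0.8):
            return a * area_px + b   # hypothetical linear growth model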

  20. FPGA-based multimodal embedded sensor system integrating low- and mid-level vision.

    PubMed

    Botella, Guillermo; Martín H, José Antonio; Santos, Matilde; Meyer-Baese, Uwe

    2011-01-01

    Motion estimation is a low-level vision task that is especially relevant due to its wide range of applications in the real world. Many of the best motion estimation algorithms include some of the features found in mammalian vision, which demand huge computational resources and are therefore not usually available in real time. In this paper we present a novel bioinspired sensor based on the synergy between optical flow and orthogonal variant moments. The bioinspired sensor has been designed for Very Large Scale Integration (VLSI) using properties of the mammalian cortical motion pathway. This sensor combines low-level primitives (optical flow and image moments) in order to produce a mid-level vision abstraction layer. The results are described through experiments showing the validity of the proposed system, together with an analysis of the computational resources and performance of the applied algorithms.

  1. A 3D terrain reconstruction method of stereo vision based quadruped robot navigation system

    NASA Astrophysics Data System (ADS)

    Ge, Zhuo; Zhu, Ying; Liang, Guanhao

    2017-01-01

    To provide 3D environment information for the quadruped robot autonomous navigation system while walking through rough terrain, a novel 3D terrain reconstruction method based on stereo vision is presented. To address the problems that images collected by stereo sensors contain large regions of similar grayscale and that the real-time performance of image matching is poor, the watershed algorithm and the fuzzy c-means clustering algorithm are combined for contour extraction. To reduce mismatches, a dual constraint combining region matching and pixel matching is established for matching optimization. Using the stereo-matched edge pixel pairs, the 3D coordinates are estimated according to the binocular stereo vision imaging model. Experimental results show that the proposed method yields a high stereo matching ratio and reconstructs the 3D scene quickly and efficiently.
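
    The final step follows the standard rectified binocular imaging model; a minimal sketch with our own variable names (focal length f in pixels, baseline b, principal point (cx, cy), disparity d at pixel (u, v)):

        def triangulate(u, v, d, f, b, cx, cy):
            Z = f * b / d            # depth from the binocular imaging model
            X = (u - cx) * Z / f
            Y = (v - cy) * Z / f
            return X, Y, Z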

  2. SAD-based stereo vision machine on a System-on-Programmable-Chip (SoPC).

    PubMed

    Zhang, Xiang; Chen, Zhangwei

    2013-03-04

    This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. SoPC technology provides great convenience for accessing many hardware devices such as DDRII, SSRAM, Flash, etc., through IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, a configurable soft IP core in charge of managing the image buffer and the users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled with the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process stereo pair images of many different sizes, up to a maximum of 512 K pixels. The machine is designed for real-time stereo vision applications and offers good performance and high efficiency. With an FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained per second with a 5 × 5 matching window and a maximum of 64 disparity pixels.
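
    A software sketch of the SAD block-matching core that the SoPC implements in hardware, using the 5 × 5 window and 64-disparity range quoted above; this NumPy version is for illustration and runs far slower than the FPGA pipeline:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def sad_disparity(left, right, win=5, max_disp=64):
            # Winner-takes-all SAD block matching on a rectified greyscale pair.
            h, w = left.shape
            L, R = left.astype(np.float32), right.astype(np.float32)
            best = np.full((h, w), np.inf, np.float32)
            disp = np.zeros((h, w), np.uint8)
            for d in range(max_disp):
                diff = np.full((h, w), 255.0, np.float32)  # penalty outside overlap
                diff[:, d:] = np.abs(L[:, d:] - R[:, :w - d])
                sad = uniform_filter(diff, size=win)       # windowed SAD (scaled)
                update = sad < best
                best[update] = sad[update]
                disp[update] = d
            return disp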

  3. SAD-Based Stereo Vision Machine on a System-on-Programmable-Chip (SoPC)

    PubMed Central

    Zhang, Xiang; Chen, Zhangwei

    2013-01-01

    This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. SoPC technology provides great convenience for accessing many hardware devices such as DDRII, SSRAM, Flash, etc., through IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, a configurable soft IP core in charge of managing the image buffer and the users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled with the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process stereo pair images of many different sizes, up to a maximum of 512 K pixels. The machine is designed for real-time stereo vision applications and offers good performance and high efficiency. With an FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained per second with a 5 × 5 matching window and a maximum of 64 disparity pixels. PMID:23459385

  4. Complete Vision-Based Traffic Sign Recognition Supported by an I2V Communication System

    PubMed Central

    García-Garrido, Miguel A.; Ocaña, Manuel; Llorca, David F.; Arroyo, Estefanía; Pozuelo, Jorge; Gavilán, Miguel

    2012-01-01

    This paper presents a complete traffic sign recognition system based on a vision sensor onboard a moving vehicle which detects and recognizes up to one hundred of the most important road signs, including circular and triangular signs. A restricted Hough transform is used as the detection method, working on information extracted from contour images, while the proposed recognition system is based on Support Vector Machines (SVM). A novel solution to the problem of discarding detected signs that do not pertain to the host road is proposed. For that purpose infrastructure-to-vehicle (I2V) communication and a stereo vision sensor are used. Furthermore, the outputs provided by the vision sensor and the data supplied by the CAN Bus and a GPS sensor are combined to obtain the global position of the detected traffic signs, which is used to identify a traffic sign in the I2V communication. This paper presents extensive tests in real driving conditions, both day and night, in which an average detection rate over 95% and an average recognition rate around 93% were obtained with an average runtime of 35 ms that allows real-time performance. PMID:22438704
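
    A minimal OpenCV sketch of the circular-sign detection stage only, restricting the circle search to a plausible sign-radius range (a loose analogue of the restricted Hough transform named above); the input image and parameter values are placeholders, and the SVM recognition and I2V stages are omitted:

        import cv2
        import numpy as np

        frame = cv2.imread("road.png")                     # placeholder input
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # HOUGH_GRADIENT runs an internal Canny stage (param1 = upper threshold).
        circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=40,
                                   param1=160, param2=40, minRadius=8, maxRadius=60)
        if circles is not None:
            for x, y, r in np.round(circles[0]).astype(int):
                cv2.rectangle(frame, (x - r, y - r), (x + r, y + r), (0, 255, 0), 2)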

  5. Context-specific energy strategies: coupling energy system visions with feasible implementation scenarios.

    PubMed

    Trutnevyte, Evelina; Stauffacher, Michael; Schlegel, Matthias; Scholz, Roland W

    2012-09-04

    Conventional energy strategy defines an energy system vision (the goal), energy scenarios with technical choices, and an implementation mechanism (such as economic incentives). Because it starts from a generic vision, such a strategy, when applied in a specific regional context, can deviate from the optimal one, for instance the one with the lowest environmental impacts. This paper proposes an approach for developing energy strategies by simultaneously, rather than sequentially, combining multiple energy system visions and technically feasible, cost-effective energy scenarios that meet environmental constraints at a given place. The approach is illustrated by developing a residential heat supply strategy for a Swiss region. In the analyzed case, urban municipalities should focus on reducing heat demand, and rural municipalities should focus on harvesting local energy sources, primarily wood. Solar thermal units are cost-competitive in all municipalities, and their deployment should be fostered by information campaigns. Heat pumps and building refurbishment are not competitive; thus, economic incentives are essential, especially for urban municipalities. In rural municipalities, wood is cost-competitive, and community-based initiatives are likely to be most successful. The paper thus shows that energy strategies should be spatially differentiated. The suggested approach can be transferred to other regions and spatial scales.

  6. Present and future of vision systems technologies in commercial flight operations

    NASA Astrophysics Data System (ADS)

    Ward, Jim

    2016-05-01

    The development of systems to enable pilots of all types of aircraft to see through fog, clouds, and sandstorms and land in low visibility has been widely discussed and researched across aviation. For military applications, the goal has been to operate in a Degraded Visual Environment (DVE), using sensors to enable flight crews to see and operate without concern for weather that limits human visibility. These military DVE goals are mainly oriented to the off-field landing environment. For commercial aviation, the Federal Aviation Administration (FAA) implemented operational regulations in 2004 that allow the flight crew to see the runway environment using an Enhanced Flight Vision System (EFVS) and continue the approach below the normal landing decision height. The FAA is expanding the current use and economic benefit of EFVS technology and will soon permit landing without any natural vision using real-time weather-penetrating sensors. The operational goals of both of these efforts, DVE and EFVS, have been the stimulus for development of new sensors and vision displays to create the modern flight deck.

  7. The ART of representation: Memory reduction and noise tolerance in a neural network vision system

    NASA Astrophysics Data System (ADS)

    Langley, Christopher S.

    The Feature Cerebellar Model Arithmetic Computer (FCMAC) is a multiple-input-single-output neural network that can provide three-degree-of-freedom (3-DOF) pose estimation for a robotic vision system. The FCMAC provides sufficient accuracy to enable a manipulator to grasp an object from an arbitrary pose within its workspace. The network learns an appearance-based representation of an object by storing coarsely quantized feature patterns. As all unique patterns are encoded, the network size grows uncontrollably. A new architecture is introduced herein, which combines the FCMAC with an Adaptive Resonance Theory (ART) network. The ART module categorizes patterns observed during training into a set of prototypes that are used to build the FCMAC. As a result, the network no longer grows without bound, but constrains itself to a user-specified size. Pose estimates remain accurate since the ART layer tends to discard the least relevant information first. The smaller network performs recall faster, and in some cases is better for generalization, resulting in a reduction of error at recall time. The ART-Under-Constraint (ART-C) algorithm is extended to include initial filling with randomly selected patterns (referred to as ART-F). In experiments using a real-world data set, the new network performed equally well using less than one tenth the number of coarse patterns as a regular FCMAC. The FCMAC is also extended to include real-valued input activations. As a result, the network can be tuned to reject a variety of types of noise in the image feature detection. A quantitative analysis of noise tolerance was performed using four synthetic noise algorithms, and a qualitative investigation was made using noisy real-world image data. In validation experiments, the FCMAC system outperformed Radial Basis Function (RBF) networks for the 3-DOF problem, and had accuracy comparable to that of Principal Component Analysis (PCA) and superior to that of Shape Context Matching (SCM), both

  8. Retina-specific activation of a sustained hypoxia-like response leads to severe retinal degeneration and loss of vision.

    PubMed

    Lange, Christina; Caprara, Christian; Tanimoto, Naoyuki; Beck, Susanne; Huber, Gesine; Samardzija, Marijana; Seeliger, Mathias; Grimm, Christian

    2011-01-01

    Loss of vision and blindness in human patients is often caused by the degeneration of neuronal cells in the retina. In mouse models, photoreceptors can be protected from death by hypoxic preconditioning. Preconditioning in low oxygen stabilizes and activates hypoxia inducible transcription factors (HIFs), which play a major role in the hypoxic response of tissues including the retina. We show that a tissue-specific knockdown of von Hippel-Lindau protein (VHL) activated HIF transcription factors in normoxic conditions in the retina. Sustained activation of HIF1 and HIF2 was accompanied by persisting embryonic vasculatures in the posterior eye and the iris. Embryonic vessels persisted into adulthood and led to a severely abnormal mature vessel system with vessels penetrating the photoreceptor layer in adult mice. The sustained hypoxia-like response also activated the leukemia inhibitory factor (LIF)-controlled endogenous molecular cell survival pathway. However, this was not sufficient to protect the retina against massive cell death in all retinal layers of adult mice. Caspases 1, 3 and 8 were upregulated during the degeneration as were several VHL target genes connected to the extracellular matrix. Misregulation of these genes may influence retinal structure and may therefore facilitate growth of vessels into the photoreceptor layer. Thus, an early and sustained activation of a hypoxia-like response in retinal cells leads to abnormal vasculature and severe retinal degeneration in the adult mouse retina.

  9. Evaluation of a gaze-controlled vision enhancement system for reading in visually impaired people.

    PubMed

    Aguilar, Carlos; Castet, Eric

    2017-01-01

    People with low vision, especially those with Central Field Loss (CFL), need magnification to read. The flexibility of Electronic Vision Enhancement Systems (EVES) offers several ways of magnifying text. Due to the restricted field of view of EVES, the need for magnification conflicts with the need to navigate through text (panning). We have developed and implemented a real-time gaze-controlled system whose goal is to optimize the possibility of magnifying a portion of text while maintaining a global view of the other portions of the text (condition 1). Two other conditions were implemented that mimicked commercially available advanced systems known as CCTV (closed-circuit television) systems (conditions 2 and 3). In these two conditions, magnification was uniformly applied to the whole text without any possibility of selecting a specific region of interest. The three conditions were implemented on the same computer to remove differences that might have been induced by dissimilar equipment. A gaze-contingent artificial 10° scotoma (a mask continuously displayed in real time on the screen at the gaze location) was used in the three conditions in order to simulate macular degeneration. Ten healthy subjects with a gaze-contingent scotoma read aloud sentences from a French newspaper in nine one-hour experimental sessions. Reading speed was measured and constituted the main dependent variable for comparing the three conditions. All subjects were able to use condition 1, and they found it slightly more comfortable to use than condition 2 (and similar to condition 3). Importantly, reading speed results did not show any significant difference between the three systems. In addition, learning curves were similar in the three conditions. This proof-of-concept study suggests that the principles underlying the gaze-controlled enhancement system might be further developed and fruitfully incorporated into different kinds of EVES for low vision reading.

  10. High-accuracy microassembly by intelligent vision systems and smart sensor integration

    NASA Astrophysics Data System (ADS)

    Schilp, Johannes; Harfensteller, Mark; Jacob, Dirk; Schilp, Michael

    2003-10-01

    Innovative production processes and strategies, from batch production to high-volume scale, play a decisive role in producing microsystems economically. In particular, assembly processes are crucial operations during the production of microsystems. Due to large batch sizes, many microsystems can be produced economically by conventional assembly techniques using specialized and highly automated assembly systems. At the laboratory stage, microsystems are mostly assembled by hand. Between these extremes lies a wide field of small- and medium-sized batch production for which common automated solutions are rarely profitable. For assembly processes at these batch sizes, a flexible automated assembly system has been developed at the iwb. It is based on a modular design. Actuators like grippers, dispensers or other process tools can easily be attached thanks to a special tool-changing system, so new joining techniques can easily be implemented. A force sensor and a vision system are integrated into the tool head. The automated assembly processes are based on different optical sensors and smart actuators like high-accuracy robots or linear motors. A fiber-optic sensor is integrated in the dispensing module to measure, contactlessly, the clearance between the dispensing needle and the substrate. Robot vision systems using optical pattern recognition are also implemented as modules. In combination with relative positioning strategies, an assembly accuracy of less than 3 μm can be realized. A laser system is used for manufacturing processes like soldering.

  11. Structure light telecentric stereoscopic vision 3D measurement system based on Scheimpflug condition

    NASA Astrophysics Data System (ADS)

    Mei, Qing; Gao, Jian; Lin, Hui; Chen, Yun; Yunbo, He; Wang, Wei; Zhang, Guanjin; Chen, Xin

    2016-11-01

    We designed a new three-dimensional (3D) measurement system for micro components: a structure light telecentric stereoscopic vision 3D measurement system based on the Scheimpflug condition. This system creatively combines the telecentric imaging model and the Scheimpflug condition on the basis of structure light stereoscopic vision, offering the benefits of a wide measurement range, high accuracy, fast speed, and low price. The system measurement range is 20 mm × 13 mm × 6 mm, the lateral resolution is 20 μm, and the practical vertical resolution reaches 2.6 μm, which is close to the theoretical value of 2 μm and well satisfies the 3D measurement needs of micro components such as semiconductor devices, photoelectronic elements, and micro-electromechanical systems. In this paper, we first introduce the principle and structure of the system, then describe the system calibration and 3D reconstruction, and finally present an experiment on the 3D reconstruction of the surface topography of a wafer, followed by a discussion and conclusions.

  12. An Automated Mouse Tail Vascular Access System by Vision and Pressure Feedback

    PubMed Central

    Chang, Yen-Chi; Berry-Pusey, Brittany; Yasin, Rashid; Vu, Nam; Maraglia, Brandon; Chatziioannou, Arion X.; Tsao, Tsu-Chin

    2015-01-01

    This paper develops an automated vascular access system (A-VAS) with novel vision-based vein and needle detection methods and real-time pressure feedback for murine drug delivery. Mouse tail vein injection is a routine but critical step for preclinical imaging applications. Due to the small vein diameter and external disturbances such as tail hair, pigmentation, and scales, identifying vein location is difficult and manual injections usually result in poor repeatability. To improve the injection accuracy, consistency, safety, and processing time, A-VAS was developed to overcome difficulties in vein detection noise rejection, robustness in needle tracking, and visual servoing integration with the mechatronics system. PMID:26478693

  13. An Automated Mouse Tail Vascular Access System by Vision and Pressure Feedback.

    PubMed

    Chang, Yen-Chi; Berry-Pusey, Brittany; Yasin, Rashid; Vu, Nam; Maraglia, Brandon; Chatziioannou, Arion X; Tsao, Tsu-Chin

    2015-08-01

    This paper develops an automated vascular access system (A-VAS) with novel vision-based vein and needle detection methods and real-time pressure feedback for murine drug delivery. Mouse tail vein injection is a routine but critical step for preclinical imaging applications. Due to the small vein diameter and external disturbances such as tail hair, pigmentation, and scales, identifying vein location is difficult and manual injections usually result in poor repeatability. To improve the injection accuracy, consistency, safety, and processing time, A-VAS was developed to overcome difficulties in vein detection noise rejection, robustness in needle tracking, and visual servoing integration with the mechatronics system.

  14. Air and Water System (AWS) Design and Technology Selection for the Vision for Space Exploration

    NASA Technical Reports Server (NTRS)

    Jones, Harry; Kliss, Mark

    2005-01-01

    This paper considers technology selection for the crew air and water recycling systems to be used in long duration human space exploration. The specific objectives are to identify the most probable air and water technologies for the vision for space exploration and to identify the alternate technologies that might be developed. The approach is to conduct a preliminary, first-cut systems engineering analysis, beginning with the Air and Water System (AWS) requirements and the system mass balance, and then to define the functional architecture, review the International Space Station (ISS) technologies, and discuss alternate technologies. The life support requirements for air and water are well known. The results of the mass flow and mass balance analysis help define the system architectural concept. The AWS includes five subsystems: Oxygen Supply, Condensate Purification, Urine Purification, Hygiene Water Purification, and Clothes Wash Purification. AWS technologies have been evaluated in the life support design for ISS Node 3, in earlier space station design studies, in proposals for the upgrade or evolution of the space station, and in studies of potential lunar or Mars missions. The leading candidate technologies for the vision for space exploration are those planned for Node 3 of the ISS. The ISS life support was designed to utilize Space Station Freedom (SSF) hardware to the maximum extent possible. The SSF final technology selection process, criteria, and results are discussed. Would it be cost-effective for the vision for space exploration to develop alternate technology? This paper will examine this and other questions associated with AWS design and technology selection.

  15. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications

    PubMed Central

    Musleh, Basam; Martín, David; Armingol, José María; de la Escalera, Arturo

    2016-01-01

    Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on making use of computer vision in order to know the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution regarding the state of the art on the subject is the estimation of the pitch angle without being affected by the roll angle. The validation of the self-calibration method is accomplished by comparing it with relevant methods of camera pose estimation, where a synthetic sequence is used in order to measure the continuous error with a ground truth. This validation is enriched by the experimental results of the method in real traffic environments. PMID:27649178

  16. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications.

    PubMed

    Musleh, Basam; Martín, David; Armingol, José María; de la Escalera, Arturo

    2016-09-14

    Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on making use of computer vision in order to know the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution regarding the state of the art on the subject is the estimation of the pitch angle without being affected by the roll angle. The validation of the self-calibration method is accomplished by comparing it with relevant methods of camera pose estimation, where a synthetic sequence is used in order to measure the continuous error with a ground truth. This validation is enriched by the experimental results of the method in real traffic environments.

  17. Broad Band Antireflection Coating on Zinc Sulphide Window for Shortwave infrared cum Night Vision System

    NASA Astrophysics Data System (ADS)

    Upadhyaya, A. S.; Bandyopadhyay, P. K.

    2012-11-01

    In state-of-the-art technology, integrated devices are widely used for their potential advantages. A common system reduces weight as well as the total space occupied by its various parts. In state-of-the-art surveillance systems, an integrated SWIR and night vision system is used for more accurate identification of objects. In this system a common optical window is used, which passes the radiation of both regions; the two spectral regions are then separated into two channels. ZnS is a good choice for a common window, as it transmits both regions of interest, night vision (650 - 850 nm) as well as SWIR (0.9 - 1.7 μm). In this work a broadband antireflection coating is developed on a ZnS window to enhance the transmission. This seven-layer coating is designed using the flip-flop design method. After obtaining the final design, some minor refinement is done using the simplex method. A SiO2 and TiO2 coating material combination is used for this work. The coating is fabricated by a physical vapour deposition process, with the materials evaporated by an electron beam gun. The average transmission of the both-side-coated substrate from 660 to 1700 nm is 95%. The coating also acts as a contrast enhancement filter for night vision devices, as it reflects the 590 - 660 nm region. Several trials have been conducted to check the coating repeatability; the transmission variation between trials is small and within the tolerance limit. The coating also passes environmental tests for stability.
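
    Stacks of this kind are commonly evaluated with the characteristic-matrix method; the sketch below is a generic normal-incidence calculation, not the paper's seven-layer SiO2/TiO2 design, and the example indices are approximate:

        import numpy as np

        def stack_transmittance(wavelength_nm, n_substrate, layers):
            # layers: sequence of (refractive index, physical thickness in nm)
            # listed from the air side; lossless materials assumed.
            M = np.eye(2, dtype=complex)
            for n, d in layers:
                delta = 2.0 * np.pi * n * d / wavelength_nm   # phase thickness
                M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                                  [1j * n * np.sin(delta), np.cos(delta)]])
            B, C = M @ np.array([1.0, n_substrate])
            r = (B - C) / (B + C)       # incident medium is air (n0 = 1)
            return 1.0 - abs(r) ** 2    # lossless stack: T = 1 - R

        # e.g. a single quarter-wave layer (n = 1.38) on ZnS (n ~ 2.27) at 1000 nm:
        # stack_transmittance(1000, 2.27, [(1.38, 1000 / (4 * 1.38))])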

  18. A vision-based dynamic rotational angle measurement system for large civil structures.

    PubMed

    Lee, Jong-Jae; Ho, Hoai-Nam; Lee, Jong-Han

    2012-01-01

    In this paper, we propose a vision-based rotational angle measurement system for large-scale civil structures. Although several rotation angle measurement systems were introduced during the last decade, they often required complex and expensive equipment, so alternative effective solutions with high resolution are in great demand. The proposed system consists of commercial PCs, commercial camcorders, low-cost frame grabbers, and a wireless LAN router. The rotation angle is calculated using image processing techniques with pre-measured calibration parameters. Several laboratory tests were conducted to verify the performance of the proposed system. Compared with a commercial rotation angle measurement system, the results showed very good agreement, with an error of less than 1.0% in all test cases. Furthermore, several tests were conducted on the five-story modal testing tower with a hybrid mass damper to experimentally verify the feasibility of the proposed system.
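
    One simple way to obtain a rotation angle from image data, assuming two targets on the structure are tracked and the pre-measured calibration has already mapped them to undistorted pixel coordinates (the paper's exact processing chain is not reproduced here):

        import numpy as np

        def rotation_angle_deg(p1, p2, q1, q2):
            # Angle of the line through the two tracked targets in the current
            # frame (q1, q2) relative to the reference frame (p1, p2).
            a_ref = np.arctan2(p2[1] - p1[1], p2[0] - p1[0])
            a_cur = np.arctan2(q2[1] - q1[1], q2[0] - q1[0])
            return np.degrees(a_cur - a_ref)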

  19. A synthetic vision system using directionally selective motion detectors to recognize collision.

    PubMed

    Yue, Shigang; Rind, F Claire

    2007-01-01

    Reliably recognizing objects approaching on a collision course is extremely important. A synthetic vision system is proposed to tackle the problem of collision recognition in dynamic environments. The system combines the outputs of four whole-field motion-detecting neurons, each receiving inputs from a network of neurons employing asymmetric lateral inhibition to suppress their responses to one direction of motion. An evolutionary algorithm is then used to adjust the weights between the four motion-detecting neurons to tune the system to detect collisions in two test environments. To do this, a population of agents, each representing a proposed synthetic visual system, either were shown images generated by a mobile Khepera robot navigating in a simplified laboratory environment or were shown images videoed outdoors from a moving vehicle. The agents had to cope with the local environment correctly in order to survive. After 400 generations, the best agent recognized imminent collisions reliably in the familiar environment where it had evolved. However, when the environment was swapped, only the agent evolved to cope in the robotic environment still signaled collision reliably. This study suggests that whole-field direction-selective neurons, with selectivity based on asymmetric lateral inhibition, can be organized into a synthetic vision system, which can then be adapted to play an important role in collision detection in complex dynamic scenes.

  20. Aspects of Synthetic Vision Display Systems and the Best Practices of the NASA's SVS Project

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Kramer, Lynda J.; Jones, Denise R.; Young, Steven D.; Arthur, Jarvis J.; Prinzel, Lawrence J.; Glaab, Louis J.; Harrah, Steven D.; Parrish, Russell V.

    2008-01-01

    NASA's Synthetic Vision Systems (SVS) Project conducted research aimed at eliminating visibility-induced errors and low visibility conditions as causal factors in civil aircraft accidents while enabling the operational benefits of clear-day flight operations regardless of actual outside visibility. SVS takes advantage of many enabling technologies to achieve this capability including, for example, the Global Positioning System (GPS), data links, radar, imaging sensors, geospatial databases, advanced display media and three dimensional video graphics processors. Integration of these technologies to achieve the SVS concept provides pilots with high-integrity information that improves situational awareness with respect to terrain, obstacles, traffic, and flight path. This paper attempts to emphasize the system aspects of SVS - true systems, rather than just terrain on a flight display - and to document from an historical viewpoint many of the best practices that evolved during the SVS Project from the perspective of some of the NASA researchers most heavily involved in its execution. The Integrated SVS Concepts are envisagements of what production-grade Synthetic Vision systems might, or perhaps should, be in order to provide the desired functional capabilities that eliminate low visibility as a causal factor to accidents and enable clear-day operational benefits regardless of visibility conditions.

  1. Experimental study on a smart wheelchair system using a combination of stereoscopic and spherical vision.

    PubMed

    Nguyen, Jordan S; Su, Steven W; Nguyen, Hung T

    2013-01-01

    This paper is concerned with the experimental performance of a smart wheelchair system named TIM (Thought-controlled Intelligent Machine), which uses a unique camera configuration for vision. Included in this configuration are stereoscopic cameras for 3-Dimensional (3D) depth perception and mapping ahead of the wheelchair, and a spherical camera system for 360 degrees of monocular vision. The camera combination provides obstacle detection and mapping in unknown environments during real-time autonomous navigation of the wheelchair. With the integration of hands-free wheelchair control technology, designed as a control method for people with severe physical disability, the smart wheelchair system can assist the user with automated guidance during navigation. An experimental study on this system was conducted with a total of 10 participants, consisting of 8 able-bodied subjects and 2 tetraplegic (C-6 to C-7) subjects. The hands-free control technologies utilized for this testing were a head-movement controller (HMC) and a brain-computer interface (BCI). The results showed that the assistance of TIM's automated guidance system had a statistically significant reduction effect (p-value = 0.000533) on the completion times of the obstacle course presented in the experimental study, as compared to the test runs conducted without the assistance of TIM.

  2. Night vision imaging system design, integration and verification in spacecraft vacuum thermal test

    NASA Astrophysics Data System (ADS)

    Shang, Yonghong; Wang, Jing; Gong, Zhe; Li, Xiyuan; Pei, Yifei; Bai, Tingzhu; Zhen, Haijing

    2015-08-01

    The purposes of a spacecraft vacuum thermal test are to characterize the thermal control systems of the spacecraft and its components in the cruise configuration and to allow early retirement of risks associated with mission-specific and novel thermal designs. The orbital heat flux is simulated by infrared lamps, an infrared cage, or electric heaters. Because the infrared cage and electric heaters emit no visible light, and infrared lamps emit only limited visible light, an ordinary camera cannot operate at the low luminous density of the test. Moreover, some special instruments such as satellite-borne infrared sensors are sensitive to visible light, so supplementary lighting cannot be used during the test. To enable fine monitoring of the spacecraft and documentation of test progress under ultra-low luminous density, a night vision imaging system was designed and integrated by BISEE. The system consists of a high-gain image-intensified ICCD camera, an assistant luminance system, a glare protection system, a thermal control system, and a computer control system. Multi-frame accumulation target detection technology is adopted for high-quality image recognition in the captive test. The optical, mechanical, and electrical systems are designed and integrated to be highly adaptable to the vacuum environment. A molybdenum/polyimide thin-film electrical heater controls the temperature of the ICCD camera. Performance validation tests showed that the system could operate in a vacuum thermal environment of 1.33×10-3 Pa with a 100 K shroud temperature in the space environment simulator, and its working temperature was maintained at 5 °C during the two-day test. The night vision imaging system obtained video with a resolving power of 60 lp/mm.
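
    The principle of multi-frame accumulation can be sketched in a few lines: averaging N aligned frames of a static scene raises the signal-to-noise ratio roughly by sqrt(N), at the cost of motion blur for moving targets (the BISEE implementation itself is not reproduced here):

        import numpy as np

        def accumulate(frames):
            acc = np.zeros(frames[0].shape, dtype=np.float64)
            for f in frames:
                acc += f
            return (acc / len(frames)).astype(frames[0].dtype)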

  3. Microwave vision for robots

    NASA Technical Reports Server (NTRS)

    Lewandowski, Leon; Struckman, Keith

    1994-01-01

    Microwave Vision (MV), a concept originally developed in 1985, could play a significant role in the solution to robotic vision problems. Originally our Microwave Vision concept was based on a pattern matching approach employing computer based stored replica correlation processing. Artificial Neural Network (ANN) processor technology offers an attractive alternative to the correlation processing approach, namely the ability to learn and to adapt to changing environments. This paper describes the Microwave Vision concept, some initial ANN-MV experiments, and the design of an ANN-MV system that has led to a second patent disclosure in the robotic vision field.

  4. Visual system plasticity in mammals: the story of monocular enucleation-induced vision loss

    PubMed Central

    Nys, Julie; Scheyltjens, Isabelle; Arckens, Lutgarde

    2015-01-01

    The groundbreaking work of Hubel and Wiesel in the 1960s on ocular dominance plasticity instigated many studies of the visual system of mammals, enriching our understanding of how the development of its structure and function depends on high quality visual input through both eyes. These studies have mainly employed lid suturing, dark rearing and eye patching applied to different species to reduce or impair visual input, and have created extensive knowledge on binocular vision. However, not all aspects and types of plasticity in the visual cortex have been covered in full detail. In that regard, a more drastic deprivation method like enucleation, leading to complete vision loss, appears useful as it has more widespread effects on the afferent visual pathway and even on non-visual brain regions. One-eyed vision due to monocular enucleation (ME) profoundly affects the contralateral retinorecipient subcortical and cortical structures, thereby creating a powerful means to investigate cortical plasticity phenomena in which binocular competition has no vote. In this review, we will present current knowledge about the specific application of ME as an experimental tool to study visual and cross-modal brain plasticity, comparing early postnatal stages up into adulthood. The structural and physiological consequences of this type of extensive sensory loss as documented and studied in several animal species and human patients will be discussed. We will summarize how ME studies have been instrumental to our current understanding of the differentiation of sensory systems and how the structure and function of cortical circuits in mammals are shaped in response to such an extensive alteration in experience. In conclusion, we will highlight future perspectives and the clinical relevance of adding ME to the list of more longstanding deprivation models in visual system research. PMID:25972788

  5. Visual system plasticity in mammals: the story of monocular enucleation-induced vision loss.

    PubMed

    Nys, Julie; Scheyltjens, Isabelle; Arckens, Lutgarde

    2015-01-01

    The groundbreaking work of Hubel and Wiesel in the 1960s on ocular dominance plasticity instigated many studies of the visual system of mammals, enriching our understanding of how the development of its structure and function depends on high quality visual input through both eyes. These studies have mainly employed lid suturing, dark rearing and eye patching applied to different species to reduce or impair visual input, and have created extensive knowledge on binocular vision. However, not all aspects and types of plasticity in the visual cortex have been covered in full detail. In that regard, a more drastic deprivation method like enucleation, leading to complete vision loss, appears useful as it has more widespread effects on the afferent visual pathway and even on non-visual brain regions. One-eyed vision due to monocular enucleation (ME) profoundly affects the contralateral retinorecipient subcortical and cortical structures, thereby creating a powerful means to investigate cortical plasticity phenomena in which binocular competition has no vote. In this review, we will present current knowledge about the specific application of ME as an experimental tool to study visual and cross-modal brain plasticity, comparing early postnatal stages up into adulthood. The structural and physiological consequences of this type of extensive sensory loss as documented and studied in several animal species and human patients will be discussed. We will summarize how ME studies have been instrumental to our current understanding of the differentiation of sensory systems and how the structure and function of cortical circuits in mammals are shaped in response to such an extensive alteration in experience. In conclusion, we will highlight future perspectives and the clinical relevance of adding ME to the list of more longstanding deprivation models in visual system research.

  6. Down-to-the-runway enhanced flight vision system (EFVS) approach test results

    NASA Astrophysics Data System (ADS)

    McKinley, John B.; Heidhausen, Eric; Cramer, James A.; Krone, Norris J., Jr.

    2008-04-01

    Flight tests were conducted at Cambridge-Dorchester Airport (KCGE) and Easton Municipal Airport / Newnam Field (KESN) in a Cessna 402B aircraft using a head-up display (HUD) and a Kollsman Enhanced Vision System (EVS-I) infrared camera. These tests were sponsored by the MITRE Corporation's Center for Advanced Aviation System Development (CAASD) and the Federal Aviation Administration. Imagery from the EVS-I infrared camera, HUD guidance cues, and out-the-window video were each separately recorded at an engineering workstation for each approach, roll-out, and taxi operation. The EVS-I imagery was displayed on the HUD with guidance cues generated by the mission computer. The inertial flight path data were also separately recorded. Enhanced Flight Vision System (EFVS) approaches were conducted from the final approach fix to runway flare, touchdown, roll-out and taxi using the HUD and EVS-I sensor as the only visual reference. Flight conditions included a two-pilot crew, day, night, non-precision course offset approaches, an ILS approach, crosswind approaches, and missed approaches. Results confirmed the feasibility of safely conducting down-to-the-runway precision approaches in low visibility to runways with and without precision approach systems, when consideration is given to proper aircraft instrumentation, pilot training, and acceptable procedures. Operational benefits include improved runway occupancy rates, and reduced delays and diversions.

  7. Enhanced and synthetic vision system for autonomous all weather approach and landing

    NASA Astrophysics Data System (ADS)

    Korn, Bernd R.

    2007-04-01

    Within its research project ADVISE-PRO (Advanced visual system for situation awareness enhancement - prototype, 2003 - 2006), presented in this contribution, DLR has combined elements of Enhanced Vision and Synthetic Vision into one integrated system to allow all low-visibility operations independently of the infrastructure on the ground. The core element of this system is the adequate fusion of all information that is available on board. This fusion process is organized in a hierarchical manner. The most important subsystems are a) sensor-based navigation, which determines the aircraft's position relative to the runway by automatically analyzing sensor data (MMW, IR, radar altimeter) without using either (D)GPS or precise knowledge of the airport geometry, b) integrity monitoring of navigation data and terrain data, which verifies on-board navigation data ((D)GPS + INS) against sensor data (MMW radar, IR sensor, radar altimeter) and airport/terrain databases, c) an obstacle detection system, and finally d) a consistent description of the situation and a corresponding HMI for the pilot.

  8. An Integrated Vision-Based System for Spacecraft Attitude and Topology Determination for Formation Flight Missions

    NASA Technical Reports Server (NTRS)

    Rogers, Aaron; Anderson, Kalle; Mracek, Anna; Zenick, Ray

    2004-01-01

    With the space industry's increasing focus upon multi-spacecraft formation flight missions, the ability to precisely determine system topology and the orientation of member spacecraft relative to both inertial space and each other is becoming a critical design requirement. Topology determination in satellite systems has traditionally made use of GPS or ground uplink position data for low Earth orbits, or, alternatively, inter-satellite ranging between all formation pairs. While these techniques work, they are not ideal for extension to interplanetary missions or to large fleets of decentralized, mixed-function spacecraft. The Vision-Based Attitude and Formation Determination System (VBAFDS) represents a novel solution to both the navigation and topology determination problems with an integrated approach that combines a miniature star tracker with a suite of robust processing algorithms. By combining a single range measurement with vision data to resolve complete system topology, the VBAFDS design represents a simple, resource-efficient solution that is not constrained to certain Earth orbits or formation geometries. In this paper, analysis and design of the VBAFDS integrated guidance, navigation and control (GN&C) technology will be discussed, including hardware requirements, algorithm development, and simulation results in the context of potential mission applications.

  9. Gesture therapy: a vision-based system for upper extremity stroke rehabilitation.

    PubMed

    Sucar, L; Luis, Roger; Leder, Ron; Hernandez, Jorge; Sanchez, Israel

    2010-01-01

    Stroke is the main cause of motor and cognitive disabilities requiring therapy in the world. It is therefore important to develop rehabilitation technology that allows individuals who have suffered a stroke to practice intensive movement training without the expense of an always-present therapist. We have developed a low-cost vision-based system that allows stroke survivors to practice arm movement exercises at home or at the clinic, with periodic interactions with a therapist. The system integrates a virtual environment for facilitating repetitive movement training with computer vision algorithms that track the hand of a patient, using an inexpensive camera and a personal computer. This system, called Gesture Therapy, includes a gripper with a pressure sensor for hand and finger rehabilitation, and it tracks the head of the patient to detect and avoid trunk compensation. It has been evaluated in a controlled clinical trial at the National Institute for Neurology and Neurosurgery in Mexico City, comparing it with conventional occupational therapy. In this paper we describe the latest version of the Gesture Therapy system and summarize the results of the clinical trial.
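
    A minimal single-camera sketch of colour-histogram hand tracking in the spirit of the system above (not the authors' tracker), using OpenCV's CAMShift; the camera index and manual initialization are placeholders:

        import cv2
        import numpy as np

        cap = cv2.VideoCapture(0)                      # placeholder camera index
        ok, frame = cap.read()
        x, y, w, h = (int(v) for v in cv2.selectROI("init", frame))
        hsv0 = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv0], [0], None, [16], [0, 180])
        cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
        term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
        window = (x, y, w, h)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            back = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
            box, window = cv2.CamShift(back, window, term)
            pts = cv2.boxPoints(box).astype(np.int32)  # rotated box around hand
            cv2.polylines(frame, [pts], True, (0, 255, 0), 2)
            cv2.imshow("track", frame)
            if cv2.waitKey(1) == 27:                   # Esc to quit
                break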

  10. Laser cutting of irregular shape object based on stereo vision laser galvanometric scanning system

    NASA Astrophysics Data System (ADS)

    Qi, Li; Zhang, Yixin; Wang, Shun; Tang, Zhiqiang; Yang, Huan; Zhang, Xuping

    2015-05-01

    Irregular shape objects with different 3-dimensional (3D) appearances are difficult to be shaped into customized uniform pattern by current laser machining approaches. A laser galvanometric scanning system (LGS) could be a potential candidate since it can easily achieve path-adjustable laser shaping. However, without knowing the actual 3D topography of the object, the processing result may still suffer from 3D shape distortion. It is desirable to have a versatile auxiliary tool that is capable of generating 3D-adjusted laser processing path by measuring the 3D geometry of those irregular shape objects. This paper proposed the stereo vision laser galvanometric scanning system (SLGS), which takes the advantages of both the stereo vision solution and conventional LGS system. The 3D geometry of the object obtained by the stereo cameras is used to guide the scanning galvanometers for 3D-shape-adjusted laser processing. In order to achieve precise visual-servoed laser fabrication, these two independent components are integrated through a system calibration method using plastic thin film target. The flexibility of SLGS has been experimentally demonstrated by cutting duck feathers for badminton shuttle manufacture.

  11. Gesture Therapy: A Vision-Based System for Arm Rehabilitation after Stroke

    NASA Astrophysics Data System (ADS)

    Sucar, L. Enrique; Azcárate, Gildardo; Leder, Ron S.; Reinkensmeyer, David; Hernández, Jorge; Sanchez, Israel; Saucedo, Pedro

    Each year millions of people in the world survive a stroke; in the U.S. alone the figure is over 600,000 people per year. Movement impairments after stroke are typically treated with intensive, hands-on physical and occupational therapy for several weeks after the initial injury. However, due to economic pressures, stroke patients are receiving less therapy and going home sooner, so the potential benefit of the therapy is not completely realized. Thus, it is important to develop rehabilitation technology that allows individuals who have suffered a stroke to practice intensive movement training without the expense of an always-present therapist. Current solutions are too expensive, as they require a robotic system for rehabilitation. We have developed a low-cost, computer vision system that allows individuals with stroke to practice arm movement exercises at home or at the clinic, with periodic interactions with a therapist. The system integrates a web based virtual environment for facilitating repetitive movement training, with state-of-the-art computer vision algorithms that track the hand of a patient and obtain its 3-D coordinates, using two inexpensive cameras and a conventional personal computer. An initial prototype of the system has been evaluated in a pilot clinical study with promising results.

  12. Real-time Enhancement, Registration, and Fusion for a Multi-Sensor Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.

    2006-01-01

    Over the last few years NASA Langley Research Center (LaRC) has been developing an Enhanced Vision System (EVS) to aid pilots while flying in poor visibility conditions. The EVS captures imagery using two infrared video cameras. The cameras are placed in an enclosure that is mounted and flown forward-looking underneath the NASA LaRC ARIES 757 aircraft. The data streams from the cameras are processed in real-time and displayed on monitors on-board the aircraft. With proper processing the camera system can provide better-than-human-observed imagery, particularly during poor visibility conditions. However, achieving this goal requires several different stages of processing, including enhancement, registration, and fusion, and specialized processing hardware for real-time performance. We are using a real-time implementation of the Retinex algorithm for image enhancement, affine transformations for registration, and weighted sums to perform fusion. All of the algorithms are executed on a single TI DM642 digital signal processor (DSP) clocked at 720 MHz. The image processing components were added to the EVS system, tested, and demonstrated during flight tests in August and September of 2005. In this paper we briefly discuss the EVS image processing hardware and algorithms. We then discuss implementation issues and show examples of the results obtained during flight tests. Keywords: enhanced vision system, image enhancement, retinex, digital signal processing, sensor fusion
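
    For illustration, affine registration followed by weighted-sum fusion of two co-boresighted sensor streams can be sketched in a few lines of OpenCV on a general-purpose CPU. This is a schematic stand-in for the DSP implementation; the transform and weights shown are assumed values, not flight-test parameters.

```python
import cv2
import numpy as np

def register_and_fuse(base, other, affine_2x3, w_base=0.5):
    """Warp `other` into the coordinate frame of `base` with a
    premeasured 2x3 affine transform, then fuse by weighted sum.
    The transform and weights here are illustrative; a deployed
    system would estimate them from calibration imagery."""
    h, w = base.shape[:2]
    warped = cv2.warpAffine(other, affine_2x3, (w, h))
    # Weighted-sum fusion: w_base * base + (1 - w_base) * warped
    return cv2.addWeighted(base, w_base, warped, 1.0 - w_base, 0.0)

# Example transform: shift the second sensor 3 px right, 2 px down.
A = np.float32([[1, 0, 3], [0, 1, 2]])
# fused = register_and_fuse(sensor_a_frame, sensor_b_frame, A)
```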

  13. Vision-based system of AUV for an underwater pipeline tracker

    NASA Astrophysics Data System (ADS)

    Zhang, Tie-dong; Zeng, Wen-jing; Wan, Lei; Qin, Zai-bai

    2012-09-01

    This paper describes a new framework for detection and tracking of underwater pipelines, comprising both a software system and a hardware system. It is designed for the vision system of an AUV based on a monocular CCD camera. First, the real-time data flow from the image capture card is pre-processed and pipeline features are extracted for navigation. A region-saturation-degree measure is introduced to remove false edge-point groups after the Sobel operation, and a method is proposed to suppress the disturbance around the peak point during the Hough transform. Second, the continuity of the pipeline layout is exploited to improve the efficiency of line extraction. Once the line information has been obtained, the reference zone, which denotes the possible position of the pipeline in the image, is predicted by a Kalman filter; the filter estimates this position in the next frame so that the pipeline's location in each frame is known in advance. Results obtained on real optical vision data in tank experiments are presented and discussed. They show that the proposed system can detect and track an underwater pipeline online, and is effective and feasible.
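
    A minimal sketch of the prediction step described above is a constant-velocity Kalman filter over the Hough line parameters (rho, theta). The state layout, time step, and noise covariances below are assumptions for illustration, not values from the paper.

```python
import numpy as np

# Constant-velocity Kalman filter over the detected pipeline's Hough
# parameters (rho, theta); dt, Q and R are assumed values.
dt = 1.0
F = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])   # state: [rho, theta, d_rho, d_theta]
H = np.hstack([np.eye(2), np.zeros((2, 2))])    # only (rho, theta) is measured
Q = 1e-3 * np.eye(4)
R = 1e-2 * np.eye(2)

def predict(x, P):
    """Time update: H @ x then gives the predicted reference zone."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    """Measurement update with the line found in the current frame."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```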

  14. A synchronized multipoint vision-based system for displacement measurement of civil infrastructures.

    PubMed

    Ho, Hoai-Nam; Lee, Jong-Han; Park, Young-Soo; Lee, Jong-Jae

    2012-01-01

    This study presents an advanced multipoint vision-based system for dynamic displacement measurement of civil infrastructures. The proposed system consists of commercial camcorders, frame grabbers, low-cost PCs, and a wireless LAN access point. The images of target panels attached to a structure are captured by camcorders and streamed into the PC via frame grabbers. Then the displacements of targets are calculated using image processing techniques with premeasured calibration parameters. This system can simultaneously support two camcorders at the subsystem level for dynamic real-time displacement measurement. The data of each subsystem, including system time, are wirelessly transferred from the subsystem PCs to the master PC and vice versa. Furthermore, a synchronization process is implemented to ensure time synchronization between the master PC and subsystem PCs. Several shaking table tests were conducted to verify the effectiveness of the proposed system, and the results showed very good agreement with those from a conventional sensor, with an error of less than 2%.
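
    Conceptually, once the calibration parameters are premeasured, the displacement computation reduces to scaling the target panel's pixel motion into physical units. A minimal sketch under that assumption, with illustrative numbers:

```python
import numpy as np

def target_displacement(px_now, px_ref, target_mm, target_px):
    """Convert the pixel motion of a target panel's centroid into
    physical displacement using a premeasured scaling factor.
    target_mm / target_px is the calibration (mm per pixel) derived
    from the known panel dimensions; all values are illustrative."""
    scale = target_mm / target_px                    # mm per pixel
    return (np.asarray(px_now) - np.asarray(px_ref)) * scale

# e.g. a 100 mm target imaged at 250 px gives 0.4 mm/px resolution
print(target_displacement([512.8, 300.1], [510.0, 300.0], 100.0, 250.0))
```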

  15. A real-time surface inspection system for precision steel balls based on machine vision

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Ji; Tsai, Jhy-Cherng; Hsu, Ya-Chen

    2016-07-01

    Precision steel balls are one of the most fundamental components for motion and power transmission parts and they are widely used in industrial machinery and the automotive industry. As precision balls are crucial for the quality of these products, there is an urgent need to develop a fast and robust system for inspecting defects of precision steel balls. In this paper, a real-time system for inspecting surface defects of precision steel balls is developed based on machine vision. The developed system integrates a dual-lighting system, an unfolding mechanism and inspection algorithms for real-time signal processing and defect detection. The developed system is tested at a feeding speed of 4 pcs/s with a detection rate of 99.94% and an error rate of 0.10%. The minimum detectable surface flaw area is 0.01 mm2, which meets the requirement for inspecting ISO grade 100 precision steel balls.

  16. Computer vision

    SciTech Connect

    Not Available

    1982-01-01

    This paper discusses material from areas such as artificial intelligence, psychology, computer graphics, and image processing. The intent is to assemble a selection of this material in a form that will serve both as a senior/graduate-level academic text and as a useful reference to those building vision systems. This book has a strong artificial intelligence flavour, emphasising the belief that both the intrinsic image information and the internal model of the world are important in successful vision systems. The book is organised into four parts, based on descriptions of objects at four different levels of abstraction: generalised images (images and image-like entities); segmented images (images organised into subimages that are likely to correspond to interesting objects); geometric structures (quantitative models of image and world structures); and relational structures (complex symbolic descriptions of image and world structures). The book contains author and subject indexes.

  17. A Vision-Based System for Object Identification and Information Retrieval in a Smart Home

    NASA Astrophysics Data System (ADS)

    Grech, Raphael; Monekosso, Dorothy; de Jager, Deon; Remagnino, Paolo

    This paper describes a hand held device developed to assist people to locate and retrieve information about objects in a home. The system developed is a standalone device to assist persons with memory impairments such as people suffering from Alzheimer's disease. A second application is object detection and localization for a mobile robot operating in an ambient assisted living environment. The device relies on computer vision techniques to locate a tagged object situated in the environment. The tag is a 2D color printed pattern with a detection range and a field of view such that the user may point from a distance of over 1 meter.

  18. IR measurements and image processing for enhanced-vision systems in civil aviation

    NASA Astrophysics Data System (ADS)

    Beier, Kurt R.; Fries, Jochen; Mueller, Rupert M.; Palubinskas, Gintautas

    2001-08-01

    A series of IR measurements with a FLIR (Forward Looking Infrared) system during landing approaches to various airports has been performed. A real-time image processing procedure to detect and identify the runway and possible obstacles is discussed and demonstrated. It is based on IR image segmentation and information derived from synthetic vision data. The extracted information from the IR images will be combined with the appropriate information from a MMW (millimeter wave) radar sensor in the subsequent fusion processor. This fused information aims to increase the pilot's situation awareness.

  19. Automatic inspection of analog and digital meters in a robot vision system

    NASA Technical Reports Server (NTRS)

    Trivedi, Mohan M.; Marapane, Suresh; Chen, Chuxin

    1988-01-01

    A critical limitation of most robots utilized in industrial environments arises from their inability to utilize sensory feedback. This forces robot operation into totally preprogrammed or teleoperation modes. In order to endow the new generation of robots with higher levels of autonomy, techniques for sensing their work environments and for accurate and efficient analysis of the sensory data must be developed. In this paper, the detailed development of vision system modules for inspecting various types of meters, both analog and digital, encountered in robotic inspection and manipulation tasks is described. These modules are tested using industrial robots having multisensory input capability.

  20. Development of an aviator's helmet-mounted night-vision goggle system

    NASA Astrophysics Data System (ADS)

    Wilson, Gerry H.; McFarlane, Robert J.

    1990-10-01

    Helmet Mounted Systems (HMS) must be lightweight, balanced and compatible with life support and head protection assemblies. This paper discusses the design of one particular HMS, the GEC Ferranti NITE-OP/NIGHTBIRD aviator's Night Vision Goggle (NVG), developed under contracts to the Ministry of Defence for all three services in the United Kingdom (UK) for rotary-wing and fast-jet aircraft. The existing equipment constraints and the safety, human-factors and optical performance requirements are discussed, and the design solution is then presented after consideration of the material and manufacturing options.

  1. Integration of a Multi-Camera Vision System and Strapdown Inertial Navigation System (SDINS) with a Modified Kalman Filter

    PubMed Central

    Parnian, Neda; Golnaraghi, Farid

    2010-01-01

    This paper describes the development of a modified Kalman filter to integrate a multi-camera vision system and strapdown inertial navigation system (SDINS) for tracking a hand-held moving device for slow or nearly static applications over extended periods of time. In this algorithm, the magnitude of the changes in position and velocity are estimated and then added to the previous estimation of the position and velocity, respectively. The experimental results of the hybrid vision/SDINS design show that the position error of the tool tip in all directions is about one millimeter RMS. The proposed Kalman filter removes the effect of the gravitational force in the state-space model. As a result, the associated error is eliminated and the resulting position estimate is smoother and ripple-free. PMID:22219667
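
    A highly simplified sketch of the delta-state idea follows: estimate the change in position and velocity over one interval and add it to the previous estimate. The fixed blending gain below stands in for the full Kalman gain computation, and all symbols are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def delta_state_update(prev_pos, prev_vel, accel_meas, cam_delta_pos, dt, k_cam=0.7):
    """One step of a delta-state filter in the spirit described above.
    accel_meas: gravity-compensated IMU acceleration for the interval;
    cam_delta_pos: position change implied by the multi-camera system.
    k_cam is an assumed blending gain; a full implementation would
    compute it as a Kalman gain from the error covariances."""
    dv_ins = accel_meas * dt                              # inertial velocity change
    dp_ins = prev_vel * dt + 0.5 * accel_meas * dt**2     # inertial position change
    dp = k_cam * cam_delta_pos + (1 - k_cam) * dp_ins     # fuse vision and inertial deltas
    return prev_pos + dp, prev_vel + dv_ins

pos, vel = delta_state_update(np.zeros(3), np.zeros(3),
                              np.array([0.0, 0.0, 0.1]),
                              np.array([0.001, 0.0, 0.0]), dt=0.01)
```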

  2. Integration of a multi-camera vision system and strapdown inertial navigation system (SDINS) with a modified Kalman filter.

    PubMed

    Parnian, Neda; Golnaraghi, Farid

    2010-01-01

    This paper describes the development of a modified Kalman filter to integrate a multi-camera vision system and strapdown inertial navigation system (SDINS) for tracking a hand-held moving device for slow or nearly static applications over extended periods of time. In this algorithm, the magnitude of the changes in position and velocity are estimated and then added to the previous estimation of the position and velocity, respectively. The experimental results of the hybrid vision/SDINS design show that the position error of the tool tip in all directions is about one millimeter RMS. The proposed Kalman filter removes the effect of the gravitational force in the state-space model. As a result, the associated error is eliminated and the resulting position estimate is smoother and ripple-free.

  3. Synthesized night vision goggle

    NASA Astrophysics Data System (ADS)

    Zhou, Haixian

    2000-06-01

    The Synthesized Night Vision Goggle described in this paper is a new type of night vision goggle with multiple functions. It consists of three parts: a main observing system, a picture-superimposed system (or Cathode Ray Tube system) and a Charge-Coupled Device system.

  4. Towards an Autonomous Vision-Based Unmanned Aerial System against Wildlife Poachers

    PubMed Central

    Olivares-Mendez, Miguel A.; Fu, Changhong; Ludivig, Philippe; Bissyandé, Tegawendé F.; Kannan, Somasundar; Zurad, Maciej; Annaiyan, Arun; Voos, Holger; Campoy, Pascual

    2015-01-01

    Poaching is an illegal activity that remains out of control in many countries. Based on the 2014 report of the United Nations and Interpol, the illegal trade of global wildlife and natural resources amounts to nearly $213 billion every year, which is even helping to fund armed conflicts. Poaching activities around the world are further pushing many animal species to the brink of extinction. Unfortunately, the traditional methods to fight against poachers are not enough, hence the demand for more efficient approaches. In this context, the use of new technologies for sensors and algorithms, as well as aerial platforms, is crucial to face the sharp increase in poaching activities over the last few years. Our work is focused on the use of vision sensors on UAVs for the detection and tracking of animals and poachers, as well as the use of such sensors to control quadrotors during autonomous vehicle following and autonomous landing. PMID:26703597

  5. Towards an Autonomous Vision-Based Unmanned Aerial System against Wildlife Poachers.

    PubMed

    Olivares-Mendez, Miguel A; Fu, Changhong; Ludivig, Philippe; Bissyandé, Tegawendé F; Kannan, Somasundar; Zurad, Maciej; Annaiyan, Arun; Voos, Holger; Campoy, Pascual

    2015-12-12

    Poaching is an illegal activity that remains out of control in many countries. Based on the 2014 report of the United Nations and Interpol, the illegal trade of global wildlife and natural resources amounts to nearly $213 billion every year, which is even helping to fund armed conflicts. Poaching activities around the world are further pushing many animal species to the brink of extinction. Unfortunately, the traditional methods to fight against poachers are not enough, hence the demand for more efficient approaches. In this context, the use of new technologies for sensors and algorithms, as well as aerial platforms, is crucial to face the sharp increase in poaching activities over the last few years. Our work is focused on the use of vision sensors on UAVs for the detection and tracking of animals and poachers, as well as the use of such sensors to control quadrotors during autonomous vehicle following and autonomous landing.

  6. Enhancement of vision systems based on runway detection by image processing techniques

    NASA Astrophysics Data System (ADS)

    Gulec, N.; Sen Koktas, N.

    2012-06-01

    An explicit way of facilitating approach and landing operations of fixed-wing aircraft in degraded visual environments is presenting a coherent image of the designated runway via vision systems and hence increasing the situational awareness of the flight crew. Combined vision systems, in general, aim to provide a clear view of the aircraft exterior to the pilots using information from databases and imaging sensors. This study presents a novel method that consists of image-processing and tracking algorithms, which utilize information from navigation systems and databases along with the images from daylight and infrared cameras, for the recognition and tracking of the designated runway through the approach and landing operation. Video data simulating the straight-in approach of an aircraft from an altitude of 5000 ft down to 100 ft is synthetically generated by a COTS tool. A diverse set of atmospheric conditions, such as fog and low light levels, is simulated in these videos. Detection rate (DR) and false alarm rate (FAR) are used as the primary performance metrics. The results are presented in a format where the performance metrics are compared against the altitude of the aircraft. Depending on the visual environment and the source of the video, the performance metrics reach up to 98% for DR and down to 5% for FAR.

  7. The use of contact lens telescopic systems in low vision rehabilitation.

    PubMed

    Vincent, Stephen J

    2017-03-20

    Refracting telescopes are afocal compound optical systems consisting of two lenses that produce an apparent magnification of the retinal image. They are routinely used in visual rehabilitation in the form of monocular or binocular hand held low vision aids, and head or spectacle-mounted devices to improve distance visual acuity, and with slight modifications, to enhance acuity for near and intermediate tasks. Since the advent of ground glass haptic lenses in the 1930s, contact lenses have been employed as a useful refracting element of telescopic systems; primarily as a mobile ocular lens (the eyepiece), that moves with the eye. Telescopes which incorporate a contact lens eyepiece significantly improve the weight, cosmesis, and field of view compared to traditional spectacle-mounted telescopes, in addition to potential related psycho-social benefits. This review summarises the underlying optics and use of contact lenses to provide telescopic magnification from the era of Descartes, to Dallos, and the present day. The limitations and clinical challenges associated with such devices are discussed, along with the potential future use of reflecting telescopes incorporated within scleral lenses and tactile contact lens systems in low vision rehabilitation.

  8. FLILO (flying infrared for low-level operations): an enhanced vision system

    NASA Astrophysics Data System (ADS)

    Guell, Jeff J.

    2000-06-01

    FLILO is an Enhanced Vision System (EVS) that enhances situational awareness for safe low-level/night-time and moderate-weather flight operations (including take-off/landing, taxiing, approaches, drop zone identification, Short Austere Airfield operations, etc.) by providing electronic, real-time vision to the pilots. It consists of a series of imaging sensors, an Image Processor and a wide field-of-view (FOV) see-through Helmet Mounted Display (HMD) integrated with a Head Tracker. The current solution for safe night-time/low-level military flight operations is the use of the Turret-FLIR (Forward-Looking InfraRed). This system requires an additional operator/crew member (navigator) who controls the turret's movement and relays the information to the pilots; the image is presented on a Head-Down Display. FLILO presents the information directly to the pilots on an HMD, so each pilot has an independent view controlled by their head position, while utilizing the same sensors, which are static and fixed to the aircraft structure. Since there are no moving parts, the system provides high reliability while remaining more affordable than the Turret-FLIR solution. FLILO does not require a ball turret, so there is no extra drag or range impact on the aircraft's performance. Furthermore, with future use of real-time multi-band/multi-sensor image fusion, FLILO is the right step towards obtaining a safe autonomous landing guidance/0-0 flight operations capability.

  9. Implementation of a new segmentation algorithm using the Eye-RIS CMOS vision system

    NASA Astrophysics Data System (ADS)

    Karabiber, Fethullah; Arena, Paolo; De Fiore, Sebastiano; Vagliasindi, Guido; Fortuna, Luigi; Arik, Sabri

    2009-05-01

    Segmentation is the process of partitioning a digital image into multiple meaningful regions. Since such applications demand substantial computational power in real time, we have implemented a new segmentation algorithm using the capabilities of the Eye-RIS Vision System to execute the algorithm in very short time. The segmentation algorithm is implemented in three main steps. In the first, pre-processing step, the images are acquired and noise filtering through a Gaussian function is performed. In the second step, a Sobel-operator-based edge detection approach is implemented on the system. In the last step, morphological and logical operations are used to segment the images as post-processing. The experimental results for different images show the accuracy of the proposed segmentation algorithm. Visual inspection and timing analysis (7.83 ms, 127 frames/s) prove that the proposed segmentation algorithm can be executed for real-time video processing applications. These results also prove the capability of the Eye-RIS Vision System for real-time image processing applications.
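
    The three steps map naturally onto standard image operations. The sketch below reproduces the pipeline with OpenCV on a conventional CPU rather than on the Eye-RIS focal-plane processor; the kernel sizes and edge threshold are assumed values.

```python
import cv2
import numpy as np

def segment(gray):
    """Three-step segmentation mirroring the pipeline above, sketched
    on a conventional CPU; parameters are illustrative assumptions."""
    # 1) pre-processing: Gaussian noise filtering
    smooth = cv2.GaussianBlur(gray, (5, 5), 1.0)
    # 2) Sobel-based edge detection
    gx = cv2.Sobel(smooth, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(smooth, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)
    edges = (mag > 60).astype(np.uint8) * 255
    # 3) post-processing: morphological closing to form solid regions
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
```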

  10. Multi-spectrum-based enhanced synthetic vision system for aircraft DVE operations

    NASA Astrophysics Data System (ADS)

    Kashyap, Sudesh K.; Naidu, V. P. S.; Shanthakumar, N.

    2016-04-01

    This paper focuses on R&D being carried out at CSIR-NAL on an Enhanced Synthetic Vision System (ESVS) for the Indian regional transport aircraft, to enhance all-weather operational capabilities with safety and pilot Situation Awareness (SA) improvements. A flight simulator has been developed to study ESVS-related technologies, to develop ESVS operational concepts for all-weather approach and landing, and to provide quantitative and qualitative information that could be used to develop criteria for all-weather approach and landing at regional airports in India. An Enhanced Vision System (EVS) hardware prototype with a long-wave infrared sensor and a low-light CMOS camera was used to carry out field trials on a ground vehicle at an airport runway under different visibility conditions. A data acquisition and playback system has been developed to capture EVS sensor data (images) in time synchronization with the test vehicle's inertial navigation data during EVS field experiments, and to play back the experimental data on the ESVS flight simulator for ESVS research and concept studies. Efforts are underway to conduct EVS flight experiments on CSIR-NAL's research aircraft HANSA in a Degraded Visual Environment (DVE).

  11. Computer vision in nuclear medicine

    NASA Astrophysics Data System (ADS)

    Sommer, Gerald

    1990-11-01

    Computer vision is used to overcome the mismatch between user models and implementation models of software systems for image analysis in nuclear medicine. Computer vision in nuclear medicine results in active support of the user by the system. This is achieved by modeling the imaging equipment, the scenes of interest, and the process of visual image interpretation. Computer vision is demonstrated especially at the low and medium levels. Particular emphasis is given to the estimation of image quality, a uniform approach to the enhancement and restoration of images, and the analysis of the shape and dynamics of patterns.

  12. Alaskan flight trials of a synthetic vision system for instrument landings of a piston twin aircraft

    NASA Astrophysics Data System (ADS)

    Barrows, Andrew K.; Alter, Keith W.; Jennings, Chad W.; Powell, J. D.

    1999-07-01

    Stanford University has developed a low-cost prototype synthetic vision system and flight tested it onboard general aviation aircraft. The display aids pilots by providing an 'out the window' view, making visualization of the desired flight path a simple task. Predictor symbology provides guidance on straight and curved paths presented in a 'tunnel-in-the-sky' format. Based on commodity PC hardware to achieve low cost, the Tunnel Display system uses differential GPS (typically from Stanford prototype Wide Area Augmentation System hardware) for positioning and GPS-aided inertial sensors for attitude determination. The display has been flown onboard Piper Dakota and Beechcraft Queen Air aircraft at several different locations. This paper describes the system, its development, and flight trials culminating with tests in Alaska during the summer of 1998. Operational experience demonstrated the Tunnel Display's ability to increase flight-path following accuracy and situational awareness while easing the task of instrument flying.

  13. A simple machine vision-driven system for measuring optokinetic reflex in small animals.

    PubMed

    Shirai, Yoshihiro; Asano, Kenta; Takegoshi, Yoshihiro; Uchiyama, Shu; Nonobe, Yuki; Tabata, Toshihide

    2013-09-01

    The optokinetic reflex (OKR) is useful to monitor the function of the visual and motor nervous systems. However, OKR measurement is not open to all because dedicated commercial equipment or detailed instructions for building in-house equipment is rarely offered. Here we describe the design of an easy-to-install/use yet reliable OKR measuring system including a computer program to visually locate the pupil and a mathematical procedure to estimate the pupil azimuth from the location data. The pupil locating program was created on a low-cost machine vision development platform, whose graphical user interface allows one to compose and operate the program without programming expertise. Our system located mouse pupils at a high success rate (~90 %), estimated their azimuth precisely (~94 %), and detected changes in OKR gain due to the pharmacological modulation of the cerebellar flocculi. The system would promote behavioral assessment in physiology, pharmacology, and genetics.
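
    The paper's exact mathematical procedure is not reproduced here; the sketch below is one plausible reconstruction that models the eye as a sphere, so that the pupil's lateral offset in the image is the eye radius times the sine of the azimuth. The function name, arguments, and geometry are all illustrative assumptions.

```python
import numpy as np

def pupil_azimuth(x_pupil, x_eye_centre, eye_radius_px):
    """Estimate the pupil's horizontal azimuth (degrees) from its image
    position under a spherical-eye model: offset = R * sin(azimuth).
    A hypothetical reconstruction, not the authors' exact procedure."""
    s = (x_pupil - x_eye_centre) / eye_radius_px
    return np.degrees(np.arcsin(np.clip(s, -1.0, 1.0)))

# e.g. pupil 40 px left of centre with a 160 px eye radius
print(pupil_azimuth(280.0, 320.0, 160.0))   # -> about -14.5 degrees
```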

  14. Developing Crew Health Care and Habitability Systems for the Exploration Vision

    NASA Technical Reports Server (NTRS)

    Laurini, Kathy; Sawin, Charles F.

    2006-01-01

    This paper will discuss the specific mission architectures associated with the NASA Exploration Vision and review the challenges and drivers associated with developing crew health care and habitability systems to manage human system risks. Crew health care systems must be provided to manage crew health within acceptable limits, as well as respond to medical contingencies that may occur during exploration missions. Habitability systems must enable crew performance for the tasks necessary to support the missions. During the summer of 2005, NASA defined its exploration architecture including blueprints for missions to the moon and to Mars. These mission architectures require research and technology development to focus on the operational risks associated with each mission, as well as the risks to long term astronaut health. This paper will review the highest priority risks associated with the various missions and discuss NASA's strategies and plans for performing the research and technology development necessary to manage the risks to acceptable levels.

  15. Commercial Flight Crew Decision-Making during Low-Visibility Approach Operations Using Fused Synthetic/Enhanced Vision Systems

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Bailey, Randall E.; Prinzel, Lawrence J., III

    2007-01-01

    NASA is investigating revolutionary crew-vehicle interface technologies that strive to proactively overcome aircraft safety barriers that would otherwise constrain the full realization of the next-generation air transportation system. A fixed-based piloted simulation experiment was conducted to evaluate the complementary use of Synthetic and Enhanced Vision technologies. Specific focus was placed on new techniques for integration and/or fusion of Enhanced and Synthetic Vision and its impact within a two-crew flight deck on the crew's decision-making process during low-visibility approach and landing operations. Overall, the experimental data showed that significant improvements in situation awareness, without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying. During non-normal operations, the ability of the crew to handle substantial navigational errors and runway incursions was neither improved nor adversely impacted by the display concepts. The addition of Enhanced Vision may not, in and of itself, provide an improvement in runway incursion detection without being specifically tailored for this application. Existing enhanced vision system procedures were effectively used in the crew decision-making process during approach and missed approach operations, but having to forcibly transition from an excellent FLIR image to natural vision by 100 ft above field level was awkward for the pilot-flying.

  16. Solid state active/passive night vision imager using continuous-wave laser diodes and silicon focal plane arrays

    NASA Astrophysics Data System (ADS)

    Vollmerhausen, Richard H.

    2013-04-01

    Passive imaging offers covertness and low power, while active imaging provides longer range target acquisition without the need for natural or external illumination. This paper describes a focal plane array (FPA) concept that has the low noise needed for state-of-the-art passive imaging and the high-speed gating needed for active imaging. The FPA is used with highly efficient but low-peak-power laser diodes to create a night vision imager that has the size, weight, and power attributes suitable for man-portable applications. Video output is provided in both the active and passive modes. In addition, the active mode is Class 1 eye safe and is not visible to the naked eye or to night vision goggles.

  17. Hardware implementation of a neural vision system based on a neural network using integrate-and-fire neurons

    NASA Astrophysics Data System (ADS)

    González, M.; Lamela, H.; Jiménez, M.; Gimeno, J.; Ruiz-Llata, M.

    2007-04-01

    In this paper we present the scheme of a control circuit used in an image processing system to be implemented in a neural network of integrate-and-fire neurons with a high level of connectivity and reconfigurability, based on the Address-Event Representation. This scheme will be employed as a pre-processing stage for a vision system whose processing core is an Optical Broadcast Neural Network (OBNN) [Optical Engineering Letters 42(9), 2488 (2003)]. The proposed vision system allows patterns from any image acquisition system to be introduced for subsequent processing.

  18. Acquired color vision deficiency.

    PubMed

    Simunovic, Matthew P

    2016-01-01

    Acquired color vision deficiency occurs as the result of ocular, neurologic, or systemic disease. A wide array of conditions may affect color vision, ranging from diseases of the ocular media through to pathology of the visual cortex. Traditionally, acquired color vision deficiency is considered a separate entity from congenital color vision deficiency, although emerging clinical and molecular genetic data would suggest a degree of overlap. We review the pathophysiology of acquired color vision deficiency, the data on its prevalence, theories for the preponderance of acquired S-mechanism (or tritan) deficiency, and discuss tests of color vision. We also briefly review the types of color vision deficiencies encountered in ocular disease, with an emphasis placed on larger or more detailed clinical investigations.

  19. Machine vision

    SciTech Connect

    Horn, D.

    1989-06-01

    To keep up with the speeds of modern production lines, most machine vision applications require very powerful computers (often parallel-processing machines), which process millions of points of data in real time. The human brain performs approximately 100 billion logical floating-point operations each second; that is 400 times the speed of a Cray-1 supercomputer. The right software must be developed for parallel-processing computers. The NSF has awarded Rensselaer Polytechnic Institute (Troy, N.Y.) a $2 million grant for parallel- and image-processing software research. Over the last 15 years, Rensselaer has been conducting image-processing research, including work with high-definition TV (HDTV) and image coding and understanding. A similar NSF grant has been awarded to Michigan State University (East Lansing, Mich.). Neural networks are supposed to emulate human learning patterns. These networks and their hardware implementations (neurocomputers) show a great deal of promise for machine vision systems because they allow the systems to use sensory data input more effectively. Neurocomputers excel at pattern-recognition tasks when input data are fuzzy or the vision algorithm is not optimal and is difficult to ascertain.

  20. A vision-based system for intelligent monitoring: human behaviour analysis and privacy by context.

    PubMed

    Chaaraoui, Alexandros Andre; Padilla-López, José Ramón; Ferrández-Pastor, Francisco Javier; Nieto-Hidalgo, Mario; Flórez-Revuelta, Francisco

    2014-05-20

    Due to progress and demographic change, society is facing a crucial challenge related to increased life expectancy and a higher number of people in situations of dependency. As a consequence, there exists a significant demand for support systems for personal autonomy. This article outlines the vision@home project, whose goal is to extend independent living at home for elderly and impaired people, providing care and safety services by means of vision-based monitoring. Different kinds of ambient-assisted living services are supported, from the detection of home accidents, to telecare services. In this contribution, the specification of the system is presented, and novel contributions are made regarding human behaviour analysis and privacy protection. By means of a multi-view setup of cameras, people's behaviour is recognised based on human action recognition. For this purpose, a weighted feature fusion scheme is proposed to learn from multiple views. In order to protect the right to privacy of the inhabitants when a remote connection occurs, a privacy-by-context method is proposed. The experimental results of the behaviour recognition method show an outstanding performance, as well as support for multi-view scenarios and real-time execution, which are required in order to provide the proposed services.

  1. Terrain Portrayal for Synthetic Vision Systems Head-Down Displays Evaluation Results: Compilation of Pilot Transcripts

    NASA Technical Reports Server (NTRS)

    Hughes, Monica F.; Glaab, Louis J.

    2007-01-01

    The Terrain Portrayal for Head-Down Displays (TP-HDD) simulation experiment addressed multiple objectives involving twelve display concepts (two baseline concepts without terrain and ten synthetic vision system (SVS) variations), four evaluation maneuvers (two en route and one approach maneuver, plus a rare-event scenario), and three pilot group classifications. The TP-HDD SVS simulation was conducted in the NASA Langley Research Center's (LaRC's) General Aviation WorkStation (GAWS) facility. The results from this simulation establish the relationship between terrain portrayal fidelity and pilot situation awareness, workload, stress, and performance and are published in the NASA TP entitled Terrain Portrayal for Synthetic Vision Systems Head-Down Displays Evaluation Results. This is a collection of pilot comments during each run of the TP-HDD simulation experiment. These comments are not the full transcripts, but a condensed version where only the salient remarks that applied to the scenario, the maneuver, or the actual research itself were compiled.

  2. Development of Four Vision Camera System for a Micro-Uav

    NASA Astrophysics Data System (ADS)

    Grenzdörffer, G.; Niemeyer, F.; Schmidt, F.

    2012-07-01

    Due to regulations, micro-UAVs with a maximum take-off weight of <5 kg are commonly restricted to applications within the line of sight. An extension of the ground coverage is possible by using a set of oblique cameras. The development of such a multi-camera system with a total weight of 1 kg under photogrammetric aspects is quite challenging. The introduced four-vision camera system consists of four industrial-grade oblique 1.3-megapixel cameras ("four vision") with 9 mm lenses and one nadir-looking camera with a 6 mm lens. Unlike common consumer-grade cameras, triggering and image data storage are handled externally on a small PC with a 64 GB hard disk and a weight of only 250 g. The key question to be answered in this paper is how good, in a photogrammetric and radiometric sense, the small cameras are, and whether they need individual calibration treatment or a single set of calibration parameters is sufficient for all cameras.

  3. Monocular Vision System for Fixed Altitude Flight of Unmanned Aerial Vehicles.

    PubMed

    Huang, Kuo-Lung; Chiu, Chung-Cheng; Chiu, Sheng-Yi; Teng, Yao-Jen; Hao, Shu-Sheng

    2015-07-13

    The fastest and most economical method of acquiring terrain images is aerial photography. The use of unmanned aerial vehicles (UAVs) has been investigated for this task. However, UAVs present a range of challenges such as flight altitude maintenance. This paper reports a method that combines skyline detection with a stereo vision algorithm to enable the flight altitude of UAVs to be maintained. A monocular camera is mounted on the downside of the aircraft's nose to collect continuous ground images, and the relative altitude is obtained via a stereo vision algorithm from the velocity of the UAV. Image detection is used to obtain terrain images, and to measure the relative altitude from the ground to the UAV. The UAV flight system can be set to fly at a fixed and relatively low altitude to obtain the same resolution of ground images. A forward-looking camera is mounted on the upside of the aircraft's nose. In combination with the skyline detection algorithm, this helps the aircraft to maintain a stable flight pattern. Experimental results show that the proposed system enables UAVs to obtain terrain images at constant resolution, and to detect the relative altitude along the flight path.
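
    The altitude computation amounts to stereo-from-motion: two frames taken a known time apart form a stereo pair whose baseline is the distance flown in that interval. A minimal sketch under that assumption follows; it omits the skyline-based attitude stabilisation, and the numbers are illustrative.

```python
def altitude_from_motion(v_ground, dt, focal_px, pixel_shift):
    """Estimate relative altitude with a downward-looking monocular
    camera by treating two frames taken dt seconds apart as a stereo
    pair whose baseline is the distance flown (v_ground * dt).
    pixel_shift is the tracked ground-feature displacement in pixels."""
    baseline = v_ground * dt                    # metres travelled between frames
    if pixel_shift <= 0:
        raise ValueError("feature must move between frames")
    return focal_px * baseline / pixel_shift    # classic Z = f * B / d

# e.g. 15 m/s ground speed, 0.1 s between frames, 800 px focal length,
# 24 px feature shift -> about 50 m relative altitude
print(altitude_from_motion(15.0, 0.1, 800.0, 24.0))
```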

  4. Monocular Vision System for Fixed Altitude Flight of Unmanned Aerial Vehicles

    PubMed Central

    Huang, Kuo-Lung; Chiu, Chung-Cheng; Chiu, Sheng-Yi; Teng, Yao-Jen; Hao, Shu-Sheng

    2015-01-01

    The fastest and most economical method of acquiring terrain images is aerial photography. The use of unmanned aerial vehicles (UAVs) has been investigated for this task. However, UAVs present a range of challenges such as flight altitude maintenance. This paper reports a method that combines skyline detection with a stereo vision algorithm to enable the flight altitude of UAVs to be maintained. A monocular camera is mounted on the downside of the aircraft’s nose to collect continuous ground images, and the relative altitude is obtained via a stereo vision algorithm from the velocity of the UAV. Image detection is used to obtain terrain images, and to measure the relative altitude from the ground to the UAV. The UAV flight system can be set to fly at a fixed and relatively low altitude to obtain the same resolution of ground images. A forward-looking camera is mounted on the upside of the aircraft’s nose. In combination with the skyline detection algorithm, this helps the aircraft to maintain a stable flight pattern. Experimental results show that the proposed system enables UAVs to obtain terrain images at constant resolution, and to detect the relative altitude along the flight path. PMID:26184213

  5. A Vision-Based System for Intelligent Monitoring: Human Behaviour Analysis and Privacy by Context

    PubMed Central

    Chaaraoui, Alexandros Andre; Padilla-López, José Ramón; Ferrández-Pastor, Francisco Javier; Nieto-Hidalgo, Mario; Flórez-Revuelta, Francisco

    2014-01-01

    Due to progress and demographic change, society is facing a crucial challenge related to increased life expectancy and a higher number of people in situations of dependency. As a consequence, there exists a significant demand for support systems for personal autonomy. This article outlines the vision@home project, whose goal is to extend independent living at home for elderly and impaired people, providing care and safety services by means of vision-based monitoring. Different kinds of ambient-assisted living services are supported, from the detection of home accidents, to telecare services. In this contribution, the specification of the system is presented, and novel contributions are made regarding human behaviour analysis and privacy protection. By means of a multi-view setup of cameras, people's behaviour is recognised based on human action recognition. For this purpose, a weighted feature fusion scheme is proposed to learn from multiple views. In order to protect the right to privacy of the inhabitants when a remote connection occurs, a privacy-by-context method is proposed. The experimental results of the behaviour recognition method show an outstanding performance, as well as support for multi-view scenarios and real-time execution, which are required in order to provide the proposed services. PMID:24854209

  6. Optimization of spatial light distribution through genetic algorithms for vision systems applied to quality control

    NASA Astrophysics Data System (ADS)

    Castellini, P.; Cecchini, S.; Stroppa, L.; Paone, N.

    2015-02-01

    The paper presents an adaptive illumination system for image quality enhancement in vision-based quality control systems. In particular, a spatial modulation of illumination intensity is proposed in order to improve image quality, thus compensating for different target scattering properties, local reflections and fluctuations of ambient light. The desired spatial modulation of illumination is obtained by a digital light projector, used to illuminate the scene with an arbitrary spatial distribution of light intensity, designed to improve feature extraction in the region of interest. The spatial distribution of illumination is optimized by running a genetic algorithm. An image quality estimator is used to close the feedback loop and to stop iterations once the desired image quality is reached. The technique proves particularly valuable for optimizing the spatial illumination distribution in the region of interest, with the remarkable capability of the genetic algorithm to adapt the light distribution to very different target reflectivity and ambient conditions. The final objective of the proposed technique is the improvement of the matching score in the recognition of parts through matching algorithms, hence of the diagnosis of machine vision-based quality inspections. The procedure has been validated both by a numerical model and by an experimental test, referring to a significant problem of quality control for the washing machine manufacturing industry: the recognition of a metallic clamp. Its applicability to other domains is also presented, specifically for the visual inspection of shoes with retro-reflective tape and T-shirts with paillettes.
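
    As a sketch of the optimisation loop (simplified here to selection plus mutation, without the crossover a full GA would also use), each individual below is a coarse grid of projector intensities in [0, 1], and project_and_score is an assumed callable that displays a pattern, grabs a frame, and returns the image-quality estimate used as fitness.

```python
import numpy as np

rng = np.random.default_rng(0)

def optimise_illumination(project_and_score, n_cells=64, pop=20, gens=50):
    """Evolve a coarse projector-intensity grid to maximise an
    image-quality score. project_and_score(pattern) -> float is an
    assumed interface to the projector/camera loop."""
    pop_mat = rng.random((pop, n_cells))
    for _ in range(gens):
        fitness = np.array([project_and_score(ind) for ind in pop_mat])
        order = np.argsort(fitness)[::-1]             # best individuals first
        elite = pop_mat[order[:pop // 2]]             # selection: keep top half
        children = elite[rng.integers(0, len(elite), pop - len(elite))].copy()
        mask = rng.random(children.shape) < 0.1       # mutate ~10% of cells
        children[mask] = rng.random(int(mask.sum()))
        pop_mat = np.vstack([elite, children])
    scores = [project_and_score(ind) for ind in pop_mat]
    return pop_mat[int(np.argmax(scores))]

# Offline stand-in for the real projector/camera loop, e.g.:
# best = optimise_illumination(lambda p: -np.var(p - 0.5))
```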

  7. Differential GNSS and Vision-Based Tracking to Improve Navigation Performance in Cooperative Multi-UAV Systems.

    PubMed

    Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio

    2016-12-17

    Autonomous navigation of micro-UAVs is typically based on the integration of low-cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post-processing phase) exploiting formation flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on the DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility of exploiting accurate attitude information independent of magnetic and inertial sensors.
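
    The fusion step is a standard Extended Kalman Filter measurement update in which the DGPS/Vision virtual sensor supplies the pseudo-measurement. A generic sketch follows; h_fun, H_jac, and the covariances are assumed inputs rather than the paper's specific models.

```python
import numpy as np

def ekf_update(x, P, z, h_fun, H_jac, R):
    """Generic EKF measurement update. Here z would be the attitude
    (or relative-position) pseudo-measurement produced by the
    DGPS/vision virtual sensor described above; h_fun maps the state
    to the expected measurement and H_jac is its Jacobian at x."""
    y = z - h_fun(x)                          # innovation
    S = H_jac @ P @ H_jac.T + R               # innovation covariance
    K = P @ H_jac.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H_jac) @ P
    return x_new, P_new
```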

  8. Vision-based on-board collision avoidance system for aircraft navigation

    NASA Astrophysics Data System (ADS)

    Candamo, Joshua; Kasturi, Rangachar; Goldgof, Dmitry; Sarkar, Sudeep

    2006-05-01

    This paper presents an automated classification system for images based on their visual complexity. The image complexity is approximated using a clutter measure, and parameters for processing it are dynamically chosen. The classification method is part of a vision-based collision avoidance system for low-altitude aerial vehicles, intended to be used during search and rescue operations in urban settings. The collision avoidance system focuses on detecting thin obstacles such as wires and power lines. Automatic parameter selection for edge detection shows a 5% and 12% performance improvement for medium and heavily cluttered images, respectively. The automatic classification enabled the algorithm to identify nearly invisible power lines in 60 frames of video footage from an SUAV helicopter that crashed during a search and rescue mission after Hurricane Katrina, without any manual intervention.

  9. A Bevel Gear Quality Inspection System Based on Multi-Camera Vision Technology

    PubMed Central

    Liu, Ruiling; Zhong, Dexing; Lyu, Hongqiang; Han, Jiuqiang

    2016-01-01

    Surface defect detection and dimension measurement of automotive bevel gears by manual inspection are costly, inefficient, low speed and low accuracy. In order to solve these problems, a synthetic bevel gear quality inspection system based on multi-camera vision technology is developed. The system can detect surface defects and measure gear dimensions simultaneously. Three efficient algorithms named Neighborhood Average Difference (NAD), Circle Approximation Method (CAM) and Fast Rotation-Position (FRP) are proposed. The system can detect knock damage, cracks, scratches, dents, gibbosity or repeated cutting of the spline, etc. The smallest detectable defect is 0.4 mm × 0.4 mm and the precision of dimension measurement is about 40–50 μm. One inspection process takes no more than 1.3 s. Both precision and speed meet the requirements of real-time online inspection in bevel gear production. PMID:27571078

  10. Automated vision system for fabric defect inspection using Gabor filters and PCNN.

    PubMed

    Li, Yundong; Zhang, Cheng

    2016-01-01

    In this study, an embedded machine vision system using Gabor filters and a Pulse Coupled Neural Network (PCNN) is developed to identify defects in warp-knitted fabrics automatically. The system consists of smart cameras and a Human Machine Interface (HMI) controller. A hybrid detection algorithm combining Gabor filters and PCNN runs on the SoC processor of the smart camera. First, Gabor filters are employed to enhance the contrast of images captured by a CMOS sensor. Second, defect areas are segmented by the PCNN with adaptive parameter setting. Third, the smart cameras notify the controller to stop the warp-knitting machine once defects are found. Experimental results demonstrate that the hybrid method is superior to Gabor and wavelet methods in detection accuracy. Actual operation in a textile factory verifies the effectiveness of the inspection system.
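
    A rough sketch of the first two stages is shown below, with a simple statistical threshold standing in for the PCNN segmentation stage; the Gabor parameters are assumed values, not those of the deployed system.

```python
import cv2
import numpy as np

def gabor_defect_mask(gray):
    """Enhance fabric texture with a small Gabor bank, then flag pixels
    whose response deviates strongly from the fabric's typical response.
    A conventional threshold replaces the PCNN here; all filter
    parameters are illustrative assumptions."""
    responses = []
    for theta in np.arange(0, np.pi, np.pi / 4):      # 4 orientations
        kern = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                  lambd=10.0, gamma=0.5, psi=0)
        responses.append(cv2.filter2D(gray, cv2.CV_32F, kern))
    enhanced = np.max(responses, axis=0)              # strongest response per pixel
    mu, sigma = enhanced.mean(), enhanced.std()
    return (np.abs(enhanced - mu) > 3 * sigma).astype(np.uint8) * 255
```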

  11. A Respiratory Movement Monitoring System Using Fiber-Grating Vision Sensor for Diagnosing Sleep Apnea Syndrome

    NASA Astrophysics Data System (ADS)

    Takemura, Yasuhiro; Sato, Jun-Ya; Nakajima, Masato

    2005-01-01

    A non-restrictive and non-contact respiratory movement monitoring system that finds the boundary between chest and abdomen automatically and detects the vertical movement of each part of the body separately is proposed. The system uses a fiber-grating vision sensor, and the boundary position detection is carried out by calculating the centers of gravity of upward-moving and downward-moving sampling points, respectively. In an experiment to evaluate the ability to detect the respiratory movement signals of each part and to discriminate between obstructive and central apneas, the detected signals of the two parts and their total clearly showed the peculiarities of obstructive and central apnea. The cross talk between the two categories, classified automatically according to several rules that reflect these peculiarities, was ≤15%. This result is sufficient for discriminating central sleep apnea syndrome from obstructive sleep apnea syndrome and indicates that the system is promising as screening equipment.
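
    The boundary-detection rule described above can be sketched directly: split the fiber-grating sampling points by the sign of their vertical motion, take the two centres of gravity, and place the boundary between them. The variable names and the midpoint rule are illustrative assumptions.

```python
import numpy as np

def chest_abdomen_boundary(points, dz):
    """Locate the chest/abdomen boundary from fiber-grating samples.
    points: (N, 2) body-plane coordinates of the sampling points;
    dz: vertical velocity of each point. Returns the midpoint of the
    centres of gravity of the upward- and downward-moving groups,
    or None when all points move in phase (no paradoxical motion)."""
    points, dz = np.asarray(points), np.asarray(dz)
    up, down = points[dz > 0], points[dz < 0]
    if len(up) == 0 or len(down) == 0:
        return None
    cog_up, cog_down = up.mean(axis=0), down.mean(axis=0)
    return 0.5 * (cog_up + cog_down)       # boundary estimate between regions
```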

  12. A Bevel Gear Quality Inspection System Based on Multi-Camera Vision Technology.

    PubMed

    Liu, Ruiling; Zhong, Dexing; Lyu, Hongqiang; Han, Jiuqiang

    2016-08-25

    Surface defect detection and dimension measurement of automotive bevel gears by manual inspection are costly, inefficient, low speed and low accuracy. In order to solve these problems, a synthetic bevel gear quality inspection system based on multi-camera vision technology is developed. The system can detect surface defects and measure gear dimensions simultaneously. Three efficient algorithms named Neighborhood Average Difference (NAD), Circle Approximation Method (CAM) and Fast Rotation-Position (FRP) are proposed. The system can detect knock damage, cracks, scratches, dents, gibbosity or repeated cutting of the spline, etc. The smallest detectable defect is 0.4 mm × 0.4 mm and the precision of dimension measurement is about 40-50 μm. One inspection process takes no more than 1.3 s. Both precision and speed meet the requirements of real-time online inspection in bevel gear production.

  13. Combat Systems Vision 2030 Combat System Architecture: Design Principles and Methodology

    DTIC Science & Technology

    1991-12-01

    infrastructure to support it. Currently, industry activity in the area of information system development is high. In essence, corporations have automated their...lifecycle costs, etc., and distributes or allocates them to the subsystems of the functional architecture. At this point, the functional architecture... cost, etc., to the combat system elements. The third step in developing a feasibility design is that of tradeoff and optimization. The best design is

  14. Cognitive vision system for control of dexterous prosthetic hands: Experimental evaluation

    PubMed Central

    2010-01-01

    Background Dexterous prosthetic hands that were developed recently, such as SmartHand and i-LIMB, are highly sophisticated; they have individually controllable fingers and the thumb that is able to abduct/adduct. This flexibility allows implementation of many different grasping strategies, but also requires new control algorithms that can exploit the many degrees of freedom available. The current study presents and tests the operation of a new control method for dexterous prosthetic hands. Methods The central component of the proposed method is an autonomous controller comprising a vision system with rule-based reasoning mounted on a dexterous hand (CyberHand). The controller, termed cognitive vision system (CVS), mimics biological control and generates commands for prehension. The CVS was integrated into a hierarchical control structure: 1) the user triggers the system and controls the orientation of the hand; 2) a high-level controller automatically selects the grasp type and size; and 3) an embedded hand controller implements the selected grasp using closed-loop position/force control. The operation of the control system was tested in 13 healthy subjects who used Cyberhand, attached to the forearm, to grasp and transport 18 objects placed at two different distances. Results The system correctly estimated grasp type and size (nine commands in total) in about 84% of the trials. In an additional 6% of the trials, the grasp type and/or size were different from the optimal ones, but they were still good enough for the grasp to be successful. If the control task was simplified by decreasing the number of possible commands, the classification accuracy increased (e.g., 93% for guessing the grasp type only). Conclusions The original outcome of this research is a novel controller empowered by vision and reasoning and capable of high-level analysis (i.e., determining object properties) and autonomous decision making (i.e., selecting the grasp type and size). The automatic

  15. A vision-based driver nighttime assistance and surveillance system based on intelligent image sensing techniques and a heterogamous dual-core embedded system architecture.

    PubMed

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogamous dual-core platform. Therefore, this study develops and implements computer vision and sensing techniques of nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured from CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogamous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle-embedded vision-based nighttime driver assistance and surveillance system.

  16. A Vision-Based Driver Nighttime Assistance and Surveillance System Based on Intelligent Image Sensing Techniques and a Heterogamous Dual-Core Embedded System Architecture

    PubMed Central

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogamous dual-core platform. Therefore, this study develops and implements computer vision and sensing techniques of nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured from CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogamous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle-embedded vision-based nighttime driver assistance and surveillance system. PMID:22736956

  17. Error analysis and system implementation for structured light stereo vision 3D geometric detection in large scale condition

    NASA Astrophysics Data System (ADS)

    Qi, Li; Zhang, Xuping; Wang, Jiaqi; Zhang, Yixin; Wang, Shun; Zhu, Fan

    2012-11-01

    Stereo-vision-based 3D metrology is an effective approach for 3D geometric detection of relatively large-scale objects. In this paper, we present a purpose-built image capture system that combines CMOS sensors with embedded LVDS interfaces and a CAN bus to ensure synchronized triggering and exposure. We performed an error analysis for structured-light vision measurement under large-scale conditions and, based on this analysis, built and tested a system prototype both indoors and in the field. The results show that the system is well suited for large-scale metrology applications.
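
    The paper's own error analysis is not reproduced in this record, but the dominant term in any stereo triangulation error budget follows from Z = fB/d: depth uncertainty grows quadratically with range. A small sketch of that standard propagation, with illustrative camera parameters:

```python
# Standard stereo depth-error propagation: Z = f*B/d, so |dZ| ~ Z^2/(f*B) * |dd|.
# Parameter values are illustrative, not the paper's system specification.
def depth_error(z_m, focal_px, baseline_m, disparity_err_px=0.25):
    """Approximate depth uncertainty at range z for a given disparity error."""
    return (z_m ** 2) / (focal_px * baseline_m) * disparity_err_px

# Example: 2000 px focal length, 0.5 m baseline, quarter-pixel matching error.
for z in (2.0, 5.0, 10.0):
    print(f"z = {z:4.1f} m -> depth error ~ {depth_error(z, 2000, 0.5)*1000:.1f} mm")
```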

  18. Lambda Vision

    NASA Astrophysics Data System (ADS)

    Czajkowski, Michael

    2014-06-01

    There is an explosion in the quantity and quality of IMINT data being captured in Intelligence, Surveillance and Reconnaissance (ISR) today. While automated exploitation techniques involving computer vision are arriving, only a few architectures can manage both the storage and bandwidth of large volumes of IMINT data and also present results to analysts quickly. Lockheed Martin Advanced Technology Laboratories (ATL) has been actively researching the application of Big Data cloud computing techniques to computer vision. This paper presents the results of this work in adopting a Lambda Architecture to process and disseminate IMINT data using computer vision algorithms. The approach embodies an end-to-end solution by processing IMINT data from sensors to serving information products quickly to analysts, independent of the size of the data. The solution lies in dividing the architecture into a speed layer for low-latency processing and a batch layer that produces higher quality answers at the expense of time, but in a robust and fault-tolerant way. This approach was evaluated using a large corpus of IMINT data collected by a C-130 Shadow Harvest sensor over Afghanistan from 2010 through 2012. The evaluation data corpus included full motion video from both narrow and wide area fields of view. The evaluation was done on a scaled-out cloud infrastructure similar in composition to those found in the Intelligence Community. The paper presents experimental results demonstrating the scalability of the architecture and the precision of its results, using a computer vision algorithm designed to identify man-made objects in sparse data terrain.
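
    The batch/speed split at the heart of the Lambda Architecture can be captured in a few lines. Below is a toy skeleton of the pattern described above, using object-detection counts as a stand-in data model; all names and the merge rule are illustrative assumptions, not the ATL implementation.

```python
# Toy Lambda Architecture skeleton: a batch view recomputed from the full
# archive, a speed view maintained incrementally, and a serving-layer merge.
from collections import Counter

batch_view = Counter()   # high-quality view, rebuilt from the full archive
speed_view = Counter()   # low-latency view over recent, not-yet-batched data

def batch_recompute(all_detections):
    """Periodically rebuild the batch view and reset the speed layer."""
    batch_view.clear()
    batch_view.update(all_detections)
    speed_view.clear()

def ingest_realtime(detection):
    """Update the speed view as new detections stream in."""
    speed_view[detection] += 1

def query(object_class):
    """Serving layer: merge batch and speed views for a complete answer."""
    return batch_view[object_class] + speed_view[object_class]
```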

  19. Development and validation of equations utilizing lamb vision system output to predict lamb carcass fabrication yields.

    PubMed

    Cunha, B C N; Belk, K E; Scanga, J A; LeValley, S B; Tatum, J D; Smith, G C

    2004-07-01

    This study was performed to validate previous equations and to develop and evaluate new regression equations for predicting lamb carcass fabrication yields using outputs from a lamb vision system-hot carcass component (LVS-HCC) and the lamb vision system-chilled carcass LM imaging component (LVS-CCC). Lamb carcasses (n = 149) were selected after slaughter, imaged hot using the LVS-HCC, and chilled for 24 to 48 h at -3 to 1 degrees C. Chilled-carcass yield grades (YG) were assigned on-line by USDA graders and by expert USDA grading supervisors with unlimited time and access to the carcasses. Before fabrication, carcasses were ribbed between the 12th and 13th ribs and imaged using the LVS-CCC. Carcasses were fabricated into bone-in subprimal/primal cuts. Yields calculated included 1) saleable meat yield (SMY); 2) subprimal yield (SPY); and 3) fat yield (FY). On-line (whole-number) USDA YG accounted for 59, 58, and 64%; expert (whole-number) USDA YG explained 59, 59, and 65%; and expert (nearest-tenth) USDA YG accounted for 60, 60, and 67% of the observed variation in SMY, SPY, and FY, respectively. The best prediction equation developed in this trial using LVS-HCC output and hot carcass weight as independent variables explained 68, 62, and 74% of the variation in SMY, SPY, and FY, respectively. Addition of output from LVS-CCC improved the predictive accuracy of the equations; the combined output equations explained 72 and 66% of the variability in SMY and SPY, respectively. Accuracy and repeatability of measurement of LM area made with the LVS-CCC were also assessed, and the results suggested that the LVS-CCC provided reasonably accurate (R2 = 0.59) and highly repeatable (repeatability = 0.98) measurements of LM area. Compared with USDA YG, use of the dual-component lamb vision system to predict cut yields of lamb carcasses improved accuracy and precision, suggesting that this system could have an application as an objective means for pricing carcasses in a value
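
    The prediction equations are multiple linear regressions of yields on carcass weight and imaging outputs. A minimal sketch of fitting such an equation and reporting its R2 follows; the feature names are hypothetical stand-ins for the LVS-HCC measurements, and no real data are reproduced.

```python
# Sketch of the regression modelling used to predict fabrication yields from
# vision-system outputs. Feature names are hypothetical placeholders.
import numpy as np

def fit_yield_model(hot_wt, lvs_feat, smy):
    """Least-squares fit of SMY ~ b0 + b1*HCW + b2*(LVS-HCC feature)."""
    X = np.column_stack([np.ones_like(hot_wt), hot_wt, lvs_feat])
    beta, *_ = np.linalg.lstsq(X, smy, rcond=None)
    pred = X @ beta
    r2 = 1 - np.sum((smy - pred) ** 2) / np.sum((smy - smy.mean()) ** 2)
    return beta, r2   # regression coefficients and coefficient of determination
```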

  20. Development of a Machine-Vision System for Recording of Force Calibration Data

    NASA Astrophysics Data System (ADS)

    Heamawatanachai, Sumet; Chaemthet, Kittipong; Changpan, Tawat

    This paper presents the development of a new system for recording force calibration data using machine vision technology. A real-time camera and computer system were used to capture images of the instrument readings during calibration. The measurement images were then transformed and translated into numerical data using optical character recognition (OCR). These numerical data, along with the raw images, were automatically saved as calibration database files. With this new system, human recording errors are eliminated. Verification experiments were carried out by using the system to record the measurement results from an amplifier (DMP 40) with a load cell (HBM-Z30-10kN). NIMT's 100-kN deadweight force standard machine (DWM-100kN) was used to generate test forces. The experiments covered three categories: 1) dynamic conditions (recording during load changes); 2) static conditions (recording under a fixed load); and 3) full calibration in accordance with ISO 376:2011. The dynamic-condition experiment gave >94% of captured images without overlapping digits; the static-condition experiment gave >98%. All measurement images without overlapping digits were translated into numbers by the developed program with 100% accuracy, and the full calibration experiments also gave 100% accurate results. Moreover, if any result is translated incorrectly, it is possible to trace back to the raw calibration image to check and correct it. This machine-vision-based system and program should therefore be appropriate for recording force calibration data.
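
    The capture-and-OCR loop can be sketched briefly. The paper's own OCR implementation is not specified in this record; the version below assumes the Tesseract engine via the pytesseract wrapper, with a digit whitelist so recognition is restricted to numeric instrument readings.

```python
# Hedged sketch of the record-and-OCR step, assuming Tesseract via pytesseract
# (the Tesseract binary must be installed); not the paper's implementation.
import cv2
import pytesseract

def read_instrument(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Binarize so the display digits stand out for the OCR engine.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    text = pytesseract.image_to_string(
        binary, config="--psm 7 -c tessedit_char_whitelist=0123456789.-")
    return text.strip(), binary   # keep the raw image for later audit
```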

  1. Machine vision system: a tool for quality inspection of food and agricultural products.

    PubMed

    Patel, Krishna Kumar; Kar, A; Jha, S N; Khan, M A

    2012-04-01

    Quality inspection of food and agricultural produce is difficult and labor intensive. At the same time, with rising expectations for food products of high quality and safety, the need for accurate, fast and objective determination of these characteristics continues to grow. In India, however, these operations are generally manual, which is costly as well as unreliable, because human judgment of quality factors such as appearance, flavor, nutrient content and texture is inconsistent, subjective and slow. Machine vision provides one alternative: an automated, non-destructive and cost-effective technique to meet these requirements. This inspection approach, based on image analysis and processing, has found a variety of applications in the food industry. Considerable research has highlighted its potential for the inspection and grading of fruits and vegetables, grain quality and characteristic examination, and quality evaluation of other food products such as bakery products, pizza, cheese and noodles. The objective of this paper is to provide an in-depth introduction to machine vision systems, their components and recent work reported on food and agricultural produce.

  2. Lensless vision system for in-plane positioning of a patterned plate with subpixel resolution.

    PubMed

    Sandoz, Patrick; Jacquot, Maxime

    2011-12-01

    Whereas vision is an efficient way of noncontact sensing of many physical quantities, it usually requires a cumbersome imaging system that may be very problematic in confined environments. In such contexts, a compact vision probe can be designed around digital holography, a lensless imaging principle. In this interferometric method, object scenes are reconstructed numerically through wave propagation computations applied to a diffracted optical field recorded as an interferogram. We applied this approach to the visual positioning of a micropatterned glass plate. The pseudoperiodic pattern deposited on the surface is suited both for absolute in-plane position determination and for fine object-feature interpolation leading to subpixel resolution. The results demonstrate a lateral resolution of 0.1 μm, corresponding to 1/20th of a pixel, from a 150 μm period of the pseudoperiodic pattern and with a demonstrated excursion range of 1.6 cm. In the future, such position encoding could be applied to the backside of standardized sample holders for easy localization of regions of interest when specimens are transferred from one instrument to another, for instance in nanotechnology processes.
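
    The numerical reconstruction step ("wave propagation computations applied to a diffracted optical field") is commonly implemented with the angular-spectrum method. A compact sketch under assumed sensor parameters follows; the authors' exact reconstruction code is not given in this record.

```python
# Sketch of numerical reconstruction by the angular-spectrum method, the kind
# of wave-propagation computation digital holography relies on. Wavelength,
# pixel pitch and distance below are illustrative.
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a complex field a distance z (all units in metres)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0))
    H = np.exp(1j * kz * z) * (arg > 0)   # evanescent components suppressed
    return np.fft.ifft2(np.fft.fft2(field) * H)

# e.g. reconstruct an interferogram 50 mm behind the sensor:
# img = angular_spectrum(hologram, 633e-9, 3.45e-6, 0.05)
```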

  3. Base program interim phase test procedure - Coherent Laser Vision System (CLVS). Final report, September 27, 1994--January 30, 1997

    SciTech Connect

    1997-05-01

    The purpose of the CLVS research project is to develop a prototype fiber-optic based Coherent Laser Vision System suitable for DOE's EM Robotics program. The system provides three-dimensional (3D) vision for monitoring situations in which it is necessary to update geometries on the order of once per second. The CLVS project plan required implementation in two phases of the contract, a Base Contract and a continuance option. This document is the test procedure, with test/demonstration results, presenting a proof of concept for a system providing 3D vision with the performance capability required to update geometries on the order of once per second.

  4. Interactive Image Processing As An Aid To Designing Robot Vision Systems

    NASA Astrophysics Data System (ADS)

    Batchelor, B. G.; Cotter, S. M.; Page, G. J.; Hopkins, S. H.

    1983-10-01

    Interactive image processing has proved to be a valuable aid to prototype development for industrial inspection systems. This paper advocates extending its use to exploratory analysis of robot vision applications. Preliminary studies have shown that it is equally effective in this role, although it is not usually possible to achieve the computational speeds needed for real-time control of the robot using a software-based image processor. Its use, as in inspection research, is likely to be limited to algorithm design/selection. The Autoview image processor (British Robotic Systems Ltd.) has recently been interfaced to a Placemate 5 robot (Pendar Robotics Ltd.) and further programmable manipulation devices, including an xy-coordinate table and a stepping turntable are currently being connected. Using these and similar devices, research will be conducted into such tasks as assembly, palletising and robot-assisted inspection.

  5. 3D gaze tracking system for NVidia 3D Vision®.

    PubMed

    Wibirama, Sunu; Hamamoto, Kazuhiko

    2013-01-01

    Inappropriate parallax settings in stereoscopic content generally cause visual fatigue and discomfort. To optimize three-dimensional (3D) effects in stereoscopic content while taking this health issue into account, understanding how a user gazes in 3D in virtual space is currently an important research topic. In this paper, we report on the development of a novel 3D gaze tracking system for NVidia 3D Vision(®) for use with desktop stereoscopic displays. We propose an optimized geometric method to accurately measure the position of a virtual 3D object. Our experimental results show that the proposed system achieved better accuracy than the conventional geometric method, with average errors of 0.83 cm, 0.87 cm, and 1.06 cm in the X, Y, and Z dimensions, respectively.
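
    A 3D gaze point can be recovered geometrically by triangulating the two eyes' gaze rays. The sketch below uses the generic closest-point-of-approach construction, not the paper's optimized method; eye positions and unit gaze directions are assumed inputs from a calibrated eye tracker.

```python
# Generic 3D gaze triangulation: midpoint of the closest approach between the
# left and right gaze rays (a stand-in for the paper's optimized method).
import numpy as np

def gaze_point_3d(p_left, d_left, p_right, d_right):
    """p_*: eye positions (3,); d_*: unit gaze directions (3,)."""
    w0 = p_left - p_right
    a, b, c = d_left @ d_left, d_left @ d_right, d_right @ d_right
    d, e = d_left @ w0, d_right @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:          # rays (near-)parallel: no fixation point
        raise ValueError("gaze rays are parallel")
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    closest_l = p_left + s * d_left
    closest_r = p_right + t * d_right
    return (closest_l + closest_r) / 2   # estimated 3D fixation point
```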

  6. Study of Synthetic Vision Systems (SVS) and Velocity-vector Based Command Augmentation System (V-CAS) on Pilot Performance

    NASA Technical Reports Server (NTRS)

    Liu, Dahai; Goodric, Ken; Peak, Bob

    2006-01-01

    This study investigated the effects of synthetic vision system (SVS) concepts and advanced flight controls on single pilot performance (SPP). Specifically, we evaluated the benefits and interactions of two levels of terrain portrayal, guidance symbology, and control-system response type on SPP in the context of lower-landing minima (LLM) approaches. Performance measures consisted of flight technical error (FTE) and pilot perceived workload. In this study, pilot rating, control type, and guidance symbology were not found to significantly affect FTE or workload. It is likely that transfer from prior experience, limited scope of the evaluation task, specific implementation limitations, and limited sample size were major factors in obtaining these results.

  7. Artificial human vision camera

    NASA Astrophysics Data System (ADS)

    Goudou, J.-F.; Maggio, S.; Fagno, M.

    2014-10-01

    In this paper we present a real-time vision system modeled on the human visual system. Our purpose is to draw on the biomechanics of human vision to improve robotic capabilities for tasks such as object detection and tracking. This work first describes the biomechanical discrepancies between human vision and conventional cameras, and the retinal processing stage that takes place in the eye before the optic nerve. The second part describes our implementation of these principles in a 3-camera optical, mechanical and software model of the human eyes and an associated bio-inspired attention model.

  8. Sensor fusion to enable next generation low cost Night Vision systems

    NASA Astrophysics Data System (ADS)

    Schweiger, R.; Franz, S.; Löhlein, O.; Ritter, W.; Källhammer, J.-E.; Franks, J.; Krekels, T.

    2010-04-01

    The next generation of automotive Night Vision Enhancement systems offers automatic pedestrian recognition with a performance beyond current Night Vision systems at a lower cost. This will allow high market penetration, covering the luxury as well as compact car segments. Improved performance can be achieved by fusing a Far Infrared (FIR) sensor with a Near Infrared (NIR) sensor. However, fusing with today's FIR systems will be too costly to get a high market penetration. The main cost drivers of the FIR system are its resolution and its sensitivity. Sensor cost is largely determined by sensor die size. Fewer and smaller pixels will reduce die size but also resolution and sensitivity. Sensitivity limits are mainly determined by inclement weather performance. Sensitivity requirements should be matched to the possibilities of low cost FIR optics, especially implications of molding of highly complex optical surfaces. As a FIR sensor specified for fusion can have lower resolution as well as lower sensitivity, fusing FIR and NIR can solve performance and cost problems. To allow compensation of FIR-sensor degradation on the pedestrian detection capabilities, a fusion approach called MultiSensorBoosting is presented that produces a classifier holding highly discriminative sub-pixel features from both sensors at once. The algorithm is applied on data with different resolution and on data obtained from cameras with varying optics to incorporate various sensor sensitivities. As it is not feasible to record representative data with all different sensor configurations, transformation routines on existing high resolution data recorded with high sensitivity cameras are investigated in order to determine the effects of lower resolution and lower sensitivity to the overall detection performance. This paper also gives an overview of the first results showing that a reduction of FIR sensor resolution can be compensated using fusion techniques and a reduction of sensitivity can be
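
    MultiSensorBoosting itself is not specified in this record; generically, boosting over a feature pool drawn from both sensors lets the learner pick whichever modality is most discriminative per weak classifier. A hedged stand-in using scikit-learn's AdaBoost on concatenated FIR/NIR descriptors:

```python
# Generic boosting over pooled two-sensor features, standing in for the
# MultiSensorBoosting idea described above (not the authors' algorithm).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def train_fused_detector(fir_feats, nir_feats, labels):
    """fir_feats, nir_feats: (n_samples, n_dims) arrays from aligned patches."""
    X = np.hstack([fir_feats, nir_feats])   # boosting selects from either sensor
    clf = AdaBoostClassifier(n_estimators=200)
    return clf.fit(X, labels)
```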

  9. Microscopic vision modeling method by direct mapping analysis for micro-gripping system with stereo light microscope.

    PubMed

    Wang, Yuezong; Zhao, Zhizhong; Wang, Junshuai

    2016-04-01

    We present a novel, high-precision microscopic vision modeling method that can be used for 3D data reconstruction in a micro-gripping system with a stereo light microscope (SLM). The method consists of four parts: image distortion correction, disparity distortion correction, an initial vision model and a residual compensation model. First, a method of image distortion correction is proposed. The image data required come from stereo images of a calibration sample; the geometric features of the image distortions can be predicted through the shape deformation of lines constructed by grid points in the stereo images, and linear and polynomial fitting methods are applied to correct them. Second, the shape deformation features of the disparity distribution are discussed and a method of disparity distortion correction is proposed, with polynomial fitting applied to correct the distortion. Third, a microscopic vision model is derived, consisting of an initial vision model and a residual compensation model: the initial vision model is derived from the direct mapping relationship between object and image points, and the residual compensation model from a residual analysis of the initial model. The results show that, with maximum reconstruction distances of 4.1 mm in the X direction, 2.9 mm in Y and 2.25 mm in Z, our model achieves a precision of 0.01 mm in the X and Y directions and 0.015 mm in Z. Comparison with the traditional pinhole camera model shows a similar reconstruction precision for X coordinates, but the pinhole model has lower precision in Y and Z than our model. The method proposed in this paper is very helpful for micro-gripping systems based on SLM microscopic vision.
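
    The polynomial-fitting idea behind the distortion corrections is simple to sketch: fit raw against reference values on a calibration sample, then apply the fitted polynomial to new measurements. Degree and data names below are assumptions.

```python
# Sketch of polynomial distortion correction as described above: fit on a
# calibration sample, then apply to new measurements. Degree is illustrative.
import numpy as np

def fit_correction(raw, reference, degree=3):
    """Return a callable mapping raw disparity to corrected disparity."""
    coeffs = np.polyfit(raw, reference, degree)
    return np.poly1d(coeffs)

# correct = fit_correction(raw_disparities, reference_disparities)
# corrected = correct(new_raw_disparity)
```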

  10. How to assess vision.

    PubMed

    Marsden, Janet

    2016-09-21

    Rationale and key points An objective assessment of the patient's vision is important to assess variation from 'normal' vision in acute and community settings, to establish a baseline before examination and treatment in the emergency department, and to assess any changes during ophthalmic outpatient appointments. » Vision is one of the essential senses that permits people to make sense of the world. » Visual assessment does not only involve measuring central visual acuity, it also involves assessing the consequences of reduced vision. » Assessment of vision in children is crucial to identify issues that might affect vision and visual development, and to optimise lifelong vision. » Untreatable loss of vision is not an inevitable consequence of ageing. » Timely and repeated assessment of vision over life can reduce the incidence of falls, prevent injury and optimise independence. Reflective activity 'How to' articles can help update your practice and ensure it remains evidence based. Apply this article to your practice. Reflect on and write a short account of: 1. How this article might change your practice when assessing people holistically. 2. How you could use this article to educate your colleagues in the assessment of vision.

  11. Active optical zoom system

    DOEpatents

    Wick, David V.

    2005-12-20

    An active optical zoom system changes the magnification (or effective focal length) of an optical imaging system by utilizing two or more active optics in a conventional optical system. The system can create relatively large changes in system magnification with very small changes in the focal lengths of individual active elements by leveraging the optical power of the conventional optical elements (e.g., passive lenses and mirrors) surrounding the active optics. The active optics serve primarily as variable focal-length lenses or mirrors, although adding other aberrations enables increased utility. The active optics can be either liquid crystal spatial light modulators (LC SLMs), used in a transmissive optical zoom system, or deformable mirrors (DMs), used in a reflective optical zoom system. By appropriately designing the optical system, the variable focal-length lenses or mirrors can provide the flexibility necessary to change the overall system focal length (i.e., effective focal length), and therefore the magnification, that is normally accomplished with mechanical motion in conventional zoom lenses. The active optics can provide additional flexibility by allowing magnification to occur anywhere within the field of view (FOV) of the system, not just on-axis as in a conventional system.
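
    The leverage described above can be seen in the standard two-element formula 1/f = 1/f1 + 1/f2 - d/(f1·f2): a small focal-length change in one element shifts the system's effective focal length disproportionately. A worked example with illustrative values (not taken from the patent):

```python
# Two-element effective focal length: 1/f = 1/f1 + 1/f2 - d/(f1*f2).
# Values are illustrative; they only demonstrate the leverage effect.
def efl(f1_mm, f2_mm, d_mm):
    return 1.0 / (1.0 / f1_mm + 1.0 / f2_mm - d_mm / (f1_mm * f2_mm))

base = efl(100, -50, 60)    # fixed positive lens + active negative element
tuned = efl(100, -52, 60)   # active element's focal length changes by ~4%
print(base, tuned)          # ~500 mm -> ~433 mm: a ~13% change in system EFL
```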

  12. Vision based interface system for hands free control of an intelligent wheelchair

    PubMed Central

    Ju, Jin Sun; Shin, Yunhee; Kim, Eun Yi

    2009-01-01

    Background Due to the shift in the age structure of today's populations, the need to develop devices and technologies to support elderly and disabled people has been increasing. Traditionally, the wheelchair, both powered and manual, is the most popular and important rehabilitation/assistive device for the disabled and the elderly, yet it remains highly restrictive, especially for the severely disabled. As a solution, Intelligent Wheelchairs (IWs) have received considerable attention as mobility aids. The purpose of this work is to develop an IW interface that is more convenient and efficient for people with disabilities in their limbs. Methods This paper proposes an intelligent wheelchair (IW) control system for people with various disabilities. To accommodate a wide variety of user abilities, the proposed system uses face-inclination and mouth-shape information: the direction of the IW is determined by the inclination of the user's face, while proceeding and stopping are determined by the shape of the user's mouth. The system is composed of an electric powered wheelchair, a data acquisition board, ultrasonic/infrared sensors, a PC camera, and a vision system. The vision system analyzes the user's gestures in three stages: detector, recognizer, and converter. In the detector, the facial region of the intended user is first obtained using Adaboost, and the mouth region is then detected based on edge information. The extracted features are sent to the recognizer, which recognizes the face inclination and mouth shape using statistical analysis and K-means clustering, respectively. These recognition results are then delivered to the converter to control the wheelchair. Result & conclusion The advantages of the proposed system include 1) accurate recognition of the user's intention with minimal user motion and 2) robustness to a cluttered background and time-varying illumination. To prove these
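
    The detector stage (Adaboost face detection followed by mouth localization) can be sketched with OpenCV's bundled Haar cascades. The mouth heuristic and all thresholds below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the detector stage: Haar-cascade (Adaboost) face detection
# plus a crude lower-face crop for the mouth region. Assumes the cascades
# bundled with the opencv-python distribution.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face_and_mouth(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # keep the largest face
    # Heuristic: the mouth lies in the lower third of the face box.
    mouth = gray[y + 2 * h // 3: y + h, x: x + w]
    return (x, y, w, h), mouth
```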

  13. Self-reported visual impairment and impact on vision-related activities in an elderly Nigerian population: report from the Ibadan Study of Ageing

    PubMed Central

    Bekibele, CO; Gureje, Oye

    2010-01-01

    Background Studies have shown an association between visual impairment and poor overall function, and studies from Africa and developing countries show a high prevalence of visual impairment. More information is needed on the community prevalence and impact of visual impairment among elderly Africans. Methods A multi-stage stratified sampling of households was implemented to select persons aged 65 years and over in the south-western and north-central parts of Nigeria. Impairments of distant and near vision were based on subjective self-reports obtained with items derived from the World Health Organization multi-country World Health Survey questionnaire; impairment was defined as reporting much difficulty on the questions on distant and near vision. Disabilities in activities of daily living (ADL) and instrumental activities of daily living (IADL) were evaluated by interview, using standardized scales. Results A total of 2054 subjects, 957 (46.6%) males and 1097 (53.4%) females, responded to the questions on vision. 22% (n=453) of the respondents reported distant vision impairment, and 18% (n=377) reported near vision impairment (not mutually exclusive); 15% (n=312) reported impairment of both far and near vision. Impairment of distant vision increased progressively with age (P < 0.01). Persons with self-reported near vision impairment had an elevated risk of functional disability in several IADLs and ADLs compared with those without; distant vision impairment was less associated with role limitations in both ADLs and IADLs. Conclusion The prevalence of self-reported distant visual impairment was high, but that of near visual impairment was lower than expected in this elderly African population. Impairment of near vision was found to carry a higher burden of functional disability than impairment of distant vision. PMID:18780258

  14. Automatic human body modeling for vision-based motion capture system using B-spline parameterization of the silhouette

    NASA Astrophysics Data System (ADS)

    Jaume-i-Capó, Antoni; Varona, Javier; González-Hidalgo, Manuel; Mas, Ramon; Perales, Francisco J.

    2012-02-01

    Human motion capture has a wide variety of applications, and in vision-based motion capture systems a major issue is the human body model and its initialization. We present a computer vision algorithm for building a human body model skeleton automatically. The algorithm is based on the analysis of the human shape: we decompose the body into its main parts by computing the curvature of a B-spline parameterization of the human contour. The algorithm has been applied in a context where the user stands in front of a stereo camera pair. The process is completed once the user assumes a predefined initial posture, so that the main joints can be identified and the human model constructed. Using this model, the initialization problem of a vision-based markerless motion capture system of the human body is solved.
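
    The central computation, curvature along a B-spline fit of the closed silhouette contour, can be sketched with SciPy. The decomposition rule (picking curvature extrema as part boundaries) is only hinted at here, and the smoothing parameter is an assumption.

```python
# Curvature of a periodic B-spline fit to a closed silhouette contour, the
# quantity used above to decompose the body into parts. Parameters illustrative.
import numpy as np
from scipy.interpolate import splprep, splev

def contour_curvature(x, y, n_samples=400, smooth=5.0):
    """x, y: ordered points of a closed silhouette contour."""
    tck, _ = splprep([x, y], s=smooth, per=True)   # periodic B-spline fit
    u = np.linspace(0, 1, n_samples)
    dx, dy = splev(u, tck, der=1)
    ddx, ddy = splev(u, tck, der=2)
    # Signed curvature of a parametric curve: (x'y'' - y'x'') / (x'^2+y'^2)^1.5
    kappa = (dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5
    return u, kappa   # curvature extrema suggest joints / part boundaries
```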

  15. LES SOFTWARE FOR THE DESIGN OF LOW EMISSION COMBUSTION SYSTEMS FOR VISION 21 PLANTS

    SciTech Connect

    Clifford E. Smith; Steven M. Cannon; Virgil Adumitroaie; David L. Black; Karl V. Meredith

    2005-01-01

    In this project, an advanced computational software tool was developed for the design of the low emission combustion systems required for Vision 21 clean energy plants. Vision 21 combustion systems, such as combustors for gas turbines, combustors for indirect fired cycles, furnaces and sequestration-ready combustion systems, will require innovative low emission designs and low development costs if Vision 21 goals are to be realized. The simulation tool will greatly reduce the number of experimental tests; this is especially desirable for gas turbine combustor design, since high pressure testing is extremely costly. In addition, the software will stimulate new ideas, provide the capability of assessing and adapting low-emission combustors to alternate fuels, and greatly reduce the development time cycle of combustion systems. The revolutionary combustion simulation software is able to accurately simulate the highly transient nature of gaseous-fueled (e.g. natural gas, low BTU syngas, hydrogen, biogas, etc.) turbulent combustion and assess innovative concepts needed for Vision 21 plants. In addition, the software is capable of analyzing liquid-fueled combustion systems, since that capability was developed under a concurrent Air Force Small Business Innovative Research (SBIR) program. The complex physics of the reacting flow field are captured using 3D Large Eddy Simulation (LES) methods, in which large scale transient motion is resolved by time-accurate numerics, while the small scale motion is modeled using advanced subgrid turbulence and chemistry closures. In this way, LES combustion simulations can model many physical aspects that, until now, were impossible to predict with 3D steady-state Reynolds Averaged Navier-Stokes (RANS) analysis, i.e. very low NOx emissions, combustion instability (coupling of unsteady heat and acoustics), lean blowout, flashback, autoignition, etc. LES methods are becoming more and more practical by linking together tens

  16. Development and evaluation of a vision based poultry debone line monitoring system

    NASA Astrophysics Data System (ADS)

    Usher, Colin T.; Daley, W. D. R.

    2013-05-01

    Efficient deboning is key to optimizing production yield (maximizing the amount of meat removed from a chicken frame while reducing the presence of bones). Many processors evaluate the efficiency of their deboning lines through manual yield measurements, which involve using a special knife to scrape the chicken frame for any remaining meat after it has been deboned. Researchers with the Georgia Tech Research Institute (GTRI) have developed an automated vision system for estimating this yield loss by correlating image characteristics with the amount of meat left on a skeleton. The yield loss estimation is accomplished by the system's image processing algorithms, which correlate image intensity with meat thickness and calculate the total volume of meat remaining. The team has established a correlation between transmitted light intensity and meat thickness with an R2 of 0.94. Employing a special illuminated cone and targeted software algorithms, the system can make measurements in under a second and achieves up to 90% correlation with yield measurements performed manually. The same system is also able to determine the probability of bone chips remaining in the output product: it determines the presence/absence of clavicle bones with an accuracy of approximately 95% and fan bones with an accuracy of approximately 80%. This paper describes the approach and design of the system in detail, reports results from field testing, and highlights the potential benefits that such a system can provide to the poultry processing industry.
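
    The core estimate, mapping per-pixel transmitted-light intensity to meat thickness and integrating to a volume, can be sketched directly. The linear calibration form below is an assumption consistent with the reported intensity-thickness correlation; slope and intercept would come from a prior fit.

```python
# Sketch of the yield-loss estimate: per-pixel intensity -> thickness via an
# assumed linear calibration, then integration over the frame region.
import numpy as np

def residual_meat_volume(intensity, mask, slope, intercept, pixel_area_mm2):
    """intensity: backlit frame image; mask: boolean frame-region pixels."""
    thickness_mm = slope * intensity.astype(float) + intercept
    thickness_mm = np.clip(thickness_mm, 0, None)    # no negative thickness
    return thickness_mm[mask].sum() * pixel_area_mm2  # volume in mm^3
```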

  17. A survey of autonomous vision-based See and Avoid for Unmanned Aircraft Systems

    NASA Astrophysics Data System (ADS)

    Mcfadyen, Aaron; Mejias, Luis

    2016-01-01

    This paper provides a comprehensive review of the vision-based See and Avoid problem for unmanned aircraft. The unique problem environment and associated constraints are detailed, followed by an in-depth analysis of visual sensing limitations. In light of such detection and estimation constraints, relevant human, aircraft and robot collision avoidance concepts are then compared from a decision and control perspective. Remarks on system evaluation and certification are also included to provide a holistic review approach. The intention of this work is to clarify common misconceptions, realistically bound feasible design expectations and offer new research directions. It is hoped that this paper will help us to unify design efforts across the aerospace and robotics communities.

  18. Transition of Attention in Terminal Area NextGen Operations Using Synthetic Vision Systems

    NASA Technical Reports Server (NTRS)

    Ellis, Kyle K. E.; Kramer, Lynda J.; Shelton, Kevin J.; Arthur, Shelton, J. J., III; Prinzel, Lance J., III; Norman, Robert M.

    2011-01-01

    This experiment investigates the capability of Synthetic Vision Systems (SVS) to provide significant situation awareness in terminal area operations, specifically in low visibility conditions. The use of a Head-Up Display (HUD) and Head-Down Displays (HDD) with SVS is contrasted with baseline standard head-down displays in terms of induced workload and pilot behavior at 1400 RVR visibility levels. Variance in performance and pilot behavior was reviewed for acceptability when using the HUD or HDD with SVS under reduced minimums to acquire the necessary visual components to continue to land. The data suggest superior performance for HUD implementations. Improved attentional behavior is also suggested for HDD implementations of SVS for low-visibility approach and landing operations.

  19. Effective target binarization method for linear timed address-event vision system

    NASA Astrophysics Data System (ADS)

    Xu, Jiangtao; Zou, Jiawei; Yan, Shi; Gao, Zhiyuan

    2016-06-01

    This paper presents an effective target binarization method for a linear timed address-event (TAE) vision system. In the preprocessing phase, TAE data are processed by denoising, thinning, and edge-connection methods sequentially to obtain denoised, clear event contours. The object region is then confirmed by an event-pair matching method. Finally, morphological open and close operations are introduced to remove the artifacts generated by event-pair mismatching. Several degraded images were processed by our method and by some traditional binarization methods, and the experimental results are provided. Compared with the other methods, the proposed method efficiently extracts the target region and obtains satisfactory binarization results from object images with low contrast and nonuniform illumination.
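
    The final cleanup step, morphological opening followed by closing, is standard and brief to sketch; the kernel size below is an illustrative assumption.

```python
# Sketch of the artifact-removal step: opening deletes small mismatched-event
# specks, closing seals pinholes in the target region. Kernel size illustrative.
import cv2

def clean_binary(mask):
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)
```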

  20. Center extraction deviation correction of SMD-LEDs in the target-based vision measurement system

    NASA Astrophysics Data System (ADS)

    Ma, Yueyang; Zhao, Hong; Gu, Feifei; Bu, Penghui

    2017-04-01

    Surface mounted device-type light emitting diodes (SMD-LEDs) are commonly utilized as feature points in target-based vision measurement systems (T-VMSs) due to their high luminance. This luminance is non-uniform, however, which introduces deviation errors in image center extraction and degrades the positioning precision of the T-VMS. In this study, we first analyzed two factors responsible for the deviation errors: the deflection angle between the LED's normal direction and the observation direction, and the distance between the LED and the observer. We then established a correction method based on a lookup table to compensate for the deviation errors according to these two factors, and analyzed the correction direction of the deviation error. We applied the proposed method in an actual T-VMS to confirm its feasibility and effectiveness, and found that it does indeed correct deviation errors effectively.

  1. VAS: A Vision Advisor System combining agents and object-oriented databases

    NASA Technical Reports Server (NTRS)

    Eilbert, James L.; Lim, William; Mendelsohn, Jay; Braun, Ron; Yearwood, Michael

    1994-01-01

    A model-based approach to identifying and finding the orientation of non-overlapping parts on a tray has been developed. The part models contain both exact and fuzzy descriptions of part features, and are stored in an object-oriented database. Full identification of the parts involves several interacting tasks, each of which is handled by a distinct agent. Using fuzzy information stored in the model allowed part features that were essentially at the noise level to be extracted and used for identification. This was done by focusing attention on the portion of the part where the feature must be found if the current hypothesis of the part ID is correct. In going from one set of parts to another, the only thing that needs to be changed is the database of part models. This work is part of an effort in developing a Vision Advisor System (VAS) that combines agents and object-oriented databases.

  2. Omnidirectional stereo vision sensor based on single camera and catoptric system.

    PubMed

    Zhou, Fuqiang; Chai, Xinghua; Chen, Xin; Song, Ya

    2016-09-01

    An omnidirectional stereo vision sensor based on a single camera and a catoptric system is proposed. As the crucial components, one camera and two pyramid mirrors are used for imaging. Omnidirectional measurement in different directions in the horizontal field can be performed by four pairs of virtual cameras, with complete synchronization and improved compactness. Moreover, perspective projection invariance is ensured in the imaging process, which avoids the imaging distortion introduced by curved mirrors. In this paper, the structure model of the sensor is established and a sensor prototype designed. The influences of the structural parameters on the field of view and the measurement accuracy are also discussed. In addition, real experiments and analyses were performed to evaluate the performance of the proposed sensor in a measurement application. The results prove the feasibility of the sensor and exhibit considerable accuracy in 3D coordinate reconstruction.

  3. Interleaved imaging: an imaging system design inspired by rod-cone vision

    NASA Astrophysics Data System (ADS)

    Parmar, Manu; Wandell, Brian A.

    2009-01-01

    Under low illumination conditions, such as moonlight, there simply are not enough photons present to create a high quality color image with integration times that avoid camera-shake. Consequently, conventional imagers are designed for daylight conditions and modeled on human cone vision. Here, we propose a novel sensor design that parallels the human retina and extends sensor performance to span daylight and moonlight conditions. Specifically, we describe an interleaved imaging architecture comprising two collections of pixels. One set of pixels is monochromatic and high sensitivity; a second, interleaved set of pixels is trichromatic and lower sensitivity. The sensor implementation requires new image processing techniques that allow for graceful transitions between different operating conditions. We describe these techniques and simulate the performance of this sensor under a range of conditions. We show that the proposed system is capable of producing high quality images spanning photopic, mesopic and near scotopic conditions.

  4. Real-time model-based vision system for object acquisition and tracking

    NASA Technical Reports Server (NTRS)

    Wilcox, Brian; Gennery, Donald B.; Bon, Bruce; Litwin, Todd

    1987-01-01

    A machine vision system is described which is designed to acquire and track polyhedral objects moving and rotating in space by means of two or more cameras, programmable image-processing hardware, and a general-purpose computer for high-level functions. The image-processing hardware is capable of performing a large variety of operations on images and on image-like arrays of data. Acquisition utilizes image locations and velocities of the features extracted by the image-processing hardware to determine the three-dimensional position, orientation, velocity, and angular velocity of the object. Tracking correlates edges detected in the current image with edge locations predicted from an internal model of the object and its motion, continually updating velocity information to predict where edges should appear in future frames. With some 10 frames processed per second, real-time tracking is possible.

  5. Development of a vision non-contact sensing system for telerobotic applications

    NASA Astrophysics Data System (ADS)

    Karkoub, M.; Her, M.-G.; Ho, M.-I.; Huang, C.-C.

    2013-08-01

    The study presented here describes a novel vision-based motion detection system for telerobotic operations such as remote surgical procedures. The system uses a CCD camera and image processing to detect the motion of a master robot or operator. Colour tags are placed on the arm and head of a human operator to detect the up/down and right/left motion of the head as well as the right/left motion of the arm; the motion of the colour tags is used to actuate a slave robot or a remote system. The tags' motion is determined through image processing using eigenvectors and colour-space morphology, and the relative head, shoulder and wrist rotation angles are obtained through inverse dynamics and coordinate transformation. A program transforms this motion data into motor control commands and transmits them to a slave robot or remote system over wireless internet. The system performed well even in complex environments, with errors that did not exceed 2 pixels and a response time of about 0.1 s. The results of the experiments are available at: http://www.youtube.com/watch?v=yFxLaVWE3f8 and http://www.youtube.com/watch?v=_nvRcOzlWHw

  6. Monocular Vision- and IMU-Based System for Prosthesis Pose Estimation During Total Hip Replacement Surgery.

    PubMed

    Su, Shaojie; Zhou, Yixin; Wang, Zhihua; Chen, Hong

    2017-03-30

    As the average age of the population increases worldwide, so does the number of total hip replacement surgeries. Total hip replacement, however, often involves a risk of dislocation and prosthetic impingement. To minimize the risk after surgery, we propose an instrumented hip prosthesis that estimates the relative pose between the prostheses intraoperatively and ensures the placement of the prostheses within a safe zone. We model the hip prosthesis as a ball-and-socket joint with four degrees of freedom (DOFs): 3-DOF rotation and 1-DOF translation. We mount a camera and an inertial measurement unit (IMU) inside the hollow ball, or "femoral head prosthesis," while printing customized patterns on the internal surface of the socket, or "acetabular cup." Since the sensors are rigidly fixed to the femoral head prosthesis, measuring its motion poses a sensor ego-motion estimation problem. By matching feature points in images of the reference patterns, we propose a monocular vision based method with a relative error of less than 7% in the 3-DOF rotation and 8% in the 1-DOF translation. Further, to reduce system power consumption, we use the IMU, with its data fused by an extended Kalman filter, to replace the camera in the 3-DOF rotation estimation, which yields a less than 4.8% relative error and a 21.6% decrease in power consumption. Experimental results show that the best approach to prosthesis pose estimation is a combination of monocular vision-based translation estimation and IMU-based rotation estimation, and we have verified the feasibility and validity of this system in prosthesis pose estimation.
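
    The monocular step, recovering camera pose from matched feature points of a known printed pattern, is the classic Perspective-n-Point problem. A hedged sketch using OpenCV's solver follows; the intrinsics and point sets are placeholders, and this is the generic construction rather than the authors' code.

```python
# Generic PnP pose sketch: with the pattern's feature points known in cup
# coordinates, cv2.solvePnP recovers the camera (femoral-head) pose relative
# to the acetabular cup. Inputs are placeholders.
import cv2
import numpy as np

def estimate_pose(object_pts, image_pts, K):
    """object_pts: (N,3) pattern points; image_pts: (N,2) image detections."""
    ok, rvec, tvec = cv2.solvePnP(object_pts.astype(np.float32),
                                  image_pts.astype(np.float32), K, None)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)    # 3-DOF rotation; tvec gives the translation
    return R, tvec
```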

  7. Development of a vision-based pH reading system

    NASA Astrophysics Data System (ADS)

    Hur, Min Goo; Kong, Young Bae; Lee, Eun Je; Park, Jeong Hoon; Yang, Seung Dae; Moon, Ha Jung; Lee, Dong Hoon

    2015-10-01

    pH paper is generally used for pH interpretation in the QC (quality control) process for radiopharmaceuticals. It is easy to handle and useful for small samples such as radioisotopes and radioisotope (RI)-labeled compounds for positron emission tomography (PET). However, pH-paper-based reading methods can introduce errors due to the limitations of eyesight and inaccurate readings. In this paper, we report a new device for pH reading and its related software. The proposed pH reading system is built around a vision algorithm based on an RGB library and is divided into two parts. The first is the reading device, which consists of a light source, a CCD camera and a data acquisition (DAQ) board. To improve sensitivity, we utilize the three primary colors of an LED (light emitting diode) in the reading device; using three colors is better than using a single white LED because of the wavelength coverage. The second is a graphical user interface (GUI) program for the vision interface and report generation. The GUI program stores the color codes of the pH paper in a database; in reading mode, the CCD camera captures the pH paper and its color is compared with the RGB database image. The software captures and reports information on the samples, such as pH results, captured images, and library images, and saves them as Excel files.
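
    The comparison against the RGB library amounts to a nearest-neighbour lookup in colour space. A minimal sketch follows; the reference colours are hypothetical placeholders, not the system's calibrated database.

```python
# Sketch of the RGB-library lookup: report the pH of the reference colour
# nearest to the measured patch. Library values are hypothetical.
import numpy as np

PH_LIBRARY = {          # placeholder reference RGB values per pH
    4.0: (230, 120, 60),
    7.0: (150, 160, 70),
    10.0: (60, 110, 150),
}

def read_ph(patch_rgb):
    phs = np.array(list(PH_LIBRARY))
    refs = np.array([PH_LIBRARY[p] for p in phs], dtype=float)
    dists = np.linalg.norm(refs - np.asarray(patch_rgb, dtype=float), axis=1)
    return float(phs[np.argmin(dists)])

print(read_ph((140, 158, 80)))   # -> 7.0
```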

  8. Automatic vision system for analysis of microscopic behavior of flow and transport in porous media

    NASA Astrophysics Data System (ADS)

    Rashidi, Mehdi; Dehmeshki, Jamshid; Dickenson, Eric; Daemi, M. Farhang

    1997-10-01

    This paper describes the development of a novel automated and efficient vision system for obtaining velocity and concentration measurements within a porous medium. An aqueous fluid, laced with a fluorescent dye or microspheres, flows through a transparent, refractive-index-matched column packed with transparent crystals. For illumination, a planar laser sheet passes through the column while a CCD camera records the laser-illuminated planes. Detailed microscopic velocity and concentration fields have been computed within a 3D volume of the column. For velocity measurements, while the aqueous fluid laced with fluorescent microspheres flows through the transparent medium, a CCD camera records the motion of the fluorescing particles to a video cassette recorder. The recorded images are acquired automatically, frame by frame, and transferred to the computer for processing, using a frame grabber and purpose-written algorithms, through an RS-232 interface. Because the grabbed images are initially of poor quality, several preprocessing steps are used to enhance the particles within the images; the enhanced particles are then tracked to calculate velocity vectors in the plane of the beam. For concentration measurements, while the aqueous fluid laced with a fluorescent organic dye flows through the transparent medium, a CCD camera sweeps back and forth across the column and records concentration slices on the planes illuminated by the laser beam traveling simultaneously with the camera. These recorded images are then transferred to the computer for processing in a similar fashion to the velocity measurements. To make the vision system fully automatic, several detailed image processing techniques were developed to match images that have different intensity values but the same topological characteristics. This results in normalized interstitial chemical concentrations as a function of time within the porous column.

  9. Can Effective Synthetic Vision System Displays be Implemented on Limited Size Display Spaces?

    NASA Technical Reports Server (NTRS)

    Comstock, J. Raymond, Jr.; Glaab, Lou J.; Prinzel, Lance J.; Elliott, Dawn M.

    2004-01-01

    The Synthetic Vision Systems (SVS) element of the NASA Aviation Safety Program is striving to eliminate poor visibility as a causal factor in aircraft accidents, and to enhance the operational capabilities of all types of aircraft. To accomplish these safety and situation awareness improvements, SVS concepts are designed to provide a clear view of the world ahead through the display of computer generated imagery derived from an onboard database of terrain, obstacle and airport information. An important issue for the SVS concept is whether useful and effective SVS displays can be implemented on limited display spaces, as would be required to implement this technology on older aircraft with physically smaller instrument panels. In this study, prototype SVS displays were evaluated at the following display sizes: (a) size "A" (e.g. 757 EADI), (b) form factor "D" (e.g. 777 PFD), and (c) a new size "X" (rectangular flat panel, approximately 20 x 25 cm). Testing was conducted in a high-resolution graphics simulation facility at NASA Langley Research Center. Specific issues under test included the display size noted above and the field of view (FOV) shown on the display; directly related to FOV is the degree of minification of the displayed image. In simulated approaches with display size and FOV held constant, no significant differences due to these factors were found. Preferred FOV was determined from approaches during which pilots could select the FOV. Mean preference ratings for FOV were in the following order: (1) 30 deg., (2) unity, (3) 60 deg., and (4) 90 deg.; this ordering held for all display sizes tested. Limitations of the present study and future research directions are discussed.

  10. Flight Testing of Night Vision Systems in Rotorcraft (Test en vol de systemes de vision nocturne a bord des aeronefs a voilure tournante)

    DTIC Science & Technology

    2007-07-01

    Stewart, Aerospace Engineering Test Establishment, Cold Lake, Alberta, Canada. Only fragments of this record survive: the abstract opens "Over thousands of years of evolution humans have developed exceptional..." and the record cites G. (2004), Trial Iguana: A flight evaluation of conformal symbology using display night vision goggles, paper presented at the 30th European Rotorcraft Forum.

  11. A Poet's Vision.

    ERIC Educational Resources Information Center

    Marshall, Suzanne; Newman, Dan

    1997-01-01

    Describes a series of activities to help middle school students develop an artist's vision and then convey that vision through poetry. Describes how lessons progress from looking at concrete objects to observations of settings and characters, gradually adding memory and imagination to direct observation, and finishing with revision. Notes that…

  12. Vision problems

    MedlinePlus

    Excerpt: ...a shade or curtain hanging across part of your visual field. Optic neuritis: inflammation of the optic nerve. Alternative names: impaired vision; blurred vision. Related images and tests: crossed eyes, visual acuity test, slit-lamp exam, visual field test.

  13. Research on vision-based error detection system for optic fiber winding

    NASA Astrophysics Data System (ADS)

    Lu, Wenchao; Li, Huipeng; Yang, Dewei; Zhang, Min

    2011-11-01

    Optic fiber coils are the heart of fiber optic gyroscopes (FOGs). To detect the unavoidable errors that occur during the winding of optical fibers, such as gaps, climbs and partial rises between fibers, and to enable fully automated winding, we researched and designed a vision-based error detection system for optic fiber winding based on digital image collection and processing[1]. When a fiber-optic winding machine is operated, background light is used as the illumination system to strengthen the contrast between the fibers and the background. A microscope and a CCD are then used as the imaging and image-collection system to capture analog images of the fibers, which are converted into digital images that can be processed and analyzed by computer. Canny edge detection and a contour-tracing algorithm are used as the main image processing methods. The distances between the fiber peaks are then measured and compared with the desired values; if these values fall outside a predetermined tolerance zone, an error is detected and classified as a gap, climb or rise. We used OpenCV and MATLAB as the basic function libraries and VC++6.0 as the platform to show the results. The test results showed that the system was useful and that the edge detection and contour-tracing algorithms were effective, with a high rate of accuracy; at the same time, the error detection results were correct.
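
    After edge extraction, the gap/climb check reduces to comparing the spacing of fibre peaks with the nominal fibre pitch. A hedged sketch of that comparison follows; the peak-finding route and the tolerance value are illustrative assumptions, not the paper's implementation.

```python
# Sketch of the spacing check after edge extraction: locate fibre peaks along
# the winding axis and compare neighbouring spacings with the nominal pitch.
import numpy as np
from scipy.signal import find_peaks

def classify_spacing(edge_profile, nominal_px, tol=0.15):
    """edge_profile: 1-D intensity profile across the fibre crowns."""
    peaks, _ = find_peaks(edge_profile,
                          distance=max(1, int(0.5 * nominal_px)))
    errors = []
    for gap in np.diff(peaks):
        if gap > nominal_px * (1 + tol):
            errors.append(("gap", int(gap)))          # fibres too far apart
        elif gap < nominal_px * (1 - tol):
            errors.append(("climb_or_rise", int(gap)))  # fibres overlapping
    return errors
```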

  14. PlanktoVision – an automated analysis system for the identification of phytoplankton

    PubMed Central

    2013-01-01

    Background Phytoplankton communities are often used as a marker for the determination of fresh water quality. The routine analysis, however, is very time consuming and expensive, as it is carried out manually by trained personnel. The goal of this work is to develop a system for automated analysis. Results A novel open source system for the automated recognition of phytoplankton by the use of microscopy and image analysis was developed. It integrates the segmentation of the organisms from the background, the calculation of a large range of features, and a neural network for the classification of imaged organisms into different groups of plankton taxa. The analysis of samples containing 10 different taxa showed an average recognition rate of 94.7% and an average error rate of 5.5%. The presented system has a flexible framework that easily allows it to be expanded to include additional taxa in the future. Conclusions The implemented automated microscopy and the new open source image analysis system - PlanktoVision - showed classification results that were comparable to or better than existing systems, and the exclusion of non-plankton particles could be greatly improved. The software package is published as free software and is available to anyone to help make the analysis of water quality more reproducible and cost effective. PMID:23537512

  15. MOBLAB: a mobile laboratory for testing real-time vision-based systems in path monitoring

    NASA Astrophysics Data System (ADS)

    Cumani, Aldo; Denasi, Sandra; Grattoni, Paolo; Guiducci, Antonio; Pettiti, Giuseppe; Quaglia, Giorgio

    1995-01-01

    In the framework of the EUREKA PROMETHEUS European Project, a Mobile Laboratory (MOBLAB) has been equipped for studying, implementing and testing real-time algorithms that monitor the path of a vehicle moving on roads. Its goal is the evaluation of systems suitable for mapping the position of the vehicle within the environment where it moves, detecting obstacles, estimating motion, planning the path and warning the driver about unsafe conditions. MOBLAB has been built with the financial support of the National Research Council and will be shared with teams working in the PROMETHEUS Project. It consists of a van equipped with an autonomous power supply, a real-time image processing system, workstations and PCs, B/W and color TV cameras, and TV equipment. This paper describes the laboratory outline and presents the computer vision system and the strategies that have been studied and are being developed at I.E.N. `Galileo Ferraris'. The system is based on several tasks that cooperate to integrate information gathered from different processes and sources of knowledge. Some preliminary results are presented showing the performance of the system.

  16. Accurate calibration of a stereo-vision system in image-guided radiotherapy

    SciTech Connect

    Liu Dezhi; Li Shidong

    2006-11-15

    Image-guided radiotherapy using a three-dimensional (3D) camera as the on-board surface imaging system requires precise and accurate registration of the 3D surface images in the treatment machine coordinate system. Two simple calibration methods, an analytical three-point matching solution and a least-squares multipoint registration, were introduced to correlate the stereo-vision surface imaging frame with the machine coordinate system. Both calibrations utilize 3D surface images of a calibration template placed on top of the treatment couch. Image transformation parameters were derived from corresponding 3D marked points on the surface images and their given coordinates in the treatment room coordinate system. Our experimental results demonstrate that both methods provide the desired calibration accuracy of 0.5 mm. The multipoint registration method is more robust, particularly for noisy 3D surface images. Both calibration methods have been used as our weekly QA tools for a 3D image-guided radiotherapy system.
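
    The least-squares multipoint registration between marked surface points and their known machine-room coordinates has a standard closed-form solution (the Kabsch/SVD rigid fit). A compact sketch, not necessarily the authors' exact formulation:

```python
# Closed-form rigid registration (Kabsch/SVD) between corresponding 3D point
# sets, the standard solution to least-squares multipoint registration.
import numpy as np

def rigid_fit(src, dst):
    """src, dst: (N,3) corresponding points. Returns R (3x3) and t (3,)."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)               # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    t = dc - R @ sc
    return R, t   # dst ~ R @ src + t
```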

  17. Calibrators measurement system for headlamp tester of motor vehicle base on machine vision

    NASA Astrophysics Data System (ADS)

    Pan, Yue; Zhang, Fan; Xu, Xi-ping; Zheng, Zhe

    2014-09-01

    With the development of photoelectric detection technology, machine vision is increasingly used in industry. This paper introduces a calibrator measurement system for automotive headlamp testers, with a CCD image sampling system at its core. It presents the measurement principle for the optical axis angle and light intensity, and demonstrates the linear relationship between the calibrator's spot illumination and the image-plane illumination, providing an important specification for the CCD imaging system. Image processing in MATLAB yields the spot's geometric midpoint and average gray level, and fitting these statistics by the method of least squares gives the regression equation relating illumination and gray level. The paper analyzes the errors in the experimental results of the measurement system, and gives the combined standard uncertainty and its sources for the optical axis angle. The average measurement accuracy of the optical axis angle is controlled within 40''. The whole testing process uses digital means instead of relying on human factors, giving higher accuracy and better repeatability than other measuring systems.

  18. Integrating Symbolic and Statistical Methods for Testing Intelligent Systems Applications to Machine Learning and Computer Vision

    SciTech Connect

    Jha, Sumit Kumar; Pullum, Laura L; Ramanathan, Arvind

    2016-01-01

    Embedded intelligent systems ranging from tiny implantable biomedical devices to large swarms of autonomous unmanned aerial systems are becoming pervasive in our daily lives. While we depend on the flawless functioning of such intelligent systems, and often take their behavioral correctness and safety for granted, it is notoriously difficult to generate test cases that expose subtle errors in the implementations of machine learning algorithms. Hence, the validation of intelligent systems is usually achieved by studying their behavior on representative data sets, using methods such as cross-validation and bootstrapping. In this paper, we present a new testing methodology for studying the correctness of intelligent systems. Our approach uses symbolic decision procedures coupled with statistical hypothesis testing to generate test cases that expose such errors. We also use our algorithm to analyze the robustness of a human detection algorithm built using the OpenCV open-source computer vision library. We show that the human detection implementation can fail to detect humans in perturbed video frames even when the perturbations are so small that the corresponding frames look identical to the naked eye.
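
    The symbolic decision procedures are beyond a short sketch, but the statistical side of such robustness testing can be illustrated by randomly perturbing frames and re-running a stock detector. The sketch below uses OpenCV's default HOG person detector; the file name and noise level are assumptions, and this is not the paper's algorithm:

        import cv2
        import numpy as np

        hog = cv2.HOGDescriptor()
        hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

        def num_people(img):
            rects, _ = hog.detectMultiScale(img, winStride=(8, 8))
            return len(rects)

        frame = cv2.imread("frame.png")       # hypothetical frame with a person
        baseline = num_people(frame)

        rng = np.random.default_rng(0)
        misses = 0
        for _ in range(100):
            # Perturbation small enough to look identical to the naked eye.
            noisy = np.clip(frame + rng.normal(0.0, 2.0, frame.shape),
                            0, 255).astype(np.uint8)
            misses += num_people(noisy) < baseline
        print(f"{misses}/100 perturbed frames lost a detection")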

  19. Vision-based object detection and recognition system for intelligent vehicles

    NASA Astrophysics Data System (ADS)

    Ran, Bin; Liu, Henry X.; Martono, Wilfung

    1999-01-01

    Recently, a proactive crash mitigation system has been proposed to enhance the crash avoidance and survivability of intelligent vehicles. An accurate object detection and recognition system is a prerequisite for such a system, as its deployment algorithms rely on accurate hazard detection, recognition, and tracking information. In this paper, we present a vision-based approach to detect and recognize vehicles and traffic signs, obtain their information, and track multiple objects using a sequence of color images taken from a moving vehicle. The entire system consists of two sub-systems: the vehicle detection and recognition sub-system and the traffic sign detection and recognition sub-system. Both sub-systems consist of four models: object detection, object recognition, object information, and object tracking. To detect potential objects on the road, several object features are investigated, including the symmetrical shape and aspect ratio of a vehicle and the color and shape information of the signs. A two-layer neural network is trained to recognize different types of vehicles, and a parameterized traffic sign model is established for recognizing signs. Tracking is accomplished by combining the analysis of single image frames with the analysis of consecutive image frames; the single-frame analysis is performed every ten full-size images. The information model obtains information related to each object, such as time to collision for a vehicle and relative distance from a traffic sign. Experimental results demonstrated a robust and accurate system for real-time object detection and recognition over thousands of image frames.
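
    Of the cues listed, the symmetry feature is the easiest to make concrete: the rear view of a vehicle is roughly mirror-symmetric about its vertical axis. Below is a sketch of one plausible symmetry score over sliding windows; the window size and threshold are illustrative assumptions, not the paper's values:

        import numpy as np

        def symmetry_score(patch):
            # Mirror symmetry about the vertical axis, in [0, 1]; 1 = perfect.
            diff = np.abs(patch.astype(float) - patch[:, ::-1].astype(float))
            return 1.0 - diff.mean() / 255.0

        def vehicle_candidates(gray, win=64, stride=32, thresh=0.85):
            # Keep windows that are symmetric; a real system would also test
            # the aspect ratio of the underlying edge blob.
            h, w = gray.shape
            return [(x, y, win, win)
                    for y in range(0, h - win, stride)
                    for x in range(0, w - win, stride)
                    if symmetry_score(gray[y:y + win, x:x + win]) > thresh]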

  20. Small Boats in an Ocean of School Activities: Towards a European Vision on Education

    ERIC Educational Resources Information Center

    Villalba, Ernesto

    2008-01-01

    The paper discusses the concept of schools as "multi-purpose learning centres", proposed by the European Commission in the year 2000 as part of the Lisbon Strategy to improve competitiveness. This concept was arguably the "European vision" for school education and was meant to drive the modernization of school education.…

  1. Artificial-vision stereo system as a source of visual information for preventing the collision of vehicles

    SciTech Connect

    Machtovoi, I.A.

    1994-10-01

    This paper explains the principle of automatically determining the position of extended and point objects in 3-D space, and of recognizing them, by means of an artificial-vision stereo system from the measured coordinates of conjugate points in stereo pairs; it also analyzes methods of identifying these points.

  2. Vision 2000: A Framework for Reviewing the Mandate of Ontario's System of Colleges of Applied Arts and Technology.

    ERIC Educational Resources Information Center

    Ontario Council of Regents, Toronto.

    An introduction is provided to Vision 2000, a project initiated by Ontario's Minister of Colleges and Universities to review the mandate of the province's Colleges of Applied Arts and Technology (CAAT). Section 1 discusses the challenges facing Ontario's educational system, the minister's mandate to the CAAT Council of Regents, and the objectives…

  3. A low-cost color vision system for automatic estimation of apple fruit orientation and maximum equatorial diameter

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The overall objective of this research was to develop an in-field presorting and grading system to separate undersized and defective fruit from fresh market-grade apples. To achieve this goal, a cost-effective machine vision inspection prototype was built, which consisted of a low-cost color camera,...

  4. An inexpensive Arduino-based LED stimulator system for vision research.

    PubMed

    Teikari, Petteri; Najjar, Raymond P; Malkki, Hemi; Knoblauch, Kenneth; Dumortier, Dominique; Gronfier, Claude; Cooper, Howard M

    2012-11-15

    Light emitting diodes (LEDs) are being used increasingly as light sources in life sciences applications such as vision research, fluorescence microscopy, and brain-computer interfacing. Here we present an inexpensive but effective visual stimulator based on LEDs and the open-source Arduino microcontroller prototyping platform. The main design goals of our system were to use off-the-shelf and open-source components as much as possible, and to reduce design complexity so that end-users without advanced electronics skills can use the system. The core of the system is a USB-connected Arduino microcontroller platform, designed originally with an emphasis on ease of use for creating interactive physical computing environments. The pulse-width modulation (PWM) output of the Arduino is used to drive the LEDs, allowing linear control of light intensity. The visual stimulator was demonstrated in applications such as murine pupillometry, rodent models for cognitive research, and heterochromatic flicker photometry in human psychophysics. These examples illustrate some of the possible applications that can be easily implemented and that are advantageous for students, educational purposes, and universities with limited resources. The LED stimulator system was developed as an open-source project. The software interface was developed in Python, with simplified examples provided for Matlab and LabVIEW. Source code and hardware information are distributed under the GNU General Public License (GPL, version 3).
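
    The record notes a Python software interface and PWM-driven intensity control. A minimal host-side sketch using the pyserial package is shown below; the serial port, baud rate, and one-byte wire protocol (one intensity byte per write, forwarded by the Arduino to analogWrite) are assumptions, not the published format:

        import time
        import serial  # pyserial

        port = serial.Serial("/dev/ttyACM0", 115200, timeout=1)  # assumed port
        time.sleep(2.0)  # the Arduino resets when the serial port opens

        def set_intensity(level):
            # 0 (off) .. 255 (max); maps to the PWM duty cycle on the Arduino.
            port.write(bytes([max(0, min(255, level))]))

        # Linear ramp: PWM duty cycle gives approximately linear light output.
        for duty in range(0, 256, 5):
            set_intensity(duty)
            time.sleep(0.05)
        port.close()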

  5. Vision-based Nano Robotic System for High-throughput Non-embedded Cell Cutting.

    PubMed

    Shang, Wanfeng; Lu, Haojian; Wan, Wenfeng; Fukuda, Toshio; Shen, Yajing

    2016-03-04

    Cell cutting is a significant task in biology, but high-throughput non-embedded cell cutting remains a major challenge for current techniques. This paper proposes a vision-based nano robotic system and realizes automatic non-embedded cell cutting with it. First, the nano robotic system is developed and integrated with a nanoknife inside an environmental scanning electron microscope (ESEM). Then, the positions of the nanoknife and the single cell are recognized, and the distance between them is calculated dynamically based on image processing. To guarantee positioning accuracy and working efficiency, we propose a distance-regulated speed adapting strategy, in which the moving speed is adjusted based on the distance between the nanoknife and the target cell. The results indicate that automatic non-embedded cutting can be achieved within 1-2 min with low invasiveness, benefiting from the high-precision nanorobot system and the sharp edge of the nanoknife. This research paves the way for high-throughput cell cutting in the cell's natural condition, which is expected to have a significant impact on biology studies, especially in-situ analysis at the cellular and subcellular scale, such as cell interaction investigation, neural signal transduction, and low-invasive cell surgery.
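
    The distance-regulated speed adapting strategy can be read as a simple piecewise speed law: full speed far from the cell, a proportional slowdown inside a safety zone. The sketch below uses illustrative numbers, not the authors' parameters; in the real system the distance would come from the ESEM image processing each frame:

        def adapted_speed(distance_um, v_max=50.0, v_min=0.5, slow_zone_um=20.0):
            # Fast approach far away; proportional slowdown near the target,
            # clamped so the knife never fully stalls before contact.
            if distance_um > slow_zone_um:
                return v_max
            return max(v_min, v_max * distance_um / slow_zone_um)

        # Speeds commanded as the knife closes in (distances in micrometres).
        for d in [100.0, 40.0, 20.0, 10.0, 2.0, 0.5]:
            print(f"{d:6.1f} um -> {adapted_speed(d):5.2f} um/s")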

  6. Wireless power and data transmission system for a micro implantable intraocular vision aid.

    PubMed

    Hijazi, N; Krisch, I; Hosticka, B J

    2002-01-01

    A wireless power and data transmission system developed for an intraocular vision aid for blind patients is described. The system is applicable to patients suffering from bilateral corneal opacification but with an intact posterior ocular segment. It consists of an external unit as well as an implant. The external unit performs image acquisition, channel coding, IR data transmission, and RF power transmission to the implant. The implantable unit contains a CMOS receiver, a receiver antenna coil, and a microdisplay based on an LED array. The CMOS receiver handles reception and decoding of the image data and drives the miniaturized LED array. Mechanical wiring between the external unit and the implant would be neither practical nor comfortable, so wireless data transfer is required; if power is also transferred wirelessly, the solution is ideal. The system described in this communication therefore employs a 13.56 MHz RF link for power transmission and a near-IR (NIR) optical link for data transmission from the external CMOS camera and telemetry unit to the implantable microdisplay.

  7. Determination of high temperature strains using a PC based vision system

    NASA Technical Reports Server (NTRS)

    Mcneill, Stephen R.; Sutton, Michael A.; Russell, Samuel S.

    1992-01-01

    With the widespread availability of video digitizers and inexpensive personal computers, the use of computer vision as an experimental tool is becoming commonplace. These systems are used to make a wide variety of measurements, ranging from simple surface characterization to velocity profiles. The Sub-Pixel Digital Image Correlation technique has been developed to measure the full-field displacement and displacement gradients of the surface of an object subjected to a driving force. The technique has shown its utility in measuring the deformation and movement of objects ranging from simple translation to fluid velocity profiles to crack-tip deformation of solid rocket fuel. It has recently been improved and used to measure the surface displacement field of an object at high temperature. The development of a PC-based Sub-Pixel Digital Image Correlation system has yielded an accurate and easy-to-use system for measuring surface displacements and gradients. Experiments have shown the system is viable for measuring thermal strain.
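
    Digital image correlation finds the displacement as the peak of a cross-correlation, then refines it below one pixel by interpolating around the peak. A 1-D sketch of that refinement follows (the real technique correlates 2-D image subsets, and this is not the authors' code):

        import numpy as np

        def subpixel_shift(ref, cur):
            # Integer shift from the cross-correlation peak, refined by a
            # parabola fitted through the peak and its two neighbours.
            n = len(ref)
            corr = np.correlate(cur - cur.mean(), ref - ref.mean(), mode="full")
            k = int(np.argmax(corr))
            delta = 0.0
            if 0 < k < len(corr) - 1:
                y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
                delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
            return (k - (n - 1)) + delta

        # Synthetic check: a Gaussian profile displaced by 3.4 samples.
        x = np.arange(200.0)
        ref = np.exp(-((x - 80.0) ** 2) / 50.0)
        cur = np.exp(-((x - 83.4) ** 2) / 50.0)
        print(subpixel_shift(ref, cur))   # close to 3.4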

  8. Color vision deficiencies

    NASA Astrophysics Data System (ADS)

    Vannorren, D.

    1982-04-01

    Congenital and acquired color vision defects are described in the context of physiological data. Light sources, photometry, color systems and test methods are described. A list of medicines is also presented. The practical social consequences of color vision deficiencies are discussed.

  9. Is binocular vision worth considering in people with low vision?

    PubMed

    Uzdrowska, Marta; Crossland, Michael; Broniarczyk-Loba, Anna

    2014-01-01

    In someone with good vision, binocular vision provides benefits that cannot be obtained by monocular viewing alone. People with visual impairment often have abnormal binocularity, yet they often use both eyes simultaneously in their everyday activities. Much remains unknown about binocular vision in people with visual impairment. As the binocular status of people with low vision strongly influences their treatment and rehabilitation, it should be evaluated and considered before diagnosis and further recommendations.

  10. Self-localization for an autonomous mobile robot based on an omni-directional vision system

    NASA Astrophysics Data System (ADS)

    Chiang, Shu-Yin; Lin, Kuang-Yu; Chia, Tsorng-Lin

    2013-12-01

    In this study, we designed an autonomous mobile robot based on the rules of the Federation of International Robot-soccer Association (FIRA) RoboSot category, integrating the techniques of computer vision, real-time image processing, dynamic target tracking, wireless communication, self-localization, motion control, path planning, and control strategy to achieve the contest goal. The self-localization scheme of the mobile robot is based on algorithms applied to images from its omni-directional vision system. In previous works, we used the image colors of the field goals as reference points, combining either dual-circle or trilateration positioning of the reference points to achieve self-localization of the autonomous mobile robot. However, because the image of the game field is easily affected by ambient light, positioning systems based exclusively on color-model algorithms produce errors. To reduce environmental effects and achieve self-localization of the robot, the proposed algorithm assesses the corners of field lines using the omni-directional vision system. Particularly in the mid-size league of the RoboCup soccer competition, self-localization algorithms based on extracting white lines from the soccer field have become increasingly popular; moreover, white lines are less influenced by light than the color models of the goals. We therefore propose an algorithm that transforms the omni-directional image into an unwrapped image, enhancing feature extraction. First, radial scan-lines are used to process the omni-directional image, reducing the computational load and improving system efficiency. The lines are arranged radially around the center of the omni-directional camera image, resulting in a shorter computation time compared with a traditional Cartesian scan. The omni-directional image is, however, geometrically distorted, which makes it difficult to recognize the field-line features directly; the unwrapping transformation addresses this.
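
    The unwrapping step maps the donut-shaped omni-directional image to a panoramic strip by sampling along radial scan-lines. A sketch with OpenCV's remap; the mirror centre and radii below are assumptions for a VGA camera, not the paper's calibration:

        import cv2
        import numpy as np

        def unwrap(omni, cx, cy, r_in, r_out, out_w=720):
            # One output column per viewing angle; one row per radius sample.
            theta = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
            r = np.arange(r_in, r_out, dtype=np.float32)
            map_x = (cx + np.outer(r, np.cos(theta))).astype(np.float32)
            map_y = (cy + np.outer(r, np.sin(theta))).astype(np.float32)
            return cv2.remap(omni, map_x, map_y, cv2.INTER_LINEAR)

        # omni = cv2.imread("omni.png")
        # pano = unwrap(omni, cx=320, cy=240, r_in=40, r_out=230)

    White field lines become near-vertical segments in the unwrapped strip, which simplifies corner extraction.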

  11. Differential GNSS and Vision-Based Tracking to Improve Navigation Performance in Cooperative Multi-UAV Systems

    PubMed Central

    Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio

    2016-01-01

    Autonomous navigation of micro-UAVs is typically based on the integration of low-cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as those relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in post-processing) by exploiting formation-flying deputy vehicles equipped with GPS receivers. The focus is on outdoor environments, and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on the DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information. PMID:27999318
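
    The DGPS/Vision pseudo-measurement enters the navigation filter as one more update of an Extended Kalman Filter. A generic numpy sketch of a single EKF measurement update follows; the state layout, models, and noise values are left abstract, and this is not the paper's filter:

        import numpy as np

        def ekf_update(x, P, z, h, H, R):
            # x, P: prior state estimate and covariance.
            # z: measurement (e.g., a DGPS/Vision-derived attitude angle).
            # h: measurement model; H: its Jacobian at x; R: measurement noise.
            y = z - h(x)                      # innovation
            S = H @ P @ H.T + R               # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
            x_new = x + K @ y
            P_new = (np.eye(len(x)) - K @ H) @ P
            return x_new, P_new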

  12. Terrestrial Reference Systems and Frames. A review of current activities

    NASA Astrophysics Data System (ADS)

    Boucher, C. C.

    2009-12-01

    Terrestrial Reference Systems (TRS) are an important domain of geodesy, involving both theoretical and applied aspects, as well as deep connections with astronomy, Earth sciences, and geo-information. The concept of a TRS implies several visions: an astronomical vision, using TRS to study the translational and rotational motion of the Earth in inertial space; an Earth-science vision, using TRS to build physical models of the Earth system and its various components (solid earth, oceans, atmosphere, hydrosphere); and a metrological vision, using TRS together with suitable coordinate systems (geographical coordinates, map projections, etc.) to define the geographical position of objects in the Earth's vicinity. A survey of current activities in this area is presented, referring to work done by the International Association of Geodesy (IAG), more specifically its Commission 1, GGOS, and IERS. A focus is placed on concepts and terminology, as well as on progress toward wide acceptance of the International Terrestrial Reference System (ITRS) and its realizations through global, regional, and national frames, as well as through specific systems such as satellite navigation systems.

  13. A vision-based automated guided vehicle system with marker recognition for indoor use.

    PubMed

    Lee, Jeisung; Hyun, Chang-Ho; Park, Mignon

    2013-08-07

    We propose an intelligent vision-based Automated Guided Vehicle (AGV) system using fiduciary markers. In this paper, we explore a low-cost, efficient vehicle guiding method using a consumer-grade web camera and fiduciary markers. In the proposed method, the system uses fiduciary markers containing a capital letter or a triangle indicating direction. The markers are very easy to produce, manipulate, and maintain, and the marker information is used to guide the vehicle. We use hue and saturation values in the image to extract marker candidates. When a fiduciary marker of known size is detected using a bird's-eye view and the Hough transform, the positional relation between the marker and the vehicle can be calculated. To recognize the character in the marker, a distance transform is used: the probability of a feature match is calculated from the distance transform, and the feature with the highest probability is selected as the captured marker. Four directional signals and 10 alphabet features are defined and used as markers. A 98.87% recognition rate was achieved in the testing phase. The experimental results with fiduciary markers show that the proposed method is a practical solution for an indoor AGV system.
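
    The front end of such a pipeline (hue/saturation masking for marker candidates, plus a distance transform for chamfer-style template matching) can be sketched with OpenCV. The colour range, area threshold, and file name below are assumptions, and the chamfer score is one plausible reading of the distance-transform matching, not the paper's exact formula:

        import cv2
        import numpy as np

        img = cv2.imread("frame.png")                # hypothetical camera frame
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

        # Keep pixels whose hue/saturation match the marker colour (assumed).
        mask = cv2.inRange(hsv, (100, 120, 50), (130, 255, 255))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        candidates = [c for c in contours if cv2.contourArea(c) > 400]

        def chamfer_score(glyph_edges, template_edges):
            # Mean distance from template edge pixels to the nearest glyph
            # edge; the template with the lowest score is the best match.
            dist = cv2.distanceTransform(255 - glyph_edges, cv2.DIST_L2, 3)
            ys, xs = np.nonzero(template_edges)
            return dist[ys, xs].mean()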

  14. Automatic detection system of shaft part surface defect based on machine vision

    NASA Astrophysics Data System (ADS)

    Jiang, Lixing; Sun, Kuoyuan; Zhao, Fulai; Hao, Xiangyang

    2015-05-01

    Surface physical damage detection is an important part of quality inspection for shaft parts, and traditional detection methods rely mostly on human visual identification, which suffers from low efficiency and poor reliability. To improve the automation level of quality inspection of shaft parts and to help establish a relevant industry quality standard, a machine vision inspection system connected to an MCU was designed to detect surface defects of shaft parts. The system adopts a monochrome line-scan digital camera and uses dark-field, forward illumination to acquire images with high contrast. After image filtering and enhancement, the images are segmented into binary images using the maximum between-cluster variance (Otsu) method; the main contours are then extracted using aspect-ratio and area criteria, and the centroid coordinates of each defect area (the locating points) are calculated. Finally, the defect locations are marked by a coding pen communicating with the MCU. Experiments show that no defects were missed and the false-alarm rate was below 5%, demonstrating that the designed system meets the demands of on-line, real-time inspection of shaft parts.
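
    The processing chain described above (filtering, Otsu binarisation, contour screening by aspect ratio and area, centroid as locating point) maps directly onto OpenCV calls. A sketch with assumed thresholds and file name, not the authors' parameters:

        import cv2

        gray = cv2.imread("shaft.png", cv2.IMREAD_GRAYSCALE)  # hypothetical image
        gray = cv2.medianBlur(gray, 5)                        # filtering stage

        # Maximum between-cluster variance (Otsu) binarisation.
        _, binary = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)

        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)
            area = cv2.contourArea(c)
            # Screen defect-like regions (aspect-ratio/area bounds assumed).
            if area > 50 and 0.2 < w / float(h) < 5.0:
                m = cv2.moments(c)
                cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
                print(f"defect locating point: ({cx:.1f}, {cy:.1f})")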

  15. Image distortion correction for single-lens stereo vision system employing a biprism

    NASA Astrophysics Data System (ADS)

    Qian, Beibei; Lim, Kah Bin

    2016-07-01

    A single-lens stereo vision system employing a biprism placed in front of the camera will generate unusual distortion in the captured image. Different from the typical image distortions due to lenses, this distortion is mainly induced by the thick biprism and appears to be incompatible with existing lens distortion models. A fully constrained and model-free distortion correction method is proposed. It employs all the projective invariants of a planar checkerboard template as the correction constraints, including straight lines, cross-ratio, and convergence at vanishing point, along with the distortion-free reference point as an additional constraint from the system. The extracted sample points are corrected by minimizing the total cost function formed by all these constraints. With both sets of distorted and corrected points, and the intermediate points interpolated by a local transformation, the correction maps are determined. Thereafter, all the subsequent images could be distortion corrected by the correction maps. This method performs well on the distorted image data captured by the system and shows improvements in accuracy on the camera calibration and depth recovery compared with other correction methods.
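
    Among the constraints listed, the straight-line term is the simplest to write down: corrected checkerboard corners that belong to one row must fit a line. The sketch below shows only that single cost term; the full method also includes cross-ratio, vanishing-point, and reference-point terms:

        import numpy as np

        def straightness_cost(rows):
            # rows: list of (N, 2) arrays, the corner points of each
            # checkerboard row after a candidate correction.
            cost = 0.0
            for pts in rows:
                centered = pts - pts.mean(axis=0)
                # Smallest singular value squared = sum of squared
                # perpendicular residuals to the total-least-squares line.
                cost += np.linalg.svd(centered, compute_uv=False)[-1] ** 2
            return cost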

  16. Biological model of vision for an artificial system that learns to perceive its environment

    SciTech Connect

    Blackburn, M.R.; Nguyen, H.G.

    1989-06-01

    The objective is to design an artificial vision system for use in robotics applications. Because the desired performance is equivalent to that achieved by nature, the authors anticipate that the objective will be accomplished most efficiently through modeling aspects of the neuroanatomy and neurophysiology of the biological visual system. Information enters the biological visual system through the retina and is passed to the lateral geniculate and optic tectum. The lateral geniculate nucleus (LGN) also receives information from the cerebral cortex and the result of these two inflows is returned to the cortex. The optic tectum likewise receives the retinal information in a context of other converging signals and organizes motor responses. A computer algorithm is described which implements models of the biological visual mechanisms of the retina, thalamic lateral geniculate and perigeniculate nuclei, and primary visual cortex. Motion and pattern analyses are performed in parallel and interact in the cortex to construct perceptions. We hypothesize that motion reflexes serve as unconditioned pathways for the learning and recall of pattern information. The algorithm demonstrates this conditioning through a learning function approximating heterosynaptic facilitation.

  17. Real-time Enhancement, Registration, and Fusion for an Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.

    2006-01-01

    Over the last few years NASA Langley Research Center (LaRC) has been developing an Enhanced Vision System (EVS) to aid pilots while flying in poor visibility conditions. The EVS captures imagery using two infrared video cameras. The cameras are placed in an enclosure that is mounted and flown forward-looking underneath the NASA LaRC ARIES 757 aircraft. The data streams from the cameras are processed in real time and displayed on monitors on board the aircraft. With proper processing, the camera system can provide better-than-human-observed imagery, particularly during poor visibility conditions. Achieving this goal, however, requires several stages of processing, including enhancement, registration, and fusion, as well as specialized processing hardware for real-time performance. We use a real-time implementation of the Retinex algorithm for image enhancement, affine transformations for registration, and weighted sums to perform fusion. All of the algorithms are executed on a single TI DM642 digital signal processor (DSP) clocked at 720 MHz. The image processing components were added to the EVS, tested, and demonstrated during flight tests in August and September of 2005. In this paper we briefly discuss the EVS image processing hardware and algorithms, then discuss implementation issues and show examples of the results obtained during flight tests.
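
    Of the three stages, registration and fusion are compact enough to sketch: an affine warp aligns one stream to the other, and a weighted sum blends them. The affine matrix, weights, and file names below are placeholders (a real system derives the transform from calibration), and the Retinex enhancement stage is omitted:

        import cv2
        import numpy as np

        lwir = cv2.imread("lwir.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frames
        swir = cv2.imread("swir.png", cv2.IMREAD_GRAYSCALE)

        # Registration: warp the second frame into the first frame's geometry
        # with a 2x3 affine transform (values assumed, from offline alignment).
        A = np.float32([[1.01, 0.00, 3.5],
                        [0.00, 1.01, -2.0]])
        swir_reg = cv2.warpAffine(swir, A, (lwir.shape[1], lwir.shape[0]))

        # Fusion: weighted sum of the registered streams.
        fused = cv2.addWeighted(lwir, 0.6, swir_reg, 0.4, 0)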

  18. Information theory analysis of sensor-array imaging systems for computer vision

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.; Self, M. O.

    1983-01-01

    Information theory is used to assess the performance of sensor-array imaging systems, with emphasis on the performance obtained with image-plane signal processing. By electronically controlling the spatial response of the imaging system, as suggested by the mechanism of human vision, it is possible to trade off edge enhancement for sensitivity, increase dynamic range, and reduce data transmission. Computational results show that: signal information density varies little with large variations in the statistical properties of random radiance fields; most information (generally about 85 to 95 percent) is contained in the signal intensity transitions rather than the levels; and performance is optimized when the OTF of the imaging system is nearly limited to the sampling passband, to minimize aliasing at the cost of blurring, and the SNR is very high, to permit the retrieval of small spatial detail from the extensively blurred signal. Shading the lens aperture transmittance to increase depth of field and using a regular hexagonal sensor array instead of a square lattice to decrease sensitivity to edge orientation also improve the signal information density, by up to about 30 percent at high SNRs.
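
    The signal information density in such an analysis is, at its simplest, the entropy of the sensor output distribution. A small numpy sketch of that baseline quantity follows; the paper's full model also accounts for blur, sampling, and noise, which this sketch ignores:

        import numpy as np

        def entropy_bits_per_pixel(img):
            # Shannon entropy of the gray-level histogram of an 8-bit image.
            hist = np.bincount(img.ravel(), minlength=256).astype(float)
            p = hist / hist.sum()
            p = p[p > 0]
            return float(-(p * np.log2(p)).sum())

        rng = np.random.default_rng(0)
        noise = rng.integers(0, 256, (64, 64)).astype(np.uint8)
        print(entropy_bits_per_pixel(noise))   # white noise: close to 8 bits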