Science.gov

Sample records for active vision systems

  1. ROVER: A prototype active vision system

    NASA Astrophysics Data System (ADS)

    Coombs, David J.; Marsh, Brian D.

    1987-08-01

    The Roving Eyes project is an experiment in active vision. We present the design and implementation of a prototype that tracks colored balls in images from an on-line charge coupled device (CCD) camera. Rover is designed to keep up with its rapidly changing environment by handling best and average case conditions and ignoring the worst case. This allows Rover's techniques to be less sophisticated and consequently faster. Each of Rover's major functional units is relatively isolated from the others, and an executive which knows all the functional units directs the computation by deciding which jobs would be most effective to run. This organization is realized with a priority queue of jobs and their arguments. Rover's structure not only allows it to adapt its strategy to the environment, but also makes the system extensible. A capability can be added to the system by adding a functional module with a well defined interface and by modifying the executive to make use of the new module. The current implementation is discussed in the appendices.
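
    Rover's executive-and-priority-queue organization lends itself to a compact illustration. The following minimal Python sketch is ours, not the original implementation (which predates this style of code); the job names and priorities are invented.

```python
import heapq

class Executive:
    """Toy scheduler in the spirit of Rover's executive: jobs sit in a
    priority queue, and the job judged most effective runs next."""

    def __init__(self):
        self._queue = []
        self._counter = 0   # tie-breaker so heapq never compares jobs

    def submit(self, priority, job, *args):
        heapq.heappush(self._queue, (priority, self._counter, job, args))
        self._counter += 1

    def run(self):
        while self._queue:
            _, _, job, args = heapq.heappop(self._queue)
            job(*args)   # a job may submit follow-up jobs, letting the
                         # executive adapt its strategy to the scene

# Hypothetical functional units registered with the executive:
ex = Executive()
ex.submit(5, lambda: print("refine color model"))
ex.submit(1, lambda: print("track ball in latest frame"))
ex.run()   # runs the priority-1 tracking job first
```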

  2. Vector Disparity Sensor with Vergence Control for Active Vision Systems

    PubMed Central

    Barranco, Francisco; Diaz, Javier; Gibaldi, Agostino; Sabatini, Silvio P.; Ros, Eduardo

    2012-01-01

    This paper presents an architecture for computing vector disparity for active vision systems as used in robotic applications. Control of the vergence angle of a binocular system allows us to efficiently explore dynamic environments, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy of the disparity computation around the fixation point, and fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engines are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity at up to 32 fps on VGA-resolution images with very good accuracy, as shown using benchmark sequences with known ground truth. The performance of the presented approaches in terms of frame rate, resource utilization, and accuracy is discussed. On the basis of these results, our study indicates that the gradient-based approach offers the best trade-off for integration with the active vision system. PMID:22438737
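
    To make the luminance (gradient-based) engine concrete, the sketch below estimates a 2-D disparity vector between two patches with a single-scale, Lucas-Kanade style least-squares solution. It is a software stand-in under simplifying assumptions, not the paper's multiscale FPGA engine.

```python
import numpy as np

def vector_disparity(left, right):
    """Estimate the 2-D displacement d with right(x) ~ left(x + d) from
    image gradients: solve the 2x2 normal equations A d = b of the
    linearized brightness-constancy constraint."""
    Iy, Ix = np.gradient(left.astype(float))
    It = right.astype(float) - left.astype(float)
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)                 # (dx, dy)

# Synthetic check: a smooth pattern shifted one pixel horizontally.
yy, xx = np.mgrid[0:64, 0:64]
img = np.sin(0.2 * xx) + np.cos(0.15 * yy)
print(vector_disparity(img[:, :-1], img[:, 1:]))  # close to [1. 0.]
```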

  3. Active Vision in Marmosets: A Model System for Visual Neuroscience

    PubMed Central

    Reynolds, John H.; Miller, Cory T.

    2014-01-01

    The common marmoset (Callithrix jacchus), a small-bodied New World primate, offers several advantages to complement vision research in larger primates. Studies in the anesthetized marmoset have detailed the anatomy and physiology of their visual system (Rosa et al., 2009) while studies of auditory and vocal processing have established their utility for awake and behaving neurophysiological investigations (Lu et al., 2001a,b; Eliades and Wang, 2008a,b; Osmanski and Wang, 2011; Remington et al., 2012). However, a critical unknown is whether marmosets can perform visual tasks under head restraint. This has been essential for studies in macaques, enabling both accurate eye tracking and head stabilization for neurophysiology. In one set of experiments we compared the free viewing behavior of head-fixed marmosets to that of macaques, and found that their saccadic behavior is comparable across a number of saccade metrics and that saccades target similar regions of interest including faces. In a second set of experiments we applied behavioral conditioning techniques to determine whether the marmoset could control fixation for liquid reward. Two marmosets could fixate a central point and ignore peripheral flashing stimuli, as needed for receptive field mapping. Both marmosets also performed an orientation discrimination task, exhibiting a saturating psychometric function with reliable performance and shorter reaction times for easier discriminations. These data suggest that the marmoset is a viable model for studies of active vision and its underlying neural mechanisms. PMID:24453311

  4. Exploring Techniques for Vision Based Human Activity Recognition: Methods, Systems, and Evaluation

    PubMed Central

    Xu, Xin; Tang, Jinshan; Zhang, Xiaolong; Liu, Xiaoming; Zhang, Hong; Qiu, Yimin

    2013-01-01

    With the wide applications of vision based intelligent systems, image and video analysis technologies have attracted the attention of researchers in the computer vision field. In image and video analysis, human activity recognition is an important research direction. By interpreting and understanding human activities, we can recognize and predict the occurrence of crimes and help the police or other agencies react immediately. In the past, a large number of papers have been published on human activity recognition in video and image sequences. In this paper, we provide a comprehensive survey of the recent development of the techniques, including methods, systems, and quantitative evaluation of the performance of human activity recognition. PMID:23353144

  5. Reduction of computational complexity in the image/video understanding systems with active vision

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2003-10-01

    The vision system evolved not only as a recognition system, but also as a sensory system for reaching, grasping, and other motor activities. In advanced creatures, it became a component of the prediction function, allowing the creation of environmental models and activity planning. Fast information processing and decision making are vital for any living creature and require reduction of informational and computational complexity. The brain achieves this goal using symbolic coding, hierarchical compression, and selective processing of visual information. Network-Symbolic representation, in which systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, is the most feasible basis for such models. It converts visual information into relational Network-Symbolic structures instead of precisely computing 3-dimensional models. Narrow foveal vision provides separation of figure from ground, object identification, semantic analysis, and precise control of actions. Rough, wide peripheral vision identifies and tracks salient motion, guiding the foveal system to salient objects; it also provides scene context. Objects with rigid bodies and other stable systems have coherent relational structures. Hierarchical compression and Network-Symbolic transformations derive more abstract structures that allow a particular structure to be recognized invariantly as an exemplar of a class. Robotic systems equipped with such smart vision will be able to navigate effectively in any environment, understand situations, and act accordingly.

  6. Expert system modeling of a vision system

    NASA Astrophysics Data System (ADS)

    Reihani, Kamran; Thompson, Wiley E.

    1992-05-01

    The proposed artificial intelligence-based vision model incorporates natural recognition processes, depicted as a visual pyramid, and a hierarchical representation of objects in the database. The visual pyramid, with base and apex representing pixels and image, respectively, is used as an analogy for a vision system. This paper provides an overview of recognition activities and states in the framework of an inductive model. It also presents a natural vision system and a counterpart expert system model that incorporates the described operations.

  7. Coherent laser vision system

    SciTech Connect

    Sebastion, R.L.

    1995-10-01

    The Coherent Laser Vision System (CLVS) is being developed to provide precision real-time 3D world views to support site characterization and robotic operations during facilities Decontamination and Decommissioning. Autonomous or semiautonomous robotic operations require an accurate, up-to-date 3D world view. Existing technologies for real-time 3D imaging, such as AM laser radar, have limited accuracy at significant ranges and have variability in range estimates caused by lighting or surface shading. Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.
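
    The FMCW ranging principle behind the CLVS reduces to one line of arithmetic: a linear frequency sweep delayed by the round trip to the target beats against the outgoing sweep at a frequency proportional to range. The chirp parameters below are invented for illustration; the report does not state the actual values.

```python
# FMCW ranging: a sweep of bandwidth B over period T, delayed by
# tau = 2R/c, produces a beat frequency f_b = B * tau / T, so
# R = c * f_b * T / (2 * B).  All numbers are assumed.
c = 3.0e8         # speed of light, m/s
B = 1.0e9         # sweep bandwidth, Hz
T = 1.0e-3        # sweep period, s
f_beat = 66.67e3  # measured beat frequency, Hz

R = c * f_beat * T / (2 * B)
print(f"range = {R:.2f} m")   # -> range = 10.00 m
```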

  8. Synthetic Vision Systems

    NASA Technical Reports Server (NTRS)

    Prinzel, L.J.; Kramer, L.J.

    2009-01-01

    A synthetic vision system is an aircraft cockpit display technology that presents the visual environment external to the aircraft using computer-generated imagery in a manner analogous to how it would appear to the pilot if forward visibility were not restricted. The purpose of this chapter is to review the state of synthetic vision systems, and discuss selected human factors issues that should be considered when designing such displays.

  9. Low Vision Enhancement System

    NASA Technical Reports Server (NTRS)

    1995-01-01

    NASA's Technology Transfer Office at Stennis Space Center worked with the Johns Hopkins Wilmer Eye Institute in Baltimore, Md., to incorporate software originally developed by NASA to process satellite images into the Low Vision Enhancement System (LVES). The LVES, referred to as 'ELVIS' by its users, is a portable image processing system that could make it possible to improve a person's vision by enhancing and altering images to compensate for impaired eyesight. The system consists of two orientation cameras, a zoom camera, and a video projection system. The headset and hand-held control weigh about two pounds each. Pictured is Jacob Webb, the first Mississippian to use the LVES.

  10. [Quality system Vision 2000].

    PubMed

    Pasini, Evasio; Pitocchi, Oreste; de Luca, Italo; Ferrari, Roberto

    2002-12-01

    A recent document of the Italian Ministry of Health points out that all structures which provide services to the National Health System should implement a Quality System according to the ISO 9000 standards. Vision 2000 is the new version of the ISO standard and is less bureaucratic than the old version. The specific requests of Vision 2000 are: a) to identify, monitor, and analyze the processes of the structure; b) to measure the results of the processes so as to ensure that they are effective; c) to implement the actions necessary to achieve the planned results and the continual improvement of these processes; d) to identify customer requests and to measure customer satisfaction. Specific attention should also be dedicated to the competence and training of the personnel involved in the processes. The principles of Vision 2000 agree with the principles of total quality management. The present article illustrates the Vision 2000 standard and provides practical examples of the implementation of this standard in cardiological departments. PMID:12611210

  11. Computer Vision Systems

    NASA Astrophysics Data System (ADS)

    Gunasekaran, Sundaram

    Food quality is of paramount consideration for all consumers, and its importance is perhaps second only to food safety. By some definitions, food safety is also incorporated into the broad categorization of food quality. Hence, the need for careful and accurate evaluation of food quality is at the forefront of research and development in both academia and industry. Among the many available methods for food quality evaluation, computer vision has proven to be the most powerful, especially for nondestructively extracting and quantifying many features that have direct relevance to food quality assessment and control. Furthermore, computer vision systems serve to rapidly evaluate the most readily observable food quality attributes: external characteristics such as color, shape, size, and surface texture. In addition, it is now possible, using advanced computer vision technologies, to "see" inside a food product and/or package to examine important quality attributes ordinarily unavailable to human evaluators. With rapid advances in electronic hardware and other associated imaging technologies, the cost-effectiveness and speed of computer vision systems have greatly improved, and many practical systems are already in place in the food industry.

  12. Bird Vision System

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The Bird Vision system is a multicamera photogrammetry software application that runs on a Microsoft Windows XP platform and was developed at Kennedy Space Center by ASRC Aerospace. This software system collects data about the locations of birds within a volume centered on the Space Shuttle and transmits it in real time to the laptop computer of a test director in the Launch Control Center (LCC) Firing Room.

  13. Design of secondary optics for IRED in active night vision systems.

    PubMed

    Xin, Di; Liu, Hua; Jing, Lei; Wang, Yao; Xu, Wenbin; Lu, Zhenwu

    2013-01-14

    An effective optical design method is proposed to solve the problem of an adjustable view angle for the infrared illuminator in active night vision systems. A novel total internal reflection (TIR) lens with three segments of the side surface is designed as the secondary optics of an infrared emitting diode (IRED). It can provide three modes with different view angles to achieve complete coverage of the monitored area. As an example, a novel TIR lens is designed for the SONY FCB-EX 480CP camera. The optical performance of the novel TIR lens is investigated by both numerical simulation and experiments. The results demonstrate that it can meet the requirements of different irradiation distances quite well, with view angles of 7.5°, 22°, and 50°. The mean optical efficiency is improved from 62% to 75% and the mean irradiance uniformity is improved from 65% to 85% compared with the traditional structure. PMID:23389004

  14. A vision system planner for increasing the autonomy of the Extravehicular Activity Helper/Retriever

    NASA Technical Reports Server (NTRS)

    Magee, Michael

    1993-01-01

    The Extravehicular Activity Retriever (EVAR) is a robotic device currently being developed by the Automation and Robotics Division at the NASA Johnson Space Center to support activities in the neighborhood of the Space Shuttle or Space Station Freedom. As the name implies, the Retriever's primary function will be to provide the capability to retrieve tools and equipment or other objects which have become detached from the spacecraft, but it will also be able to rescue a crew member who may have become inadvertently de-tethered. Later goals will include cooperative operations between a crew member and the Retriever such as fetching a tool that is required for servicing or maintenance operations. This paper documents a preliminary design for a Vision System Planner (VSP) for the EVAR that is capable of achieving visual objectives provided to it by a high level task planner. Typical commands which the task planner might issue to the VSP relate to object recognition, object location determination, and obstacle detection. Upon receiving a command from the task planner, the VSP then plans a sequence of actions to achieve the specified objective using a model-based reasoning approach. This sequence may involve choosing an appropriate sensor, selecting an algorithm to process the data, reorienting the sensor, adjusting the effective resolution of the image using lens zooming capability, and/or requesting the task planner to reposition the EVAR to obtain a different view of the object. An initial version of the Vision System Planner which realizes the above capabilities using simulated images has been implemented and tested. The remaining sections describe the architecture and capabilities of the VSP and its relationship to the high level task planner. In addition, typical plans that are generated to achieve visual goals for various scenarios are discussed. Specific topics to be addressed will include object search strategies, repositioning of the EVAR to improve the
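
    The goal expansion described above can be caricatured in a few lines. The sketch below is a hypothetical rule-based dispatcher, not the actual VSP, which plans with model-based reasoning; all goal, sensor, and action names are invented.

```python
# Hypothetical expansion of high-level visual goals into sensor actions.
PLANS = {
    "locate_object":    ["select_camera", "coarse_scan", "run_detector"],
    "identify_object":  ["select_camera", "zoom_to_resolution", "run_recognizer"],
    "detect_obstacles": ["select_ranging_sensor", "sweep_field_of_view"],
}

def plan(goal, target_visible=True):
    steps = list(PLANS[goal])
    if not target_visible:
        # Ask the high-level task planner to reposition the robot first.
        steps.insert(0, "request_reposition")
    return steps

print(plan("identify_object", target_visible=False))
```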

  15. Industrial robot's vision systems

    NASA Astrophysics Data System (ADS)

    Iureva, Radda A.; Raskin, Evgeni O.; Komarov, Igor I.; Maltseva, Nadezhda K.; Fedosovsky, Michael E.

    2016-03-01

    Due to the improved economic situation in the high-technology sectors, work on the creation of industrial robots and special mobile robotic systems has resumed. Despite this, robotic control systems have mostly remained unchanged, with all the advantages and disadvantages of those systems, largely due to a lack of funds for vision systems that could greatly facilitate the work of the operator and, in some cases, completely replace it. The paper is concerned with the complex machine vision of a robotic system for monitoring underground pipelines, which collects and analyzes up to 90% of the necessary information. Vision systems are used to identify obstacles to movement along a trajectory and to determine their origin, dimensions, and character. The object is illuminated with structured light, and a TV camera records the projected structure. Distortions of the structure uniquely determine the shape of the object in the camera's view. The reference illumination is synchronized with the camera. The main parameters of the system are the base distance between the illumination generator and the camera and the camera parallax angle (the angle between the optical axes of the projection unit and the camera).
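
    The structured-light principle described above is a triangulation. A minimal sketch under an idealized geometry (camera at the origin looking along +z, projector offset by a known baseline, a single light plane); the numbers are invented and this is not the paper's algorithm:

```python
import math

def depth_from_structured_light(u_px, f_px, theta_deg, baseline_m):
    """Intersect the camera ray x = z * u / f with the projector's light
    plane x = b - z * tan(theta), giving z = b / (u / f + tan(theta))."""
    return baseline_m / (u_px / f_px + math.tan(math.radians(theta_deg)))

z = depth_from_structured_light(u_px=120, f_px=800, theta_deg=20,
                                baseline_m=0.3)
print(f"{z:.3f} m")   # ~0.584 m for these assumed values
```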

  16. Evaluating the performance of close-range 3D active vision systems for industrial design applications

    NASA Astrophysics Data System (ADS)

    Beraldin, J.-Angelo; Gaiani, Marco

    2004-12-01

    In recent years, active three-dimensional (3D) vision systems, or range cameras for short, have come out of research laboratories to find niche markets in application fields as diverse as industrial design, automotive manufacturing, geomatics, space exploration, and cultural heritage, to name a few. Many publications address different issues linked to 3D sensing and processing, but these technologies currently pose a number of challenges to many new users, i.e., "what are they, how good are they, and how do they compare?". The need to understand, test, and integrate these range cameras with other technologies, e.g. photogrammetry, CAD, etc., is driven by the quest for optimal resolution, accuracy, speed, and cost. Before investing, users want to be certain that a given range camera satisfies their operational requirements. An understanding of the basic theory and best practices associated with these cameras is in fact fundamental to fulfilling the requirements listed above in an optimal way. This paper addresses the evaluation of active 3D range cameras as part of a study to better understand and select one or a number of them to fulfill the needs of industrial design applications. In particular, the effects of object material and surface features, calibration, and performance evaluation are discussed. Results are given for six different range cameras for close-range applications.

  17. On the architecture of the micro machine vision system

    NASA Astrophysics Data System (ADS)

    Li, Xudong; Wang, Xiaohao; Zhou, Zhaoying; Zong, Guanghua

    2006-01-01

    The micro machine vision system is an important part of a micromanipulating system, which has been used widely in many fields. As research activities on micromanipulating systems go deeper, the micro machine vision system attracts more attention. In this paper, the micro machine vision system is treated as a machine vision system with constraints and characteristics introduced by the specific application environment. Unlike a traditional machine vision system, a micro machine vision system usually does not aim at reconstruction of the scene; it is introduced to obtain the expected position information so that manipulation can be accomplished accurately. The architecture of the micro machine vision system is proposed. The key issues related to a micro machine vision system, such as system layout, the optical imaging device, and vision system calibration, are discussed to further explain the proposed architecture. A task-oriented micro machine vision system for a biological micromanipulating system is shown as an example, which is in compliance with the proposed architecture.

  18. VISION Digital Video Library System.

    ERIC Educational Resources Information Center

    Rusk, Michael D.

    2001-01-01

    Describes the VISION Digital Library System, a project implemented by the University of Kansas that uses locally developed applications to segment and automatically index video clips. Explains that the focus of VISION is to make possible the gathering and indexing of large amounts of video material, storing material on a database system, and…

  19. Dynamically re-configurable CMOS imagers for an active vision system

    NASA Technical Reports Server (NTRS)

    Yang, Guang (Inventor); Pain, Bedabrata (Inventor)

    2005-01-01

    A vision system is disclosed. The system includes a pixel array, at least one multi-resolution window operation circuit, and a pixel averaging circuit. The pixel array has an array of pixels configured to receive light signals from an image having at least one tracking target. The multi-resolution window operation circuits are configured to process the image. Each of the multi-resolution window operation circuits processes each tracking target within a particular multi-resolution window. The pixel averaging circuit is configured to sample and average pixels within the particular multi-resolution window.
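
    The windowed pixel-averaging operation can be emulated in software. A numpy sketch (hypothetical window and block parameters, not the patented circuit) of reading one window at reduced resolution:

```python
import numpy as np

def window_average(frame, x0, y0, w, h, block):
    """Average non-overlapping block x block pixel groups inside the
    window at (x0, y0): full detail on a tracking target can then be
    combined with coarse readout elsewhere."""
    win = frame[y0:y0 + h, x0:x0 + w].astype(float)
    win = win[:h - h % block, :w - w % block]      # trim to the block grid
    return win.reshape(win.shape[0] // block, block,
                       win.shape[1] // block, block).mean(axis=(1, 3))

frame = (np.arange(64 * 64) % 256).reshape(64, 64)
coarse = window_average(frame, x0=8, y0=8, w=32, h=32, block=4)
print(coarse.shape)   # (8, 8): a 32 x 32 window read out at 1/4 resolution
```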

  1. Active vision system for planning and programming of industrial robots in one-of-a-kind manufacturing

    NASA Astrophysics Data System (ADS)

    Berger, Ulrich; Schmidt, Achim

    1995-10-01

    The aspects of automation technology in industrial one-of-a-kind manufacturing are discussed. An approach to improve the quality and cost relation is developed, and an overview of a 3D-vision-supported automation system is given. This system is based on an active vision sensor for 3D-geometry feedback. Its measurement principle, the coded light approach, is explained. The experimental environment for the technical validation of the automation approach is demonstrated, in which robot-based processes (assembly, arc welding, and flame cutting) are graphically simulated and off-line programmed. A typical process sequence for automated one-of-a-kind manufacturing is described. The results of this research development are applied to a project on the automated disassembly of car parts for recycling using industrial robots.
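
    The coded light approach typically projects a sequence of binary stripe patterns so that each pixel sees a Gray code of its stripe index. A minimal decode, assuming already-thresholded pattern images, most significant bit first (the paper does not spell out its exact coding):

```python
import numpy as np

def decode_gray_code(bit_images):
    """Stack of 0/1 pattern images (n_bits, H, W) -> stripe index per
    pixel.  Gray code becomes plain binary by a cumulative XOR down the
    bit stack; the weighted sum then yields the index."""
    bits = np.array(bit_images, dtype=np.uint32)
    binary = np.zeros_like(bits)
    binary[0] = bits[0]
    for i in range(1, len(bits)):
        binary[i] = binary[i - 1] ^ bits[i]
    weights = 2 ** np.arange(len(bits) - 1, -1, -1, dtype=np.uint32)
    return np.tensordot(weights, binary, axes=1)

# 3-bit toy example on one image row of 8 pixels:
patterns = [[0, 0, 0, 0, 1, 1, 1, 1],
            [0, 0, 1, 1, 1, 1, 0, 0],
            [0, 1, 1, 0, 0, 1, 1, 0]]
print(decode_gray_code([[row] for row in patterns]))  # [[0 1 2 3 4 5 6 7]]
```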

  2. Coevolution of active vision and feature selection.

    PubMed

    Floreano, Dario; Kato, Toshifumi; Marocco, Davide; Sauser, Eric

    2004-03-01

    We show that complex visual tasks, such as position- and size-invariant shape recognition and navigation in the environment, can be tackled with simple architectures generated by a coevolutionary process of active vision and feature selection. Behavioral machines equipped with primitive vision systems and direct pathways between visual and motor neurons are evolved while they freely interact with their environments. We describe the application of this methodology in three sets of experiments, namely, shape discrimination, car driving, and robot navigation. We show that these systems develop sensitivity to a number of oriented, retinotopic visual features (edges, corners, height) and a behavioral repertoire to locate, bring, and keep these features in sensitive regions of the vision system, resembling strategies observed in simple insects. PMID:15052484

  3. Generic motion platform for active vision

    NASA Astrophysics Data System (ADS)

    Weiman, Carl F. R.; Vincze, Markus

    1996-10-01

    The term 'active vision' was first used by Bajcsy at a NATO workshop in 1982 to describe an emerging field of robot vision which departed sharply from traditional paradigms of image understanding and machine vision. The new approach embeds a moving camera platform as an in-the-loop component of robotic navigation or hand-eye coordination. Visually servoed steering of the focus of attention supersedes the traditional functions of recognition and gauging. Custom active vision platforms soon proliferated in research laboratories in Europe and North America. In 1990 the National Science Foundation funded the design of a common platform to promote cooperation and reduce cost in active vision research. This paper describes the resulting platform. The design was driven by payload requirements for binocular motorized C-mount lenses on a platform whose performance and articulation emulate those of the human eye-head system. The result was a 4-DOF mechanism driven by servo-controlled DC brush motors. A crossbeam supports two independent worm-gear-driven camera vergence mounts at speeds up to 1,000 degrees per second over a range of +/- 90 degrees from dead ahead. This crossbeam is supported by a pan-tilt mount whose horizontal axis intersects the vergence axes for translation-free camera rotation about these axes at speeds up to 500 degrees per second.
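
    For such a head, the vergence command needed to fixate a target dead ahead follows from elementary geometry. A small sketch, assuming symmetric vergence and an invented 0.2 m camera baseline:

```python
import math

def vergence_deg(baseline_m, distance_m):
    """Each camera rotates inward by atan((b / 2) / d) to fixate a
    target on the midline at distance d."""
    return math.degrees(math.atan((baseline_m / 2) / distance_m))

for d in (0.5, 1.0, 3.0):
    print(f"target at {d} m -> {vergence_deg(0.2, d):.1f} deg per camera")
```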

  4. Coherent laser vision system (CLVS)

    SciTech Connect

    1997-02-13

    The purpose of the CLVS research project is to develop a prototype fiber-optic based Coherent Laser Vision System suitable for DOE's EM Robotics program. The system provides three-dimensional (3D) vision for monitoring situations in which it is necessary to update geometric data on the order of once per second. The CLVS project plan required implementation in two phases of the contract, a Base Contract and a continuance option. This is the Base Program Interim Phase Topical Report presenting the results of Phase 1 of the CLVS research project. Test results and demonstration results provide a proof-of-concept for a system providing three-dimensional (3D) vision with the performance capability required to update geometric data on the order of once per second.

  5. Real-time vision systems

    SciTech Connect

    Johnson, R.; Hernandez, J.E.; Lu, Shin-yee

    1994-11-15

    Many industrial and defense applications require the ability to make instantaneous decisions based on sensor input from a time-varying process. Such systems are referred to as 'real-time systems' because they process and act on data as it occurs in time. When a vision sensor is used in a real-time system, the processing demands can be quite substantial, with typical data rates of 10-20 million samples per second. A real-time Machine Vision Laboratory (MVL) was established in FY94 to extend our years of experience in developing computer vision algorithms to include the development and implementation of real-time vision systems. The laboratory is equipped with a variety of hardware components, including Datacube image acquisition and processing boards, a Sun workstation, and several different types of CCD cameras, including monochrome and color area cameras and analog and digital line-scan cameras. The equipment is reconfigurable for prototyping different applications. This facility has been used to support several programs at LLNL, including O Division's Peacemaker and Deadeye Projects as well as the CRADA with the U.S. Textile Industry, CAFE (Computer Aided Fabric Inspection). To date, we have successfully demonstrated several real-time applications: bullet tracking, stereo tracking and ranging, and web inspection. This work has been documented in the ongoing development of a real-time software library.

  6. A stereoscopic vision system

    NASA Astrophysics Data System (ADS)

    Kiraly, Zsolt

    In this investigation an optical system is introduced that is suitable for inspecting the interiors of confined spaces, such as the walls of containers, cavities, reservoirs, fuel tanks, pipelines, and the gastrointestinal tract. The optical system wirelessly transmits stereoscopic (three-dimensional) video to a computer, which displays the video on the screen where it can be viewed with shutter glasses. To minimize space requirements, the video from the two cameras (required to produce stereoscopic images) is multiplexed into a single stream for transmission. The video is demultiplexed inside the computer, corrected for fisheye distortion and lens misalignment, and cropped to the proper size. Algorithms were developed that enable the system to perform these tasks. A proof-of-concept device was constructed that demonstrates the operation and the practicality of the optical system. Using this device, tests were performed validating the concepts and the algorithms.

  7. Stereoscopic vision system

    NASA Astrophysics Data System (ADS)

    Király, Zsolt; Springer, George S.; Van Dam, Jacques

    2006-04-01

    In this investigation, an optical system is introduced for inspecting the interiors of confined spaces, such as the walls of containers, cavities, reservoirs, fuel tanks, pipelines, and the gastrointestinal tract. The optical system wirelessly transmits stereoscopic video to a computer that displays the video in real time on the screen, where it is viewed with shutter glasses. To minimize space requirements, the videos from the two cameras (required to produce stereoscopic images) are multiplexed into a single stream for transmission. The video is demultiplexed inside the computer, corrected for fisheye distortion and lens misalignment, and cropped to the proper size. Algorithms are developed that enable the system to perform these tasks. A proof-of-concept device is constructed that demonstrates the operation and the practicality of the optical system. Using this device, tests are performed assessing the validity of the concepts and the algorithms.
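
    Fisheye correction of the kind mentioned above is commonly modeled with a polynomial radial term. The sketch below applies such a model to image points; the coefficients k1 and k2 are placeholders, where a real system would use values fitted during calibration:

```python
import numpy as np

def undistort_points(pts, k1, k2, center):
    """First-order radial model: r_undistorted = r * (1 + k1*r^2 + k2*r^4),
    applied about the distortion center."""
    p = np.asarray(pts, dtype=float) - center
    r2 = np.sum(p * p, axis=1, keepdims=True)
    return center + p * (1 + k1 * r2 + k2 * r2 * r2)

pts = [[100, 100], [320, 240], [500, 400]]
print(undistort_points(pts, k1=2e-7, k2=1e-13,
                       center=np.array([320.0, 240.0])))
```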

  8. Active vision in satellite scene analysis

    NASA Technical Reports Server (NTRS)

    Naillon, Martine

    1994-01-01

    In earth observation or planetary exploration it is necessary to have more and more autonomous systems, able to adapt to unpredictable situations. This imposes the use, in artificial systems, of new concepts in cognition, based on the fact that perception should not be separated from the recognition and decision-making levels. This means that low-level signal processing (the perception level) should interact with symbolic and high-level processing (the decision level). This paper describes the new concept of active vision, implemented in Distributed Artificial Intelligence by Dassault Aviation following a 'structuralist' principle. An application to spatial image interpretation is given, oriented toward flexible robotics.

  9. Basic design principles of colorimetric vision systems

    NASA Astrophysics Data System (ADS)

    Mumzhiu, Alex M.

    1998-10-01

    Color measurement is an important part of overall production quality control in the textile, coating, plastics, food, paper, and other industries. The color measurement instruments used for production quality control, such as colorimeters and spectrophotometers, have many limitations. In many applications they cannot be used for a variety of reasons and have to be replaced with human operators. Machine vision has great potential for color measurement. The components for color machine vision systems, such as broadcast-quality 3-CCD cameras, fast and inexpensive PCI frame grabbers, and sophisticated image processing software packages, are available. However, the machine vision industry has only started to approach the color domain. The few color machine vision systems on the market, produced by the largest machine vision manufacturers, have very limited capabilities. A lack of understanding that a vision-based color measurement system can fail if it ignores the basic principles of colorimetry is the main reason for the slow progress of color vision systems. The purpose of this paper is to clarify how color measurement principles have to be applied to vision systems and how the electro-optical design features of colorimeters have to be modified in order to implement them in vision systems. The subject far exceeds the limitations of a journal paper, so only the most important aspects are discussed. An overview of the major areas of application for colorimetric vision systems is given. Finally, the reasons why some customers are happy with their vision systems and some are not are analyzed.
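
    The central colorimetric requirement, mapping device RGB into a device-independent space before measuring color, can be sketched in a few lines. The matrix below is the standard sRGB/D65 one; a colorimetric vision system would instead use a matrix calibrated for its own camera, and gamma handling is omitted here.

```python
import numpy as np

M = np.array([[0.4124, 0.3576, 0.1805],      # linear sRGB -> CIE XYZ (D65)
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])
WHITE = M @ np.ones(3)                       # D65 white point

def rgb_to_lab(rgb):
    """Linear RGB in [0, 1] -> CIE L*a*b*, the space in which color
    differences are (approximately) perceptually meaningful."""
    t = (M @ np.asarray(rgb, dtype=float)) / WHITE
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t),
                 t / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16,
                     500 * (f[0] - f[1]),
                     200 * (f[1] - f[2])])

print(rgb_to_lab([0.5, 0.5, 0.5]))   # neutral grey: a* = b* = 0
```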

  10. Vision inspection system and method

    NASA Technical Reports Server (NTRS)

    Huber, Edward D. (Inventor); Williams, Rick A. (Inventor)

    1997-01-01

    An optical vision inspection system (4) and method for multiplexed illuminating, viewing, analyzing and recording a range of characteristically different kinds of defects, depressions, and ridges in a selected material surface (7) with first and second alternating optical subsystems (20, 21) illuminating and sensing successive frames of the same material surface patch. To detect the different kinds of surface features including abrupt as well as gradual surface variations, correspondingly different kinds of lighting are applied in time-multiplexed fashion to the common surface area patches under observation.

  11. Vision Loss With Sexual Activity.

    PubMed

    Lee, Michele D; Odel, Jeffrey G; Rudich, Danielle S; Ritch, Robert

    2016-01-01

    A 51-year-old white man presented with multiple episodes of transient painless unilateral vision loss precipitated by sexual intercourse. Examination was significant for closed angles bilaterally. His visual symptoms completely resolved following treatment with laser peripheral iridotomies. PMID:25265010

  12. VISION 21 SYSTEMS ANALYSIS METHODOLOGIES

    SciTech Connect

    G.S. Samuelsen; A. Rao; F. Robson; B. Washom

    2003-08-11

    Under the sponsorship of the U.S. Department of Energy/National Energy Technology Laboratory, a multi-disciplinary team led by the Advanced Power and Energy Program of the University of California at Irvine is defining the system engineering issues associated with the integration of key components and subsystems into power plant systems that meet performance and emission goals of the Vision 21 program. The study efforts have narrowed down the myriad of fuel processing, power generation, and emission control technologies to selected scenarios that identify those combinations having the potential to achieve the Vision 21 program goals of high efficiency and minimized environmental impact while using fossil fuels. The technology levels considered are based on projected technical and manufacturing advances being made in industry and on advances identified in current and future government supported research. Included in these advanced systems are solid oxide fuel cells and advanced cycle gas turbines. The results of this investigation will serve as a guide for the U. S. Department of Energy in identifying the research areas and technologies that warrant further support.

  13. Precise calibration of binocular vision system used for vision measurement.

    PubMed

    Cui, Yi; Zhou, Fuqiang; Wang, Yexin; Liu, Liu; Gao, He

    2014-04-21

    Binocular vision calibration is of great importance in 3D machine vision measurement. With respect to binocular vision calibration, the nonlinear optimization technique is a crucial step in improving accuracy. Existing optimization methods mostly aim at minimizing the sum of reprojection errors for the two cameras based on their respective 2D image pixel coordinates. However, the subsequent measurement process is conducted in a 3D coordinate system, which is not consistent with the optimization coordinate system. Moreover, the error criteria for optimization and measurement differ: an equal pixel-distance error in the 2D image plane leads to different 3D metric distance errors at different positions in front of the camera. To address these issues, we propose a precise calibration method for binocular vision systems that is devoted to minimizing the metric distance error between the point reconstructed through optimal triangulation and the ground truth in the 3D measurement coordinate system. In addition, the inherent epipolar constraint and a constant distance constraint are combined to enhance the optimization process. To evaluate the performance of the proposed method, both simulative and real experiments have been carried out, and the results show that the proposed method is reliable and efficient in improving measurement accuracy compared with the conventional method. PMID:24787804
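
    The paper's central idea, minimizing 3D metric distance error rather than 2D reprojection error, can be sketched as a residual function. The linear (DLT) triangulation below is a generic stand-in for the authors' optimal triangulation; an outer optimizer would adjust the calibration parameters behind P1 and P2.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two 3x4 projection
    matrices and its pixel coordinates in each camera."""
    A = np.vstack([x1[0] * P1[2] - P1[0], x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0], x2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]

def metric_residuals(P1, P2, pix1, pix2, truth):
    """3-D distances between reconstructed points and ground truth, the
    quantity this calibration method minimizes."""
    recon = np.array([triangulate(P1, P2, a, b) for a, b in zip(pix1, pix2)])
    return np.linalg.norm(recon - truth, axis=1)

# Synthetic check: two unit-focal cameras 1 m apart, a point at (0, 0, 5).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X = np.array([0.0, 0.0, 5.0])
x1 = P1 @ np.append(X, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X, 1); x2 = x2[:2] / x2[2]
print(metric_residuals(P1, P2, [x1], [x2], X[None, :]))   # ~[0.]
```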

  14. 77 FR 2342 - Seventeenth Meeting: RTCA Special Committee 213, Enhanced Flight Vision/Synthetic Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-17

    ... Federal Aviation Administration Seventeenth Meeting: RTCA Special Committee 213, Enhanced Flight Vision... Transportation (DOT). ACTION: Notice of RTCA Special Committee 213, Enhanced Flight Vision/ Synthetic Vision... meeting of RTCA Special Committee 213, Enhanced Flight Vision/Synthetic Vision Systems (EFVS/SVS)....

  15. Compact Autonomous Hemispheric Vision System

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Cunningham, Thomas J.; Werne, Thomas A.; Eastwood, Michael L.; Walch, Marc J.; Staehle, Robert L.

    2012-01-01

    Solar System Exploration camera implementations to date have involved either single cameras with wide field-of-view (FOV) and consequently coarser spatial resolution, cameras on a movable mast, or single cameras necessitating rotation of the host vehicle to afford visibility outside a relatively narrow FOV. These cameras require detailed commanding from the ground or separate onboard computers to operate properly, and are incapable of making decisions based on image content that control pointing and downlink strategy. For color, a filter wheel having selectable positions was often added, which added moving parts, size, mass, and power, and reduced reliability. A system was developed based on a general-purpose miniature visible-light camera using advanced CMOS (complementary metal oxide semiconductor) imager technology. The baseline camera has a 92° FOV, and six cameras are arranged in an angled-up carousel fashion, with FOV overlaps such that the system has a 360° FOV in azimuth. A seventh camera, also with a 92° FOV, is installed normal to the plane of the other six cameras, giving the system a >90° FOV in elevation and completing the hemispheric vision system. A central unit houses the common electronics box (CEB) controlling the system (power conversion, data processing, memory, and control software). Stereo is achieved by adding a second system on a baseline, and color is achieved by stacking two more systems (for a total of three, each equipped with its own filter). Two connectors on the bottom of the CEB provide a connection to a carrier (rover, spacecraft, balloon, etc.) for telemetry, commands, and power. This system has no moving parts. The system's onboard software (SW) supports autonomous operations such as pattern recognition and tracking.
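
    A quick arithmetic check of the azimuth coverage: six 92° cameras provide 552° of raw coverage, so tiling 360° leaves an average of 32° of overlap per adjacent pair, consistent with the overlapping arrangement described above.

```python
n, fov = 6, 92
print((n * fov - 360) / n)   # -> 32.0 degrees of mean overlap per pair
```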

  16. Analysis of the development and the prospects about vehicular infrared night vision system

    NASA Astrophysics Data System (ADS)

    Li, Jing; Fan, Hua-ping; Xie, Zu-yun; Zhou, Xiao-hong; Yu, Hong-qiang; Huang, Hui

    2013-08-01

    Through the classification of vehicular infrared night vision systems and a comparison of the mainstream vehicular infrared night vision products, we summarize the functions of these systems, which include night vision, defogging, strong-light resistance, and biological recognition. The markets for vehicular infrared night vision systems in luxury cars and the fire protection industry are also analyzed. Finally, we conclude that the vehicular infrared night vision system, as an essential piece of active safety equipment, will promote both the night vision photoelectric industry and the automobile industry.

  17. Active stereo vision routines using PRISM-3

    NASA Astrophysics Data System (ADS)

    Antonisse, Hendrick J.

    1992-11-01

    This paper describes work in progress on a set of visual routines and supporting capabilities implemented on the PRISM-3 real-time vision system. The routines are used in an outdoor robot retrieval task. The task requires the robot to locate a donor agent -- a Hero2000 -- which holds the object to be retrieved, to navigate to the donor, to accept the object from the donor, and return to its original location. The routines described here will form an integral part of the navigation and wide-area search tasks. Active perception is exploited to locate the donor using real-time stereo ranging directed by a pan/tilt/verge mechanism. A framework for orchestrating visual search has been implemented and is briefly described.

  18. Designing vision systems for robotic applications

    SciTech Connect

    Trivedi, M.M.

    1988-01-01

    Intelligent robotic systems utilize sensory information to perceive the nature of their work environment. Of the many sensor modalities, vision is recognized as one of the most important and cost-effective sensors utilized in practical systems. In this paper, we address the problem of designing vision systems to perform a variety of robotic inspection and manipulation tasks. We describe the nature and characteristics of the robotic task domain and discuss the computational hierarchy governing the process of scene interpretation. We also present a case study illustrating the design of a specific vision system developed for performing inspection and manipulation tasks associated with a control panel. 27 refs., 6 figs.

  19. Space environment robot vision system

    NASA Technical Reports Server (NTRS)

    Wood, H. John; Eichhorn, William L.

    1990-01-01

    A prototype twin-camera stereo vision system for autonomous robots has been developed at Goddard Space Flight Center. Standard charge coupled device (CCD) imagers are interfaced with commercial frame buffers and direct memory access to a computer. The overlapping portions of the images are analyzed using photogrammetric techniques to obtain information about the position and orientation of objects in the scene. The camera head consists of two 510 x 492 x 8-bit CCD cameras mounted on individually adjustable mounts. The 16 mm efl lenses are designed for minimum geometric distortion. The cameras can be rotated in the pitch, roll, and yaw (pan angle) directions with respect to their optical axes. Calibration routines have been developed which automatically determine the lens focal lengths and pan angle between the two cameras. The calibration utilizes observations of a calibration structure with known geometry. Test results show the precision attainable is plus or minus 0.8 mm in range at 2 m distance using a camera separation of 171 mm. To demonstrate a task needed on Space Station Freedom, a target structure with a movable I beam was built. The camera head can autonomously direct actuators to dock the I-beam to another one so that they could be bolted together.
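
    The quoted precision can be sanity-checked against the standard stereo range-error relation sigma_R = R**2 * sigma_d / (f * b). The pixel pitch assumed below is not given in the abstract:

```python
# Disparity precision implied by +/-0.8 mm range precision at 2 m:
R, f, b, sigma_R = 2.0, 0.016, 0.171, 0.0008     # metres
sigma_d = sigma_R * f * b / R**2
print(f"sigma_d = {sigma_d * 1e6:.2f} um")       # ~0.55 um
# Assuming a ~10 um CCD pixel pitch, that is roughly 1/18 pixel,
# i.e. the result requires subpixel feature localization.
```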

  20. COHERENT LASER VISION SYSTEM (CLVS) OPTION PHASE

    SciTech Connect

    Robert Clark

    1999-11-18

    The purpose of this research project was to develop a prototype fiber-optic based Coherent Laser Vision System (CLVS) suitable for DOE's EM Robotic program. The system provides three-dimensional (3D) vision for monitoring situations in which it is necessary to update the dimensional spatial data on the order of once per second. The system has total immunity to ambient lighting conditions.

  1. ACT-Vision: active collaborative tracking for multiple PTZ cameras

    NASA Astrophysics Data System (ADS)

    Broaddus, Christopher; Germano, Thomas; Vandervalk, Nicholas; Divakaran, Ajay; Wu, Shunguang; Sawhney, Harpreet

    2009-04-01

    We describe a novel scalable approach for the management of a large number of Pan-Tilt-Zoom (PTZ) cameras deployed outdoors for persistent tracking of humans and vehicles, without resorting to the large fields of view of associated static cameras. Our system, Active Collaborative Tracking - Vision (ACT-Vision), is essentially a real-time operating system that can control hundreds of PTZ cameras to ensure uninterrupted tracking of target objects while maintaining image quality and coverage of all targets using a minimal number of sensors. The system ensures the visibility of targets between PTZ cameras by using criteria such as distance from sensor and occlusion.
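
    The camera-management problem can be caricatured as a greedy assignment of targets to the nearest unoccluded free camera. Everything below (1-D positions, the occlusion set) is hypothetical; the real system also weighs image quality and hand-off continuity.

```python
def assign_cameras(targets, cameras, occluded):
    """Greedy sketch: each target gets the nearest free PTZ camera with
    a clear line of sight.  `occluded` holds (camera, target) pairs."""
    free, plan = set(cameras), {}
    for tgt, pos in sorted(targets.items()):
        visible = [c for c in free if (c, tgt) not in occluded]
        if visible:
            best = min(visible, key=lambda c: abs(cameras[c] - pos))
            plan[tgt] = best
            free.discard(best)
    return plan

cameras = {"ptz1": 0.0, "ptz2": 50.0, "ptz3": 100.0}   # positions in metres
targets = {"person_a": 10.0, "vehicle_b": 90.0}
print(assign_cameras(targets, cameras, {("ptz1", "person_a")}))
# -> {'person_a': 'ptz2', 'vehicle_b': 'ptz3'}
```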

  2. Artificial vision support system (AVS(2)) for improved prosthetic vision.

    PubMed

    Fink, Wolfgang; Tarbell, Mark A

    2014-11-01

    State-of-the-art and upcoming camera-driven, implanted artificial vision systems provide only tens to hundreds of electrodes, affording only limited visual perception for blind subjects. Therefore, real time image processing is crucial to enhance and optimize this limited perception. Since tens or hundreds of pixels/electrodes allow only for a very crude approximation of the typically megapixel optical resolution of the external camera image feed, the preservation and enhancement of contrast differences and transitions, such as edges, are especially important compared to picture details such as object texture. An Artificial Vision Support System (AVS(2)) is devised that displays the captured video stream in a pixelation conforming to the dimension of the epi-retinal implant electrode array. AVS(2), using efficient image processing modules, modifies the captured video stream in real time, enhancing 'present but hidden' objects to overcome inadequacies or extremes in the camera imagery. As a result, visual prosthesis carriers may now be able to discern such objects in their 'field-of-view', thus enabling mobility in environments that would otherwise be too hazardous to navigate. The image processing modules can be engaged repeatedly in a user-defined order, which is a unique capability. AVS(2) is directly applicable to any artificial vision system that is based on an imaging modality (video, infrared, sound, ultrasound, microwave, radar, etc.) as the first step in the stimulation/processing cascade, such as: retinal implants (i.e. epi-retinal, sub-retinal, suprachoroidal), optic nerve implants, cortical implants, electric tongue stimulators, or tactile stimulators. PMID:25286349
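
    The repeatable, user-ordered module chaining described above amounts to a composable pipeline. The modules below (contrast stretch, edge boost, pixelation to an assumed 10 x 10 electrode grid) are stand-ins for AVS(2)'s actual processing modules.

```python
import numpy as np

def contrast_stretch(img):
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-9)

def edge_boost(img):
    gy, gx = np.gradient(img)
    return np.clip(img + np.hypot(gx, gy), 0.0, 1.0)   # emphasize transitions

def pixelate(img, grid=10):
    """Downsample to the electrode-array resolution (grid x grid)."""
    h, w = img.shape
    img = img[:h - h % grid, :w - w % grid]
    return img.reshape(grid, img.shape[0] // grid,
                       grid, img.shape[1] // grid).mean(axis=(1, 3))

# Modules run in a user-defined order; one module may repeat.
pipeline = [contrast_stretch, edge_boost, contrast_stretch, pixelate]
out = np.random.default_rng(1).random((120, 160))
for module in pipeline:
    out = module(out)
print(out.shape)   # (10, 10): one value per electrode
```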

  3. Development Of A Vision Guided Robot System

    NASA Astrophysics Data System (ADS)

    Torfeh-Isfahani, Mohammad; Yeung, Kim F.

    1987-10-01

    This paper presents the development of an intelligent vision-guided system through the integration of a vision system into a robot. Systems like the one described in this paper are able to work unattended and can be used in many automated assembly operations. Such systems can do repetitive tasks more efficiently and accurately than human operators because of the immunity of machines to human factors such as boredom, fatigue, and stress. To give a better understanding of the capabilities of such systems, this paper highlights what they can accomplish by detailing the development of one such system, which has already been built and is functional.

  4. Compact Through-The-Torch Vision System

    NASA Technical Reports Server (NTRS)

    Gilbert, Jeffrey L.; Gutow, David A.

    1992-01-01

    Changes in gas/tungsten-arc welding torch equipped with through-the-torch vision system make it smaller and more resistant to welding environment. Vision subsystem produces image of higher quality, flow of gas enhanced, and parts replaced quicker and easier. Coaxial series of lenses and optical components provides overhead view of joint and weld puddle for real-time control. Designed around miniature high-resolution video camera. Smaller size enables torch to weld joints formerly inaccessible.

  5. Volumetric imaging system for the ionosphere (VISION)

    NASA Astrophysics Data System (ADS)

    Dymond, Kenneth F.; Budzien, Scott A.; Nicholas, Andrew C.; Thonnard, Stefan E.; Fortna, Clyde B.

    2002-01-01

    The Volumetric Imaging System for the Ionosphere (VISION) is designed to use limb and nadir images to reconstruct the three-dimensional distribution of electrons over a 1000 km wide by 500 km high slab beneath the satellite with 10 km x 10 km x 10 km voxels. The primary goal of the VISION is to map and monitor global and mesoscale (> 10 km) electron density structures, such as the Appleton anomalies and field-aligned irregularity structures. The VISION consists of three UV limb imagers, two UV nadir imagers, a dual frequency Global Positioning System (GPS) receiver, and a coherently emitting three frequency radio beacon. The limb imagers will observe the O II 83.4 nm line (daytime electron density), O I 135.6 nm line (nighttime electron density and daytime O density), and the N2 Lyman-Birge-Hopfield (LBH) bands near 143.0 nm (daytime N2 density). The nadir imagers will observe the O I 135.6 nm line (nighttime electron density and daytime O density) and the N2 LBH bands near 143.0 nm (daytime N2 density). The GPS receiver will monitor the total electron content between the satellite containing the VISION and the GPS constellation. The three frequency radio beacon will be used with ground-based receiver chains to perform computerized radio tomography below the satellite containing the VISION. The measurements made using the two radio frequency instruments will be used to validate the VISION UV measurements.

  6. Flight testing an integrated synthetic vision system

    NASA Astrophysics Data System (ADS)

    Kramer, Lynda J.; Arthur, Jarvis J., III; Bailey, Randall E.; Prinzel, Lawrence J., III

    2005-05-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications to eliminate low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance for transport aircraft. The SVS concept being developed at NASA encompasses the integration of tactical and strategic Synthetic Vision Display Concepts (SVDC) with Runway Incursion Prevention System (RIPS) alerting and display concepts, real-time terrain database integrity monitoring equipment (DIME), and Enhanced Vision Systems (EVS) and/or improved Weather Radar for real-time object detection and database integrity monitoring. A flight test evaluation was jointly conducted (in July and August 2004) by NASA Langley Research Center and an industry partner team under NASA's Aviation Safety and Security, Synthetic Vision System project. A Gulfstream G-V aircraft was flown over a 3-week period in the Reno/Tahoe International Airport (NV) local area and an additional 3-week period in the Wallops Flight Facility (VA) local area to evaluate integrated Synthetic Vision System concepts. The enabling technologies (RIPS, EVS and DIME) were integrated into the larger SVS concept design. This paper presents experimental methods and the high level results of this flight test.

  7. Flight Testing an Integrated Synthetic Vision System

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Arthur, Jarvis J., III; Bailey, Randall E.; Prinzel, Lawrence J., III

    2005-01-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications to eliminate low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance for transport aircraft. The SVS concept being developed at NASA encompasses the integration of tactical and strategic Synthetic Vision Display Concepts (SVDC) with Runway Incursion Prevention System (RIPS) alerting and display concepts, real-time terrain database integrity monitoring equipment (DIME), and Enhanced Vision Systems (EVS) and/or improved Weather Radar for real-time object detection and database integrity monitoring. A flight test evaluation was jointly conducted (in July and August 2004) by NASA Langley Research Center and an industry partner team under NASA's Aviation Safety and Security, Synthetic Vision System project. A Gulfstream GV aircraft was flown over a 3-week period in the Reno/Tahoe International Airport (NV) local area and an additional 3-week period in the Wallops Flight Facility (VA) local area to evaluate integrated Synthetic Vision System concepts. The enabling technologies (RIPS, EVS and DIME) were integrated into the larger SVS concept design. This paper presents experimental methods and the high level results of this flight test.

  8. Three-Dimensional Robotic Vision System

    NASA Technical Reports Server (NTRS)

    Nguyen, Thinh V.

    1989-01-01

    Stereoscopy and motion provide clues to outlines of objects. Digital image-processing system acts as "intelligent" automatic machine-vision system by processing views from stereoscopic television cameras into three-dimensional coordinates of moving object in view. Epipolar-line technique used to find corresponding points in stereoscopic views. Robotic vision system analyzes views from two television cameras to detect rigid three-dimensional objects and reconstruct numerically in terms of coordinates of corner points. Stereoscopy and effects of motion on two images complement each other in providing image-analyzing subsystem with clues to natures and locations of principal features.
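
    The epipolar-line technique: given the fundamental matrix F between the two views, the match for left-image pixel x must lie on the line l' = F x in the right image, so the correspondence search is one-dimensional. A minimal sketch, using the rectified-stereo F of a pure horizontal baseline for illustration:

```python
import numpy as np

def epipolar_line(F, x):
    """Line coefficients (a, b, c) with a*u' + b*v' + c = 0 constraining
    the right-image match of left-image pixel x = (u, v)."""
    return F @ np.array([x[0], x[1], 1.0])

# For a pure horizontal baseline (rectified stereo), F = [t]_x with
# t = (1, 0, 0), and every epipolar line is the same-row scanline.
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
a, b, c = epipolar_line(F, (100, 40))
print(a, b, c)   # 0.0 -1.0 40.0, i.e. the line v' = 40
```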

  9. Near real-time stereo vision system

    NASA Astrophysics Data System (ADS)

    Matthies, Larry H.; Anderson, Charles H.

    1991-12-01

    The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system is comprised of two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.
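
    A minimal numerical sketch of the pipeline this record describes, band-pass (Laplacian-style) filtering followed by windowed least-squares (SSD) matching, is given below. The filter scale, window size, and disparity range are illustrative assumptions, and np.roll is used for brevity despite its wrap-around at image borders.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def bandpass(img, sigma=1.0):
          """One band-pass level: image minus its Gaussian-blurred version."""
          img = img.astype(float)
          return img - gaussian_filter(img, sigma)

      def ssd_disparity(left, right, sigma=2.0, max_disp=16):
          """Per pixel, pick the disparity minimizing windowed SSD."""
          left, right = left.astype(float), right.astype(float)
          disp = np.zeros(left.shape, dtype=np.int32)
          best = np.full(left.shape, np.inf)
          for d in range(max_disp):
              shifted = np.roll(right, d, axis=1)                  # candidate alignment
              ssd = gaussian_filter((left - shifted) ** 2, sigma)  # soft window
              mask = ssd < best
              disp[mask], best[mask] = d, ssd[mask]
          return disp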

  10. Near real-time stereo vision system

    NASA Astrophysics Data System (ADS)

    Anderson, Charles H.; Matthies, Larry H.

    1993-01-01

    The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system is comprised of two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.

  11. Using perturbations to identify the brain circuits underlying active vision

    PubMed Central

    Wurtz, Robert H.

    2015-01-01

    The visual and oculomotor systems in the brain have been studied extensively in the primate. Together, they can be regarded as a single brain system that underlies active vision—the normal vision that begins with visual processing in the retina and extends through the brain to the generation of eye movement by the brainstem. The system is probably one of the most thoroughly studied brain systems in the primate, and it offers an ideal opportunity to evaluate the advantages and disadvantages of the series of perturbation techniques that have been used to study it. The perturbations have been critical in moving from correlations between neuronal activity and behaviour closer to a causal relation between neuronal activity and behaviour. The same perturbation techniques have also been used to tease out neuronal circuits that are related to active vision that in turn are driving behaviour. The evolution of perturbation techniques includes ablation of both cortical and subcortical targets, punctate chemical lesions, reversible inactivations, electrical stimulation, and finally the expanding optogenetic techniques. The evolution of perturbation techniques has supported progressively stronger conclusions about what neuronal circuits in the brain underlie active vision and how the circuits themselves might be organized. PMID:26240420

  12. Identifying the computational requirements of an integrated top-down-bottom-up model for overt visual attention within an active vision system.

    PubMed

    McBride, Sebastian; Huelse, Martin; Lee, Mark

    2013-01-01

    Computational visual attention systems have been constructed in order for robots and other devices to detect and locate regions of interest in their visual world. Such systems often attempt to take account of what is known of the human visual system and employ concepts, such as 'active vision', to gain various perceived advantages. However, despite the potential for gaining insights from such experiments, the computational requirements for visual attention processing are often not clearly presented from a biological perspective. This was the primary objective of this study, attained through two specific phases of investigation: 1) conceptual modeling of a top-down-bottom-up framework through critical analysis of the psychophysical and neurophysiological literature, 2) implementation and validation of the model in robotic hardware (as a representative of an active vision system). Seven computational requirements were identified: 1) transformation of retinotopic to egocentric mappings, 2) spatial memory for the purposes of medium-term inhibition of return, 3) synchronization of 'where' and 'what' information from the two visual streams, 4) convergence of top-down and bottom-up information to a centralized point of information processing, 5) a threshold function to elicit saccade action, 6) a function to represent task relevance as a ratio of excitation and inhibition, and 7) derivation of excitation and inhibition values from object-associated feature classes. The model provides further insight into the nature of data representation and transfer between brain regions associated with the vertebrate 'active' visual attention system. In particular, the model lends strong support to the functional role of the lateral intraparietal region of the brain as a primary area of information consolidation that directs putative action through the use of a 'priority map'. PMID:23437044
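
    Requirements 4 through 6 above lend themselves to a compact illustration; the toy sketch below combines a bottom-up salience map with a top-down relevance ratio and fires a saccade when the peak priority crosses a threshold. The combination rule, names, and threshold value are illustrative assumptions, not the authors' implementation.

      import numpy as np

      def priority_map(bottom_up, excitation, inhibition, eps=1e-6):
          """Requirements 4 and 6: weight salience by excitation/inhibition."""
          relevance = excitation / (inhibition + eps)
          return bottom_up * relevance

      def maybe_saccade(priority, threshold=0.8):
          """Requirement 5: threshold function eliciting a saccade target."""
          idx = np.unravel_index(np.argmax(priority), priority.shape)
          return idx if priority[idx] > threshold else None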

  13. Study of a dual mode SWIR active imaging system for direct imaging and non-line-of-sight vision

    NASA Astrophysics Data System (ADS)

    Laurenzis, Martin; Christnacher, Frank; Velten, Andreas

    2015-05-01

    The application of non-line-of-sight vision, i.e. seeing around a corner, has been demonstrated in the recent past at laboratory level with round-trip path lengths on the scale of 1 m as well as 10 m. This method uses a computational imaging approach to analyze the scattered light from objects which are hidden from the sensor's direct field of view. Recent demonstrator systems were driven at laser wavelengths (800 nm and 532 nm) which are far from the eye-safe shortwave infrared (SWIR) wavelength band, i.e. between 1.4 μm and 2 μm. Therefore, application in public or inhabited areas is difficult with respect to international laser safety conventions. In the present work, the authors evaluate the application of recent eye-safe laser sources and sensor devices for non-line-of-sight sensing and give predictions on range and resolution. Further, the realization of a dual-mode concept is studied, enabling both the direct view of a scene and the indirect view of a hidden scene. While recent laser gated-viewing sensors have high spatial resolution, their application to non-line-of-sight imaging suffers from too low a temporal resolution, due to a minimal sensor gate width of around 150 ns. On the other hand, Geiger-mode single-photon counting devices have high temporal resolution, but their spatial resolution is (until now) limited to array sizes of a few thousand sensor elements. In this publication the authors present detailed theoretical and experimental evaluations.
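
    The gating argument in this record reduces to simple arithmetic: a time-gated sensor resolves depth to roughly the speed of light times the gate width, divided by two for the round trip. The 100 ps figure below for a photon-counting detector is an illustrative assumption.

      C = 299_792_458.0  # speed of light, m/s

      def depth_resolution(gate_seconds):
          """Round-trip depth resolution of a time-gated sensor."""
          return C * gate_seconds / 2.0

      print(depth_resolution(150e-9))   # 150 ns gated viewing -> ~22.5 m
      print(depth_resolution(100e-12))  # ~100 ps photon counting -> ~1.5 cm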

  14. AN INVESTIGATION OF VISION PROBLEMS AND THE VISION CARE SYSTEM IN RURAL CHINA.

    PubMed

    Bai, Yunli; Yi, Hongmei; Zhang, Linxiu; Shi, Yaojiang; Ma, Xiaochen; Congdon, Nathan; Zhou, Zhongqiang; Boswell, Matthew; Rozelle, Scott

    2014-11-01

    This paper examines the prevalence of vision problems and the accessibility to and quality of vision care in rural China. We obtained data from 4 sources: 1) the National Rural Vision Care Survey; 2) the Private Optometrists Survey; 3) the County Hospital Eye Care Survey; and 4) the Rural School Vision Care Survey. The data from each of the surveys were collected by the authors during 2012. Thirty-three percent of the rural population surveyed self-reported vision problems. Twenty-two percent of subjects surveyed had ever had a vision exam. Among those who self-reported having vision problems, 34% did not wear eyeglasses. Fifty-four percent of those with vision problems who had eyeglasses did not have a vision exam prior to receiving glasses. However, having a vision exam did not always guarantee access to quality vision care. Four channels of vision care service were assessed. The school vision examination program did not increase the usage rate of eyeglasses. Each county hospital was staffed with three eye doctors having one year of education beyond high school, serving more than 400,000 residents. Private optometrists often had low levels of education and professional certification. In conclusion, our findings show that the vision care system in rural China is inadequate and ineffective in meeting the needs of the rural population sampled. PMID:26466433

  15. Approach to constructing reconfigurable computer vision system

    NASA Astrophysics Data System (ADS)

    Xue, Jianru; Zheng, Nanning; Wang, Xiaoling; Zhang, Yongping

    2000-10-01

    In this paper, we propose an approach to constructing a reconfigurable vision system. We found that timely and efficient execution of early tasks can significantly enhance the performance of whole computer vision tasks, so we abstract out a set of basic, computationally intensive stream operations that may be performed in parallel and embody them in a series of specific front-end processors. These processors, based on FPGAs (field-programmable gate arrays), can be reprogrammed to permit a range of different types of feature maps, such as edge detection and linking or image filtering. The front-end processors and a powerful DSP constitute a computing platform which can perform many CV tasks. Additionally, we adopt focus-of-attention technologies to reduce the I/O and computational demands by performing early vision processing only within a particular region of interest. We then implement a multi-page, dual-ported image memory interface between the image input and the computing platform (front-end processors and DSP). Early vision features are loaded into banks of dual-ported image memory arrays, which are continually updated at high speed by the raster scan of the input image or video data stream. Moreover, the computing platform has completely asynchronous, random access to the image data or any other early vision feature maps through the dual-ported memory banks. In this way, computing platform resources can be properly allocated to a region of interest and decoupled from the task of dealing with a high-speed serial raster-scan input. Finally, we choose the PCI bus as the main channel between the PC and the computing platform. Consequently, the front-end processors' control registers and the DSP's program memory are mapped into the PC's memory space, which lets the user reconfigure the system at any time. We also present test results of a computer vision application based on the system.
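
    The focus-of-attention idea in this record can be illustrated in a few lines: run an early-vision operator only inside the attended window rather than over the full frame. The Sobel operator and window coordinates below are illustrative choices, not the paper's FPGA feature maps.

      import numpy as np
      from scipy.ndimage import sobel

      def roi_edges(frame, x0, y0, x1, y1):
          """Edge magnitude computed only within the region of interest."""
          roi = frame[y0:y1, x0:x1].astype(float)
          gx, gy = sobel(roi, axis=1), sobel(roi, axis=0)
          out = np.zeros(frame.shape, dtype=float)
          out[y0:y1, x0:x1] = np.hypot(gx, gy)  # result placed back in frame
          return out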

  16. Image Control In Automatic Welding Vision System

    NASA Technical Reports Server (NTRS)

    Richardson, Richard W.

    1988-01-01

    Orientation and brightness varied to suit welding conditions. Commands from vision-system computer drive servomotors on iris and Dove prism, providing proper light level and image orientation. Optical-fiber bundle carries view of weld area as viewed along axis of welding electrode. Image processing described in companion article, "Processing Welding Images for Robot Control" (MFS-26036).

  17. The 3D laser radar vision processor system

    NASA Technical Reports Server (NTRS)

    Sebok, T. M.

    1990-01-01

    Loral Defense Systems (LDS) developed a 3D Laser Radar Vision Processor system capable of detecting, classifying, and identifying small mobile targets as well as larger fixed targets using three-dimensional laser radar imagery, for use with a robotic-type system. This processor system is designed to interface with the NASA Johnson Space Center in-house Extra Vehicular Activity (EVA) Retriever robot program and provide it with the information needed to fetch and grasp targets in a space-type scenario.

  18. Mobile robot on-board vision system

    SciTech Connect

    McClure, V.W.; Nai-Yung Chen.

    1993-06-15

    An automatic robot system is described comprising: an AGV for transporting and transferring work pieces; a control computer on board the AGV; a process machine for working on work pieces; and a flexible robot arm with a gripper comprising two gripper fingers at one end of the arm, where the robot arm and gripper are controllable by the control computer for engaging a work piece, picking it up, and setting it down and releasing it at a commanded location. Locating beacons mounted on the process machine mark the place on the machine where work pieces are picked up and set down. Vision means, including a camera fixed in the coordinate system of the gripper and attached to the robot arm near the gripper such that the space between the gripper fingers lies within its field of view, detect the locating beacons and provide the control computer with visual information on their location, from which the computer calculates the pick-up and set-down place on the process machine. This place is a nest that also serves to hold a work piece in place while it is worked on; the robot system further comprises nest beacons located in the nest and detectable by the vision means, which inform the control computer whether or not a work piece is present in the nest.

  19. Vision based flight procedure stereo display system

    NASA Astrophysics Data System (ADS)

    Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng

    2008-03-01

    A virtual reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on a Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approach area database is established through a computer 3D modeling system and GIS. The area texture is generated from the remote sensing photos and aerial photographs at various levels of detail. According to the flight approach procedure, the flight navigation information is linked to the database. The flight approach area vision can be dynamically displayed according to the designed flight procedure. The flight approach area images are rendered in two channels, one for left-eye images and the other for right-eye images. Through the polarized stereoscopic projection system, the pilots and aircrew can get a vivid 3D view of the flight destination approach area. Using this system in the pilots' preflight preparation, the aircrew can get more vivid information about the flight destination approach area. This system can improve an aviator's self-confidence before carrying out the flight mission, and accordingly flight safety is improved. The system is also useful for validating visual flight procedure designs, and it aids flight procedure design.

  20. Zoom Vision System For Robotic Welding

    NASA Technical Reports Server (NTRS)

    Gilbert, Jeffrey L.; Hudyma, Russell M.

    1990-01-01

    Rugged zoom lens subsystem proposed for use in along-the-torch vision system of robotic welder. Enables system to adapt, via simple mechanical adjustments, to gas cups of different lengths, electrodes of different protrusions, and/or different distances between end of electrode and workpiece. Unnecessary to change optical components to accommodate changes in geometry. Easy to calibrate with respect to object in view. Provides variable focus and variable magnification.

  1. Prototype Optical Correlator For Robotic Vision System

    NASA Technical Reports Server (NTRS)

    Scholl, Marija S.

    1993-01-01

    Known and unknown images fed in electronically at high speed. Optical correlator and associated electronic circuitry developed for vision system of robotic vehicle. System recognizes features of landscape by optical correlation between input image of scene viewed by video camera on robot and stored reference image. Optical configuration is Vander Lugt correlator, in which Fourier transform of scene formed in coherent light and spatially modulated by hologram of reference image to obtain correlation.
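
    A numerical analogue of the Vander Lugt arrangement described above (assumed here for illustration, not taken from the record): multiplying the scene's Fourier transform by the conjugate transform of the reference and inverting yields the cross-correlation, whose peak marks a recognized feature.

      import numpy as np

      def correlate_fourier(scene, reference):
          """Cross-correlation via the Fourier domain, as the optics do in light."""
          S = np.fft.fft2(scene)
          R = np.fft.fft2(reference, s=scene.shape)  # zero-pad reference
          corr = np.fft.ifft2(S * np.conj(R)).real
          peak = np.unravel_index(np.argmax(corr), corr.shape)
          return corr, peak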

  2. Vision enhanced navigation for unmanned systems

    NASA Astrophysics Data System (ADS)

    Wampler, Brandon Loy

    A vision-based simultaneous localization and mapping (SLAM) algorithm is evaluated for use on unmanned systems. SLAM is a technique used by a vehicle to build a map of an environment while concurrently keeping track of its location within the map, without a priori knowledge. The work in this thesis is focused on using SLAM as a navigation solution when global positioning system (GPS) service is degraded or temporarily unavailable. Previous work on unmanned systems that led to the determination that a navigation solution better than GPS alone is needed is presented first. This previous work includes control of unmanned systems, simulation, and unmanned vehicle hardware testing. The proposed SLAM algorithm follows the work originally developed by Davison et al., who dub their algorithm MonoSLAM [1-4]. A new approach using the pyramidal Lucas-Kanade feature tracking algorithm from Intel's OpenCV (open computer vision) library is presented as a means of keeping correct landmark correspondences as the vehicle moves through the scene. Though this landmark tracking method is unusable for long-term SLAM due to its inability to recognize revisited landmarks, as opposed to the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), its computational efficiency makes it a good candidate for short-term navigation between GPS position updates. Additional sensor information is then considered by fusing INS and GPS information into the SLAM filter. The SLAM system, in its vision-only and vision/IMU forms, is tested on a table top, in an open room, and finally in an outdoor environment. For the outdoor environment, a form of the SLAM algorithm that fuses vision, IMU, and GPS information is tested. The proposed SLAM algorithm, and its several forms, are implemented in C++ using an Extended Kalman Filter (EKF). Experiments utilizing a live video feed from a webcam are performed. The different forms of the filter are compared and conclusions are made on
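
    For context, a minimal sketch of the landmark-tracking step this thesis describes, using the pyramidal Lucas-Kanade tracker in a current OpenCV Python binding (the thesis used Intel's earlier OpenCV API); the file names and tracker settings below are placeholders.

      import cv2

      prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)  # placeholder frames
      curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

      # Corners to serve as candidate landmarks.
      pts = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                                    qualityLevel=0.01, minDistance=7)

      # Track them into the next frame with a 3-level image pyramid.
      nxt, status, err = cv2.calcOpticalFlowPyrLK(
          prev, curr, pts, None, winSize=(21, 21), maxLevel=3)

      kept = status.flatten() == 1
      print(f"kept {kept.sum()} of {len(pts)} landmark correspondences")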

  3. Visual Turing test for computer vision systems.

    PubMed

    Geman, Donald; Geman, Stuart; Hallonquist, Neil; Younes, Laurent

    2015-03-24

    Today, computer vision systems are tested by their accuracy in detecting and localizing instances of objects. As an alternative, and motivated by the ability of humans to provide far richer descriptions and even tell a story about an image, we construct a "visual Turing test": an operator-assisted device that produces a stochastic sequence of binary questions from a given test image. The query engine proposes a question; the operator either provides the correct answer or rejects the question as ambiguous; the engine proposes the next question ("just-in-time truthing"). The test is then administered to the computer-vision system, one question at a time. After the system's answer is recorded, the system is provided the correct answer and the next question. Parsing is trivial and deterministic; the system being tested requires no natural language processing. The query engine employs statistical constraints, learned from a training set, to produce questions with essentially unpredictable answers: the answer to a question, given the history of questions and their correct answers, is nearly equally likely to be positive or negative. In this sense, the test is only about vision. The system is designed to produce streams of questions that follow natural story lines, from the instantiation of a unique object, through an exploration of its properties, and on to its relationships with other uniquely instantiated objects. PMID:25755262
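
    The "essentially unpredictable answers" criterion admits a one-line illustration: among candidate binary questions, propose the one whose predicted probability of a positive answer, given the history, is closest to one half. The estimator below is a placeholder, not the authors' query engine.

      def propose_question(candidates, history, estimate_p_yes):
          """candidates: binary questions; estimate_p_yes(q, history) -> [0, 1]."""
          return min(candidates,
                     key=lambda q: abs(estimate_p_yes(q, history) - 0.5))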

  4. Evaluation of active vision by a car's antifog headlamps

    NASA Astrophysics Data System (ADS)

    Barun, Vladimir V.; Levitin, Konstantin M.

    1996-10-01

    A special case of civilian active vision has been investigated here, namely, a vision system based on a car's anti-fog headlamps. A method to estimate the light-engineering criteria for headlamp performance and to simulate the operation of the system through a turbid medium, such as fog, is developed on the basis of the analytical procedures of radiative transfer theory. The features of this method include the spaced light source and receiver of a driver's active vision system, the complicated azimuth-nonsymmetrical emissive pattern of the headlamps, and the fine angular dependence of the fog phase function near the backscattering direction. The final formulas are derived in analytical form, providing additional convenience and simplicity for the computations. The image contrast of a road object with arbitrary orientation, dimensions, and shape and its limiting visibility range are studied as functions of the meteorological visibility range in fog as well as of various emissive-pattern, mounting, and adjustment parameters of the headlamps. Optimization of both the light-engineering and geometrical characteristics of the headlamps is shown to be possible, offering the opportunity to enhance the visibility range and, hence, traffic safety.
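
    For orientation, the record's radiative transfer treatment reduces, in the simplest textbook case, to the Koschmieder relation: an object of inherent contrast C0 viewed at distance z through fog of meteorological visibility range V retains apparent contrast C(z) = C0 exp(-3z/V), the factor 3 coming from the 5% contrast threshold that defines V (ln 0.05 is roughly -3). The numbers below are illustrative.

      import math

      def apparent_contrast(c0, distance_m, visibility_m):
          """Koschmieder contrast attenuation through fog."""
          return c0 * math.exp(-3.0 * distance_m / visibility_m)

      print(apparent_contrast(1.0, 50.0, 200.0))  # ~0.47 at 50 m in 200 m fog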

  5. Missileborne Artificial Vision System (MAVIS)

    NASA Technical Reports Server (NTRS)

    Andes, David K.; Witham, James C.; Miles, Michael D.

    1994-01-01

    Several years ago when INTEL and China Lake designed the ETANN chip, analog VLSI appeared to be the only way to do high density neural computing. In the last five years, however, digital parallel processing chips capable of performing neural computation functions have evolved to the point of rough equality with analog chips in system level computational density. The Naval Air Warfare Center, China Lake, has developed a real-time hardware and software system designed to implement and evaluate biologically inspired retinal and cortical models. The hardware is based on the Adaptive Solutions Inc. massively parallel CNAPS system COHO boards. Each COHO board is a standard size 6U VME card featuring 256 fixed-point RISC processors running at 20 MHz in a SIMD configuration. Each COHO board has a companion board built to support a real-time VSB interface to an imaging seeker, a NTSC camera, and to other COHO boards. The system is designed to have multiple SIMD machines each performing different corticomorphic functions. The system level software has been developed which allows a high level description of corticomorphic structures to be translated into the native microcode of the CNAPS chips. Corticomorphic structures are those neural structures with a form similar to that of the retina, the lateral geniculate nucleus, or the visual cortex. This real-time hardware system is designed to be shrunk into a volume compatible with air launched tactical missiles. Initial versions of the software and hardware have been completed and are in the early stages of integration with a missile seeker.

  6. Visual Turing test for computer vision systems

    PubMed Central

    Geman, Donald; Geman, Stuart; Hallonquist, Neil; Younes, Laurent

    2015-01-01

    Today, computer vision systems are tested by their accuracy in detecting and localizing instances of objects. As an alternative, and motivated by the ability of humans to provide far richer descriptions and even tell a story about an image, we construct a “visual Turing test”: an operator-assisted device that produces a stochastic sequence of binary questions from a given test image. The query engine proposes a question; the operator either provides the correct answer or rejects the question as ambiguous; the engine proposes the next question (“just-in-time truthing”). The test is then administered to the computer-vision system, one question at a time. After the system’s answer is recorded, the system is provided the correct answer and the next question. Parsing is trivial and deterministic; the system being tested requires no natural language processing. The query engine employs statistical constraints, learned from a training set, to produce questions with essentially unpredictable answers—the answer to a question, given the history of questions and their correct answers, is nearly equally likely to be positive or negative. In this sense, the test is only about vision. The system is designed to produce streams of questions that follow natural story lines, from the instantiation of a unique object, through an exploration of its properties, and on to its relationships with other uniquely instantiated objects. PMID:25755262

  7. Stereoscopic Vision System For Robotic Vehicle

    NASA Technical Reports Server (NTRS)

    Matthies, Larry H.; Anderson, Charles H.

    1993-01-01

    Distances estimated from images by cross-correlation. Two-camera stereoscopic vision system with onboard processing of image data developed for use in guiding robotic vehicle semiautonomously. Combination of semiautonomous guidance and teleoperation useful in remote and/or hazardous operations, including clean-up of toxic wastes, exploration of dangerous terrain on Earth and other planets, and delivery of materials in factories where unexpected hazards or obstacles can arise.

  8. Three-dimensional motion estimation using genetic algorithms from image sequence in an active stereo vision system

    NASA Astrophysics Data System (ADS)

    Dipanda, Albert; Ajot, Jerome; Woo, Sanghyuk

    2003-06-01

    This paper proposes a method for estimating 3D rigid motion parameters from an image sequence of a moving object. The 3D surface measurement is achieved using an active stereovision system composed of a camera and a light projector, which illuminates objects to be analyzed by a pyramid-shaped laser beam. By associating the laser rays and the spots in the 2D image, the 3D points corresponding to these spots are reconstructed. Each image of the sequence provides a set of 3D points, which is modeled by a B-spline surface. Therefore, estimating the motion between two images of the sequence boils down to matching two B-spline surfaces. We treat the matching as an optimization problem and find the optimal solution using Genetic Algorithms. A chromosome is encoded by concatenating six binary-coded parameters: the three angles of rotation and the x-axis, y-axis and z-axis translations. We have defined an original fitness function to calculate the similarity measure between two surfaces. The matching process is performed iteratively: the number of points to be matched grows as the process advances and results are refined until convergence. Experimental results with a real image sequence are presented to show the effectiveness of the method.
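
    A compact sketch of the search this record describes is given below. The paper encodes the six motion parameters (three rotation angles, three translations) in a binary chromosome and matches B-spline surfaces; for brevity this sketch uses a real-valued chromosome and assumes matched 3D point pairs, and all GA settings are illustrative.

      import numpy as np

      def rot(rx, ry, rz):
          """Rotation matrix from three Euler angles (Z * Y * X order)."""
          cx, sx = np.cos(rx), np.sin(rx)
          cy, sy = np.cos(ry), np.sin(ry)
          cz, sz = np.cos(rz), np.sin(rz)
          Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
          Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
          Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
          return Rz @ Ry @ Rx

      def fitness(chrom, src, dst):
          """Negative mean distance between transformed source and target."""
          R, t = rot(*chrom[:3]), chrom[3:]
          return -np.mean(np.linalg.norm(src @ R.T + t - dst, axis=1))

      def estimate_motion(src, dst, pop=200, gens=100, sigma=0.05, seed=0):
          rng = np.random.default_rng(seed)
          P = rng.uniform(-1.0, 1.0, (pop, 6))       # initial population
          for _ in range(gens):
              f = np.array([fitness(c, src, dst) for c in P])
              elite = P[np.argsort(f)[-pop // 4:]]   # selection
              pairs = elite[rng.integers(len(elite), size=(pop, 2))]
              P = pairs.mean(axis=1)                 # blend crossover
              P += rng.normal(0.0, sigma, P.shape)   # mutation
          f = np.array([fitness(c, src, dst) for c in P])
          return P[np.argmax(f)]                     # best six parameters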

  9. Vision Drives Correlated Activity without Patterned Spontaneous Activity in Developing Xenopus Retina

    PubMed Central

    Demas, James A.; Payne, Hannah; Cline, Hollis T.

    2011-01-01

    Developing amphibians need vision to avoid predators and locate food before visual system circuits fully mature. Xenopus tadpoles can respond to visual stimuli as soon as retinal ganglion cells (RGCs) innervate the brain; however, in mammals, chicks, and turtles, RGCs reach their central targets many days, or even weeks, before their retinas are capable of vision. In the absence of vision, activity-dependent refinement in these amniote species is mediated by waves of spontaneous activity that periodically spread across the retina, correlating the firing of action potentials in neighboring RGCs. Theory suggests that retinorecipient neurons in the brain use patterned RGC activity to sharpen the retinotopy first established by genetic cues. We find that in both wild-type and albino Xenopus tadpoles, RGCs are spontaneously active at all stages of tadpole development studied, but their population activity never coalesces into waves. Even at the earliest stages recorded, visual stimulation dominates over spontaneous activity and can generate patterns of RGC activity similar to the locally correlated spontaneous activity observed in amniotes. In addition, we show that blocking AMPA- and NMDA-type glutamate receptors significantly decreases spontaneous activity in young Xenopus retina, but that blocking GABAA receptors does not. Our findings indicate that vision drives the correlated activity required for topographic map formation. They further suggest that developing retinal circuits in the two major subdivisions of tetrapods, amphibians and amniotes, evolved different strategies to supply appropriately patterned RGC activity to drive visual circuit refinement. PMID:21312343

  10. 3D vision upgrade kit for the TALON robot system

    NASA Astrophysics Data System (ADS)

    Bodenhamer, Andrew; Pettijohn, Bradley; Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Chenault, David; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Kingston, David; Newell, Scott

    2010-02-01

    In September 2009 the Fort Leonard Wood Field Element of the US Army Research Laboratory - Human Research and Engineering Directorate, in conjunction with Polaris Sensor Technologies and Concurrent Technologies Corporation, evaluated the objective performance benefits of Polaris' 3D vision upgrade kit for the TALON small unmanned ground vehicle (SUGV). This upgrade kit is a field-upgradable set of two stereo-cameras and a flat panel display, using only standard hardware, data and electrical connections existing on the TALON robot. Using both the 3D vision system and a standard 2D camera and display, ten active-duty Army Soldiers completed seven scenarios designed to be representative of missions performed by military SUGV operators. Mission time savings (6.5% to 32%) were found for six of the seven scenarios when using the 3D vision system. Operators were not only able to complete tasks quicker but, for six of seven scenarios, made fewer mistakes in their task execution. Subjective Soldier feedback was overwhelmingly in support of pursuing 3D vision systems, such as the one evaluated, for fielding to combat units.

  11. Kiwi Forego Vision in the Guidance of Their Nocturnal Activities

    PubMed Central

    Martin, Graham R.; Wilson, Kerry-Jayne; Martin Wild, J.; Parsons, Stuart; Fabiana Kubke, M.; Corfield, Jeremy

    2007-01-01

    Background In vision, there is a trade-off between sensitivity and resolution, and any eye which maximises information gain at low light levels needs to be large. This imposes exacting constraints upon vision in nocturnal flying birds. Eyes are essentially heavy, fluid-filled chambers, and in flying birds their increased size is countered by selection for both reduced body mass and the distribution of mass towards the body core. Freed from these mass constraints, it would be predicted that in flightless birds nocturnality should favour the evolution of large eyes and reliance upon visual cues for the guidance of activity. Methodology/Principal Findings We show that in Kiwi (Apterygidae), flightlessness and nocturnality have, in fact, resulted in the opposite outcome. Kiwi show minimal reliance upon vision indicated by eye structure, visual field topography, and brain structures, and increased reliance upon tactile and olfactory information. Conclusions/Significance This lack of reliance upon vision and increased reliance upon tactile and olfactory information in Kiwi is markedly similar to the situation in nocturnal mammals that exploit the forest floor. That Kiwi and mammals evolved to exploit these habitats quite independently provides evidence for convergent evolution in their sensory capacities that are tuned to a common set of perceptual challenges found in forest floor habitats at night and which cannot be met by the vertebrate visual system. We propose that the Kiwi visual system has undergone adaptive regressive evolution driven by the trade-off between the relatively low rate of gain of visual information that is possible at low light levels, and the metabolic costs of extracting that information. PMID:17332846

  12. 360 degree vision system: opportunities in transportation

    NASA Astrophysics Data System (ADS)

    Thibault, Simon

    2007-09-01

    Panoramic technologies are experiencing new and exciting opportunities in the transportation industries. The advantages of panoramic imagers are numerous: increased area coverage with fewer cameras, imaging of multiple targets simultaneously, instantaneous full-horizon detection, easier integration of various applications on the same imager, and others. This paper reports our work on panomorph optics and potential usage in transportation applications. The novel panomorph lens is a new type of high-resolution panoramic imager well suited for the transportation industries. The panomorph lens uses optimization techniques to improve the performance of a customized optical system for specific applications. By adding a custom angle-to-pixel relation at the optical design stage, the optical system provides ideal image coverage which is designed to reduce and optimize the processing. The optics can be customized for the visible, near-infrared (NIR) or infrared (IR) wavebands. The panomorph lens is designed to optimize the cost per pixel, which is particularly important in the IR. We discuss the use of the 360 vision system, which can enhance on-board collision avoidance systems, intelligent cruise controls, and parking assistance. 360 panoramic vision systems might enable safer highways and a significant reduction in casualties.

  13. Ball stud inspection system using machine vision.

    PubMed

    Shin, Dongik; Han, Changsoo; Moon, Young Shik

    2002-01-01

    In this paper, a vision-based inspection system that measures the dimensions of a ball stud is designed and implemented. The system acquires silhouetted images by backlighting and extracts the outlines of the nearly dichotomized images with subpixel accuracy. The sets of boundary data are modeled with reasonable geometric primitives and the parameters of the models are estimated in a manner that minimizes error. Jig-fixtures and servo systems for the inspection are also contrived. The system rotates an inspected object to recognize objects in space, not just in a plane. The system moves the object vertically so that it may take several pictures of different parts of the object, improving measurement resolution. The performance of the system is evaluated by measurement of the dimensions of a standard ball, a standard cylinder, and a ball stud. PMID:12014800
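
    One of the "reasonable geometric primitives" mentioned above can be fitted in closed form; the sketch below is an algebraic least-squares circle fit (Kasa method) to subpixel boundary points, offered as an assumed illustration rather than the paper's estimator.

      import numpy as np

      def fit_circle(x, y):
          """Fit x^2 + y^2 + a*x + b*y + c = 0 in the least-squares sense."""
          A = np.column_stack([x, y, np.ones_like(x)])
          rhs = -(x ** 2 + y ** 2)
          (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
          cx, cy = -a / 2.0, -b / 2.0
          return cx, cy, np.sqrt(cx ** 2 + cy ** 2 - c)

      # Noisy points on a circle of radius 5 centered at (2, 3)
      t = np.linspace(0.0, 2.0 * np.pi, 100)
      x = 2.0 + 5.0 * np.cos(t) + np.random.normal(0, 0.01, t.size)
      y = 3.0 + 5.0 * np.sin(t) + np.random.normal(0, 0.01, t.size)
      print(fit_circle(x, y))  # ~ (2.0, 3.0, 5.0)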

  14. 75 FR 60478 - In the Matter of Certain Machine Vision Software, Machine Vision Systems, and Products Containing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-30

    ... Corporation of Mountain View, California (collectively "complainants"). 74 FR 34589-90 (July 16, 2009). The... COMMISSION In the Matter of Certain Machine Vision Software, Machine Vision Systems, and Products Containing... importation of certain machine vision software, machine vision systems, or products containing same by...

  15. Synthetic Vision Systems - Operational Considerations Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.

    2007-01-01

    Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents/accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.

  16. Synthetic vision systems: operational considerations simulation experiment

    NASA Astrophysics Data System (ADS)

    Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.

    2007-04-01

    Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents / accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.

  17. Real-time Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Rahman, Zia-Ur; Jobson, Daniel J.; Woodell, Glenn A.; Harrah, Steven D.

    2005-01-01

    Flying in poor visibility conditions, such as rain, snow, fog or haze, is inherently dangerous. However these conditions can occur at nearly any location, so inevitably pilots must successfully navigate through them. At NASA Langley Research Center (LaRC), under support of the Aviation Safety and Security Program Office and the Systems Engineering Directorate, we are developing an Enhanced Vision System (EVS) that combines image enhancement and synthetic vision elements to assist pilots flying through adverse weather conditions. This system uses a combination of forward-looking infrared and visible sensors for data acquisition. A core function of the system is to enhance and fuse the sensor data in order to increase the information content and quality of the captured imagery. These operations must be performed in real-time for the pilot to use while flying. For image enhancement, we are using the LaRC patented Retinex algorithm since it performs exceptionally well for improving low-contrast range imagery typically seen during poor visibility conditions. In general, real-time operation of the Retinex requires specialized hardware. To date, we have successfully implemented a single-sensor real-time version of the Retinex on several different Digital Signal Processor (DSP) platforms. In this paper we give an overview of the EVS and its performance requirements for real-time enhancement and fusion and we discuss our current real-time Retinex implementations on DSPs.
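
    For context, a generic single-scale Retinex is sketched below: the log of the image minus the log of a large-scale Gaussian estimate of the illumination. This is an assumed illustration of the general idea; it does not reproduce LaRC's patented multiscale algorithm or its DSP implementation.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def single_scale_retinex(img, sigma=80.0, eps=1.0):
          """log(image) - log(smoothed illumination estimate)."""
          img = img.astype(float) + eps          # avoid log(0)
          illumination = gaussian_filter(img, sigma)
          return np.log(img) - np.log(illumination)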

  18. Real-time enhanced vision system

    NASA Astrophysics Data System (ADS)

    Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.; Harrah, Steven D.

    2005-05-01

    Flying in poor visibility conditions, such as rain, snow, fog or haze, is inherently dangerous. However these conditions can occur at nearly any location, so inevitably pilots must successfully navigate through them. At NASA Langley Research Center (LaRC), under support of the Aviation Safety and Security Program Office and the Systems Engineering Directorate, we are developing an Enhanced Vision System (EVS) that combines image enhancement and synthetic vision elements to assist pilots flying through adverse weather conditions. This system uses a combination of forward-looking infrared and visible sensors for data acquisition. A core function of the system is to enhance and fuse the sensor data in order to increase the information content and quality of the captured imagery. These operations must be performed in real-time for the pilot to use while flying. For image enhancement, we are using the LaRC patented Retinex algorithm since it performs exceptionally well for improving low-contrast range imagery typically seen during poor visibility conditions. In general, real-time operation of the Retinex requires specialized hardware. To date, we have successfully implemented a single-sensor real-time version of the Retinex on several different Digital Signal Processor (DSP) platforms. In this paper we give an overview of the EVS and its performance requirements for real-time enhancement and fusion and we discuss our current real-time Retinex implementations on DSPs.

  19. Enhanced vision system for laparoscopic surgery.

    PubMed

    Tamadazte, Brahim; Fiard, Gaelle; Long, Jean-Alexandre; Cinquin, Philippe; Voros, Sandrine

    2013-01-01

    Laparoscopic surgery offers benefits to the patients but poses new challenges to the surgeons, including a limited field of view. In this paper, we present an innovative vision system that can be combined with a traditional laparoscope, and provides the surgeon with a global view of the abdominal cavity, bringing him or her closer to open surgery conditions. We present our first experiments performed on a testbench mimicking a laparoscopic setup: they demonstrate an important time gain in performing a complex task consisting of bringing a thread into the field of view of the laparoscope. PMID:24111032

  20. Knowledge-based machine vision systems for space station automation

    NASA Technical Reports Server (NTRS)

    Ranganath, Heggere S.; Chipman, Laure J.

    1989-01-01

    Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.

  1. Airborne Use of Night Vision Systems

    NASA Astrophysics Data System (ADS)

    Mepham, S.

    1990-04-01

    The Mission Management Department of the Royal Aerospace Establishment has won a Queen's Award for Technology, jointly with GEC Sensors, in recognition of innovation and success in the development and application of night vision technology for fixed-wing aircraft. This work has been carried out to satisfy the operational needs of the Royal Air Force, which are seen to be: operations in the NATO Central Region; a night as well as a day capability; low-level, high-speed penetration; attack of battlefield targets, especially groups of tanks; and meeting these objectives at minimum cost. The most effective way to penetrate enemy defences is at low level, and survivability would be greatly enhanced by a first-pass attack. It is therefore most important that the pilot not only be able to fly at low level to the target but also be able to detect it in sufficient time to complete a successful attack. An analysis of the average operating conditions in Central Europe during winter clearly shows that high-speed, low-level attacks can only be made for about 20 per cent of the 24 hours. Extending this into good night conditions raises the figure to 60 per cent. Whilst it is true that this is for winter conditions and in summer the situation is better, the overall advantage to be gained is clear. If our aircraft do not have this capability, the potential for the enemy to advance his troops and armour without hindrance for considerable periods is all too obvious. There are several solutions to providing such a capability. The one chosen for Tornado GR1 is to use Terrain Following Radar (TFR). This system provides a complete 24-hour capability, but it has two main disadvantages: it is an active system, which means it can be jammed or homed in on, and it is chiefly useful in attacking pre-planned targets. Second, it is an expensive system, which precludes fitting it to more than a small number of aircraft.

  2. Part identification in robotic assembly using vision system

    NASA Astrophysics Data System (ADS)

    Balabantaray, Bunil Kumar; Biswal, Bibhuti Bhusan

    2013-12-01

    A machine vision system plays an important role in making a robotic assembly system autonomous. Identification of the correct part is an important task which needs to be done carefully by the vision system to feed the robot correct information for further processing. This process consists of many sub-processes wherein image capture, digitizing, and enhancement account for reconstructing the part for subsequent operations. Interest point detection in the grabbed image therefore plays an important role in the entire image processing activity. Thus the correct tool for the process must be chosen with respect to the given environment. In this paper an analysis of three major corner detection algorithms is performed on the basis of their accuracy, speed, and robustness to noise. The work is performed in Matlab R2012a. An attempt has been made to find the best algorithm for the problem.
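
    A minimal sketch of the kind of comparison this record describes, using three common corner detectors from OpenCV; the record does not name its three algorithms, so Harris, Shi-Tomasi, and FAST below are illustrative choices, and the image path and thresholds are placeholders.

      import cv2
      import numpy as np

      gray = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)  # placeholder image

      # Harris: corner response map, thresholded.
      harris = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
      n_harris = int((harris > 0.01 * harris.max()).sum())

      # Shi-Tomasi ("good features to track").
      shi = cv2.goodFeaturesToTrack(gray, maxCorners=500,
                                    qualityLevel=0.01, minDistance=5)

      # FAST: segment-test detector, popular where speed matters.
      kp = cv2.FastFeatureDetector_create(threshold=25).detect(gray, None)

      print(n_harris, 0 if shi is None else len(shi), len(kp))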

  3. Online updating of synthetic vision system databases

    NASA Astrophysics Data System (ADS)

    Simard, Philippe

    In aviation, synthetic vision systems render artificial views of the world (using a database of the world and pose information) to support navigation and situational awareness in low visibility conditions. The database needs to be periodically updated to ensure its consistency with reality, since it reflects at best a nominal state of the environment. This thesis presents an approach for automatically updating the geometry of synthetic vision system databases and 3D models in general. The approach is novel in that it profits from all of the available prior information: intrinsic/extrinsic camera parameters and geometry of the world. Geometric inconsistencies (or anomalies) between the model and reality are quickly localized; this localization serves to significantly reduce the complexity of the updating problem. Given a geometric model of the world, a sample image and known camera motion, a predicted image can be generated based on a differential approach. Model locations where predictions do not match observations are assumed to be incorrect. The updating is then cast as an optimization problem where differences between observations and predictions are minimized. To cope with system uncertainties, a mechanism that automatically infers their impact on prediction validity is derived. This method not only renders the anomaly detection process robust but also prevents the overfitting of the data. The updating framework is examined at first using synthetic data and further tested in both a laboratory environment and using a helicopter in flight. Experimental results show that the algorithm is effective and robust across different operating conditions.
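
    Schematically, the updating step this thesis describes can be reduced to a residual minimization; predict_image and the single scalar geometry correction below are placeholders for the thesis's full multi-parameter formulation.

      import numpy as np
      from scipy.optimize import minimize_scalar

      def update_geometry(observed, predict_image, bounds=(-10.0, 10.0)):
          """Choose the geometry correction minimizing the prediction residual."""
          def residual(delta):
              return np.sum((observed - predict_image(delta)) ** 2)
          return minimize_scalar(residual, bounds=bounds, method="bounded").x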

  4. DLP™-based dichoptic vision test system

    NASA Astrophysics Data System (ADS)

    Woods, Russell L.; Apfelbaum, Henry L.; Peli, Eli

    2010-01-01

    It can be useful to present a different image to each of the two eyes while they cooperatively view the world. Such dichoptic presentation can occur in investigations of stereoscopic and binocular vision (e.g., strabismus, amblyopia) and vision rehabilitation in clinical and research settings. Various techniques have been used to construct dichoptic displays. The most common and most flexible modern technique uses liquid-crystal (LC) shutters. When used in combination with cathode ray tube (CRT) displays, there is often leakage of light from the image intended for one eye into the view of the other eye. Such interocular crosstalk is 14% even in our state of the art CRT-based dichoptic system. While such crosstalk may have minimal impact on stereo movie or video game experiences, it can defeat clinical and research investigations. We use micromirror digital light processing (DLP™) technology to create a novel dichoptic visual display system with substantially lower interocular crosstalk (0.3%; remaining crosstalk comes from the LC shutters). The DLP system normally uses a color wheel to display color images. Our approach is to disable the color wheel, synchronize the display directly to the computer's sync signal, allocate each of the three (former) color presentations to one or both eyes, and open and close the LC shutters in synchrony with those color events.

  5. DLP™-based dichoptic vision test system

    PubMed Central

    Woods, Russell L.; Apfelbaum, Henry L.; Peli, Eli

    2010-01-01

    It can be useful to present a different image to each of the two eyes while they cooperatively view the world. Such dichoptic presentation can occur in investigations of stereoscopic and binocular vision (e.g., strabismus, amblyopia) and vision rehabilitation in clinical and research settings. Various techniques have been used to construct dichoptic displays. The most common and most flexible modern technique uses liquid-crystal (LC) shutters. When used in combination with cathode ray tube (CRT) displays, there is often leakage of light from the image intended for one eye into the view of the other eye. Such interocular crosstalk is 14% even in our state of the art CRT-based dichoptic system. While such crosstalk may have minimal impact on stereo movie or video game experiences, it can defeat clinical and research investigations. We use micromirror digital light processing (DLP™) technology to create a novel dichoptic visual display system with substantially lower interocular crosstalk (0.3%; remaining crosstalk comes from the LC shutters). The DLP system normally uses a color wheel to display color images. Our approach is to disable the color wheel, synchronize the display directly to the computer’s sync signal, allocate each of the three (former) color presentations to one or both eyes, and open and close the LC shutters in synchrony with those color events. PMID:20210457

  6. Forward Obstacle Detection System by Stereo Vision

    NASA Astrophysics Data System (ADS)

    Iwata, Hiroaki; Saneyoshi, Keiji

    Forward obstacle detection is needed to prevent car accidents. We have developed a forward obstacle detection system that achieves good detectability and distance accuracy using only stereo vision. The system runs in real time on a stereo processing system based on a Field-Programmable Gate Array (FPGA). Road surfaces are detected, and the drivable space can be delimited. A smoothing filter is also used. Owing to these, the distance accuracy is improved. In the experiments, this system could detect forward obstacles 100 m away. Its distance error up to 80 m was less than 1.5 m. It could immediately detect cutting-in objects.
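
    The quoted accuracy figures follow the usual stereo relation: depth is Z = f*B/d, so a fixed disparity error grows quadratically with range. The focal length, baseline, and quarter-pixel disparity error below are illustrative assumptions, not the parameters of the system described.

      def depth(f_px, baseline_m, disparity_px):
          """Rectified-stereo depth from disparity."""
          return f_px * baseline_m / disparity_px

      def depth_error(f_px, baseline_m, z_m, delta_d_px=0.25):
          """First-order depth error for a given disparity error."""
          return z_m ** 2 * delta_d_px / (f_px * baseline_m)

      print(depth_error(f_px=1400.0, baseline_m=0.5, z_m=80.0))  # ~2.3 m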

  7. Robot vision system programmed in Prolog

    NASA Astrophysics Data System (ADS)

    Batchelor, Bruce G.; Hack, Ralf

    1995-10-01

    This is the latest in a series of publications which develop the theme of programming a machine vision system using the artificial intelligence language Prolog. The article states the long-term objective of the research program of which this work forms part. Many but not yet all of the goals laid out in this plan have already been achieved in an integrated system, which uses a multi-layer control hierarchy. The purpose of the present paper is to demonstrate that a system based upon a Prolog controller is capable of making complex decisions and operating a standard robot. The authors chose, as a vehicle for this exercise, the task of playing dominoes against a human opponent. This game was selected for this demonstration since it models a range of industrial assembly tasks, where parts are to be mated together. (For example, a 'daisy chain' of electronic equipment and the interconnecting cables/adapters may be likened to a chain of dominoes.)

  8. Fiber optic coherent laser radar 3d vision system

    SciTech Connect

    Sebastian, R.L.; Clark, R.B.; Simonson, D.L.

    1994-12-31

    Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.
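
    The ranging principle behind the system above is the standard FMCW relation: with a linear chirp of bandwidth B over period T, a measured beat frequency maps to range as R = c*f_beat*T/(2*B). The numbers below are illustrative.

      C = 299_792_458.0  # speed of light, m/s

      def fmcw_range(f_beat_hz, bandwidth_hz, period_s):
          """Range from beat frequency for a linear FMCW chirp."""
          return C * f_beat_hz * period_s / (2.0 * bandwidth_hz)

      print(fmcw_range(2.0e6, 1.0e12, 1.0e-3))  # ~0.3 m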

  9. Overview of NETL In-House Vision 21 Activities

    SciTech Connect

    Wildman, David J.

    2001-11-06

    The Office of Science and Technology at the National Energy Technology Laboratory conducts research in support of the Department of Energy's Fossil Energy Program. The research is funded through a variety of programs, with each program focusing on a particular aspect of fossil energy. Since the Vision 21 Concept is based on the Advanced Power System Programs (Integrated Gasification Combined Cycle, Pressurized Fluid Bed, HIPPS, Advanced Turbine Systems, and Fuel Cells), it is not surprising that much of the research supports the Vision 21 Concept. The research is classified and presented according to "enabling technologies" and "supporting technologies" as defined by the Vision 21 Program. Enabling technologies include fuel-flexible gasification, fuel-flexible combustion, hydrogen separation from fuel gas, advanced combustion systems, circulating fluid bed technology, and fuel cells. Supporting technologies include development of advanced materials, computer simulations, computational fluid dynamics modeling, and advanced environmental control. An overview of Vision 21 related research is described, emphasizing recent accomplishments and capabilities.

  10. Conducting IPN actuators for biomimetic vision system

    NASA Astrophysics Data System (ADS)

    Festin, Nicolas; Plesse, Cedric; Chevrot, Claude; Teyssié, Dominique; Pirim, Patrick; Vidal, Frederic

    2011-04-01

    In recent years, many studies on electroactive polymer (EAP) actuators have been reported. One promising technology is the elaboration of electronic conducting polymer based actuators with an Interpenetrating Polymer Network (IPN) architecture. Their many advantageous properties, such as low working voltage, light weight, and long lifetime (several million cycles), make them very attractive for various applications including robotics. Our laboratory recently synthesized new conducting IPN actuators based on high molecular weight nitrile butadiene rubber, a poly(ethylene oxide) derivative, and poly(3,4-ethylenedioxythiophene). The presence of the elastomer greatly improves actuator performance, such as mechanical resistance and output force. In this article we present the IPN and actuator synthesis, characterization, and design allowing their integration in a biomimetic vision system.

  11. Vision-based registration for augmented reality system using monocular and binocular vision

    NASA Astrophysics Data System (ADS)

    Vallerand, Steve; Kanbara, Masayuki; Yokoya, Naokazu

    2003-05-01

    In vision-based augmented reality systems, the relation between the real and virtual worlds needs to be estimated to perform the registration of virtual objects. This paper proposes a vision-based registration method for video see-through augmented reality systems using binocular cameras, which increases the quality of the registration performed from three points of a known marker. The originality of this work is the use of both monocular and stereoscopic vision-based techniques to complete the registration. A method that corrects the 2D positions of the marker points in the images is also proposed; the correction improves the registration stability and accuracy of the system. The stability of the registration obtained with the proposed method, with and without the correction, is compared to that of standard stereoscopic registration.
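
    As a hedged illustration of the stereoscopic half of such a registration pipeline, the sketch below triangulates one marker point from a calibrated camera pair with OpenCV; the intrinsics, baseline, and pixel coordinates are invented for the example, not taken from the paper.

        import cv2
        import numpy as np

        # Invented calibration: left camera at the origin, right camera shifted
        # by a 0.1 m baseline; K is a generic pinhole intrinsic matrix.
        K = np.array([[800., 0., 320.],
                      [0., 800., 240.],
                      [0., 0., 1.]])
        P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P_right = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.], [0.]])])

        # Matched 2D positions of one marker point in each image (pixels).
        pt_left = np.array([[340.], [250.]])
        pt_right = np.array([[300.], [250.]])

        X_h = cv2.triangulatePoints(P_left, P_right, pt_left, pt_right)
        print((X_h[:3] / X_h[3]).ravel())   # 3D marker position in metres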

  12. Hi-Vision telecine system using pickup tube

    NASA Astrophysics Data System (ADS)

    Iijima, Goro

    1992-08-01

    Hi-Vision broadcasting, offering far more lifelike pictures than those produced by existing television broadcasting systems, has enormous potential in both industrial and commercial fields. The dissemination of the Hi-Vision system will enable vivid, movie theater quality pictures to be readily enjoyed in homes in the near future. To convert motion film pictures into Hi-Vision signals, a telecine system is needed. The Hi-Vision telecine systems currently under development are the "laser telecine," "flying-spot telecine," and "Saticon telecine" systems. This paper provides an overview of the pickup tube type Hi-Vision telecine system (referred to herein as the Saticon telecine system) developed and marketed by Ikegami Tsushinki Co., Ltd.

  13. Flight Test Comparison Between Enhanced Vision (FLIR) and Synthetic Vision Systems

    NASA Technical Reports Server (NTRS)

    Arthur, Jarvis J., III; Kramer, Lynda J.; Bailey, Randall E.

    2005-01-01

    Limited visibility and reduced situational awareness have been cited as predominant causal factors for both Controlled Flight Into Terrain (CFIT) and runway incursion accidents. NASA's Synthetic Vision Systems (SVS) project is developing practical application technologies with the goal of eliminating low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance. A flight test evaluation was conducted in the summer of 2004 by NASA Langley Research Center under NASA's Aviation Safety and Security, Synthetic Vision System - Commercial and Business program. A Gulfstream G-V aircraft, modified and operated under NASA contract by the Gulfstream Aerospace Corporation, was flown over a 3-week period at the Reno/Tahoe International Airport and an additional 3-week period at the NASA Wallops Flight Facility to evaluate integrated Synthetic Vision System concepts. Flight testing was conducted to evaluate the performance, usability, and acceptance of an integrated synthetic vision concept which included advanced Synthetic Vision display concepts for a transport aircraft flight deck, a Runway Incursion Prevention System, an Enhanced Vision System (EVS), and real-time Database Integrity Monitoring Equipment. This paper focuses on comparing qualitative and subjective results between EVS and SVS display concepts.

  14. Technological process supervising using vision systems cooperating with the LabVIEW vision builder

    NASA Astrophysics Data System (ADS)

    Hryniewicz, P.; Banaś, W.; Gwiazda, A.; Foit, K.; Sękala, A.; Kost, G.

    2015-11-01

    One of the most important tasks in the production process is to supervise its proper functioning. Lack of required supervision over the production process can lead to incorrect manufacturing of the final element, to production line downtime, and hence to financial losses; the worst outcome is damage to the equipment involved in the manufacturing process. Engineers supervising the correctness of the production flow use a great range of sensors to support the supervision of a manufactured element. Vision systems are one such family of sensors. In recent years, thanks to the accelerated development of electronics and easier access to electronic products at attractive prices, they have become a cheap and universal type of sensor. These sensors detect practically all objects, regardless of their shape or even their state of matter; the only problems arise with transparent or mirrored objects viewed from the wrong angle. By integrating the vision system with LabVIEW Vision and the LabVIEW Vision Builder, it is possible to determine not only the position of a given element but also its orientation relative to any point in the analyzed space. The paper presents an example of automated inspection of the manufacturing process in a production workcell using a vision supervising system. The aim of the work is to elaborate a vision system that could integrate different applications and devices used in different production systems to control the manufacturing process.

  15. Flight test comparison between enhanced vision (FLIR) and synthetic vision systems

    NASA Astrophysics Data System (ADS)

    Arthur, Jarvis J., III; Kramer, Lynda J.; Bailey, Randall E.

    2005-05-01

    Limited visibility and reduced situational awareness have been cited as predominant causal factors for both Controlled Flight Into Terrain (CFIT) and runway incursion accidents. NASA's Synthetic Vision Systems (SVS) project is developing practical application technologies with the goal of eliminating low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance. A flight test evaluation was conducted in the summer of 2004 by NASA Langley Research Center under NASA's Aviation Safety and Security, Synthetic Vision System - Commercial and Business program. A Gulfstream G-V aircraft, modified and operated under NASA contract by the Gulfstream Aerospace Corporation, was flown over a 3-week period at the Reno/Tahoe International Airport and an additional 3-week period at the NASA Wallops Flight Facility to evaluate integrated Synthetic Vision System concepts. Flight testing was conducted to evaluate the performance, usability, and acceptance of an integrated synthetic vision concept which included advanced Synthetic Vision display concepts for a transport aircraft flight deck, a Runway Incursion Prevention System, an Enhanced Vision System (EVS), and real-time Database Integrity Monitoring Equipment. This paper focuses on comparing qualitative and subjective results between EVS and SVS display concepts.

  16. Neural Networks for Computer Vision: A Framework for Specifications of a General Purpose Vision System

    NASA Astrophysics Data System (ADS)

    Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.

    1989-03-01

    The development of autonomous land vehicles (ALV) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence, the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real-time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields including neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system and, using a neurally based computing substrate, it can complete all necessary visual tasks in real-time.

  17. Nuclear bimodal new vision solar system missions

    SciTech Connect

    Mondt, J.F.; Zubrin, R.M.

    1996-03-01

    This paper presents an analysis of the potential mission capability using space reactor bimodal systems for planetary missions. Missions of interest include the main belt asteroids, Jupiter, Saturn, Neptune, and Pluto. The space reactor bimodal system, defined by an Air Force study for Earth orbital missions, provides 10 kWe power, 1000 N thrust, 850 s Isp, with a 1500 kg system mass. Trajectories to the planetary destinations were examined and optimal direct and gravity assisted trajectories were selected. A conceptual design for a spacecraft using the space reactor bimodal system for propulsion and power, capable of performing the missions of interest, is defined. End-to-end mission conceptual designs for bimodal orbiter missions to Jupiter and Saturn are described. All missions considered use the Delta 3 class or Atlas 2AS launch vehicles. The space reactor bimodal power and propulsion system offers both new vision "constellation" type missions, in which the space reactor bimodal spacecraft acts as a carrier and communication spacecraft for a fleet of microspacecraft deployed at different scientific targets, and conventional missions with only a space reactor bimodal spacecraft and its science payload. © 1996 American Institute of Physics.

  18. Intelligent Computer Vision System for Automated Classification

    SciTech Connect

    Jordanov, Ivan; Georgieva, Antoniya

    2010-05-21

    In this paper we investigate an Intelligent Computer Vision System applied to the recognition and classification of commercially available cork tiles. The system is capable of acquiring and processing gray images using several feature generation and analysis techniques. Its functionality includes image acquisition, feature extraction and preprocessing, and feature classification with neural networks (NN). We also discuss system test and validation results from the recognition and classification tasks. The system investigation also includes statistical feature processing (feature count and dimensionality reduction techniques) and classifier design (NN architecture, target coding, learning complexity and performance, and training with our own metaheuristic optimization method). The NNs trained with our genetic low-discrepancy search method (GLPtauS) for global optimisation demonstrated very good generalisation abilities. In our view, the reported testing success rate of up to 95% is due to several factors: the combination of feature generation techniques; the application of Analysis of Variance (ANOVA) and Principal Component Analysis (PCA), which appeared to be very efficient for preprocessing the data; and the use of a suitable NN design and learning method.
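
    A minimal sketch of the preprocessing-plus-classifier pattern the record describes (dimensionality reduction feeding a neural network), using scikit-learn on synthetic stand-in features rather than the authors' cork-tile data or their GLPtauS training method:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline

        # Synthetic stand-in for texture features extracted from tile images.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(400, 64))
        y = (X[:, :8].sum(axis=1) > 0).astype(int)   # two mock tile classes

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = make_pipeline(
            PCA(n_components=16),
            MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0))
        model.fit(X_tr, y_tr)
        print("test accuracy:", model.score(X_te, y_te))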

  19. Enhanced Flight Vision Systems and Synthetic Vision Systems for NextGen Approach and Landing Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K. E.; Williams, Steven P.; Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Shelton, Kevin J.

    2013-01-01

    Synthetic Vision Systems and Enhanced Flight Vision System (SVS/EFVS) technologies have the potential to provide additional margins of safety for aircrew performance and enable operational improvements for low visibility operations in the terminal area environment with equivalent efficiency as visual operations. To meet this potential, research is needed for effective technology development and implementation of regulatory standards and design guidance to support introduction and use of SVS/EFVS advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. A fixed-base pilot-in-the-loop simulation test was conducted at NASA Langley Research Center that evaluated the use of SVS/EFVS in NextGen low visibility approach and landing operations. Twelve crews flew approach and landing operations in a simulated NextGen Chicago O'Hare environment. Various scenarios tested the potential for using EFVS to conduct approach, landing, and roll-out operations in visibility as low as 1000 feet runway visual range (RVR). Also, SVS was tested to evaluate the potential for lowering decision heights (DH) on certain instrument approach procedures below what can be flown today. Expanding the portion of the visual segment in which EFVS can be used in lieu of natural vision from 100 feet above the touchdown zone elevation to touchdown and rollout in visibilities as low as 1000 feet RVR appears to be viable as touchdown performance was acceptable without any apparent workload penalties. A lower DH of 150 feet and/or possibly reduced visibility minima using SVS appears to be viable when implemented on a Head-Up Display, but the landing data suggests further study for head-down implementations.

  20. Computer vision for driver assistance systems

    NASA Astrophysics Data System (ADS)

    Handmann, Uwe; Kalinke, Thomas; Tzomakas, Christos; Werner, Martin; von Seelen, Werner

    1998-07-01

    Systems for automated image analysis are useful for a variety of tasks, and their importance is still increasing due to technological advances and growing social acceptance. Especially in the field of driver assistance systems, the progress in science has reached a level of high performance. Fully or partly autonomously guided vehicles, particularly for road-based traffic, pose high demands on the development of reliable algorithms due to the conditions imposed by natural environments. At the Institut fur Neuroinformatik, methods for analyzing driving relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We introduce a system which extracts the important information from an image taken by a CCD camera installed at the rear view mirror in a car. The approach consists of sequential and parallel sensor and information processing. Three main tasks, namely initial segmentation (object detection), object tracking, and object classification, are realized by integration in the sequential branch and by fusion in the parallel branch. The main gain of this approach is the integrative coupling of different algorithms providing partly redundant information.
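
    A skeletal sketch of the sequential detect, track, classify flow described above; every stage is a placeholder showing only the data flow, not the institute's algorithms.

        def detect(frame):
            """Initial segmentation: return candidate bounding boxes."""
            return [(100, 120, 40, 30)]           # (x, y, w, h), invented

        def track(prev_boxes, boxes):
            """Tracking: here, trivially pass detections through."""
            return boxes

        def classify(frame, box):
            """Classification: assign a label to each tracked box."""
            return "vehicle"                      # placeholder decision

        def process(frame, prev_boxes=None):
            boxes = track(prev_boxes or [], detect(frame))
            return [(box, classify(frame, box)) for box in boxes]

        print(process(frame=None))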

  1. Intensity measurement of automotive headlamps using a photometric vision system

    NASA Astrophysics Data System (ADS)

    Patel, Balvant; Cruz, Jose; Perry, David L.; Himebaugh, Frederic G.

    1996-01-01

    Requirements for automotive head lamp luminous intensity tests are introduced. The rationale for developing a non-goniometric photometric test system is discussed. The design of the Ford photometric vision system (FPVS) is presented, including hardware, software, calibration, and system use. Directional intensity plots and regulatory test results obtained from the system are compared to corresponding results obtained from a Ford goniometric test system. Sources of error for the vision system and goniometer are discussed. Directions for new work are identified.

  2. GSFC Information Systems Technology Developments Supporting the Vision for Space Exploration

    NASA Technical Reports Server (NTRS)

    Hughes, Peter; Dennehy, Cornelius; Mosier, Gary; Smith, Dan; Rykowski, Lisa

    2004-01-01

    The Vision for Space Exploration will guide NASA's future human and robotic space activities. The broad range of human and robotic missions now being planned will require the development of new system-level capabilities enabled by emerging new technologies. Goddard Space Flight Center is actively supporting the Vision for Space Exploration in a number of program management, engineering and technology areas. This paper provides a brief background on the Vision for Space Exploration and a general overview of potential key Goddard contributions. In particular, this paper focuses on describing relevant GSFC information systems capabilities in architecture development; interoperable command, control and communications; and other applied information systems technology/research activities that are applicable to support the Vision for Space Exploration goals. Current GSFC development efforts and task activities are presented together with future plans.

  3. A vision architecture for the extravehicular activity retriever

    NASA Technical Reports Server (NTRS)

    Magee, Michael

    1992-01-01

    The Extravehicular Activity Retriever (EVAR) is a robotic device currently being developed by the Automation and Robotics Division at the NASA Johnson Space Center to support activities in the neighborhood of the Space Shuttle or Space Station Freedom. As the name implies, the Retriever's primary function will be to provide the capability to retrieve tools, equipment or other objects which have become detached from the spacecraft, but it will also be able to rescue a crew member who may have become inadvertently de-tethered. Later goals will include cooperative operations between a crew member and the Retriever such as fetching a tool that is required for servicing or maintenance operations. This report documents a preliminary design for a Vision System Planner (VSP) for the EVAR that is capable of achieving visual objectives provided to it by a high level task planner. Typical commands which the task planner might issue to the VSP relate to object recognition, object location determination, and obstacle detection. Upon receiving a command from the task planner, the VSP then plans a sequence of actions to achieve the specified objective using a model-based reasoning approach. This sequence may involve choosing an appropriate sensor, selecting an algorithm to process the data, reorienting the sensor, adjusting the effective resolution of the image using lens zooming capability, and/or requesting the task planner to reposition the EVAR to obtain a different view of the object. An initial version of the Vision System Planner which realizes the above capabilities using simulated images has been implemented and tested. The remaining sections describe the architecture and capabilities of the VSP and its relationship to the high level task planner. In addition, typical plans that are generated to achieve visual goals for various scenarios will be discussed. Specific topics to be addressed will include object search strategies and repositioning of the EVAR to improve the view of the object.

  4. High Speed Research - External Vision System (EVS)

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Imagine flying a supersonic passenger jet (like the Concorde) at 1500 mph with no front windows in the cockpit - it may one day be a reality, as seen in this animation still. NASA engineers are working to develop technology that would replace the forward cockpit windows in future supersonic passenger jets with large sensor displays. These displays would use video images, enhanced by computer-generated graphics, to take the place of the view out the front windows. The envisioned eXternal Visibility System (XVS) would guide pilots to an airport, warn them of other aircraft near their path, and provide additional visual aids for airport approaches, landings and takeoffs. Currently, supersonic transports like the Anglo-French Concorde droop the front of the jet (the 'nose') downward to allow the pilots to see forward during takeoffs and landings. By enhancing the pilots' vision with high-resolution video displays, future supersonic transport designers could eliminate the heavy and expensive, mechanically-drooped nose. A future U.S. supersonic passenger jet, as envisioned by NASA's High-Speed Research (HSR) program, would carry 300 passengers more than 5000 nautical miles at more than 1500 miles per hour (more than twice the speed of sound). Traveling from Los Angeles to Tokyo would take only four hours, with an anticipated fare increase of only 20 percent over current ticket prices for substantially slower subsonic flights. Animation by Joey Ponthieux, Computer Sciences Corporation, Inc.

  5. Lighting And Optics Expert System For Machine Vision

    NASA Astrophysics Data System (ADS)

    Novini, Amir

    1989-03-01

    Machine Vision and the field of Artificial Intelligence are both new technologies which have evolved mainly within the past decade with the growth of computers and microchips. And, although research continues, both have emerged from the experimental state to industrial reality. Today's machine vision systems are solving thousands of manufacturing problems in various industries, and the impact of Artificial Intelligence, and more specifically, the use of "Expert Systems" in industry is also being realized. This paper will examine how the two technologies can cross paths, and how an Expert System can become an important part of an overall machine vision solution. An actual example of a development of an Expert System that helps solve machine vision lighting and optics problems will be discussed. The lighting and optics Expert System was developed to assist the end user to configure the "Front End" of a vision system to help solve the overall machine vision problem more effectively, since lack of attention to lighting and optics has caused many failures of this technology. Other areas of machine vision technology where Expert Systems could apply will also be discussed.

  6. Lighting And Optics Expert System For Machine Vision

    NASA Astrophysics Data System (ADS)

    Novini, Amir

    1988-12-01

    Machine Vision and the field of Artificial Intelligence are both new technologies which have evolved mainly within the past decade with the growth of computers and microchips. And, although research continues, both have emerged from the experimental state to industrial reality. Today's machine vision systems are solving thousands of manufacturing problems in various industries, and the impact of Artificial Intelligence, and more specifically, the use of "Expert Systems" in industry is also being realized. This paper will examine how the two technologies can cross paths, and how an Expert System can become an important part of an overall machine vision solution. An actual example of a development of an Expert System that helps solve machine vision lighting and optics problems will be discussed. The lighting and optics Expert System was developed to assist the end user to configure the "Front End" of a vision system to help solve the overall machine vision problem more effectively, since lack of attention to lighting and optics has caused many failures of this technology. Other areas of machine vision technology where Expert Systems could apply will also be discussed.

  7. Lighting and optics expert system for machine vision

    NASA Astrophysics Data System (ADS)

    Novini, Amir R.

    1991-03-01

    Machine Vision and the field of Artificial Intelligence are both new technologies which have evolved mainly within the past decade with the growth of computers and microchips. And, although research continues, both have emerged from the experimental state to industrial reality. Today's machine vision systems are solving thousands of manufacturing problems in various industries, and the impact of Artificial Intelligence, and more specifically, the use of "Expert Systems" in industry is also being realized. This paper will examine how the two technologies can cross paths, and how an Expert System can become an important part of an overall machine vision solution. An actual example of a development of an Expert System that helps solve machine vision lighting and optics problems will be discussed. The lighting and optics Expert System was developed to assist the end user to configure the "Front End" of a vision system to help solve the overall machine vision problem more effectively, since lack of attention to lighting and optics has caused many failures of this technology. Other areas of machine vision technology where Expert Systems could apply will also be discussed.

  8. Robust active binocular vision through intrinsically motivated learning

    PubMed Central

    Lonini, Luca; Forestier, Sébastien; Teulière, Céline; Zhao, Yu; Shi, Bertram E.; Triesch, Jochen

    2013-01-01

    The efficient coding hypothesis posits that sensory systems of animals strive to encode sensory signals efficiently by taking into account the redundancies in them. This principle has been very successful in explaining response properties of visual sensory neurons as adaptations to the statistics of natural images. Recently, we have begun to extend the efficient coding hypothesis to active perception through a form of intrinsically motivated learning: a sensory model learns an efficient code for the sensory signals while a reinforcement learner generates movements of the sense organs to improve the encoding of the signals. To this end, it receives an intrinsically generated reinforcement signal indicating how well the sensory model encodes the data. This approach has been tested in the context of binocular vision, leading to the autonomous development of disparity tuning and vergence control. Here we systematically investigate the robustness of the new approach in the context of a binocular vision system implemented on a robot. Robustness is an important aspect that reflects the ability of the system to deal with unmodeled disturbances or events, such as insults to the system that displace the stereo cameras. To demonstrate the robustness of our method and its ability to self-calibrate, we introduce various perturbations and test if and how the system recovers from them. We find that (1) the system can fully recover from a perturbation that can be compensated through the system's motor degrees of freedom, (2) performance degrades gracefully if the system cannot use its motor degrees of freedom to compensate for the perturbation, and (3) recovery from a perturbation is improved if both the sensory encoding and the behavior policy can adapt to the perturbation. Overall, this work demonstrates that our intrinsically motivated learning approach for efficient coding in active perception gives rise to a self-calibrating perceptual system of high robustness. PMID:24223552
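
    A toy sketch of the intrinsically motivated loop described above: a fixed linear code stands in for the learned sensory model, and the intrinsic reward is the negative reconstruction error that a greedy "policy" maximizes. The dictionary, actions, and sensor model are all invented for illustration.

        import numpy as np

        # Invented dictionary and sensor model; only the reward structure matters.
        rng = np.random.default_rng(1)
        D = rng.normal(size=(16, 8))              # fixed linear "sensory model"

        def intrinsic_reward(x):
            code, *_ = np.linalg.lstsq(D, x, rcond=None)
            return -float(np.sum((x - D @ code) ** 2))   # encoding quality

        def best_action(sense, actions):
            """Greedy policy: pick the action whose resulting input encodes best."""
            return max(actions, key=lambda a: intrinsic_reward(sense(a)))

        actions = [-1.0, 0.0, 1.0]                # e.g., vergence adjustments
        sense = lambda a: rng.normal(size=16) * (1.0 + abs(a))  # stand-in sensor
        print(best_action(sense, actions))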

  9. Robust active binocular vision through intrinsically motivated learning.

    PubMed

    Lonini, Luca; Forestier, Sébastien; Teulière, Céline; Zhao, Yu; Shi, Bertram E; Triesch, Jochen

    2013-01-01

    The efficient coding hypothesis posits that sensory systems of animals strive to encode sensory signals efficiently by taking into account the redundancies in them. This principle has been very successful in explaining response properties of visual sensory neurons as adaptations to the statistics of natural images. Recently, we have begun to extend the efficient coding hypothesis to active perception through a form of intrinsically motivated learning: a sensory model learns an efficient code for the sensory signals while a reinforcement learner generates movements of the sense organs to improve the encoding of the signals. To this end, it receives an intrinsically generated reinforcement signal indicating how well the sensory model encodes the data. This approach has been tested in the context of binocular vision, leading to the autonomous development of disparity tuning and vergence control. Here we systematically investigate the robustness of the new approach in the context of a binocular vision system implemented on a robot. Robustness is an important aspect that reflects the ability of the system to deal with unmodeled disturbances or events, such as insults to the system that displace the stereo cameras. To demonstrate the robustness of our method and its ability to self-calibrate, we introduce various perturbations and test if and how the system recovers from them. We find that (1) the system can fully recover from a perturbation that can be compensated through the system's motor degrees of freedom, (2) performance degrades gracefully if the system cannot use its motor degrees of freedom to compensate for the perturbation, and (3) recovery from a perturbation is improved if both the sensory encoding and the behavior policy can adapt to the perturbation. Overall, this work demonstrates that our intrinsically motivated learning approach for efficient coding in active perception gives rise to a self-calibrating perceptual system of high robustness. PMID:24223552

  10. Advanced helmet vision system (AHVS) integrated night vision helmet mounted display (HMD)

    NASA Astrophysics Data System (ADS)

    Ashcraft, Todd W.; Atac, Robert

    2012-06-01

    Gentex Corporation, under contract to Naval Air Systems Command (AIR 4.0T), designed the Advanced Helmet Vision System to provide aircrew with 24-hour, visor-projected binocular night vision and HMD capability. AHVS integrates numerous key technologies, including high brightness Light Emitting Diode (LED)-based digital light engines, advanced lightweight optical materials and manufacturing processes, and innovations in graphics processing software. This paper reviews the current status of miniaturization and integration with the latest two-part Gentex modular helmet, highlights the lessons learned from previous AHVS phases, and discusses plans for qualification and flight testing.

  11. A smart telerobotic system driven by monocular vision

    NASA Technical Reports Server (NTRS)

    Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.

    1994-01-01

    A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.
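
    Recovering pose from a single image of an object with known solid geometry is commonly framed as a Perspective-n-Point problem; a hypothetical OpenCV example follows, where the target size, image points, and camera intrinsics are all made up for illustration.

        import cv2
        import numpy as np

        # Invented example: pose of a known 0.2 m square target from one image.
        object_pts = np.array([[0, 0, 0], [0.2, 0, 0],
                               [0.2, 0.2, 0], [0, 0.2, 0]], dtype=np.float64)
        image_pts = np.array([[310, 220], [420, 225],
                              [415, 330], [305, 325]], dtype=np.float64)
        K = np.array([[800., 0., 320.],
                      [0., 800., 240.],
                      [0., 0., 1.]])

        ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
        print(ok, tvec.ravel())   # target translation in the camera frame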

  12. Teacher Activism: Enacting a Vision for Social Justice

    ERIC Educational Resources Information Center

    Picower, Bree

    2012-01-01

    This qualitative study focused on educators who participated in grassroots social justice groups to explore the role teacher activism can play in the struggle for educational justice. Findings show teacher activists made three overarching commitments: to reconcile their vision for justice with the realities of injustice around them; to work within…

  13. Choosing the right video interface for military vision systems

    NASA Astrophysics Data System (ADS)

    Phillips, John

    2015-05-01

    This paper discusses how GigE Vision® video interfaces - the technology used to transfer data from a camera or image sensor to a mission computer or display - help designers reduce the cost and complexity of military imaging systems, while also improving usability and increasing intelligence for end-users. The paper begins with a detailed review of video connectivity approaches commonly used in military imaging systems, followed by an overview on the GigE Vision standard. With this background, the design, cost, and performance benefits that can be achieved when employing GigE Vision-compliant video interfaces in a vetronics retrofit upgrade project are outlined.

  14. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application (e.g., object recognition). An IVS normally involves algorithms from low level, intermediate level, and high level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues are addressed in parallel architectures and parallel algorithms for integrated vision systems.

  15. The Tactile Vision Substitution System: Applications in Education and Employment

    ERIC Educational Resources Information Center

    Scadden, Lawrence A.

    1974-01-01

    The Tactile Vision Substitution System converts the visual image from a narrow-angle television camera to a tactual image on a 5-inch square, 100-point display of vibrators placed against the abdomen of the blind person. (Author)

  16. Standard machine vision systems used in different industrial applications

    NASA Astrophysics Data System (ADS)

    Bruehl, Wolfgang

    1993-12-01

    Fully standardized machine vision systems won't require task-specific hard- or software development, which allows short project realization times at minimized cost. This paper describes two very different applications which were realized only by menu-guided configuration of the QueCheck standard machine vision system. The first is an in-line inspection of oil pump castings, necessary to protect the downstream working machine from being damaged by castings not conforming to the specified geometrical measures. The second application shows the replacement of time-consuming manual particle size analysis of fertilizer pellets by continuous analysis with a vision system. At the same time, the data from the vision system can be used to optimize particle size during production.

  17. Building Artificial Vision Systems with Machine Learning

    SciTech Connect

    LeCun, Yann

    2011-02-23

    Three questions pose the next challenge for Artificial Intelligence (AI), robotics, and neuroscience. How do we learn perception (e.g. vision)? How do we learn representations of the perceptual world? How do we learn visual categories from just a few examples?

  18. Human Factors And Safety Considerations Of Night Vision Systems Flight

    NASA Astrophysics Data System (ADS)

    Verona, Robert W.; Rash, Clarence E.

    1989-03-01

    Military aviation night vision systems greatly enhance the capability to operate during periods of low illumination. After flying with night vision devices, most aviators are apprehensive about returning to unaided night flight. Current night vision imaging devices allow aviators to fly during ambient light conditions which would be extremely dangerous, if not impossible, with unaided vision. However, the visual input afforded with these devices does not approach that experienced using the unencumbered, unaided eye during periods of daylight illumination. Many visual parameters, e.g., acuity, field-of-view, depth perception, etc., are compromised when night vision devices are used. The inherent characteristics of image intensification based sensors introduce new problems associated with the interpretation of visual information based on different spatial and spectral content from that of unaided vision. In addition, the mounting of these devices onto the helmet is accompanied by concerns of fatigue resulting from increased head supported weight and shift in center-of-gravity. All of these concerns have produced numerous human factors and safety issues relating to the use of night vision systems. These issues are identified and discussed in terms of their possible effects on user performance and safety.

  19. 75 FR 44306 - Eleventh Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-28

    ...: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation Administration... WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is...: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be...

  20. Active vision and image/video understanding systems built upon network-symbolic models for perception-based navigation of mobile robots in real-world environments

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2004-12-01

    To be completely successful, robots need to have reliable perceptual systems that are similar to human vision. It is hard to use geometric operations for processing of natural images. Instead, the brain builds a relational network-symbolic structure of the visual scene, using different clues to set up the relational order of surfaces and objects with respect to the observer and to each other. Feature, symbol, and predicate are equivalent in the biologically inspired Network-Symbolic systems. A linking mechanism binds these features/symbols into coherent structures, and the image converts from a "raster" into a "vector" representation. View-based object recognition is a hard problem for traditional algorithms that directly match a primary view of an object to a model. In Network-Symbolic Models, the derived structure, not the primary view, is a subject for recognition. Such recognition is not affected by local changes and appearances of the object as seen from a set of similar views. Once built, the model of the visual scene changes more slowly than the local information in the visual buffer. It allows for disambiguating visual information and effective control of actions and navigation via incremental relational changes in the visual buffer. Network-Symbolic models can be seamlessly integrated into the NIST 4D/RCS architecture and better interpret images/video for situation awareness, target recognition, navigation and actions.

  1. Latency in Visionic Systems: Test Methods and Requirements

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Arthur, J. J., III; Williams, Steven P.; Kramer, Lynda J.

    2005-01-01

    A visionics device creates a pictorial representation of the external scene for the pilot. The ultimate objective of these systems may be to electronically generate a form of Visual Meteorological Conditions (VMC) to eliminate weather or time-of-day as an operational constraint and provide enhancement over actual visual conditions where eye-limiting resolution may be a limiting factor. Empirical evidence has shown that the total system delays or latencies including the imaging sensors and display systems, can critically degrade their utility, usability, and acceptability. Definitions and measurement techniques are offered herein as common test and evaluation methods for latency testing in visionics device applications. Based upon available data, very different latency requirements are indicated based upon the piloting task, the role in which the visionics device is used in this task, and the characteristics of the visionics cockpit display device including its resolution, field-of-regard, and field-of-view. The least stringent latency requirements will involve Head-Up Display (HUD) applications, where the visionics imagery provides situational information as a supplement to symbology guidance and command information. Conversely, the visionics system latency requirement for a large field-of-view Head-Worn Display application, providing a Virtual-VMC capability from which the pilot will derive visual guidance, will be the most stringent, having a value as low as 20 msec.
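
    A simplified, software-only sketch of one way to bound such latencies: timestamp around the capture-to-render path and report the median. A real visionics test must also account for sensor and display hardware delays (measured, for example, with a photodiode rig); the callables below are stubs.

        import time

        def median_latency_ms(capture, render, n=100):
            """Median capture-to-render time over n frames, in milliseconds."""
            samples = []
            for _ in range(n):
                t0 = time.perf_counter()
                render(capture())
                samples.append((time.perf_counter() - t0) * 1000.0)
            samples.sort()
            return samples[len(samples) // 2]

        # Stub capture/render callables keep the sketch self-contained.
        print(median_latency_ms(capture=lambda: None, render=lambda f: None))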

  2. Design principle of the peripheral vision display system

    NASA Astrophysics Data System (ADS)

    Guo, Xiaowei; Wang, Yuefeng; Niu, Yanxiong; Yu, Lishen; Liu, Shen H.

    1996-09-01

    The peripheral vision display system (PVDS) presents the pilot with a gyro stabilized artificial horizon projected onto the instrument panel by means of a red laser light source. The pilot can detect changes to aircraft attitude without continuously referring back to his flight instruments. The PVDS effectively applies the peripheral vision of the pilot to overcome disorientation. This paper gives the principles of the PVDS, according to which, we have designed the PVDS and used it for aviation medicine.

  3. Machine vision systems using machine learning for industrial product inspection

    NASA Astrophysics Data System (ADS)

    Lu, Yi; Chen, Tie Q.; Chen, Jie; Zhang, Jian; Tisler, Anthony

    2002-02-01

    Machine vision inspection requires efficient processing time and accurate results. In this paper, we present a machine vision inspection architecture, SMV (Smart Machine Vision). SMV decomposes a machine vision inspection problem into two stages, Learning Inspection Features (LIF) and On-Line Inspection (OLI). The LIF is designed to learn visual inspection features from design data and/or from inspected products. During the OLI stage, the inspection system uses the knowledge learnt by the LIF component to inspect the visual features of products. In this paper we present two machine vision inspection systems developed under the SMV architecture for two different types of products, Printed Circuit Board (PCB) and Vacuum Fluorescent Display (VFD) boards. In the VFD board inspection system, the LIF component learns inspection features from a VFD board and its display patterns. In the PCB board inspection system, the LIF learns the inspection features from the CAD file of a PCB board. In both systems, the LIF component also incorporates interactive learning to make the inspection system more powerful and efficient. The VFD system has been deployed successfully in three different manufacturing companies, and the PCB inspection system is in the process of being deployed in a manufacturing plant.
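
    A minimal sketch of the two-stage SMV split, with a placeholder statistic standing in for the learned inspection features; none of this reflects the authors' actual implementation.

        import numpy as np

        class LIF:
            """Learning Inspection Features: summarize known-good samples."""
            def learn(self, good_samples):
                means = np.array([s.mean() for s in good_samples])
                return {"mean": means.mean(), "tol": 3 * means.std() + 1e-6}

        class OLI:
            """On-Line Inspection: apply the learned knowledge to new parts."""
            def __init__(self, knowledge):
                self.k = knowledge
            def inspect(self, sample):
                return abs(sample.mean() - self.k["mean"]) <= self.k["tol"]

        rng = np.random.default_rng(2)
        good = [rng.normal(0.5, 0.01, (8, 8)) for _ in range(20)]
        oli = OLI(LIF().learn(good))
        print(oli.inspect(rng.normal(0.5, 0.01, (8, 8))))   # True expected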

  4. A Vision for Systems Engineering Applied to Wind Energy (Presentation)

    SciTech Connect

    Felker, F.; Dykes, K.

    2015-01-01

    This presentation was given at the Third Wind Energy Systems Engineering Workshop on January 14, 2015. Topics covered include the importance of systems engineering, a vision for systems engineering as applied to wind energy, and application of systems engineering approaches to wind energy research and development.

  5. 77 FR 56254 - Twentieth Meeting: RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-12

    ... Federal Aviation Administration Twentieth Meeting: RTCA Special Committee 213, Enhanced Flight Vision... of Transportation (DOT). ACTION: Meeting Notice of RTCA Special Committee 213, Enhanced Flight Vision... of the twentieth meeting of the RTCA Special Committee 213, Enhanced Flight Vision...

  6. 77 FR 36331 - Nineteenth Meeting: RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-18

    ... Federal Aviation Administration Nineteenth Meeting: RTCA Special Committee 213, Enhanced Flight Vision... of Transportation (DOT). ACTION: Meeting Notice of RTCA Special Committee 213, Enhanced Flight Vision... of the nineteenth meeting of RTCA Special Committee 213, Enhanced Flight Vision...

  7. Multiple-channel Streaming Delivery for Omnidirectional Vision System

    NASA Astrophysics Data System (ADS)

    Iwai, Yoshio; Nagahara, Hajime; Yachida, Masahiko

    An omnidirectional vision system is an imaging system that can capture a surrounding image in all directions by using a hyperbolic mirror and a conventional CCD camera. This paper proposes a streaming server that can efficiently transfer movies captured by an omnidirectional vision system through the Internet. The proposed system uses multiple channels to deliver multiple movies synchronously. Through this method, the system enables clients to view different directions of the omnidirectional movies and also supports changing the viewing area during playback. Our evaluation experiments show that the proposed streaming server can effectively deliver multiple movies via multiple channels.

  8. Machine vision system for online inspection of freshly slaughtered chickens

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A machine vision system was developed and evaluated for the automation of online inspection to differentiate freshly slaughtered wholesome chickens from systemically diseased chickens. The system consisted of an electron-multiplying charge-coupled-device camera used with an imaging spectrograph and ...

  9. Machine vision system for online wholesomeness inspection of poultry carcasses

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A line-scan machine vision system and multispectral inspection algorithm were developed and evaluated for differentiation of wholesome and systemically diseased chickens on a high-speed processing line. The inspection system acquires line-scan images of chicken carcasses on a 140 bird-per-minute pro...

  10. Machine vision system for inspecting characteristics of hybrid rice seed

    NASA Astrophysics Data System (ADS)

    Cheng, Fang; Ying, Yibin

    2004-03-01

    Obtaining clear images, which helps improve classification accuracy, involves many factors; light source, lens extender, and background are discussed in this paper. The analysis of rice seed reflectance curves showed that the light source wavelength for discriminating diseased seeds from normal rice seeds in the monochromic image recognition mode was about 815 nm for jinyou402 and shanyou10. To determine optimal conditions for acquiring digital images of rice seed using a computer vision system, an adjustable color machine vision system was developed. The machine vision system with a 20 mm to 25 mm lens extender produces close-up images which make it easy to recognize the characteristics of hybrid rice seeds. A white background proved to be better than a black background for inspecting rice seeds infected by disease and for using shape-based algorithms. Experimental results indicated good classification for most of the characteristics with the machine vision system. The same algorithm yielded better results under optimized conditions for quality inspection of rice seed. Specifically, the image processing can capture details such as fine fissures with the machine vision system.

  11. Musca domestica inspired machine vision system with hyperacuity

    NASA Astrophysics Data System (ADS)

    Riley, Dylan T.; Harman, William M.; Tomberlin, Eric; Barrett, Steven F.; Wilcox, Michael; Wright, Cameron H. G.

    2005-05-01

    Musca domestica, the common house fly, has a simple yet powerful and accessible vision system. Cajal indicated in 1885 that the fly's vision system is the same as in the human retina. The house fly has some intriguing vision system features such as fast, analog, parallel operation. Furthermore, it has the ability to detect movement and objects at far better resolution than predicted by photoreceptor spacing, termed hyperacuity. We are investigating the mechanisms behind these features and incorporating them into next generation vision systems. We have developed a prototype sensor that employs a fly inspired arrangement of photodetectors sharing a common lens. The Gaussian shaped acceptance profile of each sensor, coupled with overlapped sensor fields of view, provides the necessary configuration for obtaining hyperacuity data. The sensor is able to detect object movement with far greater resolution than that predicted by photoreceptor spacing. We have exhaustively tested and characterized the sensor to determine its practical resolution limit. Our tests, coupled with theory from Bucklew and Saleh (1985), indicate that the limit to the hyperacuity response may only be related to target contrast. We have also implemented an array of these prototype sensors which will allow for two-dimensional position location. These high resolution, low contrast capable sensors are being developed for use as a vision system for an autonomous robot and the next generation of smart wheelchairs. However, they are easily adapted for biological endoscopy, downhole monitoring in oil wells, and other applications.
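
    The hyperacuity mechanism can be illustrated with two overlapping Gaussian acceptance profiles: the log-ratio of their responses locates a target far more finely than the detector spacing. A toy calculation, with invented detector centres, profile width, and target position:

        import numpy as np

        SIGMA = 1.0            # assumed acceptance-profile width (degrees)
        X1, X2 = -0.5, 0.5     # assumed photodetector centres (degrees)

        def localize(r1, r2):
            """Target position from two overlapping Gaussian responses."""
            return (X1 + X2) / 2 + SIGMA**2 * np.log(r1 / r2) / (X1 - X2)

        x_true = 0.13          # much finer than the 1-degree detector spacing
        r1 = np.exp(-(x_true - X1)**2 / (2 * SIGMA**2))
        r2 = np.exp(-(x_true - X2)**2 / (2 * SIGMA**2))
        print(localize(r1, r2))   # recovers ~0.13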

  12. A modular real-time vision system for humanoid robots

    NASA Astrophysics Data System (ADS)

    Trifan, Alina L.; Neves, António J. R.; Lau, Nuno; Cunha, Bernardo

    2012-01-01

    Robotic vision is nowadays one of the most challenging branches of robotics. In the case of a humanoid robot, a robust vision system has to provide an accurate representation of the surrounding world and to cope with all the constraints imposed by the hardware architecture and the locomotion of the robot. Usually humanoid robots have low computational capabilities that limit the complexity of the developed algorithms. Moreover, their vision system should perform in real time, therefore a compromise between complexity and processing times has to be found. This paper presents a reliable implementation of a modular vision system for a humanoid robot to be used in color-coded environments. From image acquisition, to camera calibration and object detection, the system that we propose integrates all the functionalities needed for a humanoid robot to accurately perform given tasks in color-coded environments. The main contributions of this paper are the implementation details that allow the use of the vision system in real-time, even with low processing capabilities, the innovative self-calibration algorithm for the most important parameters of the camera, and its modularity that allows its use with different robotic platforms. Experimental results have been obtained with a NAO robot produced by Aldebaran, which is currently the robotic platform used in the RoboCup Standard Platform League, as well as with a humanoid built using the Bioloid Expert Kit from Robotis. As practical examples, our vision system can be efficiently used in real time for the detection of the objects of interest for a soccer playing robot (ball, field lines and goals) as well as for navigating through a maze with the help of color-coded clues. In the worst case scenario, all the objects of interest in a soccer game, using a NAO robot with a single core 500 MHz processor, are detected in less than 30 ms. Our vision system also includes an algorithm for self-calibration of the camera parameters as well.
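
    For color-coded environments like RoboCup fields, detection commonly reduces to color thresholding plus blob extraction. A hedged OpenCV sketch; the HSV range and file name are hypothetical, not the paper's calibration.

        import cv2

        frame = cv2.imread("field.png")         # placeholder input image
        if frame is None:
            raise SystemExit("no test image available")
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (5, 120, 120), (20, 255, 255))  # "ball orange"

        # OpenCV 4 signature: findContours returns (contours, hierarchy).
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
            print("ball centre:", (x + w // 2, y + h // 2))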

  13. Database Integrity Monitoring for Synthetic Vision Systems Using Machine Vision and SHADE

    NASA Technical Reports Server (NTRS)

    Cooper, Eric G.; Young, Steven D.

    2005-01-01

    In an effort to increase situational awareness, the aviation industry is investigating technologies that allow pilots to visualize what is outside of the aircraft during periods of low-visibility. One of these technologies, referred to as Synthetic Vision Systems (SVS), provides the pilot with real-time computer-generated images of obstacles, terrain features, runways, and other aircraft regardless of weather conditions. To help ensure the integrity of such systems, methods of verifying the accuracy of synthetically-derived display elements using onboard remote sensing technologies are under investigation. One such method is based on a shadow detection and extraction (SHADE) algorithm that transforms computer-generated digital elevation data into a reference domain that enables direct comparison with radar measurements. This paper describes machine vision techniques for making this comparison and discusses preliminary results from application to actual flight data.
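
    A toy one-dimensional version of the shadow-casting step: given an elevation profile and a grazing angle, mark the cells that terrain occludes, producing a mask that could be compared against sensed shadows. The geometry below is a simplification for illustration, not the SHADE implementation.

        import numpy as np

        def shadow_mask(elevation, grazing_deg, cell_size=1.0):
            """True where terrain toward index 0 occludes the illumination ray."""
            drop = np.tan(np.radians(grazing_deg)) * cell_size
            horizon = -np.inf
            shadowed = np.zeros(len(elevation), dtype=bool)
            for i, z in enumerate(elevation):
                ray = horizon - drop          # ray height falls with range
                shadowed[i] = z < ray
                horizon = max(ray, z)
            return shadowed

        profile = np.array([0., 5., 1., 1., 1., 4., 0.])   # invented terrain
        print(shadow_mask(profile, grazing_deg=30))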

  14. Technical Challenges in the Development of a NASA Synthetic Vision System Concept

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Parrish, Russell V.; Kramer, Lynda J.; Harrah, Steve; Arthur, J. J., III

    2002-01-01

    Within NASA's Aviation Safety Program, the Synthetic Vision Systems Project is developing display system concepts to improve pilot terrain/situation awareness by providing a perspective synthetic view of the outside world through an on-board database driven by precise aircraft positioning information updating via Global Positioning System-based data. This work is aimed at eliminating visibility-induced errors and low visibility conditions as a causal factor to civil aircraft accidents, as well as replicating the operational benefits of clear day flight operations regardless of the actual outside visibility condition. Synthetic vision research and development activities at NASA Langley Research Center are focused around a series of ground simulation and flight test experiments designed to evaluate, investigate, and assess the technology which can lead to operational and certified synthetic vision systems. The technical challenges that have been encountered and that are anticipated in this research and development activity are summarized.

  15. Binocular stereo vision system design for lunar rover

    NASA Astrophysics Data System (ADS)

    Chu, Jun; Jiao, Chunlin; Guo, Hang; Zhang, Xiaoyu

    2007-11-01

    In this paper, we integrate a pair of CCD cameras and a digital pan/tilt unit with two degrees of freedom into a binocular stereo vision system, which simulates the panoramic camera system of a lunar rover. Constraints on the placement and parameter choice of the stereo camera pair are proposed based on the science objectives of the Chang'e-II mission. These constraints are then applied to our binocular stereo vision system, and its localization precision is analyzed. Simulation and experimental results confirm the proposed constraints and the precision analysis.
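
    As background for such a precision analysis, the standard stereo error model (a textbook relation, not taken from the paper) links depth uncertainty to disparity uncertainty:

        \Delta Z \approx \frac{Z^{2}}{f\,B}\,\Delta d

    where Z is the depth, f the focal length in pixels, B the baseline, and \Delta d the disparity error. Because the error grows quadratically with range, baseline and camera placement must be chosen against the ranges the mission cares about.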

  16. Spherical vision cameras in a semi-autonomous wheelchair system.

    PubMed

    Nguyen, Jordan S; Su, Steven W; Nguyen, Hung T

    2010-01-01

    This paper is concerned with the methods developed for extending the capabilities of a spherical vision camera system to allow detection of surrounding objects and whether or not they pose a danger for movement in that direction during autonomous navigation of a power wheelchair. A Point Grey Research (PGR) Ladybug2 spherical vision camera system was attached to the power wheelchair for surrounding vision. The objective is to use this Ladybug2 system to provide information about obstacles all around the wheelchair and aid the automated decision-making process involved during navigation. Through instantaneous neural network classification of individual camera images to determine whether obstacles are present, detection of obstacles has been successfully achieved with accuracies reaching 96%. This assistive technology has the purpose of automated obstacle detection, navigational path planning and decision-making, and collision avoidance during navigation. PMID:21097098

  17. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.

    1991-01-01

    Research activity has shifted from computer graphics and vision systems to the broader scope of applying concepts of artificial intelligence to robotics. Specifically, the research is directed toward developing Artificial Neural Networks, Expert Systems, and Laser Imaging Techniques for Autonomous Space Robots.

  18. Low light level CMOS sensor for night vision systems

    NASA Astrophysics Data System (ADS)

    Gross, Elad; Ginat, Ran; Nesher, Ofer

    2015-05-01

    For many years, image intensifier tubes were used for night vision systems. In 2014, Elbit Systems developed a digital low-light-level CMOS sensor with sensitivity similar to that of Gen II image intensifiers, down to starlight conditions. In this work we describe the basic principle behind this sensor, a physical model for low-light performance estimation, and results of field testing.

  19. 2020 Vision for Tank Waste Cleanup (One System Integration) - 12506

    SciTech Connect

    Harp, Benton; Charboneau, Stacy; Olds, Erik

    2012-07-01

    The mission of the Department of Energy's Office of River Protection (ORP) is to safely retrieve and treat the 56 million gallons of Hanford's tank waste and close the Tank Farms to protect the Columbia River. The millions of gallons of waste are a by-product of decades of plutonium production. After irradiated fuel rods were taken from the nuclear reactors to the processing facilities at Hanford, they were exposed to a series of chemicals designed to dissolve away the rod, which enabled workers to retrieve the plutonium. Once those chemicals were exposed to the fuel rods they became radioactive and extremely hot, and they could not be used in this process more than once. Because the chemicals are caustic and extremely hazardous to humans and the environment, underground storage tanks were built to hold them until a more permanent solution could be found. The cleanup of Hanford's 56 million gallons of radioactive and chemical waste stored in 177 large underground tanks represents the Department's largest and most complex environmental remediation project. Sixty percent by volume of the nation's high-level radioactive waste is stored in the underground tanks, grouped into 18 'tank farms' on Hanford's central plateau. Hanford's mission to safely remove, treat and dispose of this waste includes the construction of a first-of-its-kind Waste Treatment Plant (WTP), ongoing retrieval of waste from single-shell tanks, and building or upgrading the waste feed delivery infrastructure that will deliver the waste to and support operations of the WTP beginning in 2019. Our discussion of the 2020 Vision for Hanford tank waste cleanup will address the significant progress made to date and ongoing activities to manage the operations of the tank farms and WTP as a single system capable of retrieving, delivering, treating and disposing of Hanford's tank waste. The initiation of hot operations and subsequent full operations of the WTP are not only dependent upon the successful

  20. Concurrent algorithms for a mobile robot vision system

    SciTech Connect

    Jones, J.P.; Mann, R.C.

    1988-01-01

    The application of computer vision to mobile robots has generally been hampered by insufficient on-board computing power. The advent of VLSI-based general-purpose concurrent multiprocessor systems promises to give mobile robots an increasing amount of on-board computing capability, and to allow computation-intensive data analysis to be performed without high-bandwidth communication with a remote system. This paper describes the integration of robot vision algorithms on a 3-dimensional hypercube system on board a mobile robot developed at Oak Ridge National Laboratory. The vision system is interfaced to navigation and robot control software, enabling the robot to maneuver in a laboratory environment, to find a known object of interest and to recognize the object's status based on visual sensing. We first present the robot system architecture and the principles followed in the vision system implementation. We then provide some benchmark timings for low-level image processing routines, describe a concurrent algorithm with load balancing for the Hough transform, a new algorithm for binary component labeling, and an algorithm for the concurrent extraction of region features from labeled images. This system analyzes a scene in less than 5 seconds and has proven to be a valuable experimental tool for research in mobile autonomous robots. 9 refs., 1 fig., 3 tabs.
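
    The paper's load-balanced Hough transform is not reproduced in the abstract; a simple way to parallelize the accumulator, with Python processes standing in for hypercube nodes, is to split the angle range into bands and merge the partial accumulators. This static partition is cruder than their balancing scheme, so treat it as an illustrative sketch:

      import numpy as np
      from multiprocessing import Pool

      EDGES = None   # edge points, installed in each worker by the initializer

      def _init(edge_points):
          global EDGES
          EDGES = edge_points

      def hough_band(args):
          # One band of theta values: the unit of work a load balancer
          # would hand to any free node.
          thetas, n_rho, rho_max = args
          acc = np.zeros((len(thetas), n_rho), dtype=np.int32)
          for i, t in enumerate(thetas):
              rho = EDGES[:, 0] * np.cos(t) + EDGES[:, 1] * np.sin(t)
              idx = ((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
              np.add.at(acc[i], idx, 1)
          return acc

      if __name__ == "__main__":
          pts = np.array([[x, 2 * x + 5] for x in range(100)], dtype=float)
          rho_max = float(np.abs(pts).sum(axis=1).max())
          bands = np.array_split(np.linspace(0, np.pi, 180, endpoint=False), 4)
          with Pool(4, initializer=_init, initargs=(pts,)) as pool:
              parts = pool.map(hough_band, [(b, 256, rho_max) for b in bands])
          acc = np.vstack(parts)           # full (theta, rho) accumulator
          print(np.unravel_index(acc.argmax(), acc.shape))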

  1. Synthetic vision system flight test results and lessons learned

    NASA Technical Reports Server (NTRS)

    Radke, Jeffrey

    1993-01-01

    Honeywell Systems and Research Center developed and demonstrated an active 35 GHz radar imaging system as part of the FAA/USAF/Industry sponsored Synthetic Vision System Technology Demonstration (SVSTD) Program. The objectives of this presentation are to provide a general overview of flight test results, a system-level perspective that encompasses the efforts of the SVSTD and Augmented Visual Display (AVID) programs, and, more importantly, to provide the AVID workshop participants with Honeywell's perspective on the lessons that were learned from the SVS flight tests. One objective of the SVSTD program was to explore several known system issues concerning radar imaging technology. The program ultimately resolved some of these issues, left others open, and in fact created several new concerns. In some instances, the interested community has drawn improper conclusions from the program by globally attributing implementation-specific issues to radar imaging technology in general. The motivation for this presentation is therefore to provide AVID researchers with a better understanding of the issues that truly remain open, and to identify the perceived issues that are either resolved or were specific to Honeywell's implementation.

  2. A Laser-Based Vision System for Weld Quality Inspection

    PubMed Central

    Huang, Wei; Kovacevic, Radovan

    2011-01-01

    Welding is a very complex process in which the final weld quality can be affected by many process parameters. In order to inspect the weld quality and detect the presence of various weld defects, different methods and systems are studied and developed. In this paper, a laser-based vision system is developed for non-destructive weld quality inspection. The vision sensor is designed based on the principle of laser triangulation. By processing the images acquired from the vision sensor, the geometrical features of the weld can be obtained. Through visual analysis of the acquired 3D profiles of the weld, the presence as well as the positions and sizes of weld defects can be accurately identified, and therefore non-destructive weld quality inspection can be achieved. PMID:22344308
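
    A hedged sketch of the triangulation step: the laser stripe's row offset in each image column maps to surface height through the triangulation angle. The simplified geometry and parameter names below are assumptions for illustration, not the paper's calibration:

      import numpy as np

      def stripe_profile(img, px_size_mm, magnification, tri_angle_rad, ref_row):
          # Brightest row per column locates the laser stripe.
          rows = img.argmax(axis=0).astype(float)
          u = (rows - ref_row) * px_size_mm      # displacement on the sensor
          # Height via z = (u / magnification) / sin(triangulation angle).
          return (u / magnification) / np.sin(tri_angle_rad)

    Scanning the part under the sensor stacks these cross-sections into the 3D weld profile; dips or bumps relative to the neighbouring bead surface then flag defects together with their positions and sizes.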

  3. The impact of changing night vision goggle spectral response on night vision imaging system lighting compatibility

    NASA Astrophysics Data System (ADS)

    Task, Harry L.; Marasco, Peter L.

    2004-09-01

    The defining document outlining night-vision imaging system (NVIS) compatible lighting, MIL-L-85762A, was written in the mid-1980s, based on what was then the state of the art in night vision and image intensification. Since that time there have been changes in photocathode sensitivity and in the minus-blue coatings applied to the objective lenses. Specifically, many aviation night-vision goggles (NVGs) in the Air Force are equipped with so-called "leaky green" or Class C objective lens coatings that provide a small amount of transmission around 545 nanometers so that displays that use a P-43 phosphor can be seen through the NVGs. However, current NVIS compatibility requirements documents have not been updated to include these changes. Documents that followed and replaced MIL-L-85762A (ASC/ENFC-96-01 and MIL-STD-3009) addressed aspects of then-current NVIS technology, but did little to change the actual content or NVIS radiance requirements set forth in the original MIL-L-85762A. This paper examines the impact of spectral response changes, introduced by changes in image tube parameters and objective lens minus-blue filters, on NVIS compatibility and NVIS radiance calculations. Possible impact on NVIS lighting requirements is also discussed. In addition, arguments are presented for revisiting NVIS radiometric unit conventions.
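
    The quantity at stake is, in the spirit of MIL-STD-3009, an integral of the source's spectral radiance weighted by the goggles' relative spectral response. The toy computation below (curve shapes are illustrative, not measured data, and scaling constants are omitted) shows why a "leaky green" notch changes the result for a P-43 display even though the display itself is unchanged:

      import numpy as np

      lam = np.arange(450.0, 950.0, 5.0)            # wavelength, nm

      def nvis_radiance(spectral_radiance, response):
          # NR ~ integral of G(lam) * N(lam) d(lam)
          return np.trapz(response * spectral_radiance, lam)

      p43 = np.exp(-0.5 * ((lam - 545.0) / 10.0) ** 2)    # P-43-like peak
      class_b = np.where(lam > 625.0, 1.0, 0.0)           # hard minus-blue
      class_c = class_b + 0.02 * np.exp(-0.5 * ((lam - 545.0) / 8.0) ** 2)

      print(nvis_radiance(p43, class_b))   # ~0: display invisible to the NVG
      print(nvis_radiance(p43, class_c))   # small but nonzero "green leak"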

  4. 75 FR 71146 - In the Matter of Certain Machine Vision Software, Machine Vision Systems, and Products Containing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-22

    ...''). 74 FR 34589-90 (July 16, 2009). The complaint alleged violations of section 337 of the Tariff Act of...) under review. 75 FR 60478-80 (September 30, 2010). On October 8 and 15, 2010, respectively, complainants... COMMISSION In the Matter of Certain Machine Vision Software, Machine Vision Systems, and Products...

  5. Active vision and sensor fusion for inspection of metallic surfaces

    NASA Astrophysics Data System (ADS)

    Puente Leon, Fernando; Beyerer, Juergen

    1997-09-01

    This paper deals with strategies for reliably obtaining the edges and the surface texture of metallic objects. Since illumination is a critical aspect regarding robustness and image quality, it is considered here as an active component of the image acquisition system. The performance of the methods presented is demonstrated -- among other examples -- with images of needles for blood sugar tests. Such objects show an optimized form consisting of several planar ground surfaces delimited by sharp edges. To allow a reliable assessment of the quality of each surface, and a measurement of their edges, methods for fusing data obtained under different illumination configurations were developed. The fusion strategy is based on the minimization of suitable energy functions. First, an illumination-based segmentation of the object is performed. To obtain the boundaries of each surface, directional light-field illumination is used. By formulating suitable criteria, nearly binary images are selected by varying the illumination direction. Thereafter, the surface edges are obtained by fusing the contours of the areas obtained before. Next, an optimally illuminated image is acquired for each surface of the object by varying the illumination direction. For this purpose, a criterion describing the quality of the surface texture has to be maximized. Finally, the images of all textured surfaces of the object are fused into an improved result, in which the whole object is contained with high contrast. Although the methods presented were designed for the inspection of needles, they also perform robustly in other computer vision tasks where metallic objects have to be inspected.
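
    A minimal stand-in for the fusion step: given a stack of registered images of the same object under different illumination directions, keep each pixel from the image with the highest local contrast. Local variance is used below as a simple proxy for the paper's energy functions:

      import numpy as np
      from scipy.ndimage import uniform_filter

      def fuse_illumination_series(stack, win=9):
          # stack: float array (n_images, H, W), same scene in each image.
          mean = uniform_filter(stack, size=(1, win, win))
          var = uniform_filter(stack ** 2, size=(1, win, win)) - mean ** 2
          best = var.argmax(axis=0)            # winning image per pixel
          h, w = best.shape
          return stack[best,
                       np.arange(h)[:, None],
                       np.arange(w)[None, :]]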

  6. Vision systems for manned and robotic ground vehicles

    NASA Astrophysics Data System (ADS)

    Sanders-Reed, John N.; Koon, Phillip L.

    2010-04-01

    A Distributed Aperture Vision System for ground vehicles is described. An overview of the hardware including sensor pod, processor, video compression, and displays is provided. This includes a discussion of the choice between an integrated sensor pod and individually mounted sensors, open architecture design, and latency issues as well as flat panel versus head mounted displays. This technology is applied to various ground vehicle scenarios, including closed-hatch operations (operator in the vehicle), remote operator tele-operation, and supervised autonomy for multi-vehicle unmanned convoys. In addition, remote vision for automatic perimeter surveillance using autonomous vehicles and automatic detection algorithms is demonstrated.

  7. Development of a machine vision guidance system for automated assembly of space structures

    NASA Technical Reports Server (NTRS)

    Cooper, Eric G.; Sydow, P. Daniel

    1992-01-01

    The topics are presented in viewgraph form and include: automated structural assembly robot vision; machine vision requirements; vision targets and hardware; reflective efficiency; target identification; pose estimation algorithms; triangle constraints; truss node with joint receptacle targets; end-effector mounted camera and light assembly; vision system results from optical bench tests; and future work.

  8. Vision based assistive technology for people with dementia performing activities of daily living (ADLs): an overview

    NASA Astrophysics Data System (ADS)

    As'ari, M. A.; Sheikh, U. U.

    2012-04-01

    The rapid development of intelligent assistive technology for replacing a human caregiver in assisting people with dementia performing activities of daily living (ADLs) promises to reduce the cost of care, especially the cost of training and hiring human caregivers. The main problem, however, is the variety of sensing agents used in such systems, which depends on the intent (types of ADLs) and the environment where the activity is performed. In this paper we give an overview of the potential of computer vision as a sensing agent in assistive systems and of how it can be generalized to be invariant to various kinds of ADLs and environments. We find that a gap exists between existing vision-based human action recognition methods and the design of such systems, owing to the cognitive and physical impairments of people with dementia.

  9. Object tracking with stereo vision

    NASA Technical Reports Server (NTRS)

    Huber, Eric

    1994-01-01

    A real-time active stereo vision system incorporating gaze control and task directed vision is described. Emphasis is placed on object tracking and object size and shape determination. Techniques include motion-centroid tracking, depth tracking, and contour tracking.

  10. Characterization of a multi-user indoor positioning system based on low cost depth vision (Kinect) for monitoring human activity in a smart home.

    PubMed

    Sevrin, Loïc; Noury, Norbert; Abouchi, Nacer; Jumel, Fabrice; Massot, Bertrand; Saraydaryan, Jacques

    2015-01-01

    An increasing number of systems use indoor positioning for many scenarios such as asset tracking, health care, games, manufacturing, logistics, shopping, and security. Many technologies are available, and the use of depth cameras is becoming more and more attractive as this kind of device becomes affordable and easy to handle. This paper contributes to the effort of creating an indoor positioning system based on low cost depth cameras (Kinect). A method is proposed to optimize the calibration of the depth cameras, to describe the multi-camera data fusion, and to specify a global positioning projection to maintain compatibility with outdoor positioning systems. The monitoring of people's trajectories at home is intended for the early detection of a shift in daily activities which highlights disabilities and loss of autonomy. This system is meant to improve homecare health management for a better end of life at a sustainable cost to the community. PMID:26737415

  11. 77 FR 16890 - Eighteenth Meeting: RTCA Special Committee 213, Enhanced Flight Visions Systems/Synthetic Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-22

    ... Federal Aviation Administration Eighteenth Meeting: RTCA Special Committee 213, Enhanced Flight Visions... of Transportation (DOT). ACTION: Notice of meeting RTCA Special Committee 213, Enhanced Flight... public of the eighteenth meeting of RTCA Special Committee 213, Enhanced Flight Visions...

  12. Enhanced vision systems: results of simulation and operational tests

    NASA Astrophysics Data System (ADS)

    Hecker, Peter; Doehler, Hans-Ullrich

    1998-07-01

    Today's aircrews have to handle more and more complex situations. The most critical tasks in the field of civil aviation are landing approaches and taxiing. Especially under bad weather conditions the crew has to handle a tremendous workload. Therefore DLR's Institute of Flight Guidance has developed a concept for an enhanced vision system (EVS), which increases the performance and safety of the aircrew and provides comprehensive situational awareness. In previous contributions some elements of this concept have been presented, e.g. the 'Simulation of Imaging Radar for Obstacle Detection and Enhanced Vision' by Doehler and Bollmeyer, 1996. The presented paper gives an overview of DLR's enhanced vision concept and research approach, which consists of two main components: simulation and experimental evaluation. First, the simulation environment for enhanced vision research with a pilot-in-the-loop is introduced. An existing fixed-base flight simulator is supplemented by real-time simulations of imaging sensors, i.e. imaging radar and infrared. By applying methods of data fusion, an enhanced vision display is generated combining different levels of information, such as terrain model data, processed images acquired by sensors, aircraft state vectors, and data transmitted via datalink. The second part of this contribution presents experimental results. In cooperation with Daimler-Benz Aerospace Sensorsystems Ulm, a test van and a test aircraft were equipped with a prototype of an imaging millimeter wave radar. This sophisticated HiVision radar is to date one of the most promising sensors for all-weather operations. Images acquired by this sensor are shown, as well as results of data fusion processes based on digital terrain models. The contribution is concluded by a short video presentation.

  13. Configuration assistant for versatile vision-based inspection systems

    NASA Astrophysics Data System (ADS)

    Huesser, Olivier; Huegli, Heinz

    2001-01-01

    Nowadays, vision-based inspection systems are present in many stages of the industrial manufacturing process. Their versatility, which permits us to accommodate a broad range of inspection requirements, is, however, limited by the time-consuming system setup performed at each production change. This work aims at providing a configuration assistant that helps to speed up this system setup, considering the peculiarities of industrial vision systems. The pursued principle, which is to maximize the discriminating power of the features involved in the inspection decision, leads to an optimization problem based on a high-dimensional objective function. Several objective functions based on various metrics are proposed, their optimization being performed with the help of search heuristics such as genetic methods and simulated annealing. The experimental results obtained with an industrial inspection system are presented. They show the effectiveness of the presented approach, and validate the configuration assistant as well.
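
    A generic simulated-annealing loop of the kind such a configuration assistant could use is sketched below; the objective stands in for the discriminating-power criterion, and the parameter names and schedule are illustrative:

      import math
      import random

      def anneal(score, init, neighbor, t0=1.0, cooling=0.995, steps=20000):
          x, fx, t = init, score(init), t0
          best, fbest = x, fx
          for _ in range(steps):
              y = neighbor(x)
              fy = score(y)
              # Accept improvements always; accept degradations with a
              # probability that shrinks as the temperature cools.
              if fy >= fx or random.random() < math.exp((fy - fx) / t):
                  x, fx = y, fy
                  if fx > fbest:
                      best, fbest = x, fx
              t *= cooling
          return best, fbest

      # Toy 1-D "inspection parameter" with many local optima:
      f = lambda p: -((p[0] - 3.0) ** 2) + math.sin(5 * p[0])
      step = lambda p: [p[0] + random.uniform(-0.2, 0.2)]
      print(anneal(f, [0.0], step))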

  14. Head-aimed vision system improves tele-operated mobility

    NASA Astrophysics Data System (ADS)

    Massey, Kent

    2004-12-01

    A head-aimed vision system greatly improves the situational awareness and decision speed for tele-operations of mobile robots. With head-aimed vision, the tele-operator wears a head-mounted display and a small three-axis head-position measuring device. Wherever the operator looks, the remote sensing system "looks". When the system is properly designed, the operator's occipital lobes are "fooled" into believing that the operator is actually on the remote robot. The result is at least a doubling of situational awareness, threat identification speed, and target tracking ability. Proper system design must take into account precisely matching fields of view, optical gain, and latency below 100 milliseconds. When properly designed, a head-aimed system does not cause nausea, even with prolonged use.

  15. Design and control of active vision based mechanisms for intelligent robots

    NASA Technical Reports Server (NTRS)

    Wu, Liwei; Marefat, Michael M.

    1994-01-01

    In this paper, we propose the design of an active vision system for intelligent robot applications. The system has degrees of freedom of pan, tilt, vergence, camera height adjustment, and baseline adjustment, with a hierarchical control system structure. Based on this vision system, we discuss two problems involved in the binocular gaze stabilization process: fixation point selection and vergence disparity extraction. A hierarchical approach to determining the point of fixation from potential gaze targets, using an evaluation function representing human visual behavior toward outside stimuli, is suggested. We also characterize different visual tasks in the two cameras for vergence control purposes, and a phase-based method operating on binarized images to extract vergence disparity for vergence control is presented. A control algorithm for vergence control is discussed.
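
    One standard way to realize a phase-based disparity estimate, sketched here from the abstract rather than from the authors' implementation, is to filter corresponding rows of the two images with a complex Gabor and read disparity off the phase difference:

      import numpy as np

      def phase_disparity(left_row, right_row, freq=0.25, sigma=8.0):
          n = int(4 * sigma)
          t = np.arange(-n, n + 1)
          gabor = (np.exp(-t ** 2 / (2 * sigma ** 2))
                   * np.exp(2j * np.pi * freq * t))
          rl = np.convolve(left_row, gabor, mode="same")
          rr = np.convolve(right_row, gabor, mode="same")
          dphi = np.angle(rl * np.conj(rr))     # wrapped phase difference
          return dphi / (2 * np.pi * freq)      # disparity, pixels

    The sign of the disparity at the image centre tells the vergence controller whether to converge or diverge the cameras; the measurable range is limited to half a filter wavelength, which is why such schemes are usually run coarse-to-fine.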

  16. Multistrategy machine-learning vision system

    NASA Astrophysics Data System (ADS)

    Roberts, Barry A.

    1993-04-01

    Advances in the field of machine learning technology have yielded learning techniques with solid theoretical foundations that are applicable to the problems being encountered by object recognition systems. At Honeywell an object recognition system that works with high-level, symbolic, object features is under development. This system, named object recognition accomplished through combined learning expertise (ORACLE), employs both an inductive learning technique (i.e., conceptual clustering, CC) and a deductive technique (i.e., explanation-based learning, EBL) that are combined in a synergistic manner. This paper provides an overview of the ORACLE system, describes the machine learning mechanisms (EBL and CC) that it employs, and provides example results of system operation. The paper emphasizes the beneficial effect of integrating machine learning into object recognition systems.

  17. Crew and Display Concepts Evaluation for Synthetic / Enhanced Vision Systems

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Kramer, Lynda J.; Prinzel, Lawrence J., III

    2006-01-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that strive to eliminate low-visibility conditions as a causal factor in civil aircraft accidents and replicate the operational benefits of clear-day flight operations, regardless of the actual outside visibility condition. Enhanced Vision System (EVS) technologies are analogous and complementary in many respects to SVS, with the principal difference being that EVS is an imaging sensor presentation, as opposed to a database-derived image. The use of EVS in civil aircraft is projected to increase rapidly as the Federal Aviation Administration recently changed the aircraft operating rules under Part 91, revising the flight visibility requirements for conducting operations to civil airports. Operators conducting straight-in instrument approach procedures may now operate below the published approach minimums when using an approved EVS that shows the required visual references on the pilot's Head-Up Display. An experiment was conducted to evaluate the complementary use of SVS and EVS technologies, specifically focusing on new techniques for integration and/or fusion of synthetic and enhanced vision technologies and crew resource management while operating under the newly adopted FAA rules, which provide operating credit for EVS. Overall, the experimental data showed that significant improvements in situation awareness, without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying.

  18. A machine vision system for the calibration of digital thermometers

    NASA Astrophysics Data System (ADS)

    Vázquez-Fernández, Esteban; Dacal-Nieto, Angel; González-Jorge, Higinio; Martín, Fernando; Formella, Arno; Alvarez-Valado, Victor

    2009-06-01

    Automation is a key point in many industrial tasks such as calibration and metrology. In this context, machine vision has been shown to be a useful tool for automation support, especially when no other option is available. A system for the calibration of portable measurement devices has been developed. The system uses machine vision to obtain the numerical values shown by displays. A new approach based on human perception of digits, which works in parallel with other more classical classifiers, has been created. The results show the benefits of the system in terms of its usability and robustness, obtaining a success rate higher than 99% in display recognition. The system saves time and effort, and offers the possibility of scheduling calibration tasks without excessive attention from the laboratory technicians.

  19. Survey of computer vision-based natural disaster warning systems

    NASA Astrophysics Data System (ADS)

    Ko, ByoungChul; Kwak, Sooyeong

    2012-07-01

    With the rapid development of information technology, natural disaster prevention is growing as a new research field dealing with surveillance systems. To forecast and prevent the damage caused by natural disasters, the development of systems to analyze natural disasters using remote sensing, geographic information systems (GIS), and vision sensors has been receiving widespread interest over the last decade. This paper provides an up-to-date review of five different types of natural disasters and their corresponding warning systems using computer vision and pattern recognition techniques, such as wildfire smoke and flame detection, water level detection for flood prevention, coastal zone monitoring, and landslide detection. Finally, we conclude with some thoughts about future research directions.

  20. Fiber optic coherent laser radar 3D vision system

    SciTech Connect

    Clark, R.B.; Gallman, P.G.; Slotwinski, A.R.; Wagner, K.; Weaver, S.; Xu, Jieping

    1996-12-31

    This CLVS will provide a substantial advance in high-speed computer vision performance to support robotic Environmental Management (EM) operations. This 3D system employs a compact fiber-optic-based scanner and operates at a 128 x 128 pixel frame at one frame per second with a range resolution of 1 mm over its 1.5 meter working range. Using acousto-optic deflectors, the scanner is completely randomly addressable. This can provide live 3D monitoring for situations where it is necessary to update once per second. This can be used for decontamination and decommissioning operations in which robotic systems are altering the scene, such as in waste removal, surface scarifying, or equipment disassembly and removal. The fiber-optic coherent laser radar based system is immune to variations in lighting, color, or surface shading, which have plagued the reliability of existing 3D vision systems, while providing substantially superior range resolution.

  1. A novel container truck locating system based on vision technology

    NASA Astrophysics Data System (ADS)

    He, Junji; Shi, Li; Mi, Weijian

    2008-10-01

    On a container dock, the container truck must be parked right under the trolley of the container crane before loading (unloading) a container to (from) it. But it often takes nearly one minute to park the truck at the right position because of the difficulty of aligning the truck with the trolley. A monocular machine vision system is designed to locate the moving container truck, give information about how far the truck needs to move forward or backward, and thereby help the driver park the truck quickly and accurately. With this system time is saved and the efficiency of loading and unloading is increased. The mathematical model of this system is presented in detail, and the calibration method is then described. Finally, experimental results verify the validity and precision of this locating system. The prominent characteristics of this system are simplicity, ease of implementation, low cost, and effectiveness. Furthermore, this research verifies that a monocular vision system can recover 3D dimensions provided that the length and width of a container are known, which greatly extends the function and application of monocular vision systems.

  2. Intelligent vision system for autonomous vehicle operations

    NASA Technical Reports Server (NTRS)

    Scholl, Marija S.

    1991-01-01

    A complex optical system consisting of a 4f optical correlator with programmatic filters under the control of a digital on-board computer that operates at video rates for filter generation, storage, and management is described.

  3. Practical vision based degraded text recognition system

    NASA Astrophysics Data System (ADS)

    Mohammad, Khader; Agaian, Sos; Saleh, Hani

    2011-02-01

    Rapid growth and progress in the medical, industrial, security and technology fields mean more and more consideration for the use of camera-based optical character recognition (OCR). Applying OCR to scanned documents is quite mature, and there are many commercial and research products available on this topic. These products achieve acceptable recognition accuracy and reasonable processing times, especially with trained software and constrained text characteristics. Even though the application space for OCR is huge, it is quite challenging to design a single system capable of performing automatic OCR for text embedded in an image irrespective of the application. Challenges for OCR systems include images taken under natural real-world conditions: surface curvature, text orientation, font, size, lighting conditions, and noise. These and many other conditions make it extremely difficult to achieve reasonable character recognition. Performance of conventional OCR systems drops dramatically as the degradation level of the text image quality increases. In this paper, a new recognition method is proposed to recognize solid or dotted-line degraded characters. The degraded text string is localized and segmented using a new algorithm. The new method was implemented and tested using a development framework system capable of performing OCR on camera-captured images. The framework allows parameter tuning of the image-processing algorithm based on a training set of camera-captured text images. Novel methods were used for enhancement, text localization and segmentation, enabling a custom system capable of performing automatic OCR for different applications. The developed framework system includes new image enhancement, filtering, and segmentation techniques which enabled higher recognition accuracies, faster processing time, and lower energy consumption, compared with the best state of the art published

  4. Adaptive fuzzy system for 3-D vision

    NASA Technical Reports Server (NTRS)

    Mitra, Sunanda

    1993-01-01

    An adaptive fuzzy system using the concept of the Adaptive Resonance Theory (ART) type neural network architecture and incorporating fuzzy c-means (FCM) system equations for reclassification of cluster centers was developed. The Adaptive Fuzzy Leader Clustering (AFLC) architecture is a hybrid neural-fuzzy system which learns on-line in a stable and efficient manner. The system uses a control structure similar to that found in the Adaptive Resonance Theory (ART-1) network to identify the cluster centers initially. The initial classification of an input takes place in a two-stage process: a simple competitive stage and a distance metric comparison stage. The cluster prototypes are then incrementally updated by relocating the centroid positions from Fuzzy c-Means (FCM) system equations for the centroids and the membership values. The operational characteristics of AFLC and the critical parameters involved in its operation are discussed. The performance of the AFLC algorithm is presented through application of the algorithm to the Anderson Iris data, and laser-luminescent fingerprint image data. The AFLC algorithm successfully classifies features extracted from real data, discrete or continuous, indicating the potential strength of this new clustering algorithm in analyzing complex data sets. The hybrid neuro-fuzzy AFLC algorithm will enhance analysis of a number of difficult recognition and control problems involved with Tethered Satellite Systems and the on-orbit space shuttle attitude controller.
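
    The FCM equations that AFLC reuses for relocating centroids are compact enough to state directly. A plain NumPy sketch of the standard iteration (not the AFLC control structure itself):

      import numpy as np

      def fcm(X, c=3, m=2.0, iters=100, seed=0):
          # U[i, k]: membership of sample k in cluster i; columns sum to 1.
          rng = np.random.default_rng(seed)
          U = rng.random((c, len(X)))
          U /= U.sum(axis=0)
          for _ in range(iters):
              W = U ** m
              centers = (W @ X) / W.sum(axis=1, keepdims=True)
              d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)
              d = np.fmax(d, 1e-12)
              # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
              U = 1.0 / ((d[:, None, :] / d[None, :, :])
                         ** (2.0 / (m - 1))).sum(axis=1)
          return centers, U

      X = np.vstack([np.random.default_rng(s).normal(mu, 0.2, (40, 2))
                     for s, mu in enumerate((0.0, 2.0, 4.0))])
      centers, U = fcm(X)
      print(np.sort(centers[:, 0]))     # ~ [0, 2, 4]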

  5. A vision system for an unmanned nonlethal weapon

    NASA Astrophysics Data System (ADS)

    Kogut, Greg; Drymon, Larry

    2004-10-01

    Unmanned weapons remove humans from deadly situations. However, some systems, such as unmanned guns, are difficult to control remotely. It is difficult for a soldier to perform the complex tasks of identifying and aiming at specific points on targets from a remote location. This paper describes a computer vision and control system for providing autonomous control of unmanned guns developed at Space and Naval Warfare Systems Center, San Diego (SSC San Diego). The test platform, consisting of a non-lethal gun mounted on a pan-tilt mechanism, can be used as an unattended device or mounted on a robot for mobility. The system operates with a degree of autonomy determined by a remote user that ranges from teleoperated to fully autonomous. The teleoperated mode consists of remote joystick control over all aspects of the weapon, including aiming, arming, and firing. Visual feedback is provided by near-real-time video feeds from boresight and wide-angle cameras. The semi-autonomous mode provides the user with tracking information overlaid on the real-time video. This provides the user with information on all detected targets being tracked by the vision system. The user selects a target with a mouse, and the system automatically aims the gun at it. Arming and firing are still performed by teleoperation. In fully autonomous mode, all aspects of gun control are performed by the vision system.

  6. Development of a machine vision system for automated structural assembly

    NASA Technical Reports Server (NTRS)

    Sydow, P. Daniel; Cooper, Eric G.

    1992-01-01

    Research is being conducted at the LaRC to develop a telerobotic assembly system designed to construct large space truss structures. This research program was initiated within the past several years, and a ground-based test-bed was developed to evaluate and expand the state of the art. Test-bed operations currently use predetermined ('taught') points for truss structural assembly. Total dependence on the use of taught points for joint receptacle capture and strut installation is neither robust nor reliable enough for space operations. Therefore, a machine vision sensor guidance system is being developed to locate and guide the robot to a passive target mounted on the truss joint receptacle. The vision system hardware includes a miniature video camera, passive targets mounted on the joint receptacles, target illumination hardware, and an image processing system. Discrimination of the target from background clutter is accomplished through standard digital processing techniques. Once the target is identified, a pose estimation algorithm is invoked to determine the location, in three-dimensional space, of the target relative to the robot's end-effector. Preliminary test results of the vision system in the Automated Structural Assembly Laboratory with a range of lighting and background conditions indicate that it is fully capable of successfully identifying joint receptacle targets throughout the required operational range. Controlled optical bench test results indicate that the system can also provide the pose estimation accuracy to define the target position.

  7. Novel Corrosion Sensor for Vision 21 Systems

    SciTech Connect

    Heng Ban; Bharat Soni

    2007-03-31

    Advanced sensor technology is identified as a key component for advanced power systems for future energy plants that would have virtually no environmental impact. This project intends to develop a novel high temperature corrosion sensor and subsequent measurement system for advanced power systems. Fireside corrosion is the leading mechanism for boiler tube failures and has emerged to be a significant concern for current and future energy plants due to the introduction of technologies targeting emissions reduction, efficiency improvement, or fuel/oxidant flexibility. Corrosion damage can lead to catastrophic equipment failure, explosions, and forced outages. Proper management of corrosion requires real-time indication of corrosion rate. However, short-term, on-line corrosion monitoring systems for fireside corrosion remain a technical challenge to date due to the extremely harsh combustion environment. The overall goal of this project is to develop a technology for on-line fireside corrosion monitoring. This objective is achieved by the laboratory development of sensors and instrumentation, testing them in a laboratory muffle furnace, and eventually testing the system in a coal-fired furnace. This project successfully developed two types of sensors and measurement systems, and successfully tested them in a muffle furnace in the laboratory. The capacitance sensor had a high fabrication cost and might be more appropriate in other applications. The low-cost resistance sensor was tested in a power plant burning eastern bituminous coals. The results show that the fireside corrosion measurement system can be used to determine the corrosion rate at waterwall and superheater locations. Electron microscope analysis of the corroded sensor surface provided a detailed picture of the corrosion process.

  8. Novel Corrosion Sensor for Vision 21 Systems

    SciTech Connect

    Heng Ban

    2005-12-01

    Advanced sensor technology is identified as a key component for advanced power systems for future energy plants that would have virtually no environmental impact. This project intends to develop a novel high temperature corrosion sensor and subsequent measurement system for advanced power systems. Fireside corrosion is the metal loss caused by chemical reactions on surfaces exposed to the combustion environment. Such corrosion is the leading mechanism for boiler tube failures and has emerged to be a significant concern for current and future energy plants due to the introduction of technologies targeting emissions reduction, efficiency improvement, or fuel/oxidant flexibility. Corrosion damage can lead to catastrophic equipment failure, explosions, and forced outages. Proper management of corrosion requires real-time indication of corrosion rate. However, short-term, on-line corrosion monitoring systems for fireside corrosion remain a technical challenge to date due to the extremely harsh combustion environment. The overall objective of this project is to develop a technology for on-line corrosion monitoring based on a new concept. This objective is to be achieved by laboratory development of the sensor and instrumentation, testing of the measurement system in a laboratory muffle furnace, and eventually testing the system in a coal-fired furnace. The initial plan for testing at the coal-fired pilot-scale furnace was replaced by testing in a power plant, because operation at the power plant is continuous and more stable. The first two-year effort was completed with the successful development of the sensor and measurement system, and successful testing in a muffle furnace. Because of the potentially high cost of sensor fabrication, a different type of sensor was used and tested in a power plant burning eastern bituminous coals. This report summarizes the experiences and results of the first two years of the three-year project, which include laboratory

  9. Vision of the active limb impairs bimanual motor tracking in young and older adults

    PubMed Central

    Boisgontier, Matthieu P.; Van Halewyck, Florian; Corporaal, Sharissa H. A.; Willacker, Lina; Van Den Bergh, Veerle; Beets, Iseult A. M.; Levin, Oron; Swinnen, Stephan P.

    2014-01-01

    Despite the intensive investigation of bimanual coordination, it remains unclear how directing vision toward either limb influences performance, and whether this influence is affected by age. To examine these questions, we assessed the performance of young and older adults on a bimanual tracking task in which they matched motor-driven movements of their right hand (passive limb) with their left hand (active limb) according to in-phase and anti-phase patterns. Performance in six visual conditions involving central vision, and/or peripheral vision of the active and/or passive limb was compared to performance in a no vision condition. Results indicated that directing central vision to the active limb consistently impaired performance, with higher impairment in older than young adults. Conversely, directing central vision to the passive limb improved performance in young adults, but less consistently in older adults. In conditions involving central vision of one limb and peripheral vision of the other limb, similar effects were found to those for conditions involving central vision of one limb only. Peripheral vision alone resulted in similar or impaired performance compared to the no vision (NV) condition. These results indicate that the locus of visual attention is critical for bimanual motor control in young and older adults, with older adults being either more impaired or less able to benefit from a given visual condition. PMID:25452727

  10. Development of a distributed vision system for industrial conditions

    NASA Astrophysics Data System (ADS)

    Weiss, Michael; Schiller, Arnulf; O'Leary, Paul; Fauster, Ewald; Schalk, Peter

    2003-04-01

    This paper presents a prototype system to monitor a hot glowing wire during the rolling process with respect to quality-relevant aspects. A measurement system based on machine vision and a communication framework integrating distributed measurement nodes are introduced. Machine vision is used to evaluate the wire quality parameters; for this purpose an image processing algorithm, based on dual Grassmannian coordinates and fitting parallel lines by singular value decomposition, is formulated. Furthermore, a communication framework is presented which implements anonymous tuplespace communication, a private network based on TCP/IP, and a consistent Java implementation of all components used. Additionally, industrial requirements such as real-time communication to IEC-61131-conformant digital IOs (Modbus TCP/IP protocol), the implementation of a watchdog pattern, and the integration of multiple operating systems (Linux, QNX and Windows) are outlined. The deployment of this framework to the real-world problem of the wire rolling mill is presented.
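
    The abstract's parallel-line fit can be approximated, without reproducing the dual Grassmannian formulation, by centering each edge's points, pooling them, and taking the dominant right singular vector as the shared line direction; a hedged total-least-squares sketch:

      import numpy as np

      def fit_parallel_edges(pts_a, pts_b):
          # Pooling the centered point sets forces one common direction.
          ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)
          pooled = np.vstack([pts_a - ca, pts_b - cb])
          _, _, vt = np.linalg.svd(pooled, full_matrices=False)
          direction, normal = vt[0], vt[1]
          width = abs((cb - ca) @ normal)    # distance between the edges
          return direction, width

      a = np.array([[x, 0.5 * x + 1] for x in range(50)], dtype=float)
      b = np.array([[x, 0.5 * x + 7] for x in range(50)], dtype=float)
      print(fit_parallel_edges(a, b))        # width ~ 5.37 = 6/sqrt(1.25)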

  11. A vision fusion treatment system based on ATtiny26L

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoqing; Zhang, Chunxi; Wang, Jiqiang

    2006-11-01

    Vision fusion treatment is an important and effective therapy for children with strabismus. A vision fusion treatment system based on the principle of the eyes following a moving visual survey pole is first put forward. In this system the visual survey pole starts about 35 centimeters from the patient's face before moving to the middle position between the two eyes. The patient's eyes follow the movement of the visual survey pole. When they cannot follow, one or both eyes turn to a position other than that of the visual survey pole; this displacement is recorded each time. A popular single-chip microcomputer, the ATtiny26L, is used in this system; its PWM output signal drives the visual survey pole at a continuously variable speed. The movement of the visual survey pole follows the modulation law by which the eyes track it.

  12. Low Cost Night Vision System for Intruder Detection

    NASA Astrophysics Data System (ADS)

    Ng, Liang S.; Yusoff, Wan Azhar Wan; R, Dhinesh; Sak, J. S.

    2016-02-01

    The growth in production of Android devices has resulted in greater functionality as well as lower costs. This has made previously more expensive systems such as night vision affordable for more businesses and end users. We designed and implemented robust and low cost night vision systems based on red-green-blue (RGB) colour histograms for a static camera as well as a camera on an unmanned aerial vehicle (UAV), using the OpenCV library on Intel-compatible notebook computers running the Ubuntu Linux operating system with less than 8 GB of RAM. They were tested against human intruders under low light conditions (indoor, outdoor, night time) and were shown to have successfully detected the intruders.
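
    A minimal OpenCV version of an RGB-histogram change detector in the spirit of the paper (the exact features and thresholds are not given in the abstract, so everything below is illustrative) compares each frame's colour histogram against a background signature:

      import cv2

      def rgb_hist(frame, bins=32):
          h = cv2.calcHist([frame], [0, 1, 2], None,
                           [bins] * 3, [0, 256] * 3)
          return cv2.normalize(h, h).flatten()

      cap = cv2.VideoCapture(0)          # static low-light camera
      ok, bg = cap.read()
      bg_hist = rgb_hist(bg)             # background colour signature
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          # Bhattacharyya distance grows when an intruder changes the
          # scene's colour distribution; 0.3 is an illustrative threshold.
          dist = cv2.compareHist(bg_hist, rgb_hist(frame),
                                 cv2.HISTCMP_BHATTACHARYYA)
          if dist > 0.3:
              print("possible intruder, distance =", dist)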

  13. The Systemic Vision of the Educational Learning

    ERIC Educational Resources Information Center

    Lima, Nilton Cesar; Penedo, Antonio Sergio Torres; de Oliveira, Marcio Mattos Borges; de Oliveira, Sonia Valle Walter Borges; Queiroz, Jamerson Viegas

    2012-01-01

    As the sophistication of technology increases, so does the demand for quality in education. The expectation of quality has promoted a broad range of products and systems, including in education. These factors include the increased diversity in the student body, which requires greater emphasis that allows a simple and dynamic model in the…

  14. Early light vision isomorphic singular (ELVIS) system

    NASA Astrophysics Data System (ADS)

    Jannson, Tomasz P.; Ternovskiy, Igor V.; DeBacker, Theodore A.; Caulfield, H. John

    2000-07-01

    In shallow-water military scenarios, UUVs (Unmanned Underwater Vehicles) are required to protect assets against mines, swimmers, and other underwater military objects. It would be desirable if such UUVs could autonomously see in a way similar to humans, at least at the level of the primary visual cortex. In this paper, an approach to the development of such a UUV system is proposed.

  15. NOVEL CORROSION SENSOR FOR VISION 21 SYSTEMS

    SciTech Connect

    Heng Ban

    2004-12-01

    Advanced sensor technology is identified as a key component for advanced power systems for future energy plants that would have virtually no environmental impact. This project intends to develop a novel high temperature corrosion sensor and subsequent measurement system for advanced power systems. Fireside corrosion is the metal loss caused by chemical reactions on surfaces exposed to the combustion environment. Such corrosion is the leading mechanism for boiler tube failures and has emerged to be a significant concern for current and future energy plants due to the introduction of technologies targeting emissions reduction, efficiency improvement, or fuel/oxidant flexibility. Corrosion damage can lead to catastrophic equipment failure, explosions, and forced outages. Proper management of corrosion requires real-time indication of corrosion rate. However, short-term, on-line corrosion monitoring systems for fireside corrosion remain a technical challenge to date due to the extremely harsh combustion environment. The overall objective of this proposed project is to develop a technology for on-line corrosion monitoring based on a new concept. This report describes the initial results from the first-year effort of the three-year study that include laboratory development and experiment, and pilot combustor testing.

  16. Development of a vision system for an intelligent ground vehicle

    NASA Astrophysics Data System (ADS)

    Nagel, Robert L.; Perry, Kenneth; Stone, Robert B.; McAdams, Daniel A.

    2009-01-01

    The development of a vision system for an autonomous ground vehicle designed and constructed for the Intelligent Ground Vehicle Competition (IGVC) is discussed. The requirements for the vision system of the autonomous vehicle are explored via functional analysis considering the flows (materials, energies and signals) into the vehicle and the changes required of each flow within the vehicle system. Functional analysis leads to a vision system based on a laser range finder (LIDAR) and a camera. Input from the vision system is processed via a ray-casting algorithm whereby the camera data and the LIDAR data are analyzed as a single array of points representing obstacle locations, which for the IGVC consist of white lines on the horizontal plane and construction markers on the vertical plane. Functional analysis also leads to a multithreaded application where the ray-casting algorithm is a single thread of the vehicle's software, which consists of multiple threads controlling motion, providing feedback, and processing the data from the camera and LIDAR. LIDAR data is collected as distances and angles from the front of the vehicle to obstacles. Camera data is processed using an adaptive threshold algorithm to identify color changes within the collected image; the image is also corrected for camera angle distortion, adjusted to the global coordinate system, and processed using a least-squares method to identify white boundary lines. Our IGVC robot, MAX, is used as the running example for all methods discussed in the paper, and all testing and results provided are based on it.
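
    A hedged sketch of the white-line step described above: adaptively threshold the camera image to tolerate lighting changes, then least-squares-fit a line to the surviving pixels. Parameters are illustrative, not MAX's actual tuning:

      import numpy as np
      import cv2

      def find_boundary_line(gray):
          # Negative C keeps only pixels clearly brighter than their
          # local neighbourhood, i.e. candidate white-line pixels.
          mask = cv2.adaptiveThreshold(gray, 255,
                                       cv2.ADAPTIVE_THRESH_MEAN_C,
                                       cv2.THRESH_BINARY, 31, -10)
          ys, xs = np.nonzero(mask)
          if len(xs) < 2:
              return None
          slope, intercept = np.polyfit(xs, ys, 1)   # y = slope*x + b
          return slope, intercept        # hand off to the ray-caster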

  17. Healthcare Information Systems - Requirements and Vision

    NASA Astrophysics Data System (ADS)

    Williams, John G.

    The introduction of sophisticated information, communications and technology into health care is not a simple task, as demonstrated by the difficulties encountered by the Department of Health's multi-billion programme for the NHS. This programme has successfully implemented much of the infrastructure needed to support the activities of the NHS, but has made less progress with electronic patient records. The case for health records that are focused on the individual patient will be outlined, and the need for these to be underpinned by professionally agreed standards for structure and content. Some of the challenges will be discussed, and the benefits to health care and clinical research will be explored.

  18. Displacement measurement system for inverters using computer micro-vision

    NASA Astrophysics Data System (ADS)

    Wu, Heng; Zhang, Xianmin; Gan, Jinqiang; Li, Hai; Ge, Peng

    2016-06-01

    We propose a practical system for noncontact displacement measurement of inverters using computer micro-vision at the sub-micron scale. The measuring method of the proposed system is based on a fast template matching algorithm with an optical microscope. A laser interferometer measurement (LIM) system is built for comparison. Experimental results demonstrate that the proposed system can achieve the same performance as the LIM system while offering higher operability and stability. The measuring accuracy is 0.283 μm.
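
    A common way to realize fast template matching with sub-pixel output, offered as an illustrative sketch rather than the authors' implementation, is normalized cross-correlation plus parabolic interpolation of the correlation peak:

      import cv2

      def displacement(ref, cur, box):
          x, y, w, h = box                       # feature region in `ref`
          tmpl = ref[y:y + h, x:x + w]
          res = cv2.matchTemplate(cur, tmpl, cv2.TM_CCOEFF_NORMED)
          _, _, _, (px, py) = cv2.minMaxLoc(res)

          def subpix(r, p):
              # Fit a parabola through the peak and its two neighbours.
              if 0 < p < len(r) - 1:
                  denom = r[p - 1] - 2 * r[p] + r[p + 1]
                  if denom:
                      return p + 0.5 * (r[p - 1] - r[p + 1]) / denom
              return float(p)

          sx = subpix(res[py, :], px)
          sy = subpix(res[:, px], py)
          return sx - x, sy - y    # pixel shift; scale by um-per-pixel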

  19. Telerobotic rendezvous and docking vision system architecture

    NASA Technical Reports Server (NTRS)

    Gravely, Ben; Myers, Donald; Moody, David

    1992-01-01

    This research program has successfully demonstrated a new target label architecture that allows a microcomputer to determine the position, orientation, and identity of an object. It contains a CAD-like database with specific geometric information about the object for approach, grasping, and docking maneuvers. Successful demonstrations were performed selecting and docking an ORU box with either of two ORU receptacles. Small, but significant differences were seen in the two camera types used in the program, and camera sensitive program elements have been identified. The software has been formatted into a new co-autonomy system which provides various levels of operator interaction and promises to allow effective application of telerobotic systems while code improvements are continuing.

  20. Honey characterization using computer vision system and artificial neural networks.

    PubMed

    Shafiee, Sahameh; Minaei, Saeid; Moghaddam-Charkari, Nasrollah; Barzegar, Mohsen

    2014-09-15

    This paper reports the development of a computer vision system (CVS) for non-destructive characterization of honey based on colour and its correlated chemical attributes including ash content (AC), antioxidant activity (AA), and total phenolic content (TPC). Artificial neural network (ANN) models were applied to transform RGB values of images to CIE L*a*b* colourimetric measurements and to predict AC, TPC and AA from colour features of images. The developed ANN models were able to convert RGB values to CIE L*a*b* colourimetric parameters with a low generalization error of 1.01±0.99. In addition, the developed models for prediction of AC, TPC and AA showed high performance based on colour parameters of honey images, as the R² values for prediction were 0.99, 0.98, and 0.87 for AC, AA and TPC, respectively. The experimental results show the effectiveness and possibility of applying CVS for non-destructive honey characterization by the industry. PMID:24767037
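
    The ANN learns a device-specific RGB-to-CIELAB mapping; the closed-form sRGB/D65 conversion below is the standard colourimetric baseline such a model approximates, included as a reference sketch:

      import numpy as np

      def rgb_to_lab(rgb):
          rgb = np.asarray(rgb, dtype=float) / 255.0
          lin = np.where(rgb <= 0.04045, rgb / 12.92,
                         ((rgb + 0.055) / 1.055) ** 2.4)
          M = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])
          xyz = (M @ lin) / np.array([0.95047, 1.0, 1.08883])  # D65 white
          f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz),
                       xyz / (3 * (6 / 29) ** 2) + 4 / 29)
          L = 116 * f[1] - 16
          return np.array([L, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

      print(rgb_to_lab([200, 150, 60]))   # an amber, honey-like colour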

  1. Object tracking in a stereo and infrared vision system

    NASA Astrophysics Data System (ADS)

    Colantonio, S.; Benvenuti, M.; Di Bono, M. G.; Pieri, G.; Salvetti, O.

    2007-01-01

    In this paper, we deal with the problem of real-time detection, recognition and tracking of moving objects in open and unknown environments using a combined infrared (IR) and visible vision system. A thermal camera and two synchronized visible stereo cameras are used to acquire multi-source information: three-dimensional data about target geometry and thermal information are combined to improve the robustness of the tracking procedure. First, target detection is performed by extracting characteristic features from the images and storing the computed parameters in a dedicated database; second, the tracking task is carried out using two different computational approaches. A Hierarchical Artificial Neural Network (HANN) is used during active tracking for the recognition of the actual target, while, when partial occlusions or masking occur, a database retrieval method is used to support the search for the correct target. A prototype has been tested on case studies regarding the identification and tracking of animals moving at night in an open environment, and the surveillance of known scenes for unauthorized access control.

  2. A Vision For A Land Observing System

    NASA Astrophysics Data System (ADS)

    Lewis, P.; Gomez-Dans, J.; Disney, M.

    2013-12-01

    In this paper, we argue that the exploitation of EO land surface data for modelling and monitoring would be greatly facilitated by the routine generation of interoperable low-level surface bidirectional reflectance factor (BRF) products. We consider evidence from a range of ESA, NASA and other products and studies as well as underlying research to outline the features such a processing system might have, and to define initial research priorities.

  3. Extracting depth by binocular stereo in a robot vision system

    SciTech Connect

    Marapane, S.B.; Trivedi, M.M.

    1988-01-01

    A new generation of robotic systems will operate in complex, unstructured environments utilizing sophisticated sensory mechanisms. Vision and range will be two of the most important sensory modalities such systems will utilize to sense their operating environment. Measurement of depth is critical for the success of many robotic tasks, such as object recognition and location; obstacle avoidance and navigation; and object inspection. In this paper we consider the development of a binocular stereo technique for extracting depth information in a robot vision system for inspection and manipulation tasks. The ability to produce precise depth measurements over a wide range of distances and the passivity of the approach make binocular stereo techniques attractive and appropriate for range finding in a robotic environment. This paper describes work in progress towards the development of a region-based binocular stereo technique for a robot vision system designed for inspection and manipulation, and presents preliminary experiments designed to evaluate the performance of the approach. Results of these studies show promise for the region-based stereo matching approach. 16 refs., 1 fig.
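
    A minimal region-based matching sketch, assuming rectified images (block size and search range are illustrative): slide a block from the left image along the corresponding row of the right image and keep the disparity that minimizes the sum of squared differences:

      import numpy as np

      def disparity_ssd(left, right, y, x, block=7, max_d=32):
          r = block // 2
          ref = left[y - r:y + r + 1, x - r:x + r + 1].astype(float)
          errors = []
          for d in range(max_d):
              if x - d - r < 0:
                  break
              cand = right[y - r:y + r + 1,
                           x - d - r:x - d + r + 1].astype(float)
              errors.append(((ref - cand) ** 2).sum())
          return int(np.argmin(errors))   # depth then follows from Z = f*B/d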

  4. A Vision System For A Mars Rover

    NASA Astrophysics Data System (ADS)

    Wilcox, Brian H.; Gennery, Donald B.; Mishkin, Andrew H.; Cooper, Brian K.; Lawton, Teri B.; Lay, N. Keith; Katzmann, Steven P.

    1987-01-01

    A Mars rover must be able to sense its local environment with sufficient resolution and accuracy to avoid local obstacles and hazards while moving a significant distance each day. Power efficiency and reliability are extremely important considerations, making stereo correlation an attractive method of range sensing compared to laser scanning, if the computational load and correspondence errors can be handled. Techniques for treatment of these problems, including the use of more than two cameras to reduce correspondence errors and possibly to limit the computational burden of stereo processing, have been tested at JPL. Once a reliable range map is obtained, it must be transformed to a plan view and compared to a stored terrain database, in order to refine the estimated position of the rover and to improve the database. The slope and roughness of each terrain region are computed, which form the basis for a traversability map allowing local path planning. Ongoing research and field testing of such a system is described.

  5. International Border Management Systems (IBMS) Program: visions and strategies.

    SciTech Connect

    McDaniel, Michael; Mohagheghi, Amir Hossein

    2011-02-01

    Sandia National Laboratories (SNL), International Border Management Systems (IBMS) Program is working to establish a long-term border security strategy with United States Central Command (CENTCOM). Efforts are being made to synthesize border security capabilities and technologies maintained at the Laboratories, and coordinate with subject matter expertise from both the New Mexico and California offices. The vision for SNL is to provide science and technology support for international projects and engagements on border security.

  6. Establishing an evoked-potential vision-tracking system

    NASA Technical Reports Server (NTRS)

    Skidmore, Trent A.

    1991-01-01

    This paper presents experimental evidence to support the feasibility of an evoked-potential vision-tracking system. The topics discussed are stimulator construction, verification of the photic driving response in the electroencephalogram, a method for performing frequency separation, and a transient-analysis example. The final issue considered is that of object multiplicity (concurrent visual stimuli with different flashing rates). The paper concludes by discussing several applications currently under investigation.

  7. Artificial intelligence, expert systems, computer vision, and natural language processing

    NASA Technical Reports Server (NTRS)

    Gevarter, W. B.

    1984-01-01

    An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.

  8. The advantages of stereo vision in a face recognition system

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2014-06-01

    Humans can recognize a face with binocular vision, while computers typically use a single face image. It is known that the performance of face recognition (by a computer) can be improved using the score fusion of multimodal images and multiple algorithms. A question is: can we apply stereo vision to a face recognition system? We know that human binocular vision has many advantages, such as stereopsis (3D vision), binocular summation, and singleness of vision including fusion of binocular images (the cyclopean image). For face recognition, a 3D face or 3D facial features are typically computed from a pair of stereo images. In human visual processes, binocular summation and singleness of vision are similar to image fusion processes. In this paper, we propose an advanced face recognition system with stereo imaging capability, which is comprised of two 2-in-1 multispectral (visible and thermal) cameras and three recognition algorithms (circular Gaussian filter, face pattern byte, and linear discriminant analysis [LDA]). Specifically, we present and compare stereo fusion at three levels (images, features, and scores) by using stereo images (from the left and right cameras). Image fusion is achieved with three methods (Laplacian pyramid, wavelet transform, average); feature fusion is done with three logical operations (AND, OR, XOR); and score fusion is implemented with four classifiers (LDA, k-nearest neighbor, support vector machine, binomial logistic regression). System performance is measured by the probability of correct classification (PCC) rate (reported as the accuracy rate in this paper) and the false accept rate (FAR). The proposed approaches were validated with a multispectral stereo face dataset from 105 subjects. Experimental results show that any type of stereo fusion can improve the PCC while reducing the FAR. It appears that stereo image/feature fusion is superior to stereo score fusion in terms of recognition performance. Further score fusion after image
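
    Of the three fusion levels, feature-level fusion is the simplest to illustrate. A minimal sketch, assuming the per-camera feature maps have already been binarized (the boolean representation is our assumption; only the operation names come from the abstract):

      import numpy as np

      def fuse_features(f_left, f_right, op="AND"):
          # Stereo feature fusion by a logical operation on binarized
          # feature maps from the left and right cameras.
          ops = {"AND": np.logical_and, "OR": np.logical_or, "XOR": np.logical_xor}
          return ops[op](f_left, f_right)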

  9. A VISION of Advanced Nuclear System Cost Uncertainty

    SciTech Connect

    J'Tia Taylor; David E. Shropshire; Jacob J. Jacobson

    2008-08-01

    VISION (VerifIable fuel cycle SImulatiON) is the Advanced Fuel Cycle Initiative’s and Global Nuclear Energy Partnership Program’s nuclear fuel cycle systems code designed to simulate the US commercial reactor fleet. The code is a dynamic stock and flow model that tracks the mass of materials at the isotopic level through the entire nuclear fuel cycle. As VISION is run, it calculates the decay of 70 isotopes including uranium, plutonium, minor actinides, and fission products. VISION.ECON is a sub-model of VISION that was developed to estimate fuel cycle and reactor costs. The sub-model uses the mass flows generated by VISION for each of the fuel cycle functions (referred to as modules) and calculates the annual cost based on cost distributions provided by the Advanced Fuel Cycle Cost Basis Report. Costs are aggregated for each fuel cycle module, and the modules are aggregated into front end, back end, recycling, reactor, and total fuel cycle costs. The software also has the capability to perform system sensitivity analysis. This capability may be used to analyze the impacts on costs due to system uncertainty effects. This paper will provide a preliminary evaluation of the cost uncertainty effects attributable to 1) key reactor and fuel cycle system parameters and 2) scheduling variations. The evaluation will focus on the uncertainty in the total cost of electricity and fuel cycle costs. First, a single light water reactor (LWR) using mixed oxide fuel is examined to ascertain the effects of simple parameter changes. Three system parameters (burnup, capacity factor, and reactor power) are varied from nominal cost values and the effect on the total cost of electricity is measured. These simple parameter changes are then measured in more complex 2-tier scenarios including LWRs with mixed fuel and fast recycling reactors using transuranic fuel. Other system parameters are evaluated and results will be presented in the paper. Secondly, the uncertainty due to

  10. [A Meridian Visualization System Based on Impedance and Binocular Vision].

    PubMed

    Su, Qiyan; Chen, Xin

    2015-03-01

    To ensure that the meridian can be measured and displayed correctly on the human body surface, a visualization method based on impedance and binocular vision is proposed. First, an alternating constant-current source injects a current signal into the human skin surface; then, according to the low-impedance characteristics of the meridian, a multi-channel detecting instrument measures the voltage across each pair of electrodes, thereby locating the channel of the meridian, and the data are transmitted to the host computer through serial port communication. Secondly, the intrinsic and extrinsic parameters of the cameras are obtained by Zhang's camera calibration method, 3D information on the meridian location is obtained by corner selection and matching of the optical target, and the coordinates of this 3D information are transformed according to the binocular vision principle. Finally, curve fitting and image fusion technology are used to realize the meridian visualization. The test results show that the system can achieve real-time detection and accurate display of the meridian. PMID:26524777
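
    The abstract names two standard geometric steps: Zhang's method for the camera parameters and binocular triangulation of the matched target corners. A minimal sketch of both, using OpenCV as a stand-in for the paper's own implementation (all function and variable names are our assumptions):

      import cv2
      import numpy as np

      def calibrate_zhang(object_pts, image_pts, image_size):
          # object_pts/image_pts: lists of (N,3)/(N,2) float32 arrays,
          # one pair per view of the calibration target.
          rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
              object_pts, image_pts, image_size, None, None)
          return K, dist

      def triangulate(K1, K2, R, t, pts1, pts2):
          # R, t: pose of the right camera relative to the left.
          P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
          P2 = K2 @ np.hstack([R, t.reshape(3, 1)])
          X = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)  # 4xN homogeneous
          return (X[:3] / X[3]).T                            # Nx3 points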

  11. Processor design optimization methodology for synthetic vision systems

    NASA Astrophysics Data System (ADS)

    Wren, Bill; Tarleton, Norman G.; Symosek, Peter F.

    1997-06-01

    Architecture optimization requires numerous inputs, from hardware to software specifications. The task of varying these input parameters to obtain an optimal system architecture with regard to cost, specified performance, and method of upgrade considerably increases the development cost, due to the infinitude of events, most of which cannot be defined by any simple enumeration or set of inequalities. We address the use of a PC-based tool that applies genetic algorithms to optimize the architecture for an avionics synthetic vision system, specifically a passive millimeter-wave system implementation.
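
    As a rough illustration of the approach, here is a bare-bones genetic algorithm over a vector of architecture parameters; the selection/crossover/mutation scheme and all hyper-parameters are illustrative assumptions of ours, not details of the tool:

      import numpy as np

      rng = np.random.default_rng(0)

      def ga_optimize(fitness, n_params, pop=40, gens=100, mut=0.1):
          # Rank selection, uniform crossover, Gaussian mutation.
          P = rng.random((pop, n_params))
          for _ in range(gens):
              scores = np.array([fitness(p) for p in P])
              parents = P[np.argsort(scores)[::-1][: pop // 2]]  # fittest half
              mates = parents[rng.integers(0, len(parents), len(parents))]
              mask = rng.random(parents.shape) < 0.5             # uniform crossover
              children = np.where(mask, parents, mates)
              children += mut * rng.standard_normal(children.shape)
              P = np.vstack([parents, children])
          return P[np.argmax([fitness(p) for p in P])]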

  12. A stereo vision-based obstacle detection system in vehicles

    NASA Astrophysics Data System (ADS)

    Huh, Kunsoo; Park, Jaehak; Hwang, Junyeon; Hong, Daegun

    2008-02-01

    Obstacle detection is a crucial issue for driver assistance systems as well as for autonomous vehicle guidance, and it has to be performed with high reliability to avoid any potential collision with the vehicle in front. Vision-based obstacle detection systems are regarded as promising for this purpose because they require little infrastructure on a highway. However, the feasibility of these systems in passenger cars requires accurate and robust sensing performance. In this paper, an obstacle detection system using stereo vision sensors is developed. The system utilizes feature matching, the epipolar constraint and feature aggregation in order to robustly detect the initial corresponding pairs. After the initial detection, the system executes a tracking algorithm for the obstacles. The proposed system can detect a front obstacle, a leading vehicle and a vehicle cutting into the lane, and the position parameters of the obstacles and leading vehicles can then be obtained. The proposed obstacle detection system is implemented on a passenger car and its performance is verified experimentally.
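
    The epipolar constraint mentioned above is commonly enforced by estimating a fundamental matrix with RANSAC and discarding correspondences that violate it. A minimal sketch, assuming (N,2) float32 arrays of initially matched points; the threshold and confidence values are illustrative only:

      import cv2

      def epipolar_filter(pts_l, pts_r, thresh=1.0):
          # Keep only matches consistent with a RANSAC fundamental matrix.
          F, mask = cv2.findFundamentalMat(pts_l, pts_r,
                                           cv2.FM_RANSAC, thresh, 0.99)
          keep = mask.ravel() == 1
          return pts_l[keep], pts_r[keep], F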

  13. Machine vision system for measuring conifer seedling morphology

    NASA Astrophysics Data System (ADS)

    Rigney, Michael P.; Kranzler, Glenn A.

    1995-01-01

    A PC-based machine vision system providing rapid measurement of bare-root tree seedling morphological features has been designed. The system uses backlighting and a 2048-pixel line-scan camera to acquire images with transverse resolutions as high as 0.05 mm for precise measurement of stem diameter. Individual seedlings are manually loaded on a conveyor belt and inspected by the vision system in less than 0.25 seconds. Designed for quality control and morphological data acquisition by nursery personnel, the system provides a user-friendly, menu-driven graphical interface. The system automatically locates the seedling root collar and measures stem diameter, shoot height, sturdiness ratio, root mass length, projected shoot and root area, shoot-root area ratio, and percent fine roots. Sample statistics are computed for each measured feature. Measurements for each seedling may be stored for later analysis. Feature measurements may be compared with multi-class quality criteria to determine sample quality or to perform multi-class sorting. Statistical summary and classification reports may be printed to facilitate the communication of quality concerns with grading personnel. Tests were conducted at a commercial forest nursery to evaluate measurement precision. Four quality control personnel measured root collar diameter, stem height, and root mass length on each of 200 conifer seedlings. The same seedlings were inspected four times by the machine vision system. Machine stem diameter measurement precision was four times greater than that of manual measurements. Machine and manual measurements had comparable precision for shoot height and root mass length.

  14. Users' subjective evaluation of electronic vision enhancement systems.

    PubMed

    Culham, Louise E; Chabra, Anthony; Rubin, Gary S

    2009-03-01

    The aims of this study were (1) to elicit users' responses to four electronic head-mounted devices (Jordy, Flipperport, Maxport and NuVision) and (2) to correlate users' opinions with performance. Ten patients with early onset macular disease (EOMD) and 10 with age-related macular disease (AMD) used these electronic vision enhancement systems (EVESs) for a variety of visual tasks. A questionnaire designed in-house and a modified VF-14 were used to evaluate the responses. Following initial experience with the devices in the laboratory, every patient took home two of the four devices for 1 week each. Responses were re-evaluated after this period of home loan. No single EVES stood out as the strong preference for all aspects evaluated. In the laboratory-based appraisal, Flipperport typically received the best overall ratings and the highest score for image quality and ability to magnify, but after home loan there was no significant difference between devices. Comfort of the device, although important, was not predictive of rating once magnification had been taken into account. For actual performance, a threshold effect was seen whereby ratings increased as reading speed improved up to 60 words per minute. Newly diagnosed patients responded most positively to EVESs, but otherwise users' opinions could not be predicted by age, gender, diagnosis or previous CCTV experience. User feedback is essential in our quest to understand the benefits and shortcomings of EVESs. Such information should help guide both the prescribing and the future development of low vision devices. PMID:19236583

  15. Development of a machine vision system for automotive part inspection

    NASA Astrophysics Data System (ADS)

    Andres, Nelson S.; Marimuthu, Ram P.; Eom, Yong-Kyun; Jang, Bong-Choon

    2005-12-01

    As an alternative to human inspection, this study presents the development of a machine vision inspection system (MVIS) purpose-built for car seat frames. The proposed MVIS was designed to meet the demands, features and specifications of car seat frame manufacturing companies striving for increased throughput of better quality. This computer-based MVIS performs quality measures by detecting holes, nuts and welding spots on every car seat frame in real time, ensuring these portions are intact, precise and in their proper places. In this study, the NI Vision Builder software for Automatic Inspection was used as a solution for configuring the aimed quality measurements. The software provides measurement techniques such as edge detection and pattern matching, which are capable of identifying the boundaries or edges of an object and analyzing the pixel values along a profile to detect significant intensity changes. Either technique is capable of gauging sizes, detecting missing portions and checking the alignment of parts. The techniques for visual inspection were optimized through qualitative analysis and simulation of human tolerance in inspecting car seat frames. Furthermore, this study exemplifies the incorporation of the optimized vision inspection environment into the pre-inspection and post-inspection subsystems. Human participation in the proposed MVIS is ideally limited to feeding and sorting.

  16. Low Cost Vision Based Personal Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Amami, M. M.; Smith, M. J.; Kokkas, N.

    2014-03-01

    Mobile mapping systems (MMS) can be used for several purposes, such as transportation, highway infrastructure mapping and GIS data collection. However, the acceptance of these systems is not widespread and their use is still limited due to their high cost and dependency on the Global Navigation Satellite System (GNSS). A low-cost vision-based personal MMS has been produced with the aim of overcoming these limitations. The system has been designed to depend mainly on cameras, with low-cost GNSS and inertial sensors used to provide a bundle adjustment solution with initial values. The system has the potential to be used both indoors and outdoors, and has been tested in both settings with different GPS coverage, surrounding features, and narrow and curvy paths. Tests show that the system is able to work in such environments, providing 3D coordinates with better than 10 cm accuracy.

  17. 3D vision system for intelligent milking robot automation

    NASA Astrophysics Data System (ADS)

    Akhloufi, M. A.

    2013-12-01

    In a milking robot, the correct localization and positioning of milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for approximate teat position estimation. This technology has reached its limit and does not allow optimal positioning of the milking cups. Also, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers and for animal health (e.g., the development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-Of-Flight (TOF) and RGBD cameras. The proposed algorithms permit the online segmentation of teats by combining 2D and 3D visual information. The obtained results permit computation of the 3D teat position, which is then sent to the milking robot for teat cup positioning. The vision system has real-time performance and monitors the optimal positioning of the cups even in the presence of motion. The results obtained with both TOF and RGBD cameras show the good performance of the proposed system; the best performance was obtained with RGBD cameras. This latter technology will be used in future real-life experimental tests.

  18. Configuration assistant for versatile vision-based inspection systems

    NASA Astrophysics Data System (ADS)

    Huesser, Olivier; Hugli, Heinz

    2000-03-01

    Nowadays, vision-based inspection systems are present at many stages of the industrial manufacturing process. Their versatility, which permits them to accommodate a broad range of inspection requirements, is however limited by the time-consuming system setup performed at each production change. This work aims at providing a configuration assistant that helps speed up this system setup, considering the peculiarities of industrial vision systems. The pursued principle, maximizing the discriminating power of the features involved in the inspection decision, leads to an optimization problem based on a high-dimensional objective function. Several objective functions based on various metrics are proposed, and their optimization is performed with the help of search heuristics such as genetic methods and simulated annealing. The experimental results obtained with an industrial inspection system are presented, considering the particular case of the visual inspection of markings found on top of molded integrated circuits. These results show the effectiveness of the presented objective functions and search methods, and validate the configuration assistant as well.

  19. Improving Car Navigation with a Vision-Based System

    NASA Astrophysics Data System (ADS)

    Kim, H.; Choi, K.; Lee, I.

    2015-08-01

    The real-time acquisition of accurate positions is very important for the proper operation of driver assistance systems and autonomous vehicles. Since current systems mostly depend on GPS and map-matching techniques, they show poor and unreliable performance in areas where GPS signals are blocked or weak. In this study, we propose a vision-oriented car navigation method based on sensor fusion of a GPS and in-vehicle sensors. We employ a single photo resection process to derive the position and attitude of the camera, and thus those of the car. These image georeferencing results are combined with other sensory data under a sensor fusion framework for more accurate position estimation using an extended Kalman filter. The proposed system estimated positions with an accuracy of 15 m even though GPS signals were unavailable during the entire 15-minute test drive. The proposed vision-based system can be effectively utilized for the low-cost yet accurate and reliable navigation systems required for intelligent or autonomous vehicles.
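
    Single photo resection recovers the camera position and attitude from known ground control points and their image measurements. A minimal sketch using OpenCV's solvePnP as a stand-in for the paper's own resection solver (function and variable names are ours):

      import cv2
      import numpy as np

      def resect_camera(world_pts, image_pts, K, dist):
          # world_pts: (N,3) control points; image_pts: (N,2) measurements;
          # K, dist: camera matrix and distortion coefficients.
          ok, rvec, tvec = cv2.solvePnP(world_pts, image_pts, K, dist)
          if not ok:
              raise RuntimeError("resection failed")
          R, _ = cv2.Rodrigues(rvec)       # rotation: world -> camera
          position = -R.T @ tvec           # camera centre in the world frame
          return R, position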

  20. Research on machine vision system of monitoring injection molding processing

    NASA Astrophysics Data System (ADS)

    Bai, Fan; Zheng, Huifeng; Wang, Yuebing; Wang, Cheng; Liao, Si'an

    2016-01-01

    With the wide adoption of the injection molding process, an embedded monitoring system based on machine vision has been developed to automatically monitor abnormalities in injection molding processing. First, the hardware system and embedded software system were designed. Then camera calibration was carried out to establish an accurate model of the camera and correct distortion. Next, a segmentation algorithm was applied to extract the monitored objects of the injection molding process. The system procedure comprises initialization, process monitoring and product detail detection. Finally, the experimental results, including the detection rates for the various kinds of abnormality, were analyzed. The system can realize multi-zone monitoring and product detail detection of the injection molding process with high accuracy and good stability.

  1. Retinal Structures and Visual Cortex Activity are Impaired Prior to Clinical Vision Loss in Glaucoma.

    PubMed

    Murphy, Matthew C; Conner, Ian P; Teng, Cindy Y; Lawrence, Jesse D; Safiullah, Zaid; Wang, Bo; Bilonick, Richard A; Kim, Seong-Gi; Wollstein, Gadi; Schuman, Joel S; Chan, Kevin C

    2016-01-01

    Glaucoma is the second leading cause of blindness worldwide and its pathogenesis remains unclear. In this study, we measured the structure, metabolism and function of the visual system by optical coherence tomography and multi-modal magnetic resonance imaging in healthy subjects and glaucoma patients with different degrees of vision loss. We found that inner retinal layer thinning, optic nerve cupping and reduced visual cortex activity occurred before patients showed visual field impairment. The primary visual cortex also exhibited more severe functional deficits than higher-order visual brain areas in glaucoma. Within the visual cortex, choline metabolism was perturbed along with increasing disease severity in the eye, optic radiation and visual field. In summary, this study showed evidence that glaucoma deterioration is already present in the eye and the brain before substantial vision loss can be detected clinically using current testing methods. In addition, cortical cholinergic abnormalities are involved during trans-neuronal degeneration and can be detected non-invasively in glaucoma. These results may have an impact on identifying early glaucoma mechanisms, detecting and monitoring pathophysiological events and eye-brain-behavior relationships, and guiding vision preservation strategies in the visual system, which may help reduce the burden of this irreversible but preventable neurodegenerative disease. PMID:27510406

  2. Vision System Measures Motions of Robot and External Objects

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2008-01-01

    A prototype of an advanced robotic vision system both (1) measures its own motion with respect to a stationary background and (2) detects other moving objects and estimates their motions, all by use of visual cues. Like some prior robotic and other optoelectronic vision systems, this system is based partly on concepts of optical flow and visual odometry. Whereas prior optoelectronic visual-odometry systems have been limited to frame rates of no more than 1 Hz, a visual-odometry subsystem that is part of this system operates at a frame rate of 60 to 200 Hz, given optical-flow estimates. The overall system operates at an effective frame rate of 12 Hz. Moreover, unlike prior machine-vision systems for detecting motions of external objects, this system need not remain stationary: it can detect such motions while it is moving (even vibrating). The system includes a stereoscopic pair of cameras mounted on a moving robot. The outputs of the cameras are digitized, then processed to extract positions and velocities. The initial image-data-processing functions of this system are the same as those of some prior systems: Stereoscopy is used to compute three-dimensional (3D) positions for all pixels in the camera images. For each pixel of each image, optical flow between successive image frames is used to compute the two-dimensional (2D) apparent relative translational motion of the point transverse to the line of sight of the camera. The challenge in designing this system was to provide for utilization of the 3D information from stereoscopy in conjunction with the 2D information from optical flow to distinguish between motion of the camera pair and motions of external objects, compute the motion of the camera pair in all six degrees of translational and rotational freedom, and robustly estimate the motions of external objects, all in real time. To meet this challenge, the system is designed to perform the following image-data-processing functions: The visual-odometry subsystem

  3. 75 FR 71183 - Twelfth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-22

    ...: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation Administration... Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing this notice to advise the public of a meeting of Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight...

  4. Vision-Based People Detection System for Heavy Machine Applications

    PubMed Central

    Fremont, Vincent; Bui, Manh Tuan; Boukerroui, Djamal; Letort, Pierrick

    2016-01-01

    This paper presents a vision-based people detection system for improving safety in heavy machines. We propose a perception system composed of a monocular fisheye camera and a LiDAR. Fisheye cameras have the advantage of a wide field-of-view, but the strong distortions that they create must be handled at the detection stage. Since people detection in fisheye images has not been well studied, we focus on investigating and quantifying the impact that strong radial distortions have on the appearance of people, and we propose approaches for handling this specificity, adapted from state-of-the-art people detection approaches. These adaptive approaches nevertheless have the drawback of high computational cost and complexity. Consequently, we also present a framework for harnessing the LiDAR modality in order to enhance the detection algorithm for different camera positions. A sequential LiDAR-based fusion architecture is used, which addresses directly the problem of reducing false detections and computational cost in an exclusively vision-based system. A heavy machine dataset was built, and different experiments were carried out to evaluate the performance of the system. The results are promising, in terms of both processing speed and performance. PMID:26805838

  5. Triangulation-Based Camera Calibration For Machine Vision Systems

    NASA Astrophysics Data System (ADS)

    Bachnak, Rafic A.; Celenk, Mehmet

    1990-04-01

    This paper describes a camera calibration procedure for stereo-based machine vision systems. The method is based on geometric triangulation using only a single image of three distinctive points. Both the intrinsic and extrinsic parameters of the system are determined. The procedure is performed only once at the initial set-up using a simple camera model. The effective focal length is extended in such a way that a linear transformation exists between the camera image plane and the output digital image. Only three world points are needed to find the extended focal length and the transformation matrix elements that relate the camera position and orientation to a real world coordinate system. The parameters of the system are computed by solving a set of linear equations. Experimental results show that the method, when used in a stereo system developed in this research, produces reasonably accurate 3-D measurements.

  6. Beam Splitter For Welding-Torch Vision System

    NASA Technical Reports Server (NTRS)

    Gilbert, Jeffrey L.

    1991-01-01

    Compact welding torch equipped with along-the-torch vision system includes cubic beam splitter to direct preview light on weldment and to reflect light coming from welding scene for imaging. Beam splitter integral with torch; requires no external mounting brackets. Rugged and withstands vibrations and wide range of temperatures. Commercially available, reasonably priced, comes in variety of sizes and optical qualities with antireflection and interference-filter coatings on desired faces. Can provide 50 percent transmission and 50 percent reflection of incident light to exhibit minimal ghosting of image.

  7. Scratch measurement system using machine vision: part II

    NASA Astrophysics Data System (ADS)

    Sarr, Dennis P.

    1992-03-01

    Aircraft skins and windows must not have scratches, which are unacceptable for cosmetic and structural reasons. Manual methods are inadequate for giving accurate readings and do not provide a hardcopy report. A prototype scratch measurement system (SMS) using computer vision and image analysis has been developed. This paper discusses the prototype description, novel ideas, improvements, repeatability, reproducibility, accuracy, and the calibration method. Boeing's Calibration Certification Laboratory has given the prototype a qualified certification. The SMS is portable for use in factories or aircraft hangars anywhere in the world.

  8. The robot's eyes - Stereo vision system for automated scene analysis

    NASA Technical Reports Server (NTRS)

    Williams, D. S.

    1977-01-01

    Attention is given to the robot stereo vision system which maintains the image produced by solid-state detector television cameras in a dynamic random access memory called RAPID. The imaging hardware consists of sensors (two solid-state image arrays using a charge injection technique), a video-rate analog-to-digital converter, the RAPID memory, various types of computer-controlled displays, and preprocessing equipment (for reflexive actions, processing aids, and object detection). The software is aimed at locating objects and determining traversability. An object-tracking algorithm is discussed, and it is noted that tracking speed is in the 50-75 pixels/s range.

  9. Robotic 3D vision solder joint verification system evaluation

    SciTech Connect

    Trent, M.A.

    1992-02-01

    A comparative performance evaluation was conducted between a proprietary inspection system using intelligent 3D vision and manual visual inspection of solder joints. The purpose was to assess the compatibility and correlation of the automated system with current visual inspection criteria. The results indicated that the automated system was more accurate (> 90%) than visual inspection (60--70%) in locating and/or categorizing solder joint defects. In addition, the automated system can offer significant capabilities to characterize and monitor a soldering process by measuring physical attributes, such as solder joint volumes and wetting angles, which are not available through manual visual inspection. A more in-depth evaluation of this technology is recommended.

  10. Vision-Based SLAM System for Unmanned Aerial Vehicles

    PubMed Central

    Munguía, Rodrigo; Urzua, Sarquis; Bolea, Yolanda; Grau, Antoni

    2016-01-01

    The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor will be used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks will be used to perform a fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of the inclusion of camera measurements into the system. In this sense the estimation of the trajectory of the vehicle is considerably improved, compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rate and highly noisy. PMID:26999131
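
    The estimator's core is the standard EKF correction step, applied in turn to each AHRS/GPS/camera measurement. A minimal sketch with the measurement model left abstract; the names are ours, and the paper's full state additionally carries the landmark positions:

      import numpy as np

      def ekf_correct(x, P, z, h, H, R):
          # x, P: state estimate and covariance; z: measurement;
          # h: nonlinear measurement function; H: its Jacobian at x;
          # R: measurement noise covariance.
          y = z - h(x)                      # innovation
          S = H @ P @ H.T + R               # innovation covariance
          K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
          x = x + K @ y
          P = (np.eye(len(x)) - K @ H) @ P
          return x, P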

  11. Computer vision system for three-dimensional inspection

    NASA Astrophysics Data System (ADS)

    Penafiel, Francisco; Fernandez, Luis; Campoy, Pascual; Aracil, Rafael

    1994-11-01

    In the manufacturing process, certain workpieces are inspected for dimensional measurement using sophisticated quality control techniques. During the operation phase, these parts are deformed due to the high temperatures involved in the process, and the evolution of the workpiece structure shows up as dimensional modification, which can be measured with a set of dimensional parameters. In this paper, a three-dimensional automatic inspection of these parts is proposed. The aim is to measure workpiece features through 3D control methods using directional lighting and a computer artificial vision system. The results of this measurement must be compared with the parameters obtained after the manufacturing process in order to determine the degree of deformation of the workpiece and decide whether it is still usable or not. Workpieces outside a predetermined specification range must be discarded and replaced by new ones. The advantage of artificial vision methods is that there is no need to touch the object under inspection, which makes their use feasible in hazardous environments not suitable for human beings. A system has been developed and applied to the inspection of fuel assemblies in nuclear power plants. Such a system has been implemented in a very high-radiation environment and operates in underwater conditions. The physical dimensions of a nuclear fuel assembly are modified by its operation in a nuclear power plant relative to the original dimensions after manufacturing. The whole system (camera, mechanical and illumination systems, and the radioactive fuel assembly) is submerged in water to minimize radiation effects and is remotely controlled by human intervention. The developed system has to accurately inspect a set of measurements on the fuel assembly surface such as length, twist, arching, etc. The present project called SICOM (nuclear fuel assembly inspection system) is included into the R

  12. Recognition of Activities of Daily Living with Egocentric Vision: A Review.

    PubMed

    Nguyen, Thi-Hoa-Cuc; Nebel, Jean-Christophe; Florez-Revuelta, Francisco

    2016-01-01

    Video-based recognition of activities of daily living (ADLs) is being used in ambient assisted living systems in order to support the independent living of older people. However, current systems based on cameras located in the environment present a number of problems, such as occlusions and a limited field of view. Recently, wearable cameras have begun to be exploited. This paper presents a review of the state of the art of egocentric vision systems for the recognition of ADLs following a hierarchical structure: motion, action and activity levels, where each level provides higher semantic information and involves a longer time frame. The current egocentric vision literature suggests that ADLs recognition is mainly driven by the objects present in the scene, especially those associated with specific tasks. However, although object-based approaches have proven popular, object recognition remains a challenge due to the intra-class variations found in unconstrained scenarios. As a consequence, the performance of current systems is far from satisfactory. PMID:26751452

  13. Development of Non-contact Respiratory Monitoring System for Newborn Using a FG Vision Sensor

    NASA Astrophysics Data System (ADS)

    Kurami, Yoshiyuki; Itoh, Yushi; Natori, Michiya; Ohzeki, Kazuo; Aoki, Yoshimitsu

    In recent years, advances in neonatal care have been strongly desired as the birth rate of low-birth-weight babies increases. The respiration of low-birth-weight babies is particularly unstable because their central nervous and respiratory functions are immature, so they often suffer from respiratory disease. In a NICU (Neonatal Intensive Care Unit), neonatal respiration is monitored at all times using cardio-respiratory monitors and pulse oximeters. These contact-type sensors can measure the respiratory rate and SpO2 (Saturation of Peripheral Oxygen). However, because a contact-type sensor might damage the newborn's skin, it is a real burden to monitor neonatal respiration this way. We therefore developed a respiratory monitoring system for newborns using a FG (Fiber Grating) vision sensor. The FG vision sensor is an active stereo vision sensor that permits non-contact 3D measurement. A respiratory waveform is calculated by detecting the vertical motion of the thoracic and abdominal region during respiration. We performed a clinical experiment in the NICU and confirmed that the accuracy of the obtained respiratory waveform was high. Non-contact respiratory monitoring of newborns using a FG vision sensor enables a minimally invasive procedure.

  14. Machine vision system for the control of tunnel boring machines

    NASA Astrophysics Data System (ADS)

    Habacher, Michael; O'Leary, Paul; Harker, Matthew; Golser, Johannes

    2013-03-01

    This paper presents a machine vision system for the control of dual-shield Tunnel Boring Machines. The system consists of a camera with ultra-bright LED illumination and a target comprising multiple retro-reflectors. The camera, mounted on the gripper shield, measures the relative position and orientation of the target, which is mounted on the cutting shield. In this manner the position of the cutting shield relative to the gripper shield is determined. Morphological operators are used to detect the retro-reflectors in the image, and a covariance-optimized circle fit is used to determine the center point of each reflector. A graph matching algorithm is used to ensure a robust matching of the constellation of the observed target with the ideal target geometry.
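
    The reflector centre points come from a covariance-optimized circle fit; as a simpler stand-in, here is the classic algebraic least-squares (Kasa) circle fit to detected edge pixels (function and variable names are ours, not the paper's):

      import numpy as np

      def fit_circle(xy):
          # xy: (N, 2) array of edge-pixel coordinates.
          # Solve 2*cx*x + 2*cy*y + c = x^2 + y^2 in least squares,
          # where c = r^2 - cx^2 - cy^2.
          x, y = xy[:, 0], xy[:, 1]
          A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
          b = x**2 + y**2
          (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
          return cx, cy, np.sqrt(c + cx**2 + cy**2)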

  15. Vision-aided inertial navigation system for robotic mobile mapping

    NASA Astrophysics Data System (ADS)

    Bayoud, Fadi; Skaloud, Jan

    2008-04-01

    A mapping system by vision-aided inertial navigation was developed for areas where GNSS signals are unreachable. In this framework, a methodology on the integration of vision and inertial sensors is presented, analysed and tested. The system employs the method of “SLAM: Simultaneous Localisation And Mapping” where the only external input available to the system at the beginning of the mapping mission is a number of features with known coordinates. SLAM is a term used in the robotics community to describe the problem of mapping the environment and at the same time using this map to determine the location of the mapping device. Differing from the robotics approach, the presented development stems from the frameworks of photogrammetry and kinematic geodesy that are merged in two filters that run in parallel: the Least-Squares Adjustment (LSA) for feature coordinate determination and the Kalman filter (KF) for navigation correction. To test this approach, a mapping system-prototype comprising two CCD cameras and one Inertial Measurement Unit (IMU) is introduced. Conceptually, the outputs of the LSA photogrammetric resection are used as the external measurements for the KF that corrects the inertial navigation. The filtered position and orientation are subsequently employed in the photogrammetric intersection to map the surrounding features that are used as control points for the resection in the next epoch. We confirm empirically the dependency of navigation performance on the quality of the images and the number of tracked features, as well as on the geometry of the stereo-pair. Due to its autonomous nature, the SLAM's performance is further affected by the quality of IMU initialisation and the a priori assumptions on error distribution. Using the example of the presented system we show that centimetre accuracy can be achieved in both navigation and mapping when the image geometry is optimal.

  16. Using Gnu C to develop PC-based vision systems

    NASA Astrophysics Data System (ADS)

    Miller, John W. V.; Shridhar, Malayappan; Shabestari, Behrouz N.

    1995-10-01

    The Gnu project has provided a substantial quantity of free, high-quality software tools for UNIX-based machines, including the Gnu C compiler, which is used on a wide variety of hardware systems including IBM PC-compatible machines using 80386 or newer (32-bit) processors. While this compiler was developed for UNIX applications, it has been successfully ported to DOS and offers substantial benefits over traditional DOS-based 16-bit compilers for machine vision applications. One of the most significant advantages of Gnu C is the removal of the 640 K limit, since addressing is performed with 32-bit pointers. Hence, all physical memory can be used directly to store and retrieve images, lookup tables, databases, etc. Execution is generally faster as well, since 32-bit code avoids the overhead of far pointers. Protected-mode operation provides other benefits: errant pointers often cause segmentation errors, and the source of such errors can be readily identified using special tools provided with the compiler. Examples of vision applications using Gnu C include automatic hand-written address block recognition, counting of shattered-glass particles, and dimensional analysis.

  1. Creating photorealistic virtual model with polarization-based vision system

    NASA Astrophysics Data System (ADS)

    Shibata, Takushi; Takahashi, Toru; Miyazaki, Daisuke; Sato, Yoichi; Ikeuchi, Katsushi

    2005-08-01

    Recently, 3D models have come into use in many fields such as education, medical services, entertainment, art and digital archiving, thanks to advances in computational power, and the demand for photorealistic virtual models is increasing. In the computer vision field, a number of techniques have been developed for creating virtual models by observing real objects. In this paper, we propose a method for creating photorealistic virtual models by using a laser range sensor and a polarization-based image capture system. We capture the range and color images of an object rotated on a rotary table. By using the reconstructed object shape and the sequence of color images of the object, the parameters of a reflection model are estimated in a robust manner. As a result, we can make a photorealistic 3D model that takes surface reflection into consideration. The key point of the proposed method is that, first, the diffuse and specular reflection components are separated from the color image sequence, and then the reflectance parameters of each reflection component are estimated separately. In separating the reflection components, we use a polarization filter. This approach enables estimation of the reflectance properties of real objects whose surfaces show specularity as well as diffuse reflection. The recovered object shape and reflectance properties are then used for synthesizing object images with realistic shading effects under arbitrary illumination conditions.
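
    A common way to perform polarization-based separation is to image the scene through a rotating linear polarizer and exploit the fact that, to a first approximation, the diffuse term is unpolarized while the specular term is polarized. A minimal sketch under that idealization, which is not necessarily the paper's exact estimator:

      import numpy as np

      def separate_reflection(images):
          # images: (K, H, W) stack, one frame per polarizer angle.
          i_min = images.min(axis=0)   # unpolarized (diffuse) part, halved by the filter
          i_max = images.max(axis=0)
          specular = i_max - i_min     # polarized part varies with the angle
          diffuse = 2.0 * i_min        # undo the 50% polarizer attenuation (idealized)
          return diffuse, specular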

  2. A database/knowledge structure for a robotics vision system

    NASA Technical Reports Server (NTRS)

    Dearholt, D. W.; Gonzales, N. N.

    1987-01-01

    Desirable properties of robotics vision database systems are given, and structures which possess properties appropriate for some aspects of such database systems are examined. Included in the structures discussed is a family of networks in which link membership is determined by measures of proximity between pairs of the entities stored in the database. This type of network is shown to have properties which guarantee that the search for a matching feature vector is monotonic. That is, the database can be searched with no backtracking, if there is a feature vector in the database which matches the feature vector of the external entity which is to be identified. The construction of the database is discussed, and the search procedure is presented. A section on the support provided by the database for description of the decision-making processes and the search path is also included.
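
    To make the monotonic (no-backtracking) search property concrete, here is a greedy walk over such a proximity network: at each step move to the neighbour closest to the query feature vector, stopping when no neighbour is closer. The dict-based graph representation is our assumption, not the paper's:

      import numpy as np

      def monotonic_search(graph, vectors, query, start):
          # graph: {node: [neighbour nodes]}; vectors: {node: feature vector}.
          current = start
          while True:
              nbrs = graph[current]
              if not nbrs:
                  return current
              d_cur = np.linalg.norm(vectors[current] - query)
              dists = [np.linalg.norm(vectors[n] - query) for n in nbrs]
              best = int(np.argmin(dists))
              if dists[best] >= d_cur:  # no closer neighbour: stop
                  return current
              current = nbrs[best]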

  3. MARVEL: A system that recognizes world locations with stereo vision

    SciTech Connect

    Braunegg, D.J. (Artificial Intelligence Lab.)

    1993-06-01

    MARVEL is a system that supports autonomous navigation by building and maintaining its own models of world locations and using these models and stereo vision input to recognize its location in the world and its position and orientation within that location. The system emphasizes the use of simple, easily derivable features for recognition, whose aggregate identifies a location, instead of complex features that also require recognition. MARVEL is designed to be robust with respect to input errors and to respond to a gradually changing world by updating its world location models. In over 1,000 recognition tests using real-world data, MARVEL yielded a false negative rate under 10% with zero false positives.

  4. Wearable design issues for electronic vision enhancement systems

    NASA Astrophysics Data System (ADS)

    Dvorak, Joe

    2006-09-01

    As the baby boomer generation ages, visual impairment will affect a significant portion of the US population. At the same time, more and more of our world is becoming digital. These two trends, coupled with continuing advances in digital electronics, argue for a rethinking of the design of aids for the visually impaired. This paper discusses design issues for electronic vision enhancement systems (EVES) [R.C. Peterson, J.S. Wolffsohn, M. Rubinstein, et al., Am. J. Ophthalmol. 136 1129 (2003)] that will facilitate their wearability and continuous use. We briefly discuss the factors affecting a person's acceptance of wearable devices. We define the concept of operational inertia, which plays an important role in our design of wearable devices and systems. We then discuss how design principles based upon operational inertia can be applied to the design of EVES.

  5. Cryogenics Vision Workshop for High-Temperature Superconducting Electric Power Systems Proceedings

    SciTech Connect

    Energetics, Inc.

    2000-01-01

    The US Department of Energy's Superconductivity Program for Electric Systems sponsored the Cryogenics Vision Workshop, held on July 27, 1999 in Washington, D.C., in conjunction with the Program's Annual Peer Review meeting. Of the 175 people attending the peer review meeting, 31 were selected in advance to participate in the Cryogenics Vision Workshop discussions. The participants represented cryogenic equipment manufacturers, industrial gas manufacturers and distributors, component suppliers, electric power equipment manufacturers (Superconductivity Partnership Initiative participants), electric utilities, federal agencies, national laboratories, and consulting firms. Critical factors that need to be considered in describing the successful future commercialization of cryogenic systems were discussed; such systems will enable the widespread deployment of high-temperature superconducting (HTS) electric power equipment. Potential research, development, and demonstration (RD and D) activities and partnership opportunities for advancing suitable cryogenic systems were also discussed. The workshop agenda can be found in the following section of this report. Facilitated sessions were held to discuss two specific focus topics: identifying critical factors that need to be included in a cryogenics vision for HTS electric power systems (from the HTS equipment end-user perspective), and identifying R and D needs and partnership roles (from the cryogenic industry perspective). The findings of the facilitated Cryogenics Vision Workshop were then presented in a plenary session of the Annual Peer Review Meeting. Approximately 120 attendees participated in the afternoon plenary session. This large group heard summary reports from the workshop session leaders and then held a wrap-up session to discuss the findings, cross-cutting themes, and next steps. These summary reports are presented in this document. The ideas and suggestions raised during

  6. The modeling of portable 3D vision coordinate measuring system

    NASA Astrophysics Data System (ADS)

    Liu, Shugui; Huang, Fengshan; Peng, Kai

    2005-02-01

    The portable three-dimensional vision coordinate measuring system, which consists of a light pen, a CCD camera and a laptop computer, can be widely applied in most coordinate measuring fields, especially on industrial sites. On the light pen there are at least three point-shaped light sources (LEDs) acting as the measured control characteristic points, and a touch trigger probe with a spherical stylus which is used to contact the point to be measured. The most important characteristic of this system is that the three light sources and the probe stylus are aligned in one line with known positions. In building and studying this measuring system, the key problem is constructing the system's mathematical model: the perspective-of-three-collinear-points problem, a particular case of the perspective-of-three-points problem (P3P). On the basis of P3P and spatial analytical geometry theory, the system's mathematical model is established in this paper. Moreover, it is verified that the perspective-of-three-collinear-points problem has a unique solution. The analytical equations of the measured point's coordinates are derived by using the system's mathematical model and the constraint that the three light sources and the probe stylus are aligned in one line. Finally, the effectiveness of the mathematical model is confirmed by experiments.

  7. neu-VISION: an explosives detection system for transportation security

    NASA Astrophysics Data System (ADS)

    Warman, Kieffer; Penn, David

    2008-04-01

    Terrorists were targeting commercial airliners long before the 9/11 attacks on the World Trade Center and the Pentagon. Despite heightened security measures, commercial airliners remain an attractive target for terrorists, as evidenced by the August 2006 terrorist plot to destroy as many as ten aircraft in mid-flight from the United Kingdom to the United States. In response to the security threat, air carriers are now required to screen 100 percent of all checked baggage for explosives. The scale of this task is enormous, and the Transportation Security Administration has deployed thousands of detection systems. Although this has resulted in improved security, the performance of the installed systems is not ideal. Further improvements are needed and can only be made with new technologies that ensure a flexible concept of operations and provide superior detection along with low false alarm rates and excellent dependability. To address these security needs, Applied Signal Technology, Inc. is developing an innovative and practical solution to meet the performance demands of aviation security. The neu-VISION(TM) system is expected to provide explosives detection performance for checked baggage that both complements and surpasses that of currently deployed systems. The neu-VISION(TM) system leverages a five-year R&D program developing the Associated Particle Imaging (API) technique, a neutron-based, non-intrusive material identification and imaging technique. The superior performance afforded by this neutron interrogation technique delivers false alarm rates much lower than deployed technologies and "sees through" dense, heavy materials. Small quantities of explosive material are identified even in cluttered environments.

  8. Stereoscopic Machine-Vision System Using Projected Circles

    NASA Technical Reports Server (NTRS)

    Mackey, Jeffrey R.

    2010-01-01

    A machine-vision system capable of detecting obstacles large enough to damage or trap a robotic vehicle is undergoing development. The system includes (1) a pattern generator that projects concentric circles of laser light forward onto the terrain, (2) a stereoscopic pair of cameras that are aimed forward to acquire images of the circles, (3) a frame grabber and digitizer for acquiring image data from the cameras, and (4) a single-board computer that processes the data. The system is being developed as a prototype of machine-vision systems to enable robotic vehicles (rovers) on remote planets to avoid craters, large rocks, and other terrain features that could capture or damage the vehicles. Potential terrestrial applications of systems like this one could include terrain mapping, collision avoidance, navigation of robotic vehicles, mining, and robotic rescue. This system is based partly on the same principles as those of a prior stereoscopic machine-vision system in which the cameras acquire images of a single stripe of laser light that is swept forward across the terrain. However, this system is designed to afford improvements over some of the undesirable features of the prior system, including the need for a pan-and-tilt mechanism to aim the laser to generate the swept stripe, ambiguities in interpretation of the single-stripe image, the time needed to sweep the stripe across the terrain and process the data from many images acquired during that time, and difficulty of calibration because of the narrowness of the stripe. In this system, the pattern generator does not contain any moving parts and need not be mounted on a pan-and-tilt mechanism: the pattern of concentric circles is projected steadily in the forward direction. The system calibrates itself by use of data acquired during projection of the concentric-circle pattern onto a known target representing flat ground. The calibration-target image data are stored in the computer memory for use as a
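
    At run time, the stored flat-ground calibration suggests a simple consistency test: convert circle disparities to depth with Z = f*B/d and flag departures from the flat-ground depth. A minimal sketch (function and parameter names are assumptions, not taken from the NASA system):

```python
import numpy as np

def flag_obstacles(disp, disp_flat, baseline_m, focal_px, tol_m=0.10):
    """Flag pixels on the projected circles whose stereo depth departs
    from the stored flat-ground depth by more than tol_m (bumps/holes)."""
    valid = (disp > 0) & (disp_flat > 0)
    z = np.where(valid, focal_px * baseline_m / np.maximum(disp, 1e-6), 0.0)
    z_flat = np.where(valid,
                      focal_px * baseline_m / np.maximum(disp_flat, 1e-6),
                      0.0)
    return valid & (np.abs(z_flat - z) > tol_m)
```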

  9. HiVision millimeter-wave radar for enhanced vision systems in civil and military transport aircraft

    NASA Astrophysics Data System (ADS)

    Pirkl, Martin; Tospann, Franz-Jose

    1997-06-01

    This paper presents a guideline for meeting the requirements of forward-looking sensors in an enhanced vision system for both military and civil transport aircraft. It updates a previous publication with particular attention to airborne applications. For civil transport aircraft, an imaging mm-wave radar is proposed as the vision sensor of an enhanced vision system. For military air transport, an additional high-performance weather radar should be combined with the mm-wave radar to enable advanced situation awareness, e.g., spot-SAR or air-to-air operation. For tactical navigation, the mm-wave radar is useful due to its ranging capabilities. To meet these requirements the HiVision radar was developed and tested. It uses a robust concept of electronic beam steering and will meet the strict price constraints of transport aircraft. Advanced image processing and high-frequency techniques are currently being developed to enhance the performance of both the radar image and the integration techniques. The FMCW waveform also enables a sensor with a low probability of intercept and high resistance against jamming. The 1997 highlight will be the optimization of the sensor and flight trials with an enhanced radar demonstrator.
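
    The ranging capability attributed to the FMCW waveform follows from the classic beat-frequency relation R = c*f_b*T/(2B); a small illustrative computation (the numbers are hypothetical, not HiVision parameters):

```python
C = 299_792_458.0  # speed of light, m/s

def fmcw_range(f_beat_hz, sweep_bw_hz, sweep_time_s):
    """Target range for a linear FMCW chirp of bandwidth sweep_bw_hz
    swept over sweep_time_s, given the measured beat frequency."""
    return C * f_beat_hz * sweep_time_s / (2.0 * sweep_bw_hz)

# Hypothetical example: a 100 MHz sweep in 1 ms with a 500 kHz beat
# corresponds to a target at roughly 750 m.
print(fmcw_range(500e3, 100e6, 1e-3))
```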

  10. Visual tracking in stereo. [by computer vision system

    NASA Technical Reports Server (NTRS)

    Saund, E.

    1981-01-01

    A method is described for visual object tracking by a computer vision system using TV cameras and special low-level image processing hardware. The tracker maintains an internal model of the location, orientation, and velocity of the object in three-dimensional space. This model is used to predict where features of the object will lie on the two-dimensional images produced by stereo TV cameras. The differences in the locations of features in the two-dimensional images as predicted by the internal model and as actually seen create an error signal in the two-dimensional representation. This is multiplied by a generalized inverse Jacobian matrix to deduce the error in the internal model. The procedure repeats to update the internal model of the object's location, orientation and velocity continuously.
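
    The correction step described, mapping the 2-D prediction error back into the 3-D model through a generalized inverse Jacobian, can be sketched in a few lines (an illustration assuming a stacked error vector and a known Jacobian, not Saund's original code):

```python
import numpy as np

def update_state(state, error_2d, jacobian):
    """One tracker iteration: state holds the modeled location,
    orientation, and velocity; error_2d stacks the differences between
    predicted and observed feature positions in both stereo images;
    jacobian is d(image features)/d(state). The Moore-Penrose
    pseudo-inverse plays the role of the generalized inverse Jacobian."""
    return state + np.linalg.pinv(jacobian) @ error_2d
```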

  11. MMW radar enhanced vision systems: the Helicopter Autonomous Landing System (HALS) and Radar-Enhanced Vision System (REVS) are rotary and fixed wing enhanced flight vision systems that enable safe flight operations in degraded visual environments

    NASA Astrophysics Data System (ADS)

    Cross, Jack; Schneider, John; Cariani, Pete

    2013-05-01

    Sierra Nevada Corporation (SNC) has developed rotary- and fixed-wing millimeter-wave radar enhanced vision systems. The Helicopter Autonomous Landing System (HALS) is a rotary-wing enhanced vision system that enables multi-ship landing, takeoff, and enroute flight in Degraded Visual Environments (DVE). HALS has been successfully flight tested in a variety of scenarios, from brown-out DVE landings, to enroute flight over mountainous terrain, to wire/cable detection during low-level flight. The Radar-Enhanced Vision System (REVS) is a fixed-wing Enhanced Flight Vision System (EFVS) undergoing prototype development testing. Both systems are based on a fast-scanning, three-dimensional 94 GHz radar that produces real-time terrain and obstacle imagery. The radar imagery is fused with synthetic imagery of the surrounding terrain to form a long-range, wide field-of-view display. A symbology overlay is added to provide aircraft state information and, for HALS, approach and landing command guidance cuing. The combination of see-through imagery and symbology provides the key information a pilot needs to perform safe flight operations in DVE conditions. This paper discusses the HALS and REVS systems and technology, presents imagery, and summarizes the recent flight test results.

  12. Applied machine vision

    SciTech Connect

    Not Available

    1984-01-01

    This book presents the papers given at a conference on robot vision. Topics considered at the conference included the link between fixed and flexible automation, general applications of machine vision, the development of a specification for a machine vision system, machine vision technology, machine vision non-contact gaging, and vision in electronics manufacturing.

  13. Improved camera calibration method based on perpendicularity compensation for binocular stereo vision measurement system.

    PubMed

    Jia, Zhenyuan; Yang, Jinghao; Liu, Wei; Wang, Fuji; Liu, Yang; Wang, Lingli; Fan, Chaonan; Zhao, Kai

    2015-06-15

    High-precision calibration of binocular vision systems plays an important role in accurate dimensional measurements. In this paper, an improved camera calibration method is proposed. First, an accurate intrinsic-parameter calibration method based on active vision with perpendicularity compensation is developed. Compared to previous work, this method eliminates the effect of non-perpendicularity of the camera motion on calibration accuracy. The principal point, scale factors, and distortion factors are calculated independently in this method, thereby allowing the strong coupling of these parameters to be eliminated. Second, an accurate global optimization method using only five images is presented. The results of calibration experiments show that the accuracy of the calibration method can reach 99.91%. PMID:26193503
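
    For context, a baseline (non-compensated) intrinsic calibration of the kind this method improves upon can be run with OpenCV's standard chessboard pipeline; this generic sketch (file names and pattern geometry are assumptions) does not implement the perpendicularity compensation itself:

```python
import cv2
import numpy as np

pattern, square_mm = (9, 6), 20.0            # assumed board geometry
obj = np.zeros((pattern[0] * pattern[1], 3), np.float32)
obj[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_mm

obj_pts, img_pts, size = [], [], None
for fname in ["calib_01.png", "calib_02.png", "calib_03.png",
              "calib_04.png", "calib_05.png"]:  # five images, as in the paper
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(obj)
        img_pts.append(corners)

# Returns the RMS reprojection error, camera matrix, and distortion terms.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size,
                                                 None, None)
print("RMS reprojection error (px):", rms)
```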

  14. A Novel Vision Sensing System for Tomato Quality Detection.

    PubMed

    Srivastava, Satyam; Boyat, Sachin; Sadistap, Shashikant

    2014-01-01

    Producing tomatoes is a daunting task, as the crop is exposed to attacks from various microorganisms. The symptoms of the attacks are usually changes in color, bacterial spots, specks, and sunken areas with concentric rings of different colors on the tomato's outer surface. This paper addresses a vision-sensing-based system for tomato quality inspection. A novel approach has been developed for tomato fruit detection and disease detection. The developed system consists of a USB-based 12.0-megapixel camera module interfaced with an ARM-9 processor. A ZigBee module has been interfaced with the system for wireless transmission from the host system to a PC-based server for further processing. Algorithm development consists of three major steps: preprocessing (noise rejection, segmentation, and scaling), classification and recognition, and automatic disease detection and classification. Tomato samples were collected from a local market, and data acquisition was performed to prepare a database for the various processing steps. The developed system can detect as well as classify various diseases in tomato samples. Various pattern recognition and soft computing techniques have been implemented for data analysis as well as for prediction of parameters such as the shelf life of the tomato, a quality index based on disease detection and classification, freshness, and maturity index, along with suggestions for the detected diseases. Results were validated against aroma sensing using a commercial Alpha MOS 3000 system. The accuracy calculated from the extracted results is around 92%. PMID:26904620
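
    As a flavor of the segmentation step, a generic HSV threshold can isolate discolored regions on a tomato's surface; the ranges and the two-stage fruit/blemish logic below are illustrative assumptions, not the authors' algorithm:

```python
import cv2

img = cv2.imread("tomato.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Healthy red skin; red hue wraps around 0 on OpenCV's 0-179 scale.
red = (cv2.inRange(hsv, (0, 80, 60), (10, 255, 255))
       | cv2.inRange(hsv, (170, 80, 60), (179, 255, 255)))

# Close holes so the mask covers the whole fruit, blemishes included.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (25, 25))
fruit = cv2.morphologyEx(red, cv2.MORPH_CLOSE, kernel)

# Candidate disease spots: fruit pixels that are not healthy red.
spots = cv2.bitwise_and(fruit, cv2.bitwise_not(red))
ratio = spots.sum() / max(fruit.sum(), 1)    # defect-area fraction
print(f"defective surface fraction: {ratio:.2%}")
```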

  15. Architecture for computer vision application development within the HORUS system

    NASA Astrophysics Data System (ADS)

    Eckstein, Wolfgang; Steger, Carsten T.

    1997-04-01

    An integrated program development environment for computer vision tasks is presented. The first component of the system is concerned with the visualization of 2D image data. This is done in an object-oriented manner. Programming of the visualization process is achieved by arranging the representations of iconic data in an interactively customizable hierarchy that establishes an intuitive flow of messages between data representations seen as objects. The visualization objects, called displays, are designed for different levels of abstraction, ranging from direct iconic representation down to numerical features, depending on the information needed. Two types of messages are passed between these displays, which yields a clear and intuitive semantics. The second component of the system is an interactive tool for rapid program development. It helps the user select appropriate operators in many ways. For example, the system provides context-sensitive selection of possible alternative operators as well as suitable successors and required predecessors. For the task of choosing appropriate parameters, several alternatives exist. First, the system provides default values as well as lists of useful values for all parameters of each operator; to achieve this, a knowledge base containing facts about the operators and their parameters is used. Second, through the tight coupling of the two system components, parameters can be determined quickly by data exploration within the visualization components.

  16. X-Eye: a novel wearable vision system

    NASA Astrophysics Data System (ADS)

    Wang, Yuan-Kai; Fan, Ching-Tang; Chen, Shao-Ang; Chen, Hou-Ye

    2011-03-01

    This paper proposes a smart portable device, named the X-Eye, which provides a gesture interface in a small form factor with a large display, for the application of photo capture and management. The wearable vision system is implemented on embedded hardware and achieves real-time performance. The hardware includes an asymmetric dual-core processor with an ARM core and a DSP core. The display device is a pico projector, which is physically small but can project a large screen. A triple-buffering mechanism is designed for efficient memory management. Software functions are partitioned and pipelined for effective parallel execution. Gesture recognition is achieved first by color classification based on the expectation-maximization algorithm and a Gaussian mixture model (GMM). To improve the performance of the GMM, we devise a lookup-table (LUT) technique. Fingertips are then extracted, and geometrical features of the fingertip shape are matched to recognize the user's gesture commands. In order to verify the accuracy of the gesture recognition module, experiments were conducted in eight scenes with 400 test videos, including the challenges of colorful backgrounds, low illumination, and flicker. The whole system, including gesture recognition, runs at a frame rate of 22.9 fps, and the experiments give a 99% recognition rate. These results demonstrate that this small-size, large-screen wearable system provides an effective gesture interface with real-time performance.
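
    The GMM-plus-LUT speedup is a common embedded trick: score every quantized color once offline, then classify pixels at run time with a single table lookup. A minimal sketch (training data, quantization level, and threshold are assumptions):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def build_color_lut(train_rgb, bins=32, log_lik_thresh=-12.0):
    """Fit a GMM to hand/skin-colored training pixels (N x 3 RGB), then
    precompute a bins^3 boolean LUT over quantized RGB so per-pixel
    classification reduces to one array index."""
    gmm = GaussianMixture(n_components=3, covariance_type="full",
                          random_state=0).fit(train_rgb)
    step = 256 // bins
    centers = (np.stack(np.meshgrid(*[np.arange(bins)] * 3, indexing="ij"),
                        axis=-1).reshape(-1, 3) * step + step // 2)
    lut = gmm.score_samples(centers) > log_lik_thresh
    return lut.reshape(bins, bins, bins), step

def classify(img_rgb, lut, step):
    q = img_rgb // step                      # quantize each channel
    return lut[q[..., 0], q[..., 1], q[..., 2]]
```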

  17. Information Systems in the University of Saskatchewan Libraries: A Vision for the 1990s.

    ERIC Educational Resources Information Center

    Saskatchewan Univ., Saskatoon. Libraries.

    This report describes the vision of the Information Systems Advisory Committee (ISAC) of an Information Systems Model for the 1990s. It includes an evaluation of the present automation environment at the university, a vision of library automation at the University of Saskatchewan between 1994 and 1999, and specific recommendations on such issues…

  18. The Application of Lidar to Synthetic Vision System Integrity

    NASA Technical Reports Server (NTRS)

    Campbell, Jacob L.; UijtdeHaag, Maarten; Vadlamani, Ananth; Young, Steve

    2003-01-01

    One goal in the development of a Synthetic Vision System (SVS) is to create a system that can be certified by the Federal Aviation Administration (FAA) for use at various flight criticality levels. As part of NASA's Aviation Safety Program, Ohio University and NASA Langley have been involved in the research and development of real-time terrain database integrity monitors for SVS. Integrity monitors based on a consistency check with onboard sensors may be required if the inherent terrain database integrity is not sufficient for a particular operation. Sensors such as the radar altimeter and weather radar, which are available on most commercial aircraft, are currently being investigated for use in a real-time terrain database integrity monitor. This paper introduces the concept of using a Light Detection And Ranging (LiDAR) sensor as part of a real-time terrain database integrity monitor. A LiDAR system consists of a scanning laser ranger, an inertial measurement unit (IMU), and a Global Positioning System (GPS) receiver. Information from these three sensors can be combined to generate synthesized terrain models (profiles), which can then be compared to the stored SVS terrain model. This paper discusses an initial performance evaluation of the LiDAR-based terrain database integrity monitor using LiDAR data collected over Reno, Nevada. The paper will address the consistency checking mechanism and test statistic, sensitivity to position errors, and a comparison of the LiDAR-based integrity monitor to a radar altimeter-based integrity monitor.
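
    In spirit, the consistency check reduces to differencing the LiDAR-synthesized profile against the stored database elevations and thresholding a test statistic; a schematic version (the statistic and threshold here are illustrative, not the paper's exact formulation):

```python
import numpy as np

def terrain_integrity(lidar_elev, database_elev, sigma_m=3.0, k=3.0):
    """Compare a LiDAR-synthesized terrain profile (meters) against the
    stored SVS terrain model at the same horizontal sample points and
    test the mean absolute disparity against a k-sigma bound. The real
    monitor derives its threshold from sensor error models."""
    stat = np.mean(np.abs(lidar_elev - database_elev))
    return stat, stat < k * sigma_m
```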

  19. Helmet-mounted pilot night vision systems: Human factors issues

    NASA Technical Reports Server (NTRS)

    Hart, Sandra G.; Brickner, Michael S.

    1989-01-01

    Helmet-mounted displays of infrared imagery (forward-looking infrared (FLIR)) allow helicopter pilots to perform low-level missions at night and in low visibility. However, pilots experience high visual and cognitive workload during these missions, and their performance capabilities may be reduced. Human factors problems inherent in existing systems stem from three primary sources: the nature of thermal imagery; the characteristics of specific FLIR systems; and the difficulty of using FLIR systems for flying and/or visually acquiring and tracking objects in the environment. The pilot night vision system (PNVS) in the Apache AH-64 provides a monochrome, 30 by 40 deg helmet-mounted display of infrared imagery. Thermal imagery is inferior to television imagery in both resolution and contrast ratio. Gray shades represent temperature differences rather than brightness variability, and images undergo significant changes over time. The limited field of view, displacement of the sensor from the pilot's eye position, and monocular presentation of a bright FLIR image (while the other eye remains dark-adapted) are all potential sources of disorientation, limitations in depth and distance estimation, sensations of apparent motion, and difficulties in target and obstacle detection. Insufficient information about human perceptual and performance limitations constrains the ability of human factors specialists to provide significantly improved specifications, training programs, or alternative designs. Additional research is required to determine the most critical problem areas and to propose solutions that consider the human as well as the development of technology.

  20. Flexible vision-based navigation system for unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Blasch, Erik P.

    1995-01-01

    A critical component of unmanned aerial vehicles is the navigation system, which provides position and velocity feedback for autonomous control. The Georgia Tech Aerial Robotics navigation system (NavSys) consists of four DVTStinger70C Integrated Vision Units (IVUs) with CCD-based panning platforms, software, and a fiducial onboard the vehicle. The IVUs independently scan for the retro-reflective bar-code fiducial while the NavSys image processing software performs a gradient threshold followed by an image-search localization of three vertical bar-code lines. Using the (x, y) image coordinate and CCD angle, the NavSys triangulates the fiducial's (x, y) position, differentiates for velocity, and relays the information to the helicopter controller, which independently determines the z direction with an onboard altimeter. System flexibility is demonstrated by recognition of different fiducial shapes and by night- and daytime operation, and the system is being extended to on-board and off-board navigation of aerial and ground vehicles. The navigation design provides a real-time, inexpensive, and effective system for determining the (x, y) position of the aerial vehicle, with updates generated every 51 ms (19.6 Hz) at an accuracy of approximately +/- 2.8 in.
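
    Each IVU contributes a bearing to the fiducial, and two bearings from known camera stations fix the vehicle's (x, y) by ray intersection. A minimal planar sketch (camera poses and the angle convention are assumptions):

```python
import numpy as np

def triangulate_xy(p1, theta1, p2, theta2):
    """Intersect two bearing rays in the plane.

    p1, p2        : (x, y) positions of two camera stations.
    theta1, theta2: absolute bearings (radians) to the fiducial, e.g.
                    pan angle plus the offset implied by the target's
                    image x-coordinate."""
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # Solve p1 + t1*d1 = p2 + t2*d2 for (t1, t2).
    t = np.linalg.solve(np.column_stack([d1, -d2]),
                        np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t[0] * d1

# Differencing successive fixes then yields the velocity estimate that
# the abstract describes.
```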

  1. Vision aided inertial navigation system augmented with a coded aperture

    NASA Astrophysics Data System (ADS)

    Morrison, Jamie R.

    plate aperture produces diffraction patterns that change the shape of the focal blur pattern. When used as an aperture, the Fresnel zone plate produces multiple focal planes in the scene. The interference between the multiple focal planes produces changes in the blur pattern that can be observed both between the focal planes and beyond the most distant focal plane. The Fresnel zone plate aperture and lens may be designed to increase the change in the focal blur pattern at greater depths, thereby improving the measurement performance of the coded aperture system. This research provides an in-depth study of the Fresnel zone plate used as a coded aperture, and of the performance improvement obtained by augmenting a single-camera vision-aided inertial navigation system with a Fresnel zone plate coded aperture. Design and analysis of a generalized coded aperture are presented and demonstrated, and special considerations for the Fresnel zone plate are given. Techniques to determine a continuous depth measurement from a coded image are also presented and evaluated through measurement. Finally, the measurement results from different aperture configurations are statistically modeled and compared within a simulated vision-aided navigation environment to predict the change in performance of a vision-aided inertial navigation system when augmented with a coded aperture.
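
    Depth-from-defocus systems of this kind rest on the thin-lens blur relation; the helper below computes the geometric blur-circle diameter as a function of object depth (a textbook model, not the dissertation's Fresnel-zone-plate response):

```python
def blur_diameter(depth_m, focus_m, focal_m, aperture_m):
    """Geometric blur-circle diameter for a thin lens: an object at
    distance s images at v = f*s/(s - f), and a sensor focused at
    focus_m sees an object at depth_m blurred to a circle of diameter
    A * |v_sensor - v_obj| / v_obj."""
    v_sensor = focal_m * focus_m / (focus_m - focal_m)
    v_obj = focal_m * depth_m / (depth_m - focal_m)
    return aperture_m * abs(v_sensor - v_obj) / v_obj

# Example: a 50 mm f/2 lens focused at 2 m blurs an object at 5 m to a
# spot of roughly 0.4 mm on the sensor.
print(blur_diameter(5.0, 2.0, 0.05, 0.025))
```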

  2. Intelligent robots and computer vision XII: Active vision and 3D methods; Proceedings of the Meeting, Boston, MA, Sept. 8, 9, 1993

    SciTech Connect

    Casasent, D.P.

    1993-01-01

    Topics addressed include active vision for intelligent robots, 3D vision methods, tracking in robotics and vision, visual servoing and egomotion in robotics, egomotion and time-sequential processing, and control and planning in robotics and vision. Particular attention is given to invariants in visual motion, generic target tracking using color, recognizing 3D articulated-line-drawing objects, range data acquisition from an encoded structured light pattern, and 3D edge orientation detection. Also discussed are acquisition of randomly moving objects by visual guidance, fundamental principles of robot vision, high-performance visual servoing for robot end-point control, a long-sequence analysis of human motion using eigenvector decomposition, and sequential computer algorithms for printed circuit board inspection.

  3. New vision solar system mission study. Final report

    SciTech Connect

    Mondt, J.F.; Zubrin, R.M.

    1996-03-01

    The vision for the future of the planetary exploration program includes the capability to deliver "constellations" or "fleets" of microspacecraft to a planetary destination. These fleets will act in a coordinated manner to gather science data from a variety of locations on or around the target body, thus providing detailed, global coverage without requiring development of a single large, complex and costly spacecraft. Such constellations of spacecraft, coupled with advanced information processing and visualization techniques and high-rate communications, could provide the basis for development of a "virtual presence" in the solar system. A goal could be the near real-time delivery of planetary images and video to a wide variety of users in the general public and the science community. This will be a major step in making the solar system accessible to the public and will help make solar system exploration a part of the human experience on Earth.

  4. A Portable Stereo Vision System for Whole Body Surface Imaging

    PubMed Central

    Yu, Wurong; Xu, Bugao

    2009-01-01

    This paper presents a whole body surface imaging system based on stereo vision technology. We have adopted a compact and economical configuration which involves only four stereo units to image the frontal and rear sides of the body. The success of the system depends on a stereo matching process that can effectively segment the body from the background in addition to recovering sufficient geometric details. For this purpose, we have developed a novel sub-pixel, dense stereo matching algorithm which includes two major phases. In the first phase, the foreground is accurately segmented with the help of a predefined virtual interface in the disparity space image, and a coarse disparity map is generated with block matching. In the second phase, local least squares matching is performed in combination with global optimization within a regularization framework, so as to ensure both accuracy and reliability. Our experimental results show that the system can realistically capture smooth and natural whole body shapes with high accuracy. PMID:20161620
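
    The first (coarse) phase is classical block matching; a compact SSD version over a horizontal disparity search (window size and search range are arbitrary choices here, and the paper's virtual-interface segmentation is omitted):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def block_match(left, right, max_disp=64, win=5):
    """Coarse disparity map by sum-of-squared-differences block matching
    along epipolar rows of a rectified grayscale pair."""
    h, w = left.shape
    left, right = left.astype(np.float64), right.astype(np.float64)
    disp = np.zeros((h, w), np.int32)
    best = np.full((h, w), np.inf)
    for d in range(max_disp):
        sq = np.full((h, w), 1e12)           # invalid where shift runs off
        sq[:, d:] = (left[:, d:] - right[:, :w - d]) ** 2
        ssd = uniform_filter(sq, size=win)   # aggregate over win x win block
        hit = ssd < best
        disp[hit], best[hit] = d, ssd[hit]
    return disp
```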

  5. Versatile robot vision system using line-scan sensors

    NASA Astrophysics Data System (ADS)

    Godber, Simon X.; Robinson, Max; Evans, J. Paul O.

    1993-03-01

    This paper describes ongoing research into machine vision systems based on line-scan (linear array) cameras. Such devices have been used successfully in the production-line environment, as the inherent movement within the manufacturing process can be utilized for image production. However, applications such as these have traditionally used the line-scan device in a purely two-dimensional role. Initial research was carried out to extend such 2-D arrangements into a 3-D system, retaining the lateral motion of the object with respect to the camera. The resulting stereoscopic camera allowed three-dimensional coordinate data to be extracted from a moving object volume (workspace). The most recent work has involved rotating line-scan systems in relation to a static scene. This allows images to be produced with fields of view varying in both size and position during the rotation. Due to the nature of the movement, the images can be complex, depending on the size of the field of view selected. Benefits of obtaining images in this fashion include 'all-round' observation, variable resolution in the movement axis, and a calibrated volume that can be moved to observe any point in a 360 degree arc.

  6. Neural Summation in the Hawkmoth Visual System Extends the Limits of Vision in Dim Light.

    PubMed

    Stöckl, Anna Lisa; O'Carroll, David Charles; Warrant, Eric James

    2016-03-21

    Most of the world's animals are active in dim light and depend on good vision for the tasks of daily life. Many have evolved visual adaptations that permit a performance superior to that of manmade imaging devices [1]. In insects, a major model visual system, nocturnal species show impressive visual abilities ranging from flight control [2, 3], to color discrimination [4, 5], to navigation using visual landmarks [6-8] or dim celestial compass cues [9, 10]. In addition to optical adaptations that improve their sensitivity in dim light [11], neural summation of light in space and time-which enhances the coarser and slower features of the scene at the expense of noisier finer and faster features-has been suggested to improve sensitivity in theoretical [12-14], anatomical [15-17], and behavioral [18-20] studies. How these summation strategies function neurally is, however, presently unknown. Here, we quantified spatial and temporal summation in the motion vision pathway of a nocturnal hawkmoth. We show that spatial and temporal summation combine supralinearly to substantially increase contrast sensitivity and visual information rate over four decades of light intensity, enabling hawkmoths to see at light levels 100 times dimmer than without summation. Our results reveal how visual motion is calculated neurally in dim light and how spatial and temporal summation improve sensitivity while simultaneously maximizing spatial and temporal resolution, thus extending models of insect motion vision derived predominantly from diurnal flies. Moreover, the summation strategies we have revealed may benefit manmade vision systems optimized for variable light levels [21]. PMID:26948877
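
    The engineering benefit of summation is easy to demonstrate: pooling N roughly independent noisy samples in space and time raises the signal-to-noise ratio by about sqrt(N) at the cost of resolution. A toy illustration (not a model of the hawkmoth pathway):

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 4 * np.pi, 256))[None, :].repeat(256, 0)
frames = signal + rng.normal(0, 2.0, size=(16, 256, 256))   # 16 noisy frames

temporal = frames.mean(axis=0)                # temporal summation
spatiotemporal = uniform_filter(temporal, 4)  # plus 4x4 spatial pooling

def snr(est):
    return signal.std() / (est - signal).std()

print(snr(frames[0]), snr(temporal), snr(spatiotemporal))
```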

  7. Development of a machine vision fire detection system

    NASA Astrophysics Data System (ADS)

    Goedeke, A. D.; Healey, G.; Drda, B.

    1994-03-01

    This project resulted in the development, test, and delivery of a patented Machine Vision Fire Detector System (MVFDS) that provides, for the first time, a unique and reliable method of detecting fire events and determining their size, growth, distance, location, and overall threat in real time. The system also provides simultaneous video coverage of the area being monitored for fires. This 'man-in-the-loop' capability provides an option for manual override of an automatic suppressant dump, or manual release of the suppressant agent. The MVFDS is designed to be immune to false alarms through a decision process involving identification, comparison, and deduction of unique properties of fire, emulating a human's process of deduction and decision. These unique properties have been incorporated into a fire model from which the algorithms were developed. The MVFDS uses a commercially available color CCD camera, frame grabber, microprocessor, video chip, and electronics. In aircraft hangar and facility applications, the detector is designed to identify a 2-foot x 2-foot fire at a distance of 100 feet in less than 0.5 seconds with no false alarms and, in other applications, to detect fires in less than 30 milliseconds.
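
    The patented MVFDS decision process is not spelled out in the abstract, but color-camera fire detectors commonly combine a flame-color rule with a temporal flicker test; a generic sketch of that idea (thresholds are illustrative):

```python
import numpy as np

def fire_candidates(frames_rgb, red_min=180, flicker_min=15.0):
    """frames_rgb: (T, H, W, 3) uint8 sequence from a color CCD camera.
    Flame-colored pixels satisfy R > G > B with a strong red channel;
    genuine flames also flicker, so high temporal intensity variance
    across the frame buffer is required as well."""
    f = frames_rgb.astype(np.float64)
    r, g, b = f[..., 0], f[..., 1], f[..., 2]
    color = (r > red_min) & (r > g) & (g > b)
    flicker = f.mean(axis=-1).std(axis=0) > flicker_min
    return color.any(axis=0) & flicker       # (H, W) candidate mask
```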

  8. Vision system for gauging and automatic straightening of steel bars

    NASA Astrophysics Data System (ADS)

    Sidla, Oliver; Wilding, Ernst; Niel, Albert; Barg, Herbert

    2001-02-01

    A machine vision application for the fully automatic straightening of steel bars is presented. The bars, with lengths of up to 6000 mm, are often bent on exit from the rolling mill and need to be straightened prior to delivery to a customer. The shape of the steel bar is extracted and measured by two video-resolution cameras that are calibrated in position and viewing angle relative to a coordinate system located in the center of the roller table. The bar's contour is tracked and located with a dynamic programming method utilizing several constraints to make the algorithm as robust as possible. 3D camera calibration allows the transformation of image coordinates to real-world coordinates. After smoothing and spline fitting, the curvature of the bar is computed. A deformation model of the effect of force applied to the steel allows the system to generate press commands stating where, and with what specific pressure, the bar has to be processed. The model can be used to predict the straightening of the bar over consecutive pressing events, helping to optimize the operation. The process of measurement and pressing is repeated until the straightness of the bar reaches a predefined limit.
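
    The smoothing, spline fitting, and curvature computation translate into a few lines of SciPy (a generic sketch of this step only; the deformation model and press-command generation are not reproduced):

```python
import numpy as np
from scipy.interpolate import splprep, splev

def bar_curvature(x, y, smooth=1.0, n=2000):
    """Fit a smoothing spline to the measured bar contour (real-world
    coordinates) and return the signed curvature along it:
    kappa = (x'y'' - y'x'') / (x'^2 + y'^2)^1.5."""
    tck, _ = splprep([x, y], s=smooth)
    u = np.linspace(0.0, 1.0, n)
    dx, dy = splev(u, tck, der=1)
    ddx, ddy = splev(u, tck, der=2)
    return (np.asarray(dx) * ddy - np.asarray(dy) * ddx) / \
           (np.asarray(dx) ** 2 + np.asarray(dy) ** 2) ** 1.5

# Peaks of |kappa| mark where a press command would act.
```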

  9. A reconfigurable real-time morphological system for augmented vision

    NASA Astrophysics Data System (ADS)

    Gibson, Ryan M.; Ahmadinia, Ali; McMeekin, Scott G.; Strang, Niall C.; Morison, Gordon

    2013-12-01

    There is a significant number of visually impaired individuals who suffer sensitivity loss to high spatial frequencies, for whom current optical devices are limited in degree of visual aid and practical application. Digital image and video processing offers a variety of effective visual enhancement methods that can be utilised to obtain a practical augmented vision head-mounted display device. The high spatial frequencies of an image can be extracted by edge detection techniques and overlaid on top of the original image to improve visual perception among the visually impaired. Augmented visual aid devices require highly user-customisable algorithm designs for subjective configuration per task, where current digital image processing visual aids offer very little user-configurable options. This paper presents a highly user-reconfigurable morphological edge enhancement system on field-programmable gate array, where the morphological, internal and external edge gradients can be selected from the presented architecture with specified edge thickness and magnitude. In addition, the morphology architecture supports reconfigurable shape structuring elements and configurable morphological operations. The proposed morphology-based visual enhancement system introduces a high degree of user flexibility in addition to meeting real-time constraints capable of obtaining 93 fps for high-definition image resolution.
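
    The three gradient variants the architecture selects between have direct software equivalents; in OpenCV terms (a functional sketch; the actual design is fixed-point FPGA hardware):

```python
import cv2

img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
# Reconfigurable structuring element: shape and size are user choices.
se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

dilated, eroded = cv2.dilate(img, se), cv2.erode(img, se)
morph_grad = cv2.subtract(dilated, eroded)   # full morphological gradient
internal = cv2.subtract(img, eroded)         # internal (half) gradient
external = cv2.subtract(dilated, img)        # external (half) gradient

# Edge overlay: boost high-spatial-frequency content on the original.
enhanced = cv2.add(img, morph_grad)
```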

  10. Hardware and software for prototyping industrial vision systems

    NASA Astrophysics Data System (ADS)

    Batchelor, Bruce G.; Daley, Michael W.; Griffiths, Eric C.

    1994-10-01

    A simple, low-cost device is described, which the authors have developed for prototyping industrial machine vision systems. The unit provides facilities for controlling the following devices, via a single serial (RS232) port connected to a host computer: (a) twelve ON/OFF mains devices (lamps, laser stripe generator, pattern projector, etc.); (b) four ON/OFF pneumatic valves (mounted on board the hardware module); (c) one 8-way video multiplexer; (d) six programmable-speed serial (RS232) communication ports; (e) six opto-isolated 8-way parallel I/O ports. Using this unit, it is possible for software running on the host computer, containing only the most rudimentary I/O facilities, to operate a range of electro-mechanical devices. For example, a HyperCard program can switch lamps and pneumatic air lines ON/OFF, control the movements of an (X, Y, theta)-table and select different video cameras. These electro-mechanical devices form part of a flexible inspection cell, which the authors have built recently. This cell is being used to study the inspection of low-volume batch products, without the need for detailed instructions. The interface module has also been used to connect an image processing package, based on the Prolog programming language, to a gantry robot. This system plays dominoes against a human opponent.
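
    Driving such a box from host software with only rudimentary I/O amounts to writing command bytes to the serial port; a hypothetical pySerial session (the command strings are invented for illustration; the paper does not document its protocol):

```python
import serial  # pySerial

# The interface unit hangs off a single RS232 port on the host.
port = serial.Serial("/dev/ttyS0", baudrate=9600, timeout=1.0)

port.write(b"LAMP 3 ON\r\n")    # hypothetical: switch mains channel 3 on
port.write(b"VALVE 1 OFF\r\n")  # hypothetical: close pneumatic valve 1
port.write(b"VIDEO 5\r\n")      # hypothetical: select camera 5 on the mux
ack = port.readline()           # assume the unit acknowledges each command
port.close()
```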

  11. Development of image processing LSI "SuperVchip" for real-time vision systems

    NASA Astrophysics Data System (ADS)

    Muramatsu, Shoji; Kobayashi, Yoshiki; Otsuka, Yasuo; Shojima, Hiroshi; Tsutsumi, Takayuki; Imai, Toshihiko; Yamada, Shigeyoshi

    2002-03-01

    A new image-processing LSI, the SuperVchip, with high-performance computing power has been developed. The SuperVchip provides powerful capabilities for vision systems: 1. general image processing with 3x3, 5x5, and 7x7 kernels for high-speed filtering; 2. sixteen parallel gray search engine units for robust template matching; 3. forty-nine block-matching PEs that calculate the sum of absolute differences in parallel for stereo vision; 4. a color extraction unit for color object recognition. The SuperVchip also integrates peripheral functions of vision systems, such as a video interface, extended PCI interface, RISC engine interface, and image memory controller, on a single chip. Therefore, small, high-performance vision systems can be realized with the SuperVchip. In this paper, the above circuits are presented, and the architecture of a vision device equipped with the SuperVchip and its performance are also described.

  12. Single-computer HWIL simulation facility for real-time vision systems

    NASA Astrophysics Data System (ADS)

    Fuerst, Simon; Werner, Stefan; Dickmanns, Ernst D.

    1998-07-01

    UBM has been working on autonomous vision systems for aircraft for more than one and a half decades. The systems developed use standard on-board sensors and two additional monochrome cameras for state estimation of the aircraft. A common task is to detect and track a runway for an autonomous landing approach. The cameras have different focal lengths and are mounted on a special pan-and-tilt camera platform. As the platform is equipped with two resolvers and two gyros, it can be stabilized inertially, and the system has the ability to actively focus on the objects of highest interest. For verification and testing, UBM has a special HWIL simulation facility for real-time vision systems. The central part of this simulation facility is a three-axis motion simulator (DBS). It is used to realize the computed orientation in the rotational degrees of freedom of the aircraft. The two-axis camera platform with its two CCD cameras is mounted on the inner frame of the DBS and points at a cylindrical projection screen with a synthetic view displayed on it. As the performance of visual perception systems has increased significantly in recent years, a new, more powerful synthetic vision system was required. A single Onyx2 machine replaced all the former simulation computers. This computer is powerful enough to simulate the aircraft, generate a high-resolution synthetic view, control the DBS, and communicate with the image processing computers. Further improvements are the significantly reduced delay times for closed-loop simulations and the elimination of communication overhead.

  13. Machine Vision Giving Eyes to Robots. Resources in Technology.

    ERIC Educational Resources Information Center

    Technology Teacher, 1990

    1990-01-01

    This module introduces machine vision, which can be used for inspection, robot guidance and part sorting. The future for machine vision will include new technology and will bring vision systems closer to the ultimate vision processor, the human eye. Includes a student quiz, outcomes, and activities. (JOW)

  14. ARM-based visual processing system for prosthetic vision.

    PubMed

    Matteucci, Paul B; Byrnes-Preston, Philip; Chen, Spencer C; Lovell, Nigel H; Suaning, Gregg J

    2011-01-01

    A growing number of prosthetic devices have been shown to provide visual perception to the profoundly blind through electrical neural stimulation. These first-generation devices offer promising outcomes to those affected by degenerative disorders such as retinitis pigmentosa. Although prosthetic approaches vary in their placement of the stimulating array (visual cortex, optic-nerve, epi-retinal surface, sub-retinal surface, supra-choroidal space, etc.), most of the solutions incorporate an externally-worn device to acquire and process video to provide the implant with instructions on how to deliver electrical stimulation to the patient, in order to elicit phosphenized vision. With the significant increase in availability and performance of low power-consumption smart phone and personal device processors, the authors investigated the use of a commercially available ARM (Advanced RISC Machine) device as an externally-worn processing unit for a prosthetic neural stimulator for the retina. A 400 MHz Samsung S3C2440A ARM920T single-board computer was programmed to extract 98 values from a 1.3 Megapixel OV9650 CMOS camera using impulse, regional averaging and Gaussian sampling algorithms. Power consumption and speed of video processing were compared to results obtained to similar reported devices. The results show that by using code optimization, the system is capable of driving a 98 channel implantable device for the restoration of visual percepts to the blind. PMID:22255197
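
    Of the three sampling schemes mentioned, regional averaging is the simplest to sketch: partition the frame into as many tiles as there are stimulation channels and average each tile. The 98-channel count is from the abstract; the 7x14 grid layout is an assumption:

```python
import numpy as np

def regional_average(gray, rows=7, cols=14):
    """Reduce a grayscale frame to rows*cols = 98 stimulation values by
    averaging rectangular tiles, one per implant electrode."""
    h, w = gray.shape
    out = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            out[i, j] = gray[i * h // rows:(i + 1) * h // rows,
                             j * w // cols:(j + 1) * w // cols].mean()
    return out.ravel()   # 98 values to map onto stimulation amplitudes
```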

  15. Vision System for Remote Strain/Deformation Measurement

    SciTech Connect

    Hovis, G.L.

    1999-01-26

    Machine vision metrology is ideally suited to the task of non-contact/non-intrusive deformation and strain measurement in a remote system. The objective of this work-in-progress is to develop a compact instrument for strain measurement consisting of a camera, image capture card, PC, software, and light source. The instrument is portable and useful in a variety of applications and environments. A digital camera with a microscopic lens is connected to an image capture card in a PC. Commercially available image processing software is used to control the image capture and image processing steps leading up to displacement/strain measurement. Image processing steps include filtering and edge/feature enhancement. Custom software is required to control/automate certain elements of the acquisition and processing. Images of a region on the surface of a specimen are acquired at hold points (during static tests) or at regular time intervals (during transients). Salient features in the image scene (microstructure, oxide deposits, etc.) are observed in subsequent images. The strain measurement algorithm characterizes relative motion of the salient features with individual displacement vectors yielding 2-D deformation equations. The set of deformation equations is solved simultaneously to yield unknown deformation gradient terms that are used to express 2-D strain. The overall concept, theory, and test results to date are presented herein.
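
    The final step, solving the displacement-vector equations for 2-D strain, can be written as a least-squares fit of a homogeneous deformation (a schematic of the underlying theory, not the instrument's software):

```python
import numpy as np

def strain_from_features(X_ref, X_def):
    """Fit the homogeneous 2-D mapping x = F X + c to tracked feature
    positions (N x 2 arrays before and after deformation), then return
    the Green-Lagrange strain tensor E = 0.5 * (F^T F - I)."""
    n = X_ref.shape[0]
    A = np.hstack([X_ref, np.ones((n, 1))])        # [X, 1] design matrix
    coef, *_ = np.linalg.lstsq(A, X_def, rcond=None)
    F = coef[:2].T                                 # deformation gradient
    return 0.5 * (F.T @ F - np.eye(2))
```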

  16. Multi-Image Registration for an Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn

    2002-01-01

    An Enhanced Vision System (EVS) utilizing multi-sensor image fusion is currently under development at the NASA Langley Research Center. The EVS will provide enhanced images of the flight environment to assist pilots in poor visibility conditions. Multi-spectral images obtained from a short wave infrared (SWIR), a long wave infrared (LWIR), and a color visible band CCD camera, are enhanced and fused using the Retinex algorithm. The images from the different sensors do not have a uniform data structure: the three sensors not only operate at different wavelengths, but they also have different spatial resolutions, optical fields of view (FOV), and bore-sighting inaccuracies. Thus, in order to perform image fusion, the images must first be co-registered. Image registration is the task of aligning images taken at different times, from different sensors, or from different viewpoints, so that all corresponding points in the images match. In this paper, we present two methods for registering multiple multi-spectral images. The first method performs registration using sensor specifications to match the FOVs and resolutions directly through image resampling. In the second method, registration is obtained through geometric correction based on a spatial transformation defined by user selected control points and regression analysis.
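
    The second method, a spatial transformation regressed from user-selected control points, amounts to a least-squares warp; a minimal affine version (the EVS work may use a different transformation family):

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares 2-D affine transform mapping control points in one
    sensor's image (src) onto corresponding points in another (dst).
    Returns a 2x3 matrix M such that dst ~ M @ [x, y, 1]^T."""
    n = src_pts.shape[0]
    A = np.hstack([src_pts, np.ones((n, 1))])
    M, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)
    return M.T   # 2 x 3, usable with e.g. cv2.warpAffine for resampling
```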

  17. The forms of knowledge mobilized in some machine vision systems.

    PubMed Central

    Brady, M

    1997-01-01

    This paper describes a number of computer vision systems that we have constructed, and which are firmly based on knowledge of diverse sorts. However, that knowledge is often represented in a way that is only accessible to a limited set of processes, that make limited use of it, and though the knowledge is amenable to change, in practice it can only be changed in rather simple ways. The rest of the paper addresses the questions: (i) what knowledge is mobilized in the furtherance of a perceptual task?; (ii) how is that knowledge represented?; and (iii) how is that knowledge mobilized? First we review some cases of early visual processing where the mobilization of knowledge seems to be a key contributor to success yet where the knowledge is deliberately represented in a quite inflexible way. After considering the knowledge that is involved in overcoming the projective nature of images, we move the discussion to the knowledge that was required in programs to match, register, and recognize shapes in a range of applications. Finally, we discuss the current state of process architectures for knowledge mobilization. PMID:9304690

  18. A neural network based artificial vision system for licence plate recognition.

    PubMed

    Draghici, S

    1997-02-01

    This paper presents a neural network based artificial vision system able to analyze the image of a car given by a camera, locate the registration plate and recognize the registration number of the car. The paper describes in detail various practical problems encountered in implementing this particular application and the solutions used to solve them. The main features of the system presented are: controlled stability-plasticity behavior, controlled reliability threshold, both off-line and on-line learning, self-assessment of the output reliability and high reliability based on high-level multiple feedback. The system has been designed using a modular approach. Sub-modules can be upgraded and/or substituted independently, thus making the system potentially suitable in a large variety of vision applications. The OCR engine was designed as an interchangeable plug-in module. This allows the user to choose an OCR engine which is suited to the particular application and to upgrade it easily in the future. At present, there are several versions of this OCR engine. One of them is based on a fully connected feedforward artificial neural network with sigmoidal activation functions. This network can be trained with various training algorithms such as error backpropagation. An alternative OCR engine is based on the constraint based decomposition (CBD) training architecture. The system has shown the following performance (on average) on real-world data: successful plate location and segmentation about 99%, successful character recognition about 98%, and successful recognition of complete registration plates about 80%. PMID:9228583
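
    The first OCR engine described, a fully connected feedforward network with sigmoidal activations, has a forward pass of only a few lines (weights here are random placeholders; training by error backpropagation is omitted):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class CharNet:
    """Tiny fully connected sigmoid network: pixel glyph -> class scores."""
    def __init__(self, n_in=16 * 16, n_hidden=64, n_out=36, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))

    def forward(self, glyph):
        h = sigmoid(glyph.ravel() @ self.W1)   # hidden layer
        return sigmoid(h @ self.W2)            # scores for A-Z and 0-9

# scores = CharNet().forward(np.random.rand(16, 16))
```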

  19. Neural correlates of active vision: An fMRI comparison of natural reading and scene viewing.

    PubMed

    Choi, Wonil; Henderson, John M

    2015-08-01

    Theories of eye movement control during active vision tasks such as reading and scene viewing have primarily been developed and tested using data from eye tracking and computational modeling, and little is currently known about the neurocognition of active vision. The current fMRI study was conducted to examine the nature of the cortical networks that are associated with active vision. Subjects were asked to read passages for meaning and view photographs of scenes for a later memory test. The eye movement control network comprising frontal eye field (FEF), supplementary eye fields (SEF), and intraparietal sulcus (IPS), commonly activated during single-saccade eye movement tasks, were also involved in reading and scene viewing, suggesting that a common control network is engaged when eye movements are executed. However, the activated locus of the FEF varied across the two tasks, with medial FEF more activated in scene viewing relative to passage reading and lateral FEF more activated in reading than scene viewing. The results suggest that eye movements during active vision are associated with both domain-general and domain-specific components of the eye movement control network. PMID:26026255

  20. Active vision and image/video understanding with decision structures based on the network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2003-08-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding as an interpretation of visual information in terms of such knowledge models. The human brain's ability to emulate knowledge structures in the form of network-symbolic models has been identified, and this implies an important paradigm shift in our understanding of the brain, from neural networks to "cortical software". Symbols, predicates, and grammars naturally emerge in such active multilevel hierarchical networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type decision structure created via multilevel hierarchical compression of visual information. Mid-level vision processes such as clustering, perceptual grouping, and separation of figure from ground are special kinds of graph/network transformations. They convert the low-level image structure into a set of more abstract structures that represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures; higher-level vision phenomena are the results of such analysis. Composition of network-symbolic models works similarly to frames and agents, combining learning, classification, and analogy with higher-level model-based reasoning in a single framework. Such models do not require supercomputers. Based on these principles, and using methods of computational intelligence, an image understanding system can convert images into network-symbolic knowledge models and effectively resolve uncertainty and ambiguity, providing a unifying representation for perception and cognition. This allows the creation of new intelligent computer vision systems for the robotics and defense industries.

  1. Application of edge detection algorithm for vision guided robotics assembly system

    NASA Astrophysics Data System (ADS)

    Balabantaray, Bunil Kumar; Jha, Panchanand; Biswal, Bibhuti Bhusan

    2013-12-01

    Machine vision systems have a major role in making robotic assembly systems autonomous. Detection and identification of the correct part are important tasks that need to be carefully done by a vision system to initiate the process. This process consists of many sub-processes, wherein image capture, digitization, and enhancement serve to reconstruct the part for subsequent operations. Edge detection of the grabbed image therefore plays an important role in the entire image processing activity, and one needs to choose the correct tool for the process with respect to the given environment. In this paper, a comparative study of edge detection algorithms for grasping objects in a robot assembly system is presented. The work was performed in MATLAB R2010a Simulink. Four algorithms are compared: the Canny, Roberts, Prewitt, and Sobel edge detectors. An attempt has been made to find the best algorithm for the problem. It is found that the Canny edge detector gives the best results and minimum error for the intended task.
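
    The same four detectors are available in a few lines outside Simulink; an OpenCV/NumPy comparison (Roberts and Prewitt are built from their 2x2 and 3x3 kernels, since OpenCV has no named functions for them):

```python
import cv2
import numpy as np

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)
f = img.astype(np.float32)

canny = cv2.Canny(img, 100, 200)

sobel = np.hypot(cv2.Sobel(f, cv2.CV_32F, 1, 0),
                 cv2.Sobel(f, cv2.CV_32F, 0, 1))

kx = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]], np.float32)  # Prewitt
prewitt = np.hypot(cv2.filter2D(f, -1, kx), cv2.filter2D(f, -1, kx.T))

rx = np.array([[1, 0], [0, -1]], np.float32)                     # Roberts
ry = np.array([[0, 1], [-1, 0]], np.float32)
roberts = np.hypot(cv2.filter2D(f, -1, rx), cv2.filter2D(f, -1, ry))
```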

  2. Network integration. A new vision for Catholic healthcare systems in the 1990s.

    PubMed

    Mason, S A

    1990-12-01

    The 1990s will be the decade of network integration for many of the nation's healthcare organizations. Catholic healthcare systems will have to refocus on local and regional healthcare delivery. To succeed in local and regional markets, the systems will have to offer various levels of care through numerous types of providers, share services among facilities, cooperate with secular organizations, and build stronger affiliations with local parishes. Managing this change (from offering fragmented healthcare services to offering integrated services) will be a major challenge facing organizations in the decade ahead. They must develop a clearly articulated vision to provide stability during this time of rapid change. To meet the challenges of the 1990s, Catholic healthcare systems will have to determine the types of functional sharing that will be beneficial at the local level, divest and transfer sponsorship of facilities that burden the system's mission, and expand the activities of the laity. PMID:10108005

  3. The precision measurement and assembly for miniature parts based on double machine vision systems

    NASA Astrophysics Data System (ADS)

    Wang, X. D.; Zhang, L. F.; Xin, M. Z.; Qu, Y. Q.; Luo, Y.; Ma, T. M.; Chen, L.

    2015-02-01

    In the assembly of miniature parts, structural features on the bottom or side of the parts often need to be aligned and positioned. General assembly equipment integrating a single vertical, downward-looking machine vision system cannot satisfy this requirement. Precision automatic assembly equipment was therefore developed with two machine vision systems integrated. In this setup, a horizontal vision system is employed to measure the position of feature structures in the part's side view, which cannot be seen by the vertical system. The position measured by the horizontal camera is converted into the vertical vision system's frame using calibration information. With careful calibration, the alignment and positioning of the parts during assembly can be guaranteed. The developed assembly equipment has the characteristics of easy implementation, modularity, and high cost performance. The handling of the miniature parts and the assembly procedure are briefly introduced, the calibration procedure is given, and the assembly error is analyzed for compensation.
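
    At its core, the measurement transfer is a homogeneous-coordinate change of frame obtained from calibration; schematically (the 4x4 matrix T_hv, from the horizontal to the vertical camera frame, is assumed to come from the calibration step):

```python
import numpy as np

def to_vertical_frame(p_horizontal, T_hv):
    """Convert a 3-D point measured in the horizontal camera's frame
    into the vertical camera's frame, where T_hv is the 4x4 homogeneous
    transform (rotation plus translation) found by calibration."""
    p = np.append(np.asarray(p_horizontal, float), 1.0)  # homogeneous
    return (T_hv @ p)[:3]
```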

  4. Evaluating the Effects of Dimensionality in Advanced Avionic Display Concepts for Synthetic Vision Systems

    NASA Technical Reports Server (NTRS)

    Alexander, Amy L.; Prinzel, Lawrence J., III; Wickens, Christopher D.; Kramer, Lynda J.; Arthur, Jarvis J.; Bailey, Randall E.

    2007-01-01

    Synthetic vision systems provide an in-cockpit view of terrain and other hazards via a computer-generated display representation. Two experiments examined several display concepts for synthetic vision and evaluated how such displays modulate pilot performance. Experiment 1 (24 general aviation pilots) compared three navigational display (ND) concepts: 2D coplanar, 3D, and split-screen. Experiment 2 (12 commercial airline pilots) evaluated baseline 'blue sky/brown ground' or synthetic vision-enabled primary flight displays (PFDs) and three ND concepts: 2D coplanar with and without synthetic vision and a dynamic multi-mode rotatable exocentric format. In general, the results pointed to an overall advantage for a split-screen format, whether it be stand-alone (Experiment 1) or available via rotatable viewpoints (Experiment 2). Furthermore, Experiment 2 revealed benefits associated with utilizing synthetic vision in both the PFD and ND representations and the value of combined ego- and exocentric presentations.

  5. Measurement of meat color using a computer vision system.

    PubMed

    Girolami, Antonio; Napolitano, Fabio; Faraone, Daniela; Braghieri, Ada

    2013-01-01

    The limits of the colorimeter and of an image analysis technique in evaluating the color of beef, pork, and chicken were investigated. A Minolta CR-400 colorimeter and a computer vision system (CVS) were employed to measure colorimetric characteristics. To evaluate the chromatic fidelity of the sample image displayed on the monitor, similarity tests were carried out using a trained panel. In the first test, the panelists observed the actual meat sample and the sample image on the monitor at the same time in order to evaluate the similarity between them (test A). The panelists were then asked to evaluate the similarity between two colors, both generated with Adobe Photoshop CS3, one using the L, a, and b values read by the colorimeter and the other obtained using the CVS (test B); which of the two colors was more similar to the sample visualized on the monitor was also assessed (test C). The panelists found the digital images very similar to the actual samples (P<0.001). As to the similarity between the CVS- and colorimeter-based colors (test B), the panelists found significant differences between them (P<0.001). Test C showed that the color of the sample on the monitor was more similar to the CVS-generated color than to the colorimeter-generated color. The differences between the values of L, a, b, hue angle, and chroma obtained with the CVS and the colorimeter were statistically significant (P<0.05-0.001). These results showed that the colorimeter did not generate coordinates corresponding to the true color of meat, whereas the CVS method appeared to give valid measurements that reproduced a color very similar to the real one. PMID:22981646
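
    The CVS side of such a comparison is straightforward with scikit-image: average a region of interest in the image and convert it to CIELAB for comparison with the colorimeter's readings (a generic sketch; the paper's imaging setup and color calibration are not reproduced):

```python
from skimage import io, color

img = io.imread("meat_sample.png")[..., :3] / 255.0   # RGB in [0, 1]
lab = color.rgb2lab(img)

# Average L*, a*, b* over a region of interest on the meat surface.
roi = lab[100:300, 150:350].reshape(-1, 3)
L, a, b = roi.mean(axis=0)
print(f"CVS color: L*={L:.1f}, a*={a:.1f}, b*={b:.1f}")
```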

  6. Lidar multi-range integrated Dewar assembly (IDA) for active-optical vision navigation sensor

    NASA Astrophysics Data System (ADS)

    Mayner, Philip; Clemet, Ed; Asbrock, Jim; Chen, Isabel; Getty, Jonathan; Malone, Neil; De Loo, John; Giroux, Mark

    2013-09-01

    A multi-range focal plane was developed and delivered by Raytheon Vision Systems for a docking system that was demonstrated on STS-134. This required state-of-the-art focal-plane and electronics synchronization to capture nanosecond-length laser pulses and determine ranges with an accuracy of better than 1 inch.

  7. The autonomy of the visual systems and the modularity of conscious vision.

    PubMed Central

    Zeki, S; Bartels, A

    1998-01-01

    Anatomical and physiological evidence shows that the primate visual brain consists of many distributed processing systems, acting in parallel. Psychophysical studies show that the activity in each of the parallel systems reaches its perceptual end-point at a different time, thus leading to a perceptual asynchrony in vision. This, together with clinical and human imaging evidence, suggests strongly that the processing systems are also perceptual systems and that the different processing-perceptual systems can act more or less autonomously. Moreover, activity in each can have a conscious correlate without necessarily involving activity in other visual systems. This leads us to conclude not only that visual consciousness is itself modular, reflecting the basic modular organization of the visual brain, but that the binding of cellular activity in the processing-perceptual systems is more properly thought of as a binding of the consciousnesses generated by each of them. It is this binding that gives us our integrated image of the visual world. PMID:9854263

  8. Technique for positioning moving binocular vision measurement system and data registration with ball target

    NASA Astrophysics Data System (ADS)

    Gu, Fei-fei; Zhao, Hong; Zhao, Zinxin; Zhang, Lu

    2013-04-01

    A ball-based intermediary target technique is presented to position a moving machine vision measurement system and to realize data registration across different positions. Machine-vision measurement of large work-pieces faces several problems: the viewing angle is limited, and measurement range and accuracy are inversely related. To measure the whole work-piece conveniently and precisely, the idea of using balls as the registration target is proposed in this paper. Only a single image of the ball target is required from each camera, after which the vision system is fully calibrated (intrinsic and extrinsic camera parameters). When the vision system has to be moved to measure the whole work-piece, one snapshot of the ball target in the common view positions the system, and data registration can then be fulfilled. To locate the ball's center more accurately, an error correction model is established.
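
    Locating the ball target's center from reconstructed surface points can be illustrated with a linear least-squares sphere fit; this is a minimal sketch, not the paper's error correction model:

        # Fit a sphere to Nx3 surface points by linearizing
        # |p - c|^2 = r^2 into 2*c.p + (r^2 - |c|^2) = |p|^2.
        import numpy as np

        def fit_sphere(points):
            """Return (center, radius) of the least-squares sphere."""
            P = np.asarray(points, dtype=float)
            A = np.hstack([2.0 * P, np.ones((len(P), 1))])
            b = (P ** 2).sum(axis=1)
            sol, *_ = np.linalg.lstsq(A, b, rcond=None)
            center, d = sol[:3], sol[3]
            return center, float(np.sqrt(d + center @ center))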

  9. Design of a dynamic test platform for autonomous robot vision systems

    NASA Technical Reports Server (NTRS)

    Rich, G. C.

    1980-01-01

    The concept and design of a dynamic test platform for development and evaluation of a robot vision system is discussed. The platform is to serve as a diagnostic and developmental tool for future work with the RPI Mars Rover's multi laser/multi detector vision system. The platform allows testing of the vision system while its attitude is varied, statically or periodically. The vision system is mounted on the test platform, where it can be subjected to a wide variety of simulated motions and can thus be examined in a controlled, quantitative fashion. Defining and modeling Rover motions and designing the platform to emulate these motions are also discussed. Individual aspects of the design process, such as the structure, driving linkages, and motors and transmissions, are treated separately.

  10. 3D vision sensor and its algorithm on clone seedlings plant system

    NASA Astrophysics Data System (ADS)

    Hayashi, Jun-ichiro; Hiroyasu, Takehisa; Hojo, Hirotaka; Hata, Seiji; Okada, Hiroshi

    2007-01-01

    Today, vision systems for robots are widely applied in many important applications, but 3-D vision systems for industrial use face many practical problems. Here, a vision system for bio-production is introduced. Clone seedling plants are one of the important applications of biotechnology. Most of the production processes for clone seedling plants are highly automated, but the transplanting of the small seedling plants cannot easily be automated, because their shapes are not stable and handling them requires observing the shape of each plant. In this research, a robot vision system has been introduced for the transplanting process in a plant factory.

  11. An Inquiry-Based Vision Science Activity for Graduate Students and Postdoctoral Research Scientists

    NASA Astrophysics Data System (ADS)

    Putnam, N. M.; Maness, H. L.; Rossi, E. A.; Hunter, J. J.

    2010-12-01

    The vision science activity was originally designed for the 2007 Center for Adaptive Optics (CfAO) Summer School. Participants were graduate students, postdoctoral researchers, and professionals studying the basics of adaptive optics. The majority were working in fields outside vision science, mainly astronomy and engineering. The primary goal of the activity was to give participants first-hand experience with the use of a wavefront sensor designed for clinical measurement of the aberrations of the human eye and to demonstrate how the resulting wavefront data generated from these measurements can be used to assess optical quality. A secondary goal was to examine the role wavefront measurements play in the investigation of vision-related scientific questions. In 2008, the activity was expanded to include a new section emphasizing defocus and astigmatism and vision testing/correction in a broad sense. As many of the participants were future post-secondary educators, a final goal of the activity was to highlight the inquiry-based approach as a distinct and effective alternative to traditional laboratory exercises. Participants worked in groups throughout the activity and formative assessment by a facilitator (instructor) was used to ensure that participants made progress toward the content goals. At the close of the activity, participants gave short presentations about their work to the whole group, the major points of which were referenced in a facilitator-led synthesis lecture. We discuss highlights and limitations of the vision science activity in its current format (2008 and 2009 summer schools) and make recommendations for its improvement and adaptation to different audiences.
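
    For participants translating wavefront data into clinical terms, one standard conversion (not part of the activity materials; the coefficient value below is invented) takes the Zernike defocus coefficient to a spherical-equivalent refractive error:

        # Spherical-equivalent power M (diopters) from the Zernike defocus
        # coefficient c(2,0) (micrometers) over a pupil of radius r (mm):
        # M = -4 * sqrt(3) * c20 / r^2.
        import math

        def spherical_equivalent(c20_um, pupil_radius_mm):
            return -4.0 * math.sqrt(3.0) * c20_um / pupil_radius_mm ** 2

        print(f"{spherical_equivalent(1.0, 3.0):+.2f} D")  # about -0.77 D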

  12. 78 FR 5557 - Twenty-First Meeting: RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-25

    ... Federal Aviation Administration Twenty-First Meeting: RTCA Special Committee 213, Enhanced Flight Vision... of Transportation (DOT). ACTION: Meeting Notice of RTCA Special Committee 213, Enhanced Flight Vision... of the twenty-first meeting of the RTCA Special Committee 213, Enhanced Flight Vision...

  13. Poor Vision, Functioning, and Depressive Symptoms: A Test of the Activity Restriction Model

    ERIC Educational Resources Information Center

    Bookwala, Jamila; Lawson, Brendan

    2011-01-01

    Purpose: This study tested the applicability of the activity restriction model of depressed affect to the context of poor vision in late life. This model hypothesizes that late-life stressors contribute to poorer mental health not only directly but also indirectly by restricting routine everyday functioning. Method: We used data from a national…

  14. The Effect of Gender and Level of Vision on the Physical Activity Level of Children and Adolescents with Visual Impairment

    ERIC Educational Resources Information Center

    Aslan, Ummuhan Bas; Calik, Bilge Basakci; Kitis, Ali

    2012-01-01

    This study was planned in order to determine physical activity levels of visually impaired children and adolescents and to investigate the effect of gender and level of vision on physical activity level in visually impaired children and adolescents. A total of 30 visually impaired children and adolescents (16 low vision and 14 blind) aged between…

  15. The study of calibration and epipolar geometry for the stereo vision system built by fisheye lenses

    NASA Astrophysics Data System (ADS)

    Zhang, Baofeng; Lu, Chunfang; Röning, Juha; Feng, Weijia

    2015-01-01

    A fish-eye lens is a kind of short-focal-length (f = 6~16 mm) camera lens whose field of view (FOV) approaches or even exceeds 180×180 degrees. Much of the literature shows that a multiple-view geometry system built with fish-eye lenses yields a larger stereo field than a traditional stereo vision system based on a pair of perspective projection images. Since a fish-eye camera usually has a wider-than-hemispherical FOV, most image processing approaches based on the pinhole camera model for conventional stereo vision are not suited to this category of stereo vision built with fish-eye lenses. This paper focuses on the calibration and epipolar rectification method for a novel machine vision system set up with four fish-eye lenses, called the Special Stereo Vision System (SSVS). The characteristic of SSVS is that it can produce 3D coordinate information from the whole global observation space and simultaneously acquire a 360º×360º panoramic image with no blind area, using a single vision device and one static shot. Parameter calibration and epipolar rectification are the basis for SSVS to realize 3D reconstruction and panoramic image generation.
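
    One reason pinhole-model methods fail here is the projection geometry itself. The comparison below assumes an equidistant fisheye model (which may differ from the actual lenses) and contrasts image radius versus incidence angle for the two models:

        # Pinhole: r = f * tan(theta) diverges near 90 degrees;
        # equidistant fisheye: r = f * theta stays finite beyond it.
        import math

        f = 8.0  # focal length in mm, within the quoted 6-16 mm range
        for deg in (10, 45, 80, 89):
            theta = math.radians(deg)
            print(f"{deg:2d} deg: fisheye {f * theta:6.2f} mm, "
                  f"pinhole {f * math.tan(theta):8.2f} mm")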

  16. Synthetic and Enhanced Vision System for Altair Lunar Lander

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Norman, Robert M.; Arthur, Jarvis J., III; Williams, Steven P.; Shelton, Kevin J.; Bailey, Randall E.

    2009-01-01

    Past research has demonstrated the substantial potential of synthetic and enhanced vision (SV, EV) for aviation (e.g., Prinzel & Wickens, 2009). These augmented visual-based technologies have been shown to significantly enhance situation awareness, reduce workload, enhance aviation safety (e.g., reduced propensity for controlled-flight-into-terrain accidents/incidents), and promote flight path control precision. The issues that drove the design and development of synthetic and enhanced vision have commonalities with other application domains, most notably entry, descent, and landing on the moon and other planetary surfaces. NASA has extended SV/EV technology for use in planetary exploration vehicles, such as the Altair Lunar Lander. This paper describes an Altair Lunar Lander SV/EV concept and associated research demonstrating the safety benefits of these technologies.

  17. SUMO/FREND: vision system for autonomous satellite grapple

    NASA Astrophysics Data System (ADS)

    Obermark, Jerome; Creamer, Glenn; Kelm, Bernard E.; Wagner, William; Henshaw, C. Glen

    2007-04-01

    SUMO/FREND is a risk reduction program for an advanced servicing spacecraft sponsored by DARPA and executed by the Naval Center for Space Technology at the Naval Research Laboratory in Washington, DC. The overall program will demonstrate the integration of many techniques needed in order to autonomously rendezvous and capture customer satellites at geosynchronous orbits. A flight-qualifiable payload is currently under development to prove out challenging aspects of the mission. The grappling process presents computer vision challenges to properly identify and guide the final step in joining the pursuer craft to the customer. This paper will provide an overview of the current status of the project with an emphasis on the challenges, techniques, and directions of the machine vision processes to guide the grappling.

  18. International computer vision directory

    SciTech Connect

    Flora, P.C.

    1986-01-01

    This book contains information on computerized automation technologies. State-of-the-art computer vision systems for many areas of industrial use are covered. Topics discussed include the following: automated inspection systems; robot/vision systems; vision process control; cameras (vidicon and solid state); vision peripherals and components; and pattern processors.

  19. Simulation assessment of synthetic vision system concepts for UAV operations

    NASA Astrophysics Data System (ADS)

    Calhoun, Gloria L.; Draper, Mark H.; Ruff, Heath A.; Nelson, Jeremy T.; Lefebvre, Austen T.

    2006-05-01

    The Air Force Research Laboratory's Human Effectiveness Directorate supports research addressing human factors associated with Unmanned Aerial Vehicle (UAV) operator control stations. One research thrust explores the value of combining synthetic vision data with live camera video presented on a UAV control station display. Information is constructed from databases (e.g., terrain), as well as numerous information updates via networked communication with other sources. This information is overlaid conformally, in real time, onto the dynamic camera video image presented to operators. Synthetic vision overlay technology is expected to improve operator situation awareness by highlighting elements of interest within the video image. It can also assist the operator in maintaining situation awareness of an environment if the video datalink is temporarily degraded, and synthetic vision overlays can serve to facilitate intuitive communication of spatial information between geographically separated users. This paper discusses results from a high-fidelity UAV simulation evaluation of synthetic symbology overlaid on a (simulated) live camera display. Specifically, the effects of different telemetry data update rates for synthetic visual data were examined for a representative sensor operator task. Participants controlled the zoom and orientation of the camera to find and designate targets. The results from both performance and subjective data demonstrated the potential benefit of an overlay of synthetic symbology for improving situation awareness, reducing workload, and decreasing the time required to designate points of interest. Implications of symbology update rate are discussed, as well as other human factors issues.
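
    Conformal overlay amounts to projecting known world coordinates into the camera image using telemetry-derived pose and camera intrinsics. The following is a minimal pinhole sketch with placeholder values, not the study's system:

        # Project a world point to pixels: p = K [R | t] X.
        import numpy as np

        K = np.array([[800.0, 0.0, 320.0],   # focal lengths and principal point
                      [0.0, 800.0, 240.0],
                      [0.0, 0.0, 1.0]])
        R = np.eye(3)                        # camera rotation from telemetry
        t = np.array([0.0, 0.0, 10.0])       # camera translation from telemetry

        def project(world_point):
            cam = R @ world_point + t        # world -> camera frame
            uvw = K @ cam                    # camera frame -> homogeneous pixels
            return uvw[:2] / uvw[2]          # perspective divide

        print(project(np.array([1.0, -0.5, 40.0])))  # overlay pixel location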

  20. Aircraft exterior scratch measurement system using machine vision

    NASA Astrophysics Data System (ADS)

    Sarr, Dennis P.

    1991-08-01

    In assuring the quality of aircraft skin, the skin must be free of surface imperfections and structural defects. Manual inspection methods involve mechanical and optical technologies; machine vision instrumentation can be automated to increase the inspection rate and the repeatability of measurement. As previous industry experience shows, machine vision instruments are not calibrated and certified as easily as mechanical devices. The defect must be accurately measured and documented via a printout for engineering evaluation and disposition. In actual use for inspection, the device must be portable for the factory, the flight line, or an aircraft anywhere in the world. The instrumentation must be inexpensive and operable by personnel with mechanic/technician-level training. The instrument design requirements are extensive, requiring a multidisciplinary approach to the research and development. This paper presents the image analysis results for laser images of the microscopic structures of scratches on various surfaces. Also discussed are the hardware and algorithms used for these laser images. Dedicated hardware and embedded software implementing the image acquisition and analysis have been developed. Human vision serves as the interface for determining which image should be processed. Once an image is chosen for analysis, the final answer is a numerical value of the scratch depth, a result that is reliable and repeatable. The prototype has been built and demonstrated to Boeing Commercial Airplanes Group factory Quality Assurance and flight test management with favorable response.

  1. Human Factors Engineering as a System in the Vision for Exploration

    NASA Technical Reports Server (NTRS)

    Whitmore, Mihriban; Smith, Danielle; Holden, Kritina

    2006-01-01

    In order to accomplish NASA's Vision for Exploration, while assuring crew safety and productivity, human performance issues must be well integrated into system design from mission conception. To that end, a two-year Technology Development Project (TDP) was funded by NASA Headquarters to develop a systematic method for including the human as a system in NASA's Vision for Exploration. The specific goals of this project are to review current Human Systems Integration (HSI) standards (i.e., industry, military, NASA) and tailor them to selected NASA Exploration activities. Once the methods are proven in the selected domains, a plan will be developed to expand the effort to a wider scope of Exploration activities. The methods will be documented for inclusion in NASA-specific documents (such as the Human Systems Integration Standards, NASA-STD-3000) to be used in future space systems. The current project builds on a previous TDP dealing with Human Factors Engineering processes. That project identified the key phases of the current NASA design lifecycle and outlined the recommended HFE activities that should be incorporated at each phase. The project also resulted in a prototype of a web-based HFE process tool that could be used to support an ideal HFE development process at NASA. This will help to augment the limited human factors resources available by providing a web-based tool that explains the importance of human factors, teaches a recommended process, and then provides the instructions, templates and examples to carry out the process steps. The HFE activities identified by the previous TDP are being tested in situ for the current effort through support to a specific NASA Exploration activity. Currently, HFE personnel are working with systems engineering personnel to identify HSI impacts for lunar exploration by facilitating the generation of system-level Concepts of Operations (ConOps). For example, medical operations scenarios have been generated for lunar habitation

  2. A vision-aided alignment datum system for coordinate measuring machines

    NASA Astrophysics Data System (ADS)

    Wang, L.; Lin, G. C. I.

    1997-07-01

    This paper presents the development of a CAD-based and vision-aided precision measurement system, and describes a new coordinate system alignment technique for coordinate measuring machines (CMMs). The alignment technique involves a machine vision system with CAD-based planning and execution of inspection. The method for determining measurement datums for the coordinate measuring technique, using the AutoCAD development system, is described in more detail. To improve image quality in the machine vision system, a contrast enhancement technique is applied to the image background to reduce image noise, and an on-line calibration technique is applied. Systematic errors may be caused by imperfect geometric features in components during coordinate system alignment; this measurement system, with its new measuring coordinate alignment method, can be used for high-precision measurement that overcomes such errors.

  3. Computational vision

    NASA Technical Reports Server (NTRS)

    Barrow, H. G.; Tenenbaum, J. M.

    1981-01-01

    The range of fundamental computational principles underlying human vision that equally apply to artificial and natural systems is surveyed. There emerges from research a view of the structuring of vision systems as a sequence of levels of representation, with the initial levels being primarily iconic (edges, regions, gradients) and the highest symbolic (surfaces, objects, scenes). Intermediate levels are constrained by information made available by preceding levels and information required by subsequent levels. In particular, it appears that physical and three-dimensional surface characteristics provide a critical transition from iconic to symbolic representations. A plausible vision system design incorporating these principles is outlined, and its key computational processes are elaborated.

  4. Vision-based obstacle recognition system for automated lawn mower robot development

    NASA Astrophysics Data System (ADS)

    Mohd Zin, Zalhan; Ibrahim, Ratnawati

    2011-06-01

    Digital image processing (DIP) techniques have recently been widely used in various types of applications. Classification and recognition of a specific object using a vision system involve challenging tasks in the fields of image processing and artificial intelligence. The ability and efficiency of a vision system to capture and process images is very important for any intelligent system such as an autonomous robot. This paper gives attention to the development of a vision system that could contribute to an automated vision-based lawn mower robot. The work involves implementing DIP techniques to detect and recognize three different types of obstacles that usually exist on a football field. The focus was on studying different types and sizes of obstacles, developing the vision-based obstacle recognition system, and evaluating the system's performance. Image processing techniques such as image filtering, segmentation, enhancement and edge detection have been applied in the system. The results show that the developed system is able to detect and recognize various types of obstacles on a football field with a recognition rate of more than 80%.
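
    A minimal sketch, assuming OpenCV 4 and an invented input image, of a filter-edge-contour pipeline in the spirit of the one described (not the authors' implementation):

        # Smooth, find edges, extract contours, keep regions large enough
        # to be obstacles rather than noise.
        import cv2

        def detect_obstacles(image_path, min_area=500):
            gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
            smoothed = cv2.GaussianBlur(gray, (5, 5), 0)   # image filtering
            edges = cv2.Canny(smoothed, 50, 150)           # edge detection
            contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            return [c for c in contours if cv2.contourArea(c) >= min_area]

        obstacles = detect_obstacles("field_frame.png")    # hypothetical file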

  5. Approximate world models: Incorporating qualitative and linguistic information into vision systems

    SciTech Connect

    Pinhanez, C.S.; Bobick, A.F.

    1996-12-31

    Approximate world models are coarse descriptions of the elements of a scene, and are intended to be used in the selection and control of vision routines in a vision system. In this paper we present a control architecture in which the approximate models represent the complex relationships among the objects in the world, allowing the vision routines to be situation or context specific. Moreover, because of their reduced accuracy requirements, approximate world models can employ qualitative information such as that provided by linguistic descriptions of the scene. The concept is demonstrated in the development of automatic cameras for a TV studio: SmartCams. Results are shown in which SmartCams use vision processing of real imagery and information written in the script of a TV show to achieve TV-quality framing.

  6. Computer vision

    NASA Technical Reports Server (NTRS)

    Gennery, D.; Cunningham, R.; Saund, E.; High, J.; Ruoff, C.

    1981-01-01

    The field of computer vision is surveyed and assessed, key research issues are identified, and possibilities for a future vision system are discussed. The problems of descriptions of two and three dimensional worlds are discussed. The representation of such features as texture, edges, curves, and corners are detailed. Recognition methods are described in which cross correlation coefficients are maximized or numerical values for a set of features are measured. Object tracking is discussed in terms of the robust matching algorithms that must be devised. Stereo vision, camera control and calibration, and the hardware and systems architecture are discussed.

  7. Improving vision-based motor rehabilitation interactive systems for users with disabilities using mirror feedback.

    PubMed

    Jaume-i-Capó, Antoni; Martínez-Bueso, Pau; Moyà-Alcover, Biel; Varona, Javier

    2014-01-01

    Observation is recommended in motor rehabilitation. For this reason, the aim of this study was to experimentally test the feasibility and benefit of including mirror feedback in vision-based rehabilitation systems: we projected the user on the screen. We conducted a user study using a previously evaluated system that improved the balance and postural control of adults with cerebral palsy. We used a within-subjects design with the two defined feedback conditions (mirror and no-mirror) and two different groups of users (8 with disabilities and 32 without disabilities), using usability measures (time-to-start (T(s)) and time-to-complete (T(c))). A two-tailed paired samples t-test confirmed that, in the case of disabilities, mirror feedback facilitated interaction in vision-based systems for rehabilitation. The measured times were significantly worse in the absence of the user's own visual feedback (T(s) = 7.09 (P < 0.001) and T(c) = 4.48 (P < 0.005)). In vision-based interaction systems, the input device is the user's own body; therefore, it makes sense that feedback should be related to the body of the user. In the case of disabilities, mirror feedback mechanisms facilitated interaction in vision-based systems for rehabilitation. These results suggest that developers and researchers should adopt this improvement in vision-based motor rehabilitation interactive systems. PMID:25295310

  8. Improving Vision-Based Motor Rehabilitation Interactive Systems for Users with Disabilities Using Mirror Feedback

    PubMed Central

    Martínez-Bueso, Pau; Moyà-Alcover, Biel

    2014-01-01

    Observation is recommended in motor rehabilitation. For this reason, the aim of this study was to experimentally test the feasibility and benefit of including mirror feedback in vision-based rehabilitation systems: we projected the user on the screen. We conducted a user study using a previously evaluated system that improved the balance and postural control of adults with cerebral palsy. We used a within-subjects design with the two defined feedback conditions (mirror and no-mirror) and two different groups of users (8 with disabilities and 32 without disabilities), using usability measures (time-to-start (Ts) and time-to-complete (Tc)). A two-tailed paired samples t-test confirmed that, in the case of disabilities, mirror feedback facilitated interaction in vision-based systems for rehabilitation. The measured times were significantly worse in the absence of the user's own visual feedback (Ts = 7.09 (P < 0.001) and Tc = 4.48 (P < 0.005)). In vision-based interaction systems, the input device is the user's own body; therefore, it makes sense that feedback should be related to the body of the user. In the case of disabilities, mirror feedback mechanisms facilitated interaction in vision-based systems for rehabilitation. These results suggest that developers and researchers should adopt this improvement in vision-based motor rehabilitation interactive systems. PMID:25295310

  9. Eye vision system using programmable micro-optics and micro-electronics

    NASA Astrophysics Data System (ADS)

    Riza, Nabeel A.; Amin, M. Junaid; Riza, Mehdi N.

    2014-02-01

    Proposed is a novel eye vision system that combines advanced micro-optic and micro-electronic technologies, including programmable micro-optic devices, pico-projectors, Radio Frequency (RF) and optical wireless communication and control links, energy harvesting and storage devices, and remote wireless energy transfer capabilities. This portable, lightweight system can measure eye refractive powers, optimize light conditions for the eye under test, conduct color-blindness tests, and implement eye strain relief and eye muscle exercises via time-sequenced imaging. Described are the basic design of the proposed system and first-stage experimental results for spherical-lens refractive error correction.

  10. Vision Underwater.

    ERIC Educational Resources Information Center

    Levine, Joseph S.

    1980-01-01

    Provides information regarding underwater vision. Includes a discussion of optically important interfaces, increased eye size of organisms at greater depths, visual peculiarities regarding the habitat of the coastal environment, and various pigment visual systems. (CS)

  11. Development of a machine vision system for a real-time precision sprayer

    NASA Astrophysics Data System (ADS)

    Bossu, Jérémie; Gée, Christelle; Truchetet, Frédéric

    2007-01-01

    In the context of precision agriculture, we have developed a machine vision system for a real-time precision sprayer. From a monochrome CCD camera located in front of the tractor, the discrimination between crop and weeds is obtained with image processing based on spatial information using a Gabor filter. This method separates the periodic signals from the non-periodic ones, enhancing the crop rows, whereas weeds have a patchy distribution. Weed patches are then clearly identified by a blob-coloring method. Finally, we use a pinhole model to transform the weed patch image coordinates into world coordinates in order to activate the right electro-pneumatic valve of the sprayer at the right moment.
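
    A minimal sketch, assuming OpenCV and an invented input image, of the kind of Gabor filtering step described (the parameters are illustrative, not the paper's):

        # A Gabor kernel tuned to the expected crop-row spacing and direction
        # responds strongly to the periodic rows and weakly to patchy weeds.
        import cv2
        import numpy as np

        img = cv2.imread("field.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
        kernel = cv2.getGaborKernel(ksize=(31, 31), sigma=5.0,
                                    theta=np.pi / 2,  # row direction in image
                                    lambd=20.0,       # row spacing in pixels
                                    gamma=0.5)
        response = cv2.filter2D(img, cv2.CV_32F, kernel)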

  12. A customized vision system for tracking humans wearing reflective safety clothing from industrial vehicles and machinery.

    PubMed

    Mosberger, Rafael; Andreasson, Henrik; Lilienthal, Achim J

    2014-01-01

    This article presents a novel approach for vision-based detection and tracking of humans wearing high-visibility clothing with retro-reflective markers. Addressing industrial applications where heavy vehicles operate in the vicinity of humans, we deploy a customized stereo camera setup with active illumination that allows for efficient detection of the reflective patterns created by the worker's safety garments. After segmenting reflective objects from the image background, the interest regions are described with local image feature descriptors and classified in order to discriminate safety garments from other reflective objects in the scene. In a final step, the trajectories of the detected humans are estimated in 3D space relative to the camera. We evaluate our tracking system in two industrial real-world work environments on several challenging video sequences. The experimental results indicate accurate tracking performance and good robustness towards partial occlusions, body pose variation, and a wide range of different illumination conditions. PMID:25264956

  13. A Customized Vision System for Tracking Humans Wearing Reflective Safety Clothing from Industrial Vehicles and Machinery

    PubMed Central

    Mosberger, Rafael; Andreasson, Henrik; Lilienthal, Achim J.

    2014-01-01

    This article presents a novel approach for vision-based detection and tracking of humans wearing high-visibility clothing with retro-reflective markers. Addressing industrial applications where heavy vehicles operate in the vicinity of humans, we deploy a customized stereo camera setup with active illumination that allows for efficient detection of the reflective patterns created by the worker's safety garments. After segmenting reflective objects from the image background, the interest regions are described with local image feature descriptors and classified in order to discriminate safety garments from other reflective objects in the scene. In a final step, the trajectories of the detected humans are estimated in 3D space relative to the camera. We evaluate our tracking system in two industrial real-world work environments on several challenging video sequences. The experimental results indicate accurate tracking performance and good robustness towards partial occlusions, body pose variation, and a wide range of different illumination conditions. PMID:25264956

  14. Night vision imaging systems design, integration, and verification in military fighter aircraft

    NASA Astrophysics Data System (ADS)

    Sabatini, Roberto; Richardson, Mark A.; Cantiello, Maurizio; Toscano, Mario; Fiorini, Pietro; Jia, Huamin; Zammit-Mangion, David

    2012-04-01

    This paper describes the development and testing activities conducted by the Italian Air Force Official Test Centre (RSV) in collaboration with Alenia Aerospace, Litton Precision Products and Cranfield University, in order to confer the Night Vision Imaging System (NVIS) capability on the Italian TORNADO IDS (Interdiction and Strike) and ECR (Electronic Combat and Reconnaissance) aircraft. The activities consisted of various Design, Development, Test and Evaluation (DDT&E) activities, including Night Vision Goggle (NVG) integration, cockpit instrument and external lighting modifications, as well as various ground test sessions and a total of eighteen flight test sorties. RSV and Litton Precision Products were responsible for coordinating and conducting the installation of the internal and external lights. In particular, an iterative process was established, allowing rapid in-situ correction of the major deficiencies encountered during the ground and flight test sessions. Both single-ship (day/night) and formation (night) flights were performed, shared between the test crews involved in the activities, allowing for a redundant examination of the various test items by all participants. An innovative test matrix was developed and implemented by RSV for assessing the operational suitability and effectiveness of the various modifications. Also important was the definition of test criteria for Pilot and Weapon Systems Officer (WSO) workload assessment during the accomplishment of various operational tasks in NVG missions. Furthermore, the specific technical and operational elements required for evaluating the modified helmets were identified, allowing an exhaustive comparative evaluation of the two proposed solutions (i.e., the HGU-55P and HGU-55G modified helmets). The results of the activities were very satisfactory. The initial compatibility problems encountered were progressively mitigated by incorporating modifications both in the front and

  15. Development and modeling of a stereo vision focusing system for a field programmable gate array robot

    NASA Astrophysics Data System (ADS)

    Tickle, Andrew J.; Buckle, James; Grindley, Josef E.; Smith, Jeremy S.

    2010-10-01

    Stereo vision is an arrangement in which an imaging system has two or more cameras in order to make it more robust by mimicking the human visual system. With two inputs, knowledge of the cameras' relative geometry can be exploited to derive depth information from the two views they receive: the 3D coordinates of an object in an observed scene can be computed from the intersection of the two sets of rays. Presented here is the development of a stereo vision system to focus on an object at the centre of a baseline between two cameras at varying distances. It has been developed primarily for use on a Field Programmable Gate Array (FPGA), but an adaptation of the methodology is also presented for a PUMA 560 robotic manipulator with a single camera attachment. The two main vision systems considered are a fixed baseline with an object moving at varying distances, and a fixed distance with a varying baseline. These two situations provide enough data that the coefficients determining the system's operation can be calibrated automatically, with only the baseline value needing to be entered; the system performs all the required calculations for the user for a baseline of any length. The limits of the system with regard to focusing accuracy are also presented, along with how the PUMA 560 controls its joints for stereo vision and moves from one position to another to achieve it, compared with the two-camera FPGA system. The benefits of such a system for range finding in mobile robotics are discussed, and this approach is compared against laser range finders and ultrasonic echolocation.
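
    For a calibrated, rectified pair, the ray-intersection geometry reduces to the standard depth-from-disparity relation; a minimal sketch with invented values:

        # Depth from disparity for a rectified stereo pair: Z = f * B / d.
        def depth_from_disparity(focal_px, baseline_m, disparity_px):
            """Distance to a point seen in both rectified views."""
            return focal_px * baseline_m / disparity_px

        # e.g., 700 px focal length, 0.12 m baseline, 35 px disparity -> 2.4 m
        print(depth_from_disparity(700.0, 0.12, 35.0))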

  16. Function-based design process for an intelligent ground vehicle vision system

    NASA Astrophysics Data System (ADS)

    Nagel, Robert L.; Perry, Kenneth L.; Stone, Robert B.; McAdams, Daniel A.

    2010-10-01

    An engineering design framework for an autonomous ground vehicle vision system is discussed. We present both the conceptual and physical design by following the design process, development and testing of an intelligent ground vehicle vision system constructed for the 2008 Intelligent Ground Vehicle Competition. During conceptual design, the requirements for the vision system are explored via functional and process analysis considering the flows into the vehicle and the transformations of those flows. The conceptual design phase concludes with a vision system design that is modular in both hardware and software and is based on a laser range finder and camera for visual perception. During physical design, prototypes are developed and tested independently, following the modular interfaces identified during conceptual design. Prototype models, once functional, are implemented into the final design. The final vision system design uses a ray-casting algorithm to process camera and laser range finder data and identify potential paths. The ray-casting algorithm is a single thread of the robot's multithreaded application. Other threads control motion, provide feedback, and process sensory data. Once integrated, both hardware and software testing are performed on the robot. We discuss the robot's performance and the lessons learned.
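
    The path-identification step lends itself to a compact illustration; the toy 2D occupancy-grid ray cast below assumes unit cells and a single thread, unlike the robot's multithreaded implementation:

        # March along a ray until an occupied cell or max range is reached.
        import math

        def cast_ray(grid, x, y, theta, max_range=50.0, step=0.1):
            """Free-path distance from (x, y) along heading theta."""
            d = 0.0
            while d < max_range:
                cx = int(x + d * math.cos(theta))
                cy = int(y + d * math.sin(theta))
                if not (0 <= cy < len(grid) and 0 <= cx < len(grid[0])):
                    break                  # ray left the map
                if grid[cy][cx]:           # occupied cell blocks the path
                    return d
                d += step
            return max_range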

  17. Vision System To Identify Car Body Types For Spray Painting Robot

    NASA Astrophysics Data System (ADS)

    Uartlam, Peter; Neilson, Geoff

    1984-02-01

    The automation of car body spray booth operations employing paint-spraying robots generally requires the robots to execute one of a number of defined routines according to the car body type. A vision system is described which identifies a car body type by its shape and provides an identity code to the robot controller, thus enabling the correct routine to be executed. The vision system consists of a low-cost linescan camera, a fluorescent light source and a microprocessor image analyser, and is an example of a cost-effective, reliable, industrially engineered robot vision system for a demanding production environment. Extending the system with additional cameras will broaden its application to other automatic operations on a car assembly line where it is essential to reliably differentiate between up to 40 variations of body type.

  18. GARGOYLE: An environment for real-time, context-sensitive active vision

    SciTech Connect

    Prokopowicz, P.N.; Swain, M.J.; Firby, R.J.; Kahn, R.E.

    1996-12-31

    Researchers in robot vision have access to several excellent image processing packages (e.g., Khoros, Vista, Susan, MIL, and X Vision, to name only a few) as a base for any new vision software needed in most navigation and recognition tasks. Our work in autonomous robot control and human-robot interaction, however, has demanded a new level of run-time flexibility and performance: on-the-fly configuration of visual routines that exploit up-to-the-second context from the task, image, and environment. The result is Gargoyle: an extendible, on-board, real-time vision software package that allows a robot to configure, parameterize, and execute image-processing pipelines at run-time. Each operator in a pipeline works at a level of resolution and over regions of interest that are computed by upstream operators or set by the robot according to task constraints. Pipeline configurations and operator parameters can be stored as a library of visual methods appropriate for different sensing tasks and environmental conditions. Beyond this, a robot may reason about the current task and environmental constraints to construct novel visual routines that are too specialized to work under general conditions, but that are well-suited to the immediate environment and task. We use the RAP reactive plan-execution system to select and configure pre-compiled processing pipelines, and to modify them for specific constraints determined at run-time.
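
    The run-time pipeline idea can be sketched compactly; the toy Python analogue below (not Gargoyle's actual API) selects, parameterizes, and chains operators on the fly:

        # Operators are plain functions; a "visual routine" is a list of
        # (operator, parameters) pairs assembled for the current task.
        import numpy as np

        def downsample(image, factor=2):
            return image[::factor, ::factor]

        def threshold(image, level=128):
            return (image > level).astype(np.uint8) * 255

        def run(pipeline, image):
            for op, params in pipeline:
                image = op(image, **params)
            return image

        pipeline = [(downsample, {"factor": 2}), (threshold, {"level": 100})]
        frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
        result = run(pipeline, frame)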

  19. Monocular vision measurement system for the position and orientation of remote object

    NASA Astrophysics Data System (ADS)

    Zhou, Tao; Sun, Changku; Chen, Shan

    2008-03-01

    The high-precision measurement of the position and orientation of a remote object is one of the hot issues in vision inspection, because it is very important in fields such as aviation and precision measurement. The position and orientation of an object at a distance of 5 m can be measured by near-infrared monocular vision, based on vision measurement principles, using image feature extraction and data optimization. With the existing monocular vision methods and their features analyzed, a new monocular vision method is presented to obtain the position and orientation of a target. To reduce interference from environmental light and to increase the contrast between the target and the background, near-infrared light is used as the light source. For automatic camera calibration, a new feature-circle-based calibration target is designed. A set of image processing algorithms, proved to be efficient, is presented as well. The experimental results show that the repeatability precision of the angles is less than 8" and that of the displacement is less than 0.02 mm. This monocular vision measurement method is already used in a wheel alignment system and will find broader application.

  20. Long-Term Instrumentation, Information, and Control Systems (II&C) Modernization Future Vision and Strategy

    SciTech Connect

    Kenneth Thomas

    2012-02-01

    digital environment to enhance nuclear safety, increase productivity, and improve overall plant performance. The long-term goal is to transform the operating model of the nuclear power plants (NPPs) from one that is highly reliant on a large staff performing mostly manual activities to an operating model based on highly integrated technology with a smaller staff. This digital transformation is critical to addressing an array of issues facing the plants, including aging of legacy analog systems, potential shortage of technical workers, ever-increasing expectations for nuclear safety improvement, and relentless pressure to reduce cost. The Future Vision is based on research being conducted in the following major areas of plant function: (1) Highly integrated control rooms; (2) Highly automated plant; (3) Integrated operations; (4) Human performance improvement for field workers; and (5) Outage safety and efficiency. Pilot projects will be conducted in each of these areas as the means for industry to collectively integrate these new technologies into nuclear plant work activities. The pilot projects introduce new digital technologies into the nuclear plant operating environment at host operating plants to demonstrate and validate them for production usage. In turn, the pilot project technologies serve as the stepping stones to the eventual seamless digital environment as described in the Future Vision.

  1. Long-Term Instrumentation, Information, and Control Systems (II&C) Modernization Future Vision and Strategy

    SciTech Connect

    Kenneth Thomas; Bruce Hallbert

    2013-02-01

    seamless digital environment to enhance nuclear safety, increase productivity, and improve overall plant performance. The long-term goal is to transform the operating model of the nuclear power plants (NPPs) from one that is highly reliant on a large staff performing mostly manual activities to an operating model based on highly integrated technology with a smaller staff. This digital transformation is critical to addressing an array of issues facing the plants, including aging of legacy analog systems, potential shortage of technical workers, ever-increasing expectations for nuclear safety improvement, and relentless pressure to reduce cost. The Future Vision is based on research being conducted in the following major areas of plant function: (1) Highly integrated control rooms; (2) Highly automated plant; (3) Integrated operations; (4) Human performance improvement for field workers; and (5) Outage safety and efficiency. Pilot projects will be conducted in each of these areas as the means for industry to collectively integrate these new technologies into nuclear plant work activities. The pilot projects introduce new digital technologies into the nuclear plant operating environment at host operating plants to demonstrate and validate them for production usage. In turn, the pilot project technologies serve as the stepping stones to the eventual seamless digital environment as described in the Future Vision.

  2. A binocular machine vision system for three-dimensional surface measurement of small objects.

    PubMed

    Gorpas, Dimitris; Politopoulos, Kostas; Yova, Dido

    2007-12-01

    Rendering three-dimensional information of a scene from optical measurements is very important for a wide variety of applications. However, computer vision advancements have not yet achieved accurate three-dimensional reconstruction of objects smaller than 1 cm in diameter. This paper describes the development of a novel volumetric method for small objects, using a binocular machine vision system. The achieved precision is high, with a standard deviation of 0.04 mm. The robustness of the system stems from the laboratory prototype imaging system, whose crucial z-axis movement requires no further calibration, and from the fully automated volumetric algorithms. PMID:17881188

  3. A concurrent on-board vision system for a mobile robot

    SciTech Connect

    Jones, J.P.

    1988-01-01

    Robot vision algorithms have been implemented on an 8-node NCUBE-AT hypercube system onboard a mobile robot (HERMIES) developed at Oak Ridge National Laboratory. Images are digitized using a framegrabber mounted in a VME rack, and image processing and analysis are performed on the hypercube system. The vision system is integrated with robot navigation and control software, enabling the robot to find the front of a mockup control panel, move up to the panel, and read an analog meter. Among the concurrent algorithms used for image analysis are a new component labeling algorithm and a Hough transform algorithm with load balancing. 14 refs., 3 figs., 2 tabs.
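
    As an illustration of the sequential baseline that such concurrent algorithms parallelize, the following is a standard 4-connected labeling pass, not the paper's hypercube algorithm:

        # Breadth-first labeling of 4-connected foreground regions
        # in a 2D array of 0s and 1s.
        from collections import deque

        def label_components(binary):
            h, w = len(binary), len(binary[0])
            labels = [[0] * w for _ in range(h)]
            count = 0
            for sy in range(h):
                for sx in range(w):
                    if binary[sy][sx] and not labels[sy][sx]:
                        count += 1
                        labels[sy][sx] = count
                        queue = deque([(sy, sx)])
                        while queue:
                            y, x = queue.popleft()
                            for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                                if (0 <= ny < h and 0 <= nx < w and
                                        binary[ny][nx] and not labels[ny][nx]):
                                    labels[ny][nx] = count
                                    queue.append((ny, nx))
            return labels, count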

  4. A computer vision system for the recognition of trees in aerial photographs

    NASA Technical Reports Server (NTRS)

    Pinz, Axel J.

    1991-01-01

    Increasing problems of forest damage in Central Europe create demand for an appropriate forest damage assessment tool. The Vision Expert System (VES) is presented, which is capable of finding trees in color infrared aerial photographs. The concept and architecture of VES are discussed briefly. The system is applied to a multisource test data set, and processing this data set leads to multiple interpretation results for one scene. Integrating these results, via an implementation of Steven's correlation algorithm, provides a better scene description by the vision system.

  5. Computer Vision System For Locating And Identifying Defects In Hardwood Lumber

    NASA Astrophysics Data System (ADS)

    Conners, Richard W.; Ng, Chong T.; Cho, Tai-Hoon; McMillin, Charles W.

    1989-03-01

    This paper describes research aimed at developing an automatic cutup system for use in the rough mills of the hardwood furniture and fixture industry. In particular, it describes attempts to create the vision system that will power this automatic cutup system. A number of factors make the development of such a vision system a challenge. First, there is the innate variability of the wood material itself. No two species look exactly the same; in fact, appearance can differ significantly among species. Yet a truly robust vision system must be able to handle a variety of such species, preferably with no operator intervention required when changing from one species to another. Second, there is a good deal of variability in the definition of what constitutes a removable defect. The hardwood furniture and fixture industry is diverse in the nature of the products that it makes, ranging from hardwood flooring to fancy hardwood furniture, from simple mill work to kitchen cabinets. Thus, depending on the manufacturer, the product, and the quality of the product, the nature of what constitutes a removable defect can and does vary. The vision system must be such that it can be tailored to meet each of these unique needs, preferably without any additional program modifications. This paper describes the vision system that has been developed, assesses its current capabilities, and discusses directions for future research. It is argued that artificial intelligence methods provide a natural mechanism for attacking this computer vision application.

  6. Laser gated viewing at ISL for vision through smoke, active polarimetry, and 3D imaging in NIR and SWIR wavelength bands

    NASA Astrophysics Data System (ADS)

    Laurenzis, Martin; Christnacher, Frank

    2013-12-01

    In this article, we review the application of laser gated viewing for improving vision across diffusing obstacles (smoke, turbid media, …), capturing 3D scene information, and studying material properties by polarimetric analysis at near-infrared (NIR) and shortwave-infrared (SWIR) wavelengths. Laser gated viewing has been studied since the 1960s as an active night vision method. Owing to enormous improvements in the development of compact and highly efficient laser sources and modern sensor technologies, the maturity of demonstrator systems has risen over the past decades. Further, it has been demonstrated that laser gated viewing has versatile sensing capabilities, with applications in long-range observation under certain degraded weather conditions, vision through obstacles and fog, active polarimetry, and 3D imaging.

  7. Posterior Parietal Cortex Drives Inferotemporal Activations During Three-Dimensional Object Vision

    PubMed Central

    Van Dromme, Ilse C.; Premereur, Elsie; Verhoef, Bram-Ernst; Vanduffel, Wim; Janssen, Peter

    2016-01-01

    The primate visual system consists of a ventral stream, specialized for object recognition, and a dorsal visual stream, which is crucial for spatial vision and actions. However, little is known about the interactions and information flow between these two streams. We investigated these interactions within the network processing three-dimensional (3D) object information, comprising both the dorsal and ventral stream. Reversible inactivation of the macaque caudal intraparietal area (CIP) during functional magnetic resonance imaging (fMRI) reduced fMRI activations in posterior parietal cortex in the dorsal stream and, surprisingly, also in the inferotemporal cortex (ITC) in the ventral visual stream. Moreover, CIP inactivation caused a perceptual deficit in a depth-structure categorization task. CIP-microstimulation during fMRI further suggests that CIP projects via posterior parietal areas to the ITC in the ventral stream. To our knowledge, these results provide the first causal evidence for the flow of visual 3D information from the dorsal stream to the ventral stream, and identify CIP as a key area for depth-structure processing. Thus, combining reversible inactivation and electrical microstimulation during fMRI provides a detailed view of the functional interactions between the two visual processing streams. PMID:27082854

  8. A machine vision assisted system for fluorescent magnetic particle inspection of railway wheelsets

    NASA Astrophysics Data System (ADS)

    Ma, Tao; Sun, Zhenguo; Zhang, Wenzeng; Chen, Qiang

    2016-02-01

    Fluorescent magnetic particle inspection is a conventional non-destructive evaluation process for detecting surface and slightly subsurface cracks in wheelsets. Using machine vision instead of workers' direct observation could remarkably improve the working conditions and the repeatability of the inspection. This paper presents a machine-vision-assisted automatic fluorescent magnetic particle inspection system for surface defect inspection of railway wheelsets. The setup is composed of a semiautomatic fluorescent magnetic particle inspection machine, a vision system and an industrial computer. The detection of magnetic particle indications of quantitative quality indicators and cracks is studied: the detection of quantitative quality indicators is achieved by mathematical morphology, Otsu's thresholding and a RANSAC-based ellipse fitting algorithm; the crack detection algorithm is a multiscale algorithm using Gaussian blur, mathematical morphology and several shape and color descriptors. Tests show that the algorithms are able to detect the indications of the quantitative quality indicators and the cracks precisely.
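
    A minimal sketch, assuming OpenCV and an invented input image, of the Otsu-thresholding-plus-morphology front end such a detection stage might use:

        # Segment bright fluorescent indications, then open morphologically
        # to suppress speckle noise (illustrative parameters only).
        import cv2

        gray = cv2.imread("wheelset.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
        _, binary = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)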

  9. Clinical testing of the Ultra-Vision screen-film system for maxillofacial radiography.

    PubMed

    Sewerin, I P

    1994-03-01

    The Ultra-Vision screen (Du Pont, Towanda, Pa.) contains an yttrium tantalate phosphor emitting ultraviolet light and eliminates the crossover effect. Increased resolution has been proven in vitro, and the purpose of the present study was to test these findings in a clinical situation. Fifteen pairs of skull radiographs were produced with Du Pont Ultra-Vision Rapid screens and Kodak Lanex (Eastman Kodak, Rochester, N.Y.) screens, both belonging to speed class 400. Objects were a cadaver head, a 3M phantom head (3M Corp., St. Paul, Minn.), and patients who were serially radiographed as controls in a dental implant study. The radiographs had identical densities, but contrast was varied deliberately. Twelve observers judged the radiographs blindly. Ninety-two percent of the ratings with respect to resolution favored the Ultra-Vision system; however, great doubt was expressed regarding contrast. The agreement between the observers was tested by Cochran's Q test. The results confirm that the Ultra-Vision system exhibits improved resolution compared with the Lanex system. Ultra-Vision is recommended whether improved resolution of the radiographs or a reduced patient dose is preferred. PMID:8170665

  10. Influence of control parameters on the joint tracking performance of a coaxial weld vision system

    NASA Technical Reports Server (NTRS)

    Gangl, K. J.; Weeks, J. L.

    1985-01-01

    The first phase of a series of evaluations of a vision-based welding control sensor for the Space Shuttle Main Engine Robotic Welding System is described. The robotic welding system is presently under development at the Marshall Space Flight Center. This evaluation determines the standard control response parameters necessary for proper trajectory of the welding torch along the joint.

  11. Night vision: requirements and possible roadmap for FIR and NIR systems

    NASA Astrophysics Data System (ADS)

    Källhammer, Jan-Erik

    2006-04-01

    A night vision system must increase visibility in situations where only low-beam headlights can be used today. As pedestrians and animals face the highest risk increase in night-time traffic due to darkness, the ability to detect these objects should be the main performance criterion, and the system must remain effective when facing the headlights of oncoming vehicles. Far infrared (FIR) systems have been shown to be superior to near infrared (NIR) systems in terms of pedestrian detection distance. Near infrared images were rated as having significantly higher visual clutter than far infrared images, and visual clutter has been shown to correlate with reduced pedestrian detection distance. Far infrared images are perceived as more unusual and therefore more difficult to interpret, although this image appearance is likely related to the lower visual clutter. However, the main issue in comparing the two technologies should be how well they solve the driver's problem of insufficient visibility under low-beam conditions, especially regarding pedestrians and other vulnerable road users. With the addition of an automatic detection aid, a main question is whether the advantage of FIR systems will vanish given NIR systems with well-performing automatic pedestrian detection. The first night vision introductions did not generate the sales volumes initially expected; renewed interest in night vision systems is, however, to be expected after the release of systems by BMW, Mercedes and Honda, the latter with automatic pedestrian detection.

  12. Human factors and safety considerations of night-vision systems flight using thermal imaging systems

    NASA Astrophysics Data System (ADS)

    Rash, Clarence E.; Verona, Robert W.; Crowley, John S.

    1990-10-01

    Helmet Mounted Systems (HMS) must be lightweight, balanced and compatible with life support and head protection assemblies. This paper discusses the design of one particular HMS, the GEC Ferranti NITE-OP/NIGHTBIRD aviator's Night Vision Goggle (NVG) developed under contracts to the Ministry of Defence for all three services in the United Kingdom (UK) for Rotary Wing and fast jet aircraft. The existing equipment constraints, safety, human factor and optical performance requirements are discussed before the design solution is presented after consideration of these material and manufacturing options.

  13. New vision system and navigation algorithm for an autonomous ground vehicle

    NASA Astrophysics Data System (ADS)

    Tann, Hokchhay; Shakya, Bicky; Merchen, Alex C.; Williams, Benjamin C.; Khanal, Abhishek; Zhao, Jiajia; Ahlgren, David J.

    2013-12-01

    Improvements were made to the intelligence algorithms of an autonomously operating ground vehicle, Q, which competed in the 2013 Intelligent Ground Vehicle Competition (IGVC). The IGVC required the vehicle first to navigate between two white lines on a grassy obstacle course, then to pass through eight GPS waypoints, and finally to traverse an obstacle field. Modifications to Q included a new vision system with a more effective image-processing algorithm for white line extraction. The path-planning algorithm was adapted to the new vision system, yielding smoother, more reliable navigation. With these improvements, Q successfully completed the basic autonomous navigation challenge, finishing tenth out of more than 50 teams.
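
    The abstract does not detail the white line extraction algorithm. As a minimal sketch of a typical pipeline for this task (assuming OpenCV and NumPy; all thresholds are illustrative, not values from the paper):

        import cv2
        import numpy as np

        def extract_white_lines(bgr_frame):
            """Isolate painted white course lines on grass and fit segments."""
            hls = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HLS)
            # Keep bright, low-saturation pixels (white paint against grass).
            mask = cv2.inRange(hls, (0, 200, 0), (255, 255, 60))
            # Remove speckle noise before edge detection.
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
            edges = cv2.Canny(mask, 50, 150)
            # Probabilistic Hough transform returns candidate line segments.
            return cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                                   minLineLength=40, maxLineGap=20)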

  14. ExpertVision - A video-based non-contact system for motion measurement

    NASA Technical Reports Server (NTRS)

    Walton, James S.

    1988-01-01

    A system known as ExpertVision for obtaining noncontact kinematic measurements using standard video signals is described. In the system, a video processor extracts edge information from video images using a proprietary thresholding technique. Images can be examined in real time at up to 200 fields/s, and as many as four synchronized inputs can be treated simultaneously by buffering the edge coordinates for each view in dedicated RAM. Mechanical applications for ExpertVision include the study of simple impacts, ballistics, wing flutter, the kinematics of helicopter rotor blades, and fluid and gas flow problems.
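
    The proprietary thresholding technique is not described; a simple stand-in that extracts compact edge coordinates from one video field, in the spirit of the system's edge buffering (NumPy assumed), could be:

        import numpy as np

        def edge_coordinates(field, threshold=128):
            """Return (row, col) pairs where brightness crosses the threshold
            between horizontally adjacent pixels."""
            binary = np.asarray(field) >= threshold
            crossings = binary[:, 1:] != binary[:, :-1]
            rows, cols = np.nonzero(crossings)
            # Only this compact coordinate list, not the image, would be buffered.
            return np.column_stack((rows, cols))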

  15. Flight Test Evaluation of Situation Awareness Benefits of Integrated Synthetic Vision System Technology for Commercial Aircraft

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, Jarvis J., III

    2005-01-01

    Research was conducted onboard a Gulfstream G-V aircraft to evaluate integrated Synthetic Vision System concepts during flight tests over a 6-week period at the Wallops Flight Facility and Reno/Tahoe International Airport. The NASA Synthetic Vision System incorporates database integrity monitoring, runway incursion prevention alerting, surface maps, enhanced vision sensors, and advanced pathway guidance and synthetic terrain presentation. The paper details the goals and objectives of the flight test with a focus on the situation awareness benefits of integrating synthetic vision system enabling technologies for commercial aircraft.

  16. Multi-Purpose Avionic Architecture for Vision Based Navigation Systems for EDL and Surface Mobility Scenarios

    NASA Astrophysics Data System (ADS)

    Tramutola, A.; Paltro, D.; Cabalo Perucha, M. P.; Paar, G.; Steiner, J.; Barrio, A. M.

    2015-09-01

    Vision Based Navigation (VBNAV) has been identified as a valid technology to support space exploration because it can improve the autonomy and safety of space missions. Several mission scenarios can benefit from VBNAV: Rendezvous & Docking, Fly-Bys, Interplanetary cruise, Entry Descent and Landing (EDL), and Planetary Surface exploration. For some of them, VBNAV can improve the accuracy of state estimation as an additional relative navigation sensor or as an absolute navigation sensor. For others, such as surface mobility and terrain exploration for path identification and planning, VBNAV is mandatory. This paper presents the general avionic architecture of a Vision Based System as defined in the frame of the ESA R&T study “Multi-purpose Vision-based Navigation System Engineering Model - part 1 (VisNav-EM-1)”, with special focus on the surface mobility application.

  17. Novel approach to characterize and compare the performance of night vision systems in representative illumination conditions

    NASA Astrophysics Data System (ADS)

    Roy, Nathalie; Vallières, Alexandre; St-Germain, Daniel; Potvin, Simon; Dupuis, Michel; Bouchard, Jean-Claude; Villemaire, André; Bérubé, Martin; Breton, Mélanie; Gagné, Guillaume

    2016-05-01

    A novel approach is used to characterize and compare the performance of night vision systems in conditions more representative of night operation in terms of spectral content. Its main advantage over standard testing methodologies is that it provides a fast and efficient way for untrained observers to compare night vision system performance under realistic illumination spectra. The testing methodology relies on a custom tumbling-E target and on a new LED-based illumination source that better emulates night-sky spectral irradiances from deep overcast starlight to quarter-moon conditions. In this paper, we describe the setup and demonstrate that the novel approach is an efficient method for characterizing, among other devices, night vision goggle (NVG) performance, with only a small error in the number of photogenerated electrons compared to the STANAG 4351 procedure.
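
    The figure of merit here is the number of photogenerated electrons. Under strongly simplified assumptions (monochromatic light, no optics losses; all numbers illustrative, not values from the paper), a back-of-envelope estimate looks like:

        # Rough photoelectron count at an image-intensifier photocathode.
        PLANCK = 6.626e-34   # J*s
        C = 3.0e8            # m/s

        def photoelectrons(irradiance_w_m2, wavelength_m, area_m2,
                           integration_s, quantum_efficiency):
            photon_energy = PLANCK * C / wavelength_m           # J per photon
            photons = irradiance_w_m2 * area_m2 * integration_s / photon_energy
            return photons * quantum_efficiency

        # e.g. near-starlight irradiance on a 2 cm^2 input aperture at 800 nm:
        n = photoelectrons(1e-9, 800e-9, 2e-4, 1 / 30, 0.25)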

  18. A bio-inspired apposition compound eye machine vision sensor system.

    PubMed

    Davis, J D; Barrett, S F; Wright, C H G; Wilcox, M

    2009-12-01

    The Wyoming Information, Signal Processing, and Robotics Laboratory is developing a wide variety of bio-inspired vision sensors. We are interested in exploring the vision systems of various insects and adapting some of their features toward the development of specialized vision sensors. We do not attempt to supplant traditional digital imaging techniques but rather develop sensor systems tailor-made for the application at hand. We envision that many applications may require a hybrid approach using conventional digital imaging techniques enhanced with bio-inspired analogue sensors. In this specific project, we investigated the apposition compound eye, commonly found in diurnal insects and certain species of arthropods, and its characteristics. We developed and characterized an array of apposition compound eye-type sensors and tested them on an autonomous robotic vehicle. The robot exhibits the ability to follow a pre-defined target and avoid specified obstacles using a simple control algorithm. PMID:19901450
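
    The "simple control algorithm" is not specified in the abstract; a sketch in its spirit, steering from paired left/right responses of the compound-eye array (the names and the control law are our assumptions), might be:

        def steer(target_lr, obstacle_lr):
            """Differential steering command in [-1, 1]; positive = turn right.

            target_lr, obstacle_lr: (left, right) summed photoreceptor
            responses for the tracked target and for detected obstacles.
            """
            attract = target_lr[1] - target_lr[0]    # turn toward the target side
            repel = obstacle_lr[0] - obstacle_lr[1]  # turn away from obstacles
            return max(-1.0, min(1.0, attract + repel))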

  19. [Development of a new position-recognition system for robotic radiosurgery systems using machine vision].

    PubMed

    Mohri, Issai; Umezu, Yoshiyuki; Fukunaga, Junnichi; Tane, Hiroyuki; Nagata, Hironori; Hirashima, Hideaki; Nakamura, Katsumasa; Hirata, Hideki

    2014-08-01

    CyberKnife(®) provides continuous guidance through radiography, allowing instantaneous X-ray images to be obtained; it is also equipped with 6D adjustment for patient setup. Its disadvantage is that registration is carried out just before irradiation, making it impossible to perform stereo-radiography during irradiation; in addition, patient movement cannot be detected during irradiation. In this study, we describe a new registration system, termed "Machine Vision," that subjects the patient to no additional radiation exposure for registration purposes, can be set up promptly, and allows real-time registration during irradiation. Our technique offers distinct advantages over CyberKnife by enabling a safer and more precise mode of treatment. Machine Vision, which we have designed and fabricated, is an automatic registration system employing three charge coupled device cameras oriented in different directions, which allow a characteristic depiction of the shape of both sides of the fetal fissure and the external ears in a human head phantom. We examined the precision of this registration system and concluded that it is suitable as an alternative registration method, without radiation exposure, when displacement is less than 1.0 mm in radiotherapy. It has potential for application to CyberKnife in clinical treatment. PMID:25142385

  20. G-MAP: a novel night vision system for satellites

    NASA Astrophysics Data System (ADS)

    Miletti, Thomas; Maresi, Luca; Zuccaro Marchi, Alessandro; Pontetti, Giorgia

    2015-10-01

    The recent development of single-photon counting array detectors opens the door to a novel type of system that could be used on satellites in low Earth orbit. One possible application is the detection of non-cooperative vessels or illegal fishing activities. Currently, only surveillance operations conducted by navies or coast guards address this topic; such operations are by nature costly and offer limited coverage. This paper describes the architectural design of a system based on a novel single-photon counting detector, which works mainly in the visible and features fast readout, low noise, and a 256x256 matrix of 64 μm pixels. This detector is positioned in the focal plane of a fully aspheric reflective f/6 telescope to guarantee state-of-the-art performance. The combination of the two provides a ground sampling distance compatible with the average dimensions of a vessel, together with good overall performance. A radiative analysis of the light transmitted from emission to detection is presented, starting from models of the lamps used for attracting fish and illuminating the decks of the boats. A radiative transfer model is used to estimate the number of photons emitted by such vessels that reach the detector. Since the novel detector features a high frame rate and low noise, the system as envisaged is able to serve the proposed goal properly. The paper shows the results of a trade-off between instrument parameters and spacecraft operations to maximize the detection probability and the covered sea surface. The status of development of both the detector and the telescope is also described.
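
    The paper's radiative analysis starts from measured lamp models and a radiative transfer code; a much cruder photon-budget sketch (isotropic source, single transmittance factor, all values illustrative) conveys the idea:

        import math

        PLANCK, C = 6.626e-34, 3.0e8

        def photons_at_detector(lamp_power_w, optical_fraction, range_m,
                                aperture_m2, transmittance, exposure_s,
                                wavelength_m=550e-9):
            radiant_w = lamp_power_w * optical_fraction          # W emitted as light
            irradiance = radiant_w / (4 * math.pi * range_m**2)  # W/m^2 at orbit
            collected_w = irradiance * transmittance * aperture_m2
            photon_energy = PLANCK * C / wavelength_m
            return collected_w * exposure_s / photon_energy

        # 1 kW deck lamp, 20% radiated as light, seen from 500 km with a
        # 10 cm^2 aperture through 70% atmospheric transmittance for 10 ms:
        n = photons_at_detector(1000, 0.2, 5e5, 1e-3, 0.7, 0.01)   # ~1e3 photons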

  1. Retinal stimulation strategies to restore vision: Fundamentals and systems.

    PubMed

    Yue, Lan; Weiland, James D; Roska, Botond; Humayun, Mark S

    2016-07-01

    Retinal degeneration, a leading cause of blindness worldwide, is primarily characterized by the dysfunctional/degenerated photoreceptors that impair the ability of the retina to detect light. Our group and others have shown that bioelectronic retinal implants restore useful visual input to those who have been blind for decades. This unprecedented approach of restoring sight demonstrates that patients can adapt to new visual input, and thereby opens up opportunities to not only improve this technology but also develop alternative retinal stimulation approaches. These future improvements or new technologies could have the potential of selectively stimulating specific cell classes in the inner retina, leading to improved visual resolution and color vision. In this review we will detail the progress of bioelectronic retinal implants and future devices in this genre as well as discuss other technologies such as optogenetics, chemical photoswitches, and ultrasound stimulation. We will discuss the principles, biological aspects, technology development, current status, clinical outcomes/prospects, and challenges for each approach. The review will also cover cortical responses to retinal stimulation in blind patients, as documented by functional imaging. PMID:27238218

  2. Semi-autonomous wheelchair developed using a unique camera system configuration biologically inspired by equine vision.

    PubMed

    Nguyen, Jordan S; Tran, Yvonne; Su, Steven W; Nguyen, Hung T

    2011-01-01

    This paper is concerned with the design and development of a semi-autonomous wheelchair system using cameras in a configuration modeled on the vision system of a horse. This new camera configuration utilizes stereoscopic vision for three-dimensional (3D) depth perception and mapping ahead of the wheelchair, combined with a spherical camera system providing 360 degrees of monocular vision. This unique combination allows static components of an unknown environment to be mapped and any surrounding dynamic obstacles to be detected during real-time autonomous navigation, minimizing blind spots and preventing accidental collisions with people or obstacles. This novel vision system, combined with shared control strategies, provides intelligent assistive guidance during wheelchair navigation and can accompany any hands-free wheelchair control technology. Leading up to experimental trials with patients at the Royal Rehabilitation Centre (RRC) in Ryde, results have demonstrated the effectiveness of this system in assisting the user to navigate safely within the RRC whilst avoiding potential collisions. PMID:22255649
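
    The stereoscopic pair provides depth through the classic pinhole relation Z = fB/d; a minimal sketch of that step (parameter names are ours, not the paper's):

        def depth_from_disparity(disparity_px, focal_px, baseline_m):
            """Range to a feature seen in both forward cameras.

            disparity_px: horizontal pixel offset between the two views
            focal_px:     focal length expressed in pixels
            baseline_m:   separation between the stereo cameras
            """
            if disparity_px <= 0:
                return float("inf")   # zero disparity = effectively infinite range
            return focal_px * baseline_m / disparity_px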

  3. External Vision Systems (XVS) Proof-of-Concept Flight Test Evaluation

    NASA Technical Reports Server (NTRS)

    Shelton, Kevin J.; Williams, Steven P.; Kramer, Lynda J.; Arthur, Jarvis J.; Prinzel, Lawrence, III; Bailey, Randall E.

    2014-01-01

    NASA's Fundamental Aeronautics Program, High Speed Project is performing research, development, test and evaluation of flight deck and related technologies to support future low-boom, supersonic configurations (without forward-facing windows) by use of an eXternal Vision System (XVS). The challenge of XVS is to determine a combination of sensor and display technologies which can provide an equivalent level of safety and performance to that provided by forward-facing windows in today's aircraft. This flight test was conducted with the goal of obtaining performance data on see-and-avoid and see-to-follow traffic using a proof-of-concept XVS design in actual flight conditions. Six data collection flights were flown in four traffic scenarios against two different sized participating traffic aircraft. This test utilized a 3x1 array of High Definition (HD) cameras, with a fixed forward field-of-view, mounted on NASA Langley's UC-12 test aircraft. Test scenarios, with participating NASA aircraft serving as traffic, were presented to two evaluation pilots per flight - one using the proof-of-concept (POC) XVS and the other looking out the forward windows. The camera images were presented on the XVS display in the aft cabin with Head-Up Display (HUD)-like flight symbology overlaying the real-time imagery. The test generated XVS performance data, including comparisons to natural vision, and post-run subjective acceptability data were also collected. This paper discusses the flight test activities, its operational challenges, and summarizes the findings to date.

  4. Integration of a Legacy System with Night Vision Training System (NVTS)

    NASA Astrophysics Data System (ADS)

    Anderson, Gretchen M.; Vrana, Craig A.; Riegler, Joseph T.; Martin, Elizabeth L.

    2002-08-01

    The increase in tactical night operations resulted in the requirement for improved night vision goggle (NVG) training and simulation. The Night Vision Training System (NVTS), developed at the Air Force Research Laboratory's Warfighter Training Research Division (AFRL/HEA), provides the high-fidelity NVG imagery required to support effective NVG training and mission rehearsal. Acquisition of a multichannel NVTS, to drive both an out-the-window (OTW) view and a helmet-mounted display (HMD), may exceed the resources of some training units. An alternative could be to add one channel of NVG imagery to the existing OTW imagery provided by the legacy system. This evaluation addressed engineering and training issues associated with integrating a single NVTS HMD channel with an existing legacy system. Pilots rated the degree of disparity between the HMD and OTW scenes for various scene attributes and its effect on flight performance. Findings demonstrated the potential for integration of an NVTS channel with an existing legacy system. Latency and terrain elevation differences between the two databases were measured and did not significantly impact system integration or pilot ratings. When integrating other legacy systems with NVTS, significant disparities may exist between the two databases. Pilot ratings and comments indicate that (a) display brightness and contrast levels of the OTW scene should be set to correspond to real-world, unaided luminance values for a given illumination condition; (b) disparity in moon phase and position between the two sky models should be minimized; and (c) star quantity and brightness in the OTW scene and the NVG scene, as rendered on the HMD, should be as consistent with real-world conditions as possible.

  5. Three-dimensional measurement of moving objects using a multiple-camera vision system

    SciTech Connect

    Lee, D.J.; Anbalagan, R.S.

    1995-12-31

    Machine vision systems utilizing the light-triangulation technique are often used for depth measurement or profiling. The surface depth information is extracted from a two-dimensional (2-D) image and requires the object to be stationary. For most industrial gauging applications, several measurements of a large moving object must be made simultaneously from different perspectives with high accuracy. This requires a system with multiple cameras that can provide true three-dimensional (3-D) measurements in a world coordinate system. A unique solution based on the IDAS vision system has been designed and developed for measuring large, fast-moving objects with high accuracy. The vision system consists of three camera systems, each calibrated to obtain information in all three dimensions through a detailed calibration procedure using a common reference jig. Information obtained from each of the three cameras can be converted into a world coordinate system created during calibration. Because of the unified coordinate system, objects can be measured accurately independent of their orientation and position. This enables the system to perform 3-D measurement of objects moving at high speed without the occlusion problem. The technique can be expanded to an n-camera system to support the measurement of complex objects. The details of the system configuration, optics selection, calibration procedure, and image-processing algorithms are included in this paper.
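
    The key step is mapping each camera's measurements into the shared world frame fixed by the reference jig. Conceptually (NumPy assumed, with R and t the rotation and translation recovered for one camera during calibration):

        import numpy as np

        def camera_to_world(point_cam, R, t):
            """Map a 3-D point from one camera's frame into the common world
            frame, so measurements from all three cameras can be combined
            independent of object position and orientation."""
            return np.asarray(R) @ np.asarray(point_cam) + np.asarray(t)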

  6. Novel compact panomorph lens based vision system for monitoring around a vehicle

    NASA Astrophysics Data System (ADS)

    Thibault, Simon

    2008-04-01

    Automotive applications are one of the largest vision-sensor market segments and one of the fastest growing ones. The trend toward using ever more sensors in cars is driven both by legislation and by consumer demands for higher safety and better driving experiences. Awareness of what directly surrounds a vehicle affects safe driving and manoeuvring. Consequently, panoramic 360° field-of-view imaging can contribute more to the perception of the world around the driver than any other sensor. However, several sensor systems are normally necessary to obtain a complete view around the car. To solve this issue, a customized imaging system based on a panomorph lens can provide maximum information to the driver with a reduced number of sensors. A panomorph lens is a hemispheric wide-angle anamorphic lens with enhanced resolution in predefined zones of interest. Because panomorph lenses are optimized to a custom angle-to-pixel relationship, vision systems provide ideal image coverage that reduces and optimizes the processing. We present various scenarios which may benefit from the use of a custom panoramic sensor, discuss the technical requirements of such a vision system, and demonstrate how the panomorph-based visual sensor is probably one of the most promising ways to fuse many sensors into one. For example, a single panoramic sensor on the front of a vehicle could provide all the information necessary for assistance in crash avoidance, lane tracking, early warning, parking aids, road sign detection, and various video monitoring views.
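
    The essence of a panomorph lens is a non-linear angle-to-pixel law with extra resolution in a zone of interest. A toy model of such a law (the zone, boost factor, and normalization are illustrative assumptions, not Thibault's design data; NumPy assumed):

        import numpy as np

        def radius_for_angle(theta, theta_max=np.pi / 2, r_max=1.0, boost=2.0,
                             zone=(np.radians(20), np.radians(50))):
            """Image radius for field angle theta. A plain fisheye would use
            r = r_max * theta / theta_max; here dr/dtheta is boosted inside
            the zone of interest and renormalized to still fit the sensor."""
            thetas = np.linspace(0.0, theta_max, 1000)
            weight = np.where((thetas >= zone[0]) & (thetas <= zone[1]), boost, 1.0)
            r = np.cumsum(weight)          # dr/dtheta proportional to weight
            r = r / r[-1] * r_max          # renormalize so theta_max maps to r_max
            return np.interp(theta, thetas, r)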

  7. Forward-looking activities: incorporating citizens' visions: A critical analysis of the CIVISTI method.

    PubMed

    Gudowsky, Niklas; Peissl, Walter; Sotoudeh, Mahshid; Bechtold, Ulrike

    2012-11-01

    Looking back on the many prophets who tried to predict the future as if it were predetermined, at first sight any forward-looking activity is reminiscent of making predictions with a crystal ball. In contrast to fortune tellers, today's exercises do not predict, but try to show different paths that an open future could take. A key motivation to undertake forward-looking activities is broadening the information basis for decision-makers to help them actively shape the future in a desired way. Experts, laypeople, or stakeholders may have different sets of values and priorities with regard to pending decisions on any issue related to the future. Therefore, considering and incorporating their views can, in the best case scenario, lead to more robust decisions and strategies. However, transferring this plurality into a form that decision-makers can consider is a challenge in terms of both design and facilitation of participatory processes. In this paper, we will introduce and critically assess a new qualitative method for forward-looking activities, namely CIVISTI (Citizen Visions on Science, Technology and Innovation; www.civisti.org), which was developed during an EU project of the same name. Focussing strongly on participation, with clear roles for citizens and experts, the method combines expert, stakeholder and lay knowledge to elaborate recommendations for decision-making in issues related to today's and tomorrow's science, technology and innovation. Consisting of three steps, the process starts with citizens' visions of a future 30-40 years from now. Experts then translate these visions into practical recommendations which the same citizens then validate and prioritise to produce a final product. The following paper will highlight the added value as well as limits of the CIVISTI method and will illustrate potential for the improvement of future processes. PMID:23204998

  8. Enhanced Flight Vision Systems Operational Feasibility Study Using Radar and Infrared Sensors

    NASA Technical Reports Server (NTRS)

    Etherington, Timothy J.; Kramer, Lynda J.; Severance, Kurt; Bailey, Randall E.; Williams, Steven P.; Harrison, Stephanie J.

    2015-01-01

    Approach and landing operations during periods of reduced visibility have plagued aircraft pilots since the beginning of aviation. Although techniques are currently available to mitigate some of the visibility conditions, these operations are still ultimately limited by the pilot's ability to "see" required visual landing references (e.g., markings and/or lights of threshold and touchdown zone) and require significant and costly ground infrastructure. Certified Enhanced Flight Vision Systems (EFVS) have shown promise to lift the obscuration veil. They allow the pilot to operate with enhanced vision, in lieu of natural vision, in the visual segment to enable equivalent visual operations (EVO). An aviation standards document was developed with industry and government consensus for using an EFVS for approach, landing, and rollout to a safe taxi speed in visibilities as low as 300 feet runway visual range (RVR). These new standards establish performance, integrity, availability, and safety requirements to operate in this regime without reliance on a pilot's or flight crew's natural vision by use of a fail-operational EFVS. A pilot-in-the-loop high-fidelity motion simulation study was conducted at NASA Langley Research Center to evaluate the operational feasibility, pilot workload, and pilot acceptability of conducting straight-in instrument approaches with published vertical guidance to landing, touchdown, and rollout to a safe taxi speed in visibility as low as 300 feet RVR by use of vision system technologies on a head-up display (HUD) without need or reliance on natural vision. Twelve crews flew various landing and departure scenarios in 1800, 1000, 700, and 300 RVR. This paper details the non-normal results of the study including objective and subjective measures of performance and acceptability. The study validated the operational feasibility of approach and departure operations and success was independent of visibility conditions. Failures were handled within the

  9. Algorithm development with the integrated vision system to get the 3D location data

    NASA Astrophysics Data System (ADS)

    Lee, Ji-hyeon; Kim, Moo-hyun; Kim, Yeong-kyeong; Park, Mu-hun

    2011-10-01

    This paper introduces an Integrated Vision System that detects images of slabs and coils and obtains complete three-dimensional location data, unimpeded by obstacles, for unmanned-crane automation. Existing laser-scanner approaches tend to be easily influenced by the environment of the work place and therefore cannot provide exact location information. CCD cameras, in turn, have problems recognizing patterns because of the illumination conditions of an industrial setting. To overcome these two weaknesses, this paper suggests combining laser scanners with a CCD camera into what we name the Integrated Vision System. This system produces clearer images and yields more accurate 3D location information. The suggested system is expected to help improve the unmanned-crane automation system.

  10. Development of a body motion interactive system with a weight voting mechanism and computer vision technology

    NASA Astrophysics Data System (ADS)

    Lin, Chern-Sheng; Chen, Chia-Tse; Shei, Hung-Jung; Lay, Yun-Long; Chiu, Chuang-Chien

    2012-09-01

    This study develops a body motion interactive system based on computer vision technology, combining interactive games, art performance, and exercise training. Multiple image-processing and computer vision technologies are used. The system calculates the color characteristics of an object and then performs color segmentation. To avoid erroneous action judgments, the system uses a weight voting mechanism: a condition score and weight value are set for each action judgment, and the best judgment is chosen by the weighted vote. Finally, the reliability of the system was estimated in order to make improvements. The results showed that this method achieves good accuracy and stability in the human-machine interface of the sports training system.
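
    The abstract describes the weight voting mechanism only at a high level; a minimal sketch of how condition scores and weights might combine (the exact scoring rule is our assumption):

        def weighted_vote(judgments):
            """Choose the action with the highest total weighted score.

            judgments: iterable of (action, condition_score, weight) tuples,
            one per cue (e.g. color segmentation, motion, posture).
            """
            totals = {}
            for action, score, weight in judgments:
                totals[action] = totals.get(action, 0.0) + score * weight
            return max(totals, key=totals.get)

        # Color cue votes strongly for "raise_arm", motion cue weakly for "wave":
        best = weighted_vote([("raise_arm", 0.9, 0.6), ("wave", 0.4, 0.4)])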

  11. Using Vision System Technologies to Enable Operational Improvements for Low Visibility Approach and Landing Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Ellis, Kyle K. E.; Bailey, Randall E.; Williams, Steven P.; Severance, Kurt; Le Vie, Lisa R.; Comstock, James R.

    2014-01-01

    Flight deck-based vision systems, such as Synthetic and Enhanced Vision System (SEVS) technologies, have the potential to provide additional margins of safety for aircrew performance and enable the implementation of operational improvements for low visibility surface, arrival, and departure operations in the terminal environment with equivalent efficiency to visual operations. To achieve this potential, research is required for effective technology development and implementation based upon human factors design and regulatory guidance. This research supports the introduction and use of Synthetic Vision Systems and Enhanced Flight Vision Systems (SVS/EFVS) as advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. Twelve air transport-rated crews participated in a motion-base simulation experiment to evaluate the use of SVS/EFVS in NextGen low visibility approach and landing operations. Three monochromatic, collimated head-up display (HUD) concepts (conventional HUD, SVS HUD, and EFVS HUD) and two color head-down primary flight display (PFD) concepts (conventional PFD, SVS PFD) were evaluated in a simulated NextGen Chicago O'Hare terminal environment. Additionally, the instrument approach type (no offset, 3 degree offset, 15 degree offset) was experimentally varied to test the efficacy of the HUD concepts for offset approach operations. The data showed that touchdown and landing performance was excellent regardless of SEVS concept or the type of offset instrument approach being flown. Subjective assessments of mental workload and situation awareness indicated that making offset approaches in low visibility conditions with an EFVS HUD or SVS HUD may be feasible.

  12. Assessing Impact of Dual Sensor Enhanced Flight Vision Systems on Departure Performance

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Etherington, Timothy J.; Severance, Kurt; Bailey, Randall E.

    2016-01-01

    Synthetic Vision (SV) and Enhanced Flight Vision Systems (EFVS) may serve as game-changing technologies to meet the challenges of the Next Generation Air Transportation System and the envisioned Equivalent Visual Operations (EVO) concept - that is, the ability to achieve the safety and operational tempos of current-day Visual Flight Rules operations irrespective of the weather and visibility conditions. One significant obstacle lies in the definition of required equipage on the aircraft and on the airport to enable the EVO concept objective. A motion-base simulator experiment was conducted to evaluate the operational feasibility and pilot workload of conducting departures and approaches on runways without centerline lighting in visibility as low as 300 feet runway visual range (RVR) by use of onboard vision system technologies on a Head-Up Display (HUD) without need or reliance on natural vision. Twelve crews evaluated two methods of combining dual sensor (millimeter wave radar and forward looking infrared) EFVS imagery on pilot-flying and pilot-monitoring HUDs. In addition, the impact of adding SV to the dual sensor EFVS imagery on crew flight performance and workload was assessed. Using EFVS concepts during 300 RVR terminal operations on runways without centerline lighting appears feasible, as all EFVS concepts yielded departure and landing rollout performance equivalent to, or better than, that of operations flown with a conventional HUD to runways having centerline lighting, without any workload penalty. Adding SV imagery to EFVS concepts provided situation awareness improvements but no discernible improvements in flight path maintenance.

  13. The Use of a Tactile-Vision Sensory Substitution System as an Augmentative Tool for Individuals with Visual Impairments

    ERIC Educational Resources Information Center

    Williams, Michael D.; Ray, Christopher T.; Griffith, Jennifer; De l'Aune, William

    2011-01-01

    The promise of novel technological strategies and solutions to assist persons with visual impairments (that is, those who are blind or have low vision) is frequently discussed and held to be widely beneficial in countless applications and daily activities. One such approach involving a tactile-vision sensory substitution modality as a mechanism to…

  14. Angle extended linear MEMS scanning system for 3D laser vision sensor

    NASA Astrophysics Data System (ADS)

    Pang, Yajun; Zhang, Yinxin; Yang, Huaidong; Zhu, Pan; Gai, Ye; Zhao, Jian; Huang, Zhanhua

    2016-09-01

    The scanning system is often considered the most important part of a 3D laser vision sensor. In this paper, we propose a method for the optical design of an angle-extended linear MEMS scanning system, which features a large scanning angle, small beam divergence, and small spot size for a 3D laser vision sensor. The design principle and theoretical formulas are derived strictly. With the help of the software ZEMAX, a linear scanning optical system based on MEMS has been designed. Results show that the designed system can extend the scanning angle from ±8° to ±26.5° with a divergence angle smaller than 3.5 mrad, while the spot size is reduced by a factor of 4.545.
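
    A quick consistency check on the reported trade-off: an afocal angle expander that magnifies scan angle by M compresses the beam diameter by roughly M (smaller spot) while multiplying its divergence by roughly M. This is only a back-of-envelope model, not the paper's design equations, and the raw MEMS beam divergence below is an assumed value:

        import math

        M = math.tan(math.radians(26.5)) / math.tan(math.radians(8.0))  # ~3.55
        raw_divergence_mrad = 1.0   # assumed divergence of the raw MEMS beam
        print(f"angle magnification ~{M:.2f}, "
              f"output divergence ~{M * raw_divergence_mrad:.1f} mrad")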

  15. Assessing Dual Sensor Enhanced Flight Vision Systems to Enable Equivalent Visual Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Etherington, Timothy J.; Severance, Kurt; Bailey, Randall E.; Williams, Steven P.; Harrison, Stephanie J.

    2016-01-01

    Flight deck-based vision system technologies, such as Synthetic Vision (SV) and Enhanced Flight Vision Systems (EFVS), may serve as a revolutionary crew/vehicle interface enabling technologies to meet the challenges of the Next Generation Air Transportation System Equivalent Visual Operations (EVO) concept - that is, the ability to achieve the safety of current-day Visual Flight Rules (VFR) operations and maintain the operational tempos of VFR irrespective of the weather and visibility conditions. One significant challenge lies in the definition of required equipage on the aircraft and on the airport to enable the EVO concept objective. A motion-base simulator experiment was conducted to evaluate the operational feasibility, pilot workload and pilot acceptability of conducting straight-in instrument approaches with published vertical guidance to landing, touchdown, and rollout to a safe taxi speed in visibility as low as 300 ft runway visual range by use of onboard vision system technologies on a Head-Up Display (HUD) without need or reliance on natural vision. Twelve crews evaluated two methods of combining dual sensor (millimeter wave radar and forward looking infrared) EFVS imagery on pilot-flying and pilot-monitoring HUDs as they made approaches to runways with and without touchdown zone and centerline lights. In addition, the impact of adding SV to the dual sensor EFVS imagery on crew flight performance, workload, and situation awareness during extremely low visibility approach and landing operations was assessed. Results indicate that all EFVS concepts flown resulted in excellent approach path tracking and touchdown performance without any workload penalty. Adding SV imagery to EFVS concepts provided situation awareness improvements but no discernible improvements in flight path maintenance.

  16. Development of a model of machine hand eye coordination and program specifications for a topological machine vision system

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote, supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions and an aerospace multiprocessor implementation is described.

  17. Defining filled and empty space: reassessing the filled space illusion for active touch and vision.

    PubMed

    Collier, Elizabeth S; Lawson, Rebecca

    2016-09-01

    In the filled space illusion, an extent filled with gratings is estimated as longer than an equivalent extent that is apparently empty. However, researchers do not seem to have carefully considered the terms filled and empty when describing this illusion. Specifically, for active touch, smooth, solid surfaces have typically been used to represent empty space. Thus, it is not known whether comparing gratings to truly empty space (air) during active exploration by touch elicits the same illusionary effect. In Experiments 1 and 2, gratings were estimated as longer if they were compared to smooth, solid surfaces rather than being compared to truly empty space. Consistent with this, Experiment 3 showed that empty space was perceived as longer than solid surfaces when the two were compared directly. Together these results are consistent with the hypothesis that, for touch, the standard filled space illusion only occurs if gratings are compared to smooth, solid surfaces and that it may reverse if gratings are compared to empty space. Finally, Experiment 4 showed that gratings were estimated as longer than both solid and empty extents in vision, so the direction of the filled space illusion in vision was not affected by the nature of the comparator. These results are discussed in relation to the dual nature of active touch. PMID:27233286

  18. An Active Vision Approach to Understanding and Improving Visual Training in the Geosciences

    NASA Astrophysics Data System (ADS)

    Voronov, J.; Tarduno, J. A.; Jacobs, R. A.; Pelz, J. B.; Rosen, M. R.

    2009-12-01

    Experience in the field is a fundamental aspect of geologic training, and its effectiveness is largely unchallenged because of anecdotal evidence of its success among expert geologists. However, there have been only a few quantitative studies, based on large data collection efforts, investigating how Earth scientists learn in the field. In a recent collaboration between Earth scientists, cognitive scientists, and imaging scientists at the University of Rochester and the Rochester Institute of Technology, we are undertaking such a study. Within cognitive science, one school of thought, referred to as the Active Vision approach, emphasizes that visual perception is an active process requiring us to move our eyes to acquire new information about our environment. The Active Vision approach indicates the perceptual skills which experts possess and which novices need to acquire to achieve expert performance. We describe data collection efforts using portable eye-trackers to assess how novice and expert geologists acquire visual knowledge in the field. We also discuss our efforts to collect images for use in a semi-immersive classroom environment, useful for further testing of novices and experts using eye-tracking technologies.

  19. Crew and display concepts evaluation for synthetic/enhanced vision systems

    NASA Astrophysics Data System (ADS)

    Bailey, Randall E.; Kramer, Lynda J.; Prinzel, Lawrence J., III

    2006-05-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that strive to eliminate low-visibility conditions as a causal factor in civil aircraft accidents and replicate the operational benefits of clear-day flight operations, regardless of the actual outside visibility condition. Enhanced Vision System (EVS) technologies are analogous and complementary in many respects to SVS, with the principal difference being that EVS is an imaging sensor presentation, as opposed to a database-derived image. The use of EVS in civil aircraft is projected to increase rapidly, as the Federal Aviation Administration recently changed the aircraft operating rules under Part 91, revising the flight visibility requirements for conducting operations to civil airports. Operators conducting straight-in instrument approach procedures may now operate below the published approach minimums when using an approved EVS that shows the required visual references on the pilot's Head-Up Display. An experiment was conducted to evaluate the complementary use of SVS and EVS technologies, specifically focusing on new techniques for integration and/or fusion of synthetic and enhanced vision technologies and crew resource management while operating under the newly adopted FAA rules which provide operating credit for EVS. Overall, the experimental data showed that significant improvements in situation awareness, without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying.

  20. Real-time vision system using multiple-feature extraction for the Toro robot

    NASA Astrophysics Data System (ADS)

    Climent, Joan; Grau, Antoni

    1994-08-01

    This paper presents a vision system developed for the guidance of a mobile robot that emulates the behavior of a bull in a corrida. A camera is placed at the front of the robot; this means that, for the first time, we look at the scene from the point of view of the bull. Because of the difficulty of understanding how bull vision actually works, our model is restricted to the best-known features of bull behavior in corrida scenes. For this reason, we restrict the bull's vision to two different capabilities: high sensitivity to the color red, and constant attention to moving objects. We emulate this model using two image processors; the first performs color segmentation, the second motion segmentation. The information obtained is used as feedback for the mobile robot. Our approach to view planning is to first simplify the 3-D decision-making problem into a 2-D problem. The architecture of our real-time vision system and its implementation are described in detail.

  1. Street Viewer: An Autonomous Vision Based Traffic Tracking System.

    PubMed

    Bottino, Andrea; Garbo, Alessandro; Loiacono, Carmelo; Quer, Stefano

    2016-01-01

    The development of intelligent transportation systems requires the availability of both accurate traffic information in real time and a cost-effective solution. In this paper, we describe Street Viewer, a system capable of analyzing the traffic behavior in different scenarios from images taken with an off-the-shelf optical camera. Street Viewer operates in real time on embedded hardware architectures with limited computational resources. The system features a pipelined architecture that, on one side, allows one to exploit multi-threading intensively and, on the other side, allows one to improve the overall accuracy and robustness of the system, since each layer is aimed at refining for the following layers the information it receives as input. Another relevant feature of our approach is that it is self-adaptive. During an initial setup, the application runs in learning mode to build a model of the flow patterns in the observed area. Once the model is stable, the system switches to the on-line mode where the flow model is used to count vehicles traveling on each lane and to produce a traffic information summary. If changes in the flow model are detected, the system switches back autonomously to the learning mode. The accuracy and the robustness of the system are analyzed in the paper through experimental results obtained on several different scenarios and running the system for long periods of time. PMID:27271627

  2. Modelling Peripheral Pre-Attention And Foveal Fixation For Search Directed Machine Vision Systems

    NASA Astrophysics Data System (ADS)

    Luckman, Adrian J.; Allinson, Nigel M.

    1990-02-01

    The human visual system has evolved towards a close integration of visual information processing and visual data acquisition. Fast, peripheral, pre-attentive vision uses low resolution input to direct the fixation of the fovea to features of importance in an efficient visual search pattern. Here we describe a system which emulates the multi-resolution aspect of human visual processing to provide computational efficiency in data analysis. The visual task used is the location of specific features in human faces for use in videotelephony. The feature location technique uses a Kohonen-based neural network architecture to permit learning by example. Input data is in the form of a resolution pyramid to emulate the differing modes of human vision. The system is implemented on a RISC-based microcomputer workstation with purpose-built real-time image acquisition hardware. It performs well with both familiar and unseen image data and, with refinement, could form the basis of a useable system.
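
    The multi-resolution input can be built as a standard image pyramid, with coarse levels standing in for peripheral pre-attentive vision and the full-resolution base for foveal fixation (OpenCV assumed; the paper's own pyramid parameters are not given):

        import cv2

        def resolution_pyramid(image, levels=4):
            """pyramid[0] is the full-resolution 'fovea'; each later level is
            blurred and downsampled 2x, emulating progressively coarser
            peripheral vision."""
            pyramid = [image]
            for _ in range(levels - 1):
                pyramid.append(cv2.pyrDown(pyramid[-1]))
            return pyramid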

  3. Health systems analysis of eye care services in Zambia: evaluating progress towards VISION 2020 goals

    PubMed Central

    2014-01-01

    Background VISION 2020 is a global initiative launched in 1999 to eliminate avoidable blindness by 2020. The objective of this study was to undertake a situation analysis of the Zambian eye health system and assess VISION 2020 process indicators on human resources, equipment and infrastructure. Methods All eye health care providers were surveyed to determine location, financing sources, human resources and equipment. Key informants were interviewed regarding levels of service provision, management and leadership in the sector. Policy papers were reviewed. A health system dynamics framework was used to analyse findings. Results During 2011, 74 facilities provided eye care in Zambia; 39% were public, 37% private for-profit and 24% owned by Non-Governmental Organizations. Private facilities were solely located in major cities. A total of 191 people worked in eye care; 18 of these were ophthalmologists and eight cataract surgeons, equivalent to 0.34 and 0.15 per 250,000 population, respectively. VISION 2020 targets for inpatient beds and surgical theatres were met in six out of nine provinces, but human resources and spectacles manufacturing workshops were below target in every province. Inequalities in service provision between urban and rural areas were substantial. Conclusion Shortage and maldistribution of human resources, lack of routine monitoring and inadequate financing mechanisms are the root causes of underperformance in the Zambian eye health system, which hinder the ability to achieve the VISION 2020 goals. We recommend that all VISION 2020 process indicators are evaluated simultaneously as these are not individually useful for monitoring progress. PMID:24575919

  4. Research of vision measurement system of the instruction sheet caliper rack

    NASA Astrophysics Data System (ADS)

    Liu, Yu; Kong, Ming; Dong, Ying-Jun

    2010-12-01

    This article proposes a method for rack measurement based on computer vision and establishes a computer vision measurement system consisting of a precision linear guide, a camera, a computer, and several other parts. The system can be divided into two parts: a displacement platform system and an image acquisition system. In the displacement platform system, the linear guide is moved by a driver under computer control, extending the measurement range so that the whole rack can be measured. The image acquisition system uses computer vision technology to analyze the captured images: a light source illuminates the caliper rack, the camera acquires the images, and the images are transferred to the computer through a USB interface for analysis such as edge detection and feature extraction. The detection accuracy reaches the sub-pixel level. An experiment was performed on the rack of an instruction sheet caliper with a module of 0.19894, using image processing to realize edge detection and extract the rack edge. The basic parameters of the rack, such as p and s, were obtained, and the individual circular pitch deviation fpt, total cumulative pitch deviation Fp, and tooth thickness deviation fsn were calculated. The measurement results were then compared with those from an Accretech S1910DX3. The comparison showed that the accuracy of this method meets the requirements for the measurement of such racks, and that the method is simple and practical, providing technical support for online rack testing.
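
    The paper does not name its sub-pixel operator; one common way to reach sub-pixel accuracy on an edge profile is parabolic interpolation of the gradient peak (NumPy assumed, illustrative only):

        import numpy as np

        def subpixel_edge(profile):
            """Edge location along a 1-D intensity profile, in pixels."""
            g = np.abs(np.diff(np.asarray(profile, dtype=float)))  # gradient
            i = int(np.argmax(g))                                  # steepest step
            if 0 < i < len(g) - 1:
                denom = g[i - 1] - 2 * g[i] + g[i + 1]
                # Vertex of the parabola through the three gradient samples.
                offset = 0.5 * (g[i - 1] - g[i + 1]) / denom if denom else 0.0
            else:
                offset = 0.0
            return i + 0.5 + offset   # +0.5: gradient samples sit between pixels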

  5. Utilization of the Space Vision System as an Augmented Reality System For Mission Operations

    NASA Technical Reports Server (NTRS)

    Maida, James C.; Bowen, Charles

    2003-01-01

    Augmented reality is a technique whereby computer generated images are superimposed on live images for visual enhancement. Augmented reality can also be characterized as dynamic overlays when computer generated images are registered with moving objects in a live image. This technique has been successfully implemented, with low to medium levels of registration precision, in an NRA funded project entitled, "Improving Human Task Performance with Luminance Images and Dynamic Overlays". Future research is already being planned to also utilize a laboratory-based system where more extensive subject testing can be performed. However successful this might be, the problem will still be whether such a technology can be used with flight hardware. To answer this question, the Canadian Space Vision System (SVS) will be tested as an augmented reality system capable of improving human performance where the operation requires indirect viewing. This system has already been certified for flight and is currently flown on each shuttle mission for station assembly. Successful development and utilization of this system in a ground-based experiment will expand its utilization for on-orbit mission operations. Current research and development regarding the use of augmented reality technology is being simulated using ground-based equipment. This is an appropriate approach for development of symbology (graphics and annotation) optimal for human performance and for development of optimal image registration techniques. It is anticipated that this technology will become more pervasive as it matures. Because we know what and where almost everything is on ISS, this reduces the registration problem and improves the computer model of that reality, making augmented reality an attractive tool, provided we know how to use it. This is the basis for current research in this area. However, there is a missing element to this process. It is the link from this research to the current ISS video system and to
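
    Registering a dynamic overlay reduces to projecting a known 3-D point into the live image once the camera pose is known. A generic sketch of that step (not the SVS implementation; R, t, K are the camera rotation, translation, and intrinsic matrix, and NumPy is assumed):

        import numpy as np

        def overlay_pixel(point_world, R, t, K):
            """Pixel at which to draw a symbol over a known 3-D feature."""
            p_cam = np.asarray(R) @ np.asarray(point_world) + np.asarray(t)
            uvw = np.asarray(K) @ p_cam        # project onto the image plane
            return uvw[:2] / uvw[2]            # perspective divide -> (u, v)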

  6. Synthetic vision in the cockpit: 3D systems for general aviation

    NASA Astrophysics Data System (ADS)

    Hansen, Andrew J.; Rybacki, Richard M.; Smith, W. Garth

    2001-08-01

    Synthetic vision has the potential to improve safety in aviation through better pilot situational awareness and enhanced navigational guidance. The technological advances enabling synthetic vision are GPS-based navigation (position and attitude) systems and efficient graphical systems for rendering 3D displays in the cockpit. A benefit for military, commercial, and general aviation platforms alike is the relentless drive to miniaturize computer subsystems. Processors, data storage, graphical and digital signal processing chips, RF circuitry, and bus architectures are keeping pace with or out-pacing Moore's Law in the transition to mobile computing and embedded systems. The tandem of fundamental GPS navigation services, such as the US FAA's Wide Area and Local Area Augmentation Systems (WAAS and LAAS), and commercially viable mobile rendering systems puts synthetic vision well within the technological reach of general aviation. Given the appropriate navigational inputs, low-cost and power-efficient graphics solutions are capable of rendering a pilot's out-the-window view into visual databases with photo-specific imagery and geo-specific elevation and feature content. Looking beyond the single airframe, proposed aviation technologies such as ADS-B would provide a communication channel for bringing traffic information on board and into the cockpit visually via the 3D display for additional pilot awareness. This paper gives a view of current 3D graphics system capability suitable for general aviation and presents a potential road map following the current trends.

  7. Altered Vision-Related Resting-State Activity in Pituitary Adenoma Patients with Visual Damage

    PubMed Central

    Qian, Haiyan; Wang, Xingchao; Wang, Zhongyan; Wang, Zhenmin; Liu, Pinan

    2016-01-01

    Objective To investigate changes of vision-related resting-state activity in pituitary adenoma (PA) patients with visual damage through comparison to healthy controls (HCs). Methods 25 PA patients with visual damage and 25 age- and sex-matched corrected-to-normal-vision HCs underwent a complete neuro-ophthalmologic evaluation, including automated perimetry, fundus examinations, and a magnetic resonance imaging (MRI) protocol, including structural and resting-state fMRI (RS-fMRI) sequences. The regional homogeneity (ReHo) of the vision-related cortex and the functional connectivity (FC) of 6 seeds within the visual cortex (the primary visual cortex (V1), the secondary visual cortex (V2), and the middle temporal visual cortex (MT+)) were evaluated. Two-sample t-tests were conducted to identify the differences between the two groups. Results Compared with the HCs, the PA group exhibited reduced ReHo in the bilateral V1, V2, V3, fusiform, MT+, BA37, thalamus, postcentral gyrus and left precentral gyrus and increased ReHo in the precuneus, prefrontal cortex, posterior cingulate cortex (PCC), anterior cingulate cortex (ACC), insula, supramarginal gyrus (SMG), and putamen. Compared with the HCs, V1, V2, and MT+ in the PAs exhibited decreased FC with the V1, V2, MT+, fusiform, BA37, and increased FC primarily in the bilateral temporal lobe (especially BA20,21,22), prefrontal cortex, PCC, insular, angular gyrus, ACC, pre-SMA, SMG, hippocampal formation, caudate and putamen. It is worth mentioning that compared with HCs, V1 in PAs exhibited decreased or similar FC with the thalamus, whereas V2 and MT+ exhibited increased FCs with the thalamus, especially pulvinar. Conclusions In our study, we identified significant neural reorganization in the vision-related cortex of PA patients with visual damage compared with HCs. Most subareas within the visual cortex exhibited remarkable neural dysfunction. Some subareas, including the MT+ and V2, exhibited enhanced FC with the thalamic

  10. Machine vision is not computer vision

    NASA Astrophysics Data System (ADS)

    Batchelor, Bruce G.; Charlier, Jean-Ray

    1998-10-01

    The identity of Machine Vision as an academic and practical subject of study is asserted. In particular, the distinction between Machine Vision on the one hand and Computer Vision, Digital Image Processing, Pattern Recognition and Artificial Intelligence on the other is emphasized. The article demonstrates through four case studies that the active involvement of a person who is sensitive to the broad aspects of vision system design can avoid disaster and can often achieve a successful machine that would not otherwise have been possible. This article is a transcript of the keynote address presented at the conference. Since the proceedings are prepared and printed before the conference, it is not possible to include a record of the response to this paper made by the delegates during the round-table discussion. It is hoped to collate and disseminate these via the World Wide Web after the event. (A link will be provided at http://bruce.cs.cf.ac.uk/bruce/index.html.)

  11. The optimized PWM driving for the lighting system based on physiological characteristic of human vision

    NASA Astrophysics Data System (ADS)

    Wang, Ping-Chieh; Uang, Chii-Maw; Hong, Yi-Jian; Ho, Zu-Sheng

    2011-10-01

    White-light LEDs play a leading role in energy-saving solid-state lighting systems, and finding the most energy-efficient driving scheme is an ongoing engineering effort. Besides DC and AC driving, operating LEDs with Pulse Width Modulation (PWM) is also a valuable research topic. The most important issue for this work is to find the drive frequency and duty cycle that achieve both energy savings and a comfortable human visual sensation. In this paper, the psychophysics of the human visual response to lighting, including persistence of vision, Bloch's Law, the Broca-Sulzer Law, the Ferry-Porter Law, the Talbot-Plateau Law, and contrast sensitivity, is discussed and analyzed. From the human vision system, we found three factors that decide the frequency and duty cycle of the PWM driving method: flicker sensitivity, illumination intensity, and background illumination. A set of controllable LED lamps with adjustable frequency and duty cycle, fitted inside a non-closed box, was constructed for the experiment. When the background illumination intensity is high, variations in flicker sensitivity and illumination intensity are difficult to observe. Increasing the PWM frequency eliminates perceived flicker, and when the duty cycle exceeds 70%, the visual sensitivity saturates. For warning purposes, the better frequency range is between 7 Hz and 15 Hz, and the duty cycle can be lowered to 70%. For general lighting, the better frequency range is between 200 Hz and 1000 Hz, and the duty cycle can likewise be lowered to 70%.
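
    A minimal sketch of the two laws that set the operating point above: under the Talbot-Plateau law, a PWM-driven LED flickering above the fusion frequency is perceived at its time-averaged luminance, and the Ferry-Porter law says the critical fusion frequency grows linearly with log luminance. The coefficients and luminance values below are illustrative assumptions, not figures from the paper.

        import math

        def perceived_luminance(peak_cd_m2, duty_cycle):
            # Talbot-Plateau: above fusion, perceived luminance equals the
            # time average, i.e. peak luminance scaled by the duty cycle.
            if not 0.0 <= duty_cycle <= 1.0:
                raise ValueError("duty cycle must lie in [0, 1]")
            return peak_cd_m2 * duty_cycle

        def fusion_frequency_hz(luminance_cd_m2, a=12.0, b=37.0):
            # Ferry-Porter: CFF = a * log10(L) + b; a and b are assumed
            # fit coefficients used here for illustration only.
            return a * math.log10(luminance_cd_m2) + b

        if __name__ == "__main__":
            l_avg = perceived_luminance(400.0, 0.7)   # 70% duty -> 280 cd/m^2
            print(l_avg, fusion_frequency_hz(l_avg))  # ~66 Hz, so a 200 Hz drive is flicker-free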

  12. Virtual vision system with actual flavor by olfactory display

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Kanazawa, Fumihiro

    2010-11-01

    The authors have researched multimedia and support systems for nursing studies on, and practices of, reminiscence therapy and life review therapy. The concept of the life review was introduced by Butler in 1963: the process of thinking back on one's life and communicating about one's life to another person is called life review. There is a famous episode concerning memory, known as the Proustian effect. It takes its name from an episode in Proust's novel in which the narrator recalls an old memory upon dipping a madeleine in tea. Many scientists have investigated why smells trigger memory. The authors focus on the relation between smell and memory, although the mechanism is not yet evident. We have therefore added an olfactory display to the multimedia system so that smells can become a trigger for recalling buried memories. An olfactory display is a device that delivers smells to the nose. It provides special effects, for example emitting a smell as if you were there, or giving a trigger for reminding us of memories. The authors have developed a tabletop display system connected with the olfactory display. To deliver a flavor to the user's nose, the system needs to recognize and measure the positions of the user's face and nose. In this paper, the authors describe an olfactory display that detects the nose position for effective delivery.

  13. A vision for an ultra-high resolution integrated water cycle observation and prediction system

    NASA Astrophysics Data System (ADS)

    Houser, P. R.

    2013-05-01

    Society's welfare, progress, and sustainable economic growth—and life itself—depend on the abundance and vigorous cycling and replenishing of water throughout the global environment. The water cycle operates on a continuum of time and space scales and exchanges large amounts of energy as water undergoes phase changes and is moved from one part of the Earth system to another. We must move toward an integrated observation and prediction paradigm that addresses broad local-to-global science and application issues by realizing synergies associated with multiple, coordinated observations and prediction systems. A central challenge of a future water and energy cycle observation strategy is to progress from single variable water-cycle instruments to multivariable integrated instruments in electromagnetic-band families. The microwave range in the electromagnetic spectrum is ideally suited for sensing the state and abundance of water because of water's dielectric properties. Eventually, a dedicated high-resolution water-cycle microwave-based satellite mission may be possible based on large-aperture antenna technology that can harvest the synergy that would be afforded by simultaneous multichannel active and passive microwave measurements. A partial demonstration of these ideas can even be realized with existing microwave satellite observations to support advanced multivariate retrieval methods that can exploit the totality of the microwave spectral information. The simultaneous multichannel active and passive microwave retrieval would allow improved-accuracy retrievals that are not possible with isolated measurements. Furthermore, the simultaneous monitoring of several of the land, atmospheric, oceanic, and cryospheric states brings synergies that will substantially enhance understanding of the global water and energy cycle as a system. The multichannel approach also affords advantages to some constituent retrievals—for instance, simultaneous retrieval of vegetation

  14. Parallelization of low-level computer vision algorithms on a multi-DSP system

    NASA Astrophysics Data System (ADS)

    Liu, Huaida; Jia, Pingui; Li, Lijian; Yang, Yiping

    2011-06-01

    Parallel hardware has become a commonly used approach to satisfying the intensive computation demands of computer vision systems. A multiprocessor architecture based on hypercube-interconnected digital signal processors (DSPs) is described to exploit temporal and spatial parallelism. This paper presents a parallel implementation of low-level vision algorithms designed for the multi-DSP system. The convolution operation has been parallelized by using redundant boundary partitioning. Performance of the parallel convolution operation is investigated by varying the image size, mask size and the number of processors. Experimental results show that the speedup is close to the ideal value. However, load imbalance among processors can significantly affect the computation time and speedup of the multi-DSP system.
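
    A minimal sketch of redundant boundary partitioning, transplanted from the paper's DSP hypercube to Python processes purely for illustration: the image is cut into horizontal strips, each strip carries a redundant halo of mask-radius rows from its neighbours so workers need no communication at the seams, and the halos are trimmed off on reassembly. Strip counts and sizes are illustrative.

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor
        from scipy.ndimage import convolve

        def _convolve_strip(args):
            strip, mask = args
            return convolve(strip, mask, mode="nearest")

        def parallel_convolve(image, mask, workers=4):
            halo = mask.shape[0] // 2                    # redundant boundary rows
            bounds = np.linspace(0, image.shape[0], workers + 1, dtype=int)
            strips = [image[max(b0 - halo, 0):min(b1 + halo, image.shape[0])]
                      for b0, b1 in zip(bounds[:-1], bounds[1:])]
            with ProcessPoolExecutor(workers) as pool:
                parts = list(pool.map(_convolve_strip, [(s, mask) for s in strips]))
            out = []
            for (b0, b1), part in zip(zip(bounds[:-1], bounds[1:]), parts):
                top = b0 - max(b0 - halo, 0)             # trim the halo back off
                out.append(part[top:top + (b1 - b0)])
            return np.vstack(out)

        if __name__ == "__main__":
            img = np.random.rand(480, 640)
            k = np.ones((5, 5)) / 25.0
            assert np.allclose(parallel_convolve(img, k), convolve(img, k, mode="nearest"))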

  15. The design and realization of a sort of robot vision measure system

    NASA Astrophysics Data System (ADS)

    Ren, Yong-jie; Zhu, Ji-gui; Yang, Xue-you; Ye, Sheng-hua

    2006-06-01

    The robot vision measurement system based on stereovision is a meaningful research area for engineering applications. In this system, an industrial robot is the movable carrier of the stereovision sensor, which not only extends the work space of the sensor but also preserves the characteristics of vision measurement technology such as non-contact operation and speed. By controlling the pose of the robot in space, the stereovision sensor can arrive at given points one by one to collect image data, from which 3D coordinate data are then computed. A calibration method based on the binocular stereovision sensor, which uses two transit instruments and one precision drone to carry out the whole calibration, is presented. The measurement programs for the robot and the computer were written in different programming languages. Finally, the system was tested carefully and its feasibility was proved.

  16. Awareness and Detection of Traffic and Obstacles Using Synthetic and Enhanced Vision Systems

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.

    2012-01-01

    Research literature is reviewed and summarized to evaluate the awareness and detection of traffic and obstacles when using Synthetic Vision Systems (SVS) and Enhanced Vision Systems (EVS). The study identifies the critical issues influencing the time required, accuracy, and pilot workload associated with recognizing and reacting to potential collisions or conflicts with other aircraft, vehicles and obstructions during approach, landing, and surface operations. This work considers the effect of head-down display and head-up display implementations of SVS and EVS as well as the influence of single and dual pilot operations. The influences and strategies of adding traffic information and cockpit alerting with SVS and EVS were also included. Based on this review, a knowledge gap assessment was made with recommendations for ground and flight testing to fill these gaps and hence promote the safe and effective implementation of SVS/EVS technologies for the Next Generation Air Transportation System.

  17. Snapshot hyperspectral fovea vision system (HyperVideo)

    NASA Astrophysics Data System (ADS)

    Kriesel, Jason; Scriven, Gordon; Gat, Nahum; Nagaraj, Sheela; Willson, Paul; Swaminathan, V.

    2012-06-01

    The development and demonstration of a new snapshot hyperspectral sensor is described. The system is a significant extension of the four dimensional imaging spectrometer (4DIS) concept, which resolves all four dimensions of hyperspectral imaging data (2D spatial, spectral, and temporal) in real-time. The new sensor, dubbed "4×4DIS" uses a single fiber optic reformatter that feeds into four separate, miniature visible to near-infrared (VNIR) imaging spectrometers, providing significantly better spatial resolution than previous systems. Full data cubes are captured in each frame period without scanning, i.e., "HyperVideo". The current system operates up to 30 Hz (i.e., 30 cubes/s), has 300 spectral bands from 400 to 1100 nm (~2.4 nm resolution), and a spatial resolution of 44×40 pixels. An additional 1.4 Megapixel video camera provides scene context and effectively sharpens the spatial resolution of the hyperspectral data. Essentially, the 4×4DIS provides a 2D spatially resolved grid of 44×40 = 1760 separate spectral measurements every 33 ms, which is overlaid on the detailed spatial information provided by the context camera. The system can use a wide range of off-the-shelf lenses and can either be operated so that the fields of view match, or in a "spectral fovea" mode, in which the 4×4DIS system uses narrow field of view optics, and is cued by a wider field of view context camera. Unlike other hyperspectral snapshot schemes, which require intensive computations to deconvolve the data (e.g., Computed Tomographic Imaging Spectrometer), the 4×4DIS requires only a linear remapping, enabling real-time display and analysis. The system concept has a range of applications including biomedical imaging, missile defense, infrared counter measure (IRCM) threat characterization, and ground based remote sensing.

  18. Improved colorization for night vision system based on image splitting

    NASA Astrophysics Data System (ADS)

    Ali, E.; Kozaitis, S. P.

    2015-03-01

    The success of a color night navigation system often depends on the accuracy of the colors in the resulting image. Small regions can incorrectly adopt the color of large regions simply due to the relative sizes of the regions. We present a method to improve the color accuracy of a night navigation system by splitting a fused image into two distinct sections, generally road and sky regions, before colorization, and processing them separately to obtain improved color accuracy in each region. Using this approach, small regions were colored correctly compared with the unsplit approach.

  19. CATEGORIZATION OF EXTRANEOUS MATTER IN COTTON USING MACHINE VISION SYSTEMS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Cotton Trash Identification System (CTIS) was developed at the Southwestern Cotton Ginning Research Laboratory to identify and categorize extraneous matter in cotton. The CTIS bark/grass categorization was evaluated with USDA-Agricultural Marketing Service (AMS) extraneous matter calls assigned ...

  20. Categorization of extraneous matter in cotton using machine vision systems

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Cotton Trash Identification System (CTIS) developed at the Southwestern Cotton Ginning Research Laboratory was evaluated for identification and categorization of extraneous matter in cotton. The CTIS bark/grass categorization was evaluated with USDA-Agricultural Marketing Service (AMS) extraneou...

  1. Finger mouse system based on computer vision in complex backgrounds

    NASA Astrophysics Data System (ADS)

    Xu, Jun; Zhang, Xiong

    2013-12-01

    This paper presents a human-computer interaction system that realizes a real-time virtual mouse. Our system emulates the dragging and selecting functions of a mouse by recognizing bare hands, so the control style is simple and intuitive. A single camera is used to capture hand images, and a DSP chip is embedded as the image processing platform. To deal with complex backgrounds, particularly where skin-like or moving objects appear, we develop novel hand recognition algorithms. Hand segmentation is achieved by skin color cues and background differencing: each input image is corrected for luminance, and then skin color is extracted by a Gaussian model. We employ a CamShift tracking algorithm that receives feedback from the recognition module. For fingertip recognition, a method combining template matching and circle drawing is proposed. Our system has the advantages of good real-time performance, easy integration and energy conservation. Experiments show that the system is robust to the scaling and rotation of hands.
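
    A minimal sketch of the segment-then-track loop described above, using OpenCV on a desktop webcam in place of the paper's DSP platform. The fixed YCrCb skin bounds stand in for the paper's luminance-corrected Gaussian skin model and are assumed values; the fingertip-recognition step is omitted.

        import cv2
        import numpy as np

        # Assumed YCrCb skin-colour bounds; the paper fits a Gaussian model
        # to luminance-corrected pixels instead of this fixed box.
        SKIN_LO = np.array([0, 133, 77], np.uint8)
        SKIN_HI = np.array([255, 173, 127], np.uint8)

        def skin_mask(bgr):
            ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
            return cv2.inRange(ycrcb, SKIN_LO, SKIN_HI)

        def main():
            cap = cv2.VideoCapture(0)                 # webcam instead of a DSP board
            ok, frame = cap.read()
            if not ok:
                return
            window = (0, 0, frame.shape[1], frame.shape[0])
            crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
            while ok:
                mask = skin_mask(frame)
                # CamShift locks onto the densest skin-coloured blob each frame.
                box, window = cv2.CamShift(mask, window, crit)
                pts = cv2.boxPoints(box).astype(np.int32)
                cv2.polylines(frame, [pts], True, (0, 255, 0), 2)
                cv2.imshow("hand", frame)
                if cv2.waitKey(1) == 27:              # Esc quits
                    break
                ok, frame = cap.read()
            cap.release()

        if __name__ == "__main__":
            main()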

  2. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.; Wu, Chris K.; Lin, Y. H.

    1991-01-01

    A system was developed for displaying computer graphics images of space objects, and its use was demonstrated as a testbed for evaluating vision systems for space applications. In order to evaluate vision systems, it is desirable to be able to control all factors involved in creating the images used for processing by the vision system. Considerable time and expense are involved in building accurate physical models of space objects, and precise location of the model relative to the viewer and accurate location of the light source require additional effort. As part of this project, graphics models of space objects such as the Solarmax satellite were created in which the user can control the light direction and the relative position of the object and the viewer. The work is also aimed at providing control of hue, shading, noise and shadows for use in demonstrating and testing image processing techniques. The simulated camera data can provide XYZ coordinates, pitch, yaw, and roll for the models. A physical model is also being used to compare camera images with the graphics images.

  3. Calibration for stereo vision system based on phase matching and bundle adjustment algorithm

    NASA Astrophysics Data System (ADS)

    Zhao, Huijie; Wang, Zhen; Jiang, Hongzhi; Xu, Yang; Dong, Chao

    2015-05-01

    Calibration of a stereo vision system plays an important role in machine vision applications. Existing accurate calibration methods are usually carried out by capturing a high-accuracy calibration target of the same size as the measurement view. In in-situ 3D measurement and large-field-of-view measurement, the extrinsic parameters of the system usually need to be calibrated in real time, and manufacturing a large high-accuracy calibration target for the field is a big challenge. Therefore, an accurate and rapid in-situ calibration method is needed. In this paper, a novel calibration method for stereo vision systems is proposed based on phase matching and the bundle adjustment algorithm. As each camera is usually mechanically locked once adjusted appropriately after laboratory calibration, the intrinsic parameters are usually stable, so we focus on extrinsic parameter calibration in the measurement field. First, a matching method based on the heterodyne multi-frequency phase-shifting technique is applied to find thousands of pairs of corresponding points between the images of the two cameras; the large number of corresponding point pairs helps improve the accuracy of the calibration. Then the bundle adjustment method from photogrammetry is used to optimize the extrinsic parameters and the 3D coordinates of the measured objects. Finally, metric traceability is carried out to transform the optimized extrinsic parameters from the 3D metric coordinate system into a Euclidean coordinate system to obtain the ultimate optimal extrinsic parameters. Experimental results show that the calibration procedure takes less than 3 s and that, based on a stereo vision system calibrated by the proposed method, the measurement RMS (Root Mean Square) error can reach 0.025 mm when measuring a calibrated gauge with a nominal length of 999.576 mm.
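
    A minimal sketch of the refinement step, simplified to one camera's extrinsics: given matched points triangulated into 3D and their observed image positions, a nonlinear least-squares solver minimizes the reprojection error over the rotation and translation. The paper's full bundle adjustment also optimizes the 3D points and both cameras jointly; function and variable names here are illustrative.

        import numpy as np
        import cv2
        from scipy.optimize import least_squares

        def reprojection_residuals(params, pts3d, pts2d, K):
            # params = [rx, ry, rz, tx, ty, tz]: Rodrigues rotation + translation.
            rvec, tvec = params[:3], params[3:6]
            proj, _ = cv2.projectPoints(pts3d, rvec, tvec, K, None)
            return (proj.reshape(-1, 2) - pts2d).ravel()

        def refine_extrinsics(x0, pts3d, pts2d, K):
            # Levenberg-Marquardt over the 6 extrinsic parameters only;
            # pts3d is (N, 3), pts2d is (N, 2), K is the 3x3 intrinsic matrix.
            fit = least_squares(reprojection_residuals, x0,
                                args=(pts3d, pts2d, K), method="lm")
            return fit.x[:3], fit.x[3:6]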

  4. A Vision of the Future Air Traffic Control System

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz

    2000-01-01

    The air transportation system is on the verge of gridlock, with delays and cancelled flights this summer reaching all-time highs. As demand for air transportation continues to increase, the capacity needed to accommodate the growth in traffic is falling farther and farther behind. Moreover, it has become increasingly apparent that the present system cannot be scaled up to provide the capacity increases needed to meet demand over the next 25 years. NASA, working with the Federal Aviation Administration and industry, is pursuing a major research program to develop air traffic management technologies with the ultimate goal of doubling capacity while increasing safety and efficiency. This seminar will describe how the current system operates, what its limitations are, and why a revolutionary "shift in paradigm" is needed to overcome fundamental limitations in capacity and safety. For the near term, NASA has developed a portfolio of software tools for air traffic controllers, called the Center-TRACON Automation System (CTAS), that provides modest gains in capacity and efficiency while staying within the current paradigm. The outline of a concept for the long term, with a deployment date of 2015 at the earliest, has recently been formulated and presented by NASA to a select group of industry and government stakeholders. Automated decision-making software, combined with an "Internet in the sky" that enables sharing of information and distributes control between the cockpit and the ground, is key to this concept. However, its most revolutionary feature is a fundamental change in the roles and responsibilities assigned to air traffic controllers.

  5. Visual Advantage of Enhanced Flight Vision System During NextGen Flight Test Evaluation

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Harrison, Stephanie J.; Bailey, Randall E.; Shelton, Kevin J.; Ellis, Kyle K.

    2014-01-01

    Synthetic Vision Systems and Enhanced Flight Vision System (SVS/EFVS) technologies have the potential to provide additional margins of safety for aircrew performance and enable operational improvements for low visibility operations in the terminal area environment. Simulation and flight tests were jointly sponsored by NASA's Aviation Safety Program, Vehicle Systems Safety Technology project and the Federal Aviation Administration (FAA) to evaluate potential safety and operational benefits of SVS/EFVS technologies in low visibility Next Generation Air Transportation System (NextGen) operations. The flight tests were conducted by a team of Honeywell, Gulfstream Aerospace Corporation and NASA personnel with the goal of obtaining pilot-in-the-loop test data for flight validation, verification, and demonstration of selected SVS/EFVS operational and system-level performance capabilities. Nine test flights were flown in Gulfstream's G450 flight test aircraft outfitted with the SVS/EFVS technologies under low visibility instrument meteorological conditions. Evaluation pilots flew 108 approaches in low visibility weather conditions (600 feet to 3600 feet reported visibility) under different obscurants (mist, fog, drizzle fog, frozen fog) and sky cover (broken, overcast). Flight test videos were evaluated at three different altitudes (decision altitude, 100 feet radar altitude, and touchdown) to determine the visual advantage afforded to the pilot using the EFVS/Forward-Looking InfraRed (FLIR) imagery compared to natural vision. Results indicate the EFVS provided a visual advantage of two to three times over that of the out-the-window (OTW) view. The EFVS allowed pilots to view the runway environment, specifically runway lights, before they would have been able to see them OTW with natural vision.

  6. Visual advantage of enhanced flight vision system during NextGen flight test evaluation

    NASA Astrophysics Data System (ADS)

    Kramer, Lynda J.; Harrison, Stephanie J.; Bailey, Randall E.; Shelton, Kevin J.; Ellis, Kyle K. E.

    2014-06-01

    Synthetic Vision Systems and Enhanced Flight Vision System (SVS/EFVS) technologies have the potential to provide additional margins of safety for aircrew performance and enable operational improvements for low visibility operations in the terminal area environment. Simulation and flight tests were jointly sponsored by NASA's Aviation Safety Program, Vehicle Systems Safety Technology project and the Federal Aviation Administration (FAA) to evaluate potential safety and operational benefits of SVS/EFVS technologies in low visibility Next Generation Air Transportation System (NextGen) operations. The flight tests were conducted by a team of Honeywell, Gulfstream Aerospace Corporation and NASA personnel with the goal of obtaining pilot-in-the-loop test data for flight validation, verification, and demonstration of selected SVS/EFVS operational and system-level performance capabilities. Nine test flights were flown in Gulfstream's G450 flight test aircraft outfitted with the SVS/EFVS technologies under low visibility instrument meteorological conditions. Evaluation pilots flew 108 approaches in low visibility weather conditions (600 feet to 3600 feet reported visibility) under different obscurants (mist, fog, drizzle fog, frozen fog) and sky cover (broken, overcast). Flight test videos were evaluated at three different altitudes (decision altitude, 100 feet radar altitude, and touchdown) to determine the visual advantage afforded to the pilot using the EFVS/Forward-Looking InfraRed (FLIR) imagery compared to natural vision. Results indicate the EFVS provided a visual advantage of two to three times over that of the out-the-window (OTW) view. The EFVS allowed pilots to view the runway environment, specifically runway lights, before they would have been able to see them OTW with natural vision.

  7. Commercial machine vision system for traffic monitoring and control

    NASA Astrophysics Data System (ADS)

    D'Agostino, Salvatore A.

    1992-03-01

    Traffic imaging covers a range of current and potential applications. These include traffic control and analysis; license plate finding, reading and storage; violation detection and archiving; vehicle sensing; and toll collection/enforcement. Experience with commercial installations and knowledge of the system requirements have been gained over the past 10 years. Recent improvements in system component cost and performance now allow products to be applied that provide cost-effective solutions to the requirements of truly intelligent vehicle/highway systems (IVHS). The United States is a country that loves to drive. The infrastructure built in the 1950s and 1960s, along with the low price of gasoline, created an environment in which the automobile became an accessible and integral part of American life. The United States has spent $103 billion to build 40,000 highway miles since 1956, the start of the interstate program, which is nearly complete. Unfortunately, a situation has arisen where the options for dramatically improving the ability of our roadways to absorb the increasing amount of traffic are limited. This is true in other countries as well as in the United States. The number of vehicles in the world increases by over 10,000,000 each year. In the United States there are about 180 million cars, trucks, and buses, and this is estimated to double in the next 30 years. Urban development, and development in general, pushes out from the edge of our roadways, leaving little room to increase the physical amount of roadway. Americans now spend more than 1.6 billion hours a year waiting in traffic jams. It is estimated that this congestion wastes 3 billion gallons of oil, or 4% of the nation's annual gas consumption. The way out of the dilemma is to increase road use efficiency as well as improve mass transportation alternatives.

  8. Optical calculation of correlation filters for a robotic vision system

    NASA Technical Reports Server (NTRS)

    Knopp, Jerome

    1989-01-01

    A method is presented for designing optical correlation filters based on measuring three intensity patterns: the Fourier transform of a filter object, a reference wave, and the interference pattern produced by the sum of the object transform and the reference. The method can produce a filter that is well matched to the object, its transforming optical system, and the spatial light modulator used in the correlator input plane. A computer simulation is presented to demonstrate the approach for the special case of a conventional binary phase-only filter. The simulation produced a workable filter with a sharp correlation peak.
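
    A minimal numerical sketch of the special case simulated in the paper, a conventional binary phase-only filter: the filter keeps only the sign of the real part of the object's Fourier transform, and correlation is performed in the frequency domain. The synthetic test object and image sizes are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        obj = rng.standard_normal((64, 64))          # synthetic, noise-like filter object
        F = np.fft.fft2(obj)
        bpof = np.where(F.real >= 0, 1.0, -1.0)      # binary phase-only filter

        scene = np.roll(obj, (10, 5), axis=(0, 1))   # object displaced in the scene
        corr = np.fft.ifft2(np.fft.fft2(scene) * bpof)
        peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
        print(peak)   # sharp correlation peak at (10, 5), up to the known BPOF conjugate ambiguity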

  9. Estimation of Theaflavins (TF) and Thearubigins (TR) Ratio in Black Tea Liquor Using Electronic Vision System

    NASA Astrophysics Data System (ADS)

    Akuli, Amitava; Pal, Abhra; Ghosh, Arunangshu; Bhattacharyya, Nabarun; Bandhopadhyya, Rajib; Tamuly, Pradip; Gogoi, Nagen

    2011-09-01

    The quality of black tea is generally assessed through organoleptic tests by professional tea tasters, who determine quality based on the tea's appearance (in dry form and during liquor formation), aroma and taste. Variation in these attributes is contributed by a number of chemical compounds such as Theaflavins (TF), Thearubigins (TR), caffeine, linalool and geraniol. Among these, TF and TR are the most important compounds, contributing to the taste, colour and brightness of tea liquor. Estimation of TF and TR in black tea is generally done with a spectrophotometer, but this analysis requires rigorous and time-consuming sample preparation, and operating the costly spectrophotometer requires expert manpower. To overcome these problems, an Electronic Vision System based on digital image processing has been developed. The system is fast, low cost and repeatable, and it can accurately estimate the TF/TR ratio of black tea liquor. The data analysis is done using Principal Component Analysis (PCA), Multiple Linear Regression (MLR) and Multiple Discriminant Analysis (MDA). A correlation has been established between the colour of tea liquor images and the TF/TR ratio. This paper describes the newly developed E-Vision system, the experimental methods, the data analysis algorithms and, finally, the performance of the E-Vision System as compared to the results of a traditional spectrophotometer.
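
    A minimal sketch of the PCA-plus-MLR stage: colour features extracted from liquor images are reduced by PCA and regressed against spectrophotometer-measured TF/TR ratios. The feature set, sample count and synthetic data below are stand-ins, since the paper's actual features and measurements are not given here.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)
        X = rng.random((60, 6))           # per-image colour features (e.g. RGB/HSV means), assumed
        y = 0.08 + 0.02 * (X @ rng.random(6)) + rng.normal(0, 0.002, 60)  # synthetic TF/TR ratios

        model = make_pipeline(PCA(n_components=3), LinearRegression())
        model.fit(X, y)
        print("in-sample R^2:", model.score(X, y))
        print("predicted TF/TR:", model.predict(X[:3]))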

  10. Implementation of the Canny Edge Detection algorithm for a stereo vision system

    SciTech Connect

    Wang, J.R.; Davis, T.A.; Lee, G.K.

    1996-12-31

    Many applications require three-dimensional information. For example, in manufacturing systems, parts inspection may require the extraction of three-dimensional information from two-dimensional images through the use of a stereo vision system. In medical applications, one may wish to reconstruct a three-dimensional image of a human organ from two or more transducer images. An important component of three-dimensional reconstruction is edge detection, whereby an image boundary is separated from the background for further processing. In this paper, a modification of the Canny Edge Detection approach is suggested to extract an image from a cluttered background. The resulting cleaned image can then be sent to the image matching, interpolation and inverse perspective transformation blocks to reconstruct the 3-D scene. A brief discussion of the stereo vision system that has been developed at the Mars Mission Research Center (MMRC) is also presented. Results from a version of the Canny Edge Detection algorithm show promise as an accurate edge extractor for use in the edge-pixel-based binocular stereo vision system.
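
    For reference, the standard Canny pipeline (Gaussian smoothing, gradient computation, non-maximum suppression, hysteresis thresholding) is available as a single OpenCV call; a sketch of the pre-cleaning step is below. The file name and threshold values are placeholders, and the paper's own modification of Canny is not reproduced here.

        import cv2

        img = cv2.imread("left_view.png", cv2.IMREAD_GRAYSCALE)  # placeholder image
        blurred = cv2.GaussianBlur(img, (5, 5), 1.4)             # suppress clutter and noise
        # Hysteresis thresholds: weak edges survive only if linked to strong ones.
        edges = cv2.Canny(blurred, threshold1=50, threshold2=150)
        cv2.imwrite("left_edges.png", edges)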

  11. Accuracy improvement in a calibration test bench for accelerometers by a vision system

    NASA Astrophysics Data System (ADS)

    D'Emilia, Giulio; Di Gasbarro, David; Gaspari, Antonella; Natale, Emanuela

    2016-06-01

    A procedure is described in this paper for improving the calibration accuracy of low-cost accelerometers in a prototype rotary test bench, driven by a brushless servo-motor and operating in a low-frequency vibration range (0 to 5 Hz). Vibration measurements by a vision system based on a low-frequency camera have been carried out in order to reduce the uncertainty in evaluating the real acceleration at the installation point of the sensor to be calibrated. A preliminary test device was realized and operated in order to evaluate the metrological performance of the vision system, showing satisfactory behavior once measurement uncertainty is taken into account. A combination of suitable settings of the motion control parameters and of the information gained by the vision system made it possible to fit the information about the reference acceleration at the installation point to the needs of the procedure for static and dynamic calibration of three-axis accelerometers.

  12. Mosad and Stream Vision For A Telerobotic, Flying Camera System

    NASA Technical Reports Server (NTRS)

    Mandl, William

    2002-01-01

    Two full-custom camera systems using the Multiplexed OverSample Analog to Digital (MOSAD) conversion technology for visible light sensing were built and demonstrated: one with a photo-gate sensor and one with a photo-diode sensor. Each system includes the camera assembly, a driver interface assembly, a frame grabber board with integrated decimator, and Windows 2000 compatible software for real-time image display. An array size of 320x240 with 16 micron pixel pitch was developed for compatibility with 0.3 inch CCTV optics. With 1.2 micron technology, a 73% fill factor was achieved. Noise measurements indicated 9 to 11 bits in operation, with 13.7 bits in the best case. Measured power was under 10 milliwatts at 400 samples per second. Nonuniformity variation was below the noise floor. Pictures were taken with the different cameras during the characterization study to demonstrate the operable range. The successful conclusion of this program demonstrates the utility of MOSAD for NASA missions, providing superior performance over CMOS and lower cost and power consumption than CCD. The MOSAD approach also provides a path to radiation hardening for space-based applications.

  13. Ranging Apparatus and Method Implementing Stereo Vision System

    NASA Technical Reports Server (NTRS)

    Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a left and right video camera mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video camera and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
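
    The final range computation from horizontal disparity follows the standard pinhole stereo relation Z = f * B / d for rectified cameras; a minimal sketch is below, with the focal length and baseline as placeholder values rather than figures from the patent.

        def stereo_range(f_px, baseline_m, x_left_px, x_right_px):
            """Range from horizontal disparity: Z = f * B / d (rectified cameras)."""
            disparity = x_left_px - x_right_px
            if disparity <= 0:
                raise ValueError("laser spot must have positive disparity")
            return f_px * baseline_m / disparity

        # Example: 800 px focal length, 12 cm baseline, 16 px disparity -> 6 m.
        print(stereo_range(800.0, 0.12, 412.0, 396.0))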

  14. Multispectral uncooled infrared enhanced-vision system for flight test

    NASA Astrophysics Data System (ADS)

    Tiana, Carlo L.; Kerr, Richard; Harrah, Steven D.

    2001-08-01

    The 1997 Final Report of the 'White House Commission on Aviation Safety and Security' challenged industrial and government concerns to reduce aviation accident rates by a factor of five within 10 years. In the report, the commission encourages NASA, FAA and others 'to expand their cooperative efforts in aviation safety research and development'. As a result of this publication, NASA has since undertaken a number of initiatives aimed at meeting the stated goal. Among these, the NASA Aviation Safety Program was initiated to encourage and assist in the development of technologies for the improvement of aviation safety. Among the technologies being considered are certain sensor technologies that may enable commercial and general aviation pilots to 'see to land' at night or in poor visibility conditions. Infrared sensors have potential applicability in this field, and this paper describes a system, based on such sensors, that is being deployed on the NASA Langley Research Center B757 ARIES research aircraft. The system includes two infrared sensors operating in different spectral bands, and a visible-band color CCD camera for documentation purposes. The sensors are mounted in an aerodynamic package in a forward position on the underside of the aircraft. Support equipment in the aircraft cabin collects and processes all relevant sensor data. Display of sensor images is achieved in real time on the aircraft's Head Up Display (HUD), or other display devices.

  15. The VirtualwindoW: A Reconfigurable, Modular, Stereo Vision System

    SciTech Connect

    M. D. McKay; M. O. Anderson; R. A. Kinoshita; W. D. Willis

    1999-02-01

    An important need while using unmanned vehicles is the ability for the remote operator or observer to easily and accurately perceive the operating environment. A classic problem in providing a complete representation of the remote work area is sensory overload or excessive complexity in the human-machine interface. In addition, remote operations often benefit from depth perception capability while viewing or manipulating objects. Thus, there is an ongoing effort within the remote and teleoperated robotic field to develop better human-machine interfaces. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has been researching methods to simplify the human-machine interface using atypical operator techniques. Initial telepresence research conducted at the INEEL developed and implemented a concept called the VirtualwindoW. This system minimized the complexity of remote stereo viewing controls and provided the operator the ''feel'' of viewing the environment, including depth perception, in a natural setting. The VirtualwindoW has shown that the human-machine interface can be simplified while increasing operator performance. This paper deals with the continuing research and development of the VirtualwindoW to provide a reconfigurable, modular system that easily utilizes commercially available off the shelf components. This adaptability is well suited to several aspects of unmanned vehicle applications, most notably environmental perception and vehicle control.

  16. The VirtualwindoW: A Reconfigurable, Modular, Stereo Vision System

    SciTech Connect

    Kinoshita, Robert Arthur; Anderson, Matthew Oley; Mckay, Mark D; Willis, Walter David

    1999-04-01

    An important need while using unmanned vehicles is the ability for the remote operator or observer to easily and accurately perceive the operating environment. A classic problem in providing a complete representation of the remote work area is sensory overload or excessive complexity in the human-machine interface. In addition, remote operations often benefit from depth perception capability while viewing or manipulating objects. Thus, there is an ongoing effort within the remote and teleoperated robotic field to develop better human-machine interfaces. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has been researching methods to simplify the human-machine interface using atypical operator techniques. Initial telepresence research conducted at the INEEL developed and implemented a concept called the VirtualwindoW. This system minimized the complexity of remote stereo viewing controls and provided the operator the "feel" of viewing the environment, including depth perception, in a natural setting. The VirtualwindoW has shown that the human-machine interface can be simplified while increasing operator performance. This paper deals with the continuing research and development of the VirtualwindoW to provide a reconfigurable, modular system that easily utilizes commercially available off the shelf components. This adaptability is well suited to several aspects of unmanned vehicle applications, most notably environmental perception and vehicle control.

  17. Computational approaches to vision

    NASA Technical Reports Server (NTRS)

    Barrow, H. G.; Tenenbaum, J. M.

    1986-01-01

    Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.

  18. Shadow and feature recognition aids for rapid image geo-registration in UAV vision system architectures

    NASA Astrophysics Data System (ADS)

    Baer, Wolfgang; Kölsch, Mathias

    2009-05-01

    The problem of real-time image geo-referencing is encountered in all vision-based cognitive systems. In this paper we present a model-image feedback approach to this problem and show how it can be applied to image exploitation from Unmanned Aerial Vehicle (UAV) vision systems. By calculating reference images from a known terrain database, using a novel ray-trace algorithm, we are able to eliminate foreshortening, elevation, and lighting distortions, introduce registration aids, and reduce the geo-referencing problem to a linear transformation search over the two-dimensional image space. A method for shadow calculation that maintains real-time performance is also presented. The paper then discusses the implementation of our model-image feedback approach in the Perspective View Nascent Technology (PVNT) software package and provides sample results from UAV mission control and target mensuration experiments conducted at China Lake and Camp Roberts, California.

  19. Real-time and low-cost embedded platform for car's surrounding vision system

    NASA Astrophysics Data System (ADS)

    Saponara, Sergio; Franchi, Emilio

    2016-04-01

    The design and the implementation of a flexible and low-cost embedded system for real-time car-surround vision is presented. The target of the proposed multi-camera vision system is to provide the driver a better view of the objects that surround the vehicle. Fish-eye lenses are used to achieve a larger Field of View (FOV) but, on the other hand, introduce radial distortion in the images projected onto the sensors; with low-cost cameras there may also be alignment issues. Since these complications are noticeable and dangerous, a real-time algorithm for their correction is presented, followed by another real-time algorithm that merges the 4 camera video streams into a single view. Real-time image processing is achieved through a hardware-software platform.
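
    A minimal sketch of the distortion-correction step using OpenCV's fisheye model; the camera matrix K and distortion coefficients D shown are placeholders that would normally come from a prior calibration (e.g. with cv2.fisheye.calibrate), not values from the paper.

        import cv2
        import numpy as np

        K = np.array([[300.0, 0.0, 640.0],        # placeholder intrinsics
                      [0.0, 300.0, 360.0],
                      [0.0, 0.0, 1.0]])
        D = np.array([0.05, -0.01, 0.0, 0.0])     # fisheye coefficients k1..k4, assumed

        frame = cv2.imread("cam_front.png")       # placeholder camera frame
        undistorted = cv2.fisheye.undistortImage(frame, K, D, Knew=K)
        cv2.imwrite("cam_front_undistorted.png", undistorted)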

  20. A low-cost embedded platform for car's surrounding vision system

    NASA Astrophysics Data System (ADS)

    Saponara, S.; Fontanelli, G.; Fanucci, L.; Franchi, E.

    2014-05-01

    The design and the implementation of a flexible and low-cost embedded system for a car's surround vision is presented. The target of the proposed multi-camera vision system is to provide the driver a better view of the objects that surround the vehicle during maneuvering. Fish-eye lenses are used to achieve a larger field of view (FOV) but, on the other hand, introduce radial distortion in the images projected onto the sensors; with low-cost cameras there may also be alignment issues. Since these complications are noticeable and dangerous, a real-time algorithm for their correction, and for merging the video from the 4 cameras into a single view, is presented.

  1. Robust Range Estimation with a Monocular Camera for Vision-Based Forward Collision Warning System

    PubMed Central

    2014-01-01

    We propose a range estimation method for vision-based forward collision warning systems with a monocular camera. To solve the problem of variation of camera pitch angle due to vehicle motion and road inclination, the proposed method estimates virtual horizon from size and position of vehicles in captured image at run-time. The proposed method provides robust results even when road inclination varies continuously on hilly roads or lane markings are not seen on crowded roads. For experiments, a vision-based forward collision warning system has been implemented and the proposed method is evaluated with video clips recorded in highway and urban traffic environments. Virtual horizons estimated by the proposed method are compared with horizons manually identified, and estimated ranges are compared with measured ranges. Experimental results confirm that the proposed method provides robust results both in highway and in urban traffic environments. PMID:24558344
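
    Once the virtual horizon row is known, range follows from the standard flat-road pinhole relation Z = f * H / (y_contact - y_horizon), where H is the camera height above the road and y_contact is the image row where the lead vehicle meets the road. A minimal sketch with placeholder values is below; the paper's contribution, estimating the virtual horizon at run-time from vehicle sizes and positions, is not reproduced.

        def monocular_range(f_px, cam_height_m, y_contact_px, y_horizon_px):
            """Flat-road pinhole model: Z = f * H / (y_contact - y_horizon)."""
            dy = y_contact_px - y_horizon_px
            if dy <= 0:
                raise ValueError("contact point must lie below the virtual horizon")
            return f_px * cam_height_m / dy

        # Example: 1000 px focal length, camera 1.2 m above the road,
        # vehicle contact 40 px below the estimated horizon -> 30 m.
        print(monocular_range(1000.0, 1.2, 400.0, 360.0))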

  2. SailSpy: a vision system for yacht sail shape measurement

    NASA Astrophysics Data System (ADS)

    Olsson, Olof J.; Power, P. Wayne; Bowman, Chris C.; Palmer, G. Terry; Clist, Roger S.

    1992-11-01

    SailSpy is a real-time vision system which we have developed for automatically measuring sail shapes and masthead rotation on racing yachts. Versions have been used by the New Zealand team in two America's Cup challenges in 1988 and 1992. SailSpy uses four miniature video cameras mounted at the top of the mast to provide views of the headsail and mainsail on either tack. The cameras are connected to the SailSpy computer below deck using lightweight cables mounted inside the mast. Images received from the cameras are automatically analyzed by the SailSpy computer, and sail shape and mast rotation parameters are calculated. The sail shape parameters are calculated by recognizing sail markers (ellipses) that have been attached to the sails, and the mast rotation parameters by recognizing deck markers painted on the deck. This paper describes the SailSpy system and some of the vision algorithms used.

  3. Endoscopic machine vision system for blood-supply estimation of the nasal mucosa

    NASA Astrophysics Data System (ADS)

    Balas, Constantin J.; Christodoulou, P. N.; Prokopakis, E. P.; Helidonis, Emmanuel S.

    1996-12-01

    We have developed a machine vision system that combines imaging and absolute color measurement techniques for remote, objective, 2D color and color-difference measurements. This imaging colorimeter, adapted to an endoscope, was used to evaluate nasal mucosa color changes induced by the administration of a sympathomimetic agent with vasoconstrictive properties. The reproducible and reliable measurements demonstrate the efficacy of the described system for assessing the vasoconstrictive potency of different pharmacotherapeutic agents, and suggest that it can also be useful for evaluating individuals with allergic rhinitis, vasomotor rhinitis, and inflammatory disorders of the paranasal sinuses. Machine vision techniques in endoscopy, by providing objective indices for optical tissue characterization and analysis, can serve in understanding the pathophysiology of tissue lesions and in the objective evaluation of their response to different therapeutic schemes in several medical fields.

  4. Outstanding Science in the Neptune System from an Aerocaptured NASA "Vision Mission"

    NASA Technical Reports Server (NTRS)

    Spilker, T. R.; Spilker, L. J.; Ingersoll, A. P.

    2005-01-01

    In 2003 NASA released its Vision Mission Studies NRA (NRA-03-OSS-01-VM) soliciting proposals to study any one of 17 Vision Missions described in the NRA. The authors, along with a team of scientists and engineers, successfully proposed a study of the Neptune Orbiter With Probes (NOP) option, a mission that performs Cassini-level science in the Neptune system without fission-based electric power or propulsion. The Study Team includes a Science Team composed of experienced planetary scientists, many of whom helped draft the Neptune discussions in the 2003 Solar System Exploration Decadal Survey (SSEDS), and an Implementation Team with experienced engineers and technologists from multiple NASA Centers and JPL.

  5. Image processing for a tactile/vision substitution system using digital CNN.

    PubMed

    Lin, Chien-Nan; Yu, Sung-Nien; Hu, Jin-Cheng

    2006-01-01

    In view of the parallel processing and easy implementation properties of CNNs, we propose to use a digital CNN as the image processor of a tactile/vision substitution system (TVSS). The digital CNN processor is used to execute the wavelet down-sampling filtering and the half-toning operations, aiming to extract important features from the images. A template combination method is used to embed the two image processing functions into a single CNN processor. The digital CNN processor is implemented as an intellectual property (IP) core on a XILINX VIRTEX II 2000 FPGA board. Experiments are designed to test the capability of the CNN processor in the recognition of characters and human subjects in different environments. The experiments demonstrate impressive results, which prove the proposed digital CNN processor to be a powerful component in the design of efficient tactile/vision substitution systems for visually impaired people. PMID:17946687
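
    A minimal sketch of the two feature-extraction operations in plain NumPy/PyWavelets rather than CNN templates: wavelet down-sampling keeps only the approximation band at each level, and error-diffusion half-toning (Floyd-Steinberg here, as an assumed stand-in for the paper's CNN half-toning template) converts the result to the binary pattern a tactile array needs.

        import numpy as np
        import pywt

        def wavelet_downsample(img, levels=2):
            # Keep only the low-frequency approximation band at each level.
            for _ in range(levels):
                img, _ = pywt.dwt2(img, "haar")
            return img

        def halftone(img):
            # Floyd-Steinberg error diffusion to a binary tactile pattern.
            f = img.astype(float) / img.max()
            h, w = f.shape
            out = np.zeros_like(f)
            for y in range(h):
                for x in range(w):
                    out[y, x] = 1.0 if f[y, x] >= 0.5 else 0.0
                    err = f[y, x] - out[y, x]
                    if x + 1 < w:
                        f[y, x + 1] += err * 7 / 16
                    if y + 1 < h:
                        if x > 0:
                            f[y + 1, x - 1] += err * 3 / 16
                        f[y + 1, x] += err * 5 / 16
                        if x + 1 < w:
                            f[y + 1, x + 1] += err * 1 / 16
            return out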

  6. A novel registration method for image-guided neurosurgery system based on stereo vision.

    PubMed

    An, Yong; Wang, Manning; Song, Zhijian

    2015-01-01

    This study presents a novel spatial registration method for image-guided neurosurgery systems (IGNS) based on stereo vision. Images of the patient's head are captured by a video camera, which is calibrated and tracked by an optical tracking system. A set of sparse facial data points is then reconstructed from them by stereo vision in the patient space. Surface matching is used to register the reconstructed sparse points against the facial surface reconstructed from preoperative images of the patient. Simulation experiments verified the feasibility of the proposed method, which is a new low-cost and easy-to-use spatial registration method for IGNS with good prospects for clinical application. PMID:26406100

  7. Insect-inspired high-speed motion vision system for robot control.

    PubMed

    Wu, Haiyan; Zou, Ke; Zhang, Tianguang; Borst, Alexander; Kühnlenz, Kolja

    2012-10-01

    The mechanism for motion detection in a fly's vision system, known as the Reichardt correlator, suffers from a main shortcoming as a velocity estimator: low accuracy. To enable accurate velocity estimation, responses of the Reichardt correlator to image sequences are analyzed in this paper. An elaborated model with additional preprocessing modules is proposed. The relative error of velocity estimation is significantly reduced by establishing a real-time response-velocity lookup table based on the power spectrum analysis of the input signal. By exploiting the improved velocity estimation accuracy and the simple structure of the Reichardt correlator, a high-speed vision system of 1 kHz is designed and applied for robot yaw-angle control in real-time experiments. The experimental results demonstrate the potential and feasibility of applying insect-inspired motion detection to robot control. PMID:22864467
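
    For context, a minimal NumPy sketch of the basic opponent Reichardt correlator the paper starts from: each arm low-pass-delays one photoreceptor signal and multiplies it with the undelayed neighbour, and the mirrored arm is subtracted. Running it on sine gratings of different temporal frequencies shows the ambiguity the paper addresses, since the mean response varies with frequency and contrast, not with velocity alone. The time constant and phase offset are assumed illustrative values.

        import numpy as np

        def lowpass(sig, dt, tau):
            # First-order low-pass acting as the correlator's delay filter.
            out = np.zeros_like(sig)
            a = dt / (tau + dt)
            for i in range(1, len(sig)):
                out[i] = out[i - 1] + a * (sig[i] - out[i - 1])
            return out

        def reichardt(left, right, dt=1e-3, tau=35e-3):
            # Opponent output: delayed-left * right minus delayed-right * left.
            return lowpass(left, dt, tau) * right - lowpass(right, dt, tau) * left

        t = np.arange(0.0, 1.0, 1e-3)
        for f_hz in (2.0, 8.0, 20.0):                       # same direction, varying speed
            left = np.sin(2 * np.pi * f_hz * t)
            right = np.sin(2 * np.pi * f_hz * t - np.pi / 2)  # quarter-cycle spatial offset
            print(f_hz, reichardt(left, right).mean())      # response is not linear in speed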

  8. Extending enhanced-vision capabilities by integration of advanced surface movement guidance and control systems (A-SMGCS)

    NASA Astrophysics Data System (ADS)

    Hecker, Peter; Doehler, Hans-Ullrich; Korn, Bernd; Ludwig, T.

    2001-08-01

    DLR has set up a number of projects to increase the flight safety and economics of aviation. Within these activities, one field of interest is the development and validation of systems for pilot assistance that increase the situational awareness of the aircrew. All flight phases ('gate-to-gate') are taken into account but, since approach, landing and taxiing are the most critical tasks in civil aviation, special emphasis is given to these operations. As presented in previous contributions to SPIE's Enhanced and Synthetic Vision Conferences, DLR's Institute of Flight Guidance has developed an Enhanced Vision System (EVS) as a tool assisting approach and landing in particular by improving the aircrew's situational awareness. The combination of forward-looking imaging sensors (such as EADS's HiVision millimeter wave radar), terrain data stored in on-board databases, plus information transmitted from the ground or other aircraft via data link, is used to help pilots handle these phases of flight, especially under adverse weather conditions. A second pilot assistance module being developed at DLR is the Taxi And Ramp Management And Control - Airborne System (TARMAC-AS), which is part of an Advanced Surface Movement Guidance and Control System (A-SMGCS). By means of on-board terrain databases and navigation data, a map display is generated that helps the pilot perform taxi operations. In addition to the pure map function, taxi instructions and other traffic can be displayed, as the aircraft is connected to TARMAC planning and TARMAC communication, navigation and surveillance modules on the ground via data link. Recent experiments with airline pilots have shown that the capabilities of taxi assistance can be extended significantly by integrating EVS and TARMAC-AS functionalities. In particular, the extended obstacle detection and warning coming from the Enhanced Vision System increases the safety of ground operations. The presented paper gives an overview

  9. Morphological features of the macerated cranial bones registered by the 3D vision system for potential use in forensic anthropology.

    PubMed

    Skrzat, Janusz; Sioma, Andrzej; Kozerska, Magdalena

    2013-01-01

    In this paper we present the potential use of a 3D vision system for registering features of macerated cranial bones. The applied 3D vision system collects height profiles of the object surface and from these data builds a three-dimensional image of the surface. This method proved accurate enough to capture anatomical details of the macerated bones. With the aid of the 3D vision system we generated images of the surface of a human calvaria, which was used for testing the system. The reconstruction visualized the imprints of the dural vascular system, the cranial sutures, and the three-layer structure of the cranial bones observed in cross-section. We conclude that the 3D vision system may deliver data which can enhance the estimation of sex from osteological material. PMID:24858457

  10. Machine vision guided sensor positioning system for leaf temperature assessment.

    PubMed

    Kim, Y; Ling, P P

    2001-01-01

    A sensor positioning system was developed for monitoring plants' well-being using a non-contact sensor. Image processing algorithms were developed to identify a target region on a plant leaf. A novel algorithm to recover view depth was developed by using a camera equipped with a computer-controlled zoom lens. The methodology has improved depth recovery resolution over a conventional monocular imaging technique. An algorithm was also developed to find a maximum enclosed circle on a leaf surface so the conical field-of-view of an infrared temperature sensor could be filled by the target without peripheral noise. The center of the enclosed circle and the estimated depth were used to define the sensor 3-D location for accurate plant temperature measurement. PMID:12088029
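
    A minimal sketch of one common way to realize the maximum-enclosed-circle step (the paper does not specify its algorithm, so the distance-transform approach below is an assumption): for a binary leaf mask, the distance transform gives each leaf pixel's distance to the nearest background pixel, and its maximum gives the centre and radius of the largest inscribed circle.

        import cv2
        import numpy as np

        def largest_enclosed_circle(leaf_mask):
            # leaf_mask: uint8 binary image, 255 on the leaf, 0 elsewhere.
            dist = cv2.distanceTransform(leaf_mask, cv2.DIST_L2, 5)
            _, radius, _, centre = cv2.minMaxLoc(dist)
            return centre, radius          # aim the IR sensor's cone here

        if __name__ == "__main__":
            mask = np.zeros((240, 320), np.uint8)
            cv2.ellipse(mask, (160, 120), (90, 50), 0, 0, 360, 255, -1)
            print(largest_enclosed_circle(mask))   # centre near (160, 120), radius ~50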

  11. Machine vision guided sensor positioning system for leaf temperature assessment

    NASA Technical Reports Server (NTRS)

    Kim, Y.; Ling, P. P.; Janes, H. W. (Principal Investigator)

    2001-01-01

    A sensor positioning system was developed for monitoring plants' well-being using a non-contact sensor. Image processing algorithms were developed to identify a target region on a plant leaf. A novel algorithm to recover view depth was developed by using a camera equipped with a computer-controlled zoom lens. The methodology has improved depth recovery resolution over a conventional monocular imaging technique. An algorithm was also developed to find a maximum enclosed circle on a leaf surface so the conical field-of-view of an infrared temperature sensor could be filled by the target without peripheral noise. The center of the enclosed circle and the estimated depth were used to define the sensor 3-D location for accurate plant temperature measurement.

  12. Machine vision system for inner-wall surface inspection

    NASA Astrophysics Data System (ADS)

    Zhuang, Bao Hua; Zhang, Wenwei

    1998-02-01

    A non-contact laser sensor based on circular light-sectioning imaging is developed to measure the size and profile of a pipe's inner wall. The sensor consists of a laser diode light source, an optical ring pattern generator and a CCD camera. The circular light from the optical ring pattern generator projects onto the pipe inner wall, which is then viewed by the CCD camera. An adaptive weighted-average filter, a multi-step subpixel technique, and half-Gaussian fitting are put forward to obtain the edge and the center of the circular image, in order to filter image noise and raise the resolution of the measuring system. The experimental results show that the principle is correct and the techniques are practicable.

  13. WELDSMART: A vision-based expert system for quality control

    NASA Astrophysics Data System (ADS)

    Andersen, Kristinn; Barnett, Robert Joel; Springfield, James F.; Cook, George E.

    1992-09-01

    This work was aimed at exploring means for utilizing computer technology in quality inspection and evaluation. Inspection of metallic welds was selected as the main application for this development and primary emphasis was placed on visual inspection, as opposed to other inspection methods, such as radiographic techniques. Emphasis was placed on methodologies with the potential for use in real-time quality control systems. Because quality evaluation is somewhat subjective, despite various efforts to classify discontinuities and standardize inspection methods, the task of using a computer for both inspection and evaluation was not trivial. The work started out with a review of the various inspection techniques that are used for quality control in welding. Among other observations from this review was the finding that most weld defects result in abnormalities that may be seen by visual inspection. This supports the approach of emphasizing visual inspection for this work. Quality control consists of two phases: (1) identification of weld discontinuities (some of which may be severe enough to be classified as defects), and (2) assessment or evaluation of the weld based on the observed discontinuities. Usually the latter phase results in a pass/fail judgement for the inspected piece. It is the conclusion of this work that the first of the above tasks, identification of discontinuities, is the most challenging one. It calls for sophisticated image processing and image analysis techniques, and frequently ad hoc methods have to be developed to identify specific features in the weld image. The difficulty of this task is generally not due to limited computing power. In most cases it was found that a modest personal computer or workstation could carry out most computations in a reasonably short time period. Rather, the algorithms and methods necessary for identifying weld discontinuities were in some cases limited. The fact that specific techniques were finally developed and

  14. WELDSMART: A vision-based expert system for quality control

    NASA Technical Reports Server (NTRS)

    Andersen, Kristinn; Barnett, Robert Joel; Springfield, James F.; Cook, George E.

    1992-01-01

    This work was aimed at exploring means for utilizing computer technology in quality inspection and evaluation. Inspection of metallic welds was selected as the main application for this development and primary emphasis was placed on visual inspection, as opposed to other inspection methods, such as radiographic techniques. Emphasis was placed on methodologies with the potential for use in real-time quality control systems. Because quality evaluation is somewhat subjective, despite various efforts to classify discontinuities and standardize inspection methods, the task of using a computer for both inspection and evaluation was not trivial. The work started out with a review of the various inspection techniques that are used for quality control in welding. Among other observations from this review was the finding that most weld defects result in abnormalities that may be seen by visual inspection. This supports the approach of emphasizing visual inspection for this work. Quality control consists of two phases: (1) identification of weld discontinuities (some of which may be severe enough to be classified as defects), and (2) assessment or evaluation of the weld based on the observed discontinuities. Usually the latter phase results in a pass/fail judgement for the inspected piece. It is the conclusion of this work that the first of the above tasks, identification of discontinuities, is the most challenging one. It calls for sophisticated image processing and image analysis techniques, and frequently ad hoc methods have to be developed to identify specific features in the weld image. The difficulty of this task is generally not due to limited computing power. In most cases it was found that a modest personal computer or workstation could carry out most computations in a reasonably short time period. Rather, the algorithms and methods necessary for identifying weld discontinuities were in some cases limited. The fact that specific techniques were finally developed and

  15. Novel method of calibration with restrictive constraints for stereo-vision system

    NASA Astrophysics Data System (ADS)

    Cui, Jiashan; Huo, Ju; Yang, Ming

    2016-05-01

    Regarding the calibration of a stereo vision measurement system, this paper puts forward a new bundle-adjustment-based stereo camera calibration method. Multiple-view geometric constraints and a bundle adjustment algorithm are used to accurately optimize the interior and exterior parameters of the cameras. A fixed relative-pose constraint between the cameras is introduced. We have improved the normal-equation construction of the traditional bundle adjustment so that each iteration optimizes only the exterior parameters of the two images of a pair, treating the two rigidly bound cameras as a single camera. The fixed relative constraint effectively increases the number of redundant observations in the adjustment and yields higher accuracy while reducing the dimension of the normal matrix, so each iteration takes less time. Simulation and actual experimental results show the superior performance of the proposed approach in terms of robustness and accuracy, and the approach can be extended to stereo vision systems with more than two cameras.
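
    The record does not give the exact parameterization, but the core idea - treating the rigidly coupled pair as one camera so the second camera's pose never adds unknowns to the normal equations - can be sketched as follows. All names, the fixed relative-pose values, and the use of SciPy's optimizer are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Assumed fixed relative pose of camera 2 w.r.t. camera 1 (from prior calibration).
R_REL = Rotation.from_rotvec([0.0, np.deg2rad(90.0), 0.0])
T_REL = np.array([0.5, 0.0, 0.0])

def project(K, R, t, X):
    """Pinhole projection of Nx3 points with rotation R and translation t."""
    x_cam = R.apply(X) + t
    x_img = (K @ x_cam.T).T
    return x_img[:, :2] / x_img[:, 2:3]

def residuals(params, K, obs1, obs2, n_pts):
    # Unknowns: pose of camera 1 (6 values) plus the 3D points only.
    rvec, t1 = params[:3], params[3:6]
    X = params[6:].reshape(n_pts, 3)
    R1 = Rotation.from_rotvec(rvec)
    # Camera 2's pose is derived, never estimated: the rigid pair acts
    # as a single camera, shrinking the normal matrix as in the paper.
    R2 = R_REL * R1
    t2 = R_REL.apply(t1) + T_REL
    r1 = project(K, R1, t1, X) - obs1
    r2 = project(K, R2, t2, X) - obs2
    return np.concatenate([r1.ravel(), r2.ravel()])

# x0 stacks an initial pose guess and point guesses:
# result = least_squares(residuals, x0, args=(K, obs1, obs2, n_pts))
```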

  16. A Flexible Fringe Projection Vision System with Extended Mathematical Model for Accurate Three-Dimensional Measurement

    PubMed Central

    Xiao, Suzhi; Tao, Wei; Zhao, Hui

    2016-01-01

    In order to acquire an accurate three-dimensional (3D) measurement, the traditional fringe projection technique applies complex and laborious procedures to compensate for the errors that exist in the vision system. However, the error sources in the vision system are very complex, such as lens distortion, lens defocus, and fringe pattern nonsinusoidality. Some errors cannot even be explained or rendered with clear expressions and are difficult to compensate directly as a result. In this paper, an approach is proposed that avoids the complex and laborious compensation procedure for error sources but still promises accurate 3D measurement. It is realized by a mathematical model extension technique. The parameters of the extended mathematical model for the 'phase to 3D coordinates' transformation are derived using a least-squares parameter estimation algorithm. In addition, a phase-coding method based on frequency analysis is proposed for absolute phase map retrieval on spatially isolated objects. The results demonstrate the validity and the accuracy of the proposed flexible fringe projection vision system on spatially continuous and discontinuous objects for 3D measurement. PMID:27136553
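
    The paper's extended model is not reproduced in this record. As a rough, simplified stand-in for a 'phase to 3D coordinates' mapping estimated by least squares, one can fit a per-pixel polynomial from unwrapped phase to height using calibration planes at known heights; the function names and the polynomial form are assumptions.

```python
import numpy as np

def fit_phase_to_height(phases, heights, order=3):
    """Per-pixel polynomial fit z = sum_k c_k * phi**k by least squares.

    phases:  (M, H, W) unwrapped phase maps of M calibration planes.
    heights: (M,) known plane heights.
    Returns coefficients of shape (order + 1, H, W).
    """
    M, H, W = phases.shape
    A = phases.reshape(M, -1)                 # (M, H*W), one column per pixel
    coeffs = np.empty((order + 1, H * W))
    # Slow reference loop over pixels; vectorize in practice.
    for j in range(H * W):
        V = np.vander(A[:, j], order + 1)     # Vandermonde matrix in phi
        coeffs[:, j], *_ = np.linalg.lstsq(V, heights, rcond=None)
    return coeffs.reshape(order + 1, H, W)

def phase_to_height(phi, coeffs):
    """Evaluate the fitted per-pixel polynomial on a new phase map."""
    order = coeffs.shape[0] - 1
    powers = np.stack([phi ** (order - k) for k in range(order + 1)])
    return np.sum(coeffs * powers, axis=0)
```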

  17. NETRA: A parallel architecture for integrated vision systems. 1: Architecture and organization

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok N.; Patel, Janak H.; Ahuja, Narendra

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is considered to be a system that uses vision algorithms from all levels of processing for a high level application (such as object recognition). A model of computation is presented for parallel processing for an IVS. Using the model, desired features and capabilities of a parallel architecture suitable for IVSs are derived. Then a multiprocessor architecture (called NETRA) is presented. This architecture is highly flexible without the use of complex interconnection schemes. The topology of NETRA is recursively defined and hence is easily scalable from small to large systems. Homogeneity of NETRA permits fault tolerance and graceful degradation under faults. It is a recursively defined tree-type hierarchical architecture where each of the leaf nodes consists of a cluster of processors connected with a programmable crossbar with selective broadcast capability to provide for desired flexibility. A qualitative evaluation of NETRA is presented. Then general schemes are described to map parallel algorithms onto NETRA. Algorithms are classified according to their communication requirements for parallel processing. An extensive analysis of inter-cluster communication strategies in NETRA is presented, and parameters affecting performance of parallel algorithms when mapped on NETRA are discussed. Finally, a methodology to evaluate performance of algorithms on NETRA is described.

  18. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    PubMed

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-01-01

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image consists only of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems. PMID:26184211

  19. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    PubMed Central

    Ehsan, Shoaib; Clark, Adrian F.; ur Rehman, Naveed; McDonald-Maier, Klaus D.

    2015-01-01

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image consists only of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems. PMID:26184211
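
    For reference, the serial recursion that the paper's hardware algorithms decompose is the standard two-pass one, shown here in plain Python (the row-parallel hardware decomposition itself is not reproduced):

```python
import numpy as np

def integral_image(img: np.ndarray) -> np.ndarray:
    """Serial recursive computation: a cumulative sum along each row,
    then a cumulative sum down the columns, with sums taken as zero
    outside the image."""
    h, w = img.shape
    s = np.zeros((h, w), dtype=np.int64)   # running row sums
    ii = np.zeros((h, w), dtype=np.int64)  # integral image
    for y in range(h):
        for x in range(w):
            s[y, x] = (s[y, x - 1] if x > 0 else 0) + int(img[y, x])
            ii[y, x] = (ii[y - 1, x] if y > 0 else 0) + s[y, x]
    return ii

def box_sum(ii, top, left, bottom, right):
    """Sum over any rectangle in O(1) - the property SURF exploits."""
    a = ii[top - 1, left - 1] if top > 0 and left > 0 else 0
    b = ii[top - 1, right] if top > 0 else 0
    c = ii[bottom, left - 1] if left > 0 else 0
    return ii[bottom, right] - b - c + a
```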

  20. Science requirements for PRoViScout, a robotics vision system for planetary exploration

    NASA Astrophysics Data System (ADS)

    Hauber, E.; Pullan, D.; Griffiths, A.; Paar, G.

    2011-10-01

    The robotic exploration of planetary surfaces, including missions of interest for geobiology (e.g., ExoMars), will be the precursor of human missions within the next few decades. Such exploration will require platforms which are much more self-reliant and capable of exploring long distances with limited ground support in order to advance planetary science objectives in a timely manner. The key to this objective is the development of planetary robotic onboard vision processing systems, which will enable the autonomous on-site selection of scientific and mission-strategic targets, and the access thereto. The EU-funded research project PRoViScout (Planetary Robotics Vision Scout) is designed to develop a unified and generic approach for robotic vision onboard processing, namely the combination of navigation and scientific target selection. Any such system needs to be "trained", i.e. it requires (a) scientific requirements which the system must address, and (b) a database of scientifically representative target scenarios which can be analysed. We present our preliminary list of science requirements, based on previous experience from landed Mars missions.

  1. The study of dual camera 3D coordinate vision measurement system using a special probe

    NASA Astrophysics Data System (ADS)

    Liu, Shugui; Peng, Kai; Zhang, Xuefei; Zhang, Haifeng; Huang, Fengshan

    2006-11-01

    Due to its high precision and convenient operation, the vision coordinate measuring machine with a single probe has become a research focus in the vision industry. In general such a system can be set up conveniently with just one CCD camera and a probe. However, the price of the system surges too high to be acceptable when top-performance hardware, such as the CCD camera and image capture card, must be used to obtain high axis-oriented measurement precision. In this paper, a new dual-CCD-camera vision coordinate measurement system based on the redundancy principle is proposed to achieve high precision at moderate cost. Two CCD cameras are placed with their optical axes at approximately 90 degrees to build the system, so that two sub-systems are formed, each by one CCD camera and the probe. With the help of the probe the interior and exterior parameters of the cameras are first calibrated, and the system using the redundancy technique is then set up. Because the axis-oriented error, which is large and always exists in a single-camera system, is eliminated within the two sub-systems, the system attains high measurement precision. The results of experiments compared with those from a CMM show that the proposed system has better stability and precision, with an uncertainty within +/-0.1 mm in the x, y and z directions over a distance of 2 m using two common CCD cameras.

  2. ADVANCED SOLID STATE SENSORS FOR VISION 21 SYSTEMS

    SciTech Connect

    C.D. Stinespring

    2005-04-28

    Silicon carbide (SiC) is a high temperature semiconductor with the potential to meet the gas and temperature sensor needs in both present and future power generation systems. These devices have been and are currently being investigated for a variety of high temperature sensing applications. These include leak detection, fire detection, environmental control, and emissions monitoring. Electronically these sensors can be very simple Schottky diode structures that rely on gas-induced changes in electrical characteristics at the metal-semiconductor interface. In these devices, thermal stability of the interfaces has been shown to be an essential requirement for improving and maintaining sensor sensitivity and lifetime. In this report, we describe device fabrication and characterization studies relevant to the development of SiC based gas and temperature sensors. Specifically, we have investigated the use of periodically stepped surfaces to improve the thermal stability of the metal-semiconductor interface for simple Pd-SiC Schottky diodes. These periodically stepped surfaces have atomically flat terraces on the order of 200 nm wide separated by steps of 1.5 nm height. It should be noted that 1.5 nm is the unit cell height for the 6H-SiC (0001) substrates used in these studies. These surfaces contrast markedly with the "standard" SiC surfaces normally used in device fabrication. Obvious scratches and pits as well as subsurface defects characterize these standard surfaces. This research involved ultrahigh vacuum deposition and characterization studies to investigate the thermal stability of Pd-SiC Schottky diodes on both the stepped and standard surfaces, high temperature electrical characterization of these device structures, and high temperature electrical characterization of diodes under wet and dry oxidizing conditions. To our knowledge, these studies have yielded the first electrical characterization of actual sensor device structures fabricated under ultrahigh

  3. A Hybrid Synthetic Vision System for the Tele-operation of Unmanned Vehicles

    NASA Technical Reports Server (NTRS)

    Delgado, Frank; Abernathy, Mike

    2004-01-01

    A system called SmartCam3D (SC3D) has been developed to provide enhanced situational awareness for operators of a remotely piloted vehicle. SC3D is a Hybrid Synthetic Vision System (HSVS) that combines live sensor data with information from a Synthetic Vision System (SVS). By combining the dual information sources, the operators are afforded the advantages of each approach. The live sensor system provides real-time information for the region of interest. The SVS provides information rich visuals that will function under all weather and visibility conditions. Additionally, the combination of technologies allows the system to circumvent some of the limitations from each approach. Video sensor systems are not very useful when visibility conditions are hampered by rain, snow, sand, fog, and smoke, while a SVS can suffer from data freshness problems. Typically, an aircraft or satellite flying overhead collects the data used to create the SVS visuals. The SVS data could have been collected weeks, months, or even years ago. To that extent, the information from an SVS visual could be outdated and possibly inaccurate. SC3D was used in the remote cockpit during flight tests of the X-38 132 and 131R vehicles at the NASA Dryden Flight Research Center. SC3D was also used during the operation of military Unmanned Aerial Vehicles. This presentation will provide an overview of the system, the evolution of the system, the results of flight tests, and future plans. Furthermore, the safety benefits of the SC3D over traditional and pure synthetic vision systems will be discussed.

  4. FPGA-Based Multimodal Embedded Sensor System Integrating Low- and Mid-Level Vision

    PubMed Central

    Botella, Guillermo; Martín H., José Antonio; Santos, Matilde; Meyer-Baese, Uwe

    2011-01-01

    Motion estimation is a low-level vision task that is especially relevant due to its wide range of applications in the real world. Many of the best motion estimation algorithms include some of the features that are found in mammalians, which would demand huge computational resources and therefore are not usually available in real time. In this paper we present a novel bioinspired sensor based on the synergy between optical flow and orthogonal variant moments. The bioinspired sensor has been designed for Very Large Scale Integration (VLSI) using properties of the mammalian cortical motion pathway. This sensor combines low-level primitives (optical flow and image moments) in order to produce a mid-level vision abstraction layer. The results are described through experiments showing the validity of the proposed system and an analysis of the computational resources and performance of the applied algorithms. PMID:22164069
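
    The exact orthogonal variant moments are not given in this record. As a software analogue of combining the two low-level primitives, the following sketch pairs OpenCV's Farneback optical flow with ordinary geometric moments; the descriptor layout is an illustrative assumption.

```python
import cv2
import numpy as np

def motion_descriptor(prev_gray: np.ndarray, curr_gray: np.ndarray):
    """Combine two low-level primitives - dense optical flow and image
    moments - into a small mid-level motion descriptor."""
    # Dense optical flow between consecutive grayscale frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    magnitude = np.linalg.norm(flow, axis=2)
    # Geometric moments of the motion-magnitude image (a stand-in for the
    # orthogonal variant moments used by the authors).
    m = cv2.moments(magnitude.astype(np.float32))
    cx = m["m10"] / (m["m00"] + 1e-9)
    cy = m["m01"] / (m["m00"] + 1e-9)
    return {"mean_speed": float(magnitude.mean()),
            "motion_centroid": (cx, cy)}
```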

  5. Complete Vision-Based Traffic Sign Recognition Supported by an I2V Communication System

    PubMed Central

    García-Garrido, Miguel A.; Ocaña, Manuel; Llorca, David F.; Arroyo, Estefanía; Pozuelo, Jorge; Gavilán, Miguel

    2012-01-01

    This paper presents a complete traffic sign recognition system, based on a vision sensor onboard a moving vehicle, which detects and recognizes up to one hundred of the most important road signs, including circular and triangular signs. A restricted Hough transform is used as the detection method on information extracted from contour images, while the proposed recognition system is based on Support Vector Machines (SVM). A novel solution to the problem of discarding detected signs that do not pertain to the host road is proposed, using infrastructure-to-vehicle (I2V) communication and a stereo vision sensor. Furthermore, the outputs provided by the vision sensor and the data supplied by the CAN bus and a GPS sensor are combined to obtain the global position of the detected traffic signs, which is used to identify a traffic sign in the I2V communication. The paper presents extensive tests in real driving conditions, both day and night, in which an average detection rate over 95% and an average recognition rate around 93% were obtained, with an average runtime of 35 ms that allows real-time performance. PMID:22438704

  6. Infrared machine vision system for the automatic detection of olive fruit quality.

    PubMed

    Guzmán, Elena; Baeten, Vincent; Pierna, Juan Antonio Fernández; García-Mesa, José A

    2013-11-15

    External quality is an important factor in the extraction of olive oil and the marketing of olive fruits. The appearance and presence of external damage are factors that influence the quality of the oil extracted and the perception of consumers, determining the level of acceptance prior to purchase in the case of table olives. The aim of this paper is to report on artificial vision techniques developed for the online estimation of olive quality and to assess the effectiveness of these techniques in evaluating quality based on detecting external defects. This method of classifying olives according to the presence of defects is based on an infrared (IR) vision system. Images of defects were acquired using a digital monochrome camera with band-pass filters in the near-infrared (NIR). The original images were processed using segmentation algorithms, edge detection and pixel value intensity to classify the whole fruit. The detection of the defect involved a pixel classification procedure based on nonparametric models of the healthy and defective areas of olives. Classification tests were performed on olives to assess the effectiveness of the proposed method. This research showed that the IR vision system is a useful technology for the automatic assessment of olives that has the potential for use in offline inspection and for online sorting for defects and the presence of surface damage, easily distinguishing those that do not meet minimum quality requirements. PMID:24148491

  7. Complete vision-based traffic sign recognition supported by an I2V communication system.

    PubMed

    García-Garrido, Miguel A; Ocaña, Manuel; Llorca, David F; Arroyo, Estefanía; Pozuelo, Jorge; Gavilán, Miguel

    2012-01-01

    This paper presents a complete traffic sign recognition system, based on a vision sensor onboard a moving vehicle, which detects and recognizes up to one hundred of the most important road signs, including circular and triangular signs. A restricted Hough transform is used as the detection method on information extracted from contour images, while the proposed recognition system is based on Support Vector Machines (SVM). A novel solution to the problem of discarding detected signs that do not pertain to the host road is proposed, using infrastructure-to-vehicle (I2V) communication and a stereo vision sensor. Furthermore, the outputs provided by the vision sensor and the data supplied by the CAN bus and a GPS sensor are combined to obtain the global position of the detected traffic signs, which is used to identify a traffic sign in the I2V communication. The paper presents extensive tests in real driving conditions, both day and night, in which an average detection rate over 95% and an average recognition rate around 93% were obtained, with an average runtime of 35 ms that allows real-time performance. PMID:22438704
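
    The authors' restricted Hough transform and trained SVM are not available here; a generic OpenCV/scikit-learn pipeline in the same spirit (circle detection followed by SVM classification) might look like this, with all parameter values illustrative.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def detect_circular_candidates(gray: np.ndarray):
    """Detect circular sign candidates in a grayscale frame."""
    blurred = cv2.GaussianBlur(gray, (5, 5), 1.5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=30, param1=100, param2=40,
                               minRadius=8, maxRadius=60)
    return [] if circles is None else np.round(circles[0]).astype(int)

def classify(gray, circles, clf: SVC, patch=32):
    """Crop each candidate, resize, and classify with a trained SVM.

    Assumes clf was trained on flattened patch x patch grayscale crops."""
    labels = []
    for x, y, r in circles:
        roi = gray[max(0, y - r):y + r, max(0, x - r):x + r]
        if roi.size == 0:
            continue
        feat = cv2.resize(roi, (patch, patch)).ravel().astype(np.float32) / 255.0
        labels.append(clf.predict(feat[None, :])[0])
    return labels
```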

  8. SAD-Based Stereo Vision Machine on a System-on-Programmable-Chip (SoPC)

    PubMed Central

    Zhang, Xiang; Chen, Zhangwei

    2013-01-01

    This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices such as DDRII, SSRAM, Flash, etc., by IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, a configurable soft IP core in charge of managing the image buffer and the users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled with the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process stereo pair images of many different sizes. The maximum image size is up to 512 K pixels. The machine is designed to focus on real-time stereo vision applications, and it offers good performance and high efficiency in real time. With a hardware FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained per second with a 5 × 5 matching window and a maximum of 64 disparity pixels. PMID:23459385

  9. SAD-based stereo vision machine on a System-on-Programmable-Chip (SoPC).

    PubMed

    Zhang, Xiang; Chen, Zhangwei

    2013-01-01

    This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices such as DDRII, SSRAM, Flash, etc., by IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, a configurable soft IP core in charge of managing the image buffer and the users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled with the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process stereo pair images of many different sizes. The maximum image size is up to 512 K pixels. The machine is designed to focus on real-time stereo vision applications, and it offers good performance and high efficiency in real time. With a hardware FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained per second with a 5 × 5 matching window and a maximum of 64 disparity pixels. PMID:23459385
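
    For reference, the SAD matching cost that the FPGA evaluates in hardware can be written down directly; the following NumPy sketch is a software restatement (a 5 × 5 window and 64 disparities, as in the abstract), not the hardware design itself.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sad_disparity(left, right, max_disp=64, win=5):
    """Brute-force SAD block matching on rectified grayscale images.

    For each pixel, choose the disparity d minimizing the sum of absolute
    differences over a win x win window."""
    h, w = left.shape
    left = left.astype(np.float64)
    right = right.astype(np.float64)
    cost = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        diff = np.abs(left[:, d:] - right[:, :w - d])
        # uniform_filter is a box filter, so this is the windowed SAD
        # (up to the constant factor win*win, which leaves argmin unchanged).
        cost[d, :, d:] = uniform_filter(diff, size=win)
    return np.argmin(cost, axis=0).astype(np.uint8)
```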

  10. System for synthetic vision and augmented reality in future flight decks

    NASA Astrophysics Data System (ADS)

    Behringer, Reinhold; Tam, Clement K.; McGee, Joshua H.; Sundareswaran, Venkataraman; Vassiliou, Marius S.

    2000-06-01

    Rockwell Science Center is investigating novel human-computer interface techniques for enhancing the situational awareness in future flight decks. One aspect is to provide intuitive displays which provide the vital information and the spatial awareness by augmenting the real world with an overlay of relevant information registered to the real world. Such Augmented Reality (AR) techniques can be employed during bad weather scenarios to permit flying in Visual Flight Rules (VFR) in conditions which would normally require Instrumental Flight Rules (IFR). These systems could easily be implemented on head-up displays (HUD). The advantage of AR systems vs. purely synthetic vision (SV) systems is that the pilot can relate the information overlay to real objects in the world, whereas SV systems provide a constant virtual view, where inconsistencies can hardly be detected. The development of components for such a system led to a demonstrator implemented on a PC. A camera grabs video images which are overlaid with registered information. Orientation of the camera is obtained from an inclinometer and a magnetometer; position is acquired from GPS. In a possible implementation in an airplane, the on-board attitude information can be used for obtaining correct registration. If visibility is sufficient, computer vision modules can be used to fine-tune the registration by matching visual clues with database features. Such technology would be especially useful for landing approaches. The current demonstrator provides a frame rate of 15 fps, using a live video feed as background and an overlay of avionics symbology in the foreground. In addition, terrain rendering from a 1 arc sec digital elevation model database can be overlaid to provide synthetic vision in case of limited visibility. For true outdoor testing (on ground level), the system has been implemented on a wearable computer.

  11. An Automated Mouse Tail Vascular Access System by Vision and Pressure Feedback

    PubMed Central

    Chang, Yen-Chi; Berry-Pusey, Brittany; Yasin, Rashid; Vu, Nam; Maraglia, Brandon; Chatziioannou, Arion X.; Tsao, Tsu-Chin

    2015-01-01

    This paper develops an automated vascular access system (A-VAS) with novel vision-based vein and needle detection methods and real-time pressure feedback for murine drug delivery. Mouse tail vein injection is a routine but critical step for preclinical imaging applications. Due to the small vein diameter and external disturbances such as tail hair, pigmentation, and scales, identifying the vein location is difficult, and manual injections usually result in poor repeatability. To improve injection accuracy, consistency, safety, and processing time, A-VAS was developed to overcome difficulties in noise rejection for vein detection, robustness of needle tracking, and integration of visual servoing with the mechatronics system. PMID:26478693

  12. Air and Water System (AWS) Design and Technology Selection for the Vision for Space Exploration

    NASA Technical Reports Server (NTRS)

    Jones, Harry; Kliss, Mark

    2005-01-01

    This paper considers technology selection for the crew air and water recycling systems to be used in long duration human space exploration. The specific objectives are to identify the most probable air and water technologies for the vision for space exploration and to identify the alternate technologies that might be developed. The approach is to conduct a preliminary first cut systems engineering analysis, beginning with the Air and Water System (AWS) requirements and the system mass balance, and then define the functional architecture, review the International Space Station (ISS) technologies, and discuss alternate technologies. The life support requirements for air and water are well known. The results of the mass flow and mass balance analysis help define the system architectural concept. The AWS includes five subsystems: Oxygen Supply, Condensate Purification, Urine Purification, Hygiene Water Purification, and Clothes Wash Purification. AWS technologies have been evaluated in the life support design for ISS node 3, and in earlier space station design studies, in proposals for the upgrade or evolution of the space station, and in studies of potential lunar or Mars missions. The leading candidate technologies for the vision for space exploration are those planned for Node 3 of the ISS. The ISS life support was designed to utilize Space Station Freedom (SSF) hardware to the maximum extent possible. The SSF final technology selection process, criteria, and results are discussed. Would it be cost-effective for the vision for space exploration to develop alternate technology? This paper will examine this and other questions associated with AWS design and technology selection.

  13. Vision problems

    MedlinePlus

    ... which nothing can be seen) Vision loss and blindness are the most severe vision problems. Causes Vision ... that look faded. The most common cause of blindness in people over age 60. Eye infection, inflammation, ...

  14. Broad Band Antireflection Coating on Zinc Sulphide Window for Shortwave infrared cum Night Vision System

    NASA Astrophysics Data System (ADS)

    Upadhyaya, A. S.; Bandyopadhyay, P. K.

    2012-11-01

    In state-of-the-art technology, integrated devices are widely used for their potential advantages. A common system reduces weight as well as the total space occupied by its various parts. In state-of-the-art surveillance systems, an integrated SWIR and night vision system is used for more accurate identification of objects. In this system a common optical window is used which passes the radiation of both regions, and the two spectral regions are then separated into two channels. ZnS is a good choice for a common window, as it transmits both regions of interest, night vision (650-850 nm) as well as SWIR (0.9-1.7 μm). In this work a broadband antireflection coating is developed on a ZnS window to enhance the transmission. This seven-layer coating is designed using the flip-flop design method. After obtaining the final design, minor refinement is done using the simplex method. A SiO2 and TiO2 coating-material combination is used for this work. The coating is fabricated by a physical vapour deposition process, with the materials evaporated by an electron-beam gun. The average transmission of the both-side-coated substrate from 660 to 1700 nm is 95%. The coating also acts as a contrast enhancement filter for night vision devices, as it reflects the 590-660 nm region. Several trials were conducted to check the coating repeatability, and the transmission variation between trials is small and within the tolerance limit. The coating also passes environmental tests for stability.

  15. Aspects of Synthetic Vision Display Systems and the Best Practices of the NASA's SVS Project

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Kramer, Lynda J.; Jones, Denise R.; Young, Steven D.; Arthur, Jarvis J.; Prinzel, Lawrence J.; Glaab, Louis J.; Harrah, Steven D.; Parrish, Russell V.

    2008-01-01

    NASA's Synthetic Vision Systems (SVS) Project conducted research aimed at eliminating visibility-induced errors and low-visibility conditions as causal factors in civil aircraft accidents while enabling the operational benefits of clear-day flight operations regardless of actual outside visibility. SVS takes advantage of many enabling technologies to achieve this capability including, for example, the Global Positioning System (GPS), data links, radar, imaging sensors, geospatial databases, advanced display media and three-dimensional video graphics processors. Integration of these technologies to achieve the SVS concept provides pilots with high-integrity information that improves situational awareness with respect to terrain, obstacles, traffic, and flight path. This paper attempts to emphasize the system aspects of SVS - true systems, rather than just terrain on a flight display - and to document from an historical viewpoint many of the best practices that evolved during the SVS Project from the perspective of some of the NASA researchers most heavily involved in its execution. The Integrated SVS Concepts are envisagements of what production-grade Synthetic Vision systems might, or perhaps should, be in order to provide the desired functional capabilities that eliminate low visibility as a causal factor in accidents and enable clear-day operational benefits regardless of visibility conditions.

  16. A vision-based dynamic rotational angle measurement system for large civil structures.

    PubMed

    Lee, Jong-Jae; Ho, Hoai-Nam; Lee, Jong-Han

    2012-01-01

    In this paper, we propose a vision-based rotational angle measurement system for large-scale civil structures. Although several rotation angle measurement systems were introduced during the last decade, they often required complex and expensive equipment. Therefore, alternative effective solutions with high resolution are in great demand. The proposed system consists of commercial PCs, commercial camcorders, low-cost frame grabbers, and a wireless LAN router. The rotation angle is calculated using image processing techniques with pre-measured calibration parameters. Several laboratory tests were conducted to verify the performance of the proposed system. Compared with a commercial rotation angle measurement system, the results showed very good agreement, with an error of less than 1.0% in all test cases. Furthermore, several tests were conducted on a five-story modal testing tower with a hybrid mass damper to experimentally verify the feasibility of the proposed system. PMID:22969348

  17. A Vision-Based Dynamic Rotational Angle Measurement System for Large Civil Structures

    PubMed Central

    Lee, Jong-Jae; Ho, Hoai-Nam; Lee, Jong-Han

    2012-01-01

    In this paper, we propose a vision-based rotational angle measurement system for large-scale civil structures. Although several rotation angle measurement systems were introduced during the last decade, they often required complex and expensive equipment. Therefore, alternative effective solutions with high resolution are in great demand. The proposed system consists of commercial PCs, commercial camcorders, low-cost frame grabbers, and a wireless LAN router. The rotation angle is calculated using image processing techniques with pre-measured calibration parameters. Several laboratory tests were conducted to verify the performance of the proposed system. Compared with a commercial rotation angle measurement system, the results showed very good agreement, with an error of less than 1.0% in all test cases. Furthermore, several tests were conducted on a five-story modal testing tower with a hybrid mass damper to experimentally verify the feasibility of the proposed system. PMID:22969348
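
    The record does not detail the image processing; one common way to recover an in-plane rotation angle from tracked marker points on the structure is a least-squares rigid fit (Kabsch), sketched below under that assumption.

```python
import numpy as np

def rotation_angle_deg(ref_pts: np.ndarray, cur_pts: np.ndarray) -> float:
    """In-plane rotation between two matched 2-D point sets (N x 2),
    e.g. marker corners tracked across frames, via a least-squares
    (Kabsch) fit of a rigid transform."""
    a = ref_pts - ref_pts.mean(axis=0)
    b = cur_pts - cur_pts.mean(axis=0)
    # Covariance of the centered sets; its SVD gives the best rotation.
    h = a.T @ b
    u, _, vt = np.linalg.svd(h)
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:   # guard against reflections
        vt[-1] *= -1
        r = vt.T @ u.T
    return float(np.degrees(np.arctan2(r[1, 0], r[0, 0])))
```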

  18. Approach to reduce the computational image processing requirements for a computer vision system using sensor preprocessing and the Hotelling transform

    NASA Astrophysics Data System (ADS)

    Schei, Thomas R.; Wright, Cameron H. G.; Pack, Daniel J.

    2005-03-01

    We describe a new development approach to computer vision for compact, low-power, real-time systems such as mobile robots. We take advantage of preprocessing in a biomimetic vision sensor and employ a computational strategy using subspace methods and the Hotelling transform in an effort to reduce the computational imaging load. While the combination provides an overall reduction in the computational imaging requirements, the two components are not yet optimized for each other and require additional investigation.
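
    The Hotelling transform is the discrete Karhunen-Loeve transform, i.e., principal component analysis; a minimal NumPy sketch of the subspace projection that cuts the imaging load:

```python
import numpy as np

def hotelling_transform(images: np.ndarray, k: int):
    """Project flattened images onto their top-k principal components.

    images: (N, D) matrix, one flattened image per row.
    Returns (codes, mean, basis) so that images ~= codes @ basis + mean.
    """
    mean = images.mean(axis=0)
    centered = images - mean
    # Eigenvectors of the covariance matrix via SVD of the data matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                  # (k, D) orthonormal rows
    codes = centered @ basis.T      # (N, k) reduced representation
    return codes, mean, basis

# Downstream vision routines then operate on the k-dimensional codes
# instead of the full D-dimensional images, reducing the processing load.
```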

  19. Night vision imaging system design, integration and verification in spacecraft vacuum thermal test

    NASA Astrophysics Data System (ADS)

    Shang, Yonghong; Wang, Jing; Gong, Zhe; Li, Xiyuan; Pei, Yifei; Bai, Tingzhu; Zhen, Haijing

    2015-08-01

    The purposes of a spacecraft vacuum thermal test are to characterize the thermal control systems of the spacecraft and its components in the cruise configuration and to allow early retirement of risks associated with mission-specific and novel thermal designs. The orbital heat flux is simulated by infrared lamps, an infrared cage or electric heaters. As infrared cages and electric heaters do not emit visible light, and infrared lamps emit only limited visible light, an ordinary camera cannot operate in the low luminous density of the test. Moreover, some special instruments such as satellite-borne infrared sensors are sensitive to visible light, so supplementary lighting cannot be used during the test. To improve fine monitoring of the spacecraft and the presentation of test progress under ultra-low luminous density, a night vision imaging system was designed and integrated by BISEE. The system consists of a high-gain image-intensifier ICCD camera, an assistant luminance system, a glare protection system, a thermal control system and a computer control system. Multi-frame accumulation target detection technology is adopted for high-quality image recognition in the captive test. The optical, mechanical and electrical systems are designed and integrated to be highly adaptable to the vacuum environment. A molybdenum/polyimide thin-film electric heater controls the temperature of the ICCD camera. Performance validation tests showed that the system can operate under a vacuum thermal environment of 1.33×10^-3 Pa and a 100 K shroud temperature in the space environment simulator, and its working temperature was maintained at 5 °C during the two-day test. The night vision imaging system achieved a video quality of 60 lp/mm resolving power.

  20. Microwave vision for robots

    NASA Technical Reports Server (NTRS)

    Lewandowski, Leon; Struckman, Keith

    1994-01-01

    Microwave Vision (MV), a concept originally developed in 1985, could play a significant role in the solution to robotic vision problems. Originally our Microwave Vision concept was based on a pattern matching approach employing computer based stored replica correlation processing. Artificial Neural Network (ANN) processor technology offers an attractive alternative to the correlation processing approach, namely the ability to learn and to adapt to changing environments. This paper describes the Microwave Vision concept, some initial ANN-MV experiments, and the design of an ANN-MV system that has led to a second patent disclosure in the robotic vision field.

  1. Computer vision

    SciTech Connect

    Not Available

    1982-01-01

    This paper discusses material from areas such as artificial intelligence, psychology, computer graphics, and image processing. The intent is to assemble a selection of this material in a form that will serve both as a senior/graduate-level academic text and as a useful reference to those building vision systems. This book has a strong artificial intelligence flavour, emphasising the belief that both the intrinsic image information and the internal model of the world are important in successful vision systems. The book is organised into four parts, based on descriptions of objects at four different levels of abstraction: generalised images (images and image-like entities); segmented images (images organised into subimages that are likely to correspond to interesting objects); geometric structures (quantitative models of image and world structures); and relational structures (complex symbolic descriptions of image and world structures). The book contains author and subject indexes.

  2. Visual system plasticity in mammals: the story of monocular enucleation-induced vision loss

    PubMed Central

    Nys, Julie; Scheyltjens, Isabelle; Arckens, Lutgarde

    2015-01-01

    The groundbreaking work of Hubel and Wiesel in the 1960s on ocular dominance plasticity instigated many studies of the visual system of mammals, enriching our understanding of how the development of its structure and function depends on high-quality visual input through both eyes. These studies have mainly employed lid suturing, dark rearing and eye patching applied to different species to reduce or impair visual input, and have created extensive knowledge on binocular vision. However, not all aspects and types of plasticity in the visual cortex have been covered in full detail. In that regard, a more drastic deprivation method like enucleation, leading to complete vision loss, appears useful as it has more widespread effects on the afferent visual pathway and even on non-visual brain regions. One-eyed vision due to monocular enucleation (ME) profoundly affects the contralateral retinorecipient subcortical and cortical structures, thereby creating a powerful means to investigate cortical plasticity phenomena in which binocular competition has no vote. In this review, we will present current knowledge about the specific application of ME as an experimental tool to study visual and cross-modal brain plasticity and compare early postnatal stages up into adulthood. The structural and physiological consequences of this type of extensive sensory loss as documented and studied in several animal species and human patients will be discussed. We will summarize how ME studies have been instrumental to our current understanding of the differentiation of sensory systems and how the structure and function of cortical circuits in mammals are shaped in response to such an extensive alteration in experience. In conclusion, we will highlight future perspectives and the clinical relevance of adding ME to the list of more longstanding deprivation models in visual system research. PMID:25972788

  3. Laser cutting of irregular shape object based on stereo vision laser galvanometric scanning system

    NASA Astrophysics Data System (ADS)

    Qi, Li; Zhang, Yixin; Wang, Shun; Tang, Zhiqiang; Yang, Huan; Zhang, Xuping

    2015-05-01

    Irregularly shaped objects with different three-dimensional (3D) appearances are difficult to shape into a customized uniform pattern with current laser machining approaches. A laser galvanometric scanning (LGS) system could be a potential candidate, since it can easily achieve path-adjustable laser shaping. However, without knowing the actual 3D topography of the object, the processing result may still suffer from 3D shape distortion. It is desirable to have a versatile auxiliary tool that is capable of generating a 3D-adjusted laser processing path by measuring the 3D geometry of those irregularly shaped objects. This paper proposes the stereo vision laser galvanometric scanning system (SLGS), which takes advantage of both the stereo vision solution and the conventional LGS system. The 3D geometry of the object obtained by the stereo cameras is used to guide the scanning galvanometers for 3D-shape-adjusted laser processing. In order to achieve precise visual-servoed laser fabrication, these two independent components are integrated through a system calibration method using a plastic thin-film target. The flexibility of SLGS has been experimentally demonstrated by cutting duck feathers for badminton shuttle manufacture.

  4. Real-time Enhancement, Registration, and Fusion for a Multi-Sensor Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.

    2006-01-01

    Over the last few years NASA Langley Research Center (LaRC) has been developing an Enhanced Vision System (EVS) to aid pilots while flying in poor visibility conditions. The EVS captures imagery using two infrared video cameras. The cameras are placed in an enclosure that is mounted and flown forward-looking underneath the NASA LaRC ARIES 757 aircraft. The data streams from the cameras are processed in real time and displayed on monitors on board the aircraft. With proper processing the camera system can provide better-than-human-observed imagery, particularly during poor visibility conditions. However, obtaining this goal requires several different stages of processing, including enhancement, registration, and fusion, and specialized processing hardware for real-time performance. We are using a real-time implementation of the Retinex algorithm for image enhancement, affine transformations for registration, and weighted sums to perform fusion. All of the algorithms are executed on a single TI DM642 digital signal processor (DSP) clocked at 720 MHz. The image processing components were added to the EVS system, tested, and demonstrated during flight tests in August and September of 2005. In this paper we briefly discuss the EVS image processing hardware and algorithms. We then discuss implementation issues and show examples of the results obtained during flight tests. Keywords: enhanced vision system, image enhancement, retinex, digital signal processing, sensor fusion
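
    The registration and fusion stages map to a few OpenCV calls; the sketch below assumes a precomputed 2x3 affine transform between the two IR bands and omits the Retinex enhancement stage.

```python
import cv2
import numpy as np

def register_and_fuse(short_wave: np.ndarray, long_wave: np.ndarray,
                      affine: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Register one IR band onto the other with a 2x3 affine transform,
    then fuse by weighted sum (the enhancement stage is omitted here)."""
    h, w_px = long_wave.shape[:2]
    aligned = cv2.warpAffine(short_wave, affine, (w_px, h))
    # Weighted-sum fusion: out = w*aligned + (1-w)*long_wave.
    return cv2.addWeighted(aligned, w, long_wave, 1.0 - w, 0.0)

# The 2x3 `affine` matrix would come from offline camera-to-camera
# calibration, e.g. cv2.estimateAffine2D on matched control points.
```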

  5. Down-to-the-runway enhanced flight vision system (EFVS) approach test results

    NASA Astrophysics Data System (ADS)

    McKinley, John B.; Heidhausen, Eric; Cramer, James A.; Krone, Norris J., Jr.

    2008-04-01

    Flight tests were conducted at Cambridge-Dorchester Airport (KCGE) and Easton Municipal Airport / Newnam Field (KESN) in a Cessna 402B aircraft using a head-up display (HUD) and a Kollsman Enhanced Vision System (EVS-I) infrared camera. These tests were sponsored by the MITRE Corporation's Center for Advanced Aviation System Development (CAASD) and the Federal Aviation Administration. Imagery from the EVS-I infrared camera, HUD guidance cues, and out-the-window video were each recorded separately at an engineering workstation for each approach, roll-out, and taxi operation. The EVS-I imagery was displayed on the HUD with guidance cues generated by the mission computer. The inertial flight path data were also recorded separately. Enhanced Flight Vision System (EFVS) approaches were conducted from the final approach fix to runway flare, touchdown, roll-out and taxi using the HUD and EVS-I sensor as the only visual reference. Flight conditions included a two-pilot crew, day, night, non-precision course-offset approaches, an ILS approach, crosswind approaches, and missed approaches. Results confirmed the feasibility of safely conducting down-to-the-runway precision approaches in low visibility to runways with and without precision approach systems, when consideration is given to proper aircraft instrumentation, pilot training, and acceptable procedures. Operational benefits include improved runway occupancy rates, and reduced delays and diversions.

  6. Advanced electro-mechanical micro-shutters for thermal infrared night vision imaging and targeting systems

    NASA Astrophysics Data System (ADS)

    Durfee, David; Johnson, Walter; McLeod, Scott

    2007-04-01

    Un-cooled microbolometer sensors used in modern infrared night vision systems such as driver vehicle enhancement (DVE) or thermal weapons sights (TWS) require a mechanical shutter. Although much consideration is given to the performance requirements of the sensor, supporting electronic components and imaging optics, the shutter technology required to survive in combat is typically the last consideration in the system design. Electro-mechanical shutters used in military IR applications must be reliable in temperature extremes from a low temperature of -40°C to a high temperature of +70°C. They must be extremely light weight while having the ability to withstand the high vibration and shock forces associated with systems mounted in military combat vehicles, weapon telescopic sights, or downed unmanned aerial vehicles (UAV). Electro-mechanical shutters must have minimal power consumption and contain circuitry integrated into the shutter to manage battery power while simultaneously adapting to changes in electrical component operating parameters caused by extreme temperature variations. The technology required to produce a miniature electro-mechanical shutter capable of fitting into a rifle scope with these capabilities requires innovations in mechanical design, material science, and electronics. This paper describes a new, miniature electro-mechanical shutter technology with integrated power management electronics designed for extreme service infra-red night vision systems.

  7. An Integrated Vision-Based System for Spacecraft Attitude and Topology Determination for Formation Flight Missions

    NASA Technical Reports Server (NTRS)

    Rogers, Aaron; Anderson, Kalle; Mracek, Anna; Zenick, Ray

    2004-01-01

    With the space industry's increasing focus upon multi-spacecraft formation flight missions, the ability to precisely determine system topology and the orientation of member spacecraft relative to both inertial space and each other is becoming a critical design requirement. Topology determination in satellite systems has traditionally made use of GPS or ground uplink position data for low Earth orbits, or, alternatively, inter-satellite ranging between all formation pairs. While these techniques work, they are not ideal for extension to interplanetary missions or to large fleets of decentralized, mixed-function spacecraft. The Vision-Based Attitude and Formation Determination System (VBAFDS) represents a novel solution to both the navigation and topology determination problems with an integrated approach that combines a miniature star tracker with a suite of robust processing algorithms. By combining a single range measurement with vision data to resolve complete system topology, the VBAFDS design represents a simple, resource-efficient solution that is not constrained to certain Earth orbits or formation geometries. In this paper, analysis and design of the VBAFDS integrated guidance, navigation and control (GN&C) technology will be discussed, including hardware requirements, algorithm development, and simulation results in the context of potential mission applications.

  8. Gesture Therapy: A Vision-Based System for Arm Rehabilitation after Stroke

    NASA Astrophysics Data System (ADS)

    Sucar, L. Enrique; Azcárate, Gildardo; Leder, Ron S.; Reinkensmeyer, David; Hernández, Jorge; Sanchez, Israel; Saucedo, Pedro

    Each year millions of people in the world survive a stroke, in the U.S. alone the figure is over 600,000 people per year. Movement impairments after stroke are typically treated with intensive, hands-on physical and occupational therapy for several weeks after the initial injury. However, due to economic pressures, stroke patients are receiving less therapy and going home sooner, so the potential benefit of the therapy is not completely realized. Thus, it is important to develop rehabilitation technology that allows individuals who had suffered a stroke to practice intensive movement training without the expense of an always-present therapist. Current solutions are too expensive, as they require a robotic system for rehabilitation. We have developed a low-cost, computer vision system that allows individuals with stroke to practice arm movement exercises at home or at the clinic, with periodic interactions with a therapist. The system integrates a web based virtual environment for facilitating repetitive movement training, with state-of-the art computer vision algorithms that track the hand of a patient and obtain its 3-D coordinates, using two inexpensive cameras and a conventional personal computer. An initial prototype of the system has been evaluated in a pilot clinical study with promising results.
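
    Obtaining the hand's 3-D coordinates from two inexpensive cameras comes down to triangulation once both cameras are calibrated; a minimal OpenCV sketch (the tracking step that produces the pixel coordinates is omitted):

```python
import cv2
import numpy as np

def hand_position_3d(p_left, p_right, proj_left, proj_right):
    """Triangulate the hand's 3-D position from its pixel coordinates
    in two calibrated cameras.

    p_left, p_right: (x, y) pixel positions of the tracked hand.
    proj_left, proj_right: 3x4 camera projection matrices from calibration.
    """
    pts_l = np.array(p_left, dtype=np.float64).reshape(2, 1)
    pts_r = np.array(p_right, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(proj_left, proj_right, pts_l, pts_r)
    # Convert from homogeneous to Euclidean coordinates.
    return (X_h[:3] / X_h[3]).ravel()  # in the calibration's world frame
```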

  9. A real-time surface inspection system for precision steel balls based on machine vision

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Ji; Tsai, Jhy-Cherng; Hsu, Ya-Chen

    2016-07-01

    Precision steel balls are among the most fundamental components for motion and power transmission parts and are widely used in industrial machinery and the automotive industry. As precision balls are crucial for the quality of these products, there is an urgent need to develop a fast and robust system for inspecting defects of precision steel balls. In this paper, a real-time system for inspecting surface defects of precision steel balls is developed based on machine vision. The developed system integrates a dual-lighting system, an unfolding mechanism and inspection algorithms for real-time signal processing and defect detection. The developed system is tested under a feeding speed of 4 pcs/s with a detection rate of 99.94% and an error rate of 0.10%. The minimum detectable surface flaw area is 0.01 mm2, which meets the requirement for inspecting ISO grade 100 precision steel balls.

  10. A Vision-Based System for Object Identification and Information Retrieval in a Smart Home

    NASA Astrophysics Data System (ADS)

    Grech, Raphael; Monekosso, Dorothy; de Jager, Deon; Remagnino, Paolo

    This paper describes a hand held device developed to assist people to locate and retrieve information about objects in a home. The system developed is a standalone device to assist persons with memory impairments such as people suffering from Alzheimer's disease. A second application is object detection and localization for a mobile robot operating in an ambient assisted living environment. The device relies on computer vision techniques to locate a tagged object situated in the environment. The tag is a 2D color printed pattern with a detection range and a field of view such that the user may point from a distance of over 1 meter.

  11. Development of an aviator's helmet-mounted night-vision goggle system

    NASA Astrophysics Data System (ADS)

    Wilson, Gerry H.; McFarlane, Robert J.

    1990-10-01

    Helmet Mounted Systems (HMS) must be lightweight, balanced and compatible with life support and head protection assemblies. This paper discusses the design of one particular HMS, the GEC Ferranti NITE-OP/NIGHTBIRD aviator's Night Vision Goggle (NVG), developed under contracts to the Ministry of Defence for all three services in the United Kingdom (UK) for rotary-wing and fast-jet aircraft. The existing equipment constraints and the safety, human factors and optical performance requirements are discussed, after which the design solution, arrived at after consideration of material and manufacturing options, is presented.

  12. IR measurements and image processing for enhanced-vision systems in civil aviation

    NASA Astrophysics Data System (ADS)

    Beier, Kurt R.; Fries, Jochen; Mueller, Rupert M.; Palubinskas, Gintautas

    2001-08-01

    A series of IR measurements with a FLIR (Forward Looking Infrared) system during landing approaches to various airports has been performed. A real-time image processing procedure to detect and identify the runway and potential obstacles is discussed and demonstrated. It is based on IR image segmentation and information derived from synthetic vision data. The information extracted from the IR images will be combined with the corresponding information from a MMW (millimeter wave) radar sensor in a subsequent fusion processor. This fused information aims to increase the pilot's situation awareness.

  13. Automatic inspection of analog and digital meters in a robot vision system

    NASA Technical Reports Server (NTRS)

    Trivedi, Mohan M.; Marapane, Suresh; Chen, Chuxin

    1988-01-01

    A critical limitation of most robots utilized in industrial environments is their inability to use sensory feedback, which forces robot operation into totally preprogrammed or teleoperation modes. In order to endow the new generation of robots with higher levels of autonomy, techniques for sensing their work environments and for accurate and efficient analysis of the sensory data must be developed. In this paper, the detailed development of vision system modules for inspecting various types of meters, both analog and digital, encountered in robotic inspection and manipulation tasks is described. These modules are tested using industrial robots having multisensory input capability.

  14. Synthesized night vision goggle

    NASA Astrophysics Data System (ADS)

    Zhou, Haixian

    2000-06-01

    The Synthesized Night Vision Goggle described in this paper is a new type of night vision goggle with multiple functions. It consists of three parts: a main observing system, a picture-superimposed system (or Cathode Ray Tube system) and a Charge-Coupled Device system.

  15. Integration of a Multi-Camera Vision System and Strapdown Inertial Navigation System (SDINS) with a Modified Kalman Filter

    PubMed Central

    Parnian, Neda; Golnaraghi, Farid

    2010-01-01

    This paper describes the development of a modified Kalman filter to integrate a multi-camera vision system and strapdown inertial navigation system (SDINS) for tracking a hand-held moving device for slow or nearly static applications over extended periods of time. In this algorithm, the magnitudes of the changes in position and velocity are estimated and then added to the previous estimates of the position and velocity, respectively. The experimental results of the hybrid vision/SDINS design show that the position error of the tool tip in all directions is about one millimeter RMS. The proposed Kalman filter removes the effect of the gravitational force in the state-space model. As a result, the associated error is eliminated and the resulting position estimate is smoother and ripple-free. PMID:22219667
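
    The filter is only described at a high level above, so the following is a generic sketch of the delta-state idea, not the authors' exact formulation: a linear Kalman update estimates the change in position and velocity each cycle, which is then added to the previous estimate. All matrices (F, Q, H, R) are assumed here and would need tuning:

      import numpy as np

      def kf_delta_step(dx, P, z, F, Q, H, R):
          # Predict the delta-state from the SDINS propagation model.
          dx = F @ dx
          P = F @ P @ F.T + Q
          # Correct with the multi-camera vision measurement z.
          K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
          dx = dx + K @ (z - H @ dx)
          P = (np.eye(len(dx)) - K @ H) @ P
          return dx, P   # each cycle: x_new = x_prev + dx, per the paper's scheme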

  16. Towards an Autonomous Vision-Based Unmanned Aerial System against Wildlife Poachers

    PubMed Central

    Olivares-Mendez, Miguel A.; Fu, Changhong; Ludivig, Philippe; Bissyandé, Tegawendé F.; Kannan, Somasundar; Zurad, Maciej; Annaiyan, Arun; Voos, Holger; Campoy, Pascual

    2015-01-01

    Poaching is an illegal activity that remains out of control in many countries. According to a 2014 report by the United Nations and Interpol, the illegal trade in global wildlife and natural resources amounts to nearly $213 billion every year, and is even helping to fund armed conflicts. Poaching activities around the world are pushing many animal species to the brink of extinction. Unfortunately, traditional methods of fighting poachers are not enough, hence the demand for more efficient approaches. In this context, the use of new sensor and algorithm technologies, as well as aerial platforms, is crucial to confront the sharp increase in poaching activity over the last few years. Our work focuses on the use of vision sensors on UAVs for the detection and tracking of animals and poachers, as well as the use of such sensors to control quadrotors during autonomous vehicle following and autonomous landing. PMID:26703597

  17. Self-calibration of a binocular vision system based on a one-dimensional target

    NASA Astrophysics Data System (ADS)

    Zhao, Yu; Li, Weimin

    2014-10-01

    This paper proposes a method for self-calibration of a binocular vision system based on a one-dimensional (1D) target. The method only needs a 1D target with two feature points, and the distance between the two points may be unknown: during computation, the distance can be set to an arbitrary value close to the actual one. Using the proposed method, we can obtain the parameters of the binocular vision system, including the internal parameters of the two cameras, the external parameters (up to a non-zero scale factor in the translation vector, tied to the assumed initial distance), the distortion parameters of the cameras, and the three-dimensional coordinates of the two points in different positions. We show theoretically that the initial distance value does not influence the results, and numerical simulations and an experimental example are presented to demonstrate the method. This insensitivity to the initial distance value is the method's biggest advantage. In practical applications, a 1D target of unknown length can thus be used to calibrate a binocular system conveniently, and a camera covering a large field of view can be calibrated with a small 1D target.

  18. A simple machine vision-driven system for measuring optokinetic reflex in small animals.

    PubMed

    Shirai, Yoshihiro; Asano, Kenta; Takegoshi, Yoshihiro; Uchiyama, Shu; Nonobe, Yuki; Tabata, Toshihide

    2013-09-01

    The optokinetic reflex (OKR) is useful for monitoring the function of the visual and motor nervous systems. However, OKR measurement is not open to all, because dedicated commercial equipment and detailed instructions for building in-house equipment are rarely offered. Here we describe the design of an easy-to-install, easy-to-use, yet reliable OKR measuring system, including a computer program to visually locate the pupil and a mathematical procedure to estimate the pupil azimuth from the location data. The pupil-locating program was created on a low-cost machine vision development platform, whose graphical user interface allows one to compose and operate the program without programming expertise. Our system located mouse pupils at a high success rate (~90 %), estimated their azimuth precisely (~94 %), and detected changes in OKR gain due to pharmacological modulation of the cerebellar flocculi. The system would promote behavioral assessment in physiology, pharmacology, and genetics. PMID:23824466
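
    The paper's exact mathematical procedure is not reproduced here, but one simple geometric reading of "estimating pupil azimuth from location data" assumes an approximately spherical eyeball of known image radius, giving the azimuth as the arcsine of the normalized horizontal offset. A hedged sketch of that reading:

      import math

      def pupil_azimuth_deg(x_pupil, x_center, eye_radius_px):
          # x_center: image x of the eyeball centre at zero azimuth (pixels).
          s = (x_pupil - x_center) / eye_radius_px
          s = max(-1.0, min(1.0, s))          # clamp against measurement noise
          return math.degrees(math.asin(s))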

  1. A vehicle photoelectric detection system based on guidance of machine vision

    NASA Astrophysics Data System (ADS)

    Wang, Yawei; Liu, Yu; Chen, Wei; Chen, Jing; Guo, Jia; Zhou, Lijun; Zheng, Haotian; Zhang, Xuantao

    2015-04-01

    A vehicle-mounted photoelectric detection system guided by machine vision is described in detail; it is composed of an electro-optic turret, a distributed perception module, a position and orientation system, a data processing terminal, etc. A target detection method based on visual guidance, as used in the system, is also discussed. Building on an initial alignment of the camera positions and a precise alignment of the target location, the method acquires and measures targets by using the high-definition cameras of the distributed perception module, installed around the vehicle like human eyes, to quickly steer the line of sight of the optoelectronic devices on the turret into the field of view of one camera and then carry out fine target alignment. Simulation results show that the method achieves intelligent dynamic guidance of the photoelectric detection system and improves detection efficiency and accuracy.

  2. Alaskan flight trials of a synthetic vision system for instrument landings of a piston twin aircraft

    NASA Astrophysics Data System (ADS)

    Barrows, Andrew K.; Alter, Keith W.; Jennings, Chad W.; Powell, J. D.

    1999-07-01

    Stanford University has developed a low-cost prototype synthetic vision system and flight tested it onboard general aviation aircraft. The display aids pilots by providing an 'out the window' view, making visualization of the desired flight path a simple task. Predictor symbology provides guidance on straight and curved paths presented in a 'tunnel-in-the-sky' format. Based on commodity PC hardware to achieve low cost, the Tunnel Display system uses differential GPS (typically from Stanford prototype Wide Area Augmentation System hardware) for positioning and GPS-aided inertial sensors for attitude determination. The display has been flown onboard Piper Dakota and Beechcraft Queen Air aircraft at several different locations. This paper describes the system, its development, and flight trials culminating with tests in Alaska during the summer of 1998. Operational experience demonstrated the Tunnel Display's ability to increase flight-path following accuracy and situational awareness while easing the task of instrument flying.

  3. Developing Crew Health Care and Habitability Systems for the Exploration Vision

    NASA Technical Reports Server (NTRS)

    Laurini, Kathy; Sawin, Charles F.

    2006-01-01

    This paper will discuss the specific mission architectures associated with the NASA Exploration Vision and review the challenges and drivers associated with developing crew health care and habitability systems to manage human system risks. Crew health care systems must be provided to manage crew health within acceptable limits, as well as respond to medical contingencies that may occur during exploration missions. Habitability systems must enable crew performance for the tasks necessary to support the missions. During the summer of 2005, NASA defined its exploration architecture, including blueprints for missions to the Moon and to Mars. These mission architectures require research and technology development to focus on the operational risks associated with each mission, as well as the risks to long-term astronaut health. This paper will review the highest-priority risks associated with the various missions and discuss NASA's strategies and plans for performing the research and technology development necessary to manage the risks to acceptable levels.

  4. Triangle orientation discrimination performance model for a multiband IR imaging system with human vision

    NASA Astrophysics Data System (ADS)

    Liu, Xin; Wang, Xiaorui; Zhang, Jianqi; Bai, Honggang

    2011-08-01

    In support of multiband imaging system performance forecasting, an equation-based triangle orientation discrimination (TOD) model is developed. Specifically, taking into account the spectral characteristics of the test pattern, mathematical equations for predicting the TOD threshold of a system with distributed fusion architecture in the IR spectral band are derived based on human vision with the "k/N" fusion rule, with emphasis on the impact of fusion on the threshold. Furthermore, a figure of merit Q related to the TOD calculation results is introduced to analyze how the discrimination performance of a multiband imaging system depends on the size and spectral difference of the test pattern. Preliminary validation against experimental results suggests that the proposed model provides a reasonable prediction of multiband imaging system performance.
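
    Under a common independence assumption, the "k/N" fusion rule (declare a detection when at least k of the N bands detect) has a simple binomial form. The snippet below illustrates the rule itself, not the paper's full threshold equations:

      from math import comb

      def fused_detection_prob(p, n, k):
          # p: per-band detection probability; fused detection if >= k of n bands fire.
          return sum(comb(n, j) * p**j * (1 - p)**(n - j)
                     for j in range(k, n + 1))

      # Example with two bands at p = 0.7 each: the "1/2" (OR) rule gives 0.91,
      # while the "2/2" (AND) rule gives 0.49.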

  5. Commercial Flight Crew Decision-Making during Low-Visibility Approach Operations Using Fused Synthetic/Enhanced Vision Systems

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Bailey, Randall E.; Prinzel, Lawrence J., III

    2007-01-01

    NASA is investigating revolutionary crew-vehicle interface technologies that strive to proactively overcome aircraft safety barriers that would otherwise constrain the full realization of the next-generation air transportation system. A fixed-base piloted simulation experiment was conducted to evaluate the complementary use of Synthetic and Enhanced Vision technologies. Specific focus was placed on new techniques for the integration and/or fusion of Enhanced and Synthetic Vision and their impact, within a two-crew flight deck, on the crew's decision-making process during low-visibility approach and landing operations. Overall, the experimental data showed that significant improvements in situation awareness, without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying. During non-normal operations, the ability of the crew to handle substantial navigational errors and runway incursions was neither improved nor adversely impacted by the display concepts. The addition of Enhanced Vision may not, by itself, improve runway incursion detection without being specifically tailored for this application. Existing enhanced vision system procedures were effectively used in the crew decision-making process during approach and missed approach operations, but having to forcibly transition from an excellent FLIR image to natural vision by 100 ft above field level was awkward for the pilot-flying.

  6. Calibration method for a vision guiding-based laser-tracking measurement system

    NASA Astrophysics Data System (ADS)

    Shao, Mingwei; Wei, Zhenzhong; Hu, Mengjie; Zhang, Guangjun

    2015-08-01

    Laser-tracking measurement systems (laser trackers) based on a vision-guiding device are widely used in industrial fields, and their calibration is important. As conventional methods typically have many disadvantages, such as difficult machining of the target and overdependence on the retroreflector, a novel calibration method is presented in this paper. The retroreflector, which is necessary in the normal calibration method, is unnecessary in our approach. As the laser beam is linear, points on the beam can be obtained with the help of a normal planar target. In this way, we can determine the equation of a laser beam in the camera coordinate system, while its corresponding equation in the laser-tracker coordinate system can be obtained from the encoder of the laser tracker. Clearly, once several such pairs of equations are available, the rotation matrix can be solved from the direction vectors of the laser beams in the two coordinate systems. As the intersection of the laser beams is the origin of the laser-tracker coordinate system, the translation matrix can also be determined. Our proposed method not only achieves the calibration of a single laser-tracking measurement system but also provides a reference for the calibration of a multistation system. Simulations to evaluate the effects of some critical factors were conducted and show the robustness and accuracy of our method. In real experiments, the root mean square error of the calibration result reached 1.46 mm within a range of 10 m, even though the vision-guiding device focuses on a point approximately 5 m away from the origin of its coordinate system, with a field of view of approximately 200 mm × 200 mm.
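
    The central geometric step, solving the rotation between the camera and laser-tracker frames from matched beam direction vectors, can be sketched as an SVD-based (Kabsch-style) fit. This is one standard way to solve that sub-problem, not necessarily the authors' exact formulation:

      import numpy as np

      def rotation_from_directions(d_cam, d_trk):
          # d_cam, d_trk: (n, 3) arrays of unit beam directions in each frame.
          # Returns R such that d_trk ~ R @ d_cam.
          H = d_cam.T @ d_trk
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
          return Vt.T @ D @ U.T       # proper rotation (reflection corrected)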

  7. Pose measurement base on machine vision for the aircraft model in wire-driven parallel suspension system

    NASA Astrophysics Data System (ADS)

    Chen, Yi-feng; Wu, Liao-ni; Yue, Sui-lu; Lin, Qi

    2013-03-01

    In wind tunnel tests, the pose of the aircraft model in a wire-driven parallel suspension system (WDPSS) is determined by driving several wires, so pose measurement is very important for the study of WDPSS. Using machine vision technology, a monocular vision measurement system has been constructed to estimate the pose of the aircraft model by calibrating the camera, extracting corresponding control points on the aircraft model, and applying several homogeneous transformations. This article describes the software of the measurement system, the measurement principle, and the HALCON-based data processing methods used to solve for the pose of the aircraft model. Experiments validate the practical feasibility of the system.

  8. Overview of passive and active vision techniques for hand-held 3D data acquisition

    NASA Astrophysics Data System (ADS)

    Mada, Sreenivasa K.; Smith, Melvyn L.; Smith, Lyndon N.; Midha, Prema S.

    2003-03-01

    The digitization of the 3D shape of real objects is a rapidly expanding discipline, with a wide variety of applications, including shape acquisition, inspection, reverse engineering, gauging and robot navigation. Developments in computer product design techniques, automated production, and the need for close manufacturing tolerances will be facts of life for the foreseeable future. A growing need exists for fast, accurate, portable, non-contact 3D sensors. However, in order for 3D scanning to become more commonplace, new methods are needed for easily, quickly and robustly acquiring accurate full geometric models of complex objects using low-cost technology. In this paper, a brief survey is presented of current scanning technologies available for acquiring range data. An overview is provided of current 3D-shape acquisition using both active and passive vision techniques. Each technique is explained in terms of its configuration, principle of operation, and inherent advantages and limitations. A separate section then focuses on the implications of scannerless scanning for hand-held technology, after which the current status of 3D acquisition using hand-held technology, together with related implementation issues, is considered more fully. Finally, conclusions for further developments in hand-held devices are discussed. This paper may be of particular benefit to newcomers in this field.

  9. Monocular Vision System for Fixed Altitude Flight of Unmanned Aerial Vehicles.

    PubMed

    Huang, Kuo-Lung; Chiu, Chung-Cheng; Chiu, Sheng-Yi; Teng, Yao-Jen; Hao, Shu-Sheng

    2015-01-01

    The fastest and most economical method of acquiring terrain images is aerial photography. The use of unmanned aerial vehicles (UAVs) has been investigated for this task. However, UAVs present a range of challenges such as flight altitude maintenance. This paper reports a method that combines skyline detection with a stereo vision algorithm to enable the flight altitude of UAVs to be maintained. A monocular camera is mounted on the downside of the aircraft's nose to collect continuous ground images, and the relative altitude is obtained via a stereo vision algorithm from the velocity of the UAV. Image detection is used to obtain terrain images, and to measure the relative altitude from the ground to the UAV. The UAV flight system can be set to fly at a fixed and relatively low altitude to obtain the same resolution of ground images. A forward-looking camera is mounted on the upside of the aircraft's nose. In combination with the skyline detection algorithm, this helps the aircraft to maintain a stable flight pattern. Experimental results show that the proposed system enables UAVs to obtain terrain images at constant resolution, and to detect the relative altitude along the flight path. PMID:26184213
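
    The underlying motion-parallax relation is simple even though the paper's full algorithm is not: for a downward-looking camera translating at known ground speed, a ground feature's image flow rate is inversely proportional to altitude. A hedged illustration of that relation only, not the authors' implementation:

      def relative_altitude_m(focal_px, speed_mps, flow_px_per_s):
          # H = f * V / flow for a nadir-pointing camera over flat ground.
          return focal_px * speed_mps / flow_px_per_s

      # Example: f = 800 px, V = 20 m/s, flow = 160 px/s  ->  H = 100 m.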

  10. Optimization of spatial light distribution through genetic algorithms for vision systems applied to quality control

    NASA Astrophysics Data System (ADS)

    Castellini, P.; Cecchini, S.; Stroppa, L.; Paone, N.

    2015-02-01

    The paper presents an adaptive illumination system for image quality enhancement in vision-based quality control systems. In particular, a spatial modulation of illumination intensity is proposed in order to improve image quality, compensating for different target scattering properties, local reflections and fluctuations of ambient light. The desired spatial modulation of illumination is obtained by a digital light projector, used to illuminate the scene with an arbitrary spatial distribution of light intensity designed to improve feature extraction in the region of interest. The spatial distribution of illumination is optimized by running a genetic algorithm, with an image quality estimator closing the feedback loop and stopping the iterations once the desired image quality is reached. The technique proves particularly valuable for optimizing the spatial illumination distribution in the region of interest, with the remarkable capability of the genetic algorithm to adapt the light distribution to very different target reflectivities and ambient conditions. The final objective of the proposed technique is to improve the matching score in the recognition of parts by matching algorithms, and hence the reliability of machine vision-based quality inspection. The procedure has been validated both by a numerical model and by an experimental test referring to a significant quality control problem in the washing machine manufacturing industry: the recognition of a metallic clamp. Its applicability to other domains is also presented, specifically the visual inspection of shoes with retro-reflective tape and T-shirts with paillettes.
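
    A toy version of the optimization loop makes the scheme concrete. The fitness function below is a stand-in for "project the pattern, grab an image, score feature contrast in the region of interest", and the operators (truncation selection, one-point crossover, Gaussian mutation) are illustrative choices, not the paper's exact configuration:

      import numpy as np

      rng = np.random.default_rng(0)
      GRID = 8 * 8                    # 8x8 grid of projector intensity blocks

      def evaluate(pattern):
          # Placeholder fitness: replace with the real image-quality estimator.
          return -np.sum((pattern - 0.6) ** 2)

      pop = rng.random((30, GRID))                      # initial population
      for generation in range(100):
          fitness = np.array([evaluate(p) for p in pop])
          parents = pop[np.argsort(fitness)][-10:]      # keep the 10 fittest
          kids = []
          for _ in range(len(pop) - len(parents)):
              a, b = parents[rng.integers(10, size=2)]
              cut = rng.integers(1, GRID)               # one-point crossover
              child = np.concatenate([a[:cut], b[cut:]])
              child += rng.normal(0, 0.05, GRID) * (rng.random(GRID) < 0.1)
              kids.append(np.clip(child, 0.0, 1.0))     # mutate and bound
          pop = np.vstack([parents, kids])

      best_pattern = pop[np.argmax([evaluate(p) for p in pop])]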

  11. A Vision-Based System for Intelligent Monitoring: Human Behaviour Analysis and Privacy by Context

    PubMed Central

    Chaaraoui, Alexandros Andre; Padilla-López, José Ramón; Ferrández-Pastor, Francisco Javier; Nieto-Hidalgo, Mario; Flórez-Revuelta, Francisco

    2014-01-01

    Due to progress and demographic change, society is facing a crucial challenge related to increased life expectancy and a higher number of people in situations of dependency. As a consequence, there exists a significant demand for support systems for personal autonomy. This article outlines the vision@home project, whose goal is to extend independent living at home for elderly and impaired people, providing care and safety services by means of vision-based monitoring. Different kinds of ambient-assisted living services are supported, from the detection of home accidents, to telecare services. In this contribution, the specification of the system is presented, and novel contributions are made regarding human behaviour analysis and privacy protection. By means of a multi-view setup of cameras, people's behaviour is recognised based on human action recognition. For this purpose, a weighted feature fusion scheme is proposed to learn from multiple views. In order to protect the right to privacy of the inhabitants when a remote connection occurs, a privacy-by-context method is proposed. The experimental results of the behaviour recognition method show an outstanding performance, as well as support for multi-view scenarios and real-time execution, which are required in order to provide the proposed services. PMID:24854209

  12. Terrain Portrayal for Synthetic Vision Systems Head-Down Displays Evaluation Results: Compilation of Pilot Transcripts

    NASA Technical Reports Server (NTRS)

    Hughes, Monica F.; Glaab, Louis J.

    2007-01-01

    The Terrain Portrayal for Head-Down Displays (TP-HDD) simulation experiment addressed multiple objectives involving twelve display concepts (two baseline concepts without terrain and ten synthetic vision system (SVS) variations), four evaluation maneuvers (two en route and one approach maneuver, plus a rare-event scenario), and three pilot group classifications. The TP-HDD SVS simulation was conducted in the NASA Langley Research Center's (LaRC's) General Aviation WorkStation (GAWS) facility. The results from this simulation establish the relationship between terrain portrayal fidelity and pilot situation awareness, workload, stress, and performance and are published in the NASA TP entitled Terrain Portrayal for Synthetic Vision Systems Head-Down Displays Evaluation Results. This is a collection of pilot comments during each run of the TP-HDD simulation experiment. These comments are not the full transcripts, but a condensed version where only the salient remarks that applied to the scenario, the maneuver, or the actual research itself were compiled.

  13. Automated vision system for fabric defect inspection using Gabor filters and PCNN.

    PubMed

    Li, Yundong; Zhang, Cheng

    2016-01-01

    In this study, an embedded machine vision system using Gabor filters and a Pulse Coupled Neural Network (PCNN) is developed to identify defects in warp-knitted fabrics automatically. The system consists of smart cameras and a Human Machine Interface (HMI) controller. A hybrid detection algorithm combining Gabor filters and a PCNN runs on the SoC processor of the smart camera. First, Gabor filters are employed to enhance the contrast of images captured by a CMOS sensor. Second, defect areas are segmented by the PCNN with adaptive parameter setting. Third, the smart cameras notify the controller to stop the warp-knitting machine once defects are found. Experimental results demonstrate that the hybrid method is superior to Gabor-only and wavelet methods in detection accuracy. Actual operation in a textile factory verifies the effectiveness of the inspection system. PMID:27386251
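
    A condensed sketch of the two-stage idea follows: a small Gabor filter bank enhances the fabric texture, after which a minimal PCNN-style iteration segments high-response regions. All parameter values are illustrative, not those of the deployed system:

      import cv2
      import numpy as np

      def gabor_enhance(gray):
          responses = []
          for theta in np.arange(0, np.pi, np.pi / 4):   # 4 orientations
              k = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                     lambd=10.0, gamma=0.5, psi=0)
              responses.append(cv2.filter2D(gray.astype(np.float32), -1, k))
          return np.max(responses, axis=0)               # strongest response per pixel

      def pcnn_segment(S, beta=0.3, decay=0.8, theta0=1.0, iters=10):
          S = (S - S.min()) / (np.ptp(S) + 1e-9)         # normalized stimulus
          Y = np.zeros_like(S)                           # neuron outputs
          theta = np.full_like(S, theta0)                # dynamic thresholds
          link = np.ones((3, 3), np.float32)
          for _ in range(iters):
              L = cv2.filter2D(Y, -1, link)              # linking from neighbours
              U = S * (1 + beta * L)                     # internal activity
              Y = (U > theta).astype(np.float32)         # neurons fire
              theta = decay * theta + 5.0 * Y            # inhibit fired neurons
          return Y > 0                                   # candidate defect mask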

  14. A Bevel Gear Quality Inspection System Based on Multi-Camera Vision Technology.

    PubMed

    Liu, Ruiling; Zhong, Dexing; Lyu, Hongqiang; Han, Jiuqiang

    2016-01-01

    Manual inspection of automotive bevel gears for surface defects and dimensions is costly, inefficient, slow and inaccurate. To solve these problems, a synthetic bevel gear quality inspection system based on multi-camera vision technology is developed. The system can detect surface defects and measure gear dimensions simultaneously. Three efficient algorithms, named Neighborhood Average Difference (NAD), Circle Approximation Method (CAM) and Fast Rotation-Position (FRP), are proposed. The system can detect knock damage, cracks, scratches, dents, gibbosity or repeated cutting of the spline, etc. The smallest detectable defect is 0.4 mm × 0.4 mm and the precision of dimension measurement is about 40-50 μm. One inspection takes no more than 1.3 s. Both precision and speed meet the requirements of real-time online inspection in bevel gear production. PMID:27571078
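
    The paper only names its algorithms, so the following is a plausible reading of the Neighborhood Average Difference (NAD) idea rather than its published formulation: flag pixels whose gray level deviates strongly from the local mean.

      import cv2
      import numpy as np

      def nad_defect_mask(gray, win=15, thresh=25.0):
          local_mean = cv2.blur(gray.astype(np.float32), (win, win))
          diff = np.abs(gray.astype(np.float32) - local_mean)
          return diff > thresh      # candidate defect pixels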

  15. A Respiratory Movement Monitoring System Using Fiber-Grating Vision Sensor for Diagnosing Sleep Apnea Syndrome

    NASA Astrophysics Data System (ADS)

    Takemura, Yasuhiro; Sato, Jun-Ya; Nakajima, Masato

    2005-01-01

    A non-restrictive, non-contact respiratory movement monitoring system that automatically finds the boundary between chest and abdomen and detects the vertical movement of each part of the body separately is proposed. The system uses a fiber-grating vision sensor, and the boundary position is detected by calculating the centers of gravity of upward-moving and downward-moving sampling points, respectively. In an experiment evaluating the ability to detect the respiratory movement signals of each part and to discriminate between obstructive and central apneas, the detected signals of the two parts and their total clearly showed the characteristic signatures of obstructive and central apnea. The cross talk between the two categories, classified automatically according to several rules reflecting these signatures, was ≤ 15%. This result is sufficient for discriminating central sleep apnea syndrome from obstructive sleep apnea syndrome and indicates that the system is promising as screening equipment.

  16. Flight study of on-board enhanced vision system for all-weather aircraft landing

    NASA Astrophysics Data System (ADS)

    Akopdjanan, Yuri A.; Machikhin, Alexander S.; Bilanchuk, Vyacheslav V.; Drynkin, Vladimir N.; Falkov, Eduard Y.; Tsareva, Tatiana I.; Fomenko, Anatoly I.

    2014-11-01

    An on-board enhanced vision system for all-weather aircraft navigation and landing, currently under development at the State Research Institute of Aviation Systems, is described. The system is based on a combination of three imagers sensitive in the visible, short-wave infrared (SWIR) and long-wave infrared (LWIR) spectral ranges, and demonstrates to the pilot only the most informative images from the time-aligned multi-sensor data. Results of flight tests on glissade trajectories of the OR-5 MO light aircraft, obtained under various weather conditions, are presented. It is shown that each spectral range may be informative under certain observation conditions. In adverse, poor-visibility conditions such as fog, high humidity and low clouds, the SWIR range carries the most information.

  17. Lambda Vision

    NASA Astrophysics Data System (ADS)

    Czajkowski, Michael

    2014-06-01

    There is an explosion in the quantity and quality of IMINT data being captured in Intelligence, Surveillance and Reconnaissance (ISR) today. While automated exploitation techniques involving computer vision are arriving, only a few architectures can manage both the storage and bandwidth of large volumes of IMINT data and also present results to analysts quickly. Lockheed Martin Advanced Technology Laboratories (ATL) has been actively researching the application of Big Data cloud computing techniques to computer vision applications. This paper presents the results of this work in adopting a Lambda Architecture to process and disseminate IMINT data using computer vision algorithms. The approach embodies an end-to-end solution, processing IMINT data from sensors to serving information products to analysts quickly, independent of the size of the data. The solution lies in dividing the architecture into a speed layer for low-latency processing and a batch layer for higher-quality answers at the expense of time, in a robust and fault-tolerant way. This approach was evaluated using a large corpus of IMINT data collected by a C-130 Shadow Harvest sensor over Afghanistan from 2010 through 2012, including full-motion video from both narrow and wide area fields of view. The evaluation was done on a scaled-out cloud infrastructure similar in composition to those found in the Intelligence Community. The paper shows experimental results demonstrating the scalability of the architecture and the precision of its results, using a computer vision algorithm designed to identify man-made objects in sparse data terrain.

  18. Cognitive vision system for control of dexterous prosthetic hands: Experimental evaluation

    PubMed Central

    2010-01-01

    Background Dexterous prosthetic hands developed recently, such as SmartHand and i-LIMB, are highly sophisticated; they have individually controllable fingers and a thumb that is able to abduct/adduct. This flexibility allows implementation of many different grasping strategies, but also requires new control algorithms that can exploit the many degrees of freedom available. The current study presents and tests the operation of a new control method for dexterous prosthetic hands. Methods The central component of the proposed method is an autonomous controller comprising a vision system with rule-based reasoning mounted on a dexterous hand (CyberHand). The controller, termed cognitive vision system (CVS), mimics biological control and generates commands for prehension. The CVS was integrated into a hierarchical control structure: 1) the user triggers the system and controls the orientation of the hand; 2) a high-level controller automatically selects the grasp type and size; and 3) an embedded hand controller implements the selected grasp using closed-loop position/force control. The operation of the control system was tested in 13 healthy subjects who used CyberHand, attached to the forearm, to grasp and transport 18 objects placed at two different distances. Results The system correctly estimated grasp type and size (nine commands in total) in about 84% of the trials. In an additional 6% of the trials, the grasp type and/or size differed from the optimal ones, but were still good enough for the grasp to be successful. If the control task was simplified by decreasing the number of possible commands, the classification accuracy increased (e.g., 93% for guessing the grasp type only). Conclusions The original outcome of this research is a novel controller empowered by vision and reasoning and capable of high-level analysis (i.e., determining object properties) and autonomous decision making (i.e., selecting the grasp type and size).

  19. A vision-based driver nighttime assistance and surveillance system based on intelligent image sensing techniques and a heterogeneous dual-core embedded system architecture.

    PubMed

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogeneous dual-core platform. To this end, the study develops and implements computer vision and sensing techniques for nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured by CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogeneous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle embedded vision-based nighttime driver assistance and surveillance system. PMID:22736956

  1. Robotic vision. [process control applications]

    NASA Technical Reports Server (NTRS)

    Williams, D. S.; Wilf, J. M.; Cunningham, R. T.; Eskenazi, R.

    1979-01-01

    Robotic vision, involving the use of a vision system to control a process, is discussed. The design and selection of active sensors, employing radio waves, sound waves, and laser light, respectively, to light up unobservable features in the scene, are considered, as are the design and selection of passive sensors, which rely on external sources of illumination. The segmentation technique, by which an image is separated into collections of contiguous picture elements sharing such common characteristics as color, brightness, or texture, is examined, with emphasis on edge detection. The IMFEX (image feature extractor) system, which performs edge detection and thresholding at 30 frames/sec television frame rates, is described. Template matching and discrimination approaches to object recognition are noted. Industrial applications of robotic vision to tasks too monotonous or too dangerous for human workers are mentioned.

  2. Development of a Machine-Vision System for Recording of Force Calibration Data

    NASA Astrophysics Data System (ADS)

    Heamawatanachai, Sumet; Chaemthet, Kittipong; Changpan, Tawat

    This paper presents the development of a new system for recording force calibration data using machine vision technology. A real-time camera and computer system capture images of the instrument readings during calibration. The measurement images are then converted to numerical data using optical character recognition (OCR), and these numerical data, along with the raw images, are automatically saved as calibration database files, eliminating human recording errors. Verification experiments were performed by recording the measurement results from an amplifier (DMP 40) with a load cell (HBM-Z30-10kN), using NIMT's 100-kN deadweight force standard machine (DWM-100kN) to generate the test forces. The experiments covered three categories: 1) a dynamic condition (recording during load changes), 2) a static condition (recording under fixed load), and 3) full calibration experiments in accordance with ISO 376:2011. In the dynamic-condition experiment, >94% of the captured images were free of digit overlap; in the static condition, >98% were free of overlap. All measurement images without overlap were translated into numbers by the developed program with 100% accuracy, and the full calibration experiments also gave 100% accurate results. Moreover, should any result be translated incorrectly, it is possible to trace back to the raw calibration image to check and correct it. This machine-vision-based system and program should therefore be appropriate for recording force calibration data.

  3. Machine vision system: a tool for quality inspection of food and agricultural products.

    PubMed

    Patel, Krishna Kumar; Kar, A; Jha, S N; Khan, M A

    2012-04-01

    Quality inspection of food and agricultural produce is difficult and labor intensive. At the same time, with rising expectations for food products of high quality and safety, the need for accurate, fast and objective determination of these characteristics continues to grow. In India, however, these operations are generally manual, which is costly as well as unreliable, because human judgment in identifying quality factors such as appearance, flavor, nutrients and texture is inconsistent, subjective and slow. Machine vision provides one alternative: an automated, non-destructive and cost-effective technique to accomplish these requirements. This inspection approach, based on image analysis and processing, has found a variety of applications in the food industry. Considerable research has highlighted its potential for the inspection and grading of fruits and vegetables, the examination of grain quality and characteristics, and the quality evaluation of other food products such as bakery products, pizza, cheese and noodles. The objective of this paper is to provide an in-depth introduction to machine vision systems, their components, and recent work reported on food and agricultural produce. PMID:23572836

  4. Calibration target reconstruction for 3-D vision inspection system of large-scale engineering objects

    NASA Astrophysics Data System (ADS)

    Yin, Yongkai; Peng, Xiang; Guan, Yingjian; Liu, Xiaoli; Li, Ameng

    2010-11-01

    It is usually difficult to calibrate a 3-D vision inspection system employed to measure large-scale engineering objects. One of the challenges is how to build up a large and precise calibration target in situ. In this paper, we present a calibration target reconstruction strategy to solve this problem. First, we choose one of the engineering objects to be inspected as the calibration target and paste coded marks on its surface. Next, we locate and decode the marks to obtain homologous points. From multiple camera images, the fundamental matrix between adjacent images can be estimated; the essential matrix can then be derived using the a priori known camera intrinsic parameters and decomposed to obtain the camera extrinsic parameters. Finally, we obtain initial 3D coordinates by binocular stereo reconstruction and optimize them with bundle adjustment, taking lens distortions into account, leading to a high-precision calibration target. This reconstruction strategy has been applied to the inspection of an industrial project, from which the proposed method is successfully validated.
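
    The reconstruction chain described above (homologous points, fundamental/essential matrix, decomposition into extrinsics, triangulation) maps directly onto standard OpenCV calls. The sketch below condenses it for two views, with K the a priori intrinsic matrix and pts1/pts2 matched coded-mark centres as (n, 2) arrays; the bundle-adjustment refinement is omitted:

      import cv2
      import numpy as np

      def two_view_reconstruction(K, pts1, pts2):
          E, _ = cv2.findEssentialMat(pts1, pts2, K,
                                      method=cv2.RANSAC, threshold=1.0)
          _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)     # extrinsics
          P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
          P2 = K @ np.hstack([R, t])
          X = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)  # 4xN homogeneous
          return (X[:3] / X[3]).T                            # initial 3D coordinates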

  5. Base program interim phase test procedure - Coherent Laser Vision System (CLVS). Final report, September 27, 1994--January 30, 1997

    SciTech Connect

    1997-05-01

    The purpose of the CLVS research project is to develop a prototype fiber-optic based Coherent Laser Vision System suitable for DOE's EM Robotics program. The system provides three-dimensional (3D) vision for monitoring situations in which it is necessary to update scene geometry on the order of once per second. The CLVS project plan required implementation in two phases, a base contract and a continuance option. This report contains the test procedure and the test/demonstration results, presenting a proof of concept for a system providing 3D vision with the performance required to update scene geometry on the order of once per second.

  6. Vision-based measuring system for rider's pose estimation during motorcycle riding

    NASA Astrophysics Data System (ADS)

    Cheli, F.; Mazzoleni, P.; Pezzola, M.; Ruspini, E.; Zappa, E.

    2013-07-01

    In motorcycle riding, the inertial characteristics of the human body are comparable with those of the vehicle, so the study of the rider's dynamics is a crucial step in system modeling. An innovative vision-based system able to measure the six degrees of freedom of the rider with respect to the vehicle is proposed here. The core of the approach is an image acquisition and processing technique capable of reconstructing the position and orientation of a target fixed on the rider's back. The technique is first validated in laboratory tests comparing measured and imposed target motion laws, and subsequently tested in a real-case scenario during track tests with amateur and professional riders. The presented results show the capability of the technique to correctly describe the rider's dynamics and interaction with the vehicle, as well as the possibility of using the new measuring technique to compare different riding styles.
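
    Recovering six degrees of freedom from the known geometry of the back-mounted target and its image projections is a classic perspective-n-point problem. A hedged OpenCV sketch of that core step (inputs and names illustrative, not the authors' implementation):

      import cv2

      def target_pose(object_pts, image_pts, K, dist):
          # object_pts: (n, 3) target points in the target frame;
          # image_pts: (n, 2) detected projections; K, dist: camera intrinsics.
          ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
          R, _ = cv2.Rodrigues(rvec)    # 3 rotational DOF as a matrix
          return R, tvec                # plus 3 translational DOF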

  7. Artificial human vision camera

    NASA Astrophysics Data System (ADS)

    Goudou, J.-F.; Maggio, S.; Fagno, M.

    2014-10-01

    In this paper we present a real-time vision system modeling the human visual system. Our purpose is to draw inspiration from the biomechanics of human vision to improve robotic capabilities for tasks such as object detection and tracking. This work first describes the biomechanical differences between human vision and classic cameras, and the retinal processing stage that takes place in the eye before the optic nerve. The second part describes our implementation of these principles in a 3-camera optical, mechanical and software model of the human eyes with an associated bio-inspired attention model.

  8. Exercise for People with Low Vision

    MedlinePlus

    People with low vision can be active in many ways. Learn more about living with low vision from the National Eye Institute at NIH.

  9. Study of Synthetic Vision Systems (SVS) and Velocity-vector Based Command Augmentation System (V-CAS) on Pilot Performance

    NASA Technical Reports Server (NTRS)

    Liu, Dahai; Goodrich, Ken; Peak, Bob

    2006-01-01

    This study investigated the effects of synthetic vision system (SVS) concepts and advanced flight controls on single pilot performance (SPP). Specifically, we evaluated the benefits and interactions of two levels of terrain portrayal, guidance symbology, and control-system response type on SPP in the context of lower-landing minima (LLM) approaches. Performance measures consisted of flight technical error (FTE) and pilot perceived workload. In this study, pilot rating, control type, and guidance symbology were not found to significantly affect FTE or workload. It is likely that transfer from prior experience, limited scope of the evaluation task, specific implementation limitations, and limited sample size were major factors in obtaining these results.

  10. Sensor fusion to enable next generation low cost Night Vision systems

    NASA Astrophysics Data System (ADS)

    Schweiger, R.; Franz, S.; Löhlein, O.; Ritter, W.; Källhammer, J.-E.; Franks, J.; Krekels, T.

    2010-04-01

    The next generation of automotive Night Vision Enhancement systems offers automatic pedestrian recognition with a performance beyond current Night Vision systems at a lower cost, allowing high market penetration across the luxury as well as compact car segments. Improved performance can be achieved by fusing a Far Infrared (FIR) sensor with a Near Infrared (NIR) sensor; however, fusing with today's FIR systems would be too costly for high market penetration. The main cost drivers of the FIR system are its resolution and its sensitivity. Sensor cost is largely determined by sensor die size; fewer and smaller pixels reduce die size but also resolution and sensitivity. Sensitivity limits are mainly determined by inclement-weather performance, and sensitivity requirements should be matched to the possibilities of low-cost FIR optics, especially the implications of molding highly complex optical surfaces. Since an FIR sensor specified for fusion can have lower resolution as well as lower sensitivity, fusing FIR and NIR can solve both the performance and the cost problem. To compensate for the effect of FIR-sensor degradation on pedestrian detection capability, a fusion approach called MultiSensorBoosting is presented that produces a classifier holding highly discriminative sub-pixel features from both sensors at once. The algorithm is applied to data of different resolutions and to data from cameras with varying optics to incorporate various sensor sensitivities. As it is not feasible to record representative data with all the different sensor configurations, transformation routines applied to existing high-resolution data recorded with high-sensitivity cameras are investigated in order to determine the effect of lower resolution and lower sensitivity on overall detection performance. This paper also gives an overview of first results showing that a reduction in FIR sensor resolution can be compensated using fusion techniques, and a reduction in sensitivity can be compensated as well.

  11. Microscopic vision modeling method by direct mapping analysis for micro-gripping system with stereo light microscope.

    PubMed

    Wang, Yuezong; Zhao, Zhizhong; Wang, Junshuai

    2016-04-01

    We present a novel, high-precision microscopic vision modeling method that can be used for 3D data reconstruction in a micro-gripping system with a stereo light microscope. The method consists of four parts: image distortion correction, disparity distortion correction, an initial vision model and a residual compensation model. First, a method of image distortion correction is proposed. The image data required come from stereo images of a calibration sample: the geometric features of image distortions can be predicted through the shape deformation of lines constructed by grid points in the stereo images, and linear and polynomial fitting methods are applied to correct them. Second, the shape deformation features of the disparity distribution are discussed, and a polynomial-fitting-based disparity distortion correction is proposed. Third, the microscopic vision model is derived: the initial vision model follows from an analysis of the direct mapping relationship between object and image points, and the residual compensation model is derived from a residual analysis of the initial model. The results show that, with maximum reconstruction distances of 4.1 mm in the X direction, 2.9 mm in the Y direction and 2.25 mm in the Z direction, our model achieves a precision of 0.01 mm in the X and Y directions and 0.015 mm in the Z direction. Comparison with the traditional pinhole camera model shows that the two models have similar reconstruction precision for X coordinates, whereas the pinhole camera model has lower precision for Y and Z coordinates than our model. The proposed method is very helpful for micro-gripping systems based on SLM microscopic vision. PMID:26924646
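
    The polynomial corrections can be sketched generically: fit a low-order polynomial mapping measured values to reference values on the calibration sample, then apply it to new data. A minimal illustration (the paper's actual fits are stage-specific and partly two-dimensional):

      import numpy as np

      def fit_correction(measured, reference, order=3):
          coeffs = np.polyfit(measured, reference, order)
          return np.poly1d(coeffs)       # callable corrector

      # corrector = fit_correction(raw_disparity, true_disparity)
      # corrected = corrector(new_raw_disparity)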

  12. Overview of computer vision

    SciTech Connect

    Gevarter, W.B.

    1982-09-01

    An overview of computer vision is provided. Image understanding and scene analysis are emphasized, and pertinent aspects of pattern recognition are treated. Reviewed are the basic approach to computer vision systems, the techniques utilized, applications, existing systems, state-of-the-art issues and research requirements, who is doing the work and who is funding it, and future trends and expectations.

  13. Spectral matching technology of a low-light-level night-vision system with a laser illuminator.

    PubMed

    Liu, Lei; Wang, Xin; Chen, Jilu

    2010-01-20

    For a low-light-level night-vision system with a laser illuminator, the reflected spectral distributions of dark green paint, rough concrete, and green vegetation under laser illumination are deduced from the spectral distribution of the illuminator and the reflectivity of the objects. The spectral-matching factors of Super S(25) and New S(25) photocathodes for dark green paint, rough concrete, and green vegetation are calculated and compared. The results show that the evaluation of visual range for a night-vision system with a laser illuminator under field conditions is strongly influenced by the spectral-matching factor. PMID:20090790
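
    One common way to define such a spectral-matching factor is as a normalized overlap of the illuminator spectrum, target reflectivity, and photocathode response. The sketch below uses that assumed definition (the paper's exact normalization may differ), with all curves sampled on the same uniform wavelength grid:

      import numpy as np

      def matching_factor(source, reflectivity, cathode):
          # Fraction of the reflected laser radiation the cathode responds to.
          reflected = source * reflectivity
          return float((reflected * cathode).sum() / reflected.sum())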

  14. Active optical zoom system

    DOEpatents

    Wick, David V.

    2005-12-20

    An active optical zoom system changes the magnification (or effective focal length) of an optical imaging system by utilizing two or more active optics in a conventional optical system. The system can create relatively large changes in system magnification with very small changes in the focal lengths of individual active elements by leveraging the optical power of the conventional optical elements (e.g., passive lenses and mirrors) surrounding the active optics. The active optics serve primarily as variable focal-length lenses or mirrors, although adding other aberrations enables increased utility. The active optics can be either liquid-crystal spatial light modulators (LC SLMs), used in a transmissive optical zoom system, or deformable mirrors (DMs), used in a reflective optical zoom system. By appropriately designing the optical system, the variable focal-length lenses or mirrors can provide the flexibility necessary to change the overall system focal length (i.e., effective focal length), and therefore magnification, which is normally accomplished with mechanical motion in conventional zoom lenses. The active optics can provide additional flexibility by allowing magnification to occur anywhere within the field of view (FOV) of the system, not just on-axis as in a conventional system.
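
    The leverage the active elements gain from the surrounding passive optics can be seen in the textbook thin-lens relation for two elements of focal lengths f_1 and f_2 separated by a fixed distance d (a standard identity, not taken from the patent):

      \frac{1}{f_{\mathrm{sys}}} = \frac{1}{f_1} + \frac{1}{f_2} - \frac{d}{f_1 f_2}

    Small changes in f_1 and f_2 therefore retune the effective focal length f_sys, and with it the magnification, with no mechanical motion.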

  15. Vision based interface system for hands free control of an intelligent wheelchair

    PubMed Central

    Ju, Jin Sun; Shin, Yunhee; Kim, Eun Yi

    2009-01-01

    Background Due to the shifting age structure of today's populations, the need to develop devices and technologies that support elderly and disabled people is increasing. Traditionally, the wheelchair, both powered and manual, has been the most popular and important rehabilitation/assistive device for the disabled and the elderly. However, it remains highly restrictive, especially for the severely disabled. As a solution, Intelligent Wheelchairs (IWs) have received considerable attention as mobility aids. The purpose of this work is to develop an IW interface that is more convenient and efficient for people with disabilities of the limbs. Methods This paper proposes an intelligent wheelchair control system for people with various disabilities. To accommodate a wide variety of user abilities, the proposed system uses face-inclination and mouth-shape information: the direction of the IW is determined by the inclination of the user's face, while proceeding and stopping are determined by the shape of the user's mouth. The system is composed of an electric powered wheelchair, a data acquisition board, ultrasonic/infrared sensors, a PC camera, and a vision system. The vision system analyzes the user's gestures in three stages: detector, recognizer, and converter. In the detector, the facial region of the intended user is first obtained using Adaboost; thereafter the mouth region is detected based on edge information. The extracted features are sent to the recognizer, which recognizes the face inclination and mouth shape using statistical analysis and K-means clustering, respectively. These recognition results are then delivered to the converter to control the wheelchair. Result & conclusion The advantages of the proposed system include 1) accurate recognition of the user's intention with minimal user motion and 2) robustness to cluttered backgrounds and time-varying illumination.

  16. Estimation of 3D reconstruction errors in a stereo-vision system

    NASA Astrophysics Data System (ADS)

    Belhaoua, A.; Kohler, S.; Hirsch, E.

    2009-06-01

    The paper presents an approach for error estimation for the various steps of an automated 3D vision-based reconstruction procedure of manufactured workpieces. The process is based on a priori planning of the task and built around a cognitive intelligent sensory system using so-called Situation Graph Trees (SGT) as a planning tool. Such an automated quality control system requires the coordination of a set of complex processes performing sequentially data acquisition, its quantitative evaluation, and the comparison with a reference model (e.g., a CAD object model) in order to evaluate the object quantitatively. To ensure efficient quality control, the aim is to be able to state whether reconstruction results fulfill tolerance rules or not. Thus, the goal is to evaluate independently the error for each step of the stereo-vision based 3D reconstruction (e.g., for calibration, contour segmentation, matching and reconstruction) and then to estimate the error for the whole system. In this contribution, we analyze particularly the segmentation error due to localization errors for extracted edge points supposed to belong to lines and curves composing the outline of the workpiece under evaluation. The fitting parameters describing these geometric features are used as a quality measure to determine confidence intervals and finally to estimate the segmentation errors. These errors are then propagated through the whole reconstruction procedure, enabling evaluation of their effect on the final 3D reconstruction result, specifically on position uncertainties. Lastly, analysis of these error estimates enables assessment of the quality of the 3D reconstruction, as illustrated by the experimental results shown.
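
    As a toy illustration of the segmentation-error step, the sketch below fits a line to noisy edge points, takes the fit covariance as the quality measure, and propagates it by Monte Carlo to a derived quantity; the data, noise level, and propagated quantity are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
edge_pts = 2.0 * x + 1.0 + rng.normal(0.0, 0.05, x.size)  # noisy edge locations (px)

# Fit y = a*x + b; cov=True also returns the parameter covariance matrix.
(a, b), cov = np.polyfit(x, edge_pts, 1, cov=True)
sigma_a, sigma_b = np.sqrt(np.diag(cov))
print(f"slope {a:.4f} +/- {sigma_a:.4f}, intercept {b:.4f} +/- {sigma_b:.4f}")

# Propagate the fit uncertainty to a derived quantity (here: where the
# line crosses x = 5), standing in for propagation through matching and
# triangulation in the full reconstruction chain.
samples = rng.multivariate_normal([a, b], cov, size=10000)
y5 = samples[:, 0] * 5.0 + samples[:, 1]
print(f"y(5) = {y5.mean():.3f} +/- {y5.std():.3f}")
```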

  17. LES SOFTWARE FOR THE DESIGN OF LOW EMISSION COMBUSTION SYSTEMS FOR VISION 21 PLANTS

    SciTech Connect

    Clifford E. Smith; Steven M. Cannon; Virgil Adumitroaie; David L. Black; Karl V. Meredith

    2005-01-01

    In this project, an advanced computational software tool was developed for the design of low emission combustion systems required for Vision 21 clean energy plants. Vision 21 combustion systems, such as combustors for gas turbines, combustors for indirect fired cycles, furnaces and sequestration-ready combustion systems, will require innovative low emission designs and low development costs if Vision 21 goals are to be realized. The simulation tool will greatly reduce the number of experimental tests; this is especially desirable for gas turbine combustor design, since high-pressure testing is extremely costly. In addition, the software will stimulate new ideas, will provide the capability of assessing and adapting low-emission combustors to alternate fuels, and will greatly reduce the development time cycle of combustion systems. The revolutionary combustion simulation software is able to accurately simulate the highly transient nature of gaseous-fueled (e.g., natural gas, low-BTU syngas, hydrogen, biogas, etc.) turbulent combustion and to assess innovative concepts needed for Vision 21 plants. In addition, the software is capable of analyzing liquid-fueled combustion systems, since that capability was developed under a concurrent Air Force Small Business Innovative Research (SBIR) program. The complex physics of the reacting flow field are captured using 3D Large Eddy Simulation (LES) methods, in which large-scale transient motion is resolved by time-accurate numerics, while the small-scale motion is modeled using advanced subgrid turbulence and chemistry closures. In this way, LES combustion simulations can model many physical aspects that, until now, were impossible to predict with 3D steady-state Reynolds-Averaged Navier-Stokes (RANS) analysis, i.e., very low NOx emissions, combustion instability (coupling of unsteady heat release and acoustics), lean blowout, flashback, autoignition, etc. LES methods are becoming more and more practical by linking together tens
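
    The scale separation described above can be written compactly. A generic sketch (the constant-density filtered momentum equation, with the classic Smagorinsky closure shown only as an example; the project's own subgrid models are more advanced):

    \[
    \frac{\partial \bar{u}_i}{\partial t} + \frac{\partial}{\partial x_j}\left(\bar{u}_i \bar{u}_j\right)
    = -\frac{1}{\rho}\,\frac{\partial \bar{p}}{\partial x_i}
    + \nu\,\frac{\partial^2 \bar{u}_i}{\partial x_j \partial x_j}
    - \frac{\partial \tau_{ij}}{\partial x_j},
    \qquad
    \tau_{ij} = \overline{u_i u_j} - \bar{u}_i \bar{u}_j,
    \]

    where the overbar denotes the grid-scale filter: the resolved field $\bar{u}_i$ carries the large-scale transient motion, and all unclosed subgrid effects enter through $\tau_{ij}$. The Smagorinsky model, for instance, closes it as $\tau_{ij} - \tfrac{1}{3}\delta_{ij}\tau_{kk} = -2\,(C_s\Delta)^2 |\bar{S}|\,\bar{S}_{ij}$, while dynamic variants compute $C_s$ locally from the resolved field.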

  18. Combined three-dimensional computer vision and epi-illumination fluorescence imaging system

    NASA Astrophysics Data System (ADS)

    Gorpas, Dimitris; Yova, Dido; Politopoulos, Kostas

    2012-03-01

    Most of the reported fluorescence imaging methods and systems highlight the need for three-dimensional information about the surface geometry of the inspected region. The scope of this manuscript is to introduce an epi-illumination fluorescence imaging system enhanced with a binocular machine vision system for translating the inverse problem solution into the global coordinate system. The epi-illumination fluorescence imaging system consists of a structured scanning excitation source, which increases the spatial differentiation of the measured data, and a telecentric lens, which increases the angular differentiation. The binocular system, in turn, is based on the projection of a structured light pattern onto the inspected area to solve the correspondence problem between the stereo pair. The functionality of the system has been evaluated on tissue phantoms and calibration objects. The reconstruction accuracy of the fluorophore distribution, as derived from the root mean square error between the actual distribution and the outcome of the forward solver, was more than 80%. The surface three-dimensional reconstruction of the inspected region showed an accuracy of 0.067 ± 0.004 mm, as derived from the mean Euclidean distance between the three-dimensional positions of the real-world points and those reconstructed.
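
    Once the structured-light pattern resolves the stereo correspondence, each matched pixel pair can be triangulated into the global coordinate system. A minimal sketch with illustrative camera matrices (not the calibrated values of the actual system):

```python
import cv2
import numpy as np

# Hypothetical rectified pair: first camera at the origin, second camera
# displaced 50 mm along the baseline (extrinsic t = [-50, 0, 0]).
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-50.0], [0.0], [0.0]])])

# Matched pixels (as decoded from the pattern), shape 2 x N (u row, v row).
pts1 = np.array([[320.0, 400.0], [240.0, 260.0]])
pts2 = np.array([[280.0, 360.0], [240.0, 260.0]])

X = cv2.triangulatePoints(P1, P2, pts1, pts2)    # homogeneous, 4 x N
X = (X[:3] / X[3]).T                             # Euclidean 3D points (mm)
print(X)   # both points at depth f*B/d = 800*50/40 = 1000 mm
```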

  19. Development and evaluation of a vision based poultry debone line monitoring system

    NASA Astrophysics Data System (ADS)

    Usher, Colin T.; Daley, W. D. R.

    2013-05-01

    Efficient deboning is key to optimizing production yield (maximizing the amount of meat removed from a chicken frame while reducing the presence of bones). Many processors evaluate the efficiency of their deboning lines through manual yield measurements, which involve using a special knife to scrape the chicken frame for any remaining meat after it has been deboned. Researchers with the Georgia Tech Research Institute (GTRI) have developed an automated vision system for estimating this yield loss by correlating image characteristics with the amount of meat left on a skeleton. The yield loss estimation is accomplished by the system's image processing algorithms, which correlate image intensity with meat thickness and calculate the total volume of meat remaining. The team has established a correlation between transmitted light intensity and meat thickness with an R² of 0.94. Employing a special illuminated cone and targeted software algorithms, the system can make measurements in under a second and achieves up to 90% correlation with yield measurements performed manually. The same system is also able to determine the probability of bone chips remaining in the output product: it detects the presence/absence of clavicle bones with an accuracy of approximately 95% and fan bones with an accuracy of approximately 80%. This paper describes in detail the approach and design of the system, presents results from field testing, and highlights the potential benefits that such a system can provide to the poultry processing industry.
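
    A minimal sketch of the intensity-to-volume idea described above, with a hypothetical calibration curve and pixel scale standing in for GTRI's calibrated values:

```python
import numpy as np

# Hypothetical calibration: measured meat thickness (mm) vs. transmitted
# light intensity (8-bit), fitted with a quadratic.
cal_intensity = np.array([200.0, 160.0, 120.0, 90.0, 60.0])
cal_thickness = np.array([0.0, 0.8, 1.7, 2.6, 3.9])
coeffs = np.polyfit(cal_intensity, cal_thickness, 2)

def residual_meat_volume(image, pixel_area_mm2=0.04, background=210.0):
    """Estimate residual meat volume (mm^3) on a back-lit frame image."""
    meat = image < background                    # pixels darker than bare bone
    thickness = np.polyval(coeffs, image[meat])  # per-pixel thickness (mm)
    thickness = np.clip(thickness, 0.0, None)
    return thickness.sum() * pixel_area_mm2      # integrate thickness -> volume

# Example on a synthetic 8-bit image:
frame = np.full((480, 640), 220.0)
frame[100:200, 100:300] = 110.0                  # a patch of remaining meat
print(residual_meat_volume(frame))
```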

  20. LES SOFTWARE FOR THE DESIGN OF LOW EMISSION COMBUSTION SYSTEMS FOR VISION 21 PLANTS

    SciTech Connect

    Clifford E. Smith

    2005-04-01

    Vision 21 combustion systems will require innovative low emission designs and low development costs if Vision 21 goals are to be realized. In this three-year project, an advanced computational software tool will be developed for the design of low emission combustion systems required for Vision 21 clean energy plants. The combustion Large Eddy Simulation (LES) software will be able to accurately simulate the highly transient nature of gaseous-fueled turbulent combustion so that innovative concepts can be assessed and developed with fewer high-cost experimental tests. During the first year, the project included the development and implementation of improved chemistry (reduced GRI mechanism), subgrid turbulence (localized dynamic), and subgrid combustion-turbulence interaction (Linear Eddy) models into the CFD-ACE+ code. University expertise (Georgia Tech and UC Berkeley) was utilized to help develop and implement these advanced submodels into the unstructured, parallel CFD flow solver, CFD-ACE+. Efficient numerical algorithms that rely on in situ look-up tables or artificial neural networks were implemented for chemistry calculations. In the second year, the combustion LES software was evaluated and validated using experimental data from lab-scale and industrial test configurations. This code testing (i.e., alpha testing) was performed by CFD Research Corporation's engineers. During the third year, six industrial and academic partners used the combustion LES code and exercised it on problems of their choice (i.e., beta testing). Final feedback and optimizations were then implemented in the final release version of the combustion LES software that will be licensed to the general public. An additional one-year task, entitled "LES Simulations of SIMVAL Results", was added for the fourth year of this program. For this task, CFDRC performed LES calculations of selected SIMVAL cases and compared predictions with measurements. In addition to comparisons with NOx

  1. VAS: A Vision Advisor System combining agents and object-oriented databases

    NASA Technical Reports Server (NTRS)

    Eilbert, James L.; Lim, William; Mendelsohn, Jay; Braun, Ron; Yearwood, Michael

    1994-01-01

    A model-based approach to identifying and finding the orientation of non-overlapping parts on a tray has been developed. The part models contain both exact and fuzzy descriptions of part features, and are stored in an object-oriented database. Full identification of the parts involves several interacting tasks, each of which is handled by a distinct agent. Using fuzzy information stored in the model allowed part features that were essentially at the noise level to be extracted and used for identification. This was done by focusing attention on the portion of the part where the feature must be found if the current hypothesis of the part ID is correct. In going from one set of parts to another, the only thing that needs to be changed is the database of part models. This work is part of an effort to develop a Vision Advisor System (VAS) that combines agents and object-oriented databases.

  2. Self-calibration of monocular vision system based on planar points

    NASA Astrophysics Data System (ADS)

    Zhao, Yu; Xu, Lichao

    2015-02-01

    This paper proposes a method for self-calibration of a monocular vision system based on planar points. Using the proposed method we can easily obtain initial values for the three-dimensional (3D) coordinates of the feature points in the scene, although there is a nonzero scale factor between these initial values and the real 3D coordinates. From different viewpoints, we capture different images and calculate initial external parameters for each image. Finally, through an overall optimization, we obtain all the parameters, including the internal parameters, the distortion parameters, the external parameters of each image, and the 3D coordinates of the feature points. According to the experimental results, in a field of view of about 100 mm × 200 mm, the mean error and the variance of the 3D coordinates of the feature points are less than 10 μm.
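
    The "overall optimization" step is essentially a small bundle adjustment. Below is a minimal sketch of the idea under stated assumptions: synthetic data, one focal length and one radial distortion term as the internal/distortion parameters, and scipy's least_squares as the optimizer (the paper does not specify one):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(1)
n_views, n_pts = 4, 30
pts_true = rng.uniform([-50, -50, 350], [50, 50, 450], (n_pts, 3))   # mm
poses_true = [np.r_[rng.normal(0, 0.05, 3), 10.0 * v, 0.0, 0.0]      # rvec | tvec
              for v in range(n_views)]
f_true, k1_true = 900.0, -0.08

def project(f, k1, rvec, tvec, pts):
    cam = Rotation.from_rotvec(rvec).apply(pts) + tvec   # world -> camera
    xn = cam[:, :2] / cam[:, 2:3]                        # normalized image coords
    r2 = (xn ** 2).sum(axis=1, keepdims=True)
    return f * xn * (1.0 + k1 * r2)                      # one radial term, no skew

obs = np.concatenate([project(f_true, k1_true, p[:3], p[3:], pts_true).ravel()
                      for p in poses_true])

def residuals(x):
    f, k1 = x[0], x[1]
    poses = x[2:2 + 6 * n_views].reshape(n_views, 6)
    pts = x[2 + 6 * n_views:].reshape(n_pts, 3)
    pred = np.concatenate([project(f, k1, p[:3], p[3:], pts).ravel() for p in poses])
    return pred - obs

# Start from perturbed values, mimicking the paper's easy initial estimates.
x0 = np.concatenate([[800.0, 0.0],
                     np.concatenate(poses_true) + rng.normal(0, 0.01, 6 * n_views),
                     (pts_true + rng.normal(0, 2.0, pts_true.shape)).ravel()])
sol = least_squares(residuals, x0)
print("recovered f :", sol.x[0])    # ~900
print("recovered k1:", sol.x[1])    # ~-0.08
```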

  3. Real-time model-based vision system for object acquisition and tracking

    NASA Technical Reports Server (NTRS)

    Wilcox, Brian; Gennery, Donald B.; Bon, Bruce; Litwin, Todd

    1987-01-01

    A machine vision system is described which is designed to acquire and track polyhedral objects moving and rotating in space by means of two or more cameras, programmable image-processing hardware, and a general-purpose computer for high-level functions. The image-processing hardware is capable of performing a large variety of operations on images and on image-like arrays of data. Acquisition utilizes image locations and velocities of the features extracted by the image-processing hardware to determine the three-dimensional position, orientation, velocity, and angular velocity of the object. Tracking correlates edges detected in the current image with edge locations predicted from an internal model of the object and its motion, continually updating velocity information to predict where edges should appear in future frames. With some 10 frames processed per second, real-time tracking is possible.
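
    A minimal sketch of the predict-then-update loop described above, using a constant-velocity (alpha-beta style) model; the blending gain is an illustrative choice:

```python
import numpy as np

class FeatureTrack:
    """Constant-velocity prediction of an image feature's location."""
    def __init__(self, pos):
        self.pos = np.asarray(pos, dtype=float)  # image position (px)
        self.vel = np.zeros(2)                   # image velocity (px/frame)

    def predict(self):
        return self.pos + self.vel               # where to search next frame

    def update(self, measured, gain=0.5):
        measured = np.asarray(measured, dtype=float)
        innovation = measured - self.predict()   # prediction error
        self.vel += gain * innovation            # refine velocity estimate
        self.pos = measured

track = FeatureTrack([100.0, 50.0])
for meas in [(102.0, 51.0), (104.1, 52.0), (106.0, 53.2)]:
    print("predicted:", track.predict(), "measured:", meas)
    track.update(meas)
```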

  4. Fast-camera calibration of stereo vision system using BP neural networks

    NASA Astrophysics Data System (ADS)

    Cai, Huimin; Li, Kejie; Liu, Meilian; Song, Ping

    2010-10-01

    In position measurements by far-range photogrammetry, the scale between object and image has to be calibrated; that is, the parameters of the perspective projection matrix must be determined. Because the image sensor of the fast camera is a CMOS device, there are many uncertain distortion factors, and traditional calibration based on a mathematical model struggles to describe the mapping between object and image. In this paper, a new method for calibrating stereo vision systems with neural networks is described. A linear method is used for 3D position estimation and its error is corrected by neural networks. Compared with DLT (Direct Linear Transformation) and direct mapping by neural networks, the accuracy is improved. This method has been used successfully to measure the drop point of a high-speed object.
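
    A minimal sketch of the hybrid scheme the abstract describes: a linear (DLT-style) estimate followed by a network trained to remove the residual error. The synthetic "systematic error" and the use of scikit-learn's MLPRegressor (a backpropagation-trained network) are illustrative assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, (2000, 3))                  # true 3D positions

def linear_estimate(p):
    # Stand-in for the DLT reconstruction: correct up to a smooth,
    # distortion-like systematic error injected here for illustration.
    return p + 0.05 * np.sin(3.0 * p) + 0.02 * p**2

est = linear_estimate(pts)
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
net.fit(est, pts - est)                                  # learn the residual

test = rng.uniform(-1.0, 1.0, (200, 3))
raw = linear_estimate(test)
corrected = raw + net.predict(raw)
print("raw RMSE:      ", np.sqrt(((raw - test) ** 2).mean()))
print("corrected RMSE:", np.sqrt(((corrected - test) ** 2).mean()))
```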

  5. Research on the gray distortion and calibration of machine vision system

    NASA Astrophysics Data System (ADS)

    Ye, Yucheng; Wang, Jianping; Ying, Yibin; Rao, Xiuqin

    2004-11-01

    The laws of gray distortion of a machine vision system were discussed, and a method for gray calibration was presented. Five standard templates with uniform gray values were used as the research objects. The average gray values of the standard template images in the X and Y directions were obtained row by row and column by column. The gray distortion models were developed with a moving-average model over two image pixels. The models of the five standard templates were developed separately, and the correlation coefficients of each model were above 0.96. The parameters of the gray distortion model were independent of the templates themselves. The gray calibration models for rows and columns were developed from the gray distortion models separately, and after calibration with these models the image gray values of other templates were proportional to the true values. Tests verified the method.
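
    A minimal sketch of the row/column calibration idea, assuming a separable multiplicative distortion and a simple moving-average smoother (the window size is an arbitrary choice here):

```python
import numpy as np

def smooth(v, win=9):
    # Moving average with edge padding, echoing the moving-average model.
    pad = win // 2
    vp = np.pad(v, pad, mode="edge")
    return np.convolve(vp, np.ones(win) / win, mode="valid")

def build_gain(template):
    rows = smooth(template.mean(axis=1))             # average gray per row
    cols = smooth(template.mean(axis=0))             # average gray per column
    model = np.outer(rows, cols) / template.mean()   # separable distortion model
    return template.mean() / model                   # multiplicative correction

# Uniform-gray template imaged with synthetic shading:
h, w = 100, 120
shade = np.outer(np.linspace(0.8, 1.1, h), np.linspace(0.9, 1.05, w))
template = 128.0 * shade
gain = build_gain(template)
print(np.std(template * gain))   # near-uniform after calibration
```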

  6. Research on on-line grading system for pearl defect based on machine vision

    NASA Astrophysics Data System (ADS)

    Zhou, Jilin; Ma, Li

    2008-03-01

    A novel method for automated defect detection of pearls based on machine vision is proposed. Firstly, a dome-shaped light source with diffuse illumination was designed to improve image quality and reduce light-spot size, and a quasi-synchronous scheme for grabbing multiple images from different views was designed based on the pearl's free-falling motion. A nonlinear filter based on spatial geometry is then applied to enhance defect contrast, followed by a region-growing method for extracting all suspicious defects, including highlight-halation regions. The highlight-halation regions are removed using a morphological method based on their spatial distribution model. Finally, shape and texture features of the defect regions are extracted and an SVM is used for defect grading. Experiments show that the acquired images contain complete information about the pearl surfaces and that the system accuracy was over 93.3%.
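
    The final grading stage is a standard supervised classification. A minimal sketch with synthetic stand-in features (the paper's actual shape and texture descriptors are not reproduced here):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
# Hypothetical features per region: [area, eccentricity, contrast, entropy].
X_ok  = rng.normal([20, 0.4, 0.2, 3.0], [5, 0.1, 0.05, 0.3], (n, 4))
X_bad = rng.normal([60, 0.7, 0.5, 4.0], [15, 0.1, 0.10, 0.4], (n, 4))
X = np.vstack([X_ok, X_bad])
y = np.r_[np.zeros(n), np.ones(n)]               # 0 = acceptable, 1 = defect

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(Xtr, ytr)
print("grading accuracy:", clf.score(Xte, yte))
```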

  7. Adaptive Shape Kernel-Based Mean Shift Tracker in Robot Vision System

    PubMed Central

    2016-01-01

    This paper proposes an adaptive shape kernel-based mean shift tracker using a single static camera for a robot vision system. The question that we address in this paper is how to construct a kernel shape that is adaptive to the object shape. We apply a nonlinear manifold learning technique to obtain a low-dimensional shape space, trained on data with the same view as the tracking video. The proposed kernel searches the shape in the low-dimensional shape space obtained by the nonlinear manifold learning technique and constructs the adaptive kernel shape in the high-dimensional shape space. This improves the mean shift tracker's ability to track object position and contour and to avoid background clutter. In the experimental part, we take a walking human as an example to validate that our method is accurate and robust in tracking human position and describing the human contour. PMID:27379165
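
    For reference, the classical fixed-kernel mean shift baseline that the adaptive-shape kernel improves on can be written in a few lines with OpenCV; the video file name and the initial window are illustrative:

```python
import cv2

cap = cv2.VideoCapture("walking.avi")            # hypothetical input video
ok, frame = cap.read()
if not ok:
    raise SystemExit("could not read video")
x, y, w, h = 300, 200, 60, 120                   # initial window around the target

# Hue histogram of the target region drives the back-projection.
hsv = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    prob = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    _, (x, y, w, h) = cv2.meanShift(prob, (x, y, w, h), crit)
    print("window:", x, y, w, h)
```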

  8. Transition of Attention in Terminal Area NextGen Operations Using Synthetic Vision Systems

    NASA Technical Reports Server (NTRS)

    Ellis, Kyle K. E.; Kramer, Lynda J.; Shelton, Kevin J.; Arthur, Shelton, J. J., III; Prinzel, Lance J., III; Norman, Robert M.

    2011-01-01

    This experiment investigates the capability of Synthetic Vision Systems (SVS) to provide significant situation awareness in terminal area operations, specifically in low visibility conditions. The use of a Head-Up Display (HUD) and Head-Down Displays (HDD) with SVS is contrasted to baseline standard head-down displays in terms of induced workload and pilot behavior at 1400 RVR visibility levels. Variance in performance and pilot behavior was reviewed for acceptability when using a HUD or HDD with SVS under reduced minimums to acquire the visual references necessary to continue to land. The data suggest superior performance for HUD implementations. Improved attentional behavior is also suggested for HDD implementations of SVS for low-visibility approach and landing operations.

  9. A survey of autonomous vision-based See and Avoid for Unmanned Aircraft Systems

    NASA Astrophysics Data System (ADS)

    Mcfadyen, Aaron; Mejias, Luis

    2016-01-01

    This paper provides a comprehensive review of the vision-based See and Avoid problem for unmanned aircraft. The unique problem environment and associated constraints are detailed, followed by an in-depth analysis of visual sensing limitations. In light of such detection and estimation constraints, relevant human, aircraft and robot collision avoidance concepts are then compared from a decision and control perspective. Remarks on system evaluation and certification are also included to provide a holistic review approach. The intention of this work is to clarify common misconceptions, realistically bound feasible design expectations and offer new research directions. It is hoped that this paper will help us to unify design efforts across the aerospace and robotics communities.

  10. Omnidirectional stereo vision sensor based on single camera and catoptric system.

    PubMed

    Zhou, Fuqiang; Chai, Xinghua; Chen, Xin; Song, Ya

    2016-09-01

    An omnidirectional stereo vision sensor based on a single camera and a catoptric system is proposed. As crucial components, one camera and two pyramid mirrors are used for imaging. Omnidirectional measurement in different directions in the horizontal field can be performed by four pairs of virtual cameras, with full synchronism and improved compactness. Moreover, perspective projection invariance is preserved in the imaging process, avoiding the imaging distortion introduced by curved mirrors. In this paper, the structure model of the sensor was established and a sensor prototype was designed. The influence of the structural parameters on the field of view and the measurement accuracy was also discussed. In addition, real experiments and analyses were performed to evaluate the performance of the proposed sensor in measurement applications. The results proved the feasibility of the sensor and exhibited considerable accuracy in 3D coordinate reconstruction. PMID:27607253

  11. Mathematical leadership vision.

    PubMed

    Hamburger, Y A

    2000-11-01

    This article is an analysis of a new type of leadership vision, the kind of vision that is becoming increasingly pervasive among leaders in the modern world. This vision appears to offer a new horizon whereas, in fact, it delivers to its target audience a finely tuned version of that audience's already existing ambitions and aspirations. The leader, with advisors, has examined the target audience and used the results of extensive research and statistical methods concerning the group to form a picture of its members' lifestyles and values. On the basis of this information, the leader has built a "vision." The vision is intended to create the impression of a charismatic and transformational leader when, in fact, it is merely a response. The systemic, arithmetic, and statistical methods employed in this operation have led to the coining of the terms mathematical leader and mathematical vision. PMID:11092414

  12. Can Effective Synthetic Vision System Displays be Implemented on Limited Size Display Spaces?

    NASA Technical Reports Server (NTRS)

    Comstock, J. Raymond, Jr.; Glaab, Lou J.; Prinzel, Lance J.; Elliott, Dawn M.

    2004-01-01

    The Synthetic Vision Systems (SVS) element of the NASA Aviation Safety Program is striving to eliminate poor visibility as a causal factor in aircraft accidents, and to enhance operational capabilities of all types of aircraft. To accomplish these safety and situation awareness improvements, SVS concepts are designed to provide a clear view of the world ahead through the display of computer-generated imagery derived from an onboard database of terrain, obstacle and airport information. An important issue for the SVS concept is whether useful and effective SVS displays can be implemented on limited-size display spaces, as would be required to implement this technology on older aircraft with physically smaller instrument spaces. In this study, prototype SVS displays were implemented on the following display sizes: (a) size "A" (e.g., 757 EADI), (b) form factor "D" (e.g., 777 PFD), and (c) new size "X" (rectangular flat panel, approximately 20 x 25 cm). Testing was conducted in a high-resolution graphics simulation facility at NASA Langley Research Center. Specific issues under test included the display size as noted above and the field-of-view (FOV) to be shown on the display; directly related to FOV is the degree of minification of the displayed image. In simulated approaches with display size and FOV conditions held constant, no significant differences due to these factors were found. Preferred FOV was then determined from approaches during which pilots could select the FOV. Mean preference ratings for FOV were, in order: (1) 30 deg., (2) unity, (3) 60 deg., and (4) 90 deg., and this held true for all display sizes tested. Limitations of the present study and future research directions are discussed.

  13. Development of a vision-based pH reading system

    NASA Astrophysics Data System (ADS)

    Hur, Min Goo; Kong, Young Bae; Lee, Eun Je; Park, Jeong Hoon; Yang, Seung Dae; Moon, Ha Jung; Lee, Dong Hoon

    2015-10-01

    pH paper is generally used for pH interpretation in the QC (quality control) process of radiopharmaceuticals. pH paper is easy to handle and useful for small samples such as radioisotopes and radioisotope (RI)-labeled compounds for positron emission tomography (PET). However, pH-paper-based detection methods may suffer errors due to the limitations of eyesight and inaccurate readings. In this paper, we report a new device for pH reading and its related software. The proposed pH reading system is built around a vision algorithm based on an RGB library, and is divided into two parts. The first is the reading device, which consists of a light source, a CCD camera, and a data acquisition (DAQ) board. To improve sensitivity, we utilize the three primary colors of the LED (light-emitting diode) in the reading device; three separate colors cover the relevant wavelengths better than a single white LED. The second is a graphical user interface (GUI) program for the vision interface and report generation. The GUI program inserts the color codes of the pH paper into the database; in reading mode, the CCD camera then captures the pH paper and compares its color with the RGB database image. The software captures and reports information on the samples, such as pH results, captured images, and library images, and saves them as Excel files.
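
    A minimal sketch of the reading mode: compare the measured RGB value of the paper against a stored library and report the nearest pH. The library values here are illustrative, not the system's calibrated database:

```python
import numpy as np

# Hypothetical library: pH value -> mean RGB of the indicator paper.
library = {
    4.0: (230, 120, 60),
    5.0: (225, 170, 60),
    6.0: (200, 200, 70),
    7.0: (120, 180, 90),
    8.0: (70, 140, 130),
    9.0: (60, 90, 160),
}

def read_ph(rgb):
    """Return the library pH whose color is closest in RGB space."""
    rgb = np.asarray(rgb, dtype=float)
    phs = np.array(list(library.keys()))
    colors = np.array(list(library.values()), dtype=float)
    d = np.linalg.norm(colors - rgb, axis=1)     # Euclidean distance in RGB
    return phs[np.argmin(d)]

print(read_ph((115, 178, 95)))                   # -> 7.0
```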

  14. Automatic vision system for analysis of microscopic behavior of flow and transport in porous media

    SciTech Connect

    Rashidi, M.; Dehmeshid, J.; Dickenson, E.; Daemi, F.

    1997-07-01

    This paper describes the development of a novel automated and efficient vision system to obtain velocity and concentration measurements within a porous medium. An aqueous fluid laced with a fluorescent dye or microspheres flows through a transparent, refractive-index-matched column packed with transparent crystals. For illumination purposes, a planar laser sheet passes through the column while a CCD camera records the laser-illuminated planes. Detailed microscopic velocity and concentration fields have been computed within a 3D volume of the column. For velocity measurements, while the aqueous fluid laced with fluorescent microspheres flows through the transparent medium, a CCD camera records the motions of the fluorescing particles on a video cassette recorder. The recorded images are acquired frame by frame and transferred to the computer for processing, using a frame grabber and purpose-written algorithms, through an RS-232 interface. Since the grabbed images are of poor quality at this stage, preprocessing is applied to enhance the particles within the images. For concentration measurements, while the aqueous fluid laced with a fluorescent organic dye flows through the transparent medium, a CCD camera sweeps back and forth across the column and records concentration slices on the planes illuminated by the laser beam traveling simultaneously with the camera. Subsequently, these recorded images are transferred to the computer for processing in a similar fashion to the velocity measurements. In order to have a fully automatic vision system, several detailed image processing techniques were developed to match images (taken at different times during the experiments) that have different intensity values but the same topological characteristics. This yields the normalized interstitial chemical concentration as a function of time within the porous column.

  15. The model and its solution's uniqueness of a portable 3D vision coordinate measuring system

    NASA Astrophysics Data System (ADS)

    Huang, Fengshan; Qian, Huifen

    2009-11-01

    The portable three-dimensional vision coordinate measuring system, which consists of a light pen, a CCD camera and a laptop computer, can be widely applied in most coordinate measuring fields, especially on industrial sites. On the light pen there are at least three point-shaped light sources (LEDs) acting as the measured control characteristic points, and a touch trigger probe with a spherical stylus which is used to contact the point to be measured. The most important characteristic of this system is that the three light sources and the probe stylus are aligned in one line with known positions. In building and studying this measuring system, the key problem is constructing the system's mathematical model, called the Perspective of Three Collinear Points problem, which is a particular case of the Perspective of Three Points (P3P) problem. On the basis of P3P and spatial analytical geometry theory, the system's mathematical model is established. Moreover, it is verified that the Perspective of Three Collinear Points problem has a unique solution. The analytical equations of the measured point's coordinates are derived using the system's mathematical model and the constraint that the three light sources and the probe stylus are aligned in one line. Finally, the effectiveness of the mathematical model is confirmed by experiments.
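
    For intuition, the P3P constraints can be written down directly (a standard textbook formulation, not the paper's full derivation). With camera center $O$, unknown ray lengths $a = |OA|$, $b = |OB|$, $c = |OC|$ to the three LEDs, known inter-LED distances $d_{AB}$, $d_{BC}$, and inter-ray angles $\alpha = \angle AOB$, $\beta = \angle BOC$ measured from the image, the law of cosines gives

    \[
    d_{AB}^2 = a^2 + b^2 - 2ab\cos\alpha, \qquad
    d_{BC}^2 = b^2 + c^2 - 2bc\cos\beta, \qquad
    (d_{AB} + d_{BC})^2 = a^2 + c^2 - 2ac\cos(\alpha + \beta).
    \]

    Collinearity (with $B$ between $A$ and $C$, as on the light pen) fixes $d_{AC} = d_{AB} + d_{BC}$ and makes the three rays coplanar, so $\angle AOC = \alpha + \beta$; it is this extra structure that removes the multiple solutions of the general P3P problem.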

  16. DARPA super resolution vision system (SRVS) robust turbulence data collection and analysis

    NASA Astrophysics Data System (ADS)

    Espinola, Richard L.; Leonard, Kevin R.; Thompson, Roger; Tofsted, David; D'Arcy, Sean

    2014-05-01

    Atmospheric turbulence degrades the range performance of military imaging systems, specifically those intended for long-range, ground-to-ground target identification. The recent Defense Advanced Research Projects Agency (DARPA) Super Resolution Vision System (SRVS) program developed novel post-processing system components to mitigate turbulence effects on visible and infrared sensor systems. As part of the program, the US Army RDECOM CERDEC NVESD and the US Army Research Laboratory Computational & Information Sciences Directorate (CISD) collaborated on a field collection and atmospheric characterization of a two-handed weapon identification dataset through a diurnal cycle for a variety of ranges and sensor systems. The robust dataset is useful for developing new models and simulations of turbulence, as well as for providing a standard baseline for comparing sensor systems in the presence of turbulence degradation and mitigation. In this paper, we describe the field collection and atmospheric characterization and present the robust dataset to the defense, sensing, and security community. In addition, we present an expanded model validation of turbulence degradation using the field-collected video sequences.

  17. Development of a vision non-contact sensing system for telerobotic applications

    NASA Astrophysics Data System (ADS)

    Karkoub, M.; Her, M.-G.; Ho, M.-I.; Huang, C.-C.

    2013-08-01

    The study presented here describes a novel vision-based motion detection system for telerobotic operations such as distant surgical procedures. The system uses a CCD camera and image processing to detect the motion of a master robot or operator. Colour tags are placed on the arm and head of a human operator to detect the up/down and right/left motion of the head as well as the right/left motion of the arm. The motion of the colour tags is used to actuate a slave robot or a remote system. The colour tags' motion is determined through image processing using eigenvectors and colour-space morphology, and the relative head, shoulder and wrist rotation angles are obtained through inverse dynamics and coordinate transformation. A program transforms this motion data into motor control commands and transmits them to a slave robot or remote system over the wireless internet. The system performed well even in complex environments, with errors that did not exceed 2 pixels and a response time of about 0.1 s. The results of the experiments are available at: http://www.youtube.com/watch?v=yFxLaVWE3f8 and http://www.youtube.com/watch?v=_nvRcOzlWHw
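
    A minimal sketch of the colour-tag sensing loop: threshold the tag colour in HSV, take the blob centroid, and map its frame-to-frame displacement to a command. The colour range, dead zone, and command set are illustrative assumptions:

```python
import cv2
import numpy as np

LOWER, UPPER = np.array([50, 80, 80]), np.array([70, 255, 255])  # green tag (HSV)

def tag_centroid(frame_bgr):
    """Centroid of the thresholded colour-tag blob, or None if absent."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

def to_command(prev, curr, dead_zone=5.0):
    """Map centroid displacement (px) to a motion command."""
    dx, dy = curr[0] - prev[0], curr[1] - prev[1]
    if abs(dx) < dead_zone and abs(dy) < dead_zone:
        return "HOLD"
    if abs(dx) > abs(dy):
        return "RIGHT" if dx > 0 else "LEFT"
    return "DOWN" if dy > 0 else "UP"
```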

  18. Low Vision

    MedlinePlus

    Prevalence tables of U.S. cases of low vision (in thousands) by age, gender, and race/ethnicity, for 2000 and 2010.

  19. Accurate calibration of a stereo-vision system in image-guided radiotherapy.

    PubMed

    Liu, Dezhi; Li, Shidong

    2006-11-01

    Image-guided radiotherapy using a three-dimensional (3D) camera as the on-board surface imaging system requires precise and accurate registration of the 3D surface images in the treatment machine coordinate system. Two simple calibration methods, an analytical solution as three-point matching and a least-squares estimation method as multipoint registration, were introduced to correlate the stereo-vision surface imaging frame with the machine coordinate system. Both types of calibrations utilized 3D surface images of a calibration template placed on the top of the treatment couch. Image transformation parameters were derived from corresponding 3D marked points on the surface images to their given coordinates in the treatment room coordinate system. Our experimental results demonstrated that both methods provided the desired calibration accuracy of 0.5 mm. The multipoint registration method is more robust, particularly for noisy 3D surface images. Both calibration methods have been used as weekly QA tools for a 3D image-guided radiotherapy system. PMID:17153416
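
    The multipoint registration can be solved in closed form with the SVD-based (Kabsch) least-squares fit; below is a minimal sketch with synthetic points (the template geometry and noise level are illustrative):

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares R, t such that dst ~ R @ src + t (both N x 3)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

# Synthetic check: marked points, a known pose, 0.2 mm measurement noise.
rng = np.random.default_rng(0)
src = rng.uniform(-100, 100, (8, 3))             # camera-frame points (mm)
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R_true *= np.sign(np.linalg.det(R_true))         # ensure a proper rotation
t_true = np.array([12.0, -7.0, 30.0])
dst = src @ R_true.T + t_true + rng.normal(0, 0.2, src.shape)

R, t = rigid_fit(src, dst)
resid = np.linalg.norm(src @ R.T + t - dst, axis=1)
print("mean residual (mm):", resid.mean())       # at the noise level
```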
